This repository has been archived by the owner on Oct 31, 2023. It is now read-only.
There must be another discrepancy between generate_questions.py and the original script that was used to generate CLEVR. I have noticed that in CLEVR the answer distribution for counting questions is very skewed. For example, for one of the question families I have the following answer counts:

{'1': 2658, '0': 2555, '2': 1911, '5': 52, '3': 579, '6': 17, '4': 136, '7': 2, '9': 1}

Here the 6th most popular answer is "6", with a count of 17. This could not have happened if the current version of generate_questions.py had been used, since it has a heuristic that forces every answer to occur at most 5 times as often as the 6th most popular answer: https://github.com/facebookresearch/clevr-dataset-gen/blob/master/question_generation/generate_questions.py#L322

The main reason I have created this issue is for the record, since it's unclear how this could be addressed. But people who are using the code should be made aware.
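For readers who haven't opened the script, the balancing heuristic being discussed can be sketched roughly as follows (the function and variable names here are illustrative, not the script's; the actual check lives around the linked line):

```python
from collections import Counter

def accept_answer(answer_counts: Counter, answer) -> bool:
    """Sketch of the cap: reject a new question if its answer already
    occurs more than 5x as often as the 6th most frequent answer."""
    ordered = sorted(answer_counts.values(), reverse=True)
    sixth = ordered[5] if len(ordered) > 5 else 1  # illustrative fallback
    return answer_counts[answer] <= 5 * sixth

# The counts reported above: the 6th-largest count is 52, so the cap
# would be 5 * 52 = 260 -- yet '1' occurs 2658 times.
counts = Counter({'1': 2658, '0': 2555, '2': 1911, '5': 52, '3': 579,
                  '6': 17, '4': 136, '7': 2, '9': 1})
assert not accept_answer(counts, '1')  # 2658 > 260: would be rejected
assert accept_answer(counts, '7')      # 2 <= 260: still allowed
```

Under this reading, the released counts could not have been produced by a single run of the current script, which is the issue's point.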
For generating the official released CLEVR data, we distributed question generation across multiple machines (using these flags) and merged the results. This means that the heuristics were only applied within the batch processed by each worker, and not enforced across the entire dataset. I suspect that this is the reason for the differing distributions that you are seeing.
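One way to see why per-worker balancing does not survive a merge: if two workers' rare answers differ, the merged distribution's 6th-place count can be much smaller than either worker's, so the 5x cap breaks even though every batch respected it. A toy illustration (made-up counts, not real CLEVR numbers):

```python
from collections import Counter

def max_to_sixth(counts: Counter) -> float:
    """Ratio of the most frequent answer count to the 6th most frequent."""
    ordered = sorted(counts.values(), reverse=True)
    return ordered[0] / ordered[5]

# Each worker individually satisfies the 5x cap...
worker_a = Counter({'0': 50, '1': 50, '2': 40, '3': 30, '4': 20, '5': 10})
worker_b = Counter({'0': 50, '1': 50, '2': 40, '3': 30, '4': 20, '9': 10})
assert max_to_sixth(worker_a) == 5.0
assert max_to_sixth(worker_b) == 5.0

# ...but their rare answers differ ('5' vs. '9'), so in the merged
# counts the 6th place drops to 10 while the top grows to 100.
merged = worker_a + worker_b
assert max_to_sixth(merged) == 10.0  # cap of 5 is now violated
```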
Hi Justin @jcjohnson.
I am working on the CLEVR dataset.
I am not sure how to decide the "right" relation of a given object, since sometimes the pixel coordinates don't make sense and sometimes the 3D_coords don't.
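A common way to decide spatial relations in CLEVR-style scenes is to skip the pixel coordinates entirely and project the 3D displacement between two objects onto the scene's "right" direction vector. The sketch below assumes the field layout of the released CLEVR scene JSONs ('directions' and '3d_coords' keys); the eps margin and the example numbers are illustrative:

```python
def is_right_of(scene, i, j, eps=0.2):
    """True if object j lies to the right of object i, judged by the dot
    product of their 3D displacement with the scene's right-axis vector."""
    dx, dy, dz = scene['directions']['right']
    xi, yi, zi = scene['objects'][i]['3d_coords']
    xj, yj, zj = scene['objects'][j]['3d_coords']
    # Project (j - i) onto the right direction; eps filters near-ties.
    return dx * (xj - xi) + dy * (yj - yi) + dz * (zj - zi) > eps

# Hypothetical scene fragment in the CLEVR JSON layout:
scene = {
    'directions': {'right': (0.66, 0.75, 0.0)},  # camera-dependent axis
    'objects': [
        {'3d_coords': (-1.0, -1.0, 0.7)},
        {'3d_coords': (2.0, 1.5, 0.35)},
    ],
}
assert is_right_of(scene, 0, 1)
assert not is_right_of(scene, 1, 0)
```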