Thanks and sorry for creating another issue.
I have already solved the camera JSON file issue, but I noticed that for the partial point cloud there is a `root_data_dir` named:
data/acronym/renders/objects_filtered_grasps_63cat_8k/
According to the following code, it seems to require a directory structure like the one below:
```
data/acronym/renders/objects_filtered_grasps_63cat_8k/
└── train/
    ├── scene_00001/
    │   ├── scene_00001_cam_0.png
    │   ├── scene_00001_cam_1.png
    │   ├── (other depth images)
    │   └── 1.npz
    ├── scene_00002/
    │   ├── scene_00002_cam_0.png
    │   ├── scene_00002_cam_1.png
    │   ├── (other depth images)
    │   └── 2.npz
    └── ...
```
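To make my assumption about the layout concrete, here is a small sketch of how I currently expect a loader to walk it. The per-scene file names (`scene_XXXXX_cam_N.png`, one `.npz` per scene) are my guesses from the code, not something I found documented:

```python
from pathlib import Path

def list_scenes(root_data_dir, split="train"):
    """Walk the assumed layout: <root>/<split>/scene_XXXXX/ containing
    per-camera depth PNGs plus one .npz per scene (contents unknown to me)."""
    scenes = {}
    split_dir = Path(root_data_dir) / split
    for scene_dir in sorted(split_dir.iterdir()):
        if not scene_dir.is_dir():
            continue
        scenes[scene_dir.name] = {
            # depth renders named after the scene folder, one per camera
            "depth": sorted(scene_dir.glob(f"{scene_dir.name}_cam_*.png")),
            # the per-scene archive whose generation I could not find
            "npz": sorted(scene_dir.glob("*.npz")),
        }
    return scenes
```

Please correct me if the dataloader actually expects something different.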
I assume it is generated from the ACRONYM dataset, and I found a script at
https://github.com/NVlabs/acronym/blob/main/scripts/acronym_render_observations.py
that generates depth images for a selected object and a selected "support" object. I attach one example below.
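For reference, my current understanding is that such depth renders would be back-projected into a partial point cloud with a standard pinhole model. This is only my assumption about the pipeline; the intrinsics below are placeholders, not values from your camera JSON:

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth map (meters) into camera-frame 3D points.
    Generic pinhole back-projection; fx, fy, cx, cy are placeholder
    intrinsics, not taken from the repo's camera JSON."""
    h, w = depth.shape
    # pixel coordinate grids: u along columns, v along rows
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    valid = depth > 0  # drop background / invalid pixels
    z = depth[valid]
    x = (u[valid] - cx) * z / fx
    y = (v[valid] - cy) * z / fy
    return np.stack([x, y, z], axis=-1)  # (N, 3) partial point cloud
```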
However, I am not sure whether this is the official way the data was generated in your work, and I also could not find anything related to .npz file generation in https://github.com/NVlabs/acronym/
Could you please clarify the official way to generate the dataset expected under "data/acronym/renders/objects_filtered_grasps_63cat_8k/", or point me to any scripts that do so?
Thanks in advance.