
Generating Datasets


Conveyor Dataset

  1. Collect bag files with the required topics from the PR2:
rosbag record /camera/depth/camera_info /camera/depth/image /camera/depth/points /camera/rgb/camera_info /camera/rgb/image_color /tf -o sugar_1.bag
  2. Generate color images, depth images, and registered depth images from the bag data by running the Python script while playing back the bag file (change output_path as required); a rough sketch of such a node is given after this step:
rosbag play sugar_1_2020-01-20-13-37-47.bag
python image_node.py 
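
The repository's image_node.py defines what is actually saved; the following is only a rough sketch of such a saver node, assuming cv_bridge and message_filters, the topic names from the record command above, and a hypothetical output_path. The real script may differ in file names and depth encoding (the dataset layout below uses 0.depth.jpg):

```python
# Rough sketch of a color/depth saver node, assuming cv_bridge and
# message_filters; the repo's actual image_node.py may differ in file names,
# depth encoding, and output_path handling.
import os
import cv2
import numpy as np
import rospy
import message_filters
from cv_bridge import CvBridge
from sensor_msgs.msg import Image

output_path = "/tmp/conveyor_dataset/sugar_1"  # change as required
if not os.path.isdir(output_path):
    os.makedirs(output_path)

bridge = CvBridge()
frame_id = 0

def callback(color_msg, depth_msg):
    # Save one synchronized color/depth pair per frame.
    global frame_id
    color = bridge.imgmsg_to_cv2(color_msg, desired_encoding="bgr8")
    depth = bridge.imgmsg_to_cv2(depth_msg, desired_encoding="passthrough")
    cv2.imwrite(os.path.join(output_path, "%d.color.jpg" % frame_id), color)
    # Depth arrives as 32-bit metres; stored as 16-bit millimetres in this
    # sketch (the actual script controls the real encoding and extension).
    depth_mm = np.nan_to_num(depth * 1000.0).astype(np.uint16)
    cv2.imwrite(os.path.join(output_path, "%d.depth.png" % frame_id), depth_mm)
    frame_id += 1

rospy.init_node("image_saver")
color_sub = message_filters.Subscriber("/camera/rgb/image_color", Image)
depth_sub = message_filters.Subscriber("/camera/depth/image", Image)
sync = message_filters.ApproximateTimeSynchronizer([color_sub, depth_sub], 10, 0.05)
sync.registerCallback(callback)
rospy.spin()
```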
  3. Generate labels. The rotation estimate for the first frame is provided by the user, and the translation is computed from the mean of a filtered point cloud containing only the points belonging to the object on the conveyor (see the sketch below). Change the model path and output path as required (in the yaml file) and the initial rotation (in utils.py). Check in RViz that the filtered point cloud and the point cloud corresponding to the output pose are correct. Labels may not be generated for every collected image: a start index and a filter on the number of points in the point cloud are applied. Note that the final pose annotation is in the /base_footprint frame of the robot:
python image_node.py --label_images --config config_conveyor.yaml
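
The actual labeling code lives in utils.py; the snippet below is only an illustrative sketch of the idea described above, with a hypothetical function name and min_points threshold: the rotation comes from the user-supplied initial estimate, the translation is the centroid of the filtered object cloud, and frames with too few object points are skipped:

```python
# Illustrative sketch only (not the actual utils.py API): rotation from the
# user-supplied initial estimate, translation from the centroid of the
# filtered object cloud; min_points is a hypothetical threshold.
import numpy as np

def pose_from_filtered_cloud(points_xyz, initial_rotation, min_points=200):
    """points_xyz: (N, 3) object points in the /base_footprint frame.
    initial_rotation: user-supplied 3x3 rotation for the first frame."""
    if points_xyz.shape[0] < min_points:
        return None                          # too few object points: no label
    pose = np.eye(4)
    pose[:3, :3] = initial_rotation
    pose[:3, 3] = points_xyz.mean(axis=0)    # translation = mean of the cloud
    return pose                              # saved as <frame>.pose.txt
```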
  4. Repeat the above for all bag files and assemble a folder with all required GT info in the following structure:
conveyor_dataset/
  sugar_1/
     0.color.jpg
     0.depth.jpg
     0.mask.jpg
     0.pose.txt
  drill_1/
     ....
  5. Compile all images and GT into COCO format (a sketch of the resulting structure is shown below):
python convert_fat_coco.py 
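
The exact keys are defined by convert_fat_coco.py; as a rough sketch, assuming 0.pose.txt stores a 4x4 matrix and using standard COCO fields plus a pose entry, the output JSON looks roughly like this:

```python
# Rough sketch of the COCO-style output; field names follow the standard COCO
# layout, and the pose entry / exact keys written by convert_fat_coco.py may
# differ. Assumes 0.pose.txt holds a 4x4 pose matrix.
import json
import numpy as np

pose = np.loadtxt("conveyor_dataset/sugar_1/0.pose.txt")  # GT pose from step 3

coco = {
    "images": [
        {"id": 0, "file_name": "sugar_1/0.color.jpg", "width": 640, "height": 480},
    ],
    "annotations": [
        {
            "id": 0,
            "image_id": 0,
            "category_id": 1,
            "segmentation": [],           # polygon(s) from 0.mask.jpg
            "bbox": [100, 120, 80, 60],   # [x, y, w, h], derived from the mask
            "pose": pose.tolist(),        # 6-DoF GT in /base_footprint
        },
    ],
    "categories": [{"id": 1, "name": "sugar"}],
}

with open("conveyor_dataset/instances.json", "w") as f:
    json.dump(coco, f, indent=2)
```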
  6. Test PERCH:
python fat_pose_image.py --config config_conveyor.yaml