# Running Pose Sampler

## Creating Training and Testing Data

The data is created by running the PERCH 2.0 6-DoF flow on the YCB Video Dataset. For now, this works for one object at a time:

1. Follow the steps in the Running-With-Docker Wiki under "Using Docker Image" to set up PERCH 2.0 and MaskRCNN with Docker (make sure you are able to run Step 12 before going further).

2. Make a new folder outside Docker for storing sampler data:

```bash
mkdir -p pose_sampler_data/sugar/test
mkdir -p pose_sampler_data/sugar/train
```
3. Mount the above folder while running Docker to /data/pose_sampler_data (a minimal `docker run` sketch follows below).
   - If you are creating training data, config_docker.yaml should point to the training COCO annotation file.
   - If you are creating test data, config_docker.yaml should point to the test COCO annotation file.
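A minimal sketch of the mount, with a hypothetical image name `perch` (take the actual image name and any other flags from the Running-With-Docker Wiki):

```bash
# Hypothetical: the image name and any extra flags come from your
# Running-With-Docker setup; only the mount target is prescribed here.
docker run -it \
  -v "$(pwd)/pose_sampler_data":/data/pose_sampler_data \
  perch /bin/bash
```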
4. Next, check the run_ycb_6d function in fat_pose_image.py (the snippet below shows one way to locate it):
   - It should be set to run a single required object.
   - If creating training data, the scene range should be 0 to 93.
   - If creating test data, the scene range should be 48 to 60.
   - The image range can be set as per requirement.
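If you are unsure where to make these edits, fat_pose_image.py sits in the fat_dataset tools folder that is added to PYTHONPATH in the next section (an assumption based on that path), so inside Docker you can locate the function with:

```bash
# Find the run_ycb_6d function (and the nearby scene/image range settings).
grep -n "run_ycb_6d" \
  /ros_python3_ws/src/perception/sbpl_perception/src/scripts/tools/fat_dataset/fat_pose_image.py
```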

## Training Network

The following steps train a pose sampling network using the costs calculated by PERCH 2.0.

  1. Follow the steps in Running-With-Docker Wiki under "Using Docker Image" to set up PERCH 2.0 and MaskRCNN with Docker.

2. Clone the sampling CNN repo:

```bash
git clone https://github.com/SBPL-Cruz/perch_pose_sampler
```
3. Start the Docker image and mount the cloned folder into Docker at /pose_sampler (a sketch follows below).
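As before, the exact `docker run` flags come from the Running-With-Docker Wiki; a minimal sketch with a hypothetical image name, also mounting the dataset location assumed by the training command below:

```bash
# Hypothetical image name; the essential parts are the two mounts.
docker run -it \
  -v "$(pwd)/perch_pose_sampler":/pose_sampler \
  -v "$(pwd)/YCB_Video_Dataset":/data/YCB_Video_Dataset \
  perch /bin/bash
```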
4. After getting into the Docker shell, run the following to make sure all Python modules are on the PYTHONPATH:

```bash
export PYTHONPATH=$PYTHONPATH:/pose_sampler/
export PYTHONPATH=$PYTHONPATH:/ros_python3_ws/src/perception/sbpl_perception/src/scripts/tools/fat_dataset
```
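To confirm both folders were picked up, you can print Python's module search path (this only checks the path entries, not specific module names):

```bash
# Both exported folders should appear in the output.
python -c "import sys; print('\n'.join(sys.path))" | grep -E "pose_sampler|fat_dataset"
```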
5. Run the training code:

```bash
# For visualizing poses during training:
Xvfb :5 -screen 0 800x600x24 & export DISPLAY=:5;  # skip this if not using the CPU version
cd /pose_sampler/utils
python train_classification.py \
  --dataset /data/YCB_Video_Dataset \
  --dataset_type ycb \
  --dataset_annotation /data/YCB_Video_Dataset/instances_train_bbox_pose_sampler.json \
  --test_dataset_annotation /data/YCB_Video_Dataset/instances_keyframe_bbox_pose_sampler.json \
  --batchsize 40 --nepoch 50 --render_poses
```
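Before launching the full 50-epoch run, it can help to do a short sanity run with the same flags but small, hypothetical values:

```bash
# Quick smoke test: one epoch, tiny batch, no pose rendering.
python train_classification.py \
  --dataset /data/YCB_Video_Dataset \
  --dataset_type ycb \
  --dataset_annotation /data/YCB_Video_Dataset/instances_train_bbox_pose_sampler.json \
  --test_dataset_annotation /data/YCB_Video_Dataset/instances_keyframe_bbox_pose_sampler.json \
  --batchsize 4 --nepoch 1
```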
6. Run TensorBoard outside Docker to visualize training in the browser (it serves on http://localhost:6006 by default):

```bash
cd perch_pose_sampler/utils
tensorboard --logdir experiments
```
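If training runs on a remote machine, a standard SSH port forward (hypothetical user and host) makes the dashboard reachable locally:

```bash
# Forward the remote TensorBoard port 6006 to your local machine.
ssh -L 6006:localhost:6006 user@remote-host
```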