# Running Pose Sampler
Training a pose sampling network using the costs calculated by PERCH 2.0.
- Follow the steps in the Running-With-Docker Wiki under "Using Docker Image" to set up the PERCH 2.0 Docker image.
- Clone the sampling CNN repo:
  ```
  git clone https://github.com/SBPL-Cruz/perch_pose_sampler
  ```
- Start the Docker image and mount the cloned folder inside Docker at `/pose_sampler` by adding `-v <local path to perch_pose_sampler>:/pose_sampler` to the `docker run` command.
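  A minimal sketch of what the full command could look like; the image name is a placeholder for whatever you built in the Running-With-Docker Wiki, and any other flags from that Wiki (e.g. dataset mounts or GPU access) should be kept:

  ```
  docker run -it \
      -v <local path to perch_pose_sampler>:/pose_sampler \
      <perch image name> bash
  ```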
- After getting into the Docker shell, run the following to make sure all Python modules are on the `PYTHONPATH`:
  ```
  export PYTHONPATH=$PYTHONPATH:/pose_sampler/
  export PYTHONPATH=$PYTHONPATH:/ros_python3_ws/src/perception/sbpl_perception/src/scripts/tools/fat_dataset
  ```
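  If you want to double-check that both paths were picked up, you can print the interpreter's search path:

  ```
  python -c "import sys; print('\n'.join(sys.path))"
  ```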
- Download the sampler test and train annotation files into the YCB_Video_Dataset folder on your machine. Note that these annotation files contain scores for 80 viewpoints for every input image, calculated from the costs of PERCH 2.0 runs on those images.
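  To confirm the files downloaded correctly, you can peek at the top-level keys of one of them from inside the Docker shell (assuming the dataset is mounted at `/data/YCB_Video_Dataset` as in the training command below; the printed keys are simply whatever the file contains):

  ```
  python -c "import json; d = json.load(open('/data/YCB_Video_Dataset/instances_train_bbox_pose_sampler_80.json')); print(list(d.keys()))"
  ```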
- Run the training code:
  ```
  # For visualizing poses during training:
  Xvfb :5 -screen 0 800x600x24 & export DISPLAY=:5;  # skip this if not using the CPU version
  cd /pose_sampler/utils
  python train_classification.py \
      --dataset /data/YCB_Video_Dataset \
      --dataset_type ycb \
      --dataset_annotation /data/YCB_Video_Dataset/instances_train_bbox_pose_sampler_80.json \
      --test_dataset_annotation /data/YCB_Video_Dataset/instances_keyframe_bbox_pose_sampler_80.json \
      --batchsize 1 --nepoch 50 --render_poses
  ```
- Run TensorBoard outside Docker to visualize the training logs in the browser. If TensorBoard is not installed outside Docker, you can install it with `pip install tensorboard`:
  ```
  cd perch_pose_sampler/utils
  tensorboard --logdir experiments
  ```
## Creating the Data

The data is created by running the PERCH 2.0 6-DoF flow on the YCB Video Dataset. For now, this is done for one object at a time only:
- Follow the steps in the Running-With-Docker Wiki under "Using Docker Image" to set up PERCH 2.0 and MaskRCNN with Docker (make sure you are able to run Step 12 before going further).
- Make a new folder outside Docker for storing the sampler data:
  ```
  mkdir -p pose_sampler_data/sugar/test
  mkdir -p pose_sampler_data/sugar/train
  ```
- Mount the above folder to `/data/pose_sampler_data` when running Docker (see the sketch after this list).
- If you are creating training data, `config_docker.yaml` should point to the training COCO annotation file.
- If you are creating test data, `config_docker.yaml` should point to the test COCO annotation file.
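A minimal sketch of the extra mount flag for the data folder, assuming it was created in the current directory; add it alongside the other flags of the `docker run` command you already use for PERCH 2.0:

```
docker run -it \
    -v $(pwd)/pose_sampler_data:/data/pose_sampler_data \
    <perch image name> bash
```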
- Next, check the `run_ycb_6d` function in `fat_pose_image.py`:
  - It should be set to run a single required object.
  - If creating training data, the scene range should be 0 to 93.
  - If creating test data, the scene range should be 48 to 60.
  - The image range can be set as per requirement.
- Run the code from inside Docker:
  ```
  python fat_pose_image.py --config config_docker.yaml
  ```
- The PERCH C++ code will dump its outputs, as well as the network data, as JSON files in the `perch_outputs` folder.
- Once you are done running the code, copy the folders corresponding to the images from `perch_outputs` to the required train or test folder. **Make sure there are no stray folders from other runs in `perch_outputs` if you are copying everything in the folder:**
  ```
  cp -r perch_outputs/* pose_sampler_data/sugar/train
  ```
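  If `perch_outputs` does contain folders from other runs, it is safer to copy only the scene folders you need; the folder names below are placeholders for whatever your run actually produced:

  ```
  cp -r perch_outputs/<scene_folder_1> perch_outputs/<scene_folder_2> pose_sampler_data/sugar/train
  ```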
- Once the folders for both train and test are copied, you can run the `convert_fat_coco.py` script to convert the data to COCO format, which can be used for training the network. The script will go through each pose in each scene and assign a score out of 1 using the cost computed by PERCH. It will also discretize the pose using viewpoints and in-plane rotations. Look for the code section under `DATASET_TYPE = "ycb_sampler"`:
  - Make sure this is the only section set to `True` and everything else is set to `False`.
  - For testing, the testing section should be uncommented (output file: `instances_keyframe_bbox_pose_sampler`).
  - For training, the training section should be uncommented (output file: `instances_train_bbox_pose_sampler`).
- Run the convert script to create the JSON files containing the annotations in the YCB_Video_Dataset folder:
  ```
  python convert_fat_coco.py
  ```
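  As a quick sanity check on the generated annotations, you can count the entries in the output file; this assumes the output follows the usual COCO layout with an `annotations` list, and the filename should be adjusted to whatever the script actually wrote:

  ```
  python -c "import json; d = json.load(open('instances_train_bbox_pose_sampler_80.json')); print(len(d['annotations']))"
  ```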