{"name":"PERCH 2.0","tagline":"Fast and High Quality GPU-based Perception via Search for Object Pose Estimation","body":"# PERCH 2.0 : Fast and High-Quality GPU-based Perception via Search for Object Pose Estimation\r\n\r\n![Image of 6-Dof](images/6dof_flow.png)\r\n\r\nOverview\r\n--------\r\nThis library provides implementations for single and multi-object pose estimation from RGB-D sensor (MS Kinect, ASUS Xtion, Intel RealSense etc.) data and CAD models. It can evaluate thousands of poses in parallel on a GPU in order to find the pose that best explains the observed scene using CUDA. Each pose is refined in parallel through CUDA based GICP. PERCH 2.0 works in conjunction with an instance segmentation CNN for 6-Dof pose estimation (Tested with YCB Video Dataset).\r\n\r\nFeatures\r\n------------\r\n* Detect 3Dof poses (in a tabletop setting) in under 1s without any CNN training\r\n* Get high detection accuracies required for tasks such as robotic manipulation \r\n* Get accurate 6-Dof poses directly from output of a 2D segmentation CNN\r\n\r\nSystem Requirements\r\n------------\r\n- Ubuntu (>= 16.04) \r\n- NVidia GPU (>= 4GB)\r\n\r\nDocker Setup\r\n------------\r\nFollow the steps outlined in this [Wiki](https://github.com/SBPL-Cruz/perception/wiki/Running-With-Docker#using-docker-image) to setup the code on your machine. The code will be built and run from the Docker image.\r\n\r\nRunning with YCB Video Dataset\r\n-----------------------\r\nFollow the steps outlined in this [Wiki](https://github.com/SBPL-Cruz/perception/wiki/Running-With-Docker#running-6-dof--ycb_video_dataset) to run the code on YCB Video Dataset. It can run using PoseCNN masks, ground truth masks or a custom MaskRCNN model trained by us. The model is trained to detect full bounding boxes and instance segmentation masks of YCB objects in the dataset.\r\n\r\nResults : \r\n![](https://cdn.mathpix.com/snip/images/oUibumUIATzIIYEr81i_wcgp7rs0HyF109AcUCspE3Q.original.fullsize.png)\r\n\r\nRunning with Robot\r\n------------------\r\nPERCH 2.0 communicates with the robot's camera using ROS. Follow the steps outlined in this [Wiki](https://github.com/SBPL-Cruz/perception/wiki/Running-on-Robot) to first test the code with bagfiles. You can then use the bagfile setup of your choice and modify it as per the robot requirements.\r\n\r\nCitation\r\n----\r\nPlease use the citation below if you use our code :\r\n```\r\n@mastersthesis{Agarwal-2020-122934,\r\nauthor = {Aditya Agarwal},\r\ntitle = {Fast and High-Quality GPU-based Deliberative Perception for Object Pose Estimation},\r\nyear = {2020},\r\nmonth = {June},\r\nschool = {},\r\naddress = {Pittsburgh, PA},\r\nnumber = {CMU-RI-TR-20-22},\r\nkeywords = {pose estimation, deliberative perception, manipulation},\r\n}\r\n```\r\n","note":"Don't delete this file! It's used internally to help with page regeneration."}