The current implementation doesn't draw anything at certain relative angles between the source depth image frame and the target frame (called `rgb` in the code and documentation, but it can be any valid camera info with a valid frame). As the rotation approaches those angles it draws increasingly thin slivers, and once a range check fails it draws nothing at all.
I have the beginning of a quick fix here: lucasw@478fd5b. It needs roughly 3-4x the computation, and it isn't the full triangle rasterization mentioned in a TODO, but it would get decent results in more cases. It gets a little ridiculous to embed what looks more and more like a full software 3D renderer into this node, but making it somewhat better seems reasonable.
This image shows the filled areas reduced to slivers with the min/max-of-two-values approach from the commit above; without it the slivers can vanish entirely because the range check can fail outright.
This is the original code with for loops that sometimes don't execute at all:
https://github.com/ros-perception/image_pipeline/blob/noetic/depth_image_proc/src/nodelets/register.cpp#L291-L293
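For illustration, here's a minimal sketch of the failure mode and the min/max guard. The function name, signature, and z-buffer handling are hypothetical, not the actual code from `register.cpp` or the commit above:

```cpp
#include <algorithm>
#include <cmath>
#include <opencv2/core.hpp>

// Sketch only: uv_1 and uv_2 are the projections into the target image of two
// corners of a depth pixel. At certain relative angles the rotation flips
// their order, so a loop written as
//   for (int nu = uv_1.x; nu <= uv_2.x; ++nu)
// runs zero times and nothing is drawn. Taking the min/max of the projected
// coordinates keeps the loop bounds ordered regardless of angle.
void fillProjectedSpan(const cv::Point2d& uv_1, const cv::Point2d& uv_2,
                       cv::Mat& registered_depth, const float new_depth)
{
  const int u_min = static_cast<int>(std::floor(std::min(uv_1.x, uv_2.x)));
  const int u_max = static_cast<int>(std::ceil(std::max(uv_1.x, uv_2.x)));
  const int v_min = static_cast<int>(std::floor(std::min(uv_1.y, uv_2.y)));
  const int v_max = static_cast<int>(std::ceil(std::max(uv_1.y, uv_2.y)));

  for (int v = v_min; v <= v_max; ++v)
  {
    for (int u = u_min; u <= u_max; ++u)
    {
      if (u < 0 || v < 0 || u >= registered_depth.cols || v >= registered_depth.rows)
        continue;
      float& depth = registered_depth.at<float>(v, u);
      // Z-buffer style: keep the nearest depth so closer surfaces win.
      if (depth == 0.0f || new_depth < depth)
        depth = new_depth;
    }
  }
}
```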
There also appear to be some obvious refactors/cleanups in that same code (it does the same thing in three nearly identical blocks). I can also look at creating a unit test for this (or is there already one somewhere that I'm not seeing?).
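If the span-filling logic were factored out into a free function like the hypothetical `fillProjectedSpan` sketched above, a gtest could pin down the zero-iteration case directly. Something along these lines:

```cpp
#include <gtest/gtest.h>
#include <opencv2/core.hpp>

// Hypothetical test: the corner order that made the original loops execute
// zero times (uv_1 to the lower right of uv_2) should still fill the span.
TEST(RegisterDepth, FillsSpanWhenProjectedCornersAreSwapped)
{
  cv::Mat registered(480, 640, CV_32FC1, cv::Scalar(0.0f));
  fillProjectedSpan(cv::Point2d(12.7, 9.3), cv::Point2d(10.2, 7.8),
                    registered, 1.5f);
  // A pixel inside the projected span should now carry the new depth.
  EXPECT_FLOAT_EQ(registered.at<float>(8, 11), 1.5f);
}
```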
Probably few people using this have extreme angles between the depth camera and the other camera frame; I'm only running into it now because I'm trying this out with synthetic data (I may fold some of the synthetic data generation into the unit test, but it's nice to have manual test support as well).