Hi @RaykiDan If you wish to know the basic principles of how stereoscopic depth works, the link below is an introductory guide.

https://github.com/realsenseai/librealsense/blob/master/doc/depth-from-stereo.md

The RealSense data sheet document also provides the following explanation: the RealSense D400 series depth camera uses stereo vision to calculate depth. The stereo vision implementation consists of a left imager, a right imager, and an optional infrared projector. The infrared projector projects a non-visible static IR pattern to improve depth accuracy in scenes with low texture. The left and right imagers capture the scene and send imager data to the depth imaging (vision) processor.
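The stereo triangulation described above boils down to depth = baseline × focal length / disparity. A minimal sketch of that relationship, using hypothetical D435i-like numbers (roughly a 50 mm baseline and a focal length of ~640 pixels) rather than real calibration values:

```python
def depth_from_disparity(baseline_m, focal_px, disparity_px):
    """Depth in metres from stereo disparity: z = B * f / d.

    A point seen by both imagers appears shifted horizontally between
    the left and right images; the bigger the shift (disparity), the
    closer the point. Real cameras compute this in hardware with
    sub-pixel disparity and per-unit calibration.
    """
    if disparity_px <= 0:
        return float("inf")  # zero disparity -> point at infinity
    return baseline_m * focal_px / disparity_px


# Hypothetical values: 50 mm baseline, 640 px focal length.
# A 32 px disparity then corresponds to a depth of about 1 metre.
print(depth_from_disparity(0.050, 640.0, 32.0))
```

This also explains why the minimum sensing distance exists: as an object gets closer, disparity grows until it exceeds the range the matcher can search.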
The camera generates a depth frame in the camera hardware using the raw left and right infrared frames. These are not the same as the frames accessed through the infrared stream, which is why it is possible to stream depth frames when the left and right infrared streams are not enabled.

You can access the sensors directly and control them using the SDK's Low-Level API, but this feature is mostly undocumented and there is only one official example program - rs-data-collect - for accessing it. Implementing a solution for creating custom frames with that approach would therefore be very difficult.

https://github.com/realsenseai/librealsense/blob/master/doc/api_arch.md#low-level-device-api

https://github.com/realsenseai/librealsense/tree/master/tools/data-collect

If you used the genlock hardware sync method - triggering frame capture with a signal sent through a wire from an external signal generator - then you could use genlock's 'burst' mode to generate more than one depth frame on each trigger.

Regarding valid depth: yes, the image will not include depth that falls within the invalid depth band on the left side of the depth image. You can widen the depth field of view by using a camera model with a wider FOV than the D435i, such as the D455, or by using two cameras positioned so that their fields of view overlap. The invalid depth band widens as the camera moves closer to an observed surface and narrows as the camera moves further away.

You cannot access the raw depth frame or change the reference. There has been a past example in #6311 (comment) of a RealSense user who chose to use OpenCV to generate their own depth frame from the left and right infrared frames instead of using the camera's depth frame.
Hi, I'm a student currently learning computer vision, especially depth estimation topics.
I'm working on a project with a RealSense D435i depth camera in Python, using the pyrealsense wrapper to run librealsense on Ubuntu 22.04, and I'm curious about how the RealSense SDK's stereoscopic depth rendition works. I'm trying to find the depth function file so I can read the program itself. Can someone tell me which file/module it is? Thank you.