Are there any sample scripts of applications? #10
As the comment of the function says... However, after deciphering the library, I was able to get the depth in mm from the camera by simply doing the following. I converted it to mm and did some research.
Hello shimoshida, which exact product are you working with?
--> Depends on the product. The radial distance map (ToF only) needs to be undistorted using the intrinsic camera parameters (k1, k2, fx, fy) and then transformed into the camera coordinate system by shifting it to the front screen (f2rc). The cartesian distance map (stereo) directly yields the z value.
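The undistort-and-shift step described above could be sketched roughly as follows. This is only an illustration of the described math, not the library's actual implementation; the parameter names (cx, cy, fx, fy, k1, k2, f2rc) mirror the intrinsics mentioned in the thread (myCamParams), but the exact convention should be checked against the sample code itself.

```python
import numpy as np

def radial_to_cartesian(dist, cx, cy, fx, fy, k1, k2, f2rc):
    """Convert a radial distance map (ToF) to cartesian camera coordinates.

    Hypothetical sketch based on the maintainer's description:
    undistort each pixel ray with (k1, k2), scale by the measured
    radial distance, and shift z by the front-screen offset f2rc.
    """
    rows, cols = dist.shape
    col, row = np.meshgrid(np.arange(cols), np.arange(rows))
    # Ray direction of each pixel from the intrinsics
    xp = (cx - col) / fx
    yp = (cy - row) / fy
    # Radial lens distortion with coefficients k1, k2
    r2 = xp * xp + yp * yp
    k = 1.0 + k1 * r2 + k2 * r2 * r2
    xd = xp * k
    yd = yp * k
    # Normalize the ray and scale it by the measured radial distance
    s0 = np.sqrt(xd * xd + yd * yd + 1.0)
    x = dist * xd / s0
    y = dist * yd / s0
    # Shift z to the front screen of the housing (f2rc)
    z = dist / s0 - f2rc
    return x, y, z
```

For the center pixel the ray is (0, 0, 1), so x and y come out as 0 and z is the measured distance minus f2rc, which matches the expected output described below for a flat target in front of the camera.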
--> By default, all data (after point cloud calculation) is in the camera coordinate system. Any adaptations are made using the camera-to-world matrix (m_c2w). The matrix is again part of myCamParams. It is defined by the mounting position of the camera, which can be set via the API or SOPAS ET.
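Applying the camera-to-world matrix could look like the sketch below. It assumes m_c2w is a standard 4x4 homogeneous transform, which is a guess consistent with the description above; verify the layout against myCamParams in the samples.

```python
import numpy as np

def apply_cam_to_world(points, m_c2w):
    """Transform Nx3 camera-coordinate points with a 4x4 camera-to-world matrix.

    Hypothetical helper: m_c2w is assumed to be a homogeneous 4x4
    transform (rotation + translation) as suggested in the thread.
    """
    points = np.asarray(points, dtype=float)
    # Append a homogeneous coordinate of 1 to each point -> Nx4
    pts_h = np.hstack([points, np.ones((points.shape[0], 1))])
    # Points are row vectors, so multiply by the transposed matrix
    world = pts_h @ np.asarray(m_c2w, dtype=float).T
    return world[:, :3]
```

With an identity rotation and a pure translation in m_c2w, each point is simply shifted by the mounting offset.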
--> What is the output? The triplet should yield (x == 0, y == 0, z == measured_z_for_center_pixel).
I am simply asking a question in an issue.
The current sample code only shows how to set up the camera and take samples of intensity and depth.
I am currently working on a project to build a camera-based robotic system with your cameras, and would appreciate some more practical examples of how to use them.
For example...