
Tangential Distortion / More Radial Distortion coefficients #159

Open
rolfvdhulst opened this issue Dec 23, 2019 · 8 comments

Comments

@rolfvdhulst
Contributor

rolfvdhulst commented Dec 23, 2019

I have been looking at the class camera_calibration a lot lately, and I was wondering what the reason was for choosing a camera distortion model that incorporates only one degree of camera distortion.

Is there a good reason tangential distortion is ignored, or was it simply found too computationally intensive, or has there been no time to work on it so far? Using one degree of radial distortion seems quite minimalistic too. I am just learning how cameras and camera calibration work, so I'd love a better explanation if you have one. Although the lower orders dominate, it could be that there are significant improvements to be had here, which could for example minimize the calibration error between two cameras at the middle line.
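For concreteness, the tangential and higher-order radial terms I'm referring to are those of the Brown–Conrady model. A minimal sketch (the coefficient names k1, k2, p1, p2 follow the usual OpenCV convention; this is not ssl-vision's code, and ssl-vision's current model corresponds to keeping only k1):

```python
def distort(x, y, k1=0.0, k2=0.0, p1=0.0, p2=0.0):
    """Apply Brown-Conrady distortion to normalized image coordinates.

    k1, k2: radial coefficients; p1, p2: tangential coefficients.
    With k2 = p1 = p2 = 0 this reduces to a one-coefficient
    radial-only model.
    """
    r2 = x * x + y * y
    radial = 1.0 + k1 * r2 + k2 * r2 * r2
    xd = x * radial + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
    yd = y * radial + p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
    return xd, yd
```

With all coefficients zero the mapping is the identity, so the extra terms only ever refine the one-coefficient model.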

In particular because I see that in #148 there is talk about adjusting the distortion model to support negative distortion better, I thought it would be a good idea to also mention this possibility. It would not affect runtime performance significantly as finding the roots for the distortion function is only necessary during calibration and to visualize the calibration in the interface of SSL-vision.

You can view this as a 'feature request'.

@g3force
Member

g3force commented Dec 31, 2019

@rhololkeolke and I are experimenting with the OpenCV mechanisms for using a chessboard for camera intrinsic calibration. It might be an alternative to #148.

@joydeep-b might know more about the history and decisions behind the camera model. Keep in mind that the code is quite old ;)

@rolfvdhulst
Contributor Author

rolfvdhulst commented Dec 31, 2019

Interesting! If you need any help, I'm enthusiastic to review anything or build things. I could definitely see the chessboard working out; if you calibrate the camera and undistort the image using OpenCV, field calibration to find the position and orientation of the camera should become a lot simpler.

@joydeep-b
Member

joydeep-b commented Dec 31, 2019

Distortion models can get arbitrarily complex, including tangential distortions, cylindrical distortions, large FOVs, etc. However, in practice, the lenses and cameras that we use with ssl-vision do not exhibit these kinds of distortions. Or more precisely, the impact of correcting for these distortions on the re-projection error is negligible.

However, adding more complex distortion models (including negative radial distortion, see issue #148) significantly slows down the computation. For example, handling negative radial distortion requires solving a general-form cubic equation, rather than a special case, for every pixel being undistorted. This is why we add more complex distortion models only when needed.
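To make the cost concrete, here is a sketch of the radial inversion, assuming the one-coefficient model rd = ru + k1·ru³ (an illustrative form, not necessarily ssl-vision's exact parameterization). For k1 > 0 the cubic always has exactly one real root with a simple closed form, while k1 < 0 can produce up to three real roots that must be disambiguated — the "general-form" case:

```python
import math

def _cbrt(v):
    """Real cube root (math.cbrt only exists from Python 3.11)."""
    return math.copysign(abs(v) ** (1.0 / 3.0), v)

def undistort_radius(rd, k1):
    """Invert rd = ru + k1 * ru**3 for the undistorted radius ru.

    For k1 > 0 (positive distortion) the cubic k1*ru^3 + ru - rd = 0
    has a single real root, given in closed form by Cardano's formula.
    For k1 < 0 up to three real roots exist and the right one must be
    selected per pixel -- the general cubic case.
    """
    if k1 == 0.0:
        return rd
    # Depressed cubic: ru^3 + p*ru + q = 0
    p = 1.0 / k1
    q = -rd / k1
    disc = (q / 2.0) ** 2 + (p / 3.0) ** 3
    if disc >= 0.0:  # one real root (always the case for k1 > 0)
        s = math.sqrt(disc)
        return _cbrt(-q / 2.0 + s) + _cbrt(-q / 2.0 - s)
    # Three real roots (possible for k1 < 0): trigonometric form;
    # pick the smallest positive root as the physical radius.
    m = 2.0 * math.sqrt(-p / 3.0)
    theta = math.acos(3.0 * q / (p * m)) / 3.0
    roots = [m * math.cos(theta - 2.0 * math.pi * k / 3.0) for k in range(3)]
    return min(r for r in roots if r > 0.0)
```

The positive branch is cheap and unconditional; the negative branch needs trigonometric functions plus root selection, which is what makes supporting negative distortion per pixel noticeably more expensive.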

If you have a system that does experience significant tangential distortion, you could share example images to help investigate the magnitude of the error. The need for supporting negative radial distortion is evidenced by the newer cameras, but at the moment we don't have evidence that tangential distortion is actually observed.

@rolfvdhulst
Contributor Author

After some digging I agree with you. My question came from a place of interest: I am building a simulator which uses real camera calibrations to compute pixel positions to forward to the user. I was simulating using the camera calibration from the previous RoboCup, which gave problems because the calibration used at RoboCup 2019 was quite off near the boundaries of the simulated camera image due to #148; the manual calibration is okay but not the best, giving a significant reprojection error. Since the error grew further away from the principal point, I thought the problem was due to the distortion coefficients. RoboCup 2018 works just fine, so it simply comes down to the automatic calibration not working.

Personally I am more concerned with the 'middle line' effect, where the calibrations of two overlapping cameras detect the robots in two distinct locations with some error between them (5–8 cm at the previous RoboCup). However, I do not know where this error originates or how to reduce it effectively. If you could shed more light on this I'd be interested, but feel free to close this issue, as my original question is solved.
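To illustrate the scale of the effect with a toy model (all numbers and the 1-D straight-down geometry are made up for illustration, not measurements from RoboCup): a small per-camera calibration error, here a 1% focal-length error in one camera, already produces a centimeter-scale disagreement where two cameras overlap.

```python
def ground_point(u, f, cam_x, h):
    """Back-project a horizontal pixel offset u (from the principal
    point) to a ground-plane x coordinate, for a pinhole camera at
    height h looking straight down. Toy 1-D model."""
    return cam_x + u * h / f

# Two cameras 4 m up, 6 m apart, both seeing a point on the middle
# line at x = 3 m, i.e. in the middle of their overlap region.
h, f = 4000.0, 1000.0           # camera height [mm], focal length [px]
true_x = 3000.0                 # ground truth [mm]
u1 = (true_x - 0.0) * f / h     # pixel offset seen by camera 1
u2 = (true_x - 6000.0) * f / h  # pixel offset seen by camera 2

# Camera 2 carries a 1% focal-length calibration error:
x1 = ground_point(u1, f, 0.0, h)
x2 = ground_point(u2, f * 1.01, 6000.0, h)
print(abs(x1 - x2))  # ~30 mm disagreement at the seam
```

Real distortion-model errors grow toward the image edges, which is exactly where the overlap region sits, so the seam is where any residual calibration error shows up most.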

@g3force
Member

g3force commented Jan 3, 2020

My guess is that the overlap is due to #148. As soon as we have a solution for it (either by implementing #148 or by using the calibration result from the chessboard pattern), we can check the overlap again.

@rolfvdhulst
Contributor Author

@g3force that should not actually solve the overlap problem, as the overlap was also a problem in 2018, when there was heavy positive distortion on all cameras.

@joydeep-b
Member

Yes, agreed, that problem is related to the fact that single camera frames now cover a larger area of the field, and we do not get enough features across the image for a good calibration. It would definitely help to get more features near the centers of the image for calibration.

@g3force
Member

g3force commented Jan 9, 2020

@rolfvdhulst the chessboard calibration is now available in a first working draft: #163
