Determination of stereo camera world coordinates with respect to calibration target

I'm having trouble getting a precise and accurate estimate of my camera locations. I am defining my world coordinate system with its origin on the calibration target, and I want to know the world coordinates of the camera centers relative to it. I take a set of ~20 images, calibrate the cameras using the stereo calibration functions, and then compute the camera centers as:
wocoIdx = 1; % index of the image where the calibration target defines the world coordinate system
C1 = -stereoParams.CameraParameters1.TranslationVectors(wocoIdx,:);
C2 = -stereoParams.CameraParameters2.TranslationVectors(wocoIdx,:);
The issue is that this typically gives me cameras spaced ~111mm apart horizontally, versus the 140mm I have measured manually.
(1) Is the ratio of camera-to-target distance to camera spacing a limiting factor here? My cameras are about 1960mm from the world coordinate target and spaced ~140mm apart. When calibrating, I translate the calibration target over a range of 200mm towards/away from the cameras (this is limited by the depth of field of the setup).
(2) Am I missing another step (rotation?) to obtain the camera centers? I've also tried using the triangulate function but had little success getting an accurate estimate. The approach I used with triangulate is:
undistortedImagePts1 = undistortPoints(imagePoints(:,:,wocoIdx,1), stereoParams.CameraParameters1);
undistortedImagePts2 = undistortPoints(imagePoints(:,:,wocoIdx,2), stereoParams.CameraParameters2);
C1 = triangulate(undistortedImagePts1(1,:), undistortedImagePts2(1,:), stereoParams); % 3-D location of the first checkerboard corner, relative to camera 1
C2 = C1 + stereoParams.TranslationOfCamera2; % TranslationOfCamera2 is a 1-by-3 row vector, so no transpose

Accepted Answer

Dima Lisin on 21 Jul 2015
Matt's answer is almost correct. The extrinsics R and t represent the transformation from world coordinates into the camera's coordinates, so t by itself is not the camera center; you have to apply the rotation as well. The only problem with Matt's solution is that it does not follow the vector-matrix multiplication convention used by the Computer Vision System Toolbox, which treats points as row vectors. The camera center in world coordinates is
c = -t * R';
because t is a row vector.
Also, you do not have to use one of your calibration images. You can calibrate your cameras, then take a new picture of a checkerboard and use the extrinsics function to compute R and t relative to that board. See this example.
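To make this concrete, here is a minimal sketch of the full computation under the toolbox's row-vector convention. It assumes stereoParams from your stereo calibration; I1 and squareSize are placeholders for your own camera-1 checkerboard image and the board's square size:
% Detect the board that defines the world origin in a new camera-1 image
[imagePoints1, boardSize] = detectCheckerboardPoints(I1);
worldPoints = generateCheckerboardPoints(boardSize, squareSize);
% Extrinsics of camera 1 relative to that board (points must be undistorted)
[R1, t1] = extrinsics(undistortPoints(imagePoints1, stereoParams.CameraParameters1), worldPoints, stereoParams.CameraParameters1);
C1 = -t1 * R1'; % camera 1 center in world coordinates (t1 is a row vector)
% Camera 2 pose follows from the stereo extrinsics: x2 = x1*RotationOfCamera2 + TranslationOfCamera2
R2 = R1 * stereoParams.RotationOfCamera2;
t2 = t1 * stereoParams.RotationOfCamera2 + stereoParams.TranslationOfCamera2;
C2 = -t2 * R2'; % camera 2 center in world coordinates
baseline = norm(C2 - C1); % should be close to the physically measured camera spacing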
  3 Comments
Tracy on 21 Jul 2015
Also, are there any rules of thumb about stereo camera geometry for calibration, e.g. how close the target should be to the cameras, how far apart the cameras should be spaced, etc., based on the properties of the cameras and lenses?
Dima Lisin on 22 Jul 2015
The baseline (distance between the cameras) should be proportional to the distance to the object. If the object is too far away, the stereo disparity becomes too small to measure; that is when you may want to increase the baseline. On the other hand, if the baseline is too wide, you cannot measure objects that are close, much like holding your finger too close to your eyes. This also depends on the resolution of your cameras.
You can easily check whether your setup is reasonable. Calibrate your cameras, take a pair of images, rectify them, create a red-cyan anaglyph from the rectified images using the stereoAnaglyph function, and display it using imtool. Then use the ruler widget in imtool to measure the disparity between a few pairs of corresponding points. If the disparity is less than a pixel, then you want to increase the baseline or move the cameras closer to the object. If the disparity is too large (more than a couple of hundred pixels), then reduce the baseline or move the object farther away.
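A minimal sketch of that check, assuming stereoParams from your calibration and a new image pair I1 and I2 (placeholder names):
% Rectify the pair and inspect the disparity by eye
[J1, J2] = rectifyStereoImages(I1, I2, stereoParams);
imtool(stereoAnaglyph(J1, J2));
% In imtool, use the ruler (Measure Distance) tool to measure the horizontal offset between a few corresponding red/cyan features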


More Answers (1)

Matt J on 17 Jul 2015
Edited: 17 Jul 2015
"Am I missing another step (rotation?) to obtain the camera centers?"
I think so. The camera center is the null vector of the 3-by-4 camera matrix or, in non-homogeneous coordinates, the center C is given by
C = -R.'*t
where R is the camera rotation matrix and t is the (column) translation vector from the camera extrinsic parameters. It looks like you are interpreting t itself to be the camera center.
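To see how this column-vector formula connects to the toolbox's stored extrinsics, here is a hedged sketch using wocoIdx from the question:
% The toolbox stores extrinsics for row vectors: x_cam = x_world*R + t
R = stereoParams.CameraParameters1.RotationMatrices(:,:,wocoIdx);
t = stereoParams.CameraParameters1.TranslationVectors(wocoIdx,:); % 1-by-3
C = -t * R'; % camera center, 1-by-3 (the accepted answer's form)
% The column-vector formula above gives the same point once you convert
% to the column convention (Rc = R.', tc = t.'):
Rc = R.'; tc = t.';
C_col = -Rc.' * tc; % 3-by-1, equal to C.'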
  7 Comments
Matt J on 20 Jul 2015
Edited: 20 Jul 2015
Another suggestion would be to try the calibration again with the cameras at a smaller depth from the calibration target. I've heard speculation that the decomposition of a camera into extrinsic and intrinsic parameters becomes more ill-posed at large depths. If a smaller depth gives you better accuracy, it would at least tell you whether the estimated camera centers fall within the physical bodies of the cameras.
Tracy on 21 Jul 2015
Great idea for checking. Recalibrating with the target closer to the cameras (~1.2 meters away instead of ~1.8) gives a spacing between cameras of 147mm, which is closer to what I'd expect. Tomorrow I'll try spacing the cameras farther apart at the longer working distance, to see if it's a distance-to-baseline ratio issue like I suspected.
Thanks for your help, Matt! I'd of course welcome any more advice, but wanted to make sure I expressed my appreciation for your suggestions.

