# Camera calibration cameraPose Vs extrinsic

chaton on 7 Dec 2015
Answered: Dima Lisin on 9 Dec 2015
Hello,
I am currently playing with the MATLAB camera calibration toolbox (the one included in the new release). I used a synthetic dataset (without noise, for now) and wanted to compute the rotation and translation between two camera poses. The cameraPose function gave me reasonable results, for instance:

Ground-truth rotation:

```
 0.8650  -0.4470   0.2279
 0.4932   0.8411  -0.2219
-0.0925   0.3044   0.9481
```

Rotation estimated with cameraPose:

```
 0.8635  -0.4516   0.2247
 0.4956   0.8426  -0.2109
-0.0941   0.2935   0.9513
```
When I use the extrinsics function, I obtain the following rotation for the second pose, which is exactly the result I was expecting from cameraPose:

```
 0.8650  -0.4470   0.2279
 0.4932   0.8411  -0.2219
-0.0925   0.3044   0.9481
```
Am I missing something, or do you have any idea why cameraPose is less accurate than computing the extrinsics?
Cheers

Dima Lisin on 9 Dec 2015
It really depends on what exactly you did...
To compute the extrinsics you need a picture of a checkerboard, so most of the error in the pose comes from the error in localizing the checkerboard points.
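For reference, a minimal sketch of that checkerboard pipeline (the image file name, square size, and `cameraParams` variable are assumptions; `cameraParams` would come from your calibration session):

```matlab
% Sketch of checkerboard-based pose estimation (names assumed).
% cameraParams comes from estimateCameraParameters on the calibration set.
I = imread('view2.png');                    % checkerboard image (assumed name)
[imagePoints, boardSize] = detectCheckerboardPoints(I);
squareSize  = 25;                           % square size in mm (assumed)
worldPoints = generateCheckerboardPoints(boardSize, squareSize);
% Pose of the board relative to the camera; the error here comes
% mainly from the localization of imagePoints.
[R, t] = extrinsics(imagePoints, worldPoints, cameraParams);
```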
To use cameraPose you need to know the fundamental matrix. If you get it from the estimateFundamentalMatrix function, then you first have to get matching points between two images. Now the error in the pose depends on the error in localizing two sets of points, and on how correct the matches are.
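This two-view pipeline could be sketched roughly as follows (feature detector choice, image variables, and `cameraParams` are assumptions for illustration):

```matlab
% Sketch of the two-view pipeline behind cameraPose (names assumed).
g1 = rgb2gray(I1);  g2 = rgb2gray(I2);      % I1, I2: the two views (assumed)
points1 = detectSURFFeatures(g1);
points2 = detectSURFFeatures(g2);
[f1, vpts1] = extractFeatures(g1, points1);
[f2, vpts2] = extractFeatures(g2, points2);
indexPairs  = matchFeatures(f1, f2);
matched1 = vpts1(indexPairs(:, 1));
matched2 = vpts2(indexPairs(:, 2));
% Point localization error and any wrong matches end up in F,
% and therefore in the recovered pose.
[F, inliers] = estimateFundamentalMatrix(matched1, matched2);
[R, t] = cameraPose(F, cameraParams, matched1(inliers), matched2(inliers));
```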
So if this is what you did, then it makes sense for the extrinsics function to be more accurate. On the other hand, if you computed the fundamental matrix analytically, then this reasoning does not apply. More information about how exactly you set up your experiment would help.
