The calibrator app says the patterns are "too similar" even though they are not

I am trying to calibrate some images of a checkerboard pattern with the (single) Camera Calibrator app, but the app claims the patterns are "too similar" even though they are not. I know they aren't too similar because I have a mirror-symmetric setup with another camera that works fine. Moreover, if I apply some sort of transformation to the images (sometimes flipping them upside down, sometimes mirroring them), the app works fine and calibrates.
The images can be downloaded from this repo. cam1 works fine. cam2 doesn't work. cam2Rotated works fine.
Does anyone have an idea of what is wrong with these images?
Thanks for your help!

2 Comments

No, it is basically the same issue. Last time I 'solved' it by rotating the image by 180 degrees. I found out that this solution doesn't always work, so I would really like to understand what's wrong with this.


 Accepted Answer

After a bit of debugging, I found out that the error "Unable to estimate camera parameters. The 3-D orientations of the calibration pattern might be too similar across images. Remove any similar images and recalibrate the cameras again." was not due to having 'too similar' patterns, but rather to the position at which the principal point converges during calibration.
In the function 'estimateCameraParameters' there is a sanity check, done through 'validateEstimatedResult', that verifies the principal point is positive in the camera reference frame. By removing 'validateEstimatedResult' you will be able to get an output from 'estimateCameraParameters'. The same check is performed in several places in the cameraIntrinsicsImpl.m file, which is invoked when you call params.Intrinsics.
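The behaviour can be reproduced outside the app with a short script. This is only a sketch: imageFileNames (a cell array of image paths) and squareSize (in millimetres) are assumed to be defined for your own data set.

```matlab
% Sketch: run the calibration from a script and inspect where it fails.
% imageFileNames and squareSize are assumed to be defined beforehand.
[imagePoints, boardSize] = detectCheckerboardPoints(imageFileNames);
worldPoints = generateCheckerboardPoints(boardSize, squareSize);
imageSize   = size(imread(imageFileNames{1}), [1 2]);
try
    params = estimateCameraParameters(imagePoints, worldPoints, ...
        'ImageSize', imageSize);
    disp(params.PrincipalPoint)   % may converge far outside the image borders
catch ME
    disp(ME.message)              % the misleading "too similar" error
end
```

If the estimate succeeds, checking params.PrincipalPoint against the image borders shows how far the optimization has drifted, even when the reprojection errors look small.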
What I did to solve the problem and get the intrinsics was to modify the following MATLAB functions:
  • cameraIntrinsicsImpl.m
  • estimateCameraParameters.m
  • validatePrincipalPoint.m
and remove every check on the positivity of the principal point. I am not attaching the modified files for now because I am not sure it is OK to share modified MATLAB functions online. After removing all the checks everything works, including the app.
That said, if you are experiencing a similar issue (i.e., this 'too similar patterns' error when the patterns are clearly different), keep in mind that it may be due to something else. I believe this other question is related to the same problem (negative principal point coordinates, which cause an error when trying to create a camera object).
For more details I advise reading the whole comment thread!

More Answers (1)

Your cam2 images are of poor quality. Corner detection done in isolation fails pretty badly for some of them (see below). Did you visually inspect all 44 images to verify corner detection success by the app? Regardless, I think you need to improve the image collection quality to make things easier on the calibrator app. I'm sure the designers have done what they can to make calibration robust to poor conditions, but it is both unnecessary and unwise to depend on that.
load data
figure(1); procIt(I1)
boardSize = 1×2
6 9
Nraw = 40
Nfiltered = 40
figure(2); procIt(I2)
boardSize = 1×2
8 11
Nraw = 70
Nfiltered = 61
function procIt(I)
% Detect checkerboard corners, drop non-finite detections, and display them.
[imagePoints,boardSize] = detectCheckerboardPoints(I);
boardSize                              % detected inner-corner grid size
Nraw = height(imagePoints)             % corners before filtering
cut = any(~isfinite(imagePoints),2);   % rows with NaN/Inf coordinates
imagePoints(cut,:) = [];
Nfiltered = height(imagePoints)        % corners after filtering
J = insertText(I,imagePoints,1:size(imagePoints,1));   % label each corner
J = insertMarker(J,imagePoints,'o','MarkerColor','red','Size',5);
imshow(J);
title(sprintf('Detected a %d x %d Checkerboard',boardSize));
end

12 Comments

Hi, thanks for the feedback!
I know some images have worse quality, but that is not the issue here. I tried the calibration after removing the bad-quality images where the corners are badly detected, but it still doesn't work.
Also, if you rotate the images by 180 degrees (I did it with ImageJ through Image --> Transform --> two 90-degree rotations), then it works, which proves it is not an image-quality issue, but rather some kind of geometrical issue or a misinterpretation of the images' metadata by the calibrator app.
I also repeated the experiment by acquiring other sets of images from the same orientation, but it never works. I don't really understand why..
Also, if you rotate the images by 180 degrees ... then it works, which proves it is not an image quality issue.
I don't see how it proves that...
I tried the calibration by removing the bad quality images where the corner are badly detected, but it still doesnt work.
Probably because the images you are left with are "too similar" as the warning message tells you.
In any case, the repo of images you gave us is unpruned. We don't know which images you discarded and therefore, we are left blind to the data you are actually using...
The fact that the rotated images work proves that the images are not 'too similar'; something else must be happening. Indeed, if the images were too similar, they would still be too similar after a 180-degree rotation. Rotating all the images does not remove similarity. Does that make sense to you?
But it's not literally the images that the warning is about. It's the set of chequerboard corners that are extracted from those images that is raising the "too similar" warning. You haven't shown us the extracted corner data, but I have no reason to think the corner extraction algorithm is rotation invariant. In fact, the modified test below seems to refute that -- flipping the image leads to both a different detected board size and different numbers of detected corners.
You could upload both the pruned images and their detected corners so that they can be examined, but I think a quicker path to a solution would be to just take better images, under better conditions. For example, it makes no sense to me that you have the chequerboard behind a reflective glass sheet. You should put it in front of the glass, so that you don't have reflections from the glass, or even take the images in another room altogether with better lighting conditions.
load data
figure(1); procIt(I2)
boardSize = 1×2
8 11
Nraw = 70
Nfiltered = 61
figure(2); procIt(rot90(I2,2))
boardSize = 1×2
7 12
Nraw = 66
Nfiltered = 60
function procIt(I)
[imagePoints,boardSize] = detectCheckerboardPoints(I);
boardSize
Nraw=height(imagePoints)
cut = any(~isfinite(imagePoints),2);
imagePoints(cut,:)=[];
Nfiltered=height(imagePoints)
J = insertText(I,imagePoints,1:size(imagePoints,1));
J = insertMarker(J,imagePoints,'o','MarkerColor','red','Size',5);
imshow(J);
title(sprintf('Detected a %d x %d Checkerboard',boardSize));
end
That is strange, usually corner detectors (like Harris) are rotation invariant.
I will try to get better-quality images, even though I have done it several times already and nothing really changed... The camera from that perspective keeps failing to calibrate. I am a bit skeptical that it is just image quality, because I have tried many, many times and the error always appears in the recordings from that perspective. The other camera never has any issue in this sense.
I just tried out ChArUco calibration, but I ran into the exact same error.
Anyway, tomorrow I'll try to get more images. Thank you again!
An update.
I recorded higher-quality images, but the problem persists. I went deeper into the calibration process by scripting the calibration instead of using the app (see file main00_calibrationSingleCamera.mat). This is what I found.
1) For the camera on the left (cam1, the one that works, i.e., the app doesn't crash) the principal point is very off. To my understanding, it should be close to the center of the image (for the resolution considered, around [720,540]), but instead it converges at [2895,903]. Even though the principal point is located outside the image borders, the function works without giving any error and the reprojection errors are very small.
2) For the camera on the right, the calibration doesn't work and I still get the same error (patterns might be too similar), even from the script. However, I forced the calibration to converge by removing the sanity check 'validateEstimatedResult' in the 'estimateCameraParameters' function (see attached function estimateCameraParameters_mod). In this way the calibration converges and produces an output (called params in my code) with small reprojection errors, but when you try to compute K through K = params.Intrinsics.K you get the following error:
The value of 'principalPoint' is invalid. Expected principalPoint to be positive.
Indeed, the variable params.PrincipalPoint is [-638.6390, 170.0831], which is not only very off and outside the image space, but negative.
The issue here is that the sanity check done by validateEstimatedResult doesn't verify whether the principal point is inside the image borders, only whether it is positive. This check breaks the left/right symmetry, which is why I am getting asymmetric results from a symmetric setup.
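The rejection can be illustrated with a small hypothetical sketch. This is not the actual toolbox code, just the kind of positivity check that validateattributes performs and that matches the error message above:

```matlab
% Hypothetical sketch, not the actual toolbox code: a plain positivity check
% rejects the converged principal point, even though the reprojection
% errors are small and the sign merely reflects the setup's geometry.
principalPoint = [-638.6390, 170.0831];  % value reported above
validateattributes(principalPoint, {'double'}, {'positive'}, ...
    'sketch', 'principalPoint')
% Errors with: "Expected principalPoint to be positive."
```

Note that such a check says nothing about whether the principal point is inside the image borders; [2895,903] passes it while [-638.6390,170.0831] does not.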
For context, the principal point values are so far off probably because the distortion is high, as I have an air-glass-water interface and a large angle between the glass and the camera.
This configuration is probably what pushes the principal point so far out of the image, and it does so in a symmetric manner (as the setup is symmetric).
as I have an air-glass-water interface and a large angle between the glass and the camera:
I think you've answered your own question, haven't you? Imaging through a water tank will have Snell refraction and probably will not fit the calibration model (pinhole projection + radial distortion).

No, distortion is not the answer. The distortion is the same for both cameras, but the one on the left works and the one on the right doesn't. The issue is in how the estimateCameraParameters function works: it checks whether the principal point is positive, which breaks the left/right distortion symmetry. As you see from my setup, the interface is the same for both cameras, and this is probably what shifts the principal point so far off the center (and that's fine, I don't really care as long as the reprojection errors are small). For the camera on the left the principal point shifts to the right and goes out of the image boundaries, but in the positive direction, so it's fine. For the camera on the right it shifts to the left and goes out of the image, but in the negative direction, and there the function crashes.

If you think you understand what is happening, then what question remains? You're looking for a workaround? Flip the input images so that the algorithm thinks the principal point is to the right instead of to the left.
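The suggested workaround can be scripted in a few lines. This is only a sketch: inputDir and outputDir are placeholder names, and the file extension is assumed to be PNG.

```matlab
% Sketch of the suggested workaround: rotate every image 180 degrees before
% feeding it to the calibrator, so that the estimated principal point ends
% up on the positive side. inputDir and outputDir are placeholder names.
files = dir(fullfile(inputDir, '*.png'));
for k = 1:numel(files)
    I = imread(fullfile(inputDir, files(k).name));
    imwrite(rot90(I, 2), fullfile(outputDir, files(k).name));  % 180-degree rotation
end
```

Note that the extrinsics estimated from the rotated images would then describe the rotated camera, so they would need to be transformed back if the original orientation matters.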
I will regard the question as solved when I am able to correct the MATLAB functions that are responsible for this error. In particular, the functions are validateEstimatedResult, for the inconsistency of the positivity check, and parseInputsSimulation in the cameraIntrinsicsImpl.m file.
I wouldn't trust any of this. Did you look at any of the estimated parameters besides the principal point? What about the extrinsics? Do they make sense?
Yes, the extrinsics make sense. The cameras are roughly in the correct position and orientation, as they are in the lab.
By the way, I have just modified the functions to avoid the positivity check on the principal point and now everything works, including the calibration from the app, so I now regard the question as solved.


Release: R2025a
Asked: 12 Dec 2025
Edited: 16 Dec 2025
