Projecting 2D (x, y) coordinates onto a 3D plane

I'm looking to project 2D points (x, y) from my image onto a 3D plane (X, Y, Z). The plane into which I want to project my points is defined by the position of my checkerboard in one of my calibration images (image 9). However, when using the following formula:
w * [x; y; 1] = K * [R t] * [X; Y; Z; 1]
The calculated 3D points do not lie on the specified plane. In fact, I am attaching a photo to my question that shows both the extrinsic parameters of my camera calibration and the detected checkerboard points, reprojected onto the 3D plane using the previously mentioned formula.
I am also attaching a snippet of my code associated with this transformation. Does anyone have a solution to provide? Is there an error in my code or my reasoning?
[param_int]= estimateCameraParameters(imagepoints_tot(:,:,:,1), worldPoints,'EstimateSkew', false, 'EstimateTangentialDistortion', true, ...
'NumRadialDistortionCoefficients', 3, 'WorldUnits', 'mm', ...
'InitialIntrinsicMatrix', [], 'InitialRadialDistortion', [], ...
'ImageSize',imagesize );
%%
Mat_transfo=[param_int.RotationMatrices(:,:,9),transpose(param_int.TranslationVectors(9,:))]
Mat_transfo=transpose(param_int.IntrinsicMatrix)*Mat_transfo
Mat_transfo=pinv(Mat_transfo)
translationVector=[param_int.TranslationVectors(9,:),1]
%%
for i = 1:88
pointsdamier=Mat_transfo*[imagepoints_tot(i,1,9,1);imagepoints_tot(i,2,9,1);1]
pointsdamier=pointsdamier/pointsdamier(4)+transpose(translationVector)
hold on
plot3(pointsdamier(1),pointsdamier(3),pointsdamier(2),'bO','MarkerSize',1)
end
showExtrinsics(param_int)

7 Comments

You asked "Is there an error in my code or my reasoning?" I cannot answer whether there is an error in the code, because I do not yet understand the reasoning. I cannot tell from the figure you provided exactly where the [x, y] points are. The figure shows a camera with coordinate system Xc, Yc, Zc. The plot has axes X, Y, Z. It appears that the plot axes X, Y, Z have the same orientation and origin as Xc, Yc, Zc. The plot also shows 9 planes embedded in the 3D space. Your first line of code estimates the camera position, orientation, etc., based on knowing the world points and the image points. Is that your intention? If so, please provide the world points and image points as an attachment, so others can assist you better.
If you have either the world points or the image points, and you want to find the other, then you need to supply the camera position and orientation, or the plane's orientation and the position of its origin relative to the camera. If that is your intention, provide a point set and the information that defines the plane or the camera position and orientation.
Matt J on 15 Dec 2023 (edited 15 Dec 2023)
The camera parameters can only be used to determine the infinite 3D line, extending from the camera center, that your 2D image point corresponds to. To find where that line intersects a 3D plane, which is your goal, you also need to have the equation for that plane. I cannot see in your steps where you incorporate that.
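As a sketch of that missing step (variable names are mine, not from the question's code): back-project the pixel to a ray through the camera center, then intersect that ray with the plane defined by the image-9 extrinsics. This assumes the column-vector convention x ~ K*(R*X + t), so the toolbox's row-convention matrices are transposed first.

```matlab
% Hedged sketch: intersect a back-projected pixel ray with the plane of image 9.
K = transpose(param_int.IntrinsicMatrix);        % toolbox stores K transposed
R = transpose(param_int.RotationMatrices(:,:,9)); % column-vector convention
t = transpose(param_int.TranslationVectors(9,:));

n  = R(:,3);       % plane normal in camera coordinates (the world z-axis)
P0 = t;            % a point on the plane: the world origin, in camera coords

xy = [imagepoints_tot(1,1,9,1); imagepoints_tot(1,2,9,1); 1];
d  = K \ xy;                    % ray direction through the camera center
s  = dot(n, P0) / dot(n, d);    % scale at which the ray meets the plane
Pc = s * d;                     % 3D intersection point, in camera coordinates
```

Distortion is ignored here; for accurate results the pixel coordinates should be undistorted first.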
I understand your comment, and I will try to respond as best as I can. In my first line of code, I am indeed estimating the position of my camera relative to the position of my checkerboard in multiple calibration images. I store the intrinsic and extrinsic parameters of my camera in the variable 'param_int'. The extrinsic parameters contain the relative position (rotation matrix and translation vector) of my checkerboard in each calibration image, so I have the relative position of plane 9 with respect to the camera. To verify that my transformation projects 2D points (x, y) onto the plane, I took the (x, y) positions of the detected points on my checkerboard in position 9 (i.e., the points that were used to calculate the relative position of my plane with respect to my camera, `imagepoints_tot(:, :, 9, 1)`) and applied the transformation to them. However, the (X, Y, Z) positions of the reconstructed 3D checkerboard points are not contained within plane 9.
I am seeking an explanation and a solution for this. Along with my response, I am attaching the file of my variable `imagepoints_tot(:, :, :, 1)`, which contains the (x, y) positions of the detected points on my checkerboard in the various calibration images. I am also attaching the variables 'worldPoints' and 'imagesize'.
I thank you for your response, and please don't hesitate to point out if anything is not clear enough.
@giuseppe, thank you. I will reply tomorrow.
The plane onto which I am attempting to project my points is defined by the position of the checkerboard in position 9. Therefore, I know its relative position with respect to my camera, as the rotation matrix and translation vector associated with this plane are contained in the extrinsic parameters of the camera calibration ('param_int'). I then refer to the transformation mentioned on the following page: https://fr.mathworks.com/help/vision/ref/cameraparameters.html. Please see the associated image.
Attach a .mat file with param_int and imagepoints.


 Accepted Answer

I think you know more than I do about Matlab's camera projection algorithms. Therefore I do not expect that I will be of that much help to you.
You use 9 images of the same set of 88 world points to calibrate the camera. I think that this generates one estimate of the camera intrinsic parameters and nine estimates of the extrinsic parameters. It is as if you took 9 images of the same set of 88 points, while relocating and re-orienting the camera for each image.
Your world points do not have a W (equivalent to Z) coordinate. In this case, does Matlab assume that each world point is on a plane with W=0? I am not sure why the world points lack a W coordinate.
You devised a method to reconstruct the 88 points from image 9. You plot those reconstructed points on the same 3D plot produced by 'showExtrinsics', which shows the planes of the 9 original images in camera coordinates. The reconstructed points are close to plane 9, but do not coincide exactly with it. I suspect that you would like to know why the reconstructed points do not exactly coincide. I cannot say, since I do not fully understand your method for back-projecting the points in image 9.
You assemble the total matrix that converts world points to image points for image 9. Let us call this Mat1. Mat1 is 3x4. I think that you believe that Mat1*[U;V;W;1] = [x;y;1] (up to a scale factor), where [U;V;W] are the coordinates of each point in the world coordinate system, and x, y are the image coordinates of each point. Am I correct? Homogeneous coordinates are used so that the matrix can do translation as well as rotation. I am looking at your lines of code, which I have modified slightly to give new names to the matrices on the left-hand side:
Mat_transfo=[param_int.RotationMatrices(:,:,9),transpose(param_int.TranslationVectors(9,:))];
Mat1=transpose(param_int.IntrinsicMatrix)*Mat_transfo;
Why do you use the transpose of IntrinsicMatrix, rather than the un-transposed IntrinsicMatrix, when constructing Mat1? If Matlab's camera matrices are designed to operate on row vectors, then I would expect you would have to transpose the RotationMatrix and the IntrinsicMatrix. But you did not transpose the rotation matrix.
Then you compute Mat2, the pseudoinverse of Mat1:
Mat2=pinv(Mat1);
Mat2 is 4x3. One can verify that Mat1*Mat2=eye(3).
You compute Mat2*[x;y;1]. This produces a 4-component column vector. Then you normalize the vector by the value of its 4th component. Why? I would expect this component to be one, anyway. Then you add the translation vector. Why? I would expect that the use of homogeneous coordinates, and the incorporation of the translation vector into Mat1, would mean that Mat2 takes care of the translation (or its inverse), and therefore you would not have to translate again.
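One way to see this concretely (an illustrative round-trip check, not part of the original code): forward-project a known world point with Mat1, back-project the resulting pixel with Mat2, and inspect the 4th component. It is generally not 1, because the pseudoinverse returns the minimum-norm solution rather than a point with unit homogeneous coordinate.

```matlab
% Illustrative round-trip check with Mat1 (3x4) and Mat2 = pinv(Mat1).
Pw  = [worldPoints(1,:), 0, 1].';   % a checkerboard corner, homogeneous, z = 0
xh  = Mat1 * Pw;                    % forward projection, scaled by depth w
xy1 = xh / xh(3);                   % the actual pixel, [x; y; 1]

Pb = Mat2 * xy1;                    % back-projection via the pseudoinverse
Pb(4)                               % generally not 1: pinv picks the
                                    % minimum-norm point on the ray, not the
                                    % point with unit homogeneous coordinate
```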
Why do you do
plot3(pointsdamier(1),pointsdamier(3),pointsdamier(2))
instead of
plot3(pointsdamier(1),pointsdamier(2),pointsdamier(3))
?
The projection from a plane to a plane (unlike the projection from three dimensions to a plane) is reversible. It can be represented by a 3x3 matrix if the projection is projective, or close to it (i.e., not too much distortion). One may find the 3x3 matrix that minimizes the sum squared error by singular value decomposition. I need to think a bit more about this. I wonder if this approach could be used instead of Matlab's camera estimation routine.
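A minimal sketch of that idea (variable names are mine): fit a 3x3 plane-to-plane map in a least-squares sense with the pseudoinverse, then invert it to go from image points back to the world plane. Note this linear fit ignores the perspective division, so it is only an approximation of a true projective homography; the final division by the third column restores homogeneous form.

```matlab
% Sketch: least-squares plane-to-plane map fitted with the pseudoinverse
% (row-vector convention: [x y 1] ~ [U V 1]*Ht).
Pw = [worldPoints, ones(88,1)];                         % world points, homogeneous
Pi = [squeeze(imagepoints_tot(:,:,9,1)), ones(88,1)];   % image-9 points

Ht = pinv(Pw) * Pi;               % best-fit 3x3 map: world -> image

Pw_hat = Pi * inv(Ht);            % back-project image points to the world plane
Pw_hat = Pw_hat ./ Pw_hat(:,3);   % renormalize the homogeneous coordinate
```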

5 Comments

Hello and thank you for your response. I don't claim to know more than you in the field; I am a student in biomechanics facing certain computer vision challenges to complete my PhD. It seems that you have fully understood my issue, and you rightly point out that if I transpose the intrinsic matrix, I should probably also transpose the rotation matrix. However, after trying this, it doesn't seem to work. The same applies to your comment about adding the translation vector after the transformation. One solution that comes to mind would be to find the intersection of the 3D infinite line corresponding to my coordinates (x, y) with the plane. However, I am unable to find the equation of the plane (or its normal vector) using the parameters contained in my extrinsic calibration parameters (rotation matrices and translation vectors).
Your world points and your image points are two-dimensional. We can use the pseudoinverse to find the matrix Ht, which maps from world to image. Ht gives the best-fit prediction of the actual image points. I call this matrix Ht because it is the transpose of the homography matrix described here. We invert Ht to project back from actual image points to estimated world points. The attached script does this and displays the results graphically. This solution uses only one set of image points: image 9. I saved the image-9 points as a separate file, which is attached.
This does not tell you where the world points are in the (3D) camera coordinate system. But it does estimate the world point locations in the (2D, planar) world point coordinate system.
In your earlier comment, you said "I am unable to find the equation of the plane (or its normal vector) using the parameters contained in my extrinsic calibration parameters (rotation matrices and translation vector)."
I assume you mean the equation for the plane in camera coordinates. The 3D plot generated by 'showExtrinsics' uses camera coordinates.
The plane, in 3D world coordinates, has z=0 everywhere. Therefore its normal vector is nw=[0,0,1] (in world coordinates). The origin (in world coordinates) is one of the points in the plane: Ow=[0,0,0].
Let us assume that world and camera coordinates are related as follows:
Pc = Pw*R + T
where Pc and Pw are row vectors representing points in camera and world coordinates, respectively. Row vector T is the world coordinate origin, expressed in camera coordinates.
The camera coordinate representation of the plane is
nc = nw*R = R(3,:) (since nw=[0,0,1])
(One does not include the translation, T, when transforming a direction.)
Oc = Ow*R + T = T (since Ow=[0,0,0])
Now you have one point in the plane, and the normal to the plane, in camera coordinates. This specifies the plane uniquely.
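In code, that plane specification looks roughly like this (a sketch using the row-vector convention above; 'd' is my name for the offset term):

```matlab
% Plane of image 9, in camera coordinates (row-vector convention Pc = Pw*R + T).
R = param_int.RotationMatrices(:,:,9);
T = param_int.TranslationVectors(9,:);

nc = [0 0 1] * R;   % plane normal in camera coords: the third row of R
Oc = T;             % a point on the plane: the world origin, in camera coords

% Implicit plane equation: dot(nc, Pc - Oc) == 0 for any Pc on the plane,
% i.e. nc(1)*X + nc(2)*Y + nc(3)*Z + d = 0 with:
d = -dot(nc, Oc);
```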
The value of T, for image 9, in your code, can be obtained in (at least) two ways:
T9=param_int.PatternExtrinsics(9).Translation;
or
T9=param_int.TranslationVectors(9,:)
These two methods give the same result for T9.
The rotation matrix for image 9 can be obtained two ways:
R9a=param_int.PatternExtrinsics(9).R;
or
R9b=param_int.RotationMatrices(:,:,9);
R9a and R9b are the transpose, and inverse, of each other. I don't know why they are not the same. Perhaps this explains why you needed to transpose the intrinsic matrix and not the rotation matrix.
In the attached script, I use R9b and the equation above to convert world points from world point coordinates to camera coordinates. I plot the corner points, in camera coords, on the 3D plot. The plotted corner points align exactly with the corners of the plane. See screen shot below. Therefore, if you convert all the points this way, I am confident that they will all lie on the plane.
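That conversion can be sketched as follows (my variable names; the residual check is an addition to verify the claim that the points land on the plane):

```matlab
% Convert the 88 planar world points to camera coordinates and check that
% they satisfy the plane equation of image 9 (row-vector convention).
R9b = param_int.RotationMatrices(:,:,9);
T9  = param_int.TranslationVectors(9,:);

Pw = [worldPoints, zeros(88,1)];   % append z = 0 to the planar world points
Pc = Pw * R9b + T9;                % camera-coordinate positions, 88x3

% Residuals should be ~0 (up to rounding): every point lies on the plane.
nc  = [0 0 1] * R9b;               % plane normal, third row of R9b
res = (Pc - T9) * nc.';
```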
If you have further questions, please contact me by email by clicking on the envelope icon which appears when you click on the "WR" circle next to my answers.
Good luck with your PhD project.
Hello, thank you so much for your response; it's exactly what I needed! Furthermore, your explanations are very clear and have helped me fully understand the process you followed and why my approach was incorrect. Once again, thank you for taking the time to address my issue.
@Giuseppe, you’re welcome, and thank you for your kind comments.


Release: R2021a