
triangulate

3-D locations of undistorted matching points in stereo images

Syntax

worldPoints = triangulate(matchedPoints1,matchedPoints2,stereoParams)
worldPoints = triangulate(matchedPoints1,matchedPoints2,cameraMatrix1,cameraMatrix2)
[worldPoints,reprojectionErrors] = triangulate(___)

Description

worldPoints = triangulate(matchedPoints1,matchedPoints2,stereoParams) returns 3-D locations of matching pairs of undistorted image points from two stereo images.

worldPoints = triangulate(matchedPoints1,matchedPoints2,cameraMatrix1,cameraMatrix2) returns the 3-D locations of the matching pairs in a world coordinate system. These locations are defined by camera projection matrices.

[worldPoints,reprojectionErrors] = triangulate(___) additionally returns reprojection errors for the world points using any of the input arguments from previous syntaxes.
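As a sketch of the second syntax, the two projection matrices can be built with the cameraMatrix function, placing camera 1 at the world origin. This assumes stereoParams was estimated beforehand (for example, with estimateCameraParameters) and that matchedPoints1 and matchedPoints2 come from a prior matching step:

```matlab
% Build a projection matrix for each camera. Camera 1 sits at the
% world origin; camera 2 is located by the stereo extrinsics.
camMatrix1 = cameraMatrix(stereoParams.CameraParameters1, eye(3), [0 0 0]);
camMatrix2 = cameraMatrix(stereoParams.CameraParameters2, ...
    stereoParams.RotationOfCamera2, stereoParams.TranslationOfCamera2);

% Triangulate using the explicit camera matrices.
[worldPoints, reprojectionErrors] = triangulate(matchedPoints1, ...
    matchedPoints2, camMatrix1, camMatrix2);
```

With this choice of matrices, the world coordinate system coincides with the coordinate system of camera 1, so the result matches the stereoParams syntax.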

Examples

Load stereo parameters.

load('webcamsSceneReconstruction.mat');

Read in the stereo pair of images.

I1 = imread('sceneReconstructionLeft.jpg');
I2 = imread('sceneReconstructionRight.jpg');

Undistort the images.

I1 = undistortImage(I1,stereoParams.CameraParameters1);
I2 = undistortImage(I2,stereoParams.CameraParameters2);

Detect a face in both images.

faceDetector = vision.CascadeObjectDetector;
face1 = faceDetector(I1);
face2 = faceDetector(I2);

Find the center of the face.

center1 = face1(1:2) + face1(3:4)/2;
center2 = face2(1:2) + face2(3:4)/2;

Compute the distance from camera 1 to the face.

point3d = triangulate(center1, center2, stereoParams);
distanceInMeters = norm(point3d)/1000;

Display the detected face and distance.

distanceAsString = sprintf('%0.2f meters', distanceInMeters);
I1 = insertObjectAnnotation(I1,'rectangle',face1,distanceAsString,'FontSize',18);
I2 = insertObjectAnnotation(I2,'rectangle',face2, distanceAsString,'FontSize',18);
I1 = insertShape(I1,'FilledRectangle',face1);
I2 = insertShape(I2,'FilledRectangle',face2);

imshowpair(I1, I2, 'montage');

Input Arguments

matchedPoints1 — Coordinates of points in image 1, specified as an M-by-2 matrix of [x y] coordinates, or as a SURFPoints, MSERRegions, cornerPoints, or BRISKPoints object. The matchedPoints1 and matchedPoints2 inputs must contain points that are matched using a function such as matchFeatures.

matchedPoints2 — Coordinates of points in image 2, specified as an M-by-2 matrix of [x y] coordinates, or as a SURFPoints, MSERRegions, cornerPoints, or BRISKPoints object. The matchedPoints1 and matchedPoints2 inputs must contain points that are matched using a function such as matchFeatures.

stereoParams — Camera parameters for the stereo system, specified as a stereoParameters object. The object contains the intrinsic, extrinsic, and lens distortion parameters of the stereo camera system. You can use the estimateCameraParameters function to estimate camera parameters and return a stereoParameters object.

When you pass a stereoParameters object to the function, the origin of the world coordinate system is located at the optical center of camera 1. The x-axis points to the right, the y-axis points down, and the z-axis points away from the camera.

cameraMatrix1 — Projection matrix for camera 1, specified as a 4-by-3 matrix. The matrix maps a 3-D point in homogeneous coordinates onto the corresponding point in the image of camera 1. This input describes the location and orientation of camera 1 in the world coordinate system. cameraMatrix1 must be a real, nonsparse numeric matrix. You can obtain the camera matrix using the cameraMatrix function.

cameraMatrix2 — Projection matrix for camera 2, specified as a 4-by-3 matrix. The matrix maps a 3-D point in homogeneous coordinates onto the corresponding point in the image of camera 2. This input describes the location and orientation of camera 2 in the world coordinate system. cameraMatrix2 must be a real, nonsparse numeric matrix. You can obtain the camera matrix using the cameraMatrix function.

The camera matrices passed to the function define the world coordinate system.

Output Arguments

worldPoints — 3-D locations of matching pairs of undistorted image points, returned as an M-by-3 matrix. The matrix contains the M [x, y, z] locations of the matching pairs of undistorted image points from the two stereo images.

When you specify the camera geometry using stereoParameters, the world point coordinates are relative to the optical center of camera 1.

When you specify the camera geometry using cameraMatrix1 and cameraMatrix2, the world point coordinates are defined by the camera matrices.

The function returns worldPoints as double if matchedPoints1 and matchedPoints2 are double. Otherwise, the function returns worldPoints as single.

Data Types: single | double

reprojectionErrors — Reprojection errors, returned as an M-by-1 vector. The function projects each world point back into both images. Then, in each image, the function calculates the reprojection error as the distance between the detected point and the reprojected point. The reprojectionErrors vector contains the average reprojection error for each world point.
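One common use of the reprojection errors is to discard unreliable triangulations. The sketch below assumes matchedPoints1, matchedPoints2, and stereoParams already exist; the 1-pixel threshold is an illustrative choice, not a recommendation:

```matlab
% Triangulate and keep only points whose average reprojection
% error is below a chosen pixel threshold.
[worldPoints, reprojErrors] = triangulate(matchedPoints1, ...
    matchedPoints2, stereoParams);
reliable = reprojErrors < 1;              % within 1 pixel on average
worldPoints = worldPoints(reliable, :);   % drop unreliable 3-D points
```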

Tips

The triangulate function does not account for lens distortion. You can undistort the images using the undistortImage function before detecting the points. Alternatively, you can undistort the points themselves using the undistortPoints function.
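When only a sparse set of matched points is needed, undistorting the points themselves avoids resampling the full images. A minimal sketch, assuming matchedPoints1 and matchedPoints2 are feature point objects from matchFeatures and stereoParams is a stereoParameters object:

```matlab
% Undistort the matched point coordinates rather than the images,
% then triangulate the corrected points.
points1 = undistortPoints(matchedPoints1.Location, ...
    stereoParams.CameraParameters1);
points2 = undistortPoints(matchedPoints2.Location, ...
    stereoParams.CameraParameters2);
worldPoints = triangulate(points1, points2, stereoParams);
```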

References

[1] Hartley, R., and A. Zisserman. Multiple View Geometry in Computer Vision. Cambridge University Press, 2003, p. 312.

Introduced in R2014b