3-D locations of world points matched across multiple images



xyzPoints = triangulateMultiview(pointTracks,cameraPoses,intrinsics) returns locations of 3-D world points that correspond to points matched across multiple images taken with calibrated cameras. pointTracks specifies an array of matched points. cameraPoses and intrinsics specify camera pose information and intrinsics, respectively. The function does not account for lens distortion.

[xyzPoints,reprojectionErrors] = triangulateMultiview(___) additionally returns the mean reprojection error for each 3-D world point using all input arguments in the prior syntax.



Load images into the workspace.

imageDir = fullfile(toolboxdir('vision'),'visiondata','structureFromMotion');
images = imageSet(imageDir);

Load precomputed camera parameters.

data = load(fullfile(imageDir,'cameraParams.mat'));

Get camera intrinsic parameters.

intrinsics = data.cameraParams.Intrinsics;

Compute features for the first image.

I = rgb2gray(read(images,1));
I = undistortImage(I,intrinsics);
pointsPrev = detectSURFFeatures(I);
[featuresPrev,pointsPrev] = extractFeatures(I,pointsPrev);

Load camera locations and orientations.

load(fullfile(imageDir,'cameraPoses.mat'));

Create an imageviewset object.

vSet = imageviewset;
vSet = addView(vSet,1,rigid3d(orientations(:,:,1),locations(1,:)),...
    'Points',pointsPrev);

Compute features and matches for the rest of the images.

for i = 2:images.Count
  I = rgb2gray(read(images,i));
  I = undistortImage(I,intrinsics);
  points = detectSURFFeatures(I);
  [features,points] = extractFeatures(I,points);
  vSet = addView(vSet,i,rigid3d(orientations(:,:,i),locations(i,:)),...
      'Points',points);
  pairsIdx = matchFeatures(featuresPrev,features,'MatchThreshold',5);
  vSet = addConnection(vSet,i-1,i,'Matches',pairsIdx);
  featuresPrev = features;
end

Find point tracks.

tracks = findTracks(vSet);

Get camera poses.

cameraPoses = poses(vSet);

Find 3-D world points.

[xyzPoints,errors] = triangulateMultiview(tracks,cameraPoses,intrinsics);
z = xyzPoints(:,3);
idx = errors < 5 & z > 0 & z < 20;
pcshow(xyzPoints(idx, :),'VerticalAxis','y','VerticalAxisDir','down','MarkerSize',30);
hold on
plotCamera(cameraPoses, 'Size', 0.2);
hold off

Input Arguments


Matched points across multiple images, specified as an N-element array of pointTrack objects. Each element contains two or more points that match across multiple images.
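For illustration, a pointTrack array can also be constructed directly from known correspondences; the view IDs and pixel coordinates below are hypothetical.

```matlab
% Hypothetical correspondences: one world point seen in views 1-3, and a
% second point seen only in views 2 and 3. Coordinates are made up.
track1 = pointTrack([1 2 3],[317.4 250.1; 318.9 251.7; 320.3 253.0]);
track2 = pointTrack([2 3],[100.5 200.2; 101.8 201.4]);
pointTracks = [track1; track2];
```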

Camera pose information, specified as a two-column or three-column table. You can obtain cameraPoses from an imageviewset object by using the poses object function.

Two-column Table

ViewID: View identifier in the pointTrack object, specified as an integer.
AbsolutePose: Absolute pose of the view, specified as a rigid3d object. You can obtain the absolute pose from an imageviewset object by using the poses object function.

Three-column Table

ViewID: View identifier in the pointTrack object, specified as an integer.
Orientation: Camera orientation, specified as a 3-by-3 rotation matrix.
Location: Camera location coordinates, specified as a three-element vector of the form [x, y, z] in world units.
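As a sketch, the two-column table can also be assembled by hand rather than obtained from poses. Note that the poses function returns the identifier column named ViewId; the pose values below are placeholders, not real estimates.

```matlab
% Sketch of a hand-built two-column camera pose table. The identity and
% unit-translation poses are placeholders for illustration only.
ViewId = uint32([1; 2]);
AbsolutePose = [rigid3d(eye(3),[0 0 0]); rigid3d(eye(3),[1 0 0])];
cameraPoses = table(ViewId,AbsolutePose);
```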

Camera intrinsics, specified as a cameraIntrinsics object or an M-element vector of cameraIntrinsics objects. M is the number of camera poses. When all images are captured by the same camera, specify one cameraIntrinsics object. When images are captured by different cameras, specify a vector of cameraIntrinsics objects.
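If calibrated parameters are not already available in a cameraParameters object, a cameraIntrinsics object can be constructed directly. The focal length, principal point, and image size below are nominal values made up for illustration.

```matlab
% Sketch: construct a cameraIntrinsics object from nominal values.
focalLength    = [800 800];    % [fx fy] in pixels (hypothetical)
principalPoint = [320 240];    % [cx cy] in pixels (hypothetical)
imageSize      = [480 640];    % [mrows ncols]  (hypothetical)
intrinsics = cameraIntrinsics(focalLength,principalPoint,imageSize);
```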

Output Arguments


3-D world points, returned as an N-by-3 matrix. Each row represents one 3-D world point and is of the form [x, y, z]. N is the number of 3-D world points.

Data Types: single | double

Reprojection errors, returned as an N-element vector. To calculate the reprojection errors, the function first projects each world point back into each image in which it was tracked. Then, in each image, the function calculates the distance between the detected point and the reprojected point. Each element of the reprojectionErrors output is the mean reprojection error for the corresponding world point in the xyzPoints output.

Figure: A point detected in an image of a checkerboard lies a small distance from the corresponding 3-D point reprojected into that image; this distance is the reprojection error.
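The per-view distance that gets averaged can be sketched as follows. Here detectedPoint stands in for a hypothetical 2-D measurement of the first world point in the first view; it is not defined in the example above.

```matlab
% Sketch: reproject one world point into one view and measure the pixel
% distance to its detected location. cameraPoseToExtrinsics converts the
% camera pose (camera-to-world) into extrinsics (world-to-camera).
absPose = cameraPoses.AbsolutePose(1);
[R,t] = cameraPoseToExtrinsics(absPose.Rotation,absPose.Translation);
projectedPoint = worldToImage(intrinsics,R,t,xyzPoints(1,:));
err = norm(projectedPoint - detectedPoint);  % detectedPoint is hypothetical
```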

Tips

Before detecting the points, correct the images for lens distortion by using the undistortImage function. Alternatively, you can undistort the points directly by using the undistortPoints function.
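For example, to undistort only the detected point locations rather than the whole image (a sketch, assuming a grayscale image I and intrinsics as in the example above):

```matlab
% Sketch: undistort detected point locations instead of the full image.
points = detectSURFFeatures(I);
undistortedLocations = undistortPoints(points.Location,intrinsics);
```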

References

[1] Hartley, Richard, and Andrew Zisserman. Multiple View Geometry in Computer Vision. 2nd ed. Cambridge, UK; New York: Cambridge University Press, 2003.

Introduced in R2016a