
triangulateMultiview

3-D locations of undistorted points matched across multiple images

Syntax

xyzPoints = triangulateMultiview(pointTracks,cameraPoses,cameraParams)
[xyzPoints,reprojectionErrors] = triangulateMultiview(pointTracks,cameraPoses,cameraParams)

Description


xyzPoints = triangulateMultiview(pointTracks,cameraPoses,cameraParams) returns locations of 3-D world points that correspond to points matched across multiple images taken with a calibrated camera.

[xyzPoints,reprojectionErrors] = triangulateMultiview(pointTracks,cameraPoses,cameraParams) also returns reprojection errors for the world points.

Code Generation Support:
Supports Code Generation: No
Supports MATLAB Function block: No

Examples

Find 3-D World Points Across Multiple Images

Load images.

imageDir = fullfile(toolboxdir('vision'),'visiondata',...
    'structureFromMotion');
images = imageSet(imageDir);

Load precomputed camera parameters.

load(fullfile(imageDir,'cameraParams.mat'));

Undistort the first image and compute its features.

I = rgb2gray(read(images,1));
I = undistortImage(I,cameraParams);
pointsPrev = detectSURFFeatures(I);
[featuresPrev,pointsPrev] = extractFeatures(I,pointsPrev);

Load camera locations and orientations.

load(fullfile(imageDir,'cameraPoses.mat'));

Create a viewSet object.

vSet = viewSet;
vSet = addView(vSet, 1,'Points',pointsPrev,'Orientation',...
    orientations(:,:,1),'Location',locations(1,:));

Compute features and matches for the rest of the images.

for i = 2:images.Count
  % Undistort the image and detect SURF features.
  I = rgb2gray(read(images, i));
  I = undistortImage(I, cameraParams);
  points = detectSURFFeatures(I);
  [features, points] = extractFeatures(I, points);

  % Add the view with its known pose, then match features to the previous view.
  vSet = addView(vSet,i,'Points',points,'Orientation',...
      orientations(:,:,i),'Location',locations(i,:));
  pairsIdx = matchFeatures(featuresPrev,features,'MatchThreshold',5);
  vSet = addConnection(vSet,i-1,i,'Matches',pairsIdx);
  featuresPrev = features;
end

Find point tracks.

tracks = findTracks(vSet);
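
Each track lists the views that see a world point and the point's image locations in those views. For illustration, assuming at least one track was found, inspect the first one:

firstTrack = tracks(1);
viewIDs = firstTrack.ViewIds;      % views in which the point appears
imagePoints = firstTrack.Points;   % corresponding [x y] image locations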

Get camera poses.

cameraPoses = poses(vSet);

Find 3-D world points.

[xyzPoints,errors] = triangulateMultiview(tracks,cameraPoses,cameraParams);

% Keep points with low reprojection error within a plausible depth range.
z = xyzPoints(:,3);
idx = errors < 5 & z > 0 & z < 20;

% Display the filtered points along with the camera poses.
pcshow(xyzPoints(idx, :),'VerticalAxis','y','VerticalAxisDir','down','MarkerSize',30);
hold on
plotCamera(cameraPoses, 'Size', 0.1);
hold off

Input Arguments


pointTracks — Matching points across multiple images, specified as an N-element array of pointTrack objects. Each element contains two or more points that match across multiple images.
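
For illustration, you can also construct a single track directly from view IDs and matched image points. The coordinates below are arbitrary placeholder values:

% Hypothetical track: one world point seen in views 1, 2, and 3.
viewIDs = [1 2 3];
points = [110.5 235.2; 112.1 236.0; 113.8 236.7];   % one [x y] location per view
track = pointTrack(viewIDs,points);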

cameraPoses — Camera pose information, specified as a three-column table. The table contains columns for ViewId, Orientation, and Location. The view IDs correspond to the view IDs in the pointTracks input. Specify the orientations as 3-by-3 rotation matrices and the locations as three-element vectors. You can obtain cameraPoses from a viewSet object by using its poses method.
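
If you do not use a viewSet object, you can assemble the table yourself. A minimal sketch with two views; the orientations and locations are illustrative placeholders:

% Hypothetical poses for two views.
ViewId = uint32([1; 2]);
Orientation = {eye(3); eye(3)};    % one 3-by-3 rotation matrix per view
Location = {[0 0 0]; [1 0 0]};     % one three-element location vector per view
cameraPoses = table(ViewId,Orientation,Location);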

cameraParams — Camera parameters, specified as a cameraParameters or cameraIntrinsics object. You can obtain a cameraParameters object by using the estimateCameraParameters function. The cameraParameters object contains the intrinsic, extrinsic, and lens distortion parameters of a camera.
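
If the intrinsics are already known, you can construct a cameraIntrinsics object directly. The values below are placeholders, not calibration results:

% Illustrative intrinsics; use values from your own calibration.
focalLength = [800 800];        % [fx fy] in pixels
principalPoint = [320 240];     % [cx cy] in pixels
imageSize = [480 640];          % [mrows ncols]
intrinsics = cameraIntrinsics(focalLength,principalPoint,imageSize);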

Output Arguments


xyzPoints — 3-D world points, returned as an N-by-3 array of [x,y,z] coordinates.

Data Types: single | double

reprojectionErrors — Reprojection errors, returned as an N-by-1 vector. The function projects each world point back into each image in which it is visible. Then, in each image, the function calculates the reprojection error as the distance between the detected and the reprojected point. The reprojectionErrors vector contains the average reprojection error for each world point.
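
For example, you can use the errors to keep only well-triangulated points. The 1-pixel threshold below is illustrative:

% Keep points whose average reprojection error is below 1 pixel.
validIdx = reprojectionErrors < 1;
xyzValid = xyzPoints(validIdx,:);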

Tips

The triangulateMultiview function does not account for lens distortion. You can undistort the images before detecting the points by using undistortImage. Alternatively, you can undistort the points directly by using undistortPoints.
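
As a sketch of the second approach, assuming points detected in a distorted image I and calibration parameters cameraParams:

% Undistort detected point locations instead of the whole image.
points = detectSURFFeatures(I);
undistortedLocations = undistortPoints(points.Location,cameraParams);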

References

[1] Hartley, R., and A. Zisserman. Multiple View Geometry in Computer Vision. Cambridge University Press, 2003, p. 312.

Introduced in R2016a