Find Image Rotation and Scale Using Automated Feature Matching

This example shows how to automatically determine the geometric transformation between a pair of images. When one image is distorted relative to another by rotation and scale, use detectSURFFeatures and estimateGeometricTransform to find the rotation angle and scale factor. You can then transform the distorted image to recover the original image.

Step 1: Read Image

Bring an image into the workspace.

original = imread('cameraman.tif');
imshow(original);
text(size(original,2),size(original,1)+15, ...
    'Image courtesy of Massachusetts Institute of Technology', ...
    'FontSize',7,'HorizontalAlignment','right');

Step 2: Resize and Rotate the Image

scale = 0.7;
J = imresize(original, scale); % Try varying the scale factor.

theta = 30;
distorted = imrotate(J,theta); % Try varying the angle, theta.
figure, imshow(distorted)

You can experiment by varying the scale and rotation of the input image. However, note that there is a limit to the amount you can vary the scale before the feature detector fails to find enough features.
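
For instance, a minimal sketch like the following (purely illustrative) counts the SURF features detected at several scales, which can help you find the point at which too few features survive:

% Hypothetical sanity check: count the SURF features detected at
% several scales to see where the detector starts to run out.
for s = [1.0 0.7 0.4 0.2 0.1]
    pts = detectSURFFeatures(imresize(original, s));
    fprintf('scale %.1f: %d SURF features\n', s, pts.Count);
end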

Step 3: Find Matching Features Between Images

Detect features in both images.

ptsOriginal  = detectSURFFeatures(original);
ptsDistorted = detectSURFFeatures(distorted);

Extract feature descriptors.

[featuresOriginal,  validPtsOriginal]  = extractFeatures(original,  ptsOriginal);
[featuresDistorted, validPtsDistorted] = extractFeatures(distorted, ptsDistorted);

Match features by using their descriptors.

indexPairs = matchFeatures(featuresOriginal, featuresDistorted);
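
If too few putative matches come back, matchFeatures accepts tuning options. A hedged sketch, with illustrative values rather than recommendations:

% Illustrative tuning only: loosen the match criteria to admit more
% (possibly noisier) putative matches. Both are documented options.
indexPairs = matchFeatures(featuresOriginal, featuresDistorted, ...
    'MatchThreshold', 10, 'MaxRatio', 0.7);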

Retrieve locations of corresponding points for each image.

matchedOriginal  = validPtsOriginal(indexPairs(:,1));
matchedDistorted = validPtsDistorted(indexPairs(:,2));

Show putative point matches.

figure;
showMatchedFeatures(original,distorted,matchedOriginal,matchedDistorted);
title('Putatively matched points (including outliers)');

Step 4: Estimate Transformation

Find a transformation corresponding to the matching point pairs using the statistically robust M-estimator SAmple Consensus (MSAC) algorithm, which is a variant of the RANSAC algorithm. It removes outliers while computing the transformation matrix. You may see varying results of the transformation computation because of the random sampling employed by the MSAC algorithm.

[tform, inlierDistorted, inlierOriginal] = estimateGeometricTransform(...
    matchedDistorted, matchedOriginal, 'similarity');
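
Because MSAC samples point pairs at random, repeated runs can produce slightly different transformations. One way to make a run repeatable, sketched here with illustrative option values, is to seed the random number generator first:

rng(0);  % fix the random seed so the MSAC result is repeatable
[tform, inlierDistorted, inlierOriginal] = estimateGeometricTransform(...
    matchedDistorted, matchedOriginal, 'similarity', ...
    'MaxNumTrials', 2000, 'Confidence', 99.5);  % illustrative values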

Display matching point pairs used in the computation of the transformation.

figure;
showMatchedFeatures(original,distorted,inlierOriginal,inlierDistorted);
title('Matching points (inliers only)');
legend('ptsOriginal','ptsDistorted');

Step 5: Solve for Scale and Angle

Use the geometric transform, tform, to recover the scale and angle. Because tform maps the distorted image to the original image, compute its inverse to recover the distortion.

Let sc = s*cos(theta)
Let ss = s*sin(theta)
Then, Tinv = [sc -ss  0;
              ss  sc  0;
              tx  ty  1]
where tx and ty are x and y translations, respectively.
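
As a quick numeric check of these formulas (the values are arbitrary, chosen to mirror Step 2), build a matrix in this layout and confirm that the extraction recovers the inputs:

s  = 0.7;  th = 30;                  % known scale and angle, in degrees
sc = s*cosd(th);  ss = s*sind(th);
T  = [sc -ss 0; ss sc 0; 0 0 1];     % same layout as Tinv, no translation
sqrt(T(2,1)^2 + T(1,1)^2)            % recovers 0.7000
atan2d(T(2,1), T(1,1))               % recovers 30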

Compute the inverse transformation matrix.

Tinv  = tform.invert.T;

ss = Tinv(2,1);
sc = Tinv(1,1);
scaleRecovered = sqrt(ss*ss + sc*sc)
thetaRecovered = atan2(ss,sc)*180/pi

scaleRecovered =

    0.7002


thetaRecovered =

   29.9224

The recovered values should match the scale and angle values you selected in Step 2: Resize and Rotate the Image.
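
If you script this experiment, a tolerance check along these lines (the thresholds are arbitrary) can flag runs where MSAC converged poorly:

% Hypothetical tolerances: 2% of the scale and 0.5 degrees.
assert(abs(scaleRecovered - scale) < 0.02*scale, 'scale estimate is off');
assert(abs(thetaRecovered - theta) < 0.5, 'angle estimate is off');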

Step 6: Recover the Original Image

Recover the original image by transforming the distorted image.

outputView = imref2d(size(original));
recovered  = imwarp(distorted,tform,'OutputView',outputView);

Compare the recovered image to the original by viewing them side-by-side in a montage.

figure, imshowpair(original,recovered,'montage')

The quality of the recovered (right) image does not match that of the original (left) image because of the distortion and recovery process. In particular, shrinking the image causes a loss of information. The artifacts around the edges are due to the limited accuracy of the transformation. If you were to detect more points in Step 3: Find Matching Features Between Images, the transformation would be more accurate. For example, you could use a corner detector, detectFASTFeatures, to complement the SURF feature detector, which finds blobs. Image content and image size also impact the number of detected features.
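
A hedged sketch of that corner-plus-blob idea follows. It reuses the matched SURF points from Step 3; the new variable names are illustrative. Because the binary FREAK descriptors extracted for FAST corners cannot be matched against SURF descriptors, each detector's features are matched separately and the matched locations are pooled before re-estimating the transformation.

% Detect corner-like features to complement the SURF blobs.
fastOriginal  = detectFASTFeatures(original);
fastDistorted = detectFASTFeatures(distorted);

% FAST corners get binary FREAK descriptors by default; match them
% separately from the SURF descriptors.
[fFO, vFO] = extractFeatures(original,  fastOriginal);
[fFD, vFD] = extractFeatures(distorted, fastDistorted);
fastPairs  = matchFeatures(fFO, fFD);

% Pool the matched (x,y) locations from both detectors, then re-estimate.
locOriginal  = [matchedOriginal.Location;  vFO.Location(fastPairs(:,1),:)];
locDistorted = [matchedDistorted.Location; vFD.Location(fastPairs(:,2),:)];
tformBoth = estimateGeometricTransform(locDistorted, locOriginal, 'similarity');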

Video Mosaicking

This example shows how to create a mosaic from a video sequence. Video mosaicking is the process of stitching video frames together to form a comprehensive view of the scene. The resulting mosaic image is a compact representation of the video data, which is often used in video compression and surveillance applications.

Introduction

This example illustrates how to use detectFASTFeatures, extractFeatures, matchFeatures, and estimateGeometricTransform to create a mosaic image from a video sequence. First, the example identifies the corners in the first (reference) and second video frames. Then, it calculates the affine transformation matrix that best describes the transformation between corner positions in these frames. Finally, the example overlays the second image onto the first image. The example repeats this process to create a mosaic image of the video scene.

Initialization

Define the size and location of the output mosaic image.

[w, h]     = deal(680, 400);  % Size of the mosaic
[x0, y0]   = deal(-5, -60);   % Upper-left corner of the mosaic
xLim = [0.5, w+0.5] + x0;
yLim = [0.5, h+0.5] + y0;
outputView = imref2d([h,w], xLim, yLim);
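
If you change these values, the contains method of imref2d offers a quick, optional way to confirm that a world point of interest still falls inside the mosaic:

% Optional check: is the world point (1,1) inside the mosaic's limits?
tf = contains(outputView, 1, 1)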

Create a VideoFileReader System object to read video from a file.

hsrc = vision.VideoFileReader('vipmosaicking.avi', 'ImageColorSpace', ...
    'RGB', 'PlayCount', 1);

Create an AlphaBlender System object to overlay consecutive video frames and produce the mosaic.

halphablender = vision.AlphaBlender( ...
    'Operation', 'Binary mask', 'MaskSource', 'Input port');

Create two VideoPlayer System objects, one to display the corners of each frame and the other to draw the mosaic.

hVideo1 = vision.VideoPlayer('Name', 'Corners');
hVideo1.Position(1) = hVideo1.Position(1) - 350;

hVideo2 = vision.VideoPlayer('Name', 'Mosaic');
hVideo2.Position(1) = hVideo1.Position(1) + 400;
hVideo2.Position([3 4]) = [750 500];

Initialize the variables used in the processing loop below.

points = cornerPoints(zeros(0, 2));
features = binaryFeatures(zeros([0 64], 'uint8'));
failedToMatchPoints = true; % A new mosaic will be created if
                            % failedToMatchPoints is true

Stream Processing Loop

Create a processing loop to build the mosaic from the input video. This loop uses the System objects you instantiated above.

while ~isDone(hsrc)
    % Save the points and features computed from the previous image
    pointsPrev   = points;
    featuresPrev = features;

    % To speed up mosaicking, read five frames and process only the last
    for i = 1:5
        rgb = step(hsrc);
        if isDone(hsrc)
            break;
        end
    end

    % Convert the image from RGB to intensity.
    I = rgb2gray(rgb);

    % Detect corners in the image
    corners = detectFASTFeatures(I);

    % Extract FREAK feature vectors for the corners
    [features, points] = extractFeatures(I, corners);

    % Match features computed from the current and the previous images
    indexPairs = matchFeatures(features, featuresPrev);

    % Check if there are enough corresponding points in the current and the
    % previous images
    if size(indexPairs, 1) > 2
        matchedPoints     = points(indexPairs(:, 1), :);
        matchedPointsPrev = pointsPrev(indexPairs(:, 2), :);

        % Find corresponding locations in the current and the previous
        % images, and compute a geometric transformation from the
        % corresponding locations
        [tform, ~, ~, failedToMatchPoints] = estimateGeometricTransform(...
            matchedPoints, matchedPointsPrev, 'affine');
    end

    if failedToMatchPoints
        % If the current image does not match the previous one, reset the
        % transformation and the mosaic image
        xtform = eye(3);
        mosaic = zeros(h, w, 3, 'single');
    else
        % If the current image matches with the previous one, compute the
        % transformation for mapping the current image onto the mosaic
        % image
        xtform = xtform * tform.T;
    end

    % Display the current image and the corner points
    cornerImage = insertMarker(rgb, corners.Location, 'Color', 'red');
    step(hVideo1, cornerImage);

    % Create a mask that specifies the region of the transformed image.
    mask = imwarp(ones(size(I)), affine2d(xtform), 'OutputView', outputView) >= 1;

    % Warp the current image onto the mosaic image
    transformedImage = imwarp(rgb, affine2d(xtform), 'OutputView', outputView);
    mosaic = step(halphablender, mosaic, transformedImage, mask);
    step(hVideo2, mosaic);
end

release(hsrc);

The Corners window displays the input video along with the detected corners, and the Mosaic window displays the mosaic created from the input video.
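
To keep the final composite, one option (the file name is illustrative) is to convert the single-precision mosaic and write it to disk:

% mosaic is single precision in [0,1]; convert it before writing.
imwrite(im2uint8(mosaic), 'mosaic.png');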
