Computer Vision System Toolbox

Image Search using Point Features

This example shows you how to search a set of images for an object, given a representative image of that object. It shows how to use a KDTreeSearcher object from the Statistics Toolbox™ to efficiently find nearest-neighbor matches between SURF points in the query image and those in the image collection. Because it uses SURF feature descriptors, this matching method is robust to in-plane rotation, some out-of-plane rotation, and changes in scale.

For another example that matches SURF features between images, see Detect Objects in a Cluttered Scene Using Point Feature Matching.

Step 1: Prepare Collection of Images to Search

Read the set of reference images, each containing a different object. Multiple views of the same object could be included in this collection to capture areas that are hidden or occluded in a single view.

% Several example images
imageNames = {'elephant.jpg','cameraman.tif','peppers.png','saturn.png',...
    'pears.png','stapleRemover.jpg','football.jpg','mandi.tif',...
    'kids.tif','liftingbody.png','office_5.jpg','gantrycrane.png',...
    'moon.tif','circuit.tif','tape.png','coins.png'};

% Initialize structure for images and associated information
numImages = numel(imageNames);
emptyEntry = struct('image',[],'thumbnail',[]);
imageCollection = repmat(emptyEntry,[1 numImages]);
thumbnailSize = 400;
for i = 1:numel(imageCollection)
    imageCollection(i).image = imread(imageNames{i});
    % Convert color images to gray scale
    if size(imageCollection(i).image,3)==3
        imageCollection(i).image = rgb2gray(imageCollection(i).image);
    end
    % Store scaled versions of images for display
    imageCollection(i).thumbnail = imresize(imageCollection(i).image,...
        [thumbnailSize,thumbnailSize]);
end
figure
montage(cat(4,imageCollection.thumbnail));
title('Image Collection');

Step 2: Detect Feature Points in Image Collection

Detect and display feature points in the first image. Using local features serves two purposes: it makes the search more robust to changes in scale and orientation, and it reduces the amount of data that must be stored and analyzed.

examplePoints = detectSURFFeatures(imageCollection(1).image);
figure; imshow(imageCollection(1).image);
title('100 Strongest Feature Points from First Collection Image');
hold on;
plot(examplePoints.selectStrongest(100));

Detect features in the entire image collection.

for k = 1:numel(imageCollection)
    % Detect SURF feature points
    imageCollection(k).points = detectSURFFeatures(imageCollection(k).image,...
        'MetricThreshold',600);
    % Extract SURF descriptors
    [imageCollection(k).featureVectors,imageCollection(k).validPoints] = ...
        extractFeatures(imageCollection(k).image,imageCollection(k).points);

    % Save the number of features in each image for indexing
    imageCollection(k).featureCount = size(imageCollection(k).featureVectors,1);
end

Step 3: Build Feature Dataset

Combine the features from all of the images into a single matrix. Use this matrix to initialize a KDTreeSearcher object from the Statistics Toolbox™. This object enables fast nearest-neighbor searches in high-dimensional data. In this case, a nearest neighbor of a SURF descriptor may be another view of the same point.

% Combine all features into dataset
featureDataset = double(vertcat(imageCollection.featureVectors));

% instantiate a kd tree
imageFeatureKDTree = KDTreeSearcher(featureDataset);
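To see what the kd-tree search computes, here is a brute-force equivalent in plain MATLAB, with no toolbox dependencies. The descriptor values below are invented for illustration; KDTreeSearcher performs the same lookup but scales much better on large, high-dimensional datasets.

```matlab
% Brute-force nearest-neighbor search, shown only to illustrate what
% KDTreeSearcher and knnsearch compute. Descriptor values are made up.
descriptors = [0 0; 1 0; 0 1; 5 5];          % four 2-D "descriptors"
q = [0.9 0.1];                               % one query descriptor
dists = sqrt(sum((descriptors - q).^2, 2));  % Euclidean distances (R2016b+)
[sortedDists, order] = sort(dists);
nearest = order(1);  % closest row: 2, i.e. [1 0]
second  = order(2);  % second-closest row: 1, i.e. [0 0]
% knnsearch(KDTreeSearcher(descriptors), q, 'K', 2) returns the same
% indices and distances, just computed far faster via the kd-tree.
```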

Step 4: Choose Query Image

Load an image that contains the object to be identified, and select the object by specifying a bounding box that encloses it. Note that this image contains photos of the elephant and the staple remover taken from a different perspective; in other words, it is not part of the training set. When running this example on your own, you can experiment by choosing the staple remover instead of the elephant.

query.wholeImage = imread('clutteredDesk.jpg');
figure; axesHandle = axes; imshow(query.wholeImage); title('Query Image')
% Note that once the bounding box is displayed, you can move it with your mouse.
% For example, try choosing the staple remover.
rectangleHandle = imrect(axesHandle,[130 175 330 365]); % chooses the elephant

Step 5: Detect Feature Points in Query Image

Detect and display feature points in the query image.

% Consider only selected region
query.image = imcrop(query.wholeImage,getPosition(rectangleHandle));
% Detect SURF features
query.points = detectSURFFeatures(query.image,'MetricThreshold',600);
% Extract SURF descriptors
[query.featureVectors,query.points] = ...
    extractFeatures(query.image,query.points);

% Display feature points
figure; imshow(query.image);
title('100 Strongest Feature Points from Query Image');
hold on;
plot(query.points.selectStrongest(100));

Step 6: Search Image Collection for the Query Image

For each feature in the query image, find the two nearest neighbors in the dataset and compute the distance to each. Note that the knnsearch function returns the nearest neighbors even when none of the features are a close match. To discard those bad matches, use the ratio of the two closest neighbor distances. This technique is described in more detail in [1].

% Match each query feature to two (K=2) closest features in the dataset.
[matches, distance] = knnsearch(imageFeatureKDTree,query.featureVectors,'K',2);
% Matches contains K indices of the K nearest features in the dataset for each
% feature in the query image. The distance to the second nearest neighbor
% will be used for removing outliers.

Using histc, count the number of matched features that came from each image. Each consecutive pair of indices in indexIntervals below defines an index interval that corresponds to one image.

indexIntervals = [0, cumsum([imageCollection.featureCount])] + 1;
counts = histc(matches(:, 1), indexIntervals);

% Display count of nearest neighbor features and bins.
figure; bar(indexIntervals,counts,'histc')
title('Number of Nearest Neighbor Features from Each Image')
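The same interval arithmetic can map a single matched feature index back to its source image. A small sketch with hypothetical values; featureCounts and globalIdx below are made up, while in the example above the counts come from [imageCollection.featureCount].

```matlab
% Map a global feature index back to the image it came from.
% featureCounts and globalIdx are made-up values for illustration.
featureCounts  = [3 5 2];                        % features per image
indexIntervals = [0, cumsum(featureCounts)] + 1; % [1 4 9 11]
globalIdx = 7;                                   % a matched feature index
imageIdx  = find(globalIdx >= indexIntervals(1:end-1) & ...
                 globalIdx <  indexIntervals(2:end), 1);
% imageIdx is 2: feature 7 falls in interval [4,8], the second image.
```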

Visualize how strongly each image in the collection matches the query image. The size of each image below is proportional to its number of matching features.

minImageSize       = 20; % Size for images with no matches
imageScalingFactor = (thumbnailSize - minImageSize)/2;
whiteBackground    = intmax('uint8');

if max(counts)==0
    disp('No Features Matched')
else
    for i = 1:numel(imageCollection) % Scale each image
        % Compute a new display size; counts(i)==0 gives minImageSize
        newSize = round(counts(i) / max(counts) * imageScalingFactor) * 2 ...
            + minImageSize;
        scaledImage = imresize(imageCollection(i).thumbnail,[newSize,newSize]);
        imageCollection(i).imageHistogramView = padarray(scaledImage,...
            [(thumbnailSize-newSize)/2,(thumbnailSize-newSize)/2],...
            whiteBackground);
    end
    figure; montage(cat(4,imageCollection.imageHistogramView));
    title('Size Corresponds to Number of Nearest Neighbor Occurrences');
end

Observe that the image containing the desk is still considered a strong match. It is an outlier that will be eliminated in the next step.

Step 7: Eliminate Outliers Using Distance Tests

Many of the SURF features detected in the query image have no matching feature in the dataset. To prevent false matches, remove nearest-neighbor matches that are far from their query feature. Poorly matched features can be detected by comparing the distances to the first and second nearest neighbors: if the two distances are similar, as measured by their ratio, the match is rejected [1]. Additionally, reject matches whose absolute distance exceeds a threshold [2].

goodRatioMatches = distance(:,1) < distance(:,2) * .8; % Ratio Test [1]
goodDistanceMatches = distance(:,1) < .25;             % Distance threshold [2]

goodMatches = matches(goodDistanceMatches & goodRatioMatches,1);
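To see how the two tests interact, consider a small synthetic example. The distance values in d below are made up, not taken from the example data.

```matlab
% Synthetic illustration of the two outlier tests; values are invented.
d = [0.10 0.40;    % strong match: small distance, distinct second neighbor
     0.30 0.32;    % ambiguous: first and second neighbors too similar
     0.60 0.90];   % too far: fails the absolute-distance threshold
goodRatio    = d(:,1) < d(:,2) * 0.8;  % ratio test [1]
goodDistance = d(:,1) < 0.25;          % distance threshold [2]
keep = goodRatio & goodDistance;       % only the first row survives
```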

% Count the number of features matched from each image using the stored
% index intervals for the dataset matrix
counts = histc(goodMatches, indexIntervals);

% Visualize the matches.
minImageSize       = 20; % Size for images with no matches
imageScalingFactor = (thumbnailSize - minImageSize)/2;
whiteBackground    = intmax('uint8'); % White for uint8 images

if max(counts)==0
    disp('No Features Matched')
else
    for i = 1:numel(imageCollection) % Scale each image
        % Compute a new display size; counts(i)==0 gives minImageSize
        newSize = round(counts(i) / max(counts) * imageScalingFactor) * 2 ...
            + minImageSize;
        scaledImage = imresize(imageCollection(i).thumbnail,[newSize,newSize]);
        imageCollection(i).imageHistogramView = padarray(scaledImage,...
            [(thumbnailSize-newSize)/2,(thumbnailSize-newSize)/2],...
            whiteBackground);
    end
    figure; montage(cat(4,imageCollection.imageHistogramView));
    title('Size Corresponds to Number of Matched Features')
end

Notice the improvement in the results after the outliers were removed.

References

[1] D. G. Lowe, "Distinctive Image Features from Scale-Invariant Keypoints," International Journal of Computer Vision, vol. 60, no. 2, 2004.

[2] K. Mikolajczyk and C. Schmid, "A Performance Evaluation of Local Descriptors," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27, no. 10, 2005.