
invertedImageIndex class

Search index that maps visual words to images

Syntax

imageIndex = invertedImageIndex(bag)
imageIndex = invertedImageIndex(bag,'SaveFeatureLocations',tf)
imageIndex = invertedImageIndex(___,Name,Value)

Construction

imageIndex = invertedImageIndex(bag) returns a search index object that you can use with the retrieveImages function to search for an image. The object stores the visual word-to-image mapping based on the input bag, a bagOfFeatures object.

imageIndex = invertedImageIndex(bag,'SaveFeatureLocations',tf) optionally specifies whether or not to save the feature location data in imageIndex.

imageIndex = invertedImageIndex(___,Name,Value) uses additional options specified by one or more Name,Value pair arguments, using any of the preceding syntaxes.
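As an illustration, the syntaxes above can be combined as follows; the image file names here are placeholders, not files shipped with the toolbox:

```matlab
% Create a bag of visual words from a set of images (file names are
% placeholders; substitute your own images).
imgSet = imageSet({'image1.jpg','image2.jpg','image3.jpg'});
bag = bagOfFeatures(imgSet);

% Default construction: feature locations are saved with the index.
imageIndex = invertedImageIndex(bag);

% Skip feature locations to reduce memory consumption.
leanIndex = invertedImageIndex(bag,'SaveFeatureLocations',false);
```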

Input Arguments


bag - Bag of visual words, specified as a bagOfFeatures object.

tf - Save feature locations, specified as a logical scalar. When you set tf to true, the image feature locations are saved in the imageIndex output object. Use location data to verify the spatial or geometric image search results. If you do not require feature locations, set tf to false to reduce memory consumption.

Properties


ImageLocation - Indexed image locations, stored as a cell array of image file locations.

ImageWords - Visual words, stored as a 1-by-M vector of visualWords objects, one for each indexed image. Each visualWords object contains the WordIndex, Location, VocabularySize, and Count properties for the corresponding image.

WordFrequency - Word occurrence, stored as an M-by-1 vector containing the percentage of images in which each visual word occurs. These percentages are analogous to document frequency in text retrieval applications. It is often helpful to suppress the most common words, which reduces the search set when looking for the most relevant images. It is also helpful to suppress rare words, because they typically come from outliers in the image set.

You can control how much the top and bottom ends of the visual word distribution affect the search results by tuning the WordFrequencyRange property. A good way to set this value is to plot the sorted WordFrequency values.
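For example, assuming an imageIndex that has already indexed a set of images, the distribution can be inspected like this:

```matlab
% Sort and plot the per-word occurrence percentages to choose cutoffs
% for the WordFrequencyRange property.
sortedFreq = sort(imageIndex.WordFrequency,'descend');
figure
plot(sortedFreq)
xlabel('Visual word rank')
ylabel('Fraction of images containing the word')
```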

BagOfFeatures - Bag of visual words, stored as the bagOfFeatures object used to create the index.

MatchThreshold - Percentage of similar words required between a query and a potential image match, specified as a numeric value in the range [0, 1]. To obtain more search results, lower this threshold.

WordFrequencyRange - Word frequency range, specified as a two-element vector of a lower and upper percentage, [lower upper]. Use the word frequency range to ignore common words (above the upper percentage) and rare words (below the lower percentage) within the image index. Such words often correspond to repeated patterns or outliers and can reduce search accuracy.
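A minimal sketch of tuning both retrieval properties on an existing index (the cutoff values are illustrative, not recommendations):

```matlab
% Ignore visual words that occur in fewer than 1% or more than 90%
% of the indexed images.
imageIndex.WordFrequencyRange = [0.01 0.9];

% Lower the match threshold to return more candidate matches.
imageIndex.MatchThreshold = 0.005;
```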

Methods

addImages - Add new images to image index
removeImages - Remove images from image index
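For example, to drop images from an existing index by their numeric identifiers:

```matlab
% Remove the images with IDs 1 and 2 from the search index.
removeImages(imageIndex,[1 2]);
```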

Examples


Define a set of images to search.

imageFiles = ...
  {'elephant.jpg', 'cameraman.tif', ...
   'peppers.png',  'saturn.png',...
   'pears.png',    'stapleRemover.jpg', ...
   'football.jpg', 'mandi.tif',...
   'kids.tif',     'liftingbody.png', ...
   'office_5.jpg', 'gantrycrane.png',...
   'moon.tif',     'circuit.tif', ...
   'tape.png',     'coins.png'};

imgSet = imageSet(imageFiles);

Learn the visual vocabulary.

bag = bagOfFeatures(imgSet,'PointSelection','Detector',...
  'VocabularySize',1000);
Creating Bag-Of-Features.
-------------------------
* Image category 1: <undefined>
* Selecting feature point locations using the Detector method.
* Extracting SURF features from the selected feature point locations.
** detectSURFFeatures is used to detect key points for feature extraction.

* Extracting features from 16 images in image set 1...done. Extracted 3680 features.

* Keeping 80 percent of the strongest features from each category.

* Balancing the number of features across all image categories to improve clustering.
** Image category 1 has the least number of strongest features: 2944.
** Using the strongest 2944 features from each of the other image categories.

* Using K-Means clustering to create a 1000 word visual vocabulary.
* Number of features          : 2944
* Number of clusters (K)      : 1000

* Initializing cluster centers...100.00%.
* Clustering...completed 10/100 iterations (~0.26 seconds/iteration)...converged in 10 iterations.

* Finished creating Bag-Of-Features

Create an image search index and add images.

imageIndex = invertedImageIndex(bag);

addImages(imageIndex, imgSet);
Encoding images using Bag-Of-Features.
--------------------------------------
* Image category 1: <undefined>
* Encoding 16 images from image set 1...done.

* Finished encoding images.

Specify a query image and an ROI to search for the target object, elephant.

queryImage = imread('clutteredDesk.jpg');
queryROI = [130 175 330 365]; 

figure
imshow(queryImage)
rectangle('Position',queryROI,'EdgeColor','yellow')

You can also use the imrect function to select an ROI interactively. For example, queryROI = getPosition(imrect).

Find images that contain the object.

imageIDs = retrieveImages(queryImage,imageIndex,'ROI',queryROI)
imageIDs = 

     1
    11
     2
     6
     3
    12
     8
    14
     9
    13

bestMatch = imageIDs(1);

figure
imshow(imageIndex.ImageLocation{bestMatch})
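retrieveImages can also return similarity scores alongside the ranked image IDs, which is useful for filtering out weak matches:

```matlab
% Retrieve ranked image IDs together with their similarity scores.
[imageIDs,scores] = retrieveImages(queryImage,imageIndex,'ROI',queryROI);
```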

References

Sivic, J., and A. Zisserman. "Video Google: A Text Retrieval Approach to Object Matching in Videos." Proceedings of the Ninth IEEE International Conference on Computer Vision (ICCV), 2003, pp. 1470-1477.

Philbin, J., O. Chum, M. Isard, J. Sivic, and A. Zisserman. "Object Retrieval with Large Vocabularies and Fast Spatial Matching." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2007.

Introduced in R2015a