
Point Feature Types

Image feature detection is a building block of many computer vision tasks, such as image registration, tracking, and object detection. The Computer Vision System Toolbox™ includes a variety of functions for image feature detection. These functions return points objects that store information specific to particular types of features, including (x,y) coordinates (in the Location property). You can pass a points object from a detection function to a variety of other functions that require feature points as inputs. The algorithm that a detection function uses determines the type of points object it returns.
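The detect-then-use workflow can be sketched as follows. This is a hedged example, assuming the Image Processing Toolbox sample image cameraman.tif is on the path; the detector and the number of points kept are arbitrary choices for illustration.

```matlab
% Detect corners, keep the strongest, and read their coordinates.
I = imread('cameraman.tif');
points = detectHarrisFeatures(I);        % returns a cornerPoints object
strongest = points.selectStrongest(20);  % keep the 20 strongest corners

imshow(I); hold on;
plot(strongest);                         % overlay the detected corners

locs = strongest.Location;               % M-by-2 matrix of [x y] coordinates
```

The same pattern applies to the other detectors; only the detection function and the class of the returned points object change.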

Functions That Return Points Objects

Points Object: cornerPoints
Returned By:
  detectFASTFeatures — Features from accelerated segment test (FAST) algorithm. Uses an approximate metric to determine corners. [1]
  detectMinEigenFeatures — Minimum eigenvalue algorithm. Uses the minimum eigenvalue metric to determine corner locations. [4]
  detectHarrisFeatures — Harris-Stephens algorithm. More efficient than the minimum eigenvalue algorithm. [3]
Type of Feature: Corners. Single-scale detection. Suited to point tracking, image registration with little or no scale change, and corner detection in scenes of human origin, such as streets and indoor scenes.

Points Object: BRISKPoints
Returned By: detectBRISKFeatures — Binary Robust Invariant Scalable Keypoints (BRISK) algorithm. [6]
Type of Feature: Corners. Multiscale detection. Suited to point tracking and image registration; handles changes in scale and rotation; corner detection in scenes of human origin, such as streets and indoor scenes.

Points Object: SURFPoints
Returned By: detectSURFFeatures — Speeded-Up Robust Features (SURF) algorithm. [11]
Type of Feature: Blobs. Multiscale detection. Suited to object detection and image registration with scale and rotation changes.

Points Object: KAZEPoints
Returned By: detectKAZEFeatures — KAZE algorithm. KAZE is not an acronym, but a name derived from the Japanese word kaze, which means wind; the reference is to the flow of air governed by nonlinear processes on a large scale. [12]
Type of Feature: Multiscale blob features. Reduced blurring of object boundaries.

Points Object: MSERRegions
Returned By: detectMSERFeatures — Maximally stable extremal regions (MSER) algorithm. [7] [8] [9] [10]
Type of Feature: Regions of uniform intensity. Multiscale detection. Suited to registration, wide-baseline stereo calibration, text detection, and object detection; handles changes in scale and rotation; more robust to affine transforms than other detectors.
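The claim that multiscale detectors handle scale change can be sketched as follows. This is a hedged example: the sample image, the 1.3x scale factor, and the use of imresize to simulate the change are all illustrative choices, not part of any prescribed workflow.

```matlab
% Match SURF features between an image and a scaled copy of itself.
I1 = imread('cameraman.tif');
I2 = imresize(I1, 1.3);                  % simulate a scale change

pts1 = detectSURFFeatures(I1);
pts2 = detectSURFFeatures(I2);

[f1, vpts1] = extractFeatures(I1, pts1); % descriptors + valid points
[f2, vpts2] = extractFeatures(I2, pts2);

idxPairs = matchFeatures(f1, f2);        % indices of matching descriptors
showMatchedFeatures(I1, I2, ...
    vpts1(idxPairs(:,1)), vpts2(idxPairs(:,2)));
```

A single-scale detector such as detectHarrisFeatures would typically produce far fewer correct matches under the same scale change.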

Functions That Accept Points Objects

relativeCameraPose — Compute relative rotation and translation between camera poses
estimateFundamentalMatrix — Estimate fundamental matrix from corresponding points in stereo images
estimateGeometricTransform — Estimate geometric transform from matching point pairs
estimateUncalibratedRectification — Uncalibrated stereo rectification
extractFeatures — Extract interest point descriptors. The feature vector depends on the method:
BRISK, FREAK, SURF — The function sets the Orientation property of the validPoints output object to the orientation of the extracted features, in radians.

When you use an MSERRegions object with the SURF method, the function uses the Centroid property of the object as the point locations for extracting SURF descriptors. The Axes property of the object sets the scale of the SURF descriptors such that the circle representing the feature has an area proportional to the MSER ellipse area. The scale is calculated as 1/4*sqrt((majorAxes/2).*(minorAxes/2)) and saturated to 1.6, as required by the SURFPoints object.
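The scale formula above can be illustrated numerically. The axis lengths below are hypothetical values chosen for the example, not outputs of any detector.

```matlab
% Illustration of the stated MSER-to-SURF scale formula.
majorAxis = 24;   % hypothetical MSER ellipse major axis length
minorAxis = 12;   % hypothetical MSER ellipse minor axis length

scale = 1/4 * sqrt((majorAxis/2) .* (minorAxis/2));  % approx. 2.12 here
scale = max(scale, 1.6);   % saturate at 1.6, the SURFPoints minimum scale
```

For small regions the computed value falls below 1.6 and the saturation clamps it to the minimum scale that a SURFPoints object accepts.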

KAZE — Nonlinear pyramid-based features. The function sets the Orientation property of the validPoints output object to the orientation of the extracted features, in radians.

When you use an MSERRegions object with the KAZE method, the function uses the Location property of the object to extract KAZE descriptors. The Axes property of the object sets the scale of the KAZE descriptors such that the circle representing the feature has an area proportional to the MSER ellipse area.

Block — Simple square neighborhood. The Block method extracts only the neighborhoods fully contained within the image boundary. Therefore, the output, validPoints, can contain fewer points than the input points.

Auto — The function selects the method based on the class of the input points:
  The FREAK method for a cornerPoints input object.
  The SURF method for a SURFPoints or MSERRegions input object.
  The FREAK method for a BRISKPoints input object.
For an M-by-2 input matrix of [x y] coordinates, the function uses the Block method.
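The matrix-input case can be sketched as follows. This is a hedged example: the coordinates are arbitrary, and passing 'Method','Block' explicitly is optional here, since Auto already selects Block for a plain coordinate matrix.

```matlab
% Extract descriptors at hand-picked locations using the Block method.
I = imread('cameraman.tif');
pts = [50 60; 120 80; 200 150];     % hypothetical M-by-2 [x y] coordinates

[features, validPts] = extractFeatures(I, pts, 'Method', 'Block');

% validPts can have fewer rows than pts: any point whose square
% neighborhood extends past the image boundary is dropped.
```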

extractHOGFeatures — Extract histogram of oriented gradients (HOG) features
insertMarker — Insert markers in image or video
showMatchedFeatures — Display corresponding feature points
triangulate — Find 3-D locations of undistorted matching points in stereo images
undistortPoints — Correct point coordinates for lens distortion

References

[1] Rosten, E., and T. Drummond, “Machine Learning for High-Speed Corner Detection.” 9th European Conference on Computer Vision. Vol. 1, 2006, pp. 430–443.

[2] Mikolajczyk, K., and C. Schmid. “A performance evaluation of local descriptors.” IEEE Transactions on Pattern Analysis and Machine Intelligence. Vol. 27, Issue 10, 2005, pp. 1615–1630.

[3] Harris, C., and M. J. Stephens. “A Combined Corner and Edge Detector.” Proceedings of the 4th Alvey Vision Conference. August 1988, pp. 147–152.

[4] Shi, J., and C. Tomasi. “Good Features to Track.” Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. June 1994, pp. 593–600.

[5] Tuytelaars, T., and K. Mikolajczyk. “Local Invariant Feature Detectors: A Survey.” Foundations and Trends in Computer Graphics and Vision. Vol. 3, Issue 3, 2007, pp. 177–280.

[6] Leutenegger, S., M. Chli, and R. Siegwart. “BRISK: Binary Robust Invariant Scalable Keypoints.” Proceedings of the IEEE International Conference on Computer Vision (ICCV). 2011.

[7] Nister, D., and H. Stewenius. "Linear Time Maximally Stable Extremal Regions." Lecture Notes in Computer Science. 10th European Conference on Computer Vision. Marseille, France: 2008, no. 5303, pp. 183–196.

[8] Matas, J., O. Chum, M. Urba, and T. Pajdla. "Robust wide-baseline stereo from maximally stable extremal regions." Proceedings of British Machine Vision Conference. 2002, pp. 384–396.

[9] Obdrzalek D., S. Basovnik, L. Mach, and A. Mikulik. "Detecting Scene Elements Using Maximally Stable Colour Regions." Communications in Computer and Information Science. La Ferte-Bernard, France: 2009, Vol. 82 CCIS (2010 12 01), pp 107–115.

[10] Mikolajczyk, K., T. Tuytelaars, C. Schmid, A. Zisserman, T. Kadir, and L. Van Gool. "A Comparison of Affine Region Detectors." International Journal of Computer Vision. Vol. 65, No. 1–2, November 2005, pp. 43–72.

[11] Bay, H., A. Ess, T. Tuytelaars, and L. Van Gool. “SURF: Speeded-Up Robust Features.” Computer Vision and Image Understanding (CVIU). Vol. 110, No. 3, 2008, pp. 346–359.

[12] Alcantarilla, P. F., A. Bartoli, and A. J. Davison. "KAZE Features." ECCV 2012, Part VI, LNCS 7577, 2012, p. 214.
