vision.PointTracker System object

Package: vision

Track points in video using Kanade-Lucas-Tomasi (KLT) algorithm

Description

The point tracker object tracks a set of points using the Kanade-Lucas-Tomasi (KLT) feature-tracking algorithm. You can use the point tracker for video stabilization, camera motion estimation, and object tracking. It works particularly well for tracking objects that do not change shape and for those that exhibit visual texture. The point tracker is often used for short-term tracking as part of a larger tracking framework.

As tracking progresses over time, points can be lost due to lighting variation, out-of-plane rotation, or articulated motion. To track an object over a long period of time, you may need to reacquire points periodically.
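
For example, here is a minimal sketch of periodic reacquisition, assuming reader is a vision.VideoFileReader producing RGB frames and tracker is an already initialized vision.PointTracker; the interval N and minimum point count are illustrative values, not recommendations:

N = 25;          % reacquire every N frames (illustrative)
minPoints = 10;  % reacquire when fewer valid points remain (illustrative)
frameIdx = 0;
while ~isDone(reader)
    frame = step(reader);
    [points, validity] = step(tracker, frame);
    frameIdx = frameIdx + 1;
    if nnz(validity) < minPoints || mod(frameIdx, N) == 0
        % Redetect features and reset the tracker with the new points.
        newPoints = detectMinEigenFeatures(rgb2gray(frame));
        setPoints(tracker, newPoints.Location);
    end
end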

Construction

pointTracker = vision.PointTracker returns a System object, pointTracker, that tracks a set of points in a video.

pointTracker = vision.PointTracker(Name,Value) configures the tracker object properties, specified as one or more name-value pair arguments. Unspecified properties have default values.
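
For example, the following sketch sets several properties at construction; the values shown are illustrative, not recommendations:

pointTracker = vision.PointTracker( ...
    'NumPyramidLevels',      4, ...
    'MaxBidirectionalError', 2, ...
    'BlockSize',             [21 21], ...
    'MaxIterations',         30);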

Code Generation Support
Supports MATLAB® Function block: Yes
For details, see System Objects in MATLAB Code Generation and Code Generation Support, Usage Notes, and Limitations.

Initialize Tracking Process

To initialize the tracking process, you must use the initialize method to specify the initial locations of the points and the initial video frame.

initialize(pointTracker,points,I) initializes the points to track and sets the initial video frame. The initial locations, points, must be an M-by-2 array of [x y] coordinates. The initial video frame, I, must be a 2-D grayscale or RGB image and must be the same size and data type as the video frames passed to the step method.

The detectFASTFeatures, detectSURFFeatures, detectHarrisFeatures, and detectMinEigenFeatures functions are a few of the many ways to obtain the initial points for tracking.
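
For example, a minimal sketch, assuming firstFrame holds the first RGB frame of your video:

points = detectMinEigenFeatures(rgb2gray(firstFrame));
pointTracker = vision.PointTracker;
initialize(pointTracker, points.Location, firstFrame);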

To track a set of points:

  1. Define and set up your point tracker object using the constructor.

  2. Call the step method with the input image, I, and the point tracker object, pointTracker. See the following syntax for using the step method.

After initializing the tracking process, use the step method to track the points in subsequent video frames. You can also reset the points at any time by using the setPoints method.

[points,point_validity] = step(pointTracker,I) tracks the points in the input frame, I, using the point tracker object, pointTracker. The output, points, is an M-by-2 array of [x y] coordinates that correspond to the new locations of the points in the input frame. The output, point_validity, is an M-by-1 logical array indicating whether or not each point has been reliably tracked.

A point can become invalid for several reasons: it falls outside of the image, the spatial gradient matrix computed in its neighborhood is singular, or its bidirectional error exceeds the MaxBidirectionalError threshold.

[points,point_validity,scores] = step(pointTracker,I) additionally returns the confidence score for each point. The M-by-1 output array, scores, contains values between 0 and 1. These values correspond to the degree of similarity between the neighborhood around the previous location and new location of each point. These values are computed as a function of the sum of squared differences between the previous and new neighborhoods. The greatest tracking confidence corresponds to a perfect match score of 1.
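
For example, a sketch that keeps only points tracked both validly and confidently; the 0.9 cutoff is illustrative, not a recommended value:

[points, point_validity, scores] = step(pointTracker, I);
confident = point_validity & (scores > 0.9);  % valid and high-similarity
reliablePoints = points(confident, :);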

setPoints(pointTracker,points) sets the points for tracking, where points is an M-by-2 array of [x y] coordinates. Use this method if the points need to be redetected because too many of them have been lost during tracking.

setPoints(pointTracker,points,point_validity) additionally lets you mark points as either valid or invalid. The input logical vector point_validity, of length M, contains one true or false value per point, where M is the number of points. A false value indicates an invalid point that should not be tracked. For example, you can use this method with the estimateGeometricTransform function to determine the transformation between the point locations in the previous and current frames, and then mark the outliers as invalid.
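
A minimal sketch of that pattern, assuming oldPoints and newPoints are M-by-2 arrays of corresponding locations from consecutive frames (only the points that step reported as valid):

[tform, inlierOld, inlierNew] = estimateGeometricTransform( ...
    oldPoints, newPoints, 'similarity');
% Mark points rejected as outliers by the transform estimate as invalid.
isInlier = ismember(newPoints, inlierNew, 'rows');
setPoints(pointTracker, newPoints, isInlier);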

Properties

NumPyramidLevels

Number of pyramid levels

Specify an integer scalar number of pyramid levels. The point tracker implementation of the KLT algorithm uses image pyramids. The object generates an image pyramid, where each level is reduced in resolution by a factor of two compared to the previous level. Selecting a pyramid level greater than 1 enables the algorithm to track the points at multiple levels of resolution, starting at the lowest level. Increasing the number of pyramid levels allows the algorithm to handle larger displacements of points between frames, but also increases the computation cost. Recommended values are between 1 and 4.

Each pyramid level is formed by down-sampling the previous level by a factor of two in width and height. The point tracker begins tracking each point in the lowest resolution level, and continues tracking until convergence. The object propagates the result of that level to the next level as the initial guess of the point locations. In this way, the tracking is refined with each level, up to the original image. Using pyramid levels allows the point tracker to handle large pixel motions, which can exceed the size of the neighborhood.

Default: 3
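
As a rough illustration of why this helps, since each level halves the resolution, a displacement of D pixels at full resolution appears as roughly D/2^(L-1) pixels at level L:

D = 48;                                % illustrative full-resolution motion, in pixels
levels = 1:4;
apparentMotion = D ./ 2.^(levels - 1)  % 48 24 12 6 pixels at levels 1 through 4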

MaxBidirectionalError

Forward-backward error threshold

Specify a numeric scalar for the maximum bidirectional error. If the value is less than inf, the object tracks each point from the previous frame to the current frame, and then tracks the same point back to the previous frame. The bidirectional error is the distance in pixels from the point's original location to its final location after this backward tracking. A point is considered invalid when its error is greater than the value of this property. Recommended values are between 0 and 3 pixels.

Using the bidirectional error is an effective way to eliminate points that could not be reliably tracked. However, the bidirectional error requires additional computation. When you set the MaxBidirectionalError property to inf, the object does not compute the bidirectional error.

Default: inf
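
For instance, to enable the forward-backward check at construction; the 2-pixel threshold is illustrative and lies within the recommended range:

pointTracker = vision.PointTracker('MaxBidirectionalError', 2);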

BlockSize

Size of neighborhood

Specify a two-element vector, [height, width], to represent the neighborhood around each point being tracked. The height and width must be odd integers. This neighborhood defines the area for the spatial gradient matrix computation. The minimum value for BlockSize is [5 5]. Increasing the size of the neighborhood increases the computation time.

Default: [31 31]

MaxIterations

Maximum number of search iterations

Specify a positive integer scalar for the maximum number of search iterations for each point. The KLT algorithm performs an iterative search for the new location of each point until convergence. Typically, the algorithm converges within 10 iterations. This property sets the limit on the number of search iterations. Recommended values are between 10 and 50.

Default: 30

Methods

initialize   Initialize video frame and points to track
setPoints    Set points to track
step         Track points in video using Kanade-Lucas-Tomasi (KLT) algorithm

Examples

Track a Face

Create System objects for reading and displaying video and for drawing a bounding box of the object.

videoFileReader = vision.VideoFileReader('visionface.avi');
videoPlayer = vision.VideoPlayer('Position', [100, 100, 680, 520]);

Read the first video frame, which contains the object, and then define the object region.

objectFrame = step(videoFileReader);
objectRegion = [264, 122, 93, 93];

As an alternative, you can use the following commands to select the object region using a mouse. The object must occupy the majority of the region.

figure; imshow(objectFrame); objectRegion = round(getPosition(imrect))

Show initial frame with a red bounding box.

objectImage = insertShape(objectFrame, 'Rectangle', objectRegion, 'Color', 'red');
figure; imshow(objectImage); title('Red box shows object region');

Detect interest points in the object region.

points = detectMinEigenFeatures(rgb2gray(objectFrame), 'ROI', objectRegion);

Display the detected points.

pointImage = insertMarker(objectFrame, points.Location, '+', 'Color', 'white');
figure, imshow(pointImage), title('Detected interest points');

Create a tracker object.

tracker = vision.PointTracker('MaxBidirectionalError', 1);

Initialize the tracker.

initialize(tracker, points.Location, objectFrame);

Read each video frame, track the points, and display the results.

while ~isDone(videoFileReader)
    frame = step(videoFileReader);              % read the next frame
    [points, validity] = step(tracker, frame);  % track the points
    out = insertMarker(frame, points(validity, :), '+');  % mark valid points
    step(videoPlayer, out);                     % display the result
end

Release the video reader and player.

release(videoPlayer);
release(videoFileReader);

References

Lucas, Bruce D., and Takeo Kanade. "An Iterative Image Registration Technique with an Application to Stereo Vision." Proceedings of the 7th International Joint Conference on Artificial Intelligence, April 1981, pp. 674–679.

Tomasi, Carlo, and Takeo Kanade. Detection and Tracking of Point Features. Computer Science Department, Carnegie Mellon University, April 1991.

Shi, Jianbo, and Carlo Tomasi. "Good Features to Track." IEEE Conference on Computer Vision and Pattern Recognition, 1994, pp. 593–600.

Kalal, Zdenek, Krystian Mikolajczyk, and Jiri Matas. "Forward-Backward Error: Automatic Detection of Tracking Failures." Proceedings of the 20th International Conference on Pattern Recognition, 2010, pp. 2756–2759.
