System object: vision.PointTracker
Track points in video using Kanade-Lucas-Tomasi (KLT) algorithm
[POINTS,POINT_VALIDITY] = step(H,I)
[POINTS,POINT_VALIDITY,SCORES] = step(H,I)
[POINTS,POINT_VALIDITY] = step(H,I) tracks the points in the input frame, I. The output POINTS is an M-by-2 array of [x y] coordinates that correspond to the new locations of the points in the input frame, I. The output POINT_VALIDITY is an M-by-1 logical array, indicating whether or not each point has been reliably tracked. The input frame, I, must be the same size as the image passed to the initialize method.
[POINTS,POINT_VALIDITY,SCORES] = step(H,I) additionally returns the confidence score for each point. The M-by-1 output array, SCORES, contains values between 0 and 1. These values are computed as a function of the sum of squared differences between the BlockSize-by-BlockSize neighborhood around the point in the previous frame and the corresponding neighborhood in the current frame. A score of 1 indicates a perfect match and the greatest tracking confidence.
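A minimal sketch of a typical tracking loop. The video file name is illustrative, and detectMinEigenFeatures is one of several toolbox functions that can supply initial points; any M-by-2 set of [x y] coordinates works with initialize:

```matlab
% Read the first frame and detect points to track
% ('visiontraffic.avi' is an example file name).
videoReader = VideoReader('visiontraffic.avi');
frame = rgb2gray(readFrame(videoReader));
corners = detectMinEigenFeatures(frame);

% Create and initialize the tracker before the first step call.
tracker = vision.PointTracker('MaxBidirectionalError', 2);
initialize(tracker, corners.Location, frame);

% Track the points through the remaining frames.
while hasFrame(videoReader)
    frame = rgb2gray(readFrame(videoReader));
    [points, validity, scores] = step(tracker, frame);
    trackedPoints = points(validity, :);  % keep reliably tracked points
end
```

Checking POINT_VALIDITY (and, optionally, a threshold on SCORES) before using the returned coordinates avoids propagating points that were lost, for example when they move out of the frame or become occluded.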
Note: H specifies the System object™ on which to run this step method.
The object performs an initialization the first time the step method is executed. This initialization locks nontunable properties and input specifications, such as dimensions, complexity, and data type of the input data. If you change a nontunable property or an input specification, the System object issues an error. To change nontunable properties or inputs, you must first call the release method to unlock the object.
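Because nontunable properties lock on the first step call, changing one afterward requires releasing the object, as in this sketch (the tracker, points, and frame variables are assumed to exist from an earlier tracking setup):

```matlab
% Attempting to set a nontunable property such as BlockSize after
% the first step call raises an error; release the object first.
release(tracker);
tracker.BlockSize = [51 51];              % now permitted
initialize(tracker, points, frame);       % re-initialize before tracking again
```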