# vision.OpticalFlow System object

Package: vision

Estimate object velocities

## Description

The OpticalFlow System object™ estimates object velocities from one image or video frame to another. It uses either the Horn-Schunck or the Lucas-Kanade method.

## Construction

opticalFlow = vision.OpticalFlow returns an optical flow System object, opticalFlow. This object estimates the direction and speed of object motion from one image to another or from one video frame to another.

opticalFlow = vision.OpticalFlow(Name,Value) returns an optical flow System object, opticalFlow, with each specified property set to the specified value. You can specify additional name-value pair arguments in any order as (Name1,Value1,...,NameN,ValueN).

Code Generation Support: Supports MATLAB® Function block (Yes). See System Objects in MATLAB Code Generation and Code Generation Support, Usage Notes, and Limitations.

### To estimate velocity:

1. Define and set up your optical flow object using the constructor.

2. Call the step method with the input image, I, the optical flow object, opticalFlow, and any optional properties. See the syntax below for using the step method.

VSQ = step(opticalFlow,I) computes the optical flow of input image I from one video frame to another, and returns VSQ, a matrix of velocity magnitudes.

V = step(opticalFlow,I) computes the optical flow of input image I from one video frame to another, and returns V, a complex matrix of horizontal and vertical components. This applies when you set the OutputValue property to 'Horizontal and vertical components in complex form'.

[...] = step(opticalFlow,I1,I2) computes the optical flow of the input image I1, using I2 as a reference frame. This applies when you set the ReferenceFrameSource property to 'Input port'.

[..., IMV] = step(opticalFlow,I) also outputs the delayed input image, IMV. The delay is equal to the latency introduced by the computation of the motion vectors. This syntax applies when you set the Method property to 'Lucas-Kanade', the TemporalGradientFilter property to 'Derivative of Gaussian', and the MotionVectorImageOutputport property to true.

## Methods

• clone: Create optical flow object with same property values

• getNumInputs: Number of expected inputs to step method

• getNumOutputs: Number of outputs from step method

• isLocked: Locked status for input attributes and non-tunable properties

• release: Allow property value and input characteristics changes

• step: Estimate direction and speed of object motion between video frames

## Examples


### Track Cars Using Optical Flow

Set up objects.

```
videoReader = vision.VideoFileReader('viptraffic.avi', ...
    'ImageColorSpace','Intensity','VideoOutputDataType','uint8');
converter = vision.ImageDataTypeConverter;
opticalFlow = vision.OpticalFlow('ReferenceFrameDelay', 1);
opticalFlow.OutputValue = 'Horizontal and vertical components in complex form';
shapeInserter = vision.ShapeInserter('Shape','Lines','BorderColor','Custom', ...
    'CustomBorderColor', 255);
videoPlayer = vision.VideoPlayer('Name','Motion Vector');
```

Convert the image to single precision, then compute optical flow for the video. Generate coordinate points and draw lines to indicate flow. Display results.

```
while ~isDone(videoReader)
    frame = step(videoReader);
    im = step(converter, frame);
    of = step(opticalFlow, im);
    lines = videooptflowlines(of, 20);
    if ~isempty(lines)
        out = step(shapeInserter, im, lines);
        step(videoPlayer, out);
    end
end
```

Close the video reader and player.

```
release(videoReader);
release(videoPlayer);
```

## Algorithms

To compute the optical flow between two images, you must solve the following optical flow constraint equation:

${I}_{x}u+{I}_{y}v+{I}_{t}=0$

In this equation, the following values are represented:

• ${I}_{x}$, ${I}_{y}$ and ${I}_{t}$ are the spatiotemporal image brightness derivatives

• u is the horizontal optical flow

• v is the vertical optical flow
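The constraint itself can be verified numerically. In the NumPy sketch below (the variable names and central-difference kernels are our illustrative choices, not toolbox code), a smooth pattern is translated by a known (u, v), and the residual of the constraint equation vanishes away from the image borders:

```python
import numpy as np

# Known translation of (u, v) = (1, 1) pixels per frame.
u_true, v_true = 1.0, 1.0
yy, xx = np.mgrid[0:64, 0:64].astype(float)

def frame(t):
    # Smooth brightness pattern translated by (u_true*t, v_true*t).
    return np.sin(0.2 * (xx - u_true * t)) + np.cos(0.15 * (yy - v_true * t))

Iy, Ix = np.gradient(frame(0.0))          # spatial derivatives at t = 0
It = (frame(1.0) - frame(-1.0)) / 2.0     # central temporal derivative

# I_x*u + I_y*v + I_t should be ~0 in the interior of the image.
residual = Ix * u_true + Iy * v_true + It
print(np.abs(residual[2:-2, 2:-2]).max())
```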

Because this equation is underconstrained, there are several methods to solve for u and v:

• Horn-Schunck Method

• Lucas-Kanade Method

See the following two sections for descriptions of these methods.

### Horn-Schunck Method

By assuming that the optical flow is smooth over the entire image, the Horn-Schunck method computes an estimate of the velocity field, $[u\;v]^{T}$, that minimizes this equation:

$E=\iint \left(I_{x}u+I_{y}v+I_{t}\right)^{2}\,dx\,dy+\alpha \iint \left\{\left(\frac{\partial u}{\partial x}\right)^{2}+\left(\frac{\partial u}{\partial y}\right)^{2}+\left(\frac{\partial v}{\partial x}\right)^{2}+\left(\frac{\partial v}{\partial y}\right)^{2}\right\}\,dx\,dy$

In this equation, $\frac{\partial u}{\partial x}$ and $\frac{\partial u}{\partial y}$ are the spatial derivatives of the optical velocity component u, and $\alpha$ scales the global smoothness term. The Horn-Schunck method minimizes the previous equation to obtain the velocity field, [u v], for each pixel in the image, which is given by the following equations:

$\begin{array}{l}u_{x,y}^{k+1}=\overline{u}_{x,y}^{k}-\frac{I_{x}\left[I_{x}\overline{u}_{x,y}^{k}+I_{y}\overline{v}_{x,y}^{k}+I_{t}\right]}{\alpha^{2}+I_{x}^{2}+I_{y}^{2}}\\ v_{x,y}^{k+1}=\overline{v}_{x,y}^{k}-\frac{I_{y}\left[I_{x}\overline{u}_{x,y}^{k}+I_{y}\overline{v}_{x,y}^{k}+I_{t}\right]}{\alpha^{2}+I_{x}^{2}+I_{y}^{2}}\end{array}$

In this equation, $[u_{x,y}^{k}\;\;v_{x,y}^{k}]$ is the velocity estimate for the pixel at (x,y), and $[\overline{u}_{x,y}^{k}\;\;\overline{v}_{x,y}^{k}]$ is the neighborhood average of $[u_{x,y}^{k}\;\;v_{x,y}^{k}]$. For k=0, the initial velocity is 0.

When you choose the Horn-Schunck method, u and v are solved as follows:

1. Compute $I_{x}$ and $I_{y}$ using the Sobel convolution kernel, $\left[\begin{array}{ccc}-1&-2&-1\\0&0&0\\1&2&1\end{array}\right]$, and its transposed form, for each pixel in the first image.

2. Compute ${I}_{t}$ between images 1 and 2 using the $\left[\begin{array}{cc}-1& 1\end{array}\right]$ kernel.

3. Assume the previous velocity to be 0, and compute the average velocity for each pixel using $\left[\begin{array}{ccc}0&1&0\\1&0&1\\0&1&0\end{array}\right]$ as a convolution kernel.

4. Iteratively solve for u and v.
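The four steps above can be sketched in NumPy. This is a language-agnostic translation for illustration, not the toolbox's fixed-point implementation: the /8 Sobel scaling and the /4 averaging are our normalization choices, and filtering is done as plain zero-padded correlation.

```python
import numpy as np

def conv2(img, k):
    # Minimal 'same'-size zero-padded 2-D filtering (correlation).
    kh, kw = k.shape
    p = np.pad(img, ((kh // 2, kh // 2), (kw // 2, kw // 2)))
    out = np.zeros_like(img, dtype=float)
    for i in range(kh):
        for j in range(kw):
            out += k[i, j] * p[i:i + img.shape[0], j:j + img.shape[1]]
    return out

def horn_schunck(I1, I2, alpha=1.0, n_iter=300):
    sobel = np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]], float) / 8.0
    Ix = conv2(I1, sobel.T)            # horizontal gradient (transposed Sobel)
    Iy = conv2(I1, sobel)              # vertical gradient
    It = I2 - I1                       # temporal gradient via the [-1 1] kernel
    avg = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float) / 4.0
    u = np.zeros_like(I1, dtype=float)  # k = 0: initial velocity is 0
    v = np.zeros_like(I1, dtype=float)
    denom = alpha**2 + Ix**2 + Iy**2
    for _ in range(n_iter):
        ub, vb = conv2(u, avg), conv2(v, avg)   # neighborhood averages
        common = (Ix * ub + Iy * vb + It) / denom
        u, v = ub - Ix * common, vb - Iy * common
    return u, v

# Usage: a pattern moved 1 px to the right; the recovered horizontal flow in
# the image center should be close to 1, and the vertical flow close to 0.
yy, xx = np.mgrid[0:64, 0:64].astype(float)
I1 = np.sin(0.5 * xx) + np.sin(0.5 * yy)
I2 = np.sin(0.5 * (xx - 1.0)) + np.sin(0.5 * yy)
u, v = horn_schunck(I1, I2)
print(u[20:44, 20:44].mean(), v[20:44, 20:44].mean())
```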

### Lucas-Kanade Method

To solve the optical flow constraint equation for u and v, the Lucas-Kanade method divides the original image into smaller sections and assumes a constant velocity in each section. Then, it performs a weighted least-squares fit of the optical flow constraint equation to a constant model for $[u\;v]^{T}$ in each section, $\Omega$, by minimizing the following equation:

$\sum _{x\in \Omega }{W}^{2}{\left[{I}_{x}u+{I}_{y}v+{I}_{t}\right]}^{2}$

Here, W is a window function that emphasizes the constraints at the center of each section. The solution to the minimization problem is given by the following equation:

$\left[\begin{array}{cc}\sum {W}^{2}{I}_{x}^{2}& \sum {W}^{2}{I}_{x}{I}_{y}\\ \sum {W}^{2}{I}_{y}{I}_{x}& \sum {W}^{2}{I}_{y}^{2}\end{array}\right]\left[\begin{array}{c}u\\ v\end{array}\right]=-\left[\begin{array}{c}\sum {W}^{2}{I}_{x}{I}_{t}\\ \sum {W}^{2}{I}_{y}{I}_{t}\end{array}\right]$

When you choose the Lucas-Kanade method, ${I}_{t}$ is computed using a difference filter or a derivative of a Gaussian filter.

The two following sections explain how ${I}_{x}$, ${I}_{y}$, ${I}_{t}$, and then u and v are computed.
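The least-squares solution above can be checked numerically. The sketch below builds the 2-by-2 system for one section from synthetic gradients that are exactly consistent with a known flow; the window weights and gradient values are invented for illustration, not toolbox output.

```python
import numpy as np

rng = np.random.default_rng(0)
Ix = rng.standard_normal((5, 5))     # spatial gradients over one section, Omega
Iy = rng.standard_normal((5, 5))
u_true, v_true = 0.5, -0.25
It = -(Ix * u_true + Iy * v_true)    # temporal gradient consistent with the constraint

w1d = np.array([1.0, 2.0, 3.0, 2.0, 1.0])
W2 = np.outer(w1d, w1d) ** 2         # W^2: emphasizes the center of the section

A = np.array([[np.sum(W2 * Ix * Ix), np.sum(W2 * Ix * Iy)],
              [np.sum(W2 * Iy * Ix), np.sum(W2 * Iy * Iy)]])
b = -np.array([np.sum(W2 * Ix * It), np.sum(W2 * Iy * It)])
u, v = np.linalg.solve(A, b)
print(u, v)                          # recovers (0.5, -0.25)
```

Because It was constructed to satisfy the constraint exactly, the weighted fit returns the true flow regardless of the window weights.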

#### Difference Filter

When you set the TemporalGradientFilter property to 'Difference filter [-1 1]', u and v are solved as follows:

1. Compute $I_{x}$ and $I_{y}$ using the kernel $\left[\begin{array}{ccccc}-1&8&0&-8&1\end{array}\right]/12$ and its transposed form.

If you are working with fixed-point data types, the kernel values are signed fixed-point values with word length equal to 16 and fraction length equal to 15.

2. Compute ${I}_{t}$ between images 1 and 2 using the $\left[\begin{array}{cc}-1& 1\end{array}\right]$ kernel.

3. Smooth the gradient components, $I_{x}$, $I_{y}$, and $I_{t}$, using a separable and isotropic 5-by-5 element kernel whose effective 1-D coefficients are $\left[\begin{array}{ccccc}1&4&6&4&1\end{array}\right]/16$. If you are working with fixed-point data types, the kernel values are unsigned fixed-point values with word length equal to 8 and fraction length equal to 7.

4. Solve the 2-by-2 linear equations for each pixel using the following method:

• If $A=\left[\begin{array}{cc}a& b\\ b& c\end{array}\right]=\left[\begin{array}{cc}\sum {W}^{2}{I}_{x}^{2}& \sum {W}^{2}{I}_{x}{I}_{y}\\ \sum {W}^{2}{I}_{y}{I}_{x}& \sum {W}^{2}{I}_{y}^{2}\end{array}\right]$

Then the eigenvalues of A are ${\lambda }_{i}=\frac{a+c}{2}±\frac{\sqrt{4{b}^{2}+{\left(a-c\right)}^{2}}}{2};i=1,2$

In the fixed-point diagrams, $P=\frac{a+c}{2},Q=\frac{\sqrt{4{b}^{2}+{\left(a-c\right)}^{2}}}{2}$

• The eigenvalues are compared to the threshold, $\tau$, that corresponds to the value you enter for the threshold for noise reduction. The results fall into one of the following cases:

Case 1: ${\lambda }_{1}\ge \tau$ and ${\lambda }_{2}\ge \tau$

A is nonsingular, so the system of equations is solved using Cramer's rule.

Case 2: ${\lambda }_{1}\ge \tau$ and ${\lambda }_{2}<\tau$

A is singular (noninvertible), so the gradient flow is normalized to calculate u and v.

Case 3: ${\lambda }_{1}<\tau$ and ${\lambda }_{2}<\tau$

The optical flow, u and v, is 0.
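The eigenvalue case analysis above can be sketched as follows. The formulas for P, Q, and the eigenvalues follow the equations in this section; the case-2 normalization is a simplified stand-in, since the exact gradient-flow normalization is not spelled out here.

```python
import numpy as np

def solve_section(A, rhs, tau):
    a, b, c = A[0, 0], A[0, 1], A[1, 1]           # A is symmetric: [[a, b], [b, c]]
    P = (a + c) / 2.0                             # as in the fixed-point diagrams
    Q = np.sqrt(4.0 * b * b + (a - c) ** 2) / 2.0
    lam1, lam2 = P + Q, P - Q                     # eigenvalues, lam1 >= lam2
    if lam1 >= tau and lam2 >= tau:
        # Case 1: A is nonsingular; solve by Cramer's rule.
        det = a * c - b * b
        return np.array([(rhs[0] * c - b * rhs[1]) / det,
                         (a * rhs[1] - b * rhs[0]) / det])
    if lam1 >= tau:
        # Case 2: A is singular; normalized gradient flow (simplified stand-in).
        return np.array(rhs, dtype=float) / lam1
    # Case 3: both eigenvalues fall below the noise threshold; flow is 0.
    return np.zeros(2)

# Usage: a well-conditioned A takes the Cramer's-rule branch and matches a
# direct linear solve; a near-zero A returns zero flow.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
print(solve_section(A, [1.0, 2.0], tau=0.5))
print(solve_section(np.eye(2) * 1e-6, [1.0, 2.0], tau=0.5))
```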

#### Derivative of Gaussian

If you set the TemporalGradientFilter property to 'Derivative of Gaussian', u and v are solved using the following steps:

1. Compute ${I}_{x}$ and ${I}_{y}$ using the following steps:

1. Use a Gaussian filter to perform temporal filtering. Specify the temporal filter characteristics such as the standard deviation and number of filter coefficients using the BufferedFramesCount property.

2. Use a Gaussian filter and the derivative of a Gaussian filter to smooth the image using spatial filtering. Specify the standard deviation and length of the image smoothing filter using the ImageSmoothingFilterStandardDeviation property.

2. Compute ${I}_{t}$ between images 1 and 2 using the following steps:

1. Use the derivative of a Gaussian filter to perform temporal filtering. Specify the temporal filter characteristics such as the standard deviation and number of filter coefficients using the BufferedFramesCount property.

2. Use the filter described in step 1b to perform spatial filtering on the output of the temporal filter.

3. Smooth the gradient components, ${I}_{x}$, ${I}_{y}$, and ${I}_{t}$, using a gradient smoothing filter. Use the GradientSmoothingFilterStandardDeviation property to specify the standard deviation and the number of filter coefficients for the gradient smoothing filter.

4. Solve the 2-by-2 linear equations for each pixel using the following method:

• If $A=\left[\begin{array}{cc}a& b\\ b& c\end{array}\right]=\left[\begin{array}{cc}\sum {W}^{2}{I}_{x}^{2}& \sum {W}^{2}{I}_{x}{I}_{y}\\ \sum {W}^{2}{I}_{y}{I}_{x}& \sum {W}^{2}{I}_{y}^{2}\end{array}\right]$

Then the eigenvalues of A are ${\lambda }_{i}=\frac{a+c}{2}±\frac{\sqrt{4{b}^{2}+{\left(a-c\right)}^{2}}}{2};i=1,2$

• When the object finds the eigenvalues, it compares them to the threshold, $\tau$, that corresponds to the value you enter for the NoiseReductionThreshold property. The results fall into one of the following cases:

Case 1: ${\lambda }_{1}\ge \tau$ and ${\lambda }_{2}\ge \tau$

A is nonsingular, so the object solves the system of equations using Cramer's rule.

Case 2: ${\lambda }_{1}\ge \tau$ and ${\lambda }_{2}<\tau$

A is singular (noninvertible), so the object normalizes the gradient flow to calculate u and v.

Case 3: ${\lambda }_{1}<\tau$ and ${\lambda }_{2}<\tau$

The optical flow, u and v, is 0.
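The Gaussian and derivative-of-Gaussian filter pair used in the steps above can be sketched as below. The sigma and filter length are placeholders for the standard-deviation and coefficient-count settings that the properties in this section control, and the discrete normalization is our choice.

```python
import numpy as np

sigma, half = 1.5, 5
t = np.arange(-half, half + 1, dtype=float)
g = np.exp(-t ** 2 / (2.0 * sigma ** 2))
g /= g.sum()                   # Gaussian: smooths, unit DC gain
dg = -t / sigma ** 2 * g       # derivative of Gaussian: differentiates

# Sanity check: convolving a ramp with dg recovers its slope (close to 1),
# because convolution with g' equals differentiation followed by smoothing.
ramp = np.arange(30.0)
slope = np.convolve(ramp, dg, mode='valid')
print(slope[0])
```

Applied along time, g smooths the buffered frames (for computing $I_x$ and $I_y$) while dg differentiates them (for $I_t$); the same pair performs the spatial smoothing and differentiation.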

## References

[1] Barron, J. L., D. J. Fleet, S. S. Beauchemin, and T. A. Burkitt. "Performance of Optical Flow Techniques." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 1992.