
opticalFlowFarneback class

Estimate optical flow using Farneback method


Estimate the direction and speed of a moving object from one image or video frame to another using the Farneback method.


opticFlow = opticalFlowFarneback returns an optical flow object that you can use to estimate the direction and speed of an object's motion. The estimateFlow method of this class uses the Farneback algorithm to estimate the optical flow.

opticFlow = opticalFlowFarneback(Name,Value) includes additional options specified by one or more Name,Value pair arguments.
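For instance, the object can be constructed with defaults or with name-value pairs; the parameter values below are illustrative, not recommendations:

```matlab
% Default construction
opticFlow = opticalFlowFarneback;

% Construction with name-value pairs (illustrative values)
opticFlow2 = opticalFlowFarneback('NumPyramidLevels',3, ...
    'PyramidScale',0.5,'NumIterations',3,'NeighborhoodSize',5);
```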

Input Arguments


Name-Value Pair Arguments

Specify optional comma-separated pairs of Name,Value arguments. Name is the argument name and Value is the corresponding value. Name must appear inside single quotes (' '). You can specify several name and value pair arguments in any order as Name1,Value1,...,NameN,ValueN.

Example: 'NumPyramidLevels',3


Number of pyramid layers, specified as the comma-separated pair consisting of 'NumPyramidLevels' and a positive scalar. The value includes the initial image as one of the layers. When you set this value to 1, opticalFlowFarneback uses the original image only. It does not add any pyramid layers.

The opticalFlowFarneback algorithm generates an image pyramid, where each level has a lower resolution compared to the previous level. When you select a pyramid level greater than 1, the algorithm can track the points at multiple levels of resolution, starting at the lowest level. Increasing the number of pyramid levels enables the algorithm to handle larger displacements of points between frames. However, the number of computations also increases. Recommended values are between 1 and 4. The diagram shows an image pyramid with 3 levels.

The algorithm forms each pyramid level by downsampling the previous level. The tracking begins in the lowest resolution level, and continues tracking until convergence. The optical flow algorithm propagates the result of that level to the next level as the initial guess of the point locations. In this way, the algorithm refines the tracking with each level, ending with the original image. Using the pyramid levels enables the optical flow algorithm to handle large pixel motions, which can be distances greater than the neighborhood size.
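As a concrete illustration of how the pyramid is sized (the frame dimensions and settings here are hypothetical), each level's resolution follows from PyramidScale:

```matlab
% Illustrative: resolution of each pyramid level for a 480-by-640 frame,
% assuming PyramidScale = 0.5 and NumPyramidLevels = 3
imgSize = [480 640];      % hypothetical frame size
pyramidScale = 0.5;
numLevels = 3;
for k = 1:numLevels
    levelSize = round(imgSize * pyramidScale^(k-1));
    fprintf('Level %d: %d-by-%d\n', k, levelSize(1), levelSize(2));
end
```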

Image scale, specified as the comma-separated pair consisting of 'PyramidScale' and a positive scalar in the range (0,1). The pyramid scale is applied to each image at every pyramid level. A value of 0.5 creates a classical pyramid, where each level reduces in resolution by a factor of two compared to the previous level.

Number of search iterations per pyramid level, specified as the comma-separated pair consisting of 'NumIterations' and a positive integer. The Farneback algorithm performs an iterative search for the new location of each point until convergence.

Size of the pixel neighborhood, specified as the comma-separated pair consisting of 'NeighborhoodSize' and a positive integer. Increasing the neighborhood size blurs the motion field, which yields a more robust, though less detailed, estimate of optical flow. A typical value for NeighborhoodSize is 5 or 7.

Averaging filter size, specified as the comma-separated pair consisting of 'FilterSize' and a positive integer in the range [2, Inf). After the algorithm computes the displacement (flow), it averages over neighborhoods using a Gaussian filter of size FilterSize-by-FilterSize. Pixels close to the borders are given reduced weight because the algorithm assumes that the polynomial expansion coefficients are less reliable there. Increasing the filter size makes the algorithm more robust to image noise and improves detection of fast motion, at the cost of a more blurred motion field.
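Conceptually, this averaging step behaves like smoothing each flow component with a Gaussian kernel. The following is a rough sketch of that effect using Image Processing Toolbox functions, not the internal implementation; the filter size, data, and sigma relation are assumptions for illustration only:

```matlab
% Conceptual sketch only -- not the internal implementation.
% Smoothing a flow component with a Gaussian of width FilterSize
% suppresses noise in the per-pixel displacement estimates.
filterSize = 15;                      % hypothetical value
Vx = randn(100);                      % stand-in for one flow component
sigma = filterSize/6;                 % assumed relation, for illustration
VxSmooth = imgaussfilt(Vx, sigma, 'FilterSize', filterSize);
```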


Methods

estimateFlow — Estimate optical flow
reset — Reset the internal state of the object


Examples

Load a video.

vidReader = VideoReader('visiontraffic.avi','CurrentTime',11);

Set up an optical flow object to do the estimate.

opticFlow = opticalFlowFarneback;

Read in video frames and estimate optical flow of each frame. Display the video frames with flow vectors.

while hasFrame(vidReader)
    frameRGB = readFrame(vidReader);
    frameGray = rgb2gray(frameRGB);

    flow = estimateFlow(opticFlow,frameGray);

    % Display the frame and overlay the flow vectors
    imshow(frameRGB)
    hold on
    plot(flow,'DecimationFactor',[5 5],'ScaleFactor',2)
    hold off
end


[1] Farneback, G. “Two-Frame Motion Estimation Based on Polynomial Expansion.” Proceedings of the 13th Scandinavian Conference on Image Analysis. Gothenburg, Sweden, 2003.

Introduced in R2015b