
# vision.ForegroundDetector System object

Package: vision

Foreground detection using Gaussian mixture models

## Description

The ForegroundDetector System object compares a color or grayscale video frame to a background model to determine whether individual pixels are part of the background or the foreground. It then computes a foreground mask. By using background subtraction, you can detect foreground objects in an image taken from a stationary camera.

## Construction

detector = vision.ForegroundDetector returns a foreground detector System object, detector. Given a series of either grayscale or color video frames, the object computes and returns the foreground mask using Gaussian mixture models (GMM).

detector = vision.ForegroundDetector(Name,Value) returns a foreground detector System object, detector, with each specified property name set to the specified value. Name is a property name and Value is the corresponding value. You can specify several name-value pair arguments in any order as Name1,Value1,…,NameN,ValueN.
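For instance, a detector can be constructed with name-value pairs in this form; the specific property values below are illustrative, not recommended defaults:

```matlab
% Construct a foreground detector, overriding a few defaults
% (values chosen only to illustrate the name-value syntax).
detector = vision.ForegroundDetector( ...
    'NumGaussians', 3, ...           % fewer Gaussian modes than the default 5
    'NumTrainingFrames', 50, ...     % shorter training period than the default 150
    'MinimumBackgroundRatio', 0.7);  % same as the default
```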

Code Generation Support
- Supports MATLAB® Function block: No.
- Generates a platform-dependent library for MATLAB host targets; the generated code uses a precompiled platform-specific shared library.
- Generates portable C code for non-host targets.
- See System Objects in MATLAB Code Generation and Code Generation Support, Usage Notes, and Limitations.

## Properties

### AdaptLearningRate

Adapt learning rate

Enables the object to adapt the learning rate during the period specified by the NumTrainingFrames property. When you set this property to true, the object sets the LearningRate property to 1/(current frame number). When you set this property to false, you must set the LearningRate property at each time step.

Default: true

### NumTrainingFrames

Number of initial video frames for training background model

Set this property to the number of training frames at the start of the video sequence. When you set AdaptLearningRate to false, this property is not available.

Default: 150

### LearningRate

Learning rate for parameter updates

Specify the learning rate used to adapt the model parameters. This property controls how quickly the model adapts to changing conditions; set it appropriately to ensure algorithm stability. When you set AdaptLearningRate to true, the LearningRate property takes effect only after the training period specified by NumTrainingFrames is over. When you set AdaptLearningRate to false, this property is not available. This property is tunable.

Default: 0.005

### MinimumBackgroundRatio

Threshold to determine background model

Set this property to the minimum of the a priori probabilities for pixels to be considered background values. If this value is too small, the object cannot handle multimodal backgrounds.

Default: 0.7

### NumGaussians

Number of Gaussian modes in the mixture model

Specify the number of Gaussian modes in the mixture model as a positive integer. Typically this value is 3, 4, or 5. Set this value to 3 or greater to be able to model multiple background modes.

Default: 5

### InitialVariance

Variance when initializing a new Gaussian mode

Initial variance used to initialize all distributions that compose the foreground-background mixture model. For a uint8 input, a typical value is 30^2. For a floating-point input, this typical value is scaled to (30/255)^2. This property applies to all color channels for color inputs.

Default: (30/255)^2
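The InitialVariance scaling can be seen in a short sketch; both constructions below are illustrative and simply mirror the typical values quoted above for each input data type:

```matlab
% Typical InitialVariance values by input type (illustrative).
% uint8 frames span [0, 255], so an initial standard deviation of 30
% corresponds to a variance of 30^2.
detUint8 = vision.ForegroundDetector('InitialVariance', 30^2);

% Floating-point frames are assumed to span [0, 1], so the same
% standard deviation scales to 30/255, giving a variance of (30/255)^2.
detFloat = vision.ForegroundDetector('InitialVariance', (30/255)^2);
```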

## Methods

| Method | Description |
| --- | --- |
| clone | Create foreground detector with same property values |
| getNumInputs | Number of expected inputs to step method |
| getNumOutputs | Number of outputs from step method |
| isLocked | Locked status for input attributes and nontunable properties |
| release | Allow property value and input characteristics changes |
| reset | Reset the GMM model to its initial state |
| step | Detect foreground using Gaussian mixture models |
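The step and reset methods can be combined as in the following sketch; the `frames` cell array of grayscale images is a hypothetical input, not part of this reference page:

```matlab
% Sketch of the step/reset workflow (assumes 'frames' is a cell array
% of grayscale video frames prepared elsewhere).
detector = vision.ForegroundDetector('NumTrainingFrames', 10);
for k = 1:numel(frames)
    fgMask = step(detector, frames{k});  % logical foreground mask per frame
end
reset(detector);  % restore the GMM background model to its initial state
```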

## Examples

Track Cars:

```matlab
videoSource = vision.VideoFileReader('viptraffic.avi', ...
    'ImageColorSpace', 'Intensity', 'VideoOutputDataType', 'uint8');
detector = vision.ForegroundDetector(...
    'NumTrainingFrames', 5, ...  % 5 because of short video
    'InitialVariance', 30*30);   % initial standard deviation of 30
blob = vision.BlobAnalysis(...
    'CentroidOutputPort', false, 'AreaOutputPort', false, ...
    'BoundingBoxOutputPort', true, ...
    'MinimumBlobAreaSource', 'Property', 'MinimumBlobArea', 250);
shapeInserter = vision.ShapeInserter('BorderColor', 'White');

videoPlayer = vision.VideoPlayer();
while ~isDone(videoSource)
    frame = step(videoSource);