vision.ForegroundDetector System object

Package: vision

Foreground detection using Gaussian mixture models


The ForegroundDetector System object compares a color or grayscale video frame to a background model to determine whether individual pixels are part of the background or the foreground. It then computes a foreground mask. By using background subtraction, you can detect foreground objects in an image taken from a stationary camera.
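To make the comparison concrete, here is a minimal single-Gaussian-per-pixel sketch in Python (not the MATLAB object, whose names and behavior differ; the detector itself maintains a full Gaussian mixture per pixel, and all names below are illustrative): a pixel is foreground when it deviates from the background mean by more than a few standard deviations.

```python
import numpy as np

def simple_foreground_mask(frame, bg_mean, bg_var, threshold=2.5):
    """Mark pixels more than `threshold` standard deviations away from the
    background mean as foreground. This is a one-Gaussian-per-pixel
    simplification of the full per-pixel mixture model."""
    dist = np.abs(frame.astype(float) - bg_mean)
    return dist > threshold * np.sqrt(bg_var)

# Background modeled as mean 10 with variance 4; one pixel (value 200)
# deviates strongly and is flagged as foreground.
frame = np.array([[10.0, 200.0],
                  [12.0, 11.0]])
mask = simple_foreground_mask(frame, bg_mean=10.0, bg_var=4.0)
```

The mixture model generalizes this by keeping several Gaussian modes per pixel, so backgrounds that alternate between a few appearances (for example, swaying leaves) can still be classified correctly.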


detector = vision.ForegroundDetector returns a foreground detector System object, detector. Given a series of either grayscale or color video frames, the object computes and returns the foreground mask using Gaussian mixture models (GMM).

detector = vision.ForegroundDetector(Name,Value) returns a foreground detector System object, detector, with each specified property set to the specified value. Name is a property name and Value is the corresponding value. You can specify several name-value pair arguments in any order as Name1,Value1,…,NameN,ValueN.

Code Generation Support
Supports MATLAB® Function block: No
Generates platform-dependent library: Yes for MATLAB host; the generated code uses a precompiled platform-specific shared library.
Generates portable C code for non-host targets.
See System Objects in MATLAB Code Generation and Code Generation Support, Usage Notes, and Limitations.



AdaptLearningRate - Adapt learning rate

Enables the object to adapt the learning rate during the training period specified by the NumTrainingFrames property. When you set this property to true, the object sets the LearningRate property to 1/(current frame number). When you set this property to false, you must supply the LearningRate property value at each time step.

Default: true
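The 1/(current frame number) schedule weights all frames seen so far equally, then hands off to a fixed rate once training ends. A small Python sketch of that schedule (function name, defaults, and the `adapt` flag are illustrative, not the MATLAB API):

```python
def learning_rate(frame_number, num_training_frames=150, fixed_rate=0.005,
                  adapt=True):
    """During the training period the rate 1/frame_number gives every frame
    seen so far equal weight in the background model; afterwards (or when
    adaptation is disabled) a fixed rate applies."""
    if adapt and frame_number <= num_training_frames:
        return 1.0 / frame_number
    return fixed_rate
```

For example, the first frame fully initializes the model (rate 1.0), frame 75 contributes with weight 1/75, and frame 151 onward uses the fixed rate.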


NumTrainingFrames - Number of initial video frames for training background model

Set this property to the number of training frames at the start of the video sequence.

When you set AdaptLearningRate to false, this property is not available.

Default: 150


LearningRate - Learning rate for parameter updates

Specify the learning rate to adapt model parameters. This property controls how quickly the model adapts to changing conditions. Set this property appropriately to ensure algorithm stability.

When you set AdaptLearningRate to true, the LearningRate property takes effect only after the training period specified by NumTrainingFrames is over.

When you set AdaptLearningRate to false, this property is not available. This property is tunable.

Default: 0.005
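The learning rate acts as an exponential-forgetting factor on the mixture weights, in the style of Stauffer and Grimson: the matched mode gains weight, all others decay. A Python sketch of that weight update (names are illustrative; the MATLAB object also updates the mean and variance of the matched mode, which is omitted here):

```python
import numpy as np

def update_weights(weights, matched_index, lr=0.005):
    """Exponential-forgetting mixture weight update: w <- (1-lr)*w + lr*M,
    where M is 1 for the matched mode and 0 otherwise. Weights are then
    renormalized to sum to one."""
    m = np.zeros_like(weights)
    m[matched_index] = 1.0
    weights = (1.0 - lr) * weights + lr * m
    return weights / weights.sum()

# With lr=0.1, the matched mode's weight rises from 0.5 toward 1, while
# the unmatched modes decay proportionally.
w = update_weights(np.array([0.5, 0.3, 0.2]), matched_index=0, lr=0.1)
```

A larger rate makes the model absorb scene changes faster (a parked car is adopted into the background sooner) at the cost of stability.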


MinimumBackgroundRatio - Threshold to determine background model

Set this property to the minimum of the a priori probabilities for pixels to be considered background values. If this value is too small, the object cannot handle multimodal backgrounds.

Default: 0.7


NumGaussians - Number of Gaussian modes in the mixture model

Specify the number of Gaussian modes in the mixture model as a positive integer. Typically this value is 3, 4, or 5. Set it to 3 or greater to model multiple background modes.

Default: 5


InitialVariance - Variance when initializing a new Gaussian mode

Specify the initial variance used to initialize all distributions that compose the foreground-background mixture model. For a uint8 input, a typical value is 30^2. For a floating-point input, this typical value scales to (30/255)^2. This property applies to all color channels for color inputs.

Default: (30/255)^2
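The two defaults describe the same standard deviation of 30 intensity levels, expressed in each input range, as this small check shows:

```python
# uint8 input spans 0..255, so a standard deviation of 30 levels gives a
# variance of 30^2 = 900. Floating-point input spans 0..1, so the same
# deviation becomes 30/255, and the variance scales by 1/255^2.
uint8_variance = 30 ** 2            # 900
float_variance = (30 / 255) ** 2    # about 0.0138

# The two values differ exactly by the squared range factor 255^2.
ratio = uint8_variance / float_variance
```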


Methods

clone - Create foreground detector object with same property values
getNumInputs - Number of expected inputs to step method
getNumOutputs - Number of outputs from step method
isLocked - Locked status for input attributes and nontunable properties
release - Allow property value and input characteristics changes
reset - Reset the GMM model to its initial state
step - Detect foreground using Gaussian mixture models


Track Cars:

videoSource = vision.VideoFileReader('viptraffic.avi', ...
       'ImageColorSpace', 'Intensity', 'VideoOutputDataType', 'uint8');
detector = vision.ForegroundDetector(...
       'NumTrainingFrames', 5, ... % 5 because of short video
       'InitialVariance', 30*30); % initial standard deviation of 30
blob = vision.BlobAnalysis(...
       'CentroidOutputPort', false, 'AreaOutputPort', false, ...
       'BoundingBoxOutputPort', true, ...
       'MinimumBlobAreaSource', 'Property', 'MinimumBlobArea', 250);
shapeInserter = vision.ShapeInserter('BorderColor','White');

videoPlayer = vision.VideoPlayer();
while ~isDone(videoSource)
     frame  = step(videoSource);
     fgMask = step(detector, frame);
     bbox   = step(blob, fgMask);
     out    = step(shapeInserter, frame, bbox); % draw bounding boxes around cars
     step(videoPlayer, out); % view results in the video player
end

References

[1] Kaewtrakulpong, P., and R. Bowden. "An Improved Adaptive Background Mixture Model for Realtime Tracking with Shadow Detection." In Proc. 2nd European Workshop on Advanced Video Based Surveillance Systems (AVBS01), September 2001.

[2] Stauffer, C., and W.E.L. Grimson. "Adaptive Background Mixture Models for Real-Time Tracking." IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Vol. 2, August 1999, pp. 246-252.
