
vision.ForegroundDetector System object

Package: vision

Foreground detection using Gaussian mixture models

Description

The ForegroundDetector System object compares a color or grayscale video frame to a background model to determine whether individual pixels are part of the background or the foreground. It then computes a foreground mask. By using background subtraction, you can detect foreground objects in an image taken from a stationary camera.

Construction

detector = vision.ForegroundDetector returns a foreground detector System object, detector. Given a series of either grayscale or color video frames, the object computes and returns the foreground mask using Gaussian mixture models (GMM).

detector = vision.ForegroundDetector(Name,Value) returns a foreground detector System object, detector, with each specified property set to the specified value. Name is a property name and Value is the corresponding value. You can specify several name-value pair arguments in any order as Name1,Value1,…,NameN,ValueN.
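
The name-value pairs can be combined freely. For instance, a detector tuned for a short training sequence might be constructed as in the sketch below (the property values are illustrative, not recommendations):

```matlab
% Construct a foreground detector with several name-value pairs.
% The pairs can appear in any order.
detector = vision.ForegroundDetector( ...
    'NumGaussians', 3, ...          % model up to 3 background modes
    'NumTrainingFrames', 50, ...    % train on the first 50 frames
    'MinimumBackgroundRatio', 0.7); % default background threshold
```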

Code Generation Support
Supports MATLAB® Function block: No
Generates platform-dependent library: Yes for MATLAB host. Generated code for this function uses a precompiled platform-specific shared library.
Generates portable C code for non-host targets.
See System Objects in MATLAB Code Generation and Code Generation Support, Usage Notes, and Limitations.

Properties


AdaptLearningRate - Adapt learning rate
true (default) | false

Adapt learning rate, specified as the comma-separated pair consisting of 'AdaptLearningRate' and a logical scalar, true or false. This property enables the object to adapt the learning rate during the period specified by the NumTrainingFrames property. When you set this property to true, the object sets the LearningRate property to 1/(current frame number). When you set this property to false, you must set the LearningRate property at each time step.

NumTrainingFrames - Number of initial video frames for training background model
150 (default) | integer

Number of initial video frames for training the background model, specified as the comma-separated pair consisting of 'NumTrainingFrames' and an integer. This property is not available when you set AdaptLearningRate to false.

LearningRate - Learning rate for parameter updates
0.005 (default) | numeric scalar

Learning rate for parameter updates, specified as the comma-separated pair consisting of 'LearningRate' and a numeric scalar. The learning rate controls how quickly the model adapts to changing conditions: higher values adapt quickly but can absorb temporarily stationary foreground objects into the background, while lower values adapt slowly but yield a more stable background model.

When you set AdaptLearningRate to true, the LearningRate property takes effect only after the training period specified by NumTrainingFrames is over.

When you set AdaptLearningRate to false, you must set this property at each time step. This property is tunable.
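
Because LearningRate is tunable, it can be updated between calls to step. The following sketch disables automatic adaptation and applies a higher rate during an initial warm-up period; the frame threshold and rate values are illustrative assumptions, not recommendations:

```matlab
% Sketch: manual learning-rate control with AdaptLearningRate set to false.
detector = vision.ForegroundDetector('AdaptLearningRate', false, ...
    'LearningRate', 0.005);

videoSource = vision.VideoFileReader('viptraffic.avi', ...
    'ImageColorSpace', 'Intensity', 'VideoOutputDataType', 'uint8');

frameCount = 0;
while ~isDone(videoSource)
    frame = step(videoSource);
    frameCount = frameCount + 1;
    % LearningRate is tunable, so it can be changed between step calls.
    % Use a higher rate early on so the model converges quickly, then
    % drop to a lower rate for stable steady-state operation.
    if frameCount <= 50
        detector.LearningRate = 0.05;
    else
        detector.LearningRate = 0.005;
    end
    fgMask = step(detector, frame);
end
release(videoSource);
```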

MinimumBackgroundRatio - Threshold to determine background model
0.7 (default) | numeric scalar

Threshold to determine background model, specified as the comma-separated pair consisting of 'MinimumBackgroundRatio' and a numeric scalar. This property represents the minimum of the a priori probabilities for pixels to be considered background values. If this value is too small, the model cannot handle multimodal backgrounds.

NumGaussians - Number of Gaussian modes in the mixture model
5 (default) | positive integer

Number of Gaussian modes in the mixture model, specified as the comma-separated pair consisting of 'NumGaussians' and a positive integer. Typically this value is 3, 4, or 5. Set this value to 3 or greater to be able to model multiple background modes.

InitialVariance - Initial mixture model variance
'Auto' (default) | numeric scalar

Initial mixture model variance, specified as the comma-separated pair consisting of 'InitialVariance' and either a numeric scalar or 'Auto'. When set to 'Auto', the initial variance depends on the image data type:

Image Data Type      Initial Variance
double/single        (30/255)^2
uint8                30^2

This property applies to all color channels for color inputs.
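
When you specify InitialVariance numerically rather than as 'Auto', scale it to the input data type. A minimal sketch, mirroring the defaults in the table above:

```matlab
% For uint8 frames, intensities span 0-255, so a spread of 30 gray
% levels corresponds to an initial variance of 30^2.
detectorU8 = vision.ForegroundDetector('InitialVariance', 30^2);

% For single/double frames normalized to [0,1], the equivalent spread
% is 30/255, so the initial variance scales to (30/255)^2.
detectorFloat = vision.ForegroundDetector('InitialVariance', (30/255)^2);
```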

Methods

clone - Create foreground detector with same property values
getNumInputs - Number of expected inputs to step method
getNumOutputs - Number of outputs from step method
isLocked - Locked status for input attributes and nontunable properties
release - Allow property value and input characteristics changes
reset - Reset the GMM model to its initial state
step - Detect foreground using Gaussian mixture models
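
For example, reset lets you retrain the background model without constructing a new object. A minimal sketch, assuming the camera is repositioned partway through processing:

```matlab
detector = vision.ForegroundDetector('NumTrainingFrames', 50);

% ... process frames from the first scene with step(detector, frame) ...

% After the camera moves, the learned background no longer applies.
% Discard it and retrain from scratch instead of creating a new object.
reset(detector);

% ... subsequent calls to step(detector, frame) retrain the model ...
```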

Examples


Track Cars

Create a System object to read the video file.

videoSource = vision.VideoFileReader('viptraffic.avi','ImageColorSpace','Intensity','VideoOutputDataType','uint8');

Set the number of training frames to 5 because the video is short. Set the initial variance to 30^2, corresponding to a standard deviation of 30 gray levels.

detector = vision.ForegroundDetector(...
       'NumTrainingFrames', 5, ...
       'InitialVariance', 30*30);

Perform blob analysis.

blob = vision.BlobAnalysis(...
       'CentroidOutputPort', false, 'AreaOutputPort', false, ...
       'BoundingBoxOutputPort', true, ...
       'MinimumBlobAreaSource', 'Property', 'MinimumBlobArea', 250);

Create a shape inserter to draw white bounding boxes.

shapeInserter = vision.ShapeInserter('BorderColor','White');

Play the results, drawing bounding boxes around the detected cars.

videoPlayer = vision.VideoPlayer();
while ~isDone(videoSource)
     frame  = step(videoSource);
     fgMask = step(detector, frame);
     bbox   = step(blob, fgMask);
     out    = step(shapeInserter, frame, bbox);
     step(videoPlayer, out);
end

Release objects.

release(videoPlayer);
release(videoSource);

References

[1] Kaewtrakulpong, P. and R. Bowden. "An Improved Adaptive Background Mixture Model for Realtime Tracking with Shadow Detection." Proc. 2nd European Workshop on Advanced Video Based Surveillance Systems (AVBS01), September 2001.

[2] Stauffer, C. and W.E.L. Grimson. "Adaptive Background Mixture Models for Real-Time Tracking." Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Vol. 2, August 1999, pp. 246-252.

Introduced in R2011a
