vision.ForegroundDetector System object

Foreground detection using Gaussian mixture models

Description

The ForegroundDetector System object compares a color or grayscale video frame to a background model to determine whether individual pixels are part of the background or the foreground. It then computes a foreground mask. By using background subtraction, you can detect foreground objects in an image taken from a stationary camera.
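
For example, here is a minimal sketch of that workflow, using the same viptraffic.avi clip as the example later on this page. Only one frame is processed, purely to illustrate the interface; in practice the detector needs many frames to learn a useful background model.

videoSource = vision.VideoFileReader('viptraffic.avi', ...
    'ImageColorSpace','Intensity');
detector = vision.ForegroundDetector();

frame  = step(videoSource);        % read one grayscale frame
fgMask = step(detector, frame);    % logical mask: true marks foreground pixels
release(videoSource);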

Note

Starting in R2016b, instead of using the step method to perform the operation defined by the System object™, you can call the object with arguments, as if it were a function. For example, y = step(obj,x) and y = obj(x) perform equivalent operations.
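
For example, with a foreground detector the following two calls perform the same operation (the variable names are illustrative):

fgMask = step(detector, videoFrame);   % System object syntax
fgMask = detector(videoFrame);         % equivalent syntax in R2016b and later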

Construction

detector = vision.ForegroundDetector returns a foreground detector System object, detector. Given a series of either grayscale or color video frames, the object computes and returns the foreground mask using Gaussian mixture models (GMM).

detector = vision.ForegroundDetector(Name,Value) returns a foreground detector System object, detector, with each specified property name set to the specified value. Name is a property name and Value is the corresponding value. You can specify several name-value pair arguments in any order as Name1,Value1,…,NameN,ValueN.
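
For example, a minimal sketch that sets two of the properties described below by name (the specific values are illustrative):

detector = vision.ForegroundDetector('NumGaussians', 3, ...
    'NumTrainingFrames', 50);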

Properties


Adapt learning rate, specified as the comma-separated pair consisting of 'AdaptLearningRate' and a logical scalar, true or false. This property enables the object to adapt the learning rate during the period specified by the NumTrainingFrames property. When you set this property to true, the object sets the LearningRate property to 1/(current frame number). When you set this property to false, you must set the LearningRate property yourself at each time step.

Number of initial video frames for training the background model, specified as the comma-separated pair consisting of 'NumTrainingFrames' and an integer. When you set AdaptLearningRate to false, this property is not available.

Learning rate for parameter updates, specified as the comma-separated pair consisting of 'LearningRate' and a numeric scalar. Specify the learning rate to adapt model parameters. This property controls how quickly the model adapts to changing conditions. Set this property appropriately to ensure algorithm stability.

When you set AdaptLearningRate to true, the LearningRate property takes effect only after the training period specified by NumTrainingFrames is over.

When you set AdaptLearningRate to false, the object uses the LearningRate value you specify at every time step. This property is tunable.
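
The following sketch contrasts the two modes; the specific values (100 training frames, a learning rate of 0.005) are illustrative only:

% Adaptive mode: the learning rate is 1/(current frame number) for the
% first 100 frames, then the LearningRate value takes effect.
adaptiveDetector = vision.ForegroundDetector( ...
    'AdaptLearningRate', true, ...
    'NumTrainingFrames', 100, ...
    'LearningRate', 0.005);

% Fixed mode: no training period; the learning rate you set (tunable
% between calls) is applied at every time step.
fixedDetector = vision.ForegroundDetector('AdaptLearningRate', false);
fixedDetector.LearningRate = 0.005;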

Threshold to determine background model, specified as the comma-separated pair consisting of 'MinimumBackgroundRatio' and a numeric scalar. Set this property to the minimum a priori probability for a pixel to be considered part of the background. If this value is too small, the object cannot model multimodal backgrounds.
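
For example, a sketch that sets the ratio explicitly (the value 0.7 is illustrative):

detector = vision.ForegroundDetector('MinimumBackgroundRatio', 0.7);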


Number of Gaussian modes in the mixture model, specified as the comma-separated pair consisting of 'NumGaussians' and a positive integer. Typically this value is 3, 4 or 5. Set this value to 3 or greater to be able to model multiple background modes.
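
For example, a sketch that uses five modes so the model can absorb repetitive background motion such as waving foliage:

% More Gaussian modes let recurring background appearances coexist
% in the mixture model instead of being flagged as foreground.
detector = vision.ForegroundDetector('NumGaussians', 5);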

Initial mixture model variance, specified as the comma-separated pair consisting of 'InitialVariance' and a numeric scalar or the character vector 'Auto'. When you set this property to 'Auto', the object sets the initial variance according to the input image data type:

Image Data Type     Initial Variance
double/single       (30/255)^2
uint8               30^2

This property applies to all color channels for color inputs.
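
For example, a sketch showing both ways to set the initial variance; the explicit value matches the uint8 row of the table above:

% Let the object choose the initial variance from the input data type.
autoDetector = vision.ForegroundDetector('InitialVariance', 'Auto');

% Or specify it explicitly, here for uint8 frames (standard deviation of 30).
uint8Detector = vision.ForegroundDetector('InitialVariance', 30^2);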

Methods

reset           Reset the GMM model to its initial state (see the sketch after this list)
step            Detect foreground using Gaussian mixture models

Common to All System Objects

clone           Create System object with same property values
getNumInputs    Expected number of inputs to a System object
getNumOutputs   Expected number of outputs of a System object
isLocked        Check locked states of a System object (logical)
release         Allow System object property value changes
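
As a sketch of the reset method referenced in the list above, the following reuses one detector across unrelated video sequences; the file name and training length are taken from the example below:

detector    = vision.ForegroundDetector('NumTrainingFrames', 5);
videoSource = vision.VideoFileReader('viptraffic.avi', ...
    'ImageColorSpace','Intensity','VideoOutputDataType','uint8');
while ~isDone(videoSource)
    fgMask = step(detector, step(videoSource));
end
release(videoSource);

% Discard the learned background model before starting an unrelated sequence.
reset(detector);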

Examples


Create a System object to read the video file.

videoSource = vision.VideoFileReader('viptraffic.avi',...
    'ImageColorSpace','Intensity','VideoOutputDataType','uint8');

Create a foreground detector. Set the number of training frames to 5 because the video is short, and set the initial variance to 30^2 (a standard deviation of 30).

detector = vision.ForegroundDetector(...
       'NumTrainingFrames', 5, ... 
       'InitialVariance', 30*30);

Create a blob analysis System object to compute bounding boxes of the detected foreground blobs.

blob = vision.BlobAnalysis(...
       'CentroidOutputPort', false, 'AreaOutputPort', false, ...
       'BoundingBoxOutputPort', true, ...
       'MinimumBlobAreaSource', 'Property', 'MinimumBlobArea', 250);

Create a shape inserter to draw white bounding boxes around the detected objects.

shapeInserter = vision.ShapeInserter('BorderColor','White');

Process the video in a loop: detect the foreground, compute bounding boxes around the cars, draw them on each frame, and display the results.

videoPlayer = vision.VideoPlayer();
while ~isDone(videoSource)
     frame  = step(videoSource);
     fgMask = step(detector, frame);
     bbox   = step(blob, fgMask);
     out    = step(shapeInserter, frame, bbox);
     step(videoPlayer, out); 
end

Release objects.

release(videoPlayer);

release(videoSource);

References

[1] Kaewtrakulpong, P., and R. Bowden. "An Improved Adaptive Background Mixture Model for Realtime Tracking with Shadow Detection." In Proc. 2nd European Workshop on Advanced Video Based Surveillance Systems (AVBS01), Video Based Surveillance Systems: Computer Vision and Distributed Processing, September 2001.

[2] Stauffer, C., and W. E. L. Grimson. "Adaptive Background Mixture Models for Real-Time Tracking." Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Vol. 2, August 1999, pp. 246-252.

Introduced in R2011a