vision.ForegroundDetector System object

Foreground detection using Gaussian mixture models

Description

The ForegroundDetector compares a color or grayscale video frame to a background model to determine whether individual pixels are part of the background or the foreground. It then computes a foreground mask. By using background subtraction, you can detect foreground objects in an image taken from a stationary camera.

To detect the foreground in an image:

  1. Create the vision.ForegroundDetector object and set its properties.

  2. Call the object with arguments, as if it were a function.

To learn more about how System objects work, see What Are System Objects? (MATLAB).
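As a minimal sketch of these two steps (the synthetic frames and the bright-square "object" below are illustrative, not part of the toolbox example):

```matlab
% Step 1: create the detector and set its properties.
detector = vision.ForegroundDetector('NumTrainingFrames', 10);

% Step 2: call the object with arguments, as if it were a function.
bg = uint8(50*ones(120, 160));        % static synthetic background
for k = 1:10
    mask = detector(bg);              % training frames
end
frame = bg;
frame(40:60, 60:90) = 200;            % synthetic foreground object
mask = detector(frame);               % logical mask, 1 = foreground
```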

Creation

Syntax

detector = vision.ForegroundDetector
detector = vision.ForegroundDetector(Name,Value)

Description


detector = vision.ForegroundDetector computes and returns a foreground mask using the Gaussian mixture model (GMM).

detector = vision.ForegroundDetector(Name,Value) sets properties using one or more name-value pairs. Enclose each property name in quotes. For example, detector = vision.ForegroundDetector('LearningRate',0.005)

Properties


Unless otherwise indicated, properties are nontunable, which means you cannot change their values after calling the object. Objects lock when you call them, and the release function unlocks them.

If a property is tunable, you can change its value at any time.

For more information on changing property values, see System Design in MATLAB Using System Objects (MATLAB).

AdaptLearningRate — Adapt learning rate

Adapt learning rate, specified as true or false. This property enables the object to adapt the learning rate during the period specified by the NumTrainingFrames property. When you set this property to true, the object sets the LearningRate property to 1/(current frame number). When you set this property to false, you must supply the learning rate at each time step.

NumTrainingFrames — Number of initial video frames for training background model

Number of initial video frames for training the background model, specified as an integer. When you set AdaptLearningRate to false, this property is not available.

LearningRate — Learning rate for parameter updates

Learning rate for parameter updates, specified as a numeric scalar. The learning rate controls how quickly the model adapts to changing conditions. Set this property appropriately to ensure algorithm stability.

The learning rate specified by this property takes effect only when you set AdaptLearningRate to true, and only after the training period specified by NumTrainingFrames is over.

Tunable: Yes
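As a sketch of the manual mode described above (the frame is synthetic and the 0.005 rate is only an example value):

```matlab
% With AdaptLearningRate set to false, the learning rate is not adapted
% automatically; supply it on every call instead.
detector = vision.ForegroundDetector('AdaptLearningRate', false);

frame = uint8(randi(255, 120, 160));   % synthetic grayscale frame
mask  = detector(frame, 0.005);        % learning rate passed per call
```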

MinimumBackgroundRatio — Threshold to determine background model

Threshold to determine the background model, specified as a numeric scalar. Set this property to the minimum probability for pixels to be considered background values. If this value is too small, the object cannot handle multimodal backgrounds.

NumGaussians — Number of Gaussian modes in the mixture model

Number of Gaussian modes in the mixture model, specified as a positive integer. Typical values are 3, 4, or 5. Set the value to 3 or greater to model multiple background modes.

InitialVariance — Initial mixture model variance

Initial mixture model variance, specified as a numeric scalar or as the character vector 'Auto'. If you specify 'Auto', the initial variance depends on the image data type:

Image Data Type    Initial Variance
double/single      (30/255)^2
uint8              30^2

This property applies to all color channels for color inputs.
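For instance (a minimal sketch; the explicit value mirrors the uint8 entry in the table):

```matlab
% Explicit initial variance, suitable for uint8 frames.
detector = vision.ForegroundDetector('InitialVariance', 30^2);

% Or let the object choose the variance from the input data type.
detectorAuto = vision.ForegroundDetector('InitialVariance', 'Auto');
```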

Usage

For versions earlier than R2016b, use the step function to run the System object™ algorithm. The arguments to step are the object you created, followed by the arguments shown in this section.

For example, y = step(obj,x) and y = obj(x) perform equivalent operations.
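A minimal sketch of this equivalence (the all-zeros frame is only for illustration):

```matlab
detector = vision.ForegroundDetector;
frame = uint8(zeros(60, 80));

m1 = step(detector, frame);   % pre-R2016b calling style
reset(detector);              % restore the initial background model
m2 = detector(frame);         % function-like call, R2016b and later

isequal(m1, m2)               % the two forms should agree
```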

Syntax

foregroundMask = detector(I)
foregroundMask = detector(I,learningRate)

Description


foregroundMask = detector(I) computes the foreground mask for input image I, and returns a logical mask. Values of 1 in the mask correspond to foreground pixels.

foregroundMask = detector(I,learningRate) computes the foreground mask using the supplied learning rate learningRate.

Input Arguments


I — Input image

Input image, specified as a grayscale or truecolor (RGB) image.

learningRate — Learning rate for parameter updates

Learning rate for parameter updates, specified as a numeric scalar. The learning rate controls how quickly the model adapts to changing conditions. Set it appropriately to ensure algorithm stability.

The learning rate specified by this input takes effect only when you set AdaptLearningRate to true, and only after the training period specified by NumTrainingFrames is over.

Output Arguments


foregroundMask — Foreground mask

Foreground mask computed using a Gaussian mixture model, returned as a binary mask.

Object Functions

To use an object function, specify the System object as the first input argument. For example, to release system resources of a System object named obj, use this syntax:

release(obj)


step — Run System object algorithm
release — Release resources and allow changes to System object property values and input characteristics
reset — Reset internal states of System object
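For example, a sketch of the lock/release behavior (the property values here are illustrative):

```matlab
detector = vision.ForegroundDetector('NumGaussians', 3);
mask = detector(uint8(zeros(60, 80)));  % calling the object locks it

release(detector);                      % unlock the object
detector.NumGaussians = 5;              % nontunable change allowed after release
reset(detector);                        % clear the learned background model
```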

Examples


Create a video source object to read file.

videoSource = vision.VideoFileReader('viptraffic.avi',...
    'ImageColorSpace','Intensity','VideoOutputDataType','uint8');

Create a detector object and set the number of training frames to 5, because the video is short. Set the initial variance to 30^2 (a standard deviation of 30).

detector = vision.ForegroundDetector(...
       'NumTrainingFrames', 5, ...
       'InitialVariance', 30*30);

Perform blob analysis.

blob = vision.BlobAnalysis(...
       'CentroidOutputPort', false, 'AreaOutputPort', false, ...
       'BoundingBoxOutputPort', true, ...
       'MinimumBlobAreaSource', 'Property', 'MinimumBlobArea', 250);

Create a shape inserter to draw white borders around the detected blobs.

shapeInserter = vision.ShapeInserter('BorderColor','White');

Play results. Draw bounding boxes around cars.

videoPlayer = vision.VideoPlayer();
while ~isDone(videoSource)
     frame  = videoSource();
     fgMask = detector(frame);
     bbox   = blob(fgMask);
     out    = shapeInserter(frame,bbox);
     videoPlayer(out);
end

Release objects.

release(videoPlayer);
release(videoSource);



Introduced in R2011a