
Abandoned Object Detection

This example shows how to track objects at a train station and to determine which ones remain stationary. Abandoned objects in public areas concern authorities since they might pose a security risk. Algorithms, such as the one used in this example, can be used to assist security officers monitoring live surveillance video by directing their attention to a potential area of interest.

This example illustrates how to use the Blob Analysis and MATLAB® Function blocks to design a custom tracking algorithm. The example implements this algorithm using the following steps:

  • Eliminate video areas that are unlikely to contain abandoned objects by extracting a region of interest (ROI).

  • Perform video segmentation using background subtraction.

  • Calculate object statistics using the Blob Analysis block.

  • Track objects based on their area and centroid statistics.

  • Visualize the results.

Watch the Abandoned Object Detection example.

Example Model

The following figure shows the Abandoned Object Detection example model.

Store Background Subsystem

This example uses the first frame of the video as the background. To improve accuracy, the example uses both intensity and color information for the background subtraction operation. During this operation, Cb and Cr color channels are stored in a complex array.
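Storing the two chroma channels as the real and imaginary parts of one complex array lets a single magnitude computation measure the change in both channels at once. The following is a minimal sketch of that idea, assuming firstFrame holds the first RGB video frame and using the rgb2ycbcr function for brevity; the variable names are illustrative, and the complete MATLAB version of the example appears later in this topic.

% Sketch: store the intensity background and the Cb/Cr background as one
% complex array (firstFrame is an assumed RGB frame with values in [0,1]).
YCbCr   = rgb2ycbcr(firstFrame);
BkgY    = YCbCr(:,:,1);                          % intensity background
BkgCbCr = complex(YCbCr(:,:,2), YCbCr(:,:,3));   % Cb + 1i*Cr in a single array
% Later, abs(CbCr - BkgCbCr) gives the combined change in both chroma channels.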

If you are designing a professional surveillance system, you should implement a more sophisticated segmentation algorithm.

Detect Subsystem

The Detect subsystem contains the main algorithm. Inside this subsystem, the Luminance Segmentation and Color Segmentation subsystems perform background subtraction using the intensity and color data. The example combines these two segmentation results using a binary OR operator. The Blob Analysis block computes statistics of the objects present in the scene.

The Abandoned Object Tracker subsystem, shown below, uses the object statistics to determine which objects are stationary. To view the contents of this subsystem, right-click the subsystem and select Look Under Mask. To view the tracking algorithm details, double-click the Abandoned Object Tracker block. The MATLAB® code in this block is an example of how to implement your custom code to augment Computer Vision System Toolbox™ functionality.
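The tracker code itself is not reproduced in this topic, but its matching idea, comparing each detected blob against a stored track by area and centroid, can be sketched as follows. This is only one plausible reading of the percent-change thresholds (areaChangeFraction and centroidChangeFraction in the MATLAB version below), not the shipped tracker code, and all values here are hypothetical.

% Hypothetical values for one detected blob and one stored track.
blobArea      = 520;         trackArea     = 500;          % pixels
blobCentroid  = [212 148];   trackCentroid = [210 150];    % [x y]
areaChangeFraction     = 13;   % maximum allowed area change, percent
centroidChangeFraction = 18;   % maximum allowed centroid change, percent

% Illustrative matching test: treat the blob as the same object if both its
% area and its centroid have changed by less than the allowed percentages.
areaChange    = 100 * abs(blobArea - trackArea) / trackArea;
centroidShift = 100 * norm(blobCentroid - trackCentroid) / norm(trackCentroid);
isSameObject  = (areaChange < areaChangeFraction) && (centroidShift < centroidChangeFraction)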

Abandoned Object Detection Results

The All Objects window marks the region of interest (ROI) with a yellow box and all detected objects with green boxes.

The Threshold window shows the result of the background subtraction in the ROI.

The Abandoned Objects window highlights the abandoned objects with a red box.

Abandoned Object Detection

This example shows how to track objects at a train station and to determine which ones remain stationary. Abandoned objects in public areas concern authorities since they might pose a security risk. Algorithms, such as the one used in this example, can be used to assist security officers monitoring live surveillance video by directing their attention to a potential area of interest.

This example illustrates how to use the BlobAnalysis System object to identify objects and track them. The example implements this algorithm using the following steps:

  • Extract a region of interest (ROI), thus eliminating video areas that are unlikely to contain abandoned objects.

  • Perform video segmentation using background subtraction.

  • Calculate object statistics using the blob analysis System object.

  • Track objects based on their area and centroid statistics.

  • Visualize the results.

Initialize Required Variables and System Objects

Use these next sections of code to initialize the required variables and System objects.

% Rectangular ROI [x y width height], where [x y] is the upper-left corner of the ROI
roi = [100 80 360 240];
% Maximum number of objects to track
maxNumObj = 200;
% Number of frames that an object must remain stationary before an alarm is
% raised
alarmCount = 45;
% Maximum number of frames that an abandoned object can be hidden before it
% is no longer tracked
maxConsecutiveMiss = 4;
areaChangeFraction = 13;     % Maximum allowable change in object area in percent
centroidChangeFraction = 18; % Maximum allowable change in object centroid in percent
% Minimum ratio between the number of frames in which an object is detected
% and the total number of frames, for that object to be tracked.
minPersistenceRatio = 0.7;
% Offsets for drawing bounding boxes in original input video
PtsOffset = int32(repmat([roi(1), roi(2), 0, 0],[maxNumObj 1]));

Create a VideoFileReader System object to read video from a file.

hVideoSrc = vision.VideoFileReader;
hVideoSrc.Filename = 'viptrain.avi';
hVideoSrc.VideoOutputDataType = 'single';

Create a ColorSpaceConverter System object to convert the RGB image to Y'CbCr format.

hColorConv = vision.ColorSpaceConverter('Conversion', 'RGB to YCbCr');

Create an Autothresholder System object to convert an intensity image to a binary image.

hAutothreshold = vision.Autothresholder('ThresholdScaleFactor', 1.3);

Create a MorphologicalClose System object to fill in small gaps in the detected objects.

hClosing = vision.MorphologicalClose('Neighborhood', strel('square',5));

Create a BlobAnalysis System object to find the area, centroid, and bounding box of the objects in the video.

hBlob = vision.BlobAnalysis('MaximumCount', maxNumObj, 'ExcludeBorderBlobs', true);
hBlob.MinimumBlobArea = 100;
hBlob.MaximumBlobArea = 2500;

Create System objects to display results.

pos = [10 300 roi(3)+25 roi(4)+25];
hAbandonedObjects = vision.VideoPlayer('Name', 'Abandoned Objects', 'Position', pos);
pos(1) = 46+roi(3); % move the next viewer to the right
hAllObjects = vision.VideoPlayer('Name', 'All Objects', 'Position', pos);
pos = [80+2*roi(3) 300 roi(3)-roi(1)+25 roi(4)-roi(2)+25];
hThresholdDisplay = vision.VideoPlayer('Name', 'Threshold', 'Position', pos);

Video Processing Loop

Create a processing loop to perform abandoned object detection on the input video. This loop uses the System objects you instantiated above.

firsttime = true;
while ~isDone(hVideoSrc)
    Im = step(hVideoSrc);

    % Select the region of interest from the original video
    OutIm = Im(roi(2):end, roi(1):end, :);

    YCbCr = step(hColorConv, OutIm);
    CbCr  = complex(YCbCr(:,:,2), YCbCr(:,:,3));

    % Store the first video frame as the background
    if firsttime
        firsttime = false;
        BkgY      = YCbCr(:,:,1);
        BkgCbCr   = CbCr;
    end
    SegY    = step(hAutothreshold, abs(YCbCr(:,:,1)-BkgY));
    SegCbCr = abs(CbCr-BkgCbCr) > 0.05;

    % Fill in small gaps in the detected objects
    Segmented = step(hClosing, SegY | SegCbCr);

    % Perform blob analysis
    [Area, Centroid, BBox] = step(hBlob, Segmented);

    % Call the helper function that tracks the identified objects and
    % returns the bounding boxes and the number of the abandoned objects.
    [OutCount, OutBBox] = videoobjtracker(Area, Centroid, BBox, maxNumObj, ...
       areaChangeFraction, centroidChangeFraction, maxConsecutiveMiss, ...
       minPersistenceRatio, alarmCount);

    % Display the abandoned object detection results
    Imr = insertShape(Im,'FilledRectangle',OutBBox+PtsOffset,...
        'Color','red','Opacity',0.5);
    % insert number of abandoned objects in the frame
    Imr = insertText(Imr, [1 1], OutCount);
    step(hAbandonedObjects, Imr);

    BlobCount = size(BBox,1);

    BBoxOffset = BBox + int32(repmat([roi(1) roi(2) 0  0],[BlobCount 1]));
    Imr = insertShape(Im,'Rectangle',BBoxOffset,'Color','green');

    % Display all the detected objects

    % insert number of all objects in the frame
    Imr = insertText(Imr, [1 1], OutCount);
    Imr = insertShape(Imr,'Rectangle',roi);
    step(hAllObjects, Imr);

    % Display the segmented video
    SegBBox = PtsOffset;
    SegBBox(1:BlobCount,:) = BBox;
    SegIm = insertShape(double(repmat(Segmented,[1 1 3])),'Rectangle', SegBBox,'Color','green');
    step(hThresholdDisplay, SegIm);
end

release(hVideoSrc);

The Abandoned Objects window highlights the abandoned objects with a red box. The All Objects window marks the region of interest (ROI) with a yellow box and all detected objects with green boxes. The Threshold window shows the result of the background subtraction in the ROI.

Annotate Video Files with Frame Numbers

You can use the vision.TextInserter System object in MATLAB, or the Insert Text block in a Simulink® model, to overlay text on video streams. In this Simulink model example, you add a running count of the number of video frames to a video using the Insert Text block. The model contains the From Multimedia File block to import the video into the Simulink model, a Frame Counter block to count the number of frames in the input video, and two Video Viewer blocks to view the original and annotated videos.
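The rest of this section walks through the Simulink model, but the same idea can be sketched in a few lines of MATLAB. The sketch below is illustrative only: it uses the insertText function rather than the Insert Text block, and it assumes 'viptrain.avi' as the input file.

% Sketch: overlay a running frame count on each frame of a video
% ('viptrain.avi' is an assumed input; any video file works).
reader = vision.VideoFileReader('viptrain.avi');
viewer = vision.VideoPlayer('Name', 'Annotated Video');
frameCount = 0;
while ~isDone(reader)
    frame = step(reader);
    frameCount = frameCount + 1;
    % [2 85] is the [x y] text location, matching the block settings below
    frame = insertText(frame, [2 85], sprintf('Source frame: %d', frameCount));
    step(viewer, frame);
end
release(reader);
release(viewer);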

You can open the example model by typing

ex_vision_annotate_video_file_with_frame_numbers

on the MATLAB® command line.

  1. Run your model.

  2. The model displays the original and annotated videos.

Color Formatting

For this example, the color format for the video was set to Intensity, so the color value for the text was set to a scaled value. If you instead set the color format to RGB, the color value must match that format and be specified as a 3-element vector.

Inserting Text

Use the Insert Text block to annotate the video stream with a running frame count. Set the block parameters as follows:

  • Main pane, Text = ['Frame count' sprintf('\n') 'Source frame: %d']

  • Main pane, Color value = 1

  • Main pane, Location [x y] = [2 85]

  • Font pane, Font face = LucindaTypewriterRegular

By setting the Text parameter to ['Frame count' sprintf('\n') 'Source frame: %d'], you are asking the block to print Frame count on one line and Source frame: on the next line. Because you specified %d, an ANSI C printf-style format specification, the Variables port appears on the block. The block takes the port input in decimal form and substitutes this input for the %d in the string. You used the Location [x y] parameter to specify where to print the text. In this case, the location is 85 rows down and 2 columns over from the top-left corner of the image.
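For example, evaluating the same format string in MATLAB with an illustrative frame number of 42 shows the text that the block renders:

% The block substitutes the Variables port value for %d; for example:
txt = sprintf(['Frame count' sprintf('\n') 'Source frame: %d'], 42);
disp(txt)
% Frame count
% Source frame: 42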

Configuration Parameters

Set the configuration parameters. Open the Configuration dialog box by selecting Model Configuration Parameters from the Simulation menu. Set the parameters as follows:

  • Solver pane, Stop time = inf

  • Solver pane, Type = Fixed-step

  • Solver pane, Solver = Discrete (no continuous states)

Draw Shapes and Lines

When you specify the type of shape to draw, you must also specify its location on the image. The following tables show the format of the points (PTS) input for the different shapes.

Rectangle

Shape: Single Rectangle
PTS input: Four-element row vector [x y width height], where

  • x and y are the one-based coordinates of the upper-left corner of the rectangle.

  • width and height are the width, in pixels, and height, in pixels, of the rectangle. The values of width and height must be greater than 0.

Shape: M Rectangles
PTS input: M-by-4 matrix, where each row of the matrix corresponds to a different rectangle and is of the same form as the vector for a single rectangle.
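The same [x y width height] format is accepted by the insertShape function, so you can try it directly in MATLAB. The sketch below is illustrative only; the image and coordinates are arbitrary.

% Sketch: draw one rectangle and an M-by-4 set of rectangles with insertShape.
I = imread('peppers.png');
oneRect  = insertShape(I, 'Rectangle', [100 80 120 90], 'Color', 'green');
manyRect = insertShape(I, 'Rectangle', [10 10 50 40; 200 150 80 60], 'Color', 'yellow');
imshow(manyRect)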

Line and Polyline

You can draw one or more lines, and one or more polylines. A polyline contains a series of connected line segments.

Shape: Single Line
PTS input: Four-element row vector [x1 y1 x2 y2], where

  • x1 and y1 are the coordinates of the beginning of the line.

  • x2 and y2 are the coordinates of the end of the line.

Shape: M Lines
PTS input: M-by-4 matrix, where each row of the matrix corresponds to a different line and is of the same form as the vector for a single line.

Shape: Single Polyline with (L-1) Segments
PTS input: Vector of size 2L, where L is the number of vertices, with format [x1, y1, x2, y2, ..., xL, yL], where

  • x1 and y1 are the coordinates of the beginning of the first line segment.

  • x2 and y2 are the coordinates of the end of the first line segment and the beginning of the second line segment.

  • xL and yL are the coordinates of the end of the (L-1)th line segment.

The polyline always contains (L-1) segments because the first and last vertex points do not connect. The block produces an error message when the number of rows is less than two or not a multiple of two.

Shape: M Polylines with (L-1) Segments
PTS input: M-by-2L matrix, where each row of the matrix corresponds to a different polyline and is of the same form as the vector for a single polyline. If a polyline has fewer than (L-1) segments, fill the remaining entries of its row by repeating the coordinates of the last vertex.

The block produces an error message if the number of rows is less than two or not a multiple of two.
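The insertShape function accepts the same line and polyline formats; the sketch below is purely illustrative, with arbitrary coordinates.

% Sketch: two lines (M-by-4) and one polyline ([x1 y1 x2 y2 ... xL yL]).
I = imread('peppers.png');
I = insertShape(I, 'Line', [10 10 200 50; 30 120 250 200], 'Color', 'red');
I = insertShape(I, 'Line', [20 300 120 250 260 310 350 280], 'Color', 'cyan');
imshow(I)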

Polygon

You can draw one or more polygons.

Shape: Single Polygon with L line segments
PTS input: Row vector of size 2L, where L is the number of vertices, with format [x1 y1 x2 y2 ... xL yL], where

  • x1 and y1 are the coordinates of the beginning of the first line segment.

  • x2 and y2 are the coordinates of the end of the first line segment and the beginning of the second line segment.

  • xL and yL are the coordinates of the end of the (L-1)th line segment and the beginning of the Lth line segment.

The block connects [x1 y1] to [xL yL] to complete the polygon. The block produces an error if the number of rows is negative or not a multiple of two.

Shape: M Polygons, where L is the largest number of line segments in any polygon
PTS input: M-by-2L matrix, where each row of the matrix corresponds to a different polygon and is of the same form as the vector for a single polygon. If some polygons are shorter than others, repeat the ending coordinates to fill the polygon matrix.

The block produces an error message if the number of rows is less than two or is not a multiple of two.
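With insertShape, the same row-vector format draws a polygon in MATLAB; the triangle below is an arbitrary illustration.

% Sketch: one polygon with L = 3 vertices, in [x1 y1 x2 y2 x3 y3] format.
I = imread('peppers.png');
I = insertShape(I, 'Polygon', [100 50 200 200 40 180], 'Color', 'magenta');
imshow(I)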

Circle

You can draw one or more circles.

Shape: Single Circle
PTS input: Three-element row vector [x y radius], where

  • x and y are coordinates for the center of the circle.

  • radius is the radius of the circle, which must be greater than 0.

Shape: M Circles
PTS input: M-by-3 matrix, where each row of the matrix corresponds to a different circle and is of the same form as the vector for a single circle.
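The insertShape function also accepts the [x y radius] circle format; the values below are arbitrary illustrations.

% Sketch: one circle and an M-by-3 set of circles with insertShape.
I = imread('peppers.png');
I = insertShape(I, 'Circle', [150 120 40], 'Color', 'white');
I = insertShape(I, 'Circle', [60 60 20; 300 200 55], 'LineWidth', 3);
imshow(I)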
