vision.BlockMatcher System object

Package: vision

Estimate motion between images or video frames

Description

The BlockMatcher object estimates motion between images or video frames.

    Note:   Starting in R2016b, instead of using the step method to perform the operation defined by the System object™, you can call the object with arguments, as if it were a function. For example, y = step(obj,x) and y = obj(x) perform equivalent operations.
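For example, with a block matcher whose reference frame comes from an input port, the two calls below are equivalent. This is a minimal sketch; img1 and img2 are assumed to be two grayscale images of the same size.

hbm = vision.BlockMatcher('ReferenceFrameSource','Input port');
motion1 = step(hbm,img1,img2);   % step syntax
motion2 = hbm(img1,img2);        % equivalent function-like syntax (R2016b and later)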

Construction

H = vision.BlockMatcher returns a System object, H, that estimates motion between two images or two video frames. The object performs this estimation using a block matching method by moving a block of pixels over a search region.

H = vision.BlockMatcher(Name,Value) returns a block matcher System object, H, with each specified property set to the specified value. You can specify additional name-value pair arguments in any order as (Name1, Value1,...,NameN,ValueN).
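For example, the following call creates a block matcher that uses 21-by-21 blocks and a three-step search. This is a sketch only; the property values shown are arbitrary.

hbm = vision.BlockMatcher('BlockSize',[21 21], ...
        'SearchMethod','Three-step');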

Code Generation Support
Supports Code Generation: No
Supports MATLAB® Function block: No
For more information, see System Objects in MATLAB Code Generation and Code Generation Support, Usage Notes, and Limitations.

Properties

ReferenceFrameSource

Reference frame source

Specify the source of the reference frame as one of Input port | Property. When you set the ReferenceFrameSource property to Input port, you must provide a reference frame as an input to the step method of the block matcher object. The default is Property.
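The number of inputs to the step method depends on this property. A minimal sketch, with img and refImg standing in for two frames of the same size:

% Reference frame supplied explicitly through the input port.
hbm = vision.BlockMatcher('ReferenceFrameSource','Input port');
motion = step(hbm,img,refImg);

% Default: the object stores earlier inputs and uses a delayed frame as the reference.
hbm2 = vision.BlockMatcher;
motion = step(hbm2,img);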

ReferenceFrameDelay

Number of frames between reference and current frames

Specify the number of frames between the reference frame and the current frame as a scalar integer value greater than or equal to zero. This property applies when you set the ReferenceFrameSource property to Property.

The default is 1.
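For example, with a delay of 2, the object compares each input frame to the frame supplied two calls earlier. A minimal sketch, assuming frames is a cell array of grayscale video frames:

hbm = vision.BlockMatcher('ReferenceFrameDelay',2);
for k = 1:numel(frames)
    motion = step(hbm,frames{k});   % reference is the frame from two calls back, once available
end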

SearchMethod

Best match search method

Specify how to locate the block of pixels in frame k+1 that best matches the block of pixels in frame k. You can specify the search method as Exhaustive or Three-step. If you set this property to Exhaustive, the block matcher object selects the location of the block of pixels in frame k+1. The block matcher does so by moving the block over the search region one pixel at a time, which is computationally expensive.

If you set this property to Three-step, the block matcher object searches for the block of pixels in frame k+1 that best matches the block of pixels in frame k using a steadily decreasing step size. The object begins with a step size approximately equal to half the maximum search range. In each step, the object compares the central point of the search region to eight search points located on the boundaries of the region and moves the central point to the search point whose value is closest to that of the central point. The object then reduces the step size by half and begins the process again. This option is less computationally expensive, though it sometimes does not find the optimal solution.

The default is Exhaustive.
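The following stand-alone function sketches the three-step idea for a single block, using a mean square error cost. It is illustrative only, not the object's internal implementation; the function name, the scalar blkSize and maxDisp inputs, and the boundary handling are assumptions, and ref and cur are assumed to be double-precision grayscale images.

function [dy,dx] = threeStepSearchSketch(ref,cur,blkRow,blkCol,blkSize,maxDisp)
% Locate the block of cur whose top-left corner is (blkRow,blkCol) within ref.
block = cur(blkRow:blkRow+blkSize-1, blkCol:blkCol+blkSize-1);
dy = 0; dx = 0;                         % current center of the search
stepSize = max(1,round(maxDisp/2));     % start near half the maximum search range
while stepSize >= 1
    bestCost = Inf; bestDy = dy; bestDx = dx;
    for r = -stepSize:stepSize:stepSize        % central point plus 8 boundary points
        for c = -stepSize:stepSize:stepSize
            candRow = blkRow + dy + r;
            candCol = blkCol + dx + c;
            if candRow < 1 || candCol < 1 || ...
               candRow+blkSize-1 > size(ref,1) || candCol+blkSize-1 > size(ref,2)
                continue                       % candidate block falls outside the image
            end
            cand = ref(candRow:candRow+blkSize-1, candCol:candCol+blkSize-1);
            cost = mean((block(:)-cand(:)).^2);    % MSE match criterion
            if cost < bestCost
                bestCost = cost; bestDy = dy + r; bestDx = dx + c;
            end
        end
    end
    dy = bestDy; dx = bestDx;           % move the center to the best search point
    stepSize = floor(stepSize/2);       % halve the step size and repeat
end
end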

BlockSize

Block size

Specify the size of the block in pixels.

The default is [17 17].

Overlap

Input image subdivision overlap

Specify the overlap (in pixels) of two subdivisions of the input image.

The default is [0 0].
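For example, setting the overlap to [8 8] makes neighboring subdivisions share 8 pixels along each dimension. A minimal sketch; the value shown is arbitrary:

hbm = vision.BlockMatcher('BlockSize',[17 17],'Overlap',[8 8]);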

MaximumDisplacement

Maximum displacement search

Specify the maximum number of pixels that any center pixel in a block of pixels can move, from image to image or from frame to frame. The block matcher object uses this property to determine the size of the search region.

The default is [7 7].
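For example, to track larger motions you can widen the allowed displacement, at the cost of a larger search region and more computation. A minimal sketch; the value shown is arbitrary:

hbm = vision.BlockMatcher('MaximumDisplacement',[15 15]);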

MatchCriteria

Match criteria between blocks

Specify how the System object measures the similarity of the block of pixels between two frames or images. Specify as one of Mean square error (MSE) | Mean absolute difference (MAD). The default is Mean square error (MSE).
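The two criteria differ only in how the pixel differences within a block are aggregated. A minimal sketch for two same-sized, double-precision blocks a and b:

mse = mean((a(:)-b(:)).^2);     % Mean square error (MSE)
madv = mean(abs(a(:)-b(:)));    % Mean absolute difference (MAD)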

OutputValue

Motion output form

Specify the desired form of motion output as one of Magnitude-squared | Horizontal and vertical components in complex form. The default is Magnitude-squared.
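When you choose the complex form, the real part of each output element is the horizontal component of motion and the imaginary part is the vertical component, as the quiver plot in the example below uses. A minimal sketch:

hbm = vision.BlockMatcher('OutputValue', ...
        'Horizontal and vertical components in complex form');
motion = step(hbm,img);   % img is assumed to be a grayscale frame
u = real(motion);         % horizontal components
v = imag(motion);         % vertical components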


Methods

clone           Create block matcher object with same property values
getNumInputs    Number of expected inputs to step method
getNumOutputs   Number of outputs from step method
isLocked        Locked status for input attributes and nontunable properties
release         Allow property value and input characteristics changes
step            Compute motion of input image

Examples


Read an RGB image and convert it to grayscale.

img1 = im2double(rgb2gray(imread('onion.png')));

Create objects.

htran = vision.GeometricTranslator('Offset',[5 5],...
        'OutputSize','Same as input image');
hbm = vision.BlockMatcher('ReferenceFrameSource',...
        'Input port','BlockSize',[35 35]);
hbm.OutputValue = 'Horizontal and vertical components in complex form';
halphablend = vision.AlphaBlender;
Warning: The vision.GeometricTranslator will be removed in a future release.
Use the imtranslate function with equivalent functionality instead. 

Offset the first image by [5 5] pixels to create the second image.

img2 = step(htran,img1);

Compute motion for the two images.

motion = step(hbm,img1,img2);

Blend two images.

img12 = step(halphablend,img2,img1);

Use a quiver plot to show the direction of motion on the images.

[X Y] = meshgrid(1:35:size(img1,2),1:35:size(img1,1));
imshow(img12);
hold on;
quiver(X(:),Y(:),real(motion(:)),imag(motion(:)),0);
hold off;

Algorithms

This object implements the algorithm, inputs, and outputs described on the Block Matching block reference page. The object properties correspond to the block parameters.

Introduced in R2012a
