
vision.BlockMatcher System object

Estimate motion between images or video frames


The BlockMatcher object estimates motion between images or video frames.


Starting in R2016b, instead of using the step method to perform the operation defined by the System object™, you can call the object with arguments, as if it were a function. For example, y = step(obj,x) and y = obj(x) perform equivalent operations.
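For example, the two call syntaxes can be used interchangeably on the same block matcher (a minimal sketch; the random frames are illustrative data only):

```matlab
% The two call syntaxes are equivalent (R2016b and later).
hbm = vision.BlockMatcher('ReferenceFrameSource','Input port');
img1 = rand(64);                      % example frames (assumed data)
img2 = circshift(img1,[2 2]);
y1 = step(hbm,img1,img2);             % step syntax
release(hbm);                         % allow the object to be reconfigured
y2 = hbm(img1,img2);                  % function-like syntax, same result
```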


H = vision.BlockMatcher returns a System object, H, that estimates motion between two images or two video frames. The object performs this estimation using a block matching method by moving a block of pixels over a search region.

H = vision.BlockMatcher(Name,Value) returns a block matcher System object, H, with each specified property set to the specified value. You can specify additional name-value pair arguments in any order as (Name1,Value1,...,NameN,ValueN).

Code Generation Support
Supports code generation: No
Supports MATLAB® Function block: No
See System Objects in MATLAB Code Generation (MATLAB Coder) and Code Generation Support, Usage Notes, and Limitations.



ReferenceFrameSource
Reference frame source

Specify the source of the reference frame as one of Input port | Property. When you set the ReferenceFrameSource property to Input port, you must supply a reference frame input to the step method of the block matcher object. The default is Property.
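The two settings change the number of inputs to the object, as sketched below (the random frames are illustrative data only):

```matlab
currentFrame   = rand(64);                 % example frames (assumed data)
referenceFrame = circshift(currentFrame,[1 1]);

% ReferenceFrameSource = 'Property' (default): single input; the object
% uses an internally delayed frame as the reference.
hbm1 = vision.BlockMatcher;
m1 = step(hbm1,currentFrame);

% ReferenceFrameSource = 'Input port': supply the reference frame yourself.
hbm2 = vision.BlockMatcher('ReferenceFrameSource','Input port');
m2 = step(hbm2,currentFrame,referenceFrame);
```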


ReferenceFrameDelay
Number of frames between reference and current frames

Specify the number of frames between the reference frame and the current frame as a scalar integer value greater than or equal to zero. This property applies when you set the ReferenceFrameSource property to Property.

The default is 1.


SearchMethod
Best match search method

Specify how to locate the block of pixels in frame k+1 that best matches the block of pixels in frame k. You can specify the search method as Exhaustive or Three-step. If you set this property to Exhaustive, the block matcher object selects the location of the block of pixels in frame k+1 that best matches the block in frame k by moving the block over the search region one pixel at a time. This approach is computationally expensive.

If you set this property to Three-step, the block matcher object searches for the block of pixels in frame k+1 that best matches the block of pixels in frame k using a steadily decreasing step size. The object begins with a step size approximately equal to half the maximum search range. In each step, the object compares the central point of the search region to eight search points located on the boundaries of the region and moves the central point to the search point whose value is closest to that of the central point. The object then reduces the step size by half and begins the process again. This option is less computationally expensive, though it sometimes does not find the optimal solution.
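The three-step refinement described above can be sketched in plain MATLAB. This is a simplified illustration using the MAD criterion for a single block, not the object's actual implementation; the function and variable names are assumptions:

```matlab
function best = threeStepSearch(refBlock,frame,start,maxDisp)
% Simplified three-step search: find the top-left corner of the block in
% FRAME that best matches REFBLOCK, starting from position START.
stepSize = max(1,floor(maxDisp/2));   % begin near half the search range
best = start;
[bh,bw] = size(refBlock);
while stepSize >= 1
    % central point plus the eight surrounding boundary search points
    offsets = stepSize*[0 0; -1 -1; -1 0; -1 1; 0 -1; 0 1; 1 -1; 1 0; 1 1];
    candidates = best + offsets;      % implicit expansion (R2016b+)
    bestCost = inf;
    bestPt = best;
    for k = 1:size(candidates,1)
        c = candidates(k,:);
        if any(c < 1) || c(1)+bh-1 > size(frame,1) || c(2)+bw-1 > size(frame,2)
            continue                  % skip out-of-bounds candidates
        end
        blk = frame(c(1):c(1)+bh-1, c(2):c(2)+bw-1);
        cost = mean(abs(blk(:)-refBlock(:)));   % MAD match criterion
        if cost < bestCost
            bestCost = cost;
            bestPt = c;
        end
    end
    best = bestPt;
    stepSize = floor(stepSize/2);     % halve the step and repeat
end
end
```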

The default is Exhaustive.


BlockSize
Block size

Specify the size of the block in pixels.

The default is [17 17].


Overlap
Input image subdivision overlap

Specify the overlap (in pixels) of two subdivisions of the input image.

The default is [0 0].


MaximumDisplacement
Maximum displacement search

Specify the maximum number of pixels that any center pixel in a block of pixels can move, from image to image or from frame to frame. The block matcher object uses this property to determine the size of the search region.
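A short sketch of how this property relates to the search region. The relationship BlockSize + 2*MaximumDisplacement per dimension is an assumption for illustration, not a documented formula:

```matlab
% Search region implied by a block size and maximum displacement
% (assumed relationship: BlockSize + 2*MaximumDisplacement per dimension).
blockSize = [17 17];                    % default BlockSize
maxDisp   = [7 7];                      % default MaximumDisplacement
searchRegion = blockSize + 2*maxDisp;   % [31 31]
```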

The default is [7 7].


MatchCriteria
Match criteria between blocks

Specify how the System object measures the similarity of the block of pixels between two frames or images. Specify as one of Mean square error (MSE) | Mean absolute difference (MAD). The default is Mean square error (MSE).
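The two criteria correspond to the following direct computations for two same-size blocks (the blocks here are illustrative data only):

```matlab
% The two match criteria, computed directly for two same-size blocks.
b1 = magic(17)/289;                 % example 17x17 blocks (assumed data)
b2 = circshift(b1,[1 0]);
mseVal = mean((b1(:)-b2(:)).^2);    % Mean square error (MSE)
madVal = mean(abs(b1(:)-b2(:)));    % Mean absolute difference (MAD)
```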


OutputValue
Motion output form

Specify the desired form of motion output as one of Magnitude-squared | Horizontal and vertical components in complex form. The default is Magnitude-squared.
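In the complex form, each motion value encodes the horizontal component in its real part and the vertical component in its imaginary part; the magnitude-squared form then corresponds to the squared modulus (a sketch with an assumed example value):

```matlab
% Relationship between the two output forms for one motion value.
v = 3 + 4i;                        % example motion value (assumed)
magSq = real(v)^2 + imag(v)^2;     % 25, equal to abs(v)^2
```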

Fixed-Point Properties


step
Compute motion of input image

Common to All System Objects

clone
Create System object with same property values

getNumInputs
Expected number of inputs to a System object

getNumOutputs
Expected number of outputs of a System object

isLocked
Check locked states of a System object (logical)

release
Allow System object property value changes



Read an RGB image and convert it to grayscale.

img1 = im2double(rgb2gray(imread('onion.png')));

Create a block matcher and an alpha blender object.

hbm = vision.BlockMatcher('ReferenceFrameSource',...
        'Input port','BlockSize',[35 35]);
hbm.OutputValue = 'Horizontal and vertical components in complex form';
halphablend = vision.AlphaBlender;

Offset the first image by [5 5] pixels to create a second image.

img2 = imtranslate(img1,[5,5]);

Compute motion for the two images.

motion = step(hbm,img1,img2);

Blend the two images.

img12 = step(halphablend,img2,img1);

Use a quiver plot to show the direction of motion on the images.

[X,Y] = meshgrid(1:35:size(img1,2),1:35:size(img1,1));
imshow(img12)
hold on
quiver(X(:),Y(:),real(motion(:)),imag(motion(:)),0)
hold off


This object implements the algorithm, inputs, and outputs described on the Block Matching block reference page. The object properties correspond to the block parameters.

Introduced in R2012a