Acquire live image data from image acquisition device
Image Acquisition Toolbox
The From Video Device block lets you acquire image and video data streams from image acquisition devices, such as cameras and frame grabbers, in order to bring the image data into a Simulink® model. The block also lets you configure and preview the acquisition directly from Simulink.
The From Video Device block opens, initializes, configures, and controls an acquisition device. The opening, initializing, and configuring occur once, at the start of the model's execution. During the model's run time, the block buffers image data, delivering one image frame for each simulation time step.
The block has no input ports. You can configure the block to have either one output port, or three output ports corresponding to the uncompressed color bands, such as red, green, and blue, or Y, Cb, Cr. The previous figure shows both configurations.
The From Video Device block supports the use of Simulink Accelerator mode. This feature speeds up the execution of Simulink models.
The From Video Device block supports the use of model referencing. This feature lets your model include other Simulink models as modular components.
For more information on these features, see the Simulink documentation.
The From Video Device block supports the use of code generation along with the packNGo function to group required source code and dependent shared libraries. See the next section.
Note: For an in-depth example of using this block, see Saving Video Data to a File.
The From Video Device block supports generating code from the block. This enables models containing the From Video Device block to run successfully in Accelerator, Rapid Accelerator, and Deployed modes.
You can use the Image Acquisition Toolbox™, Simulink Coder™, and Embedded Coder® products together to generate code (on the host end) that you can use to implement your model for a practical application. For more information on code generation, see the Simulink Coder documentation.
Note: If you are using a GigE Vision camera, you do not need to install GenICam™ to use the GigE adaptor, because it is included in the toolbox installation. However, if you use the From Video Device block and generate code, you must install GenICam to run the generated application outside of MATLAB.
The From Video Device block generates code with limited portability. The block uses precompiled shared libraries, such as DLLs, to support I/O for specific types of devices. The Simulink Coder software provides functions to help you set up and manage the build information for your models. One of the Build Information functions that Simulink Coder provides is packNGo. This function allows you to package model code and dependent shared libraries into a zip file for deployment. The target system does not need to have MATLAB® installed, but it does need to be supported by MATLAB.
The block supports the use of the packNGo function. Source-specific properties for your device are honored when code is generated. The generated code compiles with both C and C++ compilers.
To set up the packNGo function, set the model's PostCodeGenCommand parameter:

set_param(gcs, 'PostCodeGenCommand', 'packNGo(buildInfo)');

In this example, gcs is the current model that you wish to build. Building the model creates a zip file with the same name as the model. You can move this zip file to another machine, and the source code in the zip file can be built to create an executable that runs independently of MATLAB and Simulink.
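The steps above can be sketched as a short script. This is a minimal sketch, assuming a model named 'myModel' (hypothetical; substitute your own model name):

```matlab
% Sketch of a packNGo deployment setup. 'myModel' is a hypothetical
% model name; substitute the name of your own model.
model = 'myModel';
load_system(model);

% Run packNGo automatically after code generation so the generated
% source and its dependent shared libraries are zipped together.
set_param(model, 'PostCodeGenCommand', 'packNGo(buildInfo)');

% Build the model; this generates code and produces a zip file named
% after the model in the code generation folder.
slbuild(model);
```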
For more information on the packNGo function, see the Simulink Coder documentation.
Note: The From Video Device block supports the use of Simulink Rapid Accelerator mode and code generation on Windows® platforms. Code generation is also supported on Linux®, but Rapid Accelerator mode is not.
Note: If you get a "Device in use" error message when using the block with certain hardware, such as Matrox®, close any programs that are using the hardware, and then try using the block again.
On Linux platforms, you need to add the directory where you unzip the libraries to the LD_LIBRARY_PATH environment variable.
In the Source Block Parameters dialog box, the options that appear are dependent on the device you are using. The first diagram illustrates the fields that may appear if your device supports hardware triggering and Bayer Sensor Alignment as a color space option.
The second diagram illustrates the options that may appear if your device supports using either one output port or multiple output ports for the color bands (the Ports mode option). Ports mode is visible if the selected device and format settings can output color data.
The following fields appear in the Source Block Parameters dialog box. Some fields are device dependent; if your selected device does not support a feature, the corresponding field does not appear in the dialog box.
The image acquisition device to which you want to connect. The items in the list vary, depending on which devices you have connected to your system. All video capture devices supported by the Image Acquisition Toolbox software are supported by the block.
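You can enumerate the devices available on your system programmatically with the toolbox's imaqhwinfo function. In this sketch, 'winvideo' is only an example adaptor name; use one of the adaptors reported for your system:

```matlab
% List the adaptors (device driver interfaces) installed on this system.
info = imaqhwinfo;
disp(info.InstalledAdaptors)

% Query the devices available through one adaptor. 'winvideo' is an
% example; use an adaptor name from the list above.
adaptorInfo = imaqhwinfo('winvideo');
for k = 1:numel(adaptorInfo.DeviceInfo)
    d = adaptorInfo.DeviceInfo(k);
    fprintf('Device %d: %s\n', d.DeviceID, d.DeviceName);
    disp(d.SupportedFormats)  % the formats shown in the Video format list
end
```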
Shows the video formats supported by the selected device. This list varies with each device. If your device supports the use of camera files, From camera file will be one of the choices in the list.
This option only appears if you select a device that supports camera files. You can select From camera file from the Video format field, and enter the path and file name, or use the Browse button to locate the camera file.
The available input sources for the specified device and format. You can use the Edit properties button to edit the source properties. That will open the Property Inspector.
Edits video source device-specific properties, such as brightness and contrast. It opens the Property Inspector. The properties that are listed vary by device. Properties that can be edited are indicated by a pencil icon or a drop-down list in the table. Properties that are grayed out cannot be edited. When you close the Property Inspector, your edits are saved.
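The same device-specific source properties can also be inspected and set from the MATLAB command line with the toolbox's videoinput and getselectedsource functions. In this sketch, the adaptor name, device ID, and the Brightness property are hardware-dependent examples, not guaranteed to exist on your device:

```matlab
% Connect to a device. 'winvideo' and device ID 1 are examples;
% substitute values reported by imaqhwinfo for your system.
vid = videoinput('winvideo', 1);
src = getselectedsource(vid);

% List the device-specific source properties and their constraints.
disp(propinfo(src))

% Set a property, if the device exposes it (Brightness is an example
% here; not all devices have it).
% src.Brightness = 100;

delete(vid)  % release the device when done
```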
This option only appears if the selected device supports hardware triggering. Select the check box to enable hardware triggering. Once enabled, you can select the Trigger configuration.
This option only appears if the selected device supports hardware triggering. Check the Enable hardware triggering box to enable it. Once enabled, you can select the Trigger configuration. The configuration choices are listed by trigger source/trigger condition. For example, TTL/fallingEdge means that TTL is the trigger source and the falling edge of the signal is the condition that triggers the hardware.
Use this field to input a row vector that specifies the region of acquisition in the video image. The format is [row, column, height, width]. The default values for row and column are 0. The default values for height and width are set to the maximum allowable values, as indicated by the video format's resolution. Therefore, you only need to change the values in this field if you do not want to capture the full image size.
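The region of acquisition can also be set on the block programmatically. In this sketch, the block path and the 'ROI' dialog parameter name are assumptions; confirm the parameter name for your installation with get_param(blockPath, 'DialogParameters'):

```matlab
% Hypothetical block path and assumed 'ROI' dialog parameter name;
% confirm the parameter name for your installation with:
%   get_param(blockPath, 'DialogParameters')
blockPath = 'myModel/From Video Device';
set_param(blockPath, 'ROI', '[0 0 240 320]');  % [row column height width]
```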
Use this field to select the color space for devices that support color. Possible values are rgb, grayscale, and YCbCr. The default value is rgb. If your device supports Bayer sensor alignment, a fourth value of bayer is also available.
This field is only visible if your device supports Bayer sensor alignment. It becomes activated when you set the Output color space field to bayer. Use this to set the 2-by-2 sensor alignment. Possible values are grbg, gbrg, rggb, and bggr. The default value is grbg.
Preview the video image. It opens the Video Preview window that is part of the Image Acquisition Toolbox software. If you change something in the Source Block Parameters dialog box while the preview is running, the image will adjust accordingly. This lets you set up your image acquisition to the way you want it to be acquired by the block when you run the model.
Specify the sample time of the block during the simulation. This is the rate at which the block is executed during simulation. The default is 1/30.
Note: The block sample time does not set the frame rate on the device that is used in simulation. Frame rate is determined by the video format specified (standard format or from a camera file). Some devices even list frame rate as a device-specific source property. Frame rate is not related to the Block sample time option in the dialog. Block sample time defines the rate at which the block executes during simulation time.
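To see the device-side frame rate that the note refers to, you can query the device's source properties from MATLAB. The adaptor name and the presence of a FrameRate property in this sketch are hardware-dependent assumptions:

```matlab
% Frame rate, where available, is a device-specific source property,
% separate from the Block sample time. 'winvideo'/device 1 are examples.
vid = videoinput('winvideo', 1);
src = getselectedsource(vid);

% propinfo lists all source properties; a FrameRate entry appears only
% if the device exposes one.
props = propinfo(src);
disp(fieldnames(props))

delete(vid)  % release the device when done
```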
Used to specify either a single output port for all color spaces, or one port for each band (for example, R, G, B). When you select One multidimensional signal, the output signal is combined into one line consisting of signal information for all color signals. Select Separate color signals if you want to use three ports corresponding to the uncompressed red, green, and blue color bands. Note that some devices will use YCbCr for the separate color signals. The block acquires data in the device's default color space.
The image data type when the block outputs frames. This data type indicates how image frames are output from the block to Simulink. It supports all MATLAB data types, and single is the default.
Kinect® for Windows Metadata Output Ports
This is used to return skeleton information in Simulink during simulation and code generation. You can output metadata information in normal, accelerator, and deployed simulation modes. Each metadata item in the Selected Metadata list becomes an output port on the block.
If you are using a Kinect for Windows camera, and you select the Depth sensor as your Device and Depth Source as your Video source, the Metadata Output Ports section appears.
The Metadata Output Ports section lists the metadata that is associated with the Kinect Depth sensor.
This section is only visible when a Kinect Depth sensor is selected. The All Metadata list shows which metadata are available. The Selected Metadata list shows which metadata items will be returned to Simulink. This is empty by default. To use one of the metadata, add it from the All to the Selected list by selecting it in the All list and clicking the Add button (blue arrow icon). The Remove button (red X icon) removes an item from the Selected Metadata list. You can also use the Move up and Move down buttons to change the order of items in the Selected list. The list supports multi-select as well.
You can see in the example above that three metadata items have been put in the Selected list. When you click Apply, output ports are created on the block for these metadata, as shown here. The first port is the depth frame.
For descriptions and information on these metadata fields and using Kinect for Windows with the Image Acquisition Toolbox, see Acquiring Image and Skeletal Data Using Kinect.
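A comparable depth-plus-metadata acquisition can be sketched at the MATLAB command line with the toolbox's kinect adaptor. Device ID 2 (conventionally the depth sensor), the TrackingMode source property, and the IsSkeletonTracked metadata field are taken from the toolbox's Kinect support and should be checked against your installation:

```matlab
% Acquire a depth frame and skeletal metadata from a Kinect for Windows
% sensor. Device ID 2 is conventionally the depth sensor.
vid = videoinput('kinect', 2);
src = getselectedsource(vid);
src.TrackingMode = 'Skeleton';   % enable skeleton tracking metadata

vid.FramesPerTrigger = 1;
start(vid);

% getdata returns the frames along with a metadata structure array.
[depthFrame, ts, metadata] = getdata(vid);
disp(metadata.IsSkeletonTracked)  % one logical flag per possible skeleton

stop(vid);
delete(vid)  % release the device when done
```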