
Working with Image Data in MATLAB Workspace

Understanding Image Data

The illustrations in this documentation show the video stream and the contents of the memory buffer as a sequence of individual frames. In reality, each frame is a multidimensional array. The following figure illustrates the format of an individual frame.

Format of an Individual Frame

The following sections describe how the toolbox determines the dimensions, data type, and color space of the image frames it returns. This section also describes several ways to view acquired image data.

Determining the Dimensions of Image Data

The video format used by the image acquisition device is the primary determinant of the width, height, and the number of bands in each image frame. Image acquisition devices typically support multiple video formats. You select the video format when you create the video input object (described in Specifying the Video Format). The video input object stores the video format in the VideoFormat property.

Industry-standard video formats, such as RS170 or PAL, include specifications of the image frame width and height, referred to as the image resolution. For example, the RS170 standard defines the width and height of the image frame as 640-by-480 pixels. Other devices, such as digital cameras, support the definition of many different, nonstandard image resolutions. The video input object stores the video resolution in the VideoResolution property.

Each image frame is three-dimensional; however, the video format determines the number of bands in the third dimension. For color video formats, such as RGB, each image frame has three bands: one each for the red, green, and blue data. Other video formats, such as the grayscale RS170 standard, have only a single band. The video input object stores the size of the third dimension in the NumberOfBands property.

    Note   Because devices typically express video resolution as width-by-height, the toolbox uses this convention for the VideoResolution property. However, when data is brought into the MATLAB® workspace, the image frame dimensions are listed in reverse order, height-by-width, because MATLAB expresses matrix dimensions as row-by-column.

ROIs and Image Dimensions

When you specify a region-of-interest (ROI) in the image being captured, the dimensions of the ROI determine the dimensions of the image frames returned. The VideoResolution property specifies the dimensions of the image data being provided by the device; the ROIPosition property specifies the dimensions of the image frames being logged. See the ROIPosition property reference page for more information.
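
For example, the following sketch configures an ROI smaller than the full video resolution. The 'winvideo' adaptor, device ID, and ROI values are placeholders; substitute values appropriate for your device.

```matlab
% Create a video input object (placeholder adaptor and device ID).
vid = videoinput('winvideo', 1);

% ROIPosition is specified as [xoffset yoffset width height],
% measured from the upper-left corner of the image.
vid.ROIPosition = [100 50 320 240];

% Acquired frames now have the ROI dimensions, not the full
% VideoResolution; size returns height-by-width(-by-bands).
frame = getsnapshot(vid);
size(frame)

delete(vid)
clear vid
```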

Video Format and Image Dimensions

The following example illustrates how video format affects the size of the image frames returned.

  1. Select a video format — Use the imaqhwinfo function to view the list of video formats supported by your image acquisition device. This example shows the video formats supported by a Matrox® Orion frame grabber. The formats are industry standard, such as RS170, NTSC, and PAL. These standards define the image resolution.

    info = imaqhwinfo('matrox');
    
    info.DeviceInfo.SupportedFormats
    
    ans = 
    
      Columns 1 through 4
    
        'M_RS170'    'M_RS170_VIA_RGB'    'M_CCIR'    'M_CCIR_VIA_RGB'
    
      Columns 5 through 8
    
        'M_NTSC'    'M_NTSC_RGB'    'M_NTSC_YC'    'M_PAL'
    
      Columns 9 through 10
    
        'M_PAL_RGB'    'M_PAL_YC'
  2. Create an image acquisition object — This example creates a video input object for a Matrox image acquisition device using the default video format, RS170. To run this example on your system, use the imaqhwinfo function to get the object constructor for your image acquisition device and substitute that syntax for the following code.

    vid = videoinput('matrox',1);
  3. View the video format and video resolution properties — The toolbox creates the object with the default video format. This format defines the video resolution.

    get(vid,'VideoFormat')
    
    ans =
    
       M_RS170
    
    get(vid,'VideoResolution')
    
    ans =
    
       [640 480]
  4. Bring a single frame into the workspace — Call the getsnapshot function to bring a frame into the workspace.

    frame = getsnapshot(vid);

    The dimensions of the returned data reflect the image resolution and the value of the NumberOfBands property.

    vid.NumberOfBands
    
    ans =
    
       1
    
    size(frame)
    
    ans =
    
       480 640
  5. Start the image acquisition object — Call the start function to start the image acquisition object.

    start(vid)

    The object executes an immediate trigger and begins acquiring frames of data.

  6. Bring multiple frames into the workspace — Call the getdata function to bring multiple image frames into the MATLAB workspace.

    data = getdata(vid,10);

    The getdata function brings 10 frames of data into the workspace. Note that the returned data is a four-dimensional array: each frame is three-dimensional and the nth frame is indicated by the fourth dimension.

    size(data)
    
    ans =
    
       480 640 1 10
  7. Clean up — Always remove image acquisition objects from memory, and the variables that reference them, when you no longer need them.

    delete(vid)
    clear vid

Determining the Data Type of Image Frames

By default, the toolbox returns image frames in the data type used by the image acquisition device. If there is no MATLAB data type that matches the device's native data type, getdata chooses a MATLAB data type that preserves numerical accuracy. For example, in RGB 555 format, each color component is expressed in 5 bits. getdata returns each color as a uint8 value.

You can specify the data type you want getdata to use for the returned data. For example, you can specify that getdata return image frames as an array of class double. To see a list of all the data types supported, see the getdata reference page.
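
For example, the following sketch requests the logged frames as an array of class double. The adaptor name and device ID are placeholders; substitute values appropriate for your device.

```matlab
% Create a video input object (placeholder adaptor and device ID).
vid = videoinput('matrox', 1);
start(vid)

% Request ten frames, returned as an array of class double.
data = getdata(vid, 10, 'double');

% Verify the data type of the returned frames.
class(data)

delete(vid)
clear vid
```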

The following example illustrates the data type of returned image data.

  1. Create an image acquisition object — This example creates a video input object for a Matrox image acquisition device. To run this example on your system, use the imaqhwinfo function to get the object constructor for your image acquisition device and substitute that syntax for the following code.

    vid = videoinput('matrox',1);
  2. Bring a single frame into the workspace — Call the getsnapshot function to bring a frame into the workspace.

    frame = getsnapshot(vid);
  3. View the class of the returned data — Use the class function to determine the data type used for the returned image data.

    class(frame)
    
    ans =
    
      uint8
  4. Clean up — Always remove image acquisition objects from memory, and the variables that reference them, when you no longer need them.

    delete(vid)
    clear vid

Specifying the Color Space

For most image acquisition devices, the video format of the video stream determines the color space of the acquired image data, that is, the way color information is represented numerically.

For example, many devices represent colors as RGB values. In this color space, colors are represented as a combination of various intensities of red, green, and blue. Another color space, widely used for digital video, is the YCbCr color space. In this color space, luminance (brightness or intensity) information is stored as a single component (Y). Chrominance (color) information is stored as two color-difference components (Cb and Cr). Cb represents the difference between the blue component and a reference value. Cr represents the difference between the red component and a reference value.

The toolbox can return image data in grayscale, RGB, and YCbCr. To specify the color representation of the image data, set the value of the ReturnedColorSpace property. To display image frames using the image, imagesc, or imshow functions, the data must use the RGB color space. Another MathWorks® product, the Image Processing Toolbox™ software, includes functions that convert YCbCr data to RGB data, and vice versa.
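
For example, assuming your device supports the YCbCr color space, a frame returned as YCbCr data might be converted for display with the Image Processing Toolbox ycbcr2rgb function. This is a sketch; the adaptor name and device ID are placeholders.

```matlab
% Create a video input object (placeholder adaptor and device ID).
vid = videoinput('winvideo', 1);

% Request YCbCr data from the device.
vid.ReturnedColorSpace = 'YCbCr';

% Acquire a YCbCr frame and convert it to RGB for display.
frame = getsnapshot(vid);
rgbFrame = ycbcr2rgb(frame);
imshow(rgbFrame)

delete(vid)
clear vid
```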

    Note   Some devices that claim to support the YUV color space actually support the YCbCr color space. YUV is similar to YCbCr but not identical. The difference between YUV and YCbCr is the scaling factor applied to the result. YUV refers to a particular scaling factor used in composite NTSC and PAL formats. In most cases, you can specify the YCbCr color space for devices that support YUV.

You can determine your device's default color space by using the get function: get(vid,'ReturnedColorSpace'), where vid is the name of the video input object. Step 2 in the example below shows this. In some situations you might want to change the color space. The example below shows a case where the default color space is 'rgb' and you change it to 'grayscale' (step 3).

The following example illustrates how to specify the color space of the returned image data.

  1. Create an image acquisition object — This example creates a video input object for a generic Windows® image acquisition device. To run this example on your system, use the imaqhwinfo function to get the object constructor for your image acquisition device and substitute that syntax for the following code.

    vid = videoinput('winvideo',1);
  2. View the default color space used for the data — The value of the ReturnedColorSpace property indicates the color space of the image data.

    get(vid,'ReturnedColorSpace')
    
    ans = 
    
    rgb
  3. Modify the color space used for the data — To change the color space of the returned image data, set the value of the ReturnedColorSpace property.

    set(vid,'ReturnedColorSpace','grayscale')
    
    get(vid,'ReturnedColorSpace')
    
    ans = 
    
    grayscale
  4. Clean up — Always remove image acquisition objects from memory, and the variables that reference them, when you no longer need them.

    delete(vid)
    clear vid

Converting Bayer Images

You can use the ReturnedColorSpace and BayerSensorAlignment properties to control Bayer demosaicing.

If your camera uses Bayer filtering, the toolbox supports the Bayer pattern and can return color data if desired. When you set the ReturnedColorSpace property to 'bayer', the Image Acquisition Toolbox™ software demosaics the Bayer pattern returned by the hardware, interpolating the Bayer-encoded image into a standard RGB image.

In order to perform the demosaicing, the toolbox needs to know the pixel alignment of the sensor. This is the order of the red, green, and blue sensors and is normally specified by describing the four pixels in the upper-left corner of the sensor. It is the band sensitivity alignment of the pixels as interpreted by the camera's internal hardware. You must get this information from the camera's documentation and then specify the value for the alignment.
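
The BayerSensorAlignment property accepts one of four strings describing that 2-by-2 block: 'gbrg', 'grbg', 'bggr', or 'rggb'. As a sketch, assuming vid is an existing video input object (the alignment value here is a placeholder; use the one given in your camera's documentation):

```matlab
% Tell the toolbox to demosaic the raw Bayer pattern ...
vid.ReturnedColorSpace = 'bayer';

% ... using the sensor alignment from the camera documentation.
vid.BayerSensorAlignment = 'rggb';
```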

If your camera can return Bayer data, the toolbox can convert it to RGB data for you automatically, or you can perform the conversion yourself. The following two examples illustrate both use cases.

Manual Conversion

The camera in this example has a Bayer sensor. The GigE Vision™ standard allows cameras to inform applications that the data is Bayer encoded and provides enough information for the application to convert the Bayer pattern into a color image. In this case the toolbox automatically converts the Bayer pattern into an RGB image.

  1. Create a video object vid using the GigE Vision adaptor and the designated video format.

    vid = videoinput('gige', 1, 'BayerGB8_640x480');
  2. View the default color space used for the data.

    vid.ReturnedColorSpace
    
    ans = 
    
    rgb
  3. Create a one-frame image img using the getsnapshot function.

    img = getsnapshot(vid);
  4. View the size of the acquired image.

    size(img)
    
    ans = 
    
    480  640  3 
  5. Sometimes you might not want the toolbox to automatically convert the Bayer pattern into a color image. For example, there are a number of different algorithms to convert from a Bayer pattern into an RGB image and you might wish to specify a different one than the toolbox uses or you might want to further process the raw data before converting it into a color image.

    % Set the color space to grayscale.
    vid.ReturnedColorSpace = 'grayscale';
    
    % Acquire another image frame.
    img = getsnapshot(vid);
    
    % Now check the size of the new frame acquired using grayscale.
    size(img)
    
    ans = 
    
    480  640 

    Compare the size output in steps 4 and 5 to see how the dimensions change between the RGB and grayscale images.

  6. You can optionally use the demosaic function in the Image Processing Toolbox to convert Bayer patterns into color images.

    % Create an image colorImage by using the demosaic function on the 
    % image img and convert it to color.
    colorImage = demosaic(img, 'gbrg');
    
    % Now check the size of the new color image.
    size(colorImage)
    
    ans = 
    
    480  640  3
  7. Always remove image acquisition objects from memory, and the variables that reference them, when you no longer need them.

    delete(vid)
    clear vid

Automatic Conversion

The camera in this example returns data that is a Bayer mosaic, but the toolbox does not know this, because the DCAM standard provides no way for the camera to communicate it to software applications. You must determine this from the camera specifications or manual. The toolbox can automatically convert the Bayer-encoded data to RGB data, but it must be configured to do so.

  1. Create a video object vid using the DCAM adaptor and the designated video format for raw data.

    vid = videoinput('dcam', 1, 'F7_RAW8_640x480');
  2. View the default color space used for the data.

    vid.ReturnedColorSpace
    
    ans = 
    
    grayscale
  3. Create a one-frame image img using the getsnapshot function.

    img = getsnapshot(vid);
  4. View the size of the acquired image.

    size(img)
    
    ans = 
    
    480  640 
  5. The value of the ReturnedColorSpace property is grayscale because Bayer data is single-banded and the toolbox doesn't yet know that it needs to decode the data. Setting the ReturnedColorSpace property to 'bayer' indicates that the toolbox should decode the data.

    % Set the color space to Bayer.
    vid.ReturnedColorSpace = 'bayer';
  6. In order to properly decode the data, the toolbox also needs to know the alignment of the Bayer filter array. This should be in the camera documentation. You can then use the BayerSensorAlignment property to set the alignment.

    % Set the alignment.
    vid.BayerSensorAlignment = 'grbg';

    The getdata and getsnapshot functions will now return color data.

    % Acquire another image frame.
    img = getsnapshot(vid);
    
    % Now check the size of the new frame acquired returning color data.
    size(img)
    
    ans = 
    
    480  640  3

    Remove the image acquisition object from memory.

    delete(vid)
    clear vid

Viewing Acquired Data

Once you bring the data into the MATLAB workspace, you can view it as you would any other image in MATLAB.

The Image Acquisition Toolbox™ software includes a function, imaqmontage, that you can use to view all the frames of a multiframe image array in a single MATLAB image object. imaqmontage arranges the frames so that they roughly form a square. imaqmontage can be useful for visually comparing multiple frames.
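
For example, the following sketch acquires ten frames and displays them with imaqmontage. The adaptor name and device ID are placeholders; substitute values appropriate for your device.

```matlab
% Create a video input object (placeholder adaptor and device ID)
% and acquire ten frames per trigger.
vid = videoinput('winvideo', 1);
vid.FramesPerTrigger = 10;

start(vid)
data = getdata(vid, 10);

% Display all ten frames, arranged roughly as a square.
imaqmontage(data)

delete(vid)
clear vid
```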

MATLAB includes two functions, image and imagesc, that display images in a figure window. Both functions create a MATLAB image object to display the frame. You can use image object properties to control aspects of the display. The imagesc function automatically scales the input data.

The Image Processing Toolbox software includes an additional display routine called imshow. Like image and imagesc, this function creates a MATLAB image object. However, imshow also automatically sets various image object properties to optimize the display.
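
For example, once a frame is in the workspace, any of these functions can display it. This sketch assumes vid is an existing video input object and that the Image Processing Toolbox is installed for the imshow call.

```matlab
% Bring a single frame into the workspace.
frame = getsnapshot(vid);

image(frame)     % display using the current colormap
imagesc(frame)   % display, scaling the data to the full colormap range
imshow(frame)    % also sets image object properties to optimize the display
```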
