Analyze Image Texture

Texture Analysis

The toolbox supports a set of functions that you can use for texture analysis. Texture analysis refers to the characterization of regions in an image by their texture content. Texture analysis attempts to quantify intuitive qualities described by terms such as rough, smooth, silky, or bumpy as a function of the spatial variation in pixel intensities. In this sense, the roughness or bumpiness refers to variations in the intensity values, or gray levels.

Texture analysis is used in a variety of applications, including remote sensing, automated inspection, and medical image processing. Texture analysis can be used to find texture boundaries in an image, a process called texture segmentation. Texture analysis can be helpful when objects in an image are characterized more by their texture than by their intensity and traditional thresholding techniques cannot be used effectively.

Filter Images Using Standard Statistical Measures

The toolbox includes several texture analysis functions that filter an image using standard statistical measures, listed in the following table.

Function       Description
rangefilt      Calculates the local range of an image.
stdfilt        Calculates the local standard deviation of an image.
entropyfilt    Calculates the local entropy of a grayscale image. Entropy is a statistical measure of randomness.

These statistics can characterize the texture of an image because they provide information about the local variability of the intensity values of pixels in an image. For example, in areas with smooth texture, the range of values in the neighborhood around a pixel will be a small value; in areas of rough texture, the range will be larger. Similarly, calculating the standard deviation of pixels in a neighborhood can indicate the degree of variability of pixel values in that region.
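For instance, the following sketch (with made-up data; the variable names are illustrative) contrasts a constant patch with a randomly varying one. The local range is zero where the intensities do not vary and much larger where they do.

smoothPatch = 0.5*ones(64);       % constant gray level, smooth texture
roughPatch  = rand(64);           % randomly varying gray levels, rough texture

Rsmooth = rangefilt(smoothPatch); % local range is 0 everywhere
Rrough  = rangefilt(roughPatch);  % local range is much larger

mean(Rsmooth(:))
mean(Rrough(:))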

The following sections provide additional information about these texture functions.

Understanding How Texture Filter Functions Work

The functions all operate in a similar way: they define a neighborhood around the pixel of interest, calculate the statistic for that neighborhood, and use that value as the value of the pixel of interest in the output image.

This example shows how the rangefilt function operates on a simple array.

A = [ 1 2 3 4 5; 6 7 8 9 10; 11 12 13 14 15; 16 17 18 19 20 ]

A =

     1     2     3     4     5
     6     7     8     9    10
    11    12    13    14    15
    16    17    18    19    20

B = rangefilt(A)

B =

     6     7     7     7     6
    11    12    12    12    11
    11    12    12    12    11
     6     7     7     7     6

The following figure shows how the value of element B(2,4) was calculated from A(2,4). By default, the rangefilt function uses a 3-by-3 neighborhood, but you can specify neighborhoods of different shapes and sizes.

Determining Pixel Values in Range Filtered Output Image
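As a brief illustration of a nondefault neighborhood (not part of the original example), you can pass a second argument to rangefilt: a matrix of zeros and ones whose nonzero elements define the neighborhood.

B5 = rangefilt(A, ones(5))        % 5-by-5 square neighborhood
Brow = rangefilt(A, ones(1,7))    % 1-by-7 horizontal neighborhood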

The stdfilt and entropyfilt functions operate similarly, defining a neighborhood around the pixel of interest and calculating the statistic for the neighborhood to determine the pixel value in the output image. The stdfilt function calculates the standard deviation of all the values in the neighborhood.

The entropyfilt function calculates the entropy of the neighborhood and assigns that value to the output pixel. Note that, by default, the entropyfilt function defines a 9-by-9 neighborhood around the pixel of interest. To calculate the entropy of an entire image, use the entropy function.
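As a sketch applied to the same array A used above (this is not part of the original example); the conversion to uint8 is used because the entropy calculations expect integer-valued or [0,1]-range data:

S = stdfilt(A)                      % local standard deviation, default 3-by-3 neighborhood
E = entropyfilt(uint8(A), ones(3))  % local entropy, here with a 3-by-3 neighborhood
e = entropy(uint8(A))               % entropy of the entire array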

Detect Regions of Texture in Images

The following example illustrates how the texture filter functions can detect regions of texture in an image. In the figure, the background is smooth; there is very little variation in the gray-level values. In the foreground, the surface contours of the coins exhibit more texture. In this image, foreground pixels have more variability and thus higher range values. Range filtering makes the edges and contours of the coins more visible.

To see an example of using filtering functions, view the Texture Segmentation Using Texture Filters example.

  1. Read in the image and display it.

    I = imread('eight.tif');
    imshow(I)

  2. Filter the image with the rangefilt function and display the results. Note how range filtering highlights the edges and surface contours of the coins.

    K = rangefilt(I);
    figure, imshow(K)
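As an optional follow-up (not part of the original example), because the coins have a much higher local range than the smooth background, a simple global threshold applied to the range-filtered image roughly separates the textured foreground from the background:

    level = graythresh(K);     % Otsu threshold computed from the range image
    BW = im2bw(K, level);      % foreground = high-range (textured) regions
    figure, imshow(BW)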

Gray-Level Co-Occurrence Matrix (GLCM)

A statistical method of examining texture that considers the spatial relationship of pixels is the gray-level co-occurrence matrix (GLCM), also known as the gray-level spatial dependence matrix. The GLCM functions characterize the texture of an image by calculating how often pairs of pixels with specific values and in a specified spatial relationship occur in an image, creating a GLCM, and then extracting statistical measures from this matrix. (The texture filter functions, described in Filter Images Using Standard Statistical Measures, cannot provide information about shape, i.e., the spatial relationships of pixels in an image.)

After you create the GLCMs, you can derive several statistics from them using the graycoprops function. These statistics provide information about the texture of an image. The following table lists the statistics.

Statistic      Description
Contrast       Measures the local variations in the gray-level co-occurrence matrix.
Correlation    Measures the joint probability occurrence of the specified pixel pairs.
Energy         Provides the sum of squared elements in the GLCM. Also known as uniformity or the angular second moment.
Homogeneity    Measures the closeness of the distribution of elements in the GLCM to the GLCM diagonal.
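For example, here is a minimal sketch of computing all four statistics at once (the image name is only illustrative). With no properties argument, graycoprops returns all of them in one structure:

I = imread('circuit.tif');
glcm = graycomatrix(I);    % single GLCM, default horizontal offset
stats = graycoprops(glcm)  % struct with fields Contrast, Correlation, Energy, Homogeneity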

Create a Gray-Level Co-Occurrence Matrix

To create a GLCM, use the graycomatrix function. The graycomatrix function creates a gray-level co-occurrence matrix (GLCM) by calculating how often a pixel with the intensity (gray-level) value i occurs in a specific spatial relationship to a pixel with the value j. By default, the spatial relationship is defined as the pixel of interest and the pixel to its immediate right (horizontally adjacent), but you can specify other spatial relationships between the two pixels. Each element (i,j) in the resultant glcm is simply the sum of the number of times that the pixel with value i occurred in the specified spatial relationship to a pixel with value j in the input image.

The number of gray levels in the image determines the size of the GLCM. By default, graycomatrix uses scaling to reduce the number of intensity values in an image to eight, but you can use the NumLevels and the GrayLimits parameters to control this scaling of gray levels. See the graycomatrix reference page for more information.

The gray-level co-occurrence matrix can reveal certain properties about the spatial distribution of the gray levels in the texture image. For example, if most of the entries in the GLCM are concentrated along the diagonal, the texture is coarse with respect to the specified offset. You can also derive several statistical measures from the GLCM. See Derive Statistics from GLCM and Plot Correlation for more information.

To illustrate, the following figure shows how graycomatrix calculates the first three values in a GLCM. In the output GLCM, element (1,1) contains the value 1 because there is only one instance in the input image where two horizontally adjacent pixels have the values 1 and 1, respectively. glcm(1,2) contains the value 2 because there are two instances where two horizontally adjacent pixels have the values 1 and 2. Element (1,3) in the GLCM has the value 0 because there are no instances of two horizontally adjacent pixels with the values 1 and 3. graycomatrix continues processing the input image, scanning the image for other pixel pairs (i,j) and recording the sums in the corresponding elements of the GLCM.

Process Used to Create the GLCM
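The following sketch reproduces that calculation in code; the 4-by-5 input matrix is illustrative and may differ from the one shown in the figure. Passing an empty GrayLimits makes graycomatrix scale between the minimum and maximum of the input, so the values 1 through 8 map directly onto the eight gray levels.

I = [1 1 5 6 8; 2 3 5 7 1; 4 5 7 1 2; 8 5 1 2 5];
glcm = graycomatrix(I,'NumLevels',8,'GrayLimits',[]);

glcm(1,1)   % 1 -- one pair of horizontally adjacent pixels with values 1 and 1
glcm(1,2)   % 2 -- two pairs with values 1 and 2
glcm(1,3)   % 0 -- no pairs with values 1 and 3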

Specify Offset Used in GLCM Calculation

By default, the graycomatrix function creates a single GLCM, with the spatial relationship, or offset, defined as two horizontally adjacent pixels. However, a single GLCM might not be enough to describe the textural features of the input image. For example, a single horizontal offset might not be sensitive to texture with a vertical orientation. For this reason, graycomatrix can create multiple GLCMs for a single input image.

To create multiple GLCMs, specify an array of offsets to the graycomatrix function. These offsets define pixel relationships of varying direction and distance. For example, you can define an array of offsets that specify four directions (horizontal, vertical, and two diagonals) and four distances. In this case, the input image is represented by 16 GLCMs. When you calculate statistics from these GLCMs, you can take the average.

You specify these offsets as a p-by-2 array of integers. Each row in the array is a two-element vector, [row_offset, col_offset], that specifies one offset. row_offset is the number of rows between the pixel of interest and its neighbor. col_offset is the number of columns between the pixel of interest and its neighbor. This example creates an offset array that specifies four directions and four distances for each direction. For more information about specifying offsets, see the graycomatrix reference page.

offsets = [ 0 1; 0 2; 0 3; 0 4;...
           -1 1; -2 2; -3 3; -4 4;...
           -1 0; -2 0; -3 0; -4 0;...
           -1 -1; -2 -2; -3 -3; -4 -4];

The figure illustrates the spatial relationships of pixels that are defined by this array of offsets, where D represents the distance from the pixel of interest.
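A brief sketch of using this offsets array (the image name is just an example): graycomatrix returns one GLCM per row of the array, graycoprops then returns a vector of values per statistic, and you can average across the GLCMs.

I = imread('circuit.tif');
glcms = graycomatrix(I,'Offset',offsets);   % 8-by-8-by-16 array, one GLCM per offset
stats = graycoprops(glcms);                 % each field is a 1-by-16 vector
meanContrast    = mean(stats.Contrast)
meanCorrelation = mean(stats.Correlation)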

Derive Statistics from GLCM and Plot Correlation

This example shows how to create a set of GLCMs and derive statistics from them. The example also illustrates how the statistics returned by graycoprops have a direct relationship to the original input image.

  1. Read the image into the workspace and display it. The example converts the truecolor image to grayscale and then rotates it 90 degrees.

    circuitBoard = rot90(rgb2gray(imread('board.tif')));
    imshow(circuitBoard)

  2. Define offsets of varying direction and distance. Because the image contains objects of a variety of shapes and sizes that are arranged in horizontal and vertical directions, the example specifies a set of horizontal offsets that only vary in distance.

    offsets0 = [zeros(40,1) (1:40)'];

  3. Create the GLCMs. Call the graycomatrix function, specifying the offsets.

    glcms = graycomatrix(circuitBoard,'Offset',offsets0);

  4. Derive statistics from the GLCMs using the graycoprops function. The example calculates the contrast and correlation.

    stats = graycoprops(glcms,'Contrast Correlation');

  5. Plot correlation as a function of offset.

    figure, plot([stats.Correlation]);
    title('Texture Correlation as a function of offset');
    xlabel('Horizontal Offset')
    ylabel('Correlation')

    The plot contains peaks at offsets 7, 15, 23, and 30. If you examine the input image closely, you can see that certain vertical elements in the image have a periodic pattern that repeats every seven pixels. The following figure shows the upper left corner of the image and points out where this pattern occurs.
