
Convolution2DLayer

2-D convolutional layer

Description

A 2-D convolutional layer applies sliding filters to the input. The layer convolves the input by moving the filters along the input vertically and horizontally and computing the dot product of the weights and the input, and then adding a bias term.
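As a rough sketch (not the layer's actual implementation), the value of one output element is the dot product of the filter weights with the local input region the filter currently covers, plus the bias. The variable names here are hypothetical, chosen for illustration:

```matlab
% Illustration only: what one output element of a 2-D convolution computes.
X = rand(28,28);        % single-channel input
W = rand(5,5);          % one 5-by-5 filter
b = 0.1;                % bias term for this filter
region = X(1:5,1:5);    % local region the filter currently covers
z = sum(sum(W .* region)) + b;   % dot product of weights and region, plus bias
```

Sliding the filter across the input by the stride in each dimension produces the full output feature map for this filter.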

Creation

Syntax

layer = convolution2dLayer(filterSize,numFilters)
layer = convolution2dLayer(filterSize,numFilters,Name,Value)

Description

layer = convolution2dLayer(filterSize,numFilters) creates a 2-D convolutional layer and sets the FilterSize and NumFilters properties.


layer = convolution2dLayer(filterSize,numFilters,Name,Value) sets the optional Stride, NumChannels, WeightLearnRateFactor, BiasLearnRateFactor, WeightL2Factor, BiasL2Factor, and Name properties using name-value pairs. To specify input padding, use the 'Padding' name-value pair argument. For example, convolution2dLayer(11,96,'Stride',4,'Padding',1) creates a 2-D convolutional layer with 96 filters of size [11 11], a stride of [4 4], and zero padding of size 1 along all edges of the layer input. You can specify multiple name-value pairs. Enclose each property name in single quotes.

Input Arguments


Name-Value Pair Arguments

Use comma-separated name-value pair arguments to specify the size of the zero padding to add along the edges of the layer input or to set the Stride, NumChannels, WeightLearnRateFactor, BiasLearnRateFactor, WeightL2Factor, BiasL2Factor, and Name properties. Enclose names in single quotes.

Example: convolution2dLayer(3,16,'Padding','same') creates a 2-D convolutional layer with 16 filters of size [3 3] and 'same' padding. At training time the software calculates the size of the zero padding so that the layer output has the same size as the input.


Padding to add to input edges, specified as the comma-separated pair consisting of 'Padding' and one of the following:

  • 'same' — The software calculates the size of the padding at training time so that the output has the same size as the input when the stride equals 1. If the stride is larger than 1, then the output size is ceil(inputSize/stride), where inputSize is the height or width of the input and stride is the stride in the corresponding dimension. The software adds the same amount of padding to the top and bottom, and to the left and right, if possible. If an odd amount of padding must be added vertically, then the software adds the extra padding to the bottom. If an odd amount of padding must be added horizontally, then the software adds the extra padding to the right.

  • Nonnegative integer p — Add padding of size p to all the edges of the input.

  • Vector [a b] of nonnegative integers — Add padding of size a to the top and bottom of the input and padding of size b to the left and right.

  • Vector [t b l r] of nonnegative integers — Add padding of size t to the top, b to the bottom, l to the left, and r to the right of the input.

Example: 'Padding',1 adds one row of padding to the top and bottom, and one column of padding to the left and right of the input.

Example: 'Padding','same' adds padding so that the output has the same size as the input (if the stride equals 1).
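The output-size rules above can be checked with a few lines of arithmetic. The numbers here are hypothetical, chosen only to illustrate the two formulas:

```matlab
% Hypothetical numbers illustrating the output-size rules above.
inputSize = 28; stride = 2;
outSame = ceil(inputSize/stride)    % 'same' padding with stride > 1 gives 14

% Manual padding: output = (inputSize - filterSize + 2*padding)/stride + 1
filterSize = 4; padding = 0;
outManual = (inputSize - filterSize + 2*padding)/stride + 1   % gives 13
```

For the manual case, the result should be an integer if the convolution is to cover the input completely.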

Properties


Height and width of the filters, specified as a vector of two positive integers [h w], where h is the height and w is the width. FilterSize defines the size of the local regions to which the neurons connect in the input.

If you set FilterSize using an input argument, then you can specify FilterSize as a scalar to use the same value for both dimensions.

Example: [5 5] specifies filters of height 5 and width 5.

Number of filters, specified as a positive integer. This number corresponds to the number of neurons in the convolutional layer that connect to the same region in the input. This parameter determines the number of channels (feature maps) in the output of the convolutional layer.

Example: 96

Step size for traversing the input vertically and horizontally, specified as a vector of two positive integers [a b], where a is the vertical step size and b is the horizontal step size. When creating the layer, you can specify Stride as a scalar to use the same value for both dimensions.

Example: [2 3] specifies a vertical step size of 2 and a horizontal step size of 3.

Size of padding to apply to input borders, specified as a vector of four nonnegative integers [t b l r], where t is the padding applied to the top, b is the padding applied to the bottom, l is the padding applied to the left, and r is the padding applied to the right.

When you create a layer, use the 'Padding' name-value pair argument to specify padding.

Example: [1 1 2 2] adds one row of padding to the top and bottom, and two columns of padding to the left and right of the input.

Method to determine padding size, specified as 'manual' or 'same'.

If you specify a scalar or vector of nonnegative integers as the 'Padding' value when creating a layer, then PaddingMode equals 'manual'.

If you specify 'same' as the 'Padding' value when creating a layer, then PaddingMode equals 'same'. The software calculates the size of the padding at training time so that the output has the same size as the input when the stride equals 1. If the stride is larger than 1, then the output size is ceil(inputSize/stride), where inputSize is the height or width of the input and stride is the stride in the corresponding dimension. The software adds the same amount of padding to the top and bottom, and to the left and right, if possible. If an odd amount of padding must be added vertically, then the software adds the extra padding to the bottom. If an odd amount of padding must be added horizontally, then the software adds the extra padding to the right.

Note

The Padding property will be removed in a future release. Use PaddingSize instead. When you create a layer, use the 'Padding' name-value pair argument to specify padding.

Size of padding to apply to input borders vertically and horizontally, specified as a vector of two nonnegative integers [a b], where a is the padding applied to the top and bottom of the input data and b is the padding applied to the left and right.

Example: [1 1] adds one row of padding to the top and bottom, and one column of padding to the left and right of the input.

Layer weights for the convolutional layer, specified as a FilterSize(1)-by-FilterSize(2)-by-NumChannels-by-NumFilters array.

You cannot set this property using a name-value pair.

Data Types: single | double

Layer biases for the convolutional layer, specified as a 1-by-1-by-NumFilters array.

You cannot set this property using a name-value pair.

Data Types: single | double

Number of channels for each filter, specified as 'auto' or a positive integer.

This parameter is always equal to the number of channels of the input to this convolutional layer. For example, if the input is a color image, then the number of channels for the input is 3. If the number of filters for the convolutional layer prior to the current layer is 16, then the number of channels for this layer is 16.

If NumChannels is 'auto', then the software infers the correct value for the number of channels during training time.

Example: 256

Learning rate factor for the weights, specified as a nonnegative scalar.

The software multiplies this factor by the global learning rate to determine the learning rate for the weights in the layer. For example, if WeightLearnRateFactor is 2, then the learning rate for the weights in the layer is twice the current global learning rate. The software determines the global learning rate based on the settings specified with the trainingOptions function.

Example: 2

Learning rate factor for the biases, specified as a nonnegative scalar.

The software multiplies this factor by the global learning rate to determine the learning rate for the biases in the layer. For example, if BiasLearnRateFactor is 2, then the learning rate for the biases in the layer is twice the current global learning rate. The software determines the global learning rate based on the settings specified with the trainingOptions function.

Example: 2
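As a numeric illustration of how these factors work (the global learning rate here is a hypothetical value, as set through trainingOptions):

```matlab
% Hypothetical values: how a learn rate factor scales the global rate.
globalLearnRate = 0.01;   % e.g. trainingOptions('sgdm','InitialLearnRate',0.01)
factor = 2;               % WeightLearnRateFactor or BiasLearnRateFactor
layerLearnRate = factor * globalLearnRate   % effective rate for this layer: 0.02
```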

L2 regularization factor for the weights, specified as a nonnegative scalar.

The software multiplies this factor by the global L2 regularization factor to determine the L2 regularization for the weights in the layer. For example, if WeightL2Factor is 2, then the L2 regularization for the weights in the layer is twice the global L2 regularization factor. You can specify the global L2 regularization factor using the trainingOptions function.

Example: 2

L2 regularization factor for the biases, specified as a nonnegative scalar.

The software multiplies this factor by the global L2 regularization factor to determine the L2 regularization for the biases in the layer. For example, if BiasL2Factor is 2, then the L2 regularization for the biases in the layer is twice the global L2 regularization factor. You can specify the global L2 regularization factor using the trainingOptions function.

Example: 2

Layer name, specified as a character vector. If Name is set to '', then the software automatically assigns a name at training time.

Data Types: char

Examples


Create a convolutional layer with 96 filters, each with a height and width of 11. Use a stride (step size) of 4 in the horizontal and vertical directions.

layer = convolution2dLayer(11,96,'Stride',4)
layer = 
  Convolution2DLayer with properties:

           Name: ''

   Hyperparameters
     FilterSize: [11 11]
    NumChannels: 'auto'
     NumFilters: 96
         Stride: [4 4]
    PaddingMode: 'manual'
    PaddingSize: [0 0 0 0]

   Learnable Parameters
        Weights: []
           Bias: []


Include a convolutional layer in a Layer array.

layers = [ ...
    imageInputLayer([28 28 1])
    convolution2dLayer(5,20)
    reluLayer
    maxPooling2dLayer(2,'Stride',2)
    fullyConnectedLayer(10)
    softmaxLayer
    classificationLayer]
layers = 
  7x1 Layer array with layers:

     1   ''   Image Input             28x28x1 images with 'zerocenter' normalization
     2   ''   Convolution             20 5x5 convolutions with stride [1  1] and padding [0  0  0  0]
     3   ''   ReLU                    ReLU
     4   ''   Max Pooling             2x2 max pooling with stride [2  2] and padding [0  0  0  0]
     5   ''   Fully Connected         10 fully connected layer
     6   ''   Softmax                 softmax
     7   ''   Classification Output   crossentropyex

Create a convolutional layer with 32 filters, each with a height and width of 5. Pad the input image with 2 pixels along its border. Set the learning rate factor for the bias to 2. Manually initialize the weights from a Gaussian distribution with a standard deviation of 0.0001.

layer = convolution2dLayer(5,32,'Padding',2,'BiasLearnRateFactor',2)
layer = 
  Convolution2DLayer with properties:

           Name: ''

   Hyperparameters
     FilterSize: [5 5]
    NumChannels: 'auto'
     NumFilters: 32
         Stride: [1 1]
    PaddingMode: 'manual'
    PaddingSize: [2 2 2 2]

   Learnable Parameters
        Weights: []
           Bias: []


Suppose the input has color images. Manually initialize the weights from a Gaussian distribution with a standard deviation of 0.0001.

layer.Weights = randn([5 5 3 32]) * 0.0001;

The size of the local regions in the layer is 5-by-5. The number of color channels for each region is 3. The number of feature maps is 32 (the number of filters). Therefore, there are 5*5*3*32 weights in the layer.

randn([5 5 3 32]) returns a 5-by-5-by-3-by-32 array of values from a Gaussian distribution with a mean of 0 and a standard deviation of 1. Multiplying the values by 0.0001 sets the standard deviation of the Gaussian distribution equal to 0.0001.

Similarly, initialize the biases from a Gaussian distribution with a mean of 1 and a standard deviation of 0.00001.

layer.Bias = randn([1 1 32])*0.00001 + 1;

There are 32 feature maps, and therefore 32 biases. randn([1 1 32]) returns a 1-by-1-by-32 array of values from a Gaussian distribution with a mean of 0 and a standard deviation of 1. Multiplying the values by 0.00001 sets the standard deviation of values equal to 0.00001, and adding 1 sets the mean of the Gaussian distribution equal to 1.

Suppose the size of the input is 28-by-28-by-1. Create a convolutional layer with 16 filters, each with a height of 6 and a width of 4, that traverse the input with a stride of 4 both horizontally and vertically. Make sure the convolution covers the input completely.

For the convolution to fully cover the input, both the horizontal and vertical output dimensions must be integers. For the vertical output dimension to be an integer, one row of zero padding is required on the top and bottom of the image: (28 – 6 + 2*1)/4 + 1 = 7. For the horizontal output dimension to be an integer, no zero padding is required: (28 – 4 + 2*0)/4 + 1 = 7. Construct the convolutional layer as follows:

layer = convolution2dLayer([6 4],16,'Stride',4,'Padding',[1 0])
layer = 
  Convolution2DLayer with properties:

           Name: ''

   Hyperparameters
     FilterSize: [6 4]
    NumChannels: 'auto'
     NumFilters: 16
         Stride: [4 4]
    PaddingMode: 'manual'
    PaddingSize: [1 1 0 0]

   Learnable Parameters
        Weights: []
           Bias: []

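The two output dimensions from this example can be recomputed directly with the output-size formula, confirming that the convolution fully covers the input:

```matlab
% Recomputing the output dimensions from the example above.
inputSize = 28; stride = 4;
outV = (inputSize - 6 + 2*1)/stride + 1   % height: filter 6, padding 1, gives 7
outH = (inputSize - 4 + 2*0)/stride + 1   % width:  filter 4, padding 0, gives 7
```

Both results are integers, so the filters cover the padded input exactly with no leftover rows or columns.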



Introduced in R2016a
