Check validity of custom layer

checkLayer(layer,validInputSize) checks the validity of a custom layer using generated data of the sizes in validInputSize. For layers with a single input, set validInputSize to a typical size of input data to the layer. For layers with multiple inputs, set validInputSize to a cell array of typical sizes, where each element corresponds to a layer input.

checkLayer(layer,validInputSize,'ObservationDimension',dim) specifies the dimension of the data that corresponds to observations. If you specify this parameter, then the function checks the layer for both a single observation and multiple observations.
Check the validity of the example custom layer preluLayer.

Define a custom PReLU layer. To create this layer, save the file preluLayer.m in the current folder.

Create an instance of the layer and check that it is valid using checkLayer. Set the valid input size to the typical size of a single observation input to the layer. For a single input, the layer expects observations of size h-by-w-by-c, where h, w, and c are the height, width, and number of channels of the previous layer output, respectively.

Specify validInputSize as the typical size of an input array.
layer = preluLayer(20,'prelu');
validInputSize = [5 5 20];
checkLayer(layer,validInputSize)
Skipping multiobservation tests. To enable tests with multiple observations, specify the 'ObservationDimension' option in checkLayer.
For 2-D image data, set 'ObservationDimension' to 4.
For 3-D image data, set 'ObservationDimension' to 5.
For sequence data, set 'ObservationDimension' to 2.

Skipping GPU tests. No compatible GPU device found.

Running nnet.checklayer.TestLayerWithoutBackward
.........
Done nnet.checklayer.TestLayerWithoutBackward
__________

Test Summary:
	 9 Passed, 0 Failed, 0 Incomplete, 8 Skipped.
	 Time elapsed: 0.29008 seconds.
The results show the number of passed, failed, and skipped tests. If you do not specify the 'ObservationDimension' option, or do not have a compatible GPU, then the function skips the corresponding tests.
Check Multiple Observations
For multiobservation input, the layer expects an array of observations of size h-by-w-by-c-by-N, where h, w, and c are the height, width, and number of channels, respectively, and N is the number of observations.
To check the layer validity for multiple observations, specify the typical size of an observation and set 'ObservationDimension' to 4.
layer = preluLayer(20,'prelu');
validInputSize = [5 5 20];
checkLayer(layer,validInputSize,'ObservationDimension',4)
Skipping GPU tests. No compatible GPU device found.

Running nnet.checklayer.TestLayerWithoutBackward
.......... ...
Done nnet.checklayer.TestLayerWithoutBackward
__________

Test Summary:
	 13 Passed, 0 Failed, 0 Incomplete, 4 Skipped.
	 Time elapsed: 0.14776 seconds.
In this case, the function does not detect any issues with the layer.
layer — Custom layer
nnet.layer.Layer object | nnet.layer.ClassificationLayer object | nnet.layer.RegressionLayer object

Custom layer, specified as an nnet.layer.Layer object, an nnet.layer.ClassificationLayer object, or an nnet.layer.RegressionLayer object. For an example showing how to define your own custom layer, see Define Custom Deep Learning Layer with Learnable Parameters.
validInputSize — Valid input sizes
vector of positive integers | cell array of vectors of positive integers

Valid input sizes of the layer, specified as a vector of positive integers or a cell array of vectors of positive integers.
For layers with a single input, specify validInputSize as a vector of integers corresponding to the dimensions of the input data. For example, [5 5 10] corresponds to valid input data of size 5-by-5-by-10.
For layers with multiple inputs, specify validInputSize as a cell array of vectors, where each vector corresponds to a layer input and the elements of the vectors correspond to the dimensions of the corresponding input data. For example, {[24 24 20],[24 24 10]} corresponds to the valid input sizes of two inputs, where 24-by-24-by-20 is a valid input size for the first input and 24-by-24-by-10 is a valid input size for the second input.
For more information, see Layer Input Sizes.
For large input sizes, the gradient checks take longer to run. To speed up the tests, specify a smaller valid input size.
Example: [5 5 10]
Example: {[24 24 20],[24 24 10]}
Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64 | cell
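For instance, a layer with two inputs could be checked by passing one size vector per input. This is a hedged sketch: weightedAdditionLayer is a hypothetical custom layer that takes two inputs of the same size, not a built-in layer.

```matlab
% Hypothetical two-input custom layer (replace with your own layer).
layer = weightedAdditionLayer(2,'wadd');

% One valid size per layer input, collected in a cell array.
validInputSize = {[24 24 20],[24 24 20]};

% Check both inputs, treating dimension 4 as the observation dimension.
checkLayer(layer,validInputSize,'ObservationDimension',4)
```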
dim — Observation dimension
positive integer

Observation dimension, specified as a positive integer.

The observation dimension specifies which dimension of the layer input data corresponds to observations. For example, if the layer expects input data of size h-by-w-by-c-by-N, where h, w, and c correspond to the height, width, and number of channels of the input data, respectively, and N corresponds to the number of observations, then the observation dimension is 4. For more information, see Layer Input Sizes.
If you specify the observation dimension, then the checkLayer function checks that the layer functions are valid using generated data with mini-batches of size 1 and 2. If you do not specify the observation dimension, then the function skips the corresponding tests.
Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64
For each layer, the valid input size and the observation dimension depend on the output of the previous layer.
For intermediate layers (layers of type nnet.layer.Layer), the valid input size and the observation dimension depend on the type of data input to the layer.

For layers with a single input, specify validInputSize as a vector of integers corresponding to the dimensions of the input data.

For layers with multiple inputs, specify validInputSize as a cell array of vectors, where each vector corresponds to a layer input and the elements of the vectors correspond to the dimensions of the corresponding input data.
For large input sizes, the gradient checks take longer to run. To speed up the tests, specify a smaller valid input size.
Layer Input | Input Size | Observation Dimension
2-D images | h-by-w-by-c-by-N, where h, w, and c correspond to the height, width, and number of channels of the images, respectively, and N is the number of observations. | 4
3-D images | h-by-w-by-d-by-c-by-N, where h, w, d, and c correspond to the height, width, depth, and number of channels of the 3-D images, respectively, and N is the number of observations. | 5
Vector sequences | c-by-N-by-S, where c is the number of features of the sequences, N is the number of observations, and S is the sequence length. | 2
2-D image sequences | h-by-w-by-c-by-N-by-S, where h, w, and c correspond to the height, width, and number of channels of the images, respectively, N is the number of observations, and S is the sequence length. | 4
3-D image sequences | h-by-w-by-d-by-c-by-N-by-S, where h, w, d, and c correspond to the height, width, depth, and number of channels of the 3-D images, respectively, N is the number of observations, and S is the sequence length. | 5
For example, for 2-D image classification problems, set validInputSize to [h w c], where h, w, and c correspond to the height, width, and number of channels of the images, respectively, and 'ObservationDimension' to 4.
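As a sketch of the sequence case, an intermediate layer that expects vector sequences can be checked with the observation dimension set to 2. The layer name and feature count here are hypothetical, not from this reference page.

```matlab
% Hypothetical custom intermediate layer for vector sequence input.
layer = myRecurrentLayer(100,'rnn');

% Vector sequences are c-by-N-by-S; a single observation is c-by-1-by-S,
% so specify the number of features and a typical sequence length.
validInputSize = [10 1 25];

% For sequence data, observations lie along dimension 2.
checkLayer(layer,validInputSize,'ObservationDimension',2)
```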
For output layers (layers of type nnet.layer.ClassificationLayer or nnet.layer.RegressionLayer), set validInputSize to the typical size of a single input observation Y to the layer.

For classification problems, the valid input size and the observation dimension of Y depend on the type of problem:
Classification Task | Input Size | Observation Dimension
2-D image classification | 1-by-1-by-K-by-N, where K is the number of classes and N is the number of observations. | 4
3-D image classification | 1-by-1-by-1-by-K-by-N, where K is the number of classes and N is the number of observations. | 5
Sequence-to-label classification | K-by-N, where K is the number of classes and N is the number of observations. | 2
Sequence-to-sequence classification | K-by-N-by-S, where K is the number of classes, N is the number of observations, and S is the sequence length. | 2
For example, for 2-D image classification problems, set validInputSize to [1 1 K], where K is the number of classes, and 'ObservationDimension' to 4.
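Concretely, a custom classification output layer for a 10-class 2-D image problem might be checked as follows. myClassificationLayer is a hypothetical nnet.layer.ClassificationLayer subclass standing in for your own layer.

```matlab
% Hypothetical custom classification output layer.
layer = myClassificationLayer('crossentropy');

% A single observation of predictions for 2-D image classification
% is 1-by-1-by-K, where K is the number of classes.
numClasses = 10;
validInputSize = [1 1 numClasses];

checkLayer(layer,validInputSize,'ObservationDimension',4)
```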
For regression problems, the dimensions of Y also depend on the type of problem. The following table describes the dimensions of Y.
Regression Task | Input Size | Observation Dimension
2-D image regression | 1-by-1-by-R-by-N, where R is the number of responses and N is the number of observations. | 4
2-D image-to-image regression | h-by-w-by-c-by-N, where h, w, and c are the height, width, and number of channels of the output, respectively, and N is the number of observations. | 4
3-D image regression | 1-by-1-by-1-by-R-by-N, where R is the number of responses and N is the number of observations. | 5
3-D image-to-image regression | h-by-w-by-d-by-c-by-N, where h, w, d, and c are the height, width, depth, and number of channels of the output, respectively, and N is the number of observations. | 5
Sequence-to-one regression | R-by-N, where R is the number of responses and N is the number of observations. | 2
Sequence-to-sequence regression | R-by-N-by-S, where R is the number of responses, N is the number of observations, and S is the sequence length. | 2
For example, for 2-D image regression problems, set validInputSize to [1 1 R], where R is the number of responses, and 'ObservationDimension' to 4.
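For instance, a custom regression output layer for a problem with three responses might be checked like this. myMAERegressionLayer is a hypothetical nnet.layer.RegressionLayer subclass, not a built-in layer.

```matlab
% Hypothetical custom regression output layer.
layer = myMAERegressionLayer('mae');

% A single observation of predictions for 2-D image regression
% is 1-by-1-by-R, where R is the number of responses.
numResponses = 3;
validInputSize = [1 1 numResponses];

checkLayer(layer,validInputSize,'ObservationDimension',4)
```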
The checkLayer
function checks the validity of a custom layer
by performing a series of tests, described in these tables. For more information on
the tests used by checkLayer
, see Check Custom Layer Validity.
The checkLayer
function uses these tests to check the validity of custom
intermediate layers (layers of type nnet.layer.Layer
).
Test | Description
functionSyntaxesAreCorrect | The syntaxes of the layer functions are correctly defined.
predictDoesNotError | predict does not error.
forwardDoesNotError | When specified, forward does not error.
forwardPredictAreConsistentInSize | When forward is specified, forward and predict output values of the same size.
backwardDoesNotError | When specified, backward does not error.
backwardIsConsistentInSize | When backward is specified, the outputs of backward are consistent in size: the derivative with respect to each input is the same size as the corresponding input, and the derivative with respect to each learnable parameter is the same size as the corresponding parameter.
predictIsConsistentInType | The outputs of predict are consistent in type with the inputs.
forwardIsConsistentInType | When forward is specified, the outputs of forward are consistent in type with the inputs.
backwardIsConsistentInType | When backward is specified, the outputs of backward are consistent in type with the inputs.
gradientsAreNumericallyCorrect | When backward is specified, the gradients computed in backward are consistent with the numerical gradients.
backwardPropagationDoesNotError | When backward is not specified, the derivatives can be computed using automatic differentiation.
The predictIsConsistentInType, forwardIsConsistentInType, and backwardIsConsistentInType tests also check for GPU compatibility. To execute the layer functions on a GPU, the functions must support inputs and outputs of type gpuArray with the underlying data type single.
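A common way to satisfy the type-consistency tests is to allocate outputs 'like' the inputs, so that gpuArray and single inputs produce matching outputs. This is a minimal sketch of such a predict method; the PReLU-style layer and its Alpha property are illustrative assumptions, not code from this page.

```matlab
function Z = predict(layer, X)
    % Allocating with 'like' keeps the output on the GPU and in the
    % same precision as the input, as the consistency tests require.
    Z = zeros(size(X), 'like', X);

    % PReLU-style activation; elementwise operations on gpuArray
    % inputs also produce gpuArray outputs automatically.
    Z(:) = max(X, 0) + layer.Alpha .* min(X, 0);
end
```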
The checkLayer function uses these tests to check the validity of custom output layers (layers of type nnet.layer.ClassificationLayer or nnet.layer.RegressionLayer).
Test | Description
forwardLossDoesNotError | forwardLoss does not error.
backwardLossDoesNotError | backwardLoss does not error.
forwardLossIsScalar | The output of forwardLoss is scalar.
backwardLossIsConsistentInSize | When backwardLoss is specified, the output of backwardLoss is consistent in size: dLdY is the same size as the predictions Y.
forwardLossIsConsistentInType | The output of forwardLoss is consistent in type with the inputs.
backwardLossIsConsistentInType | When backwardLoss is specified, the output of backwardLoss is consistent in type with the inputs.
gradientsAreNumericallyCorrect | When backwardLoss is specified, the gradients computed in backwardLoss are numerically correct.
backwardPropagationDoesNotError | When backwardLoss is not specified, the derivatives can be computed using automatic differentiation.
The forwardLossIsConsistentInType and backwardLossIsConsistentInType tests also check for GPU compatibility. To execute the layer functions on a GPU, the functions must support inputs and outputs of type gpuArray with the underlying data type single.
See Also
analyzeNetwork | trainNetwork | trainingOptions