layerNormalizationLayer
Description
A layer normalization layer normalizes a mini-batch of data across all channels for each observation independently. To speed up training of recurrent and multilayer perceptron neural networks and to reduce the sensitivity to network initialization, use layer normalization layers after the learnable layers, such as LSTM and fully connected layers.
After normalization, the layer scales the input with a learnable scale factor γ and shifts it by a learnable offset β.
Creation
Description
layer = layerNormalizationLayer creates a layer normalization layer.
layer = layerNormalizationLayer(Name,Value) sets the optional Epsilon, Parameters and Initialization, Learning Rate and Regularization, and Name properties using one or more name-value arguments. For example, layerNormalizationLayer('Name','layernorm') creates a layer normalization layer with name 'layernorm'.
Properties
Layer Normalization
Epsilon
— Constant to add to mini-batch variances
1e-5 (default) | positive scalar
Constant to add to the mini-batch variances, specified as a positive scalar.
The software adds this constant to the mini-batch variances before normalization to ensure numerical stability and avoid division by zero.
Before R2023a: Epsilon must be greater than or equal to 1e-5.
Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64
NumChannels
— Number of input channels
"auto" (default) | positive integer
This property is read-only.
Number of input channels, specified as one of the following:
"auto" — Automatically determine the number of input channels at training time.
Positive integer — Configure the layer for the specified number of input channels. NumChannels and the number of channels in the layer input data must match. For example, if the input is an RGB image, then NumChannels must be 3. If the input is the output of a convolutional layer with 16 filters, then NumChannels must be 16.
Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64 | char | string
OperationDimension
— Dimension to normalize over
"auto" (default) | "channel-only" | "spatial-channel" | "batch-excluded"
Since R2023a
Dimension to normalize over, specified as one of these values:
"auto" — For feature, sequence, 1-D image, or spatial-temporal input, normalize over the channel dimension. Otherwise, normalize over the spatial and channel dimensions.
"channel-only" — Normalize over the channel dimension.
"spatial-channel" — Normalize over the spatial and channel dimensions.
"batch-excluded" — Normalize over all dimensions except for the batch dimension.
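For intuition, the explicit choices differ only in which dimensions the mean and variance statistics are computed over. A minimal NumPy sketch (illustrative only; the axis ordering assumes "SSCBT"-formatted data and is not part of the MATLAB API):

```python
import numpy as np

def normalize(x, axes, eps=1e-5):
    """Normalize x over the given axes using its mean and variance."""
    mu = x.mean(axis=axes, keepdims=True)
    var = x.var(axis=axes, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

x = np.random.randn(4, 4, 3, 2, 5)                 # S x S x C x B x T

channel_only    = normalize(x, axes=(2,))          # "channel-only": C
spatial_channel = normalize(x, axes=(0, 1, 2))     # "spatial-channel": S, S, C
batch_excluded  = normalize(x, axes=(0, 1, 2, 4))  # "batch-excluded": all but B
```

Each variant produces output the same size as the input; only the set of elements that share normalization statistics changes.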
Parameters and Initialization
ScaleInitializer
— Function to initialize channel scale factors
'ones' (default) | 'zeros' | 'narrow-normal' | function handle
Function to initialize the channel scale factors, specified as one of the following:
'ones' — Initialize the channel scale factors with ones.
'zeros' — Initialize the channel scale factors with zeros.
'narrow-normal' — Initialize the channel scale factors by independently sampling from a normal distribution with a mean of zero and a standard deviation of 0.01.
Function handle — Initialize the channel scale factors with a custom function. If you specify a function handle, then the function must be of the form scale = func(sz), where sz is the size of the scale. For an example, see Specify Custom Weight Initialization Function.
The layer only initializes the channel scale factors when the Scale property is empty.
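The custom-initializer contract takes a size vector and returns an array of that size. A rough sketch in Python rather than MATLAB (illustrative only; the function name and sizes here are made up for the example):

```python
import numpy as np

def narrow_normal(sz, std=0.01):
    """Return initial scale factors of size sz sampled from N(0, std^2),
    mirroring the built-in 'narrow-normal' option."""
    return np.random.randn(*sz) * std

# e.g. scale factors sized 1-by-1-by-NumChannels for 2-D image input
scale = narrow_normal((1, 1, 16))
```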
Data Types: char | string | function_handle
OffsetInitializer
— Function to initialize channel offsets
'zeros' (default) | 'ones' | 'narrow-normal' | function handle
Function to initialize the channel offsets, specified as one of the following:
'zeros' — Initialize the channel offsets with zeros.
'ones' — Initialize the channel offsets with ones.
'narrow-normal' — Initialize the channel offsets by independently sampling from a normal distribution with a mean of zero and a standard deviation of 0.01.
Function handle — Initialize the channel offsets with a custom function. If you specify a function handle, then the function must be of the form offset = func(sz), where sz is the size of the offset. For an example, see Specify Custom Weight Initialization Function.
The layer only initializes the channel offsets when the Offset property is empty.
Data Types: char | string | function_handle
Scale
— Channel scale factors
[] (default) | numeric array
Channel scale factors γ, specified as a numeric array.
The channel scale factors are learnable parameters. When you train a network using the
trainnet
function or initialize a dlnetwork
object, if Scale
is nonempty, then the software uses the Scale
property as the initial value. If Scale
is empty, then the software uses the initializer specified by
ScaleInitializer
.
Depending on the type of layer input, the trainnet and dlnetwork functions automatically reshape this property to have one of the following sizes:
Layer Input | Property Size
feature input | NumChannels-by-1
vector sequence input | NumChannels-by-1
1-D image input (since R2023a) | 1-by-NumChannels
1-D image sequence input (since R2023a) | 1-by-NumChannels
2-D image input | 1-by-1-by-NumChannels
2-D image sequence input | 1-by-1-by-NumChannels
3-D image input | 1-by-1-by-1-by-NumChannels
3-D image sequence input | 1-by-1-by-1-by-NumChannels
Data Types: single | double
Offset
— Channel offsets
[] (default) | numeric array
Channel offsets β, specified as a numeric array.
The channel offsets are learnable parameters. When you train a network using the trainnet
function or initialize a dlnetwork
object, if Offset
is nonempty, then the software uses the Offset
property as the initial value. If Offset
is empty, then the software uses the initializer specified by
OffsetInitializer
.
Depending on the type of layer input, the trainnet and dlnetwork functions automatically reshape this property to have one of the following sizes:
Layer Input | Property Size
feature input | NumChannels-by-1
vector sequence input | NumChannels-by-1
1-D image input (since R2023a) | 1-by-NumChannels
1-D image sequence input (since R2023a) | 1-by-NumChannels
2-D image input | 1-by-1-by-NumChannels
2-D image sequence input | 1-by-1-by-NumChannels
3-D image input | 1-by-1-by-1-by-NumChannels
3-D image sequence input | 1-by-1-by-1-by-NumChannels
Data Types: single | double
Learning Rate and Regularization
ScaleLearnRateFactor
— Learning rate factor for scale factors
1 (default) | nonnegative scalar
Learning rate factor for the scale factors, specified as a nonnegative scalar.
The software multiplies this factor by the global learning rate to determine the learning rate for the scale factors in a layer. For example, if ScaleLearnRateFactor
is 2
, then the learning rate for the scale factors in the layer is twice the current global learning rate. The software determines the global learning rate based on the settings specified with the trainingOptions
function.
Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64
OffsetLearnRateFactor
— Learning rate factor for offsets
1 (default) | nonnegative scalar
Learning rate factor for the offsets, specified as a nonnegative scalar.
The software multiplies this factor by the global learning rate to determine the learning rate
for the offsets in a layer. For example, if OffsetLearnRateFactor
is 2
, then the learning rate for the offsets in the layer is twice
the current global learning rate. The software determines the global learning rate based
on the settings specified with the trainingOptions
function.
Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64
ScaleL2Factor
— L2 regularization factor for scale factors
1 (default) | nonnegative scalar
L2 regularization factor for the scale factors, specified as a nonnegative scalar.
The software multiplies this factor by the global L2 regularization factor to determine the L2 regularization for the scale factors in a layer. For example, if ScaleL2Factor is 2, then the L2 regularization for the scale factors in the layer is twice the global L2 regularization factor. You can specify the global L2 regularization factor using the trainingOptions function.
Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64
OffsetL2Factor
— L2 regularization factor for offsets
1 (default) | nonnegative scalar
L2 regularization factor for the offsets, specified as a nonnegative scalar.
The software multiplies this factor by the global L2 regularization factor to determine the L2 regularization for the offsets in a layer. For example, if OffsetL2Factor is 2, then the L2 regularization for the offsets in the layer is twice the global L2 regularization factor. You can specify the global L2 regularization factor using the trainingOptions function.
Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64
Layer
Name
— Layer name
"" (default) | character vector | string scalar
Layer name, specified as a character vector or a string scalar. If Name is "", then the software automatically assigns a name at training time.
NumInputs
— Number of inputs
1
(default)
This property is read-only.
Number of inputs to the layer, returned as 1
. This layer accepts a
single input only.
Data Types: double
InputNames
— Input names
{'in'}
(default)
This property is read-only.
Input names, returned as {'in'}
. This layer accepts a single input
only.
Data Types: cell
NumOutputs
— Number of outputs
1
(default)
This property is read-only.
Number of outputs from the layer, returned as 1
. This layer has a
single output only.
Data Types: double
OutputNames
— Output names
{'out'}
(default)
This property is read-only.
Output names, returned as {'out'}
. This layer has a single output
only.
Data Types: cell
Examples
Create Layer Normalization Layer
Create a layer normalization layer with the name 'layernorm'
.
layer = layerNormalizationLayer('Name','layernorm')
layer = 
  LayerNormalizationLayer with properties:

           Name: 'layernorm'
    NumChannels: 'auto'

   Hyperparameters
              Epsilon: 1.0000e-05
   OperationDimension: 'auto'

  Use properties method to see a list of all properties.

   Learnable Parameters
    Offset: []
     Scale: []
Include a layer normalization layer in a Layer
array.
layers = [
    imageInputLayer([32 32 3])
    convolution2dLayer(3,16,'Padding',1)
    layerNormalizationLayer
    reluLayer
    maxPooling2dLayer(2,'Stride',2)
    convolution2dLayer(3,32,'Padding',1)
    layerNormalizationLayer
    reluLayer
    fullyConnectedLayer(10)
    softmaxLayer]
layers = 
  10x1 Layer array with layers:

     1   ''   Image Input           32x32x3 images with 'zerocenter' normalization
     2   ''   2-D Convolution       16 3x3 convolutions with stride [1 1] and padding [1 1 1 1]
     3   ''   Layer Normalization   Layer normalization
     4   ''   ReLU                  ReLU
     5   ''   2-D Max Pooling       2x2 max pooling with stride [2 2] and padding [0 0 0 0]
     6   ''   2-D Convolution       32 3x3 convolutions with stride [1 1] and padding [1 1 1 1]
     7   ''   Layer Normalization   Layer normalization
     8   ''   ReLU                  ReLU
     9   ''   Fully Connected       10 fully connected layer
    10   ''   Softmax               softmax
Algorithms
Layer Normalization Layer
The layer normalization operation normalizes the elements x_{i} of the input by first calculating the mean μ_{L} and variance σ_{L}^{2} over the spatial, time, and channel dimensions for each observation independently. Then, it calculates the normalized activations as
$$\widehat{x}_i = \frac{x_i - \mu_L}{\sqrt{\sigma_L^2 + \epsilon}},$$
where ϵ is a constant that improves numerical stability when the variance is very small.
To allow for the possibility that inputs with zero mean and unit variance are not optimal for the operations that follow layer normalization, the layer normalization operation further shifts and scales the activations using the transformation
$$y_i = \gamma \widehat{x}_i + \beta,$$
where the offset β and scale factor γ are learnable parameters that are updated during network training.
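Putting the two steps together, the operation can be sketched in NumPy (a hand-rolled illustration, not the MATLAB implementation; here the statistics are taken over every dimension except the last, batch, dimension, matching the "batch-excluded" case):

```python
import numpy as np

def layer_norm(x, gamma, beta, eps=1e-5):
    """Normalize each observation of x over all dimensions except the
    last (batch) dimension, then scale by gamma and shift by beta."""
    axes = tuple(range(x.ndim - 1))           # all dimensions except batch
    mu = x.mean(axis=axes, keepdims=True)     # per-observation mean
    var = x.var(axis=axes, keepdims=True)     # per-observation variance
    x_hat = (x - mu) / np.sqrt(var + eps)     # normalized activations
    return gamma * x_hat + beta               # learnable scale and offset

# Example: "SSCB" data with one scale factor and offset per channel
x = np.random.randn(8, 8, 3, 2)
gamma = np.ones((1, 1, 3, 1))
beta = np.zeros((1, 1, 3, 1))
y = layer_norm(x, gamma, beta)
```

With gamma of ones and beta of zeros, each observation of the output has approximately zero mean and unit variance; training then adjusts gamma and beta away from these initial values.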
Layer Input and Output Formats
Layers in a layer array or layer graph pass data to subsequent layers as formatted dlarray
objects.
The format of a dlarray
object is a string of characters in which each
character describes the corresponding dimension of the data. The formats consist of one or
more of these characters:
"S"
— Spatial"C"
— Channel"B"
— Batch"T"
— Time"U"
— Unspecified
For example, you can describe 2D image data that is represented as a 4D array, where the
first two dimensions correspond to the spatial dimensions of the images, the third
dimension corresponds to the channels of the images, and the fourth dimension
corresponds to the batch dimension, as having the format "SSCB"
(spatial, spatial, channel, batch).
You can interact with these dlarray
objects in automatic differentiation
workflows, such as those for developing a custom layer, using a functionLayer
object, or using the forward
and predict
functions with
dlnetwork
objects.
This table shows the supported input formats of LayerNormalizationLayer
objects and the
corresponding output format. If the software passes the output of the layer to a custom
layer that does not inherit from the nnet.layer.Formattable
class, or a
FunctionLayer
object with the Formattable
property
set to 0
(false
), then the layer receives an
unformatted dlarray
object with dimensions ordered according to the formats
in this table. The formats listed here are only a subset. The layer may support additional
formats such as formats with additional "S"
(spatial) or
"U"
(unspecified) dimensions.
Input Format | Output Format
"CB" (channel, batch) | "CB" (channel, batch)
"SCB" (spatial, channel, batch) | "SCB" (spatial, channel, batch)
"SSCB" (spatial, spatial, channel, batch) | "SSCB" (spatial, spatial, channel, batch)
"SSSCB" (spatial, spatial, spatial, channel, batch) | "SSSCB" (spatial, spatial, spatial, channel, batch)
"CBT" (channel, batch, time) | "CBT" (channel, batch, time)
"SCBT" (spatial, channel, batch, time) | "SCBT" (spatial, channel, batch, time)
"SSCBT" (spatial, spatial, channel, batch, time) | "SSCBT" (spatial, spatial, channel, batch, time)
"SSSCBT" (spatial, spatial, spatial, channel, batch, time) | "SSSCBT" (spatial, spatial, spatial, channel, batch, time)
In dlnetwork
objects, LayerNormalizationLayer
objects also support
these input and output format combinations.
Input Format | Output Format
References
[1] Ba, Jimmy Lei, Jamie Ryan Kiros, and Geoffrey E. Hinton. “Layer Normalization.” Preprint, submitted July 21, 2016. https://arxiv.org/abs/1607.06450.
Extended Capabilities
C/C++ Code Generation
Generate C and C++ code using MATLAB® Coder™.
GPU Code Generation
Generate CUDA® code for NVIDIA® GPUs using GPU Coder™.
Version History
Introduced in R2021a
R2023b: Code generation support
Generate C or C++ code using MATLAB^{®} Coder™ or generate CUDA^{®} code for NVIDIA^{®} GPUs using GPU Coder™.
R2023a: Specify operation dimension
Specify which dimensions to normalize over using the OperationDimension
option.
R2023a: Epsilon supports values less than 1e-5
The Epsilon option also supports positive values less than 1e-5.
R2023a: Layer supports 1-D image sequence data
LayerNormalizationLayer objects support normalizing 1-D image sequence data (data with one spatial and one time dimension).
R2023a: Layer normalizes over channel and spatial dimensions of sequence data
Starting in R2023a, by default, the layer normalizes sequence data over the channel and spatial dimensions. In previous releases, the software normalized over all dimensions except for the batch dimension (the spatial, time, and channel dimensions). Normalization over the channel and spatial dimensions is usually better suited for this type of data. To reproduce the previous behavior, set OperationDimension to "batch-excluded".