TrainingOptionsRMSProp
Training options for RMSProp optimizer
Description
Training options for RMSProp (root mean square propagation) optimizer, including learning rate information, L2 regularization factor, and mini-batch size.
Creation
Create a TrainingOptionsRMSProp object using trainingOptions and specifying "rmsprop" as the first input argument.
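For example, the following call returns a TrainingOptionsRMSProp object with default property values (a minimal sketch; the Examples section shows a fuller call):

options = trainingOptions("rmsprop");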
Properties
RMSProp
MaxEpochs
— Maximum number of epochs
30
(default) | positive integer
Maximum number of epochs (full passes of the data) to use for training, specified as a positive integer.
Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64
MiniBatchSize
— Size of mini-batch
128
(default) | positive integer
Size of the mini-batch to use for each training iteration, specified as a positive integer. A mini-batch is a subset of the training set that is used to evaluate the gradient of the loss function and update the weights.
If the mini-batch size does not evenly divide the number of training samples, then the software discards the training data that does not fit into the final complete mini-batch of each epoch. If the mini-batch size evenly divides the number of training samples, then the software does not discard any data.
Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64
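For example, this sketch limits training to 10 epochs and uses 256 observations per mini-batch (both values are illustrative, not recommendations):

options = trainingOptions("rmsprop", ...
    MaxEpochs=10, ...
    MiniBatchSize=256);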
Shuffle
— Option for data shuffling
"once"
(default) | "never"
| "every-epoch"
Option for data shuffling, specified as one of these values:
"once"
— Shuffle the training and validation data once before training."never"
— Do not shuffle the data."every-epoch"
— Shuffle the training data before each training epoch, and shuffle the validation data before each neural network validation. If the mini-batch size does not evenly divide the number of training samples, then the software discards the training data that does not fit into the final complete mini-batch of each epoch. To avoid discarding the same data every epoch, set theShuffle
training option to"every-epoch"
.
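For example, to reshuffle the training data at the start of every epoch and the validation data before every validation pass (an illustrative sketch):

options = trainingOptions("rmsprop", ...
    Shuffle="every-epoch");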
InitialLearnRate
— Initial learning rate
0.001
(default) | positive scalar
Initial learning rate used for training, specified as a positive scalar.
If the learning rate is too low, then training can take a long time. If the learning rate is too high, then training might reach a suboptimal result or diverge.
Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64
LearnRateScheduleSettings
— Settings for learning rate schedule
structure
This property is read-only.
Settings for the learning rate schedule, specified as a structure.
LearnRateScheduleSettings
has the field
Method
, which specifies the type of method for adjusting the learning
rate. The possible methods are:
- 'none' — The learning rate is constant throughout training.
- 'piecewise' — The learning rate drops periodically during training.
If Method is 'piecewise', then LearnRateScheduleSettings contains two more fields:
- DropRateFactor — The multiplicative factor by which the learning rate drops during training
- DropPeriod — The number of epochs that pass between adjustments to the learning rate during training
Specify the settings for the learning rate schedule using trainingOptions.
Data Types: struct
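For example, this sketch requests a piecewise schedule that halves the learning rate every 5 epochs and then inspects the resulting read-only settings (the values are illustrative):

options = trainingOptions("rmsprop", ...
    InitialLearnRate=0.001, ...
    LearnRateSchedule="piecewise", ...
    LearnRateDropFactor=0.5, ...
    LearnRateDropPeriod=5);
options.LearnRateScheduleSettings   % structure with Method, DropRateFactor, and DropPeriod fields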
SquaredGradientDecayFactor
— Decay rate of squared gradient moving average
0.9
(default) | nonnegative scalar less than 1
Decay rate of squared gradient moving average for the RMSProp solver,
specified as a nonnegative scalar less than 1
.
Typical values of the decay rate are 0.9
, 0.99
, and 0.999
, corresponding to averaging lengths of 10
, 100
, and 1000
parameter updates, respectively.
For more information, see Root Mean Square Propagation.
Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64
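For example, setting the decay rate to 0.99 corresponds to averaging over roughly 100 parameter updates (an illustrative sketch):

options = trainingOptions("rmsprop", ...
    SquaredGradientDecayFactor=0.99);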
Epsilon
— Denominator offset
1e-8
(default) | positive scalar
Denominator offset for the RMSProp solver, specified as a positive scalar.
The solver adds the offset to the denominator in the neural network parameter updates to avoid division by zero. The default value works well for most tasks.
For more information about the different solvers, see Root Mean Square Propagation.
Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64
Data Formats
InputDataFormats
— Description of input data dimensions
"auto"
(default) | string array | cell array of character vectors | character vector
Since R2023b
Description of the input data dimensions, specified as a string array, character vector, or cell array of character vectors.
If InputDataFormats
is "auto"
, then the software uses
the formats expected by the network input. Otherwise, the software uses the specified
formats for the corresponding network input.
A data format is a string of characters, where each character describes the type of the corresponding dimension of the data.
The characters are:
"S"
— Spatial"C"
— Channel"B"
— Batch"T"
— Time"U"
— Unspecified
For example, for an array containing a batch of sequences where the first, second, and
third dimension correspond to channels, observations, and time steps, respectively, you can
specify that it has the format "CBT"
.
You can specify multiple dimensions labeled "S"
or "U"
.
You can use the labels "C"
, "B"
, and
"T"
at most once. The software ignores singleton trailing
"U"
dimensions located after the second dimension.
For more information, see Deep Learning Data Formats.
This option supports the trainnet
function only.
Data Types: char | string | cell
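For example, if your training predictors are stored as a channels-by-observations-by-time array, you can describe them with the "CBT" format. In this sketch, XTrain, TTrain, net, and the loss name are placeholders for your own data, network, and task:

options = trainingOptions("rmsprop", InputDataFormats="CBT");
% netTrained = trainnet(XTrain,TTrain,net,"crossentropy",options);   % placeholder call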
TargetDataFormats
— Description of target data dimensions
"auto"
(default) | string array | cell array of character vectors | character vector
Since R2023b
Description of the target data dimensions, specified as one of these values:
"auto"
— If the target data has the same number of dimensions as the input data, then thetrainnet
function uses the format specified byInputDataFormats
. If the target data has a different number of dimensions to the input data, then thetrainnet
function uses the format expected by the loss function.Data formats, specified as a string array, character vector, or cell array of character vectors — The
trainnet
function uses the specified data formats.
A data format is a string of characters, where each character describes the type of the corresponding dimension of the data.
The characters are:
"S"
— Spatial"C"
— Channel"B"
— Batch"T"
— Time"U"
— Unspecified
For example, for an array containing a batch of sequences where the first, second, and
third dimension correspond to channels, observations, and time steps, respectively, you can
specify that it has the format "CBT"
.
You can specify multiple dimensions labeled "S"
or "U"
.
You can use the labels "C"
, "B"
, and
"T"
at most once. The software ignores singleton trailing
"U"
dimensions located after the second dimension.
For more information, see Deep Learning Data Formats.
This option supports the trainnet
function only.
Data Types: char | string | cell
Monitoring
Plots
— Plots to display during neural network training
"none"
(default) | "training-progress"
Plots to display during neural network training, specified as one of these values:
"none"
— Do not display plots during training."training-progress"
— Plot training progress.
trainnet
Function
The plot shows the mini-batch loss, validation loss, training mini-batch and
validation metrics as specified by the Metrics
property, and additional information about the training
progress.
To programmatically open and close the training progress plot after training, use
the show
and close
functions with the second output of the trainnet
function. You
can use the show
function to view the training progress even if
the Plots
training option is specified as
"none"
.
trainNetwork
Function
The plot shows the mini-batch loss and accuracy, validation loss and accuracy, and
additional information about the training progress. For more information about the
trainNetwork
training progress plot, see Monitor Deep Learning Training Progress.
Metrics
— Metrics to track
[]
(default) | character vector | string array | function handle | cell array | metric object
Since R2023b
Metrics to track, specified as a character vector or string scalar of a
built-in metric name, a string array of names, a built-in or custom metric object, a function
handle (@myMetric
), or a cell array of names, metric objects, and
function handles.
- Built-in metric name — Specify metrics as a string scalar, character vector, or string array of built-in metric names. Supported values are "accuracy", "fscore", "recall", "precision", "rmse", and "auc".
- Built-in metric object — If you need more flexibility, you can use built-in metric objects. When you create a built-in metric object, you can specify additional options such as the averaging type and whether the task is single-label or multilabel.
- Custom metric function handle — If the metric you need is not a built-in metric, then you can specify custom metrics using a function handle. The function must have the syntax metric = metricFunction(Y,T), where Y corresponds to the network predictions and T corresponds to the target responses. For networks with multiple outputs, the syntax must be metric = metricFunction(Y1,…,YN,T1,…,TM), where N is the number of outputs and M is the number of targets. For more information, see Define Custom Metric Function.
Note: When you have validation data in mini-batches, the software computes the validation metric for each mini-batch and then returns the average of those values. For some metrics, this behavior can result in a different metric value than if you compute the metric using the whole validation set at once. In most cases, the values are similar. To use a custom metric that is not batch-averaged for the validation data, you must create a custom metric object. For more information, see Define Custom Deep Learning Metric Object.
- Custom metric object — If you need greater customization, then you can define your own custom metric object. For an example that shows how to create a custom metric, see Define Custom F-Beta Score Metric Object. For general information about creating custom metrics, see Define Custom Deep Learning Metric Object. Specify your custom metric as the Metrics option of the trainingOptions function.
This option supports the trainnet
and
trainBERTDocumentClassifier
(Text Analytics Toolbox) functions only.
Example: Metrics=["accuracy","fscore"]
Example: Metrics=["accuracy",@myFunction,precisionObj]
Verbose
— Flag to display training progress information
1
(true
) (default) | 0
(false
)
Flag to display training progress information in the command window, specified as
1
(true
) or 0
(false
).
The content of the verbose output depends on the function that you use for training.
trainnet
Function
When you use the trainnet
function, the verbose output
displays a table with these variables:
Variable | Description |
---|---|
Iteration | Iteration number |
Epoch | Epoch number |
TimeElapsed | Time elapsed in hours, minutes, and seconds |
LearnRate | Learning rate |
TrainingLoss | Training loss |
ValidationLoss | Validation loss. If you do not specify validation data, then the software does not display this information. |
If you specify additional metrics in the training options, then
they also appear in the verbose output. For example, if you set the Metrics
training option to "accuracy"
, then the information includes the
TrainingAccuracy
and ValidationAccuracy
variables.
When training stops, the verbose output displays the reason for stopping.
To specify validation data, use the ValidationData
training option.
trainNetwork
Function
When you use the trainNetwork
function, the verbose output displays a table. The variables of the table depend on the type of neural network.
For classification neural networks, the table contains these variables:
Variable | Description |
---|---|
Epoch | Epoch number. An epoch corresponds to a full pass of the data. |
Iteration | Iteration number. An iteration corresponds to a mini-batch. |
Time Elapsed | Time elapsed in hours, minutes, and seconds. |
Mini-batch Accuracy | Classification accuracy on the mini-batch. |
Validation Accuracy | Classification accuracy on the validation data. If you do not specify validation data, then the software does not display this information. |
Mini-batch Loss | Loss on the mini-batch. If the output layer is a ClassificationOutputLayer object, then the loss is the cross entropy loss for multi-class classification problems with mutually exclusive classes. |
Validation Loss | Loss on the validation data. If the output layer is a ClassificationOutputLayer object, then the loss is the cross entropy loss for multi-class classification problems with mutually exclusive classes. If you do not specify validation data, then the software does not display this information. |
Base Learning Rate | Base learning rate. The software multiplies the learn rate factors of the layers by this value. |
For regression neural networks, the table contains these variables:
Variable | Description |
---|---|
Epoch | Epoch number. An epoch corresponds to a full pass of the data. |
Iteration | Iteration number. An iteration corresponds to a mini-batch. |
Time Elapsed | Time elapsed in hours, minutes, and seconds. |
Mini-batch RMSE | Root-mean-squared-error (RMSE) on the mini-batch. |
Validation RMSE | RMSE on the validation data. If you do not specify validation data, then the software does not display this information. |
Mini-batch Loss | Loss on the mini-batch. If the output layer is a RegressionOutputLayer object, then the loss is the half-mean-squared-error. |
Validation Loss | Loss on the validation data. If the output layer is a RegressionOutputLayer object, then the loss is the half-mean-squared-error. If you do not specify validation data, then the software does not display this information. |
Base Learning Rate | Base learning rate. The software multiplies the learn rate factors of the layers by this value. |
When training stops, the verbose output displays the reason for stopping.
To specify validation data, use the ValidationData
training option.
Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64 | logical
VerboseFrequency
— Frequency of verbose printing
50
(default) | positive integer
Frequency of verbose printing, which is the number of iterations between printing to
the command window, specified as a positive integer. This option only has an effect when
the Verbose
training option is 1
(true
).
If you validate the neural network during training, then the software also prints to the command window every time validation occurs.
Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64
OutputFcn
— Output functions
function handle | cell array of function handles
Output functions to call during training, specified as a function handle or cell array of function handles. The software calls the functions once before the start of training, after each iteration, and once when training is complete.
The functions must have the syntax stopFlag = f(info)
, where info
is a structure containing information about the training progress, and stopFlag
is a scalar that indicates to stop training early. If stopFlag
is 1
(true
), then the software stops training. Otherwise, the software continues training.
The fields of the structure info
depend on the training function
that you use.
trainnet
Function
The trainnet
function passes the output function the
structure info
that contains these fields:
Field | Description |
---|---|
Epoch | Epoch number |
Iteration | Iteration number |
TimeElapsed | Time since start of training |
LearnRate | Iteration learn rate |
TrainingLoss | Iteration training loss |
ValidationLoss | Validation loss, if specified and evaluated at iteration. |
State | Iteration training state, specified as "start" , "iteration" , or "done" . |
If you specify additional metrics in the training options, then
they also appear in the training information. For example, if you set the
Metrics
training option to "accuracy"
, then the
information includes the TrainingAccuracy
and
ValidationAccuracy
fields.
If a field is not calculated or relevant for a certain call to the output functions, then that field contains an empty array.
For an example showing how to use output functions, see Customize Output During Deep Learning Network Training.
trainNetwork
Function
The trainNetwork
function passes the output function the
structure info
that contains these fields:
Field | Description |
---|---|
Epoch | Current epoch number |
Iteration | Current iteration number |
TimeSinceStart | Time in seconds since the start of training |
TrainingLoss | Current mini-batch loss |
ValidationLoss | Loss on the validation data |
BaseLearnRate | Current base learning rate |
TrainingAccuracy | Accuracy on the current mini-batch (classification neural networks) |
TrainingRMSE | RMSE on the current mini-batch (regression neural networks) |
ValidationAccuracy | Accuracy on the validation data (classification neural networks) |
ValidationRMSE | RMSE on the validation data (regression neural networks) |
State | Current training state, with a possible value of "start", "iteration", or "done". |
If a field is not calculated or relevant for the call to the output functions, then that field contains an empty array.
For an example showing how to use output functions, see Customize Output During Deep Learning Network Training.
Data Types: function_handle | cell
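As a sketch of the stopFlag = f(info) syntax, this hypothetical output function stops training once the iteration training loss drops below an arbitrary threshold (the field names follow the trainnet table above; local functions go at the end of a script file):

options = trainingOptions("rmsprop", ...
    OutputFcn=@stopWhenLossIsLow);

function stopFlag = stopWhenLossIsLow(info)
    % Stop training early when the training loss falls below 0.05.
    stopFlag = false;
    if info.State == "iteration" && ~isempty(info.TrainingLoss)
        stopFlag = info.TrainingLoss < 0.05;
    end
end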
Validation
ValidationData
— Data to use for validation during training
[]
(default) | datastore | table | cell array
Data to use for validation during training, specified as []
, a
datastore, a table, or a cell array containing the validation predictors and
responses.
During training, the software calculates the validation accuracy and validation loss on the validation data. To specify the validation frequency, use the ValidationFrequency
training option. You can also use the validation data to stop training automatically when the validation loss stops decreasing. To turn on automatic validation stopping, use the ValidationPatience
training option.
If ValidationData
is []
, then the software does
not validate the neural network during training.
If your neural network has layers that behave differently during prediction than during training (for example, dropout layers), then the validation accuracy can be higher than the training accuracy.
The validation data is shuffled according to the Shuffle
training option. If
Shuffle
is "every-epoch"
, then the
validation data is shuffled before each neural network validation.
The supported formats depend on the training function that you use.
trainnet
Function
Specify the validation data as a datastore or the cell array
{predictors,targets}
, where predictors
contains
the validation predictors and targets
contains the validation targets.
Specify the validation predictors and targets using any of the formats supported by the
trainnet
function.
For more information, see the input arguments of the trainnet
function.
trainNetwork
Function
Specify the validation data as a datastore, table, or the cell array
{predictors,targets}
, where predictors
contains the validation predictors and targets
contains the
validation targets. Specify the validation predictors and targets using any of the
formats supported by the trainNetwork
function.
For more information, see the input arguments of the trainNetwork
function.
trainBERTDocumentClassifier
Function (Text Analytics Toolbox)
Specify the validation data as one of these values:
- Cell array {documents,targets}, where documents contains the input documents, and targets contains the document labels.
- Table, where the first variable contains the input documents and the second variable contains the document labels.
For more information, see the input arguments of the trainBERTDocumentClassifier
(Text Analytics Toolbox) function.
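For example, with the trainnet function you can pass held-out predictors and targets as a cell array and combine the validation options described in this section (XValidation and TValidation are placeholders; replace them with your own data):

XValidation = rand(100,10);   % placeholder validation predictors
TValidation = rand(100,1);    % placeholder validation targets
options = trainingOptions("rmsprop", ...
    ValidationData={XValidation,TValidation}, ...
    ValidationFrequency=30, ...
    ValidationPatience=5, ...
    OutputNetwork="best-validation-loss");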
ValidationFrequency
— Frequency of neural network validation
50
(default) | positive integer
Frequency of neural network validation in number of iterations, specified as a positive integer.
The ValidationFrequency
value is the number of iterations between
evaluations of validation metrics. To specify validation data, use the ValidationData
training option.
Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64
ValidationPatience
— Patience of validation stopping
Inf
(default) | positive integer
Patience of validation stopping of neural network training, specified as a positive
integer or Inf
.
ValidationPatience
specifies the number of times that the loss on
the validation set can be larger than or equal to the previously smallest loss before
neural network training stops. If ValidationPatience
is
Inf
, then the values of the validation loss do not cause training
to stop early.
The returned neural network depends on the OutputNetwork
training option. To return the neural network with the
lowest validation loss, set the OutputNetwork
training option to
"best-validation-loss"
.
Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64
OutputNetwork
— Neural network to return when training completes
"last-iteration"
(default) | "best-validation-loss"
Neural network to return when training completes, specified as one of the following:
"last-iteration"
– Return the neural network corresponding to the last training iteration."best-validation-loss"
– Return the neural network corresponding to the training iteration with the lowest validation loss. To use this option, you must specify theValidationData
training option.
Regularization and Normalization
L2Regularization
— Factor for L2 regularization
0.0001
(default) | nonnegative scalar
Factor for L2 regularization (weight decay), specified as a nonnegative scalar. For more information, see L2 Regularization.
You can specify a multiplier for the L2 regularization for neural network layers with learnable parameters. For more information, see Set Up Parameters in Convolutional and Fully Connected Layers.
Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64
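For example, to strengthen weight decay globally (the value is illustrative):

options = trainingOptions("rmsprop", ...
    L2Regularization=5e-4);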
ResetInputNormalization
— Option to reset input layer normalization
1
(true
) (default) | 0
(false
)
Option to reset input layer normalization, specified as one of the following:
- 1 (true) — Reset the input layer normalization statistics and recalculate them at training time.
- 0 (false) — Calculate normalization statistics at training time when they are empty.
Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64 | logical
BatchNormalizationStatistics
— Mode to evaluate statistics in batch normalization layers
"auto"
(default) | "population"
| "moving"
Mode to evaluate the statistics in batch normalization layers, specified as one of the following:
"population"
— Use the population statistics. After training, the software finalizes the statistics by passing through the training data once more and uses the resulting mean and variance."moving"
— Approximate the statistics during training using a running estimate given by update stepswhere and denote the updated mean and variance, respectively, and denote the mean and variance decay values, respectively, and denote the mean and variance of the layer input, respectively, and and denote the latest values of the moving mean and variance values, respectively. After training, the software uses the most recent value of the moving mean and variance statistics. This option supports CPU and single GPU training only.
"auto"
— Use the"moving"
option for thetrainnet
function and the"population"
option for thetrainNetwork
function.
Gradient Clipping
GradientThreshold
— Gradient threshold
Inf
(default) | positive scalar
Gradient threshold, specified as Inf
or a positive scalar. If the
gradient exceeds the value of GradientThreshold
, then the gradient
is clipped according to the GradientThresholdMethod
training
option.
For more information, see Gradient Clipping.
Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64
GradientThresholdMethod
— Gradient threshold method
"l2norm"
(default) | "global-l2norm"
| "absolute-value"
Gradient threshold method used to clip gradient values that exceed the gradient threshold, specified as one of the following:
"l2norm"
— If the L2 norm of the gradient of a learnable parameter is larger thanGradientThreshold
, then scale the gradient so that the L2 norm equalsGradientThreshold
."global-l2norm"
— If the global L2 norm, L, is larger thanGradientThreshold
, then scale all gradients by a factor ofGradientThreshold/
L. The global L2 norm considers all learnable parameters."absolute-value"
— If the absolute value of an individual partial derivative in the gradient of a learnable parameter is larger thanGradientThreshold
, then scale the partial derivative to have magnitude equal toGradientThreshold
and retain the sign of the partial derivative.
For more information, see Gradient Clipping.
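For example, to clip gradients whose global L2 norm exceeds 1 (illustrative values):

options = trainingOptions("rmsprop", ...
    GradientThreshold=1, ...
    GradientThresholdMethod="global-l2norm");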
Sequence
SequenceLength
— Option to pad or truncate sequences
"longest"
(default) | "shortest"
| positive integer
Option to pad, truncate, or split input sequences, specified as one of the following:
"longest"
— Pad sequences in each mini-batch to have the same length as the longest sequence. This option does not discard any data, though padding can introduce noise to the neural network."shortest"
— Truncate sequences in each mini-batch to have the same length as the shortest sequence. This option ensures that no padding is added, at the cost of discarding data.Positive integer — For each mini-batch, pad the sequences to the length of the longest sequence in the mini-batch, and then split the sequences into smaller sequences of the specified length. If splitting occurs, then the software creates extra mini-batches. If the specified sequence length does not evenly divide the sequence lengths of the data, then the mini-batches containing the ends those sequences have length shorter than the specified sequence length. Use this option if the full sequences do not fit in memory. Alternatively, try reducing the number of sequences per mini-batch by setting the
MiniBatchSize
option to a lower value.This option supports the
trainNetwork
function only.
To learn more about the effect of padding, truncating, and splitting the input sequences, see Sequence Padding, Truncation, and Splitting.
Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64 | char | string
SequencePaddingDirection
— Direction of padding or truncation
"right"
(default) | "left"
Direction of padding or truncation, specified as one of the following:
"right"
— Pad or truncate sequences on the right. The sequences start at the same time step and the software truncates or adds padding to the end of the sequences."left"
— Pad or truncate sequences on the left. The software truncates or adds padding to the start of the sequences so that the sequences end at the same time step.
Because recurrent layers process sequence data one time step at a time, when the recurrent
layer OutputMode
property is "last"
, any padding in
the final time steps can negatively influence the layer output. To pad or truncate sequence
data on the left, set the SequencePaddingDirection
option to "left"
.
For sequence-to-sequence neural networks (when the OutputMode
property is
"sequence"
for each recurrent layer), any padding in the first time
steps can negatively influence the predictions for the earlier time steps. To pad or
truncate sequence data on the right, set the SequencePaddingDirection
option to "right"
.
To learn more about the effect of padding, truncating, and splitting the input sequences, see Sequence Padding, Truncation, and Splitting.
SequencePaddingValue
— Value to pad sequences
0
(default) | scalar
Value by which to pad input sequences, specified as a scalar.
Do not pad sequences with NaN
, because doing so can propagate
errors throughout the neural network.
Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64
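For example, for a sequence-to-label task where the final time steps matter most, you might pad on the left so that padding does not appear at the end of the sequences (an illustrative sketch):

options = trainingOptions("rmsprop", ...
    SequenceLength="longest", ...
    SequencePaddingDirection="left", ...
    SequencePaddingValue=0);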
Hardware
ExecutionEnvironment
— Hardware resource for training neural network
"auto"
(default) | "cpu"
| "gpu"
| "multi-gpu"
| "parallel"
| "parallel-auto"
| "parallel-cpu"
| "parallel-gpu"
Hardware resource for training neural network, specified as one of these values:
Execution Environment | Hardware Resources Used |
---|---|
"auto" | Use a local GPU if one is available. Otherwise, use the local CPU. |
"cpu" | Use the local CPU. |
"gpu" | Use the local GPU. |
"multi-gpu" | Use multiple GPUs on one machine, using a local parallel pool based on your default cluster profile. If there is no current parallel pool, the software starts a parallel pool with pool size equal to the number of available GPUs. |
"parallel" | Use a local or remote parallel pool. If there is no current parallel pool, the software starts one using the default cluster profile. If the pool has access to GPUs, then only workers with a unique GPU perform training computation and excess workers become idle. If the pool does not have GPUs, then training takes place on all available CPU workers instead. |
"parallel-auto" |
|
"parallel-cpu" |
|
"parallel-gpu" |
|
The "gpu"
, "multi-gpu"
, "parallel"
, "parallel-auto"
, "parallel-cpu"
, and "parallel-gpu"
options require Parallel Computing Toolbox™. To use a GPU for deep learning, you must also have a supported GPU device. For information on supported devices, see GPU Computing Requirements (Parallel Computing Toolbox). If you choose one of these options and Parallel Computing Toolbox or a suitable GPU is not available, then the software returns an error.
For more information on when to use the different execution environments, see Scale Up Deep Learning in Parallel, on GPUs, and in the Cloud.
To see an improvement in performance when training in parallel, try scaling up the MiniBatchSize
and InitialLearnRate
training options by the number of GPUs.
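For example, when training on two local GPUs you might double the mini-batch size and initial learning rate (illustrative values; this requires Parallel Computing Toolbox and supported GPU devices):

options = trainingOptions("rmsprop", ...
    ExecutionEnvironment="multi-gpu", ...
    MiniBatchSize=2*128, ...
    InitialLearnRate=2*0.001);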
When you train a network using the trainNetwork
function, the
"multi-gpu"
and "parallel"
options do not support
neural networks containing custom layers with state parameters or built-in layers that are
stateful at training time. For example:
- Recurrent layers such as LSTMLayer, BiLSTMLayer, or GRULayer objects when the SequenceLength training option is a positive integer
- BatchNormalizationLayer objects when the BatchNormalizationStatistics training option is set to "moving"
WorkerLoad
— Parallel worker load division
scalar from 0
to 1
| positive integer | numeric vector
Parallel worker load division between GPUs or CPUs, specified as one of the following:
- Scalar from 0 to 1 — Fraction of workers on each machine to use for neural network training computation. If you train the neural network using data in a mini-batch datastore with background dispatch enabled, then the remaining workers fetch and preprocess data in the background.
- Positive integer — Number of workers on each machine to use for neural network training computation. If you train the neural network using data in a mini-batch datastore with background dispatch enabled, then the remaining workers fetch and preprocess data in the background.
- Numeric vector — Neural network training load for each worker in the parallel pool. For a vector W, worker i gets a fraction W(i)/sum(W) of the work (number of examples per mini-batch). If you train a neural network using data in a mini-batch datastore with background dispatch enabled, then you can assign a worker load of 0 to use that worker for fetching data in the background. The specified vector must contain one value per worker in the parallel pool.
If the parallel pool has access to GPUs, then workers without a unique GPU are never used for training computation. The default for pools with GPUs is to use all workers with a unique GPU for training computation, and the remaining workers for background dispatch. If the pool does not have access to GPUs and CPUs are used for training, then the default is to use one worker per machine for background data dispatch.
This option supports stochastic solvers only (when the solverName
argument is "sgdm"
, "adam"
, or
"rmsprop"
).
This option supports the trainNetwork
function only.
Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64
DispatchInBackground
— Flag to enable background dispatch
0
(false
) (default) | 1
(true
)
Flag to enable background dispatch, specified as 0
(false
) or 1
(true
).
Background dispatch uses parallel workers to fetch and preprocess data from a datastore during training. Use this option when your mini-batches require significant preprocessing. For more information on when to use background dispatch, see Use Datastore for Parallel Training and Background Dispatching.
When DispatchInBackground
is set to true
, the
software opens a local parallel pool using the default profile, if a local pool is not
currently open. Non-local parallel pools are not supported.
Using this option requires Parallel Computing Toolbox. The input datastore must be subsettable or partitionable. To use this
option, custom datastores must implement the matlab.io.datastore.Subsettable
class.
This option supports stochastic solvers only (when the solverName
argument is "sgdm"
, "adam"
, or
"rmsprop"
).
This option does not support the trainnet
function when training in parallel.
Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64
Checkpoints
CheckpointPath
— Path for saving checkpoint neural networks
""
(default) | string scalar | character vector
Path for saving the checkpoint neural networks, specified as a string scalar or character vector.
- If you do not specify a path (that is, you use the default ""), then the software does not save any checkpoint neural networks.
- If you specify a path, then the software saves checkpoint neural networks to this path and assigns a unique name to each neural network. You can then load any checkpoint neural network and resume training from that neural network.
The folder must exist before you specify the path for saving the checkpoint neural networks; if the path you specify does not exist, then the software throws an error.
For more information about saving neural network checkpoints, see Save Checkpoint Networks and Resume Training.
Data Types: char | string
CheckpointFrequency
— Frequency of saving checkpoint neural networks
1
(default) | positive integer
Frequency of saving checkpoint neural networks, specified as a positive integer.
If CheckpointFrequencyUnit
is "epoch"
, then the software
saves checkpoint neural networks every CheckpointFrequency
epochs.
If CheckpointFrequencyUnit
is "iteration"
, then the
software saves checkpoint neural networks every
CheckpointFrequency
iterations.
This option only has an effect when CheckpointPath
is
nonempty.
Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64
CheckpointFrequencyUnit
— Checkpoint frequency unit
"epoch"
(default) | "iteration"
Checkpoint frequency unit, specified as "epoch"
or "iteration"
.
If CheckpointFrequencyUnit
is "epoch"
, then the software
saves checkpoint neural networks every CheckpointFrequency
epochs.
If CheckpointFrequencyUnit
is "iteration"
, then the
software saves checkpoint neural networks every
CheckpointFrequency
iterations.
This option only has an effect when CheckpointPath
is nonempty.
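For example, this sketch creates a checkpoint folder and saves a checkpoint network every five epochs (the folder name is arbitrary):

checkpointDir = "checkpoints";
if ~exist(checkpointDir,"dir")
    mkdir(checkpointDir);    % the folder must exist before training starts
end
options = trainingOptions("rmsprop", ...
    CheckpointPath=checkpointDir, ...
    CheckpointFrequency=5, ...
    CheckpointFrequencyUnit="epoch");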
Examples
Create Training Options for the RMSProp Optimizer
Create a set of options for training a neural network using the RMSProp optimizer. Set the maximum number of epochs for training to 20, and use a mini-batch with 64 observations at each iteration. Specify the learning rate and the decay rate of the moving average of the squared gradient. Turn on the training progress plot.
options = trainingOptions("rmsprop", ... InitialLearnRate=3e-4, ... SquaredGradientDecayFactor=0.99, ... MaxEpochs=20, ... MiniBatchSize=64, ... Plots="training-progress")
options = 
  TrainingOptionsRMSProp with properties:

    SquaredGradientDecayFactor: 0.9900
    Epsilon: 1.0000e-08
    InitialLearnRate: 3.0000e-04
    MaxEpochs: 20
    LearnRateSchedule: 'none'
    LearnRateDropFactor: 0.1000
    LearnRateDropPeriod: 10
    MiniBatchSize: 64
    Shuffle: 'once'
    WorkerLoad: []
    CheckpointFrequency: 1
    CheckpointFrequencyUnit: 'epoch'
    SequenceLength: 'longest'
    DispatchInBackground: 0
    L2Regularization: 1.0000e-04
    GradientThresholdMethod: 'l2norm'
    GradientThreshold: Inf
    Verbose: 1
    VerboseFrequency: 50
    ValidationData: []
    ValidationFrequency: 50
    ValidationPatience: Inf
    CheckpointPath: ''
    ExecutionEnvironment: 'auto'
    OutputFcn: []
    Metrics: []
    Plots: 'training-progress'
    SequencePaddingValue: 0
    SequencePaddingDirection: 'right'
    InputDataFormats: "auto"
    TargetDataFormats: "auto"
    ResetInputNormalization: 1
    BatchNormalizationStatistics: 'auto'
    OutputNetwork: 'last-iteration'
Algorithms
Root Mean Square Propagation
Stochastic gradient descent with momentum uses a single learning rate for all the parameters. Other optimization algorithms seek to improve network training by using learning rates that differ by parameter and can automatically adapt to the loss function being optimized. Root mean square propagation (RMSProp) is one such algorithm. It keeps a moving average of the element-wise squares of the parameter gradients,
vℓ = β2 vℓ−1 + (1 − β2) [∇E(θℓ)]²
where ℓ is the iteration number and β2 is the squared gradient decay factor of the moving average. Common values of the decay rate are 0.9, 0.99, and 0.999. The corresponding averaging lengths of the squared gradients equal 1/(1-β2), that is, 10, 100, and 1000 parameter updates, respectively. The RMSProp algorithm uses this moving average to normalize the updates of each parameter individually,
θℓ+1 = θℓ − α ∇E(θℓ) / (√vℓ + ɛ)
where α is the learning rate and the division is performed element-wise. Using RMSProp effectively decreases the learning rates of parameters with large gradients and increases the learning rates of parameters with small gradients. ɛ is a small constant added to avoid division by zero.
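The following MATLAB sketch performs a single RMSProp parameter update using the equations above. It is purely illustrative (the parameter and gradient values are arbitrary), not the trainnet or trainNetwork implementation:

theta = [0.5; -1.2];            % learnable parameters
g     = [0.1; -0.3];            % gradient of the loss with respect to theta
v     = zeros(size(theta));     % moving average of element-wise squared gradients
alpha   = 1e-3;                 % InitialLearnRate
beta2   = 0.9;                  % SquaredGradientDecayFactor
epsilon = 1e-8;                 % Epsilon

v     = beta2*v + (1 - beta2)*g.^2;               % update squared-gradient moving average
theta = theta - alpha*g ./ (sqrt(v) + epsilon);   % element-wise normalized parameter update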
L2 Regularization
Adding a regularization term for the weights to the loss function is one way to reduce overfitting [1], [2]. The regularization term is also called weight decay. The loss function with the regularization term takes the form
ER(θ) = E(θ) + λΩ(w)
where w is the weight vector, λ is the regularization factor (coefficient), and the regularization function Ω(w) is
Ω(w) = (1/2) wᵀw
Note that the biases are not regularized [2]. You can specify the regularization factor by using the L2Regularization
training option. You can also specify different regularization factors for different layers and parameters. For more information, see Set Up Parameters in Convolutional and Fully Connected Layers.
The loss function that the software uses for network training includes the regularization term. However, the loss value displayed in the command window and training progress plot during training is the loss on the data only and does not include the regularization term.
Gradient Clipping
If the gradients increase in magnitude exponentially, then the training is unstable and can diverge within a few iterations. This "gradient explosion" is indicated by a training loss that goes to NaN
or Inf
. Gradient clipping helps prevent gradient explosion by stabilizing the training at higher learning rates and in the presence of outliers [3]. Gradient clipping enables networks to be trained faster, and does not usually impact the accuracy of the learned task.
There are two types of gradient clipping.
- Norm-based gradient clipping rescales the gradient based on a threshold, and does not change the direction of the gradient. The "l2norm" and "global-l2norm" values of GradientThresholdMethod are norm-based gradient clipping methods.
- Value-based gradient clipping clips any partial derivative greater than the threshold, which can result in the gradient arbitrarily changing direction. Value-based gradient clipping can have unpredictable behavior, but sufficiently small changes do not cause the network to diverge. The "absolute-value" value of GradientThresholdMethod is a value-based gradient clipping method.
References
[1] Bishop, C. M. Pattern Recognition and Machine Learning. Springer, New York, NY, 2006.
[2] Murphy, K. P. Machine Learning: A Probabilistic Perspective. The MIT Press, Cambridge, Massachusetts, 2012.
[3] Pascanu, R., T. Mikolov, and Y. Bengio. "On the difficulty of training recurrent neural networks". Proceedings of the 30th International Conference on Machine Learning. Vol. 28(3), 2013, pp. 1310–1318.
Version History
Introduced in R2018a
R2023b: Specify input and target data formats
Specify the input and target data formats using the InputDataFormats
and TargetDataFormats
options, respectively.
These options support the trainnet
function only.
R2023b: Train neural network in parallel using only CPU or only GPU resources
Train a neural network in parallel using specific hardware resources by specifying the
ExecutionEnvironment
as "parallel-cpu"
or
"parallel-gpu"
.
This option supports the trainnet
function only.
R2023b: BatchNormalizationStatistics
default is "auto"
Starting in R2023b, the BatchNormalizationStatistics
training option default
value is "auto"
.
This change does not affect the behavior of the function. If you have code that checks the BatchNormalizationStatistics
property, then update your code to account for the "auto"
option.
R2022b: trainNetwork
pads mini-batches to length of longest sequence before splitting when you specify SequenceLength
training option as an integer
Starting in R2022b, when you train a neural network with sequence data using the trainNetwork
function and the SequenceLength
option is an integer, the software pads sequences to the
length of the longest sequence in each mini-batch and then splits the sequences into
mini-batches with the specified sequence length. If SequenceLength
does
not evenly divide the sequence length of the mini-batch, then the last split mini-batch has
a length shorter than SequenceLength
. This behavior prevents the neural
network from training on time steps that contain only padding values.
In previous releases, the software pads mini-batches of sequences to have a length matching the nearest multiple of SequenceLength
that is greater than or equal to the mini-batch length and then splits the data. To reproduce this behavior, use a custom training loop and implement this behavior when you preprocess mini-batches of data.
R2018b: ValidationPatience
training option default is Inf
Starting in R2018b, the default value of the ValidationPatience
training option is Inf
, which means that automatic stopping via validation is turned off. This behavior prevents the training from stopping before sufficiently learning from the data.
In previous versions, the default value is 5
. To reproduce this behavior, set the ValidationPatience
option to 5
.
See Also
trainnet
| trainNetwork
| trainingOptions
Topics
- Create Simple Deep Learning Neural Network for Classification
- Transfer Learning Using Pretrained Network
- Resume Training from Checkpoint Network
- Deep Learning with Big Data on CPUs, GPUs, in Parallel, and on the Cloud
- Specify Layers of Convolutional Neural Network
- Set Up Parameters and Train Convolutional Neural Network