Gradient descent with adaptive learning rate backpropagation
net.trainFcn = 'traingda'
[net,tr] = train(net,...)
traingda is a network training function that updates weight and bias
values according to gradient descent with adaptive learning rate.
net.trainFcn = 'traingda' sets the network trainFcn property.
[net,tr] = train(net,...) trains the network with traingda.
Training occurs according to traingda training parameters, shown here with their default values:

net.trainParam.epochs            1000     Maximum number of epochs to train
net.trainParam.goal              0        Performance goal
net.trainParam.lr                0.01     Learning rate
net.trainParam.lr_inc            1.05     Ratio to increase learning rate
net.trainParam.lr_dec            0.7      Ratio to decrease learning rate
net.trainParam.max_fail          6        Maximum validation failures
net.trainParam.max_perf_inc      1.04     Maximum performance increase
net.trainParam.min_grad          1e-05    Minimum performance gradient
net.trainParam.show              25       Epochs between displays (NaN for no displays)
net.trainParam.showCommandLine   false    Generate command-line output
net.trainParam.showWindow        true     Show training GUI
net.trainParam.time              inf      Maximum time to train in seconds
You can create a standard network that uses traingda with feedforwardnet or cascadeforwardnet. To prepare a custom network to be trained with traingda,

1. Set net.trainFcn to 'traingda'. This sets net.trainParam to traingda's default parameters.
2. Set net.trainParam properties to desired values.

In either case, calling train with the resulting network trains the network with traingda.

See help feedforwardnet and help cascadeforwardnet for examples.
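As a sketch of those two steps (the data, layer size, and overridden parameter values below are illustrative choices, not recommendations):

p = [-1 -1 2 2; 0 5 0 5];     % example inputs
t = [-1 -1 1 1];              % example targets
net = feedforwardnet(10);     % any supported network object
net.trainFcn = 'traingda';    % step 1: also resets net.trainParam to traingda defaults
net.trainParam.epochs = 500;  % step 2: override whichever defaults you want
net.trainParam.lr_dec = 0.5;
[net,tr] = train(net,p,t);    % train the resulting network with traingda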
With standard steepest descent, the learning rate is held constant throughout training. The performance of the algorithm is very sensitive to the proper setting of the learning rate. If the learning rate is set too high, the algorithm can oscillate and become unstable. If the learning rate is too small, the algorithm takes too long to converge. It is not practical to determine the optimal setting for the learning rate before training, and, in fact, the optimal learning rate changes during the training process, as the algorithm moves across the performance surface.
You can improve the performance of the steepest descent algorithm if you allow the learning rate to change during the training process. An adaptive learning rate attempts to keep the learning step size as large as possible while keeping learning stable. The learning rate is made responsive to the complexity of the local error surface.
An adaptive learning rate requires some changes in the training procedure used by
traingd. First, the initial network output and error are calculated. At
each epoch, new weights and biases are calculated using the current learning rate. New outputs
and errors are then calculated.
As with momentum, if the new error exceeds the old error by more than a predefined ratio, max_perf_inc (typically 1.04), the new weights and biases are discarded. In addition, the learning rate is decreased (typically by multiplying by lr_dec = 0.7). Otherwise, the new weights, etc., are kept. If the new error is less than the old error, the learning rate is increased (typically by multiplying by lr_inc = 1.05).
This procedure increases the learning rate, but only to the extent that the network can learn without large error increases. Thus, a near-optimal learning rate is obtained for the local terrain. When a larger learning rate could result in stable learning, the learning rate is increased. When the learning rate is too high to guarantee a decrease in error, it is decreased until stable learning resumes.
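To make the accept/reject rule concrete, here is a minimal runnable sketch on a toy quadratic objective. It mirrors the procedure described above but is not the traingda source code; the objective f, its gradient grad, and the starting point w are invented for illustration, while the rate parameters use the defaults listed earlier.

f    = @(w) sum(w.^2);                 % toy performance (error) function
grad = @(w) 2*w;                       % its gradient
w = [3; -2];                           % initial weights
lr = 0.01; lr_inc = 1.05; lr_dec = 0.7; max_perf_inc = 1.04;
perf = f(w);
for epoch = 1:100
    wNew = w - lr*grad(w);             % tentative gradient descent step
    perfNew = f(wNew);
    if perfNew > perf*max_perf_inc     % error grew too much:
        lr = lr*lr_dec;                % discard the step, decrease the rate
    else
        w = wNew;                      % keep the new weights
        if perfNew < perf
            lr = lr*lr_inc;            % error decreased: increase the rate
        end
        perf = perfNew;
    end
end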
Try the Neural Network Design demonstration nnd12vl [HDB96] for an illustration of the performance of the variable learning rate algorithm.
Backpropagation training with an adaptive learning rate is implemented with the function
traingda, which is called just like
traingd, except for
the additional training parameters max_perf_inc, lr_dec, and lr_inc. Here is how it is called to train the previous two-layer network:
p = [-1 -1 2 2; 0 5 0 5];        % input vectors
t = [-1 -1 1 1];                 % target values
net = feedforwardnet(3,'traingda');
net.trainParam.lr = 0.05;        % initial learning rate
net.trainParam.lr_inc = 1.05;    % ratio to increase the learning rate
net = train(net,p,t);
y = net(p)
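If you also capture the second output of train, the training record tr lets you inspect the run afterward; for example, tr.epoch and tr.perf hold the epoch numbers and the performance at each epoch:

[net,tr] = train(net,p,t);
plot(tr.epoch,tr.perf)           % training performance curve
xlabel('Epoch'), ylabel('Performance')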
traingda can train any network as long as its weight, net input, and
transfer functions have derivative functions.
Backpropagation is used to calculate derivatives of performance perf with respect to the weight and bias variables X. Each variable is adjusted according to gradient descent:
dX = lr*dperf/dX
At each epoch, if performance decreases toward the goal, then the learning rate is
increased by the factor
lr_inc. If performance increases by more than the factor max_perf_inc, the learning rate is adjusted by the factor
lr_dec and the change that increased the performance is not made.
Training stops when any of these conditions occurs:
The maximum number of epochs (repetitions) is reached.
The maximum amount of time is exceeded.
Performance is minimized to the goal.
The performance gradient falls below min_grad.
Validation performance has increased more than max_fail times since the last time it decreased (when using validation).
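For illustration, these tests might be combined as in the following sketch (not the toolbox source); epoch, elapsed, perf, gradNorm, and valFails stand for state that the training loop would maintain:

stop = epoch    >= net.trainParam.epochs   || ...
       elapsed  >= net.trainParam.time     || ...
       perf     <= net.trainParam.goal     || ...
       gradNorm <  net.trainParam.min_grad || ...
       valFails >= net.trainParam.max_fail;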