Conjugate gradient backpropagation with Powell-Beale restarts
net.trainFcn = 'traincgb'
[net,tr] = train(net,...)
traincgb is a network training function that updates weight and bias values according to conjugate gradient backpropagation with Powell-Beale restarts.
net.trainFcn = 'traincgb' sets the network trainFcn property.
[net,tr] = train(net,...) trains the network with traincgb.
Training occurs according to traincgb training parameters, shown here with their default values:
net.trainParam.epochs            1000        Maximum number of epochs to train
net.trainParam.show              25          Epochs between displays (NaN for no displays)
net.trainParam.showCommandLine   false       Generate command-line output
net.trainParam.showWindow        true        Show training GUI
net.trainParam.goal              0           Performance goal
net.trainParam.time              inf         Maximum time to train in seconds
net.trainParam.min_grad          1e-10       Minimum performance gradient
net.trainParam.max_fail          6           Maximum validation failures
net.trainParam.searchFcn         'srchcha'   Name of line search routine to use
Parameters related to line search methods (not all used for all methods):
net.trainParam.alpha     0.001    Scale factor that determines sufficient reduction in perf
net.trainParam.beta      0.1      Scale factor that determines sufficiently large step size
net.trainParam.delta     0.01     Initial step size in interval location step
net.trainParam.gama      0.1      Parameter to avoid small reductions in performance, usually set to 0.1
net.trainParam.low_lim   0.1      Lower limit on change in step size
net.trainParam.up_lim    0.5      Upper limit on change in step size
net.trainParam.maxstep   100      Maximum step length
net.trainParam.minstep   1.0e-6   Minimum step length
net.trainParam.bmax      26       Maximum step size
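You can override any of these defaults before calling train. Here is a brief sketch; the particular values are illustrative rather than recommendations, and 'srchbac' (backtracking search) is just one of the toolbox's alternative line search routines:

net = feedforwardnet(10, 'traincgb');
net.trainParam.epochs    = 500;        % train for at most 500 epochs
net.trainParam.min_grad  = 1e-8;       % stop earlier on a flat gradient
net.trainParam.searchFcn = 'srchbac';  % backtracking search instead of 'srchcha'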
You can create a standard network that uses traincgb with feedforwardnet or cascadeforwardnet.

To prepare a custom network to be trained with traincgb,

1. Set net.trainFcn to 'traincgb'. This sets net.trainParam to traincgb's default parameters.
2. Set net.trainParam properties to desired values.

In either case, calling train with the resulting network trains the network with traincgb.
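A minimal sketch of this flow, assuming net is an existing network object and x and t hold your inputs and targets:

net.trainFcn = 'traincgb';       % step 1: also resets net.trainParam to traincgb defaults
net.trainParam.show = 10;        % step 2: override any defaults, e.g. the display interval
[net, tr] = train(net, x, t);    % train now trains the network with traincgb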
This example shows how to train a neural network using the traincgb training function.
Here a neural network is trained to predict body fat percentages.
[x, t] = bodyfat_dataset;
net = feedforwardnet(10, 'traincgb');
net = train(net, x, t);
y = net(x);
For all conjugate gradient algorithms, the search direction is periodically reset to the negative of the gradient. The standard reset point occurs when the number of iterations is equal to the number of network parameters (weights and biases), but there are other reset methods that can improve the efficiency of training. One such reset method was proposed by Powell [Powe77], based on an earlier version proposed by Beale [Beal72]. This technique restarts if there is very little orthogonality left between the current gradient and the previous gradient. This is tested with the following inequality, where gX is the current gradient and gX_old is the previous one:

|gX_old' * gX| >= 0.2 * ||gX||^2

If this condition is satisfied, the search direction is reset to the negative of the gradient.
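In MATLAB terms, the test is a one-line check. This is a sketch of the published condition, with gX and gX_old as column vectors, not the toolbox's internal code:

restart = abs(gX_old' * gX) >= 0.2 * norm(gX)^2;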
The traincgb routine has somewhat better performance than traincgp for some problems, although performance on any given problem is difficult to predict. The storage requirements for the Powell-Beale algorithm (six vectors) are slightly larger than for Polak-Ribière (four vectors).
traincgb can train any network as long as
its weight, net input, and transfer functions have derivative functions.
Backpropagation is used to calculate derivatives of performance perf with respect to the weight and bias variables X. Each variable is adjusted according to the following:
X = X + a*dX;
where dX is the search direction. The parameter a is selected to minimize the performance along the search direction. The line search function searchFcn is used to locate the minimum point. The first search direction is the negative of the gradient of performance. In succeeding iterations the search direction is computed from the new gradient and the previous search direction according to the formula
dX = -gX + dX_old*Z;
where gX is the gradient. The parameter Z can be computed in several different ways. The Powell-Beale variation
of conjugate gradient is distinguished by two features. First, the
algorithm uses a test to determine when to reset the search direction
to the negative of the gradient. Second, the search direction is computed
from the negative gradient, the previous search direction, and the
last search direction before the previous reset. See Powell, Mathematical
Programming, Vol. 12, 1977, pp. 241 to 254, for a more
detailed discussion of the algorithm.
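The overall iteration can be sketched as follows. This outline is illustrative only: fun, X0, maxIter, and linesearch are hypothetical stand-ins (the toolbox obtains gradients by backpropagation and locates the step with searchFcn), and the Polak-Ribière choice of Z is used for simplicity, whereas the full Powell-Beale update also involves the last search direction before the previous reset:

X = X0;
[perf, gX] = fun(X);                           % hypothetical objective: performance and gradient
dX = -gX;                                      % first search direction: steepest descent
for k = 1:maxIter
    a = linesearch(fun, X, dX);                % stand-in for the searchFcn line search
    X = X + a*dX;                              % take the step along dX
    gX_old = gX;
    [perf, gX] = fun(X);
    if abs(gX_old'*gX) >= 0.2*norm(gX)^2       % Powell-Beale restart test
        dX = -gX;                              % restart with the negative gradient
    else
        Z = gX'*(gX - gX_old)/(gX_old'*gX_old);   % Polak-Ribière Z, for illustration
        dX = -gX + dX*Z;                       % dX on the right-hand side is dX_old
    end
end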
Training stops when any of these conditions occurs:
The maximum number of epochs (repetitions) is reached.
The maximum amount of time is exceeded.
Performance is minimized to the goal.
The performance gradient falls below min_grad.
Validation performance has increased more than max_fail times since the last time it decreased (when using validation).
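To see which condition ended a particular run, you can inspect the training record tr returned by train; its stop field holds a short description of the stopping reason:

[net, tr] = train(net, x, t);
tr.stop   % e.g. 'Maximum epoch reached.'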
Beale, E.M.L., "A derivation of conjugate gradients," in F.A. Lootsma, ed., Numerical Methods for Nonlinear Optimization, London: Academic Press, 1972.
Powell, M.J.D., "Restart procedures for the conjugate gradient method," Mathematical Programming, Vol. 12, 1977, pp. 241–254.