trainoss
One-step secant backpropagation
net.trainFcn = 'trainoss'
[net,tr] = train(net,...)
trainoss is a network training function that
updates weight and bias values according to the one-step secant method.
net.trainFcn = 'trainoss' sets the network trainFcn property.
[net,tr] = train(net,...) trains the network with trainoss.
Training occurs according to trainoss training parameters, shown here with their default values:
net.trainParam.epochs                1000   Maximum number of epochs to train
net.trainParam.goal                     0   Performance goal
net.trainParam.max_fail                 6   Maximum validation failures
net.trainParam.min_grad             1e-10   Minimum performance gradient
net.trainParam.searchFcn        'srchbac'   Name of line search routine to use
net.trainParam.show                    25   Epochs between displays (NaN for no displays)
net.trainParam.showCommandLine      false   Generate command-line output
net.trainParam.showWindow            true   Show training GUI
net.trainParam.time                   inf   Maximum time to train in seconds
Parameters related to line search methods (not all used for all methods):
net.trainParam.alpha      0.001   Scale factor that determines sufficient reduction in perf
net.trainParam.beta         0.1   Scale factor that determines sufficiently large step size
net.trainParam.delta       0.01   Initial step size in interval location step
net.trainParam.gama         0.1   Parameter to avoid small reductions in performance, usually set to 0.1 (see use in srchcha)
net.trainParam.low_lim      0.1   Lower limit on change in step size
net.trainParam.up_lim       0.5   Upper limit on change in step size
net.trainParam.maxstep      100   Maximum step length
net.trainParam.minstep   1.0e-6   Minimum step length
net.trainParam.bmax          26   Maximum step size
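For example, the line search behavior can be tuned before training. The following adjustments are illustrative values, not recommendations:

net = feedforwardnet(10,'trainoss');
net.trainParam.searchFcn = 'srchcha';  % use Charalambous' search instead of the default 'srchbac'
net.trainParam.delta = 0.05;           % larger initial step in the interval location step
net.trainParam.bmax = 50;              % allow a larger maximum step size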
You can create a standard network that uses trainoss with feedforwardnet or cascadeforwardnet. To prepare a custom network to be trained with trainoss:

1. Set net.trainFcn to 'trainoss'. This sets net.trainParam to trainoss's default parameters.
2. Set net.trainParam properties to desired values.

In either case, calling train with the resulting network trains the network with trainoss.
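As a minimal sketch of that workflow (the dataset, network size, and parameter value are arbitrary illustrations):

[x,t] = simplefit_dataset;      % small example dataset shipped with the toolbox
net = feedforwardnet(10);       % any network whose functions have derivatives
net.trainFcn = 'trainoss';      % step 1: also resets net.trainParam to trainoss defaults
net.trainParam.epochs = 200;    % step 2: override defaults as desired
[net,tr] = train(net,x,t);      % returns the trained network and the training record tr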
Here a neural network is trained to predict median house prices.
[x,t] = house_dataset;                 % load predictors x and targets t
net = feedforwardnet(10,'trainoss');   % 10-neuron hidden layer, trained with trainoss
net = train(net,x,t);                  % train the network
y = net(x)                             % simulate the trained network
Because the BFGS algorithm requires more storage and computation in each iteration than the conjugate gradient algorithms, there is a need for a secant approximation with smaller storage and computation requirements. The one-step secant (OSS) method is an attempt to bridge the gap between the conjugate gradient algorithms and the quasi-Newton (secant) algorithms. This algorithm does not store the complete Hessian matrix; it assumes that at each iteration the previous Hessian was the identity matrix. This has the additional advantage that the new search direction can be calculated without computing a matrix inverse.
The one-step secant method is described in [Batt92]. This algorithm requires less storage and computation per epoch than the BFGS algorithm. It requires slightly more storage and computation per epoch than the conjugate gradient algorithms. It can be considered a compromise between full quasi-Newton algorithms and conjugate gradient algorithms.
trainoss can train any network as long as
its weight, net input, and transfer functions have derivative functions.
Backpropagation is used to calculate derivatives of performance perf with respect to the weight and bias variables X. Each variable is adjusted according to the following:
X = X + a*dX;
where dX is the search direction. The parameter a is selected to minimize the performance along the search direction. The line search function searchFcn is used to locate the minimum point. The first search direction is the negative of the gradient of performance. In succeeding iterations the search direction is computed from the new gradient and the previous steps and gradients, according to the following formula:
dX = -gX + Ac*X_step + Bc*dgX;
where gX is the gradient, X_step is the change in the weights on the previous iteration, and dgX is the change in the gradient from the last iteration. See Battiti (Neural Computation, Vol. 4, 1992, pp. 141–166) for a more detailed discussion of the one-step secant algorithm.
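The coefficients Ac and Bc are not given explicitly above. Under the memoryless BFGS reading of the one-step secant method they reduce to a few inner products of gX, X_step, and dgX; the sketch below applies that update to a toy quadratic, with an exact line search standing in for searchFcn. It is an illustration of the algorithm under that assumption, not the toolbox's internal implementation:

% One-step secant iteration on a toy quadratic f(X) = 0.5*X'*H*X - b'*X
H = [3 1; 1 2]; b = [1; 1];
grad = @(X) H*X - b;                 % gradient of the quadratic
X = [4; -3];                         % arbitrary starting point
gX = grad(X);
dX = -gX;                            % first direction: negative gradient
for k = 1:20
    a = -(gX'*dX)/(dX'*(H*dX));      % exact line search (quadratic case only)
    X_step = a*dX;                   % change in the weights this iteration
    X = X + X_step;
    gX_new = grad(X);
    dgX = gX_new - gX;               % change in the gradient
    gX = gX_new;
    if norm(gX) < 1e-10              % converged; avoid dividing by ~0 below
        break
    end
    sy = X_step'*dgX;                % inner product shared by both coefficients
    Bc = (X_step'*gX)/sy;            % assumed memoryless BFGS coefficients
    Ac = (dgX'*gX)/sy - (1 + (dgX'*dgX)/sy)*Bc;
    dX = -gX + Ac*X_step + Bc*dgX;   % one-step secant search direction
end
disp(X)                              % approaches the minimizer H\b = [0.2; 0.4]

Note that the update never forms or inverts a matrix, which is why the method's storage stays at a few vectors.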
Training stops when any of these conditions occurs:
The maximum number of epochs (repetitions) is reached.
The maximum amount of time is exceeded.
Performance is minimized to the goal.
The performance gradient falls below min_grad.
Validation performance has increased more than max_fail times since the last time it decreased (when using validation).
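These conditions correspond to the training parameters listed earlier, so the stopping behavior can be adjusted before calling train. The values below are arbitrary illustrations:

net = feedforwardnet(10,'trainoss');
net.trainParam.epochs   = 300;     % cap the number of epochs
net.trainParam.time     = 60;      % cap training time, in seconds
net.trainParam.goal     = 1e-5;    % stop when performance reaches this goal
net.trainParam.min_grad = 1e-7;    % stop when the gradient falls below this
net.trainParam.max_fail = 8;       % allow this many validation failures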
Battiti, R., "First and second order methods for learning: Between steepest descent and Newton's method," Neural Computation, Vol. 4, No. 2, 1992, pp. 141–166.