Scaled conjugate gradient backpropagation
net.trainFcn = 'trainscg' sets the network trainFcn property.
trainscg is a network training function that updates weight and bias
values according to the scaled conjugate gradient method.
Training occurs according to trainscg training parameters, shown here
with their default values (a short example of overriding them follows the list):
net.trainParam.epochs — Maximum number of epochs to train. The
default value is 1000.
net.trainParam.show — Epochs between displays
(NaN for no displays). The default value is 25.
net.trainParam.showCommandLine — Generate command-line output. The
default value is false.
net.trainParam.showWindow — Show training GUI. The default value is true.
net.trainParam.goal — Performance goal. The default value is 0.
net.trainParam.time — Maximum time to train in seconds. The default value is inf.
net.trainParam.min_grad — Minimum performance gradient. The default value is 1e-6.
net.trainParam.max_fail — Maximum validation failures. The default value is 6.
net.trainParam.mu — Marquardt adjustment parameter. The default
value is 0.005.
net.trainParam.sigma — Determines the change in weight for the second
derivative approximation. The default value is 5.0e-5.
net.trainParam.lambda — Parameter for regulating the indefiniteness
of the Hessian. The default value is 5.0e-7.
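For example, a few of these defaults might be overridden before training as in the following sketch; the network size and parameter values here are illustrative only, not recommendations.
net = feedforwardnet(10,'trainscg');   % small feed-forward network trained with trainscg
net.trainParam.epochs   = 500;         % train for at most 500 epochs
net.trainParam.goal     = 1e-5;        % stop once performance reaches this goal
net.trainParam.max_fail = 10;          % allow more validation failures before stopping
net.trainParam.showWindow = false;     % suppress the training GUI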
This example shows how to solve a problem consisting of inputs
p and targets
t by using a network.
p = [0 1 2 3 4 5]; t = [0 0 0 1 1 1];
A two-layer feed-forward network with two hidden neurons and this training function is created.
net = feedforwardnet(2,'trainscg');
Here the network is trained and tested.
net = train(net,p,t); a = net(p)
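Optionally, the performance of the trained network on the same data can be checked with the perform function; this extra line is not part of the original example.
perf = perform(net,t,a)   % e.g., mean squared error between targets t and outputs a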
See help feedforwardnet and help cascadeforwardnet
for other examples.
trainedNet — Trained network
Trained network, returned as a network object.
tr — Training record
Training record (epoch and perf), returned as a
structure whose fields depend on the network training function
(net.NET.trainFcn). It can include fields such as the following (a sketch of inspecting the record follows the list):
Training, data division, and performance functions and parameters
Data division indices for training, validation and test sets
Data division masks for training, validation and test sets
Number of epochs (num_epochs) and the best epoch (best_epoch)
A list of training state names (states)
Fields for each state name recording its value throughout training
Performances of the best network (best_perf, best_vperf, best_tperf)
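As a minimal sketch (the exact fields available depend on the training function and on whether validation data is used), the record can be inspected like this when train is called with two outputs:
[net,tr] = train(net,p,t);   % request the training record as a second output
tr.num_epochs                % number of epochs actually run
tr.best_epoch                % epoch judged best (by validation performance when available)
tr.perf(end)                 % training performance at the final epoch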
You can create a standard network that uses trainscg with feedforwardnet or
cascadeforwardnet. To prepare a custom
network to be trained with trainscg, first set
net.trainFcn to 'trainscg'. This sets
net.trainParam to trainscg's default parameters. Then set
net.trainParam properties to desired values.
In either case, calling
train with the resulting network trains the network with trainscg. A minimal sketch of this workflow is shown below.
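In this sketch, the choice of fitnet, the layer size, and the parameter value are illustrative assumptions, not part of this reference page.
net = fitnet(5);                 % any network whose functions have derivatives will do
net.trainFcn = 'trainscg';       % select scaled conjugate gradient backpropagation
net.trainParam.epochs = 300;     % example of overriding a default training parameter
[net,tr] = train(net,p,t);       % train on inputs p and targets t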
trainscg can train any network as long as its weight, net input, and
transfer functions have derivative functions. Backpropagation is used to calculate derivatives of performance
perf with respect to the weight and bias variables X.
The scaled conjugate gradient algorithm is based on conjugate directions, as in
traincgp, traincgf, and traincgb, but this algorithm does not perform a line search at each iteration. See Moller (Neural
Networks, Vol. 6, 1993, pp. 525–533) for a more detailed discussion of the scaled
conjugate gradient algorithm.
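As a rough sketch of where sigma and lambda enter the method (following Moller, 1993, rather than this reference page), each iteration estimates its step size from second-order information instead of a line search:
\[
s_k = \frac{E'(w_k + \sigma_k p_k) - E'(w_k)}{\sigma_k},
\qquad \sigma_k = \frac{\sigma}{\lVert p_k \rVert},
\]
\[
\delta_k = p_k^{\top} s_k + \lambda_k \lVert p_k \rVert^2,
\qquad
\alpha_k = \frac{-\,p_k^{\top} E'(w_k)}{\delta_k},
\]
where p_k is the current conjugate search direction, E' is the gradient of performance, sigma corresponds to net.trainParam.sigma, and lambda_k (initialized from net.trainParam.lambda) is increased whenever delta_k is not positive, so that the implicit Hessian approximation stays positive definite.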
Training stops when any of these conditions occurs (the sketch after this list shows one way to check which condition fired):
The maximum number of
epochs (repetitions) is reached.
The maximum amount of
time is exceeded.
Performance is minimized to the goal.
The performance gradient falls below min_grad.
Validation performance has increased more than
max_fail times since
the last time it decreased (when using validation).
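After training, the training record reports which condition actually stopped training; assuming the [net,tr] = train(net,p,t) call shown earlier:
disp(tr.stop)   % reason training stopped, for example 'Maximum epoch reached.'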
Moller, M. F. "A scaled conjugate gradient algorithm for fast supervised learning." Neural Networks, Vol. 6, 1993, pp. 525–533.