
trainscg

Scaled conjugate gradient backpropagation


Syntax

net.trainFcn = 'trainscg'
[net,tr] = train(net,...)


Description

trainscg is a network training function that updates weight and bias values according to the scaled conjugate gradient method.

net.trainFcn = 'trainscg' sets the network trainFcn property.

[net,tr] = train(net,...) trains the network with trainscg.

Training occurs according to trainscg training parameters, shown here with their default values:

net.trainParam.epochs            1000    Maximum number of epochs to train
net.trainParam.show                25    Epochs between displays (NaN for no displays)
net.trainParam.showCommandLine  false    Generate command-line output
net.trainParam.showWindow        true    Show training GUI
net.trainParam.goal                 0    Performance goal
net.trainParam.time               inf    Maximum time to train in seconds
net.trainParam.min_grad          1e-6    Minimum performance gradient
net.trainParam.max_fail             6    Maximum validation failures
net.trainParam.sigma           5.0e-5    Determine change in weight for second derivative approximation
net.trainParam.lambda          5.0e-7    Parameter for regulating the indefiniteness of the Hessian
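
For example, you can override any of these defaults before calling train. The network size and parameter values below are illustrative, not recommendations:

net = feedforwardnet(10,'trainscg');   % 10 hidden neurons, illustrative size
net.trainParam.epochs = 500;           % train for at most 500 epochs
net.trainParam.goal = 1e-5;            % stop early if performance reaches 1e-5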

Network Use

You can create a standard network that uses trainscg with feedforwardnet or cascadeforwardnet. To prepare a custom network to be trained with trainscg,

  1. Set net.trainFcn to 'trainscg'. This sets net.trainParam to trainscg's default parameters.

  2. Set net.trainParam properties to desired values.

In either case, calling train with the resulting network trains the network with trainscg.
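
As a sketch of these two steps applied to an existing network (the network type, size, and parameter value are illustrative):

net = cascadeforwardnet(5);   % any compatible network; size is illustrative
net.trainFcn = 'trainscg';    % step 1: also resets net.trainParam to defaults
net.trainParam.show = NaN;    % step 2: e.g., suppress progress displays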


Examples

Here is a problem consisting of inputs p and targets t to be solved with a network.

p = [0 1 2 3 4 5];
t = [0 0 0 1 1 1];

A two-layer feed-forward network with two hidden neurons and this training function is created.

net = feedforwardnet(2,'trainscg');

Here the network is trained and tested.

net = train(net,p,t);
a = net(p)

See help feedforwardnet and help cascadeforwardnet for other examples.

More About



trainscg can train any network as long as its weight, net input, and transfer functions have derivative functions. Backpropagation is used to calculate derivatives of performance perf with respect to the weight and bias variables X.

The scaled conjugate gradient algorithm is based on conjugate directions, as in traincgp, traincgf, and traincgb, but this algorithm does not perform a line search at each iteration. See Moller (Neural Networks, Vol. 6, 1993, pp. 525–533) for a more detailed discussion of the scaled conjugate gradient algorithm.
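
To make the roles of sigma and lambda concrete, here is a minimal sketch of Moller's update rules applied to a toy quadratic objective. It is a simplified illustration, not the toolbox implementation: the periodic restart of the search direction is omitted, and sigma0 and lambda merely stand in for net.trainParam.sigma and net.trainParam.lambda.

% Minimal sketch of Moller's SCG on a toy quadratic
% f(w) = 0.5*w'*A*w - b'*w, with gradient g(w) = A*w - b.
A = [3 1; 1 2];  b = [1; 1];          % small positive definite problem
f = @(w) 0.5*(w'*A*w) - b'*w;
g = @(w) A*w - b;

w = [0; 0];                           % initial "weights"
sigma0 = 5.0e-5;  lambda = 5.0e-7;    % trainscg-style defaults
lambdaBar = 0;  success = true;
r = -g(w);  p = r;                    % start along steepest descent

for k = 1:100
    pk2 = p'*p;
    if success                        % second-order info: approximate H*p
        sigmaK = sigma0/sqrt(pk2);    % by a finite difference of gradients
        s = (g(w + sigmaK*p) - g(w))/sigmaK;
        delta = p'*s;                 % curvature along p
    end
    delta = delta + (lambda - lambdaBar)*pk2;      % scale the curvature
    if delta <= 0                     % force the scaled Hessian to be
        lambdaBar = 2*(lambda - delta/pk2);        % positive definite
        delta = -delta + lambda*pk2;
        lambda = lambdaBar;
    end
    mu = p'*r;
    alpha = mu/delta;                 % step size, with no line search
    Delta = 2*delta*(f(w) - f(w + alpha*p))/mu^2;  % quality of the step
    if Delta >= 0                     % successful step: accept and update
        w = w + alpha*p;
        rNew = -g(w);
        lambdaBar = 0;  success = true;
        beta = (rNew'*rNew - rNew'*r)/mu;
        p = rNew + beta*p;            % new conjugate direction
        r = rNew;
        if Delta >= 0.75, lambda = lambda/4; end   % trust the model more
    else                              % unsuccessful: keep w, retry with
        lambdaBar = lambda;  success = false;      % a larger scale
    end
    if Delta < 0.25                   % increase the scale parameter
        lambda = lambda + delta*(1 - Delta)/pk2;
    end
    if norm(r) < 1e-8, break; end     % gradient small enough: stop
end
w                                     % converges to A\b = [0.2; 0.4]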

Training stops when any of these conditions occurs:

  • The maximum number of epochs (repetitions) is reached.

  • The maximum amount of time is exceeded.

  • Performance is minimized to the goal.

  • The performance gradient falls below min_grad.

  • Validation performance has increased more than max_fail times since the last time it decreased (when using validation).
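
Continuing the example above, the training record returned by train shows which of these conditions ended a run; tr.stop and plotperform are part of the documented training-record workflow, though the exact stop message text varies by release:

[net,tr] = train(net,p,t);
tr.stop           % reason training stopped, e.g. 'Minimum gradient reached.'
plotperform(tr)   % plot training (and any validation) performance by epoch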


References

Moller, M. F., "A scaled conjugate gradient algorithm for fast supervised learning," Neural Networks, Vol. 6, 1993, pp. 525–533

Introduced before R2006a
