# trainbfg

BFGS quasi-Newton backpropagation

## Syntax

```
net.trainFcn = 'trainbfg'
[net,tr] = train(net,...)
```

## Description

`trainbfg` is a network training function that updates weight and bias values according to the BFGS quasi-Newton method.

`net.trainFcn = 'trainbfg'` sets the network `trainFcn` property.

`[net,tr] = train(net,...)` trains the network with `trainbfg`.

Training occurs according to `trainbfg` training parameters, shown here with their default values:

| Parameter | Default | Description |
| --- | --- | --- |
| `net.trainParam.epochs` | `1000` | Maximum number of epochs to train |
| `net.trainParam.showWindow` | `true` | Show training window |
| `net.trainParam.show` | `25` | Epochs between displays (`NaN` for no displays) |
| `net.trainParam.showCommandLine` | `false` | Generate command-line output |
| `net.trainParam.goal` | `0` | Performance goal |
| `net.trainParam.time` | `inf` | Maximum time to train in seconds |
| `net.trainParam.min_grad` | `1e-6` | Minimum performance gradient |
| `net.trainParam.max_fail` | `6` | Maximum validation failures |
| `net.trainParam.searchFcn` | `'srchbac'` | Name of line search routine to use |

Parameters related to line search methods (not all used for all methods):

| Parameter | Default | Description |
| --- | --- | --- |
| `net.trainParam.scal_tol` | `20` | Divided into `delta` to determine the tolerance for the line search |
| `net.trainParam.alpha` | `0.001` | Scale factor that determines sufficient reduction in `perf` |
| `net.trainParam.beta` | `0.1` | Scale factor that determines a sufficiently large step size |
| `net.trainParam.delta` | `0.01` | Initial step size in the interval location step |
| `net.trainParam.gama` | `0.1` | Parameter to avoid small reductions in performance, usually set to 0.1 (see `srchcha`) |
| `net.trainParam.low_lim` | `0.1` | Lower limit on change in step size |
| `net.trainParam.up_lim` | `0.5` | Upper limit on change in step size |
| `net.trainParam.maxstep` | `100` | Maximum step length |
| `net.trainParam.minstep` | `1.0e-6` | Minimum step length |
| `net.trainParam.bmax` | `26` | Maximum step size |
| `net.trainParam.batch_frag` | `0` | If `0`, multiple batches are treated as independent. Any nonzero value implies a fragmented batch, so the final layer's conditions of a previously trained epoch are used as initial conditions for the next epoch |
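
For example, you might tighten the stopping criteria before calling `train`. A minimal sketch (the specific values here are illustrative, not recommendations):

```
net = feedforwardnet(10, 'trainbfg');
net.trainParam.epochs    = 500;       % train for at most 500 epochs
net.trainParam.goal      = 1e-5;      % stop once performance reaches 1e-5
net.trainParam.min_grad  = 1e-7;      % stop if the gradient falls below 1e-7
net.trainParam.searchFcn = 'srchbac'; % backtracking line search (the default)
```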

## Network Use

You can create a standard network that uses `trainbfg` with `feedforwardnet` or `cascadeforwardnet`. To prepare a custom network to be trained with `trainbfg`:

1. Set `NET.trainFcn` to `'trainbfg'`. This sets `NET.trainParam` to `trainbfg`’s default parameters.

2. Set `NET.trainParam` properties to desired values.

In either case, calling `train` with the resulting network trains the network with `trainbfg`.
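
A minimal sketch of these two steps, assuming a feedforward network with one hidden layer of 10 neurons (the `max_fail` value is an assumed example, not a recommendation):

```
net = feedforwardnet(10);     % a network whose functions have derivatives
net.trainFcn = 'trainbfg';    % step 1: also resets net.trainParam to trainbfg defaults
net.trainParam.max_fail = 10; % step 2: set desired parameter values
```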

## Examples


This example shows how to train a neural network using the `trainbfg` training function.

Here a neural network is trained to predict body fat percentages.

```
[x, t] = bodyfat_dataset;
net = feedforwardnet(10, 'trainbfg');
net = train(net, x, t);
y = net(x);
```
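
After training, you can evaluate the fit with the network's performance function (mean squared error by default for `feedforwardnet`):

```
perf = perform(net, t, y)   % performance of predictions y against targets t
```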


### BFGS Quasi-Newton Backpropagation

Newton’s method is an alternative to the conjugate gradient methods for fast optimization. The basic step of Newton’s method is

$x_{k+1} = x_k - A_k^{-1} g_k$

where $A_k$ is the Hessian matrix (second derivatives) of the performance index at the current values of the weights and biases, and $g_k$ is the gradient. Newton’s method often converges faster than conjugate gradient methods. Unfortunately, it is complex and expensive to compute the Hessian matrix for feedforward neural networks. There is a class of algorithms that is based on Newton’s method, but which does not require calculation of second derivatives. These are called quasi-Newton (or secant) methods. They update an approximate Hessian matrix at each iteration of the algorithm. The update is computed as a function of the gradient. The quasi-Newton method that has been most successful in published studies is the Broyden, Fletcher, Goldfarb, and Shanno (BFGS) update. This algorithm is implemented in the `trainbfg` routine.
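
For illustration, one iteration of the standard rank-two BFGS update fits in a few lines of MATLAB. The sketch below is the generic textbook form with assumed variable names (`B` for the approximate Hessian, `xOld`/`xNew` for successive weight vectors, `gOld`/`gNew` for their gradients); it is not the internal implementation of `trainbfg`:

```
% Generic BFGS iteration on an approximate Hessian B (illustrative sketch)
s = xNew - xOld;                                  % step:            s = x(k+1) - x(k)
y = gNew - gOld;                                  % gradient change: y = g(k+1) - g(k)
B = B - (B*s)*(s'*B)/(s'*(B*s)) + (y*y')/(y'*s);  % rank-two BFGS update
dX = -B\gNew;                                     % quasi-Newton search direction
```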

The BFGS algorithm is described in [DeSc83]. This algorithm requires more computation in each iteration and more storage than the conjugate gradient methods, although it generally converges in fewer iterations. The approximate Hessian must be stored, and its dimension is n-by-n, where n is equal to the number of weights and biases in the network. For very large networks it might be better to use Rprop or one of the conjugate gradient algorithms. For smaller networks, however, `trainbfg` can be an efficient training function.

## Algorithms

`trainbfg` can train any network as long as its weight, net input, and transfer functions have derivative functions.

Backpropagation is used to calculate derivatives of performance `perf` with respect to the weight and bias variables `X`. Each variable is adjusted according to the following:

```
X = X + a*dX;
```

where `dX` is the search direction. The parameter `a` is selected to minimize the performance along the search direction. The line search function `searchFcn` is used to locate the minimum point. The first search direction is the negative of the gradient of performance. In succeeding iterations the search direction is computed according to the following formula:

```
dX = -H\gX;
```

where `gX` is the gradient and `H` is an approximate Hessian matrix. See page 119 of Gill, Murray, and Wright (*Practical Optimization*, 1981) for a more detailed discussion of the BFGS quasi-Newton method.
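
The line search routine itself is configurable through `searchFcn`. A minimal sketch, swapping in the golden section search in place of the default backtracking search:

```
net = feedforwardnet(10, 'trainbfg');
net.trainParam.searchFcn = 'srchgol';  % golden section search instead of 'srchbac'
```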

Training stops when any of these conditions occurs:

* The maximum number of `epochs` (repetitions) is reached.
* The maximum amount of `time` is exceeded.
* Performance is minimized to the `goal`.
* The performance gradient falls below `min_grad`.
* Validation performance has increased more than `max_fail` times since the last time it decreased (when using validation; see the sketch after this list).
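
The validation-based criterion takes effect when the data are divided into training, validation, and test sets. A minimal sketch, assuming a random 70/15/15 split:

```
[x, t] = bodyfat_dataset;
net = feedforwardnet(10, 'trainbfg');
net.divideFcn = 'dividerand';   % split the samples at random
net.divideParam.trainRatio = 0.70;
net.divideParam.valRatio   = 0.15;
net.divideParam.testRatio  = 0.15;
net.trainParam.max_fail = 6;    % allow at most 6 consecutive validation failures
[net, tr] = train(net, x, t);   % tr.stop records which condition stopped training
```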

## References

Dennis, J. E., and R. B. Schnabel. *Numerical Methods for Unconstrained Optimization and Nonlinear Equations.* Englewood Cliffs, NJ: Prentice-Hall, 1983.

Gill, P. E., W. Murray, and M. H. Wright. *Practical Optimization.* New York: Academic Press, 1981.