## trainbr

Bayesian regularization backpropagation

`net.trainFcn = 'trainbr'`

`[net,tr] = train(net,...)`

`trainbr` is a network training function that
updates the weight and bias values according to Levenberg-Marquardt
optimization. It minimizes a combination of squared errors and weights,
and then determines the correct combination so as to produce a network
that generalizes well. The process is called Bayesian regularization.
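The combined objective can be sketched numerically. The snippet below is a plain NumPy illustration of the idea, not trainbr's internal code; the weighting factors `alpha` and `beta` are conventional names from the Bayesian regularization literature, not documented `trainParam` fields:

```python
import numpy as np

def regularized_performance(errors, weights, alpha, beta):
    """Combined objective F = beta*E_D + alpha*E_W, where E_D is the
    sum of squared network errors and E_W is the sum of squared
    weights. Minimizing F trades error fit against weight size."""
    E_D = np.sum(np.asarray(errors) ** 2)
    E_W = np.sum(np.asarray(weights) ** 2)
    return beta * E_D + alpha * E_W

# The weight penalty discourages large weights, which is what makes
# the trained network generalize better on noisy data.
F = regularized_performance([0.1, -0.2], [0.5, 1.5], alpha=0.01, beta=1.0)
```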

`net.trainFcn = 'trainbr'`

`[net,tr] = train(net,...)`

Training occurs according to `trainbr`'s
training parameters, shown here with their default values:

| Parameter | Default | Description |
| --- | --- | --- |
| `net.trainParam.epochs` | `100` | Maximum number of epochs to train |
| `net.trainParam.goal` | `0` | Performance goal |
| `net.trainParam.mu` | `0.005` | Marquardt adjustment parameter |
| `net.trainParam.mu_dec` | `0.1` | Decrease factor for `mu` |
| `net.trainParam.mu_inc` | `10` | Increase factor for `mu` |
| `net.trainParam.mu_max` | `1e10` | Maximum value for `mu` |
| `net.trainParam.max_fail` | `0` | Maximum validation failures |
| `net.trainParam.mem_reduc` | `1` | Factor to use for memory/speed tradeoff |
| `net.trainParam.min_grad` | `1e-10` | Minimum performance gradient |
| `net.trainParam.show` | `25` | Epochs between displays (`NaN` for no displays) |
| `net.trainParam.showCommandLine` | `0` | Generate command-line output |
| `net.trainParam.showWindow` | `1` | Show training GUI |
| `net.trainParam.time` | `inf` | Maximum time to train in seconds |
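The roles of `mu`, `mu_dec`, `mu_inc`, and `mu_max` can be illustrated with the core Levenberg-Marquardt step. This is a minimal NumPy sketch of the damped update and its adjustment schedule, under the usual textbook formulation, not trainbr's actual implementation:

```python
import numpy as np

def lm_step(J, e, w, mu):
    """One Levenberg-Marquardt update: dw = -(J'J + mu*I)^-1 J'e.
    Large mu behaves like small-step gradient descent; small mu
    approaches the Gauss-Newton step."""
    n = J.shape[1]
    dw = -np.linalg.solve(J.T @ J + mu * np.eye(n), J.T @ e)
    return w + dw

def adapt_mu(mu, improved, mu_dec=0.1, mu_inc=10.0, mu_max=1e10):
    """Damping schedule in the spirit of the mu parameters above:
    shrink mu after a successful step, grow it (capped at mu_max)
    after a step that failed to reduce performance."""
    return mu * mu_dec if improved else min(mu * mu_inc, mu_max)

# Fitting e(w) = w*x - y for y = 2x: one step with small mu lands
# close to the least-squares solution w = 2.
J = np.array([[1.0], [2.0], [3.0]])       # de/dw = x
e = np.array([-2.0, -4.0, -6.0])          # e at w = 0
w_new = lm_step(J, e, np.array([0.0]), mu=0.001)
```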

Validation stops are disabled by default (`max_fail
= 0`) so that training can continue until an optimal combination
of errors and weights is found. However, some weight/bias minimization
can still be achieved with shorter training times if validation is
enabled by setting `max_fail` to 6 or some other
strictly positive value.
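The `max_fail` mechanism is ordinary early stopping: count consecutive validation checks that fail to improve on the best result so far, and stop once the count reaches the limit. A schematic version (illustrative only; the function name is mine):

```python
def should_stop(val_history, max_fail):
    """Return True once validation performance has failed to improve
    for max_fail consecutive checks. max_fail = 0 disables the check
    entirely, matching trainbr's default behavior."""
    if max_fail <= 0:
        return False
    best = float("inf")
    fails = 0
    for v in val_history:
        if v < best:
            best, fails = v, 0   # new best: reset the failure count
        else:
            fails += 1
            if fails >= max_fail:
                return True
    return False
```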

You can create a standard network that uses `trainbr` with `feedforwardnet` or `cascadeforwardnet`.
To prepare a custom network to be trained with `trainbr`, set `net.trainFcn` to `'trainbr'`. This sets `net.trainParam` to `trainbr`'s default parameters, which you can then set to desired values.

In either case, calling `train` with the resulting
network trains the network with `trainbr`. See `feedforwardnet` and `cascadeforwardnet` for
examples.

Here is a problem consisting of inputs `p` and
targets `t` to be solved with a network. It involves
fitting a noisy sine wave.

`p = [-1:.05:1]; t = sin(2*pi*p)+0.1*randn(size(p));`

A feed-forward network is created with a hidden layer of two neurons.

`net = feedforwardnet(2,'trainbr');`

Here the network is trained and tested.

`net = train(net,p,t); a = net(p)`
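For readers outside MATLAB, here is a rough Python analogue of the workflow above. Plain gradient descent with a fixed L2 weight-decay term stands in for trainbr's Levenberg-Marquardt/Bayesian machinery, so this is only an illustration of the regularized-fit idea, not an equivalent of `trainbr`:

```python
import numpy as np

rng = np.random.default_rng(0)

# Same toy problem: a noisy sine wave on [-1, 1].
p = np.linspace(-1, 1, 41)
t = np.sin(2 * np.pi * p) + 0.1 * rng.standard_normal(p.size)

# Tiny 1-2-1 tanh network, trained on squared error plus a fixed
# weight-decay penalty (a crude stand-in for Bayesian regularization).
W1 = rng.standard_normal((2, 1)); b1 = np.zeros((2, 1))
W2 = rng.standard_normal((1, 2)); b2 = np.zeros((1, 1))
alpha, lr = 1e-3, 0.05

def forward(x):
    h = np.tanh(W1 @ x + b1)      # hidden layer
    return W2 @ h + b2, h         # linear output layer

x = p.reshape(1, -1)
y = t.reshape(1, -1)
init_mse = float(np.mean((forward(x)[0] - y) ** 2))

for _ in range(2000):
    out, h = forward(x)
    e = out - y
    # Backprop for the squared-error term plus the L2 weight penalty.
    gW2 = e @ h.T / x.shape[1] + alpha * W2
    gb2 = e.mean(axis=1, keepdims=True)
    dh = (W2.T @ e) * (1 - h ** 2)
    gW1 = dh @ x.T / x.shape[1] + alpha * W1
    gb1 = dh.mean(axis=1, keepdims=True)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

final_mse = float(np.mean((forward(x)[0] - y) ** 2))
```

As in the MATLAB example, two hidden neurons are too few to fit two full periods of the sine wave well; the point is the regularized training loop, not the fit quality.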

This function uses the Jacobian for calculations, which assumes
that performance is a mean or sum of squared errors. Therefore networks
trained with this function must use either the `mse` or `sse` performance
function.
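The squared-error assumption matters because the gradient of a sum-of-squares performance factors through the error Jacobian as 2·Jᵀe, which is what the algorithm exploits. A quick NumPy check of that identity against finite differences, using a linear model where the Jacobian is easy to write down:

```python
import numpy as np

# Linear model: e(w) = X @ w - y, so the Jacobian de/dw is just X.
X = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
y = np.array([1.0, 2.0, 3.0])
w = np.array([0.5, -0.5])

def sse(w):
    """Sum-of-squares performance, as assumed by Jacobian methods."""
    e = X @ w - y
    return e @ e

e = X @ w - y
grad_jacobian = 2 * X.T @ e           # gradient via the Jacobian

# Central finite-difference gradient for comparison.
eps = 1e-6
grad_fd = np.array([
    (sse(w + eps * np.eye(2)[i]) - sse(w - eps * np.eye(2)[i])) / (2 * eps)
    for i in range(2)
])
```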

MacKay, D.J.C., "Bayesian interpolation," *Neural Computation*, Vol. 4, No. 3, 1992, pp. 415–447

Foresee, F.D., and M.T. Hagan, "Gauss-Newton approximation to Bayesian learning," *Proceedings of the International
Joint Conference on Neural Networks*, June 1997

`cascadeforwardnet` | `feedforwardnet` | `trainbfg` | `traincgb` | `traincgf` | `traincgp` | `traingda` | `traingdm` | `traingdx` | `trainlm` | `trainrp` | `trainscg`
