trainb

Batch training with weight and bias learning rules

Syntax

net.trainFcn = 'trainb'
[net,tr] = train(net,...)

Description

trainb is not called directly. Instead, it is called by train for networks whose net.trainFcn property is set to 'trainb', thus:

net.trainFcn = 'trainb'

[net,tr] = train(net,...)

trainb trains a network with weight and bias learning rules with batch updates. The weights and biases are updated at the end of an entire pass through the input data.
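A minimal sketch of this workflow, using a linear layer (whose learning rules are compatible with trainb); the input and target data here are illustrative, not from this reference page:

```matlab
x = [0 1 2 3 4];           % example inputs
t = [0 2 4 6 8];           % example targets (t = 2*x)
net = linearlayer;         % create a network that can use trainb
net = configure(net,x,t);  % size weights and biases to the data
net.trainFcn = 'trainb';   % select batch training
[net,tr] = train(net,x,t); % weights update once per pass through x
y = net(x);                % simulate the trained network
```

Because updates are applied only at the end of each epoch, the order of the input vectors within a pass does not affect the result.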

Training occurs according to trainb's training parameters, shown here with their default values:

net.trainParam.epochs = 100

  Maximum number of epochs to train

net.trainParam.goal = 0

  Performance goal

net.trainParam.max_fail = 5

  Maximum validation failures

net.trainParam.show = 25

  Epochs between displays (NaN for no displays)

net.trainParam.showCommandLine = false

  Generate command-line output

net.trainParam.showWindow = true

  Show training GUI

net.trainParam.time = inf

  Maximum time to train in seconds
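For example, to override a few of these defaults before training (the values chosen here are illustrative):

```matlab
net.trainFcn = 'trainb';      % also resets net.trainParam to trainb defaults
net.trainParam.epochs = 500;  % allow up to 500 passes through the data
net.trainParam.goal   = 1e-5; % stop early once performance reaches this value
net.trainParam.show   = 50;   % display progress every 50 epochs
```

Note that assigning net.trainFcn replaces net.trainParam with trainb's defaults, so set individual parameters after that assignment, not before.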

Network Use

You can create a standard network that uses trainb by calling linearlayer.

To prepare a custom network to be trained with trainb,

  1. Set net.trainFcn to 'trainb'. This sets net.trainParam to trainb's default parameters.

  2. Set each net.inputWeights{i,j}.learnFcn to a learning function. Set each net.layerWeights{i,j}.learnFcn to a learning function. Set each net.biases{i}.learnFcn to a learning function. (Weight and bias learning parameters are automatically set to default values for the given learning function.)

To train the network,

  1. Set net.trainParam properties to desired values.

  2. Set weight and bias learning parameters to desired values.

  3. Call train.
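The steps above might look like the following for a simple one-layer custom network; learnwh (the Widrow-Hoff rule) is a standard toolbox learning function, but the indices, learning rate, and data are illustrative assumptions:

```matlab
net.trainFcn = 'trainb';                    % step 1: select trainb
net.inputWeights{1,1}.learnFcn = 'learnwh'; % learning rule for input weights
net.biases{1}.learnFcn = 'learnwh';         % same rule for the layer bias
net.trainParam.epochs = 200;                % desired training parameter
net.inputWeights{1,1}.learnParam.lr = 0.01; % learning rate for this weight
[net,tr] = train(net,x,t);                  % step 3: train on data x, t
```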

More About


Algorithms

Each weight and bias is updated according to its learning function after each epoch (one pass through the entire set of input vectors).
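Conceptually, batch updating accumulates the weight and bias changes over all input vectors and applies them once per epoch. The following is pseudocode illustrating that idea (using a Widrow-Hoff-style rule as an example), not the toolbox's internal implementation:

```matlab
for epoch = 1:maxEpochs
    dW = zeros(size(W));             % accumulated weight change
    db = zeros(size(b));             % accumulated bias change
    for q = 1:Q                      % loop over all Q input vectors
        a = W*p(:,q) + b;            % layer output for input q
        e = t(:,q) - a;              % error for input q
        dW = dW + lr*e*p(:,q)';      % accumulate change (Widrow-Hoff rule)
        db = db + lr*e;
    end
    W = W + dW;                      % single update at the end of the epoch
    b = b + db;
end
```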

Training stops when any of these conditions is met:

  • The maximum number of epochs (repetitions) is reached.

  • Performance is minimized to the goal.

  • The maximum amount of time is exceeded.

  • Validation performance has increased more than max_fail times since the last time it decreased (when using validation).

See Also

linearlayer | train
