

trainru

Unsupervised random order weight/bias training


net.trainFcn = 'trainru'
[net,tr] = train(net,...)


trainru is not called directly. Instead, it is called by train for networks whose net.trainFcn property is set to 'trainru', thus:

net.trainFcn = 'trainru' sets the network trainFcn property.

[net,tr] = train(net,...) trains the network with trainru.

trainru trains a network with weight and bias learning rules with incremental updates after each presentation of an input. Inputs are presented in random order.

Training occurs according to trainru training parameters, shown here with their default values:



net.trainParam.epochs            1000     Maximum number of epochs to train

net.trainParam.show                25     Epochs between displays (NaN for no displays)

net.trainParam.showCommandLine  false     Generate command-line output

net.trainParam.showWindow        true     Show training GUI

net.trainParam.time               inf     Maximum time to train in seconds
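For example, given an existing network object net, these parameters can be inspected and overridden through net.trainParam (the epoch value below is arbitrary and only for illustration):

net.trainFcn = 'trainru';       % selecting trainru loads its default trainParam values
disp(net.trainParam)            % inspect the defaults listed above
net.trainParam.epochs = 500;    % override a default before calling train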

Network Use

To prepare a custom network to be trained with trainru (a code sketch follows these steps),

  1. Set net.trainFcn to 'trainru'. This sets net.trainParam to trainru's default parameters.

  2. Set each net.inputWeights{i,j}.learnFcn to a learning function.

  3. Set each net.layerWeights{i,j}.learnFcn to a learning function.

  4. Set each net.biases{i}.learnFcn to a learning function. (Weight and bias learning parameters are automatically set to default values for the given learning function.)
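The following sketch applies these steps to a simple one-layer competitive network. The choice of competlayer and of the learnk and learncon learning functions is only one possibility, not something trainru requires; substitute the functions that suit your architecture.

net = competlayer(6);                        % example: one-layer competitive network
net.trainFcn = 'trainru';                    % step 1: also resets net.trainParam to trainru defaults
net.inputWeights{1,1}.learnFcn = 'learnk';   % step 2: Kohonen learning rule for the input weight
                                             % step 3: this network has no layer weights, so it is skipped
net.biases{1}.learnFcn = 'learncon';         % step 4: conscience learning rule for the layer bias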

To train the network (see the sketch after these steps),

  1. Set net.trainParam properties to desired values.

  2. Set weight and bias learning parameters to desired values.

  3. Call train.
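Continuing the sketch above, a training call might look like the following. The parameter values, the learnk learning rate, and the random input data are purely illustrative.

net.trainParam.epochs = 200;                 % step 1: train for at most 200 epochs
net.trainParam.show = 10;                    %         display progress every 10 epochs
net.inputWeights{1,1}.learnParam.lr = 0.05;  % step 2: learning rate used by learnk
x = rand(3,100);                             % 100 random 3-element input vectors (illustration only)
[net,tr] = train(net,x);                     % step 3: train calls trainru because net.trainFcn is 'trainru'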

More About



For each epoch, all training vectors (or sequences) are presented once in a different random order, and the network's weight and bias values are updated after each individual presentation (a rough sketch appears after the list below).

Training stops when any of these conditions is met:

  • The maximum number of epochs (repetitions) is reached.

  • The maximum amount of time is exceeded.
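The sketch below outlines this procedure. It is illustrative pseudocode, not the actual trainru implementation; adapt stands in for the per-presentation weight and bias updates performed by the configured learning functions.

startTime = clock;                           % reference point for the time-limit check
for epoch = 1:net.trainParam.epochs          % stop after the maximum number of epochs
    for q = randperm(size(x,2))              % present the samples in a fresh random order
        net = adapt(net,x(:,q));             % incremental weight/bias update after each presentation
    end
    if etime(clock,startTime) > net.trainParam.time
        break                                % stop when the maximum training time is exceeded
    end
end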

See Also

train | trainr

Introduced in R2010b
