Gradient descent with momentum weight and bias learning function
[dW,LS] = learngdm(W,P,Z,N,A,T,E,gW,gA,D,LP,LS)
info = learngdm('code')
learngdm is the gradient descent with momentum
weight and bias learning function.
[dW,LS] = learngdm(W,P,Z,N,A,T,E,gW,gA,D,LP,LS) takes several inputs,
W - S-by-R weight matrix (or S-by-1 bias vector)
P - R-by-Q input vectors (or ones(1,Q))
Z - S-by-Q weighted input vectors
N - S-by-Q net input vectors
A - S-by-Q output vectors
T - S-by-Q layer target vectors
E - S-by-Q layer error vectors
gW - S-by-R gradient with respect to performance
gA - S-by-Q output gradient with respect to performance
D - S-by-S neuron distances
LP - Learning parameters, none, LP = []
LS - Learning state, initially should be = []
and returns
dW - S-by-R weight (or bias) change matrix
LS - New learning state
Learning occurs according to learngdm's
learning parameters, shown here with their default values.
LP.lr - 0.01 - Learning rate
LP.mc - 0.9 - Momentum constant
info = learngdm('code') returns
useful information for each code string:
'pnames' - Names of learning parameters
'pdefaults' - Default learning parameters
'needg' - Returns 1 if this function uses gW or gA
Here you define a random gradient
gW for a
weight going to a layer with three neurons from an input with two
elements. Also define a learning rate of 0.5 and momentum constant of 0.8.
gW = rand(3,2); lp.lr = 0.5; lp.mc = 0.8;
Because learngdm only needs these values
to calculate a weight change (see "Algorithm" below),
use them to do so. Use the default initial learning state.
ls = []; [dW,ls] = learngdm([],[],[],[],[],[],[],gW,[],[],lp,ls)
learngdm returns the weight change and a
new learning state.
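The example above can be mirrored in Python as a minimal sketch (hypothetical rendering; learngdm itself is a MATLAB function, and the helper name learngdm_step is made up here). With an empty initial learning state the previous weight change is taken as zero, so the first call yields dW = (1-mc)*lr*gW:

```python
import random

def learngdm_step(gW, lr, mc, dW_prev=None):
    """One gradient-descent-with-momentum update (illustration only)."""
    if dW_prev is None:  # default initial learning state: no previous change
        dW_prev = [[0.0] * len(row) for row in gW]
    # dW = mc*dWprev + (1-mc)*lr*gW, applied elementwise
    return [[mc * p + (1 - mc) * lr * g for p, g in zip(prow, grow)]
            for prow, grow in zip(dW_prev, gW)]

# Mirror of the MATLAB example: 3-by-2 random gradient, lr = 0.5, mc = 0.8
gW = [[random.random() for _ in range(2)] for _ in range(3)]
dW = learngdm_step(gW, lr=0.5, mc=0.8)  # first call: dW = (1-0.8)*0.5*gW
```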
You can create a standard network that uses learngdm with newff, newcf, or newelm.
To prepare the weights and the bias of layer i of
a custom network to adapt with learngdm,
1. Set net.adaptFcn to 'trains'.
net.adaptParam automatically becomes trains's default parameters.
2. Set each net.inputWeights{i,j}.learnFcn to 'learngdm'.
Set each net.layerWeights{i,j}.learnFcn to 'learngdm'.
Set net.biases{i}.learnFcn to 'learngdm'.
Each weight and bias learning parameter property is automatically
set to learngdm's default parameters.
To allow the network to adapt,
1. Set net.adaptParam properties
to desired values.
2. Call adapt with the network.
See help newff or
help newcf for examples.
learngdm calculates the weight change dW for
a given neuron from the neuron's input P and error
E, the weight (or bias) learning rate
LR, and momentum constant MC,
according to gradient descent with momentum:
dW = mc*dWprev + (1-mc)*lr*gW
The previous weight change
dWprev is stored
and read from the learning state LS.
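The role of the learning state can be sketched in Python (hypothetical illustration, not the MATLAB implementation): the weight change returned by one call becomes the dWprev read by the next, so successive updates are smoothed by the momentum term.

```python
def learngdm(gW, lr, mc, ls=None):
    """Momentum update; returns the weight change and the new learning state."""
    dW_prev = ls if ls is not None else [[0.0] * len(r) for r in gW]
    dW = [[mc * p + (1 - mc) * lr * g for p, g in zip(prow, grow)]
          for prow, grow in zip(dW_prev, gW)]
    return dW, dW  # the weight change doubles as the next state's dWprev

gW = [[1.0, 2.0]]
dW1, ls = learngdm(gW, lr=0.5, mc=0.8)         # dW1 ~ 0.1*gW
dW2, ls = learngdm(gW, lr=0.5, mc=0.8, ls=ls)  # dW2 ~ 0.8*dW1 + 0.1*gW
```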