Gradient descent with momentum weight and bias learning function
[dW,LS] = learngdm(W,P,Z,N,A,T,E,gW,gA,D,LP,LS)
info = learngdm('code')
learngdm is the gradient descent with momentum weight and bias learning function.
[dW,LS] = learngdm(W,P,Z,N,A,T,E,gW,gA,D,LP,LS) takes several inputs,
W   S x R weight matrix (or S x 1 bias vector)
P   R x Q input vectors (or ones(1,Q))
Z   S x Q weighted input vectors
N   S x Q net input vectors
A   S x Q output vectors
T   S x Q layer target vectors
E   S x Q layer error vectors
gW  S x R gradient with respect to performance
gA  S x Q output gradient with respect to performance
D   S x S neuron distances
LP  Learning parameters, LP.lr and LP.mc
LS  Learning state, initially should be = []
and returns
dW  S x R weight (or bias) change matrix
LS  New learning state
Learning occurs according to learngdm's learning parameters, shown here with their default values.
LP.lr  0.01  Learning rate 
LP.mc  0.9  Momentum constant 
info = learngdm('code') returns useful information for each code string:
'pnames'  Names of learning parameters 
'pdefaults'  Default learning parameters 
'needg'  Returns 1 if this function uses gW or gA 
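For instance, you can query the function itself for its parameter names and defaults (output values shown in the comments follow the defaults listed above):

```matlab
% Query learngdm for information about itself.
pnames = learngdm('pnames');     % names of learning parameters
lp     = learngdm('pdefaults');  % struct of defaults: lr = 0.01, mc = 0.9
needg  = learngdm('needg');      % 1, since learngdm uses the gradient gW
```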
Here you define a random gradient gW for a weight going to a layer with three neurons from an input with two elements. Also define a learning rate of 0.5 and momentum constant of 0.8:
gW = rand(3,2); lp.lr = 0.5; lp.mc = 0.8;
Because learngdm only needs these values to calculate a weight change (see "Algorithm" below), use them to do so. Use the default initial learning state.
ls = []; [dW,ls] = learngdm([],[],[],[],[],[],[],gW,[],[],lp,ls)
learngdm returns the weight change and a new learning state.
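To see the role the learning state plays, you can call learngdm twice and compare its output with the gradient descent with momentum rule, dW = mc*dWprev + (1-mc)*lr*gW, where dWprev is the previous weight change carried in LS. This sketch assumes that rule (see "Algorithm" below):

```matlab
% Sketch: reproduce learngdm's update by hand, assuming
% dW = mc*dWprev + (1-mc)*lr*gW with dWprev stored in LS.
gW = rand(3,2); lp.lr = 0.5; lp.mc = 0.8;

% First call: empty state, so the previous change dWprev is zero.
ls = [];
[dW1,ls] = learngdm([],[],[],[],[],[],[],gW,[],[],lp,ls);
dW1_manual = (1-lp.mc)*lp.lr*gW;               % should match dW1

% Second call: ls now carries dW1 as the previous change.
[dW2,ls] = learngdm([],[],[],[],[],[],[],gW,[],[],lp,ls);
dW2_manual = lp.mc*dW1 + (1-lp.mc)*lp.lr*gW;   % should match dW2
```

The momentum term mc*dWprev is what lets learngdm keep moving through flat regions and smooth out oscillations that plain gradient descent (learngd) is prone to.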
You can create a standard network that uses learngdm with newff, newcf, or newelm.
To prepare the weights and the bias of layer i of a custom network to adapt with learngdm,
1. Set net.adaptFcn to 'trains'. net.adaptParam automatically becomes trains's default parameters.
2. Set each net.inputWeights{i,j}.learnFcn to 'learngdm'. Set each net.layerWeights{i,j}.learnFcn to 'learngdm'. Set net.biases{i}.learnFcn to 'learngdm'. Each weight and bias learning parameter property is automatically set to learngdm's default parameters.
To allow the network to adapt,
1. Set net.adaptParam properties to desired values.
2. Call adapt with the network.
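The steps above can be sketched on a small feed-forward network. The newff arguments, input ranges, and data here are illustrative only:

```matlab
% Sketch: configure layer 1 of a custom network to adapt with learngdm.
% Two inputs in [0,1], a hidden layer of 3 neurons, one output neuron.
net = newff([0 1; 0 1],[3 1],{'tansig','purelin'});

% Step 1: use the sequential adapt function trains.
net.adaptFcn = 'trains';

% Step 2: point layer 1's input weights and bias at learngdm.
net.inputWeights{1,1}.learnFcn = 'learngdm';
net.biases{1}.learnFcn = 'learngdm';

% Set learngdm's parameters for that weight to desired values.
net.inputWeights{1,1}.learnParam.lr = 0.5;
net.inputWeights{1,1}.learnParam.mc = 0.8;

% Call adapt with the network on some (illustrative) data.
P = rand(2,4); T = rand(1,4);
[net,Y,E] = adapt(net,P,T);
```

Note that networks returned by newff already use learngdm for their weights and biases by default, so the explicit learnFcn assignments matter mainly for custom networks.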
See help newff or help newcf for examples.