Gradient descent weight and bias learning function
[dW,LS] = learngd(W,P,Z,N,A,T,E,gW,gA,D,LP,LS)
info = learngd('code')
learngd is the gradient descent weight and bias learning function.
[dW,LS] = learngd(W,P,Z,N,A,T,E,gW,gA,D,LP,LS) takes several inputs:
W - S-by-R weight matrix (or S-by-1 bias vector)
P - R-by-Q input vectors (or ones(1,Q))
Z - S-by-Q weighted input vectors
N - S-by-Q net input vectors
A - S-by-Q output vectors
T - S-by-Q layer target vectors
E - S-by-Q layer error vectors
gW - S-by-R gradient with respect to performance
gA - S-by-Q output gradient with respect to performance
D - S-by-S neuron distances
LP - Learning parameters, none, LP = []
LS - Learning state, initially should be = []
and returns
dW - S-by-R weight (or bias) change matrix
LS - New learning state
Learning occurs according to learngd's learning parameter, shown here with its default value.
LP.lr - 0.01 - Learning rate
info = learngd('code') returns useful information for each supported code character vector:
'pnames' - Names of learning parameters
'pdefaults' - Default learning parameters
'needg' - Returns 1 if this function uses gW or gA
Here you define a random gradient gW for a weight going to a layer with three neurons from an input with two elements. Also define a learning rate of 0.5.
gW = rand(3,2); lp.lr = 0.5;
Because learngd only needs these values to calculate a weight change (see "Algorithm" below), use them to do so.
dW = learngd([],[],[],[],[],[],[],gW,[],[],lp,[])
learngd calculates the weight change dW for a given neuron from the neuron's input P and error E, and the weight (or bias) learning rate LR, according to the gradient descent:

dw = lr*gW
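The update rule above is a simple elementwise scaling of the gradient. As a rough sketch of the same computation outside MATLAB, the following NumPy snippet (an illustration, not part of the toolbox) mirrors the example in this page: a random 3-by-2 gradient and a learning rate of 0.5.

```python
import numpy as np

rng = np.random.default_rng(0)

# Gradient for a weight matrix going to a layer with three neurons
# from an input with two elements (mirrors gW = rand(3,2) above).
gW = rng.random((3, 2))
lr = 0.5  # learning rate, mirrors lp.lr = 0.5

# Gradient descent weight change: each element of dW is the learning
# rate times the corresponding gradient element (dw = lr*gW).
dW = lr * gW

print(dW)
```

In a full training step, this dW would then be added to the current weight matrix W; learngd itself only computes the change.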