Conscience bias learning function
[dB,LS] = learncon(B,P,Z,N,A,T,E,gW,gA,D,LP,LS)
info = learncon('code')
learncon is the conscience bias learning function used to increase the
net input to neurons that have the lowest average output until each neuron responds
approximately an equal percentage of the time.
[dB,LS] = learncon(B,P,Z,N,A,T,E,gW,gA,D,LP,LS) takes several inputs,
B - S-by-1 bias vector
P - 1-by-1 ones vector
Z - S-by-Q weighted input vectors
N - S-by-Q net input vectors
A - S-by-Q output vectors
T - S-by-Q layer target vectors
E - S-by-Q layer error vectors
gW - S-by-R weight gradient with respect to performance
gA - S-by-Q output gradient with respect to performance
D - S-by-S neuron distances
LP - Learning parameters, none, LP = []
LS - Learning state, initially should be = []
and returns
dB - S-by-1 bias change matrix
LS - New learning state
Learning occurs according to learncon's learning parameter, shown here
with its default value.
LP.lr - 0.001 - Learning rate
info = learncon('code') returns useful information for each supported
code character vector:
'pnames' - Names of learning parameters
'pdefaults' - Default learning parameters
'needg' - Returns 1 if this function uses gW or gA
Deep Learning Toolbox™ 2.0 compatibility: The
LP.lr described above equals 1 minus the
bias time constant used by
trainc in the Deep Learning Toolbox 2.0 software.
Here you define a random output A and bias vector B for a layer with
three neurons. You also define the learning rate LR.
a = rand(3,1);
b = rand(3,1);
lp.lr = 0.5;
Because learncon only needs these values to calculate a bias change (see
"Algorithm" below), use them to do so.
dW = learncon(b,[],[],[],a,[],[],[],[],[],lp,[])
To prepare the bias of layer i of a custom network to learn with
learncon:
1. Set net.trainFcn to 'trainr'. (net.trainParam automatically becomes
trainr's default parameters.)
2. Set net.adaptFcn to 'trains'. (net.adaptParam automatically becomes
trains's default parameters.)
3. Set net.biases{i}.learnFcn to 'learncon'. (Each learning parameter
property is automatically set to learncon's default parameters.)
To train the network (or enable it to adapt):
1. Set net.trainParam (or net.adaptParam) properties as desired.
2. Call train (or adapt).
learncon calculates the bias change
db for a given
neuron by first updating each neuron’s conscience, i.e., the running
average of its output:
c = (1-lr)*c + lr*a
The conscience is then used to compute a change in bias for the neuron
that is greatest for smaller conscience values.
b = exp(1-log(c)) - b
(learncon recovers C from the bias values each time it is called.)
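For illustration, the two steps above can be sketched in plain Python (a hypothetical helper named learncon_db, not part of the toolbox; as in learncon, the conscience c is first recovered from the bias values by inverting b = exp(1-log(c)), which gives c = exp(1-log(b))):

```python
import math

def learncon_db(b, a, lr=0.001):
    """Sketch of learncon's bias update for one layer.

    b  -- current bias values, one per neuron (must be positive)
    a  -- current output values, one per neuron
    lr -- learning rate (LP.lr)
    Returns the bias change db, one value per neuron.
    """
    # Recover the conscience from the biases: c = exp(1 - log(b)).
    c = [math.exp(1 - math.log(bi)) for bi in b]
    # Update each neuron's conscience, the running average of its output:
    # c = (1-lr)*c + lr*a
    c = [(1 - lr) * ci + lr * ai for ci, ai in zip(c, a)]
    # New bias is greatest for smaller conscience values; the change db
    # is the difference from the old bias: db = exp(1-log(c)) - b
    return [math.exp(1 - math.log(ci)) - bi for ci, bi in zip(c, b)]

# Demo: neuron 0 "wins" (output 1), neurons 1 and 2 lose (output 0).
db = learncon_db([0.5, 0.5, 0.5], [1.0, 0.0, 0.0], lr=0.5)
# The losing neurons receive a larger bias increase than the winner,
# which is how the conscience mechanism equalizes response rates.
```

Note that the winner's conscience grows relative to the losers', so on later calls its bias is boosted less and the quiet neurons catch up.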