Normalized perceptron weight and bias learning function
Syntax

[dW,LS] = learnpn(W,P,Z,N,A,T,E,gW,gA,D,LP,LS)
info = learnpn('code')
Description

learnpn is a weight and bias learning function. It can result in
faster learning than learnp when input vectors have widely varying
magnitudes.
[dW,LS] = learnpn(W,P,Z,N,A,T,E,gW,gA,D,LP,LS) takes several inputs,

W  -- S x R weight matrix (or S x 1 bias vector)
P  -- R x Q input vectors (or ones(1,Q))
Z  -- S x Q weighted input vectors
N  -- S x Q net input vectors
A  -- S x Q output vectors
T  -- S x Q layer target vectors
E  -- S x Q layer error vectors
gW -- S x R gradient with respect to performance
gA -- S x Q output gradient with respect to performance
D  -- S x S neuron distances
LP -- Learning parameters, none, LP = []
LS -- Learning state, initially should be = []

and returns,

dW -- S x R weight (or bias) change matrix
LS -- New learning state
info = learnpn('code') returns useful information for each code string:

'pnames'    -- Names of learning parameters
'pdefaults' -- Default learning parameters
'needg'     -- Returns 1 if this function uses gW or gA
Examples

Here you define a random input P and error E for a layer with a
two-element input and three neurons.

p = rand(2,1); e = rand(3,1);

Because learnpn only needs these values to calculate a weight change
(see "Algorithm" below), use them to do so.

dW = learnpn([],p,[],[],[],[],e,[],[],[],[],[])
Network Use

You can create a standard network that uses learnpn with newp.

To prepare the weights and the bias of layer i of a custom network to
learn with learnpn:

1. Set net.trainFcn to 'trainb'. net.trainParam automatically becomes
   trainb's default parameters.
2. Set net.adaptFcn to 'trains'. net.adaptParam automatically becomes
   trains's default parameters.
3. Set each net.inputWeights{i,j}.learnFcn to 'learnpn'. Set each
   net.layerWeights{i,j}.learnFcn to 'learnpn'. Set
   net.biases{i}.learnFcn to 'learnpn'. (Each weight and bias learning
   parameter property automatically becomes the empty matrix, because
   learnpn has no learning parameters.)

To train the network (or enable it to adapt):

1. Set net.trainParam (or net.adaptParam) properties to desired values.
2. Call train (or adapt).

See help newp for adaption and training examples.
Limitations

Perceptrons do have one real limitation. The set of input vectors must
be linearly separable if a solution is to be found. That is, if the
input vectors with targets of 1 cannot be separated by a line or
hyperplane from the input vectors with targets of 0, the perceptron
will never be able to classify them correctly.
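The classic non-separable case is the XOR problem: no line can put the
inputs with target 1 on one side and the inputs with target 0 on the
other. A minimal pure-Python sketch of plain perceptron training can
show the consequence (the function and variable names here are
illustrative assumptions, not toolbox code): training reaches zero
error on a separable problem like AND, but never does on XOR.

```python
def train_perceptron(samples, epochs=50):
    """Plain perceptron rule on a hard-limit unit.

    Returns True if some epoch produced zero classification errors,
    i.e. a separating line was found.
    """
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        errors = 0
        for x, t in samples:
            a = 1 if (w[0] * x[0] + w[1] * x[1] + b) >= 0 else 0
            e = t - a                      # error in {-1, 0, 1}
            if e != 0:
                errors += 1
                w = [w[0] + e * x[0], w[1] + e * x[1]]
                b += e
        if errors == 0:
            return True
    return False

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

print(train_perceptron(AND))  # True: separable, so training converges
print(train_perceptron(XOR))  # False: no line separates XOR
```

An error-free epoch would mean the current weights linearly separate
all four points, which is impossible for XOR, so the second call
returns False no matter how many epochs are allowed.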
Algorithm

learnpn calculates the weight change dW for a given neuron from the
neuron's input P and error E according to the normalized perceptron
learning rule:

pn = p / sqrt(1 + p(1)^2 + p(2)^2 + ... + p(R)^2)

dw = 0,    if e = 0
   = pn',  if e = 1
   = -pn', if e = -1

The expression for dW can be summarized as

dw = e*pn'
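Outside MATLAB, the rule above can be sketched in a few lines. This is
a minimal pure-Python version for a single input presentation; the
function name learnpn_dw and the list-of-lists shapes are illustrative
assumptions, not part of the toolbox.

```python
import math
import random

def learnpn_dw(p, e):
    """Weight change under the normalized perceptron rule, dW = e*pn',
    where pn is the input p scaled by sqrt(1 + p(1)^2 + ... + p(R)^2).
    p is an R-element input, e an S-element error vector."""
    norm = math.sqrt(1 + sum(x * x for x in p))
    pn = [x / norm for x in p]
    # One row of dW per neuron (error element), one column per input.
    return [[ei * x for x in pn] for ei in e]

# Mirror the doc's example: a two-element input and three neurons.
random.seed(0)
p = [random.random() for _ in range(2)]
e = [random.random() for _ in range(3)]
dW = learnpn_dw(p, e)
print(len(dW), len(dW[0]))  # prints "3 2": 3 neurons x 2 inputs
```

Because the rule is just the outer product of the error with the
normalized input, each entry of dW equals e(i)*p(j)/norm.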