
learnp

Perceptron weight and bias learning function

Syntax

[dW,LS] = learnp(W,P,Z,N,A,T,E,gW,gA,D,LP,LS)
info = learnp('code')

Description

learnp is the perceptron weight/bias learning function.

[dW,LS] = learnp(W,P,Z,N,A,T,E,gW,gA,D,LP,LS) takes several inputs,

W     S-by-R weight matrix (or b, an S-by-1 bias vector)
P     R-by-Q input vectors (or ones(1,Q))
Z     S-by-Q weighted input vectors
N     S-by-Q net input vectors
A     S-by-Q output vectors
T     S-by-Q layer target vectors
E     S-by-Q layer error vectors
gW    S-by-R weight gradient with respect to performance
gA    S-by-Q output gradient with respect to performance
D     S-by-S neuron distances
LP    Learning parameters; learnp has none, so LP = []
LS    Learning state, initially []

and returns

dW    S-by-R weight (or bias) change matrix
LS    New learning state

info = learnp('code') returns useful information for each code string (example calls follow this list):

'pnames'       Names of learning parameters
'pdefaults'    Default learning parameters
'needg'        Returns 1 if this function uses gW or gA
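
For example, you can query each code string directly. The return values noted in the comments are inferred from the descriptions above (learnp has no learning parameters and works from E and P rather than gradients), so treat them as expectations rather than exact output:

learnp('pnames')       % names of learning parameters (none for learnp)
learnp('pdefaults')    % default learning parameters (empty, since learnp has none)
learnp('needg')        % 0, because learnp does not use gW or gA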

Examples

Here you define a random input P and error E for a layer with a two-element input and three neurons.

p = rand(2,1);
e = rand(3,1);

Because learnp needs only these values to calculate a weight change (see the Algorithms section below), use them to do so.

dW = learnp([],p,[],[],[],[],e,[],[],[],[],[])
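
The same call pattern computes a bias change: as the argument descriptions above note, pass the bias in place of W and ones(1,Q) in place of P. A minimal sketch reusing the error e from above (with Q = 1):

db = learnp([],ones(1,1),[],[],[],[],e,[],[],[],[],[])   % equals e, since dw = e*p' and p = 1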

Network Use

You can create a standard network that uses learnp with newp.

To prepare the weights and the bias of layer i of a custom network to learn with learnp (a code sketch follows these steps),

  1. Set net.trainFcn to 'trainb'. (net.trainParam automatically becomes trainb's default parameters.)

  2. Set net.adaptFcn to 'trains'. (net.adaptParam automatically becomes trains's default parameters.)

  3. Set each net.inputWeights{i,j}.learnFcn to 'learnp'.

  4. Set each net.layerWeights{i,j}.learnFcn to 'learnp'.

  5. Set net.biases{i}.learnFcn to 'learnp'. (Each weight and bias learning parameter property automatically becomes the empty matrix, because learnp has no learning parameters.)
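
The following sketch applies steps 1 through 5 to a custom network stored in a variable named net, assuming layer 1 receives input 1; adjust the subscripts to match your architecture:

net.trainFcn = 'trainb';                     % step 1: net.trainParam gets trainb's defaults
net.adaptFcn = 'trains';                     % step 2: net.adaptParam gets trains's defaults
net.inputWeights{1,1}.learnFcn = 'learnp';   % step 3: repeat for each input weight that exists
% step 4: likewise set net.layerWeights{i,j}.learnFcn = 'learnp' for any layer weights into layer i
net.biases{1}.learnFcn = 'learnp';           % step 5: learning parameters become [] (learnp has none)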

To train the network (or enable it to adapt), as in the sketch after these steps,

  1. Set net.trainParam (or net.adaptParam) properties to desired values.

  2. Call train (adapt).
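
A minimal sketch, assuming the network net was prepared as above and that input data P and targets T are already defined (the epochs value is an arbitrary example):

net.trainParam.epochs = 20;     % any trainb training parameter can be set this way
net = train(net,P,T);           % batch training with trainb
% or, for incremental adaption with trains:
% [net,Y,E] = adapt(net,P,T);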

See help newp for adaptation and training examples.

More About


Algorithms

learnp calculates the weight change dW for a given neuron from the neuron's input P and error E according to the perceptron learning rule:

dw = 0, if e = 0
     = p', if e = 1
     = -p', if e = -1

This can be summarized as

dw = e*p'
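
The following sketch checks this rule against learnp's output using hand-picked values for p and e (values chosen here for illustration, not taken from the example above):

p = [0.5; 0.25];                                 % two-element input
e = [1; 0; -1];                                  % one error per neuron: +1, 0, -1
dW = learnp([],p,[],[],[],[],e,[],[],[],[],[]);
isequal(dW, e*p')                                % expected to display 1 (true)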

References

Rosenblatt, F., Principles of Neurodynamics, Washington, D.C., Spartan Press, 1961

