Self-organizing map weight learning function
[dW,LS] = learnsom(W,P,Z,N,A,T,E,gW,gA,D,LP,LS)
info = learnsom('code')
learnsom is the self-organizing map weight learning function.
[dW,LS] = learnsom(W,P,Z,N,A,T,E,gW,gA,D,LP,LS) takes several inputs,
W | S-by-R weight matrix (or S-by-1 bias vector) |
P | R-by-Q input vectors (or ones(1,Q)) |
Z | S-by-Q weighted input vectors |
N | S-by-Q net input vectors |
A | S-by-Q output vectors |
T | S-by-Q layer target vectors |
E | S-by-Q layer error vectors |
gW | S-by-R weight gradient with respect to performance |
gA | S-by-Q output gradient with respect to performance |
D | S-by-S neuron distances |
LP | Learning parameters
LS | Learning state, initially should be = [] |
and returns
dW | S-by-R weight (or bias) change matrix |
LS | New learning state |
Learning occurs according to learnsom's learning parameters, shown here with their default values.
LP.order_lr | 0.9 | Ordering phase learning rate |
LP.order_steps | 1000 | Ordering phase steps |
LP.tune_lr | 0.02 | Tuning phase learning rate |
LP.tune_nd | 1 | Tuning phase neighborhood distance |
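These parameters control a two-phase schedule: during the ordering phase the learning rate falls from order_lr toward tune_lr while the neighborhood distance shrinks from the maximum neuron distance toward 1; during the tuning phase the neighborhood distance is held near tune_nd and the learning rate decays slowly. The following sketch illustrates that schedule; the step count and maximum neuron distance are assumed values, and this is an illustration of the documented behavior, not the built-in implementation.

```matlab
% Sketch of the two-phase learning rate/neighborhood schedule.
step = 500;                    % current learning step (hypothetical)
max_nd = 3;                    % largest neuron distance in the layer (assumed)
if step <= lp.order_steps      % ordering phase
    percent = 1 - step/lp.order_steps;
    lr = lp.tune_lr + (lp.order_lr - lp.tune_lr)*percent;
    nd = 1 + (max_nd - 1)*percent;
else                           % tuning phase
    lr = lp.tune_lr * lp.order_steps/step;
    nd = lp.tune_nd;
end
```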
info = learnsom('code') returns useful information for each code string:
'pnames' | Names of learning parameters |
'pdefaults' | Default learning parameters |
'needg' | Returns 1 if this function uses gW or gA |
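For example, you can query the parameter names and their default values directly:

```matlab
pnames = learnsom('pnames')        % names of the four learning parameters
pdefaults = learnsom('pdefaults')  % structure of default parameter values
```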
Here you define a random input P, output A, and weight matrix W for a layer with a two-element input and six neurons. You also calculate positions and distances for the neurons, which are arranged in a 2-by-3 hexagonal pattern. Then you define the four learning parameters.
p = rand(2,1);
a = rand(6,1);
w = rand(6,2);
pos = hextop(2,3);
d = linkdist(pos);
lp.order_lr = 0.9;
lp.order_steps = 1000;
lp.tune_lr = 0.02;
lp.tune_nd = 1;
Because learnsom only needs these values to calculate a weight change (see "Algorithm" below), use them to do so.
ls = []; [dW,ls] = learnsom(w,p,[],[],a,[],[],[],[],d,lp,ls)
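Conceptually, the weight change for each neuron follows the Kohonen rule dw(i,:) = lr*a2(i)*(p'-w(i,:)), where a2(i) is 1 for the winning neuron, 0.5 for neurons within the current neighborhood distance, and 0 otherwise. The sketch below illustrates this rule using the variables defined above; the learning rate and neighborhood distance are assumed values, and this is not the built-in implementation.

```matlab
% Conceptual sketch of the Kohonen update for one input vector.
lr = lp.order_lr;                 % learning rate (assumed: ordering phase)
nd = 3;                           % neighborhood distance (assumed)
[~, win] = max(a);                % index of the winning neuron
a2 = 0.5*(d(:,win) <= nd);        % neurons within the neighborhood get 0.5
a2(win) = 1;                      % the winner gets 1
dW = lr * repmat(a2,1,2) .* (repmat(p',6,1) - w);  % S-by-R weight change
```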
You can create a standard network that uses learnsom with newsom.
Set net.trainFcn to 'trainr'. (net.trainParam automatically becomes trainr's default parameters.)
Set net.adaptFcn to 'trains'. (net.adaptParam automatically becomes trains's default parameters.)
Set each net.layerWeights{i,j}.learnFcn to 'learnsom'.
Set net.biases{i}.learnFcn to 'learnsom'. (Each weight learning parameter property is automatically set to learnsom's default parameters.)
To train the network (or enable it to adapt), set net.trainParam (or net.adaptParam) properties as desired, then call train (or adapt).
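The steps above can be sketched end to end; the grid dimensions, input data, and epoch count here are arbitrary example values:

```matlab
% Hypothetical example: a 2-by-3 SOM trained on random two-element inputs.
P = rand(2,100);                 % 100 two-element input vectors
net = newsom(minmax(P), [2 3]);  % newsom assigns learnsom as the learning function
net.trainParam.epochs = 25;      % arbitrary number of training epochs
net = train(net, P);             % unsupervised training with trainr
```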