Self-organizing map weight learning function
[dW,LS] = learnsom(W,P,Z,N,A,T,E,gW,gA,D,LP,LS)
info = learnsom('code')
learnsom is the self-organizing map weight learning function.
[dW,LS] = learnsom(W,P,Z,N,A,T,E,gW,gA,D,LP,LS) takes several inputs,
W  S x R weight matrix (or S x 1 bias vector)
P  R x Q input vectors (or ones(1,Q))
Z  S x Q weighted input vectors
N  S x Q net input vectors
A  S x Q output vectors
T  S x Q layer target vectors
E  S x Q layer error vectors
gW  S x R gradient with respect to performance
gA  S x Q output gradient with respect to performance
D  S x S neuron distances
LP  Learning parameters, shown below with their default values
LS  Learning state, initially should be = []
and returns
dW  S x R weight (or bias) change matrix
LS  New learning state
Learning occurs according to learnsom's learning parameters, shown here with their default values.
LP.order_lr  0.9  Ordering phase learning rate 
LP.order_steps  1000  Ordering phase steps 
LP.tune_lr  0.02  Tuning phase learning rate 
LP.tune_nd  1  Tuning phase neighborhood distance 
info = learnsom('code') returns useful information for each code string:
'pnames'  Names of learning parameters 
'pdefaults'  Default learning parameters 
'needg'  Returns 1 if this function uses gW or gA 
Here you define a random input P, output A, and weight matrix W for a layer with a two-element input and six neurons. You also calculate positions and distances for the neurons, which are arranged in a 2-by-3 hexagonal pattern. Then you define the four learning parameters.
p = rand(2,1);
a = rand(6,1);
w = rand(6,2);
pos = hextop(2,3);
d = linkdist(pos);
lp.order_lr = 0.9;
lp.order_steps = 1000;
lp.tune_lr = 0.02;
lp.tune_nd = 1;
Because learnsom only needs these values to calculate a weight change (see "Algorithm" below), use them to do so.
ls = [];
[dW,ls] = learnsom(w,p,[],[],a,[],[],[],[],d,lp,ls)
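If you want to reproduce this setup outside MATLAB, the two topology helpers can be approximated as follows. This is an illustrative Python sketch, not the toolbox code: hex_positions and link_distances are hypothetical stand-ins for hextop and linkdist, assuming the usual conventions (one position column per neuron; neurons roughly one unit apart are linked, and link distance is the shortest-path hop count).

```python
import numpy as np
from collections import deque

def hex_positions(rows, cols):
    """2-by-(rows*cols) hexagonal grid positions, one column per neuron.
    Odd columns are offset by half a unit, as in a hexagonal packing."""
    pos = []
    for c in range(cols):
        for r in range(rows):
            pos.append([r + 0.5 * (c % 2), c * np.sqrt(3) / 2])
    return np.array(pos).T

def link_distances(pos):
    """Link (shortest-path) distances: neurons closer than ~1 unit are linked."""
    n = pos.shape[1]
    eu = np.linalg.norm(pos[:, :, None] - pos[:, None, :], axis=0)
    adj = (eu <= 1.01) & ~np.eye(n, dtype=bool)
    d = np.full((n, n), np.inf)
    for s in range(n):                  # BFS from each neuron
        d[s, s] = 0
        q = deque([s])
        while q:
            u = q.popleft()
            for v in np.flatnonzero(adj[u]):
                if d[s, v] == np.inf:
                    d[s, v] = d[s, u] + 1
                    q.append(v)
    return d
```

For the 2-by-3 grid in the example above, every neuron is within two links of every other.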
You can create a standard network that uses learnsom with newsom.
Set net.trainFcn to 'trainr'. (net.trainParam automatically becomes trainr's default parameters.)
Set net.adaptFcn to 'trains'. (net.adaptParam automatically becomes trains's default parameters.)
Set each net.inputWeights{i,j}.learnFcn to 'learnsom'.
Set each net.layerWeights{i,j}.learnFcn to 'learnsom'.
Set net.biases{i}.learnFcn to 'learnsom'.
(Each weight learning parameter property is automatically set to learnsom's default parameters.)
To train the network (or enable it to adapt):
Set net.trainParam (or net.adaptParam) properties to desired values.
Call train (adapt).
learnsom calculates the weight change dW for a given neuron from the neuron's input P, activation A2, and learning rate LR:
dw = lr*a2*(p'-w)
where the activation A2 is found from the layer output A, neuron distances D, and the current neighborhood size ND:
a2(i,q) = 1,   if a(i,q) = 1
        = 0.5, if a(j,q) = 1 and D(i,j) <= nd
        = 0,   otherwise
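The two formulas above can be sketched in NumPy for a single input vector. This is an illustrative re-implementation of the rule as stated, not the toolbox source; it assumes the winning neuron is the one whose output a(i) is largest (in a SOM, the single winner has output 1).

```python
import numpy as np

def som_weight_change(w, p, a, d, lr, nd):
    """Return dW (S x R) for one input column p (R,) given layer output a (S,).

    Mirrors the doc's rule: dw(i,:) = lr * a2(i) * (p' - w(i,:)), where
    a2 is 1 for the winner, 0.5 for neurons within nd links of it, else 0."""
    s = w.shape[0]
    winner = np.argmax(a)          # the single winning neuron
    a2 = np.zeros(s)
    a2[d[winner] <= nd] = 0.5      # neighbors of the winner learn at half rate
    a2[winner] = 1.0               # the winner itself learns at full rate
    return lr * a2[:, None] * (p[None, :] - w)
```

With zero weights, input [1, 1], winner in the middle of a three-neuron chain, lr = 0.5, and nd = 1, the winner moves halfway toward the input and its two neighbors a quarter of the way.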
The learning rate LR and neighborhood size ND are altered through two phases: an ordering phase and a tuning phase.
The ordering phase lasts as many steps as LP.order_steps.
During this phase LR is adjusted from LP.order_lr down to LP.tune_lr, and ND is adjusted from the maximum neuron distance down to 1. It is during this phase that neuron weights are expected to order themselves in the input space consistent with the associated neuron positions.
During the tuning phase LR decreases slowly from LP.tune_lr, and ND is always set to LP.tune_nd. During this phase the weights are expected to spread out relatively evenly over the input space while retaining their topological order, determined during the ordering phase.
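The two phases can be summarized as a schedule mapping the training step to a (LR, ND) pair. The sketch below is one plausible reading of the description, assuming a linear interpolation during ordering and a 1/step decay during tuning; the exact toolbox formulas may differ.

```python
def som_schedule(step, max_dist, order_lr=0.9, order_steps=1000,
                 tune_lr=0.02, tune_nd=1):
    """Return (lr, nd) for a given training step (illustrative sketch)."""
    if step < order_steps:
        # Ordering phase: lr falls from order_lr to tune_lr while nd falls
        # from the maximum neuron distance down to 1.
        frac = 1.0 - step / order_steps
        lr = tune_lr + (order_lr - tune_lr) * frac
        nd = 1.0 + (max_dist - 1.0) * frac
    else:
        # Tuning phase: nd is held at tune_nd; lr decays slowly from tune_lr.
        lr = tune_lr * order_steps / step
        nd = tune_nd
    return lr, nd
```

For example, with the defaults and a maximum neuron distance of 3, the schedule starts at (0.9, 3), reaches (0.02, 1) at step 1000, and halves LR by step 2000.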