Self-organizing map weight learning function
[dW,LS] = learnsom(W,P,Z,N,A,T,E,gW,gA,D,LP,LS)
info = learnsom('code')
learnsom is the self-organizing map weight learning function.
[dW,LS] = learnsom(W,P,Z,N,A,T,E,gW,gA,D,LP,LS) takes several inputs, including
LP - Learning parameters, none, LP = []
LS - Learning state, initially should be = []
and returns
dW - Weight (or bias) change matrix
LS - New learning state
Learning occurs according to learnsom's learning parameters, shown here with their default values.
LP.order_lr - 0.9 - Ordering phase learning rate
LP.order_steps - 1000 - Ordering phase steps
LP.tune_lr - 0.02 - Tuning phase learning rate
LP.tune_nd - 1 - Tuning phase neighborhood distance
info = learnsom('code') returns useful information for each code string:
'pnames' - Names of learning parameters
'pdefaults' - Default learning parameters
'needg' - Returns 1 if this function uses gW or gA
Here you define a random input P, output A, and weight matrix W for a layer with a two-element input and six neurons. You also calculate positions and distances for the neurons, which are arranged in a 2-by-3 hexagonal pattern. Then you define the four learning parameters.
p = rand(2,1);
a = rand(6,1);
w = rand(6,2);
pos = hextop(2,3);
d = linkdist(pos);
lp.order_lr = 0.9;
lp.order_steps = 1000;
lp.tune_lr = 0.02;
lp.tune_nd = 1;
Because learnsom only needs these values to calculate a weight change (see "Algorithm" below), use them to do so.
ls = [];
[dW,ls] = learnsom(w,p,[],[],a,[],[],[],[],d,lp,ls)
You can create a standard network that uses learnsom with newsom. To prepare the weights of layer i of a custom network to learn with learnsom:
1. Set net.trainFcn to 'trainr'. (net.trainParam automatically becomes trainr's default parameters.)
2. Set net.adaptFcn to 'trains'. (net.adaptParam automatically becomes trains's default parameters.)
3. Set each net.inputWeights{i,j}.learnFcn and net.layerWeights{i,j}.learnFcn to 'learnsom'. (Each weight learning parameter property is automatically set to learnsom's default parameters.)
To train the network (or enable it to adapt):
1. Set net.trainParam (or net.adaptParam) properties to desired values.
2. Call train (or adapt).
learnsom calculates the weight change dW for a given neuron from the neuron's input P, activation A2, and learning rate LR:

dw = lr*a2*(p'-w)
where the activation A2 is found from the layer output A, neuron distances D, and the current neighborhood size ND:

a2(i,q) = 1,   if a(i,q) = 1
        = 0.5, if a(j,q) = 1 and D(i,j) <= nd
        = 0,   otherwise
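The update above can be sketched in MATLAB for a single input column (Q = 1, as in the example), assuming a competitive layer output a with a single winning neuron marked by 1. This is an illustration of the formulas, not learnsom's actual implementation:

```matlab
% Sketch of the learnsom update for one input vector (illustrative only).
% a : S-by-1 layer output with a(i) = 1 for the winning neuron
% d : S-by-S neuron distances, nd: neighborhood size, lr: learning rate
% p : R-by-1 input vector,     w : S-by-R weight matrix
i  = find(a == 1);            % index of the winning neuron
a2 = 0.5*(d(:,i) <= nd);      % neurons within nd of the winner get 0.5
a2(i) = 1;                    % the winner itself gets 1
dw = lr * a2 .* (p' - w);     % S-by-R weight change: each row of (p'-w)
                              % is scaled by that neuron's activation
```

Rows with a2 = 0 receive no change; the winner moves toward p at the full learning rate, and its neighbors move halfway as fast, which is what pulls neighboring weights together during ordering.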
The learning rate LR and neighborhood size ND are altered through two phases: an ordering phase and a tuning phase.
The ordering phase lasts as many steps as LP.order_steps. During this phase LR is adjusted from LP.order_lr down to LP.tune_lr, and ND is adjusted from the maximum neuron distance down to 1. It is during this phase that neuron weights are expected to order themselves in the input space consistent with the associated neuron positions.
During the tuning phase LR decreases slowly from LP.tune_lr and ND is always set to LP.tune_nd. During this phase the weights are expected to spread out relatively evenly over the input space while retaining their topological order, determined during the ordering phase.
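One schedule consistent with this two-phase description is a linear interpolation during ordering and a slow decay during tuning. The helper name som_schedule is hypothetical, and the exact formulas inside learnsom may differ; this only illustrates the behavior described above:

```matlab
% Hypothetical LR/ND schedule matching the two-phase description
% (illustrative; not necessarily learnsom's exact formulas).
% s: current step, lp: learning parameters, max_nd: max neuron distance
function [lr,nd] = som_schedule(s, lp, max_nd)
if s <= lp.order_steps                  % ordering phase
  percent = 1 - s/lp.order_steps;       % goes from 1 down to 0
  lr = lp.tune_lr + (lp.order_lr - lp.tune_lr)*percent;
  nd = 1 + (max_nd - 1)*percent;        % max distance down to 1
else                                    % tuning phase
  lr = lp.tune_lr * lp.order_steps/s;   % decays slowly below tune_lr
  nd = lp.tune_nd;                      % neighborhood held fixed
end
```

With the default parameters, LR falls from 0.9 to 0.02 over the first 1000 steps while the neighborhood shrinks to the winner's immediate neighbors, after which only fine adjustments are made.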