
learnsom

Self-organizing map weight learning function

Syntax


[dW,LS] = learnsom(W,P,Z,N,A,T,E,gW,gA,D,LP,LS)
info = learnsom('code')


Description

learnsom is the self-organizing map weight learning function.

[dW,LS] = learnsom(W,P,Z,N,A,T,E,gW,gA,D,LP,LS) takes several inputs,


W  - S-by-R weight matrix (or S-by-1 bias vector)

P  - R-by-Q input vectors (or ones(1,Q))

Z  - S-by-Q weighted input vectors

N  - S-by-Q net input vectors

A  - S-by-Q output vectors

T  - S-by-Q layer target vectors

E  - S-by-Q layer error vectors

gW - S-by-R weight gradient with respect to performance

gA - S-by-Q output gradient with respect to performance

D  - S-by-S neuron distances

LP - Learning parameters (see below)

LS - Learning state, initially should be = []

and returns


dW - S-by-R weight (or bias) change matrix

LS - New learning state

Learning occurs according to learnsom’s learning parameters, shown here with their default values.


LP.order_lr    - Ordering phase learning rate (default 0.9)

LP.order_steps - Ordering phase steps (default 1000)

LP.tune_lr     - Tuning phase learning rate (default 0.02)

LP.tune_nd     - Tuning phase neighborhood distance (default 1)

info = learnsom('code') returns useful information for each code string:


'pnames'    - Names of learning parameters

'pdefaults' - Default learning parameters

'needg'     - Returns 1 if this function uses gW or gA
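
For example, assuming these standard Neural Network Toolbox code strings:

learnsom('pnames')     % names of the learning parameters
learnsom('pdefaults')  % default values of the learning parameters
learnsom('needg')      % 1 if this function uses gW or gA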


Examples

Here you define a random input P, output A, and weight matrix W for a layer with a two-element input and six neurons. You also calculate positions and distances for the neurons, which are arranged in a 2-by-3 hexagonal pattern. Then you define the four learning parameters.

p = rand(2,1);
a = rand(6,1);
w = rand(6,2);
pos = hextop(2,3);
d = linkdist(pos);
lp.order_lr = 0.9;
lp.order_steps = 1000;
lp.tune_lr = 0.02;
lp.tune_nd = 1;

Because learnsom only needs these values to calculate a weight change (see “Algorithm” below), use them to do so.

ls = [];
[dW,ls] = learnsom(w,p,[],[],a,[],[],[],[],d,lp,ls)
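
The returned learning state ls can be passed back in on later calls so that learnsom can track its progress through the ordering and tuning phases, for example (a hypothetical second presentation of the same data):

[dW,ls] = learnsom(w,p,[],[],a,[],[],[],[],d,lp,ls)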

Network Use

You can create a standard network that uses learnsom with newsom. To prepare the weights and the bias of layer i of a custom network to learn with learnsom,

  1. Set net.trainFcn to 'trainr'. (net.trainParam automatically becomes trainr’s default parameters.)

  2. Set net.adaptFcn to 'trains'. (net.adaptParam automatically becomes trains’s default parameters.)

  3. Set each net.inputWeights{i,j}.learnFcn to 'learnsom'.

  4. Set each net.layerWeights{i,j}.learnFcn to 'learnsom'.

  5. Set net.biases{i}.learnFcn to 'learnsom'. (Each weight learning parameter property is automatically set to learnsom’s default parameters.)

To train the network (or enable it to adapt), follow these steps (a sketch appears after them):

  1. Set net.trainParam (or net.adaptParam) properties to desired values.

  2. Call train (adapt).
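
For example, here is a minimal end-to-end sketch. It assumes the data-based newsom calling convention (older releases take a matrix of min/max input ranges instead); the input data and epoch count are arbitrary illustrative choices.

% Sketch: create a 2-by-3 SOM whose weights learn with learnsom,
% then train it on random two-element input vectors.
P = rand(2,100);              % 100 random input vectors (illustrative)
net = newsom(P,[2 3]);        % standard SOM; its weights use learnsom
net.trainParam.epochs = 200;  % illustrative epoch count
net = train(net,P);           % train the map on P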


Algorithm

learnsom calculates the weight change dW for a given neuron from the neuron’s input P, activation A2, and learning rate LR:

dw = lr*a2*(p'-w)

where the activation A2 is found from the layer output A, neuron distances D, and the current neighborhood size ND:

a2(i,q) = 1,   if a(i,q) = 1
        = 0.5, if a(j,q) = 1 and D(i,j) <= nd
        = 0,   otherwise
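
As a concrete illustration, this sketch applies the formulas above for one presentation, reusing the variables from the example. Here lr and nd are held fixed for simplicity, whereas learnsom adjusts them per phase, and picking the winner with max assumes a would be a competitive layer output:

% Sketch: one learnsom-style update computed directly from the
% formulas above, for a (6-by-1), w (6-by-2), p (2-by-1), d (6-by-6).
lr = 0.9;                 % illustrative fixed learning rate
nd = 2;                   % illustrative fixed neighborhood distance
[~,i] = max(a);           % winning neuron (where a(i,q) = 1)
a2 = 0.5*(d(:,i) <= nd);  % neighbors within distance nd get 0.5
a2(i) = 1;                % the winner itself gets 1
dW = lr*repmat(a2,1,2).*(repmat(p',6,1) - w)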

The learning rate LR and the neighborhood size ND are altered through two phases: an ordering phase and a tuning phase.

The ordering phase lasts as many steps as LP.order_steps. During this phase, LR is adjusted from LP.order_lr down to LP.tune_lr, and ND is adjusted from the maximum neuron distance down to 1. It is during this phase that neuron weights are expected to order themselves in the input space consistent with the associated neuron positions.

During the tuning phase LR decreases slowly from LP.tune_lr, and ND is always set to LP.tune_nd. During this phase the weights are expected to spread out relatively evenly over the input space while retaining their topological order, determined during the ordering phase.
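
The exact schedules are internal to learnsom, but a sketch consistent with this description might interpolate linearly during the ordering phase and decay slowly during the tuning phase (the forms below are illustrative assumptions, not learnsom's actual code):

% Illustrative LR/ND schedules (assumed forms); step counts
% presentations and max_dist = max(d(:)) is the maximum neuron distance.
if step <= lp.order_steps                  % ordering phase
    frac = 1 - step/lp.order_steps;
    lr = lp.tune_lr + (lp.order_lr - lp.tune_lr)*frac;
    nd = 1 + (max_dist - 1)*frac;
else                                       % tuning phase
    lr = lp.tune_lr*lp.order_steps/step;   % slow decay from tune_lr
    nd = lp.tune_nd;
end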

See Also


Introduced before R2006a
