Batch self-organizing map weight learning function
[dW,LS] = learnsomb(W,P,Z,N,A,T,E,gW,gA,D,LP,LS)
info = learnsomb('code')
learnsomb is the batch self-organizing map
weight learning function.
[dW,LS] = learnsomb(W,P,Z,N,A,T,E,gW,gA,D,LP,LS) takes the following inputs:

W  - S-by-R weight matrix (or S-by-1 bias vector)
P  - R-by-Q input vectors (or ones(1,Q))
Z  - S-by-Q weighted input vectors
N  - S-by-Q net input vectors
A  - S-by-Q output vectors
T  - S-by-Q layer target vectors
E  - S-by-Q layer error vectors
gW - S-by-R gradient with respect to performance
gA - S-by-Q output gradient with respect to performance
D  - S-by-S neuron distances
LP - Learning parameters, none, LP = []
LS - Learning state, initially should be = []

and returns the following:

dW - S-by-R weight (or bias) change matrix
LS - New learning state
Learning occurs according to learnsomb's learning parameters, shown here with their default values:

LP.init_neighborhood = 3  - Initial neighborhood size
LP.steps = 100            - Ordering phase steps
info = learnsomb('code') returns useful information for each supported code:

'pnames'    - Returns names of learning parameters.
'pdefaults' - Returns default learning parameters.
This example defines a random input P, output A, and weight matrix
W for a layer with a 2-element input and 6 neurons. It also
calculates the positions and distances for the neurons, which are
arranged in a 2-by-3 hexagonal pattern.
p = rand(2,1);
a = rand(6,1);
w = rand(6,2);
pos = hextop(2,3);
d = linkdist(pos);
lp = learnsomb('pdefaults');
learnsomb only needs these values
to calculate a weight change (see Algorithm).
ls = [];
[dW,ls] = learnsomb(w,p,[],[],a,[],[],[],[],d,lp,ls)
You can create a standard network that uses learnsomb with
selforgmap. To prepare the weights of layer i of a custom network to learn with
learnsomb:

1. Set NET.trainFcn to 'trainr'. (NET.trainParam automatically becomes
   trainr's default parameters.)
2. Set NET.adaptFcn to 'trains'. (NET.adaptParam automatically becomes
   trains's default parameters.)
3. Set each NET.inputWeights{i,j}.learnFcn to 'learnsomb'.
4. Set each NET.layerWeights{i,j}.learnFcn to 'learnsomb'.
   (Each weight learning parameter property is automatically set to
   learnsomb's default parameters.)

To train the network (or enable it to adapt):

1. Set NET.trainParam (or NET.adaptParam)
   properties as desired.
2. Call train (or adapt).
learnsomb calculates the weight changes so
that each neuron's new weight vector is the weighted average
of the input vectors that the neuron and neurons in its neighborhood
responded to with an output of 1.
The ordering phase lasts as many steps as LP.steps.
During this phase, the neighborhood is gradually reduced from
a maximum size of
LP.init_neighborhood down to 1,
where it remains from then on.
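The batch rule above can be sketched in Python with NumPy. This is a minimal illustration under assumed matrix shapes, not the toolbox implementation; the function name batch_som_update is hypothetical, and neurons whose neighborhood matched no inputs are simply left unchanged here.

```python
import numpy as np

def batch_som_update(W, P, A, D, nbh):
    """One batch SOM step (illustrative sketch, not learnsomb itself).

    Each neuron's new weight vector becomes the average of the input
    vectors that the neuron, or any neuron within neighborhood distance
    `nbh`, responded to with an output of 1.

    W : (S, R) weight matrix      P : (R, Q) input vectors
    A : (S, Q) 0/1 output matrix  D : (S, S) neuron distance matrix
    Returns dW, the (S, R) weight change matrix.
    """
    # Neighborhood membership: 1 where neurons are within `nbh` steps.
    mask = (D <= nbh).astype(float)              # (S, S)
    # For each neuron, how strongly its neighborhood responded to each input.
    responses = mask @ A                         # (S, Q)
    counts = responses.sum(axis=1, keepdims=True)
    # Average the matched input vectors; avoid dividing by zero.
    safe = np.where(counts == 0, 1.0, counts)
    new_W = (responses @ P.T) / safe             # (S, R)
    # Dead neurons (no matched inputs) get zero change.
    return np.where(counts == 0, 0.0, new_W - W)
```

For example, with three chained neurons (distances 0-1-2), a neighborhood of 1, and two inputs each claimed by an end neuron, the middle neuron's new weight vector is the mean of both inputs, while each end neuron moves to the single input its neighborhood matched.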