Batch self-organizing map weight learning function
[dW,LS] = learnsomb(W,P,Z,N,A,T,E,gW,gA,D,LP,LS)
info = learnsomb('code')
learnsomb is the batch self-organizing map weight learning function.
[dW,LS] = learnsomb(W,P,Z,N,A,T,E,gW,gA,D,LP,LS) takes several inputs:
W | S-by-R weight matrix (or S-by-1 bias vector) |
P | R-by-Q input vectors (or ones(1,Q)) |
Z | S-by-Q weighted input vectors |
N | S-by-Q net input vectors |
A | S-by-Q output vectors |
T | S-by-Q layer target vectors |
E | S-by-Q layer error vectors |
gW | S-by-R gradient with respect to performance |
gA | S-by-Q output gradient with respect to performance |
D | S-by-S neuron distances |
LP | Learning parameters, none, LP = [] |
LS | Learning state, initially should be = [] |
and returns the following:
dW | S-by-R weight (or bias) change matrix |
LS | New learning state |
Learning occurs according to learnsomb's learning parameters, shown here with their default values:
LP.init_neighborhood | 3 | Initial neighborhood size |
LP.steps | 100 | Ordering phase steps |
info = learnsomb('code') returns useful information for each code string:
'pnames' | Returns names of learning parameters. |
'pdefaults' | Returns default learning parameters. |
'needg' | Returns 1 if this function uses gW or gA. |
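For instance, the parameter defaults listed above can be retrieved directly (the comment notes what the table above implies the result should contain; exact display format may vary by release):

```matlab
names = learnsomb('pnames')     % names of the learning parameters
lp = learnsomb('pdefaults')     % struct with init_neighborhood and steps fields
```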
This example defines a random input P, output A, and weight matrix W for a layer with a 2-element input and 6 neurons. This example also calculates the positions and distances for the neurons, which appear in a 2-by-3 hexagonal pattern.
p = rand(2,1);
a = rand(6,1);
w = rand(6,2);
pos = hextop(2,3);
d = linkdist(pos);
lp = learnsomb('pdefaults');
Because learnsomb only needs these values to calculate a weight change (see Algorithm), use them to do so:
ls = [];
[dW,ls] = learnsomb(w,p,[],[],a,[],[],[],[],d,lp,ls)
You can create a standard network that uses learnsomb with selforgmap. To prepare the weights of layer i of a custom network to learn with learnsomb:
1. Set NET.trainFcn to 'trainr'. (NET.trainParam automatically becomes trainr's default parameters.)
2. Set NET.adaptFcn to 'trains'. (NET.adaptParam automatically becomes trains's default parameters.)
3. Set each NET.layerWeights{i,j}.learnFcn to 'learnsomb'. (Each weight learning parameter property is automatically set to learnsomb's default parameters.)
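The steps above might look as follows for an existing custom network net (the layer indices i and j here are placeholders for your network's actual layer-weight indices):

```matlab
net.trainFcn = 'trainr';                      % batch training function
net.adaptFcn = 'trains';                      % sequential adaptation function
net.layerWeights{i,j}.learnFcn = 'learnsomb'; % per-weight learning function
```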
To train the network (or enable it to adapt), set NET.trainParam (or NET.adaptParam) properties as desired, and call train (or adapt).
See Also: adapt | selforgmap | train
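As a sketch of the standard-network route mentioned above, a map created with selforgmap already uses learnsomb and can be trained directly (the 8-by-8 map size and random input data here are arbitrary choices for illustration):

```matlab
x = rand(2,100);            % 100 two-element input vectors
net = selforgmap([8 8]);    % standard self-organizing map
net = train(net,x);         % trains using trainr and learnsomb
```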