I'm using a Learning Vector Quantization network (LVQ) to classify data collected as being of a particular road type.
The training data set consists of 1470 samples. Each input is a vector of 30 measured properties, stored in a file called 'input data'.
"Targets" is a 3x1470 matrix where the ith column indicates which category the ith input vector belongs to, with a 1 in one element (and zeros in the other two elements).
I've created a network that assigns each of these input vectors to one of 20 subclasses. Thus, there are 20 neurons in the first competitive layer. These subclasses are then assigned to one of 3 output classes by the 3 neurons in the linear layer.
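For reference, the conversion between the two target representations (class indices vs. one-column-per-sample) looks like this; the indices here are made up for illustration:

```matlab
% Class indices for 5 hypothetical samples (values 1..3):
idx = [1 3 2 1 2];
T = full(ind2vec(idx));   % 3x5 one-hot target matrix (ind2vec returns sparse)
back = vec2ind(T);        % recovers the row of indices [1 3 2 1 2]
```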
Here is the code:
inp1 = load('input data.txt');  % Load input vectors
out1 = load('Targets.txt');     % Load target classes
T = ind2vec(out1);              % Convert the target indices to one-hot target vectors
net = newlvq(minmax(inp1), 20, [.58 .36 .06]);
% The class percentages mean the second-layer weights have 58% of their columns
% with a 1 in the first row (849 of the 1470 targets are class 1), 36% with a 1
% in the second row (class 2), and 6% with a 1 in the third row (class 3).
net = train(net, inp1, T);
Y = sim(net, inp1);
Yc = vec2ind(Y)
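After training, I check how many training samples come back with the right label. This assumes out1 is a 1x1470 row of class indices (1, 2, or 3), which is what ind2vec expects:

```matlab
Yc = vec2ind(sim(net, inp1));          % predicted class index per sample
acc = sum(Yc == out1) / numel(out1);   % fraction of training samples classified correctly
fprintf('Training accuracy: %.1f%%\n', 100 * acc);
```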
My questions are:
1. Is there anything obviously wrong with the code?
2. How do you know how many neurons to use in the competitive layer? I used 20.
3. How many epochs should you use? Is this linked to 'overtraining'?
4. How do you know that the LVQ is well trained? What should I look for on the Performance, Training State, Confusion, and Receiver Operating Characteristic plots?
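Regarding questions 2 and 3, one thing I have tried is holding out part of the data and comparing accuracy on both splits. This is only a sketch; it assumes out1 is a row of class indices, and the class percentages are kept at the values above rather than recomputed for the training subset:

```matlab
% Hold out ~20% of the samples to monitor generalization.
n = size(inp1, 2);
idx = randperm(n);
nTr = round(0.8 * n);
trIdx = idx(1:nTr);
vaIdx = idx(nTr+1:end);

% Train on the 80% split only (percentages ideally recomputed for this subset).
net = newlvq(minmax(inp1(:, trIdx)), 20, [.58 .36 .06]);
net.trainParam.epochs = 200;   % cap on training epochs
net = train(net, inp1(:, trIdx), ind2vec(out1(trIdx)));

% Compare accuracy on the training and held-out splits.
accTr = mean(vec2ind(sim(net, inp1(:, trIdx))) == out1(trIdx));
accVa = mean(vec2ind(sim(net, inp1(:, vaIdx))) == out1(vaIdx));
% A large gap between accTr and accVa suggests overtraining
% (too many epochs or too many subclass neurons).
```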
I am having problems running code similar to yours on a similar dataset. Even using the original training data as input to the trained net, I get very inaccurate results, and the test set is completely off. Adding more neurons in the hidden layer does not improve the model. I could not find a decent tutorial or examples on LVQ in MATLAB; have you?