
Thread Subject:
NEURAL NETWORK PATTERN RECOGNITION TOOL

Subject: NEURAL NETWORK PATTERN RECOGNITION TOOL

From: Slawomir

Date: 24 Apr, 2012 14:42:08

Message: 1 of 8

Hello, everyone!

I have been struggling with my pattern recognition network for a few hours now.
So here is what I did:

I've downloaded the dataset from http://archive.ics.uci.edu/ml/machine-learning-databases/haberman/ , saved haberman.data as dane.txt, and tried to recognize the pattern with a neural network.

To get started, I calculated the exact number of samples in each output class using this code:

CODE:
% ===========================
% COUNTING THE NUMBER OF SAMPLES
% 1 = the patient survived 5 years or longer
% 2 = the patient died within 5 years

dane = load('dane.txt');
T = dane(:,4);
[a b] = size(T);
i=0;
j=0;
k=1;

while k<=a
    if T(k)==2
        i=i+1;
    else
        j=j+1;
    end
    k=k+1;
end

i % number of samples in class 2 (died)
j % number of samples in class 1 (survived)
k=k-1 % TOTAL NUMBER OF SAMPLES
% ===========================

and I have results:
a = 306
b = 1
i = 81
j = 225
k = 306
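The loop's per-class counts can be cross-checked with a vectorized count. A minimal sketch, in NumPy rather than the thread's MATLAB, with stand-in labels matching the totals above:

```python
import numpy as np

# Stand-in for the fourth column of dane.txt:
# 225 ones (survived 5+ years) and 81 twos (died within 5 years)
T = np.array([1] * 225 + [2] * 81)

died = np.count_nonzero(T == 2)      # class 2 count
survived = np.count_nonzero(T == 1)  # class 1 count
total = T.size                       # total number of samples

print(died, survived, total)         # 81 225 306
```

The MATLAB equivalent would be `sum(T==2)` and `sum(T==1)`, which replaces the whole while-loop.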


So okay, later on I run my pattern recognition network, but I cannot reproduce those counts. For example, in the confusion matrix I have:
http://i44.tinypic.com/29et69s.jpg

1 = 196 samples (should be 81)
2 = 110 samples (should be 225)

Sometimes this network gives me results with 99% accuracy.
Why is there such a huge difference?
Probably because of "net.divideFcn = 'dividerand'; ", right?
Is it possible to reach 100%?
And can you tell me if this is correct, or did I mess something up somewhere?


MAIN PROGRAM CODE:
% ==========================
dane = load('dane.txt');
size(dane)
P = dane(:,1:3)'; % input matrix
size(P)
T = dane(:,4)'; % target matrix
c = T- 1;
T = [c;c];
size(T)

% CREATING PATTERN RECOGNITION NETWORK
% ==========================
                             
hiddenLayerSize = 10;
net = patternnet(hiddenLayerSize);
net.inputs{1}.processFcns = {'removeconstantrows','mapminmax'};
net.outputs{2}.processFcns = {'removeconstantrows','mapminmax'};
net.divideFcn = 'dividerand';
net.divideMode = 'sample';
net.divideParam.trainRatio = 70/100;
net.divideParam.valRatio = 15/100;
net.divideParam.testRatio = 15/100;
net.trainFcn = 'trainlm';
net.performFcn = 'mse';
net.plotFcns = {'plotperform','plottrainstate','ploterrhist'}

% NETWORK TRAINING
% ==========================

[net,tr] = train(net,P,T);

% NETWORK TEST
% ==========================

outputs = net(P);
errors = gsubtract(T,outputs); %(T-outputs);
performance = perform(net,T,outputs);
S =round( sim(net, P)) % network simulation
MSE = mse(T - S)

% COMPUTING TRAIN PROGRESS, VALIDATION AND TESTING PERFORMANCE
% ==========================

trainTargets = T .* tr.trainMask{1};
valTargets = T .* tr.valMask{1};
testTargets = T .* tr.testMask{1};
trainPerformance = perform(net,trainTargets,outputs)
valPerformance = perform(net,valTargets,outputs)
testPerformance = perform(net,testTargets,outputs)

% PLOTS
% ==============================
%view(net)
% figure, plotperform(tr)
% figure, plottrainstate(tr)
figure, plotconfusion(T,outputs, 'ALL',trainTargets,outputs,'TRAIN',...
                                  valTargets,outputs,'VALIDATION', testTargets,outputs, 'TEST')
%figure, ploterrhist(errors)
% ==========================
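With net.divideFcn = 'dividerand', MATLAB draws a fresh random 70/15/15 split of the samples on every run, which is one reason the confusion matrix (and the occasional 99% figure) changes between runs. A rough NumPy sketch of that kind of split — the sample count assumes the 306 Haberman rows, and seeding the generator makes the split reproducible:

```python
import numpy as np

# dividerand-style split: random 70/15/15 partition of sample indices.
rng = np.random.default_rng(0)   # fixed seed -> reproducible split
N = 306                          # stand-in for the Haberman sample count
idx = rng.permutation(N)

n_train = round(0.70 * N)
n_val = round(0.15 * N)

train_idx = idx[:n_train]                  # 214 samples
val_idx = idx[n_train:n_train + n_val]     # 46 samples
test_idx = idx[n_train + n_val:]           # 46 samples

print(len(train_idx), len(val_idx), len(test_idx))  # 214 46 46
```

In MATLAB the analogous move is initializing the random number generator (e.g. with `rng`) before training, or using 'divideind' with fixed index lists, so that successive runs see the same train/val/test sets.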

Subject: NEURAL NETWORK PATTERN RECOGNITION TOOL

From: Greg Heath

Date: 25 Apr, 2012 06:10:18

Message: 2 of 8

On Apr 24, 10:42 am, "Slawomir " <SlawomirBab...@gmail.com> wrote:
> Hello, everyone!

---SNIP (message 1 quoted in full)---

Plot T vs P(1,:), P(2,:) and P(3,:)

and try to find something that supports the assertion that the classes are significantly separable using P.

I can't see anything in the plots. Therefore I wouldn't waste my time trying to design a classifier with this data.
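One rough numeric stand-in for eyeballing those plots: compare the gap between the class means to the within-class spread, per feature. A NumPy sketch with synthetic, heavily overlapping stand-in data (the real check would use the Haberman columns P(1,:)-P(3,:)):

```python
import numpy as np

# Per-feature separability check: gap between class means divided by
# the pooled standard deviation. Values well below 1 mean the classes
# overlap badly and a classifier has little to work with.
rng = np.random.default_rng(1)
X1 = rng.normal(0.0, 1.0, size=(100, 3))   # stand-in class 1 samples
X2 = rng.normal(0.2, 1.0, size=(100, 3))   # stand-in class 2 samples, overlapping

gap = np.abs(X1.mean(axis=0) - X2.mean(axis=0))
pooled_sd = np.sqrt((X1.var(axis=0) + X2.var(axis=0)) / 2)
separation = gap / pooled_sd

print(separation)
```

This is only a crude screen (it ignores nonlinear structure), but when every feature scores near zero, it supports the same conclusion as the plots.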

Q1: Why did you dig up this data instead of trying some of the eight pattern recognition examples provided in the NNTB documentation?

Q2: Do the references show a NN model that gets a low classification rate?


Hope this helps.

Greg

Subject: NEURAL NETWORK PATTERN RECOGNITION TOOL

From: Slawomir

Date: 25 Apr, 2012 08:03:08

Message: 3 of 8

Hey, Greg!

Plot T vs P(1,:),P(2,:) and P(3,:)
and try to find something that supports the assertion that the classes are
significantly separable using P.

@ Okay, I will try this idea.

@ I can't see anything in the plots. Therefore I wouldn't waste my time
trying to design a classifier with this data.

I think the code is good, but the data I used is not, which is why I got those cosmic results on the output. It is as if I tried to predict the weather looking only at the number of people and their names; there is not much connection between them.
I tried the iris flowers classification problem instead, and it seems to work the way I want.

Q1: Why did you dig up this data instead of trying some of the eight
pattern recognition examples provided in the NNTB documentation?
@ Because my AI teacher forced us to use it. I tried to use the code that is already there instead of writing my own. My idea was to fit the existing code to input data from that repository and run it.

Q2: Do the references show a NN model that gets a low classification
rate?
@ No, quite the opposite. The problem was caused by my wrongly chosen input data.

But when I try to run the new code, I have a problem with changing the number of neurons in the output layer. It should be 3 in this case, but I have 2 there, while the number of outputs equals 3.
I tried to change it, but I failed.

"view(net)" was showing me 2 neurons instead of three.

Subject: NEURAL NETWORK PATTERN RECOGNITION TOOL

From: Greg Heath

Date: 25 Apr, 2012 16:08:25

Message: 4 of 8

On Apr 25, 4:03 am, "Slawomir " <SlawomirBab...@gmail.com> wrote:
> Hey, Greg!

---SNIP (message 3 quoted in full)---

You lost me. The data was 4-D with the last column the output target.

Therefore there would ordinarily be only one output node. However, sometimes people use 2 (columns of eye(2)).

I think that NNs are not the way to go here. Have you tried regression trees?

Greg

Subject: NEURAL NETWORK PATTERN RECOGNITION TOOL

From: Greg Heath

Date: 25 Apr, 2012 16:11:14

Message: 5 of 8

On Apr 25, 4:03 am, "Slawomir " <SlawomirBab...@gmail.com> wrote:

> Q2: Do the references show a NN model that gets a low classification
> rate?
> @ No, quite opposite. The problem was created by wrongly chosen my input data.
>
> But when I tried to run a new code, and I had problem with changing the number of neuron in the output layer. It should be 3 in this case, but I have there 2. And output equals 3.
> I tried to change it, but I failed.

I don't understand this. Please explain.

Greg

Subject: NEURAL NETWORK PATTERN RECOGNITION TOOL

From: Slawomir

Date: 27 Apr, 2012 09:53:07

Message: 6 of 8

Okay, I've created my answer in MS Word; it will look clearer and more legible.

http://www.sendspace.com/file/rbik2q

Hope that it is clearer than before.
Thanks for the help,
Sławek.

Subject: NEURAL NETWORK PATTERN RECOGNITION TOOL

From: Slawomir

Date: 27 Apr, 2012 20:06:13

Message: 7 of 8

Okay, dear Greg, and all other people reading this message, I will try to be more specific this time.

I used this data:

data.txt

5.1,3.5,1.4,0.2 ,0,0,0
4.9,3.0,1.4,0.2 ,0,0,0
4.7,3.2,1.3,0.2 ,0,0,0
4.6,3.1,1.5,0.2 ,0,0,0
5.0,3.6,1.4,0.2 ,0,0,0
5.4,3.9,1.7,0.4 ,0,0,0
4.6,3.4,1.4,0.3 ,0,0,0
5.0,3.4,1.5,0.2 ,0,0,0
4.4,2.9,1.4,0.2 ,0,0,0
4.9,3.1,1.5,0.1 ,0,0,0
5.4,3.7,1.5,0.2 ,0,0,0
4.8,3.4,1.6,0.2 ,0,0,0
4.8,3.0,1.4,0.1 ,0,0,0
4.3,3.0,1.1,0.1 ,0,0,0
5.8,4.0,1.2,0.2 ,0,0,0
5.7,4.4,1.5,0.4 ,0,0,0
5.4,3.9,1.3,0.4 ,0,0,0
5.1,3.5,1.4,0.3 ,0,0,0
5.7,3.8,1.7,0.3 ,0,0,0
5.1,3.8,1.5,0.3 ,0,0,0
5.4,3.4,1.7,0.2 ,0,0,0
5.1,3.7,1.5,0.4 ,0,0,0
4.6,3.6,1.0,0.2 ,0,0,0
5.1,3.3,1.7,0.5 ,0,0,0
4.8,3.4,1.9,0.2 ,0,0,0
5.0,3.0,1.6,0.2 ,0,0,0
5.0,3.4,1.6,0.4 ,0,0,0
5.2,3.5,1.5,0.2 ,0,0,0
5.2,3.4,1.4,0.2 ,0,0,0
4.7,3.2,1.6,0.2 ,0,0,0
4.8,3.1,1.6,0.2 ,0,0,0
5.4,3.4,1.5,0.4 ,0,0,0
5.2,4.1,1.5,0.1 ,0,0,0
5.5,4.2,1.4,0.2 ,0,0,0
4.9,3.1,1.5,0.1 ,0,0,0
5.0,3.2,1.2,0.2 ,0,0,0
5.5,3.5,1.3,0.2 ,0,0,0
4.9,3.1,1.5,0.1 ,0,0,0
4.4,3.0,1.3,0.2 ,0,0,0
5.1,3.4,1.5,0.2 ,0,0,0
5.0,3.5,1.3,0.3 ,0,0,0
4.5,2.3,1.3,0.3 ,0,0,0
4.4,3.2,1.3,0.2 ,0,0,0
5.0,3.5,1.6,0.6 ,0,0,0
5.1,3.8,1.9,0.4 ,0,0,0
4.8,3.0,1.4,0.3 ,0,0,0
5.1,3.8,1.6,0.2 ,0,0,0
4.6,3.2,1.4,0.2 ,0,0,0
5.3,3.7,1.5,0.2 ,0,0,0
5.0,3.3,1.4,0.2 ,0,0,0
7.0,3.2,4.7,1.4 ,0,1,0
6.4,3.2,4.5,1.5 ,0,1,0
6.9,3.1,4.9,1.5 ,0,1,0
5.5,2.3,4.0,1.3 ,0,1,0
6.5,2.8,4.6,1.5 ,0,1,0
5.7,2.8,4.5,1.3 ,0,1,0
6.3,3.3,4.7,1.6 ,0,1,0
4.9,2.4,3.3,1.0 ,0,1,0
6.6,2.9,4.6,1.3 ,0,1,0
5.2,2.7,3.9,1.4 ,0,1,0
5.0,2.0,3.5,1.0 ,0,1,0
5.9,3.0,4.2,1.5 ,0,1,0
6.0,2.2,4.0,1.0 ,0,1,0
6.1,2.9,4.7,1.4 ,0,1,0
5.6,2.9,3.6,1.3 ,0,1,0
6.7,3.1,4.4,1.4 ,0,1,0
5.6,3.0,4.5,1.5 ,0,1,0
5.8,2.7,4.1,1.0 ,0,1,0
6.2,2.2,4.5,1.5 ,0,1,0
5.6,2.5,3.9,1.1 ,0,1,0
5.9,3.2,4.8,1.8 ,0,1,0
6.1,2.8,4.0,1.3 ,0,1,0
6.3,2.5,4.9,1.5 ,0,1,0
6.1,2.8,4.7,1.2 ,0,1,0
6.4,2.9,4.3,1.3 ,0,1,0
6.6,3.0,4.4,1.4 ,0,1,0
6.8,2.8,4.8,1.4 ,0,1,0
6.7,3.0,5.0,1.7 ,0,1,0
6.0,2.9,4.5,1.5 ,0,1,0
5.7,2.6,3.5,1.0 ,0,1,0
5.5,2.4,3.8,1.1 ,0,1,0
5.5,2.4,3.7,1.0 ,0,1,0
5.8,2.7,3.9,1.2 ,0,1,0
6.0,2.7,5.1,1.6 ,0,1,0
5.4,3.0,4.5,1.5 ,0,1,0
6.0,3.4,4.5,1.6 ,0,1,0
6.7,3.1,4.7,1.5 ,0,1,0
6.3,2.3,4.4,1.3 ,0,1,0
5.6,3.0,4.1,1.3 ,0,1,0
5.5,2.5,4.0,1.3 ,0,1,0
5.5,2.6,4.4,1.2 ,0,1,0
6.1,3.0,4.6,1.4 ,0,1,0
5.8,2.6,4.0,1.2 ,0,1,0
5.0,2.3,3.3,1.0 ,0,1,0
5.6,2.7,4.2,1.3 ,0,1,0
5.7,3.0,4.2,1.2 ,0,1,0
5.7,2.9,4.2,1.3 ,0,1,0
6.2,2.9,4.3,1.3 ,0,1,0
5.1,2.5,3.0,1.1 ,0,1,0
5.7,2.8,4.1,1.3 ,0,1,0
6.3,3.3,6.0,2.5 ,0,0,1
5.8,2.7,5.1,1.9 ,0,0,1
7.1,3.0,5.9,2.1 ,0,0,1
6.3,2.9,5.6,1.8 ,0,0,1
6.5,3.0,5.8,2.2 ,0,0,1
7.6,3.0,6.6,2.1 ,0,0,1
4.9,2.5,4.5,1.7 ,0,0,1
7.3,2.9,6.3,1.8 ,0,0,1
6.7,2.5,5.8,1.8 ,0,0,1
7.2,3.6,6.1,2.5 ,0,0,1
6.5,3.2,5.1,2.0 ,0,0,1
6.4,2.7,5.3,1.9 ,0,0,1
6.8,3.0,5.5,2.1 ,0,0,1
5.7,2.5,5.0,2.0 ,0,0,1
5.8,2.8,5.1,2.4 ,0,0,1
6.4,3.2,5.3,2.3 ,0,0,1
6.5,3.0,5.5,1.8 ,0,0,1
7.7,3.8,6.7,2.2 ,0,0,1
7.7,2.6,6.9,2.3 ,0,0,1
6.0,2.2,5.0,1.5 ,0,0,1
6.9,3.2,5.7,2.3 ,0,0,1
5.6,2.8,4.9,2.0 ,0,0,1
7.7,2.8,6.7,2.0 ,0,0,1
6.3,2.7,4.9,1.8 ,0,0,1
6.7,3.3,5.7,2.1 ,0,0,1
7.2,3.2,6.0,1.8 ,0,0,1
6.2,2.8,4.8,1.8 ,0,0,1
6.1,3.0,4.9,1.8 ,0,0,1
6.4,2.8,5.6,2.1 ,0,0,1
7.2,3.0,5.8,1.6 ,0,0,1
7.4,2.8,6.1,1.9 ,0,0,1
7.9,3.8,6.4,2.0 ,0,0,1
6.4,2.8,5.6,2.2 ,0,0,1
6.3,2.8,5.1,1.5 ,0,0,1
6.1,2.6,5.6,1.4 ,0,0,1
7.7,3.0,6.1,2.3 ,0,0,1
6.3,3.4,5.6,2.4 ,0,0,1
6.4,3.1,5.5,1.8 ,0,0,1
6.0,3.0,4.8,1.8 ,0,0,1
6.9,3.1,5.4,2.1 ,0,0,1
6.7,3.1,5.6,2.4 ,0,0,1
6.9,3.1,5.1,2.3 ,0,0,1
5.8,2.7,5.1,1.9 ,0,0,1
6.8,3.2,5.9,2.3 ,0,0,1
6.7,3.3,5.7,2.5 ,0,0,1
6.7,3.0,5.2,2.3 ,0,0,1
6.3,2.5,5.0,1.9 ,0,0,1
6.5,3.0,5.2,2.0 ,0,0,1
6.2,3.4,5.4,2.3 ,0,0,1
5.9,3.0,5.1,1.8 ,0,0,1



This dataset contains 3 groups of iris flowers (50 cases in each one). The last 3 columns represent the group:
1st - 0,0,0
2nd - 0,1,0
3rd - 0,0,1


I am running this code:
  % ==========================
  dane = load('dane.txt');
  size(dane)
  P = dane(:,1:4)';
  size(P)
  T = dane(:,5:7)';
  size(T)
 
  % CREATING PATTERN RECOGNITION NETWORK
  % ==========================
 
  hiddenLayerSize = 10;
  net = patternnet(hiddenLayerSize);
  net.inputs{1}.processFcns = {'removeconstantrows','mapminmax'};
  net.outputs{2}.processFcns = {'removeconstantrows','mapminmax'};
  net.divideFcn = 'dividerand';
  net.divideMode = 'sample';
  net.divideParam.trainRatio = 70/100;
  net.divideParam.valRatio = 15/100;
  net.divideParam.testRatio = 15/100;
  net.trainFcn = 'trainlm';
  net.performFcn = 'mse';
  net.plotFcns = {'plotperform','plottrainstate','ploterrhist'}
 
  % NETWORK TRAINING
  % ==========================
 
  [net,tr] = train(net,P,T);
 
  % NETWORK TEST
  % ==========================
 
  outputs = net(P);
  errors = gsubtract(T,outputs); %(T-outputs);
  performance = perform(net,T,outputs);
  S =round( sim(net, P)) % network simulation
  MSE = mse(T - S)
 
  % COMPUTING TRAIN PROGRESS, VALIDATION AND TESTING PERFORMANCE
  % ==========================
 
  trainTargets = T .* tr.trainMask{1};
  valTargets = T .* tr.valMask{1};
  testTargets = T .* tr.testMask{1};
  trainPerformance = perform(net,trainTargets,outputs)
  valPerformance = perform(net,valTargets,outputs)
  testPerformance = perform(net,testTargets,outputs)
 
  % PLOTS
  % ==============================
  %view(net)
  % figure, plotperform(tr)
  % figure, plottrainstate(tr)
  figure, plotconfusion(T,outputs, 'ALL',trainTargets,outputs,'TRAIN',...
                                    valTargets,outputs,'VALIDATION', testTargets,outputs, 'TEST')
  %figure, ploterrhist(errors)
  % ==========================


And I get the results:
1.
http://www.freeimagehosting.net/b6gav

I can classify every sample into my target class with incredible, 100% accuracy, but why does the classification look like the one below?
The output class is strictly tied to the target class, the diagonal one.

2.
http://www.freeimagehosting.net/31h9z
I took this from nnstart > pattern recognition tool > load_example_data > iris_flowers, and at the end: plot confusion.

3.
http://www.freeimagehosting.net/pjbwb
http://www.freeimagehosting.net/ob4kt

But when using the code prepared by MATLAB and trying to feed my data into it, my network view looks like this. I have 2 neurons in the output layer and 3 in the output. Probably that is why I am receiving this different classification; only the target class is taken under consideration.

And this is the network view from nnstart:

http://www.freeimagehosting.net/4i3u3

Three neurons in the output layer and 3 in the output.

How can I define those values in my code?
I’ve tried to set them up, but unsuccessfully.



I hope that I was clearer this time.
Thanks for helping me.

Subject: NEURAL NETWORK PATTERN RECOGNITION TOOL

From: Greg Heath

Date: 28 Apr, 2012 08:53:17

Message: 8 of 8

On Apr 27, 4:06 pm, "Slawomir " <SlawomirBab...@gmail.com> wrote:
> Okay, dear Greg, and all others people reading this message, I will try
> to be more specific this time.
>
> I used this data:
>
> data.txt
>
> 5.1,3.5,1.4,0.2 ,0,0,0
> 4.9,3.0,1.4,0.2 ,0,0,0

---SNIP

Why did you use this instead of the data set in the iris example documentation???

[P,T] = iris_dataset;

> This is dataset contains 3 groups (50 cases in every one) of iris flowers.
> Last 3 rows represents each group.
> 1st - 0,0,0,

NO! Use

1st - 1,0,0

Because:

For classification, the targets should be columns of the unit matrix. Then the outputs are estimates of the class posterior probabilities (conditional on the input). Consequently,

vec2ind(T) will yield the target class indices
and
vec2ind(net(P)) will yield the estimated class indices of the input.
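The vec2ind point can be sketched outside MATLAB: with unit-matrix columns as targets, decoding a column back to its class index is just an argmax over the rows (plus 1). A NumPy illustration with stand-in class labels:

```python
import numpy as np

# One-hot targets as columns of the identity matrix (class 1 -> [1 0 0]').
# MATLAB's vec2ind corresponds to argmax along the rows, plus 1.
classes = np.array([1, 2, 3, 2, 1])   # stand-in class labels
T = np.eye(3)[:, classes - 1]         # 3 x 5 one-hot target matrix

decoded = T.argmax(axis=0) + 1        # recover the class indices
print(decoded)                        # [1 2 3 2 1]

# With the thread's original coding, class 1 = [0 0 0]' contains no 1 at all,
# so there is nothing for the argmax/vec2ind decoding to find, and the
# confusion matrix collapses the way the poster observed.
```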

> 2nd- 0,1,0,
> 3th- 0,0,1
>
> I am running this code:
> % ==========================
> dane = load('dane.txt');
> size(dane)
> P = dane(:,1:4)';
> size(P)
> T = dane(:,5:7)';
> size(T)

UNDERSTAND YOUR DATA: Plot the 3 class indices vs the four inputs.

Before designing the net, I like to find, for reference, the mse and PCTerr for the NAIVE MEAN-VALUE MODEL and the LINEAR MODEL. Search

    "greg-heath" MSE00 MSE0

for details.
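That naive mean-value reference can be sketched in a few lines: the constant predictor equal to the target mean has an MSE equal to the (biased) target variance, which is the MSE00 baseline mentioned above. A NumPy sketch with stand-in targets:

```python
import numpy as np

# Naive mean-value reference model: predict the target mean for every sample.
# Its MSE (the MSE00 baseline) equals the biased variance of the targets;
# a trained net should beat this number clearly to be worth anything.
T = np.array([0.0, 0.0, 1.0, 1.0, 1.0, 0.0])   # stand-in 0/1 targets

y00 = np.full_like(T, T.mean())     # constant predictor
MSE00 = np.mean((T - y00) ** 2)     # equals np.var(T)

print(MSE00)                        # 0.25
```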

> % CREATING PATTERN RECOGNITION NETWORK
> % ==========================

Initialize the random number generator here so that you can duplicate your results.

> hiddenLayerSize = 10;
> net = patternnet(hiddenLayerSize);
> net.inputs{1}.processFcns = {'removeconstantrows','mapminmax'};
> net.outputs{2}.processFcns = {'removeconstantrows','mapminmax'};
> net.divideFcn = 'dividerand';
> net.divideMode = 'sample';
> net.divideParam.trainRatio = 70/100;
> net.divideParam.valRatio = 15/100;
> net.divideParam.testRatio = 15/100;
> net.trainFcn = 'trainlm';
> net.performFcn = 'mse';

If you take advantage of the built-in defaults, you can replace the above eleven commands with the following one:

net = patternnet;

Reread the documentation so that you can take advantage of defaults.

> net.plotFcns = {'plotperform','plottrainstate','ploterrhist'}

This is a classifier; why didn't you include plotconfusion and plotroc???

> % NETWORK TRAINING
> % ==========================
>
> [net,tr] = train(net,P,T);
>
> % NETWORK TEST
> % ==========================
>
> outputs = net(P);
> errors = gsubtract(T,outputs); %(T-outputs);
> performance = perform(net,T,outputs); % 0.0105 = mse(T-outputs)
> S =round( sim(net, P)) % network simulation % S = round(outputs)
> MSE = mse(T - S) % 0.0111

This is the MSE of the total data set. However,
1. What are the MSEs of train/val/test?
2. This is a classification example. MSE tells you little about classification performance...

Where are the class error rates, e.g., the confusion matrix??

> % COMPUTING TRAIN PROGRESS, VALIDATION AND TESTING PERFORMANCE
> % ==========================
>
> trainTargets = T .* tr.trainMask{1};
> valTargets = T .* tr.valMask{1};
> testTargets = T .* tr.testMask{1};
> trainPerformance = perform(net,trainTargets,outputs)
> valPerformance = perform(net,valTargets,outputs)
> testPerformance = perform(net,testTargets,outputs)

This works because of the way PERFORM ignores calculations with NaN.
However, I feel better when the masks are also applied to the outputs!

> % PLOTS
> % ==============================
> %view(net)
> % figure, plotperform(tr)
> % figure, plottrainstate(tr)
> figure, plotconfusion(T,outputs, 'ALL',trainTargets,outputs,'TRAIN',...
> valTargets,outputs,'VALIDATION', testTargets,outputs, 'TEST')
> %figure, ploterrhist(errors)
> % ==========================
>
> And I get the results:
> 1. http://www.freeimagehosting.net/b6gav
>
> I can classify every sample in my target class with incredible accuracy, 100%,
> but how to do that classification looks like below?
> That output class is strictly connected with target class, diagonal one.
>
> 2. http://www.freeimagehosting.net/31h9z
> I took this from nnstart > pattern recognition tool > load_example_data > iris_flowers,
> and at the end: plot confusion.
>
> 3. http://www.freeimagehosting.net/pjbwb
> http://www.freeimagehosting.net/ob4kt
>
> But using prepared by matlab code and trying to implement my data inside, my network
> view looks like this. I have 2 neurons in the output

CORRECTION: Change "output" to "hidden"

> layer and 3 in the output. Probably that is why I am receiving this different classification,
> only the target class is taken under consideration.

I don't understand.

> And this is the network view from nnstart:
>
> http://www.freeimagehosting.net/4i3u3
>
> Three neurons in the output

CORRECTION: Change "output" to "hidden"

> layer and 3 in the output.
>
> How can I define those values in my code?
> I’ve tried to set them up, but unsuccessfully.
>
> I hope that I was more clear this time.
> Thanks for helping me.

Your basic problem with the confusion matrix is that the targets for
class 1 are not columns from the unit matrix [1 0 0 ]'.

Hope this helps.

Greg
