MATLAB Answers


Neural network performance evaluation?

Asked by Daud
on 25 Dec 2012

For evaluating NN performance over a given number of trials (retrainings), which approach is right, and why?

% Approach 1: re-initialize inside the loop, so every trial starts from new random weights
for trial = 1:100
    net = init(net);
    [net,tr,Y,E,Pf,Af] = train(...);
end

% Approach 2: no re-initialization, so every trial keeps training the same net
for trial = 1:100
    [net,tr,Y,E,Pf,Af] = train(...);
end

Note: I am getting decent results with both approaches, but the latter gives me the better result.


2 Answers

Answer by Greg Heath
on 1 Jan 2013
 Accepted answer

Thank you for formally accepting my answer!


Answer by Greg Heath
on 27 Dec 2012

The first example is the correct one because it contains 100 random weight initializations. Therefore each net is a valid, independent result.

The 2nd example just keeps training the same net over and over.
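A minimal sketch of the first (correct) approach, assuming data matrices x and t and an already-created net; the bestnet bookkeeping is illustrative, not from the original post:

bestvperf = Inf;
for trial = 1:100
    net = init(net);                 % fresh random initial weights each trial
    [net,tr] = train(net,x,t);
    if tr.best_vperf < bestvperf     % keep the net with the lowest validation error
        bestvperf = tr.best_vperf;
        bestnet = net;
    end
end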

What, exactly, do you mean by decent results?

Is this regression or classification?

Are you using validation stopping?

How many acceptable solutions out of 100?

If regression, what are the means and standard deviations of the training, validation and testing NORMALIZED (with average target variance) mean-square-error?

I usually shoot for (but don't always get) NMSEtrn <= 0.01

For an I-H-O net:

Ntrneq = prod(size(ttrn)) % Ntrn*O = No. of training equations

Nw = (I+1)*H + (H+1)*O % No. of unknown weights

NMSEtrn = sse(ttrn-ytrn)/(Ntrneq-Nw)/mean(var(ttrn',0))

NMSEi = mse(yi-ti)/mean(var(ti',1)) % for i = val and test
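Putting those formulas together, a sketch of the computation from one training run (the names x, t, y are assumptions; tr.trainInd and friends are the standard training-record index fields):

[net,tr,y,e] = train(net,x,t);
ttrn = t(:,tr.trainInd);  ytrn = y(:,tr.trainInd);
tval = t(:,tr.valInd);    yval = y(:,tr.valInd);
ttst = t(:,tr.testInd);   ytst = y(:,tr.testInd);
I = size(x,1);  O = size(t,1);
H = net.layers{1}.size;              % hidden layer size
Ntrneq = prod(size(ttrn));           % Ntrn*O training equations
Nw = (I+1)*H + (H+1)*O;              % unknown weights
NMSEtrn = sse(ttrn-ytrn)/(Ntrneq-Nw)/mean(var(ttrn',0));
NMSEval = mse(yval-tval)/mean(var(tval',1));
NMSEtst = mse(ytst-ttst)/mean(var(ttst',1));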

I have posted many examples in NEWSGROUP and ANSWERS. Try searching on

heath newff Ntrials

Hope this helps.

Thank you for formally accepting my answer.



Greg Heath
on 31 Dec 2012

NO! The second approach is, in general, useless!

The idea is to train a network that will GENERALIZE well; i.e., to have good performance on nontraining data. If you have enough unknown weights compared to the number of training equations, you can get ridiculously low error rates if you train long enough.

The problem is that the network will probably not generalize well. That is, it will not perform well on nontraining data.

That is why the validation set is used to represent unseen nontraining data, and why training ends whenever the validation error increases for max_fail epochs (default = 6) in a row.

You cannot ignore that fact and just keep training.

The true measure of a net is the test set error. The validation set error is a prediction of what the test set error will be. Therefore, when it reaches a minimum in training, you should stop.
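The stopping point is recorded in the training record, so you can check it directly (a sketch; best_epoch, best_vperf, and best_tperf are standard fields of tr):

[net,tr] = train(net,x,t);
tr.best_epoch    % epoch at which the validation error was minimal
tr.best_vperf    % validation performance at that epoch
tr.best_tperf    % test performance at that epoch (the generalization estimate)
plotperform(tr)  % plot the trn/val/tst performance curves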

Have you taken a good look at the trn/val/tst training performance plots?

The training mse tends to decrease monotonically even while the val and/or tst mses are increasing.

This phenomenon is called overtraining an overfit (too many weights) net.

Search overfitting in the FAQ and elsewhere (e.g., my posts).

Again: Net performance is measured via performance on nondesign test sets:

data = design + test

design = train + validation.

I tend to use 10 trials (not trails!) for each candidate number of hidden nodes and choose successful nets with the smallest number of hidden nodes (& weights).
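As a sketch (Hmax, Ntrials, and patternnet for a classification target are my assumptions, not Greg's exact code):

Hmax = 20;
Ntrials = 10;
vperf = Inf(Ntrials,Hmax);
for H = 1:Hmax
    for trial = 1:Ntrials
        net = patternnet(H);              % new net => new random initial weights
        [net,tr] = train(net,x,t);
        vperf(trial,H) = tr.best_vperf;   % validation performance of this design
    end
end
% Choose the smallest H whose best trial meets the goal, then judge that
% net by its test set error alone.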

I have posted many, many designs. Key search words are heath Nw and Ntrials

Hope this helps.


Daud
on 1 Jan 2013

Thanks for your answer and for correcting my spelling ("trial"). But I still can't incorporate the facts you mentioned. By the way, the total data set is divided into train, validation, and test sets, and the recognition rate mentioned above is the overall recognition rate (train, val, and test).

Why should I be concerned about over-fitting, since I am using validation stopping?

OK Greg, I have a query. Suppose that in the 2nd approach the initial weights for trial 1 are w1, w2, ..., wn. In trial 2, are the initial weights changed, or are they the same w1, w2, ..., wn as in trial 1?

If the initial weights are the same in each trial, I totally agree with you; but if not... I am confused.
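You can answer this yourself by inspecting the weight/bias vector before and after each call to train (a sketch; getwb returns the net's current weights and biases):

w_before = getwb(net);      % weights going into this trial
[net,tr] = train(net,x,t);
w_after = getwb(net);       % weights coming out of this trial
% Without net = init(net), the next trial starts from w_after, not from
% new random weights, so the 100 "trials" amount to one long training run.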

Daud
on 1 Jan 2013

Greg, I am posting my full code here; please check it out.

n_trial = 100;                          % added: n_trial was never defined in the post
close all
load Input_n
run target_00
c_tr{1,n_trial} = []; cm_tr{1,n_trial} = []; ind_tr{1,n_trial} = []; per_tr{1,n_trial} = [];
c_ts{1,n_trial} = []; cm_ts{1,n_trial} = []; ind_ts{1,n_trial} = []; per_ts{1,n_trial} = [];
c_val{1,n_trial} = []; cm_val{1,n_trial} = []; ind_val{1,n_trial} = []; per_val{1,n_trial} = [];
c_ovrl{1,n_trial} = []; cm_ovrl{1,n_trial} = []; ind_ovrl{1,n_trial} = []; per_ovrl{1,n_trial} = [];
tr_info{1,n_trial} = [];
tr_net{1,n_trial} = [];
net = patternnet(10);                   % assumed: the net creation line was missing from the post
net.inputs{1}.processFcns = {'mapstd'};
%training parameters
%Division parameters
net.divideParam.trainRatio = 70/100;
net.divideParam.valRatio = 20/100;
net.divideParam.testRatio = 10/100;
for i = 1:n_trial
    close all
    %net = init(net);                   % commented out: so every trial keeps training the SAME net
    [net,tr,Y,E] = train(net,input_all,Targets);
    tr_info{i} = tr;
    tr_net{i} = net;
    outputs_ovrl = sim(net,input_all);
    outputs_val = outputs_ovrl(:,tr.valInd);    % added: outputs_val/outputs_test were undefined
    outputs_test = outputs_ovrl(:,tr.testInd);
    %[m,b,r] = postreg(outputs_test,Targets(:,tr.testInd))
    [c_tr{i},cm_tr{i},ind_tr{i},per_tr{i}] = confusion(Targets(:,tr.trainInd),Y(:,tr.trainInd));
    [c_val{i},cm_val{i},ind_val{i},per_val{i}] = confusion(Targets(:,tr.valInd),outputs_val);
    [c_ts{i},cm_ts{i},ind_ts{i},per_ts{i}] = confusion(Targets(:,tr.testInd),outputs_test);
    [c_ovrl{i},cm_ovrl{i},ind_ovrl{i},per_ovrl{i}] = confusion(Targets,outputs_ovrl);
    %grid on
end
%Result evaluation: confusion returns fractions, so convert to percent
Avg_recg_rt_ovrl = 100*(1 - mean(cell2mat(c_ovrl)));
Avg_recg_rt_tr = 100*(1 - mean(cell2mat(c_tr)));
Avg_recg_rt_ts = 100*(1 - mean(cell2mat(c_ts)));
Avg_recg_rt_val = 100*(1 - mean(cell2mat(c_val)));
[min_err,trial_num] = min(cell2mat(c_ovrl));
best_recg = 100*(1 - min_err);
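Once the loop finishes, the net and training record of the best trial can be pulled out by index (a short usage sketch of the cell arrays above):

best_net = tr_net{trial_num};        % net from the best trial
best_tr = tr_info{trial_num};        % its training record
outputs_best = sim(best_net,input_all);
plotconfusion(Targets,outputs_best)  % confusion matrix for the best net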
