# How to present the results of a neural network


### Accepted Answer

Greg Heath
on 10 Jul 2012

> Are there any standard methods to present the results of a neural network after training?

There are several common methods of presentation, depending on what needs to be emphasized.

Most common designs for regression and classification depend on

1. The training criterion for stopping

a. Training set MSE convergence (degree-of-freedom adjusted)

b. Validation set MSE minimum

c. Weight-regularized training set MSE convergence

2. The number of hidden nodes, H.

3. The random initial weights

I typically choose both 1a and 1b. Then, for regression, I tabulate and/or plot statistical summaries (min, median, mean, stdv, max) of the normalized mean-square error (NMSE) or the R-squared statistic (R^2 = 1 - NMSE) vs H. For classification, both class-dependent and mixture classification error percentages tend to be the chosen measures of performance. For each value of H, 10-30 random weight initialization trials are used.
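The double loop described above (over candidate H and over random weight initializations) can be sketched as follows. This is only an illustrative outline, assuming regression data x (I x N) and t (O x N) already exist and that the candidate range `Hvals` and trial count `Ntrials` are your own choices:

```matlab
% Sketch: summarize R^2 = 1 - NMSE over hidden-layer sizes and random trials.
% x (I x N) inputs and t (O x N) targets are assumed to already exist.
Hvals   = 2:2:10;           % candidate numbers of hidden nodes (assumed range)
Ntrials = 10;               % random weight initializations per H
R2 = zeros(numel(Hvals), Ntrials);
MSE00 = mean(var(t', 1));   % reference MSE of the naive constant (mean) model
for i = 1:numel(Hvals)
    for j = 1:Ntrials
        net = fitnet(Hvals(i));             % feedforward regression net
        net.trainParam.showWindow = false;  % suppress the training GUI
        [net, tr] = train(net, x, t);
        y = net(x);
        NMSE = mean((t(:) - y(:)).^2) / MSE00;  % normalized mean-square error
        R2(i, j) = 1 - NMSE;                    % R^2 statistic
    end
end
% Tabulate min/median/mean/std/max of R^2 for each H
summary = [min(R2,[],2) median(R2,2) mean(R2,2) std(R2,0,2) max(R2,[],2)];
```

Each row of `summary` is the five-number summary for one value of H, which can be tabulated or plotted against `Hvals`.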

Of the 22 nnet plotting functions (doc nnet), the ones I find most useful are:

plotconfusion Plot classification confusion matrix

ploterrcorr Plot autocorrelation of error time series

ploterrhist Plot error histogram

plotfit Plot function fit

plotperform Plot network performance

plotregression Plot linear regression

plotroc Plot receiver operating characteristic

plottrainstate Plot training state values
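As a minimal sketch of how several of these plots are typically produced after training (assuming `x`, `t` are defined and the net was trained with `[net,tr] = train(net,x,t)`):

```matlab
% Sketch: typical post-training plots for a regression net.
y = net(x);             % network outputs
e = t - y;              % errors
plotperform(tr)         % train/validation/test MSE vs epoch
plotfit(net, x, t)      % fitted function vs targets
plotregression(t, y)    % linear regression of outputs on targets
ploterrhist(e)          % histogram of errors
```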

> (Edited) I have trained a neural network where, by default, MATLAB used 70% of the data for training, 20% for testing and 10% for validation.

MATLAB's default division is 70/15/15 (train/validation/test).
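If you want a different split, the division ratios can be set explicitly on the network object before training. A minimal sketch (the 70/10/20 ratios here are only illustrative):

```matlab
% Sketch: override the default 70/15/15 random data division.
net = fitnet(10);                  % example net; H = 10 is arbitrary
net.divideFcn = 'dividerand';      % random division (the default)
net.divideParam.trainRatio = 0.70;
net.divideParam.valRatio   = 0.10;
net.divideParam.testRatio  = 0.20;
```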

> The training data yields a 99.7% success rate which is optimistically biased because the estimate is highly dependent on the same data used to estimate the weights. Now, how do I present my results?

I present separate results for the training, validation and test data.

> Do I need to create a different data set to verify it?

Not necessarily. The above-mentioned summary statistics of the multiple random initial weight trials tend to yield confidence in the results. However, if necessary, more formal confidence intervals can be estimated by assuming MSE is Chi-square distributed and/or error rate is binomially distributed.
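For the binomial case, a minimal sketch of an approximate 95% confidence interval for a test-set error rate, using the normal approximation (the counts here are purely illustrative):

```matlab
% Sketch: approximate 95% binomial CI for a classification error rate.
Ntst = 200;                       % number of test cases (assumed)
Nerr = 6;                         % misclassified test cases (assumed)
p    = Nerr / Ntst;               % point estimate of the error rate
se   = sqrt(p * (1 - p) / Ntst);  % binomial standard error
ci   = p + 1.96 * [-1 1] * se;    % approximate 95% interval [lower upper]
```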

Another source of variation that can be explored is the train/val/test ratio because the actual sizes tend to be more important than the ratios. See below.

> If so, how big should the data set be?

You can find the theoretical expressions for standard errors and confidence intervals in Wikipedia or a stats handbook. Nontraining standard deviations decay like 1/sqrt(Nval) or 1/sqrt(Ntst). However, weight estimation errors decay like 1/Ntrn. Consequently, dividing a constant N requires tradeoffs.

I tend to feel more comfortable when the number of training equations for an I-H-O net, Neq = Ntrn*O, is much greater than the number of unknown weights, Nw = (I+1)*H+(H+1)*O. However, when that doesn't hold, validation-set stopping and/or regularization tend to prevent the dreaded phenomenon of overtraining an overfit (i.e., large H) net.
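The Neq-vs-Nw comparison above can be worked through numerically; rearranging Nw <= Neq gives an upper bound on H. A sketch with illustrative sizes:

```matlab
% Sketch: compare training equations Neq with unknown weights Nw for an
% I-H-O net (example sizes are illustrative, not from the original post).
I = 9; O = 1; Ntrn = 140;           % inputs, outputs, training cases
Neq = Ntrn * O;                     % number of training equations
H   = 10;                           % candidate hidden-layer size
Nw  = (I + 1) * H + (H + 1) * O;    % number of unknown weights
% Since Nw = H*(I + O + 1) + O, the largest H with Nw <= Neq is:
Hub = floor((Neq - O) / (I + O + 1));
```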

> Also, can I know which part of the data was used for training and which part for testing, so that I can include that in presenting results?

Check the structure tr in the training output:

[ net tr ] = train(net,x,t);
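The training record tr stores the indices of each split, so the test cases can be recovered directly. A minimal sketch:

```matlab
% Sketch: recover the train/validation/test split from the training record tr.
trainInd = tr.trainInd;   % indices of training cases
valInd   = tr.valInd;     % indices of validation cases
testInd  = tr.testInd;    % indices of test cases
xtst = x(:, testInd);     % test inputs (columns of x)
ttst = t(:, testInd);     % corresponding targets
```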

Hope this helps.

Greg
