MATLAB Answers


Normalizing data for neural networks

Asked by John
on 10 Jan 2012
Latest activity Commented on by Greg Heath
on 10 Jul 2015


I've read that it is good practice to normalize data before training a neural network.

There are different ways of normalizing data.

Does the data have to be normalized between 0 and 1, or can it be done using standardization - which won't necessarily give you numbers between 0 and 1 and could give you negative numbers?

Many thanks




5 Answers

Answer by Chandra Kurniawan
on 10 Jan 2012
 Accepted answer


I've heard that the artificial neural network training data must be normalized before the training process.

I have code that can normalize your data into the specific range that you want.

p = [4 4 3 3 4;
     2 1 2 1 1;
     2 2 2 4 2];
a  = min(p(:));    % minimum over the whole matrix
b  = max(p(:));    % maximum over the whole matrix
ra = 0.9;          % upper bound of the target range
rb = 0.1;          % lower bound of the target range
pa = (ra-rb)*(p-a)/(b-a) + rb;

Let's say you want to normalize p into the range 0.1 to 0.9.

p is your data.

ra is 0.9 and rb is 0.1.

Then pa is your normalized data.
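Since the mapping is linear, it can be inverted to recover the original data; a minimal sketch, using the same variables as above:

```
% De-normalize: invert the linear map pa = (ra-rb)*(p-a)/(b-a) + rb
p_recovered = (pa - rb) * (b - a) / (ra - rb) + a;
```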


Greg Heath
on 11 Jan 2012

Demos in the comp.ai.neural-nets FAQ indicate that better precision is obtained when the input data is relatively balanced about 0 AND TANSIG (instead of LOGSIG) activation functions are used in hidden layers.

Hope this helps.


Kaushal Raval
on 4 Jul 2015

I want to get my output back in its original form. How can I find it? Please help me.

Greg Heath
on 10 Jul 2015

If you use the standard programs, e.g., FITNET, PATTERNNET, TIMEDELAYNET, NARNET & NARXNET, all of the normalization and de-normalization is done automatically (==> DONWORRIBOUTIT).

All you have to do is run the example programs in, e.g.,

 help fitnet
 doc fitnet

If you need additional sample data

 help nndatasets
 doc nndatasets

For more detailed examples search in the NEWSGROUP and ANSWERS. For example

 NEWSGROUP              2014-15     all-time
 tutorial                  58         2575
 tutorial neural           16          127
 tutorial neural greg      15           58

Hope this helps.


Answer by Greg Heath
on 11 Jan 2012

The best combination to use for an MLP (e.g., NEWFF) with one or more hidden layers is

1. TANSIG hidden layer activation functions

2. EITHER standardization (zero-mean/unit-variance: doc MAPSTD)

   OR [-1, 1] normalization ([min, max] => [-1, 1]: doc MAPMINMAX)

Convincing demonstrations are available in the FAQ.

For classification among c classes, using columns of the c-dimensional unit matrix eye(c) as targets guarantees that the outputs can be interpreted as valid approximations to input-conditional posterior probabilities. For that reason, the commonly used normalization to [0.1, 0.9] is not recommended.

WARNING: NEWFF automatically uses the MINMAX normalization as a default. Standardization must be explicitly specified.

Hope this helps.
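If the Neural Network Toolbox is available, both preprocessing choices are one-liners; a minimal sketch, assuming p is a matrix with one variable per row:

```
% [-1, 1] normalization, row by row, with a settings struct for reversal
[pn, ps] = mapminmax(p, -1, 1);

% zero-mean/unit-variance standardization, row by row
[pz, zs] = mapstd(p);

% de-normalize later with 'reverse'
p_back = mapminmax('reverse', pn, ps);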



on 11 Jan 2012

I don't have access to the Neural Network Toolbox anymore, but if I recall correctly you should be able to generate code from the nprtool GUI (last tab maybe?). You can use this code to do your work without the GUI, customize it as need be, and also learn from it to gain a deeper understanding.

What I think Greg is referring to above is the fact that the function "newff" (a quick function to initialize a network) uses the built-in normalization (see toolbox function mapminmax). If you want to change this, you'll have to make some custom changes. I don't recall if nprtool uses newff - this can be verified by generating and viewing the code.

This is all from memory as I don't have access to the toolbox anymore - so take my comments as general guidelines, not as absolutes.

Good luck.

on 12 Jan 2012

Thank you

Greg Heath
on 13 Jan 2012

Standardization means zero-mean/unit-variance.

My preferences:

1. TANSIG in hidden layers

2. Standardize reals and mixtures of reals and binary.

3. {-1,1} for binary and reals that have bounds imposed by math or physics.

Hope this helps.


Answer by Greg Heath
on 14 Jan 2012

In general, if you decide to standardize or normalize, each ROW is treated SEPARATELY.

If you do this, either use MAPSTD, MAPMINMAX, or the following:

[I, N] = size(p);

% Standardization: zero-mean/unit-variance, row by row
meanp = repmat(mean(p,2),1,N);
stdp  = repmat(std(p,0,2),1,N);
pstd  = (p-meanp)./stdp;

% Normalization to a target range [minpn, maxpn], row by row
minpn = -1;   % e.g. the preferred [-1, 1] target range
maxpn =  1;
minp  = repmat(min(p,[],2),1,N);
maxp  = repmat(max(p,[],2),1,N);
pn    = minpn + (maxpn-minpn).*(p-minp)./(maxp-minp);

Hope this helps
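A quick way to check the hand-rolled standardization above, assuming the Neural Network Toolbox is available, is to compare it against the toolbox function:

```
p = [4 4 3 3 4; 2 1 2 1 1; 2 2 2 4 2];
[I, N] = size(p);

% hand-rolled standardization, row by row
pstd = (p - repmat(mean(p,2),1,N)) ./ repmat(std(p,0,2),1,N);

% toolbox equivalent; the difference should be ~0
max(max(abs(pstd - mapstd(p))))
```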


  1 Comment

on 16 Jan 2012

Many thanks

Answer by Sarillee
on 25 Mar 2013


try this...

x is input....

y is the output...

  1 Comment

Greg Heath
on 10 May 2013

Not valid for matrix inputs

Answer by Imran Babar
on 8 May 2013

mu_input = mean(trainingInput);
std_input = std(trainingInput);
trainingInput = (trainingInput(:,:) - mu_input(:,1)) / std_input(:,1);

I hope this will serve your purpose


Greg Heath
on 10 May 2013

Not valid for matrix inputs

Abul Fujail
on 12 Dec 2013

In the case of matrix data, does the min and max value correspond to a column or to the whole dataset? E.g., I have 5 columns of input data; in this case, should I choose the min/max for each column and normalize per column, or take the min/max over the whole dataset and calculate from that?
