I've read that it is good practice to normalize data before training a neural network.
There are different ways of normalizing data.
Does the data have to be normalized between 0 and 1? Or can it be done using a standardize function, which won't necessarily give you numbers between 0 and 1 and could give you negative numbers?
I've heard that artificial neural network training data must be normalized before training.
I have some code that can normalize your data into any specific range you want.
p = [4 4 3 3 4; 2 1 2 1 1; 2 2 2 4 2];   % raw data
a = min(p(:)); b = max(p(:));            % global min and max of p
ra = 0.9; rb = 0.1;                      % target range [rb, ra]
pa = (ra - rb) * (p - a) / (b - a) + rb; % rescaled data in [0.1, 0.9]
Let's say you want to normalize p into the range 0.1 to 0.9.
p is your data, ra is the upper bound (0.9), and rb is the lower bound (0.1).
Your normalized data is then pa.
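If you have the Neural Network Toolbox, MAPMINMAX can do something similar. A sketch (note that MAPMINMAX rescales each ROW separately, whereas the code above uses the global min and max of p):
[pa2, ps] = mapminmax(p, 0.1, 0.9);      % normalize each row of p into [0.1, 0.9]
p_back = mapminmax('reverse', pa2, ps);  % invert the mapping to recover p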
The best combination to use for an MLP (e.g., NEWFF) with one or more hidden layers is
1. TANSIG hidden layer activation functions
2. EITHER standardization (zero mean/unit variance: doc MAPSTD)
   OR [-1, 1] normalization ([min, max] => [-1, 1]: doc MAPMINMAX)
Convincing demonstrations are available in the comp.ai.neural-nets FAQ.
For classification among c classes, using the columns of the c-dimensional identity matrix eye(c) as targets guarantees that the outputs can be interpreted as valid approximations to the input-conditional posterior probabilities. For that reason, the commonly used normalization to [0.1, 0.9] is not recommended.
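For example, a minimal sketch (the class count c = 3 and the label vector here are made up for illustration):
c = 3;                   % number of classes
labels = [1 3 2 2 1];    % hypothetical class label for each training case
targets = eye(c);
t = targets(:, labels);  % each column is a 1-of-c (one-hot) target vector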
WARNING: NEWFF automatically uses the MAPMINMAX normalization as a default. Standardization must be explicitly specified.
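A minimal sketch of explicitly selecting standardization (assuming the NEWFF(P,T,S) calling form and the toolbox's processFcns property; the hidden layer size of 10 is arbitrary):
net = newff(p, t, 10);                                        % MLP with 10 TANSIG hidden neurons
net.inputs{1}.processFcns = {'removeconstantrows','mapstd'};  % standardize inputs instead of the MAPMINMAX default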
Hope this helps.
In general, if you decide to standardize or normalize, each ROW is treated SEPARATELY.
If you do this, either use MAPSTD, MAPMINMAX, or the following:
[I, N] = size(p);

% Row-wise standardization (zero mean, unit variance), like MAPSTD:
meanp = repmat(mean(p,2),1,N);
stdp = repmat(std(p,0,2),1,N);
pstd = (p - meanp)./stdp;

% Row-wise min-max normalization to [minpn, maxpn], like MAPMINMAX:
minpn = -1; maxpn = 1;           % target range
minp = repmat(min(p,[],2),1,N);
maxp = repmat(max(p,[],2),1,N);
pn = minpn + (maxpn - minpn).*(p - minp)./(maxp - minp);
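A quick sanity check against the toolbox functions, assuming they are available (both operate row-wise by default, and assuming p has no constant rows, which would divide by zero in the manual version):
pstd_tb = mapstd(p);            % toolbox standardization (zero mean, unit variance per row)
pn_tb = mapminmax(p);           % toolbox mapping of each row to [-1, 1]
max(abs(pstd(:) - pstd_tb(:)))  % should be ~0 up to rounding
max(abs(pn(:) - pn_tb(:)))      % should be ~0 up to rounding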
Hope this helps.