Why scale data to [-1,1]?

Andre on 15 Jan 2015
Answered: Greg Heath on 18 Jan 2015
What are the differences between normalizing features to [0,1], [-1,1], or [-5,5] with NN minmax?

Accepted Answer

Greg Heath on 18 Jan 2015
The purpose of normalization is to keep the inputs to the transfer functions as close to the middle of the so-called 'active region' as possible. For example, Warren Sarle posted experimental results in the comp.ai.neural-nets FAQ indicating that, in general, you can do no better than using bipolar inputs, outputs, and transfer functions.
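As an illustration, a minimal sketch of bipolar scaling with the toolbox function mapminmax; the 4-by-100 input matrix here is made-up data:

x = 10 * rand(4, 100);                 % 4 features, 100 samples, arbitrary scale
[xn, ps] = mapminmax(x, -1, 1);        % each row of xn now lies in [-1,1]
xnew  = 10 * rand(4, 10);
xnewn = mapminmax('apply', xnew, ps);  % reuse the same mapping on new data
xorig = mapminmax('reverse', xn, ps);  % invert to recover the original scale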
Nevertheless, in MATLAB it is easier to use unit-sum unipolar [0,1] coding for classification targets because of the functions vec2ind and ind2vec.
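For instance, a short sketch of that target coding; the class indices are made up:

classIndex = [1 3 2 3];                % class of each of 4 samples
targets = full(ind2vec(classIndex));   % columns are unit-sum {0,1} codes
% targets = [1 0 0 0
%            0 0 1 0
%            0 1 0 1]
recovered = vec2ind(targets);          % back to [1 3 2 3]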
My interpretation of 'better' is faster and/or more accurate. Obviously, that result is machine dependent, so, given what you know now, you can run your own speed and accuracy tests on your own machine.
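A rough sketch of such a comparison; it assumes the toolbox demo data simplefit_dataset and fitnet, and it disables the network's built-in input normalization so the chosen range actually reaches the layers:

[x, t] = simplefit_dataset;                 % toy regression data (toolbox demo)
ranges = {[0 1], [-1 1], [-5 5]};
for k = 1:numel(ranges)
    r  = ranges{k};
    xn = mapminmax(x, r(1), r(2));          % scale inputs to the k-th range
    net = fitnet(10);
    net.inputs{1}.processFcns = {};         % else fitnet re-maps to [-1,1] itself
    net.trainParam.showWindow = false;      % suppress the training GUI
    rng(0)                                  % same data split and initial weights
    tic, [net, tr] = train(net, xn, t); sec = toc;
    fprintf('[%g,%g]: %.2f s, best MSE = %.3g\n', r(1), r(2), sec, tr.best_perf)
end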
You have to take into account how the weights are being initialized. That means understanding the functions init, initwb and initnw.
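A short sketch of how to inspect and rerun the initialization, assuming fitnet; the data here is a placeholder used only to size the weights:

x = rand(4, 100);  t = rand(1, 100);   % placeholder data, just to size the net
net = fitnet(10);
net = configure(net, x, t);            % allocate weights from the data sizes
disp(net.initFcn)                      % 'initlay': initialize layer by layer
disp(net.layers{1}.initFcn)            % 'initnw': Nguyen-Widrow
% net.layers{1}.initFcn = 'initwb';    % alternative: use per-weight initFcns
net = init(net);                       % redo the initialization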
However, before you start, see my post "Nonsaturating Initial Weights" in comp.ai.neural-nets.
Hope this helps.
Thank you for formally accepting my answer.
Greg

