Neural Network Normalization process

Hello all,
I have a question regarding the NN normalization procedure. When a NN is trained with the train(net,x,y) command, the function normalizes x and y (by default with mapminmax) so that the inputs and targets are mapped into the range [-1,1].
Currently I am trying to apply a set of NN weights and biases analytically (instead of just calling net(xtest)) using the following equation:
ytest = Outputbias+Hiddenweight*tanh(Inputbias + Inputweight*xtest);
which produces an output -- however, it does not produce the same output as net(xtest).
I'm assuming the difference is because xtest is not normalized before being used in the above equation.
I tried simply dividing xtest by its maximum before feeding it into the equation, but the results still differ.
Does anyone know how xtest should be manipulated in order to produce the same output as net(xtest)?
Thanks! Bryan
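For context: by default, train applies mapminmax to both the inputs and the targets, mapping each row into [-1,1], so the analytical pass has to normalize xtest with the same settings used at training time and then de-normalize the network output. A minimal sketch, assuming the default mapminmax processing and reusing the weight/bias names from the equation above (check net.inputs{1}.processFcns and net.outputs{end}.processFcns to confirm what your network actually uses):

```matlab
% Recover the normalization settings from the original training data.
[~, xs] = mapminmax(x);                 % input-side settings
[~, ys] = mapminmax(y);                 % target-side settings

% Normalize the test input the same way train did.
xn = mapminmax('apply', xtest, xs);

% Analytical forward pass in normalized coordinates.
yn = Outputbias + Hiddenweight*tanh(Inputbias + Inputweight*xn);

% Map the output back to the original target units.
ytest = mapminmax('reverse', yn, ys);
```

Note that mapminmax applies xn = 2*(x - xmin)./(xmax - xmin) - 1 row-wise, so dividing xtest by its maximum alone is not equivalent, and the reverse mapping on the output is also required.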

Accepted Answer

Greg Heath on 8 Oct 2015
Hope this helps.
Thank you for formally accepting my answer
Greg
