Why is the result of using NNTOOL different from using TANSIG transfer function in Neural Network Toolbox 6.0.5 (R2010b)?

I used NNTOOL to create a feed-forward backpropagation network. The output from simulating the network differs from the output I compute manually with the TANSIG transfer function. I used the following code to compare the two:
r = sim(network1, P(1))
lay1 = tansig(network1.IW{1}*P(1)+network1.b{1});
r1 = tansig(network1.LW{2}*lay1+network1.b{2})
Both layers have TANSIG transfer functions.

Accepted Answer

MathWorks Support Team on 17 Jan 2011
The network simulation code is correct. The outputs differ because the normalization step is missing before the TANSIG function is applied: the network input must be normalized to the range [-1, 1] before it is propagated through the layers, and the network output must then be scaled back to the original range. This can be done as follows:
load net.mat % load network object called network1
r = sim(network1, P(1))
% Get input and output processing values from network
xmin = network1.inputs{1}.processSettings{3}.xmin;
xmax = network1.inputs{1}.processSettings{3}.xmax;
xrange = xmax-xmin;
ymin = network1.outputs{2}.processSettings{2}.xmin;
ymax = network1.outputs{2}.processSettings{2}.xmax;
yrange = ymax-ymin;
P2 = (P(1)-xmin) * (2/xrange) - 1;
lay1 = tansig(network1.IW{1}*P2+network1.b{1}); % use the normalized input P2
r1 = tansig(network1.LW{2}*lay1+network1.b{2});
r2 = (r1+1) * (yrange/2) + ymin
This code gives the same answer for the NNTOOL output (i.e., r) and the output computed manually with the TANSIG transfer function (i.e., r2).
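Equivalently, the manual scaling can be replaced with calls to MAPMINMAX, which is the processing function these processSettings structures belong to. A sketch, assuming network1 uses MAPMINMAX for both input and output processing (the processSettings indices below match the code above and may differ for other networks):

```matlab
% Sketch: same computation using MAPMINMAX 'apply'/'reverse'
% instead of scaling by hand.
inPS  = network1.inputs{1}.processSettings{3};   % input mapminmax settings
outPS = network1.outputs{2}.processSettings{2};  % output mapminmax settings

Pn = mapminmax('apply', P(1), inPS);             % normalize input to [-1 1]
a1 = tansig(network1.IW{1}*Pn + network1.b{1});  % layer 1
a2 = tansig(network1.LW{2}*a1 + network1.b{2});  % layer 2
r2 = mapminmax('reverse', a2, outPS)             % scale back to original range
```

This avoids hard-coding the scaling formula and stays correct if the stored xmin/xmax values change when the network is retrained.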
