
Thread Subject:
Using weights of a trained neural network

Subject: Using weights of a trained neural network

From: Vito

Date: 20 Mar, 2013 17:10:19

Message: 1 of 6

I trained a neural network using the MATLAB Neural Network Toolbox, in particular with the nprtool command. After that, I exported a structure (called 'net') containing the information about the generated network.

In this way, I created a working neural network that I can use as a classifier.
This network has 200 inputs, 20 neurons in the first hidden layer, and 2 neurons in the last layer that provide a two-dimensional output.

What I want to do is use the weights to obtain the same results that I get with the sim() function.

To do this, I tried the following code:

y1 = tansig(net.IW{1} * input + net.b{1});
Results = tansig(net.LW{2} * y1 + net.b{2});

Assuming that input is a one-dimensional array of 200 elements, the previous code would work if net.IW{1} were a 20x200 matrix (20 neurons, 200 weights each).

The problem is that size(net.IW{1}) returns unexpected values:

>> size(net.IW{1})

ans =

    20   199

I get the same problem with a network with 10000 inputs. In that case, the result wasn't 20x10000 but something like 20x9384 (I don't remember the exact value).

So, the question is: how can I make the above code work?

Subject: Using weights of a trained neural network

From: Greg Heath

Date: 21 Mar, 2013 06:28:06

Message: 2 of 6

"Vito" wrote in message <kicqhr$spo$1@newscl01ah.mathworks.com>...
> I trained a neural network using the MATLAB Neural Network Toolbox, and in particular using the command nprtool. After that, I exported a structure (called 'net') containing the informations about the NN generated.
>
> In this way, I created a working neural network, that I can use as classifier.
> This network has 200 inputs, 20 neurons in the first hidden layer, and 2 neurons in the last layer that provide a bidimensional output.
>
> What I want to do is to use the weights in order to obtain the same results that I get using sim() function.
>
> In order to solve this problem, I try to use the following code:
>
> y1 = tansig(net.IW{1} * input + net.b{1});
> Results = tansig(net.LW{2} * y1 + net.b{2});
>
> Assuming that input is a monodimensional array of 200 elements, the previous code would work if net.IW{1} is a 20x200 matrix (20 neurons, 200 weights).
>
> The problem is that I noticed that size(net.IW{1}) returns unexpected values:
>
> >> size(net.IW{1})
>
> ans =
>
> 20 199
>
> I got the same problem with a network with 10000 input. In this case, the result wasn't 20x10000, but something like 20x9384 (I don't remember the exact value).
>
> So, the question is: how can I make work the above code?

You need to find the dimension bugs. Post your code and we may be able to help.

As far as replicating the net function, don't forget that IW is acting on normalized inputs.
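For example, assuming the default nprtool preprocessing ({'removeconstantrows','mapminmax'} on the inputs — an assumption; check net.inputs{1}.processFcns on your net), a manual forward pass would look roughly like this sketch:

```matlab
% Apply the input processing steps that sim() applies internally.
% Note: 'removeconstantrows' can drop constant input rows, which would
% explain an IW{1,1} with fewer columns than raw inputs (e.g. 20x199).
xn = input;                                % raw 200x1 input column
fcns = net.inputs{1}.processFcns;          % e.g. {'removeconstantrows','mapminmax'}
sets = net.inputs{1}.processSettings;
for i = 1:numel(fcns)
    xn = feval(fcns{i}, 'apply', xn, sets{i});
end

% Layer computations on the processed input.
y1 = tansig(net.IW{1,1} * xn + net.b{1});
yn = feval(net.layers{2}.transferFcn, net.LW{2,1} * y1 + net.b{2});

% Reverse the output processing to match sim(net, input).
ofcns = net.outputs{2}.processFcns;
osets = net.outputs{2}.processSettings;
y = yn;
for i = numel(ofcns):-1:1
    y = feval(ofcns{i}, 'reverse', y, osets{i});
end
```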

Hope this helps.

Greg

Subject: Using weights of a trained neural network

From: Vito

Date: 21 Mar, 2013 11:17:18

Message: 3 of 6

"Greg Heath" <heath@alumni.brown.edu> wrote in message <kie99m$ega$1@newscl01ah.mathworks.com>...
> "Vito" wrote in message <kicqhr$spo$1@newscl01ah.mathworks.com>...
> > I trained a neural network using the MATLAB Neural Network Toolbox, and in particular using the command nprtool. After that, I exported a structure (called 'net') containing the informations about the NN generated.
> >
> > In this way, I created a working neural network, that I can use as classifier.
> > This network has 200 inputs, 20 neurons in the first hidden layer, and 2 neurons in the last layer that provide a bidimensional output.
> >
> > What I want to do is to use the weights in order to obtain the same results that I get using sim() function.
> >
> > In order to solve this problem, I try to use the following code:
> >
> > y1 = tansig(net.IW{1} * input + net.b{1});
> > Results = tansig(net.LW{2} * y1 + net.b{2});
> >
> > Assuming that input is a monodimensional array of 200 elements, the previous code would work if net.IW{1} is a 20x200 matrix (20 neurons, 200 weights).
> >
> > The problem is that I noticed that size(net.IW{1}) returns unexpected values:
> >
> > >> size(net.IW{1})
> >
> > ans =
> >
> > 20 199
> >
> > I got the same problem with a network with 10000 input. In this case, the result wasn't 20x10000, but something like 20x9384 (I don't remember the exact value).
> >
> > So, the question is: how can I make work the above code?
>
> You need to find the dimension bugs. Post your code and we may be able to help.
>
> As far as replicating the net function, don't forget that IW is acting on normalized inputs.
>
> Hope this helps.
>
> Greg


I've solved it by removing the preprocessing and postprocessing functions.

You can see the solution here:
http://stackoverflow.com/questions/15526112/porting-a-neural-network-trained-with-matlab-in-other-programming-languages/15537848#15537848
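For anyone following along, that approach amounts to clearing the processing functions before training, so the weights act on the raw data directly. A minimal sketch (assumes a two-layer network; untested):

```matlab
% Remove input/output processing so IW{1,1} is 20x200 and acts on raw inputs.
net.inputs{1}.processFcns  = {};
net.outputs{2}.processFcns = {};
net = train(net, x, t);                 % retrain after changing the processing

% The manual feedforward now matches sim(net, input):
y1      = tansig(net.IW{1,1} * input + net.b{1});
Results = tansig(net.LW{2,1} * y1 + net.b{2});
```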

Subject: Using weights of a trained neural network

From: Greg Heath

Date: 21 Mar, 2013 20:09:06

Message: 4 of 6

"Vito" wrote in message <kieq7u$svk$1@newscl01ah.mathworks.com>...
> [earlier quoted text snipped]
>
>
> I've solved removing the preprocessing and postprocessing functions.
>
> You can see the solution here:
> http://stackoverflow.com/questions/15526112/porting-a-neural-network-trained-with-matlab-in-other-programming-languages/15537848#15537848

It is always satisfying to find a solution to a nagging problem.

However, doesn't it nag you that you don't know exactly what caused the problem in the first place?

Greg

Subject: Using weights of a trained neural network

From: James

Date: 18 Dec, 2013 23:20:17

Message: 5 of 6

This problem is nagging me as well. I can only get this to work if I turn off the pre- and postprocessing entirely. I've tried adding mapminmax() to my own feedforward calculation, but I was not able to replicate MATLAB's calculations, and I would love to know how. I need the hidden node values, and the only way to get those is to compute them manually, i.e., there is no built-in method in the net object. And, as a sanity check, it would be nice if I could match the predictions with the processing functions turned on.
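For the hidden node values specifically, one possible sketch — the loop over processFcns/processSettings follows how the toolbox stores its preprocessing, but whether it matches your net's configuration is an assumption:

```matlab
% Compute hidden node values by applying the stored input processing first.
xn = x;
for i = 1:numel(net.inputs{1}.processFcns)
    xn = feval(net.inputs{1}.processFcns{i}, 'apply', xn, ...
               net.inputs{1}.processSettings{i});
end
hidden = tansig(net.IW{1,1} * xn + net.b{1});   % hidden layer activations
```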


"Greg Heath" <heath@alumni.brown.edu> wrote in message <kifpd2$gee$1@newscl01ah.mathworks.com>...
> [earlier quoted text snipped]
>
> It is always satisfying to find a solution to a nagging problem.
>
> However, doesn't it nag you that you don't know exactly what caused the problem in the first place?
>
> Greg

Subject: Using weights of a trained neural network

From: Greg Heath

Date: 19 Dec, 2013 02:07:06

Message: 6 of 6

"James" wrote in message <l8tajh$cnf$1@newscl01ah.mathworks.com>...
> This problem is nagging me as well. I can only get this to work if I turn off the pre and post processing entirely. I've tried adding mapminmax() to my own feedforward calculation, but I was not able to replicate MATLAB's calculations. I would love to know. I need the hidden node values, and the only way to get those is to do so manually--ie no built in method in the net object. And, as a sanity check, it would be nice if I could match predictions with the processing functions turned on.

Don't forget that targets are also normalized for training.

Therefore outputs have to be unnormalized in order to get the final answer.
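For instance, if mapminmax is the last output processing function (an assumption — check net.outputs{2}.processFcns on your net), something like:

```matlab
% yn: output of the manual feedforward (still on the normalized scale).
% Undo the target normalization that train() applied internally.
y = mapminmax('reverse', yn, net.outputs{2}.processSettings{end});
```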

Greg
