
Thread Subject:
Neural Network: Prediction equation of the trained network

Subject: Neural Network: Prediction equation of the trained network

From: Siva

Date: 1 Apr, 2012 01:09:43

Message: 1 of 20

I am trying to determine the final equation for the trained network. My network is a plain vanilla feedforward network with a single hidden layer. The hidden layer has a logsig squashing function and the output layer has a pure linear transfer function.

I would appreciate help in correcting the code in Block 2 where I am attempting to re-create the calculations that the toolbox performs to generate the network predictions.

%% BLOCK 1: Network training
x= rand( 5, 10) ; y= rand( 1, 10) ; % random training set (for explanation purposes only)
net= feedforward( 2) ; % Specify a feedforward 2-hidden node network
net= configure( net, x, y) ; % Configure the network
net= train( net, x, y) ; % Train the network
yfit= net( x) ; % Prediction from the trained network

%% BLOCK 2: My efforts to calculate the output from first principles
squash_input= net.IW{ 1}*x + net.b{ 1}*ones( 1, size( x, 2)) ; % input to the squashing function
squash_output= 1./( 1 + exp( -squash_input)) ; % output from the squashing function
yhat= net.LW{ 2, 1}*( squash_output) + net.b{ 2} ; % network output

'yhat' does not produce the same numbers as 'yfit'.

Subject: Neural Network: Prediction equation of the trained network

From: Steven_Lord

Date: 2 Apr, 2012 01:18:26

Message: 2 of 20



"Siva " <sivaathome@gmail.com> wrote in message
news:jl89sn$nfb$1@newscl01ah.mathworks.com...
> I am trying to determine the final equation for the trained network. My
> network is a plain vanilla feedforward network with a single hidden layer.
> The hidden layer has a logsig squashing function and the output layer has
> a pure linear transfer function.
>
> I would appreciate help in correcting the code in Block 2 where I am
> attempting to re-create the calculations that the toolbox performs to
> generate the network predictions.

*snip*

Don't forget pre- and post-processing.

http://www.mathworks.com/help/toolbox/nnet/ug/bss324a-1.html#bss324a-5
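
In code, replicating both ends of the default processing might look like the following sketch (this assumes the default {'removeconstantrows','mapminmax'} processing with no constant rows, and the logsig hidden layer described above; the processSettings indices are assumptions to verify against your own net):

% Sketch: forward pass including the default mapminmax pre/post-processing
xs = mapminmax('apply', x, net.inputs{1}.processSettings{2});  % pre-process inputs
h  = logsig(net.IW{1,1}*xs + net.b{1}*ones(1, size(xs,2)));    % hidden layer
yn = net.LW{2,1}*h + net.b{2};                                 % linear output layer
yhat = mapminmax('reverse', yn, net.outputs{2}.processSettings{2}); % post-process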

--
Steve Lord
slord@mathworks.com
To contact Technical Support use the Contact Us link on
http://www.mathworks.com

Subject: Neural Network: Prediction equation of the trained network

From: Greg Heath

Date: 1 Apr, 2012 16:53:43

Message: 3 of 20

On Mar 31, 9:09 pm, "Siva " <sivaath...@gmail.com> wrote:
> I am trying to determine the final equation for the trained network.
> My network is a plain vanilla feedforward network with a single
> hidden layer. The hidden layer has a logsig squashing function and
> the output layer has a pure linear transfer function.
>
> I would appreciate help in correcting the code in Block 2 where I am
> attempting to re-create the calculations that the toolbox performs to
> generate the network predictions.
>
> %% BLOCK 1: Network training
> x= rand( 5, 10) ; y= rand( 1, 10) ; % random training set (for explanation purposes only)

Bad idea!

Either use y = f(x) for your favorite f, or better yet, use the data
from the feedforwardnet demo (see below).

> net= feedforward( 2) ; % Specify a feedforward 2-hidden node network

Incorrect: You mean "feedforwardnet"

It is also a bad idea to post code which doesn't run when cut and
pasted into the command line.

> net= configure( net, x, y) ; % Configure the network
> net= train( net, x, y) ; % Train the network
> yfit= net( x) ; % Prediction from the trained network
>
> %% BLOCK 2: My efforts to calculate the output from first principles
> squash_input= net.IW{ 1}*x + net.b{ 1}*ones( 1, size( x, 2)) ; % input to the squashing function

Incorrect: x must be normalized to [-1,1]

> squash_output= 1./( 1 + exp( -squash_input)) ; % output from the squashing function

Incorrect: the hidden layer activation function is tansig

> yhat= net.LW{ 2, 1}*( squash_output) + net.b{ 2} ; % network output

Incorrect: yhat must be unnormalized from [-1,1]

>
> 'yhat' does not produce the same numbers as 'yfit'.


close all, clear all, clc
help feedforwardnet

[ x, t ] = simplefit_dataset;
[ I N ] = size(x)
[ O N ] = size(t)
whos
figure
hold on
plot(x,t,'o','LineWidth',2)
drawnow

H = 10
net = feedforwardnet(H);
net = train(net,x,t);
% view(net)
y = net(x);                        % toolbox prediction
plot(x,y,'r','LineWidth',2)
drawnow
% perf = perform(net,t,y)

% Weights and biases of the trained net
IW = net.IW{1,1}
LW = net.LW{2,1}
b1 = net.b{1}
b2 = net.b{2}

% Replicate the default mapminmax input normalization to [-1,1]
xmin = min(x)
xmax = max(x)
xn = -1 + 2*(x-xmin)/(xmax-xmin);

% Hidden layer (tansig) and linear output layer
h = tansig(IW*xn + b1*ones(1,N));
yn = LW*h + b2;

% Unnormalize the output back to the target range
tmin = min(t)
tmax = max(t)
yhat = tmin + (tmax-tmin)*(yn+1)/2;
plot(x,yhat,'g','LineWidth',2)
mse(y-yhat)

Hope this helps.

Greg

Subject: Neural Network: Prediction equation of the trained network

From: Siva

Date: 3 Apr, 2012 02:37:12

Message: 4 of 20

"Steven_Lord" <slord@mathworks.com> wrote in message <jlaup1$351$1@newscl01ah.mathworks.com>...
>
>
> "Siva " <sivaathome@gmail.com> wrote in message
> news:jl89sn$nfb$1@newscl01ah.mathworks.com...
> > I am trying to determine the final equation for the trained network. My
> > network is a plain vanilla feedforward network with a single hidden layer.
> > The hidden layer has a logsig squashing function and the output layer has
> > a pure linear transfer function.
> >
> > I would appreciate help in correcting the code in Block 2 where I am
> > attempting to re-create the calculations that the toolbox performs to
> > generate the network predictions.
>
> *snip*
>
> Don't forget pre- and post-processing.
>
> http://www.mathworks.com/help/toolbox/nnet/ug/bss324a-1.html#bss324a-5
>
> --
> Steve Lord
> slord@mathworks.com
> To contact Technical Support use the Contact Us link on
> http://www.mathworks.com

Thanks Steven!

Subject: Neural Network: Prediction equation of the trained network

From: Siva

Date: 3 Apr, 2012 02:42:13

Message: 5 of 20

Greg Heath <g.heath@verizon.net> wrote in message <8f11b904-5362-4268-bd20-5cafa8b8aabe@v22g2000vby.googlegroups.com>...

*snip*

> Hope this helps.
>
> Greg
Thanks for the detail Greg.

Subject: Neural Network: Prediction equation of the trained network

From: Siva

Date: 5 Apr, 2012 18:13:16

Message: 6 of 20

"Siva " <sivaathome@gmail.com> wrote in message <jldnoo$de0$1@newscl01ah.mathworks.com>...
> "Steven_Lord" <slord@mathworks.com> wrote in message <jlaup1$351$1@newscl01ah.mathworks.com>...
> >
> >
> > "Siva " <sivaathome@gmail.com> wrote in message
> > news:jl89sn$nfb$1@newscl01ah.mathworks.com...
> > > I am trying to determine the final equation for the trained network. My
> > > network is a plain vanilla feedforward network with a single hidden layer.
> > > The hidden layer has a logsig squashing function and the output layer has
> > > a pure linear transfer function.
> > >
> > > I would appreciate help in correcting the code in Block 2 where I am
> > > attempting to re-create the calculations that the toolbox performs to
> > > generate the network predictions.
> >
> > *snip*
> >
> > Don't forget pre- and post-processing.
> >
> > http://www.mathworks.com/help/toolbox/nnet/ug/bss324a-1.html#bss324a-5
> >
> > --
> > Steve Lord
> > slord@mathworks.com
> > To contact Technical Support use the Contact Us link on
> > http://www.mathworks.com
>
> Thanks Steven!

Steven -
A couple of follow-up questions:
1. How would I go about including regularization in the network training? i.e. I would like to include a term which is the sum of squares of the weights.
2. I am interested in creating custom hidden layer functions. What conditions should such a function satisfy? I assume it has to be continuously differentiable. Does it need to meet any other criteria?
3. On Item 2, how do I go about creating this function and making it available for incorporation into my neural networks?
4. I would like to evaluate trained neural networks to determine what, if any, input-to-hidden-layer weights I can eliminate. Is there a way to do this? And how do I go about it?
5. As a result of Item 4, I expect I will identify certain input-to-hidden-node links that are insignificant. How can I constrain the weight between input "I" and hidden node "J" to 0?
Thanks.
Siva
I am using neural networks for variable selection with nonlinear models.

Subject: Neural Network: Prediction equation of the trained network

From: Siva

Date: 5 Apr, 2012 18:34:16

Message: 7 of 20

"Siva " <sivaathome@gmail.com> wrote in message <jldnoo$de0$1@newscl01ah.mathworks.com>...
> "Steven_Lord" <slord@mathworks.com> wrote in message <jlaup1$351$1@newscl01ah.mathworks.com>...
> >
> >
> > "Siva " <sivaathome@gmail.com> wrote in message
> > news:jl89sn$nfb$1@newscl01ah.mathworks.com...
> > > I am trying to determine the final equation for the trained network. My
> > > network is a plain vanilla feedforward network with a single hidden layer.
> > > The hidden layer has a logsig squashing function and the output layer has
> > > a pure linear transfer function.
> > >
> > > I would appreciate help in correcting the code in Block 2 where I am
> > > attempting to re-create the calculations that the toolbox performs to
> > > generate the network predictions.
> >
> > *snip*
> >
> > Don't forget pre- and post-processing.
> >
> > http://www.mathworks.com/help/toolbox/nnet/ug/bss324a-1.html#bss324a-5
> >
> > --
> > Steve Lord
> > slord@mathworks.com
> > To contact Technical Support use the Contact Us link on
> > http://www.mathworks.com
>
> Thanks Steven!

Steven:
I am now trying to implement the input pre-processing.
I took a look at the trained ANN object and identified "net.inputs{1}.range" as the min and max of my input dataset, determined during the automatic preprocessing of my data. I assume this range is mapped to [-1, 1]. So here are my questions -
1. Is it possible for me to map it to a different range, for example [-0.9, 0.9]?
2. Is it possible to suppress the preprocessing and adopt the ranges I want?
If so, how do I go about doing them?
Thanks.
Siva

Subject: Neural Network: Prediction equation of the trained network

From: Steven_Lord

Date: 6 Apr, 2012 13:48:15

Message: 8 of 20



"Siva " <sivaathome@gmail.com> wrote in message
news:jlknbs$gr7$1@newscl01ah.mathworks.com...

*snip*


I don't know the answers to some of your questions. I believe some of
them will require custom networks or custom functions, both of which
are discussed briefly in the Advanced Topics section of the Neural
Network Toolbox documentation:

http://www.mathworks.com/help/toolbox/nnet/ug/bss4gtb.html

If that doesn't provide you with enough information, please contact
Technical Support with further clarifying questions.

--
Steve Lord
slord@mathworks.com
To contact Technical Support use the Contact Us link on
http://www.mathworks.com

Subject: Neural Network: Prediction equation of the trained network

From: Steven_Lord

Date: 6 Apr, 2012 13:50:22

Message: 9 of 20



"Siva " <sivaathome@gmail.com> wrote in message
news:jlkoj8$lo6$1@newscl01ah.mathworks.com...

*snip*

> 1. Is it possible for me to map it to a different range, for example [-0.9, 0.9]?
> 2. Is it possible to suppress the preprocessing and adopt the ranges I want? If so, how do I go about doing them?

Take a look at the MAPMINMAX function.

http://www.mathworks.com/help/toolbox/nnet/ref/mapminmax.html

--
Steve Lord
slord@mathworks.com
To contact Technical Support use the Contact Us link on
http://www.mathworks.com

Subject: Neural Network: Prediction equation of the trained network

From: Greg Heath

Date: 6 Apr, 2012 18:18:28

Message: 10 of 20

On Apr 5, 2:13 pm, "Siva " <sivaath...@gmail.com> wrote:

*snip*
> Steven -
> Couple of follow-up questions:
> 1. How would I go about including regularization in the network training? i.e. I would like to include a term which is the sum of squares of the weights.

Use TRAINBR instead of TRAINLM

help trainbr
doc trainbr
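
A sketch of both routes (treating 'regularization' as an mse performance parameter is an assumption; the exact property name can vary across toolbox versions):

net = feedforwardnet(10);
net.trainFcn = 'trainbr';  % Bayesian regularization: the penalty weight on
                           % the squared weights is set automatically
% Alternatively, keep trainlm and weight the penalty by hand:
% net.performParam.regularization = 0.1;  % 0 = pure MSE, 1 = pure sum of squared weights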

> 2. I am interested in creating custom hidden layer functions. What conditions should such a function satisfy? I assume it has to be continuously differentiable. Does it need to meet any other criteria?

It should be bounded. However, you will not get an error message if it
is not (e.g., PURELIN is unbounded).

> 3. On Item 2, how do I go about creating this function and making it available for incorporation into my neural networks?

Creating an m-file and saving it on the path is sufficient.

doc tansig
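
For the function body itself, a minimal sketch of a bounded, continuously differentiable map (the name softsig is hypothetical; to actually train with it, the toolbox also needs the derivative and output-range information in the template that doc tansig points to, which varies by toolbox version):

function a = softsig(n, varargin)
%SOFTSIG Hypothetical bounded transfer function with outputs in (-1,1).
%   Continuously differentiable: da/dn = 1./(1 + abs(n)).^2
a = n ./ (1 + abs(n));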

> 4. I would like to evaluate trained neural networks to determine what, if any, input-to-hidden-layer weights I can eliminate. Is there a way to do this? And how do I go about it?

Variations of the following (depending on how much time you want to
take):

A. Standardize input variables (help mapstd, doc mapstd) and specify a
maximum allowable MSE.
B. Rank each variable w.r.t. performance when ONLY that row is either
     1. Replaced by its average value
or
     2. Replaced by its random shuffle (help randperm, doc randperm)
C. Identify the variable which results in the least increase in MSE. If
    that MSE is greater than the allowed maximum, STOP. Otherwise either
    1. Permanently replace that variable with its average
or
    2. Permanently replace that variable with its random shuffle

D. Repeat B and C either
     1. Without retraining
or
     2. With retraining

It has been recommended to repeat this whole process 10-20 times.
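
A sketch of one shuffle-based ranking pass (option B.2 with D.1, i.e. no retraining), assuming a trained net, inputs x (I-by-N), and targets t:

[I, N] = size(x);
mse0 = mse(t - net(x));                 % baseline error of the trained net
dmse = zeros(I, 1);
for i = 1:I
    xs = x;
    xs(i,:) = x(i, randperm(N));        % shuffle ONLY row i
    dmse(i) = mse(t - net(xs)) - mse0;  % increase in MSE without variable i
end
[~, order] = sort(dmse)                 % smallest increase = least important variable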

> 5. As a result of Item 4, I expect I will identify certain input-to-hidden-node links that are insignificant. How can I constrain the weight between input "I" and hidden node "J" to 0?

Unnecessary. When the above ranking is complete, a new net with a
reduced number of inputs will be trained.

Hope this helps.

Greg

PS I have also posted an example of using STEPWISEFIT on linear models
of n variables, n*(n-1)/2 cross products, and n squared variables.
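
A sketch of that setup (STEPWISEFIT is in the Statistics Toolbox; here X is N-by-n with observations in rows, and y is N-by-1):

Z = X;
for i = 1:n
    for j = i:n
        Z = [Z, X(:,i).*X(:,j)];  % n squares (i==j) plus n*(n-1)/2 cross products
    end
end
[b, se, pval, inmodel] = stepwisefit(Z, y);  % inmodel flags the surviving terms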

Subject: Neural Network: Prediction equation of the trained network

From: Siva

Date: 7 Apr, 2012 03:21:21

Message: 11 of 20

"Steven_Lord" <slord@mathworks.com> wrote in message <jlmsat$j64$1@newscl01ah.mathworks.com>...
>
>
> "Siva " <sivaathome@gmail.com> wrote in message
> news:jlkoj8$lo6$1@newscl01ah.mathworks.com...
>
> *snip*
>
> > Steven:
> > I am now trying to implement the input pre-processing. I took a look at
> > the trained ANN object and identified "net.inputs{1}.range" as the min and
> > max of my input dataset identified during the automatic preprocessing of
> > my data. I assume this range is mapped to [-1, 1]. So here are my
> > questions - 1. Is it possible for me to map it to a different range, for
> > example [-0.9, 0.9]?
> > 2. Is it possible to suppress the preprocessing and adopt the ranges I
> > want? If so, how do I go about doing them?
>
> Take a look at the MAPMINMAX function.
>
> http://www.mathworks.com/help/toolbox/nnet/ref/mapminmax.html
>
> --
> Steve Lord
> slord@mathworks.com
> To contact Technical Support use the Contact Us link on
> http://www.mathworks.com

I should be able to address my needs by choosing appropriate values for YMIN and YMAX in my MAPMINMAX call.
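
For example, a sketch of the manual call:

[xn, ps] = mapminmax(x, -0.9, 0.9);  % map each row of x to [-0.9, 0.9]
% later: xnew_n = mapminmax('apply', xnew, ps);  % same mapping for new data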

Thanks Steven.

Subject: Neural Network: Prediction equation of the trained network

From: Siva

Date: 7 Apr, 2012 04:13:21

Message: 12 of 20

Greg Heath <g.heath@verizon.net> wrote in message <cdb8088e-9c7f-412f-8b2a-183b3ca96d32@l4g2000vbt.googlegroups.com>...

*snip*

Greg –

Thanks for taking the time to give an itemized reply to my questions. I have some follow-up questions.

To Q4 …
* Do I have to standardize or can I stay with mapping the input ranges to [-1, 1]?
* A question related to standardizing variables or mapping input ranges to [-1, 1] … how do I handle the influence of outliers on these mappings?
* Is random shuffle the process of randomizing the values in the row corresponding to the variable of interest? It makes sense that if a variable has a small influence on the model, the MSE should not change much when the values for the variable are randomized or are held constant. Do I have this right?
* The idea of STEPWISEFIT is nice. In implementing the idea I assume that a variable is significant if it occurs in any of the ‘surviving’ terms. Do I have that right?

Thanks.
Siva

Subject: Neural Network: Prediction equation of the trained network

From: Greg Heath

Date: 8 Apr, 2012 07:13:12

Message: 13 of 20

On Apr 6, 2:18 pm, Greg Heath <g.he...@verizon.net> wrote:
> On Apr 5, 2:13 pm, "Siva " <sivaath...@gmail.com> wrote:
-----SNIP

> B. Rank each variable w.r.t. performance when ONLY that row is either
>      1. Replaced by its average value
>      2. Replaced by its random shuffle (help randperm, doc randperm)


NOTICE:

IF YOU USE THE OPTION TO REPLACE A ROW WITH ITS AVERAGE, THE DEFAULT
INPUT PROCESSING FUNCTION 'REMOVECONSTANTROWS' WILL CAUSE THAT
VARIABLE TO BE IGNORED!

net.inputs{1}.processFcns
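
A sketch of checking for this, and of dropping the step if the averaged rows must survive (assigning the whole cell array, before the net is configured):

net.inputs{1}.processFcns                    % default: {'removeconstantrows','mapminmax'}
% net.inputs{1}.processFcns = {'mapminmax'}; % sketch: drop removeconstantrows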

> PS I have also posted an example of using STEPWISEFIT on linear models
> of n variables, n*(n-1)/2 cross products, and n squared variables.

Note: As n increases, the increase from n to 0.5*n^2 + 1.5*n variables
can quickly become unwieldy.


Hope this helps.

Greg

Subject: Neural Network: Prediction equation of the trained network

From: Greg Heath

Date: 8 Apr, 2012 07:37:19

Message: 14 of 20

On Apr 6, 11:21 pm, "Siva " <sivaath...@gmail.com> wrote:

*snip*

> I should be able to address my needs by choosing appropriate values for YMIN and YMAX in my MAPMINMAX call.

Again, using ymin and ymax instead of [-1 1] is a waste of time and,
if chosen unwisely, can degrade training.

See my post on Nonsaturating Initial Weights.

Hope this helps.

Greg

Subject: Neural Network: Prediction equation of the trained network

From: Greg Heath

Date: 8 Apr, 2012 07:31:41

Message: 15 of 20

On Apr 7, 12:13 am, "Siva " <sivaath...@gmail.com> wrote:
> Greg Heath <g.he...@verizon.net> wrote in message <cdb8088e-9c7f-412f-8b2a-183b3ca96...@l4g2000vbt.googlegroups.com>...

-----SNIP

> Thanks for taking the time to give an itemized reply to my questions. I have some follow-up questions.
>
> To Q4 …
> * Do I have to standardize or can I stay with mapping the input ranges to [-1, 1]?

You can use anything that works. Remember Confucius:

 "Try all; Choose best."

> * A question related to standardizing variables or mapping input ranges to [-1, 1] … how do I handle the influence of outliers on these mappings?

That is one of the reasons I use mapstd BEFORE I create the net.
Outliers are easy to spot and, depending on the cause, can be removed,
modified, or kept.

> * Is random shuffle the process of randomizing the values in the row corresponding to the variable of interest? It makes sense that if a variable has a small influence on the model, the MSE should not change much when the values for the variable are randomized or are held constant. Do I have this right?

Yes.

> * The idea of STEPWISEFIT is nice. In implementing the idea I assume that a variable is significant if it occurs in any of the ‘surviving’ terms. Do I have that right?
>
Yes. However, it quickly becomes unwieldy as n increases.

Hope this helps.

Greg

Subject: Neural Network: Prediction equation of the trained network

From: Greg Heath

Date: 8 Apr, 2012 07:47:36

Message: 16 of 20

On Apr 5, 2:34 pm, "Siva " <sivaath...@gmail.com> wrote:
> "Siva " <sivaath...@gmail.com> wrote in message <jldnoo$de...@newscl01ah.mathworks.com>...
>
> Steven:
> I am now trying to implement the input pre-processing.
> I took a look at the trained ANN object and identified "net.inputs{1}.range" as the min and max of my input dataset, determined during the automatic preprocessing of my data. I assume this range is mapped to [-1, 1]. So here are my questions -
>
> 1. Is it possible for me to map it to a different range, for example [-0.9, 0.9]?

Yes. However, there is no good reason for just changing the range
from [-1 1].

Reasonable alternatives:

1. No normalization

2. I prefer to use standardization (help/doc MAPSTD) for statistical
(means/variances/correlations/outliers) reasons.

3. PCA normalization (help/doc PROCESSPCA) is useful in regression
for reducing the number of input variables.

4. PLS normalization (help/doc PLSREGRESS) is useful in
classification for reducing the number of input variables.

I cannot think of a good reason for using any other alternative.

> 2. Is it possible to suppress the preprocessing and adopt the ranges I want?

Yes, but why in the world would you just want to change ranges??

> If so, how do I go about doing them?

Great Question!

% Create a regression net:
clear all, clc
net = fitnet % For clarity: No semicolons

% Search for the appropriate input processing net function

net.inputs
net.inputs{1}
net.inputs.processFcns
net.inputs.processFcns{1,1}
net.inputs.processFcns{1,2}

% Therefore, I first assumed one of the following overwrites is
% appropriate (Note that PLS is not a current option)

net.inputs.processFcns{1,2} = { [] }
net.inputs.processFcns{1,2} = { ' ' }
net.inputs.processFcns{1,2} = { 'mapstd' }
net.inputs.processFcns{1,2} = { 'processpca' }

% However, NONE of these work.
%
% Since the Custom Networks section in the documentation for a two-input
% net explicitly uses the input index, I also tried the following:

net.inputs{1}.processFcns{1,2} = { [] }
net.inputs{1}.processFcns{1,2} = { ' ' }
net.inputs{1}.processFcns{1,2} = { 'mapstd' }
net.inputs{1}.processFcns{1,2} = { 'processpca' }

%However, NONE of these work either.

I'll be baack!

Greg
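
For reference, the processFcns property documentation shows whole-cell assignment rather than element indexing; a sketch, set before configure/train and untested here:

net = fitnet(10);
net.inputs{1}.processFcns = {'mapstd'};                % standardization only
% net.inputs{1}.processFcns = {'mapstd','processpca'}; % or with PCA
% net.inputs{1}.processFcns = {};                      % or no input preprocessing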

Subject: Neural Network: Prediction equation of the trained network

From: Siva

Date: 21 Apr, 2012 06:14:22

Message: 17 of 20

Greg Heath <g.heath@verizon.net> wrote in message <a88bb333-28cd-4418-96c7-03ad50762f34@i18g2000vbx.googlegroups.com>...

*snip*

> I'll be baack!
>
> Greg

Greg -

Thank you for trying to follow up on my questions. Have you figured out the answer?

Back to the input scaling question I had ...
The trained network output displays saturation, i.e. it shows asymptotic behavior at high and low target values. This is what I was attempting to address by remapping the inputs. If I can make my network use the linear portion of the sigmoid function, this saturation should go away. Shouldn't it?

Or maybe this is symptomatic of something else. Can you think of why my trained network does that?

Thanks.
Siva

Subject: Neural Network: Prediction equation of the trained network

From: Greg Heath

Date: 22 Apr, 2012 20:11:08

Message: 18 of 20

On Apr 21, 2:14 am, "Siva " <sivaath...@gmail.com> wrote:

*snip*

> Back to the input scaling question I had ...
> The trained network output displays saturation, i.e. it shows asymptotic behavior at high and low target values. This is what I was attempting to address by remapping the inputs. If I can make my network use the linear portion of the sigmoid function, this saturation should go away. Shouldn't it?
>
> Or maybe this is symptomatic of something else. Can you think of why my trained network does that?

I neither know what you mean by "displays saturation" nor why it
bothers you. The important thing is how closely the output y =
sim(net,p) estimates the target t.

A quick check of reasonableness of outputs is a matrix of results,
size(results) = [ Ntrials numH ], containing the variance-normalized
mean-squared error NMSE = MSE/MSE00, where MSE00 ~ mean(var(t')), or
the R-squared statistic R2 = 1 - NMSE.

numH is the number of candidate values for H, the number of hidden
nodes. Ntrials is the number of random weight initializations for
each candidate value of H.

Search

heath Ntrials

for code examples. My results are in terms of both the original and
adjusted R-squared statistics.
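
As a sketch, for a single trained net with outputs y and targets t:

MSE00 = mean(var(t'));       % target variance: MSE of the naive constant model
NMSE  = mse(t - y) / MSE00   % variance-normalized MSE
R2    = 1 - NMSE             % R-squared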

Hope this helps.

Greg

Subject: Neural Network: Prediction equation of the trained network

From: Siva

Date: 23 Apr, 2012 15:44:08

Message: 19 of 20

Greg Heath <g.heath@verizon.net> wrote in message <37c9a9fe-90d5-4ef8-a52e-97402503537e@l3g2000vbv.googlegroups.com>...

*snip*

> I neither know what you mean by "displays saturation" nor why it
> bothers you. The important thing is how closely the output y =
> sim(net,p) estimates the target t.

For example, if my targets are [ 0 1 2 3 4 5 ], 'saturation behavior' is indicated by outputs of [ 0 0.8 1.9 3.1 4.2 5 ], i.e. a curvature at both ends of the prediction scale.

Subject: Neural Network: Prediction equation of the trained network

From: Greg Heath

Date: 24 Apr, 2012 02:42:10

Message: 20 of 20

On Apr 23, 11:44 am, "Siva " <sivaath...@gmail.com> wrote:

*snip*

> For example, if my targets are [ 0 1 2 3 4 5 ], 'saturation behavior' is indicated by outputs of [ 0 0.8 1.9 3.1 4.2 5 ], i.e. a curvature at both ends of the prediction scale.

I plotted this and don't see the connection between the plot, your
label, and your curvature explanation.

However, that does not matter. The objective is to minimize MSE for
nontraining data given the range constraints of the output activation
function.

Hope this helps.

Greg
