Path: news.mathworks.com!not-for-mail
From: "Greg Heath" <heath@alumni.brown.edu>
Newsgroups: comp.soft-sys.matlab
Subject: Re: weight in neural network
Date: Thu, 2 May 2013 08:06:08 +0000 (UTC)
Organization: The MathWorks, Inc.
Lines: 78
Message-ID: <klt6pg$gh4$1@newscl01ah.mathworks.com>
References: <klrd8a$5m7$1@newscl01ah.mathworks.com>
Reply-To: "Greg Heath" <heath@alumni.brown.edu>
NNTP-Posting-Host: www-02-blr.mathworks.com
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
X-Trace: newscl01ah.mathworks.com 1367481968 16932 172.30.248.47 (2 May 2013 08:06:08 GMT)
X-Complaints-To: news@mathworks.com
NNTP-Posting-Date: Thu, 2 May 2013 08:06:08 +0000 (UTC)
X-Newsreader: MATLAB Central Newsreader 2929937
Xref: news.mathworks.com comp.soft-sys.matlab:794730

"srishti" wrote in message <klrd8a$5m7$1@newscl01ah.mathworks.com>...
> Hello Sir, 
>            Sir, I have tried the following code. The problem is that if I generate W1 and W2, the weights net.IW{1,1} and net.LW{2,1} are different from them, and if I don't use W1 and W2, net.IW{1,1} and net.LW{2,1} are still different. Sir, how are the weights related 
> to W1 and W2?
> 
> 
> s = RandStream('mcg16807','Seed', 0);
> RandStream.setDefaultStream(s)
> x=sinimfin; %input
> t=t;      %target
> S1=1; % number of hidden layers
> S2=2; % number of output layers (= number of classes)

1. You are confusing the terms "layer" and "node".
If the input and output target matrices have dimensions

[ I N ] = size(input)
[ O N ] = size(target) % O classes

the typical NN has a single input layer with I nodes, a single 
hidden layer with H nodes, and a single output layer with O nodes, 
yielding an I-H-O node topology. In addition, there is a single input 
bias node and a single hidden layer bias node. The bias nodes 
provide constant inputs that allow signals to be shifted vertically 
without changing shape. 

The input weight matrix, IW, the input bias weight vector, b1, the layer 
weight matrix, LW, and the output bias weight vector, b2, have the sizes

[ H 1 ] = size(b1)
[ H I ] = size(IW)
[ O 1 ] = size(b2)
[ O H ] = size(LW)

The corresponding hidden and output layer signals are given by 

hidden = tanh(IW*input + b1); % b1 is replicated over all N columns
output = LW*hidden + b2;      % b2 is replicated over all N columns
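
To see the correspondence directly, you can extract the weights from a 
trained net and reproduce its output by hand. Here is a minimal sketch 
(I assume H = 4 hidden nodes and use fitnet, whose defaults match the 
equations above: tansig hidden layer, linear output. I also remove the 
default mapminmax input/output processing so that the hand computation 
matches; with the processing left in, you would have to apply it yourself):

[ I N ] = size(x);
[ O N ] = size(t);
H = 4;
net = fitnet(H);
net.inputs{1}.processFcns  = {};  % remove default input normalization
net.outputs{2}.processFcns = {};  % remove default output normalization
net = train(net,x,t);

size(net.IW{1,1})  % [ H I ]
size(net.b{1})     % [ H 1 ]
size(net.LW{2,1})  % [ O H ]
size(net.b{2})     % [ O 1 ]

% repmat replicates the bias columns over all N cases
hidden = tansig(net.IW{1,1}*x + repmat(net.b{1},1,N));
yhand  = net.LW{2,1}*hidden   + repmat(net.b{2},1,N);
ynet   = net(x);
max(max(abs(yhand-ynet)))  % should be ~0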

> [R,Q]=size(x);
> W1= rand(S1,R);  
> W2= rand(S2,S1); 

2. You have created nonnegative random weight values in (0,1) instead of using 
the function randn, which creates bipolar weight values.
3. You have ignored the bias weights.
4. You have not assigned the weights to a net.
5. You do not have to initialize weights (see the sketch after this list):
     a. The older creation functions, e.g., newfit, newpr and newff, automatically 
         initialize weights designed to cover the function space created by the 
         input and target.
     b. The current creation functions, e.g., fitnet, patternnet and feedforwardnet, 
         do not. However, the current version of the function train will automatically 
         do it if you have not done it already with the function configure.
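
If you do want your own initial weights, here is a sketch of the configure 
route (H = 4 hidden nodes assumed, and the 0.1 scale factor is only an 
example choice; configure sizes the weight arrays from x and t, and train 
then starts from whatever values you put in):

[ I N ] = size(x);
[ O N ] = size(t);
net = patternnet(4);
net = configure(net,x,t);      % sizes IW, LW, b1, b2 from x and t
net.IW{1,1} = 0.1*randn(4,I);  % bipolar values, unlike rand in (0,1)
net.b{1}    = 0.1*randn(4,1);  % do not forget the bias weights
net.LW{2,1} = 0.1*randn(O,4);
net.b{2}    = 0.1*randn(O,1);
net = train(net,x,t);          % training starts from these values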

> net = patternnet(4);
> net = train(net,x,t);
> % view(net)
> y=net(x);
> plotconfusion(t,y);
> perf=mse(y-t);

If you use the expanded output form of train

[ net tr y e ] = train(net,x,t);

you will not only get the output y automatically, you will also get the error e = t-y and a training structure, tr, with almost all of the other information about the training process and the performance of the training, validation and test subsets that you could wish for.

Take the time to investigate what tr has to offer

tr = tr % no semicolon, so the complete training record is displayed
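
For example, a few fields I find useful (names as in the current 
toolbox training record):

tr.trainInd     % indices of the training subset
tr.valInd       % indices of the validation subset
tr.testInd      % indices of the test subset
tr.best_epoch   % epoch with the lowest validation error
tr.perf(end)    % final training performance
tr.stop         % the reason training stopped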

If you really want to assign your own weights, try configure.

help/doc configure

Hope this helps.

Greg