http://www.mathworks.com/matlabcentral/newsreader/view_thread/328715
MATLAB Central Newsreader thread: weight in neural network

Wed, 01 May 2013 15:44:10 +0000
weight in neural network
http://www.mathworks.com/matlabcentral/newsreader/view_thread/328715#903550
srishti
Hello Sir,
I have tried the following code. If I generate W1 and W2, the trained weights net.IW{1,1} and net.LW{2,1} come out different from them, and if I don't use W1 and W2 at all, net.IW{1,1} and net.LW{2,1} are different again. How are the trained weights related to W1 and W2?
s = RandStream('mcg16807','Seed', 0);
RandStream.setDefaultStream(s)
x=sinimfin; %input
t=t; %target
S1=1; % number of hidden layers
S2=2; % number of output layers (= number of classes)
[R,Q]=size(x);
W1= rand(S1,R);
W2= rand(S2,S1);
net = patternnet(4);
net = train(net,x,t);
% view(net)
y=net(x);
plotconfusion(t,y);
perf=mse(t-y);

Thu, 02 May 2013 08:06:08 +0000
Re: weight in neural network
http://www.mathworks.com/matlabcentral/newsreader/view_thread/328715#903584
Greg Heath
"srishti" wrote in message <klrd8a$5m7$1@newscl01ah.mathworks.com>...
> Hello Sir,
> I have tried the following code. If I generate W1 and W2, the trained weights net.IW{1,1} and net.LW{2,1} come out different from them, and if I don't use W1 and W2 at all, net.IW{1,1} and net.LW{2,1} are different again. How are the trained weights related to W1 and W2?
>
> s = RandStream('mcg16807','Seed', 0);
> RandStream.setDefaultStream(s)
> x=sinimfin; %input
> t=t; %target
> S1=1; % number of hidden layers
> S2=2; % number of output layers (= number of classes)

1. You are confusing the terms "layer" and "node".
If the input and output target matrices have dimensions

[ I N ] = size(input)
[ O N ] = size(target) % (O classes)

the typical NN has a single input layer with I nodes, a single
hidden layer with H nodes, and a single output layer with O nodes,
yielding an I-H-O node topology. In addition, there is a single input
bias node and a single hidden-layer bias node. The bias nodes
provide constant inputs that allow signals to be shifted vertically
without changing shape.
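The vertical-shift role of a bias can be checked numerically. Below is a minimal NumPy sketch (not toolbox code; the weight and bias values are made up for illustration) showing that adding an output-side bias moves the curve by a constant without changing its shape:

```python
import numpy as np

x = np.linspace(-2, 2, 9)
w, b = 1.5, 0.7          # illustrative weight and bias values

without_bias = w * np.tanh(x)
with_bias    = w * np.tanh(x) + b

# The bias adds the same constant everywhere: the curve keeps its
# shape and is only shifted vertically by b.
shift = with_bias - without_bias
assert np.allclose(shift, b)
```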

The input weight matrix IW, input bias weight vector b1, layer weight
matrix LW, and output bias weight vector b2 have the sizes

[ H 1 ] = size(b1)
[ H I ] = size(IW)
[ O 1 ] = size(b2)
[ O H ] = size(LW)

The corresponding hidden and output layer signals are given by

hidden = tanh(IW*input + b1);
output = LW*hidden + b2;
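The two formulas above can be mirrored outside MATLAB to confirm that the stated matrix sizes are consistent. A minimal NumPy sketch (the dimensions I, H, O, N are arbitrary, chosen only for illustration):

```python
import numpy as np

# Arbitrary illustrative dimensions: I inputs, H hidden, O outputs, N samples
I, H, O, N = 3, 4, 2, 5
rng = np.random.default_rng(0)

x  = rng.standard_normal((I, N))   # input matrix,        I x N
IW = rng.standard_normal((H, I))   # input weights,       H x I
b1 = rng.standard_normal((H, 1))   # hidden bias vector,  H x 1
LW = rng.standard_normal((O, H))   # layer weights,       O x H
b2 = rng.standard_normal((O, 1))   # output bias vector,  O x 1

hidden = np.tanh(IW @ x + b1)      # H x N
output = LW @ hidden + b2          # O x N

# The shapes agree with the sizes Greg lists for IW, b1, LW and b2.
assert hidden.shape == (H, N)
assert output.shape == (O, N)
```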

> [R,Q]=size(x);
> W1= rand(S1,R);
> W2= rand(S2,S1);

2. You have created nonnegative random weight values in (0,1) instead of using the
function randn, which creates bipolar (positive and negative) weight values.
3. You have ignored the bias weights.
4. You have not assigned the weights to a net.
5. You do not have to initialize weights:
a. The older creation functions, e.g., newfit, newpr and newff, automatically initialize
weights designed to cover the function space created by the input and target.
b. The current creation functions, e.g., fitnet, patternnet and feedforwardnet, do not.
However, the current version of the function train will do it automatically if you
have not already done it with the function configure.
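Point 2 is easy to verify numerically. A small NumPy sketch (not toolbox code) contrasting uniform draws, which are all one sign, with normal draws, which straddle zero:

```python
import numpy as np

rng = np.random.default_rng(0)

u = rng.random(1000)            # like MATLAB's rand: uniform on (0, 1)
n = rng.standard_normal(1000)   # like MATLAB's randn: zero-mean normal

# rand-style draws are strictly positive, so they can never supply the
# bipolar (mixed-sign) initial weights that randn produces.
assert (u > 0).all() and (u < 1).all()
assert n.min() < 0 < n.max()
```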

> net = patternnet(4);
> net = train(net,x,t);
> % view(net)
> y=net(x);
> plotconfusion(t,y);
> perf=mse(t-y);

If you use the expanded output form of train,

[ net tr y e ] = train(net,x,t);

you will not only get the output y automatically, you will also get the error e = t - y and a training structure, tr, with almost all of the other information about training and about the performance of the training, validation and test subsets that you could wish for.

Take the time to investigate what tr has to offer:

tr = tr

If you really want to assign your own weights, try configure:

help/doc configure

Hope this helps.

Greg