On Apr 6, 12:24 pm, "Slawomir " <SlawomirBab...@gmail.com> wrote:
> Thank you, Greg, for spending your time on it!
> I really appreciate it!
>
> I have a few questions anyway, I've placed them in comments:
>
> clear all
> clc
> a = [0 1 0 1 0 1 0 1];
> b = [0 0 1 1 0 0 1 1];
> c = [0 0 0 0 1 1 1 1];
> y = [0 1 1 0 1 0 0 1]; % first half [0 1 1 0] is the XOR output, second half [1 0 0 1] is the XNOR output
> c1 = [0 0 0 1 0 1 1 1];
>
> P = [a;b;c];
> T = [y;c1];
>
> [ I N ] = size(P) %[ 3 8 ]
> [ O N ] = size(T) % [ 2 8 ]
>
> %It looks quite complicated; it is your personal modification of the MSE function, right?
> %I don't get it yet, but I will; I just need to draw it on paper.
If you search for MSE00 in this newsgroup there are more detailed
discussions. Typically, I look at two simple models before designing a
NN: a Constant model and a Linear model.
NAIVE CONSTANT MODEL: the output is constant regardless of the input
(I've recently nicknamed it the "STUPID" model). MSE00 is used to
normalize the MSE of other models to remove the effect of scaling.
1-NMSE is the well-known R^2 statistic. Search Wikipedia for
"Coefficient of Determination".
y00 = repmat(mean(T,2),1,N) % Replicated column mean
yb00 = round(y00) % Converted to binary
MSE00 = mse(T-yb00)
> MSE00 = mse(T-repmat(round(mean(T,2)),1,N)) % 0.5
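The constant-model baseline can be sanity-checked outside MATLAB; below is a minimal sketch in Python/NumPy (variable names mirror the thread; none of this is toolbox API):

```python
import numpy as np

# Naive constant ("STUPID") reference model for this truth table.
T = np.array([[0, 1, 1, 0, 1, 0, 0, 1],    # sum bit (XOR/XNOR halves)
              [0, 0, 0, 1, 0, 1, 1, 1]])   # carry bit
N = T.shape[1]

y00 = np.tile(T.mean(axis=1, keepdims=True), (1, N))  # replicated column mean
yb00 = np.round(y00)                                  # converted to binary
MSE00 = np.mean((T - yb00) ** 2)
print(MSE00)  # 0.5 -> used to normalize the MSE of other models
```

With both target rows averaging 0.5, the rounded constant prediction misses exactly half the entries, giving the MSE00 = 0.5 quoted in the thread.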
>
> % I-H-O NEURAL NETWORK MODEL
>
> Hub = floor((N-1)*O/(I+O+1)) % 2 = upper bound for H
> Hmax = Hub
> Hmin = 0
> dH = 1 % What does this stand for?
The increment in H for the search.
> Ntrials = 10 % This is the number of what trials? How many repetitions?
10 different random weight initialization trials for each candidate
value of H.
> rng(0)
> j = 0
> for h = Hmin : dH : Hmax % Search for smallest successful H
> j = j + 1
> for i = 1:Ntrials % Multiple weight initializations
> if h == 0
> net = newff(P, T, []); % Hmm, why is [] here? It defines the number of neurons in the hidden layer, right?
Yes. MATLAB will not accept 0.
> else
> net = newff(P, T, h);
> end
> net.trainFcn = 'trainlm'; % Originally my teacher wanted me to perform this task, i.e., learning the rules of a full
> adder, for multiple methods of calculating the output error.
> Check it out: http://i39.tinypic.com/2q3aue0.jpg
> In the table, "srednia" is Polish for "mean". On the chart the Y axis is the number of epochs, with 3 bars for max, mean, and min.
> And I need to do 30 simulations for each method in order to make this chart and fill the empty spaces in the table.
>
> net.divideParam.trainratio = 1; % Small N ==> Need all data for training
> net.divideParam.valratio = 0;
> net.divideParam.testratio = 0;
> net.trainParam.goal = 0.01*MSE00; % 100 times better than STUPID
>
> [net, tr] = train(net, P, T);
> Nepochs(i, j) = tr.epoch(end); % It is a good solution, thank you for showing me this. i stands for the row index
> and j for the column index, right? But I don't get why this program processes each value from a, b, c independently.
It doesn't. j is the index for the value of H and i is the
index for the weight initialization trial.
It is supposed to look for the best weight values in order to
accomplish the target, right?
Kind of.
#1 Find the lowest value of H for which the performance is acceptable.
#2 Given H, use multiple weight initializations to avoid unacceptable
local minima.
#3 Save the weights of the best model.
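The three steps above can be sketched as a double loop. This is a structural illustration only, in Python; train_once is a hypothetical stand-in for one newff/train call with a fresh random initialization, and its return values here are deliberately fake (h = 0 always fails, h >= 1 succeeds):

```python
# Sketch of the H-search with multiple weight initializations.
MSE00 = 0.5
goal = 0.01 * MSE00            # 100 times better than the constant model

def train_once(h, trial):
    # Hypothetical placeholder for training one net with h hidden nodes;
    # returns (mse, weights). Fake results: fail at h = 0, succeed above.
    mse = 0.3 if h == 0 else 0.004
    return mse, {"h": h, "trial": trial}

best = None
for h in range(0, 3):              # search the smallest H first
    for trial in range(10):        # multiple weight initializations
        mse, weights = train_once(h, trial)
        if best is None or mse < best[0]:
            best = (mse, h, weights)   # keep the best model's weights
    if best[0] <= goal:                # smallest H meeting the goal
        break

print(best[1])  # 1 -> smallest adequate H under these fake results
```

The outer loop implements step #1, the inner loop step #2, and keeping the (mse, h, weights) triple implements step #3.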
> S = round(sim(net, P)) % Integer Output
> MSE = mse(T - S)
> NMSE(i,j) = MSE / MSE00
> end
> end
> H = Hmin : dH : Hmax
> Nepochs = sum(Nepochs);
> Nepochs = sum(Nepochs) % It gives me the total number of epochs in one simulation, using the given training method, in this case 'trainlm'.
I used Nepochs = Nepochs to print out the 10x3 matrix showing the
diversity of the 30 results.
> NMSE = NMSE
> Stats = [min(NMSE); mean(NMSE); median(NMSE); max(NMSE)]
>
> ========================
> My answers:
> S =
> 1 1 0 0 1 1 0 0
> 0 0 0 1 0 1 1 1
> T =
> 0 1 1 0 1 0 0 1
> 0 0 0 1 0 1 1 1
> Why is it not possible for this network to learn the target rules?
What do you mean?
It is possible. After 26 trials I found one with H=2.
> When I drew the whole adder on paper, I thought that I would need to create many more networks in order to calculate the output. I drew my initial idea as a scheme: http://i44.tinypic.com/15yviid.jpg
> But if I am not able to train this network completely on the rules based on your solution,
would it be a good idea to do it like that?
I don't understand.
> I know that neural networks are used to determine outputs where you don't know the exact outcome,
or where they need to adapt to a given situation, like changing the
temperature in the room = changing the level of the heating
installation, and so on.
>
> It looks like an easy task to do. I've tried to figure out why it is not possible in this case to receive the exact result for the y output, but right now I am confused.
It is possible to get an exact result! I did it after 26 attempts. See
my previous tabulations. I was trying to do it with as small an H as
possible.
The 2 outputs with 8 cases yield 16 training equations. With 3 inputs
and 2 hidden nodes there are (3+1)*2 = 8 unknown input layer weights.
With 2 hidden nodes and 2 outputs there are (2+1)*2 = 6 unknown output
layer weights. So, for H = 2, there is an exact solution of the
system of 16 equations in 14 unknowns.
As H increases, the percentage of solutions found will increase.
However, the number of equations will be less than the number of
unknowns; therefore, there will be an infinite number of solutions.
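The bookkeeping is easy to verify with a throwaway check (Python here, not part of the MATLAB script; I, H, O, N are the values from this thread and Hub is the upper bound computed earlier):

```python
# Weight/equation counting for the I-H-O = 3-2-2 net with N = 8 cases.
I, H, O, N = 3, 2, 2, 8
Neq = N * O                         # training equations: 16
Nw = (I + 1) * H + (H + 1) * O      # unknown weights: 8 + 6 = 14
Hub = ((N - 1) * O) // (I + O + 1)  # floor((N-1)*O/(I+O+1)) = 2
print(Neq, Nw, Hub)                 # 16 14 2
```

At H = Hub = 2 the system is still (slightly) overdetermined (16 equations, 14 unknowns); one more hidden node would add 3 + 3 = 6 weights and tip it to underdetermined.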
> Thanks again, Slawek.
Hope this helps.
Greg
