
Thread Subject:
FULL ADDER using neural network

Subject: FULL ADDER using neural network

From: Slawomir

Date: 30 Mar, 2012 10:27:26

Message: 1 of 7

Hello,

I was asked to design a neural network which will learn the rules of a full adder.

http://img221.imageshack.us/img221/2937/rulesx.jpg

And a few questions:
- how do I introduce the inputs?
- how do I determine the output?
- can I do it with only one definition of newff?

CODE IN MATLAB:

clear all
tic
P = []; %Input vectors
T = [];%Expected answers
net = newff(P, T, {'logsig' 'logsig'}, 'trainbfg');
net.trainParam.epochs = 1000;
net.trainParam.goal = 0;
net.trainParam.lr = 0.01;
net = train(net, P, T);
S = sim(net, P)
error = T - S %Error of the neuron
toc

If you have any idea how to do it, please give me a hint!
Cheers.

Subject: FULL ADDER using neural network

From: Greg Heath

Date: 30 Mar, 2012 12:14:06

Message: 2 of 7

On Mar 30, 6:27 am, "Slawomir " <SlawomirBab...@gmail.com> wrote:
> Hello,
>
> I was asked to design a neural network which will learn the rules of a full adder.
>
> http://img221.imageshack.us/img221/2937/rulesx.jpg
>
> And a few questions:
> - how do I introduce the inputs?
> - how do I determine the output?
> - can I do it with only one definition of newff?
>
> CODE IN MATLAB:
>
> clear all
> tic
> P = []; %Input vectors

[ I N ] = size(P) % [ 3 8]

> T = [];%Expected answers

[ O N ] = size(T) % [ 2 8]

> net = newff(P, T, {'logsig' 'logsig'}, 'trainbfg');

Incorrect; you need to define the number of hidden neuron nodes, H.
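
For example (a minimal sketch, not Greg's exact fix; H = 2 is an assumed value, see his Hub bound later in the thread):

H = 2;                                                 % hidden layer size (assumed)
net = newff(P, T, H, {'logsig' 'logsig'}, 'trainbfg');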

> net.trainParam.epochs = 1000;
> net.trainParam.goal = 0;
> net.trainParam.lr = 0.01;

You should use as many defaults as possible.

help newff
doc newff

> net = train(net, P, T);
> S = sim(net, P)
> error = T - S                      %Error of the neuron

What neuron? There are H in the hidden layer and O = 2 in the output layer.

error is the error of the net.

> toc
>
> If you have any idea how to do it, please give me a hint!
> Cheers.

Search the archives using

heath Neq Nw Hub
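
(For context, a sketch of what that shorthand appears to denote, consistent with the Hub formula and the equation counting later in this thread: Neq is the number of training equations, Nw the number of unknown weights of an I-H-O net, and Hub the largest H for which Nw <= Neq:)

Neq = N*O                     % number of training equations
Nw  = (I+1)*H + (H+1)*O       % unknown weights = H*(I+O+1) + O
Hub = floor((N-1)*O/(I+O+1))  % largest H with Nw <= Neq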

Hope this helps.

Greg

Subject: FULL ADDER using neural network

From: Slawomir

Date: 3 Apr, 2012 11:19:11

Message: 3 of 7

Hey, I wrote something like this:

clear all
tic

a = [0 1 0 1 0 1 0 1];
b = [0 0 1 1 0 0 1 1];
c = [0 0 0 0 1 1 1 1];
y = [0 1 1 0 1 0 0 1];
c1 = [0 0 0 1 0 1 1 1];

P = [a;b;c];
T = [y;c1];

%=================================================

[input] = size(P);
[output] = size(T);

net = newff(P, T, 6, {'logsig'});
net.trainFcn = 'traingd';
net.trainParam.epochs = 1000;
net.trainParam.goal = 0.01;
net.trainParam.lr = 0.1;
[net, tr] = train(net, P, T);

%==================================================

plotperform(tr)
plottrainstate(tr)
S = sim(net, P)
error = T - S
toc

But I have no idea whatsoever if it works in the proper way... grrr.

Subject: FULL ADDER using neural network

From: Greg Heath

Date: 3 Apr, 2012 18:32:18

Message: 4 of 7

On Apr 3, 7:19 am, "Slawomir " <SlawomirBab...@gmail.com> wrote:
> Hey, I wrote something like this:
>
> clear all
> tic
>
> a = [0 1 0 1 0 1 0 1];
> b = [0 0 1 1 0 0 1 1];
> c = [0 0 0 0 1 1 1 1];
> y = [0 1 1 0 1 0 0 1];
> c1 = [0 0 0 1 0 1 1 1];
>
> P = [a;b;c];
> T = [y;c1];
>
> %=================================================
>
> [input] = size(P);
> [output] = size(T);
>
> net = newff(P, T, 6, {'logsig'});

1. Not enough data to support H = 6
2.
>> net.layers{1}.transferFcn
      ans = logsig
>> net.layers{2}.transferFcn
      ans = purelin

Better to use { 'tansig' 'logsig' }
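
(A sketch of that suggestion; H is an assumed hidden layer size chosen to fit the data:)

net = newff(P, T, H, { 'tansig' 'logsig' });  % tansig hidden layer, logsig output layer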

> net.trainFcn = 'traingd';
> net.trainParam.epochs = 1000;
> net.trainParam.goal = 0.01;
> net.trainParam.lr = 0.1;
> [net, tr] = train(net, P, T);

3. Try to use as many defaults as possible

> %==================================================
>
> plotperform(tr)
> plottrainstate(tr)
> S = sim(net, P)
> error = T - S
> toc
>
> But I have no idea whatsoever if it works in the proper way... grrr.

[I N] = size(P)                       % [ 3 8 ]
[O N] = size(T)                       % [ 2 8 ]
MSE = mse(T-S)                        % 0.4330
MSE00 = mse(T-repmat(mean(T,2),1,N)) % 0.2500 = Stupid model MSE
NMSE = MSE/MSE00                      % 1.7320 = Normalized MSE

% NMSE = 1.73 means that the net is 73% WORSE than the stupid constant
% model that, regardless of input, has a constant output equal to the
% mean of the training targets.
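
(Equivalently, in terms of the R^2 statistic Greg mentions later in this thread: R^2 = 1 - NMSE = 1 - 1.73 = -0.73, a negative coefficient of determination, again worse than the constant model.)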

Hope this helps.

Greg

Subject: FULL ADDER using neural network

From: Greg Heath

Date: 3 Apr, 2012 21:55:55

Message: 5 of 7

clear all
clc
% Semicolons removed below to assist debugging

a = [0 1 0 1 0 1 0 1];
b = [0 0 1 1 0 0 1 1];
c = [0 0 0 0 1 1 1 1];
y = [0 1 1 0 1 0 0 1];
c1 = [0 0 0 1 0 1 1 1];

P = [a;b;c];
T = [y;c1];

[ I N ] = size(P) % [ 3 8 ]
[ O N ] = size(T) % [ 2 8 ]

% STUPID MODEL MSE

MSE00 = mse(T-repmat(round(mean(T,2)),1,N)) % 0.5

% I - H - O NEURAL NETWORK MODEL

Hub = floor((N-1)*O/(I+O+1)) % 2 = upper bound for H
Hmax = Hub
Hmin = 0
dH = 1
Ntrials = 10

rng(0)
j = 0
for h = Hmin:dH:Hmax % Search for smallest successful H

    j = j + 1
    for i = 1:Ntrials % Multiple weight initializations
    if h==0
        net = newff(P, T, []);
    else
        net = newff(P, T, h);
    end
    net.divideParam.trainRatio = 1;   % Small N ==> Need all data for training
    net.divideParam.valRatio = 0;
    net.divideParam.testRatio = 0;
    net.trainParam.goal = 0.01*MSE00; % 100 times better than STUPID

    [ net, tr ] = train(net, P, T);
    Nepochs(i,j) = tr.epoch(end)
    S = round(sim(net, P)) % Integer Output
    MSE = mse(T-S)
    NMSE(i , j ) = MSE/MSE00

    end
end

H = Hmin:dH:Hmax
Nepochs = Nepochs % echo the Ntrials-by-3 matrix of epoch counts
NMSE = NMSE       % echo the matching NMSE matrix
Stats = [ min(NMSE); mean(NMSE); median(NMSE); max(NMSE) ]

% H =        0       1       2
%
% Nepochs =  2       4      47
%            2       8       9
%            2       6      18
%            2      11      12
%            2      12       9
%            2      10      26
%            2       8      55
%            2      13      44
%            2       9      12
%            2       2      21
%
% NMSE =     0.5000  0.7500  0.2500
%            0.5000  0.7500  0.5000
%            0.3750  0.7500  0.2500
%            0.6250  0.2500  0.2500
%            0.5000  0.2500  0.6250
%            0.3750  0.2500  0       <== AHAH!
%            0.6250  0.2500  0.2500
%            0.5000  0.7500  0.2500
%            0.5000  0.2500  0.2500
%            0.3750  0.7500  0.5000
%
% Stats =
% Min        0.3750  0.2500  0
% Mean       0.4875  0.5000  0.3125
% Median     0.5000  0.5000  0.2500
% Max        0.6250  0.7500  0.6250
%

Hope this helps.

Greg

Subject: FULL ADDER using neural network

From: Slawomir

Date: 6 Apr, 2012 16:24:12

Message: 6 of 7

Thank you, Greg, for spending your time on it!
I really appreciate it!

I have a few questions anyway, I've placed them in comments:

clear all
clc
a = [0 1 0 1 0 1 0 1];
b = [0 0 1 1 0 0 1 1];
c = [0 0 0 0 1 1 1 1];
y = [0 1 1 0 1 0 0 1]; % [0 1 1 0] is the XOR output; [1 0 0 1] is the XNOR output
c1 = [0 0 0 1 0 1 1 1];

P = [a;b;c];
T = [y;c1];

[ I N ] = size(P) %[ 3 8 ]
[ O N ] = size(T) % [ 2 8 ]


% It looks quite complicated. Is it your personal correction to the MSE function?
% I don't get it yet, but I will; I just need to draw it on paper.

MSE00 = mse(T-repmat(round(mean(T,2)),1,N)) % 0.5

% I - H - O NEURAL NETWORK MODEL

Hub = floor((N-1)*O/(I+O+1)) % 2 = upper bound for H
Hmax = Hub
Hmin = 0
dH = 1 % What does this stand for?
Ntrials = 10 % This is the number of what trials? How many repetitions?

rng(0)
j = 0
for h = Hmin : dH : Hmax % Search for smallest successful H
    j = j + 1
    for i = 1:Ntrials % Multiple weight initializations
    if h == 0
        net = newff(P, T, []); % Why [] here? It defines the number of neurons in the hidden layer, right?
    else
        net = newff(P, T, h);
    end
    net.trainFcn = 'trainlm'; % Originally my teacher wanted me to learn the rules of the
    % full adder with multiple training methods and compare the output error. Check it out:
    % http://i39.tinypic.com/2q3aue0.jpg
    % In the table, "srednia" means mean. The chart shows the number of epochs on Y, with
    % 3 bars each for max, mean, and min. I need to run 30 simulations for each method to
    % make this chart and fill in the empty spaces in the table.

    net.divideParam.trainRatio = 1; % Small N ==> Need all data for training
    net.divideParam.valRatio = 0;
    net.divideParam.testRatio = 0;
    net.trainParam.goal = 0.01*MSE00; % 100 times better than STUPID

    [net, tr] = train(net, P, T);
    Nepochs(i, j) = tr.epoch(end); % This is a good solution, thank you for showing it to me. i stands for the row index and j for the column, right? But I don't get why this program processes each value from a, b, c independently. It is supposed to look for the best weight values in order to reach the target, right?
    S = round(sim(net, P)) % Integer Output
    MSE = mse(T - S)
    NMSE (i , j) = MSE / MSE00
    end
end
H = Hmin : dH : Hmax
Nepochs = sum(Nepochs);
Nepochs = sum(Nepochs) % This gives the total number of epochs in one simulation, using the given training method, in this case 'trainlm'.
NMSE = NMSE
Stats = [min(NMSE); mean(NMSE); median(NMSE); max(NMSE)]

========================
My answers:
S =
     1 1 0 0 1 1 0 0
     0 0 0 1 0 1 1 1
T =
     0 1 1 0 1 0 0 1
     0 0 0 1 0 1 1 1
Why is it not possible for this network to learn the target rules?
When I draw this whole adder on paper, I think that I need to create many more networks in order to calculate the output. I drew my initial idea as a scheme:
http://i44.tinypic.com/15yviid.jpg
But if I am not able to train this network to learn the rules completely based on your solution, would it be a good idea to do it like that instead?
I know that neural networks are used to determine outputs where you don't know the exact outcome, or where they need to adapt to a given situation, like changing the temperature in the room = changing the level of the heating installation, and so on.

It looks like an easy task, and I've tried to figure out why it is not possible in this case to obtain the exact result for the y output, but right now I am confused.

Thanks again, Slawek.

Subject: FULL ADDER using neural network

From: Greg Heath

Date: 8 Apr, 2012 10:28:55

Message: 7 of 7

On Apr 6, 12:24 pm, "Slawomir " <SlawomirBab...@gmail.com> wrote:
> Thank you, Greg, for spending your time on it!
> I really appreciate it!
>
> I have a few questions anyway, I've placed them in comments:
>
> clear all
> clc
> a = [0 1 0 1 0 1 0 1];
> b = [0 0 1 1 0 0 1 1];
> c = [0 0 0 0 1 1 1 1];
> y = [0 1 1 0 1 0 0 1]; % [0 1 1 0] is the XOR output; [1 0 0 1] is the XNOR output
> c1 = [0 0 0 1 0 1 1 1];
>
> P = [a;b;c];
> T = [y;c1];
>
> [ I N ] = size(P) %[ 3 8 ]
> [ O N ] = size(T) % [ 2 8 ]
>
> % It looks quite complicated. Is it your personal correction to the MSE function?
> % I don't get it yet, but I will; I just need to draw it on paper.

If you search MSE00 in this Newsgroup there are more detailed discussions. Typically, I look at two simple models before designing a NN: a Constant model and a Linear model.

NAIVE CONSTANT MODEL: Output is constant regardless of input (I've recently nicknamed it the "STUPID" model). MSE00 is used to normalize the MSE of other models to remove the effect of scaling. 1 - NMSE is the well-known R^2 statistic. Search Wikipedia for "Coefficient of Determination".

y00 = repmat(mean(T,2),1,N) % Replicated column mean
yb00 = round(y00)           % Converted to binary
MSE00 = mse(T-yb00)
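
(Worked through for this T: mean(T,2) = [0.5; 0.5], which round takes to [1; 1], so yb00 is all ones and MSE00 = mean((T-yb00).^2) = 8/16 = 0.5, matching the value quoted below.)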

> MSE00 = mse(T-repmat(round(mean(T,2)),1,N)) % 0.5
>
> % I - H - O NEURAL NETWORK MODEL
>
> Hub = floor((N-1)*O/(I+O+1)) % 2 = upper bound for H
> Hmax = Hub
> Hmin = 0
> dH = 1 % What does this stand for?

The increment in H for the search.

> Ntrials = 10 % This is the number of what trials? How many repetitions?

10 different random weight initialization trials for each candidate value of H.

> rng(0)
> j = 0
> for h = Hmin : dH : Hmax % Search for smallest successful H
> j = j + 1
> for i = 1:Ntrials % Multiple weight initializations
> if h == 0
> net = newff(P, T, []); % Why [] here? It defines the number of neurons in the hidden layer, right?

Yes. MATLAB will not take 0; with [] newff builds a network with no hidden layer (effectively a linear model), which is the H = 0 column in the earlier table.

> else
> net = newff(P, T, h);
> end
> net.trainFcn = 'trainlm'; % Originally my teacher wanted me to learn the rules of the
> % full adder with multiple training methods and compare the output error. Check it out:
> % http://i39.tinypic.com/2q3aue0.jpg
> % In the table, "srednia" means mean. The chart shows the number of epochs on Y, with
> % 3 bars each for max, mean, and min. I need to run 30 simulations for each method to
> % make this chart and fill in the empty spaces in the table.
>
> net.divideParam.trainRatio = 1; % Small N ==> Need all data for training
> net.divideParam.valRatio = 0;
> net.divideParam.testRatio = 0;
> net.trainParam.goal = 0.01*MSE00; % 100 times better than STUPID
>
> [net, tr] = train(net, P, T);
> Nepochs(i, j) = tr.epoch(end); % This is a good solution, thank you for showing it to me. i stands for the row index
> and j for the column, right? But I don't get why this program processes each value from a, b, c independently.

It doesn't. j is the index for the value of H and i is the index for the weight initialization trial.

> It is supposed to look for the best weight values in order to reach the target, right?

Kind of.
#1 Find the lowest value of H for which the performance is acceptable.
#2 Given H, use multiple weight initializations to avoid unacceptable local minima.
#3 Save the weights of the best model (a sketch follows below).
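
A minimal sketch of #2 and #3 (not from the original posts; it assumes P, T, MSE00, and Ntrials as defined in the earlier script, and a chosen scalar H, e.g. the H = 2 found above):

bestNMSE = Inf;
for i = 1:Ntrials
    net = newff(P, T, H);                      % fresh random weight initialization
    [net, tr] = train(net, P, T);
    NMSEi = mse(T - round(sim(net, P)))/MSE00; % normalized error of this trial
    if NMSEi < bestNMSE                        % keep the best trial so far
        bestNMSE = NMSEi;
        bestnet = net;                         % #3: save the winning weights
    end
end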

> S = round(sim(net, P)) % Integer Output
> MSE = mse(T - S)
> NMSE (i , j) = MSE / MSE00
> end
> end
> H = Hmin : dH : Hmax
> Nepochs = sum(Nepochs);
> Nepochs = sum(Nepochs) % This gives the total number of epochs in one simulation, using the given training method, in this case 'trainlm'.

I used Nepochs = Nepochs to print out the 10x3 matrix showing the diversity of the 30 results.
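
(Equivalently, one could call disp(Nepochs); an unsuppressed statement like Nepochs = Nepochs simply echoes the matrix to the command window.)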

> NMSE = NMSE
> Stats = [min(NMSE); mean(NMSE); median(NMSE); max(NMSE)]
>
> ========================
> My answers:
> S =
> 1 1 0 0 1 1 0 0
> 0 0 0 1 0 1 1 1
> T =
> 0 1 1 0 1 0 0 1
> 0 0 0 1 0 1 1 1
> Why is it not possible for this network to learn the target rules?

What do you mean?
It is possible. After 26 trials I found one with H=2.


> When I draw this whole adder on paper, I think that I need to create many more networks in order to calculate the output. I drew my initial idea as a scheme: http://i44.tinypic.com/15yviid.jpg

> But if I am not able to train this network to learn the rules completely based on your solution,
> would it be a good idea to do it like that instead?

I don't understand.

> I know that neural networks are used to determine outputs where you don't know the exact outcome,
> or where they need to adapt to a given situation, like changing the temperature in the room =
> changing the level of the heating installation, and so on.
>
> It looks like an easy task, and I've tried to figure out why it is not possible in this case to obtain the exact result for the y output, but right now I am confused.

It is possible to get an exact result! I did it after 26 attempts. See my previous tabulations. I was trying to do it with as small an H as possible.

The 2 outputs with 8 cases yield 16 training equations. With 3 inputs and 2 hidden nodes there are (3+1)*2 = 8 unknown input layer weights. With 2 hidden nodes and 2 outputs there are (2+1)*2 = 6 unknown output layer weights. So, for H = 2, there is an exact solution of the system with 16 equations in 14 unknowns.
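
(A quick check of that count in the notation of the earlier script, with I = 3, O = 2, N = 8 and H = 2:)

Neq = N*O               % 16 training equations
Nw  = (I+1)*H + (H+1)*O % (3+1)*2 + (2+1)*2 = 14 unknown weights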

As H increases, the percent of solutions found will increase. However, the number of equations will be less than the number of unknowns. Therefore, there will be an infinite number of solutions.
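
(For example, at H = 3 the same count gives Nw = (3+1)*3 + (3+1)*2 = 20 unknowns against Neq = 16 equations, so the system is already underdetermined.)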

> Thanks again, Slawek.

Hope this helps.

Greg
