Minimum-Size Neural Network to Represent a Boolean Function

dsmalenb on 12 Oct 2018
Answered: Greg Heath on 17 Oct 2018
Greetings!
I am fairly new to MATLAB, but I am trying to understand how to build a basic neural network that is minimized to represent some Boolean function. Say we have variables p, q, and r, and the truth table is:
p q r | f(p,q,r)
------+---------
0 0 0 |    0
0 0 1 |    1
0 1 0 |    1
0 1 1 |    0
1 0 0 |    1
1 0 1 |    0
1 1 0 |    1
1 1 1 |    0
I would like the neural network to take in one set of values for p, q, and r and return f(p,q,r).
To be honest, I have several books on MATLAB and neural networks, and I have browsed several online forums, but they all seem to bounce around. I also prefer not to use "black box" add-ons until I understand how to do this the "hard way" first.
I am looking for a purely NN approach: no Karnaugh maps or the Quine-McCluskey method, even though those would apply.
I appreciate your help!

Answers (2)

Yavor Kamer on 13 Oct 2018
A single neuron (with one input and one output) takes the input, multiplies it by a weight, adds a bias, and passes the result through a nonlinear function of your choice to produce the output: N(x) = fun(x*w + b). Suppose you use a simple case with 3 neurons in your first (input) layer and a single neuron in your second and last (output) layer. Then your output would be of the form
N(p,q,r) = fun( fun(p*w(1)+b(1)) + ...
                fun(q*w(2)+b(2)) + ...
                fun(r*w(3)+b(3)) )*w(4) + b(4)
By "training" such a neural network you are basically trying to find the values of vectors w and b that give the best fit to your desired outputs (i.e minimizing the misfit). So if you know what is your activation function ( fun ) you can write the full analytical form and try to solve and see what is the minimal complexity to obtain a given error.
dsmalenb on 16 Oct 2018
After reading your response, I would like to modify my objective: minimize the number of free parameters.



Greg Heath on 17 Oct 2018
In general
1. A single tanh hidden layer with a linear output layer is always a sufficient minimum MSE approximator for bounded functions.
2. The number of unknown weights for a standard I-H-O MLP is
Nw = (I+1)*H + (H+1)*O = O + (I+O+1)*H
3. The corresponding number of training equations is
Ntrneq = Ntrn*O
4. The number of unknowns will not exceed the number of training equations when
Ntrneq >= Nw
or
H <= Hub = (Ntrneq-O)/(I+O+1)
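For this problem, the arithmetic works out as follows (a quick sketch, assuming all eight rows of the truth table are used as training cases, so I = 3, O = 1, Ntrn = 8):

I = 3; O = 1; Ntrn = 8;           % 3 inputs, 1 output, 8 truth-table rows
Ntrneq = Ntrn*O                   % 8 training equations
Hub = (Ntrneq - O)/(I+O+1)        % (8-1)/5 = 1.4, so H <= 1
Nw = O + (I+O+1)*1                % 6 unknowns for H = 1, within Ntrneq = 8

Whether a single hidden node can actually reproduce this particular truth table is a separate question that training (e.g., with fitnet(1)) would answer.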
Hope this helps.
*Thank you for formally accepting this answer*
Greg
