MATLAB Answers

Minimum size Neural Network to represent a Boolean Function

Asked by dsmalenb on 12 Oct 2018 at 16:59

Greetings!

I am fairly new to MATLAB, but I am trying to understand how to build a basic neural network of minimal size that represents some Boolean function. Say we have variables p, q, and r, and the truth table is:

p q r | f(p,q,r)
------+---------
0 0 0 |    0
0 0 1 |    1
0 1 0 |    1
0 1 1 |    0
1 0 0 |    1
1 0 1 |    0
1 1 0 |    1
1 1 1 |    0

I would like the neural network to be able to take in one value of p, q, and r and provide me with f(p,q,r).
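
For concreteness, I imagine setting the data up as plain matrices in MATLAB (no toolbox calls, just the truth table above; X and Y are simply the names I would use), something like:

X = [0 0 0 0 1 1 1 1;   % p, one column per row of the table
     0 0 1 1 0 0 1 1;   % q
     0 1 0 1 0 1 0 1];  % r
Y = [0 1 1 0 1 0 1 0];  % f(p,q,r)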

To be honest, I have several books on MATLAB and neural networks, and I have read several online forums, but they all seem to bounce around. I would also prefer not to use "black box" add-ons until I understand how to do this the "hard way" first.

I am looking for a purely NN approach. No Karnaugh maps or the Quine–McCluskey method, even though those apply.

I appreciate your help!


Release

R2016b

1 Answer

Answer by Yavor Kamer on 13 Oct 2018 at 23:19

A single neuron (with one input and one output) will take the input, multiply it by a weight, add a bias, and pass the result through a nonlinear function of your choice to produce the output: N(x) = fun(x*w + b). Suppose you use a simple case with 3 neurons in your first (input) layer and a single neuron in your second and last (output) layer. Then your output would be of the form

N(p,q,r) = fun((fun(p*w(1)+b(1)) + ...
                fun(q*w(2)+b(2)) + ...
                fun(r*w(3)+b(3)))*w(4) + b(4));
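
For example, taking the logistic sigmoid as fun, this can be written directly with anonymous functions (just a sketch; the names and the choice of sigmoid are my own placeholders, and the weights and biases would still have to be fitted):

fun = @(x) 1./(1 + exp(-x));                   % logistic sigmoid activation
N   = @(p,q,r,w,b) fun((fun(p*w(1)+b(1)) + ...
                        fun(q*w(2)+b(2)) + ...
                        fun(r*w(3)+b(3)))*w(4) + b(4));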

By "training" such a neural network you are basically trying to find the values of vectors w and b that give the best fit to your desired outputs (i.e minimizing the misfit). So if you know what is your activation function ( fun ) you can write the full analytical form and try to solve and see what is the minimal complexity to obtain a given error.

  2 Comments

True; however, hidden layers add complexity that your formulation would not capture easily. I suppose any neural network can be written out in the way you describe, but as the network grows, so does its functional form.

Therefore, I am looking to see if there is an easy way to represent this as a neural network without having to resort to increasingly complex predetermined forms.

Hopefully, this makes sense.

I understood your question as looking for the minimally complex NN structure (and its formulation) that would fit your data. The proposed NN has a total of 4 (weights) + 4 (biases) = 8 free parameters and you have 8 data points, so it is already complex enough. That's why I didn't include a hidden layer, but if you mean something else by minimal, you should try to define that further. For me the interesting thing would be to find a suitable functional form that can fit the data well enough with even fewer free parameters, for instance by removing some of the biases.
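
Just to sketch what I mean, dropping the three hidden-layer biases would leave 5 free parameters, e.g. something of the form

N = @(p,q,r,w,b) fun((fun(p*w(1)) + fun(q*w(2)) + fun(r*w(3)))*w(4) + b);

Whether a form like that can still match all 8 rows of your table is exactly the kind of question I find interesting.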
