Arguments of the neural network "newp" command

Hi,
I hope all are fit and fine. I want to ask a question about the neural network command "newp". I want to train the weights and bias of a 3-input AND gate, but I am having some difficulty with the initialization in the newp command: how do we select the first argument of newp, and how should it be written for this problem? Kindly help me in this regard.
Regards, Mudasir
clear
clc
p = [0 1 0 1 0 1 0 1; 0 0 1 1 0 0 1 1; 0 0 0 0 1 1 1 1]  % 3-input truth table, one column per case
t = [0 0 0 0 0 0 0 1]                                     % AND-gate targets
net = newp([0 1; 0 1; 0 1], 1)  % first argument: one [min max] range row per input (3 binary inputs), second argument: 1 neuron
net.IW{1,1} = [0 0 0]           % initialize weights and bias after the network exists, or newp overwrites them
net.b{1} = 0
net.trainParam.epochs = 5
net = train(net, p, t)
net.IW{1,1}
net.b{1}
a = sim(net, p)
error = a - t

Accepted Answer

Walter Roberson on 7 Jun 2015
newp() was replaced as of R2010b, which is why you cannot find it documented. Use the replacement: http://www.mathworks.com/help/releases/R2012a/toolbox/nnet/ref/perceptron.html
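For the 3-input AND gate in the question, a minimal sketch using the replacement perceptron() function (assuming Neural Network Toolbox R2010b or later; epochs value chosen for illustration):

```matlab
% Sketch: 3-input AND gate with perceptron(), the replacement for newp().
% Unlike newp, no input-range argument is needed: the input size is
% inferred from the data when train() is called.
p = [0 1 0 1 0 1 0 1; 0 0 1 1 0 0 1 1; 0 0 0 0 1 1 1 1];  % inputs, one column per case
t = [0 0 0 0 0 0 0 1];                                     % AND-gate targets
net = perceptron;
net.trainParam.epochs = 20;   % AND is linearly separable, so training converges quickly
net = train(net, p, t);
y = net(p)                    % network output on the training inputs
```

After training, `net.IW{1,1}` and `net.b{1}` hold the learned weights and bias, just as with the old newp network.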
  2 Comments
Mudasir Ahmed on 7 Jun 2015
Thanks sir, but it is still working on MATLAB 2013, maybe due to backward compatibility, and its help text also still exists, as follows:
>> help newp
newp Create a perceptron.
Obsoleted in R2010b NNET 7.0. Last used in R2010a NNET 6.0.4.
Syntax
net = newp(p,t,tf,lf)
Description
Perceptrons are used to solve simple (i.e. linearly
separable) classification problems.
NET = newp(P,T,TF,LF) takes these inputs,
P - RxQ matrix of Q1 representative input vectors.
T - SxQ matrix of Q2 representative target vectors.
TF - Transfer function, default = 'hardlim'.
LF - Learning function, default = 'learnp'.
Returns a new perceptron.
The transfer function TF can be HARDLIM or HARDLIMS.
The learning function LF can be LEARNP or LEARNPN.
Examples
This code creates a perceptron layer with one 2-element
input (ranges [0 1] and [-2 2]) and one neuron. (Supplying
only two arguments to newp results in the default perceptron
learning function LEARNP being used.)
net = newp([0 1; -2 2],1);
Now we define a problem, an OR gate, with a set of four
2-element input vectors P and the corresponding four
1-element targets T.
P = [0 0 1 1; 0 1 0 1];
T = [0 1 1 1];
Here we simulate the network's output, train for a
maximum of 20 epochs, and then simulate it again.
Y = net(P)
net.trainParam.epochs = 20;
net = train(net,P,T);
Y = net(P)
Notes
Perceptrons can classify linearly separable classes in a
finite amount of time. If input vectors have a large variance
in their lengths, the LEARNPN can be faster than LEARNP.
Properties
Perceptrons consist of a single layer with the DOTPROD
weight function, the NETSUM net input function, and the specified
transfer function.
The layer has a weight from the input and a bias.
Weights and biases are initialized with INITZERO.
Adaption and training are done with TRAINS and TRAINC,
which both update weight and bias values with the specified
learning function. Performance is measured with MAE.
See also sim, init, adapt, train, hardlim, hardlims, learnp, learnpn, trainb, trains.
Walter Roberson on 7 Jun 2015
It says at the top that it was obsoleted. There is no point in trying to figure out how it works.


More Answers (0)
