IT IS CONSIDERED A HEINOUS CRIME TO TOP-POST.
PLEASE REFRAIN FROM WRITING REPLIES ABOVE PREVIOUS ENTRIES.
I HAVE MOVED YOUR REPLY TO THE END.
"Jude" wrote in message <kebrnc$cve$1@newscl01ah.mathworks.com>...
> "Greg Heath" <heath@alumni.brown.edu> wrote in message <kd7fkr$2h7$1@newscl01ah.mathworks.com>...
> > "Jude" wrote in message <kd6vmo$93u$1@newscl01ah.mathworks.com>...
> > > I am using the Neural Network Toolbox to prove a concept. I'd like to use binary inputs for my learning. Is there any special learning algorithm available for binary inputs? (Or how should I modify this call, i.e., change any arguments for binary inputs: "newff(xll',y_learn,
> > >[20],{'tansig','tansig'},'trainbfg','learngdm','msereg');" to fit binary inputs?)
> >
> > The only serious input recommendation I have is to use bipolar binary {-1, 1} and 'tansig'
> > (default) for the hidden layer. In addition, why not transpose xll once and for all instead of doing it in multiple commands?
> >
> > For outputs: The transfer and learning functions depend on the type of target
> >
> > Reals: 'purelin' and 'trainlm'(default)
> >
> > Unipolar binary {0,1}: 'logsig' and 'trainscg'; % Use for classification with vec2ind/ind2vec
> >
> > Bipolar binary {-1,1}: 'tansig' and 'trainscg'
> >
> > > I am using it as follows:
> > > NETff = newff(xll',y_learn,[20],{'tansig','tansig'},'trainbfg','learngdm','msereg');
> >
> > Why are you using validation stopping (default) AND 'msereg' ? Because H = 20 is
> > definitely overfitting? Just use a more practical value for H. See below.
> >
> How can I change, so validation will not stop?
> NETff = newff(xll',y_learn,[20],{'tansig','tansig'},'trainbfg','learngdm');
>
>
> > What size are your input and target matrices?
> >
> > For [I N] and [O N] input and target matrices, you will have Neq = N*O equations to estimate Nw = (I+1)*H+(H+1)*O unknown weights. Without validation stopping or regularization, it is wise to keep Neq > r*Nw for r > 1, i.e.,
> >
> > H <= (Neq/r - O) / (I+O+1) % r > 1
> >
> > I have successfully used H small enough so that ~2 <= r <= ~8 to 20. For
> > smaller values I recommend validation stopping or regularization. I feel better using this ratio as a guide rather than just using a very large value for H (like, um, 20?) and covering up by using both validation stopping and regularization.
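As a quick numerical check of the sizing rule above, here is a minimal sketch (in Python, purely for illustration; the values of I, O, N, and r are made-up examples, not taken from the thread):

```python
# Illustration of the network-sizing rule Neq > r*Nw (hypothetical sizes).
I, O, N = 10, 1, 100   # input dim, output dim, number of training cases (made up)
r = 2                  # desired equations-per-weight ratio, r > 1

H_max = int((N * O / r - O) / (I + O + 1))  # upper bound on hidden units H
Neq = N * O                                  # number of training equations
Nw = (I + 1) * H_max + (H_max + 1) * O       # number of weights at H = H_max

print(H_max, Neq, Nw)   # -> 4 100 49
assert Neq > r * Nw     # the guideline Neq > r*Nw holds at this H
```

So for 10 bipolar inputs, one output, and 100 training cases, r = 2 already caps H at 4, far below the H = 20 in the original call.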
> >
> > > NETff.trainParam.epochs = 100000;
> >
> > What is wrong with the default?
> >
> > > NETff.trainParam.goal = 0.00001;
> >
> > MSEgoal ~ 0.01*mean(var(y_learn')) % or use 0.005 instead of 0.01
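The heuristic above sets the MSE goal to about 1% of the mean target variance. A minimal sketch of the same arithmetic (Python for illustration; the target row is a made-up example, and the variance uses the N-1 normalization, as MATLAB's var does):

```python
# Sketch of the MSE-goal heuristic: goal ~ 1% of mean target variance.
y_learn = [[0.0, 1.0, 1.0, 0.0, 1.0]]  # one output row, five cases (hypothetical)

def sample_var(xs):
    # Sample variance with N-1 normalization, matching MATLAB's var().
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

mse_goal = 0.01 * sum(sample_var(row) for row in y_learn) / len(y_learn)
print(mse_goal)   # -> 0.003 for this toy target
```

This ties the stopping goal to the scale of the targets instead of a hard-coded 0.00001.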
> >
> > > NETff= train(NETff,xll',y_learn);
> >
> > [ NETff tr Yff Eff ] = train(NETff,xll',y_learn);
> >
> > > Yff = sim(NETff,xll');
> >
> > Unnecessary; the train call above already returns Yff.
> >
> > > Where xll' is a binary number, e.g.: 1010101010
> >
> > Use bipolar binary
> >
> > > Thanks.
> > > Jude
> >
> > OKEYDOKE
> >
> > Greg
> >
> > PS: try tr = tr and see all the goodies that are in that structure!
> Hi Greg,
>
> Thank you for your help. I incorporated most of your inputs.
>
> My issues are:
> Once I complete the NN training in the GUI (nntraintool) it shows 10 inputs (I've given 10 binary inputs). However, when I type my net at the command prompt (NETff) it only shows one input.
>
> NETff =
> dimensions:
>
> numInputs: 1
> numLayers: 3
> numOutputs: 1
> Does MATLAB automatically change my binary inputs to a scalar?
I think you mean vector inputs to a scalar.
No. The one input is 10-dimensional.
However, your binary inputs are changed to real values for computational purposes.
Hope this helps.
Greg
P.S. Unless you have a specific reason for doing otherwise, only use one hidden layer.
One hidden layer is sufficient for a universal approximator.
