Neural Network Toolbox: initializing the weights and biases (accepted answer: control the random number generator)

Dear all,
I have a problem when using the Neural Network Toolbox: for a given data set, each training run produces a different network (with different weights and biases). I learned that this is due to the random data division (which I have already fixed in my code) and the random initialization of the weights and biases. Instead of the default initFcn (which is rands), I tried the other functions, e.g. midpoint and initzero. However, I have problems with both:
1. When using initzero, the training always stops at the 2nd iteration with horrible accuracy; almost all the weights and biases remain zero.
2. When using midpoint, I receive an error caused by the default net.inputWeights{i,j}.weightFcn (dotprod). I tried some other options (dist, mandist, ...) without success.
So, could you please suggest a way to overcome this problem? I just want to be able to reproduce the same training and obtain the same network with this toolbox. The code is attached herein.
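For context, pinning down the data division typically looks like the following minimal sketch ('divideind' and the index ranges are illustrative placeholders, not the attached code):
net = fitnet(10);                  % placeholder network
net.divideFcn = 'divideind';       % divide by explicit indices, not randomly
net.divideParam.trainInd = 1:70;   % placeholder index ranges
net.divideParam.valInd   = 71:85;
net.divideParam.testInd  = 86:100;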
Thank you very much in advance.
Cheers, Chu

Accepted Answer

Greg Heath on 25 Nov 2014
Explicitly initialize the random number generator state before configuring or training. Most of my posted designs use one of the following statements before net creation:
rng(0)
rng(4151941)
rng('default')
For examples, search the NEWSGROUP and ANSWERS using greg and one of the above.
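For instance, a minimal reproducibility sketch (the fitnet call and the simplefit_dataset example data are illustrative choices, not from the original post):
rng(0)                         % seed BEFORE net creation and training
[x, t] = simplefit_dataset;    % MATLAB example data
net = fitnet(10);              % defaults: one hidden layer, 10 nodes
[net, tr] = train(net, x, t);
% Running these four lines again from rng(0) reproduces the identical
% data division, initial weights, and trained network.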
Hope this helps.
Thank you for formally accepting my answer
Greg
Comments
Greg Heath on 29 Nov 2014
1. ALWAYS begin with the help example code using MATLAB example data.
2. Use as many defaults as possible.
3. Next, choose an example that most resembles your own data.
4. If results are unsatisfactory, sequentially change one default at a time.
5. If you type the command
net % NO semicolon
you can find the current parameter settings. It is advisable not to change too many at once.
6. Once I determine the feedback and input lags from the significant lags of the target autocorrelation function and/or the target/input crosscorrelation function, I use a double-loop search: an outer loop over a range of hidden node counts and, for each, an inner loop over Ntrials (~10) random weight initializations (see the sketch after this list).
7.
help narxnet
help maglev_dataset
8. Oodles of examples can be found by searching the NEWSGROUP and ANSWERS using
greg narxnet
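A minimal sketch of that double loop, assuming a narxnet on the maglev example data; the delay values, hidden-node range, and seeds are illustrative placeholders:
[X, T] = maglev_dataset;            % toolbox example data
ID = 1:2; FD = 1:2;                 % input/feedback delays (placeholders)
Hvec = 2:2:10;                      % outer loop: candidate hidden node counts
Ntrials = 10;                       % inner loop: random initializations per H
R2 = zeros(Ntrials, numel(Hvec));
for j = 1:numel(Hvec)
    for i = 1:Ntrials
        rng(i)                      % distinct but reproducible initialization
        net = narxnet(ID, FD, Hvec(j));
        [Xs, Xi, Ai, Ts] = preparets(net, X, {}, T);
        net = train(net, Xs, Ts, Xi, Ai);
        Y = net(Xs, Xi, Ai);
        mse_ = perform(net, Ts, Y); % mean squared error
        R2(i, j) = 1 - mse_ / var(cell2mat(Ts), 1); % R^2 = 1 - NMSE
    end
end
The (i, j) entry with the best R2 identifies a good hidden node count and a reproducible seed for the final design.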
===========================================================================
1. What is the justification for using the following instead of the defaults?
a. Three hidden node layers instead of one
b. FD = ID = 1:3 and H1 = 3, H3 = H2 = 2
c. net.divideMode = 'value' for time series
d. a 0.8/0.1/0.1 data division
e. explicit weight/bias initialization
2. Your program works well if you remove the weight initialization section.
3. If you want to initialize all weights to zero, just use the commands getwb and setwb.
help getwb
help setwb
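A minimal sketch (the fitnet and simplefit_dataset choices are illustrative):
[x, t] = simplefit_dataset;
net = fitnet(10);
net = configure(net, x, t);        % size the weights/biases for this data
wb = getwb(net);                   % all weights and biases as one vector
net = setwb(net, zeros(size(wb))); % set every weight and bias to zero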
Hope this helps.
Greg
chu on 1 Dec 2014
Hi Greg, thanks a lot for the useful tips. I would like to answer your questions and comments above:
1. I started with the default options, then played a bit by varying some of them. Indeed, there was no justification for the changes.
2. It's true.
3. Your answer is great! setwb is the command that I should have used (instead of initzero). By the way, if you could put a few words on the following question, I would be able to formally accept your answer there as well; it will be helpful for others: http://www.mathworks.com/matlabcentral/answers/164503-neural-network-toolbox-initialize-the-weights-and-biases-with-initzero
Thank you again for your time, Greg. Wish you all the best. Chu

