How to keep ANN training result stable?

When we train a neural network, we normally divide the data randomly into three groups (training, validation, and test, e.g. with trainlm) or two (training and test, e.g. with trainbr). To my knowledge, training several times yields different networks, and these networks will predict different outputs even for the same input data, but the differences should be small.
I am currently using the Neural Network Toolbox to model a nonlinear system. Using the same initial settings, I trained several times and obtained several neural networks. When I compared the predicted outputs of these networks, I found that they often differ substantially and are sometimes obviously incorrect for my nonlinear system.
Can anyone tell me what causes this? Is it a data-quality problem, or something else? How can I reduce the differences between the trained networks?

Accepted Answer

Greg Heath on 27 Jun 2015
The variations tend to come from
1. The random number seed (I like rng(4151941))
2. The random division of data
3. The random initial weights
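The three sources above can each be pinned down explicitly in the toolbox. A minimal sketch, assuming a fitnet regression network and hypothetical input/target matrices x and t (the seed value is the one from the answer; the hidden-layer size and division function are illustrative choices, not Greg's prescriptions):

```matlab
% Sketch: controlling the three sources of run-to-run variation.
% x, t are hypothetical input and target matrices (one column per sample).
rng(4151941)                    % 1. fix the random number seed
net = fitnet(10);               % illustrative hidden-layer size
net.divideFcn = 'dividerand';   % 2. random data division is now repeatable
                                %    (or use 'divideblock' for a fixed split)
net = configure(net, x, t);     % 3. initial weights are drawn after the seed,
                                %    so they are the same on every run
[net, tr] = train(net, x, t);
```

With the seed fixed before network creation and training, repeated runs of this script reproduce the same data division, the same initial weights, and hence the same trained network.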
I generally use defaults except for a range of hidden node sizes
h = Hmin:dH:Hmax
Then for each value of h, train Ntrials (usually 10) models.
Sometimes a perusal of the Ntrials x numel(h) matrix of results causes me to change some parameter(s) and repeat.
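The search described above can be sketched as a double loop; Hmin, dH, Hmax, and Ntrials are the names from the answer, while the particular values, the use of fitnet, and the R-squared tabulation are illustrative assumptions:

```matlab
% Sketch: Ntrials candidate networks for each hidden-layer size in h.
% x, t are hypothetical input and target matrices (one column per sample).
Hmin = 2; dH = 2; Hmax = 10;      % illustrative range of hidden node sizes
Ntrials = 10;                     % models trained per candidate size
h = Hmin:dH:Hmax;

rng(4151941)                      % make the whole experiment repeatable
R2 = zeros(Ntrials, numel(h));    % the Ntrials x numel(h) matrix of results
for j = 1:numel(h)
    for i = 1:Ntrials
        net = fitnet(h(j));       % fresh random initial weights each trial
        [net, tr] = train(net, x, t);
        y = net(x);
        % R-squared: 1 minus MSE normalized by the target variance
        R2(i, j) = 1 - perform(net, t, y) / var(t, 1, 2);
    end
end
```

Scanning R2 column by column shows how performance varies with hidden-layer size and how much scatter the random initializations cause at each size, which is the perusal step that suggests whether to adjust parameters and repeat.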
I have posted zillions of examples in both the NEWSGROUP and ANSWERS. My tutorials are in the NEWSGROUP.
I always start using all defaults to get the lay of the land.
Reasonable searchwords to include are subsets of
neural greg Hmin:dH:Hmax Ntrials
Post selected code with comments and/or error messages if you have further problems.
Hope this helps.
Thank you for formally accepting my answer
Greg
