DIFFERENT RESULTS FROM THE NEURAL NETWORK ON EACH RUN

Hi,
I hope someone can help me with my question.
When I run the backpropagation neural network more than once on the same data set, I get a different set of results (the predicted results are different each time). Is there a way to train the neural network so that it outputs the same (lowest-error) predictions every time the code is run on the same data set? When I enter the test image the first time, it is classified as the first class, but when I rerun the program with the same test image, it is classified as the second class. How can I solve this problem? My file is attached. Thanks

 Accepted Answer

If you are using a saved net that has been previously trained, this should not happen.
If you are setting the RNG to the same initial state before retraining, this should not happen either.
Hope this helps.
Thank you for formally accepting my answer.
Greg
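A minimal sketch of what the answer suggests (variable names `x`, `t`, and `testimage` are assumed, not from the original code): seed the RNG before creating the net so the random initial weights are identical on every run, then save the trained net and reuse it without retraining.

```matlab
% Sketch, assuming inputs x, targets t, and a test input testimage.
rng(0);                        % same seed => same random initial weights
net = patternnet(10);          % classification net, 10 hidden nodes (default)
[net, tr] = train(net, x, t);
save('mynet.mat', 'net');      % store the trained network

% Later (e.g. in another .m file): load and reuse without retraining.
S = load('mynet.mat');
y = S.net(testimage);          % same input => same output on every run
```

Either measure alone is enough for reproducibility: a loaded, already-trained net never retrains, and a fixed seed makes retraining deterministic.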

6 Comments

Thanks, Greg. Do you mean I should save the net after training, and then test it in another .m file? Or can I train the net and then test it in the same file? Please help me.
I also want to know what this result means.
If you save the net, you can load it later to use on whatever you want provided you have the correct input dimensionality.
However, you cannot evaluate the results unless you know what the correct answer should be.
Thanks, Greg, for your reply. When I enter an image from the first class, the output should be the first class, but this does not happen: sometimes it is classified as the second class and sometimes as the first class. I saved the net, but nothing changed. Please, Greg, help me solve the problem.
What are the sizes of your training, validation and test sets?
What range of hidden node values are you searching over?
How many random initial weight initializations for each hidden node value?
What are the trn/val/test R-squared values for the "best" (i.e. max(R2val)) design?
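The questions above describe a standard double-loop design search. A sketch of it (everything here is assumed, not from the thread: the hidden-node range `Hvec`, the trial count `Ntrials`, and a single-output row-vector target `t`):

```matlab
% Sketch: for each hidden-node count H, try Ntrials random weight
% initializations and keep the design with the best validation R^2.
Hvec      = 2:2:10;      % candidate hidden-node counts (assumed range)
Ntrials   = 10;          % random initializations per H (assumed)
bestR2val = -Inf;
for H = Hvec
    for i = 1:Ntrials
        rng(i);                          % reproducible initialization
        net = fitnet(H);
        [net, tr] = train(net, x, t);
        yval = net(x(:, tr.valInd));     % outputs on the validation set
        tval = t(:, tr.valInd);
        e    = tval - yval;
        R2val = 1 - sum(e.^2) / sum((tval - mean(tval)).^2);
        if R2val > bestR2val
            bestR2val = R2val;
            bestnet   = net;
            bestseed  = i;               % record seed for reproducibility
        end
    end
end
```

Recording the winning seed lets you recreate the exact same "best" net later, which is the reproducibility the original question asks for.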
Thank you, Greg. The sizes of the training, validation and test sets are:
mynet.divideParam.trainRatio = 70/100;
mynet.divideParam.valRatio = 15/100;
mynet.divideParam.testRatio = 15/100;
For the hidden nodes I used the default (10).
But I have some stupid questions: I don't understand "How many random initial weight initializations for each hidden node value?" and "What are the trn/val/test R-squared values for the 'best' (i.e. max(R2val)) design?"
How can I randomize the initial weights? Doesn't MATLAB do that itself?
Please help me, Greg.
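To the last question: yes, MATLAB randomizes the initial weights itself every time a net is created or re-initialized; what you control is the seed. A minimal sketch (seed value and variable names assumed):

```matlab
% Sketch: MATLAB draws new random weights whenever a net is created
% or re-initialized; fixing the RNG seed makes those draws repeatable.
rng(42);                      % any seed; same seed => same weights
net = fitnet(10);
net = configure(net, x, t);   % sets layer sizes and random initial weights
% net = init(net);            % calling init re-randomizes the weights
```

This is why an unseeded script gives different classifications on each run: each `train` call starts from different random weights and converges to a different local minimum.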




Asked: 19 Jun 2014
Edited: 21 Jun 2014
