I am trying to implement a neural network with leave-one-out cross-validation. The problem is that when I train the network I get a different result each time.
My code is:
hiddenLayerSize = 10;
net = patternnet(hiddenLayerSize);
net.divideFcn = '';                              % disable automatic train/val/test split
net = train(net, inputs, targets);
testOut = net(validation);
[c, cm] = confusion(validationTarget, testOut);  % cm is the confusion matrix
TP = cm(1,1); FN = cm(1,2); TN = cm(2,2); FP = cm(2,1);
fprintf('Sensitivity : %f%%\n', TP/(TP+FN)*100);
fprintf('Specificity : %f%%\n\n', TN/(TN+FP)*100);
Is it because train() uses different proportions of the input data each time? To avoid dividing the data into training, validation and test sets, I have set net.divideFcn = ''. I have also tried setting net.divideParam.trainRatio = 100/100.
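For reference, a minimal sketch of how to force train() to use every sample for training (no random split), assuming the same inputs/targets variables as in the question:

```matlab
% Keep all data in the training set instead of the default random
% 'dividerand' split. Assumes inputs/targets are defined as in the question.
net = patternnet(10);
net.divideFcn = 'dividetrain';   % every sample goes to the training set
% (setting net.divideFcn = '' also disables the split, as tried above)
net = train(net, inputs, targets);
```

Note that disabling the split removes the random data division, but training can still differ between runs because the initial weights are random.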
I have tried to set EW = 1, but it does not change anything.
Try adding this command at the beginning of your script:

rng(0)

Your net is initialized with random weights, so training starts from a different point in each run. If you always set the same initial weights, you will always get the same answer. The function above sets the same seed every time, so the rand() sequence is always identical.
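A minimal reproducible-training sketch, assuming the inputs/targets variables from the question (the seed value 0 is arbitrary):

```matlab
rng(0);                                  % fix the global RNG seed
net = patternnet(10);
net.divideFcn = '';                      % no random data division
net = configure(net, inputs, targets);   % size the net for this data
net = init(net);                         % weight init now draws from the seeded RNG
net = train(net, inputs, targets);
```

Re-running these lines on the same data should produce the same trained weights every time, since both sources of randomness (data division and weight initialization) are fixed.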
Now a new curious thing has occurred: when I run the cross-validation with, for example, two portions of data, the result seems to depend on previous training runs.
This is output from MATLAB:
Sensitivity : 85.185185%
Specificity : 93.684211%
Sensitivity : 41.176471%
Specificity : 97.549020%
Sensitivity : 23.529412%
Specificity : 97.549020%
In the first execution I validate on subjectID = 1 and train on subjectID = 2, and in the next loop I validate on subjectID = 2 and train on subjectID = 1.
In the second execution I start by validating on subjectID = 2 and training on subjectID = 1, which gives a different result from the second loop of the first execution, even though it is the same training data and validation data. I ensure that all variables are cleared before each loop of the cross-validation. It is also curious that the specificities are the same while the sensitivities differ.
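This behavior usually means state (the RNG position or a trained net) carries over between folds. A sketch of a leave-one-subject-out loop that resets the seed and rebuilds the network in every fold; the variable names (inputsBySubject, targetsBySubject, nSubjects) are placeholders, not from the question:

```matlab
for s = 1:nSubjects
    rng(0);                               % same RNG state in every fold
    net = patternnet(10);                 % fresh, untrained network each fold
    net.divideFcn = '';
    trainIdx = setdiff(1:nSubjects, s);
    X = [inputsBySubject{trainIdx}];      % concatenate training subjects
    T = [targetsBySubject{trainIdx}];
    net = train(net, X, T);
    out = net(inputsBySubject{s});        % evaluate on the held-out subject
end
```

With the seed and the network both reset inside the loop, a fold's result depends only on which subjects it trains and validates on, not on the order the folds run in.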
I suspect that similar results are obtained because the same RNG seed is used.
See my previous comments about not resetting the seed.
How large is your data set? I assume your trn/tst split is 50/50, and you are using 2-fold XVAL without a validation set.
See my previous comments on the difference between validation and testing.
Hope this helps.
Different MATLAB Neural Network Toolbox results occur for two reasons: (1) random data division and (2) random weight initialization.
For the data division problem, use the function "divideblock" or "divideint" instead of "dividerand", like this:

net.divideFcn = 'divideblock';
net.divideParam.trainRatio = 0.7;
net.divideParam.valRatio = 0.15;
net.divideParam.testRatio = 0.15;
For the random weight initialization problem, it seems (I'm not sure) that all of MATLAB's initialization functions ("initzero", "initlay", "initwb", "initnw") involve randomness. So you should force these functions to produce the same result on each call.
RandStream.setGlobalStream(RandStream('mrg32k3a','Seed',1234));
And then use one of them:
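A sketch of putting the two together: seed the global stream, then let the chosen initialization function run deterministically. This assumes the inputs/targets variables from the question; "initnw" (Nguyen-Widrow) is the patternnet default, and the seed value is arbitrary:

```matlab
% Seed the global stream so every subsequent random draw is reproducible.
RandStream.setGlobalStream(RandStream('mrg32k3a','Seed',1234));
net = patternnet(10);
net.initFcn = 'initlay';                 % layer-by-layer initialization
net = configure(net, inputs, targets);   % size the net for this data
net = init(net);                         % weights now come from the seeded stream
```

After this, repeated calls to init(net) following the same seeding line will produce identical initial weights.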