How exactly do the testing and validation sequences work in the Neural Network Toolbox in MATLAB?

I have a few questions concerning neural networks, especially the Neural Network Toolbox, and I'd really appreciate it if you could give me some answers. My questions are as follows:
1/ How can we get the weight matrices used in the training, testing and validation sequences?
As far as I know, the final weights of the training sequence are used to test the model in the testing sequence, using the same learning algorithm and an unused database; and to validate the model, we use the final testing weights with the same learning algorithm.
2/ How exactly do the testing and validation sequences work in MATLAB?
3/ Are the training biases also used in the testing and validation sequences?
4/ Why can't we have more than one hidden layer in multilayer perceptron problems (for example, fitting problems)?
Thank you in advance.

Accepted Answer

Greg Heath on 17 Jul 2015
> I have a few questions concerning neural networks, especially the Neural Network Toolbox, and I'd really appreciate it if you could give me some answers. My questions are as follows:
> 1/ How can we get the weight matrices used in the training, testing and validation sequences?
Why are you emphasizing data division when the same weights are used on all of the data?
[ I N ] = size(input);
[ O N ] = size(target);
b1 = net.b{1}; B1 = repmat(b1,1,N); IW = net.IW{1,1};
b2 = net.b{2}; B2 = repmat(b2,1,N); LW = net.LW{2,1};
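For example, once extracted, the same weights reproduce the net's output on any subset of the data. A minimal sketch, assuming a standard two-layer net with tansig hidden and purelin output layers, and ignoring the input/output processing functions (e.g., mapminmax) that a default net applies:

```matlab
% Apply the extracted weights by hand to all N columns of the input.
% B1 is H x N, IW is H x I, B2 is O x N, LW is O x H.
h = tansig( B1 + IW*input );   % hidden layer output, H x N
y = B2 + LW*h;                 % network output, O x N
% With the processing functions removed, y should match net(input)
% on the training, validation and test columns alike.
```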
> As far as I know, the final weights of the training sequence are used to test the model in the testing sequence, using the same learning algorithm and an unused database; and to validate the model, we use the final testing weights with the same learning algorithm.
No.
Every epoch, the training data are used to update the weights. Then the net is
applied to ALL of the data and the outputs are updated.
If the validation subset error does not decrease for a specified number of
consecutive epochs (default is 6), training is stopped, because this is interpreted as an indication that the net will not generalize well, i.e., perform satisfactorily on nontraining data.
There are several other stopping criteria, the most important of which are
that the training subset error has either reached the specified goal or a local minimum.
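These criteria are all controlled through net.trainParam. A sketch, assuming a fitnet and the documented trainParam fields:

```matlab
net = fitnet(10);                  % example net with 10 hidden nodes
net.trainParam.max_fail = 6;       % consecutive validation failures before stopping
net.trainParam.goal     = 0;       % training performance goal (MSE)
net.trainParam.min_grad = 1e-7;    % gradient threshold (local-minimum stop)
net.trainParam.epochs   = 1000;    % maximum number of epochs
[net, tr] = train(net, input, target);
tr.stop                            % reports which criterion ended training
```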
> 2/ how do exactly the testing and validation sequences work in Matlab?
In addition to the validation stopping explained above, when multiple nets
are designed, they are ranked by validation subset error, which is
considerably LESS BIASED than the training subset error.
Once a best design is chosen via the validation subset performance, the
corresponding test subset is used to obtain an UNBIASED estimate of
performance on all (especially unseen) nontraining data.
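That design loop can be sketched as follows (variable names here are hypothetical; the tr fields best_vperf and best_tperf are the toolbox's training-record entries):

```matlab
% Design Ntrials candidate nets, rank them by validation subset error,
% then quote the test subset error of the winner as the unbiased estimate.
Ntrials   = 10;
bestvperf = Inf;
for i = 1:Ntrials
    net = fitnet(10);                    % fresh random initial weights
    [net, tr] = train(net, input, target);
    if tr.best_vperf < bestvperf         % rank by validation error
        bestvperf = tr.best_vperf;
        bestnet   = net;
        besttr    = tr;
    end
end
tperf = besttr.best_tperf                % unbiased test-set estimate
```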
> 3/ Are also the training biases used in the testing and validation sequences?
I do not understand the question.
> 4/ Why can't we have more than one hidden layer in the multilayer perceptrons problems? ( for example fitting problems)
One hidden layer is sufficient for a universal function approximator.
Infrequently, however, more are used to exploit some a priori information
about the relationship between the input and output. Sometimes this
results in a smaller total number of weights to estimate.
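For the rare case where more than one hidden layer is wanted, the toolbox does allow it; you pass a vector of hidden-layer sizes:

```matlab
net1 = fitnet(10);       % one hidden layer with 10 nodes (usually sufficient)
net2 = fitnet([10 5]);   % two hidden layers with 10 and 5 nodes
```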
Hope this helps.
Thank you for formally accepting my answer
Greg
Greg Heath on 23 Jul 2015
Yes, the biases are just weights with special names.
Some practitioners and software add an extra component of unity to the input, xnew = [ ones(1,N); xold ], and do not use what we would call a bias. Obviously, the weight for xnew(1,:) is what we call a bias. (Actually, I have done this in the past when I wrote my own code.)
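A sketch of that augmented-input trick (a hand-rolled layer with hypothetical dimensions, not toolbox code):

```matlab
% Prepend a row of ones so the first column of the weight matrix
% plays the role of the bias vector.
H    = 10;                      % number of hidden nodes (arbitrary)
xold = rand(3, 5);              % example I x N input matrix
[I, N] = size(xold);
xnew = [ ones(1,N); xold ];     % (I+1) x N augmented input
W = rand(H, I+1);               % W(:,1) is the "bias"
h = tansig(W * xnew);           % same as tansig(W(:,1)*ones(1,N) + W(:,2:end)*xold)
```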
Hope this helps.
Greg
