How to perform cross-validation of a RBFN created using newrb?

Hello!
Let's say I have a network created with newrb using a particular training data set. I have checked it on a validation data set and obtained certain results (split-sample validation).
Now I would like to perform cross-validation over the whole data set. But obviously I cannot use newrb again, since it would generate another NN topology, while I would like to test the topology that was created in the first step.
I have the following sequence of actions:
1. Create the network with newrb using the training data set only.
2. Retrain it with the train function on the whole data set (training + validation mixed), split with crossvalind for cross-validation.
PROBLEM: While the first approach (split-sample validation) does not give good results, the second approach gives even higher errors.
QUESTION: Does this happen because of the sensitivity of the RBFN to the number of samples in the data set? Are there other ways to perform cross-validation of an RBFN?
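(For concreteness, the split-sample step described above might look like the following sketch; the variable names are illustrative, and goal and spread are assumed to have been chosen beforehand:)

```matlab
% Split-sample validation: design on the training set,
% evaluate on the held-out validation set (illustrative names).
net  = newrb(xtrain, ttrain, goal, spread);   % design the RBFN
yval = sim(net, xval);                        % outputs on the held-out set
e    = tval - yval;
mseVal = sum(e(:).^2) / numel(e)              % held-out mean squared error
```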
Thank you in advance! Alexandra
  1 Comment
Alexandra on 11 May 2016
I was also trying to retrain the RBFN generated by newrb using classical trainbr. Results were poor. I am particularly interested in a certain network structure and would like to perform cross-validation on it. Any other suggestions?
Thanks, Alexandra


Accepted Answer

Greg Heath
Greg Heath on 14 May 2016
> let's say I have a network created using newrb using a particular training data set. I have checked it on validation data set and obtained certain results. (split sample validation)
There is a misuse of the term VALIDATION. With NNs, the validation set is used to
a. Stop training when the validation-subset performance decreases for a specified number of epochs.
b. Rank multiple designs.
> Now I would like to perform cross-validation over the whole amount of dataset. But obviously I cannot use newrb again, since it will generate another NN topology, while I would like to test the topology that was created in the first step.
That makes no sense. With NEWRB the topology changes whenever the training set changes.
> I have the following sequence of actions:
> 1. Create the network with newrb using the training data set only.
That makes no sense.
> 2. Retrain it with the train function on the whole data set (training + validation mixed), split with crossvalind for cross-validation.
That makes no sense because NEWRB is not created using an external training function.
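For reference, NEWRB both creates and designs the network in a single call (a minimal sketch; goal, spread, and MN are assumed to be chosen for the problem):

```matlab
% NEWRB adds radial basis neurons one at a time until the MSE goal
% is met or MN neurons have been added; the design happens inside
% this one call.
net = newrb(x, t, goal, spread, MN);
y   = sim(net, x);   % evaluate; there is no separate train() step
```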
> PROBLEM: While the first approach (split-sample validation) does not give good results, the second approach gives even higher errors.
> QUESTION: Does this happen because of the sensitivity of the RBFN to the number of samples in the data set? Are there other ways to perform cross-validation of an RBFN?
> thank you in advance! Alexandra
Randomly divide the data into k subsets of (as near as possible) equal size.
for i = 1:k
    if i > 1, replace subset i-1, end
    1. Use the ith subset as a nontraining test subset
    2. Design with the remaining k-1 subsets
    3. Test with the ith subset
end
Average the k results.
This can be repeated as many times as desired and the results averaged.
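In MATLAB, the loop above might be sketched as follows (a sketch, assuming crossvalind from the Bioinformatics Toolbox for the fold labels, and that goal and spread have already been chosen; note that each fold necessarily designs its own NEWRB topology, as explained above):

```matlab
% k-fold cross-validation of NEWRB designs (sketch).
% x: inputs, t: targets, one column per sample.
k    = 10;
N    = size(x, 2);
fold = crossvalind('Kfold', N, k);   % random fold label for each sample
msek = zeros(1, k);
for i = 1:k
    tst = (fold == i);               % ith subset: nontraining test data
    trn = ~tst;                      % remaining k-1 subsets: design data
    net = newrb(x(:,trn), t(:,trn), goal, spread);
    y   = sim(net, x(:,tst));        % test with the ith subset
    e   = t(:,tst) - y;
    msek(i) = sum(e(:).^2) / numel(e);
end
mseCV = mean(msek)                   % average of the k test results
```

Repeating this with a fresh random fold assignment and averaging the results gives the repeated cross-validation estimate mentioned above.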
Hope this helps.
Thank you for formally accepting my answer
Greg

