
Asked by Terence on 20 Mar 2014

Hi,

I am new to neural networks and I'm not sure how to go about getting a better test error on my dataset. I have a ~20,000x64 dataset X with ~20,000x1 targets Y, and I'm training my neural network to do binary classification (0 and 1) so that it performs as well as possible on another 19,000x64 dataset. I currently get about 0.175 MSE on the test performance, but I want to do better. My dataset contains values in the range of about -22 to 10000.

I used the Neural Network Toolbox GUI to generate a script and modified some of the parameters like so:

```matlab
inputs = X';
targets = Y';

% Create a pattern recognition network
hiddenLayerSize = 5;
net = patternnet(hiddenLayerSize);

% Input and output pre/post-processing functions
net.inputs{1}.processFcns = {'removeconstantrows','mapminmax'};
net.outputs{2}.processFcns = {'removeconstantrows','mapminmax'};

% Division of data for training, validation, testing
net.divideFcn = 'divideblock';       % Divide data into contiguous blocks
net.divideMode = 'sample';           % Divide up every sample
net.divideParam.trainRatio = 70/100;
net.divideParam.valRatio = 15/100;
net.divideParam.testRatio = 15/100;

net.trainFcn = 'trainrp';            % Resilient backpropagation
net.performFcn = 'mse';              % Mean squared error

net.plotFcns = {'plotperform','plottrainstate','ploterrhist', ...
                'plotregression','plotfit'};

% Train the network
[net,tr] = train(net,inputs,targets);

% Test the network
outputs = net(inputs);
errors = gsubtract(targets,outputs);
performance = perform(net,targets,outputs)

% Recalculate training, validation and test performance
trainTargets = targets .* tr.trainMask{1};
valTargets = targets .* tr.valMask{1};
testTargets = targets .* tr.testMask{1};
trainPerformance = perform(net,trainTargets,outputs)
valPerformance = perform(net,valTargets,outputs)
testPerformance = perform(net,testTargets,outputs)

% View the network
view(net)
```

I've read online and in the MATLAB documentation about ways to improve the performance, and it suggested things like setting a higher error goal, reinitializing the weights with the init() function, etc., but none of that is helping me achieve better performance. Maybe I'm just not understanding how to do it correctly?

Anyway, can someone please point me toward some way to achieve better accuracy? Also, could you please provide some code in your answer? I can't seem to understand much without looking at code.

Answer by Greg Heath on 21 Mar 2014

Edited by Greg Heath on 21 Mar 2014

Accepted answer

1. Is there evidence that Hopt could be more than 10? Try i = 1:2:19.

2. Since your data set is huge, why not use tic and toc to time your runs?

3. Why are you complicating the code by specifying net properties and values that are already defaults?

4. Your comments say TRAINSCG, the default for patternnet and recommended for binary outputs, but your code uses TRAINRP. Are you having size problems with TRAINSCG?

5. Your plot options are those for regression, not classification. See those associated with patternnet: net = patternnet % NO SEMICOLON

6. Your standardizations are incorrect. Use ZSCORE or MAPSTD. Check to make sure EACH variable is standardized.

7. Unfortunately, I'm not familiar with PLS (although it is the correct function to use for classifier input variable reduction), so some of the following advice may be questionable.

8. Are you trying to reduce 64 dimensions to 8?

9. XL and YL should be transposed.

10. You could save the weights using getwb instead of, or in addition to, saving the nets.

11. Save and plot the overall and trn/val/tst performances vs numhidden.

12. Modify to calculate, save, and plot overall and trn/val/tst percent classification errors.

13. To mitigate the probability of poor initial weights, consider a double-loop design where the inner loop is over Ntrials different weight initializations for each value of numhidden. I use this technique almost all of the time. Search in NEWSGROUP and ANSWERS for examples using

greg patternnet Ntrials
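A minimal sketch of the double-loop design from points 11-13, assuming the inputs/targets matrices from the question; the names Ntrials, Hvec, testPerf, and the 1:2:19 search range are illustrative, not code from the thread:

```matlab
% Outer loop over candidate hidden-layer sizes, inner loop over
% Ntrials random weight initializations for each size.
Ntrials = 10;
Hvec = 1:2:19;                            % candidate numbers of hidden nodes
rng(0)                                    % reproducible initializations
bestPerf = Inf;
testPerf = zeros(numel(Hvec), Ntrials);   % percent test classification error
for j = 1:numel(Hvec)
    for k = 1:Ntrials
        net = patternnet(Hvec(j));        % TRAINSCG is the default
        net.divideParam.trainRatio = 0.70;
        net.divideParam.valRatio   = 0.15;
        net.divideParam.testRatio  = 0.15;
        [net, tr] = train(net, inputs, targets);
        outputs = net(inputs);
        % percent classification error on the test subset (0/1 targets)
        tstInd = tr.testInd;
        testPerf(j,k) = 100*mean(round(outputs(tstInd)) ~= targets(tstInd));
        if testPerf(j,k) < bestPerf
            bestPerf = testPerf(j,k);
            bestNet  = net;               % or save getwb(net) instead
        end
    end
end
% Plot the best test error achieved at each hidden-layer size
plot(Hvec, min(testPerf, [], 2))
xlabel('numhidden'), ylabel('% test classification error')
```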

Hope this helps.

**Thank you for formally accepting my answer**

Greg

Answer by Greg Heath on 20 Mar 2014

0. I-H-O = 94-5-1 node topology; N = 20,000 creation data pairs

1. Ntrneq = 0.7*N*O = 14,000 training equations but only

Nw =(I+1)*H+(H+1)*O = (94+1)*5+(5+1)*1 = 481 unknown weights

2. Probably need more hidden nodes (H > 5). Why was the default H = 10 replaced by H = 5?

3. Probably don't need I = 94 input dimensions or Ntrn = 0.7*N = 14K training examples.

4. 16 to 32 examples per dimension is probably sufficient. Since [16 32]*94 ~ [ 1500 3000], I would start with ~ 10 subsets of ~ 2000 for the following Tasks

a. Standardize inputs to zero-mean/unit-variance

b. Reduce input dimensionality to I < 94 via PLS (PCA and STEPWISEFIT are not optimal for classification)

c. Use the reduced inputs to determine the smallest acceptable value of H by trial and error
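A sketch of the capacity arithmetic above and of steps (a)-(b) on the full data, assuming the 64-column X from the question; the choice of ncomp = 8 components and the use of the PLS scores XS as reduced inputs are assumptions:

```matlab
% Greg's equation/weight counts for an I-H-O topology
[N, I] = size(X);                  % e.g. 20000 x 64
H = 5;  O = 1;
Ntrneq = 0.70*N*O                  % number of training equations
Nw = (I+1)*H + (H+1)*O             % number of unknown weights

% (a) standardize EACH input variable to zero-mean/unit-variance
Xs = zscore(X);                    % column-wise: one variable per column

% (b) reduce dimensionality via PLS; XS holds the N x ncomp X-scores
% (XL and YL are loadings, not reduced data)
ncomp = 8;
[XL, YL, XS] = plsregress(Xs, Y, ncomp);
inputsReduced = XS';               % transposed for train()
```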

Hope this helps

**Thank you for formally accepting my answer**

Greg

Terence on 21 Mar 2014

Hi Greg,

Thanks for the answer. I modified my code to be this:

```matlab
numNN = 10;
nets = cell(numNN, 1);

for i = 1:numNN
    % Create a pattern recognition network
    hiddenLayerSize = i;
    net = patternnet(hiddenLayerSize);

    % Input and output pre/post-processing functions (help nnprocess)
    net.inputs{1}.processFcns = {'removeconstantrows','mapminmax'};
    net.outputs{2}.processFcns = {'removeconstantrows','mapminmax'};

    % Division of data for training, validation, testing (help nndivide)
    net.divideFcn = 'dividerand';    % Divide data randomly
    net.divideMode = 'sample';       % Divide up every sample
    net.divideParam.trainRatio = 70/100;
    net.divideParam.valRatio = 15/100;
    net.divideParam.testRatio = 15/100;

    net.trainFcn = 'trainrp';        % Resilient backpropagation
    net.performFcn = 'mse';          % Mean squared error

    % Plot functions (help nnplot)
    net.plotFcns = {'plotperform','plottrainstate','ploterrhist', ...
                    'plotregression','plotfit'};

    % Take the i-th subset, standardize it, and reduce it with PLS
    begin = (i-1)*size(X,2)*30 + 1;
    ending = i*size(X,2)*30;
    Xnew = X(begin:ending, :);
    Ynew = Y(begin:ending, :);
    Xnew = Xnew - mean(Xnew(:));
    Xnew = Xnew / std(Xnew(:));
    [XL, YL] = plsregress(Xnew, Ynew, 8);
    inputs = XL;
    targets = YL;

    % Train the network
    [net, tr] = train(net, inputs, targets);
    nets{i} = net;

    % Test the network
    outputs = net(inputs);
    errors = gsubtract(targets, outputs);
    performance = perform(net, targets, outputs)

    % Recalculate training, validation and test performance
    trainTargets = targets .* tr.trainMask{1};
    valTargets = targets .* tr.valMask{1};
    testTargets = targets .* tr.testMask{1};
    trainPerformance = perform(net, trainTargets, outputs)
    valPerformance = perform(net, valTargets, outputs)
    testPerformance = perform(net, testTargets, outputs)
end
```
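Since the nets are stored in a cell array, one could pick the best hidden-layer size afterward; this hypothetical follow-on assumes the loop also recorded each iteration's test MSE in a 1 x numNN vector testPerf (e.g. testPerf(i) = testPerformance):

```matlab
% Pick the hidden-layer size whose network gave the smallest test MSE
[bestMSE, bestH] = min(testPerf);
bestNet = nets{bestH};
fprintf('Smallest test MSE %.4f at hiddenLayerSize = %d\n', bestMSE, bestH)
```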

I think you also misread: I have 64 features, not 94. Also, I'm not exactly sure what we're trying to do here. In the end, are we only trying to determine the optimal H value, so standardizing the data to zero-mean/unit-variance and running PLS on it isn't meant to improve the validation/test performance? And neither is creating neural networks on subsets of the data?

I only set the hidden node size to 5 because that seemed to yield the best performance. 10 seemed to perform worse on the test data.
