Do I need to normalize data when using patternnet, or will MATLAB take care of it?

I am working on electromyogram (EMG) signals to classify 11 different hand movements using an ANN. My code is as follows:
load FeatureSet;
p=input; %[16 by 1342 ]
t=target; %[11 by 1342]
trainFcn = 'trainbr'; % Use Bayesian Regularization to prevent overtraining
net = patternnet([32 32], trainFcn);
net.layers{1}.transferFcn = 'tansig'; % hidden layer 1
net.layers{2}.transferFcn = 'logsig'; % hidden layer 2
%------------------------ parameters
% note: lr (learning rate) and mc (momentum) belong to gradient-descent
% training functions such as traingdm; trainbr does not use them
net.trainParam.lr = 0.1; %learning rate
net.trainParam.mc = 0.1; %momentum
%------------------------ train
[net,tr]= train(net,p,t);
%------------------------ test
outputs = sim(net,p); % lowercase p: MATLAB variable names are case sensitive
[c,cm] = confusion(t,outputs);
pct = 100*(1-c); % overall correction rate (classification accuracy), in percent
1) I am trying to normalize the input to the range 0 to +1, but I am not sure whether it is needed. Does patternnet do anything regarding normalization of the input? My data range is roughly 0.1 to 120.
normalized_p = zeros(size(p));            % preallocate instead of growing in the loop
for ii = 1:size(p,2)
    Max_p(ii) = max(p(:,ii));
    Min_p(ii) = min(p(:,ii));
    normalized_p(:,ii) = (p(:,ii) - Min_p(ii)) ./ (Max_p(ii) - Min_p(ii));
end
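For reference, an illustrative alternative (not part of the original code) is the toolbox function mapminmax, which can do this in one call; note that mapminmax scales each row (feature), whereas the loop above scales each column (sample):
% Illustrative: scale each FEATURE (row) of p to [0,1] and keep the settings
% so the same transform can be applied to later data.
[pn, ps] = mapminmax(p, 0, 1);           % pn is 16 x 1342 with each row in [0,1]
% pNewN = mapminmax('apply', pNew, ps);  % pNew: hypothetical later/test data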
2) I get the following warning when using patternnet:
Warning:
Performance function replaced with squared error performance.
> In trainbr>formatNet (line 160)
In trainbr (line 69)
In nntraining.setup (line 14)
In network/train (line 335)
In ANN_demo (line 95)
I also used newpr, but I did not get any warning.
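For what it is worth, a small check that seems to explain the warning (an assumption based on recent toolbox releases, where patternnet defaults to cross-entropy performance while trainbr works with squared error; newpr-style nets default to mse, which may be why they did not warn):
% Illustrative: trainbr replaces a non-squared-error performance function
% and prints the warning above; choosing 'mse' up front avoids that.
net = patternnet([32 32], 'trainbr');
disp(net.performFcn)      % 'crossentropy' on recent releases
net.performFcn = 'mse';   % select a squared-error performance explicitly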
3) patternnet divides the data into three sets: training, validation, and testing. I found the following code to get the overall performance of the classifier, but I do not know how to calculate the correction rate of each set.
RandStream.setGlobalStream(RandStream('mt19937ar','seed',1)); % to get constant result
net.divideFcn = 'divideblock'; % Divide targets into three sets using blocks of indices
net.divideParam.trainRatio = 0.6;
net.divideParam.valRatio = 0.2;
net.divideParam.testRatio = 0.2;
%------------------------ train
[net,tr]= train(net,p,t);
%------------------------ test
outputs = net(p);
performance = perform(net,t,outputs)
trainTargets = t .* tr.trainMask{1};
valTargets = t .* tr.valMask{1};
testTargets = t .* tr.testMask{1};
trainPerformance = perform(net,trainTargets,outputs)
valPerformance = perform(net,valTargets,outputs)
testPerformance = perform(net,testTargets,outputs)
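The masked perform calls above give per-set error values; for a per-set correction rate (classification accuracy), one option, sketched here with the same variable names (net, p, t, tr), is to index with the sample indices stored in the training record and reuse confusion:
% Illustrative sketch: per-set classification ("correction") rate.
yAll = net(p);
[eTrn,~] = confusion(t(:,tr.trainInd), yAll(:,tr.trainInd));
[eVal,~] = confusion(t(:,tr.valInd),   yAll(:,tr.valInd));
[eTst,~] = confusion(t(:,tr.testInd),  yAll(:,tr.testInd));
pctTrn = 100*(1 - eTrn)   % training-set accuracy, percent
pctVal = 100*(1 - eVal)   % validation-set accuracy, percent
pctTst = 100*(1 - eTst)   % test-set accuracy, percent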
4) I want to compare the results of the ANN with some other classifiers such as LDA, SVM, etc., and I am using 10-fold cross validation (9 folds for training, 1 for testing) for those classifiers. For the ANN, the data is divided into three parts (0.6 training, 0.2 validation, 0.2 testing). How can I evaluate all the methods in the same way, for example with 10-fold cross validation for all of them, including the ANN?
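One possible way to put the ANN on the same footing (an illustrative sketch, not from the original post) is to run a manual 10-fold loop and pass the fold indices through 'divideind', so each fold is the test set exactly once:
% Illustrative 10-fold cross validation for the ANN, using p and t as above.
k    = 10;
N    = size(p,2);
fold = mod(randperm(N), k) + 1;          % random fold label 1..k per sample
acc  = zeros(1,k);
for f = 1:k
    tstIdx = find(fold == f);
    trnIdx = find(fold ~= f);
    net = patternnet([32 32]);           % same architecture as above
    net.divideFcn            = 'divideind';
    net.divideParam.trainInd = trnIdx;
    net.divideParam.valInd   = [];       % no validation split, as with the other classifiers
    net.divideParam.testInd  = tstIdx;
    net = train(net, p, t);
    [e,~]  = confusion(t(:,tstIdx), net(p(:,tstIdx)));
    acc(f) = 100*(1 - e);                % fold accuracy, percent
end
meanAcc = mean(acc)                      % comparable to the other classifiers' 10-fold score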
  1 Comment
Mallikarjun Yelameli on 6 May 2017
Hi, I am also facing the same problem, but it seems nobody has answered all your questions; since it has been a long time, you may have found the answers on your own. I have a dataset and the task is classification, and I am facing the following issues; I would be very grateful for any help. I have normalized my training and test datasets, but I cannot figure out how NOT to normalize again inside patternnet, and also how NOT to divide the data into train, validation and test sets, because I have already divided my dataset into train and test. Waiting for your reply. Thank you.
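One way to keep a pre-made split is to hand the indices to 'divideind' so train() does not re-divide the data. A minimal sketch, assuming trainIdx and testIdx are your own pre-computed index vectors (hypothetical names):
% Illustrative: reuse an existing train/test split via 'divideind'.
net = patternnet(20);
net.divideFcn            = 'divideind';
net.divideParam.trainInd = trainIdx;     % hypothetical, pre-computed column indices
net.divideParam.valInd   = [];           % empty if no validation set is wanted
net.divideParam.testInd  = testIdx;
% (re-normalization is controlled separately by net.inputs{1}.processFcns,
%  as discussed in the answers below)
[net, tr] = train(net, p, t);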


Accepted Answer

Greg Heath on 8 May 2017
Edited: Greg Heath on 8 May 2017
BY DEFAULT: all of the neural network training algorithms normalize inputs and targets to [-1, 1] AND denormalize the outputs using the parameters of the target normalization.
Why in the world would you substantially deviate from the examples in the documentation?
help patternnet
and
doc patternnet
The first thing you should do is to use the documentation code in a loop with multiple random weight initializations.
Next use a double loop approach where the outer loop systematically increases the number of hidden nodes.
I have posted zillions of single and double loop examples in both the NEWSGROUP and ANSWERS. You can begin searches with
SEARCH TERMS                 NEWSGROUP HITS   ANSWERS HITS
greg patternnet                    60             451
greg patternnet tutorial           14              44
In particular, note that most cases only involve searches over the number of hidden nodes and the initial random weights.
Although there are some cases of m-fold cross validation, the index bookkeeping is so tricky it tends to be far less fruitful than straightforward multiple uses of the default DIVIDERAND.
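A minimal sketch of the double-loop search described above (not Greg's posted code; the hidden-node range and trial count are arbitrary choices, and p and t are the data from the question):
% Illustrative: outer loop over hidden layer sizes, inner loop over random
% weight initializations, scored on the default random test split.
Hvec    = 2:2:20;                        % candidate hidden layer sizes
Ntrials = 10;                            % random initializations per size
rate    = zeros(numel(Hvec), Ntrials);
rng(0)                                   % reproducible initializations and division
for i = 1:numel(Hvec)
    for j = 1:Ntrials
        net = patternnet(Hvec(i));       % defaults: trainscg, dividerand
        [net, tr] = train(net, p, t);
        [e,~] = confusion(t(:,tr.testInd), net(p(:,tr.testInd)));
        rate(i,j) = 100*(1 - e);         % test-set classification rate, percent
    end
end
[bestRate, idx] = max(rate(:));
[iBest, jBest]  = ind2sub(size(rate), idx);
fprintf('Best: %.1f%% with %d hidden nodes (trial %d)\n', bestRate, Hvec(iBest), jBest)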
Hope this helps.
Thank you for formally accepting my answer
Greg

More Answers (1)

tafteh on 2 Feb 2017
1) You may want to pre-process your input data using processing functions. For example, you may want to use
mapminmax
to transform the input data so that all values fall into the interval [-1, 1]. This can speed up learning for many networks.
Your other option is to utilize
configure
to pre-process the input (and also the output) of the network. For example, the script below pre-processes the input data only, prior to training the network:
[x,t] = simplefit_dataset;
net = patternnet(20);
net = configure(net,x);
where you can see the configured input processing functions below:
>> net.inputs{1}
ans =
Neural Network Input
name: 'Input'
feedbackOutput: []
processFcns: {'removeconstantrows', mapminmax}
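If the data has already been normalized outside the network (as in the comment above), one option, sketched here, is to drop mapminmax from that list so the inputs are not rescaled a second time:
% Illustrative: keep only constant-row removal so pre-normalized inputs
% are passed to the network unchanged.
net = patternnet(20);
net.inputs{1}.processFcns = {'removeconstantrows'};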
  1 Comment
Greg Heath on 8 May 2017
Configure is only needed to remove previous weights and reinitialize with random weights. It is most useful when obtaining multiple designs in a loop.
If a net has not been assigned weights, the training algorithm will automatically assign random weights.
Hope this helps,
Greg
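A tiny sketch of the reinitialization point made in this comment, as it might look inside a multi-design loop (illustrative only, using the p and t from the question):
% Illustrative: fresh random weights for each design in a loop.
for j = 1:10
    net = configure(net, p, t);   % re-dimensions the net and reinitializes weights
    % net = init(net);            % alternative: reinitialize the existing configuration
    [net, tr] = train(net, p, t);
end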

