MATLAB Answers

Error reshape in neural network

Paola HUMBERT on 15 May 2020
Commented: Asvin Kumar on 1 Jun 2020
Hello,
I'm trying to code a neural network (EEG2Net) to classify a signal, but I get an error that I can't solve. My input data (originally a signal) was converted into a 30×80 cell array, and the labels are in a 30×1 cell array.
The code is the following:
DataSize = size(CNN_TrainingData);
LabelSize = size(CNN_TrainingLabel);
%% Building the EEG2Net layers
EEG2Net = [
    sequenceInputLayer([DataSize 1],'Name','Input')
    sequenceFoldingLayer('Name','fold')
    convolution2dLayer([2 1],16,'Stride',1,'Padding','Same','Name','ConvolutionLayer1')
    batchNormalizationLayer('Name','BatchNorm1')
    reluLayer('Name','ReLu1')
    maxPooling2dLayer([2 2],'Stride',[2 2],'Padding','Same','Name','MaxPooling1')
    convolution2dLayer([1 64],8,'Stride',1,'Padding','Same','Name','ConvolutionLayer2')
    batchNormalizationLayer('Name','BatchNorm2')
    reluLayer('Name','ReLu2')
    maxPooling2dLayer([2 2],'Stride',[2 2],'Padding','Same','Name','MaxPooling2')
    dropoutLayer(0.5,'Name','Dropout2')
    convolution2dLayer([5 5],4,'Stride',1,'Padding','Same','Name','ConvolutionLayer3')
    batchNormalizationLayer('Name','BatchNorm3')
    reluLayer('Name','ReLu3')
    maxPooling2dLayer([2 2],'Stride',[1 1],'Padding','Same','Name','MaxPooling3')
    dropoutLayer(0.5,'Name','Dropout3')
    flattenLayer('Name','Flatten4')
    fullyConnectedLayer(1024,'Name','DenseLayer4')
    reluLayer('Name','ReLu4')
    dropoutLayer(0.5,'Name','Dropout4')
    fullyConnectedLayer(2,'Name','DenseLayer5')
    sequenceUnfoldingLayer('Name','unfold')
    softmaxLayer('Name','Softmax5')
    classificationLayer('Name','Classification')];
lgraph = layerGraph(EEG2Net);
lgraph = connectLayers(lgraph,'fold/miniBatchSize','unfold/miniBatchSize');
%% Analyze the network
%analyzeNetwork(lgraph)
%% Training of network
options = trainingOptions('adam','Plots','training-progress','MiniBatchSize',miniBatchSize);
trainetNet = trainNetwork(CNN_TrainingData, CNN_TrainingLabels, lgraph, options);
The error message is the following:
Error using trainNetwork (line 170)
Number of elements must not change. Use [] as one of the size inputs to automatically calculate the
appropriate size for that dimension.
Error in Test_EEG2Net (line 65)
trainetNet = trainNetwork(CNN_TrainingData, CNN_TrainingLabels, lgraph, options);
Caused by:
Error using reshape
Number of elements must not change. Use [] as one of the size inputs to automatically calculate the
appropriate size for that dimension.
I have tried a lot of changes in the code, but I can't find what has to be done. If someone could help me, I would really appreciate it.
Thank you.
(I'm using MATLAB R2020a.)


Answers (1)

Asvin Kumar
Asvin Kumar on 19 May 2020
The first input argument to the sequenceInputLayer should be the size of the input sequence at each time step.
If your input is 30 sequences of length 80 each, then this argument would be 1, because that's the size of the input at each step.
sequenceInputLayer(1,'Name','Input')
Have a look at the Japanese Vowel Classification example, where the input is many sequences of variable length but the input at each time step is a 12-element vector.
This still raises the question of why your dataset is a 30×80 cell array. Have a look at the example linked above to get a better idea of how the data should be shaped.
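As a sketch of the expected shapes (variable names and the random data are hypothetical; this assumes 30 independent sequences of 80 time steps with 1 sample per step):
```matlab
% Hypothetical example: 30 sequences, 80 time steps, 1 feature per step.
numObs = 30;
seqLen = 80;

% trainNetwork expects an N-by-1 cell array where each cell is a
% (numFeatures)-by-(numTimeSteps) matrix, i.e. 1-by-80 here.
XTrain = cell(numObs,1);
for i = 1:numObs
    XTrain{i} = rand(1,seqLen);   % 1 feature per time step, 80 steps
end

% The input layer is then sized by the number of features per step:
inputLayer = sequenceInputLayer(1,'Name','Input');
```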

  4 Comments

Asvin Kumar
Asvin Kumar on 26 May 2020
From what I understand, it seems to me that you have 40 EEG signals of 80 timesteps each.
Your sequenceInputLayer is treating its input as a sequence of images of height 80, width 1, and 1 channel, as described in the sequenceInputLayer documentation. That should explain why your code runs, although this is probably not what you want.
I will go back to the Japanese Vowel example and try to draw an analogy to your scenario.
The first detail to note is the shape and format of the training data. Notice that the cell array is 270x1. Each cell corresponds to a single training observation. You have a similar 40x1 cell array, which is good.
Each cell holds a matrix of height 12 but varying length. So, the number of timesteps may differ between training observations, but the sample height at each timestep is the same (it's 12). In your case it's simpler: all training sequences have the same length (80), and the EEG signal is, I assume, 1 sample per timestep. Here, you should reshape your data to 1x80 rather than 80x1.
The second detail to note is the input argument to sequenceInputLayer. Notice that the value of the 'InputSize' parameter is 12. This is the number of input samples at every time step. In your case, I assume that should be 1, because you have an EEG signal of 80 timesteps and 1 sample per timestep.
Hope that helps. So, two recommendations:
  1. Reshape your data in each cell from 80x1 to 1x80.
  2. Set the sequenceInputLayer argument to 1.
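A minimal sketch of these two changes (assuming XTrain is your 40×1 cell array of 80×1 column vectors):
```matlab
% Transpose each 80x1 column vector into a 1x80 row vector, so each cell
% becomes (numFeatures)-by-(numTimeSteps) = 1-by-80.
XTrain = cellfun(@transpose, XTrain, 'UniformOutput', false);

% The input layer then declares 1 feature per time step.
inputLayer = sequenceInputLayer(1,'Name','Input');
```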
Just a disclaimer, I am making quite a few assumptions on how your data is structured. It might be possible that I’ve got that wrong in which case these suggestions wouldn’t be the right ones. Nevertheless, the explanation at the start of this comment for why your code is working would still be valid.
Paola HUMBERT
Paola HUMBERT on 27 May 2020
Thank you for answering.
My data is indeed 40 EEG signals of 80 timesteps with 1 sample per timestep. This is why I thought of putting my data in this shape:
XTrain =
40×1 cell array
{80×1 double}
....
{80×1 double}
I tried changing my data into the shape you advised, like this:
XTrain =
40×1 cell array
{1×80 double}
...
{1×80 double}
and set the sequenceInputLayer argument to 1, but an error appears at my convolution layer:
Error using trainNetwork (line 170)
Invalid network.
Error in Test_EEG2Net (line 113)
trainedNet = trainNetwork(XTrain, YTrain, lgraph, options);
Caused by:
Layer 'ConvolutionLayer1': Input size mismatch. Size of input to this layer is different from
the expected input size.
Inputs to this layer:
from layer 'fold' output 'out' (output size 1)
I tried changing the filter size, but it doesn't work. It seems to work only when the input size in the sequence input layer is [1 80 1].
I don't understand why it cannot work when I just put 1.
Concerning the Japanese Vowel example, what should the shape of YTrain be? My YTrain has this shape:
YTrain =
40×1 cell array
{[Left ]}
{[Right]}
....
{[Left ]}
{[Left ]}
I ask because I am also trying to code another neural network with the same data:
RNN = [
sequenceInputLayer(80,'Name','Input')
lstmLayer(numHiddenUnits,'OutputMode','sequence','Name','LSTMLayer')
dropoutLayer(0.05,'Name','Dropout')
fullyConnectedLayer(numClasses,'Name','DenseLayer')
softmaxLayer('Name','Softmax')
classificationLayer('Name','Output')]
When XTrain was in this shape:
40×1 cell array
{80×1 double}
the network would train without an error message. But when I changed the shape of XTrain as you recommended and set the sequenceInputLayer argument to 1, this error appears:
Error using trainNetwork (line 170)
Invalid training data. Sequence responses must have the same sequence length as the corresponding
predictors.
Could it in this case be an error in the shape of YTrain?
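(For reference, a common pattern for sequence-to-label classification is sketched below. The use of 'OutputMode','last' and of a categorical YTrain vector are assumptions about the intended setup, not the fix confirmed in this thread.)
```matlab
% Sketch: classify each whole sequence with a single label.
numHiddenUnits = 100;   % hypothetical value
numClasses = 2;

RNN = [
    sequenceInputLayer(1,'Name','Input')       % 1 feature per time step
    lstmLayer(numHiddenUnits,'OutputMode','last','Name','LSTMLayer') % one output per sequence
    fullyConnectedLayer(numClasses,'Name','DenseLayer')
    softmaxLayer('Name','Softmax')
    classificationLayer('Name','Output')];

% For sequence-to-label classification, YTrain is a categorical vector
% with one label per sequence, not a cell array, e.g.:
% YTrain = categorical({'Left'; 'Right'; 'Left'});
```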
Asvin Kumar
Asvin Kumar on 1 Jun 2020
I have sent you an email with some explanation. Please have a look and feel free to get back if you have any more questions.
