MATLAB Answers


Error in MATLAB-included deep learning example

Asked by Javier Bush on 15 Oct 2019 at 1:50
Latest activity Commented on by Javier Bush on 26 Oct 2019 at 23:20
I am trying to run the MATLAB example
openExample('nnet/SeqToSeqClassificationUsing1DConvAndModelFunctionExample')
in R2019b, but when I change it to train the network on the GPU, the example shows me this error. Please help me run it, or give me a workaround to train using the GPU.
Error using gpuArray/subsasgn
Attempt to grow array along ambiguous dimension.
Error in deep.internal.recording.operations.ParenAssignOp/forward (line 45)
x(op.Index{:}) = rhs;
Error in deep.internal.recording.RecordingArray/parenAssign (line 29)
x = recordBinary(x,rhs,op);
Error in dlarray/parenAssign (line 39)
objdata(varargin{:}) = rhsdata;
Error in SeqToSeqClassificationUsing1DConvAndModelFunctionExample>maskedCrossEntropyLoss (line 484)
loss(i) = crossentropy(dlY(:,i,idx),dlT(:,i,idx),'DataFormat','CBT');
Error in SeqToSeqClassificationUsing1DConvAndModelFunctionExample>modelGradients (line 469)
loss = maskedCrossEntropyLoss(dlY, dlT, numTimeSteps);
Error in deep.internal.dlfeval (line 18)
[varargout{1:nout}] = fun(x{:});
Error in dlfeval (line 40)
[varargout{1:nout}] = deep.internal.dlfeval(fun,varargin{:});
Error in SeqToSeqClassificationUsing1DConvAndModelFunctionExample (line 284)
[gradients, loss] = dlfeval(@modelGradients,dlX,Y,parameters,hyperparameters,numTimeSteps);
Thanks!

  1 Comment

Thanks for reporting this. I can reproduce the problem using R2019b here; I shall forward it to the development team...



3 Answers

Answer by Joss Knight on 15 Oct 2019 at 11:02
Accepted Answer

There is a bug in this example, which will be fixed. Thanks for reporting it. As a workaround, initialize the loss variable in the maskedCrossEntropyLoss function:
function loss = maskedCrossEntropyLoss(dlY, dlT, numTimeSteps)
    numObservations = size(dlY,2);
    loss = zeros([1,1],'like',dlY); % Add this line
    for i = 1:numObservations
        idx = 1:numTimeSteps(i);
        loss(i) = crossentropy(dlY(:,i,idx),dlT(:,i,idx),'DataFormat','CBT');
    end
end
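For context, zeros(sz,'like',p) returns zeros with the same data type and underlying array class as p, so the preallocated loss is already a dlarray backed by a gpuArray, and the indexed assignment writes into an existing array instead of trying to grow an undefined one on the GPU, which is the operation that produced the error above. A quick way to see what the preallocation gives you (variable names here are illustrative, not from the example):
g = gpuArray.rand(10, 4, 'single');   % example GPU data
dlX = dlarray(g, 'CBT');              % wrap as a formatted dlarray
loss = zeros([1 1], 'like', dlX);     % preallocate "like" the dlarray
class(loss)                           % 'dlarray'
class(extractdata(loss))              % 'gpuArray'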

  5 Comments

I appreciate your support. I just changed the miniBatchSize to 2 and I get the following error:
Index exceeds the number of array elements (1).
Error in SeqToSeqClassificationUsing1DConvAndModelFunctionExample>maskedCrossEntropyLoss (line 486)
idx = 1:numTimeSteps(i);
Error in SeqToSeqClassificationUsing1DConvAndModelFunctionExample>modelGradients (line 472)
loss = maskedCrossEntropyLoss(dlY, dlT, numTimeSteps);
Error in deep.internal.dlfeval (line 18)
[varargout{1:nout}] = fun(x{:});
Error in dlfeval (line 40)
[varargout{1:nout}] = deep.internal.dlfeval(fun,varargin{:});
Error in SeqToSeqClassificationUsing1DConvAndModelFunctionExample (line 287)
[gradients, loss] = dlfeval(@modelGradients,dlX,Y,parameters,hyperparameters,numTimeSteps);
There are some small issues in the example script that prevent you from setting miniBatchSize > 1 (in particular, numTimeSteps was computed as a single value rather than one value per observation). The fix is pretty simple, though.
1) Replace the modelGradients function with the following:
function [gradients,loss] = modelGradients(dlX,T,parameters,hyperparameters,numTimeSteps)
    dlY = model(dlX,parameters,hyperparameters,true);
    dlY = softmax(dlY,'DataFormat','CBT');
    dlT = dlarray(T,'CBT');
    loss = maskedCrossEntropyLoss(dlY, dlT, numTimeSteps);
    gradients = dlgradient(mean(loss),parameters); % this line was changed to compute the mean loss
end
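For context, dlgradient requires its first input to be a scalar dlarray; since loss now holds one value per observation, it has to be reduced (here with mean) before the gradients are computed.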
2) Replace the transformSequences function with the following:
function [XTransformed, YTransformed, numTimeSteps] = transformSequences(X,Y)
    % Removed the line which computed numTimeSteps. We'll compute this later in the loop.
    miniBatchSize = numel(X);
    numFeatures = size(X{1},1);
    sequenceLength = max(cellfun(@(sequence) size(sequence,2),X));
    classes = categories(Y{1});
    numClasses = numel(classes);
    sz = [numFeatures miniBatchSize sequenceLength];
    XTransformed = zeros(sz,'single');
    sz = [numClasses miniBatchSize sequenceLength];
    YTransformed = zeros(sz,'single');
    for i = 1:miniBatchSize
        predictors = X{i};
        numTimeSteps(i) = size(predictors,2); % This line now sets the time steps for the i-th observation
        % Create dummy labels.
        responses = zeros(numClasses, numTimeSteps(i), 'single'); % This line also uses the i-th observation's numTimeSteps
        for c = 1:numClasses
            responses(c,Y{i}==classes(c)) = 1;
        end
        % Left pad.
        XTransformed(:,i,:) = leftPad(predictors,sequenceLength);
        YTransformed(:,i,:) = leftPad(responses,sequenceLength);
    end
end
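For completeness, leftPad is a helper used by the example (not shown here) and is unchanged by this fix. A minimal sketch of what it does, assuming it simply zero-pads each sequence on the left up to the common length:
function padded = leftPad(sequence, targetLength)
    % Sketch (assumption): pad a [numChannels x numTimeSteps] sequence with
    % zeros on the left so that it spans targetLength time steps.
    numChannels = size(sequence,1);
    padLength = targetLength - size(sequence,2);
    padded = [zeros(numChannels, padLength, 'like', sequence) sequence];
end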
Note, however, that depending on your GPU you might run into out-of-memory issues even with a small miniBatchSize. I have a GeForce GTX 1080 and I already run into this with a miniBatchSize of 3.
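If you do hit out-of-memory errors, you can check how much free memory the selected GPU has before picking a miniBatchSize, for example:
d = gpuDevice;              % currently selected GPU
d.AvailableMemory / 1e9     % free memory in GB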
We will work on updating the example to fix these issues as soon as possible. Apologies for the inconvenience!
Thanks, I can change miniBatchSize now.



Answer by Javier Bush on 16 Oct 2019 at 3:20

Thanks, it worked!

  0 Comments



Answer by Linda Koletsou Koutsiou on 22 Oct 2019 at 14:13

Thank you for reporting the issue. The error you are getting is related to an attempt to grow a gpuArray using linear indexing assignment.
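For illustration, this is the kind of operation the message refers to; a minimal, hypothetical sketch (not taken from the example) of indexed assignment that tries to grow a gpuArray:
g = gpuArray(single(0));   % 1x1 array on the GPU
g(2) = 1;                  % growing via linear indexing is ambiguous on the
                           % GPU (row or column vector?) and errors, whereas
                           % a CPU array would silently become 1x2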
For more information please refer to the following bug report:

  1 Comment

Linda,
I just changed the miniBatchSize to 2 in the same example and I get the following error. Could you please help me with that? I think this is a bug, because miniBatchSize is offered as a parameter in the example but you cannot change it.
Index exceeds the number of array elements (1).
Error in SeqToSeqClassificationUsing1DConvAndModelFunctionExample>maskedCrossEntropyLoss (line 486)
idx = 1:numTimeSteps(i);
Error in SeqToSeqClassificationUsing1DConvAndModelFunctionExample>modelGradients (line 472)
loss = maskedCrossEntropyLoss(dlY, dlT, numTimeSteps);
Error in deep.internal.dlfeval (line 18)
[varargout{1:nout}] = fun(x{:});
Error in dlfeval (line 40)
[varargout{1:nout}] = deep.internal.dlfeval(fun,varargin{:});
Error in SeqToSeqClassificationUsing1DConvAndModelFunctionExample (line 287)
[gradients, loss] = dlfeval(@modelGradients,dlX,Y,parameters,hyperparameters,numTimeSteps);
