MATLAB Answers


'radix_sort: failed to get memory buffer' when executing accumarray on gpuArrays of certain size

Asked by Daniel Hähnke on 25 Jul 2019
Latest activity: Edited by Joss Knight on 17 Aug 2019
Hello,
I'm trying to use accumarray on large gpuArrays, but get the error 'radix_sort: failed to get memory buffer'.
This is a minimal example that gives me the error:
a = randi(intmax, 2^28-2048, 1, 'gpuArray');    % values to accumulate, generated directly on the GPU
b = gpuArray(randi(3, 2^28-2048, 3, 'uint16')); % subscript matrix with 3 columns, entries in 1..3
c = accumarray(b,a);                            % errors with 'radix_sort: failed to get memory buffer'
When I do the same with arrays of size [2^28-2047 1] and [2^28-2047 3], it works.
This is my gpuDevice after creating a and b:
CUDADevice with properties:
Name: 'GeForce GTX 1080 Ti'
Index: 1
ComputeCapability: '6.1'
SupportsDouble: 1
DriverVersion: 10.1000
ToolkitVersion: 9.1000
MaxThreadsPerBlock: 1024
MaxShmemPerBlock: 49152
MaxThreadBlockSize: [1024 1024 64]
MaxGridSize: [2.1475e+09 65535 65535]
SIMDWidth: 32
TotalMemory: 1.1718e+10
AvailableMemory: 7.6692e+09
MultiprocessorCount: 28
ClockRateKHz: 1683000
ComputeMode: 'Default'
GPUOverlapsTransfers: 1
KernelExecutionTimeout: 1
CanMapHostMemory: 1
DeviceSupported: 1
DeviceSelected: 1
Shouldn't this be enough memory for this kind of operation?
I'm running version 9.5.0.944444 (R2018b) on Linux.
I can work around this problem but I'd like to understand it so I can adapt my code accordingly.
Best wishes,
Daniel


Release: R2018b

2 Answers

Answer by Ganesh Regoti on 31 Jul 2019

I have run the code on a Titan V and it works fine; the larger array size also works, so I don't think this is a memory issue.
Try clearing the GPU device's memory with the reset command (see the documentation for reset), then re-run the code.
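For reference, a minimal sketch of resetting the currently selected device; note that this clears everything on the GPU, so any existing gpuArray variables become invalid:
g = gpuDevice;   % handle to the currently selected GPU device
reset(g);        % free all GPU memory held by MATLAB; existing gpuArray variables are invalidated
After the reset, recreate a and b and call accumarray again.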

  6 Comments

Hi Daniel,
There is no implicit memory cleanup on the GPU. The only ways to free memory are to
  1. Restart the session
  2. Reset the device
So once memory has been allocated, it stays in GPU memory until you clear it explicitly.
As for accumarray: the cause of the issue is that, at some point while executing the internal code, a memory buffer is created that exceeds the available GPU memory.
Thanks. So that means it's not possible to free up the memory of a variable when it's reassigned?
Is this an issue with GPUs in general (i.e. do they only support a full wipe?) or can MATLAB just not do it?
This isn't strictly true. MATLAB holds onto a quarter of GPU memory, once assigned, as an optimisation to prevent unnecessary device synchronization. Memory is then re-used. MATLAB will never return an out-of-memory error because it is holding onto the memory of a variable that has gone out of scope. However, it appears that in this case the NVIDIA thrust library is allocating its own memory buffer and MATLAB doesn't know about this, so it doesn't know to free up its memory pool to make space. This should be fixed for you in MATLAB R2019a.
In the meantime, try
feature('GpuAllocPoolSizeKb', 0);
as a temporary measure to turn off the pooling of memory that's causing this issue.



Answer by Joss Knight on 17 Aug 2019, edited by Joss Knight on 17 Aug 2019

There is an issue in an NVIDIA library that causes it to misbehave when memory is limited. This is fixed in CUDA 10 / MATLAB R2019a.
In the meantime, try
poolSize = feature('GpuAllocPoolSizeKb', 0);
as a temporary measure to turn off the pooling of memory that's underlying this issue. When you are ready to enable pooling again use
feature('GpuAllocPoolSizeKb', poolSize);
This is advisable since turning off pooling will significantly reduce performance.
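Putting this together with the example from the question, a sketch of the temporary workaround (reusing the a and b variables from the original post) might look like this:
poolSize = feature('GpuAllocPoolSizeKb', 0);  % turn off the memory pool, keeping the previous pool size
c = accumarray(b, a);                         % the call that previously failed with 'radix_sort: failed to get memory buffer'
feature('GpuAllocPoolSizeKb', poolSize);      % restore pooling afterwards, since running without it is slower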
