I have a parallel calculation that I'm running on the GPU to speed things up compared to the CPU. However, the calculations are so large that I have to do them in batches to avoid running out of memory. To do that, I try to estimate how much memory the calculation will require before running it, and then choose batch sizes so that the memory limit is not exceeded.
The way I'm doing it right now is simply to look at the dimensions of every variable that will be on the GPU during the calculation and multiply the number of elements in each variable by 8, since I'm using double precision.
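Roughly, my estimate looks something like this (the sizes and variable names here are just placeholders for my actual arrays):

```matlab
% A is m-by-n-by-p, B is m-by-k-by-p, both double precision (8 bytes/element)
bytesA   = numel(A) * 8;
bytesB   = numel(B) * 8;
bytesOut = n * k * p * 8;                        % expected size of the result
estimate = (bytesA + bytesB + bytesOut) * 1.5;   % pad with a 1.5x safety factor
```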
Once I know the size of every variable that will be in the calculation, I multiply the total by a safety factor, 1.5 for example, to be on the safe side. However, even with the estimate padded by 50% via the safety factor, I still occasionally get this error:

Error using gpuArray/pagefun
Out of memory on device. To view more detail about available memory on the GPU, use 'gpuDevice()'. If the problem persists, reset the GPU by calling 'gpuDevice(1)'.
And the line that seems to cause the error is:
pse = pagefun(@mldivide, A, B);
So am I estimating the size wrong by only looking at the dimensions of each variable and multiplying by 8? When I call a function like mldivide, does it internally allocate large temporary arrays on the GPU that I'm not accounting for?