MATLAB Answers


How to optimize a gpuArray operation to minimize GPU memory usage

Asked by Octavian on 4 Dec 2014
Latest activity: commented on by Edric Ellis on 4 Dec 2014
Dear All,
I need to compute A - B, where A and B are 20000x20000 single-precision gpuArrays. Each of A and B is ~1.6 GB, and I have a 4 GB card. A and B must each be created en bloc, although not at the same time. My code generates A and B, but I run out of memory when I try A - B, and arrayfun does not seem to help either (e.g. C = arrayfun(@minus, A, B);). Is there a workaround? I thought of generating A, splitting it into 2 or 4 pieces (A1-A4), deleting A, creating B, splitting it (B1-B4), deleting B, computing A1-B1, A2-B2, etc., deleting A1-A4 and B1-B4, and finally concatenating C1-C4 into C and deleting C1-C4.
Is there a smarter way to code this that minimizes memory requirements (I am sure there is)? Thank you,
Octavian
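
A variant worth testing here: C = arrayfun(@minus, A, B) still allocates a third full-size array, but with gpuArray inputs, assigning the arrayfun result back to one of its input variables often lets MATLAB reuse that variable's storage. This in-place behaviour is an optimization rather than a guarantee, so treat the following as a sketch to test (the rand calls merely stand in for however A and B are really generated):

    % Stand-ins for the real 20000x20000 single-precision arrays (~1.6 GB each)
    A = rand(20000, 'single', 'gpuArray');
    B = rand(20000, 'single', 'gpuArray');

    % Assigning the result back to an input variable allows MATLAB to
    % perform the subtraction in place in many cases, avoiding a third
    % 1.6 GB allocation on the device.
    A = arrayfun(@minus, A, B);   % A now holds A - B

If the in-place path is taken, peak usage stays near 3.2 GB (A plus B), which fits on a 4 GB card.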

  2 Comments

These are definitely dense matrices? Sparse type won't work?


1 Answer

Answer by Edric Ellis on 4 Dec 2014

Your card cannot possibly fit three 1.6 GB matrices (~4.8 GB) in 4 GB of memory, which is what C = A - B requires. Therefore, you need to make the problem smaller so it fits on your device somehow - this should be possible (if somewhat inconvenient) provided you access A and B in purely element-wise ways.
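
As a sketch of what purely element-wise access enables, suppose (hypothetically) you had functions makeAblock(cols) and makeBblock(cols) that generate just the requested columns of A and B directly on the GPU. You could then form C a block of columns at a time and never hold both full matrices at once:

    n   = 20000;
    blk = 5000;                                % four 20000x5000 blocks, ~0.4 GB each
    C   = zeros(n, n, 'single', 'gpuArray');   % result, ~1.6 GB

    for j = 1:blk:n
        cols = j:j+blk-1;
        Ablk = makeAblock(cols);   % hypothetical: builds only these columns of A
        Bblk = makeBblock(cols);   % hypothetical: builds only these columns of B
        C(:, cols) = Ablk - Bblk;
    end

Peak device usage is then roughly C (1.6 GB) plus a couple of 0.4 GB blocks, comfortably inside 4 GB.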

  2 Comments

That depends on how you're creating A and B - you need to create/read only part of them at a time, and then operate on those parts.
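
For example (reusing the hypothetical block generators from the answer above), if C is ultimately needed on the host anyway, each finished block can be gathered immediately, so the device never holds more than the two current blocks:

    n   = 20000;
    blk = 5000;
    C   = zeros(n, n, 'single');          % result accumulates in host RAM
    for j = 1:blk:n
        cols = j:j+blk-1;
        Ablk = makeAblock(cols);          % hypothetical, as in the answer above
        Bblk = makeBblock(cols);
        C(:, cols) = gather(Ablk - Bblk); % copy each block back to the host
    end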
