I have a very simple question for you. I just unpacked my new GTX 690 and was about to start preliminary tests on my MATLAB application. Since the algorithm involves very simple arithmetic operations, I managed to obtain a very good speed-up just by using the Parallel Computing Toolbox with "gpuArray" (with my previous card, a GTX 460, I obtained a speed-up of about 40x).
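For reference, the pattern I'm using is essentially the standard gpuArray workflow (the data and operations here are just illustrative, not my actual algorithm):

    % Illustrative gpuArray pattern: move data to the device, compute, gather back
    A = rand(4096, 'single');        % hypothetical host-side data
    G = gpuArray(A);                 % transfer to GPU memory
    R = sin(G).^2 + cos(G).^2;       % element-wise arithmetic runs on the device
    result = gather(R);              % bring the result back to host memory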
The problem is that the new GTX 690 is a dual graphics card: there are two coupled GPUs inside it. I thought that MATLAB would nevertheless see it as a single card, avoiding the burden of modifying the code to manually balance the computing load between the two GPUs. That does not seem to be the case: MATLAB recognizes it as two separate cards, so I'm afraid I will have to change my script and manually distribute and synchronize the computation.
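You can confirm what MATLAB sees with gpuDeviceCount (a quick sketch; the names and memory sizes printed will depend on your setup):

    n = gpuDeviceCount;                   % on a GTX 690 this reports 2
    fprintf('MATLAB sees %d GPU device(s)\n', n);
    for k = 1:n
        d = gpuDevice(k);                 % select and query device k
        fprintf('Device %d: %s, %.1f GB\n', k, d.Name, d.TotalMemory / 2^30);
    end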
Does anybody have any suggestion on how to use the GTX 690 as a single graphics card with the MATLAB Parallel Computing Toolbox? Since the card is quite new, I haven't found anything interesting on the web...
Thank you guys! Cheers,
The answer appears to be NO, MATLAB can only treat them as two cards. See http://www.mathworks.com/matlabcentral/answers/41316-multi-gpu-computing-in-matlab
Indeed, you're right. It seems that I have to treat GTX 690 as two separate cards.
I tried to achieve this by using the spmd command, which lets you launch parallel processes on different labs. However, I'm facing a couple of difficulties, because at the moment using both GPUs is even slower than using a single GPU! There must be an error in my algorithm. What I do is just the following:
    matlabpool local 2
    spmd
        gpuDevice(labindex);
        if labindex == 1
            data = gpuArray(source_data(:, :, 1:index));
        else
            data = gpuArray(source_data(:, :, index+1:last_elem));
        end
        for n = 1:m
            data_GPU = data(:, :, n);
            % ... calculations on GPUs ...
        end
    end
In practice, I just split my whole data set (a 3D matrix) into two matrices and assign one to each GPU. I then start a for loop, because I need to iterate over each slice of the 3D matrices, so I select the slice I want and then perform my computations on it.
Anyway, as the size of the matrix increases, the RAM of my machine gets used up and MATLAB has to resort to swap space, so the calculation becomes very slow. Only when I do matlabpool close is a huge amount of memory released. I have never used spmd before, but it seems to require a lot of RAM to handle the two parallel processes. Am I missing something or doing something wrong? Does anybody know how to use spmd with two GPUs without using too much RAM?
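One possible cause (a guess, sketched below): indexing the full source_data inside the spmd block sends the whole array to both workers, so it ends up existing three times (once on the client and once per lab). Sending each worker only its own half via a Composite before entering spmd might cut the footprint; variable names here match the snippet above and are otherwise illustrative:

    matlabpool local 2
    c = Composite();                              % one slot per lab
    c{1} = source_data(:, :, 1:index);            % only this half goes to lab 1
    c{2} = source_data(:, :, index+1:last_elem);  % only this half goes to lab 2
    clear source_data                             % free the client-side copy
    spmd
        gpuDevice(labindex);          % bind each lab to its own GPU
        data = gpuArray(c);           % inside spmd, c is just this lab's half
        for n = 1:size(data, 3)
            data_GPU = data(:, :, n);
            % ... calculations on GPUs ...
        end
    end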
Any help will be appreciated!!
I know this is an old question, but mine is related to the original poster's. Can you have two GTX 690s in SLI and have MATLAB treat each 690's two GPUs as a single card, with the two cards coupled via SLI, therefore seeing each one as one GTX 690?
I understand that Nvidia markets its Tesla cards specifically for GPU computing, but I'd just like to know whether the above is at all possible.