
GTX 690 and MATLAB Parallel Computing Toolbox - Is it possible to have the two internal GPUs behave as a single GPU?

Asked by evel

on 19 Jun 2012

Hi everybody,

I have a very simple question for you. I just unpacked my new GTX 690 and was about to start preliminary tests on my MATLAB application. Since the algorithm involves very simple arithmetic operations, I obtained a very good speed-up just by using the Parallel Computing Toolbox with "gpuArray" (with my previous card, a GTX 460, I obtained a speed-up of about 40x).

The problem is that the new GTX 690 is a dual graphics card: there are two coupled GPUs inside it. I thought MATLAB would nevertheless see it as a single card, sparing me the burden of modifying the code to manually balance the computing load between the two GPUs. That does not seem to be the case: MATLAB recognizes it as two separate cards, so I am afraid I will have to change my script and manually distribute and synchronize the computation.

Does anybody have any suggestions on how to use the GTX 690 as a single graphics card with the MATLAB Parallel Computing Toolbox? Since the card is quite new, I haven't found anything interesting on the web...
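For reference, here is a quick way to check what the toolbox sees (a minimal sketch; `gpuDeviceCount` and `gpuDevice` are standard Parallel Computing Toolbox functions, and a GTX 690 shows up as two separate devices):

```matlab
% Enumerate the CUDA devices visible to the Parallel Computing Toolbox.
% A GTX 690 is reported as two separate devices, not one.
n = gpuDeviceCount;
for k = 1:n
    d = gpuDevice(k);                            % select and query device k
    fprintf('Device %d: %s, %.1f GB\n', k, d.Name, d.TotalMemory/2^30);
end
```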

Thank you guys! Cheers,


1 Comment

Juliette Salexa


on 23 Jun 2012

Hi Alberto,
I am looking for an answer to your question and will enter it below if I find anything.

May I ask, though: have you tried running any raw CUDA programs on the GTX 690?
Are you able to have all 3072 cores doing work (like matrix operations) on the same array?



3 Answers

Answer by Walter Roberson


on 23 Jun 2012

The answer appears to be no: MATLAB can only treat them as two separate cards.



Answer by evel


on 27 Jun 2012

Indeed, you're right. It seems that I have to treat the GTX 690 as two separate cards.

I tried to achieve this by using the spmd command, which allows parallel processes to be launched on different labs. However, I'm facing a couple of difficulties: at the moment, if I use both GPUs, the computation is even slower than on a single GPU! There must be an error in my algorithm. What I do is just the following:

    matlabpool local 2
    spmd
        gpuDevice(labindex);     % lab 1 uses GPU 1, lab 2 uses GPU 2
        if labindex == 1
            data = gpuArray(source_data(:, :, 1:index));
        else
            data = gpuArray(source_data(:, :, index+1:last_elem));
        end
        for n = 1:m
            data_GPU = data(:, :, n);
            % ... calculations on GPUs ...
        end
    end

In practice, I split my data (a 3D matrix) into two matrices and assign one to each GPU. I then start a for loop to iterate over the slices of each 3D matrix: I select the slice I want and perform my computations on it.

However, as the size of the matrix increases, the RAM of my machine fills up and MATLAB has to use swap space, so the calculation becomes very slow. Only when I run

matlabpool close

is a huge amount of memory released. I had never used spmd before, but it seems to require a lot of RAM to handle the two parallel processes. Am I missing something or doing something wrong? Does anybody know how to use spmd with two GPUs without using too much RAM?
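One thing that may explain the RAM blow-up: a variable like source_data that is referenced inside an spmd block gets broadcast in full to every lab, so each worker process holds its own complete copy before slicing. A sketch of a workaround (untested on this setup) is to hand the array to the pool as a distributed array instead, which partitions it along its last dimension and sends each lab only its own share; `distributed` and `getLocalPart` are Parallel Computing Toolbox functions:

```matlab
matlabpool local 2                    % one lab per GPU (pre-R2013a syntax)
dist = distributed(source_data);      % partitioned along the last dimension;
                                      % each lab receives only its own share
spmd
    gpuDevice(labindex);              % bind lab 1 to GPU 1, lab 2 to GPU 2
    local = getLocalPart(dist);       % this lab's slices, still on the CPU
    data = gpuArray(local);           % move only the local half to the GPU
    for n = 1:size(data, 3)
        data_GPU = data(:, :, n);
        % ... calculations on data_GPU ...
    end
end
matlabpool close
```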

Any help will be appreciated!!


1 Comment

Jill Reese


on 19 Nov 2012

Do the operations you wish to invoke on slices of your data set require communication or can each slice operate independently?



Answer by Yannick


on 20 Jan 2013

I know this is an old question, but mine is related to the original poster's. Can you put two GTX 690s in SLI and have MATLAB see them as a single GPU card - the two GPUs inside each 690 coupled with the other pair over SLI - so that the whole setup appears as one GTX 690?

I understand NVIDIA markets its Tesla cards specifically for GPU computing, but I'd just like to know whether the above is at all possible.


