MATLAB Answers

Nvidia NVLink in MATLAB using multiple GeForce 2080 Ti cards

Lykke Kempfner
Lykke Kempfner on 17 Dec 2018
Commented: Joss Knight on 29 Jan 2020 at 11:32
NVLink yields a high bidirectional bandwidth between the cards (the GeForce 2080 Ti offers 100 GB/s, while the GeForce RTX 2080 offers 50 GB/s). When fully utilized, NVLink minimizes inter-GPU traffic over the PCI Express interface and also allows the memory on each card to behave more like a single, shared resource. This opens up the possibility of new multi-GPU modes for scientific analysis and big data workloads as well.
Will MATLAB take advantage of NVLink's ability to share resources between cards in future releases (R2019a/R2019b)?

  1 Comment

Walter Roberson
Walter Roberson on 17 Dec 2018
Questions about future products need to be asked of your Sales Representative. The people who volunteer to answer questions here are either outside volunteers or inside volunteers who are seldom authorized to speak about future products without a Non-Disclosure Agreement.


Accepted Answer

Prem Ankur
Prem Ankur on 20 Dec 2018
Edited: Prem Ankur on 20 Dec 2018
I understand that you want to use Nvidia NVLink with MATLAB to leverage multiple Nvidia GeForce 2080 Ti graphics cards.
Currently, there is no straightforward way to utilize the high bidirectional bandwidth to distribute computations across the available GPUs. This would become possible with the introduction of distributed GPU arrays, which could use all GPUs and gather the results back automatically; distributed GPU arrays are not currently supported in MATLAB.
Our development teams are aware of this and are currently considering "distributed GPU Arrays" for future releases of MATLAB.
Currently, there are no commands to enable NVLink in MDCS. As a workaround, you can use the "gop" function, as in the example below, which performs the computation on each GPU and gathers the result back:
gop(@plus, gpuArray(X))   % call inside an spmd block; combines X across the workers with @plus
Open a parpool and issue local and/or communicating instructions via spmd, which allows distributed computation between the GPUs.
Refer to the following link for using gop with spmd to distribute the operations across the GPUs and, in turn, effectively use all of the GPU memory for the same function by working on chunks of data:
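As a rough illustration of that approach, here is a minimal sketch (not taken from the original answer; it assumes one pool worker per GPU, and the data sizes and variable names are purely illustrative). It opens a parpool, binds each worker to its own GPU inside spmd, computes a partial result per GPU, and uses gop to combine the partial results:

nGPUs = gpuDeviceCount;              % number of GPUs visible to MATLAB
pool = parpool('local', nGPUs);      % one worker per GPU

spmd
    gpuDevice(labindex);                  % bind this worker to its own GPU
    chunk = rand(1e6, 1, 'gpuArray');     % this worker's chunk of data (illustrative)
    partialSum = sum(chunk);              % partial result computed on this worker's GPU
    totalSum = gop(@plus, partialSum);    % combine partial results across all workers
end

result = gather(totalSum{1});        % copy the combined result back to the client
delete(pool);

Note that gop performs a reduction across the pool workers; it does not expose NVLink's shared-memory behaviour directly, and whether the inter-GPU transfers actually travel over NVLink depends on the driver and hardware topology.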

  7 Comments

juan pedrosa
juan pedrosa on 29 Jan 2020 at 10:42
Hi, it's been more than a year since this answer; do you have any update on the subject?
Joss Knight
Joss Knight on 29 Jan 2020 at 11:32
We are actively working on improving MATLAB's multi-GPU support. Is there something specific you need that you cannot do using the answers in these comments?


More Answers (0)
