Thread Subject: OpenCL

Subject: OpenCL

From: Emanuele Ronchi

Date: 7 Jan, 2009 19:19:02

Message: 1 of 13

Hi, I am buying the new MacBook Pro for MATLAB, with the idea that once Snow Leopard comes out, MATLAB will be able to use the OpenCL capabilities of the two (!!) graphics chips installed. Does anybody have any comments on this? From what I could read about OpenCL, I can foresee three situations.

1. OpenCL sits in the BLAS and not in MATLAB itself, and if I understand correctly, MATLAB uses the BLAS for basic operations like matrix inversion, multiplication, etc. (true?). In this case MATLAB could take advantage of OpenCL right away and become blazingly fast without any recoding (wishful thinking?? :). A quick way to check this is sketched right after this list.

2. MATLAB needs recoding and recompiling to take advantage of OpenCL. Historically, MathWorks has been very slow to update MATLAB with the latest technology (think of multicore support and Intel Macs...), so in this case it might take a year after the release of Snow Leopard before we see an OpenCL version of MATLAB running on OS X.

3. OpenCL cannot really be used for MATLAB because it is not possible to break the core MATLAB operations down into simple operations that can be performed on a GPU (maybe RAM problems in the case of large matrices?).
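
Since MATLAB ships with a BLAS it is linked against, a minimal check (output strings vary by release and platform) is to ask which BLAS/LAPACK that is and time a dense multiply; this is the layer an OpenCL-backed BLAS would have to plug into for situation 1 to work without recoding:

    version('-blas')    % reports the linked BLAS build (e.g. an MKL string)
    version('-lapack')  % same for LAPACK
    A = rand(2000);
    tic; B = A*A; toc   % dense multiply is routed through the BLAS (DGEMM)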

I really hope the answer is situation 1, but if anybody (especially MathWorks developers) has any comments or information about this, please let me know ASAP.

thanks a lot!

Lele

Subject: OpenCL

From: Patrick

Date: 2 Mar, 2009 19:28:01

Message: 2 of 13

Hi Lele,

I wouldn't count on MATLAB to start tapping the power of graphics cards any time soon. I asked a representative about this in October 2008 and they said there are no plans for moving towards GPGPU computing, which is a real shame in my opinion. Also, it seems as if the Mac is a real second-class platform; for instance, there was a while when MATLAB didn't have a version that ran on Intel Macs at all (after 10.4.8 was released, I believe). I can't see MathWorks recompiling their code just for the sake of a few Mac users.

However, if you're interested in getting MATLAB to run on your graphics card, check out Accelereyes at http://www.accelereyes.com/. They have a cool software suite that lets you run MATLAB code on your GPU with minimal modifications. I do a lot of large matrix manipulations using MATLAB and Accelereyes on Linux. I bought a GPU with 800 gigaflops of computing power (the NVidia GeForce 280), and I can harness all of that for my dense matrix multiplications, often getting 10x speedups.
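
To put those gigaflop and speedup numbers in context, here is a minimal CPU-side timing sketch (sizes made up) of the kind of dense multiply I mean; any GPU path, whether Jacket's GPU array types or something MathWorks ships later, gets benchmarked against exactly this:

    n = 4000;
    A = rand(n, 'single');   % single precision, as on most 2009-era GPUs
    tic; C = A*A; t = toc;   % dense matrix multiply on the CPU
    fprintf('%dx%d multiply: %.2f s, %.1f GFLOPS\n', n, n, t, 2*n^3/t/1e9);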

It would be great if MathWorks started to support GPUs themselves, but I don't think they're agile enough. (Yes, MathWorks, I'm intentionally trying to provoke a reaction here. You have a great product, but it's time that you start making software that takes advantage of the massively powerful hardware for doing dense matrix multiplications.)

Good luck, Lele! Try out Accelereyes (you should first wait for CUDA 2.1 to be released for Macs), and let's make some noise to MathWorks that we want MATLAB to start taking advantage of the vector processors on graphics cards soon.

Patrick

Subject: OpenCL

From: Emanuele Ronchi

Date: 3 Mar, 2009 06:38:01

Message: 3 of 13

Thanks for the info, Patrick! I checked the website and it looks very promising, especially the demo on neural nets, which I use often. However, there is one thing that concerns me, and that is the graphics card memory. Apparently, if you use GPU processing, the maximum size of your matrices is limited by the graphics card's memory (correct?). That makes perfect sense from the graphics card's point of view, but it is a major obstacle when training nets that require 6+ GB of RAM. Is there a way to "bridge together" RAM and GPU RAM? I truly hope the guys at Apple have thought about this in developing Grand Central, but chances are not so good because it looks like a major hardware obstacle.

(There is actually an option to split the Jacobian calculation into chunks when using Levenberg-Marquardt NN training, but unfortunately this adds a lot of overhead, and in any case the Hessian must exist as a whole in memory.)
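
For what it's worth, the chunking idea looks roughly like this (a sketch with made-up sizes, plain CPU MATLAB, no GPU calls): only one block of the Jacobian has to be in memory at a time, but the accumulated Hessian approximation still has to fit as a whole, which is exactly the limitation above:

    nParams  = 2000;                % Jacobian columns (assumed size)
    nSamples = 1e5;                 % Jacobian rows (assumed size)
    chunk    = 1e4;                 % rows processed per pass
    H = zeros(nParams);             % J'*J accumulates in full, in host RAM
    for r = 1:chunk:nSamples
        rows = r:min(r+chunk-1, nSamples);
        Jblk = rand(numel(rows), nParams);   % stand-in for this chunk of J
        H = H + Jblk' * Jblk;                % add this block's contribution
    end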

In any case, I join your call to MathWorks to PLEASE be quick in implementing GPGPU for MATLAB (and especially for the Mac :)

Let me know if you have any new info on this

cheers,

Lele

Subject: OpenCL

From: Patrick

Date: 3 Mar, 2009 11:40:20

Message: 4 of 13

Hi Lele,

You're right that it can be a challenge figuring out how to cram all your calculations onto your video card's 1 GB or so of RAM. Shuffling variables back and forth takes a while too - it's limited by your PCI bus speed. If you have any steps where you can fit all your calculations onto your graphics card, great; otherwise you're right that it will be slow.
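
A quick back-of-the-envelope check helps when deciding whether a step fits (a sketch; 8 bytes per double element, ignoring the temporaries the computation itself needs):

    n = 8000;
    fprintf('%dx%d double: %.0f MB\n', n, n, n^2*8/2^20);   % ~488 MB, half of a 1 GB card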

Take care,

Patrick

"Emanuele Ronchi" <emanuele.ronchi@tsl.uu.se> wrote in message <goij89$kn$1@fred.mathworks.com>...
> Thanks for the info Patrick! I checked the website and it looks very promising, especially the demo on neural nets which I use often. However, there is one thing that concerns me, and this is the graphics card memory. Apparently if you use GPU processing, the maximum size of your matrices is limited by the graphic card memory (correct?). That makes perfect sense from the graphic card's point of view but it is a main obstacle when training nets that require 6+ GB RAM. Is there a way to "bridge together" RAM and G-RAM? I truly hope the guys at Apple have thought about this in developing grand central but chances are not so good because it looks like a major hardware obstacle.
>
> (There is actually an option to split the Jacobian calculation into chunks using Levenberg-Marquardt NN training but unfortunately this is a great overhead, and in any case the Hessian must exist as a whole in the memory.)
>
> In any case, I join your call to Mathworks to PLEASE be quick in implementing GPGPU for Matlab and (especially for Mac :)
>
> Let me know if you have any new info on this
>
> cheers,
>
> Lele
>
>

Subject: OpenCL

From: Royi Avital

Date: 11 Oct, 2009 07:29:03

Message: 5 of 13

I wish they added support for utilizing the GPU.
It might be something big for image and video processing.

Now, with open standards, there's no reason why they shouldn't do it.

This might be the "killer" feature that decides whether people upgrade or not.

Subject: OpenCL

From: Sebastien Paris

Date: 11 Oct, 2009 08:32:01

Message: 6 of 13


I think MathWorks won't really have a choice but to adopt OpenCL, probably in two years or so: the time it will take for BLAS/LAPACK/FFTW to become available with OpenCL.


"Royi Avital" <RoyiREMOVEAvital@yahoo.com> wrote in message <has1fu$d44$1@fred.mathworks.com>...
> I wish they added support for utilizing the GPU.
> It might be something big for image and video processing.
>
> Now, with the open standards there's no reason why they won't do it.
>
> This is might be the "killer" feature which make people upgrade or not.

Subject: OpenCL

From: Sebastien Paris

Date: 11 Oct, 2009 09:46:03

Message: 7 of 13

I found this project this morning ...


http://openclblas.sourceforge.net/


So it's going in the right direction.
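
If anyone wants to experiment, older MATLAB releases documented a BLAS_VERSION environment variable for pointing MATLAB at an alternative BLAS binary. Whether current releases still honour it, and whether an OpenCL BLAS exports a complete enough interface, I don't know, so treat this purely as a sketch:

    % Set before launching MATLAB (Linux shell):
    %   export BLAS_VERSION=/path/to/libopenclblas.so
    version('-blas')   % then check inside MATLAB which BLAS was actually picked up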


"Sebastien Paris" <sebastien.paris.nospam@lsis.org> wrote in message <has560$ip6$1@fred.mathworks.com>...
>
> I think Mathworks don't have really the choice to use OpenCL probably in 2 years ago. The time that BLAS/LAPACK/FFTW will be available with OpenCL.
>
>
> "Royi Avital" <RoyiREMOVEAvital@yahoo.com> wrote in message <has1fu$d44$1@fred.mathworks.com>...
> > I wish they added support for utilizing the GPU.
> > It might be something big for image and video processing.
> >
> > Now, with the open standards there's no reason why they won't do it.
> >
> > This is might be the "killer" feature which make people upgrade or not.

Subject: OpenCL

From: Royi Avital

Date: 19 Jan, 2010 21:58:03

Message: 8 of 13

It looks promising.
Hopefully it will wake up someone at MathWorks.
We don't want 3rd-party solutions.
We want the real thing: OpenCL support within MATLAB.

Just think about blockproc utilizing the GPU.
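
For anyone who hasn't used it, blockproc (Image Processing Toolbox) already expresses an image as independent tiles, which is exactly the kind of data-parallel pattern a GPU backend could exploit. A minimal CPU-only sketch:

    I   = rand(2048);                              % stand-in image
    fun = @(blk) blk.data - mean(blk.data(:));     % per-tile operation
    J   = blockproc(I, [64 64], fun);              % applied tile by tile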

"Sebastien Paris" <sebastien.paris.nospam@lsis.org> wrote in message <has9gr$nfc$1@fred.mathworks.com>...
> I found this project this morning ...
>
>
> http://openclblas.sourceforge.net/
>
>
> So it's going in the right direction.
>
>
> "Sebastien Paris" <sebastien.paris.nospam@lsis.org> wrote in message <has560$ip6$1@fred.mathworks.com>...
> >
> > I think Mathworks don't have really the choice to use OpenCL probably in 2 years ago. The time that BLAS/LAPACK/FFTW will be available with OpenCL.
> >
> >
> > "Royi Avital" <RoyiREMOVEAvital@yahoo.com> wrote in message <has1fu$d44$1@fred.mathworks.com>...
> > > I wish they added support for utilizing the GPU.
> > > It might be something big for image and video processing.
> > >
> > > Now, with the open standards there's no reason why they won't do it.
> > >
> > > This is might be the "killer" feature which make people upgrade or not.

Subject: OpenCL

From: Rob Campbell

Date: 20 Jan, 2010 00:06:03

Message: 9 of 13

I've heard that MathWorks is put off GPGPU because it only handles single precision. However, Jacket, the above-mentioned 3rd-party enhancement, now advertises a new feature: "Double precision linear algebra enhancements will help a larger user base that has double precision requirements." Come on, MathWorks!

And it's not just GPGPU... 3-D plots in MATLAB are dog slow and look awful on screen. Check out what Jacket can do in this regard:
http://www.accelereyes.com/products/jacketgfx

Subject: OpenCL

From: Ken M.

Date: 26 Feb, 2010 17:09:04

Message: 10 of 13

OpenCL itself does support double precision; it's specific GPUs that lack it:
http://www.geeks3d.com/20091014/radeon-hd-5770-has-no-double-precision-floating-point-support/

The reason some hardware doesn't support it is that vendors also want to sell partially 'broken' chips, and since not many people actually care about that 'feature', it makes for good marketing.

Obviously OpenCL is the only way to go if you don't want to get caught in the middle of a standardisation fight AND want to reach the largest pool of potential users already out there.

Subject: OpenCL

From: Mark Shore

Date: 26 Feb, 2010 17:33:05

Message: 12 of 13

"Rob Campbell" <matlab@robertREMOVEcampbell.removethis.co.uk> wrote in message <hj5hdb$bko$1@fred.mathworks.com>...
> I've heard that Mathworks are put off GPGPU because it only handles single-precision. However, Jacket, the above-mentioned 3rd party enhancement, now states a new feature: "Double precision linear algebra enhancements will help a larger user base that has double precision requirements." Come on Mathworks!
>
> And it's not just the GPGPU... 3-D plots in Matlab are dog slow and look awful on screen. Check out what Jacket can do in this regard:
> http://www.accelereyes.com/products/jacketgfx

A MATLAB beta program recently started (http://www.mathworks.com/programs/gpu_beta/). Applicants need an NVIDIA CUDA-enabled GPU and a licence for the Parallel Computing Toolbox.
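
For anyone wondering what that looks like in practice, here is a sketch of the kind of workflow such toolbox-level GPU arrays enable (the names follow what eventually shipped as gpuArray/gather in the Parallel Computing Toolbox; the beta's exact API may differ):

    A = gpuArray(rand(2000, 'single'));   % copy the data to the CUDA device
    B = A * A;                            % dense multiply executes on the GPU
    C = gather(B);                        % copy the result back to host memory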

Subject: OpenCL

From: Rob Campbell

Date: 16 Mar, 2010 20:01:09

Message: 13 of 13


That's good news about the beta. I've been playing with www.gp-you.org today and haven't found it terribly useful so far. Hopefully TMW will look into improving the graphics next.
