
Why are the BLAS functions in R2011b much faster than R2008a?

Asked by Derek O'Connor on 10 Feb 2012
Latest activity Edited by NGUYEN on 8 Oct 2013

Here are the results of a simple matrix benchmark test with n = 10^3; A = rand(n,n):

 Dell Precision 690, 2x4-Core Xeon 3545 CPU @ 2.3GHz, 16GB ram
 Windows 7 64-bit
                        Matlab Version : 7.6 (R2008a 64-bit)
                A*A    chol(A)   lu(A)    qr(A)    svd(A)  eig(A)   Total
---------------------------------------------------------------------------
Times(secs)    0.249    0.059    0.117    0.237    1.672    2.663    4.997
Norm. Times    1.000    0.237    0.469    0.954    6.715   10.698   20.073
---------------------------------------------------------------------------
                Times(secs) averaged over 20  runs

I got a friend who has R2011b to run the same test on his machine, which has similar specs, and he got this:

                        Matlab Version : 7.13 (R2011b 64-bit)
                A*A    chol(A)   lu(A)    qr(A)    svd(A)  eig(A)   Total
---------------------------------------------------------------------------
Times(secs)    0.055    0.016    0.073    0.049    0.337    2.005    2.535
Norm. Times    1.000    0.301    1.346    0.901    6.168   36.740   46.456
---------------------------------------------------------------------------
                Times(secs) averaged over 20  runs

As you can see, matrix multiplication is nearly 5 times faster, while the total is about twice as fast. Also note how the normalized times have changed: eig(A) improved only modestly (2.66 s to 2.01 s), so relative to A*A its normalized time jumped from about 10.7 to 36.7.
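
For reference, here is a minimal sketch of the kind of timing script that produces tables like the ones above. It is an assumed reconstruction, not the exact script used; in particular, chol needs a symmetric positive definite matrix, so this version factors A'*A rather than the raw rand matrix.

  % Time six dense-matrix operations on a 1000x1000 random matrix,
  % average over 20 runs, and normalize by the A*A time.
  n = 1e3;
  A = rand(n,n);
  B = A'*A;                              % SPD matrix, since chol(rand(n)) would fail
  nruns = 20;
  names = {'A*A', 'chol', 'lu', 'qr', 'svd', 'eig'};
  ops   = {@() A*A, @() chol(B), @() lu(A), @() qr(A), @() svd(A), @() eig(A)};

  t = zeros(1, numel(ops));
  for k = 1:numel(ops)
      tic
      for r = 1:nruns
          out = ops{k}();                %#ok<NASGU> evaluate; result discarded
      end
      t(k) = toc/nruns;                  % average seconds per run
  end

  fprintf('%8s %10s %12s\n', 'Op', 'Time (s)', 'Normalized');
  for k = 1:numel(ops)
      fprintf('%8s %10.3f %12.3f\n', names{k}, t(k), t(k)/t(1));
  end
  fprintf('%8s %10.3f\n', 'Total', sum(t));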


1 Answer

Answer by Matt Tearle on 10 Feb 2012

The developers continually work on squeezing a bit more efficiency out of the core math operations, so I'd expect to see an improvement over time. The details of the underlying implementations are proprietary information, but for 8a -> 11b in particular, I'd guess that improvements in multithreading would be a contributing factor.
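If you want to probe this yourself, two documented commands help; the sketch below is illustrative and nothing in it comes from this thread. version('-blas') and version('-lapack') report which libraries each release links against, and maxNumCompThreads lets you repeat a timing with a single computational thread to see how much of the gain comes from multithreading.

  % (1) Compare the linked BLAS/LAPACK libraries in R2008a and R2011b.
  disp(version('-blas'))
  disp(version('-lapack'))

  % (2) Rerun a timing single-threaded versus the default thread count.
  nthreads = maxNumCompThreads;          % current computational thread count
  A = rand(2000);
  maxNumCompThreads(1);                  % force a single thread
  tic; B = A*A; t1 = toc;
  maxNumCompThreads(nthreads);           % restore the default
  tic; B = A*A; tN = toc;
  fprintf('A*A: %.3f s with 1 thread, %.3f s with %d threads\n', t1, tN, nthreads);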

Take a look at the release notes in the documentation. You'll see performance enhancements in the MATLAB math section. They don't often go into detail (again, that's MathWorks IP), but you'll see certain functions or function groups listed. In particular, I note that 11a says:

Performance Enhancement

  • Matrix transpose
  • Element-wise single precision functions
  • Sparse matrix indexed assignment
  • Many linear algebra functions
  • Convolution for long vectors and large matrices with conv and conv2

1 Comment

Derek O'Connor on 10 Feb 2012

Thanks, Matt. As usual, MathWorks is very coy. It's strange; you'd think they would be blowing their own trumpet about such an improvement in speed.

The only good information I've ever got on the innards of Matlab is the January 1992 SIAM paper by Gilbert, Moler, and Schreiber: "Sparse Matrices in MATLAB: Design and Implementation."
