3D matrix multiplication with reduced computation time
Hello Everyone
I have two 3D matrices A(M,N,I) and B(M,N,I) containing complex numbers. M, N, and I are large, say in the range 500-5000. An easy way to do the page-wise matrix multiplication is a for loop over I, but that hugely slows down the rest of my code because I have to do this multiplication thousands of times.
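For reference, the loop version I want to avoid looks like this (a sketch; it assumes M == N so that each page multiply is conformable). In newer MATLAB releases, pagemtimes(A, B) performs the same page-wise multiply in a single call.

```matlab
% Sizes are illustrative; assumes M == N so each page multiply is conformable
M = 500; N = 500; I = 500;
A = complex(rand(M,N,I), rand(M,N,I));
B = complex(rand(M,N,I), rand(M,N,I));

C = complex(zeros(M,N,I));              % preallocate the result
for k = 1:I
    C(:,:,k) = A(:,:,k) * B(:,:,k);     % one full matrix multiply per page
end
```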
I have been searching MATLAB Answers for two days. The recommended solutions use the following MEX files:
3. 'mtimesx': This is the only solution I found that fits my situation. It is really fast with real numbers, but when I use complex numbers the time is greater than simply using the for loop.
Can anyone here please recommend a way to multiply two complex 3D matrices without the for loop that reduces computation time?
I am really looking for a positive response.
Regards abi
3 Comments
Jan
on 29 Aug 2017
Just a note: See the preceding question: https://www.mathworks.com/matlabcentral/answers/353797-can-i-reduce-computation-time-for-the-matrix-computation-sparse-matrices
Accepted Answer
James Tursa
on 29 Aug 2017
Some testing issues I had:
I couldn't run the posted code because npolinomi was not defined. Looking at the code, I made a guess and set npolinomi = 500. The code then ran, but ran out of memory on my machine. So I dropped all of the 500's to 200's just to get things to run.
Speed observations:
This appears to be somewhat of an apples-to-oranges comparison. You have sparse and full matrices in your testing mix, as well as logical variables. The fact that you have sparse matrices involved makes a HUGE difference, and this was not mentioned in your original post. For matrix multiplies, MTIMESX only has code for full double and full single variables. So you appear to be comparing a full*full matrix multiply (via MTIMESX) along with the W1 building vs a sparse*sparse multiply. The sparser the matrices are, the better the sparse*sparse formulation will perform for speed. So if the matrices are sparse enough, it would not surprise me if the sparse code were faster. In addition, in my 32-bit R2011a PCWIN version the MTIMESX code runs faster, so it appears the newer MATLAB versions have improved sparse matrix multiply speed.
So, back to the original question of how to speed up your code given that there are sparse matrices involved. This is hard to say because the code you show is boiled down to the point that I cannot really tell if there is anything that could be sped up with a mex routine or with other methods. Working with sparse matrices in mex routines is not trivial, particularly if you are trying to combine them or incrementally build them. So I would have to see more detail about your real code before I could make any suggestions. E.g., how is COEFF_S really built? Reshaping a sparse matrix (even transposing a sparse vector) invokes a deep data copy, so maybe something could be changed here to avoid that data copy. Are there any calculations that could be moved outside the loop? Are there any full/sparse conversions that could be avoided? Etc, etc. We can't really advise on any of this based on the boiled down code posted above.
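Just to illustrate the kind of restructuring I mean, with made-up names rather than your actual code, here is the generic pattern of hoisting invariant work and sparse conversions out of the loop:

```matlab
W = rand(200);                  % stand-in data (hypothetical)
X = rand(200, 50);
c = 2.5;

% Slow pattern: invariant work repeated inside the loop
Y1 = zeros(200, 50);
for k = 1:50
    S = sparse(W);              % W never changes, so this conversion is wasted work
    Y1(:,k) = S * (c * X(:,k));
end

% Faster: convert once outside the loop, and vectorize the scaling
S  = sparse(W);
Y2 = S * (c * X);               % one sparse*full multiply replaces the loop

% Both give the same result (up to rounding)
norm(Y1 - Y2, 'fro')
```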
7 Comments
James Tursa
on 29 Aug 2017
I need to think about the 131s --> 136s timing result you are seeing with the sparse code ...
James Tursa
on 30 Aug 2017
Edited: James Tursa
on 30 Aug 2017
So, after thinking about it and making a few crude tests, I have a hypothesis (which may or may not be true).
The matrix multiply employed by MATLAB (and MTIMESX) for full matrices is contained in a 3rd party BLAS library that is highly optimized for speed, and internally employs multi-threading. E.g., the generic matrix multiply routine is called DGEMM for real double precision matrices. The complex counterpart to this routine is called ZGEMM, but this routine requires the real and imaginary parts of variables to be interleaved in memory like Fortran and C/C++. But in MATLAB, the real and imaginary parts of complex variables are stored in different parts of memory. So MATLAB had to make a choice when doing a complex matrix multiply ... copy the real and imaginary parts into temporary interleaved memory to call ZGEMM, or just call DGEMM on the appropriate real and imaginary parts? MATLAB chose the latter (and in fact that is what MTIMESX does also). Going from a real matrix multiply to a complex matrix multiply then means that DGEMM is called four times instead of once. In both cases the DGEMM code executes in a multi-threaded fashion as long as the sizes are large enough to justify it. So you see the 3x - 4x speed difference. That is all to be expected. On my system the CPU monitor is pegged at about 50% for both of these cases (I assume this represents 1/2 of the logical cores but all of the physical cores in the system).
For sparse real matrix multiply, it does not appear to me (when watching the CPU monitor) that the calculation is multi-threaded. The CPU usage is typically down around 6%, which is what I would expect on my system if there is no multi-threading going on (50% / 8 cores).
For complex sparse matrix multiply, I suspect that this is not multi-threaded either, but the elements are calculated serially. Only those calculations where both operands are non-zero are done (see note below). My suspicion is that some type of appropriate SSE calculation(s) is done to calculate each individual complex element multiply, and that is where the complex speed increase is coming from. I.e., some SSE instruction(s) can essentially carry out the complex element multiply in about the same time as a single real element multiply. Hence you don't see much difference in the sparse real vs complex matrix multiply total times.
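A crude way to check this hypothesis on any machine is to time the two cases directly (size and density here are arbitrary):

```matlab
n = 5000; density = 0.01;
Sr = sprand(n, n, density);               % real sparse test matrix
Sc = Sr + 1i * sprand(n, n, density);     % complex sparse, same scale

tic; Sr * Sr; toc    % real sparse multiply
tic; Sc * Sc; toc    % complex sparse multiply; if the hypothesis holds,
                     % this takes about the same time as the real case
```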
But ... THIS IS ALL JUST A GUESS!
Side Note: There is a mismatch in the underlying sparse matrix multiply algorithm used when there are NaN's involved. E.g.,
>> a
a =
NaN 0 0 0 0
0 0 0 0 0
0 0 0 0 0
0 0 0 0.9747 0
0.9696 0 0 0 0
>> s = sparse(a);
>> a*a
ans =
NaN NaN NaN NaN NaN
NaN 0 0 0 0
NaN 0 0 0 0
NaN 0 0 0.9501 0
NaN 0 0 0 0
>> full(s*s)
ans =
NaN 0 0 0 0
0 0 0 0 0
0 0 0 0 0
0 0 0 0.9501 0
NaN 0 0 0 0
You can see that the NaN's did not propagate appropriately in the sparse matrix multiply case. The sparse algorithm did not do any of the NaN*0 calculations ... it simply assumed the result would be 0 since one of the operands was 0. This, I suppose, could probably be considered a bug in MATLAB. Maybe I will report it ...