bsxfun vs implicit expansion

The MATLAB docs recommend replacing calls to bsxfun with direct calls to the functions and operators that support implicit expansion. I have seen the same recommendation in several places on the File Exchange and elsewhere. However, a simple benchmark (see attached) seems to show that bsxfun is noticeably faster, at least for the particular use case I'm interested in (a complex vector times a complex 2-D or 3-D matrix with relatively large dimensions). The output image below is from MATLAB R2023a; I see the same trend under both Windows and Linux, on both new and old hardware. Is this an expected result? I couldn't find many details of what actually happens under the hood for either approach, or of when it might be better to use one over the other.
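For reference, a minimal sketch of the kind of comparison described, assuming hypothetical variable names and sizes (the actual attached m-file may differ):

```matlab
% Hedged sketch of the benchmark described above: a complex vector
% scaling a complex 3-D array along its first dimension.
% Names and sizes (kx, mat3D, n) are illustrative, not the attached m-file.
n = 512;
kx    = complex(rand(n,1), rand(n,1));      % n-by-1 complex vector
mat3D = complex(rand(n,n,8), rand(n,n,8));  % n-by-n-by-8 complex array

tBsx = timeit(@() bsxfun(@times, kx, mat3D));  % explicit bsxfun call
tImp = timeit(@() kx .* mat3D);                % implicit expansion
fprintf('bsxfun: %.4f s, implicit: %.4f s\n', tBsx, tImp);
```

timeit handles warm-up and repetition, which avoids the usual tic/toc pitfalls when timing operations this short.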

9 Comments

Use whichever one you prefer.
My guess is that implicit expansion might have more potential for future improvement (e.g. multithreading or whatever magic TMW can implement), whereas BSXFUN is by design limited to repeated calls of an arbitrary function handle.
I (and a few other FEX contributors) prefer BSXFUN because of the backwards compatibility (back to R2007b).
Thanks! I don't have a particular preference, except maximising performance. In this case, I don't need backwards compatibility either.
Interestingly, the situation is reversed if the benchmark is run using the gpuArray datatype. I just tried using NVIDIA GTX 1060 and A100 GPUs, and implicit expansion was 3-5 times faster.
The performance difference is enough in these different scenarios that I suspect I will end up writing benchmarks for every use case and datatype, which is pretty cumbersome.
Any inkling as to the circumstances under which bsxfun might be better than implicit expansion, and vice versa?
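The gpuArray comparison mentioned above can be sketched as follows, assuming Parallel Computing Toolbox and a supported NVIDIA GPU (variable names are again illustrative):

```matlab
% Sketch of the gpuArray variant of the same comparison.
% Requires Parallel Computing Toolbox and a supported NVIDIA GPU.
n = 512;
kx    = gpuArray(complex(rand(n,1), rand(n,1)));
mat3D = gpuArray(complex(rand(n,n,8), rand(n,n,8)));

% gputimeit synchronizes the device, so kernel time is measured fairly
tBsx = gputimeit(@() bsxfun(@times, kx, mat3D));
tImp = gputimeit(@() kx .* mat3D);
fprintf('GPU  bsxfun: %.4f s, implicit: %.4f s\n', tBsx, tImp);
```

Using gputimeit rather than timeit matters here: GPU operations are asynchronous, and timing without synchronization would mostly measure kernel launch overhead.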
Stephen23
Stephen23 on 7 Sep 2023
Edited: Stephen23 on 7 Sep 2023
"Any inklings under which circumstances it might be better to use bsxfun vs implicit expansion and vice versa?"
In general I would expect implicit expansion to be faster (it is much easier to optimize a specific function on some arrays than to optimize code that has to call an arbitrary function handle multiple times). Together with the very explicit TMW advice, this makes your choice quite clear.
"The performance difference is enough in these different scenarios that I suspect I will end up writing benchmarks for every use case and datatype, which is pretty cumbersome."
I would not recommend doing that; you are falling into this trap:
MATLAB is optimized to suit well-written code, not the other way round. Your choice is already clear.
How are you doing the timing?
The timing m-file is included in the question.
Historically, some of the frequent volunteers have found that bsxfun was faster for some situations but not all; implicit expansion is faster for some operations.
As far as I know, there is no guidance from MathWorks as to when each of the two would be expected to be faster.
GPU is faster because consecutive operations can be fused in the implicit case. Not sure about the CPU case. I'll ask.
Sometimes the ideal choice is easy to identify. I'd prefer using implicit expansion all day, but anything I write for my toolbox is intended to work in legacy versions, so I use bsxfun() everywhere instead. At least in R2019b, the speed benefit of either choice never seemed to be consistently significant.
... I suppose then the question becomes whether it's worthwhile to conditionally switch between different implementations depending on version. Is the occasional time advantage worth the cost of version checking, ugliness, and hassle?
This is the result in R2024b on my laptop (Intel Ultra 9 185H). bsxfun still has an edge in speed.


Answers (2)

Gagan Agarwal
Gagan Agarwal on 20 Sep 2023
Hi Bradley,
I understand that you want to know the scenarios in which bsxfun is preferred and the scenarios in which implicit expansion is preferred.
In general, it has been observed that implicit expansion tends to be faster than bsxfun. This is especially true when working with GPUs, where implicit expansion often outperforms bsxfun. However, I couldn't find any specific scenarios that definitively determine when to use one over the other. Therefore, the best approach would be to decide on a case-by-case basis.
I hope this helps!
Hi Bradley,
Just came across this today. To answer your question: you should always use implicit expansion instead of bsxfun whenever possible. We expect implicit expansion to be faster or to run in similar time. Every time you see a performance regression from switching from bsxfun to implicit expansion, please consider it a bug and contact customer support.
Using your benchmark, I was able to find a few performance edge cases and have reported them to the development team.
More importantly, using implicit expansion, you will have more readable code. And if you have several calls to bsxfun, like
tmp = bsxfun(@times, x, y);
z = bsxfun(@plus, tmp, w);
then you are likely to write it as
z = x .* y + w;
This is what we observe in practice. The MATLAB execution engine takes this into consideration and executes it without creating a temporary array, exploiting coarse-grained parallelism by chaining the operations.
Because of this more complex setting, there are some subtle differences in the parallelization strategy compared to the relatively simple bsxfun case. Implicit expansion is a lot better for more complex expressions.
For your simple test cases, bsxfun can do really well in terms of performance and is competitive with implicit expansion. But if you look at your code closely, you want to write ifft((1i*kx) .* fft(f)). That is, you could have written the benchmark as 1i .* kx .* mat3D instead of forming 1i.*kx first and creating an unnecessary temporary array.
I have not tested the performance difference of the two approaches, but I think this shows how much easier it is to write a more complex expression using implicit expansion than using bsxfun.
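The point about the temporary array can be sketched as follows (kx and mat3D are placeholder names standing in for the benchmark's arrays):

```matlab
% Two ways to apply the scaling from the example above.
% Forming the coefficient first creates a named temporary array:
coeff = 1i .* kx;          % temporary n-by-1 complex array
d1 = coeff .* mat3D;

% Folding it into one expression lets the engine chain the operations:
d2 = 1i .* kx .* mat3D;    % no named temporary; eligible for fusion
```

With bsxfun, the one-expression form would require nesting two calls (bsxfun(@times, 1i.*kx, mat3D) still forms the coefficient first), which is part of why the implicit-expansion version reads and composes more naturally.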
Hope this helps,
---Bob.

Release: R2023a
Asked: 7 Sep 2023
Commented: 18 Oct 2024
