When performing repeated integration or interpolation of large data sets over fixed grids, it is often advantageous to access the bare-metal matrix operations behind linear functions like interp1, interp2, and trapz. The Mat-Op-Ex Toolkit provides these matrices efficiently, and in a form that users can build on.
Mat-Op-Ex functions are designed to be analogous to their conventional counterparts, except that they receive no function data and output a matrix instead. To avoid dealing with high-order tensor operations, physical dimensions are always vectorized within these functions, such that the operator matrix is limited to order N=2. Further details are available in the source code files, along with examples and unit tests.
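The underlying idea can be sketched in a few lines of plain MATLAB (this is a minimal illustration, not the toolbox's own code): for a fixed sample grid and fixed query points, linear interpolation is a sparse matrix A such that A*y reproduces interp1(x, y, xq) for any data vector y.

```matlab
% Minimal sketch of the operator-matrix idea (not the toolbox's implementation):
x  = linspace(0, 1, 6).';          % fixed sample grid
xq = [0.15; 0.5; 0.87];            % fixed query points
n  = numel(x); m = numel(xq);

% locate each query point's left neighbor and the local linear weight
[~, ~, bin] = histcounts(xq, x);   % bin(k): interval index containing xq(k)
w = (xq - x(bin)) ./ (x(bin+1) - x(bin));

% assemble the sparse m-by-n interpolation operator
A = sparse([1:m, 1:m], [bin.', bin.'+1], [(1-w).', w.'], m, n);

% repeated interpolations over the same grid reduce to matrix products
y = sin(2*pi*x);
max(abs(A*y - interp1(x, y, xq))) % agrees to machine precision
```

Once A is built, interpolating many data sets on the same grid is a single sparse matrix-vector (or matrix-matrix) product, which is where the amortized savings come from.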
This solution is based heavily on a wonderful bit of code presented by Bruno Luong, expanded here for more general application.
Joel Lynch (2021). mat-op-ex (https://github.com/lynch4815/mat-op-ex/releases/tag/v1.0.2), GitHub. Retrieved .
Thanks for the test_speedup.m script!
What looks problematic, however, is that the "mat-op-ex - CPU" curve increases significantly for large nx*ny, i.e. for nx*ny > 10^5. So for large grids, is interp2 or interpn always faster than interp_matrix on the CPU (typically 8 cores)?
On the other hand, the "mat-op-ex - GPU" curve looks surprisingly good.
So in the end, your method is most likely useful only for small grids on CPUs (as an alternative to griddedInterpolant, which is always faster for small grids) and for large grids on GPUs.
Could you add a clear example demonstrating that interp1_matrix / interp2_matrix is significantly faster than interp1 / interp2?
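A benchmark along the following lines would address this. It is a hypothetical sketch: the interp1_matrix call and its (x, xq) signature are assumed from the toolbox's naming convention, and the speedup only appears when the setup cost is amortized over many interpolations on the same grid.

```matlab
% Hypothetical benchmark sketch; interp1_matrix signature is assumed.
x  = linspace(0, 1, 2000).';
xq = rand(2000, 1);
Y  = rand(numel(x), 500);          % 500 data sets on the same fixed grid

tic
for k = 1:size(Y, 2)
    yq1 = interp1(x, Y(:, k), xq); % grid is searched anew on every call
end
t_interp1 = toc;

tic
A   = interp1_matrix(x, xq);       % build the operator once (assumed API)
yq2 = A * Y;                       % all 500 interpolations in one product
t_matrix = toc;

fprintf('interp1 loop: %.3f s, operator matrix: %.3f s\n', ...
        t_interp1, t_matrix);
```

Sweeping the grid size and the number of repeated data sets in such a script would also make the CPU crossover point discussed above explicit.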