I am running MATLAB 2014a on a machine with 192 GB of RAM and 20 cores. I am trying to convolve two vectors, one with 3,060,663 elements, the other with 693. The built-in conv took 0.06 seconds; convnfft filled the memory and then crashed the machine.
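For reference, direct and FFT-based convolution should agree and neither needs memory far beyond the output size, so the blow-up above suggests a padding or allocation bug rather than something inherent to the FFT approach. A minimal Python/SciPy sketch (with the vectors shrunk so it runs quickly; the sizes here are hypothetical stand-ins for the ones reported above):

```python
import numpy as np
from scipy.signal import fftconvolve

# Stand-ins for the two vectors described above, shrunk for speed.
rng = np.random.default_rng(0)
long_vec = rng.standard_normal(30_607)  # stands in for the 3,060,663-element vector
kernel = rng.standard_normal(693)

direct = np.convolve(long_vec, kernel)    # direct (time-domain) convolution
via_fft = fftconvolve(long_vec, kernel)   # FFT-based convolution

# Both methods agree to floating-point precision, and the FFT route only
# needs workspace on the order of the padded output length.
print(np.allclose(direct, via_fft, atol=1e-8))
```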
One update I would really like to see in this package is the derivative functions, at least for W(z). As pointed out by Zaghloul and Ali, the derivative functions of W(z) (their equations 21-23) become numerically unstable near dV/dx = 0 (V = real(W(z))), i.e., at the peak of the Voigt function. This can cause problems when trying to compute analytical Jacobians for nonlinear fits of the Voigt function to optical spectra.
Since you are already using the Zaghloul and Ali algorithm in this region, it would be helpful to also use their method to output a function, say Faddeeva_dw(z) = dV/dx + i*dL/dx (L = imag(W(z))). The corresponding y derivatives can then be trivially computed.
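For anyone needing this in the meantime, a standard identity gives the derivative analytically: w'(z) = 2i/sqrt(pi) - 2*z*w(z), and since w is analytic the y derivative follows as i*w'(z). This is not the Zaghloul-Ali recursion itself, just a sketch using SciPy's Faddeeva function (scipy.special.wofz):

```python
import numpy as np
from scipy.special import wofz  # Faddeeva function w(z)

def faddeeva_dx(z):
    """d/dx of w(z) for z = x + i*y, via w'(z) = 2i/sqrt(pi) - 2*z*w(z).
    The y derivative is i*faddeeva_dx(z) by the Cauchy-Riemann equations."""
    return 2j / np.sqrt(np.pi) - 2 * z * wofz(z)

# Check against a central finite difference at the line center (x = 0),
# exactly the region where finite differences of V are ill-conditioned.
z = 0.0 + 0.5j
h = 1e-6
fd = (wofz(z + h) - wofz(z - h)) / (2 * h)
print(np.allclose(faddeeva_dx(z), fd, atol=1e-6))
print(abs(faddeeva_dx(z).real) < 1e-10)  # dV/dx vanishes at the peak
```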
This is still a great submission, but I have found a bug either in the script or in LabVIEW 2009's XML implementation. If I create an XML file in LV containing only an array, usually I get a block that looks like this:
Thank you for this great function! Just one question: even when I use the -native option, the final resolution of the image differs from the size of the matrix that went into the image. E.g., I have a 2048x2048 image, but export_fig (with the -native option) returns a 2052x2052 image [currently using '-m4 -bmp -native' as options]. If I don't use that option it returns a 3176x3176 image.
Do you have a suggestion as to how to get the exported image to match the size of the matrix?
22 Oct 2014
Exports figures nicely to a number of vector & bitmap formats.