I recently observed that setting a breakpoint inside a function changed the calculation being made downstream (and changed the answer). I distilled it down to a small example:
function z = breakpointcode
b = single(0.75);
c = single(0.866084992885589599609375);
z = b / c - c;                                % <-- set the breakpoint on this line
d = single(double(b)/double(c) - double(c));  % explicit double-precision reference
fprintf('z = %.9g   d = %.9g\n', z, d);       % display both results for comparison
end
If you run the code without setting a breakpoint, it is apparent that the z calculation is being done in double precision in the background (presumably to "help" you out with precision), because the z and d results match. But if you set a breakpoint on the line indicated, the z calculation is instead done in single precision and no longer matches the double-precision result.
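One way to watch the flip at the bit level, rather than squinting at printed digits, is to look at the raw pattern of the result (a sketch; num2hex returns the 8-hex-digit pattern of a single, though you may want to confirm your release accepts single input):

z = breakpointcode;
num2hex(z)   % flips between two 8-digit patterns as you toggle the breakpoint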
I suppose the JIT is trying to "help" me here, but actually it is causing me headaches. I am trying to emulate a single-precision calculation on a different machine ... I WANT the entire calculation to happen in single precision. That's why I made the variables single in the first place.
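The only reliable workaround I can think of is to force the rounding myself after every operation (a sketch, under the assumption that an explicit single() conversion always forces the intermediate to be stored as single, whatever precision the accelerator uses internally):

% Emulate strict single-precision evaluation of z = b/c - c by
% rounding each intermediate result to single explicitly:
t = single(double(b) / double(c));   % quotient rounded once to single
z = single(double(t) - double(c));   % difference rounded once to single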
Well, I guess I already knew that I can't trust PC calculations on single variables to actually be done in single precision behind the scenes (I have the same general problem with C/C++ and Fortran compilers, not just MATLAB). But what threw me in this case was that setting a breakpoint could actually change how the calculation was done downstream. You can keep setting and clearing the breakpoint and the answer will keep flipping back and forth between the single- and double-precision results. I spot-checked R2006b and R2013a and they both do the same thing (32-bit WinXP). My suspicion is that there may be two different parsed versions of the function in memory at the same time: one (presumably the JIT-accelerated version) gets run when there are no breakpoints, and the other (an interpreted version) gets run when there are breakpoints. Just a guess ...
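If that guess is right, forcing the interpreted path some other way should reproduce the breakpoint result with no breakpoint set. In releases of that vintage there was an undocumented accelerator switch that could be used to check this (an assumption on my part that it affects this case; the switch disappeared in later releases):

feature accel off   % undocumented: disable the JIT/accelerator
z1 = breakpointcode;
feature accel on    % re-enable it
z2 = breakpointcode;
isequal(z1, z2)     % false would support the two-code-paths guess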
NOTE: Although the example has "constants" for b and c, I get the same behavior if b and c are passed in as arguments. I.e., this behavior is not just because the JIT recognizes b and c as constants and can pre-calculate the answer.
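A minimal sketch of that variant (breakpointcode2 is just an illustrative name):

function z = breakpointcode2(b, c)
z = b / c - c;   % set the breakpoint here; the result flips the same way
end

Called as, e.g., breakpointcode2(single(0.75), single(0.866084992885589599609375)).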