Yes, absolutely: if you have C code, then the compiler can make a difference. There are parts of C in which the order of evaluation is unspecified, such as the order in which arguments to a function are evaluated. The C99 standard takes considerable care in defining "sequence points"; within the constraints those sequence points impose, the evaluation order of sub-expressions is sometimes unspecified, so two conforming compilers can legitimately produce different results for the same source.
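To make that concrete, here is a small C program (the names f and g are purely illustrative) in which a conforming compiler is free to call either function first:

    #include <stdio.h>

    static int order = 0;

    static double f(void) { order = order * 10 + 1; return 0.1; }
    static double g(void) { order = order * 10 + 2; return 0.2; }

    int main(void) {
        /* The evaluation order of the operands of + is unspecified,
           so "order" may end up 12 or 21 depending on the compiler. */
        double s = f() + g();
        printf("sum = %g, evaluation order = %d\n", s, order);
        return 0;
    }

When the operands have side effects, that freedom alone is enough to change a program's results from one compiler to the next.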
A C compiler (or more correctly, code generator) might generate a fused multiply-and-add, which performs the multiply and the add with a single rounding and so usually increases precision. C99 explicitly permits such "contracted" expressions, though the programmer can disable them with #pragma STDC FP_CONTRACT OFF; whether contraction is on by default varies from compiler to compiler, so the same expression can round differently in different builds.
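The difference is easy to see with the fma() function from <math.h> (link with -lm on most Unix systems); a compiler that contracts a*b - 1.0 on its own will produce the "fused" answer for the plain expression as well:

    #include <stdio.h>
    #include <math.h>
    #include <float.h>

    int main(void) {
        double a = 1.0 + DBL_EPSILON;   /* 1 + 2^-52, exactly representable */
        double b = 1.0 - DBL_EPSILON;   /* 1 - 2^-52, exactly representable */

        /* Rounded separately, a*b = 1 - 2^-104 rounds to exactly 1.0,
           so the subtraction yields 0. */
        double plain = a * b - 1.0;

        /* fma() rounds once, after the multiply-add, and recovers the
           exact answer, -2^-104 (about -4.93e-32). */
        double fused = fma(a, b, -1.0);

        printf("plain: %g\nfused: %g\n", plain, fused);
        return 0;
    }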
Using 80-bit registers for double-precision intermediates on a machine that can only store 64-bit doubles is a subtler case: C99 does permit expressions to be evaluated in greater precision (the FLT_EVAL_METHOD macro in <float.h> reports this), but it requires the extra precision to be discarded predictably, at assignments and casts. [Note: IEEE 754 has defined 80-bit "extended precision" arithmetic, provided the system has a mechanism to store all 80 bits.] The practical trouble with 80-bit registers is register spills: when the code generator runs out of registers, 80-bit intermediates are written to memory as 64-bit quantities at points the programmer cannot predict. (Interrupts and normal context switches are less of an issue, since mainstream operating systems save and restore the full 80-bit register state.) Thus, depending on exactly where the spills fall, which varies with optimization level and surrounding code, nominally identical calculations could come out differently, and that unpredictability does not match the C abstract semantics.
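You can probe what a particular build does with FLT_EVAL_METHOD, and a small overflow test makes the extra range of 80-bit intermediates visible. This sketch assumes an x86 target; the comments describe typical gcc behavior:

    #include <stdio.h>
    #include <float.h>

    int main(void) {
        /* 0 = evaluate in the declared type (e.g. SSE arithmetic),
           2 = evaluate in long double (e.g. the 80-bit x87 stack). */
        printf("FLT_EVAL_METHOD = %d\n", FLT_EVAL_METHOD);

        /* volatile keeps the compiler from folding this at compile time */
        volatile double big = 1e308;

        /* With 64-bit arithmetic, big * big overflows to infinity before
           the division, so r is inf.  With 80-bit x87 intermediates
           (e.g. gcc -m32 -mfpmath=387), the product stays in range and
           r comes out near 1e308. */
        double r = big * big / 1e308;
        printf("r = %g\n", r);
        return 0;
    }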
The Best Practices must, unfortunately, involve re-coding a number of MATLAB routines, as MathWorks has never made any promises about operations being optimized for accuracy.
I think one of the first things I would do is write a routine with a name such as "accurate_sum" that re-orders each vector so as to minimize loss of precision and catastrophic cancellation. Unfortunately, whenever I have thought about this before, I have ended up confusing myself about the best handling of a mix of positive and negative values.
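For what it is worth, one well-known way to sidestep the ordering question entirely is Kahan compensated summation, which carries an explicit correction term rather than re-ordering the inputs. Here is a minimal C sketch of what such an "accurate_sum" could look like (a MATLAB re-implementation would follow the same loop):

    #include <stddef.h>

    /* Kahan compensated summation: c captures the low-order bits lost
       each time an element is added to the running sum, so a mix of
       positive and negative values is handled without pre-sorting. */
    double accurate_sum(const double *v, size_t n) {
        double sum = 0.0;
        double c = 0.0;               /* running compensation */
        for (size_t i = 0; i < n; i++) {
            double y = v[i] - c;      /* corrected next element */
            double t = sum + y;       /* low bits of y are lost here... */
            c = (t - sum) - y;        /* ...and recovered algebraically */
            sum = t;
        }
        return sum;
    }

Note that this only helps if the compiler preserves the written order of operations: under flags such as gcc's -ffast-math, the compensation term simplifies away algebraically and the extra accuracy is lost.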