I've recently developed a Monte Carlo simulation model that, per company policy, must be deployed to a server environment. Performance testing has revealed that my relatively mediocre work desktop (Intel i5 650 @ 3.2 GHz, 3.42 GB RAM, Windows XP) outperforms the target server environment (32x Intel Xeon E7-4830 @ 2.13 GHz, 32 GB RAM, Windows Server 2008 R2 Standard, 64-bit), and the best performance comes from my personal desktop (Intel i7 3770K quad-core, 8 GB RAM, Windows 7 Home, 64-bit). A similar server setup (4x Intel Xeon X5670 @ 2.93 GHz, 8 GB RAM, Windows Server 2008 R2 Standard, 64-bit) with comparable anti-virus scanning also runs faster than the target server environment.
The only major hardware difference between these machines appears to be the graphics cards (the high-end server environment has only a standard VGA adapter), but that doesn't seem like a plausible explanation for the gap in performance.
Could it be that MATLAB is not optimised to perform well on these specific Xeon processors?
I developed the model in R2013a, and it uses parallel processing (hence the need for Xeon processors and Windows Server). The slowdown occurs systematically for both compiled and uncompiled code, and is particularly pronounced in the ODE test of the BENCH function.
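For reference, this is roughly how I've been comparing the machines. A minimal sketch (the column order of the `bench` result matrix — LU, FFT, ODE, Sparse, 2-D, 3-D — is my reading of the documentation for this release, so treat the ODE column index as an assumption):

```matlab
% Run the standard MATLAB benchmark several times and report the ODE test,
% along with the core/thread counts MATLAB actually sees on this machine.
nRuns = 5;
t = bench(nRuns);                 % one row of timings per run
odeTimes = t(:, 3);               % assumed: column 3 is the ODE test
fprintf('ODE test: median %.3f s over %d runs\n', median(odeTimes), nRuns);
fprintf('Cores visible to MATLAB:  %d\n', feature('numcores'));
fprintf('Computational threads:    %d\n', maxNumCompThreads);
```

Comparing `feature('numcores')` and `maxNumCompThreads` across the machines at least rules out MATLAB silently running single-threaded on the server.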
Any help would be much appreciated.