From: "Michael" <>
Newsgroups: comp.soft-sys.matlab
Subject: floating point calc differences between Matlab and C?
Date: Wed, 25 Mar 2009 22:22:01 +0000 (UTC)
Organization: Circular Logic
Lines: 16
Message-ID: <gqeaq9$56$>
Reply-To: "Michael" <>
Content-Type: text/plain; charset="ISO-8859-1"
Content-Transfer-Encoding: 8bit
X-Trace: 1238019721 166 (25 Mar 2009 22:22:01 GMT)
NNTP-Posting-Date: Wed, 25 Mar 2009 22:22:01 +0000 (UTC)
X-Newsreader: MATLAB Central Newsreader 1151890
Xref: comp.soft-sys.matlab:527767


I've implemented some basic complex-number arithmetic functions in C for a larger project in which portions of a Matlab program are being replicated in C to run as mex files.

While comparing the Matlab and C implementations of the arithmetic routines, I see differences in the results that look like floating point rounding differences. Generally the differences are less than the eps of the Matlab results used as reference.

Perhaps these are a result of the floating point rounding mode being used on the processor. Is there a way to check and set the floating point rounding mode from Matlab? Is this something that can even be set by different apps running on the same processor?

I'm running OS X on an Intel Mac, using gcc 4 for C compilation. It looks like gcc 4 only offers a rounding-mode compiler flag for DEC Alpha.

The functions that give me differences between Matlab and C are division, sqrt, and division into a scalar. Perhaps the differences come from different implementations of the algorithms, such that rounding errors play out differently even under the same rounding mode?

Any thoughts appreciated!