From: Peter Boettcher <>
Newsgroups: comp.soft-sys.matlab
Subject: Re: Very weird resolution issue. Bug ????? Seriously !!
References: <goju5b$nv4$>
	<gok3qm$2gn$> <gok5db$mqi$>
	<gok6c5$19g$> <gok8gq$2na$>
	<gok9cs$3ut$> <gokdr9$heh$>
Message-ID: <>
Organization: MIT Lincoln Laboratory
User-Agent: Gnus/5.110006 (No Gnus v0.6) Emacs/23.0.0 (gnu/linux)
Cancel-Lock: sha1:WBXjl/0iClJ/ZgIQqHsBR2aVKyU=
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Lines: 56
Date: Wed, 04 Mar 2009 12:58:38 -0500
X-Trace: llnews 1236189518 (Wed, 04 Mar 2009 12:58:38 EST)
NNTP-Posting-Date: Wed, 04 Mar 2009 12:58:38 EST
Xref: comp.soft-sys.matlab:522521

"Sung Soo Kim" <> writes:

> Thank you John,
>> Compilers do this sort of thing routinely, in their
>> attempts to optimize your code.
> Your point about the compiler makes me inclined to just accept this
> bug. Yes, the compiler does many kinds of weird optimizations. But I
> still think they should not have let this happen, because it changes
> what the code is supposed to do. See this:
> a=3/14+3/14+15/14
> b=(3/14+3/14)+15/14
> c=3/14+(3/14+15/14)
> This is from Roger's example. Between 'b' and 'c', which do you think
> is supposed to be equal to 'a'? Can you predict MATLAB's result
> without running the code? The answer is supposed to be 'b'. But are
> you sure? (Of course this may be documented, and it is fortunately 'b'.)

Why do you say b?  Why is that obviously the "correct" answer?  In
mathematics the answers are identical, so the grouping shouldn't
matter.  It is only your expectation that says we should add the
numbers in left-to-right order.
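MATLAB and Python both carry out this arithmetic in IEEE 754 double
precision, so Roger's example can be checked outside MATLAB as well; a
minimal sketch in Python (the grouping, not the language, is what
matters here):

```python
# Roger's three groupings of the same sum, in IEEE 754 doubles.
# Each intermediate sum is rounded to 53 bits of significand, so
# re-associating the additions can change the final bit pattern.
a = 3/14 + 3/14 + 15/14      # evaluated left-to-right
b = (3/14 + 3/14) + 15/14    # the same grouping, made explicit
c = 3/14 + (3/14 + 15/14)    # re-associated

print(a == b)   # True: '+' is left-associative, so 'a' IS 'b'
print(a == c)   # False: the re-associated inner sum rounds differently
```

That a equals b is a promise of the language grammar (left-associative
`+`), not of the mathematics; c differs because 3/14 + 15/14 is rounded
before the final addition.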

I presume, given your matrix multiplication example, that you think the
"correct" order is to accumulate, in order, the products of pairs of
numbers from the first row and first column of the matrices.  Why?  Why
is that the correct order?  The math doesn't care.  In fact the "best"
ordering would be to accumulate the numbers in ascending order of
magnitude, to minimize the round-off error.  So the top-to-bottom
ordering is just as wrong as any other order.
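To see concretely why ordering matters for accuracy at all, here is a
sketch with values picked for illustration (they are not from the
thread): ten 1.0s added after 1e16 are each lost to rounding, while the
same ten 1.0s accumulated first survive.

```python
values = [1e16] + [1.0] * 10

# Largest first: ulp(1e16) is 2.0, so each added 1.0 lands exactly
# on a rounding boundary and is rounded away, one at a time.
big_first = 0.0
for v in values:
    big_first += v

# Smallest first: the ten 1.0s sum exactly to 10.0 before they
# meet the large value, and 1e16 + 10 is exactly representable.
small_first = 0.0
for v in sorted(values):
    small_first += v

print(big_first == 1e16)         # True: the ten 1.0s vanished
print(small_first == 1e16 + 10)  # True: they survived
```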

You can get different answers on some CPUs and compilers that use
80-bit extended precision inside the x87 FPU, but 64-bit doubles in
memory.  Current Intel CPUs have a choice: use the x87 FPU, or use SSE2
instructions for double-precision computations.  SSE2 doesn't use the
80-bit extended format.
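The mechanism behind that discrepancy, an intermediate rounded to a
different width than the final result, can be sketched without an x87
FPU at hand by narrowing an intermediate to single precision instead of
widening it to extended.  The direction is reversed, but the effect is
analogous.  `f32` below is a helper defined just for this illustration:

```python
import struct

def f32(x):
    """Round a double to the nearest IEEE 754 single (binary32)."""
    return struct.unpack('f', struct.pack('f', x))[0]

x = 1.0 + 2**-30          # exactly representable as a 64-bit double

kept = x - 1.0            # intermediate kept in double precision
narrowed = f32(x) - 1.0   # intermediate rounded to single precision

print(kept == 2**-30)     # True: the small term survives
print(narrowed == 0.0)    # True: 2**-30 is below half an ulp of
                          # a single at 1.0, so it rounds away
```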

In your experience, computations come out the same way every time.
Your mistake is in assuming that this is somehow part of a standard, or
that it is guaranteed to hold.  Take the 32-bit to 64-bit migration:
many programmers assumed that a pointer was the same size as an
integer.  It had always worked before, so why not?  The problem was
that no standard said it must, so the move to the new architecture
broke a lot of code.
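The pointer-size analogy is easy to check from any language with access
to C types; a sketch using Python's ctypes module:

```python
import ctypes

int_size = ctypes.sizeof(ctypes.c_int)     # typically 4
ptr_size = ctypes.sizeof(ctypes.c_void_p)  # 4 on 32-bit, 8 on 64-bit

# Nothing in the C standard says these must be equal; code that
# assumed they were worked for years, then broke on 64-bit targets.
print(int_size, ptr_size)
```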

It's virtually impossible to write code that adheres perfectly to the
guarantees of the standards and documentation.  Whatever.  We do the
best we can given the constraints and needs of a given project.  But
when such a situation is pointed out to you, and you understand the
mistaken assumption involved, you fix the code and move on.  You don't
protest the advances in architecture, numerical methods, or compilers.