From: "Sung Soo Kim" <>
Newsgroups: comp.soft-sys.matlab
Subject: Re: Very weird resolution issue. Bug ????? Seriously !!
Date: Wed, 4 Mar 2009 03:18:02 +0000 (UTC)
Organization: JHU
Lines: 31
Message-ID: <gokrta$q9s$>
References: <goju5b$nv4$> <gok3qm$2gn$> <gok5db$mqi$> <gok6c5$19g$> <gok8gq$2na$> <gok9cs$3ut$> <gokdr9$heh$> <gokgla$siv$>
Reply-To: "Sung Soo Kim" <>
Content-Type: text/plain; charset="ISO-8859-1"
Content-Transfer-Encoding: 8bit
X-Trace: 1236136682 26940 (4 Mar 2009 03:18:02 GMT)
NNTP-Posting-Date: Wed, 4 Mar 2009 03:18:02 +0000 (UTC)
X-Newsreader: MATLAB Central Newsreader 1724905
Xref: comp.soft-sys.matlab:522320

> Never assume that such an operation gives
> an exact result down to the last bit, since it
> cannot possibly do so. Even a simple decimal
> number like 0.1 has no exact representation
> in a binary floating point system.

After a long discussion, I'm now convinced that you guys are right: I cannot rely on what is merely supposed to be right. But I cannot agree with that last point. Of course a CPU architect is free to implement floating point however s/he wishes, but there is a clear IEEE standard developed by smart engineers, and I believe they had very good reasons to standardize this. One of the most important reasons is, of course, to minimize calculation error for scientific purposes. For single and for double, there is exactly one IEEE representation of 0.1 for each format (let's leave aside the signed/unsigned number issue).
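That said, it's easy to see the 0.1 rounding directly at the MATLAB prompt (a quick sketch; the exact digits shown depend only on IEEE double rounding, not on the MATLAB version):

```matlab
% The double nearest to 0.1 is not exactly 0.1; printing enough
% digits exposes the rounding that IEEE 754 prescribes.
fprintf('%.20f\n', 0.1)    % prints 0.10000000000000000555

% Accumulated rounding then breaks "obvious" decimal identities:
disp(0.1 + 0.2 == 0.3)     % displays 0 (false)
```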

> Better for several reasons is to write it this way
> in Matlab:
>    A\b

Fortunately, I already knew about this, and I've been using it. What I meant by 'inverse' was exactly this operator. But what I meant by 'the same thing can happen on the inverse' is that this operator can change its behavior in the future. How can I rely on it then? Maybe I will have to change my testing code again.
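For what it's worth, the two forms really can differ in the last bits, which is one reason the backslash form is preferred (a sketch using a Hilbert matrix as a stand-in for an ill-conditioned system; the exact residuals will vary by machine and version):

```matlab
% Compare x = A\b (direct solve) against x = inv(A)*b.
% Backslash factorizes A and solves the system; inv() computes the
% full inverse first, which costs more and typically loses accuracy
% when A is ill-conditioned.
A = hilb(10);          % Hilbert matrix: notoriously ill-conditioned
b = ones(10, 1);
x1 = A \ b;
x2 = inv(A) * b;
fprintf('residual with \\:     %g\n', norm(A*x1 - b));
fprintf('residual with inv(): %g\n', norm(A*x2 - b));
```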

An operator is the most basic and most important thing in scientific calculation. If its behavior changes without an approved modification of a standard, the designer is evil. :(
Unfortunately, the '\' operator is not standardized at all, so I cannot blame MathWorks for that. And in fact, they improved its performance a lot. So it boils down to whether I can live with that. Bad for me... but now I can. In the same sense, it looks like no one knows of a standard for matrix multiplication, so maybe there isn't one. That said, I cannot blame MathWorks on this issue either, though I'll report it.
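On the matrix-multiplication point: no standard pins down the summation order inside a product, and order genuinely matters because IEEE addition is not associative (a minimal illustration; any IEEE-double environment behaves this way):

```matlab
% IEEE 754 rounds after every operation, so grouping changes results.
% 1e16 lies where the spacing between doubles is 2, so adding 1 to
% -1e16 is lost to rounding.
a = 1e16;  b = -1e16;  c = 1;
disp((a + b) + c)   % displays 1
disp(a + (b + c))   % displays 0
```

Any library that changes the order in which it accumulates the dot products inside A*B can therefore change the last bits of the answer without violating anything.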

Anyway, thanks to the people in this thread, I have decided not to rely on MATLAB's floating point calculation for anything. Something that looks intuitive and is supposed to be obviously right (even in a technical sense) may not be supported in the future.

> It is not entirely matlab to blame here. Your
> cpu makes many subtle decisions about your
> computations too.

Well, I know that might be true, though I doubt it. (Do you remember the Pentium floating point error? It was a big issue, and it looks like no one cares about this kind of thing now. Maybe no one can.) Except in some acceleration-oriented situations like 3D game engines, the kind of optimization mentioned above may not happen, especially in scientific software. Even if such optimizations exist, I think scientific software must turn them off by default, though it can let users opt in. As you know, CPU optimization is mainly about pipelining, not about reordering mutually dependent assembly instructions. Of course, I admit there is a gray area that the IEEE standard cannot capture. :(

FYI, I'm testing all of this on a single machine. Because I'm migrating from the old 2005 version to the 2007 version, I can compare the two versions line by line. So everything is the same except MATLAB.

Anyway, thank you very much, John and everyone. You convinced me that I must not rely on MATLAB for floating point calculation, maybe even in cases where the IEEE standard should apply...