VERY odd error with simple multiplication ... OK... not so odd.

Hi all, I'm encountering a strange error with the following bit of code. All of the variables are double precision.
preWinT = .20;
info.micFsr2 =2000;
T360_wrong = (preWinT-.02)*info.micFsr2
info.micFsr3 =20000;
T3600_correct = (preWinT-.02)*info.micFsr3
Notice the trailing "1" in T360_wrong in the output that follows:
T360_wrong = 3.600000000000001e+02
When increasing the multiplier from 2000 to 20000, the error does not appear.
T3600_correct = 3.600000000000000e+03
This is even happening when entering purely numeric values in the command window.
(.2-.02)*2000
ans = 3.600000000000001e+02
The error again disappears when I enter
(.18)*2000
ans = 360
Does anyone have any idea why this is happening? It's pretty disturbing to think about the ramifications of this kind of error.
Thanks,
David
  2 Comments
James Tursa on 7 Nov 2018
Most of the numbers you are using can't be represented exactly in IEEE double precision. E.g.,
>> num2strexact(0.20)
ans =
0.200000000000000011102230246251565404236316680908203125
>> num2strexact(0.02)
ans =
2.00000000000000004163336342344337026588618755340576171875e-2
>> num2strexact(0.20 - 0.02)
ans =
0.1800000000000000210942374678779742680490016937255859375
>> num2strexact((0.20 - 0.02)*2000)
ans =
3.6000000000000005684341886080801486968994140625e2
>> num2strexact((0.20 - 0.02)*20000)
ans =
3.60000000000000045474735088646411895751953125e3
>> num2strexact(0.18)
ans =
0.179999999999999993338661852249060757458209991455078125
>> num2strexact(0.18*2000)
ans =
3.6e2
You just got "lucky" with that last one.
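num2strexact is a File Exchange submission; for anyone who does not have it, here is a minimal sketch of the same inspection using only built-in functions (the values in the comments follow from the exact digits shown above):
% Print more digits than the default 15-16 significant digit display shows:
fprintf('%.20f\n', 0.20 - 0.02)          % 0.18000000000000002109
fprintf('%.20f\n', (0.20 - 0.02)*2000)   % 360.00000000000005684342
% The discrepancy is exactly one unit in the last place (ulp) of 360:
(0.20 - 0.02)*2000 - 360                 % ans = 5.6843e-14
eps(360)                                 % ans = 5.6843e-14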

Accepted Answer

Stephen23 on 7 Nov 2018
Edited: Stephen23 on 13 Aug 2020
"Does anyone have any idea why this is happening?"
Yes, the behaviors of floating point numbers are quite well documented, and often discussed on this forum.
"It's pretty disturbing to think about the ramifications of this kind of error."
Only if you write your code believing that floating point error does not exist. Experienced programmers understand that calculations on floating point numbers accumulate floating point error (as your example does), and they do not expect floating point arithmetic to behave like the symbolic mathematics they learned in high school (for example, arithmetic operations on floating point values are not associative), so they write their code to allow for this.
You need to learn about the limits of floating point numbers; the MATLAB documentation on floating-point numbers and the many earlier discussions of this topic on this forum are worth reading.
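In practice, writing code to allow for floating point error usually means comparing results against a tolerance instead of testing for exact equality. A minimal sketch of that idiom, reusing the values from the question (the tolerance of a few ulps is only an illustrative choice, not a universal rule):
preWinT = 0.20;
micFsr  = 2000;
T = (preWinT - 0.02)*micFsr;
T == 360               % logical 0: exact comparison fails, T is off by one ulp
tol = 4*eps(360);      % illustrative tolerance: a few units in the last place
abs(T - 360) < tol     % logical 1: tolerance-based comparison succeeds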
  4 Comments
David Murphy on 7 Nov 2018
Thank you, John.
As I mention in my response to Stephen, my magical thinking was that the underlying operation might dynamically apply some of that LSB distrust where appropriate. I imagine, however, that this would be computationally expensive and amounts to asking MATLAB to be psychic about my particular programming requirements.
Lesson learned!
John D'Errico on 8 Nov 2018
Uh, yeah. MATLAB is fast and efficient in double precision computations. There is no way you would want it to do something that would occasionally try to get those bits right. Any such magical correction would cost you some serious time, and it could never be perfect.

More Answers (2)

John D'Errico on 7 Nov 2018
Edited: John D'Errico on 7 Nov 2018
Nope. You just need to learn about how arithmetic works on a computer in general, and how numbers are stored in floating point form. But it is no surprise at all, nor should it be remotely disturbing.
Doubles are stored in IEEE 754 binary form, with a 52 bit binary mantissa (53 significant bits counting the implicit leading bit). This is in fact the standard; if you check other languages, they too use the same representation.
sprintf('%0.55f',0.18)
ans =
'0.1799999999999999933386618522490607574582099914550781250'
The problem is, can you store the number 0.18 EXACTLY in binary? NO! Just as you cannot store the number 1/3 exactly as a decimal number, you cannot store the fraction 18/100 EXACTLY as a binary number. Both have infinitely repeating expansions in the chosen base. For example,
1/3 = 0.333333333333333... in decimal
If you truncate that number to 16 decimal digits or so, you are making an error, down in the least significant digits.
In binary form, 0.18 would look like this:
18/100 = 0.0010111000010100011110101110000...
Thus an infinitely repeating sequence of binary bits. I've not carried enough bits there, but that comes out as:
format long g
B18 = '0010111000010100011110101110000';  % first 31 bits of the binary expansion of 0.18
dot(B18-'0',2.^(-1:-1:-numel(B18)))       % convert those bits back into a decimal value
ans =
0.179999999701977
Now, what happens when you do arithmetic with that number, say multiply it by 2000? It turns out that this time you get lucky. The product
sprintf('%0.55f',0.18*2000)
ans =
'360.0000000000000000000000000000000000000000000000000000000'
is indeed representable as an integer.
The answer? NEVER trust the least significant bits of a double precision number.
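As a complement to the decimal printouts above, a small sketch using the built-in num2hex to look at the stored 64-bit patterns directly; the hex strings in the comments are what these particular values should store, differing by exactly one bit in the last place:
num2hex(0.18*2000)            % '4076800000000000'  (exactly 360)
num2hex((0.20 - 0.02)*2000)   % '4076800000000001'  (360 plus one ulp)
num2hex(360)                  % '4076800000000000'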

TADA on 7 Nov 2018
You'll find the answer in countless posts similar to yours. This problem arises from the way MATLAB executes floating point arithmetic. By the way, MATLAB is not the only culprit: I remember first meeting this in .NET a few years back. If I'm not mistaken, it's the same everywhere when doing calculations with floating point numbers.
  2 Comments
Stephen23 on 7 Nov 2018
Edited: Stephen23 on 7 Nov 2018
"this problem arises from the way Matlab executes floating point arthritics."
MATLAB probably does not execute these kind of floating point arithmetic itself: basic floating point operations (addition, multiplication, division) are done efficiently at a hardware level.
