FAQ: Why is 0.3 - 0.2 - 0.1 not equal to zero?

Jan on 26 Dec 2012
Edited: John D'Errico on 30 Aug 2021
Why does
0.3 - 0.2 - 0.1 == 0
or
v = 0:0.1:1;
any(v == 0.3)
(or similar expressions) return false?
  1 Comment
Jan on 26 Dec 2012
This is an experimental question only! There is no need to answer it, and please do not vote for it. This is neither my question nor my answer, but only an example of a nicer, more convenient, more usable FAQ that is less stuffed with advertisements.


Accepted Answer

Jan on 26 Dec 2012
Edited: Jan on 26 Dec 2012
0.3 - 0.2 - 0.1 returns -2.7756e-17.
As is mentioned frequently in the newsgroup, some floating point numbers cannot be represented exactly in binary form. That is why you see a very small but nonzero result. See EPS.
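For a sense of scale, you can compare that leftover against eps of the values involved (a small illustration; the exact digits assume IEEE double precision):
d = 0.3 - 0.2 - 0.1   % about -2.7756e-17, not 0
eps(0.3)              % about 5.5511e-17, the spacing between 0.3 and the next larger double
abs(d) < eps(0.3)     % true: the leftover is smaller than one spacing step at 0.3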
The difference is that 0:0.1:0.4 increments by a number very close to, but not exactly, 0.1 for the reasons mentioned below. So after a few steps it will be off, whereas [0 0.1 0.2 0.3 0.4] forces the numbers to their proper values, as accurately as they can be represented anyway.
a = [0 0.1 0.2 0.3 0.4];
b = 0:.1:.4;
as = sprintf('%20.18f\n',a)
>> as =
0.000000000000000000 % ==
0.100000000000000010 % ==
0.200000000000000010 % ==
0.299999999999999990 % ~= bs !
0.400000000000000020 % ==
bs = sprintf('%20.18f\n',b)
>> bs =
0.000000000000000000 % ==
0.100000000000000010 % ==
0.200000000000000010 % ==
0.300000000000000040 % ~= as !
0.400000000000000020 % ==
and:
format hex;
hd = [a.',b.']
>> hd =
0000000000000000 0000000000000000 % ==
3fb999999999999a 3fb999999999999a % ==
3fc999999999999a 3fc999999999999a % ==
3fd3333333333333 3fd3333333333334 % ~= !
3fd999999999999a 3fd999999999999a % ==
If you're trying to compare two floating-point numbers, be very careful about using == to do so. An alternate comparison method is to check if the two numbers you're comparing are "close enough" (as expressed by a tolerance) to one another:
% instead of a == b
% use:
areEssentiallyEqual = abs(a-b) < tol
% for some small value of tol relative to a and b
% perhaps defined using eps(a) and/or eps(b)
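For example, applied to the expressions from the question (the factor of 4 below is an arbitrary choice of a few ulps, not a universal rule):
x = 0.3 - 0.2 - 0.1;
tol = 4*eps(0.3);             % a few ulps of the largest operand
abs(x - 0) < tol              % true, while 0.3 - 0.2 - 0.1 == 0 is false
v = 0:0.1:1;
any(abs(v - 0.3) < tol)       % true, while any(v == 0.3) is false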
You can see this same sort of behavior outside MATLAB. Using pencil and paper (or a chalkboard, or a whiteboard, etc.) compute x = 1/3 to as many decimal places as you want. The number of decimal places must be finite, however. Now compute y = 3*x. In exact arithmetic, y would be exactly 1; however, since x is not exactly one third but is a rounded approximation to one third, y will not be exactly 1.
For a readable introduction to floating point arithmetic, look at Cleve's Corner article from 1996: Floating Points (PDF) http://www.mathworks.com/company/newsletters/news_notes/pdf/Fall96Cleve.pdf
For more rigorous and detailed information on floating point arithmetic, read the following paper: What Every Computer Scientist Should Know About Floating Point Arithmetic http://docs.sun.com/source/806-3568/ncg_goldberg.html
Another resource is Technical Note 1108 http://www.mathworks.com/support/tech-notes/1100/1108.html on the Support section of The MathWorks website.
This answer is copied and slightly modified from matlab.wikia.com/wiki/FAQ:Why_is_0.3-0.2-0.1_not_equal_to_zero
  5 Comments
Walter Roberson on 30 Aug 2021
With a 128 bit representation that did not change the range, the error would be about 6e-36, but not 0.


More Answers (1)

John D'Errico on 30 Aug 2021
Edited: John D'Errico on 30 Aug 2021
Let me add my take on the problem.
Suppose we try to represent these numbers in binary form. That is, take 1/10 = 0.1 in decimal and write it as a binary number. We must do that because all floating point numbers are stored in binary form. Even if decimal storage were used, we would still have problems. For example, would 2/3 - 1/3 == 1/3? Surely that must be true in decimal arithmetic?
Suppose we were working with 10 digits of precision in a decimal storage format. What would 1/3 look like?
X = 0.3333333333
Y = 0.6666666667
I've rounded both values to the closest approximations I can find in decimal form with only 10 digits after the decimal point. Now Y - X will be:
Y - X = 0.6666666667 - 0.3333333333 = 0.3333333334
And that is not the same value as X. But, you say, I should have used Y = 0.6666666666 instead, rounding down. Then we would have Y - X = X.
But then we must also have X+Y = 3/3 = 1. And if we had rounded Y down to make the last result work, then we would see:
X + Y = 0.3333333333 + 0.6666666666 = 0.9999999999
So there will always be some contradiction, as long as we are forced to use a finite decimal storage for numbers that have no finite representation in that base.
The same applies to any binary storage format, which is how doubles and singles are stored in MATLAB. A double uses 52 binary bits for the mantissa (plus a sign bit and an exponent), so MATLAB comes as close to the true value as those 52 bits allow.
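If you want to peek at those bits directly, num2hex shows the 64 bits of a double as 16 hex digits (compare with the format hex table in the accepted answer above); for example:
num2hex(0.1)    % '3fb999999999999a' -- the 9s are the repeating binary pattern, rounded in the last digit
num2hex(0.2)    % '3fc999999999999a'
num2hex(0.3)    % '3fd3333333333333'
num2hex(0.125)  % '3fc0000000000000' -- 1/8 is a power of 2, so it is stored exactly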
So what would the number 1/10 look like in binary? If we think of the binary bits like this:
0.00011001100110011001100110011001100110011001100110011...
That is...
1/10 = 2^-4 + 2^-5 + 2^-8 + 2^-9 + 2^-12 + 2^-13 + 2^-16 + 2^-17 + ...
TRY IT!
format long g
2^-4 + 2^-5 + 2^-8 + 2^-9 + 2^-12 + 2^-13 + 2^-16 + 2^-17
ans =
0.0999984741210938
I had to stop somewhere. If I add in a few more terms, we will come closer. In fact, the binary expansion that MATLAB uses for the number 1/10 is:
approx = sum(2.^[-4 -5 -8 -9 -12 -13 -16 -17 -20 -21 -24 -25 -28 -29 -32 -33 -36 -37 -40 -41 -44 -45 -48 -49 -52 -53 -55])
approx =
0.1
That looks like 0.1 as displayed by MATLAB, but is it? Is it EXACTLY 0.1?
sprintf('%0.55f',approx)
ans = '0.1000000000000000055511151231257827021181583404541015625'
sprintf('%0.55f',1/10)
ans = '0.1000000000000000055511151231257827021181583404541015625'
As you can see, both values are the same. But neither is exactly 0.1, only the closest approximation MATLAB could find for that number.
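A quick check confirms that the sum above landed on exactly the same double as 1/10, even though neither one is the real number 0.1:
approx == 1/10   % logical 1: bit for bit the same stored value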
Similarly, we could try to approximate 0.2 and 0.3 as binary numbers, but again, we will fail as long as we are forced to use a finite number of binary bits in the approximation. And as we saw with the decimal examples before, we will always fail some of the time. Sometimes, things work. For example, try these two examples:
0.2 - 0.1 == 0.1
ans = logical
1
0.3 - 0.2 == 0.1
ans = logical
0
So one of those trivial mathematical identities seems to work, but the other fails. Again, the problem is that MATLAB can use only a finite number of bits to represent any number. And when those numbers are not representable exactly in a finite number of bits, we will inevitably SOMETIMES see a contradiction to what we expect must be true.
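Printing a few more digits shows where the second identity goes wrong; 0.3 - 0.2 does not land on the same double as 0.1 (the digits below are what IEEE double precision gives):
sprintf('%0.20f', 0.3 - 0.2)   % 0.09999999999999997780
sprintf('%0.20f', 0.1)         % 0.10000000000000000555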
This does not happen all of the time. For example, what is the representation of the number 1/8 == 0.125 in MATLAB?
sprintf('%0.55f',1/8)
ans = '0.1250000000000000000000000000000000000000000000000000000'
So MATLAB gets that EXACTLY correct. The trick is, 1/8 is just a power of 2 itself. So it is exactly representable in a binary form. And that means we will see this ALWAYS work in MATLAB:
1/2 - 1/8 == 3/8
ans = logical
1
Simple positive or negative powers of 2 (and integer multiples of them) will be correctly represented, as long as a finite number of bits are sufficient to do the job. But 0.1, 0.2, and 0.3? While they are finitely representable as decimals, that is not the case in binary. And THAT is my take on why 0.3-0.2-0.1 is not equal to zero in MATLAB.
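As a final illustration of that last point:
0.5 + 0.25 + 0.125 == 0.875   % true: every value here is an exact sum of powers of 2
0.1 + 0.2 == 0.3              % false: all three are rounded approximations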
