
Thread Subject:
Math error

Subject: Math error

From: jhu

Date: 31 Jan, 2006 12:58:40

Message: 1 of 28

In matlab type

>>101*0.2-20.2

ans =

      3.5527136788005e-015

The answer should be 0?

Subject: Math error

From: Michael Wild

Date: 31 Jan, 2006 18:50:37

Message: 2 of 28

jhu wrote:
> In matlab type
>
>>> 101*0.2-20.2
>
> ans =
>
> 3.5527136788005e-015
>
> he answer should be 0?

welcome to floating point operations! computers are NEVER exact when
computing with floating point numbers.
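
a quick illustration (nothing matlab-specific; any IEEE 754
double-precision system behaves the same way):

>> 0.1 + 0.2 == 0.3

ans =

     0

none of 0.1, 0.2 or 0.3 is exactly representable in binary, so the
rounded sum of the first two is not the same double as the third.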

michael

Subject: Math error

From: PB

Date: 31 Jan, 2006 13:28:52

Message: 3 of 28

jhu wrote:
>
>
> In matlab type
>
>>>101*0.2-20.2
>
> ans =
>
> 3.5527136788005e-015
>
> he answer should be 0?

Yes, in theory, but not using a computer with finite precision. Read
this, <http://docs.sun.com/source/806-3568/ncg_goldberg.html>,
to learn more.

/PB

Subject: Math error

From: ellieandrogerxyzzy@mindspring.com.invalid (Roger Stafford)

Date: 31 Jan, 2006 19:40:51

Message: 4 of 28

In article <43dfa36e@news1.ethz.ch>, Michael Wild
<themiwi@student.ethz.ch> wrote:

> jhu wrote:
> > In matlab type
> >
> >>> 101*0.2-20.2
> >
> > ans =
> >
> > 3.5527136788005e-015
> >
> > he answer should be 0?
>
> welcome to floating point operations! computers are NEVER exact when
> computing with floating point numbers.
>
> michael
---------------
  Well, Michael, I wouldn't go so far as to say "never". Some operations
are performed exactly with no errors whatsoever. Adding, subtracting, and
multiplying integers, for example, gives precise results if the answer is
not beyond 2^53. The same applies to numbers which are exactly
expressible as binary fractions, as long as the "span" of the result
doesn't exceed the maximum capacity of 53 bits and the 2's exponent stays
in range. (By "span" I mean the distance between the highest and lowest 1
bits.) For example, 1.921875+1.78125 gives a precise result because both
quantities are expressible as exact binary fractions. jhu's troubles stem
from the fact that neither .2 nor 20.2 can be so represented.
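
  A small check of both cases in MATLAB (purely illustrative; the literals
are assumed to be read in as IEEE 754 doubles):

% 1.921875 = 123/64 and 1.78125 = 57/32 are exact binary fractions,
% so the sum below is computed with no rounding error at all:
(1.921875 + 1.78125) == 3.703125     % 1 (true)

% 0.2 and 20.2 have no exact binary representation, so rounding enters
% both in reading 0.2 and in forming 101*0.2, and the result differs
% from 20.2 in its least bit:
(101*0.2) == 20.2                    % 0 (false)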

(Remove "xyzzy" and ".invalid" to send me email.)
Roger Stafford

Subject: Math error

From: Michael Wild

Date: 31 Jan, 2006 22:45:00

Message: 5 of 28

Roger Stafford wrote:
> In article <43dfa36e@news1.ethz.ch>, Michael Wild
> <themiwi@student.ethz.ch> wrote:
>
>> jhu wrote:
>>> In matlab type
>>>
>>>>> 101*0.2-20.2
>>> ans =
>>>
>>> 3.5527136788005e-015
>>>
>>> he answer should be 0?
>> welcome to floating point operations! computers are NEVER exact when
>> computing with floating point numbers.
>>
>> michael
> ---------------
> Well, Michael, I wouldn't go so far as to say "never". Some operations
> are performed exactly with no errors whatsoever. Adding, subtracting, and
> multiplying integers, for example, gives precise results if the answer is
> not beyond 2^53. The same applies to numbers which are exactly
> expressible as binary fractions, as long as the "span" of the result
> doesn't exceed the maximum capacity of 53 bits and the 2's exponent stays
> in range. (By "span" I mean the distance between the highest and lowest 1
> bits.) For example, 1.921875+1.78125 gives a precise result because both
> quantities are expressible as exact binary fractions. jhu's troubles stem
> from the fact that neither .2 nor 20.2 can be so represented.
>
> (Remove "xyzzy" and ".invalid" to send me email.)
> Roger Stafford


roger, you're right concerning the exact binary fractions, also with the
integers (but as i posted, i was referring to floating point
operations...). but as the cases where you get exact binary fractions
are so extremely unlikely to hit you (at least in non-trivial
applications), i think it's rather safe to speak of "never" in an
informal way.

michael

Subject: Math error

From: Harmonic Software

Date: 3 Feb, 2006 20:28:52

Message: 6 of 28

Perhaps, but I think there might be something dubious that Jhu has
illustrated. Even if Matlab executes this expression in single-precision
(though I think it actually does it in double-precision), the magnitude of
the error is interesting. I executed this expression in O-Matrix and all is
fine:

O>REAL_MIN
1.17549e-038
O>DOUBLE_MIN
2.22507e-308
O>101*0.2-20.2
0



"Michael Wild" <themiwi@student.ethz.ch> wrote in message
news:43dfa36e@news1.ethz.ch...
> jhu wrote:
>> In matlab type
>>
>>>> 101*0.2-20.2
>>
>> ans =
>>
>> 3.5527136788005e-015
>>
>> he answer should be 0?
>
> welcome to floating point operations! computers are NEVER exact when
> computing with floating point numbers.
>
> michael

Subject: Math error

From: J Luis

Date: 4 Feb, 2006 00:05:25

Message: 7 of 28

Harmonic Software wrote:
>
>
> Perhaps, but I think there might be something dubious that Jhu has
> illustrated. Even if Matlab executes this expression in
> single-precision,
> (But I think it actually does it in double-precision), the
> magnitude of the
> error is interesting. I executed this in expression in O-Matrix
> and all is
> fine,
>
> O>REAL_MIN
> 1.17549e-038
> O>DOUBLE_MIN
> 2.22507e-308
> O>101*0.2-20.2
> 0

I ran this in C (using doubles) and the result is exactly the same as
in Matlab.
I think that O-Matrix is just not printing with enough precision to
show the tiny difference from zero.

Subject: Math error

From: John D'Errico

Date: 4 Feb, 2006 09:57:58

Message: 8 of 28

In article <v5KdndR7vJXkgnneRVn-gw@giganews.com>, "Harmonic Software" <harmonic@omatrix.com> wrote:

> Perhaps, but I think there might be something dubious that Jhu has
> illustrated. Even if Matlab executes this expression in single-precision,
> (But I think it actually does it in double-precision), the magnitude of the
> error is interesting. I executed this in expression in O-Matrix and all is
> fine,
>
> O>REAL_MIN
> 1.17549e-038
> O>DOUBLE_MIN
> 2.22507e-308
> O>101*0.2-20.2
> 0

Oh give the blatant sales job a break.

Plonk.



--
The best material model of a cat is another, or preferably the same, cat.
A. Rosenblueth, Philosophy of Science, 1945

Those who can't laugh at themselves leave the job to others.
Anonymous

Subject: Math error

From: ellieandrogerxyzzy@mindspring.com.invalid (Roger Stafford)

Date: 4 Feb, 2006 21:34:18

Message: 9 of 28

In article <v5KdndR7vJXkgnneRVn-gw@giganews.com>, "Harmonic Software"
<harmonic@omatrix.com> wrote:

> Perhaps, but I think there might be something dubious that Jhu has
> illustrated. Even if Matlab executes this expression in single-precision,
> (But I think it actually does it in double-precision), the magnitude of the
> error is interesting. I executed this in expression in O-Matrix and all is
> fine,
>
> O>REAL_MIN
> 1.17549e-038
> O>DOUBLE_MIN
> 2.22507e-308
> O>101*0.2-20.2
> 0
>
> > jhu wrote:
> >> In matlab type
> >>
> >>>> 101*0.2-20.2
> >>
> >> ans =
> >>
> >> 3.5527136788005e-015
> >>
> >> he answer should be 0?
-------------------
  You had better check your calculations again, Harmonic Software. If you
faithfully followed the IEEE 754 specifications for 64-bit binary floating
point "rounding to nearest" at every stage of the calculation of
101*0.2-20.2, you would get a nonzero answer - exactly the one Jhu found.
An exactly zero result would indicate that your software is not doing an
accurate round to nearest at every step. That is because there are two
successive calculations involved in computing 101*0.2, the conversion of
0.2 to 'float' followed by the multiplication, each accompanied by a
rounding. It should come as no surprise that these two successive
operations, when done correctly, happen to round to a result differing in
its least bit from the single calculation of converting 20.2 to 'float'. I
have checked it, and that is indeed the case.

  It is very analogous to the example I like to give with

3/14 + (3/14 + 15/14) and
(3/14 + 3/14) + 15/14

producing different answers and appearing to violate the associative law
of addition.

  By the way, REAL_MIN and DOUBLE_MIN have nothing whatever to do with
this question. Only the fact that the above difference is precisely
1/2^48, which is one bit in the least position of 20.2, is significant.
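
  A short check in MATLAB bears this out (eps with an argument needs a
fairly recent release; on older versions compare against 2^-48 directly):

r = 101*0.2 - 20.2;
r == 2^-48            % 1: the difference is exactly 1/2^48
r == eps(20.2)        % 1: that is, one unit in the last place of 20.2

% and the associativity example:
3/14 + (3/14 + 15/14) == (3/14 + 3/14) + 15/14     % 0: they differ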

(Remove "xyzzy" and ".invalid" to send me email.)
Roger Stafford

Subject: Math error

From: us

Date: 4 Feb, 2006 17:28:37

Message: 10 of 28

jhu:
<SNIP everlasting FP conundrum...

> In matlab type
>>>101*0.2-20.2
> ans =
> 3.5527136788005e-015
> the answer should be 0...

yes, it is, iff you do this to our most versatile ML

% let's not use this, for now...
% system_dependent('precision',53);
     system_dependent('setround',inf);
     r=101*0.2-20.2
     isequal(r,0)
% reset to genuine behavior...
     system_dependent('setround',0);
     r=101*0.2-20.2
     isequal(r,0)

us

Subject: Math error

From: us

Date: 4 Feb, 2006 17:35:25

Message: 11 of 28

jhu:
<SNIP everlasting FP conundrum...

> In matlab type
>>>101*0.2-20.2
> ans =
> 3.5527136788005e-015
> the answer should be 0...

yes, it is, iff you do this to our most versatile ML

% let's not use this, for now...
% system_dependent('setprecision',53);
     system_dependent('setround',inf);
     r=101*0.2-20.2
     isequal(r,0)
% reset to genuine behavior...
     system_dependent('setround',0);
     r=101*0.2-20.2
     isequal(r,0)

us

Subject: Math error

From: PB

Date: 4 Feb, 2006 18:10:20

Message: 12 of 28

us wrote:
>
>
> jhu:
> <SNIP everlasting FP conundrum...
>
>> In matlab type
>>>>101*0.2-20.2
>> ans =
>> 3.5527136788005e-015
>> the answer should be 0...
>
> yes, it is, iff you do this to our most versatile ML
>
> % let's not use this, for now...
> % system_dependent('setprecision',53);
> system_dependent('setround',inf);
> r=101*0.2-20.2
> isequal(r,0)
> % reset to genuine behavior...
> system_dependent('setround',0);
> r=101*0.2-20.2
> isequal(r,0)
>
> us

<us>, where do you find all the undocumented features?
/PB

Subject: Math error

From: ellieandrogerxyzzy@mindspring.com.invalid (Roger Stafford)

Date: 4 Feb, 2006 23:18:14

Message: 13 of 28

In article <ef2745b.9@webx.raydaftYaTP>, us <us@neurol.unizh.ch> wrote:

> jhu:
> <SNIP everlasting FP conundrum...
>
> > In matlab type
> >>>101*0.2-20.2
> > ans =
> > 3.5527136788005e-015
> > the answer should be 0...
>
> yes, it is, iff you do this to our most versatile ML
>
> % let's not use this, for now...
> % system_dependent('setprecision',53);
> system_dependent('setround',inf);
> r=101*0.2-20.2
> isequal(r,0)
> % reset to genuine behavior...
> system_dependent('setround',0);
> r=101*0.2-20.2
> isequal(r,0)
>
> us
-------------------------
  Because Jhu's numbers are all rational numbers, each of them is
represented by an infinitely repeating binary sequence - in fact these
repeat every four bits. For that reason, no matter how high a level the
rounding precision is set to, there will be at least one-fourth of the
levels where one would get a difference between 101*0.2 and 20.2 out at
their least bits after rounding. Even with one billion and 53 bits in the
significand, they would still differ by one bit (admittedly an
exceedingly small bit).

  To get around such problems as this, one would need to use a system
capable of dealing with ratios of integers, as in the Symbolic Math
Toolbox. (Or perhaps that is what 'inf' does in 'system_dependent'? I am
only guessing at its meaning.)
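
  The repetition is easy to see from the stored bit patterns:

format hex
0.2        % ans = 3fc999999999999a
20.2       % ans = 4034333333333333
format short

% Each hex digit is four bits; the runs of 9 (1001) and 3 (0011) are the
% four-bit repeating group of 1/5, cut off and rounded at the 53rd
% significant bit (rounded up in 0.2, hence the trailing 'a', and down
% in 20.2).

  And, assuming the Symbolic Math Toolbox is available, exact rational
arithmetic does give the "expected" zero:

sym(101)*sym(1)/sym(5) - sym(101)/sym(5)     % exactly 0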

(Remove "xyzzy" and ".invalid" to send me email.)
Roger Stafford

Subject: Math error

From: us

Date: 4 Feb, 2006 19:19:03

Message: 14 of 28

PB:
<SNIP is - as usual - keen to know the basics...

> <us>, where do you find all the undocumented features...

to be honest with you my dear <pb>:
after 20 years of ML and >10yrs of CSSM i cannot possibly remember
all the zillions of sources anymore: discussions with TMW, CSSM,
friends, family, our cats, the back of the various corn flake boxes
of my daughters, <roger stafford>'s apparently gentle father,
dear loren shure, john d'errico, the steves of the lord's and the
eddins' and then some, duane hanselman the living booklet, peter
acklam (whom i keep in touch with), the scott and his hull and the
pick and the a-boo, amanda galtman so long ago, george bush (the w.
that is) and his dick (boy, that could be misinterpreted), fritz the
cat, johny cash, my dear brett shoelson in hiding for now, peter
spellucci not seen in a while, michael the robbinson
not-fancy-but-it-works (well, most of the time),... sorry if i forgot
anyone of you folks who i truly do appreciate!

however, did you try a google with
     system_dependent
(in particular in CSSM) - this should get you started

very best
us

Subject: Math error

From: PB

Date: 5 Feb, 2006 03:51:35

Message: 15 of 28

us wrote:
>
>
> PB:
> <SNIP is - as usual - keen to know the basics...
>
>> <us>, where do you find all the undocumented features...
>
> to be honest with you my dear <pb>:
> after 20 years of ML and >10yrs of CSSM i cannot possibly
> remember
> all the zillions of sources anymore: discussions with TMW, CSSM,
> friends, family, our cats, the back of the various corn flake boxes
> of my daughters, <roger stafford>'s apparently gentle father,
> dear loren shure, john d'errico, the steves of the lord's and the
> eddins' and then some, duane hanselman the living booklet, peter
> acklam (whom i keep in touch with), the scott and his hull and the
> pick and the a-boo, amanda galtman so long ago, george bush (the w.
> that is) and his dick (boy, that could be misinterpreted), fritz
> the
> cat, johny cash, my dear brett shoelson in hiding for now, peter
> spellucci not seen in a while, michael the robbinson
> not-fancy-but-it-works (well, most of the time),... sorry if i
> forgot
> anyone of you folks who i truly do appreciate!
>
> however, did you try a google with
> system_dependent
> (in particular in CSSM) - this should get you started
>
> very best
> us

Thanks <us>

/PB

Subject: Math error

From: Scott Seidman

Date: 6 Feb, 2006 14:02:21

Message: 16 of 28

"Harmonic Software" <harmonic@omatrix.com> wrote in
news:v5KdndR7vJXkgnneRVn-gw@giganews.com:

> Perhaps, but I think there might be something dubious that Jhu has
> illustrated. Even if Matlab executes this expression in
> single-precision, (But I think it actually does it in
> double-precision), the magnitude of the error is interesting. I
> executed this in expression in O-Matrix and all is fine,
>
> O>REAL_MIN
> 1.17549e-038
> O>DOUBLE_MIN
> 2.22507e-308
> O>101*0.2-20.2
> 0
>
>

Dubious only if you know little about floating point math, and the fact
that such errors SHOULDN'T BE anywhere near realmin.


--
Scott
Reverse name to reply

Subject: Math error

From: Steve Amphlett

Date: 6 Feb, 2006 10:10:32

Message: 17 of 28

Just as an [OT] aside... Does anyone know how hand-held calculators
work? I've never seen typical FP problems on my Casio. Do they
cheat? Or do they use BCD?

Subject: Math error

From: Steve Amphlett

Date: 6 Feb, 2006 10:11:40

Message: 18 of 28

Steve Amphlett wrote:
>
>
> Just as an [OT] aside... Does anyone know how hand-held
> calculators
> work? I've never seen typical FP problems on my Casio. Do they
> cheat? Or do they use BCD?

It's BCD.

 <http://www.thimet.de/CalcCollection/Calculators/Casio-fx-180/Contents.htm>

Subject: Math error

From: om

Date: 7 Feb, 2006 05:01:18

Message: 19 of 28

<us wrote his acknowledgements>

What a rich culture there is in learning and helping. Thanks to all
for sharing so freely the knowledge you've gained! It's a challenge
to even read all the posts and still get all my work done. But then
I guess reading and learning will make my work more efficient in
time.
om

Subject: Math error

From: Steven Lord

Date: 7 Feb, 2006 09:17:30

Message: 20 of 28


"Steve Amphlett" <Firstname.Lastname@where_I_work.com> wrote in message
news:ef2745b.15@webx.raydaftYaTP...
> Just as an [OT] aside... Does anyone know how hand-held calculators
> work? I've never seen typical FP problems on my Casio. Do they
> cheat? Or do they use BCD?

It depends on what you mean by "work" -- basic arithmetic, trig functions,
etc. I remember reading at some point about the CORDIC algorithm:

http://www.fpga-guru.com/cordic.htm
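
The basic idea is easy to sketch in MATLAB (rotation mode; purely an
illustration of the shift-and-add scheme, not a claim about what any
particular calculator, or MATLAB itself, implements):

% CORDIC rotation mode: rotate the vector (1,0) towards angle theta using
% only a fixed table of angles atan(2^-k); the 2^-k factors would be bit
% shifts in hardware.
theta = 0.6;                            % target angle, |theta| <= pi/2
n     = 40;                             % iterations (~n bits of accuracy)
k     = 0:n-1;
alpha = atan(2.^-k);                    % elementary rotation angles
K     = prod(1./sqrt(1 + 2.^(-2*k)));   % accumulated scale factor
x = 1; y = 0; z = theta;
for i = 1:n
    d = sign(z);
    if d == 0, d = 1; end               % rotate so that z is driven to 0
    xnew = x - d*y*2^(1-i);
    ynew = y + d*x*2^(1-i);
    z    = z - d*alpha(i);
    x = xnew; y = ynew;
end
[K*x, K*y]                  % approximates [cos(theta), sin(theta)]
[cos(theta), sin(theta)]    % compare with the built-ins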

Doing a little more digging, I found a few articles on this webpage that
discuss calculator computations. They're a bit old, but I'd imagine that at
least some of this information is still valid:

http://mathforum.org/library/drmath/sets/high_graphing_calc.html

Calculators and Trig Functions, from 12/03/1996, gives a different brief
overview of CORDIC. The paper referenced in that page by Prof. Bruce
Edwards is located in the Papers link here:

http://www.math.ufl.edu/~be/

To relate this to MATLAB a bit, here's Cleve's explanation of what we use:

http://www.mathworks.com/company/newsletters/news_notes/clevescorner/winter02_cleve.html

--
Steve Lord
slord@mathworks.com

Subject: Math error

From: Kelvin Hales

Date: 7 Feb, 2006 14:47:47

Message: 21 of 28

In article <dsaa5q$idb$1@fred.mathworks.com>, Steven Lord wrote:
> "Steve Amphlett" <Firstname.Lastname@where_I_work.com> wrote in message
> news:ef2745b.15@webx.raydaftYaTP...
> > Just as an [OT] aside... Does anyone know how hand-held calculators
> > work? I've never seen typical FP problems on my Casio. Do they
> > cheat? Or do they use BCD?

I've always understood pocket calculators to use BCD; certainly the HP ones
I have used (and still use today) do. Further details at:
<http://www.hpmuseum.org/techcpu.htm>

Kelvin B. Hales
Kelvin Hales Associates Limited
Consulting Control Engineers
Web: www.khace.com

Subject: Math error

From: Steve Amphlett

Date: 7 Feb, 2006 10:05:31

Message: 22 of 28

Kelvin Hales wrote:
>
>
>
> I've always understood pocket calculators to use BCD. Certainly the
> HP ones I have used
> (and still use today). Further details at:<http://www.hpmuseum.org/techcpu.htm>

I wonder if "xhpcalc" on my HP-UX workstation also uses BCD
underneath? It's like an HP calculator in every other way.

Subject: Math error

From: Derek O'Connor

Date: 7 Feb, 2006 17:54:11

Message: 23 of 28

Steve Amphlett wrote:
>
>
> Kelvin Hales wrote:
>>
>>
>>
>> I've always understood pocket calculators to use BCD. Certainly
> the
>> HP ones I have used
>> (and still use today). Further details at:<http://www.hpmuseum.org/techcpu.htm>
>
> I wonder if "xhpcalc" on my HP-UX workstation also uses BCD
> underneath? It's like an HP calculator in every other way.

Does anybody know the technical details of Microsoft's Calculator
5.2? It appears to be very good, with about 30 decimal digits of
precision.

For example in Matlab R13 I get
sin(10^22) = -0.852200849767189
sin(10^23) = -0.324053937643003 (wrong)

In MS Calculator I get
sin(10^22) = -0.85220084976718880177270589375303
sin(10^23) = +0.70114063986107846946924183048087

As a check, PariGP 2.2.12 with \p50 gives
 sin(10^23) = +0.701140639861078469469241830480871113042258876008

Incidentally, O-Matrix 5.8 (Harmonic Software) gives wrong answers
from about sin(10^11) on.

Derek O'Connor

Subject: Math error

From: Wim Van Hoydonck

Date: 8 Feb, 2006 08:27:33

Message: 24 of 28

And why does the following generate wrong output (checked and confirmed
on R13 and R14)?

If you check the output, you see that some entries of certain vectors
are wrong, which could generate errors in subsequent calculations.

I never read anything in the documentation about Matlab generating
errors when initializing vectors using the colon ":" operator...

%=========================================
>>format long e
>>Mm17 = [-0.1:0.1:0.7]
>>M07 = [0.0:0.1:0.7]
>>M17 = [0.1:0.1:0.7]
>>M27 = [0.2:0.1:0.7]
>>M37 = [0.3:0.1:0.7]
>>M47 = [0.4:0.1:0.7]
>>M08 = [0.0:0.1:0.8]
>>M18 = [0.1:0.1:0.8]
>>M28 = [0.2:0.1:0.8]
>>M38 = [0.3:0.1:0.8]

PB wrote:
>
>
> jhu wrote:
>>
>>
>> In matlab type
>>
>>>>101*0.2-20.2
>>
>> ans =
>>
>> 3.5527136788005e-015
>>
>> he answer should be 0?
>
> Yes, in theory, but not using a computer with finite precision.
> Read
> this, <http://docs.sun.com/source/806-3568/ncg_goldberg.html>,
> o learn more.
>
> /PB

Subject: Math error

From: Steve Amphlett

Date: 8 Feb, 2006 08:40:12

Message: 25 of 28

Wim Van Hoydonck wrote:
>
>
> And why does the following generate wrong output (check and
> confirmed
> on R13 and R14) ?
>
> If you check the output, you see that some entries of certain
> vectors
> are wrong, which could generating errors in subsequent
> calculations.
>
> I never read anything in the documentation about Matlab generating
> errors when initializing vectors using the colon ":" operator...
>
> %=========================================
>>>format long e
>>>Mm17 = [-0.1:0.1:0.7]
>>>M07 = [0.0:0.1:0.7]
>>>M17 = [0.1:0.1:0.7]
>>>M27 = [0.2:0.1:0.7]
>>>M37 = [0.3:0.1:0.7]
>>>M47 = [0.4:0.1:0.7]
>>>M08 = [0.0:0.1:0.8]
>>>M18 = [0.1:0.1:0.8]
>>>M28 = [0.2:0.1:0.8]
>>>M38 = [0.3:0.1:0.8]

Explained very nicely here. It's even got the old 0.0:0.1:1.0
chestnut in it. Maybe not "documentation" as such, but it is a ML
resource.

 <http://www.mathworks.com/company/newsletters/news_notes/pdf/Fall96Cleve.pdf>

Subject: Math error

From: Wim Van Hoydonck

Date: 8 Feb, 2006 10:59:09

Message: 26 of 28

That's not the point,

>> m1 = [0.0:0.1:0.7]
>> m2 = [0.0:0.1:0.8]

do not generate the same results for the fifth and sixth entry.

I would be more interested in knowing how the value of the upper
limit of a vector generated with the colon operator influences the
round-off error for values inside that vector.

If you subsequently do:
>> m1(9) = 0.8
>> diff = m1 - m2

You get

diff =

[0 0 0 0 -1.110223024625157e-16 -5.551115123125783e-17
-1.110223024625157e-16
-1.110223024625157e-16 0]

Doing this in Octave, the 8th entry equals
1.11022302462516e-16, and all the others are exactly zero.

Greetings,

Wim

Steve Amphlett wrote:
>
>
> Wim Van Hoydonck wrote:
>>
>>
>> And why does the following generate wrong output (check and
>> confirmed
>> on R13 and R14) ?
>>
>> If you check the output, you see that some entries of certain
>> vectors
>> are wrong, which could generating errors in subsequent
>> calculations.
>>
>> I never read anything in the documentation about Matlab
> generating
>> errors when initializing vectors using the colon ":"
operator...
>>
>> %=========================================
>>>>format long e
>>>>Mm17 = [-0.1:0.1:0.7]
>>>>M07 = [0.0:0.1:0.7]
>>>>M17 = [0.1:0.1:0.7]
>>>>M27 = [0.2:0.1:0.7]
>>>>M37 = [0.3:0.1:0.7]
>>>>M47 = [0.4:0.1:0.7]
>>>>M08 = [0.0:0.1:0.8]
>>>>M18 = [0.1:0.1:0.8]
>>>>M28 = [0.2:0.1:0.8]
>>>>M38 = [0.3:0.1:0.8]
>
> Explained very nicely here. It's even got the old 0.0:0.1:1.0
> chestnut in it. Maybe not "documentation" as such, but it is a ML
> resource.
>
> <http://www.mathworks.com/company/newsletters/news_notes/pdf/Fall96Cleve.pdf>
>

Subject: Math error

From: Steve Amphlett

Date: 8 Feb, 2006 11:21:46

Message: 27 of 28

Wim Van Hoydonck wrote:
>
>
> Thats not the point,
>
>>> m1 = [0.0:0.1:0.7]
>>> m2 = [0.0:0.1:0.8]
>
> do not generate the same results for the fifth and sixth entry.

I see. Try this:

sprintf('%1.20f\n', 0:0.1:1000)

I get

999.79999999999995000000
999.89999999999998000000
1000.00000000000000000000

for the last three values. Either the round-off is cancelling
exactly (by luck), or something clever is being done. Try adding up
loads and loads of 0.1 in a loop and you'll see that round-off
doesn't seem to cancel like this. I reckon they are doing forward
and reverse expansion and taking the mean.
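
The contrast is easy to demonstrate:

% the colon expression lands on the endpoint exactly...
x = 0:0.1:1000;
x(end) == 1000          % 1

% ...but accumulating 0.1 ten thousand times does not
s = 0;
for j = 1:10000
    s = s + 0.1;        % a fresh rounding error at every addition
end
s == 1000               % 0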

Subject: Math error

From: Steve Amphlett

Date: 8 Feb, 2006 11:28:03

Message: 28 of 28

Steve Amphlett wrote:
>
>
>
> I see. Try this:
>
> sprintf('%1.20f\n', 0:0.1:1000)
>
> I get
>
> 999.79999999999995000000
> 999.89999999999998000000
> 1000.00000000000000000000
>
> for the last three values. Either the round-off is cancelling
> exactly (by luck). Or something clever is being done. Try adding
> up
> loads and loads of 0.1 in a loop and you'll see that round-off
> doesn't seem to cancel like this. I reckon they are doing forward
> and reverse expansion and taking the mean.

The answer IS in Cleve's article. Read carefully:

"Matlab is careful to arrange that the last element is exactly equal
to 1. But if you form this vector yourself by repeated additions of
0.1, you will miss hitting the final 1 exactly."

So it's not surprising that different end points give different
middle entries. I'd say ML is cleverer than Octave in this respect.
