
Thread Subject:
floating point calc differences between Matlab and C?

Subject: floating point calc differences between Matlab and C?

From: Michael

Date: 25 Mar, 2009 22:22:01

Message: 1 of 22

Hi,

I've implemented some basic complex-number arithmetic functions in C, for use in a larger project in which portions of a MATLAB program are being replicated in C to run as MEX files.

While comparing the MATLAB and C implementations of the arithmetic routines, I see differences in the results that look like floating-point rounding differences. Generally the differences are less than the eps of the MATLAB results used as reference.

Perhaps these are a result of the floating point rounding mode being used on the processor. Is there a way to check and set the floating point rounding mode from Matlab? Is this something that can even be set by different apps running on the same processor?

I'm running OS X on an Intel Mac, using gcc 4 for C compilation. It looks like gcc 4 offers a rounding-mode compiler flag only for DEC Alpha.

The functions that give me differences between Matlab and C are division, sqrt, and division-into-a-scalar. Perhaps the differences could come from different implementations of the algorithms such that rounding errors play out differently even while using the same rounding mode?

Any thoughts appreciated!

Cheers,
Michael

Subject: floating point calc differences between Matlab and C?

From: James Tursa

Date: 25 Mar, 2009 22:43:02

Message: 2 of 22

"Michael" <michael@circular-logic.com> wrote in message <gqeaq9$56$1@fred.mathworks.com>...
>
> While comparing the MATLAB and C implementations of the arithmetic routines, I see differences in the results that look like floating-point rounding differences. Generally the differences are less than the eps of the MATLAB results used as reference.
>
> Perhaps these are a result of the floating point rounding mode being used on the processor. Is there a way to check and set the floating point rounding mode from Matlab? Is this something that can even be set by different apps running on the same processor?

This would in all likelihood be a fruitless endeavor. Even if *one* of the differences were due to a rounding-mode difference, and you managed to get them both to do the same thing, there are other things that will be very difficult to control. E.g., is there a co-processor involved? On a PC the co-processor uses 80-bit extended precision. So the use of the co-processor can change your results depending on how it is utilized, which intermediate calculations stay 80-bit for future use in downstream code, etc. The compiler will be making these choices for you without you having much say in the matter, and it often won't match how the MATLAB calculations use the co-processor. Even a simple calculation such as A + B * C will give a different result between MATLAB and C because of the co-processor issue. Also, the C compiler may be rearranging the order of calculations for optimization reasons, and that can cause another difference between C and MATLAB. Etc., etc.

If you are getting agreement down to the eps level, I would say that is an adequate comparison.

James Tursa

Subject: floating point calc differences between Matlab and C?

From: Walter Roberson

Date: 26 Mar, 2009 00:57:26

Message: 3 of 22

Michael wrote:

> Perhaps these are a result of the floating point rounding mode being used on the processor.
> Is there a way to check and set the floating point rounding mode from Matlab?

Not on the Mac, not without writing some MEX to do it for you.

There is a system_dependent() call that [if I recall correctly] can do some
things with the rounding modes on MS Windows.
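
For what it's worth, a minimal sketch of such a MEX helper, assuming a C99 <fenv.h> is available (the gateway below is illustrative, not an existing MathWorks function):

/* Hypothetical MEX gateway: query, and optionally set, the FPU
   rounding mode via C99 <fenv.h>. Illustrative sketch only. */
#include <fenv.h>
#include "mex.h"

void mexFunction(int nlhs, mxArray *plhs[], int nrhs, const mxArray *prhs[])
{
    if (nrhs > 0) {
        /* caller passes the value of one of the FE_* rounding-mode constants */
        int mode = (int)mxGetScalar(prhs[0]);
        if (fesetround(mode) != 0)
            mexErrMsgTxt("fesetround failed: unsupported rounding mode");
    }
    /* return the rounding mode now in effect */
    plhs[0] = mxCreateDoubleScalar((double)fegetround());
}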

Subject: floating point calc differences between Matlab and C?

From: Michael

Date: 26 Mar, 2009 02:35:03

Message: 4 of 22

> > Perhaps these are a result of the floating point rounding mode being used on the processor. Is there a way to check and set the floating point rounding mode from Matlab? Is this something that can even be set by different apps running on the same processor?
>
> This would in all likelihood be a fruitless endeavor. Even if *one* of the differences were due to a rounding-mode difference, and you managed to get them both to do the same thing, there are other things that will be very difficult to control. E.g., is there a co-processor involved? On a PC the co-processor uses 80-bit extended precision. So the use of the co-processor can change your results depending on how it is utilized, which intermediate calculations stay 80-bit for future use in downstream code, etc. The compiler will be making these choices for you without you having much say in the matter, and it often won't match how the MATLAB calculations use the co-processor. Even a simple calculation such as A + B * C will give a different result between MATLAB and C because of the co-processor issue. Also, the C compiler may be rearranging the order of calculations for optimization reasons, and that can cause another difference between C and MATLAB. Etc., etc.
>
> If you are getting agreement down to the eps level, I would say that is an adequate comparison.
>
> James Tursa

Thanks James. I hadn't thought about compiler or co-processor issues. What a can of worms I've gotten myself into! I tried a debug build (i.e. no optimizations) of my C code and I get the same results as with my optimized build, but I figure other differences between the gcc build and MATLAB could still be in play. Is there a co-processor on Intel Macs like what you mentioned? I know Pentium-class chips can (always?) do the 80-bit float processing, but are current chips still considered Pentium-class?

I should have been clearer in my first post. *Most* of the results from the MATLAB and C versions are within eps, but some aren't (for my particular test vectors, which aren't designed with particular test-value conditions in mind). Again, I'm working with complex numbers.

I measure relative error like so:

err = abs( (Matlab_result - C_result) / abs(Matlab_result) )

Then I check if err is less than the relative eps:

err < eps( abs(Matlab_result) ) ? OK : problem

The problem functions I have are:

abs() yields about 1% of values within 10*rel_eps > err > rel_eps
div() yields about 10% of values within 10*rel_eps > err > rel_eps
divIntoScalar() yields about 40% of values within 20*rel_eps > err > rel_eps
sqrt() yields about 40% of values within 1e4*rel_eps > err > rel_eps

So sqrt() really is far out there. Curiously, all of the problematic values for sqrt() are from input values with a negative real component. When the real component is positive (the same abs values as when negative), I get no relative errors outside of eps, and actually hardly any that are even non-zero. This seems very strange. The sign of the imaginary component doesn't affect the distribution of error. The error in the other functions is not distributed in this way.

If anyone can help me in how to think about this properly, I'd be very grateful. I'm new to these floating point issues and I could easily be missing something.

Cheers,
Michael

Subject: floating point calc differences between Matlab and C?

From: Michael

Date: 26 Mar, 2009 02:38:01

Message: 5 of 22

> > Perhaps these are a result of the floating point rounding mode being used on the processor.
> > Is there a way to check and set the floating point rounding mode from Matlab?
>
> Not on the Mac, not without writing some MEX to do it for you.
>
> There is a system_dependent() call that [if I recall correctly] can do some
> things with the rounding modes on MS Windows.

Thanks Walter. When you say writing some MEX code to do it, do you mean a MEX file could affect the floating-point rounding mode of MATLAB as it's running, or do you mean being able to change the rounding mode within the C code running as the MEX file?

I couldn't find how to do it in gcc. Do you know how?

Thanks again.

Cheers,
Michael

Subject: floating point calc differences between Matlab and C?

From: Roger Stafford

Date: 26 Mar, 2009 08:04:01

Message: 6 of 22

"Michael" <michael@circular-logic.com> wrote in message <gqepkn$s8r$1@fred.mathworks.com>...
> Thanks James. I hadn't thought about compiler or co-processor issues. What a can of worms I've gotten myself into! I tried a debug build (i.e. no optimizations) of my C code and I get the same results as with my optimized build, but I figure other differences between the gcc build and MATLAB could still be in play. Is there a co-processor on Intel Macs like what you mentioned? I know Pentium-class chips can (always?) do the 80-bit float processing, but are current chips still considered Pentium-class?
>
> I should have been clearer in my first post. *Most* of the results from the MATLAB and C versions are within eps, but some aren't (for my particular test vectors, which aren't designed with particular test-value conditions in mind). Again, I'm working with complex numbers.
>
> I measure relative error like so:
>
> err = abs( (Matlab_result - C_result) / abs(Matlab_result) )
>
> Then I check if err is less than the relative eps:
>
> err < eps( abs(Matlab_result) ) ? OK : problem
>
> The problem functions I have are:
>
> abs() yields about 1% of values within 10*rel_eps > err > rel_eps
> div() yields about 10% of values within 10*rel_eps > err > rel_eps
> divIntoScalar() yields about 40% of values within 20*rel_eps > err > rel_eps
> sqrt() yields about 40% of values within 1e4*rel_eps > err > rel_eps
>
> So sqrt() really is far out there. Curiously, all of the problematic values for sqrt() are from input values with a negative real component. When the real component is positive (the same abs values as when negative), I get no relative errors outside of eps, and actually hardly any that are even non-zero. This seems very strange. The sign of the imaginary component doesn't affect the distribution of error. The error in the other functions is not distributed in this way.
>
> If anyone can help me in how to think about this properly, I'd be very grateful. I'm new to these floating point issues and I could easily be missing something.
>
> Cheers,
> Michael

  Michael, the errors you describe are far too large to be accounted for by such differences as 80-bit floating point numbers, rounding modes, coprocessors, or things of this kind. That is, they are too large for most single operations involving real numbers. Such errors can only come from 1) something like a subtraction of nearly equal numbers, 2) a long chain of operations, or 3) calls on functions that are improperly written.

  You state that these errors occur principally with complex numbers. To do complex arithmetic it is necessary to perform more than one real number arithmetic operation. In particular, taking the square root of a complex number involves a number of operations, and if these are not done properly, some excessively large rounding errors can be produced. Your description of the large square root errors occurring only with negative real parts strongly suggests that inappropriate routines on some systems are being used to carry out complex square root.

  Consider this possibility. Suppose we want to find x and y, given a and b for

 x+y*i = sqrt(a+b*i)

According to convention, of the two possible complex square roots, the one with the non-negative real part is to be selected. A little algebra shows that the solution can then be expressed as:

 x = sqrt((sqrt(a^2+b^2)+a)/2);
 y = b/(2*x);

where, as you see, all operations are performed on real numbers.

  For positive values of a, these expressions should yield quite decent results, nothing like the errors you describe. However, if a is negative, the expression for x becomes inappropriate for values of b much smaller than the absolute value of a. The two quantities, sqrt(a^2+b^2) and a, can nearly cancel each other and produce an excessive rounding error relative to the size of the result. Try it for something like a = -1 and b = 0.01 and you will get a relative error of the magnitude you described (1e4).

  There is a much better formula to use for the negative a case and that is:

 y = sqrt((sqrt(a^2+b^2)-a)/2);
 if b<0, y = -y; end % Keep x non-negative
 x = b/(2*y);

As long as a is negative, there is none of the above kind of cancellation occurring and full accuracy is maintained. A good complex square root routine would select from these two solutions according to the sign of a.
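
  In C, a sketch combining the two branches might look like the following (illustrative only; it is not claimed to be MATLAB's routine, and it ignores the overflow hazard in sqrt(a^2+b^2) discussed later in this thread):

#include <math.h>

/* sqrt(a + b*i) = x + y*i, choosing the formula by the sign of a */
void csqrt_stable(double a, double b, double *x, double *y)
{
    double m = sqrt(a*a + b*b);         /* |a + b*i| */
    if (a >= 0.0) {                     /* m + a cannot cancel */
        *x = sqrt((m + a) / 2.0);
        *y = (*x > 0.0) ? b / (2.0 * *x) : 0.0;
    } else {                            /* m - a cannot cancel */
        *y = sqrt((m - a) / 2.0);
        if (b < 0.0) *y = -*y;          /* keep the real part non-negative */
        *x = b / (2.0 * *y);
    }
}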

  Therefore I am putting forth the hypothesis that in at least one of your systems being compared, there are possibly some inferior complex square root routines being called upon which produce excessive errors in the manner shown above, perhaps different from the above specific example but having analogous computational difficulties.

Roger Stafford

Subject: floating point calc differences between Matlab and C?

From: James Tursa

Date: 26 Mar, 2009 15:11:01

Message: 7 of 22

"Michael" <michael@circular-logic.com> wrote in message <gqepkn$s8r$1@fred.mathworks.com>...
>
> sqrt() yields about 40% of values within 1e4*rel_eps > err > rel_eps
>

Can you post a specific example of this? i.e., show the actual code used and the numbers used for this calculation for some of the bad results?

James Tursa

Subject: floating point calc differences between Matlab and C?

From: Michael

Date: 31 Mar, 2009 00:47:44

Message: 8 of 22

"Roger Stafford" <ellieandrogerxyzzy@mindspring.com.invalid> wrote in message <gqfcth$o5u$1@fred.mathworks.com>...
> Michael, the errors you describe are far too large to be accounted for by such differences as 80-bit floating point numbers, rounding modes, coprocessors, or things of this kind. That is, they are too large for most single operations involving real numbers. Such errors can only come from 1) something like a subtraction of nearly equal numbers, 2) a long chain of operations, or 3) calls on functions that are improperly written.
>
> You state that these errors occur principally with complex numbers. To do complex arithmetic it is necessary to perform more than one real number arithmetic operation. In particular, taking the square root of a complex number involves a number of operations, and if these are not done properly, some excessively large rounding errors can be produced. Your description of the large square root errors occurring only with negative real parts strongly suggests that inappropriate routines on some systems are being used to carry out complex square root.
>
> Consider this possibility. Suppose we want to find x and y, given a and b for
>
> x+y*i = sqrt(a+b*i)
>
> According to convention, of the two possible complex square roots, the one with the non-negative real part is to be selected. A little algebra shows that the solution can then be expressed as:
>
> x = sqrt((sqrt(a^2+b^2)+a)/2);
> y = b/(2*x);
>
> where, as you see, all operations are performed on real numbers.
>
> For positive values of a, these expressions should yield quite decent results, nothing like the errors you describe. However, if a is negative, the expression for x becomes inappropriate for values of b much smaller than the absolute value of a. The two quantities, sqrt(a^2+b^2) and a, can nearly cancel each other and produce an excessive rounding error relative to the size of the result. Try it for something like a = -1 and b = 0.01 and you will get a relative error of the magnitude you described (1e4).
>
> There is a much better formula to use for the negative a case and that is:
>
> y = sqrt((sqrt(a^2+b^2)-a)/2);
> if b<0, y = -y; end % Keep x non-negative
> x = b/(2*y);
>
> As long as a is negative, there is none of the above kind of cancellation occurring and full accuracy is maintained. A good complex square root routine would select from these two solutions according to the sign of a.
>
> Therefore I am putting forth the hypothesis that in at least one of your systems being compared, there are possibly some inferior complex square root routines being called upon which produce excessive errors in the manner shown above, perhaps different from the above specific example but having analogous computational difficulties.
>
> Roger Stafford

Roger, thanks very much, your algorithm is working much better than mine! With my test data, it's now not significantly different from MATLAB's results. My main mistake was simply to implement the mathematical formula for the square root of a complex number without the numerical-analysis considerations.

I'm not sure I understand how excessive rounding error becomes a problem for b << |a| when a is negative and you use:

x = sqrt((sqrt(a^2+b^2)+a)/2);
y = b/(2*x);

Is this the issue, e.g. with a = -1 and b = 0.01:
sqrt(a^2+b^2) evals to about 1.00005, so you've lost some of b's original precision because b^2 is added to a^2, which is orders of magnitude greater. Then you add -1, leaving about 0.00005, which greatly reduces the order of magnitude, but you can't regain the original precision of b, so you have a number with less precision than usual for its order of magnitude. From then on, your relative precision is much less than what you'd expect for the order of magnitude of the answer?
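
A quick standalone C check of this reading, using the same a = -1, b = 0.01 (sketch; the ~1e4*eps figure is from Roger's post above):

#include <math.h>
#include <stdio.h>

int main(void)
{
    double a = -1.0, b = 0.01;
    double m = sqrt(a*a + b*b);

    /* naive branch: m + a cancels down to ~5e-5, amplifying the round-off in m */
    double x_naive = sqrt((m + a) / 2.0);

    /* stable branch: m - a is ~2, no cancellation; then x = b/(2y) */
    double y = sqrt((m - a) / 2.0);
    double x_stable = b / (2.0 * y);

    /* expect the two to agree to only ~12 of 16 digits, i.e. an error near 1e4*eps */
    printf("x naive  = %.17g\n", x_naive);
    printf("x stable = %.17g\n", x_stable);
    return 0;
}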
---
BTW, I assume my method I described for measuring the acceptable error between the results of my algorithm and matlab is reasonable for complex numbers?
I measure like so:
err = abs( (Matlab_result - C_result) / abs(Matlab_result) )
Then I check if err is less than the relative eps:
err < eps( abs(Matlab_result) ) ? OK : problem
---
So my absolute-value and division algorithms are still giving results different from MATLAB's by more than eps. Do you know a source for finding numerical algorithms for these? I know MATLAB uses BLAS and LAPACK, but from what I can see of their source, these don't have basic operations like sqrt, div, and absolute value. There is a BLAS routine dcabs1.f that says it's for the absolute value of a complex number, but it looks like something else, because it just computes abs(real) + abs(imag).

If you feel game, here are some details on what I've found so far:

I've been looking around. On netlib I found something in the 'Templates' library for both abs and div.

But the abs algorithm from Templates (http://www.netlib.org/templates/single//cabs.c) actually gives *worse* results, strangely.

My simple algorithm is just this:
res = sqrt(a^2 + b^2);

For a=3.999999999999986e-01, b=4.227932187381601e-01, it yields an error that is 1.7x the relative eps of the MATLAB result. There are other numbers close in value, with slightly smaller or greater differences between a and b, that yield no error at all. This is the only value (along with -a + -b) that yields too great an error in my test.

The algorithm from Templates cabs.c looks like this, and for my test values it yields many more values out of the acceptable range than my simplistic algorithm:

if (real < 0)
    real = -real;
if (imag < 0)
    imag = -imag;
if (imag > real) {          /* swap so that real >= imag */
    temp = real;
    real = imag;
    imag = temp;
}
if ((real + imag) == real)  /* imag is negligible at this precision */
    return(real);

temp = imag / real;
temp = real * sqrt(1.0 + temp * temp);  /* scaled form avoids overflow */
return(temp);

---
The Templates algorithm for division (http://www.netlib.org/templates/single//c_div.c) gives somewhat better results than my simplistic algorithm, but still differs from MATLAB by more than eps:

if ((abr = b->r) < 0.)
    abr = -abr;
if ((abi = b->i) < 0.)
    abi = -abi;
if (abr <= abi) {
    if (abi == 0)
        sig_die("complex division by zero", 1);
    ratio = (double)b->r / b->i;
    den = b->i * (1 + ratio * ratio);
    c->r = (a->r * ratio + a->i) / den;
    c->i = (a->i * ratio - a->r) / den;
} else {
    ratio = (double)b->i / b->r;
    den = b->r * (1 + ratio * ratio);
    c->r = (a->r + a->i * ratio) / den;
    c->i = (a->i - a->r * ratio) / den;
}

The numbers that give trouble here, with err / eps(abs(Matlab_result)):

-7.000000000000000e+00 - 8.714479827243187e-01i 1.271683075112519e+00
-6.199999999999999e+00 + 8.337771486592951e-02i 1.288925174930687e+00
-4.100000000000000e+00 - 1.423526483194365e+00i 1.171874580415606e+00
-3.800000000000000e+00 - 7.735560905031258e-01i 1.075638851535295e+00
-2.999999999999989e-01 - 3.093362496096221e-01i 1.910457200777206e+00
-1.999999999999993e-01 - 2.027100355086718e-01i 3.927976617533565e+00
 1.999999999999993e-01 + 2.027100355086718e-01i 2.514836539366301e+00
 5.000000000000000e-01 + 5.463024898437905e-01i 1.185018913225869e+00
 2.300000000000000e+00 - 1.119213641734133e+00i 1.217203806863496e+00
 3.300000000000000e+00 + 1.597457476600322e-01i 1.128032434691938e+00
 3.600000000000000e+00 + 4.934667299849033e-01i 1.105140837630346e+00


Thanks again for any thoughts!

Cheers,
Michael

Subject: floating point calc differences between Matlab and C?

From: Roger Stafford

Date: 31 Mar, 2009 03:35:03

Message: 9 of 22

"Michael" <michael@circular-logic.com> wrote in message <gqrp7g$nu0$1@fred.mathworks.com>...

  Michael, I have time to answer only a few of your questions here.

Q1:
> Is this the issue, e.g. with a = -1 and b = 0.01:
> sqrt(a^2+b^2) evals to about 1.00005, so you've lost some of b's original precision because b^2 is added to a^2, which is orders of magnitude greater. Then you add -1, leaving about 0.00005, which greatly reduces the order of magnitude, but you can't regain the original precision of b, so you have a number with less precision than usual for its order of magnitude. From then on, your relative precision is much less than what you'd expect for the order of magnitude of the answer?

  Yes, that is the gist of it. The error in x is more or less the same as for other values of a and b, but x itself is very much smaller, so the relative error is larger.

Q2:
> I measure like so:
> err = abs( (Matlab_result - C_result) / abs(Matlab_result) )
> Then I check if err is less than the relative eps:
> err < eps( abs(Matlab_result) ) ? OK : problem

  No, that doesn't look right. You should either compare

 err = abs((Matlab_result-C_result)/abs(Matlab_result))

with just 'eps', or you should compare

 err = abs(Matlab_result-C_result)

with eps(Matlab_result). This last quantity is simply the value of a single least bit error in 'Matlab_result'. If you have a sequence of several operations, you could get more than one such bit worth of errors.
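
  (In C, the first of these two checks might be sketched like so; DBL_EPSILON from <float.h> plays the role of MATLAB's plain 'eps', and the helper name is just for illustration.)

#include <float.h>
#include <math.h>

/* does a complex C result agree with the MATLAB reference to within eps? */
int within_eps(double ml_re, double ml_im, double c_re, double c_im)
{
    double diff = hypot(ml_re - c_re, ml_im - c_im);  /* |Matlab_result - C_result| */
    double mag  = hypot(ml_re, ml_im);                /* |Matlab_result|, assumed nonzero */
    return diff / mag < DBL_EPSILON;                  /* compare with plain eps */
}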

Q3:
> So my absolute value and divisions algorithms are still giving results different than matlab's by greater than eps.

  In the division of complex numbers, you should not expect the error in the real part of the result to be within a bit or so of that real part. The same can be true of the imaginary part, though not at the same time. All you can hope for is that each of these errors amounts to only a bit or so relative to the absolute value of the complex result. If you look at the expressions for the real and imaginary parts, you can see why:

 real((a+b*i)/(c+d*i)) = (a*c+b*d)/(c^2+d^2)
 imag((a+b*i)/(c+d*i)) = (b*c-a*d)/(c^2+d^2)

The quantity a*c+b*d or b*c-a*d can be very small compared with the values in a, b, c, and d if a+b*i and c+d*i are nearly orthogonal or nearly parallel, respectively, in the complex plane and thereby lose relative accuracy if this is taken with respect to the real or imaginary results, respectively. As I say, the errors should be compared with the result's absolute value instead.
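
  (A small C sketch to see this numerically; the inputs are invented for illustration, and the long double reference assumes extra precision is available, as on x86. The quotient's imaginary part suffers cancellation in b*c-a*d, so its own relative error is large even though the error relative to the quotient's magnitude stays near eps.)

#include <math.h>
#include <stdio.h>

int main(void)
{
    /* w = a+b*i is nearly parallel to z = c+d*i */
    double a = 1.0/3.0, b = 1.0/3.0 + 1e-10;
    double c = 1.0/3.0, d = 1.0/3.0;

    double den = c*c + d*d;
    double qi  = (b*c - a*d) / den;   /* tiny: the two products nearly cancel */

    /* reference: same formula with the products formed in long double */
    long double qil = ((long double)b*c - (long double)a*d)
                    / ((long double)c*c + (long double)d*d);

    printf("imag part             = %.17g\n", qi);
    printf("long double reference = %.17Lg\n", qil);
    printf("rel err vs imag part  = %.2g\n", (double)fabsl((qi - qil) / qil));
    printf("rel err vs |quotient| = %.2g\n", (double)fabsl(qi - qil));  /* |quotient| ~ 1 here */
    return 0;
}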

  I cannot see why there should be trouble with absolute value of complex numbers unless the square root routine for reals is defective.

  I'm afraid that's all I have time for now. My alibi is that the April 15 deadline for income tax reports is looming very large.

Roger Stafford

Subject: floating point calc differences between Matlab and C?

From: Michael

Date: 31 Mar, 2009 18:21:01

Message: 10 of 22

"Roger Stafford" <ellieandrogerxyzzy@mindspring.com.invalid> wrote in message <gqs317$8gs$1@fred.mathworks.com>...
> "Michael" <michael@circular-logic.com> wrote in message <gqrp7g$nu0$1@fred.mathworks.com>...
>
> Michael, I have time to answer only a few of your questions here.
>
> Q1:
> > Is this the issue, e.g. with a = -1 and b = 0.01:
> > sqrt(a^2+b^2) evals to about 1.00005, so you've lost some of b's original precision because b^2 is added to a^2, which is orders of magnitude greater. Then you add -1, leaving about 0.00005, which greatly reduces the order of magnitude, but you can't regain the original precision of b, so you have a number with less precision than usual for its order of magnitude. From then on, your relative precision is much less than what you'd expect for the order of magnitude of the answer?
>
> Yes, that is the gist of it. The error in x is more or less the same as for other values of a and b, but x itself is very much smaller, so the relative error is larger.
>
> Q2:
> > I measure like so:
> > err = abs( (Matlab_result - C_result) / abs(Matlab_result) )
> > Then I check if err is less than the relative eps:
> > err < eps( abs(Matlab_result) ) ? OK : problem
>
> No, that doesn't look right. You should either compare
>
> err = abs((Matlab_result-C_result)/abs(Matlab_result))
>
> with just 'eps', or you should compare
>
> err = abs(Matlab_result-C_result)
>
> with eps(Matlab_result). This last quantity is simply the value of a single least bit error in 'Matlab_result'. If you have a sequence of several operations, you could get more than one such bit worth of errors.
>
> Q3:
> > So my absolute value and divisions algorithms are still giving results different than matlab's by greater than eps.
>
> In the division of complex numbers, you should not expect the error in the real part of the result to be within a bit or so of that real part. The same can be true of the imaginary part, though not at the same time. All you can hope for is that each of these errors amounts to only a bit or so relative to the absolute value of the complex result. If you look at the expressions for the real and imaginary parts, you can see why:
>
> real((a+b*i)/(c+d*i)) = (a*c+b*d)/(c^2+d^2)
> imag((a+b*i)/(c+d*i)) = (b*c-a*d)/(c^2+d^2)
>
> The quantity a*c+b*d or b*c-a*d can be very small compared with the values in a, b, c, and d if a+b*i and c+d*i are nearly orthogonal or nearly parallel, respectively, in the complex plane and thereby lose relative accuracy if this is taken with respect to the real or imaginary results, respectively. As I say, the errors should be compared with the result's absolute value instead.
>
> I cannot see why there should be trouble with absolute value of complex numbers unless the square root routine for reals is defective.
>
> I'm afraid that's all I have time for now. My alibi is that the April 15 deadline for income tax reports is looming very large.
>
> Roger Stafford

Thanks again Roger. And please, you don't need any alibi; you've been very helpful with your time. I'll digest what you've written here, and best of luck with your tax return. I had a weird fit of motivation this month and have miraculously prepared my return already. Strange days.

Cheers,
Michael

Subject: floating point calc differences between Matlab and C?

From: Tim Davis

Date: 31 Mar, 2009 21:45:04

Message: 11 of 22

"Roger Stafford" <ellieandrogerxyzzy@mindspring.com.invalid> wrote in message ...

> In the division of complex numbers, you should not expect the error in the real part of the result to be within a bit or so of that real part. The same can be true of the imaginary part, though not at the same time. All you can hope for is that each of these errors amounts to only a bit or so relative to the absolute value of the complex result. If you look at the expressions for the real and imaginary parts, you can see why:
>
> real((a+b*i)/(c+d*i)) = (a*c+b*d)/(c^2+d^2)
> imag((a+b*i)/(c+d*i)) = (b*c-a*d)/(c^2+d^2)
>
> The quantity a*c+b*d or b*c-a*d can be very small compared with the values in a, b, c, and d if a+b*i and c+d*i are nearly orthogonal or nearly parallel, respectively, in the complex plane and thereby lose relative accuracy if this is taken with respect to the real or imaginary results, respectively. As I say, the errors should be compared with the result's absolute value instead.


Roger brings up the basic problem. Those expressions are not the best way to do complex division. See for example Algorithm 116, Communications of the ACM, by R. L. Smith (1962). It tries to avoid underflow and overflow. You can see my implementation of it here:

http://www.cise.ufl.edu/research/sparse/umfpack/UMFPACK/Source/umfpack_global.c

It's very different from the standard algebraic description. My guess is that the gcc compiler might have less accurate code for complex division than it could have.

Getting these basic operations (complex multiplication and division) done correctly is a fine art.
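
For reference, a compact C sketch of Smith's scheme, the same shape as the UMFPACK code linked above (this rendering is simplified and illustrative; no claim that gcc or MATLAB uses exactly this):

#include <math.h>

/* (a + b*i) / (c + d*i) by Smith's algorithm: divide through by the
   larger of |c| and |d| so the ratio r satisfies |r| <= 1.
   Assumes a nonzero divisor. */
void cdiv_smith(double a, double b, double c, double d,
                double *qr, double *qi)
{
    if (fabs(c) >= fabs(d)) {
        double r   = d / c;
        double den = c + d * r;          /* equals (c^2 + d^2) / c */
        *qr = (a + b * r) / den;
        *qi = (b - a * r) / den;
    } else {
        double r   = c / d;
        double den = d + c * r;          /* equals (c^2 + d^2) / d */
        *qr = (a * r + b) / den;
        *qi = (b * r - a) / den;
    }
}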

Subject: floating point calc differences between Matlab and C?

From: Roger Stafford

Date: 1 Apr, 2009 00:14:01

Message: 12 of 22

"Tim Davis" <davis@cise.ufl.edu> wrote in message <gqu2t0$r3d$1@fred.mathworks.com>...
> .....
> Roger brings up the basic problem. Those expressions are not the best way to do complex division. See for example Algorithm 116, ACM Transactions on Mathematical Software, by R. L. Smith (1962). It tries to avoid underflow and overflow. You can see my implementation of it here:
>
> http://www.cise.ufl.edu/research/sparse/umfpack/UMFPACK/Source/umfpack_global.c
> ......

  Hello Tim. I think we should distinguish between two aspects of complex division here. As you say, the division in your web reference will undergo overflow and underflow less often than the expression I wrote: around 1e308 and 1e-308 for yours and only 1e154 and 1e-154 for mine in double precision floating point. For that reason it is the superior code, but only for that reason.

  With respect to round-off errors, which is the subject of this thread, the two methods will produce essentially equivalent errors. In the special situation where b*c-a*d becomes very small for the expression I wrote for nearly parallel directions in the complex plane, your expressions ai-ar*r and ai*r-ar also become very small for exactly the same reason. A similar statement holds for the real part of the division for nearly orthogonal directions.

  In both methods, for these special cases, there can appear in the lower bits of the real or imaginary parts many erroneous bits, not just one or two, due to round-off error. Only when these errors are compared with the magnitude of the complex quotient can they be considered to be within the proper error range. This is an inherent property of complex division and not due to any defect in the algorithms.

Roger Stafford

Subject: floating point calc differences between Matlab and C?

From: Michael

Date: 1 Apr, 2009 23:43:01

Message: 13 of 22

"Roger Stafford" <ellieandrogerxyzzy@mindspring.com.invalid> wrote in message <gqs317$8gs$1@fred.mathworks.com>...
> Q2:
> > I measure like so:
> > err = abs( (Matlab_result - C_result) / abs(Matlab_result) )
> > Then I check if err is less than the relative eps:
> > err < eps( abs(Matlab_result) ) ? OK : problem
>
> No, that doesn't look right. You should either compare
>
> err = abs((Matlab_result-C_result)/abs(Matlab_result))
>
> with just 'eps', or you should compare
>
> err = abs(Matlab_result-C_result)
>
> with eps(Matlab_result). This last quantity is simply the value of a single least bit error in 'Matlab_result'. If you have a sequence of several operations, you could get more than one such bit worth of errors.
>
> Q3:
> > So my absolute value and divisions algorithms are still giving results different than matlab's by greater than eps.
>
> In the division of complex numbers, you should not expect the error in the real part of the result to be within a bit or so of that real part. The same can be true of the imaginary part, though not at the same time. All you can hope for is that each of these errors amounts to only a bit or so relative to the absolute value of the complex result. If you look at the expressions for the real and imaginary parts, you can see why:
>
> real((a+b*i)/(c+d*i)) = (a*c+b*d)/(c^2+d^2)
> imag((a+b*i)/(c+d*i)) = (b*c-a*d)/(c^2+d^2)
>
> The quantity a*c+b*d or b*c-a*d can be very small compared with the values in a, b, c, and d if a+b*i and c+d*i are nearly orthogonal or nearly parallel, respectively, in the complex plane and thereby lose relative accuracy if this is taken with respect to the real or imaginary results, respectively. As I say, the errors should be compared with the result's absolute value instead.
>
> I cannot see why there should be trouble with absolute value of complex numbers unless the square root routine for reals is defective.

"Roger Stafford" <ellieandrogerxyzzy@mindspring.com.invalid> wrote in message <gqs317$8gs$1@fred.mathworks.com>...
> Q2:
> > I measure like so:
> > err = abs( (Matlab_result - C_result) / abs(Matlab_result) )
> > Then I check if err is less than the relative eps:
> > err < eps( abs(Matlab_result) ) ? OK : problem
>
> No, that doesn't look right. You should either compare
>
> err = abs((Matlab_result-C_result)/abs(Matlab_result))
>
> with just 'eps', or you should compare
>
> err = abs(Matlab_result-C_result)
>
> with eps(Matlab_result). This last quantity is simply the value of a single least bit error in 'Matlab_result'. If you have a sequence of several operations, you could get more than one such bit worth of errors.
>
> Q3:
> > So my absolute value and divisions algorithms are still giving results different than matlab's by greater than eps.
>
> In the division of complex numbers, you should not expect the error in the real part of the result to be within a bit or so within that real part. The same can be true with the imaginary part though not at the same time. All you can hope for is that each of these errors amounts to only a bit or so within the absolute value of the complex result. If you look at the expressions for the real and imaginary parts, you can see why:
>
> real((a+b*i)/(c+d*i)) = (a*c+b*d)/sqrt(c^2+d^2)
> imag((a+b*i)/(c+d*i)) = (b*c-a*d)/sqrt(c^2+d^2)
>
> The quantity a*c+b*d or b*c-a*d can be very small compared with the values in a, b, c, and d if a+b*i and c+d*i are nearly orthogonal or nearly parallel, respectively, in the complex plane and thereby lose relative accuracy if this is taken with respect to the real or imaginary results, respectively. As I say, the errors should be compared with the result's absolute value instead.
>
> I cannot see why there should be trouble with absolute value of complex numbers unless the square root routine for reals is defective.

Re Q2:
Yes, I see now, thanks. With

err = abs((Matlab_result-C_result)/abs(Matlab_result))

I've already normalized so comparison with just eps is appropriate. This makes my results look much better. There's nothing like asking the right question of the data.

Re Q3:
OK, I'm pretty sure I follow.
I found a Numerical Recipes algorithm for division which I see now looks just like the one Tim links to (thanks for the reply, Tim). I've implemented it, and it gives results *exactly* the same as MATLAB's for my test script, so that's exciting. And it looks like it'll probably run faster, since there are two fewer multiplications, replaced by two equality comparisons and possible sign changes.

---
And as for my absolute value algorithm ( just |a+bi| = sqrt(a^2+b^2) ), using the correct err evaluation now shows that although it's not identical to MATLAB's implementation, it gives mostly identical results, and the rest are within eps. Much better than before!

I tried implementing the abs algorithm from Numerical Recipes (below), but it gives results further from MATLAB's than the simple algorithm above.

if ((absA = _A) < 0.)
    absA = -absA;
if ((absB = _B) < 0.)
    absB = -absB;
if (absA >= absB) {
    r = _B / _A;                      /* |r| <= 1 */
    real = absA * sqrt(1 + (r * r));
} else {
    r = _A / _B;
    real = absB * sqrt(1 + (r * r));
}
imag = 0;

The advantage of it, though, is that it protects better against overflow. If I'm starting to understand things correctly, I don't see that it would protect any better from rounding error, since (r*r) could be << 1, and when added to 1 would lose precision. Unless, since it's 1+xxx, the implicit bit (is that the correct term?) in the floating-point representation allows full precision of the fractional part of (r*r) to be retained for the sqrt operation? A disadvantage, I think, is that it takes more multiplies.

I've tried asking MathWorks support about how they implement abs(), sqrt() and div(), but they only pointed me to the fact that MATLAB uses BLAS and LAPACK. As before, I don't see these routines implemented as separate functions in BLAS/LAPACK, unless I'm missing something? Does anyone know if MATLAB's implementations of these functions are detailed somewhere?

Cheers and thanks again,
Michael

Subject: floating point calc differences between Matlab and C?

From: Tim Davis

Date: 2 Apr, 2009 21:52:01

Message: 14 of 22

See also Loren's blog:

http://blogs.mathworks.com/loren/2008/02/07/why-hypot/
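
The gist of that post in C terms, with inputs picked to force the naive form to overflow (illustrative sketch):

#include <math.h>
#include <stdio.h>

int main(void)
{
    double a = 1e200, b = 1e200;
    double naive = sqrt(a*a + b*b);   /* a*a overflows, so this is Inf */
    double safe  = hypot(a, b);       /* scales internally: ~1.414e200 */
    printf("naive = %g, hypot = %g\n", naive, safe);
    return 0;
}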

Subject: floating point calc differences between Matlab and C?

From: Michael

Date: 3 Apr, 2009 02:13:02

Message: 15 of 22

"Tim Davis" <davis@cise.ufl.edu> wrote in message <gr3c21$6nt$1@fred.mathworks.com>...
> See also Loren's blog:
>
> http://blogs.mathworks.com/loren/2008/02/07/why-hypot/

Thanks Tim. The FDLIBM library has an implementation of hypot; I'll check it out.

MathWorks support pointed me to FDLIBM when I asked about their implementations of sqrt, abs and div for complex numbers, saying that ML's sqrt function uses FDLIBM's HYPOT and to check the rest of the lib for abs and div. I don't see complex versions of those functions there, but I'll keep looking.

Cheers

Subject: floating point calc differences between Matlab and C?

From: Joachim

Date: 3 Apr, 2009 18:44:02

Message: 16 of 22

"Michael" <michael@circular-logic.com> wrote in message
> I'm running OS X on an Intel Mac, using gcc 4 for C compilation. It looks like gcc 4 offers a rounding-mode compiler flag only for DEC Alpha.

There are compiler flags that affect the precision of floating-point computations. Look at the man page for gcc. There is the -ffloat-store flag you can set, which prevents values from being kept in registers at excess precision in violation of IEEE rules. Also, note that some compiler optimizations might change the code in a way that affects the rounding of floats. E.g., a compiler optimization might replace x/a by x * (1/a). So it might be worth compiling the MEX file without enabling optimization.

Subject: floating point calc differences between Matlab and C?

From: Michael

Date: 7 Apr, 2009 01:06:01

Message: 17 of 22

"Joachim" <a@b.com> wrote in message <gr5ldi$5ha$1@fred.mathworks.com>...
> "Michael" <michael@circular-logic.com> wrote in message
> > I'm running OS X on an Intel Mac, using gcc 4 for C compilation. It looks like gcc 4 offers a rounding-mode compiler flag only for DEC Alpha.
>
> There are compiler flags that affect the precision of floating-point computations. Look at the man page for gcc. There is the -ffloat-store flag you can set, which prevents values from being kept in registers at excess precision in violation of IEEE rules. Also, note that some compiler optimizations might change the code in a way that affects the rounding of floats. E.g., a compiler optimization might replace x/a by x * (1/a). So it might be worth compiling the MEX file without enabling optimization.

Thanks Joachim. I'll look again for the compiler flags, must have been looking in the wrong spot. I've tried a debug build already, and didn't see any different results with my test values.

Cheers,
Michael

Subject: floating point calc differences between Matlab and C?

From: Malcolm Lidierth

Date: 6 Nov, 2010 12:07:04

Message: 18 of 22

Compiler flags:
See also MATLAB's feature('SetPrecision',...) and feature('SetRounding',...)
Details on Yair's blog
http://undocumentedmatlab.com/blog/undocumented-feature-function/

Subject: floating point calc differences between Matlab and C?

From: Bruno Luong

Date: 6 Nov, 2010 12:29:05

Message: 19 of 22

"Malcolm Lidierth" <ku.ca.lck@htreidil.mloclam> wrote in message <ib3gd8$4nj$1@fred.mathworks.com>...
> Compiler flags:
> See also MATLAB's feature('SetPrecision',...) and feature('SetRounding',...)
> Details on Yair's blog
> http://undocumentedmatlab.com/blog/undocumented-feature-function/

"feature('SetPrecision',...)" does not have any effect according to my test. Matlab uses 53 by default (there are few post on this subject lately). This loop will run forever

while true
     a=rand(1,1000);
     feature('SetPrecision',64);
     s64=sum(a);
     feature('SetPrecision',53);
     s53=sum(a);
     if s53-s64
         fprintf('Sum are different\n')
         break
     end
end

% Bruno

Subject: floating point calc differences between Matlab and C?

From: Jan Simon

Date: 6 Nov, 2010 14:06:04

Message: 20 of 22

Dear Bruno,

A = rand(100, 100);
x = rand(100, 1);
system_dependent('SetPrecision', 53);
y53 = A*x;
system_dependent('SetPrecision', 64);
y64 = A*x;

mean(abs(y53 - y64))
% Matlab 6.5:
  >> 4.86e-15
% Matlab 2009a:
  >> 0.0

See also the important, but obviously outdated, paper by Kahan:
  http://www.eecs.berkeley.edu/~wkahan/MxMulEps.pdf

Matlab 6.5:
  system_dependent('SetPrecision', 53);
  sum(x(:))
  >> 4943.74617316933

  system_dependent('SetPrecision', 64);
  sum(x(:))
  >> 4943.74617316934

  system_dependent('SetPrecision', 53);
  xsum(x(:), 1, 'long') % FEX, 80 bit long double accumulation
  >> 4943.74617316932

Under Matlab 2009a, all results are equal.
It seems like Matlab 2009a enables the 64 bit precision internally.

Kind regards, Jan

Subject: floating point calc differences between Matlab and C?

From: Jan Simon

Date: 6 Nov, 2010 16:09:03

Message: 21 of 22

Dear Bruno, dear users of old Matlab versions!

To my surprise the internal precision affects the graphic output under Matlab 6.5:
  system_dependent('SetPrecision', 53); % Standard
  figure;
  plot(1:10, 'LineWidth', 1.5)
==> two pixel width

  system_dependent('SetPrecision', 64);
  figure;
  plot(1:10, 'LineWidth', 1.5)
==> one pixel width with Painters and ZBuffer renderer
==> two pixel width with OpenGL

Solution: Of course use a newer Matlab version, the OpenGL renderer or a LineWidth of 1.5 + eps :-)

Jan

Subject: floating point calc differences between Matlab and C?

From: Malcolm Lidierth

Date: 8 Jan, 2011 12:29:05

Message: 22 of 22

TMW posting 1-79FEJH dated June 2009:
"As its name implies, the behavior of system_dependent('setprecision'...) varies from machine to machine and operating system to operating system. You are free to investigate its behavior in your particular environment."
See
http://www.mathworks.com/support/solutions/en/data/1-79FEJH/index.html?product=ML&solution=1-79FEJH
