From: "Michael" <michael@circular-logic.com>
Newsgroups: comp.soft-sys.matlab
Subject: Re: floating point calc differences between Matlab and C?
Date: Tue, 31 Mar 2009 00:47:44 +0000 (UTC)
Organization: Circular Logic
Message-ID: <gqrp7g$nu0$1@fred.mathworks.com>
References: <gqeaq9$56$1@fred.mathworks.com> <gqec1m$c0$1@fred.mathworks.com> <gqepkn$s8r$1@fred.mathworks.com> <gqfcth$o5u$1@fred.mathworks.com>
Reply-To: "Michael" <michael@circular-logic.com>

"Roger Stafford" <ellieandrogerxyzzy@mindspring.com.invalid> wrote in message <gqfcth$o5u$1@fred.mathworks.com>...
> Michael, the errors you describe are far too large to be accounted for by such differences as 80-bit floating point numbers, rounding modes, coprocessors, or things of this kind. That is, they are too large for most single operations involving real numbers. Such errors can only come from 1) something like a subtraction of nearly equal numbers, 2) a long chain of operations, or 3) calls on functions that are improperly written.
>
> You state that these errors occur principally with complex numbers. To do complex arithmetic it is necessary to perform more than one real number arithmetic operation. In particular, taking the square root of a complex number involves a number of operations, and if these are not done properly, some excessively large rounding errors can be produced.
> Your description of the large square root errors occurring only with negative real parts strongly suggests that inappropriate routines on some systems are being used to carry out complex square root.
>
> Consider this possibility. Suppose we want to find x and y, given a and b, for
>
>    x+y*i = sqrt(a+b*i)
>
> According to convention, of the two possible complex square roots, the one with the non-negative real part is to be selected. A little algebra shows that the solution can then be expressed as:
>
>    x = sqrt((sqrt(a^2+b^2)+a)/2);
>    y = b/(2*x);
>
> where, as you see, all operations are performed on real numbers.
>
> For positive values of a, these expressions should yield quite decent results, nothing like the errors you describe. However, if a is negative, the expression for x becomes inappropriate for values of b much smaller than the absolute value of a. The two quantities, sqrt(a^2+b^2) and a, can nearly cancel each other and produce an excessive rounding error relative to the size of the result. Try it for something like a = -1 and b = 0.01 and you will get a relative error of the magnitude you described (1e4).
>
> There is a much better formula to use for the negative-a case, and that is:
>
>    y = sqrt((sqrt(a^2+b^2)-a)/2);
>    if b<0, y = -y; end  % keep x non-negative
>    x = b/(2*y);
>
> As long as a is negative, there is none of the above kind of cancellation, and full accuracy is maintained. A good complex square root routine would select from these two solutions according to the sign of a.
>
> Therefore I am putting forth the hypothesis that in at least one of the systems being compared, some inferior complex square root routines are possibly being called upon which produce excessive errors in the manner shown above, perhaps different from the above specific example but having analogous computational difficulties.
>
> Roger Stafford

Roger, thanks very much, your algorithm is working much better than mine!
With my test data, it's now not significantly different from Matlab's results. My main mistake was simply to implement the mathematical formula for the square root of a complex number, without the numerical analysis considerations.

I'm not sure I understand how excessive rounding error becomes a problem for b << |a| when a is negative and you use:

   x = sqrt((sqrt(a^2+b^2)+a)/2);
   y = b/(2*x);

Is this the issue, e.g. with a = -1 and b = 0.01: sqrt(a^2+b^2) evaluates to about 1.00005, so you've already lost some of b's original precision, because b^2 was added to a^2, which is orders of magnitude greater. Then you add -1 and are left with about 0.00005, which greatly reduces the order of magnitude, but you can't regain the original precision of b, so you have a number whose precision is less than usual for its order of magnitude. From then on, your relative precision is much less than what you'd expect for the order of magnitude of the answer?

---

BTW, I assume the method I described for measuring the acceptable error between the results of my algorithm and Matlab's is reasonable for complex numbers? I measure like so:

   err = abs( (Matlab_result - C_result) / abs(Matlab_result) )

Then I check whether err is less than the relative eps:

   err < eps( abs(Matlab_result) ) ? OK : problem

---

So my absolute value and division algorithms are still giving results that differ from Matlab's by more than eps. Do you know a source for finding numerical algorithms for these? I know Matlab uses BLAS and LAPACK, but from what I can see of their source, these don't have basic operations like sqrt, div and absolute value. There is a BLAS routine dcabs1.f that says it's for the absolute value of a complex value, but it looks like something else, because it just computes abs(real) + abs(imag).

If you feel game, here are some details on what I've found so far. I've been looking around, and on netlib I found something in the 'Templates' library for both abs and div.
But the abs algorithm from Templates (http://www.netlib.org/templates/single//cabs.c) actually gives *worse* results, strangely. My simple algorithm is just this:

   res = sqrt(a^2 + b^2);

For a=3.999999999999986e-01, b=4.227932187381601e-01, it yields an error that is 1.7x the relative eps of the Matlab result. There are other numbers close in value, with slightly smaller or greater differences between a and b, that yield no error at all. This is the only value (along with -a + -b) that yields too great an error in my test.

The algorithm from Templates cabs.c looks like this, and for my test values it yields many more results outside the acceptable range than my simplistic algorithm does:

   if(real < 0)
      real = -real;
   if(imag < 0)
      imag = -imag;
   if(imag > real){
      temp = real;
      real = imag;
      imag = temp;
   }
   if((real+imag) == real)
      return(real);

   temp = imag/real;
   temp = real*sqrt(1.0 + temp*temp);  /*overflow!!*/
   return(temp);

---

The Templates algorithm for division (http://www.netlib.org/templates/single//c_div.c) gives somewhat better results than my simplistic algorithm, but still differs from Matlab by more than eps:

   if( (abr = b->r) < 0.)
      abr = - abr;
   if( (abi = b->i) < 0.)
      abi = - abi;
   if( abr <= abi ) {
      if(abi == 0)
         sig_die("complex division by zero", 1);
      ratio = (double)b->r / b->i ;
      den = b->i * (1 + ratio*ratio);
      c->r = (a->r*ratio + a->i) / den;
      c->i = (a->i*ratio - a->r) / den;
   }
   else {
      ratio = (double)b->i / b->r ;
      den = b->r * (1 + ratio*ratio);
      c->r = (a->r + a->i*ratio) / den;
      c->i = (a->i - a->r*ratio) / den;
   }

The numbers that give trouble here, with err / eps(abs(Matlab_result)):

   -7.000000000000000e+00 - 8.714479827243187e-01i   1.271683075112519e+00
   -6.199999999999999e+00 + 8.337771486592951e-02i   1.288925174930687e+00
   -4.100000000000000e+00 - 1.423526483194365e+00i   1.171874580415606e+00
   -3.800000000000000e+00 - 7.735560905031258e-01i   1.075638851535295e+00
   -2.999999999999989e-01 - 3.093362496096221e-01i   1.910457200777206e+00
   -1.999999999999993e-01 - 2.027100355086718e-01i   3.927976617533565e+00
    1.999999999999993e-01 + 2.027100355086718e-01i   2.514836539366301e+00
    5.000000000000000e-01 + 5.463024898437905e-01i   1.185018913225869e+00
    2.300000000000000e+00 - 1.119213641734133e+00i   1.217203806863496e+00
    3.300000000000000e+00 + 1.597457476600322e-01i   1.128032434691938e+00
    3.600000000000000e+00 + 4.934667299849033e-01i   1.105140837630346e+00

Thanks again for any thoughts!

Cheers,
Michael