Why is a Number Divided by Itself Not Equal to Unity?
w=0:(pi/100):pi;
w = w([17 66 93]);
x = exp(1i*w);
x./x
ans =
1.0000e+00 + 4.8645e-17i 1.0000e+00 - 4.9461e-17i 1.0000e+00 - 2.6884e-17i
I imagine this result has something to do with how MATLAB implements division of complex numbers, and I realize that the error is less than eps(1), but I thought it was an interesting result, and it would never have occurred to me that this could happen. I checked the doc page for rdivide, but it doesn't say anything about the algorithm used when both arguments are complex. Just curious if anyone can explain why this result happens (it only happened for these three particular values of the original w vector).
Ameer Hamza
on 14 Jun 2020
Probably an artifact of the finite precision of floating-point numbers. The exact details of ldivide and rdivide are not disclosed, but the implementation probably performs several intermediate operations on the real and imaginary parts before forming the quotient. Those operations accumulate a tiny bit of rounding error, which creates this discrepancy.
Stephen23
on 14 Jun 2020
"...a similar result could be obtained for two arguments to rdivide of the same numeric type that are isequal and neither has a complex attribute?"
No, this will not occur with real arrays. What you see is a side-effect of complex division being... complex:
Paul
on 14 Jun 2020
"Implementing the complex division exactly as on that mathworld page gets the expected answer:"
Sure, but I was not making any comment on that specific algorithm: it is quite possible that the actual implementation (in MATLAB, BLAS, or whatever) is something algebraically equivalent but that uses fewer/faster/more stable/whatever operations, and hence has different accumulated floating-point error from what you tried.
My point was simply that complex division is not a single atomic numeric operation, so whatever algorithm is used to implement it could quite conceivably accumulate some floating-point error: the authors of numeric libraries must always trade off speed vs memory vs accumulated error vs stability vs whatever else.
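That non-atomicity can be made concrete. MATLAB's actual routine is not public, but a classic non-naive choice in numeric libraries is Smith's (1962) scaled division. Here is a sketch (in Python for self-containedness; the IEEE-754 double arithmetic is identical to MATLAB's). Note that for z./z it returns a real part of exactly 1 plus a possible rounding residual in the imaginary part, the same signature as the result in the question — consistent with, though certainly not proof of, a Smith-style routine:

```python
# Smith's (1962) algorithm for complex division x/y: divide through by the
# larger-magnitude component of y so intermediate products stay in range.
# This is a sketch of one plausible algorithm, NOT MATLAB's documented one.
import math

def smith_div(x, y):
    a, b = x.real, x.imag
    c, d = y.real, y.imag
    if abs(c) >= abs(d):
        r = d / c                      # |r| <= 1
        den = c + d * r
        return complex((a + b * r) / den, (b - a * r) / den)
    else:
        r = c / d
        den = c * r + d
        return complex((a * r + b) / den, (b * r - a) / den)

if __name__ == "__main__":
    # The three w values from the question: elements [17 66 93] of 0:(pi/100):pi
    for w in (0.16 * math.pi, 0.65 * math.pi, 0.92 * math.pi):
        z = complex(math.cos(w), math.sin(w))
        q = smith_div(z, z)
        # Numerator and denominator of the real part are bit-identical, so
        # q.real == 1.0 exactly; the imaginary part keeps the rounding
        # residual of d - c*(d/c) (or c - d*(c/d)), which need not be 0.
        print(q.real == 1.0, q.imag)
```

Whatever MATLAB, BLAS, or a compiler's complex-division helper actually does, the takeaway is the same: several rounded operations stand between x, y, and x./y.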
Answers (1)
It depends on what algorithm is used to perform the complex division x./y. For example, if it is implemented as x./abs(y).^2.*conj(y), then floating-point errors are introduced by these operations even when x = y:
>> x./abs(x).^2.*conj(x) - 1
ans =
1.0e-15 *
0 0 0.2220
If, however, it had been done by conversion to complex exponentials, I don't think we would have seen residual errors:
>> [r,t]=deal(abs(x),angle(x));
>> (r./r).*exp(1i*(t-t))-1
ans =
0 0 0
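For what it's worth, both variants can be checked outside MATLAB as well; here is a sketch in Python (the underlying IEEE-754 double arithmetic is the same, though neither function is claimed to be MATLAB's actual algorithm). The abs-squared route can miss 1 in the last bit because abs() takes a square root that is then squared again, while in the polar route the magnitudes and angles cancel exactly:

```python
# Two candidate z/z implementations, mirroring the Answer above.
# Sketches only -- neither is claimed to be MATLAB's actual algorithm.
import cmath
import math

def div_via_abs2(x, y):
    # x./abs(y).^2.*conj(y): abs(y)**2 rounds hypot(c, d) squared, which
    # need not equal c*c + d*d exactly, so a one-ulp residual can survive
    # even when x == y.
    return x / abs(y) ** 2 * y.conjugate()

def div_via_polar(x, y):
    # (r1/r2)*exp(1i*(t1-t2)): for x == y the magnitudes and angles are
    # bit-identical, so r1/r2 == 1.0 and t1 - t2 == 0.0 exactly, and the
    # quotient is exactly 1 + 0i.
    r1, t1 = abs(x), cmath.phase(x)
    r2, t2 = abs(y), cmath.phase(y)
    return (r1 / r2) * cmath.exp(1j * (t1 - t2))

if __name__ == "__main__":
    for w in (0.16 * math.pi, 0.65 * math.pi, 0.92 * math.pi):
        z = cmath.exp(1j * w)
        print(div_via_abs2(z, z) - 1, div_via_polar(z, z) - 1)
```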
Matt J
on 14 Jun 2020
Well, one thing I'm pretty sure it's not doing is pre-conversion to complex exponentials. That would necessitate transcendental functions like sine and cosine, which are less efficient.
Matt J
on 15 Jun 2020
That is interesting, indeed. You should probably submit it as its own Answer.
James Tursa
on 15 Jun 2020
Edited: James Tursa
on 15 Jun 2020
The whole scaling business with r is to keep intermediate results from overflowing or underflowing. E.g.,
>> x = realmax + realmax*1i
x =
1.7977e+308 +1.7977e+308i
>> x./x
ans =
1
>> a=real(x); b=imag(x); c=real(x); d=imag(x);
>> (a.*c+b.*d)./(c.^2+d.^2) +1j*(b.*c - a.*d)./(c.^2+d.^2) - 1 % Overflows
ans =
NaN + NaNi
>>
>> x = realmin + realmin*1i
x =
2.2251e-308 +2.2251e-308i
>> x./x
ans =
1
>> a=real(x); b=imag(x); c=real(x); d=imag(x);
>> (a.*c+b.*d)./(c.^2+d.^2) +1j*(b.*c - a.*d)./(c.^2+d.^2) - 1 % Underflows
ans =
NaN
I would think most implementations of complex division would do some form of scaling, either always or conditionally, to avoid the overflow and underflow.
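The transcript above can be reproduced, and a fix sketched, outside MATLAB too. Below is a Python illustration (IEEE doubles, same as MATLAB): the textbook formula overflows or underflows exactly as shown, while pre-scaling both operands by a common power of two — one possible conditional-scaling strategy, not necessarily MATLAB's — keeps every intermediate in range. Power-of-two scaling is exact in binary floating point and cancels in the quotient.

```python
# Naive complex division vs. a power-of-two pre-scaled version.
# Sketch of one possible scaling strategy, not MATLAB's actual routine.
import math

REALMAX = 1.7976931348623157e308   # largest finite double (MATLAB realmax)
REALMIN = 2.2250738585072014e-308  # smallest normal double (MATLAB realmin)

def naive_div(x, y):
    # Textbook formula: intermediates like c*c + d*d can overflow to inf
    # (giving inf/inf = nan) or underflow to 0 near the range limits.
    a, b = x.real, x.imag
    c, d = y.real, y.imag
    den = c * c + d * d
    return complex((a * c + b * d) / den, (b * c - a * d) / den)

def scaled_div(x, y):
    # Rescale both operands by 2**-e, where e is the binary exponent of
    # the larger component of y.  The scaling is exact and cancels in x/y.
    # (Sketch only: ignores zero/inf/nan components.)
    e = max(math.frexp(abs(y.real))[1], math.frexp(abs(y.imag))[1])
    s = math.ldexp(1.0, -e)
    return naive_div(complex(x.real * s, x.imag * s),
                     complex(y.real * s, y.imag * s))

if __name__ == "__main__":
    big = complex(REALMAX, REALMAX)
    print(naive_div(big, big))   # (nan+nanj): c*c overflows to inf
    print(scaled_div(big, big))  # (1+0j)

    tiny = complex(REALMIN, REALMIN)
    # The naive denominator underflows all the way to zero (MATLAB's
    # 0/0 yields NaN; Python would raise ZeroDivisionError instead):
    print(tiny.real * tiny.real + tiny.imag * tiny.imag)  # 0.0
    print(scaled_div(tiny, tiny))  # (1+0j)
```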
James Tursa
on 15 Jun 2020
Any floating-point result needs to be taken with a grain of salt with regard to the last couple of trailing mantissa bits, so a good numeric algorithm would account for that. Another place you might get bitten is getting a "slightly" complex result when you were expecting a real one, or vice versa, and getting errors downstream because of that. But again, good algorithm design should check for that.
James Tursa
on 16 Jun 2020
Edited: James Tursa
on 16 Jun 2020
These are not stupid questions IMO.
You would need to look at the IEEE floating-point standard for things like x/x == 1. I think IEEE 754 requires that the result of a basic operation be correct as if the calculation were done in infinite precision and then rounded according to the rounding rule in effect (typically "round to nearest, ties to even"), so x/x == 1 would be required to hold for every possible finite nonzero real floating-point bit pattern. It would not be hard to write a MATLAB program to test every possible single and double bit pattern to see if it is true. (But I would be shocked to find out that any such bit pattern failed this test ... this is implemented at the chip level.)
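That exhaustive test is easy to approximate with random bit patterns (sketched in Python here rather than MATLAB; any IEEE-754 double behaves the same). Reinterpret random 64-bit patterns as doubles, skip zeros, infinities, and NaNs, and confirm x/x == 1:

```python
# Randomized check that x/x == 1.0 for finite nonzero IEEE-754 doubles.
# IEEE 754 requires division to be correctly rounded, and the exact
# quotient x/x is 1, so the rounded result must be exactly 1.0.
import math
import random
import struct

def random_double(rng):
    # Reinterpret a random 64-bit pattern as a double.
    bits = rng.getrandbits(64)
    return struct.unpack("<d", struct.pack("<Q", bits))[0]

def check_self_division(n=100_000, seed=0):
    rng = random.Random(seed)
    for _ in range(n):
        x = random_double(rng)
        if x == 0.0 or math.isinf(x) or math.isnan(x):
            continue  # 0/0, inf/inf, and nan/nan are the excluded cases
        if x / x != 1.0:
            return False
    return True

if __name__ == "__main__":
    print(check_self_division())  # True
```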
For language standards regarding complex arithmetic, I am unaware of any particular requirements (e.g., that z/z == 1, or that overflow and underflow be avoided when possible). I think Java has some floating-point reproducibility requirements in its standard, but I don't know the details.