Why is a Number Divided by Itself Not Equal to Unity?

w=0:(pi/100):pi;
w = w([17 66 93]);
x = exp(1i*w);
x./x
ans =
1.0000e+00 + 4.8645e-17i 1.0000e+00 - 4.9461e-17i 1.0000e+00 - 2.6884e-17i
I imagine this result has something to do with how MATLAB implements division of complex numbers, and I realize that the error is less than eps(1), but I thought it was an interesting result and it would never have occurred to me that this could happen. I checked the doc page for rdivide, but it doesn't say anything about its algorithm when both arguments are complex. Just curious if anyone can explain why this result happens (it only happened for these three particular values of the original w vector).

6 Comments

Probably an artifact of the finite precision of floating-point numbers. The exact details of ldivide and rdivide are not disclosed, but they probably perform some computations on the numbers before dividing the elements of the two vectors. Those computations accumulate a tiny bit of error, which creates this discrepancy.
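As a generic illustration of that kind of accumulation (in Python rather than MATLAB, and unrelated to MATLAB's actual division routine), even a single rounded operation can leave a residue on the order of eps(1):

```python
# Each intermediate result is rounded to the nearest representable double,
# so a chain of operations can leave a tiny residue even when the exact
# mathematical answer is 0.
x = 0.1 + 0.2      # exact value 0.3 is not representable; result rounds up slightly
residue = x - 0.3
print(residue)     # on the order of 5.6e-17, comparable to eps(1) = 2.22e-16
```

The residue here comes purely from rounding each intermediate result, which is the same mechanism behind the tiny imaginary parts in the question.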
Paul on 14 Jun 2020
Edited: Paul on 14 Jun 2020
Are you suggesting that a similar result could be obtained for two arguments to rdivide of the same numeric type that are isequal and neither has a complex attribute?
"...a similar result could be obtained for two arguments to rdivide of the same numeric type that are isequal and neither has a complex attribute?"
No, this will not occur with real arrays. What you see is a side-effect of complex division being... complex:
Implementing the complex division exactly as on that MathWorld page gets the expected answer:
>> a=real(x);b=imag(x);c=real(x);d=imag(x);
>> (a.*c+b.*d)./(c.^2+d.^2) +1j*(b.*c - a.*d)./(c.^2+d.^2) - 1
ans =
0 0 0
So I'm not sure what side effect you're referring to.
"Implementing the complex division exactly as on that MathWorld page gets the expected answer:"
Sure, but I was not making any comment on that specific algorithm: it is quite possible that the actual implementation (in MATLAB, BLAS, or whatever) is something algebraically equivalent but that uses fewer/faster/more stable operations, and hence accumulates different floating-point error from what you tried.
My point was simply that complex division is not a single atomic numeric operation, so whatever algorithm is used to implement it could quite conceivably accumulate some floating-point error: the authors of numeric libraries must always trade off speed vs. memory vs. accumulated error vs. stability vs. whatever else.


Answers (1)

It depends on what algorithm is used to perform the complex division x./y. For example, if it is implemented as x./abs(y).^2.*conj(y), then floating-point errors will be introduced by these operations, even when x=y:
>> x./abs(x).^2.*conj(x) - 1
ans =
1.0e-15 *
0 0 0.2220
If it had instead been done by conversion to complex exponentials, I don't think we would have seen residual errors:
>> [r,t]=deal(abs(x),angle(x));
>> (r./r).*exp(1i*(t-t))-1
ans =
0 0 0

10 Comments

Regarding your first option, I thought that maybe the result isn't zero because of the order of operations, but that's not the case:
>> e1 = x./abs(x).^2.*conj(x) - 1
e1 =
0 0 2.220446049250313e-16
>> e2 = (x.*conj(x))./abs(x).^2 -1
e2 =
0 0 2.220446049250313e-16
>> isequal(e1,e2)
ans =
logical
1
On the other hand, this appears to work just fine (and if it didn't I'd be really shocked):
>> e3 = (x.*conj(x))./(x.*conj(x)) -1
e3 =
0 0 0
But whatever rdivide is doing, it's not doing anything suggested so far in this answer thread. So now I'm curious as to what it's really doing, and whether what it's doing is supposed to be the most efficient or the most accurate in general in some sense, even if it's not exactly accurate in special cases like these.
Well, one thing I'm pretty sure it's not doing is pre-conversion to complex exponentials. That would necessitate transcendental functions like sine and cosine, which are less efficient.
Here is code based on a published algorithm that yields exactly the same result as x./x:
w=0:(pi/100):pi;
w = w([17 66 93]);
x = exp(1i*w);
z1 = x./x;
z2 = nan*z1;
a=real(x);b=imag(x);c=real(x);d=imag(x);
ii = abs(c) >= abs(d);
r = d./c;
den = c + r.*d;
e = (a + b.*r)./den;
f = (b - a.*r)./den;
z2(ii) = e(ii) + 1i*f(ii);
r = c./d;
den = d + r.*c;
e = (a.*r + b)./den;
f = (b.*r - a)./den;
z2(~ii) = e(~ii) + 1i*f(~ii);
isequal(z1,z2)
ans =
logical
1
But I noticed something interesting. Reverse the definition of ii
ii = abs(c) < d;
%% run the rest of the code here
>> z2
z2 =
1 1 1
I'm not in any way suggesting that the published algorithm or Matlab's implementation is incorrect. But it's an interesting result nonetheless.
That is interesting, indeed. You should probably submit it as its own Answer.
The whole scaling business with r is to keep intermediate results from overflowing or underflowing. E.g.,
>> x = realmax + realmax*1i
x =
1.7977e+308 +1.7977e+308i
>> x./x
ans =
1
>> a=real(x); b=imag(x); c=real(x); d=imag(x);
>> (a.*c+b.*d)./(c.^2+d.^2) +1j*(b.*c - a.*d)./(c.^2+d.^2) - 1 % Overflows
ans =
NaN + NaNi
>>
>> x = realmin + realmin*1i
x =
2.2251e-308 +2.2251e-308i
>> x./x
ans =
1
>> a=real(x); b=imag(x); c=real(x); d=imag(x);
>> (a.*c+b.*d)./(c.^2+d.^2) +1j*(b.*c - a.*d)./(c.^2+d.^2) - 1 % Underflows
ans =
NaN
I would think most implementations of complex divide would do some form of scaling, either always or conditionally, to avoid the overflow and underflow.
Sure enough. Do you think that there could ever be a case that such scaling could compromise accuracy too much for a situation where underflow or overflow would not occur using a "naive" algorithm? I realize that's a subjective question, just curious if you have any thoughts on the matter.
Any floating point operation result needs to be taken with a grain of salt with regards to the last couple of trailing mantissa bits, so a good numeric algorithm would account for that. Another place you might get bit is getting a "slightly" complex result when you were expecting real, or vice-versa, and getting errors downstream because of that. But again, good algorithm design should check for that.
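A small Python illustration of that "slightly complex" pitfall (again unrelated to MATLAB internals): a value that is mathematically real can come back with a stray imaginary part on the order of eps, so downstream code should tolerate or strip it rather than test for exact equality.

```python
import cmath
import math

z = cmath.exp(1j * math.pi)   # mathematically exactly -1, a real number
print(z.imag)                 # not 0: about 1.22e-16, because pi itself is rounded

# A downstream real-only computation should tolerate (or strip) the residue;
# the 1e-12 relative tolerance here is an arbitrary illustrative choice.
z_real = z.real if abs(z.imag) < 1e-12 * abs(z) else None
```

Comparing z == -1 directly would fail even though the result is "real" for every practical purpose.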
Paul on 16 Jun 2020
Edited: Paul on 16 Jun 2020
I don’t know much, or anything really, about language standards, but in the little bit of reading I did about the FORTRAN standard I didn’t see any requirement that complex division must yield 1 for x/x. Does any other language have that requirement in a standard? At the risk of asking a stupid question, do you know if any language has a requirement that x/x must equal 1 for non-complex types? Or is that a requirement of the IEEE 754 standard?
These are not stupid questions IMO.
You would need to look at the IEEE floating-point standard for stuff like x/x = 1. I think IEEE requires that the result be correct as if the calculation were done in infinite precision and then rounded according to the rounding rule in effect (typically "round to nearest, ties to even"), so x/x = 1 would be required to be true for all possible finite, nonzero real floating-point bit patterns. It would not be hard to write a MATLAB program to test every possible single and double bit pattern to see if it is true. (But I would be shocked to find that any such bit pattern failed this test ... this is implemented at the chip level.)
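A sketch of that test (in Python here, sampling random bit patterns rather than enumerating all of them): under correctly rounded division, the exact quotient x/x is 1, which is representable, so it must round to exactly 1.0 for every finite nonzero value, subnormals included.

```python
import math
import random
import struct

random.seed(0)
checked = 0
for _ in range(100_000):
    bits = random.getrandbits(64)
    # Reinterpret the 64 raw bits as an IEEE 754 double
    (x,) = struct.unpack('<d', struct.pack('<Q', bits))
    if x == 0.0 or math.isnan(x) or math.isinf(x):
        continue          # 0/0, inf/inf, and nan/nan are NaN, not 1
    assert x / x == 1.0   # exact quotient is 1, which is representable
    checked += 1
print(checked, "finite nonzero doubles all satisfied x/x == 1")
```

The exhaustive version is feasible for single precision (2^32 patterns) but not for double; sampling plus the argument from correct rounding covers the rest.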
For language standards regarding complex arithmetic, I am unaware of any particular requirements (e.g., z/z = 1 or avoiding overflow & underflow when possible). I think that Java has some floating point replicability requirements in their standards but I don't know the details.
Thank you both for the discussion. I did some research and learned about three algorithms for complex division, among other things. I'll probably never have to code them myself, but still good to know they exist.


Release: R2019a

Asked on 14 Jun 2020
Commented on 16 Jun 2020
