Rounding error in orthogonality test

Ellen on 28 Jan 2015
Edited: John D'Errico on 29 Jan 2015
I was trying to do a simple orthogonality test of 5 matrices using the two rules of orthogonality. The following is the code I created, but a rounding error seems to be occurring and I'm not sure how to stop it. I know R1 and R3 should be orthogonal, but MATLAB tells me that none of them are. To me it looks like a rounding error, but I'm not sure how to get MATLAB to stop rounding individual values in my matrices.
Any ideas? Thank you!
clear all
R1=[0.2500 -0.0580 0.9665; 0.4330 0.8995 -0.0580; -0.8660000 0.4330 0.2500];
R2=[0.8214 0.2212 0.5257; 0.3830 0.8969 0.2212; -0.4226 0.3830 0.821];
R3=[0.9330 -0.1853 0.3085; 0.2500 0.9504 -0.1853; -0.2588 0.2500 0.9330];
R4=[0.1830 -0.8856 -0.4268; 0.6830 0.4268 0.5928; -0.7071 0.1830 0.6830];
R5=[0.0000 -0.9848 0.1736; -0.0000 -0.1736 -0.9848; -1.0000 0.0000 0.0000];
A=R1^-1;
B=transpose(R1);
r1=det(R1);
if A==B & r1==1
fprintf('R1 is orthogonal\n')
else fprintf('R1 is not orthogonal\n')
end
%%%%%%%%%%%%%%%%
C=inv(R2);
D=transpose(R2);
r2=det(R2);
if C==D & r2==1
fprintf('R2 is orthogonal\n')
else fprintf('R2 is not orthogonal\n')
end
%%%%%%%%%%%%%%%%%%%
E=inv(R3);
F=transpose(R3);
r3=det(R3);
if E==F & r3==1
fprintf('R3 is orthogonal\n')
else fprintf('R3 is not orthogonal\n')
end
%%%%%%%%%%%%%%%%%%%%%%
G=inv(R4);
H=transpose(R4);
r4=det(R4);
if G==H & r4==1
fprintf('R4 is orthogonal\n')
else fprintf('R4 is not orthogonal\n')
end
%%%%%%%%%%%%%%%%%%
I=inv(R5);
J=transpose(R5);
r5=det(R5);
if I==J & r5==1
fprintf('R5 is orthogonal\n')
else fprintf('R5 is not orthogonal\n')
end

Answers (1)

John D'Errico on 28 Jan 2015
Edited: John D'Errico on 28 Jan 2015
I once made the claim that det should be modified. With EVERY use of the function, a dialog box should pop up, asking "Do you REALLY want or need to use the determinant here?" The dialog box would have one button only on it: "NO".
Or maybe the function should be completely disabled from ever running, UNLESS the user has passed a test certifying they know and understand enough about numerical analysis and numerical methods on a computer. Of course, the person who would pass the test as I would write it would also know enough to avoid using det in the first place.
Ok, so both of those alternatives might be a tad extreme. :)
As students, we were taught to use the determinant by our teachers. I was! They learned about it from their teachers, etc. And nobody ever told them not to use it, so why should they do any different? Worse, your textbook tells you that a singular matrix will have a determinant of zero. Of course it was written by the same people who tell you to use that tool. We end up with this entire cadre of people writing books, teaching classes, writing papers, creating new people who will do the same. All of those people not understanding that a determinant is almost always the wrong thing to compute.
The first issue is that people try to compute the determinant the wrong way, by recursive cofactor expansion. This is amazingly inefficient (the cost grows factorially with the matrix size), so even determinants of 10x10 matrices take a huge amount of time to compute. In fact, there are schemes using an LU decomposition that allow efficient computation of the determinant, or you could use the product of the eigenvalues. The MATLAB det function uses the LU scheme, so at least it is computed efficiently, as you should expect.
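For instance, here is a minimal sketch of the LU route (the example matrix is my own arbitrary choice; lu returns P*A = L*U with a unit-diagonal L, so the determinant is just the product of the pivots, up to the sign of the permutation):

```matlab
% Sketch: computing a determinant from an LU factorization, as det does.
A = [2 1 1; 4 -6 0; -2 7 2];   % arbitrary example matrix, det(A) = -16
[L,U,P] = lu(A);
d = det(P) * prod(diag(U));    % det(P) is +1 or -1, from the row swaps
abs(d - det(A))                % agrees with det to rounding error
```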
Next, det is quite sensitive to scaling. For example, consider a 1000x1000 identity matrix. Clearly its determinant is 1, and the matrix is non-singular.
A = eye(1000);
det(A)
ans =
1
B = A*.5;
det(B)
ans =
9.33263618503219e-302
B = A*.4;
det(B)
ans =
0
So simply scaling A by a constant is sufficient to make the determinant underflow to zero, despite the fact that all of these matrices were non-singular and VERY well behaved. Had I wanted to see an overflow to inf, that would have been as simple to do.
B = A*2.5;
det(B)
ans =
Inf
The point is, det is a poor measure of singularity. For any tolerance you will provide, I can trivially provide a clearly non-singular matrix that will cause det to conclude wrongly about the singularity status of that matrix. For example...
A = rand(6,5);
A = [A,A*rand(5,1)];
det(A)
ans =
4.66661953785783e-17
B = A*1000;
det(B)
ans =
19.1144736270657
The matrix A has rank 5. That should be abundantly clear from how it was constructed. det(A) suggests that A might be singular, although it was not exactly zero. Floating point rounding means the computed determinant of a genuinely singular matrix will rarely be exactly zero, even though it is zero in exact arithmetic.
But simply by scaling A by 1000, we might conclude that B is not singular. Or is it?
B = A*900;
det(B)
ans =
0
Purely by chance I chose a different value to scale by, and see how unstable the determinant can be!
As you can see, det is just bad to the bone. Avoid it like you would avoid the plague.
And there are good alternatives to det as a test for singularity. The simplest one is rank. It uses the singular values of your matrix to estimate the numerical rank of the matrix. If rank(A) is less than the smaller of the number of rows or columns of A, then it is singular.
rank(A)
ans =
5
rank(B)
ans =
5
Some prefer a different measure, like cond. If cond is large, on the order of 1/eps, then the matrix is numerically singular.
cond(A)
ans =
2.05284970911659e+16
cond(B)
ans =
2.80663046168285e+16
As you can see, cond rather stably predicts both matrices are singular, although numerical trash changed the exact value slightly.
You might also use the singular values of the matrix itself. Of course then you need to understand what they tell you. More information provided, more understanding needed.
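For example, here is a quick look at the singular values of a rank-5 matrix built the same way as A above (with a fixed random seed of my own choosing, so this is just an illustrative sketch):

```matlab
% A 6x6 matrix of rank 5: the last column is a combination of the others.
rng(0);                   % fixed seed so the sketch is reproducible
A = rand(6,5);
A = [A, A*rand(5,1)];
s = svd(A);
% Five singular values are O(1); the sixth sits at rounding-error level.
s(end)/s(1)               % tiny ratio reveals the rank deficiency
```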
  3 Comments
John D'Errico on 28 Jan 2015
Edited: John D'Errico on 28 Jan 2015
There was no condescension. I spent a lot of time explaining why you are doing the wrong thing, and I suggested better ways to do what you were trying to do. The humor at the beginning was not meant as an insult in any way; it was just humor. I even pointed out that I learned the same things from my teachers. These were facts I had to unlearn over time.
You asked how to stop the rounding errors. You cannot do so. Numerical errors are an important fact of using floating point arithmetic. That you do not appreciate this fact suggests you have something to learn about numerical analysis and numerical methods as implemented in floating point arithmetic on a computer. That you are using det to compute what you did suggests you yourself do not understand why it is a bad thing to do.
Sadly these are things that are often not taught well in classes. As I point out, that is partly because those classes are taught by people who do not appreciate these facts themselves. My hope is to short-circuit the passage of this misinformation from one person to the next, at least for the few people who might read this response.
John D'Errico on 29 Jan 2015
Edited: John D'Errico on 29 Jan 2015
Let me repeat. As for how to avoid roundoff in floating point arithmetic, the fact is, you cannot do so. While you may think that those numbers were entered exactly, they were not. I'll pick (semi-randomly) one of the numbers that you entered: 0.8969.
How is that number stored on the computer? Computers do not use decimal digits to store that number. They store it using 52 binary bits in the mantissa of a number. In fact, the number was stored as...
0.8969000000000000305533376376843079924583435058593750000
This happens because binary fractions like 2^-52 are not nice, simple decimal numbers. In fact, the exact representation of the number 2^-52 is:
0.0000000000000002220446049250313080847263336181640625
As I said, all floating point numbers are stored in a binary form, and most decimal numbers with a fractional part cannot be represented exactly in that form. (As it turns out, numbers like 0.2500 are representable exactly.) Such exactly representable values are the rare case, not the norm.
So when you store those 4 digit numbers in MATLAB (or in any floating point environment that uses the IEEE format to store numbers) they are not stored exactly. Furthermore, any arithmetic done with those numbers will incur additional arithmetic losses.
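If you want to see for yourself what is actually stored, MATLAB can show you (a small sketch):

```matlab
% Print the decimal value that is actually stored for 0.8969 ...
fprintf('%.55f\n', 0.8969)
% ... or inspect the raw IEEE 754 bit pattern as 16 hex digits:
num2hex(0.8969)
```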
In exact decimal arithmetic, the following holds true:
0.112233445566789 * 0.998877665544321 = 0.112107482103749819192045155269
But remember that a double precision number, stored in the IEEE 754 format, can hold only a finite number of binary bits. That amounts to approximately 16 correct decimal digits, and as you see, the exact product contains more than 16 digits. So the product, as it is actually stored, will be:
0.112107482103749822588412143886671401560306549072265625
The two numbers start to deviate after the 16th correct decimal digit. Nothing you can do will solve the problem, short of working in a tool that handles the arithmetic exactly. For example, if you are careful about how the numbers are stored, you could employ the Symbolic Math Toolbox. But any floating point storage form that uses a finite number of digits will fail to produce exact results.
In the end, what you need to learn is how to use a tolerance in any test of floating point numbers. Never do the test
0.3 == (0.2 + 0.1)
ans =
0
because most of the time a test like this will produce a false result, even though we know it to be a mathematically true statement. Computers do not do mathematics when they do floating point arithmetic; they give you approximations to the result you expect.
tol = 1.e-15;
abs(0.3 - (0.2 + 0.1)) < tol
ans =
1
So if the absolute difference between the numbers is sufficiently small, you conclude that they are equal.
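Applied to the original question, a tolerance-based orthogonality test might look like the sketch below (the tolerance 1e-3 is my own loose choice, since the matrix entries were given to only 4 decimal places):

```matlab
R1 = [0.2500 -0.0580 0.9665; 0.4330 0.8995 -0.0580; -0.8660 0.4330 0.2500];
tol = 1e-3;          % loose tolerance; entries are rounded to 4 decimals
% R is orthogonal when R'*R is the identity, i.e., inv(R) equals R'.
if norm(R1'*R1 - eye(3)) < tol && abs(det(R1) - 1) < tol
    fprintf('R1 is orthogonal (to within tol)\n')
else
    fprintf('R1 is not orthogonal\n')
end
```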

