
Thread Subject:
funny problem about the precision of matrix transpose and inverse.

Subject: funny problem about the precision of matrix transpose and inverse.

From: Ha

Date: 29 Oct, 2012 20:33:08

Message: 1 of 9

>> A=[
-3.40411799,-0.37565809,7.34175377,-1.17789516
-3.87102015,-0.13046468,8.34873435,-1.33945295
-3.87102015,0.13046468,8.34873435,-1.33945295
-3.40411799,0.37565809,7.34175377,-1.17789516
-2.52662896,0.57554158,5.44924934,-0.87426582
-1.34439119,0.70600626,2.89948501,-0.46518713
0.00000000,0.75131617,-0.00000000,0.00000000
1.34439119,0.70600626,-2.89948501,0.46518713
2.52662896,0.57554158,-5.44924934,0.87426582
3.40411799,0.37565809,-7.34175377,1.17789516
3.87102015,0.13046468,-8.34873435,1.33945295
3.87102015,-0.13046468,-8.34873435,1.33945295
3.40411799,-0.37565809,-7.34175377,1.17789516
2.52662896,-0.57554158,-5.44924934,0.87426582
1.34439119,-0.70600626,-2.89948501,0.46518713
-0.00000000,-0.75131617,0.00000000,-0.00000000
-1.34439119,-0.70600626,2.89948501,-0.46518713
-2.52662896,-0.57554158,5.44924934,-0.87426582
-3.40411799,-0.37565809,7.34175377,-1.17789516
];

>>inv(A'*A)

ans =

  1.0e+014 *

    0.2346 0.0000 -0.1170 -1.4075
    0.0000 0.0000 -0.0000 -0.0000
   -0.1170 -0.0000 -0.0141 0.2507
   -1.4075 -0.0000 0.2507 5.6301

>> T=A'; inv(T*A)

ans =

  1.0e+014 *

   -0.1759 0.0000 -0.0816 -0.0000
    0.0000 0.0000 -0.0000 -0.0000
   -0.0816 -0.0000 0.0346 0.4516
   -0.0000 -0.0000 0.4516 2.8149


Look, it's funny that the two results are different. How can that be?

Then, I tried

>> A'*A-T*A

ans =

  1.0e-013 *

         0 0 -0.5684 -0.0711
         0 0.0089 0 0
   -0.5684 0 0 -0.1421
   -0.0711 0 -0.1421 0

It's not equal to zero.
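
For reference, a minimal way to quantify what the session above shows (a sketch, not part of the original post; it assumes A and T are defined exactly as in the transcript):

% Sketch: measure the discrepancy between the two products and how
% ill-conditioned they are (A and T as defined above).
d = norm(A'*A - T*A, 'fro')   % tiny: on the order of 1e-13, as shown above
k = cond(A'*A)                % enormous, so inv() magnifies that tiny
                              % discrepancy into wildly different answers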

Subject: funny problem about the precision of matrix transpose and inverse.

From: Roger Stafford

Date: 29 Oct, 2012 21:33:08

Message: 2 of 9

"Ha" wrote in message <k6mp64$9uj$1@newscl01ah.mathworks.com>...
> >>inv(A'*A)
> >> T=A'; inv(T*A)
> Look, it's funny that the two results are different. How can that be?
>
> Then, I tried
> >> A'*A-T*A
> not equals to zero.
- - - - - - - - - - -
  I will make a guess here. When the MATLAB compiler sees the expression A'*A, it realizes that the result must be Hermitian and, for better computational efficiency, calls a different routine than it does with T*A. Due to the different algorithm used, the roundoff errors will differ, hence the very small differences you observed after subtraction. Since both products are nearly singular, their two inverses would likely be quite large, as you observed in this case.

Roger Stafford

Subject: funny problem about the precision of matrix transpose and inverse.

From: James Tursa

Date: 29 Oct, 2012 22:28:08

Message: 3 of 9

"Roger Stafford" wrote in message <k6msmk$l8l$1@newscl01ah.mathworks.com>...
> "Ha" wrote in message <k6mp64$9uj$1@newscl01ah.mathworks.com>...
> > >>inv(A'*A)
> > >> T=A'; inv(T*A)
> > Look, it's funny that the two results are different. How can that be?
> >
> > Then, I tried
> > >> A'*A-T*A
> > not equals to zero.
> - - - - - - - - - - -
> I will make a guess here. When the matlab compiler sees the expression A'*A it realizes that the result must be Hermitian and for better computation efficiency calls on a different routine than with T*A. Due to the different algorithm used the roundoff errors will differ, hence the very small differences you observed after subtraction. Since both results are nearly singular their two inverses would likely be quite large as you observed in this case.
>
> Roger Stafford

Correct. For the expression A'*A, MATLAB will call a symmetric BLAS matrix multiply routine, whereas the expressions T = A' and subsequently T*A will end up calling the generic BLAS matrix multiply routine. The results will be close but not exactly the same.

James Tursa
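
A minimal way to observe the two code paths James describes (a sketch, not from the original thread; it assumes A from the first message, and the exact dispatch is an implementation detail that may vary between MATLAB versions):

% Sketch: compare the symmetric and generic multiply paths.
B1 = A'*A;               % MATLAB recognizes the X'*X pattern -> symmetric multiply
T  = A';
B2 = T*A;                % same math, but dispatched to the generic multiply

isequal(B1, B1')         % true: the symmetric path returns an exactly symmetric matrix
norm(B1 - B2, 'fro')     % nonzero (around 1e-13 here): the two paths round differently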

Subject: funny problem about the precision of matrix transpose and inverse.

From: Steven_Lord

Date: 30 Oct, 2012 14:04:14

Message: 4 of 9



"James Tursa" <aclassyguy_with_a_k_not_a_c@hotmail.com> wrote in message
news:k6mvto$2bk$1@newscl01ah.mathworks.com...
> "Roger Stafford" wrote in message
> <k6msmk$l8l$1@newscl01ah.mathworks.com>...
>> "Ha" wrote in message <k6mp64$9uj$1@newscl01ah.mathworks.com>...
>> > >>inv(A'*A)
>> > >> T=A'; inv(T*A)
>> > Look, it's funny that the two results are different. How can that be?
>> >
>> > Then, I tried
>> > >> A'*A-T*A
>> > not equals to zero.
>> - - - - - - - - - - -
>> I will make a guess here. When the matlab compiler sees the expression
>> A'*A it realizes that the result must be Hermitian and for better
>> computation efficiency calls on a different routine than with T*A. Due
>> to the different algorithm used the roundoff errors will differ, hence
>> the very small differences you observed after subtraction. Since both
>> results are nearly singular their two inverses would likely be quite
>> large as you observed in this case.
>>
>> Roger Stafford
>
> Correct. For the expression A'*A, MATLAB will call a symmetric BLAS matrix
> multiply routine, whereas the expressions T = A' and subsequently T*A will
> end up calling the generic BLAS matrix multiply routine. The results will
> be close but not exactly the same.

As an additional note for the original poster, if you're trying to solve a
system of linear equations do NOT repeat NOT construct the normal equations
and invert.

http://en.wikipedia.org/wiki/Linear_least_squares_%28mathematics%29#Computation

Use the backslash operator instead.

http://www.mathworks.com/help/matlab/ref/mldivide.html

--
Steve Lord
slord@mathworks.com
To contact Technical Support use the Contact Us link on
http://www.mathworks.com
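
Applied to the original problem, Steve's advice looks roughly like this (a sketch, not from the original post; b stands in for the right-hand side of the least-squares system, which was not given in the thread):

% Sketch: least-squares solution of A*m = b (b is a placeholder).
m_bad  = inv(A'*A) * (A'*b);   % avoid: forming A'*A squares the condition
                               % number of A, and inv() is numerically unstable
m_good = A \ b;                % preferred: backslash solves the rectangular
                               % system directly with a QR-based least-squares solver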

Subject: funny problem about the precision of matrix transpose and inverse.

From: Ha

Date: 31 Oct, 2012 04:15:09

Message: 5 of 9

"Steven_Lord" <slord@mathworks.com> wrote in message <k6omou$54f$1@newscl01ah.mathworks.com>...
>
> As an additional note for the original poster, if you're trying to solve a
> system of linear equations do NOT repeat NOT construct the normal equations
> and invert.
>
> http://en.wikipedia.org/wiki/Linear_least_squares_%28mathematics%29#Computation
>
> Use the backslash operator instead.
>
> http://www.mathworks.com/help/matlab/ref/mldivide.html
>

Yeah, I used the backslash in my code.

I'm linearizing a nonlinear inversion problem and solving A*m=b in a least-squares sense, where A is the Jacobian matrix.

Backslash seems to provide no help. I still get a warning that the matrix is badly scaled or nearly singular, and sometimes it produces NaNs.
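
One way to see why backslash still complains (a sketch, not from the original post; A here is the Jacobian of the linearized step):

% Sketch: diagnose the Jacobian before solving.
s = svd(A);        % singular values of the Jacobian
r = rank(A)        % effective rank; less than size(A,2) means rank deficiency
k = s(1)/s(end)    % condition number; a huge value means the system is nearly singular
% A warning from A\b, and the occasional NaN, usually mean that some columns
% (model parameters) are nearly linearly dependent or differ wildly in scale.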

Subject: funny problem about the precision of matrix transpose and inverse.

From: Roger Stafford

Date: 31 Oct, 2012 05:46:08

Message: 6 of 9

"Ha" wrote in message <k6q8kc$8i4$1@newscl01ah.mathworks.com>...
> Yeah, I used the backslash in my code.
>
> I'm linearizing a nonlinear inversion problem and solving A*m=b in a least squares sense, where A is the jacobian matrix.
>
> Backslash seems provide no help. I still get a warning that the matrix is badly scaled or nearly singular and sometimes it gives rise to NaNs.
- - - - - - - - - -
  If we could enjoy infinite computational precision, solving any system of equations

 A*m=b

would theoretically always give us a precise solution for 'm', provided A were nonsingular, that is, provided that its determinant were nonzero. However, if A is very close to singularity, then even in such a hypothetical situation the solutions would very likely vary grossly in the presence of very small variations in any of the 'b' or 'A' quantities. Roundoff errors in computation play a similar role to such small variations, so the "funny" results you have described, though due to precision limits in computation, are actually a sign of a very "unstable" set of linear equations. A tiny variation in any of the above values could lead to the seemingly anonymous behavior you have seen. Rather than placing the blame on the computer or the algorithm it uses, if I were you I would begin to question the validity of your set of equations as representing a realistic set of numerical relationships. Perhaps they need some considerable modification.

Roger Stafford
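
Roger's point about instability can be checked directly (a sketch, not from the original post; b is again a placeholder right-hand side and the perturbation size is arbitrary):

% Sketch: how sensitive is the solution to a tiny change in A?
dA = 1e-12 * randn(size(A));            % perturbation far below the data's precision
m1 = A \ b;
m2 = (A + dA) \ b;
rel_change = norm(m2 - m1) / norm(m1)   % large for a nearly singular system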

Subject: funny problem about the precision of matrix transpose and inverse.

From: Bruno Luong

Date: 31 Oct, 2012 07:18:12

Message: 7 of 9

"Ha" wrote in message <k6q8kc$8i4$1@newscl01ah.mathworks.com>...

>
> Backslash seems provide no help. I still get a warning that the matrix is badly scaled or nearly singular and sometimes it gives rise to NaNs.

Then your system is not invertible, or nearly so. Often this is because you forgot to add appropriate constraints or boundary conditions to make the problem well-posed. It might also be badly scaled, as the warning message suggests.

You could try to use pinv(), but the problem might lie well before the system was formed.

Bruno
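
If the near-singularity really is inherent to the problem, a pseudoinverse, as Bruno suggests, or damped least squares (a standard alternative not mentioned in the thread) at least gives a controlled answer (a sketch; the tolerance, damping value, and b are placeholders):

% Sketch: two common fallbacks for a nearly rank-deficient A.
tol    = 1e-8 * norm(A);               % placeholder tolerance for pinv
m_pinv = pinv(A, tol) * b;             % minimum-norm least-squares solution

lambda = 1e-6;                         % placeholder damping (Tikhonov / Levenberg-Marquardt style)
n      = size(A, 2);
m_damp = [A; sqrt(lambda)*eye(n)] \ [b; zeros(n, 1)];   % damped least squares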

Subject: funny problem about the precision of matrix transpose and inverse.

From: Ha

Date: 31 Oct, 2012 07:23:09

Message: 8 of 9

"Roger Stafford" wrote in message <k6qdv0$pl4$1@newscl01ah.mathworks.com>...

> If we could enjoy infinite computational precision, the solution to any system of equations
>
> A*m=b
>
> would theoretically always give us a precise solution for 'm' provided A were nonsingular, that is, provided that its determinant were nonzero. However if A is very close to such singularity, then even in such a hypothetical situation the solutions would very likely vary grossly in the presence of very small variations in any of the 'b' or 'A' quantities. Roundoff errors in computation play a similar role to such small variations, so the "funny" results you have described, though due to precision limits in computation, are actually a sign of a very "unstable" set of linear equations. A tiny variation in any of the above values could lead to the seemingly anonymous behavior you have seen. Rather than placing the blame on the computer or the algorithm it uses, if I were you I would begin to question the validity of your set of equations as representing a realistic set of numerical relationships. Perhaps they need some considerable modification.
>
> Roger Stafford

Thanks, Roger. Nice interpretation! In practice, I suspect that the quasi-singularity of the matrix arises from the nature of my inversion problem, most likely from the trade-off among model parameters.

Thanks to all you guys above.
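
The trade-off among model parameters that Ha mentions can be made visible with an SVD of the Jacobian (a sketch, not from the original thread; the threshold is an arbitrary placeholder):

% Sketch: which combinations of model parameters are poorly constrained?
[~, S, V] = svd(A, 'econ');
s = diag(S)                    % small singular values = weakly constrained directions
V(:, s < 1e-8 * s(1))          % columns of V for those small singular values are the
                               % parameter combinations that trade off against each other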

Subject: funny problem about the precision of matrix transpose and inverse.

From: Roger Stafford

Date: 31 Oct, 2012 18:19:08

Message: 9 of 9

"Roger Stafford" wrote in message <k6qdv0$pl4$1@newscl01ah.mathworks.com>...
> .... A tiny variation in any of the above values could lead to the seemingly anonymous behavior you have seen. .....
- - - - - - - - - -
  Oops, my age is showing! I meant to say "seemingly anomalous behavior".

Roger Stafford
