
Thread Subject:
Inverse of a Matrix

Subject: Inverse of a Matrix

From: Ravi

Date: 2 Oct, 2007 04:15:40

Message: 1 of 12

Are there any methods available for finding an approximate
inverse of a nearly singular matrix using MATLAB? Is there
a way to avoid this?

Subject: Inverse of a Matrix

From: Bjorn Gustavsson

Date: 2 Oct, 2007 08:06:41

Message: 2 of 12

"Ravi " <vioravis.nospam@gmail.com> wrote in message
<fdsglc$6b7$1@fred.mathworks.com>...
> Are there any methods available for finding approximate
> inverse for nearly singular matrices using MATLAB? Is there
> a way to avoid this?
>
You could try the SVD approach. Then you can apply damping and
truncation directly to the singular values.
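
To make the suggestion concrete, here is a sketch of both ideas in NumPy (the thread is MATLAB, but the mechanics carry over; the function names and tolerance values below are illustrative, not from the thread):

```python
import numpy as np

def truncated_pinv(A, tol=1e-8):
    """Approximate inverse of a square matrix that drops singular values below tol."""
    U, s, Vt = np.linalg.svd(A)
    s_inv = np.zeros_like(s)
    keep = s > tol
    s_inv[keep] = 1.0 / s[keep]          # invert only the well-conditioned part
    return Vt.T @ np.diag(s_inv) @ U.T

def damped_pinv(A, lam=1e-6):
    """Tikhonov-style damping: replace 1/s with s / (s**2 + lam**2)."""
    U, s, Vt = np.linalg.svd(A)
    return Vt.T @ np.diag(s / (s**2 + lam**2)) @ U.T

A = np.ones((2, 2))                      # exactly singular: singular values [2, 0]
B = truncated_pinv(A)                    # every entry 0.25, matching pinv(A)
```

Truncation behaves like pinv does for a singular matrix; damping instead keeps every direction but bounds how large the approximate inverse can grow.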

HTH,
Bjoern

Subject: Inverse of a Matrix

From: Duane Hanselman

Date: 2 Oct, 2007 09:09:20

Message: 3 of 12

"Ravi " <vioravis.nospam@gmail.com> wrote in message
<fdsglc$6b7$1@fred.mathworks.com>...
> Are there any methods available for finding approximate
> inverse for nearly singular matrices using MATLAB? Is there
> a way to avoid this?

What do you need the inverse of a nearly singular matrix
for? You cannot expect to get a good inverse from a bad
matrix. The condition number of the inverse will not
miraculously be better than the original matrix. Garbage in,
garbage out applies in this case. There is no way around it.
All you can expect from MATLAB is an inverse whose
properties are not significantly worse than the original
matrix. There are very few reasons for ever computing an
inverse.

Subject: Inverse of a Matrix

From: John D'Errico

Date: 2 Oct, 2007 09:40:50

Message: 4 of 12

"Ravi " <vioravis.nospam@gmail.com> wrote in message <fdsglc$6b7
$1@fred.mathworks.com>...
> Are there any methods available for finding approximate
> inverse for nearly singular matrices using MATLAB? Is there
> a way to avoid this?

As Duane points out, why do you want the inverse at all?

If you are using it to solve a system of equations, then
it's a very poor choice.

A pseudo-inverse, as provided by pinv, will be a good
choice however. It uses the SVD to survive the singularity,
and your result will be as stable as is numerically possible.
For example:

A = ones(2);
A(1,1) = 1+eps;


inv(A)
Warning: Matrix is close to singular or badly scaled.
         Results may be inaccurate. RCOND = 5.551115e-17.
ans =
   4.5036e+15 -4.5036e+15
  -4.5036e+15 4.5036e+15


pinv(A)
ans =
         0.25 0.25
         0.25 0.25


A*pinv(A)
ans =
          0.5 0.5
          0.5 0.5


A*inv(A)
Warning: Matrix is close to singular or badly scaled.
         Results may be inaccurate. RCOND = 5.551115e-17.
ans =
     1 0
     0 1


However, perturb A only by a tiny bit and see how
much the inverse changes.

B = A;
B(2,2) = 1-eps;

inv(B)
Warning: Matrix is singular to working precision.
ans =
   Inf Inf
   Inf Inf


Or instead, try this:


B*inv(A)
Warning: Matrix is close to singular or badly scaled.
         Results may be inaccurate. RCOND = 5.551115e-17.
ans =
            1 0
            1 0
 

Whereas pinv(B) is still the same stable result.


pinv(B)
ans =
         0.25 0.25
         0.25 0.25

HTH,
John

Subject: Inverse of a Matrix

From: Ravi

Date: 2 Oct, 2007 19:54:47

Message: 5 of 12

John and others,

Thanks for your replies. I am not in a position to avoid
using matrix inversion. pinv really seems to give some
erratic answers for some of the test cases I have. I would
like to know if there is a method to calculate the inverse using
Cholesky decomposition and, if so, whether it would be better
than the inv operation in MATLAB?




Subject: Inverse of a Matrix

From: roberson@ibd.nrc-cnrc.gc.ca (Walter Roberson)

Date: 2 Oct, 2007 20:52:31

Message: 6 of 12

In article <fdt1s0$l5f$1@fred.mathworks.com>,
Duane Hanselman <masteringmatlab@yahoo.spam.com> wrote:

>There are very few reasons for ever computing an
>inverse.

I've been wondering about that statement, which I have seen several
people post.

I am working with a function

Pij = g( Dij * inv(f1(m1,m2)) * Dij' + log(det(f1(m1,m2))),
         Dij * inv(f2(m1,m2)) * Dij' + log(det(f2(m1,m2))) )

Where
 Dij is ([m1;m2](i,:) - [m1;m2](j,:)) -- i.e., a difference vector
 f1(m1,m2) is a matrix function in the covariances of m1 and m2
 f2(m1,m2) is a matrix function closely related to f1

As Dij is a vector, Dij * inv(f1(m1,m2)) * Dij' is a scalar,
as is Dij * inv(f2(m1,m2)) * Dij'; g() calculates a scalar result.

Hence, P is a (symmetric) square matrix of scalar results,
each of which involves a vector-matrix-vector calculation.

I found a matrix formulation that allows me to calculate
an entire column of P at one time. However, because I need a 2D
matrix output and each element of that output involves a vector
matrix computation, in order to calculate everything in one shot,
I would need Matlab to do a 3-dimensional matrix multiplication,
which it will not do using the built-in operators.

So what then do I do if "there are very few reasons for ever
computing an inverse"? I can reformulate the terms with
Dij / f1(m1,m2) * Dij' and Dij / f2(m1,m2) * Dij'
but P can be (for example) 1545 x 1545, so if I did that
reformulation together with the column-at-a-time trick,
would I not be essentially re-doing the logical equivalent of
inverting f1(m1,m2) and f2(m1,m2) each 1545 times? Would that not
be much *much* slower than my present strategy of inverting f1(m1,m2)
and f2(m1,m2) once each?

Already, working with the 1545 x 1545 matrix is slow. When I was
proceeding an element at a time, vector-matrix-vector, the calculation
of P was taking 65 seconds each time, having pre-calculated the inverse.
The column-at-a-time formulation was barely faster (62 seconds)
until I found that allowing it to do a full matrix multiply and
then taking the diag() was noticeably faster -- and then found that
sum(Dij * inv(f1(m1,m2)) .* Dij, 2) was faster still. I now have it
down to less than 2 seconds per iteration.

Isn't it the case that if I use sum(Dij / f1(m1,m2) .* Dij, 2) for
each column that I would see a substantial slowdown in the calculation?
Is it worth it? Or is "substantial re-use of the result of the logical
division" one of the "few reasons" that is considered to justify
taking the inverse?
--
   "I was very young in those days, but I was also rather dim."
   -- Christopher Priest

Subject: Inverse of a Matrix

From: John D'Errico

Date: 2 Oct, 2007 21:17:30

Message: 7 of 12

"Ravi " <vioravis.nospam@gmail.com> wrote in message <fdu7m7$ijo
$1@fred.mathworks.com>...
> John and others,
>
> Thanks for your replies. I am not in a position to avoid
> using matrix inversion. pinv really seems to give some
> erratic answers for some of the test cases I have. I would
> like to know if there is a method to calculate the inverse using
> Cholesky decomposition and if so, whether it would be better
> than the inv operation in MATLAB?

Can you explain how you perceive pinv is giving
erratic results? An example would be useful.

A true matrix inverse should be far more erratic
than pinv. And unless your matrix is symmetric
positive definite, Cholesky is not an option.
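
To illustrate the caveat, a minimal NumPy sketch, assuming a matrix made symmetric positive definite by construction (none of the names below come from the thread):

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
A = M @ M.T + 4.0 * np.eye(4)      # symmetric positive definite by construction
b = rng.standard_normal(4)

L = np.linalg.cholesky(A)          # A = L @ L.T; raises LinAlgError if A is not SPD
y = np.linalg.solve(L, b)          # forward solve  L y = b
x = np.linalg.solve(L.T, y)        # backward solve L' x = y, so that A x = b

# An indefinite matrix has no Cholesky factor at all:
# np.linalg.cholesky(np.diag([1.0, -1.0]))  would raise LinAlgError
```

The point is that the Cholesky factor is used for two triangular solves; an explicit inverse is never formed, and the factorization itself fails fast on anything that is not SPD.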

John

Subject: Inverse of a Matrix

From: Tim Davis

Date: 3 Oct, 2007 00:51:38

Message: 8 of 12

roberson@ibd.nrc-cnrc.gc.ca (Walter Roberson) wrote in
...
> I am working with a function
>
> Pij = g( Dij * inv(f1(m1,m2)) * Dij' + log(det(f1(m1,m2))),
> Dij * inv(f2(m1,m2)) * Dij' + log(det(f2(m1,m2))) )
>
> Where
>  Dij is ([m1;m2](i,:) - [m1;m2](j,:)) -- i.e., a difference vector
>  f1(m1,m2) is a matrix function in the covariances of m1 and m2
>  f2(m1,m2) is a matrix function closely related to f1

What you want to do is factorize the matrix A = f1(m1,m2)
once and then reuse the factorization lots of times. This
isn't the case of "one of the few reasons to use the
inverse". If you're multiplying the inverse times a vector,
then that's a clear signal that you should do something
else. Never ever multiply by the inverse.

> Isn't it the case that if I use sum(Dij / f1(m1,m2) .* Dij, 2) for
> each column that I would see a substantial slowdown in the calculation?

Yes, that would be costly. But it's not the right thing to do.

> Is it worth it? Or is "substantial re-use of the result of the logical
> division" one of the "few reasons" that is considered to justify
> taking the inverse?

No, this is a good reason to factor once and solve 1545
times. I'm going to rewrite your problem to shorten the
expressions just a bit. You want to compute s = sum ((b /
A) .* b, 2) where A is used many times with different
vectors b, right?

For simplicity, and so I don't get it wrong, let me write
this as backslash, and assume b is a column vector instead:

s = sum (b'*(A\b)) ;

Let's suppose A is unsymmetric and dense. Then do this once:

[L,U,p] = lu (A,'vector') ;
lower.LT = true ;
upper.UT = true ;

and then do this many times with different b:

x = linsolve (U, linsolve (L, b (p), lower), upper) ;
s = sum (b'*x) ;

or just a one-liner:

s = sum (b' * linsolve (U, linsolve (L, b(p), lower), upper));

If that syntax looks ugly, try this (a little slower):

[L,U,p]=lu(A,'vector') ;

s = sum (b' * (U \ (L \ b (p)))) ;

If that's still ugly, get LINFACTOR on the File Exchange and
do this once:

F = linfactor (A) ;

and then do

s = sum (b' * linfactor (F,b)) ;

where linfactor.m will do the lu and linsolve's for you. It
will also do the more general thing (sparse case, and both
LU and Chol).

I wrote linfactor precisely to answer this very question.

This will be more accurate and just as fast as:

F = inv(A) ;

s = sum (b'*(F*b)) ;

which is what you're doing now.

It's important to use the "vector" option for LU; otherwise,
P*b takes a lot of time.

Here are some timings, in MATLAB 7.4:

>> A=rand(1545);
>> b=rand(1545,1);
>> tic; s = sum (b'*(A\b)); toc
Elapsed time is 1.460618 seconds.

>> tic; F=inv(A);toc
Elapsed time is 2.420236 seconds.
>> tic; s2 = sum (b'*(F*b)); toc
Elapsed time is 0.006123 seconds.

>> clear F

>> tic ; F = linfactor(A) ; toc
Elapsed time is 0.905961 seconds.
>> F

F =

       L: [1545x1545 double]
       U: [1545x1545 double]
       p: [1x1545 double]
    kind: 'dense LU: L*U = A(p,:)'
    code: 3


>> tic; s = sum (b' * linfactor(F,b)) ; toc
Elapsed time is 0.006120 seconds.

So, since linfactor is the right thing to do in terms of
accuracy, and since it's just as fast as F=inv(A) followed
by F*b, why use inv?

Note that linfactor doesn't emulate the b/A syntax (b/A is a
tiny bit slower than backslash anyway; the slash operator in
MATLAB is extremely simple; it just does (A'\b')' and uses
backslash).

Subject: Inverse of a Matrix

From: Tim Davis

Date: 3 Oct, 2007 01:17:43

Message: 9 of 12

"Ravi " <vioravis.nospam@gmail.com> wrote
...
> I am not in a position to avoid
> using matrix inversion.

Why? What do you want to do with the inverse that you
cannot avoid? I'm very curious.

Subject: Inverse of a Matrix

From: Tim Davis

Date: 3 Oct, 2007 11:12:50

Message: 10 of 12

"Tim Davis" <davis@cise.ufl.edu> wrote in message
<fduqjn$bo3$1@fred.mathworks.com>...
> "Ravi " <vioravis.nospam@gmail.com> wrote
> ...
> > I am not in a position to avoid
> > using matrix inversion.
>
> Why? What do you want to do with the inverse that you
> cannot avoid? I'm very curious.

<humor>
Are you needing to use the pseudo-inverse applied to delta
functions? I anxiously await your reply;
I'm on pinv and needles. ;-)
</humor>

Subject: Inverse of a Matrix

From: Vanesa

Date: 19 Aug, 2009 10:55:05

Message: 11 of 12

"Tim Davis" <davis@cise.ufl.edu> wrote in message <fdup2q$j8o$1@fred.mathworks.com>...

Hi

I guess I have a similar problem... I'm trying to find the positive part of a matrix, let's call it A+ of A (I don't know if that is the correct name), but it is simply the matrix with ONLY the positive eigenvalues (diagonal matrix) of A, multiplied by the right eigenvector matrix and the inverse of the right eigenvector matrix.

I don't know if my approach is correct, but what I tried to do was to simply multiply the positive eigenvalue matrix by the right eigenvector matrix. Then I applied linsolve; I get an upper triangular matrix, but the lower matrix is simply a diagonal matrix of ones, and so the system has no solution. (I get many NaN's, and the matrix really does not make much sense to me.)

I applied linfactor and get exactly the same result.

Does someone have any idea how to get A+?

I'd appreciate your help very much.

Thank you.
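
One possible reading of the question above, sketched in NumPy (a sketch under assumptions: positive_part is an invented name, the example is chosen to have real eigenvalues, and the key point is only that multiplying by inv(V) can be replaced by a solve):

```python
import numpy as np

def positive_part(A):
    """A+ = V @ diag(max(d, 0)) @ inv(V), for diagonalizable square A."""
    d, V = np.linalg.eig(A)
    d_plus = np.where(d.real > 0, d, 0.0)    # zero out non-positive eigenvalues
    # (V * d_plus) scales column j of V by d_plus[j], i.e. it is V @ diag(d_plus).
    # The trailing inv(V) is applied as a linear solve rather than an explicit
    # inverse: X = (V @ diag(d_plus)) @ inv(V)  <=>  V.T @ X.T = (V * d_plus).T
    return np.linalg.solve(V.T, (V * d_plus).T).T

A = np.diag([2.0, -3.0])
Aplus = positive_part(A)                     # expect diag(2, 0)
```

If V is nearly singular (A close to defective), the solve will produce the kind of NaN-filled nonsense described above, and no inversion trick will fix that; the construction only makes sense when the eigenvector matrix is well conditioned.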

Subject: Inverse of a Matrix

From: Airan

Date: 6 Mar, 2012 15:51:18

Message: 12 of 12

You could try the Block LDL' factorization for Hermitian indefinite matrices.
http://www.mathworks.fr/help/techdoc/ref/ldl.html
I had the same problem and it solved it...
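
The same factorization exists in SciPy, so the idea can be sketched outside MATLAB (the matrix below is an arbitrary symmetric indefinite example, not from the thread):

```python
import numpy as np
from scipy.linalg import ldl

A = np.array([[ 2.0,  1.0,  0.0],
              [ 1.0, -2.0,  1.0],
              [ 0.0,  1.0,  3.0]])   # symmetric but indefinite (det < 0)

# Block LDL' factorization: A = lu @ d @ lu.T, with lu[perm, :] triangular.
lu, d, perm = ldl(A)

# The factorization survives indefiniteness, where a Cholesky factorization
# (np.linalg.cholesky(A)) would raise an error for this matrix.
```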
