## Finding the pseudo inverse of a matrix

### Deepa Maheshvare
on 9 Dec 2018
Latest activity: edited by John D'Errico on 11 Dec 2018

I'm trying to find the inverse of the following matrix
A =
  -185.0685   185.0685         0
   185.0685  -274.3238   89.2553
          0    89.2553  -89.2553
Since A is a low-rank matrix, inv(A) doesn't work. So I tried the pseudo-inverse, pinv(A), which is computed from the SVD of A.
However, A*pinv(A) isn't equal to the identity. Are there alternative ways to find an inverse of A that will satisfy A*inverse(A) = Identity?

### Jan
on 9 Dec 2018, edited

If the matrix A does not have full rank, there is no inverse. Consequently, you cannot find any B which satisfies A*B = eye(size(A)). This follows from the definitions of rank and invertibility, and there cannot be an "alternative".
This means that the question is not meaningful as posed. It is like asking for the inverse of 0: there is none.
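This is easy to check numerically. A Python/NumPy sketch (illustrative only; the thread itself uses MATLAB), using the matrix from the question:

```python
import numpy as np

# The matrix from the question: each row sums to zero, so A is singular.
A = np.array([[-185.0685,  185.0685,    0.0   ],
              [ 185.0685, -274.3238,   89.2553],
              [   0.0,      89.2553,  -89.2553]])

print(np.linalg.matrix_rank(A))                          # 2, not 3
print(np.allclose(A @ np.linalg.pinv(A), np.eye(3)))     # False: no B gives A*B = I
```

No matter how the candidate "inverse" is computed, A*B can never equal the identity for this A.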

### Bruno Luong
on 9 Dec 2018
Of course not. Why would decomposing A change anything? You will get exactly the same result, since matrix multiplication is associative.
### Deepa Maheshvare
on 9 Dec 2018
Sorry for the silly questions.
For example, in the least squares solution
A^T A x = A^T b, the inverse of (A^T A) exists only when the columns of A are linearly independent.
In my example the matrix is not full rank, and the decomposition has the same rank as A. Then what does it mean to compute the pseudo-inverse?
### Bruno Luong
on 9 Dec 2018
If you want an interpretation: in your case (A is symmetric),
A*pinv(A) = pinv(A)*A
is the matrix of the orthogonal projection onto Image(A), which is a subspace of dimension 2 of R^3.
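This projection interpretation can be verified numerically. A NumPy sketch (illustrative, not from the original thread), again using the matrix from the question:

```python
import numpy as np

A = np.array([[-185.0685,  185.0685,    0.0   ],
              [ 185.0685, -274.3238,   89.2553],
              [   0.0,      89.2553,  -89.2553]])

P = A @ np.linalg.pinv(A)            # equals pinv(A) @ A here, since A is symmetric
print(np.allclose(P @ P, P))         # True: P is idempotent, i.e. a projection
print(np.allclose(P @ A, A))         # True: P leaves Image(A) fixed
print(np.isclose(np.trace(P), 2.0))  # True: it projects onto a 2-D subspace
```

The trace of an orthogonal projector equals the dimension of the subspace it projects onto, which is why it comes out as 2 here.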

### John D'Errico
on 9 Dec 2018, edited

Perhaps I need to explain it differently, since you don't want to accept that there are some matrices that completely lack an inverse.
Suppose you instead asked to compute the inverse of a number. So, for some X, find the number Xinv such that X*Xinv == 1.
Great, no problem, since we all know how to form the number Xinv = 1/X. But then suppose I gave you X = 0? That is, the one case where X lacks a multiplicative inverse? No matter what you do, there is no number Xinv such that 0*Xinv == 1. Certainly you must accept that fact. (inf need not apply.)
The case of computing the inverse of a rank deficient matrix is the same. No inverse exists, when A is less than full rank. Wanting it to happen is not sufficient.
So then you ask, why is the pseudo-inverse not sufficient to form an inverse? But we just got done saying that if A is a matrix of less than full rank, then no matrix exists such that
A*Ainv == eye(size(A))
So even if we compute Ainv as the pseudo-inverse, it does not matter. We cannot get around the lack of a multiplicative inverse. The magic of an SVD is not sufficient, or even the fact it is called a pseudo-inverse. A name that sounds like it is an inverse is not sufficient to make it one.
What does the pseudo-inverse do then?
pinv(A) is computed using the SVD, so it is a computation that is nicely stable. When A has full rank, pinv(A) should be the same as inv(A). And pinv(A) is a nice way to solve a linear system of equations, A*x = b, that is robust to singularity of the matrix A. A virtue of the pseudo-inverse built from an SVD is that the resulting least squares solution is the one that has minimum norm, of all possible solutions that are equally as good in terms of predictive value. I could probably list a few other properties, but you can read about them as easily in Wikipedia.
But it is not an inverse when A is singular.
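The SVD construction John describes is short enough to write out. A NumPy sketch (the 3x3 example matrix here is hypothetical, not from the thread): invert only the singular values above a tolerance and leave the rest at zero.

```python
import numpy as np

# A hypothetical rank-deficient matrix: the third column is the exact
# sum of the first two, so rank(A) == 2.
B = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
A = np.c_[B, B[:, 0] + B[:, 1]]

# Build the pseudo-inverse by hand from the SVD.
U, s, Vt = np.linalg.svd(A)
tol = max(A.shape) * np.finfo(float).eps * s[0]
s_inv = np.where(s > tol, 1.0 / s, 0.0)      # zero out the "numerically zero" modes
Apinv = Vt.T @ np.diag(s_inv) @ U.T

print(np.allclose(Apinv, np.linalg.pinv(A)))  # True: matches the library routine
print(np.allclose(A @ Apinv @ A, A))          # True: the Moore-Penrose property
```

Note that A @ Apinv @ A == A holds even when A @ Apinv is not the identity; that is exactly the sense in which pinv is a "pseudo" inverse.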

### Bruno Luong
on 10 Dec 2018
The condition number of a matrix that is not full rank is always infinity (around 1e+16 numerically). It does not indicate that PINV is incorrectly computed.
### John D'Errico
on 10 Dec 2018
pinv(A)*b is completely meaningful. Or not. Only you know what meaning you want to assign to the solution. A unique solution does not exist, so pinv picks A solution. In fact, it chooses the minimum norm solution. Sigh. Let me make up an example.
A = rand(3,2);
A = [A,sum(A,2)];
A
A =
   0.417043932300898    0.467035857301822    0.88407978960272
   0.914690211527393    0.11016637918877     1.02485659071616
   0.0516693012115018   0.94137210505082     0.993041406262322
You should suspect that A is not of full rank, since the third column is the sum of the first two random columns. Now, let me create a right hand side. In fact, I will even tell you the "true" solution to the problem I will then try to solve.
Xtrue = [1;1;7];
b = A*Xtrue
b =
7.07263831682176
8.1988527257293
7.94433125009857
Can we now recover Xtrue from this, given only A and b? OF COURSE NOT!!!!!!! A is a singular matrix.
cond(A)
ans =
1.32021598459259e+16
rank(A)
ans =
2
If I try to solve it, things will likely go poorly.
A\b
Warning: Matrix is close to singular or badly scaled. Results may be inaccurate. RCOND = 6.376244e-18.
ans =
0
0
8
inv(A)*b
Warning: Matrix is close to singular or badly scaled. Results may be inaccurate. RCOND = 6.376244e-18.
ans =
13.6639503822245
15.0169695669203
16.1431839758279
Hey, I'm happy I got something that is not inf! There was no noise, else I would have gotten complete crapola. For example, change b in the least significant bits, and what do I see?
bhat = b + randn(3,1)*10*eps
bhat =
7.07263831682176
8.1988527257293
7.94433125009857
A\bhat
Warning: Matrix is close to singular or badly scaled. Results may be inaccurate. RCOND = 6.376244e-18.
ans =
8
8
0
So a teeny tiny change in b, resulted in significant changes in the result. This is why trying to solve a singular system is nasty, because when you do, it amplifies any noise in those least significant bits. And since you should NEVER trust the least significant bits of ANY number, things go to hell. It can be called an ill-posed problem.
Remember, we said that A is singular. What does that mean for our solution? It turns out that when we try to solve such a singular linear system, the solution can be broken down into a sum of terms. So, given any particular solution, I'll call it Xpart, we can write the set of ALL solutions as the sum of Xpart, plus any arbitrary linear combination of the nullspace basis vectors. Since the rank of A is 2, the nullspace of A has dimension 1, spanned here by the vector I'll call Anull.
Anull = null(A)
Anull =
-0.577350269189626
-0.577350269189626
0.577350269189626
Anull = Anull/Anull(1)
Anull =
1
1
-1
I've scaled Anull to look nice there. So the solution we would get from ANY solution to the problem A*X==b is of the general form
Xpart + Anull*t
where t is some arbitrary, unknown parameter. Pick any value you want for t. ANYTHING AT ALL. E.g., as far as the linear algebra is concerned, this is as good as any other solution:
Xpart + Anull*3
ans =
3
3
5
It turns out that pinv returns the solution which has minimum norm, of all possible solutions. That is not difficult to prove, if you understand how pinv works, and if you understand the mathematics behind the expression: Xpart + Anull*t.
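The proof sketch is simple: the pinv solution is orthogonal to null(A), so adding any multiple of a null vector can only increase the norm. A NumPy illustration (hypothetical matrix, built the same way as in this answer: third column = sum of the first two):

```python
import numpy as np

# Rank-2 3x3 matrix with a one-dimensional nullspace spanned by [1, 1, -1].
B = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
A = np.c_[B, B[:, 0] + B[:, 1]]
b = A @ np.array([1.0, 1.0, 7.0])

x_min = np.linalg.pinv(A) @ b
anull = np.array([1.0, 1.0, -1.0])     # spans null(A), scaled to look nice

# Orthogonality: x_min has no component along the nullspace...
print(abs(x_min @ anull) < 1e-8)       # True
# ...so by Pythagoras, every other solution x_min + t*anull is longer.
for t in (-2.0, 0.5, 3.0):
    assert np.linalg.norm(x_min + t * anull) > np.linalg.norm(x_min)
```

Since ||x_min + t*anull||^2 = ||x_min||^2 + t^2*||anull||^2 when the cross term vanishes, t = 0 is the unique minimizer.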
Now, what will we get when we use pinv?
pinv(A)*b
ans =
2.66666666666666
2.66666666666667
5.33333333333333
pinv(A)*bhat
ans =
2.66666666666667
2.66666666666667
5.33333333333333
Nifty, huh? pinv was insensitive to those tiny changes in b. In fact, pinv is a robust solver, in the sense that it will not be torn to shreds by a random change of one bit in the right hand side, as backslash or inv might have been.
Now, was the solution the one we started from? NO! However, we can actually find that original solution, just by trying various values for t, as we did above. For the backslash solution, we recover Xtrue at t = 1.
A\b + Anull*1
ans =
1
1
7
pinv(A)*b + Anull*(-5/3)
ans =
1
1
7
inv(A)*b + Anull*(9)
Warning: Matrix is close to singular or badly scaled. Results may be inaccurate. RCOND = 6.376244e-18.
ans =
1
1
7
As you can see, all three "solutions" are related in a way, being connected via a different amount of the nullspace basis vector of A. And since b was created with no noise in it, we could always recover an exact solution, as long as I knew what the exact solution should be. It was pretty easy in these cases, since I knew Xtrue.
Again, a real, serious virtue of the pseudo-inverse solution provided by pinv is that it is robust against tiny perturbations that would kill the solution you would get from other solvers. Those other solvers often give strange results, including NaNs on occasion, and inv often creates solutions with huge numbers in them. But pinv is always going to return the solution with minimum norm, so it tends to be well behaved, at the cost of being somewhat less efficient, since it is based on the SVD.
But is ANY solution from a singular system solve meaningful? That can be argued. However, unless you understand the linear algebra well enough to know if the use of pinv is appropriate for your particular problem, those arguments will not be worth much either way.
Finally, new in recent MATLAB releases, we now have the linear solver lsqminnorm, which works a lot like pinv(A)*b. You might expect that merely from the name.
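For readers working in Python rather than MATLAB, a rough analogue of lsqminnorm is np.linalg.lstsq, which also returns the minimum-norm least-squares solution via an SVD without explicitly forming pinv(A). A sketch (hypothetical rank-deficient matrix, as above):

```python
import numpy as np

# Rank-2 3x3 system, as in the earlier examples.
B = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
A = np.c_[B, B[:, 0] + B[:, 1]]
b = A @ np.array([1.0, 1.0, 7.0])

# rcond=None uses a machine-precision cutoff on the singular values.
x_lstsq, *_ = np.linalg.lstsq(A, b, rcond=None)
x_pinv = np.linalg.pinv(A) @ b

print(np.allclose(x_lstsq, x_pinv))   # True: both give the minimum-norm solution
```

Like lsqminnorm, the dedicated solver avoids building and storing the pseudo-inverse matrix itself.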
### Deepa Maheshvare
on 11 Dec 2018
Thanks a lot for the wonderful explanation.

### Bruno Luong
on 9 Dec 2018, edited

For a non-full-rank square matrix A, there exists no matrix B such that
A*B = I
So pinv(A) (in the place of B) won't work the miracle you would like, sorry.