If *A* is square and nonsingular, the equations *AX = I* and *XA = I* have the same solution, *X*. This solution is called the inverse of *A*, is denoted by *A*^{-1}, and is computed by the function `inv`.

The *determinant* of a matrix is useful in
theoretical considerations and some types of symbolic computation,
but its scaling and round-off error properties make it far less satisfactory
for numeric computation. Nevertheless, the function `det` computes the determinant of a square matrix:

```
A = pascal(3)

A =
     1     1     1
     1     2     3
     1     3     6

d = det(A)
X = inv(A)

d =
     1

X =
     3    -3     1
    -3     5    -2
     1    -2     1
```

Again, because `A` is symmetric, has integer elements, and has determinant equal to one, the same is true of its inverse. However,

```
B = magic(3)

B =
     8     1     6
     3     5     7
     4     9     2

d = det(B)
X = inv(B)

d =
  -360

X =
    0.1472   -0.1444    0.0639
   -0.0611    0.0222    0.1056
   -0.0194    0.1889   -0.1028
```

Closer examination of the elements of `X`, or use of `format rat`, would reveal that they are integers divided by 360.

If `A` is square and nonsingular, then, without round-off error, `X = inv(A)*B` is theoretically the same as `X = A\B`, and `Y = B*inv(A)` is theoretically the same as `Y = B/A`. But the computations involving the backslash and slash operators are preferable because they require less computer time and memory, and have better error-detection properties.
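The same trade-off applies outside MATLAB. As a rough NumPy analogue (a sketch, not part of the original example; `numpy.linalg.solve` plays the role of the backslash operator):

```python
import numpy as np

# A small nonsingular system (the magic(3) matrix from above).
A = np.array([[8.0, 1.0, 6.0],
              [3.0, 5.0, 7.0],
              [4.0, 9.0, 2.0]])
b = np.array([1.0, 2.0, 3.0])

x_solve = np.linalg.solve(A, b)    # analogue of x = A\b: one LU factorization
x_inv   = np.linalg.inv(A) @ b     # analogue of x = inv(A)*b: forms the inverse

# The two agree to round-off, but solve() never forms the inverse
# explicitly, which is the preferred formulation.
assert np.allclose(x_solve, x_inv)
```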

Rectangular matrices do not have inverses or determinants. At least one of the equations *AX = I* and *XA = I* does not have a solution. A partial replacement for the inverse is provided by the *Moore-Penrose pseudoinverse*, which is computed by the `pinv` function:

```
format short
C = fix(10*gallery('uniformdata',[3 2],0));
X = pinv(C)

X =
    0.1159   -0.0729    0.0171
   -0.0534    0.1152    0.0418
```

The matrix

```
Q = X*C

Q =
    1.0000    0.0000
    0.0000    1.0000
```

is the 2-by-2 identity, but the matrix

```
P = C*X

P =
    0.8293   -0.1958    0.3213
   -0.1958    0.7754    0.3685
    0.3213    0.3685    0.3952
```

is not the 3-by-3 identity. However, `P` acts like an identity on a portion of the space in the sense that `P` is symmetric, `P*C` is equal to `C`, and `X*P` is equal to `X`.
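These pseudoinverse identities are easy to check numerically. Here is a NumPy sketch (NumPy's `pinv`, applied to a hypothetical full-column-rank 3-by-2 matrix chosen for illustration, stands in for the MATLAB example):

```python
import numpy as np

# A 3-by-2 matrix with full column rank (any full-rank rectangular
# matrix behaves the same way).
C = np.array([[9.0, 4.0],
              [2.0, 8.0],
              [6.0, 7.0]])
X = np.linalg.pinv(C)               # 2-by-3 Moore-Penrose pseudoinverse

# X*C is the 2-by-2 identity ...
assert np.allclose(X @ C, np.eye(2))

# ... but P = C*X is only a symmetric projector, not the 3-by-3 identity.
P = C @ X
assert np.allclose(P, P.T)          # P is symmetric
assert np.allclose(P @ C, C)        # P*C equals C
assert np.allclose(X @ P, X)        # X*P equals X
assert not np.allclose(P, np.eye(3))
```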

If `A` is *m*-by-*n* with *m* > *n* and full rank *n*, each of the three statements

```
x = A\b
x = pinv(A)*b
x = inv(A'*A)*A'*b
```

theoretically computes the same least-squares solution `x`, although the backslash operator does it faster.
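The same three-way comparison can be sketched in NumPy (the matrix below is a made-up full-rank example; `lstsq` plays the role of the backslash operator for the overdetermined case):

```python
import numpy as np

# Overdetermined system with full column rank: a 3-by-2 A.
A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])
b = np.array([1.0, 2.0, 2.0])

x_lstsq = np.linalg.lstsq(A, b, rcond=None)[0]  # analogue of x = A\b
x_pinv  = np.linalg.pinv(A) @ b                 # analogue of x = pinv(A)*b
x_ne    = np.linalg.solve(A.T @ A, A.T @ b)     # x = inv(A'*A)*A'*b

# All three agree when A has full rank.
assert np.allclose(x_lstsq, x_pinv)
assert np.allclose(x_lstsq, x_ne)
```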

However, if `A` does not have full rank, the solution to the least-squares problem is not unique. There are many vectors `x` that minimize

```
norm(A*x - b)
```

The solution computed by `x = A\b` is a basic solution; it has at most *r* nonzero components, where *r* is the rank of `A`. The solution computed by `x = pinv(A)*b` is the minimal norm solution because it minimizes `norm(x)`. An attempt to compute a solution with `x = inv(A'*A)*A'*b` fails because `A'*A` is singular.

Here is an example that illustrates the various solutions. The matrix

```
A = [ 1  2  3
      4  5  6
      7  8  9
     10 11 12 ];
```

does not have full rank: its second column is the average of the first and third columns. If

```
b = A(:,2)
```

is the second column, then an obvious solution to `A*x = b` is `x = [0 1 0]'`. But none of the approaches computes that `x`. The backslash operator gives


```
x = A\b

Warning: Rank deficient, rank = 2, tol = 1.4594e-014.

x =
    0.5000
         0
    0.5000
```

This solution has two nonzero components. The pseudoinverse approach gives

```
y = pinv(A)*b

y =
    0.3333
    0.3333
    0.3333
```

There is no warning about rank deficiency. But `norm(y) = 0.5774` is less than `norm(x) = 0.7071`. Finally,

```
z = inv(A'*A)*A'*b
```

fails completely:

```
Warning: Matrix is close to singular or badly scaled.
         Results may be inaccurate. RCOND = 9.868649e-018.

z =
   -0.8594
    1.3438
   -0.6875
```
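The rank-deficient example can also be reproduced in NumPy as a sketch, with one behavioral caveat: unlike MATLAB's backslash, which returns a basic solution, NumPy's `lstsq` returns the minimum-norm solution for rank-deficient systems, so in NumPy it agrees with the pseudoinverse:

```python
import numpy as np

A = np.array([[ 1.0,  2.0,  3.0],
              [ 4.0,  5.0,  6.0],
              [ 7.0,  8.0,  9.0],
              [10.0, 11.0, 12.0]])
b = A[:, 1]                        # second column, so A @ [0, 1, 0] == b exactly

# Minimum-norm solution via the pseudoinverse: [1/3, 1/3, 1/3].
y = np.linalg.pinv(A) @ b
assert np.allclose(y, [1/3, 1/3, 1/3])

# Its norm (about 0.5774) is smaller than that of the basic solution
# [0.5, 0, 0.5] (about 0.7071) or of the obvious solution [0, 1, 0].
assert np.linalg.norm(y) < np.linalg.norm([0.5, 0.0, 0.5])
```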
