Y = inv(X)
Y = inv(X) returns the inverse of the square matrix X. A warning message is printed if X is badly scaled or nearly singular.
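For readers working outside MATLAB, the same computation can be sketched in Python with NumPy, where np.linalg.inv plays the role of inv (the matrix values here are illustrative, not from the reference page):

```python
import numpy as np

# A small nonsingular matrix (illustrative values).
X = np.array([[4.0, 7.0],
              [2.0, 6.0]])

# np.linalg.inv is the NumPy analogue of MATLAB's inv.
Y = np.linalg.inv(X)

# Y is the matrix inverse: X @ Y equals the identity up to roundoff.
print(np.allclose(X @ Y, np.eye(2)))  # True
```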
In practice, it is seldom necessary to form the explicit inverse of a matrix. A frequent misuse of inv arises when solving the system of linear equations Ax = b. One way to solve this is with x = inv(A)*b. A better way, from both an execution time and numerical accuracy standpoint, is to use the matrix division operator x = A\b. This produces the solution using Gaussian elimination, without forming the inverse. See the documentation for the matrix division operators for further information.
Here is an example demonstrating the difference between solving a linear system by inverting the matrix with inv(A)*b and solving it directly with A\b. A random matrix A of order 500 is constructed so that its condition number, cond(A), is 1.e10, and its norm, norm(A), is 1. The exact solution x is a random vector of length 500, and the right-hand side is b = A*x. Thus the system of linear equations is badly conditioned, but consistent. On a 300 MHz laptop computer the statements
    n = 500;
    Q = orth(randn(n,n));
    d = logspace(0,-10,n);
    A = Q*diag(d)*Q';
    x = randn(n,1);
    b = A*x;
    tic, y = inv(A)*b; toc
    err = norm(y-x)
    res = norm(A*y-b)

produce

    elapsed_time = 1.4320
    err = 7.3260e-006
    res = 4.7511e-007
while the statements
    tic, z = A\b; toc
    err = norm(z-x)
    res = norm(A*z-b)

produce

    elapsed_time = 0.6410
    err = 7.1209e-006
    res = 4.4509e-015
It takes almost two and one half times as long to compute the solution with y = inv(A)*b as with z = A\b. Both produce computed solutions with about the same error, 1.e-6, reflecting the condition number of the matrix. But the size of the residuals, obtained by plugging the computed solution back into the original equations, differs by several orders of magnitude. The direct solution produces residuals on the order of the machine accuracy, even though the system is badly conditioned.
The behavior of this example is typical. Using A\b is two to three times as fast as inv(A)*b, and produces residuals on the order of machine accuracy relative to the magnitude of the data.