numeric
::expMatrix
Exponential of a matrix
MuPAD® notebooks are not recommended. Use MATLAB® live scripts instead.
MATLAB live scripts support most MuPAD functionality, though there are some differences. For more information, see Convert MuPAD Notebooks to MATLAB Live Scripts.
numeric::expMatrix(A, <mode>, <method>, options)
numeric::expMatrix(A, x, <mode>, <method>, options)
numeric::expMatrix(A, X, <mode>, <method>, options)
numeric::expMatrix(A) returns the exponential exp(A) of a square matrix A.
numeric::expMatrix(A, x) with a vector x returns the vector exp(A) x.
numeric::expMatrix(A, X) with a matrix X returns the matrix exp(A) X.
If no return type is specified via the option ReturnType = d, the domain type of the result depends on the type of the input matrix A:
For a dense matrix A of type Dom::DenseMatrix(Ring), the result is again a matrix of type Dom::DenseMatrix() over the ring of expressions.
For all other matrices A of category Cat::Matrix, the result is returned as a matrix of type Dom::Matrix() over the ring of expressions. This includes input matrices A of type Dom::Matrix(Ring), Dom::SquareMatrix(Ring), Dom::MatrixGroup(Ring), etc.
The components of A must not contain symbolic objects that cannot be converted to numerical values via float. Numerical symbolic expressions such as π etc. are accepted; they are converted to floats.
The specification of a method such as TaylorExpansion etc. implies SoftwareFloats, i.e., the result is computed via the software arithmetic of the MuPAD® kernel. The methods Diagonalization and Interpolation do not work for all matrices (see below).
With SoftwareFloats, special algorithms are implemented for traceless 2×2 matrices and skew-symmetric 3×3 matrices. Specifying a particular method has no effect for such matrices.
If exp(A) x or exp(A) X is required, one should not compute exp(A) first and then multiply the resulting matrix by the vector/matrix x/X. In general, the call numeric::expMatrix(A, x) or numeric::expMatrix(A, X), respectively, is faster.
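The reason is that exp(A) x can be accumulated from matrix-vector products without ever forming the full matrix exp(A). The following Python/NumPy sketch illustrates the idea (it is not MuPAD code; the helper name expm_action and the fixed term count are my own choices):

```python
import numpy as np

def expm_action(A, x, terms=30):
    """Approximate exp(A) @ x by summing the Taylor series
    x + A x + A^2 x / 2! + ... using only matrix-vector products,
    so exp(A) itself is never formed: each term costs O(n^2)
    instead of the O(n^3) needed to build the full exponential."""
    y = x.astype(float)
    term = x.astype(float)
    for k in range(1, terms + 1):
        term = A @ term / k   # now holds A^k x / k!
        y = y + term
    return y

A = np.array([[1.0, 0.0], [1.0, np.pi]])
print(expm_action(A, np.ones(2)))
```

This plain truncated series is only reliable for matrices of moderate norm; it merely illustrates why the direct call is cheaper than forming exp(A) first.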
The function is sensitive to the environment variable DIGITS, which determines the numerical working precision.
We consider a lower triangular matrix given by an array:
A := array(1..2, 1..2, [[1, 0] , [1, PI]]): expA := numeric::expMatrix(A)
We consider a vector given by a list x1 and by an equivalent one-dimensional array x2, respectively:
x1 := [1, 1]: x2 := array(1..2, [1, 1]):
Further, an equivalent input vector X of type Dom::Matrix() is used:
X := matrix(x1):
The following three calls all yield a vector represented by a 2×1 array corresponding to the type of the input matrix A:
numeric::expMatrix(A, x1), numeric::expMatrix(A, x2, Krylov), numeric::expMatrix(A, X, Diagonalization)
For further processing, the array expA is converted to an element of the matrix domain Dom::Matrix():
expA := matrix(expA):
Now, the overloaded arithmetical operators +, *, ^, etc. can be used for further computations:
expA*X
delete A, expA, x1, x2, X:
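Example 1 can be cross-checked outside MuPAD. For a lower triangular 2×2 matrix [[a, 0], [c, b]] with a ≠ b, the exponential has the closed form [[e^a, 0], [c (e^b − e^a)/(b − a), e^b]]. A Python/NumPy sketch for the matrix of this example (illustrative only, not part of the MuPAD interface):

```python
import numpy as np

# exp of the lower triangular matrix A = [[1, 0], [1, pi]].
# For A = [[a, 0], [c, b]] with a != b, the off-diagonal entry of
# exp(A) is the divided difference c * (e^b - e^a) / (b - a).
a, b, c = 1.0, np.pi, 1.0
expA = np.array([
    [np.exp(a),                             0.0],
    [c * (np.exp(b) - np.exp(a)) / (b - a), np.exp(b)],
])
print(expA)
```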
We demonstrate the different precision goals of the methods. Note that software arithmetic is used when a method is specified:
A := array(1..3, 1..3, [[ 1000, 1, 0 ], [ 0, 1, 1 ], [1/10^100, 0, 1000]]):
The default method TaylorExpansion computes each component of exp(A) correctly:
numeric::expMatrix(A, SoftwareFloats)
The method Diagonalization produces a result that is accurate in the norm sense, i.e., the relative error ‖exp(A) − result‖/‖exp(A)‖ is small. Indeed, the largest components of exp(A) are correct. However, Diagonalization does not even get the right order of magnitude of the smaller components:
numeric::expMatrix(A, Diagonalization)
Note that exp(A) is very sensitive to small changes in A. After elimination of the small lower triangular element, both methods yield the same result with correct digits for all entries:
B := array(1..3, 1..3, [[ 1000, 1, 0 ], [ 0 , 1, 1 ], [ 0 , 0, 1000]]): numeric::expMatrix(B, SoftwareFloats)
numeric::expMatrix(B, Diagonalization)
delete A, B:
Hilbert matrices have real positive eigenvalues. For large dimensions, most of these eigenvalues are small and may be regarded as a single cluster. Consequently, the option Krylov is useful:
numeric::expMatrix(linalg::hilbert(100), [1 $ 100], Krylov)

A: a square n×n matrix of category Cat::Matrix, or an array

x: a vector represented by a list or a one-dimensional array (an equivalent column vector of category Cat::Matrix is also accepted, cf. Example 1)

X: an n×m matrix of category Cat::Matrix, or an array

mode: one of the flags HardwareFloats or SoftwareFloats

method: one of the flags TaylorExpansion, Diagonalization, Interpolation, or Krylov
HardwareFloats, SoftwareFloats
With HardwareFloats, the computation is done with fast hardware floating-point arithmetic; with SoftwareFloats, it is done with the software arithmetic of the MuPAD kernel. Compared to the software arithmetic, hardware floats are faster but restricted to about 15 significant digits. If no mode is specified, hardware floats are tried first. If the result cannot be computed with hardware floats, software arithmetic by the MuPAD kernel is tried. There may be several reasons for hardware arithmetic to fail; e.g., the current value of DIGITS may be too large for hardware floats. Note that the trailing digits in floating-point results computed with HardwareFloats and SoftwareFloats may differ.

TaylorExpansion, Diagonalization, Interpolation, Krylov
The specification of a method implies SoftwareFloats. The default method is TaylorExpansion.
Note: The methods Diagonalization and Interpolation do not work for all matrices.
The method Krylov is fast when x is spanned by few eigenvectors of A. Further, if A has only few clusters of similar eigenvalues, then this method can be much faster than the other methods. Cf. Example 3.

NoWarning
Suppresses warnings.

ReturnType = d
Return the result matrix or vector as a matrix of domain type d, such as Dom::Matrix() or Dom::DenseMatrix().
All results are float matrices/vectors. For an n×n matrix A:
numeric::expMatrix(A, method) returns exp(A) as an n×n matrix,
numeric::expMatrix(A, x, method) returns the vector exp(A) x as an n×1 matrix,
numeric::expMatrix(A, X, method) returns the matrix exp(A) X as an n×m matrix.
The domain type of the result depends on the domain type of the input matrix A unless a return type is requested explicitly via ReturnType = d.
Y. Saad, "Analysis of Some Krylov Subspace Approximations to the Matrix Exponential Operator", SIAM Journal on Numerical Analysis 29 (1992).
The method TaylorExpansion sums the usual Taylor series exp(A) = 1 + A + A^2/2! + A^3/3! + … in a suitable numerically stable way.
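One standard way to stabilize the Taylor summation is scaling and squaring: scale A by 2^(-s) so the series converges quickly, then square the result s times, since exp(A) = (exp(A/2^s))^(2^s). A Python/NumPy sketch of this scheme (the helper name and the term/norm parameters are my own choices, not MuPAD's internal algorithm):

```python
import numpy as np

def expm_taylor(A, terms=20, max_norm=0.5):
    """Matrix exponential by Taylor series with scaling and squaring:
    scale A by 2^-s so that ||A / 2^s|| <= max_norm, sum the series,
    then square the partial sum s times."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    s = max(0, int(np.ceil(np.log2(max(np.linalg.norm(A, 1) / max_norm, 1.0)))))
    B = A / 2**s
    E = np.eye(n)
    term = np.eye(n)
    for k in range(1, terms + 1):
        term = term @ B / k   # now holds B^k / k!
        E = E + term
    for _ in range(s):
        E = E @ E             # undo the scaling: exp(A) = exp(B)^(2^s)
    return E

A = np.array([[1.0, 0.0], [1.0, np.pi]])
print(expm_taylor(A))
```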
The method Diagonalization computes exp(A) by a diagonalization A = T diag(λ_1, λ_2, …) T^(-1), yielding exp(A) = T diag(e^(λ_1), e^(λ_2), …) T^(-1).
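A Python/NumPy sketch of this idea (illustrative only; as Example 2 shows, it loses accuracy when the eigenvector matrix T is ill-conditioned, and it fails outright for non-diagonalizable A):

```python
import numpy as np

def expm_diag(A):
    """Matrix exponential via diagonalization A = T diag(l1, l2, ...) T^-1:
    exponentiate the eigenvalues and transform back."""
    lam, T = np.linalg.eig(A)
    # T * np.exp(lam) scales column j of T by e^(lam_j), i.e. T @ diag(e^lam)
    return (T * np.exp(lam)) @ np.linalg.inv(T)

A = np.array([[1.0, 0.0], [1.0, np.pi]])
print(np.real_if_close(expm_diag(A)))
```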
The method Interpolation computes a polynomial P interpolating the function exp at the eigenvalues of A. Evaluation of the matrix polynomial yields P(A) = exp(A).
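A Python/NumPy sketch of this approach, using Newton divided differences of exp at the eigenvalues (assumed distinct here; the helper name and details are my own, not MuPAD's internal implementation):

```python
import numpy as np

def expm_interp(A):
    """Matrix exponential via polynomial interpolation: build the Newton
    form of the polynomial P interpolating exp at the (distinct)
    eigenvalues of A, then evaluate P(A), which equals exp(A)."""
    lam = np.linalg.eigvals(A)
    n = len(lam)
    # divided differences of exp at the eigenvalues
    dd = np.exp(lam).astype(complex)
    for j in range(1, n):
        dd[j:] = (dd[j:] - dd[j - 1:-1]) / (lam[j:] - lam[:-j])
    # Horner-like evaluation of the Newton form at the matrix A
    I = np.eye(n)
    E = dd[-1] * I
    for j in range(n - 2, -1, -1):
        E = E @ (A - lam[j] * I) + dd[j] * I
    return np.real_if_close(E)

A = np.array([[1.0, 0.0], [1.0, np.pi]])
print(expm_interp(A))
```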
The method Krylov reduces A to a Hessenberg matrix H and computes an approximation of exp(A) x from exp(H). Depending on A and x, the dimension of H may be smaller than the dimension of A.
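An illustrative Python/NumPy sketch of such a Krylov approximation via the Arnoldi process (the helper name, step count, and breakdown tolerance are my own choices; cf. the Saad reference above):

```python
import numpy as np

def expm_krylov(A, x, m=10):
    """Approximate exp(A) @ x with an m-step Arnoldi process: build an
    orthonormal basis V of span{x, Ax, ..., A^(m-1)x} and the small
    Hessenberg matrix H = V^T A V, then use
    exp(A) x ~ ||x|| * V @ exp(H) @ e1."""
    n = len(x)
    m = min(m, n)
    V = np.zeros((n, m))
    H = np.zeros((m, m))
    beta = np.linalg.norm(x)
    V[:, 0] = x / beta
    k = m
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):          # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w = w - H[i, j] * V[:, i]
        h = np.linalg.norm(w)
        if j + 1 < m:
            if h < 1e-12:               # happy breakdown: space is exact
                k = j + 1
                break
            H[j + 1, j] = h
            V[:, j + 1] = w / h
    V, H = V[:, :k], H[:k, :k]
    # small dense exp(H) via a plain scaled/squared Taylor series
    s = max(0, int(np.ceil(np.log2(max(np.linalg.norm(H, 1), 1.0)))) + 1)
    B = H / 2**s
    E = np.eye(k)
    term = np.eye(k)
    for i in range(1, 25):
        term = term @ B / i
        E = E + term
    for _ in range(s):
        E = E @ E
    return beta * V @ E[:, 0]

A = np.array([[1.0, 0.0], [1.0, np.pi]])
print(expm_krylov(A, np.ones(2)))
```

When x lies in a low-dimensional invariant subspace, or the eigenvalues cluster as in Example 3, a small m already gives good accuracy, which is where this method beats the others.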
numeric::expMatrix
uses polynomial arithmetic
to multiply matrices and vectors. Thus, sparse matrices are handled
efficiently based on the MuPAD internal sparse representation
of polynomials.