Exponential of a matrix
This functionality does not run in MATLAB.
numeric::expMatrix(A, <mode>, <method>, options)
numeric::expMatrix(A, x, <mode>, <method>, options)
numeric::expMatrix(A, X, <mode>, <method>, options)
numeric::expMatrix(A) returns the exponential e^A of a square matrix A.
numeric::expMatrix(A, x) with a vector x returns the vector e^A x.
numeric::expMatrix(A, X) with a matrix X returns the matrix e^A X.
If no return type is specified via the option ReturnType = d, the domain type of the result depends on the type of the input matrix A:
For a dense matrix A of type Dom::DenseMatrix(Ring), the result is again a matrix of type Dom::DenseMatrix() over the ring of expressions.
For all other matrices A of category Cat::Matrix, the result is returned as a matrix of type Dom::Matrix() over the ring of expressions. This includes input matrices A of type Dom::Matrix(Ring), Dom::SquareMatrix(Ring), Dom::MatrixGroup(Ring) etc.
The components of A must not contain symbolic objects that cannot be converted to numerical values via float. Numerical symbolic expressions such as π are accepted; they are converted to floats.
The specification of a method such as TaylorExpansion etc. implies SoftwareFloats, i.e., the result is computed via the software arithmetic of the MuPAD® kernel.
The methods Diagonalization and Interpolation do not work for all matrices (see below).
With SoftwareFloats, special algorithms are implemented for traceless 2×2 matrices and skew-symmetric 3×3 matrices. Specifying a particular method has no effect for such matrices.
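For the traceless 2×2 case, such a special algorithm can exploit a closed form: by Cayley-Hamilton, a trace-zero A satisfies A² = −det(A)·I, so the exponential series collapses. The following Python/NumPy sketch illustrates the idea (an illustration only, not MuPAD's actual implementation):

```python
import numpy as np

def exp_traceless_2x2(A):
    """Closed-form exponential of a real trace-zero 2x2 matrix.

    Cayley-Hamilton gives A^2 = -det(A)*I = mu^2*I for traceless A,
    so the Taylor series collapses to e^A = cosh(mu)*I + (sinh(mu)/mu)*A.
    """
    mu = np.sqrt(complex(-np.linalg.det(A)))   # mu may be imaginary
    if abs(mu) < 1e-150:                       # nilpotent case: e^A = I + A
        return np.eye(2) + A
    E = np.cosh(mu) * np.eye(2) + (np.sinh(mu) / mu) * A
    return E.real                              # imaginary parts cancel for real A
```

For det(A) > 0 (e.g. a rotation generator), mu is imaginary and the formula reduces to cos/sin; for det(A) < 0 it reduces to cosh/sinh.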
If e^A x or e^A X is required, one should not compute e^A first and then multiply the resulting matrix by the vector/matrix x/X. In general, the call numeric::expMatrix(A, x) or numeric::expMatrix(A, X), respectively, is faster.
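The same distinction exists elsewhere; for instance, SciPy's `expm_multiply` evaluates the action e^A x directly, analogously to numeric::expMatrix(A, x), while `expm` forms the full matrix exponential. A Python illustration (outside MuPAD):

```python
import numpy as np
from scipy.linalg import expm
from scipy.sparse.linalg import expm_multiply

n = 200
rng = np.random.default_rng(1)
A = rng.standard_normal((n, n)) / np.sqrt(n)   # moderate norm
x = rng.standard_normal(n)

# Action of the exponential on x, without ever forming e^A explicitly.
y_direct = expm_multiply(A, x)

# The full n-by-n exponential followed by a matrix-vector product.
y_full = expm(A) @ x
```

Both yield the same vector, but the direct evaluation avoids the O(n³) cost and storage of the dense matrix e^A.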
The function is sensitive to the environment variable DIGITS, which determines the numerical working precision.
We consider a lower triangular matrix given by an array:
A := array(1..2, 1..2, [[1, 0], [1, PI]]):
expA := numeric::expMatrix(A)
We consider a vector given by a list x1 and by an equivalent 1-dimensional array x2, respectively:
x1 := [1, 1]: x2 := array(1..2, [1, 1]):
Further, an equivalent input vector X of type Dom::Matrix() is used:
X := matrix(x1):
The following three calls all yield a vector represented by a 2×1 array, corresponding to the type of the input matrix A:
numeric::expMatrix(A, x1), numeric::expMatrix(A, x2, Krylov), numeric::expMatrix(A, X, Diagonalization)
For further processing, the array expA is converted to an element of the matrix domain Dom::Matrix():
expA := matrix(expA):
Now, the overloaded arithmetical operators +, *, ^ etc. can be used for further computations:
delete A, expA, x1, x2, X:
We demonstrate the different precision goals of the methods. Note that software arithmetic is used when a method is specified:
A := array(1..3, 1..3, [[ 1000, 1, 0 ], [ 0, 1, 1 ], [1/10^100, 0, -1000]]):
The default method TaylorExpansion computes each component of e^A correctly:
The method Diagonalization produces a result that is accurate in the sense that the error is small relative to the norm of e^A. Indeed, the largest components of e^A are correct. However, Diagonalization does not even get the right order of magnitude of the smaller components:
Note that e^A is very sensitive to small changes in A. After elimination of the small lower triangular element, both methods yield the same result with correct digits in all entries:
B := array(1..3, 1..3, [[1000, 1, 0], [0, 1, 1], [0, 0, -1000]]):
numeric::expMatrix(B, SoftwareFloats)
delete A, B:
Hilbert matrices have real positive eigenvalues. For large dimension, most of these eigenvalues are small and may be regarded as a single cluster. Consequently, the option Krylov is useful:
numeric::expMatrix(linalg::hilbert(100), [1 $ 100], Krylov)
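A rough Python analogue of this example (illustrative only; MuPAD's Krylov method is a separate implementation) uses SciPy's Krylov-style `expm_multiply` on a 100×100 Hilbert matrix:

```python
import numpy as np
from scipy.linalg import expm, hilbert
from scipy.sparse.linalg import expm_multiply

H = hilbert(100)              # real positive eigenvalues, most clustered near 0
ones = np.ones(100)

# Evaluate e^H * [1, ..., 1] directly, exploiting the eigenvalue clustering.
y = expm_multiply(H, ones)
```

The clustered spectrum means a low-dimensional Krylov space already captures the action of e^H on the vector.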
One of the flags Hard, HardwareFloats, Soft, or SoftwareFloats
One of the flags Diagonalization, Interpolation, Krylov, or TaylorExpansion
Hard, HardwareFloats, Soft, SoftwareFloats
With Hard (or HardwareFloats), computations are done using fast hardware float arithmetic from within a MuPAD session. Hard and HardwareFloats are equivalent. With this option, the input data are converted to hardware floats and processed by compiled C code. The result is reconverted to MuPAD floats and returned to the MuPAD session.
With Soft (or SoftwareFloats), computations are done using software float arithmetic provided by the MuPAD kernel. Soft and SoftwareFloats are equivalent. SoftwareFloats is used by default if the current value of DIGITS is larger than 15 and the input matrix A is not of domain type DOM_HFARRAY.
Compared to the SoftwareFloats used by the MuPAD kernel, the computation with HardwareFloats may be many times faster. Note, however, that the precision of hardware arithmetic is limited to about 15 digits. Further, the size of floating-point numbers may not be larger than approximately 10^308 and not smaller than approximately 10^(-308).
If neither HardwareFloats nor SoftwareFloats is requested explicitly, the following strategy is used: If the current value of DIGITS is smaller than 16, or if the matrix A is a hardware float array of domain type DOM_HFARRAY, then hardware arithmetic is tried. If this is successful, the result is returned.
If the result cannot be computed with hardware floats, software arithmetic by the MuPAD kernel is tried.
If the current value of DIGITS is larger than 15 and the input matrix A is not of domain type DOM_HFARRAY, or if one of the options Soft, SoftwareFloats or Symbolic is specified, MuPAD computes the result with its software arithmetic without trying to use hardware floats first.
There may be several reasons for hardware arithmetic to fail, e.g., input data that cannot be converted to floating-point numbers.
If neither HardwareFloats nor SoftwareFloats is specified, the user is not informed whether hardware floats or software floats are used.
If HardwareFloats are specified but fail due to one of the reasons above, a warning is issued that the (much slower) software floating-point arithmetic of the MuPAD kernel is used.
Note that HardwareFloats can only be used if all input data can be converted to floating-point numbers.
The trailing digits in floating-point results computed with HardwareFloats and SoftwareFloats may differ.
Diagonalization, Interpolation, Krylov, TaylorExpansion
The specification of a method implies SoftwareFloats, i.e., the result is always computed via the software arithmetic of the MuPAD kernel.
The method TaylorExpansion is the default algorithm. It produces fast results for matrices with small norms.
The default method TaylorExpansion computes each individual component of e^A, e^A x, or e^A X to a relative precision of about 10^(-DIGITS), unless numerical roundoff prevents reaching this precision goal. Roughly speaking: all digits of all components of the result are reliable up to roundoff effects.
The method Krylov is only available for computing e^A x with a vector x. Vectors represented by n×1 matrices are also accepted.
This method is fast when x is spanned by only a few eigenvectors of A. Further, if A has only a few clusters of similar eigenvalues, then this method can be much faster than the other methods. Cf. Example 3.
Option, specified as ReturnType = d
All results are float matrices/vectors. For an n×n matrix A:
numeric::expMatrix(A, method) returns e^A as an n×n matrix,
numeric::expMatrix(A, x, method) returns e^A x as an n×1 matrix,
numeric::expMatrix(A, X, method) returns e^A X as an n×m matrix (for an n×m matrix X).
The domain type of the result depends on the domain type of the input matrix A unless a return type is requested explicitly via ReturnType = d.
The method TaylorExpansion sums the usual Taylor series e^A = 1 + A + A^2/2! + A^3/3! + … in a suitable, numerically stable way.
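A minimal Python sketch of this idea (an illustration, not MuPAD's implementation) combines the truncated series with the standard scaling-and-squaring trick to keep the summation well behaved:

```python
import numpy as np

def exp_taylor(A, terms=30):
    # Scale A down so the Taylor series converges quickly, sum it,
    # then undo the scaling via e^A = (e^{A/2^s})^{2^s}.
    s = max(0, int(np.ceil(np.log2(max(np.linalg.norm(A, 1), 1.0)))))
    B = A / 2**s
    term = np.eye(len(A))        # current term B^k / k!
    result = term.copy()
    for k in range(1, terms):
        term = term @ B / k
        result += term
    for _ in range(s):           # repeated squaring
        result = result @ result
    return result
```

Scaling first is what makes the plain series usable: without it, large intermediate terms B^k/k! would cancel catastrophically for matrices with large norm.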
The method Diagonalization computes e^A = T diag(e^λ1, e^λ2, …) T^(-1) via a diagonalization A = T diag(λ1, λ2, …) T^(-1).
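In Python/NumPy the method can be sketched as follows (illustrative only; it inherits the accuracy problems shown in Example 2 when the eigenvector matrix T is ill-conditioned, and fails for defective A):

```python
import numpy as np

def exp_diag(A):
    # e^A = T diag(e^l1, e^l2, ...) T^{-1} from A = T diag(l1, l2, ...) T^{-1}.
    # Requires a full eigenvector basis; accuracy degrades as cond(T) grows.
    lam, T = np.linalg.eig(A)
    E = T @ np.diag(np.exp(lam)) @ np.linalg.inv(T)
    return E.real                # for real A, imaginary parts cancel
```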
The method Interpolation computes a polynomial P interpolating the function exp at the eigenvalues of A. Evaluation of the matrix polynomial yields P(A) = e^A.
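A Python sketch of the interpolation idea (illustrative only; it assumes distinct eigenvalues) builds the Newton form of the interpolating polynomial via divided differences and evaluates it at A by Horner's scheme:

```python
import numpy as np

def exp_interp(A):
    # Interpolate exp at the eigenvalues of A (assumed distinct),
    # then evaluate the polynomial at A itself: P(A) = e^A.
    lam = np.linalg.eigvals(A)
    n = len(lam)
    c = np.exp(lam).astype(complex)
    for j in range(1, n):                    # divided-difference table
        c[j:] = (c[j:] - c[j-1:-1]) / (lam[j:] - lam[:n-j])
    # Horner: P(A) = c0*I + (A - l0*I)(c1*I + (A - l1*I)(...))
    P = c[-1] * np.eye(n)
    for k in range(n - 2, -1, -1):
        P = c[k] * np.eye(n) + (A - lam[k] * np.eye(n)) @ P
    return P.real                            # real for real A
```

Repeated eigenvalues would require confluent divided differences (derivative values of exp), which this sketch omits.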
The method Krylov reduces A to a Hessenberg matrix H and computes an approximation of e^A x from e^H. Depending on A and x, the dimension of H may be smaller than the dimension of A.
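The reduction can be sketched in Python with an Arnoldi iteration (illustrative only; SciPy's `expm` handles the small Hessenberg factor). With an orthonormal Krylov basis V and H = Vᵀ A V, one approximates e^A x ≈ ‖x‖ · V e^H e₁:

```python
import numpy as np
from scipy.linalg import expm

def exp_krylov(A, x, m=30):
    # Arnoldi: orthonormal basis V of span{x, Ax, ..., A^{m-1}x}
    # and the small Hessenberg projection H = V^T A V.
    n = len(x)
    m = min(m, n)
    V = np.zeros((n, m))
    H = np.zeros((m, m))
    beta = np.linalg.norm(x)
    V[:, 0] = x / beta
    k_used = m
    for j in range(m - 1):
        w = A @ V[:, j]
        for i in range(j + 1):               # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        h = np.linalg.norm(w)
        if h < 1e-12:                        # invariant subspace found: exact
            k_used = j + 1
            break
        H[j + 1, j] = h
        V[:, j + 1] = w / h
    else:
        w = A @ V[:, m - 1]                  # last column of the projection
        for i in range(m):
            H[i, m - 1] = V[:, i] @ w
    V, H = V[:, :k_used], H[:k_used, :k_used]
    e1 = np.zeros(k_used)
    e1[0] = 1.0
    return beta * V @ expm(H) @ e1
```

When the Krylov space stops growing (the `break` above), H is smaller than A and the approximation becomes exact on that invariant subspace.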
numeric::expMatrix uses polynomial arithmetic to multiply matrices and vectors. Thus, sparse matrices are handled efficiently based on the MuPAD internal sparse representation of polynomials.
Y. Saad, "Analysis of Some Krylov Subspace Approximations to the Matrix Exponential Operator", SIAM Journal on Numerical Analysis 29 (1992).