# numeric::expMatrix

Exponential of a matrix

### Use only in the MuPAD Notebook Interface.

This functionality does not run in MATLAB.

## Syntax

```
numeric::expMatrix(A, <mode>, <method>, options)
numeric::expMatrix(A, x, <mode>, <method>, options)
numeric::expMatrix(A, X, <mode>, <method>, options)
```

## Description

`numeric::expMatrix(A)` returns the exponential of a square matrix A.

`numeric::expMatrix(A, x)` with a vector x returns the vector e^A x.

`numeric::expMatrix(A, X)` with a matrix X returns the matrix e^A X.

If no return type is specified via the option `ReturnType = d`, the domain type of the result depends on the type of the input matrix `A`:

• For an array A, the result is returned as an array.

• For an hfarray A, the result is returned as an hfarray.

• For a dense matrix A of type `Dom::DenseMatrix(Ring)`, the result is again a matrix of type `Dom::DenseMatrix()` over the ring of expressions.

• For all other matrices A of category `Cat::Matrix`, the result is returned as a `matrix` of type `Dom::Matrix()` over the ring of expressions. This includes input matrices `A` of type `Dom::Matrix(Ring)`, `Dom::SquareMatrix(Ring)`, `Dom::MatrixGroup(Ring)` etc.

The components of A must not contain symbolic objects that cannot be converted to numerical values via `float`. Numerical symbolic expressions such as π, √2, etc. are accepted; they are converted to floats.

Specifying a method such as `TaylorExpansion` implies `SoftwareFloats`, i.e., the result is computed via the software arithmetic of the MuPAD® kernel.

The methods `Diagonalization` and `Interpolation` do not work for all matrices (see below).

With `SoftwareFloats`, special algorithms are implemented for traceless 2×2 matrices and skew symmetric 3×3 matrices. Specification of a particular method does not have any effect for such matrices.

If e^A x or e^A X is required, one should not compute e^A first and then multiply the resulting matrix by the vector/matrix x/X. In general, the call `numeric::expMatrix(A, x)` or `numeric::expMatrix(A, X)`, respectively, is faster.
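As an illustration outside MuPAD, the same point can be made with SciPy (used here as a stand-in, not as MuPAD's implementation): `expm_multiply` applies the exponential directly to a vector, while the naive route forms the full dense matrix e^A first.

```python
import numpy as np
from scipy.linalg import expm
from scipy.sparse.linalg import expm_multiply

rng = np.random.default_rng(0)
n = 200
A = 0.05 * rng.standard_normal((n, n))  # modest norm, well-behaved exponential
x = np.ones(n)

# Direct action: approximates e^A x without ever forming the full matrix e^A
y_direct = expm_multiply(A, x)

# Naive route: form the dense n-by-n matrix e^A first, then multiply
y_naive = expm(A) @ x

assert np.allclose(y_direct, y_naive)
```

For large or sparse A, the direct action avoids the O(n²) storage and O(n³) work of forming e^A explicitly.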

## Environment Interactions

The function is sensitive to the environment variable `DIGITS`, which determines the numerical working precision.

## Examples

### Example 1

We consider a lower triangular matrix given by an array:

`A := array(1..2, 1..2, [[1, 0], [1, PI]]): expA := numeric::expMatrix(A)`

We consider a vector given by a list `x1` and by an equivalent 1-dimensional array `x2`, respectively:

`x1 := [1, 1]: x2 := array(1..2, [1, 1]):`

Further, an equivalent input vector X of type `Dom::Matrix()` is used:

`X := matrix(x1):`

The following three calls all yield a vector represented by a 2×1 array, corresponding to the type of the input matrix A:

`numeric::expMatrix(A, x1), numeric::expMatrix(A, x2, Krylov), numeric::expMatrix(A, X, Diagonalization)`

For further processing, the array `expA` is converted to an element of the matrix domain `Dom::Matrix()`:

`expA := matrix(expA):`

Now, the overloaded arithmetical operators `+`, `*`, `^` etc. can be used for further computations:

`expA*X`

`delete A, expA, x1, x2, X:`

### Example 2

We demonstrate the different precision goals of the methods. Note that software arithmetic is used when a method is specified:

`A := array(1..3, 1..3, [[1000, 1, 0], [0, 1, 1], [1/10^100, 0, -1000]]):`

The default method `TaylorExpansion` computes each component of e^A correctly:

`numeric::expMatrix(A, SoftwareFloats)`

The method `Diagonalization` produces a result that is accurate in the sense that the error is small relative to the norm of e^A. Indeed, the largest components of e^A are correct. However, `Diagonalization` does not even get the right order of magnitude of the smaller components:

`numeric::expMatrix(A, Diagonalization)`

Note that e^A is very sensitive to small changes in A. After elimination of the small lower-triangular element, both methods yield the same result with correct digits for all entries:

`B := array(1..3, 1..3, [[1000, 1, 0], [0, 1, 1], [0, 0, -1000]]): numeric::expMatrix(B, SoftwareFloats)`

`numeric::expMatrix(B, Diagonalization)`

`delete A, B:`

### Example 3

Hilbert matrices have real positive eigenvalues. For large dimension, most of these eigenvalues are small and may be regarded as a single cluster. Consequently, the option `Krylov` is useful:

`numeric::expMatrix(linalg::hilbert(100), [1 $ 100], Krylov)`

## Parameters

• `A`: A square n×n matrix of domain type `DOM_ARRAY`, `DOM_HFARRAY`, or of category `Cat::Matrix`

• `x`: A vector represented by a list `[x1, …, xn]`, a 1-dimensional array `array(1..n, [x1, …, xn])`, or a 1-dimensional hfarray `hfarray(1..n, [x1, …, xn])`

• `X`: An n×m matrix of domain type `DOM_ARRAY`, `DOM_HFARRAY`, `Dom::Matrix(Ring)`, or `Dom::DenseMatrix(Ring)` with a suitable coefficient ring `Ring`

• `mode`: One of the flags `Hard`, `HardwareFloats`, `Soft`, or `SoftwareFloats`

• `method`: One of the flags `Diagonalization`, `Interpolation`, `Krylov`, or `TaylorExpansion`

## Options

`Hard`, `HardwareFloats`, `Soft`, `SoftwareFloats`

With `Hard` (or `HardwareFloats`), computations are done using fast hardware float arithmetic from within a MuPAD session. `Hard` and `HardwareFloats` are equivalent. With this option, the input data are converted to hardware floats and processed by compiled C code. The result is reconverted to MuPAD floats and returned to the MuPAD session.

With `Soft` (or `SoftwareFloats`), computations are done using software float arithmetic provided by the MuPAD kernel. `Soft` and `SoftwareFloats` are equivalent. `SoftwareFloats` is used by default if the current value of `DIGITS` is larger than 15 and the input matrix `A` is not of domain type `DOM_HFARRAY`.

Compared to the `SoftwareFloats` used by the MuPAD kernel, the computation with `HardwareFloats` may be many times faster. Note, however, that the precision of hardware arithmetic is limited to about 15 digits. Further, the absolute value of floating-point numbers may not be larger than approximately 10^308 and not smaller than approximately 10^(-308).
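These range limits are the usual IEEE 754 double-precision bounds, which can be checked in any environment using hardware doubles; a small Python demonstration (illustrative, not MuPAD):

```python
import sys

# Largest finite IEEE double: about 1.7976931348623157e308
print(sys.float_info.max)

# Exceeding that range overflows to infinity...
assert 1e308 * 10 == float("inf")

# ...and magnitudes far below ~1e-308 eventually underflow to zero
assert 1e-308 / 1e100 == 0.0
```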

If neither `HardwareFloats` nor `SoftwareFloats` is requested explicitly, the following strategy is used: If the current value of `DIGITS` is smaller than 16 or if the matrix `A` is a hardware float array of domain type `DOM_HFARRAY`, then hardware arithmetic is tried. If this is successful, the result is returned.

If the result cannot be computed with hardware floats, software arithmetic by the MuPAD kernel is tried.

If the current value of `DIGITS` is larger than 15 and the input matrix `A` is not of domain type `DOM_HFARRAY`, or if one of the options `Soft`, `SoftwareFloats` or `Symbolic` is specified, MuPAD computes the result with its software arithmetic without trying to use hardware floats first.

There may be several reasons for hardware arithmetic to fail:

• The current value of `DIGITS` is larger than 15.

• The data contains symbolic objects.

• The data contains numbers larger than 10^308 or smaller than 10^(-308) that cannot be represented by hardware floats.

If neither `HardwareFloats` nor `SoftwareFloats` is specified, the user is not informed whether hardware floats or software floats are used.

If `HardwareFloats` are specified but fail due to one of the reasons above, a warning is issued that the (much slower) software floating-point arithmetic of the MuPAD kernel is used.

Note that `HardwareFloats` can only be used if all input data can be converted to floating-point numbers.

The trailing digits in floating-point results computed with `HardwareFloats` and `SoftwareFloats` may differ.

`Diagonalization`, `Interpolation`, `Krylov`, `TaylorExpansion`

The specification of a method implies `SoftwareFloats`, i.e., the result is always computed via the software arithmetic of the MuPAD kernel.

The method `TaylorExpansion` is the default algorithm. It produces fast results for matrices with small norms.

The default method `TaylorExpansion` computes each individual component of e^A, e^A x, or e^A X to a relative precision of about `10^(-DIGITS)`, unless numerical roundoff prevents reaching this precision goal. Roughly speaking: all digits of all components of the result are reliable up to roundoff effects.

 Note:   The methods `Diagonalization`, `Interpolation`, and `Krylov` compute the result to a relative precision w.r.t. the norm: ‖error‖ ≈ ‖e^A‖ · 10^(-DIGITS). Consequently, if the result has components of different orders of magnitude, then the smaller components have larger relative errors than the large components. Not all digits of the small components are reliable! Cf. Example 2.
 Note:   The method `Diagonalization` only works for diagonalizable matrices. For matrices without a basis of eigenvectors, `numeric::expMatrix` may either produce an error or return a result dominated by roundoff effects. For symmetric/Hermitian and skew-symmetric/skew-Hermitian matrices, this method produces reliable results.
 Note:   The method `Interpolation` may become numerically unstable for certain matrices. The algorithm tries to detect such instabilities and stops with an error message.

The method `Krylov` is only available for computing the action on a vector x. Vectors represented by n×1 matrices are also accepted.

This method is fast when x lies in the span of only a few eigenvectors of A. Further, if A has only a few clusters of similar eigenvalues, then this method can be much faster than the other methods. Cf. Example 3.

`NoWarning`

Suppresses warnings

`ReturnType`

Option, specified as `ReturnType = d`

Return the result matrix or vector as a matrix of domain type `d`. The following return types are available: `DOM_ARRAY`, `DOM_HFARRAY`, `Dom::Matrix()`, or `Dom::DenseMatrix()`.

## Return Values

All results are float matrices/vectors. For an n×n matrix A:

• `numeric::expMatrix(A, method)` returns e^A as an n×n matrix,

• `numeric::expMatrix(A, x, method)` returns e^A x as an n×1 matrix,

• `numeric::expMatrix(A, X, method)` returns e^A X as an n×m matrix.

The domain type of the result depends on the domain type of the input matrix A unless a return type is requested explicitly via `ReturnType` = `d`.

## Algorithms

The method `TaylorExpansion` sums the usual Taylor series

e^A = 1 + A + A^2/2! + A^3/3! + …

in a suitable numerically stable way.
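A minimal sketch of direct Taylor summation in Python/NumPy (illustrative only; MuPAD's stabilized summation and its stopping criteria are not reproduced here), checked against SciPy's `expm`:

```python
import numpy as np
from scipy.linalg import expm  # reference implementation for comparison

def exp_taylor(A, tol=1e-16, max_terms=500):
    """Sum the Taylor series e^A = 1 + A + A^2/2! + ... until terms are negligible."""
    result = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, max_terms):
        term = term @ A / k  # builds A^k / k! incrementally
        result = result + term
        if np.linalg.norm(term, 1) < tol * np.linalg.norm(result, 1):
            break
    return result

A = np.array([[1.0, 0.0], [1.0, np.pi]])  # the matrix from Example 1
assert np.allclose(exp_taylor(A), expm(A))
```

Plain summation like this is only safe for matrices of small norm, which is why production codes combine it with techniques such as scaling and squaring.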

The method `Diagonalization` computes e^A = T diag(e^λ1, e^λ2, …) T^(-1) via a diagonalization A = T diag(λ1, λ2, …) T^(-1).
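This formula can be sketched directly in Python/NumPy (illustrative; it assumes, as the method does, that A is diagonalizable):

```python
import numpy as np

def exp_diagonalization(A):
    """e^A = T diag(e^lam) T^(-1); valid only for diagonalizable A."""
    lam, T = np.linalg.eig(A)
    return (T * np.exp(lam)) @ np.linalg.inv(T)  # column-scale T by e^lam

A = np.array([[0.0, 1.0], [-1.0, 0.0]])  # skew-symmetric rotation generator
E = exp_diagonalization(A).real          # eigendecomposition is complex
# e^A is the rotation matrix [[cos 1, sin 1], [-sin 1, cos 1]]
assert np.allclose(E, [[np.cos(1), np.sin(1)], [-np.sin(1), np.cos(1)]])
```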

The method `Interpolation` computes a polynomial P interpolating the function exp at the eigenvalues of A. Evaluation of the matrix polynomial yields e^A = P(A).

The method `Krylov` reduces A to a Hessenberg matrix H and computes an approximation of e^A x from e^H. Depending on A and x, the dimension of H may be smaller than the dimension of A.
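A minimal Arnoldi-based sketch of this idea in Python/NumPy (illustrative only; MuPAD's implementation follows Saad's algorithm with its own error control). The vector e^A x is approximated by ‖x‖ · V e^H e_1, where V spans the Krylov subspace:

```python
import numpy as np
from scipy.linalg import expm

def exp_krylov(A, x, m=30):
    """Approximate e^A x by ||x|| * V_m @ expm(H_m) @ e_1 (Arnoldi reduction)."""
    n = len(x)
    m = min(m, n)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    beta = np.linalg.norm(x)
    V[:, 0] = x / beta
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):          # modified Gram-Schmidt orthogonalization
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:         # breakdown: Krylov subspace is exhausted
            m = j + 1
            break
        V[:, j + 1] = w / H[j + 1, j]
    Hm = H[:m, :m]                      # small Hessenberg matrix
    return beta * V[:, :m] @ expm(Hm)[:, 0]

A = np.diag(np.linspace(-1.0, 1.0, 50))
x = np.ones(50)
assert np.allclose(exp_krylov(A, x, m=50), expm(A) @ x)
```

The payoff is that `expm` is applied only to the small m×m matrix H; when the Krylov subspace is exhausted early (or the eigenvalues cluster), m stays far below n.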

`numeric::expMatrix` uses polynomial arithmetic to multiply matrices and vectors. Thus, sparse matrices are handled efficiently based on the MuPAD internal sparse representation of polynomials.

## References

Y. Saad, "Analysis of Some Krylov Subspace Approximations to the Matrix Exponential Operator", SIAM Journal on Numerical Analysis 29 (1992).