
Partial Differential Equation Toolbox™ software handles the following basic eigenvalue problem:

$$-\nabla \cdot \left(c\nabla u\right)+au=\lambda du$$

where *λ* is an unknown complex number.
In solid mechanics, this is a problem associated with wave phenomena
describing, e.g., the natural modes of a vibrating membrane. In quantum
mechanics *λ* is the energy level of a bound
state in the potential well *a*(**x**),
where **x** represents a 2-D or 3-D point.

The numerical solution is found by discretizing the equation
and solving the resulting algebraic eigenvalue problem. Let us first
consider the discretization. Expand *u* in the FEM
basis, multiply with a basis element, and integrate on the domain
Ω. This yields the generalized eigenvalue equation

*KU* = *λMU*

where the mass matrix corresponds to the right side, i.e.,

$${M}_{i,j}={\displaystyle \underset{\Omega}{\int}d(x){\varphi}_{j}(x){\varphi}_{i}(x)\text{\hspace{0.17em}}dx}$$
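For illustration, the analogous matrices can be assembled by hand for a hypothetical 1-D problem with *c* = *d* = 1, *a* = 0, and homogeneous Dirichlet conditions on [0, 1] (a NumPy sketch, not the toolbox's `assema`); with linear elements both matrices are tridiagonal, and the smallest generalized eigenvalue approximates π²:

```python
import numpy as np
from scipy.linalg import eigh

# Hypothetical 1-D model problem: -u'' = lambda*u on [0,1], u(0)=u(1)=0,
# discretized with linear finite elements on a uniform mesh.
n = 100                      # number of interior nodes
h = 1.0 / (n + 1)            # element size

# Stiffness matrix K_ij = integral of phi_i' phi_j' (c(x) = 1)
K = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h
# Mass matrix M_ij = integral of phi_j phi_i (d(x) = 1)
M = (np.diag(4.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1)) * h / 6.0

# Smallest generalized eigenvalue of K U = lambda M U approximates pi^2
lam = eigh(K, M, eigvals_only=True)
print(lam[0])
```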

The matrices *K* and *M* are
produced by calling `assema` for the equations

$$-\nabla \cdot \left(c\nabla u\right)+au=0\quad \text{and}\quad -\nabla \cdot \left(0\nabla u\right)+du=0$$

In the most common case, when the function *d*(**x**) is positive, the mass matrix *M* is
symmetric positive definite. Likewise, when *c*(**x**) is positive and we have Dirichlet boundary
conditions, the stiffness matrix *K* is also positive
definite.

The generalized eigenvalue problem, *KU* = *λMU*,
is now solved by the *Arnoldi algorithm* applied
to a shifted and inverted matrix with restarts until all eigenvalues
in the user-specified interval have been found.
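The same shift-and-invert strategy is available in SciPy's `eigsh` (an illustrative sketch on a hypothetical 1-D finite-element model problem, not the toolbox's own solver); `sigma` plays the role of the shift, and the solver returns the eigenvalues closest to it:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import eigsh

# Hypothetical 1-D model problem: K U = lambda M U with linear elements
n = 200
h = 1.0 / (n + 1)
K = (diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n)) / h).tocsc()
M = (diags([1.0, 4.0, 1.0], [-1, 0, 1], shape=(n, n)) * h / 6.0).tocsc()

# Shift-invert: eigsh internally works with (K - sigma*M)^{-1} M and
# returns the eigenvalues lambda closest to sigma.
vals = eigsh(K, k=4, M=M, sigma=0.0, return_eigenvectors=False)
vals = np.sort(vals)
print(vals)                  # close to (k*pi)^2 for k = 1..4
```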

Let us describe how this is done in more detail. You may want to look at the examples Eigenvalues and Eigenfunctions for the L-Shaped Membrane or Eigenvalues and Eigenmodes of a Square, where actual runs are reported.

First a shift *µ* is determined close
to where we want to find the eigenvalues. When both *K* and *M* are
positive definite, it is natural to take *µ* =
0, and get the smallest eigenvalues; in other cases take any point
in the interval [lb,ub] where eigenvalues are sought. Subtracting *µMU* from
both sides of the eigenvalue equation gives (*K* – *µM*)*U* =
(*λ* – *µ*)*MU*.
Multiplying by the inverse of this shifted matrix gives

$$\frac{1}{\lambda -\mu}U={\left(K-\mu M\right)}^{-1}MU$$

This is a standard eigenvalue problem *AU* =
θ*U*, with the matrix *A* = (*K* – *µM*)^{-1}*M* and
eigenvalues

$${\theta}_{i}=\frac{1}{{\lambda}_{i}-\mu}$$

where *i* = 1, . . ., *n*.
The largest eigenvalues *θ _{i}* of
the transformed matrix *A* correspond to the eigenvalues *λ _{i}* of the original pencil that lie closest to the shift *µ*.
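This spectral transformation can be checked numerically on a small dense example (random symmetric positive definite *K* and *M*, an assumed illustrative setup):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
# Random symmetric positive definite K and M (illustrative only)
X = rng.standard_normal((n, n)); K = X @ X.T + n * np.eye(n)
Y = rng.standard_normal((n, n)); M = Y @ Y.T + n * np.eye(n)

mu = -1.0    # shift; any value that is not itself an eigenvalue works

# Eigenvalues of the pencil K U = lambda M U
lam = np.sort(np.linalg.eigvals(np.linalg.solve(M, K)).real)
# Eigenvalues of the shifted and inverted operator A = (K - mu*M)^{-1} M
theta = np.sort(np.linalg.eigvals(np.linalg.solve(K - mu * M, M)).real)

# theta_i = 1/(lambda_i - mu): same spectrum after the transformation
print(np.max(np.abs(np.sort(1.0 / (lam - mu)) - theta)))
```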

The Arnoldi algorithm computes an orthonormal basis *V* where
the shifted and inverted operator *A* is represented
by a Hessenberg matrix *H*,

*AV _{j}* = *V _{j}H _{j,j}* + *E _{j}*

(The subscripts mean that *V _{j}* and *H _{j,j}* have *j* columns each, while *E _{j}*, the residual matrix, is nonzero only in its last column.)

Some of the eigenvalues of this Hessenberg matrix *H _{j,j}* eventually
give good approximations to the eigenvalues of the original pencil
(*K*, *M*) as the basis grows in dimension *j* and less and less of the eigenvector is hidden in the residual matrix *E _{j}*.

The basis *V* is built one column *v _{j}* at
a time. The first vector *v _{1}* is chosen at random, as *n* normally distributed random numbers. In step *j*, the vector *v _{j}* is multiplied by the matrix *A*, and the result is orthogonalized against all the previous vectors *v _{1}*, . . ., *v _{j}* and normalized to give *v _{j+1}*.

This is formulated as $${h}_{j+1,j}{v}_{j+1}=A{v}_{j}-{V}_{j}{h}_{j}$$,
where the column vector *h _{j}* consists
of the Gram-Schmidt coefficients, and *h _{j+1,j}* is chosen so that *v _{j+1}* has unit norm. Collecting the steps 1 through *j*,

$$A{V}_{j}={V}_{j}{H}_{j,j}+{v}_{j+1}{h}_{j+1,j}{e}_{j}^{T}$$

where *H _{j,j}* is a *j*-by-*j* Hessenberg matrix with the vectors *h _{j}* as columns, and *e _{j}* is the *j*th unit vector.
The eigensolution of the small Hessenberg matrix *H _{j,j}* gives
approximations to some of the eigenvalues and eigenvectors of the
large matrix operator *A* in
the following way. Compute the eigenvalues *θ _{i}* and eigenvectors *s _{i}* of *H _{j,j}*,

$${H}_{j,j}{s}_{i}={s}_{i}{\theta}_{i},\text{\hspace{0.17em}}\text{\hspace{0.17em}}i=1,\mathrm{...},j$$

Then *y _{i}* = *V _{j}s _{i}* is an approximate eigenvector of *A*, and its residual is

$${r}_{i}=A{y}_{i}-{y}_{i}{\theta}_{i}=A{V}_{j}{s}_{i}-{V}_{j}{s}_{i}{\theta}_{i}=(A{V}_{j}-{V}_{j}{H}_{j,j}){s}_{i}={v}_{j+1}{h}_{j+1,j}{s}_{j,i}$$

This residual has to be small in norm for *θ _{i}* to
be a good eigenvalue approximation. The norm of the residual is

$$\Vert {r}_{i}\Vert =\left|{h}_{j+1,j}{s}_{j,i}\right|$$

the product of the last subdiagonal element of the Hessenberg
matrix and the last element of its eigenvector. It seldom happens
that *h*_{j+1,j} gets
particularly small, but after sufficiently many steps *j* there
are always some eigenvectors *s _{i}* with
small last elements. Note that the long vector *v _{j+1}* has unit norm.

It is not necessary to actually compute the eigenvector approximation *y _{i}* to
get the norm of the residual; we only need to examine the short vectors *s _{i}*, and flag as converged those with a sufficiently small last element.
This eigenvalue computation and test for convergence is done
every few steps *j*, until all approximations to
eigenvalues inside the interval [lb,ub] are flagged as converged.
When *n* is much larger than *j*,
this is done very often; for smaller *n*, more seldom.
When all eigenvalues inside the interval have converged, or when *j* has
reached a prescribed maximum, the converged eigenvectors, or more
appropriately *Schur vectors*, are computed and
put in the front of the basis *V*.

After this, the Arnoldi algorithm is restarted with a random
vector, if all approximations inside the interval are flagged as converged,
or else with the best unconverged approximate eigenvector *y _{i}*.
In each step of such a restarted run, the new basis vectors are kept orthogonal to the converged Schur vectors as well, so the iteration effectively operates on a deflated problem and can find further copies of multiple eigenvalues.

This is a heuristic strategy that has worked well on symmetric,
nonsymmetric, and even defective eigenvalue problems. There is a tiny
theoretical chance of missing an eigenvalue, if all the random starting
vectors happen to be orthogonal to its eigenvector. If the maximum multiplicity
of an eigenvalue is *p*, the algorithm normally restarts *p* times, and at each restart a new random
starting direction is introduced.

The shifted and inverted matrix *A* = (*K* – *µM*)^{–1}*M* is
needed only to operate on a vector *v _{j}* in
the Arnoldi algorithm. This is done by computing an LU factorization,

*P*(*K* – *µM*)*Q* = *LU*

using the sparse MATLAB^{®} command `lu`
(*P* and *Q* are
permutations that make the triangular factors *L* and *U* sparse
and the factorization numerically stable). This factorization needs
to be done only once, at the beginning; thereafter *x* = *Av _{j}* is
computed as

*x* = *QU*^{–1}*L*^{–1}*PMv _{j}*

with one sparse matrix vector multiplication, a permutation, sparse forward- and back-substitutions, and a final renumbering.
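In SciPy the same one-time-factorization pattern looks as follows (a sketch on a hypothetical 1-D model problem; `splu` computes the permuted sparse LU factorization once, and its `solve` method performs the permutations and triangular solves):

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import splu

# Hypothetical 1-D model matrices K and M (linear elements on [0, 1])
n = 300
h = 1.0 / (n + 1)
K = (diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n)) / h).tocsc()
M = (diags([1.0, 4.0, 1.0], [-1, 0, 1], shape=(n, n)) * h / 6.0).tocsc()

mu = 20.0                            # shift inside the interval of interest
lu = splu((K - mu * M).tocsc())      # P (K - mu*M) Q = L U, computed once

def apply_A(v):
    """x = A v with A = (K - mu*M)^{-1} M: one sparse matvec,
    then the permuted forward- and back-substitutions."""
    return lu.solve(M @ v)

v = np.random.default_rng(3).standard_normal(n)
x = apply_A(v)
# Verify (K - mu*M) x = M v
print(np.linalg.norm((K - mu * M) @ x - M @ v))
```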
