
Partial Differential Equation Toolbox™ software handles the following basic eigenvalue problem:

$$-\nabla \cdot \left(c\nabla u\right)+au=\lambda du$$

where *λ* is an unknown complex number. In solid mechanics, this is a problem associated with wave phenomena, describing, e.g., the natural modes of a vibrating membrane. In quantum mechanics, *λ* is the energy level of a bound state in the potential well *a*(*x*).

The numerical solution is found by discretizing the equation and solving the resulting algebraic eigenvalue problem. Let us first consider the discretization. Expand *u* in the FEM basis, multiply with a basis element, and integrate over the domain Ω. This yields the generalized eigenvalue equation

$$KU=\lambda MU$$

where the mass matrix *M* corresponds to the right-hand side, i.e.,

$${M}_{i,j}={\displaystyle \underset{\Omega}{\int}d(x){\varphi}_{j}(x){\varphi}_{i}(x)\text{\hspace{0.17em}}dx}$$
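As an illustration of this mass-matrix formula, the sketch below assembles *M* for piecewise-linear hat functions on a 1D mesh. The function name `mass_matrix_1d` and the use of Python/NumPy (rather than the toolbox's own assembly routines) are assumptions made for the example only.

```python
import numpy as np

def mass_matrix_1d(nodes, d=lambda x: 1.0):
    """Assemble M[i,j] = integral of d(x) * phi_j(x) * phi_i(x) dx for
    piecewise-linear hat functions on a 1D mesh, element by element
    (d is sampled at element midpoints; exact when d is constant)."""
    n = len(nodes)
    M = np.zeros((n, n))
    for e in range(n - 1):
        h = nodes[e + 1] - nodes[e]
        dmid = d(0.5 * (nodes[e] + nodes[e + 1]))  # midpoint value of d
        # local mass matrix of a linear element: d*h/6 * [[2, 1], [1, 2]]
        M[e:e + 2, e:e + 2] += dmid * h / 6.0 * np.array([[2.0, 1.0],
                                                          [1.0, 2.0]])
    return M

nodes = np.linspace(0.0, 1.0, 5)
M = mass_matrix_1d(nodes)
```

For *d* ≡ 1 the entries of *M* sum to the measure of the domain, here ∫₀¹ 1 dx = 1, which is a convenient sanity check on the assembly.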

The matrices *K* and *M* are produced by calling `assema` for the equations −∇ · (*c*∇*u*) + *au* = 0 and −∇ · (0∇*u*) + *du* = 0.

In the most common case, when the function *d*(*x*) is positive, the mass matrix *M* is positive definite and symmetric. Likewise, when *c*(*x*) is positive and we have Dirichlet boundary conditions, the stiffness matrix *K* is also positive definite.

The generalized eigenvalue problem, *KU* = *λMU*, is now solved by the Arnoldi algorithm applied to a shifted and inverted matrix, with restarts until all eigenvalues in the user-specified interval have been found.

Let us describe how this is done in more detail. You may want to look at the examples Eigenvalues and Eigenmodes of the L-Shaped Membrane or Eigenvalues and Eigenmodes of a Square, where actual runs are reported.

First a shift *µ* is determined close to where we want to find the eigenvalues. When both *K* and *M* are positive definite, it is natural to take *µ* = 0 and get the smallest eigenvalues; in other cases, take any point in the interval [lb,ub] where eigenvalues are sought. Subtract *µMU* from both sides and multiply by the inverse of the shifted matrix *K* − *µM* to get

$$\frac{1}{\lambda -\mu}U={\left(K-\mu M\right)}^{-1}MU$$

This is a standard eigenvalue problem *AU* = *θU*, with the matrix *A* = (*K* − *µM*)^{−1}*M* and eigenvalues

$${\theta}_{i}=\frac{1}{{\lambda}_{i}-\mu}$$

where *i* = 1, . . ., *n*. The largest eigenvalues *θ*_{i} of the transformed matrix *A* correspond to the eigenvalues *λ*_{i} = *µ* + 1/*θ*_{i} of the original pencil (*K*, *M*) closest to the shift *µ*.
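A small dense sanity check of this spectral mapping (NumPy standing in for the toolbox's sparse machinery; the SPD matrices below are arbitrary examples, not produced by any FEM assembly):

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((5, 5))
K = B @ B.T + 5 * np.eye(5)                   # SPD stand-in for the stiffness matrix
M = np.eye(5) + 0.1 * np.diag(np.arange(5))   # SPD diagonal "mass" matrix
mu = 1.0                                      # shift near the eigenvalues of interest

# Generalized eigenvalues of the pencil (K, M):  K u = lambda M u
lam = np.sort(np.linalg.eigvals(np.linalg.solve(M, K)).real)

# Shift-and-invert operator A = (K - mu*M)^{-1} M and its eigenvalues theta
A = np.linalg.solve(K - mu * M, M)
theta = np.linalg.eigvals(A).real

# theta_i = 1/(lambda_i - mu), so each lambda is recovered as mu + 1/theta
lam_rec = np.sort(mu + 1.0 / theta)
```

The recovered values `lam_rec` agree with the pencil eigenvalues `lam`, and the eigenvalues nearest the shift become the *largest* `theta`, which is what makes them easy for an iterative method to find.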

The Arnoldi algorithm computes an orthonormal basis *V* in which the shifted and inverted operator *A* is represented by a Hessenberg matrix *H*:

$$A{V}_{j}={V}_{j}{H}_{j,j}+{E}_{j}$$

(The subscripts mean that *V*_{j} and *E*_{j} have *j* columns, and *H*_{j,j} has *j* rows and columns. When no subscripts are used, we deal with vectors and matrices of size *n*.)

Some of the eigenvalues of this Hessenberg matrix *H*_{j,j} eventually give good approximations to the eigenvalues of the original pencil (*K*, *M*) as the basis grows in dimension *j* and less and less of the eigenvector information remains hidden in the residual matrix *E*_{j}.

The basis *V* is built one column *v*_{j} at a time. The first vector *v*_{1} is chosen at random, as *n* normally distributed random numbers. In step *j*, the first *j* vectors are already computed. Form the new candidate vector *v*_{j+1} = *Av*_{j} and orthogonalize it against the previous vectors with the Gram-Schmidt process.

This is formulated as $${h}_{j+1,j}{v}_{j+1}=A{v}_{j}-{V}_{j}{h}_{j}$$, where the column vector *h*_{j} consists of the Gram-Schmidt coefficients and *h*_{j+1,j} is the normalization factor that gives *v*_{j+1} unit length. Put the coefficients into the *j*th column of *H* to obtain

$$A{V}_{j}={V}_{j}{H}_{j,j}+{v}_{j+1}{h}_{j+1,j}{e}_{j}^{T}$$

where *H*_{j,j} is a *j* × *j* Hessenberg matrix with the vectors *h*_{j} as columns, and *e*_{j}^{T} is the *j*th unit row vector.
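The recurrence above can be exercised directly. The following sketch (Python/NumPy standing in for the toolbox internals, with a random dense matrix in place of the shift-inverted operator) runs *j* Arnoldi steps and verifies the relation *AV*_{j} = *V*_{j}*H*_{j,j} + *v*_{j+1}*h*_{j+1,j}*e*_{j}^{T}:

```python
import numpy as np

def arnoldi(A, v1, j):
    """j steps of the Arnoldi process: orthonormal basis V (n x (j+1)) and
    Hessenberg matrix H ((j+1) x j) satisfying
    A V_j = V_j H_{j,j} + v_{j+1} h_{j+1,j} e_j^T."""
    n = A.shape[0]
    V = np.zeros((n, j + 1))
    H = np.zeros((j + 1, j))
    V[:, 0] = v1 / np.linalg.norm(v1)
    for k in range(j):
        w = A @ V[:, k]                      # candidate vector A v_k
        for i in range(k + 1):               # Gram-Schmidt against previous columns
            H[i, k] = V[:, i] @ w
            w -= H[i, k] * V[:, i]
        H[k + 1, k] = np.linalg.norm(w)      # normalization factor
        V[:, k + 1] = w / H[k + 1, k]
    return V, H

rng = np.random.default_rng(1)
A = rng.standard_normal((20, 20))
j = 6
V, H = arnoldi(A, rng.standard_normal(20), j)

# Check the Arnoldi relation: the residual is rank one, carried by v_{j+1}
lhs = A @ V[:, :j]
rhs = V[:, :j] @ H[:j, :j] + np.outer(H[j, j - 1] * V[:, j], np.eye(j)[j - 1])
```

Note that *A* enters only through the products *Av*_{k}; this is what later allows the shifted and inverted operator to be applied via a precomputed factorization rather than formed explicitly.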

The eigensolution of the small Hessenberg matrix *H* gives approximations to some of the eigenvalues and eigenvectors of the large matrix operator *A* in the following way. Compute eigenvalues *θ*_{i} and eigenvectors *s*_{i} of *H*_{j,j}:

$${H}_{j,j}{s}_{i}={s}_{i}{\theta}_{i},\text{\hspace{0.17em}}\text{\hspace{0.17em}}i=1,\mathrm{...},j$$

Then *y*_{i} = *V*_{j}*s*_{i} is an approximate eigenvector of *A*, and its residual is

$${r}_{i}=A{y}_{i}-{y}_{i}{\theta}_{i}=A{V}_{j}{s}_{i}-{V}_{j}{s}_{i}{\theta}_{i}=(A{V}_{j}-{V}_{j}{H}_{j,j}){s}_{i}={v}_{j+1}{h}_{j+1,j}{s}_{j,i}$$

This residual has to be small in norm for *θ*_{i} to be a good eigenvalue approximation. The norm of the residual is

$$\Vert {r}_{i}\Vert =\left|{h}_{j+1,j}{s}_{j,i}\right|$$

the product of the last subdiagonal element of the Hessenberg matrix and the last element of its eigenvector. It seldom happens that *h*_{j+1,j} gets particularly small, but after sufficiently many steps *j* there are always some eigenvectors *s*_{i} with small last elements.
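This cheap residual formula can be checked numerically. The sketch below uses SciPy's full Hessenberg reduction as a stand-in for *j* Arnoldi steps (an assumption made for the illustration; the toolbox builds *H* incrementally) and compares |*h*_{j+1,j}*s*_{j,i}| with the true residual norms:

```python
import numpy as np
from scipy.linalg import hessenberg

rng = np.random.default_rng(2)
A = rng.standard_normal((30, 30))
H, Q = hessenberg(A, calc_q=True)       # A = Q H Q^T, H upper Hessenberg

j = 8                                   # size of the "small" eigenproblem
theta, S = np.linalg.eig(H[:j, :j])     # H_{j,j} s_i = s_i theta_i
Vj = Q[:, :j]                           # first j basis vectors

# Cheap estimate |h_{j+1,j} s_{j,i}| versus the true residual norms
# ||A y_i - theta_i y_i|| with y_i = V_j s_i
cheap = np.abs(H[j, j - 1] * S[j - 1, :])
true_norms = np.array(
    [np.linalg.norm(A @ (Vj @ S[:, i]) - theta[i] * (Vj @ S[:, i]))
     for i in range(j)]
)
```

The two agree to rounding error, which is why convergence can be tested on the size-*j* vectors *s*_{i} alone.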

It is not necessary to actually compute the eigenvector approximation *y*_{i} to get the norm of the residual; we only need to examine the short vectors *s*_{i} and flag those with tiny last components as converged.

This eigenvalue computation and test for convergence is done every few steps *j*, until all approximations to eigenvalues inside the interval [lb,ub] are flagged as converged, or until the basis reaches its maximum permitted size.

After this, the Arnoldi algorithm is restarted with a random vector, if all approximations inside the interval are flagged as converged, or else with the best unconverged approximate eigenvector *y*_{i}. In each step *j* of this second Arnoldi run, the vector is made orthogonal to all vectors in which converged eigenvectors have been found, so that the algorithm can pick up a second copy of any multiple eigenvalue that may lie in the interval.

This is a heuristic strategy that has worked well on symmetric, nonsymmetric, and even defective eigenvalue problems. There is a tiny theoretical chance of missing an eigenvalue, if all the random starting vectors happen to be orthogonal to its eigenvector. Normally, the algorithm restarts *p* times if the maximum multiplicity of an eigenvalue is *p*.
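The deflation idea behind these restarts — keeping the new iterate orthogonal to already-converged eigenvectors so that a second copy of a multiple eigenvalue can emerge — can be illustrated with power iteration as a greatly simplified stand-in for an Arnoldi run (the matrix, the eigenvalues, and all names below are invented for the demonstration):

```python
import numpy as np

rng = np.random.default_rng(4)
Q, _ = np.linalg.qr(rng.standard_normal((8, 8)))
# Symmetric test matrix with a DOUBLE largest eigenvalue 3
lam = np.array([3.0, 3.0, 2.0, 1.5, 1.0, 0.5, 0.2, 0.1])
A = Q @ np.diag(lam) @ Q.T

def power_iter(A, v, P=None, iters=500):
    """Power iteration; if P is given, the iterate is re-orthogonalized
    against its columns (the converged eigenvectors) in every step."""
    for _ in range(iters):
        if P is not None:
            v = v - P @ (P.T @ v)       # deflate converged directions
        v = A @ v
        v = v / np.linalg.norm(v)
    return v

v1 = power_iter(A, rng.standard_normal(8))                 # first copy of 3
v2 = power_iter(A, rng.standard_normal(8), P=v1[:, None])  # deflated restart

r1 = v1 @ A @ v1   # Rayleigh quotients: both converge to 3 ...
r2 = v2 @ A @ v2   # ... while v1 and v2 stay orthogonal
```

Without the deflation, the second run would simply reconverge to (a vector in the direction of) `v1`; with it, the restart finds the second, orthogonal eigenvector of the double eigenvalue.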

The shifted and inverted matrix *A* = (*K* − *µM*)^{−1}*M* is needed only to operate on a vector *v*_{j} in the Arnoldi algorithm. This is done by computing an LU factorization,

$$P\left(K-\mu M\right)Q=LU$$

using the sparse MATLAB® command `lu` (*P* and *Q* are permutations that make the triangular factors *L* and *U* sparse and the factorization numerically stable). This factorization needs to be done only once, at the beginning; then *x* = *Av*_{j} is computed as

$$x=Q{U}^{-1}{L}^{-1}PM{v}_{j}$$

with one sparse matrix-vector multiplication, a permutation, sparse forward- and back-substitutions, and a final renumbering.
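In Python terms, the same factor-once, apply-many pattern might look like the following sketch, with SciPy's `splu` (which likewise computes a permuted sparse LU factorization) playing the role of MATLAB's sparse `lu`; the 1D model matrices and the shift are illustrative assumptions, not toolbox output:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu

n = 200
h = 1.0 / (n + 1)
# 1D model pencil: tridiagonal stiffness K and mass M, stored sparse (CSC)
K = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format='csc') / h
M = sp.diags([1.0, 4.0, 1.0], [-1, 0, 1], shape=(n, n), format='csc') * (h / 6.0)

mu = 5.0
lu = splu((K - mu * M).tocsc())   # factor the shifted matrix ONCE

# Each Arnoldi step then applies A = (K - mu*M)^{-1} M with one sparse
# matrix-vector product followed by the two triangular solves:
v = np.random.default_rng(3).standard_normal(n)
x = lu.solve(M @ v)
```

Because the factorization is reused for every basis vector, its one-time cost is amortized over the whole Arnoldi run.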
