Power Method
We now describe the power method for computing the dominant
eigenpair. Its extension, the inverse power method, is practical for finding
any eigenvalue provided that a good initial approximation is known. Some
schemes for finding eigenvalues use other methods that converge quickly but have
limited precision; the inverse power method is then invoked to refine the
numerical values and gain full precision. To discuss the situation, we
need the following definitions.
Definition. If $\lambda_1$ is an eigenvalue of $A$ that
is larger in absolute value than any other eigenvalue, it is called the
dominant eigenvalue. An eigenvector $V_1$ corresponding to $\lambda_1$ is
called a dominant eigenvector.
Definition. An eigenvector $V$ is said to be normalized
if the coordinate of largest magnitude is equal to unity (i.e., the largest
coordinate in the vector $V$ is the number 1).
Remark. It is easy to normalize an eigenvector $X$
by forming the new vector $V = \frac{1}{c}X$, where $c$ is the coordinate
of $X$ of largest magnitude.
Theorem (Power Method). Assume that the $n \times n$
matrix $A$ has $n$ distinct eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_n$
and that they are ordered in decreasing magnitude; that is,
$$|\lambda_1| > |\lambda_2| \ge |\lambda_3| \ge \cdots \ge |\lambda_n|.$$
If $X_0$ is chosen appropriately, then the sequences $\{X_k\}$
and $\{c_k\}$ generated recursively by
$$Y_k = A X_k \quad \text{and} \quad X_{k+1} = \frac{1}{c_{k+1}} Y_k,$$
where $c_{k+1}$ is the coordinate of $Y_k$ of largest magnitude, will
converge to the dominant eigenvector $V_1$ and eigenvalue $\lambda_1$, respectively.
That is,
$$\lim_{k \to \infty} X_k = V_1 \quad \text{and} \quad \lim_{k \to \infty} c_k = \lambda_1.$$
Remark. If $X_0$ is an eigenvector and $X_0 \ne V_1$,
then some other starting vector must be chosen.
Speed of Convergence
The iteration in the theorem uses the equation
$$A^k X_0 = b_1 \lambda_1^k V_1 + b_2 \lambda_2^k V_2 + \cdots + b_n \lambda_n^k V_n,$$
and, relative to the coefficient of $V_1$, the coefficient of $V_2$
that is used to form $X_{k+1}$ goes
to zero in proportion to $(\lambda_2/\lambda_1)^k$. Hence,
the speed of convergence of $\{X_k\}$ to $V_1$
is governed by the terms $(\lambda_2/\lambda_1)^k$. Consequently,
the rate of convergence is linear. Similarly, the convergence of the sequence
of constants $\{c_k\}$ to $\lambda_1$ is linear. The Aitken $\Delta^2$
method can be used for any linearly convergent sequence $\{p_k\}$
to form a new sequence,
$$q_k = p_k - \frac{(p_{k+1} - p_k)^2}{p_{k+2} - 2p_{k+1} + p_k},$$
that converges faster. The Aitken $\Delta^2$ method
can be adapted to speed up the convergence of the power method.
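As an illustration, the Aitken $\Delta^2$ formula above can be sketched in a few lines of Python (the helper name `aitken` and the sample sequence are our own, not from the text):

```python
def aitken(p):
    """Aitken Delta^2 acceleration: q_k = p_k - (dp_k)^2 / (d^2 p_k),
    computed from three consecutive terms of a linearly convergent sequence."""
    q = []
    for k in range(len(p) - 2):
        d1 = p[k + 1] - p[k]                     # forward difference
        d2 = p[k + 2] - 2.0 * p[k + 1] + p[k]    # second difference
        q.append(p[k] - d1 * d1 / d2 if d2 != 0 else p[k + 2])
    return q

# Made-up linearly convergent sequence with limit 3; Aitken's method
# recovers the limit from just three consecutive terms.
p = [3.0 + 0.5 ** k for k in range(6)]
q = aitken(p)
```

For a sequence of the exact form $L + A r^k$ the formula recovers $L$ exactly; for the sequence $\{c_k\}$ produced by the power method it merely converges faster than the original.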
Shifted-Inverse Power Method
We will now discuss the shifted-inverse power method. It requires a
good starting approximation for an eigenvalue, and then iteration is used to
obtain a precise solution. Other procedures, such as the
QR and Givens' methods, are used first to obtain the starting approximations. Cases
involving complex eigenvalues, multiple eigenvalues, or the presence of two
eigenvalues with the same magnitude or approximately the same
magnitude will cause computational difficulties and require more advanced
methods. Our illustrations will focus on the case where the eigenvalues are
distinct. The shifted-inverse power method is based on the following three
results (the proofs are left as exercises).
Theorem (Shifting Eigenvalues). Suppose that $\lambda$, $V$ is
an eigenpair of $A$. If $\alpha$ is any constant, then $\lambda - \alpha$, $V$ is
an eigenpair of the matrix $A - \alpha I$.
Theorem (Inverse Eigenvalues). Suppose that $\lambda$, $V$ is
an eigenpair of $A$. If $\lambda \ne 0$, then $1/\lambda$, $V$ is
an eigenpair of the matrix $A^{-1}$.
Theorem (Shifted-Inverse Eigenvalues). Suppose that $\lambda$, $V$ is
an eigenpair of $A$. If $\alpha \ne \lambda$,
then $1/(\lambda - \alpha)$, $V$ is
an eigenpair of the matrix $(A - \alpha I)^{-1}$.
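The three relations above are easy to verify numerically on a small example. The following Python sketch uses an illustrative $2 \times 2$ matrix with known eigenpair $\lambda = 3$, $V = (1, 1)$ (the matrix, the shift, and all names are our own choices, not from the text):

```python
def matvec(M, v):
    """Matrix-vector product for a matrix stored as a list of rows."""
    return [sum(m * x for m, x in zip(row, v)) for row in M]

# Illustrative eigenpair: A has eigenvalue lam = 3 with eigenvector V = (1, 1).
A = [[2.0, 1.0], [1.0, 2.0]]
V = [1.0, 1.0]
lam, alpha = 3.0, 2.0

# Shifting:  (A - alpha I) V = (lam - alpha) V
B = [[A[i][j] - (alpha if i == j else 0.0) for j in range(2)] for i in range(2)]
assert all(abs(l - (lam - alpha) * v) < 1e-12 for l, v in zip(matvec(B, V), V))

# Inverse:  A^(-1) V = (1/lam) V   (2x2 inverse written out directly)
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
A_inv = [[A[1][1] / det, -A[0][1] / det], [-A[1][0] / det, A[0][0] / det]]
assert all(abs(l - v / lam) < 1e-12 for l, v in zip(matvec(A_inv, V), V))

# Shifted inverse:  (A - alpha I)^(-1) V = V / (lam - alpha)
det = B[0][0] * B[1][1] - B[0][1] * B[1][0]
B_inv = [[B[1][1] / det, -B[0][1] / det], [-B[1][0] / det, B[0][0] / det]]
assert all(abs(l - v / (lam - alpha)) < 1e-12
           for l, v in zip(matvec(B_inv, V), V))
```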
Theorem (Shifted-Inverse Power Method). Assume that the
$n \times n$ matrix $A$ has
distinct eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_n$ and
consider the eigenvalue $\lambda_j$.
Then a constant $\alpha$ can be chosen so that
$$\mu_1 = \frac{1}{\lambda_j - \alpha}$$
is the dominant eigenvalue of $(A - \alpha I)^{-1}$. Furthermore,
if $X_0$ is chosen appropriately, then the sequences $\{X_k\}$
and $\{c_k\}$ generated recursively by
$$Y_k = (A - \alpha I)^{-1} X_k \quad \text{and} \quad X_{k+1} = \frac{1}{c_{k+1}} Y_k,$$
where $c_{k+1}$ is the coordinate of $Y_k$ of largest magnitude, will
converge to the dominant eigenpair $\mu_1$, $V_j$ of
the matrix $(A - \alpha I)^{-1}$.
Finally, the corresponding eigenvalue for the matrix $A$ is
given by the calculation
$$\lambda_j = \frac{1}{\mu_1} + \alpha.$$
Remark. For practical implementations of this theorem, a linear system
solver is used to compute $Y_k$ in each step by solving the linear system
$$(A - \alpha I) Y_k = X_k.$$
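A minimal Python sketch of this iteration follows, assuming a small $2 \times 2$ system so the linear solve can be written out with Cramer's rule (the function names, test matrix, and shift $\alpha = 0.9$ are all illustrative choices):

```python
def solve2(M, b):
    """Solve a 2x2 linear system M y = b by Cramer's rule (illustration only;
    a real implementation would use an LU factorization)."""
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [(b[0] * M[1][1] - b[1] * M[0][1]) / det,
            (M[0][0] * b[1] - M[1][0] * b[0]) / det]

def shifted_inverse_power(A, alpha, x0, max_iter=100, tol=1e-12):
    """Shifted-inverse power method sketch: returns (lam, x), where lam is
    the eigenvalue of A nearest the shift alpha and x an eigenvector
    normalized so its largest-magnitude coordinate is 1."""
    n = len(A)
    B = [[A[i][j] - (alpha if i == j else 0.0) for j in range(n)]
         for i in range(n)]
    x, c = x0[:], 0.0
    for _ in range(max_iter):
        y = solve2(B, x)                 # solves (A - alpha I) y = x
        c_prev, c = c, max(y, key=abs)   # coordinate of largest magnitude
        x = [yi / c for yi in y]         # normalize so max coordinate is 1
        if abs(c - c_prev) < tol:
            break
    return alpha + 1.0 / c, x            # lam_j = 1/mu_1 + alpha

# Illustrative example: eigenvalues of A are 3 and 1; with shift 0.9 the
# iteration converges to the eigenvalue 1 and eigenvector (1, -1).
lam, x = shifted_inverse_power([[2.0, 1.0], [1.0, 2.0]], 0.9, [1.0, 0.0])
```

Note that the matrix $A - \alpha I$ is formed once; only the right-hand side changes from step to step, which is why a factorization-based solver is the practical choice.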
Mathematica Subroutine (Power Method). To compute the dominant eigenvalue $\lambda_1$ and
its associated eigenvector $V_1$ for
the $n \times n$ matrix $A$. It is assumed that the $n$ eigenvalues have the dominance
property
$$|\lambda_1| > |\lambda_2| \ge |\lambda_3| \ge \cdots \ge |\lambda_n| > 0.$$
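A minimal sketch of this subroutine's iteration, written here in Python rather than Mathematica (the function name and the $2 \times 2$ test matrix are our own illustrations):

```python
def power_method(A, x0, max_iter=100, tol=1e-12):
    """Power method sketch: returns (c, x) approximating the dominant
    eigenvalue and a normalized dominant eigenvector of A.

    A is a list of rows, x0 a starting vector; each step forms Y = A X
    and rescales by the coordinate of Y of largest magnitude."""
    x, c = x0[:], 0.0
    for _ in range(max_iter):
        y = [sum(a * xi for a, xi in zip(row, x)) for row in A]
        c_prev, c = c, max(y, key=abs)   # coordinate of largest magnitude
        x = [yi / c for yi in y]         # normalize so max coordinate is 1
        if abs(c - c_prev) < tol:
            break
    return c, x

# Illustrative 2x2 example: eigenvalues 3 and 1, dominant eigenvector (1, 1).
c, x = power_method([[2.0, 1.0], [1.0, 2.0]], [1.0, 0.0])
```

The error in $c$ shrinks by roughly a factor of $\lambda_2/\lambda_1$ per step, in line with the linear convergence rate discussed above.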
Application to Markov Chains
In the study of Markov chains the elements of the transition
matrix are the probabilities of moving from any state to any other state. A
Markov process can be described by a square matrix whose entries are all
positive and whose column sums are all equal to 1. For example, a $3 \times 3$ transition
matrix looks like
$$A = \begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix},$$
where $a_{11} + a_{21} + a_{31} = 1$, $a_{12} + a_{22} + a_{32} = 1$, and $a_{13} + a_{23} + a_{33} = 1$. The
initial state vector is $P_0 = \begin{pmatrix} p_1 & p_2 & p_3 \end{pmatrix}^T$.
The computation $p_1 \begin{pmatrix} a_{11} & a_{21} & a_{31} \end{pmatrix}^T$ shows
how the portion $p_1$ is redistributed in the next state. Similarly,
$p_2 \begin{pmatrix} a_{12} & a_{22} & a_{32} \end{pmatrix}^T$ shows how the portion $p_2$
is redistributed in the next state,
and $p_3 \begin{pmatrix} a_{13} & a_{23} & a_{33} \end{pmatrix}^T$ shows
how the portion $p_3$ is redistributed in the next state.
Therefore, the distribution for the next state is
$$P_1 = A P_0.$$
A recursive sequence is generated using the general rule
$$P_{k+1} = A P_k \quad \text{for } k = 0, 1, 2, \ldots.$$
We desire to know the limiting distribution $P = \lim_{k \to \infty} P_k$. Since
we will also have $\lim_{k \to \infty} P_{k+1} = P$, we
obtain the relation
$$P = A P,$$
from which it follows that $\lambda = 1$, $P$ is an eigenpair of $A$.
Therefore the limiting distribution $P$ is the eigenvector corresponding
to the dominant eigenvalue $\lambda_1 = 1$. The
following subroutine reminds us of the iteration used in the power method.
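For instance, iterating $P_{k+1} = A P_k$ on a small transition matrix converges quickly to the limiting distribution (the matrix entries below are made-up example data, not from the text):

```python
# Illustrative 3x3 transition matrix (each column sums to 1) and an
# initial state concentrated entirely in state 1.
A = [[0.8, 0.1, 0.3],
     [0.1, 0.6, 0.3],
     [0.1, 0.3, 0.4]]
P = [1.0, 0.0, 0.0]

# Iterate P_{k+1} = A P_k.  For a transition matrix this is the power
# method for the dominant eigenvalue lam = 1; no rescaling is needed,
# because the unit column sums keep the entries of P summing to 1.
for _ in range(200):
    P = [sum(A[i][j] * P[j] for j in range(3)) for i in range(3)]

# P is now (to machine precision) the limiting distribution: A P = P.
AP = [sum(A[i][j] * P[j] for j in range(3)) for i in range(3)]
```

Note that here the natural normalization is by the sum of the entries (a probability distribution) rather than by the largest coordinate, but the fixed point reached is the same dominant eigenvector up to scaling.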