
Power Method for Approximating Eigenvalues - (4.1)

1. Symmetric matrices:
If A is a symmetric n × n matrix, then the eigenvalues of A are real numbers and there exist n eigenvectors of A that are mutually orthogonal.
Example A = [ 2 −1 ; −1 2 ], λ = 1, 3, v_1 = (1, 1)ᵀ, v_2 = (−1, 1)ᵀ, and v_1ᵀ v_2 = 0.
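A quick numerical check of this example (a sketch assuming NumPy; np.linalg.eigh is used because A is symmetric):

```python
import numpy as np

# Symmetric matrix from the example above.
A = np.array([[ 2.0, -1.0],
              [-1.0,  2.0]])

# For a symmetric matrix, eigh returns real eigenvalues and an
# orthonormal set of eigenvectors (the columns of V).
lam, V = np.linalg.eigh(A)

print(lam)               # [1. 3.]  -- the eigenvalues are real
print(V[:, 0] @ V[:, 1]) # ~0       -- the eigenvectors are orthogonal
```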

2. Gersgorin Circle:
Let A be an n × n matrix and let R_i be the circle in the complex plane ℂ with center a_ii and radius ∑_{j=1, j≠i}^n |a_ij|, that is,
    R_i = { z ∈ ℂ : |z − a_ii| ≤ ∑_{j=1, j≠i}^n |a_ij| }.

The eigenvalues of A are contained within R = ∪_{i=1}^n R_i. Moreover, the union of any k of these circles that do not intersect the remaining n − k circles contains precisely k (counting multiplicities) of the eigenvalues.
Example Let A = [ 4 1 1 ; 0 2 1 ; −2 0 9 ]. Use the Gersgorin Circle Theorem to determine bounds for the eigenvalues of A.

R_1 = { z : |z − 4| ≤ 2 },
R_2 = { z : |z − 2| ≤ 1 },
R_3 = { z : |z − 9| ≤ 2 }.

[Figure: the three Gersgorin circles R_1, R_2, R_3 plotted in the complex plane.]

Since R_1 and R_2 are disjoint from R_3, there are two eigenvalues within R_1 ∪ R_2 and one within R_3. Hence,
    1 ≤ |λ_1|, |λ_2| ≤ 6  and  7 ≤ |λ_3| ≤ 11.
Moreover, 7 ≤ ρ(A) ≤ 11, and since 1 ≤ |λ_i| for every i, A is nonsingular.
The eigenvalues of A are 8.48534949329733, 4.63182668375403, and 1.88282382294864.
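These bounds can be checked numerically; the following sketch (NumPy assumed) computes the centers and radii of the Gersgorin circles for this A and compares them with the computed eigenvalues:

```python
import numpy as np

A = np.array([[ 4.0, 1.0, 1.0],
              [ 0.0, 2.0, 1.0],
              [-2.0, 0.0, 9.0]])

# Gersgorin circles: center a_ii, radius = sum of |a_ij| over j != i.
centers = np.diag(A)
radii = np.sum(np.abs(A), axis=1) - np.abs(centers)
for c, r in zip(centers, radii):
    print(f"circle: center {c}, radius {r}")

# Every eigenvalue must lie in the union of the circles.
print(np.sort(np.linalg.eigvals(A)))  # approx 1.883, 4.632, 8.485
```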

Example Let A = [ 4 −1 −1 ; −1 4 −1 ; −1 −1 4 ]. Use the Gersgorin Circle Theorem to determine bounds for the eigenvalues of A.
A is symmetric, so all its eigenvalues are real. Each circle is R_i = { z ∈ ℝ : |z − 4| ≤ 2 }, so 2 ≤ λ ≤ 6 for every eigenvalue. Hence A is positive definite (and therefore nonsingular).

3. Power Method:
Let A be an n × n real matrix, and let (λ_i, v_i) for i = 1, …, n be eigenpairs of A where
    |λ_1| > |λ_2| ≥ ⋯ ≥ |λ_n| ≥ 0.
The Power Method is an iterative method that finds λ_1 asymptotically.
Idea: Assume that v_1, …, v_n are linearly independent. Let x be in ℝ^n with x = v_1 + c_2 v_2 + ⋯ + c_n v_n. Then
    Ax = Av_1 + c_2 Av_2 + ⋯ + c_n Av_n = λ_1 v_1 + c_2 λ_2 v_2 + ⋯ + c_n λ_n v_n
and
    A^k x = λ_1^k v_1 + c_2 λ_2^k v_2 + ⋯ + c_n λ_n^k v_n,  for k = 1, 2, … .
Since
    A^k x = λ_1^k ( v_1 + c_2 (λ_2/λ_1)^k v_2 + ⋯ + c_n (λ_n/λ_1)^k v_n )
and |λ_j/λ_1| < 1 for j ≥ 2, each ratio (λ_j/λ_1)^k → 0 as k → ∞. When k is very large, A^k x ≈ λ_1^k v_1.
Algorithm: Given A, x, and a stopping criterion ε, let x_0 = (1/‖x‖_∞) x. Because ‖x_0‖_∞ = 1, there is an index p_0 with
    |(x_0)_{p_0}| = ‖x_0‖_∞ = 1.
Approximate (λ_1, v_1) as follows: For k = 1, 2, …,
(1) Compute y_k = A x_{k−1}.
(2) Find p_k such that |(y_k)_{p_k}| = ‖y_k‖_∞.
(3) Let r_k = (y_k)_{p_k} and x_k = (1/r_k) y_k, so that ‖x_k‖_∞ = 1.
(4) If ‖x_k − x_{k−1}‖_∞ < ε, then λ_1 ≈ r_k and v_1 ≈ x_k. Otherwise, set k = k + 1 and repeat steps (1)–(4).

Example A = [ −4 14 0 ; −5 13 0 ; −1 0 2 ],  x_0 = (1, 1, 1)ᵀ.
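A minimal implementation sketch of steps (1)–(4) above, applied to this example (NumPy assumed; the function name, tolerance, and iteration cap are illustrative choices):

```python
import numpy as np

def power_method(A, x, tol=1e-10, max_iter=200):
    """Power method with infinity-norm scaling, following steps (1)-(4) above."""
    x = x / np.linalg.norm(x, np.inf)              # x_0 with ||x_0||_inf = 1
    for _ in range(max_iter):
        y = A @ x                                  # (1) y_k = A x_{k-1}
        p = np.argmax(np.abs(y))                   # (2) index with |y_p| = ||y||_inf
        r = y[p]                                   # (3) eigenvalue estimate r_k
        x_new = y / r                              #     rescale so ||x_k||_inf = 1
        if np.linalg.norm(x_new - x, np.inf) < tol:  # (4) stopping test
            return r, x_new
        x = x_new
    return r, x

A = np.array([[-4.0, 14.0, 0.0],
              [-5.0, 13.0, 0.0],
              [-1.0,  0.0, 2.0]])
x0 = np.array([1.0, 1.0, 1.0])
lam, v = power_method(A, x0)
print(lam)  # converges to the dominant eigenvalue, 6
```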

Symmetric Power Method: x 0  1 x.


‖x‖ 2
(1) Compute y k  Ax k−1 .
T
(2) r k  y k  x k−1 .
(3) Compute ‖y k ‖ 2 . If ‖y k ‖ 2  0, then A has a zero eigenvalue.

2
(4) Compute x k  1
y k .
y k 2

(5) If ‖x k − x k−1 ‖ 2  , then  1 ≈ r k and v 1  x k . Otherwise, k  k  1, repeat steps (1)-(5).

Example A = [ 4 −1 1 ; −1 3 −2 ; 1 −2 3 ],  x_0 = (1, 0, 0)ᵀ.
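A corresponding sketch of the symmetric variant, steps (1)–(5), applied to this example (same assumptions: NumPy, illustrative names and tolerance):

```python
import numpy as np

def symmetric_power_method(A, x, tol=1e-10, max_iter=200):
    """Symmetric power method with 2-norm scaling, following steps (1)-(5) above."""
    x = x / np.linalg.norm(x)                  # x_0 with ||x_0||_2 = 1
    for _ in range(max_iter):
        y = A @ x                              # (1) y_k = A x_{k-1}
        r = y @ x                              # (2) Rayleigh-quotient estimate r_k
        ny = np.linalg.norm(y)                 # (3) ||y_k||_2
        if ny == 0.0:
            raise ValueError("A has a zero eigenvalue; restart with a new x")
        x_new = y / ny                         # (4) x_k = y_k / ||y_k||_2
        if np.linalg.norm(x_new - x) < tol:    # (5) stopping test
            return r, x_new
        x = x_new
    return r, x

A = np.array([[ 4.0, -1.0,  1.0],
              [-1.0,  3.0, -2.0],
              [ 1.0, -2.0,  3.0]])
x0 = np.array([1.0, 0.0, 0.0])
lam, v = symmetric_power_method(A, x0)
print(lam)  # approaches the dominant eigenvalue of A (6 for this matrix)
```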

4. Aitken’s Δ² Method:
Let {p_n} be a sequence which converges to its limit p linearly. That is, there exists a positive number λ < 1 such that
    lim_{n→∞} (p_{n+1} − p)/(p_n − p) = λ.
Can the order of convergence of this sequence be improved? Observe the following. For sufficiently large n,
    (p_{n+1} − p)/(p_n − p) ≈ λ.
Assume the signs of p n1 − p and p n − p agree (either both are positive or both are negative) for all n.
Then
p n2 − p p n1 − p 2
p n1 − p ≈ p n − p  p n2 − p p n − p ≈ p n1 − p

p n2 p n − p n2 p − p p n  p 2 ≈ p 2n1 − 2p n1 p  p 2


p n2 p n − p n2 p − p p n ≈ p 2n1 − 2p n1 p
Solve for p :
2
p n p n2 − p 2n1 p n1 − p n
p n p n2 − p 2n1 ≈ p p n2 − 2p n1  p n   p≈  pn − .
p n2 − 2 p n1  p n p n2 − 2 p n1  p n
Define a new sequence {p̂_n} as
    p̂_n = p_n − (p_{n+1} − p_n)²/(p_{n+2} − 2 p_{n+1} + p_n)   or   p̂_n = (p_n p_{n+2} − p_{n+1}²)/(p_{n+2} − 2 p_{n+1} + p_n).
Algebraically these two formulas are equivalent, but numerically the first is more stable than the second. The sequence {p̂_n} converges to p more rapidly. This method is called Aitken’s Δ² Method. Observe that the estimate p̂_n depends on the estimates p_n, p_{n+1}, and p_{n+2}; so p̂_0 can be computed only after p_2 has been computed.
Steps for Aitken’s Δ² Method: Let {p_n} be generated by a method which has linear convergence. Having p_0, p_1, and p_2, compute
    p̂_0 = p_0 − (p_1 − p_0)²/(p_2 − 2 p_1 + p_0),
and for n = 1, 2, …
a. compute p_{n+2},
b. compute p̂_n = p_n − (p_{n+1} − p_n)²/(p_{n+2} − 2 p_{n+1} + p_n), and