
Final project: Power method

Zhijie Tang (ZJT)

Introduction:

Eigenvalue problems generally arise in dynamics problems and structural stability analysis. To find the eigenvalues of an n×n matrix A, we can solve its characteristic equation; for a large-scale matrix, however, the characteristic equation would be difficult and time-consuming to solve.

The power method is an alternative method for approximating eigenvalues. It can be used only to find the dominant eigenvalue of A, but dominant eigenvalues are important in many physical applications, so the power method is widely applied to calculate the dominant eigenvalue and corresponding eigenvector of a given matrix.

In this project, I want to apply the power method to solve some given matrix eigenvalue problems and address the benefits and disadvantages of this method.

Firstly, I want to address the importance of finding the dominant eigenvalue in some practical cases. For example, to measure the influence of a node of a network in graph theory, eigenvector centrality has been widely used to assign relative scores to all nodes: in eigenvector centrality, a node is important if it is linked to by other important nodes. As concluded from the Perron-Frobenius theorem [1], only the greatest eigenvalue results in the desired centrality measure, and the power method is widely used to find this dominant eigenvector.


Dominant eigenvalue and dominant eigenvectors

The dominant eigenvalue is the eigenvalue that is largest in absolute value in a matrix: suppose λ1, λ2, ..., λn are the eigenvalues of an n×n matrix A; then λ1 is the dominant eigenvalue of A if |λ1| > |λi|, i = 2, ..., n.

Power method

In the power method, there are three main steps to find the dominant eigenvalue:

1. First pick a nonzero vector x0 as the initial guess.

2. Then iterate the following sequence given by:

   x1 = A x0
   x2 = A x1 = A(A x0) = A^2 x0
   x3 = A x2 = A(A^2 x0) = A^3 x0
   ...
   xk = A xk-1 = A(A^(k-1) x0) = A^k x0

3. Properly scaling the sequence, we can obtain a good approximation of the dominant eigenvalue.
Power method convergence:

For matrix A, rewrite the initial guess x0 as a linear combination of the eigenvectors vi:

x0 = c1 v1 + c2 v2 + ... + cn vn   (c1 ≠ 0)

Then multiply both sides by matrix A:

A x0 = A(c1 v1 + c2 v2 + ... + cn vn)
     = c1 (A v1) + c2 (A v2) + ... + cn (A vn)
     = c1 (λ1 v1) + c2 (λ2 v2) + ... + cn (λn vn)

After repeating k times, we have:

A^k x0 = c1 (λ1^k v1) + c2 (λ2^k v2) + ... + cn (λn^k vn)
       = λ1^k [c1 v1 + c2 (λ2/λ1)^k v2 + ... + cn (λn/λ1)^k vn]

In the expression [c1 v1 + c2 (λ2/λ1)^k v2 + ... + cn (λn/λ1)^k vn], the fractions |λ2/λ1|, ..., |λn/λ1| are less than 1, so the terms (λ2/λ1)^k, ..., (λn/λ1)^k approach 0 as k → ∞. Then we can approximate A^k x0 ≈ λ1^k c1 v1.

Now that we know the main procedures of the power method, let's test the following sample code with a given matrix and then verify the characteristics of the power method by comparing two extreme matrices.

Sample Code [2]:

#include <stdio.h>
#include <math.h>

int main(void)
{
    int i, j, k, n, iter;
    float x[10], y[10], z[10], a[10][10], d, Z;

    printf("\nEnter the order of the matrix : \n");
    scanf("%d", &n);
    printf("\nEnter the number of Iterations : \n");
    scanf("%d", &iter);
    printf("\nEnter the coefficients of the matrix row wise :\n");
    for (i = 0; i < n; i++) {
        for (j = 0; j < n; j++)
            scanf("%f", &a[i][j]);
        x[i] = 1;                        /* initial guess x0 = (1, ..., 1) */
    }

    k = 0;
    do {
        /* y = A * x */
        for (i = 0; i < n; i++) {
            y[i] = 0;
            for (j = 0; j < n; j++)
                y[i] = y[i] + a[i][j] * x[j];
            z[i] = fabs(y[i]);
        }

        /* find the entry of y with the largest magnitude */
        Z = z[0];
        j = 0;
        for (i = 1; i < n; i++) {
            if (z[i] >= Z) {
                Z = z[i];
                j = i;
            }
        }

        /* restore its sign: d is the current eigenvalue estimate */
        if (Z == y[j])
            d = Z;
        else
            d = -Z;

        /* scale x so its largest entry is 1 */
        for (i = 0; i < n; i++)
            x[i] = y[i] / d;

        k++;
    } while (k < iter);

    printf("\nThe numerically largest Eigen value is %f \n", d);
    return 0;
}

First, test the code with the 3-dimensional matrix

[2 4 2]
[2 1 2]
[4 2 5]

By solving its characteristic equation, we know the analytical eigenvalues are λ1 = 6, λ2 = 3, λ3 = 5. After 86 iterations, the computed dominant eigenvalue agrees with the analytical value, so we can say the code gives a good approximation of the dominant eigenvalue.

From the proof of convergence of the power method, we know the convergence rate is related to the ratio |λ2|/|λ1|, and we can guess that the larger this ratio is, the slower the convergence rate. To verify this guess, we can compare the following two matrices, with |λ2|/|λ1| = 0.1 and |λ2|/|λ1| = 0.9 respectively:

1. For the matrix
   [1 3]
   [6 8]
   the eigenvalues are λ1 = 10, λ2 = -1, so |λ2|/|λ1| = 0.1 is small.

2. For the matrix
   [8 2]
   [17 7]
   the eigenvalues are λ1 = 10, λ2 = 9, so |λ2|/|λ1| = 0.9 is relatively large.

From the two examples, we find that when |λ2|/|λ1| is small, the iteration converges to a stable value in a few steps, but when |λ2|/|λ1| is large, it usually takes more steps to converge.

Lastly, I modified the code to output the corresponding dominant eigenvector as well.
With the modified code, I can now approximate both the dominant eigenvalue and the dominant eigenvector. For example, the analytical dominant eigenvalue of the matrix

[1 2 0]
[2 1 2]
[1 3 1]

is 3, and its corresponding dominant eigenvector is (0.5, 0.5, 1). After 80 iterations, the power method gives a numerical dominant eigenvalue and eigenvector that match the analytical solution.

The advantage of the power method is that it can approximate both the dominant eigenvalue and the corresponding dominant eigenvector at the same time; the disadvantage is that it can only obtain the dominant eigenvalue, while the other eigenvalues remain unknown.

Reference:

[1] Eigenvector centrality, Wikipedia: https://en.wikipedia.org/wiki/Eigenvector_centrality
[2] Power method to find the largest eigenvalue: http://computingforbeginners.blogspot.com/2014/05/power-method-to-find-largest-eigenvalue.html
