
Spectral radius, symmetric and positive matrices

Zdeněk Dvořák

April 28, 2016

1 Spectral radius
Definition 1. The spectral radius of a square matrix A is

ρ(A) = max{|λ| : λ is an eigenvalue of A}.

For an n × n matrix A, let $\|A\| = \max\{|A_{ij}| : 1 \le i, j \le n\}$.

Lemma 1. If ρ(A) < 1, then
$$\lim_{n\to\infty} \|A^n\| = 0.$$
If ρ(A) > 1, then
$$\lim_{n\to\infty} \|A^n\| = \infty.$$

Proof. Recall that $A = CJC^{-1}$ for a matrix J in Jordan normal form and
a regular matrix C, and that $A^n = CJ^nC^{-1}$. If ρ(A) = ρ(J) < 1, then $J^n$
converges to the zero matrix, and thus $A^n$ converges to the zero matrix as
well. If ρ(A) > 1, then $J^n$ has a diagonal entry $(J^n)_{ii} = \lambda^n$ for an
eigenvalue λ such that |λ| > 1, and if v is the i-th column of C and v′ the
i-th row of $C^{-1}$, then $v'A^nv = v'CJ^nC^{-1}v = e_i^TJ^ne_i = \lambda^n$. Therefore,
$\lim_{n\to\infty} |v'A^nv| = \infty$, and since $|v'A^nv| \le n^2 \max_j|v'_j| \max_k|v_k| \cdot \|A^n\|$,
also $\lim_{n\to\infty} \|A^n\| = \infty$.
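
A quick numerical illustration of Lemma 1 (a minimal sketch in Python using numpy; the test matrix and the exponent are arbitrary choices): rescaling a fixed matrix to spectral radius 0.9 or 1.1 makes the entries of its powers vanish or blow up.

    import numpy as np

    M = np.array([[0.5, 0.3],
                  [0.2, 0.4]])

    def with_spectral_radius(M, rho):
        # Rescale M so that its spectral radius becomes rho.
        return M * (rho / max(abs(np.linalg.eigvals(M))))

    for rho in (0.9, 1.1):
        A = with_spectral_radius(M, rho)
        An = np.linalg.matrix_power(A, 200)
        # ||A^n|| = max |entry|, as defined above.
        print(rho, np.max(np.abs(An)))
    # rho = 0.9 prints an essentially zero norm; rho = 1.1 blows up.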

2 Matrices with real eigenvalues


Lemma 2 (Schur). If all eigenvalues of a real n × n matrix A are real, then
$A = QUQ^T$ for some upper-triangular matrix U and an orthogonal matrix
Q.

Proof. We prove the claim by induction on n. Let λ be an eigenvalue of
A and let $B' = v_1, \ldots, v_k$ be an orthonormal basis of the space $\mathrm{Ker}(A - \lambda I)$
of the eigenvectors of A for λ. Let us extend B′ to an orthonormal
basis $B = v_1, \ldots, v_n$ of $\mathbb{R}^n$, and let $C = (v_1|v_2|\ldots|v_n)$. Note that C is
orthogonal and
$$C^TAC = \begin{pmatrix} \lambda I_k & X \\ 0 & A' \end{pmatrix}$$
for some matrices X and A′. The characteristic polynomial of A is
$\det(A - xI) = (\lambda - x)^k \det(A' - xI)$, and thus all eigenvalues of A′ are
also eigenvalues of A, and thus they are real. By the induction hypothesis,
$D'^TA'D' = U'$ for an upper-triangular matrix U′ and an orthogonal matrix D′. Let
$$D = \begin{pmatrix} I_k & 0 \\ 0 & D' \end{pmatrix}$$
and note that D is also orthogonal. We have
$$D^TC^TACD = \begin{pmatrix} \lambda I_k & XD' \\ 0 & U' \end{pmatrix},$$
which is upper-triangular, and thus we can set Q = CD.
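
Lemma 2 can be verified numerically with scipy's real Schur decomposition (a minimal sketch; the symmetric test matrix is an arbitrary choice whose eigenvalues are guaranteed to be real by Lemma 3 below):

    import numpy as np
    from scipy.linalg import schur

    A = np.array([[2.0, 1.0, 0.0],
                  [1.0, 3.0, 1.0],
                  [0.0, 1.0, 2.0]])

    U, Q = schur(A, output='real')            # A = Q U Q^T
    assert np.allclose(Q @ U @ Q.T, A)
    assert np.allclose(Q @ Q.T, np.eye(3))    # Q is orthogonal
    assert np.allclose(np.tril(U, -1), 0)     # U is upper-triangular

When some eigenvalues are complex, the real Schur form is only quasi-triangular, with 2 × 2 blocks on the diagonal; the hypothesis that all eigenvalues are real is what rules those out.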

3 Symmetric matrices
Lemma 3. If a real matrix A is symmetric, then all its eigenvalues are real.
Proof. Suppose that λ is an eigenvalue of A and let v be a correspond-
ing eigenvector (possibly complex). Then $\bar\lambda\langle v, v\rangle = \overline{(\lambda v)}^Tv = \overline{(Av)}^Tv =
(\bar v^TA^T)v = (\bar v^TA)v = \bar v^T(Av) = \bar v^T(\lambda v) = \lambda\bar v^Tv = \lambda\langle v, v\rangle$, and thus $\bar\lambda = \lambda$
and λ is real.
Corollary 4. If a real matrix A is symmetric, then $A = QDQ^T$ for a diago-
nal matrix D and an orthogonal matrix Q; i.e., A is diagonalizable and there
exists an orthonormal basis formed by eigenvectors of A.
Proof. By Lemma 2, we have $A = QUQ^T$ for an upper-triangular matrix
U and an orthogonal matrix Q. Since A is symmetric, we have $A = A^T =
(QUQ^T)^T = QU^TQ^T$, and since Q is regular, it follows that $U^T = U$. Hence,
U is symmetric, and thus U is diagonal. It follows that the columns of Q are
eigenvectors of A, and since Q is orthogonal, they form an orthonormal
basis.
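
Corollary 4 is exactly what numpy.linalg.eigh computes (a minimal sketch; any symmetric matrix would do):

    import numpy as np

    A = np.array([[2.0, 1.0, 0.0],
                  [1.0, 3.0, 1.0],
                  [0.0, 1.0, 2.0]])

    # eigh is specialized to symmetric matrices: the eigenvalues come
    # out real and the eigenvectors form an orthonormal basis.
    eigenvalues, Q = np.linalg.eigh(A)
    D = np.diag(eigenvalues)
    assert np.allclose(Q @ D @ Q.T, A)         # A = Q D Q^T
    assert np.allclose(Q.T @ Q, np.eye(3))     # orthonormal columns
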
Lemma 5. If A is a symmetric real matrix, then $\max\{x^TAx : \|x\| = 1\}$
is the largest eigenvalue of A.
Proof. Let $A = QDQ^T$ for a diagonal matrix D and an orthogonal matrix
Q. Note that A and D have the same eigenvalues and that $\|Qx\| = \|x\|$
for every x, and since Q is regular, it follows that $\max\{x^TAx : \|x\| = 1\} =
\max\{x^TDx : \|x\| = 1\}$. Therefore, it suffices to show that $\max\{x^TDx :
\|x\| = 1\}$ is the largest eigenvalue of D. Let $d_1 \ge d_2 \ge \ldots \ge d_n$ be the
diagonal entries of D, which are also its eigenvalues. Then $x^TDx =
\sum_{i=1}^n d_ix_i^2 \le d_1\sum_{i=1}^n x_i^2 = d_1\|x\|^2 = d_1$ for every x such that $\|x\| = 1$,
and $e_1^TDe_1 = d_1$. The claim follows.
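
Lemma 5 can be checked by sampling the quadratic form over random unit vectors (a minimal sketch; the matrix and the sample size are arbitrary choices):

    import numpy as np

    rng = np.random.default_rng(0)
    A = np.array([[2.0, 1.0],
                  [1.0, 3.0]])

    xs = rng.normal(size=(100000, 2))
    xs /= np.linalg.norm(xs, axis=1, keepdims=True)   # unit vectors
    values = np.einsum('ni,ij,nj->n', xs, A, xs)      # x^T A x per row

    print(values.max())                  # close to the largest eigenvalue
    print(np.linalg.eigvalsh(A).max())   # (5 + sqrt(5)) / 2 ~ 3.618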

4 Positive matrices
A matrix A is non-negative if all its entries are non-negative, and it is positive
if all its entries are positive.

Lemma 6. If A is a positive matrix, ρ(A) = 1, and λ is an eigenvalue of A
with |λ| = 1, then the real part of λ is positive.

Proof. Suppose for a contradiction that the real part of λ is non-positive.
Choose ε > 0 such that $A_{ii} > \varepsilon$ for every i. Then |λ − ε| > 1, since
$|\lambda - \varepsilon|^2 = |\lambda|^2 - 2\varepsilon\operatorname{Re}(\lambda) + \varepsilon^2 \ge 1 + \varepsilon^2$. Choose
0 < δ < 1 such that δ|λ − ε| > 1.
Let $A_1 = \delta(A - \varepsilon I)$ and $A_2 = \delta A$. Note that δ(λ − ε) is an eigenvalue
of $A_1$, and thus $\rho(A_1) > 1$. On the other hand, $\rho(A_2) = \delta\rho(A) < 1$. By
Lemma 1, $\lim_{n\to\infty}\|A_2^n\| = 0$ and $\lim_{n\to\infty}\|A_1^n\| = \infty$. However, each entry of
$A_1$ is at most as large as the corresponding entry of $A_2$, and $A_1$ is a positive
matrix, and thus $(A_1^n)_{ij} \le (A_2^n)_{ij}$ for all i, j, n. This is a contradiction.

Theorem 7 (Perron-Frobenius). Let A be a non-negative square matrix. If
some power of A is positive, then ρ(A) is an eigenvalue of A and all other
eigenvalues of A have absolute value strictly less than ρ(A).

Proof. The claim is trivial if ρ(A) = 0, hence assume that ρ(A) > 0. Let
$m_0 \ge 1$ be an integer such that $A^{m_0}$ is positive. Since A is non-negative,
$A^m$ is positive for every $m \ge m_0$. Suppose that λ is an eigenvalue of A with
|λ| = ρ(A). Let $A_1 = A/\rho(A)$ and note that $\rho(A_1) = 1$ and $\lambda_1 = \lambda/\rho(A)$ is
an eigenvalue of $A_1$ with $|\lambda_1| = 1$.
If $\lambda_1 \ne 1$, then there exists $m \ge m_0$ such that the real part of $\lambda_1^m$
is non-positive (the powers of $\lambda_1$ rotate about the unit circle by a fixed
non-zero angle, so infinitely many of them have non-positive real part).
But $\lambda_1^m$ is an eigenvalue of the positive matrix $A_1^m$ with
$\rho(A_1^m) = |\lambda_1^m| = 1$, which contradicts Lemma 6. Therefore, $\lambda_1 = 1$, and thus
λ = ρ(A) and A has no other eigenvalues with absolute value ρ(A).
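
A numerical check of Theorem 7 (a minimal sketch; the matrix is an arbitrary positive matrix, so already its first power is positive):

    import numpy as np

    A = np.array([[1.0, 2.0],
                  [3.0, 4.0]])
    eigenvalues = np.linalg.eigvals(A)
    rho = max(abs(eigenvalues))

    # rho(A) itself is an eigenvalue...
    assert any(np.isclose(lam, rho) for lam in eigenvalues)
    # ...and every other eigenvalue is strictly smaller in absolute value.
    assert all(abs(lam) < rho for lam in eigenvalues
               if not np.isclose(lam, rho))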

Lemma 8. Let A be a non-negative square matrix with ρ(A) = 1 such that
some power of A is positive. If v is a non-negative vector and $(Av)_i \ge v_i$ for
every i, then Av = v.

Proof. Let $m \ge 1$ be an integer such that $A^m$ is positive. Suppose for a
contradiction that $Av \ne v$, and thus $Av - v$ is non-negative and non-zero.
Since $A^m$ is positive, we have that $A^m(Av - v)$ is positive. Thus, there exists
δ > 1 such that $A^m(Av - \delta v)$ is still positive, and thus $(A^{m+1}v)_i \ge \delta(A^mv)_i$
for all i. Since A is non-negative, it follows by multiplying the inequality
by A that $(A^{m+2}v)_i \ge \delta(A^{m+1}v)_i$ for all i. Combining these inequalities,
$(A^{m+2}v)_i \ge \delta^2(A^mv)_i$ for all i. Similarly, $(A^{m+n}v)_i \ge \delta^n(A^mv)_i$ for all $n \ge 0$
and all i. Consider any β such that 1 < β < δ, and let $B = A/\beta$. Then
$(B^{m+n}v)_i \ge (\delta/\beta)^n(B^mv)_i$ for all $n \ge 0$ and all i, and thus $\lim_{n\to\infty}\|B^n\| = \infty$.
However, $\rho(B) = 1/\beta < 1$, which contradicts Lemma 1. Therefore,
Av = v.

Lemma 9. Let A be a non-negative n × n matrix. If some power of A
is positive, then the algebraic multiplicity of ρ(A) is one and there exists a
positive eigenvector for ρ(A).

Proof. If ρ(A) = 0, then by considering the Jordan normal form of A, we
conclude that $A^n = 0$, which contradicts the assumption that some power of
A is positive. Hence, ρ(A) > 0. Without loss of generality, we can assume
that ρ(A) = 1, as otherwise we divide A by ρ(A) first. Let v be an eigenvector
for 1, and let w be the vector such that $w_i = |v_i|$ for all i. We have $(Aw)_i \ge
|(Av)_i| = |v_i| = w_i$ for all i, and by Lemma 8, it follows that Aw = w.
Let $m \ge 1$ be an integer such that $A^m$ is positive. We have $A^mw = w$,
and since w is non-negative and non-zero, the vector $A^mw = w$ is positive.
Thus, w is actually positive.
Suppose now for a contradiction that the algebraic multiplicity of ρ(A) is
greater than 1. By considering the Jordan normal form of A, it follows
that there exists a non-zero vector z linearly independent of w such that
either Az = z, or Az = z + w. In the former case, there exists a non-zero
vector $z' = w + \alpha z$ for some α such that z′ is non-negative, but at least one
coordinate of z′ is 0. However, $Az' = z'$, and thus $A^mz' = z'$, and $A^mz'$ is
positive, which is a contradiction. In the latter case, choose α > 0 so that
$w' = z + \alpha w$ is positive. Then $(Aw')_i = (z + (\alpha + 1)w)_i > w'_i$ for all i, which
contradicts Lemma 8.
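
The positive eigenvector guaranteed by Lemma 9 can be computed by power iteration, which converges precisely because ρ(A) has multiplicity one and strictly dominates the other eigenvalues (a minimal sketch; the iteration count and starting vector are arbitrary choices):

    import numpy as np

    def perron_vector(A, iterations=200):
        # Power iteration: A^k v / ||A^k v|| tends to the eigenvector
        # for rho(A) when rho(A) is the unique dominant eigenvalue.
        v = np.ones(A.shape[0])
        for _ in range(iterations):
            v = A @ v
            v /= np.linalg.norm(v)
        return v

    A = np.array([[0.0, 2.0],
                  [3.0, 1.0]])            # non-negative; A^2 is positive
    v = perron_vector(A)
    rho = max(abs(np.linalg.eigvals(A)))  # eigenvalues are 3 and -2
    assert np.all(v > 0)                  # a positive eigenvector
    assert np.allclose(A @ v, rho * v)    # A v = rho(A) v
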
A square matrix is stochastic if each of its columns has sum equal to 1.
Let j denote the row vector of all ones. Note that A is stochastic if and only
if jA = j, and thus $j^T$ is an eigenvector of $A^T$ for the eigenvalue 1.

Lemma 10. Let A be a non-negative square stochastic matrix such that
some power of A is positive. Then there exists a unique positive vector v
with Av = v such that jv = 1. Furthermore, for any vector w such that
jw = 1, we have
$$\lim_{k\to\infty} A^kw = v.$$

Proof. By Theorem 7 and Lemma 9, ρ(A) is an eigenvalue of A with algebraic
multiplicity 1, the absolute value of any other eigenvalue is strictly less
than ρ(A), and there exists a positive vector v such that Av = ρ(A)v. Choose
v so that jv = 1. We have ρ(A) = ρ(A)jv = j(Av) = (jA)v = jv = 1, and
thus Av = v and ρ(A) = 1.
Let J be the matrix in Jordan normal form such that $A = CJC^{-1}$, where
$J_{11} = 1$, all other diagonal entries of J have absolute value strictly smaller
than 1, and the first column of C is v. Let z be the first row of $C^{-1}$. We have
$zA = zCJC^{-1} = e_1^TJC^{-1} = e_1^TC^{-1} = z$, and thus $A^Tz^T = z^T$. Therefore, $z^T$ is
an eigenvector of $A^T$ for the eigenvalue 1. Note that the eigenvalues of $A^T$
are the same as the eigenvalues of A, with the same algebraic multiplicities.
Hence, 1 has multiplicity 1 as an eigenvalue of $A^T$, and thus the corresponding
eigenvector is unique up to scalar multiplication. It follows that z is a
multiple of j. Since z is the first row of $C^{-1}$ and v the first column of C,
we have zv = 1, and since jv = 1, it follows that z = j.
We have $\lim_{k\to\infty} J^k = e_1e_1^T$, and thus $\lim_{k\to\infty} A^kw =
Ce_1e_1^TC^{-1}w = vzw = v(jw) = v$.
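
A numerical illustration of Lemma 10 (a minimal sketch; the column-stochastic matrix and the starting vector are arbitrary choices):

    import numpy as np

    # Positive matrix whose columns sum to 1.
    A = np.array([[0.9, 0.2],
                  [0.1, 0.8]])

    w = np.array([1.0, 0.0])     # any vector with jw = 1
    for _ in range(500):
        w = A @ w                # repeated multiplication: A^k w

    print(w)                     # ~ [2/3, 1/3], the stationary vector
    assert np.allclose(A @ w, w)          # A v = v
    assert np.isclose(w.sum(), 1.0)       # jv = 1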

Example 1. Let G be a connected non-bipartite graph. Start in a vertex
$v_1$ of G, walk to a neighbor chosen uniformly at random, then walk to a
neighbor of the current vertex chosen uniformly at random, and so on.
Let $V(G) = \{v_1, \ldots, v_n\}$. Let $p_{i,k}$ denote the probability that after k steps
we are in the vertex $v_i$, and let $p_k = (p_{1,k}, p_{2,k}, \ldots, p_{n,k})^T$. So, $p_0 = e_1$.
Furthermore, $p_{k+1} = Ap_k$, where A is the matrix such that $A_{i,j} = \frac{1}{\deg(v_j)}$
if $v_iv_j \in E(G)$ and $A_{i,j} = 0$ otherwise. Therefore, $p_k = A^ke_1$.
Since G is connected and not bipartite, there exists $k_0$ such that $A^{k_0}$ is
positive. By Lemma 10, we have $\lim_{k\to\infty} A^ke_1 = p$ for the unique positive
vector p such that Ap = p and jp = 1. Observe that this holds for
$$p = \frac{1}{2|E(G)|}(\deg(v_1), \ldots, \deg(v_n))^T.$$
Therefore, after many steps, the probability that the walk is at a vertex $v_i$
approaches $\frac{\deg(v_i)}{2|E(G)|}$.
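
The example can be reproduced numerically (a minimal sketch; the graph, a triangle $v_1v_2v_3$ with a pendant edge $v_3v_4$, is an arbitrary connected non-bipartite graph):

    import numpy as np

    edges = [(0, 1), (0, 2), (1, 2), (2, 3)]   # |E(G)| = 4
    n = 4
    adj = np.zeros((n, n))
    for i, j in edges:
        adj[i, j] = adj[j, i] = 1.0
    deg = adj.sum(axis=0)

    A = adj / deg                 # A[i, j] = 1/deg(v_j) on edges
    p = np.zeros(n); p[0] = 1.0   # p_0 = e_1: start in v_1
    for _ in range(200):
        p = A @ p                 # p_{k+1} = A p_k

    print(p)                       # converges to deg / (2 |E(G)|)
    print(deg / (2 * len(edges)))  # [0.25, 0.25, 0.375, 0.125]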

For a directed graph G of links between webpages, the corresponding
eigenvector p gives PageRank, which is one of the factors used to measure
the importance of a webpage.
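
A minimal sketch of the idea behind PageRank (the tiny link graph is invented for illustration, and the usual damping factor 0.85 is a standard modification added here so that the iteration matrix is positive and Lemma 10 applies even when the link graph itself is not strongly connected):

    import numpy as np

    # links[i] lists the pages that page i links to.
    links = {0: [1, 2], 1: [2], 2: [0], 3: [2]}
    n, d = 4, 0.85                # d is the damping factor

    A = np.zeros((n, n))
    for i, outs in links.items():
        for j in outs:
            A[j, i] = 1.0 / len(outs)   # column-stochastic link matrix

    M = d * A + (1 - d) / n       # positive and still column-stochastic
    p = np.full(n, 1.0 / n)
    for _ in range(100):
        p = M @ p                 # converges to the PageRank vector

    print(p)    # page 2, with the most incoming links, scores highest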
