Matrix Theory


2.5 The Continuity Argument and Matrix Functions

One of the most frequently used techniques in matrix theory is the continuity argument. A good example of this is to show, as we saw in the previous section, that matrices AB and BA have the same set of eigenvalues when A and B are both square matrices of the same size. It goes as follows. First consider the case where A is invertible and conclude that AB and BA are similar because

AB = A(BA)A^{-1}.
If A is singular, consider A + ϵI. Choose δ > 0 such that A + ϵI is invertible for all ϵ, 0 < ϵ < δ. Thus, (A + ϵI)B and B(A + ϵI) have the same set of eigenvalues for every ϵ ∈ (0, δ). Equate the characteristic polynomials to get

det(λI − (A + ϵI)B) = det(λI − B(A + ϵI)),   0 < ϵ < δ.

Since both sides are continuous functions of ϵ, letting ϵ → 0^+ gives

det(λI − AB) = det(λI − BA).

Thus, AB and BA have the same eigenvalues.


The proof was done in three steps:
1. Show that the assertion is true for nonsingular A.
2. Replace singular A by nonsingular A + ϵI.
3. Use continuity of a function in ϵ to get the desired conclusion.
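As a numerical illustration of these three steps, here is a minimal sketch in Python with NumPy; the singular A and the matrix B are arbitrary choices, and the eigenvalue comparisons hold up to floating-point roundoff.

```python
import numpy as np

np.random.seed(1)
n = 4

A = np.random.rand(n, n)
A[:, 0] = A[:, 1]            # two equal columns force det(A) = 0
B = np.random.rand(n, n)

# Steps 1-2: A + eps*I is nonsingular for small eps > 0, and the
# spectra of (A + eps*I)B and B(A + eps*I) coincide.
for eps in [1e-1, 1e-4, 1e-8]:
    Ae = A + eps * np.eye(n)
    lhs = np.sort_complex(np.linalg.eigvals(Ae @ B))
    rhs = np.sort_complex(np.linalg.eigvals(B @ Ae))
    print(eps, np.allclose(lhs, rhs))        # True for each eps

# Step 3: the agreement survives the limit eps -> 0.
print(np.allclose(np.sort_complex(np.linalg.eigvals(A @ B)),
                  np.sort_complex(np.linalg.eigvals(B @ A))))
```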
We have used and will more frequently use the following theorem.

Theorem 2.9 Let A be an n × n matrix. If A is singular, then there exists a δ > 0 such that A + ϵI is nonsingular for all ϵ ∈ (0, δ).

Proof. The polynomial det(λI + A) in λ has at most n zeros. If they are all 0, we can take δ to be any positive number. Otherwise, let δ be the smallest modulus among the nonzero zeros. This δ serves the purpose.
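For instance, if A = diag(0, −2), then det(λI + A) = λ(λ − 2) has zeros 0 and 2, so δ = 2 serves the purpose: A + ϵI = diag(ϵ, ϵ − 2) is nonsingular for every ϵ ∈ (0, 2).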
A continuity argument is certainly an effective way to approach many matrix problems in which a singular matrix is involved. The setting in which the technique is used is rather important. Sometimes the result for nonsingular matrices may be invalid in the singular case. Here is an example for which the continuity argument fails.

Theorem 2.10 Let C and D be n-square matrices such that

CD^T + DC^T = 0.

If D is nonsingular, then for any n-square matrices A and B

\det \begin{pmatrix} A & B \\ C & D \end{pmatrix} = \det(AD^T + BC^T).

The identity is invalid in general if D is singular.

Proof. It is easy to verify that

\begin{pmatrix} A & B \\ C & D \end{pmatrix} \begin{pmatrix} D^T & 0 \\ C^T & I \end{pmatrix} = \begin{pmatrix} AD^T + BC^T & B \\ 0 & D \end{pmatrix},

where the lower-left block vanishes because CD^T + DC^T = 0. Taking determinants of both sides gives \det \begin{pmatrix} A & B \\ C & D \end{pmatrix} \det(D^T) = \det(AD^T + BC^T) \det(D); since \det(D^T) = \det(D) ≠ 0, canceling \det(D) results in the desired identity.


For an example of the singular case, we take

A = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}, \quad B = \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}, \quad C = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}, \quad D = \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix},

where D is singular. It is easy to see by a simple computation that the determinant identity does not hold.
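Indeed, CD^T + DC^T = 0 for these matrices, yet \begin{pmatrix} A & B \\ C & D \end{pmatrix} is a 4 × 4 permutation matrix with determinant 1, while AD^T + BC^T = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} has determinant −1.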
The continuity argument may be applied to more general functions of matrices. For instance, the trace and determinant depend continuously on the entries of a matrix. This is easy to see, as the trace is the sum of the main diagonal entries and the determinant is a signed sum of products of entries taken from different rows and columns; both are polynomials in the entries. So we may simply say that the trace and determinant are continuous functions of (the entries of) the matrix.
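The continuity of the determinant is exactly what licensed the limit ϵ → 0^+ earlier in this section. As a quick numerical sketch (the matrix is an arbitrary choice):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])

# det and tr are polynomials in the entries, hence continuous:
# a small perturbation of A perturbs det A and tr A only slightly.
for eps in [1e-1, 1e-3, 1e-6]:
    Ae = A + eps * np.eye(2)
    print(eps,
          abs(np.linalg.det(Ae) - np.linalg.det(A)),
          abs(np.trace(Ae) - np.trace(A)))
```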
We have used the term matrix function. What is a matrix function after all? A matrix function, f(A), or function of a matrix, can have several different meanings. It can be an operation on a matrix producing a scalar, such as tr A and det A; it can be a mapping from a matrix space to a matrix space, like f(A) = A^2; it can also be an entrywise operation on the matrix, for instance, g(A) = (a_{ij}^2). In this book we use the term matrix function in a general (loose) sense; that is, a matrix function is a mapping f : A ↦ f(A) as long as f(A) is well defined, where f(A) is a scalar or a matrix (or a vector).
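These meanings can be placed side by side in a short NumPy sketch (an illustrative aside; the matrix is arbitrary):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])

print(np.trace(A))        # matrix -> scalar: tr A
print(np.linalg.det(A))   # matrix -> scalar: det A
print(A @ A)              # matrix -> matrix: f(A) = A^2
print(A ** 2)             # entrywise:        g(A) = (a_ij^2)
```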
Given a square matrix A, the square of A, A^2, is well defined. How about a square root of A? Take A = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}, for example. There is no matrix B such that B^2 = A. After a moment's consideration, one may realize that this is nontrivial. In fact, generalizing a function f(z) of a scalar variable z ∈ C to a matrix function f(A) is a serious business, and it takes great effort.
Most of the terminology in calculus can be defined for square matrices. For instance, a matrix sequence (or series) is convergent if it is convergent entrywise. As an example,

\begin{pmatrix} \frac{1}{k} & \frac{k-1}{k} \\ 0 & \frac{1}{k} \end{pmatrix} \to \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}, \quad \text{as } k \to \infty.
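A quick entrywise check of this limit (a numerical sketch; the values of k are arbitrary):

```python
import numpy as np

def A(k):
    # The matrix from the example above.
    return np.array([[1 / k, (k - 1) / k],
                     [0.0,   1 / k]])

limit = np.array([[0.0, 1.0],
                  [0.0, 0.0]])

for k in [10, 10**3, 10**6]:
    print(k, np.max(np.abs(A(k) - limit)))   # entrywise distance -> 0
```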

For differentiation and integration, let A(t) = (a_{ij}(t)) and denote

\frac{d}{dt} A(t) = \left( \frac{d}{dt} a_{ij}(t) \right), \qquad \int A(t)\, dt = \left( \int a_{ij}(t)\, dt \right).

That is, by differentiating or integrating a matrix we mean performing the operation on the matrix entrywise. It can be shown that the product rule for derivatives in calculus holds for matrices, whereas the power rule does not, as the computation below indicates. Now one is off to a good start working on matrix calculus, which is useful for differential equations. Interested readers may pursue and explore more in this direction.
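For differentiable matrix functions A(t) and B(t) of compatible sizes, the product rule reads

\frac{d}{dt} \bigl( A(t)B(t) \bigr) = A'(t)B(t) + A(t)B'(t),

so that, taking B(t) = A(t),

\frac{d}{dt} A(t)^2 = A'(t)A(t) + A(t)A'(t),

which equals 2A(t)A'(t) only when A(t) and A'(t) commute; this is why the power rule fails in general.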

Problems
1. Why does the continuity argument fail for Theorem 2.10?
2. Let C and D be real matrices such that CD^T + DC^T = 0. Show that if C is skew-symmetric (i.e., C^T = −C), then so is DC.
3. Show that A has no square root. How about B and C, where

A = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}, \quad B = \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}, \quad C = \begin{pmatrix} A & 0 \\ 0 & A \end{pmatrix}?

4. Let A = \begin{pmatrix} 2 & 1 \\ 2 & 3 \end{pmatrix}. Find a 2 × 2 matrix X so that X^2 = A.
5. Use a continuity argument to show that for any A, B ∈ Mn

adj(AB) = adj(B) adj(A).

6. Show that A_ϵ = P_ϵ J_ϵ P_ϵ^{-1} if ϵ ≠ 0, where

A_ϵ = \begin{pmatrix} ϵ & 0 \\ 1 & 0 \end{pmatrix}, \quad P_ϵ = \begin{pmatrix} 0 & ϵ \\ 1 & 1 \end{pmatrix}, \quad J_ϵ = \begin{pmatrix} 0 & 0 \\ 0 & ϵ \end{pmatrix}.

What happens to the matrix identity if ϵ → 0? Is A_0 similar to J_0?


7. Explain why rank(A^2) ≤ rank(A). Discuss whether a continuity argument can be used to show the inequality.
8. Show that the eigenvalues of A are independent of ϵ, where

A = \begin{pmatrix} ϵ − 1 & −1 \\ ϵ^2 − ϵ + 1 & −ϵ \end{pmatrix}.

9. Denote by σ_max and σ_min, σ_max ≥ σ_min, the singular values of the matrix A = \begin{pmatrix} 1 & ϵ \\ ϵ & 1 \end{pmatrix}, ϵ > 0. Show that \lim_{ϵ→1^-} σ_max/σ_min = +∞.
10. Let A be a nonsingular matrix with A^{-1} = B = (b_{ij}). Show that the b_{ij} are continuous functions of a_{ij}, the entries of A, and that if \lim_{t→0} A(t) = A and \det A ≠ 0 (this condition is necessary), then

\lim_{t→0} (A(t))^{-1} = A^{-1}.

Conclude that

\lim_{λ→0} (A − λI)^{-1} = A^{-1}

and, for any m × n matrix X and n × m matrix Y independent of ϵ,

\lim_{ϵ→0} \begin{pmatrix} I_m & ϵX \\ ϵY & I_n \end{pmatrix}^{-1} = I_{m+n}.

11. Let A ∈ Mn. If |λ| < 1 for all eigenvalues λ of A, show that

(I − A)^{-1} = \sum_{k=0}^{∞} A^k = I + A + A^2 + A^3 + ⋯.

12. Let A = \begin{pmatrix} −1 & 1 \\ 0 & −1 \end{pmatrix}. Show that \sum_{k=1}^{∞} \frac{1}{k^2} A^k is convergent.

13. Let p(x), q(x) be polynomials and A ∈ Mn be such that q(A) is invertible. Show that p(A)(q(A))^{-1} = (q(A))^{-1} p(A). Conclude that (I − A)^{-1}(I + A^2) = (I + A^2)(I − A)^{-1} when A has no eigenvalue 1.
14. Let n be a positive integer and x be a real number. Let

A = \begin{pmatrix} 1 & −\frac{x}{n} \\ \frac{x}{n} & 1 \end{pmatrix}.

Show that

\lim_{x→0} \frac{1}{x} \left( \lim_{n→∞} (I − A^n) \right) = \begin{pmatrix} 0 & 1 \\ −1 & 0 \end{pmatrix}.

[Hint: A = cP for some constant c and orthogonal matrix P.]


15. For any square matrix X, show that e^X = \sum_{k=0}^{∞} \frac{1}{k!} X^k is well defined; that is, the series always converges. Let A, B ∈ Mn. Show that
(a) If A = 0, then e^A = I.
(b) If A = I, then e^A = eI.
(c) If AB = BA, then e^{A+B} = e^A e^B = e^B e^A.
(d) If A is invertible, then e^{−A} = (e^A)^{-1}.
(e) If A is invertible, then e^{ABA^{-1}} = A e^B A^{-1}.
(f) If λ is an eigenvalue of A, then e^λ is an eigenvalue of e^A.
(g) \det e^A = e^{tr A}.
(h) (e^A)^* = e^{A^*}.
(i) If A is Hermitian, then e^{iA} is unitary.
(j) If A is real skew-symmetric, then e^A is (real) orthogonal.
16. Let A ∈ Mn and t ∈ R. Show that \frac{d}{dt} e^{tA} = A e^{tA} = e^{tA} A.
17. Let A(t) = \begin{pmatrix} e^{2t} & \sin t \\ 1 + t & t \end{pmatrix}, where t ∈ R. Find \int_0^1 A(t)\, dt and \frac{d}{dt} A(t).

. ⊙ .
