
STAT 714 MATRIX ALGEBRA REVIEW 6

TERMINOLOGY : The sum of the diagonal elements of a square matrix A is called the
trace of A, written tr(A); that is, for A_{n×n} = (a_ij),

tr(A) = Σ_{i=1}^n a_ii.

Result MAR6.1.

1. tr(A ± B) = tr(A) ± tr(B)

2. tr(cA) = c tr(A)

3. tr(A') = tr(A)

4. tr(AB) = tr(BA)

5. tr(A'A) = Σ_{i=1}^n Σ_{j=1}^n a_ij^2.
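EXAMPLE : Properties 1-5 are easy to check numerically. Below is a minimal NumPy
sketch (not part of the original notes); the seed and the random 4 × 4 matrices are
arbitrary illustrative choices.

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((4, 4))
    B = rng.standard_normal((4, 4))

    assert np.isclose(np.trace(A + B), np.trace(A) + np.trace(B))  # property 1
    assert np.isclose(np.trace(3 * A), 3 * np.trace(A))            # property 2
    assert np.isclose(np.trace(A.T), np.trace(A))                  # property 3
    assert np.isclose(np.trace(A @ B), np.trace(B @ A))            # property 4
    assert np.isclose(np.trace(A.T @ A), np.sum(A**2))             # property 5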

TERMINOLOGY : The determinant of a square matrix A is a real number denoted by
|A| or det(A).

Result MAR6.2.

1. |A'| = |A|

2. |AB| = |BA|

3. |A^{-1}| = |A|^{-1}

4. |A| = 0 iff A is singular

5. For any n × n upper (lower) triangular matrix A, |A| = ∏_{i=1}^n a_ii.
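EXAMPLE : A short NumPy sketch of properties 1, 2, 3, and 5 (again an illustrative
check, not part of the original notes; a random matrix is almost surely nonsingular,
which property 3 relies on).

    import numpy as np

    rng = np.random.default_rng(1)
    A = rng.standard_normal((4, 4))   # almost surely nonsingular
    B = rng.standard_normal((4, 4))

    assert np.isclose(np.linalg.det(A.T), np.linalg.det(A))                   # property 1
    assert np.isclose(np.linalg.det(A @ B), np.linalg.det(B @ A))             # property 2
    assert np.isclose(np.linalg.det(np.linalg.inv(A)), 1 / np.linalg.det(A))  # property 3
    T = np.triu(A)                                          # upper triangular
    assert np.isclose(np.linalg.det(T), np.prod(np.diag(T)))                  # property 5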

REVIEW : The table below summarizes equivalent conditions for the existence of an
inverse matrix A^{-1} (where A has dimension n × n).

A^{-1} exists                          A^{-1} does not exist
-------------------------------        -------------------------------
A is nonsingular                       A is singular
|A| ≠ 0                                |A| = 0
A has full rank                        A has less than full rank
r(A) = n                               r(A) < n
A has LIN rows (columns)               A does not have LIN rows (columns)
Ax = 0 has one solution, x = 0         Ax = 0 has many solutions
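EXAMPLE : The equivalences are visible on a small singular matrix (the rank-1
matrix below is an arbitrary illustrative choice, not from the original notes).

    import numpy as np

    A = np.array([[1., 2.],
                  [2., 4.]])              # second row = 2 * first row, so rank 1
    print(np.linalg.det(A))               # 0.0: A is singular
    print(np.linalg.matrix_rank(A))       # 1 < n = 2: less than full rank
    print(A @ np.array([2., -1.]))        # [0. 0.]: Ax = 0 has a nontrivial solution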

EIGENVALUES : Suppose that A is a square matrix and consider the equation Au = λu.
Note that

Au = λu ⇐⇒ Au − λu = (A − λI)u = 0.


If u ≠ 0, then A − λI must be singular (see the last table). Thus, the values of λ which
satisfy Au = λu are those values where

|A − λI| = 0.

This is called the characteristic equation of A. If A is n × n, then the characteristic
equation is a polynomial (in λ) of degree n. The roots of this polynomial, say, λ1, λ2, ..., λn,
are the eigenvalues of A (some of these may be zero or even imaginary). If A is a
symmetric matrix, then λ1, λ2, ..., λn must be real.
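EXAMPLE : For a small symmetric matrix the characteristic equation can be checked
directly (the 2 × 2 matrix below is an arbitrary illustrative choice).

    import numpy as np

    A = np.array([[2., 1.],
                  [1., 2.]])                 # symmetric, so eigenvalues are real
    lams = np.linalg.eigvals(A)
    print(np.sort(lams))                     # [1. 3.]
    for lam in lams:
        # each root makes A - lam*I singular, i.e. |A - lam*I| = 0
        print(np.linalg.det(A - lam * np.eye(2)))   # ~0 (up to rounding)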

EIGENVECTORS : If λ1, λ2, ..., λn are eigenvalues for A, then vectors ui satisfying

Aui = λi ui,

for i = 1, 2, ..., n, are called eigenvectors. Note that

Aui = λi ui =⇒ Aui − λi ui = (A − λi I)ui = 0.

From our discussion on systems of equations and consistency, we know a general solution
for ui is given by ui = [I − (A − λi I)^−(A − λi I)]z, for z ∈ R^n.

Result MAR6.3. If λi and λj are eigenvalues of a symmetric matrix A, and if λi ≠ λj,
then the corresponding eigenvectors, ui and uj, are orthogonal.
Proof. We know that Aui = λi ui and Auj = λj uj. The key is to recognize that

λi ui'uj = (Aui)'uj = ui'A'uj = ui'Auj = λj ui'uj,

which can only happen if λi = λj or if ui'uj = 0. But λi ≠ λj by assumption. 
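EXAMPLE : A quick numerical check of Result MAR6.3, using the same arbitrary
symmetric matrix as above (np.linalg.eigh is NumPy's routine for symmetric matrices).

    import numpy as np

    A = np.array([[2., 1.],
                  [1., 2.]])
    lams, U = np.linalg.eigh(A)       # eigh assumes a symmetric matrix
    print(lams)                       # distinct eigenvalues: [1. 3.]
    print(U[:, 0] @ U[:, 1])          # ~0: the corresponding eigenvectors are orthogonal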

PUNCHLINE : For a symmetric matrix A, eigenvectors associated with distinct eigenvalues
are orthogonal (we've just proven this) and, hence, are linearly independent. If the
symmetric matrix A has an eigenvalue λk of multiplicity mk, then we can find mk
orthogonal eigenvectors of A which correspond to λk (Searle, pp 291). This leads to the
following result (c.f., Christensen, pp 402):

Result MAR6.4. If A is a symmetric matrix, then there exists a basis for C(A) consisting
of eigenvectors associated with nonzero eigenvalues. If λ is a nonzero eigenvalue of
multiplicity m, then the basis will contain m eigenvectors for λ. Furthermore, N(A)
consists of the eigenvectors associated with λ = 0 (along with 0).

SPECTRAL DECOMPOSITION : Suppose that A_{n×n} is symmetric with eigenvalues
λ1, λ2, ..., λn. The spectral decomposition of A is given by A = QDQ', where

• Q is orthogonal; i.e., QQ' = Q'Q = I,

• D = diag(λ1, λ2, ..., λn), a diagonal matrix consisting of the eigenvalues of A; note
that r(D) = r(A), because Q is orthogonal, and

• the columns of Q are orthonormal eigenvectors of A.
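EXAMPLE : A minimal NumPy sketch of the spectral decomposition (the symmetric
test matrix below is an arbitrary construction for illustration).

    import numpy as np

    rng = np.random.default_rng(2)
    M = rng.standard_normal((4, 4))
    A = M + M.T                                # an arbitrary symmetric matrix

    lams, Q = np.linalg.eigh(A)                # columns of Q: orthonormal eigenvectors
    D = np.diag(lams)
    assert np.allclose(Q @ Q.T, np.eye(4))     # Q is orthogonal
    assert np.allclose(Q.T @ Q, np.eye(4))
    assert np.allclose(Q @ D @ Q.T, A)         # A = QDQ'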


Result MAR6.5. If A is an n × n symmetric matrix with eigenvalues λ1, λ2, ..., λn, then

1. |A| = ∏_{i=1}^n λi

2. tr(A) = Σ_{i=1}^n λi.

NOTE : These facts are also true for a general n × n matrix A.

Proof (in the symmetric case). Write A in its spectral decomposition A = QDQ'.
Note that |A| = |QDQ'| = |DQ'Q| = |D| = ∏_{i=1}^n λi. Also, tr(A) = tr(QDQ') =
tr(DQ'Q) = tr(D) = Σ_{i=1}^n λi. 

Result MAR6.6. Suppose that A is symmetric. The rank of A equals the number of
nonzero eigenvalues of A.
Proof. Write A in its spectral decomposition A = QDQ'. Because r(D) = r(A) and
because the only nonzero elements in D are the nonzero eigenvalues, the rank of D must
be the number of nonzero eigenvalues of A. 

Result MAR6.7. The eigenvalues of an idempotent matrix A are equal to 0 or 1.
Proof. If λ is an eigenvalue of A with eigenvector u ≠ 0, then Au = λu. Note that
A^2 u = AAu = Aλu = λAu = λ^2 u. This shows that λ^2 is an eigenvalue of A^2 = A.
Thus, we have Au = λu and Au = λ^2 u, so that λu = λ^2 u. Since u ≠ 0, this implies
that λ = λ^2; that is, λ = 0 or λ = 1. 

Result MAR6.8. If the n × n matrix A is idempotent, then r(A) = tr(A).
Proof. From the last result, we know that the eigenvalues of A are equal to 0 or 1. Let
v1, v2, ..., vr be a basis for C(A). Denote by S the subspace of all eigenvectors associated
with λ = 1. Suppose v ∈ S. Then, because Av = v ∈ C(A), v can be written as a
linear combination of v1, v2, ..., vr. This means that any basis for C(A) is also a basis
for S. Furthermore, N(A) consists of eigenvectors associated with λ = 0 (because
Av = 0v = 0). Thus,

n = dim(R^n) = dim[C(A)] + dim[N(A)] = r + dim[N(A)],

showing that dim[N(A)] = n − r. Since A has n eigenvalues, all are accounted for: λ = 1
with multiplicity r and λ = 0 with multiplicity n − r. Now tr(A) = Σ_i λi = r, the
multiplicity of λ = 1. But r(A) = dim[C(A)] = r as well. 
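EXAMPLE : A projection matrix of the form X(X'X)^{-1}X' is idempotent, so it makes
a natural test case. A hedged NumPy sketch follows (the 10 × 3 random X is an
arbitrary choice and is almost surely of full column rank).

    import numpy as np

    rng = np.random.default_rng(3)
    X = rng.standard_normal((10, 3))              # almost surely full column rank
    H = X @ np.linalg.inv(X.T @ X) @ X.T          # projection onto C(X)

    assert np.allclose(H @ H, H)                  # H is idempotent
    print(np.trace(H))                            # ~3.0
    print(np.linalg.matrix_rank(H))               # 3, matching tr(H)
    print(np.round(np.linalg.eigvalsh(H), 8))     # eigenvalues are 0 or 1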

TERMINOLOGY : Suppose that x is an n × 1 vector. A quadratic form is a function
f : R^n → R of the form

f(x) = Σ_{i=1}^n Σ_{j=1}^n a_ij x_i x_j = x'Ax.

The matrix A is called the matrix of the quadratic form.
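EXAMPLE : The double-sum form and the matrix form agree; a small NumPy check
(the matrix A and the point x below are arbitrary illustrative choices).

    import numpy as np

    A = np.array([[2., 1.],
                  [0., 3.]])
    x = np.array([1., 2.])

    double_sum = sum(A[i, j] * x[i] * x[j] for i in range(2) for j in range(2))
    print(double_sum, x @ A @ x)      # both 16.0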


Result MAR6.9. If x'Ax is any quadratic form, there exists a symmetric matrix B
such that x'Ax = x'Bx.
Proof. Note that x'A'x = (x'Ax)' = x'Ax, since a quadratic form is a scalar. Thus,

x'Ax = (1/2)x'Ax + (1/2)x'A'x = x'[(1/2)A + (1/2)A']x = x'Bx,

where B = (1/2)A + (1/2)A'. It is easy to show that B is symmetric. 

UPSHOT : In working with quadratic forms, we can, without loss of generality, assume
that the matrix of the quadratic form is symmetric.
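EXAMPLE : A sketch of Result MAR6.9 with B = (1/2)A + (1/2)A' (the nonsymmetric
A and the random test points are arbitrary illustrative choices).

    import numpy as np

    A = np.array([[2., 1.],
                  [0., 3.]])           # not symmetric
    B = (A + A.T) / 2                  # symmetric matrix giving the same quadratic form

    rng = np.random.default_rng(4)
    for _ in range(5):
        x = rng.standard_normal(2)
        assert np.isclose(x @ A @ x, x @ B @ x)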

TERMINOLOGY : The quadratic form x'Ax is said to be

• nonnegative definite (nnd) if x'Ax ≥ 0, for all x ∈ R^n.

• positive definite (pd) if x'Ax > 0, for all x ≠ 0.

• positive semidefinite (psd) if x'Ax is nnd but not pd.

TERMINOLOGY : A symmetric n × n matrix A is said to be nnd, pd, or psd if the
quadratic form x'Ax is nnd, pd, or psd, respectively.

Result MAR6.10. Let A be a symmetric matrix. Then

1. A pd =⇒ |A| > 0

2. A nnd =⇒ |A| ≥ 0.

Result MAR6.11. Let A be a symmetric matrix. Then

1. A pd ⇐⇒ all eigenvalues of A are positive

2. A nnd ⇐⇒ all eigenvalues of A are nonnegative.
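EXAMPLE : Result MAR6.11 suggests a practical definiteness test via eigenvalues; a
minimal sketch (the function name is_pd, the tolerance, and the two test matrices are
assumptions made here for illustration).

    import numpy as np

    def is_pd(A, tol=1e-10):
        # pd iff all eigenvalues of the symmetric matrix A are positive
        return bool(np.all(np.linalg.eigvalsh(A) > tol))

    print(is_pd(np.array([[2., 1.], [1., 2.]])))   # True  (eigenvalues 1 and 3)
    print(is_pd(np.array([[1., 2.], [2., 4.]])))   # False (eigenvalues 0 and 5)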

Result MAR6.12. A pd matrix is nonsingular. A psd matrix is singular. The converses
are not true.

CONVENTION : If A1 and A2 are n × n matrices, we write A1 ≥nnd A2 if A1 − A2 is
nonnegative definite (nnd) and A1 ≥pd A2 if A1 − A2 is positive definite (pd).

Result MAR6.13. Let A be an m × n matrix of rank r. Then A'A is nnd with rank
r. Furthermore, A'A is pd if r = n and is psd if r < n.
Proof. Let x be an n × 1 vector. Then x'(A'A)x = (Ax)'Ax ≥ 0, showing that A'A is
nnd. Also, r(A'A) = r(A) = r. If r = n, then the columns of A are linearly independent
and the only solution to Ax = 0 is x = 0, so x'(A'A)x = (Ax)'Ax > 0 whenever x ≠ 0.
This shows that A'A is pd. If r < n, then the columns of A are linearly dependent; i.e.,
there exists an x ≠ 0 such that Ax = 0. Thus, A'A is nnd but not pd, so it must be
psd. 
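EXAMPLE : A numerical illustration of Result MAR6.13 (the 5 × 3 random A is almost
surely of full column rank; A2 is deliberately built to have rank 2).

    import numpy as np

    rng = np.random.default_rng(5)
    A = rng.standard_normal((5, 3))               # r = n = 3 almost surely
    print(np.linalg.eigvalsh(A.T @ A))            # all positive: A'A is pd

    A2 = A.copy()
    A2[:, 2] = A[:, 0] + A[:, 1]                  # now r = 2 < n = 3
    print(np.linalg.eigvalsh(A2.T @ A2))          # one ~0 eigenvalue: A'A is psd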

RESULT : A square matrix A is pd iff there exists a nonsingular lower triangular matrix
L such that A = LL'. This is called the Choleski factorization of A. Monahan proves
this result (see pp 258), provides an algorithm for finding L, and includes an example.
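EXAMPLE : NumPy's np.linalg.cholesky computes the lower triangular factor L (the
pd matrix below is an arbitrary illustrative choice).

    import numpy as np

    A = np.array([[4., 2.],
                  [2., 3.]])           # symmetric pd (eigenvalues are positive)
    L = np.linalg.cholesky(A)          # lower triangular with A = LL'
    print(L)
    assert np.allclose(L @ L.T, A)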

RESULT : Suppose that A is symmetric and pd. Writing A in its spectral decomposition,
we have A = QDQ'. Because A is pd, the eigenvalues λ1, λ2, ..., λn of A are positive.
If we define A^{1/2} = QD^{1/2}Q', where D^{1/2} = diag(√λ1, √λ2, ..., √λn), then
A^{1/2} is symmetric and

A^{1/2}A^{1/2} = QD^{1/2}Q'QD^{1/2}Q' = QD^{1/2}ID^{1/2}Q' = QDQ' = A.

The matrix A^{1/2} is called the symmetric square root of A. See Monahan (pp 259-60)
for an example.
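EXAMPLE : Constructing the symmetric square root from the spectral decomposition
(the pd matrix is the same arbitrary example used above).

    import numpy as np

    A = np.array([[4., 2.],
                  [2., 3.]])                         # symmetric pd
    lams, Q = np.linalg.eigh(A)
    A_half = Q @ np.diag(np.sqrt(lams)) @ Q.T        # A^{1/2} = Q D^{1/2} Q'

    assert np.allclose(A_half, A_half.T)             # symmetric
    assert np.allclose(A_half @ A_half, A)           # squares back to A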
