
Notes on basis changes and matrix diagonalization

Howard E. Haber
Santa Cruz Institute for Particle Physics,
University of California, Santa Cruz, CA 95064
April 15, 2015

1. Coordinates of vectors and matrix elements of linear operators


Let V be an n-dimensional real (or complex) vector space. Vectors that live
in V are usually represented by a single column of n real (or complex) numbers.
Linear operators act on vectors and are represented by square n × n real (or complex)
matrices.∗
If it is not specified, the representations of vectors and matrices described above
implicitly assume that the standard basis has been chosen. That is, all vectors in V
can be expressed as linear combinations of basis vectors:

$$\mathcal{B}_s = \{\hat{e}_1, \hat{e}_2, \hat{e}_3, \ldots, \hat{e}_n\} = \bigl\{(1,0,0,\ldots,0)^T,\ (0,1,0,\ldots,0)^T,\ (0,0,1,\ldots,0)^T,\ \ldots,\ (0,0,0,\ldots,1)^T\bigr\}\,.$$

The subscript s indicates that this is the standard basis. The superscript T (which
stands for transpose) turns the row vectors into column vectors. Thus,
$$\vec{v} = \begin{pmatrix} v_1 \\ v_2 \\ v_3 \\ \vdots \\ v_n \end{pmatrix} = v_1 \begin{pmatrix} 1 \\ 0 \\ 0 \\ \vdots \\ 0 \end{pmatrix} + v_2 \begin{pmatrix} 0 \\ 1 \\ 0 \\ \vdots \\ 0 \end{pmatrix} + v_3 \begin{pmatrix} 0 \\ 0 \\ 1 \\ \vdots \\ 0 \end{pmatrix} + \cdots + v_n \begin{pmatrix} 0 \\ 0 \\ 0 \\ \vdots \\ 1 \end{pmatrix}\,.$$

The $v_i$ are the components of the vector $\vec{v}$. However, it is more precise to say that the $v_i$ are the coordinates of the abstract vector $\vec{v}$ with respect to the standard basis.
Consider a linear operator A. The corresponding matrix representation is given by $A = [a_{ij}]$. For example, if $\vec{w} = A\vec{v}$, then
$$w_i = \sum_{j=1}^{n} a_{ij}\, v_j\,, \qquad (1)$$


∗ A linear operator is a function that acts on vectors that live in a vector space V over a field F and produces a vector that lives in another vector space W. If V is n-dimensional and W is m-dimensional, then the linear operator is represented by an m × n matrix, whose entries are taken from the field F. Typically, we choose either F = R (the real numbers) or F = C (the complex numbers). In these notes, we will simplify the discussion by always taking W = V.

Here $v_i$ and $w_i$ are the coordinates of $\vec{v}$ and $\vec{w}$ with respect to the standard basis, and $a_{ij}$ are the matrix elements of A with respect to the standard basis. If we express $\vec{v}$ and $\vec{w}$ as linear combinations of basis vectors,
$$\vec{v} = \sum_{j=1}^{n} v_j\,\hat{e}_j\,, \qquad \vec{w} = \sum_{i=1}^{n} w_i\,\hat{e}_i\,,$$
then $\vec{w} = A\vec{v}$ implies that
$$\sum_{i=1}^{n}\sum_{j=1}^{n} a_{ij}\,v_j\,\hat{e}_i = A\sum_{j=1}^{n} v_j\,\hat{e}_j\,,$$
where we have used eq. (1) to substitute for $w_i$. It follows that:
$$\sum_{j=1}^{n}\left(A\hat{e}_j - \sum_{i=1}^{n} a_{ij}\,\hat{e}_i\right) v_j = 0\,. \qquad (2)$$

Eq. (2) must be true for any vector ~v ∈ V ; that is, for any choice of coordinates vj .
Thus, the coefficient inside the parentheses in eq. (2) must vanish. We conclude that:
n
X
ej =
Ab aij b
ei (3)
i=1

which can be used as the definition of the matrix elements aij with respect to the
standard basis of a linear operator A.
There is nothing sacrosanct about the choice of the standard basis. One can
expand a vector as a linear combination of any set of n linearly independent vectors.
Thus, we generalize the above discussion by introducing a basis

$$\mathcal{B} = \{\vec{b}_1, \vec{b}_2, \vec{b}_3, \ldots, \vec{b}_n\}\,.$$
For any vector $\vec{v} \in V$, we can find a unique set of coefficients $v_i$ such that
$$\vec{v} = \sum_{j=1}^{n} v_j\,\vec{b}_j\,. \qquad (4)$$

The $v_i$ are the coordinates of $\vec{v}$ with respect to the basis $\mathcal{B}$. Likewise, for any linear operator A,
$$A\vec{b}_j = \sum_{i=1}^{n} a_{ij}\,\vec{b}_i \qquad (5)$$
defines the matrix elements of the linear operator A with respect to the basis $\mathcal{B}$. Clearly, these more general definitions reduce to the previous ones given in the case of the standard basis. Moreover, we can easily compute $A\vec{v} \equiv \vec{w}$ using the results of eqs. (4) and (5):
$$A\vec{v} = \sum_{i=1}^{n}\sum_{j=1}^{n} a_{ij}\,v_j\,\vec{b}_i = \sum_{i=1}^{n}\left(\sum_{j=1}^{n} a_{ij}\,v_j\right)\vec{b}_i = \sum_{i=1}^{n} w_i\,\vec{b}_i = \vec{w}\,,$$

which implies that the coordinates of the vector $\vec{w} = A\vec{v}$ with respect to the basis $\mathcal{B}$ are given by:
$$w_i = \sum_{j=1}^{n} a_{ij}\,v_j\,.$$

Thus, the relation between the coordinates of $\vec{v}$ and $\vec{w}$ with respect to the basis $\mathcal{B}$ is the same as the relation obtained with respect to the standard basis [see eq. (1)]. One must simply be consistent and always employ the same basis for defining the vector coordinates and the matrix elements of a linear operator.
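As a quick numerical illustration of eqs. (4) and (5), the following minimal Python/numpy sketch (the basis and operator below are arbitrary examples, not taken from these notes) computes the coordinates of a vector and the matrix elements of an operator with respect to a non-standard basis:

import numpy as np

# An arbitrary (invertible) basis of R^2, stored as columns of a matrix.
B = np.array([[1.0, 1.0],
              [0.0, 2.0]])   # b_1 = (1,0)^T, b_2 = (1,2)^T

# A linear operator A, given by its matrix in the standard basis.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])

v = np.array([3.0, 4.0])     # coordinates of v in the standard basis

# Coordinates of v with respect to B: solve sum_j v_j b_j = v  [eq. (4)].
v_B = np.linalg.solve(B, v)

# Matrix elements of A with respect to B: A b_j = sum_i a_ij b_i  [eq. (5)],
# i.e. the j-th column of [A]_B holds the B-coordinates of A b_j.
A_B = np.linalg.solve(B, A @ B)

# Consistency check: applying [A]_B to [v]_B gives the B-coordinates of A v.
assert np.allclose(B @ (A_B @ v_B), A @ v)
print("[v]_B =", v_B)
print("[A]_B =\n", A_B)

Here np.linalg.solve(B, x) plays the role of applying the inverse of B, which is numerically preferable to forming the inverse explicitly.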

2. Change of basis and its effects on coordinates and matrix elements


The choice of basis is arbitrary. The existence of vectors and linear operators does
not depend on the choice of basis. However, a choice of basis is very convenient since
it permits explicit calculations involving vectors and matrices. Suppose we start with
some basis choice B and then later decide to employ a different basis choice C:

$$\mathcal{C} = \{\vec{c}_1, \vec{c}_2, \vec{c}_3, \ldots, \vec{c}_n\}\,.$$

Note that we have not yet introduced the concept of an inner product or norm, so
in the present discussion there is no concept of orthogonality or unit vectors. (The
inner product will be employed only in Section 4 of these notes.)
Thus, we pose the following question. If the coordinates of a vector $\vec{v}$ and the matrix elements of a linear operator A are known with respect to a basis $\mathcal{B}$ (which need not be the standard basis), what are the coordinates of the vector $\vec{v}$ and the matrix elements of the linear operator A with respect to a basis $\mathcal{C}$? To answer this question, we must describe the relation between $\mathcal{B}$ and $\mathcal{C}$. The basis vectors of $\mathcal{C}$ can be expressed as linear combinations of the basis vectors $\vec{b}_i$, since the latter span the vector space V. We shall denote these coefficients as follows:
$$\vec{c}_j = \sum_{i=1}^{n} P_{ij}\,\vec{b}_i\,, \qquad j = 1, 2, 3, \ldots, n\,. \qquad (6)$$

Note that eq. (6) is a shorthand for n separate equations, and provides the coefficients $P_{i1}, P_{i2}, \ldots, P_{in}$ needed to expand $\vec{c}_1, \vec{c}_2, \ldots, \vec{c}_n$, respectively, as linear combinations of the $\vec{b}_i$. We can assemble the $P_{ij}$ into a matrix. A crucial observation is that this matrix P is invertible. This must be true, since one can reverse the process and express the basis vectors of $\mathcal{B}$ as linear combinations of the basis vectors $\vec{c}_i$ (which again follows from the fact that the latter span the vector space V). Explicitly,
$$\vec{b}_k = \sum_{j=1}^{n} (P^{-1})_{jk}\,\vec{c}_j\,, \qquad k = 1, 2, 3, \ldots, n\,. \qquad (7)$$

We are now in a position to determine the coordinates of a vector $\vec{v}$ and the matrix elements of a linear operator A with respect to a basis $\mathcal{C}$. Assume that the coordinates of $\vec{v}$ with respect to $\mathcal{B}$ are given by $v_i$ and the matrix elements of A with respect to $\mathcal{B}$ are given by $a_{ij}$. With respect to $\mathcal{C}$, we shall denote the vector coordinates by $v_i'$ and the matrix elements by $a_{ij}'$. Then, using the definition of vector coordinates [eq. (4)] and matrix elements [eq. (5)],
$$\vec{v} = \sum_{j=1}^{n} v_j'\,\vec{c}_j = \sum_{j=1}^{n} v_j' \sum_{i=1}^{n} P_{ij}\,\vec{b}_i = \sum_{i=1}^{n}\left(\sum_{j=1}^{n} P_{ij}\,v_j'\right)\vec{b}_i = \sum_{i=1}^{n} v_i\,\vec{b}_i\,, \qquad (8)$$

where we have used eq. (6) to express the $\vec{c}_j$ in terms of the $\vec{b}_i$. The last step in eq. (8) can be rewritten as:
$$\sum_{i=1}^{n}\left(v_i - \sum_{j=1}^{n} P_{ij}\,v_j'\right)\vec{b}_i = 0\,. \qquad (9)$$

Since the $\vec{b}_i$ are linearly independent, the coefficient inside the parentheses in eq. (9) must vanish. Hence,
$$v_i = \sum_{j=1}^{n} P_{ij}\,v_j'\,, \qquad \text{or equivalently} \qquad [\vec{v}]_{\mathcal{B}} = P\,[\vec{v}]_{\mathcal{C}}\,. \qquad (10)$$

Here we have introduced the notation $[\vec{v}]_{\mathcal{B}}$ to indicate the vector $\vec{v}$ represented in terms of its coordinates with respect to the basis $\mathcal{B}$. Inverting this result yields:
$$v_j' = \sum_{k=1}^{n} (P^{-1})_{jk}\,v_k\,, \qquad \text{or equivalently} \qquad [\vec{v}]_{\mathcal{C}} = P^{-1}\,[\vec{v}]_{\mathcal{B}}\,. \qquad (11)$$

Thus, we have determined the relation between the coordinates of $\vec{v}$ with respect to the bases $\mathcal{B}$ and $\mathcal{C}$.
A similar computation can determine the relation between the matrix elements of A with respect to the basis $\mathcal{B}$, which we denote by $a_{ij}$ [see eq. (5)], and the matrix elements of A with respect to the basis $\mathcal{C}$, which we denote by $a_{ij}'$:
$$A\vec{c}_j = \sum_{i=1}^{n} a_{ij}'\,\vec{c}_i\,. \qquad (12)$$

The desired relation can be obtained by evaluating $A\vec{b}_\ell$:
$$A\vec{b}_\ell = A\sum_{j=1}^{n} (P^{-1})_{j\ell}\,\vec{c}_j = \sum_{j=1}^{n} (P^{-1})_{j\ell}\,A\vec{c}_j = \sum_{j=1}^{n}\sum_{i=1}^{n} (P^{-1})_{j\ell}\,a_{ij}'\,\vec{c}_i$$
$$= \sum_{j=1}^{n}\sum_{i=1}^{n}\sum_{k=1}^{n} (P^{-1})_{j\ell}\,a_{ij}'\,P_{ki}\,\vec{b}_k = \sum_{k=1}^{n}\left(\sum_{i=1}^{n}\sum_{j=1}^{n} P_{ki}\,a_{ij}'\,(P^{-1})_{j\ell}\right)\vec{b}_k\,,$$
where we have used eqs. (6) and (7) and the definition of the matrix elements of A with respect to the basis $\mathcal{C}$ [eq. (12)]. Comparing this result with eq. (5), it follows that
$$\sum_{k=1}^{n}\left(a_{k\ell} - \sum_{i=1}^{n}\sum_{j=1}^{n} P_{ki}\,a_{ij}'\,(P^{-1})_{j\ell}\right)\vec{b}_k = 0\,.$$

Since the $\vec{b}_k$ are linearly independent, we conclude that
$$a_{k\ell} = \sum_{i=1}^{n}\sum_{j=1}^{n} P_{ki}\,a_{ij}'\,(P^{-1})_{j\ell}\,.$$
The double sum above corresponds to the matrix multiplication of three matrices, so it is convenient to write this result symbolically as:
$$[A]_{\mathcal{B}} = P\,[A]_{\mathcal{C}}\,P^{-1}\,. \qquad (13)$$

The meaning of this equation is that the matrix formed by the matrix elements of A with respect to the basis $\mathcal{B}$ is related to the matrix formed by the matrix elements of A with respect to the basis $\mathcal{C}$ by the similarity transformation given by eq. (13). We can invert eq. (13) to obtain:
$$[A]_{\mathcal{C}} = P^{-1}\,[A]_{\mathcal{B}}\,P\,. \qquad (14)$$
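A similar numerical check of eq. (14) (again with arbitrarily chosen matrices, assumed only for illustration) confirms that the two representations of the same operator are related by a similarity transformation:

import numpy as np

rng = np.random.default_rng(1)

A = rng.normal(size=(3, 3))      # an operator, given in the standard basis
B = rng.normal(size=(3, 3))      # columns are the basis vectors of B
C = rng.normal(size=(3, 3))      # columns are the basis vectors of C

A_B = np.linalg.solve(B, A @ B)  # [A]_B: matrix elements of A with respect to B
A_C = np.linalg.solve(C, A @ C)  # [A]_C: matrix elements of A with respect to C
P = np.linalg.solve(B, C)        # change-of-basis matrix of eq. (6), C = B P

# Eq. (14): [A]_C = P^{-1} [A]_B P.
assert np.allclose(A_C, np.linalg.solve(P, A_B @ P))
print("Similarity-transformation check for eq. (14) passed.")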

In fact, there is a much faster method to derive eqs. (13) and (14). Consider the equation $\vec{w} = A\vec{v}$ evaluated with respect to the bases $\mathcal{B}$ and $\mathcal{C}$, respectively:
$$[\vec{w}]_{\mathcal{B}} = [A]_{\mathcal{B}}\,[\vec{v}]_{\mathcal{B}}\,, \qquad [\vec{w}]_{\mathcal{C}} = [A]_{\mathcal{C}}\,[\vec{v}]_{\mathcal{C}}\,.$$
Using eq. (10), $[\vec{w}]_{\mathcal{B}} = [A]_{\mathcal{B}}[\vec{v}]_{\mathcal{B}}$ can be rewritten as:
$$P\,[\vec{w}]_{\mathcal{C}} = [A]_{\mathcal{B}}\,P\,[\vec{v}]_{\mathcal{C}}\,.$$
Hence,
$$[\vec{w}]_{\mathcal{C}} = [A]_{\mathcal{C}}\,[\vec{v}]_{\mathcal{C}} = P^{-1}\,[A]_{\mathcal{B}}\,P\,[\vec{v}]_{\mathcal{C}}\,.$$
It then follows that
$$\left([A]_{\mathcal{C}} - P^{-1}\,[A]_{\mathcal{B}}\,P\right)[\vec{v}]_{\mathcal{C}} = 0\,. \qquad (15)$$

Since this equation must be true for all $\vec{v} \in V$ (and thus for any choice of $[\vec{v}]_{\mathcal{C}}$), it follows that the quantity inside the parentheses in eq. (15) must vanish. This yields eq. (14).

The significance of eq. (14) is as follows. If two matrices are related by a similarity transformation, then these matrices may represent the same linear operator with respect to two different choices of basis.† These two choices are related by eq. (6). Likewise, given two matrix representations of a group G that are related by a fixed similarity transformation,
$$D_2(g) = P^{-1}\,D_1(g)\,P\,, \qquad \text{for all } g \in G\,,$$
where P is independent of the group element g, we call $D_1(g)$ and $D_2(g)$ equivalent representations, since $D_1(g)$ and $D_2(g)$ can be regarded as matrix representations of the same linear operator D(g) with respect to two different basis choices.

3. Application to matrix diagonalization


Consider a matrix $A \equiv [A]_{\mathcal{B}_s}$, whose matrix elements are defined with respect to the standard basis, $\mathcal{B}_s = \{\hat{e}_1, \hat{e}_2, \hat{e}_3, \ldots, \hat{e}_n\}$. The eigenvalue problem for the matrix A consists of finding all complex $\lambda_j$ such that
$$A\vec{v}_j = \lambda_j\,\vec{v}_j\,, \qquad \vec{v}_j \neq 0\,, \quad \text{for } j = 1, 2, \ldots, n\,. \qquad (16)$$

The $\lambda_j$ are the roots of the characteristic equation $\det(A - \lambda I) = 0$. This is an nth-order polynomial equation which has n (possibly complex) roots, although some of the roots could be degenerate. If the roots are non-degenerate, then the matrix A is called simple. In this case, the n eigenvectors are linearly independent and span the vector space V.‡ If some of the roots are degenerate, then the corresponding n eigenvectors may or may not be linearly independent. In general, if A possesses n linearly independent eigenvectors, then A is called semi-simple.§ If some of the eigenvalues of A are degenerate and its eigenvectors do not span the vector space V, then we say that A is defective. A is diagonalizable if and only if it is semi-simple.¶
† However, it would not be correct to conclude that two matrices that are related by a similarity transformation cannot represent different linear operators. In fact, one could also interpret these two matrices as representing (with respect to the same basis) two different linear operators that are related by a similarity transformation. That is, given two linear operators A and B and an invertible linear operator P, it is clear that if $B = P^{-1}AP$, then the matrix elements of A and B with respect to a fixed basis are related by the same similarity transformation.
‡ This result is proved in the appendix to these notes.
§ Note that if A is semi-simple, then A is also simple only if the eigenvalues of A are distinct.
¶ The terminology simple (semi-simple) used above should not be confused with the corresponding terminology employed to describe groups that possess no proper normal (abelian) subgroups.

Since the eigenvectors of a semi-simple matrix A span the vector space V, we may define a new basis made up of the eigenvectors of A, which we shall denote by $\mathcal{C} = \{\vec{v}_1, \vec{v}_2, \vec{v}_3, \ldots, \vec{v}_n\}$. The matrix elements of A with respect to the basis $\mathcal{C}$, denoted by $[A]_{\mathcal{C}}$, are obtained by employing eq. (12):
$$A\vec{v}_j = \sum_{i=1}^{n} a_{ij}'\,\vec{v}_i\,.$$

But eq. (16) implies that $a_{ij}' = \lambda_j\,\delta_{ij}$ (no sum over j). That is,
$$[A]_{\mathcal{C}} = \begin{pmatrix} \lambda_1 & 0 & \cdots & 0 \\ 0 & \lambda_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda_n \end{pmatrix}\,.$$

The relation between A and $[A]_{\mathcal{C}}$ can be obtained from eq. (14). Thus, we must determine the matrix P that governs the relation between $\mathcal{B}_s$ and $\mathcal{C}$ [eq. (6)]. Consider the coordinates of $\vec{v}_j$ with respect to the standard basis $\mathcal{B}_s$:
$$\vec{v}_j = \sum_{i=1}^{n} (\vec{v}_j)_i\,\hat{e}_i = \sum_{i=1}^{n} P_{ij}\,\hat{e}_i\,, \qquad (17)$$

where $(\vec{v}_j)_i$ is the ith coordinate of the jth eigenvector. Using eq. (17), we identify $P_{ij} = (\vec{v}_j)_i$. In matrix form,
$$P = \begin{pmatrix} (v_1)_1 & (v_2)_1 & \cdots & (v_n)_1 \\ (v_1)_2 & (v_2)_2 & \cdots & (v_n)_2 \\ \vdots & \vdots & \ddots & \vdots \\ (v_1)_n & (v_2)_n & \cdots & (v_n)_n \end{pmatrix}\,.$$

Finally, we use eq. (14) to conclude that $[A]_{\mathcal{C}} = P^{-1}\,[A]_{\mathcal{B}_s}\,P$. If we denote the diagonalized matrix by $D \equiv [A]_{\mathcal{C}}$ and the matrix A with respect to the standard basis by $A \equiv [A]_{\mathcal{B}_s}$, then
$$P^{-1} A P = D\,, \qquad (18)$$
where P is the matrix whose columns are the eigenvectors of A and D is the diagonal matrix whose diagonal elements are the eigenvalues of A. Thus, we have succeeded in diagonalizing an arbitrary semi-simple matrix.
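For a semi-simple matrix, numpy's eigen-decomposition returns exactly the ingredients of eq. (18); a short sketch (using an arbitrary symmetric example matrix, added here for illustration) is:

import numpy as np

# A semi-simple (here: real symmetric, hence diagonalizable) example matrix.
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

eigvals, P = np.linalg.eig(A)        # columns of P are eigenvectors of A
D = np.diag(eigvals)                 # diagonal matrix of eigenvalues

# Eq. (18): P^{-1} A P = D.
assert np.allclose(np.linalg.solve(P, A @ P), D)
print("Eigenvalues:", eigvals)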
If the eigenvectors of A do not span the vector space V (i.e., A is defective), then A is not diagonalizable.‖ That is, there does not exist a matrix P and a diagonal matrix D such that eq. (18) is satisfied.

‖ The simplest example of a defective matrix is $B = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}$. One can quickly check that the eigenvalues of B are given by the double root $\lambda = 0$ of the characteristic equation. However, solving the eigenvalue equation, $B\vec{v} = 0$, yields only one linearly independent eigenvector, $\begin{pmatrix} 1 \\ 0 \end{pmatrix}$. One can verify explicitly that no matrix P exists such that $P^{-1}BP$ is diagonal.
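A quick numerical way to see the defect described in the footnote above (an illustrative sketch; the tolerance chosen below is arbitrary) is to count the linearly independent eigenvectors via the rank of the eigenvector matrix returned by numpy:

import numpy as np

B = np.array([[0.0, 1.0],
              [0.0, 0.0]])            # the defective matrix from the footnote

eigvals, V = np.linalg.eig(B)
print("eigenvalues:", eigvals)        # double root lambda = 0

# The columns of V are (numerically) parallel: only one independent eigenvector.
print("rank of eigenvector matrix:", np.linalg.matrix_rank(V, tol=1e-8))  # -> 1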

4. Diagonalization by a unitary similarity transformation
Nothing in Sections 1–3 requires the existence of an inner product. However, if
an inner product is defined, then the vector space V is promoted to an inner product
space. In this case, we can define the concepts of orthogonality and orthonormality.
In particular, given an arbitrary basis B, we can use the Gram-Schmidt process to
construct an orthonormal basis. Thus, when considering inner product spaces, it is
convenient to always choose an orthonormal basis.
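As an aside, the Gram-Schmidt process mentioned above can be written in a few lines of numpy; the sketch below (with an illustrative helper name and an arbitrary example basis, not taken from these notes) orthonormalizes the columns of a matrix of basis vectors:

import numpy as np

def gram_schmidt(basis):
    """Orthonormalize the columns of `basis` (assumed linearly independent)."""
    ortho = []
    for b in basis.T:                        # iterate over basis vectors (columns)
        w = b.astype(float)
        for u in ortho:                      # subtract projections onto earlier vectors
            w = w - np.vdot(u, b) * u
        ortho.append(w / np.linalg.norm(w))  # normalize
    return np.column_stack(ortho)

B = np.array([[1.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])              # an arbitrary basis of R^3 (columns)
Q = gram_schmidt(B)
assert np.allclose(Q.T @ Q, np.eye(3))       # columns of Q are orthonormal
print(Q)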
Hence, it is useful to examine the effect of changing basis from one orthonormal basis to another. All the considerations of Section 2 apply, with the additional constraint that the matrix P is now a unitary matrix.∗∗ Namely, the transformation between any two orthonormal bases is always unitary. Suppose that the matrix A has the property that its eigenvectors comprise an orthonormal basis that spans the inner product space V. Following the discussion of Section 3, it follows that there exists a diagonalizing matrix U such that
$$U^\dagger A U = D\,,$$
where U is the unitary matrix ($U^\dagger = U^{-1}$) whose columns are the orthonormal eigenvectors of A and D is the diagonal matrix whose diagonal elements are the eigenvalues of A.

∗∗ In a real inner product space, a unitary transformation is real and hence is an orthogonal transformation.
A normal matrix A is defined to be a matrix that commutes with its hermitian conjugate. That is,
$$A \text{ is normal} \iff AA^\dagger = A^\dagger A\,.$$
In this section, we prove that the matrix A can be diagonalized by a unitary similarity transformation if and only if it is normal.

We first prove that if A can be diagonalized by a unitary similarity transformation, then A is normal. If $U^\dagger A U = D$, where D is a diagonal matrix, then $A = U D U^\dagger$ (using the fact that U is unitary). Then, $A^\dagger = U D^\dagger U^\dagger$ and
$$AA^\dagger = (U D U^\dagger)(U D^\dagger U^\dagger) = U D D^\dagger U^\dagger = U D^\dagger D U^\dagger = (U D^\dagger U^\dagger)(U D U^\dagger) = A^\dagger A\,.$$
In this proof, we use the fact that diagonal matrices commute with each other, so that $D D^\dagger = D^\dagger D$.
Conversely, if A is normal then it can be diagonalized by a unitary similarity transformation. To prove this, consider the eigenvalue problem $A\vec{v} = \lambda\vec{v}$, where $\vec{v} \neq 0$. All matrices possess at least one eigenvector and corresponding eigenvalue. Thus, we focus on one of the eigenvalues and eigenvectors of A that satisfies $A\vec{v}_1 = \lambda_1\vec{v}_1$. We can always normalize $\vec{v}_1$ to unity by dividing out by its norm. We now construct a unitary matrix $U_1$ as follows. Take the first column of $U_1$ to be given by (the normalized) $\vec{v}_1$. The rest of the unitary matrix will be called Y, which is an $n \times (n-1)$ matrix. Explicitly,
$$U_1 = \bigl(\vec{v}_1 \,\big|\, Y\bigr)\,,$$

where the vertical bar is inserted for the reader's convenience as a reminder that this is a partitioned matrix that is $n \times 1$ to the left of the bar and $n \times (n-1)$ to the right of the bar. Since the columns of $U_1$ comprise an orthonormal set of vectors, the matrix elements of Y are of the form
$$Y_{ij} = (\vec{v}_j)_i\,, \qquad \text{for } i = 1, 2, \ldots, n \text{ and } j = 2, 3, \ldots, n\,,$$
where $\{\vec{v}_1, \vec{v}_2, \ldots, \vec{v}_n\}$ is an orthonormal set of vectors. Here $(\vec{v}_j)_i$ is the ith coordinate (with respect to a fixed orthonormal basis) of the jth vector of the orthonormal set. It then follows that the inner product of $\vec{v}_j$ and $\vec{v}_1$ is zero for $j = 2, 3, \ldots, n$.
That is,
$$\langle \vec{v}_j | \vec{v}_1 \rangle = \sum_{k=1}^{n} (\vec{v}_j)_k^*\,(\vec{v}_1)_k = 0\,, \qquad \text{for } j = 2, 3, \ldots, n\,, \qquad (19)$$
where $(\vec{v}_j)_k^*$ is the complex conjugate of the kth component of the vector $\vec{v}_j$. We can rewrite eq. (19) as a matrix product (where $\vec{v}_1$ is an $n \times 1$ matrix) as:
$$Y^\dagger \vec{v}_1 = \sum_{k=1}^{n} (Y)_{kj}^*\,(\vec{v}_1)_k = \sum_{k=1}^{n} (\vec{v}_j)_k^*\,(\vec{v}_1)_k = 0\,. \qquad (20)$$

We now compute the following product of matrices:
$$U_1^\dagger A U_1 = \begin{pmatrix} \vec{v}_1^\dagger \\ Y^\dagger \end{pmatrix} A \begin{pmatrix} \vec{v}_1 & Y \end{pmatrix} = \begin{pmatrix} \vec{v}_1^\dagger A \vec{v}_1 & \vec{v}_1^\dagger A Y \\ Y^\dagger A \vec{v}_1 & Y^\dagger A Y \end{pmatrix}\,. \qquad (21)$$

Note that the partitioned matrix above has the following structure:
$$\begin{pmatrix} 1 \times 1 & 1 \times (n-1) \\ (n-1) \times 1 & (n-1) \times (n-1) \end{pmatrix}\,,$$
where we have indicated the dimensions (number of rows × number of columns) of the matrices occupying the four possible positions of the partitioned matrix. In particular, there is one row above the horizontal partition and $(n-1)$ rows below it; there is one column to the left of the vertical partition and $(n-1)$ columns to the right. Using $A\vec{v}_1 = \lambda_1 \vec{v}_1$, with $\vec{v}_1$ normalized to unity (i.e., $\vec{v}_1^\dagger \vec{v}_1 = 1$), we see that:
$$\vec{v}_1^\dagger A \vec{v}_1 = \lambda_1\,\vec{v}_1^\dagger \vec{v}_1 = \lambda_1\,, \qquad Y^\dagger A \vec{v}_1 = \lambda_1\,Y^\dagger \vec{v}_1 = 0\,,$$
after making use of eq. (20). Using these results in eq. (21) yields:

$$U_1^\dagger A U_1 = \begin{pmatrix} \lambda_1 & \vec{v}_1^\dagger A Y \\ 0 & Y^\dagger A Y \end{pmatrix}\,. \qquad (22)$$

If A is normal then $U_1^\dagger A U_1$ is normal, since
$$U_1^\dagger A U_1\,(U_1^\dagger A U_1)^\dagger = U_1^\dagger A U_1 U_1^\dagger A^\dagger U_1 = U_1^\dagger A A^\dagger U_1\,,$$
$$(U_1^\dagger A U_1)^\dagger\,U_1^\dagger A U_1 = U_1^\dagger A^\dagger U_1 U_1^\dagger A U_1 = U_1^\dagger A^\dagger A U_1\,,$$
where we have used the fact that $U_1$ is unitary ($U_1 U_1^\dagger = I$). Imposing $AA^\dagger = A^\dagger A$, we conclude that
$$U_1^\dagger A U_1\,(U_1^\dagger A U_1)^\dagger = (U_1^\dagger A U_1)^\dagger\,U_1^\dagger A U_1\,. \qquad (23)$$

However, eq. (22) implies that
$$U_1^\dagger A U_1\,(U_1^\dagger A U_1)^\dagger = \begin{pmatrix} \lambda_1 & \vec{v}_1^\dagger A Y \\ 0 & Y^\dagger A Y \end{pmatrix} \begin{pmatrix} \lambda_1^* & 0 \\ Y^\dagger A^\dagger \vec{v}_1 & Y^\dagger A^\dagger Y \end{pmatrix} = \begin{pmatrix} |\lambda_1|^2 + \vec{v}_1^\dagger A Y Y^\dagger A^\dagger \vec{v}_1 & \vec{v}_1^\dagger A Y Y^\dagger A^\dagger Y \\ Y^\dagger A Y Y^\dagger A^\dagger \vec{v}_1 & Y^\dagger A Y Y^\dagger A^\dagger Y \end{pmatrix}\,,$$
$$(U_1^\dagger A U_1)^\dagger\,U_1^\dagger A U_1 = \begin{pmatrix} \lambda_1^* & 0 \\ Y^\dagger A^\dagger \vec{v}_1 & Y^\dagger A^\dagger Y \end{pmatrix} \begin{pmatrix} \lambda_1 & \vec{v}_1^\dagger A Y \\ 0 & Y^\dagger A Y \end{pmatrix} = \begin{pmatrix} |\lambda_1|^2 & \lambda_1^*\,\vec{v}_1^\dagger A Y \\ \lambda_1\,Y^\dagger A^\dagger \vec{v}_1 & Y^\dagger A^\dagger \vec{v}_1 \vec{v}_1^\dagger A Y + Y^\dagger A^\dagger Y Y^\dagger A Y \end{pmatrix}\,.$$

Imposing the result of eq. (23), we first compare the upper left-hand block of the two matrices above. We conclude that:
$$\vec{v}_1^\dagger A Y\,Y^\dagger A^\dagger \vec{v}_1 = 0\,. \qquad (24)$$
But $Y^\dagger A^\dagger \vec{v}_1$ is an $(n-1)$-dimensional vector, so that eq. (24) is the matrix version of the following equation:
$$\langle Y^\dagger A^\dagger \vec{v}_1 \,|\, Y^\dagger A^\dagger \vec{v}_1 \rangle = 0\,. \qquad (25)$$

Since $\langle \vec{w} | \vec{w} \rangle = 0$ implies that $\vec{w} = 0$ (and $\vec{w}^\dagger = 0$), we conclude from eq. (25) that
$$Y^\dagger A^\dagger \vec{v}_1 = \vec{v}_1^\dagger A Y = 0\,. \qquad (26)$$

Using eq. (26) in the expressions for $U_1^\dagger A U_1 (U_1^\dagger A U_1)^\dagger$ and $(U_1^\dagger A U_1)^\dagger U_1^\dagger A U_1$ above, we see that eq. (23) requires that eq. (26) and the following condition are both satisfied:
$$Y^\dagger A Y\,Y^\dagger A^\dagger Y = Y^\dagger A^\dagger Y\,Y^\dagger A Y\,.$$
The latter condition simply states that $Y^\dagger A Y$ is normal. Using eq. (26) in eq. (22) then yields:
$$U_1^\dagger A U_1 = \begin{pmatrix} \lambda_1 & 0 \\ 0 & Y^\dagger A Y \end{pmatrix}\,, \qquad (27)$$

where $Y^\dagger A Y$ is normal. Thus, we have succeeded in reducing the original $n \times n$ normal matrix A down to an $(n-1) \times (n-1)$ normal matrix $Y^\dagger A Y$, and we can now repeat the procedure. The end result is the unitary diagonalization of A,
$$U^\dagger A U = D \equiv \begin{pmatrix} \lambda_1 & 0 & \cdots & 0 \\ 0 & \lambda_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda_n \end{pmatrix}\,.$$
Moreover, the eigenvalues of A (which are complex in general) are the diagonal elements of D and the eigenvectors of A are the columns of U. This should be clear from the equation $AU = UD$. The proof given above does not depend on whether any degeneracies exist among the $\lambda_i$. Thus, we have proven that a normal matrix is diagonalizable by a unitary similarity transformation.
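The proof above is constructive, and a single step of it is easy to reproduce numerically. The sketch below (illustrative only; the helper name and the example matrix are arbitrary choices) builds a unitary $U_1$ whose first column is proportional to a normalized eigenvector of a normal matrix and checks the block structure of eq. (27):

import numpy as np

def one_deflation_step(A):
    """One step of the proof: return U1 and U1^dagger A U1 for a normal matrix A."""
    n = A.shape[0]
    eigvals, eigvecs = np.linalg.eig(A)
    v1 = eigvecs[:, 0] / np.linalg.norm(eigvecs[:, 0])   # normalized eigenvector
    # Complete v1 to an orthonormal basis: QR of [v1 | I]; the Q factor is unitary
    # and its first column is proportional to v1.
    Q, _ = np.linalg.qr(np.column_stack([v1, np.eye(n, dtype=A.dtype)]))
    return Q, Q.conj().T @ A @ Q

# A normal (here: hermitian) test matrix.
A = np.array([[2.0, 1.0j, 0.0],
              [-1.0j, 3.0, 1.0],
              [0.0, 1.0, 2.0]], dtype=complex)

U1, M = one_deflation_step(A)
# Eq. (27): the first row and first column of U1^dagger A U1 vanish off the diagonal.
assert np.allclose(M[0, 1:], 0.0) and np.allclose(M[1:, 0], 0.0)
print("lambda_1 =", M[0, 0])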
Since hermitian matrices and unitary matrices are normal, it immediately follows that hermitian and unitary matrices are also diagonalizable by a unitary similarity transformation. In the case of hermitian matrices, the corresponding eigenvalues $\lambda_i$ must be real. In the case of unitary matrices, the corresponding eigenvalues are pure phases (i.e., complex numbers of unit magnitude).
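A brief numerical illustration of these two special cases (with arbitrarily generated matrices, added here only as a sketch):

import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))

H = X + X.conj().T               # a hermitian matrix
U, _ = np.linalg.qr(X)           # a unitary matrix (Q factor of a generic matrix)

# Hermitian: eigenvalues are real.
assert np.allclose(np.linalg.eigvals(H).imag, 0.0, atol=1e-10)
# Unitary: eigenvalues are pure phases (unit magnitude).
assert np.allclose(np.abs(np.linalg.eigvals(U)), 1.0)
print("Hermitian eigenvalues are real; unitary eigenvalues have unit magnitude.")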

Appendix: Proof that the eigenvectors corresponding to distinct eigenvalues are linearly independent

In this appendix, we shall prove that if $\vec{v}_1, \vec{v}_2, \ldots, \vec{v}_n$ are eigenvectors corresponding to the distinct eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_n$ of A, then $\vec{v}_1, \vec{v}_2, \ldots, \vec{v}_n$ are linearly independent. Recall that if $\vec{v}_1, \vec{v}_2, \ldots, \vec{v}_n$ are linearly independent, then
$$c_1 \vec{v}_1 + c_2 \vec{v}_2 + \cdots + c_n \vec{v}_n = 0 \iff c_i = 0 \text{ for all } i = 1, 2, \ldots, n\,. \qquad (28)$$
Starting from $A\vec{v} = \lambda\vec{v}$, we multiply on the left by A to get
$$A^2\vec{v} = A\cdot A\vec{v} = A(\lambda\vec{v}) = \lambda A\vec{v} = \lambda^2\vec{v}\,.$$
Continuing this process of multiplication on the left by A, we conclude that:
$$A^k\vec{v} = A\bigl(A^{k-1}\vec{v}\bigr) = A\bigl(\lambda^{k-1}\vec{v}\bigr) = \lambda^{k-1} A\vec{v} = \lambda^k\vec{v}\,, \qquad (29)$$
for $k = 2, 3, \ldots, n$. Thus, if we multiply eq. (28) on the left by $A^k$, then we obtain n separate equations by choosing $k = 0, 1, 2, \ldots, n-1$, given by:
$$c_1 \lambda_1^k\,\vec{v}_1 + c_2 \lambda_2^k\,\vec{v}_2 + \cdots + c_n \lambda_n^k\,\vec{v}_n = 0\,, \qquad k = 0, 1, 2, \ldots, n-1\,.$$
In matrix form,
$$\begin{pmatrix} 1 & 1 & 1 & \cdots & 1 \\ \lambda_1 & \lambda_2 & \lambda_3 & \cdots & \lambda_n \\ \lambda_1^2 & \lambda_2^2 & \lambda_3^2 & \cdots & \lambda_n^2 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ \lambda_1^{n-1} & \lambda_2^{n-1} & \lambda_3^{n-1} & \cdots & \lambda_n^{n-1} \end{pmatrix} \begin{pmatrix} c_1\vec{v}_1 \\ c_2\vec{v}_2 \\ c_3\vec{v}_3 \\ \vdots \\ c_n\vec{v}_n \end{pmatrix} = 0\,. \qquad (30)$$

The matrix appearing above is equal to the transpose of a well-known matrix called the Vandermonde matrix. There is a beautiful formula for its determinant:
$$\begin{vmatrix} 1 & 1 & 1 & \cdots & 1 \\ \lambda_1 & \lambda_2 & \lambda_3 & \cdots & \lambda_n \\ \lambda_1^2 & \lambda_2^2 & \lambda_3^2 & \cdots & \lambda_n^2 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ \lambda_1^{n-1} & \lambda_2^{n-1} & \lambda_3^{n-1} & \cdots & \lambda_n^{n-1} \end{vmatrix} = \prod_{i<j} (\lambda_j - \lambda_i)\,. \qquad (31)$$

I leave it as a challenge to the reader to provide a proof of eq. (31). This result implies that if all the eigenvalues $\lambda_i$ are distinct, then the determinant of the Vandermonde matrix is nonzero. In this case, the Vandermonde matrix is invertible. Multiplying eq. (30) by the inverse of the Vandermonde matrix then yields $c_i\vec{v}_i = 0$ (no sum over i) for all $i = 1, 2, \ldots, n$. Since the eigenvectors are nonzero by definition, it follows that $c_i = 0$ for all $i = 1, 2, \ldots, n$. Hence the $\{\vec{v}_i\}$ are linearly independent.
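As a small numerical sanity check of eq. (31) (an added illustration with arbitrary, randomly chosen λ's), the sketch below compares the determinant of the matrix appearing in eq. (30) with the product formula:

import numpy as np
from itertools import combinations

rng = np.random.default_rng(3)
lam = rng.normal(size=5)                      # five distinct (with probability 1) lambdas
n = len(lam)

# The matrix of eq. (30): entry (k, j) is lambda_j^k, for k = 0, ..., n-1.
V = np.vander(lam, increasing=True).T

det_formula = np.prod([lam[j] - lam[i] for i, j in combinations(range(n), 2)])
assert np.isclose(np.linalg.det(V), det_formula)
print("Vandermonde determinant formula of eq. (31) verified numerically.")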
