
Lecture Notes on Matrices for Kannur University Post Graduate Students

Dr. K. M. Udayanandan
Associate Professor
Nehru Arts and Science College
Kanhangad

Syllabus

Orthogonal matrices - Hermitian matrices - Unitary matrices - Diagonalisation of matrices.

Orthogonal Matrices

A real square matrix $A$ is orthogonal if its transpose is equal to its inverse:
$$A^T = A^{-1}$$
which requires
$$A^T A = A A^T = I$$
where $I$ is the identity matrix. An orthogonal matrix $A$ is necessarily invertible (with inverse $A^{-1} = A^T$), unitary ($A^{-1} = A^\dagger$) and therefore normal ($A^\dagger A = A A^\dagger$). The determinant of any orthogonal matrix is either $+1$ or $-1$.

Questions

1. Show that direction cosines of a three dimensional coordinates consti-


tutes an orthogonal matrix.

Direction cosines in three dimensional coordinates is given by


 
cos θ sin θ 0
S =  − sin θ cos θ 0 
0 0 1

Then  
cos θ − sin θ 0
S T =  sin θ cos θ 0 
0 0 1
So   
cos θ sin θ 0 cos θ − sin θ 0
SS T =  − sin θ cos θ 0   sin θ cos θ 0 
0 0 1 0 0 1
cos2 θ + sin2 θ
 
− sin θ cos θ + sin θ cos θ 0
=  − sin θ cos θ + sin θ cos θ sin2 θ + cos2 θ 0 
0 0 1
 
1 0 0
=  0 1 0 =I
0 0 1

2
SS T = I
So direction cosines of a three dimensional coordinates constitutes an
orthogonal matrix.
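The result above can also be checked numerically. The following is a small NumPy sketch (not part of the original notes; the angle is arbitrary, chosen only for illustration):

```python
import numpy as np

theta = 0.7  # arbitrary rotation angle in radians (illustrative choice)
S = np.array([[ np.cos(theta), np.sin(theta), 0],
              [-np.sin(theta), np.cos(theta), 0],
              [ 0,             0,             1]])

# An orthogonal matrix satisfies S S^T = S^T S = I.
print(np.allclose(S @ S.T, np.eye(3)))
print(np.allclose(S.T @ S, np.eye(3)))
# A pure rotation has determinant +1 (reflections give -1).
print(np.isclose(np.linalg.det(S), 1.0))
```

The same check passes for any value of `theta`, since $\cos^2\theta + \sin^2\theta = 1$ holds identically.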

2. Prove that the transpose of an orthogonal matrix is orthogonal.

Consider an orthogonal matrix $A$. Then
$$A^T A = I$$
Take the transpose of $A$ as another matrix, say
$$B = A^T$$
Then
$$B^T B = (A^T)^T A^T = A A^T = I$$
Since $B^T B = I$, the transpose of an orthogonal matrix is also orthogonal.

Hermitian and Unitary Matrices

Hermitian matrices

A Hermitian matrix (or self-adjoint matrix) is a square matrix which is equal to its own conjugate transpose. If the conjugate transpose of a matrix $A$ is denoted by $A^\dagger$, called "A dagger", then the Hermitian property can be written concisely as
$$A = A^\dagger$$

Properties

1. The sum of a square matrix and its conjugate transpose, $C + C^\dagger$, is Hermitian.

2. The difference of a square matrix and its conjugate transpose, $C - C^\dagger$, is skew-Hermitian (also called anti-Hermitian: $A = -A^\dagger$).

3. An arbitrary square matrix $C$ can be written as the sum of a Hermitian matrix $A$ and a skew-Hermitian matrix $B$:
$$C = A + B \quad \text{with} \quad A = \frac{1}{2}(C + C^\dagger) \quad \text{and} \quad B = \frac{1}{2}(C - C^\dagger)$$

4. The determinant of a Hermitian matrix is real.

Proof: Since $\det(A) = \det(A^T)$, we have $\det(A^\dagger) = \det(A)^*$. Therefore if $A = A^\dagger$, then $\det(A) = \det(A)^*$, so $\det(A)$ is real.
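Properties 1-4 can be verified numerically. Below is a NumPy sketch (an added illustration, using a randomly generated complex matrix):

```python
import numpy as np

rng = np.random.default_rng(0)
C = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))

A = (C + C.conj().T) / 2   # Hermitian part of C
B = (C - C.conj().T) / 2   # skew-Hermitian part of C

print(np.allclose(A, A.conj().T))            # A is Hermitian
print(np.allclose(B, -B.conj().T))           # B is skew-Hermitian
print(np.allclose(A + B, C))                 # C = A + B
print(np.isclose(np.linalg.det(A).imag, 0))  # det of Hermitian A is real
```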

Problems

1. Show that the eigenvalues of Hermitian matrices are real.

Note: A column matrix is represented by $|X\rangle$, called a ket. A row matrix is represented by $\langle X|$, called a bra. This is the bra-ket (Dirac) notation. The conjugate transpose (also called Hermitian conjugate) of a bra is the corresponding ket and vice versa: $\langle A|^\dagger = |A\rangle$, $|A\rangle^\dagger = \langle A|$.

Proof: We have $AX = \lambda X$ for any matrix with eigenvalue $\lambda$. In bra-ket notation,
$$A|X\rangle = \lambda |X\rangle \qquad (1)$$
Taking the conjugate transpose (dagger),
$$(A|X\rangle)^\dagger = (\lambda |X\rangle)^\dagger$$
$$|X\rangle^\dagger A^\dagger = \lambda^* |X\rangle^\dagger$$
$$\langle X| A^\dagger = \lambda^* \langle X| \qquad (2)$$
Multiplying eqn (1) by $\langle X|$ from the left side, we get
$$\langle X| A |X\rangle = \lambda \langle X|X\rangle \qquad (3)$$
Multiplying eqn (2) by $|X\rangle$ from the right side,
$$\langle X| A^\dagger |X\rangle = \lambda^* \langle X|X\rangle \qquad (4)$$
Here $A$ is Hermitian, $A^\dagger = A$. Then (4) becomes
$$\langle X| A |X\rangle = \lambda^* \langle X|X\rangle \qquad (5)$$
Comparing eqns (3) and (5), the left-hand sides are equal, hence the right-hand sides must be equal:
$$\lambda \langle X|X\rangle = \lambda^* \langle X|X\rangle$$
Since $\langle X|X\rangle \neq 0$,
$$\lambda = \lambda^*$$
Thus for any Hermitian matrix, the eigenvalues are real.
2. Show that for any Hermitian matrix, eigenvectors corresponding to distinct eigenvalues are orthogonal to each other.

Consider a matrix $A$. Let $\lambda_1$ and $\lambda_2$ be two distinct eigenvalues of $A$, and let $X_1$ and $X_2$ be the corresponding eigenvectors. Then we can write
$$A|X_1\rangle = \lambda_1 |X_1\rangle \qquad (6)$$
$$A|X_2\rangle = \lambda_2 |X_2\rangle \qquad (7)$$
Multiplying eqn (6) by $\langle X_2|$ from the left side,
$$\langle X_2| A |X_1\rangle = \lambda_1 \langle X_2|X_1\rangle$$
Taking the conjugate transpose (dagger),
$$(\langle X_2| A |X_1\rangle)^\dagger = (\lambda_1 \langle X_2|X_1\rangle)^\dagger$$
$$\langle X_1| A^\dagger |X_2\rangle = \lambda_1^* \langle X_1|X_2\rangle$$
Since $A$ is Hermitian, $A^\dagger = A$, and its eigenvalues are real ($\lambda_1^* = \lambda_1$), so
$$\langle X_1| A |X_2\rangle = \lambda_1 \langle X_1|X_2\rangle \qquad (8)$$
Multiplying eqn (7) by $\langle X_1|$ from the left side,
$$\langle X_1| A |X_2\rangle = \lambda_2 \langle X_1|X_2\rangle \qquad (9)$$
Subtracting eqn (9) from (8),
$$\lambda_1 \langle X_1|X_2\rangle - \lambda_2 \langle X_1|X_2\rangle = 0$$
$$(\lambda_1 - \lambda_2)\langle X_1|X_2\rangle = 0$$
Since $\lambda_1$ and $\lambda_2$ are distinct, $\lambda_1 - \lambda_2 \neq 0$. Then
$$\langle X_1|X_2\rangle = 0$$
That is, the eigenvectors are orthogonal.
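Both results — real eigenvalues and orthogonal eigenvectors — can be checked numerically. A NumPy sketch (the particular Hermitian matrix is an arbitrary illustrative choice with distinct eigenvalues):

```python
import numpy as np

# A small Hermitian matrix: equal to its own conjugate transpose.
A = np.array([[2.0,      1 - 1j],
              [1 + 1j,   3.0   ]])
assert np.allclose(A, A.conj().T)

lam, V = np.linalg.eig(A)
x1, x2 = V[:, 0], V[:, 1]

print(np.allclose(lam.imag, 0))        # eigenvalues are real
print(np.isclose(np.vdot(x1, x2), 0))  # eigenvectors are orthogonal
```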

3. If $A$ and $B$ are Hermitian matrices, show that $AB + BA$ is also Hermitian.

Given that $A$ and $B$ are Hermitian matrices. Then
$$A = (A^*)^T, \qquad B = (B^*)^T$$
Substituting in $AB + BA$, we get
$$AB + BA = (A^*)^T (B^*)^T + (B^*)^T (A^*)^T$$
$$= (B^* A^*)^T + (A^* B^*)^T$$
$$= (B^* A^* + A^* B^*)^T$$
$$= [(BA)^* + (AB)^*]^T$$
$$= [(AB + BA)^*]^T$$
So $AB + BA = [(AB + BA)^*]^T$, which is the definition of a Hermitian matrix. Hence $AB + BA$ is Hermitian.

4. Show that the product of two Hermitian matrices $A$ and $B$ is Hermitian if and only if $A$ and $B$ commute.

Consider two Hermitian matrices $A$ and $B$. By definition,
$$A^\dagger = A, \qquad B^\dagger = B$$
If $AB$ is Hermitian, then
$$AB = (AB)^\dagger = B^\dagger A^\dagger = BA$$
hence $A$ and $B$ commute.

Conversely, if $A$ and $B$ commute ($AB = BA$), then
$$(AB)^\dagger = B^\dagger A^\dagger = BA = AB$$
so $AB$ is Hermitian. This establishes both directions.

Unitary matrices

A complex square matrix $U$ is unitary if
$$U^\dagger U = U U^\dagger = I$$
or equivalently
$$U^{-1} = U^\dagger$$
where $I$ is the identity matrix and $U^\dagger$ is the conjugate transpose of $U$.

Note: $U^\dagger = (U^*)^T = (U^T)^*$, where $*$ denotes complex conjugation.

Problems

1. Show that $A = \begin{pmatrix} \sqrt{2}/2 & -i\sqrt{2}/2 & 0 \\ i\sqrt{2}/2 & -\sqrt{2}/2 & 0 \\ 0 & 0 & 1 \end{pmatrix}$ is a unitary matrix.

For a matrix $A$ to be unitary, $AA^\dagger = I = A^\dagger A$. Taking the conjugate,
$$A^* = \begin{pmatrix} \sqrt{2}/2 & i\sqrt{2}/2 & 0 \\ -i\sqrt{2}/2 & -\sqrt{2}/2 & 0 \\ 0 & 0 & 1 \end{pmatrix}$$
Taking the transpose, we get
$$A^\dagger = (A^*)^T = \begin{pmatrix} \sqrt{2}/2 & -i\sqrt{2}/2 & 0 \\ i\sqrt{2}/2 & -\sqrt{2}/2 & 0 \\ 0 & 0 & 1 \end{pmatrix}$$
Multiplying by $A$ from the left, we get
$$AA^\dagger = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} = I$$
Checking, $A^\dagger A$ is also the identity matrix. Hence $A$ is a unitary matrix.
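The multiplication above can be confirmed numerically with the same matrix (a NumPy sketch, added for illustration):

```python
import numpy as np

s = np.sqrt(2) / 2
A = np.array([[ s,      -1j * s, 0],
              [ 1j * s, -s,      0],
              [ 0,       0,      1]])

Adag = A.conj().T  # conjugate transpose A†

# Both products must give the identity for A to be unitary.
print(np.allclose(A @ Adag, np.eye(3)))
print(np.allclose(Adag @ A, np.eye(3)))
```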

Diagonalisation of matrices

Consider a square matrix, say of order 2, $A = \begin{pmatrix} a_1 & b_1 \\ a_2 & b_2 \end{pmatrix}$. Let $\lambda_1$ and $\lambda_2$ be its eigenvalues and $\begin{pmatrix} x_1 \\ y_1 \end{pmatrix}$, $\begin{pmatrix} x_2 \\ y_2 \end{pmatrix}$ be the corresponding eigenvectors. Constructing a matrix $P$ by writing the eigenvectors as columns, we get
$$P = \begin{pmatrix} x_1 & x_2 \\ y_1 & y_2 \end{pmatrix}$$
Then
$$AP = \begin{pmatrix} a_1 x_1 + b_1 y_1 & a_1 x_2 + b_1 y_2 \\ a_2 x_1 + b_2 y_1 & a_2 x_2 + b_2 y_2 \end{pmatrix}$$
Now consider the eigenvalue equations $AX = \lambda X$:
$$\begin{pmatrix} a_1 & b_1 \\ a_2 & b_2 \end{pmatrix} \begin{pmatrix} x_1 \\ y_1 \end{pmatrix} = \lambda_1 \begin{pmatrix} x_1 \\ y_1 \end{pmatrix}$$
and
$$\begin{pmatrix} a_1 & b_1 \\ a_2 & b_2 \end{pmatrix} \begin{pmatrix} x_2 \\ y_2 \end{pmatrix} = \lambda_2 \begin{pmatrix} x_2 \\ y_2 \end{pmatrix}$$
Equating both sides, we get the equations
$$a_1 x_1 + b_1 y_1 = \lambda_1 x_1$$
$$a_2 x_1 + b_2 y_1 = \lambda_1 y_1$$
$$a_1 x_2 + b_1 y_2 = \lambda_2 x_2$$
$$a_2 x_2 + b_2 y_2 = \lambda_2 y_2$$
Substituting these values,
$$AP = \begin{pmatrix} \lambda_1 x_1 & \lambda_2 x_2 \\ \lambda_1 y_1 & \lambda_2 y_2 \end{pmatrix} = \begin{pmatrix} x_1 & x_2 \\ y_1 & y_2 \end{pmatrix} \begin{pmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{pmatrix}$$
Let $D = \begin{pmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{pmatrix}$. Then
$$AP = PD$$
and therefore
$$P^{-1}AP = D = \begin{pmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{pmatrix}$$
which is the diagonalised matrix.

So, to diagonalise a diagonalisable matrix, find its eigenvalues and write them as the diagonal elements of a diagonal matrix of the corresponding dimension.
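The construction $P^{-1}AP = D$ can be carried out numerically for any diagonalisable matrix. A NumPy sketch (the $2 \times 2$ matrix here is an arbitrary example, not from the notes):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])  # illustrative matrix with eigenvalues 5 and 2

lam, P = np.linalg.eig(A)     # columns of P are the eigenvectors
D = np.linalg.inv(P) @ A @ P  # P^{-1} A P

# D is diagonal with the eigenvalues on its diagonal.
print(np.allclose(D, np.diag(lam)))
```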

Problems

1. Diagonalise the matrix $A = \begin{pmatrix} 6 & -2 & 2 \\ -2 & 3 & -1 \\ 2 & -1 & 3 \end{pmatrix}$

We have
$$AX = \lambda X$$
$$(A - \lambda I)X = 0$$
$$|A - \lambda I| = 0$$
$$\begin{vmatrix} 6-\lambda & -2 & 2 \\ -2 & 3-\lambda & -1 \\ 2 & -1 & 3-\lambda \end{vmatrix} = 0$$
Simplifying, we get $\lambda_1 = 2$, $\lambda_2 = 2$ and $\lambda_3 = 8$. Then the diagonal matrix is
$$D = \begin{pmatrix} \lambda_1 & 0 & 0 \\ 0 & \lambda_2 & 0 \\ 0 & 0 & \lambda_3 \end{pmatrix} = \begin{pmatrix} 2 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 8 \end{pmatrix}$$
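The eigenvalues found above can be confirmed numerically (a NumPy sketch; `eigvalsh` applies because $A$ is real symmetric):

```python
import numpy as np

A = np.array([[ 6, -2,  2],
              [-2,  3, -1],
              [ 2, -1,  3]], dtype=float)

# eigvalsh returns the eigenvalues of a symmetric/Hermitian matrix,
# sorted in ascending order.
lam = np.linalg.eigvalsh(A)
print(np.allclose(lam, [2, 2, 8]))
```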
 
2. Find the eigenvalues and eigenvectors of the matrix $A = \begin{pmatrix} 0 & 0 & 1 \\ 0 & 1 & 0 \\ 1 & 0 & 0 \end{pmatrix}$ and obtain the matrix $S$ such that $S^{-1}AS$ is diagonal.

We have
$$AX = \lambda X$$
$$(A - \lambda I)X = 0$$
$$|A - \lambda I| = 0$$
$$\begin{vmatrix} -\lambda & 0 & 1 \\ 0 & 1-\lambda & 0 \\ 1 & 0 & -\lambda \end{vmatrix} = 0$$
We get
$$\lambda_1 = 1, \quad \lambda_2 = 1, \quad \lambda_3 = -1$$
where $\lambda_1$, $\lambda_2$ and $\lambda_3$ are the eigenvalues. Solving $(A - \lambda I)X = 0$ for each eigenvalue gives the eigenvectors $(1, 0, 1)^T$ and $(0, 1, 0)^T$ for $\lambda = 1$, and $(1, 0, -1)^T$ for $\lambda = -1$. Writing these as columns,
$$S = \begin{pmatrix} 1 & 0 & 1 \\ 0 & 1 & 0 \\ 1 & 0 & -1 \end{pmatrix}$$
Then
$$S^{-1}AS = D = \begin{pmatrix} \lambda_1 & 0 & 0 \\ 0 & \lambda_2 & 0 \\ 0 & 0 & \lambda_3 \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & -1 \end{pmatrix}$$

3. If $U$ is a unitary matrix and $\lambda$ is its eigenvalue, show that $|\lambda|^2 = 1$.

If $U$ is a unitary matrix, we have
$$U^\dagger U = U U^\dagger = I \qquad (10)$$
Let $\lambda$ be an eigenvalue and $X$ the corresponding eigenvector of $U$. Then
$$UX = \lambda X \qquad (11)$$
Taking the conjugate transpose (dagger),
$$(UX)^\dagger = (\lambda X)^\dagger$$
$$X^\dagger U^\dagger = \lambda^* X^\dagger \qquad (12)$$
Multiplying (12) and (11),
$$X^\dagger U^\dagger (UX) = \lambda^* X^\dagger (\lambda X)$$
$$X^\dagger (U^\dagger U) X = \lambda^* \lambda X^\dagger X$$
Substituting equation (10),
$$X^\dagger (I) X = \lambda^* \lambda X^\dagger X$$
$$X^\dagger X = \lambda^* \lambda X^\dagger X$$
$$(\lambda^* \lambda - 1)\, X^\dagger X = 0$$
Since $X^\dagger X \neq 0$,
$$\lambda^* \lambda - 1 = 0$$
$$\lambda^* \lambda = 1$$
$$|\lambda|^2 = 1$$
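This unimodularity of the eigenvalues is easy to check numerically. A NumPy sketch (the matrix is the one from GATE question 5 below, which happens to be unitary):

```python
import numpy as np

U = np.array([[0,  1j],
              [1j, 0 ]])
# Verify U is unitary: U† U = I.
assert np.allclose(U.conj().T @ U, np.eye(2))

lam = np.linalg.eigvals(U)
# Every eigenvalue of a unitary matrix satisfies |λ|² = 1.
print(np.allclose(np.abs(lam) ** 2, 1))
```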

4. Show that the eigenvalues of a real symmetric matrix are real and that eigenvectors with distinct eigenvalues are orthogonal.

We have $AX = \lambda X$ for any matrix with eigenvalue $\lambda$. In bra-ket notation,
$$A|X\rangle = \lambda |X\rangle \qquad (13)$$
Taking the conjugate transpose (dagger),
$$(A|X\rangle)^\dagger = (\lambda |X\rangle)^\dagger$$
$$|X\rangle^\dagger A^\dagger = \lambda^* |X\rangle^\dagger$$
Since $A$ is a real matrix, $A^\dagger = A^T$, so
$$\langle X| A^T = \lambda^* \langle X| \qquad (14)$$
Multiplying eqn (13) by $\langle X|$ from the left side, we get
$$\langle X| A |X\rangle = \lambda \langle X|X\rangle \qquad (15)$$
Multiplying eqn (14) by $|X\rangle$ from the right side,
$$\langle X| A^T |X\rangle = \lambda^* \langle X|X\rangle \qquad (16)$$
If $A$ is a real symmetric matrix, $A^T = A$ (for a real matrix $A^\dagger = A^T$, and for a symmetric matrix $A^T = A$). Then (16) becomes
$$\langle X| A |X\rangle = \lambda^* \langle X|X\rangle \qquad (17)$$
Comparing eqns (15) and (17), the left-hand sides are equal, hence the right-hand sides must be equal:
$$\lambda \langle X|X\rangle = \lambda^* \langle X|X\rangle$$
$$\lambda = \lambda^*$$
Thus for any real symmetric matrix, the eigenvalues are real.

Now consider a matrix $A$. Let $\lambda_1$ and $\lambda_2$ be two distinct eigenvalues of $A$, and let $X_1$ and $X_2$ be the corresponding eigenvectors. Then we can write
$$A|X_1\rangle = \lambda_1 |X_1\rangle \qquad (18)$$
$$A|X_2\rangle = \lambda_2 |X_2\rangle \qquad (19)$$
Multiplying eqn (18) by $\langle X_2|$ from the left side,
$$\langle X_2| A |X_1\rangle = \lambda_1 \langle X_2|X_1\rangle$$
Taking the conjugate transpose (dagger),
$$(\langle X_2| A |X_1\rangle)^\dagger = (\lambda_1 \langle X_2|X_1\rangle)^\dagger$$
$$\langle X_1| A^\dagger |X_2\rangle = \lambda_1^* \langle X_1|X_2\rangle$$
If $A$ is a real symmetric matrix, $A^\dagger = A^T = A$, and its eigenvalues are real ($\lambda_1^* = \lambda_1$), so
$$\langle X_1| A |X_2\rangle = \lambda_1 \langle X_1|X_2\rangle \qquad (20)$$
Multiplying eqn (19) by $\langle X_1|$ from the left side,
$$\langle X_1| A |X_2\rangle = \lambda_2 \langle X_1|X_2\rangle \qquad (21)$$
Subtracting eqn (21) from (20),
$$\lambda_1 \langle X_1|X_2\rangle - \lambda_2 \langle X_1|X_2\rangle = 0$$
$$(\lambda_1 - \lambda_2)\langle X_1|X_2\rangle = 0$$
Since $\lambda_1$ and $\lambda_2$ are distinct, $\lambda_1 - \lambda_2 \neq 0$. Then
$$\langle X_1|X_2\rangle = 0$$
That is, the eigenvectors are orthogonal.

5. Show that the eigenvectors of a unitary matrix corresponding to distinct eigenvalues are orthogonal.

6. Find the real value of $\lambda$ for which the following equations hold:
$$x + 2y + 3z = \lambda x$$
$$3x + y + 2z = \lambda y$$
$$2x + 3y + z = \lambda z$$
The equations can be represented using matrices as follows:
$$AX = \lambda X$$
$$\begin{pmatrix} 1 & 2 & 3 \\ 3 & 1 & 2 \\ 2 & 3 & 1 \end{pmatrix} \begin{pmatrix} x \\ y \\ z \end{pmatrix} = \lambda \begin{pmatrix} x \\ y \\ z \end{pmatrix}$$
$$AX - \lambda X = 0$$
$$(A - \lambda I)X = 0$$
For a nontrivial solution $X \neq 0$, we require
$$|A - \lambda I| = 0$$
i.e.,
$$\begin{vmatrix} 1-\lambda & 2 & 3 \\ 3 & 1-\lambda & 2 \\ 2 & 3 & 1-\lambda \end{vmatrix} = 0$$
Solving,
$$\lambda = 6 \quad \text{and} \quad \lambda = \frac{-3 \pm \sqrt{9 - 12}}{2} = \frac{-3 \pm i\sqrt{3}}{2}$$
Hence the real value of $\lambda$ is 6.
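The conclusion can be checked numerically: of the three eigenvalues, only one is real. A NumPy sketch (added for illustration):

```python
import numpy as np

A = np.array([[1, 2, 3],
              [3, 1, 2],
              [2, 3, 1]], dtype=float)

lam = np.linalg.eigvals(A)
# Keep only the eigenvalues with (numerically) zero imaginary part.
real_lams = lam[np.isclose(lam.imag, 0)]
print(np.allclose(real_lams.real, 6))  # the only real eigenvalue is 6
```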

7. If $A$ is diagonal with all diagonal elements different, and $A$ and $B$ commute, show that $B$ is also diagonal.

8. Find $P$ such that $P^{-1}AP$ is diagonal and has as diagonal elements the characteristic roots of $A$, given
$$A = \begin{pmatrix} 7 & -2 & 1 \\ -2 & 10 & -2 \\ 1 & -2 & 7 \end{pmatrix}$$
Answer:
$$AX = \lambda X$$
$$(A - \lambda I)X = 0$$
$$|A - \lambda I| = 0$$
$$\begin{vmatrix} 7-\lambda & -2 & 1 \\ -2 & 10-\lambda & -2 \\ 1 & -2 & 7-\lambda \end{vmatrix} = 0$$
$$\lambda_1 = 6, \quad \lambda_2 = 6, \quad \lambda_3 = 12$$
Then the diagonal matrix is
$$D = \begin{pmatrix} \lambda_1 & 0 & 0 \\ 0 & \lambda_2 & 0 \\ 0 & 0 & \lambda_3 \end{pmatrix} = \begin{pmatrix} 6 & 0 & 0 \\ 0 & 6 & 0 \\ 0 & 0 & 12 \end{pmatrix}$$
The matrix $P$ is formed by writing the corresponding eigenvectors as columns: for example, $(2, 1, 0)^T$ and $(1, 0, -1)^T$ for $\lambda = 6$, and $(1, -2, 1)^T$ for $\lambda = 12$.

Diagonalisation of Hermitian matrices

We have shown that the eigenvectors of a Hermitian matrix (for distinct eigenvalues) are orthogonal. Hence a matrix $P$ made up of the normalised eigenvectors as columns is always unitary, so $P = U$. When we diagonalise a Hermitian matrix $A$, we therefore get
$$D = P^{-1}AP = U^{-1}AU$$
or
$$D = U^\dagger A U$$
Hence the diagonalisation of a Hermitian matrix is also called a unitary transformation.
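The unitary transformation $D = U^\dagger A U$ can be demonstrated numerically. A NumPy sketch (the Hermitian matrix is an arbitrary illustrative choice; `eigh` returns orthonormal eigenvectors, so $U$ is unitary):

```python
import numpy as np

A = np.array([[2.0,    1 - 1j],
              [1 + 1j, 3.0   ]])  # Hermitian: A = A†

lam, U = np.linalg.eigh(A)        # eigh gives orthonormal eigenvectors
print(np.allclose(U.conj().T @ U, np.eye(2)))  # U is unitary

D = U.conj().T @ A @ U            # D = U† A U
print(np.allclose(D, np.diag(lam)))  # D is diagonal with real entries
```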

Matrix GATE Questions

1. Find the eigenvalues of the matrix $\begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 1 \\ 0 & 1 & 0 \end{pmatrix}$

2. Two matrices $A$ and $B$ are said to be similar if $B = P^{-1}AP$ for some invertible matrix $P$. What is true about $A$ and $B$?

3. A $3 \times 3$ matrix has elements such that its trace is 11 and its determinant is 36. The eigenvalues of the matrix are all known to be positive integers. What is the largest eigenvalue of the matrix?

4. Find the eigenvalues of $\begin{pmatrix} 2 & 3 & 0 \\ 3 & 2 & 0 \\ 0 & 0 & 1 \end{pmatrix}$

5. Find the eigenvalues of $\begin{pmatrix} 0 & i \\ i & 0 \end{pmatrix}$

6. $\begin{pmatrix} ae^{i\alpha} & b \\ ce^{i\beta} & d \end{pmatrix}$ is a unitary matrix. If $a$, $b$, $c$, $d$, $\alpha$, $\beta$ are real, what is the inverse of the matrix?

7. The determinant of a $3 \times 3$ real symmetric matrix is 36. If two of its eigenvalues are 2 and 3, then what is the third eigenvalue?

8. Find the inverse of $\begin{pmatrix} 1 & -1 \\ 0 & 1 \end{pmatrix}$

9. If $\theta = 30^\circ$, what are the eigenvalues of $\begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}$

10. Find the eigenvalues and eigenvectors of $\begin{pmatrix} 5 & 4 \\ 1 & 2 \end{pmatrix}$

11. A real traceless $4 \times 4$ matrix has two eigenvalues $+1$ and $-1$. What are the other eigenvalues?

12. What are the eigenvalues of $\begin{pmatrix} 1 & i \\ -i & 1 \end{pmatrix}$
