
Solutions for T5.

Deadline is Thu.Feb-13@13:59.

Last Update: Thu.Feb-13@13:30


1 by Aayushman-Raina Last Modified:Tue.Feb-11@20:08

Question:
Solution::
2 by Suchanda-Roy Last Modified:Thu.Feb-13@12:25

Question: For each of the following cases, give an example of a linear map $T_i : \mathbb{R}^6 \to \mathbb{R}^6$ such that the "spectrum" of eigenvalues satisfies the given table:

 λ    Alg. mult.    Geom. mult.
                    T1   T2   T3   T4   T5   T6
  2       1          1    1    1    1    1    1
 −3       2          1    1    1    2    2    2
  4       3          1    2    3    1    2    3
Solution::
The matrix $A_1$ corresponding to $T_1$ is
\[
A_1 = \begin{pmatrix}
2 & 0 & 0 & 0 & 0 & 0 \\
0 & -3 & 1 & 0 & 0 & 0 \\
0 & 0 & -3 & 0 & 0 & 0 \\
0 & 0 & 0 & 4 & 1 & 0 \\
0 & 0 & 0 & 0 & 4 & 1 \\
0 & 0 & 0 & 0 & 0 & 4
\end{pmatrix}
\]

The matrix $A_2$ corresponding to $T_2$ is
\[
A_2 = \begin{pmatrix}
2 & 0 & 0 & 0 & 0 & 0 \\
0 & -3 & 1 & 0 & 0 & 0 \\
0 & 0 & -3 & 0 & 0 & 0 \\
0 & 0 & 0 & 4 & 1 & 0 \\
0 & 0 & 0 & 0 & 4 & 0 \\
0 & 0 & 0 & 0 & 0 & 4
\end{pmatrix}
\]

The matrix $A_3$ corresponding to $T_3$ is
\[
A_3 = \begin{pmatrix}
2 & 0 & 0 & 0 & 0 & 0 \\
0 & -3 & 1 & 0 & 0 & 0 \\
0 & 0 & -3 & 0 & 0 & 0 \\
0 & 0 & 0 & 4 & 0 & 0 \\
0 & 0 & 0 & 0 & 4 & 0 \\
0 & 0 & 0 & 0 & 0 & 4
\end{pmatrix}
\]

The matrix $A_4$ corresponding to $T_4$ is
\[
A_4 = \begin{pmatrix}
2 & 0 & 0 & 0 & 0 & 0 \\
0 & -3 & 0 & 0 & 0 & 0 \\
0 & 0 & -3 & 0 & 0 & 0 \\
0 & 0 & 0 & 4 & 1 & 0 \\
0 & 0 & 0 & 0 & 4 & 1 \\
0 & 0 & 0 & 0 & 0 & 4
\end{pmatrix}
\]

The matrix $A_5$ corresponding to $T_5$ is
\[
A_5 = \begin{pmatrix}
2 & 0 & 0 & 0 & 0 & 0 \\
0 & -3 & 0 & 0 & 0 & 0 \\
0 & 0 & -3 & 0 & 0 & 0 \\
0 & 0 & 0 & 4 & 1 & 0 \\
0 & 0 & 0 & 0 & 4 & 0 \\
0 & 0 & 0 & 0 & 0 & 4
\end{pmatrix}
\]

The matrix $A_6$ corresponding to $T_6$ is
\[
A_6 = \begin{pmatrix}
2 & 0 & 0 & 0 & 0 & 0 \\
0 & -3 & 0 & 0 & 0 & 0 \\
0 & 0 & -3 & 0 & 0 & 0 \\
0 & 0 & 0 & 4 & 0 & 0 \\
0 & 0 & 0 & 0 & 4 & 0 \\
0 & 0 & 0 & 0 & 0 & 4
\end{pmatrix}
\]

For each of the above matrices, consider the linear map $T_i : \mathbb{R}^6 \to \mathbb{R}^6$ defined by $T_i(x) = A_i x$ for $x \in \mathbb{R}^6$.
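As a quick numerical sanity check (an addition, not part of the original solution), the multiplicities can be verified with a short NumPy sketch; the matrix below is $A_2$ from above, and the other $A_i$ can be checked the same way.

```python
import numpy as np

# Sanity-check sketch for A_2: algebraic multiplicity = number of times lambda
# appears as an eigenvalue, geometric multiplicity = dim ker(A - lambda I).
A2 = np.array([
    [2,  0,  0, 0, 0, 0],
    [0, -3,  1, 0, 0, 0],
    [0,  0, -3, 0, 0, 0],
    [0,  0,  0, 4, 1, 0],
    [0,  0,  0, 0, 4, 0],
    [0,  0,  0, 0, 0, 4],
], dtype=float)

eigvals = np.linalg.eigvals(A2)
for lam in (2, -3, 4):
    alg_mult = int(np.sum(np.isclose(eigvals, lam)))
    geom_mult = 6 - np.linalg.matrix_rank(A2 - lam * np.eye(6))
    print(lam, alg_mult, geom_mult)   # expected: (2, 1, 1), (-3, 2, 1), (4, 3, 2)
```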
3 by Nirmal-Kumar Last Modified:Wed.Feb-12@12:07

Question: Let $D : P_3(\mathbb{R}) \to P_3(\mathbb{R})$ be the differentiation map. Compute its eigenvalues, their algebraic and geometric multiplicities, and the corresponding eigenvectors.
Solution::
The map $D : P_3(\mathbb{R}) \to P_3(\mathbb{R})$ is defined by $p(x) \mapsto p'(x)$. Let $B = \{1, x, x^2, x^3\}$ be the standard basis of $P_3(\mathbb{R})$. Then
\[ D(1) = 0, \quad D(x) = 1, \quad D(x^2) = 2x, \quad D(x^3) = 3x^2, \]
so
\[ [D]_B = A = \begin{pmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & 2 & 0 \\ 0 & 0 & 0 & 3 \\ 0 & 0 & 0 & 0 \end{pmatrix}. \]
Since $C_A(\lambda) = \lambda^4$, $0$ is the only eigenvalue, with algebraic multiplicity $4$.
Also, $\operatorname{nullity}(A) = 4 - \operatorname{rank}(A) = 1$, which shows that the geometric multiplicity of $0$ is $1$.
Hence
\[ E_0(A) = \operatorname{span}\left\{ \begin{pmatrix} 1 \\ 0 \\ 0 \\ 0 \end{pmatrix} \right\}. \]
Note that in $P_3(\mathbb{R})$, the eigenspace $E_0(A)$ corresponds to the set of all constant polynomials.
Since the geometric multiplicity of $0$ is strictly less than its algebraic multiplicity, $A$ is NOT diagonalizable.
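A minimal NumPy sketch (an addition, not part of the original solution) confirming the eigenvalue data of $[D]_B$:

```python
import numpy as np

# The matrix of the differentiation map D on P_3(R) in the basis {1, x, x^2, x^3}.
A = np.array([
    [0, 1, 0, 0],
    [0, 0, 2, 0],
    [0, 0, 0, 3],
    [0, 0, 0, 0],
], dtype=float)

print(np.linalg.eigvals(A))              # all (numerically) zero: C_A(lambda) = lambda^4
print(4 - np.linalg.matrix_rank(A))      # nullity of A = geometric multiplicity of 0 -> 1
```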
4 by Pratham-Singh Last Modified:Wed.Feb-12@00:01

Question:
Solution:: The matrix is
\[ A = \begin{pmatrix} 1 & -4 & -4 \\ 8 & -11 & -8 \\ -8 & 8 & 5 \end{pmatrix}. \]
The eigenvalues of $A$ are the roots of the characteristic equation
\[ \det(A - \lambda I) = 0. \]
We compute
\[ \begin{vmatrix} 1-\lambda & -4 & -4 \\ 8 & -11-\lambda & -8 \\ -8 & 8 & 5-\lambda \end{vmatrix} = 0. \]
Expanding along the first row and simplifying, we get
\[ (1-\lambda)(\lambda^2 + 6\lambda + 9) = 0, \]
so the eigenvalues are
\[ \lambda_1 = 1, \qquad \lambda_2 = \lambda_3 = -3. \]
For each eigenvalue $\lambda_i$, we solve $(A - \lambda_i I)x = 0$ to get the eigenvectors.

1. For $\lambda_1 = 1$: solve $(A - I)x = 0$. Solving, we obtain the eigenvector $v_1 = \begin{pmatrix} 1 \\ 2 \\ -2 \end{pmatrix}$.

2. For $\lambda_2 = -3$: solve $(A + 3I)x = 0$, where
\[ A + 3I = \begin{pmatrix} 4 & -4 & -4 \\ 8 & -8 & -8 \\ -8 & 8 & 8 \end{pmatrix}. \]
Since $\operatorname{nullity}(A + 3I) = 2$, we get two linearly independent eigenvectors corresponding to $\lambda = -3$. Solving, we obtain $v_2 = \begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix}$ and $v_3 = \begin{pmatrix} 1 \\ 0 \\ 1 \end{pmatrix}$.
The matrix $P$ is formed using the eigenvectors as columns:
\[ P = \begin{pmatrix} 1 & 1 & 1 \\ 2 & 1 & 0 \\ -2 & 0 & 1 \end{pmatrix}, \qquad P^{-1} = \begin{pmatrix} 1 & -1 & -1 \\ -2 & 3 & 2 \\ 2 & -2 & -1 \end{pmatrix}. \]
The diagonal matrix $D$ is determined as $D = P^{-1} A P$:
\[ D = \begin{pmatrix} 1 & 0 & 0 \\ 0 & -3 & 0 \\ 0 & 0 & -3 \end{pmatrix}. \]

Hence we can write $A = P D P^{-1}$, and therefore
\[ A^{10} = P D^{10} P^{-1}, \qquad \text{where} \quad D^{10} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & (-3)^{10} & 0 \\ 0 & 0 & (-3)^{10} \end{pmatrix}. \]
Multiplying these matrices gives the final result.
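For completeness, a short NumPy sketch (not part of the original solution) that carries out the final multiplication $A^{10} = P D^{10} P^{-1}$ and cross-checks it against a direct matrix power:

```python
import numpy as np

A = np.array([[ 1,  -4, -4],
              [ 8, -11, -8],
              [-8,   8,  5]], dtype=float)
P = np.array([[ 1, 1, 1],
              [ 2, 1, 0],
              [-2, 0, 1]], dtype=float)
D = np.diag([1.0, -3.0, -3.0])

A10 = P @ np.linalg.matrix_power(D, 10) @ np.linalg.inv(P)
print(np.allclose(A10, np.linalg.matrix_power(A, 10)))   # True: the diagonalization is correct
print(np.rint(A10).astype(int))                          # explicit integer entries of A^10
```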
5 by Anuj-Narode Last Modified:Thu.Feb-13@13:04

Question: (a) Show that every $2 \times 2$ orthogonal matrix is of the form
\[ \begin{pmatrix} a & -b \\ b & a \end{pmatrix} \quad \text{or} \quad \begin{pmatrix} a & b \\ b & -a \end{pmatrix}, \]
where $\begin{pmatrix} a \\ b \end{pmatrix}$ is a unit vector.

(b) For $\theta \in \mathbb{R}$, let
\[ \rho_\theta = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}. \]
For $\varphi \in \mathbb{R}$, let
\[ \tau_\varphi = \begin{pmatrix} \cos 2\varphi & \sin 2\varphi \\ \sin 2\varphi & -\cos 2\varphi \end{pmatrix}. \]
Using (a), show that a matrix $A$ is orthogonal if and only if the map $f_A$ is of the form $\rho_\theta$ for some $\theta$ or $\tau_\varphi$ for some $\varphi$.

(c)

(i) Show that $\rho_\theta \rho_{\theta'} = \rho_\psi$. Determine $\psi$.

(ii) Show that $\rho_\theta \tau_\varphi = \tau_\psi$. Determine $\psi$.

(iii) Show that $\tau_\varphi \tau_{\varphi'} = \rho_\psi$. Determine $\psi$.

Solution::
(a) Let $A = \begin{pmatrix} p & q \\ r & s \end{pmatrix}$ be a $2 \times 2$ orthogonal matrix. Then
\[ A^T A = \begin{pmatrix} p^2 + r^2 & pq + rs \\ pq + rs & q^2 + s^2 \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}. \]
From this, one can conclude that $(p, r)^t$ and $(q, s)^t$ are unit vectors. We also get
\begin{align}
pq + rs &= 0, \tag{1} \\
p^2 + r^2 &= 1, \tag{2} \\
q^2 + s^2 &= 1. \tag{3}
\end{align}
Using Equation (1), squaring both sides of $pq = -rs$ and substituting (2) and (3), we get
\begin{align*}
p^2 q^2 &= r^2 s^2 \\
(1 - r^2)q^2 &= r^2(1 - q^2) \\
q^2 &= r^2 \\
q &= \pm r.
\end{align*}
Similarly we can obtain $p = \pm s$.

Case 1: $q = r = 0$. Then $p^2 = s^2 = 1$, so $A = \operatorname{diag}(p, s)$ with $p, s \in \{\pm 1\}$; if $p = s$ this is the first form, and if $p = -s$ it is the second form (with $a = p$, $b = 0$). Case 2: $q = r \neq 0$. In this case (1) gives $r(p + s) = 0$, which implies $p = -s$, and the matrix is $A = \begin{pmatrix} p & r \\ r & -p \end{pmatrix}$, the second form. Case 3: $q = -r \neq 0$. In this case (1) gives $r(s - p) = 0$, which implies $p = s$, and the matrix is $A = \begin{pmatrix} p & -r \\ r & p \end{pmatrix}$, the first form. In every case $a = p$, $b = r$, and $\begin{pmatrix} a \\ b \end{pmatrix}$ is a unit vector by (2).

(b) Suppose $A$ is an orthogonal matrix. Then by (a), $A = \begin{pmatrix} a & -b \\ b & a \end{pmatrix}$ or $A = \begin{pmatrix} a & b \\ b & -a \end{pmatrix}$, where $\begin{pmatrix} a \\ b \end{pmatrix}$ is a unit vector. Let $\alpha$ be the angle made by $\begin{pmatrix} a \\ b \end{pmatrix}$ with the positive $x$-axis. Then $a = \cos\alpha$ and $b = \sin\alpha$. Thus
\[ A = \begin{pmatrix} \cos\alpha & -\sin\alpha \\ \sin\alpha & \cos\alpha \end{pmatrix} \quad \text{or} \quad A = \begin{pmatrix} \cos\alpha & \sin\alpha \\ \sin\alpha & -\cos\alpha \end{pmatrix}. \]
Therefore $A = \rho_\alpha$ or $A = \tau_{\alpha/2}$. Conversely, suppose the map $f_A$ is of the form $\rho_\theta$ for some $\theta$ or $\tau_\varphi$ for some $\varphi$; then clearly $\rho_\theta$ and $\tau_\varphi$ are orthogonal matrices, so $A$ is orthogonal.

(c)(i) $\rho_\theta \rho_{\theta'} = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix} \begin{pmatrix} \cos\theta' & -\sin\theta' \\ \sin\theta' & \cos\theta' \end{pmatrix} = \begin{pmatrix} \cos\psi & -\sin\psi \\ \sin\psi & \cos\psi \end{pmatrix} = \rho_\psi$, where $\psi = \theta + \theta'$.

(c)(ii) $\rho_\theta \tau_\varphi = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix} \begin{pmatrix} \cos 2\varphi & \sin 2\varphi \\ \sin 2\varphi & -\cos 2\varphi \end{pmatrix} = \begin{pmatrix} \cos(\theta + 2\varphi) & \sin(\theta + 2\varphi) \\ \sin(\theta + 2\varphi) & -\cos(\theta + 2\varphi) \end{pmatrix} = \tau_\psi$, where $2\psi = \theta + 2\varphi$, i.e. $\psi = \varphi + \theta/2$.

(c)(iii) $\tau_\varphi \tau_{\varphi'} = \begin{pmatrix} \cos 2\varphi & \sin 2\varphi \\ \sin 2\varphi & -\cos 2\varphi \end{pmatrix} \begin{pmatrix} \cos 2\varphi' & \sin 2\varphi' \\ \sin 2\varphi' & -\cos 2\varphi' \end{pmatrix} = \begin{pmatrix} \cos(2\varphi - 2\varphi') & -\sin(2\varphi - 2\varphi') \\ \sin(2\varphi - 2\varphi') & \cos(2\varphi - 2\varphi') \end{pmatrix} = \rho_\psi$, where $\psi = 2\varphi - 2\varphi'$.
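A quick numerical spot-check of the three composition rules in (c) (a sketch added here, using arbitrary sample angles not taken from the problem):

```python
import numpy as np

def rho(t):
    # rotation by angle t
    return np.array([[np.cos(t), -np.sin(t)],
                     [np.sin(t),  np.cos(t)]])

def tau(p):
    # reflection across the line making angle p with the x-axis
    return np.array([[np.cos(2*p),  np.sin(2*p)],
                     [np.sin(2*p), -np.cos(2*p)]])

t, t2, p, p2 = 0.7, -1.3, 0.4, 2.1   # arbitrary sample angles
print(np.allclose(rho(t) @ rho(t2), rho(t + t2)))       # (i)   psi = theta + theta'
print(np.allclose(rho(t) @ tau(p),  tau(p + t/2)))      # (ii)  psi = phi + theta/2
print(np.allclose(tau(p) @ tau(p2), rho(2*p - 2*p2)))   # (iii) psi = 2*phi - 2*phi'
```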

6 by Arijit-Pal Last Modified:Wed.Feb-12@10:04

Question: Using the Gram-Schmidt procedure, convert the basis $\{(-1, 1, 1), (1, -1, 1), (1, 1, -1)\}$ of $\mathbb{R}^3$ into an orthonormal basis of $\mathbb{R}^3$.
Solution::
Gram–Schmidt Procedure:
Suppose $v_1, \ldots, v_m$ is a linearly independent list of vectors in $V$. Let $f_1 = v_1$.
For $k = 2, \ldots, m$, define $f_k$ inductively by
\[ f_k = v_k - \sum_{j=1}^{k-1} \frac{\langle v_k, f_j \rangle}{\|f_j\|^2} f_j. \]
For each $k = 1, \ldots, m$, let
\[ e_k = \frac{f_k}{\|f_k\|}. \]
Then $e_1, \ldots, e_m$ is an orthonormal list of vectors in $V$ such that
\[ \operatorname{span}(v_1, \ldots, v_k) = \operatorname{span}(e_1, \ldots, e_k) \]
for each $k = 1, \ldots, m$.
Applying Gram-Schmidt to Convert a Given Basis to an Orthonormal Basis:
Given the basis $\{(-1, 1, 1), (1, -1, 1), (1, 1, -1)\}$ of $\mathbb{R}^3$, we apply the Gram-Schmidt procedure. Define
\[ v_1 = (-1, 1, 1), \quad v_2 = (1, -1, 1), \quad v_3 = (1, 1, -1). \]
Step 1: Compute $f_1$:
\[ f_1 = v_1 = (-1, 1, 1), \qquad \|f_1\| = \sqrt{3}. \]
Thus,
\[ e_1 = \frac{f_1}{\|f_1\|} = \left( \frac{-1}{\sqrt{3}}, \frac{1}{\sqrt{3}}, \frac{1}{\sqrt{3}} \right). \]
Step 2: Compute $f_2$:
\[ \langle v_2, f_1 \rangle = (1)(-1) + (-1)(1) + (1)(1) = -1, \qquad \frac{\langle v_2, f_1 \rangle}{\|f_1\|^2} = -\frac{1}{3}. \]
\[ f_2 = v_2 - \left( -\frac{1}{3} \right) f_1 = (1, -1, 1) - \left( \frac{1}{3}, -\frac{1}{3}, -\frac{1}{3} \right) = \left( \frac{2}{3}, -\frac{2}{3}, \frac{4}{3} \right), \qquad \|f_2\| = \frac{2\sqrt{6}}{3}. \]
Thus,
\[ e_2 = \frac{f_2}{\|f_2\|} = \left( \frac{1}{\sqrt{6}}, \frac{-1}{\sqrt{6}}, \frac{2}{\sqrt{6}} \right). \]
Step 3: Compute $f_3$:
\[ \langle v_3, f_1 \rangle = (1)(-1) + (1)(1) + (-1)(1) = -1, \qquad \langle v_3, f_2 \rangle = (1)\tfrac{2}{3} + (1)\left(-\tfrac{2}{3}\right) + (-1)\tfrac{4}{3} = -\frac{4}{3}. \]
\[ \frac{\langle v_3, f_1 \rangle}{\|f_1\|^2} = -\frac{1}{3}, \qquad \frac{\langle v_3, f_2 \rangle}{\|f_2\|^2} = \frac{-\tfrac{4}{3}}{\tfrac{24}{9}} = -\frac{1}{2}. \]
\[ f_3 = v_3 - \left( -\frac{1}{3} \right) f_1 - \left( -\frac{1}{2} \right) f_2 = (1, 1, -1) - \left( \frac{1}{3}, -\frac{1}{3}, -\frac{1}{3} \right) - \left( -\frac{1}{3}, \frac{1}{3}, -\frac{2}{3} \right) = (1, 1, 0), \qquad \|f_3\| = \sqrt{2}. \]
Thus,
\[ e_3 = \frac{f_3}{\|f_3\|} = \left( \frac{1}{\sqrt{2}}, \frac{1}{\sqrt{2}}, 0 \right). \]
Hence, the resulting orthonormal basis is
\[ e_1 = \left( \frac{-1}{\sqrt{3}}, \frac{1}{\sqrt{3}}, \frac{1}{\sqrt{3}} \right), \quad e_2 = \left( \frac{1}{\sqrt{6}}, \frac{-1}{\sqrt{6}}, \frac{2}{\sqrt{6}} \right), \quad e_3 = \left( \frac{1}{\sqrt{2}}, \frac{1}{\sqrt{2}}, 0 \right). \]
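A small NumPy sketch (an addition, implementing the classical Gram-Schmidt formula quoted above) that reproduces $e_1, e_2, e_3$ and checks orthonormality:

```python
import numpy as np

def gram_schmidt(vectors):
    """Classical Gram-Schmidt: returns an orthonormal list spanning the same space."""
    fs = []
    for v in vectors:
        # subtract the projections of v onto the previously computed f_j
        f = v - sum((v @ g) / (g @ g) * g for g in fs)
        fs.append(f)
    return [f / np.linalg.norm(f) for f in fs]

basis = [np.array([-1., 1., 1.]), np.array([1., -1., 1.]), np.array([1., 1., -1.])]
e1, e2, e3 = gram_schmidt(basis)
print(e1, e2, e3)                        # matches the e_1, e_2, e_3 computed above
E = np.column_stack([e1, e2, e3])
print(np.allclose(E.T @ E, np.eye(3)))   # True: the columns form an orthonormal basis
```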

7 by Anindita-Saha Last Modified:Thu.Feb-13@00:18

Question:
When we apply the Gram-Schmidt procedure to a basis of $\mathbb{R}^n$ without normalizing the perpendicular part at each stage, we get an orthogonal basis of $\mathbb{R}^n$. Assume you have applied this "modified" procedure to convert a basis $\{v_1, \ldots, v_n\}$ into the orthogonal basis $\{w_1, \ldots, w_n\}$. Show that the $n$-volumes of these two bases are equal.
Solution::
By applying the (modified) Gram-Schmidt procedure:
\begin{align*}
w_1 &= v_1 \\
w_2 &= v_2 - \frac{v_2 \cdot w_1}{\|w_1\|^2} w_1 \\
w_3 &= v_3 - \frac{v_3 \cdot w_1}{\|w_1\|^2} w_1 - \frac{v_3 \cdot w_2}{\|w_2\|^2} w_2 \\
&\ \ \vdots \\
w_n &= v_n - \sum_{i=1}^{n-1} \frac{v_n \cdot w_i}{\|w_i\|^2} w_i
\end{align*}
and therefore
\begin{align*}
v_1 &= w_1 \\
v_2 &= w_2 + \frac{v_2 \cdot w_1}{\|w_1\|^2} w_1 \\
v_3 &= w_3 + \frac{v_3 \cdot w_1}{\|w_1\|^2} w_1 + \frac{v_3 \cdot w_2}{\|w_2\|^2} w_2 \\
&\ \ \vdots \\
v_n &= w_n + \sum_{i=1}^{n-1} \frac{v_n \cdot w_i}{\|w_i\|^2} w_i
\end{align*}
The $n$-volume of the basis $\{v_1, \ldots, v_n\}$ is $|\det V|$ and the $n$-volume of $\{w_1, \ldots, w_n\}$ is $|\det W|$, where $V = [v_1\ v_2\ \cdots\ v_n]$ and $W = [w_1\ w_2\ \cdots\ w_n]$. Now,
\begin{align*}
V &= [v_1\ v_2\ \cdots\ v_n] \\
  &= \left[\, w_1 \;\; w_2 + \frac{v_2 \cdot w_1}{\|w_1\|^2} w_1 \;\; w_3 + \frac{v_3 \cdot w_1}{\|w_1\|^2} w_1 + \frac{v_3 \cdot w_2}{\|w_2\|^2} w_2 \;\; \cdots \;\; w_n + \sum_{i=1}^{n-1} \frac{v_n \cdot w_i}{\|w_i\|^2} w_i \,\right] \\
  &= [w_1\ w_2\ \cdots\ w_n]\, A_n,
\end{align*}
where
\[ A_n = \begin{pmatrix}
1 & \frac{v_2 \cdot w_1}{\|w_1\|^2} & \frac{v_3 \cdot w_1}{\|w_1\|^2} & \cdots & \frac{v_n \cdot w_1}{\|w_1\|^2} \\
0 & 1 & \frac{v_3 \cdot w_2}{\|w_2\|^2} & \cdots & \frac{v_n \cdot w_2}{\|w_2\|^2} \\
0 & 0 & 1 & \cdots & \frac{v_n \cdot w_3}{\|w_3\|^2} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
0 & 0 & 0 & \cdots & 1
\end{pmatrix} \]
is an upper triangular matrix whose diagonal entries are all $1$, and therefore $\det A_n = 1$. Thus, $\det V = \det W \cdot \det A_n = \det W$. Hence, the $n$-volumes of these bases are equal.
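The determinant argument can also be checked numerically; here is a sketch (an addition, using a random basis of $\mathbb{R}^5$ as an assumed test case) of the modified procedure and the volume comparison:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
V = rng.standard_normal((n, n))   # columns v_1, ..., v_n form a basis (almost surely)

# Modified Gram-Schmidt as in the problem: orthogonalize without normalizing.
W = np.zeros_like(V)
for k in range(n):
    w = V[:, k].copy()
    for i in range(k):
        w -= (V[:, k] @ W[:, i]) / (W[:, i] @ W[:, i]) * W[:, i]
    W[:, k] = w

# The n-volumes |det V| and |det W| agree, since V = W A_n with det A_n = 1.
print(np.isclose(abs(np.linalg.det(V)), abs(np.linalg.det(W))))   # True
```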
