Deep Learning - Assignment 5 Solutions

1. Consider the following matrix:

$$A = \begin{pmatrix} 1 & 1 & 2 \\ 1 & 2 & 1 \\ 2 & 1 & 1 \end{pmatrix}$$

Which of the following vectors is not an eigenvector of this matrix?

A. $(1, 1, 1)^T$
B. $(1, -2, 1)^T$
C. $(-1, 0, 1)^T$
D. $(-1, 1, 0)^T$

Solution: Option D is the correct answer. For each of the vectors given in the options you can compute the product $Ax$. For example, consider $x = (1, 1, 1)^T$:

$$Ax = \begin{pmatrix} 1 & 1 & 2 \\ 1 & 2 & 1 \\ 2 & 1 & 1 \end{pmatrix} \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix} = \begin{pmatrix} 4 \\ 4 \\ 4 \end{pmatrix} = 4 \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix} = 4x$$

Hence, $x = (1, 1, 1)^T$ is an eigenvector of $A$. Similarly, you can show that $x = (1, -2, 1)^T$ and $x = (-1, 0, 1)^T$ are also eigenvectors of this matrix, with corresponding eigenvalues 1 and -1 respectively. You can then also check that $x = (-1, 1, 0)^T$ is not an eigenvector of this matrix: $Ax = (0, 1, -1)^T$, which is not a scalar multiple of $x$.
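As a quick numerical sanity check (not part of the original assignment, and assuming NumPy is available), the four options can be tested by checking whether $Ax$ is a scalar multiple of $x$:

```python
import numpy as np

# The matrix from the question.
A = np.array([[1, 1, 2],
              [1, 2, 1],
              [2, 1, 1]])

def is_eigenvector(A, x):
    """Return True if A @ x is a scalar multiple of x."""
    y = A @ x
    # Estimate the candidate eigenvalue from a nonzero component of x.
    i = np.argmax(np.abs(x))
    lam = y[i] / x[i]
    return np.allclose(y, lam * x)

for label, x in [("A", [1, 1, 1]), ("B", [1, -2, 1]),
                 ("C", [-1, 0, 1]), ("D", [-1, 1, 0])]:
    print(label, is_eigenvector(A, np.array(x)))
```

Only option D fails the check, matching the answer above.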
2. Consider a square matrix $A \in \mathbb{R}^{3 \times 3}$ such that $A^T = A$. My friend told me that the following three vectors are the eigenvectors of this matrix $A$:

$$x = \begin{pmatrix} -1 \\ 1 \\ 1 \end{pmatrix}, \quad y = \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix}, \quad z = \begin{pmatrix} 1 \\ 1 \\ -1 \end{pmatrix}$$

Is my friend telling the truth?
A. Yes
B. No
C. Can't say without knowing all the elements of A
D. Yes, only if all the diagonal elements of A are 1

Solution: Note that $A$ is a square symmetric matrix ($\because A \in \mathbb{R}^{3 \times 3}$ and $A^T = A$). We know that a square symmetric matrix admits an orthogonal set of eigenvectors; in particular, eigenvectors corresponding to distinct eigenvalues must be orthogonal. In other words, if $x, y, z$ are the eigenvectors of $A$ then $x^T y = x^T z = y^T z = 0$. You can easily verify that this is not the case here (for example, $x^T y = 1 \neq 0$). Hence, my friend is not telling the truth. Option B is the correct answer.
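The pairwise dot products can be verified directly (a one-line NumPy check, assuming NumPy is available):

```python
import numpy as np

# Candidate eigenvectors from the question.
x = np.array([-1, 1, 1])
y = np.array([1, 1, 1])
z = np.array([1, 1, -1])

# Eigenvectors of a symmetric matrix (for distinct eigenvalues) must be
# pairwise orthogonal; here none of the dot products is zero.
print(x @ y, x @ z, y @ z)
```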

3. Consider the following matrix:

$$A = \begin{pmatrix} 1 & 1 & 2 \\ 1 & 2 & 1 \\ 2 & 1 & 1 \end{pmatrix}$$

What can you say about the series $x, Ax, A^2x, A^3x, \ldots$?
A. It will diverge (explode)
B. It will converge (vanish)
C. It will reach a steady state
D. Can't say without knowing all the elements of x

Solution: Referring to the solution for question 1, we know that the dominant eigenvalue of this matrix is 4. From slide 10 of Lecture 6 we know that if the dominant eigenvalue satisfies $\lambda_d > 1$ then the series will diverge (explode), irrespective of which $x$ we start with (provided $x$ has a nonzero component along the dominant eigenvector). Option A is the correct answer.
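The divergence can be observed numerically: starting from an arbitrary vector, the norm of $A^k x$ grows by roughly a factor of 4 per step (a sketch, assuming NumPy; the starting vector $(1, 2, 3)^T$ is an arbitrary choice, not from the assignment):

```python
import numpy as np

A = np.array([[1, 1, 2],
              [1, 2, 1],
              [2, 1, 1]])

# Any start vector with a nonzero component along the dominant
# eigenvector (1, 1, 1) blows up by ~4x (the dominant eigenvalue) per step.
x = np.array([1.0, 2.0, 3.0])
for k in range(1, 6):
    x = A @ x
    print(k, np.linalg.norm(x))
```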

4. Which of the following sets of vectors does not form a valid basis in $\mathbb{R}^3$?

A. $(1, 1, 1)^T, (-1, 0, 1)^T, (1, -2, 1)^T$
B. $(1, 2, 3)^T, (3, 2, 1)^T, (1, 4, 5)^T$
C. $(1, 3, 1)^T, (4, -1, 2)^T, (6, 5, 4)^T$
D. $(3, 5, 7)^T, (1, 2, 2)^T, (4, 7, 4)^T$

Solution: Option C is the correct answer. A set of 3 vectors can form a basis in $\mathbb{R}^3$ if the vectors in the set are linearly independent. Now, consider the vectors $(1, 3, 1)^T, (4, -1, 2)^T, (6, 5, 4)^T$. We observe that,

$$2 \begin{pmatrix} 1 \\ 3 \\ 1 \end{pmatrix} + \begin{pmatrix} 4 \\ -1 \\ 2 \end{pmatrix} - \begin{pmatrix} 6 \\ 5 \\ 4 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}$$

Hence, the vectors are linearly dependent and thus cannot form a basis in $\mathbb{R}^3$.
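Linear independence for each option can also be checked by stacking the vectors as columns and computing the matrix rank (a sanity check, assuming NumPy):

```python
import numpy as np

# Each option's three vectors; a set is a basis of R^3 exactly when the
# 3x3 matrix built from them has rank 3 (equivalently, nonzero determinant).
options = {
    "A": [[1, 1, 1], [-1, 0, 1], [1, -2, 1]],
    "B": [[1, 2, 3], [3, 2, 1], [1, 4, 5]],
    "C": [[1, 3, 1], [4, -1, 2], [6, 5, 4]],
    "D": [[3, 5, 7], [1, 2, 2], [4, 7, 4]],
}
for label, vecs in options.items():
    M = np.column_stack(vecs)
    print(label, np.linalg.matrix_rank(M))
```

Only option C has rank 2, so its vectors are linearly dependent.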

5. Consider the matrix $A$:

$$A = \begin{pmatrix} 1 & 1 & 2 \\ 1 & 2 & 1 \\ 2 & 1 & 1 \end{pmatrix}$$

Now consider the following optimization problem:

$$\min_x \; x^T A x \quad \text{s.t.} \quad \|x\| = 1$$

Which of the following vectors is a solution to the above minimization problem?

A. $(1, 1, 1)^T$
B. $(1, -2, 1)^T$
C. $(-1, 0, 1)^T$
D. None of the above

Solution: From the Theorem on Slide 26 of Lecture 6 we know that the solution to the above minimization problem is the eigenvector corresponding to the smallest eigenvalue of $A$. From the solution to question 1 we know that the eigenvector corresponding to the smallest eigenvalue of $A$ is $(-1, 0, 1)^T$ (the corresponding eigenvalue is -1, and the other two eigenvalues are 1 and 4). Strictly, the unit-norm minimizer is $\frac{1}{\sqrt{2}}(-1, 0, 1)^T$, i.e. option C up to normalization. Hence Option C is the correct answer.
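This agrees with a direct eigendecomposition (a check, assuming NumPy; `np.linalg.eigh` returns the eigenvalues of a symmetric matrix in ascending order):

```python
import numpy as np

A = np.array([[1, 1, 2],
              [1, 2, 1],
              [2, 1, 1]])

# eigh sorts eigenvalues ascending, so column 0 of vecs is the unit
# eigenvector for the smallest eigenvalue; it minimizes x^T A x on
# the unit sphere.
vals, vecs = np.linalg.eigh(A)
print(vals)                 # smallest eigenvalue is -1
x_min = vecs[:, 0]          # proportional (up to sign) to (-1, 0, 1)/sqrt(2)
print(x_min @ A @ x_min)    # attains the smallest eigenvalue
```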

6. Consider a row stochastic matrix $M \in \mathbb{R}^{3 \times 3}$. The sum of the elements of each row of this matrix is 1. Is the vector $x = (1, 1, 1)^T$ an eigenvector of this matrix?
A. Yes
B. No
C. Can't say without knowing the elements of M
D. Yes, only if each row represents a uniform distribution

Solution: Any row stochastic matrix $M \in \mathbb{R}^{3 \times 3}$ will have the following form:

$$M = \begin{pmatrix} a & b & 1 - (a + b) \\ m & n & 1 - (m + n) \\ p & q & 1 - (p + q) \end{pmatrix}$$

where the sum of the elements of each row is 1. If we multiply such a row stochastic matrix by the vector $x = (1, 1, 1)^T$ we get

$$Mx = \begin{pmatrix} a & b & 1 - (a + b) \\ m & n & 1 - (m + n) \\ p & q & 1 - (p + q) \end{pmatrix} \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix} = \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix}$$

Hence $(1, 1, 1)^T$ is an eigenvector (with eigenvalue 1) of any row stochastic matrix $M \in \mathbb{R}^{3 \times 3}$. Option A is the correct answer.
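The claim holds for any row stochastic matrix, which can be illustrated on a randomly generated one (a sketch, assuming NumPy; the seeded random matrix is a hypothetical example, not from the assignment):

```python
import numpy as np

rng = np.random.default_rng(0)

# Build an arbitrary row-stochastic matrix: nonnegative rows summing to 1.
M = rng.random((3, 3))
M = M / M.sum(axis=1, keepdims=True)

ones = np.ones(3)
# Each entry of M @ ones is a row sum, i.e. exactly 1, so ones is an
# eigenvector with eigenvalue 1 regardless of the entries of M.
print(M @ ones)
```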

7. Consider a set of points $x_1, x_2, \ldots, x_m \in \mathbb{R}^2$ represented using the standard basis vectors $(1, 0)$ and $(0, 1)$. Let $X \in \mathbb{R}^{m \times 2}$ be a matrix such that $x_1, x_2, \ldots, x_m$ are the rows of this matrix. Using PCA, we want to represent this data using a new basis. To do so, we find the eigenvectors of $X^T X$, which happen to be $u_1 = \left(\frac{1}{\sqrt{2}}, \frac{1}{\sqrt{2}}\right)^T$ and $u_2 = \left(\frac{-1}{\sqrt{2}}, \frac{1}{\sqrt{2}}\right)^T$. Now suppose we want to represent one of the $m$ points, say $x_i = (2.1, 2.4)$, using only $u_1$ (i.e., we want to represent the data using fewer dimensions). Then what would be the squared error in reconstructing $x_i$ using only $u_1$?
A. 0.045
B. 0.030
C. 0.015
D. 0

Solution: Consider the point $x_i = (2.1, 2.4)$ represented using the standard basis. We want to represent it using only $u_1 = \left(\frac{1}{\sqrt{2}}, \frac{1}{\sqrt{2}}\right)^T$. To do this we first need to find the projections of $x_i$ on $u_1$ and $u_2$:

$$\alpha_1 = x_i^T u_1 = \frac{4.5}{\sqrt{2}}, \qquad \alpha_2 = x_i^T u_2 = \frac{0.3}{\sqrt{2}}$$

We can then see that,

$$x_i = \alpha_1 u_1 + \alpha_2 u_2 = \begin{pmatrix} 2.1 \\ 2.4 \end{pmatrix}$$

This is the full error-free reconstruction of $x_i$ using both $u_1$ and $u_2$. However, in the question we are asked to reconstruct $x_i$ using only $u_1$. Hence, we get,

$$\hat{x}_i = \alpha_1 u_1 = \begin{pmatrix} 2.25 \\ 2.25 \end{pmatrix}$$

We can now compute the squared error between $x_i$ and $\hat{x}_i$ as,

$$\text{error} = (2.25 - 2.1)^2 + (2.25 - 2.4)^2 = 0.045$$

Hence, Option A is the correct answer.
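The whole computation fits in a few lines (a check, assuming NumPy; note the error equals $\alpha_2^2$, the squared coefficient of the discarded direction):

```python
import numpy as np

# Point and PCA basis from the question.
xi = np.array([2.1, 2.4])
u1 = np.array([1, 1]) / np.sqrt(2)
u2 = np.array([-1, 1]) / np.sqrt(2)

# Project onto each basis vector, then reconstruct using only u1.
a1 = xi @ u1          # 4.5 / sqrt(2)
a2 = xi @ u2          # 0.3 / sqrt(2)
xi_hat = a1 * u1      # [2.25, 2.25]

error = np.sum((xi - xi_hat) ** 2)
print(error)
```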
