Multiple View Geometry: Exercise Sheet 2

Prof. Dr. Florian Bernard, Florian Hofherr, Tarun Yenamandra


Computer Vision Group, TU Munich
Zoom Room Link, Password: 307238

Exercise: May 5th, 2021

Part I: Theory

1. Which groups have you seen in the lecture? Write down the names and the correct inclusions!
(e.g.: group A ⊂ group B)

2. Let A be a symmetric matrix, and λ_a, λ_b eigenvalues with eigenvectors v_a and v_b. Prove: if v_a and v_b are not orthogonal, it follows that λ_a = λ_b.
Hint: What can you say about ⟨Av_a, v_b⟩?
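One way to use the hint, sketched here rather than carried out in full: since A is symmetric and Av_a = λ_a v_a, Av_b = λ_b v_b, the inner product can be evaluated in two ways,

    \lambda_a \langle v_a, v_b \rangle
      = \langle A v_a, v_b \rangle
      = \langle v_a, A v_b \rangle
      = \lambda_b \langle v_a, v_b \rangle,

so (λ_a − λ_b)⟨v_a, v_b⟩ = 0, from which the claim follows.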

3. Let A ∈ ℝ^(n×n) be a symmetric matrix with an orthonormal basis of eigenvectors v_1, ..., v_n and eigenvalues λ_1 ≥ ... ≥ λ_n. Find all vectors x that minimize the following term:

    min_{||x||=1} xᵀAx

How many solutions exist? How can the term be maximized?

Hint: Use the expression x = Σ_{i=1}^n α_i v_i with coefficients α_i ∈ ℝ and compute appropriate coefficients!

4. Let A ∈ ℝ^(m×n). Prove that kernel(A) = kernel(AᵀA).
Hint: Consider a) x ∈ kernel(A) ⇒ x ∈ kernel(AᵀA)
and b) x ∈ kernel(AᵀA) ⇒ x ∈ kernel(A).

5. Singular Value Decomposition (SVD)


Let A = USVᵀ be the SVD of A.

(a) Write down possible dimensions for A, U, S and V.


(b) What are the similarities and differences between the SVD and the eigenvalue decomposition?
(c) What do you know about the relationship between U, S, V and the eigenvalues and eigenvectors of AᵀA and AAᵀ?
(d) What is the interpretation of the entries in S and what do the entries of S tell us about A?

Part II: Practical Exercises
The Moore-Penrose pseudo-inverse

To solve the linear system Ax = b for an arbitrary (non-square) matrix A ∈ ℝ^(m×n) of rank r ≤ min(m, n), one can define a (generalized) inverse, also called the Moore-Penrose pseudo-inverse (refer to Chapter 1, last slide).
In this exercise we want to solve the linear system Dx = b with D ∈ ℝ^(m×4), b ∈ ℝ^m a vector whose components are all equal to 1, and x∗ = [4, −3, 2, −1]ᵀ ∈ ℝ^4 should be one possible solution of the linear system, i.e. for any row [d_1, d_2, d_3, d_4] of D:

4d_1 − 3d_2 + 2d_3 − d_4 = 1

We recall that the set of all possible solutions is given by S = {x∗ + v | v ∈ kernel(D)}.

1. Create some data

(a) Generate such a matrix D using random values with m = 4 rows.
(Hint: Use rand to define d_1, d_2, d_3 and set d_4 = 4d_1 − 3d_2 + 2d_3 − 1.)
In general, rank(D) = 4, hence there is a unique solution.
(b) Introduce small additive errors into the data.
(Hint: Use eps*rand with eps=1.e-4.)
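A possible MATLAB sketch for this exercise, following the hints above; the variable names are our own choice, not prescribed by the sheet:

    % (a) Random data: each row satisfies 4*d1 - 3*d2 + 2*d3 - d4 = 1.
    m = 4;
    d = rand(m, 3);                                % columns d1, d2, d3
    D = [d, 4*d(:,1) - 3*d(:,2) + 2*d(:,3) - 1];   % d4 from the row constraint
    b = ones(m, 1);                                % right-hand side of D*x = b

    % (b) Small additive errors.
    eps = 1.e-4;
    D = D + eps * rand(m, 4);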

2. Find the coefficients x solving the system Dx = b

(a) Compute the SVD of the matrix D.
(Hint: Use svd.)
(b) Compute the Moore-Penrose pseudo-inverse using the result from (a), and compare it to the output of the MATLAB function pinv.
(c) Compute the coefficients x, and compare them to the true solution x∗.
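A hedged sketch of (a)-(c), assuming the pseudo-inverse is built from the SVD as D⁺ = V S⁺ Uᵀ (transpose S and invert its non-zero singular values); the tolerance 1e-10 for the numerical rank is our own choice:

    % (a) SVD of D: D = U*S*V'.
    [U, S, V] = svd(D);

    % (b) Moore-Penrose pseudo-inverse D+ = V * S+ * U'.
    sv = diag(S);                            % singular values of D
    r = sum(sv > 1e-10);                     % numerical rank
    Splus = zeros(size(S'));
    Splus(1:r, 1:r) = diag(1 ./ sv(1:r));    % invert non-zero singular values
    Dplus = V * Splus * U';
    norm(Dplus - pinv(D))                    % should be close to zero

    % (c) Solution via the pseudo-inverse, compared to x* = [4, -3, 2, -1]'.
    x = Dplus * b;
    norm(x - [4; -3; 2; -1])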

3. Repeat the two previous questions, setting m to a higher value. How is the precision affected?

4. We assume in the following that m = 3, hence we have infinitely many solutions.

(a) Solve the linear system again using questions (1) and (2).
Note that now rank(D) = 3 and dim(kernel(D)) = 1.
(b) Use the function null to get a vector v ∈ kernel(D).
The set of all possible solutions is S = {x + λv | λ ∈ ℝ}.
(c) According to the last slide of Chapter 1, we know that the following statement holds: x_min = A⁺b is, among all minimizers of ||Ax − b||², the one with the smallest norm ||x||.
Let λ ∈ ℝ, x_λ = x + λv be one possible solution, and e_λ = ||Dx_λ − b||² the associated error.
Using the function plot, display the graphs of ||x_λ|| and e_λ for λ ∈ {−100, ..., 100}, and observe that the statement indeed holds.
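A possible sketch for (b) and (c), again with our own variable names; x denotes the pseudo-inverse solution computed in question 2:

    % (b) Basis vector of the one-dimensional kernel of D.
    v = null(D);                             % 4x1 vector with D*v = 0 (up to noise)

    % (c) Plot ||x_lambda|| and e_lambda for lambda in {-100, ..., 100}.
    lambdas = -100:100;
    xnorms = zeros(size(lambdas));
    errors = zeros(size(lambdas));
    for k = 1:numel(lambdas)
        xl = x + lambdas(k) * v;
        xnorms(k) = norm(xl);
        errors(k) = norm(D * xl - b)^2;      % e_lambda = ||D*x_lambda - b||^2
    end
    plot(lambdas, xnorms, lambdas, errors);
    legend('||x_\lambda||', 'e_\lambda');

The error curve should be (nearly) flat, since Dv ≈ 0, while ||x_λ|| attains its minimum at λ = 0, i.e. at the pseudo-inverse solution, as the statement predicts.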
