
Multiple View Geometry: Solution Sheet 2

Prof. Dr. Daniel Cremers, David Schubert, Mohammed Brahimi


Computer Vision Group, TU Munich
http://vision.in.tum.de/teaching/ss2019/mvg2019

Exercise: May 22nd, 2019

Part I: Theory

1. Groups and inclusions:


Groups

(a) SO(n): special orthogonal group
(b) O(n): orthogonal group
(c) GL(n): general linear group
(d) SL(n): special linear group
(e) SE(n): special Euclidean group (in particular, SE(3) represents the rigid-body motions in R^3)
(f) E(n): Euclidean group
(g) A(n): affine group

Inclusions

(a) SO(n) ⊂ O(n) ⊂ GL(n)
(b) SE(n) ⊂ E(n) ⊂ A(n) ⊂ GL(n+1)
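As a quick numerical illustration (not part of the original sheet), the defining properties behind these inclusions can be checked directly; the following numpy sketch assumes a rotation about the z-axis and an arbitrary translation:

```python
import numpy as np

# Assumed example: rotation about the z-axis by an arbitrary angle.
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])

# O(3) membership: R^T R = I; SO(3) additionally requires det(R) = +1.
print(np.allclose(R.T @ R, np.eye(3)))      # True -> R in O(3)
print(np.isclose(np.linalg.det(R), 1.0))    # True -> R in SO(3)

# SE(3) as a subset of GL(4): embed the rigid-body motion (R, t) as the
# homogeneous 4x4 matrix [R t; 0 1], which is invertible.
t = np.array([1.0, 2.0, 3.0])
g = np.eye(4)
g[:3, :3] = R
g[:3, 3] = t
print(np.isclose(np.linalg.det(g), 1.0))    # True -> g in GL(4)
```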

2. For a symmetric matrix A (i.e. A^⊤ = A) with eigenvectors Av_a = λ_a v_a, Av_b = λ_b v_b and ⟨v_a, v_b⟩ ≠ 0:

   λ_a = (λ_a v_a)^⊤ v_b / ⟨v_a, v_b⟩ = v_a^⊤ A^⊤ v_b / ⟨v_a, v_b⟩ = v_a^⊤ A v_b / ⟨v_a, v_b⟩ = v_a^⊤ (λ_b v_b) / ⟨v_a, v_b⟩ = λ_b
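Read contrapositively, the chain says that eigenvectors of a symmetric matrix belonging to distinct eigenvalues must be orthogonal (otherwise the division by ⟨v_a, v_b⟩ would force λ_a = λ_b). A minimal numpy check, with an assumed random symmetric matrix:

```python
import numpy as np

# Assumed example: a random symmetric matrix (symmetric by construction).
rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4))
A = B + B.T

# eigh is designed for symmetric matrices; with probability one the
# eigenvalues of a random A are distinct, so all pairs of eigenvectors
# must be mutually orthogonal.
lam, V = np.linalg.eigh(A)
print(np.allclose(V.T @ V, np.eye(4)))  # True: pairwise orthonormal
```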

3. Let V be the orthogonal matrix (i.e. V^⊤ = V^{-1}) whose columns are the eigenvectors, and Σ the diagonal matrix containing the eigenvalues:

   V = (v_1 | ··· | v_n)   and   Σ = diag(λ_1, ..., λ_n)

As the columns of V form a basis, we can express x as a linear combination of the eigenvectors, x = Vα, for some α ∈ R^n. For ||x|| = 1 we have ∑_i α_i² = α^⊤α = x^⊤ V V^⊤ x = x^⊤ x = 1. This gives

   x^⊤ A x = x^⊤ V Σ V^{-1} x = α^⊤ V^⊤ V Σ V^⊤ V α = α^⊤ Σ α = ∑_i α_i² λ_i

Considering ∑_i α_i² = 1, we conclude that this expression is minimized iff only the α_i corresponding to the smallest eigenvalue(s) are non-zero. If λ_{n-1} ≠ λ_n, there exist only two solutions (α_n = ±1); otherwise there are infinitely many.
For maximization, only the α_i corresponding to the largest eigenvalue(s) can be non-zero.
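In other words, the constrained extrema of x^⊤Ax over the unit sphere are attained at eigenvectors of the extreme eigenvalues. A short numpy sketch (the random symmetric A is an assumed example; note that numpy's eigh sorts eigenvalues in ascending order):

```python
import numpy as np

# Assumed example: a random symmetric matrix.
rng = np.random.default_rng(1)
B = rng.standard_normal((5, 5))
A = B + B.T

lam, V = np.linalg.eigh(A)          # eigenvalues in ascending order
x_min = V[:, 0]                     # eigenvector of the smallest eigenvalue
x_max = V[:, -1]                    # eigenvector of the largest eigenvalue

print(np.isclose(x_min @ A @ x_min, lam[0]))   # min of x^T A x is lambda_min
print(np.isclose(x_max @ A @ x_max, lam[-1]))  # max of x^T A x is lambda_max

# Rayleigh quotients of random unit vectors stay between the extremes.
xs = rng.standard_normal((1000, 5))
xs /= np.linalg.norm(xs, axis=1, keepdims=True)
q = np.einsum('ij,jk,ik->i', xs, A, xs)        # x^T A x for each row x
print(lam[0] <= q.min() and q.max() <= lam[-1])  # True
```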

4. We show that x ∈ kernel(A) ⇔ x ∈ kernel(A^⊤A).

"⇒": Let x ∈ kernel(A), i.e. Ax = 0. Then

   A^⊤Ax = A^⊤(Ax) = A^⊤0 = 0  ⇒  x ∈ kernel(A^⊤A)

"⇐": Let x ∈ kernel(A^⊤A), i.e. A^⊤Ax = 0. Then

   0 = x^⊤(A^⊤Ax) = ⟨Ax, Ax⟩ = ||Ax||²  ⇒  Ax = 0  ⇒  x ∈ kernel(A)
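The equivalence can also be checked numerically by comparing null spaces computed via the SVD. Below is a minimal sketch; the rank-1 matrix and the null_space helper are assumed for illustration:

```python
import numpy as np

def null_space(M, tol=1e-10):
    """Columns span ker(M): rows of Vt whose singular values are ~0."""
    _, s, Vt = np.linalg.svd(M)
    rank = int(np.sum(s > tol))
    return Vt[rank:].T

# Assumed example: a 2x3 matrix of rank 1 (second row = 2 * first row).
A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])

N = null_space(A)                             # basis of ker(A)
print(np.allclose(A @ N, 0))                  # N lies in ker(A)
print(np.allclose(A.T @ A @ N, 0))            # ... and in ker(A^T A)
print(null_space(A.T @ A).shape == N.shape)   # same dimension
```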

5. Singular Value Decomposition (SVD)


Note: There exist multiple slightly different definitions of the SVD. Depending on the convention used, we might have S ∈ R^{m×n}, S ∈ R^{d×d}, or S ∈ R^{p×p}, where d = min(m, n) and p = rank(A). In the lecture the third option was presented, for which S is invertible (no zeros on the diagonal). In the following, we present the results for the first option, since that is the one that Matlab's svd function returns by default.
Small mistake during the tutorial: svd(A,'econ') does not give the most compact SVD with S ∈ R^{p×p} and p = rank(A); it only gives S ∈ R^{d×d} with d = min(m, n), which still contains all singular values, including zeros.
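For comparison, numpy's svd exhibits the same two conventions: its default output should correspond to the first option above, while full_matrices=False plays the role of Matlab's svd(A,'econ'). A small sketch with an assumed 4×2 rank-1 matrix:

```python
import numpy as np

# Assumed example: a 4x2 matrix of rank 1.
A = np.outer(np.arange(1.0, 5.0), np.array([1.0, 2.0]))

U, s, Vt = np.linalg.svd(A)                          # full: U is 4x4
Ue, se, Vte = np.linalg.svd(A, full_matrices=False)  # "econ": U is 4x2

# s has d = min(4, 2) = 2 entries even though rank(A) = 1, so the
# second singular value is (numerically) zero -- it is not dropped.
print(U.shape, Ue.shape, s)

# Reconstruction with the first convention: S in R^{m x n}.
S = np.zeros_like(A)
S[:2, :2] = np.diag(s)
print(np.allclose(U @ S @ Vt, A))  # True
```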

(a) A ∈ R^{m×n}, U ∈ R^{m×m}, S ∈ R^{m×n}, V ∈ R^{n×n}


(b) Similarities and differences between SVD and EVD:
i. Both are matrix diagonalization techniques.
ii. The SVD can be applied to matrices A ∈ R^{m×n} with m ≠ n, whereas the EVD is
only applicable to square matrices (A ∈ R^{m×n} with m = n).
(c) Relationship between U, S, V and the eigenvalues and eigenvectors of A^⊤A and AA^⊤:
i. A^⊤A: The columns of V are eigenvectors; the squares of the diagonal elements of S
are the eigenvalues.
ii. AA^⊤: The columns of U are eigenvectors; the squares of the diagonal elements of S
are the eigenvalues (possibly padded with zeros).
(d) Entries in S:
i. S is a diagonal matrix. The elements along the diagonal are the singular values of A.
ii. The number of non-zero singular values gives us the rank of the matrix A.
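The statements in (c) and (d) can be verified directly; a short numpy sketch with an assumed random 4×3 matrix:

```python
import numpy as np

# Assumed example: a random 4x3 matrix (full rank 3 almost surely).
rng = np.random.default_rng(2)
A = rng.standard_normal((4, 3))

U, s, Vt = np.linalg.svd(A)

# (c) i.  A^T A V = V diag(s^2): the columns of V (= Vt.T) are
# eigenvectors of A^T A with eigenvalues s_i^2.
print(np.allclose(A.T @ A @ Vt.T, Vt.T * s**2))

# (c) ii. A A^T U = U diag(s^2 padded with zeros): U has one extra
# column, whose eigenvalue is 0.
s_pad = np.concatenate([s**2, np.zeros(1)])
print(np.allclose(A @ A.T @ U, U * s_pad))

# (d) ii. rank = number of non-zero singular values.
print(int(np.sum(s > 1e-10)) == np.linalg.matrix_rank(A))
```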
