Exercise Sheet 1
Exercise 1
Consider the matrix
A = \begin{pmatrix}
-1.0 & 1.0 & 0.0 & \cdots & 0.0 \\
0.0 & -1.0 & 1.0 & \ddots & \vdots \\
\vdots & \ddots & \ddots & \ddots & 0.0 \\
0.0 & \cdots & 0.0 & -1.0 & 1.0
\end{pmatrix} \in \mathbb{R}^{n \times (n+1)}
(a) Depending on n, how many numbers (integers and floats) are required to store the matrix in dense format and in the sparse matrix formats? Write down the matrix representations of A in the three formats for the case n = 4 (see also the sketch after the hint).
(b) Determine for which n each of the matrix formats from (a) is the most efficient.
(c) Assume a sparse matrix with unknown distribution of the nonzero entries. Discuss the efficiency of the operation ∑ diag(A).
Hint:
(c) Remember that direct access to single entries of sparse matrices is expensive.
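A minimal sketch related to part (a), assuming the three formats meant here are dense, COO, and CSR (the sheet does not name them explicitly); SciPy is used only to print the sparse representations for n = 4.

# Sketch: storage of A in dense, COO, and CSR format (assumed formats).
import numpy as np
from scipy.sparse import coo_matrix

n = 4
A = np.zeros((n, n + 1))
for i in range(n):
    A[i, i] = -1.0      # diagonal entries
    A[i, i + 1] = 1.0   # superdiagonal entries

A_coo = coo_matrix(A)   # stores (row, col, data) triplets
A_csr = A_coo.tocsr()   # stores (indptr, indices, data)

print(A)                                        # dense: n*(n+1) floats
print(A_coo.row, A_coo.col, A_coo.data)         # COO: 2*nnz integers + nnz floats
print(A_csr.indptr, A_csr.indices, A_csr.data)  # CSR: (n+1)+nnz integers + nnz floats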
Exercise 2
Show formally:
Exercise 3
Verify that the definitions
\|A\|_F := \sqrt{\sum_{i=1}^{m} \sum_{j=1}^{n} |a_{ij}|^2}
and
\|A\|_F^2 = \operatorname{tr}(A^\top A)
are equivalent.
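A quick numerical sanity check (not a proof) that the two definitions agree, using NumPy:

# Compare the entrywise Frobenius norm with sqrt(tr(A^T A)).
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 5))

frob_entrywise = np.sqrt(np.sum(np.abs(A) ** 2))
frob_trace = np.sqrt(np.trace(A.T @ A))
print(frob_entrywise, frob_trace)  # should agree up to rounding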
Exercise 4
Let A be diagonalizable with
A = V D V^{-1}. (1)
(a) Discuss the order of the number of operations (addition & multiplication) necessary for computing A^k directly or efficiently, making use of eq. (1) (as discussed in the lecture/lecture notes). Assume that A and V are both dense, while D is – of course – sparse. (See also the sketch after the hints.)
(b) If, instead of computing A^k, we are interested in computing A^k v for some vector v, how does the situation change?
(c) What would be the impact of A being sparse, but V being dense?
Hint: As long as the matrix is dense, the number of operations can simply be counted.
(c) For the discussion, remember that multiplication with a sparse matrix only depends on the number of non-zero entries.
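A small sketch contrasting the two approaches from part (a): forming A^k by repeated dense matrix products versus using the diagonalization (1), where only the diagonal of D has to be raised to the k-th power. The matrix size and exponent below are arbitrary illustrative choices.

# Compare direct computation of A^k with A^k = V D^k V^{-1}.
import numpy as np

rng = np.random.default_rng(1)
n, k = 5, 10
A = rng.standard_normal((n, n))

# direct: k-1 dense matrix-matrix products, O(k n^3) operations
Ak_direct = np.linalg.matrix_power(A, k)

# via eigendecomposition: one diagonalization, then D^k entrywise
eigvals, V = np.linalg.eig(A)                 # A = V diag(eigvals) V^{-1}
Ak_eig = (V * eigvals**k) @ np.linalg.inv(V)  # columns of V scaled by eigvals^k

print(np.allclose(Ak_direct, Ak_eig.real))    # True up to rounding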
Exercise 5
Let A ∈ ℝ^{n×n} be symmetric positive definite. Show:
(a) The spectral norm of A corresponds to its largest eigenvalue, that is,
∥A∥_2 = λ_max.
(b) The spectral condition number of A is given by
\kappa(A) = \frac{\lambda_{\max}(A)}{\lambda_{\min}(A)}
with λ_max(A) and λ_min(A) the maximum and minimum eigenvalue of A, respectively (see also the sketch after the hint).
Hint: Write x = \sum_i a_i v_i, where (λ_i, v_i) are the eigenpairs of A; here, the v_i have to be chosen in an appropriate way. Then, show that
\max_{x \neq 0} \frac{\sum_i \lambda_i^2 a_i^2}{\sum_i a_i^2} = \lambda_{\max}^2.
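A numerical illustration (not a proof) of both parts for a symmetric positive definite matrix; the test matrix below is an arbitrary construction that guarantees positive definiteness.

# Check ||A||_2 = lambda_max and kappa(A) = lambda_max / lambda_min numerically.
import numpy as np

rng = np.random.default_rng(2)
B = rng.standard_normal((6, 6))
A = B @ B.T + 6 * np.eye(6)          # symmetric positive definite by construction

eigs = np.linalg.eigvalsh(A)         # real eigenvalues, sorted ascending
print(np.linalg.norm(A, 2), eigs[-1])            # spectral norm vs lambda_max
print(np.linalg.cond(A, 2), eigs[-1] / eigs[0])  # kappa_2(A) vs lambda_max / lambda_min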
Exercise 6
(a) Argue why the classical Gram–Schmidt orthogonalization algorithm may experience a loss of orthogonality (a small experiment follows below).
(b) Show that Givens rotations are orthogonal and compute their spectral condition number.
Hint:
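A small experiment motivating part (a): on an ill-conditioned matrix, classical Gram–Schmidt loses orthogonality in floating-point arithmetic, while modified Gram–Schmidt fares noticeably better. The Hilbert-type test matrix is an illustrative choice.

import numpy as np

def classical_gs(A):
    # Classical Gram-Schmidt: project each original column against all
    # previously computed orthonormal vectors at once.
    m, n = A.shape
    Q = np.zeros((m, n))
    for j in range(n):
        v = A[:, j] - Q[:, :j] @ (Q[:, :j].T @ A[:, j])
        Q[:, j] = v / np.linalg.norm(v)
    return Q

def modified_gs(A):
    # Modified Gram-Schmidt: remove the component along each new q_j from
    # all remaining columns immediately.
    m, n = A.shape
    Q = A.astype(float).copy()
    for j in range(n):
        Q[:, j] /= np.linalg.norm(Q[:, j])
        for k in range(j + 1, n):
            Q[:, k] -= (Q[:, j] @ Q[:, k]) * Q[:, j]
    return Q

n = 10
H = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])  # Hilbert matrix

for name, Q in [("classical", classical_gs(H)), ("modified", modified_gs(H))]:
    print(name, np.linalg.norm(Q.T @ Q - np.eye(n)))  # deviation from orthogonality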
Exercise 7
Let A ∈ ℝ^{n×n} be symmetric. Show that
Exercise 8
Show that the summation of two vectors
S = x1 + x2
is backward stable.
Hint: Let the numerical algorithm for adding two floating-point numbers give you S̃ with
S̃_i = ((x_1)_i + (x_2)_i)(1 + ε_i) with |ε_i| ≤ ε.
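An illustration of the hint's rounding model (assuming IEEE double precision and componentwise addition): each computed component equals the exact sum scaled by (1 + ε_i) with |ε_i| bounded by the unit roundoff.

# Measure the componentwise relative rounding error of x1 + x2 against the
# exact rational sums; all errors should stay below u = 2^-53.
import numpy as np
from fractions import Fraction

rng = np.random.default_rng(3)
x1 = rng.standard_normal(1000)
x2 = rng.standard_normal(1000)
S_tilde = x1 + x2                     # floating-point addition

u = 2.0 ** -53                        # unit roundoff in double precision
eps = []
for a, b, s in zip(x1.tolist(), x2.tolist(), S_tilde.tolist()):
    exact = Fraction(a) + Fraction(b)                     # exact rational sum
    eps.append(abs(float((Fraction(s) - exact) / exact)))
print(max(eps), "<=", u, max(eps) <= u)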
Exercise 9
Show that the left and right eigenvalues of a matrix A ∈ ℝ^{n×n} are the same.
Hint: Make use of
det(A⊤) = det(A).
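A quick numerical check (not a proof) that the eigenvalues of A and A⊤, i.e. the right and left eigenvalues of A, coincide:

import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((5, 5))

right = np.sort_complex(np.linalg.eigvals(A))
left = np.sort_complex(np.linalg.eigvals(A.T))  # left eigenvalues of A
print(np.allclose(right, left))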
Exercise 10
Let A ∈ ℝ^{n×n} with
A = LDL⊤ ,
where
L = \begin{pmatrix}
1 & 0 & \cdots & 0 \\
l_{21} & 1 & \ddots & \vdots \\
\vdots & \ddots & \ddots & 0 \\
l_{n1} & \cdots & l_{n,(n-1)} & 1
\end{pmatrix}
is a lower triangular matrix and
D = \begin{pmatrix}
d_{11} & 0 & \cdots & 0 \\
0 & \ddots & \ddots & \vdots \\
\vdots & \ddots & \ddots & 0 \\
0 & \cdots & 0 & d_{nn}
\end{pmatrix}
is a diagonal matrix. Show:
(c) A is positive definite or negative definite if and only if all diagonal entries of D are
positive or negative, respectively.
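A quick check of part (c) on a concrete example using scipy.linalg.ldl. Note that SciPy's routine may permute rows and produce 2x2 blocks in general; for the positive definite example below, D comes out diagonal with positive entries.

import numpy as np
from scipy.linalg import ldl

rng = np.random.default_rng(5)
B = rng.standard_normal((5, 5))
A = B @ B.T + 5 * np.eye(5)   # symmetric positive definite test matrix

L, D, perm = ldl(A)
print(np.diag(D))                                                  # all positive for this A
print(np.all(np.linalg.eigvalsh(A) > 0), np.all(np.diag(D) > 0))   # both True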
Exercise 11
Consider the gradient descent method with a varying learning rate α_k:
Algorithm 1: Gradient descent method
Data: Initial guess x^(0) ∈ ℝ^n, learning rate α ∈ ℝ_+, and tolerance TOL > 0
r^(0) := b − Ax^(0);
while ∥r^(k)∥ ≥ TOL ∥r^(0)∥ do
    Compute α_k
    x^(k+1) := x^(k) + α_k r^(k)
    r^(k+1) := b − Ax^(k+1)
end
Result: Approximate solution of Ax = b
Assume that A is symmetric positive definite. Compute a formula for α_k (the step "Compute α_k" in Algorithm 1) such that, in each step,
∥e^(k+1)∥²_A
is minimized, where
∥v∥²_A := v⊤ A v,
e^(k+1) = x⋆ − x^(k+1) and x⋆ is the solution of
Ax = b.
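A runnable sketch of Algorithm 1 for a symmetric positive definite A. The step length used below, α_k = (r⊤r)/(r⊤Ar), is the standard steepest-descent choice; deriving that it minimizes ∥e^(k+1)∥²_A is exactly the task of this exercise.

import numpy as np

def gradient_descent(A, b, x0, tol=1e-10, max_iter=10_000):
    x = x0.astype(float).copy()
    r = b - A @ x                        # initial residual r^(0)
    r0_norm = np.linalg.norm(r)
    for _ in range(max_iter):
        if np.linalg.norm(r) < tol * r0_norm:
            break
        alpha = (r @ r) / (r @ (A @ r))  # optimal step length for SPD A
        x = x + alpha * r
        r = b - A @ x                    # new residual
    return x

rng = np.random.default_rng(6)
B = rng.standard_normal((8, 8))
A = B @ B.T + 8 * np.eye(8)              # symmetric positive definite
b = rng.standard_normal(8)
x = gradient_descent(A, b, np.zeros(8))
print(np.linalg.norm(A @ x - b))         # residual norm, should be tiny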