Exercise 2: Optimization
Problem 1:
Consider the iterative process
x_{k+1} = (1/2) (x_k + a/x_k),
where a > 0. Assuming the process converges, to what does it converge? What is the order of
convergence?
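A quick numerical sketch (illustration only; the value a = 7 and the starting point are arbitrary choices) suggests the limit and the rapid error decay:

```python
# Run the iteration x_{k+1} = (1/2)(x_k + a/x_k) for a sample a and record
# the error against sqrt(a); the error roughly squares at each step.
import math

a = 7.0   # any a > 0 works; 7 is an arbitrary choice
x = 3.0   # arbitrary positive starting point
errors = []
for _ in range(6):
    x = 0.5 * (x + a / x)
    errors.append(abs(x - math.sqrt(a)))

print(x)           # ≈ 2.6457513..., i.e. sqrt(7)
print(errors[:3])  # rapidly shrinking
```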
Problem 2:
Consider the problem
min 5x² + 5y² − xy − 11x + 11y + 11
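A minimal sketch, assuming the intended task is to find the minimizer: the first-order condition ∇f = 0 gives the 2×2 linear system 10x − y = 11, −x + 10y = −11, solved here by Cramer's rule.

```python
# Stationarity check for f(x, y) = 5x^2 + 5y^2 - xy - 11x + 11y + 11:
# solve grad f = 0 (a 2x2 linear system) and evaluate f at the solution.
def f(x, y):
    return 5*x**2 + 5*y**2 - x*y - 11*x + 11*y + 11

det = 10*10 - (-1)*(-1)          # determinant of [[10, -1], [-1, 10]] = 99
x = (11*10 - (-1)*(-11)) / det   # Cramer's rule: x = 1
y = (10*(-11) - (-1)*11) / det   # Cramer's rule: y = -1
print(x, y, f(x, y))             # minimizer (1, -1), value 0
```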
Problem 3:
Show that the solution to the following optimization problem
max_{p_i} − ∑_{i=1}^n p_i log p_i
subject to ∑_{i=1}^n p_i = 1 and ∑_{i=1}^n x_i p_i = m,
is the Gibbs distribution p_i = exp{−λx_i} / ∑_{k=1}^n exp{−λx_k}, where λ is a Lagrange multiplier.
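The claim can be checked numerically. The values x_i and the target mean m below are made up; λ is found by bisection on the mean constraint, using the fact that the Gibbs mean is decreasing in λ.

```python
# Find λ so that p_i ∝ exp(-λ x_i) satisfies sum_i x_i p_i = m, then
# verify both constraints hold for the resulting distribution.
import math

xs = [1.0, 2.0, 3.0, 4.0]   # hypothetical x_i
m = 1.8                     # hypothetical target mean, inside (min xs, max xs)

def mean_of(lam):
    w = [math.exp(-lam * x) for x in xs]
    return sum(x * wi for x, wi in zip(xs, w)) / sum(w)

lo, hi = -50.0, 50.0        # mean_of is decreasing in λ on this bracket
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if mean_of(mid) > m:
        lo = mid
    else:
        hi = mid
lam = 0.5 * (lo + hi)

ps = [math.exp(-lam * x) for x in xs]
Z = sum(ps)
ps = [p / Z for p in ps]
print(lam, sum(ps), sum(x * p for x, p in zip(xs, ps)))
```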
Problem 4:
Consider the quadratic program
min_x (1/2) x⊤Qx − b⊤x
subject to Ax = c
Exercise 2: 7th February 2024
Prove that x∗ is a local minimum point if and only if it is a global minimum point. (No convexity
is assumed).
Problem 5:
Consider the problem of minimizing a quadratic function:
min f(x) = (1/2) x⊤P x + q⊤x + r,
where P is an n × n symmetric matrix.
a) Show that if P is not a positive semi-definite matrix, i.e., the objective function f is not convex,
then the problem is unbounded below.
b) Now suppose that P is positive semi-definite, but the optimality condition P x∗ = −q does not
have a solution. Show that the problem is unbounded below.
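Part (a) can be illustrated numerically with a made-up indefinite P: moving along a direction corresponding to a negative eigenvalue drives f to −∞.

```python
# Illustration for part (a): P = [[1, 0], [0, -2]] is not PSD, q = (1, 1), r = 0.
# Along the direction (0, 1) we get f(0, t) = -t^2 + t -> -infinity.
def f(x1, x2):
    return 0.5 * (x1*x1 - 2*x2*x2) + x1 + x2

vals = [f(0.0, t) for t in (1.0, 10.0, 100.0, 1000.0)]
print(vals)  # strictly decreasing, unbounded below
```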
Problem 6:
The purpose of this exercise is to show that Newton’s method is unaffected by linear scaling of the
variables. Consider a linear invertible transformation of variables x = Sy. Write Newton’s method
in the space of the variables y and show that it generates the sequence yk = S −1 xk , where {xk } is
the sequence generated by Newton’s method in the space of the variables x.
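The invariance can be checked on a sample function, here f(x) = x1⁴ + x2² with a made-up invertible S (both are illustrative choices, not part of the exercise). Newton's method on g(y) = f(Sy) uses ∇g(y) = S⊤∇f(Sy) and ∇²g(y) = S⊤∇²f(Sy)S; the two iterate sequences should satisfy y_k = S⁻¹x_k.

```python
# f(x) = x1^4 + x2^2: gradient and Hessian in closed form.
def grad(x):
    return [4*x[0]**3, 2*x[1]]

def hess(x):
    return [[12*x[0]**2, 0.0], [0.0, 2.0]]

def solve2(A, b):
    # Cramer's rule for a 2x2 system A d = b.
    det = A[0][0]*A[1][1] - A[0][1]*A[1][0]
    return [(b[0]*A[1][1] - A[0][1]*b[1]) / det,
            (A[0][0]*b[1] - b[0]*A[1][0]) / det]

def matmul(A, B):
    return [[sum(A[i][k]*B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def matvec(A, v):
    return [sum(A[i][k]*v[k] for k in range(2)) for i in range(2)]

S = [[2.0, 1.0], [0.0, 1.0]]       # made-up invertible scaling
Sinv = [[0.5, -0.5], [0.0, 1.0]]
St = [[2.0, 0.0], [1.0, 1.0]]      # S transposed

x = [1.0, 1.0]
y = matvec(Sinv, x)
for _ in range(5):
    # Newton step in x-space
    x = [xi - di for xi, di in zip(x, solve2(hess(x), grad(x)))]
    # Newton step in y-space on g(y) = f(Sy)
    Sy = matvec(S, y)
    Hy = matmul(St, matmul(hess(Sy), S))   # Hessian of g
    gy = matvec(St, grad(Sy))              # gradient of g
    y = [yi - di for yi, di in zip(y, solve2(Hy, gy))]

print(x, y, matvec(Sinv, x))  # y should track S^{-1} x
```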
Problem 7:
Among all rectangles contained in a given circle, show that the one with maximal area is a square.
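A one-variable sketch (not a proof): the optimal rectangle has its diagonal on a diameter, so with half-width a the area is A(a) = 4a√(r² − a²). A coarse grid search locates the maximizer near a = r/√2, i.e. a square.

```python
# Grid-search the area A(a) = 4a*sqrt(r^2 - a^2) of an inscribed rectangle
# with half-width a; the maximizer should be near r/sqrt(2), with area 2r^2.
import math

r = 1.0
best_a, best_area = None, -1.0
for i in range(1, 100000):
    a = r * i / 100000
    area = 4 * a * math.sqrt(max(r*r - a*a, 0.0))
    if area > best_area:
        best_a, best_area = a, area

print(best_a, best_area)  # near 1/sqrt(2) ≈ 0.7071, area ≈ 2
```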
Problem 8:
Show that if x∗ is a local minimum of the twice continuously differentiable function f : Rn → R over
the convex set X , then
(x − x∗ )⊤ ∇2 f (x∗ )(x − x∗ ) ≥ 0
for all x ∈ X such that ∇f(x∗)⊤(x − x∗) = 0.
Problem 9:
Let a_1, . . . , a_m be given vectors in Rn, and consider the problem of minimizing ∑_{j=1}^m ∥x − a_j∥² over a convex set X. Show that this problem is equivalent to the problem of projecting on X the center of gravity (1/m) ∑_{j=1}^m a_j.
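The equivalence rests on the identity ∑_j ∥x − a_j∥² = m∥x − c∥² + ∑_j ∥a_j − c∥², where c is the center of gravity (the second term is constant in x). A numeric spot check with made-up data:

```python
# Verify sum_j ||x - a_j||^2 = m*||x - c||^2 + sum_j ||a_j - c||^2 at a random x,
# where c = (1/m) sum_j a_j; the data below is made up.
import random

random.seed(0)
m, n = 5, 3
a = [[random.uniform(-1, 1) for _ in range(n)] for _ in range(m)]
c = [sum(aj[i] for aj in a) / m for i in range(n)]

def sq(u, v):
    return sum((ui - vi)**2 for ui, vi in zip(u, v))

x = [random.uniform(-1, 1) for _ in range(n)]
lhs = sum(sq(x, aj) for aj in a)
rhs = m * sq(x, c) + sum(sq(aj, c) for aj in a)
print(lhs, rhs)  # equal up to rounding
```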
Problem 10:
In three-dimensional space, consider a two-dimensional plane and two points z1 and z2 lying outside
the plane. Use the optimality condition (discussed in projected gradient method class) to characterize
the vector x∗ , which minimizes ∥z1 − x∥ + ∥z2 − x∥ over all x in the plane.
Problem 11:
Consider the gradient method with errors,
x_{k+1} = x_k − s(∇f(x_k) + ϵ_k),
where s is a constant step size, the ϵ_k are error terms satisfying ∥ϵ_k∥ ≤ δ for all k, and f is the positive definite quadratic function f(x) = (1/2)(x − x∗)⊤Q(x − x∗). Let q = max{|1 − sλ_min(Q)|, |1 − sλ_max(Q)|}, and assume that q < 1. Show that for all k, we have
∥x_k − x∗∥ ≤ sδ/(1 − q) + q^k ∥x_0 − x∗∥.
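A simulation sketch, assuming the iteration is x_{k+1} = x_k − s(∇f(x_k) + ϵ_k): diagonal Q with made-up eigenvalues, x∗ = 0, and random errors with ∥ϵ_k∥ ≤ δ; the stated bound is checked at every step.

```python
# Gradient method with bounded random errors on f(x) = (1/2) x^T Q x,
# Q = diag(1, 4), x* = 0; check ||x_k|| <= s*delta/(1-q) + q^k ||x_0|| for all k.
import random

random.seed(1)
lams = [1.0, 4.0]            # eigenvalues of a diagonal Q (made up)
s, delta = 0.3, 0.1
q = max(abs(1 - s*lams[0]), abs(1 - s*lams[-1]))   # q = 0.7 < 1

def norm(v):
    return sum(vi*vi for vi in v) ** 0.5

x = [2.0, -1.0]
x0norm = norm(x)
ok = True
for k in range(1, 60):
    e = [random.uniform(-1, 1) for _ in x]
    scale = delta * random.random() / norm(e)   # rescale so ||e|| <= delta
    e = [scale * ei for ei in e]
    x = [xi - s*(lam*xi + ei) for xi, lam, ei in zip(x, lams, e)]
    bound = s*delta/(1 - q) + q**k * x0norm
    ok = ok and (norm(x) <= bound + 1e-12)

print(ok, norm(x))
```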
Problem 12:
Let f(x) be a convex function. Define S = {x ∈ Rn : f(x) ≤ c} for some constant c. Is S always a convex set? Either provide a proof or a counterexample.
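An empirical sanity check (not a proof, and no substitute for one): for the convex f(x) = |x| and c = 1, random convex combinations of points of S stay in S.

```python
# Sample pairs x, y with f(x), f(y) <= c and check f(t*x + (1-t)*y) <= c
# for random t in [0, 1], with f = abs and c = 1 (an illustrative choice).
import random

random.seed(2)
f = abs
c = 1.0
ok = True
for _ in range(1000):
    x = random.uniform(-c, c)       # f(x) <= c
    y = random.uniform(-c, c)       # f(y) <= c
    t = random.random()
    ok = ok and f(t*x + (1-t)*y) <= c + 1e-12
print(ok)
```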
Problem 13:
Exercise 3.11 from Boyd's book.
Problem 14:
In the projected gradient method class, we proved that the projection operation is non-expansive, i.e., ∥[x]⁺ − [z]⁺∥ ≤ ∥x − z∥. Now, re-define the projection operation as
[x]⁺ = arg min_{y ∈ X} (x − y)⊤Q(x − y),
where Q is a positive definite symmetric matrix. Re-derive the non-expansiveness property for the new definition of projection, and show that it is given by ∥[x]⁺ − [z]⁺∥_Q ≤ ∥x − z∥_Q, where ∥v∥_Q = (v⊤Qv)^{1/2}.
Problem 15: