
MCL758 Optimization 7th February 2024

Exercise 2: Optimization

Problem 1:
Consider the iterative process
x_{k+1} = \frac{1}{2} x_k + \frac{a}{x_k},
where a > 0. Assuming the process converges, to what does it converge? What is the order of
convergence?
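Numerical check (optional; a sketch rather than part of the exercise, assuming the reconstructed iteration x_{k+1} = x_k/2 + a/x_k above and an arbitrary sample value of a): the script below runs the process and estimates the empirical order of convergence from successive errors.

```python
import math

a = 3.0                        # arbitrary sample value with a > 0
x = 1.0                        # arbitrary positive starting point
limit = math.sqrt(2 * a)       # candidate limit: the positive root of x = x/2 + a/x

errors = []
for _ in range(8):
    x = 0.5 * x + a / x
    errors.append(abs(x - limit))

# If e_{k+1} ~ C * e_k^p, then p ~ log(e_{k+1}/e_k) / log(e_k/e_{k-1}).
for e_prev, e_cur, e_next in zip(errors, errors[1:], errors[2:]):
    if e_next > 1e-12:         # skip iterates already at machine precision
        print(math.log(e_next / e_cur) / math.log(e_cur / e_prev))
```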

Problem 2:
Consider the problem
\min \; 5x^2 + 5y^2 − xy − 11x + 11y + 11

a) Find a point satisfying the first-order necessary conditions for a solution.


b) Show that this point is a global minimum.
c) What would be the rate of convergence of steepest descent for this problem?
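Numerical check (optional; a sketch, with the gradient and Hessian worked out by hand from the objective above): it solves ∇f = 0 as a linear system, inspects the Hessian eigenvalues, and evaluates the classical steepest-descent contraction bound ((κ − 1)/(κ + 1))², where κ is the condition number of the Hessian.

```python
import numpy as np

# f(x, y) = 5x^2 + 5y^2 - xy - 11x + 11y + 11
# gradient: (10x - y - 11, -x + 10y + 11); the Hessian is constant.
H = np.array([[10.0, -1.0],
              [-1.0, 10.0]])
g0 = np.array([-11.0, 11.0])           # constant part of the gradient

stationary = np.linalg.solve(H, -g0)   # solves grad f = 0
eigvals = np.linalg.eigvalsh(H)        # ascending eigenvalues of the Hessian
kappa = eigvals[-1] / eigvals[0]
rate = ((kappa - 1) / (kappa + 1)) ** 2

print("stationary point:", stationary)
print("positive definite Hessian:", bool(np.all(eigvals > 0)))
print("worst-case per-step contraction of f(x_k) - f*:", rate)
```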

Problem 3:
Show that the solution to the following optimization problem
\max_{\{p_i\}} \; −\sum_{i=1}^{n} p_i \log p_i

subject to \sum_{i=1}^{n} p_i = 1 and \sum_{i=1}^{n} x_i p_i = m,

is the Gibbs distribution p_i = \frac{\exp\{−λx_i\}}{\sum_{k=1}^{n} \exp\{−λx_k\}}, where λ is a Lagrange multiplier.
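For intuition, the claim can be checked numerically (a sketch; the support points x_i and the target mean m below are arbitrary assumptions): maximize the entropy subject to the two constraints with a generic solver and verify that log p_i is affine in x_i, which is exactly the Gibbs form.

```python
import numpy as np
from scipy.optimize import minimize

x = np.array([0.0, 1.0, 2.0, 3.0])    # hypothetical support points x_i
m = 1.2                                # hypothetical target mean

def neg_entropy(p):
    return np.sum(p * np.log(p))       # minimize -H(p) = sum p_i log p_i

cons = [{"type": "eq", "fun": lambda p: p.sum() - 1.0},
        {"type": "eq", "fun": lambda p: p @ x - m}]
p0 = np.full(x.size, 1.0 / x.size)
res = minimize(neg_entropy, p0, method="SLSQP", constraints=cons,
               bounds=[(1e-9, 1.0)] * x.size)
p = res.x

# For a Gibbs distribution, log p_i is affine in x_i, so the pairwise slopes
# -(log p_{i+1} - log p_i) / (x_{i+1} - x_i) are all equal; that common value is lambda.
lambdas = -np.diff(np.log(p)) / np.diff(x)
print(p)
print(lambdas)                         # should be (nearly) constant
```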

Problem 4:
Consider the quadratic program
\min_x \; \frac{1}{2} x^⊤ Q x − b^⊤ x
subject to Ax = c


Prove that x∗ is a local minimum point if and only if it is a global minimum point. (No convexity
is assumed).

Problem 5:
Consider the problem of minimizing a quadratic function:
\min \; f(x) = \frac{1}{2} x^⊤ P x + q^⊤ x + r,

where P is an n × n symmetric matrix.

a) Show that if P is not a positive semi-definite matrix, i.e., the objective function f is not convex,
then the problem is unbounded below.
b) Now suppose that P is positive semi-definite, but the optimality condition P x∗ = −q does not
have a solution. Show that the problem is unbounded below.
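Both cases can be seen concretely before writing the proof (a sketch with hypothetical 2×2 data): in case a) the objective decreases without bound along an eigenvector with a negative eigenvalue, and in case b) it decreases linearly along a direction in the null space of P on which q has a nonzero component.

```python
import numpy as np

def f(P, q, r, x):
    return 0.5 * x @ P @ x + q @ x + r

# case a) P not positive semi-definite: move along an eigenvector with negative eigenvalue.
P_a = np.diag([1.0, -2.0])
q_a = np.array([1.0, 1.0])
d_a = np.array([0.0, 1.0])      # eigenvector for the eigenvalue -2
print([f(P_a, q_a, 0.0, t * d_a) for t in (1, 10, 100)])    # decreases without bound

# case b) P PSD but P x = -q has no solution: q has a component in null(P).
P_b = np.diag([1.0, 0.0])
q_b = np.array([0.0, 1.0])      # lies in null(P_b), so P x = -q_b is unsolvable
d_b = -q_b                      # then f(t * d_b) = -t
print([f(P_b, q_b, 0.0, t * d_b) for t in (1, 10, 100)])    # decreases without bound
```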

Problem 6:
The purpose of this exercise is to show that Newton’s method is unaffected by linear scaling of the
variables. Consider a linear invertible transformation of variables x = Sy. Write Newton’s method
in the space of the variables y and show that it generates the sequence yk = S −1 xk , where {xk } is
the sequence generated by Newton’s method in the space of the variables x.
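The claim is easy to verify numerically before proving it (a sketch; the test function and the matrix S are arbitrary assumptions): by the chain rule, h(y) = f(Sy) has gradient S⊤∇f(Sy) and Hessian S⊤∇²f(Sy)S, so one can run Newton's method in both spaces and compare the iterates.

```python
import numpy as np

n = 3
S = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 1.0],
              [1.0, 0.0, 2.0]])        # an arbitrary invertible scaling (assumption)

# Hypothetical smooth test function: f(x) = sum_i exp(x_i) + 0.5 * ||x||^2
grad = lambda x: np.exp(x) + x
hess = lambda x: np.diag(np.exp(x)) + np.eye(n)

x = np.array([1.0, -0.5, 0.3])
y = np.linalg.solve(S, x)              # y_0 = S^{-1} x_0

for _ in range(5):
    x = x - np.linalg.solve(hess(x), grad(x))        # Newton step in x
    gy = S.T @ grad(S @ y)                           # gradient of f(Sy)
    Hy = S.T @ hess(S @ y) @ S                       # Hessian of f(Sy)
    y = y - np.linalg.solve(Hy, gy)                  # Newton step in y
    print(np.linalg.norm(y - np.linalg.solve(S, x))) # should stay ~0 (rounding only)
```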

Problem 7:
Among all rectangles contained in a given circle, show that the one with maximal area is a square.

Problem 8:
Show that if x∗ is a local minimum of the twice continuously differentiable function f : Rn → R over
the convex set X , then
(x − x∗)⊤ ∇²f(x∗)(x − x∗) ≥ 0
for all x ∈ X such that ∇f(x∗)⊤(x − x∗) = 0.

Problem 9:
Let a_1, …, a_m be given vectors in R^n, and consider the problem of minimizing \sum_{j=1}^{m} ∥x − a_j∥^2 over a convex set X. Show that this problem is equivalent to the problem of projecting on X the center of gravity \frac{1}{m}\sum_{j=1}^{m} a_j.
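A quick empirical check (a sketch; the points a_j and the box used for X are arbitrary assumptions): minimize the sum of squared distances over a box directly and compare with projecting the centroid onto the same box, which for a box amounts to clipping.

```python
import numpy as np
from scipy.optimize import minimize

a = np.array([[2.0, 3.0], [4.0, -1.0], [0.0, 5.0]])     # hypothetical a_1, ..., a_m in R^2
lo, hi = np.array([-1.0, -1.0]), np.array([1.0, 1.0])   # X = box [-1, 1]^2 (assumption)

obj = lambda x: np.sum((x - a) ** 2)                     # sum_j ||x - a_j||^2
res = minimize(obj, np.zeros(2), bounds=list(zip(lo, hi)))

centroid = a.mean(axis=0)
projected_centroid = np.clip(centroid, lo, hi)           # Euclidean projection onto a box
print(res.x, projected_centroid)                         # the two should coincide
```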

Problem 10:
In three-dimensional space, consider a two-dimensional plane and two points z_1 and z_2 lying outside
the plane. Use the optimality condition (discussed in the projected gradient method class) to characterize
the vector x∗ that minimizes ∥z_1 − x∥ + ∥z_2 − x∥ over all x in the plane.

Problem 11:

Consider the steepest descent method with bounded error,

x_{k+1} = x_k − s(∇f(x_k) + ϵ_k),

where s is a constant step size, ϵ_k are error terms satisfying ∥ϵ_k∥ ≤ δ for all k, and f is the positive
definite quadratic function f(x) = \frac{1}{2}(x − x∗)⊤Q(x − x∗). Let q = max{|1 − sλ_{min}(Q)|, |1 − sλ_{max}(Q)|},
and assume that q < 1. Show that for all k, we have

∥x_k − x∗∥ ≤ \frac{sδ}{1 − q} + q^k ∥x_0 − x∗∥.
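The bound can be checked empirically before proving it (a sketch; Q, s, δ, and the random error sequence below are illustrative assumptions satisfying the stated hypotheses): run the perturbed iteration and compare ∥x_k − x∗∥ with sδ/(1 − q) + q^k ∥x_0 − x∗∥ at every step.

```python
import numpy as np

rng = np.random.default_rng(1)
Q = np.diag([1.0, 4.0])                 # positive definite (assumption)
x_star = np.array([2.0, -1.0])
s, delta = 0.2, 0.05                    # step size and error bound (assumptions)
q = max(abs(1 - s * 1.0), abs(1 - s * 4.0))   # uses lambda_min = 1, lambda_max = 4

x = np.array([10.0, 10.0])
e0 = np.linalg.norm(x - x_star)
for k in range(1, 51):
    eps = rng.normal(size=2)
    eps *= delta / max(np.linalg.norm(eps), 1.0)   # scale so that ||eps_k|| <= delta
    grad = Q @ (x - x_star)                        # gradient of 0.5 (x - x*)^T Q (x - x*)
    x = x - s * (grad + eps)
    bound = s * delta / (1 - q) + q**k * e0
    assert np.linalg.norm(x - x_star) <= bound + 1e-12
print("bound held for every k checked, q =", q)
```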

Problem 12:
Let f(x) be a convex function. Define S = {x ∈ R^n : f(x) ≤ c} for some constant c. Is S always a
convex set? Either provide a proof or a counterexample.

Problem 13:
Exercise 3.11 from Boyd's book.

Problem 14:

Recall that for the following definition of projection

[x]+ = \arg\min_{y∈S} ∥y − x∥^2,

we proved that the projection operation is non-expansive, i.e. ∥[x]+ − [z]+∥ ≤ ∥x − z∥. Now, re-define
the projection operation as follows

[x]+ = \arg\min_{y∈S} (y − x)⊤Q(y − x),

where Q is a positive definite symmetric matrix. Re-derive the non-expansiveness property for the
new definition of projection, and show that it is given by

∥Q^{1/2}(z − x)∥ ≥ ∥Q^{1/2}([z]+ − [x]+)∥.
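A numerical spot check (a sketch; Q, the set S, and the sample points are arbitrary assumptions, and the projections are computed with a generic bounded solver rather than in closed form): compute the Q-norm projections onto a box and test the claimed inequality on random pairs.

```python
import numpy as np
from scipy.optimize import minimize

Q = np.array([[2.0, 0.5],
              [0.5, 1.0]])             # symmetric positive definite (assumption)
bounds = [(-1.0, 1.0), (-1.0, 1.0)]    # S = box [-1, 1]^2 (assumption)

def proj_Q(x):
    # [x]+ = argmin_{y in S} (y - x)^T Q (y - x), solved numerically
    return minimize(lambda y: (y - x) @ Q @ (y - x), np.zeros(2), bounds=bounds).x

w, V = np.linalg.eigh(Q)               # Q^{1/2} from the symmetric eigendecomposition
Q_half = V @ np.diag(np.sqrt(w)) @ V.T

rng = np.random.default_rng(2)
for _ in range(5):
    x, z = rng.normal(scale=3.0, size=2), rng.normal(scale=3.0, size=2)
    lhs = np.linalg.norm(Q_half @ (z - x))
    rhs = np.linalg.norm(Q_half @ (proj_Q(z) - proj_Q(x)))
    print(lhs >= rhs - 1e-6)           # expect True up to solver tolerance
```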

Problem 15:

Find the optimal x∗ for the following optimization problem:

\min_{x∈R^n} ∥Ax − b∥^2 subject to Gx = h,

where A ∈ R^{m×n} has rank n and G ∈ R^{p×n} has rank p.
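Whatever closed form you derive can be cross-checked numerically (a sketch with hypothetical random data satisfying the rank assumptions): the KKT conditions of this equality-constrained least-squares problem form a linear system in (x, ν) that a dense solver handles directly.

```python
import numpy as np

rng = np.random.default_rng(3)
m, n, p = 8, 4, 2                      # hypothetical sizes with m >= n and p <= n
A = rng.normal(size=(m, n))            # full column rank with probability one
b = rng.normal(size=m)
G = rng.normal(size=(p, n))            # full row rank with probability one
h = rng.normal(size=p)

# KKT conditions for min ||Ax - b||^2 subject to Gx = h:
#   2 A^T A x + G^T nu = 2 A^T b,   G x = h
KKT = np.block([[2 * A.T @ A, G.T],
                [G, np.zeros((p, p))]])
rhs = np.concatenate([2 * A.T @ b, h])
sol = np.linalg.solve(KKT, rhs)
x_star, nu = sol[:n], sol[n:]

print("feasibility ||G x* - h||:", np.linalg.norm(G @ x_star - h))
print("stationarity residual:", np.linalg.norm(2 * A.T @ (A @ x_star - b) + G.T @ nu))
```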
