
Optimization

Fall 2023

List of questions for the exam:

The exam will consist of:

- A theoretical block consisting of one question of each of the orange, green, and blue types.
- A practical block consisting of a small computational problem.

You should be able to:


In general:
1. Formally state unconstrained/constrained optimization problems with general/convex objective
functions and give examples of each type (a standard form is written out after this block).
2. Solve unconstrained and constrained optimization problems (with equalities and inequalities)
using necessary and/or sufficient conditions.
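
For orientation, a standard form of the general constrained problem meant in item 1 is the following
(the notation f, g_i, h_j is generic, not necessarily the lecture's):

    \min_{x \in \mathbb{R}^n} f(x) \quad \text{subject to} \quad
    h_j(x) = 0,\; j = 1,\dots,p, \qquad g_i(x) \le 0,\; i = 1,\dots,m.

The unconstrained case drops both constraint families; the convex case additionally requires f and
every g_i to be convex and every h_j to be affine.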

from Lecture 2:
3. Compute linear and quadratic approximations of a function.
4. State the first-order optimality condition for an unconstrained minimization problem.
5. State the second-order optimality condition for an unconstrained minimization problem (these
conditions are recalled after this block).
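
As a reminder (standard statements, assuming f is twice continuously differentiable), items 3-5 refer to:

    f(x) \approx f(x_0) + \nabla f(x_0)^{\top}(x - x_0)                       (linear approximation)
    f(x) \approx f(x_0) + \nabla f(x_0)^{\top}(x - x_0)
           + \tfrac{1}{2}(x - x_0)^{\top} \nabla^2 f(x_0)(x - x_0)            (quadratic approximation)
    \nabla f(x^*) = 0                                                         (first-order necessary condition)
    \nabla^2 f(x^*) \succeq 0 together with \nabla f(x^*) = 0                 (second-order necessary condition)
    \nabla^2 f(x^*) \succ 0 together with \nabla f(x^*) = 0                   (second-order sufficient condition)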

from Lectures 3-4:


6. State the general scheme of descent methods (a sketch is given after this block).
7. Explain the role of gradient in gradient-based descent methods.
8. Explain the idea of exact line search and carry out one or two steps of it on simple examples
(feasible with pen and paper).
9. Explain the idea of an approximate line search; state the two Wolfe conditions and explain their
geometric meaning.
10. State the convergence result of an approximate line search under the Wolfe conditions.
11. Explain the idea and generic algorithm of the trust region method (solution of the trust region
subproblem will not be asked).
12. Explain the idea behind steepest descent and the role of the weighted norm for a convex
quadratic objective function.
13. Explain the idea behind the Newton method and state its convergence theorem (explaining the
meaning of quadratic convergence).
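
The following is a minimal sketch of the generic descent scheme from items 6-9: gradient descent with a
backtracking line search that enforces only the sufficient-decrease (first Wolfe) condition. The function
names and constants are illustrative choices, not the lecture's code.

    import numpy as np

    def gradient_descent(f, grad, x0, alpha0=1.0, c1=1e-4, rho=0.5,
                         tol=1e-6, max_iter=500):
        """Generic descent scheme: choose a descent direction, pick a step
        length by backtracking (sufficient-decrease condition), update."""
        x = np.asarray(x0, dtype=float)
        for _ in range(max_iter):
            g = grad(x)
            if np.linalg.norm(g) < tol:      # stationarity test
                break
            d = -g                           # steepest-descent direction
            alpha = alpha0
            # Backtracking: shrink alpha until f decreases sufficiently along d.
            while f(x + alpha * d) > f(x) + c1 * alpha * (g @ d):
                alpha *= rho
            x = x + alpha * d
        return x

    # Example: convex quadratic f(x) = 1/2 x^T A x - b^T x; the minimizer solves A x = b.
    A = np.array([[3.0, 1.0], [1.0, 2.0]])
    b = np.array([1.0, 1.0])
    f = lambda x: 0.5 * x @ A @ x - b @ x
    grad = lambda x: A @ x - b
    print(gradient_descent(f, grad, x0=[0.0, 0.0]))   # approx. np.linalg.solve(A, b)

For item 8, exact line search would instead minimize φ(α) = f(x + αd) over α ≥ 0; for the quadratic above
with d = −g this gives the closed form α = (gᵀg)/(gᵀAg).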

from Lectures 5-6:


14. Prove that if one performs descent in the direction of −∇𝑓 with exact line search, then the
successive search directions are orthogonal, and explain why this causes computational problems.
15. Explain the idea of the hyper-gradient descent method.
16. Explain the idea of the secant method for scalar functions.
17. Explain the idea of the quasi-Newton method and derive the corresponding formulas.
18. State the general problem solved by stochastic gradient descent and give examples.
19. Explain the idea of stochastic gradient descent (note that, strictly speaking, it is not a descent
method); a sketch follows this list.
20. Give the definition of a stochastic gradient and prove that ∇𝑓_{𝑖(𝑘)}(𝑥^{(𝑘)}) is a stochastic gradient
when the index 𝑖(𝑘) is chosen uniformly at random from {1, … , 𝑛}.
21. Prove that if the sum of learning rates ∑_{𝑘=1}^{𝑇−1} 𝛼^{(𝑘)} grows faster than ∑_{𝑘=1}^{𝑇−1} (𝛼^{(𝑘)})²
as 𝑇 → ∞, then stochastic gradient descent converges in expectation.
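
A minimal sketch of stochastic gradient descent for the finite-sum setting of items 18-21, using the
diminishing step sizes α^(k) = α₀/(k+1), which satisfy the divergent-sum / summable-squares condition of
item 21. The problem, names, and constants are illustrative only.

    import numpy as np

    def sgd(grad_i, n, x0, alpha0=0.5, n_steps=5000, seed=0):
        """SGD for f(x) = (1/n) * sum_i f_i(x): at step k, draw i(k) uniformly
        at random and move along -grad f_{i(k)}(x) with a decaying step size."""
        rng = np.random.default_rng(seed)
        x = np.asarray(x0, dtype=float)
        for k in range(n_steps):
            i = rng.integers(n)              # i(k) uniform on {0, ..., n-1}
            alpha = alpha0 / (k + 1)         # sum_k alpha^(k) diverges, sum_k (alpha^(k))^2 converges
            x = x - alpha * grad_i(i, x)
        return x

    # Example: least squares with f_i(x) = (a_i^T x - b_i)^2, so E_i[grad f_i] = grad f.
    rng = np.random.default_rng(1)
    A = rng.normal(size=(100, 3))
    b = A @ np.array([1.0, -2.0, 0.5]) + 0.01 * rng.normal(size=100)
    grad_i = lambda i, x: 2.0 * (A[i] @ x - b[i]) * A[i]
    print(sgd(grad_i, n=100, x0=np.zeros(3)))   # noisy estimate of [1.0, -2.0, 0.5]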

from Lectures 7-8:


22. In constrained optimization with equalities, define the Lagrange function, state and prove the
necessary condition for optimality, and state the sufficient condition for optimality (a small
worked example follows this list).
23. In constrained optimization with equalities and inequalities, give definitions of: feasible set,
active/passive constraints, local/global solution, regular minimum.
24. State the Karush-Kuhn-Tucker theorem on necessary conditions for optimality; give an example of
a non-regular minimum.
25. State the KKT theorem on the sufficient condition for optimality.
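
A small worked example of the kind item 22 refers to (the problem is an illustrative choice, not taken
from the lectures): minimize f(x, y) = x² + y² subject to h(x, y) = x + y − 1 = 0.

    \mathcal{L}(x, y, \lambda) = x^2 + y^2 + \lambda(x + y - 1), \qquad
    \nabla_{x,y}\mathcal{L} = 0 \;\Rightarrow\; 2x + \lambda = 0,\; 2y + \lambda = 0.

Hence x = y, the constraint gives x = y = 1/2 with λ = −1, and since ∇²_{xx}L = 2I ≻ 0, the sufficient
condition holds: (1/2, 1/2) is the minimum.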

from Lectures 9-10:


26. Explain the general idea of introducing penalty functions and discuss the pros and cons of this
method (a sketch follows this list).
27. Give an example of a penalty function and state the unconstrained optimization problem using
this function.
28. Explain the general idea of a barrier method for optimization problems with inequality
constraints, give examples of barrier functions.
29. State the general procedure of barrier methods.
30. Explain the connection between the barrier method for optimization problems with inequality
constraints and the Lagrange multipliers method.
31. Explain the connection between the penalty method for optimization problems with equality
constraints and the Lagrange multipliers method.
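
A minimal sketch of the penalty idea in items 26-27: the equality-constrained toy problem
min x² + y² s.t. x + y = 1 is solved by repeatedly minimizing a quadratic penalty with growing weight μ.
The routine, names, and constants are illustrative; the inner solves use scipy's general-purpose minimizer
rather than anything specific to the lectures.

    import numpy as np
    from scipy.optimize import minimize

    def penalty_method(f, h, x0, mu0=1.0, factor=10.0, n_outer=6):
        """Quadratic-penalty method for min f(x) s.t. h(x) = 0: minimize
        f(x) + mu * h(x)**2 for increasing mu, warm-starting each solve."""
        x = np.asarray(x0, dtype=float)
        mu = mu0
        for _ in range(n_outer):
            penalized = lambda z, mu=mu: f(z) + mu * h(z) ** 2
            x = minimize(penalized, x, method="BFGS").x   # unconstrained subproblem
            mu *= factor                                  # tighten the penalty
        return x

    # Toy problem: min x^2 + y^2  s.t.  x + y - 1 = 0   (solution x = y = 1/2).
    f = lambda z: z[0] ** 2 + z[1] ** 2
    h = lambda z: z[0] + z[1] - 1.0
    print(penalty_method(f, h, x0=[0.0, 0.0]))

The barrier methods of items 28-30 follow the same outer loop for inequality constraints g(x) ≤ 0,
replacing the penalty term with, for example, a logarithmic barrier −(1/t)·log(−g(x)) whose weight 1/t is
driven to zero.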

from Lectures 11-12:


32. Give the definition of a convex function and state the general convex optimization problem.
33. State the first- and second-order criteria for convexity of a function.
34. Prove that for a convex function defined on a convex domain, every local minimum is a global
one. Derive as a corollary that for convex functions, the first-order necessary optimality
conditions are also sufficient.
35. Know what operations preserve convexity of functions.
36. State the Linear Programming and Quadratic Programming problems. Give examples.
37. State the Support Vector Machine as a convex optimization problem (its standard primal form is
written out after this list).
38. Explain how to approach convex optimization problems with equality and inequality constraints
using methods of unconstrained optimization.
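
For item 37, the soft-margin SVM in its usual primal form is a quadratic program (standard formulation in
common notation, with weights w, bias b, slack variables ξ_i, and regularization parameter C):

    \min_{w, b, \xi} \; \tfrac{1}{2}\|w\|^2 + C \sum_{i=1}^{n} \xi_i
    \quad \text{subject to} \quad
    y_i(w^{\top} x_i + b) \ge 1 - \xi_i,\; \xi_i \ge 0,\; i = 1,\dots,n.

The objective is convex quadratic and every constraint is affine in (w, b, ξ), so this is a Quadratic
Programming problem in the sense of item 36.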
