
Optimization Fall 2024

List of questions for the exam:

The exam will consist of:

- A theoretical block consisting of one orange, one green, and one blue type of question.
- A practical block consisting of a small computational problem, similar to examples given in the lectures.

You should be able to:


In general:
1. Formally state unconstrained/constrained optimization problems with general/convex objective
functions and give examples of each type.
2. Solve unconstrained and constrained optimization problems (with equalities and inequalities)
using necessary and/or sufficient conditions.
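
For item 1, a compact statement of the general constrained problem together with one convex instance; the notation and the particular example are my own illustration, not necessarily the exact formulation used in the lectures.

\[
\min_{x \in \mathbb{R}^n} f(x)
\quad \text{subject to} \quad
g_i(x) \le 0,\ i = 1, \dots, m, \qquad h_j(x) = 0,\ j = 1, \dots, p.
\]

Convex example: $f(x) = \|x\|^2$ with the single inequality constraint $1 - x_1 \le 0$ and no equalities; the objective and the feasible set are convex, and the unique solution is $x^\star = (1, 0, \dots, 0)$.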

from Block 2:
3. Compute linear and quadratic approximations of a function (see the worked example after item 6).
4. Prove that minus the gradient is the direction of fastest descent.
5. Prove the first order optimality condition (necessary) for the unconstrained minimization problem.
6. State the second order optimality condition (necessary and sufficient) for the unconstrained
minimization problem.
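
For item 3, a worked example (the function and the expansion point are my own choice): the linear approximation $f(x_0) + \nabla f(x_0)^\top (x - x_0)$ and the quadratic approximation with the extra term $\tfrac{1}{2}(x - x_0)^\top \nabla^2 f(x_0)(x - x_0)$, applied to $f(x, y) = y e^x$ at $(x_0, y_0) = (0, 1)$.

\[
\nabla f(0, 1) = \begin{pmatrix} 1 \\ 1 \end{pmatrix}, \qquad
\nabla^2 f(0, 1) = \begin{pmatrix} 1 & 1 \\ 1 & 0 \end{pmatrix},
\]
\[
f(x, y) \approx 1 + x + (y - 1) \quad \text{(linear)}, \qquad
f(x, y) \approx 1 + x + (y - 1) + \tfrac{1}{2} x^2 + x (y - 1) \quad \text{(quadratic)}.
\]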

from Block 3:
7. State the general scheme of descent methods (see the sketch after item 24).
8. Explain the role of the gradient in gradient-based descent methods.
9. Explain the idea of the exact line search and implement one or two steps of exact line search in
simple examples (feasible with pen-and-paper).
10. Explain the idea of bracketing with dyadic and Fibonacci search methods.
11. Explain the idea of bracketing with quadratic fit methods.
12. Explain the idea of an approximate line search; state the two Wolfe conditions and explain their
geometric meaning.
13. Prove the existence of a step length satisfying the strong Wolfe conditions.
14. State the convergence result of an approximate line search under the Wolfe conditions.
15. Explain the idea and generic algorithm of the trust region method.
16. Derive the solution of the trust region subproblem (using the Cauchy point) and explain its
advantages.
17. Explain the idea behind the steepest gradient descent and the role of the weighted norm for the
convex quadratic objective function.
18. Explain the idea behind the Newton method and state the convergence theorem; explain the
meaning of quadratic convergence.
19. Prove quadratic convergence 𝐱^{(𝑘)} → 𝐱⋆ under the Newton method.
20. Explain the idea behind the conjugate direction method.
21. Prove the convergence of the conjugate direction method.
22. Explain the idea of the conjugate gradient method and state the termination theorem.
23. Explain the idea of the secant method for scalar functions and state the convergence result.
24. Explain the idea of the quasi-Newton method and derive the corresponding formulas.
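
For items 7 and 12, a minimal Python sketch of the generic descent scheme with a backtracking line search that enforces the sufficient-decrease (first Wolfe) condition. The quadratic test function, the parameter values (alpha0, c1, rho, tol) and all names are illustrative assumptions, not the lecture code.

import numpy as np

def backtracking(f, grad_f, x, p, alpha0=1.0, c1=1e-4, rho=0.5):
    # Shrink alpha until the sufficient-decrease (Armijo, i.e. first Wolfe) condition holds:
    #   f(x + alpha*p) <= f(x) + c1 * alpha * grad_f(x)^T p
    alpha = alpha0
    fx, gTp = f(x), grad_f(x) @ p
    while f(x + alpha * p) > fx + c1 * alpha * gTp:
        alpha *= rho
    return alpha

def descent(f, grad_f, x0, tol=1e-8, max_iter=1000):
    # Generic descent scheme: pick a descent direction, pick a step length,
    # update the iterate, stop when the gradient is (almost) zero.
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad_f(x)
        if np.linalg.norm(g) < tol:
            break
        p = -g                                   # steepest-descent direction
        x = x + backtracking(f, grad_f, x, p) * p
    return x

# Illustrative convex quadratic f(x) = 0.5 x^T A x - b^T x with A positive definite;
# its exact minimizer solves A x = b, which the last line uses as a check.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
f = lambda x: 0.5 * x @ A @ x - b @ x
grad_f = lambda x: A @ x - b
print(descent(f, grad_f, [5.0, -3.0]), np.linalg.solve(A, b))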

from Block 4:
25. State the general problem solved by stochastic gradient descent and give examples.
26. Explain the idea of stochastic gradient descent (note that, strictly speaking, it is not a descent method).
27. Show a 1-dimensional toy example indicating why a stochastic gradient could move in the right
direction (see the sketch after item 29).
28. Give the definition of a stochastic gradient and prove that ∇𝑓_{𝑖(𝑘)}(𝑥^{(𝑘)}) is a stochastic gradient
when the index 𝑖(𝑘) is chosen uniformly at random from {1, …, 𝑛}.
29. Prove that if the sum of learning rates ∑_{𝑘=1}^{𝑇−1} 𝛼^{(𝑘)} grows faster than ∑_{𝑘=1}^{𝑇−1} (𝛼^{(𝑘)})² as 𝑇 → ∞, then
stochastic gradient descent converges in expectation.
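
For items 27 and 29, a one-dimensional toy sketch (the data and the step-size rule are my own illustrative choices): f(x) = (1/n) ∑ᵢ (x − aᵢ)², whose minimizer is the sample mean of the aᵢ; each update uses the gradient of a single term chosen uniformly at random, with step sizes α^(k) = 1/k so that the sum of the α^(k) diverges while the sum of their squares stays bounded.

import numpy as np

rng = np.random.default_rng(0)
a = rng.normal(loc=3.0, scale=1.0, size=100)   # data; the minimizer of f is the sample mean a.mean()

# f(x) = (1/n) * sum_i (x - a_i)^2,   f_i(x) = (x - a_i)^2,   grad f_i(x) = 2 (x - a_i)
x = 0.0
for k in range(1, 2001):
    i = rng.integers(len(a))   # index chosen uniformly at random, so grad f_i is an unbiased stochastic gradient
    alpha = 1.0 / k            # sum of alpha^(k) diverges, sum of (alpha^(k))^2 stays bounded (cf. item 29)
    x -= alpha * 2.0 * (x - a[i])
print(x, a.mean())             # the final iterate ends up close to (though not exactly at) the minimizer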

from Block 5:
30. In constrained optimization with equalities, define the Lagrange function, state and prove the
necessary condition for optimality, and state the sufficient condition for optimality (see the worked example after item 33).
31. In constrained optimization with equalities and inequalities, give definitions of: a feasible set,
active/passive constraints, local/global solution, regular minimum.
32. State the Karush-Kuhn-Tucker theorem on necessary conditions for optimality; give an example of a
non-regular minimum.
33. State the KKT theorem on the sufficient condition for optimality.
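
A worked example for item 30 (the specific problem and the sign convention $L = f + \lambda h$ are my own illustration): minimize $f(x, y) = x^2 + y^2$ subject to $h(x, y) = x + y - 1 = 0$.

\[
L(x, y, \lambda) = x^2 + y^2 + \lambda (x + y - 1), \qquad
\nabla_{x, y} L = 0 \;\Longrightarrow\; 2x + \lambda = 0, \;\; 2y + \lambda = 0.
\]

Together with the constraint this gives $x = y = \tfrac{1}{2}$ and $\lambda = -1$; since the objective is convex and the constraint is affine, $(\tfrac{1}{2}, \tfrac{1}{2})$ is in fact the global minimizer.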

from Block 6:
34. Explain the general idea of introducing penalty functions; discuss the pros and cons of this method (see the sketch after item 41).
35. Give an example of a penalty function and state the unconstrained optimization problem using
this function.
36. Explain the general idea of a barrier method for optimization problems with inequality
constraints, give examples of barrier functions.
37. State the general procedure of barrier methods.
38. Explain the connection between the barrier method for optimization problems with inequality
constraints and the Lagrange multipliers method.
39. Explain the connection between the penalty method for optimization problems with equality
constraints and the Lagrange multipliers method.
40. State the result on the convergence of the barrier function method (don't forget all the assumptions).
41. In the theorem on the convergence of the barrier function method, prove that successive iterates
decrease the value of the objective function (item 1 in the statement).
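
For items 34, 35 and 39, a small sketch of the quadratic penalty method on the same equality-constrained example as in the item 30 illustration above. The use of scipy.optimize.minimize as the inner unconstrained solver and the schedule of penalty parameters are my own illustrative choices, not a prescribed procedure.

import numpy as np
from scipy.optimize import minimize

f = lambda x: x[0]**2 + x[1]**2      # objective of the illustrative problem
h = lambda x: x[0] + x[1] - 1.0      # equality constraint h(x) = 0; the constrained minimizer is (0.5, 0.5)

x = np.array([0.0, 0.0])
for mu in [1.0, 10.0, 100.0, 1000.0]:                     # increase the penalty parameter gradually
    penalized = lambda x, mu=mu: f(x) + mu * h(x)**2      # quadratic penalty: unconstrained surrogate
    x = minimize(penalized, x).x                          # warm-start each solve at the previous iterate
    print(mu, x, 2.0 * mu * h(x))

As the penalty parameter grows, the iterates approach the constrained minimizer, and the printed quantity 2*mu*h(x) approaches the Lagrange multiplier of the constraint (here −1), which is the connection asked about in item 39.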

from Block 7:
42. Give the definition of a convex function and state the general convex optimization problem.
43. State first and second order criteria for convexity of a function.
44. Prove that for a convex function defined on a convex domain, every local minimum is a global
one. Derive a corollary that for convex functions, first order necessary optimality conditions are
also sufficient.
45. Know what operations preserve convexity of functions.
46. State the Linear Programming and Quadratic Programming problems. Give examples.
47. State the Support Vector Machine as a convex optimization problem.
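
For items 46 and 47, the standard soft-margin SVM primal with training data $(x_i, y_i)$, $y_i \in \{-1, +1\}$, and a regularization parameter $C > 0$ (the exact form and notation used in the lectures may differ slightly):

\[
\min_{w, b, \xi} \;\; \tfrac{1}{2} \|w\|^2 + C \sum_{i=1}^{n} \xi_i
\quad \text{subject to} \quad
y_i (w^\top x_i + b) \ge 1 - \xi_i, \;\; \xi_i \ge 0, \;\; i = 1, \dots, n.
\]

The objective is a convex quadratic and all constraints are affine in $(w, b, \xi)$, so this is a Quadratic Programming problem.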

from Block 8:
48. Explain the idea of Particle Swarm Optimization (see the sketch after item 50).
49. Explain the idea of Ant Colony Optimization.
50. Explain the idea of Genetic Algorithm (you can use an example).
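
For item 48, a minimal particle swarm optimization sketch. The Rosenbrock test function, the swarm size, and the coefficients w, c1, c2 are common illustrative choices on my part, not a configuration taken from the lectures.

import numpy as np

def pso(f, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, lo=-5.0, hi=5.0, seed=0):
    # Each particle keeps a velocity and is pulled toward its own best position so far
    # (cognitive term) and the best position found by the whole swarm (social term).
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, size=(n_particles, dim))     # positions
    v = np.zeros_like(x)                                 # velocities
    pbest, pbest_val = x.copy(), np.apply_along_axis(f, 1, x)
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        vals = np.apply_along_axis(f, 1, x)
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, f(gbest)

# Illustrative test: the Rosenbrock function, global minimum at (1, 1).
rosen = lambda z: (1.0 - z[0])**2 + 100.0 * (z[1] - z[0]**2)**2
print(pso(rosen, dim=2))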
