
Applied Numerical Optimization

Prof. Alexander Mitsos, Ph.D.

What is optimization and how do we use it?


Definition: Numerical (Mathematical) Optimization

Optimization (in everyday language):


Improvement of a good solution by intuitive, brute-force or heuristics-based decision-making

Numerical (Mathematical) Optimization:


Finding the best possible solution using a mathematical problem formulation and a rigorous or heuristic
numerical solution method

Often the term mathematical programming is used as an alternative to numerical optimization. This term
dates back to the times before computers. The term programming referred to the solution of planning
problems.
For those interested in the history of optimization: Documenta Mathematica, Journal der
Deutschen Mathematiker-Vereinigung, Extra Volume - Optimization Stories, 21st
International Symposium on Mathematical Programming, Berlin, August 19–24, 2012

Formulation of Optimization Problems (1)

The general formulation of an optimization problem consists of:


• The variables (also called decision variables, degrees of freedom, parameters, …)

• An objective function

• A mathematical model for the description of the system to be optimized

• Additional restrictions on the optimal solution, including bounds on the variables.

The mathematical model of the system under consideration and the additional restrictions are also referred
to as constraints.

The objective function can either be minimized or maximized.

Formulation of Optimization Problems (2)

• The objective function describes an economic measure (operating costs, investment costs, profit, etc.), a technological measure, or ...

• The mathematical modeling of the system results in models to be added to the optimization problem as
equality constraints.

• The additional constraints (mostly linear inequalities) result, for instance, from:
 plant- or equipment-specific limitations (capacity, pressure, etc.)
 material limitations (explosion limit, boiling point, corrosivity, etc.)
 product requirements (quality, etc.)
 resources (availability, quality, etc.)

[1] Edgar T. F., Himmelblau D. M., Optimization of Chemical Processes, 2nd Edition, McGraw-Hill, 2001.

Solution of Optimization Problems

What defines the solution of an optimization problem?

• We seek those values of the decision variables (degrees of freedom) that minimize or maximize the objective function.

• The values of the degrees of freedom must satisfy the mathematical model and all additional constraints, such as physical or resource limitations, at the optimum.

• The solution is, typically, a compromise between opposing effects. In process design, for instance, the
investment costs can be reduced while increasing the operating costs (and vice versa).

Applications of Optimization

Optimization is widely used in science and engineering, and in particular in process and energy systems
engineering, e.g.,

• Business decisions (determination of product portfolio, choice of location of production sites, analysis of
competing investments, etc.)

• Design decisions: Process, plant and equipment (structure of a process or energy conversion plant,
favorable operating point, selection and dimensions of major equipment, modes of process operation,
etc.)

• Operational decisions (adjustment of the operating point to changing environmental conditions, production planning, control for disturbance mitigation and set-point tracking, etc.)

• Model identification (parameter estimation, design of experiments, model structure discrimination, etc.)

Short Examples

• Engineering: design and operation

• Operations research, e.g., airlines


 How to schedule routes: results in huge linear programs (LPs)
 How to price airline tickets?
 Should the airline aim at always having full airplanes?
 Must consider uncertainty, typically as stochastic formulation

• Navigation systems: how to go from A to B in shortest time (or shortest distance, lowest fuel consumption, ...)

• LaTeX varies spacing and arrangement of figures to maximize visual appeal of documents

• Successful natural processes not using numerical methods


 Evolution of species
 Behavior of animals
 Equilibrium processes in nature maximize entropy generation

Check Yourself

• What constitutes an optimization problem?

• What types of problems are typically found?

• Why do we typically seek a compromise in optimization?

• What is the difference between a nonlinear program, an optimal control problem and a stochastic program?

Bilevel Optimization in Grad School
Upper level (advisor): max great papers
("Where are my paperz?")
Constraints:
• # nervous breakdowns < OSHA limit
• sponsors happy
Variables:
• Pressure
• Encouragement
• Occasional free beer and food

Lower level (student): max slack / max social impact / min graduation time / max papers
Constraints:
• sleep > 4 hrs
• pay rent
• keep funding
Variables:
• work load
• free lunch schemes
• seem busy schemes

Examples of optimization problems – basic examples


Example: Design of a Pipeline (1)

• A fluid at temperature 600°C flows through a pipeline.

• Surface heat losses must be balanced by additional heating.

• The heating costs (operational costs) are proportional to the heat loss, which can be reduced by the
installation of an insulation (investment costs).


Example: Design of a Pipeline (2)

• The aim is to find the best compromise between the cost of additional heating and cost of additional insulation.
The objective function corresponds thus to the total (annualized) cost.

• The degree of freedom is the insulation thickness.

[1] Kaynakli, Economic thermal insulation thickness for pipes and ducts: A review study, 2014.
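The tradeoff above can be sketched numerically. The cost model below is a hypothetical stand-in (the coefficients A, B, C and the 1/(B + t) heat-loss law are assumptions, not from the lecture): heating cost falls with insulation thickness t while insulation cost grows linearly, and a one-dimensional solver finds the compromise.

```python
from scipy.optimize import minimize_scalar

# Hypothetical coefficients (illustration only, not from the lecture):
A, B, C = 100.0, 1.0, 4.0   # heat-loss scale, base thermal resistance, insulation cost per thickness

def total_cost(t):
    heating = A / (B + t)    # operating cost, proportional to heat loss
    insulation = C * t       # annualized investment cost
    return heating + insulation

res = minimize_scalar(total_cost, bounds=(0.0, 10.0), method="bounded")
print(res.x)   # optimal thickness; analytically sqrt(A/C) - B = 4.0
```

For this simple model the optimum can be checked by hand: dC/dt = −A/(B + t)² + C = 0 gives t* = sqrt(A/C) − B.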
Example: Optimal Motion Planning of Robots

Task:

• Transportation and accurate positioning of a part, e.g., during the assembly of an automobile windscreen.

Aims:

• Short cycle time for production, e.g., minimization of transportation time through optimal motion planning

• Correct positioning of the part during assembly

• No collisions during movement

Source: FG Simulation und Systemoptimierung, TU Darmstadt; Kuka Roboter GmbH

Example: Optimization of Semi-batch Reactor Operation

In a semi-batch reactor, a product B should be manufactured according to the reaction scheme

A → B → C

where C is an undesirable by-product.

(Figure: reactor with feed rate FA(t), cooling temperature Tc(t) and reactor temperature TR.)

Optimization problem:
• The selectivity of the reaction can be maximized over the batch by manipulating the dosage of reactant A and the reaction temperature.
• The degrees of freedom are functions of time.
• Like in robot motion planning, this problem is an optimal trajectory planning problem.

Example: Gemstone Cutting as (Multi-Body) Design Centering Problem

Optimal cut? Maximize gemstone volume  minimize waste.

Pictures from Fraunhofer ITWM. Algorithm by Oliver Stein, solver by ITWM.
Check Yourself

• For each of the considered examples, state: variables, objective function, model, additional constraints

• Formulate the application of your interest as an optimization problem

• What are some limits of optimization?


Examples of optimization problems – solar thermal


Example: Heliostat Fields – Construct on Plane or Hill?

• Renewable energy requires huge land areas and is expensive
• Central receiver plants: a promising scalable technology
• Can use hills in beam-down (CSPonD) or beam-up ("natural tower") configurations

Noone, et int. Mitsos*, Solar Energy. CSPonD: Slocum*, Codd, et int. Mitsos, Solar Energy.
(Pictures: Masdar & Sener, collage: D. Codd; eSolar, collage: D. Codd.)
Example: Heliostat Fields – Optimization Applicable?

• Objective: Maximize field efficiency and minimize land area usage
 Minimize economical & ecological costs
 Factorial number of local minima

• Noone (with guidance by Mitsos) developed and validated a model suitable for optimization (fast yet accurate, compatible with reverse-mode algorithmic differentiation)

• Heuristic global methods (genetic algorithm, multistart) prohibitive for realistic number of heliostats

• Local optimization from arbitrary initial guess not suitable as results are very sensitive to initial guess

• Heuristic solution tried: Start with existing designs and optimize locally

• Result obtained: Spiral pattern recognized by Prof. Manuel Torrilhon

• Long-term goal: Deterministic global optimization using Relaxation of Algorithms [1]

[1] Mitsos A., Chachuat B. and Barton P., McCormick-based relaxations of algorithms. SIAM Journal on Optimization, 2009.
Example: Heliostat Field Optimization – Some Results

• Identified spiral pattern from local optimization of radially staggered pattern [1]
 Abengoa concurrently proposed spiral
• Optimized biomimetic spiral → appreciable improvement in efficiency, substantial savings in land area.
http://www.bbc.co.uk/mundo/noticias/2012/01/120123_girasol_energia_solar_am.shtml,
https://www.popsci.com/technology/article/2012-01/sunflower-design-inspires-more-efficient-
solar-power-plants/ and picked up by many more…

[1] Noone C. J., Torrilhon M., Mitsos A., Heliostat field optimization: A new computationally efficient model and biomimetic layout, Solar Energy, 2012.

Examples of optimization problems - wind


Example: Wind Farm Layout Optimization

• Wind turbines are built in groups (= wind farms) to produce more electricity in a given limited area
• Wind farm layout: Where to position turbines within farm limits? Potentially also: How many turbines?
• Typical objectives:
 Maximize annual electricity production
 Minimize levelized cost of electricity (= cost per unit of electrical energy)
• Key factor: distance between turbines. Increasing distance means:
 Less interference
 More land use
 Higher infrastructure cost

(Figure: wind farm area with forbidden areas [1].)

[1] https://commons.wikimedia.org/wiki/File:Wind_farm_near_North_Sea_coast.jpg (CC BY-SA 4.0)


Example: Wind Farm Layout Optimization – How to Describe Layout?

Three ways to describe the layout:

• Fixed cells [1]:  easy to optimize #turbines;  less freedom;  many discrete variables
• Continuous positions (x, y) [2]:  most freedom;  difficult to optimize #turbines;  many continuous variables
• Patterns [3]:  few variables;  less freedom;  complex areas difficult

 The choice has implications on applicable optimization algorithms & quality of solution

[1] Mosetti et al. (1994). J. Wind Eng. Ind. Aerod., 51, 105.
[2] DuPont & Cagan (2012). J. Mech. Des., 134, 081002. Mod.
[3] González et al. (2017). Appl. Energ., 200, 28.
Example: Wind Farm Layout Optimization – Global Optimization

• Most basic case: constant wind from one direction, minimize levelized cost of electricity

Benchmark solution [1]: fixed cells approach, genetic algorithm (stochastic global); Nr = 8 rows, Nc = 6 columns on a 45D × 45D area.
Improved solution [2]: pattern approach (open source, deterministic global).

 Levelized cost of electricity: −1.3 %
 Annual electricity production: +6.8 %
 Efficiency: +4.4 %-pt

 Both are optimized layouts
 Problem formulation and algorithm make a difference

[1] Grady et al. (2005), Renew. Energ., 30, 259.
[2] Bongartz (2020), in preparation.
Example: Sailing – Technology Choice

Typically, inventions come from human creativity, not from mathematical optimization:

• Ancient sailing [1]: fixed mast, no boom → mostly downwind
• Classic sailing [2]: fixed mast, boom → can go upwind
• Novel hulls (catamaran) [3] and novel sails (wing, ...) [4]
• Windsurfing [5]: mast moves
• Kite-surfing [6]: no mast
• Wing instead of sail or kite [7]: no mast, no boom, no ropes
• Hydrofoiling [8]: wing in water; from planing to flying!

[1] via http://www.salimbeti.com/micenei/ships.htm [2] Photo credit Eva Lambidoni [3] Photo credit Jörn Viell [4] Alexander Mitsos & Daniel Fouquet (skipper) (photo credit Eva Lambidoni) [5] Alexander Mitsos (photo credit Stephanie Mitsos) [6] Verena Niem & Dimitris Chatzigeorgiou [7] Sotiris Kontolios [8] Robby Naish
Example: Sailing – Optimization
Complex physics! Coupled hydro-, aero- and structural-dynamics problem. Infinite degrees of freedom.

Workflow: a parametric modeler turns shape parameters into a CAD model; mathematical modeling yields the objective function and constraints; an optimization algorithm returns the optimal shape.

The choice of parametrization decides the degrees of freedom.

(Figures: CAD model of a ship's hull [1]; geometric parametrization of bulbous bow and hydrofoil [2]; CAD model of a catamaran [3]; CAD model of a hydrofoil [4]; exemplary geometry approximation using B-splines [5].)

[1] via https://www.friendship-systems.com/solutions/for-ship-design (copyright free)
[2] Berrini et al., Geometric Modelling and Deformation for Shape Optimization of Ship Hulls and Appendages
[3] via http://www.wavescalpel.com/development-first-solidworks-cad-model-for-wavescalpel/ (copyright free)
[4] via https://grabcad.com/ (copyright free)
[5] Masters et al., A Geometric Comparison of Aerofoil Shape Parameterisation Methods, 2016
Renewable Electricity Generation by Kite: Optimization of Operation

• Wind power generation by kite: the kite has to be moved to generate power, ∫ F(t) v(t) dt > 0
• Hard optimal-control-under-uncertainty problem: optimization over a finite control horizon, noisy data, uncertain wind prediction, inaccurate control model
• Path found by ad-hoc schemes or based on nonlinear model-predictive control

Costello, Francois & Bonvin, European Journal of Control, 2017.
(Picture: By AweCrosswind, specially created for an article about Crosswind Kite Power, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=26463531)

Classification and issues of optimization


Classification of Optimization Problems

Optimization problems are classified with respect to the type of the objective function, constraints and
variables, in particular
• Linearity of objective function and constraints:
• Linear (LP) versus nonlinear programs (NLP)
• NLPs can be convex or nonconvex, smooth or nonsmooth
• Discrete and/or continuous variables:
• Integer programs (IP) and mixed-integer programs (MIP; with linear or nonlinear constraints: MILP and MINLP, respectively)
• Time-dependence:
• Dynamic optimization or optimal control programs (DO or OCP)
• Stochastic or deterministic models and variables:
• Stochastic programs, semi-infinite optimization, …
• Single objective vs multi-objective, single-level vs multi-level, ...
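The first distinction above (LP vs. NLP) can be made concrete with a tiny linear program solved with SciPy's linprog (the numbers are made up for illustration):

```python
from scipy.optimize import linprog

# Illustrative LP (hypothetical numbers): maximize 3*x1 + 2*x2
# subject to x1 + x2 <= 4, x1 <= 2, x1 >= 0, x2 >= 0.
# linprog minimizes, so the objective is negated.
res = linprog(c=[-3, -2],
              A_ub=[[1, 1], [1, 0]], b_ub=[4, 2],
              bounds=[(0, None), (0, None)],
              method="highs")
print(res.x, -res.fun)   # optimum at x = (2, 2), objective value 10
```

Because objective and constraints are all linear, the optimum sits at a vertex of the feasible polytope, which is what the simplex-type solver exploits.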

NEOS Classification of Stationary Optimization Problems
http://neos-guide.org/content/optimization-taxonomy

Common Terminology Used in Numerical Optimization

• An optimization problem: mathematical formulation to find the best possible solution out of all feasible
solutions. Typically comprising one or multiple objective function(s), decision variables, equality constraints
and/or inequality constraints.

• An algorithm is a procedure for solving a problem based on conducting a sequence of specified actions.
The terms ‘algorithm’ and ‘solution method’ are commonly used interchangeably.

• A solver is the implementation of an algorithm in a computer using a programming language. Often, the
terms ‘solver’ and ‘software’ are used interchangeably.

Formulation and Solution of Optimization Problems

1. Determine variables and phenomena of interest through systems analysis
2. Define optimality criteria: objective function(s) and (additional) constraints
3. Formulate a mathematical model of the system and determine the degrees of freedom (number and nature)
4. Identify the problem class (LP, QP, NLP, MINLP, OCP, etc.)
5. Select (or develop) a suitable algorithm
6. Solve the problem using a numerical solver
7. Verify the solution through sensitivity analysis, understand results, ...

Some Issues with Optimization

• Not a button-press technology


 Need expertise for model formulation, algorithm selection and tuning, checking results, …

• "Optimizer's curse": a solution obtained with a good algorithm and a bad model will look better than it is
 Random error: if the model has a random error and we optimize, the true objective value of the solution found will be worse than the calculated one
 If the model allows for a nonphysical solution with a good objective value, a good optimizer will pick it
 On the other hand, the model only has to lead in the correct direction, not be correct

• Many engineering (design) problems are nonconvex, but global algorithms are inherently very expensive

• The optimal solution often lies on a constraint, hence a tradeoff between a good and a robust solution

Check Yourself

• What is the difference between a nonlinear program, an optimal control problem and a stochastic program?

• What are the steps in formulating and solving an optimization problem?

• What are some issues in optimization?

• Formulate the application of your interest as an optimization problem


Formal definition of optimization


Some Simple Optimization Problems and Their Solutions
min_x f(x)                      (objective function)
s.t. c_i(x) = 0, ∀i ∈ E         (equality constraints, EC)
     c_i(x) ≤ 0, ∀i ∈ I         (inequality constraints, IC)

Example:

min_x (x1 − 2)² + (x2 − 1)²
s.t. x2 − 2x1 = 0
     x1² − x2 ≤ 0
     x1 + x2 ≤ 2

(Figure: contour lines of the objective over the feasible set for EC & IC; marked are the unconstrained optimal solution, the optimal solution with only EC, and the optimal solution with EC and IC.)
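The example problem above can be solved directly, e.g. with SciPy's SLSQP solver (a sketch; the solver choice and starting point are ours, not the lecture's):

```python
from scipy.optimize import minimize

# The slide's example: min (x1-2)^2 + (x2-1)^2
# s.t. x2 - 2*x1 = 0,  x1^2 - x2 <= 0,  x1 + x2 <= 2.
f = lambda x: (x[0] - 2)**2 + (x[1] - 1)**2
cons = [
    {"type": "eq",   "fun": lambda x: x[1] - 2*x[0]},
    {"type": "ineq", "fun": lambda x: -(x[0]**2 - x[1])},   # SLSQP expects g(x) >= 0
    {"type": "ineq", "fun": lambda x: -(x[0] + x[1] - 2)},
]
res = minimize(f, x0=[0.0, 0.0], method="SLSQP", constraints=cons)
print(res.x, res.fun)   # roughly (2/3, 4/3), objective 17/9
```

On the equality-constraint line x2 = 2x1 the unconstrained optimum would lie at x1 = 0.8, but the inequality x1 + x2 ≤ 2 caps x1 at 2/3, so the solution sits on that constraint.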
Nonlinear Optimization Problem (Nonlinear Program, NLP)

General formulation: x = [x1, x2, ..., xn]^T ∈ D ⊆ R^n, a vector (point in n-dimensional space)

min_{x∈D} f(x)
s.t. c_i(x) = 0, i ∈ E
     c_i(x) ≤ 0, i ∈ I

D: host set
f: D → R, objective function
c_i: D → R, constraint functions, ∀i ∈ E ∪ I
E: the index set of equality constraints
I: the index set of inequality constraints

The constraints and the host set define the feasible set, i.e., the set of all feasible solutions:

Ω = {x ∈ D | c_i(x) ≤ 0 ∀i ∈ I, c_i(x) = 0 ∀i ∈ E}

Equivalent formulation: min_{x∈Ω} f(x)

What Is an Optimal Solution ?

Definition (optimal solution, minimum): consider min_{x∈Ω} f(x).

(Figure: a solution x* with neighborhood N(x*), once in the interior of the feasible set Ω and once on its boundary.)

a) x* is a local solution if x* ∈ Ω and a neighborhood N(x*) of x* exists such that f(x*) ≤ f(x) ∀x ∈ N(x*) ∩ Ω

b) x* is a strict local solution if x* ∈ Ω and a neighborhood N(x*) of x* exists such that f(x*) < f(x) ∀x ∈ N(x*) ∩ Ω, x ≠ x*

c) x* is a global solution if x* ∈ Ω and f(x*) ≤ f(x) ∀x ∈ Ω

More formally, these are solution points.

Optimal Solution – Some Examples
min_{x∈R} f(x)

(Figure: four sketches of f(x).)
a) strict global minimum at x*
b) two strict local minima x1*, x2*, of which one is the strict global minimum
c) a strict local minimum at x*, no global minimum
d) each x* ∈ [a, b] is a local and global minimum; no strict minima
Check Yourself

• Write down the general definition of an optimization problem

• Definition of local and global solution of an optimization problem?

• Is every local solution also a global solution? Is every global solution also a local solution?

• What is the feasible set of an optimization problem?

• Can a solution be in the interior of the feasible set? On its boundary? Outside the feasible set?
 Draw the corresponding picture

• For given problem recognize the (local or global) optimal solution points


Mathematical background
Nonlinear Optimization Problem (Nonlinear Program, NLP)

General formulation: x = [x1, x2, ..., xn]^T ∈ D ⊆ R^n, a vector (point in n-dimensional space)

min_{x∈D} f(x)
s.t. c_i(x) = 0, i ∈ E
     c_i(x) ≤ 0, i ∈ I

D: host set
f: D → R, objective function
c_i: D → R, constraint functions, ∀i ∈ E ∪ I
E: the index set of equality constraints
I: the index set of inequality constraints

The constraints and the host set define the feasible set, i.e., the set of all feasible solutions:

Ω = {x ∈ D | c_i(x) ≤ 0 ∀i ∈ I, c_i(x) = 0 ∀i ∈ E}

Equivalent formulation: min_{x∈Ω} f(x)

Directional Derivative

Definition:

Let f: D → R, D ⊆ R^n, x ∈ D and p ∈ R^n with ‖p‖ = 1.

f is differentiable at the point x = x_a in the direction p if the limit

D(f, p)|_{x=x_a} = lim_{ε→0} [f(x_a + εp) − f(x_a)] / ε =: ∇_p f(x_a)

exists and is finite.

D(f, p) is called the directional derivative of f in the direction p.

(Figure: point x_a with direction p.)
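The definition can be checked numerically (the sample function, point and direction below are our own illustrative choices): for small ε the difference quotient approaches the directional derivative, which for differentiable f equals ∇f(x)^T p.

```python
import numpy as np

# Sample smooth function and its analytic gradient (illustrative choices):
f = lambda x: x[0]**2 + 3*x[0]*x[1]
grad = lambda x: np.array([2*x[0] + 3*x[1], 3*x[0]])

xa = np.array([1.0, 2.0])
p = np.array([3.0, 4.0])
p = p / np.linalg.norm(p)            # unit direction, as in the definition

eps = 1e-7
fd = (f(xa + eps * p) - f(xa)) / eps  # difference quotient from the definition
exact = grad(xa) @ p                  # gradient dotted with the direction
print(fd, exact)                      # both approximately 7.2
```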

Gradient

Definition:
• The first derivative of a scalar, differentiable function f is called the gradient of f at point x:

∇f(x) = [ ∂f/∂x1|_x
          ⋮
          ∂f/∂xn|_x ]

Remarks:
• If x is a function of time t, the chain rule applies:

df/dt|_{x(t)} = ∇f(x)^T dx/dt|_t = Σ_{i=1}^{n} (∂f/∂x_i)|_{x(t)} (∂x_i/∂t)|_t

• The directional derivative is related to the gradient:

D(f(x), p) = ∇_p f(x) = lim_{ε→0} [f(x + εp) − f(x)] / ε = ∇f(x)^T p

4 of 8 Applied Numerical Optimization


Prof. Alexander Mitsos, Ph.D.
Hessian (matrix)

Definition:
• The second derivative of a scalar, twice continuously differentiable function f is the symmetric Hessian (matrix) H(x) of the function f:

H(x) = ∇²f(x) = [ ∂²f/∂x1²|_x    ⋯  ∂²f/∂x1∂xn|_x
                  ⋮              ⋱  ⋮
                  ∂²f/∂xn∂x1|_x  ⋯  ∂²f/∂xn²|_x   ]
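A sketch of building the Hessian of a sample function (our own choice) by central finite differences and checking the symmetry stated in the definition:

```python
import numpy as np

# Sample twice continuously differentiable function (illustrative choice):
f = lambda x: x[0]**3 + 2*x[0]*x[1] + x[1]**2

def hessian_fd(f, x, h=1e-4):
    """Central finite-difference approximation of the Hessian of f at x."""
    n = len(x)
    H = np.zeros((n, n))
    I = np.eye(n)
    for i in range(n):
        for j in range(n):
            # second-order mixed central difference
            H[i, j] = (f(x + h*I[i] + h*I[j]) - f(x + h*I[i] - h*I[j])
                       - f(x - h*I[i] + h*I[j]) + f(x - h*I[i] - h*I[j])) / (4*h*h)
    return H

x = np.array([1.0, 2.0])
H = hessian_fd(f, x)
print(H)   # analytic Hessian at (1, 2): [[6, 2], [2, 2]]
```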

Necessary and Sufficient Conditions: Definitions and Properties

• Necessary condition: Statement A is a necessary condition for statement B if (and only if) the falsity of A guarantees the falsity of B. In math notation: not A ⇒ not B
• Sufficient condition: Statement A is a sufficient condition for statement B if (and only if) the truth of A guarantees the truth of B. In math notation: A ⇒ B
• If statement A is a necessary condition for statement B, then B is a sufficient condition for statement A.
 not A ⇒ not B implies B ⇒ A
• If statement A is a sufficient condition for statement B, then B is a necessary condition for statement A.
 A ⇒ B implies not B ⇒ not A
• In optimization we would like to have easy-to-check conditions that tell us if a candidate point
 is a local optimum (a sufficient condition for optimality holds)
 is not a local optimum (a necessary condition is violated)
Ideally we want conditions that are necessary and sufficient for a local optimum (or, even better, for a global one).

Necessary and Sufficient Conditions: Examples

• Simple example: let x ∈ R and y = x². Statement A: "x is positive"; statement B: "y is positive"
 A is sufficient for B. Proof: A true ⇔ x > 0 ⇒ x² > 0 ⇒ y > 0 ⇔ B true
 A is not necessary for B. Proof by counter-example: x = −1 ⇒ y = x² = 1, so B is true and A is false
• Example for sets. Let D_A, D_B ⊂ R^n and D_C = D_A ∩ D_B
 Statement A: x ∈ D_A
 Statement B: x ∈ D_B
 Statement C: x ∈ D_C
 A is necessary for C, B is necessary for C
 C is sufficient for A, C is sufficient for B
 (A and B) is both necessary and sufficient for C

(Figure: Venn diagram of D_A, D_B and their intersection D_C.)

Check Yourself

• Which functions are continuous, differentiable, continuously differentiable?


• How is the directional derivative of a function defined? How is the partial derivative related to the directional
derivative?
• What is the definition of the gradient and the Hessian of a function?


Optimality conditions in smooth unconstrained problems


Unconstrained Optimization

Unconstrained optimization problem: special case for which the feasible set Ω = R^n

min_{x∈R^n} f(x)

x* is a local solution if x* ∈ R^n and a neighborhood N(x*) of x* exists such that f(x*) ≤ f(x) ∀x ∈ N(x*)

We want easy-to-check conditions:
Necessary: if x* is optimal, then the condition is satisfied
Sufficient: if the condition is satisfied, then x* is optimal
Ideally both necessary and sufficient!

First-Order Necessary Conditions
Theorem (First-Order Necessary Conditions):
Let f be continuously differentiable and let x* ∈ R^n be a local minimizer of f. Then ∇f(x*) = 0.

Proof:
As x* is a local minimizer of f, for each p ∈ R^n there exists τ > 0 such that f(x* + εp) ≥ f(x*) ∀ε ∈ [0, τ].

By the definition of the directional derivative:

∇_p f(x*) = lim_{ε→0} [f(x* + εp) − f(x*)] / ε = ∇f(x*)^T p ≥ 0    (1)

The special choice p = −∇f(x*) leads to

∇f(x*)^T p = −∇f(x*)^T ∇f(x*) = −‖∇f(x*)‖² ≤ 0 (norm property)    (2)

(1) and (2) ⇒ ∇f(x*) = 0. ∎

(Figures: contour plot with local minimizer x* and a direction p; plot of f(x* + εp) over ε.)

Stationary Points

• Let f be continuously differentiable and x* ∈ R^n. If ∇f(x*) = 0 holds, then x* is called a stationary point of f.
• This condition is a necessary, but not a sufficient condition for a local minimum.
• Example: f(x) = −x² possesses its only stationary point at x* = 0, as ∇f(x*) = −2x* = 0. This point is, however, not a minimum but rather the unique global maximum.

Saddle Points

• A stationary point does not have to be a minimum or a maximum; a stationary point that is neither is called a saddle point.
• Example: the gradient of f(x) = x1² − x2² is ∇f(x) = [2x1, −2x2]^T. Thus, x* = 0 is its only stationary point. As f is positively curved in the x1-direction and negatively curved in the x2-direction, x* is a saddle point.

(Figure: saddle surface with saddle point at x* = 0.)
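The curvature argument can be made concrete via the eigenvalues of the Hessian, which for this example is constant:

```python
import numpy as np

# Hessian of f(x) = x1^2 - x2^2 (constant for this quadratic):
H = np.array([[ 2.0,  0.0],
              [ 0.0, -2.0]])
eigvals = np.linalg.eigvalsh(H)   # eigenvalues in ascending order
print(eigvals)                    # one negative, one positive -> saddle point
```

Mixed eigenvalue signs mean f increases along one eigendirection and decreases along another, exactly the saddle behavior described above.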

Second-Order Necessary Conditions

Theorem (Second-order necessary conditions):
Let f be twice continuously differentiable and let x* ∈ R^n be a local minimizer of f. Then

1. ∇f(x*) = 0,
2. ∇²f(x*) is positive semidefinite.

These conditions are only necessary and not sufficient:

• The only stationary point of f(x) = x³ is x* = 0, with ∇f(0) = 0, ∇²f(0) = 0. The above conditions are fulfilled, yet x* = 0 is not a local minimum but rather a saddle point.

• The only stationary point of f(x) = −x⁴ is x* = 0, with ∇f(0) = 0, ∇²f(0) = 0. The above conditions are fulfilled, yet x* = 0 is not a local minimum but rather a local maximum.

Second-Order Necessary Conditions: Informal Proof by Contradiction

Let f be twice continuously differentiable and let x* ∈ R^n be a local minimizer of f.

Assume ∇²f(x*) is not positive semidefinite. Thus ∃p ∈ R^n: p^T ∇²f(x*) p < 0.

Taylor expansion at x* gives

f(x* + εp) = f(x*) + ε ∇f(x*)^T p + ½ ε² p^T ∇²f(x*) p + O(ε³).

x* is a local minimum, and thus by the first-order necessary condition ∇f(x*) = 0.

For sufficiently small ε, the O(ε²) term dominates over O(ε³). Since p^T ∇²f(x*) p < 0,

f(x* + εp) < f(x*),

so x* is not a local minimum, a contradiction.

Sufficient Optimality Conditions
Theorem (sufficient optimality conditions):
Let f be twice continuously differentiable and let x* ∈ R^n. If

1. ∇f(x*) = 0,
2. ∇²f(x*) is positive definite,

then x* is a strict local minimizer of f.

Proof similar to the second-order necessary conditions.

Remark:
• f(x) = x⁴ attains at x* = 0 its (unique) strict global minimum. Further, ∇f(0) = 0 and ∇²f(0) = 0 hold, thus the 2nd condition in the above theorem is violated.

• Hence, the conditions mentioned in the theorem are sufficient but not necessary.

Optimality Conditions for Smooth Problems

• Optimality conditions are checked at a point, not for the whole R^n

• All the sets shown are true subsets:
{2nd-order sufficient conditions satisfied} ⊂ {local minima} ⊂ {2nd-order necessary conditions satisfied} ⊂ {1st-order conditions satisfied} ⊂ R^n

• The first-order necessary conditions exclude non-stationary points

• The second-order necessary conditions exclude some saddle points and some local maxima, but not all

(Figure: nested sets as listed above.)

Check Yourself

• What is a stationary point? Are there different kinds of stationary points?


• What are the first-order necessary conditions of optimality for smooth unconstrained problems?
• What are the second-order necessary conditions of optimality for smooth unconstrained problems?
• What are the second-order sufficient conditions for smooth unconstrained problems?
• Are there any necessary and sufficient optimality conditions? In general vs for specific classes


Examples for optimality conditions


Smooth Unconstrained Optimization (Recap)
Unconstrained optimization problem: min_{x∈R^n} f(x), the special case for which the feasible set Ω = R^n

Definition:
x* is a local solution if ∃N(x*): f(x*) ≤ f(x) ∀x ∈ N(x*)

Optimality conditions:
1st-order necessary: If x* is a local minimum, then ∇f(x*) = 0
2nd-order necessary: If x* is a local minimum, then ∇f(x*) = 0 and H(x*) is positive semidefinite
2nd-order sufficient: If ∇f(x*) = 0 and H(x*) is positive definite, then x* is a local minimum

Example for Application of Necessary Condition (1)
Problem
Find all stationary points of the function
f(x) = x₁⁴ + x₁²(1 − 2x₂) + 2x₂² − 2x₁x₂ + 4.5x₁ − 4x₂ + 4

and use these to determine all minima.

Solution
The stationary points x* are defined by the condition ∇f(x*) = 0:

∇f(x) = [ ∂f/∂x₁ , ∂f/∂x₂ ]ᵀ = [ 4x₁³ + 2x₁(1 − 2x₂) − 2x₂ + 4.5 ,  −2x₁² + 4x₂ − 2x₁ − 4 ]ᵀ = 0

⇒  4x₁³ + 2x₁(1 − 2x₂) − 2x₂ + 4.5 = 0
   −2x₁² + 4x₂ − 2x₁ − 4 = 0

Example for Application of Necessary Condition (2)

• Solving the system of equations yields the stationary points
  A(−1.053, 0.9855),
  B(1.941, 3.854),
  C(0.6117, 1.4929).

• To classify the stationary points, we investigate the definiteness of the Hessian H(x):

  H(x) = [[ 12x₁² + 2(1 − 2x₂) ,  −4x₁ − 2 ],
          [ −4x₁ − 2           ,   4       ]]

[Figure: contour plot of f in the (x₁, x₂)-plane with the stationary points marked]
• At A and B, all eigenvalues of H are positive. By the 2nd-order sufficient conditions, A and B are local minima. (A is in fact the unique global minimizer.)
• At C, the Hessian has one positive and one negative eigenvalue. The 2nd-order necessary conditions are violated, so C is not a local minimum (C is in fact a saddle point).
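The whole example can be reproduced numerically. A sketch with NumPy/SciPy (the starting guesses are my own, chosen near the reported points; fsolve only converges locally):

```python
import numpy as np
from scipy.optimize import fsolve

def grad(x):
    """Gradient of f(x) = x1^4 + x1^2(1 - 2x2) + 2x2^2 - 2x1x2 + 4.5x1 - 4x2 + 4."""
    x1, x2 = x
    return np.array([4*x1**3 + 2*x1*(1 - 2*x2) - 2*x2 + 4.5,
                     -2*x1**2 + 4*x2 - 2*x1 - 4])

def hess(x):
    x1, x2 = x
    return np.array([[12*x1**2 + 2*(1 - 2*x2), -4*x1 - 2],
                     [-4*x1 - 2,               4.0]])

# Starting guesses near A, B and C; each run solves grad f = 0 locally,
# then classifies the root via the eigenvalues of the Hessian.
for guess in ([-1.0, 1.0], [2.0, 4.0], [0.5, 1.5]):
    x_star = fsolve(grad, guess)
    eigvals = np.linalg.eigvalsh(hess(x_star))
    kind = "local minimum" if eigvals.min() > 0 else "saddle point / maximum"
    print(np.round(x_star, 4), "eigenvalues:", np.round(eigvals, 3), "->", kind)
```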
A Funky Function [1] (1)
f(x) = (x₂ − x₁²)(x₂ − 4x₁²)

[Figure: four plots of f restricted to lines through the origin — along x₁ = 500x₂, along x₁ = x₂, along x₂ = 0, and along x₁ = 0 — each restriction has a local minimum at 0]

• ∇f(x) = [ −10x₁x₂ + 16x₁³ ,  2x₂ − 5x₁² ]ᵀ,  so ∇f(0) = 0

• H(x) = [[ −10x₂ + 48x₁² , −10x₁ ], [ −10x₁ , 2 ]],  so H(0) = [[ 0 , 0 ], [ 0 , 2 ]]

• The necessary conditions (1st and 2nd order) are satisfied; the sufficient conditions are not.
• 0 is a local minimum of f with respect to every line through it.
• Yet 0 is not a local minimum of f. How can that be?

[1] Bertsekas, D.P., Nonlinear Programming, 3rd edition, Athena Scientific, 2016.
A Funky Function [1] (2)

f(x) = (x₂ − x₁²)(x₂ − 4x₁²)

• Take x₂ = 2x₁². We have f(x₁, 2x₁²) = (2x₁² − x₁²)(2x₁² − 4x₁²) = −2x₁⁴, and clearly (0, 0) is not a minimum along that curve.

[Figure: plot of f(x₁, 2x₁²) = −2x₁⁴ over x₁ ∈ [−2, 2], strictly negative for x₁ ≠ 0]

• The trick: 0 fails to be a minimum along a curve (rather than a straight line) that passes through it.
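A numerical sketch of the phenomenon (the sample values of t and the slopes m are my own choices):

```python
# f is nonnegative near 0 along every straight line through the origin,
# but strictly negative along the curve x2 = 2*x1**2.

def f(x1, x2):
    return (x2 - x1**2) * (x2 - 4 * x1**2)

# Along lines x2 = m*x1 (including the x1-axis, m = 0), f >= 0 near 0:
for m in (0.0, 0.5, 1.0, -2.0, 500.0):
    assert all(f(t, m * t) >= 0.0 for t in (-1e-3, -1e-4, 1e-4, 1e-3))
# ... and along the x2-axis as well, since f(0, x2) = x2**2:
assert all(f(0.0, t) >= 0.0 for t in (-1e-3, 1e-3))

# Along the curve x2 = 2*x1**2, however, f(x1, 2*x1**2) = -2*x1**4 < 0:
assert all(f(t, 2 * t**2) < 0.0 for t in (-1e-3, 1e-3))
```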


Convexity in optimization
Convexity of a Set

Definition (convex set):


• A set Ω ⊆ Rⁿ is convex if ∀ x₁, x₂ ∈ Ω and ∀ λ ∈ [0, 1]: λx₁ + (1 − λ)x₂ ∈ Ω.

[Figure: a convex set and a nonconvex set]

• The constraints define the feasible set Ω = {x ∈ D | cᵢ(x) ≤ 0 ∀ i ∈ I, cᵢ(x) = 0 ∀ i ∈ E}.

• Convexity of Ω makes a big difference, both in theoretical properties and in numerical solution.

• A set is either convex or nonconvex (no “concave sets”, please).
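The definition can be probed numerically. A toy sketch (the two sets and the sample points are my own): check whether chords between member points stay inside the set.

```python
import numpy as np

in_disk = lambda x: np.linalg.norm(x) <= 1.0              # the unit disk (convex)
in_annulus = lambda x: 0.5 <= np.linalg.norm(x) <= 1.0    # an annulus (nonconvex)

def chord_leaves_set(member, points, lambdas):
    """True if some convex combination of two member points lies outside the set."""
    for x1 in points:
        for x2 in points:
            if member(x1) and member(x2):
                if any(not member(lam * x1 + (1 - lam) * x2) for lam in lambdas):
                    return True
    return False

pts = np.array([[0.75, 0.0], [-0.75, 0.0], [0.0, 0.9], [0.6, 0.6]])
lams = np.linspace(0.0, 1.0, 11)

assert not chord_leaves_set(in_disk, pts, lams)   # no violation found
assert chord_leaves_set(in_annulus, pts, lams)    # midpoint of (+-0.75, 0) is in the hole
```

Such sampling can only refute convexity, never prove it; it merely illustrates the definition.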


Convexity of a Function
Definition (convex function): assume D is convex.
• A function f is convex on D if ∀ x₁, x₂ ∈ D, ∀ λ ∈ [0, 1]: f(λx₁ + (1 − λ)x₂) ≤ λf(x₁) + (1 − λ)f(x₂).
• f is strictly convex on D if ∀ x₁, x₂ ∈ D with x₁ ≠ x₂, ∀ λ ∈ (0, 1): f(λx₁ + (1 − λ)x₂) < λf(x₁) + (1 − λ)f(x₂).
• f is (strictly) concave on D if (−f) is (strictly) convex.
• Affine (linear) functions are both convex and concave, but neither strictly.

[Figure: three function graphs — strictly convex; concave, but not strictly; neither convex nor concave]

Criteria for Convexity (1)
Definition (positive definite):
• A symmetric (n×n)-matrix 𝑨 is called positive definite, if 𝒑𝑇 𝑨𝒑 > 0 ∀ 𝒑 ∈ 𝑅𝑛 , 𝒑 ≠ 𝟎.
• A symmetric (n×n)-matrix 𝑨 is called positive semi-definite, if 𝒑𝑇 𝑨𝒑 ≥ 0 ∀ 𝒑 ∈ 𝑅𝑛 .
• If (-𝑨) is positive (semi-)definite, then 𝑨 is called negative (semi-)definite.

Theorem:
A symmetric (n×n)-matrix A is positive definite if and only if λ_k > 0 ∀ k ∈ {1, …, n}, where
the λ_k are the eigenvalues of A, i.e., the solutions of det(A − λI) = 0.
Similarly, A is positive semi-definite if and only if λ_k ≥ 0 ∀ k ∈ {1, …, n}.

Theorem (convexity under differentiability):
Let f be twice continuously differentiable on the open convex set D.
• f is convex on D iff the Hessian H(x) is positive semi-definite ∀ x ∈ D.
• If H(x) is positive definite ∀ x ∈ D, then f is strictly convex on D.
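The theorem suggests a brute-force numerical convexity check: inspect the smallest Hessian eigenvalue over D. A rough sketch, assuming D is a box we can grid over (the grid size is my own choice; sampling can refute convexity but never prove it):

```python
import numpy as np

def min_hessian_eig(hess, lo, hi, n=21):
    """Smallest eigenvalue of hess(x) over an n-by-n grid on the box [lo, hi]^2."""
    grid = np.linspace(lo, hi, n)
    return min(np.linalg.eigvalsh(hess(np.array([x1, x2]))).min()
               for x1 in grid for x2 in grid)

# f(x) = x1^2 + x2^2: Hessian 2*I is positive definite -> strictly convex.
hess_convex = lambda x: np.array([[2.0, 0.0], [0.0, 2.0]])
assert min_hessian_eig(hess_convex, -1.0, 1.0) > 0

# f(x) = x1^2 - x2^2: eigenvalues {2, -2}, indefinite -> neither convex nor concave.
hess_saddle = lambda x: np.array([[2.0, 0.0], [0.0, -2.0]])
assert min_hessian_eig(hess_saddle, -1.0, 1.0) < 0
```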
Criteria for Convexity (2)

If, ∀ x ∈ D:

  f is                          definiteness of H(x)      sign of eigenvalues λ_k   sign of pᵀH(x)p (∀ p ∈ Rⁿ, p ≠ 0)
  strictly convex               positive definite         all > 0                   > 0
  convex                        positive semi-definite    all ≥ 0                   ≥ 0
  strictly concave              negative definite         all < 0                   < 0
  concave                       negative semi-definite    all ≤ 0                   ≤ 0
  neither convex nor concave    —                         both signs occur          both signs occur

Geometric Illustration of Convexity for 𝒙 ∈ 𝑅2

[Figure: surface plots of four functions]
• f(x) = x₁² + x₂²:  λ₁, λ₂ > 0 everywhere (strictly convex)
• f(x) = x₁²:  λ₁ > 0, λ₂ = 0 (convex, but not strictly)
• f(x) = x₁² − x₂²:  λ₁ > 0, λ₂ < 0 (neither convex nor concave)
• f(x) = (x₂ − x₁²)(x₂ − 4x₁²):  the signs of the eigenvalues depend on x
Convex Optimization Problems

Definition (convex optimization problem):

• The optimization problem min_{x ∈ Ω} f(x) is convex if the objective function f is convex and the feasible set Ω is convex.

• If D is a convex set, the cᵢ, i ∈ I, are convex on D, and the cᵢ, i ∈ E, are linear, then
  Ω = {x ∈ D | cᵢ(x) ≤ 0 ∀ i ∈ I, cᵢ(x) = 0 ∀ i ∈ E} is convex.

• Apart from trivial exceptions:
   Ω is nonconvex if any cᵢ, i ∈ E, is a nonlinear function.
   Ω is nonconvex if any cᵢ, i ∈ I, is nonconvex on D.
   Extra challenge: find such exceptions.

• “… in fact, the great watershed in optimization isn't between linearity and nonlinearity, but convexity and
nonconvexity.” R. Tyrrell Rockafellar in SIAM Review, 1993
Optimality Conditions for Smooth Convex Problems

• Let 𝑓 be twice continuously differentiable and convex.

• Since 𝑓 is smooth and convex, 𝜵2 𝑓(𝒙) is positive semi-definite for all 𝒙.

• If 𝒙∗ ∈ 𝑅𝑛 is a local solution point, then it also is a global solution point.


 Proof argument: convexity implies that the slope of f along any direction does not decrease as we move away from x*, so no other, distinct local minimum can exist.

• A point x* ∈ Rⁿ is a global solution point if and only if ∇f(x*) = 0.
   Convexity implies f(x) ≥ f(x*) + ∇f(x*)ᵀ(x − x*) ∀ x.
   With stationarity, this gives f(x) ≥ f(x*) ∀ x.

• Simply put:
   The first-order optimality condition is both necessary and sufficient.
   Being a stationary point, a local solution point, and a global solution point are all equivalent.
   A similar property holds for constrained problems: convexity implies that the first-order optimality conditions are both necessary and sufficient.
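A small sketch of this equivalence (the quadratic objective with matrix Q and vector b is invented for illustration): for a smooth convex f, solving ∇f = 0 and running a local search from arbitrary starting points give the same, global answer.

```python
import numpy as np
from scipy.optimize import minimize

Q = np.array([[3.0, 1.0], [1.0, 2.0]])   # symmetric positive definite -> f convex
b = np.array([1.0, -1.0])

def f(x):
    """Convex quadratic f(x) = 0.5*x'Qx - b'x with gradient Qx - b."""
    return 0.5 * x @ Q @ x - b @ x

x_stationary = np.linalg.solve(Q, b)     # the unique point with grad f(x) = Qx - b = 0

# Local searches from several starting points all find the same global minimizer:
rng = np.random.default_rng(0)
for _ in range(3):
    x0 = 10.0 * rng.normal(size=2)
    res = minimize(f, x0)
    assert np.allclose(res.x, x_stationary, atol=1e-3)
```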

Check Yourself

• When is an optimization problem convex?


• How can we check the convexity of a smooth unconstrained optimization problem? (at least in principle)
• Are there any necessary and sufficient optimality conditions? In general vs for specific classes
