EDO - Lecture 5 - 2024
PPU19160
Optimization algorithm
Part I – Unconstrained Optimization
EDO Chapter 4.1 – 4.5
Gauti Asbjörnsson
Department of Industrial and Materials Science
Chalmers University of Technology
SE-412 96 Gothenburg
e-mail: [email protected]
Lecture objective
Key points:
• Gradient-based algorithms
• Optimality conditions
• Line search
• Search direction
Gradient: $\nabla f(\mathbf{x}) \triangleq \left[ \dfrac{\partial f}{\partial x_1}, \dfrac{\partial f}{\partial x_2}, \ldots, \dfrac{\partial f}{\partial x_n} \right]$
Hessian: $\mathbf{H}(\mathbf{x}) \triangleq \begin{bmatrix} \dfrac{\partial^2 f}{\partial x_1^2} & \cdots & \dfrac{\partial^2 f}{\partial x_1 \partial x_n} \\ \vdots & \ddots & \vdots \\ \dfrac{\partial^2 f}{\partial x_n \partial x_1} & \cdots & \dfrac{\partial^2 f}{\partial x_n^2} \end{bmatrix}$
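These derivatives can also be approximated numerically. Below is a minimal NumPy sketch, not from the lecture, that estimates the gradient with forward differences and the Hessian with central differences; the test function and the step sizes h are illustrative assumptions.

```python
import numpy as np

def gradient_fd(f, x, h=1e-6):
    """Forward-difference approximation of the gradient of f at x."""
    x = np.asarray(x, dtype=float)
    g = np.zeros_like(x)
    fx = f(x)
    for i in range(x.size):
        xp = x.copy()
        xp[i] += h
        g[i] = (f(xp) - fx) / h
    return g

def hessian_fd(f, x, h=1e-4):
    """Central-difference approximation of the Hessian of f at x."""
    x = np.asarray(x, dtype=float)
    n = x.size
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            xpp = x.copy(); xpp[i] += h; xpp[j] += h
            xpm = x.copy(); xpm[i] += h; xpm[j] -= h
            xmp = x.copy(); xmp[i] -= h; xmp[j] += h
            xmm = x.copy(); xmm[i] -= h; xmm[j] -= h
            H[i, j] = (f(xpp) - f(xpm) - f(xmp) + f(xmm)) / (4 * h * h)
    return H

# Hypothetical test function: f(x) = x1^2 + 3*x1*x2
f = lambda x: x[0]**2 + 3 * x[0] * x[1]
print(gradient_fd(f, [1.0, 2.0]))  # approx [8, 3]
print(hessian_fd(f, [1.0, 2.0]))   # approx [[2, 3], [3, 0]]
```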
[Figure: examples of a convex and a non-convex set]
More rigorously, a set $S$ is convex if, for every pair of points $\mathbf{x}_1, \mathbf{x}_2 \in S$, the point
$\mathbf{x}(\lambda) = \lambda \mathbf{x}_2 + (1 - \lambda)\mathbf{x}_1, \quad 0 \le \lambda \le 1$
also belongs to $S$.
If the Hessian matrix is positive-definite at a stationary point, then that point is a local minimum. A twice-differentiable function is convex if its Hessian matrix is positive-semidefinite over its entire convex domain.
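As a quick numerical check of these conditions, one can inspect the signs of the Hessian's eigenvalues. The sketch below does this for a symmetric matrix; the tolerance value is an assumption, not from the lecture.

```python
import numpy as np

def classify_hessian(H, tol=1e-10):
    """Classify a symmetric Hessian by the signs of its eigenvalues."""
    eigvals = np.linalg.eigvalsh(H)  # eigenvalues in ascending order
    if np.all(eigvals > tol):
        return "positive definite (local minimum)"
    if np.all(eigvals >= -tol):
        return "positive semidefinite"
    if np.all(eigvals < -tol):
        return "negative definite (local maximum)"
    return "indefinite (saddle point)"

print(classify_hessian(np.array([[2.0, 0.0], [0.0, 3.0]])))   # positive definite
print(classify_hessian(np.array([[2.0, 0.0], [0.0, -3.0]])))  # indefinite
```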
Find the optimal solution
Applying algorithms to find solution(s)
• Basic approaches
• Nonlinear, gradient-based
• Nonlinear, gradient-free
[Figure: quadratic functions whose Hessians are positive definite and indefinite]
One major issue with steepest descent is that, in general, the entries in the gradient, and its overall scale, can vary greatly depending on the magnitudes of the objective function and the design variables. The search direction is therefore often better posed as a normalized direction,
$\mathbf{p}_k = -\dfrac{\nabla f_k}{\lVert \nabla f_k \rVert}.$
A first guess for the step length in each line search can then be obtained by assuming that the decrease in the objective function will be comparable to the one obtained in the previous line search:
$\alpha_k^{(0)} = \alpha_{k-1} \dfrac{\nabla f_{k-1}^{T}\, \mathbf{p}_{k-1}}{\nabla f_k^{T}\, \mathbf{p}_k}$
A sketch of the resulting algorithm follows.
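This is a minimal sketch of steepest descent with normalized directions and the step-length heuristic above; the backtracking constants and the test problem are assumptions, not the lecture's implementation.

```python
import numpy as np

def steepest_descent(f, grad, x0, tol=1e-6, max_iter=500):
    """Steepest descent with normalized directions; the initial step of each
    backtracking line search assumes a decrease comparable to the last one."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    alpha = 1.0                                  # first step-length guess (assumed)
    for _ in range(max_iter):
        gnorm = np.linalg.norm(g)
        if gnorm < tol:
            break
        p = -g / gnorm                           # normalized steepest-descent direction
        fx, slope = f(x), g @ p                  # slope < 0 along a descent direction
        while f(x + alpha * p) > fx + 1e-4 * alpha * slope:
            alpha *= 0.5                         # backtrack until sufficient decrease
        x = x + alpha * p
        g_new = grad(x)
        p_new = -g_new / max(np.linalg.norm(g_new), 1e-16)
        if g_new @ p_new < 0:
            # alpha_{k+1} = alpha_k * (grad_k^T p_k) / (grad_{k+1}^T p_{k+1})
            alpha *= slope / (g_new @ p_new)
        g = g_new
    return x

# Hypothetical test problem: f(x) = x1^2 + 2*x2^2
x_star = steepest_descent(lambda x: x[0]**2 + 2 * x[1]**2,
                          lambda x: np.array([2 * x[0], 4 * x[1]]),
                          [3.0, 1.0])
```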
[Worked example: steepest-descent iterations on a test problem, showing the gradient, Hessian, and algorithm steps at each iteration]
Conjugate gradient
Algorithm: $\beta_k = \dfrac{\nabla f_k^{T}\, \nabla f_k}{\nabla f_{k-1}^{T}\, \nabla f_{k-1}}, \qquad \mathbf{p}_k = -\dfrac{\nabla f_k}{\lVert \nabla f_k \rVert} + \beta_k\, \mathbf{p}_{k-1}$
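Assuming the update above is the Fletcher-Reeves form, a minimal sketch could look as follows; the simple backtracking line search and the descent-direction safeguard are assumptions, not part of the slides.

```python
import numpy as np

def nonlinear_cg(f, grad, x0, tol=1e-6, max_iter=200):
    """Nonlinear conjugate gradient with the Fletcher-Reeves ratio,
    following the update formula above; a sketch, not production code."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    p = -g / np.linalg.norm(g)               # first direction: steepest descent
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        if g @ p >= 0:                       # safeguard: reset to steepest descent
            p = -g / np.linalg.norm(g)
        alpha, fx, slope = 1.0, f(x), g @ p
        while f(x + alpha * p) > fx + 1e-4 * alpha * slope:
            alpha *= 0.5                     # backtrack until sufficient decrease
        x = x + alpha * p
        g_new = grad(x)
        beta = (g_new @ g_new) / (g @ g)     # Fletcher-Reeves ratio
        p = -g_new / max(np.linalg.norm(g_new), 1e-16) + beta * p
        g = g_new
    return x
```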
Newton’s Method
Newton’s method uses second-order (curvature) information to get better estimates for
search directions.
$\mathbf{x}_{k+1} = \mathbf{x}_k - \mathbf{H}(\mathbf{x}_k)^{-1}\, \nabla f(\mathbf{x}_k)$
For a quadratic function, Newton's method with an exact line search requires, unsurprisingly, only one iteration, since the quadratic model of the function is then exact. A minimal sketch is given below.
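The sketch solves the Newton system rather than forming the inverse explicitly, and the analytic gradient and Hessian are supplied by the caller; the test problem is an assumption.

```python
import numpy as np

def newton_method(grad, hess, x0, tol=1e-8, max_iter=50):
    """Newton's method: x_{k+1} = x_k - H(x_k)^{-1} grad f(x_k).
    Solves H p = -g as a linear system instead of inverting H."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        x = x + np.linalg.solve(hess(x), -g)   # full Newton step
    return x

# On a quadratic f(x) = x1^2 + 2*x2^2 the Hessian is constant,
# so a single Newton step lands exactly on the minimum.
x_star = newton_method(lambda x: np.array([2 * x[0], 4 * x[1]]),
                       lambda x: np.array([[2.0, 0.0], [0.0, 4.0]]),
                       [3.0, 1.0])
```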
Newton's method
[Worked example: Newton iterations, showing the gradient, Hessian, and algorithm steps at each iteration]
Trust region
The quality of the quadratic model is judged by comparing the actual reduction in the objective with the reduction the model estimated:
$r = \dfrac{\text{actual reduction}}{\text{estimated reduction}}$
An $r$ value close to 1 means that the model agrees well with the actual function. A value larger than 1 means that the actual decrease was even greater than expected. A negative value means that the function actually increased at the predicted minimum, and the model is therefore not suitable. A sketch of the resulting radius update is shown below.
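This sketch shows a common textbook rule for resizing the trust region based on r; the thresholds (0.25, 0.75) and scaling factors are conventional choices, not values from the lecture.

```python
def trust_region_update(r, radius, step_norm, radius_max=10.0):
    """Accept or reject a step and resize the trust region based on
    r = actual reduction / estimated reduction."""
    if r < 0.25:
        radius = 0.25 * radius                  # poor model: shrink the region
    elif r > 0.75 and step_norm >= 0.99 * radius:
        radius = min(2.0 * radius, radius_max)  # good model at the boundary: expand
    accept = r > 0                              # accept only if f actually decreased
    return radius, accept
```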
Wednesday at 13:15: Material selection and DOE workshop (W3 and W4)
Wednesday at 15:15: Material Selection in Design (L7)
Q&A