
Math 404

Linear and Nonlinear Programming

Classical Optimization Techniques

Dr. Ahmed Sayed AbdelSamea

Giza, Egypt, Fall 2024
[email protected]
Classical Optimization Techniques
Single Variable Optimization
• A function f(x) of a single variable is said to have a local (relative) minimum at x = x* if f(x*) ≤ f(x* + h) for all sufficiently small positive and negative values of h (i.e., in a neighborhood of x*, as h → 0).
• A function f(x) has a global (absolute) minimum at x* if f(x*) ≤ f(x) for all x ∈ dom(f).
Classical Optimization Techniques
Single Variable Optimization
Theorem (Necessary condition)
If a function f(x) is defined on the interval a ≤ x ≤ b and has a local (relative) minimum at x = x*, where a < x* < b, and if f′(x*) exists, then f′(x*) = 0.
Proof
It is given that

f′(x*) = lim_{h→0} [f(x* + h) − f(x*)] / h

exists; we need to prove that it is zero.
Since x* is a local minimum, we have f(x*) ≤ f(x* + h) for all values of h sufficiently close to zero.
Classical Optimization Techniques
Single Variable Optimization
Hence,

lim_{h→0⁺} [f(x* + h) − f(x*)] / h ≥ 0 for h > 0  ⟹  f′(x*) ≥ 0

lim_{h→0⁻} [f(x* + h) − f(x*)] / h ≤ 0 for h < 0  ⟹  f′(x*) ≤ 0

Both inequalities can hold at once only if f′(x*) = 0.
Classical Optimization Techniques
Single Variable Optimization
Notes
1. The same argument applies to a local maximum (try it!).
2. The theorem is not applicable if f′(x*) does not exist, e.g., f(x) = |x| at x = 0.
3. The theorem is also not applicable at the endpoints of the interval (where only a one-sided derivative exists).
4. The theorem states that if x* is a local min (max), then f′(x*) = 0; the converse is not true, e.g., f(x) = x³ has f′(0) = 0, but x = 0 is not an extremum.
Classical Optimization Techniques
Single Variable Optimization
Theorem (Sufficient Condition)
Let f′(x*) = ⋯ = f⁽ⁿ⁻¹⁾(x*) = 0, but f⁽ⁿ⁾(x*) ≠ 0. Then f(x*) is
– a minimum value of f if f⁽ⁿ⁾(x*) > 0 and n is even;
– a maximum value of f if f⁽ⁿ⁾(x*) < 0 and n is even;
– neither a maximum nor a minimum if n is odd.
Proof
By Taylor's theorem with remainder,

f(x* + h) − f(x*) = (hⁿ / n!) f⁽ⁿ⁾(x* + θh),  0 < θ < 1.

For n even, hⁿ > 0, so for small h the sign of the left-hand side is the sign of f⁽ⁿ⁾(x*). For n odd, hⁿ changes sign with h, so f(x* + h) − f(x*) changes sign and x* is neither a min nor a max.
Classical Optimization Techniques
Single Variable Optimization
Example
Determine the maximum and minimum values of the function
f(x) = 12x⁵ − 45x⁴ + 40x³ + 5
Solution
Find the stationary points: f′(x) = 60x⁴ − 180x³ + 120x² = 60x²(x − 1)(x − 2) = 0, so x = 0, 1, 2.
– At x = 2: a minimum of f, since f″(2) = 240 > 0; f_min = f(2) = −11.
– At x = 1: a maximum of f, since f″(1) = −60 < 0; f_max = f(1) = 12.
– At x = 0: an inflection point, since f″(0) = 0 and f‴(0) = 240 ≠ 0 (n = 3, odd).
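As a check (a sketch, not part of the original slides), the higher-order derivative test can be run symbolically; the following Python/SymPy snippet reproduces the classification above:

```python
# A minimal sketch: classify each stationary point of the example using
# the lowest-order nonvanishing derivative (higher-order derivative test).
import sympy as sp

x = sp.symbols('x')
f = 12*x**5 - 45*x**4 + 40*x**3 + 5

for pt in sp.solve(sp.diff(f, x), x):         # stationary points: 0, 1, 2
    n = 1
    while sp.diff(f, x, n).subs(x, pt) == 0:  # smallest n with f^(n)(pt) != 0
        n += 1
    dn = sp.diff(f, x, n).subs(x, pt)
    kind = ('inflection point' if n % 2 == 1
            else 'local minimum' if dn > 0 else 'local maximum')
    print(f'x = {pt}: {kind}, f = {f.subs(x, pt)}')
```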
Classical Optimization Techniques
Multivariable Optimization
Theorem (Necessary condition)
If f(x) has a local min (max) at x = x*, and if the first partial derivatives of f(x) exist at x*, then ∇f(x*) = 0.
Proof
First-order Taylor expansion:

f(x* + h) − f(x*) = hᵀ∇f(x*) + R₁(x*, h)

For small h, the first-order term hᵀ∇f(x*) dominates the higher-order remainder R₁, so the sign of the left-hand side is the sign of hᵀ∇f(x*). If ∇f(x*) ≠ 0, then hᵀ∇f(x*) can be made negative by a suitable choice of h, contradicting that x* is a local min (and likewise made positive, contradicting a local max). Hence hᵀ∇f(x*) must vanish for every h, which forces
∇f(x*) = 0
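To illustrate (a sketch not taken from the slides, on an assumed example function), the first-order condition ∇f(x*) = 0 can be solved symbolically:

```python
# A minimal sketch, assuming the example f(x, y) = (x - 1)**2 + 2*(y + 2)**2:
# solve the necessary condition grad f = 0 with SymPy.
import sympy as sp

x, y = sp.symbols('x y')
f = (x - 1)**2 + 2*(y + 2)**2

grad = [sp.diff(f, v) for v in (x, y)]    # [2*(x - 1), 4*(y + 2)]
print(sp.solve(grad, (x, y), dict=True))  # [{x: 1, y: -2}]
```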
Classical Optimization Techniques
Multivariable Optimization
Theorem (Sufficient condition)
A sufficient condition for a stationary point x* to be a local minimum is that the matrix of second partial derivatives (the Hessian matrix) of f(x) at x* is positive definite (PD):
H(x*) = ∇²f(x*) ≻ 0
Proof
Using the second-order Taylor expansion:

f(x* + h) − f(x*) = hᵀ∇f(x*) + ½ hᵀ∇²f(x* + θh) h,  0 < θ < 1.
Classical Optimization Techniques
Multivariable Optimization
Proof (continued)
At the stationary point x*, ∇f(x*) = 0, so the expansion reduces to

f(x* + h) − f(x*) = ½ hᵀ∇²f(x* + θh) h,  0 < θ < 1.

For x* to be a local minimum, the right-hand side must be positive for all sufficiently small h ≠ 0, which holds when

H(x*) = ∇²f(x*) ≻ 0
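Continuing the assumed example from the necessary-condition sketch, the sufficient condition can be verified by testing the Hessian at the stationary point:

```python
# A minimal sketch (same assumed example, f(x, y) = (x - 1)**2 + 2*(y + 2)**2):
# check positive definiteness of the Hessian at the stationary point (1, -2).
import sympy as sp

x, y = sp.symbols('x y')
f = (x - 1)**2 + 2*(y + 2)**2

H = sp.hessian(f, (x, y))            # [[2, 0], [0, 4]], constant here
H_star = H.subs({x: 1, y: -2})
print(H_star.is_positive_definite)   # True -> (1, -2) is a local minimum
```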
Classical Optimization Techniques
Multivariable Optimization
Notes
A symmetric matrix A is positive definite (PD) if and only if any one of the following equivalent conditions holds (a code sketch of these tests follows):
• All eigenvalues of A are positive.
• All leading principal minors of A are positive.
• hᵀAh > 0 for all h ≠ 0.

Think about the analogous tests for a negative definite matrix!
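The same tests can be run numerically; a minimal sketch (not from the slides, with an assumed example matrix):

```python
# The three equivalent PD tests in NumPy, on an assumed symmetric matrix A.
import numpy as np

A = np.array([[2.0, -1.0],
              [-1.0, 2.0]])

# Test 1: all eigenvalues positive (eigvalsh is for symmetric matrices).
print(np.all(np.linalg.eigvalsh(A) > 0))             # True

# Test 2: all leading principal minors positive.
print(all(np.linalg.det(A[:k, :k]) > 0
          for k in range(1, A.shape[0] + 1)))        # True

# Test 3 (spot check, not a proof): h^T A h > 0 for a random nonzero h.
h = np.random.randn(2)
print(h @ A @ h > 0)                                 # True
```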
Classical Optimization Techniques
Multivariable Optimization
Semi-definite Case

H(x*) = ∇²f(x*) ≽ 0

If the Hessian is only positive semi-definite, investigate the higher-order derivatives in the Taylor series expansion.

Saddle point
• For a function of two variables, the Hessian matrix may be neither positive nor negative definite at a stationary point (x*, y*); such a point is called a saddle point.
• Saddle points may also exist for functions of more than two variables.
Classical Optimization Techniques
Multivariable Optimization
Saddle point
Example

f(x, y) = x² − y²

At the stationary point (0, 0), f increases along the x-axis and decreases along the y-axis, so the origin is neither a minimum nor a maximum.
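A quick symbolic check (a sketch, not from the slides):

```python
# The Hessian of f(x, y) = x**2 - y**2 at (0, 0) is indefinite,
# so the stationary point is a saddle point.
import sympy as sp

x, y = sp.symbols('x y')
f = x**2 - y**2

print(sp.solve([sp.diff(f, x), sp.diff(f, y)], (x, y)))  # {x: 0, y: 0}
H = sp.hessian(f, (x, y))                                # [[2, 0], [0, -2]]
print(H.is_positive_definite, H.is_negative_definite)    # False False
```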
Exercise
Find the extreme points of the function:

f(x₁, x₂) = x₁³ + x₂³ + 2x₁² + 4x₂² + 6
Classical Optimization Techniques
Multivariable Optimization
Multivariable optimization with equality constraints

min_x f(x)
s.t. g_j(x) = 0,  j ∈ J = {1, 2, …, m}

Solution methods:
• Direct substitution
• The method of constrained variation
• The method of Lagrange multipliers (see the sketch below)
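A minimal sketch of the Lagrange multiplier method (not from the slides; the objective and constraint are an assumed example):

```python
# Assumed example: min f = x**2 + y**2 subject to g = x + y - 1 = 0.
# Form the Lagrangian L = f + lambda*g and solve the stationarity system.
import sympy as sp

x, y, lam = sp.symbols('x y lambda')
f = x**2 + y**2
g = x + y - 1

L = f + lam * g
eqs = [sp.diff(L, v) for v in (x, y, lam)]   # dL/dx, dL/dy, dL/dlambda
print(sp.solve(eqs, (x, y, lam), dict=True))
# [{x: 1/2, y: 1/2, lambda: -1}] -> constrained minimum at (1/2, 1/2)
```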
Classical Optimization Techniques
Multivariable Optimization
Multivariable optimization with inequality constraints

min_x f(x)
s.t. g_j(x) ≤ 0,  j ∈ J = {1, 2, …, m}

• Karush-Kuhn-Tucker (KKT) conditions (see the sketch below)
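A numerical sketch (not from the slides; the problem is an assumed example) of solving an inequality-constrained problem whose solution satisfies the KKT conditions:

```python
# Assumed example: min x**2 + y**2 subject to g(x, y) = 1 - x - y <= 0.
# SciPy's 'ineq' convention requires c(v) >= 0, so we pass -g = x + y - 1.
import numpy as np
from scipy.optimize import minimize

f = lambda v: v[0]**2 + v[1]**2
cons = [{'type': 'ineq', 'fun': lambda v: v[0] + v[1] - 1}]

res = minimize(f, x0=np.array([0.0, 0.0]), constraints=cons)
print(res.x)   # approx. [0.5, 0.5]; the constraint is active at the optimum
```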