
[APM_4OPT1_TA]

5. Duality in convex optimization


[Section 6.2 / Lecture notes]

Sorin-Mihai Grad

[email protected]
10/12/2024
Recap

▶ applications modeled as optimization problems

▶ main classes of optimization problems

▶ optimality conditions for differentiable optimization problems

▶ SQP algorithm for solving constrained differentiable optimization problems

▶ conjugate functions

Sorin-Mihai Grad APM_4OPT1_TA/5: Duality in convex optimization, 10/12/2024 2 / 10


Duality in convex optimization

Let F : R^n → R be a proper function, and consider the optimization problem

(PO)   inf_{x ∈ R^n} F(x).

Problem (PO) covers both constrained and unconstrained continuous optimization problems:

▶ for f, g : R^n → R, minimizing their sum,

(PS)   inf_{x ∈ R^n} {f(x) + g(x)},

is a special case of (PO) for F = f + g;

▶ for f : R^n → R and h : R^n → R^p, the constrained problem

(PC)   inf_{x ∈ R^n, h(x) ≦ 0_p} f(x)

is a special case of (PO) for F = f + δ_A, where δ_A is the indicator function of the feasible set A = {x ∈ R^n : h(x) ≦ 0_p}.
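The reduction of (PC) to the unconstrained form (PO) via F = f + δ_A can be sketched numerically. The following is a minimal illustration with an assumed one-dimensional objective f(x) = (x − 2)² and constraint function h(x) = x − 1; neither comes from the lecture:

```python
import math

# Encode (PC) as (PO) with F = f + delta_A, where delta_A is the
# indicator function of A = {x : h(x) <= 0} (assumed example).

def f(x):
    return (x - 2.0) ** 2       # example objective

def h(x):
    return x - 1.0              # example constraint h(x) <= 0, i.e. x <= 1

def indicator_A(x):
    return 0.0 if h(x) <= 0.0 else math.inf

def F(x):
    return f(x) + indicator_A(x)   # F = f + delta_A

# Minimizing F over a grid recovers the constrained minimizer x = 1
grid = [i / 1000.0 for i in range(-3000, 3001)]
x_best = min(grid, key=F)
print(x_best, F(x_best))        # 1.0 and 1.0
```

Points violating the constraint get the value +∞, so any minimization of F automatically stays inside A.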
Duality in convex optimization

Let F : R^n → R be proper, and consider again the optimization problem

(PO)   inf_{x ∈ R^n} F(x).

Definition 37. A perturbation function of F is any Φ : R^n × R^m → R fulfilling Φ(x, 0_m) = F(x) for all x ∈ R^n.

Remark 38. Given a perturbation function (of F) Φ : R^n × R^m → R, problem (PO) can be equivalently written as

(PG)   inf_{x ∈ R^n} Φ(x, 0_m),

in the sense that v(PO) = v(PG) and x̄ ∈ R^n is an optimal solution to (PO) if and only if it is one to (PG).
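As a toy numeric illustration of Definition 37 and Remark 38, one can check that a candidate Φ really perturbs F and that v(PO) = v(PG). The functions below are assumptions for illustration, not examples from the lecture:

```python
# Assumed example: F(x) = |x - 1| with perturbation Phi(x, w) = |x + w - 1|,
# so Phi(x, 0) = F(x) for every x (Definition 37).

def F(x):
    return abs(x - 1.0)

def Phi(x, w):
    return abs(x + w - 1.0)

xs = [i / 100.0 for i in range(-300, 301)]

# Definition 37: Phi(., 0) coincides with F
assert all(Phi(x, 0.0) == F(x) for x in xs)

# Remark 38: v(PO) = v(PG), with the common minimizer x = 1
v_PO = min(F(x) for x in xs)
v_PG = min(Phi(x, 0.0) for x in xs)
print(v_PO, v_PG)    # 0.0 and 0.0
```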

Definition 39. The conjugate dual problem to (PG) is

(DG)   sup_{u ∈ R^m} {−Φ*(0_n, u)}.



Duality in convex optimization

Proposition 40. [weak duality] It always holds that

v(DG) ≤ v(PO) = v(PG).

Proof. Blackboard; see also [Proposition 101 / Lecture notes].
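Weak duality can be sanity-checked numerically on an assumed example (not from the lecture): take min x² subject to 1 − x ≤ 0, whose primal value is 1 at x = 1, with the Lagrange-type perturbation Φ(x, z) = x² if 1 − x ≤ z and +∞ otherwise, and approximate −Φ*(0, u) on a grid:

```python
import math

# Assumed example: (PC)  min x^2  s.t. 1 - x <= 0, so v(PO) = 1 at x = 1.
# Perturbation: Phi(x, z) = x^2 if 1 - x <= z, +inf otherwise.

def Phi(x, z):
    return x * x if 1.0 - x <= z else math.inf

xs = [i / 20.0 for i in range(-100, 101)]   # grid on [-5, 5]
zs = xs

# v(PO) = v(PG): minimize Phi(., 0) over the grid
v_primal = min(Phi(x, 0.0) for x in xs)

def dual_value(u):
    # -Phi*(0, u), with the sup approximated over the finite grid
    return -max(u * z - Phi(x, z)
                for x in xs for z in zs if Phi(x, z) < math.inf)

# Weak duality: every dual value stays below the primal value
for u in [-4.0, -2.0, -1.0, 0.0]:
    assert dual_value(u) <= v_primal + 1e-9

print(v_primal, dual_value(-2.0))   # 1.0 and 1.0: the bound is tight at u = -2
```

That the bound closes at u = −2 previews strong duality, which Theorem 41 below establishes under a constraint qualification.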

Theorem 41. [strong duality] If Φ is proper and convex, and the constraint qualification

(RC)   ∃ x̃ ∈ R^n : (x̃, 0_m) ∈ dom Φ and Φ is continuous at (x̃, 0_m)

is fulfilled, then v(PO) = v(PG) = v(DG) and (DG) has optimal solutions.

Proof. Skipped; see [Theorem 102 / Lecture notes].

Remark 42. A linear optimization problem and its dual problem both have optimal solutions if and only if both are feasible. In this case, strong duality holds for them without requiring any constraint qualification.
Duality in convex optimization

Definition 43. The Lagrangian function associated with the primal-dual pair (PG)–(DG) is

L_Φ : R^n × R^m → R,   L_Φ(x, u) = inf_{w ∈ R^m} {Φ(x, w) − w^⊤u}.
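One can check Definition 43 numerically on an assumed example (not from the lecture): with the Lagrange-type perturbation Φ_L(x, z) = x² if 1 − x ≤ z and +∞ otherwise, the infimum over the perturbation variable recovers the classical Lagrangian x² + η(1 − x), where η = −u ≥ 0 (a sign convention assumed here for the dual variable):

```python
import math

# Assumed example: Phi_L(x, z) = x^2 if 1 - x <= z else +inf.
# For u <= 0, L_Phi(x, u) = inf_z {Phi_L(x, z) - z*u} = x^2 + (-u)*(1 - x).

def Phi_L(x, z):
    return x * x if 1.0 - x <= z else math.inf

zs = [i / 100.0 for i in range(-500, 501)]   # grid on [-5, 5]

def L_Phi(x, u):
    # Definition 43, with the inf approximated over the grid
    return min(Phi_L(x, z) - z * u for z in zs)

# Compare against the classical Lagrangian with eta = -u >= 0
for x in [-1.0, 0.0, 0.5, 2.0]:
    for u in [-3.0, -1.0, 0.0]:
        classical = x * x + (-u) * (1.0 - x)
        assert abs(L_Phi(x, u) - classical) < 1e-9
```

For u ≤ 0 the infimum is attained at the smallest feasible perturbation z = 1 − x, which is exactly where the classical Lagrangian comes from.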

Definition 44. A pair (x̄, ū) ∈ R^n × R^m is a saddle point of L_Φ if for all x ∈ R^n and all u ∈ R^m one has

L_Φ(x̄, u) ≤ L_Φ(x̄, ū) ≤ L_Φ(x, ū).

Theorem 45. Let Φ be proper, convex, and lower semicontinuous, and let (x̄, ū) ∈ R^n × R^m. Then (x̄, ū) is a saddle point of L_Φ if and only if x̄ is an optimal solution to (PG), ū is an optimal solution to (DG), and v(PG) = v(DG).

Proof. Blackboard; see also [Theorem 107 / Lecture notes].
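The saddle-point inequalities of Definition 44 can be verified on an assumed example (not from the lecture): for min x² subject to 1 − x ≤ 0, the classical Lagrangian L(x, η) = x² + η(1 − x) with η ≥ 0 has the candidate saddle point (x̄, η̄) = (1, 2):

```python
# Assumed example: Lagrangian of min x^2 s.t. 1 - x <= 0,
# with candidate saddle point (x_bar, eta_bar) = (1, 2).

def L(x, eta):
    return x * x + eta * (1.0 - x)

x_bar, eta_bar = 1.0, 2.0
xs = [i / 20.0 for i in range(-100, 101)]   # x in [-5, 5]
etas = [i / 20.0 for i in range(0, 101)]    # eta in [0, 5]

# Saddle-point inequalities of Definition 44, checked on the grids
assert all(L(x_bar, eta) <= L(x_bar, eta_bar) + 1e-12 for eta in etas)
assert all(L(x_bar, eta_bar) <= L(x, eta_bar) + 1e-12 for x in xs)

# Consistently with Theorem 45, the saddle value is v(PG) = v(DG)
print(L(x_bar, eta_bar))    # 1.0
```

Since h(x̄) = 0, the Lagrangian is constant in η at x̄, so the left inequality holds with equality, while L(x, 2) = (x − 1)² + 1 ≥ 1 gives the right one.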



Duality in convex optimization

Corollary 46. [Fenchel duality] Consider the proper functions f, g : R^n → R. The conjugate dual problem to

(PS)   inf_{x ∈ R^n} {f(x) + g(x)},

known as its Fenchel dual problem, is

(DS)   sup_{z ∈ R^n} {−f*(z) − g*(−z)}.

Proof. [hint] Use the perturbation function Φ_F : R^n × R^n → R, Φ_F(x, y) = f(x + y) + g(x).
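A worked instance of Corollary 46 under assumed data (not an example from the lecture): f(x) = x² and g(x) = (x − 1)², whose conjugates f*(z) = z²/4 and g*(y) = y + y²/4 are computed by hand:

```python
# Assumed Fenchel pair: f(x) = x^2, g(x) = (x - 1)^2, with
# f*(z) = z^2/4 and g*(y) = y + y^2/4 (hand-computed conjugates).

def primal(x):
    return x * x + (x - 1.0) ** 2

def dual(z):
    f_conj = z * z / 4.0
    g_conj_neg = -z + z * z / 4.0      # g*(-z)
    return -f_conj - g_conj_neg        # dual objective: z - z^2/2

xs = [i / 100.0 for i in range(-500, 501)]
v_P = min(primal(x) for x in xs)       # attained at x = 1/2
v_D = max(dual(z) for z in xs)         # attained at z = 1
print(v_P, v_D)                        # 0.5 and 0.5: strong duality holds here
```

Both f and g are proper, convex, and everywhere continuous, so the regularity condition of Theorem 41 holds and the primal and dual values coincide.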



Duality in convex optimization

Corollary 47. [Lagrange duality] Let f : R^n → R be a proper function and h : R^n → R^p a vector function. The conjugate dual problem to

(PC)   inf_{x ∈ R^n, h(x) ≦ 0_p} f(x),

known as its Lagrange dual problem, is

(DCL)   sup_{η ∈ R^p_+} inf_{x ∈ R^n} {f(x) + η^⊤h(x)}.

Proof. [hint] Use the perturbation function Φ_L : R^n × R^p → R,

Φ_L(x, z) = f(x) if h(x) − z ≦ 0_p, and Φ_L(x, z) = +∞ otherwise.
Remark 48. All duality statements presented above can be
particularized for both Fenchel and Lagrange duality by employing
the perturbation functions that deliver the dual problems obtained
in Corollary 46 and Corollary 47, respectively.
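As a concrete numeric evaluation of the Lagrange dual (DCL) from Corollary 47, consider the assumed instance min x² subject to 1 − x ≤ 0 (not from the lecture), whose primal value is 1 at x = 1:

```python
# Assumed instance of Corollary 47: (PC)  min x^2  s.t. 1 - x <= 0.
# Its Lagrange dual is (DCL)  sup_{eta >= 0} inf_x { x^2 + eta*(1 - x) }.

xs = [i / 20.0 for i in range(-200, 201)]   # x grid on [-10, 10]
etas = [i / 20.0 for i in range(0, 201)]    # eta grid on [0, 10]

def inner(eta):
    # inf_x of the Lagrangian, approximated on the grid
    return min(x * x + eta * (1.0 - x) for x in xs)

v_DCL = max(inner(eta) for eta in etas)
print(v_DCL)    # 1.0, attained at eta = 2: here v(DCL) = v(PC)
```

For fixed η the inner infimum is attained at x = η/2, giving the concave dual objective η − η²/4, whose maximum over η ≥ 0 matches the primal value.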
Duality in convex optimization

Duality theory can be employed to obtain

▶ a lower bound for the optimal value of the initial problem (via weak duality)
▶ characterizations of the optimal solutions to the initial problem (via optimality conditions)
▶ primal-dual algorithms for solving (convex) optimization problems
▶ dual smoothing algorithms for solving (convex) optimization problems
▶ saddle points of bifunctions
▶ dual characterizations of various formulae and functions (e.g. risk measures in financial mathematics), alternative statements, saddle-point properties, etc.



Next

▶ subgradients and subdifferentials

▶ optimality conditions for convex optimization problems

▶ exam: 07/01/2025

