Chapter 4: Constrained Convex Optimization

FUNDAMENTALS OF OPTIMIZATION
Constrained convex optimization
CONTENT

• Lagrange dual function
• Lagrange dual problem
• KKT conditions
• Lagrange relaxation for integer programming
Lagrange dual function

• Optimization problem in standard form

  (P)  minimize f(x)
       s.t. gi(x) ≤ 0, i = 1, 2, …, m
            hi(x) = 0, i = 1, 2, …, p
            x ∈ X ⊆ Rn

  with x ∈ Rn, and assume the domain D = (∩_{i=1}^m dom gi) ∩ (∩_{i=1}^p dom hi) is not empty.
• Denote by f* the optimal value of (P)
• If f and gi (i = 1, 2, …, m) are convex functions and hi (i = 1, …, p) are linear, then (P) is a convex program
Lagrange dual function

• Define the Lagrangian function L: Rn × Rm × Rp → R

  L(x, λ, μ) = f(x) + Σ_{i=1}^m λi gi(x) + Σ_{i=1}^p μi hi(x)

• Lagrange dual function (or dual function)

  q(λ, μ) = inf_{x ∈ D} L(x, λ, μ)

• Lagrange dual problem

  (D)  maximize q(λ, μ)
       s.t. λ ≥ 0
Lagrange dual function

• Weak duality theorem If x* is an optimal solution to the primal problem (P) and (λ*, μ*) is an
  optimal solution to the dual problem (D), then f(x*) ≥ q(λ*, μ*)
• Corollary If there exist a feasible x* and (λ*, μ*) with λ* ≥ 0 such that f(x*) = q(λ*, μ*), then
  x* and (λ*, μ*) are respectively optimal solutions to the primal and dual problems
KKT Conditions

• Theorem (Fritz John necessary conditions) Let x* be a feasible solution of (P). If x* is a
  local minimum of (P), then there exists (u, λ, μ) such that:
  • u ∇f(x*) + Σ_{i=1}^m λi ∇gi(x*) + Σ_{i=1}^p μi ∇hi(x*) = 0
  • u ≥ 0, λ ≥ 0, (u, λ, μ) ≠ 0
  • λi gi(x*) = 0, i = 1, …, m
KKT Conditions

• Theorem (Karush-Kuhn-Tucker (KKT) necessary conditions) Let x* be a feasible solution
  of (P) and let I = {i : gi(x*) = 0}. Further, suppose that the gradients ∇hi(x*) for i = 1, …, p and
  ∇gi(x*) for i ∈ I are linearly independent. If x* is a local minimum of (P), then there exists (λ, μ) such that:
  • ∇f(x*) + Σ_{i=1}^m λi ∇gi(x*) + Σ_{i=1}^p μi ∇hi(x*) = 0
  • λ ≥ 0
  • λi gi(x*) = 0, i = 1, …, m
KKT Conditions

• Theorem (KKT sufficient conditions) Let x* be a feasible solution of (P), where (P) is a convex
  program (f and gi are convex functions, hi are linear functions). If there exists (λ, μ) such that:
  • ∇f(x*) + Σ_{i=1}^m λi ∇gi(x*) + Σ_{i=1}^p μi ∇hi(x*) = 0
  • λ ≥ 0
  • λi gi(x*) = 0, i = 1, …, m
  then x* is a global optimal solution of (P)
Example

  minimize f(x, y) = 2x − y
  s.t. g1(x, y) = x² + y² − 2 ≤ 0
       g2(x, y) = x − y − 1 ≤ 0

• ∇f(x, y) = [2  −1]T, ∇g1(x, y) = [2x  2y]T, ∇g2(x, y) = [1  −1]T
• f and g2 are linear, so they are convex
• ∇²g1(x, y) = [2 0; 0 2], which is positive definite, so g1 is convex
• Exercise Apply the KKT conditions to solve the problem (a numerical cross-check is sketched below)
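
As an illustration only (not the analytic derivation the exercise asks for), the following sketch solves the example numerically with SciPy's SLSQP solver and recovers the multiplier implied by the active constraint; SciPy is assumed to be available.

```python
# Numerical cross-check of the example (a sketch, not the analytic KKT solution):
#   minimize 2x - y   s.t.   x^2 + y^2 - 2 <= 0,   x - y - 1 <= 0
import numpy as np
from scipy.optimize import minimize

f = lambda v: 2 * v[0] - v[1]
constraints = [
    # SciPy expects inequality constraints in the form c(v) >= 0
    {"type": "ineq", "fun": lambda v: 2 - v[0] ** 2 - v[1] ** 2},  # -g1(v) >= 0
    {"type": "ineq", "fun": lambda v: 1 - v[0] + v[1]},            # -g2(v) >= 0
]

res = minimize(f, x0=np.zeros(2), method="SLSQP", constraints=constraints)
x, y = res.x
print("x* ≈", res.x, "  f(x*) ≈", res.fun)   # roughly (-1.265, 0.632), f* ≈ -sqrt(10)

# Only g1 is active at x*, so stationarity reads  ∇f + λ1 ∇g1 = 0,
# i.e.  λ1 = -2 / (2x) = 1 / (2y); both expressions should agree and be nonnegative.
print("λ1 ≈", -2 / (2 * x), "≈", 1 / (2 * y))
```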


Lagrange relaxation for Integer Programming

  ZIP = minimize f(x) = cTx
        s.t. Ax ≥ b
             Dx ≥ d
             x integer

• Let X = {x integer | Dx ≥ d}
• Assume optimizing over X can be done easily, but adding the constraints Ax ≥ b makes the
  problem too difficult
Lagrange relaxation for Integer Programming

  Z(λ) = min_x cTx + λT(b − Ax)
         s.t. Dx ≥ d
              x integer

• For a fixed λ ≥ 0, Z(λ) is assumed to be easy to compute, and it gives a lower bound on ZIP
• Important question: compute the best lower bound
  ZD = max_{λ ≥ 0} Z(λ)
Lagrange relaxation for Integer Programming

• Exercise Given a fixed value λ = λ(k), suppose x(k) is an optimal solution to the problem

  Z(λ) = min_x cTx + λT(b − Ax)
         s.t. Dx ≥ d
              x integer

  Explain why s(k) = b − Ax(k) is a subgradient of the function Z at λ(k).
Lagrange relaxation for Integer Programming

• Subgradient method for computing ZD (a Python sketch follows below)

  Choose a starting point λ(0) (e.g., λ(0) = 0); k = 0
  while (STOP condition not reached) {
      let x(k) be an optimal solution of the problem defining Z(λ(k))
      compute the subgradient s(k) = b − Ax(k) of Z at λ(k)
      if s(k) = 0 then BREAK
      λ(k+1) = max{0, λ(k) + θ(k) s(k)}    /* θ(k) denotes the step size; the max is taken componentwise */
      k = k + 1
  }
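
Below is a minimal Python sketch of this loop, run on the same hypothetical toy instance used above; the diminishing step size θ(k) = 1/(k+1) is one common choice and is not prescribed by the slides.

```python
# Subgradient ascent for Z_D on the toy instance above (a sketch, hypothetical data).
import itertools
import numpy as np

c = np.array([-4.0, -3.0])
A = np.array([[-2.0, -3.0]])
b = np.array([-4.0])
X = [np.array(x, dtype=float) for x in itertools.product([0, 1], repeat=2)]

def solve_relaxation(lam):
    # returns a minimizer x(k) of cTx + lamT (b - Ax) over X (enumeration, since X is tiny)
    return min(X, key=lambda x: c @ x + lam @ (b - A @ x))

lam = np.zeros(1)                            # lambda(0) = 0
best = -np.inf
for k in range(100):
    x_k = solve_relaxation(lam)
    z_k = c @ x_k + lam @ (b - A @ x_k)      # Z(lambda(k))
    best = max(best, z_k)
    s = b - A @ x_k                          # subgradient s(k) = b - A x(k)
    if np.all(s == 0):
        break                                # lambda(k) maximizes Z
    lam = np.maximum(0.0, lam + s / (k + 1)) # projected step with theta(k) = 1/(k+1)

print("best lower bound found:", best)       # reaches Z_D = -6 for this instance (Z_IP = -4)
```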
Thank you for your attention!
