ConvexSpring25_Week4

The document discusses various optimization problems, including the equivalence of certain formulations, the preservation of convexity under variable changes, and the implications of active and inactive constraints. It covers linear programming (LP) in standard forms, the fundamental theorem of linear programming, and the relationship between primal and dual problems. Additionally, it introduces the Lagrangian function and its dual, along with examples illustrating these concepts.


Slack Variables and Epigraph Form

The following two optimization problems are equivalent:


min_x  ∑_{i=1}^{k} fᵢ(x)
s.t.   x ∈ X,                                    (A)

min_{x,t}  ∑_{i=1}^{k} tᵢ
s.t.   x ∈ X,                                    (B)
       fᵢ(x) ≤ tᵢ,  i ∈ {1, 2, . . . , k}.

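As a concrete instance of the equivalence (with fᵢ(x) = |x − aᵢ|, an illustrative choice not taken from the slides), problem (B) becomes a linear program even though each fᵢ is nonsmooth. A minimal sketch using scipy.optimize.linprog:

```python
import numpy as np
from scipy.optimize import linprog

# Epigraph reformulation of  min_x sum_i |x - a_i|.  Variables are
# z = (x, t_1, ..., t_k); we minimize sum_i t_i subject to the pair of
# linear constraints  x - a_i <= t_i  and  -(x - a_i) <= t_i.
a = np.array([1.0, 2.0, 10.0])
k = len(a)

c = np.concatenate(([0.0], np.ones(k)))          # objective: sum of t_i
A_ub = np.vstack([
    np.hstack([np.ones((k, 1)), -np.eye(k)]),    #  x - t_i <= a_i
    np.hstack([-np.ones((k, 1)), -np.eye(k)]),   # -x - t_i <= -a_i
])
b_ub = np.concatenate([a, -a])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * (k + 1))
print(res.fun)   # optimal value 9, attained at x = 2 (the median of a)
```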
Change of Variables

Consider an optimization problem:

min_x  f₀(x)
s.t.   fᵢ(x) ≤ 0,  i ∈ {1, 2, . . . , k}
       hᵢ(x) = 0,  i ∈ {1, 2, . . . , p}.

Let F : X → Y be an invertible mapping with y = F(x). Then we have the following equivalent optimization problem with respect to y:

min_y  f₀(F⁻¹(y))
s.t.   fᵢ(F⁻¹(y)) ≤ 0,  i ∈ {1, 2, . . . , k}
       hᵢ(F⁻¹(y)) = 0,  i ∈ {1, 2, . . . , p}.

When is convexity preserved?

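One sufficient condition (a standard fact, stated here as a hedged answer to the question above): if F is an invertible affine map, y = Mx + v, then each composed function fᵢ(F⁻¹(y)) is convex whenever fᵢ is, so the transformed problem remains convex. A numerical sanity check of this claim:

```python
import numpy as np

# If F(x) = M x + v is invertible affine and f is convex, then
# g(y) = f(F^{-1}(y)) is convex.  Spot-check midpoint convexity of g
# for f(x) = ||x||_1 under a random invertible affine change of variables.
rng = np.random.default_rng(1)
M = rng.standard_normal((3, 3)) + 3 * np.eye(3)   # well-conditioned, invertible
v = rng.standard_normal(3)
f = lambda x: np.abs(x).sum()
g = lambda y: f(np.linalg.solve(M, y - v))        # f(F^{-1}(y))

ok = True
for _ in range(1000):
    y1, y2 = rng.standard_normal(3), rng.standard_normal(3)
    ok &= g((y1 + y2) / 2) <= (g(y1) + g(y2)) / 2 + 1e-9
print(ok)   # True: midpoint convexity holds at every sampled pair
```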
Substituting Equality with Inequality Constraints

Consider an optimization problem:

p* = min_{x∈X}  f(x)
     s.t.  b(x) = u.

If we replace the equality constraint with an inequality, we obtain:

g* = min_{x∈X}  f(x)
     s.t.  b(x) ≤ u.

Proposition. p* = g* under the following conditions: (i) f is non-increasing over X, (ii) b is non-decreasing over X, and (iii) the optimal values of both problems are attained.

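A minimal numerical illustration of the proposition, with hypothetical choices f(x) = −x (non-increasing) and b(x) = x (non-decreasing) on X = [0, 5]:

```python
import numpy as np

# f(x) = -x is non-increasing and b(x) = x is non-decreasing on X = [0, 5],
# so the proposition predicts p* = g* for the target value u = 3.
X = np.linspace(0.0, 5.0, 5001)
u = 3.0
f = -X
p_star = f[np.isclose(X, u)].min()     # b(x) = u  (equality constraint)
g_star = f[X <= u + 1e-9].min()        # b(x) <= u (relaxed constraint)
print(p_star, g_star)
```

Intuitively, relaxing b(x) = u to b(x) ≤ u only adds points with smaller b, and since f is non-increasing while b is non-decreasing, none of them can beat the equality-feasible minimizer.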
Elimination of Inactive Constraints

Consider an optimization problem:

min_x  f₀(x)
s.t.   fᵢ(x) ≤ 0,  i ∈ {1, 2, . . . , m},        (1)
       Ax = b.

Let x* be an optimal solution. We define the sets of active and inactive constraints as follows:

A(x*) := {i ∈ {1, 2, . . . , m} : fᵢ(x*) = 0},
Ā(x*) := {i ∈ {1, 2, . . . , m} : fᵢ(x*) < 0}.

The following proposition says that when the problem is convex, we can remove the inactive constraints without changing the optimal solution.

Proposition. Let x* be an optimal solution of (1). Then x* is also an optimal solution of

min_x  f₀(x)
s.t.   fᵢ(x) ≤ 0,  i ∈ A(x*),                    (2)
       Ax = b.

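A small numerical check of the proposition on an LP (the data below are my own illustrative choices): we solve, detect the strictly slack rows, drop them, and re-solve.

```python
import numpy as np
from scipy.optimize import linprog

c = np.array([1.0, 2.0])
A = np.array([[-1.0, -1.0],   # x1 + x2 >= 1
              [ 1.0,  0.0]])  # x1 <= 10  (will be inactive at the optimum)
b = np.array([-1.0, 10.0])
res = linprog(c, A_ub=A, b_ub=b)               # default bounds give x >= 0

inactive = A @ res.x < b - 1e-8                # strictly slack rows
res2 = linprog(c, A_ub=A[~inactive], b_ub=b[~inactive])
print(res.x, res2.x)   # same optimizer (1, 0) both times
```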
Linear Programming (LP)

LP is the class of optimization problems in which the cost function is linear in the decision variable and the feasible set is a polyhedron.

A polyhedron is the intersection of finitely many half-spaces:

H = {x ∈ ℝⁿ | Ax ≤ b} = {x ∈ ℝⁿ | aᵢᵀx ≤ bᵢ, i = 1, 2, . . . , m}.

Any polyhedron can also be represented as the Minkowski sum of the convex hull of a finite number of extreme points {v₁, v₂, . . . , v_k} and the cone generated by a finite number of extreme rays {r₁, r₂, . . . , r_p}, i.e.,

x ∈ H  ⟺  x = ∑_{i=1}^{k} λᵢ vᵢ + ∑_{j=1}^{p} μⱼ rⱼ,   λᵢ ≥ 0,  μⱼ ≥ 0,  ∑_{i=1}^{k} λᵢ = 1.

If a polyhedron is bounded, it is called a polytope, and its set of extreme rays is empty.

Example: the probability simplex {x ∈ ℝⁿ | x ≥ 0, ∑ᵢ xᵢ = 1}, whose extreme points are the standard basis vectors e₁, . . . , e_n.

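The extreme-point representation gives a practical membership test (sketched here for a polytope, so there are no rays, with a hypothetical triangle as the example): x ∈ conv{v₁, . . . , v_k} iff the feasibility LP below has a solution.

```python
import numpy as np
from scipy.optimize import linprog

# Membership in conv{v1, ..., vk}: find lam >= 0 with sum(lam) = 1 and
# V @ lam = x.  Feasibility of this LP is exactly membership.
V = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]]).T   # triangle, shape (2, 3)

def in_polytope(x):
    k = V.shape[1]
    A_eq = np.vstack([V, np.ones((1, k))])     # V @ lam = x, sum(lam) = 1
    b_eq = np.concatenate([x, [1.0]])
    res = linprog(np.zeros(k), A_eq=A_eq, b_eq=b_eq)   # lam >= 0 by default
    return res.status == 0                     # 0 = solved, 2 = infeasible

print(in_polytope(np.array([0.2, 0.2])))   # True  (inside the triangle)
print(in_polytope(np.array([0.8, 0.8])))   # False (outside)
```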
Linear Programming in Standard Forms

LP in standard equality form:

min_{x∈ℝⁿ}  cᵀx
s.t.  Ax = b,
      x ≥ 0.

LP in standard inequality form:

min_{x∈ℝⁿ}  cᵀx
s.t.  Ax ≤ b.

We can easily go from one form to the other. Any LP can be represented in each of the above standard forms.

Example:

min_{x∈ℝ²}  3x₁ + 1.5x₂
s.t.  1 ≤ x₁ ≤ 2,
      0 ≤ x₂ ≤ 3.

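The example can be solved in both forms; a sketch with scipy.optimize.linprog, where the substitution x₁ = 1 + u and the slack variables s₁, s₂ are my own choices for the conversion to equality form:

```python
import numpy as np
from scipy.optimize import linprog

# Inequality form: minimize 3*x1 + 1.5*x2 over the box directly.
res_ineq = linprog([3.0, 1.5], bounds=[(1, 2), (0, 3)])

# Equality form: substitute x1 = 1 + u (so u >= 0) and add slacks,
# u + s1 = 1 (encodes u <= 1) and x2 + s2 = 3 (encodes x2 <= 3).
# The objective becomes 3*u + 1.5*x2 plus a constant offset of 3.
c = np.array([3.0, 1.5, 0.0, 0.0])          # variables: (u, x2, s1, s2)
A_eq = np.array([[1.0, 0.0, 1.0, 0.0],
                 [0.0, 1.0, 0.0, 1.0]])
b_eq = np.array([1.0, 3.0])
res_eq = linprog(c, A_eq=A_eq, b_eq=b_eq)   # default bounds give vars >= 0

print(res_ineq.fun, res_eq.fun + 3.0)       # equal optimal values (both 3)
```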
Network Flows

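Network-flow problems are LPs: edge flows are the decision variables, costs are linear, and flow conservation at each node gives the equality constraints. A minimal sketch on a hypothetical 3-node graph (s, a, t), sending 2 units from s to t:

```python
import numpy as np
from scipy.optimize import linprog

# Edges: s->a (cost 1), s->t (cost 3), a->t (cost 1); all capacities 2.
# Edge flow variables: f = (f_sa, f_st, f_at).
c = np.array([1.0, 3.0, 1.0])
A_eq = np.array([[1.0, 1.0,  0.0],   # node s: f_sa + f_st = 2 (supply)
                 [1.0, 0.0, -1.0]])  # node a: f_sa - f_at = 0 (conservation)
b_eq = np.array([2.0, 0.0])
res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, 2)] * 3)
print(res.fun)   # minimum cost 4: both units are routed s -> a -> t
```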
Chebyshev Center of a Polyhedron

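The Chebyshev center (the center of the largest Euclidean ball inscribed in the polyhedron {x : Ax ≤ b}) is itself an LP in the variables (x, r), since each constraint aᵢᵀx + r‖aᵢ‖₂ ≤ bᵢ is linear. A sketch on the unit square (my own example):

```python
import numpy as np
from scipy.optimize import linprog

# Unit square [0, 1]^2 written as Ax <= b.
A = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])
b = np.array([1.0, 0.0, 1.0, 0.0])

norms = np.linalg.norm(A, axis=1)
c = np.array([0.0, 0.0, -1.0])                 # maximize the radius r
A_ub = np.hstack([A, norms[:, None]])          # a_i^T x + r*||a_i|| <= b_i
res = linprog(c, A_ub=A_ub, b_ub=b, bounds=[(None, None)] * 3)
print(res.x)   # center (0.5, 0.5) with radius 0.5
```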
Fundamental Theorem of Linear Programming

Theorem 1
A linear programming problem is either infeasible, unbounded, or has an optimal solution.

The intersection of a polytope with a supporting hyperplane is called a face of the polytope; vertices, edges, and facets are special cases of faces.
The set of optimal solutions is a face of the polytope, and the optimal solution is unique exactly when that face is a single vertex.

Example:

min_{x∈ℝ²}  3x₁ + 1.5x₂
s.t.  1 ≤ x₁ ≤ 2,
      0 ≤ x₂ ≤ 3.

Obtaining a lower bound on the cost function

Consider an LP in the standard equality form:

min_{x∈ℝⁿ}  cᵀx
s.t.  Ax = b,                                    (P)
      x ≥ 0.

For any y ∈ ℝᵐ with Aᵀy ≤ c and any feasible x, we have cᵀx ≥ (Aᵀy)ᵀx = yᵀ(Ax) = bᵀy, where the inequality uses x ≥ 0. Hence bᵀy is a lower bound on the optimal value of (P).

Finding the best possible lower bound

This happens to be another linear program:

max_{y∈ℝᵐ}  bᵀy
s.t.  Aᵀy ≤ c.                                   (D)

The above problem is referred to as the dual of problem (P).
An LP stated as above is said to be in standard inequality form.
We can show that the dual of (D) is (P).

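A quick numerical check of the pair (P)–(D) on a toy instance (data of my own choosing):

```python
import numpy as np
from scipy.optimize import linprog

A = np.array([[1.0, 1.0, 1.0]])       # one equality constraint: x1+x2+x3 = 6
b = np.array([6.0])
c = np.array([1.0, 2.0, 3.0])

primal = linprog(c, A_eq=A, b_eq=b)   # (P): x >= 0 by default
dual = linprog(-b, A_ub=A.T, b_ub=c,  # (D): maximize b^T y s.t. A^T y <= c
               bounds=[(None, None)])
print(primal.fun, -dual.fun)          # equal optimal values (both 6)
```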
Properties

Theorem 2
For the primal-dual pair of optimization problems stated above, the following are true.
1. If (P) is infeasible and (D) is feasible, then (D) is unbounded.
2. If (P) is unbounded, then (D) is infeasible.
3. Weak Duality: For any feasible solutions x̄ and ȳ of the respective problems, we always have cᵀx̄ ≥ bᵀȳ.
4. Strong Duality: Suppose both (P) and (D) are feasible. Then, for the respective optimal solutions x* and y*, we must have cᵀx* = bᵀy*.

HW: Give an example of (P) and (D) where both are infeasible.

Lemma 1 (Farkas' Lemma). Let A ∈ ℝ^{m×n} and b ∈ ℝᵐ. Then, exactly one of the following sets must be empty:
1. {x ∈ ℝⁿ | Ax = b, x ≥ 0}
2. {y ∈ ℝᵐ | Aᵀy ≤ 0, bᵀy > 0}.

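Farkas' lemma is constructive in practice: when the first set is empty, a certificate y in the second set can be found by LP. A sketch on a trivially infeasible system (the box bound on y is my own device to keep the search LP bounded):

```python
import numpy as np
from scipy.optimize import linprog

A = np.array([[1.0, 1.0]])
b = np.array([-1.0])     # x1 + x2 = -1 with x >= 0 is infeasible

# Search for a certificate: maximize b^T y subject to A^T y <= 0,
# with |y_i| <= 1 only so that the LP stays bounded.
res = linprog(-b, A_ub=A.T, b_ub=np.zeros(2), bounds=[(-1, 1)])
y = res.x
print(A.T @ y, b @ y)    # A^T y <= 0 componentwise and b^T y > 0
```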
Proof

Proof

Lagrangian Function

Consider the following optimization problem in standard form:

min_{x∈ℝⁿ}  f(x)
s.t.  gᵢ(x) ≤ 0,  i ∈ [m] := {1, 2, . . . , m},
      hⱼ(x) = 0,  j ∈ [p].

The Lagrangian function L : ℝⁿ × ℝᵐ × ℝᵖ → ℝ is defined as

L(x, λ, μ) := f(x) + ∑_{i∈[m]} λᵢ gᵢ(x) + ∑_{j∈[p]} μⱼ hⱼ(x),

where
  λᵢ is the Lagrange multiplier associated with gᵢ(x) ≤ 0,
  μⱼ is the Lagrange multiplier associated with hⱼ(x) = 0.

Lower Bound Property:

Lemma 2. If x̄ is feasible and λ̄ ≥ 0, then f(x̄) ≥ L(x̄, λ̄, μ) for every μ.

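Lemma 2 in one dimension (an illustrative instance of my own): with f(x) = x², a single constraint g(x) = 1 − x ≤ 0, and the feasible point x̄ = 2:

```python
# Lower-bound property: for feasible x and lam >= 0,
# L(x, lam) = f(x) + lam * g(x) <= f(x), because g(x) <= 0 at feasible x.
f = lambda x: x**2
g = lambda x: 1.0 - x          # constraint g(x) <= 0, i.e. x >= 1
x_bar = 2.0                    # feasible: g(2) = -1 <= 0
for lam in [0.0, 0.5, 3.0]:
    L = f(x_bar) + lam * g(x_bar)
    print(L <= f(x_bar))       # True for every lam >= 0
```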
Lagrangian Dual

From the previous lemma, we know that if x̄ is feasible and λ̄ ≥ 0, then

f(x̄) ≥ L(x̄, λ̄, μ) ≥ inf_x L(x, λ̄, μ) =: d(λ̄, μ),

where

d(λ, μ) := inf_x [ f(x) + ∑_{i∈[m]} λᵢ gᵢ(x) + ∑_{j∈[p]} μⱼ hⱼ(x) ].

Evaluating d(λ, μ) requires solving an unconstrained optimization problem.
Given any λ ≥ 0 and any μ, d(λ, μ) ≤ f*, where f* is the optimal value.
d(λ, μ) may take the value −∞ for some choices of λ and μ.
d(λ, μ) is concave in λ and μ.

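A concrete dual function, continuing the one-dimensional instance min x² s.t. 1 − x ≤ 0 (which has f* = 1 at x = 1): minimizing L(x, λ) = x² + λ(1 − x) over x gives x = λ/2, so d(λ) = λ − λ²/4, a concave function bounded above by f*:

```python
import numpy as np

# Dual function for  min x^2  s.t.  1 - x <= 0  (so f* = 1 at x = 1):
# L(x, lam) = x^2 + lam*(1 - x) is minimized at x = lam/2, giving
# d(lam) = lam - lam**2 / 4.
d = lambda lam: lam - lam**2 / 4.0
lams = np.linspace(0.0, 6.0, 61)
print(np.all(d(lams) <= 1.0 + 1e-12))   # True: weak duality d(lam) <= f*
print(d(2.0))                           # 1.0: the bound is tight at lam = 2
```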
Lagrangian Dual Optimization Problem

Let us compute the best lower bound on f*:

max_{λ∈ℝᵐ, μ∈ℝᵖ}  d(λ, μ)
s.t.  λ ≥ 0,
      (λ, μ) ∈ dom(d).

The above is a convex optimization problem, since d(λ, μ) is concave in λ and μ, irrespective of whether the original problem is convex.
Let the optimal value of the dual optimization problem be denoted d*.
Δ := f* − d* is called the duality gap.
Weak Duality: d* ≤ f* always holds (even for non-convex problems).
Strong Duality: d* = f* is guaranteed to hold for convex problems satisfying certain conditions, referred to as constraint qualification conditions.

Example 1: Lagrangian Dual of LP

min_{x∈ℝⁿ}  cᵀx
s.t.  Ax = b,  x ≥ 0.

Find L, d and dom(d).

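One way to carry out the computation (a sketch of the standard derivation, with multiplier λ ≥ 0 for −x ≤ 0 and μ for Ax − b = 0):

```latex
\begin{aligned}
L(x,\lambda,\mu) &= c^\top x - \lambda^\top x + \mu^\top (Ax - b)
                  = (c - \lambda + A^\top \mu)^\top x - b^\top \mu,\\
d(\lambda,\mu) &= \inf_x L(x,\lambda,\mu)
  = \begin{cases} -b^\top \mu & \text{if } c - \lambda + A^\top \mu = 0,\\
                  -\infty & \text{otherwise.} \end{cases}
\end{aligned}
```

Hence dom(d) corresponds to λ = c + Aᵀμ ≥ 0, and with y := −μ the dual max −bᵀμ s.t. c + Aᵀμ ≥ 0 becomes max bᵀy s.t. Aᵀy ≤ c, recovering (D).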
Example 2: Least Norm Solution

Least norm solution:


min_{x∈ℝⁿ}  ½ xᵀx
s.t.  Ax = b.

Find L and d.

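One route to the answer (a sketch of the standard computation, verified numerically on random data): stationarity of L in x gives x = −Aᵀμ, so d(μ) = −½ μᵀAAᵀμ − bᵀμ, and maximizing d gives AAᵀμ = −b, i.e. x* = Aᵀ(AAᵀ)⁻¹b.

```python
import numpy as np

# L(x, mu) = x^T x / 2 + mu^T (A x - b); grad_x = 0 gives x = -A^T mu,
# and maximizing d(mu) gives A A^T mu = -b, so x* = A^T (A A^T)^{-1} b.
rng = np.random.default_rng(0)
A = rng.standard_normal((2, 4))    # underdetermined: many feasible x
b = rng.standard_normal(2)

mu = -np.linalg.solve(A @ A.T, b)
x_star = -A.T @ mu
print(np.allclose(A @ x_star, b))  # x* is feasible

# Any other feasible point x* + v (v in the null space of A) is no shorter,
# since x* is orthogonal to the null space of A:
r = rng.standard_normal(4)
v = r - A.T @ np.linalg.solve(A @ A.T, A @ r)
print(np.linalg.norm(x_star) <= np.linalg.norm(x_star + v) + 1e-12)
```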
Example 3

min_{x∈ℝ}  x²
s.t.  x − 1 ≤ 0,  −x ≤ 0.

Find the optimal value of the above problem, derive the dual and determine
whether strong duality holds.

Example 4

min_{x∈ℝ²}  −x₁² − x₂²
s.t.  x₁² + x₂² − 1 ≤ 0.

Find the optimal value of the above problem, derive the dual and determine
whether strong duality holds.

