Lecture 1
Goal: In this lecture we discuss the general optimization formulation and Linear Programming.
We will also review some basic definitions.
1 References and Resources
The rest of the class will focus on the theory of convex optimization, and we will mostly follow this textbook:
• Stephen Boyd and Lieven Vandenberghe. Convex Optimization. Cambridge University Press,
2004. (Link to download: https://web.stanford.edu/~boyd/cvxbook/)
The general form of an optimization problem is
\begin{align}
\text{minimize} \quad & f_0(x) \nonumber \\
\text{subject to} \quad & f_i(x) \le 0 \quad \text{for } i = 1, \dots, m \tag{1} \\
& h_i(x) = 0 \quad \text{for } i = 1, \dots, p \nonumber
\end{align}
A few remarks:
• f0 is the objective function
• fi (x) ≤ 0 for i = 1, . . . , m are inequality constraints
• hi (x) = 0 for i = 1, . . . , p are equality constraints
• If a vector x̂ satisfies all the constraints it is called a feasible point
• The set of points x that satisfy all the equality and inequality constraints is called the feasible set
• A problem is called infeasible if there is no point x ∈ Rn that satisfies all the constraints,
i.e., the feasible set is the empty set
• x∗ is called an optimal solution if it is feasible and satisfies f0 (x∗ ) ≤ f0 (y) for all feasible y
• The set of all optimal solutions is called the optimal set
• The optimal objective value f0 (x∗ ) is the objective function value at an optimal solution
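To make these definitions concrete, here is a minimal numerical sketch of one instance of (1). The objective and constraint functions below are made up for illustration, and scipy.optimize.minimize is used only as a convenient solver; it is not a tool introduced in the lecture.

import numpy as np
from scipy.optimize import minimize

# Hypothetical instance of (1): all functions below are made-up examples.
f0 = lambda x: (x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2   # objective f0
f1 = lambda x: x[0] + x[1] - 2.0                        # inequality constraint: f1(x) <= 0
h1 = lambda x: x[0] - x[1]                              # equality constraint:   h1(x) = 0

# scipy encodes inequality constraints as fun(x) >= 0, hence the sign flip on f1.
constraints = [
    {"type": "ineq", "fun": lambda x: -f1(x)},
    {"type": "eq", "fun": h1},
]
res = minimize(f0, x0=np.zeros(2), method="SLSQP", constraints=constraints)
print("optimal solution x*:", res.x)              # a feasible point minimizing f0
print("optimal objective value f0(x*):", res.fun)

For this made-up instance the feasible set is the segment {x : x1 = x2, x1 + x2 ≤ 2}, so the solver should return a point near (1, 1) with optimal value about 1.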
4 Linear Programming
In Linear Programming (LP), as the name suggests, the objective and constraints are all linear
functions. To be more precise, the general form of an LP can be written as
\begin{align}
\text{minimize} \quad & \sum_{j=1}^{n} c_j x_j \nonumber \\
\text{subject to} \quad & \sum_{j=1}^{n} a_{ij} x_j \le b_i \quad \text{for } i = 1, \dots, m \tag{2} \\
& \sum_{j=1}^{n} d_{ij} x_j = e_i \quad \text{for } i = 1, \dots, p \nonumber
\end{align}
In this case we have n optimization variables $x_1, \dots, x_n$ that should be chosen such that the linear
objective function $\sum_{j=1}^{n} c_j x_j$ is minimized and the m inequality constraints and p equality constraints
are satisfied. Here, $c_j, a_{ij}, b_i, d_{ij}, e_i \in \mathbb{R}$ are given and considered as problem parameters.
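As a quick illustration of how an LP in the form (2) might be solved numerically, here is a small sketch using scipy.optimize.linprog. The data c, a_ij, b_i, d_ij, e_i below are made-up values and are not part of the lecture.

import numpy as np
from scipy.optimize import linprog

# Made-up problem data for an instance of (2) with n = 2, m = 2, p = 1.
c = np.array([1.0, 2.0])                   # objective coefficients c_j
A = np.array([[-1.0, 0.0],
              [0.0, -1.0]])                # inequality coefficients a_ij (here: -x_j <= 0)
b = np.array([0.0, 0.0])                   # right-hand sides b_i
D = np.array([[1.0, 1.0]])                 # equality coefficients d_ij
e = np.array([1.0])                        # right-hand sides e_i (here: x_1 + x_2 = 1)

# linprog minimizes c^T x subject to A_ub x <= b_ub and A_eq x = b_eq.
# Bounds are set to (None, None) so the only constraints are those of (2).
res = linprog(c, A_ub=A, b_ub=b, A_eq=D, b_eq=e, bounds=[(None, None)] * len(c))
print("optimal x:", res.x, " optimal value:", res.fun)   # expect x = (1, 0), value 1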
4.2 Example: Minimization of a Piecewise-Linear Function
A piecewise-linear (piecewise-affine) function $f : \mathbb{R}^n \to \mathbb{R}$ can be expressed as $f(x) = \max_{i=1,\dots,m} \left( a_i^\top x + b_i \right)$. The problem of minimizing $f$ over $x \in \mathbb{R}^n$ can then be written as
\begin{align}
\text{minimize} \quad & t \nonumber \\
\text{subject to} \quad & a_i^\top x + b_i \le t \quad \text{for } i = 1, \dots, m \tag{6}
\end{align}
with optimization variables $(x, t)$. To see the equivalence, note that for fixed $x$, the optimal choice of $t$ is $t = f(x)$. This is not an LP yet! To write
it as an LP we need to define the following vectors and matrix
\[
\hat{x} = \begin{bmatrix} x \\ t \end{bmatrix} \in \mathbb{R}^{n+1}, \qquad
\hat{c} = \begin{bmatrix} 0 \\ 1 \end{bmatrix} \in \mathbb{R}^{n+1}, \qquad
\hat{A} = \begin{bmatrix} a_1^\top & -1 \\ \vdots & \vdots \\ a_m^\top & -1 \end{bmatrix} \in \mathbb{R}^{m \times (n+1)}, \qquad
\hat{b} = \begin{bmatrix} -b_1 \\ \vdots \\ -b_m \end{bmatrix} \in \mathbb{R}^{m}
\]
Then problem (6) is equivalent to the LP
\begin{align}
\text{minimize} \quad & \hat{c}^\top \hat{x} \nonumber \\
\text{subject to} \quad & \hat{A} \hat{x} \le \hat{b} \tag{7}
\end{align}
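The reformulation above can be checked numerically. The sketch below builds the matrices of (7) for a made-up set of pieces (a_i, b_i) and solves the LP with scipy.optimize.linprog; the data are hypothetical and the solver is used only for illustration.

import numpy as np
from scipy.optimize import linprog

# Hypothetical pieces: f(x) = max_i (a_i^T x + b_i) with n = 2 and m = 3.
A = np.array([[1.0, 2.0],
              [-1.0, 0.5],
              [0.0, -1.0]])                 # rows are a_i^T
b = np.array([0.0, 1.0, -2.0])              # offsets b_i
m, n = A.shape

# Build the LP (7) in the variables x_hat = (x, t).
A_hat = np.hstack([A, -np.ones((m, 1))])    # a_i^T x - t <= -b_i
b_hat = -b
c_hat = np.zeros(n + 1)
c_hat[-1] = 1.0                             # minimize t

# x and t are free variables, so override linprog's default nonnegativity bounds.
res = linprog(c_hat, A_ub=A_hat, b_ub=b_hat, bounds=[(None, None)] * (n + 1))
x_opt, t_opt = res.x[:n], res.x[-1]
print("minimizer x:", x_opt)
print("optimal value t = f(x):", t_opt, "=", np.max(A @ x_opt + b))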
Indeed, the assignment problem (8) is not a linear program, since its last constraint, $x_{ij} \in \{0, 1\}$, is not a
linear function of the variables. Hence, we cannot solve it directly with algorithms that solve LPs. If we decide to consider all feasible
options then we need to compute the value of the objective function for all n! possible assignment
choices, which is indeed costly!
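To see how costly the brute-force approach is, the naive sketch below (with a made-up cost matrix) enumerates all n! assignments explicitly; already for moderate n this becomes hopeless.

import itertools
import math
import numpy as np

rng = np.random.default_rng(0)
n = 4
C = rng.random((n, n))          # made-up cost matrix c_ij

# Brute force over all n! assignments: permutation p assigns worker i to task p[i].
best_cost, best_perm = math.inf, None
for p in itertools.permutations(range(n)):
    cost = sum(C[i, p[i]] for i in range(n))
    if cost < best_cost:
        best_cost, best_perm = cost, p

print("assignments checked:", math.factorial(n))
print("best cost:", best_cost, " best assignment:", best_perm)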
An alternative approach is to study a relaxed version of this problem in which we replace the
constraint $x_{ij} \in \{0, 1\}$ with $0 \le x_{ij} \le 1$. Once we make this substitution, we obtain an LP, which
we know how to solve. The resulting problem is
\begin{align}
\text{minimize} \quad & \sum_{i=1}^{n} \sum_{j=1}^{n} c_{ij} x_{ij} \nonumber \\
\text{subject to} \quad & \sum_{j=1}^{n} x_{ij} = 1 \quad \text{for } i = 1, \dots, n \tag{9} \\
& \sum_{i=1}^{n} x_{ij} = 1 \quad \text{for } j = 1, \dots, n \nonumber \\
& 0 \le x_{ij} \le 1 \quad \text{for } i = 1, \dots, n, \; j = 1, \dots, n \nonumber
\end{align}
However, there is an issue! Since the feasible set of the relaxed problem (9) contains the feasible
set of the original assignment problem (8), the optimal objective value of (9) could be smaller
than that of (8). Moreover, an optimal solution of (9) may not be feasible for (8).
The good news is that this problem has a nice structure: if you solve the relaxed LP, its set of
optimal solutions contains at least one integer solution, which is also an optimal solution of the
original assignment problem. We will prove this in the upcoming lectures.
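As a quick sanity check of this claim (not a proof), the sketch below solves the relaxation (9) for a made-up cost matrix with scipy.optimize.linprog and compares the result against a combinatorial assignment solver; the data and tolerances are arbitrary.

import numpy as np
from scipy.optimize import linprog, linear_sum_assignment

rng = np.random.default_rng(0)
n = 4
C = rng.random((n, n))                       # made-up cost matrix c_ij

# Equality constraints of (9): every row sum and every column sum of x equals 1.
A_eq = np.zeros((2 * n, n * n))
for k in range(n):
    A_eq[k, k * n:(k + 1) * n] = 1.0         # sum_j x_kj = 1
    A_eq[n + k, k::n] = 1.0                  # sum_i x_ik = 1
b_eq = np.ones(2 * n)

res = linprog(C.ravel(), A_eq=A_eq, b_eq=b_eq, bounds=[(0, 1)] * (n * n))
X = res.x.reshape(n, n)
print("relaxed LP value:", res.fun)
print("LP solution integral (up to tolerance):", bool(np.allclose(X, np.round(X))))

# The original assignment problem (8), solved combinatorially for comparison.
rows, cols = linear_sum_assignment(C)
print("assignment optimal value:", C[rows, cols].sum())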
5 A Few Definitions
Definition 1. The solution set of one linear equality $a^\top x = b$ with nonzero coefficient vector ($a \neq 0$)
is called a hyperplane.
Example: $a = \begin{bmatrix} 2 \\ 1 \end{bmatrix}$ and $b = 0$. In this case, $a^\top x = b$ is equivalent to $2x_1 + x_2 = 0$.
Definition 2. The solution set of one linear inequality $a^\top x \le b$ with nonzero coefficient vector ($a \neq 0$)
is called a half-space.
Example: $a = \begin{bmatrix} 2 \\ 1 \end{bmatrix}$ and $b = 0$. In this case, $a^\top x \le b$ is equivalent to $2x_1 + x_2 \le 0$.
Definition 3. The solution set of a set of linear equality constraints is called an affine set. Equivalently, the
intersection of a set of hyperplanes is an affine set.
Example: $a_1 = \begin{bmatrix} 2 \\ 1 \end{bmatrix}$, $b_1 = 0$ and $a_2 = \begin{bmatrix} 1 \\ -3 \end{bmatrix}$, $b_2 = 1$. In this case $Ax = b$ is equivalent to
$2x_1 + x_2 = 0$ and $x_1 - 3x_2 = 1$.
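For this small example the affine set is a single point, which can be checked numerically; the snippet below is only an illustration using numpy.

import numpy as np

# The affine set {x : Ax = b} from the example above.
A = np.array([[2.0, 1.0],
              [1.0, -3.0]])
b = np.array([0.0, 1.0])

# A is square and invertible here, so the affine set is the single point A^{-1} b.
x = np.linalg.solve(A, b)
print("unique point in the affine set:", x)      # expected: x1 = 1/7, x2 = -2/7
print("check Ax = b:", np.allclose(A @ x, b))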
Definition 4. The solution set of a set of linear inequality constraints is called a polyhedron. Equivalently, the
intersection of a set of half-spaces is a polyhedron.
Example: $a_1 = \begin{bmatrix} 2 \\ 1 \end{bmatrix}$, $b_1 = 0$ and $a_2 = \begin{bmatrix} 1 \\ -3 \end{bmatrix}$, $b_2 = 1$. In this case $Ax \le b$ is equivalent to
$2x_1 + x_2 \le 0$ and $x_1 - 3x_2 \le 1$.
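Checking whether a point lies in a polyhedron is just a componentwise inequality test. The sketch below does this for the example above, with arbitrarily chosen test points.

import numpy as np

# The polyhedron {x : Ax <= b} from the example above.
A = np.array([[2.0, 1.0],
              [1.0, -3.0]])
b = np.array([0.0, 1.0])

def in_polyhedron(x, A, b, tol=1e-9):
    """Return True if x satisfies A x <= b componentwise (up to a small tolerance)."""
    return bool(np.all(A @ x <= b + tol))

print(in_polyhedron(np.array([-1.0, 0.0]), A, b))   # -2 <= 0 and -1 <= 1 -> True
print(in_polyhedron(np.array([1.0, 1.0]), A, b))    #  3 >  0            -> False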
Definition 5. Consider the polyhedron $P = \{x \mid Ax \le b,\; Dx = e\}$. The lineality space $L$ of the
polyhedron $P$ is defined as the null space of the stacked matrix $\begin{bmatrix} A \\ D \end{bmatrix}$, i.e.,
\begin{equation}
L = \operatorname{null}\begin{bmatrix} A \\ D \end{bmatrix} = \{ v \in \mathbb{R}^n \mid Av = 0,\; Dv = 0 \} \tag{10}
\end{equation}
• $\{x = (x_1, x_2, x_3) \mid |x_1| \le 1,\; |x_2| \le 1\}$
Solution: In this case
\[
A = \begin{bmatrix} 1 & 0 & 0 \\ -1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & -1 & 0 \end{bmatrix}.
\]
It can be easily verified that the null space of $A$ is $L = \{v = (v_1, v_2, v_3) \mid v_1 = v_2 = 0\}$.
$P = \{x \mid Ax \le b,\; Dx = e\}$ is a pointed polyhedron $\iff \operatorname{null}\begin{bmatrix} A \\ D \end{bmatrix} = \{0\}$, i.e., the lineality space is $L = \{0\}$.
Note: Among the above examples, only the last two polyhedrons are pointed.
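The lineality space and the pointedness criterion above can also be checked numerically. The sketch below uses scipy.linalg.null_space on the matrix A from the example; since that example has no equality constraints, the stacked matrix reduces to A alone.

import numpy as np
from scipy.linalg import null_space

# Inequality matrix from the example {x : |x1| <= 1, |x2| <= 1} in R^3.
A = np.array([[1.0, 0.0, 0.0],
              [-1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, -1.0, 0.0]])

# Orthonormal basis for the lineality space L = null([A; D]); here D is empty.
L_basis = null_space(A)
print("basis of the lineality space:\n", L_basis)   # expect a single vector along e3

# Pointedness criterion: P is pointed iff the lineality space is {0}.
print("polyhedron is pointed:", L_basis.shape[1] == 0)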