Lecture 4
Scribed by:
• Deepak (2019MT10685)
1 Overview
In the last lecture, we looked at what convex functions are and why they are essential for optimization. We discussed piecewise linear convex objective functions and worked through some examples. We also covered the graphical representation and solution of two-variable linear programs.
In this lecture, we continue with the graphical representation and solution of two-variable linear programs. We then introduce some terminology involving polyhedra and convex sets, and conclude by discussing the existence and optimality of extreme points.
2 Main Section
In this part, we consider a few simple examples that provide useful geometric insights into the
nature of linear programming problems.
2.1.1 Two-variable LP solving method
Example. Maximize Z = 3x + 5y subject to 2x + 3y ≤ 10, x + y ≤ 5, x ≥ 0, y ≥ 0 (these are the constraints graphed in the figures below).
Soln.
Figure 3: Graphing the constraint 2x + 3y ≤ 10, x ≥ 0, y ≥ 0
Figure 5: Adding the constraint x + y ≤ 5
• A constraint is called redundant if deleting it does not increase the size of the feasible region. Here, every point satisfying 2x + 3y ≤ 10, x ≥ 0, y ≥ 0 already satisfies x + y ≤ 5, so the constraint x + y ≤ 5 is redundant (this is checked numerically in the sketch after this list).
• Now graph the lines 3x + 5y = p for various values of p, and choose p as large as possible.
• Find the maximum value of p for which there is a feasible solution with 3x + 5y = p: slide the line of constant profit p parallel to itself as far as possible while it still touches the feasible region (a computational version of this appears in the sketch below).
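A minimal computational sketch of this procedure, using SciPy's linprog on the example LP above (linprog minimizes, so the objective is negated; the second call checks the redundancy of x + y ≤ 5 by maximizing x + y over the remaining constraints):

    from scipy.optimize import linprog

    # Example LP: maximize 3x + 5y subject to
    # 2x + 3y <= 10, x + y <= 5, x >= 0, y >= 0.
    A_ub = [[2, 3], [1, 1]]
    b_ub = [10, 5]
    bounds = [(0, None), (0, None)]  # x >= 0, y >= 0

    # linprog minimizes, so maximize 3x + 5y by minimizing -(3x + 5y).
    res = linprog(c=[-3, -5], A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    print("optimal (x, y):", res.x)    # approximately (0, 10/3)
    print("optimal value:", -res.fun)  # approximately 16.67

    # Redundancy check: maximize x + y using only the remaining constraints.
    red = linprog(c=[-1, -1], A_ub=[[2, 3]], b_ub=[10], bounds=bounds, method="highs")
    print("max of x + y without that constraint:", -red.fun)  # 5.0, so x + y <= 5 cuts nothing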
Types of LPs
LPs can be divided into three main categories:
• Infeasible LPs (the feasible region is empty).
• LPs with an optimal solution.
• LPs with unbounded objective. (For a max problem this means unbounded from above.)
Theorem 1. If the feasible region is non-empty and bounded, then there is an optimal solution.
This relies on all of the inequalities being “≤” constraints, as opposed to strict “<” constraints. For example, the following problem has no optimum: maximize x subject to 0 < x < 1 (the supremum 1 is not attained by any feasible point).
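In practice, a solver reports which of the three cases occurred. A small illustrative sketch in Python, using scipy.optimize.linprog and its documented status codes (the two toy problems below are made up for illustration):

    from scipy.optimize import linprog

    # Unbounded: maximize x subject only to x >= 0.
    res = linprog(c=[-1], bounds=[(0, None)], method="highs")
    print(res.status, res.message)  # status 3: problem appears to be unbounded

    # Infeasible: x <= -1 together with x >= 0.
    res = linprog(c=[1], A_ub=[[1]], b_ub=[-1], bounds=[(0, None)], method="highs")
    print(res.status, res.message)  # status 2: problem appears to be infeasible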
2.1.2 Definitions
• In two dimensions, an edge of the feasible region is one of the line segments making up the
boundary of the feasible region. The endpoints of an edge are corner points.
• In three dimensions, an edge of the feasible region is one of the line segments making up the framework of a polyhedron; the edges are where the faces intersect each other. A face is a flat portion of the boundary of the feasible region.
Figure 10: Example of edges and faces of the feasible region in 3-D
Extreme Rays
An extreme ray is like an edge, but it starts at a corner point and extends infinitely in one direction.
Using corner points and edges, an LP can be solved geometrically: start at some corner point of the feasible region and repeat the following.
• Find an edge (or extreme ray) along which the objective value is continually improving. (If there is no such edge, the current corner point is optimal; stop.)
• Go to the next corner point along that edge. (If there is no such corner point, stop. The objective is unbounded.)
(Figure: the method applied to the example, Maximize Z = 3x + 5y.)
Note: In two dimensions it is pretty easy to find a corner point to start at, especially if the LP is already graphed. But for larger LPs, finding a starting corner point is surprisingly tricky.
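In two dimensions, the corner points can even be enumerated by brute force: every corner is a feasible intersection of two constraint boundaries. The sketch below (Python with numpy, on the example LP above; this enumeration is only for illustration and does not scale to larger LPs) lists the corners and picks the best one.

    import itertools
    import numpy as np

    # Constraints written as A @ [x, y] <= b, including x >= 0 and y >= 0
    # rewritten as -x <= 0 and -y <= 0.
    A = np.array([[2.0, 3.0], [1.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
    b = np.array([10.0, 5.0, 0.0, 0.0])
    c = np.array([3.0, 5.0])  # objective to maximize

    best_point, best_value = None, -np.inf
    for i, j in itertools.combinations(range(len(b)), 2):
        try:
            # Candidate corner: intersection of the i-th and j-th boundary lines.
            x = np.linalg.solve(A[[i, j]], b[[i, j]])
        except np.linalg.LinAlgError:
            continue  # the two boundary lines are parallel
        if np.all(A @ x <= b + 1e-9):  # keep only feasible intersections
            if c @ x > best_value:
                best_point, best_value = x, c @ x

    print(best_point, best_value)  # approximately (0, 3.33) with value 16.67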
2.2 Polyhedra and Convex Sets
2.2.1 Definitions
1. Halfspace: a set of the form {x ∈ Rn | aT x ≥ b}, where a is a nonzero vector in Rn and b is a scalar.
2. Hyperplane: a set of the form {x ∈ Rn | aT x = b}, where a is a nonzero vector in Rn and b is a scalar.
3. Polyhedron: a set of the form {x ∈ Rn | Ax ≥ b}, where A is an m × n matrix and b is a vector in Rm.
   • The feasible set of any linear programming problem can be described by inequality constraints of the form Ax ≥ b and is therefore a polyhedron.
   • A set of the form {x ∈ Rn | Ax = b, x ≥ 0} is also a polyhedron and is said to be a polyhedron in standard form representation.
Figure 15: The polyhedron {x | a_i^T x ≥ b_i, i = 1, . . . , 5}
4. Convex Set
A set S ⊂ Rn is convex if for any x, y ∈ S, and any λ ∈ [0, 1], we have λx + (1 − λ)y ∈ S.
(Figure: some examples of sets that are not convex.)
Theorem 2. The intersection of any collection of convex sets is convex.
Proof. Let S_i, i ∈ I, be convex sets, where I is some index set, and suppose that x and y belong to the intersection ∩_{i∈I} S_i. Let λ ∈ [0, 1]. Since each S_i is convex and contains x and y, we have λx + (1 − λ)y ∈ S_i, which proves that λx + (1 − λ)y also belongs to the intersection of the sets S_i. Therefore ∩_{i∈I} S_i is convex.
Theorem 3. Every polyhedron is a convex set.
Proof. Let a be a vector and let b be a scalar. Suppose that x and y satisfy a^T x ≥ b and a^T y ≥ b, respectively, and therefore belong to the same halfspace. Let λ ∈ [0, 1]. Then a^T (λx + (1 − λ)y) ≥ λb + (1 − λ)b = b, which proves that λx + (1 − λ)y also belongs to the same halfspace. Therefore a halfspace is convex. Since a polyhedron is the intersection of a finite number of halfspaces, the result follows from the previous theorem.
2.3 Extreme Points
2.3.1 Definition
A vector x ∈ P is an extreme point of a polyhedron P if we cannot find two vectors y, z ∈ P, both different from x, and a scalar λ ∈ [0, 1], such that x = λy + (1 − λ)z.
Figure 17: Vector w is not an extreme point but vector x is an extreme point
Example 1. As shown in Figure 17, the vector w is not an extreme point because it is a convex combination of v and u, while the vector x is an extreme point: if x = λy + (1 − λ)z with λ ∈ [0, 1], then either y ∉ P, or z ∉ P, or x = y, or x = z.
Theorem 4. Suppose that the polyhedron P = {x ∈ Rn | a_i^T x ≥ b_i, i = 1, . . . , m} is nonempty. Then the following are equivalent:
1. The polyhedron P has at least one extreme point.
2. The polyhedron P does not contain a line, i.e. a set of the form {x + λd | λ ∈ R} with d ≠ 0.
3. There exist n vectors out of the family a_1, . . . , a_m which are linearly independent.
Proof. (2) ⇒ (1) Let x be an element of P and let I = {i | a_i^T x = b_i}. If n of the vectors a_i, i ∈ I, corresponding to the active constraints are linearly independent, then x is, by definition, a basic feasible solution and, therefore, a basic feasible solution exists. If this is not the case, then all of the vectors a_i, i ∈ I, lie in a proper subspace of Rn, and there exists a nonzero vector d ∈ Rn such that a_i^T d = 0 for every i ∈ I. Let us consider the line consisting of all points of the form y = x + λd, where λ is an arbitrary scalar. For i ∈ I, we have a_i^T y = a_i^T x + λ a_i^T d = a_i^T x = b_i. Thus, those constraints that were active at x remain active at all points on the line. However, since the polyhedron is assumed to contain no lines, it follows that as we vary λ, some constraint will eventually be violated. At the point where some constraint is about to be violated, a new constraint must become active, and we conclude that there exist some λ* and some j ∉ I such that a_j^T (x + λ*d) = b_j.
We claim that a_j is not a linear combination of the vectors a_i, i ∈ I. Indeed, we have a_j^T x ≠ b_j (because j ∉ I) and a_j^T (x + λ*d) = b_j (by the definition of λ*). Thus, a_j^T d ≠ 0. On the other hand, a_i^T d = 0 for every i ∈ I (by the definition of d), and therefore d is orthogonal to any linear combination of the vectors a_i, i ∈ I. Since d is not orthogonal to a_j, we conclude that a_j is not a linear combination of the vectors a_i, i ∈ I. Thus, by moving from x to x + λ*d, the number of linearly independent active constraints has been increased by at least one. By repeating the same argument as many times as needed, we eventually end up with a point at which there are n linearly independent active constraints. Such a point is, by definition, a basic solution; it is also feasible since we have stayed within the feasible set.
(1) ⇒ (3) If P has an extreme point x, then x is also a basic feasible solution, and there exist n constraints that are active at x, with the corresponding vectors a_i being linearly independent.
(3) ⇒ (2) Suppose that n of the vectors a_i are linearly independent and, without loss of generality, assume that a_1, . . . , a_n are linearly independent. Suppose that P contains a line x + λd, where d is a nonzero vector. We then have a_i^T (x + λd) ≥ b_i for all i and all λ. We conclude that a_i^T d = 0 for all i. (If a_i^T d < 0, we can violate the constraint by picking λ very large; a symmetric argument applies if a_i^T d > 0.) Since the vectors a_i, i = 1, . . . , n, are linearly independent, this implies that d = 0. This is a contradiction and establishes that P does not contain a line.
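The characterization in terms of n linearly independent active constraints also gives a mechanical test. The sketch below (Python with numpy, on the example feasible region with n = 2; the helper is_extreme_point and the tolerance 1e-9 are illustrative choices, not part of the notes) decides whether a feasible point is a basic feasible solution, i.e. an extreme point, by computing the rank of its active constraint rows.

    import numpy as np

    # Example feasible region written as a_i^T x >= b_i (matching the notation above):
    # -2x - 3y >= -10, -x - y >= -5, x >= 0, y >= 0.
    A = np.array([[-2.0, -3.0], [-1.0, -1.0], [1.0, 0.0], [0.0, 1.0]])
    b = np.array([-10.0, -5.0, 0.0, 0.0])
    n = 2

    def is_extreme_point(p, tol=1e-9):
        assert np.all(A @ p >= b - tol), "the point must be feasible"
        active = np.abs(A @ p - b) <= tol  # indices i with a_i^T p = b_i
        return np.linalg.matrix_rank(A[active]) == n

    print(is_extreme_point(np.array([0.0, 10.0 / 3.0])))  # True: a corner point
    print(is_extreme_point(np.array([2.0, 2.0])))          # False: on an edge, not a corner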
Corollary 5. Every nonempty bounded polyhedron and every nonempty polyhedron in standard form has at least one extreme point.
Theorem 6. Consider the linear programming problem of maximizing cT x over a polyhedron P. Suppose that P has at least one extreme point and that there exists an optimal solution. Then there exists an optimal solution which is an extreme point of P.
Proof. Let the polyhedron P be defined as {x ∈ Rn | Ax ≤ b} and the linear programming problem LP be defined as max {cT x : x ∈ P}. Let α be the optimal value and let O be the set of optimal solutions, i.e. O = {x ∈ P : cT x = α}. Since P has an extreme point, it does not contain a line (Theorem 4). Since O ⊆ P, it does not contain a line either; hence O contains an extreme point x0. We will now show that x0 is also an extreme point of P. Let x1, x2 ∈ P and λ ∈ (0, 1) be such that x0 = λx1 + (1 − λ)x2. Then cT x0 = λ cT x1 + (1 − λ) cT x2. Since x1, x2 ∈ P and α is the optimal value, we have cT x1 ≤ cT x0 = α and cT x2 ≤ cT x0 = α. But since cT x0 = λ cT x1 + (1 − λ) cT x2, this forces cT x1 = cT x2 = α, so x1, x2 ∈ O; since x0 is an extreme point of O, it follows that x1 = x2 = x0. Hence x0 is an extreme point of P.
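Consistent with this result, the optimum reported by an LP solver for the example problem is itself a corner point. A short self-contained sketch checking this (Python with scipy and numpy; the looser tolerance 1e-7 only accounts for solver rounding):

    import numpy as np
    from scipy.optimize import linprog

    # Example LP: maximize 3x + 5y s.t. 2x + 3y <= 10, x + y <= 5, x, y >= 0.
    res = linprog(c=[-3, -5], A_ub=[[2, 3], [1, 1]], b_ub=[10, 5],
                  bounds=[(0, None), (0, None)], method="highs")
    x0 = res.x

    # Active constraints at the optimum, with the system written as A @ [x, y] >= b.
    A = np.array([[-2.0, -3.0], [-1.0, -1.0], [1.0, 0.0], [0.0, 1.0]])
    b = np.array([-10.0, -5.0, 0.0, 0.0])
    active = np.abs(A @ x0 - b) <= 1e-7
    print(np.linalg.matrix_rank(A[active]) == 2)  # True: the optimum is an extreme point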