
MTL103: Optimization Methods and Applications Spring 2023

Lecture 4 — 10 Jan, 2023


Lecturer: Prof. Minati De Scribe: Team 4

Scribed by:

• Rajdeep Das (2019MT10718)

• Deepak (2019MT10685)

• Ayan Jain (2019MT10678)

• Leaandro Rodricks (2019MT10702)

• Aabid Hoda (2019MT10068)

• Rakshita Choudhary (2019CS10389)

1 Overview

In the last lecture, we looked at what convex functions are and why they’re essential for optimization. We talked about piecewise linear convex objective functions and some examples involving the same. We also covered the graphical representation and solutions for two-variable linear programs.
In this lecture, we continue to talk about the graphical representation and solution of two-variable
linear programs. We introduce some new terms involving polyhedra and the convex set. We
conclude this lecture by discussing the existence and optimality of extreme points.

2 Main Section

2.1 Graphical Representation and Solution

In this part, we consider a few simple examples that provide useful geometric insights into the
nature of linear programming problems.

(a) Some 2-dimensional examples (b) Some 3-dimensional examples

Figure 1: Examples of feasible regions of an LP

2.1.1 Two-variable LP Solving method

Example 1. Consider a two-variable Linear Program with objective Z:

Z = 3x + 5y; with constraints:


2x + 3y ≤ 10    (1)
x + 2y ≤ 6      (2)
x + y ≤ 5       (3)
x ≤ 4           (4)
y ≤ 3           (5)
x, y ≥ 0        (6)

Solution.

Graphing the feasible region of the inequalities (constraints):

Figure 2: Graphing the constraint x + 2y ≤ 6

Figure 3: Graphing the constraint 2x + 3y ≤ 10, x ≥ 0, y ≥ 0

Figure 4: Adding the constraint x + 2y ≤ 6

Figure 5: Adding the constraint x + y ≤ 5

• A constraint is called redundant if deleting it does not increase the size of the
feasible region. Here, x + y ≤ 5 is redundant.

Figure 6: Adding the constraints x ≤ 4, y ≤ 3

• We have now graphed the feasible region.

• Now, graph the lines 3x + 5y = p for various values of p and choose p as large as possible.

Figure 7: Geometrical method for optimizing 3x + 5y

• Find the maximum value of p such that there is a feasible solution with 3x + 5y = p, by
translating the line with profit p parallel to itself as far as possible while it still meets the feasible region.

• The optimal solution occurs at a corner point.

Figure 8: Geometrical method for optimizing 3x + 5y
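The corner-point search above can be checked numerically. The sketch below (not part of the original notes; names and tolerances are illustrative choices) enumerates every intersection of two constraint boundaries from Example 1, keeps the feasible ones, and evaluates Z = 3x + 5y at each:

```python
from itertools import combinations

# Constraints of Example 1 written as a*x + b*y <= c, including x >= 0, y >= 0.
constraints = [
    (2, 3, 10),   # 2x + 3y <= 10
    (1, 2, 6),    # x + 2y <= 6
    (1, 1, 5),    # x + y <= 5   (redundant)
    (1, 0, 4),    # x <= 4
    (0, 1, 3),    # y <= 3
    (-1, 0, 0),   # -x <= 0, i.e. x >= 0
    (0, -1, 0),   # -y <= 0, i.e. y >= 0
]

def feasible(x, y, eps=1e-9):
    return all(a * x + b * y <= c + eps for a, b, c in constraints)

def corner_points():
    """Intersect every pair of boundary lines; keep the feasible intersections."""
    pts = []
    for (a1, b1, c1), (a2, b2, c2) in combinations(constraints, 2):
        det = a1 * b2 - a2 * b1
        if abs(det) < 1e-12:          # parallel boundaries: no unique intersection
            continue
        x = (c1 * b2 - c2 * b1) / det  # Cramer's rule for the 2x2 system
        y = (a1 * c2 - a2 * c1) / det
        if feasible(x, y):
            pts.append((x, y))
    return pts

best = max(corner_points(), key=lambda p: 3 * p[0] + 5 * p[1])
print(best, 3 * best[0] + 5 * best[1])   # optimum at (2, 2) with Z = 16
```

The maximizing corner is the intersection of constraints (1) and (2), agreeing with the graphical solution.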

Types of LPs
LPs can be divided into three main categories:

• Infeasible LPs: there is no feasible solution.

• LPs that have an optimal solution.

• LPs with an unbounded objective. (For a maximization problem, this means unbounded from above.)

Theorem 1. If the feasible region is non-empty and bounded, then there is an optimal solution.
This relies on the inequalities being closed “≤” constraints, as opposed to strict “<” constraints, so that the feasible region is a closed set.

For example, the following problem has no optimum: maximize x, subject to 0 < x < 1 (the supremum 1 is never attained).

2.1.2 Definitions

Edges of the feasible region

• In two dimensions, an edge of the feasible region is one of the line segments making up the
boundary of the feasible region. The endpoints of an edge are corner points.

Figure 9: Example of edges of the feasible region in 2-D

• In three dimensions, an edge of the feasible region is one of the line segments making up the
framework of a polyhedron. The edges are where the faces intersect each other. A face is a
flat region of the feasible region.

Figure 10: Example of edges and faces of the feasible region in 3-D

Extreme Rays
An extreme ray is like an edge, except that it starts at a corner point and extends infinitely in one direction.

Figure 11: Example of Extreme Rays

2.1.3 The Simplex method

• Start at any feasible corner point.

• Find an edge (or extreme ray) along which the objective value improves.

• Go to the next corner point. (If there is no such corner point, stop: the objective is unbounded.)

• Continue until no adjacent corner point has a better objective value.
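The steps above can be sketched as a toy vertex walk. Assuming the corner points of the Example 1 polygon and their neighbours along the boundary are known in advance (in a real simplex implementation they come from the algebra of basic feasible solutions, not from a lookup table), the walk looks like:

```python
from fractions import Fraction as F  # exact arithmetic for the vertex (4, 2/3)

# Vertices of the Example 1 feasible region, each listed with its two
# neighbouring corner points along the boundary (precomputed for this toy).
neighbours = {
    (0, 0): [(4, 0), (0, 3)],
    (4, 0): [(0, 0), (4, F(2, 3))],
    (4, F(2, 3)): [(4, 0), (2, 2)],
    (2, 2): [(4, F(2, 3)), (0, 3)],
    (0, 3): [(2, 2), (0, 0)],
}

def objective(v):
    x, y = v
    return 3 * x + 5 * y

def simplex_walk(start):
    """Greedy walk: move to a strictly better adjacent corner until none exists."""
    current = start
    while True:
        better = [v for v in neighbours[current] if objective(v) > objective(current)]
        if not better:
            return current          # for an LP, a local optimum over corners is global
        current = max(better, key=objective)

print(simplex_walk((0, 0)))   # reaches the optimal corner (2, 2)
```

Starting from (0, 0), the walk visits (0, 3) and then stops at (2, 2), where neither neighbour improves the objective.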

Maximize Z = 3x + 5y

Figure 12: Simplex method to maximize z = 3x + 5y

Figure 13: Simplex method to maximize z = 3x + 5y

Note: In two dimensions it is pretty easy to find a corner point to start at, especially if the LP is
already graphed. But with larger LPs, it is surprisingly tricky.

2.2 Polyhedra and Convex Set

2.2.1 Definitions

1. Halfspace

Let a be a nonzero vector in Rn and let b be a scalar. The set {x ∈ Rn | aT x ≥ b} is called a halfspace.

2. Hyperplane

Let a be a nonzero vector in Rn and let b be a scalar. The set {x ∈ Rn | aT x = b} is called a hyperplane.

• A hyperplane is the boundary of a corresponding halfspace.


• Vector a in the definition of the hyperplane is perpendicular to the hyperplane itself.
(If x and y belong to the same hyperplane, then aT x = aT y. Hence aT (x − y) = 0 and
therefore a is orthogonal to any direction vector confined to the hyperplane.)
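This orthogonality is easy to spot-check numerically. A small sketch with illustrative values (a = (1, 2), b = 4 are arbitrary choices, not from the notes): for any two points x, y on the hyperplane aT x = b, the difference x − y has zero dot product with a.

```python
# Hyperplane a.x = b in R^2, with illustrative values a = (1, 2), b = 4.
a = (1.0, 2.0)
b = 4.0

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

# Two points on the hyperplane: 1*4 + 2*0 = 4 and 1*0 + 2*2 = 4.
x = (4.0, 0.0)
y = (0.0, 2.0)

d = tuple(xi - yi for xi, yi in zip(x, y))   # a direction vector lying in the hyperplane
print(dot(a, x), dot(a, y), dot(a, d))       # 4.0 4.0 0.0
```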

Figure 14: A hyperplane and two halfspaces

3. Polyhedron

A polyhedron is a set that can be described in the form {x ∈ Rn | Ax ≥ b}, where A is an m × n matrix and b is a vector in Rm .

• The feasible set of any linear programming problem can be described by inequality
constraints of the form of Ax ≥ b and is therefore a polyhedron.
• A set of the form {x ∈ Rn | Ax = b, x ≥ 0} is also a polyhedron; this is called the standard
form representation.

Figure 15: The polyhedron {x|aTi x ≥ bi , i = 1, . . . , 5}

4. Convex Set

A set S ⊂ Rn is convex if for any x, y ∈ S, and any λ ∈ [0, 1], we have λx + (1 − λ)y ∈ S.

• For λ ∈ [0, 1], the point λx + (1 − λ)y is a weighted average of the vectors x and y, and
therefore belongs to the line segment joining x and y. Thus a set is convex if the segment
joining any two of its elements is contained in the set.

Figure 16: Set S is convex but set Q is not

• Some convex sets

• Some sets that are not convex

2.2.2 Some results

Theorem 2. The intersection of convex sets is convex.

Proof. Let Si , i ∈ I be convex sets where I is some index set, and suppose that x and y belong
to the intersection ∩i∈I Si . Let λ ∈ [0, 1]. Since each Si is convex and contains x, y we have
λx + (1 − λ)y ∈ Si , which proves that λx + (1 − λ)y also belongs to the intersection of the sets Si .
Therefore ∩i∈I Si is convex.

Theorem 3. Every polyhedron is a convex set.

Proof. Let a be a nonzero vector and let b be a scalar. Suppose that x and y satisfy aT x ≥ b and aT y ≥ b, and therefore both belong to the same halfspace. Let λ ∈ [0, 1]. Then aT (λx + (1 − λ)y) ≥
respectively, and therefore belong to the same halfspace. Let λ ∈ [0, 1]. Then aT (λx + (1 − λ)y) ≥
λb + (1 − λ)b = b, which proves that (λx + (1 − λ)y) also belongs to the same halfspace. Therefore
a halfspace is convex. Since a polyhedron is the intersection of a finite number of halfspaces, the
result follows from the previous theorem.
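Theorems 2 and 3 can be spot-checked numerically. The sketch below takes two points of the Example 1 polyhedron (a finite intersection of halfspaces) and confirms that every sampled convex combination of them stays inside; the chosen points and the 101-sample grid are arbitrary illustration choices.

```python
# Polyhedron {(x, y) : a*x + b*y <= c for each row}, taken from Example 1.
rows = [(2, 3, 10), (1, 2, 6), (1, 1, 5), (1, 0, 4), (0, 1, 3), (-1, 0, 0), (0, -1, 0)]

def in_polyhedron(p, eps=1e-9):
    x, y = p
    return all(a * x + b * y <= c + eps for a, b, c in rows)

u, v = (0.0, 3.0), (4.0, 0.0)          # two feasible points
assert in_polyhedron(u) and in_polyhedron(v)

# Every convex combination lam*u + (1 - lam)*v should remain feasible.
ok = all(
    in_polyhedron((lam * u[0] + (1 - lam) * v[0],
                   lam * u[1] + (1 - lam) * v[1]))
    for lam in (i / 100 for i in range(101))
)
print(ok)   # True
```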

2.3 Optimality of Extreme Points

2.3.1 Definition

Extreme Points

Let P be a polyhedron. A vector x ∈ P is an extreme point of P if we cannot find two vectors y, z ∈ P , both different from x, and a scalar λ ∈ [0, 1], such that x = λy + (1 − λ)z.

Figure 17: Vector w is not an extreme point but vector x is an extreme point

Example 2. As shown in Figure 17, the vector w is not an extreme point because it is a convex
combination of v and u, while the vector x is an extreme point: if x = λy + (1 − λ)z and
λ ∈ [0, 1], then either y ∉ P , or z ∉ P , or x = y, or x = z.
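In R2 the characterization of extreme points is concrete: a feasible point is extreme exactly when two linearly independent constraints are active (tight) at it, a fact used in Theorem 4 below. A sketch for the Example 1 polygon (the determinant test and the tolerance `eps` are implementation choices):

```python
# Constraints of Example 1 as a*x + b*y <= c rows.
rows = [(2, 3, 10), (1, 2, 6), (1, 1, 5), (1, 0, 4), (0, 1, 3), (-1, 0, 0), (0, -1, 0)]

def is_extreme(p, eps=1e-9):
    """A point of a 2-D polyhedron is extreme iff two linearly
    independent constraints are active (tight) at it."""
    x, y = p
    active = [(a, b) for a, b, c in rows if abs(a * x + b * y - c) < eps]
    return any(
        abs(a1 * b2 - a2 * b1) > eps          # 2x2 determinant: linear independence
        for i, (a1, b1) in enumerate(active)
        for (a2, b2) in active[i + 1:]
    )

print(is_extreme((2.0, 2.0)))   # True: 2x + 3y = 10 and x + 2y = 6 are both tight
print(is_extreme((1.0, 1.0)))   # False: interior point, no constraint is tight
print(is_extreme((3.0, 0.0)))   # False: only y = 0 is tight (midpoint of an edge)
```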

2.3.2 Existence of Extreme Points

Theorem 4. Suppose that a polyhedron P = {x ∈ Rn | aTi x ≥ bi , i = 1, 2, . . . , m} is nonempty. Then, the following are equivalent:

1. The polyhedron P has at least one extreme point.

2. The polyhedron P does not contain a line.

3. There exist n vectors out of the family a1 , . . . , am that are linearly independent.

Proof. (2) ⇒ (1) Let x be an element of P and let I = {i|aTi x = bi }. If n of the vectors ai , i ∈ I,
corresponding to the active constraints are linearly independent then x is, by definition, a basic
feasible solution and, therefore, a basic feasible solution exists. If this is not the case, then all
of the vectors ai , i ∈ I, lie in a proper subspace of Rn and there exists a nonzero vector d ∈ Rn
such that aTi d = 0, for every i ∈ I. Let us consider the line consisting of all points of the form
y = x + λd, where λ is an arbitrary scalar. For i ∈ I, we have aTi y = aTi x + λaTi d = aTi x = bi .
Thus, those constraints that were active at x remain active at all points on the line. However,
since the polyhedron is assumed to contain no lines, it follows that as we vary λ, some constraints
will eventually be violated. At the point where some constraint is about to be violated, a new
constraint must become active, and we conclude that there exists some λ∗ and some j ∉ I such
that aTj (x + λ∗ d) = bj .

We claim that aj is not a linear combination of the vectors ai , i ∈ I. Indeed, we have aTj x ̸= bj
(because j ∉ I) and aTj (x + λ∗ d) = bj (by the definition of λ∗ ). Thus, aTj d ̸= 0. On the other
hand, aTi d = 0 for every i ∈ I (by the definition of d), and therefore d is orthogonal to any linear
combination of the vectors ai , i ∈ I. Since d is not orthogonal to aj , we conclude that aj is not
a linear combination of the vectors ai , i ∈ I. Thus, by moving from x to x + λ∗ d, the number of
linearly independent active constraints has been increased by at least one. By repeating the same
argument, as many times as needed, we eventually end up with a point at which there are n linearly
independent active constraints. Such a point is, by definition, a basic solution; it is also feasible
since we have stayed within the feasible set.
(1) ⇒ (3) If P has an extreme point x, then x is also a basic feasible solution, and there exist n
constraints that are active at x, with the corresponding vectors ai being linearly independent.
(3) ⇒ (2) Suppose that n of the vectors ai are linearly independent and, without loss of generality,
let us assume that a1 , . . . , an are linearly independent. Suppose that P contains a line x + λd, where
d is a nonzero vector. We then have aTi (x + λd) ≥ bi for all i and all λ. We conclude that aTi d = 0
for all i. (If aTi d < 0, we can violate the constraint by picking λ very large; a symmetric argument
applies if aTi d > 0.) Since the vectors ai , i = 1, . . . , n, are linearly independent, this implies that
d = 0. This is a contradiction and establishes that P does not contain a line.

Corollary 5. Every nonempty bounded polyhedron, and every nonempty polyhedron in standard form, has at least one extreme point.

2.3.3 Optimality of Extreme Points

Theorem 6. Consider the linear programming problem of minimizing cT x over a polyhedron P .


Suppose that P has at least one extreme point and that there exists an optimal solution. Then,
there exists an optimal solution which is an extreme point of P .

Proof. Let the polyhedron P be defined as {x ∈ Rn | Ax ≤ b} and consider, without loss of generality, the problem max {cT x : x ∈ P } (minimizing cT x is equivalent to maximizing (−c)T x). Let α be the value of the optimal solution and let O be the
set of optimal solutions, i.e., O = {x ∈ P : cT x = α}. Since P has an extreme point, by Theorem 4
it does not contain a line. Since O ⊆ P and O is itself a polyhedron, O does not contain a line either;
hence, again by Theorem 4, O contains
an extreme point x0 . We will now show that x0 is also an extreme point of P . Let x1 , x2 ∈ P and
λ ∈ (0, 1) s.t. x0 = λx1 + (1 − λ)x2 . Then cT x0 = λcT x1 + (1 − λ)cT x2 . Since x1 , x2 ∈ P and α is
the optimal solution in P , it is necessarily the case that cT x1 ≤ cT x0 = α and cT x2 ≤ cT x0 = α.
But since cT x0 = λcT x1 + (1 − λ)cT x2 , this necessarily implies that x1 = x2 = x0 (as otherwise
x1 , x2 ∈ O contradicting x0 being an extreme point in O), implying that x0 is an extreme point in
P.
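Theorem 6 can be sanity-checked on Example 1: no feasible point sampled at random beats the objective value at the best extreme point, (2, 2) with Z = 16. A sketch (the sampling box, seed, and sample count are arbitrary choices):

```python
import random

# Constraints of Example 1 as a*x + b*y <= c rows (nonnegativity handled separately).
rows = [(2, 3, 10), (1, 2, 6), (1, 1, 5), (1, 0, 4), (0, 1, 3)]

def feasible(x, y):
    return x >= 0 and y >= 0 and all(a * x + b * y <= c for a, b, c in rows)

random.seed(0)
best_corner_value = 16.0           # objective 3x + 5y at the extreme point (2, 2)

samples = 0
while samples < 10_000:
    x, y = random.uniform(0, 4), random.uniform(0, 3)
    if feasible(x, y):
        # An optimal solution occurs at an extreme point, so no feasible
        # point can exceed the best corner value.
        assert 3 * x + 5 * y <= best_corner_value + 1e-9
        samples += 1

print("no sampled feasible point beats the extreme point")
```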
