Simplex Method

This document discusses iterative improvement algorithms for solving optimization problems. It focuses on the simplex method, a classic iterative algorithm for solving linear programming (LP) problems. The simplex method works by starting with a feasible solution and repeatedly moving to adjacent solutions with better objective values until an optimal solution is found. It outlines the standard steps of the simplex method and provides an example application to an LP problem. Later sections discuss theoretical properties and improvements to the simplex method over time.


The Design and Analysis of Algorithms

Chapter 10:
Iterative Improvement

Simplex Method
Iterative Improvement
 Introduction
 Linear Programming
 The Simplex Method
 Standard Form of LP Problem
 Basic Feasible Solutions
 Outline of the Simplex Method
 Example
 Notes on the Simplex Method
 Improvements

Introduction
Algorithm design technique for solving optimization problems:
 Start with a feasible solution
 Repeat the following step until no improvement can be found:
 change the current feasible solution to a feasible solution with a better value of the objective function
 Return the last feasible solution as optimal
Introduction

 Note: Typically, a change in the current solution is “small” (local search)

 Major difficulty: local optimum vs. global optimum

Important Examples

 Simplex method
 Ford-Fulkerson algorithm for maximum
flow problem
 Maximum matching of graph vertices
 Gale-Shapley algorithm for the stable
marriage problem

Linear Programming
 Linear programming (LP) problem is to optimize a linear function of several variables subject to linear constraints:

maximize (or minimize) c1 x1 + ...+ cn xn
subject to ai1x1 + ...+ ain xn ≤ (or ≥ or =) bi , i = 1,...,m
x1 ≥ 0, ... , xn ≥ 0

 The function z = c1 x1 + ...+ cn xn is called the objective function; the constraints x1 ≥ 0, ... , xn ≥ 0 are called non-negativity constraints

Example

maximize 3x + 5y
subject to x + y ≤ 4
x + 3y ≤ 6
x ≥ 0, y ≥ 0

 Feasible region is the set of points defined by the constraints

[Figure: the feasible region bounded by the axes and the lines x + y = 4 and x + 3y = 6, with extreme points (0, 0), (4, 0), (3, 1), and (0, 2)]

Geometric solution

maximize 3x + 5y
subject to x + y ≤ 4
x + 3y ≤ 6
x ≥ 0, y ≥ 0

Extreme Point Theorem Any LP problem with a nonempty bounded feasible region has an optimal solution; moreover, an optimal solution can always be found at an extreme point of the problem's feasible region.

[Figure: the feasible region with level lines 3x + 5y = 10, 3x + 5y = 14, and 3x + 5y = 20 of the objective function; the maximum z = 14 is attained at the extreme point (3, 1)]

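Since this feasible region is nonempty and bounded, the theorem reduces the problem to comparing objective values at the four extreme points. A minimal sketch in Python (the vertex list is read off the figure):

```python
# Extreme points of the example's feasible region
vertices = [(0, 0), (4, 0), (3, 1), (0, 2)]

def z(p):
    """Objective function z = 3x + 5y."""
    x, y = p
    return 3 * x + 5 * y

# By the Extreme Point Theorem, the optimum is attained at a vertex
best = max(vertices, key=z)
print(best, z(best))  # (3, 1) 14
```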
Possible outcomes in solving an LP problem
 has a finite optimal solution, which may not be unique
 unbounded: the objective function of a maximization (minimization) LP problem is unbounded from above (below) on its feasible region
 infeasible: there are no points satisfying all the constraints, i.e. the constraints are contradictory
The Simplex Method
 Simplex method is the classic method for solving LP problems, one of the most important algorithms ever invented
 Invented by George Dantzig in 1947 (Stanford University)
 Based on the iterative improvement idea: generates a sequence of adjacent points of the problem's feasible region with improving values of the objective function until no further improvement is possible
Standard form of LP problem
• must be a maximization problem
• all constraints (except the non-negativity constraints) must be in the form of linear equations
• all the variables must be required to be nonnegative

 Thus, the general linear programming problem in standard form with m constraints and n unknowns (n ≥ m) is

maximize c1 x1 + ...+ cn xn
subject to ai1x1 + ...+ ain xn = bi , i = 1,...,m,
x1 ≥ 0, ... , xn ≥ 0

 Every LP problem can be represented in such form
Example
maximize 3x + 5y              maximize 3x + 5y + 0u + 0v
subject to                    subject to
x + y ≤ 4                     x + y + u     = 4
x + 3y ≤ 6                    x + 3y    + v = 6
x ≥ 0, y ≥ 0                  x ≥ 0, y ≥ 0, u ≥ 0, v ≥ 0

 Variables u and v, transforming inequality constraints into equality constraints, are called slack variables

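The slack-variable transformation can be sketched in code. A minimal helper, assuming every constraint is a ≤ inequality (the name `to_standard_form` is illustrative, not from the slides):

```python
def to_standard_form(A, b):
    """Turn A x <= b (x >= 0) into equalities [A | I] x' = b by
    appending one slack variable per constraint."""
    m = len(A)
    # Row i gains a slack column that is 1 in row i and 0 elsewhere
    rows = [list(row) + [1 if j == i else 0 for j in range(m)]
            for i, row in enumerate(A)]
    return rows, list(b)

# The example: x + y <= 4 and x + 3y <= 6 gain slacks u, v
A_eq, b_eq = to_standard_form([[1, 1], [1, 3]], [4, 6])
print(A_eq)  # [[1, 1, 1, 0], [1, 3, 0, 1]]
print(b_eq)  # [4, 6]
```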
Basic feasible solutions

 A basic solution to a system of m linear equations in n unknowns (n ≥ m) is obtained by setting n – m variables to 0 and solving the resulting system to get the values of the other m variables.

 The variables set to 0 are called nonbasic; the variables obtained by solving the system are called basic.

 A basic solution is called feasible if all its (basic) variables are nonnegative.

Example
x + y + u     = 4
x + 3y    + v = 6
(0, 0, 4, 6) is a basic feasible solution (x, y are nonbasic; u, v are basic)
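For this small example the definition can be checked by brute force: pick each pair of basic variables, set the other two to 0, and solve the resulting 2×2 system. A sketch in pure Python using Cramer's rule:

```python
from itertools import combinations

# The example system: x + y + u = 4, x + 3y + v = 6  (n = 4, m = 2)
A = [[1, 1, 1, 0],
     [1, 3, 0, 1]]
b = [4, 6]
names = ["x", "y", "u", "v"]

def basic_solutions():
    """Yield (basis, solution, feasible) for every choice of two basic
    variables whose 2x2 subsystem is nonsingular."""
    for i, j in combinations(range(4), 2):
        det = A[0][i] * A[1][j] - A[0][j] * A[1][i]
        if det == 0:
            continue  # this pair gives no unique basic solution
        # Cramer's rule on the 2x2 system in variables i and j
        vi = (b[0] * A[1][j] - A[0][j] * b[1]) / det
        vj = (A[0][i] * b[1] - b[0] * A[1][i]) / det
        sol = [0.0] * 4
        sol[i], sol[j] = vi, vj
        yield (i, j), sol, vi >= 0 and vj >= 0

for basis, sol, feasible in basic_solutions():
    labels = ", ".join(names[k] for k in basis)
    print(labels, sol, "feasible" if feasible else "infeasible")
```

Among the printed solutions, the basis {u, v} gives (0, 0, 4, 6) from the slide, and the basis {x, y} gives (3, 1, 0, 0).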
Simplex Tableau

maximize z = 3x + 5y + 0u + 0v
subject to
x + y + u     = 4
x + 3y    + v = 6
x ≥ 0, y ≥ 0, u ≥ 0, v ≥ 0

Initial tableau (rows labeled by the basic variables u and v; the objective row holds the negated coefficients of z):

      x    y    u    v
 u    1    1    1    0 |  4
 v    1    3    0    1 |  6
     -3   -5    0    0 |  0

Outline of the Simplex Method
Step 0 [Initialization] Present a given LP problem in standard form and set up the initial tableau.

Step 1 [Optimality test] If all entries in the objective row are nonnegative — stop: the tableau represents an optimal solution.

Step 2 [Find entering variable] Select (the most) negative entry in the objective row. Mark its column to indicate the entering variable and the pivot column.
Outline of the Simplex Method
Step 3 [Find departing variable]
• For each positive entry in the pivot column, calculate the θ-ratio by dividing that row's entry in the rightmost column by its entry in the pivot column.
(If there are no positive entries in the pivot column — stop: the problem is unbounded.)
• Find the row with the smallest θ-ratio, mark this row to indicate the departing variable and the pivot row.

Step 4 [Form the next tableau]
• Divide all the entries in the pivot row by its entry in the pivot column.
• Subtract from each of the other rows, including the objective row, the new pivot row multiplied by the entry in the pivot column of the row in question.
• Replace the label of the pivot row by the variable's name of the pivot column and go back to Step 1.
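Steps 1–4 above can be sketched as a short Python routine. This is a minimal illustration, not a production solver: it assumes the tableau already encodes a basic feasible solution and omits anti-cycling safeguards such as Bland's rule.

```python
def simplex(tableau, basis):
    """Simplex method on a standard-form tableau.

    tableau: (m+1) x (n+1) lists of numbers; the last row is the
    objective row (negated objective coefficients, last entry = current
    value of z), the last column holds the right-hand sides.
    basis: list of the m current basic-variable column indices.
    """
    m = len(tableau) - 1
    while True:
        obj = tableau[-1]
        # Step 1 [Optimality test]: stop when no negative entries remain
        if all(e >= 0 for e in obj[:-1]):
            return tableau, basis
        # Step 2 [Entering variable]: most negative objective entry
        col = min(range(len(obj) - 1), key=lambda j: obj[j])
        # Step 3 [Departing variable]: smallest theta-ratio over
        # the positive entries of the pivot column
        ratios = [(tableau[i][-1] / tableau[i][col], i)
                  for i in range(m) if tableau[i][col] > 0]
        if not ratios:
            raise ValueError("problem is unbounded")
        _, row = min(ratios)
        # Step 4 [Form the next tableau]: pivot on (row, col)
        pivot = tableau[row][col]
        tableau[row] = [e / pivot for e in tableau[row]]
        for i in range(m + 1):
            if i != row and tableau[i][col] != 0:
                f = tableau[i][col]
                tableau[i] = [a - f * p
                              for a, p in zip(tableau[i], tableau[row])]
        basis[row] = col  # entering variable replaces departing one

# The chapter's running example: maximize 3x + 5y with slacks u, v
# (columns 0..3 are x, y, u, v; the initial basis is {u, v})
t = [[1, 1, 1, 0, 4],
     [1, 3, 0, 1, 6],
     [-3, -5, 0, 0, 0]]
final, basis = simplex(t, [2, 3])
print(basis, round(final[-1][-1], 6))  # [0, 1] 14.0
```

The objective row stores the negated coefficients of z, which is what makes "all entries nonnegative" the stopping condition of Step 1.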
Example of Simplex Method
maximize
z = 3x + 5y + 0u + 0v
subject to
x+ y+ u =4
x + 3y + v =6
x≥0, y≥0, u≥0, v≥0

      x    y    u    v
 u    1    1    1    0 |  4
 v    1    3    0    1 |  6
     -3   -5    0    0 |  0
basic feasible sol. (0, 0, 4, 6), z = 0

      x    y    u    v
 u   2/3   0    1  -1/3 |  2
 y   1/3   1    0   1/3 |  2
    -4/3   0    0   5/3 | 10
basic feasible sol. (0, 2, 2, 0), z = 10

      x    y    u    v
 x    1    0   3/2 -1/2 |  3
 y    0    1  -1/2  1/2 |  1
      0    0    2    1  | 14
basic feasible sol. (3, 1, 0, 0), z = 14

Notes on the Simplex Method
 Finding an initial basic feasible solution may pose a problem
 Theoretical possibility of cycling
 Typical number of iterations is between m and 3m, where m is the number of equality constraints in the standard form. Number of operations per iteration: O(nm)
 Worst-case efficiency is exponential

Improvements
 L. G. Khachian introduced an ellipsoid method (1979) that seemed to overcome some of the simplex method's limitations: O(n^6). Disadvantage – runs with the same complexity on all problems

 Narendra K. Karmarkar of AT&T Bell Laboratories proposed in 1984 a very efficient interior-point algorithm, O(n^3.5). In empirical tests it performs competitively with the simplex method.
