Simplex Method
4th Year Report
in
Optimization Technique
By
Haider Hashim Dkhikh
Supervised by
Dr. Asmaa Hameed
July 2020
Abstract
In the simplex method, iterative operations are used to reach the optimal solution. The procedure therefore consists of a number of steps to find a solution to the problem. There may be any number of variables and constraints in the problem; however, it is very tedious to solve manually when there are more than four variables, and a computer is required for a large number of variables.
Chapter 1: Introduction 4
1.1 Overview 4
1.2 Objective 6
1.3 History 6
List of Figures
Figure 1-1: A system of linear inequalities defines a polytope as a feasible region (p. 5)
Figure 1-2: Polyhedron of the simplex algorithm in 3D (p. 5)
Figure 2-1: Flowchart for finding the optimal solution by the simplex algorithm (p. 10)
1.1 Overview
A linear program in standard form asks to maximize c^T x subject to Ax ≤ b and x ≥ 0, with c = (c1, …, cn) the coefficients of the objective function, (·)^T the matrix transpose, and x = (x1, …, xn) the variables of the problem; A is a p×n matrix and b = (b1, …, bp) is a vector of nonnegative constants (bj ≥ 0 for every j). There is a straightforward process to convert any linear program into one in standard form, so using this form of linear programs results in no loss of generality.
In geometric terms, the feasible region defined by all values of x such that Ax ≤ b and xi ≥ 0 for all i is a (possibly unbounded) convex polytope. An extreme point or vertex of this polytope is known as a basic feasible solution (BFS).
It can be shown that for a linear program in standard form, if the objective function has
a maximum value on the feasible region, then it has this value on (at least) one of the
extreme points. [7] This in itself reduces the problem to a finite computation since there
is a finite number of extreme points, but the number of extreme points is unmanageably
large for all but the smallest linear programs. [8]
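As a small illustration of this property, the following sketch (in Python with NumPy; the two-variable problem is an assumed example, maximizing 3x1 + 2x2 subject to x1 + x2 ≤ 4, x1 + 3x2 ≤ 6 and x1, x2 ≥ 0) enumerates the extreme points of the feasible region and evaluates the objective function at each of them; the maximum is indeed attained at one of the vertices.

```python
# Enumerate the extreme points of a small feasible region {x : A x <= b, x >= 0}
# and evaluate the objective at each one.  Illustrative example only.
import itertools
import numpy as np

# max 3*x1 + 2*x2  subject to  x1 + x2 <= 4,  x1 + 3*x2 <= 6,  x1, x2 >= 0
c = np.array([3.0, 2.0])
A = np.array([[1.0, 1.0],
              [1.0, 3.0]])
b = np.array([4.0, 6.0])

# Treat x1 >= 0 and x2 >= 0 as additional constraints -x_i <= 0,
# then intersect every pair of constraint boundaries.
G = np.vstack([A, -np.eye(2)])
h = np.concatenate([b, np.zeros(2)])

vertices = []
for i, j in itertools.combinations(range(len(h)), 2):
    M = G[[i, j]]
    if abs(np.linalg.det(M)) < 1e-9:      # boundaries are parallel, no vertex
        continue
    x = np.linalg.solve(M, h[[i, j]])
    if np.all(G @ x <= h + 1e-9):         # keep only feasible intersections
        vertices.append(x)

for v in vertices:
    print(f"vertex {np.round(v, 3)}  objective {c @ v:.3f}")

best = max(vertices, key=lambda v: c @ v)
print("maximum attained at extreme point", np.round(best, 3))
```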
The solution of a linear program is accomplished in two steps. In the first step, known
as Phase I, a starting extreme point is found. Depending on the nature of the program
this may be trivial, but in general it can be solved by applying the simplex algorithm
to a modified version of the original program. The possible results of Phase I are either
that a basic feasible solution is found or that the feasible region is empty. [10] [11] [12]
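As a hedged sketch of Phase I (Python with SciPy's linprog; the small equality system is an assumed example), the auxiliary problem adds artificial variables a ≥ 0 to the constraints Ax = b, x ≥ 0 and minimizes their sum: a minimum of zero means a basic feasible solution of the original constraints exists, while a positive minimum means the feasible region is empty.

```python
# Phase I as an auxiliary linear program: minimize the sum of artificial
# variables a subject to A x + a = b, x >= 0, a >= 0.
import numpy as np
from scipy.optimize import linprog

# Assumed example system A x = b with x >= 0 (b chosen nonnegative).
A = np.array([[1.0, 2.0, 1.0],
              [2.0, 1.0, 3.0]])
b = np.array([4.0, 5.0])

m, n = A.shape
# Variables are (x, a); the Phase I objective counts only the artificials.
c_phase1 = np.concatenate([np.zeros(n), np.ones(m)])
A_eq = np.hstack([A, np.eye(m)])

res = linprog(c_phase1, A_eq=A_eq, b_eq=b, bounds=[(0, None)] * (n + m))

if res.status == 0 and res.fun < 1e-9:
    # All artificials are zero, so the x part is a feasible starting point.
    print("feasible; a starting point is x =", np.round(res.x[:n], 4))
else:
    print("the feasible region is empty")
```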
1.3 History
George Dantzig worked on planning methods for the US Army Air Force during
World War II using a desk calculator. During 1946, his colleague challenged him to mechanize the planning process to distract him from taking another job. Dantzig formulated the problem as linear inequalities, inspired by the work of Wassily Leontief; however, at that time he did not include an objective as part of his formulation. Without
an objective, a vast number of solutions can be feasible, and therefore to find the "best"
feasible solution, military-specified "ground rules" must be used that describe how
goals can be achieved as opposed to specifying a goal itself. Dantzig's core insight was
to realize that most such ground rules can be translated into a linear objective function
that needs to be maximized. [13] Development of the simplex method was evolutionary
and happened over a period of about a year. [13]
Third, each unrestricted variable is eliminated from the linear program. This can be done in two ways: one is to solve for the variable in one of the equations in which it appears and then eliminate it by substitution; the other is to replace the variable with the difference of two nonnegative (restricted) variables.
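As a brief sketch of the second approach (Python with NumPy; the numbers are illustrative), replacing an unrestricted variable xk by xk = xk+ − xk− with xk+, xk− ≥ 0 amounts to appending the negated k-th column of A and the negated k-th coefficient of c:

```python
# Replace an unrestricted (free) variable x_k by x_k = xp - xn with xp, xn >= 0.
# In matrix terms this appends the negated k-th column of A (and of c).
import numpy as np

def split_free_variable(c, A, k):
    """Return (c', A') for the problem in which variable k is split in two."""
    c_new = np.append(c, -c[k])
    A_new = np.hstack([A, -A[:, [k]]])
    return c_new, A_new

# Illustrative data: x2 (index 1) is unrestricted in sign.
c = np.array([3.0, 2.0])
A = np.array([[1.0, 1.0],
              [2.0, -1.0]])
c2, A2 = split_free_variable(c, A, k=1)
print(c2)   # [ 3.  2. -2.]
print(A2)   # last column is the negation of column 1
```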
The first row defines the objective function and the remaining rows specify the
constraints. The zero in the first column represents the zero vector of the same
dimension as vector b. (Different authors use different conventions as to the exact
layout.) If the columns of A can be rearranged so that it contains the identity matrix of
order p (the number of rows in A) then the tableau is said to be in canonical form. [14]
The variables corresponding to the columns of the identity matrix are called basic
variables while the remaining variables are called nonbasic or free variables. If the
values of the nonbasic variables are set to 0, then the values of the basic variables are
easily obtained as entries in b and this solution is a basic feasible solution. The
algebraic interpretation here is that the coefficients of the linear equation represented by each row are either 0, 1, or some other number. Each row will have one column with value 1, p − 1 columns with coefficients 0, and the remaining columns with some other coefficients (these other variables represent our nonbasic variables). By setting the values of the nonbasic variables to zero we ensure in each row that the value of the variable represented by the 1 in its column is equal to the b value at that row.
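The following sketch (Python with NumPy; the numbers are an assumed example) shows a constraint block [A | b] in canonical form: the columns of the slack variables form the identity matrix, so they are the basic variables, and setting the nonbasic variables to zero makes each basic variable equal to the corresponding entry of b.

```python
# A constraint block [A | b] in canonical form: the columns of the slack
# variables x3 and x4 form an identity matrix, so they are the basic variables.
import numpy as np

#             x1   x2   x3   x4 |  b
T = np.array([[1.0, 1.0, 1.0, 0.0, 4.0],
              [1.0, 3.0, 0.0, 1.0, 6.0]])

basic = [2, 3]          # columns of the identity matrix (x3, x4)
nonbasic = [0, 1]       # remaining columns (x1, x2)

# Setting the nonbasic variables to zero, each basic variable equals the
# b entry of the row in which its column holds the 1.
x = np.zeros(4)
for row, col in enumerate(basic):
    x[col] = T[row, -1]

print("basic feasible solution:", x)   # [0. 0. 4. 6.]
```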
In canonical form, the objective row can be written as z = z_B + Σ_j c̄_j x_j, where the sum runs over the nonbasic variables and z_B is the value of the objective function at the corresponding basic feasible solution. The updated coefficients c̄_j, also known as relative cost coefficients, are the rates of change of the objective function with respect to the nonbasic variables. [11]
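For a given basis matrix B these quantities can be computed directly: z_B = c_B^T B^{-1} b, and the relative cost coefficient of a nonbasic column A_j is c̄_j = c_j − c_B^T B^{-1} A_j. A short NumPy sketch, continuing the small illustrative data used above:

```python
# Relative cost coefficients for a given basis of max c^T x, A x = b, x >= 0
# (slack variables already added).  Hedged sketch; the data are illustrative.
import numpy as np

c = np.array([3.0, 2.0, 0.0, 0.0])          # objective incl. slacks x3, x4
A = np.array([[1.0, 1.0, 1.0, 0.0],
              [1.0, 3.0, 0.0, 1.0]])
b = np.array([4.0, 6.0])

basic, nonbasic = [2, 3], [0, 1]
B = A[:, basic]
cB = c[basic]

y = np.linalg.solve(B.T, cB)                # simplex multipliers y = B^{-T} c_B
zB = cB @ np.linalg.solve(B, b)             # objective value at the current BFS
reduced = c[nonbasic] - y @ A[:, nonbasic]  # relative cost coefficients

print("z_B =", zB)                          # 0.0 at the all-slack basis
print("relative cost coefficients:", reduced)   # [3. 2.] -> both can improve z
```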
Figure (2-1): Flowchart for finding the optimal solution by the simplex algorithm.
2.5 Application
Most applications of the simplex method are fairly straightforward, subject to the availability of the data in the correct form. The restrictions on time, materials, money and other resources can usually be written as equations or inequalities that are linear functions of the variables. The objective function is also often linear in these variables. Occasionally the variables used must be changed: instead of the amount of each product made, it is necessary to use another measure, e.g. the time a machine is used, the quantity of raw material, or the proportions that are mixed. This change of variable is sometimes difficult to see, and this chapter gives some unusual applications.
Example: Business Application: Maximum Profit
Solution:
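As a minimal sketch (Python with SciPy; the two products, unit profits and resource limits are hypothetical figures chosen only for illustration), consider maximizing profit 40x1 + 30x2 subject to a labour limit 2x1 + x2 ≤ 100 hours and a material limit x1 + x2 ≤ 80 units, with x1, x2 ≥ 0:

```python
# Hypothetical maximum-profit problem solved with SciPy's LP solver
# (which handles the Phase I / Phase II work internally).
import numpy as np
from scipy.optimize import linprog

profit = np.array([40.0, 30.0])     # profit per unit of product 1 and 2 (assumed)
A_ub = np.array([[2.0, 1.0],        # labour hours per unit
                 [1.0, 1.0]])       # material per unit
b_ub = np.array([100.0, 80.0])      # available labour and material (assumed)

# linprog minimizes, so negate the profit vector to maximize it.
res = linprog(-profit, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])

print("production plan:", np.round(res.x, 2))
print("maximum profit :", round(-res.fun, 2))
```

Under these assumed figures the optimal plan is 20 units of the first product and 60 units of the second, giving a maximum profit of 2,600.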
This method is general and can be applied to any experimental work involving
optimization of an unknown function depending on several variables.
Chapter 3
3.1 Discussion
In summary, the simplex method demonstrated in the previous sections consists of the following steps:
1. Convert the problem to standard form and set up the initial tableau.
2. Find an initial basic feasible solution (using Phase I if one is not directly available).
3. Compute the relative cost coefficients of the nonbasic variables; if none of them indicates a possible improvement, the current basic feasible solution is optimal and the procedure stops.
4. Otherwise, choose an improving nonbasic variable to enter the basis.
5. Use the minimum-ratio test to choose the basic variable that leaves the basis; if no ratio can be formed, the problem is unbounded.
6. Pivot to bring the tableau back to canonical form for the new basis, and return to step 3.
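These steps can be illustrated with a compact dense-tableau implementation (a sketch in Python with NumPy for maximizing c^T x subject to Ax ≤ b and x ≥ 0 with b ≥ 0, so the slack variables supply the starting basic feasible solution; Phase I and anti-cycling safeguards are omitted, so it is not a production solver):

```python
# Minimal tableau simplex for:  maximize c^T x  s.t.  A x <= b,  x >= 0,  b >= 0.
# The slack variables give the starting basic feasible solution (step 2).
import numpy as np

def simplex(c, A, b, tol=1e-9):
    m, n = A.shape
    # Step 1: build the tableau [A  I  b] with the objective row [-c  0  0].
    T = np.zeros((m + 1, n + m + 1))
    T[:m, :n], T[:m, n:n + m], T[:m, -1] = A, np.eye(m), b
    T[-1, :n] = -c
    basis = list(range(n, n + m))               # slack variables are basic

    while True:
        # Steps 3-4: a negative objective-row entry means the objective can
        # still be increased; the most negative one enters (Dantzig's rule).
        j = int(np.argmin(T[-1, :-1]))
        if T[-1, j] >= -tol:
            break                               # optimal
        # Step 5: minimum-ratio test chooses the leaving row.
        col, rhs = T[:m, j], T[:m, -1]
        ratios = np.where(col > tol, rhs / np.where(col > tol, col, 1.0), np.inf)
        i = int(np.argmin(ratios))
        if not np.isfinite(ratios[i]):
            raise ValueError("problem is unbounded")
        # Step 6: pivot on (i, j) and update the basis.
        T[i] /= T[i, j]
        for r in range(m + 1):
            if r != i:
                T[r] -= T[r, j] * T[i]
        basis[i] = j

    x = np.zeros(n + m)
    x[basis] = T[:m, -1]
    return x[:n], T[-1, -1]                     # solution and optimal value

# Example: the small problem used earlier in this report's illustrations.
x, z = simplex(np.array([3.0, 2.0]),
               np.array([[1.0, 1.0], [1.0, 3.0]]),
               np.array([4.0, 6.0]))
print("x* =", x, " optimal value =", z)
```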
3.2 Conclusions and Results
When there is more than one scarce input factor, linear programming can be used to determine the optimal plan, and this information can be obtained by using the Simplex method. The Simplex method has the added advantage that it provides details of the opportunity costs and the marginal rates of substitution for the scarce resources. Linear programming can be applied to a variety of problems and allows the costs of production inputs to be computed. This information can be used for decision-making, standard costing variance analysis and the setting of transfer prices. There are limitations when linear programming is applied to real-world situations, but some of these problems can be overcome.
References
[1]. R. E. Stone and C. A. Tovey, "The simplex and projective scaling algorithms as iteratively reweighted least squares methods," SIAM Review, Vol. 33, No. 2, pp. 220–237, 1991.
[2]. K. G. Murty, Linear Programming, John Wiley & Sons, 2000.
[3]. G. B. Dantzig, Linear Programming and Extensions, Princeton University Press, Princeton, NJ, 1963; W. J. Adams, A. Gewirtz, and L. V. Quintas, Elements of Linear Programming, Van Nostrand Reinhold, New York, 1969; W. W. Garvin, Introduction to Linear Programming, McGraw-Hill, New York, 1960.
[4]. S. I. Gass, Linear Programming: Methods and Applications, 5th ed., McGraw-Hill, New York, 1985; G. Hadley, Linear Programming, Addison-Wesley, Reading, MA, 1962.
[5]. S. Vajda, An Introduction to Linear Programming and the Theory of Games, Wiley, New York, 1960.
[6]. W. Orchard-Hays, Advanced Linear Programming Computing Techniques, McGraw-Hill, New York, 1968.
[7]. S. I. Gass, An Illustrated Guide to Linear Programming, McGraw-Hill, New York, 1970; M. F. Rubinstein and J. Karagozian, "Building design using linear programming," Journal of the Structural Division, Proceedings of ASCE, Vol. 92, No. ST6, pp. 223–245, Dec. 1966.
[8]. T. Au, Introduction to Systems Engineering: Deterministic Models, Addison-Wesley, Reading, MA, 1969.
[9]. H. A. Taha, Operations Research: An Introduction, 5th ed., Macmillan, New York, 1992; W. F. Stoecker, Design of Thermal Systems, 3rd ed., McGraw-Hill, New York, 1989.
[10]. W. L. Winston, Operations Research: Applications and Algorithms, 2nd ed., PWS-Kent, Boston, 1991.
[11]. R. M. Stark and R. L. Nicholls, Mathematical Foundations for Design: Civil Engineering Systems, McGraw-Hill, New York, 1972.
[12]. N. Karmarkar, "A new polynomial-time algorithm for linear programming," Combinatorica, Vol. 4, No. 4, pp. 373–395, 1984.
[13]. A. Maass et al., Design of Water Resources Systems, Harvard University Press, Cambridge, MA, 1962.
[14]. P. Q. Pan, "Primal perturbation simplex algorithms for linear programming," J. Comput. Math., Vol. 18, pp. 587–596, 2000.