
LINEAR PROGRAMMING PROBLEM

Many problems can be formulated as maximizing or minimizing an objective, given limited resources and competing constraints. If we specify the objective as a linear function of certain variables, and if we can specify the constraints on resources as equalities or inequalities on those variables, then we have a linear-programming problem. Linear programs arise in a variety of practical applications.

General linear programs


In the general linear-programming problem, we wish to optimize a linear function subject to a set of linear inequalities. Given a set of real numbers a1, a2, ..., an and a set of variables x1, x2, ..., xn, a linear function f on those variables is defined by

    f(x1, x2, ..., xn) = a1 x1 + a2 x2 + ... + an xn = Σ_{j=1}^{n} aj xj

If b is a real number and f is a linear function, then the equation

    f(x1, x2, ..., xn) = b

is a linear equality, and the inequalities

    f(x1, x2, ..., xn) ≤ b    and    f(x1, x2, ..., xn) ≥ b

are linear inequalities. We use the term linear constraints to denote either linear equalities or linear inequalities. In linear programming, we do not allow strict inequalities. Formally, a linear-programming problem is the problem of either minimizing or maximizing a linear function subject to a finite set of linear constraints. If we are to minimize, then we call the linear program a minimization linear program, and if we are to maximize, then we call the linear program a maximization linear program.
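For concreteness, here is a small maximization linear program in two variables; it is an invented example (not from the original text), included only to show how a linear objective and linear constraints fit together:

```latex
% An invented two-variable maximization linear program (illustrative only).
\begin{aligned}
\text{maximize}   \quad & 3x_1 + 5x_2 \\
\text{subject to} \quad & x_1 + 2x_2 \le 8, \\
                        & 3x_1 + x_2 \le 9, \\
                        & x_1 \ge 0,\; x_2 \ge 0.
\end{aligned}
```

The objective 3x1 + 5x2 is a linear function of x1 and x2, and every constraint is a linear inequality, so this is a maximization linear program in the sense defined above.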

Problem Input
Consider a linear programming problem: maximize c^T x subject to Ax ≤ b, x ≥ 0.

The simplex algorithm requires the linear programming problem to be in augmented form. The problem can then be written as follows in matrix form: Maximize Z in:
    [ 1  -c^T  0 ]   [ Z   ]   [ 0 ]
    [ 0   A    I ]   [ x   ] = [ b ]
                     [ x_s ]
where x are the variables from the standard form, x_s are the introduced slack variables from the augmentation process, c contains the optimization coefficients, A and b describe the system of constraint equations, and Z is the variable to be maximized. The system is typically underdetermined, since the number of variables exceeds the number of equations. The difference between the number of variables and the number of equations gives us the degrees of freedom associated with the problem. Any solution, optimal or not, will therefore include a number of variables of arbitrary value. The simplex algorithm uses zero as this arbitrary value, and the number of variables with value zero equals the degrees of freedom.
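As a rough illustration of this augmentation step (a sketch of my own, not taken from the original document; the problem data and the use of NumPy are assumptions), the block matrix above can be assembled for a small problem as follows:

```python
import numpy as np

# A small made-up problem: maximize c^T x subject to A x <= b, x >= 0.
c = np.array([3.0, 5.0])
A = np.array([[1.0, 2.0],
              [3.0, 1.0]])
b = np.array([8.0, 9.0])

m, n = A.shape  # m constraints, n original variables

# Augmented form: the unknowns are (Z, x, x_s), one slack variable per constraint.
# Top row:    [ 1  -c^T  0 ]      (defines Z - c^T x = 0)
# Lower rows: [ 0   A    I ]      (constraints with slacks added, right-hand side b)
top = np.hstack([[1.0], -c, np.zeros(m)])
bottom = np.hstack([np.zeros((m, 1)), A, np.eye(m)])
augmented = np.vstack([top, bottom])
rhs = np.hstack([[0.0], b])

print(augmented)
print(rhs)
```

With m constraints and n original variables there are n + m unknowns (besides Z) but only m constraint equations, so n of them can be fixed at zero; that is exactly the degrees-of-freedom count described above.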

In the simplex algorithm, variables of nonzero value are called basic variables, and variables of zero value are called nonbasic variables. [This definition is problematic, since basic variables can also take the value zero.] This form simplifies finding the initial basic feasible solution (BFS), since all variables from the standard form can be chosen to be nonbasic (zero), while all new variables introduced in the augmented form are basic (nonzero), since their value can be trivially calculated (x_s = b for them, because the augmented problem matrix is the identity on its right half).

AN OVERVIEW OF LINEAR PROGRAMMING


In order to describe properties of and algorithms for linear programs, it is convenient to have canonical forms in which to express them. We shall use two forms, standard and slack. Informally, a linear program in standard form is the maximization of a linear function subject to linear inequalities, whereas a linear program in slack form is the maximization of a linear function subject to linear equalities.
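For example (an invented constraint, not from the original text), an inequality constraint of a standard-form program becomes an equality in slack form by introducing a nonnegative slack variable s that measures how much room the inequality has left:

```latex
% Standard form (inequality)  <=>  slack form (equality plus a nonnegative slack).
x_1 + 2x_2 \le 8
\qquad\Longleftrightarrow\qquad
s = 8 - x_1 - 2x_2,\quad s \ge 0.
```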

APPLICATIONS OF LINEAR PROGRAMMING

Linear programming has a large number of applications. It is now a standard tool taught to students in most business schools. The election scenario is one typical example. Two more examples of linear programming are the following:

An airline wishes to schedule its flight crews. The Federal Aviation Administration imposes many constraints, such as limiting the number of consecutive hours that each crew member can work and insisting that a particular crew work only on one model of aircraft during each month. The airline wants to schedule crews on all of its flights using as few crew members as possible.

An oil company wants to decide where to drill for oil. Siting a drill at a particular location has an associated cost and, based on geological surveys, an expected payoff of some number of barrels of oil. The company has a limited budget for locating new drills and wants to maximize the amount of oil it expects to find, given this budget.
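A toy version of the drilling example can be written down and solved directly. The sketch below is my own (not from the original document): it treats the decision to drill at each site as a fractional variable between 0 and 1, invents the payoff, cost, and budget numbers, and assumes SciPy's linprog solver is available.

```python
from scipy.optimize import linprog

# Toy LP relaxation of the oil-drilling example (all numbers invented):
#   maximize   payoff^T x          (expected barrels found)
#   subject to cost^T x <= budget, 0 <= x_i <= 1
payoff = [50.0, 80.0, 30.0]   # expected barrels per site
cost = [10.0, 25.0, 6.0]      # cost of siting a drill at each location
budget = 30.0

# linprog minimizes, so negate the payoff to turn maximization into minimization.
result = linprog(c=[-p for p in payoff],
                 A_ub=[cost], b_ub=[budget],
                 bounds=[(0, 1)] * 3,
                 method="highs")

print(result.x, -result.fun)
```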

THE SIMPLEX METHOD


The Simplex Method is "a systematic procedure for generating and testing candidate vertex solutions to a linear program." It begins at an arbitrary corner of the solution set. At each iteration, the Simplex Method selects the variable that will produce the largest change towards the minimum (or maximum) solution. That variable replaces one of its compatriots that is most severely restricting it, thus moving the Simplex Method to a different corner of the solution set and closer to the final solution. In addition, the Simplex Method can determine if no solution actually exists. Note that the algorithm is greedy, since it selects the best choice at each iteration without needing information from previous or future iterations.

The Simplex Method solves a linear program of the form described in the figure below. Here, the coefficients c_j represent the respective weights, or costs, of the variables x_j. The minimized statement is similarly called the cost of the solution. The coefficients of the system of equations are represented by a_ij, and any constant values in the system of equations are combined on the right-hand side of the inequality in the variables b_i. Combined, these statements represent a linear program, to which we seek a solution of minimum cost.

Figure

    minimize      Σ_{j=1}^{n} c_j x_j

    subject to    Σ_{j=1}^{n} a_ij x_j ≤ b_i    (i = 1, 2, ..., m)

                  x_j ≥ 0                       (j = 1, 2, ..., n)

A Linear Program

Solving this linear program involves solutions of the set of equations. If no solution to the set of equations is yet known, slack variables x_{n+1}, x_{n+2}, ..., x_{n+m}, adding no cost to the solution, are introduced. The initial basic feasible solution (BFS) will be the solution of the linear program where the following holds:

    x_i = 0          (i = 1, 2, ..., n)
    x_i = b_{i-n}    (i = n + 1, n + 2, ..., n + m)
Once a solution to the linear program has been found, successive improvements are made to the solution. In particular, one of the non-basic variables (with a value of zero) is chosen to be increased so that the value of the cost function, Σ_{j=1}^{n} c_j x_j, decreases. That variable is then increased, maintaining the equality of all the equations while keeping the other non-basic variables at zero, until one of the basic (nonzero) variables is reduced to zero and thus removed from the basis. At this point, a new solution has been determined at a different corner of the solution set.

The process is then repeated with a new variable becoming basic as another becomes non-basic. Eventually, one of three things will happen. First, a solution may occur where no non-basic variable will decrease the cost, in which case the current solution is the optimal solution. Second, a non-basic variable might increase to infinity without causing a basic variable to become zero, resulting in an unbounded solution. Finally, no solution may actually exist and the Simplex Method must abort. As is common for research in linear programming, the possibility that the Simplex Method might return to a previously visited corner will not be considered here.
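The iteration just described can be carried out on a tableau. The following is a minimal sketch of one way to do so (my own illustrative code, not taken from the original document): it assumes a maximization problem with constraints A x ≤ b and b ≥ 0, so the slack variables give the initial basis, and it ignores degeneracy and cycling, just as the text does.

```python
import numpy as np

def simplex_max(c, A, b, max_iters=100):
    """Maximize c^T x subject to A x <= b, x >= 0, assuming b >= 0.

    Illustrative sketch only: dense arithmetic, no anti-cycling rule.
    Returns (x, objective value) or raises ValueError if unbounded.
    """
    m, n = A.shape
    # Tableau: m constraint rows [A | I | b], then the objective row [-c | 0 | 0].
    tableau = np.zeros((m + 1, n + m + 1))
    tableau[:m, :n] = A
    tableau[:m, n:n + m] = np.eye(m)
    tableau[:m, -1] = b
    tableau[-1, :n] = -c
    basis = list(range(n, n + m))        # slack variables start in the basis

    for _ in range(max_iters):
        # Entering variable: most negative coefficient in the objective row.
        col = int(np.argmin(tableau[-1, :-1]))
        if tableau[-1, col] >= -1e-9:
            break                        # optimal: no improving direction left
        # Leaving variable: minimum ratio test over rows with positive pivot entries.
        ratios = [tableau[i, -1] / tableau[i, col] if tableau[i, col] > 1e-9 else np.inf
                  for i in range(m)]
        row = int(np.argmin(ratios))
        if ratios[row] == np.inf:
            raise ValueError("unbounded")
        # Pivot: scale the pivot row, then eliminate the column from every other row.
        tableau[row] /= tableau[row, col]
        for i in range(m + 1):
            if i != row:
                tableau[i] -= tableau[i, col] * tableau[row]
        basis[row] = col

    x = np.zeros(n + m)
    x[basis] = tableau[:m, -1]
    return x[:n], tableau[-1, -1]

# Example: maximize 3x1 + 5x2 subject to x1 + 2x2 <= 8, 3x1 + x2 <= 9.
x, value = simplex_max(np.array([3.0, 5.0]),
                       np.array([[1.0, 2.0], [3.0, 1.0]]),
                       np.array([8.0, 9.0]))
print(x, value)   # optimum at x = (2, 3) with value 21
```

Each pass through the loop corresponds to one move from a corner of the solution set to an adjacent one: the entering column is the non-basic variable being increased, and the minimum ratio test identifies the basic variable that is driven to zero and leaves the basis.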

THE BIG M METHOD

Introducing slack, surplus and artificial variables:


1. If any problem constraint has a negative constant on the right side, multiply both sides by -1 to obtain a constraint with a non-negative constant.
2. Introduce a slack variable in each ≤ constraint.
3. Introduce a surplus variable and an artificial variable in each ≥ constraint.
4. Introduce an artificial variable in each = constraint.
5. For each artificial variable Ai, add -M·Ai to the objective function in the case of maximization and +M·Ai in the case of minimization. Use the same constant M for all artificial variables.
6. Form the simplex table for the modified problem.
7. Solve the modified problem by the simplex table.
8. Relate the solution of the modified problem to the original problem:
   i. If the modified problem has no solution, then the original problem has no solution.
   ii. If any artificial variables are non-zero in the solution to the modified problem, then the original problem has no solution.

A small illustrative sketch of steps 1 to 5 is given after the list.
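As a rough sketch of steps 1 through 5 (my own illustrative code, not from the original document; the problem data and the numeric value chosen for M are invented), the following sets up the modified constraint rows and objective for a small minimization problem with one ≤ constraint, one ≥ constraint, and one = constraint:

```python
import numpy as np

# Invented minimization problem:
#   minimize   2x1 + 3x2
#   subject to  x1 +  x2 <= 10     (gets a slack variable s1)
#               x1 + 2x2 >= 6      (gets a surplus variable s2 and an artificial A1)
#              3x1 +  x2  = 9      (gets an artificial A2)
#               x1, x2 >= 0
M = 1e6  # a "big" constant, much larger than any other coefficient in the problem

# Variable order: [x1, x2, s1, s2, A1, A2]
constraints = np.array([
    [1.0, 1.0, 1.0,  0.0, 0.0, 0.0],   # x1 + x2 + s1 = 10
    [1.0, 2.0, 0.0, -1.0, 1.0, 0.0],   # x1 + 2x2 - s2 + A1 = 6
    [3.0, 1.0, 0.0,  0.0, 0.0, 1.0],   # 3x1 + x2 + A2 = 9
])
rhs = np.array([10.0, 6.0, 9.0])

# Minimization, so each artificial variable is penalized with +M in the objective.
objective = np.array([2.0, 3.0, 0.0, 0.0, M, M])

print(constraints)
print(objective, rhs)
```

Steps 6 and 7 then run the ordinary simplex iterations on this modified problem, and step 8 checks whether A1 and A2 ended up at zero in the result.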
