Linear programming problems:

A linear programming problem consists of a set of simultaneous linear equations (or inequalities) which represent the conditions of the problem, together with a linear function of the problem's variables.
The linear function which is to be optimized is called the objective function, and the conditions of the problem, expressed as simultaneous linear equations (or inequalities), are referred to as constraints.
It also includes the non-negativity restrictions on the variables.

A LPP can be stated as:

optimize Z = c1x1 + c2x2 + ... + cnxn
subject to:
a11x1 + a12x2 + ... + a1nxn (≤, =, ≥) b1
a21x1 + a22x2 + ... + a2nxn (≤, =, ≥) b2
...
am1x1 + am2x2 + ... + amnxn (≤, =, ≥) bm
and x1, x2, ..., xn ≥ 0.

The above LPP may also be stated in matrix form as follows:
optimize Z = CX
subject to: AX (≤, =, ≥) b
and X ≥ 0,

where
A = [aij] is the coefficient matrix of order m×n,
C = (c1, c2, ..., cn) is a row vector known as the price vector,
X = (x1, x2, ..., xn)^T is the column vector of variables (the variable matrix), and
b = (b1, b2, ..., bm)^T is a column vector called the requirement vector (or value matrix).

Note:
In a general linear programming problem it is assumed that the number of rows of the coefficient matrix A is less than its number of columns.

Basic solution:
If there are 'n' equations in 'm' variables (m > n), then the solution obtained by setting (m − n) of the variables to zero and solving for the remaining n is known as a basic solution.

No. of basic solutions = C(m, n) = m!/(n!(m − n)!)
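As a quick check, this count can be computed directly in Python (the values m = 3, n = 2 below are assumed for illustration):

```python
from math import comb, factorial

# Number of basic solutions for n equations in m variables (m > n).
# m = 3, n = 2 are assumed illustrative values.
m, n = 3, 2

count = comb(m, n)  # m! / (n! (m - n)!)
print(count)        # → 3

# Same result from the factorial formula:
assert count == factorial(m) // (factorial(n) * factorial(m - n))
```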

Basic variables:
The variables associated with a basic solution are known as the basic variables.
Basic solutions are of two types.

1. Non-degenerate basic solution:
A basic solution is called non-degenerate if none of the basic variables is zero.

2. Degenerate basic solution:
A basic solution is called degenerate if one or more of the basic variables are zero.

Feasible solution:
Any solution which satisfies the conditions of the problem and the non-negativity restrictions is known as a feasible solution.

Basic feasible solution:
A feasible solution which is also a basic solution is known as a basic feasible solution.

Example:
Find the basic feasible solutions of a system of two equations in three variables.

Solution:
Here, the number of equations (n) = 2 and the number of variables (m) = 3.
So we set m − n = 3 − 2 = 1 variable to zero at a time.
Setting x3 = 0, x2 = 0 and x1 = 0 in turn and solving the resulting pair of equations for the remaining two variables gives the basic solutions X1, X2 and X3 respectively.
Of these, X1 and X3 have all their components non-negative, so the basic feasible solutions are X1 and X3.
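The enumeration above can be mechanised. The sketch below (Python with NumPy, using an assumed 2×3 system rather than the one from the example) sets each variable to zero in turn, solves for the other two, and checks non-negativity:

```python
from itertools import combinations
import numpy as np

# Assumed illustrative system of 2 equations in 3 variables:
#   x1 +  x2 +  x3 = 4
#  2x1 +  x2 + 3x3 = 9
A = np.array([[1.0, 1.0, 1.0],
              [2.0, 1.0, 3.0]])
b = np.array([4.0, 9.0])
n_eq, m_var = A.shape

basic_solutions = []
for cols in combinations(range(m_var), n_eq):   # C(3, 2) = 3 choices of basic variables
    B = A[:, cols]
    if abs(np.linalg.det(B)) < 1e-12:
        continue                                # singular basis: skip
    x = np.zeros(m_var)
    x[list(cols)] = np.linalg.solve(B, b)       # non-basic variables stay zero
    basic_solutions.append(x)

for x in basic_solutions:
    feasible = bool(np.all(x >= -1e-12))
    print(x, "feasible" if feasible else "not feasible")
```

For this assumed system, the basis {x1, x2} yields a negative component, so only two of the three basic solutions are basic feasible solutions.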

In general we use two methods for the solution of a linear programming problem:
1. Graphical method.
2. Simplex method.

Graphical method:
If the objective function Z is a function of two variables only, then the problem can be solved by the graphical method. A problem in three variables can also be solved by this method, but it becomes quite complicated.
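The geometry behind the graphical method, that the optimum lies at a corner of the feasible region, can be sketched numerically by intersecting the boundary lines pairwise (the problem data below are assumed for illustration):

```python
from itertools import combinations
import numpy as np

# Assumed two-variable problem: max Z = 3x + 5y
# s.t. x <= 4, 2y <= 12, 3x + 2y <= 18, x >= 0, y >= 0.
A = np.array([[1.0, 0.0],
              [0.0, 2.0],
              [3.0, 2.0],
              [-1.0, 0.0],     # -x <= 0 encodes x >= 0
              [0.0, -1.0]])    # -y <= 0 encodes y >= 0
b = np.array([4.0, 12.0, 18.0, 0.0, 0.0])
c = np.array([3.0, 5.0])

best_point, best_z = None, -np.inf
for i, j in combinations(range(len(A)), 2):    # intersect each pair of boundary lines
    M = A[[i, j]]
    if abs(np.linalg.det(M)) < 1e-12:
        continue                               # parallel lines: no corner point
    p = np.linalg.solve(M, b[[i, j]])
    if np.all(A @ p <= b + 1e-9):              # keep only feasible corner points
        z = float(c @ p)
        if z > best_z:
            best_point, best_z = p, z

print(best_point, best_z)                      # optimal vertex and value
```

This brute-force vertex search is exactly what reading the optimum off the graph does by eye, and it is why the method stops being practical beyond two variables.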

Simplex method:

This is the most powerful tool of linear programming, as any problem can be solved by this method.
This method is an algebraic procedure which progressively approaches the optimal solution. The procedure is straightforward and requires only time and patience to execute manually.

Simplex method
Introduction:
It is either impossible, or requires great labour, to search for an optimal solution from amongst all the feasible solutions, which may be infinite in number. The fundamental theorem of linear programming, which states that "if the given linear programming problem has an optimal solution, then at least one basic feasible solution must be optimal", forms a firm base for the solution of a L.P.P. According to this theorem we can search for the optimal solution among the basic feasible solutions only, which are finite in number. Also, it is easier to find an optimum among the basic feasible solutions than among all the feasible solutions, which may be infinite in number. In this way a L.P.P. can be solved by enumerating all the B.F. solutions. But it is not an easy job to enumerate all the B.F. solutions, even for small values of m (number of constraints) and n (number of variables). To overcome this difficulty, a method known as the Simplex Method (or Simplex Algorithm) was developed by George Dantzig in 1947 and made available in 1951. This method is an iterative (step-by-step) procedure in which we proceed in systematic steps from an initial B.F. solution to other B.F. solutions and finally, in a finite number of steps, to an optimal B.F. solution, in such a way that the value of the objective function at each step (iteration) is better (or at least not worse) than at the preceding step.

In other words, the simplex algorithm consists of the following main steps:
(i) Finding a trial B.F.S. of the L.P.P.
(ii) Testing whether it is an optimal solution or not.
(iii) Improving the first trial B.F.S. (if it is not optimal) by a set of rules.
(iv) Repeating steps (ii) and (iii) till we get an optimal solution.
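In practice these steps are carried out by standard library routines. A minimal sketch using SciPy's linprog (assuming SciPy is available; the problem data are invented for illustration, and note that linprog minimises, so the objective is negated):

```python
from scipy.optimize import linprog

# Assumed problem: max Z = 2x1 + 3x2
# s.t. x1 + x2 <= 4, x1 + 3x2 <= 6, x1, x2 >= 0.
c = [-2.0, -3.0]                 # negate: linprog minimises c @ x
A_ub = [[1.0, 1.0],
        [1.0, 3.0]]
b_ub = [4.0, 6.0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, -res.fun)           # optimal point and maximum value
```

The optimum lands on a vertex of the feasible region, i.e. at a basic feasible solution, exactly as the fundamental theorem guarantees.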

Fundamental theorem of linear programming:

If a L.P. problem
Max. Z = cx, s.t. Ax = b, x ≥ 0,
where A is an m×n matrix of coefficients given by A = [aij], has an optimal solution, then at least one basic feasible solution must be optimal.

Proof.
Let x* = (x1*, x2*, ..., xn*) be an optimal feasible solution of the given L.P. problem and let Z* be the maximum value of the objective function corresponding to this solution x*.
Suppose that k components (variables) in x* are non-zero and the remaining (n − k) components are zero. We can assume these non-zero components to be the first k components of x*,
i.e. x* = (x1*, x2*, ..., xk*, 0, 0, ..., 0).

∴ x1*a1 + x2*a2 + ... + xk*ak = b, where ai denotes the i-th column of A   ...(1)

and c1x1* + c2x2* + ... + ckxk* = Z*   ...(2)

Now there are two possibilities.

1. If the vectors a1, a2, ..., ak are L.I., then by definition x* is a B.F.S. which is also optimal. Hence the theorem is true in this case. This solution is degenerate if k < m and non-degenerate if k = m.

2. If a1, a2, ..., ak are L.D. This is the case when k > m. In this case we can reduce the number of non-zero variables step by step as follows.

Since a1, a2, ..., ak are L.D., there exist scalars λ1, λ2, ..., λk, with at least one λi ≠ 0, s.t.

λ1a1 + λ2a2 + ... + λkak = 0   ...(3)

We may suppose that at least one λi is positive, because if none is positive then we can multiply equation (3) by −1 and get positive λi.

Let
v = max (λi/xi*), taken over 1 ≤ i ≤ k   ...(4)

Clearly v > 0, since xi* > 0 for i = 1, 2, ..., k and at least one λi is positive.

Dividing (3) by v and subtracting the result from (1), we get

(x1* − λ1/v)a1 + (x2* − λ2/v)a2 + ... + (xk* − λk/v)ak = b   ...(5)

so that
x' = (x1* − λ1/v, x2* − λ2/v, ..., xk* − λk/v, 0, ..., 0)   ...(6)
is a solution of the system Ax = b.

Also from (4), we have
λi/xi* ≤ v, for i = 1, 2, ..., k
or λi/v ≤ xi*
or xi* − λi/v ≥ 0,

i.e. all the components of the solution x' given by (6) are non-negative and thus x' is a F.S. to the given L.P.P.

Also from (4), v = λr/xr* for at least one index r, so that xr* − λr/v = 0, i.e. xi* − λi/v vanishes for at least one value of i. Therefore the solution x' given by (6) cannot have more than (k − 1) non-zero components (variables).

Thus we have derived a new F.S. from the given optimal solution, containing a smaller number of non-zero variables.

Now we shall prove that x' is also an optimal solution.

Let Z' be the value of the objective function corresponding to this solution. Then

Z' = c1(x1* − λ1/v) + ... + ck(xk* − λk/v) = Z* − (1/v)(c1λ1 + c2λ2 + ... + ckλk)   ...(7)

Now Z' = Z* is possible only if c1λ1 + c2λ2 + ... + ckλk = 0.

To prove c1λ1 + c2λ2 + ... + ckλk = 0, we shall use contradiction.

If possible, suppose that c1λ1 + c2λ2 + ... + ckλk ≠ 0. Then either

(i) c1λ1 + c2λ2 + ... + ckλk < 0, or

(ii) c1λ1 + c2λ2 + ... + ckλk > 0.

We can find a real number r (negative in case (i) and positive in case (ii)) such that
r(c1λ1 + c2λ2 + ... + ckλk) > 0.
Adding c1x1* + c2x2* + ... + ckxk* to both sides,
(c1x1* + ... + ckxk*) + r(c1λ1 + ... + ckλk) > c1x1* + ... + ckxk*
or c1(x1* + rλ1) + c2(x2* + rλ2) + ... + ck(xk* + rλk) > Z*   [from (2)]   ...(8)

Now multiplying (3) by r and adding to (1), we have

(x1* + rλ1)a1 + (x2* + rλ2)a2 + ... + (xk* + rλk)ak = b,

which implies that
x(r) = (x1* + rλ1, x2* + rλ2, ..., xk* + rλk, 0, ..., 0)
is, for all values of r, a solution of the system Ax = b.
Its components are non-negative, i.e. xi* + rλi ≥ 0, if we choose r s.t.
r ≥ −xi*/λi, when λi > 0,
r ≤ −xi*/λi, when λi < 0,
r unrestricted, when λi = 0.
Thus xi* + rλi ≥ 0 for all i = 1, 2, ..., k, if we choose r such that

max over λi > 0 of (−xi*/λi) ≤ r ≤ min over λi < 0 of (−xi*/λi)   ...(9)

Also we have xi* > 0, so that −xi*/λi < 0 when λi > 0 and −xi*/λi > 0 when λi < 0, which implies that the interval given by (9) is a non-empty interval.

Hence when r lies in the non-empty interval given by (9), the solution given by
x(r) = (x1* + rλ1, x2* + rλ2, ..., xk* + rλk, 0, ..., 0)
is also a F.S. of the system Ax = b, and from (8) this solution gives a value of Z which is greater than Z* (the optimal, i.e. greatest, value). This is a contradiction.

∴ c1λ1 + c2λ2 + ... + ckλk = 0, and hence Z' = Z*,

i.e. x' given by (6) is also an optimal solution.

In this way, from a given optimal solution we have constructed (derived) a new optimal solution containing a smaller number of non-zero variables. This new solution is an optimal B.F.S. if the column vectors associated with its non-zero variables are L.I., and hence the theorem is true. If these associated vectors are not L.I. (i.e. the solution is not a B.F.S.), then by repeating the above process we can get another optimal solution of the given system containing not more than (k − 2) non-zero variables. Continuing in this way a finite number of times, we will certainly get an optimal B.F.S. of the system.
Hence the theorem is true.
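The reduction step in case 2 can be traced numerically. The sketch below uses assumed data: it starts from a feasible solution whose associated columns are linearly dependent and applies equations (3), (4) and (6) once:

```python
import numpy as np

# Assumed system A x = b with a feasible solution x whose associated
# columns a1, a2, a3 are linearly dependent (case 2 of the proof).
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
b = np.array([2.0, 2.0])
x = np.array([1.0, 1.0, 1.0])          # feasible: all components positive, A @ x = b

# The dependence a1 + a2 - a3 = 0 plays the role of equation (3):
lam = np.array([1.0, 1.0, -1.0])
assert np.allclose(A @ lam, 0.0)

v = np.max(lam / x)                    # equation (4): v = max λi/xi, here v = 1
x_new = x - lam / v                    # equation (6): x' = x* − λ/v

print(x_new)                           # components achieving the max in (4) vanish
assert np.allclose(A @ x_new, b)       # x' still solves A x = b
```

Here both λ1/x1 and λ2/x2 attain the maximum v, so two components vanish at once, dropping from three non-zero variables to one, and the surviving column is trivially L.I., giving a B.F.S. in a single step.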

Why the simplex method is preferred over the graphical method:

The simplex method is preferred over the graphical method for solving linear programming problems (L.P.P.) because it can handle problems with more than two variables, whereas the graphical method is limited to problems with only two variables.
The simplex method also provides a more systematic and efficient way to find the optimal solution, particularly for large-scale problems.
The graphical method visually represents the constraints and objective function on a 2D graph, making it suitable for problems with two decision variables (e.g., x and y). However, when the number of variables increases beyond two, the graphical method becomes impractical due to the complexity of representing higher-dimensional spaces.
The simplex method can handle problems with any number of variables and constraints. It systematically searches for the optimal solution by moving from one feasible solution to another, improving the objective function value at each step.
Therefore we prefer the simplex method to the graphical method.
