Theory: Assignment Problems
The problem of optimizing a linear objective function subject to a set of linear equations and/or inequalities (called linear constraints) is given the name
LINEAR PROGRAMMING
L.P.P. DUALITY : Associated with every linear programming problem (L.P.P.) there is
another, intimately related problem, called the DUAL problem of the original L.P.P. The original
L.P.P. is called the PRIMAL problem.
According to the duality theorem : For every maximization (or minimization) problem in linear
programming, there is a unique similar problem of minimization (or maximization) involving
the same data which describes the original problem.
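For a primal in standard maximum form, the dual can be written down mechanically: the right-hand sides and objective coefficients swap roles, and the constraint matrix is transposed. A minimal sketch in Python; the problem data below is assumed purely for illustration:

```python
# Primal (illustrative data): maximize c^T x subject to A x <= b, x >= 0.
# Its dual:                   minimize b^T y subject to A^T y >= c, y >= 0.
c = [3, 5]            # primal objective coefficients
A = [[1, 2], [1, 1]]  # primal constraint matrix
b = [10, 6]           # primal right-hand sides

# Dual data: objective coefficients come from b, the constraint
# matrix is the transpose of A, and the right-hand sides come from c.
dual_c = b
dual_A = [list(row) for row in zip(*A)]
dual_b = c
print(dual_A)  # -> [[1, 1], [2, 1]]
```

Note that the dual of the dual recovers the primal, which is why the same data describes both problems.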
Linear Programming Problem (L.P.P.) : Programming problems are concerned with the
allocation of limited resources in order to meet the desired goals.
Basically, an L.P.P. is a resource allocation problem that deals with the best allocation of
limited resources to a number of competing activities.
The goals, when written mathematically, are called the objective function.
The limitations on the availability or requirements of the resources are called the
constraints.
If the objective function as well as the constraints are linear, then the problem is called a
Linear Programming Problem (L.P.P.). An L.P.P. is defined as finding the decision variables
x and y such that the objective function Z = ax + by is optimum (i.e. maximum or minimum)
subject to the constraints
a1x + b1y ≤ (or ≥) c1
a2x + b2y ≤ (or ≥) c2
:
:
x ≥ 0, y ≥ 0
The decision variables are such that x ≥ 0, y ≥ 0. These are called non-negativity
constraints.
2. Feasible Region : The common region of the constraints of the L.P.P. in the
graphical representation is called the feasible region. The objective function is optimum at
one of the corner points of the feasible region.
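The corner-point property above can be turned directly into a small solver for two-variable problems: intersect each pair of boundary lines, keep the feasible intersections, and evaluate Z at each. A minimal sketch in Python; the objective and constraints are assumptions chosen for this example:

```python
from itertools import combinations

# Illustrative L.P.P. (data assumed for this sketch):
#   maximize Z = 3x + 5y
#   subject to x + 2y <= 10, x + y <= 6, x >= 0, y >= 0.

def objective(x, y):
    return 3 * x + 5 * y

# Each constraint in the form a*x + b*y <= c (non-negativity included).
constraints = [(1, 2, 10), (1, 1, 6), (-1, 0, 0), (0, -1, 0)]

def intersect(c1, c2):
    """Intersection of the boundary lines of two constraints, if unique."""
    a1, b1, k1 = c1
    a2, b2, k2 = c2
    det = a1 * b2 - a2 * b1
    if det == 0:
        return None  # parallel boundary lines
    return ((k1 * b2 - k2 * b1) / det, (a1 * k2 - a2 * k1) / det)

def feasible(p, tol=1e-9):
    return all(a * p[0] + b * p[1] <= c + tol for a, b, c in constraints)

# Corner points = feasible intersections of pairs of boundary lines.
corners = {q for c1, c2 in combinations(constraints, 2)
           if (q := intersect(c1, c2)) is not None and feasible(q)}

best = max(corners, key=lambda p: objective(*p))
print(best, objective(*best))  # -> (2.0, 4.0) 26.0
```

Here the feasible region has four corner points, and the optimum Z = 26 occurs at the corner (2, 4), as the theorem guarantees.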
3. Convex Polygon : A polygon is said to be convex if the line segment joining any
two points on it lies within the polygon.
# 1. Feasible solution : Any set X = { x1, x2, ..., xn } of variables is called a feasible
solution of the L.P.P. if it satisfies the constraints and the non-negativity
restrictions.
# 2. Basic Feasible Solution (B.F.S.) : A feasible solution is called a basic feasible
solution if it has no more than m (where m is the number of constraints) positive x_j
(j = 1, 2, ..., n). If there are m equations in n variables (n > m), then the basic solutions
(not necessarily feasible) are obtained by setting any n - m variables equal to zero and
solving for the remaining m variables.
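The procedure just described (fix any n - m variables at zero, then solve for the remaining m) can be sketched for a small system. The data below, two equations in four variables obtained by adding slacks to x + 2y <= 10 and x + y <= 6, is assumed for illustration:

```python
from itertools import combinations

# Illustrative equality system (m = 2 equations, n = 4 variables):
#   x1 + 2*x2 + x3      = 10
#   x1 +   x2      + x4 = 6
A = [[1, 2, 1, 0],
     [1, 1, 0, 1]]
b = [10, 6]
m, n = 2, 4

def solve2(M, rhs):
    """Solve a 2x2 linear system by Cramer's rule; None if singular."""
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    if det == 0:
        return None
    return ((rhs[0] * M[1][1] - rhs[1] * M[0][1]) / det,
            (M[0][0] * rhs[1] - M[1][0] * rhs[0]) / det)

basic_solutions = []
# Set any n - m variables to zero and solve for the remaining m.
for basis in combinations(range(n), m):
    M = [[A[i][j] for j in basis] for i in range(m)]
    sol = solve2(M, b)
    if sol is None:
        continue  # this choice of basic variables gives no unique solution
    x = [0.0] * n
    for j, v in zip(basis, sol):
        x[j] = v
    feasible = all(v >= 0 for v in x)
    basic_solutions.append((x, feasible))
    print(x, "feasible" if feasible else "infeasible")
```

Of the six basic solutions here, four are basic feasible solutions; the other two have a negative variable and are basic but infeasible.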
# 3. Non-degenerate and Degenerate basic feasible solutions : A basic feasible solution
is said to be non-degenerate if it has exactly m positive x_j
(j = 1, 2, ..., n). Otherwise the solution is called a degenerate basic feasible solution.
# 4. Optimum solution : A feasible solution (not necessarily basic) is said to be optimum
if it optimizes the objective function.
# 5. Unbounded solution : If the value of the objective function can be increased or
decreased indefinitely, then such solutions are called unbounded solutions.
# 6. Alternative optimum solution : If a non-basic variable for which
Z_j - C_j = 0 is entered into the basis, a new solution will be obtained but the value of
the objective function will remain the same; that solution is known as an alternative optimum
solution.
# 7. Infeasible optimum solution : If the solution satisfies the optimality
condition but contains artificial variable(s) at a non-zero level, then the optimum
solution is known as an infeasible solution or pseudo-solution.
# 8. Slack, Surplus and Artificial variables :
Slack variable : The non-negative quantity (variable) which is added to the
L.H.S. of a constraint when the constraint is of the less-than-or-equal-to type.
Surplus variable : The non-negative quantity (variable) which is subtracted
from the L.H.S. of a constraint when the constraint is of the greater-than-or-equal-to type.
Artificial variable : The non-negative quantity (variable) which is added to
the L.H.S. of a constraint to form a unit matrix for selecting the basic variables.
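The conversion to equality (standard) form can be sketched mechanically: each "<=" row gains a slack column with coefficient +1, and each ">=" row a surplus column with coefficient -1. The two constraints below are assumed for illustration:

```python
# Converting constraints to standard (equality) form. Rows are
# (coefficients, sense, right-hand side); data assumed for this sketch.
rows = [([1, 2], "<=", 10),   # x + 2y <= 10  ->  x + 2y + s1      = 10
        ([3, 1], ">=", 5)]    # 3x +  y >= 5  ->  3x +  y     - s2 = 5

n_extra = len(rows)
A_eq, b_eq = [], []
for i, (coeffs, sense, rhs) in enumerate(rows):
    extra = [0] * n_extra
    extra[i] = 1 if sense == "<=" else -1  # +slack or -surplus column
    A_eq.append(coeffs + extra)
    b_eq.append(rhs)

print(A_eq)  # -> [[1, 2, 1, 0], [3, 1, 0, -1]]
print(b_eq)  # -> [10, 5]
```

The surplus column gives -1, not +1, so it cannot serve as a starting basic variable; that is exactly the gap an artificial variable fills.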
Integer Programming Problems :
Integer programming problems (IPP) are a special class of linear programming problems where all or
some of the decision variables in the optimum solution are restricted to non-negative integer values.
It may be recalled that in an L.P.P. the decision variables are continuous, with the result that they may take
fractional values in the optimum solution: for example, 15 3/4 units of product A and 25 1/2 units of
product B per day. However, many situations in real life may not allow decision variables to take
fractional values; rather, they take only integer values. For example, in production, manufacturing is
frequently scheduled in terms of lots, batches or production runs, which are always integers.
Integer Programming Problem Methods
Gomory Cutting Plane Method : To solve an integer programming problem (IPP) using the cutting
plane algorithm, we first drop the integer requirements and obtain the linear programming (LP) relaxation.
The LP relaxation is solved using the simplex method in the usual way. If all the variables in the optimal
solution assume integer values, we have found an optimum solution to the IPP. Otherwise, we introduce
a cut using a constraint in the optimal solution which has a fractional right-hand-side (b_i) value. The cut is
then added to the optimal solution of the LP relaxation and the revised problem is solved using the
simplex method, with some modification. Cuts are introduced until all the variables are integers.
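The cut itself is built from a simplex row whose right-hand side is fractional: each coefficient is replaced by its fractional part f(a) = a - floor(a), giving the Gomory fractional cut sum_j f(a_j) x_j >= f(b). A sketch with an assumed simplex row:

```python
from math import floor

# One simplex row with a fractional right-hand side (row data assumed
# for illustration):  x1 + 0.25*x3 - 0.75*x4 = 3.75
row = {"x3": 0.25, "x4": -0.75}   # coefficients of the non-basic variables
rhs = 3.75

def frac(a):
    """Fractional part f(a) = a - floor(a); always lies in [0, 1)."""
    return a - floor(a)

# Gomory fractional cut: sum_j f(a_j) * x_j >= f(b).
cut = {v: frac(a) for v, a in row.items()}
print(cut, ">=", frac(rhs))  # -> {'x3': 0.25, 'x4': 0.25} >= 0.75
```

Note f(-0.75) = 0.25, not -0.75: the fractional part is always taken relative to the floor, which is what makes the cut valid for every integer solution while cutting off the current fractional one.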
Branch and Bound Method : The branch and bound method is another method used to solve integer
programming problems (IPP). It may be mentioned at the outset that the branch and bound method is not a
particular method for solving a particular problem. It refers to a certain procedure for finding an optimal
solution and is applied differently in different kinds of problems. It is generally used in what may be
called combinatorial problems, where there are a finite number of solutions. By applying
some rules, these solutions are divided into two parts: one that most probably contains the optimal
solution and therefore should be examined further, and a second part that would not contain the
optimal solution and can thus be left out of further consideration. Thus branching and bounding essentially
keeps dividing up the feasible region into smaller and smaller parts until the solution that optimizes
(maximizes or minimizes) the objective function is determined.
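The divide-and-prune procedure above can be sketched for a two-variable problem: solve the LP relaxation, and if some variable takes a fractional value v, branch on x_i <= floor(v) and x_i >= floor(v) + 1, pruning any branch whose relaxation cannot beat the best integer solution found so far. The problem data below is assumed for illustration, and the relaxation is solved by simple corner-point enumeration rather than the simplex method:

```python
from itertools import combinations
from math import floor, isclose

# Illustrative integer programme (data assumed for this sketch):
#   maximize Z = 4x + 3y
#   subject to 3x + 4y <= 12, 4x + 2y <= 9, x, y >= 0 and integer.

def objective(p):
    return 4 * p[0] + 3 * p[1]

# Constraints in the form a*x + b*y <= c (non-negativity included).
BASE = [(3, 4, 12), (4, 2, 9), (-1, 0, 0), (0, -1, 0)]

def lp_max(constraints):
    """Solve the two-variable LP relaxation by corner-point enumeration."""
    best = None
    for (a1, b1, c1), (a2, b2, c2) in combinations(constraints, 2):
        det = a1 * b2 - a2 * b1
        if det == 0:
            continue  # parallel boundary lines
        p = ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)
        if all(a * p[0] + b * p[1] <= c + 1e-9 for a, b, c in constraints):
            if best is None or objective(p) > objective(best):
                best = p
    return best  # None means the branch is infeasible

def branch_and_bound(constraints, incumbent=None):
    relax = lp_max(constraints)
    if relax is None:
        return incumbent                       # infeasible branch: prune
    if incumbent is not None and objective(relax) <= objective(incumbent):
        return incumbent                       # bound: cannot beat incumbent
    for i, v in enumerate(relax):
        if not isclose(v, round(v), abs_tol=1e-9):
            lo = floor(v)                      # branch on fractional x_i
            a, b = (1, 0) if i == 0 else (0, 1)
            down = constraints + [(a, b, lo)]          # x_i <= lo
            up = constraints + [(-a, -b, -(lo + 1))]   # x_i >= lo + 1
            incumbent = branch_and_bound(down, incumbent)
            return branch_and_bound(up, incumbent)
    cand = (round(relax[0]), round(relax[1]))  # all-integer relaxation
    if incumbent is None or objective(cand) > objective(incumbent):
        return cand
    return incumbent

best = branch_and_bound(BASE)
print(best, objective(best))  # -> (1, 2) 10
```

Here the root relaxation is (1.2, 2.1) with Z = 11.1; branching on x and then on y yields the integer optimum (1, 2) with Z = 10, while the remaining branches are pruned by the bound.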