Generic Solution Methods for LP
① Preliminaries
maximize: 100 aw + 30 ac
subject to:
ac + aw ≤ 7
4 ac + 10 aw ≤ 40
10 ac ≥ 30
ac, aw ≥ 0
2D Graphical Method: work in the (ac, aw) plane, with ac on the horizontal axis, aw on the vertical axis, and the origin at [0, 0]^T.

☞ Restriction on domain: the sign constraints ac, aw ≥ 0 restrict the search to the first quadrant.
☞ Half-spaces: each inequality constraint defines a half-space. For 10 ac ≥ 30, first draw the boundary equation 10 ac = 30, then select the appropriate half-space. Do the same for ac + aw ≤ 7 and for 4 ac + 10 aw ≤ 40.

[Figure: the three half-spaces drawn and intersected one by one in the (ac, aw) plane.]
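To make the half-space construction concrete, here is a minimal sketch (assuming NumPy is available; variable order [ac, aw] and the constraint data above) that enumerates candidate corner points by intersecting boundary lines pairwise and keeping only the feasible intersections:

```python
import itertools
import numpy as np

# Constraints written uniformly as a_i . x <= b_i
# (a ">=" row is multiplied by -1); x = [ac, aw].
A = np.array([[  1.0,  1.0],   #  ac +  aw <=  7
              [  4.0, 10.0],   # 4ac + 10aw <= 40
              [-10.0,  0.0],   # 10ac >= 30  ->  -10ac <= -30
              [ -1.0,  0.0],   #  ac >= 0
              [  0.0, -1.0]])  #  aw >= 0
b = np.array([7.0, 40.0, -30.0, 0.0, 0.0])

corners = []
for i, j in itertools.combinations(range(len(b)), 2):
    M = A[[i, j]]
    if abs(np.linalg.det(M)) < 1e-9:      # parallel boundaries: no intersection
        continue
    x = np.linalg.solve(M, b[[i, j]])     # intersection of two boundary lines
    if np.all(A @ x <= b + 1e-9):         # keep it only if it is feasible
        corners.append(x)

for x in corners:
    print(np.round(x, 3))   # the four corners: [5,2], [7,0], [3,2.8], [3,0]
```

This brute-force enumeration is only illustrative; the number of line pairs grows combinatorially, which is precisely why LP needs better search methods.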
The corner points of the feasible set:

a1 = [3, 0]^T
a2 = [3, 2.8]^T
a3 = [5, 2]^T
a4 = [7, 0]^T

[Figure: the polytope with its four corner points labelled.]

Observation-1:
☞ The intersection of all the half-spaces is called the 'Feasible Set'.
☞ It is an intersection of half-spaces whose boundaries are linear equations (hyperplanes in high dimensions).
☞ Here it is bounded. (Typically, when it is bounded we call it a 'Polytope'. When it is unbounded, then we call it a 'Polyhedron'.)
Observation-2:
☞ The feasible set divides the variable space into two sets.
☞ Generally, a solution can be represented by a VECTOR.
☞ A vector inside the feasible set is a 'Feasible Solution'.
☞ A vector outside it is an 'Infeasible Solution'.
Next, overlay the objective. It can be plotted by converting it to an equation, like z = 100 aw + 30 ac, and drawing the contour lines z = 100, 200, 300, ..., 800 across the feasible set.

[Figure: parallel objective contours sweeping the polytope in the direction of increasing z.]

The optimal solution will be a feasible solution that has the maximum objective function value:

optimal solution: a2 = a* = [3, 2.8]^T
objective value: z* = 370
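As a cross-check, the same LP can be handed to an off-the-shelf solver. A sketch assuming SciPy's linprog (which minimizes, so the objective is negated):

```python
from scipy.optimize import linprog

# Variable order: x = [ac, aw]; maximize 100 aw + 30 ac by negating c.
c = [-30.0, -100.0]
A_ub = [[  1.0,  1.0],   #   ac +  aw <=  7
        [  4.0, 10.0],   #  4ac + 10aw <= 40
        [-10.0,  0.0]]   #  10ac >= 30  ->  -10ac <= -30
b_ub = [7.0, 40.0, -30.0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, -res.fun)   # expected: [3.0, 2.8] and 370.0
```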
A second example:

maximize: 5 x1 + 5 x2
subject to:
2 x1 + x2 ≤ 8
x1 + x2 ≤ 6
x1 + 3 x2 ≥ 3
x1, x2 ≥ 0
Proceeding as before in the (x1, x2) plane:
☞ Restriction on domain: x1, x2 ≥ 0 confines the search to the first quadrant.
☞ Half-spaces: draw the boundary line and select the half-space for 2 x1 + x2 ≤ 8, then for x1 + x2 ≤ 6, then for x1 + 3 x2 ≥ 3.

[Figure: the feasible region built up constraint by constraint.]
The corner points of the feasible set:

a1 = [0, 1]^T
a2 = [0, 6]^T
a3 = [2, 4]^T
a4 = [4, 0]^T
a5 = [3, 0]^T

The objective function contours: z = 5 x1 + 5 x2; for instance, 5 x1 + 5 x2 = 1 is the contour z = 1.
[Figure: the contours z = 1, 2, ..., 7 swept across the feasible region toward the optimum.]

Note that the contours of z = 5 x1 + 5 x2 are parallel to the boundary of the constraint x1 + x2 ≤ 6, so the sweep stops along the whole edge between a2 = [0, 6]^T and a3 = [2, 4]^T: every point on that edge is optimal, with z* = 30. This LP therefore has alternate optimal solutions, one of the termination cases discussed next.
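A quick numeric check of the alternate-optima claim, assuming NumPy: both edge endpoints attain the same objective value, hence so does every point between them.

```python
import numpy as np

c = np.array([5.0, 5.0])
for x in (np.array([0.0, 6.0]), np.array([2.0, 4.0])):
    # Both endpoints of the edge x1 + x2 = 6 attain z = 30, so every
    # convex combination of them is optimal as well.
    print(x, c @ x)   # 30.0 in both cases
```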
Solving an LP can terminate in one of the following ways:
✪ Infeasible Solution
✪ Unbounded Solution
✪ Optimal Solution
  ★ Unique Optimal
  ★ Alternate Optimal

[Figure: an unbounded feasible region with contours z = 10, 20, 30, 40, 50 increasing without limit.] For the case of maximization, the problem is unbounded. If the objective is to minimize, then we have a solution. Note: an unbounded feasible set does not imply an unbounded LP.

[Figures: feasible regions with objective contours illustrating the unique-optimal and alternate-optimal cases (contours z = 100, 200, 300 and z = 15, 20, 33, 50, 100, 150).]
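These termination cases are what a solver reports as status codes. A small sketch, assuming SciPy's linprog and two made-up toy LPs (one unbounded for maximization, one infeasible):

```python
from scipy.optimize import linprog

# linprog status codes: 0 = optimal, 2 = infeasible, 3 = unbounded.
unbounded = linprog(c=[-1.0, -1.0],             # maximize x1 + x2 ...
                    A_ub=[[1.0, -1.0]],         # ... over x1 - x2 <= 1, x >= 0
                    b_ub=[1.0])
print(unbounded.status)                         # 3: objective grows without limit

infeasible = linprog(c=[1.0], A_ub=[[1.0]], b_ub=[-1.0],
                     bounds=[(0, None)])        # x1 <= -1 conflicts with x1 >= 0
print(infeasible.status)                        # 2: empty feasible set
```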
Requirement Space Method: consider the LP in standard form:

minimize: c^T x,  (1)
subject to: Ax = b,  (2)
x ≥ 0.  (3)

Requirement Space: Equations (2) and (3) enforce that b, the RHS requirement vector, should belong to the conic combination of the columns of A.

☞ Feasibility: the problem is feasible if and only if b belongs to the conic combination of the columns of A.
☞ Optimality: the objective function can be seen as another equation with a new variable, i.e.,

\[ \begin{bmatrix} c^T \\ A \end{bmatrix} x = \begin{bmatrix} z \\ b \end{bmatrix}. \]

Thus, finding the optimal solution amounts to finding the optimal value of z in the above expanded space (the dimension increases by one).
☞ Unboundedness: using the vector [z, b^T]^T in the expanded space, unboundedness can be detected.

[Figure: requirement-space views with the column vectors a1, a2, a3 of A, the cone they generate, and the requirement vector b.]
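The feasibility test "is b a conic combination of the columns of A?" is itself a small LP (a phase-1 style feasibility problem). A sketch assuming SciPy, with a made-up 2x3 matrix:

```python
import numpy as np
from scipy.optimize import linprog

def in_conic_hull(A, b):
    """Is b a conic (non-negative) combination of the columns of A?

    Solves the phase-1 style LP  min 0  s.t.  A x = b, x >= 0;
    the original LP is feasible iff this system has a solution.
    """
    n = A.shape[1]
    res = linprog(c=np.zeros(n), A_eq=A, b_eq=b, bounds=[(0, None)] * n)
    return res.status == 0

A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
print(in_conic_hull(A, np.array([2.0, 3.0])))    # True: x = [2, 3, 0] works
print(in_conic_hull(A, np.array([-1.0, 1.0])))   # False: needs a negative weight
```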
The algebraic view of search: from a current point x1, an algorithm moves to

x = x1 + λ d,

where d is a direction and λ > 0 is the step length.

[Figure: a point x1 and several candidate points x reached along different directions.]
☞ The gradient of f(x) = c^T x is c.
☞ A gradient at a point indicates a possible direction which increases the function's value.
Improving direction
Let x1 ∈ R^n be a feasible point. A nonzero direction d ∈ R^n is an improving direction if f(x1 + λd) is better than f(x1), where f : R^n → R is any objective function.

For the linear programs in the standard form, an improving direction satisfies:

maximization: c^T (x + d) > c^T x, which simplifies to c^T d > 0,
minimization: c^T (x + d) < c^T x, which simplifies to c^T d < 0.
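A minimal sketch of the simplified test (assuming NumPy; the direction vectors below are made up for illustration):

```python
import numpy as np

def is_improving(c, d, sense="max"):
    """For LP objectives, d improves iff c^T d > 0 (max) or c^T d < 0 (min)."""
    s = float(np.dot(c, d))
    return s > 0 if sense == "max" else s < 0

c = np.array([30.0, 100.0])                    # 100 aw + 30 ac, order [ac, aw]
print(is_improving(c, np.array([0.0, 1.0])))   # True: pushes aw up
print(is_improving(c, np.array([2.0, -1.0])))  # False: 30*2 - 100 = -40
```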
Feasible direction
Let x1 ∈ R^n be a feasible point. A nonzero direction d ∈ R^n is a feasible direction if there exists a scalar λ > 0 such that x1 + λd is feasible.

For the linear programs in the standard form, the properties of a feasible direction are:

A(x1 + λd) = b,  (1)
x1 + λd ≥ 0,  (2)
λ > 0,  (3)
d ≠ 0.  (4)

Since Ax1 = b, condition (1) reduces to Ad = 0.

☞ Suppose some of the elements of d are negative. What will be the maximum step length, say λmax? Only the components with dr < 0 drive xr toward zero, so

\[ \lambda_{\max} = \min_r \left\{ \frac{x_{1r}}{|d_r|} \;\middle|\; d_r < 0 \right\}, \quad \text{with } Ad = 0. \]
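The ratio test above translates directly into code. A sketch assuming NumPy, with made-up x1 and d (taking Ad = 0 as already verified):

```python
import numpy as np

def max_step(x1, d):
    """Ratio test: largest lambda with x1 + lambda * d >= 0.

    Only components moving toward zero (d_r < 0) limit the step; returns
    inf if no component decreases, which signals unboundedness whenever
    d is also an improving direction.
    """
    neg = d < 0
    if not np.any(neg):
        return np.inf
    return np.min(x1[neg] / np.abs(d[neg]))

x1 = np.array([2.0, 4.0, 1.0])
d = np.array([1.0, -2.0, -1.0])   # Ad = 0 assumed to hold already
print(max_step(x1, d))            # min(4/2, 1/1) = 1.0
```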
An Idea
Combine the two tools: repeatedly pick a direction d that is both improving and feasible, take the largest allowed step λmax, and stop when no improving feasible direction exists.

[Figure: a sequence of iterates x1, x2, ... in the plane, each obtained by such a step.]
Preprocessing: Basic Analysis
Consider an LP of the general form:

minimize: c^T x,
subject to: Ax ⪋ b, (each row may be ≤, =, or ≥)
x ≥ 0.

The basic reductions for an LP can be stated as follows:
☞ All-Zero Rows: Redundant or Infeasible Constraint
☞ All-Zero Columns: Fixing or Unbounded Variable
☞ Singleton Rows: New Bounds or Fixing Variable
☞ Singleton Columns: New Bounds or Fixing Dual Variable
☞ Grained Rows: Eliminating Constraint or Infeasible Problem
☞ Grained Columns: Fixing Variable or Unbounded Problem

All-Zero Rows: if the LP at any time has any of the following constraint structures, then the constraint is redundant and can be removed:

0^T x = 0,
0^T x ≤ (+ve), or
0^T x ≥ (−ve).

Otherwise, if the LP at any time has any of the following constraint structures, then the LP is infeasible:

0^T x ≤ (−ve) or
0^T x ≥ (+ve).
All-Zero Columns: if the LP at any time has a column of the following structure, then the primal variable corresponding to the column can be fixed, and the column can be removed from the LP:

· · · cj xj · · ·  (objective row)
· · · 0 xj · · ·  (every constraint row)

Now, the following cases arise (for minimization with xj ≥ 0):
✎ If cj > 0, then xj = 0.
✎ If cj < 0, then the primal problem is unbounded.

Remember, the updated bounds on a variable should be feasible w.r.t. the existing bounds. In addition to the above cases, the objective function is compensated accordingly.
Singleton Rows: if the LP at any time has a constraint in which exactly one coefficient is non-zero, i.e., one of the structures

0 · · · 0 + ai,j xj + 0 · · · 0 = bi,
0 · · · 0 + ai,j xj + 0 · · · 0 ≥ bi, or
0 · · · 0 + ai,j xj + 0 · · · 0 ≤ bi,

then the bounds on that primal variable can be improved. The following cases arise:
✎ If ai,j > 0, then correspondingly the bounds on the primal variable can be defined as:
xj = bi/ai,j, xj ≥ bi/ai,j, or xj ≤ bi/ai,j.
✎ If ai,j < 0, then the inequalities flip:
xj = bi/ai,j, xj ≤ bi/ai,j, or xj ≥ bi/ai,j.

If the primal variable's new bounds are tight, then some of the primal constraints may become redundant. If the new bounds conflict with the existing bounds, then the primal problem is infeasible. If the primal variable's bounds do not contain zero, then the corresponding dual constraint is tight (active) at optimality. If the primal variable's value is fixed, then the variable can be removed from the LP and the objective function is compensated accordingly.
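A presolve pass for singleton rows might look like the following sketch (assuming NumPy; encoding the row senses as strings is an illustrative choice, not a standard API):

```python
import numpy as np

def singleton_row_bounds(A, b, sense, lo, hi):
    """Tighten variable bounds from rows with exactly one non-zero coefficient.

    sense[i] in {'<=', '=', '>='}; lo/hi are current variable bounds,
    updated in place. Returns False on conflicting bounds (infeasible LP).
    """
    for i in range(A.shape[0]):
        nz = np.flatnonzero(A[i])
        if len(nz) != 1:
            continue
        j, a = nz[0], A[i, nz[0]]
        v = b[i] / a
        # dividing by a negative coefficient flips the inequality
        s = sense[i] if a > 0 else {'<=': '>=', '>=': '<=', '=': '='}[sense[i]]
        if s in ('=', '>='):
            lo[j] = max(lo[j], v)
        if s in ('=', '<='):
            hi[j] = min(hi[j], v)
        if lo[j] > hi[j]:
            return False          # conflicting bounds: LP is infeasible
    return True

A = np.array([[0.0, 2.0], [3.0, 0.0]])
b = np.array([2.0, -6.0])
lo, hi = np.zeros(2), np.full(2, np.inf)
print(singleton_row_bounds(A, b, ['>=', '<='], lo, hi), lo, hi)
# 2*x2 >= 2 gives x2 >= 1; 3*x1 <= -6 gives x1 <= -2, conflicting
# with x1 >= 0, so the function reports False (infeasible).
```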
Singleton Columns: if the LP at any time has a column in which the variable appears in exactly one constraint,

· · · cj xj · · ·  (objective row)
· · · ai,j xj · · ·  (the single constraint row i)
· · · 0 xj · · ·  (all other rows)

then the corresponding dual constraint is a singleton in the dual variable yi, and the bounds on yi can be improved, analogous to the singleton-row analysis in the primal.

Grained Rows: the all-zero row idea extends to rows whose coefficients all share one sign. For example, with ai,j ≥ 0 for all j and xj ≥ 0, a constraint of the form Σj ai,j xj ≤ bi with bi < 0 means the primal problem is infeasible. If bi = 0, then all the primal variables xj with ai,j > 0 can be fixed to zero.

Grained Columns: the idea of singleton rows and singleton columns can be further extended. In fact, singleton columns extend to grained columns: a grained column is a column of the LP whose corresponding dual constraint is a grained constraint. For example, suppose all the coefficients in a grained column are non-negative and the constraints are of a specific type. The following two cases arise:

✎ If cj > 0 and there is a grained column of the following type:

· · · cj xj · · ·
· · · a1,j xj · · · ≤ b1
· · · a2,j xj · · · ≤ b2
  ⋮
· · · am,j xj · · · ≤ bm

then xj = 0 (for minimization, increasing xj only worsens the objective and consumes constraint slack).

✎ If cj < 0 and there is a grained column of the following type:

· · · cj xj · · ·
· · · a1,j xj · · · ≥ b1
· · · a2,j xj · · · ≥ b2
  ⋮
· · · am,j xj · · · ≥ bm

then the primal problem is unbounded. If cj = 0, then all the dual variables corresponding to the constraints containing a non-zero coefficient for xj can be fixed.
Preprocessing: Duplicate Rows
If the LP contains two rows where one is a scalar multiple of the other, then these rows are duplicates; one of them can be removed as redundant (or, if their right-hand sides conflict, the LP is declared infeasible).

Preprocessing: Dual Constraint Analysis
Similar to the primal constraint check, this requires analysis of the dual constraints. Using the dual variable bounds [ldi, udi], the lower activity limit of dual constraint j is

\[ LD_j = \sum_{i:\, a_{i,j} > 0} a_{i,j}\, ld_i + \sum_{i:\, a_{i,j} < 0} a_{i,j}\, ud_i. \tag{4} \]

The following things can happen:
✎ A redundant constraint in the dual implies the corresponding variable is undefined (typically fixed to zero, when the bounds are treated separately from the constraints),
✎ An infeasible constraint in the dual implies infeasibility or unboundedness in the primal,
✎ A variable-fixing constraint in the dual implies (when the dual variable is not fixed to zero) that the corresponding primal constraints are active.

Preprocessing: Primal Implied Bounds
Consider a constraint Σr ai,r xr ⅁ bi, where ⅁ can be an equality or inequality and ai,j > 0, together with the variable bounds xr ∈ [lpr, upr]. Consider the activity of the row without variable j:

\[ UI_{i,j} = \sum_{\substack{r:\, a_{i,r} > 0 \\ r \ne j}} a_{i,r}\, up_r + \sum_{\substack{r:\, a_{i,r} < 0 \\ r \ne j}} a_{i,r}\, lp_r = UP_i - a_{i,j}\, up_j, \tag{5} \]

\[ LI_{i,j} = \sum_{\substack{r:\, a_{i,r} > 0 \\ r \ne j}} a_{i,r}\, lp_r + \sum_{\substack{r:\, a_{i,r} < 0 \\ r \ne j}} a_{i,r}\, up_r = LP_i - a_{i,j}\, lp_j. \tag{6} \]

For a ≤-constraint, this implies

ai,j xj ≤ bi − (LPi − ai,j lpj),

so a new upper bound on xj can be defined as xj ≤ (bi − LPi)/ai,j + lpj.
To sum up, the following things can happen:
✎ Bounds on the variables can be improved (made tighter than the initial bounds).
✎ Infeasibility can be detected from the variable bounds.
✎ Variables can be fixed.

Preprocessing: Free Variable Elimination
If a free variable xj appears in an equality constraint i with ai,j ≠ 0, then substituting the following in place of xj will eliminate variable j and one constraint i:

xj = (1/ai,j) (bi − Σr≠j ai,r xr).

However, the above process may not be time-efficient when there are too many substitutions containing too many terms; a fill-in count can be used to identify whether a substitution is efficient or not.
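The implied-bound formula (6) translates into a small routine. A sketch assuming NumPy, tested on the row x2 − 2 x4 ≤ 4 from Example 3 below:

```python
import numpy as np

def implied_upper_bound(a, b, lo, hi, j):
    """Implied bound from one <= row: a . x <= b, with x_r in [lo_r, hi_r].

    LI_j is the minimum over the box of sum_{r != j} a_r x_r; then
    a_j x_j <= b - LI_j yields a candidate upper bound on x_j (a_j > 0).
    """
    assert a[j] > 0
    li = sum(a[r] * (lo[r] if a[r] > 0 else hi[r])
             for r in range(len(a)) if r != j)
    return (b - li) / a[j]

a = np.array([1.0, -2.0])                    # x2 - 2*x4 <= 4, order [x2, x4]
b, lo, hi = 4.0, np.array([0.0, 0.0]), np.array([np.inf, 3.0])
print(implied_upper_bound(a, b, lo, hi, 0))  # x2 <= 4 + 2*3 = 10
```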
Preprocessing: Example - 1
Q. Solve the LP shown on the slide by presolving. [The LP data appears in a table on the slide.]

Sol: Applying the presolving ideas:
✎ The column corresponding to variable x4 is a singleton. This implies y2 ≤ 10.
✎ Since y2 ≤ 10 is redundant, we have x4 = 0.
✎ The dual constraint corresponding to variable x1 is redundant, which implies x1 = 0 at optimality (equivalently, the column corresponding to x1 is grained, which implies x1 = 0).
✎ There are two singleton rows, which imply x2 ≥ 1 and x3 ≤ 8/3. Now, the bounds on x2 do not contain zero, which implies the corresponding dual constraint is tight at optimality.
✎ Notice that with the updated bounds, the first constraint becomes redundant.
✎ Thus, the LP boils down to a variable-separable problem.

The optimal solution is x1 = 0, x2 = 1, x3 = 8/3, and x4 = 0.
Preprocessing: Example - 2
Q. Solve the LP shown on the slide by presolving. [The LP data appears in a table on the slide.]

Sol: Applying the presolving ideas:
✎ One of the two constraints is a duplicate, and can be removed.
✎ Now all the columns are singletons, so the dual bounds can be analyzed. From the three columns we get y1 ≥ 5, y1 ≥ 3, and y1 ≥ 0.5.
✎ Thus, the updated bound on the dual variable is 5 ≤ y1 ≤ ∞. The dual constraints corresponding to x2 and x3 are redundant, which implies x2 = x3 = 0.
✎ Clearly, the bounds on the dual variable do not contain zero, so the remaining primal constraint is active. This implies x1 = 6.

Thus, the optimal solution is: x1 = 6, x2 = 0, and x3 = 0.
Preprocessing: Example - 3
Q. Solve the following LP:

min: −2x1 − 3x2 + x3 + x4
s.t.  x1 + x2 + x3 − 2x4 ≤ 4
     −x1 − x2 + x3 − x4 ≤ 1
      x1 + x4 ≤ 3
      x1, x2, x3, x4 ≥ 0

with initial variable bounds lp = [0, 0, 0, 0] and up = [∞, ∞, ∞, ∞]. (Alongside the LP, the slide tracks the constraint activity limits LP/UP and the dual variable bounds ld/ud for each row, and the dual constraint activity limits LD/UD for each column.)

Sol: Applying the presolving ideas:
✎ The column for x3 is a grained column (c3 = 1 > 0, and x3 appears only with coefficient +1 in ≤-constraints) ⇒ x3 = 0. Remove the column.
✎ Primal constraint analysis gives the activity limits [LP, UP] = [−∞, ∞], [−∞, 0], and [0, ∞] for the three rows. The second constraint has UP = 0 ≤ 1, so it is redundant; remove it.
✎ Now the column for x2 is a singleton column ⇒ y1 ≤ −3.
✎ Since the dual bound is improved, let us analyze the dual constraints. The dual constraint analysis implies that the dual constraint corresponding to x1 is redundant ⇒ x1 = 0. Remove the column.
✎ The last constraint is now a singleton row ⇒ x4 ≤ 3.
✎ Since the primal bound is improved, let us analyze the primal constraints: the LP and UP limits for the constraints are updated, and constraint analysis then updates the dual limits LD/UD in turn.
✎ Implied bounds analysis (primal): from the first constraint, x2 ≤ 4 + 2x4 → x2 ≤ 10.
✎ Implied bounds analysis (dual): from the constraint corresponding to the column of x4, y2 ≤ 1 + 2y1 → y2 ≤ −5.
✎ The dual variable bounds now indicate that neither dual variable can be zero. This implies both remaining primal constraints are active, which in turn implies x4 = 3 and x2 = 4 + 2·3 = 10.

The final bookkeeping for the reduced LP, min −3x2 + x4, reads:

row              LP    UP    ld    ud
x2 − 2x4 ≤ 4     −3     ∞    −∞    −3
x4 ≤ 3            0     3    −∞    −5

column           lp    up    LD    UD
x2                0    10    −∞    −3
x4                0     3    −∞     ∞

Thus the optimal solution is: x3 = 0, x1 = 0, x4 = 3, x2 = 10.
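Since Example 3 is fully specified, the presolved answer can be cross-checked against a solver. A sketch assuming SciPy's linprog:

```python
import numpy as np
from scipy.optimize import linprog

c = [-2.0, -3.0, 1.0, 1.0]
A_ub = [[ 1.0,  1.0, 1.0, -2.0],   #  x1 + x2 + x3 - 2x4 <= 4
        [-1.0, -1.0, 1.0, -1.0],   # -x1 - x2 + x3 -  x4 <= 1
        [ 1.0,  0.0, 0.0,  1.0]]   #  x1           +  x4 <= 3
b_ub = [4.0, 1.0, 3.0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 4)
print(np.round(res.x, 6), res.fun)   # expected: [0, 10, 0, 3], objective -27
```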
Generic Solution Methods for LP
1 Graphical Search Methods
  2D Graphical Method
  Requirement Space Method
2 Algebraic Search Tools
  Improving Directions
  Feasible Directions
  General Algorithms
3 Improving Search Illustrated
  Boundary Point
  Interior Point
  Exterior Point
4 Preprocessing LPs
  Basic Analysis
  Advanced Analysis
5 Summary
Conclusions
References and further reading:

Bazaraa, M. S., Jarvis, J. J., & Sherali, H. D. (2011). Linear Programming and Network Flows (Chapters 1.3, 1.4, 5.3). John Wiley & Sons.

Bixby, R. E. (2002). Solving real-world linear programs: A decade and more of progress. Operations Research, 50(1), 3-15.

Brearley, A. L., Mitra, G., & Williams, H. P. (1975). Analysis of mathematical programming problems prior to applying the simplex algorithm. Mathematical Programming, 8(1), 54-83.

Andersen, E. D., & Andersen, K. D. (1995). Presolving in linear programming. Mathematical Programming, 71(2), 221-245.

Mészáros, C., & Suhl, U. H. (2003). Advanced preprocessing techniques for linear and quadratic programming. OR Spectrum, 25(4), 575-595.