Quadratic Programming
Quadratic Programming
A quadratic programming problem is a non-linear programming problem of the form

Maximize $z = CX + X^{T} D X$
subject to $AX \le b, \; X \ge 0$,

where $X = (x_1, x_2, \ldots, x_n)^{T}$, $b = (b_1, b_2, \ldots, b_m)^{T}$, $C = (c_1, c_2, \ldots, c_n)$,

$A = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & & & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{pmatrix}, \qquad D = \begin{pmatrix} d_{11} & d_{12} & \cdots & d_{1n} \\ d_{21} & d_{22} & \cdots & d_{2n} \\ \vdots & & & \vdots \\ d_{n1} & d_{n2} & \cdots & d_{nn} \end{pmatrix}$
Quadratic Programming
Assume that the $n \times n$ matrix D is symmetric and negative-definite.
This means that the objective function is strictly
concave.
Since the constraints are linear, the feasible region is
a convex set.
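As a quick numerical aside (not part of the original slides), negative-definiteness of a symmetric D can be checked from its eigenvalues; the matrix below is an arbitrary illustrative choice.

```python
import numpy as np

# D is negative-definite exactly when all eigenvalues of the symmetric
# matrix D are strictly negative; then z = CX + X^T D X is strictly concave.
D = np.array([[-2.0, 1.0],
              [1.0, -3.0]])            # illustrative symmetric matrix

eigenvalues = np.linalg.eigvalsh(D)    # eigvalsh is for symmetric matrices
print(eigenvalues)                     # both values are negative
print(bool(np.all(eigenvalues < 0)))   # True -> D is negative-definite
```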
Quadratic Programming
In scalar notation, the quadratic programming problem reads:

Maximize $z = \sum_{j=1}^{n} c_j x_j + \sum_{j=1}^{n} d_{jj} x_j^2 + 2 \sum_{1 \le i < j \le n} d_{ij} x_i x_j$

subject to
$a_{11} x_1 + a_{12} x_2 + \cdots + a_{1n} x_n \le b_1$
$a_{21} x_1 + a_{22} x_2 + \cdots + a_{2n} x_n \le b_2$
$\qquad \vdots$
$a_{m1} x_1 + a_{m2} x_2 + \cdots + a_{mn} x_n \le b_m$
$x_1, x_2, \ldots, x_n \ge 0$
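The matrix form $z = CX + X^T D X$ and the scalar form above describe the same function; the sketch below (illustrative random data, not from the slides) evaluates both and checks that they agree.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
c = rng.normal(size=n)
D = rng.normal(size=(n, n))
D = (D + D.T) / 2                       # make D symmetric
x = rng.uniform(size=n)

# Matrix form: z = C x + x^T D x
z_matrix = c @ x + x @ D @ x

# Scalar form: sum_j c_j x_j + sum_j d_jj x_j^2 + 2 * sum_{i<j} d_ij x_i x_j
z_scalar = (c @ x
            + sum(D[j, j] * x[j] ** 2 for j in range(n))
            + 2 * sum(D[i, j] * x[i] * x[j]
                      for i in range(n) for j in range(i + 1, n)))

print(bool(np.isclose(z_matrix, z_scalar)))   # True
```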
Wolfe's Method to solve a Quadratic
Programming Problem:
The solution to this problem is based on the KKT conditions. Since the objective function is strictly concave and the solution space is convex, the KKT conditions are also sufficient for optimality.
Since there are m + n constraints, we have m + n Lagrange multipliers; the first m of them are denoted by $\lambda_1, \lambda_2, \ldots, \lambda_m$ and the last n of them are denoted by $\mu_1, \mu_2, \ldots, \mu_n$.
Wolfe's Method
The KKT (necessary) conditions are:

1. $\lambda_1, \lambda_2, \ldots, \lambda_m, \; \mu_1, \mu_2, \ldots, \mu_n \ge 0$

2. $c_j + 2 \sum_{i=1}^{n} d_{ij} x_i - \sum_{i=1}^{m} \lambda_i a_{ij} + \mu_j = 0, \quad j = 1, 2, \ldots, n$

3. $\lambda_i \left( \sum_{j=1}^{n} a_{ij} x_j - b_i \right) = 0, \quad i = 1, 2, \ldots, m$
   $\mu_j x_j = 0, \quad j = 1, 2, \ldots, n$

4. $\sum_{j=1}^{n} a_{ij} x_j \le b_i, \quad i = 1, 2, \ldots, m$ and $x_j \ge 0, \quad j = 1, 2, \ldots, n$
Wolfe's Method
Denoting the (non-negative) slack variable for the ith constraint
$\sum_{j=1}^{n} a_{ij} x_j \le b_i$
by $s_i$, the 3rd condition(s) can be written in an equivalent form as:

3. $\lambda_i s_i = 0, \quad i = 1, 2, \ldots, m$
   $\mu_j x_j = 0, \quad j = 1, 2, \ldots, n$

(Referred to as "Restricted Basis" conditions.)
Wolfe's Method
Also condition(s) (2) can be rewritten as:

2. $-2 \sum_{i=1}^{n} d_{ij} x_i + \sum_{i=1}^{m} \lambda_i a_{ij} - \mu_j = c_j, \quad j = 1, 2, \ldots, n$

and condition(s) (4) can be rewritten as:

4. $\sum_{j=1}^{n} a_{ij} x_j + s_i = b_i, \quad i = 1, 2, \ldots, m$
   $x_j \ge 0, \quad j = 1, 2, \ldots, n$
Wolfe's Method
Thus the optimal solution $x_1^*, x_2^*, \ldots, x_n^*$ is obtained from a solution of the system of m + n linear equations in the n + m + n unknowns $x_j, \lambda_i, \mu_j$ (together with the m slacks $s_i$):

$-2 \sum_{i=1}^{n} d_{ij} x_i + \sum_{i=1}^{m} \lambda_i a_{ij} - \mu_j = c_j, \quad j = 1, 2, \ldots, n$

$\sum_{j=1}^{n} a_{ij} x_j + s_i = b_i, \quad i = 1, 2, \ldots, m$

along with the "Restricted Basis" conditions
$\lambda_i s_i = 0, \quad i = 1, 2, \ldots, m$
$\mu_j x_j = 0, \quad j = 1, 2, \ldots, n$
Wolfe's Method
Since we are interested only in a feasible solution of the above system of linear equations, we use the Phase I method of the simplex algorithm to find such a feasible solution.
By the sufficiency of the KKT conditions, it is automatically the optimal solution of the given quadratic programming problem.
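Wolfe's method enforces the restricted-basis conditions inside a modified simplex Phase I, as in the examples below. Purely as an illustration for very small problems (this is not Wolfe's algorithm), the sketch below brute-forces the same system: it enumerates which member of each complementary pair (λi, si) and (μj, xj) is forced to zero, solves the resulting square linear system, and keeps any non-negative solution. The data are those of Example 1 that follows.

```python
import itertools
import numpy as np

# Illustrative brute-force treatment of the restricted-basis conditions
# (practical only for tiny m, n); this is NOT Wolfe's modified simplex.
# Data: Example 1 below (maximize 8x1 + 4x2 - x1^2 - x2^2, s.t. x1 + x2 <= 2).
c = np.array([8.0, 4.0])
D = np.array([[-1.0, 0.0],
              [0.0, -1.0]])
A = np.array([[1.0, 1.0]])
b = np.array([2.0])
n, m = len(c), len(b)

# Unknowns ordered as: x (n), lambda (m), mu (n), s (m).
N = 2 * (n + m)
E = np.zeros((n + m, N))
rhs = np.concatenate([c, b])
E[:n, :n] = -2.0 * D                     # -2 D x
E[:n, n:n + m] = A.T                     # + A^T lambda
E[:n, n + m:n + m + n] = -np.eye(n)      # - mu            = c
E[n:, :n] = A                            # A x
E[n:, n + m + n:] = np.eye(m)            # + s             = b

pairs = [(n + i, n + m + n + i) for i in range(m)]   # (lambda_i, s_i)
pairs += [(n + m + j, j) for j in range(n)]          # (mu_j, x_j)

for choice in itertools.product(*pairs):             # one index per pair is set to 0
    drop = set(choice)
    keep = [k for k in range(N) if k not in drop]
    try:
        values = np.linalg.solve(E[:, keep], rhs)
    except np.linalg.LinAlgError:
        continue
    full = np.zeros(N)
    full[keep] = values
    if np.all(full >= -1e-9):                        # feasible => KKT point => optimal
        x = full[:n]
        print("x =", x, " z =", c @ x + x @ D @ x)   # x = [2. 0.]  z = 12.0
        break
```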
Example-1:
Maximize $z = 8x_1 + 4x_2 - x_1^2 - x_2^2$
subject to $x_1 + x_2 \le 2$
$x_1, x_2 \ge 0$.
Example-1: Solution
Denoting the Lagrange multipliers by $\lambda_1$, $\mu_1$, and $\mu_2$, the KKT conditions are:

1. $\lambda_1, \mu_1, \mu_2 \ge 0$

2. $8 - 2x_1 - \lambda_1 + \mu_1 = 0$
   $4 - 2x_2 - \lambda_1 + \mu_2 = 0$
   i.e. $2x_1 + \lambda_1 - \mu_1 = 8$
        $2x_2 + \lambda_1 - \mu_2 = 4$

3. $x_1 + x_2 + s_1 = 2, \quad \lambda_1 s_1 = \mu_1 x_1 = \mu_2 x_2 = 0$

All variables $\ge 0$.
Example-1: Solution
Introducing artificial variables $R_1, R_2$, we thus have to

Minimize $r = R_1 + R_2$

subject to the constraints
$2x_1 + \lambda_1 - \mu_1 + R_1 = 8$
$2x_2 + \lambda_1 - \mu_2 + R_2 = 4$
$x_1 + x_2 + s_1 = 2$
$\lambda_1 s_1 = \mu_1 x_1 = \mu_2 x_2 = 0$
All variables $\ge 0$.  (We solve by the "Modified Simplex" algorithm.)
Starting tableau (the r-row is shown after eliminating the artificials R1, R2 from r - R1 - R2 = 0):

Basic   r   x1   x2   λ1   μ1   μ2   R1   R2   s1 |  Sol
r       1    2    2    2   -1   -1    0    0    0 |   12
R1      0    2    0    1   -1    0    1    0    0 |    8
R2      0    0    2    1    0   -1    0    1    0 |    4
s1      0    1    1    0    0    0    0    0    1 |    2

x1 enters, s1 leaves:

Basic   r   x1   x2   λ1   μ1   μ2   R1   R2   s1 |  Sol
r       1    0    0    2   -1   -1    0    0   -2 |    8
R1      0    0   -2    1   -1    0    1    0   -2 |    4
R2      0    0    2    1    0   -1    0    1    0 |    4
x1      0    1    1    0    0    0    0    0    1 |    2

λ1 enters (allowed, since s1 has left the basis), R1 leaves:

Basic   r   x1   x2   λ1   μ1   μ2   R1   R2   s1 |  Sol
r       1    0    4    0    1   -1   -2    0    2 |    0
λ1      0    0   -2    1   -1    0    1    0   -2 |    4
R2      0    0    4    0    1   -1   -1    1    2 |    0
x1      0    1    1    0    0    0    0    0    1 |    2
Example-1: Solution
x2 enters, R2 leaves; r = 0, so Phase I ends:

Basic   r   x1   x2   λ1    μ1    μ2    R1    R2    s1 |  Sol
r       1    0    0    0     0     0    -1    -1     0 |    0
λ1      0    0    0    1  -1/2  -1/2   1/2   1/2    -1 |    4
x2      0    0    1    0   1/4  -1/4  -1/4   1/4   1/2 |    0
x1      0    1    0    0  -1/4   1/4   1/4  -1/4   1/2 |    2
Thus we have obtained the feasible solution
x1 = 2, x2 = 0, λ1 = 4, μ1 = 0, μ2 = 0
and the optimal value is z = 12.
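As an independent cross-check (not part of the original slides), the same optimum can be recovered numerically, for example with scipy.optimize.minimize applied to -z:

```python
import numpy as np
from scipy.optimize import minimize

# Minimize the negative of z = 8*x1 + 4*x2 - x1**2 - x2**2.
def neg_z(x):
    return -(8 * x[0] + 4 * x[1] - x[0] ** 2 - x[1] ** 2)

constraints = [{"type": "ineq", "fun": lambda x: 2 - x[0] - x[1]}]  # x1 + x2 <= 2
bounds = [(0, None), (0, None)]                                     # x1, x2 >= 0

res = minimize(neg_z, x0=np.array([0.0, 0.0]), bounds=bounds,
               constraints=constraints, method="SLSQP")
print(res.x, -res.fun)   # approximately [2. 0.] and 12
```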
Example-2
Maximize $z = 6x_1 + 3x_2 - 2x_1^2 - 4x_1 x_2 - 3x_2^2$
subject to $x_1 + x_2 \le 1$
$2x_1 + 3x_2 \le 4$
$x_1, x_2 \ge 0$
Example-2: Solution
Denoting the Lagrange multipliers by $\lambda_1$, $\lambda_2$, $\mu_1$, and $\mu_2$, the KKT conditions are:

1. $\lambda_1, \lambda_2, \mu_1, \mu_2 \ge 0$

2. $6 - 4x_1 - 4x_2 - \lambda_1 - 2\lambda_2 + \mu_1 = 0$
   $3 - 4x_1 - 6x_2 - \lambda_1 - 3\lambda_2 + \mu_2 = 0$
   i.e. $4x_1 + 4x_2 + \lambda_1 + 2\lambda_2 - \mu_1 = 6$
        $4x_1 + 6x_2 + \lambda_1 + 3\lambda_2 - \mu_2 = 3$
Example-2: Solution
3. x1 x2 S1 1,
2 x1 3x2 S 2 4,
1S1 2 S2 0
1 x1 2 x2 0
and all variables 0.
Solving this by " Modified Simplex Algorithm ", the
optimal solution is:
x1 = 1, x2 = 0 and the optimal z = 4.
Starting tableau (the r-row is shown after eliminating the artificials R1, R2 from r - R1 - R2 = 0):

Basic   r   x1   x2   λ1   λ2   μ1   μ2   R1   R2   S1   S2 |  Sol
r       1    8   10    2    5   -1   -1    0    0    0    0 |    9
R1      0    4    4    1    2   -1    0    1    0    0    0 |    6
R2      0    4    6    1    3    0   -1    0    1    0    0 |    3
S1      0    1    1    0    0    0    0    0    0    1    0 |    1
S2      0    2    3    0    0    0    0    0    0    0    1 |    4

x2 enters, R2 leaves:

Basic   r    x1   x2    λ1    λ2   μ1    μ2   R1    R2   S1   S2 |  Sol
r       1   4/3    0   1/3     0   -1   2/3    0  -5/3    0    0 |    4
R1      0   4/3    0   1/3     0   -1   2/3    1  -2/3    0    0 |    4
x2      0   2/3    1   1/6   1/2    0  -1/6    0   1/6    0    0 |  1/2
S1      0   1/3    0  -1/6  -1/2    0   1/6    0  -1/6    1    0 |  1/2
S2      0     0    0  -1/2  -3/2    0   1/2    0  -1/2    0    1 |  5/2
x1 enters, x2 leaves:

Basic   r   x1    x2    λ1    λ2   μ1    μ2   R1    R2   S1   S2 |  Sol
r       1    0    -2     0    -1   -1     1    0    -2    0    0 |    3
R1      0    0    -2     0    -1   -1     1    1    -1    0    0 |    3
x1      0    1   3/2   1/4   3/4    0  -1/4    0   1/4    0    0 |  3/4
S1      0    0  -1/2  -1/4  -3/4    0   1/4    0  -1/4    1    0 |  1/4
S2      0    0     0  -1/2  -3/2    0   1/2    0  -1/2    0    1 |  5/2

μ2 enters (x2 is no longer basic), S1 leaves:

Basic   r   x1   x2   λ1   λ2   μ1   μ2   R1   R2   S1   S2 |  Sol
r       1    0    0    1    2   -1    0    0   -1   -4    0 |    2
R1      0    0    0    1    2   -1    0    1    0   -4    0 |    2
x1      0    1    1    0    0    0    0    0    0    1    0 |    1
μ2      0    0   -2   -1   -3    0    1    0   -1    4    0 |    1
S2      0    0    1    0    0    0    0    0    0   -2    1 |    2
λ1 enters (S1 is no longer basic), R1 leaves; r = 0, so Phase I ends:

Basic   r   x1   x2   λ1   λ2   μ1   μ2   R1   R2   S1   S2 |  Sol
r       1    0    0    0    0    0    0   -1   -1    0    0 |    0
λ1      0    0    0    1    2   -1    0    1    0   -4    0 |    2
x1      0    1    1    0    0    0    0    0    0    1    0 |    1
μ2      0    0   -2    0   -1   -1    1    1   -1    0    0 |    3
S2      0    0    1    0    0    0    0    0    0   -2    1 |    2
Thus the optimum solution is:
x1 = 1, x2 = 0, λ1 = 2, λ2 = 0, μ1 = 0, μ2 = 3
and the optimal value is z = 4.
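As a quick numerical check of the tableau result (not part of the original slides), the reported point and multipliers can be substituted back into the KKT conditions of this example; stationarity, feasibility, and the restricted-basis products should all hold.

```python
x1, x2 = 1.0, 0.0
lam1, lam2 = 2.0, 0.0
mu1, mu2 = 0.0, 3.0

# Stationarity (condition 2): both expressions should be zero.
print(6 - 4 * x1 - 4 * x2 - lam1 - 2 * lam2 + mu1)   # 0.0
print(3 - 4 * x1 - 6 * x2 - lam1 - 3 * lam2 + mu2)   # 0.0

# Primal feasibility: slacks of the two constraints are non-negative.
s1 = 1 - (x1 + x2)           # 0.0
s2 = 4 - (2 * x1 + 3 * x2)   # 2.0
print(s1 >= 0 and s2 >= 0)   # True

# Restricted-basis (complementarity) conditions: every product is zero.
print(lam1 * s1, lam2 * s2, mu1 * x1, mu2 * x2)      # 0.0 0.0 0.0 0.0
```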
Example-3
Maximize $z = 8x_1 - x_1^2 + 2x_2 + x_3$
subject to
$x_1 + 3x_2 + 2x_3 \le 12$
$x_1, x_2, x_3 \ge 0$
Example-3: Solution
Denoting the Lagrange multipliers by $\lambda_1$, $\mu_1$, $\mu_2$, and $\mu_3$, the KKT conditions are:

1. $\lambda_1, \mu_1, \mu_2, \mu_3 \ge 0$

2. $8 - 2x_1 - \lambda_1 + \mu_1 = 0$
   $2 - 3\lambda_1 + \mu_2 = 0$
   $1 - 2\lambda_1 + \mu_3 = 0$
   i.e. $2x_1 + \lambda_1 - \mu_1 = 8$
        $3\lambda_1 - \mu_2 = 2$
        $2\lambda_1 - \mu_3 = 1$
Example-3: Solution
3. $x_1 + 3x_2 + 2x_3 + S_1 = 12$
   $\lambda_1 S_1 = 0$
   $\mu_1 x_1 = \mu_2 x_2 = \mu_3 x_3 = 0$

All variables $\ge 0$.

Solving this by the "Modified Simplex Algorithm", the optimal solution is:
x1 = 11/3, x2 = 25/9, x3 = 0
and the optimal z = 193/9.
Starting tableau (the r-row is shown after eliminating the artificials R1, R2, R3 from r - R1 - R2 - R3 = 0):

Basic   r   x1   x2   x3   λ1   μ1   μ2   μ3   R1   R2   R3   S1 |  Sol
r       1    2    0    0    6   -1   -1   -1    0    0    0    0 |   11
R1      0    2    0    0    1   -1    0    0    1    0    0    0 |    8
R2      0    0    0    0    3    0   -1    0    0    1    0    0 |    2
R3      0    0    0    0    2    0    0   -1    0    0    1    0 |    1
S1      0    1    3    2    0    0    0    0    0    0    0    1 |   12

Since λ1 S1 = 0 and S1 is in the basis, λ1 cannot enter. So we allow x1 to enter the basis, and of course by the minimum ratio test R1 leaves the basis.
Basic   r   x1   x2   x3    λ1    μ1   μ2   μ3    R1   R2   R3   S1 |  Sol
r       1    0    0    0     5     0   -1   -1    -1    0    0    0 |    3
x1      0    1    0    0   1/2  -1/2    0    0   1/2    0    0    0 |    4
R2      0    0    0    0     3     0   -1    0     0    1    0    0 |    2
R3      0    0    0    0     2     0    0   -1     0    0    1    0 |    1
S1      0    0    3    2  -1/2   1/2    0    0  -1/2    0    0    1 |    8

Since λ1 S1 = 0 and S1 is still in the basis, λ1 cannot enter. So we allow x2 to enter the basis, and of course by the minimum ratio test S1 leaves the basis.
Basic   r   x1   x2    x3    λ1    μ1   μ2   μ3    R1   R2   R3    S1 |  Sol
r       1    0    0     0     5     0   -1   -1    -1    0    0     0 |    3
x1      0    1    0     0   1/2  -1/2    0    0   1/2    0    0     0 |    4
R2      0    0    0     0     3     0   -1    0     0    1    0     0 |    2
R3      0    0    0     0     2     0    0   -1     0    0    1     0 |    1
x2      0    0    1   2/3  -1/6   1/6    0    0  -1/6    0    0   1/3 |  8/3

As S1 is no longer in the basis, λ1 now enters the basis, and by the minimum ratio test R3 leaves the basis.
Basic   r   x1   x2    x3   λ1    μ1   μ2     μ3    R1   R2     R3    S1 |   Sol
r       1    0    0     0    0     0   -1    3/2    -1    0   -5/2     0 |   1/2
x1      0    1    0     0    0  -1/2    0    1/4   1/2    0   -1/4     0 |  15/4
R2      0    0    0     0    0     0   -1    3/2     0    1   -3/2     0 |   1/2
λ1      0    0    0     0    1     0    0   -1/2     0    0    1/2     0 |   1/2
x2      0    0    1   2/3    0   1/6    0  -1/12  -1/6    0   1/12   1/3 |  11/4

Now μ3 enters the basis, and by the minimum ratio test R2 leaves the basis.
Basic   r   x1   x2    x3   λ1    μ1     μ2   μ3    R1     R2   R3    S1 |   Sol
r       1    0    0     0    0     0      0    0    -1     -1   -1     0 |     0
x1      0    1    0     0    0  -1/2    1/6    0   1/2   -1/6    0     0 |  11/3
μ3      0    0    0     0    0     0   -2/3    1     0    2/3   -1     0 |   1/3
λ1      0    0    0     0    1     0   -1/3    0     0    1/3    0     0 |   2/3
x2      0    0    1   2/3    0   1/6  -1/18    0  -1/6   1/18    0   1/3 |  25/9

This is the end of Phase I. Thus the optimal solution is:
x1 = 11/3, x2 = 25/9, x3 = 0
and the optimal value is z = 193/9.
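As a small exact-arithmetic check (not part of the original slides), substituting the tableau's solution into the objective and the constraint confirms the reported values:

```python
from fractions import Fraction as F

x1, x2, x3 = F(11, 3), F(25, 9), F(0)

z = 8 * x1 - x1 ** 2 + 2 * x2 + x3     # objective of Example 3
print(z)                               # 193/9
print(x1 + 3 * x2 + 2 * x3)            # 12 (the constraint is tight)
```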
Hillier & Lieberman
Solve the following quadratic programming problem
Maximize $z = 20x_1 + 50x_2 - 20x_1^2 + 18x_1 x_2 - 5x_2^2$
subject to
$x_1 + x_2 \le 6$
$x_1 + 4x_2 \le 18$
$x_1, x_2 \ge 0$

Using the Excel Solver, the optimal solution is x1 = 2, x2 = 4, with Max z = 224.
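The same answer can be cross-checked in Python (an illustrative alternative to the Excel Solver run mentioned above, not part of the original slides):

```python
import numpy as np
from scipy.optimize import minimize

# Minimize the negative of z = 20*x1 + 50*x2 - 20*x1**2 + 18*x1*x2 - 5*x2**2.
def neg_z(x):
    x1, x2 = x
    return -(20 * x1 + 50 * x2 - 20 * x1 ** 2 + 18 * x1 * x2 - 5 * x2 ** 2)

constraints = [
    {"type": "ineq", "fun": lambda x: 6 - x[0] - x[1]},      # x1 +  x2 <= 6
    {"type": "ineq", "fun": lambda x: 18 - x[0] - 4 * x[1]}, # x1 + 4x2 <= 18
]
res = minimize(neg_z, x0=np.array([0.0, 0.0]), bounds=[(0, None)] * 2,
               constraints=constraints, method="SLSQP")
print(res.x, -res.fun)   # approximately [2. 4.] and 224
```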
Remark:
If the problem is a minimization problem, say Minimize z, we convert it into a maximization problem using
Minimize z = - Maximize (-z).