Optimization - Introduction_Part_3

The document discusses the conversion of inequalities in linear programming (LP) into equalities using surplus and slack variables, along with compact matrix and vector representations. It includes examples of maximizing objective functions subject to constraints and explains the formulation of the Lagrangian function in optimization problems. Additionally, it covers the KKT conditions, the role of Lagrange multipliers, and their application in constrained optimization scenarios.


Converting inequalities in LP into equalities

1. Surplus and slack variables
2. Expressing the problem compactly using matrices and vectors

Maximize Z = 4x₁ + x₂
Subject to
x₁ + x₂ ≤ 50   →   x₁ + x₂ + x₃ = 50; x₃ ≥ 0
3x₁ + x₂ ≤ 90   →   3x₁ + x₂ + x₄ = 90; x₄ ≥ 0
x₁ ≥ 0; x₂ ≥ 0

Maximize Z cT x
subject to Maximize Z cT x
Subject to
Ax b; x 0; Maximize Z 4 x1  x2  0 x3  0 x4
 1 1 1 0   x1 
where  3 1 0 0   x  50 Subject to
  2   
 x1   4   x1  x2  x3 50;
 x3   90 
x   1  
 50  3 x1  x2  x4 90;
b   ; x  2  ; c    x4 
 90   x3   0 x1 0; x2 0; x3 0; x4 0 x1 0; x2 0; x3 0; x4 0
   
x
 4  0 cT 4 1 0 0 
Solving Maximize Z = cᵀx subject to Ax = b, x ≥ 0 with this A, b, c in CVX gives

x = [30.00; 0.00; 20.00; 0.00], Z = 120.00

CVX code in the original two variables:

% problem 1
cvx_begin quiet
variables x y
maximize 4*x+y
subject to
x+y<=50
3*x+y<=90
x>=0
y>=0
cvx_end
sprintf('x=%0.2f y=%0.2f maxvalue=%0.2f',x,y,4*x+y)

CVX code in compact matrix form (variables x(4) will be assumed to be a column vector):

% problem 1
A=[1 1 1 0; 3 1 0 1]; b=[50;90];
c=[4 1 0 0];
cvx_begin quiet
variables x(4)
maximize c*x
subject to
A*x==b;
x>=0
cvx_end
% display
x
Z=c*x
With one surplus variable (x₃, for the ≥ constraint) and one slack variable (x₄, for the ≤ constraint):

Minimize Z = cᵀx
subject to
Ax = b; x ≥ 0

where

A = [1 2 -1 0; 3 4 0 1], b = [10; 24], x = [x₁; x₂; x₃; x₄], c = [200; 500; 0; 0]

% problem 2
A=[1 2 -1 0; 3 4 0 1]; b=[10;24];
c=[200 500 0 0];
cvx_begin quiet
variables x(4)
minimize c*x
subject to
A*x==b;
x>=0
cvx_end
% display
x
Z=c*x

CVX output: x = [4.00; 3.00; 0.00; 0.00], Z = 2300.
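A quick hand check of that optimum: Z = 200·4 + 500·3 = 2300, and both equality constraints hold, since 4 + 2·3 − 0 = 10 and 3·4 + 4·3 + 0 = 24.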
Question 5
Write the KKT conditions for the problem and verify with CVX.

Minimize Z = cᵀx
subject to
Ax = b; x ≥ 0

where

A = [1 2 -1 0; 3 4 0 1], b = [10; 24], x = [x₁; x₂; x₃; x₄], c = [200; 500; 0; 0]
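A minimal CVX sketch of the verification step (an illustration, not from the deck: the dual-variable names lam and mu are mine; the colon notation is CVX's documented way of attaching dual variables to constraints). For this data the equality duals work out to λ = (350, −50) and the multipliers on x ≥ 0 to μ = (0, 0, 350, 50), up to CVX's sign convention for equality duals.

% KKT verification sketch; dual-variable names lam, mu are illustrative
A=[1 2 -1 0; 3 4 0 1]; b=[10;24];
c=[200 500 0 0];
cvx_begin quiet
variables x(4)
dual variables lam mu       % lam for A*x==b, mu for x>=0
minimize c*x
subject to
A*x==b : lam;
x>=0 : mu;
cvx_end
% numerical KKT checks at the optimum:
mu                          % dual feasibility: every entry >= 0
mu.*x                       % complementary slackness: every entry ~ 0
c' - A'*lam - mu            % stationarity: ~ 0 (sign of lam depends on CVX's convention)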
Writing the Lagrangian function

First understand geometrically why the rule produces the right KKT conditions.

This is required for understanding a very powerful suite of algorithms called ADMM, which is replacing older LP, QP, and ILP methods, most graph algorithms, and older deep-learning training algorithms.
If inequality constraints are active, what should be the directions of the gradients at the optimal point?

Case 1: minimization on g(x₁, x₂) ≥ 0, constraint active

x* = arg minₓ f(x₁, x₂) on g(x₁, x₂) ≥ 0
∇f(x) = λ∇g(x), λ > 0
L(x, λ ≥ 0) = f(x) − λg(x); x ∈ ℝ²
If written with a + sign, the formulation is wrong: the Lagrangian sign is chosen in advance.

[Figure: level sets f(x) = 10, 20, 30 around the unconstrained minimum, which lies outside the feasible region g(x₁, x₂) ≥ 0; the constrained minimum sits on the boundary g(x₁, x₂) = 0.]

∇f(x) and ∇g(x) are in the same direction at the constrained minimum on g(x) = 0.
Case 2: minimization on g(x₁, x₂) ≥ 0, constraint inactive

x* = arg minₓ f(x₁, x₂) on g(x₁, x₂) ≥ 0
∇f(x) = λ∇g(x), λ = 0
L(x, λ ≥ 0) = f(x) − λg(x); x ∈ ℝ²
If written with a + sign, the formulation is wrong: the Lagrangian sign is chosen in advance.

[Figure: level sets f(x) = 10, 20, 30; the unconstrained minimum lies strictly inside the feasible region, where g(x₁, x₂) > 0.]

At x*: ∇f(x*) = 0 (the zero vector) and λ∇g(x*) = 0 (the zero vector), since λ = 0.
Case 3: minimization on g(x₁, x₂) ≤ 0, constraint active

x* = arg minₓ f(x₁, x₂) on g(x₁, x₂) ≤ 0
∇f(x) = −λ∇g(x), λ > 0
L(x, λ ≥ 0) = f(x) + λg(x); x ∈ ℝ²
If written with a − sign, the formulation is wrong: the Lagrangian sign is chosen in advance.

[Figure: level sets f(x) = 10, 20, 30 around the unconstrained minimum, which lies outside the feasible region g(x₁, x₂) ≤ 0; the constrained minimum sits on the boundary g(x₁, x₂) = 0.]

∇f(x) and ∇g(x) are in opposite directions at the constrained minimum on g(x) = 0.
Case 4: minimization on g(x₁, x₂) ≤ 0, constraint inactive

x* = arg minₓ f(x₁, x₂) on g(x₁, x₂) ≤ 0
∇f(x) = −λ∇g(x), λ = 0
L(x, λ ≥ 0) = f(x) + λg(x); x ∈ ℝ²
If written with a − sign, the formulation is wrong: the Lagrangian sign is chosen in advance.

[Figure: level sets f(x) = 10, 20, 30; the unconstrained minimum lies strictly inside the feasible region, where g(x₁, x₂) < 0.]

At x*: ∇f(x*) = 0 (the zero vector) and λ∇g(x*) = 0 (the zero vector), since λ = 0.
Maximization

When maximizing f(x) with constraints, ∇f(x) will be directed towards the unconstrained maximum point. Refer to the diagrams and convince yourself.

Case 5: maximization on g(x₁, x₂) ≥ 0, constraint active

x* = arg maxₓ f(x₁, x₂) on g(x₁, x₂) ≥ 0
∇f(x) = −λ∇g(x), λ > 0
L(x, λ ≥ 0) = f(x) + λg(x); x ∈ ℝ²
If written with a − sign, the formulation is wrong: the Lagrangian sign is chosen in advance.

[Figure: level sets f(x) = 10, 20, 30, 40 around the unconstrained maximum, which lies outside the feasible region g(x₁, x₂) ≥ 0; the constrained maximum sits on the boundary g(x₁, x₂) = 0.]

∇f(x) and ∇g(x) are in opposite directions at the constrained maximum on g(x) = 0.
Case 6: maximization on g(x₁, x₂) ≥ 0, constraint inactive

x* = arg maxₓ f(x₁, x₂) on g(x₁, x₂) ≥ 0
∇f(x) = −λ∇g(x), λ = 0
L(x, λ ≥ 0) = f(x) + λg(x); x ∈ ℝ²
If written with a − sign, the formulation is wrong: the Lagrangian sign is chosen in advance.

[Figure: level sets f(x) = 10, 20, 30, 40; the unconstrained maximum lies strictly inside the feasible region, where g(x₁, x₂) > 0.]

At x*: ∇f(x*) = 0 (the zero vector) and λ∇g(x*) = 0 (the zero vector), since λ = 0.
Case 7: maximization on g(x₁, x₂) ≤ 0, constraint active

x* = arg maxₓ f(x₁, x₂) on g(x₁, x₂) ≤ 0
∇f(x) = λ∇g(x), λ > 0
L(x, λ ≥ 0) = f(x) − λg(x); x ∈ ℝ²
If written with a + sign, the formulation is wrong: the Lagrangian sign is chosen in advance.

[Figure: level sets f(x) = 10, 20, 30, 40 around the unconstrained maximum, which lies outside the feasible region g(x₁, x₂) ≤ 0; the constrained maximum sits on the boundary g(x₁, x₂) = 0.]

∇f(x) and ∇g(x) are in the same direction at the constrained maximum on g(x) = 0.
Case 8: maximization on g(x₁, x₂) ≤ 0, constraint inactive

x* = arg maxₓ f(x₁, x₂) on g(x₁, x₂) ≤ 0
∇f(x) = λ∇g(x), λ = 0
L(x, λ ≥ 0) = f(x) − λg(x); x ∈ ℝ²
If written with a + sign, the formulation is wrong: the Lagrangian sign is chosen in advance.

[Figure: level sets f(x) = 10, 20, 30, 40; the unconstrained maximum lies strictly inside the feasible region, where g(x₁, x₂) < 0.]

At x*: ∇f(x*) = 0 (the zero vector) and λ∇g(x*) = 0 (the zero vector), since λ = 0.
What if we have many constraints?
Take one Lagrange multiplier for each constraint: for n constraints, n Lagrange multipliers. Then write a combined Lagrangian function.
Simple rule for writing the Lagrangian function (a worked example follows below):
1. Express every inequality in the form g(x) ≥ 0.
2. In minimization, subtract each constraint from the objective function after multiplying it by its Lagrange multiplier variable.
3. In maximization, add each constraint to the objective function after multiplying it by its Lagrange multiplier variable.
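A worked sketch applying this rule to the LP of Question 5 (the multiplier names λ and μ are illustrative; treating the equality constraints with sign-free multipliers is a standard convention, not part of the rule above). For Minimize Z = cᵀx subject to Ax = b, x ≥ 0, take λ ∈ ℝ² for Ax = b and μ ≥ 0 for the constraints x ≥ 0; since this is a minimization, both terms are subtracted:

L(x, λ, μ) = cᵀx − λᵀ(Ax − b) − μᵀx

Setting ∇ₓL = 0 gives the stationarity condition c − Aᵀλ − μ = 0. Together with primal feasibility (Ax = b, x ≥ 0), dual feasibility (μ ≥ 0), and complementary slackness (μᵢxᵢ = 0 for each i), these are the KKT conditions that the CVX sketch under Question 5 checks numerically.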
In the Lagrangian function, each Lagrange multiplier is treated as a variable; we call it a dual variable. So the Lagrangian function is a function of both primal and dual variables.
