Optimization - Introduction
• Decision Variables: Variables that we can control or change to achieve the best outcome. For
example, in a manufacturing process, decision variables might be the quantities of different
products to produce.
• Constraints: These are limitations or conditions that must be satisfied. Constraints can include
resource limitations (like time, money, or materials) or physical limitations (such as space or
capacity).
Types of Optimization:
• Linear Programming (LP): Deals with optimizing a linear objective function
subject to linear constraints. It is often used in resource allocation, production
planning, and logistics.
A company manufactures two products, Product A and Product B. Each unit of Product A
requires 2 hours of labor and 1 hour of machine time, while each unit of Product B requires 1
hour of labor and 3 hours of machine time. The company has a total of 100 hours of labor
available and 90 hours of machine time available. Each unit of Product A yields a profit of $10,
and each unit of Product B yields a profit of $15. The company wants to maximize its profit.
How do we solve it?
Objective:
Maximize Profit = 10A + 15B
Constraints:
Labor constraint: 2A + 1B ≤ 100 (Available labor hours)
Machine time constraint: 1A + 3B ≤ 90 (Available machine hours)
Non-negativity constraint: A ≥ 0, B ≥ 0
Solution:
1. Define the decision variables:
x = units of Product A
y = units of Product B
2. Formulate the objective function:
Maximize profit: Z = 10x + 15y
3. List the constraints:
Labor constraint: 2x + y ≤ 100
Machine-time constraint: x + 3y ≤ 90
Non-negativity constraint: x ≥ 0 and y ≥ 0
The same LP can also be checked numerically, as sketched below.
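A minimal sketch using MATLAB's linprog (this assumes the Optimization Toolbox is available; the variable names are illustrative):
f  = [-10; -15];          % linprog minimizes, so negate the profit coefficients
A  = [2 1; 1 3];          % labor and machine-time coefficients
b  = [100; 90];           % available labor and machine hours
lb = [0; 0];              % non-negativity bounds
[sol, fval] = linprog(f, A, b, [], [], lb, []);
units_A = sol(1), units_B = sol(2)
max_profit = -fval        % maximum profit reported by the solver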
Graphical Solution
In linear programming (LP) problems with a convex feasible region, the
optimal solution always occurs at one of the corner points (vertices) of
the feasible region. This is known as the Corner Point Principle or the
Extreme Point Theorem.
• The reason behind this principle is rooted in the properties of linear objective functions and
linear constraints.
• In LP, the objective function and the constraints are all linear functions.
• With linear constraints, the feasible region is always a convex polyhedron; when it is bounded, it is a polytope, a region with flat sides and corners (the corner points of the product-mix LP are enumerated in the sketch below).
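To illustrate the corner-point principle on the product-mix LP, the sketch below simply enumerates the vertices of its feasible region and evaluates the profit at each; the corner coordinates are worked out by hand from the constraint lines.
% Vertices of {2A + B <= 100, A + 3B <= 90, A >= 0, B >= 0}:
% (0,0), (50,0), (0,30), and the intersection of 2A+B=100 with A+3B=90, which is (42,16).
corners = [0 0; 50 0; 0 30; 42 16];
profit  = corners * [10; 15];     % profit 10A + 15B at every corner
[best, k] = max(profit);          % best = 660, attained at the corner (42,16)
corners(k,:), best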
Plotting the 2D feasible region
Plot the feasible region satisfying the following constraints:
x + y − 4 ≤ 0, or equivalently x/4 + y/4 ≤ 1;
3x + y − 6 ≤ 0, or equivalently x/2 + y/6 ≤ 1;
x ≥ 0; y ≥ 0.
Note: the code below is not generic. Depending on the y-intercepts of the constraints, you need to change the range of x and y in the code.
N = 200000;            % number of random (x,y) sample points in the domain
x = rand(N,1)*6;       % x in [0, 6]
y = rand(N,1)*6;       % y in [0, 6]
f1 = x + y - 4;        % constraint function 1 evaluated at the N points
ind1 = (f1 <= 0);      % logical index: points satisfying x + y <= 4
f2 = 3*x + y - 6;      % constraint function 2 evaluated at the N points
ind2 = (f2 <= 0);      % logical index: points satisfying 3x + y <= 6
ind3 = and(ind1,ind2); % points satisfying both constraints
a = [x(ind3), y(ind3)];% points in the feasible region (x >= 0 and y >= 0 hold by construction)
figure
plot(a(:,1), a(:,2), '.', 'MarkerSize', 10);
axis equal
xlim([0 6])
ylim([0 6])
CVX
% Solve the product-mix LP with CVX
cvx_begin
    variables A B           % decision variables: units of Product A and Product B
    maximize( 10*A + 15*B ) % profit objective
    subject to
        2*A + B <= 100;     % labor-hours constraint
        A + 3*B <= 90;      % machine-hours constraint
        A >= 0;             % non-negativity constraints
        B >= 0;
cvx_end
How do we solve this problem analytically?
Assignment 4
Problem 1:
A farmer has two types of crops, Crop X and Crop Y. Each unit of Crop X
requires 3 hours of labor and 2 hours of irrigation, while each unit of Crop Y
requires 2 hours of labor and 4 hours of irrigation. The farmer has a total of
40 hours of labor available and 32 hours of irrigation available. Each unit of
Crop X yields a profit of $20, and each unit of Crop Y yields a profit of $25.
The farmer wants to maximize their profit.
Problem 2:
A company produces two types of products, Product P and Product Q. Each
unit of Product P requires 1 hour of assembly time and 2 hours of testing
time, while each unit of Product Q requires 2 hours of assembly time and 1
hour of testing time. The company has a total of 40 hours of assembly time
available and 30 hours of testing time available. Each unit of Product P
yields a profit of $30, and each unit of Product Q yields a profit of $25. The
company wants to maximize its profit.
A few important concepts
• Function vs Equation
• Level Set
• Gradient of a function
• Constraints
• Equality Constraints
• Inequality Constraints
• Lagrange multiplier
Function vs Equation
f(x) = 2x + 5 is a function; its graph is the line through P1 = (0, 5) and P2 = (1, 7).
f(x) = 11, i.e. 2x + 5 = 11, is an equation; its solution is x = 3.
f(x1, x2) = (x1 − 3)^2 + (x2 − 5)^2 is a function of two variables.
(x1 − 3)^2 + (x2 − 5)^2 = 9 is an equation; its solution set is a circle in the (x1, x2) plane.
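A quick way to see the distinction is to plot the function and overlay the equation; a minimal MATLAB sketch (the plotting range is an illustrative choice):
x  = linspace(-1, 5, 100);
fx = 2*x + 5;                     % the function f(x) = 2x + 5: a value for every x
plot(x, fx), hold on
plot(x, 11 + 0*x, '--')           % the equation f(x) = 11 is a horizontal line
plot(3, 11, 'o')                  % it meets the function at the single solution x = 3
hold off, xlabel('x'), ylabel('f(x)')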
Function vs Equation
Example
G(x), with x an element of R^n, is a function. For viewing its graph, you need an (n + 1)-dimensional space.
f(x) = f(x1, x2) = (x1 − 3)^2 + (x2 − 5)^2
Gradient of the function:
∇f(x) = [∂f/∂x1; ∂f/∂x2] = [2x1 − 6; 2x2 − 10] = [2(x1 − 3); 2(x2 − 5)]
Setting each component to zero:
∂f/∂x1 = 2(x1 − 3) = 0 ⇒ x1* = 3
∂f/∂x2 = 2(x2 − 5) = 0 ⇒ x2* = 5
So ∇f(x*) = 0 at x* = (3, 5), the minimizer of f.
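The stationary point can be verified symbolically; a small sketch assuming the Symbolic Math Toolbox is available:
syms x1 x2
f  = (x1 - 3)^2 + (x2 - 5)^2;
gf = gradient(f, [x1, x2])        % returns [2*x1 - 6; 2*x2 - 10]
xstar = solve(gf == 0, [x1, x2])  % x1* = 3, x2* = 5, where the gradient vanishes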
Local maximum, local minimum and Gradient
Equality/inequality constraints
Facts about the Gradient
1. The gradient exists at every point of the domain (wherever the function is differentiable).
2. The gradient is a vector. Each element represents the rate of change of the function with respect to one variable.
3. If the domain is R^2, then the gradient is a 2-tuple vector.
4. The gradient vector is drawn in the domain at the point where it is computed. This vector is orthogonal to the level set (i.e., to the tangent drawn to the level set at that point).
5. Its magnitude represents the rate at which the function is increasing at that point in the direction of the vector.
6. At maxima and minima, the gradient vector's magnitude is zero.
7. Only one level set passes through a given point.
close(gcf); x = -0.9:.1:0.9; y = x;            % grid over [-0.9, 0.9] in each direction
[X,Y] = meshgrid(x,y);
fxy = 25*X.^2 - 12*X.^4 - 6*(X.*Y) + 25*Y.^2 - 25*(X.^2).*(Y.^2) - 12*Y.^4;
[px,py] = gradient(fxy, 0.1, 0.1);             % numerical gradient; 0.1 matches the grid spacing
contour(X,Y,fxy), hold on                      % level sets of f
quiver(X,Y,px,py), hold off, axis image        % gradient vectors drawn in the domain
f(x, y) = 25x^2 − 12x^4 − 6xy + 25y^2 − 25x^2 y^2 − 12y^4
∇f(x, y) = [50x − 48x^3 − 6y − 50xy^2;  −6x + 50y − 50x^2 y − 48y^3]
[Figure: contour plot of f with the gradient field (quiver) overlaid.]
Note that the gradient of the function at different points in the domain is viewed as vectors drawn in the domain of the function itself.
Two-variable functions and their visualization using level sets
f(x1, x2) = (x1 − 3)^2 + (x2 − 5)^2
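The level sets of this function can be plotted as below; the grid range is an illustrative choice:
[X1, X2] = meshgrid(-2:0.1:8, 0:0.1:10);
F = (X1 - 3).^2 + (X2 - 5).^2;    % f(x1, x2) = (x1 - 3)^2 + (x2 - 5)^2
contour(X1, X2, F, 10)            % 10 level sets: concentric circles centered at (3, 5)
axis equal, xlabel('x_1'), ylabel('x_2')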
Equality Constraints
Constrained Optimization
min_x f(x)  s.t.  g(x) = 0
The unconstrained optimality condition ∇f(x*) = 0 is not applicable here: the search is limited to the region defined by g(x) = 0.
A high school problem reformulated using optimization theory notation
What is the nearest point on the line 5x1 + 6x2 − 30 = 0 from the origin?
Reformulate it as: find x* = (x1*, x2*) = arg min_x f(x)  s.t.  g(x) = 0.
[Figure: the line 5x1 + 6x2 − 30 = 0 in the (x1, x2) plane, with the origin (0, 0) and the nearest point (x1*, x2*) marked.]
Example 1
x = (x1, x2) ∈ R^2
min_x f(x) = x1^2 + x2^2  s.t.  g(x) = 5x1 + 6x2 − 30 = 0
Constrained Optimization and Lagrange Multiplier
Consider x* = arg min_x f(x)  s.t.  g(x) = 0.
{x : g(x) = 0} defines the search domain C.
Note: g(x1, x2) = 5x1 + 6x2 − 30 is a function; its graph is a plane in 3D. But 5x1 + 6x2 − 30 = 0 is a line in 2D, and g(x1, x2) = 1, i.e. 5x1 + 6x2 − 31 = 0, is a line parallel to the previous one.
We have to find where in C, defined by g(x) = 0, the function f(x) assumes its minimum value.
Example 1
x = (x1, x2) ∈ R^2
min_x f(x) = x1^2 + x2^2  s.t.  g(x) = 5x1 + 6x2 − 30 = 0
At the optimal point x*:
∇f(x*) = λ∇g(x*) with λ > 0, and g(x*) = 0,
where ∇f(x) = [2x1; 2x2] and ∇g(x) = [5; 6].
These conditions are called the Karush-Kuhn-Tucker (KKT) conditions, and λ is called the Lagrange multiplier.
[Figure: level sets of f(x) (circles) and level sets of g(x) (the lines g(x) = 0 and g(x) = 1). Which level set of f just touches the line?]
When solving an optimization problem in CVX, it can be 'commanded' (by declaring a dual variable) to return the Lagrange multipliers.
Checking the KKT condition at the solution point
clear all; clc;
cvx_begin quiet
    variables x1 x2
    minimize( x1^2 + x2^2 )
    dual variable L
    subject to
        L : 5*x1 + 6*x2 - 30 == 0
cvx_end
format short
x1
x2
L
delfx = [2*x1 2*x2]'    % gradient of f at the solution
Lxdelgx = L*[5 6]'      % lambda times the gradient of g
Output: x1 = 2.4591, x2 = 2.9508, L = 0.9836, delfx = [4.9182; 5.9015], Lxdelgx = [4.9180; 5.9015], so ∇f(x*) ≈ λ∇g(x*), confirming the KKT condition.
Constrained Optimization and Lagrange Multiplier
Example 2
x = (x1, x2) ∈ R^2
min_x f(x) = x1^2 + x2^2  s.t.  g(x) = −(5x1 + 6x2 − 30) = 0
At the optimal point x*, ∇f(x*) = λ∇g(x*) with λ < 0: negating g flips the sign of the Lagrange multiplier, while the solution point itself is unchanged.
[Figure: level sets of f(x) with the constraint line.]
Constrained Optimization and Lagrange Multiplier
Solution to Example 1
x = (x1, x2) ∈ R^2
min_x f(x) = x1^2 + x2^2  s.t.  g(x) = 5x1 + 6x2 − 30 = 0
∇f(x) = [2x1; 2x2];  ∇g(x) = [5; 6]
∇f(x) = λ∇g(x)  ⇒  2x1 = 5λ and 2x2 = 6λ  ⇒  x1 = 5λ/2; x2 = 3λ.
Further, (x1, x2) must satisfy 5x1 + 6x2 − 30 = 0, so 25λ/2 + 18λ = 30, giving λ = 60/61 ≈ 0.9836, x1* = 150/61 ≈ 2.4590 and x2* = 180/61 ≈ 2.9508.
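A quick numerical check of these values (they match the CVX output shown earlier):
lambda = 60/61;                   % from 5*(5*lambda/2) + 6*(3*lambda) = 30
x1 = 5*lambda/2                   % 150/61 ≈ 2.4590
x2 = 3*lambda                     % 180/61 ≈ 2.9508
5*x1 + 6*x2 - 30                  % ≈ 0: the constraint is satisfied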
Lagrangian Function
Given an objective function and a set of constraints (in the primal variables), the Lagrangian is a new function formed as an algebraic sum of the objective function and scalar (dual-variable) multiples of the constraint functions, such that the new function's first-order optimality conditions with respect to the primal and dual variables are the same as the KKT conditions of the original problem.
For min_x f(x)  s.t.  g1(x) = 0 and g2(x) = 0, the Lagrangian is
L(x, λ1, λ2) = f(x) − λ1 g1(x) − λ2 g2(x).
Applying the first-order optimality conditions to L gives
∇f(x*) = λ1∇g1(x*) + λ2∇g2(x*)
g1(x*) = 0,  g2(x*) = 0
which are exactly the KKT conditions.
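As an illustration, the Lagrangian of Example 1 can be formed and its first-order conditions solved symbolically; a sketch assuming the Symbolic Math Toolbox, using the sign convention L = f − λg that matches ∇f = λ∇g above:
syms x1 x2 lambda
f = x1^2 + x2^2;
g = 5*x1 + 6*x2 - 30;
L = f - lambda*g;                                            % Lagrangian of Example 1
sol = solve(gradient(L, [x1, x2, lambda]) == 0, [x1, x2, lambda]);
[sol.x1, sol.x2, sol.lambda]                                 % 150/61, 180/61, 60/61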
Constrained Optimization and Lagrange Multiplier
min_x f(x) = 5x1 + 6x2  s.t.  g(x) = x1^2 + x2^2 − 225 = 0
Where is the minimum? The level sets of f are the parallel lines 5x1 + 6x2 = c (e.g. c = 0 and c = −1), and the constraint set is a circle of radius 15.
With ∇f(x*) = λ∇g(x*): a −ve λ gives the minimum point location, while a +ve λ gives the maximum point location (see the sketch below).
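A sketch of the analytical solution under the convention ∇f(x*) = λ∇g(x*); the numbers follow from that condition together with the circle constraint:
% grad f = [5; 6] and grad g = [2*x1; 2*x2], so x = [5; 6]/(2*lambda);
% substituting into x1^2 + x2^2 = 225 gives lambda = ±sqrt(61)/30.
lambda = -sqrt(61)/30;            % the negative root: minimum point
x = [5; 6] / (2*lambda)           % ≈ [-9.60; -11.52], on the circle of radius 15
fmin = 5*x(1) + 6*x(2)            % ≈ -117.15, the minimum value of f
% lambda = +sqrt(61)/30 gives the antipodal point, where f is maximized.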
Karush-Kuhn-Tucker (KKT) Conditions
min_x f(x)  s.t.  g(x) ≤ 0
Two possibilities for the solution point:
• g(x*) < 0: the solution point lies in the interior of the feasible region (the constraint is inactive).
• g(x*) = 0: the solution point lies on the boundary of the feasible region (the constraint is active).
[Figure: the two cases, with the solution point in the interior and on the boundary of the region g(x) ≤ 0.]
Two possibilities in words
If the inequality constraint is active, the solution point x* satisfies ∇f(x*) + λ∇g(x*) = 0 with λ ≥ 0.
If it is inactive, then through the complementarity condition λ must take the value 0, and ∇f(x*) = 0.
Optimization problem with inequality constraints
min_x f(x)  s.t.  g(x) ≤ 0
KKT conditions at the solution x*:
∇f(x*) + λ∇g(x*) = 0,  λ ≥ 0
g(x*) ≤ 0
λ g(x*) = 0   (complementarity condition)
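A small CVX sketch of the active case: the constraint 5x1 + 6x2 ≥ 30, written below as 30 − 5x1 − 6x2 ≤ 0, excludes the unconstrained minimizer (0, 0), so it is active at the optimum and its multiplier is positive:
cvx_begin quiet
    variables x1 x2
    minimize( x1^2 + x2^2 )
    dual variable mu
    subject to
        mu : 30 - 5*x1 - 6*x2 <= 0    % active inequality constraint
cvx_end
mu                                    % ≈ 0.9836 > 0, the same multiplier as in the equality case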
Two cases analyzed for min_x f(x)  s.t.  g(x) ≤ 0:
• Active constraint: at the optimal point, g(x*) = 0. Stationarity ∇f(x*) + λ∇g(x*) = 0 holds with λ ≥ 0, so the gradients of f and g point in opposite directions at x*.
• Inactive constraint: at the optimal point, g(x*) < 0. The complementarity condition λ g(x*) = 0 forces λ to take the value 0, so ∇f(x*) = 0 (the zero vector), exactly as in the unconstrained case.
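And a sketch of the inactive case: written as 5x1 + 6x2 − 30 ≤ 0, the constraint already holds at the unconstrained minimizer (0, 0), so complementarity forces the multiplier to zero:
cvx_begin quiet
    variables x1 x2
    minimize( x1^2 + x2^2 )
    dual variable mu
    subject to
        mu : 5*x1 + 6*x2 - 30 <= 0    % inactive at the optimum
cvx_end
mu                                    % = 0, and the solution is x1 = x2 = 0 with ∇f(x*) = 0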
Assignment 5
• Solve the following and plot the corresponding level-set representations of the functions:
1. min_x f(x) = x1^2 + x2^2  s.t.  g(x) = x1 + x2 − 5 = 0
2. min_x f(x) = x1^2 + x2^2  s.t.  g(x) =