
Optimization Techniques
Contents
• Types of optimization: constrained and unconstrained optimization
• Methods of Optimization
• Numerical Optimization
• Bracketing Methods: Bisection Method
• False Position Method
• Newton’s Method
• Steepest Descent Method
• Penalty Function Method
Queen Dido’s problem: enclose the largest possible area using a boundary of fixed length (the classical isoperimetric problem).
What is optimization?
• “Optimization” comes from the same root as “optimal”, which means
best. When you optimize something, you are “making it best”.
• But “best” can vary.
1. Transportation Routing: Companies like Uber and Ola use optimization to calculate the most efficient routes for drivers picking up and dropping off passengers, minimizing travel time and costs.
2. Supply Chain Management: Businesses optimize their supply chain logistics to minimize costs while ensuring timely delivery of products to consumers.
3. Product Design: Engineers optimize the design of cars, airplanes, and other vehicles to improve performance, fuel efficiency, and safety while considering manufacturing constraints.
• Both maximizing and minimizing are types of optimization problems.
What is optimization?
• Mathematical optimization is the process of finding the
maximum or minimum value of an objective function by
systematically choosing the best available values from a
set of permissible inputs.
Application areas
• Manufacturing
• Production
• Inventory control
• Transportation
• Scheduling
• Networks
• Finance
• Engineering
• Mechanics
• Economics
• Control engineering
• Marketing
• Policy modeling
Ingredients of Optimization
• An objective function
• Decision variables
• Constraints
Ingredients of Optimization
• The objective function, f(x), is the output you’re trying to maximize or minimize.
• Examples:
• yield per unit time in a chemical reaction
• mileage per liter of a car
• revenue from producing TV sets
• tensile strength of a rope
• cost per unit of producing radios
• operating cost of a power plant
• time of travel from one city to another
Ingredients of Optimization
• Decision/control variables: the variables x1, x2, x3, and so on, which are the inputs, i.e., the things you can control. They are written xn to refer to an individual variable or x to refer to them as a group.
• Example: the efficiency of an air-conditioning system depends on pressure, temperature, moisture content, area, etc.
• In optimization theory, we develop methods for the optimal choice of control variables to maximize (or minimize) the objective function.
Ingredients of Optimization
• Constraints are equations or inequalities that place limits on how big or small some variables can get. Constraints arise from the nature of the problem and the variables.
• Example: if x1 is a production cost, then x1 ≥ 0.
• Constraints can be in the form of equalities or inequalities.
• Equality constraints are usually denoted hn(x) and inequality constraints are denoted gn(x).
Statement of an Optimization Problem
• Find x = (x1, ..., xn) which minimizes f(x)
• subject to
hi(x1, ..., xn) = 0,  i = 1, 2, ..., m   (m equality constraints)
gj(x1, ..., xn) ≤ 0,  j = 1, 2, ..., p   (p inequality constraints)
Examples:
• For each of the following tasks, write an objective
function (“maximize ____”) and at least two constraints
(“subject to _____ ≤ c1 ”, or ≥ or =)
1. A student must create a poster project for a class.
2. A shipping company must deliver packages to
customers.
3. A grocery store must decide how to organize the store
layout
Regression
• Regression is itself an optimization problem: for example, least-squares regression chooses the model coefficients that minimize the sum of squared errors between the predictions and the observed data.
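For instance, here is a minimal least-squares sketch using NumPy; the data points are invented purely for illustration:

```python
import numpy as np

# Hypothetical data: noisy points scattered around the line y = 2x + 1
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 5.2, 6.8, 9.1])

# Least-squares regression: choose (a, b) minimizing sum((a*x + b - y)^2)
A = np.vstack([x, np.ones_like(x)]).T
(a, b), *_ = np.linalg.lstsq(A, y, rcond=None)
print(f"best-fit line: y = {a:.3f}x + {b:.3f}")
```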
Types of optimization
• Optimization problems can be classified based on the
type of constraints, nature of design variables, physical
structure of the problem, nature of the equations
involved, deterministic nature of the variables,
permissible value of the design variables, separability of
the functions and number of objective functions.
Types of optimization
• Constrained optimization problems: which are subject to one
or more constraints.
• Unconstrained optimization problems: in which no constraints
exist.
• Example:
• Find the path between two points that minimizes the distance
traveled
• We need to enclose a field with a fence. We have 500 ft of fencing material, and there is a building on one side of the field, so no fencing is needed there. Determine the maximum area of the rectangular field that can be enclosed by the fence (a worked solution follows below).
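A worked solution of the fence problem, assuming the building replaces one of the long sides so only three sides need fencing: let x be the side parallel to the building and y each of the two perpendicular sides. Maximize A = x·y subject to x + 2y = 500. Substituting x = 500 − 2y gives A(y) = (500 − 2y)y = 500y − 2y², so A'(y) = 500 − 4y = 0 at y = 125, and A''(y) = −4 < 0 confirms a maximum. Hence x = 250 ft, y = 125 ft, and the maximum area is 250 × 125 = 31,250 ft².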
Methods of Optimization: Numerical Optimization (Unconstrained)
• A point x* on a function is said to be a critical point if f'(x*) = 0. This is the first-order condition for x* to be a maximum/minimum.
• Second-order condition:
• x* is a maximum of f(x) if f''(x*) < 0;
• x* is a minimum of f(x) if f''(x*) > 0;
• x* can be a maximum, a minimum, or neither if f''(x*) = 0.
Example
• Suppose you have the following function:
f(x) = x³ − 6x² + 9x
• Then the first-order condition to find the critical points is:
f'(x) = 3x² − 12x + 9 = 0
• This implies that the critical points are at x = 1 and x = 3.
[Figure: plot of f(x) = x³ − 6x² + 9x over −0.5 ≤ x ≤ 4, showing the local maximum at x = 1 and the local minimum at x = 3.]
Example
• The next step is to determine whether the critical points are maxima or minima, using the second-order condition:
f''(x) = 6x − 12 = 6(x − 2)
• Testing x = 1: f''(1) = 6(1 − 2) = −6 < 0. Hence at x = 1, we have a maximum.
• Testing x = 3: f''(3) = 6(3 − 2) = 6 > 0. Hence at x = 3, we have a minimum.
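This classification can be cross-checked symbolically; a minimal sketch, assuming SymPy is available:

```python
import sympy as sp

x = sp.symbols('x')
f = x**3 - 6*x**2 + 9*x

# First-order condition: solve f'(x) = 0 for the critical points
critical_points = sp.solve(sp.diff(f, x), x)   # [1, 3]

# Second-order condition: check the sign of f''(x) at each critical point
f2 = sp.diff(f, x, 2)
for c in critical_points:
    curvature = f2.subs(x, c)
    if curvature < 0:
        kind = "local maximum"
    elif curvature > 0:
        kind = "local minimum"
    else:
        kind = "inconclusive (higher-order test needed)"
    print(f"x = {c}: f''(x) = {curvature} -> {kind}")
```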
Local and global minima/maxima
• A local maximum is a point x* such that f(x*) ≥ f(x) for all x in some open interval containing x*, and a local minimum is a point x* such that f(x*) ≤ f(x) for all x in some open interval containing x*.
• A global maximum is a point x* such that f(x*) ≥ f(x) for all x in the domain of f, and a global minimum is a point x* such that f(x*) ≤ f(x) for all x in the domain of f.
• For the previous example, f(x) → ∞ as x → ∞ and f(x) → −∞ as x → −∞, so neither critical point is a global max or min of f(x).
Local and global minima/maxima
• When f''(x) ≥ 0 for all x, i.e., f(x) is a convex function, then a local minimum x* is the global minimum of f(x).
• When f''(x) ≤ 0 for all x, i.e., f(x) is a concave function, then a local maximum x* is the global maximum of f(x).
Conditions for a Minimum or a Maximum Value of a Function of Several Variables
• Correspondingly, for a function f(x) of several independent variables x:
• Calculate ∇f(x) and set it to zero. Solve the resulting system of equations to get a solution vector x*.
• Calculate ∇²f(x) and evaluate it at x*.
• Inspect the Hessian matrix at the point x*:
H(x) = ∇²f(x)
Hessian Matrix of f(x)
If f(x) is a C² function of n variables, then

H(x) = ∇²f(x) = [ ∂²f/∂x1²     ⋯   ∂²f/∂x1∂xn ]
                [     ⋮         ⋱       ⋮      ]
                [ ∂²f/∂xn∂x1   ⋯   ∂²f/∂xn²   ]

Since cross-partials are equal for a C² function, H(x) is a symmetric matrix.
Conditions for a Minimum or a Maximum Value of a Function of Several Variables (cont.)
• Let f(x) be a C² function in Rⁿ. Suppose that x* is a critical point of f(x), i.e., ∇f(x*) = 0.
1. If the Hessian H(x*) is a positive definite matrix, then x* is a local minimum of f(x);
2. If the Hessian H(x*) is a negative definite matrix, then x* is a local maximum of f(x);
3. If the Hessian H(x*) is an indefinite matrix, then x* is neither a local maximum nor a local minimum of f(x).
Example
• Find the local maxima and minima of f(x, y) = x³ − y³ + 9xy.
• First, compute the first-order partial derivatives (i.e., the gradient of f(x, y)) and set them to zero:
∇f(x, y) = ( ∂f/∂x, ∂f/∂y ) = ( 3x² + 9y, −3y² + 9x ) = (0, 0)
• The critical points (x*, y*) are (0, 0) and (3, −3).
Example (Cont.)
• We now compute the Hessian of f(x, y):
∇²f(x, y) = [ ∂²f/∂x²    ∂²f/∂x∂y ]   [ 6x   9  ]
            [ ∂²f/∂y∂x   ∂²f/∂y²  ] = [ 9   −6y ]
• The first-order leading principal minor is 6x and the second-order leading principal minor is −36xy − 81.
• At (0, 0), these two minors are 0 and −81, respectively. Since the second-order leading principal minor is negative, (0, 0) is a saddle point of f(x, y), i.e., neither a max nor a min.
• At (3, −3), these two minors are 18 and 243. So the Hessian is positive definite and (3, −3) is a local min of f(x, y).
• Is (3, −3) a global min? No: f(0, y) = −y³ → −∞ as y → ∞, so f is unbounded below and has no global min.
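A quick numerical cross-check of this example: testing the signs of the Hessian's eigenvalues is an equivalent way to check definiteness. A minimal NumPy sketch:

```python
import numpy as np

def hessian(x, y):
    # Hessian of f(x, y) = x^3 - y^3 + 9xy
    return np.array([[6*x, 9.0],
                     [9.0, -6*y]])

for (x, y) in [(0.0, 0.0), (3.0, -3.0)]:
    eig = np.linalg.eigvalsh(hessian(x, y))   # eigenvalues of a symmetric matrix
    if np.all(eig > 0):
        verdict = "positive definite -> local min"
    elif np.all(eig < 0):
        verdict = "negative definite -> local max"
    else:
        verdict = "indefinite -> saddle point"
    print(f"({x}, {y}): eigenvalues {eig}, {verdict}")
```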
Global Maxima and Minima of a Function of Several Variables
• Let f(x) be a C² function in Rⁿ. Then:
• When f(x) is a concave function, i.e., ∇²f(x) is negative semidefinite for all x, and ∇f(x*) = 0, then x* is a global max of f(x);
• When f(x) is a convex function, i.e., ∇²f(x) is positive semidefinite for all x, and ∇f(x*) = 0, then x* is a global min of f(x).
Example (Discriminating Monopolist)
• A monopolist producing a single output has two types of customers. If it produces q1 units for type 1, then these customers are willing to pay a price of 50 − 5q1 per unit. If it produces q2 units for type 2, then these customers are willing to pay a price of 100 − 10q2 per unit.
• The monopolist’s cost of manufacturing q units of output is 90 + 20q.
• In order to maximize profits, how much should the monopolist produce for each market?
• Profit is:
f(q1, q2) = q1(50 − 5q1) + q2(100 − 10q2) − (90 + 20(q1 + q2))
• The critical points satisfy
∂f/∂q1 = 50 − 10q1 − 20 = 0 ⟹ q1 = 3,
∂f/∂q2 = 100 − 20q2 − 20 = 0 ⟹ q2 = 4.
• The second-order partials are
∂²f/∂q1² = −10,  ∂²f/∂q2² = −20,  ∂²f/∂q1∂q2 = ∂²f/∂q2∂q1 = 0.
• ∇²f is negative definite, so (3, 4) is the profit-maximizing supply plan.
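As a sanity check, the same plan can be recovered numerically; a sketch using scipy.optimize.minimize on the negated profit (the starting point is an arbitrary assumption):

```python
from scipy.optimize import minimize

def neg_profit(q):
    q1, q2 = q
    revenue = q1 * (50 - 5*q1) + q2 * (100 - 10*q2)
    cost = 90 + 20 * (q1 + q2)
    return -(revenue - cost)       # negate so that minimizing maximizes profit

res = minimize(neg_profit, x0=[1.0, 1.0])
print(res.x)      # approximately [3, 4]
print(-res.fun)   # maximum profit, 345 - 230 = 115
```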
Bisection Optimization
• Definition: if a function f(x) is continuous on the interval [a, b] and f(a) and f(b) have opposite signs, then there exists at least one root of f(x) within [a, b].
• The function f(x) is usually non-linear and crosses the x-axis somewhere inside the interval.
• Given that the initial interval [a, b] meets the above conditions, we can proceed with the bisection method and narrow in on the root.
Working of Bisection Algorithm
• Suppose an interval [a, b] contains at least one root, i.e., f(a) and f(b) have opposite signs. Using the bisection method, we determine the root as follows:
• Bisect the initial interval and set the new value to x0, i.e., x0 = (a + b)/2. Note: x0 is the midpoint of the interval [a, b].
• Using x0, we consider three cases to determine whether x0 is the root or, if not, which new interval contains the root:
1. f(x0) = 0: x0 is the root.
2. f(a) and f(x0) have opposite signs: the root lies in [a, x0].
3. f(x0) and f(b) have opposite signs: the root lies in [x0, b].
• If case one occurs, we terminate the bisection process since we have found the root.
• If either case (2) or (3) occurs, the process is repeated on the new interval until the root is obtained to the desired tolerance.
Example
1. Find a root of the equation f(x) = x³ − x − 1 using the bisection method. Since f(1) = −1 < 0 and f(2) = 5 > 0, the interval [1, 2] brackets a root.
Number of iterations: to shrink a bracketing interval [a, b] down to width ε, the bisection method needs about log₂((b − a)/ε) iterations, since each step halves the interval.
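A minimal Python sketch of the bisection method applied to this equation on the bracketing interval [1, 2] (the tolerance and iteration cap are illustrative assumptions):

```python
def bisection(f, a, b, tol=1e-6, max_iter=100):
    """Find a root of f in [a, b], assuming f(a) and f(b) have opposite signs."""
    if f(a) * f(b) >= 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(max_iter):
        x0 = (a + b) / 2             # midpoint of the current interval
        if f(x0) == 0 or (b - a) / 2 < tol:
            return x0
        if f(a) * f(x0) < 0:         # root lies in [a, x0]
            b = x0
        else:                        # root lies in [x0, b]
            a = x0
    return (a + b) / 2

root = bisection(lambda x: x**3 - x - 1, 1, 2)
print(root)   # approximately 1.3247
```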
False Position Method
• The false position (regula falsi) method also brackets a root between a and b, where f(a) and f(b) have opposite signs, but instead of the midpoint it uses the x-intercept of the chord joining (a, f(a)) and (b, f(b)):
x0 = (a·f(b) − b·f(a)) / (f(b) − f(a))
• The interval is then updated exactly as in bisection, keeping the half in which the sign change occurs.
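A minimal sketch of the false position iteration on the same example equation (tolerances are illustrative assumptions):

```python
def false_position(f, a, b, tol=1e-6, max_iter=100):
    """Regula falsi: like bisection, but use the chord's x-intercept."""
    if f(a) * f(b) >= 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    x0 = a
    for _ in range(max_iter):
        # x-intercept of the line through (a, f(a)) and (b, f(b))
        x0 = (a * f(b) - b * f(a)) / (f(b) - f(a))
        if abs(f(x0)) < tol:
            return x0
        if f(a) * f(x0) < 0:         # root lies in [a, x0]
            b = x0
        else:                        # root lies in [x0, b]
            a = x0
    return x0

print(false_position(lambda x: x**3 - x - 1, 1, 2))   # ~1.3247
```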
Newton’s Method
• Consider a function f of a single real variable x.
• Assume that at any measurement point x_k, we can calculate f(x_k), f'(x_k), and f''(x_k).
• Idea: fit a quadratic function q through x_k that matches its first and second derivatives with those of the function f.
Newton’s Method
• Then we minimize the quadratic model
q(x) = f(x_k) + f'(x_k)(x − x_k) + ½ f''(x_k)(x − x_k)²
• The first-order condition for the minimizer of q is
q'(x) = f'(x_k) + f''(x_k)(x − x_k) = 0
• Setting x = x_{k+1}, we obtain the iteration
x_{k+1} = x_k − f'(x_k) / f''(x_k)
• These iterations are carried out until |f'(x_k)| falls below a chosen tolerance ε.
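A minimal sketch of this iteration in Python, reusing the example function f(x) = x³ − 6x² + 9x from earlier (the starting point and tolerance are illustrative assumptions):

```python
def newton_minimize(df, d2f, x, tol=1e-8, max_iter=50):
    """Newton's method for 1-D optimization: x_{k+1} = x_k - f'(x_k)/f''(x_k)."""
    for _ in range(max_iter):
        if abs(df(x)) < tol:          # stop once the derivative is (nearly) zero
            break
        x = x - df(x) / d2f(x)        # step to the minimizer of the quadratic model
    return x

# Example: f(x) = x^3 - 6x^2 + 9x, which has its local minimum at x = 3
df  = lambda x: 3*x**2 - 12*x + 9     # f'(x)
d2f = lambda x: 6*x - 12              # f''(x)
print(newton_minimize(df, d2f, x=4.0))   # converges to 3.0
```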


Steepest Descent Method
• In the gradient descent step, we want to descend from a mountain, say Mount Everest. We can’t see the path very well because it’s very foggy, so our best bet is to look around us and take the one step that helps us descend the most. Repeating this step many times is highly likely to take us to the bottom of the mountain.
Steepest Descent Method
• Update rule: from the current point x_k, take a step in the direction of the negative gradient (the direction of steepest descent):
x_{k+1} = x_k − α ∇f(x_k), where α > 0 is the step size.
• Convergence criteria: stop when ‖∇f(x_k)‖ < ε, or when |f(x_{k+1}) − f(x_k)| or ‖x_{k+1} − x_k‖ falls below a chosen tolerance.
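A minimal sketch of steepest descent with a fixed step size (the test function, step size, and tolerances are illustrative assumptions; practical implementations often choose the step size by a line search):

```python
import numpy as np

def steepest_descent(grad, x0, alpha=0.1, tol=1e-6, max_iter=10000):
    """Gradient descent: repeatedly step opposite the gradient."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:   # convergence criterion: tiny gradient
            break
        x = x - alpha * g             # x_{k+1} = x_k - alpha * grad f(x_k)
    return x

# Example: minimize f(x, y) = (x - 1)^2 + 2*(y + 2)^2, whose minimum is at (1, -2)
grad = lambda v: np.array([2*(v[0] - 1), 4*(v[1] + 2)])
print(steepest_descent(grad, [0.0, 0.0]))   # approximately [1, -2]
```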
Penalty Function Method
• Motivation: replace the constrained problem by an unconstrained one whose objective is
• the original objective of the constrained optimization problem, plus
• one additional term for each constraint, which is positive when the current point x violates that constraint and zero otherwise.
• Most approaches define a sequence of such penalty functions, in which the penalty terms for the constraint violations are multiplied by a positive coefficient μ. By making this coefficient larger, we penalize constraint violations more severely, thereby forcing the minimizer of the penalty function closer to the feasible region of the constrained problem.
• The simplest penalty function of this type is the quadratic penalty function, in which the penalty terms are the squares of the constraint violations:
Q(x; μ) = f(x) + (μ/2) Σi hi(x)² + (μ/2) Σj [max(0, gj(x))]²
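A minimal sketch of the quadratic penalty method on an illustrative equality-constrained problem, using scipy.optimize.minimize as the inner unconstrained solver (the example problem and the schedule of μ values are assumptions for illustration):

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative constrained problem: minimize f(x, y) = x^2 + y^2
# subject to the equality constraint h(x, y) = x + y - 1 = 0.
f = lambda v: v[0]**2 + v[1]**2
h = lambda v: v[0] + v[1] - 1

def quadratic_penalty(v, mu):
    # Original objective plus mu/2 times the squared constraint violation
    return f(v) + 0.5 * mu * h(v)**2

x = np.array([0.0, 0.0])
for mu in [1, 10, 100, 1000]:    # increase the penalty coefficient each round
    x = minimize(quadratic_penalty, x, args=(mu,)).x
print(x)   # approaches the constrained minimizer (0.5, 0.5)
```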
