02 - Reduced O. T. Introduction
Introduction to Optimization
Nagaraj Shanbhog, M.Tech. Ph.D.
Mechanical Engineering Department
Faculty of Technology and Engineering
The M. S. University of Baroda
2023
Optimization - Introduction
Optimization is the act of obtaining the best result under given
circumstances.
Optimization can be defined as the process of finding the conditions
that give the maximum or minimum of a function.
The optimum seeking methods are also known as “mathematical
programming” techniques and are generally studied as a part of
operations research.
A typical single-variable optimization (mathematical programming) problem:
Minimize f(x)
subject to h(x) = c
           g(x) ≥ b
where
f(x): Objective function
x: Decision/design variable
h(x) = c: Equality constraint
g(x) ≥ b: Inequality constraint
x*: Value of the decision variable at the optimal solution; f(x*): Optimal objective value
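As an illustration of this general form, here is a minimal sketch (not from the slides) using SciPy's minimize; the particular objective f, constraint functions h and g, and the constants c and b are illustrative choices.

```python
# Minimal sketch: solving a small instance of  min f(x)  s.t.  h(x) = c,  g(x) >= b.
# The specific f, h, g, c, b below are illustrative, not the slides' example.
from scipy.optimize import minimize

c, b = 1.0, 2.0

def f(x):            # objective function
    return (x[0] - 3) ** 2 + (x[1] + 1) ** 2

def h(x):            # equality constraint h(x) = c
    return x[0] + x[1]

def g(x):            # inequality constraint g(x) >= b
    return x[0] - x[1]

constraints = [
    {"type": "eq",   "fun": lambda x: h(x) - c},   # h(x) - c = 0
    {"type": "ineq", "fun": lambda x: g(x) - b},   # g(x) - b >= 0
]

res = minimize(f, x0=[0.0, 0.0], constraints=constraints)
print(res.x, res.fun)   # x*: optimal design variables, f(x*): optimal objective value
```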
Optimization - Introduction
Types of optimization problems - Classification can be based on different criteria
(1) Based on the presence of constraints
    Unconstrained optimization problems
    Constrained optimization problems
(2) Based on the number of variables
    Single variable optimization problems (one dimensional)
    Multi variable optimization problems (multi dimensional)
(3) Based on the number of objectives
    Single objective optimization problems
    Multi objective optimization problems
(4) Based on the nature of variables (certainty)
    Deterministic optimization problems
    Stochastic optimization problems
(5) Based on the permissible values of the variables
    Continuous-variable (real-valued) programming problems
    Integer programming problems
    Mixed-integer programming problems
(6) Based on the nature of the objective function and constraints (nature of the equations)
    Linear programming problems
    Non-linear programming problems
    Linearly constrained non-linear programming problems
    Quadratic programming problems
    Geometric programming problems
Optimization - Introduction
Integer Programming Problem
• If some or all of the design variables x1,x2,..,xn of an optimization problem are
restricted to take on only integer (or discrete) values, the problem is called an
integer programming problem.
• If all the design variables are permitted to take any real value, the optimization
problem is called a real-valued programming problem.
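As a minimal illustration of the difference (not from the slides), the sketch below takes a small LP whose real-valued optimum is x = (3, 1.5) and finds the integer optimum by brute-force enumeration; the problem data are illustrative.

```python
# Illustrative problem: Maximize 5*x1 + 4*x2
#   s.t. 6*x1 + 4*x2 <= 24,  x1 + 2*x2 <= 6,  x1, x2 >= 0
# Real-valued optimum (from the graphical/simplex solution): x = (3, 1.5), z = 21.
from itertools import product

def feasible(x1, x2):
    return 6 * x1 + 4 * x2 <= 24 and x1 + 2 * x2 <= 6 and x1 >= 0 and x2 >= 0

# Integer optimum by brute-force enumeration of the small integer grid:
best = max(((x1, x2) for x1, x2 in product(range(5), range(4)) if feasible(x1, x2)),
           key=lambda p: 5 * p[0] + 4 * p[1])
print(best, 5 * best[0] + 4 * best[1])   # (4, 0) with z = 20, not a rounding of (3, 1.5)
```

Note that the best integer solution is not simply the rounded real-valued solution, which is why integer restrictions make the problem genuinely harder.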
Stochastic Programming Problem
• A stochastic programming problem is an optimization problem in which some or
all of the parameters (design variables and/or preassigned parameters) are
probabilistic (nondeterministic or stochastic).
• In other words, stochastic programming deals with the solution of optimization
problems in which some of the variables are described by probability distributions.
Optimization - Introduction
What is a Function?
Unimodal:
f(x) is unimodal on the interval if and only if it is monotonic on either side of the single optimal point x* in the interval.
[Figure: an unimodal function]
Multimodal:
A multimodal function has more than one local optimum in the interval.
[Figure: a multimodal function]
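A minimal sketch (illustrative functions, not from the slides): local searches started from different points all reach the same minimum of a unimodal function, but can settle in different local minima of a multimodal one.

```python
# Compare a unimodal and a multimodal function under local search from several starts.
import numpy as np
from scipy.optimize import minimize

unimodal   = lambda x: (x - 2.0) ** 2                 # single optimum at x = 2
multimodal = lambda x: np.sin(3 * x) + 0.1 * x ** 2   # several local minima

for x0 in (-3.0, 0.0, 3.0):
    res_u = minimize(lambda v: unimodal(v[0]),   x0=[x0])
    res_m = minimize(lambda v: multimodal(v[0]), x0=[x0])
    print(f"start {x0:+.1f}:  unimodal -> x = {res_u.x[0]:.3f},"
          f"  multimodal -> x = {res_m.x[0]:.3f}")
```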
Optimization - Introduction
Single variable optimization
• Useful in finding the optimum solutions of continuous and differentiable
functions
• These methods are analytical and make use of the techniques of
differential calculus in locating the optimum points.
• Since some practical problems involve objective functions that are not
continuous and/or differentiable, the classical optimization techniques have
limited scope in practical applications.
Optimization - Introduction Optimization terminology
[Figure: Representation of optimum points. (a) The unbounded domain and function (no global optimum). (b) The bounded domain and function (global minimum and maximum exist).]
Optimization - Introduction Optimization terminology
Types of minima/maxima (optima)
Inflection points
• A point x* is said to be an inflection (saddle) point if f'(x*) = 0 but x* is neither a local minimum nor a local maximum: the function value increases locally as x increases past x* and decreases locally as x decreases below x* (or vice versa). For example, f(x) = x^3 has an inflection point at x* = 0.
Optimization - Introduction
Example 1: Solve the following: Minimize f(x) = x^2.
Solution: f'(x) = 2x = 0, so x* = 0.
This solution is a local optimum; it is also the global optimum.

Example 2 (global vs. local optima): Minimize f(x) = x^3 - 17x^2 + 80x - 100.
Solution: f'(x) = 3x^2 - 34x + 80 = 0.
Solving the above gives x = 3.33 and x = 8.
Issue #1: Which is the best solution? x = 8 (f''(8) = 14 > 0, so x = 8 is the local minimum; x = 3.33 is a local maximum).
Issue #2: Is the best solution the global solution? No! The problem is unbounded: f(x) → -∞ as x → -∞, so no global minimum exists.
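The worked example can be checked with a short SymPy sketch (an added illustration, not part of the slides) that finds the stationary points and classifies them with the second derivative:

```python
# Find and classify the stationary points of f(x) = x^3 - 17x^2 + 80x - 100.
import sympy as sp

x = sp.symbols('x')
f = x**3 - 17*x**2 + 80*x - 100
df, d2f = sp.diff(f, x), sp.diff(f, x, 2)

for xs in sp.solve(sp.Eq(df, 0), x):          # roots of 3x^2 - 34x + 80 = 0
    kind = "local min" if d2f.subs(x, xs) > 0 else "local max"
    print(xs, kind, f.subs(x, xs))            # 10/3 -> local max, 8 -> local min
```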
Optimization - Introduction
Convex functions
Definition #1: A function f(x) is convex in an interval if its second derivative is non-negative on that interval.
Definition #2: A function f(x) is convex if a line drawn between any two points on the function remains on or above the function in the interval between the two points.
Example: f(x) = x^2 is convex, since f'(x) = 2x and f''(x) = 2 > 0.
Is a linear function convex? Yes: the chord between any two points coincides with the function itself, so a linear function is convex (and also concave), though not strictly convex.
[Figures: plots illustrating the convexity definitions over the interval -2 ≤ x ≤ 2.]
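A minimal numeric sketch of Definition #2 (an added illustration, not from the slides): for f(x) = x^2, the chord value t·f(a) + (1-t)·f(b) never falls below f(t·a + (1-t)·b).

```python
# Randomized chord test of convexity for f(x) = x**2 on [-2, 2].
import numpy as np

f = lambda x: x ** 2
rng = np.random.default_rng(0)

ok = True
for _ in range(1000):
    a, b = rng.uniform(-2, 2, size=2)       # two points on the function
    t = rng.uniform(0, 1)                   # interpolation weight
    ok &= f(t * a + (1 - t) * b) <= t * f(a) + (1 - t) * f(b) + 1e-12
print("chord test passed:", bool(ok))
```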
Optimization - Introduction Contour plot
Global optimum vs. local optimum (in two dimensions)
Multi-objective Optimization
• Optimization of multiple (more than one) objective functions at the same time.
• The optimal solutions corresponding to each objective differ, because the objective functions are often conflicting (competing) with each other.
• No one solution can be considered better than any other with respect to all objective functions: the non-dominated solution concept.
• A set of trade-off optimal solutions is obtained instead of one optimal solution, generally known as "Pareto-optimal" solutions (named after the Italian economist Vilfredo Pareto, 1848-1923).
Pareto noticed that many economic solutions helped some people while hurting others. He was interested in finding solutions that helped some people without hurting anyone else. Solutions like this are now called "Pareto improvements."
The evaluation of solutions in MOO is different: in seeking an optimal or best overall solution, the goal is to quantify the degree of conflict (trade-off).
Multi-objective Optimization MOO problem solution Approaches
• Traditional approaches (non-Pareto techniques) - a priori:
  The preferences (relative importance of the objectives) are known before the solution.
• Aggregating approaches:
  Aggregate the objectives into a single, parameterized objective function and perform several runs with different parameter settings to obtain a set of solutions.
  – Weighted sum method (a minimal code sketch follows this list):
    • Convert multiple objectives into one single objective using weights and summation.
    • Determine the importance of each objective function by assigning it a weight (w).
    • Add up all the functions:
      min Z = w1 z1 + w2 z2 + … + wn zn,   where w1 + w2 + … + wn = 1
  – Goal programming approach:
    A target level is set for each objective rather than a maximized or minimized function. Goals are soft constraints. The objective is to minimize the deviation from the goals.
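A minimal sketch of the weighted sum method (the two objectives z1 and z2 are illustrative, not the slides' example): each choice of weights gives one solution, and varying the weights yields a set of trade-off solutions.

```python
# Scalarize two conflicting objectives with weights w1 + w2 = 1 and solve repeatedly.
import numpy as np
from scipy.optimize import minimize

z1 = lambda x: (x[0] - 1) ** 2 + x[1] ** 2        # first objective
z2 = lambda x: x[0] ** 2 + (x[1] - 1) ** 2        # second (conflicting) objective

for w1 in np.linspace(0, 1, 5):
    w2 = 1 - w1
    res = minimize(lambda x: w1 * z1(x) + w2 * z2(x), x0=[0.0, 0.0])
    print(f"w1={w1:.2f}  x*={res.x.round(3)}  "
          f"(z1, z2)=({z1(res.x):.3f}, {z2(res.x):.3f})")
```

For convex problems such as this one, sweeping the weights traces out the trade-off (Pareto-optimal) set.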
Multi-objective Optimization MOO problem solution Approaches
Posteriori approach solutions
For conflicting objectives, the usual single-objective optimality concept may be inappropriate. So, what does "optimal" mean in this case?
A concept that can measure solutions against multiple, conflicting objectives is the non-inferiority concept.
Terms used: dominance (mathematical programmers), efficiency (statisticians and economists), Pareto optimality (welfare economists).
A solution is non-inferior (non-dominated) if there exists no other feasible solution with better performance with respect to any one objective without having worse performance in at least one other objective.
Multi-objective Optimization - Posteriori approach solutions
Pareto-optimal solutions
• Multi-objective optimization leads to a set of optimal solutions (known as Pareto-optimal solutions), since no solution can be considered better than any other with regard to all the objectives.
• Pareto front: the curve formed by joining all Pareto-optimal solutions in function space.
• Find all the Pareto-optimal solutions (a set of non-dominated solutions).
• The decision maker then has the option to explore this set and select a suitable solution.
[Figure: Decision space vs. objective space - the variable space (decision/design space) maps to the function space (objective/criterion space).]
Multi-objective Optimization Which Solutions are Optimal?
Posteriori approach Solutions
Concept of Domination
Dominated and Non-dominated solutions
• Non-dominated: given two objectives, a solution is non-dominated when no other solution is better with respect to both objectives.
• Dominated: a solution is dominated when it can be further improved in at least one objective without compromising the other objective.
A solution x(1) is said to dominate another solution x(2) if both conditions (a) and (b) are true:
(a) The solution x(1) is no worse than x(2) in all objectives.
(b) The solution x(1) is strictly better than x(2) in at least one objective.
Examples (from the figure): 3 dominates 1 and 2; 5 dominates 2 and 4; 3 does not dominate 5. Seek the non-dominated solutions.
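Conditions (a) and (b) translate directly into a small dominance check. The sketch below (an added illustration) applies it to the (cost, time) pairs that appear in the LP example on a later slide, plus one extra dominated point for contrast.

```python
# Dominance check for minimization of all objectives.
def dominates(p, q):
    """True if solution p dominates solution q (all objectives to be minimized)."""
    no_worse = all(pi <= qi for pi, qi in zip(p, q))   # condition (a)
    strictly = any(pi <  qi for pi, qi in zip(p, q))   # condition (b)
    return no_worse and strictly

points = [(4000, 16), (5500, 15), (7000, 14), (8500, 13), (9000, 18)]
non_dominated = [p for p in points
                 if not any(dominates(q, p) for q in points if q != p)]
print(non_dominated)   # every point except (9000, 18), which (8500, 13) dominates
```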
Multi-objective Optimization
The Pareto-optimal set consists of the solutions that cannot be improved with respect to all the objective functions at the same time.
• The set depends on the type of objectives (minimization/maximization); the definition of domination takes care of all the possibilities.
• It always lies on the boundary of the feasible region.
• Evolutionary Multi-objective Optimization (EMO) algorithms are commonly used to find it.
[Figure: Pareto-optimal fronts.]
Multi-objective linear programming problem - Pareto-optimal solutions
Total cost: Minimize Z1 = f1(x) = 2000 x1 + 500 x2
Total time: Minimize Z2 = f2(x) = x1 + 2 x2
Subject to: 2 x1 + 2 x2 ≥ 16   (1)
            x1 + 5 x2 ≥ 28     (2)
If there were only one point that minimized both cost and time, obviously that would be the solution; but often there are different optimal points for each objective.
Point A = (0, 8): Z1 = Rs. 4000 (optimal cost), Z2 = 16 h
Point B = (3, 5): Z2 = 13 h (optimal time), Z1 = Rs. 8500

Point | x1 | x2 | f1 (cost) | f2 (time)
A     | 0  | 8  | 4000      | 16
C     | ?  | ?  | 5500      | 15
D     | ?  | ?  | 7000      | 14
B     | 3  | 5  | 8500      | 13

The line segment AB between the two single-objective optima represents the non-dominated solutions; in function space (objective/criterion space) this curve is the Pareto front. Any point on this front is considered "Pareto optimal": by moving along the curve, you can reduce cost at the expense of time, or reduce time at the expense of cost, but you cannot improve both at once.
[Figures: the feasible region in variable (decision/design) space with points A, C, D, B, and the corresponding curve A-C-D-B in function space.]
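The two single-objective optima A and B can be reproduced with scipy.optimize.linprog (an added sketch; the ≥ constraints are negated to fit linprog's ≤ form).

```python
# Solve each objective separately over the same feasible region.
from scipy.optimize import linprog

A_ub = [[-2, -2],    # 2*x1 + 2*x2 >= 16  ->  -2*x1 - 2*x2 <= -16
        [-1, -5]]    #   x1 + 5*x2 >= 28  ->   -x1 - 5*x2 <= -28
b_ub = [-16, -28]
bounds = [(0, None), (0, None)]

cost = linprog(c=[2000, 500], A_ub=A_ub, b_ub=b_ub, bounds=bounds)  # min Z1
time = linprog(c=[1, 2],      A_ub=A_ub, b_ub=b_ub, bounds=bounds)  # min Z2

print("min cost :", cost.x, cost.fun)   # point A = (0, 8), Z1 = 4000
print("min time :", time.x, time.fun)   # point B = (3, 5), Z2 = 13
```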
Example: Single-objective non-linear constrained optimization - graphical solution
Subject to: -x1 - x2 + 10 ≤ 0     (1)
            -2x1 + 3x2 - 10 ≤ 0   (2)
Solution: The figure is a graphical representation of the problem (the objective function appears through its contours). The feasible set S is convex, as shown in the figure; the optimum value indicated in the figure is f1 = 16.25.
[Figure: contours of f1 over the feasible region S.]
OPTIMIZATION PROBLEMS - CLASSIFICATION
Example:
The Local Bag Manufacturing Company manufactures standard school and office bags. The products require (1) cutting of material (outer and inner) and (2) stitching and finishing. The following is the formulated product-mix LP model:
Maximize profit z = 50 X1 + 120 X2 (the profit line shown in the figure)
subject to the cutting constraint 2X1 + 4X2 ≤ 80 hours and a stitching-and-finishing constraint of 60 hours (coefficients shown in the figure).
GRAPHICAL LP SOLUTION
Optimal solution at point A:
X1 = 0 school bags, X2 = 20 office bags, Profit = Rs. 2,400
[Figure: feasible region with corner points A(0, 20), B and C; the profit line for 50X1 + 120X2 passes through point A.]
Sensitivity Analysis
Optimal solutions to LP problems have been found under what are called deterministic assumptions
This means that we assume complete certainty in the data and relationships of a problem
But in the real world, conditions are dynamic and changing
We can analyze how sensitive a deterministic solution is to changes in the assumptions of the model
This is called sensitivity analysis, post-optimality analysis, or parametric programming
Software used for sensitivity analysis also provides ranges (windows) for changes in the objective function coefficients.
Sensitivity Analysis
Changes in the Constraint Coefficients
If the amount of resources needed to produce a product changes, coefficients
in the constraint equations change
This does not change the objective function, but it can produce a significant
change in the shape of the feasible region
This may cause a change in the optimal solution
[Figure: three panels showing the feasible region and optimal solution for the original cutting constraint 2X1 + 4X2 ≤ 80 and for a changed coefficient 2X1 + (16/5)X2 ≤ 80; in one case the optimal solution is unchanged, in the other it moves.]
Sensitivity Analysis
Changes in Resources or Right-Hand-Side Values of Constraints:
The right-hand-side values of the constraints often represent the resources available.
If additional resources were available, a higher total profit could be realized.
Sensitivity analysis about resources helps answer questions such as:
How much should the company be willing to pay for additional hours?
Is it profitable to have some cutting work done overtime?
Should we be willing to pay for more stitching and finishing time?
If the right-hand side of a constraint is changed, the feasible region will change
(unless the constraint is redundant) and often the optimal solution will change
The amount of change (increase or decrease) in the objective function value that
results from a unit change in one of the resources available is called the dual price
or dual value
Sensitivity Analysis
Changes in Resources or Right-Hand-Side Values of Constraints:
However, the amount of possible increase in the right-hand side of a resource is
limited
If the number of hours increases beyond the upper bound (or decreases below
the lower bound), then the objective function would no longer increase
(decrease) by the dual price.
There may be excess (slack) hours of a resource or the objective function
may change by an amount different from the dual price.
Thus, the dual price is relevant only within limits.
If the dual value of a constraint is zero
The slack is positive, indicating unused resource
Additional amount of resource will simply increase the amount of slack.
The upper limit of infinity indicates that adding more hours would simply
increase the amount of slack.
Sensitivity Analysis
Changes in Resources or Right-Hand-Side Values of Constraints:
• Cutting is a binding constraint; increasing this resource enlarges the solution space and moves the optimal point.
• Stitching is a nonbinding constraint; increasing this resource enlarges the solution space but does not move the optimal point.
• The shadow price represents the change in the objective function value per one-unit increase in the right-hand side (RHS) of a constraint. In a business application, the shadow price is the maximum price we should pay for an extra unit of a given limited resource.
• If the cutting hours are changed from 80 to 100, the new optimal solution is (0, 25) with a profit of 3,000. The extra 20 hours result in an increase in profit of 600, i.e. 30 per hour.
[Figures: graphical solutions for different RHS values, with the optimal point A moving along the X2 axis (A(0, 20), A(0, 25), A(0, 15)), the profit line for 50X1 + 120X2 passing through A, and the constraint representing 60 hours unchanged.]
Sensitivity Analysis
Changes in Resources or Right-Hand-Side Values of Constraints:
• If the total cutting time were increased to 240 hours, the optimal solution would be (0, 60) with a profit of 7,200. This equals 2,400 (the original profit) + 30 (dual price) × 160 hours (240 - 80).
• If the cutting hours increase beyond 240, the optimal solution would still be (0, 60) and the profit would not increase; the extra time is slack during which the cutting resources are not working.
[Figure: feasible region with the changed cutting constraint representing 240 hours, the constraint representing 60 hours, and the profit line for 50X1 + 120X2 passing through A(0, 60).]
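A sketch of this dual-price behaviour with scipy.optimize.linprog: the profit function and the cutting constraint follow the example above, while the stitching constraint is taken here as X1 + X2 ≤ 60 (the slides only show it as a 60-hour constraint, so the X1 coefficient is an assumption).

```python
# Re-solve the bag product-mix LP for different cutting-hour RHS values.
from scipy.optimize import linprog

def max_profit(cutting_hours):
    # linprog minimizes, so negate the profit coefficients 50 and 120.
    res = linprog(c=[-50, -120],
                  A_ub=[[2, 4],    # cutting: 2*X1 + 4*X2 <= cutting_hours
                        [1, 1]],   # stitching (assumed coefficients): X1 + X2 <= 60
                  b_ub=[cutting_hours, 60],
                  bounds=[(0, None), (0, None)])
    return res.x, -res.fun

for hours in (80, 100, 240, 260):
    x, profit = max_profit(hours)
    print(f"cutting hours = {hours:3d}  ->  x = {x.round(1)}, profit = {profit:.0f}")
# 80 -> 2400, 100 -> 3000 (dual price 600/20 = 30 per hour),
# 240 -> 7200, 260 -> still 7200 (extra cutting hours become slack)
```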