OT CLG Notes

Optimization involves finding the conditions that give the maximum or minimum value of a function. It can be defined as obtaining the best result under given circumstances. There are various types of optimization problems classified based on the number of objective functions, existence of constraints, nature of expressions, and type of variables. Optimization has many engineering applications like design of structures for minimum weight/cost and controlling production lines to reduce costs.

Optimization Techniques

Optimization Techniques (OT)


MEDLO5011
End Semester Exam – 80M
Internal Assessment – 20M

1
Optimization Techniques
Reference Books
• "Engineering Optimization - Theory and Practice"
S.S. Rao - John Wiley and Sons Inc.

• “Decision Making in the Manufacturing Environment Using Graph


Theory and Fuzzy Multiple Attribute Decision Making”
R V Rao - Springer Publication

• “Design and analysis of experiments”


Douglas C. Montgomery - John Wiley & Sons Inc.
2
Optimization Techniques
• Module 1 Basic Concepts – Definitions, Classification, Applications.

• CO1 - Identify the types of optimization problems and apply the calculus
method to single-variable problems.
• Module 2 Linear Programming Problem (LPP) –

To find the optimal value of a linear function of several variables.

Applications – Transportation & Assignment Models.


• CO2 - Formulate the problem as Linear Programming problem and
analyse the sensitivity of a decision variable. 5
Optimization Techniques

• Module 3 Integer Programming Model –

A mathematical optimization or feasibility program in which some or all of


the variables are restricted to be integers.

• CO3 - Apply various linear and non-linear techniques for problem
solving in various domains.

6
Optimization Techniques

• Module 4 Multi Objective Decision Making (MODM) –

It is a procedure aimed at supporting decision makers faced with
conflicting evaluations. MODM is associated with problems in which the
alternatives are not predetermined.

• CO4 - Apply multi-objective decision making methods for problems
in the manufacturing environment and other domains.
Optimization Techniques

• Module 5 Multi Criterion Decision Making (MCDM) –

MCDM is associated with problems where the number of alternatives is
predetermined. The decision maker (DM) is to select/prioritize/rank a finite
number of courses of action.

• CO5 - Apply multi criterion decision making methods for problems in
the manufacturing environment and other domains.
Optimization Techniques

• Module 6 Design of Experiments (DOE) –


DOE deals with planning, conducting, analysing and interpreting controlled
tests to evaluate the factors that control the value of a parameter or group of
parameters.

• CO5 - Apply Design of Experiments method for Optimization


Module 1

Basic Concepts
Basic Concepts
Introduction

• Optimization is the act of obtaining the best result under given circumstances.

• Engineers have to take many technological and managerial decisions at several


stages. The ultimate goal of all such decisions is either to minimize the effort
required or to maximize the desired benefit.

• Optimization can be defined as the process of finding the conditions that give
the maximum or minimum value of a function f(x).
Basic Concepts

• The Optimization Techniques also known as Mathematical Programming


Techniques are a part of Operations Research.

• Operations Research is a branch of mathematics concerned with the


application of scientific methods and techniques to decision making problems
and with establishing the best or optimal solution.
Basic Concepts
The table shows various mathematical programming techniques together with
other well-defined areas of operations research.
Basic Concepts
Statement of an Optimization Problem
An optimization or a mathematical programming problem can be stated as

Find X = (x1, x2, …, xn)ᵀ which minimizes f(X)

Subject to the constraints

gi(X) ≤ 0, i = 1, 2, …, m
lj(X) = 0, j = 1, 2, …, p
Basic Concepts
Where,

X is an n-dimensional vector called the Design Vector


f(X) is called the Objective Function
gi(X) and lj(X) are known as inequality and equality Constraints.

This type of problem is called a constrained optimization problem.

If the problem does not contain any constraints, it is called an unconstrained
optimization problem.
Basic Concepts
Design Vector
An engineering system is defined by a set of quantities.
Some quantities are fixed and are called preassigned parameters.
All the other quantities are treated as variables and are called design or decision
variables.
xi , i = 1, 2, . . . , n.
The design variables are collectively represented as a design vector.

X = (x1, x2, . . . , xn)ᵀ
Basic Concepts
Design Constraints
• The restrictions that must be satisfied to produce an acceptable design are
collectively called design constraints.

• Constraints that represent limitations on the behaviour or performance of the


system are termed behaviour or functional constraints.

• Constraints that represent physical limitations on design variables, such as


availability and transportability, are known as geometric or side constraints.
Basic Concepts
Constraint Surface
Consider an optimization problem with only inequality constraints gj (X) ≤ 0.
The set of values of X that satisfy the equation gj (X) = 0 forms a hypersurface
in the design space and is called a constraint surface.
Basic Concepts
• The constraint surface divides the design space into two regions: one in which
gj(X) < 0 and the other in which gj(X)>0. Thus the points lying on the
hypersurface will satisfy the constraint gj(X) critically, whereas the points
lying in the region where gj(X)>0 are infeasible or unacceptable, and the points
lying in the region where gj(X) < 0 are feasible or acceptable.

• A design point that lies on one or more than one


constraint surface is called a bound point.

• Design points that do not lie on any constraint


surface are known as free points.
Basic Concepts
Objective Function
• The criterion with respect to which the design is optimized, when expressed as
a function of the design variables, is known as objective function.

• An optimization problem involving multiple objective functions is known as a


multi-objective programming problem.

• With multiple objectives there arises a possibility of conflict, and one simple
way to handle the problem is to construct an overall objective function as a
linear combination of the conflicting multiple objective functions.
Basic Concepts
Classification of Optimization Problems
1) Based on Number of Objective Functions –
i) Single Objective Function
Find X which minimizes f(X)
Subject to the constraints
gi(X) ≤ 0, i = 1, 2,….,m
ii) Multi-Objective Function
Find X which minimizes f1(X), f2(X),_ _ _ fk(X)
Subject to the constraints
gi (X) ≤ 0, i = 1, 2,….,m
Where f1, f2,_ _ _ fk denote the objective functions to be minimized
simultaneously.
Basic Concepts
2) Based on Constraints
i) Existence of Constraints
a) With Constraints – Constraint Optimization
b) Without Constraints – Unconstraint Optimization

ii) Equality / Inequality sign


a) Equality Constraints (=)
b) Inequality Constraints (≤ ; ≥)
Basic Concepts
3) Based on Nature of Expressions for the Objective Function & Constraints
i) Linear programming –
Both objective function and constraints are linear.
ii) Non – linear programming –
Both objective function and constraints are non - linear.
iii) Quadratic programming –
Objective function is quadratic and constraints are linear.
iv) Geometric programming –
Objective function and constraints are posynomials.
Basic Concepts

4) Based on the Nature of the Optimization


i) Local / Global
a) Local – Smallest / Largest in the vicinity of a point.
b) Global - Smallest / Largest among all the minima / maxima.

ii) Deterministic / Stochastic


a) Deterministic – Means you have same thing any number of times
you try it. (Definite value)
b) Stochastic – Some or all of the parameters. (Design variable /
Pre-assigned parameters) are probabilistic
Basic Concepts
5) Based on the types of variables
i) Continuous (Any numeric value including fraction & decimal values)

ii) Discrete
a) Binary (0 or 1)
b) Integer (Any whole number without fraction)
c) Discrete sets (Example – Screw threads – cannot have any number)
Basic Concepts
Engineering Applications of Optimization
• Design of aircraft and aerospace structures for minimum weight.
• Design of civil engineering structures such as frames, foundations, bridges,
towers, chimneys, and dams for minimum cost.
• Design of material handling equipment, such as conveyors, trucks, and cranes,
for minimum cost.
• Controlling the waiting and idle times and queueing in production lines to
reduce the costs.
• Selection of machining conditions in metal-cutting processes for minimum
production cost.
Basic Concepts
• Design of pumps, turbines, and heat transfer equipment for maximum efficiency.

• Allocation of resources or services among several activities to maximize the benefit.

• Planning the best strategy to obtain maximum profit in the presence of a competitor.

• Design of water resources systems for maximum benefit.

• Analysis of statistical data and building empirical models from experimental


results to obtain the most accurate representation of the physical phenomenon.
Basic Concepts
• Find the optimal trajectories of space vehicles.
• Optimum design of linkages, cams, gears, machine tools, and other mechanical
components.
• Optimum design of electrical networks.
• Optimal production planning, controlling and scheduling.
• Optimum design of pipeline networks for process industries.

• Selection of a site for an industry.


• Shortest route taken by a salesperson visiting various cities during one tour.
Basic Concepts
Single-Variable Optimization
• Primary Concern – Identification of Optimal Point.
• Let f(x) be a continuous function defined in a certain interval a – b.
• The decision x*(ϵ [a , b]) is to be found such that f(x) is maximum or
minimum.
Basic Concepts

• A function of one variable f (x) is said to have a relative or local minimum at


x = x∗ if f (x∗) ≤ f (x∗ + h) for all sufficiently small positive and negative
values of h.

• A function of one variable f (x) is said to have a relative or local maximum at


x = x∗ if f (x∗) ≥ f (x∗ + h) for all sufficiently small positive and negative
values of h.
Basic Concepts

• A function of one variable f (x) is said to have a global or absolute minimum at


x = x∗ if f (x∗) ≤ f (x) Ɐx (for all values of x from a to b)

• A function of one variable f (x) is said to have a global or absolute maximum


at x = x∗ if f (x∗) ≥ f (x) Ɐx (for all values of x from a to b)
Basic Concepts
• Stationary Point – The point x* is said to be stationary point if the function has either
maximum or minimum.
• At maximum (or minimum) point, the function changes from increasing to
decreasing (decreasing to increasing).
• Point of Inflection – Stationary point can be point of inflection, where the function
has neither maximum nor minimum.
• At the point of inflection, the function is increasing (or decreasing) on either side of
the point.
Basic Concepts

Necessary Condition

If a function f(x) is defined in the interval a ≤ x ≤ b and has a stationary point,


i.e. relative minimum or maximum or point of inflection at x = x*, where a < x*
< b, and if the derivative df(x)/dx = f ′(x) exists as a finite number at x = x*, then
f ′(x*) = 0.
Basic Concepts
Limitations :
• The theorem does not say what happens if a minimum or maximum occurs at
a point x* where the derivative fails to exist.
• The theorem does not say what happens if a minimum or maximum occurs at
an endpoint of the interval of definition of the function.
• The theorem does not say that the function necessarily will have a minimum
or maximum at every point where the derivative is zero.
Basic Concepts
Sufficient Condition
Let f ′(x*) = f ′′(x*) = · · · = f (n−1)(x*) = 0, but f (n)(x*) ≠ 0.

Then, x = x* is
i) Local minimum value of f(x) if f(n)(x*) > 0 and n is even.
ii) Local maximum value of f(x) if f(n)(x*) ˂0 and n is even.
iii) Neither maximum or minimum if n is odd.
(In this case x* is a point of inflection)
Basic Concepts
Example – Maximum and minimum values of the function

f(x) = 12x^5 – 45x^4 + 40x^3 + 5

Solution – f '(x) = 60x^4 – 180x^3 + 120x^2
                  = 60(x^4 – 3x^3 + 2x^2)
                  = 60x^2 (x^2 – 3x + 2)
                  = 60x^2 (x – 1)(x – 2)
⸫ f '(x) = 0 at x = 0, x = 1, x = 2
Basic Concepts
The second order derivative,
f ''(x) = 60(4x^3 – 9x^2 + 4x)

At x = 1 → f ''(x) = –60 (< 0)   ⸫ Local Maxima, fmax = 12

At x = 2 → f ''(x) = 240 (> 0)   ⸫ Local Minima, fmin = –11

At x = 0 → f ''(x) = 0   ⸫ Investigate the next derivative

f '''(x) = 60(12x^2 – 18x + 4)

At x = 0 → f '''(x) = 240

This is the third order derivative (odd order, n = 3) and f '''(x) ≠ 0 at x = 0.

⸫ x = 0 is neither a maximum nor a minimum; it is a point of inflection.
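As a quick cross-check, the same stationary-point analysis can be reproduced symbolically. A minimal sketch, assuming the Python library sympy is available (it is not part of these notes):

import sympy as sp

x = sp.symbols('x')
f = 12*x**5 - 45*x**4 + 40*x**3 + 5

f1 = sp.diff(f, x)                        # 60x^4 - 180x^3 + 120x^2
stationary = sp.solve(sp.Eq(f1, 0), x)    # [0, 1, 2]

for xs in stationary:
    n, d = 1, f1
    # keep differentiating until a derivative is non-zero at the stationary point
    while d.subs(x, xs) == 0:
        d = sp.diff(d, x)
        n += 1
    print(xs, n, d.subs(x, xs), f.subs(x, xs))
# x = 1: n = 2, f''(1) = -60 < 0  -> local maximum, f = 12
# x = 2: n = 2, f''(2) = 240 > 0  -> local minimum, f = -11
# x = 0: n = 3 (odd)              -> point of inflection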


Module 2.1

Linear Programming Problem


In linear programming we try to optimize a linear objective function
subject to linear constraints and with non-negativity restrictions.
Product Mix Problem
A shop can make two types of sweets (A and B). They use two resources – flour
and sugar. To make one packet of A, they need 3kg of flour and 3kg of sugar. To
make one packet of B, they need 3kg of flour and 4kg of sugar. They have 21kg of
flour and 28kg of sugar. These sweets are sold at Rs.10 and Rs.9 per packet
respectively. Formulate the problem to find the best product mix to maximize the
revenue.
Problem Formulation
Let, x1 be the no. of packets of sweet A made.
Decision Variable
x2 be the no. of packets of sweet B made.
Maximize, 10 x1 + 9 x2 Objective function
S.T. 3 x1 + 3 x2 ≤ 21
Constraints
3 x1 + 4 x2 ≤ 28
x1, x2 ≥ 0 Non-negativity restrictions
Assumptions -

1.Proportionality

2.Linearity

3.Deterministic
Graphical Method

Maximize 10 x1 + 9 x2
S.T. 3 x1 + 3 x2 ≤ 21
     4 x1 + 3 x2 ≤ 24
     x1, x2 ≥ 0

(Plot the constraint lines 3x1 + 3x2 = 21 and 4x1 + 3x2 = 24 in the x1–x2 plane. The feasible region is bounded by the two axes and these two lines, which intersect at the point (3, 4).)
Every point inside the feasible region is dominated by a boundary point.
Every boundary point is dominated by corner point.
⸫ Enough to evaluate corner point.
In this example, 4 corner points. Substitute in 10 x1 + 9 x2

(0,0) ⸫ Z = 0
(6,0) ⸫ Z = 60
(0,7) ⸫ Z = 63
(3,4) ⸫ Z = 66 (Best Solution OR Optimum Solution)
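The same optimum can be verified numerically. A minimal sketch, assuming scipy is available (linprog only minimizes, so the objective is negated):

from scipy.optimize import linprog

res = linprog(c=[-10, -9],
              A_ub=[[3, 3], [4, 3]], b_ub=[21, 24],
              bounds=[(0, None), (0, None)])
print(res.x, -res.fun)   # expected: x1 = 3, x2 = 4 and Z = 66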
Minimize 7 x1 + 5 x2
S.T. x1 + x2 ≥ 4
     5 x1 + 2 x2 ≥ 10
     x1, x2 ≥ 0

(Plot x1 + x2 = 4 and 5x1 + 2x2 = 10; the feasible region lies above both lines and is unbounded, with the two lines intersecting at (2/3, 10/3).)

Corner Points
(4,0)        ⸫ Z = 28
(0,5)        ⸫ Z = 25
(2/3,10/3)   ⸫ Z = 64/3 = 21.33 (Best Solution OR Optimum Solution)
Maximize 10 x1 + 9 x2
S.T. 3 x1 + 3 x2 ≤ 21
     4 x1 + 3 x2 ≤ 24
     x1, x2 ≥ 0

Adding slack variables x3 and x4:
Maximize 10 x1 + 9 x2 + 0 x3 + 0 x4
S.T. 3 x1 + 3 x2 + x3 = 21
     4 x1 + 3 x2 + x4 = 24
     x1, x2, x3, x4 ≥ 0

In the final tableau below there is no positive value in Cj – Zj, so the algorithm terminates with the solution X1 = 3, X2 = 4, Z = 66.
Cj 10 9 0 0
CB XB X1 X2 X3 X4 RHS Ɵ (Ratio)
0 X3 3 3 1 0 21 21/3 = 7
0 X4 4 3 0 1 24 24/4 = 6
Cj – Zj 10 9 0 0 0
X1 enters, X4 leaves, Multiply X1 by 3, X3 = X3 – 3 X1
0 X3 0 ¾ 1 -¾ 3 3÷¾=4
10 X1 1 ¾ 0 ¼ 6 6÷¾=8
Cj – Zj 0 3/2 0 - 5/2 60
X2 enters, X3 leaves, Multiply X2 by 3/4, X1 = X1 – (¾) X2
9 X2 0 1 4/3 - 1 4
10 X1 1 0 - 1 1 3
Cj – Zj 0 0 - 2 -1 66
Minimize 7 x1 + 5 x2
S.T. x1 + x2 ≥ 4
     5 x1 + 2 x2 ≥ 10
     x1, x2 ≥ 0

Subtracting (negative) slack variables:
Minimize 7 x1 + 5 x2 + 0 x3 + 0 x4
S.T. x1 + x2 - x3 = 4
     5 x1 + 2 x2 - x4 = 10
     x1, x2, x3, x4 ≥ 0

We cannot start the Simplex method with negative slack variables because the initial basic solution would be infeasible.
Minimize 7 x1 + 5 x2 + 0 x3 + 0 x4 + Ma1 + Ma2
S.T. x1 + x2 - x3 + a1 = 4
5 x1 + 2 x2 - x4 + a2 = 10
x1, x2, x3, x4, a1, a2 ≥ 0

We have introduced artificial variables a1 & a2, but we do not want them in the final solution.
⸫ We give a large (penalty) objective function coefficient M to a1 and a2. (BIG M)
Cj -7 -5 0 0 -M -M
CB XB X1 X2 X3 X4 a1 a2 RHS Ɵ
-M a1 1 1 -1 0 1 0 4 4
-M a2 5 2 0 -1 0 1 10 2
Cj – Zj 6M-7 3M-5 -M -M 0 0 -
X1 enters, a2 leaves, Divide a2 row by key element to get X1 row, a1 = a1 - X1
-M a1 0 3/5 -1 1/5 1 -1/5 2 10/3
-7 X1 1 2/5 0 -1/5 0 1/5 2 5
Cj – Zj 0 (3M-11)/5 -M (M-7)/5 0 (-6M+7)/5 -
X2 enters, a1 leaves, divide pivot row by pivot element, X1 = X1 – 2/5 X2
-5 X2 0 1 -5/3 1/3 5/3 -1/3 10/3
-7 X1 1 0 2/3 -1/3 -2/3 1/3 2/3
Cj – Zj 0 0 -11/3 -2/3 (-3M+11)/3 (-3M+2)/3 -64/3

No positive value in Cj – Zj ⸫ The algorithm terminates. Solution is X1 = 2/3 and X2 = 10/3.


We have converted minimization problem by multiplying with – 1. ⸫ Solution is – 64/3 * - 1 = 64/3
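A quick numerical check of this result, assuming scipy is available (the ≥ constraints are multiplied by –1 to fit linprog's A_ub x ≤ b_ub form):

from scipy.optimize import linprog

res = linprog(c=[7, 5],
              A_ub=[[-1, -1], [-5, -2]], b_ub=[-4, -10],
              bounds=[(0, None), (0, None)])
print(res.x, res.fun)    # expected: x1 = 2/3, x2 = 10/3, Z = 64/3 ≈ 21.33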
Adding Slack Variables

Maximize 10 x1 + 9 x2
S.T. x1 - x2 ≤ 5
     4 x1 ≤ 24
     x1, x2 ≥ 0

Adding slack variables:
Maximize 10 x1 + 9 x2 + 0 x3 + 0 x4
S.T. x1 - x2 + x3 = 5
     4 x1 + x4 = 24
     x1, x2, x3, x4 ≥ 0
Cj 10 9 0 0
CB XB X1 X2 X3 X4 RHS Ɵ (Ratio)
0 X3 1 -1 1 0 5 5/1 = 5
0 X4 4 0 0 1 24 24/4 = 6
Cj – Zj 10 9 0 0 0
X1 enters, X3 leaves, Divide key row by key element, X4 = X4 – 4 X1
10 X1 1 -1 1 0 5 ---
0 X4 0 4 -4 1 4 1
Cj – Zj 0 19 -10 0 50
X2 enters, X4 leaves, Divide key row by key element, X1 = X1 + X2
10 X1 1 0 0 1/4 6 ---
9 X2 0 1 -1 ¼ 1 ---
Cj – Zj 0 0 9 -19/4 69
No leaving variable (no valid ratio is available).
⸫ The algorithm terminates, showing unboundedness.

(Graphically, the constraints x1 – x2 ≤ 5 and 4x1 ≤ 24 leave the feasible region open in the x2 direction, so the objective 10x1 + 9x2 can be increased without limit.)
Maximize 10 x1 + 9 x2
S.T. x1 + x2 ≤ 5
     4 x1 ≥ 24
     x1, x2 ≥ 0

Adding slack variables x3 and –x4: we cannot start with a negative slack variable, ⸫ we introduce an artificial variable a1:

Maximize 10 x1 + 9 x2 + 0 x3 + 0 x4 – M a1
S.T. x1 + x2 + x3 = 5
     4 x1 - x4 + a1 = 24
     x1, x2, x3, x4, a1 ≥ 0
Cj 10 9 0 0 -M
CB XB X1 X2 X3 X4 a1 RHS Ɵ
0 X3 1 1 1 0 0 5 5
-M a1 4 0 0 -1 1 24 6
Cj – Zj 10+4M 9 0 -M 0 ---
X1 enters, X3 leaves, Divide key row by key element to get X1 row, a1 = a1 - 4X1
10 X1 1 1 1 0 0 5
-M a1 0 -4 -4 -1 1 4
Cj – Zj 0 -1-4M -10-4M -M 0 ---

There is no positive value in Cj – Zj and simplex algorithm terminates with the solution X1 = 5 and a1 = 4.
But since there is an artificial variable we say that the problem is infeasible.
Maximize 10 x1 + 9 x2
S.T. x1 + x2 ≤ 5
     4 x1 ≥ 24
     x1, x2 ≥ 0

(Graphically, the region satisfying x1 + x2 ≤ 5 and the region satisfying 4x1 ≥ 24 do not overlap, so there is no common feasible region — the problem is infeasible.)
Introduction to DUAL

Maximize 10 x1 + 9 x2
S.T. 3x1 + 3x2 ≤ 21
     4x1 + 3x2 ≤ 24
     x1, x2 ≥ 0

Constraint 1 * 10 gives 30x1 + 30x2 ≤ 210   → Upper Estimate = 210
Constraint 1 * 4  gives 12x1 + 12x2 ≤ 84    → Upper Estimate = 84
Constraint 2 * 3  gives 12x1 + 9x2 ≤ 72     → Upper Estimate = 72
(Constraint 1 * 3) + (Constraint 2 * 1) gives 13x1 + 12x2 ≤ 87 → Upper Estimate = 87

Now to generalize, multiply the first constraint by y1 and the second constraint by y2 (y1, y2 ≥ 0)
such that,
3y1 + 4y2 ≥ 10
3y1 + 3y2 ≥ 9
Then 21y1 + 24y2 is an upper estimate of Z (we want to Minimize the Upper Estimate).
Now in the process we created a new LP problem, such that,

Minimize 21 y1 + 24 y2
S. T. 3y1 + 4y2 ≥ 10
3y1 + 3y2 ≥ 9
y1, y2 ≥ 0

The original problem is called Primal and the converted problem is called Dual
PRIMAL DUAL
Maximize 10 x1 + 9 x2 Minimize 21 y1 + 24 y2
S.T. 3x1 + 3x2 ≤ 21 S. T. 3y1 + 4y2 ≥ 10
4x1 + 3x2 ≤ 24 3y1 + 3y2 ≥ 9
x1, x2 ≥ 0 y1, y2 ≥ 0

PRIMAL DUAL
Maximization Minimization
Variables Constraints
Constraints Variables
RHS Objective Function
Objective Function RHS
A AT
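Strong duality says the primal and dual optima coincide; a small check, assuming scipy is available (≥ constraints are negated for linprog):

from scipy.optimize import linprog

primal = linprog(c=[-10, -9], A_ub=[[3, 3], [4, 3]], b_ub=[21, 24],
                 bounds=[(0, None), (0, None)])
dual = linprog(c=[21, 24], A_ub=[[-3, -4], [-3, -3]], b_ub=[-10, -9],
               bounds=[(0, None), (0, None)])
print(-primal.fun, dual.fun)   # both should be 66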
Dual Simplex Algorithm

Minimize 7 x1 + 5 x2
S.T. x1 + x2 ≥ 4
     5 x1 + 2 x2 ≥ 10
     x1, x2 ≥ 0

Adding negative slack variables:
Minimize 7 x1 + 5 x2 + 0 x3 + 0 x4
S.T. x1 + x2 - x3 = 4
     5 x1 + 2 x2 - x4 = 10
     x1, x2, x3, x4 ≥ 0

The problem is of minimization, ⸫ multiply the objective function by -1 to convert it into a maximization problem.

Cj -7 -5 0 0
CB XB X1 X2 X3 X4 RHS
0 X3 -1 -1 1 0 -4
0 X4 -5 -2 0 1 -10
Cj – Zj -7 -5 0 0 ---
Ɵ 7/5 5/2 --- ---    (ratio = (Cj – Zj) ÷ coefficient in the leaving row; the denominator should be negative)
Cj -7 -5 0 0
CB XB X1 X2 X3 X4 RHS
0 X3 -1 -1 1 0 -4
0 X4 -5 -2 0 1 -10
Cj – Zj -7 -5 0 0 ---
Ɵ 7/5 5/2 --- ---
X4 leaves, X1 enters, X3 = X3 + X1
0 X3 0 -3/5 1 -1/5 -2
-7 X1 1 2/5 0 -1/5 2
Cj – Zj 0 -11/5 0 -7/5 ---
X3 leaves, X2 enters, X1 = X1 – 2/5(X2)
-5 X2 0 1 -5/3 1/3 10/3
-7 X1 1 0 2/3 -1/3 2/3
Cj – Zj 0 0 -11/3 -2/3 -64/3
The solution has reached as all the values in Cj – Zj are negative and all the values in RHS are positive with solution
X1 = 2/3 , X2 = 10/3 , Z = 64/3
This method is called Dual Simplex because the dual was feasible throughout i.e. Cj – Zj values negative.
Module 2.2

Transportation Problem
A transportation problem is a linear programming problem that talks about
transporting a commodity or a single item from a set of supply points
where the item is available to destination points or demand points where
the item is required.
Let, xij be the quantity transported from i to j
i represents the set of supply points
j represents the set of demand points
North West Corner Rule

Cost matrix with supplies (right) and demands (bottom):

          D1   D2   D3   Supply
S1         8    9    7     40
S2         4    3    5     25
S3         8    5    6     35
Demand    30   30   40

Start at the north-west (top-left) cell: allocate as much as possible there, reduce the corresponding supply and demand, and move right when the current column's demand is met or down when the current row's supply is exhausted, until everything is allocated. This gives

x11 = 30, x12 = 10, x22 = 20, x23 = 5, x33 = 35

Cost for this solution = 8*30 + 9*10 + 3*20 + 5*5 + 6*35
                       = 625
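The North West Corner Rule is easy to code directly. A minimal sketch in plain Python using the data above:

cost   = [[8, 9, 7], [4, 3, 5], [8, 5, 6]]
supply = [40, 25, 35]
demand = [30, 30, 40]

alloc = [[0] * len(demand) for _ in supply]
i = j = 0
while i < len(supply) and j < len(demand):
    q = min(supply[i], demand[j])        # allocate as much as possible to the NW cell
    alloc[i][j] = q
    supply[i] -= q
    demand[j] -= q
    if supply[i] == 0:
        i += 1                           # row exhausted -> move down
    else:
        j += 1                           # column exhausted -> move right

total = sum(cost[r][c] * alloc[r][c] for r in range(3) for c in range(3))
print(alloc, total)                      # expected total cost: 625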
Minimum Cost Method or Least Cost Method

Allocate as much as possible in the cell with the smallest cost, adjust the supply and demand, and repeat with the next smallest cost among the remaining cells. This gives

x11 = 30, x13 = 10, x22 = 25, x32 = 5, x33 = 30

Cost for this solution = 8*30 + 7*10 + 3*25 + 5*5 + 6*30
                       = 590
Penalty Cost Method or Vogel's Approximation Method

For every row and column compute a penalty equal to the difference between its two lowest costs; allocate as much as possible in the least-cost cell of the row or column having the largest penalty, then recompute and repeat.

Iteration 1: Row penalties 1, 1, 1 ; Column penalties 4, 2, 1 → allocate x21 = 25 (row 2 exhausted)
Iteration 2: Row penalties 1, ---, 1 ; Column penalties 0, 4, 1 → allocate x32 = 30 (column 2 exhausted)
Iteration 3: Row penalties 1, ---, 2 ; Column penalties 0, ---, 1 → allocate x33 = 5, and then x11 = 5, x13 = 35

Final allocation: x11 = 5, x13 = 35, x21 = 25, x32 = 30, x33 = 5

Cost for this solution = 8*5 + 7*35 + 4*25 + 5*30 + 6*5
                       = 565
Stepping Stone Method (Requires an Initial Solution)

Start from the initial solution x11 = 5, x13 = 35, x21 = 25, x32 = 30, x33 = 5 (cost 565). For every unallocated cell, trace a closed loop through allocated cells (alternately adding and subtracting one unit, shown as +1 / -1 in the tables) and compute the net change in cost:

Position 1 – 2 → 9 - 7 + 6 - 5 = 3
Position 2 – 2 → 3 - 5 + 6 - 7 + 8 - 4 = 1
Position 2 – 3 → 5 - 4 + 8 - 7 = 2
Position 3 – 1 → 8 - 8 + 7 - 6 = 1

No gain (no negative value) ⸫ the given solution is optimum.

Starting instead from the least-cost initial solution x11 = 30, x13 = 10, x22 = 25, x32 = 5, x33 = 30 (cost 590):

Position 1 – 2 → 9 - 7 + 6 - 5 = 3
Position 2 – 1 → 4 - 3 + 5 - 6 + 7 - 8 = -1
Position 2 – 3 → 5 - 6 + 5 - 3 = 1
Position 3 – 1 → 8 - 8 + 7 - 6 = 1

The negative value at position 2 – 1 indicates a gain of 1 per unit moved. Shift Ɵ units around its loop:
x11 = 30 - Ɵ, x13 = 10 + Ɵ, x21 = Ɵ, x22 = 25 - Ɵ, x32 = 5 + Ɵ, x33 = 30 - Ɵ
The largest feasible Ɵ is 25, which gives the improved solution x11 = 5, x13 = 35, x21 = 25, x32 = 30, x33 = 5 with cost 565 — the optimum found above.
MOdified DIstribution Method (MODI Method)
(Requires an Initial Solution)

Start from the least-cost initial solution x11 = 30, x13 = 10, x22 = 25, x32 = 5, x33 = 30.

Use Ui + Vj = Cij for every allocated cell, starting with U1 = 0:
X11 → U1 + V1 = 8 → V1 = 8
X13 → U1 + V3 = 7 → V3 = 7
X33 → U3 + V3 = 6 → U3 = -1
X32 → U3 + V2 = 5 → V2 = 6
X22 → U2 + V2 = 3 → U2 = -3

Now compute Cij - (Ui + Vj) for the unallocated positions, e.g. X12 → 9 - (0 + 6) = 3:
X12 → 9 - (0 + 6) = 3
X21 → 4 - (-3 + 8) = -1
X23 → 5 - (-3 + 7) = 1
X31 → 8 - (-1 + 8) = 1

A negative sign indicates a gain (X21 → -1), ⸫ put Ɵ at X21 and shift it around the closed loop:
x11 = 30 - Ɵ, x13 = 10 + Ɵ, x21 = Ɵ, x22 = 25 - Ɵ, x32 = 5 + Ɵ, x33 = 30 - Ɵ
The largest feasible Ɵ is 25, giving the new solution x11 = 5, x13 = 35, x21 = 25, x32 = 30, x33 = 5.

Recompute Ui + Vj = Cij for the allocated cells (U1 = 0): V1 = 8, V3 = 7, U2 = -4, U3 = -1, V2 = 6.
Then Cij - (Ui + Vj) for the unallocated positions:
Position 1 – 2 → 9 - (0 + 6) = 3
Position 2 – 2 → 3 - (-4 + 6) = 1
Position 2 – 3 → 5 - (-4 + 7) = 2
Position 3 – 1 → 8 - (-1 + 8) = 1

All these values are positive, hence no further cost reduction is possible and the algorithm stops. ⸫ This is the optimal solution.
Cost = 8 * 5 + 7 * 35 + 4 * 25 + 5 * 30 + 6 * 5 = 565
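Because a transportation problem is just an LP with equality constraints, the optimum cost of 565 can also be verified with a solver. A sketch, assuming scipy is available (x_ij flattened row by row):

from scipy.optimize import linprog

c = [8, 9, 7, 4, 3, 5, 8, 5, 6]
A_eq = [
    [1, 1, 1, 0, 0, 0, 0, 0, 0],   # supply of source 1 = 40
    [0, 0, 0, 1, 1, 1, 0, 0, 0],   # supply of source 2 = 25
    [0, 0, 0, 0, 0, 0, 1, 1, 1],   # supply of source 3 = 35
    [1, 0, 0, 1, 0, 0, 1, 0, 0],   # demand of destination 1 = 30
    [0, 1, 0, 0, 1, 0, 0, 1, 0],   # demand of destination 2 = 30
    [0, 0, 1, 0, 0, 1, 0, 0, 1],   # demand of destination 3 = 40
]
b_eq = [40, 25, 35, 30, 30, 40]
res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * 9)
print(res.fun)                      # expected: 565.0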
Module 2.3

Assignment Problem
Examples
1) Jobs → People
2) Jobs → Machines
3) Machines → People
4) Project → People
Consider a problem of 4 Jobs and 4 People. Each job should go to one person and each
person should get one job. The table shows the cost to assign a job to each person. Solve the
problem by assignment problem to get the minimum assignment cost.
People
8 6 4 8
6 5 5 8
Jobs
9 10 11 12
7 6 8 10
Solution –
i) Subtract the row minimum from each row:
   4 2 0 4
   1 0 0 3
   0 1 2 3
   1 0 2 4
ii) Subtract the column minimum from each column:
   4 2 0 1
   1 0 0 0
   0 1 2 0
   1 0 2 1
iii) Mark the assignments to the zeros (one zero per row and per column).

Optimum cost = 4 + 8 + 9 + 6 = 27    Allocation J1 → P3, J2 → P4, J3 → P1, J4 → P2
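The Hungarian method for assignment problems is available in scipy as linear_sum_assignment; a quick check of the example above (scipy and numpy assumed):

import numpy as np
from scipy.optimize import linear_sum_assignment

cost = np.array([[8, 6, 4, 8],
                 [6, 5, 5, 8],
                 [9, 10, 11, 12],
                 [7, 6, 8, 10]])
rows, cols = linear_sum_assignment(cost)
print(list(zip(rows, cols)), cost[rows, cols].sum())   # expected optimum cost: 27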
Consider the cost matrix (Jobs × People):

8  6  4  7
6  5  5  8
9 10 11 12
7  6  8 10

i) Subtract the row minimum from each row:
   4 2 0 3
   1 0 0 3
   0 1 2 3
   1 0 2 4
ii) Subtract the column minimum from each column:
   4 2 0 0
   1 0 0 0
   0 1 2 0
   1 0 2 1
iii) Mark the assignments to the zeros; iv) If there is a tie, break it arbitrarily.

Optimum cost = 9 + 6 + 4 + 8 = 27    Allocation J1 → P3, J2 → P4, J3 → P1, J4 → P2
People
8 6 4 9
7 5 6 9
Jobs
9 10 11 12
9 6 8 11

i) Subtract row minimum from each row ii) Subtract column minimum from each column iii) Mark the assignments to zeros

4 2 0 5 4 2 0 2 4 2 0 2
2 0 1 4 2 0 1 1 2 0 1 1
0 1 2 3 0 1 2 0 0 1 2 0
3 0 2 5 3 0 2 2 3 0 2 2

At this point we have made only 3 assignments as against required 4.


Hungarian Algorithm –
4 2 0 2
i) Tick an unassigned row
2 0 1 1  ii) If there is a zero in that row tick the corresponding column
0 1 2 0
iii) If there is an assignment in that column tick the corresponding row
3 0 2 2  iv) If there is a zero in that row tick the corresponding column
 v) Draw lines through unticked rows and ticked columns

4 3 0 2 vi) Now take the minimum of the no. (Ɵ) where no line is passing through
1 0 0 0 vii) Add Ɵ to numbers having two lines passing through them
0 2 2 0 viii) Numbers with one line passing through them remain same
ix) Subtract Ɵ from all uncovered numbers
2 0 1 1

Start assigning the zero

Cost corresponding to original matrix Cost = 4 + 9 + 9 + 6 = 28


Allocation J1 → P3 , J2 → P4 , J3 → P1 , J4 → P2
i) Add a dummy column ii) Subtract column minimum from each column
People
8 6 4 8 6 4 0 2 1 0 0
6 5 5 6 5 5 0 0 0 1 0
Jobs
9 10 11 9 10 11 0 3 5 7 0
7 6 8 7 6 8 0 1 1 4 0

iii) Mark the assignments to zeros iv) Hungarian Algorithm v) New matrix with Ɵ = 1

2 1 0 0 2 1 0 0 2 1 0 1
0 0 1 0 0 0 1 0 0 0 1 1
3 5 7 0 3 5 7 0  2 4 6 0
1 1 4 0 1 1 4 0  0 0 3 0
 Optimum cost = 4 + 6 + 0 + 6 = 16
Allocation J1 → P3 , J2 → P1 , J4 → P2 , J3 → Nobody
Module 3.1

Integer
Linear
Programming
• In most of the optimization techniques, the design variables are
Integer Linear Programming

assumed to be continuous, which can take any real value.

• However, there are problems where the fractional values of design


variables are neither practical nor physically meaningful.

• Therefore, it becomes necessary to solve the optimization problem as


integer programming problem.
• When all the variables are constrained to take only integer values, it is
Integer Linear Programming

called all-integer programming problem.

• When the variables are restricted to take only discrete values, the problem is
called discrete programming problem.

• When some variables are restricted to take integer values, the problem is
called mixed-integer programming problem.

• When all the design variables are allowed to take on values of either 0 or 1,
the problem is called zero-one programming problem.
The All-Integer and Mixed-Integer linear programming problems can
Integer Linear Programming

be solved by,

1) Cutting Plane algorithm by Gomory


2) Branch and Bound algorithm by Land & Doig

Maximize 3 x1 + 4 x2
S.T. 3x1 - x2 ≤ 12
     7 x1 + 11 x2 ≤ 88
     x1, x2 ≥ 0 and x1, x2 are integers

Solution –

(Graphically, the constraint lines 3x1 – x2 = 12 and 7x1 + 11x2 = 88 bound the continuous feasible region.)

Optimum solution of the continuous (relaxed) problem:
x1 = 5 ½ , x2 = 4 ½ , Z = 34 ½

If we truncate the fractional part: x1 = 5, x2 = 4, Z = 31.
However, this is not the optimum solution of the integer programming problem.

The optimum integer solution is x1 = 0, x2 = 8, Z = 32.
Gomory’s Cutting Plane Method
Integer Linear Programming

• Gomory’s method is based on the idea of generating a cutting


plane.

• If one or more of the basic variables have fractional values,


some additional constraints known as Gomory constraints,
that will force the solution towards an all integer point will
have to be introduced.

6
Maximize Z = 3 x2 Add the slack variables,
S.T. 3x1 + 2x2 ≤ 7 Maximize Z = 3 x2 + 0 x3 + 0 x4
Integer Linear Programming

-x1 + x2 ≤ 2 S.T. 3x1 + 2x2 + x3 = 7


x1, x2 ≥ 0 -x1 + x2 + x4 = 2
and x1, x2 are integers x1, x2, x3, x4 ≥ 0
and are integers

Cj 0 3 0 0
CB XB X1 X2 X3 X4 RHS Ɵ (Ratio)
0 X3 3 2 1 0 7 7/2
0 X4 -1 1 0 1 2 2/1 = 2
Cj – Zj 0 3 0 0

7
Cj 0 3 0 0
CB XB X1 X2 X3 X4 RHS Ɵ (Ratio)
0 X3 3 2 1 0 7 7/2
Integer Linear Programming

0 X4 -1 1 0 1 2 2/1 = 2
Cj – Zj 0 3 0 0 0
X2 enters, X4 leaves, Divide X4 by key element to get X2, X3 = X3 – 2 X2
0 X3 5 0 1 -2 3 3/5
3 X2 -1 1 0 1 2 ---
Cj – Zj 3 0 0 -3 6
X1 enters, X3 leaves, Divide X3 by key element to get X1, X2 = X2 + X1
0 X1 1 0 1/5 -2/5 3/5
3 X2 0 1 1/5 3/5 13/5
Cj – Zj 0 0 -3/5 -9/5 39/5

No positive value in (Cj – Zj) ⸫ The algorithm terminates with solution X1 = 3/5, X2 = 13/5.
But this is not the optimum solution as X1 and X2 are not integers.

8
Now we apply Gomory's procedure.
Select the constraint corresponding to
max (fBi) = max (fB1, fB2) = max (3/5, 3/5) = 3/5

XB1 = IB1 + fB1 = 0 + 3/5
XB2 = IB2 + fB2 = 2 + 3/5

Now construct the Gomory constraint for equation (1), i.e. the X1 row:

–fBi = – Σ (j = m+1 to n) fij xj + gi
where i = 1 (the equation number), m = 2 (basic variables X1, X2), n = 4 (total variables X1, X2, X3, X4)

From the previous iteration, f13 is the fractional part of the 1st-row, 3rd-column entry and f14 of the 1st-row, 4th-column entry:
f13 = 1/5 ; f14: the entry –2/5 = –1 + 3/5, so (making the fraction positive) f14 = 3/5

⸫ –3/5 = –(1/5) x3 – (3/5) x4 + g1
i.e. –3/5 = 0·x1 + 0·x2 – (1/5) x3 – (3/5) x4 + g1

This becomes our Gomory constraint. Now, with this Gomory variable (g1), continue with the previous iteration.
9
Apply Dual Simplex as we have
negative value in RHS

Cj 0 3 0 0 0
Integer Linear Programming

CB XB X1 X2 X3 X4 g1 RHS
0 X1 1 0 1/5 -2/5 0 3/5
3 X2 0 1 1/5 3/5 0 13/5
g1 0 0 -1/5 -3/5 1 -3/5
Cj – Zj 0 0 -3/5 -9/5 0 39/5

Ɵ 0 0 3 3 0
g1 leaves, X3 enters, Divide g1 by key element to get X3, X1 = X1 -1/5 X3 , X2 = X2 – 1/5 X3
0 X1 1 0 0 -1 1 0
3 X2 0 1 0 0 1 2
0 X3 0 0 1 3 -5 3
Cj – Zj 0 0 0 0 -3 6

All the values in (Cj – Zj) are less than or equal to zero. All the values in RHS are positive and integer.

⸫ Optimum integer solution is X1 = 0, X2 = 2, Z = 6 10


Module 3.1

Integer
Linear
Programming
Integer Linear Programming
Branch & Bound
The Branch & Bound method can be considered as a refined enumeration
method in which most of the non-promising integer points are discarded
without testing them.

If at least one of the variables, say xi assumes a non-integer value, then we


can find an integer [xi] such that,
[xi] ˂ xi ˂ [xi] + 1

Then two sub-problems are formulated


xi ≤ [xi] and xi ≥ [xi] + 1

The process of finding these sub-problems is called branching.


Integer Linear Programming
Each of these two sub-problems are solved again as a continuous problem. The solution
of a continuous problem forms a node and from each node two branches originate.

The process of branching continues until an integer feasible solution is found for one of
the two continuous problems.

The nodes that are eliminated are said to have been fathomed because it is not possible
to find a better integer solution from these nodes.

The algorithm continues to select a node for further branching until all the nodes have
been fathomed. At that stage, the particular fathomed node that has the integer feasible
solution with highest/ lowest value for maximization/ minimization objective function
gives the optimum solution of the original linear integer programming problem.
Integer Linear Programming

Maximize 5 x1 + 4 x2
S.T. x1 + x2 ≤ 5
     10 x1 + 6 x2 ≤ 45
     x1, x2 ≥ 0 and are integers

LPP1 (LP relaxation): X1 = 3.75, X2 = 1.25, Z = 23.75
Branch on x1: x1 ≤ 3 and x1 ≥ 4

LPP2 (x1 ≤ 3): X1 = 3, X2 = 2, Z = 23
Fathomed node ⸪ we got an integer solution.

LPP3 (x1 ≥ 4): X1 = 4, X2 = 0.83, Z = 23.33
Fathomed node ⸪ further branching cannot improve Z beyond the integer solution already found.

⸫ Optimum solution is Zmax = 23, X1 = 3, X2 = 2
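The branch-and-bound search above can be written as a short recursive routine on top of an LP solver. A minimal sketch, assuming scipy is available; the bookkeeping (the `best` dictionary and the 1e-6 tolerance) is illustrative only:

import math
from scipy.optimize import linprog

# Maximize 5x1 + 4x2  s.t.  x1 + x2 <= 5, 10x1 + 6x2 <= 45, x1, x2 >= 0 and integer
c = [-5, -4]                          # linprog minimizes, so negate the objective
A = [[1, 1], [10, 6]]
b = [5, 45]
best = {"z": -math.inf, "x": None}

def branch(bounds):
    res = linprog(c, A_ub=A, b_ub=b, bounds=bounds)
    if not res.success:
        return                        # infeasible node -> fathomed
    z = -res.fun
    if z <= best["z"]:
        return                        # bound cannot beat the incumbent -> fathomed
    frac = [(i, v) for i, v in enumerate(res.x) if abs(v - round(v)) > 1e-6]
    if not frac:
        best["z"], best["x"] = z, [int(round(v)) for v in res.x]   # integer solution
        return
    i, v = frac[0]
    lo, hi = bounds[i]
    branch([(lo, math.floor(v)) if j == i else bd for j, bd in enumerate(bounds)])  # x_i <= floor(v)
    branch([(math.ceil(v), hi) if j == i else bd for j, bd in enumerate(bounds)])   # x_i >= ceil(v)

branch([(0, None), (0, None)])
print(best)                           # expected: z = 23 at x = (3, 2)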
Integer Linear Programming

Minimize 5 x1 + 4 x2
S.T. 3x1 + 2x2 ≥ 5
     2x1 + 3x2 ≥ 7
     x1, x2 ≥ 0 and are integers

LPP1 (LP relaxation): X1 = 0.2, X2 = 2.2, Z = 9.8
Branch on x1: x1 ≤ 0 and x1 ≥ 1

LPP2 (x1 ≤ 0): X1 = 0, X2 = 2.5, Z = 10
Branch on x2: x2 ≤ 2 and x2 ≥ 3

LPP3 (x1 ≥ 1): X1 = 1, X2 = 1.67, Z = 11.7
Fathomed node ⸪ further branching cannot improve Z.

Sub-problem with x1 ≤ 0, x2 ≤ 2: Infeasible → fathomed node.
Sub-problem with x1 ≤ 0, x2 ≥ 3: X1 = 0, X2 = 3, Z = 12 → fathomed node ⸪ we got the integer solution.

⸫ Optimum solution is Zmin = 12, X1 = 0, X2 = 3


Integer Linear Programming
When to fathom a Node in Branch and Bound
i) When the integer solution is reached.
ii) When the problem at a particular node is infeasible.
iii) When branching cannot improve the value of Z.

6
Module 3.2
Non-Linear
Programming
Problem
(NLPP)
Integer Linear Programming

Non-Linear Programming Problem

1) With Equality Constraint - Lagrangian Method


2)With Inequality Constraint - Kuhn-Tucker Method
Integer Linear Programming
Lagrangian Method
Lagrangian Multiplier Method is applicable only when constraints are
of equality sign.
A general NLPP with equality constraint is

Maximize / Minimize Z = f (x)


S.T. g (x) = 0
x≥0

Assume that f(x) and g(x) are differentiable w.r.t. x

3
Integer Linear Programming
Necessary Condition -
To find the necessary condition for the maxima (or minima) value of Z,
a new function is formed by introducing Lagrangian multiplier 𝜆

𝐿 𝑥, 𝜆 = 𝑓 𝑥 + 𝜆 ∗ g 𝑥

𝜆 is a constant and unrestricted in sign.


The function L is called the Lagrangian Function.

Maximize/Minimize 𝐿 𝑋, 𝜆 = 𝑓 𝑥 + 𝜆 ∗ g 𝑥
S.T. X≥0
4
Integer Linear Programming

Necessary Condition

𝜕𝐿 𝜕𝐿
=0 and =0
∂𝑥 ∂𝜆
After solving these equations we get stationary points (x*, 𝜆*)

5
Sufficient Condition – By the Hessian Matrix

      | ∂²f/∂x1²      ∂²f/∂x1∂x2    ∂²f/∂x1∂x3 |
H =   | ∂²f/∂x2∂x1    ∂²f/∂x2²      ∂²f/∂x2∂x3 |
      | ∂²f/∂x3∂x1    ∂²f/∂x3∂x2    ∂²f/∂x3²   |

Take the second partial derivatives of f(x) to obtain the entries of the Hessian matrix, then find the principal minors (determinants D1, D2, D3).

f(x) → Minimum → all the principal minors are positive → D1 > 0, D2 > 0, D3 > 0
f(x) → Maximum → the principal minors alternate in sign, with the first principal minor negative → D1 < 0, D2 > 0, D3 < 0


Using the Lagrange multiplier method, solve the following NLPP
Optimize Z = 6x1^2 + 5x2^2
S.T. x1 + 5x2 = 7
     x1, x2 ≥ 0

Solution
f(x) = 6x1^2 + 5x2^2
g(x) = x1 + 5x2 − 7
Taking λ as the Lagrangian multiplier, the Lagrangian function is
L = f(x) − λ g(x)
⸫ L = 6x1^2 + 5x2^2 − λ (x1 + 5x2 − 7)

Necessary Condition –
Taking the partial derivatives of L w.r.t. x1, x2 and λ:

∂L/∂x1 = 12x1 − λ = 0        → x1 = λ/12
∂L/∂x2 = 10x2 − 5λ = 0       → x2 = λ/2
∂L/∂λ  = −(x1 + 5x2 − 7) = 0 → x1 + 5x2 = 7   ---(1)

Substituting the values of x1 and x2 in equation (1):
λ/12 + 5λ/2 = 7 → λ = 84/31

Substituting the value of λ back into x1 and x2:
x1 = (1/12)(84/31) = 7/31  and  x2 = (1/2)(84/31) = 42/31

⸫ The stationary point is (x1, x2) = (7/31, 42/31)
Sufficient Condition – By the Hessian Matrix

H = | ∂²f/∂x1²     ∂²f/∂x1∂x2 |
    | ∂²f/∂x2∂x1   ∂²f/∂x2²   |

f(x) = 6x1^2 + 5x2^2

∂f/∂x1 = 12x1 → ∂²f/∂x1² = 12,   ∂²f/∂x1∂x2 = 0
∂f/∂x2 = 10x2 → ∂²f/∂x2∂x1 = 0,  ∂²f/∂x2² = 10

⸫ H = | 12  0 |
      |  0 10 |

Find the principal minors by taking the determinants:
Principal minor of order 1: |12| = 12
Principal minor of order 2: det H = 12*10 - 0 = 120

All the principal minors are positive: D1 > 0, D2 > 0
⸫ f(x) has a minimum at the point (7/31, 42/31)

⸫ Zmin = 6(7/31)^2 + 5(42/31)^2 = 294/31
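A symbolic check of this example, assuming sympy is available (the variable names are illustrative):

import sympy as sp

x1, x2, lam = sp.symbols('x1 x2 lambda')
L = 6*x1**2 + 5*x2**2 - lam*(x1 + 5*x2 - 7)
# stationarity in x1 and x2 plus the equality constraint
sol = sp.solve([sp.diff(L, x1), sp.diff(L, x2), x1 + 5*x2 - 7],
               [x1, x2, lam], dict=True)[0]
print(sol, (6*x1**2 + 5*x2**2).subs(sol))   # expected: x1 = 7/31, x2 = 42/31, Z = 294/31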
Integer Linear Programming
Using the Lagrange’s multiplier method solve the following NLPP
Optimize 𝑍 = 4𝑥1 2 + 2𝑥2 2 + 𝑥3 2 - 4 𝑥1 𝑥2
S. T. 𝑥1 + 𝑥2 + 𝑥3 = 15
2𝑥1 − 𝑥2 + 2𝑥3 = 20
𝑥1 , 𝑥2 , 𝑥3 ≥ 0

Solution
f(x) = 4𝑥1 2 + 2𝑥2 2 + 𝑥3 2 - 4 𝑥1 𝑥2
g1(x) = 𝑥1 + 𝑥2 + 𝑥3 − 15
g2(x) = 2𝑥1 − 𝑥2 + 2𝑥3 − 20
Taking 𝜆 as the Lagrangian multiplier
Lagrangian function,
L = f(x) - 𝜆1g1(x) - 𝜆2g2(x)
⸫ L = 4𝑥1 2 + 2𝑥2 2 + 𝑥3 2 - 4 𝑥1 𝑥2 - 𝜆1 ( 𝑥1 + 𝑥2 + 𝑥3 − 15) - 𝜆2 (2𝑥1 − 𝑥2 + 2𝑥3 − 20)
11
Integer Linear Programming
Necessary Condition –
Taking partial derivative of L w.r.t. x1 , x2 , x3 , 𝜆1 and 𝜆2
L = 4𝑥1 2 + 2𝑥2 2 + 𝑥3 2 - 4 𝑥1 𝑥2 - 𝜆1 ( 𝑥1 + 𝑥2 + 𝑥3 − 15) - 𝜆2 (2𝑥1 − 𝑥2 + 2𝑥3 − 20)
𝜕𝐿
= 8𝑥1 − 4𝑥2 − 𝜆1 − 2𝜆2 = 0
∂𝑥1
𝜕𝐿
= 4𝑥2 − 4𝑥1 − 𝜆1 + 𝜆2 = 0
∂𝑥2
𝜕𝐿
= 2𝑥3 − 𝜆1 − 2𝜆2 = 0
∂𝑥3
𝜕𝐿
= −(𝑥1 + 𝑥2 + 𝑥3 − 15) = 0
∂𝜆1
𝜕𝐿
= −(2𝑥1 − 𝑥2 + 2𝑥3 − 20) = 0
∂𝜆2

Solving the above equations we get, 𝑥1 = 33/9, 𝑥2 = 10/3, 𝑥3 = 8, 𝜆1 = 40/9 , 𝜆2 = 52/9


⸫ The stationary point is (𝑥1 , 𝑥2, 𝑥3 ) = (33/9, 10/3, 8)
12
Integer Linear Programming Sufficient Condition – By Hessian Matrix

𝜕2 𝑓 𝜕2 𝑓 𝜕2 𝑓
𝜕𝑥1 𝜕𝑥1 𝜕𝑥1 𝜕𝑥2 𝜕𝑥1 𝜕𝑥3
𝜕2 𝑓 𝜕2 𝑓 𝜕2 𝑓
H= 𝜕𝑥2 𝜕𝑥1 𝜕𝑥2 𝜕𝑥2 𝜕𝑥2 𝜕𝑥3
𝜕2 𝑓 𝜕2 𝑓 𝜕2 𝑓
𝜕𝑥3 𝜕𝑥1 𝜕𝑥3 𝜕𝑥2 𝜕𝑥3 𝜕𝑥3

f(x) = 4𝑥1 2 + 2𝑥2 2 + 𝑥3 2 - 4 𝑥1 𝑥2

Partially double derivate f(x) to get the values of Hessian Matrix


𝜕𝑓 𝜕2 𝑓 𝜕2 𝑓 𝜕2 𝑓
= 8𝑥1 − 4𝑥2 =8 = −4 =0
∂𝑥 1 𝜕𝑥1 𝜕𝑥1 𝜕𝑥1 𝜕𝑥2 𝜕𝑥1 𝜕𝑥3

𝜕𝑓 𝜕2 𝑓 𝜕2 𝑓 𝜕2 𝑓
= 4𝑥2 − 4𝑥1 = −4 =4 =0
∂𝑥 2 𝜕𝑥2 𝜕𝑥1 𝜕𝑥2 𝜕𝑥2 𝜕𝑥2 𝜕𝑥3

𝜕𝑓 𝜕2 𝑓 𝜕2 𝑓 𝜕2 𝑓
= 2𝑥3 =0 =0 =2
∂𝑥 3 𝜕𝑥3 𝜕𝑥1 𝜕𝑥3 𝜕𝑥2 𝜕𝑥3 𝜕𝑥3

13
Integer Linear Programming 8 −4 0
H = −4 4 0
0 0 2
Find the principal minors by finding the determinants

Principal minor of order 1 8 =8

8 −4
Principal minor of order 2 = 32 − 16 = 16
−4 4

8 −4 0
Principal minor of order 3 −4 4 0 = 32
0 0 2
All the Principal minors are positive. D1 > 0, D2 > 0, D3 > 0

f(x) →Minimum at point (33/9, 10/3, 8)

⸫ Zmin = 4(33/9)2 + 2(10/3)2 + (8)2 - 4(33/9) (10/3)= 91.11


14
Module 3.2
Non-Linear
Programming
Problem
(NLPP)
Integer Linear Programming
Kuhn – Tucker Method
Kuhn – Tucker Method is applicable only when constraints are of
inequality sign.

Maximize Z = 10 x1 + 4 x2 – 2x12 – x22


S.T. 2x1 + x2 ≤ 5
x1 , x2 ≥ 0
Solution -
Let, K = f(x) – 𝜆*g(x)
where, f(x) = Z = 10 x1 + 4 x2 – 2x12 – x22
g(x) = 2x1 +x2 – 5
⸫ K = (10 x1 + 4 x2 – 2x12 – x22 ) - 𝜆 (2x1 + x2 – 5)
Integer Linear Programming
Kuhn – Tucker Conditions – K = (10 x1 + 4 x2 – 2x12 – x22 ) - 𝜆 (2x1 + x2 – 5)

𝜕𝐾
= 0 → 10 – 4x1 - 2 𝜆 = 0 ---------(1)
∂𝑥1

𝜕𝐾
= 0 → 4 – 2x2 - 𝜆 = 0 ---------(2)
∂𝑥2

𝜆g = 0 → 𝜆 (2x1 + x2 – 5) = 0 --------(3)

g≤0 → 2x1 + x2 ≤ 5 ----------(4)

x1 , x2 ≥ 0 ------------(5)

𝜆≥0 -----------(6)
Integer Linear Programming
From (3) → 𝜆 = 0 OR 2x1 + x2 – 5 = 0

Case – 1
Let 𝜆 = 0
Then, (1) → 10 = 4 x1 → x1 = 5/2
(2) → 4 = 2 x2 → x2 = 2
Equations (1), (2), (3), (5) and (6) are satisfied….. Need to check equation (4)
Substitute value of x1 = 5/2 and x2 = 2 in (4)
(4) → 2x1 + x2 ≤ 5
2(5/2) + 2 ≤ 5
7≤5
This is absurd ⸫ Reject the case.
Integer Linear Programming
Case – 2
Let, 2x1 + x2 = 5 -------(A)
From (1) and (2) eliminating 𝜆
Multiplying (2) by 2 and subtracting it from (1)
10 – 4x1 – 2 𝜆 = 0
-(8 – 4x2 - 2 𝜆) = 0
2 – 4x1 + 4x2 = 0 → 2x1 – 2x2 = 1 ---------(B)

Solving (A) and (B) → x2 = 4/3 and x1 = 11/6


Substituting these values in (1) to get 𝜆
10 – 4x1 - 2𝜆 = 0 → 𝜆 = 4/3

Equations (1), (2), (3), (5) and (6) are satisfied….. Need to check equation (4)
Substitute value of x1 = 11/6 and x2 = 4/3 in (4)
Integer Linear Programming

(4) → 2x1 + x2 ≤ 5
2(11/6) + 4/3 ≤ 5
5≤5

All the 6 conditions are satisfied.

⸫ Optimal solution is x1 = 11/6 and x2 = 4/3

Zmax = 10x1 + 4x2 – 2x1^2 – x2^2 → 10(11/6) + 4(4/3) – 2(11/6)^2 – (4/3)^2 = 91/6
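The Case-2 equations can be solved symbolically as a check, assuming sympy is available:

import sympy as sp

x1, x2, lam = sp.symbols('x1 x2 lambda')
eqs = [10 - 4*x1 - 2*lam,     # dK/dx1 = 0
       4 - 2*x2 - lam,        # dK/dx2 = 0
       2*x1 + x2 - 5]         # active constraint 2x1 + x2 = 5
sol = sp.solve(eqs, [x1, x2, lam], dict=True)[0]
Z = 10*x1 + 4*x2 - 2*x1**2 - x2**2
print(sol, Z.subs(sol))       # expected: x1 = 11/6, x2 = 4/3, lambda = 4/3, Z = 91/6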


Integer Linear Programming
Maximize Z = 2x12 – 7x22 – 16 x1 + 2 x2 + 12 x1 x2 + 7
S.T. 2x1 + 5x2 ≤ 105
x1 , x2 ≥ 0
Solution -
Let, K = f(x) – 𝜆*g(x)
where, f(x) = Z = 2x12 – 7x22 – 16 x1 + 2 x2 + 12 x1 x2 + 7
g(x) = 2x1 + 5x2 -105
⸫ K = (2x12 – 7x22 – 16 x1 + 2 x2 + 12 x1 x2 + 7) - 𝜆 (2x1 + 5x2 -105)
Integer Linear Programming Kuhn – Tucker Conditions – K = (2x12 – 7x22 – 16 x1 + 2 x2 + 12 x1 x2 + 7) - 𝜆 (2x1 + 5x2 -105)

𝜕𝐾
= 0 → 4x1 – 16 + 12x2 – 2 𝜆 = 0 ---------(1)
∂𝑥1

𝜕𝐾
= 0 → – 14x2 + 2 + 12x1- 5𝜆 = 0 ---------(2)
∂𝑥2

𝜆g = 0 → 𝜆 (2x1 + 5x2 – 105) = 0 --------(3)

g≤0 → 2x1 + 5x2 ≤ 105 ----------(4)

x1 , x2 ≥ 0 ------------(5)

𝜆≥0 -----------(6)
Integer Linear Programming
From (3) → 𝜆 = 0 OR 2x1 + 5x2 – 105 = 0

Case – 1
Let 𝜆 = 0
Then, (1) → 4 x1 + 12x2 = 16 x1 = 1
(2) → 12 x1 – 14x2 = –2 x2 = 1
Equations (1), (2), (3), (5) and (6) are satisfied….. Need to check equation (4)
Substitute value of x1 = 1 and x2 = 1 in (4)
(4) → 2x1 + 5x2 ≤ 105
2(1) + 5(1) ≤ 105
7 ≤ 105
⸫ Condition (4) is also satisfied.
Substituting the values of x1 = 1 and x2 = 1 in Z
Z = 2x12 – 7x22 – 16 x1 + 2 x2 + 12 x1 x2 + 7 = 0 → which is absurd (⸪ problem is of max.)
Integer Linear Programming
Case – 2
Let, 2x1 + 5x2 = 105 -------(A)
From (1) and (2) eliminating 𝜆
Multiplying (1) by 5 and (2) by 2 and subtracting (2) from (1)
20x1 – 80 + 60x2 – 10𝜆 = 0
– (24x1 + 4 – 28x2 – 10 𝜆) = 0
– 4x1 – 84 + 88x2 = 0 → – 2x1 – 42 + 44x2 = 0 ---------(B)

Solving (A) and (B) → x2 = 147/49 and x1 = 45


Substituting these values in (1) to get 𝜆
4x1 – 16 + 12x2 – 2 𝜆 = 0 → 𝜆 = 100

Equations (1), (2), (3), (5) and (6) are satisfied….. Need to check equation (4)
Substitute value of x1 = 45 and x2 = 3 in (4)
Integer Linear Programming
(4) → 2x1 + 5x2 ≤ 105
2(45) + 5(3) ≤ 105
105 ≤ 105

All the 6 conditions are satisfied.

⸫ Optimal solution is x1 = 45 and x2 = 3

Zmax = 2x12 – 7x22 – 16 x1 + 2 x2 + 12 x1 x2 + 7

= 4900
Module 3.2
Non-Linear
Programming
Problem
(NLPP)
Integer Linear Programming
Simulation
It is the process of designing a model of a real system and conducting
experiments with the model for the purpose of understanding the behaviour
for the operating of the system.
A duplication of the original system.
To understand the implementation of the system.

Applications – Inventory control, Financial decisions, Business situations.

2
Integer Linear Programming
Monte-Carlo Technique
• It is an experiment on chance.
• Uses random number and requires decision making under uncertainties.

• Steps in Monte-Carlo Technique


1) Establishing probability distribution
2) Cumulative probability distribution
3) Generating random numbers
4) Find the solution using the above steps
5) Setting random number intervals
3
Integer Linear Programming
A dentist schedules patients for 30 minutes appointments. Patients take
more or less than 30 minutes depending on the type of dental work to be
done. The following summary shows the various categories of work, the
time actually needed and their probabilities. Simulate the clinic for 4 hours
and find out the average waiting time for the patients as well as the idleness
of the dentist. Assume that the patients show up the clinic at exactly their
schedule, arrival time starting at 8 am. Use the following random numbers
for handling the above problem, 40, 82, 11, 34, 25, 66, 17, 79.

Category Time Req. (Mins) No. of Patients Probability


Filling 45 40 0.40
Crown 60 15 0.15
Cleaning 15 15 0.15
Extracting 45 10 0.10
Check-up 15 20 0.20
4
Integer Linear Programming Solution -
Type Probability Cumulative Random No.
Probability Interval
Filling 0.40 0.40 00 - 39
Crown 0.15 0.55 40 - 54
Cleaning 0.15 0.70 55 – 69
Extracting 0.10 0.80 70 – 79
Check-up 0.20 1.00 80 - 99

5
Integer Linear Programming

Patient Scheduled Random No. Category Service Time


Arrival Needed (mins)
1 8:00 40 Crown 60
2 8:30 82 Check-up 15
3 9:00 11 Filling 45
4 9:30 34 Filling 45
5 10:00 25 Filling 45
6 10:30 66 Cleaning 15
7 11:00 17 Filling 45
8 11:30 79 Extracting 45

6
Integer Linear Programming

Patient Arrival Service Service Duration Service Ends Waiting Idle Time
Start (Mins) (Mins)
1 8:00 8:00 60 9:00 0 0
2 8:30 9:00 15 9:15 30 0
3 9:00 9:15 45 10:00 15 0
4 9:30 10:00 45 10:45 30 0
5 10:00 10:45 45 11:30 45 0
6 10:30 11:30 15 11:45 60 0
7 11:00 11:45 45 12:30 45 0
8 11:30 12:30 45 1:15 60 0

Average waiting time = 285 / 8 = 35.625 minutes


Idle time = 0 minutes

7
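A short plain-Python sketch of the same Monte-Carlo simulation (the data and random numbers are taken from the example above):

intervals = [  # (upper bound of random-number interval, category, service minutes)
    (39, "Filling", 45), (54, "Crown", 60), (69, "Cleaning", 15),
    (79, "Extracting", 45), (99, "Check-up", 15),
]
random_numbers = [40, 82, 11, 34, 25, 66, 17, 79]

def lookup(r):
    for upper, cat, mins in intervals:
        if r <= upper:
            return cat, mins

clock = 0                     # minutes after 8:00 at which the dentist becomes free
total_wait = idle = 0
for i, r in enumerate(random_numbers):
    arrival = 30 * i          # patients are scheduled every 30 minutes
    cat, service = lookup(r)
    start = max(arrival, clock)
    total_wait += start - arrival
    idle += max(0, arrival - clock)
    clock = start + service
print(total_wait / len(random_numbers), idle)   # expected: 35.625 minutes, 0 idle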
MODM

Quadratic Programming
A Quadratic Programming problem is a non-linear programming problem
with a quadratic objective function and linear constraints.

The General Structure of Quadratic Programming Problem is as follows,

Maximize Z = C1x1 + C2x2 + --- + Cnxn + --- +


Cn+1 x12 + Cn+2x22 + --- +
C2n+1 x1x2 + C2n+2 x1x3 + ---
S.T. ai1 x1 + ai2 x2 + --- ain xn ≤ bi
and x1, x2, --- xn ≥ 0
MODM
Matrix Form Representation
Maximize Z = Cx + xT A x
S.T. Dx ≤ b
x≥0
where A is a symmetric matrix and b and C are real vectors.
𝑥1
𝑎11 𝑎12 ⋯ 𝑎1𝑛
𝑥2
𝐶 = 𝐶1 𝐶2 ⋯ 𝐶𝑛 𝑋= ⋮ 𝐴= ⋮ ⋮ ⋱ ⋮
𝑎𝑚1 𝑎𝑚2 ⋯ 𝑎𝑚𝑛
𝑥𝑛

𝑏1
𝑑11 𝑑12 ⋯ 𝑑1𝑛
𝑏2
𝐷= ⋮ ⋮ ⋱ ⋮ b=

𝑑𝑚1 𝑑𝑚2 ⋯ 𝑑𝑚𝑛
𝑏𝑚
MODM

Example – f(x) = x12 - 2x22 + x1x2 - 2x2x3 + 5x1x3

1 0.5 2.5 𝑥1
f(x) = 𝑥1 𝑥2 𝑥3 0.5 −2 −1 𝑥2
2.5 −1 0 𝑥3

4
MODM

Example – f(x) = - 3x12 + 2x22 - 3x32

−3 0 0 𝑥1
f(x) = 𝑥1 𝑥2 𝑥3 0 2 0 𝑥2
0 0 −3 𝑥3

Example – f(x) = x12 + x1 - 2x2 + 10

1 0 𝑥1 𝑥1
f(x) = 𝑥1 𝑥2
𝑥2 + 1 −2 𝑥2 + 10
0 0

5
MODM
To check the definiteness of a function
The definiteness of a function f(x) depends on the nature of A in xT A x
We can use the matrix minor test or Eigen value test to check the definiteness
of a function.

The principal minors decides the nature of xT A x

𝑎11 𝑎12 𝑎13


𝑎11 𝑎12
𝐷1 = 𝑎11 𝐷2 = 𝐷3 = 𝑎21 𝑎22 𝑎23
𝑎21 𝑎22
𝑎31 𝑎32 𝑎33

6
MODM

A function f(x) = xT A x is

i) Positive definite D1 > 0 ; D2 > 0 ; D3 > 0


ii) Positive semi-definite D1 > 0 ; D2 ≥ 0 ; D3 ≥ 0
(At least one of Di = 0 , for i = 2,3,---,n)
iii) Negative definite D1 ˂ 0 ; D2 > 0 ; D3 ˂ 0
iv) Negative semi-definite D1 ˂ 0 ; D2 ≥ 0 ; D3 ≤ 0
(At least one of Di = , for i = 2,3,---,n)
v) Indefinite All other cases are indefinite

7
MODM
Decide the definiteness of the function
f(x) = -3 x12 + 2 x22 – 3x32 – 10 x1x2 + 4 x2x3 + 6 x1x3

Solution –
Write the function as f(x) = xT A x
−3 −5 3 𝑥1
f(x) = 𝑥1 𝑥2 𝑥3 −5 2 2 𝑥2
3 2 −3 𝑥3
The Principal minors are,
𝐷1 = −3 = −3 ˂ 0

−3 −5
𝐷2 = = −31 ˂ 0
−5 2

−3 −5 3
Thus D1 < 0, D2 < 0, D3 > 0.
𝐷3 = −5 2 2 = 27 > 0
3 2 −3
This ensures that the function is indefinite.
8
MODM
Decide the definiteness of the function
f(x) = 2 + 2x1 + 3 x2 - x12 - x22
Solution –
Write the function in the form,
−1 0 𝑥1 𝑥1
f(x) = 𝑥1 𝑥2 𝑥 + 2 3 𝑥 +2
0 −1 2 2
By matrix minor test – D1 = -1 ˂ 0 ; D2 = 1 > 0
Alternate sign with first sign negative ⸫ Negative definite.
By Eigen value test – Eigen values are -1 , -1
Both are negative ⸫ Negative definite.
9
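Both tests are easy to automate. A small sketch, assuming numpy is available:

import numpy as np

A = np.array([[-1, 0],
              [0, -1]])       # matrix of the quadratic part of f(x) = 2 + 2x1 + 3x2 - x1^2 - x2^2
minors = [np.linalg.det(A[:k, :k]) for k in range(1, A.shape[0] + 1)]
eigs = np.linalg.eigvalsh(A)  # A is symmetric, so eigvalsh applies
print(minors, eigs)           # minors: [-1, 1] (alternating, first negative); eigenvalues: [-1, -1]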
MODM

Quadratic programming method can be obtained by

i) Wolfe’s method
ii) Beale’s method

10
MODM
4 0
Find the quadratic form of matrix A =
0 3
Solution –
Quadratic form, f(x) = xT A x
4 0 𝑥1
𝑥1 𝑥2
0 3 𝑥2

𝑥1 𝑥2 4𝑥1 + 0𝑥2
(0𝑥1 + 3𝑥2)

𝑥1 𝑥2 4𝑥1
3𝑥2

4x12 + 3x22
11
MODM
4 −5 7
Find the quadratic form of matrix A = −5 −6 8
7 8 −9
Solution –
Quadratic form, f(x) = xT A x
4 −5 7 𝑥1
f(x) = 𝑥1 𝑥2 𝑥3 −5 −6 8 𝑥2
7 8 −9 𝑥3
(4𝑥1 − 5𝑥2 + 7𝑥3 )
f(x) = 𝑥1 𝑥2 𝑥3 (−5𝑥1 − 6𝑥2 + 8𝑥3 )
(7𝑥1 + 8𝑥3 − 9𝑥3 )
f(x)=(4𝑥12 − 5𝑥1 𝑥2 + 7𝑥1 𝑥3 )+(−5𝑥1 𝑥2 − 6𝑥22 + 8𝑥2 𝑥3 )+(7𝑥1 𝑥3 + 8𝑥2 𝑥3 − 9𝑥32 )
f(x)=4𝑥12 − 6𝑥22 − 9𝑥32 − 10𝑥1 𝑥2 + 14𝑥1 𝑥3 + 16𝑥2 𝑥3
12
MODM
Geometric Programming
• G. P. was developed by Duffin, Peterson & Zener.

• G. P. is used to minimize the functions that are in the form of POSYNOMIALS subject to
constraints of the same type.

• It differs from other optimization techniques in the way that it finds the optimal value of
the objective function first, instead of finding optimal values of the design variables first.

• It is advantageous in situations where the optimal value of the objective function may be
all that is of interest and the calculation of the optimum design vectors may be omitted.
2
MODM
The major disadvantage of G. P. is that it requires objective function and constraints in the
form of Posynomials.
Posynomials –
Consider for example the total cost as the objective function f(x) which is given by the sum
of several component costs Ui(x) as,
f(x) = U1 + U2 + --- + UN
In many cases, the component cost Ui can be expressed as power function of the type,
Ui = Ci x1a1i x2a2i --- xnani
where, coefficients Ci are positive constants
design parameters x1, x2, --- xn are positive variables
exponents aij are real constants (positive, negative or zero)
Function f(x), because of positive coefficients and variables and real exponents are called Posynomials.
3
MODM

G. P. technique is based on the arithmetic mean - geometric mean


inequality and therefore called geometric programming.

Let, u1, u2,… un, be any n non-negative numbers.


𝛿1 , 𝛿2 , … 𝛿𝑛 be such that
𝛿𝑖 > 0 {say 𝛿 are weights}
i =1, 2, 3,…, n
𝛿1 +𝛿2 + … +𝛿𝑛 = 1

4
MODM
Now,
Arithmetic mean ≥ Geometric mean

(δ1 u1 + δ2 u2 + … + δn un) / (δ1 + δ2 + … + δn) ≥ u1^δ1 · u2^δ2 · … · un^δn

Σ (i = 1 to n) δi ui ≥ Π (i = 1 to n) ui^δi        {since δ1 + δ2 + … + δn = 1}

Let Ui = δi ui   {for all i}

Σ (i = 1 to n) Ui ≥ Π (i = 1 to n) (Ui / δi)^δi    {⸪ ui = Ui / δi}
MODM
𝑈1 𝑈2 𝑈𝑛
The equality hold when , = =… = K (say)
𝛿1 𝛿2 𝛿𝑛

⸫ 𝑈1 = 𝐾𝛿1 ; 𝑈2 = 𝐾𝛿2 ; … 𝑈𝑛 = 𝐾𝛿𝑛


From
𝑛
LHS
෍ 𝑈𝑖 = 𝐾( 𝛿1 + 𝛿2 + … +𝛿𝑛 ) = 𝐾 {⸪ 𝛿1 + 𝛿2 + … +𝛿𝑛 = 1
𝑖=1

From RHS
𝑛 𝑈𝑖 𝛿𝑖
ς𝑖=1 = 𝐾 𝛿1 𝐾 𝛿2 … 𝐾 𝛿𝑛 = 𝐾 (𝛿1 +𝛿2 + …+𝛿𝑛 ) = 𝐾
𝛿𝑖

LHS = RHS = K
⸫ Equality holds 6
MODM
1
Minimize 𝑓 𝑥 = 𝑥1 + 𝑥2 + ; 𝑥1 , 𝑥2 > 0
𝑥1 𝑥2

𝑓 𝑥 = 𝑈1 + 𝑈2 + 𝑈3

𝑈1 𝛿1 𝑈2 𝛿2 𝑈3 𝛿3 𝑈𝑖 𝛿𝑖
≥ {σ𝑛𝑖=1 𝑈𝑖 ≥ 𝑛
ς𝑖=1
𝛿1 𝛿2 𝛿3 𝛿𝑖

𝛿1 𝛿2 𝛿3
𝑥1 𝑥2 1
=
𝛿1 𝛿2 𝑥1 𝑥2 𝛿3

𝛿1 𝛿2 𝛿3
𝛿1 −𝛿3 𝛿2 −𝛿3
1 1 1
= 𝑥1 𝑥2
𝛿1 𝛿2 𝛿3

In order to minimize f(x) we need to maximize above equation.


Now the variables are unknown, so let us make above equation free from variables.
7
MODM
To make it free from variables,
Let 𝛿1 − 𝛿3 = 0; 𝛿2 − 𝛿3 = 0; and 𝛿1 + 𝛿2 + 𝛿3 = 1
𝛿1 = 𝛿3 ; 𝛿2 = 𝛿3 𝛿1 = 𝛿2 = 𝛿3 = 1/3

There are three unknowns 𝛿1 , 𝛿2 , 𝛿3 and three equations


So we get a fix solution

Substituting the values of 𝛿1 = 𝛿2 = 𝛿3 = 1/3 in above RHS equation


𝛿1 −𝛿3 𝛿2 −𝛿3 1 𝛿1 1 𝛿2 1 𝛿3 1/3 1/3 1/3
𝑥1 𝑥2 = (1)(1) 3 3 3
𝛿1 𝛿2 𝛿3
=3
⸫Min f(x) = 3
8
MODM
Now Values of 𝑥1 , 𝑥2 = ?
𝑈1 𝑈2 𝑈𝑛
Applying the equality condition = =
𝛿1 𝛿2 𝛿𝑛

𝑥1 𝑥2 1
= =
1/3 1/3 (𝑥1 𝑥2 )1/3

𝑥1 = 𝑥2 and 𝑥1 𝑥2 2 = 1 𝑥1 3 =1 𝑥1 = 𝑥2 =1

9
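As a numerical cross-check of the geometric-programming result (using a general local optimizer, not G.P. itself), assuming scipy is available:

from scipy.optimize import minimize

f = lambda v: v[0] + v[1] + 1.0 / (v[0] * v[1])
res = minimize(f, x0=[2.0, 2.0], bounds=[(1e-6, None), (1e-6, None)])
print(res.x, res.fun)         # expected: x1 = x2 = 1, minimum value 3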
MODM
Minimize 𝑓 𝑥 = 5x1 + 20x2 + 10x1-1x2-1 ; x1 , x2 ≥ 0
𝑓 𝑥 = 𝑈1 + 𝑈2 + 𝑈3

𝑈1 𝛿1 𝑈2 𝛿2 𝑈3 𝛿3 𝑈 𝑖 𝛿𝑖
≥ {σ𝑛𝑖=1 𝑈𝑖 ≥ 𝑛
ς𝑖=1
𝛿1 𝛿2 𝛿3 𝛿𝑖

𝛿1 𝛿2 𝛿3
5𝑥1 20𝑥2 10x1−1x2−1
=
𝛿1 𝛿2 𝛿3

𝛿1 𝛿2 𝛿3
𝛿1 −𝛿3 𝛿2 −𝛿3
5 20 10
= 𝑥1 𝑥2
𝛿1 𝛿2 𝛿3

In order to minimize f(x) we need to maximize above equation.


Now the variables are unknown, so let us make above equation free from variables.
10
MODM
To make it free from variables,
Let 𝛿1 − 𝛿3 = 0; 𝛿2 − 𝛿3 = 0; and 𝛿1 + 𝛿2 + 𝛿3 = 1
𝛿1 = 𝛿3 ; 𝛿2 = 𝛿3 𝛿1 = 𝛿2 = 𝛿3 = 1/3

There are three unknowns 𝛿1 , 𝛿2 , 𝛿3 and three equations


So we get a fix solution

Substituting the values of 𝛿1 = 𝛿2 = 𝛿3 = 1/3 in above RHS equation


𝛿1 −𝛿3 𝛿2 −𝛿3 5 𝛿1 20 𝛿2 10 𝛿3 1/3 1/3 1/3
𝑥1 𝑥2 = (1)(1) 15 60 30
1/3 1/3 1/3
= 30
⸫Min f(x) = 30
11
MODM
Now Values of 𝑥1 , 𝑥2 = ?
𝑈1 𝑈2 𝑈𝑛
Applying the equality condition = =
𝛿1 𝛿2 𝛿𝑛

5𝑥1 20𝑥2 10𝑥1−1 𝑥2−1


= =
1/3 1/3 1/3

1
𝑥1 = 4𝑥2 2𝑥2 = → 2𝑥1 𝑥2 2 = 1 8𝑥2 3 =1
𝑥1 𝑥2
⸫ 𝑥1 =2 𝑎𝑛𝑑 𝑥2 =1/2

12
MODM

Goal Programming
In goal programming, the goal is specified before solving the problem and the
LPP has to be solved in such a way that the particular specified goal is achieved.
Goal Models

Single Goal Modes Multiple Goal Models

Multi goal with equal (No) priorities

Multi goal with priorities

Multi goal with priorities and weights


MODM

• Goal Programming set some estimated targets for each goal and assign
priorities to them, i.e. to rank them in order of importance.

• GoalProgramming tries to minimize the deviations from the targets that


were set. It begins with the most important goal and continues in such a
way that a less important goal is considered only after the more important
ones are satisfied or have reached the point beyond which no further
improvement are desired.

• In
the final solution all the goals may not be fulfilled to the fullest extent
however, the deviations will be the minimum possible.
MODM
A factory can manufacture 2 products A & B. The profit on one unit of A is
Rs. 80 and one unit of B is Rs. 40. The maximum demand of A is 6 units per
week and of B it is 8 units per week. The manufacturer has set up a goal of
achieving a profit of Rs. 640 per week. Formulate the problem as goal
programing and solve it.

Solution -
Let, x1 and x2 be the no. of units of A & B product per week respectively.

Let, Z be the profit


Then, Z = 80x1 + 40x2
Goal is 640
Z may exceed the goal of 640 or fall short of it.
MODM
Let, u ≥ 0 denote the shortfall (underachieved) and
v ≥ 0 denote the excess (overachieved) in profit from goal

(u & v are deviational variables)

3 possibilities exists.
Goal will be …
i) Exactly achieved (u = 0 and v = 0)
ii) Overachieved (u = 0 and v > 0)
iii) Underachieved (u > 0 and v = 0)

So only those solutions are acceptable in which at least one of the


variable, u or v is zero or both are zero.
MODM
So, Either Z = 640 – u
Or Z = 640 + v

Combining the above two conditions,

Z + u – v = 640 {where at least one variable u or v is zero

To achieve the goal as closely as possible, the objective should be to minimize


the deviation from the goal.
MODM
Problem Formulation –

Minimize Z' = u + v
S.T. 80 x1 + 40 x2 + u – v = 640
     x1 ≤ 6
     x2 ≤ 8
     x1, x2, u, v ≥ 0
such that either u or v or both equal to zero.

(Graphically: plot x1 = 6, x2 = 8 and the goal line 80x1 + 40x2 = 640. With O = (0,0), A = (6,0), B = (0,8), C = (6,8), the goal line crosses the rectangle along the segment DE, meeting x1 = 6 at D and x2 = 8 at E.)
MODM

Considering Z = 80 x1 + 40 x2 on the line DE, Z = 640.


⸫ Goal can be met exactly if x1 and x2 lies on DE.
For points on DE, u = v = 0, giving minimum value of Z'

If (x1, x2) falls in DEC, then profit is over and above 640. (u = 0 and v > 0)

If (x1, x2) falls in OADEB, then profit is less than 640. (u > 0 and v = 0)
MODM
Simplex Solution

Adding the slack variables the problem becomes,

Minimize Z' = u + v
S.T. 80 x1 + 40 x2 + u – v = 640
x1 + x3 = 6
x2 + x4 = 8
x1 , x2 , u, v ≥ 0
such that either u or v or both equal to zero
Cj 0 0 -1 -1 0 0
CB XB X1 X2 u v X3 X4 RHS Ɵ
-1 u 80 40 1 -1 0 0 640 8
0 X3 1 0 0 0 1 0 6 6
0 X4 0 1 0 0 0 1 8 ---
Cj – Zj 80 40 0 -2 0 0 -640
MODM
Cj 0 0 -1 -1 0 0
CB XB X1 X2 u v X3 X4 RHS Ɵ
-1 u 80 40 1 -1 0 0 640 8
0 X3 1 0 0 0 1 0 6 6
0 X4 0 1 0 0 0 1 8 ---
Cj – Zj 80 40 0 -2 0 0 -640
X1 enters ; X3 leaves ; u = u – 80 X1 ; X4 = X4
-1 u 0 40 1 -1 -80 0 160 4
0 X1 1 0 0 0 1 0 6 ---
0 X4 0 1 0 0 0 1 8 8
Cj – Zj 0 40 0 -2 -80 0 -160
X2 enters ; u leaves ; Divide u by 40 to get X2 ; X1 = X1 ; X4 = X4 – X2
0 X2 0 1 1/40 -1/40 -2 0 4
0 X1 1 0 0 0 1 0 6
0 X4 0 0 -1/40 1/40 2 1 4
Cj – Zj 0 0 -1 -1 0 0 0
No positive values in Cj – Zj ⸫ The solution is X1 = 6 and X2 = 4
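The goal programme is itself an LP in (x1, x2, u, v), so it can be checked with a solver. A sketch, assuming scipy is available:

from scipy.optimize import linprog

c = [0, 0, 1, 1]                           # minimize u + v
A_eq = [[80, 40, 1, -1]]                   # 80x1 + 40x2 + u - v = 640
b_eq = [640]
res = linprog(c, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, 6), (0, 8), (0, None), (0, None)])
print(res.x, res.fun)
# u + v = 0, so the goal is met exactly; the solver may return any point on the goal
# line inside the box, e.g. x1 = 6, x2 = 4 as in the simplex solution above.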
MODM
Goal Programming Formulation (In General)

Minimize Z' = u + v
S. T. Z + u – v = Goal
σ𝑛𝑗=1 aij xj = bi
u, v, xj ≥ 0
with either u or v or both equal to zero

where, u is the underachievement


v is the overachievement
MODM
A company manufactures 2 products, radios and transistors, which must be processed
through assembly & finishing departments. Assembly has 90 hours available, finishing
can handle up to 72 hours of work. Manufacturing one radio requires 6 hours in assembly
and 3 hours in finishing. Each transistor requires 3 hours in assembly and 6 hours in
finishing. If profit is Rs. 120 per radio and Rs. 90 per transistor, determine the best
combination of radios and transistors to realize profit of Rs. 2100.

Solution –
Expressing the given problem in goal problem Adding slack variables, the problem is expressed as
Minimize Z' = u Minimize Z' = u
S.T. 120 x1 + 90 x2 + u – v = 2100 S.T. 120 x1 + 90 x2 + u – v = 2100
6x1 + 3x2 ≤ 90 6x1 + 3x2 + x3 = 90
3x1 + 6x2 ≤ 72 3x1 + 6x2 + x4 = 72
x1, x2, u, v ≥ 0 x1, x2, u, v, x3, x4 ≥ 0
Such that either u or v or both equal to zero. Such that either u or v or both equal to zero.
MODM
Cj 0 0 -1 0 0 0
CB XB X1 X2 u v X3 X4 RHS Ɵ
-1 u 120 90 1 -1 0 0 2100 17.5
0 X3 6 3 0 0 1 0 90 15
0 X4 3 6 0 0 0 1 72 24
Cj – Z j 120 90 0 -1 0 0 -2100
X1 enters ; X3 leaves ; Divide X3 by 6 to get X1 ; u = u – 120 X1 ; X4 = X4 - 3X1
-1 u 0 30 1 -1 -20 0 300 10
0 X1 1 ½ 0 0 1/6 0 15 30
0 X4 0 4.5 0 0 -0.5 1 27 6
Cj – Z j 0 30 0 -1 -20 0 -300
X2 enters ; X4 leaves ; Divide X4 by 4.5 to get X2 ; u = u – 30 X2 ; X1 = X1 – 1/2 X2
-1 u 0 0 1 -1 -16.67 -6.66 120
0 X1 1 0 0 0 0.222 -0.111 12
0 X2 0 1 0 0 -0.111 0.222 6
Cj – Z j 0 0 0 -1 -16.67 -6.66 -120
No positive values in Cj – Zj ⸫ The algorithm terminates. The solution is X1 = 12 ; X2 = 6 and u = 120
MODM
In the previous example, the company sets two equally ranked goals, one to reach a profit goal of Rs. 1500
and the other to meet a radio goal of 10. Find the optimal solution.
Solution –
Let, u1 = amount by which the profit goal is underachieved
v1 = amount by which the profit goal is overachieved
u2 = amount by which the radio goal is underachieved
v2 = amount by which the radio goal is overachieved
Goal programming model can be formulated as,
Minimize Z' = u1 + u2
S.T. 120 x1 + 90 x2 + u1 – v1 = 1500
x1 + u2 – v2 = 10
6x1 + 3x2 ≤ 90
3x1 + 6x2 ≤ 72
x1, x2, u1, v1, u2, v2 ≥ 0
Such that either u1 or v1 or both equal to zero
and either u2 or v2 or both equal to zero
MODM
Adding slack variables to assembly & finishing department constraints.
Minimize Z' = u1 + u2
S.T. 120 x1 + 90 x2 + u1 – v1 = 1500
x1 + u2 – v2 = 10
6x1 + 3x2 + x3 = 90
3x1 + 6x2 + x4 = 72
x1, x2, u1, v1, u2, v2, x3, x4 ≥ 0
Such that either u1 or v1 or both equal to zero
and either u2 or v2 or both equal to zero

Cj 0 0 -1 0 -1 0 0 0
CB XB X1 X2 u1 v1 u2 v2 X3 X4 RHS Ɵ
-1 u1 120 90 1 -1 0 0 0 0 1500 12.5
-1 u2 1 0 0 0 1 -1 0 0 10 10
0 X3 6 3 0 0 0 0 1 0 90 15
0 X4 3 6 0 0 0 0 0 1 72 24
Cj – Zj 121 90 0 -1 0 -1 0 0 -1510
X1 enters ; u2 leaves ; Divide u2 by key element to get X1 ; u1 = u1 – 120 X1 ; X3 = X3 - 6X1 ; X4 = X4 - 3X1
MODM
Cj 0 0 -1 0 -1 0 0 0
CB XB X1 X2 u1 v1 u2 v2 X3 X4 RHS Ɵ
-1 u1 0 90 1 -1 -120 120 0 0 300 2.5
0 X1 1 0 0 0 1 -1 0 0 10 ---
0 X3 0 3 0 0 -6 6 1 0 30 5
0 X4 0 6 0 0 -3 3 0 1 42 14
Cj – Zj 0 90 0 -1 -121 120 0 0 -300
v2 enters ; u1 leaves ; Divide u1 by key element to get v2 ; X1 = X1 + v2 ; X3 = X3 - 6v2 ; X4 = X4 - 3v2
0 v2 0 3/4 1/120 -1/120 -1 1 0 0 5/2
0 X1 1 3/4 1/120 -1/120 0 0 0 0 25/2
0 X3 0 -3/2 -1/20 1/20 0 0 1 0 15
0 X4 0 15/4 -1/40 1/40 0 0 0 1 69/2
Cj – Zj 0 0 -1 0 -1 0 0 0 0

No positive values in Cj – Zj ⸫ The algorithm terminates. The solution is X1 = 25/2 ; X2 = 0 and v2 = 5/2
MODM

Dynamic Programming (Richard Bellman – 1950)

D. P. is used to solve a wide variety of optimization problem such as inventory


management, scheduling, etc.

The main concept is to break the problems into sub-problems (called stages) and
combine their solutions to get the solution of larger problem.
MODM

Bellman’s Principle of Optimality

The solution of D. P. is based upon Bellman’s principle of optimality


which states that,

The optimal policy must be one such that, regardless of how a particular
state is reached, all later decisions proceeding from that state must be
optimal.

MODM
Use D.P. to solve the following problem
Minimize Z = x1² + x2² + x3²
S.T. x1 + x2 + x3 = 15
     x1, x2, x3 ≥ 0

Solution –
State Variables:  K3 = x1 + x2 + x3 = 15
                  K2 = x1 + x2 = K3 – x3
                  K1 = x1 = K2 – x2

Stage 1 -  f1(K1) = min {x1²} = K1²

Stage 2 -  f2(K2) = min {x2² + f1(K1)}
                  = min {x2² + K1²}
                  = min {x2² + (K2 – x2)²}
Differentiate w.r.t. x2 and equate it to zero:
2x2 – 2(K2 – x2) = 0  →  x2 = K2 / 2
Substituting the value of x2 in the equation of f2(K2),
⸫ f2(K2) = K2²/4 + K2²/4 = K2²/2

Stage 3 -  f3(K3) = min {x3² + f2(K2)}
                  = min {x3² + K2²/2}
                  = min {x3² + (K3 – x3)²/2}
Differentiate w.r.t. x3 and equate it to zero:
2x3 – 2(K3 – x3)/2 = 0  →  x3 = K3 / 3
Substituting the value of x3 in the equation of f3(K3),
⸫ f3(K3) = (K3/3)² + (K3 – K3/3)²/2 = K3²/3
MODM
Now, we know that K3 = 15

⸫ f3(K3) = K32 / 3 = 152 / 3 = 75

x3 = K3 / 3 = 15 / 3 = 5

K2 = K3 – x3 = 15 – 5 = 10

x2 = K2 / 2 = 10 / 2 = 5

K1 = K2 – x2 = 10 – 5 = 5

x1 = K1 = 5

⸫ x1 = x2 = x3 = 5 & Z = f3(K3) = 75
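The same allocation problem can be verified numerically with a tabular dynamic programme over integer states. The sketch below is not from the notes; it treats the state K as the amount still to be allocated and applies the stage-wise recursion fk(K) = min over x of {x² + fk+1(K − x)}.

def solve():
    N, total = 3, 15
    INF = float("inf")
    # f[k][K] = minimum sum of squares using stages k..N with K units left
    f = [[INF] * (total + 1) for _ in range(N + 2)]
    choice = [[0] * (total + 1) for _ in range(N + 2)]
    f[N + 1] = [0 if K == 0 else INF for K in range(total + 1)]   # all units must be used
    for k in range(N, 0, -1):
        for K in range(total + 1):
            for x in range(K + 1):                  # try every allocation at stage k
                val = x * x + f[k + 1][K - x]
                if val < f[k][K]:
                    f[k][K], choice[k][K] = val, x
    # recover the optimal allocation by walking forward through the stages
    K, xs = total, []
    for k in range(1, N + 1):
        xs.append(choice[k][K])
        K -= xs[-1]
    return xs, f[1][total]

print(solve())    # expected: ([5, 5, 5], 75), matching the analytical solution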

MODM
Use D.P. to solve the following problem
Maximize Z = y1 y2 y3
S.T. y1 + y2 + y3 = 5
     y1, y2, y3 ≥ 0

Solution –
State Variables:  K3 = y1 + y2 + y3 = 5
                  K2 = y1 + y2 = K3 – y3
                  K1 = y1 = K2 – y2

Stage 1 -  f1(K1) = max {y1} = K1

Stage 2 -  f2(K2) = max {y2 f1(K1)}
                  = max {y2 K1}
                  = max {y2 (K2 – y2)}
Differentiate w.r.t. y2 and equate it to zero:
y2(–1) + (K2 – y2) = 0  →  y2 = K2 / 2
Substituting the value of y2 in the equation of f2(K2),
⸫ f2(K2) = (K2/2)(K2 – K2/2) = K2²/4

Stage 3 -  f3(K3) = max {y3 f2(K2)}
                  = max {y3 K2²/4}
                  = max {y3 (K3 – y3)²/4}
Differentiate w.r.t. y3 and equate it to zero:
¼ {(K3 – y3)² – 2y3(K3 – y3)} = 0  →  y3 = K3 / 3
MODM
Now, we know that K3 = 5

y3 = K3 / 3 = 5 / 3

K2 = K3 – y3 = 5 – 5 / 3 = 10 / 3

y2 = K2 / 2 = (10 / 3) / 2 = 5 / 3

K1 = K2 – y2 = 10 / 3 – 5 / 3 = 5 / 3

y1 = K1 = 5 / 3

⸫ y1 = y2 = y3 = 5 / 3 & Z = y1y2y3 = 125 / 27

Module 4
MODM
Introduction to Non-traditional
Optimization Techniques
• Genetic Algorithms (GA)

• Simulated Annealing (SA)

• Particle Swarm Optimization (PSO)


MODM

Genetic Algorithm – John Holland (1960)

The entire concept of GA is based on the famous quote by Charles Darwin -

It is not the strongest of the species that survives, nor the most intelligent,
but the one most responsive to change.

“Survival of the fittest ”


MODM
The concept of GA is related to Biology.

Cell → Chromosome → DNA

These chromosomes are represented in binary as strings of 0’s and 1’s.

0 1 0 1 0 Chromosomes

0 Gene
MODM
Maximize the function f(x) = x², where x takes integer values from 0 to 31.

Step 1: Select Encoding type
Convert the number to a binary value:
0  → 00000
31 → 11111
The string length q is chosen so that 2^q ≥ (x^(u) – x^(l)) / Δx
Thus the length of the chromosome is 5.

Step 2: Choose population size


Let n = 4

Step3: Randomly choose initial population


13, 24, 8, 19
MODM
Step 4: Fitness Function And Selection
Use Roulette wheel selection method.
Consider a wheel and divide it into divisions equal to the number of
chromosomes. The area occupied by each chromosome will be
proportional to its fitness value.
Chr. No.   X    Initial population   f(x)   Probability   Expected count   Actual count
1          13   01101                 169   0.14          0.58             1
2          24   11000                 576   0.49          1.97             2
3           8   01000                  64   0.06          0.22             0
4          19   10011                 361   0.31          1.23             1
Total                                1170   1                              4
Average                               293
MODM
Step 5: Crossover
In biological term crossover is reproduction.
Select a random crossover point, and the tails of both the chromosomes are
swapped to produce a new offspring.
Find the crossover of chromosomes 2 and 1, and chromosomes 2 and 4.
Randomly select the crossover point as shown.

2 1100|0 2 1100|1
New Chromosomes
1 0110|1 1 0110|0

2 11|000 2 11011

4 10|011 4 10000
MODM
Step 6 : Evaluation of offspring

Chr. No. Offspring X f(x)


1 01100 12 144
2 11001 25 625
3 11011 27 729
4 10000 16 256

Step 7 : Mutation
It is defined as a random tweak in the chromosome which also promotes the idea of diversity.
Perform the mutation by doing a random tweak in the chromosome number 3

1 1 0 1 1  →  1 1 1 1 1    ⸫ x = 31
f(x) = 961
MODM

Termination Conditions
There are different termination conditions, which are listed below:

1) There is no improvement in the population for over x iterations.

2) We have already predefined an absolute number of generation for our algorithm.

3) When our fitness function has reached a predefined value.


MODM
GA flow:
Population Initialization → Fitness Assignment → Selection → Crossover → Mutation
(Selection, Crossover and Mutation are the GA operators)
→ Evaluation of offspring → Termination criteria met?
    No  → return to Fitness Assignment
    Yes → Stop
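A minimal sketch (not from the notes) of the loop above, applied to the earlier example of maximizing f(x) = x² with 5-bit chromosomes; the mutation rate and the number of generations are illustrative choices.

import random

def fitness(chrom):                # decode the binary string and evaluate f(x) = x^2
    return int(chrom, 2) ** 2

def roulette(pop):                 # pick one parent with probability proportional to fitness
    total = sum(fitness(c) for c in pop)
    r = random.uniform(0, total)
    acc = 0.0
    for c in pop:
        acc += fitness(c)
        if acc >= r:
            return c
    return pop[-1]

def crossover(p1, p2):             # single-point crossover: swap the tails
    pt = random.randint(1, len(p1) - 1)
    return p1[:pt] + p2[pt:], p2[:pt] + p1[pt:]

def mutate(chrom, rate=0.05):      # random bit flips promote diversity
    return "".join(b if random.random() > rate else "10"[int(b)] for b in chrom)

pop = [format(x, "05b") for x in (13, 24, 8, 19)]     # initial population
for generation in range(20):
    new_pop = []
    while len(new_pop) < len(pop):
        c1, c2 = crossover(roulette(pop), roulette(pop))
        new_pop += [mutate(c1), mutate(c2)]
    pop = new_pop
print(max(pop, key=fitness))       # tends towards 11111, i.e. x = 31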
MODM
Difference between GA and Traditional methods of Optimization
A population of points is used for starting the procedure instead of a single design
point. Since several points are used as candidate solutions, GA are less likely to get
trapped at a local optimum.
GA use only the values of the objective function. The derivatives are not used in the
search procedure.
In GA, the search method is naturally applicable for solving discrete and integer
programming problems. For continuous design variables, the string length can be
varied to achieve any desired resolution.
The objective function value corresponding to a design vector plays the role of fitness
in natural genetics.
In every generation, a new set of strings is produced by using randomized parent
selection & crossover from the old generation. GA find a new generation with better
fitness or objective function value.
Module 4
MODM
Particle Swarm Optimization (PSO)
Kennedy & Eberhart (1995)
• PSO is based on the behaviour of a colony or swarm of insects such as ants, termites,
bees and wasps; a flock of birds or a school of fish.

• The particle denotes a bee in a colony or a bird in a flock. Each particle or individual in
a swarm behaves in a distributed way using its own intelligence and the collective or
group intelligence of the swarm.

MODM
• In the context of multivariable optimization, the swarm is assumed to be of
specified or fixed size with each particle located initially at random locations in
the multidimensional design space.
• Each particle is assumed to have two characteristics –
i) position
ii) velocity
• Each particle wanders around in the design space and remembers the best position
(in terms of food) it has discovered.
• The particles communicate information or good positions to each other and adjust
their individual positions & velocities based on the information received on the
good positions.

MODM
• Each bird in a flock follows the following simple rules;
1. It tries not to come too close to other birds.
2. It steers towards the average direction of other birds.
3. It tries to fit the average position between other birds with no wide gaps in the
flock.

• Thus the behaviour of the flock or swarm is based on the combination of three


simple factors;
1. Cohesion – Stick together
2. Separation – Don’t come too close
3. Alignment – Follow the general heading of the flock
MODM
• The PSO is based on the following model;
1. When one bird locates a target or food, it instantaneously transmits the
information to all other birds.
2. All other birds gravitate to the target or food but not directly.
3. There is a component of each bird’s own independent thinking as well as
its past memory.

• Thus the model simulates a random search in the design space for the
maximum value of the objective function. As such, gradually over many
iterations, the birds go to the target .

MODM
Computational Implementation of PSO
• Velocity of the particle j in the ith iteration
Vj(i) = Ɵ Vj(i - 1) + c1r1 [Pbest j – Xj(i - 1)] + c2r2 [Gbest – Xj(i - 1)]

• Position of the jth particle in the ith iteration


Xj(i) = Xj(i - 1) + Vj(i)

• Evaluate the objective function values corresponding to the particles as


f [X1(i)], f [X2(i)], …f [XN(i)]

• The iterative process is continued until all the particles converge to same optimal solution.
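A minimal sketch (not from the notes) of the velocity and position updates above, applied to minimizing a simple test function; the values of Ɵ, c1, c2, the swarm size and the iteration count are assumed for illustration.

import random

def f(x):                                            # sample objective: sum of squares
    return sum(xi * xi for xi in x)

dim, n_particles, iters = 2, 20, 100
theta, c1, c2 = 0.7, 1.5, 1.5                        # inertia weight and acceleration coefficients

X = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
V = [[0.0] * dim for _ in range(n_particles)]
pbest = [x[:] for x in X]                            # best position found by each particle
gbest = min(pbest, key=f)                            # best position found by the swarm

for _ in range(iters):
    for j in range(n_particles):
        r1, r2 = random.random(), random.random()
        for d in range(dim):
            V[j][d] = (theta * V[j][d]
                       + c1 * r1 * (pbest[j][d] - X[j][d])
                       + c2 * r2 * (gbest[d] - X[j][d]))
            X[j][d] += V[j][d]                       # position update Xj(i) = Xj(i-1) + Vj(i)
        if f(X[j]) < f(pbest[j]):
            pbest[j] = X[j][:]
    gbest = min(pbest, key=f)

print(gbest, f(gbest))    # converges towards the minimum at the origin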
Module 4
MODM
Simulated Annealing
• The SA method is based on the simulation of thermal annealing of critically heated solids.

• When a metal is brought into a molten state by heating it to a high temperature, the atoms
in the molten metal move freely with respect to each other.

• However, the movements of atoms get restricted as the temperature is reduced.

• As the temperature reduces, the atoms tend to get ordered and finally form crystals having
the minimum possible internal energy. The process of formation of crystals essentially
depends on the cooling rate.
MODM
The process of Annealing

MODM

• When the temperature of the molten metal is reduced at a very fast rate, it may not
be able to achieve the crystalline state; instead, it may attain a polycrystalline state
having a higher energy state compared to that of the crystalline state.

• In engineering applications, rapid cooling may introduce defects inside the


material. Thus the temperature of the molten metal needs to be reduced at a slow
and controlled rate to ensure proper solidification with a highly ordered crystalline
state that corresponds to the lowest energy state (internal energy).
This process of cooling at a slow rate is known as annealing.

MODM
Procedure
The SA method simulates the process of slow cooling of molten metal to achieve
the minimum function value in a minimization problem.
The cooling phenomenon of the molten metal is simulated by introducing a
temperature like parameter and controlling it using the concept of Boltzmann’s
probability distribution.
It implies that the energy (E) of a system in thermal equilibrium at temperature (T)
is distributed probabilistically according to the relation;
P(E) = e –E/KT
where, P(E) denotes the probability of achieving the energy level E
K is the Boltzmann’s constant
MODM
• At high temperatures the system has nearly a uniform probability of being at
any energy state; however, at low temperatures the system has a small
probability of being at a high energy state.

• Thus, when the search process is assumed to follow Boltzmann’s probability


distribution, the convergence of SA algorithm can be controlled by controlling
the temperature T.
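A minimal sketch (not from the notes) of an SA loop that uses the Boltzmann-type acceptance probability P = exp(−ΔE / T) with a slow geometric cooling schedule; the objective function, starting temperature and cooling rate are assumed for illustration.

import math, random

def f(x):                                   # sample objective to be minimized
    return x * x + 10.0 * math.sin(x)

x = random.uniform(-10, 10)                 # current design point
best = x
T, T_min, alpha = 100.0, 1e-3, 0.95         # start temperature, stop temperature, cooling rate

while T > T_min:
    for _ in range(50):                                  # trials at this temperature
        x_new = x + random.uniform(-1, 1)                # random neighbouring point
        dE = f(x_new) - f(x)
        # accept improvements always, worse points with Boltzmann probability exp(-dE/T)
        if dE < 0 or random.random() < math.exp(-dE / T):
            x = x_new
            if f(x) < f(best):
                best = x
    T *= alpha                                           # slow, controlled cooling
print(best, f(best))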

Module 5

Multi Attribute Decision Making


MADM
Introduction
• Multiple criterion decision making (MCDM) refers to making
decisions in the presence of multiple, usually conflicting criteria.

• The problems of MCDM can be broadly classified into two categories:


• Multiple Attribute Decision Making (MADM)
• Multiple Objective Decision Making (MODM)
depending on whether the problem is a selection problem or a design
problem.
MADM
MADM methods, are generally discrete, with a limited number of
predetermined alternatives. MADM is an approach employed to solve
problems involving selection from among a finite number of
alternatives. An MADM method specifies how attribute information is
to be processed in order to arrive at a choice.
MADM
Each decision table (also called Decision Matrix) in MADM methods
has four main parts,

a) Alternatives

b) Attributes

c) Weight or relative importance of each attribute

d) Measures of performance of alternatives with respect to the


attributes.
MADM
The decision table is shown in table and it shows,
• Alternatives, Ai (for i = 1, 2, ….. , N)
• Attributes, Bj (for j = 1, 2, ….. , M)
• Weights of attributes, wj (for j=1, 2, ….., M)
• The measures of performance of alternatives, mij (for i = 1, 2, ….., N;
j=1, 2, ….., M).
MADM
• Given the decision table information and a decision-making method, the
task of the decision maker is to find the best alternative and/or to rank the
entire set of alternatives.

• All the elements in the decision table must be normalized to the same units,
so that all possible attributes in the decision problem can be considered.
MADM
Simple Additive Weighting (SAW) Method - (Fishburn - 1967)
• Also called the Weighted Sum Method (WSM).
• Each attribute is given a weight, and the sum of all weights must be 1.
• Each alternative is assessed with regard to every attribute.
• The overall or composite performance score of an alternative Ai is given by
  Pi = Σ (j = 1 to M) wj (mij)normal
  where (mij)normal represents the normalized value of mij and wj is the weight of attribute Bj.


MADM
• The attributes can be beneficial or non-beneficial.

• A beneficial attribute (e.g., profit) is one whose higher measures are more
desirable for the given decision-making problem; its normalized values are
calculated as (mij)K/(mij)L, where (mij)L is the highest measure of that
attribute among the alternatives.

• A non-beneficial attribute (e.g., cost) is one for which lower measures are
desirable; its normalized values are calculated as (mij)L/(mij)K, where (mij)L
is the lowest measure of that attribute among the alternatives.

• (mij)K is the measure of the attribute for the K-th alternative under consideration.
MADM
A person has to select a house from given 3 alternatives he has. He considers 3
attributes of price, near to market and near to school with weights as 0.625,
0.125 and 0.25 respectively. Select the best alternative of house by SAW
method.

House 1

House 2 House 3
MADM

Alternative/Criteria Price (Rs. Lakhs) Near Market (Km) Near School (Km)
House 1 100 1.5 2.75
House 2 140 1.0 3.5
House 3 80 1.7 3.0

Attribute Weights 0.625 0.125 0.250


MADM
Alternative/ Price (Rs. Near Market Near School
• Solution- Criteria Lakhs) (Km) (Km)

• All the attributes are non-beneficial. House 1 100 1.5 2.75


• Normalize the decision matrix. House 2 140 1.0 3.5

• For non-beneficial attribute (mij)L/(mij)K House 3 80 1.7 3.0

Alternative/Criteria Price (Rs. Lakhs) Near Market (Km) Near School (Km)
House 1 0.80 0.66 1.00
House 2 0.57 1.00 0.78
House 3 1.00 0.58 0.91
MADM
The overall or composite performance Alternative/ Price (Rs. Near Market Near School
Criteria Lakhs) (Km) (Km)
score of an alternative is given by
equation House 1 0.80 0.66 1.00
House 2 0.57 1.00 0.78
House 3 1.00 0.58 0.91
Weights 0.625 0.125 0.25

Alternatives    Composite Performance Score


House 1 0.80*0.625+0.66*0.125+1.00*0.25 = 0.83
House 2 0.57*0.625+1.00*0.125+0.78*0.25 = 0.67
House 3 1.00*0.625+0.58*0.125+0.91*0.25 = 0.92
Rank the alternatives according to their C.P. score
Rank → House 3→ House 1→ House 2
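A minimal sketch (not from the notes) reproducing the SAW calculation above; all three attributes are non-beneficial, so each value is normalized as (lowest measure) / (measure).

weights = [0.625, 0.125, 0.250]
data = {"House 1": [100, 1.5, 2.75],      # price, near market, near school
        "House 2": [140, 1.0, 3.50],
        "House 3": [ 80, 1.7, 3.00]}

# best (lowest) value of each non-beneficial attribute
best = [min(vals[j] for vals in data.values()) for j in range(3)]
scores = {name: sum(w * best[j] / vals[j] for j, w in enumerate(weights))
          for name, vals in data.items()}
for name, s in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(name, round(s, 2))              # ≈ 0.92, 0.83, 0.68 → House 3 ranks first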
MADM
A problem is related with selection of a suitable material for a cryogenic storage tank for
transportation of liquid nitrogen. The material selection problem considers seven alternative
materials and seven attributes, and the data are given in table. The first three attributes are
beneficial and the remaining are non-beneficial. The weights of the attributes are 0.28, 0.14,
0.05, 0.24, 0.19, 0.05 and 0.05 respectively. Rank the materials as per SAW method.

TI = Toughness index Material 1:Al 2024-T6


YS = Yield strength Material 2:Al 5052-O
YM = Young’s modulus Material 3:SS 301-FH
D= Density Material 4:SS310-3AH
TE = Thermal expansion Material 5:Ti-6Al-4V
TC = Thermal conductivity Material 6:Inconel 718
SH = Specific heat Material 7:70Cu-30Zn
MADM
The quantitative values of the material selection attributes are normalized.
For beneficial criteria (mij)K/(mij)L
For non-beneficial criteria (mij)L/(mij)K
MADM
Calculate the Pi for all the seven materials

Weights 0.28 0.14 0.05 0.24 0.19 0.05 0.05

P1 = 0.4217
P2 = 0.4020 P5 = 0.5991
P3 = 0.7081 P6 = 0.5352
P4 = 0.5008 P7 = 0.3635
MADM
The values of Pi are arranged in descending order as given below
0.7081

The SAW method suggests the material designated as 3, i.e., SS 301-FH, as


the right choice for the given problem of selection of a suitable material for a
cryogenic storage tank for transportation of liquid nitrogen. The second choice
is Ti-6Al-4V, and the last choice is the material designated as 7, i.e., 70Cu-
30Zn.
Module 5

Multi Attribute Decision Making


MADM
Weighted Product Method (WPM) (Miller and Starr, 1969)
• In WPM there is multiplication instead of addition as done in SAW.
• The normalized values are calculated as explained under the SAW method.
• Each normalized value of an alternative with respect to an attribute, is raised
to the power of the relative weight of the corresponding attribute.
• The overall or composite performance score of an alternative is given by
  Pi = Π (j = 1 to M) [(mij)normal]^wj

• The alternative with the highest Pi value is considered the best alternative.
MADM
A person has to select a house from given 3 alternatives he has. He considers 3
attributes of price, near to market and near to school with weights as 0.625,
0.125 and 0.25 respectively. Select the best alternative of house by WPM
method.

Alternative/Criteria Price (Rs. Lakhs) Near Market (Km) Near School (Km)
House 1 100 1.5 2.75
House 2 140 1.0 3.5
House 3 80 1.7 3.0

Attribute Weights 0.625 0.125 0.250


MADM
Alternative/ Price (Rs. Near Market Near School
• Solution- Criteria Lakhs) (Km) (Km)

• All the attributes are non-beneficial. House 1 100 1.5 2.75


• Normalize the decision matrix. House 2 140 1.0 3.5

• For non-beneficial attribute (mij)L/(mij)K House 3 80 1.7 3.0

Alternative/Criteria Price (Rs. Lakhs) Near Market (Km) Near School (Km)
House 1 0.80 0.66 1.00
House 2 0.57 1.00 0.78
House 3 1.00 0.58 0.91
MADM
The overall or composite performance Alternative/ Price (Rs. Near Market Near School
Criteria Lakhs) (Km) (Km)
score of an alternative is given by
equation House 1 0.80 0.66 1.00
House 2 0.57 1.00 0.78
House 3 1.00 0.58 0.91
Weights 0.625 0.125 0.25

Alternatives    Composite Performance Score


House 1    0.80^0.625 × 0.66^0.125 × 1.00^0.25 = 0.82
House 2    0.57^0.625 × 1.00^0.125 × 0.78^0.25 = 0.66
House 3    1.00^0.625 × 0.58^0.125 × 0.91^0.25 = 0.91
Rank the alternatives according to their C.P. score
Rank → House 3→ House 1→ House 2
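A minimal sketch (not from the notes) of the WPM scores above: the same normalized values as in SAW, but each raised to the power of its attribute weight and then multiplied.

weights = [0.625, 0.125, 0.250]
normalized = {"House 1": [0.80, 0.66, 1.00],
              "House 2": [0.57, 1.00, 0.78],
              "House 3": [1.00, 0.58, 0.91]}
scores = {name: 1.0 for name in normalized}
for name, vals in normalized.items():
    for v, w in zip(vals, weights):
        scores[name] *= v ** w            # multiply instead of adding, as in WPM
print(scores)    # roughly House 1: 0.82, House 2: 0.66, House 3: 0.91 → House 3 best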
MADM
The results of a cylindrical turning test are presented in table. This test is
conducted for the purpose of evaluation of most common, commercially
available metal cutting fluids. All four attributes are of non-beneficial type, and
lower values are desirable. The weights for attribute FC, TF, WL and R are 0.30,
0.10, 0.40 and 0.20 respectively. Rank these cutting fluids according to WPM.

MADM

Normalized data of cutting fluid attributes
MADM

Weights 0.30 0.10 0.40 0.20

P1 = 0.7251^0.30 × 0.5489^0.10 × 1^0.40 × 0.5222^0.20 = 0.7501

Similarly,
P2 = 0.6306 P3 = 0.8864 P4 = 0.6750 P5 = 0.8990

Ranking –
P5 → P3 → P1 → P4 → P2
Module 5

Multi Attribute Decision Making


MADM
Analytic Hierarchy Process (AHP)
• Saaty (1980) developed AHP, which decomposes a decision-making
problem into a system of hierarchies of objectives, attributes (or criteria)
and alternatives.

• AHP can efficiently deal with tangible (i.e., objective) as well as non-
tangible (i.e., subjective) attributes.
MADM
The main procedure of AHP using the radical root method (also called the
geometric mean method) is as follows

Step 1: Determine the objective and the evaluation attributes. Develop a


hierarchical structure with a goal or objective at the top level, the attributes at
the second level and the alternatives at the third level.

Step 2: Determine the relative importance of different attributes with respect to


the goal or objective.
MADM
• Construct a pair-wise comparison matrix using a scale of relative importance.

• The judgements are entered using the fundamental scale of the analytic
hierarchy process (Saaty 1980).

• An attribute compared with itself is always assigned the value 1, so the main
diagonal entries of the pair-wise comparison matrix are all 1.

• The numbers 3, 5, 7, and 9 correspond to the verbal judgements ‘moderate


importance’, ‘strong importance’, ‘very strong importance’, and ‘absolute
importance’ (with 2, 4, 6, and 8 for compromise between these values).
MADM
Assuming M attributes, the pair-wise comparison of attribute i with
attribute j yields a square matrix B of size M × M, where bij denotes the comparative
importance of attribute i with respect to attribute j. In the matrix, bij = 1
when i = j and bji = 1/bij.
MADM
• Find the relative normalized weight (wj) of each attribute by
(i) calculating the geometric mean of the ith row, and
(ii) normalizing the geometric means of rows in the comparison matrix.

• This can be represented as:  GMj = (bj1 × bj2 × … × bjM)^(1/M)  and  wj = GMj / (GM1 + GM2 + … + GMM)

• The geometric mean method of AHP is commonly used to determine the


relative normalized weights of the attributes, because of its simplicity, easy
determination of the maximum Eigen value, and reduction in inconsistency
of judgements.
MADM
• Calculate matrices A3 and A4 such that A3 = A1 * A2 and A4 = A3 / A2
(Where A1 is decision matrix and A2 is weight matrix)

• Determine the maximum Eigen value 𝜆max that is the average of matrix A4.

• Calculate the consistency index CI = (𝜆max - M) / (M - 1). The smaller the


value of CI, the smaller is the deviation from the consistency.

• Obtain the random index (RI) for the number of attributes used in decision
making. Refer to Table
Random index (RI) values
MADM
• Consistency Ratio

CR = CI/RI

• If the calculated value of CR is less than the allowed CR value of 0.1,


there is good consistency in the judgements made.

• Apply normalization on original matrix.


MADM
• In the AHP model, both the relative and absolute modes of comparison can
be performed.

• The relative mode can be used when decision makers have prior knowledge
of the attributes for different alternatives to be used, or when objective data
of the attributes for different alternatives to be evaluated are not available.

• The absolute mode is used when data of the attributes for different
alternatives to be evaluated are readily available. In the absolute mode, CI is
always equal to 0, and complete consistency in judgements exists, since the
exact values are used in the comparison matrices.
MADM
Step 3: The next step is to compare the alternatives pair-wise with respect
to how much better (i.e., more dominant) they are in satisfying each of
the attributes.

Step 4: The next step is to obtain the overall or composite performance


scores for the alternatives by multiplying the relative normalized weight
(wj) of each attribute with its corresponding normalized weight value for
each alternative, and summing over the attributes for each alternative.
MADM
A person has to select a house from given 3 alternatives he has. He considers 3
attributes of price, near to market and near to school. Select the best alternative
of house by AHP method. The attribute information with respect to the criteria
is given in the table below and the decision maker has given the following pair-
wise matrix.
Alternative/Criteria Price (Rs. Lakhs) Near Market (Km) Near School (Km)
House 1 100 1.5 2.75
House 2 140 1.0 3.5
House 3 80 1.7 3.0
        P     NM    NS
P       1     5     3
NM      1/5   1     1/3
NS      1/3   3     1
MADM
Solution -
According to the Decision Maker, the relative importance of the different attributes is:

        P     NM    NS
P       1     5     3
NM      1/5   1     1/3
NS      1/3   3     1

Geometric Mean of each row,           Normalized weights (GMj / Σ GMj):
(1 × 5 × 3)^(1/3) = 2.4               0.63
(1/5 × 1 × 1/3)^(1/3) = 0.4           0.10
(1/3 × 3 × 1)^(1/3) = 1.0             0.25
Σ GMj = 3.8
MADM
• A3 = A1 * A2
  [ 1     5     3  ]   [0.63]   [1.88]
  [ 1/5   1    1/3 ] × [0.10] = [0.31]
  [ 1/3   3     1  ]   [0.25]   [0.76]

• A4 = A3 / A2 (element-wise)
  [1.88]   [0.63]   [2.98]
  [0.31] ÷ [0.10] = [3.10]
  [0.76]   [0.25]   [3.04]

• 𝜆max = 3.04 (average of matrix A4)
MADM
• Consistency Index
CI = (𝜆max - M) / (M - 1) {M= 3 - No. of Attributes}
= (3.04 - 3) / (3 - 1)
= 0.02
• Consistency Ratio
CR = CI/RI {R.I. from table}
= 0.02/0.52
= 0.0384 < 0.1

• As the calculated value of CR is less than the allowed CR value of 0.1,


there is good consistency in the judgements made.
MADM
Normalizing the estimated quantitative values of P, NM and NS.
All the attributes are non-beneficial attributes.
Alternative/Criteria Price (Rs. Near Market Near School
Lakhs) (Km) (Km)
House 1 100 1.5 2.75
House 2 140 1.0 3.5
House 3 80 1.7 3.0

0.8 0.66 1
0.57 1 0.78
1 0.58 0.91
MADM
Normalized matrix:
0.80   0.66   1.00
0.57   1.00   0.78
1.00   0.58   0.91
Attribute weights: 0.63   0.10   0.25

• Obtain the overall scores for the alternatives by multiplying the relative
normalized weight (wj) of each attribute with its corresponding normalized
value for each alternative, and summing over the attributes for each
alternative.
House 1 = 0.8 X 0.63 + 0.66 X 0.10 + 1 X 0.25 = 0.83
House 2 = 0.57 X 0.63 + 1 X 0.10 + 0.78 X 0.25 = 0.65
House 3 = 1 X 0.63 + 0.58 X 0.10 + 0.91 X 0.25 = 0.91

• Ranking the houses according to their scores -
House 3 – House 1 – House 2
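A minimal sketch (not from the notes, assuming NumPy) of the AHP steps above: geometric-mean weights from the pair-wise matrix, the consistency check, and the final aggregation of the normalized house data.

import numpy as np

B = np.array([[1,   5,   3  ],          # pair-wise comparison matrix (P, NM, NS)
              [1/5, 1,   1/3],
              [1/3, 3,   1  ]])
gm = B.prod(axis=1) ** (1 / B.shape[0])           # geometric mean of each row
w = gm / gm.sum()                                 # relative normalized weights (A2)

A3 = B @ w
A4 = A3 / w
lam_max = A4.mean()
M = B.shape[0]
CI = (lam_max - M) / (M - 1)
CR = CI / 0.52                                    # RI = 0.52 for M = 3, from the RI table
print(w.round(2), round(CR, 3))                   # weights close to 0.63, 0.10, 0.25; CR < 0.1

norm = np.array([[0.80, 0.66, 1.00],              # normalized house data (all non-beneficial)
                 [0.57, 1.00, 0.78],
                 [1.00, 0.58, 0.91]])
print((norm @ w).round(2))                        # ≈ [0.83 0.65 0.91] → House 3 best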
MADM
Problem - A strip-layout selection methodology is to be considered as
shown in Figure 1. Six alternative strip-layouts, shown in Figure 2, were
synthesized.

Figure 1 Figure 2
MADM
Five strip-layout selection attributes were identified relevant to the case, and these were:
economical material utilization (Ur), die cost (Dc), stamping operational cost (Oc), required
production rate (Pr) and job accuracy (Ja). Table presents the estimated quantitative values of
Ur, Dc, Oc, Pr, and assigned qualitative values of Ja. Ur, Pr, and Ja are beneficial attributes,
and higher values of these attributes are desired for the given stamping operation. Dc and Oc
are the non-beneficial attributes, and lower values of these attributes are desired for the
given stamping operation.
MADM
Let the decision maker prepare the following matrix:

Ur, Pr, and Ja are considered as equally important. These three attributes are
considered as moderately more important than Dc, and strongly more
important than Oc, and the relative importance values are assigned
accordingly in the above matrix. Rank the strip layouts according to AHP.
MADM
Solution -
According to the Decision Maker, the relative importance of the different
attributes is given by the pair-wise comparison matrix above.

Geometric Mean of each row,                    Normalized weights (GMj / Σ GMj):
(1 × 3 × 5 × 1 × 1)^(1/5) = 1.71               0.28
(1/3 × 1 × 3 × 1/3 × 1/3)^(1/5) = 0.64         0.10
(1/5 × 1/3 × 1 × 1/5 × 1/5)^(1/5) = 0.30       0.04
(1 × 3 × 5 × 1 × 1)^(1/5) = 1.71               0.28
(1 × 3 × 5 × 1 × 1)^(1/5) = 1.71               0.28
Σ GMj = 6.07
MADM
• A3 = A1 * A2
  [ 1    3    5    1    1  ]   [0.28]   [1.34]
  [ 1/3  1    3   1/3  1/3 ]   [0.10]   [0.49]
  [ 1/5  1/3  1   1/5  1/5 ] × [0.04] = [0.24]
  [ 1    3    5    1    1  ]   [0.28]   [1.34]
  [ 1    3    5    1    1  ]   [0.28]   [1.34]

• A4 = A3 / A2 (element-wise)
  [1.34]   [0.28]   [4.78]
  [0.49]   [0.10]   [4.90]
  [0.24] ÷ [0.04] = [6.00]
  [1.34]   [0.28]   [4.78]
  [1.34]   [0.28]   [4.78]

• 𝜆max = 5.04 (average of matrix A4)
MADM
• Consistency Index
CI = (𝜆max - M) / (M - 1) {M= 5 - No. of Attributes}
= (5.04 - 5) / (5 - 1)
= 0.01
• Consistency Ratio
CR = CI/RI {R.I. from table}
= 0.01/1.11
= 0.009 < 0.1

• As the calculated value of CR is less than the allowed CR value of 0.1,


there is good consistency in the judgements made.
MADM
• Normalizing the estimated quantitative values of Ur, Dc, Oc, Pr, and Ja.
Ur, Pr, and Ja are beneficial attributes while Dc and Oc are the non-beneficial
attributes.

Layout Ur Dc Oc Pr Ja
a 0.65 1.00 0.69 0.53 1.00
b 1.00 0.87 0.65 0.80 0.75
c 0.82 0.80 1.00 1.00 0.75
d 0.80 0.78 0.60 0.83 0.50
e 0.77 0.77 0.56 0.73 0.50
f 0.77 0.76 0.77 0.72 0.50
MADM
• Obtain the overall scores for the alternatives by multiplying the relative
normalized weight (wj) of each attribute with its corresponding normalized
value for each alternative, and summing over the attributes for each
alternative.

a = 0.65 X 0.28 + 1 X 0.10 + 0.69 X 0.04 + 0.53 X 0.28 + 1 X 0.28 = 0.738


b = 1 X 0.28 + 0.87 X 0.10 + 0.65 X 0.04 + 0.8 X 0.28 + 0.75 X 0.28 = 0.827
c = 0.82 X 0.28 + 0.8 X 0.10 + 1 X 0.04 + 1 X 0.28 + 0.75 X 0.28 = 0.839
d = 0.8 X 0.28 + 0.78 X 0.10 + 0.6 X 0.04 + 0.83 X 0.28 + 0.5 X 0.28 = 0.698
e = 0.77 X 0.28 + 0.77 X 0.10 + 0.56 X 0.04 + 0.73 X 0.28 + 0.5 X 0.28 = 0.650
f = 0.77 X 0.28 + 0.76 X 0.10 + 0.77 X 0.04 + 0.72 X 0.28 + 0.5 X 0.28 = 0.661

• Ranking the layouts according to their scores -


c–b–a–d–f-e
TOPSIS

Technique for Order Preference by Similarity to


Ideal Solution
MADM
TOPSIS

The method is based on the concept that the chosen alternative should
have the shortest Euclidean distance from the ideal solution and the
farthest from the negative ideal solution.
MADM
Procedure
1) Determine the objective and identify the pertinent evaluation
attributes.

2) A decision table matrix based on all the information available on


attributes.
If subjective attribute is given , a ranked value judgement on a scale
is adopted.

3) Obtain the normalized decision matrix


rij = xij / √( Σi xij² )
MADM
Procedure
4) Decide on the relative importance (i.e. weights) of different
attributes with respect to the objective such that Σ wj = 1

5) Obtain the weighted normalized matrix

𝑉𝑖𝑗 = 𝑟𝑖𝑗 ∗ 𝑤𝑗

6) Obtain the ideal (Best) and Negative ideal (Worst) solution from
the weighted normalized matrix
𝑉𝑗+ indicates ideal (Best) value
𝑉𝑗− indicates Negative ideal (Worst) value
(Based on beneficial and non beneficial attributes)
MADM
Procedure
7) Compute the separation measure
The separation of each alternative from the ideal one is given by
the Euclidean distance in the following equations

Si+ = √[ Σ (j = 1 to n) (Vij − Vj+)² ]

Si− = √[ Σ (j = 1 to n) (Vij − Vj−)² ]
MADM
Procedure
8) Determine the relative closeness of a particular alternative to the ideal
solution.
Pi = Si− / (Si+ + Si−)

9) Determine the preference by arranging the alternatives in the descending


order of their Pi
MADM
A manufacturer has to select most suitable material for their new
product development. It is desire to consider thermal conductivity,
reliability, life cycle and cost as the four influencing parameters. Their
intensity is considered as 10, 40, 30 and 20 percent respectively. Cost is
the non-beneficial criterion while all other are beneficial criteria. The
choice is supposed to be made for 4 materials from M1 to M4. Apply
TOPSIS method to find the best material. The decision matrix based on
the information available on attributes is given below.
Attributes
Alternatives
TC R LC C
M1 7 9 9 8
M2 8 7 8 7
M3 9 6 8 9
M4 6 7 8 6
MADM
Solution:
Step 1 and Step 2 are already done in the numerical itself (the decision matrix is given above).

Step 3: Normalize the decision matrix using rij = xij / √(Σ xij²)

Alternatives   TC     R      LC     C
M1             0.46   0.61   0.54   0.52
M2             0.52   0.47   0.48   0.46
M3             0.59   0.41   0.48   0.59
M4             0.39   0.47   0.48   0.39

Step 4: The weights of the attributes are given in the problem: (0.10; 0.40; 0.30; 0.20)
MADM
Step 5 Weighted Normalize Decision Matrix 𝑉𝑖𝑗 = 𝑟𝑖𝑗 ∗ 𝑤𝑗
0.046 0.244 0.162 0.104
0.052 0.188 0.144 0.092
0.059 0.164 0.144 0.118
0.039 0.188 0.144 0.078

Step 6 Obtain the ideal (Best) and ideal Negative (Worst) solution
(TC, R, LC are beneficial while C is non beneficial)
𝑉𝑗+ = 0.059 0.244 0.162 0.078
𝑉𝑗− = 0.039 0.164 0.144 0.118
MADM
Step 7: Compute the separation measure for each alternative,
Si+ = √[ Σj (Vij − Vj+)² ]   and   Si− = √[ Σj (Vij − Vj−)² ]

        Si+      Si−
M1      0.029    0.084
M2      0.058    0.040
M3      0.091    0.019
M4      0.060    0.047
MADM
Step 8: Relative closeness to the ideal solution, Pi = Si− / (Si+ + Si−)

M1: 0.74
M2: 0.41
M3: 0.17
M4: 0.45

Step 9 Rank the alternatives in their descending order of 𝑃𝑖


Rank → M1→ M4 → M2 → M3
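A minimal sketch (not from the notes, assuming NumPy) of the TOPSIS steps above for the material-selection data; the weights and the beneficial/non-beneficial flags follow the problem statement.

import numpy as np

X = np.array([[7, 9, 9, 8],          # TC, R, LC, C for M1..M4
              [8, 7, 8, 7],
              [9, 6, 8, 9],
              [6, 7, 8, 6]], dtype=float)
w = np.array([0.10, 0.40, 0.30, 0.20])
beneficial = np.array([True, True, True, False])

R = X / np.sqrt((X ** 2).sum(axis=0))                          # vector normalization rij
V = R * w                                                      # weighted normalized matrix
V_plus = np.where(beneficial, V.max(axis=0), V.min(axis=0))    # ideal (best) solution
V_minus = np.where(beneficial, V.min(axis=0), V.max(axis=0))   # negative ideal (worst) solution
S_plus = np.sqrt(((V - V_plus) ** 2).sum(axis=1))
S_minus = np.sqrt(((V - V_minus) ** 2).sum(axis=1))
P = S_minus / (S_plus + S_minus)
print(P.round(2))     # ≈ [0.74 0.41 0.17 0.45] → ranking M1 > M4 > M2 > M3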
PROMETHEE

Preference Ranking Organisation Method for


Enrichment Evaluations
MADM
PROMETHEE

The PROMETHEE method was introduced by Brans et al. (1984) and


belongs to the category of outranking methods.

PROMETHEE uses a pairwise comparison of alternatives in each single


criterion in order to determine partial binary relations denoting the
strength of preference of an alternative a1 over alternative a2.
MADM
Implementation of PROMETHEE

Requires following information:

• The relative importance or the weights of the criteria considered.

• The decision maker preference function, which he/she uses when


comparing the contribution of the alternatives in terms of each
separate criterion.
MADM
Methodology used in PROMETHEE

• Identify the selection criteria for the considered decision making


problem and short-list the alternatives on the basis of the identified
criteria satisfying the requirements.
• Prepare a decision table including the measures or values of all
criteria for the short-listed alternatives.
• The weights of relative importance of the criteria may be assigned
using analytic hierarchy process (AHP) method.
• To have the information on the decision maker preference function,
which he/she uses when comparing the contribution of the alternatives
in terms of each separate criterion.
MADM
Methodology
• Assign the preference function by translating the difference
between the evaluations obtained by two alternatives.

Pi,a1a2 = Gi[ ci(a1) − ci(a2) ]

• Compute the weighted average of preference function Pi with the


help of multi attribute preference index.

Π(a1, a2) = Σ (i = 1 to M) wi Pi,a1a2
MADM
Methodology
• For PROMETHEE outranking relations, the leaving flow, entering
flow and the net flow for an alternative a belonging to a set of
alternatives A are defined by the following equations:

ᵠ+(a) = Σ (x∈A) Π(a, x)        ᵠ+(a) is called the leaving flow
ᵠ−(a) = Σ (x∈A) Π(x, a)        ᵠ−(a) is called the entering flow
ᵠ(a)  = ᵠ+(a) − ᵠ−(a)          ᵠ(a) is called the net flow
MADM
Methodology
ᵠ+(a) is the measure of the outranking character of ‘a’ (i.e. dominance of
alternative ‘a’ over all other alternatives).

ᵠ-
(a) gives the outranked character of ‘a’ (i.e. degree to which
alternative ‘a’ is dominated by all other alternatives).

The net flow, ᵠ (a) represents a value function, whereby a higher value
reflects a higher attractiveness of alternative ‘a’. The net flow values are
used to indicate the outranking relationship between the alternatives.
MADM
A person has to select a house from given 3 alternatives he has. He considers 3
attributes of price, near to market and near to school with weights as 0.625,
0.125 and 0.25 respectively. Select the best alternative of house by
PROMETHEE method.

Alternative/Criteria Price (Rs. Lakhs) Near Market (Km) Near School (Km)
House 1 100 1.5 2.75
House 2 140 1.0 3.5
House 3 80 1.7 3.0

Attribute Weights 0.625 0.125 0.250


MADM
        Price (Rs. Lakhs)   Near Market (Km)   Near School (Km)
H1      100                 1.5                2.75
H2      140                 1.0                3.5
H3      80                  1.7                3.0

Pi,a1a2 = Gi[ ci(a1) − ci(a2) ]          Π(a1, a2) = Σ (i = 1 to M) wi Pi,a1a2

Price (preference)               Price (weighted, w = 0.625)
        H1      H2      H3               H1      H2      H3
H1      ---     1       0        H1      ---     0.625   0
H2      0       ---     0        H2      0       ---     0
H3      1       1       ---      H3      0.625   0.625   ---

Near Market (preference)         Near Market (weighted, w = 0.125)
        H1      H2      H3               H1      H2      H3
H1      ---     0       1        H1      ---     0       0.125
H2      1       ---     1        H2      0.125   ---     0.125
H3      0       0       ---      H3      0       0       ---

Near School (preference)         Near School (weighted, w = 0.25)
        H1      H2      H3               H1      H2      H3
H1      ---     1       1        H1      ---     0.25    0.25
H2      0       ---     0        H2      0       ---     0
H3      0       1       ---      H3      0       0.25    ---
MADM
Multi-attribute preference index, Π(a1, a2) = Σi wi Pi,a1a2 (sum of the three weighted tables):

        H1      H2      H3      ᵠ+(a)
H1      ---     0.875   0.375   1.25
H2      0.125   ---     0.125   0.25
H3      0.625   0.875   ---     1.50
ᵠ−(a)   0.75    1.75    0.50

Net dominance (net flow) ᵠ(a) = ᵠ+(a) − ᵠ−(a) = (0.5, −1.5, 1.0)
Rank → House 3 → House 1 → House 2
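A minimal sketch (not from the notes) of the PROMETHEE calculation above, using the simple "usual" preference function Gi(d) = 1 if d > 0 and 0 otherwise, which is what the worked tables assume.

weights = [0.625, 0.125, 0.25]
lower_is_better = [True, True, True]          # price, near market, near school are non-beneficial
data = {"H1": [100, 1.5, 2.75], "H2": [140, 1.0, 3.5], "H3": [80, 1.7, 3.0]}
names = list(data)

def pref(better_if_lower, v1, v2):            # usual preference function
    diff = (v2 - v1) if better_if_lower else (v1 - v2)
    return 1.0 if diff > 0 else 0.0

pi = {(a, b): sum(w * pref(nb, data[a][i], data[b][i])
                  for i, (w, nb) in enumerate(zip(weights, lower_is_better)))
      for a in names for b in names if a != b}
phi_plus = {a: sum(pi[(a, b)] for b in names if b != a) for a in names}   # leaving flow
phi_minus = {a: sum(pi[(b, a)] for b in names if b != a) for a in names}  # entering flow
net = {a: phi_plus[a] - phi_minus[a] for a in names}                      # net flow
print(net)   # {'H1': 0.5, 'H2': -1.5, 'H3': 1.0} → House 3 ranks first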
MADM
Numerical -
The problem considering eight criteria and four alternative grinding fluids is shown
in Table. The eight criteria used to evaluate the four short-listed alternatives included
four machining process output variables wheel wear (WW), tangential force (TF),
grinding temperature (GT), and surface roughness (SR), and four cutting (i.e.
grinding) fluid properties and characteristics recyclability (R), toxic harm rate (TH),
environment pollution tendency (EP), and stability (S). The cutting fluid properties
and characteristics are expressed in linguistic terms.
MADM
The linguistic terms are converted to fuzzy scores and the objective data of
cutting fluid selection criteria is as shown in table.

The normalised weights of each criterion are WW = 0.3306, TF = 0.0718, GT =


0.1808, SR = 0.0718, R = 0.0459, TH = 0.1260, EP = 0.1260, and S = 0.0472.
WW, TF, GT, SR, TH and EP are non-beneficial criterion and lower values are
desired. R and S are beneficial criterion and higher values are desired. Rank the
cutting fluids according to PROMETHEE method.
Optimization Techniques
MEDLO5011
Module: Robust Design Methods (DoE)

-Dr. Sainath A. Waghmare Date: 06/10/2021 &


FCRIT, Mechanical Engg. 09/10/2021



Robust Design Methods:



Robust Design:
• Robust means that something is sturdy or able to hold up.

• Robust Design is a technique that reduces variation in a product by reducing the


sensitivity of the design of the product to sources of variation rather than by
controlling their sources.

• The end result is a robust design, a design that has minimum sensitivity to
variations in uncontrollable factors.

• In other words, it's the process of making sure that finished products maintain their
consistency even when factors interfere with the production process. Those factors
or variations in production are often called noise.

• Variation may be introduced by manufacturing process, environment, parts from


outside suppliers, or end users.
Robust Design:



Design of Experiments



Experiments: Introduction
• An experiment is a test or series of tests.
• It is defined as the systematic procedure carried out under controlled
conditions in order to discover an unknown effect, to test or establish a
hypothesis, or to illustrate a known effect.
• When analyzing a process, experiments are often used to evaluate which
process inputs have a significant impact on the process output, and what
the target level of those inputs should be to achieve a desired result
(output).
• Experiments can be designed in many different ways to collect information.
Design of Experiments: Introduction

• Design of Experiments (DOE) is also referred to as Designed


Experiments or Experimental Design - all of the terms have the same
meaning.
• Collection of statistical techniques providing systematic way to sample
design space.
• Useful when tackling new problem in which you know very little about
design space.
• Study effects of multiple input variables on 1 or more output
parameters.



Design of Experiments: Background
• DoE was first introduced by Sir R. A. Fisher in England in the early 1920s
for agricultural experiments.
• His primary goal was to determine optimum water, rain, sunshine,
fertilizers and soil conditions needed to produce the best crop.
• Using DoE, Fisher was able to lay out all combinations (also called as
treatments and trial conditions) of the factors included in the study.
• When the number of combinations became too large, he devised a way to carry out
only a fraction of the total possibilities, chosen such that all the factors would
still eventually be present.
• He invented the first method to analyze the effect of more than one
factor at a time.
Design of Experiments: Evolution

• Factorial and fractional factorial designs (1920+) > Agriculture


• Sequential designs (1940+) > Defense
• Response surface designs for process optimization (1950+) > Chemical
• Robust parameter design for variation reduction (1970+) >
Manufacturing and Quality Improvement
• Virtual (computer) experiments using computational models (1990+) >
Automotive, Semiconductor, aircrafts…..



Design of Experiments: Applications
• Use DOE when more than one input factor is suspected of influencing an
output. For example, it may be desirable to understand the effect of
temperature and pressure on the strength of a glue bond.
• DOE can also be used to confirm suspected input/output relationships and
to develop a predictive equation suitable for performing what-if analysis.
• Experimental design can be used at the point of greatest leverage to reduce
design costs by speeding up the design process, reducing late engineering
design changes, and reducing product material and labor complexity.
• Designed Experiments are also powerful tools to achieve manufacturing
cost savings by minimizing process variation and reducing rework, scrap,
and the need for inspection.



Design of Experiments:
Necessity of careful planning of experiment
Limited resources
• Time to carry out experiment
• Costs of required materials/equipment
Avoid reaching suboptimal settings
Avoid missing interesting parts of experimental region
Protection against external uncontrollable/undetectable
influences
Getting precise estimates



Design of Experiments: Concept
Several inputs x = (x1, x2, x3, …) — independent variables, parameters
        →  process f(x)  →
Several outputs y = f(x1, x2, x3, …) — dependent variables

The objectives of the experiment may include the


following:
1. Determining which variables are most influential on the
response y
2. Determining where to set the influential x’s so that y is
almost always near the desired nominal value
3. Determining where to set the influential x’s so that
variability in y is small
4. Determining where to set the influential x’s so that the
effects of the uncontrollable variables z1, z2, . . . , zq are
minimized.



Design of Experiments: Concept eg.
Consider the following diagram of a cake-baking process. There are three aspects of the
process that are analyzed by a designed experiment:

Components of Experimental Design:
• Factors, or inputs to the process
• Levels, or settings of each factor in the study
• Response, or output of the experiment
Design of Experiments: Process

Define problem → Set objectives → Determine responses and factors → Conduct experiment and data collection → Data analysis → Interpret results → Verify predicted results → Additional experimentations if required



Design of Experiments: Basic Steps

1. Prepare the design
2. Data collection
3. Statistical analysis of the data
4. Derive conclusions
5. Formulate recommendations as a result of the experiments
Design of Experiments: Terms
1. Factor (F): It is an independent variable that may affect the response and of which
different levels are used in an experiments.

2. Response: It is an output variable that shows the observed results or value of an


experimental treatment. These are the ones that we want to optimize.

Example: An operator has to set a machine to obtain good surface finish.

Here, variables or independent variables or factor are


A. Feed, x1
B. Speed, x2
C. Coolant temperature, x3

Response or dependent variable is Surface Finish, y = f(x1, x2, x3)


Design of Experiments: Terms
3. Levels (L): It is the setting or adjustment of a factor at a specific level during an
experiment.
4. Noise factor: It is an independent variable that is difficult or too expensive to
control as a part of standard experimental conditions.
Not desirable to make inferences on noise factors, but they are included in an
experiment to broaden the conclusions regarding control factors.
Here, the levels for factors are
A. Feed (10 mm to 50 mm)
B. Speed (2 mm/sec to 10 mm/sec)
C. Coolant temperature (20oC to 30oC)

Noise can be metal hardness and humidity.



Design of Experiments: Terms
5. Effect: It is a relationship between a factor and a response variable.
It include the Main effect, Dispersion effect and Interaction effect.

Main Effect: It is the impact or influence of a single factor on the mean of the
response variable.

6. Interaction: It occurs when the influence of the one factor on the response variable
depends on one or more other factors.

Interaction effect of the factor=(Mean of the response variable at High level of


interaction – Mean of the response variable at Low level of interaction)



Design of Experiments: Terms

[Interaction plots: No interaction, Interaction-1, Interaction-2]



Design of Experiments: Terms
7. Experimental Run: It is a single performance of the experiment for a specific set of
treatment combinations.
n represents the number of experimental runs:  n = L^F
where L = number of levels and F = number of factors.
So, with L = 2 and F = 3, n = 2³ = 8 experiments are possible.
Therefore, we will get 8 observations of surface finish. From them, we can find
the best surface finish and the respective configuration of factors.

Feed   Speed   Coolant temp
10     2       20
10     10      20
10     2       30
10     10      30
50     2       20
50     10      20
50     2       30
50     10      30
Design of Experiments: Terms

What if we have, L = 3 and F = 4


n = 3⁴ = 81 experiments are possible.

Therefore, we will get 81 observations for surface finishes.

This is called as Full Factorial Design



Design of Experiments: Terms: Full factorial design
• A full factorial design of experiments consists of the following:
• Vary one factor at a time
• Perform experiments for all levels of all factors
• Hence perform a large number of experiments that are needed!
• All interactions are captured.
• Consider a simple design for the following case:
• Let the number of factors = k
• Let the number of levels for the ith factor = ni
• The total number of experiments (N) that need to be performed is
  N = Π (i = 1 to K) ni    (with the same number of levels L for every factor, N = L^F)



Design of Experiments: Terms: Fractional factorial design
• Due to combinatorial explosion, you cannot usually
perform full factorial experiment
• Instead, consider just some possible combinations
• Questions to be answered
– How many experiments do I need?
– Which combination of levels should I choose?
• Need to balance experiment cost with design space
coverage



Design of Experiments: Terms: Fractional factorial design

• Initially, it may be useful to look at a large number of factors superficially
rather than a small number of factors in detail:

Few factors, many levels                  vs.    Many factors, few levels
F1: L11, L12, L13, L14, …                        f1: l11, l12
F2: L21, L22, L23, L24, …                        f2: l21, l22
F3: L31, L32, L33, L34, …                        …
…                                                fn: ln1, ln2



Design of Experiments: Terms: Two Level Fractional Factorial Designs

2^k factorial design



Design of Experiments: Terms: Two Level Fractional Factorial Designs
DOE - Factorial Designs - 2³

Trial A B C Trial A B C
1 Lo Lo Lo 1 -1 -1 -1
2 Lo Lo Hi 2 -1 -1 +1
3 Lo Hi Lo 3 -1 +1 -1
4 Lo Hi Hi 4 -1 +1 +1
5 Hi Lo Lo 5 +1 -1 -1
6 Hi Lo Hi 6 +1 -1 +1
7 Hi Hi Lo 7 +1 +1 -1
8 Hi Hi Hi 8 +1 +1 +1



Design of Experiments: Terms: Half Fractional Designs
A half-fraction of the 2^k design involves running only half of the treatments
of the full factorial design. For example, consider a 2³ design that requires 8
runs in all.

A half-fraction is the design in which only four of the eight treatments are
run. The fraction is denoted as 2^(3−1), with the “−1” in the exponent denoting
a half-fraction.

In the next figure: Assume that the treatments chosen for the half-fraction
design are the ones where the interaction ABC is at the high level (1). The
resulting 2^(3−1) design has the design matrix shown below.



2³ full factorial design (No. of runs = 8):
I    A    B    C    AB   AC   BC   ABC
1   -1   -1   -1    1    1    1   -1
1   -1   -1    1    1   -1   -1    1
1   -1    1   -1   -1    1   -1    1
1   -1    1    1   -1   -1    1   -1
1    1   -1   -1   -1   -1    1    1
1    1   -1    1   -1    1   -1   -1
1    1    1   -1    1   -1   -1   -1
1    1    1    1    1    1    1    1

2^(3−1) half-fraction with I = ABC (No. of runs = 4):
I    A    B    C    AB   AC   BC   ABC
1   -1   -1    1    1   -1   -1    1
1   -1    1   -1   -1    1   -1    1
1    1   -1   -1   -1   -1    1    1
1    1    1    1    1    1    1    1
The column corresponding to the identity, I, and the column corresponding
to the interaction, ABC, are identical.
Design of Experiments: Terms: Quarter and Smaller Fraction Designs

A quarter-fraction design, denoted as 2^(k−2), consists of a fourth of the runs
of the full factorial design.

Quarter-fraction designs require two defining relations.

The first defining relation returns the half-fraction, i.e. the 2^(k−1) design.

The second defining relation selects half of the runs of the 2^(k−1) design to
give the quarter-fraction.

Figure a: I = ABCD gives the 2^(k−1) design. Figure b: I = AD gives the 2^(k−2) design.



Design of Experiments: Terms: Quarter and Smaller Fraction Designs

[Design matrices: I = ABCD gives the 2^(4−1) half-fraction; I = AD gives the 2^(4−2) quarter-fraction]



Design of Experiments: Taguchi method
Many factors/inputs/variables must be taken into consideration when making a
product, especially a brand new one.

The Taguchi method is a structured approach for determining the ”best”


combination of inputs to produce a product or service
◦ Based on a Design of Experiments (DOE) methodology for determining
parameter levels

DOE is an important tool for designing processes and products


◦ A method for quantitatively identifying the right inputs and parameter levels
for making a high quality product or service

Taguchi approaches design from a robust design perspective


Design of Experiments: Taguchi method
Traditional Design of Experiments focused on how different design factors
affect the average result level

In Taguchi’s DOE (robust design), variation is more interesting to study than


the average

Robust design: An experimental method to achieve product and process


quality through designing in an insensitivity to noise based on statistical
principles.
To investigate how different parameters affect the mean and variance of a
process performance characteristic.
The Taguchi method is best used when there are an intermediate number of
variables (3 to 50), few interactions between variables, and when only a few
variables contribute significantly.
Design of Experiments: Terms: Taguchi's Orthogonal Arrays

Taguchi's orthogonal arrays are highly fractional orthogonal designs.


These designs can be used to estimate main effects using only a few
experimental runs.

Consider the L4 array shown in the next Figure. The L4 array is denoted
as L4(2^3).

L4 means the array requires 4 runs. 2^3 indicates that the design
estimates up to three main effects at 2 levels each. The L4 array can be
used to estimate three main effects using four runs provided that the
two factor and three factor interactions can be ignored.



Taguchi’s two-level designs – examples: L4 (2^3), L8 (2^7). Taguchi’s three-level design – example: L9 (3^4).



Design of Experiments: Taguchi Problem
A microprocessor company is having difficulty with its current yields.
Silicon processors are made on a large die, cut into pieces, and each one is
tested to match specifications.
The company has requested that you run experiments to increase
processor yield. The factors that affect processor yields are temperature,
pressure, doping amount, and deposition rate.

a) Question: Determine the Taguchi experimental design orthogonal array.


b) Question: Conducting three trials for each experiment, the data below
was collected. Compute the SN ratio for each experiment for the target
value case, create a response chart, and determine the parameters that
have the highest and lowest effect on the processor yield.
Design of Experiments: Taguchi Problem

The operating conditions for each parameter and level are listed below:

•A: Temperature •C: Doping Amount


•A1 = 100ºC •C1 = 4%
•A2 = 150ºC (current) •C2 = 6% (current)
•A3 = 200ºC •C3 = 8%
•B: Pressure •D: Deposition Rate
•B1 = 2 psi •D1 = 0.1 mg/s
•B2 = 5 psi (current) •D2 = 0.2 mg/s (current)
•B3 = 8 psi •D3 = 0.3 mg/s



Design of Experiments: Taguchi Problem
a) Solution: The L9 orthogonal array should be used.
The filled in orthogonal array should look like this:
This setup allows the
testing of all four
variables without having
to run 81 (= 3⁴) experiments.



To determine the effect each variable has on the output,
the signal-to-noise ratio, or the SN number, needs to be
calculated for each experiment conducted. For the target value case it is
SNi = 10 log10( ȳi² / si² ),
where ȳi is the mean value and si is the standard deviation.



Signal-to-noise ratio: SNi = 10 log10( ȳi² / si² )
(ȳi = mean of the experiment, si² = variance of the experiment)

ȳ1 = (87.3 + 82.3 + 70.7) / 3 = 80.1

The standard deviation is given by
s1 = √[ ((87.3 − 80.1)² + (82.3 − 80.1)² + (70.7 − 80.1)²) / (3 − 1) ] = 8.51



Exp. No.   Temp.   Pressure   Doping Amount   Deposition Rate   Trial 1   Trial 2   Trial 3   Mean   Std. deviation
1 100 2 4 0.1 87.3 82.3 70.7 80.1 8.5
2 100 5 6 0.2 74.8 70.7 63.2 69.6 5.9
3 100 8 8 0.3 56.5 54.9 45.7 52.4 5.8
4 150 2 6 0.3 79.8 78.2 62.3 73.4 9.7
5 150 5 8 0.1 77.3 76.5 54.9 69.6 12.7
6 150 8 4 0.2 89 87.3 83.2 86.5 3
7 200 2 8 0.2 64.8 62.3 55.7 60.9 4.7
8 200 5 4 0.3 99 93.2 87.3 93.2 5.9
9 200 8 6 0.1 75.7 74 63.2 71 6.8



b) Solution:
For the first treatment, SN1 = 10 log10( 80.1² / 8.5² ) = 19.5

Exp. No. A (temp) B (pres) C (dop) D (dep) T1 T2 T3 SNi


1 1 1 1 1 87.3 82.3 70.7 19.5
2 1 2 2 2 74.8 70.7 63.2 21.5
3 1 3 3 3 56.5 54.9 45.7 19.1
4 2 1 2 3 79.8 78.2 62.3 17.6
5 2 2 3 1 77.3 76.5 54.9 14.8
6 2 3 1 2 89 87.3 83.2 29.3
7 3 1 3 2 64.8 62.3 55.7 22.3
8 3 2 1 3 99 93.2 87.3 24.0
9 3 3 2 1 75.7 74 63.2 20.4
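A minimal sketch (not from the notes) of the SN-ratio calculation above, SN = 10·log10(mean² / s²), applied to the nine experiments.

import math, statistics

trials = [
    (87.3, 82.3, 70.7), (74.8, 70.7, 63.2), (56.5, 54.9, 45.7),
    (79.8, 78.2, 62.3), (77.3, 76.5, 54.9), (89.0, 87.3, 83.2),
    (64.8, 62.3, 55.7), (99.0, 93.2, 87.3), (75.7, 74.0, 63.2),
]
for i, t in enumerate(trials, 1):
    mean = statistics.mean(t)
    s = statistics.stdev(t)                       # sample standard deviation (n - 1)
    sn = 10 * math.log10(mean ** 2 / s ** 2)
    print(f"Exp {i}: mean = {mean:.1f}, s = {s:.1f}, SN = {sn:.1f}")
# Exp 1 gives mean 80.1, s 8.5, SN 19.5, matching the worked value above.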
Shown below is the response table, obtained by calculating an average SN value for each
factor level. A sample calculation is shown for Factor B (pressure):

Exp. No. A (temp) B (pres) C (dop) D (dep) SNi


1 1 1 1 1 19.5
2 1 2 2 2 21.5
3 1 3 3 3 19.1
4 2 1 2 3 17.6
5 2 2 3 1 14.8
6 2 3 1 2 29.3
7 3 1 3 2 22.3
8 3 2 1 3 24.0
9 3 3 2 1 20.4
SN B1  19.5  17.6  22.3  19.8 SNB2  21.5  14.8  24.0  20.1 SNB3  19.1  29.3  20.4  22.9
3
3 3

Level A (temp) B (pres) C (dop) D (dep) The effect of this factor is


1 20 19.8 24.3 18.2 then calculated by
2 20.6 20.1 19.8 24.4 determining the range:
3 22.2 22.9 18.7 20.2
  Max  Min  22.9 19.8  3.1
Δ 2.2 3.1 5.5 6.1
Rank 4 3 2 1
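A minimal sketch (not from the notes) that rebuilds the response table and the Δ = max − min ranking from the SN values and the L9 level assignments.

levels = [  # (A, B, C, D) level indices for each experiment, as in the L9 array
    (1, 1, 1, 1), (1, 2, 2, 2), (1, 3, 3, 3),
    (2, 1, 2, 3), (2, 2, 3, 1), (2, 3, 1, 2),
    (3, 1, 3, 2), (3, 2, 1, 3), (3, 3, 2, 1),
]
sn = [19.5, 21.5, 19.1, 17.6, 14.8, 29.3, 22.3, 24.0, 20.4]
factors = ["A (temp)", "B (pres)", "C (dop)", "D (dep)"]

deltas = {}
for f, name in enumerate(factors):
    means = []
    for lvl in (1, 2, 3):
        vals = [s for row, s in zip(levels, sn) if row[f] == lvl]
        means.append(sum(vals) / len(vals))       # average SN at this factor level
    deltas[name] = max(means) - min(means)
    print(name, [round(m, 1) for m in means])
ranking = sorted(deltas, key=deltas.get, reverse=True)
print(ranking)   # D (deposition rate) has the largest effect, A (temp) the smallest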

Deposition rate has the largest effect on the processor yield


and the temperature has the smallest effect on the processor yield.



Hands on Minitab software

1. Selecting the proper orthogonal array
2. Select the appropriate factors and levels
3. Select the appropriate design
4. Enter factors’ names and levels values
5. Check the proper orthogonal array
6. Enter data to respective columns
7. Analyze the Taguchi problem set-up
8. Determine response columns
9. Result display

Thank You
