Decision Modeling and Optimization

Department of Distance and

Continuing Education
University of Delhi

Master of Business Administration (MBA)


Semester - II
Course Credit - 4.5
Core Course - MBAFT - 6202
Editorial Board
Dr. Sameer Anand, Dr. Abhishek Tandon
Dr. Gurjeet Kaur

Content Writers
Dr. Reena Jain, Dr. Deepa Tyagi,
Dr. Shubham Aggarwal, Dr. Sandeep Mishra,
Dr. Upasana Dhanda

Academic Coordinator
Mr. Deekshant Awasthi

© Department of Distance and Continuing Education


ISBN: 978-81-19169-15-3
1st edition: 2023
e-mail: [email protected]
[email protected]

Published by:
Department of Distance and Continuing Education under
the aegis of Campus of Open Learning/School of Open Learning,
University of Delhi, Delhi-110 007

Printed by:
School of Open Learning, University of Delhi
DISCLAIMER

This book has been written for academic purposes only. Though every effort has been made to avoid errors, some unintentional errors may have occurred. The authors, the editors, the publisher and the distributor are not responsible for any action taken on the basis of this study module or its consequences.

© Department of Distance & Continuing Education, Campus of Open Learning,


School of Open Learning, University of Delhi
INDEX

Lesson – 1: Model Building for Optimization & Distribution and Network Models.1
1.1 Learning Objectives
1.2 Introduction
1.3 Linear Programming model
1.4 Distribution and networking models
1.5 Summary

Lesson-2: Multicriteria Decision Models……………………………………...………35


2.1. Learning Objectives
2.2. Introduction Of Goal Programming (GP)
2.3. Model Formulation \ Modeling
2.4. Alternative Forms of Goal Constraints
2.5. Analysis Of Goal Programming (GP) Graphically
2.6. Simplex Method (Modified) Applied to Goal Programming (GP) Problems
2.7. Applications Of Goal Programming (GP) In Management Science
2.8. Summary

Lesson-3: Waiting Line Models………………….…………………………………….77


3.1 Learning Objectives
3.2 Introduction
3.3 Basic Elements of Queuing Models
3.4 Role of Poisson and Exponential Distributions
3.5 Symbols and notations used
3.6 Distribution of Arrivals
3.7 Distribution of Interarrival Time
3.8 Markovian process of Interarrival Time
3.9 States of Queuing System
3.10 Some Important Definitions
3.11 Kendall-Lee notations
3.12 Poisson Queues
3.13 Applications of Queuing Theory
3.14 Limitations of Queuing Theory
3.15 Summary

Lesson-4: Simulation……………………………………………………………………119
4.1. Learning Objectives
4.2. Introduction of Simulation
4.3. Key Advantages of Simulation for Business
4.4. General Elementary Steps in the Simulation Technique
4.5. Types Of Simulation Models to Control in Management Science
4.6 Monte Carlo Simulation
4.7. Tools For the Verification and Validation of Simulation Model
4.8. Advantages And Limitations of Simulation
4.9. Summary

Lesson-5: Decision Making Under Uncertainty…………………………..…………144


5.1 Learning Objectives
5.2 Introduction
5.3 Decision Making under uncertainty
5.4 Risk Profile
5.5 Decision Tree
5.6 Summary

Lesson-6: Project Scheduling…………………………………………………….…...165


6.1 Learning Objectives
6.2 Introduction: Project Scheduling
6.3 Construction of AOA network diagram
6.4 Scheduling with known activity times
6.5 Scheduling with uncertain activity times
6.6 Time-cost trade-offs
6.7 Summary

Lesson-7: Markov Processes…………………….……………………………………190


7.1 Learning Objectives
7.2 Introduction
7.3 Stochastic Process
7.4 State Space
7.5 Classification of Stochastic Process
7.6 Markov Chain
7.7 Transition Probability
7.8 Transition Probability Matrix
7.9 Initial Distribution
7.10 Concept for Classification of the States
7.11 Classification of the States
7.12 Some Important Results
7.13 Basic Limit Theorem for aperiodic Markov Chain
7.14 Stationary Distribution
7.15 Application areas of Markov chain
7.16 Summary

Lesson - 8: Theory of Games……………………….…………………………………217


8.1 Learning Objectives
8.2 Introduction
8.3 Game Models
8.4 Two-person zero sum game
8.5 Solution of m × n games – Formulation and Solution as a LPP
8.6 Summary
MBAFT-6202 Decision Modeling and Optimization

LESSON 1
MODEL BUILDING FOR OPTIMIZATION & DISTRIBUTION AND NETWORK
MODELS
Dr. Reena Jain
Assistant Professor
Kalindi College
University of Delhi
[email protected]

STRUCTURE

1.1 Learning Objectives


1.2 Introduction
1.3 Linear Programming model
1.3.1 Production Model
1.3.2 Investment Model
1.3.3 Cost Minimization Model
1.3.4 Production Model
1.4 Distribution and networking models
1.4.1 Transportation Problem
1.4.2 Assignment problem
1.4.3Shortest route Problem
1.4.4 Maximal Flow problem
1.5 Summary
1.6 Glossary
1.7 Answers to in-text Questions
1.8 Self-Assessment Questions
1.9 References
1.10 Suggested Readings

1.1 LEARNING OBJECTIVES

After studying this chapter, students will be able to formulate real-life problems of logistics, networking, production, diet requirements, etc. as mathematical models. It will help them understand the practical applications of network models, and the theory studied will be helpful in determining the optimal solution of distribution network problems: finding the shortest route between any two places, minimizing the transportation cost between two places, maximizing the flow between any two points, and so on. This lesson will equip them to take better managerial decisions in the many realistic situations discussed above.

1.2 INTRODUCTION

Model building for optimization is done using the techniques of linear programming and networks. Linear Programming is a very important quantitative technique for the best possible distribution of scarce resources such as labour, materials, machinery, money and energy. It is used in almost every aspect of life, whether marketing, domestic planning, production or anything else. You are already quite familiar with the term 'linear'. It is used to describe how two or more variables in a model relate to one another proportionally: a specified change in one variable is always accompanied by a given change in another. The term 'programming' refers to devising a technique for doing work in an organized manner. It is planning that involves the economic allocation of scarce resources among various options to attain the optimal goal, i.e., to maximize or minimize the objective function. Hence, Linear Programming is a quantitative technique for the optimum allocation of limited or rare resources such as labour, materials, equipment, money and energy.
Linear programming problems in general are concerned with the use or allocation of limited resources (labour, materials, machines and capital) in the best possible manner so that costs are minimized or profits are maximized. The best decision is found by solving a mathematical problem. The technique of networking is used for distribution models. It includes the transportation problem, the assignment (perfect matching) problem, the maximal flow problem, etc., using the idea of a network.
The linear programming models are widely used to solve a number of military, economic, industrial and social problems. There are various reasons for their wide use, such as:
1. A large variety of problems in diverse fields can be represented as linear programming models.
2. Efficient and simple techniques are available for solving linear programming problems.


3. Data variation can be handled through linear programming models with ease.
Networking helps in determining the shortest route between any two given points, calculating the maximum flow, and determining the best assignment schedule, i.e. a perfect matching.

1.3 LINEAR PROGRAMMING MODEL

A Linear Programming model essentially consists of three components.


i) The linear objective function
ii) The set of linear constraints
iii) Non-negativity of decision variables
The activities are represented by X1, X2, X3……..Xn.
These are known as Decision variables.
The objective function of a LPP (Linear Programming Problem) is a mathematical
representation of the objective in terms of decision variables. It is usually a Profit or cost
function. Such as
Optimize Z = C1X1 + C2X2 + … + CnXn
where Z is the value of the objective function, which may be maximized or minimized depending upon the situation; X1, X2, …, Xn are the decision variables; and C1, C2, …, Cn are the components of the cost vector that give the contribution of the respective decision variables.
The set of linear constraints are the set of linear equations or inequations. These
constraints are mathematical expressions for the limitations under which the
mathematical model is framed. For example, budget constraint, space constraint, labor
constraint, time constraint etc.
Assumptions of Linear Programming Model
All LP models assume that the constraints and the objective function are linear.
All parameters, such as the availability of resources, the unit worth of each decision variable and the amount of resources a unit of each decision variable uses, are known and constant.
The optimal values of the decision variables and resources are assumed to be real and non-negative numbers. If the variables are restricted to whole numbers (integers), or to a mix of integers and fractional values, the integer programming method can be used.
The value of the objective function, and the total amount of each resource used, must equal the sum of the contributions (profit or cost) and of the resource usage of the individual decision variables, respectively. In
other words, the objective function is the direct sum of the contributions made by each
variable.

General Mathematical Model of an LPP


Optimize (Maximize or Minimize)
Z=C1X1 + C2X2 + C3X3 +……+ CnXn

Subject to constraints,
a11X1 + a12X2 + … + a1nXn (≤, =, ≥) b1

a21X1 + a22X2 + … + a2nXn (≤, =, ≥) b2
…
am1X1 + am2X2 + … + amnXn (≤, =, ≥) bm

and X1, X2, …, Xn ≥ 0

Key Points for formulating Linear Programming Model


i) Identify and define the decision variable of the problem
ii) Formulate the objective function in terms of decision variables.
iii) Formulate the constraints in terms of decision variables subject to which the
objective function should be optimized (Maximization or Minimization)
iv) Add the non-negativity condition corresponding to each decision
variable.
Some examples are illustrated to explain the concept.

1.3.1 Production Model


Example 1
A manufacturer produces two types of wooden toys, A and B. Each toy of the type A
requires 4 hours of grinding and 2 hours of polishing, compared to 2 hours of
grinding and 5 hours of polishing for each toy of type B. The producer has two
grinders and three polishers. Each grinder puts in 40 hours a week of work, while
each polisher puts in 60 hours. Profit on toy A is Rs. 3.00, while profit on toy B is
Rs. 4.00. Everything made during a week is sold in the market. To produce the
maximum profit in a week, how should the manufacturer divide up his production
capacity between the two categories of toys?


Solution: Let’s go step by step


i) Let X1 be the number of units of toy A
X2 be the number of units of toy B.
ii) As profit on each type of toy is given, therefore, the objective function is to
maximize the profit, so
Max Z = 3X1 + 4X2

iii) Each toy must undergo two processes, grinding and polishing, so the corresponding constraints are
4X1 + 2X2 ≤ 80 (grinding)
Each grinder works 40 hours a week, so the total grinding time available with two grinders is 80 hours (40 × 2). Similarly, with three polishers working 60 hours each, the polishing constraint is
2X1 + 5X2 ≤ 180 (polishing)
iv) By non-negativity condition
X1, X2 ≥ 0
Hence Final LPP is
Max Z = 3X1 + 4X2
Subject to constraints,
4X1 + 2X2 ≤ 80
2X1 + 5X2 ≤ 180
X1, X2 ≥ 0
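For readers who want to check such formulations numerically, here is a minimal sketch that solves the LPP of Example 1 with SciPy's linprog (this assumes SciPy is available; it is not required by the lesson itself). Because linprog minimizes by default, the profit coefficients are negated.

```python
from scipy.optimize import linprog

c = [-3, -4]              # maximize 3X1 + 4X2  ->  minimize -3X1 - 4X2
A_ub = [[4, 2],           # grinding hours:  4X1 + 2X2 <= 80
        [2, 5]]           # polishing hours: 2X1 + 5X2 <= 180
b_ub = [80, 180]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, -res.fun)    # expected: X1 = 2.5, X2 = 35, profit = 147.5
```

The LP optimum is fractional (2.5 toys of type A); if whole toys are required, the integer programming method mentioned in the assumptions would be used instead.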
1.3.2 Investment Model

Example 2.
Mr. Joshi received an amount of Rs. 30,000 on his retirement, which he wishes to invest in some source from which he can get a fixed income. From a friend he came to know about two types of security bonds, which yield a fixed income per annum: Bond A yields 7% per annum whereas Bond B yields 10%. Due to the risk factors involved and the feedback received from others, he decides to invest at most Rs. 14,000 in Bond B and at least Rs. 4,000 in Bond A. He also wishes
that the amount invested in Bond A should not be less than the amount invested in Bond
B. Formulate the LPP model for helping Mr. Joshi to generate maximum annual return
from his retirement fund.
Solution
Let X1 and X2 be the amount invested in Bonds A and B respectively. Income
generated from two Bonds are given. Hence the objective function is to maximize the
income:
Max Z = 0.07X1 + 0.1X2

Subject to:
X1 + X2 ≤ 30,000

X1 ≥ 4,000

X2 ≤ 14,000

X1 ≥ X2

X1, X2 ≥ 0

Now, let’s explore some minimization problems also

1.3.3 Cost Minimization Model

Example 3
A farmer uses two types of pesticides, liquid and dry, for his fields. The liquid pesticide contains 6 units of chemical A, 3 units of chemical B and 1 unit of chemical C per jar. The respective values for the dry pesticide are 1, 2 and 4 units per carton. For healthy crops the minimum requirements of chemicals A, B and C are 10, 12 and 12 units respectively. The liquid pesticide is available in the market for Rs. 40 per jar; the dry pesticide costs Rs. 25 per carton. How many units of each pesticide should be bought in order to fulfil the requirements and keep costs down?
Solution
Let X1 and X2 be the number of units purchased of the liquid and dry pesticides.

The objective function is to minimize the cost


Min. Z = 40X1 + 25X2

Subject to:
6X1 + X2 ≥ 10

3X1 + 2X2 ≥ 12

X1 + 4X2 ≥ 12

X1, X2 ≥ 0
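The same solver sketch works for minimization problems such as Example 3; the only mechanical difference is that the "≥" constraints are multiplied by -1 to fit linprog's "≤" form (again assuming SciPy is available).

```python
from scipy.optimize import linprog

c = [40, 25]                       # minimize 40X1 + 25X2
A_ub = [[-6, -1],                  # 6X1 +  X2 >= 10
        [-3, -2],                  # 3X1 + 2X2 >= 12
        [-1, -4]]                  #  X1 + 4X2 >= 12
b_ub = [-10, -12, -12]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2)
print(res.x, res.fun)              # roughly X1 = 0.89, X2 = 4.67, cost = 152.2
```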

1.3.4 Production Model

Example 4
A sewing machine manufacturer purchases semi-finished cast parts and processes them to produce three different models: basic, standard and premium. The cast parts undergo three different processes, namely turning, milling and drilling. The selling price of the basic model is Rs. 500, of the standard model Rs. 600 and of the premium model Rs. 700. The demand for all the models is so large that all produced machines get sold. The costs of the cast parts are Rs. 200, Rs. 220 and Rs. 250 for the basic, standard and premium models respectively.
Cost per hour to run each of the three processes are Rs.400 for Turning, Rs.375 for
milling and Rs.600 for drilling. The capacities of each process for each model are shown
in the following table.

Process Capacities Per Hour


Basic Standard Premium
Turning 25 40 25
Milling 25 15 15
Drilling 40 30 10
The manufacturer wants to know how many parts of each type it should produce per
hour in order to maximize profit for an hour’s run. Formulate this problem as an LP
model to maximize total profit to the manufacturer.

Solution:
Let X1, X2 and X3 be the number of basic, standard and premium models
produced per hour respectively.

With the information given, the hourly profit for basic, standard,
and premium model would be as follows
Profit per unit of Basic model = (500–200) – (400/25 +375/25 +
600/40) = 254
Profit per unit of standard model = (600-220) – (400/40 + 375/15 +
600/30) = 325
Profit per unit of premium model = (700 – 250) – (400/25 + 375/15 +
600/10) = 349
Hence Objective Function is
Maximize Z = 254 X1 + 325X2 + 349X3

Subject to:
X1/25 + X2/40 + X3/25 ≤ 1

X1/25 + X2/15 + X3/15 ≤ 1

X1/40 + X2/30 + X3/10 ≤ 1

X1, X2, X3 ≥ 0
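The per-unit profit coefficients 254, 325 and 349 used above can be double-checked with a few lines of plain Python (no external libraries); the dictionaries below simply restate the data of the example.

```python
selling  = {"basic": 500, "standard": 600, "premium": 700}   # Rs per unit
casting  = {"basic": 200, "standard": 220, "premium": 250}   # Rs per unit
rate     = {"turning": 400, "milling": 375, "drilling": 600} # Rs per hour
capacity = {                                                 # units per hour
    "basic":    {"turning": 25, "milling": 25, "drilling": 40},
    "standard": {"turning": 40, "milling": 15, "drilling": 30},
    "premium":  {"turning": 25, "milling": 15, "drilling": 10},
}

for model in selling:
    process_cost = sum(rate[p] / capacity[model][p] for p in rate)
    print(model, selling[model] - casting[model] - process_cost)
# prints 254.0, 325.0 and 349.0, matching the coefficients above
```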

1.3.5 Man Power Scheduling

Example 5
Apollo Hospital needs different numbers of nursing staff at different times of the day. Each nurse works an 8-hour duty and reports at the beginning of a period. The hospital management wants to determine how many nurses should be called in to meet the daily needs. The following table summarizes the number of nurses needed round the clock.


Period    Clock time (24-hour day)    Minimum number of nurses required

1 8 a.m. – 12 noon 3
2 12 noon. – 4 p.m. 6
3 4 p.m. – 8 p.m. 14
4 8 p.m. – 12 mid night 6
5 12 mid night – 4 a.m. 10
6 4 a.m. – 8 a.m. 8

In order to have enough nurses available during each period, the hospital seeks to
determine the bare minimum number of nurses that should be employed. Formulate the
situation as a linear programming problem.
Solution
Let X1, X2, X3, X4, X5 and X6 be the number of nurses joining duty at the
beginning of periods 1, 2, 3, 4, 5 and 6 respectively.
Objective function is
Minimize Z = X1 + X2 + X3 + X4 + X5 + X6

Subject to
X1 + X2 ≥ 6

X2 + X3 ≥ 14

X3 + X4 ≥ 6

X4 + X5 ≥ 10

X5 + X6 ≥ 8

X6 + X1 ≥ 3

X1, X2, X3, X4, X5, X6 ≥ 0
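A sketch of the same staffing model in code is given below. Since nurses come in whole numbers, the example uses the integrality argument of linprog, which is an assumption about the installed SciPy version (1.9 or later); dropping that argument gives the plain LP relaxation.

```python
from scipy.optimize import linprog

c = [1] * 6                        # minimize X1 + X2 + ... + X6
# coverage constraints (>=), written in <= form after negation
A_ub = [[-1, -1,  0,  0,  0,  0],  # X1 + X2 >= 6
        [ 0, -1, -1,  0,  0,  0],  # X2 + X3 >= 14
        [ 0,  0, -1, -1,  0,  0],  # X3 + X4 >= 6
        [ 0,  0,  0, -1, -1,  0],  # X4 + X5 >= 10
        [ 0,  0,  0,  0, -1, -1],  # X5 + X6 >= 8
        [-1,  0,  0,  0,  0, -1]]  # X6 + X1 >= 3
b_ub = [-6, -14, -6, -10, -8, -3]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 6,
              integrality=[1] * 6)
print(res.x, res.fun)              # an optimal integer staffing plan and its size
```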


1.3.6 Trim loss Problem


Example 6: The Modern Paper Mart produces paper rolls with a standard width of 20 m each. The company receives orders for rolls of different widths, which are produced by slitting the standard 20 m rolls. A typical day's order is summarized as follows:

Order    Desired Width (m)    Desired Number of Rolls
1        5                    150
2        7                    200
3        9                    300
Formulate the above problem as a linear programming problem to meet the order with
minimum trim loss.
Solution: 20m width roll can be cut according to following combinations to meet the required
order:
S.No. 5m 7m 9m Trim Loss (m)

1 4 0 0 0
2 2 1 0 3
3 2 0 1 1
4 1 2 0 1
5 0 1 1 4
6 0 0 2 2
Let Xi , i=1,2…,6 be the number of rolls cut according to ith combination.
Objective function is
Minimize Z = 0X1 + 3X2 + 1X3 + 1X4 + 4X5 + 2X6

Subject to
4X1 + 2X2 + 2X3 + 1X4 >= 150

X2 + 2X4 + X5 >= 200


X3 + X5 + 2X6 >= 300

X1, X2, X3, X4, X5, X6 >= 0
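The six cutting combinations in the table above can be generated automatically. The short sketch below (plain Python) enumerates every "maximal" pattern, i.e. every way of slitting a 20 m roll such that the leftover is too small to fit another ordered width.

```python
STANDARD = 20
widths = [5, 7, 9]

patterns = []
for a in range(STANDARD // 5 + 1):
    for b in range(STANDARD // 7 + 1):
        for c in range(STANDARD // 9 + 1):
            trim = STANDARD - (5 * a + 7 * b + 9 * c)
            # keep only patterns whose trim is non-negative and too small
            # to accommodate any further ordered width
            if 0 <= trim < min(widths):
                patterns.append(((a, b, c), trim))

for (a, b, c), trim in patterns:
    print(f"{a} x 5m, {b} x 7m, {c} x 9m, trim loss {trim} m")
# reproduces the six combinations listed in the table above
```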

In-Text Questions


Section 1.3
Q1. What are the essential components of a linear programming model?

Q2. How can a minimization problem be converted into a maximization problem?

Q3. Why are linear programming problems so popular?

Q4. Can the objective function and/or constraints of an LPP be non-linear?

1.4 DISTRIBUTION AND NETWORKING MODELS

Linear programming can be applied to distribution and networking models as well.

The transportation problem is one such model. Under this category we consider the physical transport of goods from multiple sources to multiple destinations. Each source has a limited availability and each destination has a particular demand. The objective is to find a schedule of how many items should be transported from each source to each destination so as to satisfy the demand of every destination from the available units at the minimum total cost of transportation. The situation can be pictured as a network connecting the sources to the destinations.

This problem can be converted into a Linear Programming Problem as follows. Let Xij ≥ 0 be the number of units to be sent from Si to Dj per unit time. The objective is to minimize the transportation cost under the availability and demand constraints, so the objective function is
Min Z = Σ(i=1..m) Σ(j=1..n) Cij Xij


where Cij is the cost of transporting one unit from the ith source to the jth destination, subject to the constraints that the total quantity of goods transported from source i to all destinations must not exceed the availability of source i, that is

Σ(j=1..n) xij ≤ si   for i = 1, …, m,

and the total quantity of goods transported to the jth destination must be at least its demand dj, that is

Σ(i=1..m) xij ≥ dj   for j = 1, …, n.
The necessary and sufficient condition for the existence of a feasible solution is that total supply = total demand. We thus define the transportation (or Hitchcock) problem as the following LP, where si ≥ 0, dj ≥ 0 and cij ≥ 0 are given, with total supply = total demand:

Min Z = Σ(i,j) cij xij

subject to

Σ(j) xij = si   for each i = 1, …, m
Σ(i) xij = dj   for each j = 1, …, n
xij ≥ 0         for each i = 1, …, m and j = 1, …, n.

This LP can be solved using general linear or integer programming methods. But, as the transportation problem forms a special class of problems, special methods are also available for solving it, which are described below.
1.4.1 Transportation Problem
Solution procedure for solving the Transportation Problem:
• Define the objective function to be minimized.
• Set up the transportation table with m rows representing the sources and n columns representing the destinations.
• In the (i, j) position of this table write the unit transportation cost from the ith source to the jth destination.
• Write the availability of each source and the demand of each destination (as shown in the following layout).
• Check whether total supply = total demand. If yes, the problem is balanced; otherwise balance it by adding a dummy row or column carrying the difference.
• Find an initial basic feasible solution.
• Test whether it is optimal. If optimal, stop; otherwise improve the solution and check again for optimality until the optimal solution is reached.

Source\Destination D1 D2 D3 Availability
S1 C12 A1
S2 A2
S3 C31 A3
S4 A4
Demand d1 d2 d3
In the above table C12 denotes the cost of transporting one unit from source 1 to destination 2, and similarly C31 denotes the cost of transporting one unit from source 3 to destination 1.
Conversion Of Unbalanced problem to Balanced Problem
If the total availability is not equal to the total demand, the problem is known as an unbalanced problem. The very first condition for the problem to be solvable by the transportation method is that it must be balanced. So, let's take an example to see how an unbalanced problem can be converted into a balanced one.

Source\Destination D1 D2 D3 Availability
S1 C12 8
S2 10
S3 C31 10
S4 12
Demand 10 10 5

Total Demand = 25 Total Supply =40


As the total supply is greater than the total demand, the given problem is unbalanced. To balance it, add a dummy destination with demand equal to (40 - 25), i.e. 15. Every cost for this dummy destination is taken as zero, since it does not represent a real demand.

Methods For Obtaining Basic Feasible Solution (BFS)


An initial BFS can be obtained by any of the following three methods:
1. North-West Corner Rule
2. Least Cost Matrix Method
3. Vogel's Approximation Method (VAM)

1. North–west corner rule


• Write the given transportation problem in tabular form
• Check whether the problem is balanced or not. If not then make it balanced
and go to next step.
• Go to the north-west corner of the table. Suppose it is the (i, j)th cell.
• Allocate min(Ai, dj) to this cell. If min(Ai, dj) = Ai, the availability of the ith origin is exhausted, the demand at the jth destination becomes dj - Ai, and the ith row is deleted from the table. But if min(Ai, dj) = dj, the demand at the jth destination is fulfilled, the availability at the ith origin becomes Ai - dj, and the jth column is deleted from the table.
• Repeat the above two steps until all availabilities are exhausted and all demands are fulfilled.


Consider the following problem, where the entries in the cells are unit transportation costs:

From \ To     D1    D2    D3    D4    Capacity
S1            21    16    25    13      11
S2            17    18    14    23      13
S3            32    27    18    41      19
Demand         6    10    12    15      43

Starting at the north-west corner (S1, D1), allocate min(11, 6) = 6; D1 is satisfied, so move to (S1, D2) and allocate min(5, 10) = 5, which exhausts S1. Continue with min(13, 5) = 5 in (S2, D2), min(8, 12) = 8 in (S2, D3), min(19, 4) = 4 in (S3, D3) and finally 15 in (S3, D4).

Hence, the initial BFS is

X11 = 6, X12 = 5, X22 = 5, X23 = 8, X33 = 4, X34 = 15
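As a cross-check, the rule can be coded in a few lines of plain Python; applied to the data above it reproduces the same BFS.

```python
def north_west_corner(supply, demand):
    supply, demand = supply[:], demand[:]      # work on copies
    i = j = 0
    allocation = {}
    while i < len(supply) and j < len(demand):
        q = min(supply[i], demand[j])
        allocation[(i, j)] = q
        supply[i] -= q
        demand[j] -= q
        if supply[i] == 0:
            i += 1                             # row exhausted, move down
        else:
            j += 1                             # column satisfied, move right
    return allocation

print(north_west_corner([11, 13, 19], [6, 10, 12, 15]))
# {(0, 0): 6, (0, 1): 5, (1, 1): 5, (1, 2): 8, (2, 2): 4, (2, 3): 15}
```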

2. Least Cost Matrix Method (LCMM)


In this method, allocations are made on the basis of economic desirability. The steps
involved in determining an initial solution using least-cost method are as follows:
• Write the given transportation problem in tabular form.
• Choose the cell with the minimum cost. If it is not unique, any one of them can be chosen. Suppose it is the (i, j)th cell.
• Allocate min(Ai, dj) to this cell. If min(Ai, dj) = Ai, the availability of the ith origin is exhausted, the demand at the jth destination becomes dj - Ai, and the ith row is deleted from the table. But if min(Ai, dj) = dj, the demand at the jth destination is fulfilled, the availability at the ith origin becomes Ai - dj, and the jth column is deleted from the table.
• Repeat the above two steps until all availabilities are exhausted and all demands are fulfilled.


For the same data, allocations are made in increasing order of cost. The cheapest cell is (S1, D4) with cost 13: allocate min(11, 15) = 11, which exhausts S1. The next cheapest remaining cell is (S2, D3) with cost 14: allocate 12, satisfying D3. Then (S2, D1) with cost 17 receives 1, exhausting S2; (S3, D2) with cost 27 receives 10; (S3, D1) with cost 32 receives 5; and finally (S3, D4) receives the remaining 4.

Hence, the initial BFS by the least-cost method is

X14 = 11, X21 = 1, X23 = 12, X31 = 5, X32 = 10, X34 = 4

Vogel's Approximation Method (VAM)


VAM is based on the idea of opportunity (or penalty) costs. For each source (row) and each destination (column), the penalty is the difference between the two lowest costs in that row or column. It indicates that if the cheapest cell is not used, every unit of unsatisfied supply or demand will be met at an additional cost at least equal to this penalty. This method is preferred over the two methods above because it usually requires fewer iterations to reach the optimal solution; in practice VAM gives an initial basic feasible solution that is very close to the optimal one. The following steps are used to determine an initial solution with VAM.

• Create a matrix table for the given transportation problem.
• For each row and each column, calculate the penalty cost, i.e. the difference between the minimum cost and the next lowest cost in that row or column.
• Select the row or column with the highest penalty cost. Within that row/column, choose the cell with the lowest cost and begin allocating to it. Suppose it is the (i, j)th cell.
• Allocate min(Ai, dj) to this cell. If min(Ai, dj) = Ai, the availability of the ith origin is exhausted, the demand at the jth destination becomes dj - Ai, and the ith row is deleted from the table. But if min(Ai, dj) = dj, the demand at the jth destination is fulfilled, the availability at the ith origin becomes Ai - dj, and the jth column is deleted from the table.


• Repeat the above two steps until all availabilities are exhausted and all demands are fulfilled.
Note:
• If the method to be used is not specified, it is advisable to obtain the initial basic feasible solution by VAM, simply because it gives a solution that is usually very close to the optimal one.
• If at any point before the end a row's supply and a column's demand are satisfied simultaneously, both are crossed out, and the next variable to enter the basic solution will necessarily be at the zero level. Such a situation is known as degeneracy.

Applying VAM to the same data: in the first iteration the column penalties for D1-D4 are 4, 2, 4 and 10, and the row penalties for S1-S3 are 3, 3 and 9. The largest penalty (10) belongs to column D4, whose cheapest cell is (S1, D4) with cost 13, so allocate min(11, 15) = 11 there. Recomputing the penalties and repeating the procedure gives the remaining allocations X24 = 4, X21 = 6, X22 = 3, X32 = 7 and X33 = 12.

Hence, the initial BFS obtained by VAM is

X14 = 11, X21 = 6, X22 = 3, X24 = 4, X32 = 7, X33 = 12, with a total cost of 796.

Optimal Solution
Once an initial basic feasible solution has been found, we move on to the techniques for finding the optimal solution. The following two techniques are used to improve the BFS:


• Stepping Stone Method
• Modified Distribution Method (MODI)
Both methods are outlined below.

Stepping Stone Method

• Determine the initial basic feasible solution by any of the three methods defined above.
• Starting from any non-basic cell, trace a closed loop using horizontal and vertical segments such that every other corner of the loop lies on a basic cell and the loop ends at the non-basic cell from which it started. Mark the corners of the loop alternately with (+) and (-) signs, starting with (+) at the non-basic cell.
• Compute the net cost change for this loop by adding and subtracting the unit costs at the corners according to the marked signs. This is the net change in transportation cost between the current basic feasible solution and the new one. Repeat the procedure to calculate the net cost change corresponding to each non-basic cell.
• If the net cost change corresponding to every non-basic cell is positive, the current basic feasible solution is optimal. Otherwise, select the non-basic cell with the most negative net cost change.
• Among the (-) marked cells of its loop, select the minimum allocation. Add this value at the (+) marked corners and subtract it at the (-) marked corners. This gives a new basic feasible solution. Again calculate the net cost change for each non-basic cell and check for optimality.
• Repeat the above steps till the condition of optimality is satisfied.

Modified Distribution Method (MODI)


• Determine the initial basic feasible solution by any of the three methods
defined above.
• If the number of basic cells (cells that have an allocation) is m + n - 1, go to the next step; otherwise give zero allocations to some non-basic cells so that the number of basic cells becomes m + n - 1.


• Determine the set of numbers Ui (for rows) and Vj (for columns) such that Ui + Vj - Cij = 0 for every basic cell.
• Calculate the value of Ui + Vj - Cij (the opportunity cost) for each non-basic cell. If the opportunity cost of every non-basic cell is negative or zero, the current solution is optimal; otherwise the non-basic cell with the highest opportunity cost enters the basis and one of the basic cells becomes non-basic.
• Repeat the above two steps till the condition of optimality is satisfied.
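Because the transportation problem is just an LP, the optimal solution found by MODI or the stepping stone method can be verified with a general LP solver. The sketch below (assuming SciPy and NumPy are available) builds the supply and demand equality constraints for the example data used above and solves them with linprog.

```python
import numpy as np
from scipy.optimize import linprog

cost = np.array([[21, 16, 25, 13],
                 [17, 18, 14, 23],
                 [32, 27, 18, 41]])
supply = [11, 13, 19]
demand = [6, 10, 12, 15]
m, n = cost.shape

A_eq, b_eq = [], []
for i in range(m):                     # each source ships exactly its supply
    row = np.zeros(m * n); row[i * n:(i + 1) * n] = 1
    A_eq.append(row); b_eq.append(supply[i])
for j in range(n):                     # each destination receives its demand
    col = np.zeros(m * n); col[j::n] = 1
    A_eq.append(col); b_eq.append(demand[j])

res = linprog(cost.flatten(), A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * (m * n))
print(res.x.reshape(m, n), res.fun)    # minimum total cost: 796
```

For this particular data set the VAM starting solution (cost 796) is already optimal, which illustrates why VAM is the recommended starting method.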
1.4.2 Assignment problem
It is a special case of the transportation model in which workers and jobs represent the sources and destinations respectively, and the supply at each source and the demand at each destination are exactly equal to 1. The cost Cij represents the wage paid to worker i if the jth job is assigned to him/her. Such a problem can be formulated as a linear programming problem just like the transportation problem explained in the previous section. These problems can be solved using the Hungarian method, whose basic assumptions are explained in the next section.

Basic Assumptions:
• Only one job can be assigned to each worker and only one worker can be
assigned to each job.
• The number of jobs should be equal to the number of workers, i.e. the problem should be balanced.
• If the problem is unbalanced, it must first be balanced by adding dummy jobs or dummy workers with zero associated costs.
• The problem should be a minimization problem.
• If the problem is a maximization problem, convert it into a minimization problem first.

Hungarian Method
• Check whether the problem is balanced or not, if not convert into balanced
problem.
• Check whether it is a minimization problem or not, if not convert it into a
minimization problem.
• Select the row minima from each row and subtract it from its corresponding row,
the technique is termed as row reduction method.


• Select the column minima from each column and subtract it from its
corresponding column, the technique is termed as column reduction method.
• Start the assignment by selecting zeroes from the rows/columns in such a way that there is a single assignment in each row and column.
• If every row and column gets an assignment, the current solution is optimal. Calculate the cost of the assignment by adding the corresponding values in the original matrix.
• If any row/column is left without an assignment, follow the improvement procedure to improve the table and then repeat the assignment step.

Conversion of Unbalanced Problem to Balanced Problem

Workers →    W1    W2    W3    W4
Jobs ↓
J1 6 4 8 6
J2 8 5 2 4
J3 9 4 7 3
In the above problem we have 3 jobs but 4 workers, so to make it balanced we add a dummy job with all associated costs equal to zero.

Workers →    W1    W2    W3    W4
Jobs ↓
J1 6 4 8 6
J2 8 5 2 4
J3 9 4 7 3
J4 (Dummy) 0 0 0 0

Conversion of Maximization to Minimization


Suppose the above matrix represents the profits earned by the firm when the ith job is done by the jth worker, and the firm wants to allocate jobs to workers in such a way that the total profit is maximized. The problem can be converted into a minimization problem by the following algorithm.


• Select the maximum entry of the matrix, i.e. 9.
• Subtract every entry of the matrix from this 9.
• Use the reduced matrix for solving the (minimization) problem.

Reduced matrix is:

Workers →    W1    W2    W3    W4
Jobs ↓
J1 3 5 1 3
J2 1 4 7 5
J3 0 5 2 6
J4 (Dummy) 9 9 9 9
After the assignment is made, the values at the assigned positions are added from the original matrix to get the maximum profit.

Example:
ABC Transco has four trucks, namely A, B, C and D, and four sites. The numbers in the following table show the distance in km associated with each truck and site pair. Find the assignment schedule that minimizes the total distance (in km) travelled.

Trucks A B C D
Sites

1 90 75 75 80
2 35 85 55 65
3 125 95 90 105
4 45 110 95 115

Solution: Subtract 75 from Row 1, 35 from Row 2, 90 from Row 3, and 45 from Row 4.


Trucks A B C D
Sites

1 15 0 0 5
2 0 50 20 30
3 35 5 0 15
4 0 65 50 70
Subtract 0 from Column 1, 0 from Column 2, 0 from Column 3, and 5 from Column 4.

Trucks →    A    B    C    D
Sites ↓
1 15 0 0 0
2 0 50 20 25
3 35 5 0 10
4 0 65 50 65
Start assignment:

Sites \ Trucks     A      B      C      D
1                 15    [0]     0      0
2                [0]    50     20     25
3                 35     5    [0]     10
4                  0    65     50     65

The bracketed zeroes mark the assignments: row 3 has its only zero in column C, row 2 has its only zero in column A, and row 1 can then be assigned in column B. The zero in row 4 lies in column A, which is already taken, so after this step we get just three assignments and the solution must be improved.


• Draw the minimum number of lines needed to cover all the zeroes.
• Select the minimum uncovered element*, subtract it from every uncovered element,
add it to every doubly covered** element, and leave singly covered*** elements unchanged.


• Start the assignment again; if all four assignments are obtained the solution is optimal, otherwise repeat the steps from the beginning.

Covering all the zeroes requires three lines: row 1, column A and column C.

Sites \ Trucks     A      B      C      D
1                 15      0      0      0
2                  0     50     20     25
3                 35      5      0     10
4                  0     65     50     65

The minimum uncovered element is 5; hence the new matrix is

Sites \ Trucks     A      B      C      D
1                 20      0      5      0
2                  0     45     20     20
3                 35      0      0      5
4                  0     60     50     60

* those elements which are not crossed by lines


** those elements which lie at intersection of two lines
*** those elements which are under a single line.
Minimum uncovered is 20, so new matrix is


Sites \ Trucks     A      B      C      D
1                 40      0      5      0
2                  0     25      0      0
3                 55      0      0      5
4                  0     40     30     40

Hence the optimal assignment is

Site 1 to Truck B, Site 2 to Truck D, Site 3 to Truck C and Site 4 to Truck A,

with a total distance of 75 + 65 + 90 + 45 = 275 km.
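The same result can be cross-checked with SciPy's built-in Hungarian-method solver, scipy.optimize.linear_sum_assignment (assuming SciPy is installed).

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

dist = np.array([[ 90,  75,  75,  80],   # site 1 against trucks A-D
                 [ 35,  85,  55,  65],   # site 2
                 [125,  95,  90, 105],   # site 3
                 [ 45, 110,  95, 115]])  # site 4

rows, cols = linear_sum_assignment(dist)
for site, truck in zip(rows, cols):
    print(f"site {site + 1} -> truck {'ABCD'[truck]}")
print("total distance:", dist[rows, cols].sum())   # 275 km, as found above
```

Note that alternative assignments with the same total of 275 km exist, so the solver may print a different pairing with the same cost.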

1.4.3 Shortest route Problem


The shortest route problem is the problem of finding the shortest path between desired nodes of a given graph. The graph could be a representation of a road network, a communication network, a logistics network, and so on. The most popular method of finding the shortest path from one node (the source) to every other node is Dijkstra's algorithm. Before explaining Dijkstra's algorithm, let us understand what a graph is and some basic terms associated with it.
A graph is a diagrammatic representation of the connectivity between the different elements of a network. These elements are called nodes, and the arcs by which they are connected are called edges. The node from which distances are calculated is called the source, and the node up to which a distance is calculated is called the sink. For example, in a road map the different cities are the nodes and the roads connecting them are the edges.
Dijkstra’s Algorithm
• It begins at source and examines the graph to determine the shortest route between
source and every other node of network.


• The algorithm maintains a record of the presently known shortest distance between
source and every other node, and it changes these values whenever it discovers a route
that is shorter than the previously known shortest distance.
• After the algorithm has determined the shortest route between the source node and another node, it changes the label of that node from "unvisited" (temporary) to "visited" (permanent) and adds it to the path. This set initially contains only the source node, and the other nodes are added to it one by one.
• The procedure is repeated until every node in the network has been included, i.e. it terminates when the label of each node becomes "visited" (permanent). At the end we have a shortest route from the source node to every other node.
Example: Consider the following graph with six nodes S, A, B, C, D and E, where the number written on each edge expresses the distance between the corresponding nodes. Use Dijkstra's algorithm to find the shortest distance of each node from node S.
Solution: Initially only S is labelled as "visited" and the remaining five nodes are labelled as "unvisited". On each node we also write a label consisting of the distance and the name of the node from which that distance is measured. To maintain the record, let us write the iterations in tabular form.

Iteration   Visited (permanent) nodes   Unvisited (temporary) node   Label (distance, preceding node)

1           S                           A                            (1, S)
                                        B                            (5, S)
                                        C                            (∞, S)  (no path from S to C through visited nodes)
                                        D                            (∞, S)  (no path from S to D through visited nodes)
                                        E                            (∞, S)  (no path from S to E through visited nodes)

Now, Nodes A and B can be traced from node S in 1 unit of distance and 5 units of distance
respectively. Since min(1,5)=1

Hence, node A would be traced from node S and node A is labelled as “visited”. Now path
becomes {S,A} at shortest distance of 1 unit.

Iteration   Visited (permanent) nodes   Unvisited (temporary) node   Label (distance, preceding node)

2           S, A                        B                            (1+2, A)
                                        C                            (1+2, A)
                                        D                            (1+1, A)
                                        E                            (∞, S)  (no path from S to E through visited nodes)

The label for node B could be either (5, S) or (1+2, A) = (3, A). Since the distance to node B via node A is smaller, its label becomes (3, A) rather than (5, S); that is, the shortest distance of node B from S is 3 units, via node A. The labels for the remaining nodes are written similarly. At this point node D has the least distance among the unvisited nodes, so its label is updated to "visited".

Iteration   Visited (permanent) nodes   Unvisited (temporary) node   Label (distance, preceding node)

3           S, A, D                     B                            (1+2, A)
                                        C                            (1+2, A)
                                        E                            (2+2, D)

Nodes B and C can both be reached through node A at a distance of 3 units, which is the minimum among the unvisited nodes, so both become visited now.

Iteration   Visited (permanent) nodes   Unvisited (temporary) node   Label (distance, preceding node)

4           S, A, D, B, C               E                            (3+1, C) or (2+2, D)


Node E can be reached either from node C or from node D, as both routes have the same length of 4 units. Hence the shortest path from node S to each node is:
S to A: 1 unit, path S-A
S to B: 3 units, path S-A-B
S to C: 3 units, path S-A-C
S to D: 2 units, path S-A-D
S to E: 4 units, path S-A-C-E or S-A-D-E
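A compact Python sketch of Dijkstra's algorithm using a binary heap is given below. The original figure is not reproduced in this text, so the edge weights used here are inferred from the labels in the iteration tables above and should be read as an assumption.

```python
import heapq

graph = {                     # undirected edges: node -> [(neighbour, weight)]
    'S': [('A', 1), ('B', 5)],
    'A': [('S', 1), ('B', 2), ('C', 2), ('D', 1)],
    'B': [('S', 5), ('A', 2)],
    'C': [('A', 2), ('E', 1)],
    'D': [('A', 1), ('E', 2)],
    'E': [('C', 1), ('D', 2)],
}

def dijkstra(graph, source):
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float('inf')):
            continue                      # stale heap entry, node already settled
        for v, w in graph[u]:
            if d + w < dist.get(v, float('inf')):
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return dist

print(dijkstra(graph, 'S'))   # shortest distances: S 0, A 1, B 3, C 3, D 2, E 4
```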
1.4.4 Maximal Flow problem
The objective of the maximum flow problem is to find the greatest amount of flow that can be sent through a network of pipelines, channels or other passageways while the capacity limitations are taken into account. This is one of the main problems handled via graph theory. It can be used to model a broad variety of real-world situations, including resource distribution, communication networks and transportation systems, to name a few.

In the maximum flow problem we have a directed graph with a source node S and a sink node T, and each edge has a capacity, i.e. the maximum amount of flow that can be sent through it. The objective is to determine the greatest quantity of flow that can be sent from S to T while adhering to the capacity limitations imposed by the edges. The most common algorithm for solving the maximal flow problem is the Ford-Fulkerson algorithm.

Ford-Fulkerson algorithm
This algorithm is based on finding a flow-augmenting path from source to sink, the residual capacity of each edge, and the bottleneck capacity of the augmenting path.
Residual capacity - the residual capacity of a directed edge is its remaining capacity, i.e. the original capacity of the edge minus the current flow through it. If there is a flow f(u, v) along a directed edge u → v, the reversed edge, which originally has capacity 0, can be treated as carrying the flow f(v, u) = -f(u, v).
Residual graph - the original graph in which the residual capacities, instead of the original capacities, are written on the edges.
Augmenting path - a path (series of edges) from the source to the sink in the residual graph along which every residual capacity is positive.

Bottleneck capacity - the minimum of the residual capacities along an augmenting path is called its bottleneck capacity.

Let’s explain the algorithm with the help of an example.

Example: Consider the following network, where the numbers given on arcs are
capacities of corresponding edges. Find the maximal flow from source to sink

Solution: Firstly, redraw the network, taking initial flow as zero for all edges.

Find an augmenting path from source to sink. Let the path be S-A-B-T. The residual
capacities of edges are 7,5 and 8. The bottleneck capacity of this path is 5. Hence update
the flow on this path by 5 units. Now the new flow is as follows:

Similarly, considering the augmenting path S-D-C-T, the bottleneck capacity is 2, so the flow along this path is increased by 2.

The next augmenting path is S-D-A-C-T with bottleneck capacity 2, so the flow along this path is increased by 2.
The next possible augmenting path is S-A-C-T, with bottleneck capacity 1, so the flow along this path is incremented by 1.
Now no augmenting path is left, hence the process terminates here. The maximal flow is
5 + 2 + 2 + 1 = 10
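The sketch below implements the Ford-Fulkerson idea with breadth-first search for the augmenting paths (the Edmonds-Karp variant). Since the network figure is not reproduced in the text, the capacity values used are an assumed set chosen only to be consistent with the augmenting paths and bottlenecks described above.

```python
from collections import deque

# assumed capacities, consistent with the paths S-A-B-T (5), S-D-C-T (2),
# S-D-A-C-T (2) and S-A-C-T (1) described in the example
cap = {('S', 'A'): 7, ('S', 'D'): 5, ('A', 'B'): 5, ('A', 'C'): 3,
       ('D', 'A'): 2, ('D', 'C'): 2, ('B', 'T'): 8, ('C', 'T'): 5}

def max_flow(cap, source, sink):
    residual = dict(cap)                      # residual capacities
    for (u, v) in cap:
        residual.setdefault((v, u), 0)        # reverse edges start at 0
    nodes = {u for u, _ in residual} | {v for _, v in residual}
    flow = 0
    while True:
        # BFS for an augmenting path in the residual graph
        parent = {source: None}
        queue = deque([source])
        while queue and sink not in parent:
            u = queue.popleft()
            for v in nodes:
                if v not in parent and residual.get((u, v), 0) > 0:
                    parent[v] = u
                    queue.append(v)
        if sink not in parent:
            return flow                       # no augmenting path left
        path, v = [], sink                    # recover the path found
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(residual[e] for e in path)
        for (u, v) in path:                   # update residual capacities
            residual[(u, v)] -= bottleneck
            residual[(v, u)] += bottleneck
        flow += bottleneck

print(max_flow(cap, 'S', 'T'))                # 10, as in the example
```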
In-Text Questions
1. What is a basic feasible solution? What are the different methods of finding a basic feasible solution for a transportation problem?
2. Which method should be used for finding the BFS, and why?
3. Can a transportation problem be solved as an LPP?
4. Differentiate between the Stepping Stone and MODI methods. Which one is more effective, and why?
5. Is there any relationship between the transportation problem and the assignment problem? If yes, explain.
6. What is the basic nature of the transportation problem, maximization or minimization? Can both types of problems be solved using the transportation method?
7. True or False?
(a) To balance a transportation model, it may be necessary to add both a dummy
source and a dummy destination.


(b) The amounts shipped to a dummy destination represent surplus at the shipping
source.
(c) The amounts shipped from a dummy source represent shortages at the receiving
destinations.
8. In each of the following cases, determine whether a dummy source or a dummy
destination must be added to balance the model.
(a) Supply: a1 = 10, a2 = 5, a3 = 4, a4 = 6
Demand: b1 = 10, b2 = 5, b3 = 7, b4 = 9
(b) Supply: a1 = 30, a2 = 44
Demand: b1 = 25, b2 = 30, b3 = 10

1.5 SUMMARY

In this chapter the concept of a linear programming problem is explained with the help of real-life problems. Several real-life situations, such as production models, an investment model, a cost minimization model, a manpower scheduling model and a paper trim-loss problem, are formulated as linear programming problems. The essential elements of an LPP, the assumptions of LPP and the conditions under which LPP can be used are explained in detail. In the next section the transportation problem and the assignment problem are explained as distribution models. All the methods of finding a Basic Feasible Solution (BFS) are explained in detail, the transportation problem is formulated as an LPP, and the method of finding the optimal solution is explained. The Hungarian method is explained for finding the optimal solution of the assignment problem. All techniques are illustrated with examples.

1.6 GLOSSARY

• Linear Programming Problem- The mathematical model of some real life situation
consisting of linear objective function, linear set of constraints along with non-
negativity condition is called Linear Programming Problem.
• Decision Variable- X1, X2, X3, etc., are used to denote the activities. These are referred
to as decision variables.
• Integer Programming Problem- A mathematical model, where decision variables can
assume only integral values is called Integer Programming Problem.


• Objective Function- The goal of an LPP (Linear Programming Problem) is


mathematically represented by a Linear function using decision variables,
this is called objective function.
• Constraints- The set of linear constraints are the set of linear inequalities and/or
equalities which is just mathematical expression corresponding to the restrictions
imposed on the model such as budget constraint, space constraint, labor constraint.
• Optimization – The word optimization is used for maximization or minimization of
objective function.
• Balanced transportation Problem- A transportation problem where total supply = total
demand is called balanced transportation problem.
• Basic Feasible Solution (BFS) - a solution that satisfies all the constraints, in which some (basic) variables take positive values and the remaining variables are zero.

1.7 ANSWERS TO IN-TEXT QUESTIONS

Section 1.3
Ans 1. A Linear Programming model essentially consists of three components:
i) the linear objective function
ii) the set of linear constraints
iii) non-negativity of the decision variables
Ans 2. By negating the objective function: minimizing Z is equivalent to maximizing -Z, i.e. Min Z = -Max(-Z).
Ans 3. Because:
1. A large variety of problems in diverse fields can be represented as linear programming models.
2. Efficient and simple techniques are available for solving linear programming problems.
3. Data variation can be handled through linear programming models with ease.
Ans 4. No. In an LPP the objective function and the set of constraints must be linear.

Section 1.4

Ans 1. A solution that satisfies all the constraints, in which some (basic) variables take positive values and the remaining variables are zero, is called a basic feasible solution. The different methods are:


1. North-West Corner Rule
2. Least Cost Matrix Method
3. Vogel's Approximation Method (VAM)
Ans 2. Vogel's Approximation Method (VAM), because it usually reduces the number of iterations needed to reach the optimal solution.
Ans 3. Yes.
Ans 4. For the differences, refer to the algorithms given in the text. MODI is preferred over the Stepping Stone method, as it converges faster.
Ans 5. Yes, the assignment problem is a special case of the transportation problem in which the supply of each source and the demand of each destination are taken as 1.
Ans 6. Minimization. Yes, both types can be solved; a maximization problem is first converted into a minimization problem and then solved.
Ans7
a) False
b) True
c) True
Ans8
a) Dummy Source with supply of 6 units
b) Dummy Destination with demand 9 units

1.8 SELF-ASSESSMENT QUESTIONS

Q1. A company produces two types of dolls, regular doll and premium doll. The sales volume
for regular doll is at least 80% of the total sales of both the dolls. However, the company
cannot sell more than 100 units of regular dolls per day. Both dolls use one special rubber, of
which the maximum daily availability is 240 pounds. Regular doll consumes 2 pounds per
unit whereas premium doll uses 4 pounds per unit of this special rubber. Company earns
profit of $20 and $50, from regular doll and premium doll respectively. Formulate the given
condition as a linear programming problem to determine the optimal product mix for the
company.

Q2. Alumco manufactures aluminium sheets and aluminium bars. The maximum production
capacity is estimated at either 800 sheets or 600 bars per day. The maximum daily demand is
550 sheets and 580 bars. The profit per ton is $40 per sheet and $35 per bar. Formulate the
given condition as a linear programming problem to determine the optimal daily production
mix.

Q3. An investor wishes to invest $10,000 over the next year in two types of bonds. Bond A
yields 7% and bond B yields 11%. Past experiences suggests an allocation of at least 25% in
A and at most 50% in B. Moreover, investment in A should be at least half the investment in
B. Formulate the given conditions as a linear programming problem to help the investor decide how the funds should be allocated between the two bonds.

Q4. The Standard paper company produces paper rolls with a standard width of 20 m each.
Special customer orders with different widths are produced by splitting the standard rolls.
The typical order received on one day is summarized as follows:

Order    Desired Width (m)    Desired Number of Rolls
1        5                    150
2        7                    200
3        9                    300
Formulate the above problem as a linear programming problem to meet the order with
minimum trim loss.
Q5 Find the optimal transportation schedule for the following transportation problem, where
entries in the table are corresponding costs for transporting one unit from plant to respective
warehouse. Compare the optimal cost when the initial BFS is obtained by:
i. Least Cost Matrix Method
ii. North-West Corner Rule
iii. VAM

Plant Warehouse Supply


W1 W2 W3
A 28 17 26 500
B 19 12 16 300
Demand 250 250 500


Q6. Crompton has three factories, X, Y and Z. It supplies its products to four distributors located in different states. The production capacities of these factories are 200, 500 and 300 units per month respectively. The demands of the distributors are given in the table below; the values in the table are the net returns per unit corresponding to each factory-distributor pair.

Factory Distributors Capacity


A B C D
X 12 18 6 25 200
Y 8 7 10 18 500
Z 14 3 11 20 300
Demand 180 320 100 400

Determine a suitable allocation to maximize the total net return. Find the conditional solution
also, i.e. if X can’t transport to C and Z can’t transport to B.

Q7. What are the real-life applications of the shortest route problem?

Q8. Find the shortest route from source to all other nodes for the following graph.

Q9. Find the maximal flow using Ford-Fulkerson Algorithm.


1.9 REFERENCES

• Operations Research. An Introduction. Tenth Edition. Hamdy A. Taha.


• Quantitative Techniques in Management. 5th Edition N.D. Vohra
• Operations Research (Theory Methods & Applications), S.D. Sharma
• Operations Research: Concepts, Problems and Solutions V.K. Kapoor

1.10 SUGGESTED BOOKS

• Operations Research. An Introduction. Tenth Edition. Hamdy A. Taha.


• Quantitative Techniques in Management. 5th Edition N.D. Vohra
• Operations Research (Theory Methods & Applications), S.D. Sharma
• Operations Research: Concepts, Problems and Solutions V.K. Kapoor

MBAFT-6202 Decision Modeling and Optimization

LESSON 2
MULTICRITERIA DECISION MODELS

Dr. Deepa Tyagi


Assistant Professor
Shaheed Rajguru College of Applied Sciences for Women
University of Delhi

STRUCTURE

2.1. Learning Objectives


2.2. Introduction Of Goal Programming (GP)
2.3. Model Formulation \Modeling
2.3.1. Steps Of Goal Programming (GP) Model Formulation
2.3.2. Concept Of Labor Goal
2.3.3. Concept Of Profit Goal
2.3.4. Concept Of Material Goal
2.4. Alternative Forms of Goal Constraints
2.5. Analysis Of Goal Programming (GP) Graphically
2.6. Simplex Method (Modified) Applied to Goal Programming (GP) Problems
2.7. Preemptive Goal Programming and Non-Preemptive (Weighted) Goal Programming
2.8. Applications Of Goal Programming (GP) In Management Science
2.9. Summary
2.10. Self-Assessment Exercises
2.11. Objective Questions
2.12. Suggested Readings/References and Glossary

2.1 LEARNING OBJECTIVES

After learning this Chapter, you will be able to:


➢ Explain the concept of Goal Programming (GP).
➢ Formulate (model) business/industry decision problems with multiple objectives
as goal programming (GP) problems.


➢ Describe the graphical solution used to solve the goal programming (GP)
problems.
➢ Describe the Simplex Method (Modified) solution used to solve the goal
programming (GP) problems.

2.2 INTRODUCTION OF GOAL PROGRAMMING (GP)

Goal Programming (GP) is a form of Linear Programming (LP) that includes
several objectives (goals) instead of a single one.
According to Romero (1992), the concept of GP was introduced by Charnes
and Cooper (1961). They did not present it as a unique or revolutionary
methodology but only as an extension of LP. Ijiri (1965) developed the concept of
assigning different priority levels to the goals and different weights to goals at the
same priority level. Lee (1972) and Ignizio (1976) discussed the subject of
GP in detail and wrote textbooks on it.
In the usual LP models, we considered a single objective that was either maximized
or minimized. However, a company or an organization frequently
has more than one goal, which may relate to something other than profit or
cost.
In fact, a company may have several criteria, so-called multiple
criteria, to consider in making a decision rather than just a single goal.
For example, in addition to maximizing profit, a company at
risk of a labor strike might want to avoid employee layoffs, or a company about
to be fined for pollution infractions might want to minimize the emission of
pollutants. A company deciding among a number of possible research and
development projects might want to consider the probability of success of each
project, as well as the cost and time required for each, in making a selection.
In this chapter, we explore the GP technique that can be used to solve
problems with multiple goals. GP is a variation of LP in that it considers more
than one objective (target) in the objective function. GP models and LP models
are designed in a similar general format, with an objective function and linear
constraints. Also, the methods of solving the two types of models are similar.


2.3 MODEL FORMULATION \ MODELING

As noted above, the structure of a GP model is similar to that of an LP model, with an
objective function, decision variables, and constraints.
Also, like LP models, GP models can be solved graphically when the model
contains two decision variables.

2.3.1. STEPS OF GOAL PROGRAMMING (GP) MODEL FORMULATION

The steps we have taken in the model formulation can be briefly summarized as
follows:
1. Define Variables and Constants
2. Formulate Constraints
3. Develop the Objective Function

➢ Define Variables and Constants: The first step of model formulation is the
definition of the decision (choice) variables and the right-hand-side constants. The
right-hand-side constants may be either available resources or specified goal
levels. This requires a careful analysis of the problem in order to identify all
significant variables that have some effect on the set of goals (objectives)
specified by the decision maker.

➢ Formulate Constraints: Through an analysis of the relationships among the choice
variables and their relationships to the goals, a set of constraints should be
formulated. A constraint may be either a system constraint or a goal constraint that
defines the relationship between the choice variables and a goal. It should be
remembered that if there is no deviational variable to minimize in order to achieve
a certain goal, a new constraint must be created. Also, if further refinement of
goals and priorities is required, it may be facilitated by decomposing certain
deviational variables.

➢ Develop the Objective Function: Through an analysis of the decision maker's goal
structure, the objective function must be developed. First, the preemptive priority
factors should be assigned to the deviational variables that are relevant to goal
attainment. Second, if necessary, differential weights must be assigned to deviational
variables at the same priority level. It is imperative that goals at the same priority
level be commensurable.
We now illustrate GP modeling through an example that shows how such a model is
formulated. This will help clarify the main differences between GP and LP.

We use the Beaver Creek Pottery Company example to show how a GP model is framed and
to explain the differences between an LP model and a GP model. The original LP model was
formulated as follows:

Maximize Z = $40x1 + 50x2
subject to
x1 + 2x2 ≤ 40 hrs. of labor
4x1 + 3x2 ≤ 120 lb. of clay
x1, x2 ≥ 0
where
x1 = number of bowls produced
x2 = number of mugs produced

In this model:
• Z is the objective (or goal) function that represents the total profit to be
made from bowls and mugs
• $40 is the profit per bowl
• $50 is the profit per mug
• The first constraint is for available labor. It shows that a bowl requires 1 hour
of labor, a mug requires 2 hours, and 40 hours of labor are available
daily.
• The second constraint is for clay. It shows that each bowl requires 4
pounds of clay, each mug requires 3 pounds, and the daily limit of clay is
120 pounds.
This is a standard LP model formulation because it has a single
objective function for profit. However, suppose that instead of having one
objective, the pottery company has several objectives, listed here in order of
importance:
i. To avoid layoffs, the company does not want to use fewer than 40
hours of labor per day.
ii. The company would like to achieve a satisfactory profit level of $1,600 per
day.
iii. Because the clay must be stored in a special place so that it does
not dry out, the company prefers not to keep more than 120 pounds on
hand each day.


iv. Because high overhead costs result when the plant is kept open past normal
hours, the company would like to minimize the amount of overtime.
These several aims are referred to as goals in the context of the GP
technique. The company would, naturally, like to come as close as possible to
achieving each of them. Because the usual form of the LP model
accommodates only one objective, we must create an alternative form of the model to
reflect these multiple goals.
The first step in formulating a GP model is to convert the LP model
constraints into goals.
The different aims in a GP problem are referred to as goals
(objectives).

2.3.2. CONCEPT OF LABOR GOAL

The first goal of the pottery company is to avoid underutilization of labor, that is,
using fewer than 40 hours of labor each day. To represent the possibility of
underutilizing labor, the LP constraint for labor, x1 + 2x2 ≤ 40 hours of labor, is
rewritten as

x1 + 2 x2 + d1− − d1+ = 40 hrs.


This reformulated expression is referred to as a goal constraint.
The two new variables, d1− and d1+, are called deviational variables. They represent the
number of labor hours fewer than 40 (d1−) and the number of labor hours in excess of 40 (d1+).
More precisely, d1− represents labor underutilization, and d1+ represents overtime.
For instance, suppose x1 = 5 bowls and x2 = 10 mugs; then a total of 25 hours of labor have
been used. Substituting these values into the goal constraint gives
1(5) + 2(10) + d1− − d1+ = 40
25 + d1− − d1+ = 40

All goal constraints include deviational variables in this way.


Because only 25 hours were used in production, labor was underutilized by 15 hours (40 -
25 = 15). Thus, if we let d1− = 15 hours and d1+ = 0 (because no overtime exists),
then we have

25 + d1− − d1+ = 40
25 + 15 − 0 = 40
40 = 40
A positive deviational variable d1+ is the amount by which a target level is exceeded.

A negative deviational variable d1− is the amount by which a target level is underachieved.

Now consider the case:


where x1 = 10 bowls
And x2 = 20 mugs
This indicates that a total of 50 hours have been used for production, or 10 hours above the
target level of 40 hours. These extra 10 hours are overtime.
Thus, d1− = 0 (because there is no underutilization)
and d1+ = 10 hours


In each of these two brief examples, at least one of the deviational variables equals zero. In
the first example d1+ = 0, and in the second example d1− = 0.
This is because it is impossible to use fewer than 40 hours of labor and more than 40 hours of
labor at the same time. Of course, both deviational variables, d1− and d1+, would have
equaled zero if exactly 40 hours had been used in production. These examples illustrate one of
the important properties of GP:
In a goal constraint, at least one of the two deviational variables must
equal zero (both may be zero).
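To make the arithmetic of deviational variables concrete, here is a small sketch in plain Python (the helper function is illustrative only and not part of the original model); it reproduces the two numeric cases above.

```python
# A small sketch (hypothetical helper, plain Python) showing how the labor
# deviational variables split the gap between hours used and the 40-hour target.
def labor_deviations(x1, x2, target=40):
    used = 1 * x1 + 2 * x2           # hours consumed by bowls and mugs
    d_minus = max(target - used, 0)  # d1-: underutilization of labor
    d_plus = max(used - target, 0)   # d1+: overtime
    return d_minus, d_plus

print(labor_deviations(5, 10))   # (15, 0): 25 hours used, 15 hours under the target
print(labor_deviations(10, 20))  # (0, 10): 50 hours used, 10 hours of overtime
```

Note that in every case at least one of the two returned values is zero, exactly as stated above.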


The next step in formulating our GP model is to represent the goal of not using fewer than 40 hours
of labor. We do this by creating a new form of objective function:

Minimize P1d1−

The objective function in all GP models is to minimize deviations from
the goal constraint levels. In this objective function, the goal is to minimize d1−, the
underutilization of labor. If d1− equals zero, we will not be using fewer than 40 hours of labor.
Thus, the aim is to make d1− equal to zero, or the minimum amount possible.

The symbol P1 in the objective function designates the minimization of d1− as the first-priority
goal. This means that when the model is solved, the first step will be to minimize the value of d1−
before any other goal is addressed.
In a GP model, the objective function seeks to minimize the
deviations from the goals, in order of the goal priorities.

The fourth-priority goal in this problem is also related to the labor constraint. The fourth goal,
denoted by P4, is to minimize overtime. Recall that hours of overtime are represented by d1+;
therefore, the objective function becomes

Minimize P1d1−, P4d1+

As before, the objective is to minimize the deviational variable d1+. If d1+ = 0, there
will be no overtime at all. In solving this model, the fourth-priority
goal will not be addressed until goals one, two, and three have been considered.
2.3.3. CONCEPT OF PROFIT GOAL

In our GP model, the second goal is to obtain a daily profit of $1,600. Remember
that the original LP objective function was defined as
Z = 40 x1 + 50 x2


We now redefine this objective function as a goal constraint with the following target level:

40x1 + 50x2 + d2− − d2+ = $1,600

The deviational variables d2− and d2+ represent the amount of profit less than $1,600 (d2−) and
the amount of profit exceeding $1,600 (d2+), respectively. The pottery company's goal of
reaching $1,600 in profit is represented in the objective function as

Minimize P1d1−, P2d2−, P4d1+

Notice that only d2− is minimized, not d2+, because it is reasonable to assume that
the pottery company would accept all profits in excess of $1,600 (i.e., it does not
want to minimize d2+, the extra profit). By minimizing d2− at the second-priority level, the
pottery company hopes that d2− will equal zero, which will result in a profit of at least $1,600.
2.3.4. CONCEPT OF MATERIAL GOAL
The third goal of the company is to avoid keeping more than 120 pounds of clay on hand
each day. The goal constraint is

4 x1 + 3x2 + d3− − d3+ = 120 lb.

The deviational variable d3− represents the amount of clay less than 120 pounds, and d3+
represents the amount in excess of 120 pounds. This goal is reflected in the objective
function as

Minimize P1d1−, P2d2−, P3d3+, P4d1+

The term P3d3+ reflects the company's desire to minimize d3+, the amount of clay in
excess of 120 pounds. The P3 designation indicates that this is the third most important goal of the pottery
company.
The complete GP model can now be stated as follows:


Minimize P1d1−, P2d2−, P3d3+, P4d1+

subject to
x1 + 2x2 + d1− − d1+ = 40
40x1 + 50x2 + d2− − d2+ = 1,600
4x1 + 3x2 + d3− − d3+ = 120
x1, x2, d1−, d1+, d2−, d2+, d3−, d3+ ≥ 0
One basic difference between this model and the standard LP model is that the objective
function terms are not summed to equal a total value Z. The reason is that the
deviational variables in the objective function represent different units of measure. For instance,
d1− and d1+ represent hours of labor, d2− represents dollars, and d3+ represents pounds of clay.
It would be meaningless to sum hours, dollars, and pounds. The objective function in a GP
model specifies only that the deviations from the goals
be minimized individually, in order of their priority.
Because the deviational variables often have different units
of measure, the terms in the objective function are not summed.

2.4 ALTERNATIVE FORMS OF GOAL CONSTRAINTS

Now suppose we want to modify the previous GP model so that the fourth-priority goal
limits overtime to 10 hours instead of minimizing overtime. Recall that the goal
constraint for labor is

x1 + 2 x2 + d1− − d1+ = 40

In this goal constraint, d1+ represents overtime. Because the new fourth-priority goal is to limit
overtime to 10 hours, the goal constraint is written as

d1+ + d4− − d4+ = 10

Although this goal constraint may look unusual, it is acceptable in GP to have a
constraint that consists entirely of deviational variables. In this expression, d4− represents the
amount of overtime less than 10 hours, and d4+ represents the amount of overtime in excess of 10 hours.


Because the company wants to limit overtime to 10 hours, d4+ is minimized in the
objective function:

Minimize P1d1−, P2d2−, P3d3+, P4d4+

Goal constraints can consist entirely of deviational variables.

Next, consider the addition of a fifth-priority goal to this example. Assume that the pottery
company has limited warehouse space, so it can produce no more than 30 bowls and 20
mugs daily. If possible, the company would like to produce these amounts. However,
because the profit for mugs is greater than the profit for bowls (i.e., $50 rather than $40), it is
more important to achieve the goal for mugs. This fifth goal requires that two new goal
constraints be formulated, as follows:

x1 + d5− = 30 bowls
x2 + d6− = 20 mugs

Notice that the positive deviational variables d5+ and d6+ have been omitted from
these goal constraints. This is because the statement of the fifth goal specifies
that "no more than 30 bowls and 20 mugs" can be produced; positive deviation, or
overproduction, is therefore not possible.

Because the company's desire is to achieve the production levels shown in these
two goal constraints, the negative deviational variables d5− and d6− are minimized in the
objective function. However, recall that it is more important to the company to achieve
the goal for mugs, because mugs earn a greater profit. This fact is reflected in the objective
function as follows:

Minimize P1d1−, P2d2−, P3d3+, P4d4+, 4P5d5− + 5P5d6−

Because the goal for mugs is more important than the goal for bowls, the level of
importance is set in proportion to the profit amounts (i.e., $50 for each mug and $40 for
each bowl). Therefore, the goal for mugs is more important than the goal for bowls in the
ratio of 5 to 4.

The coefficients 5 and 4 are referred to as weights for P5d6− and P5d5−, respectively.
Thus, at the fifth priority level, the minimization of d6− is "weighted" higher than the

minimization of d5−. When this model is solved, attainment of the goal of minimizing
d6− (mugs) is more important, even though both goals are at the same priority level.
At the same priority level, two or more goals can be assigned weights
to specify their relative significance.

Notice that these two weighted goals have been summed because they are at the same
priority level. At this priority level, their sum represents achievement of the
desired goal. The complete GP model, with the new goals for both overtime and production, is
written as:

Minimize P1d1−, P2d2−, P3d3+, P4d4+, 4P5d5− + 5P5d6−

subject to
x1 + 2x2 + d1− − d1+ = 40
40x1 + 50x2 + d2− − d2+ = 1,600
4x1 + 3x2 + d3− − d3+ = 120
d1+ + d4− − d4+ = 10
x1 + d5− = 30
x2 + d6− = 20
x1, x2, d1−, d1+, d2−, d2+, d3−, d3+, d4−, d4+, d5−, d6− ≥ 0
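The weighting at the fifth priority level can be illustrated with a short sketch in plain Python (the helper below is hypothetical, introduced only for illustration); it shows how a one-unit shortfall on mugs is penalized more heavily than a one-unit shortfall on bowls.

```python
# A small sketch (hypothetical helper, plain Python) of the weighted fifth-priority
# objective 4*d5- + 5*d6-: it measures how far a production plan falls short of the
# 30-bowl and 20-mug targets, weighting the mug shortfall more heavily.
def weighted_p5_deviation(x1, x2, bowl_target=30, mug_target=20):
    d5_minus = max(bowl_target - x1, 0)   # bowls short of 30
    d6_minus = max(mug_target - x2, 0)    # mugs short of 20
    return 4 * d5_minus + 5 * d6_minus

print(weighted_p5_deviation(30, 10))  # short only on mugs:  5 * 10 = 50
print(weighted_p5_deviation(20, 20))  # short only on bowls: 4 * 10 = 40
```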

2.5 ANALYSIS OF GOAL PROGRAMMING (GP) GRAPHICALLY

Only linear GP problems that involve two decision variables can be solved
by the graphical method. The method is quite similar to the graphical method of
LP. In LP the graphical method is used to optimize a single objective, whereas in GP it is used
to minimize the deviations from a set of multiple goals. The deviation from the goal of
highest priority is minimized as much as possible first, and then the deviations from the
other goals are minimized in order of priority, so that the achievement of the
higher-priority goals is not affected.
The following procedural steps are employed after the problem has
been formulated.
Step-1: Plot all structural constraints and identify the feasible region. If no
structural constraints exist, the feasible region is the area where both x1 and x2

are ≥ 0 (the non-negative quadrant).


Step-2: Plot the lines corresponding to the goal constraints. To accomplish this,
set the deviational variables in the goal constraint equal to zero and plot the
resulting equation.
Step-3: Identify the top-priority solution. For this, determine the point or points
within the feasible region that satisfy the highest-priority goal.
Step-4: Move to the goal with the next highest priority and determine the "best"
solution(s) that do not degrade what has already been achieved for the higher-priority goals.
Step-5: Repeat Step 4 until all priority levels have been investigated.
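As a computational aid for Steps 1 and 2, the sketch below (assuming Matplotlib and NumPy are available; it is not part of the original text) plots the three goal-constraint lines of the Beaver Creek model with the deviational variables set to zero.

```python
# A sketch of Steps 1-2: plot the three goal-constraint lines of the Beaver Creek
# model (deviational variables set to zero) so the priority analysis can be read off.
import numpy as np
import matplotlib.pyplot as plt

x1 = np.linspace(0, 45, 200)
plt.plot(x1, (40 - x1) / 2, label="x1 + 2x2 = 40 (labor)")
plt.plot(x1, (1600 - 40 * x1) / 50, label="40x1 + 50x2 = 1600 (profit)")
plt.plot(x1, (120 - 4 * x1) / 3, label="4x1 + 3x2 = 120 (clay)")
plt.xlim(0, 45)
plt.ylim(0, 45)
plt.xlabel("x1 (bowls)")
plt.ylabel("x2 (mugs)")
plt.legend()
plt.show()
```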
For a clear understanding, the method is illustrated with the help of the following
example. The original GP model for the Beaver Creek Pottery Company, defined earlier
in this chapter, is used:

Minimize P1d1−, P2d2−, P3d3+, P4d1+

subject to
x1 + 2x2 + d1− − d1+ = 40
40x1 + 50x2 + d2− − d2+ = 1,600
4x1 + 3x2 + d3− − d3+ = 120
x1, x2, d1−, d1+, d2−, d2+, d3−, d3+ ≥ 0

To graph this model, the deviational variables in each goal constraint are set equal to zero,
and we graph each subsequent equation on a set of coordinates. Here, Figure-1 is a graph of
the three goal constraints for this model.
Notice that in Figure-1, there is no feasible solution space indicated, as in a regular LP model.
This is because all three goal constraints are equations; thus, all solution points are on the
constraint lines.
The solution logic in a GP model is to try to attain the goals in the objective function, in order
of their priorities. As a goal is achieved, the next highest-ranked goal is then considered.
However, a higher-ranked goal that has been achieved is never given up in order to achieve a
lower-ranked goal.


The graphical solution illustrates the GP solution logic: seeking to achieve
the goals by minimizing the deviations, in order of their priority.

In this example we first consider the first-priority goal, minimizing d1−. The relationship
of d1− and d1+ to the goal constraint is shown in Figure-2. The area below the goal constraint
line x1 + 2 x2 = 40 represents possible values for d1− , and the area above the line represents
values for d1+ . In order to achieve the goal of minimizing d1− , the area below the constraint
line corresponding to d1− is eliminated, leaving the shaded area as a possible solution area.

Next, we consider the second-priority goal, minimizing d2−. In Figure-3, the area below
the constraint line 40x1 + 50x2 = 1,600 represents the values for d 2− , and the area above the
line represents the values for d 2+ . To minimize d 2− , the area below the constraint line
corresponding to d 2− is eliminated. Notice that by eliminating the area for d 2− , we do not
affect the first-priority goal of minimizing d1− .
One goal is never achieved at the expense of another higher-priority goal.

Next, we consider the third-priority goal, minimizing d3+. Figure-4 shows the areas
corresponding to d3− and d3+ . To minimize d3+ , the area above the constraint line
4 x1 + 3x2 = 120 is eliminated. After considering the first three goals, we are left with the area
between the line segments AC and BC, which contains possible solution points that satisfy
the first three goals.

Figure-1: Graph of Goal Constraints



Figure-2: Minimize d1− (The First- Priority Goal)

Figure-3: Minimize d 2− (The Second- Priority Goal)


Figure-4: Minimize d3+ (The Third- Priority Goal)

Finally, we must consider the fourth-priority goal, minimizing d1+. To achieve this final
goal, the area above the constraint line x1 + 2 x2 = 40 must be eliminated. However, if we
eliminate this area, then both d 2− and d3− must take on values. In other words, we cannot
minimize d1+ totally without violating the first- and second-priority goals. Therefore, we
want to find a solution point that satisfies the first three goals but achieves as much of the
fourth-priority goal as possible.
Point C in Figure-5 is a solution that satisfies these conditions. Notice that if we move down
the goal constraint line 4 x1 + 3x2 = 120 toward point D, d1+ is further minimized; however,
d 2− takes on a value as we move past point C. Thus, the minimization of d1+ would be
accomplished only at the expense of a higher-ranked goal.
The solution at point C is determined by simultaneously solving the two equations that
intersect at this point. Doing so results in the following solution:
x1 = 15 bowls
x2 = 20 mugs
d1+ = 15 hours


Because the deviational variables d1−, d2−, and d3+ all equal zero, they have been minimized
and the first three goals have been achieved. Because d1+ = 15 hours of overtime, the fourth-priority
goal has not been achieved. The solution to a GP model such as this one is referred to
as the most satisfactory solution rather than the optimal solution, because it satisfies the
specified goals as well as possible.
GP solutions do not always achieve all goals, and they are not
optimal; they achieve the most satisfactory solution possible.

Figure-5: Minimize d1+ (The Fourth-Priority Goal) and Final Solution
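The same preemptive logic can be checked numerically. The sketch below (assuming SciPy is available; it is a sequential-LP reading of the logic, not the textbook's own procedure) minimizes each deviation in priority order and locks in the achieved level before moving to the next goal.

```python
# A minimal sketch of preemptive goal programming as a sequence of LPs (SciPy assumed).
# Variable order: [x1, x2, d1-, d1+, d2-, d2+, d3-, d3+].
import numpy as np
from scipy.optimize import linprog

A_goal = [
    [1,  2,  1, -1, 0,  0, 0,  0],   # x1 + 2x2   + d1- - d1+ = 40   (labor)
    [40, 50, 0,  0, 1, -1, 0,  0],   # 40x1 + 50x2 + d2- - d2+ = 1600 (profit)
    [4,  3,  0,  0, 0,  0, 1, -1],   # 4x1 + 3x2  + d3- - d3+ = 120  (clay)
]
b_goal = [40, 1600, 120]
priorities = [2, 4, 7, 3]            # minimize d1-, then d2-, then d3+, then d1+

A_eq, b_eq = [row[:] for row in A_goal], list(b_goal)
for var in priorities:
    c = np.zeros(8)
    c[var] = 1.0                                  # minimize only this deviation
    res = linprog(c, A_eq=A_eq, b_eq=b_eq)        # all variables default to >= 0
    lock = np.zeros(8)
    lock[var] = 1.0                               # freeze the level just achieved
    A_eq.append(lock.tolist())
    b_eq.append(max(res.fun, 0.0))                # guard against tiny negative round-off

print(res.x[:2], res.x[3])   # expected, as in the graphical solution: x1 = 15, x2 = 20, d1+ = 15
```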

2.6 SIMPLEX METHOD (MODIFIED) APPLIED TO GOAL


PROGRAMMING (GP) PROBLEMS

The simplex method for solving a GP problem is a modified form of the simplex method for an
LP problem. In this section we demonstrate how the algorithm can be modified to
solve a GP model. The method of solving a GP problem by the modified simplex method is as
follows:


Step-1. Formulation of Initial Table: Construct the initial simplex table in the same way as
for LP problems with the coefficients of the associated variables (decision variables and the
deviational variables) placed in the appropriate columns. Now put a thick horizontal line
below these entries and write the pre-emptive priority goals P1 , P2 ,......, in xB column, starting
from the bottom to the top i.e., first (top) priority P1 is written at the bottom and the least
priority is written at the top.
Step-2. In a GP problem there is no profit maximization or cost minimization in the objective
function. Instead, we minimize the unattained portions of the goals as much as possible by
minimizing the deviational variables, using pre-emptive priority factors
and the weights attached to the deviational variables in the objective function. Thus,
the pre-emptive priority factors with the weights attached to the deviational variables in the
objective function Z represent the cj values. Write the cj row at the top of the table.

Step-3. Test of Optimality: Compute the values of Zj and cj − Zj separately for each of the
ranked goals P1, P2, ..., because the different goals are measured in different units. Zj
and cj − Zj are computed in the same manner as in the usual simplex method for LP
problems.

Thus, cj − Zj = cj − (cB column)T . (jth column)
and Z = (cB column)T . (xB column)

The optimality criterion Zj or cj − Zj becomes a matrix of size k × n, where k represents the
number of pre-emptive priority levels and n is the number of variables, including both
decision and deviational variables.
Optimality Test: Check the c1 − Z1 row (the row for the top priority P1). The top-priority goal P1
is said to be achieved if all cj − Zj ≥ 0 in the P1 row or if the entry in the xB column of the P1 row is zero.

If at least one of the entries in the P1 row is negative and the entry in the xB column of the P1
row is not zero, then the goal P1 is not achieved and can be improved further; in this case proceed to
the next step.
Step-4. To find the Entering Vector (or Variable): the variable in the column corresponding
to the largest negative c1 − Z1 value (smallest element) in the P1 row is selected as the


entering variable (or vector). In case of tie, check the next lower priority level. The column
corresponding to the smallest element (largest negative element) in the lower priority row,
out of the columns in which there is a tie in c1 − Z1 row, is selected as key column (i.e.,
incoming variable or vector).
To Find the Outgoing Vector (Or Variable) : The outgoing vector is selected as in usual simplex
method in LP problems. The variable in the row (known as key row), which corresponds to
the minimum non-negative value, obtained by dividing the values in the xB column by the
corresponding positive elements (or values) in the key column, is taken as the outgoing
variable (or vector).
The element at the intersection of the key row and key column is called key-element.
Step-5. As in the usual simplex method, we reduce the key element to 1 and, with its help,
reduce all other elements in the key column to zero. Thus, a new reduced matrix is obtained.
For this matrix, again find the values of Zj or cj − Zj for each of the ranked goals
P1, P2, .... Now again check the c1 − Z1 row for optimality. If all entries in the P1 row are
non-negative, the goal is achieved (note that in this situation the value in the xB column of the P1
row will be zero, showing that this goal is fully achieved).
If at least one entry in the P1 row is negative, then goal P1 is still not achieved; in this case
repeat Steps 4 and 5.
Step-6. If goal P1 is achieved, then proceed to achieve the next priority goal P2 in the same
manner. The goal P2 cannot be improved further from the present level if there is a
positive entry in row P1 (the higher-priority goal) below the most negative entry in row P2.

Continue this process until the lowest-priority goal (say Pi) is also achieved fully or to the
nearest possible satisfaction. The goal Pi cannot be improved further if there is a
positive entry in one of the higher-priority rows P1, P2, ... below the most negative entry in row
Pi.

For clear understanding of the above method see the following illustrative example.


Example: A company manufactures two products, radios and transistors, which must be
processed through the assembly and finishing departments. Assembly has 90 hours available;
finishing can handle up to 72 hours of work. Manufacturing one radio requires 6 hours in
assembly and 3 hours in finishing, while one transistor requires 3 hours in assembly and 6 hours
in finishing (see the table below). The profit is Rs. 120 per radio and Rs. 90 per transistor.
The company has established the following goals and has assigned them priorities
P1, P2, P3 (where P1 is the most important) as follows:

Priority Goal
P1 Produce to meet a radio goal of 13
P2 Reach a profit goal of Rs. 1950
P3 Produce to meet a transistor goal of 5.

Formulate the problem as a GP problem and find the optimum solution.

Solution: Formulation of the GP problem: First, the given information can be put in
tabular form as follows:

                         Radio (x1)   Transistor (x2)   Time available (hours)

Assembly time (hours)        6               3                   90

Finishing time (hours)       3               6                   72

Profit (Rs.)               120              90

Let x1 and x2 be the numbers of radios and transistors manufactured, respectively.

Also let


d1− = amount by which the profit goal is underachieved ,


d1+ = amount by which the profit goal is overachieved ,
d 2− = amount by which the radio goal is underachieved ,
d 2+ = amount by which the radio goal is overachieved ,
d3− = amount by which the transistor goal is underachieved ,
d3+ = amount by which the transistor goal is overachieved .

Then the given problem formulated as a GP problem is as follows:


Minimize Z = P1d2− + P2d1− + P3d3−, i.e., Minimize Z = (d2−, d1−, d3−)
subject to
120x1 + 90x2 + d1− − d1+ = 1950   (Profit goal)
x1 + d2− − d2+ = 13               (Radio goal)
x2 + d3− − d3+ = 5                (Transistor goal)
6x1 + 3x2 ≤ 90                    (Assembly constraint)
3x1 + 6x2 ≤ 72                    (Finishing constraint)
and x1, x2, d1−, d1+, d2−, d2+, d3−, d3+ ≥ 0.

Solution of the GP Problem: Introducing the slack variables x3 , x4 , the above GP problem can be
written as follows:
Minimize Z = P1d2− + P2d1− + P3d3−
subject to
120x1 + 90x2 + d1− − d1+ = 1950
x1 + d2− − d2+ = 13
x2 + d3− − d3+ = 5
6x1 + 3x2 + x3 = 90
3x1 + 6x2 + x4 = 72
and x1, x2, d1−, d1+, d2−, d2+, d3−, d3+ ≥ 0.

Now we shall solve this problem step by step.

Taking x1 = 0 = x2 = d1+ = d2+ = d3+, we get d1− = 1950, d2− = 13, d3− = 5, x3 = 90, x4 = 72,
which is the initial basic feasible solution.


Step-1. Formulation of the initial table: Now we formulate the starting (initial) table as
follows. (As explained in step-1 of 2.6).

cj 0 0 0 0 P2 0 P1 0 P3 0 Mini Ratio

B cB xB / x1
xB x1 x2 x3 x4 d1− d1+ d 2− d 2+ d3− d3+
Type

d1− P2 1950 120 90 0 0 1 -1 0 0 0 0 1950/120

d 2−   P1   13   1   0   0   0   0   0   1   -1   0   0   13/1 (Min)

d3− P3 5 0 1 0 0 0 0 0 0 1 -1 —

x3 0 90 6 3 1 0 0 0 0 0 0 0 90/6

x4 0 72 3 6 0 1 0 0 0 0 0 0 72/3

cj − Z j   P3   5   0   -1   0   0   0   0   0   0   0   1

P2   1950   -120   -90   0   0   0   1   0   0   0   0

P1 13 -1 0 0 0 0 0 0 1 0 0

Step-2. The c j row is written at the top of the table.

Step-3. Test of optimality: Here we compute c j − Z j , j = 1, 2,.....,10.


c1 − Z1 = 0 − ( P2 , P1 , P3 , 0, 0).(120,1, 0, 6,3)T = −120 P2 − P1 + 0.P3


c2 − Z 2 = 0 − ( P2 , P1 , P3 , 0, 0).(90, 0,1,3, 6)T = −90 P2 + 0.P1 − 1.P3
c3 − Z 3 = 0 − ( P2 , P1 , P3 , 0, 0).(0, 0, 0,1, 0)T = 0.P2 + 0.P1 + 0.P3
c4 − Z 4 = 0 − ( P2 , P1 , P3 , 0, 0).(0, 0, 0, 0,1)T = 0.P2 + 0.P1 + 0.P3
c5 − Z 5 = P2 − ( P2 , P1 , P3 , 0, 0).(1, 0, 0, 0, 0)T = 0.P2 + 0.P1 + 0.P3
c6 − Z 6 = 0 − ( P2 , P1 , P3 , 0, 0).(−1, 0, 0, 0, 0)T = P2 + 0.P1 + 0.P3
c7 − Z 7 = P1 − ( P2 , P1 , P3 , 0, 0).(0,1, 0, 0, 0)T = 0.P2 + 0.P1 + 0.P3
c8 − Z8 = 0 − ( P2 , P1 , P3 , 0, 0).(0, −1, 0, 0,1)T = 0.P2 + 1.P1 + 0.P3
c9 − Z 9 = P3 − ( P2 , P1 , P3 , 0, 0).(0, 0,1, 0, 0)T = 0.P2 + 0.P1 + 0.P3
c10 − Z10 = 0 − ( P2 , P1 , P3 , 0, 0).(0, 0, −1, 0, 0)T = 0.P2 + 0.P1 + 1.P3

Note that all entries in the columns corresponding to vectors in the basis are zero. So we
may compute c j − Z j for columns corresponding to non-basic variables only. The entries in
columns corresponding to basic variables will be zero.

Z = cB x B = ( P2 , P1 , P3 , 0, 0).(1950,13,5,90, 72)T = 1950P2 + 13P1 + 5P3

All these entries are shown in the above table.


Step-4. The most negative entry in the P1 (top-priority) row is -1, in the first column.

Therefore x1 is the incoming variable, and by the minimum-ratio rule d2− is the outgoing variable. Thus,
the key element is 1 (= a21).

Step-5. Reducing all other elements in key column 1 to zero with the help of the
key element, the next table is as follows:

cj 0 0 0 0 P2 0 P1 0 P3 0 Mini Ratio

B cB xB / d 2+
Type xB x1 x2 x3 x4 d1− d1+ d 2− d 2+ d3− d3+

d1− P2 390 0 90 0 0 1 -1 -120 120 0 0 390/120


x1 0 13 1 0 0 0 0 0 1 -1 0 0 —

d3− P3 5 0 1 0 0 0 0 0 0 1 -1 —

x3   0   12   0   3   1   0   0   0   -6   6   0   0   12/6 (Min)

x4 0 33 0 6 0 1 0 0 -3 3 0 0 33/3

cj − Z j P3 5 0 -1 0 0 0 0 0 0 0 1

P2 390 0 -90 0 0 0 1 120 -120 0 0

P1 0 0 0 0 0 0 0 1 0 0 0

Here, we again compute cj − Zj for the columns corresponding to non-basic variables only. All
entries in the columns corresponding to basic variables will be zero.

c2 − Z 2 = 0 − ( P2 , 0, P3 , 0, 0).(90, 0,1,3, 6)T = −90 P2 − P3


c6 − Z 6 = 0 − ( P2 , 0, P3 , 0, 0).(−1, 0, 0, 0, 0)T = P2
c7 − Z 7 = P1 − ( P2 , 0, P3 , 0, 0).(−120, −1, 0, −6, −3)T = P1 + 120 P2
c8 − Z8 = 0 − ( P2 , 0, P3 , 0, 0).(120, −1, 0, 6,3)T = −120 P2
c10 − Z10 = 0 − ( P2 , 0, P3 , 0, 0).(0, 0, −1, 0, 0)T = P3

The values of cj − Zj in the P1, P2, P3 rows may also be found easily as follows: after making 1 at the
place of the key element, use it to reduce the entries in the P1, P2, P3 rows corresponding to the
key column to zero.

Z = cB x B = ( P2 , 0, P3 , 0, 0).(390,13,5,12,33)T = 390 P2 + 5P3

Since all entries in the P1 row are ≥ 0, the priority goal P1 is achieved.

Now we proceed to achieve goal P2, without affecting the achievement of the top-priority
goal P1.


Step-6. In the P2 row of the above table, the most negative value is -120, in the column corresponding
to the variable d2+. So this variable is taken as the entering variable. By the minimum-ratio rule,
x3 in the fourth row is the outgoing vector. Thus, 6 (= a48) is the key element. Dividing the fourth
row by 6, we make this entry 1, and with its help we reduce all other
elements in the d2+ column to zero.

Thus, we get the next simplex table as follows:

cj 0 0 0 0 P2 0 P1 0 P3 0 Mini Ratio

B cB xB / d 2+
Type xB x1 x2 x3 x4 d1− d1+ d 2− d 2+ d3− d3+

d1− P2 150 0 30 -20 0 1 -1 0 0 0 0 150/30 = 5

x1   0   15   1   1/2   1/6   0   0   0   0   0   0   0   15/(1/2) = 30

d3− P3 5 0 1 0 0 0 0 0 0 1 -1 5/1 = 5

d 2+   0   2   0   1/2   1/6   0   0   0   -1   1   0   0   2/(1/2) = 4 (Min)

x4 0 27 0 9/2 -1/2 1 0 0 0 0 0 0 27/(9/2) = 6

cj − Z j P3 5 0 -1 0 0 0 0 0 0 0 1

P2 150 0 -30 20 0 0 1 0 0 0 0

P1 0 0 0 0 0 0 0 1 0 0 0

Now, we again compute cj − Zj for the columns corresponding to non-basic variables only. All
entries in the columns corresponding to basic variables will be zero.


c2 − Z 2 = 0 − ( P2 , 0, P3 , 0, 0).(30, 1/ 2, 1, 1/ 2, 9 / 2)T = −30 P2 − P3


c3 − Z 3 = 0 − ( P2 , 0, P3 , 0, 0).(−20, 1/ 6, 0, 1/ 6, − 1/ 2)T = 20 P2
c6 − Z 6 = 0 − ( P2 , 0, P3 , 0, 0).(−1, 0, 0, 0, 0)T = P2
c7 − Z 7 = P1 − ( P2 , 0, P3 , 0, 0).(0, 0, 0, −1, 0)T = P1
c10 − Z10 = 0 − ( P2 , 0, P3 , 0, 0).(0, 0, −1, 0, 0)T = P3

Z = cB x B = ( P2 , 0, P3 , 0, 0).(150,15,5, 2, 27)T = 150 P2 + 5P3

Again, in the P2 row, c2 − Z2 is negative, so this solution is not optimal from the P2 point of view.
We now take the variable x2 in the second column, corresponding to the most negative entry in the P2 row,
as the entering variable; by the minimum-ratio rule, d2+ in the fourth row is the outgoing variable, so the key
element is 1/2 (= a42). Dividing the fourth row by 1/2 to make the key element 1, and then reducing
all other elements in the second column to 0, we get the following reduced matrix.

cj 0 0 0 0 P2 0 P1 0 P3 0
B cB
Type xB x1 x2 x3 x4 d1− d1+ d 2− d 2+ d3− d3+

d1− P2 30 0 0 -30 0 1 -1 60 -60 0 0

x1 0 13 1 0 0 0 0 0 1 -1 0 0

d3− P3 1 0 0 -1/3 0 0 0 2 -2 1 -1

x2   0   4   0   1   1/3   0   0   0   -2   2   0   0

x4 0 9 0 0 -2 1 0 0 9 -9 0 0

cj − Z j   P3   1   0   0   1/3   0   0   0   -2   2   0   1

P2   30   0   0   30   0   0   1   -60   60   0   0

P1   0   0   0   0   0   0   0   1   0   0   0


Again, we compute cj − Zj for the columns corresponding to non-basic variables only.

We get

c3 − Z3 = 0 − ( P2 , 0, P3 , 0, 0).(−30, 0, −1/ 3,1/ 3, −2)T = 30 P2 + (1/ 3) P3


c6 − Z 6 = 0 − ( P2 , 0, P3 , 0, 0).(−1, 0, 0, 0, 0)T = P2
c7 − Z 7 = P1 − ( P2 , 0, P3 , 0, 0).(60,1, 2, −2,9)T = P1 − 60 P2 − 2 P3
c8 − Z8 = 0 − ( P2 , 0, P3 , 0, 0).(−60, −1, −2, 2, −9)T = 60 P2 + 2 P3
c10 − Z10 = 0 − ( P2 , 0, P3 , 0, 0).(0, 0, −1, 0, 0)T = P3

And Z = cB x B = ( P2 , 0, P3 , 0, 0).(30, 13, 1, 4, 9)T = 30 P2 + P3

In the above table we note that there is a negative entry, -60, in the P2 row. But P2 cannot be improved
further, as there is a positive entry below this element in the P1 row (the top-priority row). Similarly, if
we try to improve P3, it is not possible, as there is a positive entry in row P1 below
the negative entry in row P3.

Thus, P2 and P3 cannot be improved further.

Hence the solution of the above GP problem is

x1 = 13, x2 = 4, d1− = 30, d3− = 1, d1+ = 0 = d2− = d2+ = d3+, i.e., 13 radios and 4 transistors
should be manufactured.
For this solution the first (top) priority goal P1 is fully achieved (13 radios), the second-priority
goal P2 is missed by Rs. 30 (here
Profit = 120 × 13 + 90 × 4 = Rs. 1920, and Rs. 1950 − 1920 = Rs. 30), and the last priority
goal P3 is missed by 1 transistor (here 5 − 4 = 1).

Hence the optimum solution to the GP problem is

• x1 = 13, x2 = 4, Minimize Z = (0, 30, 1).
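For readers who want to verify the tableau work, the following sketch (again assuming SciPy; a sequential-LP check rather than the modified simplex itself) solves the three priority levels of the radio/transistor example in order.

```python
# A sequential-LP check of the radio/transistor example (SciPy assumed).
# Variable order: [x1, x2, d1-, d1+, d2-, d2+, d3-, d3+]; capacity limits go in A_ub.
from scipy.optimize import linprog

A_goal = [
    [120, 90, 1, -1, 0,  0, 0,  0],   # profit goal     = 1950
    [1,   0,  0,  0, 1, -1, 0,  0],   # radio goal      = 13
    [0,   1,  0,  0, 0,  0, 1, -1],   # transistor goal = 5
]
b_goal = [1950, 13, 5]
A_ub = [[6, 3, 0, 0, 0, 0, 0, 0],     # assembly hours  <= 90
        [3, 6, 0, 0, 0, 0, 0, 0]]     # finishing hours <= 72
b_ub = [90, 72]

A_eq, b_eq = [row[:] for row in A_goal], list(b_goal)
for var in (4, 2, 6):                 # priority order: d2- (radios), d1- (profit), d3- (transistors)
    c = [0.0] * 8
    c[var] = 1.0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq)
    lock = [0.0] * 8
    lock[var] = 1.0
    A_eq.append(lock)
    b_eq.append(max(res.fun, 0.0))    # freeze this goal's achievement level

print(res.x[:2], res.fun)             # expected: x1 = 13, x2 = 4 and d3- = 1
```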


2.7 PREEMPTIVE GOAL PROGRAMMING AND NON-


PREEMPTIVE (WEIGHTED) GOAL PROGRAMMING

This section presents two approaches for solving GP problems, namely:

1. Preemptive Goal Programming (PGP)
2. Non-Preemptive Goal Programming (NPGP), or Weighted Goal Programming (WGP).
Both methods are based on representing the multiple goals by a single objective function.
The Preemptive Procedure starts by prioritizing the goals in order of importance. The
model then optimizes the goals one at a time in order of priority and in a manner that does
not degrade a higher-priority solution.
In the Non-Preemptive Procedure, the single objective function is the weighted sum of the
functions representing the goals of the problem.
The proposed two methods do not generally produce the same solution. Neither
method, however, is superior to the other, because the two techniques entail distinct
decision-making preferences.
As noted earlier, in some situations a decision maker may face multiple
objectives, and there may be no point in an LP's feasible region satisfying all of them. In
such a case, how can the decision maker choose a satisfactory decision? Goal programming
is one technique that can be used in such situations. The following example illustrates the
main ideas of goal programming with both methods (the Preemptive and the Non-Preemptive
procedures).
Example: The Leon Burnit Advertising Agency is trying to determine a TV advertising
schedule for the Priceler Auto Company. Priceler has three goals:
➢ Goal 1: Its ads should be seen by at least 40 million high-income men
(HIM).
➢ Goal 2: Its ads should be seen by at least 60 million low-income people
(LIP).
➢ Goal 3: Its ads should be seen by at least 35 million high-income women
(HIW).
Leon Burnit can purchase two types of ads: those shown during football games and those
shown during soap operas. At most, $600,000 can be spent on ads. The advertising costs and


potential audiences of a one-minute ad of each type are shown in the table below. Leon Burnit
must determine how many football ads and soap opera ads to purchase.
Ad HIM LIP HIW Cost

Football Game 7 10 5 100,000

Soap Opera 3 5 4 60,000

Table: Millions of Viewers

If we let
𝑥1 = # of minutes of ads shown during football games
𝑥2 = # of minutes of ads shown during soap operas
We can write the constraints of the problem as

7x1 + 3x2 ≥ 40
10x1 + 5x2 ≥ 60
5x1 + 4x2 ≥ 35
100x1 + 60x2 ≤ 600
x1, x2 ≥ 0

From Figure-2.1, we find that no point satisfying the budget constraint meets all three
of Priceler's goals; thus, the problem has no feasible solution. Since it is impossible to meet all of
Priceler's goals, Burnit might ask Priceler to identify, for each goal, a cost (per unit short
of meeting the goal) that is incurred for failing to meet it.
Burnit can then formulate an LP that minimizes the cost incurred in deviating from
Priceler's three goals. The trick is to transform each inequality constraint that represents
one of Priceler's goals into an equality constraint. The cost-minimizing solution might
under-satisfy or over-satisfy a given goal, so we need to define the following deviational
variables:


Figure-2.1

di+ = the amount by which we are over the i − th goal


di− = the amount by which we are under the i − th goal
Using the deviational variables, we can write

7 x1 + 3x2 + d1− − d1+ = 40


10 x1 + 5 x2 + d 2− − d 2+ = 60
5 x1 + 4 x2 + d3− − d3+ = 35

Non-Preemptive Goal Programming


Now suppose Priceler determines that
➢ Each million exposures by which Priceler falls short of the HIM goal costs
Priceler a $200,000 penalty because of lost sales.
➢ Each million exposures by which Priceler falls short of the LIP goal costs
Priceler a $100,000 penalty because of lost sales.


➢ Each million exposures by which Priceler falls short of the HIW goal costs
Priceler a $50,000 penalty because of lost sales.

To find the best solution satisfying the above equations, we can write the following model
with the objective:

Minimize Z = 200d1− + 100d2− + 50d3−
subject to
7x1 + 3x2 + d1− − d1+ = 40
10x1 + 5x2 + d2− − d2+ = 60
5x1 + 4x2 + d3− − d3+ = 35
100x1 + 60x2 + s4 = 600
x1, x2, di−, di+, s4 ≥ 0 for all i

The optimal solution to the above model is

Z = 250; x1 = 6, x2 = 0,
d1+ = 2, d 2+ = d3+ = d1− = d 2− = 0 and d3− = 5.
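This weighted model is easy to check numerically; a minimal sketch assuming SciPy is shown below (the variable ordering is a convention chosen for the sketch, not part of the original text).

```python
# A minimal sketch of the weighted (non-preemptive) Burnit model (SciPy assumed).
# Variable order: [x1, x2, d1-, d1+, d2-, d2+, d3-, d3+, s4].
from scipy.optimize import linprog

c = [0, 0, 200, 0, 100, 0, 50, 0, 0]       # penalties only on the under-achievement variables
A_eq = [
    [7,   3,  1, -1, 0,  0, 0,  0, 0],     # HIM goal:  7x1 + 3x2  + d1- - d1+ = 40
    [10,  5,  0,  0, 1, -1, 0,  0, 0],     # LIP goal: 10x1 + 5x2  + d2- - d2+ = 60
    [5,   4,  0,  0, 0,  0, 1, -1, 0],     # HIW goal:  5x1 + 4x2  + d3- - d3+ = 35
    [100, 60, 0,  0, 0,  0, 0,  0, 1],     # budget:  100x1 + 60x2 + s4        = 600
]
b_eq = [40, 60, 35, 600]

res = linprog(c, A_eq=A_eq, b_eq=b_eq)     # all variables >= 0 by default
print(res.fun, res.x[:2])                  # expected: Z = 250 with x1 = 6, x2 = 0
```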

Preemptive Goal Programming (PGP)


In our LP formulation of the Burnit example, we assumed that Priceler could exactly
determine the relative importance of the three goals. For instance, Priceler determined that
the HIM goal was 2 times as important as the LIP goal, and the LIP goal was 2 times as
important as HIW goal. In many situations, however, a decision maker may not be able to
determine precisely the relative importance of the goals. When this is the case, preemptive
goal programming may prove to be a useful tool. To apply PGP, the decision maker must
rank his or her goals from the most important (goal 1) to least important (goal 𝑛). The
objective function coefficient for the variable representing goal 𝑖 will be Pi where we
assume that
P1 >> P2 >> ........ >> Pn

For the example, we can then write


Minimize Z = P1d1− + P2d2− + P3d3−
subject to
7x1 + 3x2 + d1− − d1+ = 40
10x1 + 5x2 + d2− − d2+ = 60
5x1 + 4x2 + d3− − d3+ = 35
100x1 + 60x2 + s4 = 600
x1, x2, di−, di+, s4 ≥ 0 for all i

Preemptive goal programming problems can be solved by an extension of the simplex known
as the goal programming simplex. To prepare a problem for solution by the goal
programming simplex, we must compute n Row 0s (objective rows), with the i-th row
corresponding to goal i.
We thus have

Row 0 - Objective 1 (goal 1) : Z1 - P1d1- = 0


Row 0 - Objective 2 (goal 2) : Z 2 - P2 d 2- = 0
Row 0 - Objective 3 (goal 3) : Z3 - P3d3- = 0

By organizing these, we have

Row 0 - Objective 1 (goal 1) : Z1 +7P1 x1 +3P1 x2 - P1d1+ = 40P1


Row 0 - Objective 2 (goal 2) : Z 2 +10P2 x1 +5P2 x2 - P2 d 2+ = 60P2
Row 0 - Objective 3 (goal 3) : Z3 +5P3 x1 +4P3 x2 - P3d3+ = 35P3

Z x1 x2 d1+ d 2+ d3+ d1− d 2− d3− s4 RHS

Z1 1 7P1 3P1 − P1 0 0 0 0 0 0 40P1

Z2 1 10P2 5P2 0 − P2 0 0 0 0 0 60P2

Z3 1 5P3 4P3 0 0 − P3 0 0 0 0 35P3

d1− 0 7 3 -1 0 0 1 0 0 0 40

d 2− 0 10 5 0 -1 0 0 1 0 0 60


d3− 0 5 4 0 0 -1 0 0 1 35

s4 0 100 60 0 0 0 0 0 0 1 600

Z x1 x2 d1+ d 2+ d3+ d1− d 2− d3− s4 RHS

Z1 1 0 0 0 0 0 − P1 0 0 0 0

Z2 1 0 5 P2 10 P2 − P2 0 10 P2 0 0 0 20 P2

7 7 7 7

Z3 1 0 13P3 5 P3 0 − P3 5 P3 0 0 0 45 P3

7 7 7 7

x1 0 1 3 −1 0 0 1 0 0 0 40
7 7 7 7

d 2− 0 0 5 10 -1 0

10 1 0 0 20
7 7 7 7

d3− 0 0 13 5 0 -1

5 0 1 45
7 7 7 7

s4 0 0 120 100 0 0 100 0 0 1 200



7 7 7 7

Z x1 x2 d1+ d 2+ d3+ d1− d 2− d3− s4 RHS

Z1 1 0 0 0 0 0 − P1 0 0 0 0

Z2 1 0 − P2 0 − P2 0 0 0 0 − P2 0
10


Z3 1 0 P3 0 0 − P3 0 0 0 − P3 5P3
20

x1 0 1 3 0 0 0 1 0 0 1 6
5 7 100

d 2− 0 0 -1 0 -1 0 0 1 0

1 0
10

d3− 0 0 1 0 0 -1 0 0 1

1 5
20

d1+ 0 0 6 1 0 0 -1 0 0 7 2
5 100

Priorities                                        Deviational Variables
Highest   Medium   Lowest     x1     x2      HIM     LIP     HIW

HIM       LIP      HIW         6      0       0       0       5
HIM       HIW      LIP         5     5/3      0      5/3    10/3
LIP       HIM      HIW         6      0       0       0       5
LIP       HIW      HIM         6      0       0       0       5
HIW       HIM      LIP         3      5       4       5       0
HIW       LIP      HIM         3      5       4       5       0

When a preemptive goal programming problem involves only two decision variables, the
optimal solution can be found graphically. For example, suppose HIW is the highest priority
goal, LIP is the second-highest, and HIM is the lowest. From the Figure, we find that the set
of points satisfying the highest-priority goal (HIW) and the budget constraint is bounded by
the triangle ABC.


Among these points, we now try to come as close as we can to satisfying the second-
highest-priority goal (LIP). Unfortunately, no point in triangle ABC satisfies the LIP goal.
We see from the figure, however, that among all points satisfying the highest-priority goal,
point C (C is where the HIW goal is exactly met and the budget constraint is binding) is the
unique point that comes the closest to satisfying the LIP goal.
Simultaneously solving the following equations, we find that point C = (3, 5) is the solution
that satisfies the HIW goal and the budget constraint and comes closest to satisfying the LIP goal.
5 x1 + 4 x2 = 35
100 x1 + 60 x2 = 600

We can also use a computer package, e.g., MS Excel Solver, to solve preemptive GP models.
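Besides a spreadsheet solver, the coordinates of point C can be recovered with a couple of lines of code; the sketch below assumes NumPy.

```python
# A small sketch that recovers point C by solving the two binding equations
# 5x1 + 4x2 = 35 and 100x1 + 60x2 = 600 simultaneously (NumPy assumed).
import numpy as np

A = np.array([[5.0, 4.0], [100.0, 60.0]])
b = np.array([35.0, 600.0])
print(np.linalg.solve(A, b))   # expected: [3. 5.]  ->  point C = (3, 5)
```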

2.8 APPLICATIONS OF GOAL PROGRAMMING (GP) IN


MANAGEMEMENT SCIENCE

GP has a close correspondence with decision making. As managers are constantly called
upon to make decisions in order to solve problems, this technique is particularly relevant in
the field. Business success relies on effective decision making processes, and GP models can
assist. In particular assigned weights can express the intensity with which the goals are


strived for. Moreover, in management the multiple-goal GP approach can be considered an
extension of the widely used break-even analysis. GP has been applied in different management
fields, such as accounting (budgeting, cost allocation, corporate social reporting),
finance (asset management, portfolio selection), marketing (sales operations, media
planning), operations (inventory management, transportation), and natural resources.
The increasing popularity and usefulness of GP for decision making are particularly
evident in areas such as portfolio management, marketing, and strategic management.

2.9 SUMMARY

➢ At the beginning of this chapter, we introduced GP as a powerful tool for tackling the
multiple and often incompatible goals of an enterprise, some of which may be
non-economic in nature.

➢ The next point of discussion was the concepts of GP. The differences between
GP and LP were brought out. The distinguishing feature of GP is its
ability to use an ordinal, preemptive priority structure for management goals
that may be incommensurable.
➢ The formulations of GP models with its steps have been covered with the
typical and comprehensive examples.
➢ In the graphical method of solving the GP problem, one problem was
formulated and solved graphically for a meaningful appreciation.
➢ The optimal solutions of GP problems by modified simplex method with its
steps have been covered with the typical and comprehensive examples.

2.10 SELF-ASSESSMENT QUESTIONS

1. What is GP?

2. Identify the major differences between LP and GP.


3. What is GP? State clearly its assumptions.
4. An office equipment manufacturer produces two kinds of products, chairs and lamps.
Production of either a chair or a lamp requires 1 hour of production capacity in the
plant. The plant has a maximum capacity of 10 hours per week. The gross margin
from the chair is Rs. 80 and that from a lamp is Rs. 40. The goal of the firm is to
earn a profit of Rs. 800 per week. Formulate this as a GP

problem.

Answer: The GP can be stated as:


Minimize Z = d1− + d1+
Subject to the constraint
80 x1 + 40 x2 + d1− − d1+ = 800,
x1 + x2 ≤ 10
and x1, x2, d1−, d1+ ≥ 0.
5. Formulate the problem given in above question-4, as a GP problem with the following
equally ranked goals
(i) to earn a profit of Rs. 800 per week
(ii) because of the limited sales capacity, the maximum numbers of chairs and lamps
that can be sold are 6 and 8 per week, respectively.

Answer: The GP can be stated as:


Minimize Z = d1− + d 2− + d3−
Subject to the constraint
80 x1 + 40 x2 + d1− − d1+ = 800,
x1 + d2− = 6
x2 + d3− = 8
and x1, x2, d1−, d1+, d2−, d3− ≥ 0.

6. Suppose a firm manufactures two products. Each product requires time in two
production departments: Product 1 requires 20 hours in department 1 and 10 hours in
department 2. Product 2 requires 10 hours in department 1 and 10 hours in department
2. Production time is limited in department 1 to 60 hours and in department 2 to 40
hours. Contributions to profit for the two products are Rs. 40 and Rs. 80, respectively.
Management has established the following goal priorities:
P1 (priority 1): To meet production goals of 2 units for each product.
P2 (priority 2): To maximize profits.
Formulate the problem.

Answer: Minimize Z = P1d2− + P1d3− + P2d1−


Subject to the constraints

40x1 + 80x2 + d1− − d1+ = 1000,
x1 + d2− − d2+ = 2
x2 + d3− − d3+ = 2
and x1, x2, d1−, d1+, d2−, d2+, d3−, d3+ ≥ 0.

7. A company manufactures two products radios and transistors which must be


processed through assembly and finishing departments. Assembly has 90 hours
available; finishing can handle up to 72 hours of work. Manufacturing one radio
requires 6 hours in assembly and 3 hours in finishing. Each transistor requires 3 hours
in assembly and 6 hours in finishing. If profit is Rs. 120 per radio and Rs. 90 per
transistor, determine the best combination of radio and transistors to realize a
maximum profit of Rs. 2000.
Formulate the problem as a GP problem. Also solve the GP problem by graphical as
well as by modified simplex method.
Answer: Minimize Z = d1−
Subject to the constraints

120 x1 + 90 x2 + d1− − d1+ = 2000,


6x1 + 3x2 ≤ 90
3x1 + 6x2 ≤ 72
and x1, x2, d1−, d1+ ≥ 0.
where x1 and x2 are the number of radios and transistors produced respectively.
Optimum solution: No. of radios = x1 = 12 and No. of transistors = x2 = 6 .
Minimize Z = Rs.20
The profit target is missed by Rs. 20.
i.e., The profit earned is Rs. 2000-20 = Rs. 1980

8. In the problem given in question-7, the company sets the following two equally
ranked goals
(i) reach a profit goal of Rs. 1500
(ii) meet a product of radios goal of 10
Formulate the problem as a GP problem and solve by graphical as well as by modified
simplex method.


Answer: Minimize Z = d1− + d2−


Subject to the constraints

120 x1 + 90 x2 + d1− − d1+ = 1500


6x1 + 3x2 ≤ 90
3x1 + 6x2 ≤ 72
x1 + d2− − d2+ = 10
and x1, x2, d1−, d1+, d2−, d2+ ≥ 0.
where x1 and x2 are the number of radios and transistors produced respectively.
Optimum solution: No. of radios = x1 = 25 / 2 and No. of transistors = x2 = 0 .
Minimize Z = 0
Thus, profit is exactly Rs. 1500
and the goal of 10 radios is exceeded by (25/2) − 10 = 5/2.

9. Find x1 , x2 , to Minimize Z = (d1− , d 2− )


Subject to the constraints
x1 + x2 + d1− − d1+ = 20,
4 x1 + 5 x2 + d 2− − d 2+ = 150
and x1, x2, d1−, d1+, d2−, d2+ ≥ 0.

x1 = 75 / 2, x2 = 0, Minimize Z = (0, 0), d1+ = 35 / 2,


Answer:
or x1 = 0, x2 = 30, Minimize Z = (0, 0), d1+ = 10.

10. A manufacturing firm produces two types of products A and B. According to


past experience, production of either Product A or Product B requires an average
of one hour in the plant. The plant has a normal production capacity of 400
hours a month. The marketing department of the firm reports that because of
limited market, the maximum number of Product A and Product B that can be
sold in a month are 240 and 300 respectively. The net profit from the sale of
Product A and Product B are Rs. 800 and Rs. 400, respectively. The manager of
the firm has set the following goals, arranged in order of importance (pre-
emptive priority factors):
P1: He wants to avoid any underutilization of normal production capacity.
P2: He wants to sell maximum possible units of Product A and Product B.

Since the net profit from the sale of Product A is twice the amount from that of
Product B, the manager has twice, as much desire to achieve sales for Product A
as for Product B.
P3: He wants to minimize the overtime operation of the plant as much as
possible.
Solve this problem by Graphical Method of GP as well as by modified simplex
method.

Answer: Minimize Z = P1 d1− + P2 (2d2− + d3−) + P3 d1+

Subject to the constraints

x1 + x2 + d1− − d1+ = 400,
x1 + d2− = 240,
x2 + d3− = 300,
and x1, x2, d1−, d1+, d2−, d3− ≥ 0.
where x1 and x2 are the number of units of product A and product B produced.
Optimum solution: No. of product A = x1 = 240 and No. of product B = x2 = 300 .
The overtime minimization goal (P3) could not be achieved: the plant has to operate 140 hours
of overtime beyond the normal capacity of 400 hours a month.

11. Find x1 , x2 , to Minimize Z = (3d1+ + 2d2+ , d3− , d4− )


Subject to the constraints
x1 + x2 + d1− − d1+ = 8,
x1 + d2− − d2+ = 3,
3x1 + 5x2 + d3− − d3+ = 65,
x1 + x2 + d4− − d4+ = 65,
and x1, x2, d1−, d1+, d2−, d2+, d3−, d3+, d4−, d4+ ≥ 0.

Answer: x1 = 0, x2 = 8, Minimize Z = (0, 25,57).


2.11 OBJECTIVE QUESTIONS

A. Fill in the Blanks.


Fill in the blanks "……" so that the following statements are complete and correct.
1. In GP, the deviational variables must satisfy di− × di+ = …… .
2. GP begins with the most important goal and continues until the achievement of a ………. important goal.
3. In a GP problem, a constraint having an unachieved variable is expressed as a ………. than or equal to type constraint.
4. In the simplex method of GP, the variable to enter the solution mix is selected from the ………. priority row with the most ………. cj − Zj value in it.
5. Consider a goal with constraint g(x1, x2, ……, xn) + d1− ≥ b1 (d1− ≥ 0) with d1− in the objective function. Then the constraint is active provided ………

B Multiple Choice Questions.


Indicate the correct answer for each question by writing the corresponding letter from
(a), (b), (c) and (d).
6. In GP problem, goals are assigned priorities such that
(a) goals may not have equal priority
(b) higher priority goals must be achieved before lower priority goals.
(c) goals of greatest importance are given lowest priority
(d) None of these.
7. Deviational variables in GP model must satisfy following conditions
(a) di+ − di− = 0 (b) di+ × di− = 0
(c) di+ + di− = 0 (d) none of these.
8. For applying a GP approach, decision-maker must
(a) set target for each of the goals
(b) assign pre-emptive priority to each goal
(c) assume that linearity exists in the use of resources to achieve goals


(d) all of these.


9. In GP at optimality, which of the following conditions indicate that a goal has been
exactly satisfied
(a) positive deviational variable is in the solution mix with a negative value
(b) both positive and negative deviational variables are in the solution mix
(c) both positive and negative deviational variables are not in the solution mix
(d) none of these.

10. Consider a goal with constraint g ( x1 , x2 ,......, xn ) + d1− − d1+ = b1 and the term
3d1− + 2d1+ in the objective function, the decision-maker
(a) prefers g(x1, x2, ……, xn) ≤ b1 rather than ≥ b1
(b) prefers g(x1, x2, ……, xn) ≥ b1 rather than ≤ b1
(c) is not concerned with either ≥ b1 or ≤ b1
(d) none of these.
Answers To Objective Questions
1. 0. 2. less. 3. greater. 4. highest; negative. 5. d1− > 0.
6. (b). 7. (b). 8. (d ). 9. (c). 10. (b).

2.12 GLOSSARY

• LP: Linear Programming is a mathematical modelling technique in which a linear


function is maximized or minimized when subjected to various constraints.

• GP: Goal Programming is an extension of Linear Programming in which targets are
specified for a set of constraints.

• PEGP: Preemptive Goal Programming, where there is a hierarchy of priority levels for
the goals: goals of primary importance receive first-priority attention, those of
secondary importance receive second-priority attention, and so forth (if there are more
than two priority levels).

• NPEGP/WGP: Non-Preemptive Goal Programming/Weighted Goal Programming,
where all the goals are of roughly comparable importance.


2.13 REFERENCES & SUGGESTED BOOKS

1. Anderson, D., Sweeney, D., Williams, T., Martin, R.K. (2012). An introduction to
management science: quantitative approaches to decision making (13th ed.). Cengage
Learning.
2. Balakrishnan, N., Render, B., Stair, R. M., & Munson, C. (2017). Managerial decision
modeling. Upper Saddle River, Pearson Education.
3. Hillier, F.& Lieberman, G.J. (2014). Introduction to operations research (10th
ed.).McGraw-Hill Education.
4. Powell, S. G., & Baker, K. R. (2017). Business analytics: The art of modeling with
spreadsheets. Wiley.
5. Swarup, K., Gupta, P. K., & Mohan, M. (2012). Introduction to operations research (16th ed.). Sultan Chand & Sons.
6. Taha, H. A. (2017). Operations research: An introduction (10th ed., Global Edition). Pearson Education Ltd.


LESSON 3
WAITING LINE MODELS
Dr. Shubham Agarwal
Associate Professor
New Delhi Institute of Management
GGSIP University
[email protected]

STRUCTURE

3.1 Learning Objectives


3.2 Introduction
3.3 Basic elements of queuing models
3.4 Role of poisson and exponential distributions
3.5 Symbols and notations used
3.6 Distribution of arrivals
3.6.1 Arrival Distribution Theorem
3.7 Distribution of interarrival time
3.8 Markovian process of interarrival time
3.9 States of queuing system
3.9.1 Transient state
3.9.2 Steady state
3.9.3 Explosive state
3.10 Some important definitions
3.11 Kendall - lee notations
3.12 Poisson queues
3.12.1 Model I (M/M/1) : (∞/FCFS)
3.12.2 Model II (M/M/c) : (∞/FCFS)
3.12.3 Model III (M/M/1) : (N/FCFS)
3.12.4 Model IV (M/M/c) : (N/FCFS)
3.13 Applications of queuing theory
3.14 Limitations of queuing theory
3.15 Summary
3.16 Glossary
3.17 Answers to In-text Questions
3.18 Self-Assessment Questions

3.19 Suggested Readings

3.1 LEARNING OBJECTIVES

After reading the unit, students will be able to


• Define basic elements of queuing system.
• Describe role of poisson and exponential distributions.
• Explain markovian process of interarrival time.
• Understand the concept of different queuing models.
• Solve the problems related to waiting line models.
• Identify the situation where the waiting line models can be used.

3.2 INTRODUCTION

Queues or waiting lines are a typical occurrence both in everyday life and in many corporate
and industrial settings. A queue forms whenever the capacity to deliver a service cannot keep
up with the current demand for it. Moreover, because customers arrive at random points in
time, queues can arise even when the service rate is larger than the arrival rate. The most
frequent locations in daily life where lines form are movie ticket booths, bank counters,
railroad reservation counters, phone booths, doctor's offices, gas stations, post offices, etc.
In addition to these, queues also form in the manufacturing sector in instances where goods
are waiting for the next step in the process or waiting to be moved to another location,
machines are waiting for repair parts to be assembled in assembly lines, workers are waiting
at the tool crib to obtain tools, etc. This could lengthen the production cycle, raising the
product's cost, and it might push back the delivery date. Because the findings are frequently
employed when making business decisions regarding the resources required to deliver


service, queuing theory is regarded as one of the standard approaches of operations research
and management science (along with linear programming, simulation, etc.).
The mathematical analysis of queues is known as queueing theory. The theory makes it possible to
mathematically analyse a number of connected processes, such as getting to the front of the line,
waiting in line, and receiving service. In order to reduce the average cost of using the queuing
system and the cost of service, the queuing model aims to determine the ideal service rate and
server count. Numerous further mathematical models for understanding and resolving issues with
waiting lines are provided by queuing theory.

3.3 BASIC ELEMENTS OF QUEUING MODELS

The units requiring service enter the queuing system on their arrival and join a queue. The
queue represents the number of customers waiting for service. A queue is called finite if the
number of units in it is finite otherwise it is called infinite. Some of the basic elements of
queuing system are as follows:
• Input source of queue
• Queue discipline (Service discipline)
• Service mechanism (Service system)
• System output

3.3.1 Input source of queue:


Customers requiring service are generated at different times by an input source, commonly
known as population. The rate at which customers arrive at the service facility is determined
by the arrival process. An input source is characterized by:
• Size of the calling population
• Pattern of arrivals at the system
• Behavior of the arrivals (Customer behavior)

a) Size of the calling population

The size represents the total number of potential customers who will require service. The size
of the population is described by the following factors given below:
i) According to source- The source of customers can be finite or infinite. For example,
all people of a city or state (and others) could be the potential customers at a
supermarket. The number of people being very large, it can be taken to be infinite

whereas there are many situations in business and industrial conditions where we
cannot consider the population to be infinite; it is finite.
ii) According to numbers- The customers may arrive for service individually or in
groups. Single arrivals are illustrated by patients visiting a doctor, students reaching at
a library counter etc. On the other hand, families visiting restaurants, ships
discharging cargo at a dock are examples of group or batch arrivals.
iii) According to time- Customers arrive in the system at a service facility according to
some known schedule or else they arrive randomly. Arrivals are considered random
when they are independent of one another and their occurrence cannot be predicted
exactly. The queuing models wherein customer’s arrival times are known with
certainty are categorized as deterministic models and are easier to handle. On the
other hand, a substantial majority of the queuing models are based on the premise that
the customers enter the system stochastically, at random points in time.
b) Pattern of arrivals at the system

Customers' arrival processes (or patterns) at the support system are divided into two groups:
static arrival processes and dynamic arrival processes.
i) In static arrival process, the control depends on the nature of arrival rate (random or
constant). Random arrivals are either at a constant rate or varying with time. Thus to
analyze the queuing system, it is necessary to attempt to describe the probability
distribution of arrivals. From such distributions we obtain average time between
successive arrivals, also called “inter-arrival time” (time between two consecutive
arrivals), and the average arrival rate (i.e. number of customers arriving per unit of
time at the service system).
ii) In a dynamic arrival process, both the service centre and the customers have control.
By varying staffing levels at various service times, varying service fees at various
times, or allowing entrance with appointments, the service facility can adjust its
capacity to match changes in the intensity of demand.
Frequently in queuing problems, the number of arrivals per unit of time can be estimated by a
probability distribution known as the Poisson distribution, as it adequately supports many real
world situations.

c) Behavior of arrivals (Customer behavior)

The behaviour or attitude of the customers entering the queueing system is another factor to
take into account. Customers can be divided into two groups based on how patient or


impatient they are. A customer is described as patient if, upon entering the service system, he
remains there until served, regardless of how long he must wait. In contrast, an impatient
customer is one who waits in the queue for a predetermined amount of time before leaving
due to factors like the length of the queue in front of him. Some interesting observations of
customer behavior in queues are as follows:
i) Balking- Some customers even before joining the queue get discouraged by seeing
the number of customers already in service system or estimating the excessive waiting
time for desired service decide to return for service at a later time. This is known as
balking.
ii) Reneging- Customers after joining the queue wait for sometime and leave the service
system due to intolerable delay, so they renege.
iii) Jockeying- Customers who switch from one queue to another hoping to receive
service more quickly are said to be jockeying.
iv) Collusion- Several customers may collaborate so that only one of them joins the queue
and demands service on his own behalf as well as on behalf of the others; this is known as collusion.

3.3.2 Queue discipline (Service discipline):


In the queue structure, the important thing to know is the queue discipline. The queue
discipline is the order or manner in which customers from the queue are selected for service.
There are a number of queue disciplines in which customers in the queue are served. Some of
these are as follows:

(a) Static queue disciplines are based on the individual customer's status in the queue. Few
of such disciplines are:
i) First-come-first-served (FCFS)- If the customers are served in the order of their
arrival, then this is known as the first-come-first-served (FCFS) service discipline.
ii) Last-come-first-served (LCFS)- Sometimes, the customers are serviced in the
reverse order of their entry so that the ones who join the last are served first, then this
is called last-come-first-served (LCFS) service discipline.

(b) Dynamic queue disciplines are based on the individual customer attributes in the queue.
Few of such disciplines are:
i) Service in Random Order (SIRO)- Under this rule customers are selected for
service at random, irrespective of their arrivals in the service system. In this every


customer in the queue is equally likely to be selected. The time of arrival of the
customers is, therefore, of no relevance in such a case.
ii) Priority Service- Under this rule customers are grouped in priority classes on the
basis of some attributes such as service time or urgency or according to some
identifiable characteristic to provide the service. The treatment of VIPs in preference
to other patients in a hospital is an example of priority service.
iii) Round Robin service- Every customer gets a time slice. If his service is not
completed, he will re-enter the queue.
3.3.3 Service mechanism (Service system):

The uncertainties involved in the service mechanism are the number of servers, the
number of customers getting served at any time, and the duration and mode of service.
Networks of queues consist of more than one servers arranged in series and/or parallel.
Random variables are used to represent service times, and the number of servers, when
appropriate. If service is provided for customers in groups, their size can also be a random
variable. A service system has only a few components listed below:
• Configuration of the service system
• Speed of the service
• System capacity

a) Configuration of the service system


The conditions of the queue determine how customers enter the support system. If the server
is not busy when the clients arrive, they are served right away. If not, the client is prompted
to join the queue, which can be set up in a variety of ways. The configuration of the service
system refers to the physical layout of the service centres. Typically, service systems are
categorised according to their amount of servers or channels:

i) Single Server – Single Queue-Single server models are those where there is only one
queue and one service station facility, and the customer waits until the service point is
prepared to accept him in for servicing. A library counter serving as an example of a
single server facility, with students gathering at it.


customers arrive → queue → service facility → customers leave

ii) Single Server – Several Queues- In this type of facility there are several queues and
the customer may join any one of these but there is only one service channel.

customers arrive → queues → service facility → customers leave

iii) Several (Parallel) Servers – Single Queue-This kind of strategy uses multiple servers,
each of which offers the same kind of service. When one of the service channels is prepared
to receive the customers in for servicing, they wait in a single line.

customers arrive → queue → service stations → customers leave


iv) Several Servers – Several Queues-This kind of model comprises of a number of servers,
each of which has its own queue. An illustration of this type of model is the various cash
counters in an electricity office where customers can settle their electricity bills.

customers arrive → queues → service stations → customers leave

v) Service facilities in a series-In this, a customer approaches the first station, receives some
service there, moves to the next station, receives more service there, and then does it all over
again and so forth, until the user ultimately exits the system after receiving the full service.
For instance, the machining of a particular steel object might involve a succession of single
servers performing cutting, turning, knurling, drilling, grinding, and packaging operations.

customers arrive → queue → service mechanism → queue → service mechanism → customers leave

b) Speed of Service
In a queuing system, the speed with which service is provided can be expressed in either of
two ways as, service rate and as service time.
i) The service rate describes the average number of customers that can be
served per unit time. Service rate is denoted by µ.
ii) The service time indicates the amount of time needed to serve a customer.
Service time is the reciprocal of service rate, i.e, service time = 1/ µ.


Eg: If a cashier can attend, on an average 10 customers in an hour, the service rate would be
expressed as 10 customers/hour and service time would be equal to 6 minutes/customer.

c) System capacity
In a queuing system, it is important to take into account how many consumers can wait at
once. If the waiting area is big, it can be assumed that it is practically infinite. But based on
our regular use of telephone networks, we know that the size of the buffer that receives our
call while we wait for a free line is crucial as well.
3.3.4 System output:

The rate at which consumers are served is referred to as system output. It depends on how long the
facility needs to provide the service and how the service facility is set up. In a single channel facility,
the queue's output is unimportant since the client leaves after obtaining the service. However, in a
multistage channel facility, the queue's output is crucial because the probability of a service station
breakdown can affect the queues. The queue prior to the breakdown will get longer, while the line
after the breakdown will get shorter.

3.4 ROLE OF POISSON AND EXPONENTIAL DISTRIBUTIONS

Constructive queuing models are analytically reliable representations of real-world systems.


These two requirements are frequently satisfied by a queuing model based on the Poisson
process and its companion exponential probability distribution. A Poisson process simulates
the emergence of random events from a memoryless process, such as the arrival of a
customer, a web server's request for action, or the accomplishment of the requested actions.
That is, the amount of time that will pass from the present moment till the next event does not
depend on when the previous event occurred. The observer counts the number of events that
take place within a period of defined length while calculating the Poisson probability
distribution.The observer keeps track of how much time passes between consecutive events
in the (negative) exponential probability distribution. The underlying physical process is
memoryless in both cases.
Inputs from the environment are frequently handled by models based on the Poisson process
in a way that closely resembles how the system being represented would handle the same
inputs. The resulting analytically obedient models provide insight into the system being
modelled as well as the shape of their solution. Even a Poisson-based queuing model that
performs comparably poorly in simulating the specific system performance can be helpful.
System designers that like to incorporate a security aspect in their designs are attracted by the
fact that such models frequently give evaluations of "worst-case" situational

scenarios. Additionally, the shape of a queuing problem's solution, whose precise behaviour is


poorly replicated, is frequently revealed by studying models based on the Poisson process. As
a result, queuing models are commonly represented by the exponential distribution as Poisson
processes.
The exponential distribution with parameter λ has density λe^(−λt) for t ≥ 0. If T is a random
variable that represents interarrival times with the exponential distribution, then
P(T ≤ t) = 1 − e^(−λt)
and P(T > t) = e^(−λt).
This distribution lends itself well to modeling customer interarrival times or service times for
a number of reasons. The first is the fact that the exponential function is a strictly decreasing
function of t. This means that after an arrival has occurred, the amount of waiting time until
the next arrival is more likely to be small than large. Another significant property of the
exponential distribution is what is known as the no-memory property. The no-memory
property suggests that the time until the next arrival will never depend on how much time has
already passed. This makes intuitive sense for a model where we’re measuring customer
arrivals because the customers’ actions are clearly independent of one another.
The Poisson distribution is used to calculate the likelihood that a specific number of
arrivals will occur within a specific time frame. The Poisson probability of n arrivals in time t,
with rate parameter λ, is given by
P(n arrivals in time t) = (λt)^n e^(−λt) / n!
where n is the number of arrivals.
We find that if we set n = 0, the Poisson distribution gives us e^(−λt), which is equal to P(T > t)
from the exponential distribution.
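As a quick numerical illustration of this relation (not part of the derivation), the following short Python sketch evaluates P(T > t) from the exponential distribution and the Poisson probability of zero arrivals in time t; the two values agree. The rate and time used are arbitrary assumptions.

import math

lam = 6.0          # assumed arrival rate: 6 customers per hour
t = 0.5            # time window of half an hour

def expo_tail(lam, t):
    # P(T > t) when interarrival times are exponential with rate lam
    return math.exp(-lam * t)

def poisson_pmf(n, lam, t):
    # P(exactly n arrivals in time t) for a Poisson process with rate lam
    return (lam * t) ** n * math.exp(-lam * t) / math.factorial(n)

print(expo_tail(lam, t))        # about 0.0498
print(poisson_pmf(0, lam, t))   # same value: zero arrivals in t <=> next arrival after t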


IN-TEXT QUESTIONS

1. The objective of the queuing model is to find out the ___________ service rate
and the number of servers so that the average cost of being in the queuing
system and the cost of service are minimized.

2. Which of the following is/are the basic elements of queuing system?


(a) Queue discipline
(b) Service mechanism
(c) System output
(d) All of above
3. Some customers decide to return for assistance at a later time after
becoming discouraged by the number of people already in the queue or
the estimated lengthy wait time before even getting in line. This is known
as:
(a) Jockeying
(b) Balking
(c) Reneging
(d) Collusion
4. The _____________ is used to determine the probability of a certain number of
arrivals occurring in a given time period.

3.5 SYMBOLS AND NOTATIONS USED

The following are some symbols and terminology used in the queuing models:

n = Number of units in the system


Pn(t) = Transient state probability of n units in the system at time t
En = State in which n units in the system
Pn = Steady state probability of having n units in the system
λn = Mean arrival rate

µn = Mean service rate


λ = Mean arrival rate when λn is constant for all n
µ = Mean service rate when µn is constant for all n ≥ 1
c = Number of parallel service stations
ρ = Traffic intensity (λ/µ)
ΦT(n) = Probability of n services in time T
ψ(w) = Probability density function of waiting time in the system
Ls = Expected no. of units in the system (Length of the system)
Lq = Expected no. of units in the queue (Length of the queue)
Ws = Expected waiting time per customer in the system
Wq = Expected waiting time per customer in the queue
(W/W > 0) = Expected waiting time of a customer who has to wait
(L/L > 0) = Expected length of non-empty queues
P(W > 0) = Probability of a customer having to wait for service

3.6 DISTRIBUTION OF ARRIVALS

3.6.1 Arrival Distribution Theorem:

If the arrivals are completely random, then the probability distribution of the number of
arrivals in a fixed time interval follows a Poisson distribution.
Proof: To prove this theorem we shall make some assumptions which are as follows:
Let there are n units in the system at time t.
1) The probability of one arrival in small time interval Δt = λ.Δt
2) The probability of more than one arrival in the time interval Δt is zero because Δt is very
small.
3) The process has independent increments.
4) Pn(t) be the probability of n arrivals in time t.
There may be two cases:
Case-I: when n > 0 then, two events can occur, which are shown below:
(Transition diagram: n units at time t with no arrival in Δt, or (n − 1) units at time t with one arrival in Δt, each giving n units at time t + Δt.)


Let there are n units in the system at time t, no arrival takes place in time Δt. Hence there
remain n units in the system at time (t + Δt).
Probability of this event = (probability of n units in the system)
x (probability of no arrival in time Δt)
= Pn(t) . (1 – λ.Δt)
Let there are (n – 1) units in the system at time t, one arrival takes place in time Δt. So there
are n units in the system at time (t + Δt).
Probability of this event = [probability of (n – 1) units in the system]
x (probability of one arrival in time Δt)
= Pn - 1(t) .λ.Δt
Hence the probability of n units in the system at time (t + Δt) is,
Pn(t + Δt) = Pn(t).(1 − λ.Δt) + Pn−1(t).λ.Δt
Pn(t + Δt) = Pn(t) − λ.Pn(t).Δt + Pn−1(t).λ.Δt
[Pn(t + Δt) − Pn(t)]/Δt = − λ.Pn(t) + λ.Pn−1(t)
Taking limit Δt → 0, we get
Pn′(t) = − λ.Pn(t) + λ.Pn−1(t), for n > 0 ……….(1)

Case II: When n = 0 then,


(Transition diagram: 0 units at time t with no arrival in Δt gives 0 units at time t + Δt.)

Let there is no unit in the system at time t, no arrival takes place in time Δt. Hence there
would be zero units in the system at time (t + Δt).
Probability of this event = (probability of no unit in the system)
x (probability of no arrival in time Δt)
P0(t + Δt) = P0(t).(1 − λ.Δt)
P0(t + Δt) − P0(t) = − λ.P0(t).Δt
[P0(t + Δt) − P0(t)]/Δt = − λ.P0(t)
Taking limit Δt → 0, we get
P0′(t) = − λ.P0(t), for n = 0 ……….(2)

In order to find the probability distribution, we shall make use of the generating function of Pn(t), i.e.,
P(z,t) = Ʃ(n=0 to ∞) Pn(t).z^n ……….(3)
Differentiating with respect to t, we get
P′(z,t) = Ʃ(n=0 to ∞) Pn′(t).z^n ……….(4)
Multiplying equation (1) by z^n and taking summation from n = 1 to n = ∞, we get
Ʃ(n=1 to ∞) z^n.Pn′(t) = − λ Ʃ(n=1 to ∞) z^n.Pn(t) + λ Ʃ(n=1 to ∞) z^n.Pn−1(t)
Adding equation (2), multiplied by z^0, to the above equation, we get
Ʃ(n=0 to ∞) z^n.Pn′(t) = − λ Ʃ(n=0 to ∞) z^n.Pn(t) + λ Ʃ(n=1 to ∞) z^n.Pn−1(t)
Making use of equations (3) and (4), we get
P′(z,t) = − λ.P(z,t) + λ.P(z,t).z
or, P′(z,t) = λ.P(z,t).(z − 1)
or, P′(z,t)/P(z,t) = λ.(z − 1)
Integrating with respect to t, we get
log P(z,t) = λ.(z − 1).t + c
Putting t = 0,
log P(z,0) = c ……….(5)
Now from equation (3), we have
P(z,0) = Ʃ(n=0 to ∞) Pn(0).z^n = P0(0).z^0 + Ʃ(n=1 to ∞) Pn(0).z^n
Since P0(0) = probability of zero units at time zero = 1, and Pn(0) = probability of n units at time zero = 0 for n ≥ 1,
therefore, P(z,0) = 1 + 0 = 1
Now from equation (5),
c = log(1) = 0
Hence, log P(z,t) = λ.(z − 1).t
P(z,t) = e^(λ(z − 1)t)
or, P(z,t) = e^(λzt).e^(−λt) ……….(6)
Now from equation (3), Pn(t) is the coefficient of z^n in P(z,t), i.e.,
Pn(t) = (1/n!).[d^n P(z,t)/dz^n] at z = 0
Therefore, using equation (6), we have
Pn(t) = (1/n!).[d^n (e^(λzt).e^(−λt))/dz^n] at z = 0
= (e^(−λt)/n!).[d^n e^(λzt)/dz^n] at z = 0
= (e^(−λt)/n!).(λt)^n.e^(λzt) at z = 0
Pn(t) = e^(−λt).(λt)^n / n!
which is the Poisson distribution; hence the theorem.

3.7 DISTRIBUTION OF INTERARRIVAL TIME

Theorem: If n, the number of arrivals in time t, follows the Poisson distribution
Pn(t) = e^(−λt).(λt)^n / n!,
then T, the interarrival time, obeys the negative exponential law
a(T) = λe^(−λT).
Proof: Given that the arrivals follow the Poisson distribution, i.e.,
Pn(t) = e^(−λt).(λt)^n / n! ……….(1)
If (T + ΔT) be the interarrival time, then putting t = T + ΔT and n = 0, we get
P0(T + ΔT) = e^(−λ(T + ΔT)) = e^(−λT).e^(−λ.ΔT)
Expanding e^(−λ.ΔT) up to the first approximation, because ΔT is very small, we get
P0(T + ΔT) = e^(−λT).[1 − λ.ΔT] ……….(2)
Again, if T be the interarrival time, then putting t = T and n = 0 in equation (1), we get
P0(T) = e^(−λT) ……….(3)
Using this, equation (2) implies
P0(T + ΔT) = P0(T).[1 − λ.ΔT]
or, P0(T + ΔT) = P0(T) − P0(T).λ.ΔT
[P0(T + ΔT) − P0(T)]/ΔT = − λ.P0(T) = − λ.e^(−λT)
Taking limit ΔT → 0, we get
P0′(T) = − λ.e^(−λT)
Hence, by the definition of the probability density function, we have
a(T) = λe^(−λT),
omitting the negative sign because a probability density function is always positive.
Hence the theorem.
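The last two results can also be illustrated by a small simulation. The Python sketch below (an illustration only; names and parameter values are assumptions) generates exponential interarrival times with rate λ and then checks that the observed mean interarrival time is close to 1/λ and that the observed arrival rate is close to λ.

import random

random.seed(1)
lam = 4.0                      # assumed arrival rate per unit time
horizon = 10_000.0             # length of the simulated period

arrival_times, t = [], 0.0
while True:
    t += random.expovariate(lam)   # draw an exponential interarrival time
    if t > horizon:
        break
    arrival_times.append(t)

gaps = [b - a for a, b in zip(arrival_times, arrival_times[1:])]
print(sum(gaps) / len(gaps))           # close to 1/lam = 0.25
print(len(arrival_times) / horizon)    # close to lam = 4.0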

3.8 MARKOVIAN PROCESS OF INTERARRIVAL TIME

Theorem: According to the Markovian (memoryless) property of interarrival times, the probability that the next arrival occurs within a further interval of length t1 − t0, given that no arrival has occurred up to time t0, does not depend on t0; that is,
P[T ≤ t1 | T ≥ t0] = P[0 ≤ T ≤ t1 − t0]
Proof: Using conditional probability, we can write
P[T ≤ t1 | T ≥ t0] = P[t0 ≤ T ≤ t1] / P[T ≥ t0] ………...(1)
Since the interarrival times are exponentially distributed, the R.H.S. of equation (1) becomes
[∫(t0 to t1) λe^(−λt) dt] / [∫(t0 to ∞) λe^(−λt) dt] = [e^(−λt0) − e^(−λt1)] / [e^(−λt0)]
Hence,
P[T ≤ t1 | T ≥ t0] = 1 − e^(−λ(t1−t0)) …………(2)
Also,
P[0 ≤ T ≤ t1 − t0] = ∫(0 to t1−t0) λe^(−λt) dt = 1 − e^(−λ(t1−t0)) ………….(3)
From equations (2) and (3), we have
P[T ≤ t1 | T ≥ t0] = P[0 ≤ T ≤ t1 − t0]
This is the Markovian (memoryless) property of the interarrival time.
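A quick numerical check of this identity, with illustrative values of λ, t0 and t1, is sketched below:

import math

lam, t0, t1 = 3.0, 2.0, 2.5   # assumed rate and time instants

# P(T <= t1 | T >= t0), computed from the definition of conditional probability
lhs = (math.exp(-lam * t0) - math.exp(-lam * t1)) / math.exp(-lam * t0)

# P(0 <= T <= t1 - t0), the unconditional probability of an arrival within t1 - t0
rhs = 1 - math.exp(-lam * (t1 - t0))

print(lhs, rhs)   # both about 0.7769, illustrating the memoryless property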

3.9 STATES OF QUEUING SYSTEM

The analysis of queuing theory involves the study of the behavior of the system over time.
The state of the system is the basic concept in the analysis of the queuing theory. The state of
the queuing system may be classified as follows:

3.9.1 Transient state:

When a system's operational features rely on time, it is said to be in a transitory state.


For example, a queuing system is said to be in a transient state when the likelihood of having
clients in the system depends on time. This typically happens towards the beginning of the
system's operation, when it starts to go through a number of changes. But after some time, it
becomes stable.

3.9.2 Steady state:

A queuing system is said to be in a steady state when the probability of having a


certain amount of customers is independent of time. When a system's working characteristics
become timely independent, it is said to be in steady state. This typically occurs as a device
ages.

3.9.3 Explosive state:

The length of the queue will grow over time and eventually reach infinity if the system's arrival rate
is higher than its service rate. Such a state is known as explosive state.


3.10 SOME IMPORTANT DEFINITIONS

(i) System length- The average number of customers in the system, those waiting to be and
those being serviced, is known as the length of the system.
(ii) Queue length-Queue length is the average number of customers in line waiting to obtain
service.
(iii) Waiting time in the queue-It is the typical length of time a customer must wait in line
before it is put into operation.
(iv) Waiting time in the system- It is the amount of time, on average, that a consumer
spends using the system between joining the queue and receiving their service.
(v) Servicing time- The time taken for servicing of a unit is known as its servicing time.
(vi) Mean arrival rate- The expected number of arrivals occurring in the time interval of
length unity is called mean arrival rate. It is denoted by λ.
(vii) Mean arrival time- It is the reciprocal of mean arrival rate and is defined as,
Mean arrival time = 1/mean arrival rate = 1/λ.
(viii) Mean servicing rate- The expected number of services completed in the time interval of
length unity is called mean servicing rate. It is denoted by µ.
(ix) Mean servicing time- It is the reciprocal of mean servicing rate and is defined as, Mean
servicing time = 1/mean servicing rate = 1/µ.
(x) Server busy period- The busy period of the server is the time during which it remains
busy in servicing.
(xi) Server idle period- When all the units in the queue are served, the idle period of the
server begins and it continues up to the time of the arrival of the unit i.e, the idle period
of the server is the time during which he remains free because there is no unit in the
system to be served.
(xii) Traffic intensity (Utilization factor)- An important parameter in any queuing system is
the traffic intensity also called the load or the utilization factor, defined as the ratio of
the mean servicing time over the mean arrival time. It is denoted by ρ and is defined as,
ρ = λ/µ


3.11 KENDALL - LEE NOTATIONS

Notation for describing the characteristics of a queuing model was first suggested by David
G. Kendall in 1953. Kendall's notation introduced an (a/b/c) queuing notation that can be
found in all standard modern works on queuing theory.
Where,
a describes the interarrival time distribution,
b the service time distribution and
c the number of servers

The symbols conventionally used for a and b are:


M for exponential distribution (M stands for Markov),
D for deterministic distribution and
G (or GI) for general distribution

For example, "G/D/1" would indicate a General arrival process, a Deterministic (constant
time) service process and a single server. Some other examples are M/M/1, M/M/c, M/G/1,
G/M/1 and M/D/1.

Later in 1966, A. Lee extended Kendall’s notations by adding fourth (d) and fifth (e)
characteristics to the notation to cover other queuing models. Then the following symbolic
expression can be used to fully specify the queuing model:
(a/b/c) : (d/e)
Where a, b and c describes their usual meaning and the addition letters d and e describes the
capacity of the system and the queue discipline respectively.

For example, (M/M/4) : (25/FCFS) could represent a bank with exponential arrival times,
exponential service times, 4 tellers, total capacity of 25 customers and an FCFS queue
discipline.

3.12 POISSON QUEUES

3.12.1 MODEL - I [(M/M/1):(∞/FCFS)] (Birth and Death Model):


Assumptions for this model are as follows:

(i) Both arrival and service rates are independent of the number of customers in the queue.
(ii) The arrivals occur completely at random according to the Poisson distribution.
(iii) Only one queue and one service facility is available.
(iv) Capacity of the system is infinite.
(v) Customers are served on the basis of first come, first served service mechanism.
(vi) λ = Arrival rate
(vii) µ = Service rate
(viii) ρ = λ / µ
(ix) Pn(t) = The probability of n units in the system at time t

❖ To obtain the system of steady state equations


There are n > 0 units in the system at time t + ∆t, in the following ways:
(Transition diagram: (n−1) units with one arrival and no service, n units with no arrival and no service, or (n+1) units with no arrival and one service during Δt, each giving n units at time t + Δt.)
Let there are (n-1) units in the system at time t, one arrival takes place in time Δt and
no service provided in time Δt. Hence there remain n units in the system at time (t + Δt).
Probability of this event = [probability of (n-1) units in the system]
x (probability of one arrival in time Δt)
x (probability of no service in time Δt)
= Pn-1(t) .λ.Δt . (1 – µ.Δt)
Let there are n units in the system at time t, no arrival takes place in time Δt and no
service provided in time Δt. So there are n units in the system at time (t + Δt).
Probability of this event = [probability of n units in the system]
x (probability of no arrival in time Δt)
x (probability of no service in time Δt)
= Pn(t) . (1- λ.Δt). (1 – µ.Δt)
Let there are (n+1) units in the system at time t, no arrival takes place in time Δt and
one service provided in time Δt.
So there are n units in the system at time (t + Δt).
Probability of this event = [probability of (n+1) units in the system]
x (probability of no arrival in time Δt)
x (probability of one service in time Δt)
= Pn+1(t) . (1 − λ.Δt) . µ.Δt


All these cases are mutually exclusive, hence the probability of n units in the system
at time (t + Δt) is,
Pn(t + Δt) = Pn-1(t) . λ.Δt . (1 – µ.Δt) + Pn(t) . (1- λ.Δt). (1 – µ.Δt)
+ Pn+1(t) . (1- λ.Δt). µ.Δt
Since Δt is very small, hence neglecting the terms containing (Δt)2
Pn(t + Δt) = Pn-1(t).λ. Δt + Pn(t) - Pn(t). λ.Δt - Pn(t). µ.Δt + Pn+1(t). µ.Δt
[Pn(t + Δt) − Pn(t)]/Δt = λ.Pn−1(t) − λ.Pn(t) − µ.Pn(t) + µ.Pn+1(t)
= λ.Pn−1(t) − (λ+µ).Pn(t) + µ.Pn+1(t)
Taking limit Δt → 0, we get
Pn′(t) = λ.Pn−1(t) − (λ+µ).Pn(t) + µ.Pn+1(t), for n > 0 ……….(1)
Similarly, there are n=0 units in the system at time t + ∆t, in the following ways:
(Transition diagram: 0 units with no arrival, or 1 unit with no arrival and one service during Δt, each giving 0 units at time t + Δt.)
Let there is no unit in the system at time t and no arrival takes place in time Δt.
So there is no units in the system at time (t + Δt).

Probability of this event = [probability of 0 unit in the system]


x (probability of no arrival in time Δt)
= P0(t) . (1- λ.Δt)
Let there is one unit in the system at time t, no arrival takes place in time Δt and one
service provided in time Δt.
So there is no unit in the system at time (t + Δt).
Probability of this event = [probability of one unit in the system]
x (probability of no arrival in time Δt)
x (probability of one service in time Δt)
= P1(t) . (1- λ.Δt). µ.Δt
All these cases are mutually exclusive, hence the probability of no unit in the system
at time (t + Δt) is,
P0(t + Δt) = P0(t) . (1- λ.Δt) + P1(t) . (1- λ.Δt). µ.Δt
Since Δt is very small, hence neglecting the terms containing (Δt)2
P0(t + Δt) = P0(t) - P0(t).λ.Δt + P1(t). µ.Δt
[P0(t + Δt) − P0(t)]/Δt = − λ.P0(t) + µ.P1(t)
Taking limit Δt → 0, we get
P0′(t) = − λ.P0(t) + µ.P1(t), for n = 0 ……….(2)
In steady state, the probabilities are independent of time, therefore equations (1) & (2)
becomes
λ. Pn - 1 – (λ+µ). Pn+ µ. Pn+1 = 0 ……….(3)
- λ. P0 + µ. P1 = 0 ……….(4)
These equations are called steady state equations.
❖ Now to solve the system of steady state equations
from equation (4)
P1 = λ. P0 / µ ……….(5)
Since, ρ = λ / µ ……….(6)
Therefore, P1 = ρ. P0
Put n = 1 in equation (3), we get
λ. P0 – (λ+µ). P1+ µ. P2 = 0
using (5) & (6), we get
P2 = ρ2. P0
Similarly, P3 = ρ3. P0
P4 = ρ4. P0
-----------------
Pn = ρn. P0 ……….(7)
To find P0, use the fact that the total probability is 1, therefore

Ʃ(n=0 to ∞) Pn = P0 + P1 + P2 + P3 + ………… upto ∞ = 1
P0 + ρ.P0 + ρ².P0 + ρ³.P0 + ……….. upto ∞ = 1
P0.[1/(1 − ρ)] = 1
P0 = (1 − ρ)
From (7), Pn = (1 − ρ).ρ^n
Hence the steady state distribution of the number of units in the system is Pn = (1 − ρ).ρ^n
Also, P[Queue size ≥ N] = Ʃ(n=0 to ∞) Pn − Ʃ(n=0 to N−1) Pn = 1 − (P0 + P1 + …… + PN−1)
Using the values of P0, P1, ….., PN−1, we get
P[Queue size ≥ N] = (λ/µ)^N = ρ^N


Important formulae of Model - I
▪ Lq= Expected number of units in the queue = λ2/[µ.(µ-λ)] = λ. Wq
▪ Ls = Expected number of units in the system = λ/(µ-λ) = λ. Ws
▪ (L/L > 0) = Expected length of non-empty queue = µ/(µ-λ) = µ.Ws
▪ Wq = Expected waiting time per customer in the queue = λ/[µ.(µ-λ)] = Lq/λ
▪ Ws = Expected waiting time per customer in the system = 1/(µ-λ) = Ls/λ
▪ (W/W > 0) = Expected waiting time of customer who has to wait = 1/(µ-λ)
▪ ρ = Busy period = λ / µ
▪ P0 = Probability that exactly zero units in the system = (1- ρ) = (1- λ/µ)
▪ Pn = Probability that exactly ‘n’ units in the system = P0. (λ/µ)n
▪ P(W > 0) = Probability that an arrival will have to wait = 1 – P0
▪ P(Queue size ≥ n) = (λ/µ)n
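For practice, the formulae above can be collected into a small calculator. The Python sketch below is only an illustration (the function and dictionary keys are our own names); with λ = 6 and µ = 12 it reproduces the telephone-booth example that follows.

def mm1_metrics(lam, mu):
    # Steady-state measures of an (M/M/1):(infinity/FCFS) queue; requires lam < mu.
    rho = lam / mu
    Ls = lam / (mu - lam)              # expected number of units in the system
    Lq = lam**2 / (mu * (mu - lam))    # expected number of units in the queue
    Ws = 1 / (mu - lam)                # expected waiting time in the system
    Wq = lam / (mu * (mu - lam))       # expected waiting time in the queue
    P0 = 1 - rho                       # probability that the system is empty
    return {"rho": rho, "Ls": Ls, "Lq": Lq, "Ws": Ws, "Wq": Wq, "P0": P0}

print(mm1_metrics(6, 12))   # Lq = 0.5, Wq = 0.0833 h, Ls = 1, Ws = 0.1667 h, rho = 0.5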

Example: Customers arrive at a telephone booth according to a Poisson distribution, with an
average interval of 10 minutes between arrivals. The duration of a phone call is exponentially
distributed, with a mean of 5 minutes. Determine:
(a) Expected number of units in the queue.
(b) Expected waiting time in the queue.
(c) Expected number of units in the system.
(d) Expected waiting time in the system
(e) Expected fraction of the day that the phone will be in use.
(f) Probability that the customer will have to wait.
Solution: Given,
The mean arrival time = 10 min
The mean service time = 5 min
The mean arrival rate, λ = (1/10) x 60 = 6/hour
The mean service rate, µ = (1/5) x 60 = 12/hour
(a) Expected number of units in the queue,
Lq= λ2/[µ.(µ-λ)] = (6)2/[12. (12-6)] = 0.5 units
(b) Expected waiting time in the queue,
Wq = λ/[µ.(µ-λ)] = (6)/[12. (12-6)] = 0.0833 hours
(c) Expected number of units in the system,
Ls = λ/(µ-λ) = 6/(12-6) = 1 unit

(d) Expected waiting time in the system,


Ws = 1/(µ-λ) = 1/(12-6) = 0.1667 hours
(e) Expected fraction of the day that the phone will be in use,
ρ = Busy period = λ / µ = 6/12 = 1/2
(f) Probability that the customer will have to wait,
P(W > 0) = 1 – P0 = 1 - (1- ρ) = ρ = 1/2 = 0.5

Example: Customers arrive at a one-man barber shop according to a Poisson distribution with a
mean arrival rate of 4 per hour. Customers are always willing to wait, and the hair-cutting time
is exponentially distributed with an average haircut lasting 10 minutes. Find:
(a) Average number of customer in the shop.
(b) Average waiting time of a customer.
(c) The probability that a customer will have to wait.
(d) Expected waiting time of customer who has to wait.

Solution: Given,
The mean arrival rate, λ = 4/hour
The mean service time = 10 min
The mean service rate, µ = (1/10) x 60 = 6/hour
(a) Average number of customer in the shop,
Ls = λ/(µ-λ) = 4/(6-4) = 2 Customers
(b) Average waiting time of a customer,
Wq = λ/[µ.(µ-λ)] = (4)/[6. (6-4)] = 0.333 hours
(c) The probability that a customer will have to wait,
P(W > 0) = 1 – P0 = 1 - (1- ρ) = ρ = λ / µ = 4/6 = 0.667
(d) (W/W > 0) = 1/(µ-λ) = 1/(6-4) = 1/2 hours = 30 mins

Example: A TV repairman repairs the sets in the order in which they arrive, and the repair time
of each set is exponentially distributed with a mean of 30 minutes. The sets arrive in a Poisson
fashion at an average rate of 12 per 10-hour working day. Determine:


(a) What is the expected idle time per day for the repairman?
(b) How many TV sets will be there waiting for the repair?
Solution: Given,
The mean service time = 30 mins
The mean arrival rate, λ = 12 sets per 10-hour day = 12/(10 x 60) = 1/50 per min
The mean service rate, µ = 1/30 per min
(a) Utilization (busy fraction), ρ = λ/µ = (1/50)/(1/30) = 0.6
The idle fraction, P0 = 1 − ρ = 1 − 0.6 = 0.4
The expected idle time per day for the repairman = 0.4 x 10 = 4 hrs/day
(b) The number of TV sets waiting for the repair,
Lq= λ2/[µ.(µ-λ)] = (1/50)2/[(1/30). ((1/30)-(1/50))] = 0.9 units

Example:Trains arrive at a pace of 30 per day in a railway marshalling yard. Considering


that the service time has an average of 36 minutes and that the inter-arrival time follows an
exponential distribution. Calculate:
(a) The average number of trains in the system
(b) Expected length of non-empty queue
(c) The probability that the queue size exceeds 12

Solution: Given,
The mean arrival rate, λ = 30 trains per day = 30/(24x60) = 1/48 trains per min
The mean service time = 36 mins
The mean service rate, µ = 1/36 trains per min
(a) The average number of trains in the system,
Ls = λ/(µ-λ) = (1/48)/((1/36)-(1/48)) = 3 trains
(b) Expected length of non-empty queue,
(L/L > 0) = µ/(µ-λ) = (1/36)/((1/36)-(1/48)) = 4 trains
(c) The probability that the queue size exceeds 12,
P(Queue size ≥ 12) = (λ/µ)12 = [(1/48)/(1/36)]12 = (0.75)12 = 0.032


IN-TEXT QUESTIONS

5. __________ is the average time that a customer spends in the system from
the entry in the queue to the completion of the service.
6. The average number of customers in the queue waiting to receive the service
is called _____________.
7. The expected number of services completed in the time interval of length
unity is called:
(a) Mean servicing rate
(b) Mean arrival rate
(c) Mean waiting time
(d) Mean servicing time
8. The time taken for servicing of a unit is known as its ______________.

3.12.2 MODEL - II [(M/M/c):(∞/FCFS)]:

Assumptions for this model are as follows:


(i) The arrival of customer follows the Poisson distribution.
(ii) The service time follows the exponential distribution.
(iii) Several servers in the service facility are available.
(iv) The length of the queue is infinite.
(v) Customers are served on the basis of first come first serve service mechanism.
(vi) c = number of servers
(vii) Arrival rate, λ = λn (Depending upon n)
(viii) Service rate, µ = µn (Depending upon n)
(ix) µn = nµ, if n ≤ c
cµ, if n ≥ c

❖ To obtain the system of steady state equations


Consider similar arguments as in equations (1) and (2) of Model – I,
Pn′(t) = λ.Pn−1(t) − (λ + nµ).Pn(t) + (n+1)µ.Pn+1(t), for 0 < n < c ……….(1)
Pn′(t) = λ.Pn−1(t) − (λ + cµ).Pn(t) + cµ.Pn+1(t), for n ≥ c ……….(2)
P0′(t) = − λ.P0(t) + µ.P1(t), for n = 0 ……….(3)
In steady state, the probabilities are independent of time; therefore equations (1), (2) & (3) become
λ.Pn−1 − (λ + nµ).Pn + (n+1)µ.Pn+1 = 0 ……….(4)
λ.Pn−1 − (λ + cµ).Pn + cµ.Pn+1 = 0 ……….(5)
− λ.P0 + µ.P1 = 0 ……….(6)
These equations are called steady state equations.
❖ Now to solve the system of steady state equations
From equation (6), we have
P1 = (λ/µ). P0 ……….(7)
Put n = 1 in equation (4) and using value of P1, we get
P2 = (λ/(2µ)). P1 = (1/2!).(λ/µ)2. P0
Put n = 2 in equation (4) and using value of P1, we get
P3 = (λ/(3µ)). P2 = (1/3!).(λ/µ)3. P0
----------------------------------------
Pn = (λ/(nµ)). Pn-1 = (1/n!).(λ/µ)n. P0 , n ≤ c
----------------------------------------
Pc = (λ/(cµ)). Pc-1 = (1/c!).(λ/µ)c. P0
Pc+1 = (λ/(cµ)). Pc = (1/c).(1/c!).(λ/µ)c+1. P0
Pc+2 = (λ/(cµ)). Pc+1 = (1/c2).(1/c!).(λ/µ)c+2. P0
----------------------------------------
Pn = Pc+(n-c) = (1/cn-c).(1/c!).(λ/µ)n. P0 , n ≥ c

Now, in order to find P0, use the fact that the total probability Ʃ(n=0 to ∞) Pn = 1, i.e.,
Ʃ(n=0 to c−1) Pn + Ʃ(n=c to ∞) Pn = 1
Ʃ(n=0 to c−1) (1/n!).(λ/µ)^n.P0 + Ʃ(n=c to ∞) (1/c^(n−c)).(1/c!).(λ/µ)^n.P0 = 1
P0 [ Ʃ(n=0 to c−1) (c^n/n!).(λ/(cµ))^n + Ʃ(n=c to ∞) (c^c/c!).(λ/(cµ))^n ] = 1
P0 [ Ʃ(n=0 to c−1) (1/n!).(cρ)^n + (c^c/c!) Ʃ(n=c to ∞) ρ^n ] = 1        (since ρ = λ/(cµ))
P0 [ Ʃ(n=0 to c−1) (1/n!).(cρ)^n + (c^c/c!).(ρ^c + ρ^(c+1) + ρ^(c+2) + ….. upto ∞) ] = 1
P0 [ Ʃ(n=0 to c−1) (1/n!).(cρ)^n + (c^c/c!).(ρ^c/(1 − ρ)) ] = 1
P0 = [ Ʃ(n=0 to c−1) (1/n!).(cρ)^n + (c^c/c!).(ρ^c/(1 − ρ)) ]^(−1)

Important formulae of Model - II


▪ Lq = (c^c/c!).[ρ^(c+1)/(1−ρ)²].P0 = Pc.[ρ/(1−ρ)²]
▪ Ls = Lq + λ/µ
▪ (L/L > 0) = 1/(1-ρ)
▪ Wq = Lq /λ = Pc /[cµ(1-ρ)2]
▪ Ws = Ls /λ = Wq + (1/µ)
▪ (W/W > 0) = 1/(cµ-λ)
▪ ρ = Busy period = λ /(cµ)
▪ P0 = [ Ʃ(n=0 to c−1) (1/n!).(cρ)^n + (c^c/c!).(ρ^c/(1−ρ)) ]^(−1)

▪ Pn = (1/n!).(λ/µ)n. P0 , n≤c
▪ Pn = (1/c ).(1/c!).(λ/µ) . P0, n ≥ c
n-c n

▪ P(W > 0) = Pc/(1-ρ)
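The Model II formulae above can likewise be wrapped into a small calculator. The Python sketch below is an illustration only (function and key names are assumptions); with λ = 15, µ = 12 and c = 2 it reproduces the telephone-exchange example that follows.

from math import factorial

def mmc_metrics(lam, mu, c):
    # Steady-state measures of an (M/M/c):(infinity/FCFS) queue; requires lam < c*mu.
    rho = lam / (c * mu)
    P0 = 1 / (sum((c * rho) ** n / factorial(n) for n in range(c))
              + (c ** c / factorial(c)) * rho ** c / (1 - rho))
    Pc = (1 / factorial(c)) * (lam / mu) ** c * P0   # probability of exactly c in system
    Lq = Pc * rho / (1 - rho) ** 2                   # expected queue length
    Ls = Lq + lam / mu                               # expected number in the system
    Wq = Lq / lam                                    # expected wait in the queue
    Ws = Wq + 1 / mu                                 # expected time in the system
    P_wait = Pc / (1 - rho)                          # probability an arrival must wait
    return {"P0": P0, "Lq": Lq, "Ls": Ls, "Wq": Wq, "Ws": Ws, "P_wait": P_wait}

print(mmc_metrics(15, 12, 2))   # P0 = 3/13, P_wait = 0.48, Wq = 0.053 hour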

Example: A telephone exchange has two long-distance operators. The telephone company
finds that, during the peak load, long-distance calls arrive in a Poisson fashion at an average
rate of 15 per hour. The length of these calls is approximately exponentially distributed, with a
mean duration of 5 minutes. Find:
(a) How likely is it that a subscriber will have to wait for long distance calls during
the busiest time of the day?
(b) What is the average waiting time for the customers?
Solution: Given,
Number of servers, c =2
The mean arrival rate, λ = 15 per hour
The mean service time = 5 min
The mean service rate, µ = 1/5 per min = 60/5 per hour = 12 per hour
Now, ρ = λ /(cµ) = 15/(2x12) = 5/8

P0 = [ Ʃ(n=0 to c−1) (1/n!).(cρ)^n + (c^c/c!).(ρ^c/(1−ρ)) ]^(−1)
P0 = [ Ʃ(n=0 to 1) (1/n!).(5/4)^n + (2²/2!).((5/8)²/(1 − 5/8)) ]^(−1)
P0 = [1 + (5/4) + (5/4)².(4/3)]^(−1) = 3/13

(a) P(W > 0) = Pc/(1-ρ) = [(1/c!).(λ/µ)c. P0 ]/(1- ρ)


= [(1/2!).(15/12)2. (3/13)]/(1- (5/8)) = 0.48
(b) Wq = Pc /[cµ(1-ρ)2] = [(1/c!).(λ/µ)c. P0 ] /[cµ(1-ρ)2]
= [(1/2!).(15/12)2. (3/13)] /[2x12. (3/8)2] = 0.053 hours = 3.2 mins

Example: A tax consulting firm has 3 counters in its offices to receive the people who have
problems concerning their income and the sales tax. On an average 48 persons arrive in
8 hours a day. Each tax advisor spends 15 min on an average per arrival if the arrival time
follows a Poisson distribution and the service time follows an exponential distribution. Find:
(a) The average number of customers in the system.
(b) The average waiting time of a customer in the system.
(c) The average number of customers waiting in the queue for service.
(d) The average waiting time of a customer in the queue.
(e) The probability that a customer has to wait before getting service.
Solution: Given,
Number of servers, c = 3
The mean arrival rate, λ = 48 persons 8 hours a day = 48/8 = 6 / hour
The mean service time = 15 min
The mean service rate, µ = 1/15 per min = 60/15 per hour = 4 / hour
Now, ρ = λ /(cµ) = 6/(3x4) = 1/2

P0 = [ Ʃ(n=0 to c−1) (1/n!).(cρ)^n + (c^c/c!).(ρ^c/(1−ρ)) ]^(−1)
P0 = [ Ʃ(n=0 to 2) (1/n!).(3/2)^n + (3³/3!).((1/2)³/(1 − 1/2)) ]^(−1)
P0 = [1 + (3/2) + (1/2!).(3/2)² + 2.(1/3!).(3/2)³]^(−1) = 0.21


(a) The average number of customers in the system,
Ls = Lq + λ/µ = (c^c/c!).[ρ^(c+1)/(1−ρ)²].P0 + λ/µ
= (3³/3!).[(1/2)⁴/(1 − 1/2)²].(0.21) + 6/4 = 1.73 customers
(b) Average waiting time of the customer in the system,
Ws = Lq/λ + (1/µ) = (c^c/c!).[ρ^(c+1)/(1−ρ)²].P0/λ + (1/µ)
= (3³/3!).[(1/2)⁴/(1 − 1/2)²].(0.21)/6 + (1/4) = 0.289 hrs
(c) Average number of customers waiting in the queue for service,
Lq = (c^c/c!).[ρ^(c+1)/(1−ρ)²].P0 = (3³/3!).[(1/2)⁴/(1 − 1/2)²].(0.21) = 0.23
(d) Average waiting time of the customers in the queue,
Wq = Lq/λ = 0.23/6 = 0.038 hrs
(e) Probability that a customer has to wait before he gets service
= 1 – P0 – P1 – P2
= 1 – P0 – (λ/µ). P0 – (1/2!).(λ/µ)2. P0
= 1 – 0.21 – (6/4). (0.21) – (1/2!).(6/4)2. (0.21)
= 0.239

3.12.3 MODEL - III [(M/M/1):(N/FCFS)]:

Assumptions for this model are as follows:


(i) The arrival of customer follows the Poisson distribution.
(ii) The service time follows the exponential distribution.
(iii) There is only one queue and one server available.
(iv) The length of the queue is finite (say N).
(v) Customers are served on the basis of first come first serve service mechanism.
(vi) Arrival rate, λ = λn
(vii) Service rate, µ = µn
(viii) λn = λ, if n < N
0, if n ≥ N
(ix) ρ = λ/µ

❖ To obtain the system of steady state equations


Consider similar arguments as in equations (1) and (2) of Model – I, we can write
Pn′(t) = λ.Pn−1(t) − (λ+µ).Pn(t) + µ.Pn+1(t), for 0 < n < N ……….(1)
PN′(t) = λ.PN−1(t) − µ.PN(t), for n = N ……….(2)
P0′(t) = − λ.P0(t) + µ.P1(t), for n = 0 ……….(3)
In steady state, the probabilities are independent of time; therefore equations (1), (2) & (3) become
λ. Pn - 1 – (λ+µ). Pn + µ. Pn+1 = 0 , for 0 < n < N .……….(4)
λ. PN - 1 – µ. PN = 0 , for n = N .……….(5)
- λ. P0 + µ. P1 = 0 , for n = 0 .……….(6)
These equations are called steady state equations.

❖ Now to solve the system of steady state equations


From equation (6),
P1 = (λ/µ).P0 = ρ.P0 ……….(7)
Put n = 1 in equation (4) and use the value of P1 to get
P2 = (λ/µ)^2.P0 = ρ^2.P0
Put n = 2 in equation (4) and use the value of P2 to get
P3 = (λ/µ)^3.P0 = ρ^3.P0
--------------------------
Pn = (λ/µ)^n.P0 = ρ^n.P0 , for n < N
--------------------------
PN = (λ/µ)^N.P0 = ρ^N.P0 , for n = N
PN+1 = 0 , for n > N

Now, in order to find P0, use the fact that the total probability Σ_{n=0}^{N} Pn = 1:
P0 + P1 + P2 + …… + PN = 1
P0 + ρ.P0 + ρ^2.P0 + …… + ρ^N.P0 = 1
P0.[1 + ρ + ρ^2 + …… + ρ^N] = 1
P0.[(1 - ρ^(N+1))/(1 - ρ)] = 1
P0 = (1 - ρ)/(1 - ρ^(N+1))
Hence,
Pn = ρ^n.P0 = ρ^n.[(1 - ρ)/(1 - ρ^(N+1))] , for n ≤ N

Important formulae of Model - III


▪ P0 = (1 - ρ)/(1 - ρ^(N+1))
▪ Pn = ρ^n.[(1 - ρ)/(1 - ρ^(N+1))] , for n ≤ N
▪ Ls = P0.Σ_{n=0}^{N} n.ρ^n
▪ Lq = P0.Σ_{n=1}^{N} (n - 1).ρ^n
▪ Ws = Ls/λ
▪ Wq = Lq/λ = Ws – (1/µ)
▪ ρ = λ/µ
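
These Model III formulas are easy to evaluate directly; the following minimal Python sketch (the function name mm1n_metrics is illustrative only) computes them from λ, µ and N exactly as written above.

def mm1n_metrics(lam, mu, N):
    """Steady-state measures of an M/M/1/N queue, following the formulas above."""
    rho = lam / mu
    # the rho == 1 branch is the limiting value of the general P0 formula
    p0 = 1.0 / (N + 1) if rho == 1 else (1 - rho) / (1 - rho ** (N + 1))
    pn = [p0 * rho ** n for n in range(N + 1)]            # P0, P1, ..., PN
    ls = sum(n * pn[n] for n in range(N + 1))             # average number in system
    lq = sum((n - 1) * pn[n] for n in range(1, N + 1))    # average number in queue
    return p0, pn, ls, lq, ls / lam, lq / lam             # P0, {Pn}, Ls, Lq, Ws, Wq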

Example: Consider a single-server queuing system with Poisson arrivals and exponential
service times. Customers arrive at the rate of 5 per hour, and the mean service time is 30
minutes. The system can accommodate only 4 customers at a time. Find:
(a) The probability that the system is empty.
(b) The average number of customers in the system.
(c) The average number of customers waiting in the queue.
Solution: Given,
The mean arrival rate, λ = 5 / hour
The mean service time = 30 min
The mean service rate, µ = 1/30 per min = 60/30 per hour = 2 / hour
N=4
Now, ρ = λ /µ = 5/2 = 2.5
(a) Probability that the system is empty,
P0 = (1 - ρ)/(1 - ρ^(N+1)) = (1 - 2.5)/(1 - (2.5)^5) = 0.016
(b) The average number of customers in the system,
Ls = P0.Σ_{n=0}^{4} n.ρ^n = P0.[0.ρ^0 + 1.ρ^1 + 2.ρ^2 + 3.ρ^3 + 4.ρ^4]
= 0.016 [0 + 2.5 + 2(2.5)^2 + 3(2.5)^3 + 4(2.5)^4]
= 0.016 [0 + 2.5 + 2(6.25) + 3(15.625) + 4(39.0625)]
= 0.016 [0 + 2.5 + 12.5 + 46.875 + 156.25]
= 3.49
(c) Average number of customers in the queue,


Lq = P0.Σ_{n=1}^{4} (n - 1).ρ^n = P0.[0.ρ^1 + 1.ρ^2 + 2.ρ^3 + 3.ρ^4]
= 0.016 [0(2.5) + 1(2.5)^2 + 2(2.5)^3 + 3(2.5)^4]
= 0.016 [0 + 6.25 + 2(15.625) + 3(39.0625)]
= 0.016 [0 + 6.25 + 31.25 + 117.1875]
= 2.475

Example: A one-man barber shop can accommodate at most three customers at a time, two
waiting while one is being served. If a customer arrives when the shop is full, he goes to
another shop. Customers arrive at random at an average rate of 4 per hour, and the average
service time is 10 minutes. Determine:
(a) The probability distribution of the number of customers in the shop.
(b) The expected number of customers waiting in the shop.
(c) The expected number of customers in the barber's shop.
(d) The time a customer can expect to spend in the shop.

Solution: Given,
The mean arrival rate, λ = 4 / hour
The mean service time = 10 min
The mean service rate, µ = 1/10 per min = 60/10 per hour = 6 / hour
N=3
Now, ρ = λ /µ = 4/6 = 2/3 = 0.667
(a) The probability distribution of the number of customers in the shop,
P0 = (1 - ρ)/(1 - ρ^(N+1)) = (1 - 0.667)/(1 - (0.667)^4) = 0.4152
P1 = ρ.P0 = 0.667 × 0.4152 = 0.2769
P2 = ρ^2.P0 = (0.667)^2 × 0.4152 = 0.1847
P3 = ρ^3.P0 = (0.667)^3 × 0.4152 = 0.1232
(b) Expected number of customers waiting in the shop,
Lq = P0.Σ_{n=1}^{3} (n - 1).ρ^n = P0.[0.ρ^1 + 1.ρ^2 + 2.ρ^3]
= 0.4152 [0(0.667) + 1(0.667)^2 + 2(0.667)^3]
= 0.4152 [0 + 0.44489 + 2(0.29674)]
= 0.4311
(c) Expected number of customers in the barber’s shop.

Ls = P0.Σ_{n=0}^{3} n.ρ^n = P0.[0.ρ^0 + 1.ρ^1 + 2.ρ^2 + 3.ρ^3]
= 0.4152 [0 + 0.667 + 2(0.667)^2 + 3(0.667)^3]
= 0.4152 [0 + 0.667 + 2(0.44489) + 3(0.29674)]
= 1.016
(d) The time that a customer can expect to spend in the shop,
Ws = Ls /λ = (1.016/4) hrs = 15.24 min
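
The hand computation above can be cross-checked with a few lines of Python; the snippet below simply evaluates the Model III formulas for the barber-shop data and is purely illustrative.

rho, N, lam = 4/6, 3, 4                                    # barber shop: rho = 2/3, capacity 3
p0 = (1 - rho) / (1 - rho ** (N + 1))
pn = [p0 * rho ** n for n in range(N + 1)]
lq = sum((n - 1) * pn[n] for n in range(1, N + 1))
ls = sum(n * pn[n] for n in range(N + 1))
print(round(p0, 4), round(lq, 3), round(ls, 3), round(60 * ls / lam, 1))
# approx 0.4154, 0.431, 1.015, 15.2 minutes, matching the results above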

3.12.4 MODEL - IV [(M/M/c):(N/FCFS)]:

Assumptions for this model are as follow:


(i) The arrival of customer follows the Poisson distribution.
(ii) The service time follows the exponential distribution.
(iii) Several servers in the service facility are available.
(iv) The length of the queue is finite (say N).
(v) Customers are served on the basis of first come first serve service mechanism.
(vi) c = number of servers
(vii) Arrival rate, λn
(viii) Service rate, µn
(ix) λn = λ, if 0 ≤ n ≤ N; λn = 0, if n > N
(x) µn = nµ, if 0 ≤ n ≤ c; µn = cµ, if c ≤ n ≤ N
(xi) ρ = λ/(cµ)

❖ To obtain the system of steady state equations


Using arguments similar to those in equations (1) and (2) of Model I, we can write
P′n(t) = λ.Pn-1(t) – (λ+nµ).Pn(t) + (n+1)µ.Pn+1(t), for 0 < n < c ……….(1)
P′n(t) = λ.Pn-1(t) – (λ+cµ).Pn(t) + cµ.Pn+1(t), for c ≤ n ≤ N ……….(2)
P′0(t) = - λ.P0(t) + µ.P1(t), for n = 0 ……….(3)
In steady state, the probabilities are independent of time; therefore equations (1), (2) and (3) become
λ.Pn-1 – (λ+nµ).Pn + (n+1)µ.Pn+1 = 0 ……….(4)
λ.Pn-1 – (λ+cµ).Pn + cµ.Pn+1 = 0 ……….(5)
- λ.P0 + µ.P1 = 0 .……….(6)


These equations are called steady state equations.


❖ Now to solve the system of steady state equations
From equation (6), we have
P1 = (λ/µ).P0 = (ρc).P0 ……….(7)
Put n = 1 in equation (4) and use the value of P1 to get
P2 = (λ/(2µ)).P1 = (1/2!).(λ/µ)^2.P0 = (1/2!).(ρc)^2.P0
Put n = 2 in equation (4) and use the value of P2 to get
P3 = (λ/(3µ)).P2 = (1/3!).(λ/µ)^3.P0 = (1/3!).(ρc)^3.P0
-----------------------------------------------------------
Pn = (λ/(nµ)).Pn-1 = (1/n!).(λ/µ)^n.P0 = (1/n!).(ρc)^n.P0 , 0 ≤ n ≤ c
-----------------------------------------------------------
Pc = (λ/(cµ)).Pc-1 = (1/c!).(ρc)^c.P0
Pc+1 = (λ/(cµ)).Pc = (1/c).(1/c!).(ρc)^(c+1).P0
Pc+2 = (λ/(cµ)).Pc+1 = (1/c^2).(1/c!).(ρc)^(c+2).P0
-----------------------------------------------------------
Pn = Pc+(n-c) = (1/c^(n-c)).(1/c!).(ρc)^n.P0 = (c^c/c!).ρ^n.P0 , c ≤ n ≤ N
Since the capacity of the system is N, therefore
Pn = 0 , for n > N

Now, in order to find P0, use the fact that the total probability Σ_{n=0}^{N} Pn = 1:

Σ_{n=0}^{c-1} Pn + Σ_{n=c}^{N} Pn = 1

Σ_{n=0}^{c-1} [(1/n!).(ρc)^n.P0] + Σ_{n=c}^{N} [(c^c/c!).ρ^n.P0] = 1

P0 [ Σ_{n=0}^{c-1} (1/n!).(ρc)^n + (ρc)^c.(1/c!).Σ_{n=c}^{N} ρ^(n-c) ] = 1     (since ρ = λ/(cµ))


P0 [ Σ_{n=0}^{c-1} (1/n!).(ρc)^n + (ρc)^c.(1/c!).(1 + ρ + ρ^2 + ….. + ρ^(N-c)) ] = 1

P0 [ Σ_{n=0}^{c-1} (1/n!).(ρc)^n + (ρc)^c.(1/c!).{(1 - ρ^(N-c+1))/(1 - ρ)} ] = 1

P0 = [ Σ_{n=0}^{c-1} (1/n!).(ρc)^n + (ρc)^c.(1/c!).{(1 - ρ^(N-c+1))/(1 - ρ)} ]^(-1) , (if ρ = λ/(cµ) ≠ 1)

P0 = [ Σ_{n=0}^{c-1} (1/n!).(ρc)^n + (ρc)^c.(1/c!).(N - c + 1) ]^(-1) , (if ρ = λ/(cµ) = 1)

Important formulae of Model - IV


▪ Lq = Σ_{n=c}^{N} (n – c).Pn
▪ Ls = Σ_{n=0}^{N} n.Pn
▪ Wq = Lq/λ′ (where λ′ is the effective arrival rate, given by λ′ = λ(1 – PN))
▪ Ws = Ls/λ′
▪ ρ = λ/(cµ)
▪ P0 = [ Σ_{n=0}^{c-1} (1/n!).(ρc)^n + (ρc)^c.(1/c!).{(1 - ρ^(N-c+1))/(1 - ρ)} ]^(-1) , (if ρ = λ/(cµ) ≠ 1)
       = [ Σ_{n=0}^{c-1} (1/n!).(ρc)^n + (ρc)^c.(1/c!).(N - c + 1) ]^(-1) , (if ρ = λ/(cµ) = 1)
▪ Pn = (1/n!).(ρc)^n.P0 , for 0 ≤ n ≤ c
       = (c^c/c!).ρ^n.P0 , for c ≤ n ≤ N
       = 0 , for n > N
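
A compact way to apply these Model IV formulas is sketched below in Python; the function name mmcn_metrics is illustrative only, and the piecewise expressions follow the formulas listed above.

from math import factorial

def mmcn_metrics(lam, mu, c, N):
    """Steady-state measures of an M/M/c/N queue, following the formulas above."""
    rho = lam / (c * mu)
    a = lam / mu                                           # a = c*rho
    head = sum(a ** n / factorial(n) for n in range(c))    # terms n = 0 .. c-1
    if rho == 1:
        tail = (a ** c / factorial(c)) * (N - c + 1)
    else:
        tail = (a ** c / factorial(c)) * (1 - rho ** (N - c + 1)) / (1 - rho)
    p0 = 1.0 / (head + tail)

    def p(n):                                              # Pn as given above
        if n <= c:
            return (a ** n / factorial(n)) * p0
        if n <= N:
            return (c ** c / factorial(c)) * (rho ** n) * p0
        return 0.0

    lq = sum((n - c) * p(n) for n in range(c, N + 1))
    ls = sum(n * p(n) for n in range(N + 1))
    lam_eff = lam * (1 - p(N))                             # effective arrival rate
    return p0, p, lq, ls, lq / lam_eff, ls / lam_eff       # P0, Pn, Lq, Ls, Wq, Ws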

Example: Consider a car inspection station with 3 inspection stalls. Assume that a car waits
in such a way that when a stall becomes vacant, the car at the head of the line moves into it.
The station can accommodate at most 4 waiting cars, i.e., at most 7 cars in the station at any
one time. During peak hours, cars arrive according to a Poisson distribution at a mean rate of
one per minute, and the service time follows an exponential distribution with a mean of 6
minutes. Determine:
(a) The average number of cars waiting in the queue.
(b) The average number of cars in the system during peak hours.
(c) The expected waiting time in the system.
(d) The expected number of cars per hour that cannot enter the station.
Solution: Given, c = 3, N = 7
Mean arrival rate, λ = 1 car per min
Mean service time = 6 min
Mean servicing rate, µ = 1/6 car per min
ρ = λ/(cµ) = 1/(3(1/6)) = 2
P0 = [ Σ_{n=0}^{c-1} (ρc)^n/n! + ((ρc)^c/c!).{(1 - ρ^(N-c+1))/(1 - ρ)} ]^(-1)

= [ Σ_{n=0}^{2} 6^n/n! + (6^3/3!).{(1 - 2^(7-3+1))/(1 - 2)} ]^(-1)

= [1 + 6 + 6^2/2! + 1116]^(-1) = 1/1141

(a) The average number of cars in the queue,


Lq = Σ_{n=c}^{N} (n - c).Pn

= Σ_{n=3}^{7} (n - 3).(c^c/c!).ρ^n.P0

= Σ_{n=3}^{7} (n - 3).(3^3/3!).2^n.P0

= (1/1141)[0 + (3^3/3!).2^4 + 2(3^3/3!).2^5 + 3(3^3/3!).2^6 + 4(3^3/3!).2^7]

= (1/1141)[72 + 288 + 864 + 2304] = 3528/1141 = 3.09
(b) The average number of cars in the system during peak hours,
Ls = Σ_{n=0}^{N} n.Pn


Ls = Σ_{n=0}^{c} n.Pn + Σ_{n=c+1}^{N} n.Pn

= Σ_{n=0}^{3} n.(1/n!).(ρc)^n.P0 + Σ_{n=4}^{7} n.(c^c/c!).ρ^n.P0

= [ Σ_{n=0}^{3} n.(1/n!).6^n + Σ_{n=4}^{7} n.(3^3/3!).2^n ].P0

= [6 + 6^2 + 6^3/2 + 4(3^3/3!).2^4 + 5(3^3/3!).2^5 + 6(3^3/3!).2^6 + 7(3^3/3!).2^7].(1/1141)

= [6 + 36 + 108 + 288 + 720 + 1728 + 4032].(1/1141) = 6918/1141 = 6.06
(c) The expected waiting time in the system,
Ws = Ls/(λ(1 - PN))
= 6.06/[1 × (1 - (3^3/3!).2^7.(1/1141))]
= 6.06/(1 - 576/1141)
≈ 12.24 min
(d) The expected number of cars per minute that cannot enter the station
= Arrival rate × Probability that the system is full
= Arrival rate × Probability that there are N units in the system
= λ.PN
Hence, the expected number of cars per hour that cannot enter the station
= 60.λ.PN
= 60.(1).P7
= 60.(3^3/3!).2^7.(1/1141)
= 60.(576/1141)
= 30.3 cars per hour
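
The figures above can also be reproduced with a short, self-contained Python snippet that evaluates the Model IV formulas for this data; it is only an illustrative check.

from math import factorial

lam, mu, c, N = 1, 1/6, 3, 7                       # one car per minute, 6-minute service, 3 stalls, capacity 7
rho, a = lam / (c * mu), lam / mu
p0 = 1 / (sum(a ** n / factorial(n) for n in range(c))
          + (a ** c / factorial(c)) * (1 - rho ** (N - c + 1)) / (1 - rho))
p = lambda n: (a ** n / factorial(n) if n <= c else (c ** c / factorial(c)) * rho ** n) * p0
lq = sum((n - c) * p(n) for n in range(c, N + 1))
ls = sum(n * p(n) for n in range(N + 1))
print(round(lq, 2), round(ls, 2), round(ls / (lam * (1 - p(N))), 2), round(60 * lam * p(N), 1))
# approx 3.09, 6.06, 12.24, 30.3  (queue length, number in system, time in system, cars per hour turned away)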

3.13 APPLICATIONS OF QUEUING THEORY

The waiting line or queuing theory has been applied in many different business scenarios.
There are likely to be queues waiting in any circumstance where there are clients, including
banks, post offices, movie theatres, gas stations, train ticket desks, doctor's offices, etc.
Customers typically want a specific degree of service, whereas businesses that provide


service facilities work to keep costs down while still providing the required service. Queuing
theory can be used to solve issues like the ones listed below:

• Aircraft scheduling at crowded airports for takeoff and landing.


• Planning the distribution and collection of tools by employees from tool cribs in
factories.
• Fleet scheduling for mechanical transport.
• Scheduling of assembly line parts and components.
• Controlling and analysing inventories.
• Minimization of congestion due to traffic delays at toll booths.
• Routing and scheduling of salesmen and sales efforts.
• Provide models that are capable of influencing arrival pattern of customers or
determine the most appropriate amount of service or number of service stations.

3.14 LIMITATIONS OF QUEUING THEORY

The assumptions of traditional queuing theory may be too restrictive to model practical
situations accurately. These models are unable to handle the complexity of production lines
with product-specific characteristics. Specific tools have therefore been developed to simulate,
evaluate, visualise and optimise the time-dynamic behaviour of queuing lines. Following are a few of
queuing theory's drawbacks:

• The majority of queuing models are highly complicated and difficult to comprehend.
• The exact theoretical distribution that would apply to a particular queue situation is
frequently unknown.
• The analysis of waiting problems becomes more challenging if the queuing discipline
does not follow the "first come, first serve" principle.

3.15 SUMMARY

• The goal of a queuing model is to determine the optimum service rate and number of servers so as to
minimise both the average cost of waiting in the system and the cost of providing service.
• If arrivals are completely random, the Poisson distribution can be used to describe the probability
distribution of the number of arrivals over a given period of time.

• When a system's operating characteristics depend on time, it is said to be in a transient state,

and when they cease to depend on time, it is said to be in a steady state.
• If the system's arrival rate exceeds its service rate, the queue length will grow over
time and become infinite as time tends towards infinity. Such a condition is called an
explosive state.
• David G. Kendall proposed the first notation for a queuing model's features in 1953.

3.16 GLOSSARY

Waiting Lines - Queues or waiting lines are a typical occurrence in both regular life and
several corporate and industrial settings.

Input source of queue - Customers requiring service are generated at different times by an
input source, commonly known as population.

Queue discipline (Service discipline) - The queue discipline is the order or manner in which
customers from the queue are selected for service.

System output - The rate at which consumers are served is referred to as system output. It
depends on how long the facility needs to provide the service and how the service facility is
set up.

Transient state - When a system's operating characteristics depend on time, it is said to be in a

transient state.

Steady state - A queuing system is said to be in a steady state when the probability of having
a certain amount of customers is independent of time.

Explosive state - The length of the queue will grow over time and eventually reach infinity if
the system's arrival rate is higher than its service rate. Such a state is known as explosive
state.


3.17 ANSWERS TO IN-TEXT QUESTIONS

1. Optimum
2. All of above
3. Balking
4. Poisson distribution
5. Waiting time in the system
6. Queue length
7. Mean servicing rate
8. Servicing time

3.18 SELF-ASSESSMENT QUESTIONS

1. What do you mean by queue? Describe the basic elements of queues.


2. Explain different states of queuing system.
3. What are the applications of queuing theory?
4. Define the role of poisson and exponential distribution in queuing theory.
5. Customers arrive at a particular petrol pump at an average rate of 12 per hour, following
a Poisson distribution. The service time is exponentially distributed with a mean of 2
minutes. Then find:
a) Traffic intensity
b) Average length of the queue.
c) The expected number of customers at the petrol pump.
6. Assume that customers arrive at a bank cashier's window at a Poisson-distributed
average rate of 20 per hour, and the cashier serves 30 customers in an hour.
There is no cap on the length of the queue, and the customers, who arrive
from an infinite population, are served on a first come, first served basis.
a) What is the value of utilization factor?
b) What is the expected waiting time in the system per customer?
c) What is the probability of zero customers in the system?
7. A Poisson-distributed arrival rate of 25 people per hour is observed at a movie theatre
ticket counter. The service time is exponentially distributed with an average of two
minutes per service. Calculate the average number of people in the system, the
average length of the queue, the average time spent in the system, and the
utilisation factor.


8. The machines in a production shop break down at an average rate of 2 per hour. The non-
productive time of any machine costs Rs. 30 per hour. If the cost of the repairman is Rs. 50
per hour and the service rate is 3 per hour, determine:
a) The number of machines not working at any point of time.
b) The average time that a machine is waiting for the repairman.
c) The cost of non-productive time of the machine per hour.
d) The expected cost of system per hour.
9. Patrons arrive at a reception counter at the rate of 2 per minute. The receptionist on duty
takes an average of 1 minute per patron. Calculate:
a) What is the chance that a patron will straight way meet the receptionist?
b) The probability that the receptionist is busy.
c) Average number of patrons in the system.
10. In a car manufacturing plant, a loading crane takes exactly 10 min to load a car into a
wagon and again comes back to the position to load another car. If the arrival of cars
is in a poisson stream at an average rate of one after every 20 min, calculate:
a) The average waiting time of a car in the queue.
b) The average waiting time in the system.

3.19 SUGGESTED READINGS

• Vohra N. D. (2021). Quantitative Techniques in Management.6thed.,Tata McGraw


Hill.
• Taha H. A. (2014). Operations Research– An Introduction. 9thed., Prentice Hall India.


LESSON 4
SIMULATION
Dr. Deepa Tyagi
Assistant Professor
Shaheed Rajguru College of Applied Sciences for Women
University of Delhi

STRUCTURE

4.1. Learning Objectives


4.2. Introduction of Simulation
4.3. Key Advantages of Simulation for Business
4.4. General Elementary Steps in the Simulation Technique
4.5. Types Of Simulation Models to Control in Management Science
4.6 Monte Carlo Simulation
4.7. Tools For the Verification and Validation of Simulation Model
4.7.1. Methods To Execute Verification of Simulation Model
4.7.2. Methods To Execute Validation of Simulation Model
4.7.3. Model Data Comparison with Real Facts
4.7.3.1. Validating The Current System
4.7.3.2. Validating The First Time Model
4.8. Advantages And Limitations of Simulation
4.9. Summary
4.10. Self-Assessment Exercises
4.11. Objective Questions
4.12 Suggested Readings


4.1 LEARNING OBJECTIVES

Simulation is a descriptive technique that enables a decision maker to evaluate the
behaviour of a model under different conditions.

It is well known that not all real-world problems can be solved by applying a specific
mathematical model or formula to a certain type of problem and then performing the calculations.
Some problem situations are too complex to be represented by the concise techniques presented
so far in this text. In such cases, simulation is an alternative analysis tool for obtaining a
computational solution to the problem.

Simulation is a descriptive modelling approach in which a model of a process is first built

and experiments are then conducted on the model to describe its behaviour under various conditions.
Unlike many of the other models described in this text, it is not an optimizing tool.
Simulation models allow decision makers to examine different alternatives through a
what-if analysis.

The use of simulation as a decision-making tool is fairly widespread, and you are
probably familiar with some of the ways it is used. Reasons for the popularity of
simulation include the following:

✓ Many situations are too complex to permit the development of a mathematical

solution; the degree of simplification needed would seriously affect the
results. In contrast, simulation models can often capture the
richness of a situation without sacrificing realism, thereby enhancing the
decision process.

✓ Simulation models are fairly simple to use and easy to understand.

✓ Simulation allows the decision maker to conduct experiments on a model,

which helps in understanding process behaviour while avoiding the risks of
conducting tests on the real system.

✓ General-purpose computer software packages make it simple to build and use fairly

sophisticated models.

✓ Simulation can be applied to a wide range of situations.

✓ There have been numerous successful applications of these methods.


The idea of simulation is best understood with an example: suppose you build a model
of how a day at a retail store pans out. You assemble rules for how people interact,
when goods are delivered, 'congestion', 'idle time', and anything else needed to build an
accurate model of that real-world retail environment.
You then run that model, usually in simulation software, to see the results of
applying those rules under different conditions, such as a late supply delivery or a Black
Friday surge.
There are generally three situations in which you would want to use simulation models:

➢ When you lack information, which is common when examining past or historical
events.

➢ When your business processes are too complex to be analysed through the usual
methods.

➢ When you need to experiment in a cheap, low-risk environment (for example,

when you want to implement a risky, expensive change to your business
and need to validate it first).

4.3 KEY ADVANTAGES OF SIMULATION FOR BUSINESS

Those situations appear self-explanatory, but there are deeper-level business advantages
to running a simulation model, which we examine in detail below.
1. Flexibility

2. Test Large and/or Complex Systems

3. Isolated from the Real-World Counterpart

4. Focus on Theoretical, "What-if" Queries

5. Study the Effects of Different, Interrelated (Interdependent) Variables

6. Time Compression

7. Test of Complications


➢ Flexibility:

✓ You can simulate many different things. From business operations to training
aircraft pilots, there is no shortage of existing and potential applications for
simulation systems.
✓ When it comes to simulation for business (like the retail example described above), you
can employ it to capture insights in mining, manufacturing, retail, supply chain
management, logistics, and many other areas. It is industry-agnostic and
applicable to innumerable use cases.

➢ Test Large and/or Complex Systems:

✓ If you have enough computing capacity, you can simulate amazingly complex
scenarios, such as the daily operations of an airport over an
entire quarter or a city traffic grid.

✓ It does not matter how many rules you impose or how many variables you throw at
the system. As long as you have the necessary computing capacity, you can
simulate it with relative ease.

✓ Today, simulating large environments, such as airports, is

standard practice. The real difficulty lies in deciding how to
model and solve the simulations, not in the idea of simulation itself.

➢ Isolated from the real world Counterpart:

✓ With simulation modeling, you can generate copious amounts of insight

without ever touching the real-world system.

✓ This is a significant benefit for large environments.

✓ Be it airports, mining operations, large-scale transportation and

distribution ventures, or aircraft assembly, all of these are multi-billion-
dollar operations.

✓ Introducing even a single change in a complex, large process can cause delays
and product-quality problems costing tens or even hundreds of millions of
dollars.

✓ With simulation, you can field-test your changes before they are executed in

the real world. You can gain insight about potential risks early,
and act ahead of them.
➢ Take, for example, closing a road: agreed, it will cause a bottleneck, but with
simulation you can see when that bottleneck will be most severe. You can then schedule
road construction crews to avoid working in the area during peak
congestion (so as to make it easier for traffic to flow).

➢ Focus on Theoretical, “What-if” Queries

➢ Whether it is better understanding past events or building up business intelligence,

you can also use simulation modeling to produce insights for their own
sake.

➢ The information may not be relevant today, but it could become useful in the future
when the right factors (for example, technology, regulatory environment, etc.) come into play.

➢ Study the Effects of Different, Interrelated (Interdependent) Variables

✓ It is common for complex operations to involve many different, interrelated factors.

✓ In manufacturing, for example, you depend on hundreds or thousands of

machines, a logistics chain, suppliers, access to materials, and human labour.

✓ With simulation modeling, you can gain an understanding of how

your production operations will be affected by a change, such as
severe weather, a worker strike, or a political confrontation
in a country that supplies raw materials.

✓ This is valuable information for decision makers, leaders, and shareholders who
are analysing project proposals and changes to their existing systems.

➢ Time Compression

➢ Even though you want insights covering months or years into the future, you cannot
afford to wait that long to obtain them. With simulation modeling, you
can obtain results about, say, a complete 12-month period within as
little as one day.

➢ For example, Texas A&M researchers have used simulation to study the potential of biomass
growth in cool seasons.

➢ Test of Complications

✓ Be it the introduction of a new tool in a firm or a new departmental process, you

can test whether it works as intended through simulation modeling.

✓ In addition, you can identify potential difficulties and build in solutions for
them (and test again) before executing your change in the real world.

✓ As you can see, simulation modeling delivers a wide range of business

benefits. If it can be summed up in a single idea, it would be that of
objectively evaluating the results of your actions before they take place in the
real world. '20:20' vision is not restricted to hindsight alone.

4.4 GENERAL ELEMENTARY STEPS IN THE SIMULATION TECHNIQUE

Regardless of the type of simulation involved, the following fundamental steps are used for all
simulation models:
1. Identify the problem and set objectives.
2. Develop the simulation model.
3. Test the model to ensure that it reflects the system being studied.
4. Develop one or more experiments (conditions under which the model's
behaviour will be examined).
5. Run the simulation and evaluate the results.
6. Repeat steps 4 and 5 until you are satisfied with the results.

The first task in solving any problem is to clearly identify the problem and set the goals that
the solution is intended to achieve; simulation is no exception. A clear statement of the
objectives provides not only guidance for model development but also the basis for evaluating
the success or failure of a simulation study. In general, the goal of a simulation study is to
determine how a system will behave under certain conditions. The more specific a manager is
about what he or she is looking for, the better the chances that the simulation model will be
designed to accomplish that. Toward that end, the manager must decide on the scope and level
of detail of the simulation. This indicates the necessary degree of complexity of the model and
the information requirements of the study.


The second task is model development. Typically, this involves deciding on the structure of
the model and using a computer to carry out the simulations. (For teaching
purposes, the examples and problems in this lesson are done manually, but in most
real-life applications computers are used. This stems from the need for large numbers of
runs, the complexity of simulations, and the need for record keeping of
results.) Data gathering is a significant part of model development. The amount and type
of data needed are a direct function of the scope and level of detail of the simulation. The
data are needed for both model development and evaluation. Naturally, the model must be
designed to enable evaluation of key decision alternatives.
The third step, the validation (testing) phase, is closely related to
model development. Its main purpose is to determine whether the model adequately depicts
real system performance. An analyst usually accomplishes this by comparing the results of
simulation runs with the known performance of the system under the same
circumstances. If such a comparison cannot be made because, for example, real-life data
are difficult or impossible to obtain, an alternative is to employ a test of reasonableness, in
which the judgment and opinions of individuals familiar with the system or similar systems are
relied on for confirmation that the results are plausible and acceptable. Still another aspect of
validation is careful consideration of the assumptions of the model and the values of
parameters used in testing the model. Again, the judgment and opinions of those familiar with
the real-life system and those who must use the results are essential. Finally, note that
model development and model validation go hand in hand: model deficiencies
uncovered during validation prompt model revisions, which lead to the need for further
validation efforts and possibly further revisions.
The fourth step in simulation is designing experiments. Experiments are the heart of a
simulation; they help answer the what-if questions posed in simulation studies. By studying
the process, the manager or analyst learns about system behaviour.
The fifth step is to run the simulation model. If a simulation model is deterministic, with
all parameters known and constant, only a single run will be needed for each
what-if question. But if the model is probabilistic, with parameters subject to
random variability, multiple runs will be needed to obtain a clear picture of the results.
In this lesson, probabilistic simulations are the focus of the discussion, and the comments
are restricted to them. Probabilistic simulation is essentially a form of random
sampling, with each run representing one observation. Therefore,
statistical theory can be used to determine appropriate sample sizes. In effect, the greater
the degree of variability inherent in the simulation results, the greater the number of

simulation runs needed to achieve a reasonable level of confidence in the results as true
indicators of model behaviour.
The last step in the simulation process is to analyse and interpret the results. Interpretation of
the results depends to a large extent on the degree to which the simulation model
approximates reality; the closer the approximation, the less need to "adjust" the results.
Moreover, the closer the model approximates the real world, the lower the risk inherent in
applying the results.

4.5 TYPES OF SIMULATION MODELS TO CONTROL IN MANAGEMENT SCIENCE
In general, there are four types of simulation models, namely:
1. Monte Carlo/ Risk Analysis Simulation
2. Agent Based Modeling and Simulation
3. Discrete Event Simulation
4. System Dynamics Simulation Solutions

➢ Monte Carlo/ Risk Analysis Simulation


In simple words, a Monte Carlo simulation is a form of risk assessment. Businesses
use it before executing a major project or a change in a process, such as a production
manufacturing line.

Built on analytical models, Monte Carlo studies use the historical data of the actual
system's inputs and outputs (for example, material consumption and production yield). They
then identify uncertainties and potential risks through probability distributions.

The benefit of a Monte Carlo-based simulation is that it provides insight and a
comprehensive understanding of potential threats to your bottom line and time-to-market.

You can apply Monte Carlo simulations to almost any industry or field, including
oil and gas, manufacturing, engineering, supply chain management, and many others.

We will examine Monte Carlo simulation briefly in the later sections of this chapter.


➢ Agent Based Modeling and Simulation

An agent-based simulation is a model that examines the impact of an 'agent' on the 'system'
or 'environment'. In simple terms, think of the impact a new laser-cutting tool or some other
piece of equipment has on your overall manufacturing line.

The 'agent' in agent-based models may be people, equipment, or practically anything else.
The simulation contains the agent's 'behaviour', that is, a set of rules for how
those agents must act in the system. You then examine how the system
responds to those rules.

However, you must draw your rules from real-world data; otherwise, you will not generate
correct insights. In a way, it serves as a means to analyse a proposed change and identify
potential risks in advance.

➢ Discrete Event Simulation

A discrete event simulation model enables you to observe the specific events
that affect your business processes. For example, a typical technical support process
includes the end-user calling you, your system receiving and routing the call, and your agent
picking it up.

You would use a discrete event simulation model to test that technical support process.
You can use discrete event simulation models to study many types of systems (for
instance, healthcare, manufacturing, etc.), and for a wide range of outcomes.

For example, the Nebraska Medical Center has used discrete event simulation models
to see how it could remove productivity bottlenecks, increase the utilisation of its
operating rooms, and reduce patient/specialist travel distance and time.

➢ System Dynamics Simulation Solutions

This is a more abstract form of simulation modeling. Unlike agent-based modeling and
discrete event modeling, system dynamics does not involve detailed, entity-level analysis of the
system. So for a production facility, this model will not bring in data about individual machines and
workers.

Rather, businesses would use system dynamics models to simulate a long-term,
strategic-level view of the overall system.

In other words, the objective is to capture aggregate-level observations about the entire
system in response to an action, such as a reduction in CAPEX, discontinuing a product line, etc.

4.6 MONTE CARLO SIMULATION

There are many different types of simulation methods. The discussion here focuses on
probabilistic simulation using the Monte Carlo method. The method gets its
name from the famous Mediterranean resort associated with games of chance. The element of chance is a
central feature of Monte Carlo simulation, and this approach can be used only when a process
has a random, or chance, component.

In the Monte Carlo method, a manager identifies a frequency distribution that describes
the chance component of the system under study. Random samples taken from this
frequency distribution are analogous to observations made on the system itself. As the number of
observations increases, the results of the simulation will more closely approximate the behaviour of
the actual system, provided an appropriate model has been developed. Sampling is accomplished
through the use of random numbers.
The elementary steps in the process are as follows:
1. Identify a frequency distribution for each chance component of the system.
2. Work out an assignment so that intervals of random numbers correspond to the
frequency distribution.
3. Obtain the random numbers needed for the study.
4. Interpret the results.

The random numbers used in Monte Carlo simulation can come from any source that exhibits the
necessary randomness. Typically, they come from one of two sources: large studies rely on
computer-generated random numbers, while small studies usually use numbers from a table of
random digits like the one referred to as Table 19S-1. The digits are listed in pairs for
convenience, but they may be used individually, in pairs, or in whatever combination a
given problem demands.


Two important features of the sets of random numbers are relied upon in simulation. One is that
the numbers are uniformly distributed. This means that for any size grouping of digits (for
example, two-digit numbers), every possible outcome (for instance, 34, 89, 00) has the same
probability of appearing. The second feature is that there are no discernible patterns in sequences
of numbers that would enable one to predict numbers further in the series (hence the name
random digits). This feature holds for any sequence of numbers; the numbers can be read
across rows and up or down columns.
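
In place of a printed random-digit table, two-digit random numbers with exactly these properties can be generated in software. The following minimal Python sketch is one illustrative way to do it; the seed value is arbitrary and only fixes the starting point so that the sequence can be reproduced.

import random

random.seed(7)                                   # fixes the "starting point" for reproducibility
two_digit = [random.randint(0, 99) for _ in range(10)]
print(two_digit)                                 # ten uniformly distributed two-digit numbers (00-99)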

When using the table, it is important to avoid always starting at the same spot; doing so would
produce the same sequence of numbers each time. Various procedures exist for selecting a
random starting point. One can, for example, use the serial number of a currency note to select the row,
column, and direction of number selection, or use rolls of a die. For our purposes, the starting point
will be specified in each example or problem so that everyone obtains the same results.

The process of simulation will become more transparent as we solve some simple problems.

Example-1:

The supervisor of a manufacturing plant is concerned about machine breakdowns. He decides

to simulate breakdowns for a 10-day period. Historical data on breakdowns
over the last 100 days are given in the following table:


Number of Breakdowns Frequency


0………. 10
1………. 30
2………. 25
3………. 20
4………. 10
5………. 5
-
100

Simulate breakdowns for a 10-day period. Read two-digit random numbers from Table
19S-1, starting at the top of column 1 and reading down.
a) Develop cumulative frequencies for the breakdowns:
1. Convert the frequencies into relative frequencies by dividing each frequency by the sum
of the frequencies. Thus, 10 becomes 10/100 = .10, 30 becomes 30/100 = .30, and so
forth.
2. Develop cumulative frequencies by successive summing. The results are
shown in the following table:
Number of Breakdowns    Frequency    Relative Frequency    Cumulative Frequency
0……….                 10           .10                   .10
1……….                 30           .30                   .40
2……….                 25           .25                   .65
3……….                 20           .20                   .85
4……….                 10           .10                   .95
5……….                 5            .05                   1.00
Total                   100          1.00


b) Assign random-number intervals to correspond to the cumulative frequencies for breakdowns.

(Note: use two-digit numbers, since the frequencies are given to two decimal places.) We
want a 10 percent chance of obtaining the event "0 breakdowns" in our simulation. Thus,
we must assign 10 percent of the possible random numbers to that
event. There are 100 two-digit numbers, so we can designate the 10 numbers 01 to 10 to
that event.
Similarly, designate the numbers 11 to 40 to "one breakdown," 41 to 65 to "two
breakdowns," 66 to 85 to "three breakdowns," 86 to 95 to "four breakdowns" and 96 to 00 to
"five breakdowns".
Number of Breakdowns    Frequency    Relative Frequency    Cumulative Frequency    Corresponding Random Numbers
0……….                 10           .10                   .10                     01 to 10
1……….                 30           .30                   .40                     11 to 40
2……….                 25           .25                   .65                     41 to 65
3……….                 20           .20                   .85                     66 to 85
4……….                 10           .10                   .95                     86 to 95
5……….                 5            .05                   1.00                    96 to 00
Total                   100          1.00

c) Obtain the random numbers from Table 19S-1, column 1, as stated in the question: 18 25
73 12 54 96 23 31 45 01
d) Convert the random numbers into numbers of breakdowns:
18 falls in the interval 11 to 40 and corresponds, therefore, to one breakdown on day 1.
25 falls in the interval 11 to 40 and corresponds to one breakdown on day 2.

73 corresponds to three breakdowns on day 3.


12 corresponds to one breakdown on day 4.
54 corresponds to two breakdowns on day 5.


96 corresponds to five breakdowns on day 6.


23 corresponds to one breakdown on day 7.
31 corresponds to one breakdown on day 8.
45 corresponds to two breakdowns on day 9.
01 corresponds to no breakdowns on day 10.
The following table compiles these results:

Days    Random Number    Simulated Number of Breakdowns
1 18 1
2 25 1
3 73 3
4 12 1
5 54 2
6 96 5
7 23 1
8 31 1
9 45 2
10 01 0
-
17

The mean number of breakdowns for this 10-day simulation is 17/10 = 1.7
breakdowns per day. Compare this to the expected number of breakdowns based on the
historical data:
0(.10) + 1(.30) + 2(.25) + 3(.20) + 4(.10) + 5(.05) = 2.05 per day
The following points should be noted:
1. This simple model is intended to illustrate the fundamental idea of Monte Carlo
simulation. If our only aim were to estimate the average number of

breakdowns, we would not need to simulate; we could base the estimate on the
historical data alone.
2. The simulation should be viewed as a sample; it is quite likely that additional
runs of 10 numbers would produce different means.
3. Because of the variability inherent in the results of small samples, it would be
unwise to attempt to draw any firm conclusions from them; in an actual study, much
larger sample sizes would be used.
In some cases, it is helpful to construct a flowchart that describes the simulation, particularly
if the simulation will involve periodic updating of system values (for instance, the amount of
stock available), as illustrated in Example-2. The Excel spreadsheet implementation of this
problem is referred to below. Note that the arrangement of values in columns B, C
and E must be exactly as shown in the original worksheet.

Code for Input: (the Excel worksheet formulas appear as a screenshot in the original printed module and are not reproduced here)


The simulation results are displayed in the output screen of the worksheet; press key F4 to
rerun the simulation.
Output: (screenshot not reproduced here)
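
Since the Excel screenshots are not reproduced in this text, a minimal Python sketch of the same Monte Carlo logic for Example-1 is given below. The interval boundaries come from the cumulative-frequency table above; using Python's random module in place of Table 19S-1 is an assumption made only for illustration, so the generated sequence will differ from the manual run.

import random

# Cumulative probabilities and corresponding numbers of breakdowns (from Example-1).
cumulative = [(0.10, 0), (0.40, 1), (0.65, 2), (0.85, 3), (0.95, 4), (1.00, 5)]

def breakdowns_from(r):
    """Map a random fraction r in [0, 1) to a simulated number of breakdowns."""
    for upper, k in cumulative:
        if r < upper:
            return k
    return cumulative[-1][1]

random.seed(42)                                   # fixed seed so the run is reproducible
results = [breakdowns_from(random.random()) for _ in range(10)]
print("Simulated breakdowns per day:", results)
print("Mean breakdowns per day:", sum(results) / 10)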

4.7 TOOLS FOR THE VERIFICATION AND VALIDATION OF SIMULATION MODEL

One of the real problems that the simulation analyst faces is to validate the model.
The simulation model is valid only if it is an accurate representation of the actual
system; otherwise it is invalid.
Verification and validation are the two steps in any simulation project used to establish the credibility of a model.

• Validation is the process of comparing two results. In this process, we compare
the behaviour of the conceptual model with that of the real system. If the comparison
holds, the model is valid; otherwise it is invalid.

• Verification is the process of comparing two or more results to ensure accuracy. In
this process, we compare the model's implementation and its associated data with
the developer's conceptual description and specifications.

There are various techniques used to perform verification and validation of a simulation model.
Some of the common techniques are described below.

4.7.1. METHODS TO EXECUTE VERIFICATION OF SIMULATION MODEL

Following are the approaches used to perform verification of a simulation model:

✓ By using good programming practice to write and debug the

program in sub-programs (modules).


✓ By using a "structured walk-through" procedure in which more than

one person reads and explains the program.

✓ By tracing the intermediate results and comparing them with observed
results.

✓ By checking the simulation model output using various input combinations.

✓ By comparing the final simulation results with analytical results.

4.7.2. METHODS TO EXECUTE VALIDATION OF SIMULATION MODEL

Step 1 − Build a model with high face validity. This may be done

using the following steps −

✓ The model should be discussed with system experts while it is being

designed.

✓ The modeller must interact with the customer throughout
the process.

✓ The output must be reviewed by system specialists.

Step 2 − Test the model on assumed data. This may be achieved by feeding the
assumed data into the model and testing it quantitatively. Sensitivity analysis can
also be performed to see the effect on the output when significant
changes are made in the input data.

Step 3 − Determine the representative output of the simulation model. This

may be done using the following steps −

✓ Determine how close the simulation output is to

the real system output.

✓ The comparison may be performed using the Turing test. It presents the data in
the system format, which can be explained by experts only.

✓ Statistical methods may be used to compare the model output with the
real system output.


4.7.3. MODEL DATA COMPARISON WITH REAL FACTS

After building the model, we have to compare its output data with the real
system data. Following are the two ways to carry out this comparison.

4.7.3.1. VALIDATING THE CURRENT SYSTEM

In this technique, we feed real-world inputs into the model and compare its output with
the output of the real system for the same inputs. This process of validation is straight-
forward; however, it can present some problems in practice. For example, if the output is
to be compared in terms of average queue length, waiting time, idle time, etc., it can be
compared using statistical tests and hypothesis testing. Some of the statistical tests are
the Chi-square test, the Kolmogorov-Smirnov test, the Cramer-von Mises test, and the Moments
test.
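
As a hedged illustration of such a statistical comparison, the two-sample Kolmogorov-Smirnov test from SciPy can be used to compare simulated and observed waiting times; the sample arrays below are made-up values used purely for demonstration.

import numpy as np
from scipy import stats

# Hypothetical data: waiting times (minutes) observed on the real system and
# produced by the simulation model; replace these with actual measurements.
observed_wait = np.array([4.2, 5.1, 3.8, 6.0, 4.7, 5.5, 4.9, 5.2])
simulated_wait = np.array([4.5, 5.0, 4.1, 5.8, 4.6, 5.9, 4.8, 5.1])

# A large p-value gives no evidence that the two samples come from different distributions.
statistic, p_value = stats.ks_2samp(observed_wait, simulated_wait)
print(f"K-S statistic = {statistic:.3f}, p-value = {p_value:.3f}")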

4.7.3.2. VALIDATING THE FIRST TIME MODEL

Suppose we have to describe a proposed system which does not currently exist and has
never existed before. In that case, there is no historical data available to compare its
performance with. Hence, we have to use a hypothetical system based on assumptions.
The following useful hints will help in making it effective.

➢ Subsystem Validity − The model as a whole may have no existing system to compare
it with, but it may contain a well-known subsystem, and the validity of each such
subsystem can be tested separately.

➢ Internal Validity − A model with a large amount of internal variability will

be rejected, because a stochastic system with high variance arising from
its internal processes will hide the changes in the output caused by changes in the input.

➢ Sensitivity Analysis − It provides information about the sensitive parameters of

the system, to which we need to pay greater attention.

➢ Face Validity − When the model behaves according to contradictory logic, it should be
rejected even if it appears to behave like the real system.


4.8 ADVANTAGES AND LIMITATIONS OF SIMULATION

Among the main benefits of simulation are these:

➢ It lends itself to problems that are difficult or impossible to solve mathematically.

➢ It permits an analyst to experiment with system behaviour while

avoiding the risks inherent in experimenting with the actual system.

➢ It compresses time, so that managers can quickly discern long-term effects.

➢ It can serve as a valuable tool for training decision makers by building up

their experience and understanding of system behaviour under a broad range of
conditions.

➢ In Logistics: In the ever shorter implementation periods of logistics projects,


simulation is a tool that makes a lasting contribution to planning reliability and thus to
the success of a project. The key factor in the successful use of simulation is fast and
qualified modeling.

➢ In Production: Simulation safeguards your investments in machines and means of


production and differs from classical investment calculation, which is designed for
local optimization.

➢ It is a useful technique for solving a business problem where many values of the
variables are not known or partly known in advance and there is no easy way to find
these values.

Further, application areas for simulation are practically unlimited. Today simulation can be
used for decision-support with supply chain management, workflow and throughput analysis,
facility layout design, resource usage and allocation, resource management and process
change. Whether contemplating a new office building, planning a new factory design,
assessing predictive and reliability maintenance, anticipating new or radical procedures,
deploying new staff, or planning a day’s activities, simulation can play a crucial role in
finding the right and timely solutions. The progressive and technology driven organizations,
in pursuit of winning and/or maintaining their market share, have taken different approaches
to their success. In their pursuit, some have focused on “customer service”, many have
embraced the “productivity” theme, and yet others have pursued the important issue of

“quality and reliability”. In recent times, simulation has been very successfully used as a
modeling and analysis tool.

However, certain limitations are also associated with simulation. Chief among these are the following:

1. Simulation does not produce an optimal solution; it merely indicates an approximate behaviour for a given
set of inputs, because

a) by design, there is inherent randomness (that is, random numbers) in simulation, and

b) simulations are based on models, and models are only approximations of

reality.

2. For large-scale simulation, it can require considerable effort to develop a suitable model as
well as substantial computer time to run the simulations.

Because simulation produces an approximate answer rather than an exact one, and

because of the cost of running a simulation study, simulation is not usually the first choice
of a decision maker. Instead, depending on the complexity of the situation,
intuitive or analytical approaches should first be investigated. In simple cases, an intuitive
solution is often sufficient. In more complex cases, an analytical solution is
preferable, provided an appropriate technique is available. If not, it may be possible to
develop an approximate analytical model that can be used to obtain a solution. If these approaches do not
suffice, simulation becomes the next logical possibility. Of course, if simulation is not
economically justifiable either, the decision maker will have to rely on judgment and
experience; in effect, after re-evaluating all of the alternatives, the decision maker may
return to an intuitive solution, even though initially that approach did not seem
adequate.

4.9 SUMMARY

➢ In this lesson, we first examined the fundamentals of simulation.

➢ The discussion then turned to the various kinds of simulation models.

➢ A Monte Carlo study was introduced with numerical examples and an Excel spreadsheet computation.



➢ The verification and validation methods for simulation models were studied.

4.10 SELF-ASSESSMENT QUESTIONS

1. What do you mean by the term "simulation"?

2. What are some of the primary reasons for the widespread use of simulation techniques
in practice?

3. In what ways can managers use simulation?

4. What role do random numbers play in Monte Carlo simulations?

5. List the key benefits of simulation.

6. Explain the verification and validation processes.

4.11 OBJECTIVE QUESTIONS

1. The process of simulation

(a) is also known as "Monte Carlo" simulation.
(b) is a powerful analytical technique.
(c) always requires the use of computers to compute solutions to problems.
(d) includes testing whether the result of the simulation model is independent of
the simulation run.

2. Simulation, in the context of business problems,

(a) does not produce optimal results.
(b) is relatively more realistic than mathematical models.
(c) is a relatively more expensive method of analysis.
(d) all of the above


3. Simulation is
(a) useful for analysing problems where an analytical solution is difficult to obtain.
(b) a statistical experiment, and as such its results are subject to statistical error.
(c) descriptive in nature.
(d) all of the above

4. Simulation may not be applied in all cases because it

(a) consumes a lot of computer time.
(b) needs substantial expertise for model construction and considerable computer
programming effort.
(c) provides only an approximate solution to the problem.
(d) all of the above.

5. Large, complex simulation models are criticised because

(a) they can be expensive to write and use as an experimental device.
(b) their average costs are not well defined.
(c) it is difficult to create the relevant events.
(d) all of the above.

6. Analytical results are taken into consideration before a simulation study so as to

(a) identify the optimal solution.
(b) determine acceptable values of the decision variables for the particular choices
of system parameters.
(c) identify suitable values of the system parameters.
(d) all of the above.

7. In Monte Carlo simulation

(a) the key requirement is randomness.
(b) the model is of a deterministic type.
(c) random numbers can be used to generate the values of input variables
only if the sampled distribution is uniform.
(d) none of the above.

8. As simulation is not an analytical model, the result of a simulation must

be regarded as
(a) unrealistic

(b) accurate
(c) approximate
(d) simplified.

9. Key benefits of simulation for business include

(a) Flexibility
(b) Time compression
(c) The ability to test large and/or complex systems
(d) all of the above

10. Which of the following statements is not correct?

(a) The elementary steps in the use of the simulation method are largely independent of
the nature of the problem.
(b) Simulation involves developing a model of some real phenomenon and then
experimenting on it.
(c) Simulation cannot be used when analytical tools are available.
(d) Probabilistic simulation is like random sampling, where the output is subject to
statistical error.

11. One can increase the probability that the results of a simulation are valid by
(a) using a discrete probability distribution in place of a continuous one.
(b) validating the simulation model.
(c) changing the input parameters.
(d) none of the above.

12. Biased random sampling is made from among alternatives which have
(a) unequal probabilities
(b) equal probabilities
(c) probabilities which do not sum to unity
(d) none of the above.

13. Verification is the process

(a) of assigning differing probabilities
(b) of comparing two or more results to ensure their accuracy
(c) of changing the input parameters.
(d) none of the above.

14. We can apply Monte Carlo simulation to virtually any industry or field, including

(a) oil and gas production
(b) engineering
(c) supply chain management
(d) all of the above.

15. The chance component is an important feature of


(a) Monte-Carlo Simulation
(b) Agent Based Modeling and Simulation
(c) System Dynamics Simulation Solutions
(d) Discrete Event Simulation

ANSWERS OF THE OBJECTIVE QUESTIONS


1. (a) 2. (d) 3. (d) 4. (d) 5. (a) 6. (b) 7. (a) 8. (c) 9. (d) 10. (c)
11. (b) 12. (a) 13. (b) 14. (d) 15. (a)

4.12 REFERENCES & SUGGESTED BOOKS

• Anderson, D., Sweeney, D., Williams, T., Martin, R.K. (2012). An introduction to
management science: quantitative approaches to decision making (13th ed.). Cengage
Learning.

• Balakrishnan, N., Render, B., Stair, R. M., & Munson, C. (2017). Managerial decision
modeling. Upper Saddle River, Pearson Education.

• Hillier, F., & Lieberman, G. J. (2014). Introduction to operations research (10th ed.). McGraw-Hill Education.

• Powell, S. G., & Baker, K. R. (2017). Business analytics: The art of modeling with
spreadsheets. Wiley.


LESSON 5
DECISION MAKING UNDER UNCERTAINTY

Dr. Sandeep Mishra


Assistant Professor
Shaheed Rajguru College of Applied Sciences for Women.
University of Delhi
Email Id: [email protected]

“A decision is the conclusion of a process by which one chooses between two or more available
courses of action for the purpose of attaining a goal”

STRUCTURE

5.1 Learning Objectives


5.2 Introduction
5.3 Decision Making under uncertainty
5.3.1 Decision Criteria
5.3.1.1 Optimism (Maximax or Minimin) criterion
5.3.1.2 Pessimism (Maximin or Minimax) criterion
5.3.1.3 Equal Probabilities (Laplace) criterion
5.3.1.4 Coefficient of optimism (Hurwiez) criterion
5.3.1.5 Regret (Salvage) criterion
5.4 Risk Profile
5.4.1 Expected Monetary Value (EMV)
5.4.2 Expected Opportunity Loss (EOL)
5.4.3 Expected Value of Perfect Information (EVPI)
5.5 Decision Tree
5.6 Summary
5.7 Glossary
5.8 Answers to In-text Questions
5.9 Self-Assessment Questions
5.10 References
5.11 Suggested Readings


5.1 LEARNING OBJECTIVES

After completing this chapter, you will be able to:


1. List the steps of the decision-making process and describe the different types of
decision-making environments.
2. Make decisions under uncertainty.
3. Make decisions under risk.
4. Develop accurate and useful decision trees

5.2 INTRODUCTION

Humans make a lot of decisions every day, and occasionally we make ones that could have a
significant impact on our lives both now and in the future. The capability of making good
judgments on time has a significant impact on the success or failure that an individual or
organisation experiences. We would prefer to make the right choice when it comes to key
decisions like where to attend college, whether to buy or rent a car, and other similar choices.

When a decision maker is presented with multiple option possibilities and an unclear or risk-
filled pattern of future occurrences, decision analysis can be utilised to create the best course
of action. In order to decide whether to deploy a medical screening test to identify metabolic
problems in neonates, for instance, The State of North Carolina conducted decision analysis.
Decision analysis therefore consistently demonstrates its value in decision making. Even
when a thorough decision analysis has been performed, unforeseen future circumstances cast
doubt on the outcome. The chosen decision alternative may occasionally produce good or
great results. In other circumstances, a hypothetical future occurrence might materialise and
render the chosen decision alternative only mediocre or worse. The uncertainty surrounding
the outcome is a direct cause of the risk attached to any chosen alternative. Risk analysis is a
key component of a sound decision analysis. The decision-maker is given probability
information about both potential positive and negative outcomes through risk analysis.

Decision making under risk and uncertainty is a fact of life. In decision making under pure
uncertainty, the decision maker has no knowledge regarding any of the states of nature
outcomes, and/or it is costly to obtain the needed information. There are many ways of
handling unknowns when making a decision. We will try to enumerate the most common
methods used to get information prior to decision making under risk and uncertainty.

5.3 DECISION MAKING UNDER UNCERTAINTY

When making decisions under uncertainty, decision makers are completely in the dark
regarding the likelihood of various outcomes. In other words, they are unsure of how likely
(or unlikely) a particular scenario is. For instance, it is impossible to forecast the likelihood
that Mr. X will serve as the nation's prime minister for the ensuing 15 years.

When it is impossible to quantify the probability of a result, the decision-maker must base
their choice only on the conditional payoff values themselves, keeping the effectiveness
standard in mind.

5.3.1 Decision Criteria

Under conditions of uncertainty, only the payoffs are known and the chance of occurrence of any
state of nature is unknown. The following are the criteria of decision making under
uncertainty:
(i) Optimism (Maximax or Minimin) criterion
(ii) Pessimism (Maximin or Minimax) criterion
(iii) Equal Probabilities (Laplace) criterion
(iv) Coefficient of optimism (Hurwiez) criterion
(v) Regret (Salvage) criterion

5.3.1.1 Optimism (Maximax or Minimin) Criterion

This criterion ensures that the decision-maker does not miss the chance to choose the particular
strategy that corresponds to the largest possible profit (maximax) or the lowest possible cost
(minimin). So, out of all the alternatives, he selects the decision alternative that maximizes
the maximum payoff (or minimizes the minimum payoff).

The following is the working method of this criterion:

Step 1: Determine the maximum (or minimum) possible payoff corresponding to each
alternative.

Step 2: Select that decision alternative which corresponds to the maximum (or minimum) of
the above maximum (minimum) payoffs.


Because this criterion finds the option with the overall highest reward feasible while adopting
a very optimistic future outlook, it is called the optimistic criterion.

Example 1: A food products company is contemplating the introduction of a revolutionary
new product with new packaging, or replacing the existing product at a much higher price (strategy S1).
It may even make a moderate change in the composition of the existing product, with new
packaging, at a small increase in price (strategy S2), or make a small change in the composition of
the existing product, backing it with the word ‘New’ and a negligible increase in price (strategy S3).
The three possible states of nature or events are: (i) high increase in sales (N1), (ii) no change
in sales (N2) and (iii) decrease in sales (N3). The marketing department of the company
worked out the payoffs in terms of yearly net profits for each of the strategies against the three events
(expected sales). These are represented in the following table:

Table 1

                         States of Nature
Strategies           N1            N2            N3
S1               7,00,000      3,00,000      1,50,000
S2               5,00,000      4,50,000             0
S3               3,00,000      3,00,000      3,00,000

Which strategy should the concerned executive choose using optimistic criterion?
Solution: From Table 2 we see that, using the optimism (maximax) criterion, the executive’s maximax choice is the
first strategy, S1. The Rs 7,00,000 payoff is the maximum of the maximum payoffs (i.e.,
7,00,000; 5,00,000; and 3,00,000) of the three strategies.

Table 2

State of                 Strategies
Nature            S1            S2            S3
N1             7,00,000      5,00,000      3,00,000
N2             3,00,000      4,50,000      3,00,000
N3             1,50,000             0      3,00,000
Column         7,00,000      5,00,000      3,00,000
(maximum)

Maximax Payoff: Rs 7,00,000 (strategy S1)
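For readers who like to verify such computations programmatically, a minimal Python sketch is given below. It assumes the payoff matrix of Example 1 (strategies S1–S3 as rows, states of nature N1–N3 as columns) and reproduces the maximax choice, together with the maximin choice discussed in the next subsection; the variable names are illustrative only.

# Minimal sketch: optimism (maximax) and pessimism (maximin) criteria
# applied to the Example 1 payoff matrix (rows = strategies, columns = states of nature).
payoffs = {
    "S1": [700000, 300000, 150000],
    "S2": [500000, 450000, 0],
    "S3": [300000, 300000, 300000],
}

# Maximax: choose the strategy whose best (row-maximum) payoff is largest.
maximax_choice = max(payoffs, key=lambda s: max(payoffs[s]))
# Maximin: choose the strategy whose worst (row-minimum) payoff is largest.
maximin_choice = max(payoffs, key=lambda s: min(payoffs[s]))

print("Maximax choice:", maximax_choice, max(payoffs[maximax_choice]))  # S1, 7,00,000
print("Maximin choice:", maximin_choice, min(payoffs[maximin_choice]))  # S3, 3,00,000

Running this reproduces the Rs 7,00,000 maximax payoff for S1 shown above and anticipates the maximin result of the next subsection.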


5.3.1.2 Pessimism (Maximin or Minimax) criterion


This criterion is based on the "conservative approach", which assumes that the worst-case
scenario will occur. For this reason, it is called the pessimistic criterion. The decision-maker
chooses the alternative that, in the case of gains, corresponds to the maximum of the minimum
values (or, in the case of losses or costs, the minimum of the maximum values).

The working method of this criterion is as follows:

Step 1: Determine the minimum (or maximum) possible cost for each alternative.

Step 2: Choose that alternative which corresponds to the maximum of the above minimum
payoffs (or minimum of the above maximum cost).

Example 2: Use the data given in Example 1 and determine which strategy the
concerned executive should choose using the pessimistic criterion.

Solution: From Table 3 we see that, using the pessimism (maximin) criterion, the executive’s maximin choice is the
third strategy, S3. The Rs 3,00,000 payoff is the maximum of the minimum payoffs (i.e.,
1,50,000; 0; and 3,00,000) of the three strategies.

Table 3

State of                 Strategies
Nature            S1            S2            S3
N1             7,00,000      5,00,000      3,00,000
N2             3,00,000      4,50,000      3,00,000
N3             1,50,000             0      3,00,000
Column         1,50,000             0      3,00,000
(minimum)

Maximin Payoff: Rs 3,00,000 (strategy S3)

5.3.1.3 Equal Probabilities (Laplace) criterion

Since the probabilities of states of nature are unknown, it is assumed that all states of nature
will occur with equal probability meaning that all possible events have an equal chance of
happening.

The working method is as follows:

Step 1: Assign equal probability value to each state of nature by using the formula:

1÷ (number of the states of nature)

Step 2: Compute the expected (or average) payoff for each alternative (course of action) by
adding all the payoffs and dividing by the number of possible states of nature, or by applying
the formula:

Expected payoff (alternative i) = Σj (Probability of state of nature j) × (Payoff value for the combination of alternative i and state of nature j)

Step 3: Select the best expected payoff value (maximum for profit and minimum for cost).

Example 3: Use the data given in Example 1 and determine which strategy the
concerned executive should choose using the equal probabilities criterion.

Solution: Assume that each state of nature has a probability of 1/3 of occurrence. Thus, from
Table 4, using the equal probabilities criterion, we see that the largest expected return is from
strategy S1, so the executive must select strategy S1.

Table 4

Strategies        States of Nature                    Expected Return (Rs)
               N1          N2          N3
S1          7,00,000    3,00,000    1,50,000    (7,00,000 + 3,00,000 + 1,50,000)/3 = 3,83,333.33  (Largest payoff)
S2          5,00,000    4,50,000           0    (5,00,000 + 4,50,000 + 0)/3 = 3,16,666.66
S3          3,00,000    3,00,000    3,00,000    (3,00,000 + 3,00,000 + 3,00,000)/3 = 3,00,000

5.3.1.4 Coefficient of optimism (Hurwiez) criterion


According to this criterion, the decision-makers rarely exhibit excessive pessimism or
optimism. The Hurwicz decision criterion (or criterion of optimism) offers a balance between
optimistic and pessimistic decisions because most people tend to fall somewhere in the
middle of the two extremes. By balancing them with varying degrees of optimism and
pessimism, this gives a mechanism for striking a balance between extremes of both optimism
and pessimism.

The working method of this criterion is given below:

Step 1: Choose an appropriate degree of optimism of the decision maker. Let α be his degree of optimism; then (1 − α) is the degree of pessimism, where 0 ≤ α ≤ 1.


Step 2: Determine the maximum as well as minimum payoff for each alternative and obtain
the quantity H = α (maximum payoff) + (1 − α) (minimum payoff) for each alternative.

Step 3: Select the alternative with the best value of H (maximum for profit, minimum for cost).

Example 4: A manufacturer manufactures a product, of which the principal ingredient is a


chemical X. At the moment, the manufacturer spends Rs 1,000 per year on supply of X, but
there is a possibility that the price may soon increase to four times its present figure because
of a worldwide shortage of the chemical. There is another chemical Y, which the
manufacturer could use in conjunction with a third chemical Z, in order to give the same
effect as chemical X. chemical Y and Z would together cost the manufacturer Rs 3,000 per
year, but their prices are unlikely to rise. If the coefficient of optimism is 0.4, then find the
course of action that minimizes the cost?

Solution: The data of the problem is summarized in the following table (the negative figures in
the table represent costs, expressed as negative profits).

Table 5

States of Nature                          Courses of Action
                                   S1 (use Y and Z)     S2 (use X)
N1 (Price of X increases)               -3,000             -4,000
N2 (Price of X does not increase)       -3,000             -1,000

Given the coefficient of optimism α = 0.4, the coefficient of pessimism will be 1 − 0.4 = 0.6. Then, according to Hurwicz, select the course of action that optimizes (maximum for
profit, minimum for cost) the weighted payoff value H = α (best payoff) + (1 − α) (worst payoff).

Table 6

Course of Action       Best Payoff    Worst Payoff    H = 0.4 (Best) + 0.6 (Worst)
S1 (use Y and Z)         -3,000          -3,000                 -3,000
S2 (use X)               -1,000          -4,000                 -2,800

Since course of action S2 has the least weighted cost, 0.4(1,000) + 0.6(4,000) = Rs
2,800, the manufacturer should adopt S2 (continue to use chemical X).
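A short Python sketch of the same Hurwicz computation is given below; it assumes the cost data of Example 4 entered as negative payoffs (as in Table 5) and the coefficient of optimism α = 0.4. The dictionary labels are illustrative only.

# Minimal sketch: Hurwicz criterion for Example 4 (costs as negative payoffs).
alpha = 0.4
payoffs = {
    "S1 (use Y and Z)": [-3000, -3000],
    "S2 (use X)":       [-4000, -1000],
}

# H = alpha * (best payoff) + (1 - alpha) * (worst payoff) for each course of action.
H = {action: alpha * max(v) + (1 - alpha) * min(v) for action, v in payoffs.items()}
best = max(H, key=H.get)   # the largest H corresponds to the least cost here

print(H)                     # {'S1 (use Y and Z)': -3000.0, 'S2 (use X)': -2800.0}
print("Recommended:", best)  # S2 (use X), i.e. a weighted cost of Rs 2,800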


5.3.1.5 Regret (Salvage) criterion

The final decision criterion that we explore is based on opportunity loss. This criterion is also
called opportunity loss decision criterion or minimax regret decision criterion. The
discrepancy between the optimal payoff and the actual payoff obtained is referred to as
opportunity loss. In other words, it represents the amount lost as a result of choosing the
wrong alternative. The regret (Savage) criterion identifies the alternative that
minimizes the maximum opportunity loss.

The following is the algorithm for this criterion:

Step 1: From the given payoff matrix, develop an opportunity-loss (or regret) matrix as
follows:
(i) Find the best payoff corresponding to each state of nature.
(ii) Subtract all other payoff values in that row from this value.

Step 2: For each decision alternative identify the worst (or maximum regret) payoff value.
Record this value in the new row.

Step 3: Select a decision alternative resulting in a smallest anticipated opportunity loss value.

Example 5: Consider the same data as in Example 1 and determine which strategy the
concerned executive should choose using the regret criterion.

Solution: The regret (opportunity-loss) table is shown below:

Table 7

State of                            Strategies (opportunity loss)                                       Best
Nature            S1                              S2                              S3                    Payoff
N1      7,00,000 − 7,00,000 = 0          7,00,000 − 5,00,000 = 2,00,000   7,00,000 − 3,00,000 = 4,00,000   7,00,000
N2      4,50,000 − 3,00,000 = 1,50,000   4,50,000 − 4,50,000 = 0          4,50,000 − 3,00,000 = 1,50,000   4,50,000
N3      3,00,000 − 1,50,000 = 1,50,000   3,00,000 − 0 = 3,00,000          3,00,000 − 3,00,000 = 0          3,00,000
Column (maximum)   1,50,000              3,00,000                          4,00,000

Minimax Regret: Rs 1,50,000 (strategy S1)


Hence the executive should adopt the minimum opportunity-loss strategy, S1.
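The regret matrix of Table 7 can also be generated mechanically. The Python sketch below assumes the Example 1 payoffs with states of nature as rows and strategies as columns; it is illustrative only.

# Minimal sketch: minimax regret (Savage) criterion for the Example 1 payoffs.
strategies = ["S1", "S2", "S3"]
payoff_rows = [
    [700000, 500000, 300000],   # state N1
    [300000, 450000, 300000],   # state N2
    [150000, 0, 300000],        # state N3
]

# Regret in each state = (best payoff in that state) - (payoff of the strategy).
regret_rows = [[max(row) - x for x in row] for row in payoff_rows]

# Maximum regret of each strategy, then the strategy with the smallest maximum regret.
max_regret = {s: max(row[j] for row in regret_rows) for j, s in enumerate(strategies)}
choice = min(max_regret, key=max_regret.get)

print(max_regret)                          # {'S1': 150000, 'S2': 300000, 'S3': 400000}
print("Minimax regret choice:", choice)    # S1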

IN-TEXT QUESTIONS
1. A course of action that may be chosen by a decision maker is called an ……
2. In the Hurwicz criterion of decision making, the coefficient of realism describes the degree of
……..
3. The …….. decision-making criterion is based on opportunity loss.

5.4 RISK PROFILE OR DECISION UNDER RISK

The decision-maker has access to enough data to estimate the likelihood of each event (state
of nature). A decision maker is considered to make risky decisions when he selects one
alternative out of numerous that have known probabilities of occurrence. From the past data,
several outcomes' probability can be calculated. The decision-maker may frequently base
their choices on personal beliefs about what will happen in the future or on information
gleaned from market research, the opinions of experts, etc. The issue can be resolved as a
decision problem under risk.

Under the condition of risk, one of the most common ways of making decisions is to select
the alternative with the highest expected monetary value (EMV) of the payoff.
The ideas of expected opportunity loss and expected value of perfect information are also
discussed.

5.4.1 Expected Monetary Value (EMV)

The expected monetary value (EMV) for a given course of action is obtained by adding the
payoff values multiplied by the probabilities associated with each state of nature.
Mathematically, the EMV of course of action Sj is stated as follows:

EMV (Sj) = Σ pi aij, i = 1, 2, …, m

where, m = number of possible states of nature
pi = probability of occurrence of state of nature Ni
aij = payoff associated with state of nature Ni and course of action Sj


The EMV criterion may be summarized as below:


Step 1: List conditional profit for each act-event combinations, along with the corresponding
event probabilities.
Step 2: For each act, determine the expected conditional profits.
Step 3: Determine EMV for each act.
Step 4: Choose the act which corresponds to the optimal EMV.

Example 6: Mr X flies quite often from town A to town B. He can use the airport bus which
costs Rs 25 but if he takes it, there is a 0.08 chance that he will miss the flight. The stay in a
hotel costs Rs 270 with a 0.96 chance of being on time for the flight. For Rs 350 he can use a
taxi, which gives a 99% chance of being on time for the flight. If Mr X catches the plane
on time, he will conclude a business transaction that will produce a profit of Rs 10,000,
otherwise he will lose it. Which mode of transport should Mr X use? Answer on the basis of
the EMV criterion.

Solution: Computation of EMV associated with various courses of action is shown in table 8.
Table 8

Course of Action: Bus
  Catches the flight:  payoff = 10,000 − 25 = 9,975;   probability = 0.92;  expected value = 9,177
  Misses the flight:   payoff = −25;                   probability = 0.08;  expected value = −2
  Expected monetary value (EMV) = 9,175

Course of Action: Stay in Hotel
  Catches the flight:  payoff = 10,000 − 270 = 9,730;  probability = 0.96;  expected value = 9,340.80
  Misses the flight:   payoff = −270;                  probability = 0.04;  expected value = −10.80
  Expected monetary value (EMV) = 9,330

Course of Action: Taxi
  Catches the flight:  payoff = 10,000 − 350 = 9,650;  probability = 0.99;  expected value = 9,553.50
  Misses the flight:   payoff = −350;                  probability = 0.01;  expected value = −3.50
  Expected monetary value (EMV) = 9,550

Since EMV associated with course of action ‘Taxi’ is largest (= Rs 9,550), it is the logical
alternative.
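The EMV figures of Table 8 can be checked with a few lines of Python; the sketch below uses the data of Example 6, and the function name and dictionary labels are assumptions of this illustration.

# Minimal sketch: EMV criterion for Example 6 (profit of Rs 10,000 if the flight is caught).
alternatives = {           # alternative: (fare in Rs, probability of catching the flight)
    "Bus":   (25, 0.92),
    "Hotel": (270, 0.96),
    "Taxi":  (350, 0.99),
}

def emv(fare, p_on_time, deal_value=10000):
    # Catch the flight: profit = deal_value - fare; miss it: lose only the fare.
    return p_on_time * (deal_value - fare) + (1 - p_on_time) * (-fare)

emvs = {name: emv(fare, p) for name, (fare, p) in alternatives.items()}
print(emvs)                                            # Bus 9,175; Hotel 9,330; Taxi 9,550
print("Best alternative:", max(emvs, key=emvs.get))    # Taxi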

5.4.2 Expected Opportunity Loss (EOL)

An alternative approach in decision making under risk is to minimize expected opportunity


loss (EOL). Expected opportunity loss (EOL) is also called the expected value of regret.
Mathematically, the EOL of course of action Sj is stated as follows:

EOL (Sj) = Σ pi lij, i = 1, 2, …, m

where, lij = opportunity loss due to state of nature Ni and course of action Sj


pi = probability of occurrence of state of nature Ni

Major steps in the EOL criterion may be summarized as below:


Step 1: List the conditional profit table for each act-event combination, along with
corresponding event probabilities.
Step 2: For each event, determine the COL (conditional opportunity loss) values by first
locating the most favourable act (maximum payoff) for that event and then taking the
difference between that conditional profit value and each conditional profit for that event.
Step 3: For each act, multiply the COL values by the corresponding event probabilities and sum these values to get the
expected opportunity loss (EOL) for that act.
Step 4: Choose the act which corresponds to the minimum EOL value.

Example 7: A company manufactures goods for a market in which the technology of
the product is changing rapidly. The research and development department has produced a
new product that appears to have potential for commercial exploitation. A further Rs 60,000
is required for development testing. The company has 100 customers and each customer
might purchase, at the most, one unit of the product. Market research suggests a selling
price of Rs 6,000 for each unit, with the total variable costs of manufacturing and selling
estimated at Rs 2,000 for each unit.
From previous experience, it has been possible to derive a probability distribution relating to
the proportion of customers who will buy the product as follows:
Proportion of customers: 0.04 0.08 0.12 0.16 0.20
Probability: 0.10 0.10 0.20 0.40 0.20
Determine the expected opportunity losses, given no other information than that stated above,
and check whether or not the company should develop the product.

Solution: If p is the proportion of customers who purchase the new product, the company’s
conditional profit is: (6,000 − 2,000) × 100p − 60,000 = Rs (4,00,000p − 60,000).

Let N1, N2, N3, N4 and N5 (p = 0.04, 0.08, 0.12, 0.16 and 0.20) be the possible states of nature, i.e. the proportion of the customers who
will buy the new product, and let S1 (develop the product) and S2 (do not develop the product) be
the two courses of action.

The conditional profit values (payoffs) for each pair of Ni and Sj are shown in Table 9.


Table 9

Proportion of Customers        Conditional Profit = Rs (4,00,000p − 60,000)
(States of Nature)             S1 (Develop)        S2 (Do not Develop)
0.04                              -44,000                   0
0.08                              -28,000                   0
0.12                              -12,000                   0
0.16                                4,000                   0
0.20                               20,000                   0

Opportunity loss values are given below in table 10.


Table 10

Proportion of Customers   Probability    Conditional Profit (Rs)      Opportunity Loss (Rs)
(States of Nature)                         S1          S2               S1          S2
0.04                         0.10        -44,000        0             44,000         0
0.08                         0.10        -28,000        0             28,000         0
0.12                         0.20        -12,000        0             12,000         0
0.16                         0.40          4,000        0                  0      4,000
0.20                         0.20         20,000        0                  0     20,000

Using the given estimates of the probabilities associated with each state of nature, the expected
opportunity loss (EOL) for each course of action is:

EOL (S1) = 0.1(44,000) + 0.1(28,000) + 0.2(12,000) + 0.4(0) + 0.2(0) = Rs 9,600
EOL (S2) = 0.1(0) + 0.1(0) + 0.2(0) + 0.4(4,000) + 0.2(20,000) = Rs 5,600

Since the company seeks to minimize the expected opportunity loss, it should
select course of action S2 (do not develop the product), which has the minimum EOL.
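The EOL figures above can be reproduced with a short computation; the sketch below assumes the probabilities and conditional profits of Tables 9 and 10.

# Minimal sketch: expected opportunity loss (EOL) for Example 7.
probs     = [0.10, 0.10, 0.20, 0.40, 0.20]           # probabilities of p = 0.04 ... 0.20
profit_S1 = [-44000, -28000, -12000, 4000, 20000]    # S1: develop the product
profit_S2 = [0, 0, 0, 0, 0]                          # S2: do not develop

# Opportunity loss in each state = best payoff in that state minus the payoff of the act.
best = [max(a, b) for a, b in zip(profit_S1, profit_S2)]
eol_S1 = sum(p * (m - a) for p, a, m in zip(probs, profit_S1, best))
eol_S2 = sum(p * (m - a) for p, a, m in zip(probs, profit_S2, best))

print("EOL(S1, develop)        =", eol_S1)   # 9,600
print("EOL(S2, do not develop) =", eol_S2)   # 5,600 -> choose S2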

5.4.3 Expected value of Perfect Information (EVPI)

Choosing a course of action that produces the intended results in the presence of any state of
nature is simple if the decision maker is able to obtain flawless (complete and accurate)
knowledge about the occurrence of various states of nature. The Expected value of perfect
information (EVPI) may be defined as the maximum sum a person would be willing to pay to
obtain perfect knowledge of which event would occur. Without any more information, the
EMV or EOL criterion assists the decision-maker in choosing a specific course of action that
maximises the expected payoff. Mathematically, it is stated as:

EVPI = Σ pi a*i − EMV*, i = 1, 2, …, m

where, pi = probability of occurrence of state of nature Ni

a*i = best payoff when the best course of action is taken in the presence of state of nature Ni

EMV* = maximum expected monetary value.
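As a quick illustration of this formula, the Python sketch below reuses the data of Example 7: the expected payoff with perfect information is compared with the best EMV, and the difference equals the EOL of the best act. This is a sketch only, not part of the original example.

# Minimal sketch: EVPI computed from the Example 7 data (profits in Rs).
probs   = [0.10, 0.10, 0.20, 0.40, 0.20]
develop = [-44000, -28000, -12000, 4000, 20000]   # S1
no_dev  = [0, 0, 0, 0, 0]                         # S2

emv_S1 = sum(p * x for p, x in zip(probs, develop))   # -4,000
emv_S2 = sum(p * x for p, x in zip(probs, no_dev))    # 0

# Expected payoff with perfect information: the best payoff in every state of nature.
eppi = sum(p * max(a, b) for p, a, b in zip(probs, develop, no_dev))   # 5,600

evpi = eppi - max(emv_S1, emv_S2)
print("EVPI =", evpi)   # 5,600 = EOL of the best act (S2)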

Example 8: XYZ company manufactures parts for passenger cars and sells them in lots of
10,000 parts each. The company has a policy of inspecting each lot before it is actually
shipped to the retailer. Five inspection categories, established for quality control, represent
the percentage of defective items contained in each lot. These are given in the following
table. The daily inspection chart for the past 100 inspections showed the following rating
breakdown. On the basis of this, the management is considering two possible courses of
action:

(i) : Shut down the entire plant operations and thoroughly inspect each machine.

Rating           Proportion of Defective Items    Frequency

Excellent (A)               0.02                      25
Good (B)                    0.05                      30
Acceptable (C)              0.10                      20
Fair (D)                    0.15                      20
Poor (E)                    0.20                       5
Total                                                100

(ii) : Continue production as it now exists but offer the customer a refund for defective
items that are discovered and subsequently returned.

The first alternative will cost Rs 600 while the second alternative will cost the company Rs 1
for each defective item that is returned. What is the optimum decision for the company? Find
the EVPI.

Solution: Calculations of inspection and refund costs are shown in Table 11.


Table 11

Rating   Defective Rate   Probability        Cost (Rs)              Opportunity Loss (Rs)
                                         Inspect    Refund           Inspect     Refund
A             0.02           0.25          600        200              400          0
B             0.05           0.30          600        500              100          0
C             0.10           0.20          600      1,000                0        400
D             0.15           0.20          600      1,500                0        900
E             0.20           0.05          600      2,000                0      1,400
Total                        1.00   Expected: 600     670        EOL =  170        240

The cost of refund is calculated as follows:

For lot A: 10,000 × 0.02 × Rs 1 = Rs 200

Similarly, the cost of refund for the other lots is calculated.

Expected cost of refund = Σ (probability × refund cost of the lot) = Rs 670 (as shown in Table 11).

Expected cost of inspection = Rs 600 × 1.00 = Rs 600.

Since the expected cost of refund is more than the expected cost of inspection, the plant should be shut down for
inspection. Also, EVPI = EOL of inspection = Rs 170.

IN-TEXT QUESTIONS

4. The expected monetary value criterion is used for decision making under risk. True/False

5. The difference between the highest and the lowest EMV is said to be the EVPI. True/False

6. The payoff due to equally likely criterion of decision making is same as minimum…….


5.5 DECISION TREE

A decision tree can graphically represent any issue that can be expressed in a decision table.
The different decision-alternatives and the order of events are graphically represented by
decision trees as tree branches. Similar to a network, a decision tree is made up of nodes (or
points) and arcs (or lines). They include decision (choice) nodes and states of nature (chance)
nodes when building a tree diagram. These nodes are depicted by the following symbols:

□ A decision point (or node). Branches (arcs) coming from the decision point (nodes)
denote all decision alternatives available to the decision maker at that point. The decision-
maker must choose just one of these alternatives.
○ Situation of uncertainty (or an outcome node or event point). Arcs emanating from an
outcome node denote all outcomes that could occur at that node. Only one of these
possibilities will come true.
These occurrences, which may indicate customer demand or other factors, are not entirely
within the decision maker's control. The primary benefit of a tree diagram is that a following
act (referred to as a second act) to the occurrence of each event may also be portrayed. In the
tree diagram, the outcome (payoff) for each act-event combination may be shown at the
extremities of each branch. The following decision tree diagram is displayed:

Outcome
Act Event Act Event (Payoff)

Fig: A decision Tree Diagram

O1121 represents the payoff of the act event combination A1-E1-B2-E1.


Folding back or rolling back a decision tree is the process of analysing a decision tree to find
the best course of action. By working our way back to the first decision node, we start with
the payoffs (i.e., the right extreme of the tree). In folding back the decision tree, we use the
following two rules:
• Using the probability of each possible outcome at that node and the payoffs associated
with those outcomes, we compute the expected payoff at each outcome node.
• At each decision node, we choose the alternative that produces the best expected
payoff. If the expected payoffs are profits, we choose the alternative with the highest
value; in contrast, if the expected payoffs are costs, we choose the alternative with the
smallest value.

Thus, in a decision-tree, the decision-maker specifies each act-event sequence's potential


alternatives, events, and payoff values, along with their probabilities. With the help of this, he
may calculate expected payoff values and, as a result, the EMV (expected monetary value) of
each act.
When making judgments in scenarios with multiple stages and decisions that are all
dependent on one another, a decision tree is a very helpful tool. The computation of EMV for
each of the tree's main branches constitutes the contemporary method for decision tree
analysis. When the EMV for a particular path has been established, these values become the
conditional expected payoffs for their corresponding branches.
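The folding-back procedure can also be expressed as a short recursive routine. The Python sketch below uses a simple nested-dictionary representation of a tree and purely hypothetical payoffs and probabilities; it is meant only to illustrate the two rolling-back rules stated above.

# Minimal sketch (hypothetical data): folding back a decision tree.
# Chance nodes return the probability-weighted average of their branches;
# the decision node picks the option with the highest expected payoff.
def expected_value(node):
    if node["type"] == "payoff":
        return node["value"]
    if node["type"] == "chance":
        return sum(p * expected_value(child) for p, child in node["branches"])
    return max(expected_value(child) for child in node["options"].values())

tree = {
    "type": "decision",
    "options": {
        "Launch product": {"type": "chance", "branches": [
            (0.6, {"type": "payoff", "value": 500}),    # hypothetical figures
            (0.4, {"type": "payoff", "value": -100}),
        ]},
        "Do nothing": {"type": "payoff", "value": 0},
    },
}

emvs = {name: expected_value(child) for name, child in tree["options"].items()}
print(emvs)                                   # {'Launch product': 260.0, 'Do nothing': 0}
print("Best option:", max(emvs, key=emvs.get))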
Example 9: You are given the following estimates concerning a Research and Development
programme:

Decision          Probability of Decision   Outcome   Probability of Outcome    Payoff Value of
                  Given Research R          Number    given Decision            Outcome (Rs '000)
Develop                  0.5                   1             0.6                     600
                                               2             0.3                    -100
                                               3             0.1                       0
Do not develop           0.5                   1             0.00                    600
                                               2             0.00                   -100
                                               3             1.00                      0

Construct and evaluate the decision tree diagram for the above data. Show your workings for
evaluation.


Solution: The decision tree of the given problem along with necessary calculations is shown
in figure 9.1.
Figure 9.1
(The decision tree has a decision node with two branches, D1 'Develop' and D2 'Do not develop', each leading to a chance node with outcomes x1, x2 and x3. Its evaluation is summarized below; payoffs and expected payoffs are in '000 Rs.)

Branch                      Outcome    Joint Probability     Payoff    Expected Payoff
D1, Develop (0.5)             x1        0.5 × 0.6 = 0.30       600      0.30 × 600 = 180
                              x2        0.5 × 0.3 = 0.15      -100      0.15 × (-100) = -15
                              x3        0.5 × 0.1 = 0.05         0        0
D2, Do not develop (0.5)      x1        0.5 × 0.0 = 0.00       600        0
                              x2        0.5 × 0.0 = 0.00      -100        0
                              x3        0.5 × 1.0 = 0.50         0        0

Expected payoff of the 'Develop' branch = 180 − 15 + 0 = 165, i.e. Rs 1,65,000; expected payoff of the 'Do not develop' branch = 0. Hence the decision to develop the product is preferred.

Example 10 A businessman has two independent investment portfolios A and B, available to


him, but he lacks the capital to undertake both of them simultaneously. He can either choose
A first and then stop, or, if A is successful, then take B, or vice versa. The probability of
success of A is 0.6, while for B it is 0.4. Both investment schemes require an initial capital
outlay of Rs 10,000 and both return nothing if the venture proves to be unsuccessful.
Successful completion of A will return Rs 20,000 (over cost) and successful completion of B
will return Rs 24,000 (over cost). Draw a decision tree in order to determine the best strategy.
Solution: The decision tree based on the given information is shown in Fig. 10.1. The
evaluation of each chance node and decision is given in Table 12.


Table 12: Evaluation of Decision and Chance Nodes

Figure 10.1

Since the EMV at node D1 (= Rs 10,160) is the highest, the best strategy is to accept
course of action A first and, if A is successful, then accept B.


IN-TEXT QUESTIONS

7. A ……….. provides a graphical representation of the various decision-alternatives.


8. The process by which a decision tree is analyzed to identify the optimal decision is
referred to as………

5.6 SUMMARY

The topic of decision analysis, which is an analytical and systematic method of analysing
decision making, is introduced in this chapter. We begin by outlining the procedures involved
in decision-making under two different conditions: (1) uncertainty and (2) risk. We use
criteria like maximax, maximin, criterion of realism, equally likely, and minimax regret to
determine the optimum options for decision-making when faced with uncertainty. We examine
the calculation and application of the expected monetary value (EMV), expected opportunity
loss (EOL), and expected value of perfect information (EVPI) for decision-making under risk.
For more complex problems requiring sequential decision-making, decision trees are employed.
These can also be extended to calculate the expected value of sample information (EVSI).

5.7 GLOSSARY

Decision Alternative:- A course of action or a strategy that can be chosen by a decision


maker.
Decision Table:- A table in which decision alternatives are listed down the rows and
outcomes are listed across the columns. The body of the table contains the payoffs. Also
known as a payoff table.
Outcome:- An occurrence over which a decision maker has little or no control. Also known
as a state-of-nature.


5.8 ANSWERS TO IN-TEXT QUESTIONS

1. Alternative 5. False
2. Optimism 6. EOL criterion
3. Minimax regret 7. Decision Tree
4. True 8. Folding back

5.9 SELF-ASSESSMENT QUESTIONS

11. What techniques are used to solve decision-making problems under uncertainty?
Which technique results in an optimistic decision?
12. State the meanings of EMV and EVPI.
13. A manufacturer manufactures a product, of which the principal ingredient is a
chemical X. At the moment, the manufacturer spends Rs 1,000 per year on supply of
X, but there is a possibility that the price may soon increase to four times its present
figure because of a worldwide shortage of the chemical. There is another chemical Y,
which the manufacturer could use in conjunction with a third chemical Z, in order to
give the same effect as chemical X. Chemicals Y and Z would together cost the
manufacturer Rs 3,000 per year, but their prices are unlikely to rise. What action
should the manufacturer take? Apply the maximin and minimax criteria for decision-
making and give two sets of solutions. If the coefficient of optimism is 0.4, then find
the course of action that minimizes the cost.
14. The manager of a flower shop promises its customers delivery within four hours on all
flower orders. All flowers are purchased on the previous day and delivered to Parker
by 8.00 am the next morning. The daily demand for roses is as follows.

Dozens of roses : 70 80 90 100


Probability : 0.1 0.2 0.4 0.3


The manager purchases roses for Rs 10 per dozen and sells them for Rs 30. All unsold
roses are donated to a local hospital. How many dozens of roses should Parker order each
evening to maximize its profits? What is the optimum expected profit?
15. A large steel manufacturing company has three options with regard to production: (i)
produce commercially (ii) build pilot plant (iii) stop producing steel. The management
has estimated that their pilot plant, if built, has a 0.8 chance of high yield and a 0.2 chance
of low yield. If the pilot plant does show a high yield, management assigns a
probability of 0.75 that the commercial plant will also have a high yield. If the pilot
plant shows a low yield, there is only a 0.1 chance that the commercial plant will
show a high yield. Finally, management’s best assessment of the yield on a
commercial-size plant without building a pilot plant first has a 0.6 chance of high
yield. A pilot plant will cost Rs. 3,00,000. The profits earned under high and low yield
conditions are Rs. 1,20,00,000 and – Rs. 12,00,000 respectively. Find the optimum
decision for the company.

5.10 REFERENCES

• Hillier, F., & Lieberman, G. J. (2014). Introduction to operations research (10th ed.). McGraw-Hill Education.

• Powell, S. G., & Baker, K. R. (2017). Business analytics: The art of modeling with
spreadsheets. Wiley.

5.11 SUGGESTED READINGS

• Anderson, D., Sweeney, D., Williams, T., Martin, R.K. (2012). An introduction to
management science: quantitative approaches to decision making (13th ed.). Cengage
Learning.

• Balakrishnan, N., Render, B., Stair, R. M., & Munson, C. (2017). Managerial decision
modeling. Upper Saddle River, Pearson Education.


LESSON 6
PROJECT SCHEDULING
Dr. Sandeep Mishra
Assistant Professor
Shaheed Rajguru College of Applied Sciences for Women.
University of Delhi
Email Id: [email protected]

STRUCTURE

6.1 Learning Objectives


6.2 Introduction: Project Scheduling
6.3 Scheduling with known activity times
6.3.1 PERT versus CPM
6.3.2 Critical Path Analysis
6.3.2.1 Forward Pass Method
6.3.2.2 Backward Pass Method
6.3.2.3 Float (Slack) of an Activity and Event
6.3.2.4 Critical Path
6.4 Scheduling with uncertain activity times
6.4.1 Estimation of Project Completion Time
6.5 Time-cost trade-offs
6.5.1 Project Crashing
6.5.2 Time-cost Trade-Off Procedure
6.6 Summary
6.7 Glossary
6.8 Answers to In-text Questions
6.9 Self-Assessment Questions
6.10 References
6.11 Suggested Readings

6.1 LEARNING OBJECTIVES

After completing this chapter, you will be able to:

• Understand how to plan, monitor, and control projects using PERT/CPM.



• Determine earliest start, earliest finish, latest start, latest finish, and slack times for
each activity.
• Understand the impact of variability in activity times on the project completion time.
• Develop resource loading charts to plan, monitor, and control the use of various
resources during a project.
• Understand the time-cost trade-off procedure.

6.2 INTRODUCTION

Have you ever overseen a significant event? You might have served as the prom committee
chair or the board chair for the graduation ceremony in high school. You might have led your
team during the introduction of a new product, the planning of a facility expansion, or the
implementation of enterprise resource planning. Even if you have never managed people in
such circumstances, you have undoubtedly had your own personal projects to contend with,
such as writing a paper, moving to a new apartment, applying to college, or selling a house.
As a volunteer, you may have overseen the annual function, the elementary school picnic, or
the river clean-up project. How did you plan your day's events? Most of your projects, did
they finish on time? How did you handle unforeseen circumstances? Did you finish your
work on time and on budget? All of these are crucial components of project management.
Project managers with expertise are essential assets for organisations since they handle
projects frequently.

The listing of activities, deliverables, and milestones within a project constitutes scheduling
in project management. An activity's start and end dates, duration, and resources are typically
included in a schedule. Successful time management requires effective project scheduling,
especially for firms that provide professional services.

A project involves many interrelated activities (or tasks) that must be completed on or before
specified time limit, in a specified sequence (or order) with specified quality and minimum
cost of using resources such as personnel, money, material, facilities and/or space.

In this lesson we mainly focus on creating and managing schedules. This covers project
scheduling with known activity times using the well-known techniques PERT and CPM,
scheduling with uncertain activity times, and time-cost trade-offs.


6.3 SCHEDULING WITH KNOWN ACTIVITY TIMES

Managers usually must plan, manage, and supervise projects that involve a variety of separate
jobs or tasks completed by several departments and individuals. These projects are typically
so large or intricate that management frequently struggles to remember every element
important to the plan, schedule, and development of the project. In these situations, both the
critical path method (CPM) and the programme evaluation and review technique (PERT)
have proven to be very helpful.

To aid in the planning and scheduling of the US Navy's massive Polaris Nuclear Submarine
Missile programme, which involved thousands of actions, a research team created PERT in
1956–1958. The team's goal was to build and plan the Polaris missile system as efficiently as
possible.

CPM was created between 1956 and 1958 by the E.I. DuPont Company and the Remington Rand Corporation,
virtually simultaneously with PERT. The organisation set out to create a method for scheduling the
maintenance of chemical plants.
A wide range of projects can be planned, scheduled, and managed using PERT and CPM:
• Research and development of new products and processes
• Construction of plants, buildings, and highways
• Maintenance of large and complex equipment
• Design and installation of new systems

Project managers are responsible for planning and coordinating the numerous tasks or
activities in these kinds of projects to ensure that everything is finished on time.

6.3.1 PERT versus CPM

The primary difference between PERT and CPM is in the way the time needed for each
activity in a project is estimated. In PERT, each activity has three-time estimates that are
combined to determine the expected activity completion time and its variance.

PERT is considered a probabilistic technique; it allows us to find the probability that the
entire project will be completed by a specific due date. In PERT analysis emphasis is given


on the completion of a task rather than the activities required to be performed to complete a
task. Thus, PERT is also known as an event-oriented technique. PERT is used for one-time
projects that involve activities of non-repetitive nature (i.e. activities that may never have
been performed before), where completion times are uncertain.

In contrast, CPM is a deterministic approach. It estimates the completion time of each activity
using a single time estimate. This estimate, called the standard or normal time, is the time we
estimate it will take under typical conditions to complete the activity. In some cases, CPM
also associates a second time estimate with each activity. This estimate, called the crash time,
is the shortest time it would take to finish an activity if additional funds and resources were
allocated to the activity. CPM is used for projects that involve activities of a
repetitive nature.

6.3.2 Critical Path Analysis

The objective of critical path analysis is to predict the project's overall duration and give
starting and finishing durations to every activity involved. This makes it easier to compare
the project's actual progress to its projected completion date.

The expected duration of the project is estimated from the durations of the individual activities,
which may be determined uniquely (in the case of CPM) or may involve three time estimates
(in the case of PERT). The following elements need to be determined in order to establish the
project schedule.
i. Total completion time of the project.
ii. Earlier and latest start time of each activity.
iii. Critical activities and critical path.
iv. Float for each activity.

Notations:
Ei = Earliest occurrence time of an event i. This is the earliest time at which the event can occur, i.e.,
the time by which all the activities preceding it have been completed, without delaying the entire project.
Li = Latest allowable occurrence time of an event i. This is the latest time at which the event can occur
without causing a delay in the project’s completion time.
ESij = Early starting time of an activity (i, j).
LSij = Late starting time of an activity (i, j).
EFij = Early finishing time of an activity (i, j).


LFij = Late finishing time of an activity (i, j).

tij = Duration of an activity (i, j).

There should only be one start event and one finish event in a project schedule. The other
events are numbered consecutively with integer 1, 2,…, n, such that i<j for any two events i
and j connected by an activity, which starts at i and finishes at j.

The schedule for each activity is created using a two-pass approach that includes a forward
pass and a backward pass. The earliest times (ESij and EFij) are determined during the
forward pass. The latest times (LSij and LFij) are determined during the backward pass.

6.3.2.1 Forward Pass Method (For Earliest Event Time)

According to this method, calculations start at the first event, let's say 1, move through the
events in increasing order of the event numbers, and finally stop at the last event, let's say N.
Each event's earliest occurrence time (E), as well as the earliest start and end times for each
activity that starts there, are determined. The project's earliest probable completion time is
determined by the event N's earliest occurrence time when calculations cease at that point.

The procedure can be summed up as follows:

1. Set the earliest occurrence time of the initial event 1 to zero. That is, Ei = 0, for i = 1.

2. Calculate the earliest start time for each activity that begins at event i (= 1). This is equal to
the earliest occurrence time of event i (the tail event). That is: ESij = Ei, for all activities (i, j)
starting at event i.

3. Calculate the earliest finish time of each activity that begins at event i. This is equal to the
earliest start time of the activity plus the duration of the activity. That is: EFij = ESij + tij =
Ei + tij, for all activities (i, j) beginning at event i.

4. Proceed to the next event, say j; j > i.

5. Calculate the earliest occurrence time of the event j. This is the maximum of the earliest
finish times of all activities ending at that event, that is, Ej = Max {EFij} = Max {Ei +
tij}, for all immediate predecessor activities.


6. If j = N (the final event), then the earliest finish time for the project, that is, the earliest occurrence
time of the final event, is given by EN = Max {EFij} = Max {Ei + tij}, for all
terminal activities (i, N).

6.3.2.2 Backward Pass Method (For Latest Allowable Event Time)

The computations in this technique start with the final event N, move through the events in
decreasing sequence of event numbers, and finally arrive at the first event 1. Each event's
latest occurrence time (L), as well as the most recent start and completion times for each
activity that is ending there, are determined. Up until the initial occurrence, the process is
repeated.

The following is a summary of the process:

1. Set the latest occurrence time of the last event, N, equal to its earliest occurrence time (known
from the forward pass method). That is, LN = EN, for j = N.

2. Calculate the latest finish time of each activity which ends at event j. This is equal to the latest
occurrence time of that event. That is: LFij = Lj, for all activities (i, j) ending at event j.

3. Calculate the latest start times of all activities ending at j. This is obtained by subtracting
the duration of the activity from the latest finish time of the activity. That is: LFij = Lj and
LSij = LFij − tij = Lj − tij, for all activities (i, j) ending at event j.

4. Proceed backward to the next event in the sequence, i.e., decrease j by 1.

5. Calculate the latest occurrence time of event i (i < j). This is the minimum of the latest
start times of all activities starting from that event. That is: Li = Min {LSij} = Min {Lj − tij}, for all
immediate successor activities.

6. If j = 1 (the initial event), then the latest occurrence time
of the initial event is given by: L1 = Min {LSij} = Min {Lj − tij}, for all immediate
successor activities.

6.3.2.3 Float (Slack) of an Activity and Event

The amount of time that a non-critical activity or event can be postponed or prolonged
without extending the overall project completion schedule is known as the float (slack) or

free time. Finding the amount of slack time, or spare time, that each activity has is easy once
we have determined the earliest and latest timings for all activities. Slack is the amount of
time an activity may be postponed without causing the project as a whole to lag. In a project,
there are three different sorts of floats for each non-critical activity.

(a) Total float: This is the amount of time by which an activity can be delayed if all activities
preceding it are completed as early as possible and all activities following it can be delayed
until their latest permissible time.
For each non-critical activity (i, j) the total float is equal to the latest allowable time
for the event at the end of the activity minus the earliest time for the event at the beginning of the
activity minus the activity duration. Mathematically,
Total float, TFij = Lj − Ei − tij

(b) Free float: This is the amount of time by which the completion of a non-critical activity can
be delayed without affecting its immediately succeeding activities. The free
float of a non-critical activity (i, j) is computed as follows:
Free float, FFij = Ej − Ei − tij

(c) Independent float: This is the length of time by which a non-critical activity can be
delayed without affecting the completion times of the activities that come before or after it.
The independent float of each non-critical activity is calculated mathematically as follows:
Independent float, IFij = Ej − Li − tij

Independent float values that are negative are regarded as zero.
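The forward-pass, backward-pass and float formulas above translate directly into code. The Python sketch below works on a small hypothetical activity-on-arc network (the activity list is an assumption of this illustration, not taken from any example in this lesson); the zero-total-float activities it reports form the critical path discussed in the next subsection.

# Minimal sketch (hypothetical network): E-values, L-values, total floats and critical activities.
activities = {          # (tail event, head event): duration
    (1, 2): 4, (1, 3): 3, (2, 4): 2, (3, 4): 6, (4, 5): 2,
}
events = sorted({e for arc in activities for e in arc})

# Forward pass: earliest occurrence times E.
E = {events[0]: 0}
for j in events[1:]:
    E[j] = max(E[i] + d for (i, k), d in activities.items() if k == j)

# Backward pass: latest allowable occurrence times L.
L = {events[-1]: E[events[-1]]}
for i in reversed(events[:-1]):
    L[i] = min(L[j] - d for (h, j), d in activities.items() if h == i)

# Total float of each activity; zero-float activities form the critical path.
total_float = {arc: L[arc[1]] - E[arc[0]] - d for arc, d in activities.items()}
critical = [arc for arc, tf in total_float.items() if tf == 0]

print("E:", E)
print("L:", L)
print("Total floats:", total_float)
print("Critical activities:", critical)   # (1,3), (3,4), (4,5); project length 11

Here the critical path is 1 – 3 – 4 – 5 with a length of 11 time units, while activities (1, 2) and (2, 4) each carry a total float of 3.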

6.3.2.4 Critical Path

Certain activities in any project are called critical activities because delay in their execution
will cause further delay in the project completion time. All activities having zero total float
value are identified as critical activities, i.e., L = E.

The critical path is the sequence of critical activities between the start event and end event of
a project. This is critical in the sense that if execution of any activity of this sequence is
delayed, then completion of the project will be delayed. A critical path is shown by a thick
line or double lines in the network diagram. The length of the critical path is the sum of the


individual completion times of all the critical activities and define the longest time to
complete the project. The critical path in a network diagram can be identified as below:
i. If the E-value and L-value are equal for both the tail and head events of an activity, i.e., Ei = Li and Ej = Lj, then the activity (i, j)
between such events is referred to as critical.
ii. On the critical path, Ej − Ei = Lj − Li = tij.
Example 1: An established company has decided to add a new product to its line. It will buy
the product from a manufacturing concern, package it, and sell it to a number of distributors
that have been selected on a geographical basis. Market research has already indicated the
volume expected and the size of sales force required. The steps shown in the following table
are to be planned.

Activity Description Duration (days) Predecessors


A Organize sales office 6 -
B Hire salesman 4 A
C Train salesman 7 B
D Select advertising agency 2 A
E Plan advertising campaign 4 D
F Conduct advertising campaign 10 E
G Design package 2 -
H Setup packaging facilities 10 G
I Package initial stocks 6 J, H
J Order stock from manufacturer 13 -
K Select distributors 9 A
L Sell to distributors 3 C, K
M Ship stocks to distributors 5 I, L
The precedence relationship among these activities are shown in the following figure.


As the figure shows, the company can begin to organize the sales office, design the package,
and order the stock immediately. Also, the stock must be ordered and the packing facility
must be set up before the initial stocks are packaged.
(a) Draw an arrow diagram for this project.
(b) Indicate the critical path.
(c) For each non-critical activity, find the total and free float.
Solution: (a) The arrow diagram for the given project, along with E-values and L-values, is
shown in Fig.1. Determine the earliest start time – Ei and the latest finish time – Lj for each
event by proceeding as follows:

Figure 1: Network Diagram


(b) The critical path in the network diagram (Fig. 1) is shown by
double lines joining all those events where the E-values and L-values are equal. The critical
path of the project is 1 – 2 – 5 – 6 – 9 – 10, the critical activities are A, B, C, L and M, and the
total project completion time is 25 days.

(c) For each non-critical activity, the total float and free float calculations are shown in
Table1.

Table 1: Calculation of Floats


IN-TEXT QUESTIONS
12. The objective of the project scheduling is to minimize total project cost. True
/ False
13. The CPM is used for completing the projects that involves activities of
repetitive nature. True / False
14. PERT is referred to as an activity-oriented technique. True / False
15. _____________is the time-consuming job or task that is a key subpart of the
total project.

6.4 SCHEDULING WITH UNCERTAIN ACTIVITY TIMES

So far we have used the CPM technique, which assumes that all activity times are known and fixed
constants, to find the earliest and latest times as well as the related critical path(s). In
other words, activity times were treated as constants. In practice, however, many factors can affect
how quickly a task is completed. PERT was developed to handle projects where the time
duration of each activity is not known with certainty but is a random variable that is
characterized by a β-distribution. To estimate the parameters (mean and variance) of the β-
distribution, three time estimates are required for each activity in order to calculate its expected
completion time. The necessary three time estimates are listed below.

i. Optimistic time (to): The shortest possible time (duration) in which an activity

can be performed, assuming that everything goes well.
ii. Pessimistic time (tp): The longest time needed to complete the activity under the
worst conceivable circumstances. However, natural disasters like earthquakes, floods,
and the like are not included under such circumstances.
iii. Most likely time (tm): The time the activity would take most frequently, if it were
repeated numerous times under the same circumstances (i.e. the modal value).


The β-distribution is not necessarily symmetric; the degree of skewness depends on the location of tm relative to to and tp. The range (to, tp) is assumed to enclose every possible duration of the activity.

Expected time of an activity: te = (to + 4tm + tp) / 6

and variance of activity time: σ² = [(tp - to) / 6]².

If the activity durations are random variables, the variance of the overall critical path duration is obtained by adding the variances of the individual critical activities. If σe denotes the standard deviation of the critical path, then σe = √(sum of the variances of the critical activities).

6.4.1 Estimation of Project Completion Time

Because the activity completion times are uncertain, the project's actual completion time may differ from the scheduled completion time. The decision-maker therefore needs to know the probability that the scheduled time will be met. Using the central limit theorem, the probability distribution of the project completion time can be approximated by a normal distribution. Thus, the probability of completing the project by the scheduled time Ts is given by:

P(completion time ≤ Ts) = P(Z ≤ (Ts - Te)/σe)

where Te = expected completion time of the project,
Z = (Ts - Te)/σe = the number of standard deviations by which the scheduled completion time differs from the mean (expected) completion time, and
σe² = the sum of the variances of the critical activities.

The computation of Z enables a decision-maker to make commitments knowing the degree of risk involved. The expected completion time of the project is obtained by adding the expected times of the critical activities.
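As a rough illustration of these formulae (not part of the original text), the short Python sketch below computes te and σ² for a few hypothetical critical activities and then uses the normal approximation to estimate the probability of meeting an assumed scheduled completion time Ts; the activity data and Ts are assumptions made purely for the example.

from math import sqrt
from statistics import NormalDist

# Hypothetical three-time estimates (to, tm, tp), in weeks, for the critical activities
critical_activities = {
    "A": (4, 6, 8),
    "B": (3, 5, 9),
    "C": (6, 7, 10),
}

expected_time = {}
variance = {}
for name, (to, tm, tp) in critical_activities.items():
    expected_time[name] = (to + 4 * tm + tp) / 6      # te = (to + 4tm + tp)/6
    variance[name] = ((tp - to) / 6) ** 2             # sigma^2 = ((tp - to)/6)^2

Te = sum(expected_time.values())          # expected project duration
sigma_e = sqrt(sum(variance.values()))    # standard deviation of the critical path

Ts = 20                                   # assumed scheduled completion time (weeks)
Z = (Ts - Te) / sigma_e
prob = NormalDist().cdf(Z)                # P(project completed by Ts)
print(f"Te = {Te:.2f} weeks, sigma_e = {sigma_e:.3f}, Z = {Z:.2f}, P = {prob:.4f}")

The same pattern can be reused for any set of critical activities once their three time estimates are known.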


Example 2: The following network diagram represents activities associated with a project:

Determine the following:


(a) Expected completion time and variance of each activity
(b) The earliest and latest expected completion times of each event.
(c) The critical path.
(d) The probability of completing the project by the originally scheduled time of 41.5 weeks.
(e) The project duration that will have a 95 per cent chance of being achieved.
Solution: (a) Calculations of the expected completion time (te) and variance (σ²) of each activity, using the formulae te = (to + 4tm + tp)/6 and σ² = [(tp - to)/6]², are shown in Table 3.
(b) The earliest and latest expected completion time for all events considering the expected
completion time of each activity are shown in Table 3.


Table 3


The E-values and L-values are shown in Fig. 2.

(c) The critical path is shown by the thick line in Fig. 2, where the E-values and L-values are the same. The critical path is 1 – 4 – 7 and the expected completion time of the project is 42.8 weeks.
(d) Expected length of the critical path, Te = 33 + 9.8 = 42.8 weeks (project duration).
Variance of the critical path length, σe² = 5.429 + 0.694 = 6.123, so that σe = √6.123 = 2.474 weeks.
Since Ts = 41.5, Te = 42.8 and σe = 2.474, the probability of meeting the scheduled time is given by:

P(completion time ≤ 41.5) = P(Z ≤ (41.5 - 42.8)/2.474)

Thus, the probability that the project can be completed in 41.5 weeks or less is 0.3048. In other words, the probability that the project will get delayed beyond 41.5 weeks is 1 - 0.3048 = 0.6952.
(e) Given that P(Z ≤ Z0) = 0.95, the normal distribution table gives Z0 = 1.64. Thus,
(Ts - 42.8)/2.474 = 1.64, i.e., Ts = 42.8 + 1.64 × 2.474 ≈ 46.9 weeks.


IN-TEXT QUESTIONS

5. The beta probability distribution is often used in computing expected activity completion times and variances in networks. True / False
6. The shortest possible time (duration) in which an activity can be performed, assuming that everything goes well, is _______________.
7. The time that an activity would most often require if it were repeated many times under the same conditions is called ____________________.

CASE STUDY
Krishna Mills
Krishna Mills decided to construct a new feed mill in order to increase its production capacity. The project was divided into various activities, some of which had to be finished before others could begin. The activities, the times anticipated for each, and the precedence relationships decided upon by management are listed in Exhibit 1. To bring the new mill into service as early as possible, the management wanted to advance the schedule as far as it could. The mill's president remarked, "If we can get rolling, every week saved is worth Rs 70,000 in contribution that would otherwise be lost." Several construction activities could be accelerated. For instance, by working extra hours the company's architects could design the new plant in 10 weeks rather than the 12 weeks they had initially planned, but the mill would have to pay an extra Rs 25,000 for each week advanced in this way. The table referred to displays the weekly crash cost as well as the maximum amount by which each activity could be crashed. The president of the mills had already approached an independent firm as a possible contractor for one of the project's key activities, construction of the plant; Krishna Mills expected to complete the remaining activities either directly or through its own agents. The management had also discussed a few bonus and penalty provisions with the contractor, one of which was that the mills would pay the contractor an extra Rs 75,000 for each week by which the facility was finished ahead of the allotted 10 weeks.

The management of Krishna Mills is interested in knowing which activities should be crashed and how the work should be scheduled.


6.5 TIME-COST TRADE-OFFS

The initial creators of CPM gave the project manager the choice to allocate resources to tasks
in order to speed up project completion. The option to shorten activity times must consider
the increased expenses involved, as more resources (such as additional employees, overtime,
etc.) typically raise project costs. In essence, the decision that the project manager must make
entails exchanging decreased activity time for increased project cost.

The first key concept for this approach is that of crashing.

6.5.1 Project Crashing

While overseeing a project, a project manager commonly encounters one or both of the following circumstances: the expected completion time of the project exceeds the desired target date, or the project has fallen behind its schedule. In either case, some or all of the remaining activities must be expedited in order to complete the project by the target deadline. Crashing is the process of reducing the duration of a project in the most economical way possible. Shortening an activity's duration below its normal (most cost-efficient) point increases the cost of carrying out that activity. For simplicity, the relationship between an activity's normal time and cost and its crash time and cost is assumed to be linear. The crash cost per unit of time can therefore be determined by calculating the relative change in cost per unit change in time.

6.5.2 Time-Cost Trade-Off Procedure

Crashing starts from the normal schedule, in which every critical activity is performed at its normal time, and it ends when every critical activity has been crashed to its limit. The procedure for determining the time-cost trade-offs for project completion can be summarized as follows:

Step 1: Determine the normal project completion time and the associated critical path.

Step 2: Identify the critical activities and compute the cost slope for each of them by using the relationship

Cost slope = (Crash cost - Normal cost) / (Normal time - Crash time)


The cost slope of a critical activity indicates the extra direct cost required to reduce the duration of that activity by one unit of time.

Step 3: For reducing the total project completion time, identify and crash an activity time on
the critical path with lowest cost slope value to the point where
i. another path in the network becomes critical, or
ii. the activity has been crashed to its lowest possible time.

Step 4: If the critical path under crashing is still critical, return to step 3. However, if due to
crashing of an activity time in step 3, other path(s) in the network also become critical, then
identify and crash the activity(s) on the critical path(s) with the minimum joint cost slope.

Step 5: Terminate the procedure when each critical activity has been crashed to its lowest
possible time. Determine total project cost corresponding to different project durations.
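The cost-slope calculation of Step 2 and the choice of the cheapest critical activity in Step 3 can be illustrated with a short Python sketch. The figures below are taken from Example 3, which follows, and the critical path is assumed to be already known from Step 1; this is an illustration of the idea, not a complete crashing algorithm.

# Normal/crash data for the critical activities of Example 3
activities = {
    # activity: (normal_time, normal_cost, crash_time, crash_cost)
    "1-2": (3, 300, 2, 400),
    "2-5": (9, 720, 7, 810),
    "5-6": (6, 320, 4, 410),
    "6-7": (4, 400, 3, 470),
    "7-8": (10, 1000, 9, 1200),
}
critical_path = ["1-2", "2-5", "5-6", "6-7", "7-8"]   # assumed known from Step 1

def cost_slope(nt, nc, ct, cc):
    # crash cost per week = (crash cost - normal cost) / (normal time - crash time)
    return (cc - nc) / (nt - ct)

slopes = {a: cost_slope(*activities[a]) for a in critical_path
          if activities[a][0] > activities[a][2]}     # only activities that can be crashed
cheapest = min(slopes, key=slopes.get)
print(slopes)
print("Crash first:", cheapest, "at Rs", slopes[cheapest], "per week")

Running this reproduces the cost slopes of Table 5 and points to activity 2-5 (Rs 45 per week) as the first candidate for crashing.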

Example 3: The data on normal time, cost and crash time and cost associated with a project
are shown in the following table.

Activity   Normal Time (weeks)   Normal Cost (Rs)   Crash Time (weeks)   Crash Cost (Rs)
1-2        3                     300                2                    400
2-3        3                     30                 3                    30
2-4        7                     420                5                    580
2-5        9                     720                7                    810
3-5        5                     250                4                    300
4-5        0                     0                  0                    0
5-6        6                     320                4                    410
6-7        4                     400                3                    470
6-8        13                    780                10                   900
7-8        10                    1,000              9                    1,200
Total                            4,220

Indirect cost is Rs 50 per week.


(a) Draw the network diagram for the project and identify the critical path.
(b) What are the normal project duration and associated cost?
(c) Find out the total float associated with non-critical activities.
(d) Crash the relevant activities and determine the optimal project completion time and cost.


Solution: (a) The network for normal activity times is shown in Fig. 3. The critical path is 1 – 2 – 5 – 6 – 7 – 8, with a project completion time of 32 weeks.

[Network diagram with event times: E1 = 0, L1 = 0; E2 = 3, L2 = 3; E3 = 6, L3 = 7; E4 = 10, L4 = 12; E5 = 12, L5 = 12; E6 = 18, L6 = 18; E7 = 22, L7 = 22; E8 = 32, L8 = 32]

Figure 3: Network Diagram


(b) The normal total project cost associated with normal project duration of 32 weeks is as
follows:
Total cost = Direct normal cost + Indirect cost for 32 weeks
= 4,220 + 50 × 32 = Rs 5,820
(c) Calculations for total float associated with non-critical activities are shown in table 4.
Table 4: Total Float
Activity Total Float

2-3 (7 - 3) - 3 = 1
2-4 (12 - 3) - 7 = 2
3-5 (12 - 6) - 5 = 1
4-5 (12 - 10) - 0 = 2
6-8 (32 - 18) - 13 = 1

(d) For critical activities, crash cost-slope is given in table 5.


Table 5: Crash Cost Slope

Critical Activity    Crash Cost per Week (Rs)
1-2                  (400 - 300)/(3 - 2) = 100
2-5                  (810 - 720)/(9 - 7) = 45
5-6                  (410 - 320)/(6 - 4) = 45
6-7                  (470 - 400)/(4 - 3) = 70
7-8                  (1,200 - 1,000)/(10 - 9) = 200

The minimum crash cost per week (Rs 45) occurs for activities 2 – 5 and 5 – 6. Activity 2 – 5 could be crashed by 2 weeks, from 9 weeks to 7 weeks; however, its time should be reduced by only 1 week, otherwise another path becomes critical in parallel. The revised network is shown in Fig. 4, where the new project duration is 31 weeks and the critical paths are 1 – 2 – 5 – 6 – 7 – 8 and 1 – 2 – 3 – 5 – 6 – 7 – 8.

With crashing of activity 2 – 5, the crashed total project cost becomes:


Crashed total cost = Total direct normal cost + Increased direct cost due to crashing of
activity (2 – 5) + Indirect cost for 31 weeks
= 4,220 + 1 × 45 + 50 × 31 = 4,265 + 1,550 = Rs 5,815
For revised network shown in fig 4, new possibilities for crashing critical activities are listed
in table 6.
[Revised network diagram, with activity 2 – 5 crashed to 8 weeks; event times: E1 = 0, L1 = 0; E2 = 3, L2 = 3; E3 = 6, L3 = 6; E4 = 10, L4 = 11; E5 = 11, L5 = 11; E6 = 17, L6 = 17; E7 = 21, L7 = 21; E8 = 31, L8 = 31]

Figure 4: Network Diagram


Table 6: Crash Cost Slope

Critical Activity    Crash Cost per Week (Rs)
1-2                  100
2-5                  45 (1 week of crashing still available)
2-3                  cannot be crashed (normal time = crash time)
3-5                  (300 - 250)/(5 - 4) = 50
5-6                  45
6-7                  70
7-8                  200

Since activity 5 – 6 lies on both critical paths and its crash cost slope is the minimum, its time may be crashed by 2 weeks, from 6 weeks to 4 weeks. The updated network diagram is shown in Fig. 5.

[Updated network diagram, with activities 2 – 5 and 5 – 6 crashed to 8 and 4 weeks respectively; event times: E1 = 0, L1 = 0; E2 = 3, L2 = 3; E3 = 6, L3 = 6; E4 = 10, L4 = 11; E5 = 11, L5 = 11; E6 = 15, L6 = 15; E7 = 19, L7 = 19; E8 = 29, L8 = 29]

Figure 5: Network Diagram


It may be noted in fig 5, that the critical paths shown in fig 4 remain unchanged because
activity 5 – 6 is common in both. With crashing of activity 5 – 6 by 2 weeks, the crashed total
cost becomes:


Crashed total cost = Total direct normal cost + Increased direct cost due to
crashing of activity (5 – 6) + Indirect cost for 29 weeks
= 4,220 + (1 × 45 + 2 × 45) + 50 × 29 = Rs 5,805
For revised network given in fig 5, new possibilities for crashing in the critical paths are
listed in table 7.
Table 7: Crash Cost Slope

Critical Activity    Crash Cost per Week (Rs)
1-2                  100
2-3                  cannot be crashed (normal time = crash time)
2-5                  45 (1 week of crashing still available)
5-6                  already crashed to its limit of 4 weeks
6-7                  70
7-8                  200

Further crashing would require reducing activity 6 – 7 from 4 weeks to 3 weeks at Rs 70 per week, the cheapest remaining way of shortening both critical paths, which is more than the saving of Rs 50 per week in indirect cost. Hence, crashing is terminated. The optimal project duration is 29 weeks, with an associated cost of Rs 5,805, as shown in Table 8.
Table 8: Crashing Schedule of Project

Project Duration   Crashed Activity   Direct Cost (Rs)                         Indirect    Total
(weeks)            (weeks crashed)    Normal    Crashing    Total              Cost (Rs)   Cost (Rs)
32                 -                  4,220     -           4,220              1,600       5,820
31                 2-5 (1)            4,220     45          4,265              1,550       5,815
29                 5-6 (2)            4,220     135         4,355              1,450       5,805
28                 6-7 (1)            4,220     205         4,425              1,400       5,825
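A small Python sketch (illustrative only) of the trade-off summarised in Table 8: for each candidate duration, the total cost is the total direct cost plus indirect cost at Rs 50 per week, and the optimal duration is the one with the smallest total.

# Direct costs below are the "Total" direct-cost figures from Table 8
indirect_rate = 50
schedule = {
    32: 4220,
    31: 4265,
    29: 4355,
    28: 4425,
}
totals = {d: direct + indirect_rate * d for d, direct in schedule.items()}
best = min(totals, key=totals.get)
print(totals)                                 # {32: 5820, 31: 5815, 29: 5805, 28: 5825}
print("optimal duration:", best, "weeks at Rs", totals[best])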


IN-TEXT QUESTIONS
8. The process of shortening the duration of a project in the least expensive
manner possible is called ____________________.
9. In time-cost trade-off function analysis the:
a) cost decreases linearly as time increases b) cost at normal time is zero
c) cost increases linearly as time decreases d) none of the above

6.6 SUMMARY

PERT (Program Evaluation and Review Technique) and CPM (Critical Path Method) have
been widely used to help project managers plan, schedule, and manage their projects ever
since they were developed in the late 1950s.

When using PERT/CPM, a project is first broken down into its separate activities, their
immediate predecessors are noted, and the time of each activity is estimated. The creation of
a project network to display this information is the next phase.

PERT/CPM produce project scheduling data, such as the earliest start time, latest start time,
and slack for each activity. Also, it outlines the actions that must be completed in a specific
order in order to avoid delays in project completion. Given that the critical path is the longest
path through the project network, if all activities proceed according to plan, the length of the
critical path establishes the project's duration.

Yet, because there is frequently a great deal of ambiguity over how long an activity will
really last, it is challenging for all activities to continue on schedule. By collecting three
different types of estimates (most likely, optimistic, and pessimistic) for the length of each
activity, the three-estimate approach in PERT addresses this dilemma. The mean and variance
of the probability distribution for this duration are approximately determined using this
information. The likelihood that the project will be completed by the deadline can then be
roughly calculated.

Using the time-cost trade-off approach in CPM, the project manager can analyse the effect on total cost of adjusting the expected project duration to various alternative values. The data required for this analysis are the time and cost of each activity when it is carried out normally and when it is fully crashed (expedited).

6.7 GLOSSARY

• Activity: - A job or task that consumes time and is a key subpart of a total project.

• Critical Activities: - Critical activities have zero slack time.

• Critical Path Method (CPM): - A deterministic network technique that is similar to


PERT but uses only one time estimate. CPM is used for monitoring budgets and
project crashing.

• Event: - A point in time that marks the beginning or ending of an activity.

6.8 ANSWERS TO IN-TEXT QUESTIONS

1. False
2. True
3. False
4. Activity
5. True
6. Optimistic time
7. Most likely time
8. Crashing
9. Option (a)

6.9 SELF-ASSESSMENT QUESTIONS

16. Explain the following terms in PERT/CPM:


I. Earliest time
II. Latest time


III. Total activity time


17. PERT takes care of uncertain duration. How far is this statement correct?

6.10 REFERENCES

• Balakrishnan, N., Render, B., Stair, R. M., & Munson, C. (2017). Managerial decision
modeling. Upper Saddle River, Pearson Education.

• Hillier, F.& Lieberman, G.J. (2014). Introduction to operations research (10th


ed.).McGraw-Hill Education.

6.11 SUGGESTED READINGS

• Anderson, D., Sweeney, D., Williams, T., Martin, R.K. (2012). An introduction to
management science: quantitative approaches to decision making (13th ed.). Cengage
Learning.

**************LMS Feedback: [email protected]**************


LESSON 7
MARKOV PROCESSES
Dr. Shubham Agarwal
Associate Professor
New Delhi Institute of Management
GGSIP University
[email protected]

STRUCTURE

7.1 Learning Objectives


7.2 Introduction
7.3 Stochastic Process
7.4 State Space
7.5 Classification of Stochastic Process
7.6 Markov Chain
7.7 Transition probability
7.8 Transition probability matrix
7.9 Initial distribution
7.10 Concept for Classification of the states
7.10.1 Accessibility
7.10.2 Communicating state
7.10.3 Communicating class
7.10.4 Closed set of states
7.10.5 Irreducible & reducible chain
7.10.6 Absorbing state
7.10.7 Periodicity
7.10.8 First visit probability
7.10.9 Mean Passage time
7.10.10 First return probability
7.10.11 Mean recurrence time
7.11 Classification of the states
7.11.1 Recurrent State
7.11.2 Transient State
7.11.3 How to determine whether a state is recurrent or transient

through transition graph


7.12 Some important results
7.13 Basic limit theorem for aperiodic markov chain
7.14 Stationary distribution
7.15 Application areas of markov chain
7.16 Summary
7.17 Glossary
7.18 Answers to In-text Questions
7.19 Self-Assessment Questions
7.20 Suggested Readings

7.1 LEARNING OBJECTIVES

After reading the unit, students will be able to


• Define the concepts of stochastic process.
• Describe the terminologies used in stochastic process.
• Explain Markov process.
• Understand the transition probabilities.
• To check whether a state is transient or recurrent.
• Identify the situation where Markov chains can be used in Business.

7.2 INTRODUCTION

Andrei Markov was a Russian mathematician who lived from 1856 to 1922. The only subject
he did well in was math, and he had a dismal grade point average overall. Later, he was
taught the subject by Pafnuty Chebyshev, a mathematics lecturer at the University of
Petersburg who is well known for his work in probability theory. Markov first focused on
number theory, convergent series, and approximation theory as his three primary scientific
disciplines. His most famous research on Markov chains is where the phrase originates, and
his first article on the subject appeared in 1906.
The Markov chain is a fundamental mathematical tool for stochastic processes. The
Markov Property is the essential idea, according to which some stochastic process predictions
can be made more simply by treating the future as independent of the past in light of the

process's present state. This is done to make stochastic process future state forecasts simpler
to comprehend. This section will explore the principles of Markov chains, explain the
different types of Markov Chains, and provide instances of its use in business and finance.
Markov chains are employed to determine the probability of moving from one state to another. Take the weather as an example: if today is sunny, there is a 70% probability that tomorrow will also be sunny and a 30% probability that it will be rainy. If it rains today, there is a 20% chance that tomorrow will be sunny and an 80% probability that it will rain again. This can be summarised in a transition diagram, in which each potential change of state is shown in Fig. 1.

Fig.-1: Transition diagram

7.3 STOCHASTIC PROCESS

A stochastic process is one whose outcomes depend on some element of chance. A stochastic or random process is a collection of random variables indexed by a mathematical set, meaning that each random variable in the process is associated with an element of that set. The set used to index the random variables is called the index set. Historically, the index set was some subset of the real line, such as the natural numbers, which gave the index set a natural interpretation as time.

The random variables in the collection all take their values in the same mathematical space, known as the state space. The real line, the integers, or n-dimensional Euclidean space are a few examples of state spaces.


{X(t), t ∈T}, defined on some probability space (Ω, F, P), where T is a parameter
space, is referred to as a stochastic process. State space refers to the collection of all potential
values for all random variables, and states are its constituent parts.

Example: Let X1 = outcome of the first toss of a coin, X2 = outcome of the second toss, ………, Xn = outcome of the nth toss. Then the collection of random variables (X1, X2, ……, Xn) is called a stochastic process.

7.4 STATE SPACE

The values assumed by the random variables X(t) are called states, and the collection of all possible values forms the state space (S) of the process. If X(t) = i, then we say the process is in state i.
a) Discrete state process: the state space is finite or countable, for example the non-negative integers {0, 1, 2, 3, ….}.
b) Continuous state process: the state space contains finite or infinite intervals of the real number line.

7.5 CLASSIFICATION OF STOCHASTIC PROCESS

A stochastic process can be classified in different ways for example, by its state space, its
index set, or the dependence among the random variable.
a) Discrete/ Continuous time: A stochastic process is considered to be in discrete
time if the index set has a finite or countable number of elements, such as a finite set of
numbers, the set of integers, or the natural numbers. Discrete-time stochastic process is the
name given to this particular kind of stochastic process. Time is referred to as continuous and
stochastic processes are referred to as continuous - time stochastic processes if the index set
of the stochastic process is some interval of the real line.
b) Discrete/ Continuous state space: The stochastic process is referred to as a discrete or
integer-valued stochastic process if the state space consists of integers or natural numbers.
The stochastic process is known as a real-valued stochastic process or a process with
continuous state space if the state space is the real line.

7.6 MARKOV CHAIN

A sequence of random variables {Xn, n = 0, 1, 2, 3, …} with a discrete state space is known as a Markov chain if
Pr(Xn = k | Xn-1 = j, Xn-2 = j1, ……, X0 = jn-1) = Pr(Xn = k | Xn-1 = j) = pjk
Example: Let X1, X2, ………, Xn be independent random variables, each taking the values 0 or 1. The partial sum (present value) is
Sn = X1 + X2 + ………… + Xn ∈ {0, 1, 2, ……, n}
and the future value is Sn+1 = X1 + X2 + ………… + Xn + Xn+1 = Sn + Xn+1, which depends on the past only through Sn.
Therefore, {Sn, n ≥ 1} is a Markov chain.

7.7 TRANSITION PROBABILTY

Probability of going from state i to state j is known as transition probability.


1-step transition probability: P(Xn = j | Xn-1 = i) = pij
2-step transition probability: P(Xn = j | Xn-2 = i) = pij(2)
n-step transition probability: P(X2n = j | Xn = i) = pij(n)

7.8 TRANSITION PROBABILTY MATRIX

Let S be a state space, such that S = {0, 1, 2, …….} then the transition probability matrix is
given by,


        0     1     2    …
   0 [ p00   p01   p02   … ]
P = 1 [ p10   p11   p12   … ]
   2 [ p20   p21   p22   … ]
   …

where each row sums to one:
p00 + p01 + p02 + …… = 1
p10 + p11 + p12 + …… = 1, and so on.

Properties of the transition probability matrix:
i) pij ≥ 0 for all i, j ∈ S
ii) If each row sum is 1, then the matrix is known as a stochastic matrix.
iii) If, in addition, each column sum is also 1, then the matrix is known as a doubly stochastic matrix.
A small numerical check of these properties is sketched below.
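The sketch below (illustrative, using the weather matrix of the next example) checks the row-sum and column-sum properties numerically with NumPy.

import numpy as np

P = np.array([[0.5, 0.5],
              [1/3, 2/3]])            # the weather TPM used in the next example

row_sums = P.sum(axis=1)
col_sums = P.sum(axis=0)

is_stochastic = np.allclose(row_sums, 1.0) and np.all(P >= 0)
is_doubly_stochastic = is_stochastic and np.allclose(col_sums, 1.0)

print("row sums:", row_sums)          # each row of a TPM must sum to 1
print("stochastic:", is_stochastic)
print("doubly stochastic:", is_doubly_stochastic)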

Example: Suppose that the probability of a dry day (state 0) following a rainy day (state 1) is 1/3 and the probability of a rainy day following a dry day is 1/2. Then we have a two-state Markov chain with p10 = 1/3 and p01 = 1/2, and the transition probability matrix (TPM) is

         0     1
P = 0 [ 1/2   1/2 ]
    1 [ 1/3   2/3 ]

Given that May 1 is a dry day, find the probability that May 3 is a dry day.
Solution: Given that X1 = 0, i.e., May 1 is a dry day.
The probability that May 3 is a dry day is the two-step transition probability
P(X3 = 0 | X1 = 0) = p00(2), the (0, 0) entry of P².

P² = P·P = [ 1/2  1/2 ] [ 1/2  1/2 ] = [ 5/12   7/12 ]
           [ 1/3  2/3 ] [ 1/3  2/3 ]   [ 7/18  11/18 ]

Therefore, P(X3 = 0 | X1 = 0) = 5/12.

Second method, using the transition graph: for the same transition matrix


         0     1
P = 0 [ 1/2   1/2 ]
    1 [ 1/3   2/3 ]

the transition graph consists of the two states 0 and 1 with arcs labelled by these probabilities.

And p00(2) = p01 p10 + p00 p00 = (1/2)(1/3) + (1/2)(1/2) = 5/12
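The same two-step probability can be checked numerically by squaring the matrix; the following short sketch (not part of the original text) does this with NumPy.

import numpy as np

P = np.array([[0.5, 0.5],
              [1/3, 2/3]])
P2 = np.linalg.matrix_power(P, 2)   # two-step transition matrix
print(P2)                           # [[5/12, 7/12], [7/18, 11/18]]
print(P2[0, 0])                     # 0.41666... = 5/12 = P(X3 = 0 | X1 = 0)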

Example: Consider a Markov chain {Xn, n ≥ 0} with state space {1, 2, 3} and transition matrix

         1     2     3
    1 [  0    1/2   1/2 ]
P = 2 [ 1/2    0    1/2 ]
    3 [ 1/2   1/2    0  ]

Then find P(X3 = 1 | X0 = 1).
Solution: In the transition graph corresponding to the given TPM, the diagonal entries are zero, so no state can be repeated on consecutive steps. The only three-step paths from state 1 back to state 1 are therefore 1 → 2 → 3 → 1 and 1 → 3 → 2 → 1, and

p11(3) = p12 p23 p31 + p13 p32 p21 = (1/2)(1/2)(1/2) + (1/2)(1/2)(1/2) = 1/4

Therefore, P(X3 = 1 | X0 = 1) = 1/4.


7.9 INITIAL DISTRIBUTION

Let the state space be {0, 1, 2, …} and let X0 denote the initial state of the chain. The distribution P(X0 = i) = 𝜋i, i ∈ S, is called the initial distribution.

Example: Let {Xn, n ≥ 0} be a Markov chain with three states 0, 1, 2, transition matrix

    [ 3/4  1/4   0  ]
P = [ 1/4  1/2  1/4 ]
    [  0   3/4  1/4 ]

and initial distribution Pr(X0 = i) = 1/3, i = 0, 1, 2.
Find P(X3 = 1 | X0 = 1) and calculate the joint probability P(X3 = 1, X1 = 1, X0 = 2).
Solution: S = {0, 1, 2}.
The three-step probability p11(3) is the (1, 1) entry of P³, i.e., the sum over all three-step paths from state 1 back to state 1:
p11(3) = p10 p00 p01 + p10 p01 p11 + p11 p10 p01 + p11 p11 p11 + p11 p12 p21 + p12 p21 p11 + p12 p22 p21
= (1/4)(3/4)(1/4) + (1/4)(1/4)(1/2) + (1/2)(1/4)(1/4) + (1/2)(1/2)(1/2) + (1/2)(1/4)(3/4) + (1/4)(3/4)(1/2) + (1/4)(1/4)(3/4)
= 3/64 + 2/64 + 2/64 + 8/64 + 6/64 + 6/64 + 3/64 = 30/64 = 15/32
Therefore, P(X3 = 1 | X0 = 1) = 15/32.
Now, p11(2) = p11 p11 + p12 p21 + p10 p01
= (1/2)(1/2) + (1/4)(3/4) + (1/4)(1/4)
= 1/2
Now, P(X3 = 1, X1 = 1, X0 = 2) = P(X3 = 1, X1 = 1 | X0 = 2) P(X0 = 2)
= P(X3 = 1| X1 = 1) P(X1 = 1 | X0 = 2) P(X0 = 2)
= p11(2) . p21(1) . (1/3)
= (1/2)(3/4)(1/3)
= 1/8
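Because the three-step probability above is easy to get wrong by missing a path, a quick numerical check is useful. The sketch below (illustrative only) computes P², P³ and the joint probability with NumPy.

import numpy as np

P = np.array([[3/4, 1/4, 0.0],
              [1/4, 1/2, 1/4],
              [0.0, 3/4, 1/4]])

P2 = np.linalg.matrix_power(P, 2)
P3 = np.linalg.matrix_power(P, 3)

print(P3[1, 1])                       # p11(3) = 15/32 = 0.46875
print(P2[1, 1])                       # p11(2) = 1/2
# Joint probability P(X3 = 1, X1 = 1, X0 = 2) = P(X0 = 2) * p21 * p11(2)
print((1/3) * P[2, 1] * P2[1, 1])     # = 1/8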
7.10 CONCEPT OF CLASSIFICATION OF STATES

7.10.1 Accessibility
If pij(n) > 0, where n ≥ 1, then state j is accessible from state i.
For example, if p01 = 1/2 > 0, then state 1 is accessible from state 0.

And if p10(n) = 0 for every n ≥ 1, then state 0 is not accessible from state 1.

7.10.2 Communicating state


Let i, j ∈ S be such that pij(n1) > 0 and pji(n2) > 0 for some n1, n2 ≥ 1. Then we write i ↔ j,
i.e., i and j are communicating states.

7.10.3 Communicating class


Let i ∈ S. Then the communicating class of i is
C(i) = { j ∈ S | i ↔ j }
For example, if i, j, k ∈ S with i ↔ j, j ↔ k and k ↔ i, then C(i) = {i, j, k}.

7.10.4 Closed set of states


If i and j communicate only with each other and with no other states, then C(i) = {i, j} is called a closed set of states.

7.10.5 Irreducible & reducible chain


A Markov chain is said to be irreducible if every pair of states communicates, i.e., there is only one communicating class, so that C(i) = S for every i ∈ S; otherwise the Markov chain is called a reducible Markov chain.

Example: Check whether the given transition matrix is irreducible or reducible for the state
space {0, 1, 2}
0 1 / 2 1 / 2 
 
(i) P = 1 / 2 0 1 / 2
1 / 2 1 / 2 0
Solution: The transition diagram for the given TPM is,


Obviously, C(0) = {0, 1, 2}


Therefore, the given transition matrix is irreducible.

1 / 2 1 / 2 0 0 
1 / 2 1 / 2 0 0 
(ii) P = 
0 0 0 1 
 
1 / 4 1 / 4 1/ 4 1 / 4
Solution: The transition diagram for the given TPM is,

From the diagram it is clear that,


C(0) = {0, 1}
C(1) = {0, 1}
C(2) = {2, 3}
C(3) = {2, 3}
Therefore, the given transition matrix is reducible.
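For a finite chain, the communicating classes can also be found numerically from the reachability relation. The following sketch is an illustration, not a prescribed method; it applies the idea to the matrix of example (ii) above.

import numpy as np

P = np.array([[0.5, 0.5, 0.0, 0.0],
              [0.5, 0.5, 0.0, 0.0],
              [0.0, 0.0, 0.0, 1.0],
              [0.25, 0.25, 0.25, 0.25]])
n = P.shape[0]

# state j is accessible from i iff some entry of P + P^2 + ... + P^n is positive
reach = np.zeros_like(P, dtype=bool)
Q = np.eye(n)
for _ in range(n):
    Q = Q @ P
    reach |= Q > 0

communicate = reach & reach.T            # i <-> j
for i in range(n):
    members = np.flatnonzero(communicate[i] | (np.arange(n) == i))
    print("C(%d) =" % i, set(members.tolist()))
# prints C(0) = {0,1}, C(1) = {0,1}, C(2) = {2,3}, C(3) = {2,3} -> the chain is reducible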

7.10.6 Absorbing state



If pii = 1, i.e. once the process enters state i it can never leave it, then state i is called an absorbing state; for such a state C(i) = {i} has only one element.
E.g.: if C(i) = {i} and state i cannot move to any other state, then i is an absorbing state, whereas if C(j) = {j, k}, then j is not an absorbing state.

Example: Find the absorbing states from the following TPM:


        0     1     2
   0 [ 1/3   1/3   1/3 ]
P = 1 [ 1/2    0   1/2 ]
   2 [  0     0     1  ]

Solution: From the TPM it is clear that
C(0) = {0, 1}
C(1) = {0, 1}
C(2) = {2}
Here, state 2 is an absorbing state (p22 = 1).

7.10.7 Periodicity
The period of a state i ∈ S is denoted by d(i) (or λ(i)) and is given by
d(i) = gcd{ n ≥ 1 : pii(n) > 0 }
where n is the number of steps.

Remarks:
• If any state has a self loop then its period is 1.
• If d(i) = 1, then the state i is aperiodic state.
• If d(i) > 1, then the state i is periodic.
• If i and j are communicating states then period of i and j will be equal, i.e., d(i) = d(j)

Example: find the period of the states from the following transition diagram:
Solution: From the diagram it is clear that,
d(0) = gcd{1, 2, 3, .....} = 1
d(1) = gcd{ 2, 3, 4, .....} = 1

7.10.8 First visit probability


The first visit probability is given by,


fij(n) = P{ Xn = j, Xm ≠ j for m < n | X0 = i }
i.e., the chain reaches j from i for the first time at exactly the nth step (it does not visit j in fewer than n steps).

Example: Find f02(2) from the following transition diagram:

Solution: From the graph, we can write
f02(2) = (1/2)(1/2) = 1/4

7.10.9 Mean Passage time


If fij(n) is the first visit probability, then the mean (first) passage time is given by
μij = Σn≥1 n fij(n)

7.10.10 First return probability


If fii(n) denotes the probability that, starting from state i, the first return to state i occurs at the nth step, i.e.,
fii(n) = P{ Xn = i, Xm ≠ i for m < n | X0 = i },
then the probabilities fii(n) are known as the first return probabilities, and
fii = Σn≥1 fii(n) = Probability [ever returning to i | X0 = i].

7.10.11 Mean recurrence time


If fii(n) is the first return probability, then the mean recurrence time is given by



μii = Σn≥1 n fii(n)

Example: Let {Xn, n ≥ 0} be a two-state Markov chain with state space S = {0, 1} and transition matrix

         0     1
P = 0 [ 1/2   1/2 ]
    1 [ 1/3   2/3 ]

Assuming X0 = 0, find the expected return time to 0.

Solution: We know that μ00 = Σn≥1 n f00(n) = 1·f00(1) + 2·f00(2) + 3·f00(3) + 4·f00(4) + 5·f00(5) + ……

Now, f00(1) = p00 = 1/2
f00(2) = p01 p10 = (1/2)(1/3) = 1/6
f00(3) = p01 p11 p10 = (1/2)(2/3)(1/3) = 1/9
f00(4) = p01 p11 p11 p10 = (1/2)(2/3)(2/3)(1/3) = 2/27
f00(5) = p01 p11 p11 p11 p10 = (1/2)(2/3)(2/3)(2/3)(1/3) = 4/81

Therefore,
μ00 = 1(1/2) + 2(1/6) + 3(1/6)(2/3) + 4(1/6)(2/3)² + 5(1/6)(2/3)³ + ……
    = 1/2 + 1/3 + (1/6)(2/3)[3 + 4(2/3) + 5(2/3)² + ……]
Let S = 3 + 4(2/3) + 5(2/3)² + ……
Then (2/3)S = 3(2/3) + 4(2/3)² + 5(2/3)³ + ……
Subtracting, (1/3)S = 3 + (2/3) + (2/3)² + (2/3)³ + …… = 3 + (2/3)[1/(1 − 2/3)] = 5,
so S = 15.
Therefore, the expected return time to 0 is μ00 = 1/2 + 1/3 + (1/6)(2/3)(15) = 5/2.
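The value μ00 = 5/2 can be checked numerically, either by summing the series directly or by using the standard relation μii = 1/𝜋i between mean recurrence times and the stationary distribution of an irreducible positive recurrent chain (compare Sections 7.13 and 7.14). The sketch below is illustrative only.

import numpy as np

p01, p10, p11 = 0.5, 1/3, 2/3

# f00(1) = 1/2 and f00(n) = p01 * p11**(n-2) * p10 for n >= 2
mu00 = 1 * 0.5 + sum(n * p01 * p11 ** (n - 2) * p10 for n in range(2, 200))
print(mu00)                      # approximately 2.5, i.e. 5/2

# Cross-check via the stationary distribution: mu_00 = 1 / pi_0
P = np.array([[0.5, 0.5], [1/3, 2/3]])
eigvals, eigvecs = np.linalg.eig(P.T)
pi = np.real(eigvecs[:, np.argmin(np.abs(eigvals - 1))])
pi = pi / pi.sum()
print(1 / pi[0])                 # 2.5 as well

Both checks reproduce the value 5/2 obtained from the series calculation above.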


7.11 CLASSIFICATION OF STATES

7.11.1 Recurrent State



If the first return probability of a state i satisfies fii = Σn≥1 fii(n) = 1, then the state is called a recurrent state.

For a recurrent state, if the mean recurrence time μii = Σn≥1 n fii(n) < ∞ (finite), then it is called a positive recurrent state, and if μii = ∞ (infinite), then it is called a null recurrent state.

7.11.2 Transient State



If the first return probability of a state i satisfies fii = Σn≥1 fii(n) < 1, then the state is called a transient state.

Example: consider the chain on states {1, 2, 3, 4} with TPM


1 / 3 2 / 3 0 0 
 1 0 0 0 
P= 
1 / 2 0 1/ 2 0 
 
 0 0 1 / 2 1 / 2
Find the transient and recurrent states.
Solution: Here,
f11(1) = 1/3
f11(2) = p12 p21 = (2/3)(1) = 2/3
f11(3) = p12 p22 p21 = (2/3)(0)(1) = 0
Therefore, f11(n) = 0 for n ≥ 3, and
f11 = Σn≥1 f11(n) = f11(1) + f11(2) + f11(3) + …… = 1/3 + 2/3 + 0 + 0 + …… = 1
So, state 1 is a recurrent state.


Now, f22(1) = 0
f22(2) = p21 p12 = (1)(2/3) = 2/3
f22(3) = p21 p11 p12 = (1)(1/3)(2/3) = 2/9
and, in general, f22(n) = (1)(1/3)^(n-2)(2/3) for n ≥ 2, so that
f22 = Σn≥1 f22(n) = 2/3 + 2/9 + 2/27 + …… = (2/3)/(1 − 1/3) = 1
So, state 2 is also a recurrent state (as it must be, since 1 ↔ 2 and recurrence is a class property).

Now, f33(1) = 1/2
f33(2) = p31 p13 = (1/2)(0) = 0, and similarly f33(n) = 0 for all n ≥ 2, so
f33 = Σn≥1 f33(n) = 1/2 + 0 + …… = 1/2 < 1
So, state 3 is a transient state.

Now, f44(1) = 1/2
f44(2) = p43 p34 = (1/2)(0) = 0, and similarly f44(n) = 0 for all n ≥ 2, so
f44 = Σn≥1 f44(n) = 1/2 + 0 + …… = 1/2 < 1
So, state 4 is a transient state.

Hence, states 1 and 2 are recurrent states, while states 3 and 4 are transient states.

7.11.3 How to determine whether a state is recurrent or transient through transition


graph
In the transition graph of a finite chain, if after leaving a state the process can reach some state from which there is no route back to the starting state, then the starting state is a transient state; otherwise, i.e., if there is a route back from every state that can be reached from it, the state is a recurrent state.

Example: Find the transient and recurrent states from the following TPM:
         1     2     3
    1 [ 1/4   3/4    0  ]
P = 2 [  0    7/8   1/8 ]
    3 [  0    1/2   1/2 ]
Solution: The transition graph of the given TPM is,

For state 1, if we move out from state 1, we found that there is no route to come back to state
1, therefore state 1 is transient.
For state 2, if we move out from state 2, we found that there is a route to come back to state
2, therefore state 2 is recurrent.
Similarly, for state 3, if we move out from state 3, we found that there is a route to come back
to state 3, therefore state 3 is recurrent.

Example: Consider the chain on states {1, 2, 3, 4} with TPM

    [  1    0    0    0 ]
P = [ 1/3  1/3  1/3   0 ]
    [  0    1    0    0 ]
    [  0    0    0    1 ]
Find the absorbing, transient and recurrent states using transition graph.
Solution: The transition graph of the given TPM is,

Since p11 = 1 and p44 = 1, so that C(1) = {1} and C(4) = {4}, states 1 and 4 are absorbing states.
For state 1, if we move out from state 1, we found that there is a route to come back to state
1, therefore state 1 is recurrent.

For state 2, if we move out from state 2, we found that there is no route to come back to state
2, therefore state 2 is transient.
For state 3, if we move out from state 3, we found that there is no route to come back to state
3, therefore state 3 is transient.
Similarly, for state 4, if we move out from state 4, we found that there is a route to come back
to state 4, therefore state 4 is recurrent.
Hence, 2 and 3 are transient states and 1 and 4 are recurrent states.
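The transition-graph test can be automated for a finite chain: a state is recurrent if and only if every state reachable from it can reach it back. The sketch below is illustrative (states are relabelled 0 to 3) and applies the test to the chain of Section 7.11.2, agreeing with the classification obtained there.

import numpy as np

# classify states of a finite chain as recurrent or transient via reachability
def classify(P):
    n = P.shape[0]
    reach = np.eye(n, dtype=bool)        # j reachable from i in zero or more steps
    Q = np.eye(n)
    for _ in range(n):
        Q = Q @ P
        reach |= Q > 0
    labels = {}
    for i in range(n):
        reachable = np.flatnonzero(reach[i])
        labels[i] = "recurrent" if all(reach[j, i] for j in reachable) else "transient"
    return labels

P = np.array([[1/3, 2/3, 0,   0  ],
              [1,   0,   0,   0  ],
              [1/2, 0,   1/2, 0  ],
              [0,   0,   1/2, 1/2]])
print(classify(P))   # states 0 and 1 recurrent, states 2 and 3 transient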

7.12 SOME IMPORTANT RESULTS

• State i is recurrent if and only if Σn≥0 pii(n) = ∞.
• State i is transient if and only if Σn≥0 pii(n) < ∞.
• If state i is transient, then pii(n) → 0 as n → ∞.

IN-TEXT QUESTIONS
1. _______________ are a fundamental part of stochastic processes and are used widely in many different disciplines.
2. If Σn≥0 pii(n) = ∞, then the state i is called ___________.
a) Recurrent state b) Transient state
c) Both of these d) None of these
3. A __________ irreducible Markov chain has all recurrent states.
4. If a TPM is a doubly stochastic matrix, then the sum of each column and each row is ______.


Theorem: Recurrence and transience are class properties.
If i is recurrent and C(i) = {i, j}, i.e., i and j communicate, then j is also recurrent; equivalently, if i is transient then j is also transient.
Remarks:
• A finite irreducible Markov chain has all recurrent states.
• A finite Markov chain has at least one recurrent state.

7.13 BASIC LIMIT THEOREM FOR APERIODIC MARKOV CHAIN

Let {Xn: n = 1, 2, 3, ……} be a recurrent, irreducible and aperiodic Markov chain with transition probability matrix P = (pij). Then

limn→∞ pij(n) = 1/μjj for all states i and j,

where μjj is the mean recurrence time of state j (in particular, pii(n) → 1/μii).

Example: Consider a Markov chain with state space {0, 1, 2, 3, 4}. The TPM is given below:

    [  1    0    0    0    0  ]
    [ 1/3  1/3  1/3   0    0  ]
P = [  0   1/3  1/3  1/3   0  ]
    [  0    0   1/3  1/3  1/3 ]
    [  0    0    0    0    1  ]

Then find limn→∞ p23(n).

Solution: The transition graph for the given TPM is,


From the graph it is clear that 0 and 4 are absorbing states, and hence recurrent, while 1, 2 and 3 are transient states. Since state 3 is transient,

limn→∞ p23(n) = 0.
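This limiting behaviour is easy to observe numerically: raising the TPM to a high power shows p23(n) shrinking towards zero as the probability mass is absorbed in states 0 and 4. The following sketch is illustrative only.

import numpy as np

P = np.array([[1,   0,   0,   0,   0  ],
              [1/3, 1/3, 1/3, 0,   0  ],
              [0,   1/3, 1/3, 1/3, 0  ],
              [0,   0,   1/3, 1/3, 1/3],
              [0,   0,   0,   0,   1  ]])

for n in (5, 20, 100):
    Pn = np.linalg.matrix_power(P, n)
    print(n, Pn[2, 3])    # p23(n) shrinks towards 0 as n grows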

7.14 STATIONARY DISTRIBUTION

Consider a Markov chain with transition probabilities pjk and TPM P = [pjk]. A probability distribution {vj} is called stationary or invariant for the given chain if

vk = Σj vj pjk for every state k,

such that vj ≥ 0 and Σj vj = 1.

Again, applying the relation twice,

vk = Σj vj pjk = Σj (Σi vi pij) pjk = Σi vi pik(2),

and, in general, vk = Σi vi pik(n) for n ≥ 1.
Let us consider the TPM

    [ p11  p12  p13  … ]
P = [ p21  p22  p23  … ]
    [ p31  p32  p33  … ]
    [  …    …    …     ]

In matrix notation, V = VP or, equivalently, 𝜋 = 𝜋P.

Let the set of states be {1, 2, 3, ……} and 𝜋 = [𝜋1 𝜋2 𝜋3 ……]. Writing 𝜋 = 𝜋P componentwise gives

𝜋1 = 𝜋1 p11 + 𝜋2 p21 + 𝜋3 p31 + ……
𝜋2 = 𝜋1 p12 + 𝜋2 p22 + 𝜋3 p32 + ……
𝜋3 = 𝜋1 p13 + 𝜋2 p23 + 𝜋3 p33 + ……
…………

together with the normalizing condition 𝜋1 + 𝜋2 + …… + 𝜋n = 1.

Solving these equations, we can find the stationary distribution.

Example: Let S = {1, 2, 3} with TPM

    [ 1/2  1/3  1/6 ]
P = [ 3/4   0   1/4 ]
    [  0    1    0  ]


Find the stationary distribution.


Solution: We know that,
𝜋 = 𝜋P

1 / 2 1 / 3 1 / 6
 
[𝜋1 𝜋2 𝜋3] = [𝜋1 𝜋2 𝜋3] 3 / 4 0 1 / 4
 0 1 0 

After multiplication, we have


𝜋1 = (1/2)𝜋1 + (3/4)𝜋2 ............. (1)
𝜋2 = (1/3)𝜋1 + 𝜋3 ............. (2)
𝜋3 = (1/6)𝜋1 + (1/4)𝜋2 ............. (3)
And 𝜋1 + 𝜋2 + 𝜋3 = 1 ............. (4)

Solving these equations, we have


𝜋1 = (3/2)𝜋2
𝜋3 = (1/2)𝜋2

Using values of 𝜋1 and 𝜋3 in equation (4), we have


(3/2)𝜋2 + 𝜋2 + (1/2)𝜋2 = 1
𝜋2 = 1/3
Therefore, 𝜋1 = 1/2 and 𝜋3 = 1/6
Hence, the required stationary distribution is, [𝜋1 𝜋2 𝜋3] = [1/2 1/3 1/6]
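The stationary equations can also be solved numerically. The sketch below (illustrative only) solves 𝜋 = 𝜋P together with the normalising condition using NumPy and reproduces [1/2 1/3 1/6].

import numpy as np

P = np.array([[1/2, 1/3, 1/6],
              [3/4, 0,   1/4],
              [0,   1,   0  ]])
n = P.shape[0]

# Solve (P^T - I) pi = 0 together with sum(pi) = 1 as a least-squares system
A = np.vstack([P.T - np.eye(n), np.ones(n)])
b = np.concatenate([np.zeros(n), [1.0]])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print(pi)          # [0.5, 0.3333..., 0.1666...], i.e. [1/2, 1/3, 1/6]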

Example: Consider the following markov chain,


Show that the given chain is irreducible and aperiodic. Also find the stationary distribution
for this chain.
Solution: From the transition graph it is clear that,
C(1) = {1, 2, 3}, therefore the chain is irreducible.
Also, states 1 and 3 have self-loops, therefore d(1) = 1 and d(3) = 1, and since all the states communicate, d(2) = 1 as well. Since the period of every state of the given Markov chain is 1, the chain is aperiodic.
The TPM for the given chain is

    [ 1/4  1/2  1/4 ]
P = [ 1/3   0   2/3 ]
    [ 1/2   0   1/2 ]

We know that 𝜋 = 𝜋P:

                            [ 1/4  1/2  1/4 ]
[𝜋1 𝜋2 𝜋3] = [𝜋1 𝜋2 𝜋3] [ 1/3   0   2/3 ]
                            [ 1/2   0   1/2 ]

After multiplication, we have


𝜋1 = (1/4)𝜋1 + (1/3)𝜋2 + (1/2)𝜋3 ............. (1)
𝜋2 = (1/2)𝜋1 ............. (2)
𝜋3 = (1/4)𝜋1 + (2/3)𝜋2 + (1/2)𝜋3 ............. (3)
And 𝜋1 + 𝜋2 + 𝜋3 = 1 ............. (4)
Solving these equations, we have
𝜋1 = 2𝜋2
𝜋3 = (7/3)𝜋2
Using values of 𝜋1 and 𝜋3 in equation (4), we have
2𝜋2 + 𝜋2 + (7/3)𝜋2 = 1
𝜋2 = 3/16
Therefore, 𝜋1 = 3/8 and 𝜋3 = 7/16
Hence, the required stationary distribution is, [𝜋1 𝜋2 𝜋3] = [3/8 3/16 7/16]


IN-TEXT QUESTIONS

5. For a recurrent state, if the mean recurrence time μii = Σn≥1 n fii(n) < ∞ (finite), then it is called a _____________.

6. If d(i) = 1, then the state i is a ___________.
a) Transient state b) Aperiodic state
c) Recurrent state d) None of these

7. If C(j) = {j, k}, then j is not an _____________.

8. If i and j communicate only with each other, not with other states, then C(i) = {i, j} is called a ___________ of states.

Example: Let {pn, n ≥ 0} be a sequence of numbers such that pn > 0 for all n ≥ 0,

Σn≥0 pn = 1 and Σn≥0 n pn < ∞.

Consider the Markov chain with S = {0, 1, 2, ……} and TPM

    [ p0  p1  p2  p3  … ]
    [ 1   0   0   0   … ]
P = [ 0   1   0   0   … ]
    [ 0   0   1   0   … ]
    [ …   …   …   …     ]

Show that the chain is irreducible and positive recurrent.
Solution: The transition diagram for the given TPM is,


From the graph it is clear that C(0) = {0, 1, 2, 3, ……} = S, so all the states communicate and the chain is irreducible. Moreover, from every state there is a path leading back to it, so all the states are recurrent.

Starting from state 0, the first step takes the chain to state n with probability pn, after which it moves deterministically n → n − 1 → …… → 0; the first return to 0 therefore occurs at step n + 1, so f00(n+1) = pn. Hence

μ00 = Σn≥1 n f00(n) = 1·f00(1) + 2·f00(2) + 3·f00(3) + ……
    = 1·p0 + 2·p1 + 3·p2 + ……
    = [p0 + (p1 + p2 + p3 + ……)] + (p1 + 2p2 + 3p3 + ……)
    = 1 + Σn≥1 n pn < ∞

Therefore, μ00 < ∞ and the chain is positive recurrent.

7.15 APPLICATION AREAS OF MARKOV CHAINS

Markov chains are utilised in a wide range of contexts because they may be created to
simulate a variety of real-world processes. These disciplines include speech recognition,
search engine algorithms, and the mapping of animal life populations. They are frequently
used in economics and finance to forecast macroeconomic events like market crashes and
cycles between recession and boom. Predicting asset and option values and estimating credit

risks are two other applications. Markov chains are also used to model the randomness of continuous-time financial markets; for instance, an asset can be priced using a stochastic discount factor that is defined in terms of a Markov chain.

7.16 SUMMARY

A key idea in stochastic processes is the Markov chain. They can be used to significantly
simplify processes that meet the Markov property, which states that a stochastic variable's
future state depends only on its current state. This means that understanding the process's past
performance won't help with future projections, which naturally minimises the quantity of
information that must be taken into account. It is possible to identify specific patterns in a
market's prior moves by examining its historical data. Markov diagrams can then be created
from these patterns and used to forecast future market movements and the dangers attached to
them.

7.17 GLOSSARY

• The Markov chain - The Markov chain is a fundamental mathematical tool for
stochastic processes. The Markov Property is the essential idea, according to which
some stochastic process predictions can be made more simply by treating the future as
independent of the past in light of the process's present state.
• A stochastic process - A stochastic or random process is a collection of random
variables that is indexed by a mathematical set, which means that each random
variable in the stochastic process is specifically linked to an element in the set.
• Markov chain - A sequence of random variables {Xn, where n = 0, 1, 2, 3, …..} with
discrete state space is known as markov chain if,

• Pr(Xn =K | Xn-1 = j, Xn-2 = j1, …….., X0 = jn-1) = Pr(Xn =K | Xn-1 = j) = pjk

• Absorbing state - If for any state i, C(i) has only one element, then the state is called
an absorbing state.

• Recurrent State - If the first return probability fii = Σn≥1 fii(n) = 1, then the state i is called a recurrent state.



• Transient State - If the first return probability fii = Σn≥1 fii(n) < 1, then the state i is called a transient state.

7.18 ANSWERS TO IN-TEXT QUESTIONS

1. Markov chains
2. Recurrent state
3. Finite
4. 1
5. Positive recurrent state
6. Aperiodic state
7. Absorbing state
8. Closed set

7.19 SELF-ASSESSMENT QUESTIONS

1) Consider the Markov chain with state space S = {1, 2, 3} and TPM

    [  0   1/2  1/2 ]
P = [ 1/2   0   1/2 ]
    [ 1/2  1/2   0  ]

Let 𝜋 = [𝜋1 𝜋2 𝜋3] be the stationary distribution of the Markov chain and let d(1) denote the period of state 1. Show that d(1) = 1 and 𝜋1 = 1/3.
2) Consider the Markov chain {Xn: n ≥ 0} on state space S = {0, 1} with TPM

P = [ 1  0 ]
    [ 0  1 ]

Show that limn→∞ P[Xn = i] converges for i = 0, 1, but that the limits depend on the initial distribution v.

3) A transmission system uses only the digits 0 and 1. One of these digits is to be transmitted through a number of stages. At each stage there is a probability p that the digit entering will be altered when it leaves and a probability q = 1 − p that it will not. Represent the transmission process as a Markov chain with the digits 0 and 1 as states. What is the transition probability matrix? Now draw a tree

and assign probabilities based on the assumption that the process starts in state 0 and
goes through two transmission stages. What is the likelihood that the machine will
eventually create the digit 0 after two stages?
4) Suppose that a man can work as a professional, a skilled worker, or an unskilled
worker. Suppose that, among the sons of professionals, 80% work in the field of their
fathers' profession, 10% are skilled labourers, and 10% are unskilled labourers. Sons
of skilled labourers make up 60% of the labour force, 20% of professionals, and 20%
of unskilled workers. In the case of unskilled labourers, 50% of the sons work as such,
with 25% of them falling into each of the other two categories. Assuming that every
man has at least one son, create a Markov chain by choosing a son at random from
each family and following that son's career path through numerous generations.
Construct the transition probability matrix. Calculate the probability that a randomly selected unskilled labourer's grandson is a professional man.

7.20 SUGGESTED READINGS

• B. Sericola (2013). Markov Chains: Theory, Algorithms and Applications. London:


ISTE Ltd and John Wiley & Sons Inc.
• R. G. Gallager (2013). Stochastic processes: theory for applications. United Kingdom:
Cambridge university press.

**************LMS Feedback: [email protected]**************


LESSON 8
THEORY OF GAMES
Dr. Upasana Dhanda
Assistant Professor
S.G.T.B. Khalsa College
Delhi University
[email protected]

STRUCTURE

8.1 Learning Objectives


8.2 Introduction
8.3 Game Models
8.4 Two-person zero sum game
8.4.1 When saddle point exists
8.4.2 When saddle point does not exist
8.4.3 Dominance Rule
8.4.4 Linear Combination
8.5 Solution of m × n games – Formulation and Solution as a LPP
8.6 Summary
8.7 Glossary
8.8 Answers to In-text Questions
8.9 Self-Assessment Questions
8.10 References
8.11 Suggested Readings

8.1 LEARNING OBJECTIVES

• The students will learn the concept of game theory for decision making in managerial
problems.
• It will equip them to know the consequences of interplay and pay-offs with the use of
each combination of strategies by the players in the game.
• Students will understand various game models and their solutions to find out the
optimal strategies and expected pay-off for the players in the game.

8.2 INTRODUCTION

Game Theory helps in decision-making in situations of conflict of interests where two or


more rational opponents are competing against each other. The situation of conflicting
interests among the opponents is called a game and the opponents or the decision-makers in
the game are the players. The theory of games helps in determining rules of rational behaviour for the players in game situations where the outcomes depend on the actions of interdependent players.
Each player in the game has its own set of strategies. A strategy is the course of action taken by a player in a given game situation. Each combination of strategies chosen by the players leads to an outcome called a pay-off. Each player in the game is assumed to be rational, which means that a player's preference among strategies is determined by the magnitude of the pay-offs associated with them. The pay-offs of the various combinations of strategies are given and known to the players, but their objectives differ and the outcomes are interdependent on each other's actions. The solution of the game calls for determining the optimal strategies for the players. Each player strives for an optimal strategy, which provides the best attainable position in the game because it yields the best pay-off that the player can secure.
Firms competing for market share, labour union striking against management, players in a
chess game, companies competing for sales are game situations in real-life. Theory of games
helps in addressing such situations of competition and conflicting interests among opponents
with an objective of rational decision making with optimal solution for the players.

8.3 GAME MODELS

There are different game models which are classified as follows.


On the basis of players:
Game situations where two opponents/players are competing against each other are called
two-person games. Games which involve more than two players are known as n-person games, where this does not necessarily mean that exactly n persons are involved in the game; rather, the participants can be categorized into n mutually exclusive categories whose members have identical interests.


On the basis of sum of gain and loss:


Game situations where the sum of gains and losses is equal to zero are called zero-sum or
constant sum games. Games where the sum of gains and losses is not zero are called non-zero
games. For example, if the players decide that at the end of the game, the loser would pay Rs.
500 to the winner, then it is called a zero-sum game because the loss of one player is the gain
of the other player (sum of gain and loss being equal to zero).
On the basis of strategies:
Game situations where players have the option of choosing from only a finite number of strategies are called finite games. Game situations where players can choose from an infinite number of strategies are called infinite games.
In our analysis, we deal with two-person zero-sum games with finite choices for players.

8.4 TWO-PERSON ZERO-SUM GAME

A two-person zero-sum game is the one which involves two players with competing interest
and gain of one is equal to the loss of another. To illustrate, let’s assume there are two
companies Alpha Limited (A) and Beta Limited (B) which are competing for the market
share. Now, given the total size of the market, gain of market share of one firm would lead to
the loss of market share for another. Thus, it is a zero-sum game as sum of gains and losses
for both the firms is equal to zero.
Now, let’s assume that both the firms are considering four strategies to increase their market
share; High advertising, celebrity endorsements, free samples and social media marketing.
We assume that currently they have equal market share and further each of the firm can
employ only one strategy at a time.
Given the above conditions, 4 × 4 =16 combinations of moves are possible. High advertising
by Alpha Limited can be accompanied by high advertising, celebrity endorsements, free
samples and social media marketing by Beta Limited. Similarly, celebrity endorsements by
Alpha Limited can be accompanied by high advertising, celebrity endorsements, free samples
and social media marketing by Beta Limited and so on for further strategies. Each
combination of strategy will affect the market share in a particular way giving the pay-offs.
For example, high advertising by Alpha Limited and high advertising by Beta Limited will
lead to 16 points (implying 16% market share) in favour of Alpha Limited. Similarly, high
advertising by Alpha Limited accompanied by celebrity endorsements by Beta Limited leads

to 17 points (17% market share) in favour of Beta Limited. Similarly, there are pay-offs for
each combination of strategies employed by Alpha and Beta.
The pay-offs are shown in the matrix below. The strategies of high advertising, celebrity
endorsements, free samples and social media marketing employed by Alpha are given as a1,
a2, a3 and a4 and strategies of high advertising, celebrity endorsements, free samples and
social media marketing employed by Beta are given as b1, b2, b3 and b4 in the table. Please
note the pay-off matrix is drawn from Alpha’s viewpoint which means that a positive pay-off
means that Alpha has gained the market share at the expense of Beta and the negative pay-
offs imply Beta’s gain at the expense of Alpha.

Beta's Strategies
                              b1     b2     b3     b4
Alpha's strategies    a1      16    -17     -8      9
                      a2       6      8     -5    -13
                      a3      11      9     12     16
                      a4       3      4      9      8

Now, we need to understand that both the companies are aware of the pay-off matrix but they
are not aware of the strategy that the other one will choose. The conservative approach to
select the best strategy will be to assume the worst and act accordingly. Thus, with reference
to the pay-off matrix, if Alpha Limited chooses strategy a1, it would expect Beta Limited to
choose strategy b2, resulting in -17 as the pay-off for Alpha. If Alpha chooses a2, it would expect Beta to select b4, resulting in -13 as the pay-off. Similarly, if Alpha chooses a3,
it would lead to 9 as the pay-off as it would expect Beta to select b2 and choosing a4 as the
strategy by Alpha would lead to 3 as the pay-off as it would expect Beta to select b1 strategy.
We need to keep in mind that both companies know the pay-off matrix but are unaware of the other's chosen strategy, and both are conservative in deciding their strategy based on the pay-offs.
The company Alpha Limited would like to make the best use of the situation by choosing the
maximum out of these minimum pay-offs. In other words, it would select the highest of the
minimum pay-offs for each of the four strategies. This decision rule is called maximin


strategy: choosing the maximum out of the minimum pay-offs. Since the minimum pay-off for each of Alpha's strategies a1, a2, a3 and a4 is -17, -13, 9 and 3 respectively, Alpha Limited would select the maximum of these pay-offs, i.e. 9, which corresponds to strategy a3 (free samples).
Similarly, Beta Limited would also be conservative in its approach. If Beta chooses b1, then it would expect Alpha to choose a1 (maximum advantage for Alpha), resulting in 16 as the pay-off. If Beta chooses b2, then it would expect Alpha to choose a3, resulting in 9 as the pay-off. Similarly, choosing b3 would result in a pay-off of 12, as Alpha would be expected to choose a3, and if Beta selects b4, then Alpha would be expected to select a3, resulting in 16 as the pay-off. To minimize the advantage to Alpha, Beta would select the strategy that yields the minimum advantage to its competitor. Hence, the decision of Beta Limited will be in accordance with the minimax strategy: selecting the minimum out of the maximum pay-offs. Since the maximum pay-off for each of Beta's strategies b1, b2, b3 and b4 is 16, 9, 12 and 16 respectively, Beta Limited would select the minimum of these maximum pay-offs, i.e. 9, which corresponds to strategy b2 (celebrity endorsements).
It should be noted that corresponding to the maximin rule for Alpha Limited and the minimax rule for Beta Limited, the pay-off is 9. This pay-off is the value of the game, which represents the final pay-off made to the winner by the losing player. Since the pay-off of 9 is drawn from Alpha's point of view, the game situation is favourable towards Alpha Limited. If the game value were negative, it would be favourable towards Beta Limited. The game would be said to be fair or equitable if the value of the game were zero, meaning it favours neither of the players.
Thus, in the above example, Alpha’s optimal strategy is a3 (giving free samples) and Beta’s
optimal strategy is b2 (celebrity endorsements) and the value of the game is 9 which means
9% market share in favour of Alpha Limited. The game situation is favourable towards
Alpha.
8.4.1 Saddle Point
The point of equilibrium where the maximin value is equal to the minimax value is called the saddle point. To obtain the saddle point, we find the row minima (the minimum pay-off for each row in the pay-off matrix) and the column maxima (the maximum pay-off for each column in the pay-off matrix). If the maximum of the row minima is equal to the minimum of the column maxima, then that value represents the saddle point. Let's consider our previous example.


Beta's Strategies
                              b1     b2     b3     b4     Row minima
Alpha's strategies    a1      16    -17     -8      9        -17
                      a2       6      8     -5    -13        -13
                      a3      11      9     12     16          9*
                      a4       3      4      9      8          3
Column maxima                 16      9*    12     16

In the table, we find the row minima (minimum pay-off for each row) and the column maxima (maximum pay-off for each column). As we can see, the maximum of the row minima (maximin strategy) and the minimum of the column maxima (minimax strategy) are the same, i.e. 9. This is the point of equilibrium (saddle point). It represents the value of the game and implies that Alpha Limited will gain 9% market share at the cost of Beta Limited. The game situation is favourable to Alpha Limited.
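The maximin-minimax reasoning above is mechanical enough to script. The following is a minimal Python sketch (illustrative only, hard-coding the Alpha-Beta pay-off matrix of this example) that computes the row minima, the column maxima, the maximin and minimax values, and checks whether a saddle point exists.

```python
# Minimal sketch (illustrative): saddle-point check for a two-person zero-sum game.
# The pay-off matrix is written from Alpha's (the row player's) point of view.

payoff = [
    [16, -17, -8,   9],   # a1: high advertising
    [ 6,   8, -5, -13],   # a2: celebrity endorsements
    [11,   9, 12,  16],   # a3: free samples
    [ 3,   4,  9,   8],   # a4: social media marketing
]

row_minima = [min(row) for row in payoff]          # worst outcome of each Alpha strategy
col_maxima = [max(col) for col in zip(*payoff)]    # worst outcome (for Beta) of each Beta strategy

maximin = max(row_minima)   # Alpha: best of the worst cases
minimax = min(col_maxima)   # Beta: least of the worst cases

print("Row minima:", row_minima)       # [-17, -13, 9, 3]
print("Column maxima:", col_maxima)    # [16, 9, 12, 16]
print("Maximin =", maximin, "Minimax =", minimax)

if maximin == minimax:
    print("Saddle point exists; value of the game =", maximin)   # 9, at (a3, b2)
else:
    print("No saddle point; mixed strategies are needed.")
```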
A game can also have more than one saddle point for a given problem. Let's consider
another example.

Beta's Strategies
                              b1     b2     b3     b4     Row minima
Alpha's strategies    a1      16    -17     -8      6        -17
                      a2       6      8     -5    -13        -13
                      a3      10      9      9     16          9*
                      a4       3      5      9     10          3
Column maxima                 16      9*     9*    16


In this example, the optimal strategy for Alpha Limited is a3 and the optimal strategies for
Beta Limited are b2 and b3. There are two saddle points at 9 which is the value of the game. It
means gain of 9% market share for Alpha Limited and loss of 9% market share for Beta
Limited.
8.4.2 When Saddle Point does not exist
In case a saddle point does not exist, it is not possible to find the solution in terms of pure strategies (the maximin and minimax strategies). The solution to such problems calls for employing mixed strategies. A mixed strategy is a combination of two or more strategies selected by a player at a given time, according to pre-determined probabilities. Players choose a mix of strategies in a given ratio.
Let's discuss the solution for 2 × 2 games where a saddle point does not exist.

B player's strategies
                               b1     b2
A player's strategies    a1     8     -9
                         a2    -5     10

In the above problem, a saddle point does not exist, so the method discussed in the previous section will not suffice to find the optimal strategies for players A and B.
If player A chooses strategy a1, then player B will choose b2, and if A chooses a2, then B will choose b1. So if B knows A's choice, then B can ensure his/her gain by choosing a strategy opposite to A. Therefore, A will make it difficult for B to guess what he/she is going to choose. Similarly, B will also make it difficult for A to guess the strategy B is likely to choose. The players will therefore mix their strategies in certain proportions.
Now, suppose A chooses strategy a1 with probability x; then A will choose strategy a2 with probability (1-x). If player B plays strategy b1, then A's pay-off can be determined with reference to the first column of the pay-off matrix as given below.
Expected pay-off of A if B adopts b1 strategy = 8x- 5(1-x)
Similarly, expected pay-off of A if B adopts b2 strategy = -9x + 10(1-x)


Now, we have to find the value of x such that the expected pay-off of A is the same irrespective of the strategy adopted by B.
8x - 5(1-x) = -9x + 10(1-x)
8x - 5 + 5x = -9x + 10 - 10x
13x - 5 = 10 - 19x
32x = 15
x = 15/32
This means A would adopt strategy a1 and a2 in the proportion of 15:17.
The expected pay-off for player A is
8x- 5(1-x) = ( 8 × 15/32 ) – ( 5 × 17/32 ) = 35/32
-9x + 10(1-x) = ( -9 × 15/32 ) + ( 10 × 17/32 ) = 35/32
Thus, player A will have a gain of 35/32 per play in the long run.
We can find the mixed strategy for player B in a similar manner. Suppose B chooses strategy b1 with probability y; then B will choose strategy b2 with probability (1-y). If player A plays strategy a1, then B's pay-off can be determined with reference to the first row of the pay-off matrix as given below.
Expected pay-off of B if A adopts a1 strategy = 8y- 9(1-y)
Similarly, expected pay-off of B if A adopts a2 strategy = -5y + 10(1-y)
Now, we have to find the value of y such that the expected pay-off of B is the same irrespective of the strategy adopted by A.
8y - 9(1-y) = -5y + 10(1-y)
8y - 9 + 9y = -5y + 10 - 10y
17y - 9 = 10 - 15y
32y = 19
y = 19/32
This means B would adopt strategy b1 and b2 in the proportion of 19:13.
The expected pay-off (loss) for player B is
8y - 9(1-y) = ( 8 × 19/32 ) - ( 9 × 13/32 ) = 35/32
-5y + 10(1-y) = ( -5 × 19/32 ) + ( 10 × 13/32 ) = 35/32
This implies B will lose 35/32 per play in the long run.

The value of the game is 35/32.

            Strategy    Probability
Player A       a1          15/32
               a2          17/32
Player B       b1          19/32
               b2          13/32

In general, consider a 2 × 2 game with the following pay-off matrix.

B player's strategies
                               b1      b2
A player's strategies    a1    a11     a12
                         a2    a21     a22

Formula:

x = (a22 - a21) / [ (a11 + a22) - (a12 + a21) ]

y = (a22 - a12) / [ (a11 + a22) - (a12 + a21) ]

V = (a11 a22 - a12 a21) / [ (a11 + a22) - (a12 + a21) ]

where x is the probability with which A plays a1, y is the probability with which B plays b1, and V is the value of the game.



For the above example, let's solve:

x = (10 - (-5)) / [ (8 + 10) - (-9 - 5) ] = 15/32

y = (10 - (-9)) / [ (8 + 10) - (-9 - 5) ] = 19/32

V = (8 × 10 - (-9 × -5)) / [ (8 + 10) - (-9 - 5) ] = 35/32

The values match with the solution obtained earlier. This means A would adopt strategy a1
and a2 in the proportion of 15:17. B would adopt strategy b1 and b2 in the proportion of 19:13.
The value of the game is 35/32 which means player A gains and Player B loses 35/32.
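These closed-form expressions are easy to wrap in a small helper function. The sketch below is only illustrative (the function name solve_2x2 is our own, not a library routine); it reproduces x = 15/32, y = 19/32 and V = 35/32 for the example above, using exact fractions.

```python
from fractions import Fraction

def solve_2x2(a11, a12, a21, a22):
    """Mixed-strategy solution of a 2 x 2 zero-sum game with no saddle point.

    Returns (x, y, V): x is the probability of a1, y the probability of b1,
    and V the value of the game, all as exact fractions."""
    d = (a11 + a22) - (a12 + a21)             # common denominator of the three formulas
    x = Fraction(a22 - a21, d)
    y = Fraction(a22 - a12, d)
    V = Fraction(a11 * a22 - a12 * a21, d)
    return x, y, V

x, y, V = solve_2x2(8, -9, -5, 10)
print(x, 1 - x)   # 15/32 17/32 -> A mixes a1 : a2 = 15 : 17
print(y, 1 - y)   # 19/32 13/32 -> B mixes b1 : b2 = 19 : 13
print(V)          # 35/32       -> value of the game
```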
8.4.3 Dominance Rule
In a game, a player may find one strategy to dominate over the other(s). This means that in all
situations, a particular strategy will be preferred over the other(s). This concept of domination
of strategy is extremely useful in simplifying the problem and finding the solution to the
game.
Let’s consider an example,

B player's strategies
                               b1     b2     b3
A player's strategies    a1     8     -9     10
                         a2    -5     10      6
                         a3     6    -11      5


We notice that every element of first row exceeds the corresponding element of third row in
the matrix (8 > 6; -9 > -11 and 10 > 5). This means that in any given situation, player A will
always prefer a1 over a3. Thus, a1 dominates a3. Hence, a3 can be deleted.

B player's strategies
                               b1     b2     b3
A player's strategies    a1     8     -9     10
                         a2    -5     10      6

From the reduced matrix, we observe that every element of the third column is greater than the corresponding element in the first column. Since B would like to minimize the pay-off to A, B will always select b1 over b3. Hence, b1 dominates b3 and b3 can be eliminated.

                               b1     b2
A player's strategies    a1     8     -9
                         a2    -5     10

Now, the problem is reduced to a 2 × 2 game, exactly the same as the previous example. Thus, it can be solved in the manner explained earlier, and the solution will be as follows.
The value of the game is 35/32.

            Strategy    Probability
Player A       a1          15/32
               a2          17/32
               a3           0
Player B       b1          19/32
               b2          13/32
               b3           0
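The row and column deletions used above can also be carried out programmatically. The sketch below is a simple illustrative pass (our own helper, not a library function): a row of the maximising player is dropped when another row is at least as good everywhere, and a column of the minimising player is dropped when another column gives the opponent no larger pay-off anywhere; the pass repeats until nothing changes.

```python
# Illustrative sketch: iterated deletion of (weakly) dominated strategies.
# Rows belong to the maximising player A, columns to the minimising player B.

def reduce_by_dominance(payoff):
    rows = list(range(len(payoff)))
    cols = list(range(len(payoff[0])))
    changed = True
    while changed:
        changed = False
        # Row i is dominated if some other surviving row k is >= it in every surviving column.
        for i in list(rows):
            if any(k != i and all(payoff[k][j] >= payoff[i][j] for j in cols) for k in rows):
                rows.remove(i)
                changed = True
        # Column j is dominated if some other surviving column k gives A no more than it
        # in every surviving row (B prefers columns that give A less).
        for j in list(cols):
            if any(k != j and all(payoff[i][k] <= payoff[i][j] for i in rows) for k in cols):
                cols.remove(j)
                changed = True
    return rows, cols

payoff = [[ 8,  -9, 10],
          [-5,  10,  6],
          [ 6, -11,  5]]
print(reduce_by_dominance(payoff))   # ([0, 1], [0, 1]) -> only a1, a2 and b1, b2 survive
```

The surviving 2 × 2 game can then be solved with the mixed-strategy formulas given earlier.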

8.4.4 Linear Combination


Let’s consider the following game.

B player's strategies
                               b1     b2
A player's strategies    a1    28      0
                         a2     2     12
                         a3     4      7

In this problem, we see that no strategy dominates any other strategy. However, we notice that a linear combination of strategies a1 and a2 in the ratio of 1:3 will always dominate strategy a3. Please note that the ratio in which a combination of strategies dominates another strategy is found by trial and error.

¼ × 28 + ¾ × 2 = 8.5 > 4 and ¼ × 0 + ¾ × 12 = 9 > 7

The problem can thus be reduced to the following 2 × 2 matrix:

                               b1     b2
A player's strategies    a1    28      0
                         a2     2     12

x = (a22 - a21) / [ (a11 + a22) - (a12 + a21) ] = (12 - 2) / [ (28 + 12) - (0 + 2) ] = 5/19

y = (a22 - a12) / [ (a11 + a22) - (a12 + a21) ] = (12 - 0) / [ (28 + 12) - (0 + 2) ] = 6/19

V = (a11 a22 - a12 a21) / [ (a11 + a22) - (a12 + a21) ] = (28 × 12 - 0 × 2) / [ (28 + 12) - (0 + 2) ] = 168/19

The optimal strategy for A is (5/19, 14/19, 0) and for B is (6/19, 13/19), and the value of the game is 168/19.
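The dominance claim and the reduced-game solution above can be checked numerically. This is a minimal sketch with exact fractions; the weights 1/4 and 3/4 correspond to the trial-and-error ratio 1:3 quoted in the text.

```python
from fractions import Fraction

a1, a2, a3 = [28, 0], [2, 12], [4, 7]

# The 1:3 mixture of a1 and a2, compared element-wise with a3.
combo = [Fraction(1, 4) * u + Fraction(3, 4) * v for u, v in zip(a1, a2)]
print(all(c > d for c, d in zip(combo, a3)))   # True -> the mixture (17/2, 9) dominates a3

# Solve the reduced 2 x 2 game [[28, 0], [2, 12]] with the closed-form formulas.
a11, a12, a21, a22 = 28, 0, 2, 12
d = (a11 + a22) - (a12 + a21)
x = Fraction(a22 - a21, d)                 # 5/19   -> probability of a1
y = Fraction(a22 - a12, d)                 # 6/19   -> probability of b1
V = Fraction(a11 * a22 - a12 * a21, d)     # 168/19 -> value of the game
print(x, 1 - x, y, 1 - y, V)               # 5/19 14/19 6/19 13/19 168/19
```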

8.5 SOLUTION OF M × N GAMES – FORMULATION AND SOLUTION AS A LPP

Let’s consider the following game.

B player's strategies
                               B1      B2      B3
A player's strategies    A1    a11     a12     a13
                         A2    a21     a22     a23
                         A3    a31     a32     a33

B player's strategies
                               B1     B2     B3
A player's strategies    A1     8      9      3
                         A2     2      5      6
                         A3     4      1      7


We can solve the above problem by formulating it as an LPP from either A's or B's point of view. Let's first consider it from A's point of view. We take x1, x2, x3 as the probabilities with which player A will choose strategies A1, A2 and A3 respectively.
Player A would use the maximin strategy, which is to maximize the minimum expected gain from playing this game; this minimum gain is denoted by 'U'.
Now, the expected pay-off of A will be as follows:

E1 = a11x1 + a21x2 + a31x3 (If player B chooses B1)


E2 = a12x1 + a22x2 + a32x3 (If player B chooses B2)
E3 = a13x1 + a23x2 + a33x3 (If player B chooses B3)
Now, we can express the problem as

Maximize U (Maximize the minimum value of U)


subject to
a11x1 + a21x2 + a31x3 ≥ U
a12x1 + a22x2 + a32x3 ≥ U
a13x1 + a23x2 + a33x3 ≥ U

x1 + x2 + x3 = 1,  x1, x2, x3 ≥ 0
Now, assuming that U is positive (which would be if all pay-offs are positive), we can divide
the constraints by U and attempt to minimize 1/U rather than maximize U.
We further define a new variable Xi = xi/U and restate the problem as follows.
Minimize 1/U = X1 + X2 + X3
a11X1 + a21X2 + a31X3 ≥ 1
a12X1 + a22X2 + a32X3 ≥ 1


a13X1 + a23X2 + a33X3 ≥ 1

X1 , X2 , X3 ≥ 0

So, for the above problem, we can formulate the game situation as an LPP from player A's point of view.
Minimize 1/U = X1 + X2 + X3
8X1 + 2X2 + 4X3 ≥ 1
9X1 + 5X2 + X3 ≥ 1
3X1 + 6X2 + 7X3 ≥ 1

X1 , X2 , X3 ≥ 0
Now, we can simply solve the above LPP using Simplex method and obtain the solution.
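If a linear programming solver is available, the same LPP can be handed to a library rather than solved by hand. The sketch below is illustrative and assumes scipy is installed; scipy.optimize.linprog minimises c·X subject to A_ub·X ≤ b_ub, so the ≥ constraints are multiplied by -1 before being passed in.

```python
import numpy as np
from scipy.optimize import linprog

payoff = np.array([[8, 9, 3],
                   [2, 5, 6],
                   [4, 1, 7]])

# Player A's LPP: minimise 1/U = X1 + X2 + X3 subject to payoff.T @ X >= 1, X >= 0.
c = np.ones(3)
res = linprog(c, A_ub=-payoff.T, b_ub=-np.ones(3),
              bounds=[(0, None)] * 3, method="highs")

U = 1 / res.fun        # value of the game (valid here because all pay-offs are positive)
x = res.x * U          # recover the probabilities: xi = Xi * U
print("Value of the game:", U)     # 67/13, approximately 5.1538
print("A's optimal mix:", x)       # approximately [0.4038, 0.2308, 0.3654], i.e. 21/52, 12/52, 19/52
```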

If we look at the problem from player B's point of view, we take y1, y2, y3 as the probabilities with which player B will choose strategies B1, B2 and B3 respectively.
Player B would use the minimax strategy, which is to minimize the maximum expected gain of A from playing this game; this maximum loss is denoted by 'V'.
Now, the expected pay-off of B will be as follows:

E’1 = a11y1 + a12y2 + a13y3 (If player A chooses A1)


E’2 = a21y1 + a22y2 + a23y3 (If player A chooses A2)
E’3 = a31y1 + a32y2 + a33y3 (If player A chooses A3)
Now, we can express the problem as

Minimize V
subject to


a11y1 + a12y2 + a13y3 ≤ V


a21y1 + a22y2 + a23y3 ≤ V
a31y1 + a32y2 + a33y3 ≤ V
y1 + y2 + y3 = 1,  y1, y2, y3 ≥ 0
We assume V is positive (which would be if all pay-offs are positive), we can divide the
constraints by V and attempt to maximize 1/V rather than minimize V.
We further define a new variable Yi = yi/V and restate the problem as follows.
Maximize 1/V = Y1 + Y2 + Y3
a11Y1 + a12Y2 + a13Y3 ≤ 1
a21Y1 + a22Y2 + a23Y3 ≤ 1
a31Y1 + a32Y2 + a33Y3 ≤ 1

Y1 , Y2 , Y3 ≥ 0
This is the dual of the LPP given earlier.
So, for the above problem, we can formulate the game situation as an LPP from player B's point of view.
Maximize 1/V = Y1 + Y2 + Y3
8Y1 + 9Y2 + 3Y3 ≤ 1
2Y1 + 5Y2 + 6Y3 ≤ 1
4Y1 + Y2 + 7Y3 ≤ 1

Y1 , Y2 , Y3 ≥ 0
Now, we can simply solve the above LPP using Simplex method and obtain the solution.
We would be solving the maximization problem and reading the optimal solution of the
primal (minimization problem) from the optimal solution of the dual.


Maximize 1/V = Y1 + Y2 + Y3 + 0S1 + 0S2 + 0S3


8Y1 + 9Y2 + 3Y3 + S1 = 1
2Y1 + 5Y2 + 6Y3 + S2 = 1
4Y1 + Y2 + 7Y3 + S3 = 1

Y1 , Y2 , Y3, S1 , S2 , S3 ≥ 0
Table 1

Cj    Basic       Basic       Y1     Y2     Y3     S1     S2     S3     Ratio
      variable    solution
0     S1          1           8      9      3      1      0      0      1/8
0     S2          1           2      5      6      0      1      0      1/2
0     S3          1           4      1      7      0      0      1      1/4
      Cj                      1      1      1      0      0      0
      Zj          0           0      0      0      0      0      0
      Cj - Zj                 1      1      1      0      0      0

So, in Table 2, S1 exits and Y1 enters.

Table 2

Cj    Basic       Basic       Y1     Y2      Y3      S1      S2     S3     Ratio
      variable    solution
1     Y1          1/8         1      9/8     3/8     1/8     0      0      1/3
0     S2          3/4         0      11/4    21/4    -1/4    1      0      1/7
0     S3          1/2         0      -7/2    11/2    -1/2    0      1      1/11
      Cj                      1      1       1       0       0      0
      Zj          1/8         1      9/8     3/8     1/8     0      0
      Cj - Zj                 0      -1/8    5/8     -1/8    0      0

So, in Table 3, S3 exits and Y3 enters.


Table 3

Cj    Basic       Basic       Y1     Y2      Y3     S1      S2     S3       Ratio
      variable    solution
1     Y1          1/11        1      15/11   0      7/44    0      -3/44    1/15
0     S2          3/11        0      67/11   0      5/22    1      -21/22   3/67
1     Y3          1/11        0      -7/11   1      -1/11   0      2/11     -
      Cj                      1      1       1      0       0      0
      Zj          2/11        1      8/11    1      3/44    0      5/44
      Cj - Zj                 0      3/11    0      -3/44   0      -5/44

So, in Table 4, S2 exits and Y2 enters.


Table 4

Cj    Basic       Basic       Y1     Y2     Y3     S1        S2       S3
      variable    solution
1     Y1          2/67        1      0      0      29/268    -15/67   39/268
1     Y2          3/67        0      1      0      5/134     11/67    -21/134
1     Y3          8/67        0      0      1      -9/134    7/67     11/134
      Cj                      1      1      1      0         0        0
      Zj          13/67       1      1      1      21/268    12/268   19/268
      Cj - Zj                 0      0      0      -21/268   -12/268  -19/268

Optimal solution is obtained.


Substituting the values,
1/V = 2/67 + 3/67 + 8/67 = 13/67
Value of the game V = 67/13
Since Yi = yi/V, we have yi = Yi × V, so
y1 = 2/13
y2 = 3/13
y3 = 8/13
We can read the optimal solution of the primal from the Cj - Zj values under the slack variables, so X1, X2, X3 are 21/268, 12/268 and 19/268 respectively.
1/U = 21/268 + 12/268 + 19/268 = 13/67, so U = 67/13
Since Xi = xi/U, we have xi = Xi × U.

So, x1 = 21/52
x2 = 12/52
x3 = 19/52
The optimal strategy for player A is in the ratio 21:12:19 and for player B in the ratio 2:3:8. The value of the game is 67/13.
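As a quick check of this answer (a verification sketch only): if A mixes 21:12:19 and B mixes 2:3:8, every pure strategy of the opponent should give the same expected pay-off, namely the game value 67/13.

```python
from fractions import Fraction

payoff = [[8, 9, 3],
          [2, 5, 6],
          [4, 1, 7]]

x = [Fraction(21, 52), Fraction(12, 52), Fraction(19, 52)]   # A's optimal mix
y = [Fraction(2, 13), Fraction(3, 13), Fraction(8, 13)]      # B's optimal mix

# Expected pay-off to A against each pure strategy of B (columns of the matrix) ...
against_B = [sum(x[i] * payoff[i][j] for i in range(3)) for j in range(3)]
# ... and the expected pay-off when B mixes, against each pure strategy of A (rows).
against_A = [sum(y[j] * payoff[i][j] for j in range(3)) for i in range(3)]

print(against_B)   # each entry equals 67/13
print(against_A)   # each entry equals 67/13 -> the value of the game is 67/13
```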


IN-TEXT QUESTIONS
1. Saddle point exists when values from maximin and minimax strategy
are ____________________.
2. Game situation occurs when two or more player have
____________________interests.
3. Every game situation must have a saddle point. True / False
4. The strategy which will always be preferred by a player over other
strategies in any situation is called _______________ strategy.
5. A game situation where the gain of one player is equal to the loss of
other player is called ____________.
6. In a two-person game, both players should have equal number of
strategies. True/False
7. Saddle point is the point of equilibrium. True/False
8. The combination of strategies used by the player(s) in a particular ratio
is called ____________ strategy.
9. Dominance principle implies that strategies of one player are
dominating over the strategies of other player. True/False
10. Mixed strategy can use only combination of two strategies. True/False

8.6 SUMMARY

In this lesson, we learnt about decision making in game situations where players have conflicting interests and want to know the optimal strategy to be employed. The pay-off matrix is known to the players, but their decisions are interdependent. We learnt about two-person zero-sum games in different cases: when a saddle point exists, when a saddle point does not exist, the dominance rule and linear combinations of strategies. The solution of m × n games was also discussed through formulation and solution as an LPP.


8.7 GLOSSARY

• Game: The situation of conflicting interests among the opponents is called a game.

• Strategy: A strategy is the action taken by the player in various game situations.

• Pay-off: Each strategy chosen by the player in a game situation leads to outcomes
called pay-offs.

• Saddle point: The point of equilibrium where the maximin value is equal to the
minimax value is called saddle point.

• Mixed Strategy: A mixed strategy is the combination of two or more strategies


selected by the players at a given time, according to a pre-determined probability.

• Dominance Rule: In a game, a player may find one strategy to dominate over the
other(s). This means that in all situations, a particular strategy will be preferred over
the other(s) by the player.

8.8 ANSWERS TO IN-TEXT QUESTIONS

1. same/equal 6. False
2. conflicting/contradicting 7. True
3. False 8. Mixed
4. dominating 9. False
5. zero sum game 10. False

8.9 SELF-ASSESSMENT QUESTIONS

1. Solve the following game and determine the value of the game and optimal strategies
for both the players.


B player's strategies
                               B1     B2     B3     B4
A player's strategies    A1     3      2      4      0
                         A2     3      4      2      4
                         A3     4      2      4      0
                         A4     0      4      0      8

2. Solve the following game and determine the value of the game and optimal strategies
for both the players.
B player's strategies
                               B1     B2     B3
A player's strategies    A1     3     -1      4
                         A2     6      7     -2

3. Solve the following game and determine the value of the game and optimal strategies
for both the players.
B player's strategies
                               B1     B2
A player's strategies    A1     3      7
                         A2    -5      5

4. Solve the following game and determine the value of the game and optimal strategies
for both the players.
B player's strategies
                               B1     B2     B3
A player's strategies    A1     5      9      3
                         A2     6    -12    -11
                         A3     8     16     10

5. Solve the following game and determine the value of the game and optimal
strategies for both the players.
B player's strategies
                                    No           Medium        High
                                    promotion    promotion     promotion
A player's strategies
   No promotion                      5             9             3
   Medium promotion                  6           -12           -11
   High promotion                    8            16            10

8.10 REFERENCES & SUGGESTED READINGS

• Vohra, N. D. (2006). Quantitative Techniques in Management, 3e. Tata McGraw-Hill


Education.

• Kothari, C.R. (2013). Quantitative Techniques, (New Format), 3/e Vikas Publishing.

• Jaisankar, S. (2009). Quantitative Techniques for Management. Excel Books.


**************LMS Feedback: [email protected]**************
