Decision Modeling and Optimisation
Department of Distance and Continuing Education
University of Delhi
Content Writers
Dr. Reena Jain, Dr. Deepa Tyagi,
Dr. Shubham Aggarwal, Dr. Sandeep Mishra,
Dr. Upasana Dhanda
Academic Coordinator
Mr. Deekshant Awasthi
Published by:
Department of Distance and Continuing Education under
the aegis of Campus of Open Learning/School of Open Learning,
University of Delhi, Delhi-110 007
Printed by:
School of Open Learning, University of Delhi
DISCLAIMER
This book has been written for academic purposes only. Though every effort has been made to avoid errors, some unintentional errors may still have occurred. The authors, the editors, the publisher and the distributor are not responsible for any action taken on the basis of this study module or its consequences thereof.
Lesson – 1: Model Building for Optimization & Distribution and Network Models
1.1 Learning Objectives
1.2 Introduction
1.3 Linear Programming model
1.4 Distribution and networking models
1.5 Summary
Lesson – 4: Simulation
4.1. Learning Objectives
4.2. Introduction of Simulation
4.3. Key Advantages of Simulation for Business
4.4. General Elementary Steps in the Simulation Technique
4.5. Types Of Simulation Models to Control in Management Science
4.6 Monte Carlo Simulation
4.7. Tools For the Verification and Validation of Simulation Model
4.8. Advantages And Limitations of Simulation
4.9. Summary
LESSON 1
MODEL BUILDING FOR OPTIMIZATION & DISTRIBUTION AND NETWORK
MODELS
Dr. Reena Jain
Assistant Professor
Kalindi College
University of Delhi
[email protected]
STRUCTURE
1.1 LEARNING OBJECTIVES
After learning this chapter, students will be able to formulate real-life problems of logistics, networking, production, diet requirements, etc. as mathematical models. It will help them understand the practical applications of networking, and the theory studied will be helpful in determining the optimal solution for distribution network problems. It will be helpful in determining the shortest route between any two places, minimizing the transportation cost between two places, maximizing the efficiency of transportation between any two points, and so on. This lesson will make them better equipped to take managerial decisions in the realistic situations discussed above.
1.2 INTRODUCTION
Model building for optimization is done using the techniques of linear programming and networks. Linear programming is a very important quantitative technique for the best possible distribution of scarce resources such as labour, materials, machinery, money and energy. It is used in almost every aspect of life, whether marketing, domestic planning, production or anything else. You are already familiar with the term 'linear': it describes how two or more variables in a model relate to one another proportionally, so that a specified change in one variable is always accompanied by a given change in another. The term 'programming' refers to devising a technique for doing work in an organized manner. It is planning that involves the economic allocation of scarce resources among various options to attain the optimal goal, i.e., to maximize or minimize the objective function. Hence, linear programming is a quantitative technique for the optimum allocation of limited or scarce resources such as labour, materials, equipment, money and energy.
Linear programming problems in general are concerned with the use or allocation of limited resources – labour, materials, machines and capital – in the best possible manner so that costs are minimized or profits are maximized. The best decision is found by solving a mathematical problem. The technique of networking is used for distribution models. It includes the transportation problem, the perfect matching (assignment) problem, the maximal flow problem, etc., using the idea of a network.
The linear programming models are widely used to solve a number of military, economic, industrial and social problems. There are various reasons for their wide use, such as:
1. A large variety of problems in diverse fields can be represented as linear programming models.
2. Efficient and simple techniques are available for solving linear programming
problems.
3. Data variation can be handled through linear programming models with ease.
Networking helps in determining the shortest route between any two given points; it calculates the maximum flow and helps in determining the best assignment schedule, i.e., perfect matching, etc.
In other words, the objective function is the direct sum of the contributions made by each variable:
Max (or Min) Z = c1X1 + c2X2 + … + cnXn
Subject to constraints,
a11X1 + a12X2 + … + a1nXn (≤, =, ≥) b1
…
am1X1 + am2X2 + … + amnXn (≤, =, ≥) bm
and X1, X2, …, Xn ≥ 0
iii) Each toy must undergo two processes, i.e., grinding and polishing, so the corresponding constraints would be
4X1 + 2X2 ≤ 80 (for grinding)
The number of hours available on the grinding machine is 40 hrs per week per grinder, so the total hours for two grinders would be 80 hrs (40 × 2). Similarly, the other constraint would be
2X1 + 5X2 ≤ 180 (for polishing)
iv) By non-negativity condition
X1, X2 ≥ 0
Hence Final LPP is
Max Z = 3X1 + 4X2
Subject to constraints,
4X1 + 2X2 ≤ 80
2X1 + 5X2 ≤ 180
X1, X2 ≥ 0
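The small model above can also be checked numerically. The sketch below is added for illustration (it is not part of the original text); it solves the final LPP with scipy.optimize.linprog, which minimizes by convention, so the profit coefficients are entered with negative signs.

from scipy.optimize import linprog

c = [-3, -4]                       # maximize 3X1 + 4X2  ->  minimize -3X1 - 4X2
A_ub = [[4, 2],                    # grinding hours
        [2, 5]]                    # polishing hours
b_ub = [80, 180]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)], method="highs")
print(res.x, -res.fun)             # expected: X1 = 2.5, X2 = 35, Z = 147.5

The same pattern works for any of the two-variable formulations in this lesson.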
1.3.2 Investment Model
Example 2.
Mr. Joshi received an amount of Rs. 30,000 on his retirement, which he wishes to invest in some source from which he can get a fixed income. From a friend he came to know about two types of security bonds which yield a fixed income per annum. Bond A generates 7% per annum, whereas the corresponding value for Bond B is 10%. Due to the risk factors involved and the feedback received from others, he decides to invest at most Rs. 14,000 in Bond B and at least Rs. 4,000 in Bond A. He also wishes
that the amount invested in Bond A should not be less than the amount invested in Bond
B. Formulate the LPP model for helping Mr. Joshi to generate maximum annual return
from his retirement fund.
Solution
Let X1 and X2 be the amount invested in Bonds A and B respectively. Income
generated from two Bonds are given. Hence the objective function is to maximize the
income:
Max Z = 0.07X1 + 0.1X2
Subject to:
X1 + X2 ≤ 30,000
X1 ≥ 4,000
X2 ≤ 14,000
X1 ≥ X2
X1, X2 ≥ 0
Example 3
A farmer uses two types of pesticides, liquid and dry, for his fields. The liquid pesticide contains 6 units of chemical A, 3 units of chemical B and 1 unit of chemical C per jar. The respective values for the dry pesticide are 1, 2 and 4 units per carton. For healthy crops the minimum requirements of chemicals A, B and C are 10, 12 and 12 units respectively. The liquid pesticide is available in the market for Rs. 40 per jar; the respective value for the dry pesticide is Rs. 25 per carton. How many units of each pesticide should be bought in order to fulfil the requirements and keep costs down?
Solution
Let X1 and X2 be the number of units purchased of the liquid and dry pesticides respectively. The objective is to minimize the total cost:
Min Z = 40X1 + 25X2
Subject to:
6X1 + X2 ≥ 10
3X1 + 2X2 ≥ 12
X1 + 4X2 ≥ 12
X1, X2 ≥ 0
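As a companion to the previous sketch, the following hedged example (added for illustration) shows how a minimization model with "≥" constraints can be passed to scipy.optimize.linprog by multiplying those constraints by −1; the objective is the Min Z = 40X1 + 25X2 formulated above.

from scipy.optimize import linprog

c = [40, 25]                                   # cost per jar / per carton
A_ub = [[-6, -1], [-3, -2], [-1, -4]]          # negated chemical A, B, C requirements
b_ub = [-10, -12, -12]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2, method="highs")
print(res.x, res.fun)    # LP optimum is fractional (about 0.89 jars, 4.67 cartons,
                         # cost about Rs. 152.2); an integer restriction would change it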
Example 4
A sewing machine manufacturer purchases semi-finished cast parts and processes them to produce three different models: basic, standard and premium. The cast parts undergo three different processes, namely turning, milling and drilling. The selling price of the basic model is Rs. 500, for the standard model it is Rs. 600 and for the premium model it is Rs. 700. The demand for all the models is so large that all produced machines get sold. The costs of the cast parts are Rs. 200, Rs. 220 and Rs. 250 for the basic, standard and premium models respectively.
The cost per hour to run each of the three processes is Rs. 400 for turning, Rs. 375 for milling and Rs. 600 for drilling. The capacities of each process for each model are shown in the following table.
Solution:
Let X1, X2 and X3 be the number of basic, standard and premium models produced per hour respectively.
With the information given, the hourly profit for basic, standard,
and premium model would be as follows
Profit per unit of Basic model = (500–200) – (400/25 +375/25 +
600/40) = 254
Profit per unit of standard model = (600-220) – (400/40 + 375/15 +
600/30) = 325
Profit per unit of premium model = (700 – 250) – (400/25 + 375/15 +
600/10) = 349
Hence Objective Function is
Maximize Z = 254 X1 + 325X2 + 349X3
Subject to:
X1/25 + X2/40 + X3/25 ≤ 1
X1, X2, X3 ≥ 0
Example 5
Apollo hospital needs different number of nursing staff at different timings of day.
Each nurse has a duty of 8 hrs in a day and reports at the beginning of period. The
hospital management wants to formulate the plan that how many nurses should be
called to meet the daily needs. The following table summarizes the number of
nurses needed round the clock.
Period    Time                      Number of nurses needed
1         8 a.m. – 12 noon          3
2         12 noon – 4 p.m.          6
3         4 p.m. – 8 p.m.           14
4         8 p.m. – 12 midnight      6
5         12 midnight – 4 a.m.      10
6         4 a.m. – 8 a.m.           8
In order to have enough nurses available during each period, the hospital seeks to
determine the bare minimum number of nurses that should be employed. Formulate the
situation as a linear programming problem.
Solution
Let X1, X2, X3, X4, X5 and X6 be the number of nurses joining duty at the
beginning of periods 1, 2, 3, 4, 5 and 6 respectively.
Objective function is
Minimize Z = X1 + X2 + X3 + X4 + X5 + X6
Subject to
X1 + X2 ≥ 6
X2 + X3 ≥ 14
X3 + X4 ≥ 6
X4 + X5 ≥ 10
X5 + X6 ≥ 8
X6 + X1 ≥ 3
X1, X2, X3, X4, X5, X6 ≥ 0
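A hedged numerical check of this manpower model is sketched below with scipy.optimize.linprog (an added illustration, not part of the text). In practice the number of nurses must be an integer, so an integer-programming solver would normally be used, but for this particular data the LP optimum already comes out integral.

import numpy as np
from scipy.optimize import linprog

c = np.ones(6)                                  # minimize total number of nurses
pairs = [(0, 1, 6), (1, 2, 14), (2, 3, 6), (3, 4, 10), (4, 5, 8), (5, 0, 3)]
A_ub = np.zeros((6, 6))
b_ub = np.zeros(6)
for row, (i, j, need) in enumerate(pairs):
    A_ub[row, i] = A_ub[row, j] = -1            # -(Xi + Xj) <= -need
    b_ub[row] = -need

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 6, method="highs")
print(res.x, res.fun)                           # minimum total comes to 27 nurses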
Combination    Rolls of width 1    Rolls of width 2    Rolls of width 3    Trim loss
1              4                   0                   0                   0
2              2                   1                   0                   3
3              2                   0                   1                   1
4              1                   2                   0                   1
5              0                   1                   1                   4
6              0                   0                   2                   2
Let Xi , i=1,2…,6 be the number of rolls cut according to ith combination.
Objective function is
Minimize Z = 0X1 + 3X2 + 1X3 + 1X4 + 4X5 + 2X6
Subject to
4X1 + 2X2 + 2X3 + X4 ≥ 150
The objective is to minimize the total transportation cost
Minimize Z = ∑i ∑j Cij xij
where Cij is the cost of transportation from the ith source to the jth destination, subject to the constraints that the total quantity of goods transported from source i to the different destinations must not exceed the availability si of source i, that is
∑j xij ≤ si for i = 1, . . . , m (summing over j = 1, . . . , n),
and the total quantity of goods transported to the jth destination must be at least its demand dj, that is
∑i xij ≥ dj for j = 1, . . . , n (summing over i = 1, . . . , m).
The necessary and sufficient condition for the existence of a feasible solution is that
Total supply = total demand.
We thus define the transportation (or Hitchcock) problem as the following LP, where si ≥ 0, dj ≥ 0, cij ≥ 0 are given, with total supply equal to total demand. Hence, the linear programming problem is
Minimize Z = ∑i ∑j cij xij
subject to
∑j xij = si for each i = 1, . . . , m
∑i xij = dj for each j = 1, . . . , n
xij ≥ 0 for all i, j
Source\Destination    D1     D2     D3     Availability
S1                    C11    C12    C13    A1
S2                    C21    C22    C23    A2
S3                    C31    C32    C33    A3
S4                    C41    C42    C43    A4
Demand                d1     d2     d3
In the above table C12 shows the cost of transporting one unit from source 1 to destination 2, and similarly C31 shows the cost of transporting one unit from source 3 to destination 1.
Conversion of an Unbalanced Problem to a Balanced Problem
If total availability is not equal to total demand, then such a problem is known as an unbalanced problem. The very first condition for the problem to be solvable is that it must be balanced. So, let us take an example to see how an unbalanced problem can be converted into a balanced one.
Source\Destination    D1     D2     D3     Availability
S1                    C11    C12    C13    8
S2                    C21    C22    C23    10
S3                    C31    C32    C33    10
S4                    C41    C42    C43    12
Demand                10     10     5
Here total availability is 40 while total demand is only 25, so a dummy destination with demand 15 and zero transportation cost from every source is added to balance the problem.
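The padding step can also be automated. The snippet below is only an illustration: the unit costs Cij in the table above are not legible in this copy, so a placeholder zero matrix is used; only the supplies and demands are taken from the example.

import numpy as np

cost = np.zeros((4, 3))                    # placeholder Cij values for S1..S4, D1..D3
supply = np.array([8, 10, 10, 12])
demand = np.array([10, 10, 5])

surplus = supply.sum() - demand.sum()      # 15 surplus units
if surplus > 0:
    cost = np.hstack([cost, np.zeros((cost.shape[0], 1))])   # dummy column, zero cost
    demand = np.append(demand, surplus)

print(cost.shape, demand)                  # (4, 4) [10 10  5 15]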
To D1 D2 D3 D4 Capacity
From
S1 21 16 25 13 11
11
S2 17 18 14 23 13 9
9 4
S3 32 27 18 41 19 16 6
6 10 3
Demand 6 10 12 3 15 4 43
To D1 D2 D3 D4 Capacity
From
S1 21 16 25 13 11
11
S2 17 18 14 23 13 1
1 12
S3 32 27 18 41 19 9 4
5 10 4
Demand 6 5 10 12 15 4 43
• Repeat above 2 steps until all availabilities get exhausted and demands are
fulfilled.
Note:
• It is always advisable to use VAM for finding the initial basic feasible solution if the method is not specified. The reason is simple: VAM usually provides a solution that is very close to the optimal solution.
• If at any point before the end a row's supply and a column's demand are both satisfied simultaneously, then both will be crossed out and the next variable to be added to the basic solution will necessarily be at the zero level. Such a situation is known as degeneracy.
To D1 D2 D3 D4 Capacit Ui
y
From
S1 21 16 25 13 11 3
11
S2 17 18 14 23 13 9 3 3 3 4
6 3 4
S3 32 27 18 41 19 7 9 9 9
7 12
Demand 6 10 12 15 4 43
Vj 4 2 4 10
15 9 4 18
9 4
Optimal Solution
Once an initial basic feasible solution has been found, we move on to the techniques for finding the optimal solution: the Stepping Stone Method and the MODI Method.
Stepping Stone Method
• Determine the initial basic feasible solution by any of the three methods
defined above.
• Draw closed loop, starting from any non-basic cell. Loop should be
drawn with horizontal or vertical lines in such a way that corner should
come only at basic cells and loop should end at same non-basic cell
from where it was started. Mark alternatively (+) and (-) sign at the
corners of the loop traced, starting from non-basic cell respectively.
Calculate the transportation cost by adding or subtracting the cost of
each corner depending upon marked sign. This cost is identified as net
cost change between initial basic feasible solution and new solution.
Repeat the procedure to calculate net cost change corresponding to each
non-basic cell.
• If the net cost change corresponding to each non-basic cell is positive, then the initial basic feasible solution is optimal. Otherwise, select the non-basic cell with the most negative net cost change.
• Out of all (-) marked cells, select minimum allocation. Add this value at (+)
marked corners and subtract at (-) marked corners. This would be new basic
feasible solution. Again, calculate net cost change corresponding to each non-
basic cell and check for optimality.
• Repeat the above steps till the condition of optimality is satisfied.
MODI Method
• Determine the set of numbers Ui (for rows) and Vj (for columns) in such a way that for basic cells Ui + Vj − Cij = 0.
• Calculate the value of Ui + Vj − Cij (the opportunity cost) for each non-basic cell. If the opportunity cost of every non-basic cell is negative or zero, then the current solution is optimal. Otherwise, the non-basic cell with the highest opportunity cost must enter the basis and one of the basic cells will become non-basic.
• Repeat the above two steps till the condition of optimality is satisfied.
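Once an initial solution has been improved by the stepping-stone or MODI procedure, the answer can be cross-checked by solving the transportation LP directly. The sketch below (an added illustration, not part of the original text) uses scipy.optimize.linprog with the supply, demand and cost figures read from the (partly garbled) example tables above; adjust them if your copy differs.

import numpy as np
from scipy.optimize import linprog

cost = np.array([[21, 16, 25, 13],
                 [17, 18, 14, 23],
                 [32, 27, 18, 41]], dtype=float)
supply = np.array([11, 13, 19], dtype=float)
demand = np.array([6, 10, 12, 15], dtype=float)   # balanced: 43 = 43

m, n = cost.shape
A_eq = np.zeros((m + n, m * n))
for i in range(m):                 # supply rows: sum over j of x_ij = s_i
    A_eq[i, i * n:(i + 1) * n] = 1
for j in range(n):                 # demand rows: sum over i of x_ij = d_j
    A_eq[m + j, j::n] = 1
b_eq = np.concatenate([supply, demand])

res = linprog(cost.ravel(), A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * (m * n), method="highs")
print("Minimum cost:", res.fun)                 # 796 for this data, if read correctly
print("Shipments:\n", res.x.reshape(m, n))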
1.4.2 Assignment problem
It is a special case of transportation model where workers and jobs represent the
sources and destinations respectively. The supply (demand) at each source
(destination) is exactly equal to 1. The cost of transportation Cij represents the
wages given to worker i if jth job is assigned to him/her. Such problem can be
formulated as linear programming problem just as transportation problem as
explained in previous section. These problems can be solved using Hungarian
method. The basic assumptions of Hungarian Method are explained in next section.
Basic Assumptions:
• Only one job can be assigned to each worker and only one worker can be
assigned to each job.
• The number of jobs should be equal to the number of workers, i.e., the problem should be balanced.
• If the problem is unbalanced, it must first be balanced by adding dummy jobs or dummy workers with zero associated cost.
• Problem should be a minimization problem.
• If problem is a maximization problem, then convert it into minimization first.
Hungarian Method
• Check whether the problem is balanced or not, if not convert into balanced
problem.
• Check whether it is a minimization problem or not, if not convert it into a
minimization problem.
• Select the row minima from each row and subtract it from its corresponding row,
the technique is termed as row reduction method.
• Select the column minima from each column and subtract it from its
corresponding column, the technique is termed as column reduction method.
• Start the assignment by selecting zeros from the rows/columns in such a way that there is a single assignment in each row or column.
• If each row and column get assignment, then the current solution is optimal.
Calculate the cost of assignment by adding the corresponding values in original
matrix.
• If any row/column is left where there is no assignment then follow the
improvement schedule to improve the table and again do assignment.
Workers →    W1    W2    W3    W4
Job ↓
J1 6 4 8 6
J2 8 5 2 4
J3 9 4 7 3
In above problem we have 3 jobs but 4 workers, so to make it balanced add a
dummy job with all associated cost as zero.
Workers →    W1    W2    W3    W4
Job ↓
J1 6 4 8 6
J2 8 5 2 4
J3 9 4 7 3
J4 (Dummy) 0 0 0 0
Workers →    W1    W2    W3    W4
Job ↓
J1 3 5 1 3
J2 1 4 7 5
J3 0 5 2 6
J4 (Dummy) 9 9 9 9
After assignment the values of assigned positions would be added from the original
matrix to get the maximum profit.
Example:
ABC Transco has four trucks, namely A, B, C and D, and four sites. The numbers given in the following table show the distance in km associated with each pair of truck and site. Find the assignment schedule for this problem in order to minimize the total distance (in km) travelled.
Trucks A B C D
Sites
1 90 75 75 80
2 35 85 55 65
3 125 95 90 105
4 45 110 95 115
Solution: Subtract 75 from Row 1, 35 from Row 2, 90 from Row 3, and 45 from Row 4.
Trucks A B C D
Sites
1 15 0 0 5
2 0 50 20 30
3 35 5 0 15
4 0 65 50 70
Subtract 0 from Column 1, 0 from Column 2, 0 from Column 3, and 5 from Column 4.
Trucks →    A    B    C    D
Sites ↓
1 15 0 0 0
2 0 50 20 25
3 35 5 0 10
4 0 65 50 65
Start assignment
Trucks →    A    B    C    D
Sites ↓
1           15   0    0    0
2           0    50   20   25
3           35   5    0    10
4           0    65   50   65
Rows 2 and 4 both have their only zero in column A, so a complete assignment is not possible. Cover all zeros with the minimum number of lines (row 1, column A and column C) and find the minimum uncovered element.
The minimum uncovered element is 5. Subtract it from every uncovered element and add it at the intersections of the covering lines. Hence the new matrix is
Trucks →    A    B    C    D
Sites ↓
1           20   0    5    0
2           0    45   20   20
3           35   0    0    5
4           0    60   50   60
Again a complete assignment is not possible (rows 2 and 4 still share their only zero in column A). Covering the zeros once more (rows 1 and 3 and column A), the minimum uncovered element is now 20; repeating the step gives
Trucks →    A    B    C    D
Sites ↓
1           40   0    5    0
2           0    25   0    0
3           55   0    0    5
4           0    40   30   40
Now every row and column can receive a single assignment, for example Site 1 → Truck B, Site 2 → Truck D, Site 3 → Truck C and Site 4 → Truck A. Reading the distances from the original matrix, the minimum total distance is 75 + 65 + 90 + 45 = 275 km (other assignments with the same total also exist).
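For comparison, the same truck–site problem can be solved in one call with SciPy's built-in assignment routine; this short, added sketch reproduces the minimum total distance obtained above.

import numpy as np
from scipy.optimize import linear_sum_assignment

dist = np.array([[90, 75, 75, 80],     # Site 1 to trucks A, B, C, D
                 [35, 85, 55, 65],     # Site 2
                 [125, 95, 90, 105],   # Site 3
                 [45, 110, 95, 115]])  # Site 4

rows, cols = linear_sum_assignment(dist)
for r, c in zip(rows, cols):
    print(f"Site {r + 1} -> Truck {'ABCD'[c]}")
print("Total distance:", dist[rows, cols].sum())   # 275 km (ties exist, so the exact
                                                   # pairing reported may differ)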
• The algorithm maintains a record of the presently known shortest distance between
source and every other node, and it changes these values whenever it discovers a route
that is shorter than the previously known shortest distance.
• After the algorithm has determined the route that is the shortest between the source
node and another node, the algorithm updates label on the other node as either
"visited" or “permanent” from “unvisited” or “temporary” and adds it to the path. This
path initially contains only source node and one by one other nodes get added to it.
• The procedure is repeated until each node in the network has been included, i.e., it terminates when the label of every node becomes "visited" or "permanent". At the end we obtain, for every node, a shortest path connecting it to the source node.
Example: Consider the following graph with six nodes. The numbers written on the edges express the distances between the corresponding nodes. Use Dijkstra's algorithm to find the shortest distance of each node from node S.
Solution: Initially only S is labelled as "visited" and the remaining five nodes are labelled as "unvisited". On each node we also write a label consisting of its current distance and the name of the node from which that distance is measured. To maintain the record, let us write the steps in tabular form.
Step    Visited nodes    Labels of unvisited nodes
1       S                A (1, S); B (5, S)
Now, Nodes A and B can be traced from node S in 1 unit of distance and 5 units of distance
respectively. Since min(1,5)=1
25 | P a g e
Hence, node A would be traced from node S and node A is labelled as “visited”. Now path
becomes {S,A} at shortest distance of 1 unit.
2       S, A             B (1+2, A); C (1+2, A); D (1+1, A)
The label for node B could either be (5, S) or (1+2, A) = (3, A). Since the distance of node B via node A is smaller, its label becomes (3, A), not (5, S), which means the shortest distance of node B from S is 3 units via node A. Similarly, labels for the rest of the nodes are written. At this point node D has the least distance among the unvisited nodes, so the label of node D is updated to visited.
3       S, A, D          B (1+2, A); C (1+2, A); E (∞, S)
Since node B and C can be visited from node A at the distance of unit 3, which is minimum
too amongst all unvisited nodes, therefore both become visited now.
Node E can be reached either from node C or from node D, as both path lengths are the same, i.e., 4 units. Hence the shortest paths of each node from node S are
S to A, 1 unit with path SA
S to B, 3 units with path SAB
S to C, 3 units with path SAC
S to D, 2 units with path SAD
S to E, 4 units with path SACE or SADE
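The labelling procedure above can be implemented compactly with a priority queue. In the sketch below (added for illustration), the edge list is reconstructed from the labels used in the worked example, since the original figure is not reproduced here; treat the edge data as an assumption.

import heapq

edges = [("S", "A", 1), ("S", "B", 5), ("A", "B", 2), ("A", "C", 2),
         ("A", "D", 1), ("C", "E", 1), ("D", "E", 2)]

graph = {}
for u, v, w in edges:                      # undirected network
    graph.setdefault(u, []).append((v, w))
    graph.setdefault(v, []).append((u, w))

def dijkstra(source):
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                       # stale queue entry, skip it
        for v, w in graph[u]:
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return dist

print(dijkstra("S"))   # expected: S 0, A 1, D 2, B 3, C 3, E 4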
1.4.4 Maximal Flow problem
The objective of the maximal flow problem is to find the greatest amount of flow that can be sent through a network of pipelines, channels or other passageways while the capacity limitations are taken into account. This is one of the main problems handled via graph theory. The problem can be used to model a broad variety of real-world situations, including resource distribution, communication networks and transportation systems, to name a few.
In the maximum flow problem, we have a directed graph with a source node S and a sink
node T, and each edge has a capacity that symbolizes the maximum amount of flow that
can be sent through it. In other words, the maximum amount of flow that can be sent
through an edge is called its capacity. The objective here is to determine the greatest
quantity of flow that can be transmitted from point S to point T while still adhering to the
capacity limitations imposed by the edges. The most common algorithm for solving
maximal flow problem is Ford-Fulkerson algorithm.
Ford-Fulkerson algorithm
This algorithm is based on finding the flow- augmenting path, residual capacity of each
edge from source to sink and bottleneck capacity of augmenting path.
Residual Capacity – The residual capacity of a directed edge is defined as the remaining capacity of the edge, i.e., the original capacity of the edge minus the current flow through the edge. If there is a flow along a directed edge u → v, then the reversed edge initially has capacity 0 and the flows are related by
f(v, u) = −f(u, v)
Residual Graph – The graph in which the residual capacities, instead of the original capacities, are written on the edges.
Augmenting Path – A path from source to sink in the residual graph along which every edge has positive residual capacity is called an augmenting path.
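The following short sketch (an added illustration) implements the idea just described: repeatedly find an augmenting path by breadth-first search, take its bottleneck residual capacity, and update the residual graph. The capacity data at the bottom is hypothetical and is not the network of the example that follows.

from collections import deque

def max_flow(edge_list, source, sink):
    # Build the residual graph: forward edges carry their capacity,
    # reverse edges start with zero residual capacity.
    residual = {}
    for u, v, cap in edge_list:
        residual.setdefault(u, {})
        residual.setdefault(v, {})
        residual[u][v] = residual[u].get(v, 0) + cap
        residual[v].setdefault(u, 0)
    total = 0
    while True:
        # Breadth-first search for an augmenting path in the residual graph.
        parent = {source: None}
        queue = deque([source])
        while queue and sink not in parent:
            u = queue.popleft()
            for v, cap in residual[u].items():
                if cap > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if sink not in parent:
            return total               # no augmenting path left: current flow is maximal
        # Bottleneck capacity of the path that was found.
        path_cap, v = float("inf"), sink
        while parent[v] is not None:
            path_cap = min(path_cap, residual[parent[v]][v])
            v = parent[v]
        # Augment: decrease residual capacity forward, increase it backward.
        v = sink
        while parent[v] is not None:
            residual[parent[v]][v] -= path_cap
            residual[v][parent[v]] += path_cap
            v = parent[v]
        total += path_cap

# Hypothetical capacities (u, v, capacity); NOT the network of the example below.
edges = [("S", "A", 7), ("S", "D", 4), ("A", "B", 5), ("A", "C", 3),
         ("D", "A", 2), ("D", "E", 5), ("B", "T", 8), ("C", "T", 3), ("E", "T", 4)]
print(max_flow(edges, "S", "T"))       # prints 11 for this hypothetical network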
Example: Consider the following network, where the numbers given on arcs are
capacities of corresponding edges. Find the maximal flow from source to sink
Solution: Firstly, redraw the network, taking initial flow as zero for all edges.
Find an augmenting path from source to sink. Let the path be S-A-B-T. The residual
capacities of edges are 7,5 and 8. The bottleneck capacity of this path is 5. Hence update
the flow on this path by 5 units. Now the new flow is as follows:
Next augmenting path would be S-D-A-C-T and its bottleneck capacity is 2. Hence new flow
at this path will be increased by 2.
Next possible augmenting path is S-A-C-T, with bottleneck capacity 1. So flow at this path
would be incremented by 1.
Now, no more augmenting path left. Hence process terminates here. The maximal flow is
5+5=10
In- Text Questions
1. What is basic feasible solution? What are different methods of finding basic feasible
solution for a transportation problem?
2. Which method should be used for finding BFS and why?
3. Can transportation problem be solved by LPP?
4. Differentiate between Stepping Stone and MODI Method. Which one is more
effective and why?
5. Is there any relationship between transportation problem and assignment problem? If
yes, explain
6. What is the basic nature of transportation problem, maximization or minimization?
Can both type of problems be solved using transportation problem?
7. True or False?
(a) To balance a transportation model, it may be necessary to add both a dummy
source and a dummy destination.
(b) The amounts shipped to a dummy destination represent surplus at the shipping
source.
(c) The amounts shipped from a dummy source represent shortages at the receiving
destinations.
8. In each of the following cases, determine whether a dummy source or a dummy
destination must be added to balance the model.
(a) Supply: a1 = 10, a2 = 5, a3 = 4, a4 = 6
Demand: b1 = 10, b2 = 5, b3 = 7, b4 = 9
(b) Supply: a1 = 30, a2 = 44
Demand: b1 = 25, b2 = 30, b3 = 10
1.5 SUMMARY
In this chapter the concept of the linear programming problem is explained with the help of real-life problems. Some real-life situations such as the production model, investment model, cost minimization model, manpower scheduling model and paper trim-loss problem are formulated as linear programming problems. The basic essential elements of LPP, the assumptions of LPP and the conditions under which LPP can be used are explained in detail. In the next section the transportation problem and assignment problem are explained as distribution models. All the methods of finding a Basic Feasible Solution (BFS) are explained in detail, the transportation problem is formulated as an LPP, and the method of finding the optimal solution is explained. The Hungarian method is explained for finding the optimal solution of the assignment problem. All techniques are illustrated with examples.
1.6 GLOSSARY
• Linear Programming Problem- The mathematical model of some real life situation
consisting of linear objective function, linear set of constraints along with non-
negativity condition is called Linear Programming Problem.
• Decision Variable- X1, X2, X3, etc., are used to denote the activities. These are referred
to as decision variables.
• Integer Programming Problem- A mathematical model, where decision variables can
assume only integral values is called Integer Programming Problem.
Section 1.3
Ans 1: A Linear Programming model essentially consists of three components:
i) the linear objective function,
ii) the set of linear constraints, and
iii) non-negativity of the decision variables.
Ans 2: Min Z = − Max (−Z); a maximization problem can be converted into a minimization problem by multiplying the objective function by −1.
Ans 3: Because
1. A large variety of problems in diverse fields can be represented as linear programming models.
2. Efficient and simple techniques are available for solving linear programming problems.
3. Data variation can be handled through linear programming models with ease.
Ans 4: No. In an LPP the objective function and the set of constraints must be linear.
Section 1.4
Ans 1: A set of values satisfying all the constraints, with some positive allocations and the rest zero, is called a basic feasible solution. The different methods are the North-West Corner Method, the Least Cost Method and Vogel's Approximation Method (VAM).
Q1. A company produces two types of dolls, regular doll and premium doll. The sales volume
for regular doll is at least 80% of the total sales of both the dolls. However, the company
cannot sell more than 100 units of regular dolls per day. Both dolls use one special rubber, of
which the maximum daily availability is 240 pounds. Regular doll consumes 2 pounds per
unit whereas premium doll uses 4 pounds per unit of this special rubber. Company earns
profit of $20 and $50, from regular doll and premium doll respectively. Formulate the given
condition as a linear programming problem to determine the optimal product mix for the
company.
Q2. Alumco manufactures aluminium sheets and aluminium bars. The maximum production
capacity is estimated at either 800 sheets or 600 bars per day. The maximum daily demand is
550 sheets and 580 bars. The profit per ton is $40 per sheet and $35 per bar. Formulate the
given condition as a linear programming problem to determine the optimal daily production
mix.
Q3. An investor wishes to invest $10,000 over the next year in two types of bonds. Bond A
yields 7% and bond B yields 11%. Past experiences suggests an allocation of at least 25% in
A and at most 50% in B. Moreover, investment in A should be at least half the investment in
B. Formulate the given conditions as a linear programming problem to help the investor decide how the funds should be allocated between the two bonds.
Q4. The Standard paper company produces paper rolls with a standard width of 20 m each.
Special customer orders with different widths are produced by splitting the standard rolls.
The typical order received on one day is summarized as follows:
Q6. Crompton has three factories – X, Y and Z. It supplies its products to four distributors located in different states. The production capacities of these factories are 200, 500 and 300 units per month respectively. The demands of the distributors are given in the table; the values written in the table are the net returns per unit corresponding to each factory–distributor pair.
Determine a suitable allocation to maximize the total net return. Also find the conditional solution, i.e., when X cannot transport to C and Z cannot transport to B.
Q8. Find the shortest route from source to all other nodes for the following graph.
1.9 REFERENCES
LESSON 2
MULTICRITERIA DECISION MODELS
STRUCTURE
The steps we have taken in the model formulation can be briefly summarized as
follows:
1. Define Variables and Constants
2. Formulate Constraints
3. Develop the Objective Function
➢ Define Variables and Constants: The first step of model formulation is the definition of the probable (choice) variables and the right-hand-side constants. The right-hand-side constants may be either available resources or specified goal limit values. This requires a careful analysis of the problem in order to identify all significant variables that have some effect on the set of goals (objectives) specified by the decision maker.
➢ Develop the Objective Function: The objective function must be developed through an analysis of the decision maker's goal structure. First, the preemptive priority factors should be assigned to the deviational variables that are relevant to goal attainment. Second, if necessary, differential weights must be assigned to deviational variables at the same priority level. It is imperative that goals at the same priority level be commensurable.
Now we illustrate GP modelling through an example showing how to formulate a model. This will help to clarify the main differences between GP and LP.
iv. Because high overhead costs result when the plant is kept open past normal hours, the company would like to minimize the amount of overtime.
These several aims are referred to as goals in the context of the GP technique. The company would naturally like to come as close as possible to attaining each of these targets. Because the usual form of the LP model considers only one objective, we must create an alternative form of the model to represent these multiple goals.
Now, the first step in formulating a GP model is to convert the LP model
constraints into objectives (goals).
The different aims in a GP problem are referred to as goals (objectives).
The first goal of the pottery company is to avoid underutilization of labour, that is, using less than 40 hours of labour each day. To represent the possibility of underutilizing labour, the LP constraint for labour, x1 + 2x2 ≤ 40 hours of labour, is rewritten as
x1 + 2x2 + d1− − d1+ = 40
Because only 25 hours were used in manufacture, labour was underutilized by 15 hours (40 − 25 = 15). Thus, if we suppose d1− = 15 hours and d1+ = 0 (because no overtime exists), then we have
25 + d1− − d1+ = 40
25 + 15 − 0 = 40
40 = 40
A positive deviational variable d1+ is the quantity by which the goal level of 40 hours is exceeded, i.e., overtime.
The next step in formulating our GP model is to represent the goal of not using less than 40 hours of labour. We do this by creating a new form of objective function:
Minimize P1 d1−
The objective function in all GP models is to minimize the deviations from the goal constraint levels. In this objective function, the goal is to minimize d1−, the underutilization of labour. If d1− = 0, then we would not be using less than 40 hours of labour. Thus, our aim is to make d1− equal to zero, or the minimum quantity possible.
The symbol P1 in the objective function designates the minimization of d1− as the first-priority goal. This indicates that when this model is solved, the first step will be to minimize the value of d1− before any other goal is considered.
In a GP model, the objective function seeks to minimize the deviations from the targets in order of the goal priorities.
The fourth-priority goal in this problem is also related to the labour constraint. The fourth goal, denoted by P4, is to minimize overtime. Remember that hours of overtime are represented by d1+; therefore, the objective function becomes
Minimize P1 d1−, P4 d1+
As before, the objective is to minimize the deviational variable d1+. If d1+ = 0, there would be no overtime at all. In solving this model, this fourth-level goal will not be attempted until goals one, two and three have been considered.
2.3.3. CONCEPT OF PROFIT GOAL
In our GP model, the second goal is to obtain a daily profit of $1,600. Remember
that the original LP objective function was defined as
Z = 40 x1 + 50 x2
Now we redefine this objective function as a goal constraint with the target level as follows:
40x1 + 50x2 + d2− − d2+ = $1,600
The deviational variables d2− and d2+ represent the amount of profit less than $1,600 (d2−) and the amount of profit exceeding $1,600 (d2+), respectively. The pottery company's goal of reaching $1,600 in profit is represented in the objective function as
Minimize P1 d1−, P2 d2−, P4 d1+
Notice that only d2− is being minimized, not d2+, since it is reasonable to assume that the pottery company would accept any profit in excess of $1,600 (i.e., it does not need to minimize d2+, the extra profit). By minimizing d2− at the second-priority level, the pottery company hopes that d2− will equal zero, which will result in at least $1,600 in profit.
2.3.4. CONCEPT OF MATERIAL GOAL
The third goal of the company is to avoid keeping more than 120 pounds of clay on hand each day. The goal constraint is
4x1 + 3x2 + d3− − d3+ = 120
Since the deviational variable d3− represents the amount of clay less than 120 pounds, and d3+ represents the amount in excess of 120 pounds, this goal is reflected in the objective function as
Minimize P1 d1−, P2 d2−, P3 d3+, P4 d1+
The term P3 d3+ represents the company's requirement to minimize d3+, the quantity of clay in excess of 120 pounds. P3 indicates that this is the company's third most important goal.
The whole GP model can now be defined symbolically as follows:
Minimize P1 d1−, P2 d2−, P3 d3+, P4 d1+
subject to
x1 + 2 x2 + d1− − d1+ = 40
40 x1 + 50 x2 + d 2− − d 2+ = 1600
4 x1 + 3 x2 + d3− − d3+ = 120
x1, x2, d1−, d1+, d2−, d2+, d3−, d3+ ≥ 0
The key difference between this model and the standard LP model is that the objective function terms are not summed to equal a total value, Z. The reason is that the deviational variables in the objective function represent different units of measure. For instance, d1− and d1+ represent hours of labour, d2− represents dollars, and d3+ represents pounds of clay. It would be meaningless to sum hours, dollars and pounds. The objective function in a GP model specifies only that the deviations from the goals represented in the objective function be minimized individually, in order of their priority.
Since the deviational variables often have different units of measure, the terms are logically not summed in the objective function.
Now suppose we want to modify the prior GP model so that our fourth-priority goal limits overtime to 10 hours instead of minimizing overtime. Remember that the goal constraint for labour is
x1 + 2x2 + d1− − d1+ = 40
where d1+ represents overtime. Since the new fourth-priority goal is to limit overtime to 10 hours, the new goal constraint is
d1+ + d4− − d4+ = 10
Next, consider the inclusion of a fifth-priority goal in this example. Assume that the pottery company has limited warehouse space, so it can manufacture no more than 30 bowls and 20 mugs daily. If possible, the company would like to manufacture exactly these amounts. However, because the profit for mugs is greater than the profit for bowls (i.e., $50 rather than $40), it is more important to reach the goal for mugs. This fifth goal requires that two new goal constraints be formed, as follows:
x1 + d5− = 30 bowls
x2 + d6− = 20 mugs
Notice that the positive deviational variables d5+ and d6+ have been omitted from these goal constraints. The reason is that the statement of the fifth goal specifies that "no more than 30 bowls and 20 mugs" can be produced, so positive deviation, or overproduction, is not possible.
Since the real goal of the company is to reach the production levels shown in these two goal constraints, the negative deviational variables d5− and d6− are minimized in the objective function. However, remember that it is more important to the company to reach the goal for mugs, because mugs yield more profit. This situation is reflected in the objective function as follows:
Minimize P1 d1−, P2 d2−, P3 d3+, P4 d4+, 4P5 d5− + 5P5 d6−
Since the goal for mugs is more important than the goal for bowls, the level of importance should be in proportion to the profits (i.e., $50 for each mug and $40 for each bowl). Therefore, the goal for mugs is more important than the goal for bowls in the ratio of 5 to 4.
Here the coefficients 5 and 4 are referred to as weights for P5 d6− and P5 d5−, respectively. Thus, at the fifth priority level, the minimization of d6− is weighted more heavily than the minimization of d5−. When this model is solved, the attainment of the goal for minimizing d6− (mugs) is more important, although both goals are at the same priority level.
At the same priority level, two or more goals can be assigned weights to specify their relative importance.
Notice that these two weighted goals have been summed because both are at the same priority level. At this single priority level, their sum represents achievement of the desired goal. The complete GP model, with the new goals for both overtime and production, is written as:
Minimize P1 d1−, P2 d2−, P3 d3+, P4 d4+, 4P5 d5− + 5P5 d6−
subject to
x1 + 2 x2 + d1− − d1+ = 40
40 x1 + 50 x2 + d 2− − d 2+ = 1600
4 x1 + 3x2 + d3− − d3+ = 120
d1+ + d4− − d4+ = 10
x1 + d5− = 30
x2 + d6− = 20
x1, x2, d1−, d1+, d2−, d2+, d3−, d3+, d4−, d4+, d5−, d6− ≥ 0
Only those linear GP problems which involve two decision variables can be solved by the graphical method. This method is quite similar to the graphical method of LP. In LP the graphical method is used to optimize an objective function with a single goal, whereas in GP it is used to minimize the deviations from a set of multiple goals. Here the deviation from the goal of highest priority is minimized as much as possible, and then the deviations from the other goals are minimized in order of priority, in such a way that the achievements of the higher-order goals are not affected. The following procedural steps are employed after the problem has been formulated.
Step-1: Plot all structural constraints and identify the feasible region. In case no structural constraints exist, the feasible region is the area where both x1 and x2 are non-negative.
Minimize P1 d1−, P2 d2−, P3 d3+, P4 d1+
subject to
x1 + 2 x2 + d1− − d1+ = 40
40 x1 + 50 x2 + d 2− − d 2+ = 1600
4 x1 + 3 x2 + d3− − d3+ = 120
x1, x2, d1−, d1+, d2−, d2+, d3−, d3+ ≥ 0
To graph this model, the deviational variables in each goal constraint are set equal to zero,
and we graph each subsequent equation on a set of coordinates. Here, Figure-1 is a graph of
the three goal constraints for this model.
Notice that in Figure-1, there is no feasible solution space indicated, as in a regular LP model.
This is because all three goal constraints are equations; thus, all solution points are on the
constraint lines.
The solution logic in a GP model is to try to attain the goals in the objective function, in order
of their priorities. As a goal is achieved, the next highest-ranked goal is then considered.
However, a higher-ranked goal that has been achieved is never given up in order to achieve a
lower-ranked goal.
In this example we first consider the first-priority goal, minimizing d1−. The relationship
of d1− and d1+ to the goal constraint is shown in Figure-2. The area below the goal constraint
line x1 + 2 x2 = 40 represents possible values for d1− , and the area above the line represents
values for d1+ . In order to achieve the goal of minimizing d1− , the area below the constraint
line corresponding to d1− is eliminated, leaving the shaded area as a possible solution area.
Next, we consider the second-priority goal, minimizing d2−. In Figure-3, the area below
the constraint line 40x1 + 50x2 = 1,600 represents the values for d 2− , and the area above the
line represents the values for d 2+ . To minimize d 2− , the area below the constraint line
corresponding to d 2− is eliminated. Notice that by eliminating the area for d 2− , we do not
affect the first-priority goal of minimizing d1− .
One goal is never achieved at the expense of another higher-priority goal.
Next, we consider the third-priority goal, minimizing d3+. Figure-4 shows the areas
corresponding to d3− and d3+ . To minimize d3+ , the area above the constraint line
4 x1 + 3x2 = 120 is eliminated. After considering the first three goals, we are left with the area
between the line segments AC and BC, which contains possible solution points that satisfy
the first three goals.
Finally, we consider the fourth-priority goal, minimizing d1+. To achieve this final
goal, the area above the constraint line x1 + 2 x2 = 40 must be eliminated. However, if we
eliminate this area, then both d 2− and d3− must take on values. In other words, we cannot
minimize d1+ totally without violating the first- and second-priority goals. Therefore, we
want to find a solution point that satisfies the first three goals but achieves as much of the
fourth-priority goal as possible.
Point C in Figure-5 is a solution that satisfies these conditions. Notice that if we move down
the goal constraint line 4 x1 + 3x2 = 120 toward point D, d1+ is further minimized; however,
d 2− takes on a value as we move past point C. Thus, the minimization of d1+ would be
accomplished only at the expense of a higher-ranked goal.
The solution at point C is determined by simultaneously solving the two equations that
intersect at this point. Doing so results in the following solution:
x1 = 15 bowls
x2 = 20 mugs
d1+ = 15 hours
Because the deviational variables d1−, d2− and d3+ all equal zero, they have been minimized and the first three goals have been achieved. Because d1+ = 15 hours of overtime, the fourth-priority goal has not been achieved. The solution to a GP model such as this one is referred to as the most satisfactory solution rather than the optimal solution, because it satisfies the specified goals as well as possible.
GP solutions do not always achieve every goal, and they are not optimal in the LP sense; they attain the most satisfactory solution possible.
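The preemptive logic described above can also be carried out numerically by solving a sequence of ordinary LPs, one priority at a time, each time freezing the level already achieved by the higher priorities. The sketch below is added for illustration; it stands in for, and is not identical to, the modified simplex method presented next, and is applied to the four-goal pottery model.

import numpy as np
from scipy.optimize import linprog

# Variable order: [x1, x2, d1m, d1p, d2m, d2p, d3m, d3p], where "m"/"p" denote the
# negative/positive deviational variables of each goal constraint.
A_eq = np.array([
    [1, 2, 1, -1, 0, 0, 0, 0],      # x1 + 2x2 + d1m - d1p = 40   (labour)
    [40, 50, 0, 0, 1, -1, 0, 0],    # 40x1 + 50x2 + d2m - d2p = 1600 (profit)
    [4, 3, 0, 0, 0, 0, 1, -1],      # 4x1 + 3x2 + d3m - d3p = 120 (clay)
], dtype=float)
b_eq = np.array([40.0, 1600.0, 120.0])

priorities = [2, 4, 7, 3]           # P1: d1m, P2: d2m, P3: d3p, P4: d1p

A_ub, b_ub = [], []                 # "do not worsen higher goals" constraints
for idx in priorities:
    c = np.zeros(8)
    c[idx] = 1.0                    # minimize only this priority's deviation
    res = linprog(c, A_ub=np.array(A_ub) if A_ub else None,
                  b_ub=np.array(b_ub) if b_ub else None,
                  A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * 8, method="highs")
    A_ub.append(c.copy())           # freeze the achieved level before the next priority
    b_ub.append(res.fun + 1e-9)

print(f"x1 = {res.x[0]:.1f} bowls, x2 = {res.x[1]:.1f} mugs, overtime = {res.x[3]:.1f} h")
# Expected (matching the graphical solution): x1 = 15, x2 = 20, d1+ = 15.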
The simplex method for solving a GP problem is similar to that for an LP problem, in a modified form. In this section we demonstrate how the algorithm can be modified to solve a GP model. The method of solving a GP problem by the modified simplex method is as follows:
Step-1. Formulation of Initial Table: Construct the initial simplex table in the same way as
for LP problems with the coefficients of the associated variables (decision variables and the
deviational variables) placed in the appropriate columns. Now put a thick horizontal line
below these entries and write the pre-emptive priority goals P1 , P2 ,......, in xB column, starting
from the bottom to the top i.e., first (top) priority P1 is written at the bottom and the least
priority is written at the top.
Step-2. In a GP problem there is no profit maximization or cost minimization in the objective function. Here we minimize the unattained portions of the goals as far as possible, by minimizing the deviational variables through the use of certain pre-emptive priority factors and the different weights attached to the deviational variables in the objective function. So the pre-emptive priority factors, with the weights attached to the deviational variables in the objective function Z, will represent the cj values. Write the cj-row at the top of the table.
Step-3. Test of Optimality: Compute the values of Z j and c j − Z j separately for each of the
ranked goals P1 , P2 ,....... It is because the different goals are measured in different units. Z j
and c j − Z j are computed in the same manner as in the usual simplex method of LP
problems.
Thus, cj − Zj = cj − (cB column)ᵀ · (jth column) and Z = (cB column)ᵀ · (xB column).
If at least one of these entries in the P1 row is negative and there is no zero in the P1 row of the xB column, then goal P1 is not achieved and can be improved further; in this case proceed to the next step.
Step-4. To find the Entering Vector (or Variable): the variable in the column corresponding
to the largest negative c1 − Z1 value (smallest element) in the P1 row is selected as the
entering variable (or vector). In case of tie, check the next lower priority level. The column
corresponding to the smallest element (largest negative element) in the lower priority row,
out of the columns in which there is a tie in c1 − Z1 row, is selected as key column (i.e.,
incoming variable or vector).
To Find the Outgoing Vector (Or Variable) : The outgoing vector is selected as in usual simplex
method in LP problems. The variable in the row (known as key row), which corresponds to
the minimum non-negative value, obtained by dividing the values in the xB column by the
corresponding positive elements (or values) in the key column, is taken as the outgoing
variable (or vector).
The element at the intersection of the key row and key column is called key-element.
Step-5. As in the usual simplex method, we reduce the key element to 1 and, with its help, all other elements in the key column are reduced to zero. Thus a new reduced matrix is obtained.
For this matrix again find the values of Z j or c j − Z j for each of the ranked goals
P1 , P2 ,....... Now again we check c1 − Z1 row for optimality. If all entries in this P1 row are
positive then the goal is achieved (Note that in this situation the values in xB column, in P1
row will be zero to show that this goal is fully achieved).
If at least one entry in the P1 row is negative, then goal P1 is still not achieved. In this case repeat steps 4 and 5 again.
Step-6. If goal P1 is achieved, then proceed to achieve the next priority goal P2 in the same manner. The goal P2 cannot be improved (achieved) further from the present level if there is a positive entry in row P1 (the higher-priority goal) below the most negative entry in row P2.
Continue this process until the lowest priority goal (say Pi) is also achieved fully or to the nearest satisfaction. The goal Pi cannot be improved (achieved) further from the present level if there is a positive entry in the higher-priority rows P1, P2, … below the most negative entry in row Pi.
For clear understanding of the above method see the following illustrative example.
Example: A company manufactures two products, radios and transistors, which must be processed through the assembly and finishing departments. Assembly has 90 hours available; finishing can handle up to 72 hours of work. Manufacturing one radio requires 6 hours in assembly and 3 hours in finishing, while one transistor requires 3 hours in assembly and 6 hours in finishing. The profit is Rs. 120 per radio and Rs. 90 per transistor. The company has established the following goals and has assigned them priorities P1, P2, P3 (where P1 is the most important) as follows:
Priority Goal
P1 Produce to meet a radio goal of 13
P2 Reach a profit goal of Rs. 1950
P3 Produce to meet a transistor goal of 5.
Solution: Formulation of the GP problem: Firstly, the given information can be put in tabular form as follows:
Also let x1 and x2 denote the number of radios and transistors produced, respectively.
Solution of the GP Problem: Introducing the slack variables x3, x4, the above GP problem can be written as follows:
Minimize Z = P1 d2− + P2 d1− + P3 d3−
subject to 120x1 + 90x2 + d1− − d1+ = 1950
x1 + d2− − d2+ = 13
x2 + d3− − d3+ = 5
6x1 + 3x2 + x3 = 90
3x1 + 6x2 + x4 = 72
and x1, x2, x3, x4, d1−, d1+, d2−, d2+, d3−, d3+ ≥ 0.
Step-1. Formulation of the initial table: Now we formulate the starting (initial) table as
follows. (As explained in step-1 of 2.6).
cj 0 0 0 0 P2 0 P1 0 P3 0 Mini Ratio
B cB xB / x1
xB x1 x2 x3 x4 d1− d1+ d 2− d 2+ d3− d3+
Type
d 2− P1 13
1 0 0 0 0 0 1 -1 0 0 13/1 (Min)
→
d3− P3 5 0 1 0 0 0 0 0 0 1 -1 —
x3 0 90 6 3 1 0 0 0 0 0 0 0 90/6
x4 0 72 3 6 0 1 0 0 0 0 0 0 72/3
cj − Z j P3 5 0 1 0 0 0 0 0 0 0 1
P1 13 -1 0 0 0 0 0 0 1 0 0
Note that all entries in the columns corresponding to vectors in the basis are zero. So we may compute cj − Zj for the columns corresponding to non-basic variables only; the entries in columns corresponding to basic variables will be zero.
x1 is the incoming variable and by minimum ratio rule d 2− is the outgoing variable. Thus,
the key element is 1 ( = a21 ).
Step-5. Here, reducing all other elements in key column c1 equal to zero with the help of
key element, the next table is as follows:
cj 0 0 0 0 P2 0 P1 0 P3 0 Mini Ratio
B cB xB / d 2+
Type xB x1 x2 x3 x4 d1− d1+ d 2− d 2+ d3− d3+
x1 0 13 1 0 0 0 0 0 1 -1 0 0 —
d3− P3 5 0 1 0 0 0 0 0 0 1 -1 —
x3 0 12 0 3 1 0 0 0 -6 0 0 12/6 (Min)
6
→
x4 0 33 0 6 0 1 0 0 -3 3 0 0 33/3
cj − Z j P3 5 0 -1 0 0 0 0 0 0 0 1
P1 0 0 0 0 0 0 0 1 0 0 0
Here we again compute cj − Zj for the columns corresponding to non-basic variables only; all entries in the columns corresponding to basic variables will be zero.
The values of cj − Zj in the P1, P2, P3 rows may also be found easily as follows: after making the key element 1, use it to reduce to zero all entries in the P1, P2, P3 rows corresponding to the column of the key element.
Now we proceed to achieve goal P2, without affecting the achievement of the top priority goal P1.
Step-6. In the P2 row (in the above table) the most negative value is −120, in the column corresponding to the variable d2+. So this variable is taken as the entering variable. Now, by the minimum ratio rule, x3 in the 4th row is the outgoing vector. Thus 6 (= a48) is the key element. Dividing this 4th row by 6, we make the key element 1, and with its help we reduce all other elements in this d2+ column to zero.
cj 0 0 0 0 P2 0 P1 0 P3 0 Mini Ratio
B cB xB / d 2+
Type xB x1 x2 x3 x4 d1− d1+ d 2− d 2+ d3− d3+
=30
d3− P3 5 0 1 0 0 0 0 0 0 1 -1 5/1 = 5
d 2+ 0 2 0
1/2
1/6 0 0 0 -1 1 0 0 2/(1/2) = 4
(Min)
→
cj − Z j P3 5 0 -1 0 0 0 0 0 0 0 1
P2 150 0 -30 20 0 0 1 0 0 0 0
P1 0 0 0 0 0 0 0 1 0 0 0
Now we again compute cj − Zj for the columns corresponding to non-basic variables only; all entries in the columns corresponding to basic variables will be zero.
Again, in the P2 row, c2 − Z2 is negative, so this solution is not optimal from the P2 point of view. Now we take the variable x2 in the second column, corresponding to the most negative entry in the P2 row, as the entering variable; by the minimum ratio rule, d2+ in the fourth row is the outgoing variable, so the key element is 1/2 (= a42). Dividing the fourth row by 1/2 to make the key element 1, and then reducing all other elements in the second column to zero, we get the following reduced matrix.
cj 0 0 0 0 P2 0 P1 0 P3 0
B cB
Type xB x1 x2 x3 x4 d1− d1+ d 2− d 2+ d3− d3+
x1 0 13 1 0 0 0 0 0 1 -1 0 0
d3− P3 1 0 0 -1/3 0 0 0 2 -2 1 -1
d 2+ 0 4 0 1 1/3 0 0 0 -2 2 0 0
x4 0 9 0 0 -2 1 0 0 9 -9 0 0
cj − Z j P3 5 0 0 1/3 0 0 0 -2 2 0 1
P2 150 0 0 30 0 0 1 -60 60 0 0
P1 0 0 0 0 0 0 0 1 0 0 0
In the above table we note that there is a negative entry −60 in row P2. But P2 cannot be improved further, as there is a positive entry below this element in row P1 (the top priority row). Similarly, if we try to improve P3, it is also not possible, as there is a positive entry in row P1 below the negative entry in row P3.
Hence the solution is x1 = 13, x2 = 4, d1− = 30, d3− = 1, d1+ = 0 = d2− = d2+ = d3+, i.e., 13 radios and 4 transistors should be manufactured.
For this solution the first (top) priority goal P1 is fully achieved (13 radios), the second priority goal P2 is missed by Rs. 30 (here Profit = 120 × 13 + 90 × 4 = Rs. 1920, and Rs. 1950 − 1920 = Rs. 30), and the last priority goal P3 is missed by 1 transistor (here 5 − 4 = 1).
potential audiences of a one-minute ad of each type are shown in the table below. Leon Burnit must determine how many football ads and soap opera ads to purchase.
Ad HIM LIP HIW Cost
If we let
𝑥1 = # of minutes of ads shown during football games
𝑥2 = # of minutes of ads shown during soap operas
We can write the constraints of the problem as
7x1 + 3x2 ≥ 40
10x1 + 5x2 ≥ 60
5x1 + 4x2 ≥ 35
100x1 + 60x2 ≤ 600
x1, x2 ≥ 0
From the Figure-1, we find that no point that satisfies the budget constraint meets all three
of Priceler’s goals. Thus, the problem has no feasible solution. It is impossible to meet all of
Priceler’s goals, so Burnit might ask Priceler to identify, for each goal, a cost (per-unit short
of meeting each goal) that is incurred for failing to meet the goal.
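That infeasibility claim can be verified numerically. The brief sketch below (an added check, not part of the text) asks scipy.optimize.linprog for any point satisfying the three goals as hard constraints together with the budget, and the solver reports that none exists.

import numpy as np
from scipy.optimize import linprog

# Convert the ">=" goal constraints to "<=" by negating them; keep the budget as "<=".
A_ub = np.array([[-7, -3], [-10, -5], [-5, -4], [100, 60]], dtype=float)
b_ub = np.array([-40, -60, -35, 600], dtype=float)

res = linprog(c=[0, 0], A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2, method="highs")
print(res.status, res.message)   # status 2: the problem is infeasible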
Burnit can now formulate an LP that minimizes the cost incurred in deviating from Priceler's three goals. The trick is to transform each inequality constraint that represents one of Priceler's goals into an equality constraint. The cost-minimizing solution might under-satisfy or over-satisfy a given goal, so we need to define the following deviational variables:
Figure-2.1
➢ Each million exposures by which Priceler falls short of the HIW goal costs
Priceler a $50,000 penalty because of lost sales.
To find the best solution satisfying the above equations, we can write the following model
with the objective:
Z = 250; x1 = 6, x2 = 0,
d1+ = 2, d 2+ = d3+ = d1− = d 2− = 0 and d3− = 5.
Minimize Z = P1 d1− + P2 d2− + P3 d3−
subject to
7x1 + 3x2 + d1− − d1+ = 40
10x1 + 5x2 + d2− − d2+ = 60
5x1 + 4x2 + d3− − d3+ = 35
100x1 + 60x2 + s4 = 600
x1, x2, di−, di+, s4 ≥ 0 for all i
Preemptive goal programming problems can be solved by an extension of the simplex known
as the goal programming simplex. To prepare a problem for solution by the goal
programming simplex, we must compute n Row 0s (objective rows), with the i-th row
corresponding to goal i.
We thus have
d1− 0 7 3 -1 0 0 1 0 0 0 40
d 2− 0 10 5 0 -1 0 0 1 0 0 65
d3− 0 5 4 0 0 -1 0 0 1 35
s4 0 100 60 0 0 0 0 0 0 1 600
Z1 1 0 0 0 0 0 − P1 0 0 0 0
Z2 1 0 5 P2 10 P2 − P2 0 10 P2 0 0 0 20 P2
−
7 7 7 7
Z3 1 0 13P3 5 P3 0 − P3 5 P3 0 0 0 45 P3
−
7 7 7 7
x1 0 1 3 −1 0 0 1 0 0 0 40
7 7 7 7
d 2− 0 0 5 10 -1 0
−
10 1 0 0 20
7 7 7 7
d3− 0 0 13 5 0 -1
−
5 0 1 45
7 7 7 7
Z1 1 0 0 0 0 0 − P1 0 0 0 0
Z2 1 0 − P2 0 − P2 0 0 0 0 − P2 0
10
Z3 1 0 P3 0 0 − P3 0 0 0 − P3 5P3
20
x1 0 1 3 0 0 0 1 0 0 1 6
5 7 100
d 2− 0 0 -1 0 -1 0 0 1 0
−
1 0
10
d3− 0 0 1 0 0 -1 0 0 1
−
1 5
20
d1+ 0 0 6 1 0 0 -1 0 0 7 2
5 100
When a preemptive goal programming problem involves only two decision variables, the
optimal solution can be found graphically. For example, suppose HIW is the highest priority
goal, LIP is the second-highest, and HIM is the lowest. From the Figure, we find that the set
of points satisfying the highest-priority goal (HIW) and the budget constraint is bounded by
the triangle ABC.
Among these points, we now try to come as close as we can to satisfying the second-
highest-priority goal (LIP). Unfortunately, no point in triangle ABC satisfies the LIP goal.
We see from the figure, however, that among all points satisfying the highest-priority goal,
point C (C is where the HIW goal is exactly met and the budget constraint is binding) is the
unique point that comes the closest to satisfying the LIP goal.
Simultaneously solving the following equations, we find that point C (3, 5) is the solution
that satisfies both goals and closest to satisfying the LIP goal.
5 x1 + 4 x2 = 35
100 x1 + 60 x2 = 600
We can also use computer tools, such as the MS Excel Solver, to solve preemptive GP models.
GP has a close correspondence with decision making. As managers are constantly called upon to make decisions in order to solve problems, this technique is particularly relevant in the field. Business success relies on effective decision-making processes, and GP models can assist. In particular, assigned weights can express the intensity with which the goals are to be pursued.
2.9 SUMMARY
➢ The concepts of GP have been discussed and the differences between GP and LP have been brought out. The distinguishing feature of GP is its ability to use the ordinal principle of a preemptive priority structure for the goals of management, which may be incommensurable.
➢ The formulations of GP models with its steps have been covered with the
typical and comprehensive examples.
➢ In the graphical method of solving the GP problem, one problem was
formulated and solved graphically for a meaningful appreciation.
➢ The optimal solutions of GP problems by modified simplex method with its
steps have been covered with the typical and comprehensive examples.
1. What is GP?
problem.
6. Suppose a firm manufactures two products. Each product requires time in two
production departments: Product 1 requires 20 hours in department 1 and 10 hours in
department 2. Product 2 requires 10 hours in department 1 and 10 hours in department
2. Production time is limited in department 1 to 60 hours and in department 2 to 40
hours. Contribution to profits for the two products is Rs. 40 and Rs. 80 respectively.
Management has established the following goal priorities:
P1 (priority 1): To meet production goals of 2 units for each product.
P2 (priority 2): To maximize profits.
Formulate the problem.
Answer: Minimize Z = P1.d2− + P1.d3− + P2.d1−
8. In the problem given in question-6, the company sets the following two equally
ranked goals
(i) reach a profit goal of Rs. 1500
(ii) meet a production goal of 10 radios
Formulate the problem as a GP problem and solve by graphical as well as by modified
simplex method.
Since the net profit from the sale of Product A is twice the amount from that of
Product B, the manager has twice as much desire to achieve sales for Product A
as for Product B.
P3: He wants to minimize the overtime operation of the plant as much as
possible.
Solve this problem by Graphical Method of GP as well as by modified simplex
method.
Answer: Minimize Z = P1.d1− + P2.(2d2− + d3−) + P3.d1+
10. Consider a goal with constraint g(x1, x2, ……, xn) + d1− − d1+ = b1 and the term
3d1− + 2d1+ in the objective function; the decision-maker
(a) prefers g(x1, x2, ……, xn) ≤ b1 rather than g(x1, x2, ……, xn) ≥ b1
(b) prefers g(x1, x2, ……, xn) ≥ b1 rather than g(x1, x2, ……, xn) ≤ b1
(c) is not concerned with either g ≥ b1 or g ≤ b1
(d) none of these.
Answers To Objective Questions
1. 0   2. less   3. greater   4. highest; negative   5. di− ≥ 0
6. (b)   7. (b)   8. (d)   9. (c)   10. (b)
2.12 GLOSSARY
• GP: Goal Programming is an extension of Linear Programming in which targets are specified for a set of constraints.
1. Anderson, D., Sweeney, D., Williams, T., Martin, R.K. (2012). An introduction to
management science: quantitative approaches to decision making (13th ed.). Cengage
Learning.
2. Balakrishnan, N., Render, B., Stair, R. M., & Munson, C. (2017). Managerial decision
modeling. Upper Saddle River, Pearson Education.
3. Hillier, F.& Lieberman, G.J. (2014). Introduction to operations research (10th
ed.).McGraw-Hill Education.
4. Powell, S. G., & Baker, K. R. (2017). Business analytics: The art of modeling with
spreadsheets. Wiley.
5. Swarup, K., Gupta, P. K., & Mohan, M. (2012). Introduction to operations research (16th ed.). Sultan Chand & Sons.
6. Taha, H. A. (2017). Operations research: an introduction (10th ed., Global Edition). Pearson Education Ltd.
LESSON 3
WAITING LINE MODELS
Dr. Shubham Agarwal
Associate Professor
New Delhi Institute of Management
GGSIP University
[email protected]
STRUCTURE
3.2 INTRODUCTION
service, queuing theory is regarded as one of the standard approaches of operations research
and management science (along with linear programming, simulation, etc.).
The mathematical analysis of queues is known as queueing theory. The theory makes it possible to
mathematically analyse a number of connected processes, such as getting to the front of the line,
waiting in line, and receiving service. In order to reduce the average cost of using the queuing
system and the cost of service, the queuing model aims to determine the ideal service rate and
server count. Numerous further mathematical models for understanding and resolving issues with
waiting lines are provided by queuing theory.
The units requiring service enter the queuing system on their arrival and join a queue. The
queue represents the number of customers waiting for service. A queue is called finite if the
number of units in it is finite otherwise it is called infinite. Some of the basic elements of
queuing system are as follows:
• Input source of queue
• Queue discipline (Service discipline)
• Service mechanism (Service system)
• System output
The size of the population represents the total number of potential customers who will require service. It is described by the following factors:
i) According to source- The source of customers can be finite or infinite. For example, all the people of a city or state (and others) could be the potential customers at a supermarket. Since the number of such people is very large, the population can be taken to be infinite, whereas there are many situations in business and industrial conditions where we cannot consider the population to be infinite; it is finite.
ii) According to numbers- The customers may arrive for service individually or in
groups. Single arrivals are illustrated by patients visiting a doctor, students reaching at
a library counter etc. On the other hand, families visiting restaurants, ships
discharging cargo at a dock are examples of group or batch arrivals.
iii) According to time- Customers arrive in the system at a service facility according to
some known schedule or else they arrive randomly. Arrivals are considered random
when they are independent of one another and their occurrence cannot be predicted
exactly. The queuing models wherein customer’s arrival times are known with
certainty are categorized as deterministic models and are easier to handle. On the
other hand, a substantial majority of the queuing models are based on the premise that
the customers enter the system stochastically, at random points in time.
b) Pattern of arrivals at the system
Customers' arrival processes (or patterns) at the support system are divided into two groups:
static arrival processes and dynamic arrival processes.
i) In static arrival process, the control depends on the nature of arrival rate (random or
constant). Random arrivals are either at a constant rate or varying with time. Thus to
analyze the queuing system, it is necessary to attempt to describe the probability
distribution of arrivals. From such distributions we obtain average time between
successive arrivals, also called “inter-arrival time” (time between two consecutive
arrivals), and the average arrival rate (i.e. number of customers arriving per unit of
time at the service system).
ii) In dynamic arrival process, both the service centre and the customers have control.
By varying staffing levels at various service times, varying service fees at various
times, or allowing entrance with appointments, the service facility can adjust its
capacity to match changes in the intensity of demand.
Frequently in queuing problems, the number of arrivals per unit of time can be estimated by a
probability distribution known as the Poisson distribution, as it adequately supports many real
world situations.
The behaviour or attitude of the customers entering the queueing system is another factor to
take into account. Customers can be divided into two groups based on how patient or
impatient they are. A customer is described as patient if, upon entering the service system, he remains there until served, regardless of how long he must wait. In contrast, an impatient customer is one who waits in the queue for a predetermined amount of time before leaving due to factors like the length of the queue in front of him. Some interesting observations of
customer behavior in queues are as follows:
i) Balking- Some customers, even before joining the queue, get discouraged by seeing the number of customers already in the service system or by estimating the excessive waiting time for the desired service, and decide to return for service at a later time. This is known as balking.
ii) Reneging- Customers after joining the queue wait for sometime and leave the service
system due to intolerable delay, so they renege.
iii) Jockeying- Customers who switch from one queue to another hoping to receive
service more quickly are said to be jockeying.
iv) Collusion- Some customers may demand service on their own behalf as well as on behalf of others; this is known as collusion.
(a) Static queue disciplines are based on the individual customer's status in the queue. Few
of such disciplines are:
i) First-come-first-served (FCFS)- If the customers are served in the order of their
arrival, then this is known as the first-come-first-served (FCFS) service discipline.
ii) Last-come-first-served (LCFS)- Sometimes, the customers are serviced in the
reverse order of their entry so that the ones who join the last are served first, then this
is called last-come-first-served (LCFS) service discipline.
(b) Dynamic queue disciplines are based on the individual customer attributes in the queue.
Few of such disciplines are:
i) Service in Random Order (SIRO)- Under this rule customers are selected for
service at random, irrespective of their arrivals in the service system. In this every
customer in the queue is equally likely to be selected. The time of arrival of the
customers is, therefore, of no relevance in such a case.
iii) Priority Service- Under this rule customers are grouped in priority classes on the
basis of some attributes such as service time or urgency or according to some
identifiable characteristic to provide the service. The treatment of VIPs in preference
to other patients in a hospital is an example of priority service.
iv) Round Robin service- Every customer gets a time slice. If his service is not
completed, he will re-enter the queue.
3.3.3 Service mechanism (Service system):
The uncertainties involved in the service mechanism are the number of servers, the
number of customers getting served at any time, and the duration and mode of service.
Networks of queues consist of more than one server arranged in series and/or parallel.
Random variables are used to represent service times, and the number of servers, when
appropriate. If service is provided for customers in groups, their size can also be a random
variable. A service system has only a few components listed below:
• Configuration of the service system
• Speed of the service
• System capacity
i) Single Server – Single Queue- Single-server models are those where there is only one queue and one service station, and the customer waits until the service point is ready to accept him for servicing. A library counter, with students queuing at it, is an example of a single-server facility.
customers arrive → queue → service facility → customers leave
ii) Single Server – Several Queues- In this type of facility there are several queues and
the customer may join any one of these but there is only one service channel.
customers arrive → queues → service facility → customers leave
iii) Several (Parallel) Servers – Single Queue-This kind of strategy uses multiple servers,
each of which offers the same kind of service. When one of the service channels is prepared
to receive the customers in for servicing, they wait in a single line.
customers arrive → queue → service stations → customers leave
iv) Several Servers – Several Queues- This kind of model comprises a number of servers, each of which has its own queue. The various cash counters in an electricity office where customers can settle their electricity bills are an illustration of this type of model.
customers arrive → queues → service stations → customers leave
v) Service facilities in a series-In this, a customer approaches the first station, receives some
service there, moves to the next station, receives more service there, and then does it all over
again and so forth, until the user ultimately exits the system after receiving the full service.
For instance, the machining of a particular steel object might involve a succession of single
servers performing cutting, turning, knurling, drilling, grinding, and packaging operations.
customers arrive → queue → service mechanism → queue → service mechanism → customers leave
b) Speed of Service
In a queuing system, the speed with which service is provided can be expressed in either of
two ways as, service rate and as service time.
i) The service rate describes the average number of customers that can be
served per unit time. Service rate is denoted by µ.
ii) The service time indicates the amount of time needed to serve a customer.
Service time is the reciprocal of service rate, i.e, service time = 1/ µ.
E.g., if a cashier can attend to, on average, 10 customers in an hour, the service rate is expressed as 10 customers/hour and the service time equals 6 minutes/customer.
c) System capacity
In a queuing system, it is important to take into account how many consumers can wait at
once. If the waiting area is big, it can be assumed that it is practically infinite. But based on
our regular use of telephone networks, we know that the size of the buffer that receives our
call while we wait for a free line is crucial as well.
3.3.4 System output:
The rate at which consumers are served is referred to as system output. It depends on how long the
facility needs to provide the service and how the service facility is set up. In a single channel facility,
the queue's output is unimportant since the client leaves after obtaining the service. However, in a
multistage channel facility, the queue's output is crucial because the probability of a service station
breakdown can affect the queues. The queue prior to the breakdown will get longer, while the line
after the breakdown will get shorter.
IN-TEXT QUESTIONS
1. The objective of the queuing model is to find out the ___________ service rate
and the number of servers so that the average cost of being in the queuing
system and the cost of service are minimized.
The following are some symbols and terminology used in queuing models:
Theorem: If the arrivals are completely random, then the probability distribution of the number of arrivals in a fixed time interval follows a Poisson distribution.
Proof: To prove this theorem we shall make some assumptions which are as follows:
Let there are n units in the system at time t.
1) The probability of one arrival in small time interval Δt = λ.Δt
2) The probability of more than one arrival in the time interval Δt is zero because Δt is very
small.
3) The process has independent increments.
4) Pn(t) be the probability of n arrivals in time t.
There may be two cases:
Case-I: when n > 0, two events can occur, as shown below:
(i) n units at time t, no arrival in Δt → n units at time (t + Δt)
(ii) (n – 1) units at time t, one arrival in Δt → n units at time (t + Δt)
Let there are n units in the system at time t, no arrival takes place in time Δt. Hence there
remain n units in the system at time (t + Δt).
Probability of this event = (probability of n units in the system)
x (probability of no arrival in time Δt)
= Pn(t) . (1 – λ.Δt)
Let there are (n – 1) units in the system at time t, one arrival takes place in time Δt. So there
are n units in the system at time (t + Δt).
Probability of this event = [probability of (n – 1) units in the system]
x (probability of one arrival in time Δt)
= Pn - 1(t) .λ.Δt
Hence the probability of n units in the system at time (t + Δt) is,
Pn(t + Δt) = Pn(t) . (1 – λ.Δt) + Pn - 1(t) .λ.Δt
Pn(t + Δt) = Pn(t) – λ. Pn(t). Δt + Pn - 1(t) .λ.Δt
[Pn(t + Δt) – Pn(t)]/Δt = – λ.Pn(t) + λ.Pn-1(t)
Taking the limit Δt → 0, we get
Pn′(t) = – λ.Pn(t) + λ.Pn-1(t),  for n > 0   ……….(1)
Case-II: when n = 0: no unit at time t, no arrival in Δt → no unit at time (t + Δt)
Let there is no unit in the system at time t, no arrival takes place in time Δt. Hence there
would be zero units in the system at time (t + Δt).
Probability of this event = (probability of no unit in the system)
x (probability of no arrival in time Δt)
P0(t + Δt) = P0(t) . (1 – λ.Δt)
P0(t + Δt) – P0(t) = – λ. P0(t). Δt
[P0(t + Δt) – P0(t)]/Δt = – λ.P0(t)
Taking the limit Δt → 0, we get
P0′(t) = – λ.P0(t),  for n = 0   ……….(2)
In order to find the probability distribution, we shall make use of generating function of Pn(t).
i.e,
P(z,t) = Σ (n=0 to ∞) Pn(t).z^n   ……….(3)
Multiplying (1) by z^n, summing over n ≥ 1 and adding (2), we get
Σ (n=0 to ∞) z^n.Pn′(t) = – λ Σ (n=0 to ∞) z^n.Pn(t) + λ Σ (n=1 to ∞) z^n.Pn-1(t)
or,  P′(z,t) = λ(z – 1).P(z,t),  i.e.  P′(z,t)/P(z,t) = λ(z – 1)   ……….(4)
Integrating with respect to t, we get
log P(z,t) = λ (z – 1). t + c
Putting t = 0,
log P(z,0) = c ……….(5)
Now from equation (3), we have
P(z,0) = Σ (n=0 to ∞) Pn(0).z^n = P0(0).z^0 + Σ (n=1 to ∞) Pn(0).z^n
Since there are no arrivals at t = 0, P0(0) = 1 and Pn(0) = 0 for n ≥ 1. Therefore, P(z,0) = 1 + 0 = 1.
Now from equation (5),
c = log(1) = 0
Hence, log P(z,t) = λ(z – 1).t, so that
P(z,t) = e^(λ(z–1).t)
or,  P(z,t) = e^(λzt).e^(–λt)   ……….(6)
Now from equation (3), P(z,t) = Σ (n=0 to ∞) Pn(t).z^n. Expanding e^(λzt) in (6) as Σ (n=0 to ∞) (λt)^n.z^n/n! and comparing the coefficients of z^n, we get
Pn(t) = e^(–λt).(λt)^n/n!,  n = 0, 1, 2, …
which is the Poisson distribution with mean λt. Hence, the number of arrivals in a fixed time interval follows a Poisson distribution.
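As a rough numerical check of this result (this code is not part of the original text; the rate λ = 3 and the sample sizes are arbitrary assumptions), the sketch below generates exponential interarrival times, counts the arrivals in a unit interval, and compares the observed frequencies with the Poisson probabilities e^(–λt).(λt)^n/n!.

import numpy as np
from math import exp, factorial

rng = np.random.default_rng(42)
lam, t, trials = 3.0, 1.0, 20_000
counts = np.zeros(trials, dtype=int)
for i in range(trials):
    elapsed, n = 0.0, 0
    while True:
        elapsed += rng.exponential(1 / lam)   # exponential interarrival time
        if elapsed > t:
            break
        n += 1
    counts[i] = n                             # number of arrivals in (0, t]

for n in range(6):
    observed = np.mean(counts == n)
    theoretical = exp(-lam * t) * (lam * t) ** n / factorial(n)
    print(n, round(observed, 3), round(theoretical, 3))

The two columns of probabilities should agree to within sampling error.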
Theorem: For a Markovian (exponential) interarrival process, the time remaining until the next arrival is independent of the time that has already elapsed since the previous arrival; that is,
P[T ≤ t1 | T ≥ t0] = P[0 ≤ T ≤ t1 – t0]
Proof: Using conditional probability, we can write
P[T ≤ t1 | T ≥ t0] = P[t0 ≤ T ≤ t1] / P[T ≥ t0]   ………...(1)
Since the interarrival times are exponentially distributed, the R.H.S. of equation (1) becomes
[∫(t0 to t1) λ.e^(–λt) dt] / [∫(t0 to ∞) λ.e^(–λt) dt] = [e^(–λt0) – e^(–λt1)] / [e^(–λt0)]
Hence,
P[T ≤ t1 | T ≥ t0] = 1 – e^(–λ(t1–t0))   …………(2)
Also, P[0 ≤ T ≤ t1 – t0] = ∫(0 to t1–t0) λ.e^(–λt) dt = 1 – e^(–λ(t1–t0)), which is the same as (2). Hence the result.
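The memoryless property can also be checked by simulation. The short sketch below is illustrative and not part of the original text; λ, t0 and t1 are arbitrary assumed values.

import numpy as np

rng = np.random.default_rng(0)
lam, t0, t1 = 2.0, 1.0, 1.5
T = rng.exponential(scale=1 / lam, size=1_000_000)    # exponential interarrival times
conditional = np.mean(T[T >= t0] <= t1)               # estimate of P[T <= t1 | T >= t0]
print(round(conditional, 3), round(1 - np.exp(-lam * (t1 - t0)), 3))   # both should be near 0.632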
The analysis of queuing theory involves the study of the behavior of the system over time.
The state of the system is the basic concept in the analysis of the queuing theory. The state of
the queuing system may be classified as follows:
The length of the queue will grow over time and eventually reach infinity if the system's arrival rate
is higher than its service rate. Such a state is known as explosive state.
(i) System length- The average number of customers in the system, those waiting to be and
those being serviced, is known as the length of the system.
(ii) Queue length-Queue length is the average number of customers in line waiting to obtain
service.
(iii) Waiting time in the queue- It is the average length of time a customer must wait in the queue before his service begins.
(iv) Waiting time in the system- It is the amount of time, on average, that a consumer
spends using the system between joining the queue and receiving their service.
(v) Servicing time- The time taken for servicing of a unit is known as its servicing time.
(vi) Mean arrival rate- The expected number of arrivals occurring in the time interval of
length unity is called mean arrival rate. It is denoted by λ.
(vii) Mean arrival time- It is the reciprocal of mean arrival rate and is defined as,
Mean arrival time = 1/mean arrival rate = 1/λ.
(viii) Mean servicing rate- The expected number of services completed in a time interval of length unity is called the mean servicing rate. It is denoted by µ.
(ix) Mean servicing time- It is the reciprocal of the mean servicing rate and is defined as, Mean servicing time = 1/mean servicing rate = 1/µ.
(x) Server busy period- The busy period of the server is the time during which it remains busy in servicing.
(xi) Server idle period- When all the units in the queue have been served, the idle period of the server begins and it continues up to the time of arrival of the next unit; i.e., the idle period of the server is the time during which he remains free because there is no unit in the system to be served.
(xii) Traffic intensity (Utilization factor)- An important parameter of any queuing system is the traffic intensity, also called the load or the utilization factor, defined as the ratio of the mean servicing time to the mean arrival time. It is denoted by ρ and is defined as,
ρ = λ/µ
Notation for describing the characteristics of a queuing model was first suggested by David
G. Kendall in 1953. Kendall's notation introduced an (a/b/c) queuing notation that can be
found in all standard modern works on queuing theory.
Where,
a describes the interarrival time distribution,
b the service time distribution and
c the number of servers
For example, "G/D/1" would indicate a General arrival process, a Deterministic (constant
time) service process and a single server. Some other examples are M/M/1, M/M/c, M/G/1,
G/M/1 and M/D/1.
Later in 1966, A. Lee extended Kendall’s notations by adding fourth (d) and fifth (e)
characteristics to the notation to cover other queuing models. Then the following symbolic
expression can be used to fully specify the queuing model:
(a/b/c) : (d/e)
Where a, b and c describes their usual meaning and the addition letters d and e describes the
capacity of the system and the queue discipline respectively.
For example, (M/M/4) : (25/FCFS) could represent a bank with exponential arrival times,
exponential service times, 4 tellers, total capacity of 25 customers and an FCFS queue
discipline.
Let there are (n-1) units in the system at time t, one arrival takes place in time Δt and
no service provided in time Δt. Hence there remain n units in the system at time (t + Δt).
Probability of this event = [probability of (n-1) units in the system]
x (probability of one arrival in time Δt)
x (probability of no service in time Δt)
= Pn-1(t) .λ.Δt . (1 – µ.Δt)
Let there are n units in the system at time t, no arrival takes place in time Δt and no
service provided in time Δt. So there are n units in the system at time (t + Δt).
Probability of this event = [probability of n units in the system]
x (probability of no arrival in time Δt)
x (probability of no service in time Δt)
= Pn(t) . (1- λ.Δt). (1 – µ.Δt)
Let there are (n+1) units in the system at time t, no arrival takes place in time Δt and
one service provided in time Δt.
So there are n units in the system at time (t + Δt).
Probability of this event = [probability of (n+1) units in the system]
x (probability of no arrival in time Δt)
x (probability of one service in time Δt)
= Pn+1(t).(1 – λ.Δt).µ.Δt
Let there be no unit in the system at time t and no arrival in time Δt; then there is no unit in the system at time (t + Δt).
Example: Arrivals at a telephone booth are considered to be Poisson distributed, with an average time of 10 minutes between arrivals. The duration of a phone call is exponentially distributed, with a mean of 5 minutes. Determine:
(a) Expected number of units in the queue.
(b) Expected waiting time in the queue.
(c) Expected number of units in the system.
(d) Expected waiting time in the system
(e) Expected fraction of the day that the phone will be in use.
(f) Probability that the customer will have to wait.
Solution: Given,
The mean arrival time = 10 min
The mean service time = 5 min
The mean arrival rate, λ = (1/10) x 60 = 6/hour
The mean service rate, µ = (1/5) x 60 = 12/hour
(a) Expected number of units in the queue,
Lq= λ2/[µ.(µ-λ)] = (6)2/[12. (12-6)] = 0.5 units
(b) Expected waiting time in the queue,
Wq = λ/[µ.(µ-λ)] = (6)/[12. (12-6)] = 0.0833 hours
(c) Expected number of units in the system,
Ls = λ/(µ-λ) = 6/(12-6) = 1 unit
Solution: Given,
The mean arrival rate, λ = 4/hour
The mean service time = 10 min
The mean service rate, µ = (1/10) x 60 = 6/hour
(a) Average number of customer in the shop,
Ls = λ/(µ-λ) = 4/(6-4) = 2 Customers
(b) Average waiting time of a customer,
Wq = λ/[µ.(µ-λ)] = (4)/[6. (6-4)] = 0.333 hours
(c) The probability that a customer will have to wait,
P(W > 0) = 1 – P0 = 1 - (1- ρ) = ρ = λ / µ = 4/6 = 0.667
(d) (W/W > 0) = 1/(µ-λ) = 1/(6-4) = 1/2 hours = 30 mins
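The single-server calculations in these examples can be cross-checked with a short script. The sketch below (not part of the original text) simply codes the standard M/M/1 formulas used above and applies them to the example with λ = 4 per hour and µ = 6 per hour.

def mm1_measures(lam, mu):
    # Steady-state results for a single-server (M/M/1) queue; requires lam < mu.
    rho = lam / mu                           # traffic intensity = P(server busy)
    return {
        "rho": rho,
        "Ls": lam / (mu - lam),              # expected number in the system
        "Lq": lam ** 2 / (mu * (mu - lam)),  # expected number in the queue
        "Ws": 1 / (mu - lam),                # expected time in the system
        "Wq": lam / (mu * (mu - lam)),       # expected waiting time in the queue
    }

print(mm1_measures(4, 6))   # expect Ls = 2, Wq ≈ 0.333 hour, rho ≈ 0.667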
Example: A TV repairman works on the sets in the order in which they are delivered and finds that each set's repair time is exponentially distributed, with a mean of 30 minutes. The sets arrive in a Poisson fashion at an average rate of 12 per 10-hour day. Determine:
(a) What is the expected idle time per day for the repairman?
(b) How many TV sets will be there waiting for the repair?
Solution: Given,
The mean service time = 30 mins
The mean arrival rate, λ = 12/10 hours a day = 12/(10 x 60) = 1/50 per min
The mean service rate, µ = 1/30 per min
(a) Fraction of time the repairman is busy, ρ = λ/µ = (1/50)/(1/30) = 0.6
Fraction of time idle, P0 = 1 – ρ = 1 – 0.6 = 0.4
The expected idle time per day for the repairman = 0.4 x 10 = 4 hrs/day
(b) The number of TV sets waiting for the repair,
Lq= λ2/[µ.(µ-λ)] = (1/50)2/[(1/30). ((1/30)-(1/50))] = 0.9 units
Solution: Given,
The mean arrival rate, λ = 30 trains per day = 30/(24x60) = 1/48 trains per min
The mean service time = 36 mins
The mean service rate, µ = 1/36 trains per min
(a) The average number of trains in the system,
Ls = λ/(µ-λ) = (1/48)/((1/36)-(1/48)) = 3 trains
(b) Expected length of non-empty queue,
(L/L > 0) = µ/(µ-λ) = (1/36)/((1/36)-(1/48)) = 4 trains
(c) The probability that the queue size exceeds 12,
P(Queue size ≥ 12) = (λ/µ)12 = [(1/48)/(1/36)]12 = (0.75)12 = 0.032
IN-TEXT QUESTIONS
5. __________ is the average time that a customer spends in the system from
the entry in the queue to the completion of the service.
6. The average number of customers in the queue waiting to receive the service
is called _____________.
7. The expected number of services completed in the time interval of length
unity is called:
(a) Mean servicing rate
(b) Mean arrival rate
(c) Mean waiting time
(d) Mean servicing time
8. The time taken for servicing of a unit is known as its ______________.
Σ (n=0 to c-1) Pn + Σ (n=c to ∞) Pn = 1
Σ (n=0 to c-1) [(1/n!).(λ/µ)^n.P0] + Σ (n=c to ∞) [(1/c^(n-c)).(1/c!).(λ/µ)^n.P0] = 1
P0 [ Σ (n=0 to c-1) (c^n/n!).(λ/(cµ))^n + Σ (n=c to ∞) (c^c/c!).(λ/(cµ))^n ] = 1
P0 [ Σ (n=0 to c-1) (1/n!).(cρ)^n + (c^c/c!) Σ (n=c to ∞) ρ^n ] = 1      (since ρ = λ/(cµ))
P0 [ Σ (n=0 to c-1) (1/n!).(cρ)^n + (c^c/c!).(ρ^c + ρ^(c+1) + ρ^(c+2) + ….. up to ∞) ] = 1
P0 [ Σ (n=0 to c-1) (1/n!).(cρ)^n + (c^c/c!).(ρ^c/(1-ρ)) ] = 1
P0 = [ Σ (n=0 to c-1) (1/n!).(cρ)^n + (c^c/c!).(ρ^c/(1-ρ)) ]^(-1)
▪ Pn = (1/n!).(λ/µ)^n.P0 ,  n ≤ c
▪ Pn = (1/c^(n-c)).(1/c!).(λ/µ)^n.P0 ,  n ≥ c
Example: A telephone exchange has two long-distance operators. The telephone company finds that, during the peak load, long-distance calls arrive in a Poisson fashion at an average rate of 15 per hour. The length of service on these calls is approximately exponentially distributed, with a mean of 5 minutes. Find:
(a) How likely is it that a subscriber will have to wait for long distance calls during
the busiest time of the day?
(b) What is the average waiting time for the customers?
Solution: Given,
Number of servers, c =2
The mean arrival rate, λ = 15 per hour
The mean service time = 5 min
The mean service rate, µ = 1/5 per min = 60/5 per hour = 12 per hour
Now, ρ = λ /(cµ) = 15/(2x12) = 5/8
P0 = [ Σ (n=0 to c-1) (1/n!).(cρ)^n + (c^c/c!).(ρ^c/(1-ρ)) ]^(-1)
P0 = [ Σ (n=0 to 1) (1/n!).(5/4)^n + (2^2/2!).((5/8)^2/(1-(5/8))) ]^(-1)
Example: A tax consulting firm has 3 counters in its office to receive people who have problems concerning their income tax and sales tax. On average 48 persons arrive during an 8-hour day. Each tax advisor spends 15 minutes on average with an arrival. If arrivals are Poisson distributed and service times follow an exponential distribution, find:
(a) The average number of customers in the system.
(b) The average waiting time of a customer in the system.
(c) The average number of customers waiting for service.
(d) The average waiting time of a customer in the queue.
(e) The probability that a customer has to wait before receiving service.
Solution: Given,
Number of servers, c = 3
The mean arrival rate, λ = 48 persons 8 hours a day = 48/8 = 6 / hour
The mean service time = 15 min
The mean service rate, µ = 1/15 per min = 60/15 per hour = 4 / hour
Now, ρ = λ /(cµ) = 6/(3x4) = 1/2
P0 = [ Σ (n=0 to c-1) (1/n!).(cρ)^n + (c^c/c!).(ρ^c/(1-ρ)) ]^(-1)
P0 = [ Σ (n=0 to 2) (1/n!).(3/2)^n + (3^3/3!).((1/2)^3/(1-(1/2))) ]^(-1)
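For the multi-server examples, the closed-form measures can be evaluated with a few lines of code. The sketch below (not part of the original text) uses the standard M/M/c formulas — written a little differently from the intermediate steps above, but algebraically equivalent for P0 — and applies them to the tax-consulting example (c = 3, λ = 6 per hour, µ = 4 per hour).

from math import factorial

def mmc_measures(lam, mu, c):
    # Steady-state M/M/c measures, valid when rho = lam/(c*mu) < 1.
    rho = lam / (c * mu)
    a = lam / mu                                         # offered load = c*rho
    p0 = 1 / (sum(a ** n / factorial(n) for n in range(c))
              + a ** c / (factorial(c) * (1 - rho)))
    p_wait = a ** c / (factorial(c) * (1 - rho)) * p0    # probability an arrival must wait
    lq = p_wait * rho / (1 - rho)                        # expected queue length
    wq = lq / lam                                        # expected wait in the queue
    ws = wq + 1 / mu                                     # expected time in the system
    ls = lam * ws                                        # expected number in the system
    return p0, ls, ws, lq, wq, p_wait

print(mmc_measures(6, 4, 3))   # P0 should come out near 0.21 for this example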
Now, in order to find P0, use the fact that the total probability Σ (n=0 to N) Pn = 1, i.e.
P0 + P1 + P2 + …… + PN = 1
P0 + ρ.P0 + ρ^2.P0 + ……. + ρ^N.P0 = 1
P0 [1 + ρ + ρ^2 + ……. + ρ^N] = 1
P0 [(1 - ρ^(N+1))/(1 - ρ)] = 1
P0 = (1 - ρ)/(1 - ρ^(N+1))
Hence,
Pn = ρ^n.P0 = ρ^n.[(1 - ρ)/(1 - ρ^(N+1))] ,  for n ≤ N
▪ Lq = P0. Σ (n=1 to N) (n-1).ρ^n
▪ Ws = Ls/λ′ , where λ′ = λ(1 – PN) is the effective arrival rate
▪ Wq = Lq/λ′ = Ws – (1/µ)
▪ ρ = λ/µ
Example: Consider a single-server queuing system with Poisson arrivals and exponential service times. Customers arrive at the rate of 5 per hour and the mean service time is 30 minutes. The system can accommodate only 4 customers at a time. Find:
(a) The probability that the system is empty.
(b) The average number of customers in the system.
(c) The average number of customers waiting in the queue.
Solution: Given,
The mean arrival rate, λ = 5 / hour
The mean service time = 30 min
The mean service rate, µ = 1/30 per min = 60/30 per hour = 2 / hour
N=4
Now, ρ = λ /µ = 5/2 = 2.5
(a) Probability that the system is empty,
P0 = (1-ρ)/ (1- ρN+1) = (1-2.5)/ (1- (2.5)5) = 0.016
(b) The average number of customers in the system,
Ls = P0. Σ (n=0 to 4) n.ρ^n = P0.[0.ρ^0 + 1.ρ^1 + 2.ρ^2 + 3.ρ^3 + 4.ρ^4]
(c) The average number of customers waiting in the queue,
Lq = P0. Σ (n=1 to 4) (n-1).ρ^n = P0.[0.ρ^1 + 1.ρ^2 + 2.ρ^3 + 3.ρ^4]
Example: A one-man barber shop can accommodate a maximum of three customers at a time: one being served and two waiting. If a customer arrives when the shop is full, he goes to another shop. Customers arrive randomly at an average rate of 4 per hour, and the average service time is 10 minutes. Determine:
(a) The probability distribution of the number of customers in the shop.
(b) The expected number of customers waiting in the shop.
(c) The expected number of customers in the shop.
(d) The expected time a customer spends in the shop.
Solution: Given,
The mean arrival rate, λ = 4 / hour
The mean service time = 10 min
The mean service rate, µ = 1/10 per min = 60/10 per hour = 6 / hour
N=3
Now, ρ = λ /µ = 4/6 = 2/3 = 0.667
(a) The probability distribution for the number of customers waiting for service,
P0 = (1-ρ)/ (1- ρN+1) = (1-0.667)/ (1- (0.667)4) = 0.4152
P1 = ρ. P0 = 0.667 x 0.4152 = 0.2769
P2 = ρ2. P0 = (0.667)2 x 0.4152 = 0.1847
P3 = ρ3. P0 = (0.667)3 x 0.4152 = 0.1232
(b) Expected number of customers waiting in the shop,
Lq = P0. Σ (n=1 to 3) (n-1).ρ^n = P0.[0.ρ^1 + 1.ρ^2 + 2.ρ^3]
(c) Expected number of customers in the shop,
Ls = P0. Σ (n=0 to 3) n.ρ^n = P0.[0.ρ^0 + 1.ρ^1 + 2.ρ^2 + 3.ρ^3]
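The finite-capacity formulas above are easy to script. The following sketch (not part of the original text) computes the M/M/1/N measures for the barber-shop example with λ = 4 per hour, µ = 6 per hour and N = 3.

def mm1n_measures(lam, mu, N):
    # Single server with finite capacity N (M/M/1/N); formulas assume rho != 1.
    rho = lam / mu
    p0 = (1 - rho) / (1 - rho ** (N + 1))
    pn = [p0 * rho ** n for n in range(N + 1)]                  # P0, P1, ..., PN
    ls = sum(n * p for n, p in enumerate(pn))                   # expected number in the shop
    lq = sum((n - 1) * p for n, p in enumerate(pn) if n >= 1)   # expected queue length
    lam_eff = lam * (1 - pn[N])                                 # effective arrival rate
    ws = ls / lam_eff                                           # expected time in the shop
    return pn, ls, lq, ws

pn, ls, lq, ws = mm1n_measures(4, 6, 3)
print([round(p, 4) for p in pn])        # roughly [0.4154, 0.2769, 0.1846, 0.1231]
print(round(lq, 3), round(ls, 3))       # roughly 0.431 and 1.015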
Σ (n=0 to c-1) Pn + Σ (n=c to N) Pn = 1
Σ (n=0 to c-1) [(1/n!).(ρc)^n.P0] + Σ (n=c to N) [(c^c/c!).ρ^n.P0] = 1
P0 [ Σ (n=0 to c-1) (1/n!).(ρc)^n + Σ (n=c to N) (ρc)^c.(1/c!).ρ^(n-c) ] = 1
P0 [ Σ (n=0 to c-1) (1/n!).(ρc)^n + (ρc)^c.(1/c!) Σ (n=c to N) ρ^(n-c) ] = 1      (since ρ = λ/(cµ))
P0 [ Σ (n=0 to c-1) (1/n!).(ρc)^n + (ρc)^c.(1/c!).(1 + ρ + ρ^2 + ….. + ρ^(N-c)) ] = 1
P0 [ Σ (n=0 to c-1) (1/n!).(ρc)^n + (ρc)^c.(1/c!).{(1 - ρ^(N-c+1))/(1 - ρ)} ] = 1
P0 = [ Σ (n=0 to c-1) (1/n!).(ρc)^n + (ρc)^c.(1/c!).{(1 - ρ^(N-c+1))/(1 - ρ)} ]^(-1) ,  if ρ = λ/(cµ) ≠ 1
P0 = [ Σ (n=0 to c-1) (1/n!).(ρc)^n + (ρc)^c.(1/c!).(N - c + 1) ]^(-1) ,  if ρ = λ/(cµ) = 1
▪ Ls = Σ (n=0 to N) n.Pn
▪ Wq = Lq/λ′   (where λ′ is the effective arrival rate, given by λ′ = λ(1 – PN))
▪ Ws = Ls/λ′
▪ ρ = λ/(cµ)
▪ P0 = [ Σ (n=0 to c-1) (1/n!).(ρc)^n + (ρc)^c.(1/c!).{(1 - ρ^(N-c+1))/(1 - ρ)} ]^(-1) ,  if ρ = λ/(cµ) ≠ 1
       [ Σ (n=0 to c-1) (1/n!).(ρc)^n + (ρc)^c.(1/c!).(N - c + 1) ]^(-1) ,  if ρ = λ/(cµ) = 1
▪ Pn = (1/n!).(ρc)^n.P0 ,  0 ≤ n ≤ c
       (c^c/c!).ρ^n.P0 ,  c ≤ n ≤ N
       0 ,  n > N
Example: Consider a car inspection station with 3 inspection stalls. Assume that cars wait in such a way that when a stall becomes vacant, the car at the head of the line pulls into it. The station can accommodate at most 4 cars waiting at one time, so it can hold at most 7 cars at a time. During peak hours, cars arrive according to a Poisson distribution at a mean rate of one per minute, and the inspection time follows an exponential distribution with a mean of six minutes. Determine:
(a) The average number of cars waiting in the queue.
(b) The average number of cars in the system during peak hours.
(c) The expected waiting time in the system.
(d) The expected number of cars per hour that cannot enter the station.
Solution: Given, c = 3, N = 7
Mean arrival rate, λ = 1 car per min
Mean service time = 6 min
Mean servicing rate, µ = 1/6 car per min
ρ = λ/(cµ) = 1/(3(1/6)) = 2
P0 = [ Σ (n=0 to c-1) (ρc)^n/n! + ((ρc)^c/c!).{(1 - ρ^(N-c+1))/(1 - ρ)} ]^(-1)
   = [ Σ (n=0 to 2) 6^n/n! + (6^3/3!).{(1 - 2^(7-3+1))/(1 - 2)} ]^(-1)
Lq = Σ (n=3 to 7) (n-3).(c^c/c!).ρ^n.P0
   = Σ (n=3 to 7) (n-3).(3^3/3!).2^n.P0
Ls = Σ (n=0 to c) n.Pn + Σ (n=c+1 to N) n.Pn
   = Σ (n=0 to 3) n.[(1/n!).(ρc)^n.P0] + Σ (n=4 to 7) n.[(c^c/c!).ρ^n.P0]
   = [ Σ (n=0 to 3) n.(1/n!).6^n + Σ (n=4 to 7) n.(3^3/3!).2^n ].P0
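A short script can also evaluate the M/M/c/N expressions set up above for the inspection-station example (c = 3, N = 7, λ = 1 per minute, µ = 1/6 per minute). The code below is illustrative and not part of the original text.

from math import factorial

def mmcn_probabilities(lam, mu, c, N):
    # State probabilities for M/M/c/N with rho = lam/(c*mu) != 1.
    rho = lam / (c * mu)
    a = lam / mu                                            # = c*rho
    p0 = 1 / (sum(a ** n / factorial(n) for n in range(c))
              + a ** c / factorial(c) * (1 - rho ** (N - c + 1)) / (1 - rho))
    return [p0 * a ** n / factorial(n) if n <= c
            else p0 * c ** c / factorial(c) * rho ** n
            for n in range(N + 1)]

pn = mmcn_probabilities(1, 1 / 6, 3, 7)
lq = sum((n - 3) * p for n, p in enumerate(pn) if n > 3)    # expected queue length
ls = sum(n * p for n, p in enumerate(pn))                   # expected number in the station
lost_per_hour = 60 * 1 * pn[7]                              # arrivals per hour turned away
print(round(lq, 2), round(ls, 2), round(lost_per_hour, 1))  # roughly 3.09, 6.06 and 30.3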
Many different business scenarios have been used to apply the waiting line or queuing theory.
There are likely to be queues waiting in any circumstance where there are clients, including
banks, post offices, movie theatres, gas stations, train ticket desks, doctor's offices, etc.
Customers typically want a specific degree of service, whereas businesses that provide
service facilities work to keep costs down while still providing the required service. Queuing
theory can be used to solve issues like the ones listed below:
The traditional queuing theory's presumptions might be too severe to accurately simulate
practical scenarios. These models are unable to handle the complexity of production lines
with product-specific characteristics. To simulate, evaluate, visualise, and optimise time
dynamic queuing line behaviour, specific tools have been developed. Following are a few of
queueing theory's drawbacks:
• The majority of queuing models are highly complicated and difficult to comprehend.
• The exact theoretical distribution that would apply to a particular queue situation is
frequently unknown.
• The analysis of waiting problems becomes more challenging if the queuing discipline
does not follow the "first come, first serve" principle.
3.15 SUMMARY
• The queuing model's goal is to determine the best service rate and server count in order to
reduce both the typical cost of using the system and the cost of providing service.
• If arrivals are entirely random, a Poisson distribution can be used to describe the probability distribution of the number of arrivals over a given period of time.
3.16 GLOSSARY
Waiting Lines - Queues or waiting lines are a typical occurrence in both regular life and
several corporate and industrial settings.
Input source of queue - Customers requiring service are generated at different times by an
input source, commonly known as population.
Queue discipline (Service discipline) - The queue discipline is the order or manner in which
customers from the queue are selected for service.
System output - The rate at which consumers are served is referred to as system output. It
depends on how long the facility needs to provide the service and how the service facility is
set up.
Steady state - A queuing system is said to be in a steady state when the probability of having
a certain amount of customers is independent of time.
Explosive state - The length of the queue will grow over time and eventually reach infinity if
the system's arrival rate is higher than its service rate. Such a state is known as explosive
state.
4. Poisson distribution
8. Machines in a production shop break down at an average rate of 2 per hour. The non-productive time of any machine costs Rs. 30 per hour. The cost of the repairman is Rs. 50 per hour and the service rate is 3 per hour. Determine:
a) The number of machines not working at any point of time.
b) The average time that a machine is waiting for the repairman.
c) The cost of non-productive time of the machine per hour.
d) The expected cost of system per hour.
9. Patrons arrive at a reception counter at the rate of 2 per minute. The receptionist on duty takes an average of 1 minute per patron. Calculate:
a) What is the chance that a patron will straight way meet the receptionist?
b) The probability that the receptionist is busy.
c) Average number of patrons in the system.
10. In a car manufacturing plant, a loading crane takes exactly 10 min to load a car into a
wagon and again comes back to the position to load another car. If the arrival of cars
is in a poisson stream at an average rate of one after every 20 min, calculate:
a) The average waiting time of a car in the queue.
b) The average waiting time in the system.
LESSON 4
SIMULATION
Dr. Deepa Tyagi
Assistant Professor
Shaheed Rajguru College of Applied Sciences for Women
University of Delhi
STRUCTURE
Simulation refers to a descriptive technique that enables a decision maker to evaluate the behaviour of a model under varying conditions.
It is well known that not all real-world problems can be solved by applying a specific type of technique — so-called mathematical models and formulas that can be applied to certain types of problems — and then performing the calculations. Some problem situations are too complex to be represented by the concise techniques presented so far in this text. In such cases, simulation is an alternative form of analysis for the computational solution of problems.
The use of simulation as a decision-making tool is fairly widespread, and you are definitely familiar with some of the ways it is used. Other reasons for the demand for simulation include:
The idea of simulation is best understood with an example: suppose you build a model of how a day at a retail store plays out. You set up rules for how people will interact, when goods are delivered, 'congestion', 'free time', and anything else needed to build an accurate model of that real-world sales floor.
You then run that model, usually in simulation software, to visualize the results of applying those rules under different variables, such as a late supply consignment or a Black Friday surge.
There are generally three situations in which you would use simulation models:
➢ When you lack data, for example when little historical information is available.
➢ When your business processes are too complex to be resolved through conventional methods.
These situations appear self-explanatory, but there are deeper-level business advantages to running a simulation model, which we examine in detail below.
1. Flexibility
6. Time compression
7. Testing of complications
➢ Flexibility:
✓ You can simulate many different things. From business operations to training aircraft pilots, there is no shortage of existing and potential applications for simulation systems.
✓ When it comes to simulation for business (like the retail example described above), you can use it to capture insights in mining, manufacturing, retail, supply chain management, and many other areas. It is industry agnostic and applicable to innumerable use-cases.
✓ If you have enough computing capacity, you can simulate amazingly complex scenarios, such as the daily operations of an airport over an entire quarter or a city traffic grid.
✓ It does not matter how many rules you set or how many variables you throw at the system. As long as you have the necessary computing capacity, you can simulate it with relative ease.
✓ Introducing even a single change in a large, complex process can cause delays and product-quality problems costing tens or even hundreds of millions of dollars.
✓ With simulation, you can field-test your changes before they are executed in the real world. You can gain insights about potential risks early, and act on them in advance.
➢ Take, for example, closing a road: yes, it will cause a bottleneck, but with simulation you can see when that bottleneck will be most severe. You can schedule road construction crews to avoid working in the area during peak congestion (so as to make it easier for traffic to flow).
➢ The information may not be relevant today, but it could become relevant later when the right factors (such as technology, the regulatory environment, etc.) come into play.
✓ This is valuable information for decision-makers, leaders, and shareholders who are analysing project proposals and changes to their existing systems.
➢ Time compression
➢ Even though you may want insights covering months or years into the future, you cannot afford to wait that long to obtain them. With simulation modeling, you can obtain results about, say, a complete 12-month period comparatively quickly, for example within a day.
➢ For example, Texas A&M scientists have recently simulated the potential of biomass growth in cool seasons.
➢ Testing of complications
✓ In addition, you can identify potential complications and build in solutions for them (and test repeatedly) before executing your change in the real world.
Regardless of the type of simulation involved, the following fundamental steps are used for all simulation models:
1. Identify the problem and set objectives.
2. Develop the simulation model.
3. Test the model to make sure that it reflects the system being studied.
4. Develop one or more experiments (conditions under which the model's behaviour will be examined).
5. Run the simulation and evaluate the results.
6. Repeat steps 4 and 5 until you are satisfied with the results.
The first task in problem solving of any kind is to clearly identify the problem and set the goals that the solution is intended to achieve; simulation is no exception. A clear statement of the objectives can provide not only guidance for model development but also the basis for evaluating the success or failure of a simulation. In general, the goal of a simulation study is to determine how a system will behave under certain conditions. The more specific a manager is about what he or she is looking for, the better the chances that the simulation model will be designed to accomplish that. Toward that end, the manager must decide on the scope and level of detail of the simulation. This indicates the necessary degree of complexity of the model and the information requirements of the study.
The second task is model development. Typically, this involves deciding on the structure of the model and using a computer to carry out the simulations. (For teaching purposes, the examples and problems in this lesson are mostly manual, but in most real-life applications computers are used. This stems from the need for large numbers of runs, the complexity of simulations, and the need for record keeping of results.) Data gathering is a significant part of model development. The amount and type of data needed are a direct function of the scope and level of detail of the simulation. The data are needed for both model development and evaluation. Naturally, the model must be designed to enable evaluation of key decision alternatives.
The third step, the validation (testing) phase, is closely related to model development. Its main purpose is to determine whether the model adequately depicts real system performance. An analyst usually does this by comparing the results of simulation runs with the known performance of the system under the same circumstances. If such a comparison cannot be made because, for example, real-life data are difficult or impossible to obtain, an alternative is to employ a test of reasonableness, in which the judgments and opinions of individuals familiar with the system or with similar systems are relied on for confirmation that the results are plausible and acceptable. Still another aspect of validation is careful consideration of the assumptions of the model and the values of parameters used in testing the model. Again, the judgments and opinions of those familiar with the real-life system and of those who must use the results are essential. Finally, note that model development and model validation go hand in hand: model deficiencies uncovered during validation prompt model revisions, which lead to the need for further validation efforts and perhaps further revisions.
The fourth step in simulation is designing experiments. Experiments are the essence of a simulation; they help answer the what-if questions posed in simulation studies. By going through the process, the manager or analyst learns about system behaviour.
The fifth step is to run the simulation model. If a simulation model is deterministic, with all parameters known and constant, only a single run will be needed for each what-if question. But if the model is probabilistic, with parameters subject to random variability, multiple runs will be needed to obtain a clear picture of the results. In this lesson, probabilistic simulations are the focus of the discussion, and the comments are limited to them. Probabilistic simulation is essentially a form of random sampling, with each run representing one observation. Therefore, statistical theory can be used to determine appropriate sample sizes. In effect, the greater the degree of variability inherent in the simulation results, the greater the number of simulation runs needed to achieve a reasonable level of confidence in the results as true indicators of model behaviour.
The last step in the simulation process is to analyse and interpret the results. Interpretation of the results depends to a large extent on the degree to which the simulation model approximates reality; the closer the approximation, the less need to "adjust" the results. Moreover, the closer the approximation of the model to reality, the less the risk inherent in applying the results.
Built on analytical models, Monte Carlo studies use empirical data on the real system's inputs and outputs (for example, supply consumption and production yield). They then capture uncertainties and potential risks through probability distributions.
The benefit of a Monte Carlo-based simulation is that it provides insight and a comprehensive understanding of potential threats to your bottom line and time-to-market.
You can apply Monte Carlo simulation to almost any business or field, including oil and gas, manufacturing, engineering, supply chain management, and many others.
We will survey Monte Carlo simulation concisely in the next sections of this lesson.
An agent-based simulation is a model that examines the impact of an 'agent' on the 'system' or 'environment.' In simple terms, think of the impact a new laser-cutting tool or some other piece of equipment has on your overall manufacturing line.
The 'agent' in agent-based models may be people, equipment, or practically anything else. The simulation includes the agent's 'behaviour,' i.e. a set of rules for how those agents must act in the system. You then examine how the system responds to those rules.
However, you must draw your rules from real-world data; otherwise, you will not generate accurate insights. In a way, it serves as a means to analyse a proposed change and identify potential risks ahead of time.
A discrete event simulation model enables you to observe the specific events that influence your business processes. For example, a typical technical support process includes the end-user calling you, your system receiving and routing the call, and your agent picking up the call.
You would use a discrete event simulation model to test that technical support process. You can use discrete event simulation models to study many types of systems (for instance, healthcare, manufacturing, etc.) and for a wide range of outcomes.
For example, the Nebraska Medical Center has used discrete event simulation models to visualize how it could remove productivity bottlenecks, increase the utilization of its operating rooms, and reduce patient/specialist travel distance and time.
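To make the idea concrete, here is a minimal discrete event simulation sketch in Python (not part of the original text, and not the Nebraska Medical Center model): a single support agent answering calls with exponential interarrival and service times, with illustrative parameter values assumed.

import heapq, random

def simulate_support_line(arrival_rate=5.0, service_rate=6.0, horizon=1000.0, seed=1):
    # Event-driven simulation of a single-server FCFS queue.
    random.seed(seed)
    events = [(random.expovariate(arrival_rate), "arrival")]   # (time, event type)
    waiting_calls = []          # arrival times of calls waiting in the queue
    server_busy = False
    waits = []                  # time each call waited before being answered
    while events:
        t, kind = heapq.heappop(events)
        if t > horizon:
            break
        if kind == "arrival":
            heapq.heappush(events, (t + random.expovariate(arrival_rate), "arrival"))
            if server_busy:
                waiting_calls.append(t)
            else:
                server_busy = True
                waits.append(0.0)
                heapq.heappush(events, (t + random.expovariate(service_rate), "departure"))
        else:                   # a call finishes service
            if waiting_calls:
                waits.append(t - waiting_calls.pop(0))
                heapq.heappush(events, (t + random.expovariate(service_rate), "departure"))
            else:
                server_busy = False
    return sum(waits) / len(waits)

# With these assumed rates the simulated mean wait should be near the M/M/1 value
# Wq = lam/(mu*(mu - lam)) ≈ 0.83 time units.
print(simulate_support_line())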
This is a very abstract form of simulation modeling. Unlike agent-based modeling and discrete event modeling, system dynamics does not involve specific details about the system. So for a production facility, this type of model will not factor in data about individual machines and workers.
Rather, businesses would use system dynamics models to simulate a long-term, strategic-level view of the overall system.
In other words, the preference is to capture aggregate-level observations about the entire system in response to an action — for example, a reduction in CAPEX, discontinuing a product line, etc.
There are many different types of simulation techniques. The discussion here will focus on probabilistic simulation using the Monte Carlo method. The method gets its name from the famous Mediterranean resort associated with games of chance. The chance element is an important feature of Monte Carlo simulation, and this approach can be used only when a process has a random, or chance, component.
In the Monte Carlo method, an analyst identifies a frequency distribution that reflects the random component of the system under study. Random samples taken from this frequency distribution are analogous to observations made on the system itself. As the number of observations increases, the results of the simulation will more closely approximate the behaviour of the actual system, provided an appropriate model has been developed. Sampling is accomplished by the use of random numbers.
The basic steps in the process are as follows:
1. Identify a frequency distribution for each random component of the system.
2. Work out an assignment so that intervals of random numbers correspond to the frequency distribution.
3. Obtain the random numbers needed for the study.
4. Interpret the results.
The random numbers used in Monte Carlo simulation can come from any source that exhibits the necessary randomness. Typically, they come from one of two sources: large studies rely on computer-generated random numbers, and small studies usually use numbers from a table of random digits like the one shown in Table 19S-1. The digits are listed in pairs for convenience, but they may be used singly, in pairs, or in whatever combination a given problem requires.
Two important features of the sets of random numbers are essential to simulation. One is that the numbers are uniformly distributed. This means that for any size grouping of digits (for example, two-digit numbers), every possible outcome (for instance, 34, 89, 00) has the same probability of appearing. The second feature is that there are no discernible patterns in sequences of numbers that would enable one to predict numbers further in the sequence (hence the name random digits). This feature holds for any string of numbers; the numbers can be read across rows and up or down columns.
When using the table, it is important to avoid always starting in the same spot; that would result in the same sequence of numbers each time. Various methods exist for choosing a random starting point. One can use the serial number of a currency note to select the row, column, and direction of number selection, or use rolls of a die. For our purposes, the starting point will be specified in each example or problem so that everyone obtains the same results.
The process of simulation will become clearer as we work through some simple problems.
Example-1:
Simulate breakdowns for a 10-day period. Read two-digit random numbers from Table 19S-1, starting at the top of column 1 and reading down.
a) Develop cumulative frequencies for breakdowns:
1. Convert frequencies into relative frequencies by dividing each frequency by the sum
of the frequencies. Thus, 10 turns into 10/100 = .10, 30 turns into 30/100 =.30, and so
forth.
2. Develop cumulative frequencies by successive summing. The results are
demonstrated in the following table:
Number of breakdowns | Frequency | Relative frequency | Cumulative frequency
0                    |    10     |        .10         |        .10
1                    |    30     |        .30         |        .40
2                    |    25     |        .25         |        .65
3                    |    20     |        .20         |        .85
4                    |    10     |        .10         |        .95
5                    |     5     |        .05         |       1.00
Total                |   100     |       1.00         |
b) Obtain the random numbers from Table 19S-1, column 1, as stated in the question: 18 25 73 12 54 96 23 31 45 01
c) Convert the random numbers into numbers of breakdowns:
18 falls in the interval 11 to 40 and therefore corresponds to one breakdown on day 1; 25 also falls in the interval 11 to 40 and corresponds to one breakdown on day 2.
The mean number of breakdowns for this 10-day simulation is 17/10 = 1.7
breakdowns per day. Compare this to the expected number of breakdowns based on the
historical data:
0(.10) + 1(.30) + 2(.25) + 3(.20) + 4(.10) + 5(.05) = 2.05 per day
The following points should be noted:
1. This simple model is intended to illustrate the basic idea of Monte Carlo simulation. If our only goal were to estimate the average number of breakdowns, we would not need to simulate; we could base the estimate on the historical data alone.
2. The simulation should be viewed as a sample; it is quite likely that additional runs of 10 numbers would produce different means.
3. Because of the variability inherent in the results of small samples, it would be unwise to attempt to draw any firm conclusions from them; in an actual study, much larger sample sizes would be used.
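Example-1 can also be reproduced in a few lines of code (illustrative, not part of the original text), using the interval assignment implied by the cumulative frequencies: 01–10 → 0 breakdowns, 11–40 → 1, 41–65 → 2, 66–85 → 3, 86–95 → 4, 96–00 → 5.

import bisect

cumulative = [0.10, 0.40, 0.65, 0.85, 0.95, 1.00]          # for 0, 1, ..., 5 breakdowns
random_numbers = [18, 25, 73, 12, 54, 96, 23, 31, 45, 1]   # "01" read as 1, "00" as 100

def breakdowns(rn):
    # Treat the two-digit number rn as a draw in (0, 1]: 01 -> .01, ..., 00 -> 1.00
    u = (rn if rn != 0 else 100) / 100.0
    return bisect.bisect_left(cumulative, u)

simulated = [breakdowns(rn) for rn in random_numbers]
print(simulated)                         # expect [1, 1, 3, 1, 2, 5, 1, 1, 2, 0]
print(sum(simulated) / len(simulated))   # mean = 1.7 breakdowns per day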
In some cases, it is helpful to construct a flowchart that describes a simulation, particularly if the simulation will involve periodic updating of system values (for instance, the amount of inventory on hand), as illustrated in Example-2. The Excel spreadsheet formulation for this problem is shown below. Note that the placement of values in columns B, C, and E must be exactly as shown.
The simulation results are shown in the following screen. Press the F4 key to redo the simulation or run an additional simulation.
Output:
One of the real problems that the simulation analyst faces is validating the model. The simulation model is valid only if it is an accurate representation of the actual system; otherwise it is invalid.
Validation and verification are the two steps in any simulation project used to validate a model.
• Validation is the process of comparing two results. Here, we compare the representation of the conceptual model to the real system. If the comparison is accurate, the model is valid; otherwise it is invalid.
• Verification is the process of comparing two or more results to ensure accuracy. Here, we compare the model's implementation and its associated data with the developer's conceptual description and specifications.
There are various techniques used to perform verification and validation of a simulation model. Some of the common techniques are as follows:
✓ By following the intermediate results and comparing them with observed results.
✓ By checking the simulation model output using various input combinations.
✓ The model builder should communicate with the customer during the whole of the process.
Step 2 − Test the model on assumption data. This can be achieved by applying the assumption data to the model and testing it quantitatively. Sensitivity analysis can also be performed to see the effect of change in the output when significant changes are made in the input data.
✓ Comparison can be carried out using the Turing test. It presents the data in the system format, which can be explained by experts only.
✓ Statistical methods can be used to compare the model output with the real system output.
After building the model, we have to compare its output data with the real system data. Following are the two ways to perform this comparison.
In the first approach, we use real-world inputs to the model and compare its output with the output of the real system for the same inputs. This process of validation is straightforward; however, it may present some difficulties when carried out, for instance when the output is to be compared with average length, waiting time, idle time, etc. These can be compared using statistical tests and hypothesis testing. Some of the statistical tests are the Chi-square test, the Kolmogorov-Smirnov test, the Cramer-von Mises test, and the Moments test.
In the second approach, suppose we need to describe a proposed system which does not exist at present and has not existed in the past. Therefore, there is no historical data available to compare its performance with. Hence, we have to use a hypothetical system built on assumptions. The following useful pointers will help in making it effective.
➢ Subsystem Validity − The model itself may not have any existing system to compare it with, but it may include known subsystems, and the validity of each of these can be tested separately.
➢ Face Validity − If the model works on contrary logic, then it should be rejected even though it behaves like the actual system.
➢ It compresses time, so that managers can quickly see the long-run effects of their
decisions.
➢ It is a useful technique for solving a business problem in which many values of the
variables are not known, or only partly known, in advance and there is no easy way to find
these values.
Further, application areas for simulation are practically unlimited. Today simulation can be
used for decision-support with supply chain management, workflow and throughput analysis,
facility layout design, resource usage and allocation, resource management and process
change. Whether contemplating a new office building, planning a new factory design,
assessing predictive and reliability maintenance, anticipating new or radical procedures,
deploying new staff, or planning a day’s activities, simulation can play a crucial role in
finding the right and timely solutions. The progressive and technology driven organizations,
in pursuit of winning and/or maintaining their market share, have taken different approaches
to their success. In their pursuit, some have focused on “customer service”, many have
embraced the “productivity” theme, and yet others have pursued the important issue of
138 | P a g e
“quality and reliability”. In recent times, simulation has been very successfully used as a
modeling and analysis tool.
However, certain limitations also accompany simulation. Chief among these are:
1. Simulation does not produce an optimal result; it merely describes the approximate
behaviour of the system for a given set of inputs.
2. For a large-scale simulation, considerable effort may be required to build a suitable model,
as well as substantial computer time to run the simulations.
4.10 SUMMARY
➢ The next point of discussion has been the various kinds of simulation models used in
management science.
2. What are some of the primary reasons for the widespread use of simulation techniques
in practice?
140 | P a g e
3. Simulation is
(a) useful for analysing problems that are difficult to handle analytically.
(b) a statistical experiment, and as such its results are subject to statistical error.
(c) descriptive rather than optimizing in nature.
(d) all of the above
(b) accurate
(c) estimate
(d) simplified.
11. One can increase the confidence that the results of a simulation are valid by
(a) using a discrete probability distribution in place of a continuous one.
(b) validating the simulation model.
(c) changing the input parameters.
(d) none of the above.
12. Biased random sampling is made from among possible choices which have
(a) unequal probabilities
(b) equal probabilities
(c) probabilities which do not sum to unity
(d) none of the above.
• Anderson, D., Sweeney, D., Williams, T., Martin, R.K. (2012). An introduction to
management science: quantitative approaches to decision making (13th ed.). Cengage
Learning.
• Balakrishnan, N., Render, B., Stair, R. M., & Munson, C. (2017). Managerial decision
modeling. Upper Saddle River, Pearson Education.
• Powell, S. G., & Baker, K. R. (2017). Business analytics: The art of modeling with
spreadsheets. Wiley.
143 | P a g e
LESSON 5
DECISION MAKING UNDER UNCERTAINTY
“A decision is the conclusion of a process by which one chooses between two or more available
courses of action for the purpose of attaining a goal”
STRUCTURE
144 | P a g e
5.2 INTRODUCTION
Humans make a lot of decisions every day, and occasionally we make ones that could have a
significant impact on our lives both now and in the future. The capability of making good
judgments on time has a significant impact on the success or failure that an individual or
organisation experiences. We would prefer to make the right choice when it comes to key
decisions like where to attend college, whether to buy or rent a car, and other similar choices.
When a decision maker is presented with multiple option possibilities and an unclear or risk-
filled pattern of future occurrences, decision analysis can be utilised to create the best course
of action. For instance, the State of North Carolina used decision analysis in deciding whether
to deploy a medical screening test to identify metabolic disorders in newborns.
Decision analysis therefore consistently demonstrates its value in decision making. Even
when a thorough decision analysis has been performed, unforeseen future circumstances cast
doubt on the outcome. The chosen decision alternative may occasionally produce good or
great results. In other circumstances, a hypothetical future occurrence might materialise and
render the chosen decision alternative only mediocre or worse. The uncertainty surrounding
the outcome is a direct cause of the risk attached to any chosen alternative. Risk analysis is a
key component of a sound decision analysis. The decision-maker is given probability
information about both potential positive and negative outcomes through risk analysis.
Decision making under risk and uncertainty is a fact of life. In decision making under pure
uncertainty, the decision maker has no knowledge regarding any of the states of nature
outcomes, and/or it is costly to obtain the needed information. There are many ways of
handling unknowns when making a decision. We will try to enumerate the most common
methods used to get information prior to decision making under risk and uncertainty.
145 | P a g e
When making decisions under uncertainty, decision makers are completely in the dark
regarding the likelihood of various outcomes. In other words, they are unsure of how likely
(or unlikely) a particular scenario is. For instance, it is impossible to forecast the likelihood
that Mr. X will serve as the nation's prime minister for the ensuing 15 years.
When it is impossible to quantify the probability of a result, the decision-maker must base
their choice only on the conditional payoff values themselves, keeping the effectiveness
standard in mind.
Under conditions of uncertainty, only the payoffs are known and the chance of occurrence of
any state of nature is unknown. The following are the criteria of decision making under
uncertainty:
(i) Optimism (Maximax or Minimin) criterion
(ii) Pessimism (Maximin or Minimax) criterion
(iii) Equal Probabilities (Laplace) criterion
(iv) Coefficient of optimism (Hurwicz) criterion
(v) Regret (Salvage) criterion
This criterion ensures that the decision-maker does not miss the chance to choose the
strategy that corresponds to the largest possible profit (maximax) or the lowest possible cost
(minimin). So, out of all the alternatives, he selects the decision alternative that maximizes
the maximum payoff (or minimizes the minimum payoff).
Step 1: Determine the maximum (or minimum) possible payoff corresponding to each
alternative.
Step 2: Select that decision alternative which corresponds to the maximum (or minimum) of
the above maximum (minimum) payoffs.
146 | P a g e
Because this criterion finds the option with the overall highest reward feasible while adopting
a very optimistic future outlook, it is called the optimistic criterion.
Which strategy should the concerned executive choose using optimistic criterion?
Solution: From Table 2 we see that under the optimistic criterion the executive's maximax
choice is the first strategy. The payoff of Rs 7,00,000 is the maximum of the maximum payoffs
(7,00,000; 5,00,000; and 3,00,000) of the three strategies.
Table 2
States of Nature        Strategy 1     Strategy 2     Strategy 3
                         7,00,000       5,00,000       3,00,000
                         3,00,000       4,50,000       3,00,000
                         1,50,000              0       3,00,000
Column (maximum)         7,00,000       5,00,000       3,00,000
Maximax payoff = 7,00,000 (Strategy 1)
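A minimal Python sketch of the optimism and pessimism criteria, applied to the payoff matrix of Example 1 (amounts in Rs; the labels S1, S2, S3 for the three strategies are added here only for readability):

# Payoffs of Example 1: each list holds a strategy's payoffs over the three states of nature
payoffs = {
    "S1": [700000, 300000, 150000],
    "S2": [500000, 450000, 0],
    "S3": [300000, 300000, 300000],
}

# Optimism (maximax): pick the strategy whose best payoff is largest
maximax = max(payoffs, key=lambda s: max(payoffs[s]))
# Pessimism (maximin): pick the strategy whose worst payoff is largest
maximin = max(payoffs, key=lambda s: min(payoffs[s]))

print("Maximax choice:", maximax, "with payoff", max(payoffs[maximax]))  # S1, 700000
print("Maximin choice:", maximin, "with payoff", min(payoffs[maximin]))  # S3, 300000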
147 | P a g e
Step 1: Determine the minimum (or maximum) possible cost for each alternative.
Step 2: Choose that alternative which corresponds to the maximum of the above minimum
payoffs (or minimum of the above maximum cost).
Example 2: Use the data given in example 1 and find that which strategy should the
concerned executive choose using pessimistic criterion?
Solution: From Table 3 we see that under the pessimistic criterion the executive's maximin
choice is the third strategy. The payoff of Rs 3,00,000 is the maximum of the minimum payoffs
(1,50,000; 0; and 3,00,000) of the three strategies.
Table 3
States of Nature        Strategy 1     Strategy 2     Strategy 3
                         7,00,000       5,00,000       3,00,000
                         3,00,000       4,50,000       3,00,000
                         1,50,000              0       3,00,000
Column (minimum)         1,50,000              0       3,00,000
Maximin payoff = 3,00,000 (Strategy 3)
Since the probabilities of states of nature are unknown, it is assumed that all states of nature
will occur with equal probability meaning that all possible events have an equal chance of
happening.
Step 1: Assign an equal probability to each state of nature using the formula: probability of each state of nature = 1 ÷ (number of possible states of nature).
148 | P a g e
Step 2: Compute the expected (or average) payoff of each alternative (course of action) by
adding all its payoffs and dividing by the number of possible states of nature, or equivalently
by applying the formula:
Expected payoff of alternative i = Σj (probability of state of nature j) × (payoff for the
combination of alternative i and state of nature j).
Step 3: Select the best expected payoff value (maximum for profit and minimum for cost).
Example 3: Use the data given in example 1 and find that Which strategy should the
concerned executive choose using equal probabilities criterion?
Solution: Assume that each state of nature has a probability 1/3 of occurrence. From Table 4,
using the equal-probabilities criterion, we see that the largest expected return comes from the
first strategy, so the executive must select it.
Table 4
Strategy      Payoffs under the States of Nature (Rs)    Expected Return (Rs)
Strategy 1    7,00,000   3,00,000   1,50,000             (7,00,000 + 3,00,000 + 1,50,000)/3 = 3,83,333.33  (largest payoff)
Strategy 2    5,00,000   4,50,000          0             (5,00,000 + 4,50,000 + 0)/3 = 3,16,666.67
Strategy 3    3,00,000   3,00,000   3,00,000             (3,00,000 + 3,00,000 + 3,00,000)/3 = 3,00,000
Step 1: Choose an appropriate degree of optimism of the decision maker. Let α denote the
degree of optimism; then (1 − α) is the degree of pessimism, where 0 ≤ α ≤ 1.
149 | P a g e
Step 2: Determine the maximum as well as the minimum payoff for each alternative and
obtain the weighted quantity
H = α × (maximum payoff) + (1 − α) × (minimum payoff)
for each alternative. (For cost data, the minimum cost is the best payoff and the maximum cost
is the worst, so H = α × (minimum cost) + (1 − α) × (maximum cost).) Select the alternative
with the best value of H.
Example 4: A manufacturer currently spends Rs 1,000 per year on a chemical X, whose price
may soon rise to four times its present figure. Chemicals Y and Z used together would give the
same effect and would together cost Rs 3,000 per year, with prices unlikely to rise (this is the
situation described in review question 13 below). Given a coefficient of optimism of 0.4, which
course of action should be chosen?
Solution: The data of the problem are summarized in the following table (the negative figures
are costs, i.e., negative payoffs).
Table 5
States of Nature                      Courses of Action
                                      Use Y and Z      Use X
Price of X increases                  -3,000           -4,000
Price of X does not increase          -3,000           -1,000
Given the coefficient of optimism α = 0.4, the coefficient of pessimism is 1 − 0.4 = 0.6. Then,
according to the Hurwicz criterion, select the course of action that optimizes (maximum for
profit, minimum for cost) the weighted payoff value H.
Table 6
Course of Action    Best Payoff    Worst Payoff    H = 0.4 × (Best) + 0.6 × (Worst)
Use Y and Z         -3,000         -3,000          -3,000
Use X               -1,000         -4,000          -2,800
Since the course of action 'use X' has the least weighted cost, 0.4 (1,000) + 0.6 (4,000) =
Rs 2,800, the manufacturer should adopt it.
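The same Hurwicz calculation can be sketched in a few lines of Python; the cost figures below are those of Table 5, written here as positive costs, and α = 0.4 as in the example:

# Hurwicz criterion for the cost data of Table 5 (costs in Rs, written as positive numbers)
alpha = 0.4  # coefficient of optimism

costs = {
    "Use Y and Z": [3000, 3000],   # cost is 3,000 whether or not the price of X rises
    "Use X":       [4000, 1000],   # 4,000 if the price of X rises, 1,000 otherwise
}

def hurwicz_cost(values, alpha):
    # For costs, the "best" outcome is the minimum and the "worst" is the maximum.
    return alpha * min(values) + (1 - alpha) * max(values)

for action, values in costs.items():
    print(action, "->", hurwicz_cost(values, alpha))
# Use Y and Z -> 3000.0, Use X -> 2800.0, so "Use X" has the lower weighted cost.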
150 | P a g e
The final decision criterion that we explore is based on opportunity loss. This criterion is also
called opportunity loss decision criterion or minimax regret decision criterion. The
discrepancy between the optimal payoff and the actual payoff obtained is referred to as
opportunity loss. In other words, it represents the amount lost as a result of choosing the
wrong alternative. The regret (Savage) criterion identifies the decision alternative that
minimizes the maximum opportunity loss.
Step 1: From the given payoff matrix, develop an opportunity-loss (or regret) matrix as
follows:
(i) Find the best payoff corresponding to each state of nature.
(ii) Subtract all other payoff values in that row from this value.
Step 2: For each decision alternative, identify the worst (maximum) regret value and record it
in a new row.
Step 3: Select the decision alternative with the smallest of these maximum regret values.
Example 5: Using the same data as Example 1, which strategy should the concerned executive
choose under the regret criterion?
Table 7
States of Nature    Strategy 1                        Strategy 2                        Strategy 3                        Best Payoff
                    7,00,000 - 7,00,000 = 0           7,00,000 - 5,00,000 = 2,00,000    7,00,000 - 3,00,000 = 4,00,000    7,00,000
                    4,50,000 - 3,00,000 = 1,50,000    4,50,000 - 4,50,000 = 0           4,50,000 - 3,00,000 = 1,50,000    4,50,000
                    3,00,000 - 1,50,000 = 1,50,000    3,00,000 - 0 = 3,00,000           3,00,000 - 3,00,000 = 0           3,00,000
Column (maximum)    1,50,000                          3,00,000                          4,00,000
Minimax regret = 1,50,000, so Strategy 1 is selected.
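A minimal Python sketch of the regret criterion for the payoffs of Example 1 (the strategy labels are added only for readability):

# Minimax regret for the payoffs of Example 1 (rows = states of nature)
payoff_rows = [
    [700000, 500000, 300000],
    [300000, 450000, 300000],
    [150000,      0, 300000],
]
strategies = ["S1", "S2", "S3"]

# Regret in each cell = (best payoff in that row) - (payoff in the cell)
regret_rows = [[max(row) - x for x in row] for row in payoff_rows]

# Maximum regret of each strategy (column-wise maximum)
max_regret = [max(col) for col in zip(*regret_rows)]
best = strategies[max_regret.index(min(max_regret))]

print("Maximum regrets:", dict(zip(strategies, max_regret)))  # {'S1': 150000, 'S2': 300000, 'S3': 400000}
print("Minimax regret choice:", best)                          # S1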
151 | P a g e
IN-TEXT QUESTIONS
9. A course of action that may be chosen by a decision maker is called an ……
10. In the Hurwicz criterion, the coefficient of realism describes the degree of
…….
11. …….. decision making criterion uses an opportunity loss decision.
The decision-maker has access to enough data to estimate the likelihood of each event (state
of nature). A decision maker is considered to make risky decisions when he selects one
alternative out of numerous that have known probabilities of occurrence. From the past data,
several outcomes' probability can be calculated. The decision-maker may frequently base
their choices on personal beliefs about what will happen in the future or on information
gleaned from market research, the opinions of experts, etc. The issue can be resolved as a
decision problem under risk.
Under the condition of risk, one of the most common approaches is to choose the alternative
with the highest expected monetary value (EMV) of the payoff.
The ideas of expected opportunity loss and expected value of perfect information are also
discussed.
The expected monetary value (EMV) for a given course of action is obtained by adding the
payoff values multiplied by the probabilities of the corresponding states of nature.
Mathematically, the EMV of course of action i is
EMV(i) = Σj (probability of state of nature j) × (payoff of course of action i under state of nature j).
152 | P a g e
Example 6: Mr X flies quite often from town A to town B. He can use the airport bus which
costs Rs 25 but if he takes it, there is a 0.08 chance that he will miss the flight. The stay in a
hotel costs Rs 270, with a 0.96 chance of being on time for the flight. For Rs 350 he can use a
taxi, which gives a 99% chance of being on time for the flight. If Mr X catches the plane
on time, he will conclude a business transaction that will produce a profit of Rs 10,000,
otherwise he will lose it. Which mode of transport should Mr X use? Answer on the basis of
the EMV criterion.
Solution: Computation of EMV associated with various courses of action is shown in table 8.
Table 8
Course of action: Bus
   Catches the flight: payoff = 10,000 - 25 = 9,975; probability 0.92; expected value 9,177
   Misses the flight:  payoff = -25; probability 0.08; expected value -2
   EMV = Rs 9,175
Course of action: Stay in Hotel
   Catches the flight: payoff = 10,000 - 270 = 9,730; probability 0.96; expected value 9,340.80
   Misses the flight:  payoff = -270; probability 0.04; expected value -10.80
   EMV = Rs 9,330
Course of action: Taxi
   Catches the flight: payoff = 10,000 - 350 = 9,650; probability 0.99; expected value 9,553.50
   Misses the flight:  payoff = -350; probability 0.01; expected value -3.50
   EMV = Rs 9,550
Since EMV associated with course of action ‘Taxi’ is largest (= Rs 9,550), it is the logical
alternative.
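The EMV calculation of Example 6 can be reproduced with a minimal Python sketch (all amounts in Rs):

# EMV calculation for Example 6
profit_if_on_time = 10000
options = {
    # option: (fare, probability of catching the flight)
    "Bus":   (25,  0.92),
    "Hotel": (270, 0.96),
    "Taxi":  (350, 0.99),
}

for name, (fare, p_on_time) in options.items():
    emv = p_on_time * (profit_if_on_time - fare) + (1 - p_on_time) * (-fare)
    print(name, "EMV =", round(emv, 2))
# Bus EMV = 9175.0, Hotel EMV = 9330.0, Taxi EMV = 9550.0 -> choose the taxi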
Example 7: A company manufactures goods for a market in which the technology of the
product is changing rapidly. The research and development department has produced a new
product that appears to have potential for commercial exploitation. A further Rs 60,000 is
required for development testing. The company has 100 customers and each customer might
purchase, at most, one unit of the product. Market research suggests a selling price of
Rs 6,000 per unit, with the total variable costs of manufacturing and selling estimated at
Rs 2,000 per unit.
From previous experience, it has been possible to derive a probability distribution for the
proportion of customers who will buy the product, as follows:
Proportion of customers: 0.04 0.08 0.12 0.16 0.20
Probability: 0.10 0.10 0.20 0.40 0.20
Determine the expected opportunity losses, given no other information than that stated above,
and check whether or not the company should develop the product.
Solution: If p is the proportion of customers who purchase the new product, the company's
conditional profit is (6,000 - 2,000) × 100p - 60,000 = Rs (4,00,000p - 60,000).
Let the possible states of nature be the proportions p = 0.04, 0.08, 0.12, 0.16 and 0.20 of
customers who will buy the new product, and let 'develop the product' and 'do not develop the
product' be the two courses of action. The conditional profit values (payoffs) for each
combination of state of nature and course of action are shown in Table 9.
154 | P a g e
Table 9
Proportion of Customers        Conditional Profit = Rs (4,00,000p - 60,000)
(States of Nature), p          Develop          Do not Develop
0.04                           -44,000          0
0.08                           -28,000          0
0.12                           -12,000          0
0.16                             4,000          0
0.20                            20,000          0
Using the given probabilities of the states of nature, the expected opportunity loss (EOL) of
each course of action is:
EOL (develop) = 0.10 (44,000) + 0.10 (28,000) + 0.20 (12,000) + 0.40 (0) + 0.20 (0) = Rs 9,600
EOL (do not develop) = 0.10 (0) + 0.10 (0) + 0.20 (0) + 0.40 (4,000) + 0.20 (20,000) = Rs 5,600
Since the company seeks to minimize the expected opportunity loss, it should select the course
of action 'do not develop the product', which has the minimum EOL. A short sketch verifying
this calculation is given below.
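A minimal Python sketch of the EOL calculation for Example 7 (amounts in Rs; 'develop' and 'do not develop' are the two courses of action):

# Expected opportunity loss (EOL) for Example 7
proportions = [0.04, 0.08, 0.12, 0.16, 0.20]
probabilities = [0.10, 0.10, 0.20, 0.40, 0.20]

# Conditional profit (Rs) of "develop": 4,00,000*p - 60,000; of "do not develop": 0
profit_develop = [400000 * p - 60000 for p in proportions]
profit_not_develop = [0] * len(proportions)

eol_develop = eol_not_develop = 0.0
for prob, d, nd in zip(probabilities, profit_develop, profit_not_develop):
    best = max(d, nd)                        # best payoff for this state of nature
    eol_develop += prob * (best - d)         # regret of developing
    eol_not_develop += prob * (best - nd)    # regret of not developing

print("EOL(develop) =", eol_develop)             # 9600.0
print("EOL(do not develop) =", eol_not_develop)  # 5600.0 -> do not develop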
Choosing a course of action that produces the intended results in the presence of any state of
nature is simple if the decision maker is able to obtain flawless (complete and accurate)
knowledge about the occurrence of various states of nature. The expected value of perfect
information (EVPI) may be defined as the maximum sum a person would be willing to pay to
obtain perfect knowledge of which event will occur. Without any additional information, the
EMV or EOL criterion helps the decision-maker choose the course of action that maximises
the expected payoff. Mathematically, EVPI is stated as:
EVPI = (expected payoff with perfect information) − (maximum EMV without additional information),
which is also equal to the minimum expected opportunity loss (EOL).
155 | P a g e
Example 8: XYZ company manufactures parts for passenger cars and sells them in lots of
10,000 parts each. The company has a policy of inspecting each lot before it is actually
shipped to the retailer. Five inspection categories, established for quality control, represent
the percentage of defective items contained in each lot. These are given in the following
table. The daily inspection chart for the past 100 inspections showed the following rating
breakdown (Rating – Proportion of defective items – Frequency; the corresponding
probabilities appear in Table 11). On this basis the management is considering two possible
courses of action:
(i) Shut down the entire plant operations and thoroughly inspect each machine.
(ii) Continue production as it now exists, but offer the customer a refund for defective
items that are discovered and subsequently returned.
The first alternative will cost Rs 600, while the second alternative will cost the company Rs 1
for each defective item that is returned. What is the optimum decision for the company? Find
the EVPI.
Solution: Calculations of the inspection and refund costs are shown in Table 11.
156 | P a g e
Table 11
Rating   Defective Rate   Probability   Cost (Rs)              Opportunity Loss (Rs)
                                        Inspect    Refund      Inspect    Refund
A        0.02             0.25          600        200         400        0
B        0.05             0.30          600        500         100        0
C        0.10             0.20          600        1,000       0          400
D        0.15             0.20          600        1,500       0          900
E        0.20             0.05          600        2,000       0          1,400
Total                     1.00          Expected cost:         EOL:
                                        600        800         130        330
Since the expected cost of the refund policy (Rs 800) is greater than the expected cost of
inspection (Rs 600), the plant should be shut down for inspection. Also, EVPI = EOL of the
inspection alternative = Rs 130. A short sketch recomputing these figures is given below.
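The expected costs, opportunity losses and EVPI of Example 8 can be recomputed from the data of Table 11 with a minimal Python sketch:

# Expected cost, EOL and EVPI for Example 8 (costs in Rs per lot of 10,000 parts)
defective_rate = [0.02, 0.05, 0.10, 0.15, 0.20]
probability    = [0.25, 0.30, 0.20, 0.20, 0.05]

inspect_cost = [600] * 5                                 # shut down and inspect
refund_cost  = [10000 * d * 1 for d in defective_rate]   # Rs 1 per defective item returned

exp_inspect = sum(p * c for p, c in zip(probability, inspect_cost))
exp_refund  = sum(p * c for p, c in zip(probability, refund_cost))

# Expected cost under perfect information: take the cheaper action for each rating
exp_perfect = sum(p * min(i, r) for p, i, r in zip(probability, inspect_cost, refund_cost))
evpi = min(exp_inspect, exp_refund) - exp_perfect

print("Expected cost (inspect):", exp_inspect)   # 600.0
print("Expected cost (refund): ", exp_refund)    # 800.0
print("EVPI:", evpi)                              # 130.0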
IN-TEXT QUESTIONS
4. The expected monetary value criterion is used for decision making under risk. True/False
5. The difference between the highest and the lowest EMV is said to be the EVPI. True/False
6. The payoff due to equally likely criterion of decision making is same as minimum…….
157 | P a g e
A decision tree can graphically represent any issue that can be expressed in a decision table.
The different decision-alternatives and the order of events are graphically represented by
decision trees as tree branches. Similar to a network, a decision tree is made up of nodes (or
points) and arcs (or lines). When building a tree diagram, two kinds of nodes are used:
decision (choice) nodes and state-of-nature (chance) nodes. These nodes are depicted by the
following symbols:
□ A decision point (or node). Branches (arcs) coming from the decision point (nodes)
denote all decision alternatives available to the decision maker at that point. The decision-
maker must choose just one of these alternatives.
○ Situation of uncertainty (or an outcome node or event point). Arcs emanating from an
outcome node denote all outcomes that could occur at that node. Only one of these
possibilities will come true.
These occurrences, which may indicate customer demand or other factors, are not entirely
within the decision maker's control. The primary benefit of a tree diagram is that a following
act (referred to as a second act) to the occurrence of each event may also be portrayed. In the
tree diagram, the outcome (payoff) for each act-event combination may be shown at the
extremities of each branch. A generic decision tree diagram therefore has branches in the
order Act → Event → Act → Event, with the outcome (payoff) recorded at the end of each
branch.
158 | P a g e
Folding back or rolling back a decision tree is the process of analysing a decision tree to find
the best course of action. We start with the payoffs at the right-hand extreme of the tree and
work our way back to the first decision node. In folding back the decision tree, we use the
following two rules:
• Using the probability of each possible outcome at that node and the payoffs associated
with those outcomes, we compute the expected payoff at each outcome node.
• At each decision node we choose the alternative with the best expected payoff. If the
expected payoffs are profits, we choose the alternative with the highest value; if the
expected payoffs are costs, we choose the alternative with the smallest value.
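The folding-back procedure can be written as a short recursive function. The sketch below uses a small hypothetical two-alternative tree (not the tree of Figure 9.1), with profits as payoffs, so the decision rule is to maximise:

# Rolling back a decision tree (hypothetical example, payoffs in Rs)
# A decision node is ("D", [branch, ...]); a chance node is ("C", [(prob, branch), ...]);
# a leaf is just its payoff.

def rollback(node):
    if isinstance(node, (int, float)):          # leaf: payoff
        return node
    kind, branches = node
    if kind == "C":                             # chance node: expected value
        return sum(p * rollback(child) for p, child in branches)
    return max(rollback(child) for child in branches)  # decision node: best branch

tree = ("D", [
    ("C", [(0.6, 10000), (0.4, -2000)]),        # alternative 1: EMV = 5200
    ("C", [(0.5, 6000),  (0.5, 3000)]),         # alternative 2: EMV = 4500
])
print("EMV of the best alternative:", rollback(tree))   # 5200.0

# For cost trees, min() would replace max() at the decision nodes.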
Construct and evaluate the decision tree diagram for the above data. Show your workings for
evaluation.
159 | P a g e
Solution: The decision tree of the given problem along with necessary calculations is shown
in figure 9.1.
Figure 9.1 shows the decision tree, with the probability, payoff (in '000 Rs) and expected
payoff marked on each branch.
160 | P a g e
Figure 10.1 shows the rolled-back (folded-back) tree.
Since the EMV = Rs 10,160 at node D1 is highest, therefore the best strategy is to accept
course of action A first and if A is successful, then accept B.
161 | P a g e
IN-TEXT QUESTIONS
5.7 SUMMARY
The topic of decision analysis, which is an analytical and systematic method of analysing
decision making, is introduced in this chapter. We begin by outlining the procedures involved
in decision-making under two different conditions: (1) uncertainty and (2) risk. We use
criteria like maximax, maximin, criterion of realism, equally likely, and minimax regret to
determine the optimum options for decision-making when faced with ambiguity. We examine
the calculation and application of the expected monetary value (EMV), the expected
opportunity loss (EOL), and the expected value of perfect information (EVPI) for
decision-making under risk. For more complex problems requiring sequential
decision-making, decision trees are employed, and the expected value of sample information
(EVSI) can be calculated.
5.8 GLOSSARY
162 | P a g e
1. Alternative 5. False
2. Optimism 6. EOL criterion
3. Minimax regret 7. Decision Tree
4. True 8. Folding back
11. What techniques are used to solve decision-making problems under uncertainty?
Which technique results in an optimistic decision?
12. State the meanings of EMV and EVPI.
13. A manufacturer manufactures a product, of which the principal ingredient is a
chemical X. At the moment, the manufacturer spends Rs 1,000 per year on supply of
X, but there is a possibility that the price may soon increase to four times its present
figure because of a worldwide shortage of the chemical. There is another chemical Y,
which the manufacturer could use in conjunction with a third chemical Z, in order to
give the same effect as chemical X. Chemicals Y and Z would together cost the
manufacturer Rs 3,000 per year, but their prices are unlikely to rise. What action
should the manufacturer take? Apply the maximin and minimax criteria for decision-
making and give two sets of solutions. If the coefficient of optimism is 0.4, then find
the course of action that minimizes the cost.
14. The manager of a flower shop promises its customers delivery within four hours on all
flower orders. All flowers are purchased on the previous day and delivered to Parker
by 8.00 am the next morning. The daily demand for roses is as follows.
163 | P a g e
The manager purchases roses for Rs 10 per dozen and sells them for Rs 30. All unsold
roses are donated to a local hospital. How many dozens of roses should Parker order each
evening to maximize its profits? What is the optimum expected profit?
15. A large steel manufacturing company has three options with regard to production: (i)
produce commercially (ii) build pilot plant (iii) stop producing steel. The management
has estimated that their pilot plant, if built, has a 0.8 chance of high yield and a 0.2 chance
of low yield. If the pilot plant does show a high yield, management assigns a
probability of 0.75 that the commercial plant will also have a high yield. If the pilot
plant shows a low yield, there is only a 0.1 chance that the commercial plant will
show a high yield. Finally, management’s best assessment of the yield on a
commercial-size plant without building a pilot plant first has a 0.6 chance of high
yield. A pilot plant will cost Rs. 3,00,000. The profits earned under high and low yield
conditions are Rs. 1,20,00,000 and – Rs. 12,00,000 respectively. Find the optimum
decision for the company.
5.11 REFERENCES
• Powell, S. G., & Baker, K. R. (2017). Business analytics: The art of modeling with
spreadsheets. Wiley.
• Anderson, D., Sweeney, D., Williams, T., Martin, R.K. (2012). An introduction to
management science: quantitative approaches to decision making (13th ed.). Cengage
Learning.
• Balakrishnan, N., Render, B., Stair, R. M., & Munson, C. (2017). Managerial decision
modeling. Upper Saddle River, Pearson Education.
164 | P a g e
LESSON 6
PROJECT SCHEDULING
Dr. Sandeep Mishra
Assistant Professor
Shaheed Rajguru College of Applied Sciences for Women.
University of Delhi
Email Id: [email protected]
STRUCTURE
• Determine earliest start, earliest finish, latest start, latest finish, and slack times for
each activity.
• Understand the impact of variability in activity times on the project completion time.
• Develop resource loading charts to plan, monitor, and control the use of various
resources during a project.
• Understand the time-cost trade-off procedure.
6.2 INTRODUCTION
Have you ever overseen a significant event? You might have served as the prom committee
chair or the board chair for the graduation ceremony in high school. You might have led your
team during the introduction of a new product, the planning of a facility expansion, or the
implementation of enterprise resource planning. Even if you have never managed people in
such circumstances, you have undoubtedly had your own personal projects to contend with,
such as writing a paper, moving to a new apartment, applying to college, or selling a house.
As a volunteer, you may have overseen the annual function, the elementary school picnic, or
the river clean-up project. How did you plan your day's events? Most of your projects, did
they finish on time? How did you handle unforeseen circumstances? Did you finish your
work on time and on budget? All of these are crucial components of project management.
Project managers with expertise are essential assets for organisations since they handle
projects frequently.
The listing of activities, deliverables, and milestones within a project constitutes scheduling
in project management. An activity's start and end dates, duration, and resources are typically
included in a schedule. Successful time management requires effective project scheduling,
especially for firms that provide professional services.
A project involves many interrelated activities (or tasks) that must be completed on or before
specified time limit, in a specified sequence (or order) with specified quality and minimum
cost of using resources such as personnel, money, material, facilities and/or space.
In this lesson we mainly focus on creating and managing schedules. This covers project
scheduling with known activity times using the well-known techniques PERT and CPM,
scheduling with uncertain activity times, and time-cost trade-offs.
166 | P a g e
Managers usually must plan, manage, and supervise projects that involve a variety of separate
jobs or tasks completed by several departments and individuals. These projects are typically
so large or intricate that management frequently struggles to remember every element
important to the plan, schedule, and development of the project. In these situations, both the
critical path method (CPM) and the programme evaluation and review technique (PERT)
have proven to be very helpful.
To aid in the planning and scheduling of the US Navy's massive Polaris nuclear submarine
missile programme, which involved thousands of activities, a research team created PERT in
1956-1958. The team's goal was to plan and build the Polaris missile system as efficiently as
possible.
CPM was created virtually simultaneously, between 1956 and 1958, by the E.I. DuPont
Company and the Remington Rand Corporation. Their goal was to develop a method for
keeping track of chemical plant maintenance.
A wide range of projects can be planned, scheduled, and managed using PERT and CPM:
• Research and development of new products and processes
• Construction of plants, buildings, and highways
• Maintenance of large and complex equipment
• Design and installation of new systems
Project managers are responsible for planning and coordinating the numerous tasks or
activities in these kinds of projects to ensure that everything is finished on time.
The primary difference between PERT and CPM is in the way the time needed for each
activity in a project is estimated. In PERT, each activity has three-time estimates that are
combined to determine the expected activity completion time and its variance.
PERT is considered a probabilistic technique; it allows us to find the probability that the
entire project will be completed by a specific due date. In PERT analysis emphasis is given
167 | P a g e
on the completion of a task rather than the activities required to be performed to complete a
task. Thus, PERT is also known as an event-oriented technique. PERT is used for one-time
projects that involve activities of non-repetitive nature (i.e. activities that may never have
been performed before), where completion times are uncertain.
In contrast, CPM is a deterministic approach. It estimates the completion time of each activity
using a single time estimate. This estimate, called the standard or normal time, is the time we
estimate it will take under typical conditions to complete the activity. In some cases, CPM
also associates a second time estimate with each activity. This estimate, called the crash time,
is the shortest time it would take to finish an activity if additional funds and resources were
allocated to the activity. CPM is used for projects that involve activities of a repetitive
nature.
The objective of critical path analysis is to predict the project's overall duration and give
starting and finishing durations to every activity involved. This makes it easier to compare
the project's actual progress to its projected completion date.
The expected duration of the project is estimated from the durations of the individual
activities, which may be given as single estimates (in the case of CPM) or derived from three
time estimates (in the case of PERT). The following elements need to be determined in order
to establish the project schedule.
i. Total completion time of the project.
ii. Earlier and latest start time of each activity.
iii. Critical activities and critical path.
iv. Float for each activity.
Notations:
Ei = Earliest occurrence time of event i. This is the earliest time at which the event can occur,
once all the preceding activities have been completed, without delaying the entire project.
Li = Latest allowable occurrence time of event i. This is the latest time at which the event can
occur without causing a delay in the project's completion time.
ESij = Early starting time of activity (i, j).
LSij = Late starting time of activity (i, j).
EFij = Early finishing time of activity (i, j).
LFij = Late finishing time of activity (i, j).
tij = Duration of activity (i, j).
168 | P a g e
There should only be one start event and one finish event in a project schedule. The other
events are numbered consecutively with integer 1, 2,…, n, such that i<j for any two events i
and j connected by an activity, which starts at i and finishes at j.
The schedule for each activity is created using a two-pass approach consisting of a forward
pass and a backward pass. The earliest times (Ei, ESij and EFij) are determined during the
forward pass. The latest times (Li, LSij and LFij) are determined during the backward pass.
According to this method, calculations start at the first event, let's say 1, move through the
events in increasing order of the event numbers, and finally stop at the last event, let's say N.
Each event's earliest occurrence time (E), as well as the earliest start and end times for each
activity that starts there, are determined. The project's earliest probable completion time is
determined by the event N's earliest occurrence time when calculations cease at that point.
1. Set the earliest occurrence time of the initial event 1 to zero. That is, E1 = 0.
2. Calculate the earliest start time of each activity that begins at event i. This is equal to the
earliest occurrence time of event i (the tail event). That is: ESij = Ei, for all activities (i, j)
starting at event i.
3. Calculate the earliest finish time of each activity that begins at event i. This is equal to the
earliest start time of the activity plus the duration of the activity. That is: EFij = ESij + tij =
Ei + tij, for all activities (i, j) beginning at event i.
5. Calculate the earliest occurrence time of event j. This is the maximum of the earliest
finish times of all activities ending at that event, that is: Ej = Max {EFij} = Max {Ei + tij},
taken over all immediate predecessor activities.
169 | P a g e
6. If j = N (the final event), then the earliest finish time for the project, i.e. the earliest
occurrence time of the final event, is given by EN = Max {EFij} = Max {Ei + tij}, taken over
all terminal activities.
The computations in this technique start with the final event N, move through the events in
decreasing order of the event numbers, and finally arrive at the first event 1. The latest
occurrence time (L) of each event, as well as the latest start and finish times of each activity
ending there, are determined. The process is repeated until the initial event is reached.
1. Set the latest occurrence time of the last event N equal to its earliest occurrence time
(known from the forward pass). That is, LN = EN.
2. Calculate the latest finish time of each activity which ends at event j. This is equal to the
latest occurrence time of event j. That is: LFij = Lj, for all activities (i, j) ending at event j.
3. Calculate the latest start time of each activity ending at event j. This is obtained by
subtracting the duration of the activity from its latest finish time. That is: LSij = LFij - tij =
Lj - tij, for all activities (i, j) ending at event j.
5. Calculate the latest occurrence time of event i (i < j). This is the minimum of the latest
start times of all activities starting from that event. That is: Li = Min {LSij} = Min {Lj - tij},
taken over all immediate successor activities.
6. If j = 1 (the initial event), then the latest occurrence time of the initial event is given by:
L1 = Min {LSij} = Min {Lj - tij}, taken over all immediate successor activities.
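A minimal Python sketch of the forward and backward passes for a small hypothetical activity-on-arrow network (events numbered so that i < j for every activity (i, j); the durations are invented for illustration):

# Forward and backward pass for a small hypothetical network.
# activities: {(tail event i, head event j): duration t_ij}
activities = {(1, 2): 2, (1, 3): 4, (2, 4): 6, (3, 4): 5, (4, 5): 2}
events = sorted({e for ij in activities for e in ij})

E = {events[0]: 0}                                    # earliest occurrence times
for j in events[1:]:
    E[j] = max(E[i] + t for (i, jj), t in activities.items() if jj == j)

L = {events[-1]: E[events[-1]]}                       # latest occurrence times
for i in reversed(events[:-1]):
    L[i] = min(L[j] - t for (ii, j), t in activities.items() if ii == i)

print("E:", E)                          # {1: 0, 2: 2, 3: 4, 4: 9, 5: 11}
print("L:", {e: L[e] for e in events})  # {1: 0, 2: 3, 3: 4, 4: 9, 5: 11}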
The amount of time that a non-critical activity or event can be postponed or prolonged
without extending the overall project completion schedule is known as the float (slack) or
170 | P a g e
free time. Finding the amount of slack time, or spare time, that each activity has is easy once
we have determined the earliest and latest timings for all activities. Slack is the amount of
time an activity may be postponed without causing the project as a whole to lag. In a project,
there are three different sorts of floats for each non-critical activity.
(a) Total float: This is the amount of time that an activity may be put off until all activities
that came before it were finished as soon as possible and all activities that followed it could
be put off until the latest time that was permitted.
For each non-critical activity (i, j), the total float is equal to the latest allowable time of the
event at the end of the activity minus the earliest time of the event at the beginning of the
activity, minus the activity duration. Mathematically,
Total Float, TFij = Lj - Ei - tij
(b) Free float: This is the amount of time by which the completion of a non-critical activity
can be delayed without affecting its immediately succeeding activities. The free float of a
non-critical activity (i, j) is computed as:
Free Float, FFij = (Ej - Ei) - tij
(c) Independent float: This is the length of time by which a non-critical activity can be
delayed without affecting the completion times of the activities that precede or follow it. The
independent float of a non-critical activity (i, j) is calculated as:
Independent Float, IFij = (Ej - Li) - tij
Certain activities in any project are called critical activities because delay in their execution
will cause further delay in the project completion time. All activities having zero total float
value are identified as critical activities, i.e., L = E.
The critical path is the sequence of critical activities between the start event and end event of
a project. This is critical in the sense that if execution of any activity of this sequence is
delayed, then completion of the project will be delayed. A critical path is shown by a thick
line or double lines in the network diagram. The length of the critical path is the sum of the
171 | P a g e
individual completion times of all the critical activities and defines the longest time required
to complete the project. The critical path in a network diagram can be identified as follows:
i. If the E-value and the L-value are equal for both the tail and the head event of an
activity, i.e., Ei = Li and Ej = Lj, then activity (i, j) is a critical activity.
ii. On the critical path, Ej - Ei = Lj - Li = tij.
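Continuing with the same small hypothetical network used in the earlier sketch (its E and L values are copied from that sketch's output), the total floats and the critical activities can be obtained directly:

# Total float and critical activities for the hypothetical network used above
durations = {(1, 2): 2, (1, 3): 4, (2, 4): 6, (3, 4): 5, (4, 5): 2}
E = {1: 0, 2: 2, 3: 4, 4: 9, 5: 11}
L = {1: 0, 2: 3, 3: 4, 4: 9, 5: 11}

total_float = {(i, j): L[j] - E[i] - t for (i, j), t in durations.items()}
critical = [ij for ij, tf in total_float.items() if tf == 0]

print("Total float:", total_float)        # (1,2): 1, (1,3): 0, (2,4): 1, (3,4): 0, (4,5): 0
print("Critical activities:", critical)   # [(1, 3), (3, 4), (4, 5)] -> critical path 1-3-4-5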
Example 1: An established company has decided to add a new product to its line. It will buy
the product from a manufacturing concern, package it, and sell it to a number of distributors
that have been selected on a geographical basis. Market research has already indicated the
volume expected and the size of sales force required. The steps shown in the following table
are to be planned.
172 | P a g e
As the figure shows, the company can begin to organize the sales office, design the package,
and order the stock immediately. Also, the stock must be ordered and the packing facility
must be set up before the initial stocks are packaged.
(a) Draw an arrow diagram for this project.
(b) Indicate the critical path.
(c) For each non-critical activity, find the total and free float.
Solution: (a) The arrow diagram for the given project, along with E-values and L-values, is
shown in Fig.1. Determine the earliest start time – Ei and the latest finish time – Lj for each
event by proceeding as follows:
173 | P a g e
(b) The critical path in the network diagram (Fig.1) has been shown. This has been done by
double lines by joining all those events where E-values and L-values are equal. The critical
path of the project is: 1 – 2 – 5 – 6 – 9 – 10 and critical activities are A, B, C, L and M. The
total project completion time is 25 weeks.
(c) For each non-critical activity, the total float and free float calculations are shown in
Table1.
174 | P a g e
IN-TEXT QUESTIONS
12. The objective of the project scheduling is to minimize total project cost. True
/ False
13. The CPM is used for completing the projects that involves activities of
repetitive nature. True / False
14. PERT is referred to as an activity-oriented technique. True / False
15. _____________is the time-consuming job or task that is a key subpart of the
total project.
We used the CPM technique, which assumes that all activity times are known and fixed
constants, to find all earliest and latest times to date as well as the related critical path(s). In
other words, activity times are treated as constants. However, it is possible that other factors
will affect how quickly a task is completed. PERT was developed to handle projects where the
time duration of each activity is not known with certainty but is a random variable
characterized by a beta (β) distribution. To estimate the parameters (mean and variance) of
the β-distribution, three time estimates are required for each activity in order to calculate its
expected completion time. The necessary three time estimates are:
• the optimistic time (to): the shortest time in which the activity can be completed if
everything goes exceptionally well;
• the most likely time (tm): the time the activity would most often require under normal
conditions;
• the pessimistic time (tp): the longest time the activity could take if everything goes wrong.
From these estimates, the expected time and variance of each activity are computed as
te = (to + 4tm + tp)/6 and σ² = [(tp - to)/6]².
175 | P a g e
The β-distribution is not necessarily symmetric; the degree of skewness depends on the
location of the most likely time tm relative to to and tp. The range [to, tp] is assumed to
enclose every possible duration of the activity.
If the durations of the activities are random variables, the variance of the length of the
critical path is calculated by adding the variances of the individual critical activities. Let σ
denote the standard deviation of the critical path length, i.e., σ = √(sum of the variances of
the critical activities).
Because of the uncertain activity completion times, the project's actual completion time may
differ from the scheduled completion time, so the decision-maker needs to know the
probability that the scheduled time will be met. Using the central limit theorem, the
probability distribution of the completion time of an event can be approximated by a normal
distribution. Thus, the probability of completing the project within the scheduled time Ts is
given by
P(T ≤ Ts) = P(Z ≤ Z0), where Z0 = (Ts - Te)/σ
is the number of standard deviations by which the scheduled completion time differs from
the mean (expected) completion time Te.
176 | P a g e
Example 2: The following network diagram represents activities associated with a project:
(a) The expected completion time and variance of each activity are obtained from its three
time estimates using te = (to + 4tm + tp)/6 and σ² = [(tp - to)/6]².
(b) The earliest and latest expected completion time for all events considering the expected
completion time of each activity are shown in Table 3.
177 | P a g e
Table 3
178 | P a g e
(c) The critical path is shown by thick line in Fig. 2 where E-values and L-values are the
same. The critical path is: 1 – 4 – 7 and the expected completion time for the project is 42.8
weeks.
(d) Expected length of the critical path, Te = 33 + 9.8 = 42.8 weeks (project duration).
Variance of the critical path length, σ² = 5.429 + 0.694 = 6.123, so σ = √6.123 ≈ 2.474 weeks.
Since Ts = 41.5, Te = 42.8 and σ ≈ 2.474, the probability of meeting the scheduled time is
P(T ≤ 41.5) = P(Z ≤ (41.5 - 42.8)/2.474) = P(Z ≤ -0.53) ≈ 0.30.
Thus, the probability that the project can be completed in 41.5 weeks or less is about 0.30; in
other words, the probability that the project will be delayed beyond 41.5 weeks is about 0.70.
Suppose, further, that we want the scheduled time Ts for which the probability of completing
the project is 0.95. Given that P(Z ≤ Z0) = 0.95, we have Z0 = 1.64 from the normal
distribution table. Thus, Ts = Te + 1.64 σ = 42.8 + 1.64 × 2.474 ≈ 46.9 weeks.
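The probability figures in Example 2 can be checked with a minimal Python sketch that uses the normal approximation (SciPy provides the normal CDF and quantile function); small differences from the text arise only from rounding in the normal table:

# Normal-approximation calculations for Example 2
from math import sqrt
from scipy.stats import norm

expected_length = 42.8            # sum of expected times on the critical path (weeks)
variance = 5.429 + 0.694          # sum of variances of the critical activities
sigma = sqrt(variance)            # about 2.474 weeks

# Probability of finishing within the scheduled time of 41.5 weeks
z = (41.5 - expected_length) / sigma
print("P(T <= 41.5) =", round(norm.cdf(z), 4))        # about 0.30

# Scheduled time that gives a 95% chance of completion
t95 = expected_length + norm.ppf(0.95) * sigma
print("95% schedule time =", round(t95, 2), "weeks")  # about 46.9 weeks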
179 | P a g e
IN-TEXT QUESTIONS
CASE STUDY
Krishna Mills
Krishna Mills made the decision to construct a new feed mill in order to improve its production capacity. The
project was divided up into various tasks, some of which had to be finished before others could begin. The
activities, as well as the anticipated times for each, are listed in Exhibit 1 as "decided upon by management and
the precedence relationships." To save as much crucial time as possible while putting the new mill into service,
the management sought to move the schedule as far in advance as possible. The mills' president remarked, "If
we can get rolling, every week spared is worth Rs 70,000 in lost contribution." Several construction tasks could
be accelerated. For instance, by working extra hours, the company's architects could design the new plant in 10
weeks rather than the 12 weeks they had initially planned. The mills will have to pay an extra Rs 25,000 for
each week that is advanced due to this advancement. The following table displays the weekly crash cost as well
as the maximum amount that each activity could crash. The independent business that was a possible contractor
for one of the project's key duties, building the plant, had already been approached by the president of mills.
Krishna Mills expected to complete the remaining tasks either directly or via its representatives. The
management had discussed a few bonus and penalty provisions with the mill contractors. One of them was that
the mills would pay contractors an extra Rs 75,000 for each week the facility was finished before the allotted 10
weeks.
The management of Krishna Mills is interested in learning which operations would crash and how to
schedule its employees.
180 | P a g e
The initial creators of CPM gave the project manager the choice to allocate resources to tasks
in order to speed up project completion. The option to shorten activity times must consider
the increased expenses involved, as more resources (such as additional employees, overtime,
etc.) typically raise project costs. In essence, the decision that the project manager must make
entails exchanging decreased activity time for increased project cost.
It is usual for a project manager to encounter one or both of the following circumstances
while overseeing a project: the target completion date is earlier than the normal expected
completion time, and/or the project has fallen behind schedule. In either case, some or all of
the remaining tasks must be
expedited in order to complete the project by the target deadline. Crashing is the process of
reducing the length of a project in the most economical way possible. Reducing an activity's
duration below its normal (cost-efficient) point increases the cost of carrying out that
activity. For the sake of simplicity, it is assumed that the relationship between an activity's
normal time and cost as well as crash time and cost is linear. Therefore, by calculating the
relative change in the cost per unit change in time, the crash cost per unit of time may be
determined.
When all essential tasks are accomplished in accordance with schedule, crashing begins, and
it ends when all essential tasks have crashed. The process of determining time-cost trade-offs
for project completion can be summed up as follows:
Step 1: Determine the normal project completion time and associated critical path.
Step 2: Identify the critical activities and compute the cost slope for each of them by using
the relationship
Cost slope = (Crash cost - Normal cost) / (Normal time - Crash time)
181 | P a g e
The cost slope of a critical activity indicates the extra direct cost required per unit reduction
in the time of that activity.
Step 3: For reducing the total project completion time, identify and crash an activity time on
the critical path with lowest cost slope value to the point where
i. another path in the network becomes critical, or
ii. the activity has been crashed to its lowest possible time.
Step 4: If the critical path under crashing is still critical, return to step 3. However, if due to
crashing of an activity time in step 3, other path(s) in the network also become critical, then
identify and crash the activity(s) on the critical path(s) with the minimum joint cost slope.
Step 5: Terminate the procedure when each critical activity has been crashed to its lowest
possible time. Determine total project cost corresponding to different project durations.
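A minimal Python sketch of Step 2, using invented normal/crash data (the Rs 45 per week slope mirrors the figure used later in Example 3, but the other numbers are hypothetical):

# Cost slope = (crash cost - normal cost) / (normal time - crash time),
# illustrated with hypothetical data (times in weeks, costs in Rs).
activities = {
    # activity: (normal_time, normal_cost, crash_time, crash_cost)
    "1-2": (3, 500, 2, 600),
    "2-5": (9, 800, 7, 890),
    "5-6": (6, 700, 4, 790),
}

slopes = {
    a: (cc - nc) / (nt - ct)
    for a, (nt, nc, ct, cc) in activities.items()
    if nt > ct                       # only activities that can actually be crashed
}
cheapest = min(slopes, key=slopes.get)

print("Cost slopes (Rs/week):", slopes)   # {'1-2': 100.0, '2-5': 45.0, '5-6': 45.0}
print("Crash first:", cheapest)           # '2-5' (ties broken arbitrarily)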
Example 3: The data on normal time, cost and crash time and cost associated with a project
are shown in the following table.
182 | P a g e
Solution: (a) The network for normal activity times is shown in Fig. 3. The critical path is
1 – 2 – 5 – 6 – 7 – 8, with a project completion time of 32 weeks.
Fig. 3: Network for normal activity times, with the following event times marked on the
diagram: E1 = 0, L1 = 0; E2 = 3, L2 = 3; E3 = 4, L3 = 4; E4 = 10, L4 = 12; E5 = 12, L5 = 12;
E6 = 18, L6 = 18; E7 = 22, L7 = 22; E8 = 32, L8 = 32.
Total float of the non-critical activities, computed as (Lj - Ei) - tij:
2-3: (7 - 3) - 3 = 1
2-4: (12 - 3) - 7 = 2
3-5: (12 - 6) - 5 = 1
4-5: (12 - 10) - 0 = 2
6-8: (32 - 18) - 13 = 1
183 | P a g e
Critical activities considered for crashing (each with its crash cost per week): 1-2, 2-5, 5-6,
6-7 and 7-8.
The minimum crash cost per week is that of activities 2-5 and 5-6. Activity 2-5 could be
crashed by 2 weeks, from 9 weeks to 7 weeks; however, its time should be reduced by only
1 week, otherwise another path would become critical in parallel. The revised network,
shown in Fig. 4, gives a new project time of 31 weeks, and the critical paths are
1 – 2 – 5 – 6 – 7 – 8 and 1 – 2 – 3 – 5 – 6 – 7 – 8.
Fig. 4: Revised network after crashing activity 2-5 by one week, with event times E1 = 0,
L1 = 0; E2 = 3, L2 = 3; E3 = 6, L3 = 6; E5 = 11, L5 = 11; E6 = 17, L6 = 17; E8 = 31, L8 = 31.
184 | P a g e
Critical activities now considered for crashing (each with its crash cost per week): 1-2, 2-5,
2-3, 3-5, 5-6, 6-7 and 7-8.
Since the crash cost slope of activity 5-6 is now the minimum, its time is crashed by 2 weeks,
from 6 weeks to 4 weeks. The updated network diagram is shown in Fig. 5.
Fig. 5: Updated network after crashing activity 5-6 by two weeks, with event times E1 = 0,
L1 = 0; E2 = 3, L2 = 3; E3 = 6, L3 = 6; E4 = 10, L4 = 11; E5 = 11, L5 = 11; E6 = 15, L6 = 15;
E7 = 19, L7 = 19; E8 = 29, L8 = 29.
185 | P a g e
Crashed total cost = Total direct normal cost + Increased direct cost due to crashing of
activities 2-5 and 5-6 + Indirect cost for 29 weeks
= 4,220 + (1 × 45 + 2 × 45) + 50 × 29 = Rs 5,805
For revised network given in fig 5, new possibilities for crashing in the critical paths are
listed in table 7.
Table 7: Crash Cost Slope
Critical activities (with their crash cost per week, Rs): 1-2, 2-3, 2-5, 5-6, 6-7 and 7-8.
Crashing activity 6-7 further, from 4 weeks to 3 weeks, would increase the direct cost by
more than the gain from the reduction in project time. Hence, crashing is terminated. The
optimal project duration is 29 weeks, with an associated cost of Rs 5,805, as shown in Table 8.
Table 8: Crashing Schedule of Project
186 | P a g e
IN-TEXT QUESTIONS
8. The process of shortening the duration of a project in the least expensive
manner possible is called ____________________.
9. In time-cost trade-off function analysis the:
a) cost decreases linearly as time increases b) cost at normal time is zero
c) cost increases linearly as time decreases d) none of the above
6.6 SUMMARY
PERT (Program Evaluation and Review Technique) and CPM (Critical Path Method) have
been widely used to help project managers plan, schedule, and manage their projects ever
since they were developed in the late 1950s.
When using PERT/CPM, a project is first broken down into its separate activities, their
immediate predecessors are noted, and the time of each activity is estimated. The creation of
a project network to display this information is the next phase.
PERT/CPM produce project scheduling data, such as the earliest start time, latest start time,
and slack for each activity. Also, it outlines the actions that must be completed in a specific
order in order to avoid delays in project completion. Given that the critical path is the longest
path through the project network, if all activities proceed according to plan, the length of the
critical path establishes the project's duration.
Yet, because there is frequently a great deal of ambiguity over how long an activity will
really last, it is challenging for all activities to continue on schedule. By collecting three
different types of estimates (most likely, optimistic, and pessimistic) for the length of each
activity, the three-estimate approach in PERT addresses this dilemma. The mean and variance
of the probability distribution for this duration are approximately determined using this
information. The likelihood that the project will be completed by the deadline can then be
roughly calculated.
187 | P a g e
The project manager can analyse the impact on total cost of adjusting the expected duration
of the project to various alternative values using the time-cost trade-offs approach in CPM.
The data required for this analysis are the time and cost of each activity both when it is
carried out normally and when it is fully crashed (expedited).
6.7 GLOSSARY
• Activity: - A job or task that consumes time and is a key subpart of a total project.
188 | P a g e
6.10 REFERENCES
• Balakrishnan, N., Render, B., Stair, R. M., & Munson, C. (2017). Managerial decision
modeling. Upper Saddle River, Pearson Education.
• Anderson, D., Sweeney, D., Williams, T., Martin, R.K. (2012). An introduction to
management science: quantitative approaches to decision making (13th ed.). Cengage
Learning.
189 | P a g e
LESSON 7
MARKOV PROCESSES
Dr. Shubham Agarwal
Associate Professor
New Delhi Institute of Management
GGSIP University
[email protected]
STRUCTURE
7.2 INTRODUCTION
Andrei Markov was a Russian mathematician who lived from 1856 to 1922. The only subject
he did well in was math, and he had a dismal grade point average overall. Later, he was
taught the subject by Pafnuty Chebyshev, a mathematics lecturer at the University of
Petersburg who is well known for his work in probability theory. Markov first focused on
number theory, convergent series, and approximation theory as his three primary scientific
disciplines. His most famous research on Markov chains is where the phrase originates, and
his first article on the subject appeared in 1906.
The Markov chain is a fundamental mathematical tool for stochastic processes. The
Markov Property is the essential idea, according to which some stochastic process predictions
can be made more simply by treating the future as independent of the past in light of the
191 | P a g e
process's present state. This is done to make stochastic process future state forecasts simpler
to comprehend. This section will explore the principles of Markov chains, explain the
different types of Markov Chains, and provide instances of its use in business and finance.
Markov chains are used to determine the probability of moving from one state to another.
We will use the weather as an example: if today is sunny, there is a 70% chance that
tomorrow will be sunny and a 30% chance that it will be wet. If it rains today, there is a 20%
chance that tomorrow will be sunny and an 80% chance that it will rain again. This can be
summarised in a transition diagram, in which each possible state change is shown (Fig. 1).
A stochastic process is one whose outcomes depend on some element of chance. A stochastic
or random process is a collection of random variables that is indexed by a mathematical set,
which means that each random variable in the stochastic process is specifically linked to an
element in the set. The index set is the collection used to index the random variables.
Historically, the index set has been a subset of the real line, such as the natural numbers,
which gives the index set a temporal interpretation.
All the random variables in the collection take their values in the same mathematical set,
known as the state space. The real line, the integers, and n-dimensional Euclidean space are
a few examples of state spaces.
192 | P a g e
{X(t), t ∈T}, defined on some probability space (Ω, F, P), where T is a parameter
space, is referred to as a stochastic process. State space refers to the collection of all potential
values for all random variables, and states are its constituent parts.
The values assumed by a random variable X(t) are called states, and the collection of all
possible values forms the state space (S) of the process. If X(t) = i, then we say the process is
in state i.
a) Discrete state process: the state space is finite or countable, for example the non-negative
integers {0, 1, 2, 3, …}.
b) Continuous state process: the state space consists of finite or infinite intervals of the real
number line.
A stochastic process can be classified in different ways for example, by its state space, its
index set, or the dependence among the random variable.
a) Discrete/ Continuous time: A stochastic process is considered to be in discrete
time if the index set has a finite or countable number of elements, such as a finite set of
numbers, the set of integers, or the natural numbers. Discrete-time stochastic process is the
name given to this particular kind of stochastic process. Time is referred to as continuous and
stochastic processes are referred to as continuous - time stochastic processes if the index set
of the stochastic process is some interval of the real line.
b) Discrete/ Continuous state space: The stochastic process is referred to as a discrete or
integer-valued stochastic process if the state space consists of integers or natural numbers.
The stochastic process is known as a real-valued stochastic process or a process with
continuous state space if the state space is the real line.
193 | P a g e
A sequence of random variables {Xn, n = 0, 1, 2, 3, …} with a discrete state space is known as
a Markov chain if
Pr(Xn = k | Xn-1 = j, Xn-2 = jn-2, ……, X0 = j0) = Pr(Xn = k | Xn-1 = j) = pjk.
Example: Suppose X1, X2, …, Xn are random variables, each taking the value 0 or 1. Then the
partial sum (the present value) is
Sn = X1 + X2 + ………… + Xn ∈ {0, 1, 2, ……, n},
and the future value is Sn+1 = X1 + X2 + ………… + Xn + Xn+1 = Sn + Xn+1, which depends on
the past only through Sn. Therefore {Sn, n ≥ 1} is a Markov chain.
Let S be a state space, such that S = {0, 1, 2, …….} then the transition probability matrix is
given by,
194 | P a g e
          0     1     2    …
     0   p00   p01   p02   …
P =  1   p10   p11   p12   …
     2   p20   p21   p22   …
     …    …     …     …
where each row sums to one:
p00 + p01 + p02 + …… = 1
p10 + p11 + p12 + …… = 1, and so on.
Example: Suppose that the probability of a dry day (state 0) following a rainy day is 1/3 and
the probability of a rainy day (state 1) following a dry day is 1/2. Then we have a two-state
Markov chain with p10 = 1/3 and p01 = 1/2, and the transition probability matrix (TPM) is

          0      1
     0 [ 1/2    1/2 ]
P =
     1 [ 1/3    2/3 ]

Given that May 1 is a dry day, find the probability that May 3 is a dry day.
Solution: We are given that X1 = 0, i.e., May 1 is a dry day.
The probability that X3 = 0, i.e., that May 3 is a dry day, is the (0, 0) entry of P²:
P(X3 = 0 | X1 = 0) = p00^(2)

            [ 1/2  1/2 ] [ 1/2  1/2 ]     [ 5/12   7/12  ]
P² = P·P =  [ 1/3  2/3 ] [ 1/3  2/3 ]  =  [ 7/18  11/18  ]

Therefore, the required probability is p00^(2) = 5/12.
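The two-step calculation can be checked with a few lines of Python; the sketch below is only an illustration and assumes the NumPy library is available.

import numpy as np

# Transition matrix of the dry (0) / rainy (1) weather chain from the example.
P = np.array([[1/2, 1/2],
              [1/3, 2/3]])

P2 = np.linalg.matrix_power(P, 2)   # two-step transition probabilities
print(P2)          # approximately [[0.4167 0.5833], [0.3889 0.6111]]
print(P2[0, 0])    # 0.4166... = 5/12 = P(May 3 is dry | May 1 is dry)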
          0      1
     0 [ 1/2    1/2 ]
P =
     1 [ 1/3    2/3 ]
Then the transition graph is,
Example: Consider a Markov chain {Xn, n ≥ 0} with state space {1, 2, 3} and transition
matrix

          1     2     3
     1 [  0    1/2   1/2 ]
P =  2 [ 1/2    0    1/2 ]
     3 [ 1/2   1/2    0  ]

Then, find P(X3 = 1 | X0 = 1).
Solution: The transition graph corresponding to the given TPM is,
And p11^(3) = p12 p23 p31 + p13 p32 p21 = (1/2)(1/2)(1/2) + (1/2)(1/2)(1/2) = 1/4
(these are the only three-step routes from 1 back to 1, since p11 = p22 = p33 = 0).
Therefore, P(X3 = 1 | X0 = 1) = 1/4.
Let the state space be {0, 1, 2, .....}. The distribution of the initial state, P(X0 = i) = πi, is
called the initial distribution of the chain.
Example: Let {Xn, n ≥ 0} be a Markov chain with 3 states 0, 1, 2 and with transition matrix

     [ 3/4   1/4    0  ]
P =  [ 1/4   1/2   1/4 ]
     [  0    3/4   1/4 ]

and initial distribution Pr(X0 = i) = 1/3, i = 0, 1, 2.
Then find P(X3 = 1 | X0 = 1) and calculate the joint probability P(X3 = 1, X1 = 1, X0 = 2).
Solution: S = {0, 1, 2}

     [ 3/4   1/4    0  ]
P =  [ 1/4   1/2   1/4 ]
     [  0    3/4   1/4 ]
p11^(3) is the sum over all three-step paths from state 1 back to state 1 (paths through 0 → 2
or 2 → 0 are impossible because p02 = p20 = 0):
p11^(3) = p10 p00 p01 + p10 p01 p11 + p11 p10 p01 + p11 p11 p11 + p11 p12 p21 + p12 p21 p11 + p12 p22 p21
= (1/4)(3/4)(1/4) + (1/4)(1/4)(1/2) + (1/2)(1/4)(1/4) + (1/2)(1/2)(1/2) + (1/2)(1/4)(3/4) + (1/4)(3/4)(1/2) + (1/4)(1/4)(3/4)
= 3/64 + 2/64 + 2/64 + 8/64 + 6/64 + 6/64 + 3/64 = 30/64 = 15/32
Therefore, P(X3 = 1 | X0 = 1) = 15/32
Now, p11^(2) = p11 p11 + p12 p21 + p10 p01
= (1/2)(1/2) + (1/4)(3/4) + (1/4)(1/4)
= 1/2
Now, P(X3 = 1, X1 = 1, X0 = 2) = P(X3 = 1, X1 = 1 | X0 = 2) P(X0 = 2)
= P(X3 = 1 | X1 = 1) P(X1 = 1 | X0 = 2) P(X0 = 2)
= p11^(2) · p21 · (1/3)
= (1/2)(3/4)(1/3)
= 1/8
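Both answers can be verified numerically; the short sketch below (assuming NumPy is available) raises the TPM to the required powers and multiplies the relevant probabilities.

import numpy as np

P = np.array([[3/4, 1/4, 0],
              [1/4, 1/2, 1/4],
              [0,   3/4, 1/4]])

P2 = np.linalg.matrix_power(P, 2)
P3 = np.linalg.matrix_power(P, 3)

print(P3[1, 1])                  # 0.46875 = 15/32 -> P(X3 = 1 | X0 = 1)
print(P2[1, 1] * P[2, 1] * 1/3)  # 0.125   = 1/8   -> P(X3 = 1, X1 = 1, X0 = 2)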
7.10 CONCEPT OF CLASSIFICATION OF STATES
7.10.1 Accessibility
If pij^(n) > 0 for some n ≥ 1, then state j is said to be accessible from state i.
For example, if p01 = 1/2 > 0, then state 1 is accessible from state 0.
And if p10^(n) = 0 for every n ≥ 1, then state 0 is not accessible from state 1.
If state j is accessible from state i and state i is accessible from state j, we write i ↔ j,
i.e., i and j are communicating states.
Example: Check whether the given transition matrix is irreducible or reducible for the state
space {0, 1, 2}.

         [  0    1/2   1/2 ]
(i) P =  [ 1/2    0    1/2 ]
         [ 1/2   1/2    0  ]

Solution: The transition diagram for the given TPM is,
From the diagram (or directly from the matrix), every state is accessible from every other
state, so all the states communicate and the chain is irreducible.
           [ 1/2   1/2    0     0  ]
(ii) P =   [ 1/2   1/2    0     0  ]
           [  0     0     0     1  ]
           [ 1/4   1/4   1/4   1/4 ]

Solution: The transition diagram for the given TPM is,
Here states 0 and 1 communicate only with each other and, once entered, cannot be left,
while states 2 and 3 cannot be reached from 0 or 1. Since not all the states communicate,
the chain is reducible.
If for any state i, C(i) has only one element, then the state is called an absorbing state.
E.g., if C(i) = {i}, then i is called an absorbing state, whereas if C(j) = {j, k}, then j is not an
absorbing state.
7.10.7 Periodicity
The period of a state i ∈ S, denoted d(i) or λ(i), is given by
d(i) = gcd{ n ≥ 1 : pii^(n) > 0 },
where n is the number of steps.
Remarks:
• If any state has a self loop then its period is 1.
• If d(i) = 1, then the state i is aperiodic state.
• If d(i) > 1, then the state i is periodic.
• If i and j are communicating states then period of i and j will be equal, i.e., d(i) = d(j)
Example: Find the period of the states from the following transition diagram:
Solution: From the diagram it is clear that
d(0) = gcd{1, 2, 3, .....} = 1
d(1) = gcd{2, 3, 4, .....} = 1
Hence both the states are aperiodic.
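The period can also be computed directly from the definition by collecting the step counts n for which pii^(n) > 0 and taking their gcd. The sketch below is illustrative only (it truncates the search at a finite horizon) and assumes NumPy is available; the helper name period is ours, not part of any library.

import numpy as np
from math import gcd
from functools import reduce

def period(P, i, horizon=50):
    # Approximate d(i) = gcd{ n >= 1 : p_ii^(n) > 0 } using n = 1, ..., horizon.
    steps = []
    Pn = np.eye(len(P))
    for n in range(1, horizon + 1):
        Pn = Pn @ P                    # n-step transition probabilities
        if Pn[i, i] > 1e-12:
            steps.append(n)
    return reduce(gcd, steps) if steps else 0

# The 3-state chain used earlier (states 1, 2, 3 stored as indices 0, 1, 2).
P = np.array([[0, 1/2, 1/2],
              [1/2, 0, 1/2],
              [1/2, 1/2, 0]])
print([period(P, i) for i in range(3)])   # [1, 1, 1] -> every state is aperiodic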
The first passage (first return) probability is defined as
fij^(n) = P{ Xn = j, Xm ≠ j for 1 ≤ m < n | X0 = i },
i.e., the chain moves from i to j in exactly n steps without visiting j at any earlier step.
The total first return probability of a state i is fii = Σ_{n=1}^∞ fii^(n); state i is recurrent if
fii = 1 and transient if fii < 1. The mean recurrence time of state i is
μii = Σ_{n=1}^∞ n fii^(n).
Example: Let {Xn, n ≥ 0} be a 2-state Markov chain with state space S = {0, 1} and
transition matrix

          0      1
     0 [ 1/2    1/2 ]
P =
     1 [ 1/3    2/3 ]

Assuming X0 = 0, find the expected return time to 0.
Solution: We know that
μ00 = Σ_{n=1}^∞ n f00^(n) = 1·f00^(1) + 2·f00^(2) + 3·f00^(3) + 4·f00^(4) + 5·f00^(5) + .........
Now, f00^(1) = p00 = 1/2
f00^(2) = p01 p10 = (1/2)(1/3) = 1/6
f00^(3) = p01 p11 p10 = (1/2)(2/3)(1/3) = 1/9
f00^(4) = p01 p11 p11 p10 = (1/2)(2/3)(2/3)(1/3) = 2/27
f00^(5) = p01 p11 p11 p11 p10 = (1/2)(2/3)(2/3)(2/3)(1/3) = 4/81
Therefore,
μ00 = 1(1/2) + 2(1/6) + 3(1/6)(2/3) + 4(1/6)(2/3)² + 5(1/6)(2/3)³ + ........
= 1/2 + 1/3 + (1/6)(2/3)[3 + 4(2/3) + 5(2/3)² + .......]
Let S = 3 + 4(2/3) + 5(2/3)² + .......
(2/3)S = 3(2/3) + 4(2/3)² + 5(2/3)³ + .......
Subtracting, we get
(1/3)S = 3 + (2/3) + (2/3)² + (2/3)³ + .......
= 3 + (2/3)[1 + (2/3) + (2/3)² + (2/3)³ + ......]
= 3 + (2/3)[1/(1 − (2/3))] = 5
So, S = 15
Therefore, the expected return time to 0 is μ00 = 1/2 + 1/3 + (1/6)(2/3)(15) = 5/2.
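The answer μ00 = 5/2 can be cross-checked using the standard fact that, for an irreducible positive recurrent chain, the mean recurrence time of a state equals the reciprocal of its stationary probability (the stationary distribution is discussed later in this lesson). The sketch below assumes NumPy is available.

import numpy as np

P = np.array([[1/2, 1/2],
              [1/3, 2/3]])

# Solve pi = pi P together with pi_0 + pi_1 = 1 (least-squares solution of the stacked system).
A = np.vstack([P.T - np.eye(2), np.ones(2)])
b = np.array([0, 0, 1])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

print(pi)          # [0.4 0.6]
print(1 / pi[0])   # 2.5 = 5/2, the expected return time to state 0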
Now, f22^(1) = 0
f22^(2) = p21 p12 = (1)(2/3) = 2/3
f22^(3) = p21 p11 p12 = (1)(1/3)(2/3) = 2/9
Σ_{n=1}^∞ f22^(n) = f22^(1) + f22^(2) + f22^(3) + ........ = 0 + 2/3 + 2/9 + ........ < 1,
so state 2 is transient.
Similarly,
f44^(1) = 1/2
f44^(2) = p43 p34 = (1/2)(0) = 0
Σ_{n=1}^∞ f44^(n) = f44^(1) + f44^(2) + ..... = 1/2 + 0 + ..... < 1,
so state 4 is transient.
Example: Find the transient and recurrent states from the following TPM:

          1     2     3
     1 [ 1/4   3/4    0  ]
P =  2 [  0    7/8   1/8 ]
     3 [  0    1/2   1/2 ]

Solution: The transition graph of the given TPM is,
For state 1, once we move out of state 1 (to state 2), there is no route to come back to state 1;
therefore state 1 is transient.
For state 2, if we move out of state 2, there is always a route to come back to state 2;
therefore state 2 is recurrent.
Similarly, for state 3, if we move out of state 3, there is a route to come back to state 3;
therefore state 3 is recurrent.
Since C(1) = {1} and C(4) = {4}, states 1 and 4 are absorbing states.
State 1 is absorbing, so the chain never leaves it once entered; hence state 1 is recurrent.
For state 2, if we move out of state 2, we find that there is no route to come back to state 2;
therefore state 2 is transient.
For state 3, if we move out of state 3, we find that there is no route to come back to state 3;
therefore state 3 is transient.
Similarly, state 4 is absorbing and therefore recurrent.
Hence, 2 and 3 are transient states and 1 and 4 are recurrent states.
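For small chains the "is there a route back?" argument can be automated with a reachability check on the transition graph. The sketch below (illustrative only, assuming NumPy) classifies the states of the 3-state TPM from the first example above.

import numpy as np

P = np.array([[1/4, 3/4, 0],
              [0,   7/8, 1/8],
              [0,   1/2, 1/2]])

n = len(P)
A = (P > 0).astype(int)        # adjacency matrix of the transition graph
reach = A.copy()               # reach[i, j] = 1 if j is reachable from i in one or more steps
for _ in range(n):
    reach = ((reach + reach @ A) > 0).astype(int)

for i in range(n):
    # i is recurrent iff every state reachable from i can also reach i back.
    recurrent = all(reach[j, i] for j in range(n) if reach[i, j])
    print(f"state {i + 1}:", "recurrent" if recurrent else "transient")
# state 1: transient, state 2: recurrent, state 3: recurrent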
7.12 SUMMARY
• State i is recurrent iff Σ_{n=0}^∞ pii^(n) = ∞.
• State i is transient iff Σ_{n=0}^∞ pii^(n) < ∞.
IN-TEXT QUESTIONS
16. _______________ are a fundamental part of stochastic processes and are
used widely in many different disciplines.
17. If Σ_{n=0}^∞ pii^(n) = ∞, then state i is called _______________.
Let {Xn: n = 1, 2, 3, .....} be a recurrent, irreducible and aperiodic Markov chain with
transition probability matrix P = (pij). Then
lim_{n→∞} pij^(n) = 1/μjj.
Example: Consider a Markov chain with state space {0, 1, 2, 3, 4}. The TPM is given below:

     [  1    0    0    0    0  ]
     [ 1/3  1/3  1/3   0    0  ]
P =  [  0   1/3  1/3  1/3   0  ]
     [  0    0   1/3  1/3  1/3 ]
     [  0    0    0    0    1  ]

Then find lim_{n→∞} p23^(n).
From the graph it is clear that 0 and 4 are absorbing states, hence recurrent.
States 1, 2 and 3 are transient states.
Since 3 is transient, therefore
lim_{n→∞} p23^(n) = 0.
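The same conclusion can be seen numerically by raising P to a large power; the entries in the column of the transient state 3 shrink towards zero. A minimal sketch (assuming NumPy):

import numpy as np

P = np.array([[1,   0,   0,   0,   0  ],
              [1/3, 1/3, 1/3, 0,   0  ],
              [0,   1/3, 1/3, 1/3, 0  ],
              [0,   0,   1/3, 1/3, 1/3],
              [0,   0,   0,   0,   1  ]])

Pn = np.linalg.matrix_power(P, 200)
print(Pn[2, 3])   # practically 0, since state 3 is transient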
Consider a Markov chain with transition probabilities pjk and TPM P = [pjk]. A probability
distribution {vj} is called stationary or invariant for the given chain if
vk = Σ_j vj pjk,
such that vj ≥ 0 and Σ_j vj = 1.
Again,
vk = Σ_j vj pjk = Σ_j ( Σ_i vi pij ) pjk = Σ_i vi pik^(2),
and in general,
vk = Σ_i vi pik^(n),  n ≥ 1.
For a chain with states 1, 2, ...., n, the stationary distribution π = (π1, π2, ...., πn) therefore
satisfies
πk = Σ_{i=1}^n πi pik,  k = 1, 2, ...., n,  and  π1 + π2 + .... + πn = 1.
Solving these equations, we can find the stationary distribution.
Example: For the Markov chain with TPM

     [ 1/2   1/3   1/6 ]
P =  [ 3/4    0    1/4 ]
     [  0     1     0  ]

the stationary distribution [π1 π2 π3] satisfies [π1 π2 π3] = [π1 π2 π3] P together with
π1 + π2 + π3 = 1.
Show that the given chain is irreducible and aperiodic. Also find the stationary distribution
for this chain.
Solution: From the transition graph it is clear that,
C(1) = {1, 2, 3}, therefore the chain is irreducible.
Also, states 1 and 3 have self-loops, therefore d(1) = 1 and d(3) = 1.
And all the states are communicating, so d(2) = 1
Since for the given markov chain the period of each state is 1, therefore it is aperiodic chain.
The TPM for the given chain is

     [ 1/4   1/2   1/4 ]
P =  [ 1/3    0    2/3 ]
     [ 1/2    0    1/2 ]

We know that π = πP, i.e.,

                        [ 1/4   1/2   1/4 ]
[π1 π2 π3] = [π1 π2 π3] [ 1/3    0    2/3 ]
                        [ 1/2    0    1/2 ]

together with π1 + π2 + π3 = 1.
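Solving this system by hand, or numerically as in the sketch below (which assumes NumPy is available), gives the stationary distribution; with the TPM above it works out to π = (3/8, 3/16, 7/16).

import numpy as np

P = np.array([[1/4, 1/2, 1/4],
              [1/3, 0,   2/3],
              [1/2, 0,   1/2]])

# Stack the equations (P^T - I) pi = 0 with the normalisation pi_1 + pi_2 + pi_3 = 1.
A = np.vstack([P.T - np.eye(3), np.ones(3)])
b = np.array([0, 0, 0, 1])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print(pi)   # [0.375  0.1875 0.4375] = (3/8, 3/16, 7/16)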
IN-TEXT QUESTIONS
20. For any recurrent state i, if the mean recurrence time μii = Σ_{n=1}^∞ n fii^(n) < ∞
(finite), then the state is called ____________.
23. If i and j communicate only with each other, not from other states, then C(i)
= {i, j} is called the ___________ of states.
Let p0, p1, p2, ....... be probabilities with Σ_{n=0}^∞ pn = 1 and Σ_{n=1}^∞ n pn < ∞.
Consider the Markov chain with S = {0, 1, 2, .......} and TPM

     [ p0   p1   p2   −   − ]
     [ 1    0    0    −   − ]
P =  [ 0    1    0    −   − ]
     [ 0    0    1    −   − ]
     [ −    −    −    −   − ]

Show that the chain is irreducible and positive recurrent.
Solution: The transition diagram for the given TPM is,
From state 0 the chain moves to state k with probability pk, and from any state k ≥ 1 it moves
down one step at a time (k → k − 1 → .... → 0), so every state can be reached from every
other state and the chain is irreducible.
The first return probabilities to 0 are f00^(n) = pn−1 for n ≥ 1, so that
μ00 = Σ_{n=1}^∞ n f00^(n) = Σ_{n=1}^∞ n pn−1 < ∞.
Therefore, the chain is positive recurrent.
Markov chains are utilised in a wide range of contexts because they may be created to
simulate a variety of real-world processes. These disciplines include speech recognition,
search engine algorithms, and the mapping of animal life populations. They are frequently
used in economics and finance to forecast macroeconomic events like market crashes and
cycles between recession and boom. Predicting asset and option values and estimating credit
risks are two other applications. To mimic the randomness in a continuous-time financial
market, Markov chains are used. For instance, a stochastic discount factor, which is defined
using a Markov chain, determines the price of an item.
7.16 SUMMARY
A key idea in stochastic processes is the Markov chain. They can be used to significantly
simplify processes that meet the Markov property, which states that a stochastic variable's
future state depends only on its current state. This means that understanding the process's past
performance won't help with future projections, which naturally minimises the quantity of
information that must be taken into account. It is possible to identify specific patterns in a
market's prior moves by examining its historical data. Markov diagrams can then be created
from these patterns and used to forecast future market movements and the dangers attached to
them.
7.17 GLOSSARY
• The Markov chain - The Markov chain is a fundamental mathematical tool for
stochastic processes. The Markov Property is the essential idea, according to which
some stochastic process predictions can be made more simply by treating the future as
independent of the past in light of the process's present state.
• A stochastic process - A stochastic or random process is a collection of random
variables that is indexed by a mathematical set, which means that each random
variable in the stochastic process is specifically linked to an element in the set.
• Markov chain - A sequence of random variables {Xn, n = 0, 1, 2, 3, …..} with
discrete state space is known as a Markov chain if
Pr(Xn = k | Xn−1 = j, Xn−2 = jn−2, ...., X0 = j0) = Pr(Xn = k | Xn−1 = j).
• Absorbing state - If for any state i, C(i) has only one element, then the state is called
an absorbing state.
• Recurrent State - If the first return probability of a state i satisfies
fii = Σ_{n=1}^∞ fii^(n) = 1, then state i is called a recurrent state.
• Transient State - If the first return probability of a state i satisfies
fii = Σ_{n=1}^∞ fii^(n) < 1, then state i is called a transient state.
1) Consider the Markov chain with state space S = {1, 2, 3} and TPM

     [  0    1/2   1/2 ]
P =  [ 1/2    0    1/2 ]
     [ 1/2   1/2    0  ]

Let π = [π1 π2 π3] be the stationary distribution of the Markov chain and let d(1) denote the
period of state 1. Show that d(1) = 1 and π1 = 1/3.
2) Consider the Markov chain {Xn: n ≥ 0} on state space S = {0, 1} with TPM P. Show that if

     [ 1   0 ]
P =  [ 0   1 ]

then lim_{n→∞} P[Xn = i] converges for i = 0, 1, but the limits depend on the initial
distribution v.
3) There is a
calculator that simply employs the digits 0 and 1. One of these digits is meant to be
transmitted through a number of phases. At each stage, though, there is a chance p
that the digit that enters will be altered when it exits and a chance q = 1 - p that it
won't. Create a Markov chain using the digits 0 and 1 as states to represent the
transmission process. What is the transition probabilities matrix? Create a tree now
and assign probabilities based on the assumption that the process starts in state 0 and
goes through two transmission stages. What is the likelihood that the machine will
eventually create the digit 0 after two stages?
4) Suppose that a man can work as a professional, a skilled worker, or an unskilled
worker. Suppose that, among the sons of professionals, 80% work in the field of their
fathers' profession, 10% are skilled labourers, and 10% are unskilled labourers. Among the
sons of skilled labourers, 60% are skilled labourers, 20% are professionals, and 20% are
unskilled workers. In the case of unskilled labourers, 50% of the sons work as such,
with 25% of them falling into each of the other two categories. Assuming that every
man has at least one son, create a Markov chain by choosing a son at random from
each family and following that son's career path through numerous generations.
Create the transition probabilities matrix. Calculate the likelihood that a randomly
selected unskilled labourer's grandson is a professional man.
LESSON 8
THEORY OF GAMES
Dr. Upasana Dhanda
Assistant Professor
S.G.T.B. Khalsa College
Delhi University
[email protected]
STRUCTURE
• The students will learn the concept of game theory for decision making in managerial
problems.
• It will equip them to know the consequences of interplay and pay-offs with the use of
each combination of strategies by the players in the game.
• Students will understand various game models and their solutions to find out the
optimal strategies and expected pay-off for the players in the game.
8.2 INTRODUCTION
A two-person zero-sum game is the one which involves two players with competing interest
and gain of one is equal to the loss of another. To illustrate, let’s assume there are two
companies Alpha Limited (A) and Beta Limited (B) which are competing for the market
share. Now, given the total size of the market, gain of market share of one firm would lead to
the loss of market share for another. Thus, it is a zero-sum game as sum of gains and losses
for both the firms is equal to zero.
Now, let’s assume that both the firms are considering four strategies to increase their market
share; High advertising, celebrity endorsements, free samples and social media marketing.
We assume that currently they have equal market share and, further, that each of the firms can
employ only one strategy at a time.
Given the above conditions, 4 × 4 =16 combinations of moves are possible. High advertising
by Alpha Limited can be accompanied by high advertising, celebrity endorsements, free
samples and social media marketing by Beta Limited. Similarly, celebrity endorsements by
Alpha Limited can be accompanied by high advertising, celebrity endorsements, free samples
and social media marketing by Beta Limited and so on for further strategies. Each
combination of strategy will affect the market share in a particular way giving the pay-offs.
For example, high advertising by Alpha Limited and high advertising by Beta Limited will
lead to 16 points (implying 16% market share) in favour of Alpha Limited. Similarly, high
advertising by Alpha Limited accompanied by celebrity endorsements by Beta Limited leads
to 17 points (17% market share) in favour of Beta Limited. Similarly, there are pay-offs for
each combination of strategies employed by Alpha and Beta.
The pay-offs are shown in the matrix below. The strategies of high advertising, celebrity
endorsements, free samples and social media marketing employed by Alpha are given as a1,
a2, a3 and a4 and strategies of high advertising, celebrity endorsements, free samples and
social media marketing employed by Beta are given as b1, b2, b3 and b4 in the table. Please
note the pay-off matrix is drawn from Alpha’s viewpoint which means that a positive pay-off
means that Alpha has gained the market share at the expense of Beta and the negative pay-
offs imply Beta’s gain at the expense of Alpha.
Beta’s Strategies
b1 b2 b3 b4
a1 16 -17 -8 9
Alpha’s a2 6 8 -5 -13
strategies
a3 11 9 12 16
a4 3 4 9 8
Now, we need to understand that both the companies are aware of the pay-off matrix but they
are not aware of the strategy that the other one will choose. The conservative approach to
select the best strategy will be to assume the worst and act accordingly. Thus, with reference
to the pay-off matrix, if Alpha Limited chooses strategy a1, it would expect Beta Limited to
choose strategy b2, resulting in -17 as the pay-off for Alpha. If Alpha chooses a2, it would
expect Beta to select b4, resulting in -13 as the pay-off. Similarly, if Alpha chooses a3,
it would lead to 9 as the pay-off as it would expect Beta to select b2 and choosing a4 as the
strategy by Alpha would lead to 3 as the pay-off as it would expect Beta to select b1 strategy.
We need to keep in mind that both the companies know the pay-off but are unaware of other
chosen strategy and are conservative in deciding their strategy based on the pay-offs.
The company Alpha Limited would like to make the best use of the situation by choosing the
maximum out of these minimum pay-offs. In other words, it would select the highest of the
minimum pay-offs for each of the four strategies. This decision rule is called the maximin
strategy: choosing the maximum out of the minimum pay-offs. Since the minimum pay-off for each
strategy for Alpha a1, a2, a3 and a4 is -17, -13, 9 and 3 respectively; Alpha Limited would
select maximum out of these pay-offs i.e. 9 which is corresponding to strategy a3 (free
samples).
Similarly, Beta Limited would also be conservative in its approach. If Beta chooses b1, then it
would expect Alpha to choose a1 (maximum advantage for Alpha), resulting in 16 as the
pay-off. If Beta chooses b2, then it would expect Alpha to choose a3, resulting in 9 as the pay-off.
Similarly, choosing b3 would result in a pay-off of 12, as Alpha would be expected to choose
a3, and if Beta selects b4, then Alpha would be expected to select a3, resulting in 16 as the
pay-off. To minimize the advantage to Alpha, Beta would select the strategy that yields the
minimum advantage to its competitor. Hence, the decision of Beta Limited will be in
accordance with the minimax strategy: selecting the minimum out of the maximum pay-offs.
Since the maximum pay-off for each of Beta's strategies b1, b2, b3 and b4 is 16, 9, 12 and 16
respectively, Beta Limited would select the minimum out of these maximum pay-offs, i.e. 9,
which corresponds to strategy b2 (celebrity endorsements).
It should be noted that corresponding to maximin rule for Alpha Limited and minimax rule
for Beta Limited, the pay-off is 9. This pay-off is the value of the game which represents the
final pay-off to the winner by the losing player. Since, the pay-off is 9, which is drawn from
Alpha’s point of view, it means the game situation is favourable towards Alpha Limited. If
the game value was negative, then it would be favourable towards Beta Limited. The game
would be said to be fair or equitable if the value of the game was zero. This means it favours
none of the players.
Thus, in the above example, Alpha’s optimal strategy is a3 (giving free samples) and Beta’s
optimal strategy is b2 (celebrity endorsements) and the value of the game is 9 which means
9% market share in favour of Alpha Limited. The game situation is favourable towards
Alpha.
8.4.1 Saddle Point
The point of equilibrium where the maximin value is equal to the minimax value is called
saddle point. To obtain the saddle point, we find the row minima (minimum pay-off for each
row in the pay-off matrix) and the column maxima (maximum pay-off for each column in the
pay-off matrix). In case the maximum of the row minima is equal to the minimum of the column maxima,
then the value represents the saddle point. Let’s consider our previous example.
Beta’s Strategies
b1 b2 b3 b4 Row
minima
Alpha’s a1 16 -17 -8 9 -17
strategies a2 6 8 -5 -13 -13
a3 11 9 12 16 9*
a4 3 4 9 8 3
Column 16 9* 12 16
maxima
In the table, we find the row minima (minimum pay-off for each row) and column maxima
(maximum pay-off for each column). As we can see, the maximum of the row minima (maximin
strategy) and the minimum of the column maxima (minimax strategy) is the same, i.e. 9. This is the
point of equilibrium (saddle point). It represents the value of the game and implies that Alpha
limited will gain 9% market share at the cost of Beta Limited. The game situation is
favourable to Alpha Limited.
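The row-minima and column-maxima computation is easy to mechanise; the sketch below (assuming NumPy is available) reproduces the maximin and minimax values for the Alpha-Beta pay-off matrix.

import numpy as np

# Pay-off matrix from Alpha's point of view (rows a1..a4, columns b1..b4).
payoff = np.array([[16, -17, -8,   9],
                   [ 6,   8, -5, -13],
                   [11,   9, 12,  16],
                   [ 3,   4,  9,   8]])

row_min = payoff.min(axis=1)    # [-17 -13   9   3]
col_max = payoff.max(axis=0)    # [ 16   9  12  16]
maximin = row_min.max()         # 9, attained at a3
minimax = col_max.min()         # 9, attained at b2
print(maximin, minimax, maximin == minimax)   # 9 9 True -> saddle point, game value 9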
A game can also have more than one saddle point for a given problem. Let's consider
another example.
Beta’s Strategies
b1 b2 b3 b4 Row
minima
a1 16 -17 -8 6 -17
Alpha’s
strategies a2 6 8 -5 -13 -13
a3 10 9 9 16 9*
a4 3 5 9 10 3
Column 16 9* 9* 16
maxima
In this example, the optimal strategy for Alpha Limited is a3 and the optimal strategies for
Beta Limited are b2 and b3. There are two saddle points at 9 which is the value of the game. It
means gain of 9% market share for Alpha Limited and loss of 9% market share for Beta
Limited.
8.4.2 When Saddle Point does not exist
In case a saddle point does not exist, it is not possible to find the solution in terms of pure
strategies (the maximin and minimax strategies). The solution to such problems where a saddle
point does not exist calls for employing mixed strategies. A mixed strategy is the combination
of two or more strategies selected by the players at a given time, according to a pre-
determined probability. Players choose a mix of strategies in a given ratio.
Let’s discuss the solution for 2 × 2 games where saddle point does not exist.
B player’s strategies
b1 b2
A player’s strategies a1 8 -9
a2 -5 10
In the above problem, saddle point does not exist, so the method discussed in previous
section will not suffice to find the optimal strategy for player A and B.
If player A chooses strategy a1, then player B will choose b2, and if A chooses a2, then B
would choose b1. So if B knows A’s choice, then B can ensure his/her gain by choosing a
strategy opposite to A. Therefore, A will make it difficult for B to guess what he/she is going
to choose. Similarly, B will also make it difficult for A to guess the strategy B is likely to
choose in the game situation. The players will therefore play their strategies in certain ratios,
i.e., with certain probabilities.
Now, suppose A chooses strategy a1 with probability x; then A will choose strategy a2 with (1-x)
probability. If player B plays b1 strategy, then A’s pay off can be determined with reference
to the first column of the pay-off matrix as given below.
Expected pay-off of A if B adopts b1 strategy = 8x- 5(1-x)
Similarly, expected pay-off of A if B adopts b2 strategy = -9x + 10(1-x)
Now, we have to find the value of x such that the expected pay-off of A is the same
irrespective of the strategy adopted by B.
8x − 5(1 − x) = −9x + 10(1 − x)
8x − 5 + 5x = −9x + 10 − 10x
13x − 5 = −19x + 10, i.e., 32x = 15
x = 15/32
This means A would adopt strategy a1 and a2 in the proportion of 15:17.
The expected pay-off for player A is
8x- 5(1-x) = ( 8 × 15/32 ) – ( 5 × 17/32 ) = 35/32
-9x + 10(1-x) = ( -9 × 15/32 ) + ( 10 × 17/32 ) = 35/32
Thus, player A will have a gain of 35/32 per play in the long run.
We can find out the mixed strategy for player B in similar manner. Now, B chooses strategy
b1 with probability y, then B will choose b2 strategy with (1-y) probability. If player A plays
a1 strategy, then B’s pay off can be determined with reference to the first row of the pay-off
matrix as given below.
Expected pay-off of B if A adopts a1 strategy = 8y- 9(1-y)
Similarly, expected pay-off of B if A adopts a2 strategy = -5y + 10(1-y)
Now, we have to find the value of y such that the expected pay-off of B is the same
irrespective of the strategy adopted by A.
8y − 9(1 − y) = −5y + 10(1 − y)
8y − 9 + 9y = −5y + 10 − 10y
17y − 9 = −15y + 10, i.e., 32y = 19
y = 19/32
This means B would adopt strategy b1 and b2 in the proportion of 19:13.
The expected pay-off (loss) for player B is
8y − 9(1 − y) = ( 8 × 19/32 ) − ( 9 × 13/32 ) = 35/32
−5y + 10(1 − y) = ( −5 × 19/32 ) + ( 10 × 13/32 ) = 35/32
This implies B will lose 35/32 per play in the long run.
Strategy Ratio
Player A a1 15/32
a2 17/32
Player B b1 19/32
b2 13/32
B player’s strategies
b1 b2
A player’s strategies a1 a11 a12
a2 a21 a22
Formula:
a22 - a21
x = _____________________
a22 – a12
y = ____________________
( 8 + 10) - (-9 -5 )
10- (-9)
y = _________________________ = 19/32
( 8 + 10) - (-9 -5 )
( 8 × 10 ) – (-9 × -5)
V = _________________________ = 35/32
( 8 + 10) - (-9 -5 )
The values match with the solution obtained earlier. This means A would adopt strategy a1
and a2 in the proportion of 15:17. B would adopt strategy b1 and b2 in the proportion of 19:13.
The value of the game is 35/32 which means player A gains and Player B loses 35/32.
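These closed-form expressions translate directly into code; the sketch below uses Python's fractions module to keep the values exact and reproduces x, y and V for the 2 × 2 game above. The helper name solve_2x2 is ours, used only for illustration.

from fractions import Fraction as F

def solve_2x2(a11, a12, a21, a22):
    # Mixed-strategy solution of a 2 x 2 zero-sum game without a saddle point.
    d = (a11 + a22) - (a12 + a21)       # common denominator of the three formulas
    x = F(a22 - a21, d)                 # probability that A plays a1
    y = F(a22 - a12, d)                 # probability that B plays b1
    v = F(a11 * a22 - a12 * a21, d)     # value of the game
    return x, y, v

x, y, v = solve_2x2(8, -9, -5, 10)
print(x, y, v)   # 15/32 19/32 35/32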
8.4.3 Dominance Rule
In a game, a player may find one strategy to dominate over the other(s). This means that in all
situations, a particular strategy will be preferred over the other(s). This concept of domination
of strategy is extremely useful in simplifying the problem and finding the solution to the
game.
Let’s consider an example,
B player’s strategies
b1 b2 b3
A player’s a 8 -9 10
1
strategies
a2 -5 10 6
a3 6 -11 5
We notice that every element of first row exceeds the corresponding element of third row in
the matrix (8 > 6; -9 > -11 and 10 > 5). This means that in any given situation, player A will
always prefer a1 over a3. Thus, a1 dominates a3. Hence, a3 can be deleted.
B player’s strategies
b1 b2 b3
A player’s a1 8 -9 10
strategies
a2 -5 10 6
From the reduced matrix, we observe that every element of the third column is greater than
the corresponding element in the first column. Since B would like to minimize the pay-off to
A, B will always select b1 over b3. Hence, b1 dominates b3. Thus, b3 can be eliminated.
                       b1      b2
A player's        a1     8      -9
strategies        a2    -5      10
Now, the problem is reduced to a 2 × 2 game, exactly the same as the previous example. Thus,
it can be solved in the manner explained earlier and the solution will be as follows.
The value of the game is 35/32.
Strategy        Ratio
Player A   a1   15/32
           a2   17/32
           a3     0
Player B   b1   19/32
           b2   13/32
           b3     0
B player’s strategies
b1 b2
A player’s a1 28 0
strategies
a2 2 12
a3 4 7
In this problem, we see that no single strategy dominates any other strategy. However, we
notice that a linear combination of strategies a1 and a2 in the ratio 1:3 will always dominate
strategy a3. Please note that the ratio in which a combination of strategies dominates another
strategy is found by trial and error (a quick numerical check is shown below).
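The claimed 1:3 mixture can be verified directly: against each of B's strategies, the expected pay-off of the mixture of a1 and a2 should be at least the corresponding pay-off of a3. A short sketch (assuming NumPy):

import numpy as np

payoff = np.array([[28,  0],    # a1
                   [ 2, 12],    # a2
                   [ 4,  7]])   # a3

mixture = 0.25 * payoff[0] + 0.75 * payoff[1]   # a1 and a2 mixed in the ratio 1:3
print(mixture, payoff[2], np.all(mixture >= payoff[2]))   # [8.5 9. ] [4 7] True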
                       b1      b2
A player's        a1   28       0
strategies        a2    2      12
x = (a22 − a21) / [ (a11 + a22) − (a12 + a21) ] = (12 − 2) / [ (28 + 12) − (0 + 2) ] = 10/38 = 5/19

Similarly, y = (a22 − a12) / [ (a11 + a22) − (a12 + a21) ] = (12 − 0) / 38 = 6/19, and
V = (a11 a22 − a12 a21) / [ (a11 + a22) − (a12 + a21) ] = (28 × 12 − 0) / 38 = 336/38 = 168/19.

The optimal strategy for A is (5/19, 14/19, 0), for B it is (6/19, 13/19), and the game value is
168/19.
B player’s strategies
B1 B2 B3
A player’s A1 a11 a12 a13
strategies
A2 a21 a22 a23
B player’s strategies
B1 B2 B3
A player’s A1 8 9 3
strategies
A2 2 5 6
A3 4 1 7
We can solve the above problem by formulating it as a LPP from A’s and B’s point of view.
Let’s first consider it from the point of A. We assume x1, x2, x3 as the probabilities with
which player A will choose strategies A1, A2 and A3 respectively.
Player A would use maximin strategy which is to maximize the minimum gain from playing
this game which is assumed as ‘U’.
Now, the expected pay-off of A must be at least U against every strategy of B, so A's problem is:
Maximize U
subject to
a11x1 + a21x2 + a31x3 ≥ U
a12x1 + a22x2 + a32x3 ≥ U
a13x1 + a23x2 + a33x3 ≥ U
x1 + x2 + x3 = 1,  x1, x2, x3 ≥ 0
Now, assuming that U is positive (which would be if all pay-offs are positive), we can divide
the constraints by U and attempt to minimize 1/U rather than maximize U.
We further define a new variable Xi = xi/U and restate the problem as follows.
Minimize 1/U = X1 + X2 + X3
a11X1 + a21X2 + a31X3 ≥ 1
a12X1 + a22X2 + a32X3 ≥ 1
a13X1 + a23X2 + a33X3 ≥ 1
X1 , X2 , X3 ≥ 0
So, for the above problem, we can formulate the game situation as an LPP from player A's
point of view.
Minimize 1/U = X1 + X2 + X3
8X1 + 2X2 + 4X3 ≥ 1
9X1 + 5X2 + X3 ≥ 1
3X1 + 6X2 + 7X3 ≥ 1
X1 , X2 , X3 ≥ 0
Now, we can simply solve the above LPP using Simplex method and obtain the solution.
If, we look at the problem from player B’s point of view, We assume y1, y2, y3 as the
probabilities with which player B will choose strategies B1, B2 and B3 respectively.
Player B would use the minimax strategy, which is to minimize the maximum loss from playing
this game which is assumed as ‘V’.
Now, the expected loss of B must be at most V against every strategy of A, so B's problem is:
Minimize V
subject to
8y1 + 9y2 + 3y3 ≤ V
2y1 + 5y2 + 6y3 ≤ V
4y1 + y2 + 7y3 ≤ V
y1 + y2 + y3 = 1,  y1 , y2 , y3 ≥ 0
This is the dual of LPP given earlier.
So, for the above problem, defining Yi = yi/V, we can formulate the game situation as an LPP
from player B's point of view.
Maximize 1/V = Y1 + Y2 + Y3
8Y1 + 9Y2 + 3Y3 ≤ 1
2Y1 + 5Y2 + 6Y3 ≤ 1
4Y1 + Y2 + 7Y3 ≤ 1
Y1 , Y2 , Y3 ≥ 0
Now, we can simply solve the above LPP using Simplex method and obtain the solution.
We would be solving the maximization problem and reading the optimal solution of the
primal (minimization problem) from the optimal solution of the dual.
Introducing slack variables S1, S2 and S3, the constraints become
8Y1 + 9Y2 + 3Y3 + S1 = 1
2Y1 + 5Y2 + 6Y3 + S2 = 1
4Y1 + Y2 + 7Y3 + S3 = 1
with Y1 , Y2 , Y3 , S1 , S2 , S3 ≥ 0.
Table 1
Cj    Basic      Basic      Y1   Y2   Y3   S1   S2   S3   Ratio
      variable   solution
0     S1         1          8    9    3    1    0    0    1/8
0     S2         1          2    5    6    0    1    0    1/2
0     S3         1          4    1    7    0    0    1    1/4
                 Cj         1    1    1    0    0    0
      Zj         0          0    0    0    0    0    0
                 Cj − Zj    1    1    1    0    0    0
Table 2 and Table 3 (the subsequent simplex iterations, with the same columns Cj, Basic
variable, Basic solution, Y1, Y2, Y3, S1, S2, S3) are carried out in the same way until the
optimal tableau is reached.
So, the optimal probabilities for player A are
x1 = 21/52, x2 = 12/52, x3 = 19/52.
The optimal strategy for player A is in the ratio 21 : 12 : 19 and for player B it is 2 : 3 : 8. The
value of the game is 67/13.
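The same answer can be obtained by handing both linear programmes to an LP solver instead of carrying out the simplex iterations by hand. The sketch below is illustrative and assumes SciPy is available; it solves A's and B's problems separately and recovers the game value and the optimal probabilities.

import numpy as np
from scipy.optimize import linprog

A = np.array([[8, 9, 3],
              [2, 5, 6],
              [4, 1, 7]])        # pay-off matrix (gains of player A)

# Player A: minimise X1+X2+X3 subject to A^T X >= 1, X >= 0, where Xi = xi / U.
res_A = linprog(c=np.ones(3), A_ub=-A.T, b_ub=-np.ones(3), method="highs")
U = 1 / res_A.x.sum()            # value of the game
x = res_A.x * U                  # A's optimal probabilities

# Player B: maximise Y1+Y2+Y3 subject to A Y <= 1, Y >= 0, where Yi = yi / V.
res_B = linprog(c=-np.ones(3), A_ub=A, b_ub=np.ones(3), method="highs")
V = 1 / res_B.x.sum()
y = res_B.x * V

print(U, x)   # 5.1538... = 67/13, x approximately (21/52, 12/52, 19/52)
print(V, y)   # 5.1538... = 67/13, y approximately (2/13, 3/13, 8/13)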
IN-TEXT QUESTIONS
1. Saddle point exists when values from maximin and minimax strategy
are ____________________.
2. Game situation occurs when two or more player have
____________________interests.
3. Every game situation must have a saddle point. True / False
4. The strategy which will always be preferred by a player over other
strategies in any situation is called _______________ strategy.
5. A game situation where the gain of one player is equal to the loss of
other player is called ____________.
6. In a two-person game, both players should have equal number of
strategies. True/False
7. Saddle point is the point of equilibrium. True/False
8. The combination of strategies used by the player(s) in a particular ratio
is called ____________ strategy.
9. Dominance principle implies that strategies of one player are
dominating over the strategies of other player. True/False
10. Mixed strategy can use only combination of two strategies. True/False
8.6 SUMMARY
In this lesson, we learnt about decision making in game situations where players have
conflicting interest and want to know the optimal strategy to be employed. The pay-off
matrix is known to the players but their decisions are interdependent. We learnt about two-
person zero sum games in different cases- when saddle point exists, when saddle point does
not exist, the dominance rule and linear combinations of strategies. The solution of m × n
games is also discussed with the help of its formulation and solution as an LPP.
8.7 GLOSSARY
• Game: The situation of conflicting interests among the opponents is called a game.
• Strategy: A strategy is the action taken by the player in various game situations.
• Pay-off: Each strategy chosen by the player in a game situation leads to outcomes
called pay-offs.
• Saddle point: The point of equilibrium where the maximin value is equal to the
minimax value is called saddle point.
• Dominance Rule: In a game, a player may find one strategy to dominate over the
other(s). This means that in all situations, a particular strategy will be preferred over
the other(s) by the player.
1. same/equal 6. False
2. conflicting/contradicting 7. True
3. False 8. Mixed
4. dominating 9. False
5. zero sum game 10. False
1. Solve the following game and determine the value of the game and optimal strategies
for both the players.
B player’s strategies
B1 B2 B3 B3
A player’s A1 3 2 4 0
strategies
A2 3 4 2 4
A3 4 2 4 0
A4 0 4 0 8
2. Solve the following game and determine the value of the game and optimal strategies
for both the players.
B player’s strategies
B1 B2 B3
A player’s A1 3 -1 4
strategies
A2 6 7 -2
3. Solve the following game and determine the value of the game and optimal strategies
for both the players.
B player’s strategies
B1 B2
A player’s A1 3 7
strategies
A2 -5 5
4. Solve the following game and determine the value of the game and optimal strategies
for both the players.
B player’s strategies
B1 B2 B3
A player’s A1 5 9 3
A3 8 16 10
5. Solve the following game and determine the value of the game and optimal
strategies for both the players.
B player’s strategies
No Medium High
promotion promotion promotion
A player’s
strategies No 5 9 3
promotion
High 8 16 10
promotion
• Kothari, C.R. (2013). Quantitative Techniques, (New Format), 3/e Vikas Publishing.