Decision Science

This textbook is useful for Singhania University, Rajasthan, and other universities of India, as well as for competitive exams.

M.B.A. (Sem. - II)

2017
Published by
Mr. Rajesh M. Patne
Success Publications

Copy Right
With the Publisher

Printed at
Success Publications

Edition
2017

Edited By
Mr. Valmik Gaikwad

Typesetting, Layout
Miss. Rajashi Sul

Cover Designing
Mrs. Jyotsana Kadam



No part of this book may be reproduced or copied in any form or by any means [graphic,
electronic or mechanical, including photocopying, recording, taping, or information
retrieval systems] or reproduced on any disc, tape, perforated media or other information
storage device, etc., without the written permission of the publishers.
Every effort has been made to avoid errors or omissions in this book. In spite of this, errors may creep in. Any mistake, error or discrepancy noted may be brought to our notice, and it shall be taken care of in the next edition. It is notified that the publisher shall not be responsible for any damage or loss caused to anyone, of any kind, in any manner, arising therefrom. All readers are advised to always refer to the original references wherever necessary.

Preface
It is a matter of great pleasure for us to present this book to our esteemed readers.
This book has been designed as standard text on ‘Decision Science’ for M.B.A.
(Sem. - II).
This book comprehensively covers the entire syllabus of M.B.A.(Sem.- II) Course
of Singhania University Rajasthan effective from 2017 onwards. It has been written
to meet the requirements of students of M.B.A. (Sem. - II). Some of the special features
of the book are as follows:
1. Full coverage of the revised syllabus of M.B.A.(Sem.- II)
2. Chapter outline at the beginning of each chapter to give a bird’s eye view of the
topics covered in the chapter.
3. Pointwise explanation of each topic in the chapter.
4. Topics are logically arranged in numbered paragraphs exactly according to the
modified syllabus.
5. Proposed questions at the end of each chapter.
6. Extensive use of diagrams, tables and various forms to give visual view of key
concepts and techniques.
7. Conversational, lucid and simple language.
Every effort has been made to provide the readers with most up-to-date and
authentic material on the subject.
We are very grateful to our publisher Mrs. and Mr. Rajesh Patne who have
rendered all possible assistance in bringing out this book. We wish to acknowledge
our deep gratitude to staff who have assisted and helped us in preparing this book.
We will consider our efforts amply rewarded in case the book proves useful to the
students and teachers of the subject.
Suggestions of readers are welcome and shall be acknowledged with gratitude.

With best wishes.

Syllabus
M.B.A. (Sem. - II)
Decision Science
Unit No.    Topic    No. of Lectures

1. 1.1 Introduction : 9+2


Importance of Decision Sciences and Role of quantitative techniques in
decision making.
1.2 Assignment Models:
Concept, Flood’s Technique/ Hungarian Method, applications including
restricted & multiple assignments.
1.3 Transportation Models:
Concept, Formulation, Problem types: Balanced, unbalanced,
Minimization, Maximization Basic initial solution using North West Corner,
Least Cost & VAM, Optimal Solution using MODI.
2. 2.1 Linear Programming: 8+2
Concept, Formulation & Graphical Solution
2.2 Markov Chains & Simulation Techniques:
Markov chains: Applications related to management functional areas,
Implications of Steady state Probabilities, Decision making based on
the inferences Monte Carlo Simulation, scope and limitations.
3. 3.1 Decision Theory : 6+2
Concept, Decision under risk (EMV) and uncertainty
3.2 Game Theory :
Concept, 2 by 2 zero sum game with dominance, Pure and Mixed Strategy
3.3 Queuing Theory:
Concept, Single Server (M/M/1, Infinite, FIFO) and Multi Server (M/M/C,
Infinite, FIFO)
4. 4.1 CPM and PERT : 6+2
Concept, Drawing network, identifying critical path
Network Calculations :
Calculating EST, LST, EFT, LFT, Slack and probability of project completion
4.2 Sequencing problems :
Introduction, Problems involving n jobs- 2 machines, n jobs- 3 machines
& n jobs-m machines; Comparison of priority sequencing rules.
5. 5.1 Probability : 6+2
Concept, Addition, Conditional Probability theorem based decision
making, (Numerical based on functional areas of business expected).
5.2 Probability Distributions :
Normal, Binomial, Interval estimation, standard errors of estimation.

INDEX
Unit Topic Name Page No.

1. Introduction to Decision Science 1.1 to 1.38


1.1 Decision Science
1.2 Assignment Model
1.3 Transportation Model
2. Linear Programming and Markov Chains and
Simulation Techniques 2.1 to 2.47
2.1 Linear Programming Problem (L.P.P)
2.2 Markov Chain
2.3 Simulation Techniques
3. Decision Theory, Game Theory and Queuing
Theory 3.1 to 3.35
3.1 Decision Theory
3.2 Game Theory
3.3 Queuing Theory
4. CPM, PERT and Sequencing Problems 4.1 to 4.32
4.1 CPM
4.2 PERT
4.3 Network Calculations
4.4 Sequencing Problems
5. Probability 5.1 to 5.34
5.1 Probability
5.2 Theorem of Probability
5.3 Probability Distribution
5.4 Binomial Probability Distribution
5.5 Normal Probability Distribution
5.6 Statistic Estimation
Bibliography 5.35

UNIT 1
Introduction to Decision Science
1.1 Decision Science
1.2 Assignment Model
1.3 Transportation Model

Introduction:
The discipline of decision science is concerned with the study and application of various
data analysis, modeling and optimization tools and techniques that are useful in the real
world decision making situations in various areas of management. It covers a wide
range of topics including statistical tools and models, mathematical programming
including linear, integer and non-linear programming, combinatorial optimization,
models based on fuzzy sets and systems, and simulation models. These models find
useful applications in such wide-ranging areas as production and operations
management, distribution and logistics, marketing research and financial modeling.

1.1 Decision Science:


Decision Science (also called Management Science or Operation Research) is an
approach to decision making which utilizes extensively quantitative analysis. Students
training to become managers, who work in or interact with, the service and or the
manufacturing industries need to know how managers can solve both simple and
complex problems using mathematical models. The study of strategies for decision making
under conditions of uncertainty, in such a way as to maximize the expected utility, is
called decision science.
A) Meaning:
The area of Decision Sciences includes risk management, decision making under
uncertainty, statistics and forecasting, operations research, negotiation and auction
analysis, and behavioral decision theory.

B) Definition:
G. D. H. Claassen, Th. H. B. Hendriks, Eligius M. T. Hendrix:
"Decision science is the discipline that is concerned with the development and applications of quantitative methods and techniques to support decision making processes."

1.1.1 Role of Quantitative Techniques in Decision Science:


Quantitative techniques are those statistical and operations research techniques which
help in the decision-making process, especially concerning business and industry.
These techniques provide the decision maker with a systematic means of analysis, based
on quantitative data, for formulating policies to achieve pre-determined goals, and they
greatly help in handling complex problems. Their role can be understood under the
following heads:
1) As a Tool:
They provide a tool for scientific analysis.
2) Makes Efficient use of Resources:
They provide solution for various business problems. They enable proper use of
resources.
3) Useful in Minimisation:
They help in minimizing waiting and service costs.
4) Useful in Decision Making for Management:
They enable the management to decide when to buy and how much to buy. They
assist in choosing an optimum strategy.
5) Useful in Optimum Resource Allocation:
They render greater help in optimum resource allocation. They facilitate the process
of decision making.

1.1.2 Importance of Decision Science:


Importance of decision science is given as follows:
1) Better Utilisation of Resources:
Decision making helps to utilise the available resources for achieving the objectives
of the organisation. The available resources are the 6 Ms, i.e. Men, Money,
Materials, Machines, Methods and Markets. The manager has to make correct
decisions for all the 6 Ms. This will result in better utilisation of these resources.
2) Facing Problems and Challenges:
Decision making helps the organisation to face and tackle new problems and
challenges. Quick and correct decisions help to solve problems and to accept new
challenges.

3) Business Growth:
Quick and correct decision making results in better utilisation of the resources. It
helps the organisation to face new problems and challenges. It also helps to
achieve its objectives. All this results in quick business growth. However, wrong,
slow or no decisions can result in losses and industrial sickness.
4) Achieving Objectives:
Rational decisions help the organisation to achieve all its objectives quickly. This is
because rational decisions are made after analysing and evaluating all the
alternatives.
5) Increases Efficiency:
Rational decisions help to increase efficiency. Efficiency is the relation between
returns and cost. If the returns are high and the cost is low, then there is efficiency
and vice versa. Rational decisions result in higher returns at low cost.
6) Facilitate Innovation:
Rational decisions facilitate innovation. This is because they help to develop new
ideas, new products, new processes, etc. This results in innovation. Innovation gives a
competitive advantage to the organisation.
7) Motivates Employees:
Rational decisions result in motivation for the employees. This is because the
employees are motivated to implement rational decisions. When the rational
decisions are implemented the organisation makes high profits. Therefore, it can
give financial and non-financial benefits to the employees.

1.2 Assignment Model:


The basic objective of an assignment problem is to assign n resources to n activities so as
to minimize the total cost or to maximize the total profit of allocation, in such a way that
the measure of effectiveness is optimized. The problem of assignment arises because
available resources such as men, machines, etc., have varying degrees of efficiency for
performing different activities (jobs). Therefore the cost, profit or time of performing the
different activities differs. Hence the problem is how the assignments should be made so
as to optimize (maximize or minimize) the given objective. The assignment model can be
applied in many decision-making
processes like determining optimum processing time in machine operators and jobs,
effectiveness of teachers and subjects, designing of good plant layout, etc. This
technique is found suitable for routing travelling salesman to minimize the total travelling
cost, or to maximize the sale.
"Assignment Problem is the technique of selecting the best possible assignment of tasks from a number of alternatives."


1.2.1 Mathematical Model of Assignment Problem:


Given n resources (or facilities) and n activities (or jobs), and the effectiveness (in terms of
cost, profit, time, etc.) of each resource (facility) for each activity (job), the problem lies
in assigning each resource to one and only one activity (job) so that the given measure
of effectiveness is optimised.
The data matrix for this problem is shown in the table below:

Resources (Workers)   J1    J2    …    Jn    Supply
W1                    c11   c12   …    c1n   1
W2                    c21   c22   …    c2n   1
…                     …     …     …    …     …
Wn                    cn1   cn2   …    cnn   1
Demand                1     1     …    1     n
From the table, it may be noted that the data matrix is the same as the transportation
cost matrix, except that the supply (or availability) of each of the resources and the
demand at each of the destinations is taken to be one. It is due to this fact that
assignments are made on a one-to-one basis.
Let $x_{ij}$ denote the assignment of facility $i$ to job $j$ such that
$$x_{ij} = \begin{cases} 1, & \text{if facility } i \text{ is assigned to job } j \\ 0, & \text{otherwise.} \end{cases}$$
Then the mathematical model of the assignment problem can be stated as:
$$\text{Minimize } Z = \sum_{i=1}^{n}\sum_{j=1}^{n} c_{ij} x_{ij}$$
subject to the constraints
$$\sum_{j=1}^{n} x_{ij} = 1 \quad \text{for all } i \ \text{(resource availability)},$$
$$\sum_{i=1}^{n} x_{ij} = 1 \quad \text{for all } j \ \text{(activity requirement)},$$
and $x_{ij} = 0$ or $1$ for all $i$ and $j$, where $c_{ij}$ represents the cost of assigning resource $i$ to activity $j$.
From the above, it is clear that the assignment problem is a special case of the
transportation problem with two characteristics:
1) The cost matrix is a square matrix, and
2) The optimal solution for the problem would always be such that there would be only
one assignment in a given row or column of the cost matrix.
A) A few applications of assignment problem are:
1) Assignment of employees to machines.
2) Assignment of operators to jobs.

3) Effectiveness of teachers and subjects.
4) Allocation of machines for optimum utilization of space.
5) Salesman to different sales areas.
6) Clerks to various counters.

1.2.2 Methods to Solve Assignment Problem:


An assignment problem can be solved by the following four methods:
1) Enumeration Method:
In this method, a list of all possible assignments among the given resources (men,
machines, etc.) and activities (jobs, sales areas, etc.) is prepared. Then an
assignment involving the minimum cost, time or distance (or maximum profit) is
selected. If two or more assignments have the same minimum cost, time or distance
(or maximum profit), then the problem has multiple optimal solutions. In general, if
an assignment problem involves n workers/jobs, then there are in total n! possible
assignments.
For example, for an n = 5 workers/jobs problem, one has to evaluate a total of 5! or
120 assignments. When n is large, the method is unsuitable for manual calculations.
Hence, this method is suitable only for small n.
2) Simplex Method:
Since each assignment problem can be formulated as a 0 or 1 integer linear
programming problem, such a problem can also be solved by the simplex method.
As can be seen in the general mathematical formulation of the assignment problem,
there are n × n decision variables and n + n or 2n equalities. In particular, for a
problem involving 5 workers/jobs, there will be 25 decision variables and 10
equalities. It is, again, difficult to solve manually.
3) Transportation Method:
Since an assignment problem is a special case of the transportation problem, it can
also be solved by transportation methods. However, every basic feasible solution of a
general assignment problem having a square payoff matrix of order n should have
m + n - 1 = n + n - 1 = 2n - 1 assignments. But due to the special structure of this
problem, no solution can have more than n assignments. Thus, the assignment
problem is inherently degenerate. In order to remove degeneracy, (n -1) number of
dummy allocations (deltas or epsilons) will be required in order to proceed with the
transportation method. Thus, the problem of degeneracy at each solution makes the
transportation method computationally inefficient for solving an assignment problem.
4) Hungarian (Flood’s Technique) Assignment Method (HAM):
It may be observed that none of the three methods discussed above solves an
assignment problem efficiently. A method designed specially to handle assignment
problems in an efficient way, called the Hungarian Assignment Method, is available;
it is based on the concept of opportunity cost.

The Hungarian method successively modifies the rows and columns of the effectiveness
matrix until there is at least one zero component in each row and column, such that a
complete assignment corresponding to these zeros can be made. This complete
assignment will be an optimal assignment, in that when it is applied to the original
effectiveness matrix, the resulting total effectiveness will be a minimum. The method,
technically known as the assignment algorithm, will always converge to an optimal
assignment in a finite number of steps.
A) Hungarian (Flood’s Technique) Method for Solving Assignment Problem:
The steps of the Hungarian assignment method are as follows:
Step 1: In a given problem, if the number of rows is not equal to the number of
columns and vice versa, then add a dummy row or a dummy column. The
assignment costs for dummy cells are always assigned as zero.
Step 2: Reduce the matrix by selecting the smallest element in each row and
subtracting it from the other elements in that row.
Step 3: Reduce the new matrix column-wise using the same method as given in
step 2.
Step 4: Draw minimum number of lines to cover all zeros.
Step 5: If Number of lines drawn = order of matrix, then optimality is reached, so
proceed to step 7. If optimality is not reached, then go to step 6.
Step 6: Select the smallest element of the whole matrix which is NOT
COVERED by lines. Subtract this smallest element from all other elements that
are NOT COVERED by lines and add it to the elements at the intersection of
two lines. Leave the elements covered by a single line as they are. Now go
to step 4.
Step 7: Take any row or column which has a single zero and assign by squaring
it. Strike off the remaining zeros, if any, in that row and column (X). Repeat the
process until all the assignments have been made.
Step 8: Write down the assignment results and find the minimum cost/time.
Note: While assigning, if no single zero exists in a row or column, choose any
one zero and assign it. Strike off the remaining zeros in that column or row, and
repeat the same for the other assignments. If there is no unique zero allocation,
it means multiple optimal solutions exist, but the cost will remain the same for
the different sets of allocations.
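The row and column reductions of Steps 2 and 3 can be written compactly. The short sketch below (the use of NumPy is an assumption, not part of the text) applies them to the cost matrix of Example 1 at the end of this unit and reproduces the reduced matrix obtained there; the line-drawing and adjustment of Steps 4-6 are not shown.

```python
import numpy as np

# Illustrative sketch: cost matrix of Example 1 later in this unit (persons A-D x jobs 1-4).
cost = np.array([[20, 25, 22, 28],
                 [15, 18, 23, 17],
                 [19, 17, 21, 24],
                 [25, 23, 24, 24]])

# Step 2: subtract the smallest element of each row from that row.
reduced = cost - cost.min(axis=1, keepdims=True)
# Step 3: subtract the smallest element of each column from that column.
reduced = reduced - reduced.min(axis=0, keepdims=True)

print(reduced)   # every row and every column now contains at least one zero
```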

5) Restriction in Assignment:
In some special types of assignment problems, a situation may arise where the cost of
assigning a particular resource to a particular activity is considered to be very large,
and the assignment is hence not feasible. Such assignments are considered prohibited
and are denoted by M or ∞.

6) Multiple Assignments:
In some special types of assignment problems, a situation may arise in the final
reduced matrix where there is more than one way to assign the zeros. This situation
indicates that the problem has more than one optimal assignment pattern. Alternative
optimal solutions offer greater flexibility to the decision maker, as he can select, from
among the two or more optimal assignment patterns, the one which is more suitable
for his requirements.

1.3 Transportation Model:


A transportation problem fundamentally deals with finding the best possible way to meet
the demand of 'n' demand points using the capacities of 'm' supply points. While
attempting to find the best possible way, the variable cost of transporting the product
from a supply point to a demand point, or a related constraint, has to be taken into
account. This kind of problem is known as an allocation or transportation problem, in
which the main idea is to minimise the cost or the time of transport. If the overall
capacity is equal to the total requirement, the problem is referred to as a balanced
transportation problem; otherwise it is referred to as an unbalanced transportation
problem.
A) Meaning:
The transportation problem is a special type of linear programming problem where
the objective is to minimize the cost of distributing a product from a number of
sources or origins to a number of destinations. Because of its special structure the
usual simplex method is not suitable for solving transportation problems. These
problems require a special method of solution. The origin of a transportation
problem is the location from which shipments are dispatched. The destination of a
transportation problem is the location to which shipments are transported. The unit
transportation cost is the cost of transporting one unit of the consignment from an
origin to a destination. In the most general form, a transportation problem has a
number of origins and a number of destinations. A certain amount of a particular
consignment is available in each origin. Likewise, each destination has a certain
requirement. The transportation problem indicates the amount of consignment to be
transported from various origins to different destinations so that the total
transportation cost is minimised without violating the availability constraints and the
requirement constraints. The decision variables xij of a transportation problem
indicate the amount to be transported from the ith origin to the jth destination. Two
subscripts are necessary to describe these decision variables. A transportation
problem can be formulated as a linear programming problem using decision
variables with two subscripts.

B) Important Definitions Under Transportation Problems:
1) Basic Feasible Solution:
A feasible solution to an m-origin, n-destination problem is said to be basic if the
number of positive allocations is equal to (m + n - 1).
2) Feasible Solution:
A set of positive individual allocations which simultaneously removes
deficiencies is called a feasible solution.
3) Optimal Solution:
A feasible solution (not necessarily basic) is said to be optimal if it minimises the
total transportation cost.
C) Mathematical Formulation of Transportation Problems:
Let there be three units producing scooters, say A1, A2 and A3, from where the
scooters are to be supplied to four depots, say B1, B2, B3 and B4.
Let the number of scooters produced at A1, A2 and A3 be a1, a2 and a3
respectively and the demands at the depots be b1, b2, b3 and b4 respectively.
We assume the condition
a1+a2+a3 = b1+b2 + b3 + b4
i.e., all scooters produced are supplied to the different depots.
Let the cost of transportation of one scooter from A1 to B1 be c11. Similarly, the
cost of transportations in other cases are also shown in the figure and Table.
Let out of a1 scooters available at A1, x11 be taken at B1 depot, x12 be taken at B2
depot and to other depots as well, as shown in the following figure and table 1.

The total number of scooters to be transported from A1 to all destinations, i.e., B1, B2, B3 and B4, must be equal to a1:
x11 + x12 + x13 + x14 = a1        (1)

Similarly, the scooters transported from A2 and A3 must be equal to a2 and a3 respectively:
x21 + x22 + x23 + x24 = a2        (2)
x31 + x32 + x33 + x34 = a3        (3)
On the other hand, it should be kept in mind that the total number of scooters delivered to B1 from all units must be equal to b1, i.e.,
x11 + x21 + x31 = b1        (4)
Similarly, x12 + x22 + x32 = b2        (5)
x13 + x23 + x33 = b3        (6)
x14 + x24 + x34 = b4        (7)
With the help of the above information we can construct the following table:

The total cost of transportation from Ai (i = 1, 2, 3) to Bj (j = 1, 2, 3, 4) will be equal to
$$S = \sum_{i}\sum_{j} c_{ij} x_{ij}, \qquad (8)$$
where the summation sign $\sum$ put before $c_{ij} x_{ij}$ signifies that the quantities must be summed over all i = 1, 2, 3 and all j = 1, 2, 3, 4.
Thus we come across a linear programming problem given by equations (1) to (7)
and a linear function (8).
We have to find the non-negative solutions of the system such that it minimizes the
function (8).
We can think about a transportation problem in a general way if there are m sources
(say A1, A2, ..., Am) and n destinations (say B1, B2, ..., Bn). We can use ai to denote
the quantity of goods available at point Ai (i = 1, 2, ..., m) and bj to denote the
quantity of goods required at point Bj (j = 1, 2, ..., n). We assume the condition
a1 + a2 + ... + am = b1 + b2 + ... + bn, implying that the total stock of goods is equal to
the total demand for it.
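In this general notation, the verbal statement above amounts to the following linear programme (a compact restatement of equations (1) to (8) for m sources and n destinations):
$$\text{Minimise } S = \sum_{i=1}^{m}\sum_{j=1}^{n} c_{ij} x_{ij}$$
$$\text{subject to } \sum_{j=1}^{n} x_{ij} = a_i \ (i = 1, \dots, m), \qquad \sum_{i=1}^{m} x_{ij} = b_j \ (j = 1, \dots, n), \qquad x_{ij} \ge 0.$$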

D) General Representation of Transportation Model:
The transportation problem can also be represented in a tabular form as shown in
table given below:
Let cij be the cost of transporting a unit of the product from the ith origin to the jth
destination, ai be the quantity of the commodity available at source i, bj be the quantity
of the commodity needed at destination j, and xij be the quantity transported from the
ith source to the jth destination.

          D1         D2         …    Dn         Supply
S1        c11 (x11)  c12 (x12)  …    c1n (x1n)  a1
S2        c21 (x21)  c22 (x22)  …    c2n (x2n)  a2
…         …          …          …    …          …
Sm        cm1 (xm1)  cm2 (xm2)  …    cmn (xmn)  am
Demand    b1         b2         …    bn         Σai = Σbj

1.3.1 Types of Transportation Problems:


There are mainly four types of transportation problems:
1) Balanced Transportation Problem
2) Unbalanced Transportation Problem
3) Minimisation Transportation Problem
4) Maximisation Transportation Problem

1) Balanced Transportation Problem:


If the sum of the supplies of all the sources is equal to the sum of the demands of all
the destinations, then the problem is termed as balanced transportation problem.
This may be represented by the relation:
$$\sum_{i=1}^{m} a_i = \sum_{j=1}^{n} b_j$$

2) Unbalanced Transportation Problem:


If the sum of the supplies of all the sources is not equal to the sum of the demands of
all the destinations, then the problem is termed as an unbalanced transportation
problem. This may be represented by the relation:

$$\sum_{i=1}^{m} a_i \neq \sum_{j=1}^{n} b_j$$
If $\sum_{i=1}^{m} a_i > \sum_{j=1}^{n} b_j$, then include a dummy destination to absorb the excess supply.
If $\sum_{i=1}^{m} a_i < \sum_{j=1}^{n} b_j$, then include a dummy source to absorb the excess demand.
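A small sketch of this balancing step is shown below; the function name `balance` and the use of NumPy are assumptions made for illustration, not part of the text.

```python
import numpy as np

def balance(costs, supply, demand):
    """Illustrative sketch: pad an unbalanced problem with a zero-cost dummy row/column."""
    c = np.array(costs, dtype=float)
    gap = sum(supply) - sum(demand)
    if gap > 0:       # excess supply -> dummy destination (extra zero-cost column)
        c = np.hstack([c, np.zeros((c.shape[0], 1))])
        demand = list(demand) + [gap]
    elif gap < 0:     # excess demand -> dummy source (extra zero-cost row)
        c = np.vstack([c, np.zeros((1, c.shape[1]))])
        supply = list(supply) + [-gap]
    return c, list(supply), list(demand)
```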

3) Minimisation Transportation Problem:


In general, transportation model is used for cost minimization problems.
4) Maximisation Transportation Problem:
It is used to solve problems in which objective is to maximize total value or benefit.
That is, instead of unit cost cij, the unit profit or pay-off pij associated with each
route (i, j) is given.
Then the objective function in terms of total profit or pay-off is stated as follows:
$$\text{Maximise } Z = \sum_{i=1}^{m}\sum_{j=1}^{n} p_{ij} x_{ij}$$
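One common way to carry out this conversion, and the one used in Example 7 later in this unit, is to subtract every pay-off from the largest pay-off and then minimise the resulting matrix; a minimal sketch with Example 7's profit data (the NumPy usage is an assumption):

```python
import numpy as np

# Illustrative sketch: pay-off matrix of Example 7 later in this unit.
profit = np.array([[19, 21, 16, 15, 15],
                   [ 9, 13, 11, 19, 11],
                   [18, 19, 20, 24, 14]])

cost_equivalent = profit.max() - profit   # minimising this maximises total profit
print(cost_equivalent)
```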

1.3.2 Some Basic Solution for Transportation Problem Definitions:


The following terms are to be defined with reference to the transportation problems:
1) Feasible Solution (F.S.):
A set of non-negative allocations xij ≥ 0 which satisfies the row and column
restrictions is known as a feasible solution.
2) Basic Feasible Solution (B.F.S.):
A feasible solution to an m-origin and n-destination problem is said to be a basic
feasible solution if the number of positive allocations is (m + n − 1). If the number
of allocations in a basic feasible solution is less than (m + n − 1), it is called a
degenerate basic feasible solution (DBFS) (otherwise non-degenerate).
3) Optimal Solution:
A feasible solution (not necessarily basic) is said to be optimal if it minimizes the
total transportation cost.
4) Non-Degenerate Basic Feasible Solution:
A basic feasible solution to an (m × n) transportation problem that contains exactly
(m + n − 1) allocations in independent positions.
5) Degenerate Basic Feasible Solution:
A basic feasible solution that contains fewer than (m + n − 1) non-negative allocations.

1.3.3 Phases of Solution of Transportation Problems:


There are two phases in which all transportation problems can be solved:

Phase I:
1) Initial Basic Feasible Solution:
Usually a transportation problem involves m origins and n destinations. If the sum of
the origin capacities equals the sum of the destination requirements, i.e., Σai = Σbj,
then a feasible solution exists. A basic feasible solution has m + n − 1 allocations in
the transportation matrix; in other words, the number of allocations, m + n − 1,
equals the number of basic cells of the (m × n) matrix.
Some of the important methods of initial basic feasible solution are:
A) North-West Corner Method
B) Matrix Minimum/Least Cost Method
C) Vogel’s Approximation Method

A) North-West Corner (NWC) Method:


The method can be summarised as follows:
Step 1: The first assignment is made in the cell occupying the upper left-hand
(north-west) corner of the transportation table. The maximum possible amount is
allocated there, that is, x11 = min (a1, b1). This value of x11 is then entered in the cell
(1, 1) of the transportation table.
Step 2: This allocation is of such a magnitude that either the origin capacity of the
first row is exhausted or the destination requirement of the first column is satisfied,
or both:
1) If b1 > a1, move vertically downwards to the second row and make the second
allocation of x21 = min (a2, b1 - a1) in the cell (2, 1).
2) If b1 < a1, move horizontally to the right to the second column and make the
second allocation of x12 = min (a1 - b1, b2) in the cell (1, 2).
3) If b1 = a1, there is a tie for the second allocation. One can make the second
allocation in cell (2, 2) in the same manner as in points 1 and 2 above.
Step 3: Start from the new north-west corner of the transportation table and repeat
steps 1 and 2 until all the requirements are satisfied.
Advantages of North-West Corner Method:
1) This method is very simple, as it provides a step-by-step solution.
2) It is very easy to obtain an initial solution through this method.
Disadvantages of North-West Corner Method:
1) This method does not take into consideration the important factor, viz. cost, which
is sought to be minimized.
2) The north-west corner rule therefore takes more time in obtaining the optimal solution.
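A minimal Python sketch of the rule described in Steps 1-3 above is given below; the function name and the use of NumPy are assumptions, not part of the text. Applied to the data of Example 8 later in this unit (supplies 5, 2, 3 and demands 3, 3, 2, 2), it should reproduce the total cost of 48 obtained there.

```python
import numpy as np

def north_west_corner(supply, demand):
    """Illustrative sketch: initial allocation by the North-West Corner rule (costs unused)."""
    s, d = list(supply), list(demand)
    alloc = np.zeros((len(s), len(d)))
    i = j = 0
    while i < len(s) and j < len(d):
        q = min(s[i], d[j])          # allocate as much as possible in cell (i, j)
        alloc[i, j] = q
        s[i] -= q
        d[j] -= q
        if s[i] == 0:                # row exhausted -> move down
            i += 1
        else:                        # column satisfied -> move right
            j += 1
    return alloc

# Data of Example 8 (3 origins x 4 destinations), assuming a balanced problem.
cost = np.array([[3, 7, 6, 4], [2, 4, 3, 2], [4, 3, 8, 5]])
alloc = north_west_corner([5, 2, 3], [3, 3, 2, 2])
print((alloc * cost).sum())          # expected: 48
```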

Example:
Solve the following transportation problem using North-West Corner method:

Factories Warehouse Supply
W1 W2 W3
F1 2 7 4 5
F2 3 3 1 8
F3 1 6 2 14
Demand 7 9 18 34
Solution:
The steps involved in finding the initial basic feasible solution using the NWC method are:
Step 1: Choose the north-west corner cell (F1, W1).
Step 2: Allocate as many units as possible looking to the supply and demand, i.e.,
min (5, 7) = 5. Allocate 5 units in the cell (F1, W1).
Step 3: Since the supply at factory F1 is exhausted, move to the cell (F2, W1).
Step 4: Choose the min (8, 2) = 2. Allocate 2 units to the cell (F2, W1).
Step 5: Since the requirement is satisfied at W1, move to the cell (F2, W2).
Allocate min (6, 9) to cell (F2, W2), i.e., 6 units.
Step 6: Since the supply is exhausted at F2, move to the cell (F3, W2).
Choose the min (7, 3) = 3. Allocate 3 units to the cell (F3, W2).
Step 7: Now move to the cell (F3, W3) and allocate min (4, 18) = 4 units.
Step 8: Now move to cell (F4, W3) and allocate min (14, 14) = 14 units.

Since m + n – 1 = 4 +3 - 1 = 6, hence the solution is feasible.


Total Transportation Cost = 2×5 + 3×2 + 3×6 + 4×3 + 7×4 + 2×14
= 10 + 6 + 18 + 12 + 28 + 28 = 102
B) Least Cost / Matrix Minimum Method:
The steps involved in the least cost method are as follows:

Step 1: Choose the cell (Oi, Dj) with smallest unit cost in the transportation table
and allocate maximum possible to this cell. Eliminate the row i or column j in which
either the availability or the demand is exhausted. If the smallest unit cost cell is not
unique, then choose the cell to which the maximum allocation can be made.
Step 2: Adjust the supply and demand for the remaining rows and columns and
repeat the process with the smallest unit cost among the remaining rows and
columns, ignoring any row or column whose supply or demand is already exhausted.
Step 3: Continue with the procedure till supply at various origins and demand at
various destinations are satisfied.
Advantages of Least Cost Method:
1) This method provides a better initial solution, as the transportation cost is
considered while making the allocations.
2) It is very simple and easy to compute an initial solution under this method.
Disadvantages of Least Cost Method:
1) The solution given by this method is still not necessarily close to the optimal solution.
2) When there is a tie in the minimum cost, the selection is made through personal
observation; the method does not follow any systematic tie-breaking rule.
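A minimal Python sketch of the least cost method follows; the function name, the NumPy usage and the tie-breaking order are assumptions. Breaking ties in favour of the cell admitting the largest allocation, it reproduces the total of 36 obtained for the data of Example 8 later in this unit; a different tie-break can give a different, and sometimes cheaper, starting solution.

```python
import numpy as np

def least_cost(costs, supply, demand):
    """Illustrative sketch: initial allocation by the matrix-minimum (least cost) method."""
    c = np.array(costs, dtype=float)
    s, d = list(supply), list(demand)
    alloc = np.zeros_like(c)
    active = [(i, j) for i in range(len(s)) for j in range(len(d))]
    while active:
        # Smallest unit cost; among ties, the cell allowing the largest allocation.
        i, j = min(active, key=lambda ij: (c[ij], -min(s[ij[0]], d[ij[1]])))
        q = min(s[i], d[j])
        alloc[i, j] = q
        s[i] -= q
        d[j] -= q
        # Drop every cell whose row supply or column demand is exhausted.
        active = [(a, b) for (a, b) in active if s[a] > 0 and d[b] > 0]
    return alloc

cost = np.array([[3, 7, 6, 4], [2, 4, 3, 2], [4, 3, 8, 5]])   # data of Example 8
alloc = least_cost(cost, [5, 2, 3], [3, 3, 2, 2])
print((alloc * cost).sum())                                    # expected: 36
```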

Example:
Solve the following transportation problem using matrix minima method:
D1 D2 D3 D4 Availability
O1 10 8 7 12 5000
O2 12 13 6 10 6000
O3 8 10 12 14 9000
Demand 7000 5500 4500 3000 20000
Solution:
Using matrix minima, we have following allocations:

TC = 8×7000 + 8×5000 + 10×500 + 6×4500 + 10×1500 + 14×1500
= 56000 + 40000 + 5000 + 27000 + 15000 + 21000 = 164000

C) Vogel’s Approximation Method (VAM):
Vogel's approximation method is mostly preferred because the initial basic feasible
solution obtained by this method is either optimal or very near to the optimal solution,
due to which the amount of time necessary to arrive at the final solution is greatly
reduced.
Various steps involved are as under:
Step 1: For each row of the transportation table identify the smallest and next-to-
smallest cost. Determine the difference between them for each row. These are
called ‘penalties’. Put them alongside the transportation table by enclosing them in
the parentheses (braces) against the respective rows. Similarly, compute these
penalties for each column.
Step 2: Identify the row or column with the largest penalty among all the rows and
columns. In this identified row or column, select the cell with the least cost and
allocate the maximum feasible number of units to it. Eliminate the row or column
whose demand and supply requirements are fulfilled. If the largest penalties
corresponding to two or more rows or columns are equal, select any one of them.
Step 3: Again compute the column and row penalties for the reduced transportation
table and then go to step 2.
Repeat the procedure until all the requirements are satisfied.
Advantages of Vogel's Approximation Method:
1) This method is very systematic.
2) This method takes less time in solving a transportation problem.
3) Fewer computations are involved in this method.
Disadvantages of Vogel's Approximation Method:
1) This method provides only an approximate (initial) solution to the given problem.
2) This method is tedious when the given matrix is large.
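A minimal Python sketch of VAM is given below; the function name, the NumPy usage and the tie-breaking choices are assumptions (a lone remaining cost in a row or column is treated as its own penalty). For the data of the example that follows, it should reproduce the initial cost of Rs. 114.

```python
import numpy as np

def vogel(costs, supply, demand):
    """Illustrative sketch: initial basic feasible solution by Vogel's Approximation Method."""
    c = np.array(costs, dtype=float)
    s, d = list(supply), list(demand)
    alloc = np.zeros_like(c)
    rows, cols = list(range(len(s))), list(range(len(d)))

    def penalty(values):
        v = sorted(values)
        return v[0] if len(v) == 1 else v[1] - v[0]   # lone cost: penalty = itself

    while rows and cols:
        # Penalties of every active row and column; pick the largest.
        cand = [(penalty([c[i, j] for j in cols]), 'row', i) for i in rows] + \
               [(penalty([c[i, j] for i in rows]), 'col', j) for j in cols]
        _, kind, k = max(cand, key=lambda t: t[0])
        if kind == 'row':
            i, j = k, min(cols, key=lambda j: c[k, j])   # least cost in that row
        else:
            i, j = min(rows, key=lambda i: c[i, k]), k   # least cost in that column
        q = min(s[i], d[j])
        alloc[i, j] = q
        s[i] -= q
        d[j] -= q
        if s[i] == 0:
            rows.remove(i)
        if d[j] == 0:
            cols.remove(j)
    return alloc

cost = np.array([[6, 4, 1, 5], [8, 9, 2, 7], [4, 3, 6, 2]])   # data of the example below
alloc = vogel(cost, [14, 16, 5], [6, 10, 15, 4])
print((alloc * cost).sum())                                    # expected: 114
```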

Example:
Find the optimal solution for transporting the products at a minimum cost for the
following transportation problem with cost structure as follows:

From To Supply
W1 W2 W3 W4
F1 6 4 1 5 14
F2 8 9 2 7 16
F3 4 3 6 2 5
Demand 6 10 15 4 35
Solution:
Applying Vogel's approximation method, the steps are as below:
Step 1: First compute the penalties by taking the difference between the two lowest
cost cells in each row and in each column.
Step 2: Identify the largest penalty among the row and column penalties.
Step 3: Allocate the maximum feasible number of units to the lowest cost cell of the row
or column having the largest penalty.
Step 4: Once a row or column is fully allocated, eliminate that row or column from
further consideration.
Step 5: Calculate the penalties again and repeat the procedure until the supply and
demand are exhausted.

TC = 6×4 + 8×1 + 4×1 + 4×10 + 2×15 + 2×4
= 24 + 8 + 4 + 40 + 30 + 8 = 114
Phase II:
Optimum Basic Feasible Solution: A basic feasible solution that minimises the total
transportation cost is called the optimum solution. It can be obtained by applying the
following two methods:
A) Modified Distribution (MODI) Method:
This method is easier and more efficient than the stepping-stone method. It is based
on the concept of the dual variables that are used to evaluate the empty cells. Using
these dual variables the opportunity cost of each of the empty cells is determined.
The opportunity cost values in both the methods indicate the optimality or otherwise
of a given solution.

Optimisation Using Modified Distribution Method (MODI):
The MODI method is an efficient method of testing the optimality of a transportation
solution. It may be recalled that in the application of the stepping-stone method,
each of the empty cells is evaluated for the opportunity cost by drawing a closed
loop. In a situation where a large number of sources and destinations are involved,
this would be a very time-consuming and complex exercise. The MODI method
avoids this kind of extensive scanning and reduces the number of steps required in
the evaluation of the empty cells. This method gives a straightforward computational
scheme whereby we can determine the opportunity cost of each of the empty cells.
Steps of MODI Method:
Step 1: Add to the transportation table a column on the right-hand side titled ui and a
row at the bottom of it labelled vj.
Step 2: In this step the following sub-steps are performed:
1) Set equal to zero the ui or vj value corresponding to the row or column which
has the maximum number of allocations. Generally, a value 0 (zero) is assigned
to the first row, i.e. u1 = 0.
2) Consider every occupied cell in the first row individually and assign the column
value vj (where the occupied cell is in the jth column of the row) such that the
sum of the row and the column values is equal to the unit cost value in the
occupied cell. With the help of these values, consider the other occupied cells
one by one and determine the appropriate values of the ui's, taking in each case
ui + vj = cij. Thus, if ui is the row value of the ith row, vj is the column value of
the jth column and cij is the unit cost of the cell in the ith row and jth column,
then the row and column values are obtained using the following equation:
ui + vj = cij
Step 3: Having determined all the ui and vj values, calculate for each unoccupied cell
Δij = cij − (ui + vj). The Δij's represent the opportunity costs of the various cells. After
obtaining the opportunity costs, proceed in the same way as in the stepping-stone
method.
If all the empty cells have non-negative opportunity costs, the solution is an optimal
solution. If one or more Δij values are negative, i.e. Δij < 0, the given solution is not
optimal; then the cell with the most negative opportunity cost is selected, a closed loop
is traced, and transfers of units along the route are made in accordance with the
method. The resulting solution is again tested for optimality and improved if
necessary. The process is repeated until an optimal solution is obtained.
Hence the condition for the solution to be optimal is:
cij − (ui + vj) ≥ 0 for every unoccupied cell.
Example:
Solve the following transportation problem using Vogel's approximation method and
check for optimality using MODI method.

Machine Destination
D1 D2 D3 D4 Supply
O1 21 16 25 13 11
O2 17 18 14 23 13
O3 32 17 18 41 19
Demand 6 10 12 15 43
Solution:
Since Σai = Σbj = 43, the given transportation problem is balanced. Hence, there exists
a basic feasible solution to this problem. By VAM (Vogel's Approximation Method), the
initial basic feasible solution, with all allocations made during the procedure, is shown
in a single table below. Applying the penalties:

Hence final table after all allocations will be,

Initial transportation cost,
TC = 11×13 + 6×17 + 3×14 + 4×23 + 10×17 + 9×18
= 143 + 102 + 42 + 92 + 170 + 162 = Rs. 711
MODI Method:
Since the number of basic cells = 6 = 4 + 3 − 1, the solution is non-degenerate. Now we
calculate ui and vj using ui + vj = cij for all allocated cells:
u1 + v4 = 13, u2 + v1 = 17, u2 + v3 = 14, u2 + v4 = 23, u3 + v2 = 17, u3 + v3 = 18
Let arbitrarily v4 = 0.
Then,
u1 = 13
u2 = 23
v1 = 17 − 23 = −6
v3 = 14 − 23 = −9
u3 = 18 − (−9) = 27
v2 = 17 − 27 = −10
Now we find the net evaluations dij = cij − (ui + vj) for all non-occupied cells.
Then we have:
d11 = c11 − (u1 + v1) = 21 − (13 − 6) = 14
d12 = c12 − (u1 + v2) = 16 − (13 − 10) = 13
d13 = c13 − (u1 + v3) = 25 − (13 − 9) = 21
d22 = c22 − (u2 + v2) = 18 − (23 − 10) = 5
d31 = c31 − (u3 + v1) = 32 − (27 − 6) = 11
d34 = c34 − (u3 + v4) = 41 − (27 + 0) = 14
Since all dij > 0, the solution under test is optimal and unique. Therefore, with the
optimal allocation, the minimum transportation cost
= 11 × 13 + 6 × 17 + 3 × 14 + 4 × 23 + 10 × 17 + 9 × 18
= 143 + 102 + 42 + 92 + 170 + 162 = Rs. 711.
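As an independent cross-check of this result, the same problem can be handed to a general-purpose LP solver. The sketch below uses scipy.optimize.linprog (using SciPy here is an assumption, not part of the text) and should report the same minimum of Rs. 711.

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative sketch: data of the worked example above (3 origins, 4 destinations).
cost = np.array([[21, 16, 25, 13],
                 [17, 18, 14, 23],
                 [32, 17, 18, 41]], dtype=float)
supply = [11, 13, 19]
demand = [6, 10, 12, 15]
m, n = cost.shape

# One equality constraint per origin (row sum) and per destination (column sum).
A_eq = []
for i in range(m):
    row = np.zeros(m * n); row[i * n:(i + 1) * n] = 1; A_eq.append(row)
for j in range(n):
    col = np.zeros(m * n); col[j::n] = 1; A_eq.append(col)

res = linprog(cost.ravel(), A_eq=np.array(A_eq), b_eq=supply + demand,
              bounds=(0, None), method="highs")
print(round(res.fun))   # expected: 711
```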

Solved Examples

A) Examples on Assignment Problems (Hungarian Method):

Example 1:
There are four men available for work on four separate jobs. Only one man can work on
any one job. The cost of assigning each man to each job is given in the following table.
The objective is to assign men to jobs such that the total cost of assignment is
minimum.

Jobs 1 2 3 4

Persons
A 20 25 22 28
B 15 18 23 17
C 19 17 21 24
D 25 23 24 24

Solution:
Step 1: Identify the minimum element in each row and subtract it from every element of
that row.

Jobs 1 2 3 4

Persons
A 0 5 2 8
B 0 3 8 2
C 2 0 4 7
D 2 0 1 1

Step 2: Identify the minimum element in each column and subtract it from every
element of that column.

Jobs 1 2 3 4

Persons
A 0 5 1 7
B 0 3 7 1
C 2 0 3 6
D 2 0 0 0

Step 3: Make the assignments for the reduced matrix obtained from steps 1 and 2 in the
following way.
1) Examine the rows successively until a row with exactly one unmarked zero is found.
Enclose this zero in a box as an assignment will be made there and cross (X) all
other zeros appearing in the corresponding column as they will not be considered
for future assignment. Proceed in this way until all the rows have been examined.
2) After examining all the rows completely, examine the columns successively until a
column with exactly one unmarked zero is found. Make an assignment to this single
zero by putting a square around it and cross out (X) all other zeros in that row.
Proceed in this manner until all the columns have been examined.

3) Repeat the operations (a) and (b) successively until one of the following situations
arises:
a) All the zeros in the rows/columns are either marked or crossed (X) and there is
exactly one assignment in each row and in each column. In such a case the
optimum assignment policy for the given problem is obtained.
b) The total number of marked zeros is less than the order of the matrix, i.e. there
is some row (or column) without an assignment. In such a case proceed to step 4.

Step 4:
Draw the minimum number of vertical and horizontal lines necessary to cover all the
zeros in the reduced matrix obtained from step 3 by adopting the following procedure:
1) Mark all the rows that do not have assignments.
2) Mark all the columns (not already marked) which have zeros in the marked rows.
3) Mark all the rows (not already marked) that have assignments in marked
columns.
4) Repeat steps 2) and 3) above until no more rows or columns can be marked.
5) Draw straight lines through all unmarked rows and columns.

Step 5:
Select the smallest element from all the uncovered elements. Subtract this smallest
element from all the uncovered elements and add it to the elements, which lie at the
intersection of two lines. Thus, we obtain another reduced matrix for fresh assignment.
Jobs 1 2 3 4

Persons
A 0 4 0 6
B 0 2 6 0
C 3 0 3 6
D 3 0 0 0
Go to step 3 and repeat the procedure until you arrive at an optimum assignment.

Since the number of assignments is equal to the number of rows (& columns), this is
the optimal solution.
The total cost of assignment = A1 + B4 + C2 + D3.
Substituting the values from the original table:
i.e. 20 + 17 + 17 + 24 = 78.
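The same optimum can be cross-checked with an off-the-shelf routine: scipy.optimize.linear_sum_assignment solves exactly this kind of linear assignment problem (using SciPy here is an assumption, not part of the text) and should return an assignment totalling 78 for the cost matrix above.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Illustrative sketch: cost matrix of Example 1 (persons A-D x jobs 1-4).
cost = np.array([[20, 25, 22, 28],
                 [15, 18, 23, 17],
                 [19, 17, 21, 24],
                 [25, 23, 24, 24]])

row_ind, col_ind = linear_sum_assignment(cost)
print(cost[row_ind, col_ind].sum())   # expected: 78
```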

Example 2:
A departmental head has four subordinates, and four tasks to be performed. The
subordinates differ in efficiency, and the tasks differ in their intrinsic difficulty. His
estimate of the time each man would take to perform each task is given in the matrix
below.
Men 1 2 3 4

Persons
A 18 26 17 11
B 13 28 14 26
C 38 19 18 15
D 19 26 24 10

Solution:
Step 1:
Identify the minimum element in each row and subtract it from every element of that
row, we get the reduced matrix
Men 1 2 3 4

Persons
A 7 15 6 0
B 0 15 1 13
C 23 4 3 0
D 9 16 14 0
Step 2:
Identify the minimum element in each column and subtract it from every element of that
column.
Men 1 2 3 4

Persons
A 7 11 5 0
B 0 11 0 13
C 23 0 2 0
D 9 12 13 0
Step 3:
Make the assignments for the reduced matrix obtained from steps 1 and 2 in the following
way:
Now proceed as in the previous example.
The optimal assignment is: A → 3, B → 1, C → 2 and D → 4.
The minimum total time for this assignment schedule is 17 + 13 + 19 + 10 = 59 man-hours.

Example 3:
Solve the following assignment problem:

Typist Job
P Q R S
A 85 50 30 40
B 90 40 70 45
C 70 60 60 50
D 75 45 35 55

Solution:
Applying Flood's technique (the Hungarian method), the steps are as below:
Step 1: Row Reduction: Subtract from each row the lowest cost in the row.
P Q R S
A 55 20 0 10
B 50 0 30 5
C 20 10 10 0
D 40 10 0 20
Step 2: Column Reduction: Now, subtract from each column the lowest cost in the
column. (Here every column except column P already contains a zero, so changes occur
only in column P.)
P Q R S
A 35 20 0 10
B 30 0 30 5
C 0 10 10 0
D 20 10 0 20

Step 3: Drawing Line: Drawing of the minimum number of horizontal and vertical
lines necessary to cover all zeros.

Here only three lines are sufficient for covering all zeros but the matrix is of 4x4. So it
does not give the optimal solution. Here 5 is the smallest number that does not have a
line through it.
Now subtract the least uncovered cell value 5 from each of the uncovered values.

Since the number of lines which cover all the zeros is 3, while the order of the matrix is
4, this still does not give the optimal solution. Again, subtract the least uncovered cell
value 5 from each of the uncovered values and add it at the intersections of the lines.

Step 4: Optimal Solution: Now the number of lines is 4. Hence this gives the optimal
solution. Assignments can be made according to the above-mentioned procedure, as
follows:

The assignments have been made in the following order.


A → S
B → Q
C → P
D → R
The minimum total cost = 40 + 40 + 70 + 35 = 185.

B) Examples on Transportation Problem:


1) Balanced Transportation Problem:
Example 4:
Solve the following transportation problem:
From To Supply
D1 D2 D3 D4 D5
F1 20 28 32 55 70 50
F2 48 36 40 44 25 100
F3 35 55 22 45 48 150
Demand 100 70 50 40 40 300

Solution:
Using the least cost method, we have following matrix form:

Since m + n − 1 = 3 + 5 − 1 = 7, which equals the number of allocations, the solution is a basic feasible solution.
TC = 20×50 + 35×50 + 36×60 + 55×10 + 22×50 + 45×40 + 25×40
= 1000 + 1750 + 2160 + 550 + 1100 + 1800 + 1000 = 9360
2) Unbalanced Transportation Problem:
Example 5:
Solve the following transportation problem using North West Comer Method:
From To Supply
D1 D2 D3 D4
F1 10 8 7 12 500
F2 12 13 6 10 500
F3 8 10 12 14 900
Demand 700 550 450 300 2000 (total supply = 1900)
Solution:
It is an unbalanced transportation problem, so it should be balanced by introducing a
dummy row (source) with zero costs and a supply of 100 units.

TC = 10×500 + 12×200 + 13×300 + 10×250 + 12×450 + 14×200 + 0×100
= 5000 + 2400 + 3900 + 2500 + 5400 + 2800 + 0
= 22,000

Example 6:
Find the optimal solution for transporting the products at a minimum cost for the
following transportation problem with cost structure as follows:
P Q R Availabilities
A 16 19 12 15
B 22 13 19 16
C 14 28 8 12
Requirement 10 16 17 42
Solution:
Applying VAM method:

Allocate 10 to AP = 10×16 = 160;
Allocate 5 to AR = 5×12 = 60;
Allocate 16 to BQ = 16×13 = 208;
Allocate 12 to CR = 12×8 = 96.
Total Cost = 160 + 60 + 208 + 96 = 524

3) Maximization Transportation Problem:


Example 7:
Consider the following profit matrix:
A B C D E Supply
1 19 21 16 15 15 150
2 9 13 11 19 11 200
3 18 19 20 24 14 125
Demand 80 100 75 45 125

Solution:
Since the given problem is a maximization problem, we have to convert it into a
minimization problem by subtracting each element of profit matrix from the biggest
element (24), producing the matrix as follows:
A B C D E Supply
1 5 3 8 9 9 150
2 15 11 13 5 13 200
3 6 5 4 0 10 125
Demand 80 100 75 45 125

In the above table, the total demand (425) is less than the total supply (475), hence it is
an unbalanced problem. So we need to introduce a dummy column. Now by using
Vogel’s approximation method, the initial solution is as shown in the following table;

We have a total of eight allocations, which is equal to (m + n − 1); hence the solution is
not degenerate. Now, we will apply the MODI method to test the optimality of this initial
solution.

Since not all the opportunity costs are positive or zero, we will introduce cell (2, D) and
drop cell (2, A), now the revised table will be as follows:

Since all the opportunity costs are positive or zero, therefore the above solution is an
optimal solution.
Total profit: (50 x 5) + (100 x 3) + (25 x 5) + (125 x 13) + (30 x 6) + (75 x 4) + (20 x 0) +
(50 x 0) = Rs. 2780

Example 8:
Find the initial solution to the following transportation problem.

D E F G Supply
A 3 7 6 4 5
B 2 4 3 2 2
C 4 3 8 5 3
Demand 3 3 2 2
Using
a) North-west corner rule
b) Least cost method
c) Vogel’s approximation method
Solution:
Let ai and bj be the capacities of the origins and the requirements of the destinations
respectively. Since Σai = Σbj = 10, the problem is a balanced transportation problem.
This indicates that a feasible solution exists for the given transportation problem.
a) North-west Corner rule:
Initial basic feasible solution to the given transportation problem is calculated by
using the north-west corner method and presented as below:


The allocations are (A, D) = 3, (A, E) = 2, (B, E) = 1, (B, F) = 1, (C, F) = 1, (C, G) = 2.
Calculation of total transportation cost:
Total cost = 3×3 + 7×2 + 4×1 + 3×1 + 8×1 + 5×2 = 9 + 14 + 4 + 3 + 8 + 10 = 48
b) Least Cost Method:
Initial feasible solution to the given problem is computed by using the least cost
method and presented as below:
One set of least-cost allocations is (B, D) = 2, (C, E) = 3, (A, D) = 1, (A, G) = 2, (A, F) = 2.
Calculation of total transportation cost:
Total cost = 2×2 + 3×3 + 3×1 + 4×2 + 6×2 = 4 + 9 + 3 + 8 + 12 = 36
c) Vogel’s Approximation Method:
The initial feasible solution to the given transportation problem is calculated using
VAM and represented as below:


The allocations are (B, F) = 2, (C, E) = 3, (A, D) = 3, (A, G) = 2.
Calculation of the total transportation cost:
Total cost = 3×2 + 3×3 + 3×3 + 4×2 = 6 + 9 + 9 + 8 = 32

Example 9:
Solve the following transportation problem by North-West corner rule, Matrix Minima
(Least cost Method) and VAM Method.
Factories Supply
6 4 1 5 14
8 9 2 7 16
4 3 6 2 05
Demand 6 10 15 4 35
Solution:
1) North-West Corner Method:
Factories Supply
6(6) 4(8) 1 5 14
8 9(2) 2(14) 7 16
4 3 6(1) 2(4) 05
Demand 6 10 15 4 35
The total Feasible Transportation Cost is
= 6(6) + 4(8) + 9(2) + 2(14) + 6(1) + 2(4)
= Rs. 128/

2) Least Cost Method:
Factories Supply
6 4 1(14) 5 14
8(6) 9(9) 2(1) 7 16
4 3(1) 6 2(4) 05
Demand 6 10 15 4 35
The Total feasible transportation cost
= 1(14) + 8(6) + 9(9) + 2(1) +3(1) +2(4)
= Rs.156/-

3) VAM Method:
Factories Supply
6(4) 4(10) 1 5 14
8(1) 9 2(15) 7 16
4(1) 3 6 2(4) 05
Demand 6 10 15 4 35
The Total feasible transportation cost
= 6(4) + 4(10) + 8(1) + 2(15) + 4 (1) + 2(4)
= Rs. 114/-

Example 10:
A company has three plants supplying the same product to the five distribution centers.
Due to peculiarities inherent in the manufacturing set-up, the cost per unit varies from
plant to plant. There are restrictions on the monthly capacity of each plant, each
distribution centre has a specific sales requirement, and the capacities, requirements
and the cost of transportation are given below.

Factories    D1  D2  D3  D4  D5    Supply
             5   3   3   6   4     200
             4   5   6   3   7     125
             2   3   5   2   3     175
Demand       60  80  85  105 70    400 (total supply = 500)

The cost of manufacturing the product differs from plant to plant: the fixed costs are
Rs 7 × 10^5, 4 × 10^5 and 5 × 10^5, whereas the variable costs per unit are Rs 13/-, 15/-
and 14/- respectively. Determine the quantity to be dispatched from each plant to the
different distribution centres, satisfying the requirements at minimum cost.

Solution:
Adding the variable cost per unit of each plant to its transportation costs, and introducing
a dummy distribution centre (demand 100) to absorb the excess supply, gives the following
cost table (the fixed costs do not affect the allocation):
Factories    D1  D2  D3  D4  D5  Dummy   Supply
             18  16  16  19  17   0      200
             19  20  21  18  22   0      125
             16  17  19  16  17   0      175
Demand       60  80  85  105 70   100    500

Factories    D1      D2       D3      D4        D5       Dummy    Supply
             18      16(55)   16(85)  19        17(60)   0        200
             19      20(25)   21      18        22       0(100)   125
             16(60)  17       19      16(105)   17(10)   0        175
Demand       60      80       85      105       70       100      500

Therefore the total feasible transportation cost,


= 16(55) + 16(85) + 17(60) + 20(25) + 0(100) +16(60) + 16(105) +17(10)
= Rs. 6570/-

Example 11:
Find the Optimal solution by MODI Method for the transportation problem.
Destination
1 2 3 4 Supply
1 4 2 7 3 250
Source 2 3 7 5 8 450
3 9 4 3 1 500
Demand 200 400 300 300

Solution:
Vogel’s Approximation method (VAM) is preferred to find initial feasible solution. The
advantage of this method is that it gives an initial solution which is nearer to an optimal
solution or the optimal solution itself.
Step 1: The given transportation problem is a balanced one as the sum of supply
equals to sum of demand.
Step 2: The initial basic solution is found by applying the VAM method and result is
shown in the following table:


Initial transportation cost = 2×250 + 3×200 + 5×250 + 3×50 + 1×300 + 4×150 = 500 + 600 + 1250 + 150 + 300 + 600 = Rs. 3,400

Step 3: Check for degeneracy. For this, verify the condition:
Number of allocations, N = m + n − 1 = 3 + 4 − 1 = 6
Since the condition is satisfied, degeneracy does not exist.


Step 4: Test for optimality using the modified distribution method. Compute the values
of ui and vj for the rows and columns respectively by applying the formula ui + vj = cij
for the occupied cells.
Then the opportunity cost for each unoccupied cell is calculated using the formula
dij = cij − (ui + vj) and denoted at the left-hand bottom corner of each unoccupied cell.
The computed values of ui, vj and dij are shown in the table below.
Calculate the values of ui and vj using the formula ui + vj = cij for the occupied cells.
Assume any one of the ui or vj values as zero (one of them is taken as 0).


Calculate the values of dij = cij − (ui + vj) for the unoccupied cells.
Since all the opportunity cost values are positive, the solution is optimal.
Total transportation cost = 2×250 + 3×200 + 5×250 + 3×50 + 1×300 + 4×150 = Rs. 3,400



Review Questions

Q.1. What is decision science? Explain the importance of decision science.


Q.2. Explain the role of Quantitative techniques in decision sciences.
Q.3. Give the mathematical model of the assignment problem and explain the various
methods used to solve it.
Q.4. Explain in detail Hungarian (Flood’s Technique) method for solving assignment
problem.
Q.5. Give mathematical formulation of transportation problem. Describe the various
types of transportation problems.
Q.6. Explain in detail the various methods of finding an initial basic feasible solution.
Q.7. Write a short note on: MODI method.
Q.8. Problems for Practice:
1) A company has three plants supplying the same product to the five distribution
centers. Due to peculiarities inherent in the set of cost of manufacturing, the cost/
unit will vary from plant to plant. There are restrictions in the monthly capacity of

each plant, each distribution center has a specific sales requirement, capacity
requirement, and the cost of transportation is given below. The cost of manufacturing
the product differs from plant to plant: the fixed costs are Rs 7 × 10^5, 4 × 10^5 and
5 × 10^5, whereas the variable costs per unit are Rs 13/-, 15/- and 14/- respectively.
Determine the quantity to be dispatched from each plant to the different distribution
centres, satisfying the requirements at minimum cost.
2) Solve the following transportation problem by North-West corner rule, Row
Minima, Column Minima, Matrix Minima and VAM Method:

3) Obtain an initial feasible solution to the following TP using Matrix Minima Method.

4) Determine an initial basic feasible solution for the following TP, using least cost
method.

5) Obtain the initial solution for the following TP using (i) NWCR (ii) Least cost
method (iii) VAM Destination.


6) Determine an initial basic feasible solution for the following TP, using least cost
method.

7) Solve the following T.P. using Vogel's Method.

8) The following table gives the cost of transporting material from supply points A, B,
C and D to demand points E, F, G, H and J.

The present allocation is as follows:


A to E 90, A to F 10, B to F 150, C to F 10, C to G 50, C to J 120, D to H 210, D to
J 70.

a) Check whether the allocation is optimal; if not, find an optimum schedule.
b) If in the above problem the transportation cost from A to G is reduced to 10,
what will be the new optimum schedule?
c) If the availability of supply point A is reduced by 10 units, use each of the
following criteria to obtain an initial basic feasible solution:
I) North-west corner rule.
II) Least cost method.
d) Starting with the best initial solution found in part (c), obtain an optimal solution
and hence produce the transportation schedule.
9) Find minimum transport cost of the following:

10) In the inventory of a company, which deals with large heavy metal blocks, four
new blocks are to be placed. There are only five empty places in the inventory of
the company, namely A, B, C, D and E, where places A and C are relatively small.
Hence the place A cannot hold block 3 and place C cannot hold block 2. The cost
of transferring the blocks into places of inventory is as follows:
Places
Blocks A B C D E
1 9 11 15 10 11
2 12 9 - 10 9
3 - 11 14 11 7
4 14 8 12 7 8
Find the optimal assignment schedule.
11) Find the initial solution to the following transportation problem.

Using
a) North-west corner rule
b) Least cost method
c) Vogel’s approximation method

UNIT 2
Linear Programming, Markov Chains & Simulation Techniques
2.1 Linear Programming Problem (L.P.P)
2.2 Markov Chain
2.3 Simulation Techniques

Introduction:
Linear programming constitutes a set of mathematical methods specially designed for
the modeling and solution of certain kinds of constrained optimization problems. The
mathematical presentation of the problem in the form of a linear objective function and
one or more linear constraints (equations or inequations) constitutes a linear
programming problem. The process leading to the construction of this model is referred
to as model building or mathematical formulation of the given business problem. In this
model, a linear objective function of the decision variables is maximized/minimized
subject to a set of linear constraints expressed as equations or inequations.

2.1 Linear Programming Problem (L.P.P) :


The term 'Linear' is used to describe the proportionate relationship of two or more
variables in a model. A given change in one variable will always cause a resulting
proportional change in another variable. The word 'programming' is used to specify a
sort of planning that involves the economic allocation of limited resources by adopting a
particular course of action or strategy among various alternative strategies to achieve
the desired objective. Hence, Linear Programming is a mathematical technique for
optimum allocation of limited or scarce resources, such as labor, material, machine,
money, energy, etc.

2.1.1 Meaning:
"A linear programming problem consists of a linear function to be maximized or
minimized subject to certain constraints in the form of linear equations or inequalities."

The linear programming is a technique for choosing the best alternative from a set of
feasible alternatives, in situations in which the objective function as well as the
constraints can be expressed as linear mathematical functions. In order to apply linear
programming, certain requirements have to be met. These are discussed here.
a) There should be an objective which should be clearly identifiable and measurable in
quantitative terms. It could be, for example, maximization of sales, of profit,
minimization of cost, and so on.
b) The activities to be included should be distinctly identifiable and measurable in
quantitative terms, for instance, the products included in a production planning
problem.
c) The resources of the system, which are to be allocated for the attainment of the
goal, should also be identifiable and measurable quantitatively. They must be in
limited supply. The technique would involve allocation of these resources in a
manner that would trade off the returns on the investment of the resources for the
attainment of the objective.
d) The relationships representing the objective as also the limitation considerations,
represented by the objective function and the constraint equations or inequalities,
respectively, must be linear in nature.
e) There should be a series of feasible alternative courses of action available to the
decision makers, which are determined by the resource constraints.
When these stated conditions are satisfied in a given situation, the problem can be
expressed in algebraic form, called Linear Programming Problem (LPP) and then
solved for optimal decision.

2.1.2 General Mathematical Model of Linear Programming Problem:

The general linear programming problem (or model) with n decision variables and m
constraints can be stated in the following form:
Find the values of decision variables x1, x2, …, xn so as to
Optimize (Max. or Min.) Z = c1x1 + c2x2 + … + cnxn
subject to the linear constraints,
a11x1 + a12x2 + … + a1nxn (≤, =, ≥) b1
a21x1 + a22x2 + … + a2nxn (≤, =, ≥) b2
. . .
am1x1 + am2x2 + … + amnxn (≤, =, ≥) bm
and x1, x2, …, xn ≥ 0.
The above formulation can be expressed in a compact form using the summation sign.
Optimize (Max. or Min.) Z = Σj cj xj (Objective Function) ………(1)
Subject to the linear constraints,
Σj aij xj (≤, =, ≥) bi, i = 1, 2, …, m (Constraints) ………(2)
And xj ≥ 0, j = 1, 2, …, n (Non-negative conditions) …….. (3)

Where, the cj's are coefficients representing the per unit contribution of decision variable
xj to the value of the objective function. The aij's are called the technological coefficients or
input-output coefficients and represent the amount of resource, say i, consumed per unit
of variable (activity) xj. In the given constraints, the aij's can be positive, negative or zero. bi
represents the total availability of the ith resource. The term resource is used in a very
general sense to include any numerical value associated with the right-hand side of a
constraint. It is assumed that bi ≥ 0 for all i. However, if any bi < 0, then both sides of
constraint i can be multiplied by -1 to make bi ≥ 0 and reverse the inequality of the
constraint. In the general L.P. problem, the expression (≤, =, ≥) means that in any
specific problem each constraint may take only one of the three possible forms:
a) Less than or equal to (≤)
b) Equal to (=)
c) Greater than or equal to (≥)

2.1.3 Important Definitions Under L.P.P:


1) Solution:
The set of decision variables that satisfy the constraints of an LP
problem is said to constitute the solution to that LP problem.
2) Feasible Solution:
The set of values of decision variables that satisfy all the
constraints and non-negativity conditions of an LP problem simultaneously is said to
constitute the feasible solution to that LP problem.
3) Infeasible Solution:
The set of values of decision variables that do not satisfy all the
constraints and non-negativity of an LP problem simultaneously is said to constitute
the infeasible solution to that LP problem.
4) Basic Solution:
For a set of m simultaneous equations in n variables (n>m), a solution obtained by
setting (n - m) variables equal to zero and solving for remaining m equations in m
variable is called a basic solution. The (n - m) variables whose value did not appear
in this solution are called non-basic variables and the remaining m variables are
called basic variables.
5) Basic Feasible Solution:
A feasible solution to an LP problem which is also the basic solution is called the
basic feasible solution. That is, all basic variables assume non-negative values.
Basic feasible solutions are of two types:
a) Degenerate Solution:
A basic feasible solution is called degenerate if the value of at least one basic
variable is zero.

b) Non-degenerate Solution:
A basic feasible solution is called non-degenerate if values of all m basic
variables are non-zero and positive.
6) Optimum Basic Feasible Solution:
A basic feasible solution that optimizes (maximizes or minimizes) the objective
function value of the given LP problem is called an optimum basic feasible solution.
7) Unbounded Solution:
A solution that can indefinitely increase or decrease the value of the objective
function of the LP problem is called an unbounded solution.

2.1.4 Formulation of L.P.P. :


A) General Structure:
The general structure of the Linear Programming model essentially consists of three
components.
a) The activities (variables) and their relationships.
b) The objective function.
c) The constraints.

a) Decision Variables:
The activities are represented by X1, X2, X3 ……..Xn. These are known as
Decision variables.
b) Objective Function:
The Objective function of an LPP (Linear Programming Problem) is a
mathematical representation of the objective in terms of a measurable quantity
such as profit, cost, revenue, etc.
Optimize (Maximize or Minimize) Z=C1X1 +C2X2+ ………..Cn Xn, Where Z is the
measure of performance variable.
X1, X2, X3, X4,…..,Xn are the decision variables, C1, C2,.…,Cn are the
parameters that give contribution to decision variables.
c) Constraints:
The constraints are the set of linear inequalities and/or equalities which impose
restriction of the limited resources.

B) Guidelines for Formulating Linear Programming Model:


1) Identify and define the decision variable of the problem.
2) Define the objective function.
3) State the constraints subject to which the objective function is to be optimized (i.e.
maximized or minimized).
4) Add the non-negative constraints from the consideration that the negative values
of the decision variables do not have any valid physical interpretation.

C) Basic Assumptions of L.P.P. :
1) Certainty:
In all LP models it is assumed that, all the model parameters such as availability
of resources, profit (or cost) contribution of a unit of decision variable and
consumption of resources by a unit of decision variable must be known and
constant.
2) Divisibility (Continuity):
The solution values of decision variables and resources are assumed to be either
whole numbers (integers) or mixed numbers (integer and fractional).
However, if only integer values are desired, then the integer programming
method may be employed.
3) Additivity:
The value of the objective function for the given values of decision variables, and
the total amount of resources used, must be equal to the sum of the contributions
(profit or cost) earned from each decision variable and the sum of the resources
used by each decision variable, respectively. In other words, the objective function
is the direct sum of the individual contributions of the different variables.
4) Linearity:
All relationships in the LP model (i.e. in both objective function and constraints)
must be linear.

2.1.5 Graphical Solution of L.P.P :


Linear programming with two decision variables can be solved graphically. Although the
method is quite simple, the principle of solution is based on certain analytical concepts.
While obtaining the optimal solution to the LP problem by the graphical method, the
statements of the following theorems of linear programming are used. This is one of the
simplest methods to solve Linear Programming Problems.
Steps:
1) Treat each inequality as an equality.
2) Plot the points and draw the line on the graph for each equality.
3) Mark the region for each constraint, i.e. whether it lies towards the origin (0, 0) or
away from the origin.
4) Find out the common feasible region. This may be an open region or a closed polygon.
5) The optimum value always occurs at a vertex of the polygon. Substitute the values of
the vertices in the objective function and find the optimum solution: for a maximization
objective, the vertex where the value is maximum; for a minimization objective, the
vertex where the value is minimum. (A programmatic sketch of this corner-point
procedure is given after the remarks below.)
6) There are a finite number of basic feasible solutions within the feasible solution
space.

7) If the convex set of the feasible solutions of the system of simultaneous equations
Ax = b, x ≥ 0, is a convex polyhedron, then at least one of the extreme points gives
an optimal solution.
8) If the optimal solution occurs at more than one extreme point, the value of the
objective function will be the same for all convex combinations of these extreme
points.
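As an illustration of the corner-point idea used in these steps, the following Python sketch (hypothetical data, not from the text) enumerates the intersections of the constraint boundary lines, keeps the feasible ones and evaluates the objective at each.

# Corner-point sketch for a hypothetical LP: Max Z = 3x1 + 2x2,
# subject to x1 + x2 <= 4, x1 + 3x2 <= 6, x1 >= 0, x2 >= 0.
import itertools
import numpy as np

A = np.array([[1.0, 1.0], [1.0, 3.0], [-1.0, 0.0], [0.0, -1.0]])  # a1*x1 + a2*x2 <= b
b = np.array([4.0, 6.0, 0.0, 0.0])
c = np.array([3.0, 2.0])                 # objective coefficients (maximise)

vertices = []
for i, j in itertools.combinations(range(len(A)), 2):
    M = A[[i, j]]
    if abs(np.linalg.det(M)) < 1e-9:     # parallel boundary lines: no vertex
        continue
    x = np.linalg.solve(M, b[[i, j]])
    if np.all(A @ x <= b + 1e-9):        # keep only feasible intersections
        vertices.append(x)

best = max(vertices, key=lambda v: c @ v)
print("extreme points:", [tuple(np.round(v, 3)) for v in vertices])
print("optimal point:", best, "Z =", c @ best)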

A) Variations in Graphical Solution to LPP:


There are some linear programming problems which do not have unique optimal
solutions are illustrated in figure below:
1) Unboundedness:
When the value of decision variables in linear programming is permitted to increase
infinitely without violating the feasibility condition, then the solution is said to be
unbounded. Here, the objective function value can also be increased infinitely.
However, an unbounded feasible region may yield some definite value of the
objective function.
2) Infeasibility:
If it is not possible to find a feasible solution that satisfies all the constraints, then the
LP problem is said to have an infeasible solution (or to be inconsistent).
Infeasibility depends solely on the constraints and has nothing to do with the
objective function.
3) Multiple Optimum Solutions:
The solution to a linear programming problem shall always be unique if the slope of
the objective function is different from the slopes of the constraints. In case the
objective function has slope which is same as that of a constraint, then multiple
optimal solutions might exist.

2.2 Markov Chain:


A Markov chain (discrete-time Markov chain or DTMC), named after Andrey Markov, is a
mathematical system that undergoes transitions from one state to another, among a
finite or countable number of possible states. It is a random process usually
characterised as memoryless: the next state depends only on the current state and not
on the sequence of events that preceded it. This specific kind of "memorylessness" is
called the Markov property. Markov chains have many applications as statistical
models of real-world processes. The use of the term in Markov chain Monte Carlo
methodology covers cases where the process is in discrete time (discrete algorithm
steps) with a continuous state space.
Definition:
"A Markov chain is a sequence of random variables X1, X2, X3, ... with the Markov
property, namely that, given the present state, the future and past states are
independent. Formally,
Pr(Xn+1 = x | X1 = x1, X2 = x2, …, Xn = xn) = Pr(Xn+1 = x | Xn = xn).
The possible values of Xi form a countable set S called the state space of the chain.
Markov chains are often described by a directed graph, where the edges are labeled by
the probabilities of going from one state to the other states."
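A minimal numerical sketch of these ideas, not part of the original text, is given below; the two-state transition matrix is a hypothetical illustration.

# Discrete-time Markov chain: n-step transition probabilities are P raised to the power n.
import numpy as np

P = np.array([[0.9, 0.1],      # P[i, j] = Pr(X_{n+1} = j | X_n = i); each row sums to 1
              [0.5, 0.5]])

start = np.array([1.0, 0.0])   # the chain starts in state 0 with certainty

# By the Markov property, the distribution of X_n is start @ P^n.
for n in (1, 2, 10, 50):
    print(n, start @ np.linalg.matrix_power(P, n))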

2.2.1 Markov Chain Processes:


1) Time-homogeneous Markov Chains (Stationary Markov Chains):
Time-homogeneous Markov chains are processes where
Pr(Xn+1 = x | Xn = y) = Pr(Xn = x | Xn-1 = y)
for all n, i.e. the probability of the transition is independent of n.

2) Markov Chain of Order m (Markov Chain with Memory m):

A Markov chain of order m, where m is finite, is a process satisfying
Pr(Xn = xn | Xn-1 = xn-1, …, X1 = x1) = Pr(Xn = xn | Xn-1 = xn-1, …, Xn-m = xn-m) for n > m.
In other words, the future state depends on the past m states. It is possible to
construct a chain (Yn) from (Xn) which has the 'classical' Markov property by taking
as state space the ordered m-tuples of X values, i.e. Yn = (Xn, Xn-1, ..., Xn-m+1).
3) Additive Markov Chain of Order m:
An additive Markov chain of order m is determined by an additive conditional
probability,
Pr(Xn = xn | Xn-1 = xn-1, …, Xn-m = xn-m) = Σ r=1..m f(xn, xn-r, r).
The value f(xn, xn-r, r) is the additive contribution of the variable xn-r to the conditional
probability.

2.2.2 Theorems and Lemmas of Markov Chain:


First some notation:
P always represents a transition matrix, while Pij represents an element of it. Xj is
always a random variable.
Definition:
State i is recurrent if
P(Xn = i for some n ≥ 1 | X0 = i) = 1.
Otherwise it is transient.
Definition:
A chain is irreducible if every state can be reached from any other one. That is, for every
pair of states i and j there exists some n ≥ 1 with Pij(n) > 0.

We state without proof the following result: a state i is recurrent if and only if Σn Pii(n) = ∞,
and it is transient if and only if Σn Pii(n) < ∞.
We delve right in with a lemma that connects the notions of recurrence and
irreducibility.
Lemma 2.1:
In an irreducible chain, all the states are either transient or recurrent.
Proof: We take the shortest path from state i to state j (let it have n steps), and the
shortest path from j to i (let it have m steps). Thus we have Pij(n) = a > 0 and Pji(m) = b > 0,
and so we have
Pii(l + n + m) ≥ Pij(n) Pjj(l) Pji(m) = ab Pjj(l)
Pjj(l + n + m) ≥ Pji(m) Pii(l) Pij(n) = ab Pii(l)
So it is obvious that either Σl Pii(l) and Σl Pjj(l) are both finite or are both infinite.
Thus from the above result we note that all the states of an irreducible chain are either
transient or recurrent, as desired.

Lemma 2.2:
Facts about recurrence.
1) If state i is recurrent and i → j, then j is recurrent.
2) If state i is transient and i → j, then j is transient.
3) The states of a finite, irreducible Markov chain are all recurrent.
Proof:
1) We employ the definition of recurrence. There exist n, m ≥ 0 such that Pij(n) > 0 and
Pji(m) > 0. Thus, for any l, Pjj(n + l + m) ≥ Pji(m) Pii(l) Pij(n); summing over l and using the
recurrence of i shows that Σl Pjj(l) = ∞, so j is recurrent.
2) We apply similar logic as above. Since i → j and j → i, there exist n, m > 0 such that
Pij(n) > 0 and Pji(m) > 0, and Pii(n + l + m) ≥ Pij(n) Pjj(l) Pji(m). Since i is transient,
Σl Pii(l) < ∞, which implies that Σl Pjj(l) < ∞, i.e. j is transient, as desired.


3) We know from Lemma 2.1 that since the chain is irreducible, all the states are either
recurrent or transient. Assume, for contradiction, that all states are transient. Fix a state i
and consider the number of times that the chain visits each state j after starting at i. Since
the chain runs forever and there are only finitely many states, the number of visits to at
least one state j must be infinite, so the expected number of returns to state j after starting
at state j would also be infinite. But that contradicts the geometric distribution of the
number of returns of a transient state, whose expectation is finite (at most one over the
probability of never returning).

Theorem 2.3 (Fundamental Theorem for Markov Chains):

An irreducible, ergodic Markov chain has a unique stationary distribution π. The limiting
distribution exists and is equal to π.
Proof: Since the chain is ergodic, it is non-null recurrent, which implies that
πj = lim n→∞ Pij(n) > 0 exists for all i and j. Now consider, for any M,
Σ j=1..M Pij(n) ≤ 1; letting n → ∞ we get Σ j=1..M πj ≤ 1 for every M, which implies that
the same is true for the infinite sum, Σj πj ≤ 1. Next consider the probability of
moving from i to j in n + 1 steps, Pij(n + 1) = Σk Pik(n) Pkj. Letting n → ∞ we get
πj ≥ Σk πk Pkj. If this inequality were strict for some j, summing over j would lead to the
following contradiction:
Σj πj > Σj Σk πk Pkj = Σk πk.
Thus we come to the conclusion that πj = Σk πk Pkj for every j. Normalising, πi' = πi / Σk πk
is therefore a stationary distribution. To show uniqueness, let σ be any stationary
distribution. Then σj = Σi σi Pij(n) for every n, and letting n → ∞ we get
σj = Σi σi πj = πj (since Σi σi = 1),
which implies that the stationary distribution is unique.
The above process thus shows the existence of a limiting distribution, and so we now
know that an ergodic chain converges to its stationary distribution.
This result allows us to take any bounded function g and say, with probability 1, that
lim N→∞ (1/N) Σ n=1..N g(Xn) = Σj πj g(j),
which is a very strong result that is used constantly in Markov chain Monte Carlo.

2.2.3 Applications Related to Management Functional Area:


The major applications of Markov chains are in the following areas:
1) Personnel:
Determining future manpower requirements of an organization taking into
consideration retirements, deaths, resignations, etc.
2) Finance:
Determining allowances for doubtful accounts (estimating bad debts from accounts receivable).
3) Production:
Helpful in evaluating alterative maintenance polices, queuing systems and work
assignments.
4) Marketing:
Useful in analyzing and predicting customers' buying behavior in terms of loyalty to
a particular product brand, switching patterns to other brands, and the market share of
the company versus its competitors.

2.2.4 Implication of Steady State Probability:


If the matrix of transition probabilities remains constant, that is, no action is taken by
anyone to alter it, a steady state will be arrived at in due course of time. Steady state
implies a state of equilibrium. The market shares of, say, three competing newspapers
will become steady: though an exchange of customers would still take place, the market
shares will remain frozen. In other words, at the steady state, if the present market share
is multiplied by the transition matrix, the resulting market share will be the same. The
transition matrices are P(t) = e^{tR}, where R is the generator matrix. Each element of P(t)
is a constant plus a sum of multiples of e^{λj t}, where the λj are the eigenvalues of the
generator matrix R. The non-zero eigenvalues of R are either negative or have
negative real part, so the non-constant terms e^{λj t} go to zero as t → ∞. So P(t)
approaches a limiting matrix, which we shall denote by P(∞), as t tends to ∞. This is
similar to what happened in the case of a discrete-time Markov chain. The rows of this
limiting matrix contain the probabilities of being in the various states as time gets large.
These probabilities are called the steady state probabilities.
Just as the transition probabilities for a discrete-time Markov chain satisfy the Chapman-
Kolmogorov equations, the continuous-time transition probability function also satisfies
these equations. Therefore, for any states i and j and non-negative numbers t and s (0 ≤ s ≤ t),

pij(t) = Σ k=0..M pik(s) pkj(t − s)

A pair of states i and j are said to communicate if there are times t1 and t2 such that
pij(t1) > 0 and pji(t2) > 0. All states that communicate are said to form a class. If all states
form a single class, i.e., if the Markov chain is irreducible (hereafter assumed), then
pij(t) > 0 for all t > 0 and all states i and j.
Furthermore,

lim t→∞ pij(t)

always exists and is independent of the initial state of the Markov chain, for j = 0, 1, .....
M. These limiting probabilities are commonly referred to as the steady state probabilities
(or stationary probabilities) of the Markov chain.
The πj satisfy the equations

πj = Σ i=0..M πi pij(t), for j = 0, 1, …, M and every t ≥ 0.

However, the following steady-state equations provide a more useful system of
equations for solving for the steady-state probabilities:

πj qj = Σ i≠j πi qij, for j = 0, 1, ….., M

And

Σ j=0..M πj = 1

The steady-state equation for state j has an intuitive interpretation. The left-hand side
(πj qj) is the rate at which the process leaves state j, since πj is the (steady-state)
probability that the process is in state j and qj is the transition rate out of state j given
that the process is in state j. Similarly, each term on the right-hand side (πi qij) is the rate
at which the process enters state j from state i, since qij is the transition rate from state i
to state j given that the process is in state i. By summing over all i ≠ j, the entire right-
hand side then gives the rate at which the process enters state j from any other state.
The overall equation thereby states that the rate at which the process leaves state j
must equal the rate at which the process enters state j. Thus, this equation is analogous
to the conservation-of-flow equations encountered in many engineering and science
courses. Because each of the first M + 1 steady-state equations requires that two rates
be in balance (equal), these equations sometimes are called the balance equations.

Example 1:
Consider a machine which at any time can be in either of two states: working = 1 or broken = 2.
Let X(t) be the random variable corresponding to the state of the machine at time t.
Suppose the times between transitions from working to broken are exponential
random variables with mean 1/2 week, so q12 = 2, and the times between transitions
from broken to working are exponential random variables with mean 1/9 week, so
q21 = 9. Suppose all these random variables along with X(0) are independent, so that
X(t) is a Markov process. The generator matrix is
R = [ -2   2 ]
    [  9  -9 ]
In the previous section we saw that
P(t) = e^{tR} = (1/11) [ 9 + 2e^{-11t}   2 - 2e^{-11t} ]
                       [ 9 - 9e^{-11t}   2 + 9e^{-11t} ]
As t → ∞ one has
P(t) → P(∞) = [ 9/11   2/11 ]
              [ 9/11   2/11 ]
since e^{-11t} → 0. In fact, within a week P(t) is already very close to P(∞), since
e^{-11} ≈ 0.0000167. So for a day more than a week in the future one has
Pr{working} ≈ 9/11
Pr{broken} ≈ 2/11
no matter whether the machine is working or broken now. In the long run it is working
9/11 of the time and broken 2/11 of the time. These probabilities
π1 = lim t→∞ Pr{X(t) = 1 | X(0) = 1} = lim t→∞ Pr{X(t) = 1 | X(0) = 2} = 9/11
π2 = lim t→∞ Pr{X(t) = 2 | X(0) = 1} = lim t→∞ Pr{X(t) = 2 | X(0) = 2} = 2/11
are called the steady-state probabilities of being in states 1 and 2. In many applications
of Markov processes the steady-state probabilities are the main items of interest, since
one is interested in the long-run behavior of the system. For the time being, we assume
the following about the generator matrix R.
(1) The eigenvalue zero is not repeated.
(2) The other eigenvalues are negative or have negative real part.
Then
P(t) = e^{tR} = T e^{tD} T^{-1}, where D = diag(0, λ2, …, λn) and e^{tD} = diag(1, e^{λ2 t}, …, e^{λn t}).
Since the non-zero eigenvalues have negative real part, e^{tD} approaches diag(1, 0, …, 0) as
t → ∞, so P(t) approaches
P(∞) = T diag(1, 0, …, 0) T^{-1}.
Because R has zero row sums, the column vector (1, 1, …, 1)^T is a right eigenvector of R with
eigenvalue 0 and can be taken as the first column of T. Writing the first row of T^{-1} as
π = (π1, π2, …, πn), it follows that
P(∞) = (1, 1, …, 1)^T π,
i.e. every row of P(∞) is equal to the vector π, the first row of T^{-1}, which is a left
eigenvector of R with eigenvalue 0. In other words
(3) πR = 0
where
π = (π1, π2, …, πn)
So the rows of P(∞) are vectors π which are solutions of the equation πR = 0 and
which are probability vectors, i.e. the sum of the entries of π equals 1. The entries of
π are called steady-state probabilities.

Another way to see that the rows of P(∞) satisfy (3) is to start with the Kolmogorov
equation dP(t)/dt = P(t)R and let t → ∞. Since P(t) → P(∞), one has dP(t)/dt → P(∞)R.
However, the only way this can be consistent is for dP(t)/dt → 0. So P(∞)R = 0. Since the
ith row of P(∞)R is the ith row of P(∞) times R, we see the rows of P(∞) satisfy (3).

Example 2:
Find the steady-state vector π = (π1, π2) in Example 1.
Since πR = 0 we get
(π1, π2) [ -2   2 ] = (0, 0)
         [  9  -9 ]
Therefore
-2π1 + 9π2 = 0
 2π1 - 9π2 = 0
Therefore π2 = 2π1/9.
In order to find π1 and π2 we need to use the fact that π1 + π2 = 1. Combining this with
π2 = 2π1/9 gives π1 + 2π1/9 = 1, or 11π1/9 = 1, or π1 = 9/11 and π2 = 2/11.
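A quick numerical cross-check of Examples 1 and 2 (an illustrative sketch, not part of the original text) computes P(t) = e^{tR} with scipy.linalg.expm and watches the rows converge to the steady-state vector.

# Transient probabilities of the working/broken machine via the matrix exponential.
import numpy as np
from scipy.linalg import expm

R = np.array([[-2.0, 2.0],
              [9.0, -9.0]])      # generator matrix from Example 1

for t in (0.1, 0.5, 1.0, 5.0):
    print(t)
    print(expm(t * R))

# For large t every row approaches (9/11, 2/11) ~ (0.818, 0.182),
# in agreement with the steady-state probabilities derived above.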

Example 3:
The condition of an office copier at any time t is either good = 1, poor = 2 or broken = 3.
Suppose the generator matrix is
R = [ -0.2    0.05   0.15 ]
    [  0.02  -0.5    0.48 ]
    [  0.48   0.12  -0.6  ]
The equation πR = 0 gives
-0.2π1 + 0.02π2 + 0.48π3 = 0
 0.05π1 - 0.5π2 + 0.12π3 = 0
 0.15π1 + 0.48π2 - 0.6π3 = 0
These equations are dependent, and when we solve them we get π1 = 404π3/165 and
π2 = 16π3/33.
Using π1 + π2 + π3 = 1 we get 404π3/165 + 16π3/33 + π3 = 1.
Thus 649π3/165 = 1, or π3 = 165/649 ≈ 0.254,
and π1 = 404/649 ≈ 0.622 and π2 = 80/649 ≈ 0.123.
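The same steady-state vector can be obtained numerically. The sketch below is not from the text; it simply solves πR = 0 together with the normalisation condition for the generator matrix of Example 3.

# Steady-state probabilities of the copier chain.
import numpy as np

R = np.array([[-0.20, 0.05, 0.15],
              [0.02, -0.50, 0.48],
              [0.48, 0.12, -0.60]])

# pi R = 0 is equivalent to R^T pi^T = 0; replace one redundant equation by sum(pi) = 1.
A = np.vstack([R.T[:-1], np.ones(3)])
b = np.array([0.0, 0.0, 1.0])
pi = np.linalg.solve(A, b)
print(pi)   # approximately [0.622, 0.123, 0.254], i.e. 404/649, 80/649, 165/649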

2.2.5 Decision Making Based on the Inferences:


Inferential analysis is used to generalise the results obtained from a random
(probability) sample back to the population from which the sample was drawn. This
analysis is only required when:
1) A sample is drawn by a random procedure and
2) The response rate is very high.

Hence, this type of analysis is not appropriate when:
1) Non-probability methods of selection are used;
2) The response rate is less than, say, 85 per cent, unless independent evidence is
available to indicate that the sample is reasonably representative; and
3) The data are obtained from a population.
Inferential procedures are not a substitute for measures of association, many studies
appear to use inferential analysis in making decisions about whether a relationship
should be taken seriously, and ignore measures of association or influence in the
process.
Inferential analysis is sometimes presented as determining whether results obtained
have occurred 'other than by chance'. Sometimes the notion of 'chance' in this context
is meant to refer to results occurring by accident or through some random error in the
procedures.
In inferential analysis, two major activities are as below:
1) Estimation of unknown parameter of a population on the basis of sample statistics.
2) Testing whether the sample data have sufficient evidence to support or reject a
hypothesis about the population parameter.
Inferential analysis helps the decision maker to draw conclusion about the
characteristics of a large population on the basis of sample results.

2.3 Simulation Techniques:


The designers and analysts in the physical sciences have long used the technique of
simulation, and it promises to become an important tool for tackling the complicated
problems of managerial decision-making. Scale models of machines have been used to
simulate plant layouts, and models of aircraft have been tested in wind tunnels to
determine their aerodynamic characteristics. Simulation, which can appropriately be
called a management laboratory, determines the effect of a number of alternative
policies without disturbing the real system. It helps in selecting the best policy with the
prior assurance that its implementation will be beneficial.
Definition of Simulation:
Simulation may be defined as, "an operations research technique which makes use of a
computer-generated model representing all the characteristics of the given problem, in
order to assist the management in decision making under uncertainty by evaluating all
the available courses of action in a planned and systematic manner."

2.3.1 Types of Simulation:


Simulations generally come in three types: live, virtual and constructive. A simulation
also may be a combination of two or more types. Within these types, simulations can be
science-based (where, for example, interactions of things are observed or measured),
or involve interactions with humans. These types are as follows.

1) Live Simulations :
Live simulations typically involve humans and/or equipment and activity in a setting
where they would operate for real. Think war games with soldiers out in the field or
operating command posts. Time is continuous, as in the real world. Another
example of live simulation is testing a car battery using an electrical tester.
2) Virtual Simulations:
Virtual simulations typically involve humans and/or equipment in a computer-
controlled setting. Time is in discrete steps, allowing users to concentrate on the
important stuff, so to speak. A flight simulator falls into this category.
3) Constructive Simulations:
Constructive simulations typically do not involve humans or equipment as
participants. Rather than by time, they are driven more by the proper sequencing of
events. The anticipated path of a hurricane might be "constructed" through
application of temperatures, pressures, wind currents and other weather
factors. Science-based simulations are typically constructive in nature.

2.3.2 Monte Carlo Simulation:


Monte Carlo simulation is a computerized mathematical technique that allows people to
account for risk in quantitative analysis and decision-making. Professionals in such
widely disparate fields as finance, project management, energy, manufacturing,
engineering, research and development, insurance, oil & gas, transportation, and the
environment use the technique. Monte Carlo simulation furnishes the decision-maker
with a range of possible outcomes and the probabilities they will occur for any choice of
action. It shows the extreme possibilities (the outcomes of going for broke and of the
most conservative decision) along with all possible consequences for middle-of-the-road
decisions. Monte Carlo simulation, or probability simulation, is a technique used to
understand the impact of risk and uncertainty in financial, project management, cost,
and other forecasting models.

2.3.3 Principles of Monte Carlo Simulation:


The principle behind Monte Carlo simulation is that the behavior of a statistic in random
samples can be assessed by the empirical process of actually drawing many random
samples and observing this behavior. The strategy for doing this is to create an artificial
"world", or pseudo-population, which resembles the real world in all relevant respects. This
pseudo-population consists of mathematical procedures for generating sets of numbers
that resemble samples of data drawn from the true population. We then use this
pseudo-population to conduct multiple trials of the statistical procedure of interest to
investigate how that procedure behaves across samples.
The basic Monte Carlo procedure is as follows:

1) Specify the pseudo-population in symbolic terms in such a way that it can be used to
generate samples. This usually means developing a computer algorithm to generate
data in a specified manner.
2) Sample from the pseudo-population (a pseudo-sample) in ways reflective of the
statistical situation of interest, for example, with the same sampling strategy, sample
size, and so forth.
3) Calculate the statistic θ̂ from the pseudo-sample and store it in a vector.
4) Repeat Steps 2 and 3 t times, where t is the number of trials.
5) Construct a relative frequency distribution of the resulting t values of θ̂, which is the
Monte Carlo estimate of the sampling distribution of θ̂ under the conditions
specified by the pseudo-population and the sampling procedures.
Clearly, Monte Carlo simulation is very simple in concept, as it flows naturally from
the conception of what a sampling distribution is. The complicated aspects of the
technique are (a) writing the computer code to simulate the data conditions desired
and (b) interpreting the estimated sampling distribution. A brief illustration follows
to convey a practical understanding of how one undertakes this procedure
and uses the results.
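A small illustration of this procedure in Python follows; the choice of pseudo-population (an exponential distribution) and of the statistic (the sample mean) is hypothetical and not from the text.

# Monte Carlo estimate of the sampling distribution of the sample mean.
import numpy as np

rng = np.random.default_rng(seed=1)      # step 1: pseudo-population generator
sample_size, trials = 30, 5000

theta_hat = np.empty(trials)
for t in range(trials):
    pseudo_sample = rng.exponential(scale=2.0, size=sample_size)   # step 2
    theta_hat[t] = pseudo_sample.mean()                            # step 3
# steps 4-5: the collected values approximate the sampling distribution
print("mean of theta-hat:", theta_hat.mean())        # close to the true mean 2.0
print("std. error estimate:", theta_hat.std(ddof=1))
hist, edges = np.histogram(theta_hat, bins=20)       # relative frequency distribution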

2.3.4 How Monte Carlo Simulation Works:


Monte Carlo simulation performs risk analysis by building models of possible results by
substituting a range of values (a probability distribution) for any factor that has inherent
uncertainty. It then calculates results repeatedly, each time using a different set of
random values from the probability functions. Depending upon the number of
uncertainties and the ranges specified for them, a Monte Carlo simulation could involve
thousands or tens of thousands of recalculations before it is complete. Monte Carlo
simulation produces distributions of possible outcome values. By using probability
distributions, variables can have different probabilities of different outcomes occurring.
Probability distributions are a much more realistic way of describing uncertainty in
variables of a risk analysis. Common probability distributions include:
1) Normal or Bell Curve:
The user simply defines the mean or expected value and a standard deviation to
describe the variation about the mean. Values in the middle near the mean are most
likely to occur. It is symmetric and describes many natural phenomena such as
people‟s heights. Examples of variables described by normal distributions include
inflation rates and energy prices.
2) Lognormal:
Values are positively skewed, not symmetric like a normal distribution. It is used to
represent values that do not go below zero but have unlimited positive potential.
Examples of variables described by lognormal distributions include real estate
property values, stock prices, and oil reserves.

3) Uniform:
All values have an equal chance of occurring, and the user simply defines the
minimum and maximum. Examples of variables that could be uniformly distributed
include manufacturing costs or future sales revenues for a new product.
4) Triangular:
The user defines the minimum, most likely, and maximum values. Values around
the most likely are more likely to occur. Variables that could be described by a triangular
distribution include past sales history per unit of time and inventory levels.
5) PERT:
The user defines the minimum, most likely, and maximum values, just like the
triangular distribution. Values around the most likely are more likely to occur.
However values between the most likely and extremes are more likely to occur than
the triangular; that is, the extremes are not as emphasized. An example of the use
of a PERT distribution is to describe the duration of a task in a project management
model.
6) Discrete :
The user defines specific values that may occur and the likelihood of each. An
example might be the results of a lawsuit: 20% chance of positive verdict, 30%
chance of negative verdict, 40% chance of settlement, and 10% chance of mistrial.
During a Monte Carlo simulation, values are sampled at random from the input
probability distributions. Each set of samples is called an iteration, and the resulting
outcome from that sample is recorded. Monte Carlo simulation does this hundreds or
thousands of times, and the result is a probability distribution of possible outcomes. In
this way, Monte Carlo simulation provides a much more comprehensive view of what
may happen. It tells you not only what could happen, but how likely it is to happen.

Example on Monte Carlo Simulation:


Example :
A retailer deals in perishable items, the daily demand and supply of which are random
variables. The past 500 days data show the following.

Supply Demand
Available(kg) Number of days Available(kg) Number of days
10 40 10 50
20 50 20 110
30 190 30 200
40 150 40 100
50 70 50 40

The retailer buys the item at Rs. 20 per kg and sells it at Rs. 30 per kg. If any of the
commodity remains at the end of the day, it has no salable value and is a dead loss.
Moreover, the loss on any unsatisfied demand is Rs. 8 per kg.
Given the following random numbers: 31, 18, 63, 84, 15, 79, 07, 32, 43, 75, 81 and 27,
use the random numbers alternately to simulate supply and demand for six days' sales.
Solution:
The probability distributions for supply and demand, and the random numbers (RN)
allotted to each level of supply and demand in proportion to the probabilities, are shown
in the following table.
Supply    Probability    RN Interval        Demand    Probability    RN Interval
10        0.08           00-07              10        0.10           00-09
20        0.10           08-17              20        0.22           10-31
30        0.38           18-55              30        0.40           32-71
40        0.30           56-85              40        0.20           72-91
50        0.14           86-99              50        0.08           92-99

Using given random numbers, we simulate for six days sales.

Day   RN   Supply (kg)   RN   Demand (kg)   Cost (Rs)   Revenue (Rs)   Loss (Rs)   Profit (Rs)
1     31   30            18   20            600         600            -           -
2     63   40            84   40            800         1200           -           400
3     15   20            79   40            400         600            160         40
4     07   10            32   30            200         300            160         -60
5     43   30            75   40            600         900            80          220
6     81   40            27   20            800         600            -           -200
From the last column of the table, it is observed that the retailer makes a net profit of
Rs. 400.
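The hand simulation above can be reproduced programmatically. The following Python sketch is not part of the original solution; the function and variable names are illustrative, but it applies the same random-number intervals and profit rules.

# Six-day simulation of the perishable-item retailer.
levels = [10, 20, 30, 40, 50]
supply_cum = [8, 18, 56, 86, 100]     # upper ends of intervals 00-07, 08-17, 18-55, ...
demand_cum = [10, 32, 72, 92, 100]    # upper ends of intervals 00-09, 10-31, 32-71, ...

def lookup(rn, cum):
    # map a two-digit random number to the quantity whose interval contains it
    for level, upper in zip(levels, cum):
        if rn < upper:
            return level

rns = [31, 18, 63, 84, 15, 79, 7, 32, 43, 75, 81, 27]   # used alternately: supply, demand

total_profit = 0
for day in range(6):
    supply = lookup(rns[2 * day], supply_cum)
    demand = lookup(rns[2 * day + 1], demand_cum)
    cost = 20 * supply
    revenue = 30 * min(supply, demand)
    shortage_loss = 8 * max(demand - supply, 0)     # penalty on unsatisfied demand
    profit = revenue - cost - shortage_loss         # unsold stock is a dead loss via cost
    total_profit += profit
    print(day + 1, supply, demand, profit)
print("net profit over six days:", total_profit)    # Rs. 400, as in the table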

2.3.5 Scope and Limitations of Monte Carlo Simulation:


A) Scope:
Monte Carlo simulation includes numerous advantages these are as follows:
1) Probabilistic Result:
The results obtained from Monte Carlo simulation not only reveal what could
possibly happen but also the extent of possibility for each outcome.
2) Graphical Result:
Due to the data generated by a Monte Carlo simulation, it becomes easier to
create graphs of various outcomes as well as their chances of occurrence. This
is imperative for communicating findings to other stakeholders.

3) Sensitivity Analysis:
With just a few cases, it becomes difficult with deterministic analysis to look for
the variables, which affect the outcome, the most. In Monte Carlo simulation, it is
easier to find inputs showing the largest impact on bottom-line results.
4) Scenario Analysis:
In deterministic analysis, it is quite difficult to model different combinations of
values for distinctive inputs to know the effects of totally different scenarios.
Through the use of Monte Carlo simulation, analysts can see precisely which
inputs had values together when certain outcomes took place. This is very
useful for pursuing further analysis.
5) Correlation of Inputs:
In Monte Carlo simulation, it is possible to model interdependent relationships
between input variables. This is important for accuracy: it represents how,
in reality, when certain factors go up, others go up or down correspondingly.
B) Limitations:
While Monte Carlo simulation has its fair share of benefits, as with other
mathematical models, it also has its limitations. These are as follows.
1) Misleading Results:
Simulations can lead to misleading results if inappropriate inputs are entered
into the model. The simulation process begins with entering asset class or
portfolio returns, standard deviations, and correlations. When cash flows are
added to the analysis they may be adjusted for inflation, which can present
another possible problem if an unrealistic inflation rate is assumed. The user
should be prepared to make the necessary adjustments if the results that are
generated seem out of line.
2) Not Consequences Based:
While Monte Carlo does a fine job of illustrating the wide variance of possible
results and the probability of success or failure over thousands of "different
market environments," it may not consider the consequences based on the
"applicable market environment" that exists over an investor‟s lifetime. There are
a number of unknown factors that cannot be truly accounted for.
3) Does Not Model Serial Correlations:
Monte Carlo simulation does not model serial correlations. This means the
numbers coming out in each draw are random; there is no way to control what
comes out in the next draw based on what was just drawn. For example,
inflation could be 3% in one period and 10% in the next. Behavior like this just
does not occur in the economy.

Solved Examples
A) Examples on L.P. Model Formulation:
Example 1:
A manufacturing company is engaged in producing three types of products: A, B and C.
The production department produces, each day, components sufficient to make 50 units
of A, 25 units of B and 30 units of C. The management is confronted with the problem of
optimizing the daily production of the products in the assembly department, where only
100 man-hours are available daily for assembling the products. The following additional
information is available:
Type of product    Profit contribution per unit of product (Rs)    Assembly time per product (hrs)
A                  12                                              0.8
B                  20                                              1.7
C                  45                                              2.5
The company has a daily order commitment for 20 units of products A and a total of 15
units of products B and C. Formulate this problem as an LP model so as to maximize
the total profit.
Solution:
The data of the problem is summarized as follows:
Resources/Constraints Product type Total
A B C
Production capacity (units) 50 25 30
Man-hours per unit 0.8 1.7 2.5 100
Order commitment (units) 20 15 (both for B and C)
Profit contribution (Rs/unit) 12 20 45

a) Decision Variables:
Let x1, x2 and x3 = number of units of products A, B and C to be produced,
respectively.
The LP Model:
Maximize (total profit) Z = 12x1 + 20x2 + 45x3, subject to the constraints
1) Labour and materials
(a) 0.8x1 + 1.7x2 + 2.5x3 ≤ 100
(b) x1 ≤ 50, x2 ≤ 25, x3 ≤ 30
2) Order commitment
(a) x1 ≥ 20
(b) x2 + x3 ≥ 15
and x1, x2, x3 ≥ 0.
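If a numerical answer is wanted, the model can also be handed to an LP solver. The sketch below is not part of the original solution; scipy.optimize.linprog and the sign conventions are assumptions of this illustration.

# Example 1 fed to linprog (which minimises, so profits are negated).
from scipy.optimize import linprog

c = [-12, -20, -45]                       # maximise 12x1 + 20x2 + 45x3
A_ub = [
    [0.8, 1.7, 2.5],                      # assembly man-hours <= 100
    [-1, 0, 0],                           # x1 >= 20  ->  -x1 <= -20
    [0, -1, -1],                          # x2 + x3 >= 15
]
b_ub = [100, -20, -15]
bounds = [(0, 50), (0, 25), (0, 30)]      # daily component capacities

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
print(res.x, -res.fun)   # LP optimum allows fractional units: about Rs 1,725 at x ~ (31.25, 0, 30)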

Example 2:
A company has two plants, each of which produces and supplies two products: A and
B. The plants can each work up to 16 hours a day. In plant 1, it takes three hours to
prepare and pack 1,000 gallons of A and one hour to prepare and pack one quintal of B.
In plant 2, it takes two hours to prepare and pack 1,000 gallons of A and 1.5 hours to
prepare and pack a quintal of B. In plant 1, it costs Rs 15,000 to prepare and pack
1,000 gallons of A and Rs 28,000 to prepare and pack a quintal of B, whereas in plant 2
these costs are Rs 18,000 and Rs 26,000, respectively. The company is obliged to
produce daily at least 10,000 gallons of A and 8 quintals of B. Formulate this problem
as an LP model to find out as to how the company should organize its production so
that the required amounts of the two products be obtained at the minimum cost.
Solution:
The data of the problem is summarized as follows:
Resources/Constraints           Product A                    Product B           Total availability (hrs)
Plant 1: Preparation time       3 hrs/thousand gallons       1 hr/quintal        16
Plant 2: Preparation time       2 hrs/thousand gallons       1.5 hrs/quintal     16
Minimum daily production        10,000 gallons               8 quintals
Plant 1: Cost of production     Rs 15,000/thousand gallons   Rs 28,000/quintal
Plant 2: Cost of production     Rs 18,000/thousand gallons   Rs 26,000/quintal
a) Decision Variables:
Let x1, x2 = quantity of product A (in '000 gallons) to be produced in plants 1 and 2,
respectively.
x3, x4 = quantity of product B (in quintals) to be produced in plants 1 and 2,
respectively.
b) The LP Model:
Minimize (total cost) Z = 15,000x1 + 18,000x2 + 28,000x3 + 26,000x4, subject to the
constraints
1) Preparation time
(a) 3x1 + x3 ≤ 16 (plant 1)
(b) 2x2 + 1.5x4 ≤ 16 (plant 2)
2) Minimum daily production requirement
(a) x1 + x2 ≥ 10 (thousand gallons of A)
(b) x3 + x4 ≥ 8 (quintals of B)
and x1, x2, x3, x4 ≥ 0.

Example 3:
An electronics company is engaged in the production of two components, C1 and C2, that
are used in radio sets. Each unit of C1 costs the company Rs. 5 in wages and Rs. 5 in
material, while each unit of C2 costs the company Rs. 25 in wages and Rs. 15 in material.
The company sells both products on one-period credit terms, but the company's labor and
material expenses must be paid in cash. The selling price of C1 is Rs. 30 per unit and of C2
it is Rs. 70 per unit. Because of the company's strong monopoly in these components, it is
assumed that the company can sell, at the prevailing prices, as many units as it
produces. The company's production capacity is, however, limited by two
considerations. First, at the beginning of period 1, the company has an initial balance of
Rs. 4,000 (cash plus bank credit plus collections from past credit sales). Second, the
company has available in each period 2,000 hours of machine time and 1,400 hours of
assembly time. The production of each unit of C1 requires 3 hours of machine time and
2 hours of assembly time, whereas the production of each unit of C2 requires 2 hours of
machine time and 3 hours of assembly time. Formulate this problem as an LP model so
as to maximize the total profit to the company.
Solution:
LP Model Formulation:
The data of the problem is summarized as follows:
Resources/Constraints            Component C1    Component C2    Total Availability
Budget (Rs)                      10/unit         40/unit         Rs 4,000
Machine time                     3 hrs/unit      2 hrs/unit      2,000 hours
Assembly time                    2 hrs/unit      3 hrs/unit      1,400 hours
Selling price                    Rs 30           Rs 70
Cost (wages + material) price    Rs 10           Rs 40

Decision Variables:
Let x1 and x2 = number of units of components C1 and C2 to be produced, respectively.
The LP Model:
Maximize (total profit) Z = selling price – cost price
= (30 – 10)x1 + (70 – 40)x2 = 20x1 + 30x2,
subject to the constraints,

1) The total budget available
10x1 + 40x2 ≤ 4,000
2) Production time
(a) 3x1 + 2x2 ≤ 2,000 (machine time)
(b) 2x1 + 3x2 ≤ 1,400 (assembly time)
and x1, x2 ≥ 0.

Example 4:
A company has two grades of inspectors 1 and 2, the members of which are to be
assigned for a quality control inspection. It is required that at least 2,000 pieces be
inspected per 8-hour day. Grade 1 inspectors can check pieces at the rate of 40 per
hour, with an accuracy of 97 percent. Grade 2 inspectors check at the rate of 30 pieces
per hour with an accuracy of 95 percent. The wage rate of a Grade 1 inspector is Rs. 5
per hour while that of a Grade 2 inspector is Rs. 4 per hour. An error made by an
inspector costs Rs. 3 to the company. There are only nine Grade 1 inspectors and
eleven Grade 2 inspectors available to the company. The company wishes to assign
work to the available inspectors so as to minimize the daily inspection cost.
Solution:
LP Model Formulation:
The data of the problem is summarized as follows:
Inspectors
Grade 1 Grade 2
Number of inspectors 9 11
Rate of checking 40 pieces/hr 30 pieces/hr
Inaccuracy in checking 1-0.97=0.03 1-0.95=0.05
Cost of inaccuracy in checking Rs 3/piece Rs 3/piece
Wage rate/hour Rs 5 Rs 4
Duration of inspection = 8 hrs per day
Total pieces which must be inspected =2,000

Decision Variables:
Let x1, x2 = number of Grade 1 and Grade 2 inspectors to be assigned for inspection,
respectively.
The LP Model:
The hourly cost of each of the Grade 1 and 2 inspectors can be computed as follows:
Inspector Grade 1: Rs. (5 + 3 × 40 × 0.03) = Rs. 8.60 per hour
Inspector Grade 2: Rs. (4 + 3 × 30 × 0.05) = Rs. 8.50 per hour
Based on the given data, the linear programming problem can be formulated as follows:
Minimize (daily inspection cost) Z = 8(8.60x1 + 8.50x2) = 68.80x1 + 68.00x2
subject to the constraints
1) Total number of pieces that must be inspected in an 8-hour day
8 × 40x1 + 8 × 30x2 ≥ 2,000, i.e. 320x1 + 240x2 ≥ 2,000
2) Number of inspectors of Grade 1 and 2 available
(a) x1 ≤ 9
(b) x2 ≤ 11
and x1, x2 ≥ 0.

Example 5:
An electronic company produces three types of parts for automatic washing machines.
It purchases castings of the parts from a local foundry and then finishes the parts on
drilling, shaping and polishing machines. The castings for parts A, B and C cost Rs 5, Rs 6
and Rs 10 respectively, and they sell for Rs 8, Rs 10 and Rs 14 respectively. The shop
possesses only one of each type of machine. Costs per hour to run each of the three
machines are Rs 20 for drilling, Rs 30 for shaping and Rs 30 for polishing. The capacities
(parts per hour) for each part on each machine are shown in the table:
Machine      Capacity per hour
             Part A    Part B    Part C
Drilling     25        40        25
Shaping      25        20        20
Polishing    40        30        40
The management of the shop wants to know how many parts of each type it should
produce per hour in order to maximize profit for an hour‟s run. Formulate this problem
as an LP model so as to maximize total profit to the company.
Solution:
LP Model Formulation:
Let number of type A, B and C parts to be produced per hour,
respectively.
Profit must allow not only for the cost of the casting but also for the cost of drilling,
shaping and polishing. Since 25 type A parts per hour can be run on the drilling
machine at a cost of Rs 20, Rs 20/25 = Re 0.80 is the drilling cost per type A part.
Similar reasoning for shaping and polishing gives
Profit per type A part = (8 - 5) - 20/25 - 30/25 - 30/40 = 3 - 0.80 - 1.20 - 0.75 = Re 0.25
Profit per type B part = (10 - 6) - 20/40 - 30/20 - 30/30 = 4 - 0.50 - 1.50 - 1.00 = Re 1.00
Profit per type C part = (14 - 10) - 20/25 - 30/20 - 30/40 = 4 - 0.80 - 1.50 - 0.75 = Re 0.95
On the drilling machine, one type A part consumes 1/25th of the available hour, a type B
part consumes 1/40th, and a type C part consumes 1/25th of an hour. Thus, the drilling
machine constraint is
x1/25 + x2/40 + x3/25 ≤ 1
Similarly, the other constraints can be established.

The LP Model:
Maximize (total profit) Z = 0.25x1 + 1.00x2 + 0.95x3, subject to the constraints
1) Drilling machine: x1/25 + x2/40 + x3/25 ≤ 1
2) Shaping machine: x1/25 + x2/20 + x3/20 ≤ 1
3) Polishing machine: x1/40 + x2/30 + x3/40 ≤ 1
and x1, x2, x3 ≥ 0.

B) Examples on Graphical Method for L.P.P (Maximisation Problems):


Example 6:
Use the graphical method to solve the following LP problem Maximize
subject to the constraints
1)
2)
3)
And
Solution :
a) The given LP problem is already in mathematical form.
b) We shall treat x1 as the horizontal axis and x2 as the vertical axis. Each constraint
will be plotted on the graph by treating it as a linear equation, and it is then that the
appropriate inequality conditions will be used to mark the area of feasible solutions.
Consider the first constraint. Treat it as an equation by replacing the inequality sign
with an equality sign. The easiest way to plot this line is to find any two points that
satisfy the equation and then to draw a straight line through them. The two points are
generally the points at which the line intersects the x1 and x2 axes: setting x1 = 0 gives
the intercept on the x2 axis, and setting x2 = 0 gives the intercept on the x1 axis. These
two points are then connected by a straight line as shown in the figure. Since the
constraint is of the 'less than or equal to' type, any point above the constraint line
violates the inequality condition, but any point below the line does not violate the
constraint. Thus, the inequality and non-negativity conditions can only be satisfied by
the shaded area (feasible solution) as shown in the figure.

Similarly, the remaining constraints are also plotted on the graph and
are indicated by the shaded area as shown in the figure. Since all constraints have been
graphed, the area which is bounded by all the constraint lines, including all the
boundary points, is called the feasible region (or solution space). The feasible region
is shown in the figure by the shaded area OABCD.
1) Since the optimal value of the objective function occurs at one of the extreme points
of the feasible region, it is necessary to determine these extreme points. They
are: O (0, 0), A (60, 0), B (60, 20), C (30, 40) and D (0, 40).
2) Evaluate objective function value at each extreme point of the feasible region as
shown in the table:
Extreme Coordinates Objective f unction value
points ( )
O (0, 0)
A (60, 0)
B (60, 20)
C (30, 40)
D (0, 40)
3) Since we desire Z to be maximum, from the table above we conclude that the maximum
value of Z = 1,100 is achieved at the extreme point B (60, 20). Hence, the optimal solution
to the given LP problem is:
x1 = 60, x2 = 20 and Max Z = 1,100.

Example 7:
Use the graphical method to solve the following LPP.
Maximize subject to the constraints
1)
2)
3)
4)
and

Solution:
Plot on a graph each constraint by first treating it as a linear equation in the same way
as discussed earlier. Use the inequality condition of each constraint to mark the feasible
region as shown in fig. The feasible region is shown by the shaded area.
Here it may be noted that we have not considered the area corresponding to negative
values of x1 and x2. This is because of the non-negativity
condition, which implies that negative values of x1 and x2 are not desirable.


The coordinates of extreme points of the feasible region are:


The value of the objective function at each of
these extreme points is as follows:
Coordinates Objective f unction value
Extreme point
( )
O (0, 0)
A (1, 0)
B (3, 1)
C (4, 2)
D (2, 4)
E (0, 5)
The maximum value of the objective function Z = 10 occurs at the extreme point (4, 2).
Hence, the optimal solution to the given LP problem is:

Example 8:
Solve the following LPP graphically:
Maximize subject to
1)
2)
and .
Solution:
Since the resource value (RHS) of the first constraint is negative, we multiply both sides of
this constraint by -1, which reverses the inequality. Plot on a graph each
constraint by first treating it as a linear equation in the usual manner.


The feasible region (solution space) satisfying the constraints and non-negativity
restrictions is shown by shaded area in fig. The value of the objective function at each
of the extreme point A (0, 1), B (0, 2) and C (2, 3) is as follows:
Coordinates Objective Function Value
Extreme point
( )
A (0, 1)
B (0, 2)
C (2, 3)
The maximum value of the objective function Z = 4 occurs at extreme points B and C. This
implies that every point between B and C on the line BC also gives the same value of Z.
Hence, the problem has multiple optimal solutions and Max Z = 4.

Example 9:
A retired person wants to invest upto an amount of Rs. 30,000 in fixed income
securities. His broker recommends investing in two bonds: Bond A yielding 7% and
Bond B yielding 10%. After some consideration, he decides to invest at most Rs. 12,000
in Bond B and at least Rs. 6,000 in Bond A. He also wants the amount invested in Bond
A to be at least equal to the amount invested in Bond B. What should the broker
recommend if the investor wants to maximize his return on investment? Solve
graphically.
Solution:
Let x1 and x2 be the amounts invested in Bonds A and B, respectively. Using the given
data, we may state the problem as follows:
Maximize Z = 0.07x1 + 0.10x2
Subject to x1 + x2 ≤ 30,000; x2 ≤ 12,000; x1 ≥ 6,000; x1 ≥ x2;
and x1, x2 ≥ 0.

The constraints are plotted graphically in the given fig. The feasible region is shown
shaded and is bound by points A, B, C, D and E.

The extreme points are evaluated here.

Point    x1        x2        Z = 0.07x1 + 0.10x2
A        6,000     0         420
B        6,000     6,000     1,020
C        12,000    12,000    2,040
D        18,000    12,000    2,460
E        30,000    0         2,100

The Z-value is maximum at point D. Accordingly, the optimal solution is: invest Rs.
18,000 in Bond A and Rs. 12,000 in Bond B. It would yield a return of Rs. 2,460.
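The graphical answer can be verified with an LP solver; the following sketch is an illustration, not part of the original solution, and feeds Example 9 to scipy.optimize.linprog.

# Investment problem of Example 9: maximise the return by minimising its negative.
from scipy.optimize import linprog

c = [-0.07, -0.10]                       # maximise 0.07x1 + 0.10x2
A_ub = [
    [1, 1],                              # x1 + x2 <= 30,000
    [-1, 1],                             # x1 >= x2  ->  -x1 + x2 <= 0
]
b_ub = [30_000, 0]
bounds = [(6_000, None), (0, 12_000)]    # x1 >= 6,000; x2 <= 12,000

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
print(res.x, -res.fun)                   # (18000, 12000) with a return of Rs. 2,460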

C) Examples on Graphical Method for LPP (Minimisation Problems) :


Example 10:
Use the graphical method to solve the following LP problem.
Minimize subject to the constraints

and

Solution:
Plot on a graph each constraint by first treating them as a linear equation in the same
way as discussed earlier. Use the inequality condition of each constraint to mark the
feasible region as shown in fig.

The coordinates of the extreme points of the feasible region (bounded from below) are:
A = (12, 0), B = (4, 2), C = (1, 5) and D = (0, 10). The value of objective function at each
of these extreme points is as follows:
Coordinates Objective function value
Extreme points
( )
A (12, 0)
B (4, 2)
C (1, 5)
D (0, 10)
The minimum value of the objective function Z = 13 occurs at the extreme point C (1, 5).
Hence, the optimal solution to the given LP problem is: x1 = 1, x2 = 5 and Min Z = 13.

Example 11:
Use the graphical method to solve the following LP problem.
Minimize subject to the constraints
1)
2)
3)
and

Solution:
Plot on a graph each constraint by first treating it as a linear equation as discussed
earlier. Use the inequality condition of each constraint to mark the feasible region as
shown in the fig. It may once again be noted here that the area below the lines is not
desirable, because the values of x1 and x2 are desired to be non-negative, i.e.
x1 ≥ 0 and x2 ≥ 0. The coordinates of the extreme points of the
feasible region are: O = (0, 0), A = (2, 0), B = (4, 2), C = (2, 4) and D = (0, 10/3). The
value of the objective function at each of these extreme points is as follows :

Coordinates Objective Function Value


Extreme points
( )
O (0, 0)
A (2, 0)
B (4, 2)
C (2, 4)
D (0, 10/3)

The minimum value of the objective function Z = -2 occurs at the extreme point A (2, 0).
Hence, the optimal solution to the given LP problem is: x1 = 2, x2 = 0 and Min Z = -2.

Example 12:
G. J. Breweries Ltd. has two bottling plants, one located at 'G' and the other at 'J'. Each
plant produces three drinks, namely whisky, beer and brandy, named A, B and C respectively.
The number of bottles produced per day is shown in the table:
Drink Plant at
G J
Whisky 1,500 1,500
Beer 3,000 1,000
Brandy 2,000 5,000

A market survey indicates that during the month of July there will be a demand for 20,000
bottles of whisky, 40,000 bottles of beer and 44,000 bottles of brandy. The operating
costs per day for plants G and J are 600 and 400 monetary units, respectively. For how
many days should each plant be run in July so as to minimize the production cost, while
still meeting the market demand? Solve graphically.
Solution:
Let us define the following decision variables:
x1, x2 = number of days of work at plants G and J, respectively.
Then the LP model of the given problem can be expressed as:
Minimize Z = 600x1 + 400x2 subject to the constraints
1) 1,500x1 + 1,500x2 ≥ 20,000 (whisky)
2) 3,000x1 + 1,000x2 ≥ 40,000 (beer)
3) 2,000x1 + 5,000x2 ≥ 44,000 (brandy)
and x1, x2 ≥ 0
The feasible solution space depicted in fig. is unbounded on the upper side. The
coordinates of extreme points of the feasible solution space bounded from below are: A
(22, 0), B (12, 4) and C (0, 40).

The value of the objective function at each of the extreme points is shown below:

Extreme points    Coordinates    Objective Function Value (Z = 600x1 + 400x2)
A                 (22, 0)        13,200
B                 (12, 4)        8,800
C                 (0, 40)        16,000

The minimum value of the objective function occurs at point B (12, 4). Hence, plant G
should run for 12 days and plant J for 4 days to have a minimum production cost of
Rs. 8,800.

Example 13:
A diet for a sick person must contain at least 4,000 units of vitamins, 50 units of
minerals and 1,400 calories. Two foods A and B are available at a cost of Rs. 4 and Rs.
3 per unit, respectively. If one of A contains 200 units of vitamins, 1 unit of mineral and
40 calories and one unit of food B contains 100 units of vitamins, 2 units of minerals and
40 calories, find by graphic method what combination of foods be used to have least
cost?
Solution:
The data of the given problem can be summarized as follows:

Food Units content of Cost per unit (Rs.)
Vitamins Mineral Calories
A 200 1 40 4
B 100 2 40 3
Minimum requirement 4,000 50 1,400

Let x1 and x2 be the number of units of foods A and B to be used, respectively.

Then the LP model of the given problem is:
Minimize Z = 4x1 + 3x2 (total cost) subject to the constraints
1) 200x1 + 100x2 ≥ 4,000 (vitamins)
2) x1 + 2x2 ≥ 50 (minerals)
3) 40x1 + 40x2 ≥ 1,400 (calories)
and x1, x2 ≥ 0

The coordinates of the extreme points of the feasible solution space shown in fig. are: A
(0, 40), B (5, 30), C (20, 15) and D (50, 0). The value of the objective function at each of
these extreme points is as follows:

Extreme points    Coordinates (x1, x2)    Objective function value (Z = 4x1 + 3x2)
A                 (0, 40)                 120
B                 (5, 30)                 110
C                 (20, 15)                125
D                 (50, 0)                 200

The minimum value of the objective function Z = 110 occurs at the extreme point B (5, 30).
Hence, the optimal solution to the given LP problem is: x1 = 5 and x2 = 30. That is, to
have the least cost of Rs. 110, the diet should contain 5 units of food A and 30 units of
food B.
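For readers who want to cross-check such graphical solutions numerically, the same model can be handed to a standard LP solver. The following is only a minimal sketch, assuming Python with SciPy is available; it encodes the diet model of Example 13 and converts the ≥ constraints to ≤ form, since linprog accepts only upper-bound inequalities.

```python
# A minimal sketch (assumes Python with SciPy installed) that cross-checks Example 13:
#   minimize 4x1 + 3x2
#   subject to 200x1 + 100x2 >= 4000, x1 + 2x2 >= 50, 40x1 + 40x2 >= 1400, x1, x2 >= 0.
from scipy.optimize import linprog

c = [4, 3]                      # cost per unit of foods A and B
A_ub = [[-200, -100],           # each ">=" row is written as "<=" by negating it
        [-1, -2],
        [-40, -40]]
b_ub = [-4000, -50, -1400]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, res.fun)           # expected: roughly [5. 30.] and 110.0
```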

D) Problems for Markov Chain:


Example 14:
A computer system can operate in two different modes. Every hour, it remains in the
same mode or switches to a different mode according to the transition probability matrix

1) Compute the 2-step transition probability matrix.


2) If the system is in Mode I at 5:30 pm, what is the probability that it will be in Mode I
at 8:30 pm on the same day?

Solution:
1) The 2-step transition probability matrix is P(2) = P · P = P².
2) There are 3 transitions between 5:30 pm and 8:30 pm, so we need P11(3), the (1, 1)
entry of the 3-step transition probability matrix P(3) = P³ (there is no need to
compute the entire matrix). Hence, P11(3) = 0.496.
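Because the one-step matrix itself did not reproduce above, the sketch below uses an assumed two-state matrix chosen purely for illustration so that it is consistent with the stated answer P11(3) = 0.496. It shows the general technique: n-step transition probabilities are matrix powers (Python with numpy assumed).

```python
# Minimal sketch: n-step transition probabilities as matrix powers.
# The matrix P below is an assumption chosen to match the stated P11(3) = 0.496.
import numpy as np

P = np.array([[0.4, 0.6],
              [0.6, 0.4]])

P2 = np.linalg.matrix_power(P, 2)   # 2-step transition probability matrix
P3 = np.linalg.matrix_power(P, 3)   # 3-step transition probability matrix

print(P2)
print(round(P3[0, 0], 3))           # probability of Mode I -> Mode I over 3 hours: 0.496
```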

Example 15:
The pattern of sunny and rainy days on the planet Rainbow is a homogeneous Markov
chain with two states. Every sunny day is followed by another sunny day with probability
0.8. Every rainy day is followed by another rainy day with probability 0.6.
1) Today is sunny on Rainbow. What is the chance of rain the day after tomorrow?
2) Compute the probability that April 1 next year is rainy on Rainbow.

Solution:
1) Let "sunny" be state 1 and "rainy" be state 2. The transition probability matrix is

   P = [0.8  0.2]
       [0.4  0.6]

We need to find the 2-step transition probability P12(2). From the 2-step transition
probability matrix P(2) = P², only the element
P12(2) = (0.8)(0.2) + (0.2)(0.6) = 0.28
needs to be computed. The probability that the day after tomorrow is rainy is 0.28.

2) April 1 next year is so many transitions away that we can use the steady-state
distribution. To find it, we solve the system of equations πP = π, π1 + π2 = 1. These
equations are:
0.8π1 + 0.4π2 = π1
0.2π1 + 0.6π2 = π2
π1 + π2 = 1
From the first equation, π1 = 2π2. From the second equation, again π1 = 2π2 (one
equation always follows from the others). Substituting π1 = 2π2 into the last
equation, we get 2π2 + π2 = 1. From here, π2 = 1/3 and π1 = 2/3.
Hence the probability that April 1 next year is rainy is π2 = 1/3.
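The hand calculation above can be automated. A minimal sketch (Python with numpy assumed) solves πP = π together with the normalising condition by replacing one balance equation with π1 + π2 = 1:

```python
# Minimal sketch: steady-state distribution of the sunny/rainy chain.
import numpy as np

P = np.array([[0.8, 0.2],    # sunny -> (sunny, rainy)
              [0.4, 0.6]])   # rainy -> (sunny, rainy)

# pi P = pi  <=>  (P^T - I) pi = 0; replace the last row with the condition sum(pi) = 1.
A = P.T - np.eye(2)
A[-1, :] = 1.0
b = np.array([0.0, 1.0])

pi = np.linalg.solve(A, b)
print(pi)                    # roughly [0.6667 0.3333]: P(sunny) = 2/3, P(rainy) = 1/3
```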

Example 16:
A computer device can be either in a busy mode (state 1) processing a task, or in an
idle mode (state 2), when there are no tasks to process. Being in a busy mode, it can
finish a task and enter an idle mode any minute with the probability 0.2. Thus, with the
probability 0.8 it stays another minute in a busy mode. Being in an idle mode, it receives
a new task any minute with the probability 0.1 and enters a busy mode. Thus, it stays
another minute in an idle mode with the probability 0.9. The initial state is idle. Let Xn be
the state of the device after n minutes.
1) Find the distribution of X2.
2) Find the steady-state distribution of Xn.
Solution:
The transition probability matrix is given as

   P = [0.8  0.2]   (state 1 = busy, state 2 = idle)
       [0.1  0.9]

Also, we have X(0) = 2 (idle mode) with probability 1, i.e. P0 = (0, 1).

1) P2 = P0 P² = (0.17, 0.83).
Thus X(2), the state after 2 transitions, is busy with probability 0.17 and idle with
probability 0.83.
2) To find the steady-state distribution, we solve the system of equations πP = π,
π1 + π2 = 1. These equations are:
0.8π1 + 0.1π2 = π1
0.2π1 + 0.9π2 = π2
π1 + π2 = 1
From the first equation, π2 = 2π1. From the second equation, again π2 = 2π1.
Substituting π2 = 2π1 into the last equation, we get π1 + 2π1 = 1. From here, π1 = 1/3
and π2 = 2/3.
P{Xn = busy} = 1/3, P{Xn = idle} = 2/3
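The distribution after n steps is the initial row vector multiplied by Pⁿ. A short sketch (again assuming Python with numpy) reproduces both parts of the answer:

```python
# Minimal sketch: evolve the initial distribution and find the steady state.
import numpy as np

P = np.array([[0.8, 0.2],    # busy -> (busy, idle)
              [0.1, 0.9]])   # idle -> (busy, idle)
p0 = np.array([0.0, 1.0])    # the device starts in the idle state

p2 = p0 @ np.linalg.matrix_power(P, 2)
print(p2)                    # roughly [0.17 0.83]

# Steady state: one balance equation plus the normalising condition sum(pi) = 1.
A = np.vstack([(P.T - np.eye(2))[:-1], np.ones(2)])
pi = np.linalg.solve(A, np.array([0.0, 1.0]))
print(pi)                    # roughly [0.3333 0.6667]
```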

Example 17:
A system has three possible states, 0, 1 and 2. Every hour it makes a transition to a
different state, which is determined by a coin flip. For example, from state 0, it makes a
transition to state 1 or state 2 with probabilities 0.5 and 0.5.
1) Find the transition probability matrix.
2) Find the three-step transition probability matrix.
3) Find the steady-state distribution of the Markov chain.
Solution:

1)

2)

3) Solve the system πP = π along with the normalizing condition π1 + π2 + π3 = 1:
0.5π2 + 0.5π3 = π1
0.5π1 + 0.5π3 = π2
0.5π1 + 0.5π2 = π3
π1 + π2 + π3 = 1
By symmetry, π1 = π2 = π3, so the steady-state probability distribution is
π = (1/3, 1/3, 1/3).

E) Problems on Monte Carlo Simulation:


Example 18:
High Fashion Textiles Ltd. has the following sales and the lead times probability
distribution:
No. of Times    Weekly Demand (Units)        Weekly Lead Time    No. of Times
30%             250                          1                   60%
40%             275                          2                   20%
30%             300                          3                   10%
                                             4                   10%
The company has a policy of ordering exactly 1500 units when the level of the beginning
inventory falls to one-third of the ordering quantity or less. Assuming the beginning
inventory level to be the same as the order quantity, simulate the experiment for 10 trials
and advise the management as to the average inventory required to be necessarily
maintained by them. Also comment upon the validity of your result.

Solution:
Generating 2-digit random numbers for both the demand as well as the lead times, we
have:
Weekly Demand (Units)    Probability    Cumulative Probability    Random Nos.
250                      0.30           0.30                      00-29
275                      0.40           0.70                      30-69
300                      0.30           1.00                      70-99

Weekly Lead Time    Probability    Cumulative Probability    Random Nos.
1                   0.60           0.60                      00-59
2                   0.20           0.80                      60-79
3                   0.10           0.90                      80-89
4                   0.10           1.00                      90-99

Simulating the model for 10 trials, we have:
(Here: order quantity = 1,500 units; re-order level = one-third of 1,500 = 500 units;
beginning inventory = 1,500 units.)

Week    Qty.      Beginning    Rand.    Demand    Closing      Lost     Order    Rand.    Lead
        Received  Inventory    No.                Inventory    Sales    Qty.     No.      Time
1       -         1500         11       250       1250         -        -
2       -         1250         91       300       950          -        -
3       -         950          99       300       650          -        -
4       -         650          57       275       375          -        -
5       -         375          26       250       125          -        1500     90       4
6       -         125          61       275       0            150      -
7       -         0            70       300       0            300      -
8       -         0            33       275       0            275      -
9       1500      1500         91       300       1200         -        -
10      -         1200         67       275       925          -        -
Total             7550                                         725

Average Inventory = 7550 ÷ 10 = 755 units per week.


Validity of the Result: The result obtained above should not be taken to be the optimal
solution. In actual practice, thousands of such trials have to be performed to obtain a
meaningfully accurate solution. (Columns 4 and 5 of the random number table are used
in the above problem.)
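The mechanics of the table above (mapping two-digit random numbers to demand and lead time, then rolling the inventory forward week by week) can be written compactly in code. The following is only a sketch of the procedure, assuming Python; it reuses the random numbers from the worked table rather than generating new ones, and the single lead-time random number 90 is taken from the same table.

```python
# Sketch of the 10-week inventory simulation of Example 18 (Python assumed).
def lookup(rn, table):
    """table: list of (upper_random_number, value); rn is a two-digit number 00..99."""
    for upper, value in table:
        if rn <= upper:
            return value

demand_table = [(29, 250), (69, 275), (99, 300)]
lead_table = [(59, 1), (79, 2), (89, 3), (99, 4)]

demand_rns = [11, 91, 99, 57, 26, 61, 70, 33, 91, 67]   # as used in the worked table
order_qty, reorder_level = 1500, 500
inventory, on_order, arrival_week = 1500, 0, None
total_beginning = 0

for week, rn in enumerate(demand_rns, start=1):
    if arrival_week == week:                 # a pending order arrives this week
        inventory += on_order
        on_order, arrival_week = 0, None
    total_beginning += inventory
    if inventory <= reorder_level and on_order == 0:
        lead = lookup(90, lead_table)        # 90 is the lead-time random number used above
        on_order, arrival_week = order_qty, week + lead
    demand = lookup(rn, demand_table)
    inventory = max(inventory - demand, 0)   # unmet demand is treated as a lost sale

print(total_beginning / len(demand_rns))     # average (beginning) inventory, about 755
```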

Example 19:
Introduction of a new product in a market is an extremely risky and uncertain event. Apart
from the initial investment required, the chances of failure are very high. In spite of the
high degree of risk, the management of M/s Indra & Associates Ltd. is planning to
introduce multicoloured umbrellas in Indian markets this summer. The probability
estimates for the various decision variables, as per the estimates of the Marketing
Research Division of the company, are as follows:

Sales Price (Rs./Unit)       Expected Sales Volume (Units)    Variable Cost (Rs./Unit)
Value    Probability         Value    Probability             Value    Probability
3        0.20                2,000    0.20                    1        0.20
4        0.60                3,000    0.50                    2        0.50
5        0.20                4,000    0.30                    3        0.30

Advise the management of M/s Indra Ltd. as to the likely average profits.
Solution:
Generating 2-digit random numbers as usual, we have:
For Selling Price
Value    Probability    Cumulative Probability    Random Nos.
3        0.20           0.20                      00-19
4        0.60           0.80                      20-69
5        0.20           1.00                      70-99

For Expected Sales Volume

Value    Probability    Cumulative Probability    Random Nos.
2,000    0.20           0.20                      00-19
3,000    0.50           0.70                      20-69
4,000    0.30           1.00                      70-99

For Variable Cost Per Unit

Value    Probability    Cumulative Probability    Random Nos.
1        0.20           0.20                      00-19
2        0.50           0.70                      20-69
3        0.30           1.00                      70-99

Net profit can easily be ascertained by using the formula:

Net profit, NP = (Sales price - Variable cost) × Expected sales volume - Initial
investment.

Simulating for 20 trials, we have:

Trial    Selling Price          Variable Cost          Exp. Volume            Net Profit
No.      R. No.   Value (Rs.)   R. No.   Value (Rs.)   R. No.   Value         (Rs.)
1        57       4             13       1             86       4,000         7,000
2        39       4             22       2             13       2,000         -1,000
3        96       5             63       2             67       3,000         4,000
4        49       4             63       2             00       2,000         -1,000
5 77 4 23 2 79 4,000 3,000
6 26 4 18 1 37 3,000 4,000
7 95 5 32 2 85 4,000 7,000
8 30 4 21 2 24 3,000 1,000
9 50 4 13 1 18 2,000 1,000
10 11 3 16 1 36 3,000 1,000
11 80 5 10 1 52 3,000 7,000
12 64 4 77 3 16 2,000 -3,000
13 88 5 68 2 17 2,000 1,000
14 69 4 48 2 32 3,000 1,000
15 24 4 51 2 29 2,000 -1,000
16 07 3 24 2 29 3,000 -2,000
17 70 4 97 3 02 2,000 -3,000
18 08 4 54 2 12 2,000 -3,000
19 40 4 44 2 08 2,000 -1,000
20 16 3 90 I 59 3,000 0

Total net profit = Rs. 22,000

Total trials = 20
Expected average profit = 22,000 ÷ 20 = Rs. 1,100
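The same random-number mechanics drive the profit simulation. The sketch below (Python assumed) shows the idea for the first three trials; the initial investment is written as a placeholder parameter because its value is not legible in the text above, so the number used here is an assumption, not a figure from the problem.

```python
# Sketch of the profit simulation of Example 19 (Python assumed).
# INITIAL_INVESTMENT is a placeholder; substitute the value stated in the original problem.
def lookup(rn, table):
    for upper, value in table:
        if rn <= upper:
            return value

price_table = [(19, 3), (69, 4), (99, 5)]             # selling price, Rs. per unit
volume_table = [(19, 2000), (69, 3000), (99, 4000)]   # expected sales volume, units
varcost_table = [(19, 1), (69, 2), (99, 3)]           # variable cost, Rs. per unit

INITIAL_INVESTMENT = 5000                             # assumed placeholder value

trials = [(57, 13, 86), (39, 22, 13), (96, 63, 67)]   # (price RN, cost RN, volume RN)
profits = []
for rp, rc, rv in trials:
    price = lookup(rp, price_table)
    cost = lookup(rc, varcost_table)
    volume = lookup(rv, volume_table)
    profits.append((price - cost) * volume - INITIAL_INVESTMENT)

print(profits, sum(profits) / len(profits))           # per-trial and average net profit
```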

Example 20:
A car dealer wishes to apply the technique of simulation for determining the optimal
inventory level required to be maintained by him. The daily demand for cars is highly
unpredictable and can be presented as under:
Daily demand (units)    Probability (%)
0-1                     -
2                       6
3                       14
4                       18
5                       17
6                       16
7                       12
8                       8
9                       6
10                      3
>10                     -
The dealer follows the policy of ordering 30 units whenever the closing inventory falls to
20 units or below. Assuming the lead time to be 5 days and the beginning inventory to
be 30 units, simulate the inventory model for a two-week period and also comment
upon the validity of the result.
Solution:
Generating random numbers, we have:
Daily demand    Probability    Cumulative Probability    R. Nos.
0-1             -              -                         -
2               0.06           0.06                      00-05
3               0.14           0.20                      06-19
4               0.18           0.38                      20-37
5               0.17           0.55                      38-54
6               0.16           0.71                      55-70
7               0.12           0.83                      71-82
8               0.08           0.91                      83-90
9               0.06           0.97                      91-96
10              0.03           1.00                      97-99
>10             -              -                         -

Day     Qty.     Opening      R.      Demand    Closing      Lost     Order
No.     Recd.    Inventory    Nos.              Inventory    Sale     Qty.
1 30 97 10 20 30
2 20 02 2 18
3 18 80 7 11
4 11 66 6 5
5 5 96 9 0 4
6 30 0 55 6 24
7 24 50 5 19 30
8 19 29 4 15
9 15 58 6 9
10 9 51 5 4
11 4 04 2 2
12 30 2 86 8 24
13 24 24 4 20 30
14 20 39 5 15
Total 186 units

Average closing inventory = 186 ÷ 14 = 13.29 units.


However, it is better to simulate the trials for a very large number of runs before applying
the result to the problem at hand.

Example 21:
Alpha Ltd. is introducing a new product in the market. The following information has been
collected by the marketing research department:
Variable Cost    Probability    Fixed Costs    Probability    Sales Price    Sales      Probability
(per unit)                                                    (per unit)     Volume
3.00             0.10           10,000         0.10           6.00           20,000     0.10
3.50             0.20           15,000         0.15           6.25           18,000     0.20
4.00             0.30           20,000         0.25           6.50           16,000     0.25
4.50             0.10           25,000         0.30           6.75           15,000     0.15
5.00             0.30           30,000         0.20           7.00           10,000     0.30

From the above data, simulate a 10-day trial and determine the expected profit. Sales
volume need not be considered as a separate variable.
Solution:
Generating random numbers, we have:
Variable Cost    Probability    Cumulative     Random Nos.
(per unit)                      Probability
3.00             0.10           0.10           00-09
3.50             0.20           0.30           10-29
4.00             0.30           0.60           30-59
4.50             0.10           0.70           60-69
5.00             0.30           1.00           70-99

Fixed Costs      Probability    Cumulative     Random Nos.
                                Probability
10,000           0.10           0.10           00-09
15,000           0.15           0.25           10-24
20,000           0.25           0.50           25-49
25,000           0.30           0.80           50-79
30,000           0.20           1.00           80-99

Sales Price (/unit)    Sales Volume    Probability    Cum. Probability    R. Nos.
6.00                   20,000          0.10           0.10                00-09
6.25                   18,000          0.20           0.30                10-29
6.50                   16,000          0.25           0.55                30-54
6.75                   15,000          0.15           0.70                55-69
7.00                   10,000          0.30           1.00                70-99

This column is not simulated, since it is given that the sales volume is not a separate
random variable.
Trial    R. No.    Selling      Sales      Sales       Variable     Total        Total      Total     Net
No.      (from     Price        Volume     Value       Cost         Variable     Fixed      Cost      Profit
         table)    (/unit)                             (/unit)      Cost         Cost
1 11 6.25 18,000 1,12,500 3.00 54,000 15,000 69,000 43,500
2        91        7.00         10,000     70,000      5.00         50,000       30,000     80,000    (10,000)
3        99        7.00         10,000     70,000      5.00         50,000       30,000     80,000    (10,000)
4        57        6.75         15,000     1,01,250    4.00         60,000       25,000     85,000    16,250
5        26        6.25         18,000     1,12,500    3.50         63,000       20,000     83,000    29,500
6        5         6.50         16,000     1,04,000    4.00         64,000       25,000     89,000    15,000
7        70        7.00         10,000     70,000      5.00         50,000       25,000     75,000    (5,000)
8        33        6.50         16,000     1,04,000    4.00         64,000       20,000     84,000    20,000
9        91        7.00         10,000     70,000      5.00         50,000       30,000     80,000    (10,000)
Total net profit = 98,000
No. of trials = 10
Average net profit = Rs. 9,800

Example 22:
Bright Bakery keeps stock of a popular brand of cake. Previous experience indicates
the daily demand as given here.
Daily Demand    0      10     20     30     40     50
Probability     0.01   0.20   0.15   0.50   0.12   0.02
Consider the following sequence of random numbers:
R. No.: 48, 78, 19, 51, 56, 77, 15, 14, 68, 09
Using this sequence, simulate the demand for the next 10 days. Find out the stock
situation if the owner of the bakery decides to make 30 cakes every day. Also estimate
the daily average demand for the cakes on the basis of the simulated data.
Solution:
According to the given distribution of demand, the random number coding for the various
demand levels is as follows:
Demand Probability Cumulative Probability Random Number Interval
0 0.01 0.01 00
10 0.20 0.21 01-20
20 0.15 0.36 21-35
30 0.50 0.86 36-85

40 0.12 0.98 86-97
50 0.02 1.00 98-99

The simulated demand for the cakes for the next 10 days is given below. Also given is
the stock situation for the various days, in accordance with the bakery's decision to make
30 cakes per day.

Day Number    Random Number    Demand Generated    Stock Left Over
1             48               30                  -
2             78               30                  -
3             19               10                  20 (30 - 10)
4             51               30                  20
5             56               30                  20
6             77               30                  20
7             15               10                  40 [20 + (30 - 10)]
8             14               10                  60 [40 + (30 - 10)]
9             68               30                  60
10            09               10                  80 [60 + (30 - 10)]

Hence the expected demand = 220/10 = 22 units per day.
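A short sketch (Python assumed) of the same random-number coding shows how the simulated demand and the day-end stock in the table above are generated:

```python
# Sketch of Example 22: simulate 10 days of cake demand with a production of 30 cakes/day.
def lookup(rn, table):
    for upper, value in table:
        if rn <= upper:
            return value

demand_table = [(0, 0), (20, 10), (35, 20), (85, 30), (97, 40), (99, 50)]
random_numbers = [48, 78, 19, 51, 56, 77, 15, 14, 68, 9]

stock, total_demand = 0, 0
for rn in random_numbers:
    demand = lookup(rn, demand_table)
    total_demand += demand
    stock += 30 - demand              # 30 cakes baked each day; leftovers carry over

print(total_demand / len(random_numbers))   # average daily demand: 22 cakes
print(stock)                                # stock left after 10 days: 80 cakes
```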



Review Questions

Q.1. What is Linear Programming? Describe the mathematical model of L.P.


Q.2. What is the Graphical Method? Give the procedure of the graphical method.
Q.3. What is a Markov Chain? Explain the applications of Markov Chains.
Q.4. Explain steady state probability with an example.
Q.5. What is simulation? Explain simulation in a queuing system.
Q.6. Write a short note on: How Monte Carlo simulation works.
Q.7. Explain the scope and limitations of Monte Carlo simulation.
Q.8. Problems for Practice:
1) Solve the L.P. problem:
Minimise: Z = x1 - 3x2 + 2x3
Subject to the constraints:
3x1 - x2 + 3x3 ≤ 7
-2x1 + 4x2 ≤ 12
-4x1 + 3x2 + 8x3 ≤ 10
and x1, x2, x3 ≥ 0
2) A company makes two kinds of belts. Belt A is of high quality and belt B is of
lower quality. The respective profits are Rs. 8 and Rs. 6 per belt. Each belt of type A
requires twice as much time as a belt of type B, and if all belts were of type B, the
company could make 1,000 belts per day. The supply of leather is sufficient for
only 800 belts per day (both A and B combined). Belt A requires a fancy buckle and only
400 such buckles are available per day. There are only 700 buckles per day
available for type B. Determine the number of belts to be produced of each type
so as to maximise profit. Formulate and solve the problem graphically.
3) A retired person wants to invest up to an amount of Rs. 30,000 in fixed income
securities. His broker recommends investing in two bonds - Bond A yielding 7%
and Bond B yielding 10%. After some consideration, he decides to invest at most
Rs. 12,000 in Bond B and atleast Rs. 6,000 in Bond A. He also wants the amount
invested in Bond A to be atleast equal to the amount invested in Bond B. What
should the broker recommend if the investor wants to maximise his return on
investment? Solve graphically.
4) Solve graphically the following LPP:
Maximize Z = 8x1 + 16x2
Subject to
x1 + x2 ≤ 200
x2 ≤ 125
3x1 + 6x2 ≤ 900
x1, x2 ≥ 0

5) A confectioner sells confectionery items. Past data of demand per week in
hundred kilograms with frequency is given below:
Demand/week 0 5 10 15 20 25
Frequency 2 11 8 21 5 3

Using the following sequence of random numbers, generate the demand for next
15 weeks. Also find out the average demand per week.
Random numbers: 35 52 90 13 23 73 34 57
35 83 94 565 67 66 60

6) The management of ABC Company is considering the question of marketing a


new product. The fixed cost required in the project is Rs. 4,000. Three factors are
uncertain, viz., the selling price, the variable cost and the annual sales volume. The
product has a life of only one year. The management has the data on these three
factors as under:
Selling Price Probability Variable Probability Sales Probability
(Rs.) Cost (Rs) Volume (Units)
3 0.2 1 0.3 2,000 0.3
4 0.5 2 0.6 3,000 0.3
5 0.3 3 0.1 5,000 0.4

Consider the following sequence of thirty random numbers: 81, 32, 60, 04, 46, 31,
67, 25, 24, 10, 40, 02, 39, 68, 08, 59, 66, 90, 12, 64, 79, 31, 86, 68, 82, 89, 25, 11,
98, 16.
Using the sequence (First 3 random numbers first trial, etc.), simulate the average
profit for the above project on the basis of 10 trials.

7) High Fashion Textiles Ltd. has the following sales and the lead times probability
distribution:
No. of Times    Weekly Demand (Units)        Weekly Lead Time    No. of Times
30%             250                          1                   60%
40%             275                          2                   20%
30%             300                          3                   10%
                                             4                   10%
The company has a policy of ordering exactly 1500 units when the level of the
beginning inventory falls to one-third of the ordering quantity or less. Assuming the
beginning inventory level to be the same as the order quantity, simulate the
experiment for 10 trials and advise the management as to the average inventory
required to be necessarily maintained by them. Also comment upon the validity of your
result.

8) A firm plans to purchase at least 200 quintals of scrap containing high quality
metal X and low quality metal Y. It decides that the scrap to be purchased must
contain at least 100 quintals of metal X and not more than 35 quintals of metal Y.
The firm can purchase the scrap from two suppliers (A and B) in unlimited
quantities. The percentage of X and Y metals in terms of weight in the scrap
supplied by A and B is given below.
Metals Supplier A Supplier B
X 25% 75%
Y 10% 20%

The price of A's scrap is Rs. 200 per quintal and that of B's is Rs. 400 per quintal.
The firm wants to determine the quantities that it should buy from the two suppliers
so that the total cost is minimized.
9) Solve the following LP problem graphically:
Maximize
Subject to,

and .



UNIT 3
Decision Theory, Game Theory and Queuing Theory
3.1 Decision Theory
3.2 Game Theory
3.3 Queuing Theory

Introduction:
The success or failure that an individual or organization experiences, depends, largely,
on the ability of making appropriate decisions. Making a decision requires an
enumeration of feasible and viable alternatives (courses of action or strategies) the
projection of consequences associated with different alternatives, and a measure of
effectiveness (or an objective) to identify the best alternative to be used. Queuing theory
is a form of probability that pertains to the study of waiting lines (queues). This is for a
system with a steady inflow of units (customers) and a specified number of servers
(service facilities). The analyst wants to know if the number of service facilities in the
system is adequate to handle the inflow of demands. The goal is to calculate various
performance measures of the system. These include the probability that a server is
immediately available to a new arrival, the average number of units in the queue and in
the system, and the corresponding times spent in the queue and in the system.

3.1 Decision Theory:


Decision theory in economics, psychology, philosophy, mathematics and statistics is
concerned with identifying the values, uncertainties and other issues relevant in a given
decision, its rationality, and the resulting optimal decision. It is closely related to the field
of game theory as to interactions of agents with at least partially conflicting interests
whose decisions affect each other.
Definition:
Decision theory, which is defined as the process of quantitative analysis of all the factors
that influence a decision problem, assists the decision-maker in analyzing problems
with several courses of action and consequences.


3.1.1 Concept of Decision Theory:


Decision theory is a framework of logical and mathematical concepts, aimed at helping
managers in formulating rules that may lead to a most advantageous course of action
under the given circumstances. It is an interdisciplinary approach to determining how
decisions are made given unknown variables and an uncertain decision environment.
Decision theory brings together psychology, statistics, philosophy and
mathematics to analyze the decision-making process. Decision theory is applied to a
wide variety of areas such as game theory, auctions, evolution and marketing. Decision
theory (or decision analysis) provides an analytical and systematic approach to depict
the expected result of a situation, when alternative managerial actions and outcomes
are compared. Decision theory is the combination of descriptive and prescriptive
business modeling approach to classify the degree of knowledge. The degree of
knowledge is usually divided into four categories, as shown in Figure. The complete
knowledge (or certainty) is on the far right and complete ignorance is on the far left.
Between the two are risk and uncertainty.
Ignorance Uncertainty Risk Certainty

Increasing Knowledge

Irrespective of the type of decision model, there are certain essential characteristics that
are common to all.

3.1.2 Steps of Decision-Making Process:


The decision-making process involves the following steps:
1) Identify and define the problem.
2) List all possible future events, called states of nature, which can occur in the context
of the decision problem. Such events are not under the control of the decision-maker
because they occur randomly.
3) Identify all the courses of action (alternatives or decision choices) that are available to
the decision-maker. The decision-maker has control over these courses of action.
4) Express the payoffs (Pij) resulting from each pair of course of action and state of
nature. These payoffs are normally expressed, in a monetary value.
5) Apply an appropriate mathematical decision theory model to select the best course
of action from the given list based on some criterion (measure of effectiveness) that
results in the optimal (desired) payoff.

3.1.3 Types of Decision-Making Environments:


To arrive at a good decision it is necessary to consider all available data, an exhaustive list
of alternatives, knowledge of the decision environment, and the use of an appropriate quantitative

approach for decision-making. There are four types of decision-making environments:
certainty, uncertainty, risk and conflict. The knowledge of these
environments helps in choosing an appropriate quantitative approach for decision-
making. The main decision-making situations are described below.
1) Decision-Making under Certainty:
In this case the decision-maker has the complete knowledge (perfect information) of
consequence of every decision choice (course of action or alternative). Obviously,
he will select an alternative that yields the largest return (payoff) for the known
future (state of nature).
2) Decision Making under Risk:
Decision-making under risk is a probabilistic decision situation in which more than
one state of nature exists and the decision-maker has sufficient information to
assign probability values to the likely occurrence of each of these states. Knowing
the probability distribution of the states of nature, the best decision is to select that
course of action which has the largest expected payoff value. The expected
(average) payoff of an alternative is the sum of all possible payoffs of that alternative
weighted by the probabilities of the occurrence of those payoffs. The most widely
used criterion for evaluating various courses of action (alternatives) under risk is the
Expected Monetary Value (EMV) or Expected Utility.
a) Expected Monetary Value (EMV):
The expected monetary value (EMV) for a given course of action is the weighted
sum of possible payoffs for each alternative. This is obtained. The expected (or
mean) value is the long-run average value that would result if the decision were
repeated a large number of times. Mathematically, EMV is stated as follows:

EMV (course of action Sj) = Σ (i = 1 to m) pij pi

Where m = number of possible states of nature
pi = probability of occurrence of state of nature Ni
pij = payoff associated with state of nature Ni and course of action Sj
b) Steps for calculating EMV:
The various steps involved in the calculation of EMV are as follows:
i) Construct a payoff matrix listing all possible courses of action and states of
nature. Enter the conditional payoff values associated with each possible
combination of course of action and state of nature along with the
probabilities of the occurrence of each state of nature.
ii) Calculate the EMV for each course of action by multiplying the conditional
payoffs by the associated probabilities and adding these weighted values for
each course of action.
iii) Select the course of action that yields the optimal EMV.

Example:
Mr. X flies quite often from town A to town B. He can use the airport bus which
costs Rs. 25, but if he takes it, there is a 0.08 chance that he will miss the flight.
A stay in a hotel costs Rs. 270, with a 0.96 chance of being on time for the
flight. For Rs. 350 he can use a taxi, which gives a 99 per cent chance of being
on time for the flight. If Mr. X catches the plane on time, he will conclude a
business transaction that will produce a profit of Rs. 10,000; otherwise he
will lose it. Which mode of transport should Mr. X use? Answer on the basis of
the EMV criterion.
Solution:
Computation of EMV of various courses of action is shown in Table
For Bus:
                        Payoff (Rs.)            Probability    Expected Value
Catches the flight      10,000 - 25 = 9,975     0.92           9,177.00
Misses the flight       -25                     0.08           -2.00
Expected monetary value (EMV) = 9,175
For Stay in Hotel:
                        Payoff (Rs.)            Probability    Expected Value
Catches the flight      10,000 - 270 = 9,730    0.96           9,340.80
Misses the flight       -270                    0.04           -10.80
Expected monetary value (EMV) = 9,330
For Taxi:
                        Payoff (Rs.)            Probability    Expected Value
Catches the flight      10,000 - 350 = 9,650    0.99           9,553.50
Misses the flight       -350                    0.01           -3.50
Expected monetary value (EMV) = 9,550
Comparing the EMVs associated with each course of action indicates that the course
of action 'Taxi' is the logical alternative because it has the highest EMV.
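The EMV computation is just a probability-weighted sum of the payoffs for each course of action, so it is easy to tabulate in code. A minimal sketch (Python assumed):

```python
# Minimal sketch: EMV for each mode of transport in the example above.
PROFIT = 10000   # profit if Mr. X catches the flight (Rs.)

options = {
    # name: (ticket cost in Rs., probability of catching the flight)
    "Bus":   (25, 0.92),
    "Hotel": (270, 0.96),
    "Taxi":  (350, 0.99),
}

for name, (cost, p_catch) in options.items():
    emv = p_catch * (PROFIT - cost) + (1 - p_catch) * (-cost)
    print(name, emv)          # Bus 9175.0, Hotel 9330.0, Taxi 9550.0

# The course of action with the highest EMV (Taxi) is chosen.
```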
3) Decision-making under Uncertainty:
In this case the decision maker is unable to specify the probabilities with which the
various states of nature (futures) will occur. However, this is not the case of decision
making under ignorance, because the possible states of nature are known. Thus,
decisions under uncertainty are taken even with less information than decisions
under risk. For example, the probability that Mr. X will be the prime minister of the
country 15 years from now is not known. In the absence of knowledge about the
probability of any state of nature (future) occurring, the decision- maker must arrive

at a decision only on the actual conditional payoff values, together with a policy
(attitude). There are several different criteria of decision-making in this situation. The
criteria that we will discuss in this section include:
a) Optimism (Maximax or Minimin) Criterion:
In this criterion the decision-maker ensures that he should not miss the
opportunity to achieve the largest possible profit (maximax) or the lowest
possible cost (minimin). Thus, he selects the alternative (decision choice or
course of action) that represents the maximum of the maxima (or minimum of
the minima) payoff (consequences or outcomes). The working method is
summarized as follows:
i) Locate the maximum (or minimum) payoff values corresponding to each
alternative (or course of action).
ii) Select an alternative with best-anticipated payoff value (maximum for profit
and minimum for cost).
Since in this criterion the decision-maker selects an alternative with largest (or
lowest) possible payoff value, it is also called an optimistic decision criterion.
b) Pessimism (Maximin or Minimax) Criterion:
In this criterion the decision-maker ensures that he should earn no less (or pay
no more) than some specified amount. Thus, he selects the alternative that
represents the maximum of the minima in case of profit (or the minimum of the
maxima in case of loss or cost) payoff. The working method is
summarized as follows:
i) Locate the minimum (or maximum in case of profit) payoff value in case of
loss (or cost) data corresponding to each alternative.
ii) Select an alternative with the best anticipated payoff value (maximum for
profit and minimum for loss or cost).
Since in this criterion the decision-maker is conservative about the future and
always anticipates the worst possible outcome (minimum for profit and
maximum for cost or loss), it is called a pessimistic decision criterion. This
criterion is also known as Wald’s criterion.
c) Equal Probabilities (Laplace) Criterion:
Since the probabilities of states of nature are not known, it is assumed that all
states of nature will occur with equal probability, i.e. each state of nature is
assigned an equal probability. As states of nature are mutually exclusive and
collectively exhaustive so the probability of each of these must be 1/(number of
states of nature). The working method is summarized as follows:
i) Assign an equal probability value to each state of nature by using the formula:
1 ÷ (number of states of nature).
ii) Compute the expected (or average) payoff for each alternative (course of
action) by adding all the payoffs and dividing by the number of possible
states of nature, or by applying the formula:
Expected payoff = Σ (probability of state of nature j) × (payoff value for the
combination of alternative i and state of nature j).
iii) Select the best expected payoff value (maximum for profit and minimum for
cost).
This criterion is also known as the criterion of insufficient reason. This is
because except in a few cases, some information of the likelihood of occurrence
of states of nature is available.
d) Coefficient of Optimism (Hurwicz) Criterion:
This criterion suggests that a rational decision-maker should be neither
completely optimistic nor pessimistic and, therefore must display a mixture of
both. Hurwicz, who suggested this criterion, introduced the idea of a coefficient of
optimism (denoted by α) to measure the decision-maker's degree of optimism.
This coefficient lies between 0 and 1, where 0 represents a completely
pessimistic attitude about the future and 1 a completely optimistic attitude about
the future. Thus, if α is the coefficient of optimism, then (1 - α) will represent the
coefficient of pessimism. The Hurwicz approach suggests that the decision-
maker must select an alternative that maximizes
H (criterion of realism) = α (maximum in column) + (1 - α) (minimum in column)
The working method is summarized as follows:
i) Decide the coefficient of optimism α (alpha) and then the coefficient of
pessimism (1 - α).
ii) For each alternative select the largest and lowest payoff values and multiply
these by α and (1 - α) respectively. Then calculate the weighted
average H by using the above formula.
iii) Select the alternative with the best anticipated weighted average payoff value.
e) Regret Criterion:
This criterion is also known as opportunity loss decision criterion or minimax
regret decision criterion. This is because the decision-maker regrets the fact that he
adopted a wrong course of action (or alternative) resulting in an opportunity loss
of payoff. Thus, he always intends to minimize this regret. The working method is
summarized as follows:
i) From the given payoff matrix, develop an opportunity loss (or regret) matrix
as follows:
a) Find the best payoff corresponding to each state of nature
b) Subtract all other entries (payoff values) in that row from this value.
ii) For each course of action (strategy or alternative) identify the worst or
maximum regret value. Record this number in a new row.
iii) Select the course of action (alternative) with the smallest anticipated
opportunity-loss value.

Example 1:
A food products company is contemplating the introduction of a revolutionary new
product with new packaging, or replacing the existing product at a much higher price (S1).
It may even make a moderate change in the composition of the existing product, with
new packaging, at a small increase in price (S2), or may make a small change in the
composition of the existing product, backing it with the word 'New' and a negligible
increase in price (S3). The three possible states of nature or events are: (i) high
increase in sales (N1), (ii) no change in sales (N2) and (iii) decrease in sales (N3). The
marketing department of the company worked out the payoffs in terms of yearly net
profits for each of the strategies for these three events (expected sales). This is
represented in the following table:

States of Nature
Strategies N1 N2 N3
S1 7,00,000 3,00,000 1,50,000
S2 5,00,000 4,50,000 0
S3 3,00,000 3,00,000 3,00,000

Which strategy should the concerned executive choose on the basis of


(a) Maximin criterion (b) Maximax criterion
(c) Minimax regret criterion (d) Laplace criterion?

Solution:
The payoff matrix is rewritten as follows:
a) Maximin Criterion:

Strategies
States of Nature S1 S2 S3
N1 7,00,000 5,00,000 3,00,000
N2 3,00,000 4,50,000 3,00,000
N3 1,50,000 0 3,00,000
Column(minimum) 1,50,000 0 3,00,000

The maximum of the column minima is 3,00,000. Hence, the company should adopt
strategy S3.

b) Maximax Criterion:
Strategies
States of Nature S1 S2 S3
N1 7,00,000 5,00,000 3,00,000
N2 3,00,000 4,50,000 3,00,000
N3 1,50,000 0 3,00,000
Column(maximum) 7,00,000 5,00,000 3,00,000
The maximum of the column maxima is 7,00,000. Hence, the company should adopt
strategy S1.
c) Minimax Regret Criterion:
Opportunity loss table is shown below:
Strategies
States of Nature S1 S2 S3
N1 7,00,000- 7,00,000- 7,00,000-3,00,000
7,00,000 5,00,000 =4,00,000
=0 =2,00,000
N2 4,50,000- 4,50,000- 4,50,000-
3,00,000= 4,50,000=0 3,00,000=1,50,000
1,50,000
N3 3,00,000- 3,00,000-0= 3,00,000-3,00,000=0
1,50,000 3,00,000
=1,50,000
Column(maximum) 1,50,000 3,00,000 4,00,000

Hence the company should adopt strategy S1, which has the minimum of the maximum opportunity losses.
d) Laplace Criterion:
Since we do not know the probabilities of states of nature, assume that they are
equal. For this example, we would assume that each state of nature has a
probability 1/3 of occurrence. Thus,

Strategy Expected Return (Rs.)


S1 (7,00,000+3,00,000+1,50,000)/3=3,83,333.33
S2 (5,00,000+4,50,000+0)/3=3,16,666.66
S3 (3,00,000+3,00,000+3,00,000)/3=3,00,000
Since the largest expected return is from strategy S1, the executive must select
strategy S1.
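All four criteria applied in this example reduce to simple row and column operations on the payoff matrix, so they can be cross-checked with a few lines of code. A minimal sketch (Python assumed):

```python
# Minimal sketch: decision criteria under uncertainty for Example 1's payoffs.
payoff = {                      # strategy -> payoffs under N1, N2, N3 (Rs.)
    "S1": [700000, 300000, 150000],
    "S2": [500000, 450000, 0],
    "S3": [300000, 300000, 300000],
}

maximin = max(payoff, key=lambda s: min(payoff[s]))        # pessimistic      -> S3
maximax = max(payoff, key=lambda s: max(payoff[s]))        # optimistic       -> S1
laplace = max(payoff, key=lambda s: sum(payoff[s]) / 3)    # equal probabilities -> S1

# Minimax regret: regret = best payoff for that state of nature minus the actual payoff.
best = [max(p[i] for p in payoff.values()) for i in range(3)]
regret = {s: max(best[i] - payoff[s][i] for i in range(3)) for s in payoff}
minimax_regret = min(regret, key=regret.get)               # -> S1

print(maximin, maximax, laplace, minimax_regret)
```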

4) Decision Making Under Partial Information:
This type of situation is somewhere between the conditions of risk and conditions of
uncertainty. As regards conditions of risk, we have seen that the probability of the
occurrence of various states of nature are known as the basis of past experience,
and in conditions of uncertainty, there is no such data available. But many situations
arise where there is partial availability of data. In such circumstances, we can say
that decision making is done on the basis of partial information.
5) Decision Making Under Conflict:
A condition of conflict is said to occur when we are dealing with a rational
opponent rather than a state of nature. The decision-maker, therefore, has to
choose a strategy taking into consideration the action or counter-action of his
opponent. Brand competition, military weapons, the market place, etc. are problems
which come under this category. The choice of strategy is made on the basis of game
theory, where a decision-maker anticipates the action of the opponent and then
determines his own strategy.

3.2 Game Theory:


In business and economics literature, the term ‘game’ refers to a situation of conflict and
competition in which two or more competitors (or participants) are involved in the
decision making process in anticipation of certain outcomes over a period of time. The
competitors are referred to as players. A player may be an individual, a group of
individuals, or an organization. A few examples of competitive and conflicting decision
environments that involve the interaction between two or more competitors, where
knowledge of the theory of games may help in selecting an optimal strategy, are:
1) Pricing of products, where a firm's ultimate sales are determined not only by the
price levels it selects but also by the prices its competitors set.
2) Various TV networks have found that a programme's success largely depends on
what the competitors present in the same time slot; the outcomes of one network's
programming decisions have, therefore, been increasingly influenced by the
corresponding decisions made by other networks.
3) Success of a business tax strategy greatly depends on the position taken by the
internal revenue service regarding the expenses that may be disallowed.
4) Success of an advertising/marketing campaign largely depends on the various types
of services offered to the customers, etc.

3.2.1 Theory of Game:


Theory of games provides a series of mathematical models that may be useful in
explaining interactive decision-making concepts, where two or more competitors are
involved under conditions of conflict and competition. But as a practical tool, it is limited

in scope. However, such models provide an opportunity to a competitor to evaluate not
only his own alternatives (courses of action), but also the opponent's (or competitors')
possible choices in order to win the game. Game theory came into existence in the 20th
century. In 1944 John von Neumann and Oskar Morgenstern published a book named
Theory of Games and Economic Behavior, in which they discussed how businesses of
all types may use this technique to determine the best strategies in a competitive
business environment.
The models in the theory of games can be classified based on the following factors:
1) Number of Players:
If a game involves only two players (competitors), then it is called a two-person
game. However, if the number of players is more than two, the game is referred to as
an n-person game.
2) Sum of Gains and Losses:
If in a game, the sum of the gains to one player is exactly equal to the sum of losses
to another player, so that, the sum of the gains and losses equals zero, then the
game is said to be a zero-sum game. Otherwise it is said to be non-zero sum game.
3) Strategy:
The strategy for a player is the list of all possible actions (moves or courses of
action) that he will take for every payoff (outcome) that might arise. It is assumed
that the players are already aware of the rules governing the choices. The outcome
resulting from a particular choice is also known to the players in advance and is
expressed in terms of numerical values (e.g. money, per cent of market share or
utility). Here it is not necessary that the players have definite information about each
other's strategies.
The particular strategy by which a player optimizes his gains or losses, without knowing
the competitors strategies, is called optimal strategy. The expected outcome per play,
when players follow their optimal strategy, is called the value of the game.

3.2.2 2 by 2 Zero Sum Game with Dominance:


Consider a 2 x 2 game with the payoff matrix (to player I)

              Player II
Player I   [ p11   p12 ]
           [ p21   p22 ]

Let xi be the probability that player I plays row i (i = 1, 2) and let yj be the probability
that player II plays column j (j = 1, 2). Since x1 + x2 = 1 and y1 + y2 = 1, we can write
x2 = 1 - x1
y2 = 1 - y1

A) Algorithm for 2 x 2:
Step 1:
Examine the payoff matrix for a saddle point. If one or more exist, the optimal
minimax strategies are pure strategies. They are obtained by playing the row and
column that the saddle point is in with probability 1, and the other row and column with
probability 0. The saddle point is necessarily the value of the game. If a saddle point
does not exist, go to step 2.
Step 2:
x1* = (p22 - p21) / (p11 + p22 - p12 - p21)      (1)
x2* = 1 - x1*                                    (2)
y1* = (p22 - p12) / (p11 + p22 - p12 - p21)      (3)
y2* = 1 - y1*                                    (4)
These are the optimal minimax strategies for players I and II.
Step 3:
The value of the game is
v = x1* y1* p11 + x1* (1 - y1*) p12 + (1 - x1*) y1* p21 + (1 - x1*)(1 - y1*) p22
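A minimal sketch (Python assumed) of the 2 x 2 algorithm, including the saddle-point check of Step 1 and the mixed-strategy formulas of Steps 2 and 3. The example payoff matrix at the bottom is hypothetical, chosen only to exercise the mixed-strategy branch.

```python
# Minimal sketch: solve a 2x2 zero-sum game by the algorithm above.
def solve_2x2(p):
    """p is the 2x2 payoff matrix [[p11, p12], [p21, p22]] to the row player."""
    # Step 1: a saddle point exists if maximin (over rows) equals minimax (over columns).
    maximin = max(min(row) for row in p)
    minimax = min(max(p[0][j], p[1][j]) for j in range(2))
    if maximin == minimax:
        return "pure strategies", maximin        # the saddle point is the game value

    # Step 2: optimal mixed strategies.
    denom = p[0][0] + p[1][1] - p[0][1] - p[1][0]
    x1 = (p[1][1] - p[1][0]) / denom             # row player's probability of row 1
    y1 = (p[1][1] - p[0][1]) / denom             # column player's probability of column 1

    # Step 3: value of the game.
    v = (x1 * y1 * p[0][0] + x1 * (1 - y1) * p[0][1]
         + (1 - x1) * y1 * p[1][0] + (1 - x1) * (1 - y1) * p[1][1])
    return (x1, 1 - x1), (y1, 1 - y1), v

# Hypothetical example with no saddle point: mixed strategies (0.4, 0.6) each, value 0.2.
print(solve_2x2([[2, -1], [-1, 1]]))
```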

3.2.3 Game Strategy:


The strategy for a player is the list of all possible actions (or moves or courses of action)
that he will take for every payoff (outcome) that might arise. It is assumed that the rules
governing the choices are known in advance to the players. The outcome resulting from
a particular choice is also known to the players in advance and is expressed in terms of
numerical values (e.g. money, per cent of market share or utility). Here it is not
necessary that players have definite information about each other's strategies.
The particular strategy (or complete plan) by which a player optimizes his gains or
losses, without knowing the competitor's strategies, is called the optimal strategy. The
expected outcome per play when players follow their optimal strategies is called the
value of the game.
Generally, the following two types of strategies are used by players in a game:
A) Pure Strategy:
A pure strategy defines a specific move or action that a player will follow in every
possible attainable situation in a game. Such moves may not be random, or drawn
from a distribution, as in the case of mixed strategies. A pure strategy provides a
complete definition of how a player will play a game. In particular, it determines the
move a player will make for any situation he or she could face. A player's strategy
set is the set of pure strategies available to that player. This is the decision rule that

is always used by the player to select the particular strategy (course of action).
Thus, each player knows in advance all strategies, out of which he always selects
only one particular strategy, regardless of the other player's strategy. The objective
of the players is to maximize their gains or minimize their losses. A pure strategy is
a strategy that is not defined in terms of other strategies present in the game.
Examples of pure strategies that we will consider later are "hawk" and "dove"; they
represent very different ways of trying to obtain resources: fighting and displaying.
B) Mixed Strategy:
A mixed strategy is an assignment of a probability to each pure strategy. This allows
for a player to randomly select a pure strategy. Since probabilities are continuous,
there are infinitely many mixed strategies available to a player, even if their strategy
set is finite. A strategy consisting of possible moves and a probability distribution
(collection of weights) which corresponds to how frequently each move is to be
played. A player would only use a mixed strategy when she is indifferent between
several pure strategies, and when keeping the opponent guessing is desirable - that
is, when the opponent can benefit from knowing the next move. Courses of action
that are to be selected on a particular occasion with some fixed probability are
called mixed strategies. Thus, there is a probabilistic situation, and the objective of the
players is to maximize expected gains or to minimize expected losses by making a
choice among pure strategies with fixed probabilities. For example, consider the
payoff matrix shown in the figure (known as a coordination game). Here one player
chooses a row and the other chooses a column. The row player receives the first
payoff, the column player the second. If row opts to play A with probability 1 (i.e.
play A for sure), then he is said to be playing a pure strategy. If column opts to flip a
coin and play A if the coin lands heads and B if the coin lands tails, then she is said
to be playing a mixed strategy, and not a pure strategy.

Fig.: Mixed strategy (coordination game) payoff matrix


3.3 Queuing Theory:
Queuing theory is the mathematical study of waiting lines, or queues. In queuing theory,
models are constructed so that queue lengths and waiting times can be predicted.
Queuing theory is generally considered a branch of operations research because the
results are often used when making business decisions about the resources needed to
provide a service.


3.3.1 Concept of Queuing Theory:


A queue is a waiting line (like customers waiting at a supermarket checkout counter);
queuing theory is the mathematical theory of waiting lines. More generally, queuing
theory is concerned with the mathematical modeling and analysis of systems that
provide service to random demands. A queuing model is an abstract description of such
a system. Typically, a queuing model represents the system's physical configuration, by
specifying the number and arrangement of the servers, which provide service to the
customers, and the stochastic (that is, probabilistic or statistical) nature of the demands,
by specifying the variability in the arrival process and in the service process. Queuing
theory examines every component of waiting in line to be served, including the arrival
process, the service process, the number of servers, the number of system places and
the number of "customers".

3.3.2 General Structure of Queuing Theory:

Fig. 3.1: Structure of Queuing Theory


The general structure of queuing theory is as follows:
1) Arrival Process:
The arrivals from the input population may be classified on different bases as
follows:
a) According to Source:
The source of customers for a queuing system can be infinite or finite. For
example, all people of a city or state (and others) could be the potential
customers at a superbazar.
b) According to Numbers:
The customers may arrive for service individually or in groups. Customers
visiting a beautician, students reaching at a library counter, and so on, illustrate
single arrivals. On the other hand, families visiting restaurants, ship-discharging
cargo at a dock are examples of bulk or batch arrivals.
c) According to Time:
Customers may arrive in the system at known (regular or otherwise) times, or
they might arrive in a random way.

2) Service System:
There are two aspects of a service system-(a) structure of the service system and
(b) the speed of service.
a) Structure of the Service System:
By structure of the service system, we mean how the service facilities exist.
There are several possibilities.
b) Speed of Service:
In a queuing system, the speed with which service is provided can be expressed
in either of two ways-as service rate and as service time. The service rate
describes the number of customers serviced during a particular time. The
service time indicates the amount of time needed to service a customer. Service
rates and times are reciprocals of each other and either of them is sufficient to
indicate the capacity of the facility.
3) Queue Structure:
Another element of a queuing system is the queue structure. In the queue structure,
the important thing to know is the queue discipline, which means the order by which
customers are picked up from the waiting line for service. There are a number of
possibilities. They are:
a) First-Come-First-Served (FCFS):
When customers are served in the order of their arrival, the queue
discipline is of the first-come-first-served type. For example, with a queue at the
bus stop, the people who come first board the bus first.
b) Last Come First Served:
Sometimes, the customers are serviced in an order reverse of the order in which
they enter so that the ones who join the last are served first.
c) Service in Random Order (SIRO):
Random order of service is defined as: whenever a customer is chosen for
service, the selection is made in a way that every customer in the queue is
equally likely to be selected. The time of arrival of the customers is, therefore of
no consequence in such a case.
d) Priority Service:
The customers in a queue might be rendered service on a priority basis. Thus,
customers may be called according to some identifiable characteristic (length of
job, for example) for service.

3.3.3 Terminology of Queuing Theory:


Terminology for the study of queuing systems tends to be standard. Some variation
sometimes occurs, and where there are popular alternative forms of terminology, this
will be made clear. The three main concepts in queuing theory are customers, queues,

and servers (service mechanisms). The meaning of these terms is reasonably self-
evident. In general, in a queuing system, customers for the queuing system are
generated by an input source. The customers are generated according to a statistical
distribution (at least, that is the simplifying assumption made for modeling purposes)
and the distribution describes their interarrival times, in other words, the times between
arrivals of customers. The customers join a queue. At various times, the server (service
mechanism) selects customers for service. The basis on which the customers are selected
is called the queue discipline. The head of the queue is the customer who arrived in the
queue first. Another piece of terminology which is sometimes used is the tail of the
queue. The meaning of this varies depending upon the context and the source. It normally
means either all of the queue except the head, or the last item in the queue, in other
words the customer who arrived last and is at the back of the queue. Both uses are in
common usage, and the terminology front and back of the queue will be used to describe
the customers who arrived least recently and most recently (respectively), to avoid
ambiguity.
A) Input Source:
The input source is a population of individuals, and as such is called the calling
population. The calling population has a size, which is the number of potential
customers to the system. The size can be either finite or infinite. As will become
apparent, if the calling population is infinite, various simplifying assumptions can be
made which make the process of modeling queues much easier. Most queuing
models assume that the population is infinite.
B) Queue:
Queues can be either infinite or finite. If a queue is finite, it can only hold a limited
number of customers. Most queuing models assume an infinite queue, even
though this is almost certainly not strictly true in the majority of applications of
queuing theory. This assumption is mad, because it makes the modeling process
simpler. In addition, if the maximum queue size is significantly larger than the likely
number of customers at any one time, then to all intents and purposes it is infinite
in size. The amount of time which is a customer waits in the queue for is called
the queuing time. The number of customers who arrive from the calling population
and join the queue in a given period is model by a statistical distribution.
C) Queue Discipline:
The queue discipline is the method by which customers are selected from the queue
for processing by the service mechanisms (also called servers). The queue
discipline is normally first-come-first-served (FCFS), where the customers are
processed in the order in which they arrived in the queue, such that the head of the
queue is always processed next. Most queuing models assume FCFS as the queue
discipline, and only this discipline will be considered in any detail in this section. More

information on other queuing disciplines is available in the section on queuing
theory variations.
D) Service Mechanism (Server):
The service mechanism is the way that customers receive service once they are
selected from the front of a queue. Service mechanisms are also known as servers
(in fact, this is the more common terminology). The amount of time which a
customer takes to be serviced by the server is called the service time. A statistical
distribution is used to model the service time of a server. Some queuing models
assume a single server, some multiple servers. For most general analysis, most
queuing models assume that the system either has a single server or allow the
number of servers to become a variable. This convention will be explored further in
the section on Kendall notation. A more detailed description of the server is as
follows.

3.4 Queuing Models:


In queuing theory, a queuing model is used to approximate a real queuing situation or
system, so that the queuing behaviour can be analysed mathematically.
Queuing models allow a number of useful steady state performance measures to be
determined, including:
1) Average number in the queue, or the system,
2) Average time spent in the queue, or the system,
3) Statistical distribution of those numbers or times,
4) Probability the queue is full, or empty, and
5) Probability of finding the system in a particular state
These performance measures are important as issues or problems caused by queuing
situations are often related to customer dissatisfaction with service or may be the root
cause of economic losses in a business. Analysis of the relevant queuing models allows
the cause of queuing issues to be identified and the impact of proposed changes to be
assessed. There are two types of queuing model, single server and multi-server. These
are as follows.
1) Single Server:
In queuing theory, a discipline within the mathematical theory of probability, an
M/M/1 queue represents the queue length in a system having a single server, where
a Poisson process determines arrivals and job service times have an exponential
distribution. The model name is written in Kendall's notation. The model is the most
elementary of queuing models and an attractive object of study, as closed-form
expressions can be obtained for many metrics of interest in this model.
Poisson Arrivals and Service:
M/M/1/infinity/infinity represents a single server that has unlimited queue capacity
and infinite calling population, both arrivals and service are Poisson (or random)

processes, meaning the statistical distribution of both the inter-arrival times and the
service times follow the exponential distribution. Because of the mathematical
nature of the exponential distribution, a number of quite simple relationships can be
derived for several performance measures based on knowing the arrival rate and
service rate. This is fortunate because an M/M/1 queuing model can be used to
approximate many queuing situations.
a) M/M/1/∞ :
The M/M/1 is a single-server queuing model which can be used to approximate
simple systems. The M/M/1 queuing system is described as a queuing model
where:
i) Arrivals form a Poisson process, i.e., the inter-arrival time is exponentially distributed,
ii) Service time is exponentially distributed,
iii) There is one server,
iv) The length of the queue in which arriving users wait before being served is infinite, and
v) The population of users (i.e., the pool of users) available to join the system is infinite.

There are many situations where an M/M/1 model could be used. For instance,
consider a post office with only one employee, and therefore one queue. The
customers arrive, join the queue, are served, and leave the system. If the arrival
process is Poisson and the service time is exponential, an M/M/1 model can be used.
Hence, one can easily calculate the expected number of people in the queue, the
probabilities that they will have to wait for a particular length of time, and so on.
i) Steady State Distribution:
A birth and death process is an M/M/1 queue when λi = λ and μi = μ for all i.
Let pn represent the probability mass function of a discrete random variable
denoting the number of customers in the system in the long run:
pn = (1 - ρ) ρ^n,  ρ < 1
Where,
ρ = λ/µ represents the traffic intensity of the system. For a stable system, the
intensity ρ must be less than 1. It can be seen that the steady state
probabilities for an M/M/1 queue follow the geometric distribution with
parameter (1 - ρ).
ii) Measures of Effectiveness:
Measure                                            Expression
Average number of customers in the system (Ls)     ρ/(1 - ρ)
Average number of customers in the queue (Lq)      ρ²/(1 - ρ)
Expected waiting time in system (W)                1/(µ - λ)
Expected waiting time in queue (Wq)                ρ/(µ - λ)
Utilization                                        ρ
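These relationships are simple enough to check numerically. The following Python sketch is illustrative only (the function name mm1_measures and the example rates are assumptions, not part of the text); it computes the steady-state measures of an M/M/1 queue directly from the arrival and service rates.

# Illustrative sketch: steady-state measures of an M/M/1 queue.
# Assumes lam < mu (stable system) and consistent time units.
def mm1_measures(lam, mu):
    if lam >= mu:
        raise ValueError("Unstable system: arrival rate must be below service rate")
    rho = lam / mu                 # traffic intensity (utilization)
    Ls = rho / (1 - rho)           # average number of customers in the system
    Lq = rho ** 2 / (1 - rho)      # average number of customers in the queue
    Ws = 1 / (mu - lam)            # expected time in the system
    Wq = rho / (mu - lam)          # expected time in the queue
    return {"rho": rho, "Ls": Ls, "Lq": Lq, "Ws": Ws, "Wq": Wq}

if __name__ == "__main__":
    # Hypothetical example: 2 arrivals per hour, 3 services per hour.
    print(mm1_measures(lam=2, mu=3))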

iii) Transient Solution:


Assuming the system starts with i customers at time 0, the transient probabilities
pn(t) = Pr{X(t) = n} for an M/M/1 queue are given by

Pn(t) = e^(-(λ+µ)t) [ ρ^((n-i)/2) I(n-i)(2t√(λµ)) + ρ^((n-i-1)/2) I(n+i+1)(2t√(λµ))
        + (1 - ρ) ρ^n Σ (from j = n+i+2 to ∞) ρ^(-j/2) Ij(2t√(λµ)) ]

for all n ≥ 0, where

In(y) = Σ (from k = 0 to ∞) (y/2)^(n+2k) / [k! (n + k)!]

is the infinite series for the modified Bessel function of the first kind.
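As a numerical illustration (not part of the original text), the transient expression above can be evaluated with SciPy's modified Bessel function. The starting state i, the truncation of the infinite sum at 200 terms and the example rates below are all assumptions of this sketch.

# Illustrative sketch: transient probabilities p_n(t) of an M/M/1 queue that
# starts with i customers at t = 0; the infinite sum is truncated.
import math
from scipy.special import iv      # modified Bessel function of the first kind

def mm1_transient_pn(n, t, lam, mu, i=0, terms=200):
    rho = lam / mu
    a = 2.0 * t * math.sqrt(lam * mu)
    total = (rho ** ((n - i) / 2) * iv(n - i, a)
             + rho ** ((n - i - 1) / 2) * iv(n + i + 1, a)
             + (1 - rho) * rho ** n * sum(rho ** (-j / 2) * iv(j, a)
                                          for j in range(n + i + 2, n + i + 2 + terms)))
    return math.exp(-(lam + mu) * t) * total

if __name__ == "__main__":
    # Hypothetical rates: probabilities of 0, 1 and 2 customers after t = 1.
    print([round(mm1_transient_pn(n, t=1.0, lam=2, mu=3), 4) for n in range(3)])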
The M/M/1 model has sub-types, such as the M/M/1/FIFO system described below.
b) M/M/1/FIFO Queuing System :
It is a queuing model where the arrivals follow a Poisson process, service times
are exponentially distributed and there is only one server. In other words, it is a
system with Poisson input, exponential waiting time and Poisson output with a
single channel. Queue capacity of the system is infinite with first in first out
mode. The first M in the notation stands for Poisson input, the second M for Poisson
output, 1 for the number of servers and ∞ for the infinite capacity of the system.
Formulas:
1) Probability of zero units in the system (P0) = 1 - λ/µ
2) Average queue length (Lq) = λ²/[µ(µ - λ)]
3) Average number of units in the system (Ls) = λ/(µ - λ)
4) Average waiting time of an arrival (Wq) = λ/[µ(µ - λ)]
5) Average waiting time of an arrival in the system (Ws) = 1/(µ - λ)

2) Multi Server :
This system is a multi-server model in which there are c servers and each server has
an independent and identically distributed exponential service time distribution, with
the arrival process again assumed to be Poisson.
a) M/M/c/∞:
The M/M/c/∞ queue is a multi-server queuing model. In Kendall's notation, it
describes a system where arrivals form a single queue and are governed by a
Poisson process, there are c servers and job service times are exponentially
distributed. It is a generalisation of the M/M/1 queue, which considers only a
single server. The model with infinitely many servers is the M/M/∞ queue.
i) Steady State Distribution:
For this model, with r = λ/µ and ρ = λ/(cµ), the steady state probabilities are given by:
Pn = (r^n / n!) P0,               1 ≤ n ≤ c
Pn = (r^n / (c! c^(n-c))) P0,     n > c
Where,
P0 = [ Σ (from n = 0 to c-1) r^n/n!  +  r^c / (c! (1 - ρ)) ]^(-1),   ρ < 1

ii) Measures of Effectiveness:
With r = λ/µ and ρ = λ/(cµ), the measures of effectiveness are:
Measure                                            Expression
Average number of customers in the system (Ls)     Lq + r
Average number of customers in the queue (Lq)      [r^c ρ / (c! (1 - ρ)²)] P0
Expected waiting time in system (W)                Wq + 1/µ
Expected waiting time in queue (Wq)                Lq/λ
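A small Python sketch (illustrative; the function name mmc_measures and the example values are assumptions) shows how P0 and these measures are evaluated in practice for an M/M/c queue.

# Illustrative sketch: steady-state measures of an M/M/c queue.
import math

def mmc_measures(lam, mu, c):
    r = lam / mu                    # offered load
    rho = lam / (c * mu)            # traffic intensity per server
    if rho >= 1:
        raise ValueError("Unstable system: rho must be below 1")
    p0 = 1.0 / (sum(r ** n / math.factorial(n) for n in range(c))
                + r ** c / (math.factorial(c) * (1 - rho)))
    Lq = (r ** c * rho / (math.factorial(c) * (1 - rho) ** 2)) * p0
    Ls = Lq + r
    Wq = Lq / lam
    W = Wq + 1 / mu
    return {"P0": p0, "Lq": Lq, "Ls": Ls, "Wq": Wq, "W": W}

if __name__ == "__main__":
    # Hypothetical example: 3 servers, 2.5 arrivals per unit time, service rate 1 per server.
    print(mmc_measures(lam=2.5, mu=1.0, c=3))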

The M/M/c model has sub-types, M/M/c/FIFO.


b) M/M/c/FIFO Queuing System:
Where,
Mean arrival rate = λ, Mean service rate = µ, Number of servers = c, System capacity = N
i) Traffic intensity: ρ = λ/(cµ)
ii) P0 = [ Σ (from n = 0 to c-1) (1/n!)(λ/µ)^n + (1/c!)(λ/µ)^c Σ (from n = c to N) (λ/(cµ))^(n-c) ]^(-1)
    Pn = (1/n!)(λ/µ)^n P0,                  n ≤ c
    and
    Pn = (1/(c! c^(n-c)))(λ/µ)^n P0,        c < n ≤ N
iii) Lq = [P0 (λ/µ)^c ρ / (c!(1 - ρ)²)] [1 - ρ^(N-c) - (N - c)(1 - ρ)ρ^(N-c)],
     where ρ = λ/(cµ)
iv) Ls = Lq + λ'/µ, where λ' (or λeff) = effective arrival rate = µ[c - Σ (from n = 0 to c-1) (c - n)Pn]
v) Ws = Ls/λ'
vi) Wq = Lq/λ' and Wq = Ws - 1/µ
vii) P(n = N) = (1/(c! c^(N-c)))(λ/µ)^N P0
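The finite-capacity formulas above fit together as in the following Python sketch (illustrative; the function name and example parameters are assumptions). Note how the effective arrival rate λ' is used for Ls, Ws and Wq.

# Illustrative sketch: M/M/c queue with finite system capacity N (FIFO discipline).
import math

def mmcn_measures(lam, mu, c, N):
    r = lam / mu
    rho = lam / (c * mu)
    p0 = 1.0 / (sum(r ** n / math.factorial(n) for n in range(c))
                + (r ** c / math.factorial(c)) * sum(rho ** (n - c) for n in range(c, N + 1)))

    def pn(n):                      # steady-state probability of n customers
        if n <= c:
            return (r ** n / math.factorial(n)) * p0
        return (r ** n / (math.factorial(c) * c ** (n - c))) * p0

    Lq = (p0 * r ** c * rho / (math.factorial(c) * (1 - rho) ** 2)) \
         * (1 - rho ** (N - c) - (N - c) * (1 - rho) * rho ** (N - c))
    lam_eff = mu * (c - sum((c - n) * pn(n) for n in range(c)))   # effective arrival rate
    Ls = Lq + lam_eff / mu
    Ws = Ls / lam_eff
    Wq = Lq / lam_eff
    return {"P0": p0, "Lq": Lq, "Ls": Ls, "Wq": Wq, "Ws": Ws, "P_full": pn(N)}

if __name__ == "__main__":
    # Hypothetical example: 2 servers, system capacity 5.
    print(mmcn_measures(lam=3.0, mu=2.0, c=2, N=5))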

Solved Problems:
A) Problems on EMV:
Example 1:
The manager of a flower shop promises its customers delivery within four hours on all
flower orders. All flowers are purchased on the previous day and delivered to Parker by
8.00 am the next morning. The daily demand for roses is as follows.
Dozens of roses: 70 80 90 100
Probability 0.1 0.2 0.4 0.3
The manager purchases roses for Rs 10 per dozen and sells them for Rs 30. All unsold
roses are donated to a local hospital. How many dozens of roses should Parker order
each evening to maximize its profits? What is the optimum expected profit?

Solution:
Since number of roses (in dozen) purchased is under control of decision-maker,
purchase per day is considered as ‘course of action' (decision choice) and the daily
demand of the flowers is uncertain and only known with probability, therefore it is
considered as a 'state of nature' (event). From the data, it is clear that the flower shop
must not purchase less than 70 or more than 100 dozen roses per day. Also, each dozen
roses sold within a day yields a profit of Rs (30 - 10) = Rs 20, and each unsold dozen is a loss
of Rs 10. Thus
Marginal profit (MP) = Selling price - Cost = 30 - 10 = Rs 20
Marginal loss (ML) = Loss on unsold roses = Rs 10
Using the information given in the problem, the various conditional profit (payoff) values
for each combination of decision choice and event are given by
Conditional profit = MP x Roses sold - ML x Roses not sold
                   = 20S,                           if D ≥ S
                   = 20D - 10(S - D) = 30D - 10S,   if D < S

Where D denotes the demand (dozens of roses that can be sold within a day) and S the
number of dozens stocked. The resulting conditional profit values and corresponding
expected payoffs are computed in the table below.

States of Nature    Probability   Conditional Profit (Rs) due to        Expected Payoff (Rs) due to
(Demand per day)                  Course of Action (purchase per day)   Course of Action (purchase per day)
                                  70      80      90      100           70      80      90      100
70                  0.1           1400    1300    1200    1100          140     130     120     110
80                  0.2           1400    1600    1500    1400          280     320     300     280
90                  0.4           1400    1600    1800    1700          560     640     720     680
100                 0.3           1400    1600    1800    2000          420     480     540     600
Expected monetary value (EMV)                                           1400    1570    1680    1670

Since the highest EMV of Rs 1,680 corresponds to the course of action 90, the flower
shop should purchase 90 dozen roses every day.
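The EMV table above can be reproduced with a few lines of Python. The sketch below is illustrative (the variable names and layout are assumptions); it computes the conditional payoff and the EMV for each purchase quantity from the demand distribution, MP = Rs 20 and ML = Rs 10 per dozen.

# Illustrative sketch: EMV for the rose-purchasing problem (Example 1).
demand_probs = {70: 0.1, 80: 0.2, 90: 0.4, 100: 0.3}   # demand in dozens per day
MP, ML = 20, 10                     # profit per dozen sold, loss per dozen unsold

def conditional_profit(demand, stock):
    sold = min(demand, stock)
    return MP * sold - ML * (stock - sold)

def emv(stock):
    return sum(p * conditional_profit(d, stock) for d, p in demand_probs.items())

if __name__ == "__main__":
    for stock in sorted(demand_probs):
        print(stock, "dozen -> EMV = Rs", emv(stock))   # best: 90 dozen, EMV = Rs 1680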

Example 2:
A retailer purchases cherries every morning at Rs 50 a case and sells them for Rs 80 a
case. Any case that remains unsold at the end of the day can be disposed of the next
day at a salvage value of Rs 20 per case (thereafter they have no value). Past sales
have ranged from 15 to 18 cases per day. The following is the record of sales for the
past 120 days.
Cases sold 15 16 17 18
Number of days 12 24 48 36
Find out how many cases should the retailer purchase per day in order to maximize his
profit.
Solution:
Let Ni (i = 1, 2, 3, 4) be the possible states of nature (daily likely demand) and Sj (j =
1, 2, 3, 4) be all possible courses of action (number of cases of cherries to be
purchased).
Marginal profit (MP) = Selling price - Cost = Rs (80 - 50) = Rs 30
Marginal loss (ML) = Loss on unsold cases = Rs (50 - 20) = Rs 30
The conditional profit (payoff) values for each act-event combination are given by
Conditional profit = MP x Cases sold - ML x Cases unsold
                   = 30S,                            if N ≥ S
                   = 30N - 30(S - N) = 60N - 30S,    if N < S

The resulting conditional profit values and corresponding expected payoffs are
computed in Table
States of Nature    Probability   Conditional Profit (Rs) due to        Expected Payoff (Rs) due to
(Demand per day)                  Course of Action (purchase per day)   Course of Action (purchase per day)
                                  15     16     17     18               15     16     17     18
15                  0.1           450    420    390    360              45     42     39     36
16                  0.2           450    480    450    420              90     96     90     84
17                  0.4           450    480    510    480              180    192    204    192
18                  0.3           450    480    510    540              135    144    153    162
Expected monetary value (EMV)                                           450    474    486    474

Since the highest EMV of Rs 486 corresponds to the course of action 17, the retailer
must purchase 17 cases of cherries every morning.

Example 3:
The probability of the demand for Lorries for hiring on any day in a given district is as
follows:
No. of lorries demanded 0 1 2 3 4
Probability 0.1 0.2 0.3 0.2 0.2
Lorries have a fixed cost of Rs 90 each day to keep, and the daily hire charge (net of
variable costs of running) is Rs 200. If the lorry hire company owns 4 lorries, what is its daily
expectation? If the company is about to go into business and currently has no lorries,
how many lorries should it buy?
Solution:
It is given that Rs 90 per lorry is the daily fixed cost and Rs 200 is the daily hire charge per
lorry demanded. The payoff values with 4 lorries at the disposal of the decision-maker are
calculated as under:

No. of Lorries Demanded    0            1            2            3            4
Payoff (with 4 lorries)    0 - 90x4     200 - 90x4   400 - 90x4   600 - 90x4   800 - 90x4
                           = -360       = -160       = 40         = 240        = 440

The daily expectation with 4 lorries is therefore
0.1(-360) + 0.2(-160) + 0.3(40) + 0.2(240) + 0.2(440) = Rs 80 per day.
Repeating the calculation for fleets of 0 to 4 lorries (see the sketch below) shows which
fleet size maximizes the expected daily profit.
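To answer the second part of the question, the same expected-value calculation can be repeated for each possible fleet size. The Python sketch below is an illustration (the function name and the loop over fleet sizes 0 to 4 are my own framing of the data given above).

# Illustrative sketch: expected daily profit for each possible fleet size (Example 3).
demand_probs = {0: 0.1, 1: 0.2, 2: 0.3, 3: 0.2, 4: 0.2}
FIXED_COST, HIRE_CHARGE = 90, 200   # Rs per lorry per day

def expected_profit(fleet):
    return sum(p * (HIRE_CHARGE * min(d, fleet) - FIXED_COST * fleet)
               for d, p in demand_probs.items())

if __name__ == "__main__":
    for fleet in range(5):
        print(fleet, "lorries -> expected daily profit = Rs", expected_profit(fleet))
    # With 4 lorries the expectation is Rs 80; the fleet size with the highest
    # expected profit answers the second part of the question.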

Example 4:
A newspaper boy has the following probabilities of selling a magazine:
No. of Copies Sold Probability
10 0.10
11 0.15
12 0.20
13 0.25
14 0.30
Total 1.00
Cost of a copy is 30 paisa and the sale price is 50 paisa. He cannot return unsold copies.
How many copies should he order?

Solution:
The only quantities of copies that are meaningful for the newsboy to purchase are
10, 11, 12, 13 or 14, since these are the possible sales magnitudes. There is no reason for
him to buy less than 10 or more than 14 copies.
Table, the conditional profit table, shows the profit resulting from any possible
combination of supply and demand. Stocking of 10 copies each day will always result in
a profit of 200 paisa irrespective of the demand. For instance, even if the demand on
some day is 13 copies, he can sell only 10 and hence his conditional profit is 200 paisa.

When he stocks 11 copies, his profit will be 220 paisa on days when buyers request 11,
12, 13, or 14 copies. But on days when he has 11 copies on stock and buyers buy only
10 copies, his profit decreases to 170 paisa. The profit of 200 paisa on the 10 copies
sold must be reduced by 30 paisa, the cost of one copy left unsold. The same will be
true when he stocks 12, 13 or 14 copies. Thus the conditional profit in paisa is given by:
Payoff = 20 x copies sold - 30 x copies unsold
Possible Probability Possible stock Action
Demand (No. 10 11 12 13 14
of Copies) copies copies copies copies copies
10 0.10 200 170 140 110 80
11 0.15 200 220 190 160 130
12 0.20 200 220 240 210 180
13 0.25 200 220 240 260 230
14 0.30 200 220 240 260 280

Next, the expected value of each decision alternative is obtained by multiplying its
conditional profit by the associated probability and adding the resulting values. This is
shown in table.
Possible Probability Possible stock Action
Demand 10 11 12 13 14
copies copies copies copies copies
10 0.10 20 17 14 11 8
11 0.15 30 33 28.5 24 19.5
12 0.20 40 44 48 42 36
13 0.25 50 55 60 65 57.5
14 0.30 60 66 72 78 84
Total Expected Profit(Paisa) 200 215 222.5 220 205

The newspaper boy must, therefore, order 12 copies to earn the highest possible
average daily profit of 222.5 paisa. This stocking will maximize the total profits over a
period of time. Of course, there is no guarantee that he will make a profit of 222.5 paisa
tomorrow. However, if he stocks 12 copies each day under these conditions, he will
have an average profit of 222.5 paisa per day. This is the best he can do, because the
choice of any one of the other four possible stock actions will result in a lower average
daily profit.

B) Problems on Decision-making under Uncertainty:


Example 5:
A manufacturer manufactures a product, of which the principal ingredient is a chemical
X. At present the manufacturer spends Rs 1,000 per year on the supply of X, but there is a
possibility that the price may soon increase to four times its present figure because of a
worldwide shortage of the chemical. There is another chemical Y, which the
manufacturer could use in conjunction with a third chemical Z, in order to give the same
effect as chemical X. Chemicals Y and Z would together cost the manufacturer Rs
3,000 per year. However, their prices are unlikely to rise. What action should the
manufacturer take? Apply the maximin and minimax criteria for decision making and give
two sets of solutions. If the coefficient of optimism is 0.4, then find the course of action
that minimizes the cost.
Solution:
The data of the problem is summarized in the following table (negative figures in the
table represent profit).

States of Nature Course of Action


S1(use Y and Z) S2(use X)
N1 (Price of X increases) -3000 -4000
N2(Price of X does not increase) -3000 -1000

a) Maximin Criterion:

States of Nature Courses of Action


S1 S2
N1 -3000 -4000
N2 -3000 -1000
Column(Minimum) -3000 -4000
Maximum of column minima = -3,000. Hence the manufacturer should adopt action S1.

b) Minimax (or opportunity loss) Criterion:

States of Nature Course of Action


S1 S2
N1 -3000-(-3000)=0 -3000-(-4000)=1000
N2 -1000-(-3000)=2000 -1000-(-1000)=0
Maximum opportunity loss 2000 1000
Hence, Manufacturer should adopt minimum opportunity loss course of action S2.

c) Hurwicz Criterion: Given the coefficient of optimism equal to 0.4, the coefficient of
pessimism will be 1 - 0.4 = 0.6. Then, according to Hurwicz, select the course of action
that optimizes (maximum for profit, minimum for cost) the weighted payoff value.
H = α(Best payoff) + (1 - α)(Worst payoff)
  = α(Maximum in column) + (1 - α)(Minimum in column)

Course of Action Best Payoff Worst Payoff H
S1 -3000 -3000 -3000
S2 -1000 -4000 -2800
Since course of action S2 has the least cost (maximum H value) of
0.4(1,000) + 0.6(4,000) = Rs 2,800, the manufacturer should adopt it.

Example 6:
An investor is given the following investment alternatives and percentage rates of return.
States of Nature(Market condition)
Low Medium High
Regular shares 7% 10% 15%
Risky share -10% 12% 25%
Property -12% 18% 30%

Over the past 300 days, 150 days have been medium market conditions and 60 days
have had high market increases. Based on these data, state the optimum investment
strategy for the investor.
Solution:
According to the given information, the probabilities of low, medium and high market
conditions would be 90/300 or 0.30, 150/300 or 0.50 and 60/300 or 0.20, respectively.
The expected pay-offs for each of the alternatives are shown below:

                                           Strategy
Market Condition   Probability   Regular Shares   Risky Shares    Property
Low                0.30          0.07 x 0.30      -0.10 x 0.30    -0.12 x 0.30
Medium             0.50          0.10 x 0.50      0.12 x 0.50     0.18 x 0.50
High               0.20          0.15 x 0.20      0.25 x 0.20     0.30 x 0.20
Expected values                  0.101            0.080           0.114

Since the expected return of 11.4 per cent is the highest for property, the investor should
invest in this alternative.

Example 7:
A steel manufacturing company is concerned with the possibility of a strike. It will cost
an extra Rs. 20,000 to acquire an adequate stockpile. If there is a strike and the
company has not stockpiled, management estimates an additional expense of Rs.
60,000 on account of lost sales. Should the company stockpile or not if it is to use:

1) Minimin criterion
2) Minimax criterion
3) Minimax Regret criterion
4) Hurwicz criterion for =0.4
5) Laplace criterion
Solution:
Conditional cost table is constructed using the given data.
Conditional cost table (Rs.)
States of Nature Alternative
Stockpile, A1 Do not Stockpile, A2
Strike, S1 20,000 60,000
No strike, S2 20,000 0

1) Minimin Criterion:
Since the table represents costs, the minimin criterion will be used. Here the minimum
cost of alternative A1 is Rs. 20,000 and of A2 is Rs. 0. Therefore, the company should
select alternative A2, i.e., it should not stockpile, and the associated cost is Rs. 0.
2) Minimax Criterion:
Again, since the table represents costs, the minimax criterion will be used. The maximum
cost of alternative A1 is Rs. 20,000 and of A2 is Rs. 60,000. Therefore, the company
should select alternative A1, i.e., it should stockpile, and the associated cost is Rs. 20,000.
3) Minimax Regret Criterion:
Conditional regret table is first constructed. For the S1-row, regret will be the cost minus the
minimum cost of Rs. 20,000; for the S2-row it will be the cost minus the minimum cost of
Rs. 0.
Conditional regret table (Rs.)
States of Nature Alternative
Stockpile, A1 Do not Stockpile, A2
Strike, S1 0 40,000
No strike, S2 20,000 0
Maximum regret for alternative A1 is Rs. 20,000 and for A2 is Rs. 40,000.
Therefore, the company should choose alternative A1, with minimax regret of Rs. 20,000.
4) Hurwicz Criterion (Weighted Average Criterion):
For α = 0.4, the cost associated with alternative A1 =
Rs. (20,000 x 0.4 + 20,000 x 0.6) = Rs. 20,000;
cost associated with alternative A2 = Rs. (60,000 x 0.4 + 0 x 0.6)
= Rs. 24,000.
Therefore, the company should stockpile and associated cost is Rs. 20,000.
5) Laplace Criterion (Equal Probability Criterion):
Equal probability cost for alternative
A1 = (1/2)(20,000 + 20,000) = Rs. 20,000,
and equal probability cost for alternative
A2 = (1/2)(60,000 + 0) = Rs. 30,000.
Therefore, the company should stockpile and the associated cost is Rs. 20,000.
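All five criteria used in this example can be expressed compactly in code. The Python sketch below is illustrative (the cost matrix is taken from the example; the function names and the Hurwicz convention of weighting α on the best, i.e. minimum, cost are my own choices).

# Illustrative sketch: decision criteria for a cost table
# (each alternative maps to its costs under the states Strike, No strike).
costs = {"A1 Stockpile": [20000, 20000],
         "A2 Do not stockpile": [60000, 0]}

def minimin(c):  return min(c, key=lambda a: min(c[a]))
def minimax(c):  return min(c, key=lambda a: max(c[a]))
def laplace(c):  return min(c, key=lambda a: sum(c[a]) / len(c[a]))
def hurwicz(c, alpha):          # alpha weights the best (minimum) cost
    return min(c, key=lambda a: alpha * min(c[a]) + (1 - alpha) * max(c[a]))
def minimax_regret(c):
    n = len(next(iter(c.values())))
    best = [min(c[a][s] for a in c) for s in range(n)]
    return min(c, key=lambda a: max(c[a][s] - best[s] for s in range(n)))

if __name__ == "__main__":
    print("Minimin:", minimin(costs))
    print("Minimax:", minimax(costs))
    print("Minimax regret:", minimax_regret(costs))
    print("Hurwicz (alpha = 0.4):", hurwicz(costs, 0.4))
    print("Laplace:", laplace(costs))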

C) Problems on Pure-Strategy:
Example 8:
Find the optimum strategies for players A and B with game value for the following
game:
B1 B2 B3 B4
A1 5 3 8 5
A2 -4 -3 12 9
A3 8 3 -1 -5
A4 3 -1 2 3

Solution:
Using the minimax criterion, find the minimum of each row and the maximum of each
column, as shown below.
          B1    B2    B3    B4    Row Minima
A1        5     3     8     5     3
A2        -4    -3    12    9     -4
A3        8     3     -1    -5    -5
A4        3     -1    2     3     -1
Column
Maxima    8     3     12    9

It is clear that the saddle point is (A1, B2); therefore player A uses A1, player B uses
B2, and the value of the game is 3.
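A pure-strategy saddle point can be located mechanically by comparing row minima with column maxima. The Python sketch below is illustrative (the function name and matrix layout are assumptions); applied to the payoff matrix of Example 8 it reports the saddle point (A1, B2) with value 3.

# Illustrative sketch: locate pure-strategy saddle points of a payoff matrix.
def saddle_points(matrix):
    row_min = [min(row) for row in matrix]
    col_max = [max(col) for col in zip(*matrix)]
    return [(i, j, matrix[i][j])
            for i, row in enumerate(matrix)
            for j, v in enumerate(row)
            if v == row_min[i] and v == col_max[j]]

if __name__ == "__main__":
    payoff = [[5, 3, 8, 5],
              [-4, -3, 12, 9],
              [8, 3, -1, -5],
              [3, -1, 2, 3]]
    print(saddle_points(payoff))    # [(0, 1, 3)] -> row A1, column B2, game value 3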

Example 9:
Find the saddle point for the game
B
I II III
I -2 15 -2
A
II -5 -6 -4
III -5 20 -8

Solution:
To get a saddle point, enclose the minimum entry of each row by a circle and maximum
entry of each column by a square.
                    B
           I      II     III    Row Minima
      I    -2     15     -2     -2
 A    II   -5     -6     -4     -6
      III  -5     20     -8     -8
Column
Maxima     -2     20     -2

Evidently the game contains two saddle points, in cells (I, I) and (I, III), i.e., strategy I is
optimum for A, while for B there are two optimum strategies, namely I and III. The value
of the game in each case is -2.

D) Problems on Mixed-Strategy For Zero Sum Games:


Example 10:
Below are two zero-sum games. For each one, find a mixed-strategy Nash equilibrium
that is not also a pure-strategy Nash equilibrium. Payoffs are written as (player 1, player 2),
where player 1 chooses a column and player 2 chooses a row.
1)          Column 1    Column 2
   Row 1    6, -6       -1, 1
   Row 2    -5, 5       4, -4

2)          Column 1    Column 2
   Row 1    1, -1       3, -3
   Row 2    2, -2       -6, 6

Solution:
1)

Let c1 = the probability that the column player (player 1) plays column 1.
Let t1 = the probability that the row player plays row 1.
For an equilibrium, the column player needs to choose c1 so that the row player's
two rows give equal payoff. So:

-6 c1 + 1 (1-c1) (the payoff to player 2 for playing row 1) = 5 c1 + -4 (1-c1) (the payoff
to player 2 for playing row 2)
5 = 16c1
c1 = 5/16
By the same line of reasoning, for equilibrium, the row player needs to choose r1 so
that the column player's two columns give equal payoff. So:
6 r1 + -5 (1-r1) (the payoff to player 1 for playing column 1) = -1 r1 + 4 (1-r1) (the
payoff to player 1 for playing column 2)
16r1 = 9
r1 = 9/16
So the mixed-strategy Nash equilibrium is:
player 1 chooses column 1 with probability 5/16, and column 2 with probability 11/16
player 2 chooses row 1 with probability 9/16, and row 2 with probability 7/16
The value of the game for player 1 is:
6 r1 + -5 (1-r1) = 6*9/16 – 5*7/16 = 1.1875
The value of the game for player 2 is -1.1875.
2)

Let c1 = the probability that the column player (player 1) plays column 1.
Let r1 = the probability that the row player plays row 1.
For an equilibrium, the column player needs to choose c1 so that the row player's
two rows give equal payoff. So:
-1 c1 -3 (1-c1) (the payoff to player 2 for playing row 1) = -2 c1 + 6 (1-c1) (the payoff
to player 2 for playing row 2)
10c1 = 9
c1 = 9/10
By the same line of reasoning, for an equilibrium, the row player needs to choose r1
so that the column player's two columns give equal payoff. So:
1 r1 + 2 (1-r1) (the payoff to player 1 for playing column 1) = 3 r1 + -6 (1-r1) (the
payoff to player 1 for playing column 2)
10r1 = 8
r1 = 8/10 = 4/5
So the mixed-strategy Nash equilibrium is:
player 1 chooses column 1 with probability 9/10, and column 2 with probability 1/10
player 2 chooses row 1 with probability 4/5, and row 2 with probability 1/5
The value of the game for player 1 is:
1 r1 + 2 (1-r1) = 1* 4/5 + 2*1/5 = 6/5 = 1.2.
The value of the game for player 2 is -1.2.

Example 11:
Solve the following game
Player B
1 7 2
Player A 6 2 7
5 1 6

Solution:
Since all the elements in the third row are less than or equal to the corresponding
elements of the second row, row III is dominated by row II and this dominated row is
deleted. The reduced payoff matrix is given by:
Player B
1 7 2
Player A
6 2 7

Each element of the third column is greater than or equal to the corresponding element of
the first column, which means that column 3 is dominated by column 1. This
dominated column is deleted and the reduced payoff matrix is given by:
Player B
1 7
Player A
6 2

The reduced payoff matrix is a 2x2 matrix.


The optimum mixed strategies for players A and B are given by
SA = (A1, A2, A3; p1, p2, 0) with p1 + p2 = 1, and SB = (B1, B2, B3; q1, q2, 0) with q1 + q2 = 1,
Where,
p1 = (a22 - a21) / [(a11 + a22) - (a12 + a21)],   p2 = 1 - p1
q1 = (a22 - a12) / [(a11 + a22) - (a12 + a21)],   q2 = 1 - q1
For the reduced matrix, a11 = 1, a12 = 7, a21 = 6, a22 = 2, so
p1 = (2 - 6) / [(1 + 2) - (7 + 6)] = -4/-10 = 2/5,   p2 = 1 - 2/5 = 3/5
q1 = (2 - 7) / [(1 + 2) - (7 + 6)] = -5/-10 = 1/2,   q2 = 1 - 1/2 = 1/2
The value of the game (v) = (a11 a22 - a12 a21) / [(a11 + a22) - (a12 + a21)]
                          = (1 x 2 - 7 x 6) / [(1 + 2) - (7 + 6)] = -40/-10 = 4
The optimal strategy is therefore given by
SA = (A1, A2, A3; 2/5, 3/5, 0) and SB = (B1, B2, B3; 1/2, 1/2, 0)
Value of game (v) = 4
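These 2x2 formulas translate directly into code. The Python sketch below is illustrative (the function name and return format are assumptions); for the reduced matrix [[1, 7], [6, 2]] it returns p1 = 2/5, q1 = 1/2 and a game value of 4, in agreement with the solution above.

# Illustrative sketch: mixed strategies and value of a 2x2 zero-sum game
# (assumes the game has no saddle point, so the formulas apply).
def solve_2x2(a11, a12, a21, a22):
    denom = (a11 + a22) - (a12 + a21)
    p1 = (a22 - a21) / denom        # probability that A plays the first row
    q1 = (a22 - a12) / denom        # probability that B plays the first column
    value = (a11 * a22 - a12 * a21) / denom
    return {"p": (p1, 1 - p1), "q": (q1, 1 - q1), "value": value}

if __name__ == "__main__":
    print(solve_2x2(1, 7, 6, 2))    # p = (0.4, 0.6), q = (0.5, 0.5), value = 4.0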

E) Problems on Single Server M/M/1:


Example 12:
A mechanic repairs four machines. The mean time between service requirements is 5
hours for each machine and forms an exponential distribution. The mean repair time is
one hour and follows an exponential distribution. Machine downtime costs Rs. 25 per
hour and the mechanic costs Rs. 55 per day. Determine the following:
a) Probability that the service facility will be idle.
b) Expected number of machines waiting to be repaired, and being repaired.
c) Expected downtime cost per day.
Would it be economical to engage two mechanics, each repairing only two machines?

Solution:
Given, R = 4 machines, arrival rate λ = 1/5 = 0.2 machine per hour, service rate
µ = 1 machine per hour. Then ρ = λ/µ = 0.2.
a) The probability that the service facility will be idle is
P0 = [ Σ (from n = 0 to R) (R!/(R - n)!) (λ/µ)^n ]^(-1) = 0.4030

b) Expected number of machines to be out of order and being repaired,
Ls = R - (µ/λ)(1 - P0) = 4 - 5(1 - 0.4030) = 1.015 machines
c) Expected time a machine will wait in the queue for repair,
Wq = R/[µ(1 - P0)] - (λ + µ)/(λµ) = 4/(1 x 0.597) - 1.2/0.2 = 0.7 hours = 42 minutes
Total cost with one mechanic = 55 + (8 x 25) = Rs. 255
If there are two mechanics, then R = 2 and P0 = 0.68.
Here it is assumed that each mechanic with his two machines forms two mutually
exclusive systems. Then, the expected number of machines to be out of order and
being repaired is
Ls = R - (µ/λ)(1 - P0) = 2 - 5(1 - 0.68) = 0.4 machines

The expected downtime of the machine per day
= expected number of machines in the system x 8-hour day x number of mechanics
= 0.4 x 8 x 2 = 6.4 hours per day.
Total cost of hiring two mechanics = two mechanic cost + machine downtime
= 2 x 55 + 6.4 x 25
= Rs.270 per day
Cost analysis suggests that it is not economical to engage two mechanics.
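The finite-population (machine-repair) quantities used above can also be computed directly. The Python sketch below is illustrative (the function name is an assumption); it uses the standard formulas P0 = [Σ R!/(R-n)! (λ/µ)^n]^(-1), Ls = R - (µ/λ)(1 - P0) and Wq = R/[µ(1 - P0)] - (λ + µ)/(λµ), so its P0 may differ slightly from the rounded value quoted in the example.

# Illustrative sketch: single-mechanic machine-interference (finite population) model.
import math

def machine_repair(R, lam, mu):
    rho = lam / mu
    p0 = 1.0 / sum(math.factorial(R) / math.factorial(R - n) * rho ** n
                   for n in range(R + 1))
    Ls = R - (mu / lam) * (1 - p0)                       # machines waiting or under repair
    Wq = R / (mu * (1 - p0)) - (lam + mu) / (lam * mu)   # expected wait before repair starts
    return {"P0": p0, "Ls": Ls, "Wq": Wq}

if __name__ == "__main__":
    print(machine_repair(R=4, lam=0.2, mu=1.0))          # compare with the figures above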

F) Problems on Steady State Probability:


Example 13:
Consider a machine which at any time can be in either of two states: working = 1 or broken = 2.
Let X(t) be the random variable corresponding to the state of the machine at time t.
Suppose the times between transitions from working to broken are exponential
random variables with mean 1/2 week, so q12 = 2, and the times between transitions
from broken to working are exponential random variables with mean 1/9 week, so
q21 = 9. Suppose all these random variables along with X(0) are independent, so that
X(t) is a Markov process. The generator matrix is
R = [ -2    2 ]
    [  9   -9 ]
Find the steady state vector π = (π1, π2).
Solution:
Since πR = 0, we get
           [ -2    2 ]
(π1, π2)   [  9   -9 ]  = (0, 0)
Therefore
-2π1 + 9π2 = 0
2π1 - 9π2 = 0
Therefore π2 = 2π1/9.
In order to find π1 and π2 we use the fact that π1 + π2 = 1.
Combining this with π2 = 2π1/9 gives π1 + 2π1/9 = 1,
or 11π1/9 = 1,
or π1 = 9/11 and π2 = 2/11.
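Steady-state vectors of a generator matrix can also be found numerically by solving πR = 0 together with the normalisation Σπi = 1. The NumPy sketch below is illustrative (appending the normalisation row and using least squares is one of several possible approaches).

# Illustrative sketch: steady-state distribution of a continuous-time Markov chain
# with generator matrix R, solving pi R = 0 subject to sum(pi) = 1.
import numpy as np

def steady_state(R):
    n = R.shape[0]
    A = np.vstack([R.T, np.ones(n)])   # transpose R and append the normalisation row
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

if __name__ == "__main__":
    R = np.array([[-2.0, 2.0],
                  [9.0, -9.0]])
    print(steady_state(R))             # approximately [9/11, 2/11] = [0.8182, 0.1818]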



Review Questions
Q.1. What is Linear Programming? Describe the mathematical model of L.P.
Q.2. What is decision theory? Explain the types of decision-making environments.
Q.3. What is game theory? Explain 2 by 2 zero sum game with dominance.
Q.4. What is queuing theory? Explain the general structure of queuing theory.
Q.5. Explain the queuing model in details.
Q.6. Write a short note on: pure game strategy and mixed game strategy.
Q.7. Problems for Practice:
1) A steel manufacturing company is concerned with the possibility of a strike. It will
cost an extra Rs. 20,000 to acquire an adequate stockpile. If there is a strike and
the company has not stockpiled, management estimates an additional expense of
Rs. 60,000 on account of lost sales. Should the company stockpile or not if it is to
use:
a) Minimax criterion
b) Minimax Regret criterion
c) Hurwicz criterion for =0.4
d) Laplace criterion

2) Mr. Girish wants to invest Rs. 10,000 in one of the three options A, B and C. The
pay-off for his investment depends on the nature of the economy (inflation,
recession or no change). The possible returns under each economic situation are
given below:
Strategy Nature of Economy
Inflation E1 Recession E2 No Change E3
A 2,000 1,200 1,500
B 3,000 800 1,000
C 2,500 1,000 1,800
What course of action has he to take according to:
a) Pessimistic criterion (Maximin)
b) Optimistic criterion (Maximax)
c) Equally likely criterion (Laplace)
d) Regret criterion

3) Given is the following pay-off matrix.


State of Nature Probability Act
Do not Expand 200 Expand 400
expand(Rs.) units(Rs.) units (Rs.)
High Demand 0.4 2,500 3,500 5,000
Medium Demand 0.4 2,500 3,500 2,500
Low Demand 0.2 2,500 1,500 1,000
Using EMV criterion decide which of the act can be chosen as the best.

4) In a small town, there are only two stores ABC and XYZ that handle sundry goods.
The total number of customers is equally divided between the two because price
and quality of goods sold are equal. Both stores have a good reputation in the
community, and they render equally good customer service. Assume that a gain of a
customer by ABC is a loss to XYZ and vice-versa. Both stores plan to run annual
pre-Diwali sales during the first week of November. Sales are advertised through a
local newspaper, radio and television media. With aid of an advertising firm, store
ABC constructed the game matrix given below:
Strategy of ABC Strategy of XYZ
Newspaper Radio Television
Newspaper 30 40 -80
Radio 0 15 -20
Television 90 20 50
Determine the optimal strategies and worth of such strategies for both ABC and
XYZ.
5) Is the following two-person zero-sum game stable? Solve the game.
Player B
5 10 9 0
6 7 8 1
Player A
8 7 15 1
3 4 1 4
6) A tax consulting firm has three counters in its office to receive people who have
problems concerning their income, wealth and sales taxes. On an average, 48
persons arrive in an 8-hour day. Each tax adviser spends 15 minutes on an
average on an arrival. If the arrivals are Poisson distributed and service times
follow an exponential distribution, find:
a) The average number of customers in the system.
b) Average number of customers waiting to be served.
c) Average time a customer spends in the system.
7) A supermarket has two girls serving at the counters. The customers arrive in a
Poisson fashion at the rate of 12 per hour. The service time for each customer is
exponential with mean 6 minutes. Find:
a) The probability that an arriving customer has to wait for service,
b) The average number of customers in the system, and
c) The average time spent by a customer in the supermarket.



UNIT 4
CPM, PERT and Sequencing Problems
4.1 CPM
4.2 PERT
4.3 Network Calculations
4.4 Sequencing Problems

Introduction:
Programme Evaluation and Review Technique (PERT) and Critical Path Method
(CPM) are two techniques used in project management. Project management is
necessary to ensure that a project is completed within the stipulated budget, within the
allocated time and performs to satisfaction. PERT was developed by the US Navy in 1958
for managing its Polaris Missile Project. It is a very useful device for planning the time and
resources of a project. The Polaris Missile project involved 3,000 separate contracting
organizations and was regarded as the most complex project experienced till that time.
Parallel efforts, at almost the same time, were undertaken by Du Pont Company, which
developed Critical Path Method (CPM) to plan and control the maintenance of chemical
plants.

4.1 CPM:
In 1957, DuPont developed a project management method designed to address the
challenge of shutting down chemical plants for maintenance and then restarting the
plants once the maintenance had been completed. Given the complexity of the process,
they developed the Critical Path Method (CPM) for managing such projects.
CPM provides the following benefits:
a) Graphical View: Provides a graphical view of the project.
b) Accurate Prediction: Predicts the time required to complete the project.
c) Identifies Critical Activities: Shows which activities are critical to
maintaining the schedule and which are not.


4.1.1 Concept:
CPM models the activities and events of a project as a network. Activities are depicted
as nodes on the network and events that signify the beginning or ending of activities are
depicted as arcs or lines between the nodes. The following is an example of a CPM
network diagram:

Fig. 4.1 CPM Network

4.1.2 Steps in CPM Project Planning:


Following are the steps in CPM project planning:
1) Specify the Individual Activities:
From the work breakdown structure, a listing can be made of all the activities in the
project. This listing can be used as the basis for adding sequence and duration
information in later steps.
2) Determine the Sequence of the Activities:
Some activities are dependent on the completion of others. A listing of the
immediate predecessors of each activity is useful for constructing the CPM network
diagram.
3) Draw the Network Diagram:
Once the activities and their sequencing have been defined, the CPM diagram can
be drawn. CPM originally was developed as an activity on node (AON) network, but
some project planners prefer to specify the activities on the arcs.
4) Estimate Activity Completion Time:
The time required to complete each activity can be estimated using past experience
or the estimates of knowledgeable persons. CPM is a deterministic model that does
not take into account variation in the completion time, so only one number is used
for an activity's time estimate.
5) Identify the Critical Path:
The critical path is the longest-duration path through the network. The significance
of the critical path is that the activities that lie on it cannot be delayed without

delaying the project. Because of its impact on the entire project, critical path
analysis is an important aspect of project planning.
The critical path can be identified by determining the following four parameters for
each activity:
a) ES- Earliest Start time:
The earliest time at which the activity can start given that its precedent activities
must be completed first.
b) EF- Earliest Finish time:
It is equal to the earliest start time for the activity plus the time required to
complete the activity.
c) LF- Latest Finish time:
It is the latest time at which the activity can be completed without delaying the
project.
d) LS - Latest Start time:
It is equal to the latest finish time minus the time required to complete the
activity.
i) Slack Time:
The slack time for an activity is the time between its earliest and latest start
time, or between its earliest and latest finish time. Slack is the amount of
time that an activity can be delayed past its earliest start or earliest finish
without delaying the project. The critical path is the path through the project
network in which none of the activities have slack, that is, the path for which
ES=LS and EF=LF for all activities in the path. A delay in the critical path
delays the project. Similarly, to accelerate the project it is necessary to
reduce the total time required for the activities in the critical path.
6) Update CPM Diagram:
As the project progresses, the actual task completion times will be known and
the network diagram can be updated to include this information. A new critical
path may emerge, and structural changes may be made in the network if project
requirements change.

4.1.3 Important Definitions Under CPM & PERT:


1) Activity:
The work content required to be achieved to accomplish an event. It is a clearly
defined project element, a job or task which requires the consumption of resources
including time. The word activity has been adopted in preference to work content as
it also includes non-work actions like waiting (for instance, curing of a roof slab
before shuttering can be removed), whereas work signifies action or motion in time.
It is denoted by an arrow.

2) Event:
The nodes or events represent points in time when certain activities have been
started or completed. In other words, an event describes the start or completion of an
activity. It is denoted by a numbered circle.
3) Path:
A path is an unbroken chain of activities from the initiating node to some other node,
generally to the last node indicating the end or completion of the project.
4) Dummy Activity:
A dummy activity is that activity which has a logical function only and consumes no
time or resources. It is denoted by a dotted arrow. There are two types of dummies:
a) Identity Dummy: It helps to keep the designation of each activity unique or
different from another.
b) Dependency Dummy: It helps to keep the logic correct.

4.1.4 Floats and Slacks:


There are many activities where the maximum time available to finish the activity is
more than the time required to complete it, i.e., its duration. The difference between the
two is known as Total Float available for that activity. So there are two terms used in
network analysis for calculations of project duration. These are:
A) Float:
Float indicates the free time associated with an activity. It is the time available for an
activity in addition to its duration time. So, float or slack is the length of time an
activity can be delayed without delaying the whole project. When activities have no
slack time, none of them can be delayed without delaying the entire project; they are
called critical activities. Slack is therefore zero along the events on the critical path, and
hence events having zero slack or float identify the critical path.
Type of Floats:
There are four types of floats as shown in figure below:
1) Total Float:
The total float of an activity represents the amount of time by which it can be
delayed without delaying the project completion date. In other words, it refers to
the amount of free time associated with an activity which can be used before,
during or after the performance of this activity. It is equal to the difference
between the total time available for the performance of an activity and the time
required for its performance.
For any activity, i - j, the total float can be calculated as follows:
Total Float = latest finish time - earliest finish time
            = latest start time - earliest start time
            = latest finish time - earliest start time - duration of the activity

2) Interfering Float:
Utilization of the float of an activity may, and is likely to, affect the float times of
the other activities in the network. That part of the total float which causes a
reduction in the float of the successor activities is called interfering float. It is
formulated as the difference between the latest finish time of the activity in
question and the earliest starting time of the following activity, or zero, whichever
is larger; it indicates the portion of the activity float which cannot be consumed
without adversely affecting the float of the subsequent activity or activities.
3) Free Float:
The free float is that part of the total float which can be used without affecting
the float of the succeeding activities. Thus, it is that value of the float which is
consumable when the succeeding activities (of the activity in question) are
started at their earliest starting times. Alternately, free float may be computed
as follows. If the slack or float of an event is defined as the difference between
its earliest and latest event times, we can calculate the slack of the head event and
that of the tail event in respect of any activity. In that case,
Free float = Total float - Head slack
4) Independent Float:
The independent float of an activity is the amount of float time which can be
used without affecting either the head or the tail events. It represents the amount
of float time available for an activity when its preceding activities are completed
at their latest and its succeeding activities begin at their earliest times, leaving the
minimum time available for its performance. Any excess of this minimum time
over the duration of the activity is termed the independent float associated
with it. The value of independent float is taken as follows:
Independent Float = Free Float - Tail Slack

B) Slack:
The term 'slack' can be associated with both an event and an activity. In relation to
an event, the slack is the difference between its latest and earliest event times. In
relation to an activity, slack is synonymous with float, i.e., (LST - EST) or (LFT - EFT).
But slack is ordinarily associated with an event, and in such a case an activity will
have two slacks, viz., head slack (i.e., slack of its head event) and tail slack (i.e.,
slack of its tail event), where
Head slack = L - E of the head event,
Tail slack = L - E of the tail event.
Slack can be positive or negative depending upon the latest and earliest times of an
event.

4.2 PERT:
Complex projects require a series of activities, some of which must be performed
sequentially and others that can be performed in parallel with other activities. This
collection of series and parallel tasks can be modeled as a network. In 1957 the Critical
Path Method (CPM) was developed as a network model for project management. CPM
is a deterministic method that uses a fixed time estimate for each activity. While CPM is
easy to understand and use, it does not consider the time variations that can have a
great impact on the completion time of a complex project. The Program Evaluation and
Review Technique (PERT) is a network model that allows for randomness in activity
completion times. PERT was developed in the late 1950's for the U.S. Navy's Polaris
project having thousands of contractors. It has the potential to reduce both the time and
cost required to complete a project.

4.2.1 Concept of PERT:


PERT stands for Programme Evaluation and Review Technique. PERT is event
oriented. PERT is a probabilistic model i.e. it takes into account uncertainties involved in
the estimation of time of a job or an activity. It uses three estimates of the activity time -
Optimistic, Pessimistic and Most Likely. Thus, the expected duration of each activity is
probabilistic and expected duration indicates that there is 50% probability of getting the
job done within that time.

4.2.2 The Network Diagram:


In a project, an activity is a task that must be performed and an event is a milestone
marking the completion of one or more activities. Before an activity can begin, all of its
predecessor activities must be completed. Project network models represent activities
and milestones by arcs and nodes. PERT originally was an activity on arc network, in
which the activities are represented on the lines and milestones on the nodes. Over
time, some people began to use PERT as an activity on node network. For this
discussion, we will use the original form of activity on arc. The PERT chart may have
multiple pages with many sub-tasks. The following is a very simple example of a PERT
diagram:

Fig. 4.2: PERT Chart

The milestones generally are numbered so that the ending node of an activity has a
higher number than the beginning node. Incrementing the numbers by 10 allows for
new ones to be inserted without modifying the numbering of the entire diagram. The
activities in the above diagram are labeled with letters along with the expected time
required to complete the activity.

4.2.3 Steps in the PERT Planning Process:


PERT planning involves the following steps:
1) Identify Activities and Milestones:
The activities are the tasks required to complete the project. The milestones are the
events marking the beginning and end of one or more activities. It is helpful to list
the tasks in a table that in later steps can be expanded to include information on
sequence and duration.
2) Determine Activity Sequence:
This step may be combined with the activity identification step since the activity
sequence is evident for some tasks. Other tasks may require more analysis to
determine the exact order in which they must be performed.
3) Construct the Network Diagram:
Using the activity sequence information, a network diagram can be drawn showing
the sequence of the serial and parallel activities. For the original activity-on-arc
model, the activities are depicted by arrowed lines and milestones are depicted by
circles or "bubbles". If done manually, several drafts may be required to correctly
portray the relationships among activities. Software packages simplify this step by
automatically converting tabular activity information into a network diagram.
4) Estimate Activity Time:
Weeks are a commonly used unit of time for activity completion, but any consistent
unit of time can be used. A distinguishing feature of PERT is its ability to deal with
uncertainty in activity completion times. For each activity, the model usually includes
three time estimates:
a) Optimistic Time:
It is generally the shortest time in which the activity can be completed. It is
common practice to specify optimistic times to be three standard deviations from
the mean so that there is approximately a 1% chance that the activity will be
completed within the optimistic time.
b) Most likely Time:
It is the completion time having the highest probability. Note that this time is
different from the expected time.
c) Pessimistic Time:
It is the longest time that an activity might require. Three standard deviations
from the mean is commonly used for the pessimistic time.

PERT assumes a beta probability distribution for the time estimates. For a beta
distribution, the expected time for each activity can be approximated using the
following weighted average:
Expected time = (Optimistic + 4 x Most likely + Pessimistic)/6
This expected time may be displayed on the network diagram.
To calculate the variance for each activity completion time: if three-standard-deviation
times were selected for the optimistic and pessimistic times, then
there are six standard deviations between them, so the variance is given by:
[(Pessimistic - Optimistic)/6]²
5) Determine the Critical Path:
The critical path is determined by adding the times for the activities in each
sequence and determining the longest path in the project. The critical path
determines the total calendar time required for the project. If the activities outside
the critical path speed up or slow down (within limits), then the total project time
does not change. The amount of time that a non-critical path activity can be delayed
without delaying the project is referred to as slack time. If the critical path is not
immediately obvious, it may be helpful to determine the following four quantities for
each activity:
i) ES - Earliest Start time
ii) EF - Earliest Finish time
iii) LS - Latest Start time
iv) LF - Latest Finish time

a) Slack:
These times are calculated using the expected time for the relevant activities.
The earliest start and finish times of each activity are determined by working
forward through the network and determining the earliest time at which an
activity can start and finish considering its predecessor activities. The latest start
and finish times are the latest times that an activity can start and finish without
delaying the project. LS and LF are found by working backward through the
network. The difference in the latest and earliest finish of each activity is that
activity's slack. The critical path then is the path through the network in which
none of the activities have slack.
b) Variance:
The variance in the project completion time can be calculated by summing the
variances in the completion times of the activities in the critical path. Given this
variance, one can calculate the probability that the project will be completed by a
certain date assuming a normal probability distribution for the critical path. The
normal distribution assumption holds if the number of activities in the path is
large enough for the central limit theorem to be applied.

c) Project Crashing:
Since the critical path determines the completion date of the project, the project
can be accelerated by adding the resources required to decrease the time for
the activities in the critical path. Such a shortening of the project sometimes is
referred to as project crashing.
6) Update as Project Progresses:
Make adjustments in the PERT chart as the project progresses. As the project
unfolds, the estimated times can be replaced with actual times. In cases where
there are delays, additional resources may be needed to stay on schedule and
the PERT chart may be modified to reflect the new situation.

4.2.4 CPM/PERT Network Components or Network Diagram:


PERT / CPM networks contain two major components:
1) Activity:
An activity represents an action and consumption of resources (time, money,
energy) required to complete a portion of a project. Activity is represented by an
arrow.

Fig. 4.3: Activity


2) Event:
An event (or node) will always occur at the beginning and end of an activity. The
event has no resources and is represented by a circle. The ith event and jth event
are the tail event and head event respectively.

Fig. 4.4: Event


3) Merge and Burst Events:
One or more activities can start and end simultaneously at an event.

Fig. 4.5: Merge and Burst Events

4) Preceding and Succeeding Activities:
Activities performed before given events are known as preceding activities, and
activities performed after a given event, are known as succeeding activities.

Fig. 4.6: Preceding and Succeeding Activities

Activities A and B precede activities C and D respectively.


5) Dummy Activity :
An imaginary activity which does not consume any resource and time is called a
dummy activity. Dummy activities are simply used to represent a connection
between events in order to maintain logic in the network. It is represented by a
dotted line in a network.

Fig. 4.7: Dummy Activity

4.2.5 Rules to be Followed While Constructing a Network Diagram:


Rules that need to be followed while constructing a network diagram are as follows:
1) No single activity can be represented more than once in a network. The length of an
arrow has no significance.
2) The event numbered 1 is the start event and an event with highest number is the
end event. Before an activity can be undertaken, all activities preceding it must be
completed. That is, the activities must follow a logical sequence (or
interrelationship) between activities.
3) In assigning numbers to events, there should not be any duplication of event
numbers in a network.
4) Dummy activities must be used only if it is necessary to reduce the complexity of a
network.
5) A network should have only one start event and one end event.

4.3 Network Calculations:

4.3.1 Calculating EST and EFT (Forward Pass Computations):


Before starting computations, the occurrence time of initial network event is fixed. Then,
the forward pass computation yields the earliest start and earliest finish time for each
activity (i, j) and indirectly the earliest expected occurrence time for each event.
1) Earliest Start Time (EST):
The earliest time at which an activity can start is immediately after its tail event has
occurred. It is represented as Es.
2) Earliest Finish Time (EFT):
The earliest finish time of an activity requires that it starts at its earliest start time;
it therefore equals the earliest start time plus the activity duration. It is represented as Ef.
This is mainly done in two steps:
Step 1: The computations begin from the 'start' node and move towards the 'end' node.
For easiness, the forward pass computations start by assuming the earliest occurrence
time of zero for the initial project event.
Step 2: i) Earliest starting time of activity (i, j) is the earliest event time of the tail end
event, i.e., (Es)ij = Ei,
ii) Earliest finish time of activity (i, j) is the earliest starting time + the activity time, i.e.,
(Ef)ij = (Es)ij + Dij or (Ef)ij = Ei + Dij,
iii) Earliest event time for event j is the maximum of the earliest finish times of all
activities ending into that event. That is,
Ej = max over i of [(Ef)ij for all activities (i, j) ending at event j] = max over i of [Ei + Dij]

The computed 'E' values are put over the respective circles representing each event.

4.3.2 Calculating LST and LFT (Backward Pass Computations)


The latest event time (L) indicates the time by which all activities entering into that event
must be completed without delaying the completion of the project. These can be
computed by reversing the method of calculation used for earliest event times.
1) Latest Start Time (LST):
This is the latest time at which an activity can start without delaying the project
completion time. It is represented as Ls.
2) Latest Finish Time (LFT):
The latest time at which an activity can finish is immediately before the latest time
its end event can take place. It is represented as Lf.
This is done in the following steps:
Step 1: For the ending event assume E = L. Remember that all E's have been computed by
forward pass computations.

Step 2: Latest finish time for activity (i, j) is equal to the latest event time of event j, i.e.,
(Lf)ij = Lj.
Step 3: Latest starting time of activity (i, j) = the latest completion time of (i, j) - the
activity time,
or (Ls)ij = (Lf)ij - Dij or (Ls)ij = Lj - Dij.
Step 4: Latest event time for event i is the minimum of the latest start times of all
activities originating from that event, i.e.,
Li = min over j of [(Ls)ij for all activities (i, j) starting at event i] = min over j of [(Lf)ij - Dij]
   = min over j of [Lj - Dij]

The computed 'L' values are put over the respective circles representing each event.
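The forward and backward pass rules described above are easy to automate. The following Python sketch is illustrative (the activity dictionary, function name and small example network are assumptions); it assumes events are numbered so that every activity runs from a lower-numbered to a higher-numbered event, as required by the network rules given earlier, and it flags the critical activities as those with zero total float.

# Illustrative sketch: forward/backward pass on an activity-on-arrow network.
# activities maps (tail_event, head_event) -> duration.
def cpm(activities):
    events = sorted({e for ij in activities for e in ij})
    E = {e: 0.0 for e in events}
    for (i, j), d in sorted(activities.items()):                # forward pass
        E[j] = max(E[j], E[i] + d)
    L = {e: E[events[-1]] for e in events}
    for (i, j), d in sorted(activities.items(), reverse=True):  # backward pass
        L[i] = min(L[i], L[j] - d)
    critical = [ij for ij, d in activities.items()
                if abs((L[ij[1]] - d) - E[ij[0]]) < 1e-9]       # zero total float
    return E, L, critical

if __name__ == "__main__":
    # Hypothetical network: event 1 is the start event, event 4 the end event.
    acts = {(1, 2): 3, (1, 3): 5, (2, 4): 6, (3, 4): 2}
    E, L, critical = cpm(acts)
    print("E:", E)
    print("L:", L)
    print("Critical activities:", critical, "Project duration:", E[4])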

4.3.3 Slack:
Slack or float is the amount of time an event can be delayed beyond its TE without
affecting the TL of the final event. Thus, it is equal to s = TL – TE. For any event, slack
may be zero, positive or negative. The types of slack are as follows:
1) Zero Slack:
Zero slack (TL=TE) means that exactly enough time has been allowed for the
activity and spare time is not available, i.e., job would be on time.
2) Positive Slack:
Positive slack (TL>TE) means that there is more than enough time to finish the job. If
the slack for the end event is positive, i.e., the directed (scheduled) time is later than the
computed TE for the end event, the project would be ahead of schedule. A relatively large positive
slack identifies the network path which will allow the reduction of resources in its
share without causing any delay in the completion of the project as a whole. These
spare resources can be transferred from such paths to other paths requiring
resources. This results in reduction of the total duration of the overall project.
3) Negative Slack:
Negative slack (TL<TE) means that sufficient time has not been allowed for
accomplishment of an event and indicates "apparent trouble". Where negative slack
occurs, attention should be focused on these areas, as they most warrant action to
reduce the time required to complete the job.
4) Slack Time For An Event:
The slack time or slack of an event in a network is the difference between the latest
event time and earliest event time. An activity has head event slack and tail event
slack.
5) Head Event Slack (HES):
The head event slack of an activity in a network is the slack at the head (or terminal
point) of an activity. In other words, head event slack of an activity in a network is

the difference between the latest event time and earliest event time at its head (or
terminal point or node). It may be noted that HES should be calculated with the help
of network after calculating Earliest Time (E) and Latest Time (L).

4.3.4 Probability of Project Completion:


All conventional scheduling systems are based on fixed time estimate of activities,
which, in actual situations, may not be so. In the event of time estimates of jobs, not
predicted accurately, there would be uncertainty and risk. It is, therefore, natural for the
project-in-charge to know the risks involved and the extent of uncertainty associated with the
project. The main positive feature of the PERT network over the CPM network is the former's
ability to provide help for management decisions under conditions of uncertainty. The
concept provides the probability that a certain project would be completed by the given
date.
Based on the spread of b-a, the activities may be called deterministic (b-a spread is
small) and variable (b-a spread is fairly large). Most industrial activities are deterministic
in nature, while activities of R and D projects are variable in nature. As discussed
above, three-time estimates (a, b, and m) are used for variable activities. For the
calculations of the probability of project completion in a given time, following points
should be kept in mind:
1) In the majority of situations, the data on probability of occurrence of an activity vs.
duration will conform to a β-distribution.
2) The expected time divides the area under the curve into two parts, i.e., the
probability of completing an activity within its expected duration is 50%.
3) The expected time of the activity is located one-third of the distance from the
most likely time m towards the midrange (a + b)/2.
4) Standard deviation and variance, which are the measures of spread of the data
and are respectively equal to (b - a)/6 and [(b - a)/6]², help in determining the probability
of achieving the target completion date of the project or any stage of the project.
5) Although the expected time of each activity independently has a β-distribution,
the completion time T has a normal distribution with mean equal to the sum of the
expected times of the critical activities and variance equal to the sum of their variances.
This statement can be clarified as under:
Suppose a project consists of n independent critical tasks with expected times
t1, t2, ..., tn and variances σ1², σ2², ..., σn². Each of these tasks, though expected to follow
a β-distribution, contributes to the total time (project duration) T = t1 + t2 + ... + tn, which is
expected to follow a normal distribution with mean Te = Σti and variance σ² = Σσi² along
the critical path.

The conclusion is based on the concept of central limit theorem.

Using the above mentioned principles, the probability of completion of the project can be established as under:
Let te1, te2, ..., ten be the expected times of the activities on the critical path.
Expected time of completion of the project, E(T) = te1 + te2 + ... + ten, and variance of the critical path, σcp² = σ1² + σ2² + ... + σn².
T = scheduled (target) time of the project.
Once E(T) and σcp have been calculated, the probability corresponding to the standard normal deviate (Z) can be read from the tables of the normal curve.
Here, Z = (T − E(T)) / σcp.
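As a quick illustration of these formulas, the following Python sketch (not part of the original text) computes E(T), σcp, Z and the completion probability for a set of critical-path activities given by their (a, m, b) estimates; the activity data and the 20-week target in the usage lines are hypothetical.

```python
# A minimal sketch: probability of completing a project by a scheduled time T,
# given (a, m, b) estimates for the critical-path activities. Uses the PERT
# formulas te = (a + 4m + b)/6 and variance = ((b - a)/6)^2, plus the normal
# approximation for the total project time.
import math

def completion_probability(critical_activities, scheduled_time):
    """critical_activities: list of (a, m, b) tuples for critical-path activities."""
    expected = sum((a + 4 * m + b) / 6 for a, m, b in critical_activities)
    variance = sum(((b - a) / 6) ** 2 for a, m, b in critical_activities)
    sigma = math.sqrt(variance)
    z = (scheduled_time - expected) / sigma
    prob = 0.5 * (1 + math.erf(z / math.sqrt(2)))   # standard normal CDF
    return expected, sigma, z, prob

# Hypothetical data: three critical activities and a 20-week target.
acts = [(2, 4, 6), (3, 6, 9), (4, 8, 18)]
E, s, z, p = completion_probability(acts, 20)
print(f"E(T) = {E:.2f}, sigma = {s:.2f}, Z = {z:.2f}, P = {p:.3f}")
# roughly: E(T) = 19.00, sigma = 2.62, Z = 0.38, P = 0.65
```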
Activity duration time has to be arrived by the experienced team having full
knowledge of the work to be done as the subsequent network analysis is based on
activity duration only. The time, so worked out, is entered in the network. Description
of activity is given over the arrow and time is indicated below the arrow, e.g., design
machine over three weeks. Unit of time should be uniform throughout the network in
terms of weeks, days, hours, months, etc.
Following the determination of the critical path, the floats for the non-critical activities must be computed. Naturally, a critical activity must have zero float; in fact, this is the main reason it is critical. Before showing how floats are determined, it is necessary to define two new times that are associated with each activity. These are the latest start (LS) and the earliest completion (EC) times, which are defined for activity (i, j) by
LS(i, j) = L(j) − D(i, j) and EC(i, j) = E(i) + D(i, j),
where D(i, j) = duration of activity (i, j), E(i) = earliest occurrence time of event i, and L(j) = latest allowable time of event j.

4.4 Sequencing Problems:


Every organization wants to utilize its productive systems effectively and efficiently and wants to maximize its profit by meeting the delivery deadlines. A number of jobs
involving many operations have to be performed and there are limited resources in
terms of plant and machinery on which the jobs have to be performed. It is necessary
that available facilities are optimally utilized and they are loaded, scheduled and
sequenced properly. A sequence is the order in which different jobs are to be
performed. When there is a choice that a number of tasks can be performed in different
orders, then the problem of sequencing arises. Such situations are very often
encountered by manufacturing units, overhauling of equipment or aircraft engines,
maintenance schedule of a large variety of equipment used in a factory, customers in a
bank or car servicing garage and so on.

The basic concept behind sequencing is to use the available facilities in such a manner
that the cost (and time) is minimized. The sequencing theory has been developed to
solve difficult problems of using limited number of facilities in an optimal manner to get
the best production and minimum costs.

4.4.1 Type of Sequencing Problem:


The following types of sequencing problems are generally encountered:
A) n - Jobs One Machine Case:
This case of a number of jobs to be processed on one facility is very common in real
life situations. The number of cars to be serviced in a garage, number of engines to
be overhauled in one workshop, number of patients to be treated by one doctor,
number of different jobs to be machined on a lathe, etc. are cases which can be solved by using the method under study. In all such cases we are used to the 'first come first served' principle to give a sense of satisfaction and justice to the waiting jobs. But if this is not the consideration, it is possible to get more favourable results
in the interest of effectiveness and efficiency. The following assumptions are
applicable:
1) The job shop is static.
2) Processing time of the job is known.
The implication of the above assumption that job shop is static will mean that new
job arrivals do not disturb the processing of n jobs already being processed and the
new job arrivals wait to be attended to in next batch.
B) n - Jobs Through Two Machines:
The sequencing algorithm for this case was developed by Johnson and is called
Johnson's Algorithm. In this situation n jobs must be processed through machines
M1 and M2. The processing time of all the n jobs on M1 and M2 is known, and it is required to find the sequence which minimizes the time to complete all the jobs.
Johnson's algorithm is based on the following assumptions:
1) There are only two machines and the processing of all the jobs is done on both
the machines in the same order i.e. first on M1 and then on M2.
2) All jobs arrive at the same time (static arrival pattern) and have no priority for job completion.

Johnson's Algorithm involves following steps:


1) List operation time for each job on machine M1 and M2.
2) Select the shortest operation or processing time in the above list.
3) If minimum processing time is on M1, place the corresponding job first in the
sequence. If it is on M2, place the corresponding job last in the sequence. In
case of tie in shortest processing time, it can be broken arbitrarily.

4) Eliminate the jobs which have already been sequenced as result of step 3.
5) Repeat step 2 and 3 until all the jobs are sequenced.
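The steps above map directly onto code. Below is a minimal Python sketch of Johnson's rule for n jobs on two machines; it is an illustration rather than part of the original text, and the job data in the usage line are those of Example 10 (solved later in this unit).

```python
# A minimal sketch of Johnson's algorithm for n jobs on two machines (M1 then M2).
# Jobs are given as {job_name: (time_on_M1, time_on_M2)}; ties are broken arbitrarily.
def johnson_two_machines(jobs):
    remaining = dict(jobs)
    front, back = [], []
    while remaining:
        # Step 2: pick the job with the overall shortest processing time.
        job = min(remaining, key=lambda j: min(remaining[j]))
        t1, t2 = remaining.pop(job)
        # Step 3: shortest time on M1 -> place as early as possible,
        #         shortest time on M2 -> place as late as possible.
        if t1 <= t2:
            front.append(job)
        else:
            back.insert(0, job)
    return front + back

# Data of Example 10 (solved later): (M1, M2) times for 5 jobs.
times = {1: (8, 6), 2: (12, 7), 3: (5, 11), 4: (3, 9), 5: (6, 14)}
print(johnson_two_machines(times))   # -> [4, 3, 5, 2, 1], matching Example 10
```

The same routine also handles the three-machine and m-machine cases described next, once the dummy processing times M1' and M2' have been formed.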

C) n - Job 3 Machine Case:


Johnson's algorithm, which we have just applied, can be extended to the n jobs, 3 machine case if at least one of the following conditions holds good:
1) Minimum processing time for the jobs on machine M1 is greater than or equal to the maximum processing time for the jobs on machine M2,
or
2) Minimum processing time for the jobs on machine M3 is greater than or equal to the maximum processing time for the jobs on machine M2.

The following assumptions are made:


1) Every job is processed on all the three machines M1 , M2 and M3 in the same
order i.e. the job is first processed on M1 then on M2 and then on M3 .
2) The passing of jobs is not permitted.
3) Processing time for each job on the machine M1 , M2 and M3 are known.
In this procedure two dummy machines M1' and M2' are assumed in such a manner that the processing times of the jobs on these machines are calculated as
Processing time of a job on M1' = Processing time on (M1 + M2)
Processing time of a job on M2' = Processing time on (M2 + M3)
After this Johnson‘s algorithm is applied on M1' and M2' to find out the optimal
sequencing of jobs.

D) n Jobs m Machine Case :


Let there be 'n' jobs 1, 2, 3, ..., n and 'm' machines M1, M2, M3, ..., Mm. The order of processing is M1, M2, M3, ..., Mm and no passing is permitted. The processing times are shown below.

Job   M1   M2   M3   ...   Mm
1     a1   b1   c1   ...   m1
2     a2   b2   c2   ...   m2
3     a3   b3   c3   ...   m3
...
n     an   bn   cn   ...   mn

If either of the following conditions is satisfied, we can replace the 'm' machines by an equivalent set of two dummy machines:
1) Min ai ≥ Max of the processing times on M2, M3, ..., M(m−1), or
2) Min mi ≥ Max of the processing times on M2, M3, ..., M(m−1),
where the dummy machines are formed as
M1' = ai + bi + ci + ... + (m−1)i
M2' = bi + ci + ... + (m−1)i + mi

4.4.2 Priority Sequencing Rules:


The following priority sequencing rules are generally followed in production/service
system:
1) First Come First Served (FCFS):
As explained earlier, it is followed to avoid heartburn and avoidable controversies.
2) Earliest Due Date (EDD):
In this rule, top priority is allotted to the waiting job which has the earliest
due/delivery date. In this case the order of arrival of the job and processing time it
takes is ignored.
3) Least Slack Rule (LS):
It gives top priority to the waiting job whose slack is the least. Slack time is the difference between the length of time remaining until the job is due and the length of its operation time.
4) Average Number of Jobs in the System:
It is defined as the average number of jobs remaining in the system (waiting or being processed) from the beginning of the sequence through the time when the last job is finished.
5) Average Job Lateness:
Job lateness is defined as the difference between the actual completion time of the job and its due date. Average job lateness is the sum of the lateness of all jobs divided by the number of jobs in the system. This is also called Average Job Tardiness.
6) Average Earliness of Jobs:
If a job is completed before its due date, the lateness value is negative and the
magnitude is referred as earliness of job. Mean earliness of the job is the sum of
earliness of all jobs divided by the number of jobs in the system.
7) Number of Tardy Jobs:
It is the number of jobs which are completed after their due dates.

Solved Problems
A) Problems on CPM:
Example 1:
A project has the following activities:
Activity Duration (Days)
1-2 2
1-3 4
1-4 3
2-5 1
3-5 6
4-6 5
5-6 7
Required
a) Draw the project network.
b) Find the critical path and total project duration.
Solution:
a)

b)
Various Paths Duration of Paths
1-2-5–6 2 + 1 + 7 = 10
1-3-5–6 4 + 6 + 7 = 17
1-4–6 3+5=8
Hence the critical path is 1-3-5-6 with Project duration of 17 days.
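For larger networks, the path comparison can be done programmatically. The following Python sketch (an illustration, not part of the original text) computes earliest event times and traces the longest path for the network of Example 1; it assumes events are numbered so that i < j for every activity (i, j), as in the example.

```python
# Sketch: critical (longest) path for an activity-on-arrow network given as
# {(i, j): duration}. Events are assumed to be numbered so that i < j.
def critical_path(activities):
    nodes = sorted({n for arc in activities for n in arc})
    earliest = {nodes[0]: 0}
    pred = {}
    for j in nodes[1:]:
        # Earliest time of event j = max over incoming arcs (i, j).
        best = max(((i, earliest[i] + d) for (i, k), d in activities.items() if k == j),
                   key=lambda t: t[1])
        pred[j], earliest[j] = best
    # Trace back from the end event to recover the critical path.
    path, node = [nodes[-1]], nodes[-1]
    while node in pred:
        node = pred[node]
        path.append(node)
    return list(reversed(path)), earliest[nodes[-1]]

acts = {(1, 2): 2, (1, 3): 4, (1, 4): 3, (2, 5): 1, (3, 5): 6, (4, 6): 5, (5, 6): 7}
print(critical_path(acts))   # -> ([1, 3, 5, 6], 17), as found in Example 1
```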

Example 2:
Draw a network from the following activity and find critical path and total duration of
project.
Activity Immediate Predecessors Duration (Days)
A — 10
B — 24

C A 14
D C 21
E D 14
F E 25
G E 10
H D 20
I B, D 8
J F, G, H, I 13
K J 4
L J 12
M K, L 4
N M 4

Solution:
A network Diagram:

Critical Path:
Various Paths : Duration of Paths
1-2-3-4-5-8-9-10-11-12-13 : 10 + 14 + 21 + 0 + 8 + 13 + 4 + 0 + 4 + 4 = 78
1-2-3-4-5-8-9-11-12-13 : 10 + 14 + 21 + 0 + 8 + 13 + 12 + 4 + 4 = 86
1-2-3-4-6-7-8-9-10-11-12-13 : 10 + 14 + 21 + 14 + 25 + 0 + 13 + 4 + 0 + 4 + 4 = 109
1-2-3-4-6-7-8-9-11-12-13 : 10 + 14 + 21 + 14 + 25 + 0 + 13 + 12 + 4 + 4 = 117
Hence the critical path is 1-2-3-4-6-7-8-9-11-12-13 with a total project duration of 117 days.

Example 3:
Draw the network from the following activity:
Activity Immediate Predecessors
A -
B A
C A
D B
E A
F B
G C, D
H F
I G
J G
K F
L H, I
M E, J
Solution:

Example 4:
Draw a network from the following activities and find a critical path and duration of
project.
Activity Duration (Days)
1-2 30
1-3 7
2-4 10
3-4 (Dummy one) 0
3-10 30
4-5 14
4-8 21

5-6 10
6-7 7
7-8 (Dummy one) 0
7-9 7
8-9 12
8-10 (Dummy one) 0
9-11 15
10-11 15
Solution:

Various Paths : Duration of Paths
1-2-4-5-6-7-8-9-11 : 30 + 10 + 14 + 10 + 7 + 0 + 12 + 15 = 98
1-2-4-5-6-7-9-11 : 30 + 10 + 14 + 10 + 7 + 7 + 15 = 93
1-2-4-8-9-11 : 30 + 10 + 21 + 12 + 15 = 88
1-3-4-5-6-7-8-9-11 : 7 + 0 + 14 + 10 + 7 + 0 + 12 + 15 = 65
1-3-4-5-6-7-9-11 : 7 + 0 + 14 + 10 + 7 + 7 + 15 = 60
Hence the critical path is 1-2-4-5-6-7-8-9-11 with a total project duration of 98 days.

Example 5:
Draw the network from the following activity and find critical path and total project
duration.
Activity Immediate Predecessors Duration (Days)
A — 10
B — 9
C A 9
D A 8
E B 7
F B 11
G D, E 5

Solution:

Various Paths Duration of Paths


1-2-5 10 + 9 = 19
1-2-4-5 10 + 8 + 5 = 23
1-3-4-5 9 + 7 + 5 = 21
1-3-5 9 + 11 = 20
Hence Critical Path is 1-2-4-5 i.e. (A-D-G) with Project duration of 23 days.

Example 6:
Draw the network for the following activities and find critical path and total duration of
project.
Activity Duration (Days)
1-2 20
1-3 25
2-3 10
2-4 12
3-4 5
4-5 10
Solution:

Various Paths Duration of Paths


1-2-4–5 20 + 12 + 10 = 42
1-2-3-4–5 20 + 10 + 5 + 10 = 45
1-3-4–5 25 + 5 + 10 = 40

Hence the critical path is 1 - 2 - 3 - 4 - 5 with project duration of 45 days.

B) Problems on PERT and Probability of Project Completion:
Example 7:
The project of constructing a small bridge in Wilmington, Pennsylvania consists of 10
major activities. Information pertaining to the project is given below:
Activity Optimistic(a) Most likely(m) Pessimistic(b)
A 2 5 8
B 4 7 10
C 4 9 14
D 6 10 20
E 1 3 5
F 3 6 9
G 4 5 12
H 6 8 10
a) Develop a PERT network for this project.
b) Find the critical path.
c) Compute the probability of completing the project in 29 weeks.
Solution:
a)
Activity (i, j)   Expected Time E(t)   Variance (σ²)
A (1, 2) 5 1
B (2, 3) 7 1
C (2, 4) 9 2.78
D (3, 5) 11 5.44
E (4, 6) 3 0.44
F (4, 5) 6 1
G (5, 7) 6 1.78
H (6,7) 8 0.44

b)

Critical path is A, B, D, G.
c) Probability that the project is completed in T = 29 weeks:
E(T) = 5 + 7 + 11 + 6 = 29 weeks
Variance of the critical path = 1 + 1 + 5.44 + 1.78 = 9.22, so σcp ≈ 3.04
Z = (T − E(T))/σcp = (29 − 29)/3.04 = 0
P(T ≤ 29) = P(Z ≤ 0) = 0.50 (from normal distribution table).

Example 8:
Draw a PERT diagram for a construction project with the activity information given
below:
Duration (weeks)
Immediate Optimistic Most likely Pessimistic
Activity
Predecessor(s) (a) (m) (b)
A - 7 16 28
B A 4 19 25
C A 10 16 37
D B 7 13 37
E B, C 13 19 33
F B 19 22 33
G D, E 4 7 19
H F, G 13 19 49
I B, C 13 25 37
J I, H 7 13 19

a) Identify the critical path.
b) Determine the probability of completing the project in two years (104 weeks).
Solution:
a)
Activity (i, j)   Expected Time E(t)   Variance (σ²)
A (1, 2) 16.5 12.25
B (2, 3) 17.5 12.25
C (2, 4) 18.5 20.25
D (3, 5) 16 25
E (4, 5) 20.33 11.11
F (3, 6) 23.33 5.44
G (5, 6) 8.5 6.25
H (6, 7) 23 36
I (4, 7) 25 16
J (7, 8) 13 4

Critical path is A, C, E, G, H, J.
b) Probability that the project completion time T ≤ 104 weeks:
K = 104 weeks (scheduled time)
E(T) = 16.5 + 18.5 + 20.33 + 8.5 + 23 + 13 = 99.83 weeks
Variance of the critical path = 12.25 + 20.25 + 11.11 + 6.25 + 36 + 4 = 89.86, so σcp ≈ 9.48
Z = (K − E(T))/σcp = (104 − 99.83)/9.48 ≈ 0.44
P(T ≤ 104) = P(Z ≤ 0.44) ≈ 0.67 (from normal distribution tables).

C) Sequencing Problems:
Example 9:
Eight jobs A, B, C, D, E, F, G and H arrive at one time to be processed on a single machine. Find out the optimal job sequence, when their operation times are given in the table below.
Job(n) Operation time in
minutes
A 16
B 12
C 10
D 8
E 7
F 4
G 2
H 1
Solution:
For determining the optimal sequence, the jobs are selected in non-decreasing order of operation time, as follows.
Non-decreasing operation time sequence is H G F E D C B A
Total processing time
H=1
G = 1+2 = 3
F = 1+2+4 = 7
E = 1+2+4+7 =14
D = 1+2+4+7+8 = 22
C = 1+2+4+7+8+10 = 32
B = 1+2+4+7+8+10+12 = 44
A = 1+2+4+7+8+10+12+16 = 60
Average processing (flow) time = Total time/number of jobs = 183/8 ≈ 22.9 minutes.
In case the jobs are processed in the order of their arrival i.e.
A B C D E F G H
the total processing time would have been as follows:
A =16
B = 16+12 = 28
C = 16+12+10 = 38
D = 16+12+10+8 = 46

E = 16+12+10+8+7 = 53
F = 16+12+10+8+7+4 = 57
G = 16+12+10+8+7+4+2 = 59
H = 16+12+10+8+7+4+2+1 = 60
Average processing time = 357/8 = 44.6, which is much more than the previous time.
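The comparison above is easy to automate. The small Python sketch below (an illustration, not part of the original text) computes the mean completion (flow) time for any processing order; it reproduces the two averages of Example 9, about 22.9 minutes for the shortest-processing-time order and 44.6 minutes for the arrival order.

```python
# Sketch: mean flow (completion) time of n jobs on a single machine for a
# given processing order. Used here to compare SPT with first-come-first-served.
def mean_flow_time(times, order):
    elapsed, total = 0, 0
    for job in order:
        elapsed += times[job]      # completion time of this job
        total += elapsed
    return total / len(order)

times = {'A': 16, 'B': 12, 'C': 10, 'D': 8, 'E': 7, 'F': 4, 'G': 2, 'H': 1}
spt = sorted(times, key=times.get)            # shortest processing time first
fcfs = list('ABCDEFGH')                       # order of arrival
print(mean_flow_time(times, spt))             # 22.875 (about 22.9 minutes)
print(mean_flow_time(times, fcfs))            # 44.625 (about 44.6 minutes)
```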

Example 10:
A manufacturing company has 5 different jobs on two machines M1 and M2. The
processing time for each of the jobs on M1 and M2 is given below. Decide the optimal
sequence of processing of the jobs in order to minimize total time.
Job No. Processing Time
M1 M2
1 8 6
2 12 7
3 5 11
4 3 9
5 6 14
Solution:
The shortest processing time is 3 on M1 for job 4 so it will be sequenced as follows.
4
Next is job 3 with time 5 on M1, hence job 3 will be sequenced as
4 3
The next minimum time is 6, for job 1 on M2; this job will be sequenced last
4 3 1
After eliminating jobs 4, 3 and 1, the next with minimum time is job 5 on M1 so it will be
placed as
4 3 5 1
Now job 2 will be sequenced in the vacant space.
4 3 5 2 1
Example 11:
In a manufacturing process three operations have to be performed on machines M1, M2
and M3 in order M1 , M2 and M3 . Find out the optimum sequencing when the processing
time for four jobs on three machines is as follows:
Job M1 M2 M3
1 3 8 13
2 12 6 14
3 5 4 9
4 2 6 12
Solution:
Step 1: As the minimum processing time of the jobs on M3 (= 9) is greater than or equal to the maximum processing time of the jobs on M2 (= 8), Johnson's algorithm can be applied to this problem.

Step 2: Let us combine the processing time of M1, M2 and M3 to form two dummy
machines M1' and M2'. This is shown in the matrix below:
Job M1’ M2’
1 11(3+8) 21(8+13)
2 18(12+6) 20(6+14)
3 9(5+4) 13(4+9)
4 8(2+6) 18(6+12)
Step 3: Apply Johnson's algorithm. The minimum time of 8 occurs for job 4 on M1', hence it is sequenced first.
4
The next minimum time (9) is for job 3 on M1', so it is sequenced next to job 4. Next comes job 1 (11 on M1'), and so on. So the optimal sequencing is
4 3 1 2

Example 12:
Determine the optimal sequence of performing 5 jobs on 4 machines. The machines are
used in the order M1 , M2 , M3 and M4 and the processing time is given below:
Job M1 M2 M3 M4
1 8 3 4 7
2 9 2 6 5
3 10 6 6 8
4 12 4 1 9
5 7 5 2 3
Solution:
Step 1: Let us find out whether the stipulated condition is satisfied.
Condition 1: Min ai ≥ Max bi and Min ai ≥ Max ci (i.e., the minimum time on M1 is at least the maximum time on M2 and on M3).
Min ai = 7
Max bi = 6
Max ci = 6
Hence the condition is satisfied.
Step 2: Let us form the matrix of new processing time by creating two fictitious machine
M1' and M2‘.

Job M1‘ = ai+bi+ci M2‘ = bi+ci+di


1 15(8+3+4) 14(3+4+7)
2 17(9+2+6) 13(2+6+5)
3 22(10+6+6) 20(6+6+8)
4 17(12+4+1) 14(4+1+9)
5 14(7+5+2) 10(5+2+3)

Step 3: Now solve the resulting 5 jobs, 2 machines problem.
The minimum processing time (10) is for job 5 on machine M2', so it will be sequenced last.
5

The next minimum time is 13, for job 2 on machine M2', so it will be sequenced as


shown.
2 5

The next minimum time (14) is for jobs 1 and 4, both on machine M2' (tie broken arbitrarily), so they are sequenced as shown.
1 4 2 5

The next minimum time is 20, for job 3 on machine M2'; it takes the only remaining position.

3 1 4 2 5



Review Questions

Q.1. What is CPM and PERT? Explain the concept.


Q.2. Explain the Dummy Activities and events with example.
Q.3. What is mean by floats? Explain the types.
Q.4. What is sequencing problem? Explain the types.
Q.5. Explain the rule of priority sequencing.
Q.6. Write a note on: EST, LST, EFT, LFT.
Q.7. Problems for Practice:
1) Draw a network from the following activities and find critical path and duration of
project.
Activity Duration(Days) Activity Duration(Days)
1-2 5 5-9 3
1-3 8 6-10 5
2-4 6 7-10 4
2-5 4 8-11 9
2-6 4 9-12 2
3-7 5 10-12 4
3-8 3 11-13 1
4-9 1 12-13 1

2) Draw a network from the following activities and find a critical path and total duration
of project.
Activity Duration(Days) Activity Duration(Days)
1-2 4 2-6 18
1-3 7 3-5 10
1-4 10 3-6 16
2-3 3 4-5 9
2-4 8 5-6 6
2-5 11 5-7 11
6-7 8

3) Draw a network from the following activity and find critical path and duration of
project.
Activity Duration(Days) Activity Duration(Days)
1-2 4 4-6(Dummy) 0
1-3 3 5-7(Dummy) 0
1-5 2 5-9 6
2-3(Dummy) 0 6-7 2

2-4 1 6-9 4
2-5 3 7-8 1
2-6 4 8-9 5
3-7 5

4) Draw a network from the following activity and find critical path and total duration of
project.
Activity Immediate Predecessors Duration (Days)
A — 10
B — 24
C A 14
D C 21
E D 14
F E 25
G E 10
H D 20
I B, D 8
J F, G, H, I 13
K J 4
L J 12
M K, L 4
N M 4

5) Six jobs are to be sequenced which require processing on two machines M1 and M2.
The processing time in minutes for each of the six jobs on machines M1 and M2 is
given below. All the jobs have to be processed in sequence M1, M2. Determine the
optimum sequence for processing the jobs so that the total time of all the jobs is
minimum. Use Johnson‗s Algorithm.
Jobs 1 2 3 4 5 6
Processing Machine M1 30 30 60 20 35 45
Time Machine M2 45 15 40 25 30 70

6) Solve the following sequencing problem when passing is not allowed.
Jobs Machine Processing time in hours
A B C D
I 15 5 4 15
II 12 2 10 12
III 16 3 5 16
IV 17 3 4 17

7) Draw a network from the following activity and find a critical path and total project duration.

Activity 1-2 1-3 2-4 2-5 3-5 4-5 4-6 5-6


Duration(weeks) 9 10.3 8 2 10 1 7 3

8) Draw the network from the following activities:

Activity A B C D E F G H I J K L M
Immediate - A A A B C C D F H I E, K, G L, J
Predecessors



UNIT 5
Probability
5.1 Probability
5.2 Theorem of Probability
5.3 Probability Distribution
5.4 Binomial Probability Distribution
5.5 Normal Probability Distribution
5.6 Statistic Estimation

Introduction:
In day-to-day life, we all make use of the word 'probability'. But generally people have no definite idea about the meaning of probability. For example, we often hear or use phrases like, "Probably it may rain today"; "It is likely that the particular teacher may not come for taking his class today"; "There is a chance that the particular student may stand first in the university examination"; "It is possible that the particular company may get the contract for which it bid last week"; "Most probably I shall be returning within a week"; "It is possible that he may not be able to join his duty". In other words, there is an element of uncertainty or chance involved in all these cases. A numerical measure of uncertainty is provided by the theory of probability; the aim of probability theory is to provide such a measure. The theory of probability owes its origin to the study of games of chance like games of cards, tossing coins, dice, etc. But in modern times, it has great importance in decision-making problems.

5.1 Probability:
Probability is the chance that something will happen: how likely it is that some event will happen. Sometimes a probability can be measured with a number, such as a "10% chance of rain", or described with words such as impossible, unlikely, possible, even chance, likely and certain. Example: "It is unlikely to rain tomorrow". The probability of an event is the ratio of the number of cases favourable to it, to the number of all possible cases, when nothing leads us to expect that any one of these cases should occur more than any other, which renders them, for us, equally possible.

5.1.1 Meaning and Definitions:


A) Meaning:
Probability is a measure or estimation of how likely it is that something will happen
or that a statement is true. Probabilities are given a value between 0 (0% chance or
will not happen) and 1 (100% chance or will happen). The higher the degree of
probability, the more likely the event is to happen or in a longer series of samples,
the greater the number of times such event is expected to happen.

B) Definitions:
1) Probability Definition:
The probability of a given event is an expression of likelihood or chance of
occurrence of an event. A probability is a number which ranges from 0 (zero) to
1 (one) - zero for an event which cannot occur and 1 for an event certain to
occur. How the number is assigned would depend on the interpretation of the
term „probability‟.
2) George G. Roussas:
“Let S be a sample space, associated with a certain random experiment and
consisting of finitely many sample points n, say, each of which is equally likely to
occur whenever the random experiment is carried out. Then the probability of
any event A, consisting of m sample points , is given by P (A) = m/n”.
3) Bruno de Finetti:
“Probability is the ratio between the favourable cases and the number of equally
probable cases”.

5.1.2 Properties of Probability:


Probability has the following basic properties:
1) The probability of an event A lies between 0 and 1, i.e. 0 ≤ P(A) ≤ 1.
2) The sum of the probabilities assigned to all possible outcomes is unity, i.e. P(S) = 1.
3) If two events A and B are mutually exclusive, the probability of occurrence of either A or B is the sum of the individual probabilities of A and B, i.e.
P(A ∪ B) = P(A) + P(B).
4) Any two equivalent events will be assigned the same probability.


5.1.3 Some Basic Concepts:


Before we give definition of the word probability, it is necessary to define the following
basic concepts and terms widely used in its study:
1) An Experiment:
When we conduct a trial to obtain some statistical information, it is called an
experiment.
Examples:
a) Tossing of a fair coin is an experiment and it has two possible outcomes: Head
(H) or Tail (T).
b) Rolling a fair die is an experiment and it has six possible outcomes: appearance of 1 or 2 or 3 or 4 or 5 or 6 on the uppermost face of the die.
c) Drawing a card from a well shuffled pack of playing cards is an experiment and it
has 52 possible outcomes.
2) Events:
The possible outcomes of a trial/experiment are called events. Events are generally
denoted by capital letters A, B, C etc.
Examples:
a) If a fair coin is tossed, the outcomes - head or tail are called events.
b) If a fair die is rolled, the outcomes 1 or 2 or 3 or 4 or 5 or 6 appearing up are called events.
3) Exhaustive Events:
The total numbers of possible outcomes of a trial/experiment are called exhaustive
events. In other words, if all the possible outcomes of an experiment are taken into
consideration, then such events are called exhaustive events.
Examples:
a) In case of tossing a die, the set of six possible outcomes i.e. 1, 2, 3, 4, 5 and 6
are exhaustive events.
b) In case of tossing a coin, the set of two outcomes i.e. H and T are exhaustive
events.
4) Equally-Likely Events:
The events are said to be equally-likely if the chance of happening of each event is
equal or same. In other words, events are said to be equally likely when one does
not occur more often than the others.
Example:
a) If a fair coin is tossed, the events H and T are equally-likely events.
b) If a die is rolled, any face is as likely to come up as any other face. Hence, the
six outcomes 1 or 2 or 3 or 4 or 5 or 6 appearing up are equally likely events.
5) Mutually Exclusive Events:
Two events are said to be mutually exclusive when they cannot happen
simultaneously in a single trial. In other words, two events are said to be mutually

exclusive when the happening of one excludes the happening of the other in a
single trial.
Example:
a) In tossing a coin, the events Head and Tail are mutually exclusive because both
cannot happen simultaneously in a single trial. Either Head occurs or tail occurs.
Both cannot occur simultaneously. The happening of head excludes the
possibility of happening of tail.
b) In tossing a die, the events 1, 2, 3, 4, 5 and 6 are mutually exclusive because all
the six events cannot happen simultaneously in a single trial. If number 1 turns
up, all the other five (i.e. 2, 3, 4, 5, or 6) cannot turn up.
6) Complementary Events:
Let there be two events A and B. A is called the complementary event of B and B is
called the complementary event of A if A and B are mutually exclusive and
exhaustive.
Examples:
a) In tossing a coin, occurrence of head (H) and tail (T) are complementary events.
b) In tossing a die, occurrence of an even number (2, 4, 6) and odd number (1, 3,
5) are complementary events.
7) Simple and Compound Events:
In case of simple events, we consider the probability of happening or not happening
of single events.
Example:
a) If a die is rolled once and A be the event that face number 5 is turned up, then A
is called a simple event. In case of compound events, we consider the joint
occurrences of two or more events.
b) If two coins are tossed simultaneously and we shall be finding the probability of
getting two heads, then we are dealing with compound events.
8) Independent Events:
Two events are said to be independent if the occurrence of one does not affect and
is not affected by the occurrence of the other.
Example:
a) In tossing a die twice, the event of getting 4 in the 2nd throw is independent of
getting 5 in the first throw.
b) In tossing a coin twice, the event of getting a head in the 2nd throw is
independent of getting head in the 1st throw.
9) Dependent Events:
Two events are said to be dependent when the occurrence of one does affect the
probability of the occurrence of the other events.
Example:
If a card is drawn from a pack of 52 playing cards and is not replaced; this will affect
the probability of the second card being drawn.

5.2 Theorems of Probability:
The following are the important and basic theorems of probability:
A) Addition theorem of probability
B) Multiplication theorem of probability
C) Conditional probability

5.2.1 Addition Theorem of Probability:


This rule relates to the addition operation between two types of events. The addition theorem is based on the following two cases:
1) Mutually Exclusive Events:
Mutually exclusive events have no sample point common to them; therefore, if A and B are two mutually exclusive events, then A ∩ B = ∅, i.e., the intersection of two mutually exclusive events is a null set, and in this case P(A ∩ B) = 0.
In case of mutually exclusive events,
P(A ∪ B) = P(A) + P(B)
If A, B, C are three mutually exclusive events then,
P(A ∪ B ∪ C) = P(A) + P(B) + P(C)
In case of a finite number, say n, of mutually exclusive events,
P(A1 ∪ A2 ∪ A3 ∪ ... ∪ An) = P(A1) + P(A2) + ... + P(An)
2) Not-Mutually Exclusive Events:
If A and B are any two events, then the probability that at least one of them occurs is denoted by P(A ∪ B) and is given by,
P(A ∪ B) = P(A) + P(B) − P(A ∩ B)
Where,
P(A) = Probability of the occurrence of event A.
P(B) = Probability of the occurrence of event B.
P(A ∩ B) = Probability of simultaneous occurrence of events A and B.
If there are three events A, B and C, the probability of the occurrence of at least one of them is given by,
P(A ∪ B ∪ C) = P(A) + P(B) + P(C) − P(A ∩ B) − P(B ∩ C) − P(A ∩ C) + P(A ∩ B ∩ C)

The important points to be considered are:


a) If a number of events A1, A2, ..., An are mutually exclusive and exhaustive, then the sum of the individual probabilities of their happenings is equal to 1, i.e.,
P(A1) + P(A2) + ... + P(An) = 1

b) If the events are finite and mutually exclusive, then the probability of the occurrence of at least one of them is equal to the sum of their individual probabilities.
c) The event A and its complement Ā can be considered as mutually exclusive and exhaustive, so that
P(A) + P(Ā) = 1, i.e., P(Ā) = 1 − P(A)
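The addition rule for events that are not mutually exclusive can be checked by brute-force enumeration. The short Python sketch below (an illustration, not part of the original text) verifies P(A ∪ B) = P(A) + P(B) − P(A ∩ B) for a single roll of a fair die, with A = "even number" and B = "number greater than 3"; both events are assumptions chosen for the illustration.

```python
# Sketch: verify the addition theorem on a single roll of a fair die.
from fractions import Fraction

sample_space = {1, 2, 3, 4, 5, 6}
A = {2, 4, 6}          # even number
B = {4, 5, 6}          # number greater than 3

def prob(event):
    return Fraction(len(event & sample_space), len(sample_space))

lhs = prob(A | B)                          # P(A U B)
rhs = prob(A) + prob(B) - prob(A & B)      # P(A) + P(B) - P(A n B)
print(lhs, rhs, lhs == rhs)                # 2/3 2/3 True
```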

Fig.5.1:

5.2.2 Multiplication Theorem of Probability:


1) Multiplication Theorem on Probabilities for Independent Events:
If two events A and B are independent, the probability that both of them occur is equal to the product of their individual probabilities, i.e. P(A ∩ B) = P(A) · P(B).
Proof:
Out of n1 possible cases, let m1 cases be favourable for the occurrence of the event A.
∴ P(A) = m1/n1
Out of n2 possible cases, let m2 cases be favourable for the occurrence of the event B.
∴ P(B) = m2/n2
Each of the n1 possible cases can be associated with each of the n2 possible cases. Therefore, the total number of possible cases for the occurrence of the events 'A' and 'B' is n1 × n2. Similarly, each of the m1 favourable cases can be associated with each of the m2 favourable cases, so the total number of favourable cases for the events 'A' and 'B' is m1 × m2.
∴ P(A ∩ B) = (m1 m2)/(n1 n2) = (m1/n1) · (m2/n2) = P(A) · P(B)
Note:
a) The theorem can be extended to three or more independent events. If A, B, C, ... are independent events, then P(A ∩ B ∩ C ∩ ...) = P(A) · P(B) · P(C) ...
b) If A and B are independent, then the complements of A and B are also independent, i.e.,
P(Ā ∩ B̄) = P(Ā) · P(B̄)


5.2.3 Conditional Probability:


The probability of an event B occurring when it is known that some event A has
occurred is called a conditional probability and is denoted by P (B|A). The symbol "P
(B|A) is usually read "the probability that B occurs given that A occurs" or simply the
probability of B, given A".
For example, consider the event B of getting a perfect square when a die is tossed. The
die is constructed so that even numbers are twice as likely to occur as the odd
numbers. Based on the sample space S = [1, 2, 3, 4, 5, 6], with probabilities of 1/9 and
2/9 assigned, respectively, to odd and even numbers, the probability of B occurring is
1/3.
Now suppose that it is known that the toss of the die resulted in a number greater than
3. We are now dealing with a reduced sample space A = {4,5,6}, which is a subset of S.
To find the probability that B occurs, relative to the space A, we must first assign new
probabilities to the elements of A proportional to their original probabilities such that
their sum is 1. Assigning a probability of w to the odd numbers in A and probability of
2w to the two even numbers, we have 5w =1 or w =1/5. Relative to the space A, we find
that B contains the single element 4. Denoting this event by the symbol B|A, we write B|A = {4}, and hence
P(B|A) = 2/5
This example illustrates that events may have different probabilities when considered
relative to different sample spaces.
2 2/9 P(A B)
P B| A ,
5 5/9 P(A)
Where, P(A B) and P (A) are found from the original sample space S. In other words, a
conditional probability relative to a subspace A of S may be calculated directly from the
probabilities assigned to the elements of the original sample space S.
Definition:
The conditional probability of B, given A, denoted by P(B|A), is defined by:
P(B|A) = P(A ∩ B)/P(A), provided P(A) > 0.
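The die example above can be checked numerically. The following Python sketch (an illustration, not from the text) assigns probability 1/9 to each odd face and 2/9 to each even face and recomputes P(B|A) for B = "perfect square" and A = "number greater than 3".

```python
# Sketch: conditional probability P(B|A) for the loaded die described above.
from fractions import Fraction

weights = {1: Fraction(1, 9), 2: Fraction(2, 9), 3: Fraction(1, 9),
           4: Fraction(2, 9), 5: Fraction(1, 9), 6: Fraction(2, 9)}
A = {4, 5, 6}                  # number greater than 3
B = {1, 4}                     # perfect squares on a die

def prob(event):
    return sum(weights[x] for x in event)

p_b_given_a = prob(A & B) / prob(A)
print(p_b_given_a)             # 2/5, as obtained above
```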

5.3 Probability Distributions:


A probability distribution assigns a probability to each measurable subset of the
possible outcomes of a random experiment, survey, or procedure of statistical
inference. Examples are found in experiments whose sample space is non-numerical,
where the distribution would be a categorical distribution; experiments whose sample
space is encoded by discrete random variables, where the distribution can be specified

by a probability mass function; and experiments with sample spaces encoded by
continuous random variables, where the distribution can be specified by a probability
density function.

5.3.1 Meaning and Definitions:


A) Meaning:
It is a statistical function that describes all the possible values and likelihoods that a
random variable can take within a given range. This range will be between the
minimum and maximum statistically possible values, but where the possible value is
likely to be plotted on the probability distribution depends on a number of factors,
including the distributions mean, standard deviation, skewness and kurtosis.
B) Definitions:
1) Naval Bajpai:
“All possible values of a random variables along with their corresponding
probabilities, so that sum of all these probabilities is unity; is called a probability
distribution of the random variable”.
2) Richard L. Scheaffer:
“Probability distribution of a random variable X is the description of the set of
possible values which X can take, along with the probability associated with
each of the possible values of X”.

5.3.2 Important Probability Functions:


1) Probability Mass Functions (P. M. F):
In probability theory and statistics, a probability mass function (pmf) is a function
that gives the probability that a discrete random variable is exactly equal to some
value. The probability mass function is often the primary means of defining a
discrete probability distribution, and such functions exist for either scalar or
multivariate random variables whose domain is discrete. A probability mass function
differs from a probability density function (p.d.f.) in that the latter is associated with
continuous rather than discrete random variables; the values of the latter are not
probabilities as such: a p.d.f. must be integrated over an interval to yield a
probability.
a) Formal Definition:
Suppose that X: S → A (A ⊆ R) is a discrete random variable defined on a sample space S. Then the probability mass function fX: A → [0, 1] for X is defined as
fX(x) = P(X = x) = Pr({s ∈ S : X(s) = x}).
Thinking of probability as mass helps to avoid mistakes, since the physical mass is conserved, as is the total probability for all hypothetical outcomes x:
Σx fX(x) = 1
When there is a natural order among the hypotheses x, it may be convenient to assign numerical values to them (or n-tuples in case of a discrete multivariate random variable) and to consider also values not in the image of X. That is, fX may be defined for all real numbers and fX(x) = 0 for all x ∉ X(S). Since the image of X is countable, the probability mass function fX(x) is zero for all but a countable number of values of x. The discontinuity of probability mass functions is related to the fact that the cumulative distribution function of a discrete random variable is also discontinuous. Where it is differentiable, the derivative is zero, just as the probability mass function is zero at all such points.

Fig 5.2:
2) Probability Density Function:
In probability theory, a probability density function (p.d.f), or density of a continuous
random variable, is a function that describes the relative likelihood for this random
variable to take on a given value. The probability of the random variable falling
within a particular range of values is given by the integral of this variable‟s density
over that range that is, it is given by the area under the density function but above
the horizontal axis and between the lowest and greatest values of the range. The
probability density function is nonnegative everywhere, and its integral over the
entire space is equal to one.
a) Formal Definition:
A random variable X with values in a measurable space (X, 𝒜) (usually Rⁿ with the Borel sets as measurable subsets) has as probability distribution the measure X∗P on (X, 𝒜). The density of X with respect to a reference measure μ on (X, 𝒜) is the Radon-Nikodym derivative:
f = dX∗P / dμ
That is, f is any measurable function with the property that:
Pr[X ∈ A] = ∫_{X⁻¹(A)} dP = ∫_A f dμ for any measurable set A ∈ 𝒜.

5.3.3 Types of Probability Distribution


The various types of probability distributions are as shown in figure below:

1) Discrete Probability Distributions:
A probability distribution is called discrete if its cumulative distribution function only
increases in jumps. More precisely, a probability distribution is discrete if there is a
finite or countable set whose probability is 1.
Discrete distributions are characterized by a probability mass function p such that Pr[X = x] = p(x).

Fig.5.3:
If a random variable is a discrete variable, its probability distribution is called a discrete probability distribution.
Types of Discrete Probability Distribution:
a) Binomial Distribution
b) Poisson Distribution

2) Continuous Probability Distributions:


By one convention, a probability distribution is called continuous if its cumulative distribution function is continuous, which means that it belongs to a random variable X for which Pr[X = x] = 0 for all x in R, where
F(x) = Pr[X ≤ x] = ∫_{−∞}^{x} f(t) dt
Discrete distributions and some continuous distributions (like the devil's staircase) do not admit such a density.
Definition: The cumulative distribution F(x) of a continuous random variable X with density function f(x) is
F(x) = P(X ≤ x) = ∫_{−∞}^{x} f(t) dt, for −∞ < x < ∞.
As an immediate consequence one can write the two results,
P(a < X ≤ b) = F(b) − F(a) and
f(x) = dF(x)/dx, if the derivative exists.
Types of Continuous Probability Distribution:
The following continuous probability distributions are illustrated:

a) Uniform Probability Distribution
b) Normal Probability Distribution
c) Exponential Probability Distribution
d) Students Distribution
e) Chi-Square Distribution
f) F Distribution

5.4 Binomial Distribution:


Binomial distribution is a discrete probability distribution. This distribution was
discovered by a Swiss Mathematician James Bernoulli. It is used in such situations
where an experiment results in two possibilities success and failure. Binomial
distribution is a discrete probability distribution which expresses the probability of one
set of two alternatives success (p) and failure (q).

5.4.1 Meaning and Definitions:


A) Meaning:
In probability theory and statistics, the binomial distribution is the discrete probability
distribution of the number of successes in a sequence of n independent yes/no
experiments, each of which yields success with probability p. Such a success/failure
experiment is also called a Bernoulli experiment or Bernoulli trial; when n=1, the
binomial distribution is a Bernoulli distribution. The binomial distribution is the basis
for the popular binomial test of statistical significance.
B) Definition:
1) TR Jain and SC Agarwal:
"Binomial distribution is defined and given by the following probability function:
P(x) = nCx · q^(n−x) · p^x
where, p = probability of success, q = probability of failure = 1 − p, n = number of trials, P(x) = probability of x successes in n trials".
2) Micheal A. Bean:
"A random variable X is said to have a binomial distribution with parameters n and p if its probability mass function is given by
P(X = x) = nCx · p^x · (1 − p)^(n−x), x = 0, 1, ..., n,
where n is a positive integer and p is a real number in the interval [0, 1]".

5.4.2 Assumptions To Apply Binomial Distribution:


Binomial distribution can be used only under the following conditions:
1) Finite Number of Trials:
Under binomial distribution, an experiment is performed under identical conditions
for a finite and fixed number of trials i.e. number of trials is finite.

2) Mutually Exclusive Outcomes:
Each trial must result in two mutually exclusive outcomes – success or failure. For
example, if a coin is tossed, then either the head (H) may turn up or the tail (T) may
turn up.
3) The Probability of Success In Each Trial is Constant:
In each trial, the probability of success, denoted by p remains constant. In other
words, the probability of success in different trials does not change. For example, in
tossing a coin, the probability of getting a head in each toss remains the same i.e. p
= P(H) = ½.
4) Trials Are Independent:
In binomial distribution, statistical independent among trials is assumed i.e. the
outcome of any trial does not affect the outcomes of the subsequent trials.

5.4.3 Characteristics of Binomial Distribution:


The following are the important properties or characteristics of binomial distribution:
1) Theoretical Frequency Distribution:
The binomial distribution is a theoretical frequency distribution which is based on
Binomial Theorem of algebra. With the help of this distribution, we can obtain the
theoretical frequencies by multiplying the probability of success by the total number
(N).
2) Discrete Probability Distribution:
The binomial distribution is a discrete probability distribution in which the number of
successes 0, 1, 2, 3,..…,n are given in whole numbers and not fractions.
3) Shape of Binomial Distribution:
The shape of binomial distribution depends on the values of p, q and n which is
shown below:
i) If p = ½, the binomial distribution is perfectly symmetrical for any value of n.
ii) If p ≠ ½, the binomial distribution will be asymmetrical, i.e., the binomial distribution is skewed. If p < ½, the distribution will be positively skewed and if p > ½, the distribution will be negatively skewed. If the value of n increases for p ≠ q, the asymmetry or skewness in the distribution diminishes. See Fig. 5.4 given below:

Fig 5.4: Shape of Binomial Distribution

4) Main Parameters:
The binomial distribution has two parameters n and p. The entire distribution can
be known from these two parameters.
5) Uses:
It has been found useful in those fields where the outcome is classified into
success and failure.

Example:
A fair coin is tossed thrice. Find the probability of getting:
1) exactly 2 Heads
2) at least 2 Heads
3) at the most 2 Heads
Solution:
Let p = probability of getting a head when a coin is tossed = 1/2,
q = probability of getting a tail = 1/2,
and n = 3, with P(x) = nCx · q^(n−x) · p^x
1) P(2H) = 3C2 · (1/2)^1 · (1/2)^2 = 3 × 1/8 = 3/8
2) P(at least 2 Heads) = P(2H) + P(3H)
   = 3C2 (1/2)^1 (1/2)^2 + 3C3 (1/2)^0 (1/2)^3
   = 3/8 + 1/8 = 4/8 = 1/2
3) P(at most 2 Heads) = P(0H) + P(1H) + P(2H)
   = 1 − P(3H)
   = 1 − 3C3 (1/2)^0 (1/2)^3
   = 1 − 1/8 = 7/8
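The three probabilities in the example can be confirmed with a few lines of Python (an illustrative sketch, not part of the original text), using the binomial mass function P(x) = nCx p^x q^(n−x).

```python
# Sketch: binomial probabilities for 3 tosses of a fair coin.
from math import comb
from fractions import Fraction

def binom_pmf(x, n, p):
    return comb(n, x) * p**x * (1 - p)**(n - x)

p = Fraction(1, 2)
n = 3
print(binom_pmf(2, n, p))                          # 3/8 -> exactly 2 heads
print(binom_pmf(2, n, p) + binom_pmf(3, n, p))     # 1/2 -> at least 2 heads
print(1 - binom_pmf(3, n, p))                      # 7/8 -> at most 2 heads
```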

5.5 Normal Distribution:


A symmetrical distribution that is mounded up about the mean and is bell shaped and
becomes sparse at the extremes. The two tails never touch the horizontal axis. This is
the normal distribution, which is an important continuous probability distribution. This
distribution is also known as the Gaussian distribution after the name of the eighteenth
century mathematician-astronomer Karl Gauss, whose contribution in the development

of the normal distribution was very considerable. As a vast number of phenomena have
approximately normal distribution, it has wide application in Statistics. In business, there
arise a number of situations where management has to make inferences by drawing
samples.

5.5.1 Meaning and Definitions:


A) Meaning:
In probability theory, the normal distribution is a very commonly occurring
continuous probability distribution function that tells the probability of a number in
some context falling between any two real numbers. The normal distribution closely
approximates the probability distributions of a wide range of random variable. In
statistics, normal distribution arises as a limiting case of several discrete and
continuous probability distributions.
B) Definitions:
1) Sundarapandian:
"A continuous random variable X with parameters μ and σ, where −∞ < μ < ∞ and σ > 0, is said to have a normal distribution or Gaussian distribution if its probability density function is given by
f(x) = [1/(σ√(2π))] e^(−(x−μ)²/2σ²), where −∞ < x < ∞".
If X is a normal r.v. with parameters μ and σ², it is denoted by X ~ N(μ, σ²), and the standard normal variable is defined by Z = (X − μ)/σ.
2) TR Jain and SC Agarwal:
"Normal distribution is defined and given by the following probability function:
f(x) = [1/(σ√(2π))] e^(−(x−μ)²/2σ²) ",
where μ = mean, σ = standard deviation, e = 2.7183 and π = 3.1416.
5.5.2 Characteristics of the Normal Distribution:


The normal distribution has certain characteristics, which make it applicable to such
situations.
1) Perfectly Symmetrical And Bell Shaped:
The normal curve is perfectly symmetrical and bell shaped about mean. This means
that if we fold the curve along its vertical axis at the centre, the two halves would
coincide.
2) Unimodal Distribution:
It has only one mode i.e., it is unimodal distribution.
3) Equality of Mean, Median and Mode:
In a normal distribution, mean, median and mode are equal, i.e. X̄ = M = Z

4) Asymptotic to the Base Line:
The normal curve is asymptotic to the base line on either side, i.e., it approaches the base line more and more closely but never actually touches it. This is clear as follows:

Fig 5.5: Asymptotic to the base line


5) Range:
The normal curve extends infinitely on either side, i.e. from −∞ to +∞.
6) Total Area:
The total area under the normal curve is 1.
7) Ordinate:
The ordinate of the normal curve at the mean is maximum.
8) Mean Ordinate:
The mean ordinate divides the whole area under the curve into two equal parts i.e.
50% on the right side and 50% on the left side.
9) Equidistance of Quartiles:
In a normal distribution, the quartiles are equidistant from median i.e.
Q3 − M = M − Q1
10) Quartile Deviation:
In a normal distribution, the quartile deviation is 2/3 times the S.D., i.e. Q.D. = (2/3) S.D.
11) Mean Deviation:
In a normal distribution, the mean deviation is 4/5 times the S.D., i.e. M.D. = (4/5) S.D.

5.5.3 Assumptions of Normal Distribution:


The normal distribution is base on the following set of assumptions:
1) Independent Causes:
The forces affecting the event must be independent of one another i.e. they are
independent of each other.
2) Condition of Symmetry:
The operation of causal forces must be such that the deviation from mean on either
side is equal in number and size.
3) Multiple Causation:
The causal forces must be numerous and of approximately equal weight or
importance.


5.5.4 Standard Normal Distribution:


The standard normal distribution is a normal distribution, with a mean of 0 and a
standard deviation of 1. Normal distributions can be transformed to standard normal
distributions by the formula:
z = (x − μ)/σ
where x is a score from the original normal distribution, μ is the mean of the original normal distribution, and σ is the standard deviation of the original normal distribution. The standard normal distribution is sometimes called the z-distribution.
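In code, the transformation and the corresponding cumulative probability can be obtained with the error function, as in the short Python sketch below (an illustration; the x, μ and σ values are hypothetical).

```python
# Sketch: standardize a score and find P(X <= x) for a normal distribution.
import math

def standard_normal_cdf(z):
    # Phi(z) expressed through the error function.
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

mu, sigma = 50, 10          # hypothetical mean and standard deviation
x = 65
z = (x - mu) / sigma
print(z)                              # 1.5
print(standard_normal_cdf(z))         # about 0.9332
```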

5.6 Statistics Estimation:


Estimation theory as the name itself suggests refers to the technique and methods by
which population parameters are estimated from sample studies. Estimation of
parameter is absolutely essential when-ever sample study has been conducted. People
are interested, for a variety of reasons, in parameter values.
For example, a manufacturer would like to have some estimate about the future demand for his product, a businessman would like to estimate his future sales and profits, a production engineer would very much wish to know the percentage of defective articles which his machine is likely to produce over a period of time, the manufacturer of motor tyres would like to know the approximate life of his tyres, a bulb manufacturer would be interested to know the length of life of the bulbs, and so on. Such estimates can be obtained either by the Census Method or by the Sample Method. However, as pointed out earlier, generally sample studies are conducted to save time, money and energy.

5.6.1 Interval Estimation:


In Interval estimation a probable range is determined within which the real value of the
parameter is expected to be (figure 5.9). While point estimate is a single value of
statistics used as an estimate of the population parameter. Interval estimate means the
population parameter given by two numbers between which the parameter is
considered to lie. Generally, a point estimate does not confidently tie down our information. Therefore, two values are computed in such a way that the interval lying between them contains the parameter. An interval so obtained is called an interval estimate or confidence interval.
For example, studying a sample, one estimates that the average salary of a factory
worker is Rs. 600; it is a point estimate. At the same time one may estimate through a
sample study that an average salary to factory workers can lie between Rs. 600 and Rs.
700; this is an interval estimate.

Interval estimates improve upon point estimates by providing a range of values for θ. Sampling from a given population, one utilizes the observed sample values x1, x2, ..., xn to arrive at two points that together define an interval or range of values for the parameter θ. The two points represent a lower limit θ̂L and an upper limit θ̂U for the interval. In making an interval estimate, one claims with a known degree of confidence that the interval contains the unknown value of the population parameter. For this reason, interval estimates are often termed confidence intervals.
In statistics, interval estimation is the use of sample data to calculate an interval of
possible (or probable) values of an unknown population parameter, in contrast to point
estimation, which is a single number. Jerzy Neyman (1937) identified interval estimation
("estimation by interval") as distinct from point estimation ("estimation by unique
estimate"). In doing so, he recognized that then-recent work quoting results in the form
of an estimate plus-or-minus a standard deviation indicated that interval estimation was
actually the problem statisticians really had in mind. The process of estimating a
parameter of a given population by specifying an interval of values and the probability
that the true value of the parameter falls within this interval.
A) Meaning:
The purpose of an interval estimate is to provide information about how close the
point estimate, provided by the sample, is to the value of the population parameter.
A point estimator, however, good it may be, cannot be expected to coincide with the
true value of the parameter and in some cases may differ widely from it. In the
theory of interval estimation, we find an interval or two numbers within which the
value of unknown population parameter is expected to lie with a specified
probability. The value of a sample statistic that is used to estimate a population
parameter is called a point estimate.

B) Definitions:
a) D. R. Helsel, R. M. Hirsch:
“Interval estimates are the intervals which have a stated probability of containing
the true population value.”
b) Prem S. Mann:
“In interval estimation, an interval is constructed around the point estimate, and
it is stated that this interval is likely to contain the corresponding population
parameter”.
c) J. Gosling:
“An interval estimate is two numbers that define a range of values that will
enclose the unknown population parameter at some specified probability level”.
d) G. C. Beri:
“Interval estimate is a range of values used to estimate an unknown population
parameter”.

C) Terms Used In Interval Estimation:
1) Point Estimate:
“It is the value of sample statistics that is used to estimate most likely value of
the unknown population parameter”.
2) Confidence Interval Estimate:
“It is the range of values that is likely to have population parameter value with a
specified level of confidence”.
3) The Estimation of Mean:
To illustrate how the possible size of errors can be appraised in point estimation, suppose that the mean x̄ of a random sample is to be used to estimate the mean μ of a normal population with the known variance σ². The sampling distribution of x̄ for random samples of size n from a normal population with the mean μ and the variance σ² is a normal distribution with
μx̄ = μ and σx̄² = σ²/n.
Thus, we can write
P(−z(α/2) < Z < z(α/2)) = 1 − α,
where
Z = (x̄ − μ)/(σ/√n)
and z(α/2) is such that the integral of the standard normal density from −z(α/2) to z(α/2) equals 1 − α. It follows that
x̄ − z(α/2) · σ/√n < μ < x̄ + z(α/2) · σ/√n
with probability 1 − α.
5.6.2 The Standard Error Estimate:


The standard deviation of a sampling distribution of a statistic is often called its standard
error. Table 5.6 lists standard errors of sampling distributions for various statistics under
the conditions of random sampling from an infinite (or very large) population or of
sampling with replacement from a finite population.
The quantities μ, σ, p, μr and x̄, s, P, mr denote, respectively, the population and sample means, standard deviations, proportions and rth moments about the mean.
If the sample size N is large enough, the sampling distributions are normal or nearly normal. For this reason, the methods are known as large sampling methods; when N < 30, samples are called small.
When population parameters such as σ, p or μr are unknown, they may be estimated closely by their corresponding sample statistics, namely s (or ŝ = √(N/(N − 1)) · s), P and mr, if the samples are large enough.

Sampling Distribution : Standard Error : Special Remarks

1) Means: σx̄ = σ/√N
Remarks: This is true for large or small samples. The sampling distribution of means is very nearly normal for N ≥ 30 even when the population is non-normal. μx̄ = μ, the population mean, in all cases.

2) Proportions: σP = √(p(1 − p)/N) = √(pq/N)
Remarks: The remarks made for means apply here as well. μP = p in all cases.

3) Standard Deviations: (1) σs = σ/√(2N);  (2) σs = √((μ4 − μ2²)/(4Nμ2))
Remarks: For N ≥ 100, the sampling distribution of s is very nearly normal. σs is given by (1) only if the population is normal (or approximately normal). If the population is non-normal, (2) can be used. Note that (2) reduces to (1) when μ2 = σ² and μ4 = 3σ⁴, which is true for normal populations. For N ≥ 100, μs = σ very nearly.

4) First and Third Quartiles: σQ1 = σQ3 = 1.3626 σ/√N
Remarks: The remarks made for medians apply here as well; μQ1 and μQ3 are very nearly equal to the first and third quartiles of the population. Note that μQ2 = median.

5) Deciles: σD1 = σD9 = 1.7094 σ/√N;  σD2 = σD8 = 1.4288 σ/√N;  σD3 = σD7 = 1.3180 σ/√N;  σD4 = σD6 = 1.2680 σ/√N
Remarks: The remarks made for medians apply here as well. μD1, μD2, ... are very nearly equal to the first, second, ... deciles of the population. Note that μD5 = median.

6) Semi-Interquartile Ranges: σQ = 0.7867 σ/√N
Remarks: The remarks made for medians apply here as well. μQ is very nearly equal to the population semi-interquartile range.

7) Variances: (1) σS² = σ² √(2/N);  (2) σS² = √((μ4 − ((N − 3)/(N − 1)) μ2²)/N)
Remarks: The remarks made for standard deviations apply here as well. Note that (2) yields (1) in case the population is normal. μS² = σ²(N − 1)/N, which is very nearly σ² for large N.

8) Coefficients of Variation: σv = (v/√(2N)) √(1 + 2v²)
Remarks: Here v = σ/μ is the population coefficient of variation. The given result holds for normal (or nearly normal) populations and N ≥ 100.
Solved Problems
A) Problems on Probability:
Example 1:
A die is rolled, find the probability that an even number is obtained.
Solution:
Let us first write the sample space S of the experiment. S = {1,2,3,4,5,6}
Let E be the event "an even number is obtained". E = {2,4,6}
We now use the formula of the classical probability.
P (E) = n (E) / n(S) = 3/6 = 1/2.

Example 2:
Two coins are tossed, find the probability that two heads are obtained.
Solution:
Each coin has two possible outcomes H (heads) and T (Tails).
The sample space S is given by S = {(H,T),(H,H),(T,H),(T,T)}
Let E be the event "two heads are obtained". E = {(H,H)}
We use the formula of the classical probability.
P (E) = n(E) / n(S) = 1/4.

Example 3:
Two dice are rolled, find the probability that the sum is
1) equal to 1
2) equal to 4
3) less than 13
Solution:
1) The sample space S of two dice is shown below.
S = { (1,1),(1,2),(1,3),(1,4),(1,5),(1,6)
(2,1),(2,2),(2,3),(2,4),(2,5),(2,6)
(3,1),(3,2),(3,3),(3,4),(3,5),(3,6)
(4,1),(4,2),(4,3),(4,4),(4,5),(4,6)
(5,1),(5,2),(5,3),(5,4),(5,5),(5,6)
(6,1),(6,2),(6,3),(6,4),(6,5),(6,6)}
Let E be the event "sum equal to 1". There are no outcomes which correspond to a
sum equal to 1, hence
P (E) = n(E) / n(S) = 0 / 36 = 0
2) Three possible outcomes give a sum equal to 4: E = {(1,3), (2,2), (3,1)},
Hence, P (E) = n(E) / n(S) = 3 / 36 = 1 / 12
3) All possible outcomes, E = S, give a sum less than 13,
Hence, P (E) = n(E) / n(S) = 36 / 36 = 1.
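Counts such as these are easy to confirm by listing the 36 outcomes. The following short Python sketch (an illustrative check, not part of the solution method) enumerates the sample space and recomputes the three probabilities.

# Brute-force check of Example 3 by enumerating the 36 equally likely outcomes.
from itertools import product

sample_space = list(product(range(1, 7), repeat=2))   # all (die 1, die 2) pairs
n_S = len(sample_space)                               # 36

p_sum_1 = sum(1 for a, b in sample_space if a + b == 1) / n_S    # 0/36
p_sum_4 = sum(1 for a, b in sample_space if a + b == 4) / n_S    # 3/36
p_lt_13 = sum(1 for a, b in sample_space if a + b < 13) / n_S    # 36/36

print(p_sum_1, p_sum_4, p_lt_13)   # 0.0 0.0833... 1.0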

Example 4:
A die is rolled and a coin is tossed; find the probability that the die shows an odd
number and the coin shows a head.
Solution:
The sample space S of this experiment is as follows:
S = {(1,H),(2,H),(3,H),(4,H),(5,H),(6,H),(1,T),(2,T),(3,T),(4,T),(5,T),(6,T)}
Let E be the event "the die shows an odd number and the coin shows a head".
Event E may be described as follows
E= {(1,H), (3,H), (5,H)}
The probability P (E) is given by
P (E) = n(E) / n(S) = 3 / 12 = 1/4.

B) Problems on Conditional Probability:


Example 5:
We toss a fair coin three successive times. We wish to find the conditional probability
P(A / B) when A and B are the events
A = {more heads than tails come up}, B = {1st toss is a head}.
Solution:
The sample space consists of eight sequences,
S = {HHH, HHT, HTH, HTT, THH, THT, TTH, TTT},
which we assume to be equally likely.
The event B consists of the four elements HHH, HHT, HTH, HTT, so its probability is
P(B) = 4/8.
The event A ∩ B consists of the three outcomes HHH, HHT, HTH, so its probability is
P(A ∩ B) = 3/8.
Thus, the conditional probability is
P(A / B) = P(A ∩ B)/P(B) = (3/8)/(4/8) = 3/4.

Because all possible outcomes are equally likely here, we can also compute P(A / B)
using a shortcut. We can bypass the calculation of P(B) and P(A ∩ B), and simply
divide the number of elements shared by A and B (which is 3) by the number of
elements of B (which is 4), to obtain the same result 3/4.
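The same shortcut can be automated; the Python sketch below (an illustrative check, not a required step) enumerates the eight sequences and computes P(A / B) directly from the definition.

# Enumerating the eight equally likely three-toss sequences of Example 5
# and computing P(A | B) = P(A and B) / P(B).
from itertools import product

outcomes = list(product("HT", repeat=3))                    # HHH, HHT, ..., TTT
A = [w for w in outcomes if w.count("H") > w.count("T")]    # more heads than tails
B = [w for w in outcomes if w[0] == "H"]                    # first toss is a head
A_and_B = [w for w in B if w in A]                          # outcomes in both A and B

p_B = len(B) / len(outcomes)                 # 4/8
p_A_and_B = len(A_and_B) / len(outcomes)     # 3/8
print(p_A_and_B / p_B)                       # 0.75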
Example 6:
A class consisting of 4 graduate and 12 undergraduate students is randomly divided
into 4 groups of 4. What is the probability that each group includes a graduate student?
Solution:
Think of the 16 students as occupying 16 seats, 4 seats in each of the 4 groups, and
place the graduate students one at a time. The first graduate student can sit anywhere.
Given this, the second graduate student must take one of the 12 seats belonging to the
other three groups out of the 15 seats remaining; the third must take one of the 8 seats
of the two groups still without a graduate out of the 14 remaining; and the fourth must
take one of the 4 seats of the last group out of the 13 remaining. Hence
P(each group includes a graduate student) = (12/15) × (8/14) × (4/13) = 64/455 ≈ 0.14.
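A quick Monte Carlo check supports the value 64/455 ≈ 0.141; the Python sketch below (illustrative only, with 100,000 random divisions) repeatedly shuffles the class into four groups of four and counts how often every group gets a graduate student.

# Monte Carlo check of Example 6: 4 graduates (G) and 12 undergraduates (U)
# are randomly divided into four groups of 4; estimate the probability that
# every group contains exactly one graduate (exact answer 64/455 = 0.1407...).
import random

students = ["G"] * 4 + ["U"] * 12
trials, hits = 100_000, 0
for _ in range(trials):
    random.shuffle(students)
    groups = [students[i:i + 4] for i in range(0, 16, 4)]
    if all(g.count("G") == 1 for g in groups):
        hits += 1
print(hits / trials)   # close to 0.141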

Example 7:
A company has three sections S1, S2 and S3 which contribute 40%, 35% and 25%,
respectively, to total output. The following percentages of faulty units have been
observed:
S1 : 2% (0.02)
S2 : 3% (0.03)
S3 : 4% (0.04)
There is a final check before output is dispatched. Calculate the probability that a unit
found faulty at this check has come from section 1, i.e. P(S1 / F).
Solution:
Let F represent the event that a unit has been found to be faulty.
P(S1) = probability that a unit chosen at random comes from S1 = 0.40
P(S2) = probability that a unit chosen at random comes from S2 = 0.35
P(S3) = probability that a unit chosen at random comes from S3 = 0.25
The percentages of faulty units give the conditional probabilities:
P(F / S1) = 0.02, P(F / S2) = 0.03, P(F / S3) = 0.04
The required probability may be expressed by Bayes' theorem as
P(S1 / F) = P(S1) P(F / S1) / P(F)
The unknown probability P(F) is found by noting that a faulty unit can only have come
from S1 or S2 or S3, so
P(F) = P(S1) P(F / S1) + P(S2) P(F / S2) + P(S3) P(F / S3)
     = 0.40 × 0.02 + 0.35 × 0.03 + 0.25 × 0.04
     = 0.0080 + 0.0105 + 0.0100
     = 0.0285
Substitution into the formula gives
P(S1 / F) = 0.40 × 0.02 / 0.0285 = 0.0080 / 0.0285 = 0.2807
Also
P(S2 / F) = 0.35 × 0.03 / 0.0285 = 0.0105 / 0.0285 = 0.3684
and
P(S3 / F) = 0.25 × 0.04 / 0.0285 = 0.0100 / 0.0285 = 0.3509
Note that 0.2807 + 0.3684 + 0.3509 = 1.0000.
Thus, if a faulty unit is chosen at random, the probability that it has come from S1 is
0.2807.
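The arithmetic of this Bayes' theorem calculation can be reproduced with a few lines of Python; the sketch below (an illustrative check, with the section labels S1, S2, S3 used as dictionary keys) computes P(F) and all three posterior probabilities.

# Bayes' theorem computation for Example 7.
priors      = {"S1": 0.40, "S2": 0.35, "S3": 0.25}   # share of output from each section
fault_rates = {"S1": 0.02, "S2": 0.03, "S3": 0.04}   # P(F / section)

p_F = sum(priors[s] * fault_rates[s] for s in priors)               # total probability of a faulty unit
posteriors = {s: priors[s] * fault_rates[s] / p_F for s in priors}  # P(section / F)

print(round(p_F, 4))                                    # 0.0285
print({s: round(v, 4) for s, v in posteriors.items()})  # {'S1': 0.2807, 'S2': 0.3684, 'S3': 0.3509}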

C) Problems on Normal Distribution:


Example 8:
Suppose the owner of a bakery knows that the daily demand for his whole meal bread
is a random variable having the mean μ = 400 loaves and the standard deviation σ = 20 loaves.
What is the probability that the demand for his bread will exceed 450 loaves?
Solution:
It is better to give a normal curve diagram to understand the implications of the problem.
This is shown in the given fig.
z = (X − μ)/σ = (450 − 400)/20 = 2.5
The probability against z = 2.5 is 0.0062. The shaded area at the right tail-end in given
fig. shows this probability. Thus we can say that the probability of demand exceeding
450 loaves is extremely low being 0.0062 or merely 0.62%.


Fig 5.6: Probability of Demand for Bread
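For readers who prefer to check such tail areas by computer rather than from the normal table, the following short Python sketch (illustrative, using scipy.stats) reproduces the figure 0.0062.

# Checking Example 8: P(demand > 450) when demand is normal with
# mean 400 and standard deviation 20.
from scipy.stats import norm

p = 1 - norm.cdf(450, loc=400, scale=20)   # upper-tail area beyond z = 2.5
print(round(p, 4))                         # 0.0062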


Example 9:
The average monthly sales of 5000 firms are normally distributed. Its mean and
standard deviation are Rs. 36,000 and Rs. 10,000, respectively. Find
1) The number of firms having sales over Rs. 40,000;
2) The percentage of firms having sales between Rs. 38,500 and 41,000;
3) The number of firms having sales between Rs. 30,000 and Rs. 40,000.
The relevant extract of the Area Table (under the Normal Curve) is given below.
Z 0.25 0.40 0.5 0.6
Area 0.0987 0.1554 0.1915 0.2257
Solution:
Given μ = Rs. 36,000 and σ = Rs. 10,000.
Let X denote the monthly sales of a firm.
1) For X = 40,000, z = (40,000 − 36,000)/10,000 = 0.4.
The area between z = 0 and z = 0.4 is 0.1554, so P(X > 40,000) = 0.5 − 0.1554 = 0.3446.
Since there are 5,000 firms, we multiply this value by 5,000.
Therefore, the number of firms having sales over Rs. 40,000 is
0.3446 × 5,000 = 1,723.
2) For X = 38,500, z = (38,500 − 36,000)/10,000 = 0.25 and for X = 41,000,
z = (41,000 − 36,000)/10,000 = 0.5.
The corresponding areas from the table are 0.0987 and 0.1915.
Therefore, P(38,500 < X < 41,000) = 0.1915 − 0.0987 = 0.0928.
Hence, 9.28% of the firms have sales between Rs. 38,500 and Rs. 41,000.
3) For X = 30,000, z = (30,000 − 36,000)/10,000 = −0.6 and for X = 40,000, z = 0.4.
The corresponding areas are 0.2257 and 0.1554, so
P(30,000 < X < 40,000) = 0.2257 + 0.1554 = 0.3811.
Hence, the number of firms having sales between Rs. 30,000 and Rs. 40,000 is
0.3811 × 5,000 = 1,906 approx.

Example 10:
Assuming that the height distribution of a group of men is normal, find the mean and
standard deviation, given that 84 percent of the men have heights less than 65.2 inches
and 68 percent have heights between 65.2 and 62.5 inches.
Solution:
Since we have to find both μ and σ, there must be two simultaneous equations.
84 percent of the men have heights less than 65.2 inches, and the area under the
standard normal curve to the left of z = 1 is approximately 0.84. Hence
(65.2 − μ)/σ = 1
Or 65.2 − μ = σ                                   (1)
Since 68 percent have heights between 62.5 and 65.2 inches, 84 − 68 = 16 percent of
the men have heights less than 62.5 inches (this is because of subtracting 68% from
84%). The area to the left of z = −1 is approximately 0.16. Hence
(62.5 − μ)/σ = −1
Or 62.5 − μ = −σ                                  (2)
Now subtracting equation (2) from (1), we get
2σ = 65.2 − 62.5 = 2.7, i.e. σ = 1.35 inches.
Substituting the value of σ in (2) above, we get
62.5 − μ = −1.35
Or μ = 62.5 + 1.35
Or μ = 63.85 inches.
If we apply the values of μ and σ in the equations above, we can verify the accuracy of
the results.
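One convenient way to carry out this verification is numerically; the Python sketch below (an illustrative check using scipy.stats) confirms that with μ = 63.85 and σ = 1.35 about 84 percent of heights fall below 65.2 inches and about 68 percent fall between 62.5 and 65.2 inches.

# Verifying the Example 10 answer: mu = 63.85 inches, sigma = 1.35 inches.
from scipy.stats import norm

mu, sigma = 63.85, 1.35
print(round(norm.cdf(65.2, mu, sigma), 3))                              # approx. 0.841
print(round(norm.cdf(65.2, mu, sigma) - norm.cdf(62.5, mu, sigma), 3))  # approx. 0.683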

Example 11:
If z is normally distributed with mean 0 and variance 1, find:
1) P(z > −1.64)
2) P(−1.96 ≤ z ≤ 1.96)
3) P(|z| ≤ 1)

Solution:
1) P(z > −1.64)

Against z = 1.64, the standard normal table gives the corresponding area of 0.4495.
Since P(z > −1.64) also includes the whole right half of the curve, we add 0.5 to this value.
Hence P(z > −1.64) = 0.4495 + 0.5 = 0.9495

Fig 5.7:
2) P(−1.96 ≤ z ≤ 1.96)
Against z = 1.96, the corresponding area from the standard normal table is 0.4750.
Since this is to be taken for both sides of the normal curve, the required probability
is P(−1.96 ≤ z ≤ 1.96) = 0.4750 + 0.4750 = 0.95

Fig 5.8:
3) P(|z| ≤ 1)
The term |z| indicates z ignoring positive and negative signs, i.e., we have to
consider both the left and right sides of the curve. The corresponding area for z = 1 from
the standard normal table is 0.3413.
Hence P(|z| ≤ 1) = 0.3413 + 0.3413 = 0.6826

Fig 5.9:
Example 12:
The average daily sales of 500 branch offices were Rs. 1,50,000 and the standard
deviation was Rs. 15,000. Assuming the distribution to be normal, indicate how many
branches have sales between:

1) Rs 1,20,000 and Rs 1,45,000
2) Rs 1,40,000 and Rs 1,65,000
3) More than Rs 1,65,000
Solution:
1) Standard normal variate corresponding to 120 is (to simplify calculations, '000 is
omitted)
z = (X − μ)/σ = (120 − 150)/15 = −2
and corresponding to 145 is
z = (145 − 150)/15 = −0.33
From the table, we find that the areas corresponding to these values of z are 0.4772 and
0.1293.
Hence, the desired area between Rs. 120 and Rs. 145 is 0.4772 − 0.1293 = 0.3479.
Hence, the expected number of branches having sales between Rs. 1,20,000 and
Rs. 1,45,000 is 0.3479 × 500 = 174 approx.
2) Standard normal variate corresponding to 140 is
z = (140 − 150)/15 = −0.67
and corresponding to 165 is
z = (165 − 150)/15 = 1
From the table, the areas corresponding to these z values are 0.2486 and 0.3413.
Hence, the desired area is 0.2486 + 0.3413 = 0.5899.
Hence, the expected number of branches having sales between Rs. 1,40,000 and
Rs. 1,65,000 is 0.5899 × 500 = 295 approx.
3) More than Rs. 1,65,000
P(z > 1) = 0.5 − 0.3413 = 0.1587
Hence, the expected number of branches is 0.1587 × 500 = 79 approx.

D) Problems on Binomial Distribution:


Example 13:
Find the chance of getting 3 successes in 5 trials when the chance of getting a
success in one trial is 2/3.
Solution:
Here, n = 5, p = 2/3, q = 1 − p = 1 − 2/3 = 1/3 and r = 3.
Substituting these values in the general term of the binomial distribution, the required chance is
P(r = 3) = 5C3 p^3 q^2 = 10 × (2/3)^3 × (1/3)^2 = 10 × (8/27) × (1/9) = 80/243 = 0.329 approx.

Example 14:
For a binomial distribution the mean is 4 and variance is 2. Find probability of getting
1) At least 2 successes,
2) At the most 2 successes.
Solution:
Given, the mean np = 4 and the variance npq = 2.
Hence, q = npq/np = 2/4 = 0.5 and p = 1 − q = 0.5.
Since np = 4 and p = 0.5, we have n = 8.
1) Probability of getting at least 2 successes means we have to get probabilities of 2,
3, 4, 5, 6, 7 and 8 successes. Adopting the same procedure as described in
previous example, we find from the table, the following probabilities:
Successes (r)    Probability
2                0.1094
3                0.2188
4                0.2734
5                0.2188
6                0.1094
7                0.0312
8                0.0039
Total            0.9649
Thus, the probability of getting at least 2 successes is 0.9649.
2) Now, we take up the second part of the problem. Here, we are required to find at the
most 2 successes. From the table, we find the following probabilities:
0 successes      0.0039
1 success        0.0312
2 successes      0.1094
Total            0.1445
Thus, the probability of getting at the most 2 successes is 0.1445. It may be noted
that if we add up the answers in (1) and (2), the resultant figure is more than 1. This
is because the probability of 2 successes comes in both the parts (1) and (2). If we
omit the probability of 2 successes from one part, say (2), then the total will be
0.9649 + 0.0039 + 0.0312 = 1.0000,
i.e. 1.
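Both parts can be cross-checked with the cumulative binomial distribution; the Python sketch below (illustrative, using scipy.stats) gives 0.9648 and 0.1445, the small difference from 0.9649 being due to the rounding of the individual terms in the table above.

# Checking Example 14: X ~ Binomial(n = 8, p = 0.5).
from scipy.stats import binom

n, p = 8, 0.5
p_at_least_2 = 1 - binom.cdf(1, n, p)   # P(X >= 2)
p_at_most_2  = binom.cdf(2, n, p)       # P(X <= 2)
print(round(p_at_least_2, 4), round(p_at_most_2, 4))   # 0.9648 0.1445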

Example 15:
A perfect die is thrown a large number of times in sets of 8. The occurrence of 5 or 6 is
called success. In what proportion of the sets would you expect three successes?
Solution:
Since a die has 1 to 6 numbers, the probability of getting a 5 is 1/6. Again, the
probability of getting a 6 is 1/6.
Hence, the probability of getting a 5 or 6 (i.e. success) is p = 1/6 + 1/6 = 1/3. Therefore,
q = 1 – 1/3 = 2/3
Since the die is thrown in sets of 8, the number of successes in a set follows the binomial
distribution whose terms are given by the expansion of (q + p)^8 = (2/3 + 1/3)^8.
The probability of 3 successes in a set is
P(3) = 8C3 p^3 q^5 = 56 × (1/3)^3 × (2/3)^5 = 56 × (1/27) × (32/243) = 1792/6561 = 0.273 approx.
Hence, three successes would be expected in about 27.3 percent of the sets.

Example 16:
If, on an average, 8 ships out of 10 arrive safely at ports, obtain the mean and standard
deviation of the number of ships returning safely out of 150 ships.
Solution:
The probability of a ship returning safely is p = 8/10 = 0.8 and of not returning safely, q = 2/10 = 0.2.
The probabilities of 0, 1, 2, ..., 150 ships returning safely out of a total of 150 are given
by the successive terms of the expansion of (q + p)^150 = (0.2 + 0.8)^150.
The mean of this distribution is np, that is, 150 × 0.8 = 120.
The standard deviation is √(npq) = √(150 × 0.8 × 0.2) = √24 = 4.9.

Mean = np = 120 & Standard deviation = 4.9.

Example 17:
A marksman can hit a target 2 out of 3 times. In 4 shots, what are his chances of hitting
it 0, 1, 2, 3 or 4 times?
Solution:
The probability that he will miss the target is 1-2/3, that is, 1/3.
The required chances of hitting the target 0, 1, 2, 3 or 4 times are given by the successive
terms of the expansion of (1/3 + 2/3)^4.
Hence, his chances of hitting the target are as given in the table below:
Times Chance or probability
0 1/81
1 8/81
2 24/81
3 32/81
4 16/81

Example 18:
Comment on a binomial distribution whose mean = 7 and variance = 11.
Solution: For a binomial distribution
Mean = np = 7, Variance = npq = 11
q = npq/np = 11/7 = 1.57
But the value of q cannot be more than one; it must lie between 0 and 1. Therefore,
the given data are inconsistent.

Example 19:
N = 1000; n = 5; P = 50% then find P(x = 2) by binomial distribution.
Solution:
N = 1000, n = 5, P = 50% = ½, q = 1-P = ½.
P(x = 2) = N × nCx q^(n−x) p^x = 1000 × 5C2 × (1/2)^3 × (1/2)^2
         = 1000 × (5!/(2! 3!)) × (1/2)^3 × (1/2)^2
         = 1000 × (20/2) × (1/8) × (1/4)
         = 1000 × 10 × (1/32)
P(x = 2) = 312.5
That is, the probability of x = 2 is 5C2 (1/2)^5 = 10/32 = 0.3125, so the expected frequency
of x = 2 in N = 1000 samples is 312.5.

E) Problems on Interval Estimation:


Example 20:
A random sample of 400 firms was taken to find out the average sale per customer. The
sample mean was found to be Rs 900 and the standard deviation Rs. 200. Construct an
interval estimate of the population mean with the confidence level of 95.44 percent.
Solution:
The lower confidence limit, as indicated earlier, is x̄ − z σ_x̄, where σ_x̄ = s/√n and s is the
estimate of the population standard deviation. For a confidence level of 95.44 percent, z = 2.
Thus, σ_x̄ = 200/√400 = 200/20 = Rs. 10, and the lower limit is
900 − 2 × 10 = Rs. 880.
The upper confidence limit, as indicated earlier, is x̄ + z σ_x̄ = 900 + 2 × 10 = Rs. 920.
This can also be written as Rs. 900 ± 20. We are 95.44 % confident that the population
mean lies between Rs. 880 and Rs. 920.

Example 21:
In the previous example, suppose we are interested in having an estimate with a higher
confidence level, say 99.8 percent.
Solution:
The corresponding value of Z is 3. Using the same data as given in the above example and
taking Z = 3, we find
x̄ ± Z σ_x̄ = 900 ± 3 × 10 = Rs. 870 to Rs. 930.
In other words, the population mean lies between Rs. 870 and Rs. 930 and we are
almost 100 percent confident that it is so. Note that the interval between the lower and
upper points has widened as the level of confidence has increased. Conversely, if we
reduce the level of confidence, we shall find that the interval between the two points has
narrowed down.

F) Problems on Standard Error of Estimation:


Example 22:
The regression line of y on x is given by the equation y = 35.82 + 0.476 x. Find the
standard error of the estimate of y on x for the data shown in the table below.
Solution:
The following table depicts the actual values of y and the estimated values of y
(denoted by y_est):
x 65 63 67 64

y 68 66 68 65

yest 66.76 65.81 67.71 66.28

y - yest 1.24 0.19 0.29 -1.28

Applying the formula for the standard error of estimate, S_y,x = √(Σ(y − y_est)²/N), to this
problem, we have the solution as follows.
S_y,x = √((1.24² + 0.19² + 0.29² + (−1.28)²)/4)
      = √((1.5376 + 0.0361 + 0.0841 + 1.6384)/4)
      = √(3.2962/4) = √0.8241 = 0.91 approx.
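The same computation, carried out without rounding the intermediate residuals, is sketched in Python below (illustrative only); it gives essentially the same value, about 0.91.

# Standard error of estimate for Example 22, using s_yx = sqrt(sum((y - y_est)^2) / N).
from math import sqrt

x = [65, 63, 67, 64]
y = [68, 66, 68, 65]
y_est = [35.82 + 0.476 * xi for xi in x]             # regression line y = 35.82 + 0.476 x
residuals = [yi - ye for yi, ye in zip(y, y_est)]    # approx. 1.24, 0.19, 0.29, -1.28

s_yx = sqrt(sum(r * r for r in residuals) / len(y))
print(round(s_yx, 3))   # approx. 0.91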



Review Questions

Q.1. What is probability? Explain its properties.
Q.2. Define the normal distribution. What are the main characteristics of the normal
distribution?
Q.3. What is a probability distribution? Explain the types of probability distributions.
Q.4. Define the binomial distribution. What are the main characteristics of the binomial
distribution?
Q.5. Explain statistical estimation with suitable examples.
Q.6. What do you mean by interval estimation? Explain with an example.
Q.7. Problems for Practice:
1) X is a normally distributed variable with mean μ = 30 and standard deviation σ =4.
Find
a) P(x < 40)
b) P(x > 21)
c) P(30 < x < 35)
2) The annual salaries of employees in a large company are approximately normally
distributed with a mean of $50,000 and a standard deviation of $20,000.
a) What percent of people earn less than $40,000?
b) What percent of people earn between $45,000 and $65,000?
c) What percent of people earn more than $70,000?
3) Entry to a certain University is determined by a national test. The scores on this test
are normally distributed with a mean of 500 and a standard deviation of 100. Tom
wants to be admitted to this university and he knows that he must score better than
at least 70% of the students who took the test. Tom takes the test and scores 585.
Will he be admitted to this university?
4) A radar unit is used to measure speeds of cars on a motorway. The speeds are
normally distributed with a mean of 90 km/hr and a standard deviation of 10 km/hr.
What is the probability that a car picked at random is travelling at more than 100
km/hr?
5) The length of life of an instrument produced by a machine has a normal distribution
with a mean of 12 months and standard deviation of 2 months. Find the probability
that an instrument produced by this machine will last
a) Less than 7 months.
b) Between 7 and 12 months.
6) A company owns 400 laptops. Each laptop has an 8% probability of not working.
The manager randomly selects 20 laptops for his salespeople.
a) What is the likelihood that 5 will be broken?
b) What is the likelihood that they will all work?

7) An HDTV is made from 100 components. Each component has a 0.005 probability
of being defective. What is the probability that an HDTV will not work perfectly?
8) The ABC Company manufactures toy robots. About 1 toy robot per 100 does not
work. X purchases 35 ABC toy robots. What is the probability that exactly 4 do not
work?
9) The LMB Company manufactures tyres. They claim that only 0.007 of LMB tyres are
defective. What is the probability of finding 2 defective tyres in a random sample of
50 LMB tyres?
10) A study indicates that 4% of American teenagers have tattoos. You randomly
sample 30 teenagers. What is the likelihood that exactly 3 will have a tattoo?
11) Given P(A) = 1/4, P(A / B) = 1/4 and P(B / A) = 1/2, determine whether:
a) A and B are mutually exclusive.
b) A and B are independent.
12) X can solve 80% of the problems while Y can solve 90% of the problems given in a
Statistics book. A problem is selected at random. What is the probability that at least
one of them will solve it?
13) In an examination, 30% of the students have failed in Engineering Mechanics, 20%
of the students have failed in Mathematics and 10% have failed in both the subjects.
A student is selected at random.
a) What is the probability that the student has failed in Engineering Mechanics if it
is known that he has failed in Mathematics?
b) What is the probability that the student has failed either in Engineering
Mechanics or in Mathematics?



Bibliography
 Reference Books:
1) Quantitative Techniques in Management by N.D. Vohra, Tata McGraw-Hill
Publications, 4th Edition
2) Quantitative Approaches to Management by Levin, Rubin, Stinson & Gardner
3) Operations Research: Theory & Applications by J.K. Sharma, MacMillan
Publishers India Ltd., 4th Edition
4) Quantitative Techniques & Statistics by K.L. Sehgal, Himalaya Publications
5) An Introduction to Management Science: Quantitative Approach for Decision
Making by Anderson, Cengage Learning
6) Introduction to Operations Research by Billy E. Gillett, TMGH
7) Operations Research by Nita Shah, Ravi Gor, Hardik Soni, PHI
8) Managerial Decision Modeling with Spreadsheets by Balakrishnan, Render,
Stair, Jr., Pearson Education
9) Operations Research by R. Panneerselvam, Prentice Hall India, 2nd Edition

 Websites:
www.orsi.in
http://www.universityofcalicut.info/
http://mbaexamnotes.com/
https://studysoup.com
http://careercart.blogspot.in/


