Operations Research Notes
1. Determination of operation.
2. Determination of objectives.
3. Determination of effectiveness of measures.
4. Determination of type of problem, its origin and causes.
(ii) Research Phase:
1. Recommendations for remedial action to those who first posed the problem, this includes the
assumptions made, scope and limitations, alternative courses of action and their effect.
2. Putting the solution to work: implementation.
Without OR, in many cases we follow these phases in full, but in other cases we leave important steps out. Judgment and subjective decision-making alone are not good enough, so industries look to operations research for a more objective way to make decisions. Even so, the method used should also take emotional and subjective factors into account.
For example, skilled and creative labour is an important factor in a business; if management wants to choose a new location, it has to consider the employees' personal feelings about the location it selects.
Characteristics (Features) of Operation Research:
(i) Inter-Disciplinary Team Approach:
This requires an inter-disciplinary team including individuals with skills in mathematics, statistics, economics, engineering, the material sciences, computer science, etc.
(ii) Holistic Approach to the System:
While evaluating any decision, the important interactions and their impact on the whole organisation, rather than only on the functions originally involved, are reviewed.
(iii) Methodological Approach:
O.R. utilises the scientific method to solve the problem.
(iv) Objective Approach:
O.R. attempts to find the best or optimal solution to the problem under consideration, taking into
account the goals of the organisation.
Methodology of Operation Research:
1. Formulating the Problem:
The problem must first be clearly defined. It is common to start the O.R. study with a tentative formulation of the problem, which is then reformulated again and again during the study. The study must also consider economic aspects.
While formulating the O.R. study, analysts must analyse the following major components:
(i) The environment:
The environment involves the physical, social and economic factors which are likely to affect the problem under consideration. The O.R. team or analysts must study the organisation's environment, including men, materials, machines, suppliers, consumers, competitors, the government and the public.
(ii) Decision-makers:
The operations analyst must study the decision-maker and his relationship to the problem at hand.
(iii) Objectives:
Considering the problem as a whole, objectives should be defined.
(iv) Alternatives:
The O.R. study determines which alternative course of action is most effective in achieving the desired objectives. The expected reactions of competitors to each alternative must also be considered.
2. Deriving Solution:
Models are used to determine the solution either by simulation or by mathematical analysis.
Mathematical analysis for deriving optimum solution includes analytical or numerical procedure,
and uses various branches of mathematics.
3. Testing the Model and Solution:
A properly formulated and correctly manipulated model is useful in predicting the effect of
changes in control variables on the overall system effectiveness. The validity of the solution is
checked by comparing the results with those obtained without using the model.
4. Establishing Controls over the Solution:
The solution derived from a model remains effective so long as the uncontrolled variables retain
their values and the relationship. The solution goes out of control, if the values of one or more
variables vary or relationship between them undergoes a change. In such circumstances the
models need to be modified to take the changes into account.
5. Implementing the Solution:
Solution so obtained should be translated into operating procedure to make it easily
understandable and applied by the concerned persons. After applying the solution to the system,
O.R. group must study the response of the system to the changes made.
Historically, the term Operations Research originated during the Second World War, when the armed forces of the U.S.A. and Great Britain sought the assistance of scientists to solve complex and very difficult strategic and tactical problems of warfare, such as making mines harmless or increasing the efficiency of anti-submarine aerial warfare.
Operations research applies mathematical logic to complex problems requiring managerial decisions.
Operations research aids in solving diverse business problems and in the planning and investigation of major operational decisions.
Operations Research (Operational Research, O.R., or Management Science) includes a wide range of problem-solving techniques, such as mathematical models, statistics and algorithms, to aid decision-making. O.R. is employed to analyze complex real-world systems, generally with the objective of improving or optimizing performance.
In other words, Operations Research is an interdisciplinary branch of applied mathematics and formal science which makes use of methods such as mathematical modelling, algorithms and statistics to reach optimal or near-optimal solutions to complex situations.
It is usually concerned with maximizing some objective function (for instance, profit, assembly-line performance, bandwidth, etc.) or minimizing it (for instance, loss, risk, cost, etc.). Operational research helps management accomplish its objectives using scientific methods.
Origin and History of Operations Research in Brief
While researching the history of operations research (O.R.), I discovered that the history is not clear cut; different people have diverse views of the same events.
Based on the history of Operations Research, it is believed that Charles Babbage (1791-1871)
is the father of Operational Research due to the fact that his research into the cost of
transportation and sorting of mail resulted in England’s universal Penny Post in 1840.
The name operations research evolved around 1940. During World War II, a team of scientists (Blackett's Circus) in the UK applied scientific techniques to the study of military operations, and the techniques thus developed came to be called operations research.
As a formal discipline, operations research originated from the efforts of army advisors at the
time of World War II. In the years following the war, the methods started to be employed
extensively to problems in business, industry and society. Ever since then, OR has developed
into a subject frequently employed in industries including petrochemicals, logistics, airlines,
finance, government, etc.
Thus, operational research began during World War II in Great Britain with the establishment of groups of scientists to analyse the strategic and tactical problems associated with military operations. The aim was to discover the most efficient use of limited military resources by the application of quantitative techniques.
1. Cash flow analysis, long range capital requirement, investment portfolios, dividend policies,
2. Claim procedure, and
3. Credit policies.
(II) Marketing:
1. Physical distribution: Location and size of warehouses, distribution centres and retail outlets,
distribution policies.
2. Facilities Planning: Number and location of factories, warehouses etc. Loading and unloading
facilities.
3. Manufacturing: Production scheduling and sequencing stabilisation of production, employment,
layoffs, and optimum product mix.
4. Maintenance policies, crew size.
5. Project scheduling and allocation of resources.
(V) Personnel Management:
Classification of Models:
(I) By Degree of Abstraction:
1. Mathematical models.
2. Language models.
(II) By Function:
1. Descriptive models.
2. Predictive models.
3. Normative models for repetitive problems.
(III) By Structure:
1. Physical models.
2. Analogue (graphical) models.
3. Symbolic or mathematical models.
(IV) By Nature of Environment:
1. Deterministic models.
2. Probabilistic models.
(V) By the Time Horizon:
1. Static models.
2. Dynamic models.
Characteristics of a Good Model:
1. Shortage costs.
2. Ordering costs.
3. Storage costs.
4. Interest costs.
This study helps in taking decisions about:
Formulate and solve using the graphical method a Linear Programming model for the previous
situation that allows the workshop to obtain maximum gains.
Decision Variables:
The optimal solution is and with an optimal value that represents the workshop’s profit.
Simplex Method
The Simplex Method or Simplex Algorithm is used for calculating the optimal solution to the
linear programming problem. In other words, the simplex algorithm is an iterative procedure carried out systematically to determine the optimal solution from the set of feasible solutions.
Firstly, to apply the simplex method, appropriate variables are introduced into the linear programming problem, and the primary or decision variables are equated to zero. The iterative process begins by assigning values to these defined variables. The values of the decision variables are taken as zero, since the evaluation, as in the graphical approach, begins at the origin; therefore, x1 and x2 are equal to zero.
The decision maker will enter appropriate values of the variables in the problem, identify the variable that contributes the most to the objective function, and remove those values which give undesirable results. Thus, the value of the objective function is improved through this method. This procedure of substituting variable values continues until no further improvement in the value of the objective function is possible.
Following two conditions need to be met before applying the simplex method:
1. The right-hand side of each constraint inequality should be non-negative. In case any linear programming problem has a negative resource value, it should be converted into a positive value by multiplying both sides of the constraint inequality by −1.
2. The decision variables in the linear programming problem should be non-negative.
Thus, the simplex algorithm is efficient since it examines only a few of the feasible solutions, namely those at the corner points, to determine the optimal solution to the linear programming problem.
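In practice, small linear programmes of this kind can be handed to a library routine rather than worked by hand. The sketch below uses SciPy's linprog (backed by the HiGHS solvers); the objective and constraints are hypothetical, chosen only to illustrate the call.

```python
# Minimal sketch: maximize z = 3x1 + 2x2
# subject to x1 + x2 <= 4, x1 + 3x2 <= 6, x1, x2 >= 0.
# linprog minimizes, so the objective coefficients are negated.
from scipy.optimize import linprog

c = [-3, -2]
A_ub = [[1, 1],
        [1, 3]]
b_ub = [4, 6]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print("optimal x:", res.x)      # values of the decision variables
print("optimal z:", -res.fun)   # negate back to recover the maximum
```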
Duality
Duality in linear programming states that every linear programming problem has another linear programming problem related to it, which can be derived from it. The original linear programming problem is called the "Primal", while the derived linear problem is called the "Dual".
Before solving for the dual, the original linear programming problem is to be formulated in its standard form. Standard form means that all the variables in the problem are non-negative and that the "≥" sign is used for the constraints of a minimization problem while the "≤" sign is used for those of a maximization problem.
The concept of Duality can be well understood through a problem given below:
Maximize
Z = 50x1+30x2
Subject to:
4x1 + 3x2 ≤ 100
3x1 + 5x2 ≤ 150
x1, x2 ≥ 0
The duality can be applied to the above original linear programming problem as:
Minimize
G = 100y1+150y2
Subject to:
4y1 + 3y2 ≥ 50
3y1 + 5y2 ≥ 30
y1, y2 ≥ 0
The following observations were made while forming the dual linear programming problem:
1. The primal or original linear programming problem is of the maximization type while the dual
problem is of minimization type.
2. The constraint values 100 and 150 of the primal problem have become the coefficients of the dual variables y1 and y2 in the objective function of the dual problem, while the coefficients of the variables in the objective function of the primal problem have become the constraint values in the dual problem.
3. The first column of the constraint coefficients of the primal problem has become the first row in the dual problem, and similarly the second column of constraint coefficients has become the second row in the dual problem.
4. The directions of the inequalities have also changed, i.e. in the dual problem the sign is the reverse of that in the primal problem: in the primal problem the inequality sign was "≤", but in the dual problem it becomes "≥".
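As a quick check of the primal-dual relationship above, the sketch below solves both problems with SciPy's linprog and confirms that their optimal objective values coincide (strong duality). The "≥" constraints of the dual are rewritten as "≤" constraints by multiplying through by −1.

```python
# Solve the primal and the dual stated above and compare their optima.
from scipy.optimize import linprog

# Primal: max 50x1 + 30x2  s.t. 4x1 + 3x2 <= 100, 3x1 + 5x2 <= 150, x >= 0
primal = linprog(c=[-50, -30],
                 A_ub=[[4, 3], [3, 5]],
                 b_ub=[100, 150],
                 bounds=[(0, None)] * 2)

# Dual: min 100y1 + 150y2  s.t. 4y1 + 3y2 >= 50, 3y1 + 5y2 >= 30, y >= 0
dual = linprog(c=[100, 150],
               A_ub=[[-4, -3], [-3, -5]],
               b_ub=[-50, -30],
               bounds=[(0, None)] * 2)

print("primal optimum:", -primal.fun)   # linprog minimizes, so negate
print("dual optimum:  ", dual.fun)      # the two values should be equal
```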
Transportation Problem
North West Corner Method
The North-West Corner Rule is a method adopted to compute an initial feasible solution of the transportation problem. The name north-west corner is given to this method because the basic variables are selected from the extreme top-left (north-west) corner of the matrix.
The concept of North-West Corner can be well understood through a transportation problem
given below:
In the table, three sources A, B and C with production capacities of 50 units, 40 units and 60 units of product respectively are given. Every day, the demand of the three retailers D, E and F is to be met with 20 units, 95 units and 35 units of product respectively. The transportation costs are also given in the matrix.
The prerequisite condition for solving the transportation problem is that total demand should be equal to total supply. In case demand is more than supply, a dummy origin (source) is added to the table; its supply is equal to the difference between total demand and total supply, and the costs associated with the dummy origin are zero. Similarly, in case supply is more than demand, a dummy destination is created whose demand is equal to the difference between supply and demand; again, the costs associated with the dummy destination are zero.
Once the demand and supply are equal, the following procedure is followed:
1. Select the north-west or extreme top-left corner of the matrix and assign as many units as possible to cell AD, within the supply and demand constraints. Here 20 units are assigned to the first cell; this satisfies the demand of destination D while the supply of source A is still in surplus.
2. Now move horizontally and assign 30 units to the cell AE. Since 30 units are available with the
source A, the supply gets fully saturated.
3. Now move vertically in the matrix and assign 40 units to Cell BE. The supply of source B also
gets fully saturated.
4. Again move vertically, and assign 25 units to cell CE, the demand of destination E is fulfilled.
5. Move horizontally in the matrix and assign 35 units to cell CF, both the demand and supply of
origin and destination gets saturated. Now the total cost can be computed.
The Total cost can be computed by multiplying the units assigned to each cell with the concerned
transportation cost. Therefore,
Total Cost = 20*5+ 30*8+ 40*6+ 25*9+ 35*6 = Rs 1015
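The rule is mechanical enough to express in a few lines of code. The sketch below uses the supply and demand figures of the example; since the rule itself never looks at costs, the cost matrix only contains the five unit costs quoted in the total-cost expression (the remaining entries are placeholders and are never used).

```python
def north_west_corner(supply, demand):
    supply, demand = supply[:], demand[:]          # work on copies
    allocation = [[0] * len(demand) for _ in supply]
    i = j = 0
    while i < len(supply) and j < len(demand):
        qty = min(supply[i], demand[j])            # ship as much as possible
        allocation[i][j] = qty
        supply[i] -= qty
        demand[j] -= qty
        if supply[i] == 0:                         # row exhausted: move down
            i += 1
        else:                                      # column satisfied: move right
            j += 1
    return allocation

supply = [50, 40, 60]                              # sources A, B, C
demand = [20, 95, 35]                              # destinations D, E, F
cost = [[5, 8, 0],                                 # zeros are unused placeholders
        [0, 6, 0],
        [0, 9, 6]]

alloc = north_west_corner(supply, demand)
total = sum(alloc[i][j] * cost[i][j] for i in range(3) for j in range(3))
print(alloc)    # [[20, 30, 0], [0, 40, 0], [0, 25, 35]]
print(total)    # 1015, as computed above
```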
Least Cost Method
The Least Cost Method is another method used to obtain the initial feasible solution for the
transportation problem. Here, the allocation begins with the cell which has the minimum cost.
The lower cost cells are chosen over the higher-cost cell with the objective to have the least cost
of transportation.
The Least Cost Method is considered to produce a better initial solution than the North-West Corner Method because it takes the shipping cost into account while making the allocation, whereas the North-West Corner Method considers only the availability and supply requirements and begins allocation from the extreme top-left corner, irrespective of the shipping cost.
Let’s understand the concept of Least Cost method through a problem given below:
In the given matrix, the supply of each source A, B and C is given, viz. 50 units, 40 units and 60 units respectively. The weekly demand of the three retailers D, E and F, i.e. 20 units, 95 units and 35 units respectively, is also given. The shipping cost is given for all the routes.
The minimum transportation cost can be obtained by following the steps given below:
1. The minimum cost in the matrix is Rs 3, but there is a tie between cells BF and CD, so the question arises as to which cell the allocation should be made in. Generally, the cell where the maximum quantity can be assigned should be chosen, to obtain a better initial solution. Therefore, 35 units are assigned to cell BF. With this, the demand of retailer F gets fulfilled, and only 5 units are left with source B.
2. Again the minimum cost in the matrix is Rs 3. Therefore, 20 units shall be assigned to the cell
CD. With this, the demand of retailer D gets fulfilled. Only 40 units are left with the source C.
3. The next minimum cost is Rs 4; however, since the demand of F is already met, we move to the next minimum cost, which is Rs 5. Again, the demand of D is already met. The next minimum cost is Rs 6, where there is a tie between three cells. However, no units can be assigned to cells BD and CF, as the demands of both retailers D and F are saturated. So, 5 units are assigned to cell BE. With this, the supply of source B gets saturated.
4. The next minimum cost is Rs 8; assign 50 units to cell AE. The supply of source A gets saturated.
5. The next minimum cost is Rs 9; assign 40 units to cell CE. With this, both the demand and supply of all the sources and destinations get saturated.
The total cost can be calculated by multiplying the assigned quantity with the concerned cost of
the cell. Therefore,
Total Cost = 50*8 + 5*6 + 35*3 +20*3 +40*9 = Rs 955.
Note: The total supply and total demand should be equal. In case supply is more, a dummy destination is added to the table with its demand equal to the difference between supply and demand, and the associated costs are zero. Similarly, in case demand is more than supply, a dummy source (origin) is added to the table with its supply equal to the difference between the quantities demanded and supplied, and again the costs are zero.
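A sketch of the Least Cost Method follows. The full cost matrix of the example is not reproduced in these notes, so it is reconstructed here from the unit costs quoted in the steps above (AD = 5, AE = 8, AF = 4, BD = 6, BE = 6, BF = 3, CD = 3, CE = 9, CF = 6); treat that matrix as an assumption. Ties are broken in favour of the cell that can take the larger quantity, as in the example.

```python
def least_cost(supply, demand, cost):
    supply, demand = supply[:], demand[:]
    m, n = len(supply), len(demand)
    allocation = [[0] * n for _ in range(m)]
    while any(supply) and any(demand):
        # cells whose row and column are not yet exhausted
        cells = [(i, j) for i in range(m) for j in range(n)
                 if supply[i] > 0 and demand[j] > 0]
        # lowest cost first; on a tie, prefer the cell that can take more units
        i, j = min(cells, key=lambda c: (cost[c[0]][c[1]],
                                         -min(supply[c[0]], demand[c[1]])))
        qty = min(supply[i], demand[j])
        allocation[i][j] = qty
        supply[i] -= qty
        demand[j] -= qty
    return allocation

supply = [50, 40, 60]
demand = [20, 95, 35]
cost = [[5, 8, 4],      # reconstructed matrix (assumption)
        [6, 6, 3],
        [3, 9, 6]]

alloc = least_cost(supply, demand, cost)
total = sum(alloc[i][j] * cost[i][j] for i in range(3) for j in range(3))
print(alloc)    # [[0, 50, 0], [0, 5, 35], [20, 40, 0]]
print(total)    # 955, matching the total computed above
```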
Vogel’s Approximation Method
Vogel's Approximation Method, or VAM, is an iterative procedure used to find an initial feasible solution of the transportation problem. As in the Least Cost Method, the shipping cost is taken into consideration here too, but in a relative sense.
The following is the flow chart showing the steps involved in solving the transportation problem
using the Vogel’s Approximation Method:
The concept of Vogel’s Approximation Method can be well understood through an illustration
given below:
1. First of all, the difference between the two least-cost cells is calculated for each row and column, as can be seen in the iteration shown for each row and column. Then the largest difference is selected, which is 4 in this case. So, 20 units are allocated to cell BD, since the minimum-cost cell in that row or column is chosen for the allocation. Now, only 20 units are left with source B.
2. Column D is deleted. Again the difference between the least-cost cells is calculated for each row and column, as seen in the iteration below. The largest difference value comes to 3, so allocate 35 units to cell AF and 15 units to cell AE. With this, the supply of source A and the demand of destination F get saturated, so delete both row A and column F.
3. Now, only column E is left; since no difference can be calculated, allocate 60 units to cell CE and 20 units to cell BE, as only 20 units are left with source B. Hence the demand and supply are completely met.
Now the total cost can be computed, by multiplying the units assigned to each cell with the cost
concerned. Therefore,
Total Cost = 20*3 + 35*1 + 15*4 + 60*4 + 20*8 = Rs 555
Note: Vogel's Approximation Method is also called the Penalty Method, because the cost differences chosen are nothing but the penalties for not choosing the least-cost routes.
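A sketch of the VAM procedure is given below. The cost matrix of the VAM illustration above is not reproduced in these notes, so the code is applied to the reconstructed matrix used in the Least Cost sketch, purely to show the mechanics of the penalty calculation; the allocation it prints therefore belongs to that matrix, not to the Rs 555 illustration.

```python
def vogel(supply, demand, cost):
    supply, demand = supply[:], demand[:]
    m, n = len(supply), len(demand)
    allocation = [[0] * n for _ in range(m)]
    rows, cols = set(range(m)), set(range(n))

    def penalty(costs):
        # difference between the two smallest costs (0 if only one cell is left)
        s = sorted(costs)
        return s[1] - s[0] if len(s) > 1 else 0

    while rows and cols:
        row_pen = {i: penalty([cost[i][j] for j in cols]) for i in rows}
        col_pen = {j: penalty([cost[i][j] for i in rows]) for j in cols}
        best_row = max(row_pen, key=row_pen.get)
        best_col = max(col_pen, key=col_pen.get)
        if row_pen[best_row] >= col_pen[best_col]:
            i = best_row
            j = min(cols, key=lambda c: cost[i][c])   # cheapest cell in that row
        else:
            j = best_col
            i = min(rows, key=lambda r: cost[r][j])   # cheapest cell in that column
        qty = min(supply[i], demand[j])
        allocation[i][j] = qty
        supply[i] -= qty
        demand[j] -= qty
        if supply[i] == 0:
            rows.discard(i)
        if demand[j] == 0:
            cols.discard(j)
    return allocation

supply = [50, 40, 60]
demand = [20, 95, 35]
cost = [[5, 8, 4], [6, 6, 3], [3, 9, 6]]              # assumed matrix
alloc = vogel(supply, demand, cost)
print(alloc, sum(alloc[i][j] * cost[i][j] for i in range(3) for j in range(3)))
```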
Stepping Stone Method
The Stepping Stone Method is used to check the optimality of the initial feasible solution determined by any of the methods, viz. the North-West Corner Method, the Least Cost Method or Vogel's Approximation Method. The stepping stone method is thus a procedure for finding the potential of the non-basic variables (empty cells) in terms of the objective function.
Through the stepping stone method, we determine what the effect on the transportation cost would be if one unit were assigned to an empty cell. With the help of this method, we come to know whether the solution is optimal or not.
The following series of steps is involved in checking the optimality of the initial feasible solution using the stepping stone method:
1. The prerequisite condition to solve for the optimality is to ensure that the number of occupied
cells is exactly equal to m+n-1, where ‘m’ is the number of rows, while ‘n’ is equal to the
number of columns.
2. Firstly, an empty cell is selected and then a closed path, called a "closed loop", is created which starts from that unoccupied cell and returns to it. For creating a closed loop the following conditions should be kept in mind:
o In a closed loop, cells are selected in a sequence such that one cell is unused/unoccupied, and all
other cells are used/occupied.
o A pair of consecutive used cells lies either in the same row or the same column.
o No three consecutive occupied cells can lie in the same row or column.
o The first and last cells in the closed loop lie either in the same row or the same column.
o Only horizontal and vertical movement is allowed.
Once the loop is created, assign “+” or “–“ sign alternatively on each corner cell of the loop, but
begin with the “+” sign for the unoccupied cell.
Repeat these steps again until all the unoccupied cells get evaluated.
Now, if all the computed changes are zero or positive, the optimal solution has been reached. But if any value turns out to be negative, there is scope to reduce the transportation cost further. In that case, select the unoccupied cell with the most negative change and assign as many units as possible to it. Subtract the units added to the unoccupied cell from the other cells carrying a negative sign in the loop, so as to keep the demand and supply requirements balanced.
For example, suppose the following matrix shows the initial feasible solution and the stepping stone method is adopted to check its optimality:
With the new matrix so formed, the empty cells are again evaluated through loop formation and signs are assigned accordingly. The cell offering the greatest cost reduction is assigned the units, and this process is repeated until the optimum solution is obtained, i.e. until no unoccupied cell shows a negative net cost change.
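The arithmetic at the heart of the method is simply an alternating sum of unit costs around the chosen loop. The tiny sketch below shows that computation for a hypothetical loop; the costs are illustrative only.

```python
# Net change in cost per unit shipped around a closed loop, where the first
# cost belongs to the evaluated empty cell and signs alternate +, -, +, -, ...
def net_change(loop_costs):
    return sum(c if k % 2 == 0 else -c for k, c in enumerate(loop_costs))

# e.g. empty cell cost 4, then occupied cells 8, 6, 5 around the loop
print(net_change([4, 8, 6, 5]))   # 4 - 8 + 6 - 5 = -3 -> cost can be reduced
```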
Modified Distribution Method (MODI Method)
The Modified Distribution Method or MODI is an efficient method of checking the optimality
of the initial feasible solution.
The concept of MODI can be further comprehended through an illustration given below:
1. Initial basic feasible solution is given below
3. The next step is to calculate the opportunity cost of the unoccupied cells (AF, BD, BF, CD) by using the following formula:
Cij − (ui + vj)
6. Again, repeat the steps from 1 to 4 i.e. find out the opportunity costs for each unoccupied
cell and assign the maximum possible units to the cell having the largest opportunity cost. This
process will go on until the optimum solution is reached.
The modified distribution method is an improvement over the stepping stone method, since it can be applied more efficiently when a large number of sources and destinations are involved, a situation which becomes quite difficult or tedious with the stepping stone method. The modified distribution method reduces the number of steps involved in the evaluation of the empty cells, thereby minimizing the complexity, and gives a straightforward computational scheme through which the opportunity cost of each empty cell can be determined.
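A minimal sketch of the MODI computation is given below: the numbers ui and vj are obtained from the occupied cells (taking u1 = 0), and the opportunity cost Cij − (ui + vj) is then evaluated for every unoccupied cell. The worked table of the illustration above is not reproduced in these notes, so the sketch reuses the North-West Corner allocation from earlier together with the reconstructed cost matrix; both are assumptions.

```python
cost = [[5, 8, 4],
        [6, 6, 3],
        [3, 9, 6]]
alloc = [[20, 30, 0],          # North-West Corner solution from earlier
         [0, 40, 0],
         [0, 25, 35]]
m, n = len(cost), len(cost[0])
occupied = [(i, j) for i in range(m) for j in range(n) if alloc[i][j] > 0]

u = [None] * m
v = [None] * n
u[0] = 0                                     # convention: u1 = 0
while any(x is None for x in u) or any(x is None for x in v):
    for i, j in occupied:                    # ui + vj = Cij on occupied cells
        if u[i] is not None and v[j] is None:
            v[j] = cost[i][j] - u[i]
        elif v[j] is not None and u[i] is None:
            u[i] = cost[i][j] - v[j]

for i in range(m):
    for j in range(n):
        if (i, j) not in occupied:
            print((i, j), "opportunity cost:", cost[i][j] - (u[i] + v[j]))
# A negative opportunity cost means the current solution is not yet optimal.
```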
Unit 3 Assignment Model & Game Theory
Assignment Model: Hungarian Algorithm and its Applications
The assignment problem is a special type of linear programming problem which deals with the allocation of various resources to various activities on a one-to-one basis. It does so in such a way that the cost or time involved in the process is minimized and the profit or sale is maximized. Though these problems can be solved by the simplex method or by the transportation method, the assignment model gives a simpler approach for them.
In a factory, a supervisor may have six workers available and six jobs to be done. He will have to decide which job should be given to which worker. The problem is on a one-to-one basis. This is an assignment problem.
1. Assignment Model:
Suppose there are n facilities and n jobs; it is clear that in this case there will be n assignments. Each facility, or say worker, can perform each job, one at a time. But there should be a certain procedure by which the assignment is made so that the profit is maximized or the cost or time is minimized.
In the table, Cij is defined as the cost when the jth job is assigned to the ith worker. It may be noted here that this is a special case of the transportation problem in which the number of rows is equal to the number of columns.
Mathematical Formulation:
Any basic feasible solution of an assignment problem consists of (2n − 1) variables, of which (n − 1) variables are zero; here n is the number of jobs or the number of facilities. Due to this high degeneracy, solving the problem by the usual transportation method would be complex and time-consuming. Thus a separate technique is derived for it. Before going to the actual solution method it is very important to formulate the problem.
Suppose xij is a variable which is defined as
xij = 1 if the ith job is assigned to the jth machine or facility, and
xij = 0 if the ith job is not assigned to the jth machine or facility.
Now, as the problem is on a one-to-one basis, each job is assigned to exactly one facility and each facility receives exactly one job; that is, the xij satisfy Σj xij = 1 for every i and Σi xij = 1 for every j, and the objective is to minimize the total cost Σi Σj Cij xij.
1. Locate the smallest cost element in each row of the given cost table, starting with the first row. This smallest element is subtracted from each element of that row, so we get at least one zero in each row of the new table.
2. Having constructed the table (as in step 1), consider its columns. Starting from the first column, locate the smallest cost element in each column and subtract it from each element of that column. Having performed steps 1 and 2, we will have at least one zero in each column of the reduced cost table.
3. Now, the assignments are made for the reduced table in the following manner.
(i) Rows are examined successively until a row with exactly one zero is found. An assignment is made to this single zero by putting a square (□) around it, and all other zeros in the corresponding column are crossed out (×), because they will not be used to make any other assignment in that column. This step is carried out for each row.
(ii) Step 3(i) is now performed on the columns as follows: columns are examined successively till a column with exactly one zero is found. An assignment is made to this single zero by putting a square around it and, at the same time, all other zeros in the corresponding row are crossed out (×). This step is carried out for each column.
(iii) Steps 3(i) and 3(ii) are repeated till all the zeros are either marked or crossed out. Now, if the number of marked zeros (i.e. the assignments made) equals the number of rows (or columns), the optimum solution has been reached: there is exactly one assignment in each row and each column. Otherwise, some rows or columns are left without an assignment; in this case, go to step 4.
4. At this stage, draw the minimum number of lines (horizontal and vertical) necessary to cover all the zeros in the matrix obtained in step 3. The following procedure is adopted:
(i) Tick mark (✓) all rows that do not have any assignment.
(ii) Now tick mark (✓) all the columns that have a zero in the tick-marked rows.
(iii) Now tick mark all the rows that are not already marked and that have an assignment in the marked columns.
(iv) Steps 4(i), 4(ii) and 4(iii) are repeated until no more rows or columns can be marked.
(v) Now draw straight lines which pass through all the unmarked rows and the marked columns. It may also be noted that in an n × n matrix, fewer than n lines will cover all the zeros whenever a complete assignment does not yet exist among them.
5. In step 4, if the number of lines drawn is equal to n (the number of rows), the optimum solution has been reached; if not, go to step 6.
6. Select the smallest element among all the uncovered elements. This element is subtracted from every uncovered element and added to every element which lies at the intersection of two lines. This gives the matrix for fresh assignments.
7. Repeat the procedure from step (3) until the number of assignments becomes equal to the
number of rows or number of columns.
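In practice, the same assignment problem can be solved with a library routine. The sketch below uses scipy.optimize.linear_sum_assignment on a hypothetical 4 × 4 cost matrix, purely to show the call; it returns the optimal one-to-one assignment and its total cost.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

cost = np.array([[9, 2, 7, 8],     # hypothetical worker-vs-job costs
                 [6, 4, 3, 7],
                 [5, 8, 1, 8],
                 [7, 6, 9, 4]])

rows, cols = linear_sum_assignment(cost)   # optimal one-to-one assignment
print(list(zip(rows, cols)))               # (worker, job) pairs
print(cost[rows, cols].sum())              # minimum total cost
```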
Maximization Assignment Problem
There are problems where certain facilities have to be assigned to a number of jobs so as to maximize the overall performance of the assignment. Such a problem can be converted into a minimization problem in the following way, and the Hungarian method can then be used for its solution.
Reduce the matrix column-wise and draw the minimum number of lines needed to cover all the zeros in the matrix, as shown in the table.
Matrix Reduced Column-wise and Zeros Covered
Number of lines drawn ≠ Order of matrix. Hence not optimal.
Select the least uncovered element, i.e. 4, subtract it from the other uncovered elements, add it to the elements at the intersections of lines, and leave the elements covered by a single line unchanged, as shown in the table.
Added & Subtracted the least Uncovered Element
Now, number of lines drawn = Order of matrix, hence optimality is reached. There are two
alternative assignments due to presence of zero elements in cells (4, C), (4, D), (5, C) and (5, D).
Two Alternative Assignments
Therefore,
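The worked matrices of this maximization example are not reproduced in these notes, so the sketch below uses a hypothetical 3 × 3 profit matrix to show the two equivalent ways of handling the maximization case: converting to a loss matrix by subtracting every element from the largest element, or asking the solver to maximize directly.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

profit = np.array([[40, 60, 15],           # hypothetical profit matrix
                   [25, 30, 45],
                   [55, 30, 25]])

loss = profit.max() - profit               # equivalent minimization matrix
r1, c1 = linear_sum_assignment(loss)
r2, c2 = linear_sum_assignment(profit, maximize=True)

print(profit[r1, c1].sum())                # maximum total profit via the loss matrix
print(profit[r2, c2].sum())                # the same maximum, computed directly
```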
Game Theory
Concept of Game Theory
Game theory was introduced by a mathematician, John von Neumann, and an economist, Oskar Morgenstern, in 1944.
This theory aims at providing a systematic approach to business decision making of
organizations. It is applied to evaluate the situations where individuals and organizations have
contradictory objectives.
For example, while settling a war between two nations, every nation tries to get the settlement in
its favor only during peace meetings/negotiations.
In such a case, game theory helps in solving the problem and arriving at a common consensus.
Apart from this, the theory can be applied to analyze activities, such as legal and political
strategies and economic behavior.
Over time, game theory has emerged as a vast and complex subject. The games in game theory range from simple to complex. The main aim of applying game theory is to find out the best strategy to resolve a particular problem.
Moreover, game theory helps organizations by increasing the probability of earning maximum profit and reducing the probability of losses. Game theory has applications in sociology, psychology, and mathematics.
Assumptions of Game Theory:
Game theory provides an appropriate solution to a problem if its conditions are properly satisfied. These conditions are often termed the assumptions of game theory.
Some of these assumptions are as follows:
1. Assumes that a player can adopt multiple strategies for solving a problem.
2. Assumes that there is an availability of pre-defined outcomes.
3. Assumes that the overall outcome for all players would be zero at the end of the game.
4. Assumes that all players in the game are aware of the game rules as well as outcomes of other
players.
5. Assumes that players take a rational decision to increase their profit.
Among the aforementioned assumptions, the last two restrict the application of game theory in the real world.
Structure of a Game:
Game theory is based on the concept of strategy and payoffs. Strategy indicates an action that a
player takes when challenged to solve a particular problem. On the other hand, payoff refers to
the outcome of the strategy applied by the player. For example, suppose two friends are playing a coin-flipping game.
In this game, one friend tosses the coin, and the other friend calls for head or tail. In case, the
caller’s projection about the coin is correct, then he/she gets the coin. However, in case the
caller’s projection is wrong, then he/she would lose the coin and the other person gets the coin.
Therefore, in this game, the caller’s projection of head or tail would be regarded as the strategy
and the payoff would be the result of coin flipping, which means that either caller wins the coin
or tosser wins the coin. In coin flipping game, the outcome or payoff depends on the caller as
he/she projected the side of coin. However, in other games, the payoff may depend on more than
one player.
Let us understand the tabular representation of payoff and strategies adopted in a game with the
help of an example. Suppose two competing organizations, ABC and XYZ, decide to increase
their profits by making changes in the prices of products. In this case, it is assumed that both the
organizations can adopt two strategies. One is to increase the price level of their product and
another is to maintain the same price level.
As per these strategies, there can be four possible combinations of strategies, which are as
follows:
1. Both ABC and XYZ have increased the prices of their products.
2. Only ABC has increased the price of its product, while XYZ has not made any changes in the
price level of its products.
3. Only XYZ has increased its prices, while ABC has maintained the constant price level.
4. Both ABC and XYZ have maintained the same price level of their products
Table-1 shows the tabular representation of payoffs and strategies of organizations ABC
and XYZ:
In Table-1, the first numerical value of every cell represents the payoff of ABC, while the second
numerical value in each cell represents the payoff of XYZ. The tabular representation of
strategies and payoffs is termed a payoff matrix. Therefore, in the present case, Table-1 is the payoff matrix for the organizations ABC and XYZ.
Two Person Zero-Sum Game
The simplest model is a duopoly market in which each duopolist attempts to maximise his
market share.
Given this goal, whatever a firm gains (by increasing its share of the market) the other firm loses
(because of the decrease in its share).
Thus any gain of one rival is offset by the loss of the other, and the net gain sums up to zero.
Hence the name ‘zero-sum game’.
The assumptions of the model are:
1. The firms have a given, well-defined goal. In our particular example the goal is maximisation of
the market share.
2. Each firm knows the strategies open to it and to its rival, or concentrates on the most important
of these strategies.
3. Each firm knows with certainty the payoffs of all combinations of the strategies being
considered. This implies that the firm knows its total revenue, total costs and total profit from
each combination of strategies.
4. The actions chosen by the duopolists do not affect the total size of the market.
5. Each firm chooses its strategy ‘expecting the worst from its rival’, that is, each firm acts in the
most conservative way, expecting that the rival will choose the best possible counter-strategy
open to him. This behaviour is defined as ‘rational’.
6. In the zero-sum game there is no incentive for collusion, given assumption 4, since the goals of
the firms are diametrically opposed.
In order to find the equilibrium solution we need information on the payoff matrix of the two
firms. In our example the payoffs will be shares of the market resulting from the adoption of any
two strategies by the rivals. Assume that Firm I has four strategies open to it and Firm II has five
strategies. The payoff matrices of the duopolists are shown in tables 19.2 and 19.3.
Clearly the sum of the payoffs in corresponding cells of the two payoff tables adds up to unity,
since the numbers in these cells are shares, and the total market is shared between the two firms.
In general, in the two-person zero-sum game we need not write both payoff matrices because of
the nature of the game: the goals are opposing, and, in our example, the payoff table of Firm I
contains indirectly information about the payoff of Firm II. Still we start by showing both tables,
and then we show how the equilibrium solution can be found from only the first payoff matrix.
Choice of strategy by Firm I:
Firm I examines the outcomes of each strategy open to it. That is, Firm I examines each row of
its payoff matrix and finds the most favourable outcome of the corresponding strategy, because
the firm expects the rival to adopt the most advantageous action open to him. This is the
behavioural rule implied by assumption 5 of this model
Thus:
If Firm I adopts strategy A1, the worst outcome that it may expect is a share of 0.10 (which will be realized if the rival Firm II adopts its most favourable strategy B1).
If Firm I adopts strategy A2, the worst outcome will be a share of 0.30 (if the rival adopts the best
action for him, B2).
If Firm I adopts strategy A3, the worst outcome will be a share of 0.20 (if Firm II chooses the
best open alternative, B3).
If Firm I adopts strategy A4, the worst outcome will be a share of 0.15 (which would be realised
by action B2 of Firm II).
Among all these minima (that is, among the above worst outcomes) Firm I chooses the
maximum, the ‘best of the worst’. This is called a maximin strategy, because the firm chooses
the maximum among the minima. In our example the maximin strategy of Firm I is A2, that is,
the strategy which yields a share of 0.30.
Choice of strategy by Firm II:
Firm II behaves in exactly the same way. The only difference is that Firm II examines the
columns of its payoff table, because these columns include the results-payoffs of each of the
strategies open to Firm II. For each strategy, that is, for each column, Firm II finds the worst
outcome (on the assumption that the rival will choose the best), and among these worst outcomes
Firm II chooses the best. Thus, if Firm II uses its own payoff table, its behaviour is a maximin
behaviour identical to the behaviour of Firm I.
However, in the zero-sum game only one payoff matrix is adequate for the equilibrium solution.
In our example the first payoff table will be used not only by Firm I but also by Firm II. Thus
concentrating on the first payoff table we may restate the decision-making process of Firm II as
follows. Firm II examines the columns of the (first) payoff matrix because these columns contain
the information about the payoffs of its strategies.
For each column-strategy Firm II finds the maximum payoff (of Firm I) because this is the worst
situation the firm (II) will face if it adopts the strategy corresponding to that column. Thus for
strategy B1 the worst outcome (for Firm II) is 0.40; for strategy B2 the worst outcome is 0.30; for strategy B3 the worst outcome is 0.50; for strategy B4 the worst result is 0.60; for strategy B5 the worst result is 0.50. Among these maxima of each column-strategy, Firm II will choose the strategy with the minimum value. Thus the strategy of Firm II is a minimax strategy, since it involves the choice of the minimum among the maxima of the payoffs. (Table 19.4.)
It should be stressed that although different terms are used for the choice of the two firms
(maximin behaviour of Firm I, minimax behaviour of Firm II), the behavioural rule for both
firms is the same: each firm expects the worst from its rival.
In our example the equilibrium solution is strategy A2 for Firm I and B2 for Firm II. This solution yields shares of 0.30 for Firm I and 0.70 for Firm II. It is an equilibrium solution because it is the one preferred by both firms. This solution is called the 'saddle point', and the preferred strategies A2 and B2 are called 'dominant strategies'.
It should be clear that there exists no such equilibrium (saddle) solution if there is no payoff
which is preferred by both firms simultaneously. Under certain mathematical conditions other
solutions and strategy choices can be determined. The analysis of the resulting mixed strategies
requires a sophisticated exposition of utility theory and random selection which is beyond the
scope of this book.
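The maximin/minimax reasoning above is straightforward to compute. In the sketch below the payoff matrix (market shares of Firm I) is hypothetical, but it is constructed so that its row minima and column maxima reproduce the figures quoted in the text (0.10, 0.30, 0.20, 0.15 and 0.40, 0.30, 0.50, 0.60, 0.50).

```python
payoff = [
    [0.10, 0.20, 0.15, 0.30, 0.25],   # strategy A1
    [0.40, 0.30, 0.50, 0.55, 0.45],   # strategy A2
    [0.35, 0.25, 0.20, 0.40, 0.50],   # strategy A3
    [0.25, 0.15, 0.35, 0.60, 0.20],   # strategy A4
]

row_min = [min(row) for row in payoff]            # worst case of each Firm I strategy
maximin = max(row_min)                            # Firm I: best of the worst
col_max = [max(col) for col in zip(*payoff)]      # worst case of each Firm II strategy
minimax = min(col_max)                            # Firm II: best of the worst

print("row minima:", row_min, "-> maximin =", maximin)
print("column maxima:", col_max, "-> minimax =", minimax)
print("saddle point exists:", maximin == minimax)  # True here: value of the game is 0.30
```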
1. Uncertainty Model:
The assumption that each firm knows with certainty the exact value of the payoff of each
strategy is unrealistic. The most probable situation in the real business world is that the firm, by
adopting a certain strategy, may expect a range of results for each counter-strategy of the rival,
each result with an associated probability. Thus the payoff matrix is constructed so as to include
the expected value of each payoff.
The expected value is the sum of the products of the possible outcomes of a pair of strategies (adopted by the two firms), each multiplied by its probability:
E(Gij) = Σs (gsi × Ps), summed over the n possible outcomes,
where gsi = the sth of the n possible outcomes of strategy i of Firm I (given that Firm II has chosen strategy j), and Ps = the probability of the sth outcome of strategy i.
For example, assume that Firm I chooses strategy A1 and Firm II reacts with strategy B1. This pair of simultaneous strategies may yield various shares for Firm I, each with a certain probability, as shown in the second column of table 19.5. Thus the expected payoff of the pair of strategies A1 and B1 is
E(G11) = (0.00)(0.00) + (0.05)(0.05) + (0.15)(0.05) + … + (0.95)(0.02) + (1)(0) = 0.458
In a similar way we find the expected payoff of all combinations of strategies. Given the matrix
of expected payoffs, the behavioural pattern of the firms is the same as in the certainty model.
That is:
Firm I adopts the maximin strategy. It finds for each row the minimum expected payoff, and
among these minima the firm chooses the one with the highest value (the maximum among the
minima).
Firm II adopts the minimax strategy. It finds for each column the maximum expected payoff, and
among these maxima Firm II chooses the one with the smallest value (the minimum among the
maxima).
Although the uncertainty zero-sum game seems simple, its assumptions are quite stringent.
Saddle Point
In mathematics, a saddle point or minimax point is a point on the surface of the graph of a
function where the slopes (derivatives) in orthogonal directions are all zero (a critical point), but
which is not a local extremum of the function. An example of a saddle point shown on the right
is when there is a critical point with a relative minimum along one axial direction (between
peaks) and at a relative maximum along the crossing axis. However, a saddle point need not be
in this form. For example, the function {\displaystyle f(x,y)=x^{2}+y^{3}} {\displaystyle
f(x,y)=x^{2}+y^{3}} has a critical point at {\displaystyle (0,0)} (0,0) that is a saddle point since
it is neither a relative maximum nor relative minimum, but it does not have a relative maximum
or relative minimum in the y-direction.
The name derives from the fact that the prototypical example in two dimensions is a surface that
curves up in one direction, and curves down in a different direction, resembling a riding saddle
or a mountain pass between two peaks forming a landform saddle. In terms of contour lines, a
saddle point in two dimensions gives rise to a contour graph or trace in which the contour
corresponding to the saddle point’s value appears to intersect itself.
Odds Method
The odds-algorithm is a mathematical method for computing optimal strategies for a class of
problems that belong to the domain of optimal stopping problems. Their solution follows from
the odds-strategy, and the importance of the odds-strategy lies in its optimality, as explained
below.
The odds-algorithm applies to a class of problems called last-success-problems. Formally, the
objective in these problems is to maximize the probability of identifying in a sequence of
sequentially observed independent events the last event satisfying a specific criterion (a “specific
event”). This identification must be done at the time of observation. No revisiting of preceding
observations is permitted. Usually, a specific event is defined by the decision maker as an event
that is of true interest in the view of “stopping” to take a well-defined action. Such problems are
encountered in several situations.
Two different situations exemplify the interest in maximizing the probability to stop on a last
specific event.
1. Suppose a car is advertised for sale to the highest bidder (best “offer”). Let n potential buyers
respond and ask to see the car. Each insists upon an immediate decision from the seller to accept
the bid, or not. Define a bid as interesting, and coded 1 if it is better than all preceding bids, and
coded 0 otherwise. The bids will form a random sequence of 0s and 1s. Only 1s interest the
seller, who may fear that each successive 1 might be the last. It follows from the definition that
the very last 1 is the highest bid. Maximizing the probability of selling on the last 1 therefore
means maximizing the probability of selling best.
2. A physician, using a special treatment, may use the code 1 for a successful treatment, 0
otherwise. The physician treats a sequence of n patients the same way, and wants to minimize
any suffering, and to treat every responsive patient in the sequence. Stopping on the last 1 in
such a random sequence of 0s and 1s would achieve this objective. Since the physician is no
prophet, the objective is to maximize the probability of stopping on the last 1.
Game Theory Pure and Mixed Strategies, Principle of Dominance
Pure strategy
A pure strategy is an unconditional, defined choice that a person makes in a situation or game.
For example, in the game of Rock-Paper-Scissors, if a player chooses to play only scissors in each and every independent trial, regardless of the other player's strategy, then choosing scissors is that player's pure strategy. The probability of choosing scissors is equal to 1, and all other options (paper and rock) are chosen with probability 0. The set of all options (i.e. rock, paper, and scissors) available in this game is known as the strategy set.
Mixed strategy
A mixed strategy is an assignment of probability to all choices in the strategy set. Using the
example of Rock-Paper-Scissors, if a person’s probability of employing each pure strategy is
equal, then the probability distribution of the strategy set would be 1/3 for each option, or
approximately 33%. In other words, a person using a mixed strategy incorporates more than one
pure strategy into a game.
The definition of a mixed strategy does not rule out the possibility of one or more options never being chosen (e.g. p(scissors) = 0.5, p(rock) = 0.5, p(paper) = 0). This means that, in a way, a pure strategy can also be considered a mixed strategy at its extreme, with a binary probability assignment (setting one option to 1 and all others to 0). For this article, we shall say that pure strategies are not mixed strategies.
In the game of tennis, each point is a zero-sum game with two players (one being the server S,
and the other being the returner R). In this scenario, assume each player has two strategies
(forehand F, and backhand B). Observe the following hypothetical payoff matrix:
The strategies FS or BS are observed for the server when the ball is served to the side of the
service box closest to the returner’s forehand or backhand, respectively. For the returner, the
strategies FR and BR are observed when the returner moves to the forehand or backhand side to
return the serve, respectively. This gives us the payoffs when the returner receives the serve
correctly (FS,FR or BS,BR), or incorrectly (FS,BR or BS,FR). The payoffs to each player for every
action are given in pure strategy payoffs, as each player is only guaranteed their payoff given the
opponent’s strategy is employed 100% of the time. Given these pure strategy payoffs, we can
calculate the mixed strategy payoffs by figuring out the probability each strategy is chosen by
each player.
So you are Roger. It is apparent to you that a pure strategy would be exploitable. If you serve to
the backhand 100% of the time, it would be easy for the opponent to catch on and return from the
backhand side more often than the forehand, maximizing his expected payoff. Same goes for the
serve to the forehand. But how often should you mix your strategy and serve to each side to
minimize your opponent’s chances of winning? Calculating these probabilities would give us our
mixed strategy Nash equilibria, or the probabilities that each strategy is used which would
minimize the opponent’s expected payoff. In the following article, we will look at how to find
mixed strategy Nash equilibria, and how to interpret them.
Pure and Mixed Strategies:
In a pure strategy, players adopt a strategy that provides the best payoffs. In other words, a pure
strategy is the one that provides maximum profit or the best outcome to players. Therefore, it is
regarded as the best strategy for every player of the game. In the previously cited example
(Table-1), the increase in the prices of organizations’ products is the best strategy for both of
them.
This is because if both of them increase the prices of their products, they would earn maximum
profits. However, if only one of the organizations increases the prices of its products, then it
would incur losses. In such a case, an increase in prices is regarded as a pure strategy for
organizations ABC and XYZ.
On the other hand, in a mixed strategy, players adopt different strategies to get the best possible outcome. For example, in cricket a bowler cannot throw the same type of ball every time, because doing so makes the batsman aware of the type of ball. In such a case, the batsman may make more runs.
However, if the bowler throws the ball differently every time, it may leave the batsman puzzled about the type of ball he would be getting the next time. Therefore, the strategies adopted by the bowler and the batsman would be mixed strategies, which are shown in Table-2:
In Table-2, when the batsman's expectation and the bowler's ball type are the same, the percentage of runs made by the batsman would be 30%. However, when the batsman's expectation is different from the type of ball he gets, the percentage of runs made would reduce to 10%. In case the bowler or the batsman uses a pure strategy, either of them may suffer a loss.
Therefore, it is preferred that the bowler or batsman adopt a mixed strategy in this case. For example, the bowler throws a spin ball and a fastball in a 50-50 combination and the batsman predicts the 50-50 combination of the spin ball and fastball. In such a case, the average run percentage of the batsman would be equal to 20%.
This is because all four combinations occur with a probability of 25% each, and the average over the four combinations can be derived as follows:
0.25(30%) + 0.25(10%) + 0.25(30%) + 0.25(10%) = 20%
However, it may be possible that when the bowler is throwing a 50-50 combination of spin ball and fastball, the batsman is not able to predict the right type of ball every time. This would decrease his average run rate below 20%. Similarly, if the bowler throws the ball with a 60-40 combination of fastball and spin ball respectively, and the batsman expects either a fastball or a spin ball randomly (still 50-50), the average of the batsman's hits remains 20%.
The probabilities of four outcomes now become:
Anticipated fastball and fastball thrown: 0.50*0.60 = 0.30
Anticipated fastball and spin ball thrown: 0.50*0.40 = 0.20
Anticipated spin ball and spin ball thrown: 0.50*0.40 = 0.20
Anticipated spin ball and fastball thrown: 0.50*0.60 = 0.30
When we multiply the probabilities with the payoffs given in Table-2, we get
0.30(30%) + 0.20(10%) + 0.20(30%) + 0.30(10%) = 20%
This shows that the outcome does not depend on the bowler's combination of fastball and spin ball; it depends on the batsman's prediction, since he may get any type of ball from the bowler.
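The run-percentage arithmetic above is easy to verify in a few lines. The sketch below encodes the Table-2 payoffs (30% on a match, 10% on a mismatch) and evaluates the expected run percentage for the two mixing combinations discussed.

```python
payoff = [[30, 10],   # batsman anticipates fast: fastball thrown, spin thrown
          [10, 30]]   # batsman anticipates spin: fastball thrown, spin thrown

def expected_runs(p_batsman, p_bowler):
    # sum of probability(combination) * payoff(combination)
    return sum(p_batsman[i] * p_bowler[j] * payoff[i][j]
               for i in range(2) for j in range(2))

print(expected_runs([0.5, 0.5], [0.5, 0.5]))   # 20.0 (the 50-50 vs 50-50 case)
print(expected_runs([0.5, 0.5], [0.6, 0.4]))   # 20.0 (the 60-40 bowler case)
```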
Principle of Dominance
The principle of dominance in game theory (also known as the dominant strategy or dominance method) states that if one strategy of a player dominates another strategy in all conditions, then the latter strategy can be ignored.
A strategy dominates another only if it is preferable to it in all conditions. The concept of dominance is especially useful for the evaluation of two-person zero-sum games where a saddle point does not exist.
Dominant Strategy Rules (Dominance Principle)
If all the elements of a column (say the ith column) are greater than or equal to the corresponding elements of any other column (say the jth column), then the ith column is dominated by the jth column and can be deleted from the matrix.
If all the elements of a row (say ith row) are less than or equal to the corresponding elements of
any other row (say jth row), then the ith row is dominated by the jth row and can be deleted from
the matrix.
Dominance Example: Game Theory
For a 2 × 2 game with payoff matrix
a b
c d
and no saddle point, the probabilities of the two players' first strategies and the value of the game are given by:
p = (d − c) / [(a + d) − (b + c)]
q = (d − b) / [(a + d) − (b + c)]
Value of the game, V = (ad − bc) / [(a + d) − (b + c)]
Algebraic Method Example 1: Game Theory
Consider the game of matching coins. Two players, A & B, put down a coin. If coins match (i.e.,
both are heads or both are tails) A gets rewarded, otherwise B. However, matching on heads
gives a double premium. Obtain the best strategies for both players and the value of the game.
                Player B
                 I    II
Player A    I    2   -1
            II  -1    1
Solution.
This game has no saddle point.
p = [1 − (−1)] / [(2 + 1) − (−1 − 1)] = 2/5, so 1 − p = 3/5
q = [1 − (−1)] / [(2 + 1) − (−1 − 1)] = 2/5, so 1 − q = 3/5
V = [2 × 1 − (−1) × (−1)] / [(2 + 1) − (−1 − 1)] = 1/5
Example 2: Algebraic Method in Game Theory
Solve the game whose payoff matrix is given below:
                Player B
                 I    II
Player A    I    1    7
            II   6    2
Solution.
This game has no saddle point.
p = (2 − 6) / [(1 + 2) − (7 + 6)] = 2/5, so 1 − p = 3/5
q = (2 − 7) / [(1 + 2) − (7 + 6)] = 1/2, so 1 − q = 1/2
V = [1 × 2 − 7 × 6] / [(1 + 2) − (7 + 6)] = 4
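The two examples can be checked with a short routine that applies the same formulas. The use of Fraction keeps the results exact, matching the hand calculations above.

```python
from fractions import Fraction

def solve_2x2(a, b, c, d):
    """Mixed strategies and value for a 2x2 game [[a, b], [c, d]] with no saddle point."""
    D = (a + d) - (b + c)
    p = Fraction(d - c, D)          # probability of player A's first strategy
    q = Fraction(d - b, D)          # probability of player B's first strategy
    V = Fraction(a * d - b * c, D)  # value of the game
    return p, q, V

print(solve_2x2(2, -1, -1, 1))      # Example 1: p = 2/5, q = 2/5, V = 1/5
print(solve_2x2(1, 7, 6, 2))        # Example 2: p = 2/5, q = 1/2, V = 4
```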
1. If all the values of zj − cj are zero or positive, bringing any other variable into the basis will not increase the value of the objective function; hence, the present solution maximizes the objective function. If there is more than one negative value, we choose as the entering basic variable the one for which zj − cj is least (most negative), as this improves the profit the most.
2. The numbers in the replacing row may be obtained by dividing the key row elements by the
pivot element and the numbers in the other two rows may be calculated by using the formula:
New number = old number − [(corresponding no. of key row) × (corresponding no. of key column)] / pivot element
Calculating values for table 2
x3 row
a11 = -1 – 1 X ((-1)/1) = 0
a12 = 2 – (-1) X ((-1)/1) = 1
a13 = 1 – 0 X ((-1)/1) = 1
a14 = 0 – 0 X ((-1)/1) = 0
a15 = 0 – 1 X ((-1)/1) = 1
b1 = 4 – 3 X ((-1)/1) = 7
x4 row
a21 = 3 – 1 X (3/1) = 0
a22 = 2 – (-1) X (3/1) = 5
a23 = 0 – 0 X (3/1) = 0
a24 = 1 – 0 X (3/1) = 1
a25 = 0 – 1 X (3/1) = -3
b2 = 14 – 3 X (3/1) = 5
x1 row
a31 = 1/1 = 1
a32 = -1/1 = -1
a33 = 0/1 = 0
a34 = 0/1 = 0
a35 = 1/1 = 1
b3 = 3/1 = 3
Table 2
cj                      3     2     0     0     0
cB    Basic variables   x1    x2    x3    x4    x5    Solution values
      B                                               b (= XB)
0     x3                0     1     1     0     1     7
0     x4                0     5     0     1    -3     5
3     x1                1    -1     0     0     1     3
      zj - cj           0    -5     0     0     3
Calculating values for the index row (zj – cj)
z1 – c1 = (0 X 0 + 0 X 0 + 3 X 1) – 3 = 0
z2 – c2 = (0 X 1 + 0 X 5 + 3 X (-1)) – 2 = -5
z3 – c3 = (0 X 1 + 0 X 0 + 3 X 0) – 0 = 0
z4 – c4 = (0 X 0 + 0 X 1 + 3 X 0) – 0 = 0
z5 – c5 = (0 X 1 + 0 X (-3) + 3 X 1) – 0 = 3
Key column = x2 column
Minimum (7/1, 5/5) = 1
Key row = x4 row
Pivot element = 5
x4 departs and x2 enters.
Calculating values for table 3
x3 row
a11 = 0 – 0 X (1/5) = 0
a12 = 1 – 5 X (1/5) = 0
a13 = 1 – 0 X (1/5) = 1
a14 = 0 – 1 X (1/5) = -1/5
a15 = 1 – (-3) X (1/5) = 8/5
b1 = 7 – 5 X (1/5) = 6
x2 row
a21 = 0/5 = 0
a22 = 5/5 = 1
a23 = 0/5 = 0
a24 = 1/5
a25 = -3/5
b2 = 5/5 = 1
x1 row
a31 = 1 – 0 X (-1/5) = 1
a32 = -1 – 5 X (-1/5) = 0
a33 = 0 – 0 X (-1/5) = 0
a34 = 0 – 1 X (-1/5) = 1/5
a35 = 1 – (-3) X (-1/5) = 2/5
b3 = 3 – 5 X (-1/5) = 4
Do not convert the fractions into decimals: many fractions cancel out during the process, whereas
converting them into decimals causes unnecessary complications.
Simplex Method: Final Optimal Table
cj                      3     2     0     0     0
cB    Basic variables   x1    x2    x3    x4    x5    Solution values
      B                                               b (= XB)
0     x3                0     0     1   -1/5   8/5    6
2     x2                0     1     0    1/5  -3/5    1
3     x1                1     0     0    1/5   2/5    4
      zj - cj           0     0     0     1     0
Since all the values of zj - cj are non-negative, this is the optimal solution.
x1 = 4, x2 = 1
z = 3 X 4 + 2 X 1 = 14.
The largest profit of Rs. 14 is obtained when 4 units of x1 and 1 unit of x2 are produced. The
above solution also indicates that 6 units are still unutilized, as shown by the slack variable x3 in
the XB column.
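If SciPy is available, the result can be cross-checked numerically. The linear programme below is read off the tableau rows above (maximise z = 3x1 + 2x2 subject to -x1 + 2x2 ≤ 4, 3x1 + 2x2 ≤ 14, x1 - x2 ≤ 3, x1, x2 ≥ 0), so treat it as an inference about the original problem rather than its stated form:

from scipy.optimize import linprog

# linprog minimises, so the objective 3x1 + 2x2 is negated.
res = linprog(c=[-3, -2],
              A_ub=[[-1, 2], [3, 2], [1, -1]],
              b_ub=[4, 14, 3],
              bounds=[(0, None), (0, None)],
              method="highs")
print(res.x, -res.fun)
# The optimal value is 14; the tableau's solution x1 = 4, x2 = 1 attains it
# (the solver may return an alternative optimum, since x5 has zero reduced cost).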
Unit 4 Sequencing & Queuing Theory
Johnsons Algorithm for n Jobs and Two Machines
Johnson’s algorithm is used for sequencing of ‘n’ jobs through two work centres. The purpose is
to minimise idle time on the machines and reduce the total time taken for completing all the jobs. As
there are no priority rules (all jobs have equal priority), sequencing the jobs according to their
processing times can minimise the idle time on the machines, which in turn reduces the total
elapsed time.
The algorithm can be fulfilled in the following steps.
Step 1: Find the minimum among the time taken by machine 1 and 2 for all the jobs.
Step 2a: If the minimum processing time is required by machine 1 to complete the job, place the
associated job in the first available position in the final sequence. Then go to step 3. (If it is a tie
you may choose either of them, for applying the above rule.)
Step 2b: If the minimum processing time is required by machine 2 to complete the job, place the
associated job in the last available position in final sequence. Then go to step 3. (If it is a tie you
may choose either of them, for applying the above rule.)
Step 3: Remove the assigned job from consideration and return to step 1 until all the positions in
the sequence are filled.
Consider the following example:
PARTS
MACHINES P1 P2 P3
M1 30 14 30
M2 25 18 19
The minimum time taken is 14, by P2, on M1. So P2 will be first in the sequence; since P2 is
assigned, we remove it from consideration.
The next least time is 19, taken by P3 on M2. Since it is on M2, P3 is assigned the last position
in the sequence.
The remaining middle position is then taken up by P1.
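Johnson's rule is easy to mechanise. The sketch below (function and variable names are ours) reproduces the sequence P2, P1, P3 for the example above:

def johnson_two_machines(jobs):
    """jobs: dict {name: (time on M1, time on M2)}. Returns an optimal sequence."""
    front, back = [], []
    remaining = dict(jobs)
    while remaining:
        # Smallest remaining processing time on either machine.
        job, machine = min(
            ((j, m) for j in remaining for m in (0, 1)),
            key=lambda jm: remaining[jm[0]][jm[1]],
        )
        if machine == 0:
            front.append(job)      # smallest time on M1 -> earliest open slot
        else:
            back.insert(0, job)    # smallest time on M2 -> latest open slot
        del remaining[job]
    return front + back

print(johnson_two_machines({"P1": (30, 25), "P2": (14, 18), "P3": (30, 19)}))
# ['P2', 'P1', 'P3']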
n Jobs and Three Machines
This case is similar to the previous case except that instead of two machines, there are three
machines. Problems falling under this category can be solved by the method developed by
Johnson. Following are the two conditions of this approach:
The smallest processing time on machine A is greater than or equal to the greatest processing
time on machine B, i.e.,
Min. (Ai) ≥ Max. (Bi)
The smallest processing time on machine C is greater than or equal to the greatest processing
time on machine B, i.e.,
Max. (Bi) ≤ Min. (Ci)
At least one of the above two conditions must be satisfied.
If either or both of the above conditions are satisfied, then we replace the three machines by two
fictitious machines G & H with corresponding processing times given by
Gi = Ai + Bi
Hi = Bi + Ci
Where Gi & Hi are the processing times for ith job on machine G and H respectively.
After calculating the new processing times, we determine the optimal sequence of jobs for the
machines G & H in the usual manner.
Example 1 : Processing n Jobs Through 3 Machines
The MDH Masala company has to process five items on three machines:- A, B & C. Processing
times are given in the following table:
Item Ai Bi Ci
1 4 4 6
2 9 5 9
3 8 3 11
4 6 2 8
5 3 6 7
Find the sequence that minimizes the total elapsed time.
Solution.
Here, Min. (Ai) = 3, Max. (Bi) = 6 and Min. (Ci) = 6. Since the condition of Max. (Bi) ≤ Min. (Ci)
is satisfied, the problem can be solved by the above procedure. The processing times for the new
problem are given below.
Item Gi = Ai + Bi Hi = Bi + Ci
1 8 10
2 14 14
3 11 14
4 8 10
5 9 13
The optimal sequence is 1 - 4 - 5 - 3 - 2. Since items 1 and 4 have equal processing times on the
fictitious machines, the sequence 4 - 1 - 5 - 3 - 2 is equally optimal.
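Because the three-machine case reduces to a two-machine problem on the fictitious machines G and H, the usual shortcut form of Johnson's rule (jobs with Gi ≤ Hi placed first in increasing order of Gi, the remaining jobs last in decreasing order of Hi) reproduces this sequence. A minimal sketch:

A = {1: 4, 2: 9, 3: 8, 4: 6, 5: 3}
B = {1: 4, 2: 5, 3: 3, 4: 2, 5: 6}
C = {1: 6, 2: 9, 3: 11, 4: 8, 5: 7}

# Fictitious two-machine times: Gi = Ai + Bi, Hi = Bi + Ci.
G = {i: A[i] + B[i] for i in A}
H = {i: B[i] + C[i] for i in A}

front = sorted((i for i in A if G[i] <= H[i]), key=lambda i: G[i])
back = sorted((i for i in A if G[i] > H[i]), key=lambda i: H[i], reverse=True)
print(front + back)   # [1, 4, 5, 3, 2]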
Graphical Method: Processing 2 Jobs through m Machines
Mark the processing times of job 1 & job 2 on the X-axis & Y-axis respectively.
Draw the rectangular blocks by pairing the same machines as shown in the following figure.
Starting from the origin O, move along the 45° line until the point marked Finish is reached.
The elapsed time can be calculated by adding the idle time for either job to the processing time
for that job. In this illustration, idle time for job 1 is 5 (3+2) hours.
Elapsed time = Processing time of job 1 + Idle time of job 1
= (3 + 4 + 2 + 6 + 2) + 5 = 17 + 5 = 22 hours.
Likewise, idle time for job 2 is 2 hours.
Elapsed time = Processing time of job 2 + Idle time of job 2
= (5 + 4 + 3 + 2 + 6) + (2) = 20 + 2 = 22 hours.
Queuing Theory
M/M/1 (N/FIFO)
M/M/1 (N/FIFO) System : Queuing Models
It is a queuing model where the arrivals follow a Poisson process, service times are
exponentially distributed and there is only one server. Capacity of the system is limited to N with
first in first out mode.
The first M in the notation stands for Poisson input, second M for Poisson output, 1 for the
number of servers and N for capacity of the system.
ρ = λ/μ
P0 = (1 - ρ) / (1 - ρ^(N+1))
Ls = ρ / (1 - ρ) - (N + 1)ρ^(N+1) / (1 - ρ^(N+1))
Lq = Ls - λ/μ
Wq = Lq / λ
Ws = Ls / λ
Example: Students arrive at a rate of 36 per hour and are served at a rate of 48 per hour; the system can hold at most N = 9 students.
ρ = 36/48 = 0.75, N = 9
P0 = (1 - 0.75) / (1 - (0.75)^10) = 0.26
Ls = 0.75 / (1 - 0.75) - (9 + 1)(0.75)^10 / (1 - (0.75)^10) = 2.40, or about 2 students.
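A quick numeric check of the finite-capacity formulas, using the ρ = 0.75 and N = 9 from the calculation above:

rho, N = 0.75, 9
P0 = (1 - rho) / (1 - rho ** (N + 1))
Ls = rho / (1 - rho) - (N + 1) * rho ** (N + 1) / (1 - rho ** (N + 1))
print(round(P0, 2), round(Ls, 2))   # approximately 0.26 and 2.40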
M/M/1 Queuing System (∞/FIFO)
It is a queuing model where the arrivals follow a Poisson process, service times are exponentially
distributed and there is only one server. In other words, it is a system with Poisson input,
exponential waiting time and Poisson output with single channel.
Queue capacity of the system is infinite with first in first out mode. The first M in the notation
stands for Poisson input, second M for Poisson output, 1 for the number of servers and ∞ for
infinite capacity of the system.
Formulas
Probability of zero units in the system (P0) = 1 - λ/μ
Average queue length (Lq) = λ² / [μ(μ - λ)]
Average number of units in the system (Ls) = λ / (μ - λ)
Average waiting time of an arrival (Wq) = λ / [μ(μ - λ)]
Average waiting time of an arrival in the system (Ws) = 1 / (μ - λ)
Example 1
Students arrive at the head office of Universal Teacher Publications according to a Poisson input
process with a mean rate of 40 per hour. The time required to serve a student has an exponential
distribution with a mean of 50 per hour. Assume that the students are served by a single
individual, find the average waiting time of a student.
Solution.
Given
λ = 40/hour, μ = 50/hour
Average waiting time of a student before receiving service (Wq) = 40 / [50(50 - 40)] hour = 4.8 minutes
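The M/M/1 formulas are just as easy to check in code. A minimal sketch (the helper name is ours) using Example 1's λ = 40 and μ = 50:

def mm1_metrics(lam, mu):
    """Basic M/M/1 (infinite capacity) performance measures."""
    P0 = 1 - lam / mu
    Lq = lam ** 2 / (mu * (mu - lam))
    Ls = lam / (mu - lam)
    Wq = lam / (mu * (mu - lam))
    Ws = 1 / (mu - lam)
    return P0, Lq, Ls, Wq, Ws

P0, Lq, Ls, Wq, Ws = mm1_metrics(40, 50)
print(Wq * 60)   # 4.8 minutes, as in Example 1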
Example 2
New Delhi Railway Station has a single ticket counter. During the rush hours, customers arrive
at the rate of 10 per hour. The average number of customers that can be served is 12 per hour.
Find out the following:
Given λ = 10 per hour and μ = 12 per hour:
Average waiting time in the system (Ws) = 1 / (12 - 10) = 1/2 hour = 30 minutes
Average queue length (Lq) = (10)² / [12(12 - 10)] = 25/6 customers
Average waiting time before service (Wq) = 10 / [12(12 - 10)] hour = 25 minutes
Example 3
Customers arrive at a petrol pump at the rate of 12 per hour and the average service time is 2 minutes per customer.
Average service time = 1/μ = 2 minutes = 1/30 hour, so μ = 30 per hour
Average queue length (Lq) = (12)² / [30(30 - 12)] = 4/15 customers
Average number of customers in the system (Ls) = 12 / (30 - 12) = 2/3
Average time spent at the petrol pump (Ws) = 1 / (30 - 12) hour = 3.33 minutes
Example 4
Universal Bank is considering opening a drive-in window for customer service. Management
estimates that customers will arrive at the rate of 15 per hour. The teller it is considering to staff
the window can serve customers at the rate of one every three minutes, i.e. μ = 20 per hour.
Assuming Poisson arrivals and exponential service times:
Average number in the system (Ls) = 15 / (20 - 15) = 3 customers
Average waiting time in line (Wq) = 15 / [20(20 - 15)] = 0.15 hours
Application of Poisson and Exponential distribution in estimating Arrival Rate and Service Rate
Both the Poisson and Exponential distributions play a prominent role in queuing theory. The
Poisson distribution counts the number of discrete events in a fixed time period; it is closely
connected to the exponential distribution, which (among other applications) measures the time
between arrivals of the events. The Poisson distribution is a discrete distribution; the random
variable can only take nonnegative integer values. The exponential distribution can take any
(nonnegative) real value.
Consider the problem of determining the probability of n arrivals being observed during a time
interval of length t, under the following assumptions:
1. The probability that an arrival is observed during a small time interval (say of length v) is
proportional to the length of the interval. Let the proportionality constant be λ, so that the probability
is λv.
2. Probability of two or more arrivals in such a small interval is zero.
3. The number of arrivals in any time interval is independent of the number in non-overlapping time
intervals.
These assumptions can be combined to show that the number of arrivals in an interval of length t
follows a Poisson distribution. Define
P(n, t) = the probability that n arrivals will be observed in a time interval of length t.
Under the above assumptions,
P(n, t) = (λt)^n e^(-λt) / n!, n = 0, 1, 2, ...
This is the Poisson probability distribution for the discrete random variable n, the number of
arrivals, where the length of the time interval t is assumed to be given. This situation in queuing
theory is called Poisson arrivals. Since only the arrivals are considered (not departures), it is
called a pure birth process.
The time between successive arrivals is called the inter-arrival time. When the number of
arrivals in a given time interval has a Poisson distribution, the inter-arrival times can be shown to
have the exponential distribution. If the inter-arrival times are independent random variables,
they must follow an exponential distribution with density
f(t) = λe^(-λt), t ≥ 0.
Poisson and Exponential Distribution Practice Problems
Practice problems on the Poisson and exponential distributions are given below.
Example: In a factory, the machines break down and require service according to a Poisson
distribution at an average rate of four per day. What is the probability that exactly six machines
break down in two days?
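Since breakdowns are Poisson with a mean of 4 per day, the mean for two days is λt = 4 × 2 = 8, and the required probability is P(X = 6) = e^(-8) 8^6 / 6!. A sketch of the computation:

from math import exp, factorial

lam_t = 4 * 2          # mean number of breakdowns in two days
n = 6
p = exp(-lam_t) * lam_t ** n / factorial(n)
print(round(p, 3))     # approximately 0.122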
Queuing models are typically used to answer questions such as the following:
What will be the waiting time for a customer before service is complete?
What will be the average length of the queue?
What will be the probability that the queue length exceeds a certain length?
How can a system be designed at minimum total cost?
How many servers should be employed?
Should priorities of the customers be considered?
Is there sufficient waiting area for the customers?
Applications of Queue Model for Better Service to the Customers
Jockeying in Queuing Theory
What is jockeying?
If you’ve ever been stuck in traffic, you’re already acquainted with a perfect analogy for
jockeying in queuing theory. Annoyed drivers zip back and forth between lanes in a desperate
attempt to crawl faster than the cars next to them.
And often, they choose wrong.
In the same vein, consumer habits dictate that customers want to get to the register fast. If we see
another lane moving quicker than our own, we rationalize that it's faster to jump ship and try
our luck in the next lane.
This line-jumping is called jockeying behavior.
Models of consumer behavior show exactly why this happens. Think of the statistics: with four
lines to choose from, you have only a 25% chance of picking the fastest one.
In all likelihood, another lane is moving faster than yours.
The Hamletian question of “to hop lines, or not to hop lines” actually has a definitive answer.
Jockeying not only leads to customers becoming annoyed when they choose poorly, but it also
slows down the overall efficiency of queues.
One long snaking line — called a serpentine line — is the most efficient customer queuing
system.
If one register is stuck waiting for management to confirm a price check, customers aren’t
forced to wait longer. They simply move to the next available register when it’s their turn.
Serpentine lines also promote fairness — what may be the root of our sense of justice. In a
serpentine line, nobody who arrives after you will ever be served before you.
The law of first come, first serve is in full force.
Unfortunately, there’s a reason why more companies don’t use single lanes to prevent jockeying
in queuing theory. The psychology of queuing tells us why.
Many customers look at one long snaking line and think, “Thanks, but no thanks!” A serpentine
line looks too long and appears to be moving too slowly.
Consumer habits are influenced by perceptual biases that get in the way of efficiency and
persuade them to test their luck gambling.
The question is, how do you prevent people’s perceptions from getting the better of them?
Balking in Queuing Theory
Balking in queuing theory is when customers look at a waiting line and decide it’s not worth
their time to queue up and wait.
Unlike queuing in amusement parks, queuing in stores doesn't promise a thrill in return for patience. And,
in contrast with hospitals, stores don't deal with life-or-death situations, so there's always an
alternative.
So how do you deal with balking? To prevent customers from leaving, businesses have to
persuade them to wait.
Luckily, thanks to researchers studying queuing theory, there are a number of solutions to
balking.
Display the Wait Time to Prevent Customers From Leaving
The uncertainty of how long a line will take is a killer.
It’s typical of customer habits to see one long line and start calculating under the assumption that
length is equal to time. It’s a natural thing and a major cause of balking in queuing theory.
According to research on customer buying behavior, people overestimate their wait time by
around 36%. We have a perceptual knack for inflating our suffering.
Customers have to be persuaded to wait, lest their imaginations send them to the parking lot.
What’s the solution? Display the wait time.
A simple counter telling customers how long exactly they have to wait anchors a customer’s
patience. The customer who would normally balk at a single long line decides to wait, since the
counter says it will only take five minutes to check out.
In the words of David H. Maister:
“Waiting in ignorance creates a feeling of powerlessness, which frequently results in visible
irritation and rudeness on the part of customers.”
People not only fear the unknown — they can also get annoyed by it.
Sign-In to Wait on Line
Why not trade in physical lines for a virtual queuing system? A queue management system can
replace lines and prevent balking in queuing theory altogether.
Set up tablets throughout your store to allow customers to reserve their spot in line. When it’s
their turn, simply call them to the front for check out.
Customers feel empowered and can distract themselves by examining items — possibly leading
to an additional impulse purchase.
A queue management system also personalizes the experience. Instead of having customers
pull a number and wait for their number to be called, a worker calls out their name.
Reneging in Queuing Theory
“Reneging” is so bad of a behavior, it even sounds dangerous. As if your customers become
renegades.
In truth, reneging may be even worse than balking. It’s when a customer has joined the line but
decides they don’t want to wait.
So what do they do? Consumer habits dictate that they renege — they simply walk out the door.
Reneging in queuing theory signals that a customer came in committed to make a purchase. But
then they weigh their time versus their desire for the item, and the scale tips towards “I’m out of
here”.
One mistake, and your customers are leaving for your competitors.
As essayist Lance Morrow put it:
“Waiting can seem an interval of non-being, the black space between events and the outcomes of
desires. It makes time maddeningly elastic, it has a way of seeming to compact eternity into a
few hours.”
While waiting in line doesn’t necessarily invoke the same melodrama, the sentiment is certainly
fitting. Models of consumer behavior show that customers don’t want to waste their time.
What can be done to fix that?
Entertain Your Customers
Seattle band Nirvana unwittingly described the behavior of most customers when they sang
“Here we are now, entertain us!”
The point is, keep your customers distracted from waiting. Any method is fine, even installing
new TVs in your location.
In a study by Millward Brown examining models of consumer behavior, 84% of grocery store
shoppers reported that watching digital signs made checking out far less daunting.
In addition, over 70% of surveyed customers said they would watch screens, while 15.7% were
unsure.
That’s a potentially whopping number of people who would be entertained while waiting in line.
Further evidence suggests that nearly half of customers are more willing to shop at stores with
screens.
As David Maister writes, “Occupied time feels shorter than unoccupied time”. Sounds
obvious?
Because it really is.
Further evidence suggests that perceived wait time is reduced as much as 35% by digital signs.
Also, seven in ten customers make a purchase because something caught their eye from a digital
sign.
Businesses don’t have to employ televisions to overcome reneging in queuing theory. You’re
free to get creative.
For example, Atlanta International Airport hired a violinist to play while people waited to be
patted down by the TSA. Not only did the soothing sounds ease customers' anxiety, they also turned
their attention away from the metal detectors to the sound of Bach.
Turn to customer feedback to understand how to build this superior customer experience.
Collect Customer Analytics
While we’ve discussed customer queuing systems to improve customers’ time waiting, there’s an
additional overlooked component of queuing theory.
Exceptional customer service can override waiting times. 91% of Americans report that
customer service dictates who will receive their business.
Learn who your customers are. Hointer’s Seattle store collects analytics on customer buying
behavior, to learn which customers like being helped by a service representative and which like
to be left alone.
More than just developing models of consumer behavior, this is a measure of how sales
representatives should communicate with customers. In turn, it teaches customers what kind of
shopping experience they can expect when visiting the store.
Relaxed customers who receive a service they need are less likely to balk, renege, or jockey on
lines. They’re more likely to continue visiting the store, effectively becoming your loyal
customers.
It’s a win-win situation that increases customer retention and meets the recommendations of
queuing theory.
Applying the latest research in queuing theory helps businesses keep customers happy, to
improve the customer’s experience, and improve business. What’s not to like?
But it requires businesses to understand the psychology of queuing and empathize with their
customers. The question all businesses need to ask is, “What would make my experience
waiting in queue better?”
Because understanding consumer behavior and the hurdles posed by customer queuing systems
improves the shopping experience for everyone.
This table shows that the average annual total cost during the seventh year is minimum. Hence,
the machine should be replaced after the 7th year.
Replacement of Assets which Fail Suddenly
In some situations, failure of a certain item occurs all of a sudden, instead of gradual
deterioration (e.g., failure of light bulbs, tubes, etc.).
The failure of the item may result in complete breakdown of the system. The breakdown implies
loss of production, idle inventory, idle labour, etc. Therefore, an organization must prepare itself
against these failures.
“Failure to prepare is preparing to fail.” – Ben Franklin
Thus, to avoid the possibility of a complete breakdown, it is desirable to formulate a suitable
replacement policy. The following two courses can be followed in such situations.
Individual replacement policy. Under this policy, an item may be replaced immediately after
its failure.
Group replacement policy. Under this policy, the items are replaced as a group after a certain
period, say t, irrespective of whether they have failed or not. If any item fails before its
preventive replacement is due, it is replaced individually.
In situations where the items fail completely, the formulation of replacement policy depends
upon the probability of failure. Mortality tables or Life testing techniques may be used to obtain
a probability distribution of the failure of items in a system.
Mortality Tables
M(t) = Number of items surviving at time t
M(t – 1) = Number of items surviving at time (t – 1)
N = Total number of items in the system
The probability of failure of an item during the interval (t - 1) to t is given by
[M(t - 1) - M(t)] / N
The conditional probability that an item which has survived up to age (t - 1) will fail during the next period
is given by
[M(t - 1) - M(t)] / M(t - 1)
Example 1
Following mortality rates have been observed for certain type of light bulbs.
Time (weeks)                      0    1    2    3    4    5    6    7    8    9    10
Number of bulbs still operating   100  94   82   58   40   28   19   13   7    3    0
Calculate the probability of failure.
Solution.
Here, t is the time (weeks) and M (t) is the number of bulbs still operating. The probability of
failure can be calculated as shown in the following table.
Table
Time (t)   M(t)   Probability of failure, pi = [M(t - 1) - M(t)] / N
0          100    ----
1          94     (100 - 94)/100 = 0.06
2          82     (94 - 82)/100 = 0.12
3          58     (82 - 58)/100 = 0.24
4          40     (58 - 40)/100 = 0.18
5          28     (40 - 28)/100 = 0.12
6          19     (28 - 19)/100 = 0.09
7          13     (19 - 13)/100 = 0.06
8          7      (13 - 7)/100 = 0.06
9          3      (7 - 3)/100 = 0.04
10         0      (3 - 0)/100 = 0.03
Example 2
Following mortality rates have been observed for a certain type of electronic component.
Month 0 1 2 3 4 5 6
% surviving at the end of the month 100 97 90 70 30 15 0
There are 10000 items in operation. It costs Re 1 to replace an individual item and 35 paise per
item, if all items are replaced simultaneously. It is decided to replace all items at fixed intervals
& to continue replacing individual items as and when they fail. At what intervals should all items
be replaced? Assume that the items failing during a month are replaced at the end of the month.
Solution.
Month   % surviving at the end of the month   Probability of failure, pi
0       100                                    ----
1       97                                     (100 - 97)/100 = 0.03
2       90                                     (97 - 90)/100 = 0.07
3       70                                     (90 - 70)/100 = 0.20
4       30                                     (70 - 30)/100 = 0.40
5       15                                     (30 - 15)/100 = 0.15
6       0                                      (15 - 0)/100 = 0.15
The given problem can be divided into two parts.
1. Individual replacement.
2. Group replacement.
Case I
It should be noted that no item survives for more than 6 months. Thus, an item which has
survived for 5 months is sure to fail during sixth month.
The expected life of each item is given by
= Σ xipi, where xi is the month and pi is the corresponding probability of failure.
= (1 X 0.03) + (2 X 0.07) + (3 X 0.20) + (4 X 0.40) + (5 X 0.15) + (6 X 0.15)
= 4.02 months.
∴ Average number of replacements every month = N / (average expected life) = 10000/4.02 = 2487.6
≈ 2488 items.
Here average cost of monthly individual replacement policy = 2488 X 1 = Rs. 2488/-, (the cost
being Re 1/- per item).
Case II
Let Ni denote the number of items replaced at the end of ith month.
Calculating values for Ni
N0 = Number of items in the beginning = 10,000
N1 = Number of items at the start X probability that an item fails during the 1st month of
installation
= 10000 X 0.03 = 300
N2 = Number of items replaced by the end of the second month
= (Number of items in the beginning X probability that these items fail in the 2nd month) + (Number
of items replaced in the first month X probability that these fail during the second month)
= N0P2 + N1P1
= (10000 X 0.07) + (300 X 0.03) = 709
N3 = N0P3 + N1P2 + N2P1
= (10000 X 0.20) + (300 X 0.07)+ (709 X 0.03) = 2042
N4 = N0P4 + N1P3 + N2P2+ N3P1
= (10000 X 0.40) + (300 X 0.20)+ (709 X 0.07) + (2042 X 0.03) = 4171
N5 = N0P5 + N1P4 + N2P3+ N3P2+ N4P1
= (10000 X 0.15) + (300 X 0.40)+ (709 X 0.20) + (2042 X 0.07) + (4171 X 0.03) = 2030
N6 = N0P6 + N1P5 + N2P4+ N3P3 + N4P2 + N5P1
= (10000 X 0.15) + (300 X 0.15)+ (709 X 0.40) + (2042 X 0.20) + (4171 X 0.07) + (2030 X
0.03) = 2590.
From the above calculations, it is observed that Ni increases up to the fourth month and then
decreases. Ni will later tend to increase again, and its value will oscillate until the system
reaches a steady state.
The optimum replacement cycle under group replacement is given in the following table.
End of   No. of items   Cumulative        Cost of individual      Cost of group            Total cost   Average cost per
month    failed         no. of failures   replacement after       replacement of all       (Rs.)        month (Rs.)
                                          failure (Re 1/item)     items (Rs. 0.35/item)
1        300            300               300                     3500                     3800         3800.00
2        709            1009              1009                    3500                     4509         2254.50
3        2042           3051              3051                    3500                     6551         2183.66
4        4171           7222              7222                    3500                     10722        2680.50
5        2030           9252              9252                    3500                     12752        2550.40
6        2590           11842             11842                   3500                     15342        2557.00
The above table shows that the average cost during the third month is minimum. Thus, it would
be economical to replace all the items every three months.
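The failure numbers Ni and the cost comparison can be reproduced with a short script. A minimal sketch following the recursion Nt = N0 pt + N1 pt-1 + ... + Nt-1 p1 used above (the table rounds each Ni to the nearest integer first, so the averages agree to within a rupee or two):

N0 = 10000
p = [0.03, 0.07, 0.20, 0.40, 0.15, 0.15]   # p1 ... p6
ind_cost, grp_cost = 1.0, 0.35             # Rs per item

# Expected individual replacements at the end of each month.
N = [N0]
for t in range(1, len(p) + 1):
    N.append(sum(N[k] * p[t - k - 1] for k in range(t)))

cum = 0
for t, n_t in enumerate(N[1:], start=1):
    cum += n_t
    total = cum * ind_cost + N0 * grp_cost
    print(t, round(n_t), round(total), round(total / t, 2))
# The lowest average cost per month occurs at t = 3 (about Rs 2184),
# so group replacement every three months is best, as concluded above.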
Meaning of Networking
Meaning of Network Technique
Network technique is a technique for planning, scheduling (programming) and controlling the
progress of projects. This is very useful for projects which are complex in nature or where
activities are subject to considerable degree of uncertainty in performance time.
This technique supports effective management: it determines the project duration more
accurately, identifies the activities that are critical at different stages of project completion so
that more attention can be paid to them, allows the schedule to be analysed at regular intervals so
that corrective action can be taken well in advance, facilitates optimal resource utilisation, and
helps management take timely and better decisions for effective monitoring and control during
execution of the project.
Objectives of Network Analysis:
Network analysis entails a group of techniques for presenting information relating to time and
resources so as to assist in the planning, scheduling, and controlling of projects. The information,
usually represented by a network, includes the sequences, interdependencies, interrelationships,
and criticality of various activities of the project.
A network analysis has the following objectives:
Optimistic Time (O):the minimum possible time required to accomplish a task, assuming
everything proceeds better than is normally expected.
Pessimistic Time (P):the maximum possible time required to accomplish a task, assuming
everything goes wrong (excluding major catastrophes).
Most likely Time (M): the best estimate of the time required to accomplish a task, assuming
everything proceeds as normal.
Example of the three time estimates
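In PERT, these three estimates are usually combined into a single expected activity time; the standard formula (stated here for completeness) is
te = (O + 4M + P) / 6, with variance σ² = [(P - O) / 6]².
For instance, with O = 2, M = 4 and P = 8 days, te = (2 + 16 + 8) / 6 ≈ 4.33 days.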
The critical path is determined by analysing a project's schedule or network logic diagram using
the Critical Path Method (CPM). The CPM provides a graphical view of the project, predicts the
time required to complete it, and shows which activities are critical to maintaining the schedule.
The seven (7) steps in the CPM are: [1]
1. List all the activities required to complete the project (see Work Breakdown Structure (WBS))
2. Determine the sequence of activities
3. Draw a network diagram
4. Determine the time that each activity will take to complete
5. Determine the dependencies between the activities
6. Determine the critical path
7. Update the network diagram as the project progresses
The CPM calculates the longest path of planned activities to the end of the project, and the
earliest and latest that each activity can start and finish without making the project longer. This
process determines which activities are “critical” (i.e., on the longest path) and which have “total
float” (i.e., can be delayed without making the project longer). [1]
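A minimal sketch of the forward and backward passes behind the CPM, on a small hypothetical activity network (the activity names and durations below are invented for illustration):

# activity: (duration, list of predecessors) -- hypothetical data
acts = {
    "A": (3, []),
    "B": (4, ["A"]),
    "C": (2, ["A"]),
    "D": (5, ["B", "C"]),
    "E": (1, ["D"]),
}

# Forward pass: earliest start (ES) and earliest finish (EF).
ES, EF = {}, {}
for a in acts:                                   # dict order is a valid topological order here
    dur, preds = acts[a]
    ES[a] = max((EF[p] for p in preds), default=0)
    EF[a] = ES[a] + dur
project_length = max(EF.values())

# Backward pass: latest finish (LF) and latest start (LS).
LF, LS = {}, {}
for a in reversed(list(acts)):
    succs = [s for s, (_, ps) in acts.items() if a in ps]
    LF[a] = min((LS[s] for s in succs), default=project_length)
    LS[a] = LF[a] - acts[a][0]

critical = [a for a in acts if ES[a] == LS[a]]   # zero total float
print(project_length, critical)                  # 13 ['A', 'B', 'D', 'E']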
The CPM is a project modeling technique developed in the late 1950s by Morgan R. Walker of
DuPont and James E. Kelley, Jr. of Remington Rand.
Resource Planning and Meaning of Crashing
Resource Planning
Resource planning refers to the strategy for planned and judicious utilisation of resources.
Resource planning is essential for sustainable existence of all forms of life.
Resource planning is essential for India as there is enormous diversity in the availability of
resources. For example, the state of Rajasthan has vast potential for the development of solar and
wind energy but is deficient in water resources.
The cold desert of Ladakh has rich cultural heritage but is deficient in water and some strategic
minerals.
The state of Arunachal Pradesh has an abundance of water resources but lacks infrastructure, which
shows that the mere availability of resources, in the absence of technology and institutions, does not
by itself lead to development.
This shows that the resource planning is needed at the national, regional, state and local levels
for balanced development of a country.
Meaning of Crashing
Crashing is a schedule compression technique used to reduce or shorten the project schedule.
The project manager (PM) can take various measures to accomplish this goal. Some of the common methods used are:
1. Crashing is the technique to use when fast tracking has not saved enough time on the schedule. It
is a technique in which resources are added to the project for the least cost possible. Cost and
schedule tradeoffs are analyzed to determine how to obtain the greatest amount of compression
for the least incremental cost.
2. Crashing refers to a particular variety of project schedule compression performed to decrease the
total project duration (also known as the total project schedule duration). Reducing the project
duration typically takes place after a careful and thorough analysis of all possible duration-minimisation
alternatives, choosing those that attain the maximum reduction in schedule duration for the least
additional cost.
3. When we say that an activity will take a certain number of days or weeks, what we really mean is
that this activity normally takes that many days or weeks. We could make it take less time, but
doing so would cost more money. Spending more money to get something done more quickly is
called "crashing". There are various methods of project schedule crashing, and the decision to
crash should only be taken after you have carefully analysed all of the possible alternatives. The
key is to attain the maximum decrease in schedule time for the minimum cost.
4. Crashing the schedule means throwing additional resources at the critical path without
necessarily getting the highest level of efficiency.
5. Crashing is another schedule compression technique where you add extra resources to the project
to compress the schedule. In crashing, you review the critical path and see which activities can
be completed by adding extra resources. You try to find the activities that can be reduced the
most by adding the least amount of cost. Once you find those activities, you will apply the
crashing technique.
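A common way to decide which critical activity to crash first is its cost slope, (crash cost - normal cost) / (normal time - crash time): the critical-path activity with the smallest slope is crashed first. A small sketch with invented figures:

# (normal_time, crash_time, normal_cost, crash_cost) -- hypothetical critical activities
critical_activities = {
    "Excavate":   (6, 4, 10000, 14000),
    "Foundation": (8, 6, 12000, 15000),
    "Framing":    (10, 7, 20000, 26600),
}

def cost_slope(nt, ct, nc, cc):
    """Extra cost per unit of time saved by crashing the activity."""
    return (cc - nc) / (nt - ct)

slopes = {a: cost_slope(*v) for a, v in critical_activities.items()}
print(min(slopes, key=slopes.get), slopes)   # 'Foundation' has the cheapest slope (1500 per unit of time saved)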