
COS 419- OPERATIONS RESEARCH II

INTRODUCTION

- Operations Research (OR), often referred to as management science, is simply a scientific approach to decision making that seeks to best design and operate a system, usually one requiring the allocation of scarce resources.

- By a system, we mean an organization of interdependent components


that work together to accomplish the goal(s) of the system. For
example, ANAMMCO Motor Company is a system whose goal consists
of maximizing the profit that can be earned by producing quality utility
vehicles.

- The term OR was coined during World War II, when British military leaders asked scientists and engineers to analyze several military problems such as the deployment of radar and the management of convoy, bombing, antisubmarine, and mining operations. More recently, we have seen the deployment of drones by the USA and Apache helicopters by the UK during the war in Libya.

- The Scientific approach to decision making usually involves the use of


one or more mathematical models. A mathematical model is a
mathematical representation of an actual situation that may be used to
make better decisions or simply to understand the actual situation
better.

- It may be necessary to recall from previous discussions in the subject of OR that OR begins by scientifically and carefully observing and formulating the problem, and then constructing a scientific model that attempts to abstract the essence of the real problem.
After, it is hypothesized that the model is a sufficiently precise representation of the essential elements of the situation being modeled, so that conclusions obtained from the model are valid for the real problem.

- This hypothesis is then modified and verified experimentally as necessary.

Therefore, the key issues in OR are;


- Problems
- Hypothesis about the problems and models
- Conclusions from the models
- Concerns with practical management
- Adoption of a broader view towards the resolution of conflicts of interest among the components of the organization, for the most judicious allocation of scarce resources.

Discussion:
1. Why is it necessary to undertake prudent allocation of resources?

Hint: (i) Wants are unlimited, resources are not
      (ii) The notions of opportunity cost, contribution margin, etc.

2. What is the essence of OR to computing science?

Hint: Information science, decision support systems, etc.

- At this juncture, we should already be familiar with such terminologies as objective function, functional constraints, linear programming models, tableau, simplex method, etc.

- In this course (COS 419), Operations Research II, the following topics are to be discussed:
- Network analysis
- Game theory
- Inventory problems
- Reliability problems, and
- Dynamic programming
- Relevant Texts

(1) Wayne L. Winston, Operations Research: Applications and Algorithms, 4th Edition, 2004.

(2) Frederick S. Hillier and Gerald J. Lieberman, Introduction to Operations Research, McGraw-Hill, 2010.

Lecture 0: Introduction to Linear Programming

For reasons of recapitulation –

- Linear programming (LP) is an important tool for solving optimization


problems.
- LP has been used to solve optimization problems in diverse industries
such as banking, education, forestry, petroleum, military, and transport.
- LP is a fundamental component of Operations Research and thus deserves to be considered in this course. To begin, it is important to understand the general characteristics shared by all linear programming problems.

- The characteristics shared by all linear programming problems are:

(1) Decision variables - These describe the decisions to be made.

(2) Objective function - In any linear programming problem, the decision


maker wants to maximize (usually revenue or profit) or minimize
(usually costs) some function of the decision variables. The function to
be maximized or minimized is called the objective function.

(3) Constraints - Constraints are the necessary restrictions imposed on the


decision variables, for the simple and logical reason that resources are
often limited in real life.

(4) Sign Restrictions - These are sometimes regarded in some texts as the non-negativity constraints. A sign restriction is required to complete the formulation of a linear programming problem and addresses the sign that each decision variable can assume: nonnegative values only, or both negative and positive values. Example: physical products have to be nonnegative, represented as xi ≥ 0, while a cash balance could be considered negative if money owed is more than money on hand.

Formulating a Linear Programming Problem – Example

Giapetto’s Woodcarving, Inc., manufactures two types of wooden toys:


soldiers and trains. A soldier sells for $27 and uses $10 worth of raw materials. Each soldier that is manufactured increases Giapetto's variable labour and overhead costs by $14. A train sells for $21 and uses $9 worth of raw materials. Each train built increases Giapetto's variable labour and overhead costs by $10. The manufacture of wooden soldiers and trains requires two types of skilled labour: carpentry and finishing. A soldier requires 2 hours of finishing labour and 1 hour of carpentry labour. A train requires 1 hour of finishing and 1 hour of carpentry labour. Each week, Giapetto can obtain all the needed raw material but only 100 finishing hours and 80 carpentry hours.

Demand for trains is unlimited, but at most 40 soldiers are bought each week.
Giapetto wants to maximize weekly profit (revenue - cost).
Formulate a mathematical model of Giapetto’s situation that can be used to
maximize Giapetto’s weekly profit [Winston, 2004].

Solution
- To be able to formulate an appropriate model, we consider the
aforementioned characteristics of linear programming problems.

(1) Decision variables


These are clearly the number of soldiers and trains to be manufactured
each week. We assign as follows;
x1 = number of soldiers per week
x2 = number of trains per week.

(2) Objective Function


As this determines the function to be maximized or minimized, we note
that in Giapettos’ case, the fixed costs (such as rent and insurance) do
not depend on the values of x1 and x2.
Thus, Giapetto can focus on maximizing: (weekly revenues) – (raw
material purchase cost) – (other variable costs).
- The weekly revenues and costs can be expressed in terms of x1 and x2. Since it makes no sense to manufacture more soldiers than can be sold each week (i.e. x1 > 40), we assume that all toys produced will be sold.
- Proceeding,
Weekly revenues = weekly revenues from soldiers + weekly revenues
from trains = (dollars/soldier) (soldiers/week) + (dollar/train)
(trains/week)
= 27 x1 + 21 x2
Also,
Weekly raw material costs = 10x1 + 9x2
Others weekly variable costs = 14x1 + 10x2
Then Giapetto wants to maximize
(27x1 + 21x2) - (10x1 + 9x2) - (14x1 + 10x2) = 3x1 + 2x2
Where the variable Z is conventionally used to denote the objective function, Giapetto's objective function becomes
maximize Z = 3x1 + 2x2 (1)
(Note the objective function coefficients, or contributions to profit.)

(3) Constraints
We note that, from (1), equation 1, Z can increase arbitrarily by
increasing the respective values of x1 and x2. However, the values of x1
and x2 are limited by the following restrictions.
(i) There are only 100 hrs available for finishing each week.
(ii) There are only 80 hrs available for carpentry each week.

(iii) And at most 40 soldiers should be produced each week.

Because the amount of raw materials available is assumed to be unlimited, no restriction is placed on it.
- In order to formulate a mathematical model of the Giapetto’s problem,
we have to express constraints (i) to (iii) in terms of decision variables
x1 and x2.

To express constraint (i), note that total finishing hrs/week = (finishing hrs/soldier)(soldiers made/week) + (finishing hrs/train)(trains made/week) = 2(x1) + 1(x2) = 2x1 + x2.
Thus, constraint (i) implies that, for finishing hours per week,
2x1 + x2 ≤ 100 (2)

- To express constraint (ii), we note that total carpentry hrs/week = (carpentry hrs/soldier)(soldiers made/week) + (carpentry hrs/train)(trains made/week) = 1(x1) + 1(x2) = x1 + x2.
Thus, constraint (ii) may be written as, for carpentry hours per week,
x1 + x2 ≤ 80 (3)

- Finally, we express the fact that at most 40 soldiers per week can be
sold by limiting the weekly production of soldiers to at most 40 soldiers.
This yields the following constraint;
Maximum demand for soldiers: x1 ≤ 40 (4)

- Thus, equations (2) – (4) express constraints i – iii in terms of the


decision variables. They are therefore regarded as the constraints for
the Giapetto’s linear programming problem.

NB: The coefficients of the decision variables in the constraints are called technological coefficients, as they reflect the technology used to produce the different products. The number on the right-hand side of each inequality is the constraint's right-hand side (or rhs), representing the available quantity of the resource.

(4) Sign restrictions: clearly, the sign restrictions for Giapetto's LP problem are
x1 ≥ 0 (5)
and x2 ≥ 0 (6)

We therefore formulate the optimization model for Giapetto's LP problem as follows:
maximize Z = 3x1 + 2x2 (1)
subject to (s.t.)
2x1 + x2 ≤ 100 (2)
x1 + x2 ≤ 80 (3)
x1 ≤ 40 (4)
x1 ≥ 0 (5)
x2 ≥ 0 (6)

“subject to” (s.t) means that the values of the decision variable x1 and x2 must
satisfy all constraints and all sign restrictions. Note however, that the sign
restrictions are usually considered as being separate from the functional
constraints.
The above optimization model, as a case of linear programming problem, can
be solved by using any of the popular methods, such as the simplex method,
the tableau, graphical method, etc.
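For readers who want to check a formulation numerically, the following is a minimal sketch (assuming Python with the SciPy library is available) that solves the Giapetto LP; linprog minimizes, so the objective is negated.

# Minimal sketch: Giapetto LP solved numerically (SciPy assumed available).
from scipy.optimize import linprog

c = [-3, -2]                 # maximize 3x1 + 2x2  <=>  minimize -3x1 - 2x2
A_ub = [[2, 1],              # finishing hours:  2x1 + x2 <= 100
        [1, 1],              # carpentry hours:   x1 + x2 <= 80
        [1, 0]]              # soldier demand:    x1      <= 40
b_ub = [100, 80, 40]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, -res.fun)       # optimal (x1, x2) and the maximum weekly profit

Under these assumptions the solver reports x1 = 20, x2 = 60 with a weekly profit of $180, which agrees with the simplex solution developed in the next section.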

We conclude this section by looking at some definitions.

A linear programming problem (LP) is an optimization problem for which we


have the following.

(1) We attempt to maximize (or minimize) a linear function of the decision


variables. The function that is to be maximized or minimized is called
the objective function.

(2) The value of the decision variables must satisfy a set of constraints.
Each constraint must be a linear equation or linear inequality.

(3) A sign restriction is associated with each variable. For any variable xi,
the sign restriction specifies that xi must be either nonnegative (xi ≥ 0)
or unrestricted in sign (urs).

NEXT

Solution to Giapetto’s LP problem (simplex and tableau).

The linear programming problem is;


Maximize Z = 3x1 + 2x2 (1)
Subject to (s.t)
2x1 + x2 ≤ 100 (2)
x1 + x2 ≤ 80 (3)
x1 ≤ 40 (4)
x1 ≥ 0 (5)
x2 ≥ 0 (6)

The above corresponds to Giapetto’s LP problem where the goal is to


maximize the profit subject to limited resources.

Solution
(1) Introducing slack variables gives the following set of equations:

Z = 3x1 + 2x2 (1)


2x1 + x2 + x3 = 100 (2)
x1 + x2 + x4 = 80 (3)
x1 + x5 = 40 (4)
x1, x2 ≥ 0 (5) and (6).

- This corresponds to an initial feasible solution (x1, x2, x3, x4, x5, Z).
The above is the augmented form of the problem used by the simplex method in its algebraic form.
- In this setting, a basic feasible solution is an augmented corner-point feasible solution. Each basic solution has non-basic variables, which are set equal to zero, and basic variables, the remaining variables, whose values are the simultaneous solution of the system of equations in equality form once the non-basic variables have been set to zero.

Simplex method: Algebraic approach


This is concerned with the following steps:
(1) Initialization step: how the initial corner-point feasible solution is selected.
(2) Iteration step: the criterion for selecting the non-basic variable that is to become basic (and the basic variable it replaces). Thus, the following questions arise;
(i) how is the replacement identified?
(ii) how can the new solution be identified without completely re-solving a new system of defining equations (the system of equations with the new set of non-basic variables)?
(3) Stopping rule: how is the test of optimality performed?

Answers:
(1) The simplex method can start at any corner-point feasible solution. In our example, the decision variables (the non-basic variables) are set to zero to give an initial solution
(x1, x2, x3, x4, x5, Z) = (0, 0, 100, 80, 40, 0)
- Notice that each equation has just one basic variable, with a coefficient of +1, and that basic variable does not appear in any other equation. Since Z can still be increased by increasing x1 or x2, this solution is not optimal.
- By iterative steps, the simplex method moves from the current basic feasible solution to a better adjacent basic feasible solution. This involves bringing one non-basic variable into the basis (the entering basic variable) and removing one basic variable from it (the leaving basic variable).

- This move will increase the value of Z and is performed by choosing the entering variable that has the biggest impact on Z. In our example, x1 has the larger coefficient (3) and becomes our first entering basic variable.
- The leaving basic variable is the one whose nonnegativity constraint imposes the smallest upper bound on how much the entering basic variable can be increased. In our example, where the possible candidates are (x3, x4, x5), the calculation to determine the leaving basic variable follows;

Basic Variable   Equation                      Result
x3   (2)         x3 = 100 - 2x1 - x2           x1 ≤ 100/2 = 50
x4   (3)         x4 = 80 - x1 - x2             x1 ≤ 80
x5   (4)         x5 = 40 - x1                  x1 ≤ 40

Since x5 imposes the smallest upper bound on x1, the leaving basic variable is x5. Therefore, x5 becomes a non-basic variable and its value is set to zero in the new basic feasible solution.
- How can the new basic feasible solution be identified most conveniently?
- We can solve for the new values of the remaining basic variables.
- We convert the system of equation into the same convenient form we
had at the initialization step: i.e. each equation will have just one basic
variable with a coefficient of +1 and this basic variable does not appear
in any other equation. This we can do by performing the following;
(a) multiplying an equation by a non-zero constant
(b) adding a multiple of one equation to another equation.

Our equation of reference is equation (4),
x1 + x5 = 40 (4)
As x1 has become a basic variable and has a coefficient of +1, we must eliminate it from every other equation, including equation (1).
Thus,
New equation (1) = old equation (1) + 3 x new equation (4):
Z - 3x1 - 2x2 + 3(x1 + x5) = 0 + 3(40)
Z - 2x2 + 3x5 = 120 (1)

New equation (2) = old equation (2) - 2 x new equation (4):
2x1 + x2 + x3 - 2(x1 + x5) = 100 - 2(40)
x2 + x3 - 2x5 = 20 (2)

New equation (3) = old equation (3) - 1 x new equation (4):
x1 + x2 + x4 - (x1 + x5) = 80 - 40
x2 + x4 - x5 = 40 (3)

Thus, our new set of standard equations:


Z – 2x2 + 3x5 = 120 (1)
x2 + x3 - 2x5 = 20 (2)
x2 + x4 - x5 = 40 (3)
x1 + x5 = 40 (4)
while x1, x2 ≥ 0 (5) and (6)

The new solution is
(x1, x2, x3, x4, x5, Z) = (40, 0, 20, 40, 0, 120)

- Stopping rule: the objective-function equation, equation (1), is checked for optimality. If every non-basic variable has a nonnegative coefficient in equation (1) as written (so that increasing any of them cannot increase Z), the current solution is optimal and the algorithm stops.
However, in equation (1) the non-basic variable x2 has coefficient -2, so with x5 remaining 0, increasing x2 will further increase Z; thus the solution is not yet optimal.
- We therefore continue, determining the 2nd leaving basic variable (the entering basic variable is x2):

Basic variable   Equation                  Result
x3               x3 = 20 - x2 + 2x5        x2 ≤ 20
x4               x4 = 40 - x2 + x5         x2 ≤ 40
x1               x1 = 40 - x5 + 0x2        x2 < ∞

∴ The new leaving basic variable is x3, while the entering basic variable is x2.
Thus, new equation (2) becomes
x2 + x3 - 2x5 = 20 (2)

No change is needed, as the coefficient of x2 is already +1. For the other equations:

- New equation (1) = old equation (1) + 2 x ( new equation (2) )


= Z - 2x2 + 3x5 + 2(x2 + x3 - 2x5) = 120 + 40
= Z + 2x3 - x5 = 160 (1)

- New equation (3) = old equation (3) - 1 x (new equation (2))
= x2 + x4 - x5 - (x2 + x3 - 2x5) = 40 - 20
= x4 - x3 + x5 = 20 (3)

- Equation (4) remains unchanged.
Thus the new set of equations becomes
Z + 2x3 - x5 = 160 (1)
x2 + x3 - 2x5 = 20 (2)
x4 - x3 + x5 = 20 (3)
x1 + x5 = 40 (4)
and x1, x2 ≥ 0 (5), (6)

Now observe that the basic variables x1, x2, x4 all have coefficients of +1 and each appears in only one equation.

Checking equation (1), Z + 2x3 - x5 = 160, the non-basic variable x5 has a negative coefficient, so increasing x5 (with x3 held at zero) would still increase Z. The solution is therefore not yet optimal, and a third iteration is required with x5 as the entering basic variable.
The ratio test gives: from equation (2), x2 = 20 - x3 + 2x5, which places no upper bound on x5; from equation (3), x4 = 20 + x3 - x5, so x5 ≤ 20; from equation (4), x1 = 40 - x5, so x5 ≤ 40. The leaving basic variable is therefore x4.
Eliminating x5 from the other equations, using equation (3) as the reference, yields
Z + x3 + x4 = 180 (1)
x2 - x3 + 2x4 = 60 (2)
-x3 + x4 + x5 = 20 (3)
x1 + x3 - x4 = 20 (4)
In equation (1), both non-basic variables (x3 and x4) now have nonnegative coefficients, so no further increase in Z is possible and the algorithm stops with the optimal solution
(x1, x2, x3, x4, x5, Z) = (20, 60, 0, 0, 20, 180)
Thus,
x1 = number of soldiers per week = 20
x2 = number of trains per week = 60
and the maximum weekly profit is Z = $180.

Lecture 1: Network Models

- Networks arise in numerous settings and in a variety of guises.


Transportation, electrical, and communication networks have become
very common and essential elements of our lives.
- In the perspective of operations research, network models are widely
used for solving optimization problems in such diverse areas as
production, distribution, project planning, facilities location, resource
management, financial planning, etc.
- Succinctly, network models provide dependable visual and conceptual
aid for representing the relationships between the components of
systems, and as such, have been found useful in virtually every field of
scientific, social, and economic endeavour.
- Network optimization models have become a rapidly evolving component of operations research. Algorithms and software are now available for solving network optimization problems that proved difficult in past years. It is also possible to recognize that many network optimization models are actually special types of linear programming problems.
- As a matter of fact, there are quite a number of important kinds of network problems. These include the shortest-path problem, the maximum-flow problem, minimum spanning tree problems, minimum-cost flow problems, the Critical Path Method (CPM), etc.
To proceed we look at the basic terminologies associated with network
problems.

Basic Terminologies
- A graph or network is defined by two sets of symbols: nodes and arcs.
- The vertices of a graph or network are also called nodes and are points
of interest in a graph.
- An arc is an ordered pair of vertices and represents a possible direction
of motion that may occur between vertices.
Thus,
an arc (j, k) contained in a network implies that motion is possible from node j to node k, where j is the initial node and k the terminal node.
- A chain is a sequence of arcs such that every arc has exactly one vertex
in common with the previous arc.
- A path is a chain in which the terminal node of each arc is identical to
the initial node of the next arc.
Examples
[Figure: a four-node network (nodes 1, 2, 3, 4) used to illustrate chains and paths.]
Observe that (1, 2) - (2, 3) - (4, 3) is a chain but not a path, while (1, 2) - (2, 3) - (3, 4) is both a chain and a path. The path (1, 2) - (2, 3) - (3, 4) represents a way to travel from node 1 to node 4.

Shortest-Path Problems

Shortest-path problems are concerned with finding the path of minimum total length from one node to another, where each arc in the network is assumed to have an associated length.

Example:
Consider the following Powerco example, where power is generated at plant 1 and transmitted through substations to city 1 (node 6), as shown in the figure below.

[Figure: the Powerco network. Plant 1 (node 1) connects to node 2 (arc length 4) and node 3 (length 3); node 2 connects to node 4 (length 3) and node 5 (length 2); node 3 connects to node 5 (length 3); nodes 4 and 5 each connect to city 1 (node 6) with arcs of length 2.]

For power to be sent from plant 1 (node 1) to city 1 (node 6), it must pass
through relay substations (nodes 2-5). In consideration of the above figure, if
the cost of transmitting power is proportional to the distance traveled, it
becomes necessary to determine the shortest path between plant 1 and city 1
in order to be operating at a minimum cost.
- It is important to note that with simple and small networks, finding the shortest path can be reduced to enumerating the feasible paths between the source and the destination, summing the arc lengths along each, and picking the shortest. In reality, however, shortest-path problems more often involve large and complicated networks and/or internetworks. This scenario demands a reliable scientific approach for determining the shortest path. One such method is the popular Dijkstra's algorithm.

Dijkstra’s Algorithm (DA):

- DA can be used to find the shortest path from one node to another where all arc lengths are non-negative.
- DA works by the use of labels: temporary and permanent.
- To begin, node 1 is given a permanent label of 0; then each node i that is connected to node 1 is given a temporary label equal to the length of the arc joining node 1 to node i.
- Each other node, apart from node 1 and those directly connected to node 1, has a temporary label of ∞.
- From the set of labels, the node with the smallest temporary label is made permanent.
- When node i becomes the (k + 1)th node to receive a permanent label, node i is the kth closest node to node 1.
- At this point, the temporary label of any other node, e.g. node i', is the length of the shortest path from node 1 to node i' that passes only through nodes contained among the k - 1 closest nodes to node 1.
- For each node j that still has a temporary label and is connected to the newly permanent node i by an arc, we replace node j's temporary label with
min { node j's current temporary label, node i's permanent label + length of arc (i, j) }
[where min(a, b) is the smaller of a and b].

- The new temporary label of node j is then the length of the shortest path from node 1 to node j that passes only through nodes contained among the k closest nodes to node 1.
- The smallest temporary label is then made a permanent label.
- The node with this new permanent label is the (k + 1)th closest node to node 1.
- This process is continued until all nodes have a permanent label.
- To find the shortest path from node 1 to node j, work backwards from node j by finding nodes whose labels differ by exactly the length of their connecting arc. Where only the shortest path from node 1 to node j is required, the labeling process can stop as soon as node j receives a permanent label.
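As a study aid, here is a minimal sketch of Dijkstra's algorithm in Python (a priority-queue variant of the labeling procedure described above); the Powerco arc lengths used below are reconstructed from the figure and the walkthrough that follows, so treat them as an assumption.

# Minimal Dijkstra sketch (Python assumed available).
import heapq

def dijkstra(graph, source):
    # graph: dict mapping node -> list of (neighbour, arc_length); lengths must be non-negative.
    dist = {source: 0}                # permanent labels found so far
    frontier = [(0, source)]          # (temporary label, node) priority queue
    done = set()
    while frontier:
        d, node = heapq.heappop(frontier)
        if node in done:
            continue
        done.add(node)                # this label becomes permanent
        for neighbour, length in graph.get(node, []):
            new_label = d + length
            if new_label < dist.get(neighbour, float("inf")):
                dist[neighbour] = new_label
                heapq.heappush(frontier, (new_label, neighbour))
    return dist

# Powerco arc lengths as read from the figure (an assumption, see above).
powerco = {1: [(2, 4), (3, 3)], 2: [(4, 3), (5, 2)], 3: [(5, 3)], 4: [(6, 2)], 5: [(6, 2)]}
print(dijkstra(powerco, 1))   # expected permanent labels: {1: 0, 2: 4, 3: 3, 5: 6, 4: 7, 6: 8}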

Illustration of Dijkstra’s Algorithm

- Recall the Powerco example and its network figure (reproduced above), with plant 1 at node 1, city 1 at node 6, and relay substations at nodes 2-5.

Solution
- We use "*" to denote a permanent label.
- In line with the steps of the DA method, node 1 receives a permanent label of 0 and the adjacent nodes (2 and 3) are labeled with their arc lengths from node 1, while all others are labeled ∞.

Thus, the first set of labels becomes
[0*, 4, 3, ∞, ∞, ∞]
- Now node 3 has the smallest temporary label and therefore is made permanent; the set of labels changes to [0*, 4, 3*, ∞, ∞, ∞]

We now know that node 3 is the closest node to node 1. We thus determine new temporary labels for all nodes that are connected to node 3 by a single arc. From the figure, this is node 5, where
new node 5 temporary label = min {∞, 3 + 3} = 6
Our new set of labels becomes
[0*, 4, 3*, ∞, 6, ∞]
- With node 5 receiving a temporary label of 6, node 2's label of 4 becomes the smallest temporary label and is therefore made permanent.
- Since nodes 4 and 5 are connected to the newly permanent node 2, we update the temporary labels of nodes 4 and 5:
new node 4 temporary label = min {∞, 4 + 3} = 7
while
new node 5 temporary label = min {6, 4 + 2} = 6.
- With this, node 5 has the smallest temporary label and is therefore given a permanent label. The new set of labels becomes
[0*, 4*, 3*, 7, 6*, ∞]
- Only node 6 is connected to node 5, so node 6's temporary label is obtained as min {∞, 6 + 2} = 8.
- This leaves node 4 with the smallest temporary label, so it is made permanent and our new set of labels becomes

[0*, 4*, 3*, 7*, 6*, 8]

- Because node 6 is connected to the newly permanent node 4, we must re-determine node 6's temporary label:
min {8, 7 + 2} = 8.
We now make node 6's label permanent, and our set of labels becomes
[0*, 4*, 3*, 7*, 6*, 8*]
- To determine the shortest path from node 1 to node 6, we work backwards from node 6 through the nodes having labels differing by exactly the length of the connecting arc. To do this effectively, we reproduce the diagram showing the nodes with their permanent labels.

- The Powerco example: nodes, arcs, and permanent labels.
[Figure: the Powerco network with permanent labels 0, 4, 3, 7, 6, and 8 shown beside nodes 1-6 respectively.]

- From the above, the difference between the permanent labels of node 6 and node 5 is 8 - 6 = 2 = length of arc (5, 6), so we go back to node 5.
- The difference between the permanent labels of node 5 and node 2 is 6 - 4 = 2 = length of arc (2, 5). So we go back to node 2, and then to node 1. This gives the shortest path as
1 - 2 - 5 - 6
- Observe that this path has a total length of 8. Note also that an alternative shortest path exists (1 - 3 - 5 - 6, also of length 8): shortest-path problems can have multiple optimal solutions.

Next
- Equipment replacement example
- Shortest – Path problem as a transshipment problem.
Then
- maximum – flows problems
- CPM and PERT
- MCNFP and minimum Spanning Tree Problems.

Shortest Path Problems (contd)

Equipment Replacements Example

A new car is purchased (at time 0) for $12,000.

The cost of maintaining a car during a year depends on its age at the beginning of the year, as given in Table 1. To avoid the high maintenance costs associated with an older car, an old car may be traded in and a new car purchased. The price received on a trade-in depends on the age of the car at the time of trade-in, as in Table 2. For simplicity, it is assumed that at any time it costs $12,000 to purchase a new car.

The goal is to minimize the net cost incurred during the next five years:
purchasing cost + maintenance costs - money received in trade-ins.
Formulate this problem as a shortest-path problem.

Where:

Table 1: Car maintenance costs
Age of car (yrs)    Annual maintenance cost ($)
0                   2,000
1                   4,000
2                   5,000
3                   9,000
4                   12,000

Table 2: Car trade-in prices
Age of car at trade-in (yrs)    Trade-in price ($)
1                               7,000
2                               6,000
3                               2,000
4                               1,000
5                               0

Discussion: Before we proceed, how do companies decide on the modalities for selling off official vehicles to staff? It is significant to consider this strategy as a relevant aspect of staff motivation. What about other equipment?

Solution
- The network has six nodes: 1 to 6.
- Node i is the beginning of year i.
- For i < j, an arc (i, j) corresponds to purchasing a new car at the beginning of year i and keeping it until the beginning of year j.
- The length of arc (i, j), cij, is the total net cost incurred in owning and operating a car from the beginning of year i to the beginning of year j if a new car is purchased at the beginning of year i and this car is traded in for a new car at the beginning of year j.
Thus,
cij = maintenance cost incurred during years i, i + 1, …, j - 1 + cost of purchasing a car at the beginning of year i - trade-in value received at the beginning of year j.
Applying the above formula (costs in $1,000):
c12 = 2 + 12 - 7 = 7
c13 = 2 + 4 + 12 - 6 = 12
:
c16 = 2 + 4 + 5 + 9 + 12 + 12 - 0 = 44
c23 = 2 + 12 - 7 = 7
:
c25 = 2 + 4 + 5 + 12 - 2 = 21
:
c34 = 2 + 12 - 7 = 7
:
c36 = 2 + 4 + 5 + 12 - 2 = 21
c56 = 2 + 12 - 7 = 7
and so on.

We observe that the length of any path from node 1 to 6 is the net cost
incurred during the next five years corresponding to a particular trade–in
strategy.
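A small computational sketch of this formulation follows (assuming Python; costs in $1,000). It builds the arc costs cij from Tables 1 and 2 and then finds the shortest path from node 1 to node 6 by working backwards through the years.

# Equipment-replacement example as a shortest-path problem (costs in $1,000).
maint = [2, 4, 5, 9, 12]                 # annual maintenance cost for a car of age 0..4
trade = {1: 7, 2: 6, 3: 2, 4: 1, 5: 0}   # trade-in value by age of car at trade-in
PURCHASE = 12

def arc_cost(i, j):
    # Net cost of buying at the start of year i and trading in at the start of year j.
    age = j - i
    return PURCHASE + sum(maint[:age]) - trade[age]

best, choice = {6: 0}, {}                # best[i] = cheapest cost from node i to node 6
for i in range(5, 0, -1):
    costs = {j: arc_cost(i, j) + best[j] for j in range(i + 1, 7)}
    choice[i] = min(costs, key=costs.get)
    best[i] = costs[choice[i]]

print("minimum five-year net cost:", best[1])   # should print 31, i.e. $31,000
node = 1
while node != 6:                         # recover one optimal trade-in strategy
    print(f"buy at the start of year {node}, trade in at the start of year {choice[node]}")
    node = choice[node]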

The shortest path problem as a Transshipment problem
- Optimization problems which deal with finding the shortest path
between node i and node j in a network may be seen as a transshipment
problem.

- Accordingly, the objective function will be to minimize the cost of


sending one unit of an item from node i (source) to node j (destination)
where all other intervening nodes are regarded as transshipment points.
This type of optimization problem is common to courier and cargo
operators, airline operators, and several others in the transportation
industry.

- For reasons of illustration, we can formulate the balanced transportation problem associated with finding the shortest path from plant 1 to city 1 in the figure already used. Therein, we want to send one unit from node 1 to node 6.

- In transportation models, node 1, i.e. the source, is usually regarded as a supply point - a point that can send goods to another point but cannot receive goods from any other point. Similarly, node 6 is treated as a demand point - a point that can receive goods from other points but cannot send goods to any other point. All the other nodes (nodes 2-5) are regarded as transshipment points, where a transshipment point is a point that can both receive goods from other points and send goods to other points.

- An optimal solution to the transshipment problem can be found using the transshipment representation of shortest-path problems (which you can study on your own), or by using LINGO (a software approach) or a spreadsheet optimizer. (Research through the relevant literature.)

Maximum-Flow Problems

- Maximum-flow problems pertain to network models in which each arc has a capacity that constrains the quantity of a product (oil, passengers, etc.) that may be shipped through the arc.

- Subject to these capacity constraints, the objective is to transport the maximum amount of flow from a starting node (called the source) to a terminal node (called the sink).

- Methods for solving maximum-flow problems include the familiar linear programming method, the Ford-Fulkerson method, etc.

LP Solution of the maximum-flow problem
- To formulate an LP model for problems of this nature, consider a network involving the flow of a product in which the arcs represent pipelines of different diameters.

- For a flow to be feasible, it must have two obvious characteristics:
(1) 0 ≤ flow through each arc ≤ arc capacity, and
(2) flow into node i = flow out of node i.

- We assume that no product (for instance, oil) gets lost while being pumped through the network, so at each node a feasible flow must satisfy (2), the conservation-of-flow constraint.

- By convention, an artificial arc from the sink back to the source is added so that the conservation-of-flow constraint can also be written for the source and the sink; the flow on this artificial arc equals the total flow through the network, which the LP maximizes.
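The following is a minimal sketch of that LP formulation (the network data are hypothetical, chosen only for illustration, and Python with SciPy is assumed to be available): each arc gets a flow variable bounded by its capacity, conservation of flow is imposed at every node, and the flow on the artificial arc from the sink back to the source is maximized.

# Minimal LP max-flow sketch on a hypothetical network.
from scipy.optimize import linprog

# Hypothetical network: arcs (tail, head, capacity); "s" is the source, "t" the sink.
arcs = [("s", "a", 4), ("s", "b", 3), ("a", "b", 2), ("a", "t", 3), ("b", "t", 5),
        ("t", "s", None)]                      # artificial return arc (uncapacitated)
nodes = ["s", "a", "b", "t"]

n = len(arcs)
c = [0.0] * n
c[-1] = -1.0                                   # maximize flow on the artificial arc
A_eq = [[0.0] * n for _ in nodes]              # conservation: flow out - flow in = 0
for k, (tail, head, cap) in enumerate(arcs):
    A_eq[nodes.index(tail)][k] += 1.0
    A_eq[nodes.index(head)][k] -= 1.0
b_eq = [0.0] * len(nodes)
bounds = [(0, cap) for (_, _, cap) in arcs]    # 0 <= flow <= capacity (None = unbounded)

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print("maximum flow:", -res.fun)               # 7.0 for this particular network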

Examples:
(1) Airline maximum flow
(2) Match making both in the texts.

NB
- We ignore the use of LINDO and LINGO due to limitation (constraints)
imposed by unavailability of systems and time.

- Observe that only the salient features of the main paradigms inherent in network models are mentioned here, in order to facilitate further studies by the students. Thus, you may study the Ford-Fulkerson method on your own.

CPM and PERT

- Network models can be used as an aid in scheduling large complex


projects that consist of many activities.
- Where the duration of each activity is known, the Critical Path Method (CPM) can be used to determine the length of time required to complete a project.
- CPM can also be used to determine how long each activity in the project can be delayed without delaying the completion of the project.
- CPM was developed in the late 1950s by researchers at DuPont and Sperry Rand.
- On the other hand, where the duration of the activities is not known with certainty, the Program Evaluation and Review Technique (PERT) can be used to estimate the probability that the project will be completed by a given deadline.
- PERT, like CPM, was developed in the late 1950s, by consultants working on the development of the Polaris missile.

Discussion
- Necessary consideration in the planning and budgeting of software
systems development projects.
- What about failure of software projects?
- Consider the philosophy of failure to plan implying planning to fail.

CPM and PERT have been used successfully in many applications, which
include;
(1) Scheduling construction projects
(2) Facilities relocation
(3) System (computer) installations
(4) Ship building
(5) Designing and Marketing new products
(6) Completion of corporate mergers and acquisitions.

Requirements for CPM and PERT


- List of activities that make up the project: Project completion occurs at
the point when all the activities have been completed.
- Consideration for predecessor activities for each activity: A project
network is used to depict the precedence relationships between
activities.
- In activity network diagrams, activities are represented by directed arcs, and nodes represent the completion of a set of activities (events). Such a network is called an Activity-on-Arc (AOA) network.

Examples.
(1)
[Figure: node 1 --A--> node 2 --B--> node 3.]

Node 2 represents the completion of activity A and the commencement of activity B.

(2)
[Figure: arc A leads from node 1 into node 3, arc B leads from node 2 into node 3, and arc C leaves node 3.]

Activities A and B are predecessors of activity C. Node 3 is the event that activities A and B are completed.

(3)
[Figure: arc A leads from node 1 to node 2, and arcs B and C both leave node 2.]

Activity A is the predecessor of both B and C. Node 2 is the event that activity A is completed.

Rules for Constructing an AOA representation to project:


(1) Node 1 represents the start of the project. An arc should lead from
node 1 to represent each activity that has no predecessors.
(2) A node (the finish node) representing the completion of the project
should be included in the network.
(3) Number the nodes in the network so that the node representing the
completion of an activity always has a larger number than the node
representing the beginning of an activity (incremental numbering).
(4) An activity should not be represented by more than one arc in the
network.
(5) Two nodes can be connected by at most one arc: an implication of
rule 4.

To avoid the violation of rules 4 and 5, it is sometimes necessary to utilize a


dummy activity that takes zero time.

Thus, instead of representing two activities A and B by two parallel arcs from node 1 to node 2 (which would violate rule 5), we introduce an additional node 3 and a zero-time dummy arc: A is drawn from node 1 to node 2, B from node 1 to node 3, and the dummy arc from node 3 to node 2, after which activity C can leave node 2 as before.

NB
- Two key building blocks in CPM are the concepts of early event time (ET)
and late event time (LT) for an event. Students should study these event
times and know how they are computed.
- We cannot conclude this section without stating how to find a critical
path, at least by definition:

A path from node 1 to the finish node that consists entirely of critical
activities is called a critical path.
Where,
- A critical activity is any activity with a total float of zero.

Where;
- The total float represented by TF(i,j) of the activity (i,j) is the amount by
which the starting time of activity (i,j) could be delayed beyond its
earliest possible starting time without delaying the completion of the
project (assuming no other activities are delayed).
- Observe that the linear programming approach, as well as the LINGO system, can be used to find the critical path. However, we skip these and the PERT details owing to paucity of time; a small computational sketch of ET, LT, and total float follows below.
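As a study aid for the event times mentioned above, here is a minimal sketch (with hypothetical activity data, assuming Python) of the forward pass for early event times ET, the backward pass for late event times LT, and the total float TF(i, j); activities with zero total float form the critical path. It assumes the nodes are numbered as required by rule 3, so every arc goes from a lower-numbered node to a higher-numbered one.

# Hypothetical AOA project: arcs (i, j) with durations; node 1 starts, node 4 finishes.
activities = {(1, 2): 3, (1, 3): 2, (2, 3): 0, (2, 4): 4, (3, 4): 6}   # (2, 3) is a dummy
nodes = sorted({n for arc in activities for n in arc})

ET = {nodes[0]: 0}                       # forward pass: ET(j) = max over arcs (i, j) of ET(i) + d
for j in nodes[1:]:
    ET[j] = max(ET[i] + d for (i, k), d in activities.items() if k == j)

LT = {nodes[-1]: ET[nodes[-1]]}          # backward pass: LT(i) = min over arcs (i, j) of LT(j) - d
for i in reversed(nodes[:-1]):
    LT[i] = min(LT[j] - d for (k, j), d in activities.items() if k == i)

for (i, j), d in sorted(activities.items()):
    tf = LT[j] - ET[i] - d               # total float TF(i, j)
    print(f"activity ({i},{j}): duration {d}, total float {tf}" + (" <- critical" if tf == 0 else ""))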

Assignment
With the aid of a suitable illustration discuss each of the following
concepts:
(1) Minimum spanning tree
(2) Minimum cost network flow problem
(3) Critical Path method (CPM)
(4) Program evaluation and review technique (PERT)

Conclusion of Network Models
- Networks of some type arise in a wide variety of contexts. Network
representations are very useful for portraying the relationships and
connections between the components of systems.
- Frequently, flow of some type must be sent through a network, so a
decision needs to be made about the best way to do this.
- The kinds of network optimization models and algorithms introduced,
thus far, provide a powerful tool for making such decisions.
- The minimum-cost flow problem plays a central role among these network optimization models, both because it is broadly applicable and because it can be solved extremely efficiently; it generalizes several of the problems we have seen, including the shortest-path problem (Dijkstra, LP, simplex), the maximum-flow problem, and the transshipment and assignment problems.
- Whereas all these models are concerned with optimizing the operation
of an existing network, the minimum spanning tree (not discussed) is a
prominent example of a model for optimizing the design of a new
network.
- The CPM method of time–cost trade-offs provides a powerful way of
using a network optimization model to design a project so that it can
meet its deadline with a minimum total cost.
- The topic, network models so treated, has only scratched the surface of
the current state of the art of network methodology. Therefore, you may
wish to consider further work in this subject area.
- Because of their combinatorial nature, network problems often are
extremely difficult to solve. However, great progress is being made in
developing powerful modeling techniques and solution methodologies
that are opening up new vistas for important applications.
- In fact, recent algorithmic advances are enabling researchers to solve successfully some complex network problems of enormous size.

Lecture 2: Game Theory

- A basic feature of many real-life situations is the presence of conflict and competition. Both have become common in political campaigns, military battles, parlour games, and the advertising and marketing campaigns of competing firms in almost every industry. In the light of existential survivability, it is imperative for decision makers to be tactical, as well as prudent, with their choices and/or courses of action.
- Game theory is a mathematical theory that deals with the general
features of competitive situations in a manner that is both formal and
abstract. The theory places emphasis on the decision-making process
of the adversaries.
- The usefulness of game theory is made even more prominent by the fact that decisions of high quality can be reached in cases where two or more decision makers have conflicting interests. Game theory is thus in contrast to other decision-making frameworks, which concern a single decision maker.
- For reasons of simplicity, the scope of this section will carefully avoid delving into the more complicated types of competitive situations. Thus, the focus is on the simplest cases: two-person zero-sum games and two-person constant-sum games.

Two-Person Zero – Sum Games:


Characterised by:
(1) Existence of 2 players: the row player and the column player.
(2) The row player must choose 1 of m strategies and, at the same time, the column player must choose 1 of n strategies.
(3) If the row player chooses the ith strategy while the column player chooses the jth strategy, then the row player receives a reward of aij while the column player loses the amount aij. Thus, the row player's reward comes from the column player.

Example: matrix representation of a 2-person 0-sum game.

Row player's strategy    Column player's strategy
                         1      2      …      n
1                        a11    a12    …      a1n
2                        a21    a22    …      a2n
…                        …      …      …      …
m                        am1    am2    …      amn

(4) A 2-person 0-sum game has the property that for any choice of strategies, the sum of the rewards to the players is zero, i.e. every dollar won by one player comes out of the other player's pocket: total conflict of interest.

NB: The theory of how two-person zero-sum games should be played was developed by John von Neumann and Oskar Morgenstern, based on the assumptions that follow.
Basic Assumption of 2-person 0-sum Game Theory

- Each player chooses the best possible strategy, knowing that his opponent is aware of the strategy he is following.
- The implication of the above assumption is that the row player should choose the row having the largest minimum, while the column player chooses the column that has the smallest maximum, as illustrated with the following textbook example:

Row player's strategy    Column player's strategy        Row minimum
                         1      2      3
1                        4      4      10                4
2                        2      3      1                 1
3                        6      5      7                 5
Column maximum           6      5      10

- Based on the assumption, if the row player chooses row 1, the column player will choose either column 1 or column 2 (to keep the payout at 4).
- Similarly, the column player's choice of column is obvious if the row player chooses row 2 or row 3.
- In the same vein, if the column player chooses column 1, the row player will choose row 3 (the highest reward), and so on.
- Based on the assumption of a 2-person 0-sum game and this analysis of the game matrix, we discover that the matrix satisfies the saddle-point condition, which states that

max over all rows (row minimum) = min over all columns (column maximum)   (1)

- Any 2-person 0-sum game satisfying (1) is said to have a saddle point.
- If a game has a saddle point, the common value on both sides of (1) is called the value of the game to the row player.
- An easy way to spot a saddle point: it is an entry that is both the smallest number in its row and the largest number in its column.
- A saddle point (named after the shape of a horse's saddle) can also be thought of as an equilibrium point, in that neither player can benefit from a unilateral change in strategy; neither player has an incentive to move away from the saddle point.
- Nevertheless, we note that there are games without a saddle point; examples are given in the texts.
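A minimal sketch (assuming Python) of how to spot saddle points programmatically, using the "smallest in its row, largest in its column" characterisation above:

def find_saddle_points(A):
    # Positions (row, column) of entries that are the smallest in their row
    # and the largest in their column, i.e. the saddle points of the matrix.
    row_min = [min(row) for row in A]
    col_max = [max(col) for col in zip(*A)]
    return [(i, j) for i, row in enumerate(A) for j, a in enumerate(row)
            if a == row_min[i] and a == col_max[j]]

# The textbook example above:
A = [[4, 4, 10],
     [2, 3, 1],
     [6, 5, 7]]
print(find_saddle_points(A))   # [(2, 1)] -> row 3, column 2, value 5 (the value of the game)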

Discussion
- Real life business scenarios which we can classify as 2 – person
0 –sum games.

Solution (Class work)


(2) Two-Person Constant-sum Games

- Two players can still be in conflict even when the game is not zero-sum.
- A two-person constant-sum game is a game in which the row player's reward and the column player's reward add up to a constant value c for any choice of strategies by the two players.
- Suffice it to say that a 2-person 0-sum game is a 2-person constant-sum game with c = 0.
- In 2-person constant-sum games, both players remain in total conflict: a unit increase in one player's reward always results in a unit decrease in the other player's reward. The same method used for 2-person 0-sum games can be used to find optimal strategies.

Example: Constant sum TV game (from text book)


- A case of two networks vying for an audience of 100 million viewers during the 8 to 9 pm time slot. The networks must simultaneously announce the type of show they will air in that slot. The table below shows the possible choices for each network and the number of network 1 viewers (in millions) for each pair of choices.

                  Network 2                                  Row
Network 1         Western    Soap opera    Comedy           minimum
Western           35         15            60               15
Soap opera        45         58            50               45
Comedy            38         14            70               14
Column maximum    45         58            70

Discussion
For example, if both networks choose a western, the matrix indicates that 35
million people will watch network 1 and 100 – 35 = 65 million people will
watch network 2 indicating a constant-sum game with C = 100 (million).

Question
- Does this game have a saddle point?
- What is the value of the game to network 1?
- What is the value of the game to network 2?

Solution
Find the strategy pair that satisfies:
max (row minimum) = min (column maximum) = 45
[network 1 choosing soap opera and network 2 choosing western]

Question
Prove that neither network can do better by unilaterally changing strategy, i.e.
by moving away from the saddle point, if any.

Two-Person Zero-Sum Games without a saddle point

- There are instances where 2-person 0-sum games can come up without
a saddle point.
- For example: Odds and Evens.
Consider two players, Odd and Even, who simultaneously choose the number of fingers (1 or 2) to put out. If the sum of fingers put out by both players is odd, Odd wins £1 from Even; otherwise Even wins £1 from Odd. Taking Odd to be the row player and Even the column player, determine whether the game has a saddle point.

Table: Reward matrix for Odds and Evens.

Row player (Odd)    Column player (Even)             Row minimum
                    1 finger      2 fingers
1 finger            -1            +1                 -1
2 fingers           +1            -1                 -1
Column maximum      +1            +1

Solution
max (row minimum) = -1
and min (column maximum) = +1
Thus,
max (row minimum) ≠ min (column maximum), so the game has no saddle point.
It then remains a problem how to determine an optimal strategy. Observe that for any pair of pure strategies, one of the players can benefit by unilaterally changing her strategy. We therefore still need a way to optimize.
- Approaches for finding optimal solutions to 2-person 0-sum games without a saddle point include randomized (mixed) strategies, domination, and the graphical solution.

(1) Randomized or Mixed Strategies

- These proceed by allowing each player to select a probability of playing each strategy.
- Considering the Odds and Evens game of putting out 1 or 2 fingers, we define
x1 = probability that Odd puts out one finger
x2 = probability that Odd puts out two fingers
y1 = probability that Even puts out one finger
y2 = probability that Even puts out two fingers

- If x1 ≥ 0, x2 ≥ 0, and x1 + x2 = 1, then (x1, x2) is a randomized, or mixed, strategy for Odd. The same goes for the column player Even.
- A mixed strategy (x1, x2, …, xm) for the row player is a pure strategy if some xi = 1.
- A pure strategy is thus one in which the player always chooses the same action.

- In the context of randomized strategies, the basic assumption, from the row player's point of view, may be stated as follows:
- The row player should choose x1 and x2 to maximize her expected reward under the assumption that the column player knows the values of x1 and x2. Recall that x1 and x2 are probabilities; the column player knows these probabilities but not the actual choice made at the instant the game is played.

Graphical Solution of Odds and Evens

- Recall that x1 + x2 = 1, so x2 = 1 - x1.
Thus the mixed strategy may be written as (x1, 1 - x1).
From the table:
             Even
Odd          1      2
1            -1     +1
2            +1     -1
If Even puts out one finger and Odd chooses a mixed strategy (x1, 1 - x1), then Odd's expected reward is
(-1)x1 + (+1)(1 - x1) = 1 - 2x1

If Even puts out two fingers, Odd's expected reward is
(+1)x1 + (-1)(1 - x1) = 2x1 - 1

Graphically (expected reward to Odd as a function of x1):

[Graph: line AC runs from A(0, 1) to C(1, -1) and shows Odd's expected reward 1 - 2x1 when Even picks 1; line DE runs from D(0, -1) to E(1, 1) and shows Odd's expected reward 2x1 - 1 when Even picks 2; the two lines intersect at point B.]

AC = Odd's expected reward as a function of x1 if Even picks 1
DE = Odd's expected reward as a function of x1 if Even picks 2

- From the graph, Odd's guaranteed expected reward for a given x1 is the lower of the two lines, i.e. the y-coordinate along the lower envelope D-B-C.
- For Odd to maximize this guaranteed expected reward, Odd should choose the value of x1 corresponding to point B, the intersection of lines AC and DE, where
1 - 2x1 = 2x1 - 1, so x1 = ½.
Thus,
Odd should choose the mixed strategy (½, ½). This mixed strategy ensures that Odd will have an expected reward of at least zero no matter what strategy Even plays; this can be verified by substituting x1 = ½ into the reward expressions 1 - 2x1 and 2x1 - 1, each of which then equals zero.

NB: The case for Even can be analysed in a similar fashion; zero is then the ceiling on Odd's expected reward, so the value of the game is 0.
- Permit me also to say that linear programming can be used to find the value and optimal strategies of any 2-person 0-sum game.
- The students can complete this chapter and read up other paradigms of interest in game theory.
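A minimal numerical check of the graphical argument (assuming Python): for any choice of x1, Odd's guaranteed expected reward is the smaller of the two line values, and x1 = ½ is the choice that pushes this floor up to the value of the game, 0.

# Reward matrix to Odd (row player); rows = Odd's choice, columns = Even's choice.
A = [[-1, +1],
     [+1, -1]]

def expected_reward(x1, even_choice):
    # Odd plays 1 finger with probability x1 and 2 fingers with probability 1 - x1.
    return x1 * A[0][even_choice] + (1 - x1) * A[1][even_choice]

for x1 in (0.25, 0.5, 0.75):
    floor = min(expected_reward(x1, 0), expected_reward(x1, 1))
    print(f"x1 = {x1}: guaranteed expected reward = {floor}")
# Only x1 = 0.5 achieves a floor of 0; any other x1 gives a strictly lower guarantee.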

Conclusion
- The general problem of how to make decisions in a competitive
environment is a very common and important one.
- The fundamental contribution of game theory is that it provides a basic
conceptual framework for formulating and analyzing such problems in
simple situations.
- However, there is a considerable gap between what the theory can handle and the complexity of most competitive situations arising in practice. Therefore, the conceptual tools of game theory usually play just a supplementary role in dealing with these situations.
- Finally, because of the importance of the general problem, research is
continuing with some success to extend the theory to more complex
situations.

29
Lecture 3: Inventory Models

- The term inventory represents stocks of goods being held for future use or sale. Placing orders to replenish inventories soon enough to avoid shortages has become strategic for the sustenance of any company dealing with physical products, and any such company stands to benefit from the techniques of scientific inventory management.

- In other words inventory maintenance is relevant to all categories of


businesses dealing with physical products such as manufacturing,
wholesale, and retail.
For example, manufacturers will need inventories of both raw materials
for internal use and finished products awaiting shipment.

- Despite the usefulness of holding sufficient inventory, the annual costs associated with storing inventory can be quite large and can have a significant impact on a company's bottom line. Thus, a corporate strategic plan should consider reducing inventory-related costs, by avoiding unnecessarily large inventories, as an essential step towards enhancing the firm's competitiveness.

- It is notable that some Japanese companies were pioneers in introducing the just-in-time inventory system, a system of planning and scheduling inventory that ensures materials are available just at the time of their use, resulting in huge savings.

- The application of operations research techniques to inventory, called scientific inventory management, provides a powerful tool for gaining a competitive advantage.

Steps for Using Scientific Inventory Management

(1) Formulating a mathematical model that describes the behaviour of


the inventory system.
(2) Determining an optimal inventory policy with respect to the model
formulated in (1)
(3) Using a computerized information processing system to maintain a
record of the current inventory levels.
(4) Using this record of current inventory levels, to apply the optimal
inventory policy to signal when and how much to replenish inventory.

The mathematical inventory models used for the purposes of scientific inventory management fall into two main categories, according to the predictability of the demand involved. The categories are deterministic models and stochastic models, for predictable and unpredictable (random) demand respectively.

Components of Inventory Models

- It is common knowledge that inventory policies affect profitability, where profitability is the difference between total revenue and total cost. With respect to inventory models, some of the costs that determine profitability are:
(1) Ordering costs,
(2) Holding costs, and
(3) Shortage costs.
Other relevant factors include:
(4) Revenues,
(5) Salvage costs, and
(6) Discount rates.

(1) Ordering and set-up costs
These costs are associated with placing an order or setting up production, and the set-up portion does not depend on the size of the order or of the production run. The total ordering cost can be represented by a function c(Z), where Z is the amount ordered. It is normally assumed to be composed of two parts: a term that is directly proportional to the amount ordered and a term that is a constant k for Z positive and 0 for Z = 0. For this case,
c(Z) = cost of ordering Z units
     = 0          if Z = 0
     = k + cZ     if Z > 0,
where k = setup cost and c = unit cost. The constant k includes the administrative cost of ordering or, when producing, the costs involved in setting up to start a production run. Note that other assumptions can be made about the ordering cost.

- The unit purchasing cost c above is the variable cost associated with
purchasing a single unit. It includes: the variable labour cost, variable
overhead cost, and raw material cost associated with purchasing or
producing a single unit. It also includes shipping costs, if goods are
sourced externally.

(2) The holding costs
This is sometimes referred to as the storage cost and represents all the costs associated with the storage of the inventory until it is sold or used. In other words, it can be described as the cost of carrying one unit of inventory for one time period. Included are: the storage cost, the cost of capital tied up, insurance, protection, theft, taxes attributable to storage, and costs due to the possibility of spoilage (or expiry, or obsolescence). Usually, however, the most significant component of holding cost is the opportunity cost incurred by tying up capital in inventory.

(3) The shortage cost (or stockout cost)
- This is sometimes called the unsatisfied-demand cost, and it is the cost incurred when the quantity demanded exceeds the available stock. The shortage cost depends on the specific nature of demand: whether the customers will accept a late delivery, that is, a back order (backlogging), or not (no backlogging), in which case the result is a lost sale.

- Many costs are associated with stockouts. If back-ordering is allowed,


placement of back orders usually results in an extra cost. Stockouts
often cause customers to go elsewhere to meet current and future
demands, resulting in lost sales and lost goodwill. We can easily
conjecture other consequences of stock-outs.

- As mentioned earlier, inventory models are usually classified as either deterministic or stochastic, depending on whether the demand for a period is known or is a random variable having a known probability distribution. Another classification refers to whether the current inventory level is monitored continuously or periodically. In continuous review, an order is placed as soon as the stock level falls to the prescribed reorder point. In periodic review, the inventory level is checked at discrete intervals, e.g. weekly, monthly, quarterly, or even yearly, and ordering decisions are made only at those times, even if the inventory level dips below the reorder point between the preceding and current review times.

- For reasons of simplicity and time, we shall limit our discussion to


deterministic continuous review models.

Deterministic Continuous Review Models:


- The most common inventory situation faced by manufacturers, retailers,
and wholesalers is that stock levels are depleted over time and then are
replenished by the arrival of a batch of new units. A simple model
representing this situation is the economic order quantity model (EOQ
model).

- Units of the product under consideration are assumed to be withdrawn from inventory continuously at a known constant rate. It is further assumed that inventory is replenished when needed by ordering (either purchasing or producing) a batch of fixed size, all of which arrives simultaneously at the desired time.

The Basic Economic Order Quantity (EOQ) model
- The attributes of the EOQ model, just mentioned, may be described as: repetitive ordering, constant demand, constant lead time (the time between placing an order and its arrival), and continuous ordering, which implies that an order may be placed at any time.

# Assumptions of the Basic EOQ model


- For the basic EOQ model to hold, certain assumptions are required and
unit of time is taken as one year for definiteness. The assumptions are;
(1) Demand is deterministic and occurs at a constant rate.
(2) Ordering and setup cost k is incurred when an order of any size, say q is
placed.
(3) The lead time for each order is zero
(4) No shortages are allowed.
(5) The cost per unit – year of holding inventory is h.

- We define D to be the number of units demanded per year. Thus,


assumption (1) implies that during any time interval of length t years, an
amount Dt is demanded.

- The setup cost k of assumption (2) is in addition to the cost pq of purchasing or producing the q units ordered (recall that the unit purchasing cost p does not depend on the size of the order). This excludes situations such as discounts for large orders, which we consider outside the scope of our discussion.

- The 3rd assumption implies that orders arrive as soon as they are placed, while the 4th implies that all demand must be met on time. Assumption (5) implies that a carrying cost of $h is incurred if 1 unit is held for one year, if 2 units are held for half a year, or if ¼ unit is held for four years. In sum, if I units are held for T years, a holding cost of ITh is incurred.

- Given these five assumptions, the EOQ model determines an ordering


policy that minimizes the yearly sum of ordering cost, purchasing cost,
and holding cost.

Derivation of Basic EOQ model

- We proceed with some simple observations; considering inventory level


= I:

1. Orders are placed when I = 0 and not when I > 0, to avoid unnecessary holding cost.
2. Also, orders are placed so as to prevent a shortage from occurring.

- Observations 1 and 2 show that the policy that minimizes yearly cost must place an order whenever I = 0.

- The same quantity is ordered each time an order is placed.

- We may wish to denote the quantity that is ordered each time that I = 0
as q.

- We thus are to determine the value of q that minimize annual cost, call it
q*.

- Let TC(q) be the total annual cost incurred if q units are ordered each time I = 0.

Note that;
TC (q) = annual cost of placing orders
+ annual purchasing cost
+ annual holding cost
NB
- The number of units ordered per cycle = q.

- The total number of units demanded (and hence ordered) each year = D.

- The total number of orders per year = D/q.

- Thus, the purchasing cost per year = PD, while the purchasing cost per
order = Pq.

Since each order is for q units, to meet annual demand of D units, you
place D/q number of orders per year.

Hence,
Ordering cost/year = (ordering cost/order)(orders/year) = kD/q
- For all values of q, the per-unit purchasing cost is P. Since we purchase
D units per year,
Purchasing cost/year = (purchasing cost/unit)(units purchased/year) =
PD

- To compute the holding cost: if I units are held for one year at a cost of
$h per unit-year, then the total holding cost = $hI.

- If the inventory level I(t) varies over time, the holding cost over an
interval is determined by the average inventory level. The average
inventory level from time 0 to time T is

Ī(T) = (1/T) ∫₀ᵀ I(t) dt

and the total holding cost between time 0 and time T is

∫₀ᵀ h I(t) dt = h T Ī(T)

- To determine the annual holding cost, we need to examine the


behaviour of the inventory level I over time. Assuming an order of q units
arrives at time 0, with annual demand D, it will take q/D years for inventory
to reach zero. Since demand during any period of length t is Dt, the inventory at
any time will decline along a straight line of slope – D. When inventory
reaches zero, an order of size q is placed and arrives instantaneously,
raising the inventory level back to q. Given these observations, the
behaviour of I over time is given by;

[Figure: the inventory level I(t) over time — a sawtooth pattern that starts
at q, declines linearly with slope –D, and jumps back to q each time it
reaches zero, at times q/D, 2q/D, 3q/D, ...]

- Note that any interval of time that begins with the arrival of an order and
ends the instant before the next order is received is called a cycle.
- From the figure, the cycle length is q/D. Hence, each year will contain
1/(q/D) = D/q cycles.
- The average inventory during any cycle is simply half of the maximum
inventory attained during the cycle. This result will hold in any model for
which demand occurs at a constant rate and no shortages are allowed.

Thus, for the present model, average inventory = q/2 units.
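As a quick check of this claim (a worked step under assumption (1), not part of the original derivation), the inventory level during one cycle is I(t) = q – Dt for 0 ≤ t ≤ q/D, so the average inventory over the cycle is

(1/(q/D)) ∫₀^(q/D) (q – Dt) dt = (D/q)[q²/D – q²/(2D)] = q/2.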


Annual holding cost = holding cost/year = (holding cost/cycle)(cycles/year).

- Since the average inventory level during each cycle is q/2 and each
cycle length is q/D,
holding cost/cycle = (q/2)(q/D)h = q²h/2D
Then
Holding cost/year = (q²h/2D)(D/q) = hq/2

Combining ordering cost, purchasing cost, and holding cost, we obtain

TC(q) = kD/q + PD + hq/2

- To find the value of q that minimizes TC(q), we set TC′(q) equal to zero.
This yields

TC′(q) = –kD/q² + h/2 = 0          (1)

Equation (1) is satisfied for q = ±(2kD/h)^½.
Since q = –(2kD/h)^½ makes no sense, the EOQ q* is given by

q* = (2kD/h)^½

Proof:
Since TC″(q) = 2kD/q³ > 0 for all q > 0,
we know that TC(q) is a convex function.
Hence any point where TC′(q) = 0 will minimize TC(q).
Thus, q* does indeed minimize total annual cost.

Hint:
Exclude the burden of proof that q* minimizes cost. Emphasis is on derivation
of q*.
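As an illustration of the formulas above, the short Python sketch below computes q*, the number of orders per year, the cycle length, and TC(q*). The function names are choices made for this sketch; the sample figures match the taillight example (Example 2) considered below (k = $5 per order, D = 500 units/year, h = $0.08 per unit-year, p = $0.40 per unit).

```python
from math import sqrt

def eoq(k, D, h):
    """Economic order quantity: q* = sqrt(2kD/h)."""
    return sqrt(2 * k * D / h)

def total_annual_cost(q, k, D, h, p=0.0):
    """TC(q) = kD/q (ordering) + pD (purchasing) + hq/2 (holding)."""
    return k * D / q + p * D + h * q / 2

q_star = eoq(k=5, D=500, h=0.08)                 # 250 units per order
print(q_star)                                    # order size
print(500 / q_star, q_star / 500)                # orders per year, years between orders
print(total_annual_cost(q_star, k=5, D=500, h=0.08, p=0.40))   # $220 per year
```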

Inventory Policy Development Examples


(1) A television manufacturing company produces its own speakers,
which are used in the production of its tv sets. The tv sets are
assembled on a continuous production line at a rate of 8,000 per
month, with one speaker needed per set. The speakers are produced
in batches because they do not warrant setting up a continuous
production line, and relatively large quantities can be produced in a
short time. Therefore, the speakers are placed into inventory until
they are needed for assembly into television sets on the production
line. The company is interested in determining when to produce a
batch of speakers and how many speakers to produce and how
many speakers to produce in each batch.

Several costs must be considered to go forward:


(1) Each time a batch is produced, a setup cost of $12,000 is incurred.
This cost includes the cost of “tooling up”, administrative costs, record
keeping, etc. Note that the existence of this cost argues for
producing speakers in large batches.
(2) The unit production cost of a single speaker (excluding the setup
cost) is $10, independent of the batch size produced. In general this
may not necessarily be constant.
(3) The production of speaker in large batches leads to a large inventory.
The estimated holding cost of keeping a speaker in stock is $0.30
per month. This cost includes the cost of capital tied up in inventory.
Since the money invested in inventory cannot be used in other
productive ways, this cost of capital consists of the lost return (i.e
opportunity cost) as alternative uses of the money must be forgone.
Other components of the holding cost include storage space,
insurance, taxes based on value, and personnel cost to oversee and
protect the inventory.
(4) Company policy prohibits deliberately planning for shortages of any
of its components. However, a shortage of speakers occasionally
crops up, and it has been estimated that each speaker’s
unavailability costs $1.10 per month. This shortage cost includes the
extra cost of installing speakers after the television set is otherwise
fully assembled, the lost interest because of the delay in
receiving sales revenue, the cost of extra record keeping, etc.

Solution
From the EOQ formula,
q* = (2kD/h)^½
where k = 12,000, h = 0.30 per speaker per month, and D = 8,000 speakers per month.

Then
q* = [(2)(8,000)(12,000)/0.30]^½ = 25,298
and each cycle length is t = q/D.

Thus,
t* = 25,298/8,000 ≈ 3.2 months.

Hence, the optimal solution is to set up the production facilities to produce


speakers once every 3.2 months and to produce 25,298 speakers each time. Observe
that the total cost curve is quite flat near this optimal value, so any
similar production run that might be more convenient, say 24,000 speakers
every 3 months, would be nearly optimal. This observation follows from
testing the sensitivity of the total cost to small variations in the order
quantity (EOQ).
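A quick numerical check of this flatness, using the figures of Example 1 (k = 12,000, D = 8,000 speakers per month, h = 0.30 per speaker per month) and omitting the unit production cost, which does not depend on q. The function name below is simply a label chosen for this sketch.

```python
def monthly_setup_plus_holding(q, k=12_000, D=8_000, h=0.30):
    """Setup cost k*D/q plus holding cost h*q/2, per month (purchase cost omitted: it is pD regardless of q)."""
    return k * D / q + h * q / 2

at_optimum = monthly_setup_plus_holding(25_298)   # about 7,589.5 per month
convenient = monthly_setup_plus_holding(24_000)   # 4,000 + 3,600 = 7,600 per month
print(at_optimum, convenient)                     # the convenient run of 24,000 costs only about 0.1% more
```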

Example 2
(2) Braneast Airlines uses 500 taillights per year. Each time an order for
taillight is placed, an ordering cost of the $5 is incurred. Each light costs 40c,
and the holding cost is 8c/light/year. Assume that demand occurs at a
constant rate and shortages are not allowed. What is the EOQ?

How many orders will be placed each year? How much time will elapse
between the placements of orders?

Solution
We have that k = $5, h = $0.08/light/year, and D = 500 lights/year.
The EOQ is q* = [2(5)(500)/0.08]^½ = 250.

Hence, the airline should place an order for 250 taillights each time inventory
reaches zero.
orders/year = D/q* = 500/250 = 2 orders/year
The time between placement (or arrival) of orders is simply the length of a
cycle. Since the length of each cycle is q*/D, the time between orders will be:
q*/D = 250/500 = ½ year.

- It is also observable that, in most situations, a slight deviation from the EOQ
results in only a slight increase in cost: this is again the question of the
sensitivity of total cost to small variations in the order quantity. Thus,
moderate errors in the determination of the EOQ can be tolerated.
Conclusion of Inventory Theory

- We have just introduced only rather basic kinds of inventory model(s)


here, but they serve the purpose of introducing the general nature of
inventory models.

- However, we have attempted to present proper representations of


actual inventory situations that occur in practice.

- We have looked at the basic EOQ model. The model can sometimes be
modified to include some type of stochastic demand, such as the
stochastic continuous – review model.
- The elementary revenue management models, not discussed though,
are a starting point for the sophisticated kinds of revenue management
analysis that now is extensively applied in the airline industry and other
service industries with similar characteristics.

- Current trends in global economy show that multiechelon inventory


models are playing vital roles in the management of the supply chain of
many companies.

- Nevertheless, inventory models are still evolving to deal with more


complicated problems and challenges that continue to arise in practice.

Lecture 4: Dynamic Programming

(1) Introduction: (Hillier and Lieberman, 2010)


- Dynamic programming (DP) is a useful mathematical technique for
making a sequence of interrelated decisions. It provides a systematic
procedure for determining the optimal combination of decisions.

- In contrast to Linear Programming (LP), there does not exist a standard


mathematical formulation of the dynamic programming problem.
Rather, DP is a general type of approach to problem solving, and the
particular equations used must be developed to fit each situation.

- In the light of the above peculiar nature of DP, a certain degree of


ingenuity and insight into the general structure of dynamic
programming problems is required to recognize when and how a
problem can be solved by DP procedures.

- The competence to solve problems by DP can best be developed by


exposure to a wide variety of DP applications and a study of the
characteristics that are common to all the situations.

- Next we proceed to look at some of the characteristics of DP problems.


After that, we shall take a look at some examples, discuss the two types of
DP problems (deterministic and probabilistic), also look at some of the
uses of DP, and finally conclude.

(2) Characteristics of Dynamic Programming Problems


- DP is a technique that is useful in solving many optimization problems
and in most applications, it obtains solutions by working backward from
the end of a problem toward the beginning, thus breaking up large and
complex problems into smaller and more tractable problems.

- Some of the characteristics common to most application of DP are;


(1) The problem can be divided into stages, with a policy decision
required at each stage. As we shall see in some of the examples,
a decision may or may not be required at each stage and in some
cases, a stage is the amount of time or cost that has elapsed or
has been incurred since the beginning of a problem.
(2) Each stage has a number of states associated with the beginning
of that stage. Where, a state represents the information that is
needed at any stage to make an optimal decision. In general, the
states are the various possible conditions in which the system
might be at that stage of the problem. The number of states can
be finite or infinite.
(3) The decision chosen at any stage describes how the state at the

current stage is transformed into the state at the next stage.
(4) Given the current state, the optimal decision for each of the
remaining stages must not depend on previously reached states
or previously chosen decisions. This idea is known as the
principle of optimality in DP. In other words, the optimal
immediate decision depends only on the current state and not on
how that state was reached.
(5) If the states for the problem have been classified into one of T
stages, there must be a recursion that relates the cost or reward
earned during stages t, t + 1, …., T to the cost or reward earned
from stages t + 1, t + 2, …, T. In essence, the recursion formalizes
the working-backward procedure.

- In consideration of the above characteristics, we can proceed to


describe how to make optimal decisions [Winston, 2004]. Let us assume
that the initial state during stage 1 is i1. To use the recursion, we begin by
finding the optimal decision for each state associated with the last
stage. Then we use the recursion described in characteristic 5 to
determine fT-1(.) (along with the optimal decision) for every stage T – 1
state, and then fT-2(.) (along with the optimal decision) for every stage
T – 2 state.
We continue in this fashion until we have computed f1(i1) and the
optimal decision when we are in stage 1 and state i1. Our optimal
decision in stage 1 is then chosen from the set of decisions attaining f1(i1).
Choosing this decision at stage 1 will lead us to some state (call it i2)
at stage 2. At stage 2, we choose any decision attaining f2(i2), and we
continue in this fashion until a decision has been chosen for each stage.

(3) Example of Dynamic Programming Problems


Example
(1) A Network Problem
- Some applications of DP reduce to finding shortest path between two
points (nodes) in a network. DP (working backward) can be used to find
the shortest path in the following example.

- Joe Cougar lives in New York City, but he plans to drive to LA to seek
fame and fortune.

- Joe’s funds are limited, so he has decided to spend each night on his
trip at a friend’s house

- Joe has friends in Columbus, Nashville, Louisville, Kansas City, Omaha,


Dallas, San Antonio, and Denver.

- Joe knows that after one day’s drive, he can reach Columbus, Nashville,

or Louisville.

- After two days of driving, he can reach Kansas City, Omaha, or Dallas.

- After three days of driving, he can reach San Antonio or Denver.


- Finally, after four days, he can reach LA.

To minimize the number of miles traveled, where should Joe spend each night
of the trip? The actual road mileages between cities are given below.

[Figure: Joe's road network. Node 1 = New York (stage 1); nodes 2 = Columbus,
3 = Nashville, 4 = Louisville (stage 2); nodes 5 = Kansas City, 6 = Omaha,
7 = Dallas (stage 3); nodes 8 = Denver, 9 = San Antonio (stage 4); node 10 =
Los Angeles (stage 5). The road mileage Cij is shown on each arc; the
individual mileages appear in the computations below.]

Solution
- For Joe’s case, the shortest path between New York and LA can be
found by working backwards.
- All the cities that Joe can be in at the beginning of the nth day of his trip
are classified as stage n cities.

- As Joe can be in San Antonio or Denver, for example, at the beginning of


the 4th day, we classify San Antonio and Denver as stage 4 cities. Day 1
begins when Joe leaves New York.

- Working backwards implies solving an easier problem that will


eventually help to solve a complex problem.

- We thus proceed by finding the shortest path to LA from the stage 4 cities,
then from the stage 3 cities, and so on back to New York, which is four days away.

- Considering the numbers 1, ..., 10 used in the figure to label the cities, we
let Cij denote the road mileage between city i and city j, where, for
example, C35 = 580.

- Also, we let ft(i) be the length of the shortest path from city i to LA, given
that city i is a stage t city.

- We can now proceed to determine the shortest path to LA from all the
stages, working backwards:
(1) Stage 4 Computations:
- Stage 4 cities each have only one path to LA. We thus observe that f4(8) =
1,030, the shortest path from Denver to LA being the only path
from Denver to LA. Similarly, f4(9) = 1,390, the shortest (and only) path
from San Antonio to LA.

(2) Stage 3 Computations:


- We determine the shortest path for each of the stage 3 cities to LA;

Here, cities 5, 6, and 7 each have two paths to LA, giving f3(5), f3(6), and f3(7).

- For f3(5):
Path 1 – city 5 to city 8, and then the shortest path from city 8 to LA
Path 2 – city 5 to city 9, and then the shortest path from city 9 to LA

Thus,
Path 1 = C58 + f4(8) and Path 2 = C59 + f4(9).
Therefore, the shortest distance from city 5 to LA is

f3(5) = min{ C58 + f4(8) = 610 + 1,030 = 1,640*,
             C59 + f4(9) = 790 + 1,390 = 2,180 }

- The shortest path from city 5 to city 10, f3(5) = 1,640, marked by “*”, is the
path 5 – 8 – 10.

Similarly,

f3(6) = min{ C68 + f4(8) = 540 + 1,030 = 1,570*,
             C69 + f4(9) = 940 + 1,390 = 2,330 }

The shortest path from city 6 is 6 – 8 – 10, and


f3(7) = min{ C78 + f4(8) = 790 + 1,030 = 1,820,
             C79 + f4(9) = 270 + 1,390 = 1,660* }

and the shortest path from city 7 is 7 – 9 – 10.

(3) Stage 2 Computations:


Having determined f3(5), f3(6), and f3(7), we work backwards to determine
f2(2), f2(3), and f2(4), the shortest paths from cities 2, 3, and 4 to LA
respectively. For instance, observe that the shortest path from city 2 to
city 10 must begin by going from city 2 to city 5, 6, or 7 and then use the
shortest path from that city to city 10 (i.e. LA).
- From the figure, each stage 2 city has 3 paths to LA. We can compute
as follows for each city:

f2(2) = min{ C25 + f3(5) = 680 + 1,640 = 2,320*,
             C26 + f3(6) = 790 + 1,570 = 2,360,
             C27 + f3(7) = 1,050 + 1,660 = 2,710 }

Thus the shortest path (f2(2) = 2,320) goes from city 2 to city 5 and then
follows 5 – 8 – 10, i.e. 2 – 5 – 8 – 10.
Similarly,

f2(3) = min{ C35 + f3(5) = 580 + 1,640 = 2,220*,
             C36 + f3(6) = 760 + 1,570 = 2,330,
             C37 + f3(7) = 660 + 1,660 = 2,320 }

Thus f2(3) = 2,220, implying that the shortest path from city 3 to city 10
consists of arc 3 – 5 followed by the shortest path from city 5 to city 10,
i.e. 3 – 5 – 8 – 10.
Again,

f2(4) = min{ C45 + f3(5) = 510 + 1,640 = 2,150*,
             C46 + f3(6) = 700 + 1,570 = 2,270,
             C47 + f3(7) = 830 + 1,660 = 2,490 }

Thus, f2(4) = 2,150.

(4) Stage 1 computations:


Having determined f2(2), f2(3), and f2(4), we can work one more stage
backward to find f1(1), the shortest path from city 1 to city 10. This must
begin by going through city 2, city 3, or city 4.
Therefore,

f1(1) = min{ C12 + f2(2) = 550 + 2,320 = 2,870*,
             C13 + f2(3) = 900 + 2,220 = 3,120,
             C14 + f2(4) = 770 + 2,150 = 2,920 }

Thus f1(1) = 2,870. The shortest path from city 1 to city 10 therefore goes
from city 1 to city 2 and then follows the shortest path from 2 to 10, which is
2 – 5 – 8 – 10. This is equivalent to New York → Columbus → Kansas City →
Denver → LA, with a distance of 2,870 miles.
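The same working-backward computation can be expressed compactly in code. Below is a minimal Python sketch; the dictionary of arc mileages is taken from the stage computations above, and the function and variable names are choices made for this sketch rather than anything from the text.

```python
# Arc mileages Cij taken from the stage computations above; keys are (i, j).
ARCS = {
    (1, 2): 550, (1, 3): 900, (1, 4): 770,
    (2, 5): 680, (2, 6): 790, (2, 7): 1050,
    (3, 5): 580, (3, 6): 760, (3, 7): 660,
    (4, 5): 510, (4, 6): 700, (4, 7): 830,
    (5, 8): 610, (5, 9): 790,
    (6, 8): 540, (6, 9): 940,
    (7, 8): 790, (7, 9): 270,
    (8, 10): 1030, (9, 10): 1390,
}
STAGES = [[1], [2, 3, 4], [5, 6, 7], [8, 9], [10]]  # stage 1 .. stage 5 cities

def shortest_path():
    """Backward dynamic programming: f(i) = min over successors j of Cij + f(j)."""
    f = {10: 0}                                   # f(10) = 0: already at LA
    best_next = {}
    for stage in reversed(STAGES[:-1]):           # stages 4, 3, 2, 1
        for i in stage:
            choices = {j: ARCS[(i, j)] + f[j] for (x, j) in ARCS if x == i}
            best_next[i] = min(choices, key=choices.get)
            f[i] = choices[best_next[i]]
    # Recover the optimal route from New York (city 1) to LA (city 10).
    route, city = [1], 1
    while city != 10:
        city = best_next[city]
        route.append(city)
    return f[1], route

print(shortest_path())   # expected: (2870, [1, 2, 5, 8, 10])
```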

Example 2: The stage Coach Problem
- The stage coach problem concerns a mythical fortune seeker in
Missouri who decided to go west to join the gold rush in California
during the mid- 19th century. The journey would require traveling by
stagecoach through unsettled country where there was serious danger
of attack by marauders (in our case, kidnappers and Boko Haram).
Although his starting point and destination were fixed, he had
considerable choice as to which states (or territories that subsequently
became states) to travel through en route. The possible routes are
shown in the figure that follows, where each state is represented by a
circled letter and the direction of travel is always from left to right in the
diagram.
Thus, four stages (stagecoach runs) were required to travel from his
point of embarkation in state A (Missouri) to his destination in state J
(California). The fortune seeker was a prudent man who was quite
concerned about his safety. After some thought, he came up with a
rather clever way of determining the safest route. Life insurance policies
were offered to stagecoach passengers. Because the cost of the policy
for taking any given stagecoach run was based on a careful evaluation
of the safety of that run, the safest route should be the one with the
cheapest total life insurance policy. The cost for the standard policy on
the stagecoach run from state i to state j, denoted by Cij, is
shown on the arc linking i to j in the figure. Determine the route that
minimizes the total cost of the policy.

[The stagecoach network: start state A; stage 2 states B, C, D; stage 3 states
E, F, G; stage 4 states H, I; destination J. The policy cost Cij is shown on
each arc.]
Figure: The stagecoach problem.

Computational Efficiency of Dynamic Programming


- It is natural to question the value of DP in instances where it is
possible simply to enumerate all the possible paths.
- However, explicit enumeration of all possible paths quickly becomes
impractical in very large networks. This is where the efficiency of DP
comes in handy.
- Further work for students: using a large network of x nodes (x ≥ 27),
demonstrate the computational efficiency of dynamic programming over
manual/explicit enumeration in terms of the number of additions and
comparisons required.

(4) Types of Dynamic Programming
- There are two basic types of DP; Deterministic and Probabilistic
Dynamic Programming.
- Deterministic dynamic programming problems represent the case
where the state at the next stage is completely determined by the state
and policy decision at the current stage.
- Probabilistic dynamic programming differs from deterministic dynamic
programming in that the state at the next stage is not completely
determined by the state and policy decision at the current stage. Rather,
there is a probability distribution for what the next state will be.
However, this probability distribution still is completely determined by
the state and policy decision at the current stage.
- Typical examples and illustrations of these types of dynamic programming
are considered beyond the scope of this course.

(5) Using Dynamic Programming to Solve Problems


- In addition to network problems (shortest path, minimum cost) that we
have seen so far, DP has proven efficiency in solving inventory problems,
resource allocation problems, equipment replacement problems, etc.
- We consider only the case of an inventory problem and the
characteristics that warrant the use of DP to solve such inventory
problems. Students can study further from the texts.

Dynamic Programming can be used to solve an inventory problem with the


following characteristics:
(1) Time is broken up into periods, the present period being period 1, the
next period 2, and the final period T. At the beginning of period 1, the
demand during each period is known.
(2) At the beginning of each period, the firm must determine how many
units should be produced. Production capacity during each period is
limited.
(3) Each period’s demand must be met on time from inventory or current
production. During any period in which production takes place, a
fixed cost of production as well as a variable per-unit cost is incurred.
(4) The firm has limited storage capacity. This is reflected by a limit on
end-of-period inventory. A per-unit holding cost is incurred on each
period's ending inventory.
(5) The firm’s goal is to minimize the total cost of meeting on time the
demands for periods 1, 2,…, T.

In this model, the firm's inventory position is reviewed at the end of each
period (say, at the end of each month), and the model is called a periodic
review model, whereas a continuous review model is one in which the firm
knows its inventory position at all times and may place an order or begin
production at any time (recall the reorder point). Dynamic programming can be
used to determine a production schedule that minimizes the total cost
incurred in inventory problems of this nature. We can verify this on our own.
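As a sketch of how such a problem can be set up as a DP, the Python fragment below treats the entering inventory as the state and recurses over periods (memoized top-down, equivalent to the backward recursion). All the numbers, limits, and names here are illustrative assumptions, not data from the course.

```python
from functools import lru_cache

# Illustrative data (assumed): 4 periods, known demands, limited capacity and storage.
DEMAND = [2, 3, 2, 4]      # units demanded in periods 1..4
CAPACITY = 5               # maximum production per period
STORAGE = 4                # maximum end-of-period inventory
SETUP, UNIT_COST, HOLD = 10.0, 2.0, 1.0

@lru_cache(maxsize=None)
def f(period, inventory):
    """Minimum cost of meeting demand in this and all later periods, entering with `inventory` units."""
    if period == len(DEMAND):
        return 0.0
    best = float("inf")
    for produce in range(CAPACITY + 1):
        ending = inventory + produce - DEMAND[period]
        if ending < 0 or ending > STORAGE:      # demand must be met; storage is limited
            continue
        cost = (SETUP if produce > 0 else 0.0) + UNIT_COST * produce + HOLD * ending
        best = min(best, cost + f(period + 1, ending))
    return best

print(f(0, 0))   # minimum total cost starting period 1 with no inventory
```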
(6) Conclusion:
- Dynamic programming is a useful technique for making a sequence of
interrelated decisions.
- DP requires formulating an appropriate recursive relationship for each
individual problem.
- Nevertheless, DP provides a great computational savings over using
exhaustive enumeration to find the best combination of decisions,
especially for large problems. For example, if a problem has 10 stages
with 10 states and 10 possible decisions at each stage, then exhaustive
enumeration must consider up to 10 billion combinations, whereas
dynamic programming need make no more than a thousand
calculations (10 for each state at each stage).
- So far, we have seen the fundamentals of DP as a necessary step
towards understanding concepts of DP and the development of interest
for further work in this all important subject.

Lecture 5: Reliability Problems
Definition
- Reliability theory is the subject of estimating the distribution of machine
failure times and the distribution of time to failure of a system.
- For very obvious reasons, it is necessary to be able to estimate the
probability that a system of machines will work for a desired amount of time.
- I suppose in OR we should also consider the reliability issues of the models
used to describe/represent real-life problems – further work in this subject.

Distribution of Machine Life


- We assume the length of time (call it X) until failure of a machine is a
continuous random variable having a distribution function F(t) = P(X ≤ t)
and a density function f(t).
- Thus, for small Δt (change in time), the probability that a machine will
fail between time t and t + Δt is approximately f(t)Δt.
- The failure rate of a machine at time t [call it r(t)] is defined to be (1/Δt)
times the probability that the machine will fail between time t and time t
+ Δt, given that the machine has not failed by time t. Thus,

r(t) = (1/Δt) P(t ≤ X ≤ t + Δt | X > t)

     = [Δt f(t)] / [Δt (1 – F(t))]

     = f(t) / (1 – F(t))
- If r(t) is an increasing function of t, then the machine is said to have an
increasing failure rate (IFR). If r(t) is a decreasing function of t, the
machine is said to have a decreasing failure rate (DFR)
- Consider an exponential distribution, which has density f(t) = λe^(–λt)
and distribution function F(t) = 1 – e^(–λt). Then we find that:

r(t) = λe^(–λt) / e^(–λt) = λ

Thus, a machine whose lifetime follows an exponential random variable
has a constant failure rate.

Weibull Random Variable


- This is the most frequently used random variable to model time till
failure of a machine.
- The Weibull random variable has the following density and distribution
functions (we could define the failure density function as, roughly, the
amount of failures per unit time period):
- The density function: f(t) = (α/β)(t/β)^(α–1) e^(–(t/β)^α)
and
The distribution function: F(t) = 1 – e^(–(t/β)^α)
- It can be shown that if α < 1 the Weibull random variable exhibits DFR,
and if α > 1, then the Weibull random variable exhibits IFR.
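A small numeric check of these failure-rate claims (a sketch; the parameter values below are arbitrary illustrative choices, and the function name is an assumption made for this sketch):

```python
def weibull_failure_rate(t, alpha, beta):
    """r(t) = f(t)/(1 - F(t)) for F(t) = 1 - exp(-(t/beta)**alpha); this simplifies to (alpha/beta)*(t/beta)**(alpha-1)."""
    return (alpha / beta) * (t / beta) ** (alpha - 1)

# alpha > 1: failure rate increases with t (IFR)
print([round(weibull_failure_rate(t, alpha=2.0, beta=1.0), 3) for t in (0.5, 1.0, 2.0)])
# alpha < 1: failure rate decreases with t (DFR)
print([round(weibull_failure_rate(t, alpha=0.5, beta=1.0), 3) for t in (0.5, 1.0, 2.0)])
# alpha = 1 reduces to the exponential case with constant rate 1/beta
print([round(weibull_failure_rate(t, alpha=1.0, beta=2.0), 3) for t in (0.5, 1.0, 2.0)])
```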

Common Types of Machine Combinations
Three common types are;
- A series system: functions as long as every machine functions.

- A parallel system: functions as long as at least one machine functions.

- A k out of n system: consists of n machines and is considered
functional/working as long as at least k of the machines are working.
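To make these definitions concrete, the sketch below computes the probability that each type of system works, assuming the machines fail independently and machine i works with probability r[i] over the period of interest. The independence assumption and the function names are additions for illustration, not part of the lecture material.

```python
from itertools import combinations
from math import prod

def series_reliability(r):
    """Series system works only if every machine works."""
    return prod(r)

def parallel_reliability(r):
    """Parallel system works if at least one machine works."""
    return 1 - prod(1 - ri for ri in r)

def k_out_of_n_reliability(r, k):
    """k out of n system works if at least k of the n machines work."""
    n = len(r)
    total = 0.0
    for m in range(k, n + 1):                       # exactly m machines work
        for working in combinations(range(n), m):
            total += prod(r[i] if i in working else 1 - r[i] for i in range(n))
    return total

r = [0.9, 0.8, 0.95]                                # illustrative machine reliabilities
print(series_reliability(r), parallel_reliability(r), k_out_of_n_reliability(r, k=2))
```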

Conclusion
- Reliability modeling, and its understanding/studies, is essential in
today’s use of integrated systems.
- It is very essential to have a picture of, and a programme for, the extent
or measure of dependability or reliability of systems or machines in
order to address the problems that may be associated with machine
down times, which have adverse implications on operations and of
course revenue.
- The study of reliability modeling as part of operations research is
essential owing to the impact that failures of machines can have on the
operations of organizations. Therefore, reliability issues must be
factored in and accounted for in both planning and management of
virtually all operations of an organization.
- We have only dealt with the preliminaries of reliability modeling, so that
students can at least be familiar with the terminology and appreciate
the need to look out for the reliability ratings and warranty provisions of
machines/systems that they may encounter.

Revision
In this course, COS 414, Operations Research II, just concluded, we have
discussed the following topics (some in part, others in detail):
- Network analysis
- Games Theory
- Inventory Problems
- Dynamic Programming, and
- Reliability Problems

- These have been in accordance with what we set out to achieve at the
beginning of the course. We have kept faith with the curriculum of the
course, and it is hoped that the concepts introduced will be useful to all
concerned, both for enterprise-related reasons and for further work/studies,
regardless of the constraints of time and resources.
- For exam purposes, students are to pay particular attention to all the
topics covered, and most especially the first four in the above list.

Assignment – Dynamic Programming Problem

The World Health Council is devoted to improving health care in the
underdeveloped countries of the world. It now has five medical teams available to
allocate among three such countries to improve their medical care, health
education, and training programs. Therefore, the council needs to determine
how many teams (if any) to allocate to each of these countries to maximize
the total effectiveness of the five teams. The teams must be kept intact, so
the number allocated to each country must be an integer. The measure of
performance being used is additional person-years of life. For a particular
country, this measure equals the increased life expectancy in years times the
country's population. Table 1 gives the estimated additional person-years of
life (in multiples of 1,000) for each country for each possible allocation of
medical teams.

Qn: Which allocation maximizes the measure of performance?

Table 1: Thousands of additional person-years of life

Medical Teams | Country 1 | Country 2 | Country 3
      0       |     0     |     0     |     0
      1       |    45     |    20     |    50
      2       |    70     |    45     |    70
      3       |    90     |    75     |    80
      4       |   105     |   110     |   100
      5       |   120     |   150     |   130

Hints
- This problem requires making three interrelated decisions, namely how
many medical teams to allocate to each of the three countries.
Therefore, the three countries can be considered as three stages in a
dynamic programming formulation, and the decision variables
xn (n = 1, 2, 3) are the number of teams to allocate to stage (country) n.
- To determine the states, you need to determine:
- What changes from one stage to the next.
- How to describe the status at the current stage.
- What current state information can be used to determine the optimal
policy, etc.
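For checking a formulation, a minimal sketch of the backward recursion is given below. The state is the number of teams still available to allocate, and the benefit data come from Table 1 above; the function and variable names are choices made for this sketch only.

```python
from functools import lru_cache

# BENEFIT[c][x]: additional person-years (in thousands) from allocating x teams to country c+1 (Table 1).
BENEFIT = [
    [0, 45, 70, 90, 105, 120],    # country 1
    [0, 20, 45, 75, 110, 150],    # country 2
    [0, 50, 70, 80, 100, 130],    # country 3
]
TEAMS = 5

@lru_cache(maxsize=None)
def f(country, remaining):
    """Best value and allocation for countries `country`..3 with `remaining` teams left."""
    if country == len(BENEFIT):
        return 0, ()
    best_value, best_plan = -1, ()
    for x in range(remaining + 1):
        value, plan = f(country + 1, remaining - x)
        value += BENEFIT[country][x]
        if value > best_value:
            best_value, best_plan = value, (x,) + plan
    return best_value, best_plan

print(f(0, TEAMS))   # (total thousands of person-years, allocation to countries 1-3)
```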

- The exam will be in 2 parts: the first will be 5 compulsory questions, which
will test the understanding of the major concepts in the course. The
second part will be three or four questions, of which two are to be answered;
these will deal with the implementation of the concepts discussed in the course.
- Many thanks and best of luck! 22/07/2011.

Worked fragment (backward recursion applied to the stagecoach problem, with f4(H) = 6 and f4(I) = 8):

f3(G) = min{6 + f4(H), 6 + f4(I)} = min{6 + 6, 6 + 8} = 12

f2(B) = min{14 + f3(E), 8 + f3(F), 6 + f3(G)}, where f3(E) = 8, f3(F) = 14, f3(G) = 12
f2(C) = min{6 + f3(E), 4 + f3(F), 8 + f3(G)}
f2(D) = min{8 + f3(E), 6 + f3(F), 12 + f3(G)}, giving f2(B) = 18, f2(C) = 14, f2(D) = 16
f1(A) = min{4 + f2(B), 9 + f2(C), 6 + f2(D)}

At each stage we record ft-1(.) along with the optimal decision for every stage T – 1 state.
