Faculty of Engineering
Industrial Engineering
Operations Research II
Final Project
1.1 Introduction
Integrality conditions stipulate that some, or all, of the decision variables must take integer values. (Eppen, Gould, Schmidt, Moore, Weatherford, 2000:290).
ILP models are LP models with the additional characteristic that one or more decision variables must adopt integer values.
For convenience a pure integer problem is defined to have all integer variables. Otherwise, a problem
is a mixed integer program if it deals with both continuous and integer variables. (Taha, 2007:349).
In real life it is common that, when we solve a problem, we are looking for an integer result. As we know, LP helps us solve linear problems by analyzing a feasible solution space in which we finally find an optimal solution. Suppose we want to determine how many pieces of bread a baker should make in order to maximize profit (or minimize cost); after analyzing the problem, we could find that he should produce 1500.43 pieces of bread. In this case, due to the nature of the product, he could decide to bake 1500 or even 1501 pieces without any problem. This is what we call a rounded solution.
The use of this kind of solution is not acceptable in cases where rounding is meaningful in practice […] In general, the bigger the values of the decision variables in the LP solution, the more likely it is that a rounded solution turns out acceptable in practice. (Eppen, Gould, Schmidt, Moore, Weatherford, 2000:289).
But what if we are talking about critical decisions, for example manufacturing 4.5 airplanes? In this case the decision is not that simple, and the problem is not just rounding up to 5 or down to 4: manufacturing an airplane involves many resources that are usually expensive and difficult to set up, including processes, people, etc. In that case we are looking for an integer solution, which requires another path to find it.
In general, it is much simpler to solve Linear Programming problems than Integer Programming problems. (Hillier, F., Lieberman, G., 1998: 601).
However, when implementing this kind of analysis there is a big disadvantage in ILP algorithms: their lack of consistency. Their performance on the computer suffers from inherent roundoff error, which is often significant in real applications, and this is something we have to keep in mind when working with ILP.
Integer Linear Programs (ILPs) are linear programs with some or all the variables restricted to integer (or discrete) values. […] A drawback of ILP algorithms is their lack of consistency in solving integer problems. Although the algorithms are proven theoretically, their implementation on the computer (with its inherent machine roundoff error) is a different experience. You should keep this point in mind as you study the ILP algorithms. (Taha, 2007:349).
In order to find an integer result using Linear Programming, we must follow the Integer Programming approach: solve the problem first as a normal Linear Programming problem, and then apply the corresponding method to move from the continuous optimum toward an integer solution, adding special constraints until an optimal extreme point satisfies the integer requirements.

There are two useful methods for carrying out step 3 of the strategy quoted below: the Branch-and-Bound (B&B) method and the Cutting-Plane method.
The ILP algorithms are based on exploiting the tremendous computational success of LP. The strategy
of these algorithms involves three steps:
Step 1. Relax the solution space of the ILP by deleting the integer restriction on all integer variables
and replacing any binary variable y with the continuous range 0 <= y <= 1. The result of the relaxation
is a regular LP.
Step 2. Solve the LP, and identify its continuous optimum.
Step 3. Starting from the continuous optimum point, add special constraints that iteratively modify the
LP solution space in a manner that will eventually render an optimum extreme point satisfying the
integer requirement.
Two general methods have been developed for generating the special constraints in step 3. Although neither method is consistently effective computationally, experience shows that the B&B method is far more successful than the cutting-plane method. (Taha, 2007:369-370).
Once we have relaxed the solution space of our Linear Programming model and we realize the solution is not integer, we proceed to use the Branch-and-Bound (B&B) algorithm. In general, the idea of the B&B algorithm is this: once we have found the feasible set of solutions for a given model, we divide it into smaller subsets under the condition that they do not overlap. The B&B procedure then deletes some subsets, reducing the total number of feasible solutions to examine, by making a partial enumeration of the solutions instead of an exhaustive one.
The first B&B Algorithm was developed in 1960 by A. Land and G. Doig for the general mixed and
pure ILP problem. Later, in 1965, E. Balas developed the additive algorithm for solving ILP problems
with pure binary (zero or one) variables. The additive algorithm´s computations were so simple (mainly
addition and subtraction) that it was hailed as a possible breakthrough in the solution of general ILP.
Unfortunately, it failed to produce the desired computational advantages. Moreover, the algorithm,
which initially appeared unrelated to the B&B technique, was shown to be but a special case of the
general Land and Doig algorithm. (Taha, 2007:370).
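Before the worked example, here is a minimal branch-and-bound sketch in Python, leaning on SciPy's LP solver for each relaxation. The small maximization model at the bottom is an illustrative placeholder, not the LP1 model of the example; each recursive call mirrors the strategy above (relax, solve, then branch on a fractional variable or fathom the node).

    # A minimal B&B sketch for a 2-variable ILP (maximization).
    import math
    from scipy.optimize import linprog

    def branch_and_bound(c, A_ub, b_ub, bounds):
        """Maximize c @ x s.t. A_ub @ x <= b_ub, x integer."""
        best = {"z": -math.inf, "x": None}

        def solve(bounds):
            # Solve the LP relaxation over the current bounds.
            res = linprog([-ci for ci in c], A_ub=A_ub, b_ub=b_ub,
                          bounds=bounds, method="highs")
            if not res.success:          # infeasible branch: fathom it
                return
            z = -res.fun
            if z <= best["z"]:           # bound: cannot beat incumbent
                return
            frac = [i for i, v in enumerate(res.x)
                    if abs(v - round(v)) > 1e-6]
            if not frac:                 # all integer: new incumbent
                best["z"], best["x"] = z, [round(v) for v in res.x]
                return
            i, v = frac[0], res.x[frac[0]]
            lo, hi = bounds[i]
            # Branch: x_i <= floor(v)  and  x_i >= ceil(v).
            solve(bounds[:i] + [(lo, math.floor(v))] + bounds[i+1:])
            solve(bounds[:i] + [(math.ceil(v), hi)] + bounds[i+1:])

        solve(list(bounds))
        return best

    # Illustrative data: max 5x1 + 4x2 s.t. 6x1+4x2<=24, x1+2x2<=6.
    print(branch_and_bound([5, 4], [[6, 4], [1, 2]], [24, 6],
                           [(0, None), (0, None)]))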
Step 1. Relaxation: ILP1 becomes LP1. Solve the LP1 model by the simplex method (using TORA).

LP1 Model
Step 2. Branching. After confirming that the relaxation LP1 did not give us an integer solution, we choose one of the two results arbitrarily (x1 or x2) and branch on it toward both its upper and lower integer values.

NOTE. If we have a mixed result in which one of the decision variables already takes an integer value, we cannot choose that variable for branching.
We select x1 = 3.750 and create two branches: ILP2 with x1 ≤ 3, and ILP3 with x1 ≥ 4.
Why is this step valid? Because there are no integer values of x1 in the region deleted when we constrain the decision variable in either direction. Since the values we are deleting satisfy 3 < x1 < 4, and we are looking for an integer, we have not deleted any feasible point of the feasible set of ILP1. (Eppen, Gould, Schmidt, Moore, Weatherford, 2009:305).
We choose one branch to start exploring. We choose the branch x1 ≤ 3 (ILP2) and establish the new model.
LP2 Model
As we do not yet have a pure integer solution, we continue exploring the branches. We stay on the same branch and now choose x2, because x1 already takes an integer value; this creates the subproblems LP4 and LP5.
LP4 Model
We have finished examining this branch (LP4) because we have found a pure integer solution: x1 = 3, x2 = 4, with Z = 23. The branch is completely explored.
LP5 Model
The LP5 model turns out to have no feasible solution, so that node is fathomed as well. The whole branch x1 ≤ 3 is now completely explored, with the integer solution x1 = 3, x2 = 4, Z = 23.
We continue exploring the tree and we proceed to explore branch ILP3.
Solving the relaxation of ILP3 (x1 ≥ 4) gives:

x1 = 4
x2 = 3.667
z = 22.333
In this case we can continue exploring the branch; we choose x2 because x1 is already an integer, and we branch on it: ILP6 with x2 ≤ 3, and ILP7 with x2 ≥ 4.
Solving ILP6, the value of x1 becomes fractional again, so we branch on x1 once more: ILP8 with x1 ≤ 4, and ILP9 with x1 ≥ 5. ILP8 yields the integer solution x1 = 4, x2 = 3, with Z = 19, so that branch is completely explored.
We explore ILP9. As we can see, its value of Z is lower than that of branches which already offer an integer result, so it is not advisable to continue: even though we could probably find a feasible integer result there, it would not be an optimal solution. The node is fathomed by bound.
We explore ILP7.
The procedure can also be done in TORA; we will use the same example.

Step 4. Go to the SOLVE menu, save your project, choose Solve Problem, and go to the output screen.

Step 5. Choose either Automated or User-Guided B&B. The difference between them is the presentation of the results, either direct or branch by branch.

For exploring the nodes manually, it is important to take into account the following chart: according to the color of each node, it indicates whether we can continue exploring, whether the node is fathomed, or whether we have obtained a best solution.

Step 6. Click the green node and select a decision variable to continue exploring.
An office furniture manufacturer produces two kinds of desks: executive and secretarial. The company has two facilities in which it produces the desks. Facility 1 is an old facility that operates a double shift totaling 90 hours per week. Facility 2 is newer and does not operate at full capacity. However, in order to run facility 2 on a double shift like facility 1, the producer has found operators willing to work both shifts. At the moment, each shift at facility 2 runs 27 hours per week. No extra premium is paid to the second-shift workers.

The company has competed successfully in the past, setting a price of $350.00 for the executive desk. However, it seems the company will have to reduce the price of the secretarial desk to $275.00 in order to stay in a competitive position. The company has been experiencing cost overruns in the last eight to ten weeks; that is why the administrators have set a weekly budget constraint on production costs. The weekly budget for the total production of executive desks is $2100.00, while the budget for secretarial desks is $2350.00.

The administrators would like to determine the number of desks of each kind that must be manufactured in each facility in order to maximize profit for the next week.
Exercise 2.1 IntegerProgramming_Desktops2.1
Exercise 2.2 IntegerProgramming_Desktop2.2
Exercise 2.2.1 IntegerProgramming_Desktop2.2.1
Exercise 2.2.1.1 IntegerProgramming_Desktop2.2.1.1
Exercise 2.2.1.1.1 IntegerProgramming_Desktop2.2.1.1.1
Exercise 2.2.1.1.2 IntegerProgramming_Desktop2.2.1.1.2
Exercise 2.2.1.2 IntegerProgramming_Desktop2.2.1.2
Exercise 2.3 IntegerProgramming_Desktop2.3
Exercise 2.4 IntegerProgramming_Desktop2.4
Exercise 2.4.1.4 IntegerProgramming_Desktop2.4.1.4
Exercise 2.4.1.4.1 IntegerProgramming_Desktop2.4.1.4.1
Exercise 2.4.1.4.1.1 IntegerProgramming_Desktop2.4.1.4.1.1
Exercise 2.4.1.4.1.1.0 IntegerProgramming_Desktop2.4.1.4.1.1.0
Exercise 2.4.1.4.1.1.1 IntegerProgramming_Desktop2.4.1.4.1.1.1
Exercise 2.4.1.4.1.1.2 IntegerProgramming_Desktop2.4.1.4.1.1.2
Exercise 2.4.1.4.1.1.2.1 IntegerProgramming_Desktop2.4.1.4.1.1.2.1
Exercise 2.4.1.4.1.1.2.2 IntegerProgramming_Desktop2.4.1.4.1.1.2.2
Exercise 2.4.1.4.1.1.3 IntegerProgramming_Desktop2.4.1.4.1.1.3
Exercise 2.4.1.4.1.2 IntegerProgramming_Desktop2.4.1.4.1.2
Exercise 2.4.1.4.1.2.1 IntegerProgramming_Desktop2.4.1.4.1.2.1
Exercise 2.4.1.4.1.2.1.2 IntegerProgramming_Desktop2.4.1.4.1.2.1.2
Exercise 2.4.1.4.1.2.2 IntegerProgramming_Desktop2.4.1.4.1.2.2
Exercise 2.4.1.4.2 IntegerProgramming_Desktop2.4.1.4.2
Exercise 2.4.2 IntegerProgramming_Desktop2.4.2
Exercise 2.5 IntegerProgramming_Desktop2.5
As a student of engineering, it is interesting to realize that we can face LP problems whose solutions do not satisfy the requirements of our computations. As I wrote in the introduction, continuous solutions sometimes do not give us an answer on which to base a decision. There are several processes which demand integer solutions in order to make a decision, whether the problem concerns the resources to use, the cost to produce, or simply the number of employees we need.
In my opinion this is a creative and mathematically complex method, which departs from the relaxed solution and explores it, cutting the space step by step, until obtaining an optimal solution to the LP problem, once we work within a first feasible solution space.
As a student I realized that sometimes we have problems that can be solved starting from a relaxed solution, and we can then apply other algorithms or methods, such as B&B, to find an optimal solution. We must not give up on finding the required solution of a problem just because we do not find an optimum in the first solution space; instead, we can continue exploring that relaxed space, limiting or restricting it little by little until forcing it to yield a possible optimal solution. And I mean this in general, not only for LP or Operations Research problems.
1.4 Gomory´s Cutting-Plane Algorithm
Ralph Edward Gomory (Born May 7, 1929) is an American applied mathematician and executive.
Gomory worked at IBM as a researcher and later as an executive. During that time, his research led to
the creation of new areas of applied mathematics.
Gomory has written extensively on the nature of technology development, industrial competitiveness,
models of international trade, and the function of the corporation in a globalizing world. (Wikipedia, La enciclopedia libre, 2019).
In practice, the optimization of an ILP usually takes ten times longer, and often hundreds or thousands of times longer, than an LP in which integer constraints are not required. (Eppen, Gould, Schmidt, Moore, Weatherford, 2000:290).
The cutting-plane method is an algorithm that, just like B&B, seeks an integer solution starting from a relaxed LP solution whose first solution does not offer integer values. It is arguably a more complex method than B&B, because it is necessary to relax the ILP model and solve it with the simplex method, and then the construction of the cutting planes demands the dual simplex method to recover feasibility.

The operation of this algorithm consists of creating new constraints (linear equations) which cut the relaxed solution space so as to find an integer solution at a vertex generated by the original constraints and the new feasible constraints.
A cutting plane for an IP problem is a new functional constraint that reduces the feasible region of the relaxed LP without eliminating any feasible solutions of the original IP problem. (Hillier, F., Lieberman, G., 1998: 601).
1. Consider any functional constraint in ≤ form with only nonnegative coefficients.
2. Find a group of variables (called a minimum cover of the constraint) such that:
a) The constraint is violated if every variable in the group equals 1 and all the other variables equal 0.
b) But the constraint is satisfied if the value of any one of these variables is changed from 1 to 0.
3. If N denotes the number of variables in the group, the resulting cutting plane has the form: sum of the variables in the group ≤ N − 1.
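To make the procedure concrete, here is a small hypothetical instance (not taken from the exercises). Consider the binary constraint

5x1 + 4x2 + 3x3 ≤ 8

The group {x1, x2} is a minimum cover: with x1 = x2 = 1 and x3 = 0 the left side is 9 > 8, violating the constraint, while changing either variable to 0 satisfies it. With N = 2, the resulting cutting plane is x1 + x2 ≤ 1.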
1.4.1 Gomory´s Algorithm.
The added cuts do not eliminate any of the original feasible integer points, but must pass through at
least one feasible or infeasible integer point. These are basic requirements of any cut. (Taha, 2007:243).
Gomory's algorithm starts from the relaxed solution space of an LP problem (solved by the simplex method or its alternatives) and builds each cut from a row of the optimal tableau whose right-hand side is fractional:
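As a sketch of the standard construction (the worked exercises develop the tableau details): if the optimal tableau contains a source row

x_i + Σ_j a_ij x_j = b_i, with b_i fractional,

write each coefficient as an integer part plus a fractional part, a_ij = ⌊a_ij⌋ + f_ij and b_i = ⌊b_i⌋ + f_i, with 0 ≤ f_ij < 1 and 0 < f_i < 1. The Gomory fractional cut is then

Σ_j f_ij x_j ≥ f_i

which cuts off the current fractional optimum without removing any integer point.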
Let's see the general explanation of the algorithm through Exercise 1, as follows:
Exercise 1. CuttingPlane_1
Exercise 2. CuttingPlane_2
The quantity of two foods that a patient can consume has been restricted. According to the physician's prescription, these are the nutritional requirements per day: 1000 units of nutrient A, 2000 of nutrient B, and 1500 of nutrient C. There are two available food sources, F1 and F2. Every oz. of source F1 contains 100 units of A, 400 of B, and 200 of C. Every oz. of source F2 contains 200 of A, 250 of B, and 200 of C. The food sources cost $6.00 and $8.00 per oz., respectively.
Exercise 3. CuttingPlane_3
A company faces the problem of determining which projects to invest in during the next 4 years. The company has a limited annual budget for investments. There are 4 available projects, characterized by their estimated present value and their required annual capital costs, as shown. The acquisition of new machinery can only happen if the plant expansion happens, and they wish to invest looking for new products.
1.4.2 Cutting Plane Algorithms Conclusion
Ironically, the first algorithms developed for ILP, including the celebrated algorithm published by Ralph Gomory in 1958, were based on cutting planes (generated in a different way), but this approach turned out not to be satisfactory in practice (except for some special problems). However, these algorithms relied only on cutting planes. (Hillier, F., Lieberman, G., 1998: 601).
It is important to remark that most of these algorithms were not created to run on a computer, because the study of the algorithms started before such powerful machines existed. According to this view, Gomory's original algorithm is becoming obsolete because of the discovery of new cutting-plane techniques. Something interesting is that Professor Gomory is still alive and made important contributions to the administrative and engineering sciences.
Knowing how to handle cutting-plane methods, just like B&B, is important for an engineer, as I said, because during an engineering career we will face situations where the answer to our LP problems must be yes or no, or an integer solution. So it is important not to discard the main space of feasible solutions of an LP; instead, we can apply B&B and cutting methods in order to explore this solution space and obtain an optimal solution.
Something really important for me, from my experience working with engineering students, is that they are normally focused on the mathematical computations and do not take into account all the other variables involved in a decision analysis. This is a serious issue, because rigid, closed thinking blocks optimal solutions that may be hidden in the main solution space; we must find the way to reach them.
A new era has apparently arisen in the solution methodology of ILP, with a series of articles published since 1980. The new algorithmic approach involves combining automated problem preprocessing, the generation of cutting planes, and B&B techniques. There have recently been several investigations aiming to develop algorithms (even heuristics) for Integer Non-Linear Programming, and this area of research remains very active. (Hillier, F., Lieberman, G., 1998: 630).
2. Network Models (Graph Models)
In real life we can see how efficient the representation of an LP model as a network is. We can see transport, electrical, and distribution networks, among other important network models. Network models are quite useful when the problem must be solved through different stages.

A network is a useful tool for modeling complex LP problems using different algorithms, such as the shortest route, minimal spanning tree, or maximum flow algorithms.
Searching a graph means systematically following the edges of the graph so as to visit the vertices of the graph. A graph searching algorithm can discover much about the structure of a graph. Many algorithms begin by searching their input graph to obtain this structural information. Several other graph algorithms elaborate on basic graph searching. Techniques for searching a graph lie at the heart of the field of graph algorithms. (Cormen, 2009: 589).
Network problems arise in different situations. Transport, electrical, and communication networks predominate in daily life. Network representation is widely used in different areas such as production, project planning, facility location, resource administration, distribution, and financial planning, among others. A network representation provides a general and powerful panorama and is a conceptual aid for visualizing the relationships between the components of systems, used in almost all scientific, social, and economic areas. (Eppen, Gould, Schmidt, Moore, Weatherford, 2000:405).
In order to work with network models, let's define an adequate network terminology. It is important to note that even though this terminology is almost standardized, the nodes or the arcs may represent either a cost or an action, depending on the algorithm we are using.¹
[Figure: network terminology — a node (also called a vertex); the source node, often represented by an S instead of the number 1; and the cost of moving between two nodes (e.g., from node 2 to node 4, or vice versa).]
¹ An arc may also be called an “edge”, and a route a “path”, depending on the author.
[Figure: a directed arc — the flow direction is determined (forward).]
Consider a non-directed network with two special nodes called source and destination. A non-negative distance is associated with every link (non-directed arc). The target is to find the shortest path (the trajectory with the minimal total distance) from source to destination. (Hillier, F., Lieberman, G., 1998:411).
As its name indicates, the Shortest Route Problem aims to find the minimal possible cost, time, or resource investment from the source node to one of the other nodes, in order to optimize a process. This is most usually a transportation algorithm, and it is very useful for transportation logistics planning. Let's explain two useful algorithms: Dijkstra's and Floyd's.
The shortest-route problem determines the shortest route between a source and a destination in a transportation network. Other situations can be represented by the same model. (Taha, 2007:243).
Target of the nth iteration: to find the nth closest node to the source (for n = 1, 2, …, until the nth closest node is the destination node).

Data for the nth iteration: the n − 1 closest nodes to the origin (found in the previous iterations), including their shortest path and distance from the origin. (These nodes, plus the origin, are called solved nodes; the rest are unsolved nodes.)

Candidates for the nth closest node: each solved node that is directly connected by an arc to one or more unsolved nodes provides one candidate, namely the unsolved node with the shortest connecting arc or link. (Ties provide additional candidates.)

Computation of the nth closest node: for each solved node and its candidate, add the distance between them to the distance of the (previous) shortest path from the origin to this solved node. The candidate with the smallest total distance is the nth closest node (ties provide additional solved nodes), and its shortest path is the one generating this distance.
A) Dijkstra´s Algorithm
Dijkstra’s algorithm is designed to determine the shortest routes between the source node and every
other node in the network. (Taha: 2005, 248)
[Figure: example network with nodes A, B, C, D and E; the arcs are labeled with the lengths 10, 15, 20, 30, 50, 60 and 100.]

i = A, B, C, D & E
Dijkstra's Algorithm:

Starting node: A.

i = A, Tag = [0, −]

This is because the distance necessary to reach the starting node is u_A = 0, and there is no predecessor node from which we reach node A.
Step 1. Establish a chart indicating the (i, j) possibilities and start iterating according to the algorithm.

Iteration 1. As we can see in the network, node A connects with both nodes B and C, so let's compute according to the algorithm, mark as permanent the tag with the minimum value, and continue iterating. We can see that node C gives the smallest computed value, so we mark it as permanent and leave the tag of node B as temporary.
Iteration 2. We can realize that from Node C we can reach Nodes D and E, so let´s compute and
select the temporary and the permanent Node(s).
i/j: A, B, C, D, E
A: [u_A, −] = [0, −] (permanent); B, C, D, E: not yet tagged.
After applying the algorithm, we proceed to find the nodes reachable from our permanent node.
i/j: A, B, C, D, E
A: [u_A, −] = [0, −]   B: [u_B, D] = [55, D]   C: [u_C, A] = [30, A]   D: [u_D, C] = [40, C]   E: [u_E, D] = [90, D]
Iteration 3. According to the Network, from node D we can reach Nodes B and E, let´s compute
according to the algorithm.
We can see that in this case we are ready to choose the shortest path for nodes B and E, having a tie for node E, which is reachable from both nodes C and D; in this case we can select whichever we want.

i/j: A, B, C, D, E
A: [u_A, −] = [0, −]   B: [u_B, D] = [55, D]   C: [u_C, A] = [30, A]   D: [u_D, C] = [40, C]   E: [u_E, D] = [90, D] or [u_E, C] = [90, C]
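The same computations can be scripted. The sketch below is a minimal Dijkstra implementation in Python; since the original figure is only partially recoverable, the arc list is an assumption chosen to reproduce the tags computed above ([0, −], [55, D], [30, A], [40, C], [90, D]).

    import heapq

    graph = {                      # undirected: each arc listed both ways
        "A": {"B": 100, "C": 30},
        "B": {"A": 100, "D": 15},
        "C": {"A": 30, "D": 10, "E": 60},
        "D": {"B": 15, "C": 10, "E": 50},
        "E": {"C": 60, "D": 50},
    }

    def dijkstra(source):
        dist = {source: 0}                 # u_i values (permanent tags)
        pred = {source: "-"}               # predecessor in the tag [u_i, pred]
        heap = [(0, source)]               # temporary tags ordered by u_i
        while heap:
            d, i = heapq.heappop(heap)
            if d > dist.get(i, float("inf")):
                continue                   # stale temporary tag: skip
            for j, w in graph[i].items():  # try to improve each neighbor
                if d + w < dist.get(j, float("inf")):
                    dist[j], pred[j] = d + w, i
                    heapq.heappush(heap, (d + w, j))
        return dist, pred

    dist, pred = dijkstra("A")
    for node in "ABCDE":                   # prints e.g. E: [90, D]
        print(f"{node}: [{dist[node]}, {pred[node]}]")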
A limitation I found is that, even though this algorithm is useful, it only yields the shortest distance or cost from the source node to the other nodes, and not the shortest route between every pair of nodes, which is useful but not always enough.
B) Floyd´s Algorithm
Floyd´s Algorithm is more general because it allows the determination of the shortest route between any two nodes
in the network. (Taha, 2007:248).
Floyd's algorithm represents a network of n nodes through a square matrix with n rows and n columns. The entry (i, j) of the matrix gives the distance d_ij from node i to node j, which is finite if i is directly linked to j, and infinite otherwise.
Given 3 nodes i, j and k with their respective connection distances, it is shorter to reach j from i passing through k if:

d_ik + d_kj < d_ij

Given that condition, it is better to replace the direct path i → j with the indirect path i → k → j.
Floyd’s Algorithm Computations
Step 0. Define the initial distance matrix D0 and node sequence matrix S0. All the elements on the diagonal are blocked. Set k = 1.
Matrix D0 =

        1      2      …      j      …      n
  1     X     d12     …     d1j     …     d1n
  2    d21     X      …     d2j     …     d2n
  …     …      …      X      …      …      …
  i    di1    di2     …     dij     …     din
  …     …      …      …      …      X      …
  n    dn1    dn2     …     dnj     …      X

Matrix S0 =

        1      2      …      j      …      n
  1     X      2      …      j      …      n
  2     1      X      …      j      …      n
  …     …      …      X      …      …      …
  i     1      2      …      j      …      n
  …     …      …      …      …      X      …
  n     1      2      …      j      …      X
General step k. Define row k and column k as the pivot row and pivot column. Apply the triple operation to every element d_ij in D_{k−1}, for all i and j: if d_ik + d_kj < d_ij, replace d_ij with d_ik + d_kj and set s_ij = k. After n steps, the shortest route from node i to node j is recovered as follows:

a) From D_n, d_ij gives the shortest distance between nodes i and j.
b) From S_n, determine the intermediate node k = s_ij, which provides the path i → k → j. If s_ik = k and s_kj = j, stop; all the intermediate nodes of the path have been found. Otherwise, repeat the procedure between nodes i and k, and between k and j.
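A minimal sketch of these computations in Python follows. It uses the common “next node” variant of the S matrix (s[i][j] holds the node that follows i on the route to j), and the 4-node distance matrix is an illustrative placeholder, not one of the exercises.

    INF = float("inf")

    d = [[0, 3, 10, INF],      # d[i][j]: direct distance from i to j
         [3, 0, INF, 5],
         [10, INF, 0, 6],
         [INF, 5, 6, 0]]
    n = len(d)
    s = [[j for j in range(n)] for _ in range(n)]   # S0: s[i][j] = j

    for k in range(n):                 # general step k: pivot row/column k
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:     # triple operation
                    d[i][j] = d[i][k] + d[k][j]
                    s[i][j] = s[i][k]               # route now passes via k

    def route(i, j):
        """Recover the node sequence i -> ... -> j from S."""
        path = [i]
        while i != j:
            i = s[i][j]
            path.append(i)
        return path

    print(d[0][3], route(0, 3))        # shortest distance and route 0 -> 3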
Floyd's algorithm is even more useful than Dijkstra's because it offers the shortest path between any pair of nodes, which is much more interesting than knowing only the shortest route between a source and some node n. The power of Floyd's algorithm is very useful for logistics, because transport is a very important subject in the supply chain, not only for distribution but also for provisioning. Every distributor and provider must economize distance and resources in order to optimize the returns of its business and compete in a fierce market.
As a travel lover, I found it interesting how I can plan my own routes to optimize my time and my money.
2.3 Minimal Spanning Tree Problem
Because a tree is a type of graph, in order to be precise, we must define a tree in terms of not just edges,
but its vertices as well. (Cormen, 2009: 359).
As its name indicates, a minimal spanning tree links all the nodes of a network while minimizing the total length of the branches or links used.

The Minimal Spanning Tree Problem is similar to the main version of the shortest path problem. Both graphs are non-directed networks linked through arcs or edges which represent a distance, a cost, a time, etc.
As in the shortest route problem, the result of this algorithm is the set of links whose total length is minimal.
For the shortest path problem, this property is that the selected links must provide a trajectory between the source and the destination. For the minimal spanning tree, the required property is that the selected links must provide a trajectory between every pair of nodes. (Hillier, F., Lieberman, G., 1998:416).
1. We have the nodes of a network but not the links between them. Instead, the potential links are given, each with the positive length it would have if selected.
2. We need to design the network by inserting enough links to satisfy the requirement that there be a path between every pair of nodes.
3. The target is to satisfy this requirement while minimizing the total length of the links inserted in the network.
The problem of the minimal spanning tree can be solved in a fairly direct way, as it happens to be one of the few OR algorithms in which we can be greedy at every stage of the solution procedure and still arrive, at the end, at an optimal solution! (Hillier, F., Lieberman, G., 1998:417).

Even though with this procedure it could seem, at first sight, that the choice of the initial node would affect the result, it is not actually so. (Hillier, F., Lieberman, G., 1998:417).
² According to: (De Cos, H. 2020, slide 12)
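As a sketch of this greedy procedure, here is a minimal Prim-style implementation in Python; the weighted edges are illustrative placeholders. At every stage it greedily adds the shortest link that connects a new node which, as the quotes say, still yields an optimal tree regardless of the initial node.

    import heapq

    graph = {                  # potential links and their positive lengths
        1: {2: 4, 3: 3},
        2: {1: 4, 3: 5, 4: 6},
        3: {1: 3, 2: 5, 4: 11, 5: 8},
        4: {2: 6, 3: 11, 5: 2},
        5: {3: 8, 4: 2},
    }

    def prim(start):
        visited = {start}
        # candidate links out of the connected set, ordered by length
        heap = [(w, start, j) for j, w in graph[start].items()]
        heapq.heapify(heap)
        tree, total = [], 0
        while heap and len(visited) < len(graph):
            w, i, j = heapq.heappop(heap)
            if j in visited:
                continue                 # would close a cycle: skip it
            visited.add(j)               # greedy: shortest feasible link
            tree.append((i, j, w))
            total += w
            for k, wk in graph[j].items():
                if k not in visited:
                    heapq.heappush(heap, (wk, j, k))
        return tree, total

    tree, total = prim(1)
    print(tree, "total length:", total)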
Exercise 1. MinimalST_1
Exercise 2. MinimalST_2
Exercise 3. MinimalST_3
2.3.3 Minimal Spanning Tree Conclusion
The minimal spanning tree is a fun algorithm which is very useful and easy to compute. The simplicity of this algorithm makes it very interesting, because it made me realize that an algorithm does not always have to be a difficult and tiring activity.
I found this algorithm very useful for logistics and delivery, especially when we have a specific route to cover during a trip. I thought about a DHL deliverer who has orders to fulfill during a working day; through this simple algorithm the company could save time, money, and effort.
Finally, during the research for this algorithm I found its range of applications very interesting: from telecommunications network design, through transport networks, high-voltage electrical transmission networks, and the design of minimal wiring for electrical devices, to the design of pipelines connecting different cities, etc.
2.4 Maximum Flow Problem
This model, along with the shortest path model, is interesting by itself. It is also used as a submodel for solving other, more complex problems. For these reasons, and for its theoretical foundations, we sometimes say that these two models are of capital importance for network theory. (Eppen, Gould, Schmidt, Moore, Weatherford, 2000:248).
The Ford-Fulkerson algorithm for maximum flow was developed by Delbert Ray Fulkerson (Illinois, 1924-1976) and Lester Randolph Ford Jr. (Houston, Tx., 1917-2017) and published in 1956 in the USA. The algorithm is closely tied to the max-flow min-cut theorem.
The theorem relates two quantities: the maximum flow through a network, and the minimum weight of a cut of the network. To state the theorem, each of these quantities must first be defined. Let N = (V, E) be a directed graph, where V denotes the set of vertices and E the set of edges. Let s ∈ V and t ∈ V be the source and the sink of N, respectively. The capacity of an edge is a mapping c : E → R+, denoted by c_uv or c(u, v), where u, v ∈ V. It represents the maximum amount of flow that can pass through an edge.
Maximum Flow Problem: What is the highest rate at which material can be transported from the source
to the sink without violating any capacity constraint?
As in Kirchhoff's current law, the sum of the incoming flows into a vertex must equal the sum of the outgoing flows from that vertex. (De Cos, 2020: slide 30).
In the maximum flow model there is only one source node and only one sink node. The goal is to find the maximum total amount (of money, oil, water, etc.) that can circulate through the network (from the source to the sink) per unit of time. The amount of flow per unit of time in every arc is limited by a capacity constraint. […] The flow capacity of the nodes is not specified. The only requirement is to satisfy the following equation for every node (except the source and the sink):

max f

Σ_j x_ij − Σ_j x_ji =  f, if i = 1;  −f, if i = n;  0, otherwise.
We can see that:

1. The variables x_ij denote the flow per unit of time through the arc (i, j) that links node i with node j.
2. Consider the i-th constraint, for a fixed value of i. The sum Σ_j x_ij runs over all j for which the arc (i, j), with i fixed, belongs to the network; it is the total outgoing flow from node i. Likewise, the sum Σ_j x_ji runs over all j for which the arc (j, i), with i fixed, belongs to the network; it is the total incoming flow to node i.
3. The symbol f is a variable denoting the total flow per unit of time passing through the network.
4. u_ij denotes the capacity of the flow per unit of time through each arc.
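A minimal sketch of the Ford-Fulkerson idea in Python follows, using breadth-first search for the augmenting paths (the Edmonds-Karp variant). The capacity matrix is an illustrative placeholder; the function mutates it into residual capacities as it runs.

    from collections import deque

    def max_flow(cap, s, t):
        n = len(cap)
        flow = 0
        while True:
            # BFS for an augmenting path in the residual network
            pred = [-1] * n
            pred[s] = s
            q = deque([s])
            while q and pred[t] == -1:
                u = q.popleft()
                for v in range(n):
                    if pred[v] == -1 and cap[u][v] > 0:
                        pred[v] = u
                        q.append(v)
            if pred[t] == -1:
                return flow             # no augmenting path: optimal
            # bottleneck = smallest residual capacity along the path
            v, bottleneck = t, float("inf")
            while v != s:
                u = pred[v]
                bottleneck = min(bottleneck, cap[u][v])
                v = u
            # push the bottleneck flow, updating residual capacities
            v = t
            while v != s:
                u = pred[v]
                cap[u][v] -= bottleneck
                cap[v][u] += bottleneck  # allow flow cancellation
                v = u
            flow += bottleneck

    # node 0 = source, node 3 = sink
    cap = [[0, 3, 2, 0],
           [0, 0, 1, 2],
           [0, 0, 0, 3],
           [0, 0, 0, 0]]
    print(max_flow(cap, 0, 3))           # 5 for this small network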
As a student of engineering, I consider very important the ability to optimize resources by seeking the fastest and cheapest way to get something. Studying this kind of algorithm opens our panorama to visualize an integral way to obtain less effort and more gains.
As the theory says, Shortest Path, Minimal Spanning Tree, and Maximum (or Maximal) Flow are important algorithms which, together with Dynamic Programming, can help us solve more complex problems.
In engineering I can visualize the application of these algorithms when planning the physical part of a company, in areas such as electricity, wire network design, communications, and manufacturing, in order to get the largest possible flow from a source to a sink. But we can also see the application of maximum flow in administrative areas such as logistics, project management, or research and development.
As an artist, this kind of algorithm helps me grow as a producer and a creative: as a producer because I can find a way to save money and effort, and as an artist because it opens my mind to thinking, in a concrete manner, about how I can organize my artistic and aesthetic resources.
2.5 CPM & PERT
The task of managing such big projects is an ancient and honorable art. Around 2600 B.C. the Egyptians built the Great Pyramid for King Cheops. The Greek historian Herodotus said that 400,000 men worked for 20 years to build this structure. Even though today those figures are in question, there is no doubt about the enormity of the project. The book of Genesis tells us that the Tower of Babel was not finished because God made it impossible for the builders to communicate with each other. This project has a special importance, because it establishes a historic precedent for the popular habit of blaming divine intervention for failures. (Eppen, Gould, Schmidt, Moore, Weatherford, 2000:658).
Critical Path Method (CPM) and Program Evaluation and Review Technique (PERT) are two useful network model methods that help administrators and engineers plan, schedule, and control projects. These methods operate within the knowledge area of time (time management and administration). Both are useful tools providing analytic means for scheduling activities.
A project is defined as a collection of interrelated activities, with each activity consuming time and resources. (Taha, 2007: 276).
The basic operation of both methods is to define the activities of the project and their precedence relationships, which are represented by a network; after developing these relationships, we run specific computations in order to develop a schedule for the project.
The two techniques, CPM and PERT, which were developed independently, differ in that CPM assumes
deterministic activity durations and PERT assumes probabilistic durations. (Taha, 2007: 276).
2.5.1 Network Model Representation
According to Taha (2007): “The nodes of the network establish the precedence relationship
among the different activities”
Rule 3. To maintain the correct precedence relationships, the following questions must be answered as each activity is added to the network:

(a) What activities must immediately precede the current activity?
(b) What activities must follow the current activity?
(c) What activities must occur concurrently with the current activity?

The answers to these questions may require the use of dummy activities to ensure correct precedence among the activities.
A dummy activity is false in the sense that it does not require time or resources. It only provides a device that lets us draw a network representation which correctly preserves the precedence relationships among the activities.
Example:
Activity   Predecessor   Time (days)
Start      –             0
A          Start         2
B          Start         5
C          A             4
D          B, C          6
E          D             3
F          E             8
G          E             10
End        F, G          0
2.5.2 Gantt Chart
Gantt Chart or Gantt Diagram is a useful tool used in Project Administration and Industrial
Engineering. Because of its simplicity, it is a convenient way to visualize the Critical and Non-
Critical activities belonging to a project. In general terms the Chart is like the first quadrant of a
two variables Cartesian Plane where we find in the horizontal axis the duration of the activities and
in the vertical axis every single activity of a project. In spite of its utility we have to take account
its simplicity, Gantt Chart is restricted to tell us the chronological order of a series of activities but
in case of a delay or a problem, we cannot find anything but general information in the chart.
The Gantt graph was developed by Henry L. Gantt in 1918, and it continues to be a popular tool in production and project scheduling. Its simplicity and clear graphic display have established it as a useful device for simple scheduling problems. […] Every activity is on the vertical axis. The horizontal axis is time, and the anticipated duration of every activity, as well as the real duration, is represented by a bar of the corresponding length. The graph also indicates the earliest start of every single activity. […] The Gantt chart fails to reveal certain important information, such as the immediate predecessors of each activity. […] The general weakness of Gantt charts is reflected in their inability to support inferences about precedence. (Eppen, Gould, Schmidt, Moore, Weatherford, 2007:660-661).
CPM was developed in 1957 by J.E. Kelly from Remington Rand and M.R. Walker from DuPont. It
differs from PERT basically in details about how they treat time and costs. (Eppen, Gould, Schmidt,
Moore, Weatherford, 2007:659).
The Critical Path Method (CPM) estimates the longest duration path in a network diagram, which determines the shortest time for completing the project, and determines the flexibility of the schedule through the slacks.

This method is used when there are no resource limitations, and it is characterized by a total slack of zero along the critical path.
[Node notation: each activity is drawn with its name, its duration, and its slack.]

A Free Slack³ is the time a task can be delayed without delaying the earliest start of any succeeding activity. A Total Slack⁴ is the time a task can be delayed without impacting the duration of the project.

³ Free Slack is also known as Free Float (FF).
⁴ Total Slack is also known as Total Float (TF).
2.6.1 CPM Algorithm and Critical Path Computations
The end result of CPM is the establishment of the time schedule for the project. To achieve this objective conveniently, special computations are carried out, producing the following information:
An activity is said to be critical if there is no “leeway” in determining its start and finish times. A noncritical activity allows some scheduling slack, so that the start time of the activity can be advanced or delayed within limits without affecting the completion date of the entire project.

To carry out the necessary computations, we define an event as a point in time at which activities are terminated and others are started. In terms of the network, an event corresponds to a node. (Taha, 2007:282).
The following nomenclature is used in the computations; even though there are other ways to represent the method, the algorithm is always the same.
The CPM computations consist of two passes: the forward pass determines the earliest occurrence times of the events, and the backward pass calculates their latest occurrence times.

Forward Pass (Earliest Occurrence Times, □). The computations start at node 1 and advance recursively to end node n.

Initial step. Set □_1 = 0 to indicate that the project starts at time 0.

General step j. Given that nodes p, q, …, and v are linked directly to node j by incoming activities (p, j), (q, j), …, and (v, j), and that the earliest occurrence times of events (nodes) p, q, …, and v have already been computed, the earliest occurrence time of event j is computed as:

□_j = max { □_p + D_pj, □_q + D_qj, …, □_v + D_vj }

The forward pass is complete when □_n at node n has been computed. By definition, □_j represents the longest path (duration) to node j.

Backward Pass (Latest Occurrence Times, Δ). Following the completion of the forward pass, the backward pass computations start at node n and end at node 1.

Initial step. Set Δ_n = □_n to indicate that the earliest and latest occurrences of the last node of the project are the same.

General step j. Given that nodes p, q, …, and v are linked directly to node j by outgoing activities (j, p), (j, q), …, and (j, v), and that the latest occurrence times of nodes p, q, …, and v have already been computed, the latest occurrence time of event j is computed as:

Δ_j = min { Δ_p − D_jp, Δ_q − D_jq, …, Δ_v − D_jv }

The backward pass is complete when Δ_1 at node 1 is computed. At this point Δ_1 = □_1 (= 0).
Based on the preceding calculations, an activity (i, j) will be critical if it satisfies three conditions:

1. Δ_i = □_i
2. Δ_j = □_j
3. Δ_j − Δ_i = □_j − □_i = D_ij

The three conditions state that the earliest and latest occurrence times of the end nodes i and j are equal, and that the duration D_ij fits “tightly” in the specified time span. An activity that does not satisfy all three conditions is noncritical.
By definition, the critical activities of a network must constitute an uninterrupted path that spans
the entire network from start to finish.
2.6.1.1 General Procedure for CPM
Step 1. Establish the duration and the precedence of the activities of the project. This could be done by a specialist, though it is more likely that these activities are already established.
Step 2. Determine and draw the corresponding network model according to the precedence chart.
Step 3. Apply the CPM algorithm to find the earliest occurrence times; that is, apply the forward pass.
Step 4. Once the forward pass is finished, apply the backward pass in order to find the latest occurrence times.
Step 5. Compute the slacks and identify the critical activities.
Step 6. Find and draw the critical path in the network model diagram. (A code sketch of steps 3-5 follows.)
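Here is a minimal Python sketch of the forward and backward passes, applied to the example activity table given earlier (Start, A-G, End). It works with activity start times rather than event nodes, which is an equivalent activity-on-node formulation of the same computations.

    activities = {   # activity: (duration in days, immediate predecessors)
        "Start": (0, []),
        "A": (2, ["Start"]), "B": (5, ["Start"]), "C": (4, ["A"]),
        "D": (6, ["B", "C"]), "E": (3, ["D"]),
        "F": (8, ["E"]), "G": (10, ["E"]), "End": (0, ["F", "G"]),
    }
    order = list(activities)              # listed in precedence order

    earliest = {}                         # forward pass: earliest starts
    for a in order:
        dur, preds = activities[a]
        earliest[a] = max((earliest[p] + activities[p][0] for p in preds),
                          default=0)

    project = earliest["End"]             # minimum project duration: 25
    succs = {a: [] for a in order}        # invert the precedence lists
    for a in order:
        for p in activities[a][1]:
            succs[p].append(a)

    latest = {}                           # backward pass: latest starts
    for a in reversed(order):
        latest[a] = min((latest[s] for s in succs[a]),
                        default=project) - activities[a][0]

    for a in order:                       # total slack; critical if zero
        slack = latest[a] - earliest[a]
        print(a, "slack =", slack, "(critical)" if slack == 0 else "")

For this table the critical path is Start-A-C-D-E-G-End, with a project duration of 25 days; B and F carry slacks of 1 and 2 days.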
The following table gives the activities for buying a new car. Construct the project network:
Network Model
Exercise 2. CPM_2
Network Model
Exercise 3. CPM_3
A company is in the process of preparing a budget for launching a new product. The following
table provides the associated activities and their durations. Construct the project network.
Network Model
Exercise 4. CPM_4
2.6 CPM Conclusion
In conclusion, I found the knowledge of the CPM method very useful because of its «simplicity». First of all, the selection of the appropriate activities and their classification into critical and non-critical gives us a panoramic view of how important the completion of each activity is and how fast we must do it. On the other hand, we can see how much slack we can permit ourselves and still finish the project in a timely manner.
I think the study of the Project Management area is very important for industrial engineering, and besides all the other tools we have already learned, it is valuable to know how to start managing a project. In the area where I work, which is performing arts, I found this method very useful, because we normally start projects (staging and production of theater plays) with a planning table, always analyzing the different creative tools but not paying enough attention to administrative and management details, such as the time we should establish for every single activity and the way we must keep to the schedule.
Actors, musicians, painters, and other performers normally do not receive management tools for developing projects, which makes optimizing the (already scarce) resources very difficult. That is why I feel I will apply this and other Operations Research methods, like PERT, in the planning of my next theater play staging and production, in order to save money and make the most of the resources we have, which as I said are normally very limited.
About the algorithm, I think that even though there are different programs that can help us compute the results of our Operations Research, a person who manages the project is always necessary; I mean, there must be a human who establishes both the durations of the activities (either through statistics or through any other analysis) and the network, according to the corresponding precedence.
As an engineer it is very important to think about the precedence of activities. In Latin America we are often used to leaving tasks until the last minute, which creates a lot of problems both in everyday life and in industry and leads to a crunch at some determined point. The goal of Operations Research is to find organized methods to save money and time and to optimize every sort of resource.
Finally, I have found the Gantt chart very useful because of its simplicity of interpretation and its visual utility as a graph. If I had known this method before, I think I could have saved a lot of time in past projects.
2.7 PERT
PERT was developed at the end of the 1950s by the Navy Special Projects Office, in cooperation with the management consulting firm of Booz, Allen and Hamilton. The technique received substantial favorable publicity due to its use in the engineering and development program of the Polaris missile, a complicated project which involved 250 contractors and more than 9,000 subcontractors. Since then it has been widely adopted in other branches of government and industry and applied to very different projects, such as the construction of facilities, buildings, and freeways, research administration, product development, the installation of new information systems, and so on. Nowadays, many companies and government bureaus demand the use of PERT from their contractors […] To send an American to the moon during the era of the Apollo project, the US Air Force used PERT and delivered its part of the project six weeks ahead of schedule. It included more than 32,000 events and hundreds of thousands of activities, but only a few hundred needed constant review. (Eppen, Gould, Schmidt, Moore, Weatherford, 2000:659).
The difference between CPM and PERT is that the latter assumes probabilistic duration times based on three estimates: the Optimistic Time (a), the Most Likely Time (m), and the Pessimistic Time (b). Each name indicates a quality of the time estimate; the three are averaged in order to find an expected time. Let's see this information according to Taha (2007):
The range (a, b) encloses all possible estimates of the duration of an activity. The estimate m lies somewhere in the range (a, b). Based on the estimates, the average duration time, t̄e, and variance, σ², are approximated as:

t̄e = (a + 4m + b) / 6

σ² = ((b − a) / 6)²
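As a quick numeric check of these formulas in Python, take the first activity of the exercise table given later (a = 3, m = 5, b = 7); the second print uses a hypothetical path with mean 20 and variance 4, only to illustrate the probability computation described below.

    from statistics import NormalDist

    a, m, b = 3, 5, 7                 # activity A of the exercise table
    te = (a + 4 * m + b) / 6          # expected duration: 5.0
    var = ((b - a) / 6) ** 2          # variance: 0.444...
    print(te, var)

    # hypothetical path: mean 20, variance 4 -> sigma = 2
    print(NormalDist(20, 4 ** 0.5).cdf(23))   # P(path finishes by S = 23)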
2.7.1 PERT Algorithm⁵

Suppose that all the activities in the network are statistically independent, and first compute the average t̄e and the variance σ² of each activity. If there is only one path from the starting node to node j, then the average duration of the path is the sum of the expected durations t̄e of all the activities along this path, and its variance is the sum of the variances σ² of the same activities.

If more than one path leads to node j, it is necessary to select the path with the longest average duration to node j. If two or more paths have the same average, we select the path with the biggest variance, because it reflects the maximum uncertainty and therefore leads to a more conservative estimation of the probabilities.
The fact that the activity times are random variables implies that the completion time of the project is also a random variable; there is potential variability in the overall completion time. (Eppen, Gould, Schmidt, Moore, Weatherford, 2000:676).
Given the average and variance of the path to node j, t̄e{e_j} and σ²{e_j}, the probability that node j occurs by time S_j is approximated through the standard normal distribution z:

z = (S_j − t̄e{e_j}) / σ{e_j}
⁵ If we want to work PERT as CPM, we find the estimated times (te), use them as the absolute durations (D), and work CPM as usual. (De Cos, 2020).
2.7.2 General Procedure for PERT
Step 1. Establish the duration and the precedence of the activities of the project; this could be done by a specialist, though it is more likely that these activities are already established.
Step 2. Determine and draw the corresponding network model according to the precedence chart.
Step 3. Compute the estimated time te according to the formula given by the algorithm.
Step 4. According to the te, find the Longest Path in the Network Model and record the path.
Step 5. Find and Draw the Critical Path in the Network Model Diagram.
Step 2 (probability analysis). According to the longest path record, compute the standard deviation of the path to each node, which is the square root of the sum of the variances involved in reaching that node.

The procedure can also be done in TORA:
Step 3. Write Down the name of the Project and Click Enter.
Step 4. Capture the Data of the Problem Row by Row. For getting a New Row just click Tab or
Select a Row and then Click on Edit Grid and then Insert Column or Row.
Step 5. Once finished the capture of the data, Click on SOLVE Menu and Select where to save the
file and Save.
Step 6. Click on Solve the Problem and then Click on Go to Output Screen.
To be able to compute PERT in Excel, it is necessary to download the PERT add-ins available on the internet. Once the add-ins are installed, we open Excel and follow these steps:
Step 1. Open Excel
Step 2. Go to Add-ins, select OM_IE, and then New Problem.
Step 3. Define the project by filling in the window. The number of activities must be the total number of activities included in the problem plus two more, representing the Start and the End. Select the maximum number of predecessors, and Random in the Activity Times chart.
(Do not forget to add the 2 extra activities: 5 + 2 = 7.)
Step 4. Press Ok.
Step 5. Fill in the chart with the given data of the problem and finally press Solve.
Step 6. To get the graph, click on Graph.
Activity Precedence a m b
A / 3 5 7
B / 4 6 8
C A 1 3 5
D A 5 8 11
E B, C 1 2 3
F B, C 9 11 13
G D 1 1 1
H E,D 10 12 14
Network Model
Exercise 2. PERT_2
Exercise 3. PERT_3 (solution on YouTube)
2.7.3 PERT Conclusion
As an Industrial Engineering student I found this method very useful; the reasons are already expressed in the conclusion of the CPM method. Both are very appropriate methods for project management and planning, because we can identify critical and non-critical activities in order to prioritize and establish an order for accomplishing our goals.
In this case I found very valuable the fact of having three different time estimates covering three different scenarios: the worst case, the best case, and the most likely one. Averaged, they give us a more realistic figure from which we can start computing, in order to have an actual panorama of our project. As we are working with different values, the existing variability is taken into account, and those results give us the chances of accomplishing our project, expressed as a probability of success for an occurrence.
Even though CPM gives a good panorama, helped by the Gantt chart, I found it valuable that PERT can be a little more exact, or at least can compute the probability of the outcomes, in order to have more certainty.
Regarding the arts, I think it is always a good idea to record the activities that artists do, in order to build a database from which we can collect data for project planning, optimize the processes, and get more profit from the available resources.
3. Dynamic Programming6
Let's say the name comes from the fact that it referred to the process of solving a problem by making a series of decisions one after another; so “Dynamic” derived from the fact that you made the decisions in a temporal sense [...] (Algoritmi-UniTN, 2019).
In general terms, Dynamic Programming is about dividing a big problem (a project) into smaller problems which are solved one by one in order to find a general solution. Every smaller problem should be solved once, and its solution must be recorded so that it contributes to solving the main problem.
⁶ “Programming” in this context refers to a tabular method, not to writing computer code. (Cormen, 2009:359)
According to the chart, there is a series of steps to follow in order to get the solution of a Dynamic Programming problem:

1. It is necessary to define the solution in a recursive way; that means a description of the solution per se: whether we are looking for a shortest path, a longest path, etc.
2. It is necessary to define the value of the solution in a recursive way: shortest path, maximization of profits, etc.
3. Dynamic Programming (tabulation) is used when, after dividing the problem, the subproblems are not repeated and we need to solve all of them to get the final solution. On the other hand, when subproblems are repeated and we only need to solve a few, record their solutions, and then reuse them, we use memoization.
4. Finally, we can get a numerical output.
A bit of history:

The term Dynamic Programming was coined by Richard Bellman in the early 1950s, in the context of mathematical optimization. Initially it referred to the process of solving a problem by making the best decisions one after another. “Dynamic” was meant to give a temporal sense, and “Programming” referred to the idea of creating optimal schedules, for example in the field of logistics. (Algoritmi-UniTN, 2019).
What really makes Dynamic Programming (DP) interesting is the fact that, through smaller stages, we can analyze single-variable subproblems, which is simpler than analyzing multiple variables at the same time. The model of a DP is a recursive equation which necessarily links the subproblems or stages, in order to guarantee that every optimal solution of a stage is also feasible for the whole problem.
This is the Stagecoach Problem, a conceptual design for giving a concrete literal interpretation of DP problems.

A mythical salesman of the USA must travel west through hostile lands using a stagecoach as transport. Even though his starting point and destination are fixed, he has a considerable number of options to choose from in order to reach his destination. The salesman offers insurance to the passengers of the stagecoaches, and he needs to determine the safest path in order to decrease the cost of his insurance policies. The next network shows the different options he has in order to reach the destination:
Step 1. Establish the network according to the given precedences and distances. Even though it seems logical to greedily follow the shortest forward arc, we will avoid that, and we will also avoid solving by trial and error.

Step 2. Divide the entire problem into stages.

Fn(S, Xn): total cost of the best global policy for the remaining stages, where S is the current state (node) and Xn the decision (next node).
Stage 4
S = 8: X = 10, C = 3 → F = 3
S = 9: X = 10, C = 4 → F = 4

Stage 3
S = 5: via 8: 1 + 3 = 4; via 9: 4 + 4 = 8 → F = 4 (choose X = 8)
S = 6: via 8: 6 + 3 = 9; via 9: 3 + 4 = 7 → F = 7 (choose X = 9)
S = 7: via 8: 3 + 3 = 6; via 9: 3 + 4 = 7 → F = 6 (choose X = 8)
Step 4. Interpret the table and choose the corresponding path or paths that minimize the cost of the policy.

Reading the table forward, we can give the following interpretation. We are allowed to start from node 1 with:

First option: 1 to 4 – 4 to 5 – 5 to 8 – 8 to 10.
Second option: 1 to 4 – 4 to 6 – 6 to 9 – 9 to 10.
Third option: 1 to 3 – 3 to 5 – 5 to 8 – 8 to 10.

So we have three paths that minimize the cost of the policy, at 11 units:

First path: 1-4-5-8-10
Second path: 1-4-6-9-10
Third path: 1-3-5-8-10

Finally, we can conclude that the salesman can choose any one of the three options to minimize the cost of the insurance policy.
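A backward-recursion sketch of this example in Python follows. The stage 3 and 4 costs match the tables above; the remaining arc costs are an assumption taken from the classic version of this problem, chosen to be consistent with the three 11-unit optimal paths just listed.

    cost = {            # cost[s][x]: insurance cost of the arc s -> x
        1: {2: 2, 3: 4, 4: 3},
        2: {5: 7, 6: 4, 7: 6},
        3: {5: 3, 6: 2, 7: 4},
        4: {5: 4, 6: 1, 7: 5},
        5: {8: 1, 9: 4},
        6: {8: 6, 9: 3},
        7: {8: 3, 9: 3},
        8: {10: 3},
        9: {10: 4},
    }

    F = {10: 0}         # F[s]: minimal policy cost from s to node 10
    best = {}           # best[s]: optimal next node from state s
    for s in sorted(cost, reverse=True):        # stages 4, 3, 2, 1
        choices = {x: c + F[x] for x, c in cost[s].items()}
        x_star = min(choices, key=choices.get)
        F[s], best[s] = choices[x_star], x_star

    print("minimum policy cost:", F[1])         # 11
    node = 1
    while node != 10:       # recovers one of the tied optimal paths
        print(node, "->", best[node])
        node = best[node]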
3.2 Characteristics of Dynamic Programming Problems
The first characteristic of a Dynamic Programming problem is that DP problems are normally complex problems that need to be divided into stages in order to be solved. These problems are solved stage by stage, normally backwards: starting from the last stage and working toward the first stage in a recursive way.
A dynamic-programming algorithm solves each subproblem just once and then saves its answer in a table, thereby avoiding the work of recomputing the answer every time it solves each subproblem. […] Such problems can have many possible solutions. Each solution has a value, and we wish to find a solution with the optimal (minimum or maximum) value. We call such a solution an optimal solution to the problem, as opposed to the optimal solution, since there may be several solutions that achieve the optimal value. (Cormen, 2009: 359).
The Principle of Optimality, also known as Optimal Substructure, is a very important property that problems need to have in order to be eligible for a Dynamic Programming solution. (CSBreakdown, 2015).
The Principle of Optimality is a property that problems must have in order to be solvable through Dynamic Programming. It roughly means that the general problem can be divided into stages, and that every single stage has an optimal solution which is feasible and collaborates, in a recursive manner, in solving the entire problem.
A problem has optimal substructure if an optimal solution can be constructed efficiently from optimal solutions of its sub-problems. (Cormen, 2009, in CSBreakdown, 2015).
This method has a bunch of areas of opportunity, so to speak… When we have a complex process, we
need to separate it into stages; once we have it separated into stages, it is easier to find a better way to solve
our problems. (Héctor De Cos, 2020).
As we have already seen, DP offers us the opportunity to seek an optimal solution for a project
which can be divided into stages and solved stage by stage in a recursive manner.
The nature of the recursive computations in DP refers to the (normally backward) search for
optimal solutions of every stage or subproblem, whose result becomes the input of the next
subproblem, until we finish the different stages into which we have divided our main problem or
project. The recursion is deterministic when the values between the states are well defined.
(Héctor de Cos, 2020).
The manner in which the recursive computations are carried out depends on how we decompose the
original problem. In particular, the subproblems are normally linked by common constraints. As we
move from one subproblem to the next, the feasibility of these common constraints must be maintained.
(Taha, 2007: 400).
According to Taha (2007), both forward and backward recursions yield the same solution.
Although the forward procedure appears more logical, DP literature invariably uses
backward recursion. The reason for this preference is that, in general, backward recursion
may be more efficient computationally.
Example:
1) The stages
2) The shortest forward path
3) The shortest backward path
a) Definition of stages 1, 2 and 3 (network figure not reproduced).
Finding f1(x2) by forward recursion:
f1(x2) = min {d(x1, x2) + f0(x1)} = min {5 + 0} = 5
The values computed over the network are: f0(x1) = 0, f1(x2) = 5, f1(x3) = 11, f1(x4) = 6,
f2(x5) = 13, f2(x6) = 16, f3(x7) = 7.
3.5.2 Going Backwards
In this case it is the same process but in reverse, starting at stage n and finishing at stage 0.
Finding f1(x3) by backward recursion:
f1(x3) = min [{d(x3, x5) + f2(x5)}, {d(x3, x6) + f2(x6)}] = min [{2 + 4}, {3 + 4}] = 6
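As a quick check of Taha's claim that both directions agree, the sketch below runs the same stage-wise recursion forwards and backwards on a small network. The arc lengths are placeholders (the original figure is not reproduced), chosen to be consistent with several of the values listed above.

    arcs = {  # assumed arc lengths; the network figure is not in the text
        ('x1', 'x2'): 5, ('x1', 'x3'): 11, ('x1', 'x4'): 6,
        ('x2', 'x5'): 8, ('x3', 'x5'): 2, ('x3', 'x6'): 5, ('x4', 'x6'): 10,
        ('x5', 'x7'): 7, ('x6', 'x7'): 9,
    }
    stages = [['x1'], ['x2', 'x3', 'x4'], ['x5', 'x6'], ['x7']]

    def forward():
        # f[v] = length of the shortest path from x1 to v
        f = {'x1': 0}
        for layer in stages[1:]:
            for v in layer:
                f[v] = min(f[u] + d for (u, w), d in arcs.items() if w == v)
        return f['x7']

    def backward():
        # f[u] = length of the shortest path from u to x7
        f = {'x7': 0}
        for layer in reversed(stages[:-1]):
            for u in layer:
                f[u] = min(d + f[w] for (src, w), d in arcs.items() if src == u)
        return f['x1']

    assert forward() == backward()  # both recursions agree, as Taha states
    print(forward())                # 20 with these assumed lengths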
Exercise 1. DynamicProgrammingDet_1
The stagecoach (diligence) problem already stated and solved above.
Exercise 2. Let's suppose that we wish to select the shortest road path between two cities. The network in the
figure provides the possible routes between the departure city at node 1 and the destination city at
node 7. The routes pass through intermediate cities designated by nodes 2 to 6.
Exercise 3. DynamicProgrammingDet_3.8
The owner of a three-supermarket chain bought five loads of fresh strawberries. The owner wishes
to know how to assign the five loads to the supermarkets in order to maximize the expected gain. For
administrative reasons, he doesn't want to split a load between supermarkets. However,
he is willing to assign zero loads to any supermarket if necessary. The next table provides
the estimated gain in every supermarket for each possible assignment of loads:
Loads   Supermarket 1   Supermarket 2   Supermarket 3
0       0               0               0
1       5               6               4
2       9               11              9
3       14              15              13
4       17              19              18
5       21              22              20
The solution to this problem is available on YouTube. (Juan Pablo Requena, 2020)
The general backward recursive equation is:
f_i(x_i) = max over all feasible (x_i, x_{i+1}) routes of { d(x_i, x_{i+1}) + f*_{i+1}(x_{i+1}) }
and it is applied stage by stage (for stage 3, then stage 2, and so on) until stage 1 is reached.
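A minimal Python sketch of this exercise, treating each supermarket as a stage and the unassigned loads as the state; the gains are exactly those of the table above.

    from functools import lru_cache

    # gain[s][k]: estimated gain of assigning k loads to supermarket s
    gain = [
        [0, 5, 9, 14, 17, 21],   # supermarket 1
        [0, 6, 11, 15, 19, 22],  # supermarket 2
        [0, 4, 9, 13, 18, 20],   # supermarket 3
    ]

    @lru_cache(maxsize=None)
    def f(s, loads):
        """Best total gain from supermarkets s, s+1, ... with `loads` left."""
        if s == len(gain):
            return 0
        return max(gain[s][k] + f(s + 1, loads - k) for k in range(loads + 1))

    def best_assignment(total=5):
        plan, loads = [], total
        for s in range(len(gain)):
            k = max(range(loads + 1),
                    key=lambda k: gain[s][k] + f(s + 1, loads - k))
            plan.append(k)
            loads -= k
        return plan

    print(f(0, 5), best_assignment())  # 25 with plan [1, 2, 2]; (3, 2, 0) ties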
In my opinion, and after having worked hard to understand the importance of these algorithms,
Dynamic Programming seems to me an important way to visualize how a problem can be solved in
a different and easier way than the one we are used to, I mean forwards.
What I learnt from this chapter is that sometimes we must think about how to solve a part of a big
problem, establish an algorithm, and let the mathematical procedures do their job. It seems
magical, but it is hard work, because establishing algorithms is not as easy as it seems: it demands
a well-trained mind, creativity, and knowledge of different ways to solve problems in different
areas of the mathematical kingdom such as algebra, calculus or statistics.
As the name indicates, Deterministic Dynamic Programming is used when the values between the
states or nodes are already "determined", that is, given and immovable. Even though we often find these
problems in practice, there are situations where we cannot determine the value between
the states because these values depend on certain probabilities; in those cases we must use Probabilistic
Dynamic Programming.
Probabilistic Analysis is the use of probability in the analysis of problems. Most commonly, we use
probabilistic analysis to analyze the running time of an algorithm. Sometimes we use it to analyze other
quantities, such as the hiring cost in procedure HIRING-ASSISTANT. In order to perform a
probabilistic analysis, we must use knowledge of, or make assumptions about, the distribution of the
inputs. Thus, we are, in effect, averaging the running time over all possible inputs. When reporting such
a running time, we will refer to it as the average-case running time.
We must be very careful in deciding on the distribution of inputs. For some problems, we may
reasonably assume something about the set of all possible inputs, and then we can use probabilistic
analysis as a technique for designing an efficient algorithm and as a means for gaining insight into a
problem. For other problems, we cannot describe a reasonable input distribution, and in these cases, we
cannot use probabilistic analysis. (Cormen, 2009:116).
As we can see, the main difference between Deterministic Dynamic Programming and Probabilistic
Dynamic Programming is that the value of the next state is not given as a determined
number; instead, there is a probability distribution that determines what the value of the
next state will be. Even knowing the possible values of the next states, they
remain probabilistic values, and as probability reminds us, the fact that something has a
99.999999% chance of happening does not mean that the remaining 0.000001% cannot happen.
Probabilistic Dynamic Programming differs from Deterministic in that the state at the next stage is not
completely determined by the state and the decision policy of the current stage. Instead, there exists a
probability distribution that determines what the next state will be. However, this probability
distribution is completely determined by the state and the decision policy of the current stage. (Hillier, F.,
Lieberman, G., 1998:562).
3.6.1 Probabilistic Dynamic Programming Algorithm(s).
As we already know, the algorithms developed in Dynamic Programming change according to the
problem. Even though there is a general way to find the relation between f_n(S_n, X_n) and the optimal
f*_{n+1}(S_{n+1}) in Deterministic Dynamic Programming, this relationship is more complex
in Probabilistic Dynamic Programming, and it depends on the form of the global objective function.
Remembering that we must find the global recursive function before solving a problem, let's use an
example:
Imagine that you have $5,000 to invest and that you will have the opportunity to put it in either
of two investments (A or B) at the beginning of each of the next three years. There is uncertainty
about the returns of both investments. If you invest $5,000 in A, you can lose all of it or (with
higher probability) end the year with $10,000 (you would earn $5,000). If you
invest in B, you can keep the same $5,000 or (with lower probability) end the year with $10,000.
The probabilities of both events are as shown below.
You are allowed (at most) one investment per year, and you can only invest $5,000 each time
(any additional amount of money is useless).
a) Use Dynamic Programming to find the investment policy that maximizes the
expected amount of money at the end of the three years. (Hillier, 1998: 574-575)
Step 1. Draw the network.
Step 2. Establish the global recursive function for all available decisions.
X_n = kind of investment {A or B}
S_n = available state (money on hand) at stage n
f*_n(S_n) = max { f_n(S_n, X_A), f_n(S_n, X_B) }
f_n(S_n, X_A) = 0.3 f*_{n+1}(S_n − 5,000) + 0.7 f*_{n+1}(S_n + 5,000)
f_n(S_n, X_B) = 0.9 f*_{n+1}(S_n) + 0.1 f*_{n+1}(S_n + 5,000)
For example, at stage 3 with $5,000 on hand:
f_3(5,000, X_B) = 0.9 f*_4(5,000) + 0.1 f*_4(10,000)
Step 3. Apply the recursive function for n stages.
Stage 4.
Stage 3.
Stage 2.
Stage 1.
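Since the stage-by-stage tables are not reproduced above, here is a hedged Python sketch that evaluates the recursion of Step 2 numerically. The "do not invest" branch follows from the statement that at most one investment per year is allowed.

    from functools import lru_cache

    STEP = 5_000  # at most one $5,000 investment per year

    @lru_cache(maxsize=None)
    def best(year, wealth):
        """f*_n(S_n): maximum expected money at the end of year 3."""
        if year == 4:                       # horizon reached
            return wealth
        options = [best(year + 1, wealth)]  # do not invest this year
        if wealth >= STEP:
            # Investment A: lose the $5,000 w.p. 0.3, double it w.p. 0.7
            options.append(0.3 * best(year + 1, wealth - STEP)
                           + 0.7 * best(year + 1, wealth + STEP))
            # Investment B: keep the $5,000 w.p. 0.9, double it w.p. 0.1
            options.append(0.9 * best(year + 1, wealth)
                           + 0.1 * best(year + 1, wealth + STEP))
        return max(options)

    print(best(1, 5_000))  # 9800.0: always choosing investment A is optimal here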
Attention: the following is not a Probabilistic Dynamic Programming problem, as the states S_n are
completely determined. The confusion could arise because the data of the states in this problem are
given as probabilities, but they do not form a probability distribution over the next state
(p1 + p2 + … + pn = 1).
Let's remember:
Consider an electronic system with four components; every one of them must work for the
system to run. The reliability of the system can be improved by installing multiple parallel units
in one or more of the components. A chart (not reproduced here) gives the probability that the
respective component runs when it consists of 1, 2 or 3 parallel units.
The probability that the whole system runs is the product of the probabilities that the
respective components run.
The next chart presents the cost (in hundreds of dollars) of installing 1, 2 or 3 parallel units
in the respective components:
Parallel   Cost (hundreds of dollars)
Units      Component 1   Component 2   Component 3   Component 4
1          1             2             1             2
2          2             4             3             3
3          3             5             4             4
f_n(S_n, X_n) = p{n, X_n} · f*_{n+1}(S_n − c{n, X_n})
f*_n(S_n) = max over X_n of f_n(S_n, X_n)
where X_n = the decision of installing 1, 2 or 3 parallel units in component n (with cost c{n, X_n}
and reliability p{n, X_n}), and S_n = the budget still available.
Step 3. Solve stage by stage.
Stage 5.
Stage 4.
Stage 3.
Stage 2.
Stage 1.
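The reliability chart and the available budget are not reproduced above, so the sketch below uses placeholder reliabilities and a placeholder budget of 10 (hundreds of dollars); only the cost table comes from the text. The point is the structure: the stage is the component, the state is the remaining budget, and the recursion multiplies reliabilities instead of adding costs.

    from functools import lru_cache

    # Costs (hundreds of dollars) from the table above; reliabilities assumed.
    cost = [
        [1, 2, 3],  # component 1 with 1, 2, 3 parallel units
        [2, 4, 5],  # component 2
        [1, 3, 4],  # component 3
        [2, 3, 4],  # component 4
    ]
    reliab = [
        [0.5, 0.6, 0.8],  # placeholder values: the chart is not in the text
        [0.6, 0.7, 0.8],
        [0.7, 0.8, 0.9],
        [0.5, 0.7, 0.9],
    ]
    BUDGET = 10  # placeholder budget, in hundreds of dollars

    @lru_cache(maxsize=None)
    def f(comp, budget):
        """Best reliability of components comp..3 within the remaining budget."""
        if comp == 4:
            return 1.0
        return max((reliab[comp][u] * f(comp + 1, budget - cost[comp][u])
                    for u in range(3) if cost[comp][u] <= budget),
                   default=0.0)

    print(f(0, BUDGET))  # maximum system reliability under these assumptions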
The solution to this problem is available on YouTube. (Juan Pablo Requena, 2020)
Probabilistic Dynamic Programming Conclusion
I realized how important it is to integrate the different areas of knowledge we are developing in
college. As in Deterministic Dynamic Programming, it is very useful to be able to model a
problem as a network, to discover how to solve one stage, and to determine an
algorithm for solving the problem stage by stage.
Working backwards gave me an idea of how problems can be solved in an alternative way,
easier and quicker.
Finally, I would like to add that not all problems that deal with probability are problems that
require probabilistic methods; it is important to analyze what the problem really demands in order
to avoid and prevent confusion or misunderstandings. I would also like to add that we are in a
virtuous circle where the teacher learns from the students and vice versa.
4. Decision Analysis
To be or not to be, that is the question…
William Shakespeare.
In general terms, decision theory deals with decisions against nature. This phrase refers to a situation
where the result of an individual decision depends on the action of another agent (nature) over which
we have no control. (Eppen, Gould, Schmidt, Moore, Weatherford, 2000:443).
As human beings we have the capacity to choose among different options. Imagine the early 1200s,
when science was outlawed and the Inquisition forced people to believe in the power of God
or of nature; in that stage of human history, decisions were submitted to God's will, which
let nature decide what would happen to people. There was no weather forecast, so even the
simple decision to use an umbrella or not rested on experience and the power of nature. Today it is
different: we have more certainty for some of the decisions we have to make, but we have not yet
discovered how to see the future, so every day we must make decisions, some
under certainty, some under uncertainty and some under risk.
We could mention thousands of examples, but there is one decision that is out of our hands: even if
we take care of ourselves, we will die, and we don't know when, so we all risk dying every day.
Some of us have the certainty of being in shape and healthy, some of us are sick and almost dying, but we
cannot control nature, and an event beyond our power of control could change our lives.
Success or failure will largely depend on the decisions we make. A good decision is based on available
information and considers all the possible alternatives. A bad decision doesn’t use all the available
information and doesn´t use appropriate quantitative techniques. (Ingeniería Industrial UCA, 2018)
Decision analysis is a way to take control (where possible) of the risk of making a decision
under different circumstances:
someone faces different options or choices, from 2 to infinitely many, and must make a decision;
even deciding to do nothing is a decision. Once an alternative is chosen, it will receive the full
force of the state of nature, which will impact the decision maker.
It is important to remark that in this model the benefits or the damages caused by the decision made
affect only the decision maker, and that nature does not care about the result of its own decision. (Eppen,
Gould, Schmidt, Moore, Weatherford, 2000:443).
4.2 Table or Matrix of Payoffs
When a decision is to be made, the decision maker faces the following scenario:

                 Nature States
Alternatives   1      2      …   m
1              R1,1   R1,2   …   R1,m
2              R2,1   R2,2   …   R2,m
…              …      …      …   …
n              Rn,1   Rn,2   …   Rn,m
Example:
This is Morgana Baldissera; she will be walking in the street all evening, and the weather
forecast is 50-50. She hesitates about carrying her umbrella: if she gets wet, she will have
to pay $30.00 USD at the laundry. This is her matrix of payoffs (not reproduced here):
Theoretically, it is simple to solve a model with only one state of nature: we simply select the
decision with the highest return. In practice, finding this decision is another story. (Eppen, Gould,
Schmidt, Moore, Weatherford, 2000:445).
4.3 Decision Under Uncertainty and Under Risk
4.3.1 Decision Under Uncertainty
Decision making under uncertainty, as under risk, involves alternative actions whose payoffs
depend on the (random) states of nature. Specifically, the payoff matrix of a decision problem with
m alternative actions and n states of nature can be represented as:

        S1           S2           …   Sm
a1      v(a1, S1)    v(a1, S2)    …   v(a1, Sm)
a2      v(a2, S1)    v(a2, S2)    …   v(a2, Sm)
…       …            …            …   …
an      v(an, S1)    v(an, S2)    …   v(an, Sm)

Where:
ai = action or alternative
Sj = state of nature
The GoferBroke Company owns a tract of land that may contain oil. A consulting geologist
has informed the administration that he believes there is 1 chance in 4 of finding oil.
Because of this possibility, another oil company has offered to buy the land for $90,000 USD;
however, GoferBroke is considering keeping it in order to drill. The cost of
drilling is $100,000 USD. If they find oil, the expected revenue will be $800,000 USD;
so, the expected return for the company (after deducting the cost of drilling) will be $700,000
USD. If there is no oil, they will lose the $100,000 USD.
Before making the decision to drill or to sell, another option is to carry out a detailed seismological
exploration of the area to obtain a better estimate of the probability of finding oil.
The company operates with little capital, so a loss of $100,000 USD would be serious.
                           Nature States
Alternatives               Finding oil (1/4)   Not finding oil (3/4)
To drill looking for oil   700,000             −100,000
To sell the land           90,000              90,000
Probability (a priori)     0.25                0.75
4.3.1.1 The Maximin Criterion
This solution is based on ensuring the best of the worst for every player. (Taha, 2007: 533).
a) For every possible alternative, find the minimum payoff over all the possible nature states.
b) Find the maximum of the results of step a).
c) Choose the alternative which corresponds to that maximum.
The minimax criterion is based on the conservative attitude of choosing the best of the worst possible
conditions. If v(a_i, S_j) is a loss, we select the action according to the minimax criterion:
min_{a_i} { max_{S_j} v(a_i, S_j) }
If v(a_i, S_j) is a gain, we use the maximin criterion:
max_{a_i} { min_{S_j} v(a_i, S_j) }
where a_i = alternatives and S_j = states of nature. (Taha, 2007:516)
Minimum possible payoffs: to drill, −100,000; to sell, 90,000.
In this case we have taken the worst scenario of each alternative, and between the two worsts we
have chosen the less bad, that is, the better of the two: to sell the land for 90,000.
This criterion is valid when facing a rational opponent who is out to win, but not
against nature, because we don't know its behavior; against nature it represents a very conservative
attitude of the decision maker.
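A compact sketch of the maximin steps a) to c) on the GoferBroke payoffs:

    payoffs = {
        'drill': {'oil': 700_000, 'no oil': -100_000},
        'sell':  {'oil':  90_000, 'no oil':   90_000},
    }

    # Step a): worst payoff of each alternative; steps b)-c): best of those worsts
    worst = {a: min(row.values()) for a, row in payoffs.items()}
    print(worst)                      # {'drill': -100000, 'sell': 90000}
    print(max(worst, key=worst.get))  # 'sell'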
4.3.1.2 Maximum Probability Criterion
a) Identify the most probable nature state (the one which presents the highest "a priori"
probability).
b) Find the maximum payoff for this chosen state of nature.
c) Choose the corresponding decision alternative.
                           Nature States
Alternatives               Finding oil (1/4)   Not finding oil (3/4)
To drill looking for oil   700,000             −100,000
To sell the land           90,000              90,000   ← max payoff for step b)
Probability (a priori)     0.25                0.75
As we can see, this criterion is not as intelligent as it seems. Imagine that our probability
distribution is divided into 5 different states and, besides, is almost uniform, for example:

Nature States
Probability   0.19   0.18   0.21   0.20   0.22

One probability is the biggest, but it is not representative because it is so close
to the others. Even in cases like the oil problem, the 25% probability of finding oil is not
negligible, so it is advisable to take it into account.
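The criterion itself is a two-line computation; the sketch below applies steps a) to c) to the same payoffs:

    payoffs = {
        'drill': {'oil': 700_000, 'no oil': -100_000},
        'sell':  {'oil':  90_000, 'no oil':   90_000},
    }
    prior = {'oil': 0.25, 'no oil': 0.75}

    likely = max(prior, key=prior.get)                       # a) most probable state
    choice = max(payoffs, key=lambda a: payoffs[a][likely])  # b)-c) best payoff there
    print(likely, choice)  # 'no oil' -> 'sell', ignoring the 25% chance of oil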
4.3.1.3 The Hurwicz Criterion
This criterion is designed to reflect decision-making attitudes, ranging from the most
optimistic to the most pessimistic. Define 0 ≤ α ≤ 1, and assume that v(a_i, S_j) represents
gain. Then the selected action must be associated with:
max_{a_i} { α max_{S_j} v(a_i, S_j) + (1 − α) min_{S_j} v(a_i, S_j) }
Using the same GoferBroke payoff table shown above:
With α = 0.25:
max_{a_i} { α max_{S_j} v + (1 − α) min_{S_j} v } = 0.25(700,000) + 0.75(−100,000) = 100,000
(to drill, versus 90,000 for selling)
We can find the crossing point between both alternatives in order to find the critical value of α.
This result indicates that if α ≤ 0.2375 we must reject the drilling project because it is extremely
risky (selling is preferred); otherwise we must accept it with its intrinsic risk.
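A sketch of the Hurwicz computation, including the indifference (crossing) point between drilling and selling:

    def hurwicz(row, alpha):
        """alpha-weighted blend of the best and worst payoff of one alternative."""
        return alpha * max(row) + (1 - alpha) * min(row)

    drill, sell = [700_000, -100_000], [90_000, 90_000]

    alpha = 0.25
    print(hurwicz(drill, alpha), hurwicz(sell, alpha))  # 100000.0 vs 90000.0 -> drill

    # Crossing point: alpha*700,000 + (1 - alpha)*(-100,000) = 90,000
    alpha_star = (90_000 + 100_000) / (700_000 + 100_000)
    print(alpha_star)  # 0.2375: for alpha below this, selling is preferred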
Exercise 1. Decision Analysis under Uncertainty, Decision Tree, Sensitivity Analysis.
4.3.2 Decisions under Risk
Under conditions of risk, the payoffs associated with each decision alternative are described by
probability distributions. For this reason, decision making under risk can be based on the expected value
criterion, in which decision alternatives are compared based on the maximization of expected profit or
the minimization of expected cost. However, because the approach has limitations, the expected value
criterion can be modified to encompass other situations. (Taha, 2007:500).
The expected value criterion seeks the maximization of expected average profit or the minimization of
expected cost. The data of the problem assumes that the payoff associated with each decision
alternative is probabilistic. (Taha, 2007: 500).
The expected value of alternative i is:
EV_i = Σ_j p_j · a_ij
Where:
EV_i = expected value of alternative i
a_ij = the payoff corresponding to alternative i and state of nature j
p_j = the probability of occurrence of state of nature j
The best alternative is the one associated with EV*_i = max_i {EV_i} or EV*_i = min_i {EV_i},
depending, respectively, on whether the payoffs of the problem represent profit (income) or loss (expense).
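Applied to the GoferBroke data, the criterion is one weighted sum per alternative; a minimal sketch:

    prior = {'oil': 0.25, 'no oil': 0.75}
    payoffs = {
        'drill': {'oil': 700_000, 'no oil': -100_000},
        'sell':  {'oil':  90_000, 'no oil':   90_000},
    }

    ev = {a: sum(prior[s] * v for s, v in row.items()) for a, row in payoffs.items()}
    print(ev)                   # {'drill': 100000.0, 'sell': 90000.0}
    print(max(ev, key=ev.get))  # 'drill': the expected value criterion says drill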
Example. Recall the GoferBroke problem and its a priori payoff table stated above: drill
(700,000 if oil, −100,000 if not) or sell (90,000), with probabilities 0.25 and 0.75.
Decision tree (figure not reproduced).
Nothing assures me that the most likely outcome will always happen, so we have to be careful with those
risks. (Héctor de Cos, 2020).
The probabilities used in the expected value criterion are usually determined from historical data. In
some cases, these probabilities can be adjusted using additional information based on sampling or
experimentation. The resulting probabilities are referred to as posterior (or Bayes') probabilities,
as opposed to the prior probabilities determined from raw data. (Taha, 2007: 506).
Consider again the GoferBroke problem and its a priori payoff table, now adding the possibility of
paying for a seismological exploration before deciding.
The cost of a seismological exploration is $30,000 USD, with the following conditional
probabilities for the survey outcome given the true state:
m1 = oil
m2 = nothing
Decision tree (figure not reproduced): the first branching is the exploration outcome, a favorable
survey (SSF, outcome v1) or an unfavorable survey (SSD, outcome v2); after each outcome, nodes 4
to 7 represent the choice between drilling (700,000 if oil, −100,000 if nothing) and selling (90,000
in either case), with the a priori probabilities 0.25 for oil (m1) and 0.75 for nothing (m2).
Step 1. Conditional Probabilities: 𝒑{𝒗𝒋 |𝒎𝒊 }
𝒑{𝒗𝒋 |𝒎𝒊 } 𝒗𝟏 𝒗𝟐
𝒎𝟏 0.6 0.4
𝒎𝟐 0.2 0.8
Step 2. Joint Probabilities: 𝒑{𝒎𝒊 , 𝒗𝒋 } = 𝒑{𝒗𝒋 |𝒎𝒊 } ∗ 𝒑{𝒎𝒊 } , 𝑓𝑜𝑟 𝑎𝑙𝑙 𝑖 𝑎𝑛𝑑 𝑗.
p{m1, v1} = 0.6 × 0.25 = 0.15
p{m1, v2} = 0.4 × 0.25 = 0.10
p{m2, v1} = 0.2 × 0.75 = 0.15
p{m2, v2} = 0.8 × 0.75 = 0.60
Σ p{m_i, v_j} = 0.15 + 0.10 + 0.15 + 0.60 = 1
𝒑{𝒎𝒊 , 𝒗𝒋 } 𝒗𝟏 𝒗𝟐
𝒎𝟏 0.15 0.1
𝒎𝟐 0.15 0.6
The sum of all joint probabilities is 1.
Step 3. Compute the absolute (marginal) probabilities of the survey outcomes:
p{v1} = Σ_i p{m_i, v1} = 0.15 + 0.15 = 0.30
p{v2} = Σ_i p{m_i, v2} = 0.10 + 0.60 = 0.70
𝒑{𝒗𝟏 } 𝒑{𝒗𝟐 }
0.3 0.7
Step 4. Apply Bayes' theorem to obtain the posterior probabilities:
p{m_i | v_j} = p{m_i, v_j} / p{v_j}
For example, p{m1 | v1} = 0.15 / 0.30 = 0.5. The complete table of posteriors is:

p{m_i|v_j}   v1     v2
m1           0.5    0.142857
m2           0.5    0.857143
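Steps 1 to 4 translate directly into a few dictionary comprehensions; a minimal sketch with the numbers above:

    prior = {'m1': 0.25, 'm2': 0.75}                # oil / nothing (a priori)
    cond = {('m1', 'v1'): 0.6, ('m1', 'v2'): 0.4,   # p{v_j | m_i} from Step 1
            ('m2', 'v1'): 0.2, ('m2', 'v2'): 0.8}

    joint = {mv: cond[mv] * prior[mv[0]] for mv in cond}                     # Step 2
    marginal = {v: sum(joint[(m, v)] for m in prior) for v in ('v1', 'v2')}  # Step 3
    posterior = {(m, v): joint[(m, v)] / marginal[v] for (m, v) in joint}    # Step 4

    print(marginal)   # {'v1': 0.3, 'v2': 0.7}
    print(posterior)  # p{m|v}: 0.5 and 0.5 given v1; ~0.1429 and ~0.8571 given v2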
With these posteriors, the expected payoff at each decision node is evaluated in four cases: v1, a
favorable exploration (SSF), and v2, an unfavorable exploration (SSD), each computed both
discounting and without discounting the $30,000 USD cost of the exploration (numerical results
not reproduced).
The Expected Value of Perfect Information (EVPI) refers to the best possible scenario and serves to
evaluate the potential of an experimentation process in a decision-making process.
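For the GoferBroke data, a short worked computation, assuming the standard definition EVPI = expected payoff with perfect information − best expected payoff without it:

Expected payoff with perfect information = 0.25(700,000) + 0.75(90,000) = 175,000 + 67,500 = 242,500
Best expected payoff without additional information (to drill) = 100,000
EVPI = 242,500 − 100,000 = 142,500

So no experiment can be worth more than $142,500 USD here; the $30,000 USD seismological survey is well below that bound.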
Being a decision maker implies a high level of responsibility; that is why it is important to know
different methods that can help us make a quality decision with mathematical support based
on experience and raw data.
In life, as in engineering, we face problems every day; to solve these problems we must make
decisions, and it is very important to visualize which scenario we are standing in. Sometimes we
have certainty, but even with certainty nobody knows the future. Since the future is uncertain, we must
find the way to get the maximum certainty, to make sure that we have the best opportunity to be
successful when making a decision.
Making decisions with uncertainty and under risk is a difficult situation but we can attenuate those
difficulties by finding the expectations of a decision through statistical computations, as we have
done with the example of Drilling or Selling the land.
Even though it is very important to use these kinds of tools, we must remember that sometimes these
computations do not include other important factors, such as the current situation of the company, the risks
of the market, or the season, among thousands of other variables; so it is also important to
use personal criteria and experience in order to make a good decision, based on both qualitative
and quantitative information.
As an extra, I would say that we must always be prepared for the worst scenarios, because,
according to experience, the worst decisions are usually based on overly optimistic assumptions.
We must remember that a very optimistic panorama ignores the difficulties and the stones along the
way; I do not mean we should not be optimistic, but that we should be analytic.
Remember that we cannot know the future at all and that Murphy's law is always stalking:
If toast can land butter-side down, it will do. (El país, 2015)
5. Bibliography
Castillo, L. (2016, February, 17). Programación dinámica. [Video File] Retrieved from
https://www.youtube.com/watch?v=uyFl45Yza3k
Cormen, T., Leiserson, C., Rivest, R., Stein, C. (2009). Introduction to Algorithms. Third Edition.
United States of America: The MIT Press.
CSBreakdown, (2015, May, 16). Principle of Optimality - Dynamic Programming. [Video File]
Retrieved from https://www.youtube.com/watch?v=_zE5z-KZGRw
Eppen, G.G., Gould, F.J., Schmidt, C.P., Moore, J.H., Weatherford, L.R. (2000). Investigación de
Operaciones en la Ciencia Administrativa. México: Prentice-Hall.
Grupo Gedosa. (2017, June, 17). Ruta crítica CPM Method. ¿Qué es? ¿Cómo se calcula?
Terminología y ejercicios. [Video File] Retrieved from
https://www.youtube.com/watch?v=-MDR5bkwnGQ
Héctor De Cos, (2020, March, 19) IO2 PERT [Video File] Retrieved from
https://www.youtube.com/watch?v=CLrCf0t1ARU&t=1221s
Héctor De Cos, (2020, April, 19) IO2 Programación Dinámica Determinista. [Video File]
Retrieved from https://www.youtube.com/watch?v=c_gQjAin-Kg
Héctor De Cos, (2020, April, 29) IO2 Programación Dinámica Probabilista. [Video File]
Retrieved from https://www.youtube.com/watch?v=RfvHxUb5Zuo&t=853s
Héctor De Cos, (2020, May, 25) IO2 Toma de decisiones sin experimentación. [Video File]
Retrieved from https://www.youtube.com/watch?v=6hK28F4BFng
Ingeniería Industrial UCA, (2018, October, 10). 01 01 Unidad 1 Teoría de decisiones Conceptos.
[Video File] Retrieved from https://www.youtube.com/watch?v=YEu_scG4T44
Juan Pablo Requena. (2020, May, 6). Programación Dinámica Probabilística Problema de
las inversión A y B. [Video File] Retrieved from
https://www.youtube.com/watch?v=4YtPUNsg6k4
Lind, D., Marchal, W., Wathen, S. (2015). Estadística Aplicada a los Negocios y la Economía.
Decimosexta Edición. México: McGraw-Hill Education.
Plan de Mejora. (2019, March, 21). Calcula la Ruta crítica y diagrama PERT CPM rápido y Fácil
en Excel. [Video File] Retrieved from
https://www.youtube.com/watch?v=Vn5Ei090NuM&t=253s
Ralph E. Gomory. (2019, December, 12). In Wikipedia, The Free Encyclopedia. Retrieved June 6,
2020, from https://en.wikipedia.org/wiki/Ralph_E._Gomory
Rubio, J. (2015, June, 21). 8 leyes de Murphy que tienen base científica. El país. Retrieved from
https://verne.elpais.com/verne/2015/06/19/articulo/1434705663_423636.html