
MODELING AND ANALYSIS
Lecture 10
Learning Objectives
❑ Describe Modeling for MSS (a critical component)
❑ Understand Static and dynamic models
❑ Treating certainty, uncertainty, and risk
--------------------------------------------------------------------------------------------
MSS Modeling
❑ A key element in most MSS
❑ Leads to reduced cost and increased revenue.
Major Modeling Issues
▪ Problem identification and environmental analysis: scanning the
environment to figure out what problems exist and can be solved via a
model
▪ Variable identification: identifying the critical factors in a model and
their relationships
▪ Forecasting: predicting the future
▪ Use of multiple models: combining them to solve many parts of a
complex problem
▪ Model categories: selecting the right type of model for the problem or
sub-problem
▪ Model management: coordinating a firm’s models and their use
▪ Knowledge-based modeling: how to take advantage of human
knowledge in modeling
Static and Dynamic Models
❑ Static models take a single snapshot of a situation. During this snapshot
everything occurs in a single interval.
▪ For example: A decision on whether to make or buy a product is static
in nature.
▪ A quarterly or annual income statement is static. Though it represents a
year’s operations, it occurs in a fixed time frame.
▪ The time frame can be “rolled” forward, but it is nonetheless static.
▪ Most static decision-making situations are presumed to repeat with
identical conditions.
▪ For example, process simulation begins with a steady-state model, a static
representation of a plant used to find its optimal operating
parameters.
▪ A static representation assumes that the flow of materials into the plant
will be continuous and unvarying.
▪ Steady-state simulation is the main tool of process design, when
engineers must determine the best trade-off between capital costs,
operating costs, process performance, product quality, environmental
and safety factors.
▪ The stability of the relevant data is assumed in a static analysis.

❑ Dynamic Models: Dynamic models represent scenarios that change over time.
▪ A simple example is a 5-year profit-and-loss projection in which the
input data, such as costs, prices and quantities, change from year to
year.
▪ Dynamic models are time dependent. For example, in determining how
many checkout points should be open in a supermarket one must take
the time of day into consideration, because different numbers of
customers arrive during each hour. Demands must be forecasted over
time
▪ Dynamic simulation, in contrast to steady-state simulation, represents
what happens when conditions vary from the steady-state over time.
There might be variation in the raw materials (e.g., clay) or an
unforeseen (even random) incident in some of the processes. This
methodology is used in plant control design.
▪ Dynamic simulations are important because they use, represent, or
generate trends and patterns over time.
▪ They also show averages per period, moving averages, and
comparative analysis (e.g., profit this quarter against profit in the same
quarter of last year.)
▪ Moving Averages: An indicator frequently used in technical analysis
showing the average value of a security's price over a set period.
Moving averages are generally used to measure momentum and define
areas of possible support and resistance.
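The moving-average calculation described above can be sketched in a few lines of Python; the price values below are made-up illustration data, not from the lecture.

```python
def moving_average(prices, window):
    """Mean of the last `window` values at each step along the series."""
    return [sum(prices[i - window + 1:i + 1]) / window
            for i in range(window - 1, len(prices))]

prices = [10, 12, 11, 13, 14, 13]
ma = moving_average(prices, 3)
print(ma[0], ma[1])  # 11.0 12.0
```

Each output point averages the current value with the two before it, which smooths short-term fluctuations and makes the underlying momentum easier to read.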
❑ Furthermore, once a static model is constructed to describe a given situation
(say, product distribution), it can be expanded to represent the dynamic nature of
the problem.
▪ For example, the transportation model (a type of network flow
model) describes a static model of product distribution. It can be
expanded to a dynamic network flow model to accommodate inventory
and backordering.
What is the Transportation Model?
▪ The transportation model is a special case of LPP (Linear Programming
Problem) in which the main objective is to transport a product from
various sources to various destinations at minimum total cost.
▪ In transportation models, the sources and destinations are known, as are
the supply and demand at each source and destination.
▪ It is designed to find the best arrangement for transportation such that
the transportation cost is minimum.
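The lecture does not prescribe a solution algorithm, but one common way to get an initial feasible shipping plan is the classic least-cost heuristic: ship as much as possible along the cheapest remaining route, then repeat. This is a sketch only (the result is feasible, not necessarily optimal), and the costs, supplies, and demands below are made-up illustration values.

```python
def least_cost(costs, supply, demand):
    """Least-cost heuristic for a balanced transportation problem."""
    supply, demand = supply[:], demand[:]  # don't mutate the caller's lists
    shipments = {}
    # Visit routes in order of increasing unit cost.
    cells = sorted((costs[i][j], i, j)
                   for i in range(len(supply)) for j in range(len(demand)))
    for cost, i, j in cells:
        qty = min(supply[i], demand[j])  # ship as much as both sides allow
        if qty > 0:
            shipments[(i, j)] = qty
            supply[i] -= qty
            demand[j] -= qty
    total = sum(costs[i][j] * q for (i, j), q in shipments.items())
    return shipments, total

costs = [[4, 6, 8],   # unit costs: source 0 -> destinations 0..2
         [5, 3, 7]]   # source 1 -> destinations 0..2
supply = [30, 40]
demand = [20, 30, 20]
plan, total = least_cost(costs, supply, demand)
print(total)  # 320
```

Because total supply equals total demand here, the greedy pass always exhausts both, so the plan is feasible; an LP solver would be needed to guarantee the minimum-cost solution.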
What is Dynamic Network Flow Model?
▪ A dynamic network consists of a graph with capacities and transit
times on its edges. Flow moves through a dynamic network over time.
Edge capacities restrict the rate of flow and edge transit times
determine how long each unit of flow spends traversing the network.
▪ Capacity is the maximum flow that may be sent through an edge or a
vertex. Edge is a connection between two vertices of a graph. Often in
operations research, a directed graph is called a network, the vertices
are called nodes and the edges are called arcs.
▪ A directed graph, also called a digraph, is a graph in which the edges
have a direction. This is usually indicated with an arrow on the edge;
more formally, if v and w are vertices, an edge is an unordered pair

{v,w}, while a directed edge, called an arc, is an ordered pair (v,w) or
(w,v).

Directed graph
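The arc definitions above can be made concrete with a small Python sketch of a dynamic network: each arc is an ordered pair of nodes carrying a capacity and a transit time. The node names and numbers here are illustrative assumptions, not from the lecture.

```python
# Arcs keyed by ordered pairs (tail, head): direction matters in a digraph.
network = {
    ("s", "a"): {"capacity": 10, "transit": 2},
    ("a", "t"): {"capacity": 7,  "transit": 3},
    ("s", "t"): {"capacity": 4,  "transit": 6},
}

def arcs_from(node):
    """All arcs leaving a node; ("s", "a") and ("a", "s") are distinct."""
    return [arc for arc in network if arc[0] == node]

def path_transit_time(path):
    """Total time a unit of flow spends traversing consecutive arcs."""
    return sum(network[(u, v)]["transit"] for u, v in zip(path, path[1:]))

print(path_transit_time(["s", "a", "t"]))  # 5
```

Capacities bound the flow rate on each arc, while transit times determine how long flow takes to move, which is exactly what distinguishes a dynamic network flow model from a static one.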
Categories of Models
❑ Standard model categories include:
▪ Optimization of problems with few alternatives: find the best solution from a small number of alternatives.
▪ Optimization via algorithm: find the best solution through a step-by-step improvement process (e.g., linear and other mathematical programming models).
▪ Simulation: find a good enough solution by experimenting with a dynamic model of the system.
▪ Heuristics: find a good enough solution using common-sense rules.
Types of Decision-Making Environment


❑ The types of decisions people make depend on how much information or
knowledge they have about the problem scenario. There are three decision-
making environments, as follows:
❑ Certainty
❑ Uncertainty
❑ Risk
Decision Making Under Certainty
❑ In decision making under certainty, the decision maker knows with certainty
what conditions will subsequently occur and affect the decision outcomes.

Decision-Making under Uncertainty
❑ In decision-making under uncertainty, decision makers have no information or
knowledge at all about the various outcomes. That is, they do not know the
likelihood (or probability) that a specific outcome will occur.
❑ When no probabilities are available, the situation is referred to as decision
making under uncertainty.
❑ Steps of decision making under uncertainty (Ignorance):
▪ Construct a Payoff Table
▪ Select a Decision-Making Criterion
▪ Apply the Criterion to the Payoff Table
▪ Identify the Optimal Solution

The construction example used in the rest of this section has three alternatives
(build 50, 100, or 150 units) and three states of nature (demand of 50, 100, or
150 units). The payoff table (in $) is:

Alternative    Demand = 50    Demand = 100    Demand = 150
Build 50           400,000         400,000         400,000
Build 100          100,000         800,000         800,000
Build 150         -200,000         500,000       1,200,000

❑ The decision criteria are based on the decision maker’s attitude toward life
❑ These include an individual being pessimistic or optimistic, conservative or
aggressive

❑ Criteria for making decisions under uncertainty:


▪ Maximax
▪ Maximin
▪ Equally likely
▪ Criterion of realism
▪ Minimax regret

❑ The first four criteria are calculated directly from the decision payoff table.

❑ The fifth, the minimax regret criterion, requires the use of an opportunity loss table.

❑ Maximax:

The Optimistic Point of View


▪ Select the “best of the best” strategy

▪ Evaluates each decision by the maximum possible return associated
with that decision (Note: if cost data is used, the minimum return is
“best”)
▪ The decision that yields the maximum of these maximum returns
(maximax) is then selected.
▪ For “risk takers”
▪ Doesn’t consider the “downside” risk (an estimate of a security’s potential
to suffer a decline in price if market conditions turn bad)
▪ Ignores the possible losses from the selected alternative

❑ Maximin:
The Pessimistic Point of View
▪ Select the “best of the worst” strategy
▪ Evaluates each decision by the minimum possible return associated
with the decision
▪ The decision that yields the maximum value of the minimum returns
(maximin) is selected
▪ For “risk averse” decision makers
▪ A “protect” strategy
▪ The worst-case scenario is the focus
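The maximax and maximin criteria above can be sketched in Python. The payoff values follow the construction example used throughout this section (Build 50/100/150 versus low/medium/high demand).

```python
# Payoff table: alternative -> payoffs for (low, medium, high) demand.
payoffs = {
    "Build 50":  [400_000, 400_000, 400_000],
    "Build 100": [100_000, 800_000, 800_000],
    "Build 150": [-200_000, 500_000, 1_200_000],
}

def maximax(table):
    """Optimistic: pick the alternative whose best payoff is largest."""
    return max(table, key=lambda alt: max(table[alt]))

def maximin(table):
    """Pessimistic: pick the alternative whose worst payoff is largest."""
    return max(table, key=lambda alt: min(table[alt]))

print(maximax(payoffs))  # Build 150 (best of the best: 1,200,000)
print(maximin(payoffs))  # Build 50 (best of the worst: 400,000)
```

The risk taker reaches for the $1,200,000 upside of Build 150; the risk-averse decision maker locks in the guaranteed $400,000 of Build 50.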

❑ Equally likely (Laplace-Bayes)
▪ The Laplace criterion approach interprets the condition of
“uncertainty” as equivalent to assuming that all states of nature
are equally likely to occur.

▪ First alternative (Build 50): (400,000 + 400,000 + 400,000) / 3 = 400,000
▪ Second alternative (Build 100): (100,000 + 800,000 + 800,000) / 3 ≈ 566,667
▪ Third alternative (Build 150): (-200,000 + 500,000 + 1,200,000) / 3 = 500,000
▪ The equally likely criterion selects the decision alternative with the highest
average payoff: Build 100 (≈ 566,667)

▪ Another example, in a model, assuming all states are equally


likely means that since there are four states, each state occurs
with probability 0.25.
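The equally likely (Laplace) criterion is a one-liner once the payoff table is in hand; payoffs again follow the section's construction example.

```python
payoffs = {
    "Build 50":  [400_000, 400_000, 400_000],
    "Build 100": [100_000, 800_000, 800_000],
    "Build 150": [-200_000, 500_000, 1_200_000],
}

def equally_likely(table):
    """Treat all states as equally probable; pick the highest average payoff."""
    return max(table, key=lambda alt: sum(table[alt]) / len(table[alt]))

print(equally_likely(payoffs))  # Build 100
```

Dividing by the number of states is equivalent to weighting every state by the same probability (here 1/3 each), which is exactly the Laplace assumption.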

❑ Criterion of Realism (Hurwitz)


▪ Often called weighted average, the criterion of realism decision
criterion is a compromise between optimistic and pessimistic decision.
▪ Select a coefficient of realism, α, with a value between 0 and 1.
▪ When α is close to 1, the decision maker is optimistic about the future.
▪ When α is close to 0, the decision maker is pessimistic about the future.
▪ Formula for the criterion of realism:
▪ α (maximum payoff for alternative) + (1 - α) (minimum payoff for
alternative)
▪ Assume coefficient of realism α = 0.80.
▪ The best decision would be alternative 3, “Build 150.”

▪ This alternative has the highest weighted average payoff: $920,000
▪ Alternative 1 (Build 50): (0.80)(400,000) + (0.20)(400,000) =
320,000 + 80,000 = 400,000
▪ Alternative 2 (Build 100): (0.80)(800,000) + (0.20)(100,000) =
640,000 + 20,000 = 660,000
▪ Alternative 3 (Build 150): (0.80)(1,200,000) + (0.20)(-200,000) =
960,000 - 40,000 = 920,000 ➔ Realism choice
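The Hurwicz computation above can be sketched directly; the payoffs follow the section's construction example, and alpha is the coefficient of realism.

```python
payoffs = {
    "Build 50":  [400_000, 400_000, 400_000],
    "Build 100": [100_000, 800_000, 800_000],
    "Build 150": [-200_000, 500_000, 1_200_000],
}

def realism(table, alpha):
    """alpha * (best payoff) + (1 - alpha) * (worst payoff), maximized."""
    weighted = {a: alpha * max(p) + (1 - alpha) * min(p)
                for a, p in table.items()}
    best = max(weighted, key=weighted.get)
    return best, weighted[best]

alt, value = realism(payoffs, 0.80)
print(alt, round(value))  # Build 150 920000
```

Sliding alpha from 1 down to 0 moves the choice smoothly from the maximax answer toward the maximin answer, which is why the criterion is called a compromise.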

❑ Minimax Regret Criterion


▪ Final decision criterion is based on opportunity loss.
▪ Develop opportunity loss (regret) table.
▪ Determine opportunity loss of not choosing the best alternative for
each state of nature (or the regret by failing to choose the “best”
decision)
▪ Opportunity loss (also called regret) for any state of nature (i.e., any
column) is calculated by subtracting each outcome in the column from the best
outcome in the same column.
▪ The alternative whose maximum regret is smallest is then selected.
▪ Best outcome for low demand is $400,000 as result of first alternative,
“Build 50."
▪ Subtract all payoffs in column from $400,000.
▪ Best outcome for medium demand is $800,000 that is the result of
second alternative, “Build 100."
▪ Subtract all payoffs in column from $800,000.
▪ Best outcome for high demand is $1,200,000 that is the result of third
alternative, “Build 150."
▪ Subtract all payoffs in column from $1,200,000.
▪ The following table illustrates computations and shows complete
opportunity loss table.

▪ Once the opportunity loss table has been constructed, locate the
maximum opportunity loss within each alternative.
▪ Pick the alternative with minimum value
▪ Minimax regret choice is second alternative, "Build 100." Regret of
$400,000 is minimum of maximum regrets over all alternatives.
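The two-step procedure just described (build the regret table column by column, then minimize the maximum regret) can be sketched as follows, using the section's construction-example payoffs.

```python
payoffs = {
    "Build 50":  [400_000, 400_000, 400_000],
    "Build 100": [100_000, 800_000, 800_000],
    "Build 150": [-200_000, 500_000, 1_200_000],
}

def minimax_regret(table):
    n = len(next(iter(table.values())))
    # Best payoff in each column (state of nature).
    best = [max(table[a][s] for a in table) for s in range(n)]
    # Regret = column best minus this alternative's payoff in that column.
    regret = {a: [best[s] - table[a][s] for s in range(n)] for a in table}
    choice = min(table, key=lambda a: max(regret[a]))
    return choice, regret

choice, regret = minimax_regret(payoffs)
print(choice, max(regret[choice]))  # Build 100 400000
```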

Decision-Making under Risk


❑ In decision making under risk, the probabilities are used to obtain expected
values of outcomes for each decision alternative.
Expected Value Method Defined
A calculation that summarizes the expected costs, revenues, profits or other
value of an alternative based on several possible value amounts, each of which
has a different probability.
❑ Values and Probabilities
▪ How do you get them?
o Educated guess: a guess based on knowledge and experience,
and therefore likely to be close to correct
o Historical data
▪ An educated guess should be used when you do not have historical
data, or when it is not appropriate to use the historical data.
▪ Summing expected values requires first obtaining the values and
probabilities by one of the previous methods.
❑ The expected value criterion is useful generally in two cases:
▪ Long run planning is appropriate, and decision situations repeat
themselves.
▪ The decision maker is risk neutral
❑ Limitations of expected value method
▪ Does not consider risk aversion
▪ Results in one number
▪ Subjective nature
❑ Calculation the expected value:
▪ For each decision calculate the expected payoff as follows:

▪ Expected payoff = ∑(Probability)(Payoff)
▪ (The summation is calculated across all the states of nature)
▪ Select the decision with the best expected payoff
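The expected-payoff calculation above is a straightforward probability-weighted sum; here the state probabilities (0.2, 0.5, 0.3 for low/medium/high demand) and payoffs follow the section's construction example.

```python
payoffs = {
    "Build 50":  [400_000, 400_000, 400_000],
    "Build 100": [100_000, 800_000, 800_000],
    "Build 150": [-200_000, 500_000, 1_200_000],
}
probs = [0.2, 0.5, 0.3]  # low, medium, high demand

def best_expected_value(table, p):
    """Expected payoff per alternative; return the best one and its EV."""
    ev = {a: sum(pi * x for pi, x in zip(p, table[a])) for a in table}
    best = max(ev, key=ev.get)
    return best, ev[best]

alt, ev = best_expected_value(payoffs, probs)
print(alt, round(ev))  # Build 100 660000
```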

Expected Value of Perfect Information


▪ The difference between the expected payoff under certainty and the
expected payoff under risk.
Expected value of perfect information = Expected payoff under
certainty - Expected payoff under risk

▪ Expected value with perfect information is the expected or average return
if one has perfect information before the decision has to be made.
▪ Choose best alternative for each state of nature and multiply its payoff
times probability of occurrence of that state of nature:
▪ Expected value with perfect information (EV with PI) =
(best payoff for first state of nature) x (probability of first state of
nature) + (best payoff for second state of nature) x (probability of
second state of nature) + . . . + (best payoff for last state of nature) x
(probability of last state of nature)
▪ EVPI = EV with PI - maximum EV
▪ It is also the smallest expected regret of any decision alternative
❑ EV with PI and EVPI
▪ Best outcome for state of nature low demand is “build 50” with a
payoff of $400,000
▪ Best outcome for state of nature medium demand is “build 100” with a
payoff of $800,000
▪ Best outcome for state of nature high demand is “build 150” with a
payoff of $1,200,000
▪ Expected value with perfect information = 400,000 x 0.2 + 800,000 x
0.5 + 1,200,000 x 0.3 = 80,000 + 400,000 + 360,000 = $840,000
▪ With perfect information, one would earn an average payoff of $840,000 if
the decision could be repeated many times.
▪ Maximum expected value without perfect information is $ 660,000.

▪ EVPI = EV with PI - maximum EV
= $840,000 - $660,000 = $180,000
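The EVPI formula can be checked in code: compute the probability-weighted best payoff per state, then subtract the best expected value obtainable without perfect information. Data follow the section's construction example.

```python
payoffs = {
    "Build 50":  [400_000, 400_000, 400_000],
    "Build 100": [100_000, 800_000, 800_000],
    "Build 150": [-200_000, 500_000, 1_200_000],
}
probs = [0.2, 0.5, 0.3]  # low, medium, high demand

def evpi(table, p):
    # With perfect information: pick the column best in every state.
    ev_with_pi = sum(pi * max(table[a][s] for a in table)
                     for s, pi in enumerate(p))
    # Without it: the best probability-weighted alternative.
    max_ev = max(sum(pi * x for pi, x in zip(p, table[a])) for a in table)
    return ev_with_pi - max_ev

print(round(evpi(payoffs, probs)))  # 180000 (= 840,000 - 660,000)
```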
Expected Opportunity Loss
▪ An alternative approach in decision making under risk is to minimize
Expected Opportunity Loss (EOL).
▪ Opportunity Loss is also called regret.
▪ The EOL for an alternative is computed as the weighted average of
all possible regrets for that alternative, where the weights are the
probabilities of the different outcomes. That is,
EOL (Alternative i) = (Regret of first outcome) x (probability of
first outcome) + (Regret of second outcome) x (probability of
second outcome) + ------- + (Regret of last outcome) x (probability
of last outcome)

EOL for Alternative Build 50 = 0 + 200,000 + 240,000 = 440,000


EOL for Alternative Build 100 = 60,000 + 0 + 120,000 = 180,000
EOL for Alternative Build 150 = 120,000 + 150,000 + 0 = 270,000

Note that the minimum EOL will always result in the same decision alternative
as the maximum EV.
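The EOL computation above can be sketched the same way as the other criteria; note how the minimum EOL equals the EVPI computed earlier. Data follow the section's construction example.

```python
payoffs = {
    "Build 50":  [400_000, 400_000, 400_000],
    "Build 100": [100_000, 800_000, 800_000],
    "Build 150": [-200_000, 500_000, 1_200_000],
}
probs = [0.2, 0.5, 0.3]  # low, medium, high demand

def expected_opportunity_loss(table, p):
    """Probability-weighted regret per alternative."""
    n = len(p)
    best = [max(table[a][s] for a in table) for s in range(n)]  # column bests
    return {a: sum(p[s] * (best[s] - table[a][s]) for s in range(n))
            for a in table}

eol = expected_opportunity_loss(payoffs, probs)
print(min(eol, key=eol.get), round(min(eol.values())))  # Build 100 180000
```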
-------------------------------------------------------------------------------------------------
Question Review
1. Discuss the difference between decision making under certainty, decision
making under uncertainty and decision making under risk.
2. State the meaning of: expected value, educated guess, expected value of
perfect information, moving average, transportation model, dynamic network
flow model, and directed graph.
3. Ahmed’s Bicycle Shop is considering three options for its facility next year.
Ahmed can expand his current shop, move to a larger facility, or make no
change. With a good market, the annual payoff would be £76,000 if he
expands, £90,000 if he moves, and £40,000 if he does nothing. With an
average market, his payoffs will be £30,000, £41,000, and £15,000,
respectively. With a poor market, his payoffs will be -£17,000, -£28,000, and
-£4,000, respectively.
a. Which option should Ahmed choose if he uses the maximax criterion?
b. Which option should Ahmed choose if he uses the maximin criterion?
c. Which option should Ahmed choose if he uses the equally likely
criterion?

d. Which option should Ahmed choose if he uses the criterion of realism
with α = 0.4?
e. Which option should Ahmed choose if he uses the minimax regret
criterion?
4. Ahmed (see problem 3) has gathered some additional information. The
probabilities of good, average, and poor markets are 0.25, 0.45, and 0.3,
respectively.
a. Using EVs, what option should Ahmed choose? What is the maximum
EV?
b. Using EOL, what option should Ahmed choose? What is the minimum
EOL?
c. Compute the EVPI and show that it is the same as the minimum EOL.
5. T/F Question
1) MSS modeling leads to reduced cost and increased revenue.
2) Problem identification and environmental analysis means scanning the
environment to figure out what problems exist and can be solved via a
data.
3) Variable identification means identifying the critical factors in a model
and their actions.
4) Static models take a single snapshot of a situation. During this
snapshot everything occurs in a single interval.
5) A decision on whether to make or buy a product is dynamic in nature.
6) A quarterly or annual income statement is static. Though it represents a
year’s operations, it occurs in a fixed time frame.
7) In static frame, the time frame can’t be “rolled” forward, but it is
nonetheless static.
8) Steady-state simulation is the main tool of process design, when
engineers must determine the best trade-off between capital costs,
operating costs, process performance, product quality, environmental
and safety factors.
9) Static models are time dependent.
10) Dynamic simulation, in contrast to steady-state simulation, represents
what happens when conditions vary from the steady-state over time.
11) Dynamic simulations are important because they use, represent, or
generate trends and patterns over time.
12) Average is an indicator frequently used in technical analysis showing
the average value of a security's price over a set period.
13) Moving averages are generally used to measure momentum and define
areas of possible support and resistance.
14) Once a static model is constructed can be extracted to represent the
dynamic nature of the problem.
15) In dynamic network flow model, capacity is the maximum flow that
may be sent through an edge or a vertex.
16) In dynamic network flow model, edge is a connection between two
vertices of a graph
17) The objective of optimization of problem with few alternatives
category is to find the good enough solution for a smaller number of
alternatives.
18) Linear and other mathematical programming models are techniques
used for find the best solution in optimization via algorithm category.

19) Simulation is used for find a good enough solution by experimenting
with the static model of the system.
20) Heuristics find a good enough solution using common sense rules.
21) In decision making under certainty, the decision maker knows with
certainty what conditions will subsequently occur and affect the
decision outcomes.
22) In decision-making under uncertainty, decision maker have no
information or knowledge at all about the various outcomes. That is,
they do not know the likelihood but probability that a specific outcome
will occur.
23) When probabilities are available, the situation is referred to as decision
making under uncertainty.
24) Select a decision-making criterion is an important step to make a
decision under certainty.
25) The optimistic point of view doesn’t consider the “down side” risk.
26) The optimistic point of view ignores the possible earnings from the
selected alternative.
27) The pessimistic point of view evaluates each decision by the maximum
possible return associated with the decision.
28) From the pessimistic point of view, the decision that yields the
maximum value of the minimum returns is selected.
29) A “protect” strategy in decision making under uncertainty tends to
select maximax criterion instead of maximin criterion.
30) The Laplace criterion approach interprets the condition of
“uncertainty” as equivalent to assuming that all states of nature are
equally likely to occur.
31) Criterion of Realism often called weighted balance.
32) In decision making under risk, the probabilities are used to obtain
expected values of outcomes for each decision alternative.
33) In decision-making under risk an educated guess of values and
Probabilities should be used when you do not have historical data, or
when it is not appropriate to use the historical data.
34) The expected value criterion is useful when the decision maker is risk
taker.
35) One of the most important limitations of expected value method is
does not consider risk aversion.
36) Expected value of perfect information equal expected payoff under
certainty minus expected payoff under uncertainty.
37) Expected value with perfect information is expected or average return,
if one has perfect information before decision has to be made.
38) An alternative approach in decision making under risk is to minimize
Expected Opportunity Loss.
