Institute of Technology
School of Mechanical and Industrial Engineering
Operation Research (Stochastic Approach)
1.1. Stochastic process
A stochastic process is any variable whose value changes over time in an uncertain way:
{ X(t), t ∈ T }
1.2. Applications
A stochastic process can be used to model counting processes.
As time passes, one can count additional items; therefore, counts are non-negative integers and hence discrete. Two common counting processes are:
a. Binomial processes
b. Poisson process
The Poisson process is one of the most widely used counting processes. It is usually used in scenarios where we are counting occurrences of certain events that appear to happen at a certain rate.
E.g., suppose that, from historical data, we know that earthquakes occur in a certain area at a rate of 2 per month. Apart from this rate, the timing of the earthquakes seems to be completely random. Thus, we can conclude that the Poisson process might be a good model for earthquakes.
Here, in a Poisson process, time is continuous and the occurrence time of an event is unknown in advance, whereas the state (the count of events) is discrete.
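To make the earthquake illustration concrete, here is a minimal Python simulation sketch (assuming the rate of 2 per month from the example and an arbitrary 12-month horizon); it uses the standard fact that the gaps between events of a Poisson process are exponentially distributed:

import random

# Minimal sketch: simulating the earthquake example as a Poisson process.
# Assumed values: rate = 2 events per month, a 12-month horizon.
rate = 2.0        # average earthquakes per month (lambda)
horizon = 12.0    # months to simulate

random.seed(42)
t, event_times = 0.0, []
while True:
    # Gaps between events are exponential(lambda), so event times are
    # continuous while the running count stays a non-negative integer.
    t += random.expovariate(rate)
    if t > horizon:
        break
    event_times.append(t)

print(len(event_times), "earthquakes in", int(horizon), "months")
print("event times (months):", [round(x, 2) for x in event_times])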
Cont…
In practice, the Poisson process, or extensions of it, have been used to model many counting phenomena. Formally, a counting process {N(t), t ≥ 0} is a Poisson process with rate λ if:
a. N(0) = 0
b. The process has independent increments; the numbers of events in disjoint time intervals are independent.
c. The number of events/arrivals in any interval of length t is Poisson distributed with mean λt. That is, for all s, t ≥ 0 we have:
P{N(t + s) − N(s) = n} = e^(−λt) (λt)^n / n!,  n = 0, 1, 2, …
Example 1. Customers arrive according to a Poisson process at a rate of 10 customers per hour, starting at 10:00.
a. Find the probability that there are 2 customers between 10:00 and 10:20.
b. Find the probability that there are 3 customers between 10:00 and 10:20 and 7 customers between 10:20 and 11:00.
Solution
a. Here the arrival rate is 10 customers per hour (i.e., λ = 10), and the interval between 10:00 and 10:20 has length t = 1/3 hr.
Thus, if N(t) is the number of arrivals in that interval, N(1/3) ~ Poisson(λt).
Therefore, P[X = 2] = P{N(1/3) = 2}
= e^(−λt) (λt)^n / n!
= e^(−10(1/3)) [10(1/3)]^2 / 2!
= e^(−3.33) × 5.556
≈ 0.198 ≈ 0.2  Ans.
b. The counts over the disjoint intervals 10:00–10:20 (t = 1/3 hr) and 10:20–11:00 (t = 2/3 hr) are independent, so
P{N(1/3) = 3} × P{N(2/3) = 7} = 0.220 × 0.148 ≈ 0.0325  Ans.
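As a quick check of both parts, here is a minimal Python sketch (the helper name poisson_pmf is just illustrative):

from math import exp, factorial

def poisson_pmf(n, mean):
    # P[N = n] when N is Poisson distributed with the given mean (lambda * t)
    return exp(-mean) * mean ** n / factorial(n)

lam = 10.0  # customers per hour

# a. 2 customers in 10:00-10:20, an interval of length t = 1/3 hr
p_a = poisson_pmf(2, lam * (1 / 3))

# b. 3 customers in 10:00-10:20 and 7 in 10:20-11:00 (t = 2/3 hr);
#    counts over disjoint intervals are independent, so the probabilities multiply
p_b = poisson_pmf(3, lam * (1 / 3)) * poisson_pmf(7, lam * (2 / 3))

print(round(p_a, 4))  # about 0.198
print(round(p_b, 4))  # about 0.0325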
1.5. Markov chain
This latter type of example, referred to as the "brand-switching" problem, will be used to demonstrate the principles of Markov analysis in the following discussion.
Handling temporal dependency in stochastic modeling, while often challenging, is sometimes necessary.
For a discrete-time stochastic process with a discrete state space, if the future state of the process depends only on the current state of the system, the process is called a Markov chain.
A stochastic process is a Markov process if the occurrence of a future state depends only on the immediately preceding state.
A Markov chain is a random sequence in which the dependence of successive events goes back only one unit in time. In other words, the future probabilistic behavior of the process depends only on its present state, not on its earlier history:
P(X(n+1) = j | X(n) = i, X(n−1), …, X(0)) = P(X(n+1) = j | X(n) = i).
This is called the Markov property.
Depending on the time index over which the state of the process/system changes, Markov chains are categorized into two (a simulation sketch of the discrete-time case follows this list):
1. Discrete-time Markov chain: a Markov process in which state transitions occur only at fixed times.
- The index set (time) T is countable, e.g., T = {0, 1, 2, …}.
- State transitions can occur only at those fixed times.
2. Continuous-time Markov chain: a Markov process in which the state can change at any time.
- The (time) index set T is infinite, i.e., t ≥ 0.
- The state of the process can change at any time.
- The inter-arrival time between state changes is exponentially distributed.
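A minimal sketch of a discrete-time Markov chain, using the two-state transition probabilities that appear in Example 2 below (0.7/0.3 from state 0 and 0.4/0.6 from state 1):

import random

# Minimal sketch of a discrete-time Markov chain with two states (0 and 1).
P = {
    0: [0.7, 0.3],  # P(next = 0 | current = 0), P(next = 1 | current = 0)
    1: [0.4, 0.6],  # P(next = 0 | current = 1), P(next = 1 | current = 1)
}

def step(state):
    # Markov property: the next state depends only on the current state
    return random.choices([0, 1], weights=P[state])[0]

random.seed(0)
state, path = 0, [0]
for _ in range(10):      # transitions happen only at fixed time steps
    state = step(state)
    path.append(state)
print(path)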
1.6. The Characteristics of Markov Analysis
For example, the probability of a customer's trading at National in month 2, given that the customer initially traded with Petroco, is denoted Np(2).
The probabilities of a customer's trading with Petroco and National in a future period, i, given that the customer traded initially with National, are defined as Pn(i) and Nn(i), respectively.
If a customer is presently trading with Petroco (month 1), the following probabilities exist:
Pp(1) = 1.0
Np(1) = 0.0
In other words, the probability of a customer's trading at Petroco in month 1, given that the
customer trades at Petroco, is 1.0.
These are the same probabilities we computed by using the decision tree analysis in
Figure F.1. However, whereas it would be cumbersome to determine additional
values by using the decision tree analysis, we can continue to use the matrix
approach as we have previously:
The state probabilities for several subsequent months are as follows:
Notice that as we go farther and farther into the future, the changes in the state probabilities
become smaller and smaller, until eventually there are no changes at all. At that point every
month in the future will have the same probabilities.
For this example, the state probabilities that result after some future month, i,
are:-
As in the previous case in which Petroco was the starting state, these state probabilities also
become constant after several periods. However, notice that the eventual state probabilities
(i.e., 0.33 and 0.67) achieved when National is the starting state are exactly the same as the
previous state probabilities achieved when Petroco was the starting state. In other words,
the probability of ending up in a particular state in the future is not dependent on the
starting state.
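A minimal sketch of this matrix computation; the one-step transition matrix below is an assumption inferred from the branch values in Figures F.1 and F.2 (0.60/0.40 starting from Petroco and 0.20/0.80 starting from National):

import numpy as np

# One-step transition matrix; these values are inferred from the branch
# probabilities in Figures F.1/F.2 (rows: this month, columns: next month).
P = np.array([[0.60, 0.40],   # Petroco  -> Petroco, National
              [0.20, 0.80]])  # National -> Petroco, National

for label, start in [("Petroco start", [1.0, 0.0]), ("National start", [0.0, 1.0])]:
    state = np.array(start)             # month 1 state probabilities
    for month in range(2, 10):
        state = state @ P               # next month's state probabilities
    print(label, np.round(state, 2))    # both settle near [0.33, 0.67]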
Example 2.
The weather on any day is in one of two states: state 1 is a non-rainy day and state 2 is a rainy day. The one-step transition probabilities are 0.7 (non-rainy to non-rainy), 0.3 (non-rainy to rainy), 0.4 (rainy to non-rainy), and 0.6 (rainy to rainy). Obtain the following:
A. The probability that day 1 is a non-rainy day, given that day 0 is a rainy day.
B. The probability that day 2 is a rainy day, given that day 0 is a non-rainy day.
Solution
A. This is the one-step transition probability from state 2 (rainy) to state 1 (non-rainy), which is 0.4.
B. P(1), in this case, is [0.7, 0.3] because it is given that day 0 is non-rainy. Then P(2) = P(1) · P:
0.7 × 0.7 + 0.3 × 0.4 = 0.61 (non-rainy) and 0.7 × 0.3 + 0.3 × 0.6 = 0.39 (rainy),
so the probability that day 2 is a rainy day is 0.39.
For the probability that day 100 is a rainy day given that day 0 is a non-rainy day, use P(n) = P(0) × P^n with n = 100, which is far into the future.
The last value is the steady-state probability vector, and this shows one way the steady-state probability values can be obtained.
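A short sketch of this calculation for the weather chain of Example 2, computing P(2) and P(100) directly:

import numpy as np

# Transition matrix of Example 2 (state 1 = non-rainy, state 2 = rainy)
P = np.array([[0.7, 0.3],
              [0.4, 0.6]])

p0 = np.array([1.0, 0.0])                   # day 0 is non-rainy

p2 = p0 @ np.linalg.matrix_power(P, 2)      # day 2 distribution
p100 = p0 @ np.linalg.matrix_power(P, 100)  # day 100, far into the future

print(np.round(p2, 2))    # [0.61 0.39]
print(np.round(p100, 4))  # about [0.5714 0.4286], the steady-state vector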
Example 3.
A small community has two gasoline service stations, Petroco and
National. The residents of the community purchase gasoline at the two
stations on a monthly basis.
Property 1: Transition probabilities for a given beginning state of the system sum to one.
"What information will Markov analysis provide?" The most obvious information
available from Markov analysis is the probability of being in a state at some future
time period, which is also the sort of information we can gain from a decision tree.
Figure F.1. Probabilities of future states, given that a customer trades with Petroco this month
To determine the probability of a customer's trading with Petroco in month 3,
given that the customer initially traded with Petroco in month 1, we must add the
two branch probabilities in Figure F.1 associated with Petroco:-
0.36 + 0.08 = 0.44, the probability of a customer's trading with Petroco in month 3.
0.24 + 0.32 = 0.56, the probability of a customer's trading with National in month 3.
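Each branch probability in the tree is the product of successive one-step transition probabilities. Assuming the one-step probabilities implied by the figures (0.60/0.40 from Petroco and 0.20/0.80 from National), the Petroco-start branches are, e.g., 0.60 × 0.60 = 0.36 and 0.40 × 0.20 = 0.08, which is why their sum, 0.44, is the month-3 probability of trading with Petroco.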
Figure F.2. Probabilities of future states, given that a customer trades with National this month. Note that the probability of continuing to buy from the same brand is higher than that of switching; e.g., in Figure F.2 the two-month path through Petroco and then Petroco has probability 0.12, while the path through National and then National has probability 0.64.
This same type of analysis can be performed under the condition that a customer
initially purchased gasoline from National, as shown in Figure F.2. Given that
National is the starting state in month 1, the probability of a customer's purchasing
gasoline from National in month 3 is
0.08 + 0.64 = 0.72, and the probability of a customer's trading with Petroco in month 3 is 0.12 + 0.16 = 0.28.
Notice that for each starting state, Petroco and National, the probabilities of ending up
in either state in month 3 sum to one
Final solution for the probability of trading in month 3:
Although the use of decision trees is perfectly logical for this type of analysis, it is
time consuming and burdensome. For example, if Petroco wanted to know the
probability that a customer who trades with it in month 1 will trade with it in month
10, a rather large decision tree would have to be constructed. Alternatively, the same
analysis performed previously using decision trees can be done by using matrix
algebra techniques.
1.7. Steady-State Probabilities
The probabilities of 0.33 and 0.67 in our example are referred to as
steady-state probabilities.
The steady-state probabilities are average probabilities that the system will be in a
certain state after a large number of transition periods. This does not mean the
system stays in one state. The system will continue to move from state to state in
future time periods; however, the average probabilities of moving from state to
state for all periods will remain constant in the long run. In a Markov process,
after a number of periods have passed, the probabilities will approach steady state.
Therefore, steady-state probabilities are average, constant probabilities that the system will be in a given state in the future.
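As a sketch, the steady-state vector can also be obtained directly by solving π = πP together with the requirement that the probabilities sum to one (again assuming the transition matrix inferred from Figures F.1 and F.2):

import numpy as np

# Assumed brand-switching transition matrix (Petroco, National),
# inferred from the branch values in Figures F.1 and F.2.
P = np.array([[0.60, 0.40],
              [0.20, 0.80]])

# Solve pi = pi.P together with pi[0] + pi[1] = 1:
# stack the balance equations (P.T - I) pi = 0 with the normalization row.
A = np.vstack([P.T - np.eye(2), np.ones((1, 2))])
b = np.array([0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

print(np.round(pi, 2))  # [0.33 0.67] -- the steady-state probabilities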