
Ambo University
Institute of Technology
School of Mechanical and Industrial Engineering
Operations Research (Stochastic Approach)

Instructor: Getu Girma (M.Sc.)
[email protected]
Chapter One
General introduction to the course
1.1. Deterministic Model vs. Stochastic Model

A deterministic model is specified by a set of equations that describe exactly how the system will evolve over time.

In a stochastic model, the evolution is at least partially random: if the process/model is run several times, it will not give identical results.

Different runs of a stochastic model are often called realizations of the process.
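As a concrete illustration, here is a minimal Python sketch using a toy update rule (the rule and function names are ours, not from the course): the deterministic version produces the same trajectory on every run, while the stochastic version produces a different realization each time.

```python
import random

def deterministic_run(x0, steps):
    """A fixed update rule: every run gives exactly the same trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(0.9 * xs[-1] + 1.0)
    return xs

def stochastic_run(x0, steps, rng):
    """The same rule plus random noise: each run is a different realization."""
    xs = [x0]
    for _ in range(steps):
        xs.append(0.9 * xs[-1] + 1.0 + rng.gauss(0.0, 0.5))
    return xs

print(deterministic_run(10.0, 3))                # identical on every run
print(stochastic_run(10.0, 3, random.Random()))  # realization 1
print(stochastic_run(10.0, 3, random.Random()))  # realization 2, not identical
```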
Con’t…

What is a stochastic process? Why do we need to study stochastic processes?

A stochastic process is a mathematical model of a probabilistic experiment that evolves in time in an uncertain way.

It is any variable whose value changes over time in an uncertain way, e.g. a stock price, an inventory level, or customer arrivals at a service center.

Con’t…
In probability theory, a stochastic process is a collection of (indexed) random variables.
This collection of random variables is frequently used to represent the evolution of some random quantity X over time t.
It is the study of how a random variable evolves over time.
Formal definition of a stochastic process:
A stochastic process is a family of random variables

{ X(t) : t ∈ T }
1.2. Applications
A stochastic process can be used to model:

The sequence of daily prices of a stock.

The sequence of failure times of a machine.

The sequence of hourly traffic loads on a given road.

We usually focus on the dependencies in the sequence of values generated by the process.

E.g. How do future prices of a stock depend on past prices?

What fraction of the time is a machine idle?

1.3. Classification of stochastic process
Stochastic processes can be classified into 4 categories based on two dimensions:
A. Time, and
B. State (the values the random variables assume).
A. Based on the time dimension at which the random variables take a value:
1. Discrete-time stochastic process: the time index set T is countable.
2. Continuous-time stochastic process: the time index set T is uncountable.
B. Based on the state dimension (the values the random variables assume):
3. Discrete-variable stochastic process: only certain discrete values are possible.
4. Continuous-variable stochastic process: the underlying variable can take any value within a certain range.
1.4. Poisson process and Exponential Distribution

Counting process: a stochastic process X is a counting process if X(t) is the number of items counted by time t.

As time passes, one can count additional items; therefore counts are non-negative integers and hence discrete.

It is the study of the occurrence of events.

Counting processes can be classified into two:

a. Binomial processes

b. Poisson processes
Con’t…

Counting process: a stochastic process {N(t), t ≥ 0} is said to be a counting process if N(t) represents the total number of "events" that occur by time t and the following conditions hold:
A. N(t) ≥ 0 for all t ≥ 0.
This condition tells us that the counting process must start at t = 0 and the count must be non-negative.
B. N(t) is integer-valued: the counting process has a discrete state space, since arrivals are discrete.
C. If s ≤ t, then N(s) ≤ N(t): the counting process must be non-decreasing in time.
D. For s ≤ t, the increment N(s, t) = N(t) − N(s) equals the number of events that occur in the interval (s, t].
Counting process properties can be classified into two:
A. Independent increments
A counting process is said to possess independent increments if the numbers of events that occur in disjoint time intervals are independent.
B. Stationary increments
A counting process is said to possess stationary increments if the distribution of the number of events that occur in any interval of time depends only on the length of the interval and not on the interval's end points.
In other words, the process has stationary increments if the number of events in the interval [s, s + t] has the same distribution for all s.
The Poisson Process

The Poisson process is one of the most widely used counting processes. It is usually used in scenarios where we are counting occurrences of events that appear to happen at a certain rate.

E.g. suppose that, from historical data, we know that earthquakes occur in a certain area at a rate of 2 per month. Beyond this average rate, the timing of the earthquakes seems to be completely random. Thus we can conclude that the Poisson process might be a good model for earthquakes.

In a Poisson process, time is continuous (the occurrence times of events are unknown in advance), while the count of events is discrete.
Cont…

In practice, the Poisson process or its extensions have been used to model:

The number of accidents at a site or in an area.

The number of customers entering a supermarket by time t.

The requests for individual documents on a web server.

The number of failures of a machine by time t.

Queueing systems, etc.


Cont…

A Poisson process can be defined in three (equivalent) ways.

1. The counting process {N(t), t ≥ 0} is said to be a Poisson process with rate λ > 0 if all of the following conditions hold:

a. N(0) = 0

b. The process N(t) has independent increments

c. The number of events/arrivals in any interval of length t is Poisson distributed with mean λt. That is, for all s, t ≥ 0 we have:

P[N(t + s) − N(s) = n] = e^(−λt) (λt)^n / n!,  where n = 0, 1, 2, 3, …
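As a companion to this definition, the following minimal Python sketch simulates one realization of a Poisson process by accumulating exponential inter-arrival times (the function name and parameters are illustrative, not from the lecture):

```python
import random

def simulate_poisson_process(rate, horizon, rng=None):
    """Return the event times of one realization of a Poisson process
    with the given rate on [0, horizon]. Inter-arrival times of a
    Poisson process are independent Exponential(rate) variables, so we
    accumulate exponential gaps until the horizon is passed."""
    rng = rng or random.Random()
    times, t = [], 0.0
    while True:
        t += rng.expovariate(rate)  # next inter-arrival gap
        if t > horizon:
            return times
        times.append(t)

# e.g. earthquakes at rate 2 per month, observed for 12 months:
events = simulate_poisson_process(rate=2.0, horizon=12.0)
print(len(events), "events; expected count =", 2.0 * 12.0)
```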
Example 1
The number of customers arriving at a grocery store can be modeled by a Poisson process with intensity λ = 10 customers per hour.

a. Find the probability that there are 2 customers between 10:00 and 10:20.

b. Find the probability that there are 3 customers between 10:00 and 10:20 and 7 customers between 10:20 and 11:00.

Solution

a. Here the arrival rate is 10 customers per hour (i.e. λ = 10), and the interval between 10:00 and 10:20 has length t = 1/3 hr.

Thus, if N(t) is the number of arrivals in that interval, we can write N(t) ~ Poisson(λt).
Solution
Therefore, P[X = 2] = P[N(1/3) = 2]
= e^(−λt) (λt)^n / n!
= e^(−10(1/3)) [10(1/3)]^2 / 2!
= e^(−3.33) × 5.556
≈ 0.1986 ≈ 0.2  Ans
Con’t…

b. Here we have two non-overlapping intervals, [10:00, 10:20] and [10:20, 11:00]. Thus, by independent increments, we can write the probability as a product:

P[3 arrivals in (10:00, 10:20)] × P[7 arrivals in (10:20, 11:00)]

= P[N(t1) = 3] × P[N(t2) = 7]

= [e^(−10(1/3)) (10(1/3))^3 / 3!] × [e^(−10(2/3)) (10(2/3))^7 / 7!]

= 0.22 × 0.147

≈ 0.0325
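These computations are easy to check numerically; a minimal Python check (the helper poisson_pmf is ours, not from the lecture):

```python
from math import exp, factorial

def poisson_pmf(n, mean):
    """P[N = n] for a Poisson random variable with the given mean."""
    return exp(-mean) * mean ** n / factorial(n)

# Part (a): 2 arrivals in a 1/3-hour window at rate 10 per hour.
print(poisson_pmf(2, 10 / 3))                           # ~0.198

# Part (b): by independent increments, multiply the two windows
# (means 10 * 1/3 and 10 * 2/3 respectively).
print(poisson_pmf(3, 10 / 3) * poisson_pmf(7, 20 / 3))  # ~0.0325
```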
1.5. Markov chain

 Markov analysis is different in that it does not provide a recommended decision. Instead, Markov analysis provides probabilistic information about a decision situation that can aid the decision maker in making a decision.

 In other words, Markov analysis is not an optimization technique; it is a descriptive technique that results in probabilistic information.
Con’t…

Markov analysis is specifically applicable to systems that exhibit probabilistic movement from one state (or condition) to another over time. For example, Markov analysis can be used to determine the probability that a machine will be running one day and broken down the next, or that a customer will change brands of cereal from one month to the next.

This latter type of example, referred to as the "brand-switching" problem, will be used to demonstrate the principles of Markov analysis in the following discussion.
Con’t…

Handling temporal dependency in stochastic modeling, while often challenging, is sometimes necessary.

For a discrete-time stochastic process with a discrete state space, if the future state of the process depends only on the current state of the system, the system is called a Markov chain.

A stochastic process is a Markov process if the occurrence of a future state depends only on the immediately preceding state.

A Markov chain is a random sequence in which the dependency of successive events goes back only one unit in time. In other words, the future probabilistic behavior of the process depends only on its current state, not on its past history. This is called the Markov property.
Con’t…
Depending on the time index over which the state of the process/system changes, Markov chains fall into two categories:
1. Discrete-time Markov chain: a Markov process in which state transitions occur only at fixed times.
The index set (time) T is countable.
2. Continuous-time Markov chain: a Markov process in which the state can change at any time. The (time) index set T is continuous, i.e. t ≥ 0.
The state of the process can change at any time, and the inter-arrival time between state changes is exponentially distributed.
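As a small illustration of a discrete-time Markov chain, here is a minimal Python sketch (the function name is ours; the transition matrix is the brand-switching one used later in this chapter):

```python
import random

# States: 0 = Petroco, 1 = National; rows are the current state.
# (Transition probabilities from the brand-switching example below.)
P = [[0.60, 0.40],
     [0.20, 0.80]]

def simulate_chain(P, start, steps, rng=None):
    """Simulate one path of a discrete-time Markov chain: the next
    state is drawn from the row of P for the current state only."""
    rng = rng or random.Random()
    state, path = start, [start]
    for _ in range(steps):
        state = rng.choices(range(len(P)), weights=P[state])[0]
        path.append(state)
    return path

print(simulate_chain(P, start=0, steps=12))
```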
1.6. The Characteristics of Markov Analysis

Markov analysis can be used to analyze a number of different decision situations; however, one of its most popular applications has been the analysis of customer brand switching. This is basically a marketing application that focuses on the loyalty of customers to a particular product brand, store, or supplier.

Markov analysis provides information on the probability of customers' switching from one brand to one or more other brands. An example of the brand-switching problem will be used to demonstrate Markov analysis.
Example 1

The probabilities of a customer's moving from service station to service station within a 1-month period, presented in tabular form in Table F.1, can also be presented in the form of a rectangular array of numbers called a matrix, as follows:

From / To     Petroco   National
Petroco        0.60       0.40
National       0.20       0.80

A transition matrix includes the transition probabilities for each state of nature.
Several new symbols are needed for Markov analysis using matrix algebra.
We will define the probability of a customer's trading with Petroco in period i, given that the customer initially traded with Petroco, as Pp(i).
Similarly, the probability of a customer's trading with National in period i, given that the customer initially traded with Petroco, is Np(i).

For example, the probability of a customer's trading at National in month 2, given that the customer initially traded with Petroco, is Np(2).
The probabilities of a customer's trading with Petroco and National in a future period i, given that the customer traded initially with National, are defined as Pn(i) and Nn(i), respectively.
If a customer is presently trading with Petroco (month 1), the following probabilities exist:

Pp(1) = 1.0

Np(1) = 0.0

In other words, the probability of a customer's trading at Petroco in month 1, given that the customer trades at Petroco, is 1.0.

These probabilities can also be arranged in matrix form, as follows:

[Pp(1)  Np(1)] = [1.0  0.0]
This matrix defines the starting conditions of our example system, given that a customer initially trades at Petroco, as in the decision tree in Figure F.1. In other words, a customer is originally trading with Petroco in month 1. We can determine the subsequent probabilities of a customer's trading at Petroco or National in month 2 by multiplying the preceding matrix by the transition matrix, as follows:

[Pp(2)  Np(2)] = [1.0  0.0] × [0.60  0.40; 0.20  0.80] = [0.60  0.40]

Computing the probabilities of a customer's trading at either station in future months uses this same matrix multiplication.
These probabilities of 0.60 for a customer's trading at Petroco and 0.40 for a customer's trading at National are the same as those computed in the decision tree in Figure F.1. We use the same procedure for determining the month 3 probabilities, except we now multiply the month 2 matrix by the transition matrix:

[Pp(3)  Np(3)] = [0.60  0.40] × [0.60  0.40; 0.20  0.80] = [0.44  0.56]

These are the same probabilities we computed by using the decision tree analysis in Figure F.1. However, whereas it would be cumbersome to determine additional values by using the decision tree analysis, we can continue to use the matrix approach as we have previously, as in the numerical sketch below.
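A short numerical sketch of this matrix approach, assuming NumPy is available (variable names are ours):

```python
import numpy as np

P = np.array([[0.60, 0.40],    # Petroco  -> Petroco, National
              [0.20, 0.80]])   # National -> Petroco, National

state = np.array([1.0, 0.0])   # month 1: customer trades at Petroco
for month in range(2, 10):
    state = state @ P          # month i vector = month i-1 vector times P
    print("month", month, state.round(2))
```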
The state probabilities for several subsequent months are as follows:

Month 4: [0.38  0.62]
Month 5: [0.35  0.65]
Month 6: [0.34  0.66]
Month 7: [0.34  0.66]
Month 8: [0.33  0.67]

Notice that as we go farther and farther into the future, the changes in the state probabilities become smaller and smaller, until eventually there are no changes at all. At that point every month in the future will have the same probabilities.
For this example, the state probabilities that result after some future month, i, are:

[Pp(i)  Np(i)] = [0.33  0.67]

In future periods, the state probabilities become constant. This characteristic of the state probabilities approaching a constant value after a number of time periods is shown for Pp(i) in the figure below.
2. Computing future state probabilities when the initial starting state is National.
This same type of analysis can be performed given the starting condition in which a customer initially trades with National in month 1. Given that a customer initially trades at the National station, then

[Pn(1)  Nn(1)] = [0.0  1.0]

Using these initial starting-state probabilities, we can compute future-state probabilities as follows:

[Pn(2)  Nn(2)] = [0.0  1.0] × [0.60  0.40; 0.20  0.80] = [0.20  0.80]
These are the same values obtained by using the decision tree analysis in Figure F.2.
Subsequent state probabilities, computed similarly, are shown next:

Month 3: [0.28  0.72]
Month 4: [0.31  0.69]
Month 5: [0.32  0.68]
Month 6: [0.33  0.67]
Month 7: [0.33  0.67]

As in the previous case in which Petroco was the starting state, these state probabilities also become constant after several periods. However, notice that the eventual state probabilities (i.e., 0.33 and 0.67) achieved when National is the starting state are exactly the same as the previous state probabilities achieved when Petroco was the starting state. In other words, the probability of ending up in a particular state in the future is not dependent on the starting state.
Example 2.

State 1 is a non-rainy day and state 2 is a rainy day, with one-step transition matrix

P = [0.7  0.3; 0.4  0.6]

Obtain the following:

A. The probability that day 1 is a non-rainy day, given that day 0 is a rainy day.

B. The probability that day 2 is a rainy day, given that day 0 is a non-rainy day.

C. The probability that day 100 is a rainy day, given that day 0 is a non-rainy day.


Con’t…

A. Given that day 0 is rainy, the probability that day 1 is non-rainy is the rainy → non-rainy entry of P, i.e. 0.4.

B. P(1) in this case is [0.7, 0.3], because it is given that day 0 is non-rainy.
The day-2 probabilities are then 0.7 × 0.7 + 0.3 × 0.4 = 0.61 (non-rainy) and 0.7 × 0.3 + 0.3 × 0.6 = 0.39 (rainy), so the probability that day 2 is rainy is 0.39.

C. For the probability that day 100 is a rainy day given that day 0 is non-rainy, compute P(n) = P(0) × P^n for n far into the future, as in the sketch below.
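A minimal NumPy sketch of part C, assuming the transition matrix inferred from the computations above:

```python
import numpy as np

# State 1 = non-rainy, state 2 = rainy.
P = np.array([[0.7, 0.3],
              [0.4, 0.6]])

p0 = np.array([1.0, 0.0])                    # day 0 is non-rainy
print(p0 @ np.linalg.matrix_power(P, 2))     # day 2:   [0.61 0.39]
print(p0 @ np.linalg.matrix_power(P, 100))   # day 100: ~[0.5714 0.4286]
```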
Con’t…

For large n this converges to the steady-state probability vector, approximately [0.57  0.43]: the last value shows the steady-state probability vector and illustrates how the steady-state probabilities can be obtained by repeated multiplication. They can also be obtained directly, as sketched below.
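The steady-state vector can also be found by solving πP = π together with π1 + π2 = 1, rather than by repeated multiplication; a minimal sketch, assuming the same inferred matrix:

```python
import numpy as np

P = np.array([[0.7, 0.3],
              [0.4, 0.6]])

# Solve pi P = pi with pi summing to 1: take one balance equation
# from (P^T - I) pi = 0 and replace the other with the normalization.
A = np.vstack([(P.T - np.eye(2))[0], np.ones(2)])
pi = np.linalg.solve(A, np.array([0.0, 1.0]))
print(pi)   # ~[0.5714 0.4286]
```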
Example 3.
A small community has two gasoline service stations, Petroco and National. The residents of the community purchase gasoline at the two stations on a monthly basis.

The marketing department of Petroco surveyed a number of residents and found that the customers were not totally loyal to either brand of gasoline. Customers were willing to change service stations as a result of advertising, service, and other factors.

The marketing department found that if a customer bought gasoline from Petroco in any given month, there was only a .60 probability that the customer would buy from Petroco the next month and a .40 probability that the customer would buy gas from National the next month. Likewise, if a customer traded with National in a given month, there was an .80 probability that the customer would purchase gasoline from National in the next month and a .20 probability that the customer would purchase gasoline from Petroco. These probabilities are summarized below:

From / To     Petroco   National
Petroco        0.60       0.40
National       0.20       0.80
The properties for the service station example just described define a Markov process. They
are summarized in Markov terminology as follows:

Property 1: Transition probabilities for a given beginning state of the system sum to one.

Property 2: The probabilities apply to all participants in the system.

Property 3: The transition probabilities are constant over time.

Property 4: The states are independent over time.


Markov Analysis Information

"What information will Markov analysis provide?" The most obvious information
available from Markov analysis is the probability of being in a state at some future
time period, which is also the sort of information we can gain from a decision tree.

Figure F.1. Probabilities of future states, given that a customer trades with Petroco this month
To determine the probability of a customer's trading with Petroco in month 3, given that the customer initially traded with Petroco in month 1, we must add the two branch probabilities in Figure F.1 associated with Petroco:

0.36 + 0.08 = 0.44, the probability of a customer's trading with Petroco in month 3.

Likewise, to determine the probability of a customer's purchasing gasoline from National in month 3, we add the two branch probabilities in Figure F.1 associated with National:

0.24 + 0.32 = 0.56, the probability of a customer's trading with National in month 3.
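The same month-3 numbers fall out of squaring the transition matrix, since two monthly transitions separate month 1 from month 3; a quick NumPy check (variable names are ours):

```python
import numpy as np

P = np.array([[0.60, 0.40],
              [0.20, 0.80]])

P2 = np.linalg.matrix_power(P, 2)   # two transitions: month 1 -> month 3
print(P2[0])   # Petroco start:  [0.44 0.56]
print(P2[1])   # National start: [0.28 0.72]
```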
Con’t…

Figure F.2. Probabilities of future states, given that a customer trades with National this month. Note that in each branch, buying from the same brand again is more likely than switching: e.g., Petroco → Petroco = 0.12 (vs. Petroco → National = 0.08) and National → National = 0.64 (vs. National → Petroco = 0.16).
Con’t…

This same type of analysis can be performed under the condition that a customer initially purchased gasoline from National, as shown in Figure F.2. Given that National is the starting state in month 1, the probability of a customer's purchasing gasoline from National in month 3 is

0.08 + 0.64 = 0.72, and the probability of a customer's trading with Petroco in month 3 is

0.12 + 0.16 = 0.28

Notice that for each starting state, Petroco and National, the probabilities of ending up in either state in month 3 sum to one.
Final solution for the probability of trading in month 3

Although the use of decision trees is perfectly logical for this type of analysis, it is time consuming and burdensome. For example, if Petroco wanted to know the probability that a customer who trades with it in month 1 will trade with it in month 10, a rather large decision tree would have to be constructed. Alternatively, the same analysis performed previously using decision trees can be done by using matrix algebra techniques.
1.7. Steady-State Probabilities
The probabilities of 0.33 and 0.67 in our example are referred to as
steady-state probabilities.

The steady-state probabilities are average probabilities that the system will be in a
certain state after a large number of transition periods. This does not mean the
system stays in one state. The system will continue to move from state to state in
future time periods; however, the average probabilities of moving from state to
state for all periods will remain constant in the long run. In a Markov process,
after a number of periods have passed, the probabilities will approach steady state.
Therefore, steady-state probabilities are average, constant probabilities that the system will be in a state in the future.

If we continued the matrix multiplications month after month, the state probabilities would change by smaller and smaller amounts until eventually we arrived at the steady-state probabilities.
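These steady-state values can also be obtained directly, without iterating. A short sketch of the algebra, using the transition probabilities given earlier: at steady state, the probabilities π = [πp  πn] satisfy πP = π together with the normalization πp + πn = 1, i.e.

πp = 0.60 πp + 0.20 πn
πn = 0.40 πp + 0.80 πn
πp + πn = 1

From the first equation, 0.40 πp = 0.20 πn, so πn = 2 πp. Substituting into the normalization gives 3 πp = 1; hence πp = 1/3 ≈ 0.33 and πn = 2/3 ≈ 0.67, matching the steady-state values above.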
