
has been tossed) we change the color of the ball (from red to black or from black to red).

To model this situation as a stochastic process, we define time t to be the time after the coin has been flipped for the tth time and the chosen ball has been painted. The state at any time may be described by the vector $[u\ r\ b]$, where u is the number of unpainted balls in the urn, r is the number of red balls in the urn, and b is the number of black balls in the urn. We are given that $X_0 = [2\ 0\ 0]$. After the first coin toss, one ball will have been painted either red or black, and the state will be either $[1\ 1\ 0]$ or $[1\ 0\ 1]$. Hence, we can be sure that $X_1 = [1\ 1\ 0]$ or $X_1 = [1\ 0\ 1]$. Clearly, there must be some sort of relation between the $X_t$'s. For example, if $X_t = [0\ 2\ 0]$, we can be sure that $X_{t+1}$ will be $[0\ 1\ 1]$.
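As a concrete illustration of how such a state sequence evolves, here is a minimal Python sketch of the ball-painting process; the function simulate_urn and its structure are our own invention for illustration, not part of the text:

```python
import random

def simulate_urn(num_flips, seed=None):
    """Simulate the coin-toss ball-painting process.

    The state is the vector (u, r, b): the numbers of unpainted,
    red, and black balls in the urn. The start state is X0 = (2, 0, 0).
    """
    rng = random.Random(seed)
    u, r, b = 2, 0, 0                     # X0 = [2 0 0]
    states = [(u, r, b)]
    for _ in range(num_flips):
        # Choose one of the two balls in the urn uniformly at random.
        ball = rng.choices(["unpainted", "red", "black"], weights=[u, r, b])[0]
        heads = rng.random() < 0.5        # the coin toss
        if ball == "unpainted":
            u -= 1
            if heads:
                r += 1                    # paint the unpainted ball red
            else:
                b += 1                    # paint the unpainted ball black
        elif ball == "red":
            r, b = r - 1, b + 1           # already painted: red -> black
        else:
            r, b = r + 1, b - 1           # already painted: black -> red
        states.append((u, r, b))
    return states

# X0 is always (2, 0, 0); X1 is (1, 1, 0) or (1, 0, 1), as noted above.
print(simulate_urn(num_flips=4, seed=17))
```

Running the sketch repeatedly exhibits exactly the relations noted above: $X_1$ is always $(1, 1, 0)$ or $(1, 0, 1)$, and from $(0, 2, 0)$ the next state is always $(0, 1, 1)$.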

EXAMPLE 3  CSL Computer Stock

Let $X_0$ be the price of a share of CSL Computer stock at the beginning of the current trading day. Also, let $X_t$ be the price of a share of CSL stock at the beginning of the tth trading day in the future. Clearly, knowing the values of $X_0, X_1, \ldots, X_t$ tells us something about the probability distribution of $X_{t+1}$; the question is, what does the past (stock prices up to time t) tell us about $X_{t+1}$? The answer to this question is of critical importance in finance. (See Section 17.2 for more details.)

We close this section with a brief discussion of continuous-time stochastic processes. A continuous-time stochastic process is simply a stochastic process in which the state of the system can be viewed at any time, not just at discrete instants in time. For example, the number of people in a supermarket t minutes after the store opens for business may be viewed as a continuous-time stochastic process. (Models involving continuous-time stochastic processes are studied in Chapter 20.) Since the price of a share of stock can be observed at any time (not just the beginning of each trading day), it may be viewed as a continuous-time stochastic process. Viewing the price of a share of stock as a continuous-time stochastic process has led to many important results in the theory of finance, including the famous Black-Scholes option pricing formula.

17.2 What Is a Markov Chain?


One special type of discrete-time stochastic process is called a Markov chain. To simplify our exposition, we assume that at any time, the discrete-time stochastic process can be in one of a finite number of states labeled $1, 2, \ldots, s$.

DEFINITION

A discrete-time stochastic process is a Markov chain if, for $t = 0, 1, 2, \ldots$ and all states,

$$P(X_{t+1} = i_{t+1} \mid X_t = i_t,\ X_{t-1} = i_{t-1}, \ldots, X_1 = i_1,\ X_0 = i_0) = P(X_{t+1} = i_{t+1} \mid X_t = i_t) \qquad (1)$$

Essentially, (1) says that the probability distribution of the state at time $t+1$ depends on the state at time t ($i_t$) and does not depend on the states the chain passed through on the way to $i_t$ at time t.

In our study of Markov chains, we make the further assumption that for all states i and j and all t, $P(X_{t+1} = j \mid X_t = i)$ is independent of t. This assumption allows us to write


$$P(X_{t+1} = j \mid X_t = i) = p_{ij} \qquad (2)$$

where $p_{ij}$ is the probability that given the system is in state i at time t, it will be in state j at time $t+1$. If the system moves from state i during one period to state j during the next period, we say that a transition from i to j has occurred. The $p_{ij}$'s are often referred to as the transition probabilities for the Markov chain.
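As a worked check (our own computation, applying the rules of the ball-painting example and writing the state vectors as subscripts): from state $[1\ 1\ 0]$, the chosen ball is unpainted with probability $\frac{1}{2}$ and red with probability $\frac{1}{2}$, so

$$p_{[1\,1\,0],[0\,2\,0]} = \tfrac{1}{2} \cdot \tfrac{1}{2} = \tfrac{1}{4}, \qquad p_{[1\,1\,0],[0\,1\,1]} = \tfrac{1}{2} \cdot \tfrac{1}{2} = \tfrac{1}{4}, \qquad p_{[1\,1\,0],[1\,0\,1]} = \tfrac{1}{2},$$

since a chosen unpainted ball is painted red or black according to the coin, while a chosen red ball is repainted black regardless of the coin. The three probabilities sum to 1, as they must.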

Equation (2) implies that the probability law relating the next period's state to the current state does not change (or remains stationary) over time. For this reason, (2) is often called the Stationarity Assumption. Any Markov chain that satisfies (2) is called a stationary Markov chain.
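A minimal Python sketch of a stationary chain, assuming a hypothetical two-state transition matrix whose entries are invented purely for illustration:

```python
import random

# Hypothetical transition matrix for states 1 and 2:
# P[i][j] = p_ij = P(X_{t+1} = j | X_t = i); each row sums to 1.
P = {
    1: {1: 0.9, 2: 0.1},
    2: {1: 0.5, 2: 0.5},
}

def step(state, rng):
    """One transition using the p_ij's. Stationarity means the same
    probabilities apply at every t, so t never appears here."""
    targets = list(P[state])
    weights = [P[state][j] for j in targets]
    return rng.choices(targets, weights=weights)[0]

rng = random.Random(0)
x, path = 1, [1]                 # X0 = 1
for _ in range(10):
    x = step(x, rng)             # the same rule in every period
    path.append(x)
print(path)
```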

Our study of Markov chains also requires us to define $q_i$ to be the probability that the chain is in state i at time 0; in other words, $P(X_0 = i) = q_i$. We call the vector $q = [q_1\ q_2\ \cdots\ q_s]$ the initial probability distribution for the Markov chain. In most applications, the transition probabilities
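The initial distribution works together with the transition probabilities: by the law of total probability, $P(X_1 = j) = \sum_i q_i\, p_{ij}$. A short Python sketch, with invented numbers for q and the $p_{ij}$'s:

```python
# Invented two-state example: q[i] = P(X0 = i), P[i][j] = p_ij,
# with states indexed 0 and 1 for convenience.
q = [0.3, 0.7]
P = [[0.9, 0.1],
     [0.5, 0.5]]

# P(X1 = j) = sum over i of q_i * p_ij (law of total probability).
dist_x1 = [sum(q[i] * P[i][j] for i in range(2)) for j in range(2)]
print(dist_x1)   # ~ [0.62, 0.38], up to floating-point rounding
```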
