0% found this document useful (0 votes)
291 views34 pages

HiddenMarkovModels RobertFreyStonyBrook PDF

This document provides an overview and tutorial on hidden Markov models (HMMs) with univariate Gaussian outcomes. Specifically, it summarizes the HMMUG model, which uses a discrete-time Markov process with unobserved states to model a time series of continuous, normally distributed observations. It describes using the EM algorithm to estimate the model parameters by maximizing the likelihood of the observed data. The EM algorithm iterates between estimating the missing state data given the current parameters, and re-estimating the parameters based on the estimated states.
Copyright
© © All Rights Reserved
We take content rights seriously. If you suspect this is your content, claim it here.
Available Formats
Download as PDF, TXT or read online on Scribd
0% found this document useful (0 votes)
291 views34 pages

HiddenMarkovModels RobertFreyStonyBrook PDF

This document provides an overview and tutorial on hidden Markov models (HMMs) with univariate Gaussian outcomes. Specifically, it summarizes the HMMUG model, which uses a discrete-time Markov process with unobserved states to model a time series of continuous, normally distributed observations. It describes using the EM algorithm to estimate the model parameters by maximizing the likelihood of the observed data. The EM algorithm iterates between estimating the missing state data given the current parameters, and re-estimating the parameters based on the estimated states.
Copyright
© © All Rights Reserved
We take content rights seriously. If you suspect this is your content, claim it here.
Available Formats
Download as PDF, TXT or read online on Scribd
You are on page 1/ 34

Tutorial:

Hidden Markov Models with Univariate Gaussian Outcomes

Robert J. Frey, Research Professor


Stony Brook University, Applied Mathematics and Statistics

[email protected]
http://www.ams.sunysb.edu/~frey

22 December 2009

Hidden Markov Models


Consider a system which evolves according to a Markov process, which may be either continuous- or discrete-time, and may
have finitely or infinitely many states. This underlying Markov process is not directly observable; i.e., the system states are
occult or hidden. Associated with each state is a stochastic process which produces outcomes which are, however, observable.
As the system evolves over time in accordance with the occult Markov chain it produces these observable outcomes in a state-
dependent fashion.

For a concrete example, take a simple model of the monthly returns of a stock market. The market has two states, a "Bull" state
characterized by "good" performance and a "Bear" state characterized by "bad" performance. The transistion from one state to
another is controlled by a two-state Markov chain. Given the states i, j {Bull. Bear} of the market in a given month, the
probability the market transitions from state i to state j in the next month is aHi, jL. If the market is in some state i then its log
return is Normally distributed with state-dependent mean m(i) and standard deviation s(i). This simple market model is illus-
trated below:
2 HMMUG_20091222_05.nb

aHBull, BearL aHBear, BearL


Bull Bear
aHBull, BullL aHBear, BullL

8mHBullL, sHBullL< 8mHBearL, sHBearL<

Not what one would call the last word in market models. Limiting the granularity to monthly leaves out a great deal of informa-
tion. The market dynamics are undoubtedly complex and two states may not be enough to capture this. Even during periods when
a market appears to be reasonably characterized as bull or bear, the log returns tend to be more kurtotic than a Normal distribu-
tion would predict.

On the other hand, models always leave out stuff. That's the point. As demonstrated below, this simple model captures some
interesting aspects of real stock market behavior. The question is: Given a set of observations and the framework of a model such
as that above, how does one estimate its parameters?

Objectives of This Tutorial

"The road to learning by precept is long, but by example short and effective." ~Seneca

Lawrence Rabiner's excellent "A Tutorial on Hidden Markov Models and Selected Applications in Speech Recognition" [Rabiner
1989] is available here as a PDF from his faculty website (as of 09 Dec 2009) at UC Santa Barbara. The tutorial covers the
maximum likelihood estimation (MLE) of hidden Markov models with both discrete and continuous outputs as well as several
extensions, such as autoregressive and non-ergodic models. It also covers alternatives to MLE, such as estimating Viterbi
(maximum probability) sequences. Finally, it covers important issues of implementation such as intializing parameters and
scaling computations to avoid under- or overflow conditions.

However, Rabiner's paper focuses on the theory as it pertains to discrete outcome models used in speech recognition. It defers the
details necessary for practical implementation until fairly late in the presentation. Where it does discuss continuous outcome
models, it does so using the most general of cases: finite mixtures of multivariate Gaussians. It also does not present an end-to-
end, step-by-step description of the practical implementation of the algorithm. Although it discusses applications, it does not
present actual working examples.

The objectives of this tutorial are to:


Present the general theory but narrow the focus to discrete-time models with continous outcomes, which are of more interest in quantitative
finance,
Discuss a simple--yet interesting and useful--case, a discrete time HMM with univariate Gaussian outcomes, and thereby provide a somewhat
more accessible introduction,
Set out a complete, detailed sequence of the computations required and discuss how the list/vector processing facilities of languages such as
Mathematica and MATLAB can be exploited in that implementation,
Provide a set of working tools and apply them to actual data in an active document, i.e., a Mathematica notebook, that students can use to run
examples.
Cover more recent developments and references, especially to those that pertain to financial time series.

None of these comments, however, should be interpreted as a criticism of [Rabiner 1989] and the student is encouraged to read
and work through that paper and the other references provided.

Model Overview and Theory


HMMUG_20091222_05.nb 3

Model Overview and Theory


Although the notation of [Rabiner 1989] is followed in most instances, there are a few modifications. The variable subscript is
reserved for time indexing, while state indexing is done parenthetically with the aims that it makes the mathematics both easier to
follow and more straightforward to transcribe into program data structures. Where Rabiner's original notation was at variance
with that which might be more familiar to the student, the more common usage was adopted; e.g., q is used instead of l to
indicate the parameter set. Finally, certain variables were used with different meanings in different places, e.g., ct for the scaling
constant [Rabiner 1989, (91)] and c j, m for the finite mixture of multivariate Gaussians proportions [Rabiner 1989; (50)]; herein,
nt is used for the scaling constant.

Description of the HMMUG Model

The hidden Markov model with univariate Gaussian outcomes (HMMUG) is the focus here. At a given time t we are presented
with a discrete-time system that can be in one of N states:
S = 8sH1L, , sHiL, , sHNL<

The states are not observable and are represented by:

Q = 8q1 , , qt , , qT <

The states evolve over time according to a finite-state, ergodic Markov chain characterized by its intial state and transition matrix:

iHiL = Prob@q1 = sHiLD


aHi, jL = Prob@qt+1 = sH jL qt = sHiLD

An observation is generated at each point in time in a manner that is conditioned on the state of the system. Here we assume that
the observations are generated from a univariate Gaussian distribution with state-dependent mean and standard deviation:
Density@xt qt = sHiLD = y@x mHiL, sHiLD

The symbol q will be used to refer to the parameters of the model collectively; implicit in the parameters is the number of states
of the system:
q = 8i, a, m, s<

We are given a time series of T periodic observations:

X = 8x1 , , xt , , xT <

and our objective is to fit parameters q given the observations X.

The model will be fit using maximum likelihood estimation (MLE), i.e., by determining the values of the parameters which
maximizes the likelihood of observing the data:
@qMLE XD = argmax@ Prob@X qD D
q

EM Algorithm

In the HMMUG model, the state of the system qt cannot be directly observed. If we had such state information, then estimating
the parameters q would be trivial.

The EM algorithm [Dempster et al., 1977] can be used for MLE when there are occult (i.e., hidden, latent, censored, or missing)
data or when a problem can be reformulated in those terms. Informally, the EM algorithm takes the observed data and a current
estimate of the parameters and uses them to estimate the missing data; it then takes the observed data and the estimated missing
data and uses them to produce a new estimate of the parameters. Remarkably, under suitable conditions, this iterative process
converges to a local maximum of the likelihood function [Dempster et al. 1977].
4 HMMUG_20091222_05.nb

The EM algorithm [Dempster et al., 1977] can be used for MLE when there are occult (i.e., hidden, latent, censored, or missing)
data or when a problem can be reformulated in those terms. Informally, the EM algorithm takes the observed data and a current
estimate of the parameters and uses them to estimate the missing data; it then takes the observed data and the estimated missing
data and uses them to produce a new estimate of the parameters. Remarkably, under suitable conditions, this iterative process
converges to a local maximum of the likelihood function [Dempster et al. 1977].

More formally, at the hth iteration given a complete data set X, occult data Z, and a working estimate of parameters qHhL , the
expectation (E) step calculates the expected value of the log likelihood function of the conditional distribution of Z given X and
qHhL .

Aq, qHhL E = EAlog @Z X, qD X, qHhL E

The maximization (M) step then finds a successor parameter set qHh+1L which maximizes that expectation.

qHh+1L = argmax Aq, qHhL E


q

The parameter set qHh+1 then becomes the new working parameter set. EM iterations continue until log @q XD converges. For
the HMMUG model and many other cases of interest, the E-step reduces to estimating sufficient statistics for Z given X and q and
the M-Step to maximizing @q X, ZD using the "completed" data.

Each EM iteration results in a non-decreasing likelihood, but the EM algorithm converges sublinearly and may find a local
maximum. There may be material numerical instabilities affecting convergence in practice. The algorithm may set certain
elements of q to explain a single point, and this drives the likelihood towards infinity. Single-point fits mean either that a less
complex model is appropriate or that the offending data point is an outlier and must be removed from the sample [Dempster et
al., 1977] [McLachlan 2008].

In the HMMUG model, the state of the system qt cannot be directly observed. If we had such state information, then estimating
the parameters would be trivial. Thus, fitting an HMMUG model can be viewed as a missing data problem in which one is given
the observations X but are missing the states Q.

E-Step

The E-step will involve the estimation of the following quantities [Baum et al. 1970]:

xt Hi, jL = Prob@qt = sHiL, qt+1 = sH jL X, qD

How does xt represent the missing state data? Clearly, if we knew xt with certainty, we could use it to separate the outcomes into
subsets depending upon state and use each subset to estimate m and s for each state. It would also allow us to count the frequency
of transitions from one state to another and thereby estimate a. As it is, the best we can do is estimate xt probabilisticly given X
and qHhL , i.e., as the solution to the expectation @q qHhL D.

M-Step

For the HMMUG model, the M-Step, the re-estimation of q based on the completed data, will be accomplished for m(i) and s(i)
by weighting each observation proportionally to the probability that it is in a given state at a given time and for the transition
matrix by summing the xt across time and normalizing the rows to sum to unity [Baum et al. 1970].

Likelihood Function

Direct calculation of [q | X] is not computationally tractable [Baum et al. 1970] [Rabiner 1989]. Let Q denote an arbitrary state-
sequence, then
HMMUG_20091222_05.nb 5

@q XD = Prob@X Q, qD Prob@Q qD
"Q

The computation of the likelihood for any one state sequence given the observed data and parameters is straightforward; how-
ever, there are N T such state sequences. Consider a modest problem in which we wish to fit a two-state model to one-year of
daily returns. The total number of possible state sequences would be 2250 > 1.8 1075 . In the nave form above, the computation
of the likelihood is not practical.

Baum-Welch Algorithm

Forunately, the forward-backward, or Baum-Welch, algorithm [Baum et al. 1970] employs a clever recursion to compute both
@q, qHhL D and @q XD efficiently; that paper also represents an early example of the application of the EM algorithm years before
it was more generally and formally codified in [Dempster et al., 1977].

Consider a case in which N = 3 and T = 10, i.e., three states and five periods. Even in a toy problem such as this there are 310 =
59,049 paths. However, as illustrated below, those paths are recombinant, forming a lattice.
states

time

The forward-backward algorithm exploits this topology to efficiently estimate the probabilities required to compute the both the
conditional expectation @q, qHhL D and the log likelihood log @X qD. The algorithm is a form of dynamic programming.

Forward-Backward Recursions

In presenting the forward-backward calculations, the variables are colored coded to cross-reference them with their associated
specifications as probabilities in the hope that this is more illuminating than distracting. The exposition in no way depends on this
color coding, however.

Forward Recursion: For a forward recursion, consider the example below in which we wish to compute
Prob@x1 , , x4 , q4 = sH3L qD and know the associated probabilities of the N predecessor states. Shown below are the target
8q4 = sH3L< and it predecessors 8q3 = sH1L, q3 = sH2L, q3 = sH3L< with the arrows meant to show the flow of computation.
states

time

From the density Prob@x4 q4 = sH1L, qD, the vector of predecessor forward probabilities Prob@x1 , , x3 , q3 = sHkL qD, and the
appropriate column of the transition matrix Prob@q4 = 1 q3 = sHkLD the required calculation expressed generally is:
6 HMMUG_20091222_05.nb

From the density Prob@x4 q4 = sH1L, qD, the vector of predecessor forward probabilities Prob@x1 , , x3 , q3 = sHkL qD, and the
appropriate column of the transition matrix Prob@q4 = 1 q3 = sHkLD the required calculation expressed generally is:

Prob@x1 , , xt , qt = sHiL qD =
N
Prob@xt qt = sHiL, qD Prob@x1 , , xt-1 , qt-1 = sHkL qD Prob@qt = i qt-1 = sHkLD
k=1

If one defines the following variables:

bt HiL = Prob@xt qt = sHiL, qD


at HiL = Prob@x1 , , xt , qt = sHiL qD

then:
N
at HiL = bt HiL at-1 HkL aHk, iL
k=1

An obvious initial condition to start the forward recursion is:

a1 HiL = iHiL b1 HiL

The forward recursion starts at time t = 1 and recursively calculates the forward probabilities across states for each time period
up to t = T.

Backward Recursion: For the backward recursion, consider the example below in which we wish to compute
Prob@x5 , , xT q4 = sH2L, qD and know the associated probabilities of its N successor states Prob@x6 , , x10 q5 = sHkL, qD.
Shown below are the target 8q4 = sH2L< and it successors 8q5 = sH1L, q5 = sH2L, q5 = sH3L< with the arrows again meant to show the
flow of computation.
states

time

From the transition probabilities Prob@q5 = sHkL q4 = sH2LD, the densities Prob@x5 q5 = sHkL, qD, and successor backward probabili-
ties Prob@x6 , , x10 q5 = sHkL, qD the required calculation expressed generally is:

Prob@xt+1 , , xT qt = sHiL, qD =
N
Prob@qt+1 = sHkL qt = sHiLD Prob@xt+1 qt+1 = sHkL, qD Prob@xt+2 , , xT qt+1 = sHkL, qD
k=1

If one defines the following variables:

bt HiL = Prob@xt+1 , , xT qt = sHiL, qD

then:
HMMUG_20091222_05.nb 7

M
bt HiL = aHi, kL bt+1 HkL bt+1 HkL
k=1

The initial condition to start the backward recursion is arbitrarily chosen as:

bT HiL = 1

The backward recursion starts at time t = T and recursively calculates the backward probabilities acrosss states for each time
period down to t = 1.

Sufficient Statistics for the Occult Data, Updating q, and Calculating log [X | q]

With the forward probabilities at HiL and backward probabilities bt HiL one can compute the probability that the system in a specific
state at a specific time given the observed data and the working parameters:
Prob@qt = sHiL X, qD Prob@x1 , , xt , qt = sHiL qD Prob@xt+1 , , xT qt = sHiL, qD

Define:

gt HiL = Prob@qt = sHiL X, qD

then:

gt HiL at HiL bt HiL

Similarly, we can calculate the probability that the system transits from state s(i) to state s(j) at time t:

Prob@qt = sHiL, qt+1 = sH jL X, qD


Prob@x1 , , xt-1 , qt-1 = sHkL qD
Prob@qt+1 = sH jL qt = sHiLD Prob@xt+1 qt+1 = sHkL, qD Prob@xt+2 , , xT qt+1 = sHkL, qD

Recalling the definition of xt Hi, jL:

xt Hi, jL = Prob@qt - sHiL, qt+1 = sH jL X, qD

then:

xt Hi, jL at HiL aHi, jL bt+1 H jL bt+1 H jL

Note that xt is not defined for t = T. The values of gt and xt are normalized so that they represent proper probability measures:
N
gt HkL = 1
k=1

N N
xt Hk, lL = 1
k=1 l=1

Again, note that, consistent with the definitions of the variables:


N
gt HiL = xt Hi, kL
k=1

As described earlier, the parameter set q is updated using xt , gt and xt . The log likelihood can be computed by summing aT HiL
over the states:
8 HMMUG_20091222_05.nb

N N
log @q XD = log Prob@X, qT = sHiL qD = log aT HiL
i=1 i=1

MLE Implementation

Implementation Issues

There are several implementation issues associated with actually fitting a model to data. We deal with three here:
Selecting the number of states.
Initializing and terminating the algorithm. This has three sub-issues:

Generating an intial estimate of the parameters qH0L .


Terminating the algorithm.
Avoiding local maxima of the log likelihood.
Scaling to avoid under- and over-flow conditions in the forward-backward recursions.

Selecting the Number of States:

In MLE one can always increase the likelihood by adding parameters, but, as one adds parameters, the risk of overfitting is also
increased. A trade-off mechanism is needed.

The Bayesian Information Criterion (BIC) is the log likelihood penalized for the number of free paramaters [McLachlan 2000].
The BIC is not a test of significance (i.e., it is not used to accept or reject models), but it does provide a means of ranking models
that have been fit on the same data. The model with the smallest (i.e., usually most negative) BIC is preferred.

The BIC given the likelihood , the number of free parameters k, and the number of observations n is
BIC@X, qD = -2 log + k log n

In the HMMUG model the number of observations is T, and the number of free parameters for i, a, m, and s, respectively, is

k@qD = HN - 1L + NHN - 1L + N + N = NHN + 2L - 1

The HMMUG model's BIC is, therefore,

BIC@X, qD = -2 log @q XD + HNHN + 2L - 1L log T

There are alternatives to the BIC [McLachlan 2000] that involve similar concepts of log likelihood penalized for model
complexity.

Initialization and Termination

The EM algorithm starts with an intial guess of the parameters q. The algorithm can get stuck in a local maximum, so the choice
of an intial q is more than just an issue of efficiency. Several approaches have been suggested [Finch et al. 1989] [Karlis 2001]
[Karlis et al. 2003]. Most termination criteria do not detect convergence per se but lack of progress, and the likelihood function
has "flat" regions that can lead to premature termination [Karlis 2001].

Therefore, it makes sense to make the termination criteria reasonably strict, and it also makes sense to start the algorithm at
multiple starting points. [Karlis 2001] [Karlis et al. 2003]. An approach suggested in [Karlis et al. 2003] is to run multiple
starting points for a limited number of iterations, pick the one with the highest likelihood and then run that choice using a fairly
tight terminiation tolerance. This is the approach taken in the demonstrations below.

Scaling
HMMUG_20091222_05.nb 9

Scaling

Consider the forward recursion:


N
at HiL = bt HiL at-1 HkL aHk, iL
k=1

Repeated multiplication by bt HiL at each time step can cause serious computational problems. In the discrete case as discussed in
[Rabiner 1989] bt HiL << 1 and the computation of at HiL is driven exponentially towards zero. In the continuous case, however,
bt HiL may take on any value and in the Gaussian case the expected value of bt HiL is 1 s 2 p . Thus, at HiL may be driven to 0 or
depending upon s. In the case of time series of financial returns s << 1 2 p ; hence, bt HiL tends to be >>1 and the problem
is more one of over-flow than under-flow.

The solution, as discussed in [Rabiner 1989], is to scale the computations in a manner which will still allow one to use the same

forward-backward recursions. For each at compute a normalization constant nt and apply it to at to produce a normalized at that
sums to 1:
N
nt = 1 at HkL
k=1

at HiL = nt at HiL

At each recursion use the normalized at to compute an unnormalized at+1 which is then itself normalized and used in the next
iteration. Note that as the recursion progresses:
t

at HiL = at HiL nu
u=1


On the backward recursion apply the same normalization constant so that a normalized bt is produced with the same scaling as

at . Note that:
T

bt HiL = bt HiL nu
u=t

As [Rabiner 1989] shows, the effects of the normalization constants cancel out in the numerators and denominators of the
computations of gt and xt .

Note that the true value of aT can be recovered from the scaled values:
N T N N T

1 = aT HkL = nt aT HkL aT HiL = 1 nt
k=1 t=1 k=1 i=1 t=1

and used to compute the log likelihood


N T
log @q XD = log aT HiL = -log nt
i=1 t=1

At this point, the complete algorithm can be set out.

EM (Baum-Welch) Algorithm for the HMMUG

10 HMMUG_20091222_05.nb

EM (Baum-Welch) Algorithm for the HMMUG

This subsection presents a complete end-to-end description of the computations of one iteration of the EM-algorithm and a brief
description of the termination criteria for termination. Parameters without superscript represents the current estimates (i.e., q
instead of qHhL , with the successor esimates for the next iteration by a "+" superscript (i.e., q+ instead of qHh+1L ).

E-Step

For observation and state, the density is computed:

bt HiL = y@xt mHiL, sHiLD

The forward recursions are intitialized to:

a1 HiL = iHiL b1 HiL

then scaled:
N
n1 = 1 a1 HkL
k=1

a1 HiL = n1 a1 HiL

The forward recursions continue for t = 2, , T computing at HiL then scaling it to at HiL:
N

at HiL = bt HiL at-1 HkL aHk, iL
k=1

N
nt = 1 at HkL
k=1

at HiL = nt at HiL

The backward recursion is intialized and proceeds backward from t = T, , 1:



bT HiL = nT
N

bt HiL = nt aHi, kL bt+1 HkL bt+1 HkL
k=1

The values of g and x are estimated using the scaled forward-backward values:
N

gt HiL = at HiL bt HiL at HkL bt HkL
k=1

N N

xt Hi, jL = at HiL aHi, jL bt+1 H jL bt+1 H jL at HkL aHk, lL bt+1 HlL bt+1 HlL
k=1 l=1

M-Step

The updated parameters q+ are:

HMMUG_20091222_05.nb 11

i+ HiL = g1 HiL
T-1 T-1
a+ Hi, jL = xt Hi, jL gt HiL
t=1 t=1

T T
m+ HiL = gt HiL xt gt HiL
t=1 t=1

T T
2
s+ HiL = gt HiL H xt - m+ HiLL gt HiL
t=1 t=1

Log Likelihood

As noted earlier the log likelihood is computed from the scaling constants:
T
log @q XD = -log nt
t=1

Termination Criteria

In the code developed here the relative change in log likelihood is used as the primary termination criterion. For some positive
constant t << 1 the algorithm is terminated when for the Hh + 1Lth iteration:

log AqHh+1L XE log AqHhL XE - 1 t

Other choices of termination criteria are covered in [Karlis 2001] and [Karlis 2003].

In addition, a maximum iteration limit is set and at that point the algorithm terminates even if the log likelihood tolerance has not
been achieved; one can look at the convergence of the log likelihood function and accept the solution or restart the algorithm
using either the final parameter estimates from the prior run or a new initialization.

Programming Considerations

This tutorial uses Mathematica to implement the Baum-Welch algorithm along with some useful supporting functions.

xHMMUG[data, qH0L ]

The primay function is xHMMUG[] which performs the actual MLE. It's structure can be summarized as follows:
input data vector X and initial qH0L
resolve options and set up working variables
compute initial a, n, b, and and log
begin loop
E-step: compute current b, g, and x
M-step: update q = {i, a, m, and s}
log liklihood: compute next a, n, b, and and log
append log history vector
break out of loop if has converged or max iterations reached
end loop
12 HMMUG_20091222_05.nb

end loop
compute BIC
return results: q, g, BIC, and log history

Note that an initial log likelihood is computed before the main loop starts. Normally, the log likelihood is computed immediately
after the M-step but the forward-backward algorithm also uses the as in the E-step, These computations are organized so that the
log likelihood reported at termination refers to the most recently updated q.

Exploiting List Processing

Mathematica, in common with other modern tools such as R, MATLAB, and others, has a syntax which supports list or array
processing. Using these capabilities usually results in code which is simpler and more efficient. For example, consider the
expression for at HiL:
N
at HiL = bt HiL at-1 HkL aHk, iL
k=1

Using to denote element-by-element multiplication, the expression above can be stated as:

at = bt Hat-1 aL

Similarly, consider the expression for xt Hi, jL:

xt Hi, jL at HiL aHi, jL bt+1 H jL bt+1 H jL

Using to denote the outer or Kronecker product, this can be restated as:

xt a Iat Ibt+1 bt+1MM


It is not necessary to hold onto xt for each iteration. The code in xHMMUG[] computes, normalizes, and accumulates x in one
statement:

mnXi += &@mnA KroneckerProduct@mnAlpha PtT, mnBetaPt + 1T mnBPt + 1TDD
Total@Flatten@DD

Unfortunately, the list or array conventions vary from one system to the next. Care needs to be taken if one tries, e.g., to tran-
scribe Mathematica code into R or MATLAB. At least one of the references below [Zucchini 2009] contains samples of R code.

Equilibrium Distribution of the Markov Chain


The equilibrium distribution for the Markov chain, p(i), is the long-run probability that the system finds itself in state s(i):
N
pHiL = aHi, kL pHiL
k=1

or

AT p = p
For finite-state Markov chains, the equilibrium distribution can be found by solving the following equations, where O is a matrix
of 1s, is a vector of 1s, and 0 is a vector or 0s:

AT p - I p = 0 and O p = IAT + O -IM p =

The general problem of finding the equilibrium distribution of a Markov chain is quite difficult, but the approach above often
small problems.
works for
HMMUG_20091222_05.nb 13

The general problem of finding the equilibrium distribution of a Markov chain is quite difficult, but the approach above often
works for small problems.

Mathematica Functions
The functions presented below are teaching and demonstration tools, lacking even rudimentary error checking and handling. It
would be easy to add the required features, but that would greatly increase the number of lines of code and obscure the algo-
rithms. They are, however, perfectly usable if one is careful about the inputs.

Equilibrium Distribution for a Finite State Markov Chain

Description

Compute the equilibrium distribution for an ergodic, finite state Markov chain.

Input

The transition matrix of the Markov chain, such that mnTransitionMatrixPi, jT = Prob@qt+1 = sH jL qt = sHiLD. There are no options.

Output

The equilibrium distribution as a numeric vector.

Note

This function is of limited generality but is quite useful for small problems.

Code

In[1]:= xMarkovChainEquilibrium@mnTransitionMatrix_D :=
Inverse@Transpose@mnTransitionMatrixD - IdentityMatrix@Length@mnTransitionMatrixDD + 1D.
ConstantArray@1, Length@mnTransitionMatrixDD;

Random q Initializer

Description

Generate a random q to intialize the EM algorithm for HMMUG models.

Input

There are two inputs: the data and the number of states:
the data are represented by a numeric vector, and
the number of states is a positive integer.

There are no options.

Output
14 HMMUG_20091222_05.nb

Output

The result is returned as a list of data representing q; it contains the following four components:
i - initial state probability vector (numeric vector),
a - transition matrix of the Markov chain (numeric square matrix),
m - state means (numeric vector), and
s - state standard deviations (numeric vector).

Note

The output is in a form in which in can be used directly as the second argument to xHMMUG[], i.e., qH0L .

The term "random" must be taken with a grain of salt. What is produced randomly is a candidate state transition matrix. The
intial state is estimated from its equilibrium distribution. A random spread for the mean and sdev vectors is then generated based
on the total sample mean and sdev.

In[2]:= xHMMUGRandomInitial@vnData_, iStates_D := ModuleB


8mnA, i, vnGamma, vnIota, vnM, vnQ, vnS, t, mnW<,
H* Generate a random transition matrix HaL *L

mnA = & [email protected], 0.99<, 8iStates, iStates<DL;
Total@D
H* Compute HiL from HaL *L
vnIota = xMarkovChainEquilibrium@mnAD;
H* Mean HmL and sdev HsL for each state *L

vnM = &@RandomReal@8- 0.99, 0.99<, iStatesDD Mean@vnDataD;
Total@D


vnS = &@[email protected], 0.99<, iStatesDD Variance@vnDataD ;
Total@D
H* return q *L
8vnIota, mnA, vnM, vnS<
F;

EM Algorithm for HHMs with Univariate Gaussian Outcomes

Description

MLE fit of a HMMUG model using the Baum-Welch, or forward-backward, version of the EM algorithm.

Input

There are two inputs: the raw data and intial parameters:
the raw data as a numeric vector, and
the parameters q as a list containing {i, a, m, s}.

There are two options controlling termination:


"LikelihoodTolerance" representing the minimum change in the log likelihood function required to terminate the EM iterations, and

"MaxIterations" which are the maximum number of EM iterations before the function terminates.

The lhs of the option rules are strings.


HMMUG_20091222_05.nb 15

The lhs of the option rules are strings.

Output

The result is returned as a list of rules containing:


"i" the initial state probability vector,
"a" the transition matrix,
"m" the mean vector,
"s" the standard deviation vector,
"g" the vector of periodic state probabilities, i.e, gPt, iT = Prob@qt = sHiL],
"BIC" the Bayesian Information Criterion for the current fit, and
"LL" the log likelihood history, i.e., a list of @q XD after each EM iteration.

The lhs of the rules above are strings.

Note

Restarting the fit from the prior run is straightforward. If vxH is the result of the prior run, then {"i" , "a", "m", "s"} /. vxH will
pull out q which can be used to restart the algorithm at the point at which it terminated. Typically, one should consider changing
the iteration limit or log likelihood tolerance options when restarting, although this is not always necessary.

Code

In[3]:= H* Options for xHMMUG function *L


Options@xHMMUGD = 9"LikelihoodTolerance" 10.-7 , "MaxIterations" 400=;

In[4]:= H* Input X, 8i, a, m, s<, and optionally a tolerance and iteration limit. *L
xHMMUG@vnData_, 8vnInitialIota_, mnInitialA_, vnInitialMean_, vnInitialSdev_<,
OptionsPattern@DD := ModuleB
8mnA, mnAlpha, mnB, mnBeta, nBIC, mnGamma, vnIota, vnLogLikelihood,
iMaxIter, vnMean, iN, vnNu, vnSdev, t, iT, nTol, mnWeights, mnXi<,
H* Resolve options *L
nTol = OptionValue@"LikelihoodTolerance"D;
iMaxIter = OptionValue@"MaxIterations"D;
H* Initialize variables *L
iT = Length@vnDataD;
iN = Length@mnInitialAD;
vnIota = vnInitialIota;
mnA = mnInitialA;
vnMean = vnInitialMean;
vnSdev = vnInitialSdev;
H* Initial log *L
H* --- b *L
16 HMMUG_20091222_05.nb

mnB = Table@
PDF@NormalDistribution@vnMeanPT, vnSdevPTD, vnDataPtTD & Range@iND,
8t, 1, iT<
D;
H* --- a and n *L
mnAlpha = Array@0. &, 8iT, iN<D;
vnNu = Array@0. &, iTD;
mnAlphaP1T = vnIota mnBP1T;
vnNuP1T = 1 Total@mnAlphaP1TD;
mnAlphaP1T *= vnNuP1T;
For@t = 2, t iT, t ++,
mnAlphaPtT = HmnAlphaPt - 1T.mnAL mnBPtT;
vnNuPtT = 1 Total@mnAlphaPtTD;
mnAlphaPtT *= vnNuPtT;
D;
H* --- log *L
vnLogLikelihood = 8- Total@Log@vnNuDD<;
H* Main Loop *L
DoB
H* --- E-Step *L
H* --- --- b *L
mnBeta = Array@0. &, 8iT, iN<D;
mnBetaPiT, ;;T = vnNuPiTT;
For@t = iT - 1, t 1, t --,
mnBetaPtT = mnA.HmnBetaPt + 1T mnBPt + 1TL vnNuPtT;
D;
H* --- --- g *L

mnGamma = & HmnAlpha mnBetaL;
Total@D
H* --- --- x; note that we do not need the individual xt s *L
mnXi = Array@0. &, 8iN, iN<D;
ForBt = 1, t iT - 1, t ++,

mnXi += &@mnA KroneckerProduct@mnAlphaPtT, mnBetaPt + 1T mnBPt + 1TDD;
Total@Flatten@DD
F;
H* --- M-Step*L
H* --- --- a *L

mnA = & mnXi;
Total@D
H* --- --- i *L
vnIota = mnGammaP1T;
H* --- --- observation weights *L

mnWeights = & ImnGammaM;
Total@D
H* --- --- m and s *L
vnMean = mnWeights.vnData;
;
HMMUG_20091222_05.nb 17

vnSdev = Total ImnWeights HvnData - & vnMeanL2 M ;


H* --- Log Likelihood *L
H* --- --- b *L
mnB = Table@
PDF@NormalDistribution@vnMeanPT, vnSdevPTD, vnDataPtTD & Range@iND,
8t, 1, iT<
D;
H* --- --- a and n *L
mnAlpha = Array@0. &, 8iT, iN<D;
vnNu = Array@0. &, iTD;
mnAlphaP1T = vnIota mnBP1T;
vnNuP1T = 1 Total@mnAlphaP1TD;
mnAlphaP1T *= vnNuP1T;
For@t = 2, t iT, t ++,
mnAlphaPtT = HmnAlphaPt - 1T.mnAL mnBPtT;
vnNuPtT = 1 Total@mnAlphaPtTD;
mnAlphaPtT *= vnNuPtT;
D;
H* --- --- log *L
vnLogLikelihood = Append@vnLogLikelihood, - Total@Log@vnNuDDD;
H* --- --- likelihood test for early Break@D out of Do@D *L
If@vnLogLikelihoodP- 1T vnLogLikelihoodP- 2T - 1 nTol, Break@DD,
H* --- Max iterations for Do@D *L
8iMaxIter<
F;
H* BIC *L
nBIC = - 2 vnLogLikelihoodP- 1T + HiN HiN + 2L - 1L Log@iTD;
H* Return i, a, m, s, g, BIC, log as rule vector *L
8"i" -> vnIota, "a" mnA, "m" vnMean,
"s" vnSdev, "g" mnGamma, "BIC" nBIC, "LL" vnLogLikelihood<
F;

xHMMUG Report

Description

Produce a summary report of the results of an xHMMUG[] fit.

Input

A vector of rules, typically as produced from xHMMUG[]. There are no options.

Output

The function does produce a result but Print[]s out the summary to the notebook as a side-effect.
18 HMMUG_20091222_05.nb

Code

In[5]:= xHMMUGReport@hmmug_D := ModuleA


8<,
`
PrintA"i = ", MatrixForm@"i" . hmmugDE;
`
PrintA"a = ", MatrixForm@"a" . hmmugDE;
`
PrintA"p = ", MatrixForm@xMarkovChainEquilibrium@"a" . hmmugDDE;
`
PrintA"m = ", MatrixForm@"m" . hmmugDE;
`
PrintA"s = ", MatrixForm@"s" . hmmugDE;
`
PrintA"@qXD= ", Last@"LL" . hmmugDE;
Print@"t = ", P- 1T P- 2T - 1 &@"LL" . hmmugDD;
Print@" = ", Length@"LL" . hmmugDD;
Print@"BIC = ", "BIC" . hmmugD;
E;

HMMUG Simulator

Description

Simulate an HMMUG model with specified parameters for a specified number of periods.

Input

There are two inputs:


q itself as a list containing the parameters {i. a, m, s}, and
a positive integer simulation length.

There are no options.

Output

Returns a 2-list;
the vector of hidden states expressed as integers and
the vector of simulated data.

Note

The entire simulation function is a single line of Mathematica.

Code

In[6]:= xSimHMMUG@8vnStart_, mnMarkovChain_, vnMean_, vnSdev_<, iSimLength_D :=


8, RandomReal@NormalDistribution@vnMeanPT, vnSdevPTDD & < &@
NestList@
RandomChoice@mnMarkovChainPT Range@Length@mnMarkovChainDDD &,
RandomChoice@vnStart Range@Length@mnMarkovChainDDD,
iSimLength - 1
D
D;

S&P 500 Demonstration


HMMUG_20091222_05.nb 19

S&P 500 Demonstration


The dataset used to demonstrate the code above is the monthly log returns for the S&P 500 from Jan 1969 to the last full month
of the current date.

Running the code below in mid-December 2009 results in just under 40-years of monthly log returns for the S&P 500 index. The
the ticker "^GSPC" is price only, dividends are not included. For the purposes of this exposition, this is good enough.

Note that this period include a great deal of variety in the market: the sideways markets of the 1970s, the extended bull market
that with hindsight appears to have ended in the early 2000s, and the difficult markets of the first decade of the century. It
includes several interesting events such as the explosion of interest rates in the 1980s, the 1987 crash, the tech bubble, the
housing bubble, etc.
In[7]:= FinancialData@"^GSPC", "Name"D

Out[7]= S And P 500 Indexrth

In[8]:= mxSP500 = FinancialData@"^GSPC", 81968, 12, 1<D;


mxSP500 = Most@Last Split@mxSP500, 1P1, 2T 2P1, 2T &DD;
mxSP500LogReturns = 8Rest@First mxSP500D, Differences@Log@Last mxSP500DD<;

In[11]:= Print@"Date Range: ", 8First First@D, First Last@D< &@mxSP500LogReturnsDD;


Print@"Number of Months: ", Length@mxSP500LogReturnsDD
Date Range: 881969, 1, 31<, 82009, 12, 31<<

Number of Months: 492

Plots of the period and cumulative returns and a histogram of returns are:

In[13]:= DateListPlot@mxSP500LogReturns, Joined True, PlotRange AllD

0.1

0.0

Out[13]=

-0.1

-0.2

1970 1980 1990 2000 2010


20 HMMUG_20091222_05.nb

In[14]:= DateListPlotA8First , Accumulate@Last D< &@mxSP500LogReturnsD,


Joined True, PlotRange AllE

2.5

2.0

1.5

Out[14]=
1.0

0.5

0.0

-0.5
1970 1980 1990 2000 2010

The log returns are noticeably negatively skewed and leptokurtotic (heavy-tailed):

In[15]:= Through@8Mean, StandardDeviation, Skewness, Kurtosis<@mxSP500LogReturnsPAll, 2TDD

Out[15]= 80.0048245, 0.0452431, - 0.709973, 5.58923<

Model Fits
Before fitting fancy hidden Markov models, it would be wise to evaluate the simple assumption of log Normality. For the S&P
500's log returns, the log likelihood is computed as follows:

Univariate Gaussian Model

For a univariate Gaussian distribution the log density of a single observation is:

1 1 x -m 2
log y@x m, sD = - log@2 pD - log@sD -
2 2 s
hence, for the observations X:
T 1 T 1 xt -m 2
@q XD = log y@xt m, sD = -T log@2 pD + log@sD -
t=1 2 t=1 2 s

Note that we are using the MLE of the standard deviation.

In[16]:= nMean = Mean@mxSP500LogReturnsPAll, 2TD


nSdev = MeanAmxSP500LogReturnsPAll, 2T2 E - nMean2
iSize = Length@mxSP500LogReturnsPAll, 2TD
Out[16]= 0.0048245

Out[17]= 0.0451971

Out[18]= 492

The log likelihood is:


HMMUG_20091222_05.nb 21

In[19]:= nLogLikelihood =
2
HLog@2 pDL 1 mxSP500LogReturnsPAll, 2T - nMean
- iSize + Log@nSdevD - TotalB F
2 2 nSdev
Out[19]= 825.47

The BIC is:

In[20]:= nBIC = - 2 nLogLikelihood + 2 Log@iSizeD

Out[20]= - 1638.54

Two-State Model

As mentioned earlier, it's a good idea to initialize the algorithm at different points. Because some of these initial values may be
really bad, Mathematica may display underflow warnings. It's usually okay at this stage to ignore them. A set of 20 intitial
guesses are run for five iterations...
In[21]:= vxInitialTest@"SP500", 2D = xHMMUG@mxSP500LogReturnsPAll, 2T, , "MaxIterations" 5D &
Table@xHMMUGRandomInitial@mxSP500LogReturnsPAll, 2T, 2D, 820<D;

... then the one with the highest likelihood is selected...

In[22]:= iBest@"SP500", 2D = Position@, Max@DD &@Last@"LL" . D & vxInitialTest@"SP500", 2DDP1, 1T

Out[22]= 12

It's a good idea to examine the ensemble graphically to see if the number of iterations allowed for each candidate is reasonable:

In[23]:= ListPlot@"LL" . & vxInitialTest@"SP500", 2D, Joined TrueD

840

820

800
Out[23]=

780

760

1 2 3 4 5 6

The best candidate is then used to intialize the model fit. Here the default tolerance and iteration limit is used:

In[24]:= vxHMMUG@"SP500", 2D = xHMMUG@


mxSP500LogReturnsPAll, 2T,
8"i", "a", "m", "s"< . HvxInitialTest@"SP500", 2DPiBest@"SP500", 2DTL
D;

The convergence of the likelihood function should be examined:


22 HMMUG_20091222_05.nb

In[25]:= ListPlot@"LL" . vxHMMUG@"SP500", 2D, PlotRange AllD

854.5

854.0

Out[25]=
853.5

853.0

10 20 30

A report summarizing the result is:

In[26]:= xHMMUGReport@vxHMMUG@"SP500", 2DD

` 5.76112 10-10
i =
1.

` 0.812364 0.187636
a = K O
0.044969 0.955031

` 0.193328
p = K O
0.806672

` - 0.0188743
m = K O
0.0104782

` 0.0691238
s = K O
0.0349897
`
@qXD= 854.718

t = 9.38355 10-8
= 39

BIC = - 1666.05

Finally, the values of gt are cumulatively summed. This makes the interplay across states easier to see compared with plotting the
state probabilities:
HMMUG_20091222_05.nb 23

In[27]:= DateListPlotA8mxSP500LogReturnsPAll, 1T, < &


IAccumulate IH"g" . vxHMMUG@"SP500", 2DLMM, PlotStyle 88Red<, 8Green<<, Joined TrueE

400

300

Out[27]= 200

100

0
1970 1980 1990 2000 2010

If there are doubts about the solution, then the number of initial candidates or the number of iterations set for the candidates can
be varied. Once a candidate is selected the tolerance and iteration limit of the main fit may be adjusted. It's also reasonable to
repeat the entire process above multiple times to check the convergence of the log likelihood. Finally, no claim is made that the
method of generating initial candidates is optimal; it may be wise to consider alternatives.

Three-State Model

Here is the same analysis for a three-state model:

In[28]:= vxInitialTest@"SP500", 3D = xHMMUG@mxSP500LogReturnsPAll, 2T, , "MaxIterations" 5D &


Table@xHMMUGRandomInitial@mxSP500LogReturnsPAll, 2T, 3D, 820<D;

In[29]:= iBest@"SP500", 3D = Position@, Max@DD &@Last@"LL" . D & vxInitialTest@"SP500", 3DDP1, 1T

Out[29]= 15

In[30]:= ListPlot@"LL" . & vxInitialTest@"SP500", 3D, Joined TrueD

850

840

830

Out[30]= 820

810

800

1 2 3 4 5 6

Previous trials have indicated that the iteration limit sometimes must be increased to achieve the desired tolerance.

In[31]:= vxHMMUG@"SP500", 3D = xHMMUG@


mxSP500LogReturnsPAll, 2T,
8"i", "a", "m", "s"< . HvxInitialTest@"SP500", 3DPiBest@"SP500", 3DTL,
"MaxIterations" 1000
D;
24 HMMUG_20091222_05.nb

In[32]:= ListPlot@"LL" . vxHMMUG@"SP500", 3D, PlotRange AllD

864

862

860

Out[32]= 858

856

854

50 100 150 200 250 300 350

In[33]:= xHMMUGReport@vxHMMUG@"SP500", 3DD

2.76515 10-259
`
i = 1.
1.22483 10-137

0.541067 0.458933 4.91812 10-29


`
a = 1.28261 10-6 0.969118 0.0308804
0.195081 8.44613 10-33 0.804919
0.0549045
`
p = 0.815937
0.129159
0.0536356
`
m = 0.00809838
- 0.0366716
0.0204549
`
s = 0.0356294
0.0695324
`
@qXD= 864.624

t = 7.87853 10-8
= 354

BIC = - 1642.47
HMMUG_20091222_05.nb 25

In[34]:= DateListPlotA8mxSP500LogReturnsPAll, 1T, < &


IAccumulate IH"g" . vxHMMUG@"SP500", 3DLMM, PlotStyle 88Red<, 8Green<, 8Blue<<,
Joined TrueE

400

300

Out[34]= 200

100

0
1970 1980 1990 2000 2010

Summary

Summarizing results by comparing the BICs for each model:

In[35]:= TableFormA
881, 2, 3<, Join@8nBIC<, "BIC" . & 8vxHMMUG@"SP500", 2D, vxHMMUG@"SP500", 3D<D<,
TableHeadings 8None, 8"States", "BIC"<<, TableAlignments CenterE
Out[35]//TableForm=
States BIC
1 - 1638.54
2 - 1666.05
3 - 1642.47

Run as of 21-Dec-2009, the two-state model is selected on the basis of its BIC.

A Few Observations

Consider the "best" model above, the two-state model:

In[36]:= xHMMUGReport@vxHMMUG@"SP500", 2DD


26 HMMUG_20091222_05.nb

` 5.76112 10-10
i =
1.

` 0.812364 0.187636
a = K O
0.044969 0.955031

` 0.193328
p = K O
0.806672

` - 0.0188743
m = K O
0.0104782

` 0.0691238
s = K O
0.0349897
`
@qXD= 854.718

t = 9.38355 10-8
= 39

BIC = - 1666.05

Based on monthly returns, the market appears to be in a "low volatility-moderately positive return" or "bull" state about 80% of
the time and in a "high volatility-highly negative return" or "bear" state the remaining 20% of the time. The large transition
probabilites on the main diagonal of the state transition matrix a indicate that the states are somewhat "sticky". Thus, once the
market finds itself in a bull or bear state, it tends to stay there for a time.

This is the simple bull-bear model used as the initial example in the first section of the tutorial.

Simulation

Generating Simulated Data

A simulation, with timing, over the same time horizon as the original data with the two-state model's parameters is:

In[37]:= 8nTime, 8vnStates, vnSim<< =


Timing@xSimHMMUG@8"i", "a", "m", "s"< . vxHMMUG@"SP500", 2D, Length@mxSP500LogReturnsDDD;

On a 22.93 GHz quad-core Intel Nehalem-based Xeon system, the time is < 5 milliseconds. Your time may be faster or slower
but even on a relatively slow processor the code should run fairly quickly.
In[38]:= nTime

Out[38]= 0.004454

Plots of the period and cumulative returns and a histogram of returns are:
HMMUG_20091222_05.nb 27

In[39]:= ListPlot@vnSim, Joined True, PlotRange AllD

0.15

0.10

0.05

Out[39]=
100 200 300 400 500
-0.05

-0.10

-0.15

In[40]:= ListPlot@Accumulate@vnSimD, Joined True, PlotRange AllD

2.5

2.0

1.5

Out[40]=
1.0

0.5

100 200 300 400 500

In[41]:= Histogram@vnSimD

Out[41]=

The distribution should be noticeably negatively skewed and leptokurtotic:

In[42]:= Through@8Mean, StandardDeviation, Skewness, Kurtosis<@vnSimDD

Out[42]= 80.00525115, 0.0436044, - 0.472083, 4.60125<

Fitting a Two-State HMMUG Model to the Simulation

In[43]:= vxInitialTest@"Sim", 2D = xHMMUG@mxSP500LogReturnsPAll, 2T, , "MaxIterations" 5D &


Table@xHMMUGRandomInitial@mxSP500LogReturnsPAll, 2T, 2D, 820<D;
28 HMMUG_20091222_05.nb

In[44]:= iBest@"Sim", 2D = Position@, Max@DD &@Last@"LL" . D & vxInitialTest@"Sim", 2DDP1, 1T

Out[44]= 14

In[45]:= ListPlot@"LL" . & vxInitialTest@"Sim", 2D, Joined TrueD

850

840

830

820
Out[45]=
810

800

790

1 2 3 4 5 6

In[46]:= vxHMMUG@"Sim", 2D = xHMMUG@


mxSP500LogReturnsPAll, 2T,
8"i", "a", "m", "s"< . HvxInitialTest@"Sim", 2DPiBest@"Sim", 2DTL
D;

In[47]:= ListPlot@"LL" . vxHMMUG@"Sim", 2D, PlotRange AllD

854.5

854.0

853.5
Out[47]=

853.0

852.5

10 20 30 40

Evaluating the Model

Note that the assignment of parameters to states is arbitrary and dependent upon initial conditions.

The original model fit is:

In[48]:= xHMMUGReport@vxHMMUG@"SP500", 2DD


HMMUG_20091222_05.nb 29

` 5.76112 10-10
i =
1.

` 0.812364 0.187636
a = K O
0.044969 0.955031

` 0.193328
p = K O
0.806672

` - 0.0188743
m = K O
0.0104782

` 0.0691238
s = K O
0.0349897
`
@qXD= 854.718

t = 9.38355 10-8
= 39

BIC = - 1666.05

The fit on the data simulated using the above as parameters is:

In[49]:= xHMMUGReport@vxHMMUG@"Sim", 2DD

` 2.62911 10-10
i =
1.

` 0.811942 0.188058
a = K O
0.044902 0.955098

` 0.192746
p = K O
0.807254

` - 0.0189478
m = K O
0.0104746

` 0.069162
s = K O
0.0350023
`
@qXD= 854.718

t = 7.83003 10-8
= 42

BIC = - 1666.05

The cumulative g-plots for the true hidden states produced by the simulation (dashed) and the fit (solid) are below. The assign-
ment of the underlying states positionally is dependent upon intitialization.
30 HMMUG_20091222_05.nb

In[50]:=
ListPlot@Join@Accumulate Transpose@"g" . vxHMMUG@"Sim", 2DD,
Accumulate Transpose@If@ 1, 81, 0<, 80, 1<D & vnStatesDD,
PlotStyle 88Red<, 8Green<, 8Red, Dashed<, 8Green, Dashed<<, Joined TrueD

400

300

Out[50]= 200

100

100 200 300 400 500

Using a qq-plot or quantile plot to compare the distribution of the original and simulated data requires that the Statistical Plots
package be loaded:
In[51]:= Needs@"StatisticalPlots`"D

In[52]:= QuantilePlot@mxSP500LogReturnsPAll, 2T, vnSimD


0.15


0.10










0.05






















Out[52]=



-0.2 -0.1




0.1














-0.05












-0.10


-0.15

It's a good idea to run the above simulation several times.

A Few More Observations

If one proceeds with the working assumption that the two-state model fit on the S&P 500 above is a reasonable approximation of
reality, then the simulation code can be used to study some interesting questions.
HMMUG_20091222_05.nb 31

Annual Log Returns

Although the distribution at the monthly level is clearly non-Gaussian (both the observed data and the resulting model), one may
wonder how fast the Central Limit Theorem might kick in and cause the log returns to look more Gaussian. We can use the two-
state model to simulate 12-month log returns for 10,000 samples:
In[53]:= vnYearOut = Table@
Total@Last@xSimHMMUG@8"i", "a", "m", "s"< . vxHMMUG@"SP500", 2D, 12DDD,
810 000<
D;

In[54]:= Through@8Mean, StandardDeviation, Skewness, Kurtosis<@vnYearOutDD

Out[54]= 80.0799498, 0.159211, - 0.612109, 4.04489<

In[55]:= Histogram@vnYearOutD

Out[55]=

Even at 12-months, the distribution of annual returns is still noticeably negatively skewed and leptokurtotic, although both are
somewhat attentuated compared to the monthly. The often applied simplifying assumption that returns are log Normally dis-
tributed is clearly not tenable even over year-long periods. This has significance for financial applications such as options pricing
or investment management.

Decade Log Returns

One might wonder if things settle down after a longer period. A decade-long simulation study appears below:

In[56]:= vnDecadeOut = Table@


Total@Last@xSimHMMUG@8"i", "a", "m", "s"< . vxHMMUG@"SP500", 2D, 120DDD,
810 000<
D;

In[57]:= Through@8Mean, StandardDeviation, Skewness, Kurtosis<@vnDecadeOutDD

Out[57]= 80.599441, 0.577925, - 0.316765, 3.10633<


32 HMMUG_20091222_05.nb

In[58]:= Histogram@vnDecadeOutD

Out[58]=

The kurtosis has approached the more Gaussian value of 3 but there is still significant negative skewness in the log returns. The
expected return is 0.6 - 1 > 82 % for cases run as of 21 Dec 2009, but material drawdowns are still an issue. Again, assumptions
of log Normality appear wide of the mark even for decade-long returns.

Nave Bootstrap of Decade from Annual

The Markov chain operates at the monthly level. After a year there is not much "memory" of the state the system was in a year
ago:

In[59]:= PrintA"\na12 = Prob@qt+12 = sHjLqt = sHiLD = ",


MatrixFormAMatrixPowerAH"a" . vxHMMUG@"SP500", 2DL, 12EEE

0.226973 0.773027
a12 = Prob@qt+12 = sHjLqt = sHiLD = K O
0.185264 0.814736

The rows are close the equilibrium distribution. This might cause one to think that after a year one could largely ignore the
dynamics of the Markov chain and generate a sample of decade-long returns by direct random sampling from the annual returns.
In other words, the annual simulation results are sampled and the effect of the Markov chain is otherwise ignored to produce
decade-long simulations:
In[60]:= vnDecadeFromAnnual = Table@Total@RandomChoice@vnYearOut, 10DD, 810 000<D;

In[61]:= Through@8Mean, StandardDeviation, Skewness, Kurtosis<@vnDecadeFromAnnualDD

Out[61]= 80.795806, 0.506324, - 0.192783, 3.06076<


HMMUG_20091222_05.nb 33

In[62]:= Histogram@vnDecadeFromAnnualD

Out[62]=

Not only is the mean log return significantly higher but the distribution is significantly less negatively skewed. Even out to a
decade the dynamics captured by the HMM are important and must be accounted for. This "long memory" effect for a model that
captures only monthly dynamics is interesting.

Next Steps

First of all, experiment. Try looking at different financial instruments at different time scales. Compare different periods.

Try different models. One obvious choice: a simple finitie mixture of Guassians rather than a HMM. The BIC can provide
guidance as to whether the extra parameters are justified in the HMM.

The form of the EM algorithm presented above is the most basic. There are methods for accelerating its convergence. There are
also alternatives that involve direct maximization of the likelihood function [McLachlan 2008]. Once comfortable with the
material in this tutorial, the student should study those alternatives.

The basic routine to fit HMMUG models can be easily extended to more complex cases. For example, if xt is a vector with state-
dependent multivaritate Gaussian densities, then one has to replace the univariate PDF with the multivariate in bt and update the
state dependent mean vectors and covariance matrices using the same weights based on gt . In [Rabiner 1989] the more general
case of using state-dependent finite mixtures of multivariate Guassians is covered.

The former change is trivial; the latter is slightly more difficult but still straightforward. The code above could have easily been
written to accommodate them. However, the objectives here were more pedagogical than utilitarian, and these tasks are, there-
fore, left as an exercise for the student.

Special structures can be incorporated into the transition matrix or the model can be extended to a second-order Markov chain,
i.e., one in which the probability of the next state is conditioned on the prior two states. The Markov chain can be expressed in
continous time and the price dynamics expressed as regime-switching stochastic differential equations [Bhar et al. 2004]
[Zucchini et al. 2009].

Densities other than Gaussian are accommodated in most instances by simply using them to compute bt HiL for forward-backward
recursions, although the distributions must be ones for which the M-step can be accomplished in something at least approaching
closed form. Parameter updates in the E-step are then done using MLE computations appropriate to the distributions at hand.

In some cases the M-step cannot be accomplished easily, and the conditional expectation must be computed by Monte Carlo.
This is an example of Gibbs sampling and is a simple form of Markov chain Monte Carlo [Gilks 1995]. The HMM can also be
viewed as a simple dynamic Bayesian network [Neapolitan 2003]. Thus, study of HMMs provides a gateway to powerful
statistical and machine learning techniques that have arisen with the past two decades.

References
34 HMMUG_20091222_05.nb

References
Baum, L. E., T. Petrie, G. Soules, and N. Weiss, "A Maximization Technique Occurring in the Statistical Analysis of Probabilistic Functions of
Markov Chains", Annals of Mathematical Statistics, Vol. 41, No. 1, 1970.
Bhar, Ramaprasad, and Shigeyuki Hanori, Hidden Markov Models: Applications to Financial Economics, Kluwer, 2004.
Dempster, A.P., N. M. Liard, and D. B. Rubin, "Maximum-Likelihood from Incomplete Data via the EM Algorithm", Journal of the Royal
Statistical Society, Series B, 39, 1977.
Finch, S., N. Mendell, and H. Thode, "Probabilistic Measures of Adequacy of a Numerical Search for a Global Maximum." Journal of the
American Statistical Association, 84, 1989.
Gilks, W. R., S. Richardson, and D. Spiegelhalter (Eds.), Markov Chain Monte Carlo in Practice: Interdisciplinary Statistics, Chapman &
Hall/CRC Interdisciplinary Statistics Series, 1995.
Karlis, D., "A Cautionary Note About the EM Algorithm for Finite Exponential Mixtures," Technical Report No 150, Department of Statisitics,
Athens University of Economics and Business, 2001.
Karlis, D., and E. Xekalaki, "Choosing Values for the EM Algorithm for Finite Mixtures," Computational Statistics & Data Analysis, 41, 2003.
McLachlan, G. J., and D. Peel, Finite Mixture Models, Wiley-Interscience, 2000.
McLachlan, G. J., and T. Krishnan, The EM Algorithm and Extensions, 2nd Ed., Wiley-Interscience, 2008.
Neapolitan, R. E., Learning Bayesian Networks, Prentice-Hall, 2003.
Rabiner, L. R., "A Tutorial on Hidden Markov Models and Selected Applications in Speech Recognition", Proceedings of the IEEE, No. 2,
February 1989.
Wikipedia, the free encyclopedia, "Hidden Markov Model", retrieved from http://en.wikipedia.org/wiki/Hidden_Markov_model, 2009-12-22.
Zucchini, W., and I. L. MacDonald, Hidden Markov Models for Time Series: An Introduction Using R, CRC Press, 2009.

You might also like