HiddenMarkovModels RobertFreyStonyBrook PDF
HiddenMarkovModels RobertFreyStonyBrook PDF
[email protected]
http://www.ams.sunysb.edu/~frey
22 December 2009
For a concrete example, take a simple model of the monthly returns of a stock market. The market has two states, a "Bull" state
characterized by "good" performance and a "Bear" state characterized by "bad" performance. The transistion from one state to
another is controlled by a two-state Markov chain. Given the states i, j {Bull. Bear} of the market in a given month, the
probability the market transitions from state i to state j in the next month is aHi, jL. If the market is in some state i then its log
return is Normally distributed with state-dependent mean m(i) and standard deviation s(i). This simple market model is illus-
trated below:
2 HMMUG_20091222_05.nb
Not what one would call the last word in market models. Limiting the granularity to monthly leaves out a great deal of informa-
tion. The market dynamics are undoubtedly complex and two states may not be enough to capture this. Even during periods when
a market appears to be reasonably characterized as bull or bear, the log returns tend to be more kurtotic than a Normal distribu-
tion would predict.
On the other hand, models always leave out stuff. That's the point. As demonstrated below, this simple model captures some
interesting aspects of real stock market behavior. The question is: Given a set of observations and the framework of a model such
as that above, how does one estimate its parameters?
"The road to learning by precept is long, but by example short and effective." ~Seneca
Lawrence Rabiner's excellent "A Tutorial on Hidden Markov Models and Selected Applications in Speech Recognition" [Rabiner
1989] is available here as a PDF from his faculty website (as of 09 Dec 2009) at UC Santa Barbara. The tutorial covers the
maximum likelihood estimation (MLE) of hidden Markov models with both discrete and continuous outputs as well as several
extensions, such as autoregressive and non-ergodic models. It also covers alternatives to MLE, such as estimating Viterbi
(maximum probability) sequences. Finally, it covers important issues of implementation such as intializing parameters and
scaling computations to avoid under- or overflow conditions.
However, Rabiner's paper focuses on the theory as it pertains to discrete outcome models used in speech recognition. It defers the
details necessary for practical implementation until fairly late in the presentation. Where it does discuss continuous outcome
models, it does so using the most general of cases: finite mixtures of multivariate Gaussians. It also does not present an end-to-
end, step-by-step description of the practical implementation of the algorithm. Although it discusses applications, it does not
present actual working examples.
None of these comments, however, should be interpreted as a criticism of [Rabiner 1989] and the student is encouraged to read
and work through that paper and the other references provided.
The hidden Markov model with univariate Gaussian outcomes (HMMUG) is the focus here. At a given time t we are presented
with a discrete-time system that can be in one of N states:
S = 8sH1L, , sHiL, , sHNL<
Q = 8q1 , , qt , , qT <
The states evolve over time according to a finite-state, ergodic Markov chain characterized by its intial state and transition matrix:
An observation is generated at each point in time in a manner that is conditioned on the state of the system. Here we assume that
the observations are generated from a univariate Gaussian distribution with state-dependent mean and standard deviation:
Density@xt qt = sHiLD = y@x mHiL, sHiLD
The symbol q will be used to refer to the parameters of the model collectively; implicit in the parameters is the number of states
of the system:
q = 8i, a, m, s<
X = 8x1 , , xt , , xT <
The model will be fit using maximum likelihood estimation (MLE), i.e., by determining the values of the parameters which
maximizes the likelihood of observing the data:
@qMLE XD = argmax@ Prob@X qD D
q
EM Algorithm
In the HMMUG model, the state of the system qt cannot be directly observed. If we had such state information, then estimating
the parameters q would be trivial.
The EM algorithm [Dempster et al., 1977] can be used for MLE when there are occult (i.e., hidden, latent, censored, or missing)
data or when a problem can be reformulated in those terms. Informally, the EM algorithm takes the observed data and a current
estimate of the parameters and uses them to estimate the missing data; it then takes the observed data and the estimated missing
data and uses them to produce a new estimate of the parameters. Remarkably, under suitable conditions, this iterative process
converges to a local maximum of the likelihood function [Dempster et al. 1977].
4 HMMUG_20091222_05.nb
The EM algorithm [Dempster et al., 1977] can be used for MLE when there are occult (i.e., hidden, latent, censored, or missing)
data or when a problem can be reformulated in those terms. Informally, the EM algorithm takes the observed data and a current
estimate of the parameters and uses them to estimate the missing data; it then takes the observed data and the estimated missing
data and uses them to produce a new estimate of the parameters. Remarkably, under suitable conditions, this iterative process
converges to a local maximum of the likelihood function [Dempster et al. 1977].
More formally, at the hth iteration given a complete data set X, occult data Z, and a working estimate of parameters qHhL , the
expectation (E) step calculates the expected value of the log likelihood function of the conditional distribution of Z given X and
qHhL .
The maximization (M) step then finds a successor parameter set qHh+1L which maximizes that expectation.
The parameter set qHh+1 then becomes the new working parameter set. EM iterations continue until log @q XD converges. For
the HMMUG model and many other cases of interest, the E-step reduces to estimating sufficient statistics for Z given X and q and
the M-Step to maximizing @q X, ZD using the "completed" data.
Each EM iteration results in a non-decreasing likelihood, but the EM algorithm converges sublinearly and may find a local
maximum. There may be material numerical instabilities affecting convergence in practice. The algorithm may set certain
elements of q to explain a single point, and this drives the likelihood towards infinity. Single-point fits mean either that a less
complex model is appropriate or that the offending data point is an outlier and must be removed from the sample [Dempster et
al., 1977] [McLachlan 2008].
In the HMMUG model, the state of the system qt cannot be directly observed. If we had such state information, then estimating
the parameters would be trivial. Thus, fitting an HMMUG model can be viewed as a missing data problem in which one is given
the observations X but are missing the states Q.
E-Step
The E-step will involve the estimation of the following quantities [Baum et al. 1970]:
How does xt represent the missing state data? Clearly, if we knew xt with certainty, we could use it to separate the outcomes into
subsets depending upon state and use each subset to estimate m and s for each state. It would also allow us to count the frequency
of transitions from one state to another and thereby estimate a. As it is, the best we can do is estimate xt probabilisticly given X
and qHhL , i.e., as the solution to the expectation @q qHhL D.
M-Step
For the HMMUG model, the M-Step, the re-estimation of q based on the completed data, will be accomplished for m(i) and s(i)
by weighting each observation proportionally to the probability that it is in a given state at a given time and for the transition
matrix by summing the xt across time and normalizing the rows to sum to unity [Baum et al. 1970].
Likelihood Function
Direct calculation of [q | X] is not computationally tractable [Baum et al. 1970] [Rabiner 1989]. Let Q denote an arbitrary state-
sequence, then
HMMUG_20091222_05.nb 5
@q XD = Prob@X Q, qD Prob@Q qD
"Q
The computation of the likelihood for any one state sequence given the observed data and parameters is straightforward; how-
ever, there are N T such state sequences. Consider a modest problem in which we wish to fit a two-state model to one-year of
daily returns. The total number of possible state sequences would be 2250 > 1.8 1075 . In the nave form above, the computation
of the likelihood is not practical.
Baum-Welch Algorithm
Forunately, the forward-backward, or Baum-Welch, algorithm [Baum et al. 1970] employs a clever recursion to compute both
@q, qHhL D and @q XD efficiently; that paper also represents an early example of the application of the EM algorithm years before
it was more generally and formally codified in [Dempster et al., 1977].
Consider a case in which N = 3 and T = 10, i.e., three states and five periods. Even in a toy problem such as this there are 310 =
59,049 paths. However, as illustrated below, those paths are recombinant, forming a lattice.
states
time
The forward-backward algorithm exploits this topology to efficiently estimate the probabilities required to compute the both the
conditional expectation @q, qHhL D and the log likelihood log @X qD. The algorithm is a form of dynamic programming.
Forward-Backward Recursions
In presenting the forward-backward calculations, the variables are colored coded to cross-reference them with their associated
specifications as probabilities in the hope that this is more illuminating than distracting. The exposition in no way depends on this
color coding, however.
Forward Recursion: For a forward recursion, consider the example below in which we wish to compute
Prob@x1 , , x4 , q4 = sH3L qD and know the associated probabilities of the N predecessor states. Shown below are the target
8q4 = sH3L< and it predecessors 8q3 = sH1L, q3 = sH2L, q3 = sH3L< with the arrows meant to show the flow of computation.
states
time
From the density Prob@x4 q4 = sH1L, qD, the vector of predecessor forward probabilities Prob@x1 , , x3 , q3 = sHkL qD, and the
appropriate column of the transition matrix Prob@q4 = 1 q3 = sHkLD the required calculation expressed generally is:
6 HMMUG_20091222_05.nb
From the density Prob@x4 q4 = sH1L, qD, the vector of predecessor forward probabilities Prob@x1 , , x3 , q3 = sHkL qD, and the
appropriate column of the transition matrix Prob@q4 = 1 q3 = sHkLD the required calculation expressed generally is:
Prob@x1 , , xt , qt = sHiL qD =
N
Prob@xt qt = sHiL, qD Prob@x1 , , xt-1 , qt-1 = sHkL qD Prob@qt = i qt-1 = sHkLD
k=1
then:
N
at HiL = bt HiL at-1 HkL aHk, iL
k=1
The forward recursion starts at time t = 1 and recursively calculates the forward probabilities across states for each time period
up to t = T.
Backward Recursion: For the backward recursion, consider the example below in which we wish to compute
Prob@x5 , , xT q4 = sH2L, qD and know the associated probabilities of its N successor states Prob@x6 , , x10 q5 = sHkL, qD.
Shown below are the target 8q4 = sH2L< and it successors 8q5 = sH1L, q5 = sH2L, q5 = sH3L< with the arrows again meant to show the
flow of computation.
states
time
From the transition probabilities Prob@q5 = sHkL q4 = sH2LD, the densities Prob@x5 q5 = sHkL, qD, and successor backward probabili-
ties Prob@x6 , , x10 q5 = sHkL, qD the required calculation expressed generally is:
Prob@xt+1 , , xT qt = sHiL, qD =
N
Prob@qt+1 = sHkL qt = sHiLD Prob@xt+1 qt+1 = sHkL, qD Prob@xt+2 , , xT qt+1 = sHkL, qD
k=1
then:
HMMUG_20091222_05.nb 7
M
bt HiL = aHi, kL bt+1 HkL bt+1 HkL
k=1
The initial condition to start the backward recursion is arbitrarily chosen as:
bT HiL = 1
The backward recursion starts at time t = T and recursively calculates the backward probabilities acrosss states for each time
period down to t = 1.
Sufficient Statistics for the Occult Data, Updating q, and Calculating log [X | q]
With the forward probabilities at HiL and backward probabilities bt HiL one can compute the probability that the system in a specific
state at a specific time given the observed data and the working parameters:
Prob@qt = sHiL X, qD Prob@x1 , , xt , qt = sHiL qD Prob@xt+1 , , xT qt = sHiL, qD
Define:
then:
Similarly, we can calculate the probability that the system transits from state s(i) to state s(j) at time t:
then:
Note that xt is not defined for t = T. The values of gt and xt are normalized so that they represent proper probability measures:
N
gt HkL = 1
k=1
N N
xt Hk, lL = 1
k=1 l=1
As described earlier, the parameter set q is updated using xt , gt and xt . The log likelihood can be computed by summing aT HiL
over the states:
8 HMMUG_20091222_05.nb
N N
log @q XD = log Prob@X, qT = sHiL qD = log aT HiL
i=1 i=1
MLE Implementation
Implementation Issues
There are several implementation issues associated with actually fitting a model to data. We deal with three here:
Selecting the number of states.
Initializing and terminating the algorithm. This has three sub-issues:
In MLE one can always increase the likelihood by adding parameters, but, as one adds parameters, the risk of overfitting is also
increased. A trade-off mechanism is needed.
The Bayesian Information Criterion (BIC) is the log likelihood penalized for the number of free paramaters [McLachlan 2000].
The BIC is not a test of significance (i.e., it is not used to accept or reject models), but it does provide a means of ranking models
that have been fit on the same data. The model with the smallest (i.e., usually most negative) BIC is preferred.
The BIC given the likelihood , the number of free parameters k, and the number of observations n is
BIC@X, qD = -2 log + k log n
In the HMMUG model the number of observations is T, and the number of free parameters for i, a, m, and s, respectively, is
There are alternatives to the BIC [McLachlan 2000] that involve similar concepts of log likelihood penalized for model
complexity.
The EM algorithm starts with an intial guess of the parameters q. The algorithm can get stuck in a local maximum, so the choice
of an intial q is more than just an issue of efficiency. Several approaches have been suggested [Finch et al. 1989] [Karlis 2001]
[Karlis et al. 2003]. Most termination criteria do not detect convergence per se but lack of progress, and the likelihood function
has "flat" regions that can lead to premature termination [Karlis 2001].
Therefore, it makes sense to make the termination criteria reasonably strict, and it also makes sense to start the algorithm at
multiple starting points. [Karlis 2001] [Karlis et al. 2003]. An approach suggested in [Karlis et al. 2003] is to run multiple
starting points for a limited number of iterations, pick the one with the highest likelihood and then run that choice using a fairly
tight terminiation tolerance. This is the approach taken in the demonstrations below.
Scaling
HMMUG_20091222_05.nb 9
Scaling
Repeated multiplication by bt HiL at each time step can cause serious computational problems. In the discrete case as discussed in
[Rabiner 1989] bt HiL << 1 and the computation of at HiL is driven exponentially towards zero. In the continuous case, however,
bt HiL may take on any value and in the Gaussian case the expected value of bt HiL is 1 s 2 p . Thus, at HiL may be driven to 0 or
depending upon s. In the case of time series of financial returns s << 1 2 p ; hence, bt HiL tends to be >>1 and the problem
is more one of over-flow than under-flow.
The solution, as discussed in [Rabiner 1989], is to scale the computations in a manner which will still allow one to use the same
forward-backward recursions. For each at compute a normalization constant nt and apply it to at to produce a normalized at that
sums to 1:
N
nt = 1 at HkL
k=1
at HiL = nt at HiL
At each recursion use the normalized at to compute an unnormalized at+1 which is then itself normalized and used in the next
iteration. Note that as the recursion progresses:
t
at HiL = at HiL nu
u=1
On the backward recursion apply the same normalization constant so that a normalized bt is produced with the same scaling as
at . Note that:
T
bt HiL = bt HiL nu
u=t
As [Rabiner 1989] shows, the effects of the normalization constants cancel out in the numerators and denominators of the
computations of gt and xt .
Note that the true value of aT can be recovered from the scaled values:
N T N N T
1 = aT HkL = nt aT HkL aT HiL = 1 nt
k=1 t=1 k=1 i=1 t=1
10 HMMUG_20091222_05.nb
This subsection presents a complete end-to-end description of the computations of one iteration of the EM-algorithm and a brief
description of the termination criteria for termination. Parameters without superscript represents the current estimates (i.e., q
instead of qHhL , with the successor esimates for the next iteration by a "+" superscript (i.e., q+ instead of qHh+1L ).
E-Step
then scaled:
N
n1 = 1 a1 HkL
k=1
a1 HiL = n1 a1 HiL
The forward recursions continue for t = 2, , T computing at HiL then scaling it to at HiL:
N
at HiL = bt HiL at-1 HkL aHk, iL
k=1
N
nt = 1 at HkL
k=1
at HiL = nt at HiL
The values of g and x are estimated using the scaled forward-backward values:
N
gt HiL = at HiL bt HiL at HkL bt HkL
k=1
N N
xt Hi, jL = at HiL aHi, jL bt+1 H jL bt+1 H jL at HkL aHk, lL bt+1 HlL bt+1 HlL
k=1 l=1
M-Step
HMMUG_20091222_05.nb 11
i+ HiL = g1 HiL
T-1 T-1
a+ Hi, jL = xt Hi, jL gt HiL
t=1 t=1
T T
m+ HiL = gt HiL xt gt HiL
t=1 t=1
T T
2
s+ HiL = gt HiL H xt - m+ HiLL gt HiL
t=1 t=1
Log Likelihood
As noted earlier the log likelihood is computed from the scaling constants:
T
log @q XD = -log nt
t=1
Termination Criteria
In the code developed here the relative change in log likelihood is used as the primary termination criterion. For some positive
constant t << 1 the algorithm is terminated when for the Hh + 1Lth iteration:
Other choices of termination criteria are covered in [Karlis 2001] and [Karlis 2003].
In addition, a maximum iteration limit is set and at that point the algorithm terminates even if the log likelihood tolerance has not
been achieved; one can look at the convergence of the log likelihood function and accept the solution or restart the algorithm
using either the final parameter estimates from the prior run or a new initialization.
Programming Considerations
This tutorial uses Mathematica to implement the Baum-Welch algorithm along with some useful supporting functions.
xHMMUG[data, qH0L ]
The primay function is xHMMUG[] which performs the actual MLE. It's structure can be summarized as follows:
input data vector X and initial qH0L
resolve options and set up working variables
compute initial a, n, b, and and log
begin loop
E-step: compute current b, g, and x
M-step: update q = {i, a, m, and s}
log liklihood: compute next a, n, b, and and log
append log history vector
break out of loop if has converged or max iterations reached
end loop
12 HMMUG_20091222_05.nb
end loop
compute BIC
return results: q, g, BIC, and log history
Note that an initial log likelihood is computed before the main loop starts. Normally, the log likelihood is computed immediately
after the M-step but the forward-backward algorithm also uses the as in the E-step, These computations are organized so that the
log likelihood reported at termination refers to the most recently updated q.
Mathematica, in common with other modern tools such as R, MATLAB, and others, has a syntax which supports list or array
processing. Using these capabilities usually results in code which is simpler and more efficient. For example, consider the
expression for at HiL:
N
at HiL = bt HiL at-1 HkL aHk, iL
k=1
Using to denote element-by-element multiplication, the expression above can be stated as:
at = bt Hat-1 aL
Using to denote the outer or Kronecker product, this can be restated as:
Unfortunately, the list or array conventions vary from one system to the next. Care needs to be taken if one tries, e.g., to tran-
scribe Mathematica code into R or MATLAB. At least one of the references below [Zucchini 2009] contains samples of R code.
or
AT p = p
For finite-state Markov chains, the equilibrium distribution can be found by solving the following equations, where O is a matrix
of 1s, is a vector of 1s, and 0 is a vector or 0s:
The general problem of finding the equilibrium distribution of a Markov chain is quite difficult, but the approach above often
small problems.
works for
HMMUG_20091222_05.nb 13
The general problem of finding the equilibrium distribution of a Markov chain is quite difficult, but the approach above often
works for small problems.
Mathematica Functions
The functions presented below are teaching and demonstration tools, lacking even rudimentary error checking and handling. It
would be easy to add the required features, but that would greatly increase the number of lines of code and obscure the algo-
rithms. They are, however, perfectly usable if one is careful about the inputs.
Description
Compute the equilibrium distribution for an ergodic, finite state Markov chain.
Input
The transition matrix of the Markov chain, such that mnTransitionMatrixPi, jT = Prob@qt+1 = sH jL qt = sHiLD. There are no options.
Output
Note
This function is of limited generality but is quite useful for small problems.
Code
In[1]:= xMarkovChainEquilibrium@mnTransitionMatrix_D :=
Inverse@Transpose@mnTransitionMatrixD - IdentityMatrix@Length@mnTransitionMatrixDD + 1D.
ConstantArray@1, Length@mnTransitionMatrixDD;
Random q Initializer
Description
Input
There are two inputs: the data and the number of states:
the data are represented by a numeric vector, and
the number of states is a positive integer.
Output
14 HMMUG_20091222_05.nb
Output
The result is returned as a list of data representing q; it contains the following four components:
i - initial state probability vector (numeric vector),
a - transition matrix of the Markov chain (numeric square matrix),
m - state means (numeric vector), and
s - state standard deviations (numeric vector).
Note
The output is in a form in which in can be used directly as the second argument to xHMMUG[], i.e., qH0L .
The term "random" must be taken with a grain of salt. What is produced randomly is a candidate state transition matrix. The
intial state is estimated from its equilibrium distribution. A random spread for the mean and sdev vectors is then generated based
on the total sample mean and sdev.
vnS = &@[email protected], 0.99<, iStatesDD Variance@vnDataD ;
Total@D
H* return q *L
8vnIota, mnA, vnM, vnS<
F;
Description
MLE fit of a HMMUG model using the Baum-Welch, or forward-backward, version of the EM algorithm.
Input
There are two inputs: the raw data and intial parameters:
the raw data as a numeric vector, and
the parameters q as a list containing {i, a, m, s}.
"MaxIterations" which are the maximum number of EM iterations before the function terminates.
Output
Note
Restarting the fit from the prior run is straightforward. If vxH is the result of the prior run, then {"i" , "a", "m", "s"} /. vxH will
pull out q which can be used to restart the algorithm at the point at which it terminated. Typically, one should consider changing
the iteration limit or log likelihood tolerance options when restarting, although this is not always necessary.
Code
In[4]:= H* Input X, 8i, a, m, s<, and optionally a tolerance and iteration limit. *L
xHMMUG@vnData_, 8vnInitialIota_, mnInitialA_, vnInitialMean_, vnInitialSdev_<,
OptionsPattern@DD := ModuleB
8mnA, mnAlpha, mnB, mnBeta, nBIC, mnGamma, vnIota, vnLogLikelihood,
iMaxIter, vnMean, iN, vnNu, vnSdev, t, iT, nTol, mnWeights, mnXi<,
H* Resolve options *L
nTol = OptionValue@"LikelihoodTolerance"D;
iMaxIter = OptionValue@"MaxIterations"D;
H* Initialize variables *L
iT = Length@vnDataD;
iN = Length@mnInitialAD;
vnIota = vnInitialIota;
mnA = mnInitialA;
vnMean = vnInitialMean;
vnSdev = vnInitialSdev;
H* Initial log *L
H* --- b *L
16 HMMUG_20091222_05.nb
mnB = Table@
PDF@NormalDistribution@vnMeanPT, vnSdevPTD, vnDataPtTD & Range@iND,
8t, 1, iT<
D;
H* --- a and n *L
mnAlpha = Array@0. &, 8iT, iN<D;
vnNu = Array@0. &, iTD;
mnAlphaP1T = vnIota mnBP1T;
vnNuP1T = 1 Total@mnAlphaP1TD;
mnAlphaP1T *= vnNuP1T;
For@t = 2, t iT, t ++,
mnAlphaPtT = HmnAlphaPt - 1T.mnAL mnBPtT;
vnNuPtT = 1 Total@mnAlphaPtTD;
mnAlphaPtT *= vnNuPtT;
D;
H* --- log *L
vnLogLikelihood = 8- Total@Log@vnNuDD<;
H* Main Loop *L
DoB
H* --- E-Step *L
H* --- --- b *L
mnBeta = Array@0. &, 8iT, iN<D;
mnBetaPiT, ;;T = vnNuPiTT;
For@t = iT - 1, t 1, t --,
mnBetaPtT = mnA.HmnBetaPt + 1T mnBPt + 1TL vnNuPtT;
D;
H* --- --- g *L
mnGamma = & HmnAlpha mnBetaL;
Total@D
H* --- --- x; note that we do not need the individual xt s *L
mnXi = Array@0. &, 8iN, iN<D;
ForBt = 1, t iT - 1, t ++,
mnXi += &@mnA KroneckerProduct@mnAlphaPtT, mnBetaPt + 1T mnBPt + 1TDD;
Total@Flatten@DD
F;
H* --- M-Step*L
H* --- --- a *L
mnA = & mnXi;
Total@D
H* --- --- i *L
vnIota = mnGammaP1T;
H* --- --- observation weights *L
mnWeights = & ImnGammaM;
Total@D
H* --- --- m and s *L
vnMean = mnWeights.vnData;
;
HMMUG_20091222_05.nb 17
xHMMUG Report
Description
Input
Output
The function does produce a result but Print[]s out the summary to the notebook as a side-effect.
18 HMMUG_20091222_05.nb
Code
HMMUG Simulator
Description
Simulate an HMMUG model with specified parameters for a specified number of periods.
Input
Output
Returns a 2-list;
the vector of hidden states expressed as integers and
the vector of simulated data.
Note
Code
Running the code below in mid-December 2009 results in just under 40-years of monthly log returns for the S&P 500 index. The
the ticker "^GSPC" is price only, dividends are not included. For the purposes of this exposition, this is good enough.
Note that this period include a great deal of variety in the market: the sideways markets of the 1970s, the extended bull market
that with hindsight appears to have ended in the early 2000s, and the difficult markets of the first decade of the century. It
includes several interesting events such as the explosion of interest rates in the 1980s, the 1987 crash, the tech bubble, the
housing bubble, etc.
In[7]:= FinancialData@"^GSPC", "Name"D
Plots of the period and cumulative returns and a histogram of returns are:
0.1
0.0
Out[13]=
-0.1
-0.2
2.5
2.0
1.5
Out[14]=
1.0
0.5
0.0
-0.5
1970 1980 1990 2000 2010
The log returns are noticeably negatively skewed and leptokurtotic (heavy-tailed):
Model Fits
Before fitting fancy hidden Markov models, it would be wise to evaluate the simple assumption of log Normality. For the S&P
500's log returns, the log likelihood is computed as follows:
For a univariate Gaussian distribution the log density of a single observation is:
1 1 x -m 2
log y@x m, sD = - log@2 pD - log@sD -
2 2 s
hence, for the observations X:
T 1 T 1 xt -m 2
@q XD = log y@xt m, sD = -T log@2 pD + log@sD -
t=1 2 t=1 2 s
Out[17]= 0.0451971
Out[18]= 492
In[19]:= nLogLikelihood =
2
HLog@2 pDL 1 mxSP500LogReturnsPAll, 2T - nMean
- iSize + Log@nSdevD - TotalB F
2 2 nSdev
Out[19]= 825.47
Out[20]= - 1638.54
Two-State Model
As mentioned earlier, it's a good idea to initialize the algorithm at different points. Because some of these initial values may be
really bad, Mathematica may display underflow warnings. It's usually okay at this stage to ignore them. A set of 20 intitial
guesses are run for five iterations...
In[21]:= vxInitialTest@"SP500", 2D = xHMMUG@mxSP500LogReturnsPAll, 2T, , "MaxIterations" 5D &
Table@xHMMUGRandomInitial@mxSP500LogReturnsPAll, 2T, 2D, 820<D;
Out[22]= 12
It's a good idea to examine the ensemble graphically to see if the number of iterations allowed for each candidate is reasonable:
840
820
800
Out[23]=
780
760
1 2 3 4 5 6
The best candidate is then used to intialize the model fit. Here the default tolerance and iteration limit is used:
854.5
854.0
Out[25]=
853.5
853.0
10 20 30
` 5.76112 10-10
i =
1.
` 0.812364 0.187636
a = K O
0.044969 0.955031
` 0.193328
p = K O
0.806672
` - 0.0188743
m = K O
0.0104782
` 0.0691238
s = K O
0.0349897
`
@qXD= 854.718
t = 9.38355 10-8
= 39
BIC = - 1666.05
Finally, the values of gt are cumulatively summed. This makes the interplay across states easier to see compared with plotting the
state probabilities:
HMMUG_20091222_05.nb 23
400
300
Out[27]= 200
100
0
1970 1980 1990 2000 2010
If there are doubts about the solution, then the number of initial candidates or the number of iterations set for the candidates can
be varied. Once a candidate is selected the tolerance and iteration limit of the main fit may be adjusted. It's also reasonable to
repeat the entire process above multiple times to check the convergence of the log likelihood. Finally, no claim is made that the
method of generating initial candidates is optimal; it may be wise to consider alternatives.
Three-State Model
Out[29]= 15
850
840
830
Out[30]= 820
810
800
1 2 3 4 5 6
Previous trials have indicated that the iteration limit sometimes must be increased to achieve the desired tolerance.
864
862
860
Out[32]= 858
856
854
2.76515 10-259
`
i = 1.
1.22483 10-137
t = 7.87853 10-8
= 354
BIC = - 1642.47
HMMUG_20091222_05.nb 25
400
300
Out[34]= 200
100
0
1970 1980 1990 2000 2010
Summary
In[35]:= TableFormA
881, 2, 3<, Join@8nBIC<, "BIC" . & 8vxHMMUG@"SP500", 2D, vxHMMUG@"SP500", 3D<D<,
TableHeadings 8None, 8"States", "BIC"<<, TableAlignments CenterE
Out[35]//TableForm=
States BIC
1 - 1638.54
2 - 1666.05
3 - 1642.47
Run as of 21-Dec-2009, the two-state model is selected on the basis of its BIC.
A Few Observations
` 5.76112 10-10
i =
1.
` 0.812364 0.187636
a = K O
0.044969 0.955031
` 0.193328
p = K O
0.806672
` - 0.0188743
m = K O
0.0104782
` 0.0691238
s = K O
0.0349897
`
@qXD= 854.718
t = 9.38355 10-8
= 39
BIC = - 1666.05
Based on monthly returns, the market appears to be in a "low volatility-moderately positive return" or "bull" state about 80% of
the time and in a "high volatility-highly negative return" or "bear" state the remaining 20% of the time. The large transition
probabilites on the main diagonal of the state transition matrix a indicate that the states are somewhat "sticky". Thus, once the
market finds itself in a bull or bear state, it tends to stay there for a time.
This is the simple bull-bear model used as the initial example in the first section of the tutorial.
Simulation
A simulation, with timing, over the same time horizon as the original data with the two-state model's parameters is:
On a 22.93 GHz quad-core Intel Nehalem-based Xeon system, the time is < 5 milliseconds. Your time may be faster or slower
but even on a relatively slow processor the code should run fairly quickly.
In[38]:= nTime
Out[38]= 0.004454
Plots of the period and cumulative returns and a histogram of returns are:
HMMUG_20091222_05.nb 27
0.15
0.10
0.05
Out[39]=
100 200 300 400 500
-0.05
-0.10
-0.15
2.5
2.0
1.5
Out[40]=
1.0
0.5
In[41]:= Histogram@vnSimD
Out[41]=
Out[44]= 14
850
840
830
820
Out[45]=
810
800
790
1 2 3 4 5 6
854.5
854.0
853.5
Out[47]=
853.0
852.5
10 20 30 40
Note that the assignment of parameters to states is arbitrary and dependent upon initial conditions.
` 5.76112 10-10
i =
1.
` 0.812364 0.187636
a = K O
0.044969 0.955031
` 0.193328
p = K O
0.806672
` - 0.0188743
m = K O
0.0104782
` 0.0691238
s = K O
0.0349897
`
@qXD= 854.718
t = 9.38355 10-8
= 39
BIC = - 1666.05
The fit on the data simulated using the above as parameters is:
` 2.62911 10-10
i =
1.
` 0.811942 0.188058
a = K O
0.044902 0.955098
` 0.192746
p = K O
0.807254
` - 0.0189478
m = K O
0.0104746
` 0.069162
s = K O
0.0350023
`
@qXD= 854.718
t = 7.83003 10-8
= 42
BIC = - 1666.05
The cumulative g-plots for the true hidden states produced by the simulation (dashed) and the fit (solid) are below. The assign-
ment of the underlying states positionally is dependent upon intitialization.
30 HMMUG_20091222_05.nb
In[50]:=
ListPlot@Join@Accumulate Transpose@"g" . vxHMMUG@"Sim", 2DD,
Accumulate Transpose@If@ 1, 81, 0<, 80, 1<D & vnStatesDD,
PlotStyle 88Red<, 8Green<, 8Red, Dashed<, 8Green, Dashed<<, Joined TrueD
400
300
Out[50]= 200
100
Using a qq-plot or quantile plot to compare the distribution of the original and simulated data requires that the Statistical Plots
package be loaded:
In[51]:= Needs@"StatisticalPlots`"D
0.15
0.10
0.05
Out[52]=
-0.2 -0.1
0.1
-0.05
-0.10
-0.15
If one proceeds with the working assumption that the two-state model fit on the S&P 500 above is a reasonable approximation of
reality, then the simulation code can be used to study some interesting questions.
HMMUG_20091222_05.nb 31
Although the distribution at the monthly level is clearly non-Gaussian (both the observed data and the resulting model), one may
wonder how fast the Central Limit Theorem might kick in and cause the log returns to look more Gaussian. We can use the two-
state model to simulate 12-month log returns for 10,000 samples:
In[53]:= vnYearOut = Table@
Total@Last@xSimHMMUG@8"i", "a", "m", "s"< . vxHMMUG@"SP500", 2D, 12DDD,
810 000<
D;
In[55]:= Histogram@vnYearOutD
Out[55]=
Even at 12-months, the distribution of annual returns is still noticeably negatively skewed and leptokurtotic, although both are
somewhat attentuated compared to the monthly. The often applied simplifying assumption that returns are log Normally dis-
tributed is clearly not tenable even over year-long periods. This has significance for financial applications such as options pricing
or investment management.
One might wonder if things settle down after a longer period. A decade-long simulation study appears below:
In[58]:= Histogram@vnDecadeOutD
Out[58]=
The kurtosis has approached the more Gaussian value of 3 but there is still significant negative skewness in the log returns. The
expected return is 0.6 - 1 > 82 % for cases run as of 21 Dec 2009, but material drawdowns are still an issue. Again, assumptions
of log Normality appear wide of the mark even for decade-long returns.
The Markov chain operates at the monthly level. After a year there is not much "memory" of the state the system was in a year
ago:
0.226973 0.773027
a12 = Prob@qt+12 = sHjLqt = sHiLD = K O
0.185264 0.814736
The rows are close the equilibrium distribution. This might cause one to think that after a year one could largely ignore the
dynamics of the Markov chain and generate a sample of decade-long returns by direct random sampling from the annual returns.
In other words, the annual simulation results are sampled and the effect of the Markov chain is otherwise ignored to produce
decade-long simulations:
In[60]:= vnDecadeFromAnnual = Table@Total@RandomChoice@vnYearOut, 10DD, 810 000<D;
In[62]:= Histogram@vnDecadeFromAnnualD
Out[62]=
Not only is the mean log return significantly higher but the distribution is significantly less negatively skewed. Even out to a
decade the dynamics captured by the HMM are important and must be accounted for. This "long memory" effect for a model that
captures only monthly dynamics is interesting.
Next Steps
First of all, experiment. Try looking at different financial instruments at different time scales. Compare different periods.
Try different models. One obvious choice: a simple finitie mixture of Guassians rather than a HMM. The BIC can provide
guidance as to whether the extra parameters are justified in the HMM.
The form of the EM algorithm presented above is the most basic. There are methods for accelerating its convergence. There are
also alternatives that involve direct maximization of the likelihood function [McLachlan 2008]. Once comfortable with the
material in this tutorial, the student should study those alternatives.
The basic routine to fit HMMUG models can be easily extended to more complex cases. For example, if xt is a vector with state-
dependent multivaritate Gaussian densities, then one has to replace the univariate PDF with the multivariate in bt and update the
state dependent mean vectors and covariance matrices using the same weights based on gt . In [Rabiner 1989] the more general
case of using state-dependent finite mixtures of multivariate Guassians is covered.
The former change is trivial; the latter is slightly more difficult but still straightforward. The code above could have easily been
written to accommodate them. However, the objectives here were more pedagogical than utilitarian, and these tasks are, there-
fore, left as an exercise for the student.
Special structures can be incorporated into the transition matrix or the model can be extended to a second-order Markov chain,
i.e., one in which the probability of the next state is conditioned on the prior two states. The Markov chain can be expressed in
continous time and the price dynamics expressed as regime-switching stochastic differential equations [Bhar et al. 2004]
[Zucchini et al. 2009].
Densities other than Gaussian are accommodated in most instances by simply using them to compute bt HiL for forward-backward
recursions, although the distributions must be ones for which the M-step can be accomplished in something at least approaching
closed form. Parameter updates in the E-step are then done using MLE computations appropriate to the distributions at hand.
In some cases the M-step cannot be accomplished easily, and the conditional expectation must be computed by Monte Carlo.
This is an example of Gibbs sampling and is a simple form of Markov chain Monte Carlo [Gilks 1995]. The HMM can also be
viewed as a simple dynamic Bayesian network [Neapolitan 2003]. Thus, study of HMMs provides a gateway to powerful
statistical and machine learning techniques that have arisen with the past two decades.
References
34 HMMUG_20091222_05.nb
References
Baum, L. E., T. Petrie, G. Soules, and N. Weiss, "A Maximization Technique Occurring in the Statistical Analysis of Probabilistic Functions of
Markov Chains", Annals of Mathematical Statistics, Vol. 41, No. 1, 1970.
Bhar, Ramaprasad, and Shigeyuki Hanori, Hidden Markov Models: Applications to Financial Economics, Kluwer, 2004.
Dempster, A.P., N. M. Liard, and D. B. Rubin, "Maximum-Likelihood from Incomplete Data via the EM Algorithm", Journal of the Royal
Statistical Society, Series B, 39, 1977.
Finch, S., N. Mendell, and H. Thode, "Probabilistic Measures of Adequacy of a Numerical Search for a Global Maximum." Journal of the
American Statistical Association, 84, 1989.
Gilks, W. R., S. Richardson, and D. Spiegelhalter (Eds.), Markov Chain Monte Carlo in Practice: Interdisciplinary Statistics, Chapman &
Hall/CRC Interdisciplinary Statistics Series, 1995.
Karlis, D., "A Cautionary Note About the EM Algorithm for Finite Exponential Mixtures," Technical Report No 150, Department of Statisitics,
Athens University of Economics and Business, 2001.
Karlis, D., and E. Xekalaki, "Choosing Values for the EM Algorithm for Finite Mixtures," Computational Statistics & Data Analysis, 41, 2003.
McLachlan, G. J., and D. Peel, Finite Mixture Models, Wiley-Interscience, 2000.
McLachlan, G. J., and T. Krishnan, The EM Algorithm and Extensions, 2nd Ed., Wiley-Interscience, 2008.
Neapolitan, R. E., Learning Bayesian Networks, Prentice-Hall, 2003.
Rabiner, L. R., "A Tutorial on Hidden Markov Models and Selected Applications in Speech Recognition", Proceedings of the IEEE, No. 2,
February 1989.
Wikipedia, the free encyclopedia, "Hidden Markov Model", retrieved from http://en.wikipedia.org/wiki/Hidden_Markov_model, 2009-12-22.
Zucchini, W., and I. L. MacDonald, Hidden Markov Models for Time Series: An Introduction Using R, CRC Press, 2009.