Hidden Markov Models: Background
Magnus Karlsson
Background
Hidden Markov models were originally introduced and studied in the late 1960s and early
1970s. During the 1980s the models became increasingly popular. The reasons for this are
twofold. Firstly, hidden Markov models are very rich in mathematical structure and hence
can form the theoretical basis for a wide range of applications. Secondly, the models have,
when applied properly, turned out to be highly successful. Notable applications include
speech recognition and bioinformatics, in particular protein modelling.
In this work, the basics of hidden Markov models are described. The problems that need to be
solved are outlined, and sketches of the solutions are given. A possible extension of the
models is discussed and some implementation issues are considered. Finally, three examples
of different applications are discussed.
The vast majority of the theoretical results in this work are a summary of the results in Rabiner
(1989). The example in speech recognition is due to Rabiner (1989), the example of protein
modelling is due to Krogh et al. (1994), and finally the application in fatigue analysis is due to
Johannesson (1999).
What are Hidden Markov Models?
Hidden Markov models (HMMs) can be seen as an extension of Markov models to the case
where the observation is a probabilistic function of the state, i.e. the resulting model is a
doubly embedded stochastic process which is not necessarily observable, but can be observed
through another set of stochastic processes that produce the sequence of observations. To get
a better understanding of this, the following example might be useful:
Example
Consider a room with $N$ urns. Within each urn there are a large number of coloured balls. We
assume that there are $M$ different colours in total. Furthermore, assume that an urn is initially
chosen according to some probability distribution. From this urn, a ball is chosen at random,
and its colour is recorded as the observation. The ball is then replaced in the urn from which it
was selected. A new urn is selected according to a random selection process associated with
the current urn.
Figure. An N-state urn and ball model, which illustrates the general case of a discrete symbol HMM.
From Rabiner (1989).
The ball selection process is repeated for the new urn, after which the next urn is selected
according to a selection process associated with that urn, and so forth. The entire
process generates a finite observation sequence of colours, which we would like to model as
the observable output of an HMM. We can now see that there is an underlying Markov
chain, where each state corresponds to the selection of a particular urn. This chain is, however,
not observable, but can be observed through the sequence of colours, which is a
probabilistic function of the embedded Markov chain, since a colour is chosen at random
depending on the current state, i.e. the urn from which the ball is currently being drawn.
Description of HMM
Rabiner (1989) suggests that an HMM can be described by the following:
1. $N$, the number of states in the model. Although the states are hidden, for many
practical applications there is often some physical significance attached to the states
or to sets of states of the model. In the example with the balls and urns above, $N$
corresponds to the number of urns. We denote the individual states as
$S = \{s_1, s_2, \ldots, s_N\}$, and the state at time $n$ as $Z_n$.
2. $M$, the number of distinct observation symbols per state. The observation symbols
correspond to the physical output of the system being modelled. In our example
above, $M$ corresponds to the number of colours of the balls. We denote the individual
symbols as $V = \{v_1, v_2, \ldots, v_M\}$, and the symbol at time $n$ as $X_n$.
3. The state transition probability matrix $P = \{P_{ij}\}$, where
$$P_{ij} = P(Z_{n+1} = s_j \mid Z_n = s_i), \quad 1 \le i, j \le N. \quad (1)$$
4. The observation symbol probability distribution in state $s_j$, $B = \{b_j(k)\}$, where
$$b_j(k) = P(X_n = v_k \mid Z_n = s_j), \quad 1 \le j \le N, \; 1 \le k \le M. \quad (2)$$
5. The initial state distribution $\pi = \{\pi_i\}$, where
$$\pi_i = P(Z_0 = s_i), \quad 1 \le i \le N. \quad (3)$$
It can be seen from the above that a complete specification of an HMM requires specification
of the two model parameters ($N$ and $M$), specification of the observation symbols, and
specification of the three probability measures $P$, $B$ and $\pi$. For convenience, we use the
compact notation
$$\lambda = (P, B, \pi). \quad (4)$$
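As a concrete illustration, the model specification and the urn-and-ball generation process can be sketched in a few lines of Python; the numbers below ($N = 3$ urns, $M = 2$ colours) are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# A hypothetical model lambda = (P, B, pi) with N = 3 states and M = 2 symbols.
P  = np.array([[0.7, 0.2, 0.1],    # state transition matrix {P_ij}, eq. (1)
               [0.3, 0.5, 0.2],
               [0.2, 0.3, 0.5]])
B  = np.array([[0.9, 0.1],         # b_j(k) = P(X_n = v_k | Z_n = s_j), eq. (2)
               [0.5, 0.5],
               [0.1, 0.9]])
pi = np.array([0.6, 0.3, 0.1])     # initial state distribution, eq. (3)

def generate(T):
    """Draw (Z_0, ..., Z_T) and (X_0, ..., X_T) from lambda = (P, B, pi)."""
    states, symbols = [], []
    z = rng.choice(3, p=pi)            # choose the first urn
    for _ in range(T + 1):
        x = rng.choice(2, p=B[z])      # draw a ball and record its colour
        states.append(z); symbols.append(x)
        z = rng.choice(3, p=P[z])      # select the next urn
    return states, symbols
```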
It should be noted here that the above discussion has considered only the case where the
observations are characterised as discrete symbols. In principle, this is not necessary; the
symbols or outputs can be either discrete or continuous, and either scalar or vector-valued.
However, in all cases we need to assume that the stochastic process $\{Z_n\}$ is a Markov
chain having the property that $\{X_0, \ldots, X_k\}$ and $\{Z_{k+1}, Z_{k+2}, \ldots\}$ are conditionally
independent given $\{Z_0, \ldots, Z_k\}$.
For the model to be useful in applications, three basic problems need to be solved: 1) given an
observation sequence $X = \{X_0, X_1, \ldots, X_T\}$ and a model $\lambda$, how do we compute
$P(X \mid \lambda)$, the probability of the observation sequence given the model; 2) given $X$
and $\lambda$, how do we find a state sequence that best explains the observations; and 3) how
do we adjust the model parameters $\lambda = (P, B, \pi)$ to maximise $P(X \mid \lambda)$.
Solution to problem 1:
Define the forward variable
$$\alpha_n(i) = P(X_0, X_1, \ldots, X_n, Z_n = s_i \mid \lambda), \quad (5)$$
i.e. the probability of the partial observation sequence up to time $n$ and state $s_i$ at time $n$,
given the model. It can be computed inductively, with initialisation
$$\alpha_0(i) = \pi_i b_i(X_0), \quad 1 \le i \le N, \quad (6)$$
and induction step
$$\alpha_{n+1}(j) = \left[ \sum_{i=1}^{N} \alpha_n(i) P_{ij} \right] b_j(X_{n+1}), \quad 0 \le n \le T-1, \; 1 \le j \le N. \quad (7)$$
Since
$$\alpha_T(i) = P(X_0, X_1, \ldots, X_T, Z_T = s_i \mid \lambda), \quad (8)$$
it follows that
$$P(X \mid \lambda) = \sum_{i=1}^{N} \alpha_T(i). \quad (9)$$
Using the forward variable $\alpha_n(i)$ we have now solved the first problem above. (Note that this
solution does not involve any backward variable. The backward variable is not necessary for
the solution and is therefore excluded here, but it will appear in the solution to problem 3.)
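As a concrete illustration, the forward pass of equations (6), (7) and (9) translates directly into code; a minimal sketch, reusing the hypothetical arrays P, B and pi from the urn example above:

```python
def forward(P, B, pi, X):
    """Compute P(X | lambda) for an observation sequence X = [X_0, ..., X_T]."""
    alpha = pi * B[:, X[0]]              # initialisation (6)
    for x in X[1:]:
        alpha = (alpha @ P) * B[:, x]    # induction step (7)
    return alpha.sum()                   # termination (9)
```

For example, forward(P, B, pi, [0, 1, 1, 0]) evaluates the likelihood of the colour sequence $v_1, v_2, v_2, v_1$ under the model.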
Solution to problem 2:
Unlike problem 1, where an exact solution can be given, there are several possible ways of
solving problem 2, i.e. finding the optimal state sequence associated with the given
observation sequence. The difficulty comes from the fact that there are several different
optimality criteria. One possible optimality criterion is to choose the states $Z_n$ that are
individually most likely. This criterion maximises the expected number of correct
individual states, but it does not take into consideration whether the resulting sequence of
states is possible. For instance, even when the transition between two states is impossible,
i.e. $P_{ij} = 0$ for some $i$ and $j$, those states may still each be the most likely at their
respective instants. This is due to the fact that such a solution simply determines the most
likely state at every instant, without considering the probability of occurrence of sequences
of states. The most widely used criterion is instead to find the single best state sequence, i.e.
to maximise $P(Z \mid X, \lambda)$, which is equivalent to maximising $P(Z, X \mid \lambda)$. An
algorithm for solving this problem exists and is called the Viterbi algorithm; it can be seen as
computing the maximum likelihood estimate of the state sequence. The algorithm can be
summarised as follows:
To find the best state sequence $Z = \{Z_0, Z_1, \ldots, Z_T\}$ for the given observation sequence
$X = \{X_0, X_1, \ldots, X_T\}$, we need to define the quantity
$$\delta_n(s_i) = \max_{Z_0, Z_1, \ldots, Z_{n-1}} P(Z_0, Z_1, \ldots, Z_n = s_i, X_0, X_1, \ldots, X_n \mid \lambda), \quad (10)$$
i.e. $\delta_n(s_i)$ is the best score (highest probability) along a single path, at time $n$, which
accounts for the first $n + 1$ observations and ends in state $s_i$. By induction we have
$$\delta_{n+1}(s_j) = \left[ \max_i \delta_n(s_i) P_{ij} \right] b_j(X_{n+1}). \quad (11)$$
To actually retrieve the state sequence, we need to keep track of the argument that
maximised the above equation, for each $n$ and $j$. We do this with the array
$\psi_n(s_j)$. The procedure for finding the best state sequence then runs as follows:
1) Initialisation:
$$\delta_0(s_i) = \pi_i b_i(X_0), \quad 1 \le i \le N$$
$$\psi_0(s_i) = 0 \quad (12)$$
2) Recursion:
$$\delta_n(s_j) = \max_{1 \le i \le N} \left[ \delta_{n-1}(s_i) P_{ij} \right] b_j(X_n), \quad 1 \le n \le T, \; 1 \le j \le N$$
$$\psi_n(s_j) = \arg\max_{1 \le i \le N} \left[ \delta_{n-1}(s_i) P_{ij} \right], \quad 1 \le n \le T, \; 1 \le j \le N \quad (13)$$
3) Termination:
$$P^* = \max_{1 \le i \le N} \left[ \delta_T(s_i) \right]$$
$$Z_T^* = \arg\max_{1 \le i \le N} \left[ \delta_T(s_i) \right] \quad (14)$$
4) State sequence backtracking:
$$Z_n^* = \psi_{n+1}(Z_{n+1}^*), \quad n = T-1, T-2, \ldots, 1, 0. \quad (15)$$
The best sequence according to the Viterbi algorithm is thus found as
$Z^* = (Z_0^*, Z_1^*, \ldots, Z_T^*)$. It should be noted that, apart from the backtracking step,
the Viterbi algorithm is rather similar to the forward calculation used in problem 1.
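The four steps translate directly into code. The sketch below follows equations (12)-(15), again using the hypothetical arrays from the earlier examples, and returns $P^*$ together with the best sequence $(Z_0^*, \ldots, Z_T^*)$:

```python
def viterbi(P, B, pi, X):
    """Find the single best state sequence via equations (12)-(15)."""
    T, N = len(X) - 1, len(pi)
    delta = pi * B[:, X[0]]                    # initialisation (12)
    psi = np.zeros((T + 1, N), dtype=int)
    for n in range(1, T + 1):
        scores = delta[:, None] * P            # delta_{n-1}(s_i) * P_ij
        psi[n] = scores.argmax(axis=0)         # recursion (13)
        delta = scores.max(axis=0) * B[:, X[n]]
    z = [int(delta.argmax())]                  # termination (14)
    for n in range(T, 0, -1):
        z.append(int(psi[n][z[-1]]))           # backtracking (15)
    return delta.max(), z[::-1]                # P*, (Z_0*, ..., Z_T*)
```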
Solution to problem 3:
By far the most difficult of the three problems is to determine a method to adjust the model
parameters $\lambda = (P, B, \pi)$ so as to maximise the probability of the observation sequence
given the model. This problem is in fact not possible to solve exactly using a finite observation
sequence as training data, but we can choose $\lambda = (P, B, \pi)$ such that $P(X \mid \lambda)$
is locally maximised, using an iterative procedure such as the Baum-Welch method.
(Equivalent results are obtained using the EM method.) We start by introducing a backward
variable $\beta_n(i)$, defined as
$$\beta_n(i) = P(X_{n+1}, X_{n+2}, \ldots, X_T \mid Z_n = s_i, \lambda), \quad (16)$$
i.e. the probability of the partial observation sequence from $n+1$ to the end, given the state
$s_i$ at time $n$ and the model $\lambda$. Again we can solve for $\beta_n(i)$ inductively, as follows:
$$\beta_T(i) = 1, \quad 1 \le i \le N, \quad (17)$$
and induction leads to
$$\beta_n(i) = \sum_{j=1}^{N} P_{ij}\, b_j(X_{n+1})\, \beta_{n+1}(j), \quad n = T-1, T-2, \ldots, 1, 0, \; 1 \le i \le N. \quad (18)$$
In order to describe the procedure for reestimation of the HMM parameters, we also define
$\xi_n(i, j)$, the probability of being in state $s_i$ at time $n$ and state $s_j$ at time $n+1$,
given the model and the observation sequence, i.e.
$$\xi_n(i, j) = P(Z_n = s_i, Z_{n+1} = s_j \mid X, \lambda). \quad (19)$$
From the definitions of the forward and backward variables it follows that we can write
$\xi_n(i, j)$ in the form
$$\xi_n(i, j) = \frac{\alpha_n(i)\, P_{ij}\, b_j(X_{n+1})\, \beta_{n+1}(j)}{P(X \mid \lambda)} = \frac{\alpha_n(i)\, P_{ij}\, b_j(X_{n+1})\, \beta_{n+1}(j)}{\sum_{i=1}^{N} \sum_{j=1}^{N} \alpha_n(i)\, P_{ij}\, b_j(X_{n+1})\, \beta_{n+1}(j)}, \quad (20)$$
where the numerator is simply $P(Z_n = s_i, Z_{n+1} = s_j, X \mid \lambda)$ and the division by
$P(X \mid \lambda)$ gives the desired probability measure. We also need to define $\gamma_n(i)$ as
the probability of being in state $s_i$ at time $n$, given the observation sequence and the model.
It follows that
$$\gamma_n(i) = \sum_{j=1}^{N} \xi_n(i, j). \quad (21)$$
If we sum $\gamma_n(i)$ over the time index up to time $T-1$, we get a quantity which can be
interpreted as the expected number of transitions made from state $s_i$. Similarly, summation of
$\xi_n(i, j)$ up to time $T-1$ can be interpreted as the expected number of transitions from state
$s_i$ to state $s_j$. We can also sum $\gamma_n(i)$ over the time index up to time $T$, which can be
interpreted as the expected number of times in state $s_i$. Using this, we obtain a method for
reestimation of the parameters of an HMM. The reestimation formulas are
$$\bar{\pi}_i = \text{expected frequency (number of times) in state } s_i \text{ at time } n = 0 = \gamma_0(i), \quad (22)$$
$$\bar{P}_{ij} = \frac{\text{expected number of transitions from state } s_i \text{ to state } s_j}{\text{expected number of transitions from state } s_i} = \frac{\sum_{n=0}^{T-1} \xi_n(i, j)}{\sum_{n=0}^{T-1} \gamma_n(i)}, \quad (23)$$
$$\bar{b}_j(k) = \frac{\text{expected number of times in state } s_j \text{ and observing symbol } v_k}{\text{expected number of times in state } s_j} = \frac{\sum_{n=0}^{T} \gamma_n(j)\, I(X_n = v_k)}{\sum_{n=0}^{T} \gamma_n(j)}, \quad (24)$$
where the indicator $I(X_n = v_k)$ equals 1 if the symbol observed at time $n$ is $v_k$ and 0 otherwise.
The reestimation procedure now runs as follows. We take the current model
$\lambda = (P, B, \pi)$ and use it to compute the right-hand sides of the above equations; the
left-hand sides then define the reestimated parameters. These define an improved model, and
the procedure is repeated with the new model in place of the old one until a limiting point is
reached.
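Collecting equations (16) to (24), one reestimation pass can be sketched as follows (no scaling is done here; see the implementation issues below). The arrays are as in the earlier hypothetical examples.

```python
def baum_welch_step(P, B, pi, X):
    """One Baum-Welch reestimation pass, transcribing equations (16)-(24)."""
    X = np.asarray(X)
    T1, N = len(X), len(pi)                    # T1 observations: X_0, ..., X_T
    alpha = np.zeros((T1, N)); beta = np.ones((T1, N))
    alpha[0] = pi * B[:, X[0]]                 # forward pass, eqs (6)-(7)
    for n in range(1, T1):
        alpha[n] = (alpha[n-1] @ P) * B[:, X[n]]
    for n in range(T1 - 2, -1, -1):            # backward pass, eqs (17)-(18)
        beta[n] = P @ (B[:, X[n+1]] * beta[n+1])
    px = alpha[-1].sum()                       # P(X | lambda), eq (9)
    gamma = alpha * beta / px                  # gamma_n(i), eq (21)
    xi = np.array([np.outer(alpha[n], B[:, X[n+1]] * beta[n+1]) * P
                   for n in range(T1 - 1)]) / px          # xi_n(i, j), eq (20)
    pi_new = gamma[0]                                     # eq (22)
    P_new = xi.sum(axis=0) / gamma[:-1].sum(axis=0)[:, None]   # eq (23)
    B_new = np.stack([gamma[X == k].sum(axis=0)                # eq (24)
                      for k in range(B.shape[1])], axis=1)
    B_new /= gamma.sum(axis=0)[:, None]
    return P_new, B_new, pi_new
```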
An Extension of the standard HMMs
There are naturally many extensions of the simple scalar, discrete case which has been
introduced here. One interesting extension of the standard HMMs would be to model state
duration, i.e. the fact that the sequence stays in a state for a non-zero amount of time. For the
standard HMMs, it can be shown that the inherent duration probability density $p_i(d)$
associated with state $s_i$, i.e. the probability of $d$ consecutive observations in state $s_i$, is
of the form
$$p_i(d) = P_{ii}^{d-1}(1 - P_{ii}), \quad (25)$$
where $P_{ii}$ is the self-transition coefficient for state $s_i$. For most applications, this
exponential state duration density is inappropriate; instead, it is preferable to explicitly model
the duration density in some analytical form.
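To see why this implicit density is often too restrictive, note that (25) is a geometric distribution, so the mean duration is tied directly to the self-transition probability:
$$E[d] = \sum_{d=1}^{\infty} d\, P_{ii}^{d-1}(1 - P_{ii}) = \frac{1}{1 - P_{ii}}.$$
For example, $P_{ii} = 0.9$ forces a mean duration of 10 observations while the most probable duration is still $d = 1$; a duration concentrated around some typical value cannot be represented.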
An HMM with explicit duration densities would then run as follows. First an initial state $s_i$
is chosen according to some distribution $\pi_i$, and then a duration $d_0$ is chosen according
to the state duration density $p_i(d_0)$. Observations for the times $t = 0, \ldots, d_0$ are
chosen according to the joint density $b_{Z_0}(X_0, X_1, \ldots, X_{d_0})$. Finally, the next
state is chosen according to the state transition probabilities $P_{ij}$, where now $P_{ii} = 0$
since we have determined the state duration to be exactly $d_0$. The procedure is then repeated
for the second state, and so forth. It should be noted that for the special case where
$p_i(d) = P_{ii}^{d-1}(1 - P_{ii})$, the situation is equivalent to the standard HMM. The
formulation with state duration densities cannot be directly applied to the solutions of the
three problems described above, but assuming that entire duration intervals are included in
the observation sequence, it is possible to find similar solutions to the problems.
It should, however, be noted that there are a number of drawbacks to the incorporation of
duration densities. One is the increased computational load. Another noteworthy problem is
that, in general, a larger training data set is required, since fewer state transitions are made
with this model compared to the standard HMM.
Implementation issues for HMMs
There are a number of details to pay attention to when implementing the HMMs. Examples of
these are scaling issues, initial parameter estimates, and insufficient training data. The issues
are sketched and some ideas about solutions are given here.
Scaling
In order to see why scaling is of importance when implementing the reestimation procedure,
consider the definition of the forward variable ) (i
n
. It can be seen in the definition that
) (i
n
consists of the sum of a large number of terms, each of the form
( )
=
=
+
n
s
Z Z
n
s
Z Z
s s s s
X b P
0
1
0
1
(26)
where
i n
s Z = . Since each of the factors in the product generally is significantly less than 1 it
can be seen that as n starts to get big each term in ) (i
n
starts to head exponentially to zero.
This means that after a sufficiently long time any computer will run into problems with
precision range. For this reason a scaling procedure is necessary. The basic procedure, which
is used, is to multiply ) (i
n
with a scaling coefficient independent of i , with the goal of
keeping the scaled ) (i
n
within the dynamic range for each value of n i.e. T n 0 .The
suggested scaling in Rabiner (1989) is to multiply ) (i
n
with a factor
( )
=
=
N
i
n
n
i
c
1
1
(27)
The scaled coefficients are thus found as
( ) ( ) i c i
n n n
= (28)
A similar scaling is done for the backward variables ) (i
n
using the same scaling factor, i.e.
( ) ( ) i c i
n n n
=
(29)
It can then be shown that, when calculating $\bar{P}_{ij}$, we get the same result using
$\hat{\alpha}_n(i)$ and $\hat{\beta}_n(i)$ instead of $\alpha_n(i)$ and $\beta_n(i)$, respectively,
due to cancellations. The only really important change in the solutions of the problems listed
above comes in the calculation of $P(X \mid \lambda)$, since one cannot simply sum up the
$\hat{\alpha}_T(i)$ terms, as they are already scaled. However, it turns out that it is still possible
to calculate the logarithm of $P(X \mid \lambda)$. In the Viterbi algorithm it turns out that no
scaling is necessary if one uses logarithms in the four steps of the algorithm. This means that
one arrives at $\log P^*$ rather than $P^*$, but with less computation and no numerical errors.
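The scaled recursion is a small change to the earlier forward sketch. Since $\log P(X \mid \lambda) = -\sum_{n=0}^{T} \log c_n$, the log-likelihood falls out of the scaling coefficients:

```python
def log_likelihood(P, B, pi, X):
    """Scaled forward pass; returns log P(X | lambda) = -sum_n log(c_n)."""
    alpha = pi * B[:, X[0]]
    c = 1.0 / alpha.sum()                  # scaling coefficient (27)
    alpha, log_p = alpha * c, -np.log(c)   # scaled alpha-hat, eq (28)
    for x in X[1:]:
        alpha = (alpha @ P) * B[:, x]      # unscaled induction step (7)
        c = 1.0 / alpha.sum()
        alpha *= c                         # keep alpha-hat within range
        log_p -= np.log(c)
    return log_p
```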
Initial parameter estimates
In principle, there is no straightforward answer to how the initial estimates of the HMM
parameters should be chosen. Experience indicates that the estimates of the initial distribution
$\pi$ and the transition matrix $P$ are rather insensitive to the choice (for instance, uniform
initial estimates can be used). However, for the parameters in $B$ the initial estimates are
crucial, especially in the continuous case, i.e. when the observation symbols come from a
continuous distribution. There are a number of suggestions on how to obtain good initial
estimates, e.g. manual segmentation of the observation sequence into states with averaging of
observations within states, and maximum likelihood segmentation of observations with
averaging.
Insufficient training data
An obvious problem with the training of HMM parameters is that the observation sequence is
finite. This means that there are often insufficient numbers of occurrences of the different
model events to give good parameter estimates. A natural way of solving this problem is to
gather more data, but this is often impossible in practical situations, and it is therefore
necessary to find a technique that deals with the data at hand. A second possibility is simply
to reduce the size of the model, e.g. the number of states or the number of symbols per state.
However, in many practical situations the nature of the model is given by the physical
situation, and reduction of the model is then not possible. A third possibility is to interpolate
one set of parameter estimates with another set of parameter estimates from a model for
which an adequate amount of training data exists. The idea is to use the training data to design
two models: one corresponding to the desired model, and one which is smaller but for which
the training data is sufficient. The smaller model is created by tying one or more sets of
parameters of the initial model together. The final result is obtained by interpolation between
the two models. A key issue is how much weight should be put on the initial model and how
much on the reduced model; there are, however, results on this topic which can provide an
optimal weight.
Applications and Examples
Three examples of very different applications will be given here. The first is perhaps the most
classic in the field, speech recognition. The second comes from the biological area and
concerns protein modelling. Finally, a more theoretical result useful in fatigue analysis will be
given.
Speech recognition
Arguably, one of the most noteworthy applications of HMMs is speech recognition. The
example given here is due to Rabiner (1989) and deals with isolated word recognition.
Assume that there are $V$ words in total to be recognised and that there are $K$
occurrences of each spoken word. Each occurrence of a word constitutes an observation.
The observations of words are typically represented in terms of spectra and/or time signals. In
order to perform isolated word recognition, two tasks are necessary:
1. First it is necessary to build an HMM for each word in the vocabulary, i.e. for each word
$v$ we need to estimate the model parameters $\lambda_v = (P_v, B_v, \pi_v)$ that optimise
the likelihood of the training set observation vectors for that word.
2. For each unknown word, the observation sequence is analysed and the model
likelihoods for all possible models, i.e. all possible words, are calculated. The
recognised word is then the one with the highest model likelihood.
One of the possible ways to perform the analysis and obtain the observation vector X is to
conduct a spectral analysis. A common technique is then to use linear predictive coding
(LPC) to extract observation vectors.
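Task 2 then reduces to an argmax over per-word likelihoods. A minimal sketch, assuming a hypothetical dict models mapping each word to its trained parameter triple and reusing the log_likelihood function from the implementation section above:

```python
def recognise(models, X):
    """models: {word: (P, B, pi)}. Return the word maximising log P(X | lambda_v)."""
    return max(models, key=lambda w: log_likelihood(*models[w], X))
```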
Protein modelling
The modelling of proteins is not as unrelated to speech recognition as it might first appear.
More general speech recognition, where a sequence of words or phonemes is considered, can
be seen as a pattern recognition task. This is also true for the protein modelling case, where
the task is to model the sequences of amino acids that build up proteins; here the role of the
words is played by the 20 amino acids from which protein molecules are constructed. The
example of a hidden Markov model for proteins considered here is due to Krogh et al. (1994).
The structural intuition behind a protein model is the following: a) a sequence of positions,
each with its own distribution over the amino acids; b) the possibility of either skipping a
position or inserting extra amino acids between consecutive positions; and c) allowing for the
possibility that continuing an insertion or deletion is more likely than starting one. Krogh et
al. (1994) construct their hidden Markov model to capture the properties listed above. The
main line of the HMM contains a sequence of $M$ states, which we will call match states,
corresponding to the positions in a protein or columns in a multiple alignment. Each of the
$M$ match states can generate a letter $x$ from the 20-letter amino acid alphabet according to
the distribution $P(X = x \mid Z = m_k)$, $k = 1, \ldots, M$, i.e. each generated letter
corresponds to a specific amino acid. The notation $P(X = x \mid Z = m_k)$ means that each of
the match states $m_k$, $1 \le k \le M$, has its own distinct distribution. In order to model the
possibility of skipping a position, there is a deletion state $d_k$ for each match state $m_k$,
which is simply a dummy state. Finally, in order to model the possibility of inserting extra
amino acids, there are in total $M + 1$ insert states, located on either side of the match states,
which generate letters from the amino acid alphabet in exactly the same way as the match
states but use the probability distributions $P(X = x \mid Z = i_k)$, $k = 0, 1, \ldots, M$. For
simplicity, dummy states, denoted $m_0$ and $m_{M+1}$, have been added at the beginning
and the end; these do not produce any amino acids. The situation for the case $M = 4$ can be
seen below.
Figure. The protein model for $M = 4$. From Krogh et al. (1994).
Notice that the model allows for several extra amino acids, since there is a positive self-
transition probability for the insert states. From each state there are three possible transitions.
Transitions into match states or deletion states always move forward in the model, whereas
transitions into insert states do not. The transition probability from a state $q$ to a state $r$,
$P(Z = r \mid Z = q)$, is here denoted $T(r \mid q)$, corresponding to the more familiar notation
$P_{qr}$.
A sequence from the model is generated in the following way. Starting in the dummy state
$m_0$, choose a transition to $m_1$, $d_1$ or $i_0$ at random, according to the transition
probabilities $T(m_1 \mid m_0)$, $T(d_1 \mid m_0)$ and $T(i_0 \mid m_0)$. Whenever we are in
an insert or match state, a letter $x$ corresponding to an amino acid is generated. For instance,
if we are in state $m_k$, an amino acid is generated according to the probability distribution
$P(X = x \mid Z = m_k)$. If, on the other hand, we are in a deletion state, no amino acid is
generated. The next state is chosen according to the possible transitions from the current state.
The procedure continues until the sequence reaches the dummy end state $m_{M+1}$, where
no amino acid is generated. The generated sequence $x_1, x_2, \ldots, x_L$ is now a sequence
of letters corresponding to the different amino acids, produced by following a path of states
$q_0, q_1, \ldots, q_N, q_{N+1}$, where $q_0 = m_0$ and $q_{N+1} = m_{M+1}$. Since the
deletion states do not create any amino acids, we can conclude that $N$ (the number of states
in the path) is larger than or equal to $L$ (the length of the sequence). If $q_i$ is a match or
insert state, we define $l(i)$ to be the index in the sequence $x_1, x_2, \ldots, x_L$ of the
amino acid produced in state $q_i$. The probability of the event that the path
$q_0, q_1, \ldots, q_N, q_{N+1}$ is taken and the sequence $x_1, x_2, \ldots, x_L$ is generated is
$$P(x_1, \ldots, x_L, q_0, \ldots, q_{N+1} \mid \text{model}) = T(m_{M+1} \mid q_N) \prod_{i=1}^{N} T(q_i \mid q_{i-1})\, P(x_{l(i)} \mid q_i), \quad (30)$$
where $P(x_{l(i)} \mid q_i) = 1$ if $q_i$ is a deletion state. The probability of any sequence
$x_1, x_2, \ldots, x_L$ of amino acids can be found as the sum over all possible paths that
could produce that sequence:
$$P(x_1, \ldots, x_L \mid \text{model}) = \sum_{\text{paths } q_0, \ldots, q_{N+1}} P(x_1, \ldots, x_L, q_0, \ldots, q_{N+1} \mid \text{model}). \quad (31)$$
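For a given path, equation (30) is a simple product. The sketch below assumes hypothetical containers (not Krogh et al.'s implementation): T[(r, q)] holding the transition probability $T(r \mid q)$, E[(q, x)] holding the emission probability $P(X = x \mid Z = q)$, and a set silent of the states that emit no amino acid (the deletion states and the dummies $m_0$ and $m_{M+1}$):

```python
def path_probability(T, E, path, x, silent):
    """Equation (30): probability that `path` is taken and letters `x` generated."""
    p, l = 1.0, 0                   # l indexes the next letter to be produced
    for prev, q in zip(path, path[1:]):
        p *= T[(q, prev)]           # transition factor T(q_i | q_{i-1})
        if q not in silent:
            p *= E[(q, x[l])]       # emission factor P(x_l(i) | q_i)
            l += 1                  # deletion states contribute a factor 1
    return p
```

Summing path_probability over all paths consistent with a sequence then gives equation (31).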
One way of estimating the parameters of the model is the following. For a given set of
training sequences $s(1), \ldots, s(n)$, one can see how well a model fits them by calculating
the probability that it generates them. This is simply a product of terms of the form given by
the sum above, where for each $j = 1, \ldots, n$ we let $x_1, x_2, \ldots, x_L = s(j)$. The
result is the likelihood function, and maximising it with respect to the parameters of the
model leads to the best model according to the maximum likelihood method.
Fatigue analysis
One of the major reasons for structural failure in the automotive industry is fatigue. Over the
years various methods of extracting fatigue relevant data from random load-time histories
have been developed. One way of dealing with this problem is to form equivalent load cycles
and then use damage accumulation methods, such as the Palmgren-Miner rule. The method
that has shown best results is the rainflow cycle counting method. It has become the most
commonly used counting method in engineering. The way of constructing the cycles is based
on counting hysteresis cycles for the load in the stress-strain plane. A definition suitable for
mathematical analysis is the following, first presented by Rychlik (1987):
Definition:
From the $k$:th local maximum (with value $M_k$) one looks at the lowest values in
the forward and backward directions between $M_k$ and the nearest points at which
the load exceeds $M_k$. The larger (less negative) of those two values, denoted
by $m_k^{rfc}$, is the rainflow minimum paired with $M_k$, i.e. $m_k^{rfc}$ is the least drop
before reaching the value $M_k$ again on either side. Thus the $k$:th rainflow pair
is $(m_k^{rfc}, M_k)$, and the rainflow range is $H_k^{rfc} = M_k - m_k^{rfc}$.
This definition is probably best understood from a figure:
Figure. The definition of the rainflow cycle. The $k$:th local maximum $M_k$ occurs at time
$t_k$; the minima in the backward and forward directions are $m_k^-$ and $m_k^+$, attained
at times $t_k^-$ and $t_k^+$ (here $m_k^{rfc} = m_k^+$), and the rainflow range is $H_k^{rfc}$.
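The definition translates into a direct, unoptimised search. A minimal sketch, assuming the load is given as a plain sequence of numbers; boundary conventions for maxima that are never exceeded on one side vary in the literature, and here the minimum over the whole side is used:

```python
def rainflow_pairs(load):
    """Pair each interior local maximum M_k with its rainflow minimum m_k^rfc,
    following Rychlik's (1987) definition."""
    pairs = []
    for k in range(1, len(load) - 1):
        if not (load[k-1] < load[k] >= load[k+1]):
            continue                               # k is not a local maximum
        M = load[k]
        m_minus, j = load[k-1], k - 1              # lowest value backwards ...
        while j >= 0 and load[j] <= M:             # ... until the load exceeds M_k
            m_minus = min(m_minus, load[j]); j -= 1
        m_plus, j = load[k+1], k + 1               # lowest value forwards
        while j < len(load) and load[j] <= M:
            m_plus = min(m_plus, load[j]); j += 1
        m_rfc = max(m_minus, m_plus)               # the larger (less negative) value
        pairs.append((m_rfc, M))                   # the k:th rainflow pair
    return pairs
```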
A vehicle is usually driven in very different environments; for instance, one can distinguish
between driving in curves, on slopes, on flat straights, or performing manoeuvres. These cases
will create very different sorts of loads, not least because of the differences in speed. One
possible way of modelling these loads is by hidden Markov chains, where the states of the
underlying Markov chain correspond to the particular driving modes. The observed load at
time $n$ is denoted $X_n$. For a more complete treatment of this, see Johannesson (1999). We
can regard the observed load signal $\{X_n\}_{n=0}^{\infty}$ as a random process with state
space $\{v_1, \ldots, v_N\}$, such that a successive value is given by a Markov transition
according to one of $r$ possible transition matrices, corresponding to the different driving
modes. Which transition matrix to choose is determined by the regime process
$\{Z_n\}_{n=0}^{\infty}$, with possible values $1, \ldots, r$. The regime process is assumed to
be a Markov chain with transition matrix $\mathbf{P} = (p_{zw})_{z,w=1}^{r}$, having the
property that $\{X_0, \ldots, X_n\}$ and $\{Z_{n+1}, Z_{n+2}, \ldots\}$ are conditionally
independent given $\{Z_0, \ldots, Z_n\}$, so that
$P(Z_n = w \mid Z_{n-1} = z, Z_{n-2}, \ldots) = P(Z_n = w \mid Z_{n-1} = z) = p_{zw}$.
The evolution of the process $\{X_n\}_{n=0}^{\infty}$ is described by the transition probabilities
$$q_{ij}^{(z)} = P(X_n = v_j \mid X_{n-1} = v_i, Z_n = z, Z_{n-1}, Z_{n-2}, \ldots) = P(X_n = v_j \mid X_{n-1} = v_i, Z_n = z),$$
giving the transition matrices $\mathbf{Q}^{(z)} = (q_{ij}^{(z)})_{i,j=1}^{N}$, $z = 1, \ldots, r$.
We now have a special case of the standard HMMs, where we know that the observation
process $\{X_n\}$ is a Markov chain conditionally on the hidden Markov chain $\{Z_n\}$. In this
case it is common to call the process $\{X_n\}$ a switching Markov chain (with Markov
regime) and to call the process $\{Z_n\}$ the regime process. $\{X_n\}$ itself does not satisfy
the Markov property; however, it can be shown that the joint process
$\{(X_n, Z_{n+1})\}_{n=0}^{\infty}$ is a Markov chain, that is,
$$P\big((X_n, Z_{n+1}) = (v_n, s_{n+1}) \mid (X_{n-1}, Z_n) = (v_{n-1}, s_n), \ldots, (X_0, Z_1) = (v_0, s_1)\big) = P\big((X_n, Z_{n+1}) = (v_n, s_{n+1}) \mid (X_{n-1}, Z_n) = (v_{n-1}, s_n)\big). \quad (32)$$
The joint process has state space $\{(v_i, z)\}_{i=1, z=1}^{N, r}$, containing $Nr$ states, and
transition matrix $\mathbf{Q} = (\mathbf{Q}_{ij})_{i,j=1}^{N}$, where
$\mathbf{Q}_{ij} = (Q_{ij}(z, w))_{z,w=1}^{r}$ and
$$Q_{ij}(z, w) = q_{ij}^{(z)}\, p_{zw}. \quad (33)$$
The $r \times r$ matrix $\mathbf{Q}_{ij}$ describes a transition from $i$ to $j$ for $\{X_n\}$,
where the regime process $\{Z_n\}$ may switch state.
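Continuing the earlier Python sketches, the block structure of (33) can be illustrated with made-up matrices for $N = 2$ load levels and $r = 2$ regimes, ordering the joint states as $(v_1, 1), (v_1, 2), (v_2, 1), (v_2, 2)$:

```python
Q1 = np.array([[0.8, 0.2], [0.4, 0.6]])      # Q^(1): load transitions in regime 1
Q2 = np.array([[0.3, 0.7], [0.1, 0.9]])      # Q^(2): load transitions in regime 2
Pr = np.array([[0.95, 0.05], [0.10, 0.90]])  # regime transition matrix (p_zw)

N, r = 2, 2
Q = np.zeros((N * r, N * r))                 # joint chain on (v_i, z): N*r states
for i in range(N):
    for j in range(N):
        for z, Qz in enumerate((Q1, Q2)):
            for w in range(r):
                Q[i * r + z, j * r + w] = Qz[i, j] * Pr[z, w]   # eq. (33)

assert np.allclose(Q.sum(axis=1), 1.0)       # rows of the joint matrix sum to 1
```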
For fixed $j$ we can define the column vector $\mathbf{q} = (\mathbf{q}_m)$,
$\mathbf{q}_m = (q_{m1}\; q_{m2}\; \ldots\; q_{mr})^T$, with
$$q_{mz} = P(X_{n+1} > v_j \mid X_n = v_m, Z_{n+1} = z) = \sum_{l=j+1}^{N} q_{ml}^{(z)}, \quad (34)$$
containing, for each joint state $(v_m, z)$, the probability that the next load value exceeds the
level $v_j$. In terms of such quantities, the rainflow counting intensity can be written in matrix
form,
$$\mu^{rfc}(i, j) = \tilde{\pi}\,(\cdots), \quad (35)$$
where the row vector is $\tilde{\pi} = (\pi_1\; \pi_2\; \ldots)$, and the column vectors
$\mathbf{d}$ and $\mathbf{e}$ as well as the sub-matrices $\mathbf{A}$ and $\mathbf{C}$
entering the omitted part of the expression are defined in Johannesson (1999).
The rainflow counting intensity can for instance be used to calculate the expected
cumulative fatigue damage caused by a load sequence.
Conclusions
A short summary of some of the theory behind hidden Markov models has been given. To aid
understanding, the intention has been to present relatively simple parts of the theory.
However, the results and the very different kinds of applications shown here still give a hint
of the usefulness of hidden Markov models.
References
P. Johannesson. Rainflow Analysis of Switching Markov Loads. PhD thesis, Lund Institute of
Technology, Lund, 1999.
A. Krogh, M. Brown, I. S. Mian, K. Sjölander, and D. Haussler. Hidden Markov models in
computational biology: Applications to protein modeling. Journal of Molecular Biology,
235:1501-1531, 1994.
L. R. Rabiner. A tutorial on Hidden Markov Models and selected applications in speech
recognition. Proceedings of the IEEE, 77:257-286, 1989.
I. Rychlik. A new definition of the rainflow cycle counting method. International Journal of
Fatigue, 9:119-121, 1987.