Package 'markovchain'
BugReports http://github.com/spedygiorgio/markovchain/issues
URL http://github.com/spedygiorgio/markovchain/
RoxygenNote 6.1.1
NeedsCompilation yes
Author Giorgio Alfredo Spedicato [aut, cre]
(<https://orcid.org/0000-0002-0315-8888>),
Tae Seung Kang [aut],
Sai Bhargav Yalamanchi [aut],
R topics documented:
markovchain-package . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
absorptionProbabilities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
blanden . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
committorAB . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
conditionalDistribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
craigsendi . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
createSequenceMatrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
ctmc-class . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
ctmcFit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
expectedRewards . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
expectedRewardsBeforeHittingA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
ExpectedTime . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
firstPassage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
firstPassageMultiple . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
fitHigherOrder . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
fitHighOrderMultivarMC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
freq2Generator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
generatorToTransitionMatrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
HigherOrderMarkovChain-class . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
hittingProbabilities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
holson . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
hommc-class . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
ictmc-class . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
impreciseProbabilityatT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
inferHyperparam . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
is.accessible . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
is.CTMCirreducible . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
is.irreducible . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
is.regular . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
is.TimeReversible . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
kullback . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
markovchain-class . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
markovchainList-class . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
markovchainListFit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
markovchainSequence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
meanAbsorptionTime . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
meanFirstPassageTime . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
meanNumVisits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
meanRecurrenceTime . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
multinomialConfidenceIntervals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
name . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
name<- . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
names,markovchain-method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
noofVisitsDist . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
period . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
predictHommc . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
predictiveDistribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
preproglucacon . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
priorDistribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
probabilityatT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
rain . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
rctmc . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
rmarkovchain . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
sales . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
show,hommc-method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
states . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
steadyStates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
tm_abs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
transition2Generator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
transitionProbability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
verifyMarkovProperty . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
Index 64
Description
The package contains classes and methods to create and manage (plot, print, export, for example)
discrete time Markov chains (DTMC). In addition, it provides functions to perform statistical (fitting
and drawing random variates) and probabilistic (analysis of DTMC properties) analysis.
Details
Package: markovchain
Type: Package
Version: 0.6.9.10
Date: 2018-05-30
License: GPL-2
Depends: R (>= 3.4.0), methods, expm, matlab, igraph, Matrix
Author(s)
Giorgio Alfredo Spedicato
Maintainer: Giorgio Alfredo Spedicato <[email protected]>
References
Discrete-Time Markov Models, Bremaud, Springer 1999
Examples
# create some markov chains
statesNames=c("a","b")
mcA<-new("markovchain", transitionMatrix=matrix(c(0.7,0.3,0.1,0.9),byrow=TRUE,
nrow=2, dimnames=list(statesNames,statesNames)))
statesNames=c("a","b","c")
mcB<-new("markovchain", states=statesNames, transitionMatrix=
matrix(c(0.2,0.5,0.3,0,1,0,0.1,0.8,0.1), nrow=3,
byrow=TRUE, dimnames=list(statesNames, statesNames)))
statesNames=c("a","b","c","d")
matrice<-matrix(c(0.25,0.75,0,0,0.4,0.6,0,0,0,0,0.1,0.9,0,0,0.7,0.3), nrow=4, byrow=TRUE)
mcC<-new("markovchain", states=statesNames, transitionMatrix=matrice)
mcD<-new("markovchain", transitionMatrix=matrix(c(0,1,0,1), nrow=2,byrow=TRUE))
absorptionProbabilities
Absorption probabilities
Description
Computes the absorption probability from each transient state to each recurrent one (i.e. the (i, j)
entry or (j, i), in a stochastic matrix by columns, represents the probability that the first not transient
state we can go from the transient state i is j (and therefore we are going to be absorbed in the
communicating recurrent class of j)
Usage
absorptionProbabilities(object)
Arguments
object the markovchain object
Value
A matrix with the absorption probabilities from each transient state to each of the recurrent states.
Author(s)
Ignacio Cordón
References
C. M. Grinstead and J. L. Snell. Introduction to Probability. American Mathematical Soc., 2012.
Examples
m <- matrix(c(1/2, 1/2, 0,
1/2, 1/2, 0,
0, 1/2, 1/2), ncol = 3, byrow = TRUE)
mc <- new("markovchain", states = letters[1:3], transitionMatrix = m)
absorptionProbabilities(mc)
Description
This table shows mobility between income quartiles for fathers and sons in the 1970 birth cohort.
Usage
data(blanden)
Format
An object of class table with 4 rows and 4 columns.
Details
The rows represent fathers’ income quartile when the son is aged 16, whilst the columns represent
sons’ income quartiles when he is aged 30 (in 2000).
Source
Personal reworking
References
Jo Blanden, Paul Gregg and Stephen Machin, Intergenerational Mobility in Europe and North
America, Center for Economic Performances (2005)
Examples
data(blanden)
mobilityMc<-as(blanden, "markovchain")
Description
Returns the probability of hitting states from set A before set B, for different initial states.
Usage
committorAB(object,A,B,p)
Arguments
object a markovchain class object
A a set of states
B a set of states
p initial state (default value : 1)
Details
The function solves a system of linear equations to calculate the probability that the process hits a
state from set A before any state from set B.
Value
Returns a vector of probabilities if the initial state is not provided; otherwise, returns a single number.
Examples
transMatr <- matrix(c(0,0,0,1,0.5,
0.5,0,0,0,0,
0.5,0,0,0,0,
0,0.2,0.4,0,0,
0,0.8,0.6,0,0.5),
nrow = 5)
object <- new("markovchain", states=c("a","b","c","d","e"),transitionMatrix=transMatr)
committorAB(object,c(5),c(3))
conditionalDistribution
conditionalDistribution of a Markov Chain
Description
It extracts the conditional distribution of the subsequent state, given current state.
Usage
conditionalDistribution(object, state)
Arguments
object A markovchain object.
state The current state being conditioned on.
Value
A named probability vector
Author(s)
Giorgio Spedicato, Deepak Yadav
References
A First Course in Probability (8th Edition), Sheldon Ross, Prentice Hall 2010
See Also
markovchain
Examples
# define a markov chain
statesNames <- c("a", "b", "c")
markovB <- new("markovchain", states = statesNames, transitionMatrix =
matrix(c(0.2, 0.5, 0.3, 0, 1, 0, 0.1, 0.8, 0.1),nrow = 3,
byrow = TRUE, dimnames = list(statesNames, statesNames)))
conditionalDistribution(markovB, "b")
craigsendi CD4 cell counts on HIV-infected subjects between zero and six months
Description
This is the table shown in the Craig and Sendi paper, showing zero and six month CD4 cell counts
in six brackets.
Usage
data(craigsendi)
Format
The format is: table [1:3, 1:3] 682 154 19 33 64 19 25 47 43 - attr(*, "dimnames")=List of 2 ..$ :
chr [1:3] "0-49" "50-74" "75-UP" ..$ : chr [1:3] "0-49" "50-74" "75-UP"
Details
Rows represent counts at the beginning, cols represent counts after six months.
Source
Estimation of the transition matrix of a discrete time Markov chain, Bruce A. Craig and Peter P.
Sendi, Health Economics 11, 2002.
References
see source
Examples
data(craigsendi)
csMc<-as(craigsendi, "markovchain")
steadyStates(csMc)
Description
Given a sequence of states arising from a stationary state, it fits the underlying Markov chain
distribution using either MLE (also using a Laplacian smoother), bootstrap or by MAP (Bayesian)
inference.
Usage
createSequenceMatrix(stringchar, toRowProbs = FALSE, sanitize = FALSE,
possibleStates = character())
Arguments
stringchar It can be a n x n matrix or a character vector or a list
toRowProbs converts a sequence matrix into a probability matrix
sanitize put 1 in all rows having rowSum equal to zero
possibleStates Possible states which are not present in the given sequence
data It can be a character vector or a n x n matrix or a n x n data frame or a list
method Method used to estimate the Markov chain. Either "mle", "map", "bootstrap" or
"laplace"
byrow it tells whether the output Markov chain should show the transition probabilities
by row.
nboot Number of bootstrap replicates in case "bootstrap" is used.
laplacian Laplacian smoothing parameter, default zero. It is only used when "laplace"
method is chosen.
name Optional character for name slot.
parallel Use parallel processing when performing bootstrap estimates.
confidencelevel
α level for confidence interval width. Used only when method is equal to "mle".
confint a boolean to decide whether to compute Confidence Interval or not.
hyperparam Hyperparameter matrix for the a priori distribution. If none is provided, default
value of 1 is assigned to each parameter. This must be of size k x k where k is
the number of states in the chain and the values should typically be non-negative
integers.
Details
Disabling confint would lower the computation time on large datasets. If data or stringchar
contain NAs, the related NA containing transitions will be ignored.
Value
A list containing an estimate, log-likelihood, and, when the "bootstrap" method is used, a matrix of
standard deviations and the bootstrap samples. When the "mle", "bootstrap" or "map" method is
used, the lower and upper confidence bounds are returned along with the standard error. The "map"
method also returns the expected value of the parameters with respect to the posterior distribution.
Note
This function has been rewritten in Rcpp. Bootstrap algorithm has been defined "heuristically". In
addition, parallel facility is not complete, involving only a part of the bootstrap process. When data
is either a data.frame or a matrix object, only MLE fit is currently available.
Author(s)
Giorgio Spedicato, Tae Seung Kang, Sai Bhargav Yalamanchi
References
A First Course in Probability (8th Edition), Sheldon Ross, Prentice Hall 2010
Inferring Markov Chains: Bayesian Estimation, Model Comparison, Entropy Rate, and Out-of-Class
Modeling, Christopher C. Strelioff, James P. Crutchfield, Alfred Hubler, Santa Fe Institute
Yalamanchi SB, Spedicato GA (2015). Bayesian Inference of First Order Markov Chains. R package
version 0.2.5
See Also
markovchainSequence, markovchainListFit
Examples
sequence <- c("a", "b", "a", "a", "a", "a", "b", "a", "b", "a", "b", "a", "a",
"b", "b", "b", "a")
sequenceMatr <- createSequenceMatrix(sequence, sanitize = FALSE)
mcFitMLE <- markovchainFit(data = sequence)
mcFitBSP <- markovchainFit(data = sequence, method = "bootstrap", nboot = 5, name = "Bootstrap Mc")
Description
The S4 class that describes ctmc (continuous time Markov chain) objects.
Arguments
states Name of the states. Must be the same as the colnames and rownames of the generator
matrix
byrow TRUE or FALSE. Indicates whether the given matrix is stochastic by rows or by
columns
generator Square generator matrix
name Optional character name of the Markov chain
Methods
dim signature(x = "ctmc"): method to get the size
initialize signature(.Object = "ctmc"): initialize method
states signature(object = "ctmc"): states method.
steadyStates signature(object = "ctmc"): method to get the steady state vector.
plot signature(x = "ctmc",y = "missing"): plot method for ctmc objects
Note
1. ctmc classes are written using S4 classes
2. A validation method is used to assess whether either column or row sums equal zero. Rounding
is applied up to the 5th decimal. If state names are not properly defined for a generator matrix,
coercing to a ctmc object overrides the state names with an artificial "s1", "s2", ... sequence
References
Introduction to Stochastic Processes with Applications in the Biosciences (2013), David F. Anderson,
University of Wisconsin at Madison. Sai Bhargav Yalamanchi, Giorgio Spedicato
See Also
generatorToTransitionMatrix,rctmc
Examples
energyStates <- c("sigma", "sigma_star")
byRow <- TRUE
gen <- matrix(data = c(-3, 3,
1, -1), nrow = 2,
byrow = byRow, dimnames = list(energyStates, energyStates))
molecularCTMC <- new("ctmc", states = energyStates,
byrow = byRow, generator = gen,
name = "Molecular Transition Model")
steadyStates(molecularCTMC)
## Not run: plot(molecularCTMC)
Description
This function fits the underlying CTMC given the state transition data and the transition times, using
the maximum likelihood method (MLE)
Usage
ctmcFit(data, byrow = TRUE, name = "", confidencelevel = 0.95)
Arguments
data It is a list of two elements. The first element is a character vector denoting the
states. The second is a numeric vector denoting the corresponding transition
times.
byrow Determines if the output transition probabilities of the underlying embedded
DTMC are by row.
name Optional name for the CTMC.
confidencelevel
Confidence level for the confidence interval construction.
Details
Note that in data there must exist an element-wise correspondence between the two elements of the
list and that data[[2]][1] is always 0.
Value
It returns a list containing the CTMC object and the confidence intervals.
Author(s)
Sai Bhargav Yalamanchi
References
Continuous Time Markov Chains (vignette), Sai Bhargav Yalamanchi, Giorgio Alfredo Spedicato
2015
See Also
rctmc
Examples
data <- list(c("a", "b", "c", "a", "b", "a", "c", "b", "c"), c(0, 0.8, 2.1, 2.4, 4, 5, 5.9, 8.2, 9))
ctmcFit(data)
Description
Given a markovchain object and reward values for every state, the function calculates the expected
reward value after n steps.
Usage
expectedRewards(markovchain,n,rewards)
Arguments
markovchain the markovchain-class object
n number of steps of the process
rewards vector of rewards corresponding to the states
Details
The function uses a dynamic programming approach to solve a recursive equation described in the
reference.
Value
returns a vector of expected rewards for different initial states
Author(s)
Vandit Jain
References
Stochastic Processes: Theory for Applications, Robert G. Gallager, Cambridge University Press
Examples
transMatr<-matrix(c(0.99,0.01,0.01,0.99),nrow=2,byrow=TRUE)
simpleMc<-new("markovchain", states=c("a","b"),
transitionMatrix=transMatr)
expectedRewards(simpleMc,1,c(0,1))
expectedRewardsBeforeHittingA
Expected first passage Rewards for a set of states in a markovchain
Description
Given a markovchain object and reward values for every state, the function calculates the expected
reward value for a set A of states after n steps.
Usage
expectedRewardsBeforeHittingA(markovchain, A, state, rewards, n)
Arguments
markovchain the markovchain-class object
A set of states for first passage expected reward
state initial state
rewards vector of rewards corresponding to the states
n number of steps of the process
Details
The function returns the value of expected first passage rewards, given rewards corresponding to
every state, an initial state and the number of steps.
Value
returns a expected reward (numerical value) as described above
Author(s)
Sai Bhargav Yalamanchi, Vandit Jain
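Examples
# Illustrative sketch (objects defined here, not taken from elsewhere in the manual):
# expected reward collected in the first 4 steps, starting from "a", before the
# set A = {"b"} is hit; rewards are given for every state, as documented above.
transMatr <- matrix(c(0.99, 0.01, 0.01, 0.99), nrow = 2, byrow = TRUE)
simpleMc <- new("markovchain", states = c("a", "b"), transitionMatrix = transMatr)
# expectedRewardsBeforeHittingA(simpleMc, A = "b", state = "a", rewards = c(1, 0), n = 4)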
Description
Returns expected hitting time from state i to state j
Usage
ExpectedTime(C,i,j,useRCpp)
Arguments
C A CTMC S4 object
i Initial state i
j Final state j
useRCpp logical whether to use Rcpp
Details
According to the theorem, holding times for all states except j should be greater than 0.
Value
A numerical value that returns expected hitting times from i to j
Author(s)
Vandit Jain
References
Markov Chains, J. R. Norris, Cambridge University Press
Examples
states <- c("a","b","c","d")
byRow <- TRUE
gen <- matrix(data = c(-1, 1/2, 1/2, 0, 1/4, -1/2, 0, 1/4, 1/6, 0, -1/3, 1/6, 0, 0, 0, 0),
nrow = 4,byrow = byRow, dimnames = list(states,states))
ctmc <- new("ctmc",states = states, byrow = byRow, generator = gen, name = "testctmc")
ExpectedTime(ctmc,1,4,TRUE)
Description
This function computes the first passage probability in states
Usage
firstPassage(object, state, n)
Arguments
object A markovchain object
state Initial state
n Number of rows on which compute the distribution
Details
Based on Feres’ Matlab listings
Value
A matrix of size n x (number of states) showing the probability that the first passage time into each
state equals the row number.
Author(s)
Giorgio Spedicato
References
Renaldo Feres, Notes for Math 450 Matlab listings for Markov chains
See Also
conditionalDistribution
Examples
simpleMc <- new("markovchain", states = c("a", "b"),
transitionMatrix = matrix(c(0.4, 0.6, .3, .7),
nrow = 2, byrow = TRUE))
firstPassage(simpleMc, "b", 20)
Description
The function calculates first passage probability for a subset of states given an initial state.
Usage
firstPassageMultiple(object, state, set, n)
Arguments
object a markovchain-class object
state initial state of the process (character vector)
set set of states A, first passage of which is to be calculated
n Number of rows on which compute the distribution
Value
A vector of size n showing the first passage probabilities
Author(s)
Vandit Jain
References
Renaldo Feres, Notes for Math 450 Matlab listings for Markov chains; MIT OCW, course - 6.262,
Discrete Stochastic Processes, course-notes, chap -05
See Also
firstPassage
Examples
statesNames <- c("a", "b", "c")
markovB <- new("markovchain", states = statesNames, transitionMatrix =
matrix(c(0.2, 0.5, 0.3,
0, 1, 0,
0.1, 0.8, 0.1), nrow = 3, byrow = TRUE,
dimnames = list(statesNames, statesNames)
))
firstPassageMultiple(markovB,"a",c("b","c"),4)
Description
Given a sequence of states arising from a stationary state, it fits the underlying Markov chain
distribution with higher order.
Usage
fitHigherOrder(sequence, order = 2)
seq2freqProb(sequence)
seq2matHigh(sequence, order)
Arguments
sequence A character list.
order Markov chain order
Value
A list containing lambda, Q, and X.
Note
This function is written in Rcpp.
Author(s)
Giorgio Spedicato, Tae Seung Kang
References
Ching, W. K., Huang, X., Ng, M. K., & Siu, T. K. (2013). Higher-order markov chains. In Markov
Chains (pp. 141-176). Springer US.
Ching, W. K., Ng, M. K., & Fung, E. S. (2008). Higher-order multivariate Markov chains and their
applications. Linear Algebra and its Applications, 428(2), 492-507.
Examples
sequence<-c("a", "a", "b", "b", "a", "c", "b", "a", "b", "c", "a", "b",
"c", "a", "b", "c", "a", "b", "a", "b")
fitHigherOrder(sequence)
fitHighOrderMultivarMC
Function to fit Higher Order Multivariate Markov chain
Description
Given a matrix of categorical sequences it fits Higher Order Multivariate Markov chain.
Usage
fitHighOrderMultivarMC(seqMat, order = 2, Norm = 2)
Arguments
seqMat a matrix or a data frame where each column is a categorical sequence
order Multivariate Markov chain order. Default is 2.
Norm Norm to be used. Default is 2.
Value
an hommc object
Author(s)
Giorgio Spedicato, Deepak Yadav
References
W.-K. Ching et al. / Linear Algebra and its Applications
Examples
data <- matrix(c('2', '1', '3', '3', '4', '3', '2', '1', '3', '3', '2', '1',
c('2', '4', '4', '4', '4', '2', '3', '3', '1', '4', '3', '3')),
ncol = 2, byrow = FALSE)
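# A brief sketch completing the example: fit a higher order multivariate Markov
# chain on the two categorical sequences built above (order and Norm as documented).
fitHighOrderMultivarMC(data, order = 2, Norm = 2)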
Description
The function provides an interface to calculate the generator matrix corresponding to a frequency
matrix and the time taken.
Usage
freq2Generator(P, t = 1, method = "QO", logmethod = "Eigen")
Arguments
P relative frequency matrix
t (default value = 1)
method one among "QO" (quasi-optimisation), "WA" (weighted adjustment), "DA" (diagonal
adjustment)
logmethod method for computation of the matrix logarithm (by default: Eigen)
Value
returns a generator matrix with the same dimnames
References
E. Kreinin and M. Sidelnikova: Regularization Algorithms for Transition Matrices. Algo Research
Quarterly 4(1):23-40, 2001
Examples
sample <- matrix(c(150,2,1,1,1,200,2,1,2,1,175,1,1,1,1,150),nrow = 4,byrow = TRUE)
sample_rel = rbind((sample/rowSums(sample))[1:dim(sample)[1]-1,],c(rep(0,dim(sample)[1]-1),1))
freq2Generator(sample_rel,1)
data(tm_abs)
tm_rel=rbind((tm_abs/rowSums(tm_abs))[1:7,],c(rep(0,7),1))
## Derive quasi optimization generator matrix estimate
freq2Generator(tm_rel,1)
generatorToTransitionMatrix
Function to obtain the transition matrix from the generator
Description
The transition matrix of the embedded DTMC is inferred from the CTMC’s generator
Usage
generatorToTransitionMatrix(gen, byrow = TRUE)
Arguments
gen The generator matrix
byrow Flag to determine if rows (columns) sum to 0
Value
Returns the transition matrix.
Author(s)
Sai Bhargav Yalamanchi
References
Introduction to Stochastic Processes with Applications in the Biosciences (2013), David F. Anderson,
University of Wisconsin at Madison
See Also
rctmc,ctmc-class
Examples
energyStates <- c("sigma", "sigma_star")
byRow <- TRUE
gen <- matrix(data = c(-3, 3, 1, -1), nrow = 2,
byrow = byRow, dimnames = list(energyStates, energyStates))
generatorToTransitionMatrix(gen)
HigherOrderMarkovChain-class
Higher order Markov Chains class
Description
The S4 class that describes HigherOrderMarkovChain objects.
Description
Given a markovchain object, this function calculates the probability of ever arriving from state i to j
Usage
hittingProbabilities(object)
Arguments
object the markovchain-class object
Value
a matrix of hitting probabilities
Author(s)
Ignacio Cordón
References
R. Vélez, T. Prieto, Procesos Estocásticos, Librería UNED, 2013
Examples
M <- matlab::zeros(5, 5)
M[1,1] <- M[5,5] <- 1
M[2,1] <- M[2,3] <- 1/2
M[3,2] <- M[3,4] <- 1/2
M[4,2] <- M[4,5] <- 1/2
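# A minimal sketch completing the example: build the chain from M and compute
# the matrix of hitting probabilities.
mc <- new("markovchain", states = letters[1:5], transitionMatrix = M)
hittingProbabilities(mc)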
Description
A data set containing 1000 life histories trajectories and a categorical status (1,2,3) observed on
eleven evenly spaced steps.
Usage
data(holson)
Format
A data frame with 1000 observations on the following 12 variables.
id unique id
time1 observed status at i-th time
time2 observed status at i-th time
time3 observed status at i-th time
time4 observed status at i-th time
time5 observed status at i-th time
time6 observed status at i-th time
time7 observed status at i-th time
time8 observed status at i-th time
time9 observed status at i-th time
time10 observed status at i-th time
time11 observed status at i-th time
Details
The example can be used to fit a markovchain or a markovchainList object.
Source
Private communications
References
Private communications
Examples
data(holson)
head(holson)
Description
An S4 class for representing High Order Multivariate Markovchain (HOMMC)
Slots
order an integer equal to order of Multivariate Markovchain
states a vector of states present in the HOMMC model
P array of transition matrices
Lambda a vector which stores the weight of each transition matrix in P
byrow if FALSE each column sum of transition matrix is 1 else row sum = 1
name a name given to hommc
Author(s)
Giorgio Spedicato, Deepak Yadav
Examples
statesName <- c("a", "b")
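# A hedged sketch built only from the slots documented above (values chosen for
# illustration): four column-stochastic transition matrices and their weights Lambda.
P <- array(0, dim = c(2, 2, 4), dimnames = list(statesName, statesName, NULL))
P[, , 1] <- matrix(c(0, 1, 1/3, 2/3), byrow = FALSE, nrow = 2)
P[, , 2] <- matrix(c(1/4, 3/4, 0, 1), byrow = FALSE, nrow = 2)
P[, , 3] <- matrix(c(1, 0, 1/3, 2/3), byrow = FALSE, nrow = 2)
P[, , 4] <- matrix(c(3/4, 1/4, 0, 1), byrow = FALSE, nrow = 2)
Lambda <- c(0.8, 0.2, 0.3, 0.7)
hommcObj <- new("hommc", order = 1, states = statesName, P = P,
                Lambda = Lambda, byrow = FALSE, name = "FOMMC")
hommcObj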
Description
An S4 class for representing Imprecise Continuous Time Markovchains
Slots
states a vector of states present in the ICTMC model
Q matrix representing the generator demonstrated in the form of variables
range a matrix that stores values of range of variables
name name given to ICTMC
impreciseProbabilityatT
Calculating full conditional probability using lower rate transition
matrix
Description
This function calculates full conditional probability at given time s using lower rate transition matrix
Usage
impreciseProbabilityatT(C,i,t,s,error,useRCpp)
Arguments
C a ictmc class object
i initial state at time t
t initial time t. Default value = 0
s final time
error error rate. Default value = 0.001
useRCpp logical whether to use RCpp implementation; by default TRUE
Author(s)
Vandit Jain
References
Imprecise Continuous-Time Markov Chains, Thomas Krak et al., 2016
Examples
states <- c("n","y")
Q <- matrix(c(-1,1,1,-1),nrow = 2,byrow = TRUE,dimnames = list(states,states))
range <- matrix(c(1/52,3/52,1/2,2),nrow = 2,byrow = TRUE)
name <- "testictmc"
ictmc <- new("ictmc",states = states,Q = Q,range = range,name = name)
impreciseProbabilityatT(ictmc,2,0,1,10^-3,TRUE)
Description
Since the Bayesian inference approach implemented in the package is based on conjugate priors,
hyperparameters must be provided to model the prior probability distribution of the chain
parameters. The hyperparameters are inferred from a given a priori matrix under the assumption that
the matrix provided corresponds to the mean (expected) values of the chain parameters. A scaling
factor vector must be provided too. Alternatively, the hyperparameters can be inferred from a data
set.
Usage
inferHyperparam(transMatr = matrix(), scale = numeric(),
data = character())
Arguments
transMatr A valid transition matrix, with dimension names.
scale A vector of scaling factors, each element corresponds to the row names of the
provided transition matrix transMatr, in the same order.
data A data set from which the hyperparameters are inferred.
Details
transMatr and scale need not be provided if data is provided.
Value
Returns the hyperparameter matrix in a list.
Note
The hyperparameter matrix returned is such that the row and column names are sorted
alphanumerically, and the elements in the matrix are correspondingly permuted.
Author(s)
Sai Bhargav Yalamanchi, Giorgio Spedicato
References
Yalamanchi SB, Spedicato GA (2015). Bayesian Inference of First Order Markov Chains. R pack-
age version 0.2.5
See Also
markovchainFit, predictiveDistribution
Examples
data(rain, package = "markovchain")
inferHyperparam(data = rain$rain)
Description
This function verifies if a state is reachable from another, i.e., if there exists a path that leads to state
j leaving from state i with positive probability
Usage
is.accessible(object, from, to)
Arguments
object A markovchain object.
from The name of state "i" (beginning state).
to The name of state "j" (ending state).
Details
It wraps an internal function named reachabilityMatrix.
Value
A boolean value.
Author(s)
Giorgio Spedicato, Ignacio Cordón
References
James Montgomery, University of Madison
See Also
is.irreducible
Examples
statesNames <- c("a", "b", "c")
markovB <- new("markovchain", states = statesNames,
transitionMatrix = matrix(c(0.2, 0.5, 0.3,
0, 1, 0,
0.1, 0.8, 0.1), nrow = 3, byrow = TRUE,
dimnames = list(statesNames, statesNames)
)
)
is.accessible(markovB, "a", "c")
Description
This function verifies whether a CTMC object is irreducible
Usage
is.CTMCirreducible(ctmc)
Arguments
ctmc a ctmc-class object
Value
a boolean value as described above.
Author(s)
Vandit Jain
References
Continuous-Time Markov Chains, Karl Sigman, Columbia University
Examples
energyStates <- c("sigma", "sigma_star")
byRow <- TRUE
gen <- matrix(data = c(-3, 3,
1, -1), nrow = 2,
byrow = byRow, dimnames = list(energyStates, energyStates))
molecularCTMC <- new("ctmc", states = energyStates,
byrow = byRow, generator = gen,
name = "Molecular Transition Model")
is.CTMCirreducible(molecularCTMC)
Description
This function verifies whether a markovchain object's transition matrix is composed of only one
communicating class.
Usage
is.irreducible(object)
Arguments
object A markovchain object
Details
It is based on .communicatingClasses internal function.
Value
A boolean value.
Author(s)
Giorgio Spedicato
References
Feres, Matlab listings for Markov Chains.
See Also
summary
Examples
statesNames <- c("a", "b")
mcA <- new("markovchain", transitionMatrix = matrix(c(0.7,0.3,0.1,0.9),
byrow = TRUE, nrow = 2,
dimnames = list(statesNames, statesNames)
))
is.irreducible(mcA)
Description
Function to check whether a DTMC is regular
Usage
is.regular(object)
Arguments
object a markovchain object
Details
A Markov chain is regular if some power of its transition matrix has all elements strictly positive
Value
A boolean value
Author(s)
Ignacio Cordón
References
Matrix Analysis. Roger A. Horn, Charles R. Johnson. 2nd edition. Corollary 8.5.8, Theorem 8.5.9
See Also
is.irreducible
Examples
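# An illustrative check: a chain whose transition matrix has all entries strictly
# positive is regular.
statesNames <- c("a", "b")
regularMc <- new("markovchain", transitionMatrix = matrix(c(0.5, 0.5, 0.3, 0.7),
                 byrow = TRUE, nrow = 2,
                 dimnames = list(statesNames, statesNames)))
is.regular(regularMc)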
Description
Usage
is.TimeReversible(ctmc)
Arguments
Value
Author(s)
Vandit Jain
References
Examples
energyStates <- c("sigma", "sigma_star")
byRow <- TRUE
gen <- matrix(data = c(-3, 3,
1, -1), nrow = 2,
byrow = byRow, dimnames = list(energyStates, energyStates))
molecularCTMC <- new("ctmc", states = energyStates,
byrow = byRow, generator = gen,
name = "Molecular Transition Model")
is.TimeReversible(molecularCTMC)
kullback Example from Kullback and Kupperman Tests for Contingency Tables
Description
A list of two matrices representing raw transitions between two states
Usage
data(kullback)
Format
A list containing two 6x6 non-negative integer matrices
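Examples
# A brief sketch: load the data set and inspect the two raw transition matrices.
data(kullback)
kullback[[1]]
kullback[[2]]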
Description
The S4 class that describes markovchain objects.
Arguments
states Name of the states. Must be the same as the colnames and rownames of the transition
matrix
byrow TRUE or FALSE indicating whether the supplied matrix is either stochastic by
rows or by columns
transitionMatrix
Square transition matrix
name Optional character name of the Markov chain
Creation of objects
Objects can be created by calls of the form new("markovchain",states,byrow,transitionMatrix,...).
Methods
* signature(e1 = "markovchain",e2 = "markovchain"): multiply two markovchain objects
* signature(e1 = "markovchain",e2 = "matrix"): markovchain by matrix multiplication
* signature(e1 = "markovchain",e2 = "numeric"): markovchain by numeric vector multiplication
* signature(e1 = "matrix",e2 = "markovchain"): matrix by markov chain
* signature(e1 = "numeric",e2 = "markovchain"): numeric vector by markovchain multiplication
[ signature(x = "markovchain",i = "ANY",j = "ANY",drop = "ANY"): ...
^ signature(e1 = "markovchain",e2 = "numeric"): power of a markovchain object
== signature(e1 = "markovchain",e2 = "markovchain"): equality of two markovchain object
!= signature(e1 = "markovchain",e2 = "markovchain"): non-equality of two markovchain
object
absorbingStates signature(object = "markovchain"): method to get absorbing states
canonicForm signature(object = "markovchain"): returns a markovchain object in canonic
form
coerce signature(from = "markovchain",to = "data.frame"): coerce method from markovchain
to data.frame
conditionalDistribution signature(object = "markovchain"): returns the conditional probability
of subsequent states given a state
coerce signature(from = "data.frame",to = "markovchain"): coerce method from data.frame
to markovchain
coerce signature(from = "table",to = "markovchain"): coerce method from table to markovchain
coerce signature(from = "msm",to = "markovchain"): coerce method from msm to markovchain
coerce signature(from = "msm.est",to = "markovchain"): coerce method from msm.est (but
only from a Probability Matrix) to markovchain
coerce signature(from = "etm",to = "markovchain"): coerce method from etm to markovchain
coerce signature(from = "sparseMatrix",to = "markovchain"): coerce method from sparseMatrix
to markovchain
coerce signature(from = "markovchain",to = "igraph"): coercing to igraph objects
coerce signature(from = "markovchain",to = "matrix"): coercing to matrix objects
coerce signature(from = "markovchain",to = "sparseMatrix"): coercing to sparseMatrix
objects
coerce signature(from = "matrix",to = "markovchain"): coercing to markovchain objects
from matrix one
dim signature(x = "markovchain"): method to get the size
names signature(x = "markovchain"): method to get the names of states
Note
1. markovchain objects are backed by S4 classes.
2. A validation method is used to assess whether either column or row sums total to one. Rounding is
applied up to .Machine$double.eps * 100. If state names are not properly defined for a probability
matrix, coercing to a markovchain object leads to overriding the state names with an artificial
"s1", "s2", ... sequence. In addition, operator overloading has been applied for the +, *, ^, ==, !=
operators.
Author(s)
Giorgio Spedicato
References
A First Course in Probability (8th Edition), Sheldon Ross, Prentice Hall 2010
See Also
markovchainSequence,markovchainFit
Examples
#show markovchain definition
showClass("markovchain")
#create a simple Markov chain
transMatr<-matrix(c(0.4,0.6,.3,.7),nrow=2,byrow=TRUE)
simpleMc<-new("markovchain", states=c("a","b"),
transitionMatrix=transMatr,
name="simpleMc")
#power
simpleMc^4
#some methods
steadyStates(simpleMc)
absorbingStates(simpleMc)
simpleMc[2,1]
t(simpleMc)
is.irreducible(simpleMc)
#conditional distributions
conditionalDistribution(simpleMc, "b")
#example for predict method
sequence<-c("a", "b", "a", "a", "a", "a", "b", "a", "b", "a", "b", "a", "a", "b", "b", "b", "a")
mcFit<-markovchainFit(data=sequence)
predict(mcFit$estimate, newdata="b",n.ahead=3)
#direct conversion
myMc<-as(transMatr, "markovchain")
#example of summary
summary(simpleMc)
## Not run: plot(simpleMc)
Description
A class to handle non homogeneous discrete Markov chains
Arguments
markovchains Object of class "list": a list of markovchains
name Object of class "character": optional name of the class
Methods
[[ signature(x = "markovchainList"): extract the i-th markovchain
dim signature(x = "markovchainList"): number of markovchain objects in the list
predict signature(object = "markovchainList"): predict from a markovchainList
print signature(x = "markovchainList"): prints the list of markovchains
show signature(object = "markovchainList"): same as print
Note
The class consists of a list of markovchain objects. It is aimed at working with non-homogeneous
Markov chains.
Author(s)
Giorgio Spedicato
References
A First Course in Probability (8th Edition), Sheldon Ross, Prentice Hall 2010
See Also
markovchain
Examples
showClass("markovchainList")
#define a markovchainList
statesNames=c("a","b")
mcA<-new("markovchain",name="MCA",
transitionMatrix=matrix(c(0.7,0.3,0.1,0.9),
byrow=TRUE, nrow=2,
dimnames=list(statesNames,statesNames))
)
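# A brief sketch completing the example: a second chain and the list itself
# (the markovchains slot is documented above).
mcB <- new("markovchain", name = "MCB",
           transitionMatrix = matrix(c(0.2, 0.8, 0.5, 0.5),
                                     byrow = TRUE, nrow = 2,
                                     dimnames = list(statesNames, statesNames)))
mcList <- new("markovchainList", markovchains = list(mcA, mcB),
              name = "A list of two chains")
mcList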
markovchainListFit markovchainListFit
Description
Given a data frame or a matrix (rows are observations, columns the temporal sequence), it fits a
non-homogeneous discrete time Markov chain process (stored by row). In particular, a markovchainList
of size = ncol - 1 is obtained, estimating transitions from the n samples given by consecutive column
pairs.
Usage
Arguments
Details
Value
Examples
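# A minimal sketch based on the Description above: fit a non-homogeneous chain
# on the eleven consecutive time columns of the holson data set (documented later
# in this manual). The fitted markovchainList is assumed to be returned in the
# estimate component.
data(holson)
mcListFit <- markovchainListFit(data = holson[, 2:12])
mcListFit$estimate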
Description
Provided any markovchain object, it returns a sequence of states coming from the underlying
stationary distribution.
Usage
Arguments
n Sample size
markovchain markovchain object
t0 The initial state
include.t0 Specify if the initial state shall be used
useRCpp Boolean. Should the RCpp fast implementation be used? Default is yes.
Details
Value
A Character Vector
Author(s)
Giorgio Spedicato
References
A First Course in Probability (8th Edition), Sheldon Ross, Prentice Hall 2010
See Also
markovchainFit
Examples
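# A brief sketch using the arguments documented above.
statesNames <- c("a", "b")
mc <- new("markovchain", states = statesNames,
          transitionMatrix = matrix(c(0.7, 0.3, 0.1, 0.9), nrow = 2, byrow = TRUE,
                                    dimnames = list(statesNames, statesNames)))
markovchainSequence(n = 10, markovchain = mc, t0 = "a", include.t0 = TRUE)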
Description
Computes the expected number of steps to go from any of the transient states to any of the recurrent
states. The Markov chain should have at least one transient state for this method to work
Usage
meanAbsorptionTime(object)
Arguments
Value
A named vector with the expected number of steps to go from a transient state to any of the recurrent
ones
Author(s)
Ignacio Cordón
References
Examples
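# Illustrative sketch: states "a" and "b" are recurrent, "c" is transient
# (same matrix as in the absorptionProbabilities example).
m <- matrix(c(1/2, 1/2, 0,
              1/2, 1/2, 0,
              0, 1/2, 1/2), ncol = 3, byrow = TRUE)
mc <- new("markovchain", states = letters[1:3], transitionMatrix = m)
meanAbsorptionTime(mc)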
Description
Given an irreducible (ergodic) markovchain object, this function calculates the expected number of
steps to reach other states
Usage
meanFirstPassageTime(object, destination)
Arguments
Details
• If destination is empty, the average first time (in steps) that it takes the Markov chain to go from
initial state i to j. (i, j) represents that value in case the Markov chain is given row-wise, (j, i)
in case it is given col-wise.
• If destination is not empty, the average time it takes us from the remaining states to reach the
states in destination
Value
a Matrix of the same size with the average first passage times if destination is empty, a vector if
destination is not
Author(s)
References
Examples
m <- matrix(1 / 10 * c(6,3,1,
2,3,5,
4,1,5), ncol = 3, byrow = TRUE)
mc <- new("markovchain", states = c("s","c","r"), transitionMatrix = m)
meanFirstPassageTime(mc, "r")
Description
Given a markovchain object, this function calculates a matrix where the element (i, j) represents the
expected number of visits to the state j if the chain starts at i (in a Markov chain by columns it would
be the element (j, i) instead)
Usage
meanNumVisits(object)
Arguments
Value
Author(s)
Ignacio Cordón
References
R. Vélez, T. Prieto, Procesos Estocásticos, Librería UNED, 2013
Examples
M <- matlab::zeros(5, 5)
M[1,1] <- M[5,5] <- 1
M[2,1] <- M[2,3] <- 1/2
M[3,2] <- M[3,4] <- 1/2
M[4,2] <- M[4,5] <- 1/2
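# A minimal sketch completing the example: build the chain from M and compute
# the matrix of expected numbers of visits.
mc <- new("markovchain", states = letters[1:5], transitionMatrix = M)
meanNumVisits(mc)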
Description
Computes the expected time to return to a recurrent state in case the Markov chain starts there
Usage
meanRecurrenceTime(object)
Arguments
object the markovchain object
Value
For a Markov chain the output is a named vector with the expected time to first return to a state
when the chain starts there. States present in the vector are only the recurrent ones. If the matrix
is ergodic (i.e. irreducible), then all states are present in the output and the order is the same as the
states order for the Markov chain
Author(s)
Ignacio Cordón
References
C. M. Grinstead and J. L. Snell. Introduction to Probability. American Mathematical Soc., 2012.
Examples
m <- matrix(1 / 10 * c(6,3,1,
2,3,5,
4,1,5), ncol = 3, byrow = TRUE)
mc <- new("markovchain", states = c("s","c","r"), transitionMatrix = m)
meanRecurrenceTime(mc)
multinomialConfidenceIntervals
A function to compute multinomial confidence intervals of DTMC
Description
Usage
multinomialConfidenceIntervals(transitionMatrix, countsTransitionMatrix,
confidencelevel = 0.95)
Arguments
transitionMatrix
An estimated transition matrix.
countsTransitionMatrix
Empirical (counts) transition matrix from which the transitionMatrix was estimated.
confidencelevel
confidence interval level.
Value
References
Constructing two-sided simultaneous confidence intervals for multinomial proportions for small
counts in a large number of cells. Journal of Statistical Software 5(6) (2000)
See Also
markovchainFit
Examples
seq<-c("a", "b", "a", "a", "a", "a", "b", "a", "b", "a", "b", "a", "a", "b", "b", "b", "a")
mcfit<-markovchainFit(data=seq,byrow=TRUE)
seqmat<-createSequenceMatrix(seq)
multinomialConfidenceIntervals(mcfit$estimate@transitionMatrix, seqmat, 0.95)
Description
This method returns the name of a markovchain object
Usage
name(object)
Arguments
object A markovchain object
Author(s)
Giorgio Spedicato, Deepak Yadav
Examples
statesNames <- c("a", "b", "c")
markovB <- new("markovchain", states = statesNames, transitionMatrix =
matrix(c(0.2, 0.5, 0.3, 0, 1, 0, 0.1, 0.8, 0.1), nrow = 3,
byrow = TRUE, dimnames=list(statesNames,statesNames)),
name = "A markovchain Object"
)
name(markovB)
Description
This method modifies the existing name of a markovchain object
Usage
name(object) <- value
Arguments
Author(s)
Examples
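# A brief sketch: rename an existing markovchain object using the documented
# replacement method.
statesNames <- c("a", "b", "c")
markovB <- new("markovchain", states = statesNames, transitionMatrix =
                 matrix(c(0.2, 0.5, 0.3, 0, 1, 0, 0.1, 0.8, 0.1), nrow = 3,
                        byrow = TRUE, dimnames = list(statesNames, statesNames)),
               name = "A markovchain Object")
name(markovB) <- "A renamed markovchain Object"
name(markovB)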
names,markovchain-method
Returns the states for a Markov chain object
Description
Usage
Arguments
noofVisitsDist return a joint pdf of the number of visits to the various states of the
DTMC
Description
This function returns the joint pdf of the number of visits to the various states of the DTMC
during the first N steps.
Usage
noofVisitsDist(markovchain,N,state)
Arguments
Details
This function returns the joint pdf of the number of visits to the various states of the DTMC
during the first N steps.
Value
Author(s)
Vandit Jain
Examples
transMatr<-matrix(c(0.4,0.6,.3,.7),nrow=2,byrow=TRUE)
simpleMc<-new("markovchain", states=c("a","b"),
transitionMatrix=transMatr,
name="simpleMc")
noofVisitsDist(simpleMc,5,"a")
Description
These functions return absorbing and transient states of the markovchain objects.
Usage
period(object)
communicatingClasses(object)
recurrentClasses(object)
transientClasses(object)
transientStates(object)
recurrentStates(object)
absorbingStates(object)
canonicForm(object)
Arguments
object A markovchain object.
Value
period returns a integer number corresponding to the periodicity of the Markov chain (if it is
irreducible)
absorbingStates returns a character vector with the names of the absorbing states in the Markov
chain
communicatingClasses returns a list in which each slot contains the names of the states that are
in that communicating class
recurrentClasses analogously to communicatingClasses, but with recurrent classes
transientClasses analogously to communicatingClasses, but with transient classes
transientStates returns a character vector with all the transient states for the Markov chain
recurrentStates returns a character vector with all the recurrent states for the Markov chain
canonicForm returns the Markov chain reordered by a permutation of states so that we have block
submatrices for each of the recurrent classes and a collection of rows at the end for the transient
states
Author(s)
Giorgio Alfredo Spedicato, Ignacio Cordón
References
Feres, Matlab listing for markov chain.
See Also
markovchain
Examples
statesNames <- c("a", "b", "c")
mc <- new("markovchain", states = statesNames, transitionMatrix =
matrix(c(0.2, 0.5, 0.3,
0, 1, 0,
0.1, 0.8, 0.1), nrow = 3, byrow = TRUE,
dimnames = list(statesNames, statesNames))
)
communicatingClasses(mc)
recurrentClasses(mc)
absorbingStates(mc)
transientStates(mc)
recurrentStates(mc)
canonicForm(mc)
# periodicity analysis
A <- matrix(c(0, 1, 0, 0, 0.5, 0, 0.5, 0, 0, 0.5, 0, 0.5, 0, 0, 1, 0),
nrow = 4, ncol = 4, byrow = TRUE)
mcA <- new("markovchain", states = c("a", "b", "c", "d"),
transitionMatrix = A,
name = "A")
is.irreducible(mcA) #true
period(mcA) #2
# periodicity analysis
B <- matrix(c(0, 0, 1/2, 1/4, 1/4, 0, 0,
0, 0, 1/3, 0, 2/3, 0, 0,
0, 0, 0, 0, 0, 1/3, 2/3,
0, 0, 0, 0, 0, 1/2, 1/2,
0, 0, 0, 0, 0, 3/4, 1/4,
1/2, 1/2, 0, 0, 0, 0, 0,
1/4, 3/4, 0, 0, 0, 0, 0), byrow = TRUE, ncol = 7)
mcB <- new("markovchain", transitionMatrix = B)
period(mcB)
Description
This function provides a prediction of states for a higher order multivariate markovchain object
Usage
predictHommc(hommc,t,init)
Arguments
hommc a hommc-class object
t no of iterations to predict
init matrix of previous states size of which depends on hommc
Details
The user is required to provide a matrix giving the n previous states corresponding to every categorical
sequence. The dimensions of init are s x n, where s is the number of categorical sequences and n is
the order of the hommc.
Value
The function returns a matrix of size s x t displaying the t predicted states in each row, corresponding
to every categorical sequence.
Author(s)
Vandit Jain
predictiveDistribution
predictiveDistribution
Description
The function computes the probability of observing a new data set, given a data set
Usage
predictiveDistribution(stringchar, newData, hyperparam = matrix())
Arguments
stringchar This is the data using which the Bayesian inference is performed.
newData This is the data whose predictive probability is computed.
hyperparam This determines the shape of the prior distribution of the parameters. If none is
provided, default value of 1 is assigned to each parameter. This must be of size
kxk where k is the number of states in the chain and the values should typically
be non-negative integers.
Details
The underlying method is Bayesian inference. The probability is computed by averaging the
likelihood of the new data with respect to the posterior. Since the method assumes conjugate priors,
the result can be represented in a closed form (see the vignette for more details), which is what is
returned.
Value
Author(s)
References
Inferring Markov Chains: Bayesian Estimation, Model Comparison, Entropy Rate, and Out-of-Class
Modeling, Christopher C. Strelioff, James P. Crutchfield, Alfred Hubler, Santa Fe Institute
Yalamanchi SB, Spedicato GA (2015). Bayesian Inference of First Order Markov Chains. R package
version 0.2.5
See Also
markovchainFit
Examples
sequence<- c("a", "b", "a", "a", "a", "a", "b", "a", "b", "a", "b", "a", "a",
"b", "b", "b", "a")
hyperMatrix<-matrix(c(1, 2, 1, 4), nrow = 2,dimnames=list(c("a","b"),c("a","b")))
predProb <- predictiveDistribution(sequence[1:10], sequence[11:17], hyperparam =hyperMatrix )
hyperMatrix2<-hyperMatrix[c(2,1),c(2,1)]
predProb2 <- predictiveDistribution(sequence[1:10], sequence[11:17], hyperparam =hyperMatrix2 )
predProb2==predProb
Description
Sequence of bases for preproglucacon DNA protein
Usage
data(preproglucacon)
Format
A data frame with 1572 observations on the following 2 variables.
Source
Avery Henderson
References
Avery Henderson, Fitting markov chain models on discrete time series such as DNA sequences
Examples
data(preproglucacon)
preproglucaconMc<-markovchainFit(data=preproglucacon$preproglucacon)
priorDistribution priorDistribution
Description
Function to evaluate the prior probability of a transition matrix. It is based on conjugate priors and
therefore a Dirichlet distribution is used to model the transitions of each state.
Usage
priorDistribution(transMatr, hyperparam = matrix())
Arguments
Details
The states (dimnames) of the transition matrix and the hyperparam may be in any order.
Value
The log of the probabilities for each state is returned in a numeric vector. Each number in the vector
represents the probability (log) of having a probability transition vector as specified in the
corresponding row of the transition matrix.
Note
This function can be used in conjunction with inferHyperparam. For example, if the user has a
prior data set and a prior transition matrix, he can infer the hyperparameters using inferHyperparam
and then compute the probability of their prior matrix using the inferred hyperparameters with
priorDistribution.
Author(s)
References
Yalamanchi SB, Spedicato GA (2015). Bayesian Inference of First Order Markov Chains. R pack-
age version 0.2.5
See Also
predictiveDistribution, inferHyperparam
Examples
priorDistribution(matrix(c(0.5, 0.5, 0.5, 0.5),
nrow = 2,
dimnames = list(c("a", "b"), c("a", "b"))),
matrix(c(2, 2, 2, 2),
nrow = 2,
dimnames = list(c("a", "b"), c("a", "b"))))
Description
This function returns the probability of every state at time t under different conditions
Usage
probabilityatT(C,t,x0,useRCpp)
Arguments
C A CTMC S4 object
t final time t
x0 initial state
useRCpp logical whether to use RCpp implementation
Details
The initial state is not mandatory. In case it is not provided, the function returns a matrix of the
transition function at time t; otherwise, it returns the vector of probabilities of transition to the
different states, given that the initial state was x0
Value
returns a vector or a matrix in case x0 is provided or not respectively.
Author(s)
Vandit Jain
References
INTRODUCTION TO STOCHASTIC PROCESSES WITH R, ROBERT P. DOBROW, Wiley
Examples
states <- c("a","b","c","d")
byRow <- TRUE
gen <- matrix(data = c(-1, 1/2, 1/2, 0, 1/4, -1/2, 0, 1/4, 1/6, 0, -1/3, 1/6, 0, 0, 0, 0),
nrow = 4,byrow = byRow, dimnames = list(states,states))
ctmc <- new("ctmc",states = states, byrow = byRow, generator = gen, name = "testctmc")
probabilityatT(ctmc,1,useRCpp = TRUE)
Description
Rainfall measured in Alofi Island
Usage
data(rain)
Format
A data frame with 1096 observations on the following 2 variables.
Source
Avery Henderson
References
Avery Henderson, Fitting markov chain models on discrete time series such as DNA sequences
Examples
data(rain)
rainMc<-markovchainFit(data=rain$rain)
rctmc rctmc
Description
The function generates random CTMC transitions as per the provided generator matrix.
Usage
rctmc(n, ctmc, initDist = numeric(), T = 0, include.T0 = TRUE,
out.type = "list")
Arguments
n The number of samples to generate.
ctmc The CTMC S4 object.
initDist The initial distribution of states.
T The time up to which the simulation runs (all transitions after time T are not
returned).
include.T0 Flag to determine if start state is to be included.
out.type "list" or "df"
Details
In order to use the T argument, set n to Inf.
Value
Based on out.type, a list or a data frame is returned. The returned list has two elements - a character
vector (states) and a numeric vector (indicating time of transitions). The data frame is similarly
structured.
Author(s)
Sai Bhargav Yalamanchi
References
Introduction to Stochastic Processes with Applications in the Biosciences (2013), David F. Ander-
son, University of Wisconsin at Madison
See Also
generatorToTransitionMatrix,ctmc-class
Examples
energyStates <- c("sigma", "sigma_star")
byRow <- TRUE
gen <- matrix(data = c(-3, 3, 1, -1), nrow = 2,
byrow = byRow, dimnames = list(energyStates, energyStates))
molecularCTMC <- new("ctmc", states = energyStates,
byrow = byRow, generator = gen,
name = "Molecular Transition Model")
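# A brief sketch completing the example: simulate three transitions from the chain
# above (the initial distribution values are chosen here for illustration).
statesDist <- c(0.8, 0.2)
rctmc(n = 3, ctmc = molecularCTMC, initDist = statesDist, include.T0 = FALSE, out.type = "df")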
Description
Provided any markovchain or markovchainList objects, it returns a sequence of states coming
from the underlying stationary distribution.
Usage
rmarkovchain(n, object, what = "data.frame", useRCpp = TRUE,
parallel = FALSE, num.cores = NULL, ...)
Arguments
n Sample size
object Either a markovchain or a markovchainList object
what It specifies whether either a data.frame or a matrix (each row represents a
simulation) or a list is returned.
useRCpp Boolean. Should the RCpp fast implementation be used? Default is yes.
parallel Boolean. Should the parallel implementation be used? Default is no.
num.cores Number of Cores to be used
... additional parameters passed to the internal sampler
Details
When a homogeneous process is assumed (markovchain object), a sequence of size n is sampled.
When a non-homogeneous process is assumed, n samples are taken, but the process is assumed to
last from the beginning to the end of the non-homogeneous Markov process.
Value
Character Vector, data.frame, list or matrix
Note
Check the type of input
Author(s)
Giorgio Spedicato
References
A First Course in Probability (8th Edition), Sheldon Ross, Prentice Hall 2010
See Also
markovchainFit, markovchainSequence
Examples
# define the markovchain object
statesNames <- c("a", "b", "c")
mcB <- new("markovchain", states = statesNames,
transitionMatrix = matrix(c(0.2, 0.5, 0.3, 0, 0.2, 0.8, 0.1, 0.8, 0.1),
nrow = 3, byrow = TRUE, dimnames = list(statesNames, statesNames)))
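# A brief sketch completing the example: sample ten states from the chain above.
rmarkovchain(n = 10, object = mcB)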
Description
Sales demand sequences of five products (A, B, C, D, E). Each column corresponds to a product's
demand sequence: the first column corresponds to product A, the second to product B, and so on.
Usage
data("sales")
Format
An object of class matrix with 269 rows and 5 columns.
Details
The example can be used to fit a high order multivariate Markov chain.
Examples
data("sales")
# fitHighOrderMultivarMC(seqMat = sales, order = 2, Norm = 2)
Description
This is a convenience function to display the slots of an hommc object in a proper format
Usage
## S4 method for signature 'hommc'
show(object)
Arguments
object An object of class hommc
Description
This method returns the states of a transition matrix.
Usage
states(object)
Arguments
object A discrete markovchain object
Value
The character vector corresponding to states slot.
Author(s)
Giorgio Spedicato
References
A First Course in Probability (8th Edition), Sheldon Ross, Prentice Hall 2010
See Also
markovchain
Examples
statesNames <- c("a", "b", "c")
markovB <- new("markovchain", states = statesNames, transitionMatrix =
matrix(c(0.2, 0.5, 0.3, 0, 1, 0, 0.1, 0.8, 0.1), nrow = 3,
byrow = TRUE, dimnames=list(statesNames,statesNames)),
name = "A markovchain Object"
)
states(markovB)
names(markovB)
Description
This method returns the stationary vector, in matrix form, of a markovchain object.
Usage
steadyStates(object)
Arguments
object A discrete markovchain object
Value
A matrix corresponding to the stationary states
Note
The steady states are identified by finding the eigenvectors that correspond to unit eigenvalues and
then normalizing them to sum up to unity. When negative values are found in the matrix, the
eigenvalue extraction is performed on the recurrent classes submatrix.
Author(s)
Giorgio Spedicato
References
A First Course in Probability (8th Edition), Sheldon Ross, Prentice Hall 2010
See Also
markovchain
Examples
statesNames <- c("a", "b", "c")
markovB <- new("markovchain", states = statesNames, transitionMatrix =
matrix(c(0.2, 0.5, 0.3, 0, 1, 0, 0.1, 0.8, 0.1), nrow = 3,
byrow = TRUE, dimnames=list(statesNames,statesNames)),
name = "A markovchain Object"
)
steadyStates(markovB)
Description
Matrix of Standard and Poor's Global Corporate Rating Transition Frequencies 2000 (NR Removed)
Usage
data(tm_abs)
Format
The format is: num [1:8, 1:8] 17 2 0 0 0 0 0 0 1 455 ... - attr(*, "dimnames")=List of 2 ..$ : chr [1:8]
"AAA" "AA" "A" "BBB" ... ..$ : chr [1:8] "AAA" "AA" "A" "BBB" ...
References
European Securities and Markets Authority, 2016 https://cerep.esma.europa.eu/cerep-web/statistics/transitionMatrice.xhtml
Examples
data(tm_abs)
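# possible follow-up (not in the original page): normalize the counts row-wise to
# obtain an empirical transition matrix (rows with no observations would give NaN)
tm_prob <- tm_abs / rowSums(tm_abs)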
transition2Generator
Description
Calculate the generator matrix corresponding to a given transition matrix.
Usage
transition2Generator(P, t = 1, method = "logarithm")
Arguments
P transition matrix between time 0 and t
t time of observation
method "logarithm" returns the Matrix logarithm of the transition matrix
Value
A matrix that represents the generator of P.
See Also
rctmc
Examples
mymatr <- matrix(c(.4, .6, .1, .9), nrow = 2, byrow = TRUE)
Q <- transition2Generator(P = mymatr)
expm::expm(Q)
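# the call above should approximately reproduce mymatr, since expm(Q * t) with
# t = 1 recovers the transition matrix from its generator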
transitionProbability
Description
This is a convenience function to get transition probabilities.
Usage
transitionProbability(object, t0, t1)
Arguments
object A markovchain object.
t0 The initial state.
t1 The subsequent state.
Value
Numeric Vector
Author(s)
Giorgio Spedicato
References
A First Course in Probability (8th Edition), Sheldon Ross, Prentice Hall 2010
See Also
markovchain
Examples
statesNames <- c("a", "b", "c")
markovB <- new("markovchain", states = statesNames, transitionMatrix =
matrix(c(0.2, 0.5, 0.3, 0, 1, 0, 0.1, 0.8, 0.1), nrow = 3,
byrow = TRUE, dimnames=list(statesNames,statesNames)),
name = "A markovchain Object"
)
transitionProbability(markovB, "b", "c")
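# the value returned above is the one-step probability of moving from "b" to "c",
# i.e. the ("b", "c") entry of the transition matrix (0 here, since "b" is absorbing)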
verifyMarkovProperty
Description
These functions verify the Markov property and assess the order and stationarity of a Markov chain.
verifyEmpiricalToTheoretical tests whether an empirical transition matrix is statistically compatible
with a theoretical one; it is a chi-square based test. verifyHomogeneity verifies that the sequences
in the input list belong to the same DTMC.
Usage
verifyMarkovProperty(sequence, verbose = TRUE)
assessOrder(sequence, verbose = TRUE)
assessStationarity(sequence, nblocks, verbose = TRUE)
verifyEmpiricalToTheoretical(data, object, verbose = TRUE)
verifyHomogeneity(inputList, verbose = TRUE)
Arguments
sequence An empirical sequence.
verbose Should test results be printed out?
nblocks Number of blocks.
data A matrix, character vector or list to be converted into a raw transition matrix.
object A markovchain object.
inputList A list of items that can be coerced to transition matrices.
Value
Verification result. For verifyEmpiricalToTheoretical and verifyHomogeneity, a list with the following
slots: statistic (the chi-square statistic), dof (degrees of freedom), and the corresponding p-value.
Author(s)
Tae Seung Kang, Giorgio Alfredo Spedicato
References
Anderson, T. W. and Goodman, L. A. (1957). Statistical inference about Markov chains. The Annals of Mathematical Statistics, 28(1), 89-110.
See Also
markovchain
Examples
sequence <- c("a", "b", "a", "a", "a", "a", "b", "a", "b",
"a", "b", "a", "a", "b", "b", "b", "a")
mcFit <- markovchainFit(data = sequence, byrow = FALSE)
verifyMarkovProperty(sequence)
assessOrder(sequence)
assessStationarity(sequence, 1)
#Example taken from Kullback Kupperman Tests for Contingency Tables and Markov Chains
sequence<-c(0,1,2,2,1,0,0,0,0,0,0,1,2,2,2,1,0,0,1,0,0,0,0,0,0,1,1,
2,0,0,2,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,1,0,0,0,0,2,1,0,
0,2,1,0,0,0,0,0,0,1,1,1,2,2,0,0,2,1,1,1,1,2,1,1,1,1,1,1,1,1,1,0,2,
0,1,1,0,0,0,1,2,2,0,0,0,0,0,0,2,2,2,1,1,1,1,0,1,1,1,1,0,0,2,1,1,
0,0,0,0,0,2,2,1,1,1,1,1,2,1,2,0,0,0,1,2,2,2,0,0,0,1,1)
mc <- matrix(c(5/8, 1/4, 1/8, 1/4, 1/2, 1/4, 1/4, 3/8, 3/8), byrow = TRUE, nrow = 3)
rownames(mc) <- colnames(mc) <- 0:2
theoreticalMc <- as(mc, "markovchain")
verifyEmpiricalToTheoretical(data = sequence, object = theoreticalMc)
data(kullback)
verifyHomogeneity(inputList=kullback,verbose=TRUE)
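# verifyHomogeneity checks whether the elements of the 'kullback' list (coerced to
# transition matrices) are compatible with a single underlying DTMC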
Index
!=,markovchain,markovchain-method (markovchain-class), 31
∗Topic classes
    ctmc-class, 10
    markovchain-class, 31
    markovchainList-class, 34
∗Topic datasets
    blanden, 5
    craigsendi, 8
    holson, 22
    kullback, 31
    preproglucacon, 50
    rain, 53
    sales, 56
    tm_abs, 59
∗Topic package
    markovchain-package, 3
*,markovchain,markovchain-method (markovchain-class), 31
*,markovchain,matrix-method (markovchain-class), 31
*,markovchain,numeric-method (markovchain-class), 31
*,matrix,markovchain-method (markovchain-class), 31
*,numeric,markovchain-method (markovchain-class), 31
==,markovchain,markovchain-method (markovchain-class), 31
[,markovchain,ANY,ANY,ANY-method (markovchain-class), 31
[[,markovchainList-method (markovchainList-class), 34
^,markovchain,numeric-method (markovchain-class), 31
absorbingStates (period), 46
absorbingStates,markovchain-method (markovchain-class), 31
absorptionProbabilities, 4
absorptionProbabilities,markovchain-method (markovchain-class), 31
assessOrder (verifyMarkovProperty), 61
assessStationarity (verifyMarkovProperty), 61
blanden, 5
canonicForm (period), 46
canonicForm,markovchain-method (markovchain-class), 31
coerce,data.frame,markovchain-method (markovchain-class), 31
coerce,etm,markovchain-method (markovchain-class), 31
coerce,markovchain,data.frame-method (markovchain-class), 31
coerce,markovchain,igraph-method (markovchain-class), 31
coerce,markovchain,matrix-method (markovchain-class), 31
coerce,markovchain,sparseMatrix-method (markovchain-class), 31
coerce,matrix,markovchain-method (markovchain-class), 31
coerce,msm,markovchain-method (markovchain-class), 31
coerce,msm.est,markovchain-method (markovchain-class), 31
coerce,sparseMatrix,markovchain-method (markovchain-class), 31
coerce,table,markovchain-method (markovchain-class), 31
committorAB, 6
communicatingClasses (period), 46
communicatingClasses,markovchain-method (markovchain-class), 31
conditionalDistribution, 7, 16
conditionalDistribution,markovchain-method (markovchain-class), 31
craigsendi, 8
createSequenceMatrix, 8
ctmc-class, 10
ctmcFit, 12
dim,ctmc-method (ctmc-class), 10
dim,markovchain-method (markovchain-class), 31
dim,markovchainList-method (markovchainList-class), 34
expectedRewards, 13
expectedRewardsBeforeHittingA, 14
ExpectedTime, 14
firstPassage, 15, 17
firstPassageMultiple, 16
fitHigherOrder, 17
fitHighOrderMultivarMC, 18
freq2Generator, 19
generatorToTransitionMatrix, 11, 20, 54
HigherOrderMarkovChain-class, 21
hittingProbabilities, 21
hittingProbabilities,markovchain-method (markovchain-class), 31
holson, 22
hommc (hommc-class), 23
hommc-class, 23
ictmc (ictmc-class), 24
ictmc-class, 24
impreciseProbabilityatT, 24
inferHyperparam, 25, 51
initialize,ctmc-method (ctmc-class), 10
initialize,markovchain-method (markovchain-class), 31
is.accessible, 26
is.accessible,markovchain,character,character-method (markovchain-class), 31
is.accessible,markovchain,missing,missing-method (markovchain-class), 31
is.CTMCirreducible, 27
is.irreducible, 28, 30
is.irreducible,markovchain-method (markovchain-class), 31
is.regular, 29
is.regular,markovchain-method (markovchain-class), 31
is.TimeReversible, 30
kullback, 31
markovchain, 7, 35, 47, 58, 59, 61
markovchain-class, 31
markovchain-package, 3
markovchainFit, 26, 33, 37, 49, 56
markovchainFit (createSequenceMatrix), 8
markovchainList-class, 34
markovchainListFit, 10, 35
markovchainSequence, 10, 33, 36, 56
meanAbsorptionTime, 38
meanAbsorptionTime,markovchain-method (markovchain-class), 31
meanFirstPassageTime, 39
meanFirstPassageTime,markovchain,character-method (markovchain-class), 31
meanFirstPassageTime,markovchain,missing-method (markovchain-class), 31
meanNumVisits, 40
meanNumVisits,markovchain-method (markovchain-class), 31
meanRecurrenceTime, 41
meanRecurrenceTime,markovchain-method (markovchain-class), 31
multinomialConfidenceIntervals, 42
name, 43
name,markovchain-method (name), 43
name<-, 43
name<-,markovchain-method (name<-), 43
names,markovchain-method, 44
names<-,markovchain-method (markovchain-class), 31
noofVisitsDist, 45
period, 46
plot,ctmc,missing-method (ctmc-class), 10
plot,markovchain,missing-method (markovchain-class), 31
predict,markovchain-method (markovchain-class), 31
predict,markovchainList-method (markovchainList-class), 34
predictHommc, 48
predictiveDistribution, 26, 48, 51
preproglucacon, 50
print,markovchain-method (markovchain-class), 31
print,markovchainList-method (markovchainList-class), 34
priorDistribution, 50
probabilityatT, 52
rain, 53
rctmc, 11, 12, 20, 53, 60
recurrentClasses (period), 46
recurrentClasses,markovchain-method (markovchain-class), 31
recurrentStates (period), 46
recurrentStates,markovchain-method (markovchain-class), 31
rmarkovchain, 55
sales, 56
seq2freqProb (fitHigherOrder), 17
seq2matHigh (fitHigherOrder), 17
show,hommc-method, 57
show,markovchain-method (markovchain-class), 31
show,markovchainList-method (markovchainList-class), 34
sort,markovchain-method (markovchain-class), 31
states, 57
states,ctmc-method (ctmc-class), 10
states,markovchain-method (states), 57
steadyStates, 58
steadyStates,ctmc-method (ctmc-class), 10
steadyStates,markovchain-method (markovchain-class), 31
summary, 29
summary,markovchain-method (markovchain-class), 31
t,markovchain-method (markovchain-class), 31
tm_abs, 59
transientClasses (period), 46
transientClasses,markovchain-method (markovchain-class), 31
transientStates (period), 46
transientStates,markovchain-method (markovchain-class), 31
transition2Generator, 60
transitionProbability, 60
transitionProbability,markovchain-method (transitionProbability), 60
verifyEmpiricalToTheoretical (verifyMarkovProperty), 61
verifyHomogeneity (verifyMarkovProperty), 61
verifyMarkovProperty, 61