COMPUTATIONAL METHODS.
HW 2.
LEONARDO D. VILLAMIL.
02/04/2015.
Let's say you wanted to simulate rolling a die 5 times, and you wished to count the number of 3's you observe. You
could simulate this experiment using the following code:
rbinom(1,5,1/6)
Problem #1 of homework 2.
Observe 90,000 random numbers, each consisting of 25 trials, where the probability of success on any given trial is 0.3.
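Following the rbinom pattern above, one plausible reading of this problem in R (interpreting it as 90,000 binomial observations, each counting successes in 25 trials with success probability 0.3):
x <- rbinom(90000, 25, 0.3)   # 90,000 counts of successes in 25 trials
mean(x)                       # should be close to 25 * 0.3 = 7.5
hist(x)                       # shape of the simulated distribution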
sim.mc {mhsmm}
R Documentation
Usage
sim.mc(init, transition, N)
Arguments
init: the distribution over states at the first time step (a probability vector)
transition: the transition probability matrix of the Markov chain
N: the number of observations to simulate
Value
A vector of simulated state indices.
Author(s)
Jared O'Connell [email protected]
Examples
p <- matrix(c(.1,.3,.6,rep(1/3,3),0,.5,.5),ncol=3,byrow=TRUE)
init <- rep(1/3,3)
sim.mc(init,p,10)
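If the mhsmm package is installed, a quick sketch of what this function can show: simulate one long chain with the example matrix above and tabulate how often each state is visited (set.seed is only for reproducibility, and 10,000 steps is an arbitrary choice):
library(mhsmm)
p <- matrix(c(.1,.3,.6,rep(1/3,3),0,.5,.5), ncol=3, byrow=TRUE)
init <- rep(1/3,3)
set.seed(1)                        # reproducible draws
states <- sim.mc(init, p, 10000)   # one long realization of the chain
table(states) / length(states)     # empirical frequency of each state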
Markov Chains
Suppose that in a small town there are three places to eat: two restaurants, one
Chinese and one Mexican, and a pizza place. Everyone in town eats dinner in one
of these places or has dinner at home.
Assume that, of those who eat at the Chinese restaurant, 20% go to the Mexican
restaurant next time, 20% eat at home, and 30% go to the pizza place. Of those
who eat at the Mexican restaurant, 10% go to the pizza place, 25% go to the
Chinese restaurant, and 25% eat at home next time. Of those who eat at the
pizza place, 30% …. Of those who eat at home, 20% go to the Chinese restaurant,
25% go to the Mexican restaurant, and 30% go to the pizza place.
We call this situation a system. A person in the town can eat dinner in one of
these four places; each of them is called a state. In our example, the system
has four states. We are interested in the success of these places in terms of
their business. For example, after a given period of time, what percentage of
people in town will go to the pizza place?
Suppose there is a physical or mathematical system that has k possible
states and, at any one time, the system is in one and only one of its states.
Suppose further that at a given observation period, say the n-th period, the
probability of the system being in a particular state depends only on its
status at the (n-1)-th period. Such a system is called a Markov chain or
Markov process.
In the example above there are four states for the system. Define a_ij to be
the probability that the system is in state i at a given observation, given
that it was in state j at the preceding observation. The matrix T = [a_ij]
is called the transition matrix of the Markov chain. For our town the known
entries are (the figures for those leaving the pizza place are incomplete
above, so the Pizza column is left blank):

             Home   Chinese   Mexican   Pizza
  Home       0.25    0.20      0.25      ...
  Chinese    0.20    0.30      0.25      ...
  Mexican    0.25    0.20      0.40      ...
  Pizza      0.30    0.30      0.10      ...

The diagonal entries follow from the requirement that each column sum to one:
whatever probability is not assigned to moving elsewhere corresponds to eating
at the same place again.
The first column represents the state of eating at home, the second column
the state of eating at the Chinese restaurant, the third the state of eating
at the Mexican restaurant, and the fourth the state of eating at the Pizza
Place. Similarly, the rows respectively represent eating at home, at the
Chinese restaurant, at the Mexican restaurant, and at the Pizza Place: the
entry a_ij in row i and column j is the probability of moving to the row-i
state when the previous meal was in the column-j state.
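As a sketch, the known entries of this matrix can be entered in R. The Pizza column is not recoverable from the text above, so the even split used for it below is purely a placeholder assumption, included only so that every column sums to one and the example runs:
Tm <- matrix(c(
  .25, .20, .25, .25,    # to Home    (from Home, Chinese, Mexican, Pizza)
  .20, .30, .25, .25,    # to Chinese
  .25, .20, .40, .25,    # to Mexican
  .30, .30, .10, .25),   # to Pizza   (Pizza column = placeholder assumption)
  ncol = 4, byrow = TRUE)
colSums(Tm)              # every column of a transition matrix must sum to 1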
To answer this question, we first define the state vector. For a Markov
chain with k states, the state vector for an observation period n is a
column vector

   x(n) = (x_1, x_2, ..., x_k)^T,

where x_i is the probability that the system is in the i-th state at the time
of that observation. Note that the sum of the entries of the state vector has
to be one; any column vector with nonnegative entries that sum to one is
called a probability vector. Consider our example: suppose at the beginning
everyone eats at home, so the initial state vector is

   x(0) = (1, 0, 0, 0)^T,

the single 1 sitting in the first coordinate because the first state is
eating at home.
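In R, with the Home, Chinese, Mexican, Pizza ordering used above, this initial state vector is simply:
x0 <- c(1, 0, 0, 0)   # everyone starts by eating at home
sum(x0)               # entries of a probability vector sum to 1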
In the next observation period, say the end of the first week, the state
vector will be

   x(1) = T x(0).

At the end of the second week it is x(2) = T x(1), and at the end of the
n-th week x(n) = T x(n-1). Equivalently, x(n) can be computed directly from
the initial state vector as

   x(n) = T^n x(0),

since each observation period applies the transition matrix once more.
Computing these state vectors for successive observation periods suggests
that the state vector approaches some fixed vector as the number of
observation periods increases. This is not the case for every Markov
chain. For example, if

   T = | 0  1 |
       | 1  0 |

and x(0) = (1, 0)^T, then x(1) = (0, 1)^T, x(2) = (1, 0)^T, x(3) = (0, 1)^T,
and so on. These computations indicate that this system oscillates and does
not approach any fixed vector.
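These iterations are easy to reproduce in R, reusing the matrix Tm sketched above (its Pizza column is still a labeled assumption), followed by the two-state example that oscillates:
x <- c(1, 0, 0, 0)               # initial state vector: everyone at home
for (n in 1:52) x <- Tm %*% x    # one matrix multiplication per observation period
round(x, 4)                      # appears to settle toward a fixed vector

A <- matrix(c(0, 1,
              1, 0), ncol = 2, byrow = TRUE)
y <- c(1, 0)
for (n in 1:4) { y <- A %*% y; print(t(y)) }   # alternates (0,1), (1,0), ...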