
COMPUTATIONAL METHODS
HW 2
LEONARDO D. VILLAMIL
02/04/2015

rbinom(n, size, prob)

The result is conventionally interpreted as the number of successes in size independent trials, each with success probability prob.

Examples.

Suppose you wanted to simulate rolling a die 5 times and count the number of 3's you observe. You could simulate this experiment using the following code:
rbinom(1, 5, 1/6)
Problem 1 of homework 2.
Simulate 10000 binomial random numbers, each counting successes in 25 trials where the probability of success on any given trial is 0.3.

1. Simulate 10000 binomial random numbers with parameters 25 and 0.3, assigning them to a vector called binsim. Let X be a Binomial(25, 0.3) random variable. Use the simulated numbers to estimate the following:
(a) The proportion of the simulated numbers which are less than or equal to 5.
(b) The proportion of the simulated numbers which are equal to 5.
(c) The sample mean and sample variance. Compare them with E(X) and Var(X).
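A minimal R sketch for problem 1. The vector name binsim is specified in the problem; the set.seed call is an added assumption for reproducibility:

```r
set.seed(1)  # assumed, for reproducibility
binsim <- rbinom(10000, size = 25, prob = 0.3)
# (a) proportion <= 5; compare with the exact value pbinom(5, 25, 0.3)
mean(binsim <= 5)
# (b) proportion exactly equal to 5; compare with dbinom(5, 25, 0.3)
mean(binsim == 5)
# (c) sample mean and variance vs. E(X) = 25*0.3 = 7.5 and Var(X) = 25*0.3*0.7 = 5.25
mean(binsim)
var(binsim)
```

With 10000 draws, the sample mean and variance should land close to 7.5 and 5.25.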

3. Simulate 90 000 exponential random numbers having rate 6.


(a) Find the proportion of these numbers which are less than 2. Compare with the
probability that an exponential random variable with rate 6 will be less than 2.
(b) Compute the average of these numbers. Compare with the expected value.
(c) Calculate the variance of this sample, and compare with the theoretical value.
https://rstudio-pubs-static.s3.amazonaws.com/26693_e1151035722942b2813c0063c6b220ae.html
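A minimal R sketch for problem 3, with set.seed added as an assumption for reproducibility:

```r
set.seed(1)  # assumed, for reproducibility
x <- rexp(90000, rate = 6)
# (a) proportion below 2 vs. the exact probability P(X < 2) = pexp(2, 6) = 1 - exp(-12)
mean(x < 2)
# (b) sample mean vs. the expected value E(X) = 1/6
mean(x)
# (c) sample variance vs. the theoretical variance Var(X) = 1/36
var(x)
```

Since 1 - exp(-12) is about 0.999994, virtually all of the 90 000 simulated values fall below 2.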
4. A simple electronic device consists of two components which have failure times
which may be modeled as independent exponential random variables. The first
component has a mean time to failure of 4 months, and the second has a mean
time to failure of 6 months. The electronic device will fail when either of the
components fails.
(a) Use simulation to estimate the mean and variance of the time to failure for the
device.
(b) Re-do the calculation in the previous question under the assumption
that the device will fail only when both components fail.

Note that this worked example uses a mean of 3 months for the first component, whereas problem 4 above states 4 months.

## A simple electronic device consists of two components that have failure
## times which may be modeled as independent exponential random variables.
## The first component has a mean time to failure of 3 months, and the second
## has a mean time to failure of 6 months. The electronic device will fail
## when either of the components fails. Use simulation to estimate the mean
## and variance of the time to failure for the device. What if the device
## only fails after BOTH components fail?
r1 <- rexp(100000, rate = 1/3)     # first component, mean 3 months
r2 <- rexp(100000, rate = 1/6)     # second component, mean 6 months
index <- (r2 - r1) > 0             # TRUE where r1 is the smaller failure time
r.min <- c(r1[index], r2[!index])  # failure time when either component failing kills the device
mean(r.min)
var(r.min)
r.max <- c(r1[!index], r2[index])  # failure time when the device survives until both fail
mean(r.max)
var(r.max)
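Problem 4 as stated uses means of 4 and 6 months. A sketch under those parameters, using base R's pmin and pmax instead of the index construction (set.seed is an added assumption):

```r
set.seed(1)  # assumed, for reproducibility
t1 <- rexp(100000, rate = 1/4)  # first component, mean 4 months
t2 <- rexp(100000, rate = 1/6)  # second component, mean 6 months
# (a) the device fails when either component fails: time to failure is the minimum.
# The minimum of independent exponentials is exponential with rate 1/4 + 1/6 = 5/12,
# so the theoretical mean is 12/5 = 2.4 months and the variance is (12/5)^2 = 5.76.
fail.either <- pmin(t1, t2)
mean(fail.either)
var(fail.either)
# (b) the device fails only when both components fail: time to failure is the maximum.
# E(max) = E(t1) + E(t2) - E(min) = 4 + 6 - 2.4 = 7.6 months.
fail.both <- pmax(t1, t2)
mean(fail.both)
var(fail.both)
```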
Markov Chain R simulation.

sim.mc {mhsmm}    R Documentation

Markov chain simulation

Description
Simulates a Markov chain.

Usage
sim.mc(init, transition, N)

Arguments
init        The distribution of states at the first time step.
transition  The transition probability matrix of the Markov chain.
N           The number of observations to simulate.

Value
A vector of integers representing the state sequence.

Author(s)
Jared O'Connell [email protected]

Examples
p <- matrix(c(.1, .3, .6, rep(1/3, 3), 0, .5, .5), ncol = 3, byrow = TRUE)
init <- rep(1/3, 3)
sim.mc(init, p, 10)
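If the mhsmm package is not installed, the same simulation can be sketched in base R with sample(). The function simulate_mc below is a hypothetical stand-in, not part of any package; it assumes, as in the example above, that each row of the transition matrix is the next-state distribution for the corresponding current state:

```r
# Hypothetical base-R equivalent of sim.mc(init, transition, N)
simulate_mc <- function(init, transition, N) {
  states <- integer(N)
  states[1] <- sample(length(init), 1, prob = init)   # draw the initial state
  for (t in 2:N) {
    # row states[t-1] of the transition matrix gives the next-state distribution
    states[t] <- sample(ncol(transition), 1, prob = transition[states[t - 1], ])
  }
  states
}
p <- matrix(c(.1, .3, .6, rep(1/3, 3), 0, .5, .5), ncol = 3, byrow = TRUE)
simulate_mc(rep(1/3, 3), p, 10)
```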

Markov Chains
Suppose in a small town there are three places to eat: two restaurants, one Chinese and one Mexican, and a pizza place. Everyone in town eats dinner in one of these places or has dinner at home.
Assume that of those who eat at the Chinese restaurant, 20% go to the Mexican restaurant next time, 20% eat at home, and 30% go to the pizza place. Of those who eat at the Mexican restaurant, 10% go to the pizza place, 25% go to the Chinese restaurant, and 25% eat at home next time. Of those who eat at the pizza place, 30% … Of those who eat at home, 20% go to the Chinese restaurant, 25% go to the Mexican restaurant, and 30% to the pizza place.
We call this situation a system. A person in the town can eat dinner in one of these four places, each of which is called a state. In our example, the system has four states. We are interested in the success of these places in terms of their business. For example, after a given period of time, what percentage of people in town will go to the pizza place?
Suppose there is a physical or mathematical system that has k possible states and at any one time the system is in one and only one of its states. Suppose further that at a given observation period, say the nth period, the probability of the system being in a particular state depends only on its status at the (n-1)th period. Such a system is called a Markov chain or Markov process.
In the example above there are four states for the system. Define p_ij to be the probability of the system being in state i after it was in state j (at any observation). The matrix T = (p_ij) is called the transition matrix of the Markov chain.

In the transition matrix for the example above, the first column represents the state of eating at home, the second column the state of eating at the Chinese restaurant, the third column the state of eating at the Mexican restaurant, and the fourth column the state of eating at the pizza place. Similarly, the rows respectively represent eating at home, eating at the Chinese restaurant, eating at the Mexican restaurant, and eating at the pizza place.

We are interested in the following question: what is the probability that the system is in the ith state at the nth observation?

To answer this question, we first define the state vector. For a Markov chain with k states, the state vector for observation period n is a column vector x(n) whose ith entry x_i(n) is the probability that the system is in the ith state at the nth observation. Note that the sum of the entries of the state vector has to be one. Any column vector with nonnegative entries summing to one is called a probability vector.

Consider our example, and suppose at the beginning everyone eats at home, so that the initial state vector x(0) is (1, 0, 0, 0)^T. In the next observation period, say the end of the first week, the state vector will be x(1) = T x(0). At the end of the second week the state vector is x(2) = T x(1), and at the end of the third week it is x(3) = T x(2). Note that we can compute x(2) directly using x(2) = T^2 x(0). Similarly, we can find the state vector for the nth observation period as x(n) = T^n x(0).
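The computation x(n) = T^n x(0) can be checked numerically in R. In the sketch below, columns of the matrix Tm are the "from" states (home, Chinese, Mexican, pizza), matching the column convention described above. The pizza-place column is an illustrative assumption, since those transition probabilities are not fully legible in the text:

```r
# Columns are "from" states, rows are "to" states: home, Chinese, Mexican, pizza.
# The fourth (pizza) column is assumed for illustration only.
Tm <- matrix(c(.25, .20, .25, .25,
               .20, .30, .25, .30,
               .25, .20, .40, .25,
               .30, .30, .10, .20), nrow = 4, byrow = TRUE)
x0 <- c(1, 0, 0, 0)       # everyone starts by eating at home
x1 <- Tm %*% x0           # state vector after one week: x(1) = T x(0)
# Direct computation via matrix powers: x(n) = T^n x(0)
mat_pow <- function(M, n) Reduce(`%*%`, replicate(n, M, simplify = FALSE))
x10 <- mat_pow(Tm, 10) %*% x0
round(x10, 4)             # the entries settle toward a fixed vector
```

The fourth entry of x10 estimates the long-run fraction of people eating at the pizza place under these assumed probabilities.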

Computing x(n) for increasing n suggests that the state vector approaches some fixed vector as the number of observation periods increases. This is not the case for every Markov chain. For some choices of T and x(0), computing the state vectors for different observation periods shows that the system oscillates and does not approach any fixed vector.
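A standard two-state illustration of such oscillation (an assumed example, since the original matrix is not legible in the source): with T swapping the two states and x(0) = (1, 0)^T, the state vector alternates forever.

```r
# Assumed illustrative example: a permutation matrix that swaps the two states
Tm <- matrix(c(0, 1,
               1, 0), nrow = 2, byrow = TRUE)
x <- c(1, 0)
for (n in 1:4) {
  x <- Tm %*% x
  cat("x(", n, ") = (", paste(x, collapse = ", "), ")\n", sep = "")
}
# x alternates between (0, 1) and (1, 0): no fixed vector is approached
```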
