Stats 102c Notes

Monte Carlo Variance Reduction

Antithetic Variate
- antithetic variate technique reduces the variance of the Monte Carlo estimator
- Review: use Monte Carlo to estimate E[g(X)]
  1. Simulate B iid uniforms U_1, ..., U_B ~ Unif(0, 1)
  2. Pass each U_i through the inverse cdf: X_i = F_X^{-1}(U_i)
  3. Pass each X_i through the function g
  4. Calculate the estimated value of E[g(X)] from the MC simulation

      U_1 -> X_1 -> g(X_1) = g_1
      U_2 -> X_2 -> g(X_2) = g_2
      ...
      U_B -> X_B -> g(X_B) = g_B

  Ê[g(X)] = (g_1 + g_2 + ... + g_B) / B
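The four review steps can be sketched in code. X ~ Exponential(1) and g(x) = x² are illustrative choices, not from the notes; for that X the inverse cdf is -log(1 - u), and the true value is E[X²] = 2.

```python
import random
import math

def mc_estimate(g, inv_cdf, B, seed=0):
    """Steps 1-4: simulate B uniforms, map each through the inverse cdf,
    apply g, and average to estimate E[g(X)]."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(B):
        u = rng.random()       # step 1: U ~ Unif(0, 1)
        x = inv_cdf(u)         # step 2: X = F_X^{-1}(U)
        total += g(x)          # step 3: g(X)
    return total / B           # step 4: average the g values

# Illustrative choices, not from the notes: X ~ Exponential(1), whose
# inverse cdf is -log(1 - u), and g(x) = x^2, so E[g(X)] = E[X^2] = 2.
est = mc_estimate(lambda x: x * x, lambda u: -math.log(1.0 - u), B=200_000)
```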
- standard error (square root of variance): SE = σ / √n
- SE(Ê[g(X)]) = σ_g / √B = variability across multiple samples of a population; estimated
- to make Ê[g(X)] more precise, decrease SE(Ê[g(X)]) by increasing the # of simulations
- can become inefficient: to halve SE(Ê[g(X)]), need B x 4

- Antithetic: reduces the variance of (g_1 + g_2 + ... + g_B)/B by introducing some opposite
  behavior btwn adjoining values of g
- instead of B indep. replications, pair values s.t. they have -1 dependency to each other ->
  work w/ B/2 replications (pairs)
- dependency introduces covariance terms: each pair (comes from the same dist) has
  -1 correlation/covariance instead of +ρ correlation

      U_1             -> X_1 -> g(X_1) = g_1
      1 - U_1 = U_2   -> X_2 -> g(X_2) = g_2
      U_3             -> X_3 -> g(X_3) = g_3
      1 - U_3 = U_4   -> X_4 -> g(X_4) = g_4
      ...
      1 - U_{B-1} = U_B -> X_B -> g(X_B) = g_B

  Ê[g(X)] = (g_1 + g_2 + g_3 + g_4 + ... + g_B) / B

  SE(Ê[g(X)])_AV = sqrt(σ_g² + σ_g² + 2ρ σ_g σ_g) / sqrt(2B) = σ_g √(1 + ρ) / √B

- remember ρ is negative, so the SE is indeed smaller
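A minimal sketch of the pairing above, again with illustrative choices not from the notes: X ~ Unif(0, 1) so the inverse cdf is the identity, and g = exp, whose true mean is e - 1.

```python
import random
import math

def mc_antithetic(g, inv_cdf, B, seed=0):
    """B/2 antithetic pairs: each pair feeds u and 1 - u through the same
    inverse cdf, so the paired g values are negatively correlated."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(B // 2):
        u = rng.random()
        total += g(inv_cdf(u)) + g(inv_cdf(1.0 - u))
    return total / B

# Illustrative choices, not from the notes: X ~ Unif(0, 1) so the
# inverse cdf is the identity; g = exp has true mean e - 1.
est = mc_antithetic(math.exp, lambda u: u, B=100_000)
```

Because exp is monotone, g(u) and g(1 - u) move in opposite directions, which is exactly the negative ρ the notes rely on.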
Control Variate
- Y = g(X) w/ unknown expectation E[Y]
- assumes we can approximate the system by a control variate: a simplified model Z = h(X)
  w/ known expectation E[Z] (a constant); Y and Z both depend on the same random variable X
- sample the diff btwn Y and Z at the same rv input X:  Ŷ = Y - Z + E[Z]
- mean of Ŷ:  E[Ŷ] = E[Y] - E[Z] + E[Z] = E[Y], so μ_Ŷ = μ_Y
- variance of Ŷ:
      Var[μ̂_Ŷ] = Var[Y - Z] / n
               = E[(Y - Z - E[Y - Z])²] / n
               = E[(Y - E[Y] - (Z - E[Z]))²] / n
               = E[(Y - E[Y])² - 2(Y - E[Y])(Z - E[Z]) + (Z - E[Z])²] / n
               = (Var[Y] + Var[Z] - 2 Cov[Y, Z]) / n
- original MC variance: Var[μ̂_Y] = Var[Y] / n
- want Cov[Y, Z] large enough that Var[μ̂_Ŷ] decreases: choose a control variate Z strongly
  correlated w/ Y s.t. 2 Cov[Y, Z] > Var[Z]
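The estimator Ŷ = Y - Z + E[Z] can be sketched as follows; the specific choices Y = exp(U) and Z = U (with known E[Z] = 1/2) are illustrative, not from the notes.

```python
import random
import math

def mc_control_variate(B, seed=0):
    """Each replication contributes Y - Z + E[Z]: same mean as Y, but
    smaller variance because 2 Cov[Y, Z] > Var[Z] here."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(B):
        u = rng.random()
        y = math.exp(u)        # Y = g(X), expectation unknown
        z = u                  # Z = h(X), expectation known: E[Z] = 0.5
        total += y - z + 0.5   # one replication of Y - Z + E[Z]
    return total / B

est = mc_control_variate(B=100_000)   # true value is e - 1
```

Z = U works well here because exp(u) ≈ 1 + u on (0, 1), so Y and Z are strongly correlated.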

MC in Statistical Inference
- confidence interval: statistical tests performed visually
- 95% CI covers 95% of bootstrapped means (anything outside of it occurs < 5% of the
  time -> significantly different)
- p-value of anything outside of the CI is < 0.05
Stratified Sampling
- output variable Y = g(X) w/ function g and random variable input X
- for MC simulation, the prob density function f_X is known, so the prob that an X sampled
  from f_X falls into stratum X_i can be precisely calculated
- the majority of sampled values land near the same value (Y_max) most of the time, thus
  plain sampling keeps drawing the same region
1. Split the range of the input random variable X into strata: similar value groups X_i,
   i = 1, ..., 4, and sample Y w/in each group (this also divides Y into corresponding
   strata Y_i)
   - samples x from X_i generate samples Y = g(x) from Y_i, so the variance w/in each Y_i
     is small
2. Free to choose the # of samples we generate w/in each group
   - need more samples in the 1st group bc the spread of Y values there is much greater;
     the 2nd group needs only a few samples to calculate its mean
3. Separately estimate the expectation/mean value of the output random variable w/in each
   group, then combine to get E[Y] = μ_Y:
      E[Y] = Σ_i w_i E[Y_i]
   Since E[Y_i] is unknown for any i, estimate
      Ê[Y] = μ̂_Y = Σ_i w_i μ̂_i,    Var[μ̂_Y] = Σ_i w_i² Var[Y_i] / n_i
   - if σ_1 ↑ and σ_2 ↓, then choose n_1 ↑ and n_2 ↓
[figure: g(x) peaking at Y_max over the range of X, w/ f_X split into strata X_1, X_2;
 weights w_1 = 0.3, w_2 = 0.7]
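The three steps above can be sketched for X ~ Unif(0, 1), where each stratum is a sub-interval and its weight is exactly its length. The weights 0.3 / 0.7 match the notes' figure; g = exp is an illustrative choice (true mean e - 1).

```python
import random
import math

def mc_stratified(g, strata, n_per_stratum, seed=0):
    """Steps 1-3 for X ~ Unif(0, 1): strata are sub-intervals (lo, hi),
    so w_i = hi - lo is the exact prob of stratum i, and X conditioned
    on a stratum is uniform on that sub-interval."""
    rng = random.Random(seed)
    est = 0.0
    for (lo, hi), n in zip(strata, n_per_stratum):
        mu_i = sum(g(rng.uniform(lo, hi)) for _ in range(n)) / n
        est += (hi - lo) * mu_i          # combine: sum_i w_i * mu_i
    return est

# Weights from the notes' figure (w_1 = 0.3, w_2 = 0.7); the per-stratum
# sample counts are free to choose (step 2).
est = mc_stratified(math.exp, [(0.0, 0.3), (0.3, 1.0)], [3_000, 7_000])
```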
Markov Chains
ex1: Assume there's a restaurant that serves 3 types of food: 1) hamburger, 2) pizza,
3) hot dog. On any given day they serve only 1 item, and it depends on what they served
yesterday (aka we can predict what they will serve tomorrow if we know what they're
serving today).
- transition example: P(X_4 = hotdog | X_3 = pizza) = 0.7
[transition diagram: Ham -> Ham 0.2, Ham -> Pizza 0.6, Ham -> Hotdog 0.2;
 Pizza -> Ham 0.3, Pizza -> Hotdog 0.7; Hotdog -> Ham 0.5, Hotdog -> Hotdog 0.5]
- after many steps: P(Ham) = 0.3519, P(Pizza) = 0.2124, P(Hotdog) = 0.4356

- random walk of 10 days:
  Ham -> Pizza -> Ham -> Hotdog -> Ham -> Hotdog -> Hotdog -> Hotdog -> Ham -> Pizza
  i.e. the prob corresponding to each food item (the prob dist of states) is
  P(Ham) = P(Pizza) = P(Hotdog) = (# of occurrences) / (# of days)

- Markov chain: a sequence of events where future probs ONLY depend on the present
  1) future state only depends on the current state: P(X_{n+1} = x | X_n = x_n)
  2) sum of weights of outgoing arrows from any state = 1
- transition: each arrow from one state to another
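The 10-day random walk generalizes: walk the chain for many days using ex1's transition probabilities and count occurrences of each state. A sketch:

```python
import random

# Transition probs from ex1's diagram (row = today's item, column =
# tomorrow's); states 0 = Ham, 1 = Pizza, 2 = Hotdog:
A = [[0.2, 0.6, 0.2],
     [0.3, 0.0, 0.7],
     [0.5, 0.0, 0.5]]

def random_walk(n_days, start=0, seed=0):
    """Walk the chain for n_days; return each state's
    (# of occurrences) / (# of days)."""
    rng = random.Random(seed)
    counts = [0, 0, 0]
    state = start
    for _ in range(n_days):
        counts[state] += 1
        state = rng.choices([0, 1, 2], weights=A[state])[0]
    return [c / n_days for c in counts]

freqs = random_walk(200_000)   # long walk -> close to the long-run probs
```

With a long walk the occurrence fractions approach the long-run probs quoted in ex1 (about 0.35 / 0.21 / 0.44).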

- stationary dist / equilibrium state: a prob dist that does not change w/ time for the
  Markov chain; output row vector = input row vector: πA = π
- π is a left eigenvector w/ eigenvalue λ = 1 (cf. Av = λv); the elements of π sum to 1
  (i.e. π[1] + π[2] + π[3] = 1)
- a stationary state exists if there exists a left eigenvector w/ eigenvalue = 1
- represent w/ a transition matrix whose elements are the edge weights connecting 2
  corresponding vertices (row -> column)
- goal: find the prob of each state
- row vector π: elements represent the states' probs; the prob dist of states
          Ham  Piz  Hot
    Ham [ 0.2  0.6  0.2 ]
A = Piz [ 0.3  0.0  0.7 ]
    Hot [ 0.5  0.0  0.5 ]

If today is a pizza day, π_0 = [0 1 0]:
1. π_1 = π_0 A = [0 1 0] A = [0.3 0 0.7]   <- future probs corresponding to the pizza state
2. π_2 = π_1 A = [0.3 0 0.7] A = [0.41 0.18 0.41]
3. π_3 = π_2 A = [0.41 0.18 0.41] A = [0.341 0.246 0.413]
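The repeated multiplication π_{n+1} = π_n A can be sketched directly; iterating it long enough converges to the stationary distribution πA = π.

```python
# Transition matrix A from ex1 (rows = today's item, cols = tomorrow's):
A = [[0.2, 0.6, 0.2],   # Ham
     [0.3, 0.0, 0.7],   # Pizza
     [0.5, 0.0, 0.5]]   # Hotdog

def step(pi, A):
    """One transition of the row vector: (pi A)_j = sum_i pi_i * A[i][j]."""
    return [sum(pi[i] * A[i][j] for i in range(len(pi)))
            for j in range(len(A[0]))]

pi = [0.0, 1.0, 0.0]      # pi_0: start on a pizza day
pi = step(pi, A)          # ≈ [0.3, 0.0, 0.7]
pi = step(pi, A)          # ≈ [0.41, 0.18, 0.41]
pi = step(pi, A)          # ≈ [0.341, 0.246, 0.413]
for _ in range(100):      # keep iterating: pi converges to pi A = pi
    pi = step(pi, A)
```

The limit is the left eigenvector of A with eigenvalue 1, normalized so its elements sum to 1.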
ex2: Bull/Bear market
- start at initial state S_0, then S_1 = P S_0, S_2 = P S_1 = P² S_0, ...; to transition
  n steps, multiply by n transition matrices: S_n = Pⁿ S_0
[transition diagram: Bull -> Bull 0.75, Bull -> Bear 0.25, Bear -> Bull 0.4,
 Bear -> Bear 0.6]
- here the states are column vectors, so the transition matrix is read column -> row:
           Bull  Bear
    Bull [ 0.75  0.4 ]
P =
    Bear [ 0.25  0.6 ]