Stats 102c Notes
Antithetic Variates
Use the antithetic variate technique to reduce the variance of a Monte Carlo estimator.
Review: Use Monte Carlo to estimate E[g(X)]
1. Simulate B iid uniforms U_1, …, U_B ~ Unif(0, 1).
2. Pass each U through the inverse CDF F_X^{-1}(U) to get X.
3. Pass each X through the function g.
4. Calculate the estimated value of E[g(X)] from the MC simulation.
U_1 → X_1 = F_X^{-1}(U_1) → g(X_1) = g_1
U_2 → X_2 = F_X^{-1}(U_2) → g(X_2) = g_2
⋮
U_B → X_B = F_X^{-1}(U_B) → g(X_B) = g_B

Ê[g(X)] = (g_1 + g_2 + … + g_B) / B
Standard error (square root of the variance of the estimator):
SE(Ê[g(X)]) = σ_g / √B
where σ_g is the population standard deviation of g(X); in practice it is replaced by its sample estimate.
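A minimal sketch of this recipe in Python. The distribution and g are illustrative assumptions, not from the notes: X ~ Exponential(1), so F_X^{-1}(u) = −log(1 − u), and g(x) = x².

```python
import numpy as np

rng = np.random.default_rng(42)
B = 100_000

# Step 1: B iid Uniform(0, 1) draws
u = rng.uniform(size=B)

# Step 2: inverse CDF of Exponential(1): F^{-1}(u) = -log(1 - u)
x = -np.log(1 - u)

# Step 3: pass through g; here g(x) = x^2 (illustrative choice)
g = x**2

# Step 4: MC estimate of E[g(X)] and its standard error s_g / sqrt(B)
est = g.mean()
se = g.std(ddof=1) / np.sqrt(B)
print(est, se)  # true value is E[X^2] = 2 for Exponential(1)
```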
Antithetic variates: pair each uniform U_i with 1 − U_i, so consecutive draws are negatively correlated:

U_1         → X_1 → g(X_1) = g_1
1 − U_1     → X_2 → g(X_2) = g_2
U_2         → X_3 → g(X_3) = g_3
1 − U_2     → X_4 → g(X_4) = g_4
⋮
1 − U_{B/2} → X_B → g(X_B) = g_B

Ê[g(X)] = (g_1 + g_2 + g_3 + g_4 + … + g_B) / B
SE(Ê[g(X)])_AV = √[ (B σ_g² + 2 (B/2) ρ σ_g²) / B² ] = σ_g √[(1 + ρ)/B]
where ρ is the correlation within each antithetic pair; the construction makes ρ < 0, so the standard error shrinks.
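A sketch of the antithetic construction under the same illustrative choices as above (X ~ Exponential(1), g(x) = x²); the B draws come in B/2 negatively correlated pairs (U, 1 − U).

```python
import numpy as np

rng = np.random.default_rng(42)
B = 100_000  # total draws: B/2 uniforms, each paired with its antithesis

u = rng.uniform(size=B // 2)
u_all = np.concatenate([u, 1 - u])  # antithetic pairs (U, 1 - U)
x = -np.log(1 - u_all)              # inverse CDF of Exponential(1)
g = x**2

est = g.mean()

# SE of the antithetic estimator: the B/2 pair means are iid,
# so average them and use their sample standard deviation
pair_means = (g[: B // 2] + g[B // 2 :]) / 2
se_av = pair_means.std(ddof=1) / np.sqrt(B // 2)
print(est, se_av)  # SE is smaller than the plain-MC SE above
```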
Control Variates
We use a simplified model Z = g̃(X) with known expectation E[Z] (a constant). Y and Z are both dependent on the same random variable X. The control variate estimator is Y − Z + E[Z].

Mean: E[Y − Z + E[Z]] = E[Y] − E[Z] + E[Z] = E[Y] = μ_Y, so the estimator is unbiased.

Variance:
var[Y − Z + E[Z]] = var[Y − Z]
= E[(Y − Z − E[Y − Z])²]
= E[( (Y − E[Y]) − (Z − E[Z]) )²]
= E[(Y − E[Y])² − 2(Y − E[Y])(Z − E[Z]) + (Z − E[Z])²]
= var[Y] + var[Z] − 2cov[Y, Z]
If Y and Z are strongly positively correlated, 2cov[Y, Z] > var[Z] and the variance is reduced.
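A sketch of the control variate idea, again with illustrative choices not in the notes: Y = X² with X ~ Exponential(1), and the simplified model Z = g̃(X) = X, whose expectation E[Z] = 1 is known.

```python
import numpy as np

rng = np.random.default_rng(0)
B = 100_000

x = -np.log(1 - rng.uniform(size=B))  # X ~ Exponential(1)
y = x**2                              # Y = g(X), quantity of interest
z = x                                 # Z = g~(X), simplified model
ez = 1.0                              # known E[Z] for Exponential(1)

plain = y.mean()
cv = (y - z + ez).mean()              # unbiased: E[Y - Z + E[Z]] = E[Y]

# var[Y - Z] = var[Y] + var[Z] - 2 cov[Y, Z], smaller when corr(Y, Z) is high
se_plain = y.std(ddof=1) / np.sqrt(B)
se_cv = (y - z).std(ddof=1) / np.sqrt(B)
print(plain, se_plain)
print(cv, se_cv)  # same target E[Y] = 2, smaller standard error
```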
MC in Statistical Inference
Stratified sampling: for MC simulation with a known probability density function f_X, the probability w_i that an X sampled from f_X falls into stratum X_i can be precisely calculated. Plain sampling is wasteful when the majority of sampled values land at the same value (Y_max) most of the time.
1. Split the range of the input random variable X into strata: groups of similar values X_i, i = 1, …, I. Y is sampled within each group (this also divides Y into corresponding strata Y_i).
2. We are free to choose the # of samples we generate within each group: we need more samples in the 1st group because the spread of the Y values there is much greater, and the 2nd group needs fewer.
3. Separately estimate the expectation/mean value of the output random variable within each group, then combine to get E[Y] = μ_Y:
E[Y] = Σ_i w_i E[Y_i]
(a sketch follows after the figure below)
[Figure: g(x) rising to Y_max, with f_X over x split into strata X_1 and X_2 (weights w_1 = 0.3, w_2 = 0.7), and the resulting distribution f_Y]
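A sketch of the three steps with hypothetical choices: X ~ Unif(0, 1) split at 0.3, so w_1 = 0.3 and w_2 = 0.7 as in the figure; g and the per-stratum sample sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def g(x):
    # illustrative g: rises linearly to Y_max = 1 at x = 0.3, then flat
    return np.minimum(x / 0.3, 1.0)

# Step 1: split X ~ Unif(0, 1) into strata with weights w_i = P(X in stratum i)
strata = [(0.0, 0.3), (0.3, 1.0)]
weights = [0.3, 0.7]

# Step 2: choose sample sizes freely; more where Y's spread is greater
sizes = [8000, 2000]

# Step 3: estimate E[Y_i] within each stratum, then combine
est = 0.0
for (lo, hi), w, n in zip(strata, weights, sizes):
    xs = rng.uniform(lo, hi, size=n)  # X conditioned on the stratum
    est += w * g(xs).mean()           # E[Y] = sum_i w_i E[Y_i]

print(est)  # true E[Y] = 0.3 * 0.5 + 0.7 * 1.0 = 0.85
```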
Markov Chains
ex1: Assume there's a restaurant that serves 3 types of food: 1) hamburger, 2) pizza, 3) hot dog. On any given day they serve only 1 item, and it depends on what they served yesterday. I.e., we can predict what they will serve tomorrow if we know what they're serving today.

transition diagram (arrows labeled with probabilities):
Hamburger → Hamburger 0.2, Hamburger → Pizza 0.6, Hamburger → Hot dog 0.2
Pizza → Hamburger 0.3, Pizza → Hot dog 0.7
Hot dog → Hamburger 0.5, Hot dog → Hot dog 0.5

e.g. P(X_4 = hot dog | X_3 = pizza) = 0.7

After many steps: P(ham) ≈ 0.3519, P(pizza) ≈ 0.2124, P(hot dog) ≈ 0.4356,
i.e. the probability corresponding to each food item (the probability distribution of the states), estimated by
P(item) = (# of occurrences) / (# of days).
A simulation sketch follows below.
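A sketch of simulating the chain day by day and estimating the state probabilities as occurrence counts over days (estimates of the kind quoted above):

```python
import numpy as np

rng = np.random.default_rng(7)
A = np.array([[0.2, 0.6, 0.2],   # hamburger -> ham, pizza, hot dog
              [0.3, 0.0, 0.7],   # pizza
              [0.5, 0.0, 0.5]])  # hot dog
items = ["hamburger", "pizza", "hot dog"]

state = 1                              # start on a pizza day
counts = np.zeros(3)
days = 100_000
for _ in range(days):
    state = rng.choice(3, p=A[state])  # next item ~ current row of A
    counts[state] += 1

for name, c in zip(items, counts):
    print(name, c / days)  # ~ 0.352, 0.211, 0.437
```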
Markov chain: a sequence of events where future probabilities depend ONLY on the present.
1) Current state → future state: the future state only depends on the current state (Markov property):
P(X_{n+1} = x | X_n = x_n, X_{n−1} = x_{n−1}, …) = P(X_{n+1} = x | X_n = x_n)
2) The sum of the weights of the outgoing arrows from any state = 1.
transition: each arrow from one state to another.
row vector π: elements represent the states' probabilities (the probability distribution of the states); the elements of π sum to 1 (i.e. π[1] + π[2] + π[3] = 1).
stationary state: exists if there is an eigenvector π of A with eigenvalue 1, i.e. πA = π.
transition matrix (rows = current state, columns = next state):

          ham   piz   hot
    ham [ 0.2   0.6   0.2 ]
A = piz [ 0.3   0     0.7 ]
    hot [ 0.5   0     0.5 ]

1. If today is a pizza day, π_0 = [0 1 0]:
   π_0 A = [0 1 0] A = [0.3  0  0.7]
   (the future probabilities corresponding to the pizza state, i.e. the pizza row of A)
2. π_1 A = [0.3  0  0.7] A = [0.41  0.18  0.41]
3. π_2 A = [0.41  0.18  0.41] A = [0.34  0.25  0.41]
⋮
Repeating π_{n+1} = π_n A converges to the stationary distribution π ≈ [0.352  0.211  0.437], matching the long-run food probabilities above.
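The same convergence, sketched two ways: repeated multiplication π_{n+1} = π_n A, and directly as the left eigenvector of A with eigenvalue 1.

```python
import numpy as np

A = np.array([[0.2, 0.6, 0.2],
              [0.3, 0.0, 0.7],
              [0.5, 0.0, 0.5]])

# Repeated multiplication starting from a pizza day
pi = np.array([0.0, 1.0, 0.0])
for _ in range(50):
    pi = pi @ A
print(pi)  # ~ [0.352, 0.211, 0.437]

# Left eigenvector with eigenvalue 1: solve pi A = pi, i.e. A^T v = v
vals, vecs = np.linalg.eig(A.T)
v = np.real(vecs[:, np.argmin(np.abs(vals - 1))])
print(v / v.sum())  # normalized so the elements sum to 1
```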
ex2: π_n = π_0 A^n, i.e. repeated transitions computed as a matrix power.
transition diagram (Bull/Bear market):
Bull → Bull 0.75, Bull → Bear 0.25
Bear → Bear 0.63, Bear → Bull 0.37 (outgoing arrows sum to 1)

transition matrix:
           Bull   Bear
A = Bull [ 0.75   0.25 ]
    Bear [ 0.37   0.63 ]
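A sketch of π_n = π_0 A^n for ex2 via a matrix power (the 0.37 entry is filled in from the rows-sum-to-1 property; the starting state and n = 20 are illustrative):

```python
import numpy as np

A = np.array([[0.75, 0.25],   # Bull -> Bull, Bear
              [0.37, 0.63]])  # Bear -> Bull, Bear (0.37 = 1 - 0.63)

pi0 = np.array([1.0, 0.0])    # start in a Bull market
pi_n = pi0 @ np.linalg.matrix_power(A, 20)  # pi_n = pi_0 A^n
print(pi_n)                   # long-run Bull/Bear probabilities
```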