Cheat Sheet For The Final Exam

This cheat sheet provides essential formulas and definitions for probability and statistics relevant to the MATH-232 course. It covers topics such as binomial coefficients, random variables, moment generating functions, convergence of random variables, and various distributions including Bernoulli, Binomial, Poisson, and Normal distributions. Additionally, it includes key concepts related to hypothesis testing, error types, and confidence intervals.


Probability and Statistics Cheat Sheet

MATH-232 Fall 2024


Final Exam

Basic formulas and definitions


• Properties of binomial coefficients:
  1. Pascal's triangle: $\binom{n}{r} = \binom{n-1}{r-1} + \binom{n-1}{r}$.
  2. Vandermonde's formula: $\sum_{j=0}^{r} \binom{m}{j}\binom{n}{r-j} = \binom{m+n}{r}$.
  3. Negative binomial series: $(1-x)^{-n} = \sum_{i=0}^{\infty} \binom{n+i-1}{i} x^i$, $|x| < 1$.
  4. $\lim_{n\to\infty} n^{-r}\binom{n}{r} = \frac{1}{r!}$, where $r \in \mathbb{N}$ is fixed.
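These identities can be checked numerically with Python's `math.comb`; the particular values of $n$, $m$, $r$ below are arbitrary choices for illustration:

```python
import math

# Pascal's triangle: C(n, r) = C(n-1, r-1) + C(n-1, r)
assert math.comb(10, 4) == math.comb(9, 3) + math.comb(9, 4)

# Vandermonde: sum_j C(m, j) C(n, r-j) = C(m+n, r)
m, n, r = 5, 7, 4
lhs = sum(math.comb(m, j) * math.comb(n, r - j) for j in range(r + 1))
assert lhs == math.comb(m + n, r)

# Limit: n^{-r} C(n, r) -> 1/r!, checked at a single large n
n, r = 10**6, 3
assert abs(math.comb(n, r) / n**r - 1 / math.factorial(r)) < 1e-5
```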

• $\exp(x) = e^x = \sum_{i=0}^{\infty} \frac{x^i}{i!}$.

• Inclusion-exclusion formula:
  $$P\Big(\bigcup_{i=1}^{n} A_i\Big) = \sum_{r=1}^{n} (-1)^{r+1} \sum_{1 \le i_1 < i_2 < \dots < i_r \le n} P(A_{i_1} \cap A_{i_2} \cap \dots \cap A_{i_r})$$
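A quick sketch verifying the formula on a small finite example (the sample space and events below are hypothetical, chosen so all probabilities are exact fractions):

```python
from fractions import Fraction
from itertools import combinations

# Uniform probability on {1, ..., 12}; three illustrative events
omega = set(range(1, 13))
A = [{x for x in omega if x % 2 == 0},   # A1: divisible by 2
     {x for x in omega if x % 3 == 0},   # A2: divisible by 3
     {x for x in omega if x % 4 == 0}]   # A3: divisible by 4

P = lambda S: Fraction(len(S), len(omega))

# Direct computation of P(A1 ∪ A2 ∪ A3)
direct = P(A[0] | A[1] | A[2])

# Inclusion-exclusion: alternating sum over all non-empty subsets
incl_excl = sum(
    (-1) ** (r + 1) * P(set.intersection(*sub))
    for r in range(1, 4)
    for sub in combinations(A, r)
)
assert direct == incl_excl == Fraction(2, 3)
```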

• The Gamma function is defined as $\Gamma(\alpha) = \int_0^\infty u^{\alpha-1} e^{-u}\,du$ for $\alpha > 0$. It further satisfies:
  – $\Gamma(n) = (n-1)!$ for $n \in \mathbb{N}$;
  – $\Gamma(\alpha + 1) = \alpha\,\Gamma(\alpha)$ for $\alpha > 0$.
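Both identities can be sanity-checked with the standard library's `math.gamma`:

```python
import math

# Γ(n) = (n-1)! for positive integers
for n in range(1, 8):
    assert math.isclose(math.gamma(n), math.factorial(n - 1), rel_tol=1e-12)

# Γ(α + 1) = α Γ(α), also for non-integer α (α = 2.5 is arbitrary)
alpha = 2.5
assert math.isclose(math.gamma(alpha + 1), alpha * math.gamma(alpha), rel_tol=1e-12)
```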
• Let $g(X, Y)$ be a function of a random vector $(X, Y)$. Its conditional expectation given $X = x$ is
  $$E[g(X,Y) \mid X = x] = \begin{cases} \sum_y g(x,y)\, f_{Y|X}(y|x) & \text{discrete case} \\ \int g(x,y)\, f_{Y|X}(y|x)\, dy & \text{continuous case} \end{cases}$$
  on the condition that $f_X(x) > 0$ and $E[\,|g(X,Y)|\mid X = x] < \infty$.
• For random variables $X$ and $Y = g(X)$, where $g$ is a monotone increasing or decreasing function with differentiable inverse $g^{-1}$, we have
  $$f_Y(y) = \Big|\frac{dg^{-1}(y)}{dy}\Big|\, f_X(g^{-1}(y)).$$
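A sketch of this density-transformation formula on a hypothetical example: take $X \sim \exp(1)$ and the monotone map $g(x) = \ln x$, so $g^{-1}(y) = e^y$ and $f_Y(y) = e^y \exp(-e^y)$. We check that integrating $f_Y$ reproduces a probability we know in closed form:

```python
import math

# X ~ Exp(1), g(x) = ln(x): f_Y(y) = |d/dy e^y| f_X(e^y) = e^y exp(-e^y)
def f_Y(y):
    return math.exp(y) * math.exp(-math.exp(y))

# Composite Simpson's rule (n must be even)
def simpson(f, a, b, n=2000):
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if k % 2 else 2) * f(a + k * h) for k in range(1, n))
    return s * h / 3

# P(Y <= 0) = P(X <= 1) = 1 - e^{-1}; the tail below -10 is only ~e^{-10}
prob = simpson(f_Y, -10.0, 0.0)
assert abs(prob - (1 - math.exp(-1))) < 1e-3
```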

• For a random variable $X$, the moment generating function is defined as $M_X(t) = E[e^{tX}]$ for $t \in \mathbb{R}$ such that $M_X(t) < \infty$. Similarly, for a random vector $X_{p\times 1} = (X_1, X_2, \dots, X_p)^T$, we have $M_X(t) = E[e^{t^T X}]$ for $t \in \mathbb{R}^p$ such that $M_X(t) < \infty$.
• The random vector $X \sim N_p(\mu, \Omega)$ has a density function on $\mathbb{R}^p$ if and only if $\Omega$ is positive definite, i.e., $\Omega$ has rank $p$. If so, the density function is
  $$f(x; \mu, \Omega) = \frac{1}{(2\pi)^{p/2}\,|\Omega|^{1/2}} \exp\Big(-\frac{1}{2}(x-\mu)^T \Omega^{-1} (x-\mu)\Big),$$
  and the moment generating function is $M_X(u) = \exp(u^T \mu + \frac{1}{2} u^T \Omega u)$ for $u \in \mathbb{R}^p$. If not, $X$ is a linear combination of variables that have a density function on $\mathbb{R}^m$, where $m < p$ is the rank of $\Omega$.
• Let $X \sim N_p(\mu_{p\times 1}, \Omega_{p\times p})$, where $|\Omega| > 0$, and let $A, B \subset \{1, \dots, p\}$ with $|A| = q < p$, $|B| = r < p$ and $A \cap B = \emptyset$. Let $\mu_A$, $\Omega_A$ and $\Omega_{AB}$ be respectively the $q \times 1$ subvector of $\mu$ and the $q \times q$ and $q \times r$ submatrices of $\Omega$ conformable with $A$, $A \times A$ and $A \times B$. Then:
  – the marginal distribution of $X_A$ is normal, $X_A \sim N_q(\mu_A, \Omega_A)$;
  – the conditional distribution of $X_A$ given $X_B = x_B$ is normal, $X_A \mid X_B = x_B \sim N_q\big(\mu_A + \Omega_{AB}\Omega_B^{-1}(x_B - \mu_B),\ \Omega_A - \Omega_{AB}\Omega_B^{-1}\Omega_{BA}\big)$.
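For $p = 2$ with $A = \{1\}$, $B = \{2\}$, the conditional formulas reduce to scalars; a minimal sketch with hypothetical parameter values:

```python
# Bivariate case: E[X1 | X2 = x2]  = mu1 + (omega12 / omega22)(x2 - mu2)
#                 var(X1 | X2 = x2) = omega11 - omega12^2 / omega22
# Parameter values below are arbitrary (correlation 3/6 = 0.5):
mu1, mu2 = 1.0, -2.0
omega11, omega22, omega12 = 4.0, 9.0, 3.0
x2 = 0.0

cond_mean = mu1 + (omega12 / omega22) * (x2 - mu2)
cond_var = omega11 - omega12**2 / omega22

assert abs(cond_mean - 5.0 / 3.0) < 1e-12   # 1 + (1/3)(0 - (-2)) = 5/3
assert abs(cond_var - 3.0) < 1e-12          # 4 - 9/9 = 3
```

Note that the conditional variance does not depend on the observed value $x_2$, a special property of the normal family.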

• Let $Y = g(X) \in \mathbb{R}^n$, where $X \in \mathbb{R}^n$ is a continuous variable and
  $$(X_1, \dots, X_n) \mapsto (Y_1 = g_1(X_1, \dots, X_n), \dots, Y_n = g_n(X_1, \dots, X_n)),$$
  where the $g_i$ are continuously differentiable. If the inverse transformations $h_i = g_i^{-1}$ exist, and the Jacobian $J(x_1, \dots, x_n) \in \mathbb{R}^{n\times n}$ with entries $J_{ij} = \frac{\partial g_i}{\partial x_j}$ satisfies $|J(x_1, \dots, x_n)| > 0$ wherever $f_{X_1,\dots,X_n}(x_1, \dots, x_n) > 0$, then
  $$f_{Y_1,\dots,Y_n}(y_1, \dots, y_n) = f_{X_1,\dots,X_n}(x_1, \dots, x_n)\,|J(x_1, \dots, x_n)|^{-1}$$
  evaluated at $x_1 = h_1(y_1, \dots, y_n), \dots, x_n = h_n(y_1, \dots, y_n)$.


• Convergence of random variables: we consider the following definitions for convergence of random variables $X_1, X_2, \dots$
  – $X_n$ converges to $X$ in mean square, $X_n \xrightarrow{2} X$, if $\lim_{n\to\infty} E[(X_n - X)^2] = 0$, where $E[X^2], E[X_n^2] < \infty$.
  – $X_n$ converges to $X$ in probability, $X_n \xrightarrow{P} X$, if for all $\varepsilon > 0$, $\lim_{n\to\infty} P(|X_n - X| > \varepsilon) = 0$.
  – $X_n$ converges to $X$ in distribution, $X_n \xrightarrow{D} X$, if $\lim_{n\to\infty} F_n(x) = F(x)$ at each point $x$ where $F$ is continuous, where $F_n, F$ are the cumulative distribution functions of $X_n, X$.
• For $X$ and $X_1, X_2, \dots$ defined on the same probability space we have:
  $$X_n \xrightarrow{2} X \;\Rightarrow\; X_n \xrightarrow{P} X \;\Rightarrow\; X_n \xrightarrow{D} X.$$

• Continuity theorem. Let $\{X_n\}, X$ be random variables with cumulative distribution functions $\{F_n\}, F$, whose MGFs $M_n(t), M(t)$ exist for $0 \le |t| < b$. If there exists $0 < a < b$ such that $M_n(t) \to M(t)$ for $|t| \le a$ as $n \to \infty$, then $X_n \xrightarrow{D} X$.
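A numerical illustration of the continuity theorem behind the classical Poisson limit: the MGF of $\mathrm{Bin}(n, \lambda/n)$ converges pointwise to the $\mathrm{Pois}(\lambda)$ MGF as $n$ grows. The values of $\lambda$ and $t$ below are arbitrary:

```python
import math

lam, t = 2.0, 0.5
poisson_mgf = math.exp(lam * (math.exp(t) - 1.0))

def binom_mgf(n):
    # MGF of Bin(n, p) with p = lam/n: (1 - p + p e^t)^n
    p = lam / n
    return (1 - p + p * math.exp(t)) ** n

errs = [abs(binom_mgf(n) - poisson_mgf) for n in (10, 100, 1000, 10000)]
assert all(a > b for a, b in zip(errs, errs[1:]))   # error shrinks with n
assert errs[-1] < 1e-2
```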
• Combination of convergent sequences, including Slutsky's lemma. Let $x_0, y_0$ be constants, $X, Y, \{X_n\}, \{Y_n\}$ random variables, and $h$ a function continuous at $x_0$. Then:
  – $X_n \xrightarrow{D} x_0 \;\Rightarrow\; X_n \xrightarrow{P} x_0$;
  – $X_n \xrightarrow{P} x_0 \;\Rightarrow\; h(X_n) \xrightarrow{P} h(x_0)$;
  – $X_n \xrightarrow{D} X$ and $Y_n \xrightarrow{P} y_0 \;\Rightarrow\; X_n + Y_n \xrightarrow{D} X + y_0$ and $X_n Y_n \xrightarrow{D} X y_0$.
• Some inequalities:
  – Markov's inequality: $P(X \ge a) \le \frac{E[X]}{a}$, assuming that $X$ only takes non-negative values and $a > 0$.
  – Chebyshev's inequality: $P(|X - E[X]| \ge a) \le \frac{\mathrm{var}(X)}{a^2}$.
  – Jensen's inequality: $E[g(X)] \ge g(E[X])$, where $g$ is a convex function.
  – Hoeffding's inequality: let $Z_1, \dots, Z_n$ be independent random variables with $E[Z_i] = 0$ and $a_i \le Z_i \le b_i$. For $\varepsilon > 0$ and any $t > 0$, we have $P(\sum_{i=1}^n Z_i \ge \varepsilon) \le e^{-t\varepsilon} \prod_{i=1}^n e^{t^2(b_i - a_i)^2/8}$. In particular, for i.i.d. $X_1, \dots, X_n \sim \mathrm{Bernoulli}(p)$ and $\varepsilon > 0$, we have $P(|\bar{X} - p| \ge \varepsilon) \le 2e^{-2n\varepsilon^2}$, where $\bar{X} = (X_1 + \dots + X_n)/n$.
  – Cauchy–Schwarz inequality: for random variables $X, Y$ we have $|E[XY]| \le \sqrt{E[X^2]E[Y^2]}$, assuming $E[X^2], E[Y^2] < \infty$. As a special case, $\mathrm{cov}(X,Y)^2 \le \mathrm{var}(X)\,\mathrm{var}(Y)$ (assuming the variances are defined).

• For an estimator $\hat\theta$ of $\theta$ we have the bias-variance decomposition $\mathrm{MSE}(\hat\theta) = E[(\hat\theta - \theta)^2] = \mathrm{var}(\hat\theta) + b(\theta)^2$, where $b(\theta) = E[\hat\theta] - \theta$ is the bias. For two unbiased estimators $\hat\theta_1, \hat\theta_2$, we say $\hat\theta_1$ is more efficient than $\hat\theta_2$ if $\mathrm{var}(\hat\theta_1) \le \mathrm{var}(\hat\theta_2)$.
• For independent $Y_1, \dots, Y_n \sim N(\mu, \sigma^2)$, the following provides a $(1 - \alpha_L - \alpha_U)$ confidence interval for $\sigma^2$:
  $$(L, U) = \Big(\frac{(n-1)S^2}{\chi^2_{n-1}(1-\alpha_L)},\ \frac{(n-1)S^2}{\chi^2_{n-1}(\alpha_U)}\Big),$$
  where $\chi^2_\nu(p)$ is the $p$ quantile of the chi-squared distribution with $\nu$ degrees of freedom and $(n-1)S^2 = \sum_{j=1}^{n}(Y_j - \bar{Y})^2$.
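A worked sketch with hypothetical data: $n = 10$, $S^2 = 4.0$, and $\alpha_L = \alpha_U = 0.025$, so we need the 0.975 and 0.025 quantiles of $\chi^2_9$ (19.0 and 2.70, read from the chi-square table in this sheet):

```python
n, S2 = 10, 4.0                 # hypothetical sample size and variance
q_hi, q_lo = 19.0, 2.70         # chi^2_9(0.975), chi^2_9(0.025) from the table

L = (n - 1) * S2 / q_hi         # lower endpoint: 36 / 19.0 ~ 1.89
U = (n - 1) * S2 / q_lo         # upper endpoint: 36 / 2.70 ~ 13.33

assert abs(L - 36.0 / 19.0) < 1e-12
assert abs(U - 36.0 / 2.70) < 1e-12
assert L < S2 < U               # the point estimate lies inside the interval
```

Note the interval is not symmetric around $S^2$, reflecting the skewness of the chi-square distribution.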

• When we decide between the hypotheses, we can make two sorts of error:
  – Type I error (false positive): $H_0$ is true, but we wrongly reject it (and choose $H_1$);
  – Type II error (false negative): $H_1$ is true, but we wrongly accept $H_0$.

                                        Decision
  State of nature        Accept H0                        Reject H0
  H0 true        Correct choice (true negative)   Type I error (false positive)
  H1 true        Type II error (false negative)   Correct choice (true positive)

  Further, we call
  – the false positive probability the size $\alpha$ of the test, and
  – the true positive probability the power $\beta$ of the test.
• Pearson statistic (or chi-square statistic). Let $O_1, \dots, O_k$ be the numbers of observations of a random sample of size $n = n_1 + \dots + n_k$ falling into the categories $1, \dots, k$, whose expected numbers are $E_1, \dots, E_k$, where $E_i > 0$. Then the Pearson statistic is $T = \sum_{i=1}^{k} \frac{(O_i - E_i)^2}{E_i}$. If the joint distribution of $O_1, \dots, O_k$ is multinomial with denominator $n$ and probabilities $p_1 = \frac{E_1}{n}, \dots, p_k = \frac{E_k}{n}$, then $T \mathrel{\dot\sim} \chi^2_{k-1}$, the approximation being good if $k^{-1}\sum_i E_i \ge 5$.
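A sketch of the statistic on hypothetical data: 120 rolls of a die tested for fairness, so each expected count is $E_i = 20$ and $T$ is compared with $\chi^2_5(0.95) = 11.1$ from the table in this sheet:

```python
# Hypothetical observed counts for faces 1..6 over n = 120 rolls
observed = [18, 25, 16, 21, 24, 16]
expected = [20] * 6
assert sum(observed) == 120

# Pearson statistic T = sum (O_i - E_i)^2 / E_i
T = sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Under H0 (fair die), T is approximately chi^2 with k - 1 = 5 df;
# the 0.95 quantile is chi^2_5(0.95) = 11.1
assert abs(T - 3.9) < 1e-9
assert T < 11.1     # do not reject H0 at the 5% level
```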
• Neyman–Pearson lemma. Let $f_0(y), f_1(y)$ be the densities of $Y$ under simple null and alternative hypotheses. Assume the set $\mathcal{Y}_\alpha = \{y \in \Omega : \frac{f_1(y)}{f_0(y)} > t_\alpha\}$ such that $P_0(Y \in \mathcal{Y}_\alpha) = \alpha$ exists. Then $\mathcal{Y}_\alpha$ maximizes $P_1(Y \in \mathcal{Y}')$ among all sets $\mathcal{Y}'$ with $P_0(Y \in \mathcal{Y}') \le \alpha$. Thus, to maximize the power at a given size, we must base the decision on $\mathcal{Y}_\alpha$ ($\mathcal{Y}_\alpha$ should be the rejection region for $H_0$).
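A sketch with hypothetical simple hypotheses $H_0: Y \sim N(0,1)$ versus $H_1: Y \sim N(1,1)$ and one observation. The likelihood ratio $f_1(y)/f_0(y) = e^{y - 1/2}$ is increasing in $y$, so the Neyman–Pearson region $\{f_1/f_0 > t_\alpha\}$ is of the form $\{y > c\}$, and the cutoff $c = 1.645$ for $\alpha = 0.05$ can be read from the standard normal table:

```python
import math

# Standard normal CDF via the error function
Phi = lambda z: 0.5 * (1 + math.erf(z / math.sqrt(2)))

alpha = 0.05
c = 1.645                        # Phi(1.645) ~ 0.95 from the table
assert abs(Phi(c) - (1 - alpha)) < 1e-3

size = 1 - Phi(c)                # P0(Y > c) = alpha, the size
power = 1 - Phi(c - 1)           # P1(Y > c) = 1 - Phi(c - 1), the power
assert abs(size - alpha) < 1e-3
assert 0.25 < power < 0.27       # power ~ 0.26: one observation is weak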

Distributions

For each distribution: PMF/PDF and support, expected value, variance, and MGF.

• Bernoulli, Bern(p): $P(X=1) = p$, $P(X=0) = 1-p$; $E[X] = p$; $\mathrm{var}(X) = p(1-p)$; MGF $1 - p + pe^t$.

• Binomial, Bin(n, p): $P(X=k) = \binom{n}{k} p^k (1-p)^{n-k}$, $k \in \{0, 1, \dots, n\}$; $E[X] = np$; $\mathrm{var}(X) = np(1-p)$; MGF $(1 - p + pe^t)^n$.

• Geometric, Geom(p): $P(X=k) = (1-p)^{k-1} p$, $k \in \{1, 2, \dots\}$; $E[X] = \frac{1}{p}$; $\mathrm{var}(X) = \frac{1-p}{p^2}$; MGF $\frac{pe^t}{1-(1-p)e^t}$ for $(1-p)e^t < 1$.

• Negative binomial, NegBin(r, p): $P(X=x) = \binom{x-1}{r-1} p^r (1-p)^{x-r}$, $x \in \{r, r+1, r+2, \dots\}$; $E[X] = \frac{r}{p}$; $\mathrm{var}(X) = \frac{r(1-p)}{p^2}$; MGF $\big(\frac{pe^t}{1-(1-p)e^t}\big)^r$ for $(1-p)e^t < 1$.

• Hypergeometric, HypG(w, b, n): $P(X=k) = \binom{w}{k}\binom{b}{n-k}\big/\binom{w+b}{n}$, $k \in \{0, 1, \dots, n\}$; $E[X] = \mu = \frac{nw}{w+b}$; $\mathrm{var}(X) = \frac{w+b-n}{w+b-1}\, n\, \frac{\mu}{n}\big(1 - \frac{\mu}{n}\big)$; MGF messy.

• Poisson, Pois(λ): $P(X=k) = \frac{e^{-\lambda}\lambda^k}{k!}$, $k \in \{0, 1, 2, \dots\}$; $E[X] = \lambda$; $\mathrm{var}(X) = \lambda$; MGF $e^{\lambda(e^t - 1)}$.

• Uniform, U(a, b): $f(x) = \frac{1}{b-a}$, $x \in [a, b]$; $E[X] = \frac{a+b}{2}$; $\mathrm{var}(X) = \frac{(b-a)^2}{12}$; MGF $\frac{e^{tb} - e^{ta}}{t(b-a)}$.

• Exponential, exp(λ): $f(x) = \lambda e^{-\lambda x}$, $x \in (0, \infty)$; $E[X] = \frac{1}{\lambda}$; $\mathrm{var}(X) = \frac{1}{\lambda^2}$; MGF $\frac{\lambda}{\lambda - t}$ for $t < \lambda$.

• Normal, N(µ, σ²): $f(x) = \frac{1}{\sigma\sqrt{2\pi}} e^{-\frac{(x-\mu)^2}{2\sigma^2}}$, $x \in (-\infty, \infty)$; $E[X] = \mu$; $\mathrm{var}(X) = \sigma^2$; MGF $e^{t\mu + \frac{\sigma^2 t^2}{2}}$.

• Chi-square, χ²ₙ: $f(x) = \frac{1}{2^{n/2}\Gamma(n/2)} x^{n/2-1} e^{-x/2}$, $x \in (0, \infty)$; $E[X] = n$; $\mathrm{var}(X) = 2n$; MGF $(1-2t)^{-n/2}$ for $t < 1/2$.

• Gamma, Γ(α, λ): $f(x) = \frac{\lambda^\alpha}{\Gamma(\alpha)} x^{\alpha-1} e^{-\lambda x}$, $x \in (0, \infty)$; $E[X] = \frac{\alpha}{\lambda}$; $\mathrm{var}(X) = \frac{\alpha}{\lambda^2}$; MGF $(1 - \frac{t}{\lambda})^{-\alpha}$ for $t < \lambda$.
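Entries like these can be cross-checked numerically by summing the PMF series; a sketch for the Geometric(p) entries, with $p = 0.3$ and $t = 0.1$ chosen arbitrarily:

```python
import math

p = 0.3
pmf = lambda k: (1 - p) ** (k - 1) * p   # Geom(p) PMF on {1, 2, ...}

# Truncated series for the mean and variance (tail beyond 500 is negligible)
mean = sum(k * pmf(k) for k in range(1, 500))
second = sum(k * k * pmf(k) for k in range(1, 500))
var = second - mean**2

assert abs(mean - 1 / p) < 1e-9              # E[X] = 1/p
assert abs(var - (1 - p) / p**2) < 1e-6      # var(X) = (1-p)/p^2

# MGF at t = 0.1, valid since (1-p)e^t < 1
t = 0.1
mgf = sum(math.exp(t * k) * pmf(k) for k in range(1, 500))
assert abs(mgf - p * math.exp(t) / (1 - (1 - p) * math.exp(t))) < 1e-9
```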

Standard normal distribution Φ(z)


For z < 0 we use symmetry: P(Z ≤ z) = Φ(z) = 1 − Φ(−z), z ∈ R.

z 0 1 2 3 4 5 6 7 8 9
0.0 .50000 .50399 .50798 .51197 .51595 .51994 .52392 .52790 .53188 .53586
0.1 .53983 .54380 .54776 .55172 .55567 .55962 .56356 .56750 .57142 .57535
0.2 .57926 .58317 .58706 .59095 .59483 .59871 .60257 .60642 .61026 .61409
0.3 .61791 .62172 .62552 .62930 .63307 .63683 .64058 .64431 .64803 .65173
0.4 .65542 .65910 .66276 .66640 .67003 .67364 .67724 .68082 .68439 .68793
0.5 .69146 .69497 .69847 .70194 .70540 .70884 .71226 .71566 .71904 .72240
0.6 .72575 .72907 .73237 .73565 .73891 .74215 .74537 .74857 .75175 .75490
0.7 .75804 .76115 .76424 .76730 .77035 .77337 .77637 .77935 .78230 .78524
0.8 .78814 .79103 .79389 .79673 .79955 .80234 .80511 .80785 .81057 .81327
0.9 .81594 .81859 .82121 .82381 .82639 .82894 .83147 .83398 .83646 .83891
1.0 .84134 .84375 .84614 .84850 .85083 .85314 .85543 .85769 .85993 .86214
1.1 .86433 .86650 .86864 .87076 .87286 .87493 .87698 .87900 .88100 .88298
1.2 .88493 .88686 .88877 .89065 .89251 .89435 .89617 .89796 .89973 .90147
1.3 .90320 .90490 .90658 .90824 .90988 .91149 .91309 .91466 .91621 .91774
1.4 .91924 .92073 .92220 .92364 .92507 .92647 .92786 .92922 .93056 .93189
1.5 .93319 .93448 .93574 .93699 .93822 .93943 .94062 .94179 .94295 .94408
1.6 .94520 .94630 .94738 .94845 .94950 .95053 .95154 .95254 .95352 .95449
1.7 .95543 .95637 .95728 .95818 .95907 .95994 .96080 .96164 .96246 .96327
1.8 .96407 .96485 .96562 .96638 .96712 .96784 .96856 .96926 .96995 .97062
1.9 .97128 .97193 .97257 .97320 .97381 .97441 .97500 .97558 .97615 .97670
2.0 .97725 .97778 .97831 .97882 .97932 .97982 .98030 .98077 .98124 .98169
2.1 .98214 .98257 .98300 .98341 .98382 .98422 .98461 .98500 .98537 .98574
2.2 .98610 .98645 .98679 .98713 .98745 .98778 .98809 .98840 .98870 .98899
2.3 .98928 .98956 .98983 .99010 .99036 .99061 .99086 .99111 .99134 .99158
2.4 .99180 .99202 .99224 .99245 .99266 .99286 .99305 .99324 .99343 .99361
2.5 .99379 .99396 .99413 .99430 .99446 .99461 .99477 .99492 .99506 .99520
2.6 .99534 .99547 .99560 .99573 .99585 .99598 .99609 .99621 .99632 .99643
2.7 .99653 .99664 .99674 .99683 .99693 .99702 .99711 .99720 .99728 .99736
2.8 .99744 .99752 .99760 .99767 .99774 .99781 .99788 .99795 .99801 .99807
2.9 .99813 .99819 .99825 .99831 .99836 .99841 .99846 .99851 .99856 .99861
3.0 .99865 .99869 .99874 .99878 .99882 .99886 .99889 .99893 .99896 .99900
3.1 .99903 .99906 .99910 .99913 .99916 .99918 .99921 .99924 .99926 .99929
3.2 .99931 .99934 .99936 .99938 .99940 .99942 .99944 .99946 .99948 .99950
3.3 .99952 .99953 .99955 .99957 .99958 .99960 .99961 .99962 .99964 .99965
3.4 .99966 .99968 .99969 .99970 .99971 .99972 .99973 .99974 .99975 .99976
3.5 .99977 .99978 .99978 .99979 .99980 .99981 .99981 .99982 .99983 .99983
3.6 .99984 .99985 .99985 .99986 .99986 .99987 .99987 .99988 .99988 .99989
3.7 .99989 .99990 .99990 .99990 .99991 .99991 .99992 .99992 .99992 .99992
3.8 .99993 .99993 .99993 .99994 .99994 .99994 .99994 .99995 .99995 .99995
3.9 .99995 .99995 .99996 .99996 .99996 .99996 .99996 .99996 .99997 .99997
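The table values can be reproduced from the error function via $\Phi(z) = \frac{1}{2}\big(1 + \mathrm{erf}(z/\sqrt{2})\big)$, available in Python's standard library:

```python
import math

# Standard normal CDF from the error function
Phi = lambda z: 0.5 * (1 + math.erf(z / math.sqrt(2)))

# Spot-check a few table entries (5-decimal rounding)
assert abs(Phi(0.0) - 0.50000) < 1e-5
assert abs(Phi(1.0) - 0.84134) < 1e-5
assert abs(Phi(1.96) - 0.97500) < 1e-5

# Symmetry used for z < 0: Phi(z) = 1 - Phi(-z)
assert abs(Phi(-1.0) - (1 - Phi(1.0))) < 1e-15
```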

χ²ν distribution

χ²ν(p): quantiles for the chi-square distribution with ν degrees of freedom.

ν .005 .01 .025 .05 .10 .25 .50 .75 .90 .95 .975 .99 .995 .999
1 .0000 .0002 .0010 .0039 .0158 .102 .455 1.32 2.71 3.84 5.02 6.63 7.88 10.8
2 .0100 .0201 .0506 .103 .211 .575 1.39 2.77 4.61 5.99 7.38 9.21 10.6 13.8
3 .0717 .115 .216 .352 .584 1.21 2.37 4.11 6.25 7.81 9.35 11.3 12.8 16.3
4 .207 .297 .484 .711 1.06 1.92 3.36 5.39 7.78 9.49 11.1 13.3 14.9 18.5
5 .412 .554 .831 1.15 1.61 2.67 4.35 6.63 9.24 11.1 12.8 15.1 16.7 20.5
6 .676 .872 1.24 1.64 2.20 3.45 5.35 7.84 10.6 12.6 14.4 16.8 18.5 22.5
7 .989 1.24 1.69 2.17 2.83 4.25 6.35 9.04 12.0 14.1 16.0 18.5 20.3 24.3
8 1.34 1.65 2.18 2.73 3.49 5.07 7.34 10.2 13.4 15.5 17.5 20.1 22.0 26.1
9 1.73 2.09 2.70 3.33 4.17 5.90 8.34 11.4 14.7 16.9 19.0 21.7 23.6 27.9
10 2.16 2.56 3.25 3.94 4.87 6.74 9.34 12.5 16.0 18.3 20.5 23.2 25.2 29.6
11 2.60 3.05 3.82 4.57 5.58 7.58 10.3 13.7 17.3 19.7 21.9 24.7 26.8 31.3
12 3.07 3.57 4.40 5.23 6.30 8.44 11.3 14.8 18.5 21.0 23.3 26.2 28.3 32.9
13 3.57 4.11 5.01 5.89 7.04 9.30 12.3 16.0 19.8 22.4 24.7 27.7 29.8 34.5
14 4.07 4.66 5.63 6.57 7.79 10.2 13.3 17.1 21.1 23.7 26.1 29.1 31.3 36.1
15 4.60 5.23 6.26 7.26 8.55 11.0 14.3 18.2 22.3 25.0 27.5 30.6 32.8 37.7
16 5.14 5.81 6.91 7.96 9.31 11.9 15.3 19.4 23.5 26.3 28.8 32.0 34.3 39.3
17 5.70 6.41 7.56 8.67 10.1 12.8 16.3 20.5 24.8 27.6 30.2 33.4 35.7 40.8
18 6.26 7.01 8.23 9.39 10.9 13.7 17.3 21.6 26.0 28.9 31.5 34.8 37.2 42.3
19 6.84 7.63 8.91 10.1 11.7 14.6 18.3 22.7 27.2 30.1 32.9 36.2 38.6 43.8
20 7.43 8.26 9.59 10.9 12.4 15.5 19.3 23.8 28.4 31.4 34.2 37.6 40.0 45.3
21 8.03 8.90 10.3 11.6 13.2 16.3 20.3 24.9 29.6 32.7 35.5 38.9 41.4 46.8
22 8.64 9.54 11.0 12.3 14.0 17.2 21.3 26.0 30.8 33.9 36.8 40.3 42.8 48.3
23 9.26 10.2 11.7 13.1 14.8 18.1 22.3 27.1 32.0 35.2 38.1 41.6 44.2 49.7
24 9.89 10.9 12.4 13.8 15.7 19.0 23.3 28.2 33.2 36.4 39.4 43.0 45.6 51.2
25 10.5 11.5 13.1 14.6 16.5 19.9 24.3 29.3 34.4 37.7 40.6 44.3 46.9 52.6
26 11.2 12.2 13.8 15.4 17.3 20.8 25.3 30.4 35.6 38.9 41.9 45.6 48.3 54.1
27 11.8 12.9 14.6 16.2 18.1 21.7 26.3 31.5 36.7 40.1 43.2 47.0 49.6 55.5
28 12.5 13.6 15.3 16.9 18.9 22.7 27.3 32.6 37.9 41.3 44.5 48.3 51.0 56.9
29 13.1 14.3 16.0 17.7 19.8 23.6 28.3 33.7 39.1 42.6 45.7 49.6 52.3 58.3
30 13.8 15.0 16.8 18.5 20.6 24.5 29.3 34.8 40.3 43.8 47.0 50.9 53.7 59.7
40 20.7 22.2 24.4 26.5 29.1 33.7 39.3 45.6 51.8 55.8 59.3 63.7 66.8 73.4
50 28.0 29.7 32.4 34.8 37.7 42.9 49.3 56.3 63.2 67.5 71.4 76.2 79.5 86.7
60 35.5 37.5 40.5 43.2 46.5 52.3 59.3 67.0 74.4 79.1 83.3 88.4 92.0 99.6
70 43.3 45.4 48.8 51.7 55.3 61.7 69.3 77.6 85.5 90.5 95.0 100. 104. 112.
