Formula sheet for midterm 2

This is a non-exhaustive list of definitions and formulas from class that may be useful.

• De Morgan’s Laws.

– (A ∪ B)^c = A^c ∩ B^c.
– (A ∩ B)^c = A^c ∪ B^c.
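
A quick sanity check of both identities, as a minimal Python sketch on an assumed ten-outcome sample space:

    # Illustrative check of De Morgan's laws using Python sets (example values assumed).
    S = set(range(10))                         # sample space {0, ..., 9}
    A, B = {0, 1, 2, 3}, {2, 3, 4, 5}
    comp = lambda E: S - E                     # complement relative to S
    assert comp(A | B) == comp(A) & comp(B)    # (A ∪ B)^c = A^c ∩ B^c
    assert comp(A & B) == comp(A) | comp(B)    # (A ∩ B)^c = A^c ∪ B^c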

• Definition of probability. A probability space consists of a sample space S and a
probability function P which takes an event A ⊆ S as input and returns P(A), a real
number between 0 and 1, as output. The function P must satisfy the following axioms:

– P(∅) = 0, P(S) = 1.
– If A_1, A_2, . . . are disjoint events, then P(∪_{j=1}^∞ A_j) = Σ_{j=1}^∞ P(A_j).

• Definition of conditional probability. P(A|B) = P(A ∩ B) / P(B)

• Bayes’ Law. P(A|B) = P(B|A) · P(A) / P(B)
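
As a rough numeric illustration (all probabilities below are made up for the example), the definition of conditional probability and Bayes’ Law give the same answer:

    # Hypothetical numbers: P(A) = 0.3, P(B|A) = 0.6, P(B) = 0.32.
    p_A, p_B_given_A, p_B = 0.3, 0.6, 0.32
    p_A_and_B = p_B_given_A * p_A            # P(A ∩ B) = P(B|A) P(A) = 0.18
    print(p_A_and_B / p_B)                   # P(A|B) via the definition: 0.5625
    print(p_B_given_A * p_A / p_B)           # P(A|B) via Bayes' Law: 0.5625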

• Law of total probability. For any partition A_1, . . . , A_n of the sample space,

P(B) = Σ_{i=1}^n P(B|A_i) · P(A_i)
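
A hedged example of the partition formula (the three sources and their rates are invented for illustration):

    # P(B) when a partition A_1, A_2, A_3 has known conditional probabilities (made-up numbers).
    p_A = [0.5, 0.3, 0.2]                    # P(A_i), sums to 1
    p_B_given_A = [0.01, 0.02, 0.05]         # P(B | A_i)
    p_B = sum(pb * pa for pb, pa in zip(p_B_given_A, p_A))
    print(p_B)                               # 0.5*0.01 + 0.3*0.02 + 0.2*0.05 = 0.021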

• Principle of Inclusion-Exclusion.

P(A ∪ B) = P(A) + P(B) − P(A ∩ B)
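
A small check with equally likely outcomes (one fair die, events chosen arbitrarily for the example):

    # Inclusion-exclusion on a fair six-sided die: A = "even", B = "at least 4".
    S = set(range(1, 7))
    A, B = {2, 4, 6}, {4, 5, 6}
    P = lambda E: len(E) / len(S)
    print(P(A | B))                          # 2/3
    print(P(A) + P(B) - P(A & B))            # 1/2 + 1/2 - 1/3 = 2/3, as expected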

• Expectation.

– For a discrete random variable X with PMF p: E[X] = Σ_x x · p(x)
– For a continuous random variable X with PDF f: E[X] = ∫_{−∞}^{∞} x f(x) dx

• Variance. Var(X) = E[(X − E[X])^2] = E[X^2] − (E[X])^2
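
A minimal sketch applying the discrete expectation formula and the variance identity above, with a fair six-sided die as an assumed example:

    # E[X] and Var(X) for a fair die, directly from the PMF p(x) = 1/6.
    p = {x: 1/6 for x in range(1, 7)}
    EX = sum(x * px for x, px in p.items())          # E[X] = 3.5
    EX2 = sum(x**2 * px for x, px in p.items())      # E[X^2] = 91/6
    print(EX, EX2 - EX**2)                           # Var(X) = 91/6 - 3.5^2 = 35/12 ≈ 2.917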

• Linearity of expectation. E[aX + bY + c] = aE[X] + bE[Y] + c

• Law of the unconscious statistician. For a discrete r.v. X,

E[g(X)] = Σ_x g(x) · P(X = x)
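
One way to read this: E[g(X)] can be computed from the PMF of X without finding the PMF of g(X). A sketch with an assumed indicator function g:

    # LOTUS with g(x) = 1 if x is even else 0, so E[g(X)] = P(X is even) for a fair die.
    p = {x: 1/6 for x in range(1, 7)}
    g = lambda x: 1 if x % 2 == 0 else 0
    print(sum(g(x) * px for x, px in p.items()))     # 0.5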

• Expectation via survival function. For a discrete r.v. X with support {0, 1, 2, 3, . . .},

E[X] = Σ_{n=0}^∞ P(X > n)
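
A quick numeric confirmation for an assumed example whose support sits inside {0, 1, 2, . . .} (again a fair die):

    # sum_{n >= 0} P(X > n) = 1 + 5/6 + 4/6 + 3/6 + 2/6 + 1/6 = 3.5 = E[X] for a fair die.
    survival_sum = sum(sum(1/6 for x in range(1, 7) if x > n) for n in range(6))
    print(survival_sum)                              # 3.5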

• Bernoulli distribution. Ber(p)

– PMF: P(X = 1) = p, P(X = 0) = 1 − p


– Mean: p
– Variance: p(1 − p)

• Binomial distribution. Bin(n, p)

– PMF: P(X = k) = (n choose k) p^k (1 − p)^(n−k) for k = 0, . . . , n
– Mean: np
– Variance: np(1 − p)
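
A hedged check of the Bin(n, p) PMF and mean using math.comb (the parameters n = 10, p = 0.3 are assumed for the example):

    # Binomial PMF sums to 1 and has mean n*p.
    import math
    n, p = 10, 0.3
    pmf = [math.comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)]
    print(sum(pmf))                                  # 1.0
    print(sum(k * pk for k, pk in enumerate(pmf)))   # 3.0 = n*p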

• Geometric distribution. Geo(p)

– PMF: P(X = k) = p(1 − p)^k for k = 0, 1, 2, . . .
– Mean: (1 − p)/p
– Variance: (1 − p)/p^2
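
A sketch verifying the Geo(p) mean by truncating the series (p = 0.25 is an arbitrary assumed value):

    # sum_{k >= 0} k * p * (1 - p)^k, truncated, versus the closed form (1 - p)/p.
    p = 0.25
    mean = sum(k * p * (1 - p)**k for k in range(1000))
    print(mean, (1 - p) / p)                         # both approximately 3.0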

• Hypergeometric distribution. HGeom(w, b, n)

– PMF: P(X = k) = (w choose k)(b choose n−k) / (w+b choose n) for integers k satisfying 0 ≤ k ≤ w and 0 ≤ n − k ≤ b
– Mean: nw/(w + b)
– Variance: not discussed in class

• Poisson distribution. Poiss(λ)


– PMF: P(X = k) = e^(−λ) λ^k / k! for k = 0, 1, 2, . . .
– Mean: λ
– Variance: λ
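
A minimal check of the Poiss(λ) PMF and mean with a truncated series (λ = 4 is an assumed value):

    # Poisson PMF: e^(-lam) * lam^k / k!; its total mass is 1 and its mean is lam.
    import math
    lam = 4.0
    pmf = [math.exp(-lam) * lam**k / math.factorial(k) for k in range(100)]
    print(sum(pmf))                                  # approximately 1
    print(sum(k * pk for k, pk in enumerate(pmf)))   # approximately 4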

• Standard uniform distribution. Unif(0, 1)

– PDF: f(x) = 1 for x ∈ [0, 1], and 0 otherwise.


– Mean: 1/2
– Variance: 1/12

• Standard Gaussian distribution. N (0, 1)


– PDF: ϕ(x) = (1/√(2π)) e^(−x^2/2)

– Mean: 0
– Variance: 1
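
As a rough numeric sanity check (a plain Riemann sum over an assumed grid, not an exact computation), the standard normal PDF integrates to about 1:

    # Riemann sum of (1/sqrt(2*pi)) * exp(-x^2/2) over [-10, 10].
    import math
    dx = 0.001
    total = sum(math.exp(-(i * dx)**2 / 2) / math.sqrt(2 * math.pi) * dx
                for i in range(-10000, 10000))
    print(total)                                     # approximately 1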

• Standard exponential distribution. Expo(1)

– PDF: f(x) = e^(−x) for x > 0, and 0 otherwise


– Mean: 1
– Variance: 1
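
A similar hedged check of the Expo(1) mean via the continuous expectation formula, again using a simple Riemann sum over an assumed grid:

    # Approximates E[X] = integral of x * e^(-x) dx over (0, 20]; exact value is 1.
    import math
    dx = 0.001
    mean = sum(i * dx * math.exp(-i * dx) * dx for i in range(1, 20001))
    print(mean)                                      # approximately 1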
