PT2425 Cheatsheet Updatedv2

The document provides formulas and concepts related to Probability Theory, including the Law of Total Probability, Bayes' Rule, and definitions of probability, independence, and conditional probability. It also covers random variables, their distributions (Bernoulli, Binomial, Poisson, Normal), and key statistical measures such as expectation, variance, and covariance. Additionally, it includes the Central Limit Theorem, the Law of Large Numbers, and various mathematical operations such as derivatives and integrals relevant to probability and statistics.

Formulas for the exam Probability Theory

This section contains NO questions.

Number of Ways to Sample
We discussed four categories of sampling. We have n elements, and draw k samples. This can be done in these numbers of ways:
1. Ordered, with replacement: n^k
2. Ordered, without replacement: n!/(n−k)!
3. Unordered, without replacement: (n choose k) = n!/(k!(n−k)!)
4. Unordered, with replacement: (n+k−1 choose k) (using the stars and bars method)
in which n! = n · (n−1) · ... · 2 · 1 is the number of permutations of n elements, and 0! = 1 by definition.

General Definition of Probability
A probability space consists of an outcome space S and a probability function P, which takes an event A ⊆ S as input and returns a real number. It must satisfy three axioms:
1. For any event A, P(A) ≥ 0
2. P(S) = 1
3. If A1, A2, A3, ... are disjoint(*) events, then
P(A1 ∪ A2 ∪ A3 ∪ ...) = P(A1) + P(A2) + P(A3) + ...
(*) Disjoint means the events are mutually exclusive: Ai ∩ Aj = ∅ for i ≠ j.

Conditional Probability
Let A and B be events with P(B) > 0. The conditional probability of A given B, denoted P(A | B), is defined as
P(A | B) = P(A ∩ B) / P(B)
Note: often we write P(A ∩ B) as P(A, B) and call it the joint probability.

Law of Total Probability
Let A1, ..., An be a partition of S (i.e. disjoint events that together form S), with P(Ai) > 0 for all i. Then
P(B) = Σ_{i=1}^{n} P(B, Ai) = Σ_{i=1}^{n} P(B | Ai) P(Ai)

Bayes' Rule
P(A | B) = P(B | A) P(A) / P(B)

Independence
Two events are independent if
P(A, B) = P(A) P(B).
If P(A) > 0 and P(B) > 0, this is equivalent to P(A | B) = P(A), and vice versa.

Mutually Independent
Three events A, B and C are mutually independent if all of these hold:
P(A, B) = P(A) P(B)
P(A, C) = P(A) P(C)
P(B, C) = P(B) P(C)
P(A, B, C) = P(A) P(B) P(C)
If only the first three hold, the events are merely pairwise independent.

Conditional Independence
Two events A and B are said to be conditionally independent given E if
P(A, B | E) = P(A | E) P(B | E),
or equivalently, if P(A | B, E) = P(A | E).
Formula Sheet - 1
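The Law of Total Probability and Bayes' Rule can be checked numerically. A minimal Python sketch using an invented diagnostic-test example; the prevalence and test accuracies below are assumptions for illustration, not values from the sheet:

```python
# Assumed numbers for illustration (not from the sheet).
p_d = 0.01        # P(D): prevalence of a condition
p_pos_d = 0.95    # P(+ | D): test sensitivity
p_pos_nd = 0.05   # P(+ | not D): false-positive rate

# Law of total probability over the partition {D, not D}:
# P(+) = P(+ | D) P(D) + P(+ | not D) P(not D)
p_pos = p_pos_d * p_d + p_pos_nd * (1 - p_d)

# Bayes' rule: P(D | +) = P(+ | D) P(D) / P(+)
p_d_pos = p_pos_d * p_d / p_pos

print(round(p_pos, 4), round(p_d_pos, 4))  # 0.059 and roughly 0.161
```

Even after a positive result, P(D | +) stays well below 1 because the condition is rare; this is the standard illustration of why the prior P(D) matters in Bayes' rule.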
Random Variable
A random variable X is a function from the outcome space S to the real numbers R:
X : S → R
• Range RX: the set of possible values for X

Discrete Random Variable
• A random variable X is a discrete random variable if its range RX is finite or countable.
• If X is a discrete r.v., the function
PX(x) = P(X = x), for x ∈ RX,
is the probability mass function (PMF) of X.
• Support: SX = {x | PX(x) > 0}.

Continuous Random Variable and PDF
• A random variable X is a continuous r.v. if its CDF FX(x) is continuous for all x ∈ R.
• A continuous r.v. X with CDF FX has probability density function (PDF) fX if
FX(x) = ∫_{−∞}^{x} fX(t) dt

Expectation of a Function g(X)
Assume X is a random variable and g is a function of X; then the expectation of g(X) is
• Discrete case: E[g(X)] = Σ_{x ∈ SX} g(x) PX(x)
• Continuous case: E[g(X)] = ∫_{x ∈ SX} g(x) fX(x) dx
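The two cases of E[g(X)] can be sketched in Python. The fair-die PMF and g(x) = x² are assumed examples, and the continuous case approximates the integral for Unif(0, 1) with a simple midpoint Riemann sum:

```python
# Discrete case (assumed example): X a fair die, g(x) = x^2.
support = [1, 2, 3, 4, 5, 6]
pmf = {x: 1 / 6 for x in support}
e_g = sum(x**2 * pmf[x] for x in support)   # E[X^2] = 91/6
print(round(e_g, 4))                        # about 15.1667

# Continuous case (assumed example): X ~ Unif(0, 1), g(x) = x^2,
# approximating the integral of g(x) f_X(x) with a midpoint sum.
n_steps = 100_000
dx = 1.0 / n_steps
e_g_cont = sum(((i + 0.5) * dx) ** 2 * dx for i in range(n_steps))
print(round(e_g_cont, 4))                   # close to 1/3
```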

Two Discrete Random Variables
Consider two discrete random variables X and Y.
• Joint PMF: PXY(x, y) = P(X = x, Y = y)
• X and Y are independent if PXY(x, y) = PX(x) PY(y) for all x, y

The Cumulative Distribution Function (CDF)
• The cumulative distribution function (CDF) FX for a r.v. X is defined as FX(x) = P(X ≤ x)
• FX(x) is defined for all x ∈ R
• If X is discrete, then FX(x) = Σ_{xk ∈ RX, xk ≤ x} PX(xk)

Variance and Standard Deviation
• The variance of a random variable X is defined as Var[X] = E[X²] − (E[X])²
• The standard deviation of a random variable X is defined as σX = √(Var[X])

Covariance and Pearson Correlation
• The covariance of random variables X and Y is defined as Cov(X, Y) = E[XY] − E[X] E[Y]
• The Pearson correlation is defined as ρX,Y = Cov(X, Y) / √(Var[X] Var[Y])
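A short Python sketch of the covariance and correlation formulas, using an invented 2×2 joint PMF (the probabilities are assumptions for illustration only):

```python
# Assumed joint PMF P_XY(x, y) for two binary random variables.
joint = {
    (0, 0): 0.4, (0, 1): 0.1,
    (1, 0): 0.1, (1, 1): 0.4,
}
# Marginal PMFs by summing the joint PMF over the other variable.
px = {x: sum(p for (a, b), p in joint.items() if a == x) for x in (0, 1)}
py = {y: sum(p for (a, b), p in joint.items() if b == y) for y in (0, 1)}

ex = sum(x * p for x, p in px.items())
ey = sum(y * p for y, p in py.items())
exy = sum(x * y * p for (x, y), p in joint.items())
var_x = sum(x**2 * p for x, p in px.items()) - ex**2
var_y = sum(y**2 * p for y, p in py.items()) - ey**2

cov = exy - ex * ey                     # Cov(X, Y) = E[XY] - E[X]E[Y]
rho = cov / (var_x * var_y) ** 0.5      # Pearson correlation
print(round(cov, 4), round(rho, 4))     # cov ~ 0.15, rho ~ 0.6
```

Here ρ comes out positive because most probability mass sits on (0, 0) and (1, 1), i.e. X and Y tend to agree.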
The Method of Transformations
Suppose that X is a continuous random variable with PDF fX, and suppose g : R → R is a strictly monotonic differentiable function. Let Y = g(X). Then the PDF of Y is given by
fY(y) = fX(x) · |dx/dy|, where g(x) = y;
fY(y) = 0 if g(x) = y has no (real) solution.

The Bernoulli Distribution
X ∼ Bern(p), 0 < p < 1
• Random variable X has outcomes 0 and 1
• PX(1) = p, PX(0) = 1 − p
• E[X] = p
• Var[X] = p − p²

The Binomial Distribution
X ∼ Binomial(N, p), N > 0 integer, and 0 < p < 1
• Do N experiments with independent Bernoulli(p) variables Xi
• X is the total number of 1-s
• SX = {0, 1, ..., N}
• PX(k) = (N choose k) p^k (1 − p)^(N−k)
• E[X] = Np

The Geometric Distribution
X ∼ Geometric(p), 0 < p < 1
• Example: Consider a coin with P(H) = p. Toss the coin repeatedly until the first Heads. How many tosses?
• X = total number of tosses. (A slightly different definition is sometimes used: X = total number of tosses − 1.)
• SX = {1, 2, 3, ...}
• PX(k) = p(1 − p)^(k−1)

The Hypergeometric Distribution
X ∼ HypGeom(n, s, m)
• n items in total; s are blue (for example), the others red
• Choose m ≤ n items at random without replacement
• X is the number of blue samples
• PX(k) = (s choose k)(n−s choose m−k) / (n choose m)
• SX = {k ∈ N | max(0, m + s − n) ≤ k ≤ min(s, m)}

The Poisson Distribution
X ∼ Poisson(λ), λ > 0
• PX(k) = (λ^k / k!) e^(−λ), for k = 0, 1, 2, ...
• E[X] = λ

The Uniform Distribution
X ∼ Unif(a, b), b > a
• fX(x) = 1/(b − a) if a < x < b, 0 otherwise
• E[X] = (a + b)/2
• Var[X] = (b − a)²/12

The Exponential Distribution
X ∼ Exponential(λ), λ > 0
• fX(x) = λe^(−λx) if x > 0, 0 otherwise
• E[X] = 1/λ
• Var[X] = 1/λ²
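The PMFs on this page can be checked numerically. A Python sketch (the parameter values N = 10, p = 0.3, λ = 2 are assumptions for illustration) that verifies the Binomial PMF sums to 1 and that both means match the formulas E[X] = Np and E[X] = λ:

```python
from math import comb, exp, factorial

# Binomial(N, p) with assumed parameters.
N, p = 10, 0.3
binom_pmf = [comb(N, k) * p**k * (1 - p)**(N - k) for k in range(N + 1)]
mean_binom = sum(k * q for k, q in enumerate(binom_pmf))
print(round(sum(binom_pmf), 6), round(mean_binom, 6))  # 1.0 and N*p = 3.0

# Poisson(lambda) with an assumed parameter; the infinite sum is
# truncated at k = 49, where the remaining tail is negligible.
lam = 2.0
pois_pmf = [lam**k / factorial(k) * exp(-lam) for k in range(50)]
mean_pois = sum(k * q for k, q in enumerate(pois_pmf))
print(round(mean_pois, 6))                             # lambda = 2.0
```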
The Normal/Gaussian Distribution
X ∼ N(µ, σ²)
• fX(x) = (1 / (σ√(2π))) exp(−(x − µ)² / (2σ²))
• E[X] = µ
• Var[X] = σ²
• Standard normal: Z ∼ N(0, 1)
• CDF of standard normal: FZ(z) = Φ(z)

Law of Large Numbers
Let X1, X2, ..., Xn be i.i.d. (independent and identically distributed) r.v.-s
- with a finite expected value E[Xi] = µ < ∞
- and a finite variance Var[Xi] = σ² < ∞.
Define the sample mean
X̄ = (X1 + X2 + ··· + Xn) / n
Then for all ϵ > 0,
lim_{n→∞} P(|X̄ − µ| ≥ ϵ) = 0

The Central Limit Theorem (CLT)
Let X1, X2, ..., Xn be i.i.d. random variables with expected value E[Xi] = µ < ∞ and variance 0 < Var(Xi) = σ² < ∞. Then the random variable
Zn = (X̄ − µ) / (σ/√n) = (X1 + X2 + ··· + Xn − nµ) / (√n σ)
converges in distribution to the standard normal random variable as n → ∞, that is
lim_{n→∞} P(Zn ≤ x) = Φ(x), for all x ∈ R,
where Φ(x) is the standard normal CDF.

Summations and Products
• Σ_{k=1}^{n} uk = u1 + u2 + ... + un
• Π_{k=1}^{n} uk = u1 · u2 · ... · un

Identities with Summations
• Σ_{k=1}^{n} (uk + vk) = Σ_{k=1}^{n} uk + Σ_{k=1}^{n} vk
• Σ_{k=1}^{n} c · uk = c Σ_{k=1}^{n} uk
• Σ_{k=1}^{n} Σ_{j=1}^{m} uk,j = Σ_{j=1}^{m} Σ_{k=1}^{n} uk,j

Relations between exp and log
• exp(x) = e^x
• exp(x + y) = exp(x) · exp(y)
• log(exp(x)) = x
• log(x · y) = log(x) + log(y) (with x > 0, y > 0)
• log(Π_{i=1}^{n} ui) = Σ_{i=1}^{n} log(ui) (with ui > 0)
• Unless stated otherwise, log is the natural logarithm: log = ln

Derivatives
• f(x) = x^a ⇒ f′(x) = a x^(a−1)
• f(x) = exp(x) ⇒ f′(x) = exp(x)
• f(x) = log(x) ⇒ f′(x) = 1/x
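The LLN and CLT can be illustrated by simulation. A Python sketch drawing Unif(0, 1) samples, so µ = 1/2 and σ² = 1/12; the sample sizes, the fixed seed, and the 95% interval check are choices made for illustration:

```python
import random

random.seed(0)             # fixed seed so the sketch is reproducible
mu, sigma2 = 0.5, 1 / 12   # mean and variance of Unif(0, 1)

def sample_mean(n):
    return sum(random.random() for _ in range(n)) / n

# LLN: the sample mean of many draws is close to mu.
m = sample_mean(100_000)
print(abs(m - mu) < 0.01)

# CLT: standardized sample means Z_n are roughly standard normal,
# so about 95% of them should land in [-1.96, 1.96].
n = 30
zs = [(sample_mean(n) - mu) / ((sigma2 / n) ** 0.5) for _ in range(2_000)]
frac = sum(-1.96 < z < 1.96 for z in zs) / len(zs)
print(0.90 < frac < 0.99)
```

Both checks print True: the first is the LLN at work, and the second shows the fraction of standardized means inside the standard normal's central 95% interval.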
More Derivatives
• f(x) = a · g(x) + b · h(x) ⇒ f′(x) = a · g′(x) + b · h′(x)
• f(x) = g(x) · h(x) ⇒ f′(x) = g′(x) · h(x) + g(x) · h′(x)
• f(x) = g(x) / h(x) ⇒ f′(x) = (g′(x) · h(x) − g(x) · h′(x)) / (h(x))²
• f(x) = g(h(x)) ⇒ f′(x) = g′(h(x)) · h′(x)

Integrals
• f′(x) = g(x) ⇒ ∫_a^b g(x) dx = f(b) − f(a)
• If α ≠ −1: ∫_a^b x^α dx = (b^(α+1) − a^(α+1)) / (α+1)
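The power-rule integral can be sanity-checked numerically. A Python sketch with assumed values a = 1, b = 2, α = 3, comparing the closed form against a midpoint-rule approximation:

```python
a, b, alpha = 1.0, 2.0, 3.0   # assumed example values, not from the sheet

# Closed form from the sheet: (b^(alpha+1) - a^(alpha+1)) / (alpha + 1)
exact = (b ** (alpha + 1) - a ** (alpha + 1)) / (alpha + 1)   # (16 - 1)/4 = 3.75

# Midpoint-rule approximation of the integral of x^alpha over [a, b].
n_steps = 10_000
h = (b - a) / n_steps
approx = sum((a + (i + 0.5) * h) ** alpha * h for i in range(n_steps))

print(exact, round(approx, 6))
```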
