
Memo (I) Probability and Statistics

Let (Ω, A, P) be a probability space.




1 Probability

For A, B ∈ A:

1. P(Ā) = 1 − P(A),
2. A ⊆ B ⇒ P(A) ≤ P(B),
3. P(A \ B) = P(A) − P(A ∩ B),
4. P(A ∪ B) = P(A) + P(B) − P(A ∩ B).

Conditional probability

1. For A, B ∈ A with P(B) ≠ 0, the conditional probability of A given B is P(A|B) = P(A ∩ B)/P(B).
2. [Law of total probability] Let B1, B2, . . . , Bn be events in A with P(Bi) ≠ 0, such that ∪_{i=1}^{n} Bi = Ω and Bi ∩ Bj = ∅ for i ≠ j. Then, for any event A in A: P(A) = Σ_{i=1}^{n} P(A|Bi)P(Bi). (A numerical sanity check is sketched below.)
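The law of total probability is easy to check numerically. The following sketch is an addition to the memo; the partition probabilities and conditionals are invented purely for illustration.

```python
# Illustrative check of the law of total probability:
# P(A) = sum_i P(A|B_i) P(B_i) for a partition B_1, ..., B_n of Omega.

p_B = [0.3, 0.5, 0.2]          # P(B_i): a hypothetical partition of Omega
p_A_given_B = [0.9, 0.4, 0.1]  # P(A|B_i): hypothetical conditional probabilities

p_A = sum(pa * pb for pa, pb in zip(p_A_given_B, p_B))
print(p_A)  # 0.9*0.3 + 0.4*0.5 + 0.1*0.2 = 0.49
```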
Independence

Two events A, B ∈ A with P(B) ≠ 0 are independent if P(A|B) = P(A) or, equivalently, P(A ∩ B) = P(A)P(B).

Same distribution: two random variables X, Y have the same probability distribution if F_X = F_Y or ϕ_X = ϕ_Y.

2 Real random variable (R)

1. The probability distribution of a real random variable X is given by P(X ∈ B) = P({ω ∈ Ω; X(ω) ∈ B}), B ∈ B(R).
2. The c.d.f. of a random variable X is F_X : R → [0, 1] defined by F_X(x) = P(X ≤ x), x ∈ R. It satisfies P(a < X ≤ b) = F_X(b) − F_X(a) for a, b ∈ R, a < b.
Discrete case: X(Ω) = {x1, . . . , xn} ⊂ R or X(Ω) = {x1, . . . , xn, . . . } ⊂ R

1. Distribution: P(X = xi) = P({ωi; X(ωi) = xi}), i = 1, . . . , n.
2. c.d.f.: F_X(x) = 0 if x < x1; F_X(x) = Σ_{j=1}^{i} P(X = xj) if xi ≤ x < x_{i+1}; F_X(x) = 1 if x ≥ xn.
3. Expectation: E(g(X)) = Σ_{i=1}^{n} g(xi)P(X = xi), g piecewise continuous.
4. Characteristic function: ϕ_X(t) = Σ_{j=1}^{n} e^{itxj} P(X = xj), t ∈ R.
Continuous case: X(Ω) = I, I an interval of R

1. Distribution: P(X ∈ B) = ∫_B f_X(x)dx, B ∈ B(R).
2. p.d.f.: f_X : R → R+ satisfying ∫_R f_X(x)dx = 1.
3. c.d.f.: F_X(x) = ∫_{−∞}^{x} f_X(t)dt and f_X = dF_X/dx a.e.
4. Expectation: E(g(X)) = ∫_R g(x)f_X(x)dx, g measurable.
5. Characteristic function: ϕ_X(t) = ∫_R e^{itx} f_X(x)dx, t ∈ R. (A numerical check of these formulas is sketched below.)
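These integral formulas can be verified numerically. The sketch below is an addition, not part of the memo; it uses the Exponential(2) density from Section 3 as a concrete f_X, with the rate chosen arbitrarily.

```python
# Numerical sketch of the continuous-case formulas for f(x) = lam*exp(-lam*x)
# on [0, +inf); the value of lam is an arbitrary illustration.
import numpy as np
from scipy.integrate import quad

lam = 2.0
f = lambda x: lam * np.exp(-lam * x)

total, _ = quad(f, 0, np.inf)                  # p.d.f. normalization: should be 1
mean, _ = quad(lambda x: x * f(x), 0, np.inf)  # E(X) = 1/lam = 0.5
F1, _ = quad(f, 0, 1.0)                        # F_X(1) = 1 - exp(-lam) ~ 0.8647
print(total, mean, F1)
```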
Properties of expectation and variance

1. The expectation is linear, i.e. for λ, µ ∈ R we have E(λX + µY) = λE(X) + µE(Y).
2. The variance of X is defined by V(X) = E((X − E(X))²). We have V(X) = E(X²) − (E(X))², and the variance is quadratic, i.e. V(λX + µ) = λ²V(X) for λ, µ ∈ R.
3. The standard deviation of X is σ(X) = √V(X).
4. [Chebyshev's inequality] Given a random variable X with finite mean and variance, ∀ε > 0, P(|X − E(X)| ≥ ε) ≤ V(X)/ε². (A simulation check follows.)
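As an illustration (added here, not in the original memo), a quick Monte Carlo run shows the empirical tail probability staying under the Chebyshev bound; the distribution, sample size and thresholds are arbitrary choices.

```python
# Monte Carlo sketch of Chebyshev's inequality for X ~ Exponential(1),
# which has E(X) = 1 and V(X) = 1.
import numpy as np

rng = np.random.default_rng(0)
X = rng.exponential(scale=1.0, size=100_000)

for eps in (1.0, 2.0, 3.0):
    tail = np.mean(np.abs(X - 1.0) >= eps)  # empirical P(|X - E(X)| >= eps)
    print(eps, tail, "<=", 1.0 / eps**2)    # Chebyshev bound V(X)/eps^2
```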

3 Some usual distributions

Bernoulli distribution B(p), p ∈ (0, 1)
P(X = 1) = p, P(X = 0) = 1 − p = q,
E(X) = p, V(X) = pq and ϕ_X(t) = q + pe^{it}, t ∈ R.

Binomial distribution B(n, p), n ∈ N*, p ∈ (0, 1)
P(X = k) = C_n^k p^k q^{n−k} with 1 − p = q and C_n^k = n!/(k!(n−k)!).
E(X) = np, V(X) = npq and ϕ_X(t) = (q + pe^{it})^n, t ∈ R.

Poisson distribution P(λ), λ > 0
P(X = k) = (λ^k / k!) e^{−λ},
E(X) = λ, V(X) = λ and ϕ_X(t) = e^{λ(e^{it}−1)}, t ∈ R.

Uniform distribution U(a, b), a < b real numbers
f(x) = (1/(b − a)) 1_{[a,b]}(x), x ∈ R, E(X) = (a + b)/2, V(X) = (b − a)²/12.

Exponential distribution E(λ), λ > 0
f(x) = λe^{−λx} 1_{[0,+∞)}(x), x ∈ R, E(X) = 1/λ, V(X) = 1/λ².

Normal distribution N(m, σ²), m ∈ R, σ > 0
f(x) = (1/(σ√(2π))) e^{−(1/2)((x−m)/σ)²}, x ∈ R,
E(X) = m, V(X) = σ² and ϕ_X(t) = e^{itm} e^{−t²σ²/2}, t ∈ R.
Moreover X ∼ N(m, σ²) ⇔ Z = (X − m)/σ ∼ N(0, 1).
Given α ∈ (0, 1), there is z_α > 0 s.t. P(|Z| ≤ z_α) = 1 − α ⇔ P(Z ≤ z_α) = 1 − α/2.

Chi-square distribution χ²(n), n ∈ N*
f(x) = (x^{n/2−1} e^{−x/2}) / (2^{n/2} Γ(n/2)) 1_{(0,+∞)}(x), x ∈ R,
E(X) = n, V(X) = 2n and ϕ_X(t) = (1 − 2it)^{−n/2}, ∀t ∈ R.

The mean/variance column of this table can be cross-checked numerically; see the sketch below.
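The following cross-check against scipy.stats is an added sketch; the parameter values are arbitrary.

```python
# Cross-check of the (E(X), V(X)) pairs in the table with scipy.stats,
# which returns (mean, variance) from .stats() by default.
from scipy import stats

n, p, lam, a, b, m, sigma, k = 10, 0.3, 4.0, 2.0, 5.0, 1.0, 2.0, 7
print(stats.binom.stats(n, p))                  # (np, npq) = (3.0, 2.1)
print(stats.poisson.stats(lam))                 # (lam, lam) = (4.0, 4.0)
print(stats.uniform.stats(loc=a, scale=b - a))  # ((a+b)/2, (b-a)^2/12)
print(stats.expon.stats(scale=1 / lam))         # (1/lam, 1/lam^2)
print(stats.norm.stats(loc=m, scale=sigma))     # (m, sigma^2)
print(stats.chi2.stats(k))                      # (n, 2n) = (7.0, 14.0)
```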


4 Real random vector (Rn)

1. The probability distribution of a random vector X is P_X(B) = P(X ∈ B) = P({ω ∈ Ω; X(ω) ∈ B}), B ∈ B(Rn). It is called the joint distribution of X1, . . . , Xn. Below, let A = A1 × · · · × An ⊂ Rn with Ai ∈ B(R), i = 1, . . . , n.
2. The c.d.f. of a random vector X is F_X : Rn → [0, 1] defined for all x = (x1, . . . , xn) ∈ Rn by F_X(x) = P(X1 ≤ x1, . . . , Xn ≤ xn).
3. The marginal distributions of a random vector X are the distributions P_{Xi} of the random variables Xi, i = 1, . . . , n.

Discrete case: X(Ω) = {(x_{i1}, . . . , x_{in}); ij ∈ N, x_{ij} ∈ R}

1. Distribution: P(X1 = x_{i1}, . . . , Xn = x_{in}).
2. Expectation: E(g(X)) = Σ_{(x_{i1},...,x_{in}) ∈ X(Ω)} g(x_{i1}, . . . , x_{in}) P(X1 = x_{i1}, . . . , Xn = x_{in}).
3. Marginal dist.: P(Xk = x_{ik}) = Σ_{x_{ij}, j ≠ k} P(X1 = x_{i1}, . . . , Xn = x_{in}).

Continuous case: X(Ω) box of Rn

1. Distribution: P(X ∈ B) = ∫_B f_X(x1, . . . , xn) dx1 . . . dxn.
2. p.d.f.: f_X : Rn → R+ s.t. ∫_{Rn} f_X(x1, . . . , xn) dx1 . . . dxn = 1.
3. c.d.f.: F_X(x1, . . . , xn) = ∫_{−∞}^{x1} · · · ∫_{−∞}^{xn} f_X(t1, . . . , tn) dt1 . . . dtn and f_X = ∂^n F_X / (∂x1 . . . ∂xn) a.e.
4. Expectation: E(g(X)) = ∫_{Rn} g(x1, . . . , xn) f_X(x1, . . . , xn) dx1 . . . dxn.
5. Marginal dist.: defined through its marginal p.d.f. f_{Xk}(xk) = ∫_{Rn−1} f_X(x1, . . . , xn) dx1 . . . dx_{k−1} dx_{k+1} . . . dxn.
Covariance

1. The covariance of two random variables X, Y is Cov(X, Y) = E((X − E(X))(Y − E(Y))) = E(XY) − E(X)E(Y). We have Cov(X, X) = V(X).
2. If the random variables X1, . . . , Xn are jointly distributed then V(X1 + · · · + Xn) = Σ_{i=1}^{n} V(Xi) + Σ_{i≠j} Cov(Xi, Xj), as checked numerically below.
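The variance-of-a-sum identity is easy to confirm empirically. This sketch is an addition; the dependence between X and Y is constructed arbitrarily for illustration.

```python
# Empirical sketch of V(X+Y) = V(X) + V(Y) + 2 Cov(X, Y) for correlated samples.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=200_000)
Y = 0.5 * X + rng.normal(size=200_000)  # correlated with X by construction

lhs = np.var(X + Y)
rhs = np.var(X) + np.var(Y) + 2 * np.cov(X, Y)[0, 1]
print(lhs, rhs)  # the two values agree up to sampling error (~3.25)
```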
Conditional distributions

1. Given two discrete random vectors X and Y such that P(Y = y) ≠ 0, the distribution of X given Y is
P(X = x|Y = y) = P(X = x, Y = y) / P(Y = y).
We have P(X ∈ A|Y = y) = Σ_{x∈A} P(X = x|Y = y).
2. Given two continuous random vectors X and Y such that f_Y(y) ≠ 0, the conditional p.d.f. of X given Y is
f_{X|Y=y}(x) = f_{X,Y}(x, y) / f_Y(y).
We have P(X ∈ A|Y = y) = ∫_A f_{X|Y=y}(x) dx1 . . . dxn.
3. Marginal dist.: Xn −→ X ⇔ lim FXn (t) = FX (t), for t where FX is cont, or
Independence Let X1 , . . . , Xn be random variables. n→+∞
X
P(Xk = xik ) = P(X1 = xi1 , . . . , Xn = xin )
if and only if: lim ϕXn (t) = ϕ(t), ∀t ∈ R.
xij 6=xi n→+∞
k 1. X1 , . . . , Xn are independent if, for all A1 , . . . , An ,
P(X1 ∈ A1 , . . . , Xn ∈ An ) = n
Q
i=1 P(Xi ∈ Ai ). 4. Convergence in moment of second order
Continuous case: X(Ω) box of Rn L2
Xn −→ X ⇔ lim E(|Xn − X|2 ) = 0.
Z 2. If X1 , . . . , Xn are independent, g1 (X1 ), . . . , gn (Xn ) are also n→+∞
1. Distribution: P(X ∈ B) = fX (x1 , . . . , xn )dx1 . . . dxn . independent for all measurable function gi : R → R, i = 1, . . . , n.
B
Z 3. If X1 , . . . , Xn are discrete, they are independent Important results
n
2. p.d.f.: fX : R → R+ s.t. fX (x1 , .., xn )dx1 ..dxn = 1 1. Given a continuous function f : R → R then
⇔ P(X1 = xi1 , . . . , Xn = xin ) = P(X1 = xi1 ) . . . P(Xn = xin ).
Rn a.s. a.s.
R x1 R xn Xn −→ X ⇒ f (Xn ) −→ f (X).
3. c.d.f.: FX (x1 , . . . , xn ) = −∞
··· −∞
fX (t1 , . . . , tn )dt1 . . . dtn 4. If X1 , . . . , Xn are continuous, they are independent
⇔ fX (x1 , . . . , xn ) = fX1 (x1 ) . . . fXn (xn ). 2. [Slutsky’s theorem] Given two sequences of random variables
5 Function of random variables

1. Let X1, . . . , Xk be independent with Xi ∼ B(ni, p); then X1 + · · · + Xk ∼ B(n1 + · · · + nk, p).
2. Let X1, . . . , Xn be independent with Xi ∼ P(λi); then X1 + · · · + Xn ∼ P(λ1 + · · · + λn).
3. If X1, . . . , Xn are independent with Xi ∼ N(mi, σi²), then Σ_{i=1}^{n} λi Xi ∼ N(m, σ²), where the λi are arbitrary real numbers, m = Σ_{i=1}^{n} λi mi and σ² = Σ_{i=1}^{n} λi² σi².
4. If X ∼ N(0, 1), then X² ∼ χ²(1).
5. If X1, . . . , Xk are independent with Xi ∼ χ²(ni), then X1 + · · · + Xk ∼ χ²(n1 + · · · + nk).

A simulation sketch of these stability results follows.
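This simulation is an added illustration of properties 2 and 4 above; rates, sample sizes and the seed are arbitrary.

```python
# Simulation sketch: a sum of independent Poissons is Poisson, and a squared
# standard normal is chi-square(1).
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
S = rng.poisson(1.5, 100_000) + rng.poisson(2.5, 100_000)  # should be P(4.0)
print(S.mean(), S.var())                 # both close to lam1 + lam2 = 4.0

Z2 = rng.normal(size=100_000) ** 2                 # should be chi2(1)
print(stats.kstest(Z2, "chi2", args=(1,)).pvalue)  # a large p-value is expected
```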
6 Convergence

Let {Xn}n∈N be a sequence of random variables.

1. Convergence in probability: Xn →^P X ⇔ ∀ε > 0, lim_{n→+∞} P(|Xn − X| > ε) = 0.
2. Almost sure convergence: Xn →^{a.s.} X ⇔ P({ω ∈ Ω : lim_{n→+∞} Xn(ω) ≠ X(ω)}) = 0.
3. Convergence in distribution: Xn →^L X ⇔ lim_{n→+∞} F_{Xn}(t) = F_X(t) for every t where F_X is continuous or, equivalently, lim_{n→+∞} ϕ_{Xn}(t) = ϕ_X(t), ∀t ∈ R.
4. Convergence in second-order moment: Xn →^{L²} X ⇔ lim_{n→+∞} E(|Xn − X|²) = 0.

Important results

1. Given a continuous function f : R → R, Xn →^{a.s.} X ⇒ f(Xn) →^{a.s.} f(X).
2. [Slutsky's theorem] Given two sequences of random variables {Xn}n∈N and {Yn}n∈N, if Xn →^{a.s.} c (a constant) and Yn →^L Y, then Xn Yn →^L cY.
3. [Central Limit Theorem of Laplace] Let {Xi}i=1,...,n be i.i.d. with mean m and variance σ². Then, with X̄n = (X1 + · · · + Xn)/n (see the sketch below),
Zn = (X̄n − m)/(σ/√n) →^L Z ∼ N(0, 1).
4. [Weak law of large numbers] Let {Xi}i=1,...,n be i.i.d. with finite mean and variance; then X̄n = (X1 + · · · + Xn)/n →^P E(X1).
5. [Strong law of large numbers] Let {Xi}i=1,...,n be i.i.d.; if E(|Xi|) < ∞, then X̄n →^{a.s.} E(X1).
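To illustrate the LLN and CLT concretely (an added sketch; the Exponential(1) choice, n and the number of replications are arbitrary):

```python
# Simulation sketch of the LLN and CLT for i.i.d. Exponential(1) variables,
# which have m = 1 and sigma^2 = 1.
import numpy as np

rng = np.random.default_rng(3)
n, reps = 1_000, 20_000
X = rng.exponential(1.0, size=(reps, n))

Xbar = X.mean(axis=1)
print(Xbar.mean())                     # LLN: sample means concentrate near m = 1

Z = (Xbar - 1.0) / (1.0 / np.sqrt(n))  # CLT: Z_n = (Xbar - m) / (sigma / sqrt(n))
print(Z.mean(), Z.std())               # approximately N(0, 1): ~0 and ~1
```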

Centrale Nantes / M1-ENG-CM, 2022-2023
