
MSA 8190 - Fall 2023

Statistical Foundation
Joint Probability Distributions

Mohammad Javad Feizollahi

September 25, 2023
Lecture's Objectives

1. Understand the concept of joint, marginal, and conditional probability distributions
2. Compute probability, expected value, and variance for joint, marginal, and conditional probability distributions
3. Compute and interpret covariance and correlation
4. Compute moments and the moment generating function for a random variable
5. Perform the previous objectives in R


Introduction

In the development of a new receiver for the transmission of digital information, each received bit is rated as acceptable, suspect, or unacceptable, depending on the quality of the received signal, with probabilities 0.9, 0.08, and 0.02, respectively. Assume that the ratings of each bit are independent.

In the first four bits transmitted, let
X denote the number of acceptable bits, and
Y denote the number of suspect bits.
Then,
the distribution of X is Binomial(n = 4, p = 0.9), and
the distribution of Y is Binomial(n = 4, p = 0.08).
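As a quick sketch in R (objective 5), these two marginal distributions can be tabulated with dbinom; the variable names are ours:

```r
# Marginal pmfs of X (acceptable bits) and Y (suspect bits) among n = 4 bits
n <- 4
px <- dbinom(0:n, size = n, prob = 0.9)   # P(X = 0), ..., P(X = 4)
py <- dbinom(0:n, size = n, prob = 0.08)  # P(Y = 0), ..., P(Y = 4)
round(px, 4)  # 0.0001 0.0036 0.0486 0.2916 0.6561
round(py, 4)  # 0.7164 0.2492 0.0325 0.0019 0.0000
```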


Introduction (cont.)

The joint pmf $f_{XY}(x, y)$ of this example (Fig. 5.1, Montgomery, Applied Statistics, 3e):

y = 4 | 4.10 × 10⁻⁵
y = 3 | 4.10 × 10⁻⁵   1.84 × 10⁻³
y = 2 | 1.54 × 10⁻⁵   1.38 × 10⁻³   3.11 × 10⁻²
y = 1 | 2.56 × 10⁻⁶   3.46 × 10⁻⁴   1.56 × 10⁻²   0.2333
y = 0 | 1.6 × 10⁻⁷    2.88 × 10⁻⁵   1.94 × 10⁻³   5.83 × 10⁻²   0.6561
        x = 0         x = 1         x = 2         x = 3         x = 4

For example, for the sequence "aasu" (two acceptable bits, then a suspect and an unacceptable one),

$P(aasu) = 0.9 \times 0.9 \times 0.08 \times 0.02 \approx 0.0013$

There are $\frac{4!}{2!\,1!\,1!} = 12$ possible sequences of two a's, one s, and one u, so

$f_{XY}(2, 1) = P(X = 2, Y = 1) = 12 \times 0.0013 \approx 0.0156$

In general, for $x + y \le 4$,

$f_{XY}(x, y) = \frac{4!}{x!\, y!\, (4 - x - y)!} (0.9)^x (0.08)^y (0.02)^{4 - x - y}$
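A minimal R sketch that rebuilds this table from the multinomial formula (the object names are ours):

```r
# Joint pmf f_XY(x, y): multinomial probabilities over
# (acceptable, suspect, unacceptable) counts in 4 bits
joint <- outer(0:4, 0:4, Vectorize(function(x, y) {
  if (x + y > 4) return(0)                      # impossible combinations
  dmultinom(c(x, y, 4 - x - y), prob = c(0.9, 0.08, 0.02))
}))
dimnames(joint) <- list(x = 0:4, y = 0:4)
round(joint["2", "1"], 4)  # 0.0156 = f_XY(2, 1)
sum(joint)                 # 1, as required of a pmf
```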
Joint Probability Mass Functions

The joint probability mass function for the discrete RVs X, Y, denoted by $f_{XY}(x, y)$, satisfies the following properties:

1. $0 \le f_{XY}(x, y) \le 1$
2. $\sum_x \sum_y f_{XY}(x, y) = 1$
3. $f_{XY}(x, y) = P(X = x, Y = y)$


Marginal Probability Mass Functions

If X and Y are discrete random variables with joint probability mass function $f_{XY}(x, y)$, then the marginal probability mass functions of X and Y are

$f_X(x) = P(X = x) = \sum_{R_x} f_{XY}(x, y)$

$f_Y(y) = P(Y = y) = \sum_{R_y} f_{XY}(x, y)$

where $R_x$ denotes the set of all points in the range of (X, Y) for which X = x, and $R_y$ denotes the set of all points in the range of (X, Y) for which Y = y.


Marginal Probability Mass Functions (cont.)

Marginals of the 4-bit example (Fig. 5.2, Montgomery, Applied Statistics, 3e):

y        0         1         2         3         4
f_Y(y)   0.71637   0.24925   0.03250   0.00188   0.00004

x        0         1         2         3         4
f_X(x)   0.0001    0.0036    0.0486    0.2916    0.6561
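Continuing the R sketch, the marginals are just row and column sums of the joint table:

```r
# Marginal pmfs as row/column sums of the joint table (rows: x, columns: y)
fx <- rowSums(joint)   # f_X(x): 0.0001 0.0036 0.0486 0.2916 0.6561
fy <- colSums(joint)   # f_Y(y): 0.7164 0.2492 0.0325 0.0019 0.0000
```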
Mean and Variance for Joint Distributions

$E(X) = \mu_X = \sum_x x f_X(x) = \sum_x x \left[ \sum_{R_x} f_{XY}(x, y) \right] = \sum_x \sum_{R_x} x f_{XY}(x, y) = \sum_R x f_{XY}(x, y)$

$\mathrm{Var}(X) = \sigma_X^2 = \sum_x (x - \mu_X)^2 f_X(x) = \sum_x (x - \mu_X)^2 \left[ \sum_{R_x} f_{XY}(x, y) \right] = \sum_x \sum_{R_x} (x - \mu_X)^2 f_{XY}(x, y) = \sum_R (x - \mu_X)^2 f_{XY}(x, y)$
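In R, these reduce to weighted sums over the marginal (a sketch reusing the objects defined above):

```r
# E(X) and Var(X) from the marginal of the joint table
xs    <- 0:4
mu_x  <- sum(xs * fx)              # 3.6  (= 4 * 0.9, since X ~ Binomial(4, 0.9))
var_x <- sum((xs - mu_x)^2 * fx)   # 0.36 (= 4 * 0.9 * 0.1)
```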


Conditional Probability Distributions

Recall the joint pmf $f_{XY}(x, y)$ of the 4-bit example (Fig. 5.1, Montgomery, Applied Statistics, 3e). For instance, given that X = 3 bits are acceptable,

$P(Y = 0 \mid X = 3) = \frac{f_{XY}(3, 0)}{f_X(3)} = \frac{0.0583}{0.2916} \approx 0.200$

$P(Y = 1 \mid X = 3) = \frac{f_{XY}(3, 1)}{f_X(3)} = \frac{0.2333}{0.2916} \approx 0.800$


Conditional PMF of Y Given X = x

Conditional pmfs $f_{Y|x}(y)$, one column per value of x (Fig. 5.3, Montgomery, Applied Statistics, 3e):

y = 4 | 0.410
y = 3 | 0.410    0.511
y = 2 | 0.154    0.383    0.640
y = 1 | 0.0256   0.096    0.320    0.800
y = 0 | 0.0016   0.008    0.040    0.200    1.0
        x = 0    x = 1    x = 2    x = 3    x = 4
Conditional Probability Mass Functions

Given discrete random variables X and Y with joint probability mass function $f_{XY}(x, y)$, the conditional probability mass function of Y given X = x is

$f_{Y|x}(y) = f_{XY}(x, y) / f_X(x)$ for $f_X(x) > 0$.

Note that

1. $0 \le f_{Y|x}(y) \le 1$
2. $\sum_{R_x} f_{Y|x}(y) = 1$
3. $f_{Y|x}(y) = P(Y = y \mid X = x)$
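A one-line R sketch: dividing each row of the joint table by $f_X(x)$ yields all the conditional pmfs at once (the table of Fig. 5.3):

```r
# Conditional pmfs f_{Y|x}(y): each row of the joint table divided by f_X(x)
cond_y_given_x <- sweep(joint, 1, fx, "/")   # rows: x, columns: y
round(cond_y_given_x["3", ], 3)              # 0.200 0.800 0.000 0.000 0.000
```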


Conditional Mean and Variance

$E(Y|x) = \mu_{Y|x} = \sum_{R_x} y f_{Y|x}(y)$

$\mathrm{Var}(Y|x) = \sigma_{Y|x}^2 = \sum_{R_x} (y - \mu_{Y|x})^2 f_{Y|x}(y) = \sum_{R_x} y^2 f_{Y|x}(y) - \mu_{Y|x}^2$

In the example,

$E(Y \mid X = 2) = 0(0.040) + 1(0.320) + 2(0.640) = 1.6$

$\mathrm{Var}(Y \mid X = 2) = 0^2(0.040) + 1^2(0.320) + 2^2(0.640) - 1.6^2 = 0.32$
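The same computation in R, using the conditional table from the previous sketch:

```r
# Conditional mean and variance of Y given X = 2
ys     <- 0:4
p_y_x2 <- cond_y_given_x["2", ]          # 0.040 0.320 0.640 0 0
mu_y2  <- sum(ys * p_y_x2)               # 1.6
var_y2 <- sum(ys^2 * p_y_x2) - mu_y2^2   # 0.32
```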


Independence of Discrete Random Variables

For discrete random variables X and Y, if any one of the following properties is true, the others are also true, and X and Y are independent:

1. $f_{XY}(x, y) = f_X(x) f_Y(y)$ for all $(x, y) \in R$
2. $f_{Y|x}(y) = f_Y(y)$ for all x with $f_X(x) > 0$
3. $f_{X|y}(x) = f_X(x)$ for all y with $f_Y(y) > 0$
4. $P(X \in A, Y \in B) = P(X \in A) P(Y \in B)$ for any sets A and B in the range of X and Y, respectively.
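Property 1 gives a quick numerical check in R; for the 4-bit example it fails, so X and Y there are not independent:

```r
# Independence check: does f_XY(x, y) = f_X(x) * f_Y(y) everywhere?
max(abs(joint - outer(fx, fy)))   # about 0.19, so X and Y are NOT independent
```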


Multiple Discrete Random Variables

We can easily extend the results to the cases with multiple discrete
random variables.



Joint Probability Density Functions

A joint pdf for the continuous RVs $X_1, X_2, \cdots, X_n$, denoted by $f_{X_1 X_2 \cdots X_n}(x_1, x_2, \cdots, x_n)$, satisfies the following properties:

1. $f_{X_1 X_2 \cdots X_n}(x_1, x_2, \cdots, x_n) \ge 0$
2. $\int_{-\infty}^{\infty} \int_{-\infty}^{\infty} \cdots \int_{-\infty}^{\infty} f_{X_1 X_2 \cdots X_n}(x_1, x_2, \cdots, x_n) \, dx_1 \, dx_2 \cdots dx_n = 1$
3. For any region B of n-dimensional space,
   $P[(X_1, X_2, \cdots, X_n) \in B] = \int \cdots \int_B f_{X_1 X_2 \cdots X_n}(x_1, x_2, \cdots, x_n) \, dx_1 \, dx_2 \cdots dx_n$


Marginal Probability Density Functions

If the joint pdf for the continuous RVs $X_1, X_2, \cdots, X_n$ is $f_{X_1 X_2 \cdots X_n}(x_1, x_2, \cdots, x_n)$, the marginal pdf of $X_i$ is

$f_{X_i}(x_i) = \int \cdots \int_{R_{x_i}} f_{X_1 X_2 \cdots X_n}(x_1, x_2, \cdots, x_n) \, dx_1 \cdots dx_{i-1} \, dx_{i+1} \cdots dx_n$

where $R_{x_i}$ denotes the set of all points in the range of $X_1, X_2, \cdots, X_n$ for which $X_i = x_i$.

Continuous RVs $X_1, X_2, \cdots, X_n$ are independent if and only if

$f_{X_1 X_2 \cdots X_n}(x_1, x_2, \cdots, x_n) = f_{X_1}(x_1) f_{X_2}(x_2) \cdots f_{X_n}(x_n)$
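For a concrete continuous case, here is a small R sketch that recovers a marginal pdf by numerical integration; the joint density f(x, y) = x + y on the unit square is our own illustrative choice, not from the slides:

```r
# Marginal pdf by integrating out y from an assumed joint density
# f(x, y) = x + y on [0, 1]^2, whose marginal is f_X(x) = x + 1/2
f_joint <- function(x, y) x + y
f_x <- function(x) integrate(function(y) f_joint(x, y), 0, 1)$value
f_x(0.3)   # 0.8, matching the closed form 0.3 + 0.5
```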


Covariance and Correlation

Consider two RVs X and Y. For any function h(X, Y),

$E[h(X, Y)] = \begin{cases} \sum\sum_R \, h(x, y) f_{XY}(x, y) & X, Y \text{ discrete} \\ \iint_R h(x, y) f_{XY}(x, y) \, dx \, dy & X, Y \text{ continuous} \end{cases}$

The covariance between the RVs X and Y is

$\mathrm{Cov}(X, Y) = \sigma_{XY} = E[(X - \mu_X)(Y - \mu_Y)] = E(XY) - \mu_X \mu_Y$

The correlation between RVs X and Y is

$\rho_{XY} = \frac{\mathrm{Cov}(X, Y)}{\sqrt{\mathrm{Var}(X)\,\mathrm{Var}(Y)}} = \frac{\sigma_{XY}}{\sigma_X \sigma_Y}$


Example

Consider two RVs X and Y with the joint pmf shown in Fig. 5.12 (Montgomery, Applied Statistics, 3e): five mass points on the grid x, y ∈ {1, 2, 3}, with probabilities 0.1, 0.2, 0.2, 0.2, and 0.3.

Compute the marginal distribution functions $f_X(x)$ and $f_Y(y)$, expectations $\mu_X$ and $\mu_Y$, variances $\sigma_X^2$ and $\sigma_Y^2$, covariance $\sigma_{XY}$, and correlation $\rho_{XY}$. A sketch of the computation in R follows.
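Since the figure's point locations did not survive extraction, the coordinates below are assumptions for illustration; only the probabilities come from the slide. The computation itself is generic for any joint pmf given as a list of points:

```r
# Marginals, means, variances, covariance, and correlation from a joint pmf
# (x, y locations are assumed for illustration; probabilities are from Fig. 5.12)
pts <- data.frame(x = c(1, 2, 2, 3, 3),
                  y = c(1, 1, 2, 2, 3),
                  p = c(0.1, 0.2, 0.2, 0.2, 0.3))
marg_x <- tapply(pts$p, pts$x, sum)   # marginal f_X(x)
marg_y <- tapply(pts$p, pts$y, sum)   # marginal f_Y(y)
mu_x <- sum(pts$x * pts$p);  mu_y <- sum(pts$y * pts$p)
var_x <- sum((pts$x - mu_x)^2 * pts$p)
var_y <- sum((pts$y - mu_y)^2 * pts$p)
cov_xy <- sum((pts$x - mu_x) * (pts$y - mu_y) * pts$p)
rho_xy <- cov_xy / sqrt(var_x * var_y)
```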
Positive, Negative and Zero Covariance

[Figure: four scatter patterns of (x, y) points, illustrating (a) positive covariance, (b) zero covariance, (c) negative covariance, and (d) zero covariance with all points of equal probability.]


Facts about Covariance and Correlation

For any two RVs X and Y, $-1 \le \rho_{XY} \le +1$.
If Cov(X, Y) is positive, negative, or zero, then $\rho_{XY}$ is positive, negative, or zero, respectively.
If $\rho_{XY}$ equals +1 or −1, the points with positive probability fall exactly along a straight line.
Two RVs with nonzero correlation are said to be correlated.
If X and Y are two independent RVs, then $\sigma_{XY} = \rho_{XY} = 0$. However, if $\sigma_{XY} = \rho_{XY} = 0$, we cannot immediately conclude that X and Y are independent.
If X and Y are jointly normal RVs and $\sigma_{XY} = \rho_{XY} = 0$, then X and Y are independent.


Linear Combination of Random Variables

Given RVs $X_1, X_2, \cdots, X_n$ and constants $c_1, c_2, \cdots, c_n \in \mathbb{R}$,

$Y = c_1 X_1 + c_2 X_2 + \cdots + c_n X_n$

is a linear combination of $X_1, X_2, \cdots, X_n$.

$E(Y) = c_1 E(X_1) + c_2 E(X_2) + \cdots + c_n E(X_n)$

$\mathrm{Var}(Y) = c_1^2 \mathrm{Var}(X_1) + c_2^2 \mathrm{Var}(X_2) + \cdots + c_n^2 \mathrm{Var}(X_n) + 2 \sum_{i=1}^{n} \sum_{j=i+1}^{n} c_i c_j \mathrm{Cov}(X_i, X_j)$

If $X_1, X_2, \cdots, X_n$ are independent,

$\mathrm{Var}(Y) = c_1^2 \mathrm{Var}(X_1) + c_2^2 \mathrm{Var}(X_2) + \cdots + c_n^2 \mathrm{Var}(X_n)$
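A Monte Carlo sanity check of these formulas in R; the coefficients and distributions are our own illustrative assumptions:

```r
# Check E(Y) and Var(Y) for Y = 2*X1 - 3*X2 with independent X1, X2
set.seed(1)
x1 <- rnorm(1e6, mean = 1, sd = 2)   # Var(X1) = 4
x2 <- rnorm(1e6, mean = 5, sd = 1)   # Var(X2) = 1
y  <- 2 * x1 - 3 * x2
mean(y)   # close to 2(1) - 3(5) = -13
var(y)    # close to 2^2 * 4 + (-3)^2 * 1 = 25
```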
Mean and Variance of a Sample Average

Consider an independent and identically distributed (iid) sample of n RVs $X_1, X_2, \cdots, X_n$.
Suppose $E(X_i) = \mu$ and $\mathrm{Var}(X_i) = \sigma^2$ for all $i = 1, \cdots, n$.
$\bar{X} = \frac{1}{n} \sum_{i=1}^{n} X_i$ is called the sample average.
Then, $\bar{X}$ is itself a RV with

$E(\bar{X}) = \mu$ and $\mathrm{Var}(\bar{X}) = \frac{\sigma^2}{n}$
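These follow from the linear-combination formulas with $c_i = 1/n$; a quick simulated check in R (the sample size and distribution are our assumptions):

```r
# Simulated check that Var(X_bar) = sigma^2 / n (here n = 25, sigma = 2)
set.seed(2)
xbars <- replicate(1e5, mean(rnorm(25, mean = 0, sd = 2)))
mean(xbars)   # close to 0 (= mu)
var(xbars)    # close to 4 / 25 = 0.16
```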


Reproductive Property of the Normal Distribution

Suppose $X_1, X_2, \cdots, X_n$ are independent, normal RVs with $E(X_i) = \mu_i$ and $\mathrm{Var}(X_i) = \sigma_i^2$, for $i = 1, \cdots, n$.
Then, $Y = c_1 X_1 + c_2 X_2 + \cdots + c_n X_n$ is a normal random variable with

$E(Y) = c_1 \mu_1 + c_2 \mu_2 + \cdots + c_n \mu_n$

and

$\mathrm{Var}(Y) = c_1^2 \sigma_1^2 + c_2^2 \sigma_2^2 + \cdots + c_n^2 \sigma_n^2$


Moments

The r-th moment about the origin of the RV X is

$\mu_r' = E(X^r) = \begin{cases} \sum_x x^r f(x) & X \text{ discrete} \\ \int_{-\infty}^{\infty} x^r f(x) \, dx & X \text{ continuous} \end{cases}$

The moment generating function of the RV X is the expected value of $e^{tX}$ and is denoted by $M_X(t)$. That is,

$M_X(t) = E(e^{tX}) = \begin{cases} \sum_x e^{tx} f(x) & X \text{ discrete} \\ \int_{-\infty}^{\infty} e^{tx} f(x) \, dx & X \text{ continuous} \end{cases}$

$\mu_r' = \left. \frac{d^r M_X(t)}{dt^r} \right|_{t = 0}$
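A closing R sketch tying the mgf to moments by numerical differentiation; the Exponential(rate = 2) distribution and the step size are our illustrative choices:

```r
# Moments from the mgf of X ~ Exponential(rate = 2): M(t) = 2 / (2 - t), t < 2
M  <- function(t) 2 / (2 - t)
h  <- 1e-4
m1 <- (M(h) - M(-h)) / (2 * h)          # first derivative at 0:  E(X)   = 1/2
m2 <- (M(h) - 2 * M(0) + M(-h)) / h^2   # second derivative at 0: E(X^2) = 1/2
m2 - m1^2                               # Var(X) = 1/4
```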
