Week 8 Notes


4.1.3 Conditional Distributions

We already know that if $P(A) > 0$, then

$$P(B \mid A) = \frac{P(A \cap B)}{P(A)}. \qquad (1)$$

If X and Y are discrete random variables and we have the events $\{A: X = x\}$ and $\{B: Y = y\}$, then equation (1) above becomes

$$P(Y = y \mid X = x) = \frac{P(X = x, Y = y)}{P(X = x)} = \frac{f(x, y)}{f_1(x)},$$

where $f(x, y) = P(X = x, Y = y)$ is the joint probability function and $f_1(x)$ is the marginal probability function of X. We define

$$f(y \mid x) = \frac{f(x, y)}{f_1(x)}$$

and call it the conditional probability function of Y given X. Similarly, the conditional probability function of X given Y is

$$f(x \mid y) = \frac{f(x, y)}{f_2(y)}.$$

We sometimes denote $f(x \mid y)$ and $f(y \mid x)$ by $f_1(x \mid y)$ and $f_2(y \mid x)$ respectively.

If X and Y are continuous random variables, the conditional density function of Y given X is

$$f(y \mid x) = \frac{f(x, y)}{f_1(x)},$$

where $f(x, y)$ is the joint density function of X and Y and $f_1(x)$ is the marginal density function of X. Thus we can find, for example, the probability that Y lies between c and d given that $x < X < x + dx$:

$$P(c < Y < d \mid x < X < x + dx) = \int_c^d f(y \mid x)\, dy.$$
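To make the last formula concrete, here is a minimal Python sketch of what conditioning on a thin strip $x_0 < X < x_0 + \Delta x$ means in practice. The joint density $f(x, y) = x + y$ on the unit square is a hypothetical choice made only for this illustration; the empirical frequency of $c < Y < d$ within the strip should approach $\int_c^d f(y \mid x_0)\, dy$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical joint density on the unit square (it integrates to 1): f(x, y) = x + y.
def joint(x, y):
    return x + y

# Rejection sampling: propose uniformly on [0,1]^2, accept with probability f/2 (max f = 2).
n = 1_000_000
xs, ys = rng.uniform(size=n), rng.uniform(size=n)
keep = rng.uniform(size=n) < joint(xs, ys) / 2.0
xs, ys = xs[keep], ys[keep]

# Condition on the thin strip x0 < X < x0 + dx and estimate P(c < Y < d | x0 < X < x0 + dx).
x0, dx, c, d = 0.3, 0.01, 0.25, 0.75
strip = (xs > x0) & (xs < x0 + dx)
estimate = np.mean((ys[strip] > c) & (ys[strip] < d))

# Exact value: f1(x) = x + 1/2, so f(y|x) = (x + y)/(x + 1/2), and the integral of
# f(y|x0) over (c, d) is (x0*(d - c) + (d^2 - c^2)/2) / (x0 + 1/2).
exact = (x0 * (d - c) + (d**2 - c**2) / 2) / (x0 + 0.5)
print(estimate, exact)  # both close to 0.5
```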

Example 4

The joint probability function of two discrete random variables X and Y is given by

$$f(x, y) = \begin{cases} \dfrac{1}{42}(2x + y), & x = 0, 1, 2;\; y = 0, 1, 2, 3 \\[4pt] 0, & \text{otherwise} \end{cases}$$

Find

a) $f(y \mid 2)$

b) $P(Y = 1 \mid X = 2)$

Solution.

a) $f(y \mid 2) = \dfrac{f(2, y)}{f_1(2)} = \dfrac{P(X = 2, Y = y)}{P(X = 2)}$.

But

$$f_1(2) = \sum_{y=0}^{3} f(2, y) = \sum_{y=0}^{3} \frac{1}{42}(4 + y) = \frac{1}{42}(4 + 5 + 6 + 7) = \frac{22}{42} = \frac{11}{21}$$

and

$$f(2, y) = P(X = 2, Y = y) = \frac{1}{42}(4 + y).$$

Hence

$$f(y \mid 2) = \frac{\frac{1}{42}(4 + y)}{\frac{11}{21}} = \frac{1}{22}(4 + y), \qquad y = 0, 1, 2, 3.$$

b) $P(Y = 1 \mid X = 2) = \frac{1}{22}(4 + 1) = \frac{5}{22}$.
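As a quick check of the arithmetic, a short Python sketch tabulates the joint probability function with exact fractions, recomputes $f_1(2)$, and confirms both answers:

```python
from fractions import Fraction

# Joint probability function of Example 4: f(x, y) = (2x + y)/42, x = 0,1,2; y = 0,1,2,3.
f = {(x, y): Fraction(2 * x + y, 42) for x in range(3) for y in range(4)}

# Marginal of X at x = 2: sum over all y.
f1_at_2 = sum(f[(2, y)] for y in range(4))
print(f1_at_2)                 # 11/21

# Conditional pmf f(y | 2) = f(2, y) / f1(2); Fraction keeps everything exact
# (note 4/22 and 6/22 print in lowest terms as 2/11 and 3/11).
cond = {y: f[(2, y)] / f1_at_2 for y in range(4)}
print(cond)                    # {0: 2/11, 1: 5/22, 2: 3/11, 3: 7/22}, i.e. (4 + y)/22
print(cond[1])                 # 5/22 = P(Y = 1 | X = 2)
print(sum(cond.values()))      # 1, as a conditional pmf must
```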

Example 5

If X and Y have a joint density function

3
  xy 0<x  1; 0<y  1
f ( x, y )   4

0 Otherwise

Find

a) $f(y \mid x)$

b) $P\big(Y > \tfrac{1}{2} \,\big|\, \tfrac{1}{2} < X < \tfrac{1}{2} + dx\big)$

Solution.

a) For $0 < x < 1$,

$$f_1(x) = \int_0^1 f(x, y)\, dy = \int_0^1 \left(\frac{3}{4} + xy\right) dy = \frac{3}{4} + \frac{x}{2}.$$

Hence

$$f(y \mid x) = \frac{f(x, y)}{f_1(x)} = \begin{cases} \dfrac{\tfrac{3}{4} + xy}{\tfrac{3}{4} + \tfrac{x}{2}} = \dfrac{3 + 4xy}{3 + 2x}, & 0 < y < 1 \\[6pt] 0, & \text{other } y \end{cases}$$

For other values of x, $f(y \mid x)$ is not defined.

b) $P\big(Y > \tfrac{1}{2} \,\big|\, \tfrac{1}{2} < X < \tfrac{1}{2} + dx\big) = \displaystyle\int_{1/2}^{1} f\big(y \mid \tfrac{1}{2}\big)\, dy = \int_{1/2}^{1} \frac{3 + 2y}{4}\, dy = \frac{9}{16}$.
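The same calculation can be reproduced symbolically. A short sketch using the sympy library (one choice of CAS; any would do) recovers both the conditional density from part a) and the value 9/16 from part b):

```python
import sympy as sp

x, y = sp.symbols('x y', positive=True)

# Joint density of Example 5 on the unit square: f(x, y) = 3/4 + x*y.
f = sp.Rational(3, 4) + x * y

# Marginal of X, then the conditional density f(y|x) = f(x, y) / f1(x).
f1 = sp.integrate(f, (y, 0, 1))                 # 3/4 + x/2
f_cond = sp.simplify(f / f1)                    # (4*x*y + 3)/(2*x + 3)
print(f_cond)

# Part b): integrate f(y | 1/2) over (1/2, 1).
p = sp.integrate(f_cond.subs(x, sp.Rational(1, 2)), (y, sp.Rational(1, 2), 1))
print(p)                                        # 9/16
```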

4.1.4 Variance for Joint Distributions (Covariance)
Let X and Y be two continuous random variables having joint density function f(x, y). Then the means, or expectations, of X and Y are given by

$$\mu_X = E(X) = \int_{-\infty}^{\infty}\!\int_{-\infty}^{\infty} x\, f(x, y)\, dx\, dy, \qquad \mu_Y = E(Y) = \int_{-\infty}^{\infty}\!\int_{-\infty}^{\infty} y\, f(x, y)\, dx\, dy,$$

and the variances are given by

$$\sigma_X^2 = E\big[(X - \mu_X)^2\big] = \int_{-\infty}^{\infty}\!\int_{-\infty}^{\infty} (x - \mu_X)^2\, f(x, y)\, dx\, dy,$$

$$\sigma_Y^2 = E\big[(Y - \mu_Y)^2\big] = \int_{-\infty}^{\infty}\!\int_{-\infty}^{\infty} (y - \mu_Y)^2\, f(x, y)\, dx\, dy.$$

Another parameter that arises in the case of two variables X and Y is the covariance, defined by

$$\sigma_{XY} = \mathrm{Cov}(X, Y) = E\big[(X - \mu_X)(Y - \mu_Y)\big].$$

In terms of the joint density function f(x, y), we have

$$\sigma_{XY} = \int_{-\infty}^{\infty}\!\int_{-\infty}^{\infty} (x - \mu_X)(y - \mu_Y)\, f(x, y)\, dx\, dy.$$

If X and Y are discrete, we write instead

$$\mu_X = \sum_x \sum_y x\, f(x, y), \qquad \mu_Y = \sum_x \sum_y y\, f(x, y),$$

$$\sigma_{XY} = \sum_x \sum_y (x - \mu_X)(y - \mu_Y)\, f(x, y),$$

where the sums are taken over the discrete values of X and Y.

The following theorems about covariance are very important.

1. $\sigma_{XY} = E(XY) - E(X)E(Y) = E(XY) - \mu_X \mu_Y$.

2. If X and Y are independent random variables, then $\sigma_{XY} = \mathrm{Cov}(X, Y) = 0$. The converse is not necessarily true.

3. $\mathrm{Var}(X \pm Y) = \mathrm{Var}(X) + \mathrm{Var}(Y) \pm 2\,\mathrm{Cov}(X, Y)$, i.e. $\sigma_{X \pm Y}^2 = \sigma_X^2 + \sigma_Y^2 \pm 2\sigma_{XY}$. If X and Y are independent, this reduces to $\mathrm{Var}(X \pm Y) = \mathrm{Var}(X) + \mathrm{Var}(Y)$, i.e. $\sigma_{X \pm Y}^2 = \sigma_X^2 + \sigma_Y^2$. In other words, the variance of a sum of independent variables equals the sum of their variances.

4. $|\sigma_{XY}| \le \sigma_X \sigma_Y$.
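Theorems 1, 3, and 4 are easy to confirm on a concrete distribution; the following Python sketch does so in exact arithmetic for the joint probability function of Example 4:

```python
from fractions import Fraction

# Joint pmf of Example 4: f(x, y) = (2x + y)/42.
f = {(x, y): Fraction(2 * x + y, 42) for x in range(3) for y in range(4)}

mu_x = sum(x * p for (x, y), p in f.items())          # E(X)  = 29/21
mu_y = sum(y * p for (x, y), p in f.items())          # E(Y)  = 13/7
e_xy = sum(x * y * p for (x, y), p in f.items())      # E(XY) = 17/7

cov = e_xy - mu_x * mu_y                              # theorem 1: -20/147
var_x = sum((x - mu_x) ** 2 * p for (x, y), p in f.items())
var_y = sum((y - mu_y) ** 2 * p for (x, y), p in f.items())
var_sum = sum((x + y - mu_x - mu_y) ** 2 * p for (x, y), p in f.items())

print(cov)                                 # -20/147
print(var_sum == var_x + var_y + 2 * cov)  # theorem 3 (with the + sign): True
print(cov ** 2 <= var_x * var_y)           # theorem 4, squared to stay exact: True
```

The nonzero covariance also illustrates the contrapositive of theorem 2: since $\sigma_{XY} = -\tfrac{20}{147} \ne 0$, X and Y cannot be independent, as Example 1 of the next section confirms directly.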

4.2 INDEPENDENCE OF RANDOM VARIABLES
Suppose that X and Y are discrete random variables. If the events X = x and Y = y are independent events for all x and y, then we say that X and Y are independent random variables. In such a case P(X = x, Y = y) = P(X = x) P(Y = y) or, equivalently,

f(x, y)= f1(x) f2(y). Conversely, if for all x and y the joint probability function f(x, y)

can be expressed as a product of a function of x alone and a function of y alone

(which are then the marginal probability functions of X and Y), then X and Y are

said to be independent. If X and Y are continuous random variables, we say that

they are independent random variables if the events X ≤ x and Y ≤ y are

independent events for all x and y. In such a case we write

P(X ≤ x, Y ≤ y) = P(X ≤ x) P(Y ≤ y) or equivalently, F(x, y)= F1(x) F2(y) where F1(x)

and F2(y) are the marginal distribution functions of X and Y respectively.

Conversely, X and Y are independent random variables if for all x and y their joint distribution function F(x, y) can be expressed as a product of a function of x alone and a function of y alone (which are then the marginal distribution functions F1(x) and F2(y) of X and Y respectively).

Example 1

Show that the random variables X and Y whose joint probability function is

$$f(x, y) = \begin{cases} \dfrac{1}{42}(2x + y), & x = 0, 1, 2;\; y = 0, 1, 2, 3 \\[4pt] 0, & \text{otherwise} \end{cases}$$

are not independent (i.e., they are dependent).

Solution

$$f_1(x) = \sum_{y=0}^{3} f(x, y) = \begin{cases} f_1(0) = \dfrac{1}{7}, & x = 0 \\[4pt] f_1(1) = \dfrac{1}{3}, & x = 1 \\[4pt] f_1(2) = \dfrac{11}{21}, & x = 2 \end{cases}$$

Similarly,

$$f_2(y) = \sum_{x=0}^{2} f(x, y) = \begin{cases} f_2(0) = \dfrac{1}{7}, & y = 0 \\[4pt] f_2(1) = \dfrac{3}{14}, & y = 1 \\[4pt] f_2(2) = \dfrac{2}{7}, & y = 2 \\[4pt] f_2(3) = \dfrac{5}{14}, & y = 3 \end{cases}$$

Now consider

$$P(X = 0)\, P(Y = 0) = \frac{1}{7} \times \frac{1}{7} = \frac{1}{49},$$

but

$$P(X = 0, Y = 0) = \frac{1}{42}(0 + 0) = 0.$$

Hence $P(X = 0, Y = 0) \ne P(X = 0)\, P(Y = 0)$.

Or consider

$$P(X = 2)\, P(Y = 1) = \frac{11}{21} \times \frac{3}{14} = \frac{11}{98},$$

but

$$P(X = 2, Y = 1) = \frac{1}{42}(2 \cdot 2 + 1) = \frac{5}{42} \ne \frac{11}{98}.$$

And so on!

Thus $\frac{1}{42}(2x + y)$ cannot be expressed as a function of x alone times a function of y alone. Hence X and Y are dependent.
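Because the factorization test is mechanical, it is easy to script. A minimal Python sketch in exact arithmetic checks $f(x, y) = f_1(x)\, f_2(y)$ at every point of the support; a single failing point already proves dependence:

```python
from fractions import Fraction

# Joint pmf of Example 1: f(x, y) = (2x + y)/42.
f = {(x, y): Fraction(2 * x + y, 42) for x in range(3) for y in range(4)}

# Marginal pmfs by summing out the other variable.
f1 = {x: sum(f[(x, y)] for y in range(4)) for x in range(3)}   # 1/7, 1/3, 11/21
f2 = {y: sum(f[(x, y)] for x in range(3)) for y in range(4)}   # 1/7, 3/14, 2/7, 5/14

# Independence would force f(x, y) == f1(x) * f2(y) at EVERY (x, y).
failures = [(x, y) for (x, y) in f if f[(x, y)] != f1[x] * f2[y]]
print(failures)  # non-empty, e.g. (0, 0): f = 0 but f1*f2 = 1/49 -> dependent
```

Interestingly, the product does happen to match at the points with $x = 1$; independence demands equality at every point, which is why one mismatch such as $(0, 0)$ settles the question.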
