Probability (Merged)

This document discusses joint probability distributions and related concepts. It defines: (1) Joint probability as the probability of two discrete random variables X and Y taking on specific values, represented by p(x,y). (2) Marginal distributions as the probability mass functions of each variable individually, obtained by summing the joint probabilities over the other variable. (3) The joint probability distribution as the set of all joint probabilities p(xi,yj). (4) Independent variables having a joint distribution equal to the product of their marginal distributions. (5) Expectations, variances, and covariance for discrete random variables X and Y based on their joint distribution.


Joint Probability

Let X and Y be two discrete random variables, and let 𝑝(𝑥, 𝑦) be a function such that 𝑝(𝑥, 𝑦) = 𝑃(𝑋 = 𝑥, 𝑌 = 𝑦). Then 𝑝(𝑥, 𝑦) is called the joint probability function of X and Y if the following conditions are satisfied:

(i) 𝑝(𝑥, 𝑦) ≥ 0

(ii) ∑ᵢ₌₁ⁿ ∑ⱼ₌₁ᵐ 𝑝(𝑥ᵢ, 𝑦ⱼ) = 1

Marginal distribution of X

In the bivariate probability distribution, if the probability mass function of only X is taken then it is
called Marginal distribution of X, denoted by 𝑃𝑖 or 𝑃(𝑥𝑖 ) or 𝑃(𝑥).
𝑃(𝑥ᵢ) = 𝑃ᵢ = ∑ⱼ₌₁ᵐ 𝑃(𝑥ᵢ, 𝑦ⱼ)

Marginal distribution of Y

In the bivariate probability distribution, if the probability mass function of only Y is taken then it is
called Marginal distribution of Y, denoted by 𝑃𝑗 or 𝑃(𝑦𝑗 ) or 𝑃(𝑦).
𝑃(𝑦ⱼ) = 𝑃ⱼ = ∑ᵢ₌₁ⁿ 𝑃(𝑥ᵢ, 𝑦ⱼ)

Joint probability distribution of X and Y

The set of values of the 𝑃(𝑥𝑖 , 𝑦𝑗 ) = 𝑃𝑖𝑗 for 𝑖 = 1,2, … 𝑛, 𝑗 = 1,2, … 𝑚 is called the joint probability
distribution of X and Y

X \ Y    𝑦₁           𝑦₂           …   𝑦ⱼ           …   𝑦ₘ           𝑃(𝑥ᵢ) = 𝑃ᵢ
𝑥₁       𝑝(𝑥₁, 𝑦₁)   𝑝(𝑥₁, 𝑦₂)   …   𝑝(𝑥₁, 𝑦ⱼ)   …   𝑝(𝑥₁, 𝑦ₘ)   𝑃₁
𝑥₂       𝑝(𝑥₂, 𝑦₁)   𝑝(𝑥₂, 𝑦₂)   …   𝑝(𝑥₂, 𝑦ⱼ)   …   𝑝(𝑥₂, 𝑦ₘ)   𝑃₂
⋮         ⋮            ⋮                ⋮                ⋮            ⋮
𝑥ᵢ       𝑝(𝑥ᵢ, 𝑦₁)   𝑝(𝑥ᵢ, 𝑦₂)   …   𝑝(𝑥ᵢ, 𝑦ⱼ)   …   𝑝(𝑥ᵢ, 𝑦ₘ)   𝑃ᵢ
⋮         ⋮            ⋮                ⋮                ⋮            ⋮
𝑥ₙ       𝑝(𝑥ₙ, 𝑦₁)   𝑝(𝑥ₙ, 𝑦₂)   …   𝑝(𝑥ₙ, 𝑦ⱼ)   …   𝑝(𝑥ₙ, 𝑦ₘ)   𝑃ₙ
𝑃(𝑦ⱼ) = 𝑄ⱼ   𝑄₁     𝑄₂           …   𝑄ⱼ           …   𝑄ₘ           1
Independent Random variables

Two random variables X and Y are said to be independent if their joint probability mass function equals the product of their marginal distributions:

𝑃(𝑥𝑖 , 𝑦𝑗 ) = 𝑃(𝑥𝑖 )𝑃(𝑦𝑗 )

OR

𝑃(𝑋 = 𝑥𝑖 , 𝑌 = 𝑦𝑗 ) = 𝑃(𝑋 = 𝑥𝑖 )𝑃(𝑌 = 𝑦𝑗 )

In that case it also follows that

𝐸(𝑋𝑌) = 𝐸(𝑋)𝐸(𝑌)

(independence implies this product rule; the converse need not hold in general).

Expectation (mean), Variance and Covariance

If X and Y are two discrete random variables having the joint probability 𝑃(𝑥, 𝑦) then the
expectations of X and Y are defined as follows

𝜇𝑋 = 𝐸(𝑋) = ∑ᵢ₌₁ⁿ 𝑥ᵢ 𝑃(𝑥ᵢ) = ∑𝑥𝑝(𝑥)

𝜇𝑌 = 𝐸(𝑌) = ∑ⱼ₌₁ᵐ 𝑦ⱼ 𝑃(𝑦ⱼ) = ∑𝑦𝑝(𝑦)

𝐸(𝑋𝑌) = ∑∑𝑥𝑦𝑝(𝑥, 𝑦)

Variance

𝑉(𝑋) = 𝜎𝑋² = 𝐸(𝑋²) − [𝐸(𝑋)]²

𝑉(𝑌) = 𝜎𝑌² = 𝐸(𝑌²) − [𝐸(𝑌)]²

Standard deviation

𝑆𝐷(𝑋) = 𝜎𝑋 = √(𝐸(𝑋²) − [𝐸(𝑋)]²)

𝑆𝐷(𝑌) = 𝜎𝑌 = √(𝐸(𝑌²) − [𝐸(𝑌)]²)

Coefficient of correlation 𝒓(𝑿, 𝒀) or 𝝆


𝑟(𝑋, 𝑌) = 𝑐𝑜𝑣𝑎𝑟𝑖𝑎𝑛𝑐𝑒(𝑋, 𝑌) / (𝜎𝑋 𝜎𝑌)

where 𝑐𝑜𝑣𝑎𝑟𝑖𝑎𝑛𝑐𝑒(𝑋, 𝑌) = 𝐸(𝑋𝑌) − 𝐸(𝑋)𝐸(𝑌)
Note:

𝐸(𝑋 + 𝑌) = 𝐸(𝑋) + 𝐸(𝑌)


If X and Y are independent random variables, then 𝐸(𝑋𝑌) = 𝐸(𝑋)𝐸(𝑌), i.e. 𝑐𝑜𝑣𝑎𝑟𝑖𝑎𝑛𝑐𝑒(𝑋, 𝑌) = 0.

Problems:

1. For the following joint probability distribution, find the correlation coefficient.

Y
1 2 3
X
2 0.2 0.1 0
4 0.1 0.1 0.1
6 0.2 0.2 0

Solution: Given

Y P(X)
1 2 3
X
2 0.2 0.1 0 0.3
4 0.1 0.1 0.1 0.3
6 0.2 0.2 0 0.4
P(Y) 0.5 0.4 0.1 1

Marginal distribution of X is

X 2 4 6
P(X) 0.3 0.3 0.4

Marginal distribution of Y is

Y 1 2 3
P(Y) 0.5 0.4 0.1

𝐸(𝑋) = ∑𝑥𝑝(𝑥) = (2 × 0.3) + (4 × 0.3) + (6 × 0.4) = 4.2

𝐸(𝑌) = ∑𝑦𝑝(𝑦) = (1 × 0.5) + (2 × 0.4) + (3 × 0.1) = 1.6

𝐸(𝑋²) = ∑𝑥²𝑝(𝑥) = (2² × 0.3) + (4² × 0.3) + (6² × 0.4) = 20.4


𝐸(𝑌²) = ∑𝑦²𝑝(𝑦) = (1² × 0.5) + (2² × 0.4) + (3² × 0.1) = 3

𝑉(𝑋) = 𝐸(𝑋²) − [𝐸(𝑋)]² = 20.4 − 4.2² = 2.76

𝜎𝑋 = √𝑉(𝑋) = 1.6613

𝑉(𝑌) = 𝐸(𝑌²) − [𝐸(𝑌)]² = 3 − 1.6² = 0.44

𝜎𝑌 = √𝑉(𝑌) = 0.6633

𝐸(𝑋𝑌) = ∑𝑥𝑦𝑝(𝑥, 𝑦)

Y
1 2 3
X
2 0.2 0.1 0
4 0.1 0.1 0.1
6 0.2 0.2 0

𝐸(𝑋𝑌) = (2 × 1 × 0.2) + (2 × 2 × 0.1) + (2 × 3 × 0)


+(4 × 1 × 0.1) + (4 × 2 × 0.1) + (4 × 3 × 0.1)
+(6 × 1 × 0.2) + (6 × 2 × 0.2) + (6 × 3 × 0)
𝐸(𝑋𝑌) = 6.8

𝑐𝑜𝑣(𝑋, 𝑌) = 𝐸(𝑋𝑌) − 𝐸(𝑋)𝐸(𝑌) = 6.8 − (4.2 × 1.6) = 0.08

𝑟 = 𝑐𝑜𝑣(𝑋, 𝑌) / (𝜎𝑋 𝜎𝑌) = 0.08 / (1.6613 × 0.6633) = 0.0726
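As a cross-check, the computation above can be reproduced with a short Python sketch (the helper name `correlation_from_joint` is illustrative, not from the text):

```python
import math

def correlation_from_joint(xs, ys, p):
    """Marginals, moments and correlation coefficient of a discrete joint pmf.

    xs, ys  -- the values taken by X and Y
    p[i][j] -- P(X = xs[i], Y = ys[j])
    Returns (E(X), E(Y), cov(X, Y), r(X, Y)).
    """
    px = [sum(row) for row in p]                    # marginal of X (row sums)
    py = [sum(col) for col in zip(*p)]              # marginal of Y (column sums)
    ex = sum(x * q for x, q in zip(xs, px))
    ey = sum(y * q for y, q in zip(ys, py))
    sx = math.sqrt(sum(x * x * q for x, q in zip(xs, px)) - ex ** 2)
    sy = math.sqrt(sum(y * y * q for y, q in zip(ys, py)) - ey ** 2)
    exy = sum(xs[i] * ys[j] * p[i][j]
              for i in range(len(xs)) for j in range(len(ys)))
    cov = exy - ex * ey
    return ex, ey, cov, cov / (sx * sy)

# Problem 1's joint table
ex, ey, cov, r = correlation_from_joint(
    [2, 4, 6], [1, 2, 3],
    [[0.2, 0.1, 0.0],
     [0.1, 0.1, 0.1],
     [0.2, 0.2, 0.0]])
print(round(ex, 4), round(ey, 4), round(cov, 4), round(r, 4))
# 4.2 1.6 0.08 0.0726
```

The same helper can be reused for the remaining joint-table problems by changing only the value lists and the probability matrix.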
2. A joint probability distribution is given by the following table

Y
-3 2 4
X
1 0.1 0.2 0.2
3 0.3 0.1 0.1

Find the (i) Marginal distribution of X and Y

(ii) 𝜇𝑥 , 𝜇𝑦 , 𝜎𝑥 , 𝜎𝑦

(iii) Correlation coefficient

Solution:

Y P(X)
-3 2 4
X
1 0.1 0.2 0.2 0.5
3 0.3 0.1 0.1 0.5
P(Y) 0.4 0.3 0.3 1

Marginal distribution of X

X 1 3
P(X) 0.5 0.5

Marginal distribution of Y

Y -3 2 4
P(Y) 0.4 0.3 0.3

𝜇𝑋 = 𝐸(𝑋) = ∑𝑥𝑝(𝑥) = (1 × 0.5) + (3 × 0.5) = 2

𝜇𝑌 = 𝐸(𝑌) = ∑𝑦𝑝(𝑦) = (−3 × 0.4) + (2 × 0.3) + (4 × 0.3) = 0.6

𝐸(𝑋²) = ∑𝑥²𝑝(𝑥) = (1² × 0.5) + (3² × 0.5) = 5

𝐸(𝑌²) = ∑𝑦²𝑝(𝑦) = ((−3)² × 0.4) + (2² × 0.3) + (4² × 0.3) = 9.6

𝑉(𝑋) = 𝐸(𝑋²) − [𝐸(𝑋)]² = 5 − 2² = 1

𝑉(𝑌) = 𝐸(𝑌²) − [𝐸(𝑌)]² = 9.6 − 0.6² = 9.24

𝜎𝑋 = √𝑉(𝑋) = 1

𝜎𝑌 = √𝑉(𝑌) = 3.0397

𝐸(𝑋𝑌) = ∑𝑥𝑦𝑝(𝑥, 𝑦) = −0.3 + 0.4 + 0.8 − 2.7 + 0.6 + 1.2 = 0

𝐶𝑂𝑉(𝑋, 𝑌) = 𝐸(𝑋𝑌) − 𝐸(𝑋)𝐸(𝑌) = 0 − (2 × 0.6) = −1.2

𝑟 = 𝐶𝑂𝑉(𝑋, 𝑌) / (𝜎𝑋 𝜎𝑌) = −1.2 / (1 × 3.0397) = −0.3948

3. A coin is tossed three times. Let 𝑋 be equal to 0 or 1 according as a head or a tail occurs on
the first toss. Let 𝑌 be equal to the total number of heads which occurs. Determine (i) the
marginal distributions of 𝑋 and 𝑌, and (ii) the joint distribution of 𝑋 and 𝑌, (iii) expected
values of 𝑋, 𝑌, 𝑋 + 𝑌 and 𝑋𝑌, (iv) 𝜎𝑋 and 𝜎𝑌 , (v) 𝐶𝑜𝑣(𝑋, 𝑌) and 𝜌(𝑋, 𝑌).

Solution: Here the sample space is given by

𝑆 = {𝐻𝐻𝐻, 𝐻𝐻𝑇, 𝐻𝑇𝐻, 𝐻𝑇𝑇, 𝑇𝐻𝐻, 𝑇𝐻𝑇, 𝑇𝑇𝐻, 𝑇𝑇𝑇}

(i) The distribution of the random variable 𝑋 is given by the following table

X (first toss Head or Tail)    0 (first toss Head)    1 (first toss Tail)
P(X)                           4/8                    4/8

which is the marginal distribution of the random variable X.

The distribution of the random variable Y is given by the following table


Y (total number of Heads)    0 (zero Heads)    1 (one Head)    2 (two Heads)    3 (three Heads)
P(Y)                         1/8               3/8             3/8              1/8
which is the marginal distribution of the random variable Y.
(ii) The joint distribution of the random variables 𝑋 and 𝑌 is given by the following table

X \ Y                  0 (zero Heads)    1 (one Head)    2 (two Heads)    3 (three Heads)
0 (first toss Head)    0                 1/8             2/8              1/8
1 (first toss Tail)    1/8               2/8             1/8              0

E[X] = μX = ∑ xᵢP(xᵢ) = (0 × 4/8) + (1 × 4/8) = 4/8 = 0.5

E[Y] = μY = ∑ yⱼP(yⱼ) = (0 × 1/8) + (1 × 3/8) + (2 × 3/8) + (3 × 1/8) = 12/8 = 1.5

𝐸(𝑋 + 𝑌) = 𝐸(𝑋) + 𝐸(𝑌) = 0.5 + 1.5 = 2

OR

E[X + Y] = ∑ ∑ Pij (xi + yj )


= P₁₁(x₁ + y₁) + P₁₂(x₁ + y₂) + P₁₃(x₁ + y₃) + P₁₄(x₁ + y₄) + P₂₁(x₂ + y₁) + P₂₂(x₂ + y₂) + P₂₃(x₂ + y₃) + P₂₄(x₂ + y₄)

= 0(0 + 0) + (1/8)(0 + 1) + (2/8)(0 + 2) + (1/8)(0 + 3) + (1/8)(1 + 0) + (2/8)(1 + 1) + (1/8)(1 + 2) + 0(1 + 3) = 16/8 = 2.

E(𝑋𝑌) = ∑ ∑ Pij (xi yj )


= P₁₁(x₁y₁) + P₁₂(x₁y₂) + P₁₃(x₁y₃) + P₁₄(x₁y₄) + P₂₁(x₂y₁) + P₂₂(x₂y₂) + P₂₃(x₂y₃) + P₂₄(x₂y₄)

= 0(0 × 0) + (1/8)(0 × 1) + (2/8)(0 × 2) + (1/8)(0 × 3) + (1/8)(1 × 0) + (2/8)(1 × 1) + (1/8)(1 × 2) + 0(1 × 3) = 4/8 = 0.5

σ²X = E[X²] − [E(X)]² = (0² × 4/8) + (1² × 4/8) − (4/8)² = 1/4

σ²Y = E[Y²] − [E(Y)]² = (0² × 1/8) + (1² × 3/8) + (2² × 3/8) + (3² × 1/8) − (12/8)² = 3/4

Cov(X, Y) = E(XY) − E(X)E(Y) = 1/2 − (1/2 × 3/2) = −1/4

r = ρ(X, Y) = Cov(X, Y) / (σX σY) = (−1/4) / ((1/2)(√3/2)) = −1/√3.
4. A joint probability distribution is given by the following table

Y
2 3 4
X
1 0.06 0.15 0.09
2 0.14 0.35 0.21

Determine the marginal distributions of X and Y. Also verify that X and Y are independent.

Solution:

Y P(X)
2 3 4
X
1 0.06 0.15 0.09 0.3
2 0.14 0.35 0.21 0.7
P(Y) 0.2 0.5 0.3 1

Here

𝑃1 = 0.3, 𝑃2 = 0.7
𝑄1 = 0.2, 𝑄2 = 0.5, 𝑄3 = 0.3

𝑃11 = 𝑃1 𝑄1 = 0.06 = 𝑃(𝑥1 , 𝑦1 )


𝑃12 = 𝑃1 𝑄2 = 0.15 = 𝑃(𝑥1 , 𝑦2 )
𝑃13 = 𝑃1 𝑄3 = 0.09 = 𝑃(𝑥1 , 𝑦3 )
𝑃21 = 𝑃2 𝑄1 = 0.14 = 𝑃(𝑥2 , 𝑦1 )
𝑃22 = 𝑃2 𝑄2 = 0.35 = 𝑃(𝑥2 , 𝑦2 )
𝑃23 = 𝑃2 𝑄3 = 0.21 = 𝑃(𝑥2 , 𝑦3 )
Thus, 𝑃𝑖 𝑄𝑗 = 𝑃𝑖𝑗 for all values of 𝑖 and 𝑗. Accordingly, X and Y are stochastically independent.

OR

Alternate method:

Marginal distribution of X

X 1 2
P(X) 0.3 0.7

𝐸(𝑋) = 1.7

Y 2 3 4
P(Y) 0.2 0.5 0.3

𝐸(𝑌) = 3.1
𝐸(𝑋𝑌) = 0.12 + 0.45 + 0.36 + 0.56 + 2.1 + 1.68 = 5.27

𝐸(𝑋𝑌) = 𝐸(𝑋)𝐸(𝑌) = (1.7 × 3.1) = 5.27


Thus, X and Y are stochastically independent.
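The cell-by-cell product check used above can be automated; the following sketch (function name illustrative) compares every entry of the joint table with the product of its marginals:

```python
def is_independent(p, tol=1e-9):
    """True if a discrete joint pmf factorizes into its marginals, cell by cell."""
    px = [sum(row) for row in p]            # marginal of X
    py = [sum(col) for col in zip(*p)]      # marginal of Y
    return all(abs(p[i][j] - px[i] * py[j]) <= tol
               for i in range(len(p)) for j in range(len(p[0])))

# Problem 4's table factorizes; Problem 1's table does not
print(is_independent([[0.06, 0.15, 0.09],
                      [0.14, 0.35, 0.21]]))      # True
print(is_independent([[0.2, 0.1, 0.0],
                      [0.1, 0.1, 0.1],
                      [0.2, 0.2, 0.0]]))         # False
```

Checking every cell is the safe test: it is stronger than comparing E(XY) with E(X)E(Y), which can agree even for dependent variables.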

5. The probability distributions of two stochastically independent random variables X and Y are given by the following table.

𝑋 0 1 𝑌 1 2 3
P(X) 0.2 0.8 P(Y) 0.1 0.4 0.5

Find the joint probability distribution. Also compute E(X) and E(Y).

Solution:

𝑃11 = 𝑃(𝑋 = 0) × 𝑃(𝑌 = 1) = 0.2 × 0.1 = 0.02


𝑃12 = 𝑃(𝑋 = 0) × 𝑃(𝑌 = 2) = 0.2 × 0.4 = 0.08
𝑃13 = 𝑃(𝑋 = 0) × 𝑃(𝑌 = 3) = 0.2 × 0.5 = 0.1
𝑃21 = 𝑃(𝑋 = 1) × 𝑃(𝑌 = 1) = 0.8 × 0.1 = 0.08
𝑃22 = 𝑃(𝑋 = 1) × 𝑃(𝑌 = 2) = 0.8 × 0.4 = 0.32
𝑃23 = 𝑃(𝑋 = 1) × 𝑃(𝑌 = 3) = 0.8 × 0.5 = 0.4

Hence the joint probability distribution is

X \Y 1 2 3 P(X)
0 0.02 0.08 0.1 0.2
1 0.08 0.32 0.4 0.8
P(Y) 0.1 0.4 0.5 1

𝐸(𝑋) = 0.8
𝐸(𝑌) = 2.4
𝐸(𝑋𝑌) = ∑𝑥𝑦𝑝(𝑥, 𝑦) = 0 + 0 + 0 + 0.08 + 0.64 + 1.2 = 1.92

𝐸(𝑋𝑌) = 𝐸(𝑋)𝐸(𝑌) = (0.8)(2.4) = 1.92


6. The joint probability distribution of two random variables X and Y is given by the
following table.
Y
1 3 9
X
2 1/8 1/24 1/12
4 1/4 1/4 0
6 1/8 1/24 1/12
Find the marginal distribution of 𝑋 and 𝑌, and evaluate 𝑐𝑜𝑣(𝑋, 𝑌).

Solution: From the table, we note that


𝑃1 = 1/8 + 1/24 + 1/12 = 1/4
𝑃2 = 1/4 + 1/4 + 0 = 1/2
𝑃3 = 1/8 + 1/24 + 1/12 = 1/4
𝑄1 = 1/8 + 1/4 + 1/8 = 1/2
𝑄2 = 1/24 + 1/4 + 1/24 = 1/3
𝑄3 = 1/12 + 0 + 1/12 = 1/6

The marginal distribution of X is given by the table:


𝑥𝑖 2 4 6
𝑃𝑖 1/4 1/2 1/4

And the marginal distribution of Y is given by the table:


𝑦ⱼ    1      3      9
𝑄ⱼ    1/2    1/3    1/6

Therefore, the means of these distributions are respectively,


𝜇𝑋 = ∑ 𝑥ᵢ𝑃(𝑥ᵢ) = (2 × 1/4) + (4 × 1/2) + (6 × 1/4) = 4

𝜇𝑌 = ∑ 𝑦ⱼ𝑃(𝑦ⱼ) = (1 × 1/2) + (3 × 1/3) + (9 × 1/6) = 3

E[𝑋𝑌] = ∑ᵢ∑ⱼ 𝑃ᵢⱼ 𝑥ᵢ𝑦ⱼ
= (2 × 1/8) + (6 × 1/24) + (18 × 1/12) + (4 × 1/4) + (12 × 1/4) + (36 × 0) + (6 × 1/8) + (18 × 1/24) + (54 × 1/12)
= 2 + 4 + 6 = 12
𝐶𝑜𝑣 (𝑋, 𝑌) = 𝐸[𝑋𝑌] − 𝜇𝑋 𝜇𝑌 = 12 − 12 = 0
𝜌(𝑋, 𝑌) = 0.
7. For the following bivariate probability distribution of X and Y find
(i) 𝑃(𝑋 ≤ 1, 𝑌 = 2) (ii) 𝑃(𝑋 ≤ 1) (iii) 𝑃(𝑌 = 3) (iv) 𝑃(𝑌 ≤ 3) (v) 𝑃(𝑋 < 3, 𝑌 ≤ 4)

Y
1 2 3 4 5 6
X
0 0 0 1/32 2/32 2/32 3/32
1 1/16 1/16 1/8 1/8 1/8 1/8
2 1/32 1/32 1/64 1/64 0 2/64

Solution:

Y P(X)
1 2 3 4 5 6
X
0 0 0 1/32 2/32 2/32 3/32 8/32
1 1/16 1/16 1/8 1/8 1/8 1/8 10/16
2 1/32 1/32 1/64 1/64 0 2/64 8/64
P(Y) 3/32 3/32 11/64 13/64 6/32 16/64 1

𝑃(𝑋 ≤ 1, 𝑌 = 2) = 𝑃(𝑋 = 0, 𝑌 = 2) + 𝑃(𝑋 = 1, 𝑌 = 2) = 0 + 1/16 = 1/16

𝑃(𝑋 ≤ 1) = 𝑃(𝑋 = 0) + 𝑃(𝑋 = 1) = 8/32 + 10/16 = 7/8

𝑃(𝑌 = 3) = 11/64

𝑃(𝑌 ≤ 3) = 3/32 + 3/32 + 11/64 = 23/64

𝑃(𝑋 < 3, 𝑌 ≤ 4) = 𝑃(𝑋 = 0, 𝑌 ≤ 4) + 𝑃(𝑋 = 1, 𝑌 ≤ 4) + 𝑃(𝑋 = 2, 𝑌 ≤ 4)


= 𝑃(𝑋 = 0, 𝑌 = 1) + 𝑃(𝑋 = 0, 𝑌 = 2) + 𝑃(𝑋 = 0, 𝑌 = 3) + 𝑃(𝑋 = 0, 𝑌 = 4)
+𝑃(𝑋 = 1, 𝑌 = 1) + 𝑃(𝑋 = 1, 𝑌 = 2) + 𝑃(𝑋 = 1, 𝑌 = 3) + 𝑃(𝑋 = 1, 𝑌 = 4)
+𝑃(𝑋 = 2, 𝑌 = 1) + 𝑃(𝑋 = 2, 𝑌 = 2) + 𝑃(𝑋 = 2, 𝑌 = 3) + 𝑃(𝑋 = 2, 𝑌 = 4)

𝑃(𝑋 < 3, 𝑌 ≤ 4) = (0 + 0 + 1/32 + 2/32) + (1/16 + 1/16 + 1/8 + 1/8) + (1/32 + 1/32 + 1/64 + 1/64) = 9/16
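Event probabilities of this kind can be verified with exact rational arithmetic. The sketch below (using Python's `fractions` module; the helper name `prob` is illustrative) sums the joint pmf over any event given as a predicate:

```python
from fractions import Fraction as F

xs = [0, 1, 2]
ys = [1, 2, 3, 4, 5, 6]
# joint pmf of Problem 7, stored exactly
p = {
    (0, 1): F(0),     (0, 2): F(0),     (0, 3): F(1, 32), (0, 4): F(2, 32), (0, 5): F(2, 32), (0, 6): F(3, 32),
    (1, 1): F(1, 16), (1, 2): F(1, 16), (1, 3): F(1, 8),  (1, 4): F(1, 8),  (1, 5): F(1, 8),  (1, 6): F(1, 8),
    (2, 1): F(1, 32), (2, 2): F(1, 32), (2, 3): F(1, 64), (2, 4): F(1, 64), (2, 5): F(0),     (2, 6): F(2, 64),
}

def prob(event):
    """Sum the joint pmf over all (x, y) pairs satisfying `event`."""
    return sum(p[x, y] for x in xs for y in ys if event(x, y))

print(prob(lambda x, y: True))                 # 1 (the pmf sums to one)
print(prob(lambda x, y: x <= 1 and y == 2))    # 1/16
print(prob(lambda x, y: x <= 1))               # 7/8
print(prob(lambda x, y: y <= 3))               # 23/64
print(prob(lambda x, y: x < 3 and y <= 4))     # 9/16
```

Using `Fraction` avoids the rounding noise that 1/32- and 1/64-type values produce in floating point.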
8. For the following bivariate probability distribution, find the value of k.

Y
1 2 3
X
-5 0 0.1 0.1
0 0.1 k 0.2
5 0.2 0.1 0

Solution:

Y P(X)
1 2 3
X
-5 0 0.1 0.1 0.2
0 0.1 k 0.2 0.3+k
5 0.2 0.1 0 0.3
P(Y) 0.3 0.2+k 0.3 1

0.8 + 𝑘 = 1
𝑘 = 0.2
Bayes' theorem

If B1, B2, …, Bn are mutually disjoint events with 𝑃(𝐵𝑖) ≠ 0 (𝑖 = 1, 2, … , 𝑛), and A is any event contained in ⋃ᵢ₌₁ⁿ 𝐵ᵢ with 𝑃(𝐴) > 0, then

𝑃(𝐵ᵢ|𝐴) = 𝑃(𝐵ᵢ)𝑃(𝐴|𝐵ᵢ) / ∑ₖ₌₁ⁿ 𝑃(𝐵ₖ)𝑃(𝐴|𝐵ₖ).

Problems on Bayes' theorem:

1. In a certain assembly plant, three machines, B1, B2, and B3, make 30%, 45%, and 25%,
respectively, of the products. It is known from past experience that 2%, 3%, and 2% of the products
made by each machine, respectively, are defective. Now, suppose that a finished product is randomly
selected. What is the probability that it is defective? What is the probability that it was made by
machine B3?
Solution: Consider the following events:
A: the product is defective,
B1: the product is made by machine B1,
B2: the product is made by machine B2,
B3: the product is made by machine B3.

Applying the rule of elimination, we can write

𝑃(𝐴) = 𝑃(𝐵1)𝑃(𝐴|𝐵1) + 𝑃(𝐵2)𝑃(𝐴|𝐵2) + 𝑃(𝐵3)𝑃(𝐴|𝐵3).

𝑃(𝐵1)𝑃(𝐴|𝐵1) = (0.3)(0.02) = 0.006,


𝑃(𝐵2)𝑃(𝐴|𝐵2) = (0.45)(0.03) = 0.0135,
𝑃(𝐵3)𝑃(𝐴|𝐵3) = (0.25)(0.02) = 0.005,
∴ 𝑃(𝐴) = 0.006 + 0.0135 + 0.005 = 0.0245

Using Bayes' rule, the probability that the defective product was made by machine B3 is

𝑃(𝐵3|𝐴) = 𝑃(𝐵3)𝑃(𝐴|𝐵3) / [𝑃(𝐵1)𝑃(𝐴|𝐵1) + 𝑃(𝐵2)𝑃(𝐴|𝐵2) + 𝑃(𝐵3)𝑃(𝐴|𝐵3)] = 0.005 / 0.0245 = 0.2041
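The rule of elimination followed by Bayes' rule is the same two-step computation in every problem of this section, so it can be captured once in a small sketch (the function name `posterior` is illustrative):

```python
def posterior(priors, likelihoods):
    """Bayes' rule: returns (P(A), [P(B_i | A)]) given priors P(B_i)
    and likelihoods P(A | B_i)."""
    joint = [p * l for p, l in zip(priors, likelihoods)]
    total = sum(joint)              # P(A) by the rule of elimination
    return total, [j / total for j in joint]

# Problem 1: machines B1, B2, B3
p_defective, post = posterior([0.30, 0.45, 0.25], [0.02, 0.03, 0.02])
print(round(p_defective, 4), round(post[2], 4))   # 0.0245 0.2041
```

The remaining Bayes problems follow by changing only the two input lists, e.g. `posterior([0.2, 0.6, 0.15, 0.05], [0.05, 0.1, 0.1, 0.05])` for the secretaries.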

2. An office has 4 secretaries handling respectively 20%, 60%, 15% and 5% of files of all government
reports. The probability that they misfile such reports are respectively 0.05, 0.1, 0.1 and 0.05. Find the
probability that the misfiled report can be blamed on the first secretary.

Solution: Let 𝐴1 , 𝐴2 , 𝐴3 and 𝐴4 be 4 secretaries of the office respectively handling 20%, 60%, 15%
and 5% of these files. Hence we have
𝑃(𝐴1 ) = 0.2, 𝑃(𝐴2 ) = 0.6, 𝑃(𝐴3 ) = 0.15 and 𝑃(𝐴4 ) = 0.05

Let E be the event of misfiling a report by the secretaries.

𝑃(𝐸|𝐴1 ) = 0.05, 𝑃(𝐸|𝐴2 ) = 0.1, 𝑃(𝐸|𝐴3 ) = 0.1 and 𝑃(𝐸|𝐴4 ) = 0.05

𝑃(𝐴1 )𝑃(𝐸|𝐴1 ) = 0.2 × 0.05 = 0.01


𝑃(𝐴2 )𝑃(𝐸|𝐴2 ) = 0.6 × 0.1 = 0.06
𝑃(𝐴3 )𝑃(𝐸|𝐴3 ) = 0.15 × 0.1 = 0.015
𝑃(𝐴4 )𝑃(𝐸|𝐴4 ) = 0.05 × 0.05 = 0.0025

𝑃(𝐸) = 𝑃(𝐴1 )𝑃(𝐸|𝐴1 ) + 𝑃(𝐴2 )𝑃(𝐸|𝐴2 ) + 𝑃(𝐴3 )𝑃(𝐸|𝐴3 ) + 𝑃(𝐴4 )𝑃(𝐸|𝐴4 )


𝑃(𝐸) = 0.01 + 0.06 + 0.015 + 0.0025 = 0.0875
Using Bayes' rule, the probability that the misfiled report can be blamed on the first secretary is

𝑃(𝐴1 |𝐸) = 𝑃(𝐴1 )𝑃(𝐸|𝐴1 ) / 𝑃(𝐸) = 0.01 / 0.0875 = 0.1143
3. One factory F1 produces 1000 articles of which 20 are defective, second factory F2 produces 4000
articles of which 40 are defective and the third factory F3 produces 5000 articles of which 50 are
defective. All these articles are put in one stock file. If one of them is chosen, what is the probability
that it is defective? If a chosen item is defective, what is the probability that it is from factory F1?

Solution: Let E be the event that the article is defective, and let

F1: the article is produced by F1
F2: the article is produced by F2
F3: the article is produced by F3

Since there are 10,000 articles in all (produced by F1, F2 and F3 taken together), we have

𝑃(𝐹1 ) = 1000/10000 = 0.1, 𝑃(𝐹2 ) = 4000/10000 = 0.4, 𝑃(𝐹3 ) = 5000/10000 = 0.5

𝑃(𝐸|𝐹1 ) = 20/1000 = 0.02, 𝑃(𝐸|𝐹2 ) = 40/4000 = 0.01, 𝑃(𝐸|𝐹3 ) = 50/5000 = 0.01

𝑃(𝐹1 )𝑃(𝐸|𝐹1 ) = 0.002


𝑃(𝐹2 )𝑃(𝐸|𝐹2 ) = 0.004
𝑃(𝐹3 )𝑃(𝐸|𝐹3 ) = 0.005

Therefore, the probability of selecting a defective article from the lot is,
𝑃(𝐸) = 𝑃(𝐹1 )𝑃(𝐸|𝐹1 ) + 𝑃(𝐹2 )𝑃(𝐸|𝐹2 ) + 𝑃(𝐹3 )𝑃(𝐸|𝐹3 )
𝑃(𝐸) = 0.002 + 0.004 + 0.005
𝑃(𝐸) = 0.011

The probability that the chosen defective article is from F1 is

𝑃(𝐹1 |𝐸) = 𝑃(𝐹1 )𝑃(𝐸|𝐹1 ) / 𝑃(𝐸) = 0.002 / 0.011 = 0.1818
4. The chance that a doctor will diagnose a disease correctly is 60%. The chance that a patient will die
after correct diagnose is 40% and the chance of death by wrong diagnosis is 70%. If a patient dies,
what is the chance that his disease was correctly diagnosed?
Solution:
Let A be the event of correct diagnosis and B be the event of wrong diagnosis by the doctor
𝑃(𝐴) = 0.6
𝑃(𝐴) + 𝑃(𝐵) = 1
𝑃(𝐵) = 1 − 𝑃(𝐴)
𝑃(𝐵) = 0.4
Let E be the event that patient dies
𝑃(𝐸|𝐴) = 0.4 and 𝑃(𝐸|𝐵) = 0.7

𝑃(𝐸) = 𝑃(𝐴)𝑃(𝐸|𝐴) + 𝑃(𝐵)𝑃(𝐸|𝐵) = (0.6 × 0.4) + (0.4 × 0.7) = 0.52

Using Bayes' rule, the probability that the disease was correctly diagnosed is

𝑃(𝐴|𝐸) = 𝑃(𝐴)𝑃(𝐸|𝐴) / 𝑃(𝐸) = (0.6 × 0.4) / 0.52 = 0.4615

5. A shipment of components consists of 3 identical boxes. The first box contains 200 components of
which 25% are defective, the second box has 5000 components of which 20% are defective and the
third box contains 2000 components of which 600 are defective. A box is selected at random and a
component is removed at random from this box. What is the probability that this component is
defective? If the component drawn is found to be defective, what is the probability that it came from
the first or second box?
Solution: Let A, B and C denote the first, second and third box respectively, and let P(A), P(B) and P(C) be the probabilities of choosing them. Since a box is selected at random,

𝑃(𝐴) = 1/3, 𝑃(𝐵) = 1/3, 𝑃(𝐶) = 1/3
E be the event of defective component

Let 𝑃(𝐸|𝐴), 𝑃(𝐸|𝐵) and 𝑃(𝐸|𝐶) denote the probabilities of choosing a defective item from the first,
second and third boxes respectively
𝑃(𝐸|𝐴) = 0.25, 𝑃(𝐸|𝐵) = 0.2 and 𝑃(𝐸|𝐶) = 600/2000 = 0.3

Therefore, the probability of drawing a defective component from an arbitrarily chosen box is
𝑃(𝐸) = 𝑃(𝐴)𝑃(𝐸|𝐴) + 𝑃(𝐵)𝑃(𝐸|𝐵) + 𝑃(𝐶)𝑃(𝐸|𝐶)
𝑃(𝐸) = (1/3)[0.25 + 0.2 + 0.3] = 0.25

The probability that the defective component came from the first box is

𝑃(𝐴|𝐸) = 𝑃(𝐴)𝑃(𝐸|𝐴) / 𝑃(𝐸) = ((1/3) × 0.25) / 0.25 = 1/3 = 0.3333
and the probability that it came from the second box is

𝑃(𝐵|𝐸) = 𝑃(𝐵)𝑃(𝐸|𝐵) / 𝑃(𝐸) = ((1/3) × 0.2) / 0.25 = 0.2667

Consequently, the probability that the defective component came from the first or second box is

𝑃(𝐴|𝐸) + 𝑃(𝐵|𝐸) = 0.3333 + 0.2667 = 0.6.

6. A company manufacturing ball pens in two writing colours blue and red make packets of 10 pens
with 5 pens of each colour. In a particular shop it was found that after sales, packet 1 contained 3 blue
and 2 red pens, packet 2 contained 2 blue and 3 red pens, packet 3 contained 3 blue and 5 red pens.
On the demand of a customer for a pen, packet was drawn at random and a pen was taken out. It was
found blue. Find the probability that packet 1 was selected.

Solution: Let 𝑃1 , 𝑃2 and 𝑃3 be the event of selecting packets 1, 2 and 3 respectively at random.
∴ 𝑃(𝑃1 ) = 𝑃(𝑃2 ) = 𝑃(𝑃3 ) = 1/3

Let E be the event of selecting a blue pen


The probability of selecting a blue pen from packet 1, packet 2 and packet 3 respectively is

𝑃(𝐸|𝑃1 ) = 3/5, 𝑃(𝐸|𝑃2 ) = 2/5 and 𝑃(𝐸|𝑃3 ) = 3/8

𝑃(𝐸) = 𝑃(𝑃1 )𝑃(𝐸|𝑃1 ) + 𝑃(𝑃2 )𝑃(𝐸|𝑃2 ) + 𝑃(𝑃3 )𝑃(𝐸|𝑃3 )

𝑃(𝐸) = (1/3)[3/5 + 2/5 + 3/8] = 11/24 = 0.4583

Using Bayes' rule, we have

𝑃(𝑃1 |𝐸) = 𝑃(𝑃1 )𝑃(𝐸|𝑃1 ) / 𝑃(𝐸) = ((1/3) × (3/5)) / 0.4583 = 0.4364
Joint Probability distribution of continuous random variables
Joint density function

The joint probability density function (pdf) of two continuous random variables (X, Y) is defined as a function 𝑓(𝑥, 𝑦) satisfying the following conditions:

(i) 𝑓(𝑥, 𝑦) ≥ 0, ∀𝑥, 𝑦

(ii) ∫_{−∞}^{∞} ∫_{−∞}^{∞} 𝑓(𝑥, 𝑦) 𝑑𝑥 𝑑𝑦 = 1

Marginal density function of X and Y



The function 𝑝1 (𝑥) = 𝑔(𝑥) = 𝑓𝑋 (𝑥) = ∫_{−∞}^{∞} 𝑓(𝑥, 𝑦)𝑑𝑦 is called the marginal density function of X.

The function 𝑝2 (𝑦) = ℎ(𝑦) = 𝑓𝑌 (𝑦) = ∫_{−∞}^{∞} 𝑓(𝑥, 𝑦)𝑑𝑥 is called the marginal density function of Y.

Independent Random variables

Two random variables X and Y are said to be independent or stochastically independent if

𝑝1 (𝑥)𝑝2 (𝑦) = 𝑓(𝑥, 𝑦)


OR

𝐸(𝑋𝑌) = 𝐸(𝑋)𝐸(𝑌)
Mean, Variance and Covariance

𝐸(𝑋) = ∫_{−∞}^{∞} 𝑥 𝑝1 (𝑥)𝑑𝑥

𝐸(𝑌) = ∫_{−∞}^{∞} 𝑦 𝑝2 (𝑦)𝑑𝑦

𝐸(𝑋𝑌) = ∫_{−∞}^{∞} ∫_{−∞}^{∞} 𝑥𝑦 𝑓(𝑥, 𝑦)𝑑𝑥𝑑𝑦

𝐸(𝑋 + 𝑌) = ∫_{−∞}^{∞} ∫_{−∞}^{∞} (𝑥 + 𝑦) 𝑓(𝑥, 𝑦)𝑑𝑥𝑑𝑦

𝑐𝑜𝑣𝑎𝑟𝑖𝑎𝑛𝑐𝑒(𝑋, 𝑌) = 𝐸(𝑋𝑌) − 𝐸(𝑋)𝐸(𝑌)


𝑟(𝑋, 𝑌) = 𝑐𝑜𝑣𝑎𝑟𝑖𝑎𝑛𝑐𝑒(𝑋, 𝑌) / (𝜎𝑋 𝜎𝑌)

where, 𝜎𝑋 = √𝐸(𝑋 2 ) − [𝐸(𝑋)]2

𝜎𝑌 = √𝐸(𝑌 2 ) − [𝐸(𝑌)]2
1. Let (X, Y) be a continuous random variable with joint pdf given by

f(x, y) = { 𝑥² + 𝑥𝑦/3,  0 ≤ 𝑥 ≤ 1, 0 ≤ 𝑦 ≤ 2
          { 0,           elsewhere

(i) Is 𝑓(𝑥, 𝑦) a probability density function? (ii) Find the marginal pdfs of X and Y.

Solution: The given 𝑓(𝑥, 𝑦) ≥ 0, ∀𝑥, 𝑦

∫₀¹ ∫₀² (𝑥² + 𝑥𝑦/3) 𝑑𝑦 𝑑𝑥

= ∫₀¹ [𝑥²𝑦 + 𝑥𝑦²/6]₀² 𝑑𝑥

= ∫₀¹ (2𝑥² + 2𝑥/3) 𝑑𝑥

= [2𝑥³/3 + 𝑥²/3]₀¹ = 2/3 + 1/3 = 1
Therefore 𝑓(𝑥, 𝑦) is a probability density function.

Marginal density of X:

𝑝1 (𝑥) = ∫_{−∞}^{∞} 𝑓(𝑥, 𝑦)𝑑𝑦 = ∫₀² (𝑥² + 𝑥𝑦/3) 𝑑𝑦 = [𝑥²𝑦 + 𝑥𝑦²/6]₀²

𝑝1 (𝑥) = 2𝑥² + 2𝑥/3,  0 ≤ 𝑥 ≤ 1

Marginal density of Y:

𝑝2 (𝑦) = ∫_{−∞}^{∞} 𝑓(𝑥, 𝑦)𝑑𝑥 = ∫₀¹ (𝑥² + 𝑥𝑦/3) 𝑑𝑥 = [𝑥³/3 + 𝑥²𝑦/6]₀¹

𝑝2 (𝑦) = 1/3 + 𝑦/6,  0 ≤ 𝑦 ≤ 2
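These integrals can be sanity-checked numerically. The sketch below uses a plain midpoint rule (no external libraries; the step count n = 400 is an arbitrary choice): the double integral of the density should be 1, and the marginal of X at x = 0.5 should equal 2x² + 2x/3 ≈ 0.8333.

```python
def integrate2d(f, ax, bx, ay, by, n=400):
    """Midpoint-rule approximation of the double integral of f over [ax,bx] x [ay,by]."""
    hx, hy = (bx - ax) / n, (by - ay) / n
    total = 0.0
    for i in range(n):
        x = ax + (i + 0.5) * hx
        for j in range(n):
            y = ay + (j + 0.5) * hy
            total += f(x, y)
    return total * hx * hy

f = lambda x, y: x**2 + x * y / 3
print(round(integrate2d(f, 0, 1, 0, 2), 4))   # 1.0 (the density integrates to one)

# marginal of X at x = 0.5: integrate f(0.5, y) over y in [0, 2]
n, h = 400, 2 / 400
p1_half = sum(f(0.5, (j + 0.5) * h) for j in range(n)) * h
print(round(p1_half, 4))                      # 0.8333
```

The same `integrate2d` helper works for the other continuous densities in this section by changing the integrand and the limits.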
2. Find the constant ‘𝑘’ so that

h(x, y) = { 𝑘(𝑥 + 1)𝑒^{−𝑦},  0 < 𝑥 < 1, 𝑦 > 0
          { 0,               elsewhere
is a joint probability density function. Are X and Y independent?

Solution: We observe that ℎ(𝑥, 𝑦) ≥ 0 for 𝑥, 𝑦, if 𝑘 ≥ 0

∫_{−∞}^{∞} ∫_{−∞}^{∞} ℎ(𝑥, 𝑦) 𝑑𝑥 𝑑𝑦 = ∫₀^∞ ∫₀¹ ℎ(𝑥, 𝑦) 𝑑𝑥 𝑑𝑦

= 𝑘 {∫₀¹ (𝑥 + 1)𝑑𝑥} {∫₀^∞ 𝑒^{−𝑦} 𝑑𝑦}

1 = 𝑘 (3/2)(0 + 1) = (3/2)𝑘 ⇒ 𝑘 = 2/3

Hence ∫_{−∞}^{∞} ∫_{−∞}^{∞} ℎ(𝑥, 𝑦) 𝑑𝑥 𝑑𝑦 = 1 if 𝑘 = 2/3.

Therefore, ℎ(𝑥, 𝑦) is a joint probability density function if 𝑘 = 2/3.

With 𝑘 = 2/3, the marginal density functions are


𝑝1 (𝑥) = ∫_{−∞}^{∞} ℎ(𝑥, 𝑦) 𝑑𝑦 = (2/3)(𝑥 + 1) ∫₀^∞ 𝑒^{−𝑦} 𝑑𝑦 = (2/3)(𝑥 + 1)(0 + 1)

𝑝1 (𝑥) = (2/3)(𝑥 + 1),  0 < 𝑥 < 1
3


𝑝2 (𝑦) = ∫_{−∞}^{∞} ℎ(𝑥, 𝑦) 𝑑𝑥 = (2/3)𝑒^{−𝑦} ∫₀¹ (𝑥 + 1) 𝑑𝑥 = (2/3)𝑒^{−𝑦} (3/2)

𝑝2 (𝑦) = 𝑒^{−𝑦},  𝑦 > 0.

Therefore, 𝑝1 (𝑥)𝑝2 (𝑦) = ℎ(𝑥, 𝑦) and hence 𝑋 and Y are stochastically independent.

3. Let X and Y be random variables having the joint density function


f(x, y) = { 4𝑥𝑦,  0 ≤ 𝑥 ≤ 1, 0 ≤ 𝑦 ≤ 1
          { 0,     elsewhere
Verify that 𝐸(𝑋 + 𝑌) = 𝐸(𝑋) + 𝐸(𝑌) and also find 𝐸(𝑋𝑌).

Solution:

𝑝1 (𝑥) = ∫_{−∞}^{∞} 𝑓(𝑥, 𝑦)𝑑𝑦 = ∫₀¹ 4𝑥𝑦 𝑑𝑦 = 2𝑥

𝑝2 (𝑦) = ∫_{−∞}^{∞} 𝑓(𝑥, 𝑦)𝑑𝑥 = ∫₀¹ 4𝑥𝑦 𝑑𝑥 = 2𝑦

𝐸(𝑋) = ∫_{−∞}^{∞} 𝑥𝑝1 (𝑥)𝑑𝑥 = ∫₀¹ 2𝑥² 𝑑𝑥 = 2/3

𝐸(𝑌) = ∫_{−∞}^{∞} 𝑦𝑝2 (𝑦)𝑑𝑦 = ∫₀¹ 2𝑦² 𝑑𝑦 = 2/3

𝐸(𝑋 + 𝑌) = ∫_{−∞}^{∞} ∫_{−∞}^{∞} (𝑥 + 𝑦)𝑓(𝑥, 𝑦) 𝑑𝑥 𝑑𝑦 = ∫₀¹ ∫₀¹ (4𝑥²𝑦 + 4𝑥𝑦²) 𝑑𝑥 𝑑𝑦

𝐸(𝑋 + 𝑌) = ∫₀¹ (4𝑦/3 + 2𝑦²) 𝑑𝑦 = [4𝑦²/6 + 2𝑦³/3]₀¹ = 2/3 + 2/3 = 4/3

Hence 𝐸(𝑋 + 𝑌) = 𝐸(𝑋) + 𝐸(𝑌)

𝐸(𝑋𝑌) = ∫_{−∞}^{∞} ∫_{−∞}^{∞} 𝑥𝑦 𝑓(𝑥, 𝑦) 𝑑𝑥 𝑑𝑦 = ∫₀¹ ∫₀¹ 4𝑥²𝑦² 𝑑𝑥 𝑑𝑦 = ∫₀¹ (4𝑦²/3) 𝑑𝑦 = 4/9
Hence 𝐸(𝑋𝑌) = 𝐸(𝑋)𝐸(𝑌). Moreover 𝑝1 (𝑥)𝑝2 (𝑦) = (2𝑥)(2𝑦) = 4𝑥𝑦 = 𝑓(𝑥, 𝑦), so X and Y are independent.

4. Let X and Y be random variables having the joint density function


f(x, y) = { 𝑥𝑦/96,  0 ≤ 𝑥 ≤ 4, 1 ≤ 𝑦 ≤ 5
          { 0,       elsewhere

Evaluate the following


(i) 𝑃(1 < 𝑥 < 2, 2 < 𝑦 < 3)
(ii) 𝑃(𝑥 ≥ 3, 𝑦 ≤ 2)
(iii) 𝑃(𝑦 ≤ 𝑥)
(iv) 𝑃(𝑦 > 𝑥)
(v) 𝑃(𝑥 + 𝑦 ≤ 3)
(vi) 𝑃(𝑥 + 𝑦 > 3)
Solution:

(i) 𝑃(1 < 𝑥 < 2, 2 < 𝑦 < 3) = ∫₁² ∫₂³ (𝑥𝑦/96) 𝑑𝑦 𝑑𝑥 = 5/128

(ii) 𝑃(𝑥 ≥ 3, 𝑦 ≤ 2) = ∫₃⁴ ∫₁² (𝑥𝑦/96) 𝑑𝑦 𝑑𝑥 = 7/128

(iii) 𝑃(𝑦 ≤ 𝑥) = ∫₁⁴ ∫₁ˣ (𝑥𝑦/96) 𝑑𝑦 𝑑𝑥 = ∫₁⁴ [𝑥𝑦²/192]₁ˣ 𝑑𝑥 = (1/192) ∫₁⁴ (𝑥³ − 𝑥) 𝑑𝑥 = 75/256

(iv) 𝑃(𝑦 > 𝑥) = 1 − 𝑃(𝑦 ≤ 𝑥) = 1 − 75/256 = 181/256

(v) 𝑃(𝑥 + 𝑦 ≤ 3) = ∫₀² ∫₁^{3−𝑥} (𝑥𝑦/96) 𝑑𝑦 𝑑𝑥 = ∫₀² [𝑥𝑦²/192]₁^{3−𝑥} 𝑑𝑥 = (1/192) ∫₀² [𝑥(3 − 𝑥)² − 𝑥] 𝑑𝑥 = (1/192) ∫₀² (8𝑥 + 𝑥³ − 6𝑥²) 𝑑𝑥 = 4/192 = 1/48

(vi) 𝑃(𝑥 + 𝑦 > 3) = 1 − 𝑃(𝑥 + 𝑦 ≤ 3) = 1 − 1/48 = 47/48

5. Verify that f(x, y) = { 𝑒^{−(𝑥+𝑦)},  𝑥 ≥ 0, 𝑦 ≥ 0; 0, elsewhere } is a density function of a joint probability distribution. Then evaluate the following:

(i) 𝑃(1/2 < 𝑥 < 2, 0 < 𝑦 < 4) (ii) 𝑃(𝑥 < 1) (iii) 𝑃(𝑥 > 𝑦) (iv) 𝑃(𝑥 + 𝑦 ≤ 1).

Solution: Given 𝑓(𝑥, 𝑦) ≥ 0

∫_{−∞}^{∞} ∫_{−∞}^{∞} 𝑓(𝑥, 𝑦)𝑑𝑥 𝑑𝑦 = ∫₀^∞ ∫₀^∞ 𝑒^{−(𝑥+𝑦)} 𝑑𝑥 𝑑𝑦 = (∫₀^∞ 𝑒^{−𝑥} 𝑑𝑥)(∫₀^∞ 𝑒^{−𝑦} 𝑑𝑦) = (0 + 1)(0 + 1) = 1.
Therefore, 𝑓(𝑥, 𝑦) is a density function.

(i) 𝑃(1/2 < 𝑥 < 2, 0 < 𝑦 < 4) = ∫_{1/2}^{2} ∫₀⁴ 𝑒^{−(𝑥+𝑦)} 𝑑𝑦 𝑑𝑥 = (∫_{1/2}^{2} 𝑒^{−𝑥} 𝑑𝑥)(∫₀⁴ 𝑒^{−𝑦} 𝑑𝑦) = (𝑒^{−1/2} − 𝑒^{−2})(1 − 𝑒^{−4}).

(ii) The marginal density function of 𝑥 is


𝑝1 (𝑥) = ∫_{−∞}^{∞} 𝑓(𝑥, 𝑦) 𝑑𝑦 = ∫₀^∞ 𝑒^{−(𝑥+𝑦)} 𝑑𝑦 = 𝑒^{−𝑥} ∫₀^∞ 𝑒^{−𝑦} 𝑑𝑦 = 𝑒^{−𝑥}

Therefore, 𝑃(𝑥 < 1) = ∫₀¹ 𝑝1 (𝑥) 𝑑𝑥 = ∫₀¹ 𝑒^{−𝑥} 𝑑𝑥 = 1 − 1/𝑒.

(iii) 𝑃(𝑥 ≤ 𝑦) = ∫₀^∞ {∫₀^𝑦 𝑒^{−(𝑥+𝑦)} 𝑑𝑥} 𝑑𝑦 = ∫₀^∞ 𝑒^{−𝑦} (1 − 𝑒^{−𝑦}) 𝑑𝑦 = ∫₀^∞ (𝑒^{−𝑦} − 𝑒^{−2𝑦}) 𝑑𝑦 = 1 − 1/2 = 1/2

Therefore, 𝑃(𝑥 > 𝑦) = 1 − 𝑃(𝑥 ≤ 𝑦) = 1 − 1/2 = 1/2.

(iv) 𝑃(𝑥 + 𝑦 ≤ 1) = ∬_A 𝑓(𝑥, 𝑦)𝑑𝐴

= ∫₀¹ ∫₀^{1−𝑥} 𝑒^{−(𝑥+𝑦)} 𝑑𝑦 𝑑𝑥 = ∫₀¹ 𝑒^{−𝑥} {∫₀^{1−𝑥} 𝑒^{−𝑦} 𝑑𝑦} 𝑑𝑥 = ∫₀¹ 𝑒^{−𝑥} {1 − 𝑒^{−(1−𝑥)}} 𝑑𝑥

= ∫₀¹ (𝑒^{−𝑥} − 𝑒^{−1}) 𝑑𝑥 = 1 − 2/𝑒.
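The closed form 1 − 2/e ≈ 0.2642 can be checked by integrating the density numerically over the triangle x + y ≤ 1 (a midpoint-rule sketch; the grid size n = 600 is an arbitrary choice, and the staircase approximation of the boundary keeps the result only to about three decimals):

```python
import math

def p_region(f, ax, bx, ay, by, inside, n=600):
    """Midpoint-rule estimate of the integral of f over the part of
    [ax,bx] x [ay,by] where inside(x, y) holds."""
    hx, hy = (bx - ax) / n, (by - ay) / n
    total = 0.0
    for i in range(n):
        x = ax + (i + 0.5) * hx
        for j in range(n):
            y = ay + (j + 0.5) * hy
            if inside(x, y):
                total += f(x, y)
    return total * hx * hy

f = lambda x, y: math.exp(-(x + y))
est = p_region(f, 0, 1, 0, 1, lambda x, y: x + y <= 1)
exact = 1 - 2 / math.e
print(abs(est - exact) < 1e-3)   # True
```

Restricting the rectangle to the unit square is enough here because the region x + y ≤ 1, x, y ≥ 0 lies entirely inside it.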
Random variable

Random variable is a real number X connected with the outcome of a random experiment E. For
example, if E consists of three tosses of a coin, one can consider the random variable which is the
number of heads (0, 1, 2 or 3).

Outcome HHH HHT HTH THH TTH THT HTT TTT

Value of X 3 2 2 2 1 1 1 0

Let S denote the sample space of a random experiment. A random variable is a rule which assigns a numerical value to each outcome of the experiment. Thus, a random variable is a function X(ω) with domain S and range (−∞, ∞) such that for every real number a, the event {ω : X(ω) ≤ a} ∈ B, the field of subsets of S. It is denoted as f: S → R.

Note that all the outcomes of the experiment are associated with a unique number. Therefore, f is an
example of a random variable. Usually, a random variable is denoted by letters such as X, Y, Z etc. The
image set of the random variable may be written as f(S) = {0, 1, 2, 3}.

There are two types of random variables. They are;

1. Discrete Random Variable (DRV)


2. Continuous Random Variable (CRV).

Discrete Random Variable: A discrete random variable is one which takes only a countable number
of distinct values such as 0, 1, 2, 3, … . Discrete random variables are usually (but not necessarily)
counts. If a random variable takes at most a countable number of values, it is called a discrete random
variable. In other words, a real valued function defined on a discrete sample space is called a discrete
random variable.

Examples of Discrete Random Variable:

(i) In the experiment of throwing a die, define X as the number that is obtained. Then X takes any
of the values 1 – 6. Thus, X(S) = {1, 2, 3…6} which is a finite set and hence X is a DRV.
(ii) If X be the random variable denoting the number of marks scored by a student in a subject of
an examination, then X(S) = {0, 1, 2, 3,…100}. Then, X is a DRV.
(iii) The number of children in a family is a DRV.
(iv) The number of defective light bulbs in a box of ten is a DRV.
Probability Mass Function: Suppose X is a one-dimensional discrete random variable taking at most
a countably infinite number of values x1, x2, …. With each possible outcome xi, one can associate a
number pi = P(X = xi) = p(xi), called the probability of xi.

The numbers p(xi); i = 1, 2, … must satisfy the following conditions:

(i) 𝑝(𝑥𝑖 ) ≥ 0 ∀ 𝑖,
(ii) ∑ᵢ₌₁^∞ 𝑝(𝑥𝑖 ) = 1.

This function 𝑝 is called the probability mass function of the random variable X and the set
{𝑥𝑖 , 𝑝(𝑥𝑖 )} is called the probability distribution of the random variable X.

Continuous Random Variable: A continuous random variable is not defined at specific values.
Instead, it is defined over an interval of values, and is represented by the area under a curve. Thus, a
random variable X is said to be continuous if it can take all possible values between certain limits. In
other words, a random variable is said to be continuous when its different values cannot be put in 1-1
correspondence with a set of positive integers. Here, the probability of observing any single value is
equal to zero, since the number of values which may be assumed by the random variable is infinite.

A continuous random variable is a random variable that (at least conceptually) can be measured
to any desired degree of accuracy.

Examples of Continuous Random Variable:

(i) Rainfall in a particular area can be treated as CRV.


(ii) Age, height and weight related problems can be included under CRV.
(iii) The amount of sugar in an orange is a CRV.
(iv) The time required to run a mile is a CRV.

Important Remark: For a DRV, the probability at a point, P(X = c), can be nonzero for some fixed c. For a CRV, however, the probability at any single point is always zero,

i.e., P(X = c) = 0 for every value of c.

Probability Density Function: The probability density function (p.d.f) of a random variable X usually
denoted by 𝑓𝑥 (𝑥) or simply by 𝑓(𝑥) has the following obvious properties:

i) 𝑓(𝑥) ≥ 0, −∞ < 𝑥 < ∞
ii) ∫_{−∞}^{∞} 𝑓(𝑥)𝑑𝑥 = 1
iii) The probability 𝑃(𝐸) given by 𝑃(𝐸) = ∫_E 𝑓(𝑥)𝑑𝑥 is well defined for any event E.
If f (x) is the p.d.f of x, then the probability that x belongs to A, where A is some interval (a, b) is given
by the integral of f (x) over that interval.

i.e., 𝑃(𝑋 ∈ 𝐴) = ∫_a^b 𝑓(𝑥)𝑑𝑥

Cumulative Distribution Function: The cumulative distribution function of a continuous random variable is defined as 𝐹(𝑥) = ∫_{−∞}^{𝑥} 𝑓(𝑡)𝑑𝑡 for −∞ < 𝑥 < ∞.

Mean/Expectation, Variance and Standard deviation of Discrete Random Variable (DRV):

The mean or expected value of a DRV X is defined as 𝜇 = 𝐸(𝑋) = ∑𝑥𝑝(𝑥)

The variance of a DRV X is defined as 𝜎 2 = 𝑉𝑎𝑟(𝑋) = 𝐸[𝑋 − 𝐸(𝑋)]2 = ∑𝑛𝑖=1(𝑥 − 𝜇)2 𝑝(𝑥)

The standard deviation of a DRV X is given by 𝜎 = √𝑉(𝑋).

Mean/Expectation, Variance and Standard deviation of Continuous Random Variable (CRV):

The mean or expected value of a CRV X is defined as 𝜇 = 𝐸(𝑋) = ∫_{−∞}^{∞} 𝑥 𝑓(𝑥)𝑑𝑥

The variance of a CRV X is defined as 𝑉𝑎𝑟(𝑋) = 𝜎² = ∫_{−∞}^{∞} 𝑥² 𝑓(𝑥)𝑑𝑥 − 𝜇²

The standard deviation of a CRV X is given by 𝜎 = √𝑉𝑎𝑟(𝑋).

Note: For computation purpose, we use


i) 𝜎 2 = 𝑉(𝑋) = 𝐸[𝑋 2 ] − [𝐸(𝑋)]2
ii) 𝐸(𝑋 2 ) = ∑𝑥 2 𝑝(𝑥)

Properties:
𝐸(𝑎) = 𝑎
𝐸(𝑎𝑋 + 𝑏) = 𝐸(𝑎𝑋) + 𝐸(𝑏) = 𝑎𝐸(𝑋) + 𝑏
𝐸(𝑋 + 𝑌) = 𝐸(𝑋) + 𝐸(𝑌)
𝑉(𝑎) = 0

𝑉(𝑎𝑋) = 𝑎2 𝑉(𝑋)
𝑉(𝑎𝑋 + 𝑏) = 𝑎2 𝑉(𝑋)
1. For the following distribution find 𝐸(𝑋), 𝐸(2𝑋 + 4), 𝐸(𝑋 2 + 2), 𝑉(𝑋), 𝑆𝐷(𝑋), 𝑉(2𝑋).

x -2 0 1 2
𝑝(𝑥) 0.2 0.4 0.3 0.1

Solution:
𝐸(𝑋) = ∑𝑥𝑝(𝑥) = (−2 × 0.2) + (0 × 0.4) + (1 × 0.3) + (2 × 0.1) = 0.1

𝐸(2𝑋 + 4) = 2𝐸(𝑋) + 4 = (2 × 0.1) + 4 = 4.2

𝐸(𝑋²) = ∑𝑥²𝑝(𝑥) = ((−2)² × 0.2) + (0² × 0.4) + (1² × 0.3) + (2² × 0.1) = 1.5

𝐸(𝑋² + 2) = 𝐸(𝑋²) + 𝐸(2) = 𝐸(𝑋²) + 2 = 1.5 + 2 = 3.5

𝑉(𝑋) = 𝐸(𝑋²) − [𝐸(𝑋)]² = 1.5 − 0.1² = 1.49

𝑆𝐷(𝑋) = √𝑉(𝑋) = √1.49 = 1.2207

𝑉(2𝑋) = 2² 𝑉(𝑋) = 4 × 1.49 = 5.96
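A compact sketch (helper name `drv_stats` is illustrative) reproduces these numbers and the linearity properties used above:

```python
import math

def drv_stats(xs, probs):
    """Mean, variance and standard deviation of a discrete random variable."""
    assert abs(sum(probs) - 1) < 1e-9, "probabilities must sum to 1"
    mean = sum(x * p for x, p in zip(xs, probs))
    var = sum(x * x * p for x, p in zip(xs, probs)) - mean ** 2
    return mean, var, math.sqrt(var)

mean, var, sd = drv_stats([-2, 0, 1, 2], [0.2, 0.4, 0.3, 0.1])
print(round(mean, 4), round(var, 4), round(sd, 4))   # 0.1 1.49 1.2207
# linearity: E(2X + 4) = 2 E(X) + 4, and V(2X) = 4 V(X)
print(round(2 * mean + 4, 4), round(4 * var, 4))     # 4.2 5.96
```

The same helper covers the following problems, since each one reduces to a value list and a probability list.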

2. Find the missing probability in the following distribution then compute E(X) and V(X)
x 2 4 6 8 10
𝑝(𝑥) 1/8 1/6 A 1/4 1/12

Solution: Let A be the missing probability. Since the probabilities must sum to 1,

1/8 + 1/6 + 𝐴 + 1/4 + 1/12 = 1

𝐴 + 5/8 = 1 ⇒ 𝐴 = 3/8

𝐸(𝑋) = ∑𝑥𝑝(𝑥) = 1/4 + 2/3 + 9/4 + 2 + 5/6 = 6

𝐸(𝑋²) = ∑𝑥²𝑝(𝑥) = 4/8 + 16/6 + (36 × 3/8) + 64/4 + 100/12 = 41

𝑉(𝑋) = 𝐸(𝑋²) − [𝐸(𝑋)]² = 41 − 6² = 5

3. A person throws a fair coin, He gets Rs 100/- if head appears and loses Rs. 50/- if tail appears. Find
the expected gain and SD(X).
Solution: Let X be the amount he gets.
The probability distribution is given by
x 100 -50
p(x) 1/2 1/2

𝐸(𝑋) = ∑𝑥𝑝(𝑥) = 50 − 25 = 25
𝐸(𝑋²) = (100² × 1/2) + ((−50)² × 1/2) = 6250
𝑉(𝑋) = 𝐸(𝑋²) − [𝐸(𝑋)]² = 6250 − 25² = 5625

𝑆𝐷(𝑋) = √5625 = 75

4. A bag contains 5 red and 3 blue marbles. A person draws 2 marbles at random. If he is to receive
Rs 5 for every red marble he draws and Rs 8 for every blue marble he draws. What is his expectation?
Solution: Let X be the amount he gets.
Given 𝑆 = {5𝑅, 3𝐵}

P[X = 10] = P[both red] = 5C2/8C2 = 10/28

P[X = 13] = P[one red and one blue] = (5C1 × 3C1)/8C2 = 15/28

P[X = 16] = P[both blue] = 3C2/8C2 = 3/28

The probability distribution is given by


x 10 13 16
p(x) 10/28 15/28 3/28

𝐸(𝑋) = ∑𝑥𝑝(𝑥) = 100/28 + 195/28 + 48/28 = 343/28 = 12.25
28 28 28 28
5. A box contains 12 items of which 4 are defective. A sample of 3 items selected from the box. Let X
denote the number of defective items in the sample. Find the probability distribution of X. Determine
the mean and standard deviation of the distribution.

Solution: There are 12 items


4 - defective
8 – Non defective
P[X = 0] = P[no defective] = 8C3/12C3 = 56/220 = 14/55

P[X = 1] = P[1 defective] = (4C1 × 8C2)/12C3 = 28/55

P[X = 2] = P[2 defective] = (4C2 × 8C1)/12C3 = 12/55

P[X = 3] = P[3 defective] = 4C3/12C3 = 1/55

The probability distribution is given by

x 0 1 2 3
p(x) 14/55 28/55 12/55 1/55

E(X) = 28/55 + 24/55 + 3/55 = 55/55 = 1

V(X) = E(X²) − [E(X)]²

E(X²) = ∑x²p(x) = 28/55 + 48/55 + 9/55 = 17/11 = 1.5454
𝑉(𝑋) = 1.5454 − 1 = 0.5454

𝑆𝐷(𝑋) = √𝑉(𝑋) = 0.7385
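X here follows a hypergeometric distribution, so the whole table can be generated at once (a Python sketch using math.comb):

```python
from fractions import Fraction as F
from math import comb, sqrt

# Problem 5: X ~ hypergeometric (12 items, 4 defective, sample of 3)
N, D, n = 12, 4, 3
dist = {k: F(comb(D, k) * comb(N - D, n - k), comb(N, n)) for k in range(n + 1)}

mean = sum(k * p for k, p in dist.items())      # E(X)
ex2 = sum(k * k * p for k, p in dist.items())   # E(X^2)
var = ex2 - mean**2                             # V(X)
print(dist, mean, float(var), sqrt(var))
```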

6. A man wins if he gets 5 on a single throw of a die. He loses if he gets 2 or 4. If he wins, he gets Rs 50; if he loses, he gets Rs 10; otherwise he has to pay Rs 15. Find his expected gain.
Solution: let 𝑆 = {1,2,3,4,5,6}
Let X be the amount he gets

P[X=50] = P[he gets 5] = 1/6

P[X=10] = P[he gets 2 or 4] = 1/6 + 1/6 = 2/6

P[X=−15] = P[he gets 1, 3 or 6] = 1/6 + 1/6 + 1/6 = 3/6
The probability distribution is given by

x 50 10 -15
p(x) 1/6 2/6 3/6

E(X) = (50 × 1/6) + (10 × 2/6) + (−15 × 3/6) = 25/6 = 4.1667

7. The probability mass function of a discrete random variable X is given below:

x 0 1 2 3 4 5 6
P(X = x) = p(x) k 3k 5k 7k 9k 11k 13k

Find (i) k (ii) 𝑃(𝑋 ≥ 5); (iii) 𝑃(2 ≤ 𝑋 < 5); (iv) 𝑃(𝑋 < 5) and (v) E(X) (vi) Var(X)

Solution: To find the value of k, consider the sum of all the probabilities, which equals 49k. Equating this to 1,

k = 1/49. The distribution of X may now be written as

x 0 1 2 3 4 5 6
P(X = x) = p(x) 1/49 3/49 5/49 7/49 9/49 11/49 13/49

P[X ≥ 5] = P[X = 5] + P[X = 6] = 11/49 + 13/49 = 24/49.

P[2 ≤ X < 5] = P[X = 2] + P[X = 3] + P[X = 4] = 5/49 + 7/49 + 9/49 = 21/49.

P(X < 5) = P(X = 0) + P(X = 1) + P(X = 2) + P(X = 3) + P(X = 4) = 25/49

OR

P(X < 5) = 1 − P(X ≥ 5) = 1 − 24/49 = 25/49

E(X) = ∑xp(x) = 203/49 = 4.1428

V(X) = E(X²) − [E(X)]²

E(X²) = ∑x²p(x) = 973/49 = 19.8571

V(X) = 973/49 − (203/49)² = 132/49 = 2.6939
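An exact check of every part (a Python sketch; p(x) = (2x + 1)/49 reproduces the table above):

```python
from fractions import Fraction as F

# Problem 7: p(x) = (2x + 1)k for x = 0..6, with k = 1/49
k = F(1, 49)
p = {x: (2 * x + 1) * k for x in range(7)}
assert sum(p.values()) == 1

p_ge5 = p[5] + p[6]                          # P(X >= 5)
p_2to4 = p[2] + p[3] + p[4]                  # P(2 <= X < 5)
p_lt5 = 1 - p_ge5                            # P(X < 5)
mean = sum(x * px for x, px in p.items())    # E(X)
var = sum(x * x * px for x, px in p.items()) - mean**2
print(p_ge5, p_2to4, p_lt5, float(mean), float(var))
```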


8. The probability mass function of a discrete random variable X is given below:

x 0 1 2 3 4 5 6 7
P(X = x) = p(x) 0 k 2k 2k 3k k² 2k² 7k² + k

Find (i) k; (ii) 𝑃(𝑋 < 6) (iii) 𝑃(𝑋 ≥ 6); (iv) 𝑃(0 < 𝑋 < 5); (v) E(X)

Solution: ∑𝑝(𝑥) = 1

10k² + 9k = 1

𝑘 = 0.1, −1

𝑘 ≠ −1 (Probability of an event cannot be negative)

𝑘 = 0.1

P(X < 6) = 8k + k² = 8 × 0.1 + 0.1² = 0.81

OR

𝑃(𝑋 < 6) = 1 − 𝑃(𝑋 ≥ 6) = 1 − 0.19 = 0.81

P(X ≥ 6) = 9k² + k = 9 × 0.1² + 0.1 = 0.19

𝑃(0 < 𝑋 < 5) = 8𝑘 = 8 × 0.1 = 0.8

𝐸(𝑋) = ∑𝑥𝑝(𝑥) = 3.66
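The quadratic step and the resulting probabilities can be checked numerically (a Python sketch):

```python
import math

# Problem 8: the probabilities sum to 10k^2 + 9k = 1; solve the quadratic
a, b, c = 10, 9, -1
disc = math.sqrt(b * b - 4 * a * c)
roots = [(-b + disc) / (2 * a), (-b - disc) / (2 * a)]
k = max(roots)                       # discard the negative root k = -1
print(roots, k)

p = [0, k, 2*k, 2*k, 3*k, k**2, 2*k**2, 7*k**2 + k]
p_lt6, p_ge6 = sum(p[:6]), sum(p[6:])
mean = sum(x * px for x, px in enumerate(p))
print(p_lt6, p_ge6, mean)
```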

9. A random variable X has p(x) = 2⁻ˣ, x = 1, 2, 3, ⋯. Show that p(x) is a probability function. Also find
(i) P(X is even) (ii) P(X is divisible by 3) (iii) P(X ≥ 5).

Solution: p(x) = 2⁻ˣ = 1/2ˣ, so p(x) ≥ 0 for all x.

∑p(x) = ∑_{x=1}^∞ 2⁻ˣ = 1/2 + 1/2² + 1/2³ + ⋯ = a/(1 − r) = (1/2)/(1 − 1/2) = 1

Hence p(x) is a probability function.

P(X is even) = ∑_{x=2,4,6,…} 2⁻ˣ = 1/2² + 1/2⁴ + 1/2⁶ + ⋯ = (1/2²)/(1 − 1/2²) = 1/3

P(X is divisible by 3) = ∑_{x=3,6,9,…} 2⁻ˣ = 1/2³ + 1/2⁶ + 1/2⁹ + ⋯ = (1/2³)/(1 − 1/2³) = 1/7

P(X ≥ 5) = ∑_{x=5}^∞ 2⁻ˣ = 1/2⁵ + 1/2⁶ + 1/2⁷ + ⋯ = (1/2⁵)/(1 − 1/2) = 1/16
OR
P(X ≥ 5) = 1 − P(X < 5) = 1 − (1/2 + 1/2² + 1/2³ + 1/2⁴) = 1 − 15/16 = 1/16
Problems on continuous random variables.

1. The diameter of an electric cable, say X, is assumed to be a continuous random variable with p.d.f
f(x) = { 6x(1 − x), 0 ≤ x ≤ 1
       { 0,         otherwise
(i) Check that above is p.d.f.

(ii) Find P(2/3 < X < 1)

(iii) Determine a number b such that 𝑃(𝑋 < 𝑏) = 𝑃(𝑋 > 𝑏).

Solution: (i) 𝑓(𝑥) ≥ 0 in the given interval.

∫_{−∞}^{∞} f(x)dx = ∫_{−∞}^{0} f(x)dx + ∫_0^1 f(x)dx + ∫_1^{∞} f(x)dx

= 0 + ∫_0^1 6x(1 − x) dx + 0

= [6x²/2 − 6x³/3] evaluated from x = 0 to 1

= 1

(ii) P(2/3 < x < 1) = ∫_{2/3}^1 f(x)dx = ∫_{2/3}^1 (6x − 6x²)dx = [3x² − 2x³] evaluated from 2/3 to 1

= 1 − ((3 × 4/9) − (2 × 8/27)) = 1 − (4/3 − 16/27) = 7/27

(iii) 𝑃(𝑋 < 𝑏) = 𝑃(𝑋 > 𝑏)

∫_{−∞}^{b} f(x)dx = ∫_{b}^{∞} f(x)dx

∫_0^b f(x)dx = ∫_b^1 f(x)dx

6∫_0^b x(1 − x)dx = 6∫_b^1 x(1 − x)dx

(b²/2 − b³/3) = [(1/2 − 1/3) − (b²/2 − b³/3)]

b² − 2b³/3 = 1/6

4𝑏 3 − 6𝑏 2 + 1 = 0

b = 0.5, 1.3660, −0.3660
Of these, b = 0.5 is the only root lying between 0 and 1 and satisfying the given condition.
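Both parts can be verified from the CDF F(x) = 3x² − 2x³, solving F(b) = 1/2 by bisection (a Python sketch):

```python
# Problem 1: F(x) = 3x^2 - 2x^3 is the CDF of f(x) = 6x(1 - x) on [0, 1]
def F(x):
    return 3 * x**2 - 2 * x**3

p_total = F(1) - F(0)          # should be 1
p_tail = F(1) - F(2 / 3)       # P(2/3 < X < 1)

# solve F(b) = 1/2 by bisection (F is increasing on [0, 1])
lo, hi = 0.0, 1.0
for _ in range(60):
    b = (lo + hi) / 2
    if F(b) < 0.5:
        lo = b
    else:
        hi = b
print(p_total, p_tail, b)
```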

2. Suppose that the error in the reaction temperature, in ◦C, for a controlled laboratory
experiment is a continuous random variable X having the probability density function

f(x) = { x²/3, −1 < x < 2
       { 0,    elsewhere

(a) Verify that f (x) is a probability density function.

(b) Find P(0 < X ≤ 1).

Solution: a) 𝑓(𝑥) ≥ 0 in the given interval.

∫_{−∞}^{∞} f(x)dx = ∫_{−1}^{2} (x²/3) dx = 1. Hence the given function is a p.d.f.

b) P(0 < X ≤ 1) = ∫_0^1 (x²/3) dx = 1/9.

3. The length of time (in minutes) that a certain lady speaks on the telephone is found to be a random variable with probability function
f(x) = { Ae^(−x/5) for x ≥ 0
       { 0         otherwise

(i) Find A

(ii) Find the probability that she will speak on the phone

(a) more than 10 minutes (b) less than 5 minutes (c) between 5 & 10 minutes.


Solution: (i) Given f(x) is a p.d.f., i.e., ∫_{−∞}^{∞} f(x)dx = 1

∫_{−∞}^{0} f(x)dx + ∫_0^{∞} f(x)dx = 1

0 + ∫_0^{∞} Ae^(−x/5) dx = 1

[−5Ae^(−x/5)] evaluated from 0 to ∞ = 1

−5A(0 − 1) = 1

5A = 1

A = 1/5

(ii) (a) P(X > 10) = ∫_{10}^{∞} f(x)dx = ∫_{10}^{∞} (1/5)e^(−x/5) dx = [−e^(−x/5)] evaluated from 10 to ∞ = e^(−2) = 0.1353

(b) P(X < 5) = ∫_{−∞}^{5} f(x)dx = ∫_0^5 (1/5)e^(−x/5) dx = [−e^(−x/5)] evaluated from 0 to 5 = −(e^(−1) − 1) = 1 − e^(−1) = 0.6321

(c) P(5 < X < 10) = ∫_5^{10} f(x)dx = ∫_5^{10} (1/5)e^(−x/5) dx = e^(−1) − e^(−2) = 0.2325
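All three probabilities follow from the tail P(X > t) = e^(−t/5) of this exponential density (a Python sketch):

```python
import math

# Problem 3: f(x) = (1/5) e^(-x/5) for x >= 0, so the tail is P(X > t) = e^(-t/5)
def tail(t):
    return math.exp(-t / 5)

p_more_10 = tail(10)               # (a) more than 10 minutes
p_less_5 = 1 - tail(5)             # (b) less than 5 minutes
p_5_to_10 = tail(5) - tail(10)     # (c) between 5 and 10 minutes
print(p_more_10, p_less_5, p_5_to_10)
```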

4. Suppose X is a continuous random variable with the following probability density function
𝑓(𝑥) = 3𝑥 2 for 0 < 𝑥 < 1 . Find the mean and variance of X.


Solution: Mean = E(X) = μ = ∫_{−∞}^{∞} x f(x)dx

= ∫_{−∞}^{0} x f(x)dx + ∫_0^1 x f(x)dx + ∫_1^{∞} x f(x)dx

= 0 + ∫_0^1 (x × 3x²)dx + 0 = ∫_0^1 3x³ dx = 3/4.

V(X) = E(X²) − [E(X)]²

Variance = σ² = ∫_{−∞}^{∞} x² f(x)dx − [E(X)]²

= ∫_0^1 x² f(x)dx − [E(X)]²

= ∫_0^1 (x² × 3x²)dx − (3/4)²

= ∫_0^1 3x⁴ dx − (3/4)² = 3/5 − 0.75² = 3/80 = 0.0375.
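The moment integrals can be approximated numerically as a check (a Python sketch using the trapezoidal rule):

```python
# Problem 4: confirm E(X) = 3/4 and V(X) = 3/80 for f(x) = 3x^2 on (0, 1)
def trapz(g, a, b, n=100_000):
    """Trapezoidal-rule approximation of the integral of g over [a, b]."""
    h = (b - a) / n
    return h * (sum(g(a + i * h) for i in range(1, n)) + (g(a) + g(b)) / 2)

f = lambda x: 3 * x**2
mean = trapz(lambda x: x * f(x), 0, 1)            # E(X)
var = trapz(lambda x: x**2 * f(x), 0, 1) - mean**2  # V(X)
print(mean, var)
```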
5. X is a random variable with probability function given by

𝑘𝑥, 0 ≤ 𝑥 < 2
𝑓(𝑥) = { 2𝑘, 2 ≤ 𝑥 < 4
−𝑘𝑥 + 6𝑘, 4 ≤ 𝑥 < 6

Find 𝑘 and mean value of X.


Solution: ∫_{−∞}^{∞} f(x)dx = 1

∫_0^6 f(x)dx = 1

∫_0^2 f(x)dx + ∫_2^4 f(x)dx + ∫_4^6 f(x)dx = 1

∫_0^2 kx dx + ∫_2^4 2k dx + ∫_4^6 (−kx + 6k) dx = 1

2k + 4k + 2k = 1

k = 1/8

E(X) = ∫_0^2 x f(x)dx + ∫_2^4 x f(x)dx + ∫_4^6 x f(x)dx

E(X) = ∫_0^2 (x²/8) dx + ∫_2^4 (x/4) dx + ∫_4^6 ((−x² + 6x)/8) dx

E(X) = 3
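The piecewise integrals can be evaluated exactly with fractions (a Python sketch; each antiderivative is written out per piece):

```python
from fractions import Fraction as F

# Problem 5: with k = 1/8, integrate each piece of f in closed form
k = F(1, 8)

# total probability: ∫0..2 kx dx + ∫2..4 2k dx + ∫4..6 (6k - kx) dx
area = (k * F(2**2, 2)               # k·x²/2 from 0 to 2
        + 2 * k * (4 - 2)            # 2k·x  from 2 to 4
        + (6 * k * (6 - 4) - k * F(6**2 - 4**2, 2)))  # 6k·x - k·x²/2 from 4 to 6

# E(X): ∫0..2 kx² dx + ∫2..4 2kx dx + ∫4..6 (6kx - kx²) dx
mean = (k * F(2**3, 3)               # k·x³/3 from 0 to 2
        + k * (4**2 - 2**2)          # k·x²   from 2 to 4
        + (3 * k * (6**2 - 4**2) - k * F(6**3 - 4**3, 3)))  # 3k·x² - k·x³/3

print(area, mean)
```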

6. Suppose that a continuous random variable X has the following probability density function

f(x) = { 0.25e^(−0.25x), x > 0
       { 0,              elsewhere

Find the number 𝑐 such that 𝑃[𝑋 < 𝑐] = 0.05

Solution:

P[X < c] = ∫_{−∞}^{c} f(x)dx = ∫_{−∞}^{0} f(x)dx + ∫_0^c f(x)dx

P[X < c] = ∫_0^c 0.25e^(−0.25x) dx

P[X < c] = −(e^(−0.25c) − 1)

0.05 = 1 − e^(−0.25c)

e^(−0.25c) = 0.95

−0.25𝑐 = ln (0.95)

−0.25𝑐 = −0.05129

𝑐 = 0.2051
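The same value of c follows directly from inverting the CDF (a Python sketch):

```python
import math

# Problem 6: exponential CDF P(X < c) = 1 - e^(-0.25c); solve for c at 0.05
rate = 0.25
c = -math.log(1 - 0.05) / rate
print(c)

# plug c back into the CDF as a sanity check
assert abs((1 - math.exp(-rate * c)) - 0.05) < 1e-12
```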

7. Examine whether the following is a density function

f(x) = { 2x,     0 < x ≤ 1
       { 4 − 2x, 1 < x < 2
       { 0,      elsewhere

Solution: For 𝑓(𝑥) to be a probability density function, the following two conditions are to be
satisfied.

(i) f(x) ≥ 0 and (ii) ∫_{−∞}^{∞} f(x)dx = 1

𝑓(𝑥) ≥ 0 in the given interval.


∫_0^1 2x dx + ∫_1^2 (4 − 2x)dx = 2 ≠ 1

Therefore 𝑓(𝑥) is not a density function.
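A numerical integration confirms that the total area is 2 (a Python sketch using the trapezoidal rule):

```python
# Problem 7: the triangle function peaks at f(1) = 2 and encloses area 2, not 1
def f(x):
    if 0 < x <= 1:
        return 2 * x
    if 1 < x < 2:
        return 4 - 2 * x
    return 0.0

n = 40_000
h = 2 / n
area = h * (sum(f(i * h) for i in range(1, n)) + (f(0) + f(2)) / 2)
print(area)   # close to 2, so f is not a density (f/2 would be one)
```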
