
Chapter 5

A Brief Review of Probability Theory

5.1 Problem Solutions


Problem 5.1
$S$ = sample space is the collection of the 21 parts. Let $A_i$ = event that the pointer stops on the $i$th part, $i = 1, 2, \ldots, 21$. These events are exhaustive and mutually exclusive. Thus $P(A_i) = 1/21$.

a. $P(\text{even number}) = P(2 \text{ or } 4 \text{ or } \cdots \text{ or } 20) = 10/21$;

b. $P(A_{21}) = 1/21$;

c. $P(4 \text{ or } 5 \text{ or } 9) = 3/21 = 1/7$;

d. $P(\text{number} > 10) = P(11, 12, \ldots, \text{or } 21) = 11/21$.

Problem 5.2

P (A; B) = P (A) P (B)


P (A; C) = P (A) P (C)
P (B; C) = P (B) P (C)
P (A; B; C) = P (A) P (B) P (C)

Problem 5.3


a. Use a tree diagram similar to Fig. 5.2 to show that the ten orderings of three kings and two aces (grouped in the tree as $3 + 3 + 1 + 3$), each with probability $\frac{4 \cdot 3 \cdot 2 \cdot 4 \cdot 3}{52 \cdot 51 \cdot 50 \cdot 49 \cdot 48}$, give
$$P(3K,\,2A) = 10\,\frac{4 \cdot 3 \cdot 2 \cdot 4 \cdot 3}{52 \cdot 51 \cdot 50 \cdot 49 \cdot 48} = 9.2345 \times 10^{-6}$$

b. Use the tree diagram of Fig. 5.2 except that we now look at outcomes that give 4 of a kind. Since the tree diagram is for a particular denomination, we multiply by 13; the five possible positions of the odd card give
$$P(4 \text{ of a kind}) = 13 \cdot 5 \cdot \frac{4 \cdot 3 \cdot 2 \cdot 1 \cdot 48}{52 \cdot 51 \cdot 50 \cdot 49 \cdot 48} = 0.0002401$$

c. The first card can be anything. Given a particular suit on the first card, the probability of a suit match on the second card is 12/51; on the third card it is 11/50; on the fourth card it is 10/49; on the fifth and last card it is 9/48. Therefore,
$$P(\text{all same suit}) = 1 \cdot \frac{12}{51} \cdot \frac{11}{50} \cdot \frac{10}{49} \cdot \frac{9}{48} = 0.001981$$

d. For a royal flush, drawn in the order A, K, Q, J, 10,
$$P(A, K, Q, J, 10 \text{ of same suit}) = \frac{4}{52} \cdot \frac{1}{51} \cdot \frac{1}{50} \cdot \frac{1}{49} \cdot \frac{1}{48} = 1.283 \times 10^{-8}$$

e. The desired probability is
$$P(Q \mid A, K, J, 10 \text{ not all of same suit}) = \frac{4}{48} = 0.0833$$
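The products above can be checked exactly with Python's `fractions` module (a quick verification sketch, not part of the original solution):

```python
from fractions import Fraction

# Part (c): probability that all five cards share the suit of the first card.
p_same_suit = Fraction(12, 51) * Fraction(11, 50) * Fraction(10, 49) * Fraction(9, 48)

# Part (d): A, K, Q, J, 10 of one suit drawn in that specific order.
p_royal_ordered = (Fraction(4, 52) * Fraction(1, 51) * Fraction(1, 50)
                   * Fraction(1, 49) * Fraction(1, 48))

print(float(p_same_suit))      # about 0.001981
print(float(p_royal_ordered))  # about 1.283e-08
```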

Problem 5.4

a. According to (5.11), $P(A \cap B) = P(A)\,P(B)$ for statistical independence. For the given probabilities, $0.2 \times 0.5 = 0.1 \neq 0.4$, so they are not statistically independent.

b. According to (5.6), $P(A \cup B) = P(A) + P(B) - P(A \cap B) = 0.2 + 0.5 - 0.4 = 0.3$.

c. $A$ and $B$ being mutually exclusive implies that $P(A \mid B) = P(B \mid A) = 0$. $A$ and $B$ being statistically independent implies that $P(A \mid B) = P(A)$ and $P(B \mid A) = P(B)$. The only way that both of these conditions can be true is for $P(A) = P(B) = 0$.

Problem 5.5

a. The result is
$$P(AB) = 1 - P(\text{1 or more links broken}) = \left(1 - q^2\right)^2 (1 - q)$$

b. If link 4 is removed, the result is
$$P(AB \mid \text{link 4 removed}) = \left(1 - q^2\right)(1 - q)$$

c. If link 2 is removed, the result is
$$P(AB \mid \text{link 2 removed}) = \left(1 - q^2\right)^2$$

d. Removal of link 4 is more severe than removal of link 2.

Problem 5.6
Using Bayes' rule,
$$P(A \mid B) = \frac{P(B \mid A)\,P(A)}{P(B)}$$
where, by total probability,
$$P(B) = P(B \mid A)\,P(A) + P(B \mid \bar{A})\,P(\bar{A}) = (0.95)(0.45) + (0.35)(0.55) = 0.62$$
Therefore
$$P(A \mid B) = \frac{(0.95)(0.45)}{0.62} = 0.6895$$
Similarly,
$$P(A \mid \bar{B}) = \frac{P(\bar{B} \mid A)\,P(A)}{P(\bar{B})}$$
with
$$P(\bar{B}) = P(\bar{B} \mid A)\,P(A) + P(\bar{B} \mid \bar{A})\,P(\bar{A}) = (0.05)(0.45) + (0.65)(0.55) = 0.38$$
(Note also that $P(\bar{B}) = 1 - P(B)$.) Thus
$$P(A \mid \bar{B}) = \frac{(0.05)(0.45)}{0.38} = 0.0592$$
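The Bayes' rule arithmetic above is easy to confirm numerically (a verification sketch using the stated values):

```python
# Values from Problem 5.6.
p_A = 0.45
p_B_given_A = 0.95       # P(B|A)
p_B_given_notA = 0.35    # P(B|A-bar)

# Total probability, then Bayes' rule for both posteriors.
p_B = p_B_given_A * p_A + p_B_given_notA * (1 - p_A)
p_A_given_B = p_B_given_A * p_A / p_B
p_A_given_notB = (1 - p_B_given_A) * p_A / (1 - p_B)

print(p_B)              # about 0.62
print(p_A_given_B)      # about 0.6895
print(p_A_given_notB)   # about 0.0592
```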

Problem 5.7

a. $P(A_2) = 0.3$; $P(A_2, B_1) = 0.05$; $P(A_1, B_2) = 0.05$; $P(A_3, B_3) = 0.05$; $P(B_1) = 0.15$; $P(B_2) = 0.25$; $P(B_3) = 0.6$.

b. $P(A_3 \mid B_3) = 0.083$; $P(B_2 \mid A_1) = 0.091$; $P(B_3 \mid A_2) = 0.333$.

Problem 5.8
First, make a table showing outcomes giving the two values of the random variable:

Outcomes (H = head, T = tail)            X    P(X = x_i)
TTH, THT, HTT, HHH (odd # of heads)      0    4/8 = 1/2
HHT, HTH, THH, TTT (even # of heads)     1    4/8 = 1/2

The cdf is zero for $x < 0$, jumps by 1/2 at $x = 0$, jumps another 1/2 at $x = 1$, and remains at 1 out to $\infty$. The pdf consists of an impulse of weight 1/2 at $x = 0$ and an impulse of weight 1/2 at $x = 1$. It is zero elsewhere.

Problem 5.9
See the tables below for the results.

(a)
Outcomes                                        X1 = spots up    P(X1 = x_i)
(1,1)                                            2               1/36
(1,2), (2,1)                                     3               2/36
(1,3), (3,1), (2,2)                              4               3/36
(1,4), (4,1), (2,3), (3,2)                       5               4/36
(1,5), (5,1), (2,4), (4,2), (3,3)                6               5/36
(1,6), (6,1), (2,5), (5,2), (3,4), (4,3)         7               6/36
(2,6), (6,2), (3,5), (5,3), (4,4)                8               5/36
(3,6), (6,3), (4,5), (5,4)                       9               4/36
(4,6), (6,4), (5,5)                             10               3/36
(5,6), (6,5)                                    11               2/36
(6,6)                                           12               1/36

(b)
Outcomes                                                     X2    P(X2 = x_i)
(1,1), (1,3), (3,1), (2,2), (1,5), (5,1), (2,4), (4,2),       1    18/36 = 1/2
(3,3), (2,6), (6,2), (3,5), (5,3), (4,4), (4,6), (6,4),
(5,5), (6,6)
(1,2), (2,1), (1,4), (4,1), (2,3), (3,2), (1,6), (6,1),       0    18/36 = 1/2
(2,5), (5,2), (3,4), (4,3), (3,6), (6,3), (4,5), (5,4),
(5,6), (6,5)
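Both tables can be confirmed by enumerating the 36 equally likely rolls (a verification sketch, not part of the original solution):

```python
from itertools import product
from collections import Counter

# Count the sums of the 36 equally likely outcomes of two dice.
counts = Counter(a + b for a, b in product(range(1, 7), repeat=2))

p_sum_7 = counts[7] / 36   # 6/36, the most likely sum
p_sum_2 = counts[2] / 36   # 1/36
# X2 = 1 when the sum is even, 0 when odd.
p_even = sum(c for s, c in counts.items() if s % 2 == 0) / 36
```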

Problem 5.10

a. As $x \to \infty$, the cdf approaches 1. Therefore $B = 1$. Assuming continuity of the cdf, $F_X(12) = 1$ also, which says $A\,12^4 = 1$ or $A = 4.8225 \times 10^{-5}$.

b. The pdf is given by
$$f_X(x) = \frac{dF_X(x)}{dx} = 4\left(4.8225 \times 10^{-5}\right)x^3\,u(x)\,u(12 - x) = 1.929 \times 10^{-4}\,x^3\,u(x)\,u(12 - x)$$
The graph of this pdf is 0 for $x < 0$, the cubic $1.929 \times 10^{-4}\,x^3$ for $0 \le x \le 12$, and 0 for $x > 12$.

c. The desired probability is
$$P(X > 5) = 1 - F_X(5) = 1 - 4.8225 \times 10^{-5} \cdot 5^4 = 1 - 0.0301 = 0.9699$$

d. The desired probability is
$$P(4 \le X < 6) = F_X(6) - F_X(4) = 4.8225 \times 10^{-5}\left(6^4 - 4^4\right) = 0.0502$$
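The cdf arithmetic above can be checked with a few lines of Python (a verification sketch, not part of the original solution):

```python
# F_X(x) = A x^4 on [0, 12], with A fixed by F_X(12) = 1.
A = 12.0 ** -4   # = 4.8225e-5

def F(x):
    if x < 0:
        return 0.0
    return min(A * x ** 4, 1.0)

p_gt_5 = 1 - F(5)        # about 0.9699
p_4_to_6 = F(6) - F(4)   # about 0.0502
```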

Problem 5.11

a. $A = \alpha$, where the condition $\int_0^\infty A e^{-\alpha x}\,dx = 1$ must be satisfied;

b. $B = \alpha$, where the condition $\int_{-\infty}^0 B e^{\alpha x}\,dx = 1$ must be satisfied;

c. $C = \alpha/2$, where the condition $\int_{-\infty}^\infty C e^{-\alpha |x|}\,dx = 1$ must be satisfied;

d. $D = 1/\tau$, where the condition $\int_0^\tau D\,dx = 1$ must be satisfied.

Problem 5.12

a. Factor the joint pdf as
$$f_{XY}(x, y) = \sqrt{A}\,\exp\left[-|x|\right] \cdot \sqrt{A}\,\exp\left[-|y|\right]$$
With proper choice of $A$, the two separate factors are marginal pdfs. Thus $X$ and $Y$ are statistically independent.

b. Note that the pdf is the volume between a plane which intersects the $x$ and $y$ coordinate axes one unit out and the $f_{XY}$ coordinate axis at $C$. First find $C$ from the integral
$$\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} f_{XY}(x, y)\,dx\,dy = 1 \quad\text{or}\quad \int_0^1\int_0^{1-y} C\,(1 - x - y)\,dx\,dy = 1$$
This gives $C = 6$. Next find $f_X(x)$ and $f_Y(y)$ as
$$f_X(x) = \int_0^{1-x} 6\,(1 - x - y)\,dy = \begin{cases} 3\,(1 - x)^2, & 0 \le x \le 1 \\ 0, & \text{otherwise} \end{cases}$$
and
$$f_Y(y) = \int_0^{1-y} 6\,(1 - x - y)\,dx = \begin{cases} 3\,(1 - y)^2, & 0 \le y \le 1 \\ 0, & \text{otherwise} \end{cases}$$
Since the joint pdf is not equal to the product of the two marginal pdfs, $X$ and $Y$ are not statistically independent.
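The normalization $C = 6$ and the marginal $f_X(x) = 3(1-x)^2$ can be spot-checked by midpoint-rule integration (a numerical sketch, not part of the original solution):

```python
# f_XY(x, y) = 6(1 - x - y) on the triangle x >= 0, y >= 0, x + y <= 1.
d = 1e-3

# Total probability over the triangle (should be 1).
total = 0.0
for i in range(1000):
    x = (i + 0.5) * d
    for j in range(1000):
        y = (j + 0.5) * d
        if x + y <= 1:
            total += 6 * (1 - x - y) * d * d

# Marginal at x = 0.5: integrate over y in [0, 0.5]; closed form is 3(1-0.5)^2 = 0.75.
fX_half = sum(6 * (1 - 0.5 - (j + 0.5) * d) * d for j in range(500))
```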

Problem 5.13

a. $\iint f_{XY}(x, y)\,dx\,dy = 1$ gives $C = 1/32$;

b. $f_{XY}(1, 1.5) = \frac{1 + 1.5}{32} = 0.0781$;

c. $f_{XY}(x, 3) = 0$ because $y = 3$ is in the domain where $f_{XY}$ is zero;

d. By integration, $f_Y(y) = \frac{1 + 2y}{8},\ 0 \le y \le 2$, so the desired conditional pdf is
$$f_{X|Y}(x|y) = \frac{f_{XY}(x, y)}{f_Y(y)} = \frac{\frac{1}{32}(1 + xy)}{\frac{1}{8}(1 + 2y)} = \frac{1 + xy}{4\,(1 + 2y)},\quad 0 \le x \le 4,\ 0 \le y \le 2$$
Substitute $y = 1$ to get the asked-for result:
$$f_{X|Y}(x|1) = \frac{1 + x}{4\,(1 + 2)} = \frac{1 + x}{12},\quad 0 \le x \le 4$$

Problem 5.14

a. Use normalization of the pdf to unity:
$$\int_{-\infty}^{\infty} \beta x^{-2}\,u(x - \beta)\,dx = \int_\beta^\infty \beta x^{-2}\,dx = \left[-\beta x^{-1}\right]_\beta^\infty = 1$$
Hence, this is a pdf for any value of $\beta > 0$.

b. The cdf is
$$F_X(x) = \int_{-\infty}^{x} \beta z^{-2}\,u(z - \beta)\,dz = \begin{cases} 0, & x < \beta \\ 1 - \beta/x, & x \ge \beta \end{cases}$$

c. The desired probability is
$$P(X \ge 10) = 1 - P(X < 10) = 1 - F_X(10) = \begin{cases} 1, & \beta \ge 10 \\ \beta/10, & \beta < 10 \end{cases}$$

Problem 5.15

a. To find $A$, evaluate the double integral
$$\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} f_{XY}(x, y)\,dx\,dy = 1 = \int_0^\infty\int_0^\infty A\,x\,y\,e^{-(x+y)}\,dx\,dy = A\int_0^\infty x e^{-x}\,dx \int_0^\infty y e^{-y}\,dy = A$$
Thus $A = 1$.

b. Clearly, $f_X(x) = x e^{-x}\,u(x)$ and $f_Y(y) = y e^{-y}\,u(y)$.

c. Yes, because the joint pdf factors into the product of the marginal pdfs.

Problem 5.16
The result is
$$f_Y(y) = \frac{1}{2\sqrt{y}}\left[f_X(x)\big|_{x = \sqrt{y}} + f_X(x)\big|_{x = -\sqrt{y}}\right] = \begin{cases} \dfrac{\exp\left(-y/2\sigma^2\right)}{\sqrt{2\pi\sigma^2 y}}, & y \ge 0 \\ 0, & y < 0 \end{cases}$$

Problem 5.17
First note that
$$P(Y = 0) = P(X \le 0) = 1/2$$
For $y > 0$, transformation of variables gives
$$f_Y(y) = f_X(x)\,\left|\frac{dg^{-1}(y)}{dy}\right|_{x = g^{-1}(y)}$$
Since $y = g(x) = ax$, we have $g^{-1}(y) = y/a$. Therefore
$$f_Y(y) = \frac{\exp\left(-\frac{y^2}{2a^2\sigma^2}\right)}{\sqrt{2\pi a^2 \sigma^2}},\quad y > 0$$
For $y = 0$, we need to add $0.5\,\delta(y)$ to reflect the fact that $Y$ takes on the value 0 with probability 0.5. Hence, for all $y$, the result is
$$f_Y(y) = \frac{1}{2}\delta(y) + \frac{\exp\left(-\frac{y^2}{2a^2\sigma^2}\right)}{\sqrt{2\pi a^2 \sigma^2}}\,u(y)$$
where $u(y)$ is the unit step function.
Problem 5.18

a. The normalization of the pdf to unity provides the relationship
$$\int_{-\infty}^{\infty} f_X(x)\,dx = A\int_2^\infty e^{-bx}\,dx = \frac{A e^{-2b}}{b} = 1$$
Thus $A = b e^{2b}$.

b. The mean is
$$E[X] = b e^{2b}\int_2^\infty x e^{-bx}\,dx = b e^{2b}\left(\frac{2}{b} + \frac{1}{b^2}\right)e^{-2b} = 2 + \frac{1}{b}$$
where integration by parts was used.

c. The second moment of $X$ is
$$E\left[X^2\right] = b e^{2b}\int_2^\infty x^2 e^{-bx}\,dx = b e^{2b}\left(\frac{4}{b} + \frac{4}{b^2} + \frac{2}{b^3}\right)e^{-2b} = 4 + \frac{4}{b} + \frac{2}{b^2}$$
where integration by parts was used twice.

d. The variance of $X$ is
$$\sigma_X^2 = E\left[X^2\right] - \{E[X]\}^2 = 4 + \frac{4}{b} + \frac{2}{b^2} - \left(2 + \frac{1}{b}\right)^2 = \frac{1}{b^2}$$
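The closed forms $E[X] = 2 + 1/b$ and $\mathrm{var}[X] = 1/b^2$ can be checked by crude numerical integration for a sample value of $b$ (a verification sketch, not part of the original solution):

```python
import math

# Shifted exponential pdf f_X(x) = b e^{2b} e^{-bx} for x >= 2, checked at b = 1.
b = 1.0
A = b * math.exp(2 * b)     # part (a): A = b e^{2b}

dx = 1e-4
mean = 0.0
second = 0.0
x = 2.0
while x < 40.0:             # tail beyond 40 is negligible for b = 1
    w = A * math.exp(-b * x) * dx
    mean += x * w
    second += x * x * w
    x += dx
var = second - mean ** 2
# Closed forms: E[X] = 2 + 1/b = 3, var[X] = 1/b^2 = 1.
```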

Problem 5.19

a. $E\left[X^2\right] = \int_0^2 x^2\,\frac{dx}{2} = \frac{1}{2}\left[\frac{x^3}{3}\right]_0^2 = \frac{4}{3}$; $E[X] = \int_0^2 x\,\frac{dx}{2} = \frac{1}{2}\left[\frac{x^2}{2}\right]_0^2 = 1$; $\frac{4}{3} > 1^2$, so it is true in this special case.

b. $E\left[X^2\right] = \int_0^4 x^2\,\frac{dx}{4} = \frac{1}{4}\left[\frac{x^3}{3}\right]_0^4 = \frac{16}{3}$; $E[X] = \int_0^4 x\,\frac{dx}{4} = \frac{1}{4}\left[\frac{x^2}{2}\right]_0^4 = 2$; $\frac{16}{3} > 2^2$, so it is also true in this special case.

c. Use the fact that $E\left\{[X - E(X)]^2\right\} \ge 0$ (zero only if $X = E[X]$ with probability one). Expanding, we have $E\left[X^2\right] - \{E[X]\}^2 > 0$ whenever $X$ is not a constant.

Problem 5.20

a. $A = \dfrac{b}{1 - e^{-bB}}$;

b. The cdf is 0 for $x < 0$ and 1 for $x > B$. For $0 \le x \le B$, the result is $(A/b)\left(1 - e^{-bx}\right)$;

c. The mean is
$$E[X] = \frac{1}{b} - \frac{B e^{-bB}}{1 - e^{-bB}}$$

d. The mean-square is
$$E\left[X^2\right] = \frac{2A}{b^3}\left(1 - e^{-bB}\right) - \frac{AB}{b^2}\,(2 + bB)\,e^{-bB}$$
where $A$ must be substituted from part (a).

e. For the variance, subtract the result of part (c) squared from the result of part (d).

Problem 5.21

a. The mean and variance of the Rayleigh random variable can be obtained by using tabulated definite integrals. The mean is
$$E[R] = \int_0^\infty \frac{r^2}{\sigma^2}\exp\left(-\frac{r^2}{2\sigma^2}\right)dr = \frac{1}{\sigma^2}\cdot\frac{\sqrt{\pi}}{4\left(1/2\sigma^2\right)^{3/2}} = \sqrt{\frac{\pi}{2}}\,\sigma$$
by using a definite integral for $\int_0^\infty x^{2n}\exp\left(-ax^2\right)dx$ given in Appendix G. The mean square is
$$E\left[R^2\right] = \int_0^\infty \frac{r^3}{\sigma^2}\exp\left(-\frac{r^2}{2\sigma^2}\right)dr;\quad \text{let } u = \frac{r^2}{2\sigma^2},\ du = \frac{r}{\sigma^2}\,dr$$
$$= 2\sigma^2\int_0^\infty u\exp(-u)\,du = 2\sigma^2\left[-u\exp(-u)\Big|_0^\infty + \int_0^\infty \exp(-u)\,du\right] \quad\text{(integration by parts)}$$
$$= 2\sigma^2$$
Thus, the variance is
$$\mathrm{var}[R] = E\left[R^2\right] - E^2[R] = 2\sigma^2 - \frac{\pi}{2}\sigma^2$$
which is the same as given in Table 5.4.

b. The mean and second moment of the single-sided exponential pdf are easily found in terms of tabulated integrals or by integration by parts. The mean is
$$E[X] = \int_0^\infty \alpha x\exp(-\alpha x)\,dx = \frac{1}{\alpha}\int_0^\infty u\exp(-u)\,du = \frac{1}{\alpha}$$
The second moment is
$$E\left[X^2\right] = \int_0^\infty \alpha x^2\exp(-\alpha x)\,dx = \frac{1}{\alpha^2}\int_0^\infty u^2\exp(-u)\,du = \frac{1}{\alpha^2}\left[-u^2 e^{-u}\Big|_0^\infty + 2\int_0^\infty u e^{-u}\,du\right] = \frac{2}{\alpha^2}$$
and the variance is
$$\mathrm{var}[X] = E\left[X^2\right] - E^2[X] = \frac{1}{\alpha^2}$$

c. The mean of the hyperbolic pdf is zero by virtue of the evenness of the pdf. For the variance of the hyperbolic pdf, consider
$$E\left[X^2\right] = \int_{-\infty}^{\infty}\frac{x^2\,(m - 1)\,h^{m-1}}{2\,(|x| + h)^m}\,dx = \int_0^\infty \frac{x^2\,(m - 1)\,h^{m-1}}{(x + h)^m}\,dx$$
where the second integral follows by evenness of the integrand in the first integral, and the absolute value on $x$ is unnecessary because the integration is for positive $x$. Integrate by parts to obtain
$$E\left[X^2\right] = (m - 1)\,h^{m-1}\left\{\left[\frac{x^2\,(x + h)^{-m+1}}{-m + 1}\right]_0^\infty + \frac{2}{m - 1}\int_0^\infty x\,(x + h)^{-m+1}\,dx\right\}$$
The first term is zero if $m \ge 3$. Therefore
$$E\left[X^2\right] = 2h^{m-1}\int_0^\infty x\,(x + h)^{-m+1}\,dx = \frac{2h^2}{(m - 2)(m - 3)},\quad m \ge 4$$
The variance is the same since the mean is zero.

d. The mean of a Poisson random variable is
$$E[K] = \sum_{k=0}^\infty k\,\frac{\lambda^k}{k!}\exp(-\lambda) = \lambda\exp(-\lambda)\sum_{k=1}^\infty \frac{\lambda^{k-1}}{(k - 1)!};\quad \text{let } n = k - 1$$
$$= \lambda\exp(-\lambda)\sum_{n=0}^\infty \frac{\lambda^n}{n!} = \lambda\exp(-\lambda)\exp(\lambda) = \lambda$$
Its mean-square value is
$$E\left[K^2\right] = \sum_{k=0}^\infty k^2\,\frac{\lambda^k}{k!}\exp(-\lambda) = \lambda\exp(-\lambda)\sum_{k=1}^\infty k\,\frac{\lambda^{k-1}}{(k - 1)!} = \lambda\exp(-\lambda)\sum_{n=0}^\infty (n + 1)\,\frac{\lambda^n}{n!}$$
$$= \lambda\exp(-\lambda)\left[\sum_{n=1}^\infty n\,\frac{\lambda^n}{n!} + \sum_{n=0}^\infty \frac{\lambda^n}{n!}\right] = \lambda\exp(-\lambda)\left[\lambda\sum_{n=1}^\infty \frac{\lambda^{n-1}}{(n - 1)!} + \sum_{n=0}^\infty \frac{\lambda^n}{n!}\right]$$
$$= \lambda\exp(-\lambda)\left[\lambda\exp(\lambda) + \exp(\lambda)\right] = \lambda^2 + \lambda$$
Hence, $\mathrm{var}[K] = E\left[K^2\right] - E^2[K] = \lambda$.

e. For the geometric distribution, the mean is
$$E[K] = \sum_{k=1}^\infty k\,p\,q^{k-1}$$
Differentiate the sum $\sum_{k=0}^\infty q^k = \frac{1}{1 - q},\ |q| < 1$, to get the sum
$$\sum_{k=1}^\infty k\,q^{k-1} = \frac{1}{(1 - q)^2}$$
Apply this to $E[K]$ to get
$$E[K] = \frac{p}{(1 - q)^2} = \frac{1}{p}$$
A similar procedure results in
$$E\left[K^2\right] = \sum_{k=1}^\infty k^2\,p\,q^{k-1} = \frac{2pq}{(1 - q)^3} + \frac{1}{p} = \frac{2q}{p^2} + \frac{1}{p} = \frac{1 + q}{p^2}$$
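The geometric-distribution moments in part (e) can be checked by direct (truncated) summation for a sample value of $p$ (a verification sketch, not part of the original solution):

```python
# Geometric distribution P(K = k) = p q^{k-1}, k = 1, 2, ..., checked at p = 0.3.
p = 0.3
q = 1 - p

# Truncate at k = 300; q^299 is negligibly small.
mean = sum(k * p * q ** (k - 1) for k in range(1, 300))
second = sum(k * k * p * q ** (k - 1) for k in range(1, 300))
var = second - mean ** 2
# Closed forms: E[K] = 1/p, E[K^2] = (1 + q)/p^2, var[K] = q/p^2.
```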

Problem 5.22

$$E[X] = \int_{-\infty}^{\infty} x\left\{\frac{1}{2}\delta(x - 5) + \frac{1}{8}\left[u(x - 4) - u(x - 8)\right]\right\}dx = \frac{1}{2}(5) + \frac{1}{8}\left(\frac{8^2 - 4^2}{2}\right) = \frac{11}{2}$$

$$E\left[X^2\right] = \int_{-\infty}^{\infty} x^2\left\{\frac{1}{2}\delta(x - 5) + \frac{1}{8}\left[u(x - 4) - u(x - 8)\right]\right\}dx = \frac{1}{2}(25) + \frac{1}{8}\left(\frac{8^3 - 4^3}{3}\right) = \frac{187}{6}$$

$$\sigma_X^2 = E\left[X^2\right] - E^2[X] = \frac{187}{6} - \frac{121}{4} = \frac{11}{12}$$
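The mixed (impulse plus uniform) moments above reduce to simple arithmetic (a verification sketch, not part of the original solution):

```python
# f_X(x) = (1/2) delta(x - 5) + (1/8)[u(x - 4) - u(x - 8)].
# The impulse contributes half its weight at x = 5; the uniform part is
# integrated in closed form.
mean = 0.5 * 5 + (1 / 8) * (8 ** 2 - 4 ** 2) / 2      # 11/2
second = 0.5 * 25 + (1 / 8) * (8 ** 3 - 4 ** 3) / 3   # 187/6
var = second - mean ** 2                               # 11/12
```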

Problem 5.23

a. The integral evaluates as follows:
$$E\left[X^{2n}\right] = \int_{-\infty}^{\infty} x^{2n}\,\frac{e^{-x^2/2\sigma^2}}{\sqrt{2\pi\sigma^2}}\,dx = \frac{2\left(2\sigma^2\right)^n}{\sqrt{\pi}}\int_0^\infty y^{2n} e^{-y^2}\,dy$$
where $y = x/\sqrt{2}\,\sigma$. Using a table of definite integrals, the given result is obtained.

b. The integral is zero by virtue of the oddness of the integrand.


Problem 5.24
The conditional pdf $f_{X|Y}(x|y)$ is given by
$$f_{X|Y}(x|y) = \frac{f_{XY}(x, y)}{f_Y(y)} = \frac{\frac{1}{2\pi\sigma^2\sqrt{1-\rho^2}}\exp\left[-\frac{x^2 - 2\rho xy + y^2}{2\sigma^2\left(1-\rho^2\right)}\right]}{\frac{1}{\sqrt{2\pi\sigma^2}}\exp\left(-\frac{y^2}{2\sigma^2}\right)}$$
$$= \frac{1}{\sqrt{2\pi\sigma^2}\sqrt{1-\rho^2}}\exp\left[-\frac{x^2 - 2\rho xy + y^2}{2\sigma^2\left(1-\rho^2\right)} + \frac{y^2}{2\sigma^2}\right]$$
$$= \frac{1}{\sqrt{2\pi\sigma^2}\sqrt{1-\rho^2}}\exp\left[-\frac{x^2 - 2\rho xy + \rho^2 y^2}{2\sigma^2\left(1-\rho^2\right)}\right]$$
$$= \frac{1}{\sqrt{2\pi\sigma^2\left(1-\rho^2\right)}}\exp\left[-\frac{(x - \rho y)^2}{2\sigma^2\left(1-\rho^2\right)}\right]$$
Note that the conditional pdf of $X$ given $Y$ is Gaussian with conditional mean $E[X|Y] = \rho Y$ and conditional variance $\mathrm{var}(X|Y) = \left(1 - \rho^2\right)\sigma^2$.

Problem 5.25
Regardless of the correlation coefficient, the mean of $Z$ is
$$E\{Z\} = E\{3X - 4Y\} = 3E\{X\} - 4E\{Y\} = 3(1) - 4(3) = -9$$
The variance of $Z$ is
$$\mathrm{var}(Z) = E\left\{[Z - E(Z)]^2\right\} = E\left\{[3X - 4Y - 3E(X) + 4E(Y)]^2\right\} = E\left\{\left[3\left(X - \bar{X}\right) - 4\left(Y - \bar{Y}\right)\right]^2\right\}$$
$$= E\left\{9\left(X - \bar{X}\right)^2 - 24\left(X - \bar{X}\right)\left(Y - \bar{Y}\right) + 16\left(Y - \bar{Y}\right)^2\right\} = 9\sigma_X^2 + 16\sigma_Y^2 - 24\rho_{XY}\sigma_X\sigma_Y$$
Putting in numbers, the results for the variance of $Z$ are: (a) 148; (b) 122.6; (c) 59.1; (d) 21.0.

Problem 5.26
Divide the joint Gaussian pdf of two random variables by the Gaussian pdf of $Y$, collect all exponential terms in a common exponent, and complete the square of this exponent, which will be quadratic in $x$ and $y$; the desired result is a Gaussian pdf with
$$E\{X|Y\} = m_X + \rho\,\frac{\sigma_X}{\sigma_Y}\left(Y - m_Y\right) \quad\text{and}\quad \mathrm{var}(X|Y) = \sigma_X^2\left(1 - \rho^2\right)$$
The derivation follows closely that of Problem 5.24, except for the presence of the nonzero means $m_X$ and $m_Y$.

Problem 5.27

a. From Table 5.4,
$$E[X] = 0;\quad E\left[X^2\right] = 1/32 = \mathrm{var}[X]$$

b. By transformation of random variables, the pdf of $Y$ is
$$f_Y(y) = f_X(x)\big|_{x = (y - 4)/5}\,\left|\frac{dx}{dy}\right| = \frac{4}{5}\,e^{-8|y - 4|/5}$$

c. Using the transformation from $X$ to $Y$, it follows that
$$E[Y] = E[4 + 5X] = 4 + 5E[X] = 4$$
$$E\left[Y^2\right] = E\left[(4 + 5X)^2\right] = E\left[16 + 40X + 25X^2\right] = 16 + 40E[X] + 25E\left[X^2\right] = 16 + \frac{25}{32} = \frac{537}{32} \approx 16.78$$
$$\sigma_Y^2 = E\left[Y^2\right] - E^2[Y] = \frac{537}{32} - 16 = \frac{25}{32} \approx 0.78$$

Problem 5.28
Convolve the two component pdfs. A sketch of the two component pdfs making up the integrand shows that there is no overlap until $z > 1$, and the overlap ends when $z > 7$. The result, either obtained graphically or analytically, is given by (a trapezoid)
$$f_Z(z) = \begin{cases} 0, & z < 1 \\ (z - 1)/8, & 1 \le z < 3 \\ 1/4, & 3 \le z < 5 \\ (7 - z)/8, & 5 \le z \le 7 \\ 0, & z > 7 \end{cases}$$
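The trapezoid can be reproduced by numerical convolution. The component pdfs are not restated in this solution; uniforms on [0, 4] and [1, 3] are assumed below, since they reproduce the stated trapezoid on [1, 7] (a hypothetical sketch, not the problem's given pdfs):

```python
# Assumed component pdfs (see the note above).
def f1(x):
    return 0.25 if 0 <= x <= 4 else 0.0   # uniform on [0, 4]

def f2(x):
    return 0.5 if 1 <= x <= 3 else 0.0    # uniform on [1, 3]

# Riemann-sum convolution f_Z(z) = integral of f1(x) f2(z - x) dx.
def fZ(z, dx=1e-3):
    return sum(f1(k * dx) * f2(z - k * dx) * dx for k in range(int(4 / dx) + 1))
```

For example, `fZ(4)` lands on the flat top of the trapezoid (1/4) and `fZ(2)` on the rising edge ((2-1)/8 = 1/8).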

Problem 5.29

a. The characteristic function is
$$M_X(jv) = \frac{a}{a - jv}$$

b. The mean and mean-square values are
$$E[X] = -j\,\frac{\partial M_X(jv)}{\partial v}\bigg|_{v=0} = 1/a$$
and
$$E\left[X^2\right] = (-j)^2\,\frac{\partial^2 M_X(jv)}{\partial v^2}\bigg|_{v=0} = \frac{2}{a^2}$$
respectively.

c. $\mathrm{var}[X] = E\left[X^2\right] - E^2[X] = 1/a^2$.

Problem 5.30

a. $P(5 \text{ or } 6 \text{ heads in 10 trials}) = \sum_{k=5}^{6}\frac{10!}{k!\,(10 - k)!}\left(\frac{1}{2}\right)^{10} = \frac{252 + 210}{1024} = 0.4512$

b. $P(\text{first head at trial 15}) = (1/2)(1/2)^{14} = (1/2)^{15} = 3.05 \times 10^{-5}$

c. $P(50 \text{ to } 60 \text{ heads in 100 trials}) = \sum_{k=50}^{60}\frac{100!}{k!\,(100 - k)!}\,2^{-100} = 0.1193$; $P(\text{first head at trial 50}) = (1/2)(1/2)^{49} = 8.89 \times 10^{-16}$.
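Individual binomial terms and the geometric "first head" probabilities are easy to verify for a fair coin (a verification sketch, not part of the original solution):

```python
from math import comb

# Probability of exactly k heads in n fair tosses.
def p_exact(n, k):
    return comb(n, k) / 2 ** n

p_exactly_5 = p_exact(10, 5)                           # 252/1024, about 0.2461
p_5_or_6 = p_exact(10, 5) + p_exact(10, 6)             # 462/1024
p_first_head_at_50 = 0.5 ** 50                         # about 8.9e-16
```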

Problem 5.31
The required distributions are
$$P(k) = \binom{n}{k} p^k (1 - p)^{n-k} \quad\text{(binomial)}$$
$$P(k) = \frac{e^{-(k - np)^2/[2np(1-p)]}}{\sqrt{2\pi np(1 - p)}} \quad\text{(Laplace)}$$
$$P(k) = \frac{(np)^k}{k!}\,e^{-np} \quad\text{(Poisson)}$$
Comparison tables are given below:

a. n = 3, p = 1/5
k    Binomial    Laplace     Poisson
0    0.512       0.396       0.549
1    0.384       0.487       0.329
2    0.096       0.075       0.099
3    0.008       0.0014      0.020

b. n = 3, p = 1/10
k    Binomial    Laplace     Poisson
0    0.729       0.650       0.741
1    0.243       0.310       0.222
2    0.027       0.004       0.033
3    0.001       1x10^-6     0.003

c. n = 10, p = 1/5
k    Binomial    Laplace     Poisson
0    0.107       0.090       0.135
1    0.268       0.231       0.271
2    0.302       0.315       0.271
3    0.201       0.231       0.180
4    0.088       0.090       0.090
5    0.026       0.019       0.036
6    0.005       0.002       0.012
7    7.9x10^-4   1.3x10^-4   3.4x10^-3
8    7.4x10^-5   4.1x10^-6   8.6x10^-4
9    4.1x10^-6   7.1x10^-8   1.9x10^-4

d. n = 10, p = 1/10
k    Binomial    Laplace     Poisson
0    0.348       0.241       0.368
1    0.387       0.420       0.368
2    0.194       0.241       0.183
3    0.057       0.046       0.061
4    0.011       0.003       0.015
5    1.5x10^-3   6x10^-5     3.1x10^-3
6    1.4x10^-4   3.9x10^-7   5.1x10^-4
7    3.6x10^-7   6.3x10^-13  3.4x10^-3
8    9.0x10^-9   1.5x10^-16  1x10^-6
9    1x10^-10    1.2x10^-20  1x10^-7

Problem 5.32

a. $N_p = 26 \times 25 \times 24 \times 23 = 358{,}800$;

b. $N_p = (26)^4 = 456{,}976$;

c. The answers are the reciprocals of the numbers found in (a) and (b), or $2.79 \times 10^{-6}$ and $2.19 \times 10^{-6}$, respectively.

Problem 5.33

a. The probability of exactly one error in $10^5$ digits, by the binomial distribution, is
$$P\left(1 \text{ error in } 10^5\right) = \binom{10^5}{1}\left(10^{-5}\right)\left(1 - 10^{-5}\right)^{99{,}999} = 10^5 \times 0.3679 \times 10^{-5} = 0.3679$$
By the Poisson approximation, it is
$$P\left(1 \text{ error in } 10^5\right) = \exp(-1) = 0.3679$$

b. The Poisson approximation gives
$$P\left(2 \text{ errors in } 10^5\right) = \frac{\left(10^5 \times 10^{-5}\right)^2}{2!}\exp\left(-10^5 \times 10^{-5}\right) = 0.18394$$

c. Use the Poisson approximation:
$$P\left(\text{more than 5 errors in } 10^5\right) = 1 - P(0, 1, 2, 3, 4, \text{ or } 5 \text{ errors}) = 1 - \sum_{k=0}^{5}\frac{(np)^k}{k!}\exp(-np)$$
$$= 1 - \exp(-1)\left(1 + 1 + \frac{1}{2} + \frac{1}{6} + \frac{1}{24} + \frac{1}{120}\right) = 1 - 0.9994 = 0.0006$$
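With $np = 1$, these Poisson values follow from a few lines of Python (a verification sketch, not part of the original solution):

```python
import math

# Poisson probabilities for lambda = np = (1e5)(1e-5) = 1.
lam = 1.0

def poisson(k):
    return math.exp(-lam) * lam ** k / math.factorial(k)

p_one = poisson(1)                                      # about 0.3679
p_two = poisson(2)                                      # about 0.1839
p_more_than_5 = 1 - sum(poisson(k) for k in range(6))   # about 0.0006
```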

Problem 5.34

a. The desired probability is
$$P(\text{fewer than 3 heads in 20 coins tossed}) = \sum_{k=0}^{2}\binom{20}{k}\left(\frac{1}{2}\right)^{20} = (1 + 20 + 190)\,2^{-20} = 2.0123 \times 10^{-4}$$

b. $\mathrm{var}(N) = npq = 20\,(1/2)\,(1/2) = 5$; $E[N] = np = 10$. The Laplace-approximated probability is
$$P(< 3 \text{ heads in 20 coins tossed}) = \sum_{k=0}^{2}\frac{e^{-(k - 10)^2/10}}{\sqrt{10\pi}} = \frac{1}{\sqrt{10\pi}}\left(e^{-10} + e^{-81/10} + e^{-64/10}\right) = 3.587 \times 10^{-4}$$

c. The percent error is
$$\%\ \text{error} = \frac{3.587 \times 10^{-4} - 2.0123 \times 10^{-4}}{2.0123 \times 10^{-4}} \times 100 = 78.25\%$$
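Both the exact sum and the Laplace approximation can be recomputed directly (a verification sketch, not part of the original solution):

```python
import math
from math import comb

# Exact binomial tail for fewer than 3 heads in n = 20 fair tosses.
n = 20
exact = sum(comb(n, k) for k in range(3)) / 2 ** n

# Laplace (Gaussian) approximation with mean np = 10 and variance npq = 5.
npq = n * 0.5 * 0.5
laplace = sum(math.exp(-(k - 10) ** 2 / (2 * npq)) / math.sqrt(2 * math.pi * npq)
              for k in range(3))

pct_error = 100 * (laplace - exact) / exact   # about 78 percent
```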

Problem 5.35

a. The marginal pdf for $X$ is
$$f_X(x) = \frac{1}{\sqrt{2\pi(4)}}\exp\left[-\frac{(x - 1)^2}{2\,(4)}\right] = \frac{1}{\sqrt{8\pi}}\exp\left[-\frac{(x - 1)^2}{8}\right]$$
The marginal pdf for $Y$ is of the same form except for $Y$ and $y$ in place of $X$ and $x$, respectively.

b. The joint pdf is
$$f_{XY}(x, y) = \frac{1}{2\pi(2)(2)\sqrt{1 - 0.5^2}}\exp\left\{-\frac{\left(\frac{x-1}{2}\right)^2 - 2\,(0.5)\left(\frac{x-1}{2}\right)\left(\frac{y-1}{2}\right) + \left(\frac{y-1}{2}\right)^2}{2\left(1 - 0.5^2\right)}\right\}$$
$$= \frac{1}{8\pi\sqrt{0.75}}\exp\left\{-\frac{(x - 1)^2 - (x - 1)(y - 1) + (y - 1)^2}{6}\right\} = \frac{1}{8\pi\sqrt{0.75}}\exp\left\{-\frac{x^2 - x - xy + y^2 - y + 1}{6}\right\}$$
The conditional pdf of $X$ given $Y = y$ is
$$f_{X|Y}(x|y) = \frac{f_{XY}(x, y)}{f_Y(y)} = \frac{\frac{1}{8\pi\sqrt{0.75}}\exp\left\{-\frac{(x-1)^2 - (x-1)(y-1) + (y-1)^2}{6}\right\}}{\frac{1}{\sqrt{8\pi}}\exp\left[-\frac{(y-1)^2}{8}\right]}$$
$$= \frac{1}{\sqrt{6\pi}}\exp\left\{-\frac{(x - 1)^2 - (x - 1)(y - 1) + (y - 1)^2}{6} + \frac{(y - 1)^2}{8}\right\}$$
$$= \frac{1}{\sqrt{6\pi}}\exp\left\{-\frac{(x - 1)^2 - (x - 1)(y - 1) + \frac{1}{4}(y - 1)^2}{6}\right\}$$
$$= \frac{1}{\sqrt{6\pi}}\exp\left\{-\frac{\left[x - 1 - 0.5\,(y - 1)\right]^2}{6}\right\} = \frac{1}{\sqrt{6\pi}}\exp\left\{-\frac{\left[x - 0.5y - 0.5\right]^2}{6}\right\}$$

c. See Problem 5.26 to deduce that
$$E[X|Y] = 1 + 0.5\,(Y - 1) \quad\text{and}\quad \mathrm{var}[X|Y] = 4\left(1 - 0.5^2\right) = 3$$

Problem 5.36

a. $K = 1/\pi$;

b. $E[X]$ is not defined, but one could argue that it is zero from the oddness of the integrand for this expectation. If one writes down the integral for the second moment, it clearly does not converge (the integrand approaches a constant as $|x| \to \infty$);

c. The answer is given;

d. Compare the form of the characteristic function with the answer given in (c) and do a suitable redefinition of variables.

Problem 5.37

a. The characteristic function is found from
$$M_Y(jv) = E\left[e^{jvY}\right] = E\left[e^{jv\sum_{i=1}^{N} X_i^2}\right] = \prod_{i=1}^{N} E\left[e^{jvX_i^2}\right]$$
But
$$E\left[e^{jvX_i^2}\right] = \int_{-\infty}^{\infty} e^{jvx^2}\,\frac{e^{-x^2/2\sigma^2}}{\sqrt{2\pi\sigma^2}}\,dx = \int_{-\infty}^{\infty}\frac{e^{-\left(1/2\sigma^2 - jv\right)x^2}}{\sqrt{2\pi\sigma^2}}\,dx = \left(1 - j2v\sigma^2\right)^{-1/2}$$
which follows by using the definite integral defined at the beginning of Problem 4.35. Thus
$$M_Y(jv) = \left(1 - j2v\sigma^2\right)^{-N/2}$$

b. The given pdf follows easily by letting $\alpha = 2\sigma^2$ in the given Fourier transform pair.

c. The mean of $Y$ is
$$E[Y] = N\sigma^2$$
by taking the sum of the expectations of the separate terms of the sum defining $Y$. The mean-square value of $X_i^2$ is
$$E\left[\left(X_i^2\right)^2\right] = (-j)^2\,\frac{d^2\left(1 - j2v\sigma^2\right)^{-1/2}}{dv^2}\bigg|_{v=0} = 3\sigma^4$$
Thus,
$$\mathrm{var}\left[X_i^2\right] = 3\sigma^4 - \sigma^4 = 2\sigma^4$$
Since the terms in the sum defining $Y$ are independent, $\mathrm{var}[Y] = 2N\sigma^4$. The mean and variance of $Y$ can be put into the definition of a Gaussian pdf to get the desired approximation.

d. The cases $N = 2, 4, 8$, and $N = 16$ are shown in Figure 4.1. The approximations appear to get better as $N$ increases. Note the extreme difference between exact and approximation due to the central limit theorem not being valid for $N = 2, 4$.

(Figure: chi-square pdfs $f_Y(y)$ with $\sigma^2 = 1$ for $N = 2, 4, 8$, and 16, each plotted against its Gaussian approximation.)

e. Evident by direct substitution.

Problem 5.38
This is a matter of plotting
$$Q(x) = \int_x^\infty \frac{\exp\left(-t^2/2\right)}{\sqrt{2\pi}}\,dt \quad\text{and}\quad Q_a(x) = \frac{\exp\left(-x^2/2\right)}{x\sqrt{2\pi}}$$
on the same set of axes.

Problem 5.39
By definition, the cdf of a Gaussian random variable is
$$F_X(x) = \int_{-\infty}^{x}\frac{\exp\left[-\frac{(u - m)^2}{2\sigma^2}\right]}{\sqrt{2\pi\sigma^2}}\,du = 1 - \int_x^\infty \frac{\exp\left[-\frac{(u - m)^2}{2\sigma^2}\right]}{\sqrt{2\pi\sigma^2}}\,du$$
Change variables to
$$v = \frac{u - m}{\sigma}$$
with the result that
$$F_X(x) = 1 - \int_{(x - m)/\sigma}^{\infty}\frac{\exp\left(-\frac{v^2}{2}\right)}{\sqrt{2\pi}}\,dv = 1 - Q\left(\frac{x - m}{\sigma}\right)$$
A plot may easily be obtained by a MATLAB program. It is suggested that you use the MATLAB function $\mathrm{erfc}(x) = \frac{2}{\sqrt{\pi}}\int_x^\infty \exp\left(-t^2\right)dt$.

Problem 5.40
The following relationships involving the Q-function will prove useful:
$$\int_a^b \frac{\exp\left(-u^2/2\right)}{\sqrt{2\pi}}\,du = Q(a) - Q(b),\ b > a;\qquad Q(x) = 1 - Q(|x|),\ x < 0;\qquad Q(0) = 0.5$$

a. The probability is
$$P(|X| \le 15) = \int_{-15}^{15}\frac{e^{-(x - 10)^2/50}}{\sqrt{50\pi}}\,dx = \int_{-5}^{1}\frac{\exp\left(-u^2/2\right)}{\sqrt{2\pi}}\,du;\quad u = \frac{x - 10}{\sqrt{25}}$$
$$= Q(-5) - Q(1) = 1 - Q(5) - Q(1) = 0.8413$$

b. This probability is
$$P(10 < X \le 20) = \int_{10}^{20}\frac{e^{-(x - 10)^2/50}}{\sqrt{50\pi}}\,dx = \int_0^2 \frac{\exp\left(-u^2/2\right)}{\sqrt{2\pi}}\,du = Q(0) - Q(2) = 0.5 - Q(2) = 0.4772$$

c. The desired probability is
$$P(5 < X \le 25) = \int_5^{25}\frac{e^{-(x - 10)^2/50}}{\sqrt{50\pi}}\,dx = \int_{-1}^{3}\frac{\exp\left(-u^2/2\right)}{\sqrt{2\pi}}\,du = Q(-1) - Q(3) = 1 - Q(1) - Q(3) = 0.8400$$

d. The result is
$$P(20 < X \le 30) = \int_{20}^{30}\frac{e^{-(x - 10)^2/50}}{\sqrt{50\pi}}\,dx = \int_2^4 \frac{\exp\left(-u^2/2\right)}{\sqrt{2\pi}}\,du = Q(2) - Q(4) = 0.0227$$
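All four probabilities can be confirmed with the complementary error function, via $Q(x) = \frac{1}{2}\mathrm{erfc}\left(x/\sqrt{2}\right)$ (a verification sketch, not part of the original solution):

```python
import math

def Q(x):
    # Gaussian tail probability via the complementary error function.
    return 0.5 * math.erfc(x / math.sqrt(2))

p_a = 1 - Q(5) - Q(1)   # P(|X| <= 15),    about 0.8413
p_b = 0.5 - Q(2)        # P(10 < X <= 20), about 0.4772
p_c = 1 - Q(1) - Q(3)   # P(5 < X <= 25),  about 0.8400
p_d = Q(2) - Q(4)       # P(20 < X <= 30), about 0.0227
```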

Problem 5.41
Consider the product of integrals
$$I(x) = \int_0^\infty \frac{\exp\left(-u^2/2\right)}{\sqrt{2\pi}}\,du \int_x^\infty \frac{\exp\left(-v^2/2\right)}{\sqrt{2\pi}}\,dv = \frac{1}{2}\int_x^\infty \frac{\exp\left(-v^2/2\right)}{\sqrt{2\pi}}\,dv = \frac{1}{2}Q(x)$$
where the 1/2 comes from the fact that the first integral is the integral of the right half of a Gaussian pdf. Now write the product of the two integrals as an iterated integral to get
$$I(x) = \frac{1}{2\pi}\int_0^\infty\int_x^\infty \exp\left(-\frac{u^2 + v^2}{2}\right)du\,dv$$
Transform the integral to polar coordinates via $u = r\cos\theta$, $v = r\sin\theta$, and $du\,dv = r\,dr\,d\theta$ to get
$$I(x) = \frac{1}{2}Q(x) = \frac{1}{2\pi}\int_0^{\pi/2}\int_{x/\sin\theta}^{\infty}\exp\left(-r^2/2\right)r\,dr\,d\theta = \frac{1}{2\pi}\int_0^{\pi/2}\exp\left(-\frac{x^2}{2\sin^2\theta}\right)d\theta$$
Multiplying through by 2 gives the answer given in the problem statement.
Problem 5.42

a. Observe that
$$\sigma^2 = \int_{-\infty}^{\infty}(x - m)^2 f_X(x)\,dx \ge \int_{|x - m| > k\sigma}(x - m)^2 f_X(x)\,dx \ge k^2\sigma^2\int_{|x - m| > k\sigma} f_X(x)\,dx$$
$$= k^2\sigma^2\,P\{|X - m| > k\sigma\} = k^2\sigma^2\left\{1 - P[|X - m| \le k\sigma]\right\}$$
or
$$P[|X - m| < k\sigma] \ge 1 - \frac{1}{k^2}$$

b. Note that for this random variable $m_X = 0$ and $\sigma_X^2 = 2^2/12 = 1/3$. The actual probability is
$$P[|X - m_X| < k\sigma_X] = P\left[|X| < k/\sqrt{3}\right] = \begin{cases} \int_{-k/\sqrt{3}}^{k/\sqrt{3}}\frac{dx}{2} = k/\sqrt{3}, & k < \sqrt{3} \\ 1, & k \ge \sqrt{3} \end{cases}$$
A comparison is given in the table below:

k    Actual probability      Chebyshev bound
0    0                       -
1    1/sqrt(3) = 0.577       0
2    1                       0.75
3    1                       0.889

Note that in all cases the actual probability equals or exceeds the Chebyshev bound, as it must, since the bound is a lower bound on $P[|X - m| < k\sigma]$.

Problem 5.43

a. The mean is 0 by even symmetry of the pdf. Thus, the variance is
$$\sigma^2 = \mathrm{var}[X] = E\left[X^2\right] = \int_{-\infty}^{\infty} x^2\,\frac{a}{2}e^{-a|x|}\,dx = 2\cdot\frac{a}{2}\int_0^\infty x^2 e^{-ax}\,dx = a\int_0^\infty \frac{y^2}{a^2}\,e^{-y}\,\frac{dy}{a} = \frac{2}{a^2}$$
since the last integral is $\Gamma(3) = 2$. Thus, in terms of the variance ($a = \sqrt{2}/\sigma$), the pdf is
$$f_X(x) = \frac{1}{\sqrt{2}\,\sigma}\exp\left(-\sqrt{2}\,|x|/\sigma\right)$$

b. The desired probability is
$$P[|X| > k\sigma] = P[X < -k\sigma] + P[X > k\sigma],\ k = 1, 2, 3$$
$$= 2\int_{k\sigma}^{\infty}\frac{1}{\sqrt{2}\,\sigma}e^{-\sqrt{2}x/\sigma}\,dx = \int_{\sqrt{2}k}^{\infty}e^{-u}\,du \quad\text{(by evenness of the integrand and } u = \sqrt{2}x/\sigma\text{)}$$
$$= e^{-\sqrt{2}\,k}$$
Thus, the required numerical values are
$$P[|X| > \sigma] = 0.2431,\qquad P[|X| > 2\sigma] = 0.0591,\qquad P[|X| > 3\sigma] = 0.0144$$

Problem 5.44
The desired probability is
$$P[|X| > k\sigma] = P[X < -k\sigma] + P[X > k\sigma],\ k = 1, 2, 3$$
$$= \int_{-\infty}^{-k\sigma}\frac{\exp\left(-x^2/2\sigma^2\right)}{\sqrt{2\pi\sigma^2}}\,dx + \int_{k\sigma}^{\infty}\frac{\exp\left(-x^2/2\sigma^2\right)}{\sqrt{2\pi\sigma^2}}\,dx = 2Q(k)$$

a. $P[|X| > \sigma] = 0.3173$;

b. $P[|X| > 2\sigma] = 0.0455$;

c. $P[|X| > 3\sigma] = 0.0027$.
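The three Gaussian tail values follow directly from $2Q(k)$ with $Q(k) = \frac{1}{2}\mathrm{erfc}\left(k/\sqrt{2}\right)$ (a verification sketch, not part of the original solution):

```python
import math

def Q(x):
    # Gaussian tail probability via the complementary error function.
    return 0.5 * math.erfc(x / math.sqrt(2))

# P(|X| > k sigma) = 2 Q(k) for k = 1, 2, 3.
tails = [2 * Q(k) for k in (1, 2, 3)]   # about 0.3173, 0.0455, 0.0027
```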

Problem 5.45
Since both $X$ and $Y$ are Gaussian and the transformation is linear, $Z$ is Gaussian. Thus, all we need are the mean and variance of $Z$. Since the means of $X$ and $Y$ are 0, so is the mean of $Z$. Therefore, its variance is
$$\sigma_Z^2 = E\left[Z^2\right] = E\left[(X + 2Y)^2\right] = E\left[X^2 + 4XY + 4Y^2\right] = E\left[X^2\right] + 4E[XY] + 4E\left[Y^2\right]$$
$$= \sigma_X^2 + 4\rho_{XY}\sigma_X\sigma_Y + 4\sigma_Y^2 \quad\text{(because the means are 0)}$$
$$= 3 + (4)\sqrt{3}\sqrt{4}\,(-0.4) + (4)(4) = 19 - 3.2\sqrt{3} = 13.457$$
The pdf of $Z$ is
$$f_Z(z) = \frac{\exp\left(-z^2/26.915\right)}{\sqrt{26.915\pi}}$$

Problem 5.46

a. $f_X(x) = \dfrac{\exp\left[-(x - 5)^2/2\right]}{\sqrt{2\pi}}$; $f_Y(y) = \dfrac{\exp\left[-(y - 3)^2/4\right]}{\sqrt{4\pi}}$

b. $f_{XY}(x, y) = \dfrac{\exp\left[-(x - 5)^2/2\right]}{\sqrt{2\pi}}\cdot\dfrac{\exp\left[-(y - 3)^2/4\right]}{\sqrt{4\pi}}$

c. $E(Z_1) = m_X + m_Y = 5 + 3 = 8$; $E(Z_2) = m_X - m_Y = 5 - 3 = 2$

d. The variances are
$$\sigma_{Z_i}^2 = E\left[\left(Z_i - m_{Z_i}\right)^2\right] = E\left[\left(X \pm Y - m_X \mp m_Y\right)^2\right] = E\left[\left((X - m_X) \pm (Y - m_Y)\right)^2\right]$$
$$= E\left[(X - m_X)^2\right] \pm 2E[X - m_X]\,E[Y - m_Y] + E\left[(Y - m_Y)^2\right] = \sigma_X^2 + 0 + \sigma_Y^2 = 1 + 2 = 3$$

e. The desired pdf is
$$f_{Z_1}(z_1) = \frac{\exp\left[-(z_1 - 8)^2/6\right]}{\sqrt{6\pi}}$$

f. The desired pdf is
$$f_{Z_2}(z_2) = \frac{\exp\left[-(z_2 - 2)^2/6\right]}{\sqrt{6\pi}}$$

Problem 5.47
Since both $X$ and $Y$ are Gaussian and the transformation is linear, $Z$ is Gaussian. Thus, all we need are the mean and variance of $Z$. The mean of $Z$ is
$$m_Z = E[3X + Y] = 3m_X + m_Y = (3)(1) + 2 = 5$$
Its variance is
$$\sigma_Z^2 = E\left[(Z - m_Z)^2\right] = E\left[(3X + Y - 3m_X - m_Y)^2\right] = E\left[\left(3(X - m_X) + (Y - m_Y)\right)^2\right]$$
$$= 9E\left[(X - m_X)^2\right] + 6E\left[(X - m_X)(Y - m_Y)\right] + E\left[(Y - m_Y)^2\right] = 9\sigma_X^2 + 6\rho_{XY}\sigma_X\sigma_Y + \sigma_Y^2$$
$$= 9\,(3) + (6)\sqrt{3}\sqrt{2}\,(0.2) + 2 = 29 + 1.2\sqrt{6} = 31.939$$
The pdf of $Z$ is
$$f_Z(z) = \frac{\exp\left[-(z - 5)^2/63.879\right]}{\sqrt{63.879\pi}}$$

Problem 5.48

a. $f_X(x) = \dfrac{\exp\left[-(x - 4)^2/6\right]}{\sqrt{6\pi}}$; $f_Y(y) = \dfrac{\exp\left[-(y - 2)^2/10\right]}{\sqrt{10\pi}}$

b. $f_{XY}(x, y) = \dfrac{\exp\left[-(x - 4)^2/6\right]}{\sqrt{6\pi}}\cdot\dfrac{\exp\left[-(y - 2)^2/10\right]}{\sqrt{10\pi}}$

c. $E(Z_1) = 3m_X + m_Y = (3)(4) + 2 = 14$; $E(Z_2) = 3m_X - m_Y = (3)(4) - 2 = 10$

d. The variances are
$$\sigma_{Z_i}^2 = E\left[\left(Z_i - m_{Z_i}\right)^2\right] = E\left[\left(3X \pm Y - 3m_X \mp m_Y\right)^2\right] = E\left[\left(3(X - m_X) \pm (Y - m_Y)\right)^2\right]$$
$$= 9E\left[(X - m_X)^2\right] \pm 6E[X - m_X]\,E[Y - m_Y] + E\left[(Y - m_Y)^2\right] = 9\sigma_X^2 + 0 + \sigma_Y^2 = (9)(3) + 5 = 32$$

e. The pdf of $Z_1$ is
$$f_{Z_1}(z_1) = \frac{\exp\left[-(z_1 - 14)^2/64\right]}{\sqrt{64\pi}}$$

f. The pdf of $Z_2$ is
$$f_{Z_2}(z_2) = \frac{\exp\left[-(z_2 - 10)^2/64\right]}{\sqrt{64\pi}}$$
Problem 5.49

a. The probability of mean exceedance for a uniformly distributed random variable is
$$P\left[X > \frac{1}{2}(a + b)\right] = \int_{(a+b)/2}^{b}\frac{dx}{b - a} = \frac{1}{b - a}\,x\bigg|_{(a+b)/2}^{b} = \frac{1}{2}$$

b. The probability of mean exceedance for a Rayleigh distributed random variable is
$$P\left[X > \sqrt{\frac{\pi\sigma^2}{2}}\right] = \int_{\sqrt{\pi\sigma^2/2}}^{\infty}\frac{r}{\sigma^2}\exp\left(-\frac{r^2}{2\sigma^2}\right)dr;\quad v = \frac{r^2}{2\sigma^2}$$
$$= \int_{\pi/4}^{\infty}e^{-v}\,dv = e^{-\pi/4} = 0.4559$$

c. The probability of mean exceedance for a one-sided exponentially distributed random variable is
$$P\left[X > \frac{1}{\alpha}\right] = \int_{1/\alpha}^{\infty}\alpha\exp(-\alpha x)\,dx = e^{-1} = 0.3679$$
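The three mean-exceedance probabilities reduce to the constants below, independent of the distribution parameters (a verification sketch, not part of the original solution):

```python
import math

# Mean-exceedance probabilities from Problem 5.49.
p_uniform = 0.5                        # part (a): always exactly 1/2
p_rayleigh = math.exp(-math.pi / 4)    # part (b): exp(-pi/4), about 0.4559
p_exponential = math.exp(-1.0)         # part (c): e^-1, about 0.3679
```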

5.2 Computer Exercises


Computer Exercise 5.1

% ce5_1.m: Generate an exponentially distributed random variable
% with parameter a from a uniform random variable in [0, 1)
%
clf
a = input('Enter the parameter of the exponential pdf: f(x) = a*exp(-a*x)*u(x) => ');
N = input('Enter number of exponential random variables to be generated => ');
U = rand(1,N);
V = -(1/a)*log(U);
[M, X] = hist(V);
disp(' ')
disp('No. of random variables generated')
disp(N)
disp(' ')
disp('Bin centers')
disp(X)
disp(' ')
disp('No. of random variable counts in each bin')
disp(M)
disp(' ')
norm_hist = M/(N*(X(2)-X(1)));
plot(X, norm_hist, 'o')
hold
plot(X, a*exp(-a*X), '--'), xlabel('x'), ylabel('f_X(x)'),...
title(['Theoretical pdf and histogram for ', num2str(N), ...
' computer generated exponential random variables'])
legend(['Histogram points'], ['Exponential pdf; a = ', num2str(a)])

A typical run follows with a plot given in Fig. 5.2.

>> ce5_1
Enter the parameter of the exponential pdf: f(x) = a*exp(-a*x)*u(x) => 2
Enter number of exponential random variables to be generated => 5000
No. of random variables generated
5000
Bin centers
Columns 1 through 6
0.2094 0.6280 1.0467 1.4653 1.8839 2.3026
Columns 7 through 10
2.7212 3.1398 3.5585 3.9771
No. of random variable counts in each bin
Columns 1 through 5
2880 1200 543 215 90
Columns 6 through 10
38 16 9 5 4
Current plot held

(Figure: histogram points and theoretical exponential pdf (a = 2) for 5000 computer generated exponential random variables.)

Computer Exercise 5.2

% ce5_2.m: Generate pairs of Gaussian random variables from Rayleigh and uniform RVs
%
clf
m = input('Enter the mean of the Gaussian random variables => ');
sigma = input('Enter the standard deviation of the Gaussian random variables => ');
N = input('Enter number of Gaussian random variable pairs to be generated => ');
U = rand(1,N);
V = rand(1,N);
R = sqrt(-2*log(V));
X = sigma*R.*cos(2*pi*U)+m;
Y = sigma*R.*sin(2*pi*U)+m;
disp(' ')
disp('Covariance matrix of X and Y vectors:')
disp(cov(X,Y))
disp(' ')
[MX, X_bin] = hist(X, 20);
norm_MX = MX/(N*(X_bin(2)-X_bin(1)));
[MY, Y_bin] = hist(Y, 20);
norm_MY = MY/(N*(Y_bin(2)-Y_bin(1)));
gauss_pdf_X = exp(-(X_bin - m).^2/(2*sigma^2))/sqrt(2*pi*sigma^2);
gauss_pdf_Y = exp(-(Y_bin - m).^2/(2*sigma^2))/sqrt(2*pi*sigma^2);
subplot(2,1,1), plot(X_bin, norm_MX, 'o')
hold
subplot(2,1,1), plot(X_bin, gauss_pdf_X, '--'), xlabel('x'), ylabel('f_X(x)'),...
title(['Theoretical pdfs and histograms for ', num2str(N), ...
' computer generated independent Gaussian RV pairs'])
legend(['Histogram points'], ['Gauss pdf; \sigma, \mu = ', num2str(sigma), ', ', num2str(m)])
subplot(2,1,2), plot(Y_bin, norm_MY, 'o')
hold
subplot(2,1,2), plot(Y_bin, gauss_pdf_Y, '--'), xlabel('y'), ylabel('f_Y(y)')

A typical run follows with a plot given in Fig. 5.3.

>> ce5_2
Enter the mean of the Gaussian random variables => 2
Enter the standard deviation of the Gaussian random variables => 3
Enter number of Gaussian random variable pairs to be generated => 5000
Covariance matrix of X and Y vectors:
9.0928 0.2543
0.2543 9.0777
Current plot held

(Figure: histograms and theoretical Gaussian pdfs (sigma = 3, mu = 2) for 5000 computer generated independent Gaussian RV pairs.)
34 CHAPTER 5. A BRIEF REVIEW OF PROBABILITY THEORY

Computer Exercise 5.3

% ce5_3.m: Generate a sequence of Gaussian random variables with specified correlation
% between adjacent RVs
%
clf
m = input('Enter the mean of the Gaussian random variables => ');
sigma = input('Enter the standard deviation of the Gaussian random variables => ');
rho = input('Enter correlation coefficient between adjacent Gaussian random variables => ');
N = input('Enter number of Gaussian random variables to be generated => ');
var12 = sigma^2*(1 - rho^2);
X = [];
X(1) = sigma*randn(1)+m;
for k = 2:N
m12 = m + rho*(X(k-1) - m);
X(k) = sqrt(var12)*randn(1) + m12;
end
[MX, X_bin] = hist(X, 20);
norm_MX = MX/(N*(X_bin(2)-X_bin(1)));
gauss_pdf_X = exp(-(X_bin - m).^2/(2*sigma^2))/sqrt(2*pi*sigma^2);
subplot(2,1,1), plot(X_bin, norm_MX, 'o')
hold
subplot(2,1,1), plot(X_bin, gauss_pdf_X, '--'), xlabel('x'), ylabel('f_X(x)'),...
title(['Theoretical pdf and histogram for ', num2str(N), ' computer generated Gaussian RVs'])
legend(['Histogram points'], ['Gauss pdf; \sigma, \mu = ', num2str(sigma), ', ', num2str(m)], 2)
Z = X(1:50);
subplot(2,1,2), plot(Z, 'x'), xlabel('k'), ylabel('X(k)')
title(['Gaussian sample values with correlation ', num2str(rho), ' between samples'])

A typical run follows with a plot given in Fig. 5.4.

>> ce5_3
Enter the mean of the Gaussian random variables => 0
Enter the standard deviation of the Gaussian random variables => 2
Enter correlation coefficient between adjacent Gaussian random variables => 0.8
Enter number of Gaussian random variables to be generated => 250
Current plot held

(Figure: histogram and theoretical Gaussian pdf (sigma = 2, mu = 0) for 250 computer generated Gaussian RVs, together with the first 50 sample values with correlation 0.8 between samples.)

Computer Exercise 5.4

% ce5_4.m: Testing the validity of the Central Limit Theorem with sums of uniform
% random numbers
%
clf
N = input('Enter number of Gaussian random variables to be generated => ');
M = input('Enter number of uniform random variables to be summed to generate one GRV => ');
% Mean of uniform = 0; variance = 1/12
Y = rand(M,N)-0.5;
X_gauss = sum(Y)/(sqrt(M/12));
disp(' ')
disp('Estimated variance of the Gaussian RVs:')
disp(cov(X_gauss))
disp(' ')
[MX, X_bin] = hist(X_gauss, 20);
norm_MX = MX/(N*(X_bin(2)-X_bin(1)));
gauss_pdf_X = exp(-X_bin.^2/2)/sqrt(2*pi);
plot(X_bin, norm_MX, 'o')
hold
plot(X_bin, gauss_pdf_X, '--'), xlabel('x'), ylabel('f_X(x)'),...
title(['Theoretical pdf & histogram for ', num2str(N), ...
' Gaussian RVs each generated as the sum of ', num2str(M), ' uniform RVs'])
legend(['Histogram points'], ['Gauss pdf; \sigma = 1, \mu = 0'], 2)

A typical run follows with a plot given in Fig. 5.5.

>> ce5_4
Enter number of Gaussian random variables to be generated => 1000
Enter number of uniform random variables to be summed to generate one GRV => 10
Estimated variance of the Gaussian RVs:
0.9091
Current plot held

(Figure: histogram and theoretical Gaussian pdf (sigma = 1, mu = 0) for 1000 Gaussian RVs, each generated as the sum of 10 uniform RVs.)
