
Yates and Goodman: Probability and Stochastic Processes Solutions Manual


Problem Solutions Chapter 4


Problem 4.1.1
The CDF of $X$ is
$$F_X(x) = \begin{cases} 0 & x < -1 \\ (x+1)/2 & -1 \le x < 1 \\ 1 & x \ge 1 \end{cases}$$
Each question can be answered by expressing the requested probability in terms of $F_X(x)$.

(a)
$$P[X > 1/2] = 1 - P[X \le 1/2] = 1 - F_X(1/2) = 1 - 3/4 = 1/4$$

(b) This is a little trickier than it should be. Being careful, we can write
$$P[-1/2 \le X < 3/4] = P[-1/2 < X \le 3/4] + P[X = -1/2] - P[X = 3/4]$$
Since the CDF of $X$ is a continuous function, the probability that $X$ takes on any specific value is zero. This implies $P[X = 3/4] = 0$ and $P[X = -1/2] = 0$. (If this is not clear at this point, it will become clear in Section 4.6.) Thus,
$$P[-1/2 \le X < 3/4] = P[-1/2 < X \le 3/4] = F_X(3/4) - F_X(-1/2) = 5/8$$

(c)
$$P[|X| \le 1/2] = P[-1/2 \le X \le 1/2] = P[X \le 1/2] - P[X < -1/2]$$
Note that $P[X \le 1/2] = F_X(1/2) = 3/4$. Since $P[X = -1/2] = 0$, $P[X < -1/2] = P[X \le -1/2] = F_X(-1/2) = 1/4$. This implies
$$P[|X| \le 1/2] = 3/4 - 1/4 = 1/2$$

(d) Since $F_X(1) = 1$, we must have $a \le 1$. For $a \le 1$, we need to satisfy
$$P[X \le a] = F_X(a) = \frac{a+1}{2} = 0.8$$
Thus $a = 0.6$.
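A quick numerical check of these four answers, using only the Python standard library (the helper name F_X is just illustrative):

    def F_X(x):
        # piecewise CDF of X from the problem statement
        if x < -1:
            return 0.0
        if x < 1:
            return (x + 1) / 2
        return 1.0

    print(1 - F_X(0.5))            # (a) P[X > 1/2] = 0.25
    print(F_X(0.75) - F_X(-0.5))   # (b) P[-1/2 <= X < 3/4] = 0.625
    print(F_X(0.5) - F_X(-0.5))    # (c) P[|X| <= 1/2] = 0.5
    print(2 * 0.8 - 1)             # (d) a satisfying (a+1)/2 = 0.8, i.e. a = 0.6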


Problem 4.1.2
The CDF of $V$ was given to be
$$F_V(v) = \begin{cases} 0 & v < -5 \\ c(v+5)^2 & -5 \le v < 7 \\ 1 & v \ge 7 \end{cases}$$

(a) For $V$ to be a continuous random variable, $F_V(v)$ must be a continuous function. This occurs if we choose $c$ such that $F_V(v)$ doesn't have a discontinuity at $v = 7$. We meet this requirement if $c(7+5)^2 = 1$. This implies $c = 1/144$.

(b)
$$P[V > 4] = 1 - P[V \le 4] = 1 - F_V(4) = 1 - 81/144 = 63/144$$

(c)
$$P[-3 < V \le 0] = F_V(0) - F_V(-3) = 25/144 - 4/144 = 21/144$$

(d) Since $0 \le F_V(v) \le 1$ and since $F_V(v)$ is a nondecreasing function, it must be that $-5 \le a \le 7$. In this range,
$$P[V > a] = 1 - F_V(a) = 1 - (a+5)^2/144 = 2/3$$
The unique solution in the range $-5 \le a \le 7$ is $a = 4\sqrt{3} - 5 = 1.928$.


Problem 4.1.3
In this problem, the CDF of $U$ is
$$F_U(u) = \begin{cases} 0 & u < -5 \\ (u+5)/8 & -5 \le u < -3 \\ 1/4 & -3 \le u < 3 \\ 1/4 + 3(u-3)/8 & 3 \le u < 5 \\ 1 & u \ge 5 \end{cases}$$
Each question can be answered directly from this CDF.

(a)
$$P[U \le 4] = F_U(4) = 1/4 + 3(4-3)/8 = 5/8$$

(b)
$$P[-2 < U \le 2] = F_U(2) - F_U(-2) = 1/4 - 1/4 = 0$$

(c)
$$P[U > 0] = 1 - P[U \le 0] = 1 - F_U(0) = 1 - 1/4 = 3/4$$

(d) By inspection of $F_U(u)$, we observe that $P[U \le a] = F_U(a) = 1/2$ for $a$ in the range $3 \le a \le 5$. In this range,
$$F_U(a) = 1/4 + 3(a-3)/8 = 1/2$$
This implies $a = 11/3$.
Problem 4.1.4
(a) By definition, $\lceil nx \rceil$ is the smallest integer that is greater than or equal to $nx$. This implies
$$nx \le \lceil nx \rceil \le nx + 1$$

(b) By part (a),
$$\frac{nx}{n} \le \frac{\lceil nx \rceil}{n} \le \frac{nx+1}{n}$$
That is,
$$x \le \frac{\lceil nx \rceil}{n} \le x + \frac{1}{n}$$
This implies
$$x \le \lim_{n\to\infty} \frac{\lceil nx \rceil}{n} \le \lim_{n\to\infty}\left(x + \frac{1}{n}\right) = x$$
Problem 4.2.1
$$f_X(x) = \begin{cases} cx & 0 \le x \le 2 \\ 0 & \text{otherwise} \end{cases}$$

(a) From the above PDF we can determine the value of $c$ by integrating the PDF and setting it equal to 1.
$$\int_0^2 cx\,dx = 2c = 1$$
Therefore $c = 1/2$.

(b) $P[0 \le X \le 1] = \int_0^1 (x/2)\,dx = 1/4$

(c) $P[-1/2 \le X \le 1/2] = \int_0^{1/2} (x/2)\,dx = 1/16$

(d) The CDF of $X$ is found by integrating the PDF from 0 to $x$.
$$F_X(x) = \int_0^x f_X(x')\,dx' = \begin{cases} 0 & x < 0 \\ x^2/4 & 0 \le x \le 2 \\ 1 & x > 2 \end{cases}$$

Problem 4.2.2
From the CDF, we can find the PDF by direct differentiation. The CDF and corresponding PDF are
$$F_X(x) = \begin{cases} 0 & x < -1 \\ (x+1)/2 & -1 \le x \le 1 \\ 1 & x > 1 \end{cases} \qquad f_X(x) = \begin{cases} 1/2 & -1 \le x \le 1 \\ 0 & \text{otherwise} \end{cases}$$

Problem 4.2.3
We find the PDF by taking the derivative of $F_U(u)$ on each piece over which $F_U(u)$ is defined. The CDF and corresponding PDF of $U$ are
$$F_U(u) = \begin{cases} 0 & u < -5 \\ (u+5)/8 & -5 \le u < -3 \\ 1/4 & -3 \le u < 3 \\ 1/4 + 3(u-3)/8 & 3 \le u < 5 \\ 1 & u \ge 5 \end{cases} \qquad f_U(u) = \begin{cases} 1/8 & -5 \le u < -3 \\ 0 & -3 \le u < 3 \\ 3/8 & 3 \le u < 5 \\ 0 & \text{otherwise} \end{cases}$$

Problem 4.2.4
$$f_X(x) = \begin{cases} ax^2 + bx & 0 \le x \le 1 \\ 0 & \text{otherwise} \end{cases}$$
First, we note that $a$ and $b$ must be chosen such that the above PDF integrates to 1.
$$\int_0^1 (ax^2 + bx)\,dx = a/3 + b/2 = 1$$
Hence, $b = 2 - 2a/3$ and our PDF becomes
$$f_X(x) = x(ax + 2 - 2a/3)$$
For the PDF to be non-negative for $x \in [0,1]$, we must have $ax + 2 - 2a/3 \ge 0$ for all $x \in [0,1]$. This requirement can be written as
$$a(2/3 - x) \le 2 \qquad (0 \le x \le 1)$$
For $x = 2/3$, the requirement holds for all $a$. However, the problem is tricky because we must consider the cases $0 \le x < 2/3$ and $2/3 < x \le 1$ separately because of the sign change of the inequality. When $0 \le x < 2/3$, we have $2/3 - x > 0$ and the requirement is most stringent at $x = 0$, where we require $2a/3 \le 2$ or $a \le 3$. When $2/3 < x \le 1$, we can write the constraint as $a(x - 2/3) \ge -2$. In this case, the constraint is most stringent at $x = 1$, where we must have $a/3 \ge -2$ or $a \ge -6$. Thus the complete expression for our requirements is
$$-6 \le a \le 3 \qquad b = 2 - 2a/3$$
As we see in the following plot, the shape of the PDF $f_X(x)$ varies greatly with the value of $a$.

[Plot of $f_X(x)$ over $0 \le x \le 1$ for $a = -6$, $a = -3$, $a = 0$, and $a = 3$.]
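The plot itself is not reproduced here; a sketch along the following lines regenerates it, assuming numpy and matplotlib are available and that the curves shown correspond to $a = -6, -3, 0, 3$ (the endpoints and two interior values of the allowed range).

    import numpy as np
    import matplotlib.pyplot as plt

    x = np.linspace(0, 1, 200)
    for a in (-6, -3, 0, 3):
        b = 2 - 2 * a / 3                                  # normalization constraint
        plt.plot(x, a * x**2 + b * x, label=f"a = {a}")    # f_X(x) = ax^2 + bx on [0, 1]
    plt.xlabel("x")
    plt.ylabel("f_X(x)")
    plt.legend()
    plt.show()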

Problem 4.3.1
$$f_X(x) = \begin{cases} 1/4 & -1 \le x \le 3 \\ 0 & \text{otherwise} \end{cases}$$
We recognize that $X$ is a uniform random variable on $[-1, 3]$.

(a) $E[X] = 1$ and $\mathrm{Var}[X] = \frac{(3+1)^2}{12} = 4/3$.

(b) The new random variable $Y$ is defined as $Y = h(X) = X^2$. Therefore
$$h(E[X]) = h(1) = 1$$
and
$$E[h(X)] = E[X^2] = \mathrm{Var}[X] + (E[X])^2 = 4/3 + 1 = 7/3$$

Finally,
$$E[Y] = E[h(X)] = E[X^2] = 7/3$$
$$\mathrm{Var}[Y] = E[X^4] - \left(E[X^2]\right)^2 = \int_{-1}^{3}\frac{x^4}{4}\,dx - \frac{49}{9} = \frac{61}{5} - \frac{49}{9} = \frac{304}{45}$$

Problem 4.3.2
(a) Since the PDF is uniform over $[1, 9]$,
$$E[X] = \frac{1+9}{2} = 5 \qquad \mathrm{Var}[X] = \frac{(9-1)^2}{12} = \frac{16}{3}$$

(b) Define $h(X) = 1/\sqrt{X}$. Then
$$h(E[X]) = 1/\sqrt{5} \qquad E[h(X)] = \int_1^9 \frac{x^{-1/2}}{8}\,dx = 1/2$$

(c)
$$E[Y] = E[h(X)] = 1/2$$
$$\mathrm{Var}[Y] = E[Y^2] - (E[Y])^2 = \int_1^9 \frac{x^{-1}}{8}\,dx - (E[Y])^2 = \frac{\ln 9}{8} - \frac{1}{4}$$

Problem 4.3.3
The CDF of $X$ is
$$F_X(x) = \begin{cases} 0 & x < 0 \\ x/2 & 0 \le x < 2 \\ 1 & x \ge 2 \end{cases}$$

(a) We can find the expected value of $X$ by first finding the PDF by differentiating the above CDF.
$$f_X(x) = \begin{cases} 1/2 & 0 \le x \le 2 \\ 0 & \text{otherwise} \end{cases}$$
The expected value is then
$$E[X] = \int_0^2 \frac{x}{2}\,dx = 1$$

(b)
$$E[X^2] = \int_0^2 \frac{x^2}{2}\,dx = 4/3 \qquad \mathrm{Var}[X] = E[X^2] - (E[X])^2 = 4/3 - 1 = 1/3$$


Problem 4.3.4
(a) We can find the expected value of $Y$ by direct integration of the given PDF.
$$f_Y(y) = \begin{cases} y/2 & 0 \le y \le 2 \\ 0 & \text{otherwise} \end{cases}$$
The expectation is
$$E[Y] = \int_0^2 \frac{y^2}{2}\,dy = 4/3$$

(b)
$$E[Y^2] = \int_0^2 \frac{y^3}{2}\,dy = 2 \qquad \mathrm{Var}[Y] = E[Y^2] - (E[Y])^2 = 2 - (4/3)^2 = 2/9$$

Problem 4.3.5
The CDF of $Y$ is
$$F_Y(y) = \begin{cases} 0 & y < -1 \\ (y+1)/2 & -1 \le y < 1 \\ 1 & y \ge 1 \end{cases}$$

(a) We can find the expected value of $Y$ by first finding the PDF by differentiating the above CDF.
$$f_Y(y) = \begin{cases} 1/2 & -1 \le y \le 1 \\ 0 & \text{otherwise} \end{cases}$$
And
$$E[Y] = \int_{-1}^{1} \frac{y}{2}\,dy = 0$$

(b)
$$E[Y^2] = \int_{-1}^{1} \frac{y^2}{2}\,dy = 1/3 \qquad \mathrm{Var}[Y] = E[Y^2] - (E[Y])^2 = 1/3 - 0 = 1/3$$


Problem 4.3.6
To evaluate the moments of $V$, we need the PDF $f_V(v)$, which we find by taking the derivative of the CDF $F_V(v)$. The CDF and corresponding PDF of $V$ are
$$F_V(v) = \begin{cases} 0 & v < -5 \\ (v+5)^2/144 & -5 \le v < 7 \\ 1 & v \ge 7 \end{cases} \qquad f_V(v) = \begin{cases} (v+5)/72 & -5 \le v < 7 \\ 0 & \text{otherwise} \end{cases}$$

(a) The expected value of $V$ is
$$E[V] = \int_{-\infty}^{\infty} v f_V(v)\,dv = \frac{1}{72}\int_{-5}^{7} (v^2 + 5v)\,dv = \frac{1}{72}\left[\frac{v^3}{3} + \frac{5v^2}{2}\right]_{-5}^{7} = \frac{1}{72}\left(\frac{343}{3} + \frac{245}{2} + \frac{125}{3} - \frac{125}{2}\right) = 3$$

(b) To find the variance, we first find the second moment
$$E[V^2] = \int_{-\infty}^{\infty} v^2 f_V(v)\,dv = \frac{1}{72}\int_{-5}^{7} (v^3 + 5v^2)\,dv = \frac{1}{72}\left[\frac{v^4}{4} + \frac{5v^3}{3}\right]_{-5}^{7} = \frac{1224}{72} = 17$$
The variance is $\mathrm{Var}[V] = E[V^2] - (E[V])^2 = 17 - 9 = 8$.

(c) The third moment of $V$ is
$$E[V^3] = \int_{-\infty}^{\infty} v^3 f_V(v)\,dv = \frac{1}{72}\int_{-5}^{7} (v^4 + 5v^3)\,dv = \frac{1}{72}\left[\frac{v^5}{5} + \frac{5v^4}{4}\right]_{-5}^{7} = 86.2$$
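A plain midpoint Riemann sum (standard library only) cross-checks these moments and the variance:

    def f_V(v):
        # PDF of V derived above
        return (v + 5) / 72 if -5 <= v < 7 else 0.0

    N = 200_000
    dv = 12 / N
    m1 = m2 = m3 = 0.0
    for k in range(N):
        v = -5 + (k + 0.5) * dv          # midpoint of the k-th subinterval of [-5, 7]
        w = f_V(v) * dv
        m1 += v * w
        m2 += v * v * w
        m3 += v ** 3 * w
    print(m1, m2, m2 - m1 ** 2, m3)      # approximately 3, 17, 8, 86.2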

Problem 4.3.7
To find the moments, we first find the PDF of $U$ by taking the derivative of $F_U(u)$. The CDF and corresponding PDF are
$$F_U(u) = \begin{cases} 0 & u < -5 \\ (u+5)/8 & -5 \le u < -3 \\ 1/4 & -3 \le u < 3 \\ 1/4 + 3(u-3)/8 & 3 \le u < 5 \\ 1 & u \ge 5 \end{cases} \qquad f_U(u) = \begin{cases} 1/8 & -5 \le u < -3 \\ 0 & -3 \le u < 3 \\ 3/8 & 3 \le u < 5 \\ 0 & \text{otherwise} \end{cases}$$

(a) The expected value of $U$ is
$$E[U] = \int_{-\infty}^{\infty} u f_U(u)\,du = \int_{-5}^{-3} \frac{u}{8}\,du + \int_{3}^{5} \frac{3u}{8}\,du = \left.\frac{u^2}{16}\right|_{-5}^{-3} + \left.\frac{3u^2}{16}\right|_{3}^{5} = -1 + 3 = 2$$

(b) The second moment of $U$ is
$$E[U^2] = \int_{-\infty}^{\infty} u^2 f_U(u)\,du = \int_{-5}^{-3} \frac{u^2}{8}\,du + \int_{3}^{5} \frac{3u^2}{8}\,du = \left.\frac{u^3}{24}\right|_{-5}^{-3} + \left.\frac{u^3}{8}\right|_{3}^{5} = 49/3$$
The variance of $U$ is $\mathrm{Var}[U] = E[U^2] - (E[U])^2 = 49/3 - 4 = 37/3$.

(c) Note that $2^U = e^{(\ln 2)U}$. This implies that
$$\int 2^u\,du = \int e^{(\ln 2)u}\,du = \frac{1}{\ln 2}e^{(\ln 2)u} = \frac{2^u}{\ln 2}$$
The expected value of $2^U$ is then
$$E[2^U] = \int_{-\infty}^{\infty} 2^u f_U(u)\,du = \int_{-5}^{-3} \frac{2^u}{8}\,du + \int_{3}^{5} \frac{3\cdot 2^u}{8}\,du = \left.\frac{2^u}{8\ln 2}\right|_{-5}^{-3} + \left.\frac{3\cdot 2^u}{8\ln 2}\right|_{3}^{5} = \frac{2307}{256\ln 2} = 13.001$$

Problem 4.4.1
From Appendix A, we observe that an exponential random variable $Y$ with parameter $\lambda > 0$ has PDF
$$f_Y(y) = \begin{cases} \lambda e^{-\lambda y} & y \ge 0 \\ 0 & \text{otherwise} \end{cases}$$
In addition, the mean and variance of $Y$ are
$$E[Y] = \frac{1}{\lambda} \qquad \mathrm{Var}[Y] = \frac{1}{\lambda^2}$$

(a) Since $\mathrm{Var}[Y] = 25$, we must have $\lambda = 1/5$.

(b) The expected value of $Y$ is $E[Y] = 1/\lambda = 5$.

(c)
$$P[Y > 5] = \int_5^{\infty} f_Y(y)\,dy = \left.-e^{-y/5}\right|_5^{\infty} = e^{-1}$$

Problem 4.4.2
From Appendix A, an Erlang random variable $X$ with parameters $\lambda > 0$ and $n$ has PDF
$$f_X(x) = \begin{cases} \lambda^n x^{n-1} e^{-\lambda x}/(n-1)! & x \ge 0 \\ 0 & \text{otherwise} \end{cases}$$
In addition, the mean and variance of $X$ are
$$E[X] = \frac{n}{\lambda} \qquad \mathrm{Var}[X] = \frac{n}{\lambda^2}$$

(a) Since $\lambda = 1/3$ and $E[X] = n/\lambda = 15$, we must have $n = 5$.

(b) Substituting the parameters $n = 5$ and $\lambda = 1/3$ into the given PDF, we obtain
$$f_X(x) = \begin{cases} (1/3)^5 x^4 e^{-x/3}/24 & x \ge 0 \\ 0 & \text{otherwise} \end{cases}$$

(c) From above, we know that $\mathrm{Var}[X] = n/\lambda^2 = 45$.

Problem 4.4.3
Since $Y$ is an Erlang random variable with parameters $\lambda = 2$ and $n = 2$, we find in Appendix A that
$$f_Y(y) = \begin{cases} 4ye^{-2y} & y \ge 0 \\ 0 & \text{otherwise} \end{cases}$$

(a) Appendix A tells us that $E[Y] = n/\lambda = 1$.

(b) Appendix A also tells us that $\mathrm{Var}[Y] = n/\lambda^2 = 1/2$.

(c) The probability that $1/2 \le Y < 3/2$ is
$$P[1/2 \le Y < 3/2] = \int_{1/2}^{3/2} f_Y(y)\,dy = \int_{1/2}^{3/2} 4ye^{-2y}\,dy$$
This integral is easily completed using the integration by parts formula $\int u\,dv = uv - \int v\,du$ with
$$u = 2y \qquad dv = 2e^{-2y}\,dy \qquad du = 2\,dy \qquad v = -e^{-2y}$$
Making these substitutions, we obtain
$$P[1/2 \le Y < 3/2] = \left.-2ye^{-2y}\right|_{1/2}^{3/2} + \int_{1/2}^{3/2} 2e^{-2y}\,dy = 2e^{-1} - 4e^{-3} = 0.537$$
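For a quick check, the same probability also follows from the Erlang ($n = 2$) CDF, $F_Y(y) = 1 - e^{-\lambda y}(1 + \lambda y)$; the short sketch below uses only the standard library.

    import math

    def erlang2_cdf(y, lam=2.0):
        # CDF of an Erlang (n = 2) random variable with rate lam
        return 1 - math.exp(-lam * y) * (1 + lam * y)

    print(erlang2_cdf(1.5) - erlang2_cdf(0.5))   # ~0.5367
    print(2 * math.exp(-1) - 4 * math.exp(-3))   # the integration-by-parts answer, same value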

Problem 4.4.4
(a) The PDF of a continuous uniform random variable distributed over $[-5, 5)$ is
$$f_X(x) = \begin{cases} 1/10 & -5 \le x \le 5 \\ 0 & \text{otherwise} \end{cases}$$

(b) For $x < -5$, $F_X(x) = 0$. For $x \ge 5$, $F_X(x) = 1$. For $-5 \le x \le 5$, the CDF is
$$F_X(x) = \int_{-5}^{x} f_X(\xi)\,d\xi = \frac{x+5}{10}$$
The complete expression for the CDF of $X$ is
$$F_X(x) = \begin{cases} 0 & x < -5 \\ (x+5)/10 & -5 \le x \le 5 \\ 1 & x > 5 \end{cases}$$

(c) The expected value of $X$ is
$$\int_{-5}^{5} \frac{x}{10}\,dx = \left.\frac{x^2}{20}\right|_{-5}^{5} = 0$$
Another way to obtain this answer is to use Theorem 4.7, which says the expected value of $X$ is
$$E[X] = \frac{5 + (-5)}{2} = 0$$

(d) The fifth moment of $X$ is
$$\int_{-5}^{5} \frac{x^5}{10}\,dx = \left.\frac{x^6}{60}\right|_{-5}^{5} = 0$$
The expected value of $e^X$ is
$$\int_{-5}^{5} \frac{e^x}{10}\,dx = \left.\frac{e^x}{10}\right|_{-5}^{5} = \frac{e^5 - e^{-5}}{10} = 14.84$$


Problem 4.4.5
We know that $X$ has a uniform PDF over $[a, b)$ with mean $\mu_X = 7$ and variance $\mathrm{Var}[X] = 3$. All that is left to do is determine the values of the constants $a$ and $b$ to complete the model of the uniform PDF.
$$E[X] = \frac{a+b}{2} = 7 \qquad \mathrm{Var}[X] = \frac{(b-a)^2}{12} = 3$$
Since we assume $b > a$, this implies
$$a + b = 14 \qquad b - a = 6$$
Solving these two equations, we arrive at
$$a = 4 \qquad b = 10$$
And the resulting PDF of $X$ is
$$f_X(x) = \begin{cases} 1/6 & 4 \le x \le 10 \\ 0 & \text{otherwise} \end{cases}$$

Problem 4.4.6
Given that
$$f_X(x) = \begin{cases} (1/2)e^{-x/2} & x \ge 0 \\ 0 & \text{otherwise} \end{cases}$$

(a)
$$P[1 \le X \le 2] = \int_1^2 (1/2)e^{-x/2}\,dx = e^{-1/2} - e^{-1} = 0.2387$$

(b) The CDF of $X$ may be expressed as
$$F_X(x) = \begin{cases} 0 & x < 0 \\ \int_0^x (1/2)e^{-\xi/2}\,d\xi & x \ge 0 \end{cases} = \begin{cases} 0 & x < 0 \\ 1 - e^{-x/2} & x \ge 0 \end{cases}$$

(c) $X$ is an exponential random variable with parameter $a = 1/2$. By Theorem 4.9, the expected value of $X$ is $E[X] = 1/a = 2$.

(d) By Theorem 4.9, the variance of $X$ is $\mathrm{Var}[X] = 1/a^2 = 4$.



Problem 4.4.7
Given the uniform PDF
$$f_U(u) = \begin{cases} 1/(b-a) & a \le u \le b \\ 0 & \text{otherwise} \end{cases}$$
The mean of $U$ can be found by integrating
$$E[U] = \int_a^b \frac{u}{b-a}\,du = \frac{b^2 - a^2}{2(b-a)} = \frac{b+a}{2}$$
where we factored $b^2 - a^2 = (b-a)(b+a)$. The variance of $U$ can also be found by finding $E[U^2]$.
$$E[U^2] = \int_a^b \frac{u^2}{b-a}\,du = \frac{b^3 - a^3}{3(b-a)}$$
Therefore the variance is
$$\mathrm{Var}[U] = \frac{b^3 - a^3}{3(b-a)} - \left(\frac{b+a}{2}\right)^2 = \frac{(b-a)^2}{12}$$

Problem 4.4.8
The integral $I_1$ is
$$I_1 = \int_0^{\infty} \lambda e^{-\lambda x}\,dx = \left.-e^{-\lambda x}\right|_0^{\infty} = 1$$
For $n > 1$, we have
$$I_n = \int_0^{\infty} \underbrace{\frac{\lambda^{n-1} x^{n-1}}{(n-1)!}}_{u}\,\underbrace{\lambda e^{-\lambda x}\,dx}_{dv}$$
We define $u$ and $dv$ as shown above in order to use the integration by parts formula $\int u\,dv = uv - \int v\,du$. Since
$$du = \frac{\lambda^{n-1} x^{n-2}}{(n-2)!}\,dx \qquad v = -e^{-\lambda x}$$
we can write
$$I_n = \left. uv\right|_0^{\infty} - \int_0^{\infty} v\,du = \left.-\frac{\lambda^{n-1} x^{n-1}}{(n-1)!}e^{-\lambda x}\right|_0^{\infty} + \int_0^{\infty} \frac{\lambda^{n-1} x^{n-2}}{(n-2)!}\,e^{-\lambda x}\,dx = 0 + I_{n-1}$$
Hence, $I_n = 1$ for all $n \ge 1$.
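The identity can also be checked numerically by integrating the Erlang PDF on a truncated grid; the sketch below (standard library only, with $\lambda = 2$ as an arbitrary choice) prints a value close to 1 for each $n$.

    import math

    def erlang_pdf(x, lam, n):
        return lam ** n * x ** (n - 1) * math.exp(-lam * x) / math.factorial(n - 1)

    lam, dx, upper = 2.0, 0.001, 40.0
    for n in range(1, 6):
        I_n = sum(erlang_pdf((k + 0.5) * dx, lam, n) * dx for k in range(int(upper / dx)))
        print(n, round(I_n, 6))   # each I_n should be ~1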


Problem 4.4.9
For an Erlang random variable $X$ with parameters $\lambda$ and $n$, the $k$th moment is
$$E[X^k] = \int_0^{\infty} x^k f_X(x)\,dx = \int_0^{\infty} \frac{\lambda^n x^{n+k-1}}{(n-1)!} e^{-\lambda x}\,dx = \frac{(n+k-1)!}{\lambda^k (n-1)!} \underbrace{\int_0^{\infty} \frac{\lambda^{n+k} x^{n+k-1}}{(n+k-1)!} e^{-\lambda x}\,dx}_{1}$$
The marked integral equals 1 since it is the integral of an Erlang PDF with parameters $\lambda$ and $n+k$ over all possible values. Hence,
$$E[X^k] = \frac{(n+k-1)!}{\lambda^k (n-1)!}$$
This implies that the first and second moments are
$$E[X] = \frac{n!}{\lambda (n-1)!} = \frac{n}{\lambda} \qquad E[X^2] = \frac{(n+1)!}{\lambda^2 (n-1)!} = \frac{(n+1)n}{\lambda^2}$$
It follows that the variance of $X$ is $n/\lambda^2$.


Problem 4.4.10
For $n = 1$, we have the fact $E[X] = 1/\lambda$ that is given in the problem statement. Now we assume that $E[X^{n-1}] = (n-1)!/\lambda^{n-1}$. To complete the proof, we show that this implies $E[X^n] = n!/\lambda^n$. Specifically, we write
$$E[X^n] = \int_0^{\infty} x^n \lambda e^{-\lambda x}\,dx$$
Now we use the integration by parts formula $\int u\,dv = uv - \int v\,du$ with $u = x^n$ and $dv = \lambda e^{-\lambda x}\,dx$. This implies $du = nx^{n-1}\,dx$ and $v = -e^{-\lambda x}$ so that
$$E[X^n] = \left.-x^n e^{-\lambda x}\right|_0^{\infty} + \int_0^{\infty} n x^{n-1} e^{-\lambda x}\,dx = 0 + \frac{n}{\lambda}\int_0^{\infty} x^{n-1} \lambda e^{-\lambda x}\,dx = \frac{n}{\lambda}\,E[X^{n-1}]$$
By our induction hypothesis, $E[X^{n-1}] = (n-1)!/\lambda^{n-1}$, which implies
$$E[X^n] = n!/\lambda^n$$


Problem 4.4.11
(a) Since $f_X(x) \ge 0$ and $x \ge r$ over the entire integral, we can write
$$\int_r^{\infty} x f_X(x)\,dx \ge \int_r^{\infty} r f_X(x)\,dx = rP[X > r]$$

(b) We can write the expected value of $X$ in the form
$$E[X] = \int_0^r x f_X(x)\,dx + \int_r^{\infty} x f_X(x)\,dx$$
Hence,
$$rP[X > r] \le \int_r^{\infty} x f_X(x)\,dx = E[X] - \int_0^r x f_X(x)\,dx$$
Allowing $r$ to approach infinity yields
$$\lim_{r\to\infty} rP[X > r] \le E[X] - \lim_{r\to\infty} \int_0^r x f_X(x)\,dx = E[X] - E[X] = 0$$
Since $rP[X > r] \ge 0$ for all $r \ge 0$, we must have $\lim_{r\to\infty} rP[X > r] = 0$.

(c) We can use the integration by parts formula $\int u\,dv = uv - \int v\,du$ by defining $u = 1 - F_X(x)$ and $dv = dx$. This yields
$$\int_0^{\infty} [1 - F_X(x)]\,dx = \left. x[1 - F_X(x)]\right|_0^{\infty} + \int_0^{\infty} x f_X(x)\,dx$$
By applying part (a), we now observe that
$$\left. x[1 - F_X(x)]\right|_0^{\infty} = \lim_{r\to\infty} r[1 - F_X(r)] - 0 = \lim_{r\to\infty} rP[X > r]$$
By part (b), $\lim_{r\to\infty} rP[X > r] = 0$, and this implies $x[1 - F_X(x)]|_0^{\infty} = 0$. Thus,
$$\int_0^{\infty} [1 - F_X(x)]\,dx = \int_0^{\infty} x f_X(x)\,dx = E[X]$$

Problem 4.5.1
Given that the peak temperature, $T$, is a Gaussian random variable with mean 85 and standard deviation 10, we can use the fact that $F_T(t) = \Phi((t - \mu_T)/\sigma_T)$ and Table 4.1 on page 142 to evaluate the following.
$$P[T > 100] = 1 - P[T \le 100] = 1 - F_T(100) = 1 - \Phi\!\left(\frac{100-85}{10}\right) = 1 - \Phi(1.5) = 1 - 0.933 = 0.067$$
$$P[T < 60] = \Phi\!\left(\frac{60-85}{10}\right) = \Phi(-2.5) = 1 - \Phi(2.5) = 1 - 0.993 = 0.007$$
$$P[70 \le T \le 100] = F_T(100) - F_T(70) = \Phi(1.5) - \Phi(-1.5) = 2\Phi(1.5) - 1 = 0.866$$
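If a $\Phi$ table is not at hand, the standard normal CDF can be computed from the error function; the sketch below (standard library only) reproduces the three probabilities, up to the three-decimal rounding of Table 4.1.

    import math

    def Phi(x):
        # standard normal CDF via the error function
        return 0.5 * (1 + math.erf(x / math.sqrt(2)))

    print(1 - Phi(1.5))           # P[T > 100]        ~ 0.0668
    print(Phi(-2.5))              # P[T < 60]         ~ 0.0062
    print(Phi(1.5) - Phi(-1.5))   # P[70 <= T <= 100] ~ 0.8664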


Problem 4.5.2
The standard normal Gaussian random variable $Z$ has mean $\mu = 0$ and variance $\sigma^2 = 1$. Making these substitutions in Definition 4.8 yields
$$f_Z(z) = \frac{1}{\sqrt{2\pi}}\,e^{-z^2/2}$$
Problem 4.5.3
$X$ is a Gaussian random variable with zero mean but unknown variance. We do know, however, that
$$P[|X| \le 10] = 0.1$$
We can find the variance $\mathrm{Var}[X]$ by expanding the above probability in terms of the $\Phi(\cdot)$ function.
$$P[-10 \le X \le 10] = F_X(10) - F_X(-10) = 2\Phi\!\left(\frac{10}{\sigma_X}\right) - 1$$
This implies $\Phi(10/\sigma_X) = 0.55$. Using Table 4.1 for the Gaussian CDF, we find that $10/\sigma_X = 0.15$, or $\sigma_X = 66.6$.
Problem 4.5.4
Moving to Antarctica, we find that the temperature, $T$, is still Gaussian but with variance 225. We also know that with probability 1/2, $T$ exceeds 10 degrees. First we would like to find the mean temperature, and we do so by looking at the second fact.
$$P[T > 10] = 1 - P[T \le 10] = 1 - \Phi\!\left(\frac{10 - \mu_T}{15}\right) = 1/2$$
By looking at the table we find that if $\Phi(x) = 1/2$, then $x = 0$. Therefore,
$$\Phi\!\left(\frac{10 - \mu_T}{15}\right) = 1/2$$
implies that $(10 - \mu_T)/15 = 0$, or $\mu_T = 10$. Now we have a Gaussian $T$ with mean 10 and standard deviation 15. So we are prepared to answer the following problems.
$$P[T > 32] = 1 - P[T \le 32] = 1 - \Phi\!\left(\frac{32-10}{15}\right) = 1 - \Phi(1.47) = 1 - 0.929 = 0.071$$
$$P[T < 0] = F_T(0) = \Phi\!\left(\frac{0-10}{15}\right) = \Phi(-2/3) = 1 - \Phi(2/3) = 1 - \Phi(0.67) = 1 - 0.749 = 0.251$$
$$P[T > 60] = 1 - P[T \le 60] = 1 - F_T(60) = 1 - \Phi\!\left(\frac{60-10}{15}\right) = 1 - \Phi(10/3) = Q(3.33) = 4.34\times 10^{-4}$$


Problem 4.5.5
In this problem, we use Theorem 4.14 and the tables for the $\Phi$ and $Q$ functions to answer the questions. Since $E[Y_{20}] = 40(20) = 800$ and $\mathrm{Var}[Y_{20}] = 100(20) = 2000$, we can write
$$P[Y_{20} > 1000] = P\!\left[\frac{Y_{20} - 800}{\sqrt{2000}} > \frac{1000-800}{\sqrt{2000}}\right] = P\!\left[Z > \frac{200}{20\sqrt{5}}\right] = Q(4.47) = 3.91\times 10^{-6}$$
The second part is a little trickier. Since $E[Y_{25}] = 1000$, we know that the prof will spend around \$1000 in roughly 25 years. However, to be certain with probability 0.99 that the prof spends \$1000 will require more than 25 years. In particular, we know that
$$P[Y_n > 1000] = P\!\left[\frac{Y_n - 40n}{\sqrt{100n}} > \frac{1000 - 40n}{\sqrt{100n}}\right] = 1 - \Phi\!\left(\frac{100-4n}{\sqrt{n}}\right) = 0.99$$
Hence, we must find $n$ such that
$$\Phi\!\left(\frac{100-4n}{\sqrt{n}}\right) = 0.01$$
Recall that $\Phi(x) = 0.01$ for a negative value of $x$. This is consistent with our earlier observation that we would need $n > 25$, corresponding to $100 - 4n < 0$. Thus, we use the identity $\Phi(x) = 1 - \Phi(-x)$ to write
$$\Phi\!\left(\frac{100-4n}{\sqrt{n}}\right) = 1 - \Phi\!\left(\frac{4n-100}{\sqrt{n}}\right) = 0.01$$
Equivalently, we have
$$\Phi\!\left(\frac{4n-100}{\sqrt{n}}\right) = 0.99$$
From the table of the $\Phi$ function, we have that
$$\frac{4n-100}{\sqrt{n}} = 2.33 \qquad\text{or}\qquad (n-25)^2 = \left(\frac{2.33}{4}\right)^2 n = 0.3393\,n$$
Solving this quadratic yields $n = 28.09$. Hence, only after 28 years are we 99 percent sure that the prof will have spent \$1000. Note that a second root of the quadratic yields $n = 22.25$. This root is not a valid solution to our problem. Mathematically, it is a solution of our quadratic in which we choose the negative root of $\sqrt{n}$. This would correspond to assuming the standard deviation of $Y_n$ is negative.
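The final quadratic can also be solved directly; a minimal sketch (standard library only):

    import math

    # (4n - 100)/sqrt(n) = 2.33  rearranges to  n^2 - (50 + (2.33/4)^2) n + 625 = 0
    b = -(50 + (2.33 / 4) ** 2)
    disc = math.sqrt(b * b - 4 * 625)
    print((-b + disc) / 2, (-b - disc) / 2)   # ~28.09 and ~22.25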


Problem 4.5.6
We are given that there are 100,000,000 men in the United States, that 23,000 of them are at least 7 feet tall, and that the heights of U.S. men are independent Gaussian random variables with mean 5'10''.

(a) Let $H$ denote the height in inches of a U.S. male. To find $\sigma_X$, we use the fact that the probability $P[H \ge 84]$ is the number of men who are at least 7 feet tall divided by the total number of men (the frequency interpretation of probability). Since we measure $H$ in inches, we have
$$P[H \ge 84] = \frac{23{,}000}{100{,}000{,}000} = \Phi\!\left(\frac{70-84}{\sigma_X}\right) = 0.00023$$
Since $\Phi(-x) = 1 - \Phi(x) = Q(x)$,
$$Q(14/\sigma_X) = 2.3\times 10^{-4}$$
From Table 4.2, this implies $14/\sigma_X = 3.5$, or $\sigma_X = 4$.

(b) The probability that a randomly chosen man is at least 8 feet tall is
$$P[H \ge 96] = Q\!\left(\frac{96-70}{4}\right) = Q(6.5)$$
Unfortunately, Table 4.2 doesn't include $Q(6.5)$, although it should be apparent that the probability is very small. In fact, $Q(6.5) = 4.0\times 10^{-11}$.

(c) First we need to find the probability that a man is at least 7'6''.
$$P[H \ge 90] = Q\!\left(\frac{90-70}{4}\right) = Q(5) \approx 3\times 10^{-7} = \beta$$
Although Table 4.2 stops at $Q(4.99)$, if you're curious, the exact value is $Q(5) = 2.87\times 10^{-7}$.
Now we can find the probability that no man is at least 7'6''. This can be modeled as 100,000,000 repetitions of a Bernoulli trial with parameter $1 - \beta$. The probability that no man is at least 7'6'' is
$$(1-\beta)^{100{,}000{,}000} = 9.4\times 10^{-14}$$

(d) The expected value of $N$ is just the number of trials multiplied by the probability that a man is at least 7'6''.
$$E[N] = 100{,}000{,}000\,\beta = 30$$
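The $Q$ values above can be checked with the complementary error function; note that the solution rounds $\beta$ to $3\times 10^{-7}$, which is where the figures $9.4\times 10^{-14}$ and $E[N] = 30$ come from.

    import math

    def Q(x):
        # Gaussian tail probability via the complementary error function
        return 0.5 * math.erfc(x / math.sqrt(2))

    print(Q(3.5))                        # ~2.3e-4, consistent with 14/sigma_X = 3.5
    print(Q(6.5))                        # ~4.0e-11
    beta = Q(5.0)                        # ~2.87e-7
    print(beta, 100_000_000 * beta)      # expected number of men at least 7'6" (~28.7)
    print((1 - 3e-7) ** 100_000_000)     # with the rounded beta = 3e-7: ~9.4e-14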


Problem 4.5.7
First we note that since $W$ has an $N[\mu, \sigma^2]$ distribution, the integral we wish to evaluate is
$$I = \int_{-\infty}^{\infty} f_W(w)\,dw = \frac{1}{\sqrt{2\pi\sigma^2}} \int_{-\infty}^{\infty} e^{-(w-\mu)^2/2\sigma^2}\,dw$$

(a) Using the substitution $x = (w-\mu)/\sigma$, we have $dx = dw/\sigma$ and
$$I = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} e^{-x^2/2}\,dx$$

(b) When we write $I^2$ as the product of integrals, we use $y$ to denote the other variable of integration so that
$$I^2 = \left(\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} e^{-x^2/2}\,dx\right)\left(\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} e^{-y^2/2}\,dy\right) = \frac{1}{2\pi}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} e^{-(x^2+y^2)/2}\,dx\,dy$$

(c) By changing to polar coordinates, $x^2 + y^2 = r^2$ and $dx\,dy = r\,dr\,d\theta$ so that
$$I^2 = \frac{1}{2\pi}\int_0^{2\pi}\int_0^{\infty} e^{-r^2/2}\,r\,dr\,d\theta = \frac{1}{2\pi}\int_0^{2\pi} \left.-e^{-r^2/2}\right|_0^{\infty}\,d\theta = \frac{1}{2\pi}\int_0^{2\pi} d\theta = 1$$

Problem 4.6.1
(a) Using the given CDF,
$$P[X < -1] = F_X(-1^-) = 0 \qquad P[X \le -1] = F_X(-1) = -1/3 + 1/3 = 0$$
where $F_X(-1^-)$ denotes the limiting value of the CDF found by approaching $-1$ from the left. Likewise, $F_X(-1^+)$ is interpreted to be the value of the CDF found by approaching $-1$ from the right. We notice that these two probabilities are the same and therefore the probability that $X$ is exactly $-1$ is zero.

(b)
$$P[X < 0] = F_X(0^-) = 1/3 \qquad P[X \le 0] = F_X(0) = 2/3$$
Here we see that there is a discrete jump at $X = 0$. Approached from the left the CDF yields a value of 1/3, but approached from the right the value is 2/3. This means that there is a non-zero probability that $X = 0$; in fact that probability is the difference of the two values.
$$P[X = 0] = P[X \le 0] - P[X < 0] = 2/3 - 1/3 = 1/3$$

(c)
$$P[0 < X \le 1] = F_X(1) - F_X(0^+) = 1 - 2/3 = 1/3 \qquad P[0 \le X \le 1] = F_X(1) - F_X(0^-) = 1 - 1/3 = 2/3$$
The difference in the last two probabilities is that the first was concerned with the probability that $X$ was strictly greater than 0, and the second with the probability that $X$ was greater than or equal to zero. Since the second event is a larger set (it includes the event $X = 0$), its probability should always be greater than or equal to the first. The two differ by the probability that $X = 0$, and this difference is non-zero only when the random variable exhibits a discrete jump in the CDF.
Problem 4.6.2
Similar to the previous problem, we find

(a)
$$P[X < -1] = F_X(-1^-) = 0 \qquad P[X \le -1] = F_X(-1) = 1/4$$
Here we notice the discontinuity of value 1/4 at $x = -1$.

(b)
$$P[X < 0] = F_X(0^-) = 1/2 \qquad P[X \le 0] = F_X(0) = 1/2$$
Since there is no discontinuity at $x = 0$, $F_X(0^-) = F_X(0^+) = F_X(0)$.

(c)
$$P[X > 1] = 1 - P[X \le 1] = 1 - F_X(1) = 0 \qquad P[X \ge 1] = 1 - P[X < 1] = 1 - F_X(1^-) = 1 - 3/4 = 1/4$$
Again we notice a discontinuity of size 1/4, here occurring at $x = 1$.


Problem 4.6.3
(a) By taking the derivative of the CDF $F_X(x)$ given in Problem 4.6.2, we obtain the PDF
$$f_X(x) = \begin{cases} \dfrac{\delta(x+1)}{4} + \dfrac{1}{4} + \dfrac{\delta(x-1)}{4} & -1 \le x \le 1 \\ 0 & \text{otherwise} \end{cases}$$

(b) The first moment of $X$ is
$$E[X] = \int_{-\infty}^{\infty} x f_X(x)\,dx = \left.\frac{x}{4}\right|_{x=-1} + \left.\frac{x^2}{8}\right|_{-1}^{1} + \left.\frac{x}{4}\right|_{x=1} = -1/4 + 0 + 1/4 = 0$$

(c) The second moment of $X$ is
$$E[X^2] = \int_{-\infty}^{\infty} x^2 f_X(x)\,dx = \left.\frac{x^2}{4}\right|_{x=-1} + \left.\frac{x^3}{12}\right|_{-1}^{1} + \left.\frac{x^2}{4}\right|_{x=1} = 1/4 + 1/6 + 1/4 = 2/3$$
Since $E[X] = 0$, $\mathrm{Var}[X] = E[X^2] = 2/3$.

Problem 4.6.4
The PMF of a Bernoulli random variable with mean $p$ is
$$P_X(x) = \begin{cases} 1-p & x = 0 \\ p & x = 1 \\ 0 & \text{otherwise} \end{cases}$$
The corresponding PDF of this discrete random variable is
$$f_X(x) = (1-p)\delta(x) + p\delta(x-1)$$
Problem 4.6.5
The PMF of a geometric random variable with mean $1/p$ is
$$P_X(x) = \begin{cases} p(1-p)^{x-1} & x = 1, 2, \ldots \\ 0 & \text{otherwise} \end{cases}$$
The corresponding PDF is
$$f_X(x) = p\delta(x-1) + p(1-p)\delta(x-2) + \cdots = \sum_{j=1}^{\infty} p(1-p)^{j-1}\delta(x-j)$$


Problem 4.6.6
(a) Since the conversation time cannot be negative, we know that $F_W(w) = 0$ for $w < 0$. The conversation time $W$ is zero iff either the phone is busy, no one answers, or the conversation time $X$ of a completed call is zero. Let $A$ be the event that the call is answered. Note that the event $A^c$ implies $W = 0$. For $w \ge 0$,
$$F_W(w) = P[A^c] + P[A]F_{W|A}(w) = (1/2) + (1/2)F_X(w)$$
Thus the complete CDF of $W$ is
$$F_W(w) = \begin{cases} 0 & w < 0 \\ 1/2 + (1/2)F_X(w) & w \ge 0 \end{cases}$$

(b) By taking the derivative of $F_W(w)$, the PDF of $W$ is
$$f_W(w) = \begin{cases} (1/2)\delta(w) + (1/2)f_X(w) & w \ge 0 \\ 0 & \text{otherwise} \end{cases}$$
Next, we keep in mind that since $X$ must be nonnegative, $f_X(x) = 0$ for $x < 0$. Hence,
$$f_W(w) = (1/2)\delta(w) + (1/2)f_X(w)$$

(c) From the PDF $f_W(w)$, calculating the moments is straightforward.
$$E[W] = \int_{-\infty}^{\infty} w f_W(w)\,dw = (1/2)\int_{-\infty}^{\infty} w f_X(w)\,dw = E[X]/2$$
The second moment is
$$E[W^2] = \int_{-\infty}^{\infty} w^2 f_W(w)\,dw = (1/2)\int_{-\infty}^{\infty} w^2 f_X(w)\,dw = E[X^2]/2$$
The variance of $W$ is
$$\mathrm{Var}[W] = E[W^2] - (E[W])^2 = E[X^2]/2 - (E[X]/2)^2 = (1/2)\mathrm{Var}[X] + (E[X])^2/4$$

Problem 4.6.7
The professor is on time 80 percent of the time, and when he is late his arrival time is uniformly distributed between 0 and 300 seconds. The PDF of $T$ is
$$f_T(t) = \begin{cases} 0.8\,\delta(t-0) + \dfrac{0.2}{300} & 0 \le t \le 300 \\ 0 & \text{otherwise} \end{cases}$$
The CDF can be found by integrating:
$$F_T(t) = \begin{cases} 0 & t < 0 \\ 0.8 + \dfrac{0.2\,t}{300} & 0 \le t < 300 \\ 1 & t \ge 300 \end{cases}$$


Problem 4.6.8
Let $G$ denote the event that the throw is good, that is, no foul occurs. The CDF of $D$ obeys
$$F_D(y) = P[D \le y | G]P[G] + P[D \le y | G^c]P[G^c]$$
Given the event $G$,
$$P[D \le y | G] = P[X \le y - 60] = 1 - e^{-(y-60)/10} \qquad (y \ge 60)$$
Of course, for $y < 60$, $P[D \le y | G] = 0$. From the problem statement, if the throw is a foul, then $D = 0$. This implies
$$P[D \le y | G^c] = u(y)$$
where $u(\cdot)$ denotes the unit step function. Since $P[G] = 0.7$, we can write
$$F_D(y) = P[G]P[D \le y|G] + P[G^c]P[D \le y|G^c] = \begin{cases} 0.3\,u(y) & y < 60 \\ 0.3 + 0.7(1 - e^{-(y-60)/10}) & y \ge 60 \end{cases}$$
Another way to write this CDF is
$$F_D(y) = 0.3\,u(y) + 0.7\,u(y-60)(1 - e^{-(y-60)/10})$$
When we take the derivative, either expression for the CDF will yield the PDF; taking the derivative of the first expression is perhaps simpler:
$$f_D(y) = \begin{cases} 0.3\,\delta(y) & y < 60 \\ 0.07\,e^{-(y-60)/10} & y \ge 60 \end{cases}$$
Taking the derivative of the second expression for the CDF is a little tricky because of the product of the exponential and the step function. However, applying the usual rule for the differentiation of a product does give the correct answer:
$$f_D(y) = 0.3\,\delta(y) + 0.7\,\delta(y-60)(1 - e^{-(y-60)/10}) + 0.07\,u(y-60)e^{-(y-60)/10} = 0.3\,\delta(y) + 0.07\,u(y-60)e^{-(y-60)/10}$$
The middle term $\delta(y-60)(1 - e^{-(y-60)/10})$ dropped out because at $y = 60$, $e^{-(y-60)/10} = 1$.
Problem 4.6.9
The professor is on time and lectures the full 80 minutes with probability 0.7; that is, $P[T = 80] = 0.7$. Likewise, when the professor is more than 5 minutes late, the students leave and a 0 minute lecture is observed. Since he is late 30% of the time and, given that he is late, his arrival is uniformly distributed between 0 and 10 minutes, the probability that there is no lecture is
$$P[T = 0] = (0.3)(0.5) = 0.15$$
The only other possible lecture durations are uniformly distributed between 75 and 80 minutes, because the students will not wait longer than 5 minutes, and that probability must add to a total of $1 - 0.7 - 0.15 = 0.15$. So the PDF of $T$ can be written as
$$f_T(t) = \begin{cases} 0.15\,\delta(t) & t = 0 \\ 0.03 & 75 \le t < 80 \\ 0.7\,\delta(t-80) & t = 80 \\ 0 & \text{otherwise} \end{cases}$$

Problem 4.7.1
Since $0 \le X \le 1$, $Y = X^2$ satisfies $0 \le Y \le 1$. We can conclude that $F_Y(y) = 0$ for $y < 0$ and that $F_Y(y) = 1$ for $y \ge 1$. For $0 \le y < 1$,
$$F_Y(y) = P[X^2 \le y] = P[X \le \sqrt{y}]$$
Since $f_X(x) = 1$ for $0 \le x \le 1$, we see that for $0 \le y < 1$,
$$P[X \le \sqrt{y}] = \int_0^{\sqrt{y}} dx = \sqrt{y}$$
Hence, the CDF of $Y$ is
$$F_Y(y) = \begin{cases} 0 & y < 0 \\ \sqrt{y} & 0 \le y < 1 \\ 1 & y \ge 1 \end{cases}$$
By taking the derivative of the CDF, we obtain the PDF
$$f_Y(y) = \begin{cases} 1/(2\sqrt{y}) & 0 \le y < 1 \\ 0 & \text{otherwise} \end{cases}$$

Problem 4.7.2
From Problem 4.6.1, random variable $X$ has CDF
$$F_X(x) = \begin{cases} 0 & x < -1 \\ x/3 + 1/3 & -1 \le x < 0 \\ x/3 + 2/3 & 0 \le x < 1 \\ 1 & x \ge 1 \end{cases}$$

(a) We can find the CDF of $Y$, $F_Y(y)$, by noting that $Y$ can only take on two possible values, 0 and 100, and the probability that $Y$ takes on these two values depends on the probability that $X < 0$ and $X \ge 0$, respectively. Therefore
$$F_Y(y) = P[Y \le y] = \begin{cases} 0 & y < 0 \\ P[X < 0] & 0 \le y < 100 \\ 1 & y \ge 100 \end{cases}$$
The probabilities concerned with $X$ can be found from the given CDF $F_X(x)$. This is the general strategy for solving problems of this type: to express the CDF of $Y$ in terms of the CDF of $X$. Since $P[X < 0] = F_X(0^-) = 1/3$, the CDF of $Y$ is
$$F_Y(y) = P[Y \le y] = \begin{cases} 0 & y < 0 \\ 1/3 & 0 \le y < 100 \\ 1 & y \ge 100 \end{cases}$$

(b) The CDF $F_Y(y)$ has jumps of 1/3 at $y = 0$ and 2/3 at $y = 100$. The corresponding PDF of $Y$ is
$$f_Y(y) = \delta(y)/3 + 2\delta(y-100)/3$$

(c) The expected value of $Y$ is
$$E[Y] = \int_{-\infty}^{\infty} y f_Y(y)\,dy = 0\cdot\frac{1}{3} + 100\cdot\frac{2}{3} = 66.66$$

Problem 4.7.3
(a) Since $X \ge 0$, we see that $Y = X^2 \ge 0$. We can conclude that $F_Y(y) = 0$ for $y < 0$. For $y \ge 0$,
$$F_Y(y) = P[X^2 \le y] = P[X \le \sqrt{y}]$$
From the exponential PDF of $X$, we see that for $y \ge 0$,
$$F_Y(y) = \int_0^{\sqrt{y}} 9e^{-9x}\,dx = 1 - e^{-9\sqrt{y}}$$
Hence, the CDF of $Y$ is
$$F_Y(y) = \begin{cases} 0 & y < 0 \\ 1 - e^{-9\sqrt{y}} & y \ge 0 \end{cases}$$
By taking the derivative of the CDF, we obtain the PDF
$$f_Y(y) = \begin{cases} \dfrac{9}{2\sqrt{y}}\,e^{-9\sqrt{y}} & y \ge 0 \\ 0 & \text{otherwise} \end{cases}$$

(b) Finding the moments of $Y$ is easiest using the moments of $X$. Specifically, we know from Problem 4.4.10 that $E[X^n] = n!/9^n$. Hence, $E[Y] = E[X^2] = 2/81$.

(c) The second moment is $E[Y^2] = E[X^4] = 4!/9^4$. This implies
$$\mathrm{Var}[Y] = E[Y^2] - (E[Y])^2 = \frac{24}{9^4} - \frac{4}{9^4} = \frac{20}{6561}$$

Problem 4.7.4
Random variable $X$ has the exponential PDF
$$f_X(x) = \begin{cases} e^{-x} & x \ge 0 \\ 0 & \text{otherwise} \end{cases}$$
For $y \ge 0$, the CDF of $Y = \sqrt{X}$ is given by
$$F_Y(y) = P[Y \le y] = P[\sqrt{X} \le y] = P[X \le y^2] = \int_0^{y^2} e^{-x}\,dx = 1 - e^{-y^2}$$
Therefore we can express the CDF and PDF of $Y$ as
$$F_Y(y) = \begin{cases} 0 & y < 0 \\ 1 - e^{-y^2} & y \ge 0 \end{cases} \qquad f_Y(y) = \begin{cases} 2ye^{-y^2} & y \ge 0 \\ 0 & \text{otherwise} \end{cases}$$
From Appendix A, we observe that $Y$ is a Rayleigh random variable with parameter $a = \sqrt{2}$.
Problem 4.7.5
Before solving for the PDF, it is helpful to have a sketch of the function $X = -\ln(1-U)$.

[Sketch of $x = -\ln(1-u)$ versus $u$: the curve starts at 0 for $u = 0$ and grows without bound as $u \to 1$.]

(a) From the sketch, we observe that $X$ will be nonnegative. Hence $F_X(x) = 0$ for $x < 0$. Since $U$ has a uniform distribution on $[0, 1]$, for $0 \le u \le 1$, $P[U \le u] = u$. We use this fact to find the CDF of $X$. For $x \ge 0$,
$$F_X(x) = P[-\ln(1-U) \le x] = P[1 - U \ge e^{-x}] = P[U \le 1 - e^{-x}]$$
For $x \ge 0$, $0 \le 1 - e^{-x} \le 1$ and so
$$F_X(x) = F_U(1 - e^{-x}) = 1 - e^{-x}$$
The complete CDF can be written as
$$F_X(x) = \begin{cases} 0 & x < 0 \\ 1 - e^{-x} & x \ge 0 \end{cases}$$

(b) By taking the derivative, the PDF is
$$f_X(x) = \begin{cases} e^{-x} & x \ge 0 \\ 0 & \text{otherwise} \end{cases}$$
Thus, $X$ has an exponential PDF. In fact, since most computer languages provide uniform $[0, 1]$ random numbers, the procedure outlined in this problem provides a way to generate exponential random variables from uniform random variables.

(c) Since $X$ is an exponential random variable with parameter $a = 1$, $E[X] = 1$.
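This is the inverse-transform idea in code; a minimal sketch (standard library only) that generates exponential samples from uniform ones and checks the sample mean against $E[X] = 1$:

    import math
    import random

    samples = [-math.log(1 - random.random()) for _ in range(100_000)]   # X = -ln(1 - U)
    print(sum(samples) / len(samples))   # sample mean, close to E[X] = 1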
Problem 4.7.6
We wish to find a transformation that takes a uniformly distributed random variable on $[0, 1]$ to the following PDF for $Y$.
$$f_Y(y) = \begin{cases} 3y^2 & 0 \le y \le 1 \\ 0 & \text{otherwise} \end{cases}$$
We begin by realizing that in this case the CDF of $Y$ must be
$$F_Y(y) = \begin{cases} 0 & y < 0 \\ y^3 & 0 \le y \le 1 \\ 1 & \text{otherwise} \end{cases}$$
Therefore, for $0 \le y \le 1$,
$$P[Y \le y] = P[g(X) \le y] = y^3$$
Thus, using $g(X) = X^{1/3}$, we see that for $0 \le y \le 1$,
$$P[g(X) \le y] = P[X^{1/3} \le y] = P[X \le y^3] = y^3$$
which is the desired answer.

Problem 4.7.7
Since the microphone voltage $V$ is uniformly distributed between $-1$ and 1 volts, $V$ has PDF and CDF
$$f_V(v) = \begin{cases} 1/2 & -1 \le v \le 1 \\ 0 & \text{otherwise} \end{cases} \qquad F_V(v) = \begin{cases} 0 & v < -1 \\ (v+1)/2 & -1 \le v \le 1 \\ 1 & v > 1 \end{cases}$$
The voltage is processed by a limiter whose output magnitude is given below.
$$L = \begin{cases} |V| & |V| \le 0.5 \\ 0.5 & \text{otherwise} \end{cases}$$

(a)
$$P[L = 0.5] = P[|V| \ge 0.5] = P[V \ge 0.5] + P[V \le -0.5] = 1 - F_V(0.5) + F_V(-0.5) = 1 - 1.5/2 + 0.5/2 = 1/2$$

(b) For $0 \le l < 0.5$,
$$F_L(l) = P[|V| \le l] = P[-l \le V \le l] = F_V(l) - F_V(-l) = \frac{l+1}{2} - \frac{-l+1}{2} = l$$
So the CDF of $L$ is
$$F_L(l) = \begin{cases} 0 & l < 0 \\ l & 0 \le l < 0.5 \\ 1 & l \ge 0.5 \end{cases}$$

(c) By taking the derivative of $F_L(l)$, the PDF of $L$ is
$$f_L(l) = \begin{cases} 1 + (0.5)\delta(l - 0.5) & 0 \le l \le 0.5 \\ 0 & \text{otherwise} \end{cases}$$
The expected value of $L$ is
$$E[L] = \int_{-\infty}^{\infty} l f_L(l)\,dl = \int_0^{0.5} l\,dl + \int_0^{0.5} l(0.5)\delta(l-0.5)\,dl = 0.125 + 0.25 = 0.375$$
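A Monte Carlo sketch of the limiter (standard library only) confirms $P[L = 0.5] = 1/2$ and $E[L] = 0.375$:

    import random

    N = 200_000
    total = at_half = 0.0
    for _ in range(N):
        v = random.uniform(-1, 1)     # microphone voltage
        l = min(abs(v), 0.5)          # limiter output
        total += l
        at_half += (l == 0.5)
    print(at_half / N, total / N)     # ~0.5 and ~0.375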

Problem 4.7.8
Let $X$ denote the position of the pointer and $Y$ denote the area within the arc defined by the stopping position of the pointer.

(a) If the disc has radius $r$, then the area of the disc is $\pi r^2$. Since the circumference of the disc is 1 and $X$ is measured around the circumference, $Y = \pi r^2 X$. For example, when $X = 1$, the shaded area is the whole disc and $Y = \pi r^2$. Similarly, if $X = 1/2$, then $Y = \pi r^2/2$ is half the area of the disc. Since the disc has circumference 1, $r = 1/(2\pi)$ and
$$Y = \pi r^2 X = \frac{X}{4\pi}$$

(b) The CDF of $Y$ can be expressed as
$$F_Y(y) = P[Y \le y] = P\!\left[\frac{X}{4\pi} \le y\right] = P[X \le 4\pi y] = F_X(4\pi y)$$
Therefore the CDF is
$$F_Y(y) = \begin{cases} 0 & y < 0 \\ 4\pi y & 0 \le y \le \frac{1}{4\pi} \\ 1 & y \ge \frac{1}{4\pi} \end{cases}$$

(c) By taking the derivative of the CDF, the PDF of $Y$ is
$$f_Y(y) = \begin{cases} 4\pi & 0 \le y \le \frac{1}{4\pi} \\ 0 & \text{otherwise} \end{cases}$$

(d) The expected value of $Y$ is $E[Y] = \int_0^{1/(4\pi)} 4\pi y\,dy = \dfrac{1}{8\pi}$.

Problem 4.7.9
$$f_U(u) = \begin{cases} 1/2 & 0 \le u \le 2 \\ 0 & \text{otherwise} \end{cases}$$
The uniform random variable $U$ is subjected to the following clipper.
$$W = g(U) = \begin{cases} U & U \le 1 \\ 1 & U > 1 \end{cases}$$
We wish to find the CDF of the output of the clipper, $W$. It will be helpful to have the CDF of $U$ handy.
$$F_U(u) = \begin{cases} 0 & u < 0 \\ u/2 & 0 \le u < 2 \\ 1 & u \ge 2 \end{cases}$$
The CDF of $W$ can be found by remembering that $W = U$ for $0 \le U \le 1$ while $W = 1$ for $1 \le U \le 2$. First, this implies $W$ is nonnegative, i.e., $F_W(w) = 0$ for $w < 0$. Furthermore, for $0 \le w \le 1$,
$$F_W(w) = P[W \le w] = P[U \le w] = F_U(w) = w/2$$
Lastly, we observe that it is always true that $W \le 1$. This implies $F_W(w) = 1$ for $w \ge 1$. Therefore the CDF of $W$ is
$$F_W(w) = \begin{cases} 0 & w < 0 \\ w/2 & 0 \le w < 1 \\ 1 & w \ge 1 \end{cases}$$
From the jump in the CDF at $w = 1$, we see that $P[W = 1] = 1/2$. The corresponding PDF can be found by taking the derivative and using the delta function to model the discontinuity.
$$f_W(w) = \begin{cases} 1/2 + (1/2)\delta(w-1) & 0 \le w \le 1 \\ 0 & \text{otherwise} \end{cases}$$
The expected value of $W$ is
$$E[W] = \int_0^1 w f_W(w)\,dw = \int_0^1 w[1/2 + (1/2)\delta(w-1)]\,dw = 1/4 + 1/2 = 3/4$$

Problem 4.7.10
Given the following function of random variable $X$,
$$Y = g(X) = \begin{cases} 10 & X < 0 \\ -10 & X \ge 0 \end{cases}$$
we follow the same procedure as in Problem 4.7.2. We attempt to express the CDF of $Y$ in terms of the CDF of $X$. We know that $Y$ is never less than $-10$. We also know that $Y = -10$ when $X \ge 0$, and finally, that $Y = 10$ when $X < 0$. Therefore
$$F_Y(y) = P[Y \le y] = \begin{cases} 0 & y < -10 \\ P[X \ge 0] = 1 - F_X(0^-) & -10 \le y < 10 \\ 1 & y \ge 10 \end{cases}$$

Problem 4.7.11
The PDF of $U$ is
$$f_U(u) = \begin{cases} 1/2 & -1 \le u \le 1 \\ 0 & \text{otherwise} \end{cases}$$


Since $W \ge 0$, we see that $F_W(w) = 0$ for $w < 0$. Next, we observe that the rectifier output $W$ is a mixed random variable since
$$P[W = 0] = P[U < 0] = \int_{-1}^{0} f_U(u)\,du = 1/2$$
The above facts imply that
$$F_W(0) = P[W \le 0] = P[W = 0] = 1/2$$
Next, we note that for $0 < w < 1$,
$$F_W(w) = P[U \le w] = \int_{-1}^{w} f_U(u)\,du = (w+1)/2$$
Finally, $U \le 1$ implies $W \le 1$, which implies $F_W(w) = 1$ for $w \ge 1$. Hence, the complete expression for the CDF is
$$F_W(w) = \begin{cases} 0 & w < 0 \\ (w+1)/2 & 0 \le w \le 1 \\ 1 & w > 1 \end{cases}$$
By taking the derivative of the CDF, we find the PDF of $W$; however, we must keep in mind that the discontinuity in the CDF at $w = 0$ yields a corresponding impulse in the PDF.
$$f_W(w) = \begin{cases} (\delta(w) + 1)/2 & 0 \le w \le 1 \\ 0 & \text{otherwise} \end{cases}$$
From the PDF, we can calculate the expected value
$$E[W] = \int_0^1 w(\delta(w) + 1)/2\,dw = 0 + \int_0^1 (w/2)\,dw = 1/4$$
Perhaps an easier way to find the expected value is to use Theorem 2.10. In this case,
$$E[W] = \int_{-\infty}^{\infty} g(u) f_U(u)\,du = \int_0^1 u(1/2)\,du = 1/4$$
As we expect, both approaches give the same answer.


Problem 4.7.12
If $X$ has a uniform distribution from 0 to 1, then the PDF and corresponding CDF of $X$ are
$$f_X(x) = \begin{cases} 1 & 0 \le x \le 1 \\ 0 & \text{otherwise} \end{cases} \qquad F_X(x) = \begin{cases} 0 & x < 0 \\ x & 0 \le x \le 1 \\ 1 & x > 1 \end{cases}$$
For $a > 0$, we can find the CDF of the function $Y = aX + b$:
$$F_Y(y) = P[Y \le y] = P[aX + b \le y] = P\!\left[X \le \frac{y-b}{a}\right] = F_X\!\left(\frac{y-b}{a}\right) = \frac{y-b}{a}$$
Therefore the CDF of $Y$ is
$$F_Y(y) = \begin{cases} 0 & y < b \\ \dfrac{y-b}{a} & b \le y \le a+b \\ 1 & y \ge a+b \end{cases}$$
By differentiating with respect to $y$ we arrive at the PDF
$$f_Y(y) = \begin{cases} 1/a & b \le y \le a+b \\ 0 & \text{otherwise} \end{cases}$$
which we recognize as the PDF of a random variable that is uniformly distributed over $[b, a+b]$.

Problem 4.7.13
The relationship between $X$ and $Y$ is shown in the following figure:

[Figure: $Y = g(X)$ versus $X$, with $g(X) = 1/2$ for $0 \le X \le 1$ and $g(X) = X$ for $X > 1$.]

(a) Note that $Y = 1/2$ if and only if $0 \le X \le 1$. Thus,
$$P[Y = 1/2] = P[0 \le X \le 1] = \int_0^1 f_X(x)\,dx = \int_0^1 (x/2)\,dx = 1/4$$

(b) Since $Y \ge 1/2$, we can conclude that $F_Y(y) = 0$ for $y < 1/2$. Also, $F_Y(1/2) = P[Y = 1/2] = 1/4$. Similarly, for $1/2 < y \le 1$,
$$F_Y(y) = P[0 \le X \le 1] = P[Y = 1/2] = 1/4$$
Next, for $1 < y \le 2$,
$$F_Y(y) = P[X \le y] = \int_0^y f_X(x)\,dx = y^2/4$$
Lastly, since $Y \le 2$, $F_Y(y) = 1$ for $y \ge 2$. The complete expression of the CDF is
$$F_Y(y) = \begin{cases} 0 & y < 1/2 \\ 1/4 & 1/2 \le y \le 1 \\ y^2/4 & 1 < y < 2 \\ 1 & y \ge 2 \end{cases}$$


Problem 4.7.14
We can prove the assertion by considering the cases where $a > 0$ and $a < 0$, respectively. For the case where $a > 0$ we have
$$F_Y(y) = P[Y \le y] = P\!\left[X \le \frac{y-b}{a}\right] = F_X\!\left(\frac{y-b}{a}\right)$$
Therefore by taking the derivative we find that
$$f_Y(y) = \frac{1}{a}\,f_X\!\left(\frac{y-b}{a}\right) \qquad a > 0$$
Similarly, for the case when $a < 0$ we have
$$F_Y(y) = P[Y \le y] = P\!\left[X \ge \frac{y-b}{a}\right] = 1 - F_X\!\left(\frac{y-b}{a}\right)$$
And by taking the derivative, we find that for negative $a$,
$$f_Y(y) = -\frac{1}{a}\,f_X\!\left(\frac{y-b}{a}\right) \qquad a < 0$$
A valid expression for both positive and negative $a$ is
$$f_Y(y) = \frac{1}{|a|}\,f_X\!\left(\frac{y-b}{a}\right)$$
Therefore the assertion is proved.


Problem 4.7.15
Understanding this claim may be harder than completing the proof. Since $0 \le F(x) \le 1$, we know that $0 \le U \le 1$. This implies $F_U(u) = 0$ for $u < 0$ and $F_U(u) = 1$ for $u \ge 1$. Moreover, since $F(x)$ is an increasing function, we can write for $0 \le u \le 1$,
$$F_U(u) = P[F(X) \le u] = P[X \le F^{-1}(u)] = F_X(F^{-1}(u))$$
Since $F_X(x) = F(x)$, we have for $0 \le u \le 1$,
$$F_U(u) = F(F^{-1}(u)) = u$$
Hence the complete CDF of $U$ is
$$F_U(u) = \begin{cases} 0 & u < 0 \\ u & 0 \le u < 1 \\ 1 & u \ge 1 \end{cases}$$
That is, $U$ is a uniform $[0, 1]$ random variable.


Problem 4.7.16
First, we must verify that $F^{-1}(u)$ is a nondecreasing function. To show this, suppose that for $u \le u'$, $x = F^{-1}(u)$ and $x' = F^{-1}(u')$. In this case, $u = F(x)$ and $u' = F(x')$. Since $F(x)$ is nondecreasing, $F(x) \le F(x')$ implies that $x \le x'$. Hence, we can write
$$F_X(x) = P[F^{-1}(U) \le x] = P[U \le F(x)] = F(x)$$
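Problems 4.7.15 and 4.7.16 together are the probability integral transform; a small sketch (standard library only) illustrates both directions using the exponential CDF $F(x) = 1 - e^{-x}$, whose inverse is $F^{-1}(u) = -\ln(1-u)$:

    import math
    import random

    xs = [-math.log(1 - random.random()) for _ in range(100_000)]   # X = F^{-1}(U) is exponential(1)
    us = [1 - math.exp(-x) for x in xs]                             # U = F(X) should be uniform [0,1]
    print(sum(us) / len(us), sum(u * u for u in us) / len(us))      # ~1/2 and ~1/3, the uniform moments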

Problem 4.8.1
The PDF of $X$ is
$$f_X(x) = \begin{cases} 1/10 & -5 \le x \le 5 \\ 0 & \text{otherwise} \end{cases}$$

(a) The event $B$ has probability
$$P[B] = P[-3 \le X \le 3] = \int_{-3}^{3} \frac{1}{10}\,dx = \frac{3}{5}$$
From Definition 4.15, the conditional PDF of $X$ given $B$ is
$$f_{X|B}(x) = \begin{cases} f_X(x)/P[B] & x \in B \\ 0 & \text{otherwise} \end{cases} = \begin{cases} 1/6 & |x| \le 3 \\ 0 & \text{otherwise} \end{cases}$$

(b) Given $B$, we see that $X$ has a uniform PDF over $[a, b]$ with $a = -3$ and $b = 3$. From Theorem 4.7, the conditional expected value of $X$ is $E[X|B] = (a+b)/2 = 0$.

(c) From Theorem 4.7, the conditional variance of $X$ is $\mathrm{Var}[X|B] = (b-a)^2/12 = 3$.
Problem 4.8.2
From Definition 4.6, the PDF of $Y$ is
$$f_Y(y) = \begin{cases} (1/5)e^{-y/5} & y \ge 0 \\ 0 & \text{otherwise} \end{cases}$$

(a) The event $A$ has probability
$$P[A] = P[Y < 2] = \int_0^2 (1/5)e^{-y/5}\,dy = \left.-e^{-y/5}\right|_0^2 = 1 - e^{-2/5}$$
From Definition 4.15, the conditional PDF of $Y$ given $A$ is
$$f_{Y|A}(y) = \begin{cases} f_Y(y)/P[A] & y \in A \\ 0 & \text{otherwise} \end{cases} = \begin{cases} \dfrac{(1/5)e^{-y/5}}{1 - e^{-2/5}} & 0 \le y < 2 \\ 0 & \text{otherwise} \end{cases}$$

(b) The conditional expected value of $Y$ given $A$ is
$$E[Y|A] = \int_{-\infty}^{\infty} y f_{Y|A}(y)\,dy = \frac{1/5}{1 - e^{-2/5}}\int_0^2 y e^{-y/5}\,dy$$
Using the integration by parts formula $\int u\,dv = uv - \int v\,du$ with $u = y$ and $dv = e^{-y/5}\,dy$ yields
$$E[Y|A] = \frac{1/5}{1 - e^{-2/5}}\left(\left.-5ye^{-y/5}\right|_0^2 + \int_0^2 5e^{-y/5}\,dy\right) = \frac{1/5}{1 - e^{-2/5}}\left(-10e^{-2/5} - \left. 25e^{-y/5}\right|_0^2\right) = \frac{5 - 7e^{-2/5}}{1 - e^{-2/5}}$$
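A numerical cross-check of $E[Y|A]$ by a midpoint Riemann sum (standard library only):

    import math

    N = 200_000
    dy = 2 / N
    p_A = num = 0.0
    for k in range(N):
        y = (k + 0.5) * dy
        f = 0.2 * math.exp(-y / 5) * dy    # f_Y(y) dy on [0, 2)
        p_A += f
        num += y * f
    print(num / p_A)                                          # numerical E[Y | A]
    print((5 - 7 * math.exp(-0.4)) / (1 - math.exp(-0.4)))    # closed form, ~0.934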

Problem 4.8.3
From Definition 4.8, the PDF of $W$ is
$$f_W(w) = \frac{1}{\sqrt{32\pi}}\,e^{-w^2/32}$$

(a) Since $W$ has expected value $\mu = 0$, $f_W(w)$ is symmetric about $w = 0$. Hence $P[C] = P[W > 0] = 1/2$. From Definition 4.15, the conditional PDF of $W$ given $C$ is
$$f_{W|C}(w) = \begin{cases} f_W(w)/P[C] & w \in C \\ 0 & \text{otherwise} \end{cases} = \begin{cases} \dfrac{2}{\sqrt{32\pi}}\,e^{-w^2/32} & w > 0 \\ 0 & \text{otherwise} \end{cases}$$

(b) The conditional expected value of $W$ given $C$ is
$$E[W|C] = \int_{-\infty}^{\infty} w f_{W|C}(w)\,dw = \frac{2}{4\sqrt{2\pi}}\int_0^{\infty} w e^{-w^2/32}\,dw$$
Making the substitution $v = w^2/32$, we obtain
$$E[W|C] = \frac{32}{\sqrt{32\pi}}\int_0^{\infty} e^{-v}\,dv = \frac{32}{\sqrt{32\pi}}$$

(c) The conditional second moment of $W$ is
$$E[W^2|C] = \int_{-\infty}^{\infty} w^2 f_{W|C}(w)\,dw = 2\int_0^{\infty} w^2 f_W(w)\,dw$$
We observe that $w^2 f_W(w)$ is an even function. Hence
$$E[W^2|C] = 2\int_0^{\infty} w^2 f_W(w)\,dw = \int_{-\infty}^{\infty} w^2 f_W(w)\,dw = E[W^2] = \sigma^2 = 16$$
Lastly, the conditional variance of $W$ given $C$ is
$$\mathrm{Var}[W|C] = E[W^2|C] - (E[W|C])^2 = 16 - \frac{32}{\pi} = 5.81$$


Problem 4.8.4
(a) To find the conditional moments, we first find the conditional PDF of $T$. The PDF of $T$ is
$$f_T(t) = \begin{cases} 100e^{-100t} & t \ge 0 \\ 0 & \text{otherwise} \end{cases}$$
The conditioning event has probability
$$P[T > 0.02] = \int_{0.02}^{\infty} f_T(t)\,dt = \left.-e^{-100t}\right|_{0.02}^{\infty} = e^{-2}$$
From Definition 4.15, the conditional PDF of $T$ is
$$f_{T|T>0.02}(t) = \begin{cases} \dfrac{f_T(t)}{P[T > 0.02]} & t \ge 0.02 \\ 0 & \text{otherwise} \end{cases} = \begin{cases} 100e^{-100(t-0.02)} & t \ge 0.02 \\ 0 & \text{otherwise} \end{cases}$$
The conditional mean of $T$ is
$$E[T|T > 0.02] = \int_{0.02}^{\infty} t(100)e^{-100(t-0.02)}\,dt$$
The substitution $\tau = t - 0.02$ yields
$$E[T|T > 0.02] = \int_0^{\infty} (\tau + 0.02)(100)e^{-100\tau}\,d\tau = \int_0^{\infty} (\tau + 0.02)f_T(\tau)\,d\tau = E[T + 0.02] = 0.03$$

(b) The conditional second moment of $T$ is
$$E[T^2|T > 0.02] = \int_{0.02}^{\infty} t^2(100)e^{-100(t-0.02)}\,dt$$
The substitution $\tau = t - 0.02$ yields
$$E[T^2|T > 0.02] = \int_0^{\infty} (\tau + 0.02)^2(100)e^{-100\tau}\,d\tau = \int_0^{\infty} (\tau + 0.02)^2 f_T(\tau)\,d\tau = E[(T + 0.02)^2]$$
Now we can calculate the conditional variance.
$$\mathrm{Var}[T|T > 0.02] = E[T^2|T > 0.02] - (E[T|T > 0.02])^2 = E[(T+0.02)^2] - (E[T+0.02])^2 = \mathrm{Var}[T + 0.02] = \mathrm{Var}[T] = 10^{-4}$$

Problem 4.8.5
(a) In Problem 4.6.8, we found that the PDF of $D$ is
$$f_D(y) = \begin{cases} 0.3\,\delta(y) & y < 60 \\ 0.07\,e^{-(y-60)/10} & y \ge 60 \end{cases}$$
First, we observe that $D > 0$ if the throw is good, so that $P[D > 0] = 0.7$. A second way to find this probability is
$$P[D > 0] = \int_{0^+}^{\infty} f_D(y)\,dy = 0.7$$
From Definition 4.15, we can write
$$f_{D|D>0}(y) = \begin{cases} \dfrac{f_D(y)}{P[D > 0]} & y > 0 \\ 0 & \text{otherwise} \end{cases} = \begin{cases} (1/10)e^{-(y-60)/10} & y \ge 60 \\ 0 & \text{otherwise} \end{cases}$$

(b) If instead we learn that $D \le 70$, we can calculate the conditional PDF by first calculating
$$P[D \le 70] = \int_0^{70} f_D(y)\,dy = \int_0^{60} 0.3\,\delta(y)\,dy + \int_{60}^{70} 0.07\,e^{-(y-60)/10}\,dy = 0.3 + \left.-0.7e^{-(y-60)/10}\right|_{60}^{70} = 0.3 + 0.7(1 - e^{-1}) = 1 - 0.7e^{-1}$$
The conditional PDF is
$$f_{D|D\le 70}(y) = \begin{cases} \dfrac{f_D(y)}{P[D \le 70]} & y \le 70 \\ 0 & \text{otherwise} \end{cases} = \begin{cases} \dfrac{0.3}{1 - 0.7e^{-1}}\,\delta(y) & 0 \le y < 60 \\ \dfrac{0.07}{1 - 0.7e^{-1}}\,e^{-(y-60)/10} & 60 \le y \le 70 \\ 0 & \text{otherwise} \end{cases}$$
