
Introductory Probability and Statistical Applications, Second Edition

Paul L. Meyer

Notes and Solutions by David A. Lee

Solutions to Chapter 5: Functions of Random Variables

5.1  Suppose that X is uniformly distributed over (−1, 1). Let Y = 4 − X². Find the pdf of Y,
say g(y), and sketch it. Also verify that g(y) is a pdf.

By uniform distribution, the pdf of X is f(x) = 1/2, −1 < x < 1. The task now is to find a corresponding pdf
for Y. Given Y = H(X) = 4 − X², we can derive the cdf of Y, namely

G(y) = P(Y ≤ y) = P(4 − X² ≤ y)
     = P(X ≤ −√(4 − y) or X ≥ √(4 − y))
     = 1 − P(−√(4 − y) ≤ X ≤ √(4 − y))
     = 1 − ∫_{−√(4−y)}^{√(4−y)} (1/2) dx = 1 − [x/2]_{−√(4−y)}^{√(4−y)}
     = 1 − (√(4 − y)/2 + √(4 − y)/2) = 1 − √(4 − y)
We derive the pdf of Y by differentiating the cdf G(y):

G′(y) = g(y) = (1/2)(4 − y)^{−1/2} = 1/(2√(4 − y))

This is distributed over 3 < y < 4, since X is distributed over −1 < x < 1, which maps to 3 < y < 4 under
H(x). To verify that g(y) is indeed a pdf, it is clear that g(y) ≥ 0 on this domain. All that remains to ascertain
is whether ∫_{3}^{4} 1/(2√(4 − y)) dy = 1. Integrate by u-substitution: let u = 4 − y, du = −dy. Then

−(1/2) ∫_{u=1 (y=3)}^{u=0 (y=4)} u^{−1/2} du = −[√u]_{1}^{0} = 1

[Sketch: g(y) = 1/(2√(4 − y)) on 3 < y < 4, increasing without bound as y → 4.]
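As a quick sanity check (not part of the original solution), the following Python sketch, assuming numpy and scipy are available, verifies numerically that g(y) integrates to 1 over (3, 4) and that the derived cdf agrees with a Monte Carlo simulation of Y = 4 − X²:

```python
import numpy as np
from scipy.integrate import quad

# Derived density of Y = 4 - X^2 for X ~ U(-1, 1).
g = lambda y: 1.0 / (2.0 * np.sqrt(4.0 - y))
print(quad(g, 3.0, 4.0)[0])                      # ~1.0

rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, size=1_000_000)
y = 4.0 - x**2
# Compare the empirical cdf with G(y) = 1 - sqrt(4 - y) at a test point.
print(np.mean(y <= 3.5), 1.0 - np.sqrt(4.0 - 3.5))
```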

5.2  Suppose that X is uniformly distributed over (1, 3). Obtain the pdf of the following random
variables:

By uniform distribution, the pdf f(x) = 1/2, 1 < x < 3.

(a)  Y = 3X + 4
Finding the cdf of Y , we get

G(y) = P(Y ≤ y) = P(3X + 4 ≤ y)
     = P(X ≤ (y − 4)/3)
     = ∫_{1}^{(y−4)/3} (1/2) dx = [x/2]_{1}^{(y−4)/3}
     = (y − 4)/6 − 1/2 = (y − 7)/6

Differentiating with respect to y, we get G′(y) = g(y) = 1/6. Clearly g(y) is positive. For Y = 3X + 4
with 1 < x < 3, we can deduce 7 < y < 13. Then ∫_{7}^{13} 1/6 dy = 1.
[Sketch: g(y) = 1/6 on 7 < y < 13.]

(b)  Z = e^{X}
Finding the cdf of Z, we get

G(z) = P(Z ≤ z) = P(e^{X} ≤ z)
     = P(X ≤ ln z)
     = ∫_{1}^{ln z} (1/2) dx = [x/2]_{1}^{ln z}
     = (ln z)/2 − 1/2
Observe that since the natural logarithm is a strictly increasing function, the inequality direction is
preserved. Therefore, G′(z) = g(z) = 1/(2z). For Z = e^{X} with 1 < x < 3, we get the domain e < z < e³ for
Z = H(X). Therefore, g(z) ≥ 0 across this domain. Moreover, (1/2)∫_{e}^{e³} (1/z) dz = (1/2)[ln z]_{e}^{e³} = (1/2)(3 − 1) = 1.
[Sketch: g(z) = 1/(2z) on e < z < e³.]

5.3  Suppose that the continuous random variable X has pdf f(x) = e^{−x}, x > 0. Find the pdf of
the following random variables:

(a)  Y = X³
The cdf of Y is derived as

G(y) = P(Y ≤ y) = P(X³ ≤ y)
     = P(X ≤ y^{1/3})
     = ∫_{0}^{y^{1/3}} e^{−x} dx = [−e^{−x}]_{0}^{y^{1/3}} = 1 − e^{−y^{1/3}}

The pdf of Y then follows:


G′(y) = g(y) = (1/3) y^{−2/3} e^{−y^{1/3}}

For Y = X³, x > 0, it follows that y > 0. Therefore, g(y) ≥ 0 for y > 0. Numerically evaluating
∫_{0}^{+∞} (1/3) y^{−2/3} e^{−y^{1/3}} dy gives us 1.

(b)  Z = 3/(X + 1)²

Here, since Z = 3/(X + 1)² is monotonic over the interval 0 < X < +∞, we may apply Theorem 5.1 and find
g(z) = f(x) |dx/dz|. The inverse function X(z) has two branches, X(z) = √(3/z) − 1 and X(z) = −√(3/z) − 1.
However, because X is distributed over the positive reals, we choose the branch X(z) = √(3/z) − 1,
defined on 0 < z < 3. Then dx/dz = (1/2)(3/z)^{−1/2}(−3/z²), therefore |dx/dz| = √3/(2z^{3/2}). By Theorem 5.1,

g(z) = (√3/(2z^{3/2})) e^{−(√(3/z) − 1)}

For 0 < z < 3, g(z) ≥ 0. Moreover, numerical evaluation of ∫_{0}^{3} (√3/(2z^{3/2})) e^{−(√(3/z) − 1)} dz gives us 1.
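The two "numerical evaluation gives 1" claims in parts (a) and (b) can be confirmed with a short script; this is an illustrative sketch assuming scipy is available:

```python
import numpy as np
from scipy.integrate import quad

# (a) g(y) = (1/3) y^(-2/3) exp(-y^(1/3)), y > 0.
g_a = lambda y: (1.0 / 3.0) * y**(-2.0 / 3.0) * np.exp(-y**(1.0 / 3.0))
# Split at 1 so the integrable singularity at y = 0 stays away from the
# infinite-interval transformation.
print(quad(g_a, 0, 1)[0] + quad(g_a, 1, np.inf)[0])    # ~1.0

# (b) g(z) = sqrt(3)/(2 z^(3/2)) exp(-(sqrt(3/z) - 1)), 0 < z < 3.
g_b = lambda z: np.sqrt(3) / (2 * z**1.5) * np.exp(-(np.sqrt(3 / z) - 1))
print(quad(g_b, 0, 3)[0])                              # ~1.0
```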

5.4  Suppose that the discrete random variable X assumes the values 1, 2, and 3 with equal
probability. Find the probability distribution of Y = 2X + 3.

Each of the outcomes X = 1, 2, 3 has probability P(X = 1) = P(X = 2) = P(X = 3) = 1/3. With H(x) = 2x + 3
we have H(1) = 5, H(2) = 7, H(3) = 9, and therefore P(Y = 5) = P(Y = 7) = P(Y = 9) = 1/3.
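The bookkeeping for a discrete transformation can be illustrated mechanically; the snippet below (not from the text) simply pushes the pmf of X through H(x) = 2x + 3:

```python
from fractions import Fraction

pmf_X = {1: Fraction(1, 3), 2: Fraction(1, 3), 3: Fraction(1, 3)}
pmf_Y = {}
for x, p in pmf_X.items():
    y = 2 * x + 3                                # H(x) = 2x + 3
    pmf_Y[y] = pmf_Y.get(y, Fraction(0)) + p
print(pmf_Y)                                     # Y takes 5, 7, 9 each with probability 1/3
```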

5.5  Suppose that X is uniformly distributed over the interval (0, 1). Find the pdf of the following
random variables:

By uniform distribution, f(x) = 1 for 0 < X < 1.

(a)  Y = X² + 1
First we find the cdf of Y :

G(y) = P(Y ≤ y) = P(X² + 1 ≤ y)
     = P(X² ≤ y − 1)
     = P(0 ≤ X ≤ √(y − 1))
     = ∫_{0}^{√(y−1)} dx = √(y − 1)

Differentiating with respect to y gives us the pdf:


G′(y) = g(y) = (1/2)(y − 1)^{−1/2}

First, note that Y = X² + 1, 0 < X < 1 implies 1 < Y < 2. Then g(y) ≥ 0 for 1 < Y < 2. Secondly,
∫_{1}^{2} (1/2)(y − 1)^{−1/2} dy = [(y − 1)^{1/2}]_{1}^{2} = 1.
We may alternatively apply Theorem 5.1, as Y = X² + 1 is monotonic on the interval 0 < X < 1. Then
X(y) = √(y − 1) and |dx/dy| = (1/2)(y − 1)^{−1/2}. Therefore g(y) = (1/2)(y − 1)^{−1/2}, as expected.

(b)  Z = 1/(X + 1)
First we find the cdf of Z:
G(z) = P(Z ≤ z) = P(1/(X + 1) ≤ z)
     = P(1/z − 1 ≤ X ≤ 1)
     = ∫_{1/z−1}^{1} dx = 1 − (1/z − 1) = 2 − 1/z

Then we derive g(z) as follows:

G′(z) = g(z) = 1/z²


Note that Z = 1/(X + 1) over 0 < X < 1 implies 1/2 < Z < 1. Then g(z) ≥ 0 over 1/2 < Z < 1. Moreover,
G(1) − G(1/2) = 1 .
Alternatively, we can apply Theorem 5.1 because Z = 1/(X + 1) is monotonic over 0 < X < 1. Then
X(z) = 1/z − 1 and dx/dz = −1/z², so |dx/dz| = 1/z². Therefore g(z) = 1/z², as expected.

5.6  Suppose that X is uniformly distributed over the interval (−1, 1). Find the pdf of the
following random variables:

By uniform distribution, f(x) = 1/(1 − (−1)) = 1/2 for −1 < X < 1.

(a)  Y = sin(πX/2)
We first derive the cdf of Y . Here, note that the inverse sine function is increasing over the given interval:

G(y) = P(Y ≤ y) = P(sin(πX/2) ≤ y)
     = P(X ≤ (2/π) sin⁻¹(y))
     = ∫_{−1}^{(2/π) sin⁻¹(y)} (1/2) dx = (1/π) sin⁻¹(y) + 1/2
And now we derive the pdf of Y by differentiating G(y) with respect to y. But first we must determine the
derivative of the inverse sine function.
Theorem.  d/dx sin⁻¹(x) = 1/√(1 − x²)

Proof. First observe that sin(sin⁻¹(x)) = x. Then (d/dx) sin(sin⁻¹(x)) = (d/dx) x, and it immediately follows that
cos(sin⁻¹(x)) · (d/dx) sin⁻¹(x) = 1, i.e. (d/dx) sin⁻¹(x) = 1/cos(sin⁻¹(x)). Now, using the fact that sin²y + cos²y = 1,
and from that deriving cos y = √(1 − sin²y), we can write cos(sin⁻¹(x)) = √(1 − sin²(sin⁻¹(x))) = √(1 − x²)
by virtue of inverses. Therefore, (d/dx) sin⁻¹(x) = 1/√(1 − x²).
Proceeding, we derive:
G′(y) = g(y) = d/dy [(1/π) sin⁻¹(y) + 1/2] = 1/(π√(1 − y²))

For Y = sin(πX/2) on −1 < X < 1, it follows that −1 < Y < 1. Then g(y) ≥ 0 for −1 < Y < 1.
Additionally, ∫_{−1}^{1} 1/(π√(1 − y²)) dy = 1, confirming g(y) is a pdf.
Alternatively, because Y = sin(πX/2), −1 < X < 1, is monotonic, we may apply Theorem 5.1. Finding
X = (2/π) sin⁻¹(y) as before, it follows that |dx/dy| = 2/(π√(1 − y²)), and g(y) = (1/2) · 2/(π√(1 − y²)) = 1/(π√(1 − y²)),
as expected.

(b)  Z = cos(πX/2)

First we derive the cdf of Z. Here, note that the inverse cosine function is strictly decreasing on the given
interval:

G(z) = P(Z ≤ z) = P(cos(πX/2) ≤ z)
     = P(X ≥ (2/π) cos⁻¹(z) or X ≤ −(2/π) cos⁻¹(z))
     = ∫_{−1}^{−(2/π) cos⁻¹(z)} (1/2) dx + ∫_{(2/π) cos⁻¹(z)}^{1} (1/2) dx
     = (−(1/π) cos⁻¹(z) + 1/2) + (−(1/π) cos⁻¹(z) + 1/2)
     = 1 − (2/π) cos⁻¹(z)
Analogously, we must determine the derivative of the inverse cosine function before proceeding.

Theorem.  d/dx cos⁻¹(x) = −1/√(1 − x²)

Proof. Since cos(cos⁻¹(x)) = x, it follows that (d/dx) cos(cos⁻¹(x)) = (d/dx) x, implying −sin(cos⁻¹(x)) · (d/dx) cos⁻¹(x) = 1.
Then (d/dx) cos⁻¹(x) = −1/sin(cos⁻¹(x)). Since sin²y + cos²y = 1 implies sin y = √(1 − cos²y), we have
(d/dx) cos⁻¹(x) = −1/√(1 − cos²(cos⁻¹(x))) = −1/√(1 − x²).

Differentiating G(z) with respect to z yields the pdf of Z:


G′(z) = g(z) = d/dz [1 − (2/π) cos⁻¹(z)] = (2/π) · 1/√(1 − z²)

For Z = cos(πX/2) on −1 < X < 1, the distribution of Z is over 0 < Z < 1. Then g(z) ≥ 0 on that
interval. Moreover, ∫_{0}^{1} (2/π) · 1/√(1 − z²) dz = 1, ascertaining g(z) is a pdf.
Because Z = cos(πX/2) is not monotonic over −1 < X < 1, we cannot apply Theorem 5.1 to derive the
pdf of Z.
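A small numerical check of the densities derived in (a) and (b), assuming numpy and scipy are available (this is an added illustration, not part of the original solution):

```python
import numpy as np
from scipy.integrate import quad

g_y = lambda y: 1.0 / (np.pi * np.sqrt(1.0 - y**2))   # density of Y = sin(pi X / 2)
g_z = lambda z: 2.0 / (np.pi * np.sqrt(1.0 - z**2))   # density of Z = cos(pi X / 2)
print(quad(g_y, -1, 1)[0], quad(g_z, 0, 1)[0])        # both ~1.0

rng = np.random.default_rng(1)
x = rng.uniform(-1.0, 1.0, size=1_000_000)
# Compare simulated cdfs with G(y) = arcsin(y)/pi + 1/2 and G(z) = 1 - (2/pi) arccos(z).
print(np.mean(np.sin(np.pi * x / 2) <= 0.3), np.arcsin(0.3) / np.pi + 0.5)
print(np.mean(np.cos(np.pi * x / 2) <= 0.3), 1 - (2 / np.pi) * np.arccos(0.3))
```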

(c)  W = |X|

In particular, W is defined as:

W = X for 0 ≤ X < 1, and W = −X for −1 < X < 0.
Beginning with the derivation of the cdf of W :

G(w) = P(W ≤ w) = P(|X| ≤ w)
     = P(−w ≤ X ≤ w)
     = ∫_{0}^{w} (1/2) dx + ∫_{−w}^{0} (1/2) dx
     = w/2 + w/2 = w
Differentiating with respect to w yields G′(w) = g(w) = 1. Since W = |X|, −1 < X < 1 implies
0 < W < 1, clearly g(w) ≥ 0 on that interval. Moreover, ∫_{0}^{1} g(w) dw = ∫_{0}^{1} 1 dw = 1, confirming g(w) is a pdf.

5.7  Suppose that the radius of a sphere is a continuous random variable. (Due to inaccuracies
of the manufacturing process, the radii of different spheres may be different.) Suppose that
the radius R has pdf f(r) = 6r(1 − r), 0 < r < 1. Find the pdf of the volume V and the
surface area S of the sphere.

Volume. The volume of a sphere is V(r) = (4/3)πr³. First finding the cdf of the volume v:

G(v) = P(V ≤ v) = P((4/3)πr³ ≤ v)
     = P(r ≤ (3v/(4π))^{1/3})
     = ∫_{0}^{(3v/(4π))^{1/3}} 6r(1 − r) dr = [3r² − 2r³]_{0}^{(3v/(4π))^{1/3}}
     = 3(3v/(4π))^{2/3} − 3v/(2π)
Differentiating with respect to v gives us the pdf of V :
G′(v) = g(v) = 2(3v/(4π))^{−1/3}(3/(4π)) − 3/(2π)
             = (3/(2π))[(3v/(4π))^{−1/3} − 1]

For V(r) = (4/3)πr³ on 0 < r < 1, we have 0 < V < (4/3)π. It follows that g(v) ≥ 0 on this interval, and
∫_{0}^{4π/3} (3/(2π))[(3v/(4π))^{−1/3} − 1] dv = 1, ascertaining that g(v) is a pdf.
Surface Area. The surface area of a sphere is given by A(r) = 4πr². First deriving the cdf of A:

G(a) = P(A ≤ a) = P(4πr² ≤ a)
     = P(r ≤ (a/(4π))^{1/2})
     = ∫_{0}^{(a/(4π))^{1/2}} 6r(1 − r) dr = [3r² − 2r³]_{0}^{(a/(4π))^{1/2}}
     = 3(a/(4π)) − 2(a/(4π))^{3/2}
And now differentiating with respect to a to find the pdf of A:

G′(a) = g(a) = 3/(4π) − 3(a/(4π))^{1/2}(1/(4π))
             = (3/(4π))[1 − (a/(4π))^{1/2}]

For A(r) = 4πr² over 0 < r < 1, we have 0 < A(r) < 4π. Then g(a) ≥ 0 over this interval, and
∫_{0}^{4π} (3/(4π))[1 − (a/(4π))^{1/2}] da = 1, ascertaining that g(a) is a pdf.
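Since f(r) = 6r(1 − r) on (0, 1) is the Beta(2, 2) density, both results can be spot-checked numerically; the following sketch (an added illustration assuming numpy and scipy) integrates the derived pdfs and compares simulated cdf values:

```python
import numpy as np
from scipy.integrate import quad

g_v = lambda v: 3/(2*np.pi) * ((3*v/(4*np.pi))**(-1/3) - 1)   # volume density, 0 < v < 4*pi/3
g_a = lambda a: 3/(4*np.pi) * (1 - (a/(4*np.pi))**0.5)        # surface-area density, 0 < a < 4*pi
print(quad(g_v, 0, 4*np.pi/3)[0], quad(g_a, 0, 4*np.pi)[0])   # both ~1.0

rng = np.random.default_rng(2)
r = rng.beta(2, 2, size=1_000_000)                            # R has the Beta(2, 2) pdf 6r(1 - r)
v, a = 4/3*np.pi*r**3, 4*np.pi*r**2
# Both cdfs at the point corresponding to r = 1/2 equal F_R(1/2) = 3(1/2)^2 - 2(1/2)^3 = 1/2.
print(np.mean(v <= 4/3*np.pi*0.5**3), np.mean(a <= 4*np.pi*0.5**2))
```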

5.8  A fluctuating electric current I may be considered as a uniformly distributed random variable
over the interval (9, 11). If this current flows through a 2-ohm resistor, find the pdf of
the power P = 2I².

By uniform distribution, f(i) = 1/(11 − 9) = 1/2, 9 < I < 11, where I is the random variable for current and i a
specific outcome of current. Let P* be the random variable for power and p* a specific outcome of power. Deriving
the cdf of P* gives us:

G(p*) = P(P* ≤ p*) = P(2I² ≤ p*)
      = P(I ≤ (p*/2)^{1/2})
      = ∫_{9}^{(p*/2)^{1/2}} (1/2) di = [i/2]_{9}^{(p*/2)^{1/2}}
      = (1/2)((p*/2)^{1/2} − 9)
Deriving the pdf of P ∗ :

G′(p*) = g(p*) = (1/2)(1/2)(p*/2)^{−1/2}(1/2)
       = (1/8)(p*/2)^{−1/2} = (1/8)(2/p*)^{1/2}

For P* = 2I², 9 < I < 11, we have 162 < P* < 242. Then g(p*) ≥ 0, and ∫_{162}^{242} (1/8)(2/p*)^{1/2} dp* = 1,
ascertaining that g(p*) is a pdf.
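A brief numerical confirmation of this result (an added sketch, assuming numpy and scipy):

```python
import numpy as np
from scipy.integrate import quad

g = lambda p: (1/8) * np.sqrt(2/p)               # derived density of the power P* = 2 I^2
print(quad(g, 162, 242)[0])                      # ~1.0

rng = np.random.default_rng(3)
i = rng.uniform(9, 11, size=1_000_000)
# Compare the simulated cdf with G(p*) = (1/2)(sqrt(p*/2) - 9) at p* = 200.
print(np.mean(2*i**2 <= 200), 0.5*(np.sqrt(200/2) - 9))
```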

5.9  The speed of a molecule in a uniform gas at equilibrium is a random variable V whose pdf
is given by f(v) = av² e^{−bv²}, v > 0, where b = m/2kT and k, T, and m denote Boltzmann's
constant, the absolute temperature, and the mass of the molecule, respectively.

(a)  Evaluate the constant a (in terms of b).
We proceed by integration by parts. Let u = v, du = dv, dw = ave^{−bv²} dv, and w = −(a/(2b)) e^{−bv²}. Then

∫_{0}^{+∞} av² e^{−bv²} dv = [−(a/(2b)) v e^{−bv²}]_{0}^{+∞} + (a/(2b)) ∫_{0}^{+∞} e^{−bv²} dv
                        = (a/(2b)) (1/2)√(π/b) = 1
                        ⟹ a = 4b^{3/2}/√π
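A spot check of this constant for an arbitrary b (the value b = 2.0 is an illustrative choice, not from the text):

```python
import numpy as np
from scipy.integrate import quad

b = 2.0
a = 4.0 * b**1.5 / np.sqrt(np.pi)
# With this a, the density a v^2 exp(-b v^2) on (0, inf) should integrate to 1.
print(quad(lambda v: a * v**2 * np.exp(-b * v**2), 0, np.inf)[0])   # ~1.0
```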

(b)  Derive the distribution of the random variable W = mV²/2, which represents the kinetic
energy of the molecule.

Without needing to deal with error functions, because W = mV²/2 is monotonic for v > 0, we may make
use of Theorem 5.1. Then V = (2w/m)^{1/2}, and |dv/dw| = (1/2)(2/m)(2w/m)^{−1/2} = (1/m)(2w/m)^{−1/2}. With
f(v) = (4b^{3/2}/√π) v² e^{−bv²}, we can derive the pdf of the kinetic energy W:

g(w) = (4b^{3/2}/√π)(2w/m) e^{−b(2w/m)} (1/m)(2w/m)^{−1/2}
     = (2/((kT)^{3/2} π^{1/2})) w^{1/2} e^{−w/kT},   w > 0
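As a sanity check (an added sketch; the value kT = 1.0 is arbitrary and for illustration only), the derived density integrates to 1 and its mean comes out to 3kT/2, the familiar average kinetic energy:

```python
import numpy as np
from scipy.integrate import quad

kT = 1.0
g = lambda w: 2.0 / (kT**1.5 * np.sqrt(np.pi)) * np.sqrt(w) * np.exp(-w / kT)
print(quad(g, 0, np.inf)[0])                       # ~1.0
print(quad(lambda w: w * g(w), 0, np.inf)[0])      # mean kinetic energy, ~1.5 * kT
```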

5.10  A random voltage X is uniformly distributed over the interval (−k, k). If X is the input of a
nonlinear device with the characteristics shown in Fig. 5.12, find the probability distribution
of Y in the following three cases:

[Fig. 5.12 (not reproduced): the output characteristic has breakpoints at ±a and ±x0; as used in the solutions below, Y = 0 for |X| ≤ a, Y rises linearly from 0 at |X| = a to y0 at |X| = x0, and Y = y0 for |X| ≥ x0.]

By uniform distribution, f(x) = 1/(2k), −k < X < k.

(a)  k < a

Since the event Y = 0 is equivalent to the event X ∈ (−k, k), we may simply calculate
P(Y = 0) = ∫_{−k}^{k} 1/(2k) dx = 1. Therefore the entire distribution of Y is the point mass P(Y = 0) = 1,
with P(Y = y) = 0 for y ≠ 0.

(b)  a < k < x0

Here, we define Y piecewise as follows:

Y = −(y0/(x0 − a)) X − a y0/(x0 − a),   −k < x < −a
Y = 0,                                  −a ≤ x ≤ a
Y = (y0/(x0 − a)) X − a y0/(x0 − a),    a < x < k
Therefore, we must find the probability distribution function of Y such that

P(0 ≤ Y ≤ y) = ∫_{0}^{y} g(y) dy + P(Y = 0) = 1

or, equivalently,

∫_{0 (x=−a)}^{y (x=−k)} g(y) dy + ∫_{x=−a}^{x=a} f(x) dx + ∫_{0 (x=a)}^{y (x=k)} g(y) dy = 1

First we find G(y) over the X interval (−k, −a):


G(y) = P(Y ≤ y) = P(−(y0/(x0 − a)) X − a y0/(x0 − a) ≤ y)
     = P(X ≥ −((x0 − a)/y0) y − a)
     = ∫_{−((x0 − a)/y0) y − a}^{−a} 1/(2k) dx = −a/(2k) + ((x0 − a)y)/(2k y0) + a/(2k) = ((x0 − a)y)/(2k y0)

Then G′(y) = g(y) = (x0 − a)/(2k y0), 0 < y < y0(k − a)/(x0 − a).
Next we find G(y) over the X interval (−a, a). Because Y = 0 over this interval, we may interpret the
events Y = 0 and X ∈ (−a, a) to be equivalent. Then we may simply write
P(Y = 0) = ∫_{−a}^{a} 1/(2k) dx = a/(2k) + a/(2k) = a/k
Lastly, we derive G(y) over the X interval (a, k):

G(y) = P(Y ≤ y) = P((y0/(x0 − a)) X − a y0/(x0 − a) ≤ y)
     = P(X ≤ ((x0 − a)/y0) y + a)
     = ∫_{a}^{((x0 − a)/y0) y + a} 1/(2k) dx = ((x0 − a)y)/(2k y0)

Then G′(y) = g(y) = (x0 − a)/(2k y0), 0 < y < y0(k − a)/(x0 − a). Since both events X ∈ (−k, −a) and X ∈ (a, k) are
equivalent to Y ∈ (0, y), we need only sum the corresponding probability distribution functions of Y over
those respective intervals. In particular, we have:
"Z
−a Z ( x0y−a )y−a #
d d 1 0 1
g(y) = G(y) = dx + dx
dy dy −( x0y−a )y−a 2k a 2k
0

x0 − a y0 (k − a)
= ,0 < y <
ky0 x0 − a

x0 − a y0 (k − a) a
Namely, we can conclude that g(y) = ,0 < y < and g(y) = , y = 0 .
ky0 x0 − a k

(c)  k > x0

By part (b), we know the probability distribution functions over the X range spaces (−a, a), (−x0, −a),
and (a, x0); for the latter two, the interval over Y for which they are defined is 0 < y < y0. All that
remains is to determine the probability that Y = y0. Simply, because Y takes the constant value y0 on the
equivalent events X ∈ (−k, −x0) and X ∈ (x0, k), we may write

P(Y = y0) = ∫_{x0}^{k} 1/(2k) dx + ∫_{−k}^{−x0} 1/(2k) dx
          = (k − x0)/(2k) + (k − x0)/(2k) = (k − x0)/k = 1 − x0/k
2k 2k k
Therefore, the probability distribution of Y is:

P(Y = 0) = a/k
g(y) = (x0 − a)/(k y0),   0 < y < y0
P(Y = y0) = 1 − x0/k
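A simulation sketch of case (c) under hypothetical device parameters (a = 1, x0 = 2, y0 = 1, k = 3, chosen only for illustration and not from the text):

```python
import numpy as np

a, x0, y0, k = 1.0, 2.0, 1.0, 3.0                # illustrative values with k > x0
rng = np.random.default_rng(4)
x = rng.uniform(-k, k, size=1_000_000)
# Device characteristic: 0 for |x| <= a, linear up to y0 at |x| = x0, then y0.
y = np.clip((np.abs(x) - a) / (x0 - a), 0.0, 1.0) * y0

print(np.mean(y == 0), a / k)                    # point mass at y = 0
print(np.mean(y == y0), 1 - x0 / k)              # point mass at y = y0
# Between 0 and y0, the density should be (x0 - a)/(k*y0).
print(np.mean((y > 0) & (y < 0.5 * y0)), (x0 - a) / (k * y0) * 0.5 * y0)
```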

5.11  The radiant energy (in Btu/hr/ft²) is given as the following function of temperature T (in
degrees Fahrenheit): E = 0.173(T/100)⁴. Suppose that the temperature T is considered to
be a continuous random variable with pdf

f(t) = 200 t^{−2},  40 ≤ t ≤ 50,   and f(t) = 0 elsewhere.

Find the pdf of the radiant energy E.

Let E be the random variable for radiant energy and e a specific outcome of E. First we derive the cdf of E:

G(e) = P(E ≤ e) = P(0.173(t/100)⁴ ≤ e)
     = P(t ≤ 100(e/0.173)^{1/4})
     = ∫_{40}^{100(e/0.173)^{1/4}} 200 t^{−2} dt = −2(e/0.173)^{−1/4} + 5

Then the pdf of E is:

G′(e) = g(e) = (1/2)(e/0.173)^{−5/4}(1/0.173)
     = 2.89(e/0.173)^{−5/4} = 0.322 e^{−5/4},   0.0044 ≤ e ≤ 0.0108
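The derived density can be checked numerically; in the sketch below (an added illustration) T is sampled by inverting its cdf F(t) = 5 − 200/t on [40, 50]:

```python
import numpy as np
from scipy.integrate import quad

g = lambda e: 0.322 * e**(-5/4)
lo, hi = 0.173 * 0.4**4, 0.173 * 0.5**4          # ~0.00443 and ~0.0108
print(quad(g, lo, hi)[0])                        # ~1.0 (up to rounding in the constant 0.322)

rng = np.random.default_rng(5)
t = 200.0 / (5.0 - rng.uniform(0, 1, size=1_000_000))   # inverse-transform sample of T
e = 0.173 * (t / 100.0)**4
# Compare the simulated cdf with G(e) = 5 - 2 (e/0.173)^(-1/4) at a test point.
e0 = 0.173 * 0.45**4
print(np.mean(e <= e0), 5.0 - 2.0 * (e0 / 0.173)**(-0.25))
```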

5.12  To measure air velocities, a tube (known as a Pitot static tube) is used which enables one to
measure differential pressure. This differential pressure is given by P = (1/2)dV², where d is
the density of the air and V is the wind speed (mph). If V is a random variable uniformly
distributed over (10, 20), find the pdf of P.

By uniform distribution, f(v) = 1/10, 10 < V < 20. To avoid confusion with the probability symbol P, let X
denote the differential pressure random variable and x a specific outcome. We first determine the cdf of X:
G(x) = P(X ≤ x) = P((1/2)dV² ≤ x)
     = P(V ≤ (2x/d)^{1/2})
     = ∫_{10}^{(2x/d)^{1/2}} (1/10) dv = (1/10)(2x/d)^{1/2} − 1

The pdf of X is then:

G′(x) = g(x) = (1/20)(2/d)(2x/d)^{−1/2} = (1/(10d))(2x/d)^{−1/2},   50d < x < 200d

This is clearly positive for x ∈ (50d, 200d), and it also follows that ∫_{50d}^{200d} (1/(10d))(2x/d)^{−1/2} dx = 1.
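A final check with an illustrative air density (the numerical value of d below is an assumption; the algebra above does not depend on it):

```python
import numpy as np
from scipy.integrate import quad

d = 0.0023                                        # illustrative air density
g = lambda x: 1.0 / (10.0 * d) * (2.0 * x / d)**(-0.5)
print(quad(g, 50 * d, 200 * d)[0])                # ~1.0

rng = np.random.default_rng(6)
v = rng.uniform(10, 20, size=1_000_000)
p = 0.5 * d * v**2
# Compare with G(x) = (1/10) sqrt(2x/d) - 1 at the pressure corresponding to v = 15.
x0 = 0.5 * d * 15**2
print(np.mean(p <= x0), 0.1 * np.sqrt(2 * x0 / d) - 1)
```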

5.13  Suppose that P(X ≤ 0.29) = 0.75, where X is a continuous random variable with some
distribution defined over (0, 1). If Y = 1 − X, determine k so that P(Y ≤ k) = 0.25.

By premise, P(X ≤ 0.29) = ∫_{0}^{0.29} f(x) dx = 0.75. Since it must be the case that P(0 ≤ X ≤ 1) = ∫_{0}^{1} f(x) dx = 1,
it immediately follows that ∫_{0.29}^{1} f(x) dx = 0.25. Lastly, in finding the cdf of Y, we determine

G(y) = P(Y ≤ y) = P(1 − X ≤ y)
     = P(X ≥ 1 − y)
     = ∫_{1−y}^{1} f(x) dx

Now set y = k; then ∫_{1−k}^{1} f(x) dx = 0.25. Therefore, 1 − k = 0.29 and k = 0.71. Observe that determining
the pdf of X, f(x), was completely unnecessary.
