Chapter 6 - Two- and Higher-Dimensional Random Variables
Paul L. Meyer
6.1
Suppose that the following table represents the joint probability distribution of the discrete random variable (X, Y). Evaluate all the marginal and conditional distributions.

Y \ X    1       2       3
1        1/12    1/6     0
2        0       1/9     1/5
3        1/18    1/4     2/15
Marginal Probabilities:

P(X = 1) = 1/12 + 0 + 1/18 = 5/36
P(X = 2) = 1/6 + 1/9 + 1/4 = 19/36
P(X = 3) = 0 + 1/5 + 2/15 = 1/3
P(Y = 1) = 1/12 + 1/6 + 0 = 3/12 = 1/4
P(Y = 2) = 0 + 1/9 + 1/5 = 14/45
P(Y = 3) = 1/18 + 1/4 + 2/15 = 79/180
Conditional Probabilities:

P(X = 1 | Y = 1) = P(X = 1, Y = 1)/P(Y = 1) = (1/12)/(3/12) = 1/3
P(X = 2 | Y = 1) = P(X = 2, Y = 1)/P(Y = 1) = (1/6)/(3/12) = 2/3
P(X = 3 | Y = 1) = P(X = 3, Y = 1)/P(Y = 1) = 0
P(X = 1 | Y = 2) = P(X = 1, Y = 2)/P(Y = 2) = 0
P(X = 2 | Y = 2) = P(X = 2, Y = 2)/P(Y = 2) = (1/9)/(14/45) = 5/14
P(X = 3 | Y = 2) = P(X = 3, Y = 2)/P(Y = 2) = (1/5)/(14/45) = 9/14
P(X = 1 | Y = 3) = P(X = 1, Y = 3)/P(Y = 3) = (1/18)/(79/180) = 10/79
P(X = 2 | Y = 3) = P(X = 2, Y = 3)/P(Y = 3) = (1/4)/(79/180) = 45/79
P(X = 3 | Y = 3) = P(X = 3, Y = 3)/P(Y = 3) = (2/15)/(79/180) = 24/79
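These values can be cross-checked mechanically with exact rational arithmetic. A minimal sketch in Python (the table is keyed as (x, y)):

```python
from fractions import Fraction as F

# Joint distribution from the table in 6.1, keyed as (x, y).
p = {
    (1, 1): F(1, 12), (2, 1): F(1, 6),  (3, 1): F(0),
    (1, 2): F(0),     (2, 2): F(1, 9),  (3, 2): F(1, 5),
    (1, 3): F(1, 18), (2, 3): F(1, 4),  (3, 3): F(2, 15),
}

# Marginals: sum the joint over the other variable.
px = {x: sum(p[(x, y)] for y in (1, 2, 3)) for x in (1, 2, 3)}
py = {y: sum(p[(x, y)] for x in (1, 2, 3)) for y in (1, 2, 3)}

# Conditionals: p(x | y) = p(x, y) / p(y).
cond = {(x, y): p[(x, y)] / py[y] for x in (1, 2, 3) for y in (1, 2, 3)}
```

Exact fractions avoid the rounding noise that floating point would introduce into checks such as whether the marginals sum to 1.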
6.2
Suppose that the two-dimensional random variable (X, Y) has joint pdf

f(x, y) = kx(x − y), 0 < x < 2, −x < y < x
f(x, y) = 0, elsewhere

Normalizing,

∫_0^2 ∫_{−x}^x kx(x − y) dy dx = k ∫_0^2 2x³ dx = 8k = 1

Therefore, k = 1/8.

To obtain the marginal pdf of Y, we integrate over x in two pieces. First, integrating over (y, 2):

∫_y^2 (1/8) x(x − y) dx = y³/48 − y/4 + 1/3,   0 < y < 2

where the bound 0 < y < 2 is apparent from the first inequality 0 < y < x < 2; namely, as 0 < x < 2, it must be that 0 < y < 2. Second, integrating over (−y, 2):

∫_{−y}^2 (1/8) x(x − y) dx = 5y³/48 − y/4 + 1/3,   −2 < y < 0

where the bound −2 < y < 0 is analogously derived as in the first case, using the second inequality.
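As a numeric sanity check (a sketch, using a simple midpoint rule rather than any particular quadrature library), we can confirm that k = 1/8 normalizes f and that the two marginal pieces together integrate to 1:

```python
# Numeric sanity check for 6.2 (a sketch): verify k = 1/8 normalizes f, and that
# the two marginal-pdf pieces for Y integrate to 1. Midpoint rule, pure Python.

def g_pos(y):  # marginal piece for 0 < y < 2
    return y**3 / 48 - y / 4 + 1 / 3

def g_neg(y):  # marginal piece for -2 < y < 0
    return 5 * y**3 / 48 - y / 4 + 1 / 3

def midpoint(f, a, b, n=20000):
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

# Total mass of f(x, y) = (1/8) x (x - y) over 0 < x < 2, -x < y < x:
# the inner integral over y is (1/8) * 2x^3 = x^3 / 4.
mass = midpoint(lambda x: x**3 / 4, 0, 2)

total = midpoint(g_pos, 0, 2) + midpoint(g_neg, -2, 0)
```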
6.3
Suppose that the joint pdf of the two-dimensional random variable (X, Y) is given by

f(x, y) = x² + xy/3, 0 < x < 1, 0 < y < 2
f(x, y) = 0, elsewhere

Compute the following.

(a) P(X > 1/2)

First, we must derive the marginal pdf of X. We do so by integrating with respect to y over (0, 2):

g(x) = ∫_0^2 (x² + xy/3) dy = 2x² + (2/3)x

Lastly, we integrate the above result over (1/2, 1):

∫_{1/2}^1 (2x² + (2/3)x) dx = 5/6

(b) P(Y < X)

There are two approaches. The direct approach is to observe that the bounds of integration for y are simply (0, x). If we define A = {(x, y) | 0 < y < x and 0 < x < 1}, then we need only evaluate ∫∫_A f(x, y) dy dx. Alternatively, we observe that P(Y < X) = 1 − P(Y ≥ X), and therefore the bounds of integration for y are (x, 2). Defining B = {(x, y) | x < y < 2 and 0 < x < 1}, the path forward then becomes 1 − ∫∫_B f(x, y) dy dx. We will proceed using the first approach:

∫_0^1 ∫_0^x (x² + xy/3) dy dx = 7/24

(c) P(Y < 1/2 | X < 1/2)

For problems of the form P(Y < a | X < b), we need only think of f(x, y) as being akin to the probability of the conjunction of events P(Y < a ∩ X < b), with the conditioning event P(X < b) represented by ∫ g(x) dx. The marginal density of X is derived by:

g(x) = ∫_0^2 (x² + xy/3) dy = (6x² + 2x)/3

Then we calculate:

P(Y < 1/2 | X < 1/2) = [∫_0^{1/2} ∫_0^{1/2} (x² + xy/3) dy dx] / [∫_0^{1/2} ((6x² + 2x)/3) dx] = 5/32
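All three results can be confirmed with a crude two-dimensional midpoint rule (a sketch; `integrate2d` and its grid size are ad hoc choices, and `by` may be a function of x to handle the triangular region in (b)):

```python
# Numeric check (sketch) for 6.3 using a 2-D midpoint rule over the joint pdf
# f(x, y) = x^2 + x*y/3 on 0 < x < 1, 0 < y < 2.

def f(x, y):
    return x * x + x * y / 3

def integrate2d(f, ax, bx, ay, by, n=400):
    # Midpoint rule on an n x n grid; `by` may be a function of x for
    # non-rectangular regions such as {y < x}.
    hx = (bx - ax) / n
    total = 0.0
    for i in range(n):
        x = ax + (i + 0.5) * hx
        y_hi = by(x) if callable(by) else by
        hy = (y_hi - ay) / n
        total += sum(f(x, ay + (j + 0.5) * hy) for j in range(n)) * hy * hx
    return total

p_a = integrate2d(f, 0.5, 1.0, 0.0, 2.0)               # P(X > 1/2) = 5/6
p_b = integrate2d(f, 0.0, 1.0, 0.0, lambda x: x)       # P(Y < X) = 7/24
p_c = (integrate2d(f, 0.0, 0.5, 0.0, 0.5)
       / integrate2d(f, 0.0, 0.5, 0.0, 2.0))           # P(Y<1/2 | X<1/2) = 5/32
```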
6.4
Suppose that two cards are drawn at random from a deck of cards. Let X be the number of aces obtained and let Y be the number of queens obtained.

Y \ X    0                      1                      2
0        0                      0                      (4/52)(3/51) = 1/221
1        0                      (4/52)(4/51) = 4/663   0
2        (4/52)(3/51) = 1/221   0                      0

P(X = 0) = 1/221    P(Y = 0) = 1/221
P(X = 1) = 4/663    P(Y = 1) = 4/663
P(X = 2) = 1/221    P(Y = 2) = 1/221
(c) Obtain the conditional distribution of X (given Y) and of Y (given X).

P(X = 0 | Y = 2) = P(X = 0, Y = 2)/P(Y = 2) = 1        P(Y = 0 | X = 2) = P(Y = 0, X = 2)/P(X = 2) = 1
P(X = 1 | Y = 1) = P(X = 1, Y = 1)/P(Y = 1) = 1        P(Y = 1 | X = 1) = P(Y = 1, X = 1)/P(X = 1) = 1
P(X = 2 | Y = 0) = P(X = 2, Y = 0)/P(Y = 0) = 1        P(Y = 2 | X = 0) = P(Y = 2, X = 0)/P(X = 0) = 1
6.5
For what value of k is f(x, y) = ke^{−(x+y)} a joint pdf of (X, Y) over the region 0 < x < 1, 0 < y < 1?

We must find the value of k such that

∫_0^1 ∫_0^1 k e^{−x} e^{−y} dy dx = 1

Evaluating the integral and isolating k yields k = 1/(1 − e^{−1})².
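A quick numeric sketch confirms the normalization; the double integral factors into the square of a one-dimensional integral:

```python
import math

# Sketch: check that k = 1/(1 - e^{-1})^2 makes f(x, y) = k e^{-(x+y)} integrate
# to 1 over the unit square. The double integral factors as (∫_0^1 e^{-t} dt)^2.

def midpoint(f, a, b, n=10000):
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

k = 1 / (1 - math.exp(-1)) ** 2
one_dim = midpoint(lambda t: math.exp(-t), 0.0, 1.0)
total = k * one_dim ** 2
```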
6.6
Suppose that the continuous two-dimensional random variable (X, Y) is uniformly distributed over the square whose vertices are (1, 0), (0, 1), (−1, 0), and (0, −1). Find the marginal pdf's of X and of Y.

[Figure: the square with edges y = −x + 1, y = x + 1, y = −x − 1, and y = x − 1; equivalently, x = −y + 1, x = y − 1, x = −y − 1, and x = y + 1.]

Since we are dealing with a uniform joint distribution, we know that f(x, y) = 1/area(R), where R is the region over which (X, Y) is distributed. The region defined above is a square with side length √2, so f(x, y) = 1/2.

To find the marginal pdf of X, we calculate piecewise over the intervals (0, 1) and (−1, 0):

g(x) = ∫_{x−1}^{−x+1} (1/2) dy = 1 − x,   0 < x < 1
g(x) = ∫_{−x−1}^{x+1} (1/2) dy = 1 + x,   −1 < x < 0

In the interest of brevity, we may write the marginal pdf of X as g(x) = 1 − |x|, −1 < x < 1. By symmetry, the marginal pdf of Y is h(y) = 1 − |y|, −1 < y < 1.
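A Monte Carlo sketch supports the result: rejection-sample the square |x| + |y| < 1 and compare the empirical distribution of X with the CDF implied by g(x) = 1 − |x| (the sampler and sample size are arbitrary choices):

```python
import random

# Monte Carlo sketch for 6.6: sample (X, Y) uniformly on the square |x| + |y| < 1
# by rejection from [-1, 1]^2, then compare empirical P(X <= t) with the CDF
# implied by the marginal g(x) = 1 - |x|.

random.seed(0)

def sample_x():
    while True:
        x, y = random.uniform(-1, 1), random.uniform(-1, 1)
        if abs(x) + abs(y) < 1:
            return x

def cdf(t):
    # ∫_{-1}^{t} (1 - |x|) dx
    return (1 + t) ** 2 / 2 if t < 0 else 1 - (1 - t) ** 2 / 2

n = 100000
xs = [sample_x() for _ in range(n)]
emp = {t: sum(x <= t for x in xs) / n for t in (-0.5, 0.0, 0.5)}
```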
6.7
Suppose that the dimensions, X and Y, of a rectangular metal plate may be considered to be independent continuous random variables with the following pdfs.

X: g(x) = x − 1, 1 < x ≤ 2
   g(x) = −x + 3, 2 < x < 3
   g(x) = 0, elsewhere

Y: h(y) = 1/2, 2 < y < 4
   h(y) = 0, elsewhere

Find the pdf of the area of the plate, A = XY.
In general, the pdf of the product w = xy of two random variables X, Y with corresponding pdfs g(x), h(y) is given by

p(w) = ∫_{−∞}^{+∞} g(u) h(w/u) |det(J)| du

where w = xy and x = u, and det(J) is the Jacobian determinant of x, y in terms of u, w.¹ Here, a = xy and x = u, implying y = a/u, and we derive the Jacobian as

det(J) = det[[∂x/∂a, ∂x/∂u], [∂y/∂a, ∂y/∂u]] = det[[0, 1], [1/u, −a/u²]] = −1/u

so that |det(J)| = 1/u.
Combining the bounds for X and Y (made precise below), we proceed by deriving the contribution to the density over each of the resulting intervals of a:

p(a) = ∫_1^{a/2} g(u) h(a/u) (1/u) du = ∫_1^{a/2} (x − 1)(1/2)(1/x) dx = (a − 2)/4 − (1/2) ln(a/2),   2 < a ≤ 4

p(a) = ∫_{a/4}^2 g(u) h(a/u) (1/u) du = ∫_{a/4}^2 (x − 1)(1/2)(1/x) dx = (8 − a)/8 + (1/2) ln(a/8),   4 < a ≤ 8

p(a) = ∫_2^{a/2} g(u) h(a/u) (1/u) du = ∫_2^{a/2} (−x + 3)(1/2)(1/x) dx = (4 − a)/4 + (3/2) ln(a/4),   4 < a < 6

p(a) = ∫_{a/4}^3 g(u) h(a/u) (1/u) du = ∫_{a/4}^3 (−x + 3)(1/2)(1/x) dx = (a − 12)/8 + (3/2) ln(12/a),   8 < a < 12
¹ For some simple, geometric intuition behind the Jacobian term, check out this Cross Validated post; no measure theory or other advanced machinery is required.
In order to derive the bounds for a, we simply determine which values of a satisfy the following:

a/2 ∈ (1, 2] ⟹ 2 < a ≤ 4
a/4 ∈ (1, 2] ⟹ 4 < a ≤ 8
a/2 ∈ (2, 3) ⟹ 4 < a < 6
a/4 ∈ (2, 3) ⟹ 8 < a < 12

All these statements say is that whichever bound of integration is a function of a, whether lower or upper, must ultimately lie within the interval of x for which g(x) is nonzero.
Taking a step back, each of the piecewise densities derived above gives us the probability of the area a lying within some range α to β. A subtle but critical point is as follows: there are multiple mutually exclusive events leading to the outcome that a ∈ (α, β). For instance, we may have x ∈ (1, 2] or, exclusively, x ∈ (2, 3) with y ∈ (2, 4), and arrive at the same outcome a ∈ (4, 6). The individual densities we derived from each case, as with probabilities of mutually exclusive events, must be summed to get the "total contribution" of the probabilities of all disjoint events leading to the same outcome.
This explanation makes the most sense when thinking of it geometrically. In more precise terms, the area under
the density derived from one of the events is the probability of that event leading to an outcome. The areas of
the densities for all other disjoint events leading to that same outcome must then be summed to get the total
contribution of probabilities of events leading to that outcome.
Now, in the first two piecewise densities, we have 1 < x ≤ 2 and 2 < y < 4, implying 2 < a < 8. The union of the intervals of a for which these two piecewise densities are defined indeed spans the entirety of 2 < a < 8. However, in the latter two piecewise densities, we have 2 < x < 3 and 2 < y < 4, which implies 4 < a < 12. In particular, (4, 6) ∪ (8, 12) leaves a gap; namely, [6, 8] is unaccounted for. Conceptually, we have "missing events" that need to lead to the outcome a ∈ (6, 8).
To further investigate this point, consider the following contour plot of the area of the plate a = xy over the bounds of x and y for which the respective pdfs are nonzero:²

[Figure: contours of a = xy for a = 5 through a = 11, plotted over 2 < x < 3 and 2 < y < 4, with the band 6 < a < 8 shaded.]
Observe that all of the contours between 6 and 8 are defined for all x ∈ (2, 3) and for some corresponding subset of y ∈ (2, 4). For contours with a < 6, the contours are defined only for x ∈ (2, a/2); for a > 8, they are defined for x ∈ (a/4, 3). Therefore, the shaded region where 6 < a < 8 has a unique property that the contours of a outside of this boundary do not.
Conceptually, this seems to be a striking point in explaining the aforementioned "gap." Precisely, these contours represent the set of events where the x dimension is able to range entirely from 2 to 3, with some corresponding value of y. Combined, these events map to a ∈ (6, 8). To express the integration bounds in terms of a, it suffices to ask: does there exist a function mapping a into (2, 3), namely, is there an h(a) such that

h(a) ∈ (2, 3),   6 < a < 8?

The natural first test is to investigate whether some linear function of a accomplishes this purpose. Consider

ma + b ∈ (2, 3)

Mapping (6, 8) to (2, 3) linearly then gives us x = a/2 − 1. And at long last the path forward is clear, for x = a/2 − 1 is precisely the "partition" dividing up x ∈ (2, 3) that we are looking for in order to express the probability function for this segment in terms of a.
² Thank you to whuber on Cross Validated for the tip here. See: https://stats.stackexchange.com/questions/609878/2-dimensional-functions-of-random-variables-with-piecewise-densities
Integrating,

p(a) = ∫_2^{a/2−1} g(u) h(a/u) (1/u) du = ∫_2^{a/2−1} (−x + 3)(1/2)(1/x) dx = (6 − a)/4 + (3/2) ln((a − 2)/4),   6 < a < 8

p(a) = ∫_{a/2−1}^3 g(u) h(a/u) (1/u) du = ∫_{a/2−1}^3 (−x + 3)(1/2)(1/x) dx = (a − 8)/4 + (3/2) ln(6/(a − 2)),   6 < a < 8
For our grand finale, we can now provide a definition and plot for p(a):

p(a) = (a − 2)/4 − (1/2) ln(a/2),                      2 < a ≤ 4
p(a) = (16 − 3a)/8 + (1/2) ln(a/8) + (3/2) ln(a/4),    4 < a ≤ 6
p(a) = (4 − a)/8 + (1/2) ln(a/8) + (3/2) ln(3/2),      6 < a < 8
p(a) = (a − 12)/8 + (3/2) ln(12/a),                    8 < a < 12
[Figure: plot of p(a) over 2 < a < 12.]
Clearly p(a) ≥ 0, and integrating p(a) piecewise across the respective bounds yields unity, satisfying the Kolmogorov axioms and confirming that p(a) is a pdf. I leave the details of that calculation to the reader.
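The closing claim is easy to check numerically. A sketch: integrate p(a) with a midpoint rule, and compare P(A ≤ 6) against a simulation of A = XY (note that g(x) is exactly the triangular density that Python's random.triangular(1, 3, 2) samples):

```python
import math, random

# Sketch: check that the piecewise density p(a) from 6.7 integrates to 1, and
# compare P(A <= 6) against a Monte Carlo simulation of A = XY.

def p(a):
    if 2 < a <= 4:
        return (a - 2) / 4 - 0.5 * math.log(a / 2)
    if 4 < a <= 6:
        return (16 - 3 * a) / 8 + 0.5 * math.log(a / 8) + 1.5 * math.log(a / 4)
    if 6 < a < 8:
        return (4 - a) / 8 + 0.5 * math.log(a / 8) + 1.5 * math.log(3 / 2)
    if 8 <= a < 12:
        return (a - 12) / 8 + 1.5 * math.log(12 / a)
    return 0.0

def midpoint(f, lo, hi, n=20000):
    h = (hi - lo) / n
    return sum(f(lo + (i + 0.5) * h) for i in range(n)) * h

total = midpoint(p, 2, 12)      # should be close to 1
p_le_6 = midpoint(p, 2, 6)

random.seed(0)
n = 200000
hits = 0
for _ in range(n):
    x = random.triangular(1, 3, 2)   # matches g(x): triangular on (1, 3), mode 2
    y = random.uniform(2, 4)         # matches h(y) = 1/2 on (2, 4)
    hits += (x * y <= 6)
mc = hits / n
```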
6.8
Let X represent the life length of an electronic device and suppose that X is a continuous random variable with pdf

f(x) = 1000/x², x > 1000
f(x) = 0, elsewhere

Let X1 and X2 be two independent determinations of the above random variable X. (That is, suppose that we are testing the life length of two such devices.) Find the pdf of the random variable Z = X1/X2.

In general, for some quotient of independent random variables z = x/y with pdfs g(x), h(y), and with v = y, the density of z may be derived as

q(z) = ∫_{−∞}^{+∞} g(vz) h(v) |v| dv

In this instance, however, because we are effectively testing two independent determinations of X, we are considering f(x) with the random variables x1 and x2:

f(x1) = 1000/x1²,   f(x2) = 1000/x2²

By the above result, let z = x1/x2 and v = x2. Then we may write:

p(z) = ∫_{1000}^{∞} (1000/(vz)²)(1000/v²) v dv = 1/(2z²),   z ≥ 1
The bound z ≥ 1 is derived from the fact that since we integrate over x2 = v, with a lower bound of integration x2 = v = 1000, the lowest z can be is 1 if we allow x1 ≥ 1000. Put differently, this is the segment of the density function that accounts for the events where x1 ≥ x2.
For the case where 0 < z < 1, it must be the case that x1 < x2. By complementary events, the total mass on this interval is

1 − ∫_1^{∞} 1/(2z²) dz = 1/2

and the alternative derivation below shows that this mass is spread uniformly, so p(z) = 1/2, 0 < z < 1.
In the alternative, we may frame the problem in the following manner. Consider again z = x1/x2 and v = x2. Then zx2 = zv = x1. Applying the condition that x1 > 1000, it follows that zv > 1000 ⟹ v > 1000/z. This is our lower bound of integration. Then we integrate:

p(z) = ∫_{1000/z}^{∞} (1000/(vz)²)(1000/v²) v dv = 1/2,   0 < z < 1
However, since it must also be that x2 = v > 1000, we cannot have z ≥ 1, for it would drop the lower bound
below 1000. Therefore it must be the case that 0 < z < 1.
Applying the condition that x2 = v > 1000 as the lower bound of integration for v yields the other segment of the density as before:

p(z) = ∫_{1000}^{∞} (1000/(vz)²)(1000/v²) v dv = 1/(2z²),   z ≥ 1
Therefore, the piecewise density is defined as:

p(z) = 1/2,       0 < z < 1
p(z) = 1/(2z²),   z ≥ 1
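A Monte Carlo sketch agrees: X can be sampled by inversion, since F(x) = 1 − 1000/x for x > 1000 gives x = 1000/(1 − U):

```python
import random

# Monte Carlo sketch for 6.8: X has pdf 1000/x^2 for x > 1000 (a Pareto tail),
# sampled by inversion: F(x) = 1 - 1000/x, so x = 1000/(1 - U).

random.seed(0)
n = 200000

def sample_x():
    return 1000 / (1 - random.random())

zs = [sample_x() / sample_x() for _ in range(n)]

p_lt_1 = sum(z < 1 for z in zs) / n   # analytic: 1/2
p_lt_2 = sum(z < 2 for z in zs) / n   # analytic: 1/2 + ∫_1^2 dz/(2z^2) = 3/4
```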
Given the following joint probability distribution of (X, Y), we find the distributions of V = max(X, Y) and W = X + Y.

Y \ X    0       1       2       3       4       5
0        0       0.01    0.03    0.05    0.07    0.09
1        0.01    0.02    0.04    0.05    0.06    0.08
2        0.01    0.03    0.05    0.05    0.05    0.06
3        0.01    0.02    0.04    0.06    0.06    0.05

First, V = max(X, Y):
P (V = 5) = P (X = 5, Y = 0) + P (X = 5, Y = 1) + P (X = 5, Y = 2) + P (X = 5, Y = 3)
= 0.09 + 0.08 + 0.06 + 0.05 = 0.28
P (V = 4) = P (X = 4, Y = 0) + P (X = 4, Y = 1) + P (X = 4, Y = 2) + P (X = 4, Y = 3)
= 0.07 + 0.06 + 0.05 + 0.06 = 0.24
P (V = 3) = P (X = 3, Y = 0) + P (X = 3, Y = 1) + P (X = 3, Y = 2) + P (X = 3, Y = 3)
+ P (X = 0, Y = 3) + P (X = 1, Y = 3) + P (X = 2, Y = 3)
= 0.05 + 0.05 + 0.05 + 0.06 + 0.01 + 0.02 + 0.04 = 0.28
P (V = 2) = P (X = 2, Y = 0) + P (X = 2, Y = 1) + P (X = 2, Y = 2) + P (X = 0, Y = 2) + P (X = 1, Y = 2)
= 0.03 + 0.04 + 0.05 + 0.01 + 0.03 = 0.16
P (V = 1) = P (X = 1, Y = 0) + P (X = 1, Y = 1) + P (X = 0, Y = 1)
= 0.01 + 0.02 + 0.01 = 0.04
P (V = 0) = P (X = 0, Y = 0) = 0
Therefore, Σ_{i=0}^{5} P(V = i) = 0.28 + 0.24 + 0.28 + 0.16 + 0.04 + 0 = 1.
Next, we do W = X + Y :
P (W = 0) = P (X = 0, Y = 0) = 0
P (W = 1) = P (X = 1, Y = 0) + P (X = 0, Y = 1)
= 0.01 + 0.01 = 0.02
P (W = 2) = P (X = 1, Y = 1) + P (X = 2, Y = 0) + P (X = 0, Y = 2)
= 0.02 + 0.03 + 0.01 = 0.06
P (W = 3) = P (X = 3, Y = 0) + P (X = 0, Y = 3) + P (X = 2, Y = 1) + P (X = 1, Y = 2)
= 0.05 + 0.01 + 0.04 + 0.03 = 0.13
P (W = 4) = P (X = 4, Y = 0) + P (X = 3, Y = 1) + P (X = 1, Y = 3) + P (X = 2, Y = 2)
= 0.07 + 0.05 + 0.02 + 0.05 = 0.19
P (W = 5) = P (X = 5, Y = 0) + P (X = 4, Y = 1) + P (X = 3, Y = 2) + P (X = 2, Y = 3)
= 0.09 + 0.06 + 0.05 + 0.04 = 0.24
P (W = 6) = P (X = 5, Y = 1) + P (X = 4, Y = 2) + P (X = 3, Y = 3)
= 0.08 + 0.05 + 0.06 = 0.19
P (W = 7) = P (X = 5, Y = 2) + P (X = 4, Y = 3)
= 0.06 + 0.06 = 0.12
P (W = 8) = P (X = 5, Y = 3) = 0.05
As expected, Σ_{i=0}^{8} P(W = i) = 0 + 0.02 + 0.06 + 0.13 + 0.19 + 0.24 + 0.19 + 0.12 + 0.05 = 1.
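Both distributions can be recomputed mechanically from the joint table (a sketch in exact arithmetic; entries are stored in hundredths):

```python
from fractions import Fraction as F

# Sketch: recompute the distributions of V = max(X, Y) and W = X + Y from the
# joint table used above, keyed as (x, y) with entries in hundredths.
table = {
    (0, 0): 0, (1, 0): 1, (2, 0): 3, (3, 0): 5, (4, 0): 7, (5, 0): 9,
    (0, 1): 1, (1, 1): 2, (2, 1): 4, (3, 1): 5, (4, 1): 6, (5, 1): 8,
    (0, 2): 1, (1, 2): 3, (2, 2): 5, (3, 2): 5, (4, 2): 5, (5, 2): 6,
    (0, 3): 1, (1, 3): 2, (2, 3): 4, (3, 3): 6, (4, 3): 6, (5, 3): 5,
}
p = {k: F(v, 100) for k, v in table.items()}

v_dist, w_dist = {}, {}
for (x, y), prob in p.items():
    v_dist[max(x, y)] = v_dist.get(max(x, y), F(0)) + prob
    w_dist[x + y] = w_dist.get(x + y, F(0)) + prob
```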
Proof. (a) (⟹) By premise, X, Y are independent. By definition of independence, p(xi, yj) = p(xi) q(yj) for all i, j. By definition of conditional probability:

p(xi | yj) = p(xi, yj) / q(yj) = p(xi) q(yj) / q(yj) = p(xi),   for all i, j

q(yj | xi) = q(yj, xi) / p(xi) = q(yj) p(xi) / p(xi) = q(yj),   for all i, j

(⟸) By premise, p(xi | yj) = p(xi) for all i, j. Then

p(xi | yj) = p(xi, yj) / q(yj) = p(xi) ⟹ p(xi, yj) = p(xi) q(yj)

which is definitionally the independence of X, Y.

(b) (⟹) By premise, X, Y are independent. Then f(x, y) = g(x) h(y) implies f(x, y)/h(y) = g(x) and f(x, y)/g(x) = h(y). Definitionally, f(x, y)/h(y) = g(x | y) and f(x, y)/g(x) = h(y | x), so g(x | y) = g(x) and h(y | x) = h(y).

(⟸) By premise, g(x | y) = g(x) and h(y | x) = h(y). By definition and premise, g(x | y) = f(x, y)/h(y) = g(x) and h(y | x) = f(x, y)/g(x) = h(y), which both imply f(x, y) = g(x) h(y), which is definitionally the independence of X, Y.
6.11
The magnetizing force H at a point P, X units from a wire carrying a current I, is given by H = 2I/X. Suppose that P is a variable point. That is, X is a continuous random variable uniformly distributed over (3, 5). Assume that the current I is also a continuous random variable, uniformly distributed over (10, 20). Suppose, in addition, that the random variables X and I are independent. Find the pdf of the random variable H.
By uniform distribution, g(x) = 1/(5 − 3) = 1/2, 3 < x < 5, and f(i) = 1/(20 − 10) = 1/10, 10 < i < 20. We next determine the
range and partitioning of the interval over which the magnetizing force H is non-zero. By premise, 3 < x < 5
and 10 < i < 20. The lower bound of H is when i approaches 10 and x approaches 5, so that lower bound is 4.
Analogously, the upper bound is when i approaches 20 and x approaches 3, meaning the upper bound for H is
40/3. Therefore,
4 < H < 40/3
But we may partition H further. When i approaches 10 but x approaches 3, H approaches 20/3. Therefore, 4 < H < 20/3 is one such partition. Next, as i approaches 20 and x approaches 5, H approaches 8. The last two sub-intervals are 20/3 < H < 8 and 8 < H < 40/3. In deriving the bounds of integration to determine
the piecewise densities, we are effectively capturing the summed probabilities of each possible configuration of
x, i leading to outcomes of H in a specific sub-interval.
Given h = 2i/x, we can derive

i = hx/2 ⟹ 10 < hx/2 < 20 ⟹ 20/h < x < 40/h
This lastly gives us the bounds of integration 20/h < x < 5 and 3 < x < 40/h. Up next is to determine the Jacobian, taking advantage of the independence of X and I and applying the theorem for deriving the density function of a quotient of random variables. If h = 2i/x and v = x, then we have i = hv/2 and x = v. Therefore,

det[[∂i/∂h, ∂i/∂v], [∂x/∂h, ∂x/∂v]] = det[[v/2, h/2], [0, 1]] = v/2
The density is then derived by integrating the following over our previously obtained bounds of integration,

q(h) = ∫_{−∞}^{+∞} f(hv/2) g(v) (v/2) dv = ∫_Bounds (1/10)(1/2)(v/2) dv = ∫_Bounds (x/40) dx
and in doing so, we get

q(h) = ∫_3^{40/h} (x/40) dx = (1600 − 9h²)/(80h²),   8 < h < 40/3

q(h) = ∫_{20/h}^5 (x/40) dx = (5h² − 80)/(16h²),   4 < h < 20/3
where the respective intervals on which each segment is defined are simply the values of h that satisfy the requirement that 40/h, 20/h ∈ (3, 5). Now, we see that the segment corresponding to the sub-interval 20/3 < h < 8 is missing. To find our missing segment, we need bounds of integration that take us from (20/3, 8) to the domain of whichever variable of integration we choose, which is given to us by premise. One way to do so is to express x in terms of i and h (we had previously only examined i in terms of x and h). Doing so, we derive

x = 2i/h ⟹ 3 < 2i/h < 5 ⟹ 3h/2 < i < 5h/2

We note that when h approaches 20/3, 3h/2 approaches 10, and when h approaches 8, 5h/2 approaches 20. Therefore, when 20/3 < h < 8, we are able to "restore" 10 < i < 20. Indeed these will be our bounds of integration, but we will first need to rewrite our integral with i as the variable of integration.
Let h = 2i/x, v = i. Then i = v and x = 2v/h. Our new Jacobian is

det[[∂i/∂h, ∂i/∂v], [∂x/∂h, ∂x/∂v]] = det[[0, 1], [−2v/h², 2/h]] = 2v/h²

And our derivation of the last segment of the density function amounts to

q(h) = ∫_{−∞}^{+∞} g(2v/h) f(v) (2v/h²) dv = ∫_{3h/2}^{5h/2} (1/2)(1/10)(2v/h²) dv = 1/5,   20/3 < h < 8
Therefore the pdf of H can be written piecewise as

q(h) = (5h² − 80)/(16h²),     4 < h < 20/3
q(h) = 1/5,                   20/3 < h < 8
q(h) = (1600 − 9h²)/(80h²),   8 < h < 40/3
Integrating over each segment of the piecewise density and summing yields unity, satisfying the Kolmogorov axioms. Verification is left to the reader.
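As a sketch of that verification, integrate q(h) numerically and compare P(H ≤ 8) with a direct simulation of H = 2I/X:

```python
import random

# Sketch: verify the piecewise density q(h) for H = 2I/X (6.11) integrates to 1
# and agrees with simulation, with X ~ U(3, 5) and I ~ U(10, 20) independent.

def q(h):
    if 4 < h < 20 / 3:
        return (5 * h * h - 80) / (16 * h * h)
    if 20 / 3 <= h <= 8:
        return 1 / 5
    if 8 < h < 40 / 3:
        return (1600 - 9 * h * h) / (80 * h * h)
    return 0.0

def midpoint(f, lo, hi, n=20000):
    step = (hi - lo) / n
    return sum(f(lo + (i + 0.5) * step) for i in range(n)) * step

total = midpoint(q, 4, 40 / 3)   # should be close to 1
p_le_8 = midpoint(q, 4, 8)       # analytic value: 1/3 + 4/15 = 3/5

random.seed(0)
n = 200000
mc = sum(2 * random.uniform(10, 20) / random.uniform(3, 5) <= 8
         for _ in range(n)) / n
```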
6.12
The intensity of light at a given point is given by the relationship I = C/D², where C is the candlepower of the source and D is the distance of the source from the given point. Suppose that C is uniformly distributed over (1, 2), while D is a continuous random variable with pdf f(d) = e^{−d}, d > 0. Find the pdf of I, if C and D are independent.
By uniform distribution, f(c) = 1, 1 < c < 2. The first plan of attack is to resolve the issue of the D² in the denominator. Letting Y = D² and using the cdf method, we can write

G(y) = P(Y ≤ y) = P(D² ≤ y) = P(−√y ≤ D ≤ √y) = F(√y) − F(−√y)

Differentiating with respect to y, we get

g(y) = (1/(2√y)) [f(√y) + f(−√y)] = (1/(2√y)) f(√y)

The last equality arises from the fact that because f(d) = e^{−d} only when d > 0, the pdf of Y need only be nonzero when y > 0. Then we can finally write

f(c) = 1,   g(y) = (1/2) y^{−1/2} e^{−y^{1/2}},   for I = C/Y
With i = c/y and letting v = y, we may write

q(i) = ∫_{−∞}^{+∞} f(vi) g(v) |v| dv = ∫_Bounds (1/2) y^{1/2} e^{−y^{1/2}} dy

since f(vi) = 1 wherever 1 < vi < 2, which is precisely what determines the bounds. Clearly we are still missing those bounds. No bother. Since we must have 1 < c < 2 and y > 0, it must be the case that 0 < I < +∞; namely, I need only be positive. We may rewrite i = c/y as iy = c, implying 1 < iy < 2, further implying 1/i < y < 2/i. Because I can be any positive number, this is consistent with the fact that y may be any positive number too.
With the bounds, our derivation for the pdf of I becomes

q(i) = ∫_{1/i}^{2/i} (1/2) y^{1/2} e^{−y^{1/2}} dy

The integral must be evaluated by way of u-substitution, letting u = y^{1/2} and du = (1/2) y^{−1/2} dy. Details are left to the reader. The distribution is then

q(i) = e^{−(1/i)^{1/2}} (1/i + 2(1/i)^{1/2} + 2) − e^{−(2/i)^{1/2}} (2/i + 2(2/i)^{1/2} + 2),   i > 0
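Since the closed form is unwieldy, a numeric cross-check is reassuring. A sketch: integrate q(i) over (0, 1] and compare with a Monte Carlo estimate of P(I ≤ 1), sampling C ~ U(1, 2) and D ~ Exp(1):

```python
import math, random

# Monte Carlo sketch for 6.12: C ~ U(1, 2), D ~ Exp(1) (pdf e^{-d}), I = C/D^2.
# Compare P(I <= 1) from simulation with the closed-form density q(i)
# integrated over (0, 1].

def q(i):
    a, b = math.sqrt(1 / i), math.sqrt(2 / i)
    return (math.exp(-a) * (a * a + 2 * a + 2)
            - math.exp(-b) * (b * b + 2 * b + 2))

def midpoint(f, lo, hi, n=20000):
    h = (hi - lo) / n
    return sum(f(lo + (k + 0.5) * h) for k in range(n)) * h

analytic = midpoint(q, 1e-9, 1.0)   # q vanishes extremely fast as i -> 0

random.seed(0)
n = 200000
mc = sum(random.uniform(1, 2) / random.expovariate(1.0) ** 2 <= 1
         for _ in range(n)) / n
```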
6.13
When a current I (amperes) flows through a resistance R (ohms), the power generated is given by W = I²R (watts). Suppose that I and R are independent random variables with the following pdfs.

I: f(i) = 6i(1 − i), 0 ≤ i ≤ 1
   f(i) = 0, elsewhere

R: g(r) = 2r, 0 < r < 1
   g(r) = 0, elsewhere

Determine the pdf of the random variable W and sketch its graph.
First we deal with the I² term. Let Y = I². Note that because 0 ≤ i ≤ 1 by premise, it follows that 0 ≤ y ≤ 1. Proceeding using the cdf method, and writing g(y) for the pdf of Y,

P(I² ≤ y) = P(−√y ≤ I ≤ √y)
⟹ g(y) = (1/2) y^{−1/2} [f(√y) + f(−√y)] = (1/2) y^{−1/2} f(√y) = 3 − 3y^{1/2}
with the penultimate equality justified by the fact that f(i) is nonzero only when 0 ≤ i ≤ 1. With W = YR, we can now derive the integral

p(w) = ∫_{−∞}^{+∞} 2(w/y) g(y) (1/y) dy

where 2(w/y) is the pdf of R evaluated at r = w/y. Deriving the bounds of integration, we observe that 0 < w/y < 1 (since 0 < r < 1 by premise), implying w < y ≤ 1. Therefore the pdf of W is specifically given by

p(w) = ∫_w^1 2(w/y)(3 − 3y^{1/2})(1/y) dy = 6 + 6w − 12w^{1/2},   0 < w < 1

with the domain 0 < w < 1 following from 0 ≤ i ≤ 1 and 0 < r < 1.
[Figure: plot of p(w) = 6 + 6w − 12w^{1/2} on 0 < w < 1, decreasing from p(0) = 6 to p(1) = 0.]
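The result can be cross-checked by simulation (a sketch; note that f(i) = 6i(1 − i) is the Beta(2, 2) density, so random.betavariate(2, 2) samples I, and R = √U has density 2r):

```python
import random

# Monte Carlo sketch for 6.13: I ~ Beta(2, 2) (pdf 6i(1-i)), R sampled as
# sqrt(U) (pdf 2r), W = I^2 R. Compare P(W <= 1/2) with the CDF implied by
# p(w) = 6 + 6w - 12*sqrt(w).

def cdf(w):
    # ∫_0^w (6 + 6t - 12 t^{1/2}) dt = 6w + 3w^2 - 8w^{3/2}
    return 6 * w + 3 * w * w - 8 * w ** 1.5

random.seed(0)
n = 200000
hits = 0
for _ in range(n):
    i = random.betavariate(2, 2)      # density 6i(1-i) on (0, 1)
    r = random.uniform(0, 1) ** 0.5   # density 2r on (0, 1)
    hits += (i * i * r <= 0.5)
mc = hits / n
```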
g(x) = ∫_{−∞}^{+∞} f(x, y) dy = ∫_x^{+∞} e^{−y} dy = e^{−x},   x > 0

P(X > 2 | Y < 4) = P(X > 2, Y < 4) / P(Y < 4) = (e^{−2} − 3e^{−4}) / (1 − 5e^{−4})

where P(X > 2, Y < 4) = ∫_2^4 ∫_x^4 e^{−y} dy dx = e^{−2} − 3e^{−4}, and P(Y < 4) = ∫_0^4 y e^{−y} dy = 1 − 5e^{−4}, using the marginal h(y) = ∫_0^y e^{−y} dx = y e^{−y}.
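A numeric cross-check (a sketch; it assumes the joint pdf is f(x, y) = e^{−y} on 0 < x < y, which is what the marginal computation above implies):

```python
import math

# Numeric sketch for the last result: joint pdf assumed to be f(x, y) = e^{-y}
# on 0 < x < y (consistent with the marginal g(x) = e^{-x} derived above).
# Check P(X > 2 | Y < 4) = (e^-2 - 3e^-4) / (1 - 5e^-4) by direct integration.

def prob_region(ax, bx, n=2000):
    # ∫ over x in (ax, bx) of ∫_x^4 e^{-y} dy; the inner integral is closed form.
    h = (bx - ax) / n
    total = 0.0
    for i in range(n):
        x = ax + (i + 0.5) * h
        total += (math.exp(-x) - math.exp(-4.0)) * h   # ∫_x^4 e^{-y} dy
    return total

num = prob_region(2.0, 4.0)   # P(X > 2, Y < 4)
den = prob_region(0.0, 4.0)   # P(Y < 4)
ratio = num / den
closed = (math.exp(-2) - 3 * math.exp(-4)) / (1 - 5 * math.exp(-4))
```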