Problem Solutions: September 28, 2005 Draft
• This solution manual remains under construction. The current count is that 678 (out of 687)
problems have solutions. The unsolved problems are
If you volunteer a solution for one of those problems, we’ll be happy to include it . . . and, of
course, “your wildest dreams will come true.”
• Of course, the correctness of every single solution remains unconfirmed. If you find errors or
have suggestions or comments, please send email: [email protected].
• If you need to make solution sets for your class, you might like the Solution Set Constructor
at the instructors site www.winlab.rutgers.edu/probsolns. If you need access, send email:
[email protected].
• Matlab functions written as solutions to homework problems can be found in the archive
matsoln.zip (available to instructors) or in the directory matsoln. Other Matlab functions
used in the text or in these homework solutions can be found in the archive matcode.zip
or directory matcode. The .m files in matcode are available for download from the Wiley
website. Two other documents of interest are also available for download:
• A web-based solution set constructor for the second edition is available to instructors at
http://www.winlab.rutgers.edu/probsolns
• The next update of this solution manual is likely to occur in January, 2006.
Problem Solutions – Chapter 1
[Venn diagram omitted: events T , M , and O for Gerlanda's pizzas.]
(b) Every pizza is either Regular (R), or Tuscan (T ). Hence R ∪ T = S so that R and T are
collectively exhaustive. Thus its also (trivially) true that R ∪ T ∪ M = S. That is, R, T and
M are also collectively exhaustive.
(c) From the Venn diagram, T and O are mutually exclusive. In words, this means that Tuscan
pizzas never have onions or pizzas with onions are never Tuscan. As an aside, “Tuscan” is
a fake pizza designation; one shouldn’t conclude that people from Tuscany actually dislike
onions.
(d) From the Venn diagram, M ∩ T and O are mutually exclusive. Thus Gerlanda’s doesn’t make
Tuscan pizza with mushrooms and onions.
(e) Yes. In terms of the Venn diagram, these pizzas are in the set (T ∪ M ∪ O)c .
(a) An outcome specifies whether the fax is high (h), medium (m), or low (l) speed, and whether
the fax has two (t) pages or four (f) pages. The sample space is
S = {ht, hf, mt, mf, lt, lf } . (1)
(b) The event that the fax is medium speed is A1 = {mt, mf }.
(c) The event that a fax has two pages is A2 = {ht, mt, lt}.
(d) The event that a fax is either high speed or low speed is A3 = {ht, hf, lt, lf }.
(e) Since A1 ∩ A2 = {mt} and is not empty, A1 , A2 , and A3 are not mutually exclusive.
(f) Since
A1 ∪ A2 ∪ A3 = {ht, hf, mt, mf, lt, lf } = S, (2)
the collection A1 , A2 , A3 is collectively exhaustive.
(d) Since ZF ∪ XA = {aaa, aaf, afa, aff, faf, fff} ≠ S, ZF and XA are not collectively exhaustive.
D = {ffa, faf, aff, fff} . (5)
Problem 1.2.4 Solution
The sample space is
S = {1/1, . . . , 1/31, 2/1, . . . , 2/29, 3/1, . . . , 3/31, 4/1, . . . , 4/30, 5/1, . . . , 5/31, 6/1, . . . , 6/30, 7/1, . . . , 7/31, 8/1, . . . , 8/31, 9/1, . . . , 9/30, 10/1, . . . , 10/31, 11/1, . . . , 11/30, 12/1, . . . , 12/31} . (1)
The event H, that a birthday falls in July, consists of the following 31 sample points.
H = {7/1, 7/2, . . . , 7/31} . (2)
2. If we need to check whether the first resistance exceeds the second resistance, an event space
is
B1 = {R1 > R2 } B2 = {R1 ≤ R2 } . (2)
3. If we need to check whether each resistance doesn’t fall below a minimum value (in this case
50 ohms for R1 and 100 ohms for R2 ), an event space is
C1 = {R1 < 50, R2 < 100} , C2 = {R1 < 50, R2 ≥ 100} , (3)
C3 = {R1 ≥ 50, R2 < 100} , C4 = {R1 ≥ 50, R2 ≥ 100} . (4)
4. If we want to check whether the resistors in parallel are within an acceptable range of 90 to
110 ohms, an event space is
D1 = {(1/R1 + 1/R2 )^−1 < 90}, (5)
D2 = {90 ≤ (1/R1 + 1/R2 )^−1 ≤ 110}, (6)
D3 = {(1/R1 + 1/R2 )^−1 > 110}. (7)
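As a quick numerical illustration (our addition, in the style of the Matlab problems of Section 1.11, with hypothetical sample resistances), the following sketch classifies an observed pair into the event spaces above:

R1=55; R2=120;                 % hypothetical observed resistances
B1=(R1>R2);                    % event B1: first resistance exceeds the second
C4=(R1>=50)&(R2>=100);         % event C4: both resistances meet their minimums
Rpar=1/(1/R1+1/R2);            % parallel resistance (1/R1+1/R2)^(-1)
D2=(Rpar>=90)&(Rpar<=110);     % event D2: parallel resistance in [90,110]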
Problem 1.3.1 Solution
The sample space of the experiment is
S = {LF, BF, LW, BW } . (1)
From the problem statement, we know that P [LF ] = 0.5, P [BF ] = 0.2 and P [BW ] = 0.2. This
implies P [LW ] = 1 − 0.5 − 0.2 − 0.2 = 0.1. The questions can be answered using Theorem 1.5.
The problem statement tells us that P [HF ] = 0.2, P [M W ] = 0.1 and P [F ] = 0.5. We can use
these facts to find the probabilities of the other outcomes. In particular,
P [F ] = P [HF ] + P [M F ] . (2)
This implies
P [M F ] = P [F ] − P [HF ] = 0.5 − 0.2 = 0.3. (3)
Also, since the probabilities must sum to 1,
P [HW ] = 1 − P [HF ] − P [M F ] − P [M W ] = 1 − 0.2 − 0.3 − 0.1 = 0.4. (4)
Now that we have found the probabilities of the outcomes, finding any other probability is easy.
(b) The probability that a cell phone is mobile and fast is P [M F ] = 0.3.
Problem 1.3.3 Solution
A reasonable probability model that is consistent with the notion of a shuffled deck is that each
card in the deck is equally likely to be the first card. Let Hi denote the event that the first card
drawn is the ith heart where the first heart is the ace, the second heart is the deuce and so on. In
that case, P [Hi ] = 1/52 for 1 ≤ i ≤ 13. The event H that the first card is a heart can be written
as the disjoint union
H = H1 ∪ H2 ∪ · · · ∪ H13 . (1)
Using Theorem 1.1, we have
P [H] = P [H1 ] + P [H2 ] + · · · + P [H13 ] = 13/52 = 1/4. (2)
Problem 1.4.2 Solution
(a) From the given probability distribution of billed minutes, M , the probability that a call is
billed for more than 3 minutes is
(b) The probability that a call will be billed for 9 minutes or less is
To prove the union bound by induction, we first prove the theorem for the case of n = 2 events. In
this case, by Theorem 1.7(c),
P [A1 ∪ A2 ] = P [A1 ] + P [A2 ] − P [A1 ∩ A2 ] . (1)
By the first axiom of probability, P [A1 ∩ A2 ] ≥ 0. Thus,
P [A1 ∪ A2 ] ≤ P [A1 ] + P [A2 ] , (2)
which proves the union bound for the case n = 2. Now we make our induction hypothesis that the
union bound holds for any collection of n − 1 subsets. In this case, given subsets A1 , . . . , An , we
define
A = A1 ∪ A2 ∪ · · · ∪ An−1 , B = An . (4)
By our induction hypothesis,
P [A] = P [A1 ∪ · · · ∪ An−1 ] ≤ P [A1 ] + · · · + P [An−1 ] . (5)
This implies
P [A1 ∪ · · · ∪ An ] = P [A ∪ B] (6)
≤ P [A] + P [B] (by the union bound for n = 2) (7)
= P [A1 ∪ · · · ∪ An−1 ] + P [An ] (8)
≤ P [A1 ] + · · · + P [An−1 ] + P [An ] , (9)
which completes the proof.
(a) For convenience, let pi = P [F Hi ] and qi = P [V Hi ]. Using this shorthand, the six unknowns
p0 , p1 , p2 , q0 , q1 , q2 fill the table as
     H0    H1    H2
F    p0    p1    p2        (1)
V    q0    q1    q2
Other facts, such as q0 + q1 + q2 = 7/12, can be derived from these facts. Thus, we have
four equations and six unknowns; choosing p0 and p1 will specify the other unknowns. Unfortunately, arbitrary choices for either p0 or p1 can lead to negative values for the other
probabilities. In terms of p0 and p1 , the other unknowns are
0 ≤ p0 ≤ 1/3, (6)
0 ≤ p1 ≤ 1/3, (7)
1/12 ≤ p0 + p1 ≤ 5/12. (8)
Although there are an infinite number of solutions, three possible solutions are:
(b) In terms of the pi , qi notation, the new facts are p0 = 1/4 and q1 = 1/6. These extra facts
uniquely specify the probabilities. In this case,
The above “proof” used the property that for mutually exclusive sets A1 and A2 ,
The problem is that this property is a consequence of the three axioms, and thus must be proven.
For a proof that uses just the three axioms, let A1 be an arbitrary set and for n = 2, 3, . . ., let
An = φ. Since A1 = ∪_{i=1}^∞ Ai , we can use Axiom 3 to write
P [A1 ] = P [∪_{i=1}^∞ Ai ] = P [A1 ] + P [A2 ] + ∑_{i=3}^∞ P [Ai ] . (3)
By subtracting P [A1 ] from both sides, the fact that A2 = φ permits us to write
P [φ] + ∑_{i=3}^∞ P [Ai ] = 0. (4)
By Axiom 1, each term in this sum of nonnegative terms must be zero; in particular, P [φ] = 0.
Problem 1.4.8 Solution
Following the hint, we define the set of events {Ai |i = 1, 2, . . .} such that for i = 1, . . . , m, Ai = Bi and for i > m, Ai = φ. By construction, ∪_{i=1}^m Bi = ∪_{i=1}^∞ Ai . Axiom 3 then implies
P [∪_{i=1}^m Bi ] = P [∪_{i=1}^∞ Ai ] = ∑_{i=1}^∞ P [Ai ] . (1)
For i > m, P [Ai ] = P [φ] = 0, yielding the claim P [∪_{i=1}^m Bi ] = ∑_{i=1}^m P [Ai ] = ∑_{i=1}^m P [Bi ].
Note that the fact that P [φ] = 0 follows from Axioms 1 and 2. This problem is more challenging
if you just use Axiom 3. We start by observing
P [∪_{i=1}^m Bi ] = ∑_{i=1}^{m−1} P [Bi ] + ∑_{i=m}^∞ P [Ai ] . (2)
Now, we use Axiom 3 again on the countably infinite sequence Am , Am+1 , . . . to write
∑_{i=m}^∞ P [Ai ] = P [Am ∪ Am+1 ∪ · · ·] = P [Bm ] ,
which implies
P [∪_{i=1}^m Bi ] = ∑_{i=1}^{m−1} P [Bi ] + ∑_{i=m}^∞ P [Ai ] = ∑_{i=1}^m P [Bi ] . (3)
P [B1 ∪ B2 ∪ · · · ∪ Bm ] = ∑_{i=1}^m P [Bi ] . (5)
Thus, P [φ] = 0. Note that this proof uses only Theorem 1.4 which uses only Axiom 3.
(b) Using Theorem 1.4 with B1 = A and B2 = Ac , we have
P [S] = P [A ∪ Ac ] = P [A] + P [Ac ] . (7)
Since Axiom 2 says P [S] = 1, we have P [Ac ] = 1 − P [A]. This proof uses Axioms 2 and 3.
(c) By Theorem 1.2, we can write both A and B as unions of disjoint events:
A = (AB) ∪ (AB c ) B = (AB) ∪ (Ac B). (8)
Now we apply Theorem 1.4 to write
P [A] = P [AB] + P [AB c ] , P [B] = P [AB] + P [Ac B] . (9)
We can rewrite these facts as
P [AB c ] = P [A] − P [AB], P [Ac B] = P [B] − P [AB]. (10)
Note that so far we have used only Axiom 3. Finally, we observe that A ∪ B can be written
as the union of mutually exclusive events
A ∪ B = (AB) ∪ (AB c ) ∪ (Ac B). (11)
Once again, using Theorem 1.4, we have
P [A ∪ B] = P [AB] + P [AB c ] + P [Ac B] (12)
Substituting the results of Equation (10) into Equation (12) yields
P [A ∪ B] = P [AB] + P [A] − P [AB] + P [B] − P [AB] , (13)
which completes the proof. Note that this claim required only Axiom 3.
(d) Observe that since A ⊂ B, we can write B as the disjoint union B = A ∪ (Ac B). By
Theorem 1.4 (which uses Axiom 3),
P [B] = P [A] + P [Ac B] . (14)
By Axiom 1, P [Ac B] ≥ 0, which implies P [A] ≤ P [B]. This proof uses Axioms 1 and 3.
Problem 1.5.2 Solution
Let si denote the outcome that the roll is i. So, for 1 ≤ i ≤ 6, Ri = {si }. Similarly, Gj =
{sj+1 , . . . , s6 }.
(a) Since G1 = {s2 , s3 , s4 , s5 , s6 } and all outcomes have probability 1/6, P [G1 ] = 5/6. The event
R3 G1 = {s3 } and P [R3 G1 ] = 1/6 so that
P [R3 |G1 ] = P [R3 G1 ] / P [G1 ] = (1/6)/(5/6) = 1/5. (1)
(b) The conditional probability that 6 is rolled given that the roll is greater than 3 is
P [R6 |G3 ] = P [R6 G3 ] / P [G3 ] = P [s6 ] / P [s4 , s5 , s6 ] = (1/6)/(3/6) = 1/3. (2)
(c) The event E that the roll is even is E = {s2 , s4 , s6 } and has probability 3/6. The joint
probability of G3 and E is
P [G3 E] = P [s4 , s6 ] = 1/3. (3)
The conditional probability of G3 given E is
P [G3 |E] = P [G3 E] / P [E] = (1/3)/(1/2) = 2/3. (4)
(d) The conditional probability that the roll is even given that it’s greater than 3 is
P [E|G3 ] = P [EG3 ] / P [G3 ] = (1/3)/(1/2) = 2/3. (5)
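As a sanity check (our addition, in the style of the Matlab problems of Section 1.11), a short simulation can estimate the last conditional probability:

n=10000;                      % number of simulated rolls
roll=ceil(6*rand(n,1));       % fair six-sided die
G3=(roll>3);                  % roll greater than 3
E=(mod(roll,2)==0);           % roll is even
sum(E&G3)/sum(G3)             % relative frequency estimate of P[E|G3], near 2/3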
Problem 1.5.5 Solution
The sample outcomes can be written ijk where the first card drawn is i, the second is j and the
third is k. The sample space is
S = {234, 243, 324, 342, 423, 432} . (1)
Each of the six outcomes has probability 1/6. The events E1 , E2 , E3 , O1 , O2 , O3 are
E1 = {234, 243, 423, 432} , O1 = {324, 342} , (2)
E2 = {243, 324, 342, 423} , O2 = {234, 432} , (3)
E3 = {234, 324, 342, 432} , O3 = {243, 423} . (4)
(a) The conditional probability the second card is even given that the first card is even is
P [E2 |E1 ] = P [E2 E1 ] / P [E1 ] = P [243, 423] / P [234, 243, 423, 432] = (2/6)/(4/6) = 1/2. (5)
(b) The conditional probability the first card is even given that the second card is even is
P [E1 |E2 ] = P [E1 E2 ] / P [E2 ] = P [243, 423] / P [243, 324, 342, 423] = (2/6)/(4/6) = 1/2. (6)
(c) The probability the first two cards are even given the third card is even is
P [E1 E2 |E3 ] = P [E1 E2 E3 ] / P [E3 ] = 0. (7)
(d) The conditional probability that the second card is even given that the first card is odd is
P [E2 |O1 ] = P [O1 E2 ] / P [O1 ] = P [O1 ] / P [O1 ] = 1. (8)
(e) The conditional probability the second card is odd given that the first card is odd is
P [O2 |O1 ] = P [O1 O2 ] / P [O1 ] = 0. (9)
(b) The conditional probability that a tick has HGE given that it has Lyme disease is
P [H|L] = P [LH] / P [L] = 0.0236/0.16 = 0.1475. (5)
• P [A] = 1 implying A = B = S.
• P [A] = 0 implying A = B = φ.
In the Venn diagram (omitted), assume the sample space has area 1 corresponding to probability 1. As drawn, both A and B have area 1/4 so that P [A] = P [B] = 1/4. Moreover, the intersection AB has area 1/16 and covers 1/4 of A and 1/4 of B. That is, A and B are independent since
P [AB] = P [A] P [B] . (1)
(c) Since C and D are independent,
P [C ∩ D] = P [C] P [D] = 15/64. (3)
The next few items are a little trickier. From Venn diagrams, we see
P [C ∩ Dc ] = P [C] − P [C ∩ D] = 5/8 − 15/64 = 25/64. (4)
It follows that
P [C ∪ Dc ] = P [C] + P [Dc ] − P [C ∩ Dc ] (5)
= 5/8 + (1 − 3/8) − 25/64 = 55/64. (6)
Using DeMorgan’s law, we have
P [C c ∩ Dc ] = P [(C ∪ D)c ] = 1 − P [C ∪ D] = 15/64. (7)
Problem 1.6.5 Solution
For a sample space S = {1, 2, 3, 4} with equiprobable outcomes, consider the events
Each event Ai has probability 1/2. Moreover, each pair of events is independent since
A plant has yellow seeds, that is event Y occurs, if a plant has at least one dominant y gene. Except
for the four outcomes with a pair of recessive g genes, the remaining 12 outcomes have yellow seeds.
From the above, we see that
P [Y ] = 12/16 = 3/4 (2)
and
P [R] = 12/16 = 3/4. (3)
To find the conditional probabilities P [R|Y ] and P [Y |R], we first must find P [RY ]. Note that
RY , the event that a plant has rounded yellow seeds, is the set of outcomes
RY = {rryy, rryg, rrgy, rwyy, rwyg, rwgy, wryy, wryg, wrgy} . (4)
Problem 1.6.7 Solution
(a) For any events A and B, we can write the law of total probability in the form of
P [A] = P [AB] + P [AB c ] . (1)
Since A and B are independent, P [AB] = P [A] P [B], so that
P [AB c ] = P [A] − P [A] P [B] = P [A] (1 − P [B]) = P [A] P [B c ] . (2)
Thus A and B c are independent.
(b) Proving that Ac and B are independent is not really necessary. Since A and B are arbitrary
labels, it is really the same claim as in part (a). That is, simply reversing the labels of A and
B proves the claim. Alternatively, one can construct exactly the same proof as in part (a)
with the labels A and B reversed.
(c) To prove that Ac and B c are independent, we apply the result of part (a) to the sets A and
B c . Since we know from part (a) that A and B c are independent, part (b) says that Ac and
B c are independent.
In the Venn diagram (omitted; regions A, B, C with overlaps AB, AC, BC, and ABC), assume the sample space has area 1 corresponding to probability 1. As drawn, A, B, and C each have area 1/2 and thus probability 1/2. Moreover, the three-way intersection ABC has probability 1/8. Thus A, B, and C are mutually independent since
P [ABC] = P [A] P [B] P [C] . (1)
In the second Venn diagram (omitted; regions A, B, C with pairwise overlaps AB, AC, BC and no three-way overlap), assume the sample space has area 1 corresponding to probability 1. As drawn, A, B, and C each have area 1/3 and thus probability 1/3. The three-way intersection ABC has zero probability, implying A, B, and C are not mutually independent since
P [ABC] = 0 ≠ P [A] P [B] P [C] . (1)
However, AB, BC, and AC each has area 1/9. As a result, each pair of events is independent since
P [AB] = P [A] P [B] , P [BC] = P [B] P [C] , P [AC] = P [A] P [C] . (2)
Problem 1.7.1 Solution
A sequential sample space for this experiment is
This implies
P [H1 |H2 ] = P [H1 H2 ] / P [H2 ] = (1/16)/(1/4) = 1/4. (2)
(b) The probability that the first flip is heads and the second flip is tails is P [H1 T2 ] = 3/16.
The conditional probability that the first light was green given the second light was green is
P [G1 |G2 ] = P [G1 G2 ] / P [G2 ] = (3/8)/(1/2) = 3/4.
Finally, from the tree diagram, we can directly read that P [G2 |G1 ] = 3/4.
[Tree diagram omitted: P [G1 ] = P [B1 ] = 1/2; P [G2 |G1 ] = 3/4, P [B2 |G1 ] = 1/4; P [G2 |B1 ] = 1/4, P [B2 |B1 ] = 3/4. The outcomes and their probabilities are G1 G2 (3/8), G1 B2 (1/8), B1 G2 (1/8), and B1 B2 (3/8).]
The game goes into overtime if exactly one free throw is made. This event has probability
P [O] = P [G1 B2 ] + P [B1 G2 ] = 1/8 + 1/8 = 1/4. (1)
Problem 1.7.6 Solution
Let Ai and Di indicate whether the ith photodetector is acceptable or defective.
(a) We wish to find the probability P [E1 ] that exactly one photodetector is acceptable. From
the tree, we have
(b) The probability that both photodetectors are defective is P [D1 D2 ] = 6/25.
The probability of H1 is
Similarly,
Thus P [H1 H2 ] ≠ P [H1 ] P [H2 ], implying H1 and H2 are not independent. This result should not
be surprising since if the first flip is heads, it is likely that coin B was picked first. In this case, the
second flip is less likely to be heads since it becomes more likely that the second coin flipped was
coin A.
Problem 1.7.8 Solution
(a) The primary difficulty in this problem is translating the words into the correct tree diagram.
The tree for this problem is shown below.
Problem 1.7.9 Solution
(a) We wish to find the probability that we find no good photodiodes in n pairs of diodes.
Testing each pair of diodes is an independent trial such that with probability p, both diodes
of a pair are bad. From Problem 1.7.6, we can easily calculate p:
p = P [both diodes of a pair are defective] = P [D1 D2 ] = 6/25. (1)
The probability of Zn , the event of zero acceptable diodes out of n pairs of diodes, is p^n because on each test of a pair of diodes, both must be defective:
P [Zn ] = ∏_{i=1}^n p = p^n = (6/25)^n . (2)
(b) Another way to phrase this question is to ask how many pairs must we test until P [Zn ] ≤ 0.01.
Since P [Zn ] = (6/25)^n , we require
(6/25)^n ≤ 0.01 ⇒ n ≥ ln 0.01 / ln(6/25) = 3.23. (3)
Since n must be an integer, we need to test at least n = 4 pairs of diodes.
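The integer requirement is a one-line check in Matlab (our addition, assuming the model above):

n=ceil(log(0.01)/log(6/25))   % smallest integer n with (6/25)^n <= 0.01; yields n=4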
From the tree, P [C1 ] = p and P [C2 ] = (1 − p)p. Finally, a fish is caught on the nth cast if no fish were caught on the previous n − 1 casts. Thus, P [Cn ] = (1 − p)^{n−1} p.
Problem 1.8.3 Solution
(a) The experiment of picking two cards and recording them in the order in which they were
selected can be modeled by two sub-experiments. The first is to pick the first card and
record it, the second sub-experiment is to pick the second card without replacing the first
and recording it. For the first sub-experiment we can have any one of the possible 52 cards
for a total of 52 possibilities. The second sub-experiment consists of all the cards minus the one that was picked first (because we are sampling without replacement), for a total of 51 possible
outcomes. So the total number of outcomes is the product of the number of outcomes for
each sub-experiment.
52 · 51 = 2652 outcomes. (1)
(b) To have the same card but different suit we can make the following sub-experiments. First
we need to pick one of the 52 cards. Then we need to pick one of the 3 remaining cards that
are of the same type but different suit out of the remaining 51 cards. So the total number
outcomes is
52 · 3 = 156 outcomes. (2)
(c) The probability that the two cards are of the same type but different suit is the number of
outcomes that are of the same type but different suit divided by the total number of outcomes
involved in picking two cards at random from a deck of 52 cards.
156 1
P [same type, different suit] = = . (3)
2652 17
(d) Now we are not concerned with the ordering of the cards. So before, the outcomes (K♥, 8♦)
and (8♦, K♥) were distinct. Now, those two outcomes are not distinct and are only considered
to be the single outcome that a King of hearts and 8 of diamonds were selected. So every
pair of outcomes before collapses to a single outcome when we disregard ordering. So we can
redo parts (a) and (b) above by halving the corresponding values found in parts (a) and (b).
The probability however, does not change because both the numerator and the denominator
have been reduced by an equal factor of 2, which does not change their ratio.
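A Monte Carlo sketch in the style of the Section 1.11 problems (our addition, not part of the original solution; the card encoding is ours) can confirm the 1/17 answer:

n=10000; count=0;
for i=1:n,
  d=randperm(52);                        % shuffle a 52 card deck
  r1=mod(d(1)-1,13); r2=mod(d(2)-1,13);  % ranks of the first two cards
  count=count+(r1==r2);                  % equal ranks of distinct cards imply different suits
end
count/n                                  % should be near 1/17 = 0.0588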
3. Of the remaining 14 field players, choose 8 for the remaining field positions. There are N3 = (14 choose 8) ways to do this.
4. For the 9 batters (consisting of the 8 field players and the designated hitter), choose a batting
lineup. There are N4 = 9! ways to do this.
So the total number of different starting lineups when the DH is selected among the field players is
N = N1 N2 N3 N4 = (10)(15)(14 choose 8) 9! = 163,459,296,000. (1)
Note that this overestimates the number of combinations the manager must really consider because
most field players can play only one or two positions. Although these constraints on the manager reduce the number of possible lineups, they typically make the manager's job more difficult. As
for the counting, we note that our count did not need to specify the positions played by the field
players. Although this is an important consideration for the manager, it is not part of our counting
of different lineups. In fact, the 8 nonpitching field players are allowed to switch positions at any
time in the field. For example, the shortstop and second baseman could trade positions in the
middle of an inning. Although the DH can go play the field, there are some complicated rules about this. Here is an excerpt from Major League Baseball Rule 6.10:
The Designated Hitter may be used defensively, continuing to bat in the same posi-
tion in the batting order, but the pitcher must then bat in the place of the substituted
defensive player, unless more than one substitution is made, and the manager then must
designate their spots in the batting order.
If you’re curious, you can find the complete rule on the web.
So the total number of different starting lineups when the DH is selected among the field players is
N = N1 N2 N3 N4 = (10)(15)(14 choose 8) 9! = 163,459,296,000. (1)
• The DH is a pitcher. In this case, there are 10 choices for the pitcher, 10 choices for the DH among the pitchers (including the pitcher batting for himself), (15 choose 8) choices for the field players, and 9! ways of ordering the batters into a lineup. The number of possible lineups is
N = (10)(10)(15 choose 8) 9! = 233,513,280,000. (2)
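Both counts are easy to verify in Matlab with nchoosek and factorial (our addition, not part of the original solution):

N1=10*15*nchoosek(14,8)*factorial(9)   % DH a field player: 163,459,296,000
N2=10*10*nchoosek(15,8)*factorial(9)   % DH a pitcher: 233,513,280,000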
Problem 1.8.6 Solution
(a) We can find the number of valid starting lineups by noticing that the swingman presents
three situations: (1) the swingman plays guard, (2) the swingman plays forward, and (3) the
swingman doesn’t play. The first situation is when the swingman can be chosen to play the
guard position, and the second where the swingman can only be chosen to play the forward
position. Let Ni denote the number of lineups corresponding to case i. Then we can write
the total number of lineups as N1 + N2 + N3 . In the first situation, we have to choose 1 out
of 3 centers, 2 out of 4 forwards, and 1 out of 4 guards so that
N1 = (3 choose 1)(4 choose 2)(4 choose 1) = 72. (1)
In the second case, we need to choose 1 out of 3 centers, 1 out of 4 forwards and 2 out of 4
guards, yielding
N2 = (3 choose 1)(4 choose 1)(4 choose 2) = 72. (2)
Finally, with the swingman on the bench, we choose 1 out of 3 centers, 2 out of 4 forwards, and 2 out of 4 guards. This implies
N3 = (3 choose 1)(4 choose 2)(4 choose 2) = 108, (3)
so that the total number of lineups is N1 + N2 + N3 = 252.
n    9        11       14       17
k    0        1        2        3          (2)
p    0.0079   0.012    0.0105   0.0090
(a) Since the probability of a zero is 0.8, we can express the probability of the code word 00111 as two occurrences of a 0 and three occurrences of a 1. Therefore
P [00111] = (0.8)^2 (0.2)^3 = 0.00512. (1)
(b) The probability that a code word has exactly three 1’s is
P [three 1’s] = (5 choose 3)(0.8)^2 (0.2)^3 = 0.0512. (2)
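In Matlab, this binomial probability is one line (our addition, not part of the original solution):

nchoosek(5,3)*(0.8)^2*(0.2)^3   % probability of exactly three 1's; 0.0512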
The probability of each of these events is less than 1 in 1000! Given that these events took place in the relatively short fifty-year history of the NBA, the model's probabilities seem much too low. What the model overlooks is that the sequence of 10 titles in 11 years started
when Bill Russell joined the Celtics. In the years with Russell (and a strong supporting cast) the
probability of a championship was much higher.
The probability that the number of green lights equals the number of red lights is
P [G = R] = P [G = 1, R = 1, Y = 3] + P [G = 2, R = 2, Y = 1] + P [G = 0, R = 0, Y = 5] (2)
= (5!/(1!1!3!))(7/16)(7/16)(1/8)^3 + (5!/(2!2!1!))(7/16)^2 (7/16)^2 (1/8) + (5!/(0!0!5!))(1/8)^5 (3)
≈ 0.1449. (4)
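The three multinomial terms are easy to evaluate in Matlab (our addition, not part of the original solution):

t1=factorial(5)/(factorial(1)*factorial(1)*factorial(3))*(7/16)*(7/16)*(1/8)^3;
t2=factorial(5)/(factorial(2)*factorial(2)*factorial(1))*(7/16)^2*(7/16)^2*(1/8);
t3=factorial(5)/(factorial(0)*factorial(0)*factorial(5))*(1/8)^5;
t1+t2+t3                        % P[G=R], approximately 0.1449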
[Tree diagram omitted: P [W1 ] = p, P [L1 ] = 1 − p; P [W2 |W1 ] = 1 − p, P [L2 |W1 ] = p; P [W3 |W1 L2 ] = p, P [L3 |W1 L2 ] = 1 − p; P [W2 |L1 ] = 1 − p, P [L2 |L1 ] = p; P [W3 |L1 W2 ] = p, P [L3 |L1 W2 ] = 1 − p. The outcomes and their probabilities are W1 W2 (p(1 − p)), W1 L2 W3 (p^3 ), W1 L2 L3 (p^2 (1 − p)), L1 W2 W3 (p(1 − p)^2 ), L1 W2 L3 ((1 − p)^3 ), and L1 L2 (p(1 − p)).]
The probability that the team with the home court advantage wins is
P [H] = P [W1 W2 ] + P [W1 L2 W3 ] + P [L1 W2 W3 ] (1)
= p(1 − p) + p^3 + p(1 − p)^2 . (2)
Note that P [H] ≤ p for 1/2 ≤ p ≤ 1, since P [H] − p = p(2p − 1)(p − 1) ≤ 0 on that range. Since the team with the home court advantage would win a 1 game playoff with probability p, the home court team is less likely to win a three game series than a 1 game playoff!
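A quick Matlab check of this inequality (our addition, not part of the original solution):

p=0.5:0.01:1;                       % home game win probabilities
PH=p.*(1-p)+p.^3+p.*(1-p).^2;       % P[H] for each p
max(PH-p)                           % equals 0 (at p=1/2 and p=1); P[H]<=p throughout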
(a) There are 3 group 1 kickers and 6 group 2 kickers. Using Gi to denote that a group i kicker
was chosen, we have
P [G1 ] = 1/3 P [G2 ] = 2/3. (1)
In addition, the problem statement tells us that
P [K|G1 ] = 1/2 P [K|G2 ] = 1/3. (2)
Combining these facts using the Law of Total Probability yields
P [K] = P [K|G1 ] P [G1 ] + P [K|G2 ] P [G2 ] (3)
= (1/2)(1/3) + (1/3)(2/3) = 7/18. (4)
(b) To solve this part, we need to identify the groups from which the first and second kicker were
chosen. Let ci indicate whether a kicker was chosen from group i and let Cij indicate that
the first kicker was chosen from group i and the second kicker from group j. The experiment
to choose the kickers is described by the sample tree:
Since a kicker from group 1 makes a kick with probability 1/2 while a kicker from group 2
makes a kick with probability 1/3,
P [K1 K2 |C11 ] = (1/2)^2 , P [K1 K2 |C12 ] = (1/2)(1/3), (5)
P [K1 K2 |C21 ] = (1/3)(1/2), P [K1 K2 |C22 ] = (1/3)^2 . (6)
By the law of total probability,
Note that 15/96 and (7/18)2 are close but not exactly the same. The reason K1 and K2 are
dependent is that if the first kicker is successful, then it is more likely that kicker is from
group 1. This makes it more likely that the second kicker is from group 2 and is thus more
likely to miss.
(c) Once a kicker is chosen, each of the 10 field goals is an independent trial. If the kicker is
from group 1, then the success probability is 1/2. If the kicker is from group 2, the success
probability is 1/3. Out of 10 kicks, there are 5 misses iff there are 5 successful kicks. Given
the type of kicker chosen, the probability of 5 misses is
P [M |G1 ] = (10 choose 5)(1/2)^5 (1/2)^5 , P [M |G2 ] = (10 choose 5)(1/3)^5 (2/3)^5 . (15)
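Since P [G1 ] = 1/3 and P [G2 ] = 2/3, the law of total probability gives P [M ]; a short Matlab check (our addition, not part of the original solution):

PM1=nchoosek(10,5)*(1/2)^5*(1/2)^5;   % P[M|G1], about 0.2461
PM2=nchoosek(10,5)*(1/3)^5*(2/3)^5;   % P[M|G2], about 0.1366
PM=(1/3)*PM1+(2/3)*PM2                % P[M], about 0.1731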
[Diagram omitted: components W1 , W2 , and W3 in series, in parallel with W4 ; this combination in series with the parallel pair W5 , W6 .]
To find the probability that the device works, we replace series devices 1, 2, and 3, and parallel
devices 5 and 6 each with a single device labeled with the probability that it works. In particular,
the series combination of devices 1, 2, and 3 works with probability (1 − q)^3 , device 4 works with probability 1 − q, and the parallel combination of devices 5 and 6 works with probability 1 − q^2 . The reduced circuit is then the parallel pair consisting of device 4 and the series combination, followed in series by the combined device for 5 and 6. The probability that the parallel pair works is 1 minus the probability that neither works:
1 − q(1 − (1 − q)^3 ). (3)
Finally, for the device to work, both composite devices in series must work. Thus, the probability the device works is
P [W ] = [1 − q(1 − (1 − q)^3 )][1 − q^2 ]. (4)
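Evaluating equation (4) numerically in Matlab (our addition) matches the value 0.8663 cited later for q = 0.2:

q=0.2;
PW=(1-q*(1-(1-q)^3))*(1-q^2)   % probability the device works; 0.8663 for q=0.2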
Note that if a 0 is transmitted, then 0 is sent five times and we call decoding a 0 a success. You should convince yourself that this is a symmetric situation with the same deletion and error probabilities. Introducing deletions reduces the probability of an error by roughly a factor of 20. However, the probability of successful decoding is also reduced.
Problem 1.10.3 Solution
Note that each digit 0 through 9 is mapped to the 4 bit binary representation of the digit. That is,
0 corresponds to 0000, 1 to 0001, up to 9 which corresponds to 1001. Of course, the 4 bit binary
numbers corresponding to numbers 10 through 15 go unused; however, this is unimportant to our problem. The 10 digit number results in the transmission of 40 bits. For each bit, an independent
trial determines whether the bit was correct, a deletion, or an error. In Problem 1.10.2, we found
the probabilities of these events to be
P [C] = γ = 0.91854, P [D] = δ = 0.081, P [E] = ε = 0.00046. (1)
Since each of the 40 bit transmissions is an independent trial, the joint probability of c correct bits, d deletions, and e errors has the multinomial probability
P [C = c, D = d, E = e] = (40!/(c!d!e!)) γ^c δ^d ε^e if c + d + e = 40 and c, d, e ≥ 0, and 0 otherwise. (2)
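For a numerical check, the multinomial probability in (2) can be coded directly; the function below is our own sketch (the name multinom40 is ours, with the constants from (1)):

function p=multinom40(c,d,e)
%Sketch: P[C=c,D=d,E=e] for 40 bits, with gamma,
%delta, epsilon taken from equation (1)
g=0.91854; dl=0.081; ep=0.00046;
if (c+d+e==40) && (c>=0) && (d>=0) && (e>=0)
  p=nchoosek(40,c)*nchoosek(40-c,d)*g^c*dl^d*ep^e;  % 40!/(c!d!e!) = C(40,c)C(40-c,d)
else
  p=0;
end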
[Diagram repeated: components W1 , W2 , and W3 in series, in parallel with W4 ; this combination in series with the parallel pair W5 , W6 .]
By symmetry, note that the reliability of the system is the same whether we replace component 1,
component 2, or component 3. Similarly, the reliability is the same whether we replace component
5 or component 6. Thus we consider the following cases:
I. Replace component 1. In this case,
P [W1 W2 W3 ] = (1 − q/2)(1 − q)^2 , P [W4 ] = 1 − q, P [W5 ∪ W6 ] = 1 − q^2 . (1)
This implies
P [W1 W2 W3 ∪ W4 ] = 1 − (1 − P [W1 W2 W3 ])(1 − P [W4 ]) = 1 − (q^2 /2)(5 − 4q + q^2 ). (2)
In this case, the probability the system works is
P [WI ] = P [W1 W2 W3 ∪ W4 ] P [W5 ∪ W6 ] = [1 − (q^2 /2)(5 − 4q + q^2 )](1 − q^2 ). (3)
III. Replace component 5. In this case,
P [W1 W2 W3 ] = (1 − q)^3 , P [W4 ] = 1 − q, P [W5 ∪ W6 ] = 1 − q^2 /2. (7)
This implies
P [W1 W2 W3 ∪ W4 ] = 1 − (1 − P [W1 W2 W3 ])(1 − P [W4 ]) = (1 − q)[1 + q(1 − q)^2 ]. (8)
From these expressions, it's hard to tell which substitution creates the most reliable circuit. First, we observe that P [WII ] > P [WI ] if and only if
1 − q/2 + (q/2)(1 − q)^3 > 1 − (q^2 /2)(5 − 4q + q^2 ). (11)
Some algebra will show that P [WII ] > P [WI ] if and only if q < 2, which holds for all nontrivial (i.e., nonzero) values of q. Similar algebra will show that P [WII ] > P [WIII ] for all values of 0 ≤ q ≤ 1. Thus the best policy is to replace component 4.
Keep in mind that 50*rand(200,1) produces a 200 × 1 vector of random numbers, each in the interval (0, 50). Applying the ceiling function converts these random numbers to random integers in the set {1, 2, . . . , 50}. Finally, we add 50 to produce random integers between 51 and 100.
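The line being described is presumably of the following form (a sketch of the described computation):

x=50+ceil(50*rand(200,1));   % 200 random integers, each in {51,...,100}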
function [C,H]=twocoin(n);
C=ceil(2*rand(n,1));
P=1-(C/4);
H=(rand(n,1)< P);
The first line produces the n × 1 vector C such that C(i) indicates whether coin 1 or coin 2 is chosen
for trial i. Next, we generate the vector P such that P(i)=0.75 if C(i)=1; otherwise, if C(i)=2,
then P(i)=0.5. As a result, H(i) is the simulated result of a coin flip with heads, corresponding
to H(i)=1, occurring with probability P(i).
function C=bit100(n);
% n is the number of 100 bit packets sent
B=floor(2*rand(n,100));
P=0.03-0.02*B;
E=(rand(n,100)< P);
C=sum((sum(E,2)<=5));
First, B is an n × 100 matrix such that B(i,j) indicates whether bit j of packet i is zero or one. Next, we generate the n×100 matrix P such that P(i,j)=0.03 if B(i,j)=0; otherwise, if B(i,j)=1, then P(i,j)=0.01. As a result, E(i,j) is the simulated error indicator for bit j of packet i. That is, E(i,j)=1 if bit j of packet i is in error; otherwise E(i,j)=0. Next we sum across the rows of
E to obtain the number of errors in each packet. Finally, we count the number of packets with 5 or
more errors.
For n = 100 packets, the estimate of the packet success probability is inconclusive. Experimentation will show that C=97, C=98, C=99 and C=100 correct packets are typical values that might be observed. By
increasing n, more consistent results are obtained. For example, repeated trials with n = 100, 000
packets typically produces around C = 98, 400 correct packets. Thus 0.984 is a reasonable estimate
for the probability of a packet being transmitted correctly.
function N=reliable6(n,q);
% n is the number of 6 component devices
%N is the number of working devices
W=rand(n,6)>q;
D=(W(:,1)&W(:,2)&W(:,3))|W(:,4);
D=D&(W(:,5)|W(:,6));
N=sum(D);
The n×6 matrix W is a logical matrix such that W(i,j)=1 if component j of device i works properly.
Because W is a logical matrix, we can use the Matlab logical operators | and & to implement the
logic requirements for a working device. By applying these logical operators to the n × 1 columns
of W, we simulate the test of n circuits. Note that D(i)=1 if device i works. Otherwise, D(i)=0.
Lastly, we count the number N of working devices. The following code snippet produces ten sample
runs, where each sample run tests n=100 devices for q = 0.2.
>> for n=1:10, w(n)=reliable6(100,0.2); end
>> w
w =
82 87 87 92 91 85 85 83 90 89
>>
As we see, the number of working devices is typically around 85 out of 100. Solving Problem 1.10.1 will show that the probability the device works is actually 0.8663.
function n=countequal(x,y)
%Usage: n=countequal(x,y)
%n(j)= # elements of x = y(j)
[MX,MY]=ndgrid(x,y);
%each column of MX = x
%each row of MY = y
n=(sum((MX==MY),1))’;
The code for countequal is quite short (just two lines excluding comments) but needs some explanation.
The key is in the operation
[MX,MY]=ndgrid(x,y).
The Matlab built-in function ndgrid facilitates plotting a function g(x, y) as a surface over the
x, y plane. The x, y plane is represented by a grid of all pairs of points x(i), y(j). When x has n
elements, and y has m elements, ndgrid(x,y) creates a grid (an n × m array) of all possible pairs
[x(i) y(j)]. This grid is represented by two separate n × m matrices: MX and MY which indicate
the x and y values at each grid point. Mathematically,
MX(i,j) = x(i) and MY(i,j) = y(j).
Next, C=(MX==MY) is an n × m array such that C(i,j)=1 if x(i)=y(j); otherwise C(i,j)=0. That is, the jth column of C indicates which elements of x equal y(j). Lastly, we sum along column j to count the number of occurrences (in x) of y(j).
function N=ultrareliable6(n,q);
% n is the number of 6 component devices
%N is the number of working devices
for r=1:6,
W=rand(n,6)>q;
R=rand(n,1)>(q/2);
W(:,r)=R;
D=(W(:,1)&W(:,2)&W(:,3))|W(:,4);
D=D&(W(:,5)|W(:,6));
N(r)=sum(D);
end
The above code is based on the code for the solution of Problem 1.11.4. The n × 6 matrix W is a logical matrix such that W(i,j)=1 if component j of device i works properly. Because W is a logical matrix, we can use the Matlab logical operators | and & to implement the logic requirements for a working device. By applying these logical operators to the n × 1 columns of W, we simulate the test of n circuits. Note that D(i)=1 if device i works; otherwise, D(i)=0. In the code, we first generate the matrix W such that each component has failure probability q. To simulate the replacement of the rth component by the ultrareliable version, we replace the rth column of W by the column vector R, in which a component has failure probability q/2. Lastly, for each column replacement, we count the number N(r) of working devices. A sample run for n = 100 trials and q = 0.2 yielded these results:
>> ultrareliable6(100,0.2)
ans =
93 89 91 92 90 93
From the above, we see, for example, that replacing the third component with an ultrareliable
component resulted in 91 working devices. The results are fairly inconclusive in that replacing
devices 1, 2, or 3 should yield the same probability of device failure. If we experiment with
n = 10, 000 runs, the results are more definitive:
>> ultrareliable6(10000,0.2)
ans =
8738 8762 8806 9135 8800 8796
>> ultrareliable6(10000,0.2)
ans =
8771 8795 8806 9178 8886 8875
>>
In both cases, it is clear that replacing component 4 maximizes the device reliability. The somewhat
complicated solution of Problem 1.10.4 will confirm this observation.