EEC 161 Ch02 (Annotated)

This document discusses sequential experiments and tree diagrams. It provides examples of using tree diagrams to represent sequential experiments and calculate probabilities of outcomes. In Example 2.1, a tree diagram is used to find the probability of choosing a resistor from machine B2 that is not acceptable. In Example 2.2, a tree is used to calculate probabilities related to traffic light timing. Example 2.3 uses a tree to find conditional probabilities given outcomes of flipping a coin.


Chapter 2

Sequential Experiments
Section 2.1

Tree Diagrams
Many experiments consist of a sequence of subexperiments. The procedure
followed for each subexperiment may depend on the results of previous
subexperiments. A tree diagram is useful for representing such a situation.
Example 2.1 Problem

For the resistors of Example 1.19, we used A to denote the event that
a randomly chosen resistor is “within 50 Ω of the nominal value.” This
could mean “acceptable.” We use the notation N (“not acceptable”) for
the complement of A. The experiment of testing a resistor can be viewed
as a two-step procedure. First we identify which machine (B1, B2, or B3)
produced the resistor. Second, we find out if the resistor is acceptable.
Draw a tree for this sequential experiment. What is the probability of
choosing a resistor from machine B2 that is not acceptable?
Example 2.1 Solution

[Tree: first stage B1 (0.3), B2 (0.4), B3 (0.3); second stage A or N with
P[A|B1] = 0.8, P[N|B1] = 0.2, P[A|B2] = 0.9, P[N|B2] = 0.1, P[A|B3] = 0.6,
P[N|B3] = 0.4. Leaf probabilities: P[B1A] = 0.24, P[B1N] = 0.06,
P[B2A] = 0.36, P[B2N] = 0.04, P[B3A] = 0.18, P[B3N] = 0.12.]

This two-step procedure is shown in the tree above. To use the tree to find
the probability of the event B2N, a nonacceptable resistor from machine B2,
we start at the left and find that the probability of reaching B2 is
P[B2] = 0.4. We then move to the right to B2N and multiply P[B2] by
P[N|B2] = 0.1 to obtain P[B2N] = (0.4)(0.1) = 0.04.

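As a quick numeric check (an addition, not part of the original example), the leaf
probabilities can be obtained in Matlab by multiplying along the branches of the
tree; the variable names below are ours:

PB = [0.3 0.4 0.3];      % P[B1], P[B2], P[B3]
PN_B = [0.2 0.1 0.4];    % P[N|B1], P[N|B2], P[N|B3]
leafN = PB.*PN_B;        % P[B1N] P[B2N] P[B3N] = 0.06 0.04 0.12
PB2N = leafN(2)          % 0.04, matching the example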
Example 2.2 Problem

Traffic engineers have coordinated the timing of two traffic lights to


encourage a run of green lights. In particular, the timing was designed
so that with probability 0.8 a driver will find the second light to have the
same color as the first. Assuming the first light is equally likely to be red
or green, what is the probability P[G2] that the second light is green?
Also, what is P[W ], the probability that you wait for at least one of the
first two lights? Lastly, what is P[G1|R2], the conditional probability of a
green first light given a red second light?

Example 2.2 Solution

[Tree: first light G1 (0.5) or R1 (0.5); the second light matches the first with
probability 0.8, so P[G2|G1] = 0.8, P[R2|G1] = 0.2, P[G2|R1] = 0.2, P[R2|R1] = 0.8.
Leaf probabilities: P[G1G2] = 0.4, P[G1R2] = 0.1, P[R1G2] = 0.1, P[R1R2] = 0.4.]

The tree for the two-light experiment is shown above. The probability that the
second light is green is

P[G2] = P[G1G2] + P[R1G2] = 0.4 + 0.1 = 0.5. (1)
The event W that you wait for at least one light is the event that at least
one light is red.

W = R1G2 ∪ G1R2 ∪ R1R2. (2)


The probability that you wait for at least one light is

P [W ] = P [R1G2] + P [G1R2] + P [R1R2] = 0.1 + 0.1 + 0.4 = 0.6. (3)


[Continued]
Example 2.2 Solution (Continued 2)

An alternative way to arrive at the same answer is to observe that W is also the
complement of the event that both lights are green. Thus,

P[W] = P[(G1G2)^c] = 1 − P[G1G2] = 0.6. (1)


To find P[G1|R2], we need P[R2] = 1 − P[G2] = 0.5. Since P[G1R2] = 0.1,
the conditional probability that you have a green first light given a red
second light is

P[G1|R2] = P[G1R2] / P[R2] = 0.1 / 0.5 = 0.2. (2)
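The same bookkeeping can be done numerically. The following Matlab sketch (an
addition, not part of the text) recomputes P[G2], P[W], and P[G1|R2] from the four
leaf probabilities:

leaf = [0.4 0.1 0.1 0.4];              % P[G1G2] P[G1R2] P[R1G2] P[R1R2]
PG2 = leaf(1) + leaf(3)                % 0.5
PW = 1 - leaf(1)                       % 0.6, complement of "both green"
PG1_R2 = leaf(2)/(leaf(2) + leaf(4))   % 0.1/0.5 = 0.2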
Example 2.3 Problem

Suppose you have two coins, one biased, one fair, but you don’t know
which coin is which. Coin 1 is biased. It comes up heads with probability
3/4, while coin 2 comes up heads with probability 1/2. Suppose you pick
a coin at random and flip it. Let Ci denote the event that coin i is picked.
Let H and T denote the possible outcomes of the flip. Given that the
outcome of the flip is a head, what is P[C1|H], the probability that you
picked the biased coin? Given that the outcome is a tail, what is the
probability P[C1|T ] that you picked the biased coin?
Example 2.3 Solution
First, we construct the sample tree:

[Tree: pick C1 or C2, each with probability 1/2; given C1, H has probability 3/4
and T has probability 1/4; given C2, H and T each have probability 1/2. Leaf
probabilities: P[C1H] = 3/8, P[C1T] = 1/8, P[C2H] = 1/4, P[C2T] = 1/4.]

To find the conditional probabilities, we see

P[C1|H] = P[C1H] / P[H] = P[C1H] / (P[C1H] + P[C2H]).

From the leaf probabilities in the sample tree,

P[C1|H] = (3/8) / (3/8 + 1/4) = 3/5.

Similarly,

P[C1|T] = P[C1T] / P[T] = P[C1T] / (P[C1T] + P[C2T]) = (1/8) / (1/8 + 1/4) = 1/3. (1)
As we would expect, we are more likely to have chosen coin 1 when the first flip is
heads, but we are more likely to have chosen coin 2 when the first flip is tails.
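A short Matlab sketch (added here, not in the original solution) reproduces these
posterior probabilities from the leaf probabilities of the tree:

PC = [1/2 1/2];               % P[C1], P[C2]
PH_C = [3/4 1/2];             % P[H|C1], P[H|C2]
leafH = PC.*PH_C;             % P[C1H] P[C2H] = 3/8 1/4
leafT = PC.*(1-PH_C);         % P[C1T] P[C2T] = 1/8 1/4
PC1_H = leafH(1)/sum(leafH)   % 3/5
PC1_T = leafT(1)/sum(leafT)   % 1/3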
Quiz 2.1

In a cellular phone system, a mobile phone must be paged to receive a


phone call. However, paging attempts don’t always succeed because the
mobile phone may not receive the paging signal clearly. Consequently,
the system will page a phone up to three times before giving up. If the
results of all paging attempts are independent and a single paging attempt
succeeds with probability 0.8, sketch a probability tree for this experiment
and find the probability P[F ] that the phone receives the paging signal
clearly.
Quiz 2.1 Solution

Let Fi denote the event that the user is found on page i. The tree
for the experiment is

[Tree: each paging attempt i finds the user (Fi) with probability 0.8 or fails
(Fi^c) with probability 0.2; a failure leads to the next attempt, up to three
attempts.]

The user is found unless all three paging attempts fail. Thus the proba-
bility the user is found is

P[F] = 1 − P[F1^c F2^c F3^c] = 1 − (0.2)^3 = 0.992. (1)

Section 2.2

Counting Methods
Theorem 2.1

An experiment consists of two subexperiments. If one subexperiment


has n outcomes and the other subexperiment has m outcomes, then the
experiment has nm outcomes.
Example 2.6

There are two subexperiments. The first subexperiment is “Flip a coin


and observe either heads H or tails T .” The second subexperiment is “Roll
a six-sided die and observe the number of spots.” It has six outcomes,
1, 2, . . . , 6. The experiment, “Flip a coin and roll a die,” has 2 × 6 = 12
outcomes:
(H, 1), (H, 2), (H, 3), (H, 4), (H, 5), (H, 6),
(T, 1), (T, 2), (T, 3), (T, 4), (T, 5), (T, 6).
Theorem 2.2

The number of k-permutations of n distinguishable objects is


(n)_k = n(n − 1)(n − 2) · · · (n − k + 1) = n! / (n − k)!.

Permutations: Order matters

Example 2.5 Problem

Choose 7 cards at random from a deck of 52 different cards. Display
the cards in the order in which you choose them. How many different
sequences of cards are possible?

Example 2.5 Solution

The procedure consists of seven subexperiments. In each subexperiment,


the observation is the identity of one card. The first subexperiment has 52
possible outcomes corresponding to the 52 cards that could be drawn. For
each outcome of the first subexperiment, the second subexperiment has
51 possible outcomes corresponding to the 51 remaining cards. Therefore
there are 52 × 51 outcomes of the first two subexperiments. The total
number of outcomes of the seven subexperiments is

52 × 51 × · · · × 46 = 674,274,182,400. (1)
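This falling-factorial product is easy to check in Matlab (a one-line sketch added
here, not part of the example):

prod(52:-1:46)    % 52*51*...*46 = 6.7427e+11 = 674,274,182,400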
Theorem 2.3

The number of ways to choose k objects out of n distinguishable objects


is
\binom{n}{k} = (n)_k / k! = n! / (k! (n − k)!).

Combination: Order does not matter

Example 2.9
• The number of combinations of seven cards chosen from a deck of 52 cards is

\binom{52}{7} = (52 × 51 × · · · × 46) / (2 × 3 × · · · × 7) = 133,784,560, (1)

which is the number of 7-combinations of 52 objects. By contrast, we found in
Example 2.5 that there are 674,274,182,400 7-permutations of 52 objects. (The ratio
is 7! = 5040.)

• There are 11 players on a basketball team. The starting lineup consists of five
players. There are \binom{11}{5} = 462 possible starting lineups.

• There are \binom{120}{60} ≈ 9.66 × 10^34 ways of dividing 120 students enrolled in a
probability course into two sections with 60 students in each section.
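The three counts above can be verified with Matlab's nchoosek function (a sketch
added here; for very large arguments nchoosek warns that the result may not be
exact):

nchoosek(52,7)      % 133784560
nchoosek(11,5)      % 462
nchoosek(120,60)    % approximately 9.66e34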
Example 2.10 Problem

There are four queens in a deck of 52 cards. You are given seven cards at
random from the deck. What is the probability that you have no queens?
Example 2.10 Solution

Consider an experiment in which the procedure is to select seven cards


at random from a set of 52 cards and the observation is to determine if
there are one or more queens in the selection. The sample space contains
H = \binom{52}{7} possible combinations of seven cards, each with probability 1/H.
There are HNQ = \binom{52 − 4}{7} = \binom{48}{7} combinations with no queens. The
probability of receiving no queens is the ratio of the number of outcomes with
no queens to the number of outcomes in the sample space: HNQ/H = 0.5504.

Another way of analyzing this experiment is to consider it as a sequence


of seven subexperiments. The first subexperiment consists of selecting
a card at random and observing whether it is a queen. If it is a queen,
an outcome with probability 4/52 (because there are 52 outcomes in the
sample space and four of them are in the event {queen}), stop looking
for queens. [Continued]
Example 2.10 Solution (Continued 2)

Otherwise, with probability 48/52, select another card from the remain-
ing 51 cards and observe whether it is a queen. This outcome of this
subexperiment has probability 4/51. If the second card is not a queen, an
outcome with probability 47/51, continue until you select a queen or you
have seven cards with no queen. Using Qi and Ni to indicate a “Queen”
or “No queen” on subexperiment i, the tree for this experiment is

[Tree: Q1 with probability 4/52, N1 with 48/52; Q2: 4/51, N2: 47/51; Q3: 4/50,
N3: 46/50; . . . ; Q7: 4/46, N7: 42/46. Each Ni leads to subexperiment i + 1.]

The probability of the event N7 that no queen is received in your seven


cards is the product of the probabilities of the branches leading to N7:

(48/52) × (47/51) × · · · × (42/46) = 0.5504. (1)
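Both arguments are easy to check numerically. The following Matlab sketch (an
addition, not part of the example) computes the probability both ways:

nchoosek(48,7)/nchoosek(52,7)      % combinatorial ratio = 0.5504
prod(48:-1:42)/prod(52:-1:46)      % product of no-queen branch probabilities = 0.5504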


Example 2.11 Problem

There are four queens in a deck of 52 cards. You are given seven cards
at random from the deck. After receiving each card you return it to the
deck and receive another card at random. Observe whether you have not
received any queens among the seven cards you were given. What is the
probability that you have received no queens?
Example 2.11 Solution

The sample space contains 52^7 outcomes. There are 48^7 outcomes with
no queens. The ratio is (48/52)^7 = 0.5710, the probability of receiv-
ing no queens. If this experiment is considered as a sequence of seven
subexperiments, the tree looks the same as the tree in Example 2.10,
except that all the horizontal branches have probability 48/52 and all the
diagonal branches have probability 4/52.
Example 2.12 Problem

A laptop computer has USB slots A and B. Each slot can be used for
connecting a memory card (m), a camera (c) or a printer (p). It is possible
to connect two memory cards, two cameras, or two printers to the laptop.
How many ways can we use the two USB slots?
Example 2.12 Solution

This example corresponds to sampling two times with replacement from


the set {m, c, p}. Let xy denote the outcome that device type x is used
in slot A and device type y is used in slot B. The possible outcomes are
S = {mm, mc, mp, cm, cc, cp, pm, pc, pp}. The sample space S contains nine
outcomes.
Theorem 2.4

Given m distinguishable objects, there are m^n ways to choose with re-


placement an ordered sample of n objects.
Example 2.13

There are 2^10 = 1024 binary sequences of length 10.


Example 2.14

The letters A through Z can produce 26^4 = 456,976 four-letter words.


Example 2.15

A chip fabrication facility produces microprocessors. Each microprocessor


is tested to determine whether it runs reliably at an acceptable clock
speed. A subexperiment to test a microprocessor has sample space Ssub =
{0, 1} to indicate whether the test was a failure (0) or a success (1). For
test i, we record xi = 0 or xi = 1 to indicate the result. In testing
four microprocessors, the observation sequence, x1x2x3x4, is one of 16
possible outcomes:
S = {0000, 0001, 0010, 0011, 0100, 0101, 0110, 0111,
     1000, 1001, 1010, 1011, 1100, 1101, 1110, 1111}.
Example 2.16

There are ten students in a probability class. Each earns a grade s ∈


Ssub = {A, B, C, F }. We use the notation xi to denote the grade of the
ith student. For example, the grades for the class could be

x1x2 · · · x10 = CBBACFBACF. (1)


The sample space S of possible sequences contains 4^10 = 1,048,576
outcomes.
Example 2.17 Problem

For five subexperiments with sample space Ssub = {0, 1}, what is the
number of observation sequences in which 0 appears n0 = 2 times and 1
appears n1 = 3 times?
Example 2.17 Solution

The 10 five-letter words with 0 appearing twice and 1 appearing three


times are:

{00111, 01011, 01101, 01110, 10011, 10101, 10110, 11001, 11010, 11100} .
Theorem 2.6

The number of observation sequences for n subexperiments with sample
space S = {0, 1}, with 0 appearing n0 times and 1 appearing n1 = n − n0
times, is \binom{n}{n1}.

Theorem 2.7

For n repetitions of a subexperiment with sample space S = {s0, . . . , s_{m−1}},
the number of length n = n0 + · · · + n_{m−1} observation sequences with si
appearing ni times is

\binom{n}{n0, . . . , n_{m−1}} = n! / (n0! n1! · · · n_{m−1}!).
Proof: Theorem 2.7
Let M = \binom{n}{n0, . . . , n_{m−1}}. Start with n empty slots and perform the following sequence of
subexperiments:

Subexperiment Procedure
0 Label n0 slots as s0 .
1 Label n1 slots as s1 .
... ...
m−1 Label the remaining n_{m−1} slots as s_{m−1}.

There are \binom{n}{n0} ways to perform subexperiment 0. After n0 slots have been labeled, there
are \binom{n − n0}{n1} ways to perform subexperiment 1. After subexperiment j − 1, n0 + · · · + n_{j−1}
slots have already been filled, leaving \binom{n − (n0 + · · · + n_{j−1})}{nj} ways to perform subexperiment j.
From the fundamental counting principle,

M = \binom{n}{n0} \binom{n − n0}{n1} \binom{n − n0 − n1}{n2} · · · \binom{n − n0 − · · · − n_{m−2}}{n_{m−1}}

  = \frac{n!}{(n − n0)! n0!} \cdot \frac{(n − n0)!}{(n − n0 − n1)! n1!} · · · \frac{(n − n0 − · · · − n_{m−2})!}{(n − n0 − · · · − n_{m−1})! n_{m−1}!}. (1)
Canceling the common factors, we obtain the formula of the theorem.
Example 2.18 Problem

In Example 2.16, the professor uses a curve in determining student grades.


When there are ten students in a probability class, the professor always
issues two grades of A, three grades of B, three grades of C and two
grades of F. How many different ways can the professor assign grades to
the ten students?
Example 2.18 Solution

In Example 2.16, we determine that with four possible grades there are
4^10 = 1,048,576 ways of assigning grades to ten students. However, now
we are limited to choosing n0 = 2 students to receive an A, n1 = 3
students to receive a B, n2 = 3 students to receive a C and n3 = 2
students to receive an F. The number of ways that fit the curve is the
multinomial coefficient

\binom{n}{n0, n1, n2, n3} = \binom{10}{2, 3, 3, 2} = 10! / (2! 3! 3! 2!) = 25,200. (1)
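The multinomial coefficient can be evaluated directly in Matlab (a quick check added
here, not part of the example):

factorial(10)/(factorial(2)*factorial(3)*factorial(3)*factorial(2))   % 25200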
Quiz 2.2

Consider a binary code with 4 bits (0 or 1) in each code word. An example


of a code word is 0110.
(a) How many different code words are there?
(b) How many code words have exactly two zeroes?
(c) How many code words begin with a zero?
(d) In a constant-ratio binary code, each code word has N bits. In every
word, M of the N bits are 1 and the other N − M bits are 0. How
many different code words are in the code with N = 8 and M = 3?
Quiz 2.2 Solution
(a) We can view choosing each bit in the code word as a subexperiment. Each subex-
periment has two possible outcomes: 0 and 1. Thus by the fundamental principle of
counting, there are 2 × 2 × 2 × 2 = 2^4 = 16 possible code words.

(b) An experiment that can yield all possible code words with two zeroes is to choose
which 2 bits (out of 4 bits) will be zero. The other two bits then must be ones. There
are \binom{4}{2} = 6 ways to do this. Hence, there are six code words with exactly two zeroes.
For this problem, it is also possible to simply enumerate the six code words:
1100, 1010, 1001,
0101, 0110, 0011.

(c) When the first bit must be a zero, then the first subexperiment of choosing the
first bit has only one outcome. For each of the next three bits, we have two choices.
In this case, there are 1 × 2 × 2 × 2 = 8 ways of choosing a code word.

(d) For the constant ratio code, we can specify a code word by choosing M of the bits
to be ones. The other N − M bits will be zeroes. The number of ways of choosing such
a code word is \binom{N}{M}. For N = 8 and M = 3, there are \binom{8}{3} = 56 code words.
Section 2.3

Independent Trials
Example 2.19 Problem

What is the probability P[E2,3] of two failures and three successes in five
independent trials with success probability p?

Example 2.19 Solution

To find P[E2,3], we observe that the outcomes with three successes in


five trials are 11100, 11010, 11001, 10110, 10101, 10011, 01110, 01101,
01011, and 00111. We note that the probability of each outcome is
a product of five probabilities, each related to one subexperiment. In
outcomes with three successes, three of the probabilities are p and the
other two are 1 − p. Therefore each outcome with three successes has
probability (1 − p)^2 p^3.

From Theorem 2.6, we know that the number of such sequences is \binom{5}{3}.
To find P[E2,3], we add up the probabilities associated with the 10 out-
comes with 3 successes, yielding

P[E2,3] = \binom{5}{3} (1 − p)^2 p^3. (1)
Theorem 2.8

The probability of n0 failures and n1 successes in n = n0 +n1 independent


trials is
P[E_{n0,n1}] = \binom{n}{n1} (1 − p)^{n − n1} p^{n1} = \binom{n}{n0} (1 − p)^{n0} p^{n − n0}.

Example 2.20 Problem

In Example 1.19, we found that a randomly tested resistor was acceptable


with probability P[A] = 0.78. If we randomly test 100 resistors, what is
the probability of Ti, the event that i resistors test acceptable?
Example 2.20 Solution

Testing each resistor is an independent trial with a success occurring


when a resistor is acceptable. Thus for 0 ≤ i ≤ 100,

P[Ti] = \binom{100}{i} (0.78)^i (1 − 0.78)^{100 − i}. (1)
We note that our intuition says that since 78% of the resistors are ac-
ceptable, then in testing 100 resistors, the number acceptable should be
near 78. However, P[T78] ≈ 0.096, which is fairly small. This shows that
although we might expect the number acceptable to be close to 78, that
does not mean that the probability of exactly 78 acceptable is high.
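For instance, the value P[T78] ≈ 0.096 quoted above can be checked with a one-line
Matlab sketch (an addition, not part of the example):

nchoosek(100,78)*(0.78)^78*(0.22)^22    % approximately 0.096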
Example 2.21 Problem

To communicate one bit of information reliably, cellular phones trans-


mit the same binary symbol five times. Thus the information “zero” is
transmitted as 00000 and “one” is 11111. The receiver detects the cor-
rect information if three or more binary symbols are received correctly.
What is the information error probability P[E], if the binary symbol error
probability is q = 0.1?
Example 2.21 Solution

In this case, we have five trials corresponding to the five times the binary
symbol is sent. On each trial, a success occurs when a binary symbol is
received correctly. The probability of a success is p = 1 − q = 0.9. The
error event E occurs when the number of successes is strictly less than
three:
P[E] = P[E0,5] + P[E1,4] + P[E2,3] (1)
     = \binom{5}{0} q^5 + \binom{5}{1} p q^4 + \binom{5}{2} p^2 q^3 = 0.00856. (2)
By increasing the number of binary symbols per information bit from 1
to 5, the cellular phone reduces the probability of error by more than one
order of magnitude, from 0.1 to 0.00856.
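The calculation in (2) is easy to reproduce with a short Matlab sketch (an addition,
not part of the example):

q = 0.1; p = 1 - q;
PE = nchoosek(5,0)*q^5 + nchoosek(5,1)*p*q^4 + nchoosek(5,2)*p^2*q^3   % 0.00856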
Theorem 2.9

A subexperiment has sample space Ssub = {s0, . . . , s_{m−1}} with P[si] = pi.
For n = n0 + · · · + n_{m−1} independent trials, the probability of ni occurrences
of si, i = 0, 1, . . . , m − 1, is

P[E_{n0,...,n_{m−1}}] = \binom{n}{n0, . . . , n_{m−1}} p0^{n0} · · · p_{m−1}^{n_{m−1}}.
Example 2.22

A packet processed by an Internet router carries either audio information


with probability 7/10, video with probability 2/10, or text with probability
1/10. Let Ea,v,t denote the event that the router processes a audio
packets, v video packets, and t text packets in a sequence of 100 packets.
In this case,
P[E_{a,v,t}] = \binom{100}{a, v, t} (7/10)^a (2/10)^v (1/10)^t (1)
Keep in mind that by the extended definition of the multinomial coef-
ficient, P[Ea,v,t] is nonzero only if a + v + t = 100 and a, v, and t are
nonnegative integers.
Example 2.23 Problem

Continuing with Example 2.16, suppose that for each student all four grades
have probability 0.25, independent of any other student. Suppose there are
100 students, what is the probability of exactly 25 students of each grade?
Example 2.23 Solution

Let E25,25,25,25 denote the event that there are exactly 25 students of


each grade. From Theorem 2.9,
P[E25,25,25,25] = \binom{100}{25, 25, 25, 25} (0.25)^100 = 0.0010. (1)

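A quick Matlab check of this multinomial probability (a sketch added here, not in
the original solution):

M = factorial(100)/factorial(25)^4;   % multinomial coefficient \binom{100}{25,25,25,25}
P = M*(0.25)^100                      % approximately 0.0010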
Quiz 2.3

Data packets containing 100 bits are transmitted over a communication


link. A transmitted bit is received in error (either a 0 sent is mistaken for
a 1, or a 1 sent is mistaken for a 0) with probability ε = 0.01, independent
of the correctness of any other bit. The packet has been coded in such a
way that if three or fewer bits are received in error, then those bits can be
corrected. If more than three bits are received in error, then the packet
is decoded with errors.
(a) Let E_{k,100−k} denote the event that a received packet has k bits in
error and 100 − k correctly decoded bits. What is P[E_{k,100−k}] for
k = 0, 1, 2, 3?
(b) Let C denote the event that a packet is decoded correctly. What is
P[C]?
Quiz 2.3 Solution
(a) In this problem, k bits received in error is the same as k failures in
100 trials. The failure probability is ε = 1 − p and the success probability
is 1 − ε = p. That is, the probability of k bits in error and 100 − k correctly
received bits is

P[E_{k,100−k}] = \binom{100}{k} ε^k (1 − ε)^{100−k}. (1)

For ε = 0.01,

P[E_{0,100}] = (1 − ε)^100 = (0.99)^100 = 0.3660. (2)
P[E_{1,99}] = 100(0.01)(0.99)^99 = 0.3700. (3)
P[E_{2,98}] = 4950(0.01)^2(0.99)^98 = 0.1849. (4)
P[E_{3,97}] = 161,700(0.01)^3(0.99)^97 = 0.0610. (5)

(b) The probability a packet is decoded correctly is just

P[C] = P[E_{0,100}] + P[E_{1,99}] + P[E_{2,98}] + P[E_{3,97}] = 0.9819. (6)
Section 2.4

Reliability Analysis
Figure 2.2

[Diagram. Components in Series (left): W1, W2, W3 connected in a single chain.
Components in Parallel (right): W1, W2, W3 connected side by side.]

Serial and parallel devices.


Example 2.24 Problem

An operation consists of two redundant parts. The first part has two
components in series (W1 and W2) and the second part has two compo-
nents in series (W3 and W4). All components succeed with probability
p = 0.9. Draw a diagram of the operation and calculate the probability
that the operation succeeds.

Figure 2.3

[Diagram. Left: W1 and W2 in series and W3 and W4 in series, with the two series
branches in parallel. Right: the equivalent components W5 and W6 in parallel.]

The operation described in Example 2.24. On the left is the original


operation. On the right is the equivalent operation with each pair of
series components replaced with an equivalent component.
Example 2.24 Solution

A diagram of the operation is shown in Figure 2.3. We can create an


equivalent component, W5, with probability of success p5 by observing
that for the combination of W1 and W2,

P[W5] = p5 = P[W1W2] = p^2 = 0.81. (1)


Similarly, the combination of W3 and W4 in series produces an equivalent
component, W6, with probability of success p6 = p5 = 0.81. The entire
operation then consists of W5 and W6 in parallel, which is also shown in
Figure 2.3. The success probability of the operation is

P[W] = 1 − (1 − p5)^2 = 0.964. (2)


We could consider the combination of W5 and W6 to be an equivalent
component W7 with success probability p7 = 0.964 and then analyze a
more complex operation that contains W7 as a component.
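The series/parallel reduction is easy to mirror numerically. A short Matlab sketch
(an addition, assuming p = 0.9 as in the example):

p = 0.9;
p5 = p^2; p6 = p^2;           % series pairs W1W2 and W3W4
PW = 1 - (1-p5)*(1-p6)        % 0.9639, the parallel combination of W5 and W6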
Quiz 2.4

A memory module consists of nine chips. The device is designed with


redundancy so that it works even if one of its chips is defective. Each chip
contains n transistors and functions properly only if all of its transistors
work. A transistor works with probability p independent of any other
transistor.
(a) What is the probability P[C] that a chip works?
(b) What is the probability P[M ] that the memory module works?
(c) If p = 0.999, what is the maximum number of transistors per chip n
that produces P[M] ≥ 0.9 (a 90% success probability for the memory
module)?
(d) If the memory module can tolerate two defective chips, what is the
maximum number of transistors per chip n that produces P[M] ≥ 0.9?
Quiz 2.4 Solution
(a) Since the chip works only if all n transistors work, the transistors in
the chip are like devices in series. The probability that a chip works
is P[C] = pn.
(b) The module works if either 8 chips work or 9 chips work. Let Ck
denote the event that exactly k chips work. Since transistor failures
are independent of each other, chip failures are also independent.
Thus each P[Ck ] has the binomial probability
P[C8] = \binom{9}{8} (P[C])^8 (1 − P[C])^{9−8} = 9 p^{8n} (1 − p^n), (1)

P[C9] = (P[C])^9 = p^{9n}. (2)
The probability a memory module works is

P[M] = P[C8] + P[C9] = p^{8n} (9 − 8 p^n). (3)


[Continued]
Quiz 2.4 Solution (Continued 2)
(c) Given that p = 0.999, we need to find the largest value of n
such that P[M] > 0.9. Although this quiz is not a Matlab quiz, this
matlab script is an easy way to calculate the largest n:
%chipsize1.m
p=0.999;
n=1:80;
PM=(p.^(8*n)).*(9-8*(p.^n));
plot(n,PM)
nmax = sum(PM>0.9)
The script includes a plot command to verify that P[M ] is a decreasing
function of n. The output is
>> chipsize1
nmax =
62
(d) Now the event C7 that seven chips work also yields an acceptable
module. Since each chip works with probability P[C] = p^n,
P[C7] = \binom{9}{7} (P[C])^7 (1 − P[C])^2 = 36 p^{7n} (1 − p^n)^2
      = 36 p^{7n} − 72 p^{8n} + 36 p^{9n}. (4)
[Continued]
Quiz 2.4 Solution (Continued 3)
The probability a memory module works is

P[M] = P[C7] + P[C8] + P[C9]

     = 36 p^{7n} − 72 p^{8n} + 36 p^{9n} + p^{8n} (9 − 8 p^n) (5)
     = 36 p^{7n} − 63 p^{8n} + 28 p^{9n}. (6)
Just as we did in the previous part, we use Matlab to find the maxi-
mum n:
%chipsize2.m
p=0.999;
n=1:150;
PM=36*(p.^(7*n))-(63*p.^(8*n))+(28*p.^(9*n));
plot(n,PM)
nmax = sum(PM>0.9)
The answer is
>> chipsize2
nmax =
138
The additional redundancy at the chip level to enable one more de-
fective chip allows us to more than double the number of transistors
per chip.
Section 2.5

Matlab
Figure 2.4

Y =
Columns 1 through 12
47 52 48 46 54 48 47 48 59 44 49 48
Columns 13 through 24
42 52 40 40 47 48 48 48 53 49 45 61
Columns 25 through 36
60 59 49 47 49 45 48 51 48 53 52 53
Columns 37 through 48
56 54 60 53 52 51 58 47 50 48 44 49
Columns 49 through 60
50 46 52 50 51 51 57 50 49 56 44 56

The simulation output of 60 repeated experiments of 100 coin flips.


Example 2.25 Problem

Using Matlab, perform 60 experiments. In each experiment, flip a coin


100 times and record the number of heads in a vector Y such that the
jth element Yj is the number of heads in subexperiment j.
Example 2.25 Solution

>> X=rand(100,60)<0.5;
>> Y=sum(X,1)

The Matlab code for this task appears above. The 100 × 60 matrix X has
i, jth element X(i,j)=0 (tails) or X(i,j)=1 (heads) to indicate the result
of flip i of subexperiment j. Since Y sums X across the first dimension,
Y(j) is the number of heads in the jth subexperiment. Each Y(j) is between
0 and 100 and generally in the neighborhood of 50. The output of a sample
run is shown in Figure 2.4.
Example 2.26 Problem

Simulate the testing of 100 microprocessors as described in Example 2.23.


Your output should be a 4 ⇥ 1 vector X such that Xi is the number of
grade i microprocessors.
Example 2.26 Solution

%chiptest.m
G=ceil(4*rand(1,100));
T=1:4;
X=hist(G,T);

The first line generates a row vector G of random grades for 100 microprocessors.
The possible test scores are in the vector T. Lastly, X=hist(G,T)
returns a histogram vector X such that X(j) counts the number of
elements G(i) that equal T(j).

Note that “help hist” will show the variety of ways that the hist function
can be called. Moreover, X=hist(G,T) does more than just count the
number of elements of G that equal each element of T. In particular,
hist(G,T) creates bins centered around each T(j) and counts the number
of elements of G that fall into each bin.
Quiz 2.5

The flip of a thick coin yields heads with probability 0.4, tails with prob-
ability 0.5, or lands on its edge with probability 0.1. Simulate 100 thick
coin flips. Your output should be a 3 ⇥ 1 vector X such that X1, X2, and
X3 are the number of occurrences of heads, tails, and edge.
Quiz 2.5 Solution
For a Matlab simulation, we first generate a vector R of 100 random numbers. Second,
we generate vector X as a function of R to represent the 3 possible outcomes of a flip.
That is, X(i)=1 if flip i was heads, X(i)=2 if flip i was tails, and X(i)=3 if flip i landed
on the edge. The matlab code is

R=rand(1,100);
X=(R<= 0.4) ...
+ (2*(R>0.4).*(R<=0.9)) ...
+ (3*(R>0.9));
Y=hist(X,1:3)

To see how this works, we note there are three cases:

• If R(i) <= 0.4, then X(i)=1.

• If 0.4 < R(i) and R(i)<=0.9, then X(i)=2.

• If 0.9 < R(i), then X(i)=3.

These three cases will have probabilities 0.4, 0.5 and 0.1. Lastly, we use the hist
function to count how many occurrences there are of each possible value of X(i).
