Probability and Statistics in Engineering - Hines, Montgomery, Goldsman, Borror 4e Solutions (Thedrunkard1234)

Chapter 1

1–1. (a) P(A ∪ B ∪ C) = 1 − P(\overline{A ∪ B ∪ C}) = 1 − 0.25 = 0.75

     (b) P(A ∪ B) = P(A) + P(B) − P(A ∩ B) = 0.18

1–2. The following identities can be verified with Venn diagrams:

     (a) A ∪ (B ∪ C) = (A ∪ B) ∪ C and A ∩ (B ∩ C) = (A ∩ B) ∩ C

     (b) A ∪ (B ∩ C) = (A ∪ B) ∩ (A ∪ C) and A ∩ (B ∪ C) = (A ∩ B) ∪ (A ∩ C)

     (c) If A ⊂ B, then A ∩ B = A

     (d) If A ⊂ B, then A ∪ B = B

     (e) A ∩ B = ∅ ⇒ A ⊂ \overline{B}

     (f) A ⊂ B and B ⊂ C ⇒ A ⊂ C

1–3. (a) A ∩ B = {5},


(b) A ∪ B = {1, 3, 4, 5, 6, 7, 8, 9, 10},
(c) A ∩ B = {2, 3, 4, 5},
(d) A ∩ (B ∩ C) = U = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10},
(e) A ∩ (B ∪ C) = {1, 2, 5, 6, 7, 8, 9, 10}

1–4. P (A) = 0.02, P (B) = 0.01, P (C) = 0.015


P (A ∩ B) = 0.005, P (A ∩ C) = 0.006
P (B ∩ C) = 0.004, P (A ∩ B ∩ C) = 0.002
P (A ∪ B ∪ C) = 0.02 + 0.01 + 0.015 − 0.005 − 0.006 − 0.004 + 0.002 = 0.032
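As a quick numerical check of the inclusion–exclusion sum (a sketch added here, not part of the original solution; the probabilities are those given in the problem):

```python
# Inclusion-exclusion check for 1-4 using the stated probabilities.
pA, pB, pC = 0.02, 0.01, 0.015
pAB, pAC, pBC, pABC = 0.005, 0.006, 0.004, 0.002

p_union = pA + pB + pC - pAB - pAC - pBC + pABC
print(round(p_union, 3))  # 0.032
```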

1–5. S = {(t1 , t2 ): t1 ∈ R, t2 ∈ R, t1 ≥ 0, t2 ≥ 0}
[Sketches omitted: in the (t1, t2) plane, A is the triangle under t1 + t2 = 0.3, B is the square with sides 0.15, and C is the band t1 − 0.06 ≤ t2 ≤ t1 + 0.06.]
A = {(t1 , t2 ): t1 ∈ R, t2 ∈ R, 0 ≤ t1 ≤ 0.3, 0 ≤ t2 ≤ 0.3 − t1 }


B = {(t1 , t2 ): t1 ∈ R, t2 ∈ R, 0 ≤ t1 ≤ 0.15, 0 ≤ t2 ≤ 0.15}
C = {(t1 , t2 ): t1 ∈ R, t2 ∈ R, t1 ≥ 0, t2 ≥ 0, t1 − 0.06 ≤ t2 ≤ t1 + 0.06}

1–6. (a) S = {(x, y): x ∈ R, y ∈ R, 0 ≤ x ≤ y ≤ 24}


(b) [Sketches of the three event regions in S omitted; part (ii) shows the band below the line y − x = 1.]

1–7. S = {NNNNN, NNNND, NNNDN, NNNDD, NNDNN, NNDND, NNDD, NDNNN, NDNND, NDND, NDD, DNNNN, DNNND, DNND, DND, DD}

1–8. {0, 1}^A = {∅, {a}, {b}, {c}, {d}, {a, b}, {a, c}, {a, d}, {b, c}, {b, d}, {c, d}, {a, b, c}, {a, b, d}, {a, c, d}, {b, c, d}, {a, b, c, d}}

1–9. N = Not Defective, D = Defective

     (a) S = {NNN, NND, NDN, NDD, DNN, DND, DDN, DDD}
     (b) S = {NNNN, NNND, NNDN, NDNN, DNNN}

1–10. p0 = Lot Fraction Defective, so 50p0 = Lot No. of Defectives.

      P(Scrap Lot | n = 10, N = 50, p0) = 1 − \binom{50p0}{0}\binom{50(1−p0)}{10} / \binom{50}{10}

      If p0 = 0.1, P(Scrap Lot) = 1 − \binom{45}{10}/\binom{50}{10} ≈ 0.689.
      She might wish to increase the sample size.
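The hypergeometric computation above can be checked numerically; the sketch below (added here, not part of the original solution) uses `math.comb`:

```python
from math import comb

def p_scrap(p0, N=50, n=10):
    """P(a sample of n from a lot of N contains at least one defective),
    where the lot holds round(N*p0) defectives."""
    d = round(N * p0)
    return 1 - comb(N - d, n) / comb(N, n)

print(round(p_scrap(0.1), 3))  # 0.689
```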

1–11. 6 · 5 = 30 routes

1–12. 26³ · 10³ = 17,576,000 possible plates ⇒ scheme feasible


1–13. \binom{15}{6}\binom{8}{2}\binom{4}{1} = 560,560 ways

1–14. P(X ≤ 2) = \sum_{k=0}^{2} \binom{20}{k}\binom{80}{4−k} / \binom{100}{4} ≈ 0.97

1–15. P(Accept | p0) = \sum_{k=0}^{1} \binom{300p0}{k}\binom{300(1−p0)}{10−k} / \binom{300}{10}
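Both 1–14 and 1–15 are hypergeometric sums and can be evaluated directly (a numerical sketch added here, not part of the original solution; `p_accept` generalizes 1–15):

```python
from math import comb

# 1-14: P(X <= 2) with 20 defectives in 100, sample of 4.
p14 = sum(comb(20, k) * comb(80, 4 - k) for k in range(3)) / comb(100, 4)
print(round(p14, 4))  # 0.9755 (the text rounds this to 0.97)

# 1-15: acceptance probability as a function of the lot fraction defective p0.
def p_accept(p0, N=300, n=10, c=1):
    d = round(N * p0)
    return sum(comb(d, k) * comb(N - d, n - k) for k in range(c + 1)) / comb(N, n)
```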

1–16. There are 5^{12} possibilities, so the probability of randomly selecting one is 5^{−12}.
1–17. \binom{8}{2} = 28 comparisons

1–18. \binom{40}{2} = 780 tests

1–19. P^{40}_{2} = 40!/38! = 1560 tests

1–20. \binom{10}{5} = 252

1–21. \binom{5}{1}\binom{5}{1} = 25

      \binom{5}{2}\binom{5}{2} = 100

1–22. [1 − (0.2)(0.1)(0.1)][1 − (0.2)(0.1)](0.99) = 0.968

1–23. [1 − (0.2)(0.1)(0.1)][1 − (0.2)(0.1)](0.9) = 0.880

1–24. RS = R1 {1 − [1 − (1 − (1 − R2 )(1 − R4 ))(R5 )][1 − R3 ]}

1–25. S = Siberia, U = Ural

      P(S) = 0.6, P(U) = 0.4, P(F|S) = P(F̄|S) = 0.5
      P(F|U) = 0.7, P(F̄|U) = 0.3

      P(S|F̄) = (0.6)(0.5) / [(0.6)(0.5) + (0.4)(0.3)] ≈ 0.714
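The Bayes computation can be confirmed numerically (a check added here, not part of the original solution):

```python
# Bayes' rule for 1-25: prior over {S, U}, likelihood of the observed event.
prior = {"S": 0.6, "U": 0.4}
lik = {"S": 0.5, "U": 0.3}

posterior_S = prior["S"] * lik["S"] / sum(prior[k] * lik[k] for k in prior)
print(round(posterior_S, 3))  # 0.714
```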

1–26. RS = (0.995)(0.993)(0.994) = 0.9821

1–27. A: 1st ball numbered 1
      B: 2nd ball numbered 2

      P(B) = P(A) · P(B|A) + P(Ā) · P(B|Ā)
           = (1/m)(1/(m − 1)) + ((m − 1)/m)(1/m)
           = (m² − m + 1) / (m²(m − 1))

1–28. 9 × 9 − 9 = 72 possible numbers

      D1 + D2 even: 32 possibilities

      P(D1 odd and D2 odd | D1 + D2 even) = 20/32 = 5/8

1–29. A: over 60, M: male, F: female

      P(M) = 0.6, P(F) = 0.4, P(A|M) = 0.2, P(A|F) = 0.01

      P(F|A) = P(F) · P(A|F) / [P(F) · P(A|F) + P(M) · P(A|M)]
             = (0.4)(0.01) / [(0.4)(0.01) + (0.6)(0.2)] = 0.004/0.124 ≈ 0.0323

1–30. A: defective, Bi: production on machine i

      (a) P(A) = P(B1)P(A|B1) + P(B2)P(A|B2) + P(B3)P(A|B3) + P(B4)P(A|B4)
               = (0.15)(0.04) + (0.30)(0.03) + (0.20)(0.05) + (0.35)(0.02) = 0.032

      (b) P(B3|A) = (0.2)(0.05)/0.032 = 0.3125

1–31. r = radius

      P(closer to center) = π(r/2)² / (πr²) = 1/4
1–32. P (A ∪ B ∪ C) = P ((A ∪ B) ∪ C) (associative law)
= P (A ∪ B) + P (C) − P ((A ∪ B) ∩ C)
= P (A) + P (B) − P (A ∩ B) + P (C) − P ((A ∩ C) ∪ (B ∩ C))
= P (A) + P (B) + P (C) − P (A ∩ B) − P (A ∩ C)
− P (B ∩ C) + P (A ∩ B ∩ C)

1–33. For k = 2, P(A1 ∪ A2) = P(A1) + P(A2) − P(A1 ∩ A2) by Thm. 1–3.

      Using induction we show that if the result holds for k − 1 events, it holds for k. If

      P(A2 ∪ A3 ∪ ··· ∪ Ak) = \sum_{i=2}^{k} P(Ai) − \sum_{2≤i<j≤k} P(Ai ∩ Aj)
          + \sum_{2≤i<j<r≤k} P(Ai ∩ Aj ∩ Ar) − \sum_{2≤i<j<r<ℓ≤k} P(Ai ∩ Aj ∩ Ar ∩ Aℓ) + ···   (Eq. 1)

      then we must show

      P(A1 ∪ A2 ∪ ··· ∪ Ak) = \sum_{i=1}^{k} P(Ai) − \sum_{1≤i<j≤k} P(Ai ∩ Aj)
          + \sum_{1≤i<j<r≤k} P(Ai ∩ Aj ∩ Ar) − \sum_{1≤i<j<r<ℓ≤k} P(Ai ∩ Aj ∩ Ar ∩ Aℓ) + ···   (Eq. 2)

      By hypothesis, letting A1 ∩ Ai replace Ai in Eq. 1,

      P((A1 ∩ A2) ∪ (A1 ∩ A3) ∪ ··· ∪ (A1 ∩ Ak)) = \sum_{i=2}^{k} P(A1 ∩ Ai)
          − \sum_{2≤i<j≤k} P(A1 ∩ Ai ∩ Aj) + \sum_{2≤i<j<r≤k} P(A1 ∩ Ai ∩ Aj ∩ Ar) − ···   (Eq. 3)

      By Thm. 1–3,

      P(A1 ∪ (A2 ∪ ··· ∪ Ak)) = P(A1) + P(A2 ∪ ··· ∪ Ak) − P((A1 ∩ A2) ∪ ··· ∪ (A1 ∩ Ak))

      Substituting Eq. 1 and Eq. 3 into the right-hand side and collecting terms gives

      P(A1 ∪ A2 ∪ ··· ∪ Ak) = \sum_{i=1}^{k} P(Ai) − \sum_{1≤i<j≤k} P(Ai ∩ Aj)
          + \sum_{1≤i<j<r≤k} P(Ai ∩ Aj ∩ Ar) + ··· + (−1)^{k−1} P(A1 ∩ A2 ∩ ··· ∩ Ak)

1–35. P(B) = 1 − (365)(364) ··· (365 − n + 1)/365^n

      n      10    20    21    22    23    24    25    30    40    50    60
      P(B) 0.117 0.411 0.444 0.476 0.507 0.538 0.569 0.706 0.891 0.970 0.994
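The table entries follow from the product formula; a short check (added here, not part of the original solution) regenerates them:

```python
def p_shared_birthday(n):
    """P(at least two of n people share a birthday), 365 equally likely days."""
    p_distinct = 1.0
    for i in range(n):
        p_distinct *= (365 - i) / 365
    return 1 - p_distinct

print(round(p_shared_birthday(23), 3))  # 0.507
print(round(p_shared_birthday(50), 3))  # 0.970
```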

1–36. P(win on 1st throw) = 6/36 + 2/36 = 8/36

      P(win after 1st throw) = P(win on 4) + P(win on 5) + P(win on 6)
          + P(win on 8) + P(win on 9) + P(win on 10)

      P(win on 4) = (3/36)[3/36 + (27/36)(3/36) + (27/36)²(3/36) + ···] = 1/36
      P(win on 5) = (4/36)[4/36 + (26/36)(4/36) + (26/36)²(4/36) + ···] = 2/45
      P(win on 6) = (5/36)[5/36 + (25/36)(5/36) + (25/36)²(5/36) + ···] = 25/396
      P(win on 8) = P(win on 6) = 25/396
      P(win on 9) = P(win on 5) = 2/45, P(win on 10) = P(win on 4) = 1/36

      P(win) = 8/36 + 2[1/36 + 2/45 + 25/396] = 0.4929
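Exact rational arithmetic confirms the craps total (a verification sketch added here, not part of the original solution):

```python
from fractions import Fraction as F

# Win immediately on 7 or 11; otherwise make the point w before rolling a 7.
# A point with w ways is made with probability w/(w+6).
ways = {4: 3, 5: 4, 6: 5, 8: 5, 9: 4, 10: 3}
p_win = F(8, 36) + sum(F(w, 36) * F(w, w + 6) for w in ways.values())
print(p_win, round(float(p_win), 4))  # 244/495 0.4929
```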

1–37. P^{8}_{8} = 8! = 40,320

1–38. Let B, C, D, E, X represent the events of arriving at points so labeled.

      P(B) = P(C) = P(D) = P(E) = 1/4
      P(X|B) = 1/3, P(X|C) = 1, P(X|D) = 1, P(X|E) = 2/5

      P(X) = P(B)P(X|B) + P(C)P(X|C) + P(D)P(X|D) + P(E)P(X|E)
           = (1/4)(1/3) + (1/4)(1) + (1/4)(1) + (1/4)(2/5) = 41/60

1–39. P(B3|A) = P(B3)P(A|B3) / \sum_{i=1}^{3} P(Bi)P(A|Bi)
              = (0.5)(0.3) / [(0.2)(0.2) + (0.3)(0.5) + (0.5)(0.3)] = 0.441

1–40. F: Structural Failure, DS: Diagnosis as Structural Failure

      P(F|DS) = P(F)P(DS|F) / [P(F)P(DS|F) + P(F̄)P(DS|F̄)]
              = (0.25)(0.9) / [(0.25)(0.9) + (0.75)(0.2)] = 0.225/0.375 = 0.6

Chapter 2
  
 
2–1. RX = {0, 1, 2, 3, 4}, P(X = x) = \binom{4}{x}\binom{48}{5−x} / \binom{52}{5} for x = 0, 1, 2, 3, 4

2–2. µ = 0(1/6) + 1(1/6) + 2(1/3) + 3(1/12) + 4(1/6) + 5(1/12) = 26/12

     σ² = 0²(1/6) + 1²(1/6) + 2²(1/3) + 3²(1/12) + 4²(1/6) + 5²(1/12) − (26/12)² = 83/36
2–3. \int_0^∞ ce^{−x} dx = 1 ⇒ c = 1, so

     f(x) = e^{−x} if x ≥ 0; 0 otherwise

     µ = \int_0^∞ xe^{−x} dx = [−xe^{−x}]_0^∞ + \int_0^∞ e^{−x} dx = 1

     σ² = \int_0^∞ x²e^{−x} dx − 1² = [−x²e^{−x}]_0^∞ + \int_0^∞ 2xe^{−x} dx − 1
        = [−2xe^{−x}]_0^∞ + \int_0^∞ 2e^{−x} dx − 1 = 2 − 1 = 1

2–4. FT(t) = P(T ≤ t) = 1 − e^{−ct}; t ≥ 0

     ∴ fT(t) = F′T(t) = ce^{−ct} if t ≥ 0; 0 otherwise

2–5. (a) Yes

     (b) No, since GX(∞) ≠ 1 and GX(b) ≥ GX(a) fails for some b ≥ a

     (c) Yes

2–6. (a) fX(x) = F′X(x) = e^{−x}; 0 < x < ∞
                        = 0; x ≤ 0

     (c) hX(x) = H′X(x) = e^x; −∞ < x ≤ 0
                        = 0; x > 0

2–7. Both are since pX (x) ≥ 0, all x; and Σall x pX (x) = 1

2–8. The probability mass function is

     x     pX(x)
     −1     1/5
      0     1/10
     +1     2/5
     +2     3/10
     ow     0

     E(X) = −1(1/5) + 0(1/10) + 1(2/5) + 2(3/10) = 4/5

     V(X) = (−1)²(1/5) + 0²(1/10) + 1²(2/5) + 2²(3/10) − (4/5)² = 29/25

     FX(x) = 0,     x < −1
           = 1/5,   −1 ≤ x < 0
           = 3/10,  0 ≤ x < 1
           = 7/10,  1 ≤ x < 2
           = 1,     x ≥ 2

2–9. P(X < 30) = P(X ≤ 29) = \sum_{x=0}^{29} e^{−20}(20)^x / x! = 0.978
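The Poisson sum can be evaluated directly (a numerical check added here, not part of the original solution):

```python
from math import exp, factorial

# P(X <= 29) for X ~ Poisson(20), as in 2-9.
p = sum(exp(-20) * 20**x / factorial(x) for x in range(30))
print(round(p, 3))  # 0.978
```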

2–10. (a) pX(x) = FX(x) − FX(x − 1)
              = [1 − (1/2)^{x+1}] − [1 − (1/2)^x]
              = (1/2)^x − (1/2)^{x+1}
              = (1/2)(1/2)^x; x = 0, 1, 2, ...
              = 0; ow

      (b) P(0 < X ≤ 8) = FX(8) − FX(0) = 0.498

      (c) P(X even) = (1/2)\sum_{k=0}^{∞} (1/4)^k = 2/3

2–11. (a) \int_0^2 kx dx + \int_2^4 k(4 − x) dx = 1 ⇒ k = 1/4, and fX(x) ≥ 0 for k = 1/4

      (b) µ = \int_0^2 (1/4)x² dx + \int_2^4 (1/4)(4x − x²) dx = 2

          σ² = \int_0^2 (1/4)x³ dx + \int_2^4 (1/4)(4x² − x³) dx − 2² = 2/3

      (c) FX(x) = 0; x < 0
                = \int_0^x (1/4)t dt = x²/8; 0 ≤ x < 2
                = \int_0^2 (1/4)t dt + \int_2^x (1/4)(4 − t) dt = −1 + x − x²/8; 2 ≤ x < 4
                = 1; x ≥ 4
2–12. (a) k[\int_0^a x dx + \int_a^{2a} (2a − x) dx] = 1 ⇒ k = 1/a²

      (b) FX(x) = 0; x < 0
                = \int_0^x kt dt = k(x²/2); 0 ≤ x < a
                = \int_0^a kx dx + \int_a^x k(2a − t) dt = k(a²/2) + k[2a(x − a) + (a² − x²)/2]; a ≤ x ≤ 2a
                = 1; x > 2a

      (c) µ = k[\int_0^a x² dx + \int_a^{2a} (2ax − x²) dx] = a

          σ² = a²/6
1

2–13. From Chebyshev’s inequality 1 − = 0.75 ⇒ k = 2 with µ = 2, σ = 2, and the
√ √ k2
interval is [14 − 2 2, 14 + 2 2].
2–14. (a) \int_{−1}^{0} kt² dt = 1 ⇒ k = 3

      (b) µ = \int_{−1}^{0} 3t³ dt = [3t⁴/4]_{−1}^{0} = −3/4

          σ² = \int_{−1}^{0} 3t⁴ dt − (−3/4)² = 3/5 − 9/16 = 3/80

      (c) FT(t) = 0; t < −1
                = \int_{−1}^{t} 3u² du = t³ + 1; −1 ≤ t ≤ 0
                = 1; t > 0
2–15. (a) k(1/2 + 1/4 + 1/8) = 1 ⇒ k = 8/7

      (b) µ = (8/7)[1(1/2) + 2(1/2)² + 3(1/2)³] = 11/7

          σ² = (8/7)[1²(1/2) + 2²(1/2)² + 3²(1/2)³] − (11/7)² = 26/49

      (c) FX(x) = 0; x < 1
                = 8/14; 1 ≤ x < 2
                = 12/14; 2 ≤ x < 3
                = 1; x ≥ 3

2–16. \sum_{n=0}^{∞} kr^n = 1 ⇒ k = 1 − r

2–17. Using Chebyshev’s inequality, 1 − 1/k² = 0.99 ⇒ k = 10; with µ = 2 and σ² = 0.4
      (σ ≈ 0.6324), the interval is [2 − 10(0.6324), 2 + 10(0.6324)]. The letters should be
      mailed 8.3 days before the required delivery date.

2–18. (a) µA = 1000(0.2) + 1050(0.3) + 1100(0.1) + 1150(0.3) + 1200(0.05) + 1350(0.05) = 1097.5
          µB = 1135

      (b) Assume independence, so µA|B=1130 = µA = 1097.5

      (c) With independence, P(A = k and B = k) = P(A = k) · P(B = k). So

          \sum_k P(A = k)P(B = k) = (0.1)(0.2) + (0.3)(0.1) + (0.1)(0.3) + (0.3)(0.3)
              + (0.05)(0.1) + (0.05)(0.1) = 0.18
2–19. p(xi) = (xi − 1)/36; xi = 2, 3, ..., 6
            = (13 − xi)/36; xi = 7, 8, ..., 12

      so

      xi      2    3    4    5    6    7    8    9    10   11   12
      p(xi) 1/36 2/36 3/36 4/36 5/36 6/36 5/36 4/36 3/36 2/36 1/36

2–20. µ = 7, σ² = 105/18

2–21. (a) FX(x) = 0; x < 0
                = x²/9; 0 ≤ x < 3
                = 1; x ≥ 3

      (b) µ = 2, σ² = 1/2

      (c) µ′3 = 54/5

      (d) m = 3/√2

2–22. µ = 0, σ² = 25, σ = 5

      P[|X − µ| ≥ kσ] = P[|X| ≥ 5k] = 0 if k > 1, and = 1 if 0 < k ≤ 1.
      From Chebyshev’s inequality, the upper bound is 1/k².

2–23. F(x) = 0; x < 0

      F(x) = \int_0^x (u/t²)e^{−u²/2t²} du = \int_0^{x²/2t²} e^{−v} dv = 1 − e^{−x²/2t²}; x ≥ 0

2–24. F(x) = \int_{−∞}^{x} du / (σπ{1 + ((u − µ)²/σ²)}); −∞ < x < ∞

      Let t = (u − µ)/σ, dt = (1/σ) du, and

      F(x) = \int_{−∞}^{(x−µ)/σ} (1/π) dt/(1 + t²) = 1/2 + (1/π) tan^{−1}((x − µ)/σ); −∞ < x < ∞

2–25. \int_0^{π/2} k sin y dy = 1 ⇒ k[−cos y]_0^{π/2} = 1 ⇒ k = 1

      µ = \int_0^{π/2} y sin y dy = [sin y − y cos y]_0^{π/2} = sin(π/2) = 1
2–26. Assume X continuous.

      µk = \int_{−∞}^{∞} (x − µ)^k fX(x) dx = \int_{−∞}^{∞} [\sum_{j=0}^{k} \binom{k}{j}(−µ)^j x^{k−j}] fX(x) dx
         = \sum_{j=0}^{k} (−1)^j \binom{k}{j} µ^j \int_{−∞}^{∞} x^{k−j} fX(x) dx
         = \sum_{j=0}^{k} (−1)^j \binom{k}{j} µ^j µ′_{k−j}

Chapter 3

3–1. (a) y     pY(y)
         0     0.6
         20    0.3
         80    0.1
         ow    0

     (b) E(Y) = \sum_y y · pY(y) = 14

         V(Y) = \sum_y y² · pY(y) − (14)² = 564
y

3–2. Let Profit = P = 10 + 2X.

     FP(p) = P(P ≤ p) = P(10 + 2X ≤ p) = P(X ≤ (p − 10)/2) = \int_0^{(p−10)/2} (x/18) dx
           = (p² − 20p + 100)/144

     fP(p) = (p − 10)/72; 10 ≤ p ≤ 22
           = 0; ow

3–3. (a) P (T < 1) = 1 − e−1/4 = 0.221


(b) E[P ] = 200 − 200P (T < 1) = 155.80

3–4. (a) x pX (x) y = 2000(12 − x) pY (y)


10 0.1 4000 0.1
11 0.3 2000 0.3
12 0.4 0 0.4
13 0.1 −2000 0.1
14 0.1 −4000 0.1
ow 0 ow 0
(b) E(X) = 10(0.1) + 11(0.3) + 12(0.4) + 13(0.1) + 14(0.1) = 11.8 days

    V(X) = 10²(0.1) + 11²(0.3) + 12²(0.4) + 13²(0.1) + 14²(0.1) − (11.8)² = 1.16 days²

    E(Y) = (4000)(0.1) + (2000)(0.3) + 0(0.4) + (−2000)(0.1) + (−4000)(0.1) = $400

    V(Y) = (4000)²(0.1) + (2000)²(0.3) + 0²(0.4) + (−2000)²(0.1) + (−4000)²(0.1) − 400² = 4,640,000 ($²)

3–5. FZ(z) = P(Z ≤ z) = P(X² ≤ z) = P(|X| ≤ √z)
           = P(0 ≤ X ≤ √z) = \int_0^{√z} 2xe^{−x²} dx

     Let u = x², du = 2x dx, so

     FZ(z) = \int_0^z e^{−u} du = 1 − e^{−z}

     fZ(z) = e^{−z}; z ≥ 0
           = 0; ow
3–6. (a) E(Di) = \sum_{d=0}^{9} d · pDi(d) = (1/10)(1 + 2 + 3 + 4 + 5 + 6 + 7 + 8 + 9) = 4.5

     (b) V(Di) = \sum_{d=0}^{9} d² · pDi(d) − (4.5)²
               = (1/10)[1² + 2² + 3² + 4² + 5² + 6² + 7² + 8² + 9²] − 20.25 = 8.25

     (c) d: 0 1 2 3 4 5 6 7 8 9
         y: 4 3 2 1 0 0 1 2 3 4

         y      0    1    2    3    4    ow
         pY(y)  0.2  0.2  0.2  0.2  0.2  0

         E(Y) = (2/10)(1 + 2 + 3 + 4) = 2
         V(Y) = (2/10)(1² + 2² + 3² + 4²) − 2² = 2

3–7. R = Revenue/Gal

     R = 0.92 if A < 0.7; R = 0.98 if A ≥ 0.7

     E(R) = 0.92 P(A < 0.7) + 0.98 P(A ≥ 0.7) = 0.92(0.7) + 0.98(0.3) = 93.8¢/gal
3–8. MX(t) = \int_β^∞ e^{tx}(1/θ)e^{−(x−β)/θ} dx = (1/θ)e^{β/θ}\int_β^∞ e^{−x(1/θ − t)} dx
           = (1 − θt)^{−1}e^{βt}, for 1/θ − t > 0

     M′X(t) = e^{βt}(1 − θt)^{−2}[β(1 − θt) + θ]
     M″X(t) = e^{βt}(1 − θt)^{−3}[β²(1 − θt)² + 2βθ(1 − θt) + 2θ²]

     E(X) = M′X(0) = β + θ
     V(X) = M″X(0) − (β + θ)² = θ²

3–9. (a) FY(y) = P(2X² ≤ y) = P(−√(y/2) ≤ X ≤ √(y/2)) = P(0 ≤ X ≤ √(y/2))
               = \int_0^{√(y/2)} e^{−x} dx = 1 − e^{−√(y/2)}

         fY(y) = F′Y(y) = (1/4)(y/2)^{−1/2}e^{−(y/2)^{1/2}}; y > 0
               = 0; otherwise

     (b) FV(v) = P(X^{1/2} ≤ v) = P(X ≤ v²) = \int_0^{v²} e^{−x} dx = 1 − e^{−v²}

         fV(v) = F′V(v) = 2ve^{−v²}; v > 0
               = 0; otherwise

     (c) FU(u) = P(ln(X) ≤ u) = P(X ≤ e^u) = \int_0^{e^u} e^{−x} dx = 1 − e^{−e^u}

         fU(u) = F′U(u) = e^{−(e^u − u)}; −∞ < u < ∞
               = 0; otherwise

3–10. Note that as stated Y > y0 ⇒ signal read (not Y > |y0|), so

      P(Y > y) = \int_{tan^{−1}y}^{π/2} (1/2π) dx + \int_{tan^{−1}y+π}^{3π/2} (1/2π) dx

      FY(y) = 1 − P(Y > y) = 1/2 + (1/π) tan^{−1} y; y ≥ 0

      fY(y) = (1/π) · 1/(1 + y²); −∞ < y < +∞

      Note the symmetry in 1/(1 + y²).

3–11. Let S = Stock Level.

      L(X, S) = 0.5(X − S), X > S
              = 0.25(S − X), X ≤ S

      E(L(X, S)) = \int_{10^6}^{S} 0.25(S − x) · 10^{−6} dx + \int_{S}^{2·10^6} 0.5(x − S) · 10^{−6} dx

      dE[L(X, S)]/dS = (3/4) · 10^{−6}S − 5/4 = 0 ⇒ S = (5/3) · 10^6

3–12. µG = E(G) = \int_0^4 g fG(g) dg = \int_0^4 (g²/8) dg = 8/3
      E(G²) = \int_0^4 (g³/8) dg = 8
      σ²G = V(G) = E(G²) − (E(G))² = 8/9

      H(G) = (3 + 0.05G)²
      H′(µG) = (0.1)(3 + 0.05µG), H″(µG) = 0.005

      Thus,

      µA ≈ H(µG) + (1/2)H″(µG)σ²G = H(8/3) + (1/2)(0.005)(8/9) = 9.82

      and

      σ²A ≈ (H′(µG))²σ²G = 0.082

3–13. (a) FY(y) = P(4 − X² ≤ y) = P(X² ≥ 4 − y)
                = P(X ≤ −√(4 − y)) + P(X ≥ √(4 − y))
                = 0 + \int_{√(4−y)}^{2} dx = 2 − √(4 − y)

          fY(y) = (1/2)(4 − y)^{−1/2}; 0 ≤ y ≤ 3
                = 0; otherwise

      (b) FY(y) = P(e^X ≤ y) = P(X ≤ ln(y)) = \int_1^{ln(y)} dx = ln(y) − 1

          fY(y) = 1/y; e¹ ≤ y ≤ e²
                = 0; otherwise

3–14. y = 3/(1 + x)² ⇒ x = √3 y^{−1/2} − 1

      dx/dy = −(√3/2)y^{−3/2}, so |dx/dy| = (√3/2)y^{−3/2}

      fY(y) = fX(x)|dx/dy| = fX(√3 y^{−1/2} − 1)(√3/2)y^{−3/2}
            = exp[−√3 y^{−1/2} + 1](√3/2)y^{−3/2}; 0 ≤ y ≤ 3
3–15. With equal probability pX(1) = ··· = pX(6) = 1/6,

      MX(t) = \sum_{x=1}^{6} e^{tx}(1/6)

      E(X) = M′X(0) = 7/2
      V(X) = M″X(0) − [M′X(0)]² = 35/12

3–16. (a) Using the substitution y = bx², we obtain

          1 = \int_0^∞ ax²e^{−bx²} dx = a\int_0^∞ (y/b)e^{−y} dy/(2√(by))
            = (a/2b^{3/2})\int_0^∞ y^{(3/2)−1}e^{−y} dy
            = (a/2b^{3/2})Γ(3/2) = (a/2b^{3/2})(1/2)Γ(1/2) = a√π/(4b^{3/2})

          Thus, a = 4b^{3/2}/√π.

      (b) Similarly,

          µX = E(X) = \int_0^∞ ax³e^{−bx²} dx = (a/2b²)Γ(2) = 2/√(πb)

          and

          E(X²) = (a/2b^{5/2})Γ(5/2) = 3/(2b).

          These facts imply that

          σ²X = E(X²) − [E(X)]² = 3/(2b) − 4/(πb) = (3π − 8)/(2πb).

          Now we have

          H(x) = 18x²
          H(µX) = 18µ²X = 18 · 4/(πb) = 72/(πb)
          H′(µX) = 36µX = 36 · 2/√(πb) = 72/√(πb)
          H″(µX) = 36

          Then

          µY ≈ H(µX) + (1/2)H″(µX)σ²X = 72/(πb) + (1/2)(36)(3π − 8)/(2πb) = 27/b

          and

          σ²Y ≈ (H′(µX))²σ²X = (72/√(πb))²(3π − 8)/(2πb) = 2592(3π − 8)/(π²b²)


3–17. E(Y) = 1, V(Y) = 1, H(Y) = (Y² + 36)^{1/2}

      E(X) ≈ H(µY) + (1/2)H″(µY)σ²Y
      V(X) ≈ [H′(µY)]²σ²Y

      H′(y) = y(y² + 36)^{−1/2}, H″(y) = 36(y² + 36)^{−3/2}

      E(X) ≈ (37)^{1/2} + (1/2)(36)(37)^{−3/2} = 6.16
      V(X) ≈ (37)^{−1} = 0.027027

3–18. E(P) = \int_0^1 (1 + 3r)6r(1 − r) dr = \int_0^1 (6r + 12r² − 18r³) dr = 5/2

      Since H(r) = 1 + 3r is increasing,

      fP(p) = 6((p − 1)/3)(1 − (p − 1)/3)(1/3) = (2/9)(5p − p² − 4); 1 < p < 4
            = 0; otherwise
Z ∞
3–19. MX (t) = etx · 4xe−2x dx
0
Z ∞ µ ¶−2
x(t−2) t
= 4xe dx = 1−
0 2
E(X) = MX0 (0) = 1
1
V (X) = MX00 (0) − [MX0 (0)]2 =
2

π · X2
3–20. V = ·1
4
π π
E(V ) = E(X 2 ) = [V (X) + (E(X))2 ]
4 4
π
= · [25 · 10−6 + 22 ] = 3.14162
4

3–21. MY (t) = E(etY ) = E(et(aX+b) )


= etb E(e(at)X )
= etb MX (at)

3–22. (a) \int_0^1 k(1 − x)^{a−1}x^{b−1} dx = 1.

      Let Γ(p) = \int_0^∞ u^{p−1}e^{−u} du define the gamma function. Integrating by parts
      and applying L’Hôpital’s Rule, it can be shown that Γ(p + 1) = p · Γ(p).
      A variable change u = v² shows that Γ(1/2) = √π.

      Working with Γ(p), let u = v² so du = 2v dv and

      Γ(p) = 2\int_0^∞ (v²)^{p−1}e^{−v²}v dv = 2\int_0^∞ v^{2p−1}e^{−v²} dv.

      Then Γ(a) · Γ(b) = 4\int_0^∞ s^{2a−1}e^{−s²} ds \int_0^∞ t^{2b−1}e^{−t²} dt
                       = 4\int_0^∞\int_0^∞ s^{2a−1}t^{2b−1}e^{−(s²+t²)} ds dt

      Let s = ρ cos θ, t = ρ sin θ, so the Jacobian = ρ:

      Γ(a) · Γ(b) = 4\int_0^∞\int_0^{π/2} (ρ cos θ)^{2a−1}(ρ sin θ)^{2b−1}e^{−ρ²}ρ dθ dρ
                  = 4\int_0^∞ ρ^{2a+2b−1}e^{−ρ²} dρ \int_0^{π/2} (cos θ)^{2a−1}(sin θ)^{2b−1} dθ

      Substitute ρ² = y in the first integral and sin²θ = x in the second:

      Γ(a)Γ(b) = \int_0^∞ y^{a+b−1}e^{−y} dy \int_0^1 x^{b−1}(1 − x)^{a−1} dx
               = Γ(a + b)\int_0^1 x^{b−1}(1 − x)^{a−1} dx

      so \int_0^1 x^{b−1}(1 − x)^{a−1} dx = Γ(a)Γ(b)/Γ(a + b) ⇒ k = Γ(a + b)/(Γ(a)Γ(b))

      (b) E(X^k) = [Γ(a + b)/(Γ(a)Γ(b))]\int_0^1 x^{(b+k)−1}(1 − x)^{a−1} dx
                 = [Γ(a + b)/(Γ(a)Γ(b))] · Γ(a)Γ(b + k)/Γ(a + b + k)

      Since Γ(1) = 1 we obtain Γ(p + 1) = p! for p a positive integer. Then

      E(X) = Γ(a + b)Γ(b + 1)/(Γ(b)Γ(a + b + 1)) = b/(a + b)
      E(X²) = Γ(a + b)Γ(b + 2)/(Γ(b)Γ(a + b + 2)) = b(b + 1)/((a + b + 1)(a + b))

      so V(X) = ab/((a + b)²(a + b + 1))

3–23. MX(t) = 1/2 + (1/4)e^t + (1/8)e^{2t} + (1/8)e^{3t}
      M′X(t) = (1/4)e^t + (1/4)e^{2t} + (3/8)e^{3t}
      M″X(t) = (1/4)e^t + (1/2)e^{2t} + (9/8)e^{3t}

      (a) E(X) = M′X(0) = 7/8

          V(X) = M″X(0) − [M′X(0)]² = 15/8 − (7/8)² = 71/64

      (b) x: 0 1 2 3 → y: 4 1 0 1

          y      0    1    4    ow
          pY(y)  1/8  3/8  1/2  0

          FY(y) = 0; y < 0
                = 1/8; 0 ≤ y < 1
                = 1/2; 1 ≤ y < 4
                = 1; y ≥ 4

3–24. µ3 = E(X − µ′1)³
         = E(X³) − 3µ′1E(X²) + 3(µ′1)²E(X) − (µ′1)³
         = µ′3 − 3µ′1µ′2 + 2(µ′1)³.

      A r.v. is symmetric about a if f(x + a) = f(−x + a), and we assume X continuous.
      Since X − a and a − X have the same p.d.f., E(X − a) = E(a − X) or
      E(X) − a = a − E(X), so E(X) = a = µ′1. Now,

      E(X − µ′1)^r = E(X − a)^r = \int_{−∞}^{∞} (x − a)^r f(x) dx
                   = \int_{−∞}^{a} (x − a)^r f(x) dx + \int_{a}^{∞} (x − a)^r f(x) dx
                   = \int_{−∞}^{0} y^r f(y + a) dy + \int_{0}^{∞} y^r f(y + a) dy
                   = \int_{0}^{∞} (−y)^r f(−y + a) dy + \int_{0}^{∞} y^r f(y + a) dy
                   = [(−1)^r + 1]\int_{0}^{∞} y^r f(y + a) dy

                   = 0 for odd r.

      Thus E(X − µ′1)³ = 0.

3–25. If \int_{−∞}^{∞} |x|^r f(x) dx = k < ∞, then \int_{−∞}^{∞} |x|^n f(x) dx < ∞, where 0 ≤ n < r.

      Proof: If |x| ≤ 1, then |x|^n ≤ 1, and if |x| > 1, then |x|^n ≤ |x|^r. Then

      \int_{−∞}^{∞} |x|^n f(x) dx = \int_{|x|≤1} |x|^n f(x) dx + \int_{|x|>1} |x|^n f(x) dx
                                  ≤ \int_{|x|≤1} 1 · f(x) dx + \int_{|x|>1} |x|^r f(x) dx ≤ 1 + k < ∞

3–26. MX(t) = exp(µt + σ²t²/2) ⇒ ψX(t) = µt + σ²t²/2

      dψX(t)/dt|_{t=0} = [µ + σ²t]_{t=0} = µ
      d²ψX(t)/dt²|_{t=0} = σ²
      d^rψX(t)/dt^r|_{t=0} = 0; r ≥ 3

3–27. From Table XV, using the first column with scaling: u1 = 0.10480, u2 = 0.22368,
u3 = 0.24130, u4 = 0.42167, u5 = 0.37570, u6 = 0.77921, . . . , u20 = 0.07056.

FX (x) = 0; x<0
= 0.5; 0≤x<1
= 0.75; 1≤x<2
= 0.875; 2≤x<3
= 1; x≥3

So, x1 = 0, x2 = 0, x3 = 0, x4 = 0, x5 = 0, x6 = 2, . . . , x20 = 0

3–28. The c.d.f. is

FT (t) = 1 − e−t/4

Thus, we set

t = −4`n(1 − FT (t)) = −4`n(1 − u)

From Table XV, using the second column with scaling: u1 = 0.15011, u2 = 0.46573,
u3 = 0.48360, . . . , u10 = 0.36257

So,

t1 = −4`n(0.84989) = 0.6506
t2 = −4`n(0.53427) = 2.5074
..
.
t10 = −4`n(0.63143) = 1.8391
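The inverse-transform step t = −4 ln(1 − u) is easy to script; the sketch below (added here, not part of the original solution) reproduces the first two deviates:

```python
from math import log

def exp_sample(u, theta=4.0):
    """Inverse-transform draw from an exponential with mean theta."""
    return -theta * log(1.0 - u)

print(round(exp_sample(0.15011), 4))  # 0.6506
print(round(exp_sample(0.46573), 4))  # 2.5074
```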
1

Chapter 4

4–1. (a) x 0 1 2 3 4 5
pX (x) 27/50 11/50 6/50 3/50 2/50 1/50
y 0 1 2 3 4
pY (y) 20/50 15/50 10/50 4/50 1/50

(b) y 0 1 2 3 4
pY |0 (y) 11/27 8/27 4/27 3/27 1/27

(c) x 0 1 2 3 4 5
pX|0 (x) 11/20 4/20 2/20 1/20 1/20 1/20

4–2. (a) y 1 2 3 4 5 6 7 8 9
pY (y) 44/100 26/100 12/100 8/100 4/100 2/100 2/100 1/100 1/100
x 1 2 3 4 5 6
pX (x) 26/100 21/100 17/100 15/100 11/100 10/100

(b) y 1 2 3 4 5 6 7 8 9
x=1 pY |1 (y) 10/26 6/26 3/26 2/26 1/26 1/26 1/26 1/26 1/26
x=2 pY |2 (y) 8/21 5/21 3/21 2/21 1/21 1/21 1/21 0 0
x=3 pY |3 (y) 8/17 5/17 2/17 1/17 1/17 0 0 0 0
x=4 pY |4 (y) 7/15 4/15 2/15 1/15 1/15 0 0 0 0
x=5 pY |5 (y) 6/11 3/11 1/11 1/11 0 0 0 0 0
x=6 pY |6 (y) 5/10 3/10 1/10 1/10 0 0 0 0 0
4–3. (a) \int_0^{10}\int_0^{100} (k/1000) dx1 dx2 = 1 ⇒ k = 1

     (b) fX1(x1) = \int_0^{10} (1/1000) dx2 = 1/100; 0 ≤ x1 ≤ 100
                 = 0; otherwise

         fX2(x2) = \int_0^{100} (1/1000) dx1 = 1/10; 0 ≤ x2 ≤ 10
                 = 0; otherwise

     (c) FX1,X2(x1, x2) = 0; x1 ≤ 0 or x2 ≤ 0
                        = x1x2/1000; 0 < x1 < 100, 0 < x2 < 10
                        = x1/100; 0 < x1 < 100, x2 ≥ 10
                        = x2/10; x1 ≥ 100, 0 < x2 < 10
                        = 1; x1 ≥ 100, x2 ≥ 10.
4–4. (a) k\int_2^4\int_0^2 (6 − x1 − x2) dx1 dx2 = 1 ⇒ k = 1/8

     (b) P(X1 < 1, X2 < 3) = (1/8)\int_2^3\int_0^1 (6 − x1 − x2) dx1 dx2 = 3/8

     (c) P(X1 + X2 ≤ 4) = (1/8)\int_2^4\int_0^{4−x2} (6 − x1 − x2) dx1 dx2 = 2/3

     (d) fX1(x1) = \int_2^4 (1/8)(6 − x1 − x2) dx2 = (1/8)(6 − 2x1); 0 ≤ x1 ≤ 2

         P(X1 < 1.5) = \int_0^{3/2} (1/8)(6 − 2x1) dx1 ≈ 0.844

     (e) fX1(x1): see (d)

         fX2(x2) = \int_0^2 (1/8)(6 − x1 − x2) dx1 = (1/8)(10 − 2x2); 2 ≤ x2 ≤ 4

4–5. (a) P(W ≤ 2/3, Y ≤ 1/2) = \int_0^{2/3}\int_0^1\int_0^{1/2}\int_0^1 16wxyz dz dy dx dw = 1/9

     (b) P(X ≤ 1/2, Z ≤ 1/4) = \int_0^1\int_0^{1/2}\int_0^1\int_0^{1/4} 16wxyz dz dy dx dw = 1/64

     (c) \int_0^1\int_0^1\int_0^1 16wxyz dz dy dx = 2w; 0 ≤ w ≤ 1
         = 0; otherwise

     Note: W, X, Y and Z are independent.

4–6. From Problem 4–4,

     fX(x) = (1/8)(6 − 2x); 0 ≤ x ≤ 2
     fY(y) = (1/8)(10 − 2y); 2 ≤ y ≤ 4

     fX|y(x) = fX,Y(x, y)/fY(y) = (6 − x − y)/(10 − 2y); 0 ≤ x ≤ 2, 2 ≤ y ≤ 4
     fY|x(y) = fX,Y(x, y)/fX(x) = (6 − x − y)/(6 − 2x); 2 ≤ y ≤ 4, 0 ≤ x ≤ 2

4–7. E(Y|X = 3) = \sum_{y=1}^{9} y · pY|3(y) = 33/17

4–8. (a) x1        51    52    53    54    55
         pX1(x1)  0.28  0.28  0.22  0.09  0.13

         x2        51    52    53    54    55
         pX2(x2)  0.18  0.15  0.35  0.12  0.20

     (b) x2            51    52    53    54    55   otherwise
         pX2|51(x2)   6/28  7/28  5/28  5/28  5/28     0
         pX2|52(x2)   5/28  5/28 10/28  2/28  6/28     0
         pX2|53(x2)   5/22  1/22 10/22  1/22  5/22     0
         pX2|54(x2)   1/9   1/9   5/9   1/9   1/9      0
         pX2|55(x2)   1/13  1/13  5/13  3/13  3/13     0

         E(X2|x1 = 51) = \sum_{x2=51}^{55} x2 · pX2|51(x2) = 52.857
         E(X2|x1 = 52) = 52.964
         E(X2|x1 = 53) = 53.0
         E(X2|x1 = 54) = 53.0
         E(X2|x1 = 55) = 53.461
4–9. fX1(x1) = \int_0^1 6x1²x2 dx2 = [3x1²x2²]_{x2=0}^{1} = 3x1²; 0 ≤ x1 ≤ 1
     fX2(x2) = \int_0^1 6x1²x2 dx1 = [2x1³x2]_{x1=0}^{1} = 2x2; 0 ≤ x2 ≤ 1

     fX1|x2(x1) = fX1,X2(x1, x2)/fX2(x2) = 6x1²x2/(2x2) = 3x1²; 0 ≤ x1 ≤ 1
     fX2|x1(x2) = fX1,X2(x1, x2)/fX1(x1) = 2x2; 0 ≤ x2 ≤ 1

     E(X1|x2) = \int_0^1 x1 fX1|x2(x1) dx1 = 3/4
     E(X2|x1) = \int_0^1 x2 fX2|x1(x2) dx2 = 2/3
2
4–10. (a) fX1(x1) = 2x1e^{−x1²}; x1 ≥ 0
          fX2(x2) = 2x2e^{−x2²}; x2 ≥ 0

      (b) fX1|x2(x1) = fX1,X2(x1, x2)/fX2(x2) = 2x1e^{−x1²}; x1 ≥ 0
          fX2|x1(x2) = 2x2e^{−x2²}; x2 ≥ 0

          Since fX1|x2(x1) = fX1(x1) and fX2|x1(x2) = fX2(x2), X1 and X2 are independent.

      (c) E(X1|x2) = E(X1) = \int_0^∞ x1(2x1e^{−x1²}) dx1 = \int_0^∞ 2x1²e^{−x1²} dx1

          Let u = x1, du = dx1, dv = 2x1e^{−x1²} dx1, v = −e^{−x1²}, and integrate by parts, so

          E(X1) = [−x1e^{−x1²}]_0^∞ + \int_0^∞ e^{−x1²} dx1 = 0 + √π/2

          Similarly, E(X2) = √π/2.

4–11. z = xy, t = x ⇒ x = t, y = z/t

      |∂(x, y)/∂(t, z)| = |det[[1, 0], [−z/t², 1/t]]| = 1/t

      so p(z, t) = g(t)h(z/t)|1/t|

      Thus ℓ(z) = \int_{−∞}^{∞} g(t)h(z/t)|1/t| dt

4–12. a = s1 · s2, t = s1, so s2 = a/t, s1 = t.

      The Jacobian is 1/t.

      And p(a, t) = (2t)(1/8)(a/t)(1/t) = a/(4t)

      ∴ ℓ(a) = \int_{a/4}^{1} a/(4t) dt = (a/4) ln(t)|_{a/4}^{1} = −(a/4) ln(a/4); 0 < a < 4

4–13. f(x, y) = g(x) · h(y)

      Z = X/Y, U = Y ⇒ y = u, x = uz

      |∂(x, y)/∂(u, z)| = |det[[z, u], [1, 0]]| = u

      p(z, u) = g(uz) · h(u)|u|

      ℓ(z) = \int_{−∞}^{∞} g(uz)h(u)|u| du

4–14. f(v, i) = 3e^{−3i} · e^{−v}

      r = v/i, u = i ⇒ v = ru, i = u

      The Jacobian is u, thus

      p(r, u) = (e^{−ru})(3e^{−3u})(u) = 3ue^{−u(3+r)}; u ≥ 0

      ∴ ℓ(r) = 3\int_0^∞ ue^{−u(3+r)} du = 3/(3 + r)²; r ≥ 0

4–15. Y = X1 + X2 + X3 + X4
E(Y ) = 80, V (Y ) = 4(9) = 36
4–16. If X1 and X2 are independent, then

      pX1,X2(x1, x2) = pX1(x1)pX2(x2) for all x1, x2.

      Thus,

      pX2|x1(x2) = pX1,X2(x1, x2)/pX1(x1) = pX1(x1)pX2(x2)/pX1(x1) = pX2(x2) for all x1, x2.

      Similarly, pX1|x2(x1) = pX1(x1).

      Now prove the converse. Assume that

      pX2|x1(x2) = pX2(x2) and pX1|x2(x1) = pX1(x1) for all x1, x2.

      Then

      pX1,X2(x1, x2) = pX2|x1(x2)pX1(x1) = pX2(x2)pX1(x1) for all x1, x2.

4–17. X2 = A + BX1 ⇒ E(X2) = A + B · E(X1)

      V(X2) = B² · V(X1), E(X1X2) = E(X1(A + BX1)) = AE(X1) + BE(X1²)

      So

      ρ² = [E(X1X2) − E(X1)E(X2)]² / [V(X1) · V(X2)]
         = [AE(X1) + BE(X1²) − E(X1)(A + B · E(X1))]² / [V(X1) · B²V(X1)]
         = [BE(X1²) − B(E(X1))²]² / [B²V(X1)²] = B²V(X1)² / B²V(X1)² = 1

      ρ = −1 if B < 0 and ρ = 1 if B > 0.

4–18. MX2 (t) = E(etX2 ) = E(et(A+BX1 ) ) = E(eAt · eBtX1 )


= eAt · E(eBtX1 ) = eAt · MX1 (Bt)
4–19. fX1(x1) = \int_{x1}^{1} 2 dx2 = 2(1 − x1); 0 ≤ x1 ≤ 1
      fX2(x2) = \int_0^{x2} 2 dx1 = 2x2; 0 ≤ x2 ≤ 1

      E(X1) = 2\int_0^1 x1(1 − x1) dx1 = 1/3
      E(X2) = 2\int_0^1 x2² dx2 = 2/3
      E(X1²) = 2\int_0^1 x1²(1 − x1) dx1 = 2[x1³/3 − x1⁴/4]_0^1 = 1/6
      E(X2²) = 2\int_0^1 x2³ dx2 = 1/2

      V(X1) = E(X1²) − (E(X1))² = 1/18
      V(X2) = E(X2²) − (E(X2))² = 1/18

      E(X1X2) = \int_0^1\int_0^{x2} 2x1x2 dx1 dx2 = \int_0^1 2x2[x1²/2]_0^{x2} dx2 = \int_0^1 x2³ dx2 = 1/4

      ρ = [E(X1X2) − E(X1)E(X2)] / √(V(X1)V(X2))
        = [1/4 − (1/3)(2/3)] / √((1/18)(1/18)) = 1/2

4–20. E(U) = A + BE(X1), E(V) = C + DE(X2)
      V(U) = B² · V(X1), V(V) = D²V(X2)

      E(UV) = E[(A + BX1)(C + DX2)]
            = AC + AD · E(X2) + BC · E(X1) + BD · E(X1X2)

      ρU,V = [E(UV) − E(U) · E(V)] / √(V(U) · V(V))
           = BD[E(X1X2) − E(X1)E(X2)] / (|BD|√(V(X1)V(X2))) = (BD/|BD|)ρX1,X2

4–21. X and Y are not independent. Note

      (a) pX,Y(5, 4) = 0 ≠ pX(5) · pY(4) = 1/50²

      (b) E(X) = 0.9, E(Y) = 1.02
          E(X²) = 2.38, E(Y²) = 2.14
          V(X) = 1.57, V(Y) = 1.0996
          E(XY) = 0.74

          ρ = [E(XY) − E(X)E(Y)] / √(V(X)V(Y)) = [0.74 − (0.9)(1.02)] / √((1.57)(1.0996)) = −0.135

4–22. (a) A sale will take place if Y ≥ X.

      (b) \iint_{x≤y} f(x, y) dx dy

      (c) E(Y | Y ≥ X) = \iint_{x≤y} y f(x, y) dx dy / \iint_{x≤y} f(x, y) dx dy

4–23. (a) fX(x) = \int_0^{√(1−x²)} (2/π) dy = (2/π)√(1 − x²); −1 < x < +1

          fY(y) = \int_{−√(1−y²)}^{√(1−y²)} (2/π) dx = (4/π)√(1 − y²); 0 < y < 1

      (b) fX|y(x) = fX,Y(x, y)/fY(y) = (2/π) / ((4/π)√(1 − y²)) = 1/(2√(1 − y²)); −√(1 − y²) < x < √(1 − y²)

          fY|x(y) = fX,Y(x, y)/fX(x) = (2/π) / ((2/π)√(1 − x²)) = 1/√(1 − x²); 0 < y < √(1 − x²)

      (c) E(X|y) = \int_{−√(1−y²)}^{√(1−y²)} x/(2√(1 − y²)) dx = [x²/2]_{−√(1−y²)}^{√(1−y²)} / (2√(1 − y²)) = 0

          E(Y|x) = \int_0^{√(1−x²)} y/√(1 − x²) dy = [y²/2]_0^{√(1−x²)} / √(1 − x²) = √(1 − x²)/2

4–24. X and Y continuous:

      E(X|Y = y) = \int_{−∞}^{∞} x fX|y(x) dx = \int_{−∞}^{∞} x fX,Y(x, y)/fY(y) dx
                 = \int_{−∞}^{∞} x fX(x)fY(y)/fY(y) dx = \int_{−∞}^{∞} x fX(x) dx = E(X)

      Reverse the roles of X and Y to show E(Y|X) = E(Y) if X and Y are independent.

4–25. E(X|y) = \int_{−∞}^{∞} x fX|y(x) dx = \int_{−∞}^{∞} x fX,Y(x, y)/fY(y) dx

      E[E(X|Y)] = \int_{−∞}^{∞} E(X|y) · fY(y) dy = \int_{−∞}^{∞} [\int_{−∞}^{∞} x fX,Y(x, y)/fY(y) dx] fY(y) dy
                = \int_{−∞}^{∞} x [\int_{−∞}^{∞} fX,Y(x, y) dy] dx = \int_{−∞}^{∞} x fX(x) dx = E(X).

      Reverse roles of X and Y to show E(E(Y|X)) = E(Y).

4–26. w = s + d; let y = s − d, so s = (w + y)/2, d = (w − y)/2.

      s = 10 → w + y = 20, s = 40 → w + y = 80
      d = 10 → w − y = 20, d = 30 → w − y = 60

      [Sketch omitted: the rectangle 10 < s < 40, 10 < d < 30 maps to the parallelogram in the
      (w, y) plane bounded by w + y = 20, w + y = 80, w − y = 20, and w − y = 60.]

      The Jacobian is

      J = det[[∂s/∂w, ∂s/∂y], [∂d/∂w, ∂d/∂y]] = det[[1/2, 1/2], [1/2, −1/2]] = −1/2

      ∴ fW,Y(w, y) = (1/30)(1/20)(1/2); 10 < (w + y)/2 < 40, 10 < (w − y)/2 < 30

      fW(w) = \int_{20−w}^{w−20} (1/1200) dy = (w − 20)/600; 20 < w < 40
            = \int_{w−60}^{w−20} (1/1200) dy = 20/600; 40 < w ≤ 50
            = \int_{w−60}^{80−w} (1/1200) dy = (70 − w)/600; 50 < w < 70
            = 0; ow
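The piecewise density for W can be sanity-checked numerically against simulation (a sketch added here, not part of the original solution):

```python
import random

def f_W(w):
    """Density of W = S + D derived above (S ~ U(10,40), D ~ U(10,30))."""
    if 20 < w < 40:
        return (w - 20) / 600
    if 40 <= w <= 50:
        return 20 / 600
    if 50 < w < 70:
        return (70 - w) / 600
    return 0.0

# Midpoint-rule integral over (20, 70): should be 1.
h = 0.01
total = sum(f_W(20 + (i + 0.5) * h) * h for i in range(int(50 / h)))
print(round(total, 4))  # 1.0

# Monte Carlo: P(W < 40) should be near the exact value 1/3.
random.seed(1)
n = 100_000
frac_below_40 = sum(random.uniform(10, 40) + random.uniform(10, 30) < 40
                    for _ in range(n)) / n
print(round(frac_below_40, 2))  # close to 1/3
```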
4–27. (a) fY(y) = \int_0^1 (x + y) dx = y + 1/2; 0 < y < 1

          fX|y(x) = fX,Y(x, y)/fY(y) = (x + y)/(y + 1/2); 0 < x < 1, 0 < y < 1

          E(X|Y = y) = \int_0^1 x(x + y)/(y + 1/2) dx = [x³/3 + x²y/2]_0^1 / (y + 1/2)
                     = (2 + 3y)/(3(1 + 2y)); 0 < y < 1.

      (b) fX(x) = \int_0^1 (x + y) dy = [xy + y²/2]_{y=0}^{1} = x + 1/2; 0 < x < 1

          E(X) = \int_0^1 [x² + (x/2)] dx = [(x³/3) + (x²/4)]_0^1 = 7/12

      (c) E(Y) = \int_0^1 [y² + (y/2)] dy = [(y³/3) + (y²/4)]_0^1 = 7/12

4–28. (a) \int_0^∞\int_0^∞ k(1 + x + y)/((1 + x)⁴(1 + y)⁴) dx dy = 1 ⇒ k = 9/2

      (b) fX(x) = (9/2)/(1 + x)⁴ \int_0^∞ dy/(1 + y)⁴ + (9/2)x/(1 + x)⁴ \int_0^∞ dy/(1 + y)⁴
                  + (9/2)/(1 + x)⁴ \int_0^∞ y dy/(1 + y)⁴
                = 3(3 + 2x)/(4(1 + x)⁴); x > 0

4–29. (a) k\int_0^∞\int_0^∞ (1 + x + y)^{−n} dx dy = 1, k\int_0^∞ [(1 + x + y)^{−n+1}/(−n + 1)]_0^∞ dy = 1

          so (k/(n − 1))\int_0^∞ (1 + y)^{1−n} dy = 1 ⇒ k/((n − 1)(n − 2)) = 1 ⇒ k = (n − 1)(n − 2)

      (b) FX,Y(x, y) = P(X ≤ x, Y ≤ y) = k\int_0^x\int_0^y (1 + s + t)^{−n} dt ds
                     = k[(1 + s + y)^{−n+2}/((−n + 2)(−n + 1)) − (1 + s)^{−n+2}/((−n + 2)(−n + 1))]_{s=0}^{x}
                     = 1 − (1 + x)^{2−n} − (1 + y)^{2−n} + (1 + x + y)^{2−n}; x > 0, y > 0

4–30. From Eq. 4–58,

      n ≥ p(1 − p)/(ε² · α) = (0.5)(0.5)/((0.05)²(0.05)) = 2000
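The bound evaluates as claimed (a check added here, not part of the original solution):

```python
# Chebyshev-based sample size from Eq. 4-58: n >= p(1-p) / (eps^2 * alpha).
def chebyshev_n(p=0.5, eps=0.05, alpha=0.05):
    return p * (1 - p) / (eps**2 * alpha)

print(round(chebyshev_n()))  # 2000
```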
2 2
4–31. (a) g(x, y) = (2xe−x )(2ye−y ) so X and Y are independent
(b) X and Y are not independent. For given X, Y may only assume values greater
than that value of X.
(c) See Prob. 4–29 with k = (4 − 1)(4 − 2) = 6 and n = 4.
fX (x) = 2(1 + x)−3 ; x > 0
fY (y) = 2(1 + y)−3 ; y > 0
fX,Y (x, y) 6= fX (x) · fY (y) so the variables are not independent.
1
4–32. The probability is 2/3 since P (X > Y > Z) = P (X < Y < Z) = 6
4–33. (a) P(0 ≤ X ≤ 1/2, 0 ≤ Y ≤ 1/2) = \int_0^{1/2}\int_0^{1/2} 1 dx dy = 1/4

      (b) P(X > Y) = \int_0^1\int_0^x 1 dy dx = \int_0^1 x dx = 1/2.

4–34. P(0 ≤ Y ≤ 1, Y ≤ X ≤ Y + 1/4 ≤ 1) = 7/32

      [Sketch omitted: the strip between x = y and x = y + 1/4 in the unit square, with x the
      request time and y the receipt time.]
4–35. (a) FZ(z) = P(Z ≤ z) = P(a + bX ≤ z) = P(X ≤ (z − a)/b) = FX((z − a)/b)

          fZ(z) = fX((z − a)/b)|1/b|.

      (b) FZ(z) = P(1/X ≤ z) = P(X ≥ 1/z) = 1 − FX(1/z)

          fZ(z) = fX(1/z)(1/z²)

      (c) FZ(z) = P(ln(X) ≤ z) = P(X ≤ e^z) = FX(e^z)

          fZ(z) = fX(e^z)e^z

      (d) FZ(z) = P(e^X ≤ z) = P(X ≤ ln z) = FX(ln z)

          fZ(z) = fX(ln(z))|1/z|

      The range space for Z is determined from the range space of X and the
      definition of the transformation.

Chapter 5

5–1. The probability mass function is

x p(x)
0 (1 − p)4
1 4p(1 − p)3
2 6p2 (1 − p)2
3 4p3 (1 − p)
4 p4
otherwise 0
5–2. P(X ≥ 5) = \sum_{x=5}^{6} \binom{6}{x}(0.95)^x(0.05)^{6−x}
             = 6(0.95)⁵(0.05) + (0.95)⁶
             = 0.9672
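The binomial tail can be checked directly (a numerical sketch added here, not part of the original solution):

```python
from math import comb

# P(X >= 5) for X ~ Bin(6, 0.95), as in 5-2.
p = sum(comb(6, x) * 0.95**x * 0.05**(6 - x) for x in range(5, 7))
print(round(p, 4))  # 0.9672
```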

5–3. Assume independence and let W represent the number of orders received.

     P(W ≥ 4) = \sum_{w=4}^{12} \binom{12}{w}(0.5)^w(0.5)^{12−w} = (0.5)^{12}\sum_{w=4}^{12} \binom{12}{w}
              = 1 − (0.5)^{12}\sum_{w=0}^{3} \binom{12}{w} = 0.9270

5–4. Assume customer decisions are independent.

     P(X ≥ 10) = 1 − P(X ≤ 9) = 1 − \sum_{x=0}^{9} \binom{20}{x}(1/3)^x(2/3)^{20−x} = 0.0918

5–5. P(X > 2) = 1 − P(X ≤ 2) = 1 − \sum_{x=0}^{2} \binom{50}{x}(0.02)^x(0.98)^{50−x} = 0.0784.
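Similarly for 5–5 (a check added here, not part of the original solution):

```python
from math import comb

# P(X > 2) for X ~ Bin(50, 0.02), as in 5-5.
p_le_2 = sum(comb(50, x) * 0.02**x * 0.98**(50 - x) for x in range(3))
print(round(1 - p_le_2, 4))  # 0.0784
```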

5–6. MX(t) = E[e^{tX}] = \sum_{x=0}^{n} e^{tx}\binom{n}{x}p^x(1 − p)^{n−x} = (pe^t + q)^n, where q = 1 − p

     E[X] = M′X(0) = [n(pe^t + q)^{n−1}pe^t]_{t=0} = np

     E[X²] = M″X(0) = np[e^t(n − 1)(pe^t + q)^{n−2}(pe^t) + (pe^t + q)^{n−1}e^t]_{t=0}
           = (np)² − np² + np

     V(X) = E[X²] − (E[X])² = n²p² − np² + np − n²p² = np(1 − p) = npq

5–7. P(p̂ ≤ 0.03) = P(X/100 ≤ 0.03) = P(X ≤ 3)
                 = \sum_{x=0}^{3} \binom{100}{x}(0.01)^x(0.99)^{100−x} = 0.9816

5–8. P(p̂ > p + √(pq/n)) = P(p̂ > 0.07 + √((0.07)(0.93)/200))
                        = P(X > 200(0.088)) = P(X > 17.6) = 1 − P(X ≤ 17)
                        = 1 − \sum_{x=0}^{17} \binom{200}{x}(0.07)^x(0.93)^{200−x} = 0.1649

     By two standard deviations,

     P(p̂ > p + 2√(pq/n)) = P(p̂ > 0.106) = P(X > 21.2) = 1 − P(X ≤ 21)
                         = 1 − \sum_{x=0}^{21} \binom{200}{x}(0.07)^x(0.93)^{200−x} = 0.0242

     By three standard deviations,

     P(p̂ > p + 3√(pq/n)) = P(X > 24.8) = 1 − P(X ≤ 24)
                         = 1 − \sum_{x=0}^{24} \binom{200}{x}(0.07)^x(0.93)^{200−x} = 0.0036

5–9. P (X = 5) = (0.95)4 (0.05) = 0.0407


5–10. A: Successful on first three calls, B: Unsuccessful on fourth call

P (B|A) = P (B) = 0.90 if A and B are independent


5–11.
P(X = 5) = p^4(1 − p) = f(p)

df(p)/dp = 4p³ − 5p⁴ = 0 ⇒ p = 4/5

(Plot of f(p) versus p on [0, 1]: f rises to its maximum of about 0.082 at p = 0.8, then falls to 0 at p = 1.)

5–12. (a) X = trials required up to and including the first sale

C(X) = 1000X + 3000(X − 1) = 4000X − 3000

E[C(X)] = 4000E[X] − 3000 = 4000(1/0.1) − 3000 = 37000

(b) Since $37000 > $15000, the trips should not be undertaken.

(c)
P(C(X) > 100000) = P(4000X − 3000 > 100000)
                 = P(X > 103000/4000) = P(X > 25.75)
                 = 1 − P(X ≤ 25)
                 = 1 − Σ_{x=1}^{25} (0.90)^{x−1}(0.10)
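A numerical check of both the expected cost and the tail probability, using the geometric pmf directly (a sketch, not part of the original solution):

```python
p = 0.1

# (a) expected cost: E[C(X)] = 4000 E[X] - 3000 with E[X] = 1/p
expected_cost = 4000 * (1 / p) - 3000
print(expected_cost)  # 37000.0

# (c) P(X > 25) = 1 - sum of the geometric pmf over x = 1..25
p_tail = 1 - sum((1 - p)**(x - 1) * p for x in range(1, 26))
print(round(p_tail, 4))  # 0.0718
```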

5–13.
M_X(t) = pe^t / (1 − qe^t), where q = 1 − p

E[X] = M′_X(0) = [((1 − qe^t)(pe^t) + (pe^t)(qe^t)) / (1 − qe^t)²]_{t=0}
     = ((1 − q)p + pq) / (1 − q)² = p/p² = 1/p

E[X²] = M″_X(0) = [((1 − qe^t)²(pe^t) + 2pe^t(1 − qe^t)(qe^t)) / (1 − qe^t)⁴]_{t=0}
      = (1 + q)p / (1 − q)³ = (1 + q)/p²

V(X) = q/p²

5–14.
P(X ≤ 2) = P(X = 1) + P(X = 2) = 0.8 + (0.2)(0.8) = 0.96

P(X ≤ 3) = Σ_{x=1}^{3} (0.2)^{x−1}(0.8) = 0.992

5–15.

P (X = 36) = (0.95)35 (0.05) = 0.0083

5–16. (a)
P(X = 8) = C(7, 2)(0.1)³(0.9)⁵ = 0.0124

(b)
P(X > 8) = Σ_{x=9}^{∞} C(x − 1, 2)(0.1)³(0.9)^{x−3}

5–17.
P(X = 4) = C(3, 1)(0.8)²(0.2)² = 0.0768

P(X < 4) = P(X = 2) + P(X = 3) = (0.8)² + C(2, 1)(0.8)²(0.2) = 0.896

5–18. Suppose X₁, X₂, . . . , Xᵣ are independent geometric random variables, each with
parameter p. X₁ is the number of trials to the first success, X₂ the number of trials
from the first to the second, etc. Let

X = X₁ + X₂ + . . . + Xᵣ

The moment generating function for the geometric is pe^t/(1 − qe^t), so

M_X(t) = Π_{i=1}^{r} M_{Xᵢ}(t) = [pe^t/(1 − qe^t)]^r

E[X] = M′_X(t)|_{t=0} = r/p

We could also have obtained this result as follows.

E[X] = Σ_{i=1}^{r} E[Xᵢ] = r(1/p) = r/p

Continuing,

V(X) = [M″_X(t)|_{t=0}] − (r/p)² = rq/p²

We could also have obtained this result as follows.

V(X) = Σ_{i=1}^{r} V(Xᵢ) = rq/p²

5–19.
E[X] = r/p = 5/0.8 = 6.25,   V(X) = rq/p² = (5)(0.2)/(0.8)² = 1.5625

5–20. X = Mission number on which 4th hit occurs.

p(x) = C(x − 1, 3)(0.8)⁴(0.2)^{x−4},  x = 4, 5, 6, . . .
     = 0 otherwise

P(X ≤ 7) = Σ_{x=4}^{7} C(x − 1, 3)(0.8)⁴(0.2)^{x−4}

5–21. (X, Y, Z) ∼ multinomial (n = 3, p₁ = 0.4, p₂ = 0.3, p₃ = 0.3). The probability that
one company receives all orders is

P(3, 0, 0) + P(0, 3, 0) + P(0, 0, 3)
= (3!/3!0!0!)(0.4)³(0.3)⁰(0.3)⁰ + (3!/0!3!0!)(0.4)⁰(0.3)³(0.3)⁰ + (3!/0!0!3!)(0.4)⁰(0.3)⁰(0.3)³
= 0.4³ + 0.3³ + 0.3³ ≈ 0.118

5–22. (a) (X₁, X₂, X₃, X₄) ∼ multinomial (n = 5, p₁ = p₂ = p₃ = p₄ = 1/4). Therefore,
P(5, 0, 0, 0) + P(0, 5, 0, 0) + P(0, 0, 5, 0) + P(0, 0, 0, 5) is the probability that
one company gets all five. That is,

4[(5!/5!0!0!0!)(1/4)⁵(1/4)⁰(1/4)⁰(1/4)⁰] = 1/256

(b)
1 − 4(60)(1/4)⁵ ≈ 0.7656

5–23.
P(Y₁ = 4, Y₂ = 1, Y₃ = 3, Y₄ = 2) = (10!/4!1!3!2!)(0.2)⁴(0.2)¹(0.2)³(0.4)² ≈ 0.005

5–24.
P(Y₁ = 0, Y₂ = 0, Y₃ = 0, Y₄ = 10) = (10!/0!0!0!10!)(0.2)⁰(0.2)⁰(0.2)⁰(0.4)¹⁰

P(Y₁ = 5, Y₂ = 0, Y₃ = 0, Y₄ = 5) = (10!/5!0!0!5!)(0.2)⁵(0.2)⁰(0.2)⁰(0.4)⁵

5–25. (a)
P(X ≤ 2) = Σ_{x=0}^{2} C(4, x)C(21, 5 − x)/C(25, 5) ≈ 0.98

(b)
P(X ≤ 2) = Σ_{x=0}^{2} C(5, x)(4/25)^x (21/25)^{5−x} ≈ 0.97

5–26. The approximation improves as n/N decreases. n = 5, N = 100 is a better condition
than n = 5, N = 25.

5–27. We want the smallest n such that

P(X ≥ 1) = 1 − C(7, 0)C(18, n)/C(25, n) ≥ 0.95  ⇔  C(18, n)/C(25, n) ≤ 0.05.

By trial and error, we find that n = 8 does the job.

We could instead use the binomial approximation; now we want n such that

0.05 ≥ C(n, 0)(7/25)⁰(18/25)^n = (18/25)^n.

We find that n ≈ 9.
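The trial-and-error step can be automated; a sketch of both searches (not part of the original solution):

```python
from math import comb

# Exact (hypergeometric): smallest n with C(18, n)/C(25, n) <= 0.05
n = 1
while comb(18, n) / comb(25, n) > 0.05:
    n += 1
print(n)  # 8

# Binomial approximation: tail values of (18/25)^n near the target
print(round((18 / 25)**9, 4), round((18 / 25)**10, 4))  # 0.052 0.0374
```

Note that (18/25)⁹ ≈ 0.052 sits essentially at the 0.05 target, which is why the text settles on n ≈ 9 for the approximation.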
5–28.
M_X(t) = E[e^{tX}] = Σ_{x=0}^{∞} e^{tx} e^{−c}c^x/x! = e^{−c} Σ_{x=0}^{∞} (ce^t)^x/x! = e^{−c} e^{ce^t} = e^{c(e^t − 1)}

5–29.
P(X < 10) = P(X ≤ 9) = Σ_{x=0}^{9} e^{−25}(25)^x/x! ≈ 0.0002

5–30.
P(X > 20) = P(X ≥ 21) = Σ_{x=21}^{∞} e^{−10}(10)^x/x!
          = 1 − P(X ≤ 20) = 1 − Σ_{x=0}^{20} e^{−10}(10)^x/x!
          = 0.002

5–31.
P(X > 5) = P(X ≥ 6)
         = 1 − P(X ≤ 5) = 1 − Σ_{x=0}^{5} e^{−4}4^x/x!
         ≈ 0.2149

5–32. Mean count rate = (1 − p)c. Therefore,

P(Yₜ = y) = e^{−(1−p)ct}[(1 − p)ct]^y/y!,  y = 0, 1, 2, . . .

5–33. Using a Poisson model,

P(X ≤ 3) = Σ_{x=0}^{3} e^{−λ}λ^x/x!,  λ = 15000(0.002) = 30

P(X ≥ 5) = Σ_{x=5}^{∞} e^{−30}(30)^x/x! = 1 − Σ_{x=0}^{4} e^{−30}(30)^x/x!

5–34. Y = Number of requests.

(a) P(Y > 3) = 1 − P(Y ≤ 3) = 1 − Σ_{y=0}^{3} e^{−2}2^y/y!

(b) E[Y] = c = 2

(c) P(Y ≤ y) ≥ 0.9, so y = 4 and P(Y ≤ 4) = 0.9473

(d) X = Number serviced.

y           x   p(x)          x p(x)
0           0   e^{−2}        0
1           1   2e^{−2}       2e^{−2}
2           2   2e^{−2}       4e^{−2}
3 or more   3   1 − 5e^{−2}   3 − 15e^{−2}

E[X] = 1.78

(e) Let M = number of crews going to central stores. Then M = Y − X

E[M] = E[Y] − E[X] = 2 − 1.78 = 0.22

5–35. Using a Poisson model,

P(X < 3) = P(X ≤ 2) = Σ_{x=0}^{2} e^{−2.5}(2.5)^x/x! ≈ 0.544

5–36. Let Y = No. Boarding, X = No. Recorded

Y   0 1 2 3 4 5 6 7 8 9 ≥ 10
X   0 1 2 3 4 5 6 7 8 9 10

p_X(x) = e^{−c}c^x/x!,  x = 0, 1, 2, . . . , 9

p_X(10) = Σ_{i=10}^{∞} e^{−c}c^i/i! = 1 − Σ_{i=0}^{9} e^{−c}c^i/i!

5–37. (a) Let X denote the number of errors on 50 pages. Then

X ∼ Binomial(5, 50/200) = Binomial(5, 1/4).

This implies that

P(X ≥ 1) = 1 − P(X = 0) = 1 − C(5, 0)(1/4)⁰(3/4)⁵ = 0.763.

(b) Now X ∼ Binomial(5, n/200), where n is the number of pages sampled.
We want the smallest n such that

Σ_{i=3}^{5} C(5, i)(n/200)^i ((200 − n)/200)^{5−i} ≥ 0.90

By trial and error, we find that n = 151 does the trick.

We could also have done this problem using a Poisson approximation. For (a),
we would use λ = 0.025 errors / page with 50 pages. Then c = 50(0.025) =
1.25, and we would eventually obtain P(X ≥ 1) = 1 − e^{−1.25}(1.25)⁰/0! ≈ 0.7135,
which is a bit off of our exact answer. For (b), we would take c = n(0.025),
eventually yielding n = 160 after trial and error.
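The trial-and-error search in part (b) can be sketched in a few lines (an illustration, not part of the original solution):

```python
from math import comb

def p_at_least_3_of_5(n):
    # P(X >= 3) for X ~ Binomial(5, n/200)
    p = n / 200
    return sum(comb(5, i) * p**i * (1 - p)**(5 - i) for i in range(3, 6))

n = 1
while p_at_least_3_of_5(n) < 0.90:
    n += 1
print(n)  # 151
```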
5–38.
P(X = 0) = e^{−c}c⁰/0!  with c = 10000(0.0001) = 1,

P(X = 0) = e^{−1} = 0.3679

and

P(X ≥ 2) = 1 − P(X ≤ 1) = 0.265

5–39. X ∼ Poisson with α = 10(0.01) = 0.10

P(X ≥ 2) = 1 − P(X ≤ 1) = 0.0047

5–40. Kendall and Stuart state: “the liability of individuals to accident varies.” That
is, the individuals who compose a population have different degrees of accident
proneness.
5–41. Use Table XV and scaling by 10−5 .
(a) From Col. 3 of Table XV,
Realization 1 Realization 2
u1 = 0.01536 < 0.5 ⇒ x1 = 1 u1 = 0.63661 > 0.5 ⇒ x1 = 0
u2 = 0.25595 < 0.5 ⇒ x2 = 1 u2 = 0.53342 > 0.5 ⇒ x2 = 0
u3 = 0.22527 < 0.5 ⇒ x3 = 1 u3 = 0.88231 > 0.5 ⇒ x3 = 0
u4 = 0.06243 < 0.5 ⇒ x4 = 1 u4 = 0.48235 < 0.5 ⇒ x4 = 1
u5 = 0.81837 > 0.5 ⇒ x5 = 0 u5 = 0.52636 > 0.5 ⇒ x5 = 0
u6 = 0.11008 < 0.5 ⇒ x6 = 1 u6 = 0.87529 > 0.5 ⇒ x6 = 0
u7 = 0.56420 > 0.5 ⇒ x7 = 0 u7 = 0.71048 > 0.5 ⇒ x7 = 0
u8 = 0.05463 < 0.5 ⇒ x8 = 1 u8 = 0.51821 > 0.5 ⇒ x8 = 0
x=6 x=1
Continue to get three more realizations.

(b) Use Col. 4 of Table XV (p = 0.4).


Realization 1
u1 = 0.02011 ≤ 0.4 ⇒ x = 1
Realization 2
u1 = 0.85393 > 0.4
u2 = 0.97265 > 0.4
u3 = 0.61680 > 0.4
u4 = 0.16656 < 0.4 ⇒ x = 4
Realization 3
u1 = 0.42751 > 0.4
u2 = 0.69994 > 0.4
u3 = 0.07972 < 0.4 ⇒ x = 3
Continue to get seven more realizations of X.

(c) λt = c = 0.15, e−0.15 = 0.8607. Using Col. 6 of Table XV,


Realization ui product < e−0.15 ? x
#1 u1 = 0.91646 0.91646 No
u2 = 0.89198 0.81746 Yes x=1
#2 u1 = 0.64809 0.64809 Yes x=0
#3 u1 = 0.16376 0.16376 Yes x=0
#4 u1 = 0.91782 0.91782 No
u2 = 0.53498 0.49102 Yes x=1
#5 u1 = 0.31016 0.31016 Yes x=0

5–42. X ∼ Geometric with p = 1/6.

y = x^{1/3}

Using Col. 5 of Table XV, we obtain the following realizations.

#1
u₁ = 0.81647 > 1/6
u₂ = 0.30995 > 1/6
u₃ = 0.76393 > 1/6
u₄ = 0.07856 < 1/6 ⇒ x = 4, y = 4^{1/3} = 1.587

#2
u₁ = 0.06121 < 1/6 ⇒ x = 1, y = 1

Continue to get additional realizations.
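The same trial-counting scheme is easy to run at scale; a sketch using Python's `random` generator in place of the table uniforms (seed and sample size are arbitrary):

```python
import random

def geometric_realization(p, rng):
    # count Bernoulli(p) trials up to and including the first success
    x = 1
    while rng.random() >= p:
        x += 1
    return x

rng = random.Random(12345)
xs = [geometric_realization(1 / 6, rng) for _ in range(200000)]
ys = [x ** (1 / 3) for x in xs]  # the transformed variates y = x^(1/3)
print(round(sum(xs) / len(xs), 2))  # sample mean, close to E[X] = 1/p = 6
```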



Chapter 6
6–1.
f_X(x) = 1/4;  0 < x < 4

P(1/2 < X < 7/4) = ∫_{1/2}^{7/4} dx/4 = 5/16,   P(9/4 < X < 27/8) = ∫_{9/4}^{27/8} dx/4 = 9/32

6–2.
f_X(x) = 4/34;  143/4 < x < 177/4

P(X < 40) = ∫_{143/4}^{40} (4/34) dx = 1/2

P(40 < X < 42) = ∫_{40}^{42} (4/34) dx = 4/17

6–3.
f_X(x) = 1/2,  0 ≤ x ≤ 2

F_Y(y) = P(Y ≤ y) = P(X ≤ (y − 5)/2) = ∫_0^{(y−5)/2} dx/2 = (y − 5)/4

So

f_Y(y) = 1/4,  5 < y < 9

6–4. The p.d.f. of profit, X, is

f_X(x) = 1/2000;  0 < x < 2000

Y = Broker's Fees = 50 + 0.06X

F_Y(y) = P(50 + 0.06X ≤ y)
       = P(X ≤ (y − 50)/0.06)
       = ∫_0^{(y−50)/0.06} dx/2000 = (y − 50)/120

f_Y(y) = 1/120;  50 < y < 170

6–5.
M_X(t) = E(e^{tX}) = ∫_α^β e^{tx} dx/(β − α)
       = [e^{tx}/((β − α)t)]_α^β
       = (e^{tβ} − e^{tα})/((β − α)t)

Using L'Hôpital's rule when necessary, we obtain

M′_X(t) = (1/(β − α))[t⁻¹βe^{tβ} − t⁻²e^{tβ} − t⁻¹αe^{tα} + t⁻²e^{tα}]

E(X) = lim_{t→0} M′_X(t) = (β + α)/2

and

E(X²) = lim_{t→0} M″_X(t) = (1/(β − α)) · (β³ − α³)/3

V(X) = E(X²) − [E(X)]² = (β − α)²/12

6–6.
E(X) = (β + α)/2 = 0 ⇒ β + α = 0

V(X) = (β − α)²/12 = 1 ⇒ β² − 2αβ + α² = 12

⇒ α = −√3, β = +√3

6–7. The CDF for Y is

y           F_Y(y)
y < 1       0
1 ≤ y < 2   0.3
2 ≤ y < 3   0.5
3 ≤ y < 4   0.9
y ≥ 4       1

Generate realizations of uᵢ ∼ Uniform[0,1] random numbers as described in Section
6–6; use these in the inverse as yᵢ = F_Y⁻¹(uᵢ), i = 1, 2, . . .. For example, if u₁ =
0.623, then y₁ = F_Y⁻¹(0.623) = 3.
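The inverse-transform step for this discrete CDF can be sketched directly (an illustration, not part of the original solution):

```python
def f_inverse(u):
    # inverse of the tabulated CDF: smallest y with F_Y(y) >= u
    for y, F in [(1, 0.3), (2, 0.5), (3, 0.9), (4, 1.0)]:
        if u <= F:
            return y

print(f_inverse(0.623))                              # 3
print([f_inverse(u) for u in (0.10, 0.45, 0.95)])    # [1, 2, 4]
```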

6–8.
f_X(x) = 1/4;  0 < x < 4
       = 0;  otherwise

The roots of y² + 4xy + (x + 1) = 0 are real if 16x² − 4(x + 1) ≥ 0, or if

(x − 1/8)² − 17/64 ≥ 0,

or where x ≤ (1 − √17)/8 or x ≥ (1 + √17)/8.

P(X ≤ (1 − √17)/8) = 0

P(X ≥ (1 + √17)/8) = ∫_{(1+√17)/8}^{4} dx/4 = (31 − √17)/32

6–9.
M_X(t) = ∫_0^∞ e^{tx} λe^{−λx} dx = λ ∫_0^∞ e^{x(t−λ)} dx,

which converges if t < λ. Thus for t < λ,

M_X(t) = λ[e^{x(t−λ)}/(t − λ)]_{x=0}^{∞} = λ/(λ − t) = (1 − t/λ)⁻¹

E(X) = M′_X(0) = [λ(λ − t)⁻²]_{t=0} = 1/λ

E(X²) = M″_X(0) = [2λ(λ − t)⁻³]_{t=0} = 2/λ²

V(X) = E(X²) − [E(X)]² = 2/λ² − 1/λ² = 1/λ²

6–10. X denotes life length,

E(X) = 1/λ = 3 ⇒ λ = 1/3

P* denotes profit

P* = 1000,  X > 1
   = 750,   X ≤ 1

E(P*) = 1000P(X > 1) + 750P(X ≤ 1)
      = 1000e^{−1/3} + 750(1 − e^{−1/3}) = $929.13

6–11.
P(X < 1/2) = ∫_0^{1/2} (1/3)e^{−x/3} dx = 1 − e^{−(1/3)(1/2)} = 0.154

or 15.4% experience failure in the first six months.

6–12. P* = Profit, T = Life Length

P* = rY − dY,  T > Y
   = rT − dY,  T ≤ Y

E(P*) = (rY − dY)P(T ≥ Y) − dY P(T ≤ Y) + r ∫_0^Y tθe^{−θt} dt
      = r(θ⁻¹ − θ⁻¹e^{−θY}) − dY

dE(P*)/dY = re^{−θY} − d = 0 ⇒ Y = −θ⁻¹ ln(d/r)

For Y to be positive, we need 0 < d/r < 1, i.e., d < r.

6–13. X = Life Length,

E(X) = 1/λ = 3 ⇒ λ = 1/3

P(X < 1) = 1 − e^{−1/3} = 0.283

28.3% of policies result in a claim.

6–14. No.

P(X ≤ 2) = 1 − e^{−2λ}
P(X ≤ 3) = 1 − e^{−3λ}

1 − e^{−2λ} = (2/3)(1 − e^{−3λ}) ⇒ 1 = 3e^{−2λ} − 2e^{−3λ}

Only λ = 0 satisfies this condition; but we must have λ > 0, so there is no value
of λ for which

P(X ≤ 2) = (2/3)P(X ≤ 3)

6–15.
C_I  = C,   X > 15;  = C + Z,  X ≤ 15
C_II = 3C,  X > 15;  = 3C + Z, X ≤ 15

E(C_I) = CP(X > 15) + (C + Z)P(X ≤ 15)
       = Ce^{−15/25} + (C + Z)(1 − e^{−15/25}) ≈ C + 0.4512Z

E(C_II) = 3CP(X > 15) + (3C + Z)P(X ≤ 15)
        = 3Ce^{−15/35} + (3C + Z)(1 − e^{−15/35}) ≈ 3C + 0.3486Z

E(C_II) − E(C_I) = 2C − 0.1026Z

which favors process I if C > 0.0513Z.
6–16.

P (X > x+s|X > x) = P (X > s) = P (X > 10000) = e−10000/20000 = 0.6064

and P (X < 30000|X > 20000) = 0.3936.



6–17.
Γ(p) = ∫_0^∞ x^{p−1}e^{−x} dx

Let x = y² ⇒ dx/dy = 2y. So Γ(p) = 2∫_0^∞ y^{2p−1}e^{−y²} dy and Γ(1/2) = 2∫_0^∞ e^{−y²} dy.

[Γ(1/2)]² = 4 ∫_0^∞ e^{−y²} dy · ∫_0^∞ e^{−x²} dx
          = 4 ∫_0^∞ ∫_0^∞ e^{−(x²+y²)} dx dy

Let x = ρ cos(θ) and y = ρ sin(θ). So

[Γ(1/2)]² = 4 ∫_0^{π/2} ∫_0^∞ ρe^{−ρ²} dρ dθ
          = 2 ∫_0^{π/2} dθ = π

So Γ(1/2) = √π.

6–18. Integrate by parts with u = x^{n−1}, dv = e^{−x} dx:

Γ(n) = ∫_0^∞ x^{n−1}e^{−x} dx = [−e^{−x}x^{n−1}]_0^∞ + (n − 1) ∫_0^∞ x^{n−2}e^{−x} dx
     = 0 + (n − 1) ∫_0^∞ x^{n−2}e^{−x} dx = (n − 1) · Γ(n − 1)

Repeatedly using the approach above, we get

Γ(n) = (n − 1) · Γ(n − 1) = (n − 1)(n − 2) · Γ(n − 2) = · · · = (n − 1)(n − 2) · · · Γ(1).

Since Γ(1) = ∫_0^∞ e^{−x} dx = 1, we have Γ(n) = (n − 1)!
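Both identities are easy to spot-check numerically with the standard library (a sketch, not part of the original solution):

```python
from math import gamma, factorial, pi

# Gamma(n) = (n - 1)! for positive integers n
for n in range(1, 10):
    assert abs(gamma(n) - factorial(n - 1)) < 1e-6

# Gamma(1/2)^2 = pi
print(abs(gamma(0.5)**2 - pi) < 1e-12)  # True
```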

6–19.
Y = X₁ + · · · + X₁₀

g(xᵢ) = 7e^{−7xᵢ} if xᵢ > 0;  0 otherwise

P(Y > 1) = ∫_1^∞ (7/Γ(10))(7y)⁹e^{−7y} dy
         = Σ_{k=0}^{9} e^{−7·1}(7)^k/k! = 0.8305

6–20.
λ = 6; t = 4 ⇒ λt = 24

P(X ≥ 24) = 1 − P(X ≤ 23) = 1 − Σ_{x=0}^{23} e^{−24}(24)^x/x!

6–21.
M_X(t) = E(e^{tX}) = ∫_0^∞ e^{tx} (λ/Γ(r))(λx)^{r−1}e^{−λx} dx
       = (λ^r/Γ(r)) ∫_0^∞ x^{r−1}e^{−x(λ−t)} dx

The integral converges if (λ − t) > 0, or λ > t. Let u = x(λ − t), dx/du = 1/(λ − t). So

M_X(t) = (λ^r/Γ(r)) ∫_0^∞ (u/(λ − t))^{r−1} e^{−u} (λ − t)⁻¹ du
       = (λ/(λ − t))^r · (1/Γ(r)) ∫_0^∞ u^{r−1}e^{−u} du
       = (λ/(λ − t))^r · (1/Γ(r)) · Γ(r) = (λ/(λ − t))^r
       = (1 − (t/λ))^{−r}, where λ > t

6–22.
P(Y > 24) = ∫_{24}^∞ (0.25/Γ(4))(0.25y)³e^{−0.25y} dy
          = Σ_{k=0}^{3} e^{−0.25·24}(0.25·24)^k/k! = 0.1512

6–23. E(X) = r/λ = 40, V(X) = r/λ² = 400

λ = 0.1, r = 4

P(X < 20) = ∫_0^{20} (0.1/Γ(4))(0.1x)³e^{−0.1x} dx
          = 1 − Σ_{k=0}^{3} e^{−0.1·20}(0.1·20)^k/k! = 0.1429

P(X < 60) = 1 − Σ_{k=0}^{3} e^{−0.1·60}(0.1·60)^k/k! = 0.8488

6–24.
E(X) = (λ^r/Γ(r)) ∫_0^∞ x(x − u)^{r−1}e^{−λ(x−u)} dx

Let y = λ(x − u) ⇒ dx/dy = 1/λ

E(X) = (λ^{r−2}/Γ(r)) ∫_0^∞ (y + λu)(y/λ)^{r−1}e^{−y} dy
     = (1/(λΓ(r))) ∫_0^∞ y^r e^{−y} dy + (λu/(λΓ(r))) ∫_0^∞ y^{r−1}e^{−y} dy
     = Γ(r + 1)/(λΓ(r)) + uΓ(r)/Γ(r)
     = r/λ + u

6–26.
f_X(x) = (Γ(λ + r)/(Γ(λ)Γ(r))) x^{λ−1}(1 − x)^{r−1}

λ = r = 1 gives

f_X(x) = (Γ(2)/(Γ(1)Γ(1))) x⁰(1 − x)⁰
       = 1 if 0 < x < 1;  0 otherwise

6–27. λ = 2, r = 1:

f_X(x) = (Γ(3)/(Γ(2)Γ(1))) x(1 − x)⁰ = 2x if 0 < x < 1;  0 otherwise

λ = 1, r = 2:

f_X(x) = (Γ(3)/(Γ(1)Γ(2))) x⁰(1 − x) = 2(1 − x) if 0 < x < 1;  0 otherwise

6–29. See solution to 3-22.


6–30.
E(X) = ∫_0^∞ x (β/δ)((x − γ)/δ)^{β−1} e^{−((x−γ)/δ)^β} dx

First let y = (x − γ)/δ ⇒ dx = δ dy:

E(X) = ∫_0^∞ (δy + γ)(β/δ) y^{β−1} e^{−y^β} δ dy
     = β ∫_0^∞ (δy + γ) y^{β−1} e^{−y^β} dy

Let u = y^β ⇒ dy = β⁻¹y^{−β+1} du:

E(X) = β ∫_0^∞ (δu^{1/β} + γ) u^{1−1/β} e^{−u} β⁻¹ u^{1/β−1} du
     = γ ∫_0^∞ e^{−u} du + δ ∫_0^∞ u^{1/β} e^{−u} du
     = γ + δΓ(1 + 1/β)

Using the same approach with E(X²) = ∫_0^∞ x² (β/δ)((x − γ)/δ)^{β−1} e^{−((x−γ)/δ)^β} dx
and the same two substitutions,

E(X²) = δ²Γ(1 + 2/β) + 2γδΓ(1 + 1/β) + γ²

So

V(X) = E(X²) − [E(X)]² = δ²[Γ(1 + 2/β) − Γ²(1 + 1/β)]

6–31.
F(x) = 1 − e^{−((x−γ)/δ)^β}

F(1.5) = 1 − e^{−((1.5−1)/0.5)²} ≈ 0.63

6–32.
F(x) = 1 − e^{−((x−0)/400)^{1/3}}

1 − F(600) = e^{−(600/400)^{1/3}} ≈ 0.32

6–33.
F(x) = 1 − e^{−((x−0)/400)^{1/2}}

1 − F(800) = e^{−(800/400)^{1/2}} = e^{−√2} ≈ 0.24

6–34. The graphs are identical to those shown in Figure 6–8.

6–35.
F(x) = 1 − e^{−((x−0)/200)^{1/4}}

(a) 1 − F(1000) = e^{−(1000/200)^{1/4}} = e^{−5^{1/4}} ≈ 0.22

(b) E(X) = 0 + 200Γ(1 + 4) = 200Γ(5) = 200 · 24 = 4800

6–36. P* = Profit

P* = $100;  x ≥ 8760
   = −$50;  x < 8760

E(P*) = −50 ∫_0^{8760} (1/20000)e^{−x/20000} dx + 100 ∫_{8760}^∞ (1/20000)e^{−x/20000} dx
      = −50(1 − e^{−876/2000}) + 100e^{−876/2000}
      = $46.80/set

6–37. r/λ = 20

(r/λ²)^{1/2} = 10

r = 4, λ = 0.2

P(X ≤ 15) = F(15) = 1 − Σ_{k=0}^{3} e^{−3}3^k/k! = 0.3528

6–38. (a) Use Table XV, Col. 1 with scaling and Equation 6–35.

u₁ = 0.10480    x₁ = 10 + u₁(10) = 11.0480
u₂ = 0.22368    x₂ = 10 + u₂(10) = 12.2368
u₃ = 0.24130    x₃ = 10 + u₃(10) = 12.4130
  ⋮
u₁₀ = 0.85475   x₁₀ = 10 + u₁₀(10) = 18.5475

(b) Use Table XV, Col. 2; xᵢ = −50000 ln(1 − uᵢ)

u₁ = 0.15011    x₁ = −50000 ln(0.84989) = 8132.42
u₂ = 0.46573    x₂ = −50000 ln(0.53427) = 31345.51
u₃ = 0.48360    x₃ = −50000 ln(0.51640) = 33043.68
u₄ = 0.93093    x₄ = −50000 ln(0.06907) = 133631.74
u₅ = 0.39975    x₅ = −50000 ln(0.60025) = 25520.45

(c) a = √3 = 1.732, b = 4 − ln(4) + 3^{−1/2} = 3.191. Now use Table XV, Col. 3.

Realization 1: u₁ = 0.01563, u₂ = 0.25595

y = 2(0.01563/0.98437)^{1.732} = 0.00153

3.191 − ln[(0.01563)²(0.25595)] = 12.871

y ≤ 12.871, so accept:
x₁ = y/4 = 0.000383

Realization 2: u₁ = 0.22527, u₂ = 0.06243

y = 2(0.22527/0.77473)^{1.732} = 0.2354

3.191 − ln[(0.22527)²(0.06243)] = 8.9456

y ≤ 8.9456, so accept:
x₂ = y/4 = 0.05885

Continue for additional realizations.

Note: Since r is an integer, an alternate scheme which may be more efficient here
is to let xi = xi1 + xi2 , where xij is exponential with parameter λ = 4.

xij = −0.25 `n(1 − uij ), i = 1, 2, . . . , 5, j = 1, 2

Realization 1: u1 = 0.01563, u2 = 0.25595

x11 = −0.25 `n(0.98437) = 0.003938


x12 = −0.25 `n(0.74405) = 0.073712
This yields x1 = 0.07785. Continue for more realizations.

(d) Use Table XV, Col. 4 with scaling.

u1 = 0.02011 x1 = 0 + 100[−`n(0.02011)]2 = 390.654


u2 = 0.85393 x2 = 0 + 100[−`n(0.85393)]2 = 15.791
.. ..
. .
u10 = 0.53988 x10 = 0 + 100[−`n(0.53988)]2 = 61.641

6–39. (a) Using Table XV, Col. 5, and y = x^{0.3}, we get

u₁ = 0.81647    x₁ = −10 ln(0.18353) = 16.954    y₁ = 2.3376
u₂ = 0.30995    x₂ = −10 ln(0.69005) = 3.7099    y₂ = 1.4818
  ⋮
u₁₀ = 0.53060   x₁₀ = −10 ln(0.46940) = 7.5630   y₁₀ = 1.8348

(b) Using the gamma variates in Problem 6–38(c) and Table XV, Col. 3 entry #25,

y₁ = (0.000383)^{1/2}/(0.28834)^{1/2} = 0.03645

y₂ = (0.05885)^{1/2}/(0.04839)^{1/2} = 1.1028

etc.

Chapter 7

7–1. (a) P (0 ≤ Z ≤ 2) = Φ(2) − Φ(0) = 0.97725 − 0.5 = 0.47725


(b) P (−1 ≤ Z ≤ 1) = Φ(1) − Φ(−1) = 2Φ(1) − 1 = 0.68268
(c) P (Z ≤ 1.65) = Φ(1.65) = 0.95053
(d) P (Z ≥ −1.96) = Φ(1.96) = 0.9750
(e) P (|Z| ≥ 1.5) = 2[1 − Φ(1.5)] = 0.1336
(f) P (−1.9 ≤ Z ≤ 2) = Φ(2) − Φ(−1.9) = Φ(2) − [1 − Φ(1.9)] = 0.94853
(g) P (Z ≤ 1.37) = 0.91465
(h) P (|Z| ≤ 2.57) = 2Φ(2.57) − 1 = 0.98984
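Standard normal probabilities like these can be reproduced without tables via the error function (a sketch, not part of the original solution; small differences from the table values are rounding):

```python
from math import erf, sqrt

def Phi(z):
    # standard normal CDF via the error function
    return 0.5 * (1 + erf(z / sqrt(2)))

print(round(Phi(2) - Phi(0), 5))    # 0.47725  (part a)
print(round(2 * Phi(2.57) - 1, 4))  # 0.9898   (part h, table gives 0.98984)
```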

7–2. X ∼ N(10, 9).

(a) P(X ≤ 8) = Φ((8 − 10)/3) = Φ(−2/3) = 0.2525

(b) P(X ≥ 12) = 1 − Φ((12 − 10)/3) = 1 − Φ(2/3) = 0.2525

(c) P(2 ≤ X ≤ 10) = Φ((10 − 10)/3) − Φ((2 − 10)/3) = 0.5 − Φ(−2.67) = 0.496

7–3. From Table II of the Appendix

(a) c = 1.56
(b) c = 1.96
(c) c = 2.57
(d) c = −1.645

7–4. P (Z ≥ Zα ) = α ⇒ Φ(Zα ) = 1 − α.

(a) Z0.025 = 1.96


(b) Z0.005 = 2.57
(c) Z0.05 = 1.645
(d) Z0.0014 = 2.99

7–5. X ∼ N(80, 100).

(a) P(X ≤ 100) = Φ((100 − 80)/10) = Φ(2) = 0.97725

(b) P(X ≤ 80) = Φ((80 − 80)/10) = 0.5

(c) P(75 ≤ X ≤ 100) = Φ((100 − 80)/10) − Φ((75 − 80)/10) = Φ(2) − Φ(−0.5)
    = 0.97725 − 0.30854 = 0.66871

(d) P(X ≥ 75) = 1 − Φ((75 − 80)/10) = 1 − Φ(−0.5) = Φ(0.5) = 0.69146

(e) P(|X − 80| ≤ 19.6) = Φ(1.96) − Φ(−1.96) = 0.95

7–6. (a) P(X > 680) = 1 − Φ((680 − 600)/60) = 1 − Φ(1.33) = 0.09176

(b) P(X ≤ 550) = Φ((550 − 600)/60) = Φ(−5/6) = 1 − Φ(5/6) = 0.20327

7–7. P(X > 500) = 1 − Φ((500 − 485)/30) = 1 − Φ(0.5) = 0.30854, i.e., 30.854%

7–8. (a) P(X ≥ 28.5) = 1 − Φ((28.5 − 30)/1.1) = 1 − Φ(−1.36) = Φ(1.36) = 0.91308

(b) P(X ≤ 31) = Φ((31 − 30)/1.1) = 0.819

(c) P(|X − 30| > 2) = 1 − [Φ(2/1.1) − Φ(−2/1.1)] = 1 − [0.96485 − 0.03515] = 0.0703

7–9. X ∼ N(2500, 5625). Then P(X < ℓ) = 0.05 implies that

P(Z < (ℓ − 2500)/75) = 0.05

or

(ℓ − 2500)/75 = −1.645.

Thus, ℓ = 2376.63 is the lower specification limit.

7–10.
M_X(t) = E(e^{tX}) = (1/(σ√(2π))) ∫_{−∞}^∞ e^{tx} e^{−(x−µ)²/2σ²} dx
       = (σ/(σ√(2π))) ∫_{−∞}^∞ e^{t(yσ+µ)} e^{−y²/2} dy   (letting y = (x − µ)/σ)
       = (e^{µt}/√(2π)) ∫_{−∞}^∞ e^{−(y²−2σty)/2} dy
       = (e^{µt}/√(2π)) ∫_{−∞}^∞ e^{−(y²−2σty+σ²t²−σ²t²)/2} dy
       = (e^{µt}/√(2π)) ∫_{−∞}^∞ e^{−(y−σt)²/2} e^{σ²t²/2} dy
       = e^{µt+(1/2)σ²t²} (1/√(2π)) ∫_{−∞}^∞ e^{−w²/2} dw   (letting w = y − σt)
       = e^{µt+(1/2)σ²t²},

since the integral term equals 1.

7–11.
F_Y(y) = P(aX + b ≤ y) = P(X ≤ (y − b)/a)
       = Φ(((y − b)/a − µ)/σ)
       = Φ((y − b − aµ)/(aσ)) = Φ((y − (aµ + b))/(aσ))

This implies that Y ∼ N(aµ + b, a²σ²).

7–12. X ∼ N(12, (0.02)²).

(a) P(X > 12.05) = 1 − Φ((12.05 − 12)/0.02) = 1 − Φ(2.5) = 0.00621

(b)
P(X > c) = 0.9
⇒ 1 − Φ((c − 12)/0.02) = 0.9
⇒ Φ((c − 12)/0.02) = 0.1
⇒ (c − 12)/0.02 = −1.28
⇒ c = 12 − 0.0256 = 11.97

(c)
P(11.95 ≤ X ≤ 12.05) = Φ((12.05 − 12)/0.02) − Φ((11.95 − 12)/0.02)
                     = Φ(2.5) − Φ(−2.5) = 0.9876

7–13. X ∼ N(µ, (0.1)²).

(a) Take µ = 7.0. Then

P(X > 7.2) + P(X < 6.8)
= 1 − Φ((7.2 − 7)/0.1) + Φ((6.8 − 7)/0.1)
= 1 − Φ(2) + Φ(−2)
= 1 − 0.97725 + 0.02275 = 0.0455

(b)
1 − Φ((7.2 − 7.05)/0.1) + Φ((6.8 − 7.05)/0.1)
= 1 − Φ(1.5) + Φ(−2.5)
= 1 − 0.93319 + 0.00621 = 0.07302

(c)
Φ((7.2 − 7.25)/0.1) − Φ((6.8 − 7.25)/0.1)
= Φ(−0.5) − Φ(−4.5)
≈ 1 − Φ(0.5) = 0.3085

(d)
Φ((7.2 − 6.75)/0.1) − Φ((6.8 − 6.75)/0.1) = Φ(4.5) − Φ(0.5) ≈ 1 − Φ(0.5) = 0.3085

7–14. X ∼ N(50, 25), Y ∼ N(45, 6.25).

If Y ≥ X, i.e., if Y − X ≥ 0, a transaction will occur.

Let W = Y − X ∼ N(−5, 31.25).

P(W > 0) = P(Z ≥ (0 + 5)/5.59) = 1 − Φ(0.89) = 0.1867.

7–15. $9.00 = revenue / capacitor, k = manufacturing cost for process A, 2k = manufacturing
cost for process B. The profits are

P*_A = 9 − k if 1000 ≤ X ≤ 5000;  9 − k − 3 otherwise
P*_B = 9 − 2k if 1000 ≤ X ≤ 5000;  9 − 2k − 3 otherwise

Therefore,

E(P*_A) = (9 − k)P(1000 ≤ X ≤ 5000) + (6 − k)[1 − P(1000 ≤ X ≤ 5000)]
        = (9 − k)(0.9544) + (6 − k)(0.0456) = 8.8632 − k

E(P*_B) = (9 − 2k)P(1000 ≤ X ≤ 5000) + (6 − 2k)[1 − P(1000 ≤ X ≤ 5000)]
        = (9 − 2k)(1) + (6 − 2k)(0) = 9 − 2k

Since E(P*_A) < E(P*_B) when k < 0.1368, use process B; when k ≥ 0.1368, use
process A.

7–16. The profit P is

P = C if 6 ≤ X ≤ 8;  −R₁ if X < 6;  −R₂ if X > 8

E(P) = CP(6 ≤ X ≤ 8) − R₁P(X < 6) − R₂P(X > 8)
     = C[Φ(8 − µ) − Φ(6 − µ)] − R₁Φ(6 − µ) − R₂[1 − Φ(8 − µ)]
     = (C + R₂)Φ(8 − µ) − (C + R₁)Φ(6 − µ) − R₂

Then

dE(P)/dµ = −(C + R₂)φ(8 − µ) + (C + R₁)φ(6 − µ) = 0,

where φ is the standard normal density, or

(C + R₂)/(C + R₁) = φ(6 − µ)/φ(8 − µ) = e^{14−2µ}

Thus,

µ = 7 − (1/2) ln((C + R₂)/(C + R₁)).

7–17. If R₁ = R₂ = R, then µ = 7 − 0.5 ln(1) = 7, which is the midpoint of the interval [6, 8].

7–18. µ = 7 − (1/2) ln(12/10) = 6.909.

7–19. X ∼ N(70, 16).

(a) We have

P(62 ≤ X ≤ 72) = Φ((72 − 70)/4) − Φ((62 − 70)/4)
               = Φ(0.5) − Φ(−2)
               = 0.69146 − 0.02275 = 0.66871.

(b) c = 1.96σ = 7.84.

(c) (9)(0.66871) = 6.018.

7–20. E(Y) = E(Σ_{i=1}^{n} Xᵢ) = Σ_{i=1}^{n} E(Xᵢ) = nµ, so

E(Zₙ) = (E(Y) − nµ)/√(σ²n) = 0

V(Y) = V(Σ_{i=1}^{n} Xᵢ) = Σ_{i=1}^{n} V(Xᵢ) = nσ²

V(Zₙ) = V(Y)/(nσ²) = 1

7–21. E(X̄) = E((1/n) Σ_{i=1}^{n} Xᵢ) = (1/n) Σ_{i=1}^{n} E(Xᵢ) = nµ/n = µ

V(X̄) = V((1/n) Σ_{i=1}^{n} Xᵢ) = (1/n²) Σ_{i=1}^{n} V(Xᵢ) = nσ²/n² = σ²/n

7–22. X₁ ∼ N(1.25, 0.0009) and X₂ ∼ N(1.2, 0.0016).

Y = X₁ − X₂, E(Y) = 0.05, V(Y) = 0.0025.

Y ∼ N(0.05, 0.0025)

P(Y < 0) = Φ((0 − 0.05)/0.05) = Φ(−1) = 1 − Φ(1) = 0.15866.

7–23. Xᵢ ∼ N(2, 0.04), i = 1, 2, 3, and Y = X₁ + X₂ + X₃ ∼ N(6, 0.12).

Then

P(5.7 < Y < 6.3) = Φ((6.3 − 6.0)/0.3464) − Φ((5.7 − 6.0)/0.3464)
                 = Φ(0.866) − Φ(−0.866) = 0.6156.

7–24. E(Y) = E(X₁) + 2E(X₂) + E(X₃) + E(X₄) = 4 + 2(4) + 2 + 3 = 17.

With independence,

V(Y) = V(X₁) + 2²V(X₂) + V(X₃) + V(X₄) = 3 + 4(4) + 4 + 2 = 25.

P(15 ≤ Y ≤ 20) = Φ((20 − 17)/5) − Φ((15 − 17)/5)
               = Φ(0.6) − Φ(−0.4)
               = 0.72575 − 0.34458 = 0.38117.

7–25. E(Xᵢ) = 0, V(Xᵢ) = 1/12.

Y = Σ_{i=1}^{50} Xᵢ,  E(Y) = 0,  V(Y) = 50/12

P(Y > 5) = 1 − Φ((5 − 0)/√(50/12)) = 1 − Φ(2.45) = 0.00714.
50/12

7–26. E(Xᵢ) = 1, V(Xᵢ) = 0.0001, i = 1, 2, . . . , 100.

Y = Σ_{i=1}^{100} Xᵢ.

Assuming that the Xᵢ's are independent, we use the central limit theorem to
approximate the distribution of Y ∼ N(100, 0.01). Then

P(Y > 102) = P(Z > (102 − 100)/0.1) = 1 − Φ(20) ≈ 0.

7–27. X̄ ∼ N(11.9, 0.0025) and n = 9. Thus, µ = 11.9, σ²/n = σ²/9 = 0.0025, so
σ² = 0.0225. All of this implies that X ∼ N(11.9, 0.0225). Then

P(11.8 < X < 12.2) = Φ(2) − Φ(−0.67) = 0.7258,

so that there are 27.4% defective.

If µ = 12, then

P(11.8 < X < 12.2) = Φ(1.33) − Φ(−1.33) = 0.8164,

or 18.4% defective. This is the optimal value of the mean.

7–28. Y = Σ_{i=1}^{n} Xᵢ, where Xᵢ is the travel time between pair i.

E(Y) = Σ_{i=1}^{n} E(Xᵢ) = 30

V(Y) = Σ_{i=1}^{n} V(Xᵢ)
     = (0.4)² + (0.6)² + (0.3)² + (1.2)² + (0.9)² + (0.4)² + (0.4)² = 3.18.

Thus,

P(Y ≤ 32) = Φ((32 − 30)/√3.18) = Φ(1.12) = 0.86864.

7–29. p = 0.08, n = 200, np = 16, √(npq) = 3.84.

(a) P(X ≤ 16) = Φ((16.5 − 16)/3.84) = Φ(0.13) = 0.55172.

(b) Φ((15.5 − 16)/3.84) − Φ((14.5 − 16)/3.84) = Φ(−0.13) − Φ(−0.391) = 0.1.

(c) Φ((20.5 − 16)/3.84) − Φ((11.5 − 16)/3.84) = Φ(1.17) − Φ(−1.17) = 0.758.

(d) Φ((14.5 − 16)/3.84) − Φ((13.5 − 16)/3.84) = Φ(−0.391) − Φ(−0.651) = 0.09.

7–30. P(0.05 ≤ p̂ ≤ 0.15) = 0.95 implies that

P((0.05 − 0.10)/√(0.09/n) ≤ Z ≤ (0.15 − 0.10)/√(0.09/n))
= Φ(0.05√n/0.3) − Φ(−0.05√n/0.3) = 0.95

⇒ 2Φ(0.05√n/0.3) = 1.95
⇒ Φ(0.167√n) = 0.9750
⇒ 0.167√n = 1.96
⇒ n = 139
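The same sample-size calculation can be done by a direct search on the normal approximation (a sketch, not part of the original solution):

```python
from math import erf, sqrt

def Phi(z):
    # standard normal CDF
    return 0.5 * (1 + erf(z / sqrt(2)))

# smallest n with P(0.05 <= p_hat <= 0.15) >= 0.95 under the normal approximation
n = 1
while 2 * Phi(0.05 * sqrt(n) / 0.3) - 1 < 0.95:
    n += 1
print(n)  # 139
```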

7–31. Z₁ = √(−2 ln(u₁)) · cos(2πu₂), Z₂ = √(−2 ln(u₁)) · sin(2πu₂). Note that the sine and
cosine calculations are carried out in radians.

Obtain uniforms from Col. 2.

u1 u2 z1 z2
0.15011 0.46573 −1.902 0.416
0.48360 0.93093 1.093 −0.507
0.39975 0.06907 1.229 0.569

These results give

x1 = 100 + 2(−1.902) = 96.196


x2 = 100 + 2(0.416) = 100.832
x3 = 100 + 2(1.093) = 102.186
x4 = 100 + 2(−0.507) = 98.986
x5 = 100 + 2(1.229) = 102.458
x6 = 100 + 2(0.569) = 101.138
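The Box–Muller transformation used above can be sketched and checked against the first table row (small differences in the last digit are rounding in the table):

```python
from math import cos, sin, log, pi, sqrt

def box_muller(u1, u2):
    # one pair of independent N(0, 1) variates from two Uniform(0, 1) variates
    r = sqrt(-2 * log(u1))
    return r * cos(2 * pi * u2), r * sin(2 * pi * u2)

z1, z2 = box_muller(0.15011, 0.46573)
print(round(z1, 2), round(z2, 2))  # -1.9 0.42

# scaling to N(100, 4) as in the table: x = 100 + 2z
x1 = 100 + 2 * z1
```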

7–32. Calculate Z1 and Z2 as in Problem 7–31, obtaining uniforms from Col. 4.

u1 u2 z1 z2
0.02011 0.08539 2.402 1.429
0.97265 0.61680 −0.175 −0.158
0.16656 0.42751 −1.700 0.833

These results give realizations of X1 .

x1 3x1
10 + 1.732(2.402) = 14.161 42.483
10 + 1.732(1.429) = 12.475 37.425
10 + 1.732(−0.175) = 9.697 29.091
10 + 1.732(−0.158) = 9.726 29.179
10 + 1.732(−1.700) = 7.056 21.167
10 + 1.732(0.833) = 11.443 34.328

Meanwhile, we use Col. 5 to calculate realizations of X2 .

x2 −2x2
20(0.81647) = 16.329 −32.658
20(0.30995) = 6.199 −12.398
20(0.76393) = 15.279 −30.558
20(0.07856) = 1.571 −3.142
20(0.06121) = 1.224 −2.448
20(0.27756) = 5.551 −11.102

Finally, we get the six realizations of Y ,

y1 = 9.825
y2 = 25.027
y3 = −1.467
y4 = 26.039
y5 = 18.719
y6 = 23.226

7–33. Using the zᵢ realizations from Problem 7–31,

(−1.902)² = 3.618
(0.416)² = 0.173
(1.093)² = 1.195
(−0.507)² = 0.257
(1.229)² = 1.510

7–34. Let z1 , z2 , . . . , zn be realizations of N (0, 1) r.v.’s.

yi = µY + σzi , i = 1, 2, . . . , n.

xi = eyi , i = 1, 2, . . . , n.

7–35. Generate a pair z1 , z2 of N (0, 1) r.v.’s.

Let x1 = µ1 + σ1 z1 and x2 = µ2 + σ2 z2 . Thus, Xi ∼ N (µi , σi2 ), i = 1, 2.


Let y1 = x1 /x22 .

Repeat this procedure for as many realizations as desired.

7–36. This is a normal distribution truncated on the right, with p.d.f.

f(x) = (1/Φ((r − µ)/σ)) · (1/(σ√(2π))) exp[−(x − µ)²/2σ²] if −∞ < x ≤ r
     = 0 if x > r

For our problem, r = 2600, µ = 2500, and σ = 50. Now, after a bit of calculus,

E(X) = ∫_{−∞}^∞ x f(x) dx
     = µ − (σ/(Φ((r − µ)/σ)√(2π))) exp[−(r − µ)²/2σ²]
     = 2500 − (50/(0.9772√(2π))) exp[−(2600 − 2500)²/(2(50)²)]
     = 2497.24
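A numerical check of the truncated-mean formula (a sketch, not part of the original solution):

```python
from math import erf, exp, pi, sqrt

mu, sigma, r = 2500, 50, 2600
alpha = (r - mu) / sigma                 # = 2
Phi = 0.5 * (1 + erf(alpha / sqrt(2)))   # standard normal CDF at alpha
phi = exp(-alpha**2 / 2) / sqrt(2 * pi)  # standard normal pdf at alpha

mean = mu - sigma * phi / Phi
print(round(mean, 2))  # 2497.24
```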

7–37. E(X) = e62.5 , V (X) = e125 (e25 − 1), median(X) = e50 , mode(X) = e25

7–38. W is lognormal with parameters 17.06 and 7.0692, or ln(W) ∼ N(17.06, 7.0692).

Thus, P(L ≤ W ≤ R) = 0.90 implies P(ln(L) ≤ ln(W) ≤ ln(R)) = 0.90, or

Φ((ln(R) − 17.06)/√7.0692) − Φ((ln(L) − 17.06)/√7.0692) = 0.90.

Assuming that the interval [ln(L), ln(R)] is symmetric about 17.06, we obtain
ln(L) = 17.06 − c and ln(R) = 17.06 + c, so that

Φ(c/2.6588) − Φ(−c/2.6588) = 0.90.

This means that c/2.6588 = 1.645, or c = 4.374.

ln(L) = 12.69, or L = 324486.8 and
ln(R) = 21.43, or R = 2027359410

7–39. Y ∼ N(µ, σ²), Y = ln(X), or X = e^Y. The function e^y is strictly increasing in y;
thus, from Theorem 3–1,

f(x) = (1/(xσ√(2π))) exp[−(ln(x) − µ)²/2σ²];  x ≥ 0.

7–41. X₁ ∼ N(2000, 2500), X₂ ∼ N(0.10, 0.01), ρ = 0.87.

E(X₁|x₂) = µ₁ + ρ(σ₁/σ₂)(x₂ − µ₂)
         = 2000 + (0.87)(50/0.1)(0.098 − 0.10)
         = 1999.13

V(X₁|x₂) = σ₁²(1 − ρ²) = 2500(1 − 0.7569) = 607.75

P(X₁ > 1950|x₂ = 0.098) = P(Z > (1950 − 1999.13)/√607.75)
                        = 1 − Φ(−1.993) = 0.9769

7–42. X₁ ∼ N(75, 25), X₂ ∼ N(83, 16), ρ = 0.8.

E(X₂|x₁) = µ₂ + ρ(σ₂/σ₁)(x₁ − µ₁)
         = 83 + (0.8)(4/5)(80 − 75)
         = 86.2

V(X₂|x₁) = σ₂²(1 − ρ²) = 16(1 − 0.64) = 5.76

P(X₂ > 80|x₁ = 80) = P(Z > (80 − 86.2)/√5.76) = 0.9951

7–43. (a) f(x₁, x₂) = k implies that

k = (1/(2πσ₁σ₂√(1 − ρ²))) exp{(−1/(2(1 − ρ²)))[((x₁ − µ₁)/σ₁)² − 2ρ(x₁ − µ₁)(x₂ − µ₂)/(σ₁σ₂) + ((x₂ − µ₂)/σ₂)²]}

For a selected value of k, the quantity in brackets assumes a value, say c; thus,

((x₁ − µ₁)/σ₁)² − 2ρ(x₁ − µ₁)(x₂ − µ₂)/(σ₁σ₂) + ((x₂ − µ₂)/σ₂)² − c = 0,

which is a quadratic in x₁ − µ₁ and x₂ − µ₂. If we write the general second-degree
equation as

Ay₁² + By₁y₂ + Cy₂² + Dy₁ + Ey₂ + F = 0,

we can determine the nature of the curve from the second-order terms. In
particular, if B² − 4AC < 0, the curve is an ellipse. In any case,

B² − 4AC = (−2ρ/(σ₁σ₂))² − 4/(σ₁²σ₂²) = 4(ρ² − 1)/(σ₁²σ₂²) < 0,

the last inequality a result of the fact that ρ² < 1 (for ρ ≠ 0). Thus, we have
an ellipse.

(b) Let σ₁² = σ₂² = σ² and ρ = 0. Then the equation of the curve becomes

((x₁ − µ₁)/σ)² + ((x₂ − µ₂)/σ)² − c = 0,

which is a circle with center (µ₁, µ₂) and radius σ√c.

7–44.
F(r) = P(R ≤ r)
     = P(√(X₁² + X₂²) ≤ r)
     = P(X₁² + X₂² ≤ r²)
     = ∫∫_A (1/(2πσ²)) exp[−(t₁² + t₂²)/2σ²] dt₁ dt₂,

where A = {(x₁, x₂) : x₁² + x₂² ≤ r²}.

Let x₁ = ρ cos(θ) and x₂ = ρ sin(θ). Then

P(R ≤ r) = ∫_0^r ∫_0^{2π} (ρ/(2πσ²)) exp(−ρ²/2σ²) dθ dρ
         = 1 − exp(−r²/2σ²)

Thus,

f(r) = (r/σ²) exp(−r²/2σ²);  r > 0.

7–45. Using the fact that Σ_{i=1}^{n} Xᵢ² has a χ²ₙ distribution, we obtain

f(r) = r^{n−1}e^{−r²/2}/(2^{(n−2)/2}Γ(n/2));  r ≥ 0

7–46. Let Y₁ = X₁/X₂ and Y₂ = X₂, with X₂ ≠ 0. Then the Jacobian is

|J| = |det[[y₂, y₁], [0, 1]]| = |y₂|

So we have

f(y₁, y₂) = (|y₂|/(2πσ₁σ₂√(1 − ρ²))) exp{(−1/(2(1 − ρ²)))[(y₁y₂/σ₁)² − 2ρy₁y₂²/(σ₁σ₂) + (y₂/σ₂)²]}

So the marginal is

f_{Y₁}(y₁) = ∫_{−∞}^∞ f(y₁, y₂) dy₂
           = (√(1 − ρ²)/(πσ₁σ₂)) [(y₁/σ₁ − ρ/σ₂)² + (1 − ρ²)/σ₂²]^{−1};  −∞ < y₁ < ∞

When ρ = 0 and σ₁² = σ₂² = σ², the distribution becomes

f_{Y₁}(y₁) = 1/(π(1 + y₁²));  −∞ < y₁ < ∞,

also known as the Cauchy distribution.



7–47. The CDF is

F_Y(y) = P(Y ≤ y) = P(Z² ≤ y)
       = P(−√y ≤ Z ≤ √y)
       = 2 ∫_0^{√y} (1/√(2π)) e^{−z²/2} dz

Take z = √u so that dz = (2√u)^{−1} du. Then

F_Y(y) = ∫_0^y (1/√(2π)) u^{(1/2)−1} e^{−u/2} du

and so, by Leibniz' rule,

f(y) = (1/√(2π)) y^{−1/2} e^{−y/2},  y > 0.

7–48. The CDF is

F_Y(y) = P(Y ≤ y) = P(Σ_{i=1}^{n} Xᵢ² ≤ y)
       = ∫· · ·∫_A (2π)^{−n/2} exp[−(1/2) Σ_{i=1}^{n} xᵢ²] dx₁ dx₂ · · · dxₙ,

where

A = {(x₁, x₂, . . . , xₙ) : Σ_{i=1}^{n} xᵢ² ≤ y}.

Transform to polar coordinates:

x₁ = y^{1/2} cos(θ₁)
x₂ = y^{1/2} sin(θ₁) cos(θ₂)
x₃ = y^{1/2} sin(θ₁) sin(θ₂) cos(θ₃)
  ⋮
xₙ₋₁ = y^{1/2} sin(θ₁) sin(θ₂) · · · sin(θₙ₋₂) cos(θₙ₋₁)
xₙ = y^{1/2} sin(θ₁) sin(θ₂) · · · sin(θₙ₋₂) sin(θₙ₋₁)

The Jacobian of this transformation is the n × n determinant with entries ∂xᵢ/∂y in
the first column and ∂xᵢ/∂θⱼ in the remaining columns. After a little algebra,
J = (1/2)y^{(n/2)−1}|∆ₙ|, where ∆ₙ is an n × n matrix obtained by taking out (2√y)^{−1}
from the first column and √y from the last n − 1 columns. Expanding this
determinant with respect to the last column, we have

|∆ₙ| = sin(θ₁) sin(θ₂) · · · sin(θₙ₋₂)|∆ₙ₋₁|
     = sin^{n−2}(θ₁) sin^{n−3}(θ₂) · · · sin(θₙ₋₂)

This transformation gives variables whose limits are much easier. In the region
covered by A, we have 0 ≤ θᵢ ≤ π for i = 1, 2, . . . , n − 2, and 0 < θₙ₋₁ < 2π. Thus,

P(Σ_{i=1}^{n} Xᵢ² ≤ y*)
= ∫· · ·∫_A (1/(2π)^{n/2}) (1/2) y^{(n/2)−1} e^{−y/2} |∆ₙ| dy dθₙ₋₁ · · · dθ₁
= (1/(2(2π)^{n/2})) ∫_0^{y*} y^{(n/2)−1}e^{−y/2} dy ∫_0^{2π} dθₙ₋₁ ∫_0^π sin(θₙ₋₂) dθₙ₋₂ · · · ∫_0^π sin^{n−2}(θ₁) dθ₁
= K ∫_0^{y*} y^{(n/2)−1}e^{−y/2} dy ≡ F(y*)

Thus,

f(y) = F′(y) = Ky^{(n/2)−1}e^{−y/2},  y ≥ 0.

To evaluate K, use ∫_0^∞ f(y) dy = 1. This finally gives

f(y) = (1/(2^{n/2}Γ(n/2))) y^{(n/2)−1} e^{−y/2},  y ≥ 0.

7–49. For x ≥ 0,

F(x) = P(|X| ≤ x) = P(−x ≤ X ≤ x)
     = ∫_{−x}^x (1/√(2π)) e^{−t²/2} dt
     = 2 ∫_0^x (1/√(2π)) e^{−t²/2} dt,

so that

f(x) = (2/√(2π)) e^{−x²/2},  x > 0
     = 0,  otherwise

Chapter 8

8–1. x̄ = 131.30, s2 = 113.85, s = 10.67.

8–2. Descriptive Statistics for Y

Mean = 34.767
Variance = 1.828
Standard Dev = 1.352
Skewness = 0.420
Kurtosis = 2.765
Minimum = 32.100
Maximum = 37.900

n = 64

Lower Limit Cell Count


32.1000 1 X
32.4625 4 XXXX
32.8250 3 XXX
33.1875 2 XX
33.5500 7 XXXXXXX
33.9125 6 XXXXXX
34.2750 9 XXXXXXXXX
34.6375 7 XXXXXXX
35.0000 7 XXXXXXX
35.3625 5 XXXXX
35.7250 2 XX
36.0875 2 XX
36.4500 4 XXXX
36.8125 1 X
37.1750 1 X
37.5375 3 XXX

8–3. Descriptive Statistics for X

Mean = 89.476
Variance = 17.287
Standard Dev = 4.158
Skewness = 0.251
Kurtosis = 1.988
Minimum = 82.600
Maximum = 98.000

n = 90

Lower Limit Cell Count


82.60 4 XXXX
83.37 6 XXXXXX
84.14 4 XXXX
84.91 6 XXXXXX
85.68 7 XXXXXXX
86.45 3 XXX
87.22 8 XXXXXXXX
87.99 4 XXXX
88.76 4 XXXX
89.53 7 XXXXXXX
90.30 6 XXXXXX
91.07 4 XXXX
91.84 3 XXX
92.61 4 XXXX
93.38 4 XXXX
94.15 5 XXXXX
94.92 4 XXXX
95.69 3 XXX
96.46 1 X
97.23 3 XXX

8–4.
Number of Defects Frequency Relative Freq
1 1 0.0067
2 14 0.0933
3 11 0.0733
4 21 0.1400
5 10 0.0667
6 18 0.1200
7 15 0.1000
8 14 0.0933
9 9 0.0600
10 15 0.1000
11 4 0.0267
12 4 0.0267
13 6 0.0400
14 5 0.0333
15 1 0.0067
16 1 0.0067
17 1 0.0067
150 1.0000
x̄ = 6.9334, s2 = 12.5056, R = 16, x̃ = 6.5, MO = 4. The data appear to follow a
Poisson distribution, though s2 seems to be somewhat greater than x̄.

8–5. x̄ = 131.30, s2 = 113.85, s = 10.67.

8–6.
Class Interval Frequency Relative Freq
32 ≤ X < 33 6 0.094
33 ≤ X < 34 11 0.172
34 ≤ X < 35 22 0.344
35 ≤ X < 36 14 0.219
36 ≤ X < 37 6 0.094
37 ≤ X < 38 5 0.077
64 1.000
x̄ = 34.7672, s2 = 1.828, x̃ = (34.6 + 34.7)/2 = 34.65. The data appear to follow a
normal distribution.

8–7.
Class Interval Frequency Relative Freq
82 ≤ X < 84 6 0.067
84 ≤ X < 86 14 0.156
86 ≤ X < 88 18 0.200
88 ≤ X < 90 11 0.122
90 ≤ X < 92 14 0.156
92 ≤ X < 94 8 0.088
94 ≤ X < 96 12 0.133
96 ≤ X < 98 6 0.067
98 ≤ X < 100 1 0.011
x̄ = 89.4755, s2 = 17.2870. The data appear to follow either a gamma or a
Weibull distribution.

8–8. (a) Descriptive Statistics for Time


Mean = 14.355
Variance = 356.577
Standard Dev = 18.883
Skewness = 1.809
Kurtosis = 5.785
Minimum = 0.190
Maximum = 72.890

n = 19

Lower Limit Cell Count


0.1900 13 XXXXXXXXXXXXX
10.5757 1 X
20.9614 0
31.3471 4 XXXX
41.7329 0
52.1186 0
62.5043 1 X
(b) x̄ = 14.355, s2 = 356.577, s = 18.88, x̃ = 6.5.

8–9. x̄ = 126.875, s2 = 660.12, s = 25.693



8–10.(a,b) Stem-and-leaf display (stem | leaves):
       83 | 4
       84 | 3
       85 | 3
       86 | 7 7
       87 | 7 5 8 6 9 4
       88 | 5 6 3 2 3 5 3 6 7 4 9
       89 | 8 2 0 9 8 6 3 8 3 7
       90 | 8 3 1 9 4 1 4 6 4 3 5 0 7
       91 | 5 1 0 0 8 2 8 6 1 1 6 2 0
       92 | 7 3 7 6 7 2 2 2
       93 | 3 2 4 3 0 7
       94 | 7 2 2 4
       95 | 6
       96 | 1
       97 |
       98 | 8
       99 |
      100 | 3
(c) x̄ = 90.6425, s² = 7.837, s = 2.799
(d) x̃ = median = 90.45. There are several modes, e.g., 91.0, 91.1, 92.7.

8–12. (a) Stem-and-leaf display (stem | leaves; cell frequency at right):
      32 | 5 6 9 8 1 7                                    6
      33 | 1 6 6 8 4 6 8 1 6 5 6                         11
      34 | 2 5 3 7 7 2 7 6 9 7 1 6 0 1 6 7 6 5 6 1 7 3   22
      35 | 6 1 0 4 1 3 2 0 1 4 9 8 5 7                   14
      36 | 2 8 8 4 6 8                                    6
      37 | 9 8 1 6 3                                      5
(b) x̄ = 34.7672, s² = 1.828
(c) Ordered stem-and-leaf display:
      32 | 1 5 6 7 8 9                                    6
      33 | 1 1 4 5 6 6 6 6 6 8 8                         11
      34 | 0 1 1 1 2 2 3 3 5 5 6 6 6 6 6 7 7 7 7 7 7 9   22
      35 | 0 0 1 1 1 2 3 4 4 5 6 7 8 9                   14
      36 | 2 4 6 8 8 8                                    6
      37 | 1 3 6 8 9                                      5
(d) x̃ = 34.65

8–13. (a) Stem-and-leaf display (stem | leaves; cell frequency at right):
      82 | 6 9                     2
      83 | 0 1 6 7                 4
      84 | 0 1 1 1 2 5 6 9         8
      85 | 0 1 1 1 4 4             6
      86 | 1 1 1 4 4 4 4 6 7 7    10
      87 | 3 3 3 3 5 6 6 7         8
      88 | 2 2 3 6 8               5
      89 | 1 1 4 6 6 7             6
      90 | 0 0 1 1 3 4 5 6 6 6    10
      91 | 1 2 4 7                 4
      92 | 1 4 4                   3
      93 | 1 1 2 2 7               5
      94 | 1 1 1 3 3 4 6 7         8
      95 | 1 2 3 6                 4
      96 | 1 3 4 8                 4
      97 | 3 8                     2
      98 | 0                       1

(b) x̃ = 89.25, Q1 = 86.1, Q3 = 93.1.


(c) IQR = Q3 − Q1 = 7.0

8–14. min = 82.6, Q1 = 86.1, x̃ = 89.25, Q3 = 93.1, max = 98.0

8–15. min = 32.1, Q1 = 33.8, x̃ = 34.65, Q3 = 35.45, max = 37.9

8–16. min = 1, Q1 = 4, x̃ = 7, Q3 = 10, max = 17

8–18. The descriptive measures developed in this chapter are for numerical data only.
The mode, however, does have some meaning. For these data, the mode is the
letter e.

8–19. (a)
      Σᵢ₌₁ⁿ (Xᵢ − X̄) = Σᵢ₌₁ⁿ Xᵢ − Σᵢ₌₁ⁿ X̄ = Σᵢ₌₁ⁿ Xᵢ − nX̄
                     = Σᵢ₌₁ⁿ Xᵢ − Σᵢ₌₁ⁿ Xᵢ = 0

      (b)
      Σᵢ₌₁ⁿ (Xᵢ − X̄)² = Σᵢ₌₁ⁿ (Xᵢ² + X̄² − 2XᵢX̄)
                      = Σᵢ₌₁ⁿ Xᵢ² + nX̄² − 2X̄ Σᵢ₌₁ⁿ Xᵢ
                      = Σᵢ₌₁ⁿ Xᵢ² + nX̄² − 2nX̄²
                      = Σᵢ₌₁ⁿ Xᵢ² − nX̄²
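The identity in 8–19 is easy to confirm numerically; the data below are an arbitrary illustrative sample, not from the text.

```python
# Check of 8-19: (a) deviations from the mean sum to zero, and
# (b) sum (x_i - xbar)^2 equals sum x_i^2 - n*xbar^2.
xs = [2.0, 3.5, 5.0, 7.5, 9.0]  # hypothetical data
n = len(xs)
xbar = sum(xs) / n
lhs = sum((x - xbar) ** 2 for x in xs)
rhs = sum(x * x for x in xs) - n * xbar ** 2
assert abs(lhs - rhs) < 1e-9                      # part (b)
assert abs(sum(x - xbar for x in xs)) < 1e-9      # part (a)
```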

8–20. x̄ = 1.1933, s2 = 0.000266, s = 0.016329, x̃ = median = (1.19 + 1.20)/2 = 1.195,


mode = 1.21.

8–21. x̄ = 74.0020, s = 0.0026, s2 = 6.875 × 10−6



8–22. x̄ = 62.75, s = 2.12, s2 = 4.5

8–23. (a) Sample average will be reduced by 63.


(b) Sample mean and standard deviation will be 100 times larger; the sample
variance will be 10,000 times larger.

8–24. ȳ = a + bx̄, sy = bsx

8–25. a = x̄

8–26. (a) 89.336


(b) 89.237

8–27. There is no guarantee that LN is an integer. For example, if we want a 10% trimmed
mean with 23 observations, then we would have to trim 2.3 observations from each
end. Since we cannot do this, some other procedure must be used. A reasonable
alternative is to calculate the trimmed mean with two observations trimmed from
each end, then to repeat this procedure with three observations trimmed from each
end, and finally to interpolate between the two different values of the trimmed
mean.

8–29. (a) x̄ = 120.22481, s2 = 5.66001


(b) median = 120, mode = 121

8–30. (a) x̄ = −0.20472, s2 = 3.96119


(b) median = mode = 0

8–31. For 8–29, cv = 2.379/120.225 = 0.01979.


For 8–30, cv = 1.990/(−0.205) = −9.722.

8–32. x̄ ≈ 51.124, s2 ≈ 586.603, x̃ ≈ 48.208, mode ≈ 36.334

8–33. x̄ ≈ 22.407, s2 ≈ 208.246, x̃ ≈ 22.813, mode ≈ 23.64

8–34. x̄ ≈ 847.885, s2 ≈ 15987.81, s ≈ 126.44, x̃ ≈ 858.98, mode ≈ 1050



Chapter 9

9–1. Since
            f(xᵢ) = (σ√(2π))^{−1} exp[−(xᵢ − µ)²/(2σ²)],

     we have
            f(x₁, x₂, . . . , x₅) = ∏ᵢ₌₁⁵ f(xᵢ)
                                  = ∏ᵢ₌₁⁵ (σ√(2π))^{−1} exp[−(xᵢ − µ)²/(2σ²)]
                                  = (2πσ²)^{−5/2} exp[−(1/(2σ²)) Σᵢ₌₁⁵ (xᵢ − µ)²]

9–2. Since f(xᵢ) = λe^{−λxᵢ}, we have

            f(x₁, x₂, . . . , xₙ) = ∏ᵢ₌₁ⁿ f(xᵢ) = ∏ᵢ₌₁ⁿ λe^{−λxᵢ} = λⁿ exp[−λ Σᵢ₌₁ⁿ xᵢ]

9–3. Since f(xᵢ) = 1, we have

            f(x₁, x₂, x₃, x₄) = ∏ᵢ₌₁⁴ f(xᵢ) = 1

9–4. The joint probability function for X₁ and X₂ is

            p_{X₁,X₂}(0, 0) = C(N−M, 0) C(M, 2) / C(N, 2)

            p_{X₁,X₂}(0, 1) = C(N−M, 1) C(M, 1) / [2 C(N, 2)]

            p_{X₁,X₂}(1, 0) = C(N−M, 1) C(M, 1) / [2 C(N, 2)]

            p_{X₁,X₂}(1, 1) = C(N−M, 2) C(M, 0) / C(N, 2)

     where C(n, k) denotes the binomial coefficient. Of course,

            p_{X₁}(x₁) = Σ_{x₂=0}¹ p_{X₁,X₂}(x₁, x₂)  and  p_{X₂}(x₂) = Σ_{x₁=0}¹ p_{X₁,X₂}(x₁, x₂)

     So p_{X₁}(0) = M/N, p_{X₁}(1) = 1 − (M/N), p_{X₂}(0) = M/N, p_{X₂}(1) = 1 − (M/N).

     Thus, X₁ and X₂ are not independent since

            p_{X₁,X₂}(0, 0) ≠ p_{X₁}(0) p_{X₂}(0)
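The non-independence claim in 9–4 can be checked exactly with `math.comb`; the values N = 10 and M = 4 below are arbitrary illustrative choices, not from the exercise.

```python
from math import comb

# 9-4 check with hypothetical N = 10, M = 4: two draws without replacement,
# X_i = 0 when draw i is one of the M "defective" items.
N, M = 10, 4
p00 = comb(M, 2) / comb(N, 2)                         # both draws defective
p01 = comb(N - M, 1) * comb(M, 1) / (2 * comb(N, 2))  # defective, then good
pX1_0 = p00 + p01                                     # marginal P(X1 = 0)
assert abs(pX1_0 - M / N) < 1e-12        # marginal is M/N, as derived
assert abs(p00 - pX1_0 * pX1_0) > 1e-3   # joint != product of marginals
```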

9–5. N (µ, σ 2 /n) = N (5, 0.00125)


9–6. σ/√n = 0.1/√8 = 0.0353


9–7. Use the estimated standard error S/√n.

9–8. N (−5, 0.22)

9–9. The standard error of X̄₁ − X̄₂ is

            √(σ₁²/n₁ + σ₂²/n₂) = √((1.5)²/25 + (2.0)²/30) = 0.473

9–10. Y = X̄₁ − X̄₂ is a linear combination of the 55 variables Xᵢⱼ, i = 1, j = 1, 2, . . . , 25;
      i = 2, j = 1, 2, . . . , 30. As such, we would expect Y to be very nearly normal with
      mean µ_Y = −0.5 and variance (0.473)² = 0.223.

9–11. N (0, 1)

9–12. Approximately N(p, p(1 − p)/n)


9–13. se(p̂) = √(p(1 − p)/n); the estimated standard error is ŝe(p̂) = √(p̂(1 − p̂)/n).

9–14.
      M_X(t) = E(e^{tX}) = ∫₀^∞ e^{tx} [2^{n/2} Γ(n/2)]^{−1} x^{(n/2)−1} e^{−x/2} dx
             = [2^{n/2} Γ(n/2)]^{−1} ∫₀^∞ x^{(n/2)−1} e^{−x[(1/2)−t]} dx

      This integral converges if 1/2 > t.

      Let u = x[(1/2) − t]. Then dx = [(1/2) − t]^{−1} du. Thus,

      M_X(t) = [2^{n/2} Γ(n/2)]^{−1} ∫₀^∞ {u/[(1/2) − t]}^{(n/2)−1} e^{−u} [(1/2) − t]^{−1} du
             = {2^{n/2} Γ(n/2) [(1 − 2t)/2]^{n/2}}^{−1} ∫₀^∞ u^{(n/2)−1} e^{−u} du
             = (1 − 2t)^{−n/2},   t < 1/2,

      since Γ(n/2) = ∫₀^∞ u^{(n/2)−1} e^{−u} du.

9–15. First of all,
      M′_X(t) = n(1 − 2t)^{−(n/2)−1}
      M″_X(t) = n(n + 2)(1 − 2t)^{−(n/2)−2}

      Then
      E(X) = M′_X(0) = n
      E(X²) = M″_X(0) = n(n + 2)
      V(X) = E(X²) − [E(X)]² = 2n
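The moments E(X) = n and V(X) = 2n from 9–15 can be checked by simulating a chi-square variate as a sum of squared standard normals; the values n = 4 and the replication count are arbitrary choices for this sketch.

```python
import random

# Monte Carlo check of 9-15: for X ~ chi-square(n), E(X) = n and V(X) = 2n.
# X is generated as a sum of n squared N(0,1) draws.
random.seed(1)
n, reps = 4, 50000
draws = [sum(random.gauss(0, 1) ** 2 for _ in range(n)) for _ in range(reps)]
mean = sum(draws) / reps
var = sum((d - mean) ** 2 for d in draws) / (reps - 1)
assert abs(mean - n) < 0.1      # E(X) = n = 4
assert abs(var - 2 * n) < 0.5   # V(X) = 2n = 8
```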

9–16. Let T = Z/√(χ²ₙ/n) = Z√(n/χ²ₙ). Now

      E(T) = E(Z)E(√(n/χ²ₙ)) = 0, because E(Z) = 0.

      V(T) = E(T²), because E(T) = 0. Thus,

      V(T) = E[Z²(n/χ²ₙ)] = E(Z²)E(n/χ²ₙ).

      Note that E(Z²) = V(Z) = 1, so that

      V(T) = E(n/χ²ₙ)
           = ∫₀^∞ (n/s) [2^{n/2} Γ(n/2)]^{−1} s^{(n/2)−1} e^{−s/2} ds
           = n [2^{n/2} Γ(n/2)]^{−1} ∫₀^∞ s^{(n/2)−2} e^{−s/2} ds
           = n [2^{(n/2)−1} Γ(n/2)]^{−1} ∫₀^∞ (2u)^{(n/2)−2} e^{−u} du
           = nΓ((n/2) − 1) / [2Γ(n/2)],                         if n > 2
           = nΓ((n/2) − 1) / [2((n/2) − 1)Γ((n/2) − 1)],        if n > 2
           = n/(n − 2),                                         if n > 2

9–17. E(F_{m,n}) = E[(χ²ₘ/m)/(χ²ₙ/n)] = E(χ²ₘ/m)E(n/χ²ₙ).

      E(χ²ₘ/m) = (1/m)E(χ²ₘ) = 1.

      From Problem 9–16, we have E(n/χ²ₙ) = n/(n − 2).

      Therefore, E(F_{m,n}) = n/(n − 2), if n > 2.

      To find V(F_{m,n}), let X ∼ χ²ₘ and Y ∼ χ²ₙ. Then

            E(F²_{m,n}) = (n/m)² E(X²) E(1/Y²).

      Since E(X²) = V(X) + [E(X)]² and X ∼ χ²ₘ, we have E(X²) = 2m + m². Now

      E(1/Y²) = ∫₀^∞ (1/y²) [2^{n/2} Γ(n/2)]^{−1} y^{(n/2)−1} e^{−y/2} dy
              = [2^{n/2} Γ(n/2)]^{−1} ∫₀^∞ y^{(n/2)−3} e^{−y/2} dy
              = [2^{n/2} Γ(n/2)]^{−1} ∫₀^∞ 2(2u)^{(n/2)−3} e^{−u} du
              = [(n − 2)(n − 4)]^{−1},   if n > 4

      Thus,
      V(F_{m,n}) = (n/m)² E(X²) E(1/Y²) − [n/(n − 2)]²
                 = n²(2m + m²)/[m²(n − 2)(n − 4)] − n²/(n − 2)²
                 = 2n²(m + n − 2)/[m(n − 2)²(n − 4)]

9–18. X(1) is greater than t if and only if every observation is greater than t. Then

      P(X(1) > t) = P(X₁ > t, X₂ > t, . . . , Xₙ > t)
                  = P(X₁ > t)P(X₂ > t) · · · P(Xₙ > t)
                  = P(X > t)P(X > t) · · · P(X > t)
                  = [1 − F(t)]ⁿ

      So F_{X(1)}(t) = 1 − P(X(1) > t) = 1 − [1 − F(t)]ⁿ.

      If X is continuous, then so is X(1); so

            f_{X(1)}(t) = F′_{X(1)}(t) = n[1 − F(t)]^{n−1} f(t)

      Similarly,

      F_{X(n)}(t) = P(X(n) ≤ t)
                  = P(X₁ ≤ t, X₂ ≤ t, . . . , Xₙ ≤ t)
                  = P(X₁ ≤ t)P(X₂ ≤ t) · · · P(Xₙ ≤ t)
                  = P(X ≤ t)P(X ≤ t) · · · P(X ≤ t)
                  = [F(t)]ⁿ

      Since X(n) is continuous,

            f_{X(n)}(t) = F′_{X(n)}(t) = n[F(t)]^{n−1} f(t)

9–19.
      F(t) = 0,       t < 0
           = 1 − p,   0 ≤ t < 1
           = 1,       t ≥ 1

      Then

      P(X(n) = 1) = F_{X(n)}(1) − F_{X(n)}(0) = [F(1)]ⁿ − [F(0)]ⁿ = 1 − (1 − p)ⁿ
      P(X(1) = 0) = 1 − [1 − F(0)]ⁿ = 1 − [1 − (1 − p)]ⁿ = 1 − pⁿ

9–20.
      f_{X(1)}(t) = n[1 − Φ((t − µ)/σ)]^{n−1} (σ√(2π))^{−1} exp[−(t − µ)²/(2σ²)]
      f_{X(n)}(t) = n[Φ((t − µ)/σ)]^{n−1} (σ√(2π))^{−1} exp[−(t − µ)²/(2σ²)]

9–21. f (t) = λe−λt , t>0

F (t) = 1 − e−λt

FX(1) (t) = 1 − [1 − F (t)]n = 1 − [1 − (1 − e−λt )]n = 1 − e−nλt

fX(1) (t) = nλe−nλt , t>0

FX(n) (t) = [F (t)]n = (1 − e−λt )n

fX(n) (t) = n(1 − e−λt )n−1 λe−λt , t>0
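The 9–21 result that X(1) is exponential with rate nλ can be checked by simulation; the parameter values below (λ = 2, n = 5, t = 0.1) are arbitrary illustrative choices.

```python
import math
import random

# Simulation check of 9-21: the minimum of n i.i.d. Exp(lambda) variables
# satisfies P(X_(1) > t) = exp(-n*lambda*t).
random.seed(7)
lam, n, t, reps = 2.0, 5, 0.1, 40000
hits = sum(min(random.expovariate(lam) for _ in range(n)) > t for _ in range(reps))
empirical = hits / reps
theory = math.exp(-n * lam * t)   # survival function of Exp(n*lambda) at t
assert abs(empirical - theory) < 0.015
```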

9–22. f_{X(n)}(x(n)) = n[F(x(n))]^{n−1} f(x(n))

      Treat F(X(n)) as a random variable giving the fraction of objects in the population
      having values of X ≤ X(n).

      Let Y = F(X(n)). Then dy = f(x(n)) dx(n), and thus f(y) = ny^{n−1}, 0 ≤ y ≤ 1.

      This gives
            E(Y) = ∫₀¹ nyⁿ dy = n/(n + 1).

      Similarly, f_{X(1)}(x(1)) = n[1 − F(x(1))]^{n−1} f(x(1))

      Treat F(X(1)) as a random variable giving the fraction of objects in the population
      having values of X ≤ X(1).

      Let Y = F(X(1)). Then dy = f(x(1)) dx(1), and thus f(y) = n(1 − y)^{n−1}, 0 ≤ y ≤ 1.

      This gives
            E(Y) = ∫₀¹ ny(1 − y)^{n−1} dy

The family of Beta distributions is defined by p.d.f.’s of the form

      g(x) = [β(r, s)]^{−1} x^{r−1} (1 − x)^{s−1},  0 < x < 1
           = 0,                                     otherwise

where β(r, s) = Γ(r)Γ(s)/Γ(r + s).

Thus,
      E(Y) = n ∫₀¹ y(1 − y)^{n−1} dy = nβ(2, n)
           = nΓ(2)Γ(n)/Γ(n + 2) = n!1!/(n + 1)! = 1/(n + 1)

9–23. (a) 2.73


(b) 11.34
(c) 34.17
(d) 20.48

9–24. (a) 2.228


(b) 0.687
(c) 1.813

9–25. (a) 1.63


(b) 2.85
(c) 0.241
(d) 0.588

Chapter 10

10–1. Both estimators are unbiased. Now, V(X̄₁) = σ²/2n while V(X̄₂) = σ²/n. Since
      V(X̄₁) < V(X̄₂), X̄₁ is a more efficient estimator than X̄₂.
10–2. E(θ̂₁) = µ, E(θ̂₂) = (1/2)E(2X₁ − X₆ + X₄) = (1/2)(2µ − µ + µ) = µ. Both
      estimators are unbiased.

      V(θ̂₁) = σ²/7,
      V(θ̂₂) = (1/2)² V(2X₁ − X₆ + X₄) = (1/4)[4V(X₁) + V(X₆) + V(X₄)] = (1/4)6σ² = 3σ²/2

      θ̂₁ has a smaller variance than θ̂₂.
10–3. Since θ̂₁ is unbiased, MSE(θ̂₁) = V(θ̂₁) = 10.
      MSE(θ̂₂) = V(θ̂₂) + (Bias)² = 4 + (θ − θ/2)² = 4 + θ²/4.
      If θ < √24 = 4.899, θ̂₂ is a better estimator of θ than θ̂₁, because it would have
      smaller MSE.
10–4. M SE(θ̂1 ) = V (θ̂1 ) = 12, M SE(θ̂2 ) = V (θ̂2 ) = 10,
M SE(θ̂3 ) = E(θ̂3 − θ)2 = 6. θ̂3 is a better estimator because it has smaller M SE.
10–5. E(S²) = (1/24)E(10S₁² + 8S₂² + 6S₃²) = (1/24)(10σ² + 8σ² + 6σ²) = (1/24)24σ² = σ²
10–6. Any linear estimator of µ is of the form θ̂ = Σᵢ₌₁ⁿ aᵢXᵢ where the aᵢ are constants. θ̂ is
      an unbiased estimator of µ only if E(θ̂) = µ, which implies that Σᵢ₌₁ⁿ aᵢ = 1. Now
      V(θ̂) = Σᵢ₌₁ⁿ aᵢ²σ². Thus we must choose the aᵢ to minimize V(θ̂) subject to the
      constraint Σaᵢ = 1. Let λ be a Lagrange multiplier. Then

            F(aᵢ, λ) = Σᵢ₌₁ⁿ aᵢ²σ² − λ(Σᵢ₌₁ⁿ aᵢ − 1)

      and ∂F/∂aᵢ = ∂F/∂λ = 0 gives

            2aᵢσ² − λ = 0;  i = 1, 2, . . . , n
            Σᵢ₌₁ⁿ aᵢ = 1

      The solution is aᵢ = 1/n. Thus θ̂ = X̄ is the best linear unbiased estimator of µ.



10–7. L(α) = ∏ᵢ₌₁ⁿ α^{Xᵢ} e^{−α}/Xᵢ! = α^{ΣXᵢ} e^{−nα} / ∏ᵢ₌₁ⁿ Xᵢ!

      ln L(α) = (Σᵢ₌₁ⁿ Xᵢ) ln α − nα − ln(∏ᵢ₌₁ⁿ Xᵢ!)

      d ln L(α)/dα = Σᵢ₌₁ⁿ Xᵢ/α − n = 0

      α̂ = Σᵢ₌₁ⁿ Xᵢ/n = X̄
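The MLE in 10–7 can be sanity-checked numerically: on a grid of candidate values, the Poisson log-likelihood peaks at the sample mean. The count data below are hypothetical, and a coarse grid search stands in for the calculus.

```python
import math

# Grid check of 10-7: the Poisson log-likelihood is maximized near alpha = xbar.
xs = [3, 1, 4, 1, 5, 9, 2, 6]   # hypothetical counts; xbar = 3.875
xbar = sum(xs) / len(xs)

def loglik(a):
    # sum of log Poisson pmf terms: x*ln(a) - a - ln(x!)
    return sum(x * math.log(a) - a - math.lgamma(x + 1) for x in xs)

grid = [0.1 * k for k in range(1, 100)]   # 0.1, 0.2, ..., 9.9
best = max(grid, key=loglik)
assert abs(best - xbar) < 0.05   # nearest grid point to xbar wins
```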

10–8. For the Poisson distribution, E(X) = α = µ01 . Also, M10 = X. Thus α̂ = X is the
moment estimator of α.
10–9. L(λ) = ∏ᵢ₌₁ⁿ λe^{−λtᵢ} = λⁿ e^{−λ Σᵢ₌₁ⁿ tᵢ}

      ln L(λ) = n ln λ − λ Σᵢ₌₁ⁿ tᵢ

      d ln L(λ)/dλ = (n/λ) − Σᵢ₌₁ⁿ tᵢ = 0

      λ̂ = n / Σᵢ₌₁ⁿ tᵢ = (t̄)^{−1}

10–10. E(t) = 1/λ = µ₁′, and M₁′ = t̄. Thus 1/λ = t̄ or λ̂ = (t̄)^{−1}.

10–11. If X is a gamma random variable, then E(X) = r/λ and V(X) = r/λ². Thus
       E(X²) = (r + r²)/λ². Now M₁′ = X̄ and M₂′ = (1/n) Σᵢ₌₁ⁿ Xᵢ². Equating moments,
       we obtain
             r/λ = X̄,   (r + r²)/λ² = (1/n) Σᵢ₌₁ⁿ Xᵢ²
       or,
             λ̂ = X̄ / [(1/n) Σᵢ₌₁ⁿ Xᵢ² − X̄²]
             r̂ = X̄² / [(1/n) Σᵢ₌₁ⁿ Xᵢ² − X̄²]

10–12. E(X) = 1/p, M₁′ = X̄. Thus 1/p = X̄ or p̂ = 1/X̄.

10–13. L(p) = ∏ᵢ₌₁ⁿ (1 − p)^{Xᵢ−1} p = pⁿ(1 − p)^{ΣXᵢ−n}

       ln L(p) = n ln p + (Σᵢ₌₁ⁿ Xᵢ − n) ln(1 − p). From d ln L(p)/dp = 0, we obtain

             (n/p̂) − (Σᵢ₌₁ⁿ Xᵢ − n)/(1 − p̂) = 0

             p̂ = n / Σᵢ₌₁ⁿ Xᵢ = 1/X̄

10–14. E(X) = p, M₁′ = X̄. Thus p̂ = X̄.

10–15. E(X) = np (n is known), M₁′ = X̄_N (X̄_N is based on a sample of N observations).
       Thus np = X̄_N or p̂ = X̄_N/n.

10–16. E(X) = np, V(X) = np(1 − p), E(X²) = np − np² + n²p²

       M₁′ = X̄, M₂′ = (1/N) Σᵢ₌₁ᴺ Xᵢ². Equating moments,

             np = X̄,   np − np² + n²p² = (1/N) Σᵢ₌₁ᴺ Xᵢ²

             n̂ = X̄² / [X̄ − (1/N) Σᵢ₌₁ᴺ (Xᵢ − X̄)²],   p̂ = X̄/n̂

10–17. L(p) = ∏ᵢ₌₁ᴺ C(n, Xᵢ) p^{Xᵢ}(1 − p)^{n−Xᵢ} = [∏ᵢ₌₁ᴺ C(n, Xᵢ)] p^{ΣXᵢ} (1 − p)^{nN−ΣXᵢ}

       ln L(p) = Σᵢ₌₁ᴺ ln C(n, Xᵢ) + (Σᵢ₌₁ᴺ Xᵢ) ln p + (nN − Σᵢ₌₁ᴺ Xᵢ) ln(1 − p)

       d ln L(p)/dp = Σᵢ₌₁ᴺ Xᵢ/p − (nN − Σᵢ₌₁ᴺ Xᵢ)/(1 − p) = 0

       p̂ = X̄/n

10–18. L = ∏ᵢ₌₁ⁿ (β/δ)((Xᵢ − γ)/δ)^{β−1} exp[−((Xᵢ − γ)/δ)^β]

       The system of partial derivatives ∂L/∂δ = ∂L/∂β = ∂L/∂γ = 0 yields
       simultaneous nonlinear equations that must be solved to produce the maximum
       likelihood estimators. In general, iterative methods must be used to find the
       maximum likelihood estimates. A number of special cases are of practical interest;
       for example, if γ = 0, the two-parameter Weibull distribution results. Both iterative
       and linear estimation techniques can be used for the two-parameter case.
10–19. Let X be a random variable and c be a constant. Then Chebychev’s inequality is

P (|X − c| ≥ ²) ≤ (1/²2 )E(X − c)2

Thus,
P (|θ̂ − θ| ≥ ²) ≤ (1/²2 )E(θ̂ − θ)2

Now E(θ̂ − θ)2 = V (θ̂) + [E(θ̂) − θ]2 .


Then
P (|θ̂ − θ| ≥ ²) ≤ (1/²2 ){V (θ̂) + [E(θ̂) − θ]2 }

If θ̂ is unbiased then E(θ̂) − θ = 0 and if limn→∞ V (θ̂) = 0 we see that limn→∞


P (|θ̂ − θ| ≥ ²) ≤ 0, or limn→∞ P (|θ̂ − θ| ≥ ²) = 0.
10–20. E(X̄) = E[aX̄₁ + (1 − a)X̄₂] = aE(X̄₁) + (1 − a)E(X̄₂) = aµ + (1 − a)µ = µ

       V(X̄) = a²V(X̄₁) + (1 − a)²V(X̄₂) = a²(σ²/n₁) + (1 − a)²(σ²/n₂)

       dV(X̄)/da = 2a(σ²/n₁) − 2(1 − a)(σ²/n₂) = 0

       a* = (σ²/n₂) / (σ²/n₁ + σ²/n₂) = n₁/(n₁ + n₂)
10–21. L(γ) = ∏ᵢ₌₁ⁿ (γ + 1)Xᵢ^γ = (γ + 1)ⁿ ∏ᵢ₌₁ⁿ Xᵢ^γ

       ln L(γ) = n ln(γ + 1) + γ Σᵢ₌₁ⁿ ln Xᵢ

       d ln L(γ)/dγ = n/(γ + 1) + Σᵢ₌₁ⁿ ln Xᵢ = 0

       γ̂ = −1 − n / Σᵢ₌₁ⁿ ln Xᵢ

10–22. L(λ) = ∏ᵢ₌₁ⁿ λe^{−λ(Xᵢ−X_ℓ)} = λⁿ e^{−λ(Σᵢ₌₁ⁿ Xᵢ − nX_ℓ)}

       ln L(λ) = n ln λ − λ(Σᵢ₌₁ⁿ Xᵢ − nX_ℓ)

       d ln L(λ)/dλ = n/λ − (Σᵢ₌₁ⁿ Xᵢ − nX_ℓ) = 0,  so  λ̂ = n / (Σᵢ₌₁ⁿ Xᵢ − nX_ℓ)
P
10–23. Assume X_ℓ unknown; we want to maximize n ln λ − λ Σᵢ₌₁ⁿ (Xᵢ − X_ℓ) with
       respect to X_ℓ, subject to Xᵢ ≥ X_ℓ. Thus we want Σᵢ₌₁ⁿ (Xᵢ − X_ℓ) to be a minimum,
       subject to Xᵢ ≥ X_ℓ. Thus X̂_ℓ = min(X₁, X₂, . . . , Xₙ) = X(1).
" n−1 # " n−1 #
X X
10–24. E(G) = E K (Xi+1 − Xi )2 = K E(Xi+1 − Xi )2
i=1 i=1
" n−1 #
X ¡ ¢
2 2
=K E Xi+1 − 2Xi Xi+1 + Xi
i=1

n−1
X
2
=K [E(Xi+1 ) − 2E(Xi Xi+1 ) + E(Xi2 )]
i=1

= K[(n − 1)(σ 2 + µ2 ) − 2(n − 1)µ2 + (n − 1)(µ2 + σ 2 )]

= K[2(n − 1)σ 2 ]

For K[2(n − 1)σ 2 ] to equal σ 2 , K = [2(n − 1)]−1 . Thus

X n−1
1
G= (Xi+1 − Xi )2
2(n − 1) i=1

is an unbiased estimator of σ 2 .
6
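The unbiasedness of G in 10–24 can also be seen by simulation; the normal-data parameters below (µ = 10, σ = 3, n = 8) are arbitrary illustrative choices, not from the exercise.

```python
import random

# Simulation check of 10-24: G = sum (X_{i+1}-X_i)^2 / (2(n-1)) has mean sigma^2.
random.seed(3)
mu, sigma, n, reps = 10.0, 3.0, 8, 20000

def G(xs):
    return sum((xs[i + 1] - xs[i]) ** 2 for i in range(len(xs) - 1)) / (2 * (len(xs) - 1))

avg = sum(G([random.gauss(mu, sigma) for _ in range(n)]) for _ in range(reps)) / reps
assert abs(avg - sigma ** 2) < 0.3   # E(G) = sigma^2 = 9
```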

10–25. f(x₁, x₂, . . . , xₙ|µ) = (2πσ²)^{−n/2} exp[−(1/2) Σ(xᵢ − µ)²/σ²],

       f(µ) = (2πσ₀²)^{−1/2} exp[−(1/2)(µ − µ₀)²/σ₀²]

       f(µ|x₁, x₂, . . . , xₙ) = c^{1/2}(2π)^{−1/2} exp{−(c/2)[µ − (1/c)(nx̄/σ² + µ₀/σ₀²)]²}

       where c = n/σ² + 1/σ₀²
10–26. f(x₁, x₂, . . . , xₙ|1/σ²) = (2πσ²)^{−n/2} exp[−(1/2) Σ(xᵢ − µ)²/σ²]

       f(1/σ²) = [Γ(m + 1)]^{−1} (mσ₀²)^{m+1} (1/σ²)^m e^{−mσ₀²/σ²}

       The posterior density for 1/σ² is gamma with parameters m + (n/2) + 1 and mσ₀² +
       Σ(xᵢ − µ)².

10–27. f(x₁, x₂, . . . , xₙ|p) = pⁿ(1 − p)^{Σxᵢ−n},

       f(p) = [Γ(a + b)/(Γ(a)Γ(b))] p^{a−1}(1 − p)^{b−1}

       The posterior density for p is a beta distribution with parameters a + n and
       b + Σxᵢ − n.

10–28. f(x₁, x₂, . . . , xₙ|p) = p^{Σxᵢ}(1 − p)^{n−Σxᵢ},

       f(p) = [Γ(a + b)/(Γ(a)Γ(b))] p^{a−1}(1 − p)^{b−1}

       The posterior density for p is a beta distribution with parameters a + Σxᵢ and
       b + n − Σxᵢ.

10–29. f(x₁, x₂, . . . , xₙ|λ) = λ^{Σxᵢ} e^{−nλ} / ∏xᵢ!

       f(λ) = [Γ(m + 1)]^{−1} ((m + 1)/λ₀)^{m+1} λ^m e^{−(m+1)λ/λ₀}

       The posterior density for λ is gamma with parameters r = m + Σxᵢ + 1 and
       δ = n + (m + 1)/λ₀.

10–30. From Exercise 10–25 and using the relationship that the Bayes estimator for µ
       under a squared-error loss function is µ̂ = (1/c)[nx̄/σ² + µ₀/σ₀²], we have

             µ̂ = [25/40 + 1/8]^{−1} [25(4.85)/40 + 4/8] = 4.708
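The 10–30 posterior mean is a one-line computation once the precision-weighted form is written out:

```python
# Reproduces 10-30: posterior mean of mu under a normal prior,
# muhat = (n*xbar/sigma^2 + mu0/sigma0^2) / (n/sigma^2 + 1/sigma0^2),
# with n/sigma^2 = 25/40 and mu0/sigma0^2 = 4/8 as in the solution.
n, xbar, sigma2 = 25, 4.85, 40.0
mu0, sigma0_2 = 4.0, 8.0
c = n / sigma2 + 1 / sigma0_2
muhat = (n * xbar / sigma2 + mu0 / sigma0_2) / c
assert abs(muhat - 4.708) < 0.001
```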

10–31. µ̂ = 1.05 − (0.1/2)[Φ((1.20 − 1.05)/(0.1/2)) − Φ((0.98 − 1.05)/(0.1/2))]^{−1} (2π)^{−1/2}
                  × [e^{−(1/2)((1.20−1.05)/(0.1/2))²} − e^{−(1/2)((0.98−1.05)/(0.1/2))²}]

          = 1.05 − (0.0545)(0.399)(0.223 − 4.055) = 0.967

10–32. Σxᵢ = 6270, λ̂ = 0.000323


10–33. µ̂ = [25/0.1 + 1/0.04]^{−1} [25(10.05)/0.1 + 10/0.04] = 10.045

       weight ∼ N(10.045, 0.1)

       P(weight < 9.95) = P(Z < (9.95 − 10.045)/0.316) = Φ(−0.301) ≈ 0.3783
10–34. From a previous exercise, the posterior is gamma with parameters r = Σxᵢ + 1 and
       δ = n + (1/λ₀). Since n = 10 and Σxᵢ = 45,

             f(λ|x₁, . . . , x₁₀) = [Γ(46)]^{−1} (14)^{46} λ^{45} e^{−14λ}

       The Bayes interval requires us to find L and U so that

             [Γ(46)]^{−1} (14)^{46} ∫_L^U λ^{45} e^{−14λ} dλ = 0.95

       Since r is an integer, tables of the Poisson distribution could be used to find L and U.
10–35. (a) f(x₁|θ) = 2x/θ², f(θ) = 1, 0 < θ < 1, f(x₁, θ) = 2x/θ², and

             f(x₁) = ∫ₓ¹ (2x/θ²) dθ = −2x/θ |ₓ¹ = 2 − 2x;  0 < x < 1

       The posterior density is

             f(θ|x₁) = f(x₁, θ)/f(x₁) = 2x / [θ²(2 − 2x)]

       (b) The estimator must minimize

             Z = ∫ ℓ(θ̂; θ) f(θ|x₁) dθ = ∫₀¹ θ²(θ̂ − θ)² {2x/[θ²(2 − 2x)]} dθ
               = [2x/(2 − 2x)] [θ̂² − θ̂ + 1/3]

       From dZ/dθ̂ = 0 we get θ̂ = 1/2.

10–36. f(x₁|p) = pˣ(1 − p)^{1−x}, f(x₁, p) = 6p^{x+1}(1 − p)^{2−x}, and

             f(x₁) = ∫₀¹ 6p^{x+1}(1 − p)^{2−x} dp = Γ(x + 2)Γ(3 − x)/4

       The posterior density for p is

             f(p|x₁) = 24p^{x+1}(1 − p)^{2−x} / [Γ(x + 2)Γ(3 − x)]

       For a squared-error loss, the Bayes estimator is

             p̂ = E(p|x₁) = ∫₀¹ 24p^{x+2}(1 − p)^{2−x} / [Γ(x + 2)Γ(3 − x)] dp
                = Γ(x + 3)/[5Γ(x + 2)] = (x + 2)/5

       If ℓ(p̂; p) = 2(p̂ − p)², the Bayes estimator must minimize

             Z = {24/[Γ(x + 2)Γ(3 − x)]} ∫₀¹ 2(p̂ − p)² p^{x+1}(1 − p)^{2−x} dp
               = [2/Γ(x + 2)] [p̂²Γ(x + 2) − 2p̂Γ(x + 3)/5 + Γ(x + 4)/30]

       From dZ/dp̂ = 0, p̂ = (x + 2)/5.

10–37. For α₁ = α₂ = α/2, α = 0.05: x̄ ± 1.96(σ/√n)

       For α₁ = 0.01, α₂ = 0.04: x̄ − 1.751(σ/√n) ≤ µ ≤ x̄ + 2.323(σ/√n)

       The α₁ = α₂ = α/2 interval is shorter.

10–38. (a) N(0, 1)

       (b) X̄ − Z_{α/2}√(X̄/n) ≤ λ ≤ X̄ + Z_{α/2}√(X̄/n)

10–39. (a) x̄ − Z_{α/2}(σ/√n) ≤ µ ≤ x̄ + Z_{α/2}(σ/√n)
           74.03533 ≤ µ ≤ 74.03666
       (b) x̄ − Z_α(σ/√n) ≤ µ
           74.0356 ≤ µ

10–40. (a) 1003.04 ≤ µ ≤ 1024.96


(b) 1004.80 ≤ µ

10–41. (a) 3232.11 ≤ µ ≤ 3267.89


(b) 3226.49 ≤ µ ≤ 3273.51
The width of the confidence interval in (a) is 35.78, and the width of the interval
in (b) is 47.01. The wider confidence interval in (b) reflects the higher confidence
coefficient.

10–42. n = (Z_{α/2}σ/E)² = [(1.96)25/5]² = 96.04 ≈ 97

10–43. For the total width to be 8, the half-width must be 4, therefore n = (Z_{α/2}σ/E)² =
       [(1.96)25/4]² = 150.06 ≈ 150 or 151.

10–44. n = (Z_{α/2}σ/E)² = 1000(1.96/15)² = 17.07 ≈ 18.
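The sample-size formula n = (Z_{α/2}σ/E)² used in 10–42 through 10–44 rounds up to the next integer, which is easy to verify directly:

```python
import math

# Sample sizes for 10-42 through 10-44 via n = (z * sigma / E)^2, rounded up.
z = 1.96
assert math.ceil((z * 25 / 5) ** 2) == 97      # 10-42: 96.04 -> 97
assert math.ceil((z * 25 / 4) ** 2) == 151     # 10-43: 150.06 -> 151
assert math.ceil(1000 * (z / 15) ** 2) == 18   # 10-44: sigma^2 = 1000, 17.07 -> 18
```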

10–45. (a) 0.0723 ≤ µ1 − µ2 ≤ 0.3076


(b) 0.0499 ≤ µ1 − µ2 ≤ 0.33
(c) µ1 − µ2 ≤ 0.3076

10–46. 3.553 ≤ µ1 − µ2 ≤ 8.447

10–47. −3.68 ≤ µ1 − µ2 ≤ −2.12

10–48. (a) 2238.6 ≤ µ ≤ 2275.4


(b) 2242.63 ≤ µ
(c) 2240.11 ≤ µ ≤ 2275.39

10–49. 183.0 ≤ µ ≤ 256.6



10–50. 4.05 − t0.10,24 (0.08/ 25) ≤ µ ⇒ 4.029 ≤ µ

10–51. 13

10–52. (a) 546.12 ≤ µ ≤ 553.88


(b) 546.82 ≤ µ
(c) µ ≤ 553.18

10–53. 94.282 ≤ µ ≤ 111.518

10–54. (a) 7.65 ≤ µ1 − µ2 ≤ 12.346


(b) 8.03 ≤ µ1 − µ2
(c) µ1 − µ2 ≤ 11.97

10–55. −0.839 ≤ µ1 − µ2 ≤ −0.679

10–56. (a) −0.561 ≤ µ1 − µ2 ≤ 1.561


(b) µ1 − µ2 ≤ 1.384
(c) −0.384 ≤ µ1 − µ2

10–57. 0.355 ≤ µ1 − µ2 ≤ 0.455

10–58. −30.24 ≤ µ1 − µ2 ≤ −19.76

10–59. From 10–48, s = 34.51, n = 16, and (n − 1)s2 = 17864.1


(a) 649.60 ≤ σ 2 ≤ 2853.69
(b) 714.56 ≤ σ 2
(c) σ 2 ≤ 2460.62

10–60. (a) 1606.18 ≤ σ 2 ≤ 26322.15


(b) 1755.68 ≤ σ 2
(c) σ 2 ≤ 21376.78

10–61. 0.0039 ≤ σ 2 ≤ 0.0157

10–62. σ 2 ≤ 193.09

10–63. 0.574 ≤ σ12 /σ22 ≤ 3.614



10–64. s21 = 0.29, s22 = 0.34, s21 /s22 = 1.208, n1 = 12, n2 = 18

(a) 0.502 ≤ σ12 /σ22 ≤ 2.924


(b) 0.423 ≤ σ12 /σ22 ≤ 3.468
(c) 0.613 ≤ σ12 /σ22
(d) σ12 /σ22 ≤ 2.598

10–65. 0.11 ≤ σ12 /σ22 ≤ 0.86

10–66. 0.089818 ≤ p ≤ 0.155939

10–67. n = 4057

10–68. p ≤ 0.00348

10–69. n = (Zα/2 /E)2 p(1 − p) = (2.575/0.01)2 p(1 − p) = 66306.25p(1 − p). The most
conservative choice of p is p = 0.5, giving n = 16576.56 or n = 16577 homeowners.
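The worst-case sample size in 10–69 follows from p(1 − p) being maximized at p = 0.5:

```python
import math

# 10-69: n = (z_{alpha/2}/E)^2 * p(1-p), taken at the conservative p = 0.5.
z, E = 2.575, 0.01
n = (z / E) ** 2 * 0.5 * 0.5
assert abs(n - 16576.5625) < 1e-6
assert math.ceil(n) == 16577   # round up to whole homeowners
```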

10–70. 0.0282410 ≤ p1 − p2 ≤ 0.0677590

10–71. −0.0244 ≤ p1 − p2 ≤ 0.0024

10–72. −8.50 ≤ µ1 − µ2 ≤ 1.94

10–73. −2038 ≤ µ1 − µ2 ≤ 3774.8

10–74. Since X and S 2 are independent, we can construct confidence intervals for µ and σ 2
such that we are 90 percent confident that both intervals provide correct conclusions
by constructing a 100(0.90)1/2 percent confidence interval for each parameter. That
is, we need a 95 percent confidence interval on µ and σ 2 . Thus, 3.938 ≤ µ ≤ 4.057
and 0.0049 ≤ σ 2 ≤ 0.0157 provides the desired simultaneous confidence intervals.

10–75. Assume that all three variances are equal. A 95 percent simultaneous confidence
interval on µ1 − µ2 , µ1 − µ3 , and µ2 − µ3 will require that the individual intervals
use α/3 = 0.05/3 = 0.0167.

For µ1 − µ2 , s2p = 1.97, t0.0167/2,18 ' 2.64; −3.1529 ≤ µ1 − µ2 ≤ 0.1529


For µ1 − µ3 , s2p = 1.76, t0.0167/2,23 ' 2.59; −1.9015 ≤ µ1 − µ3 ≤ 0.9015
For µ2 − µ3 , s2p = 1.24, t0.0167/2,23 ' 2.59; −0.1775 ≤ µ2 − µ3 ≤ 2.1775

10–76. The posterior density for µ is truncated normal:

       f(µ|x₁, . . . , x₁₆) = √(16/(2π·10)) [Φ((12 − 8)/√(10/16)) − Φ((6 − 8)/√(10/16))]^{−1}
                              × e^{−(1/2)((µ−8)/√(10/16))²}

       for 6 < µ ≤ 12. From the normal tables, the 90% interval estimate for µ is centered
       at 8 and runs from 8 − (1.795)(1.054) = 6.108 to 9.892. Since 6.108 < 9 < 9.892, we
       have no evidence to reject H₀.

10–77. The posterior density for 1/σ² is gamma with parameters r + (n/2) and λ + Σ(xᵢ − µ)².
       For r = 3, λ = 1, n = 10, µ = 5, Σ(xᵢ − 5)² = 4.92, the Bayes estimate of 1/σ² is
       (3 + 5)/(1 + 4.92) = 1.35. The interval estimate requires solving

             0.90 = ∫_L^U [Γ(8)]^{−1} (5.92)⁸ (1/σ²)⁷ e^{−5.92/σ²} d(1/σ²)
10–78. Z = ∫_{−∞}^{∞} (θ̂ − θ)² f(θ|x₁, x₂, . . . , xₙ) dθ
         = θ̂² ∫_{−∞}^{∞} f(θ|x₁, . . . , xₙ) dθ − 2θ̂ ∫_{−∞}^{∞} θ f(θ|x₁, . . . , xₙ) dθ
           + ∫_{−∞}^{∞} θ² f(θ|x₁, . . . , xₙ) dθ

       Let
             µ_θ = ∫_{−∞}^{∞} θ f(θ|x₁, . . . , xₙ) dθ   and   τ_θ = ∫_{−∞}^{∞} θ² f(θ|x₁, . . . , xₙ) dθ

       Then
             Z = θ̂² − 2θ̂µ_θ + τ_θ
             dZ/dθ̂ = 2θ̂ − 2µ_θ = 0,  so  θ̂ = µ_θ, the posterior mean.

Chapter 11

11–1. (a) H₀: µ ≤ 160, H₁: µ > 160

            Z₀ = (x̄ − µ₀)/(σ/√n) = (158 − 160)/(3/2) = −1.333

          The fiber is acceptable if Z₀ > Z₀.₀₅ = 1.645. Since Z₀ = −1.33 < 1.645, the
          fiber is not acceptable.

      (b) d = (µ − µ₀)/σ = (165 − 160)/3 = 1.67; if n = 4 then using the OC curves, β ≈ 0.05.
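The one-sample z statistic used throughout this chapter, Z₀ = (x̄ − µ₀)/(σ/√n), reproduces the 11–1(a) value directly:

```python
import math

# 11-1(a): one-sample z statistic.
xbar, mu0, sigma, n = 158, 160, 3, 4
z0 = (xbar - mu0) / (sigma / math.sqrt(n))
assert abs(z0 - (-1.333)) < 0.001   # matches Z0 = -1.333
```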

11–2. (a) H₀: µ = 90, H₁: µ < 90

            Z₀ = (x̄ − µ₀)/(σ/√n) = (90.48 − 90)/√(5/5) = 0.48

          Since Z₀ is not less than −Z₀.₀₅ = −1.645, do not reject H₀. There is no
          evidence that mean yield is less than 90 percent.

      (b) n = (Z_α + Z_β)²σ²/δ² = (1.645 + 1.645)²(5)/(5)² = 2.16 ≈ 3. Could also use the
          OC curves, with d = (µ₀ − µ)/σ = (90 − 85)/√5 = 2.24 and β = 0.05, also
          giving n = 3.

11–3. (a) H₀: µ = 0.255, H₁: µ ≠ 0.255

            Z₀ = (x̄ − µ₀)/(σ/√n) = (0.2546 − 0.255)/(0.0001/√10) = −12.65

          Since |Z₀| = 12.65 > Z₀.₀₂₅ = 1.96, reject H₀.

      (b) d = |µ − µ₀|/σ = |0.2552 − 0.255|/0.0001 = 2, and using the OC curves with
          α = 0.05 and β = 0.10 gives n ≈ 3. Could also use n ≈ (Z_{α/2} + Z_β)²σ²/δ² =
          (1.96 + 1.28)²(0.0001)²/(0.0002)² = 2.62 ≈ 3.

11–4. (a) H₀: µ = 74.035, H₁: µ ≠ 74.035

            Z₀ = (x̄ − µ₀)/(σ/√n) = (74.036 − 74.035)/(0.001/√15) = 3.87

          Since Z₀ > Z_{α/2} = 2.575, reject H₀.

      (b) n ≈ (Z_{α/2} + Z_β)²σ²/δ² = 0.712, n = 1
δ2
11–5. (a) H₀: µ = 1000, H₁: µ ≠ 1000

            Z₀ = (x̄ − µ₀)/(σ/√n) = (1014 − 1000)/(25/√20) = 2.50

          |Z₀| = 2.50 > Z₀.₀₂₅ = 1.96, reject H₀.

11–6. (a) H₀: µ = 3500, H₁: µ ≠ 3500

            Z₀ = (x̄ − µ₀)/(σ/√n) = (3250 − 3500)/√(1000/12) = −27.39

          |Z₀| = 27.39 > Z₀.₀₀₅ = 2.575, reject H₀.

11–7. (a) H₀: µ₁ = µ₂, H₁: µ₁ ≠ µ₂

            Z₀ = (x̄₁ − x̄₂)/√(σ₁²/n₁ + σ₂²/n₂) = 1.349

          Since Z₀ < Z_{α/2} = 1.96, do not reject H₀.

      (d) d = 3.2, n = 10, α = 0.05; the OC curves give β ≈ 0, therefore power ≈ 1.

11–8. µ1 = New machine, µ2 = Current machine


H0 : µ1 − µ2 ≤ 2, H1 : µ1 − µ2 > 2
Use the t-distribution assuming equal variances: t0 = −5.45, do not reject H0 .

11–9. H₀: µ₁ − µ₂ = 0, H₁: µ₁ − µ₂ ≠ 0

            Z₀ = (x̄₁ − x̄₂)/√(σ₁²/n₁ + σ₂²/n₂) = 2.656

      Since Z₀ > Z_{α/2} = 1.645, reject H₀.

11–10. H₀: µ₁ − µ₂ = 0, H₁: µ₁ − µ₂ > 0

            Z₀ = (x̄₁ − x̄₂)/√(σ₁²/n₁ + σ₂²/n₂) = −6.325

       Since Z₀ < Z_α = 1.645, do not reject H₀.

11–11. H₀: µ₁ − µ₂ = 0, H₁: µ₁ − µ₂ < 0

            Z₀ = (x̄₁ − x̄₂)/√(σ₁²/n₁ + σ₂²/n₂) = −7.25

       Since Z₀ < −Z_α = −1.645, reject H₀.


11–12. H₀: µ = 0, H₁: µ ≠ 0

            t₀ = (x̄ − µ₀)/(s/√n) = (−0.168 − 0)/(8.5638/√10) = −0.062

       |t₀| = 0.062 < t₀.₀₂₅,₉ = 2.2622, do not reject H₀.

11–13. (a) t0 = 1.842, do not reject H0 .


(b) n = 8 is not sufficient; n = 10.
11–14. H₀: µ = 9.5, H₁: µ > 9.5

            t₀ = (10.28 − 9.5)/(2.55/√6) = 0.7492

       t₀.₀₅,₅ = 2.015, do not reject H₀.
11–15. t0 = 1.47, do not reject at α = 0.05 level of significance. It can be rejected at the
α = 0.10 level of significance.

11–16. (a) H₀: µ = 7.5, H₁: µ < 7.5

             t₀ = (6.997 − 7.5)/(1.279/√8) = −1.112

           t₀.₀₅,₇ = 1.895, do not reject H₀; the true scrap rate is not < 7.5%.
(b) n = 5
(c) 0.95

11–17. n = 3

11–18. d = |δ|/σ = 20/10 = 2, n = 3
11–19. (a) H₀: µ₁ = µ₂, H₁: µ₁ > µ₂

             t₀ = (x̄₁ − x̄₂)/(s_p√(1/n₁ + 1/n₂)) = (25.617 − 21.7)/(0.799√(1/6 + 1/6)) = 8.49

           t₀ > t₀.₀₁,₁₀ = 2.7638, reject H₀.

       (b) H₀: µ₁ − µ₂ = 5, H₁: µ₁ − µ₂ > 5

             t₀ = (x̄₁ − x̄₂ − 5)/(s_p√(1/n₁ + 1/n₂)) = 2.35, do not reject H₀.

       (c) Using s_p = 0.799 as an estimate of σ,
           d = (µ₁ − µ₂)/(2σ) = 5/[2(0.799)] = 3.13, n₁ = n₂ = 6, α = 0.01; the OC curves
           give β ≈ 0, so power ≈ 1.

       (d) OC curves give n = 5.

11–20. t0 = −0.02, do not reject H0 .

11–21. H₀: σ₁² = σ₂², H₁: σ₁² ≠ σ₂²;  s₁ = 9.4186, s₁² = 88.71; s₂ = 10.0222, s₂² = 100.44

       α = 0.05. Reject H₀ if F₀ > F₀.₀₂₅,₉,₉ = 3.18.

             F₀ = 88.71/100.44 = 0.8832  ∴ do not reject H₀: σ₁² = σ₂².

11–22. (a) H₀: σ₁² = σ₂², H₁: σ₁² ≠ σ₂²;  F₀ = s₁²/s₂² = 101.17/94.73 = 1.07,
           do not reject H₀.

       (b) H₀: µ₁ = µ₂, H₁: µ₁ > µ₂

             t₀ = (x̄₁ − x̄₂)/(s_p√(1/n₁ + 1/n₂)) = (12.5 − 10.2)/(9.886√(1/8 + 1/9)) = 0.48,

           do not reject H₀.
11–23. (a) H₀: µ₁ = µ₂, H₁: µ₁ ≠ µ₂;  s_p = √((1480 + 1425)/18) = 12.704

             t₀ = (20.0 − 15.8)/(12.704√(1/10 + 1/10)) = 0.74, do not reject H₀.

           Reject H₀ if |t₀| > t₀.₀₀₅,₉ = 3.250.

       (b) d = 10/[2(12.7)] = 0.39, Power = 0.13, n* = 19

       (c) n₁ = n₂ = 75

11–24. (a) H₀: µ₁ = µ₂, H₁: µ₁ ≠ µ₂

             t₀ = (x̄₁ − x̄₂)/(s_p√(1/n₁ + 1/n₂)) = (20.0 − 21.5)/(1.40√(1/10 + 1/10)) = −2.40,

           reject H₀.

       (b) Use s_p = 1.40 as an estimate of σ. Then d = |µ₁ − µ₂|/2σ = 2/[2(1.40)] = 0.7.
           If α = 0.05 and n₁ = n₂ = 10, the OC curves give β ≈ 0.5. For β ≈ 0.15, we
           must have n₁ = n₂ ≈ 30.

       (c) F₀ = s₁²/s₂² = 2.25/1.69 = 1.33, do not reject H₀.

       (d) λ = σ₁/σ₂ = 2, α = 0.05, n₁ = n₂ = 10; the OC curves give β ≈ 0.50.

11–25. H₀: µ₁ = µ₂, H₁: µ₁ ≠ µ₂

             t₀ = (x̄₁ − x̄₂)/(s_p√(1/n₁ + 1/n₂)) = (8.75 − 8.63)/(0.57√(1/12 + 1/18)) = 0.56,

       do not reject H₀.

11–26. H₀: σ² = 16, H₁: σ² < 16

       If α = 0.05, λ = σ₁/σ₀ = 3/4, and β = 0.10, then n ≈ 55. Thus n = 10 is not good.

       For the sample of n = 10 given, χ₀² = (n − 1)s²/σ₀² = 9(14.69)/16 = 8.26. Since
       χ²₀.₉₅,₉ = 3.325, do not reject H₀.
11–27. (a) H₀: σ = 0.00002, H₁: σ > 0.00002

             χ₀² = (n − 1)s²/σ₀² = 7(0.00005)²/(0.00002)² = 43.75

           Since χ₀² > χ²₀.₀₁,₇ = 18.475, reject H₀. The claim is unjustified.

       (b) A 99% one-sided lower confidence interval is 0.3078 × 10⁻⁴ ≤ σ².

       (c) λ = σ₁/σ₀ = 2, α = 0.01, n = 8; the OC curves give β ≈ 0.30.

       (d) λ = 2, β ≈ 0.05, α = 0.01; the OC curves give n = 17.
11–28. H₀: σ = 0.005, H₁: σ > 0.005

       If α = 0.01, β = 0.10, λ = 0.010/0.005 = 2, then the OC curves give n = 14.

       Assuming n = 14, χ₀² = (n − 1)s²/σ₀² = 13(0.007)²/(0.005)² = 25.48 < χ²₀.₀₁,₁₃ =
       27.688, and we do not reject. The 95% one-sided upper confidence interval is
       σ² ≤ 0.155 × 10⁻³.
11–29. (a) H₀: σ² = 0.5, H₁: σ² ≠ 0.5

             χ₀² = (n − 1)s²/σ₀² = 11(0.10388)/0.5 = 2.28, reject H₀.

       (b) λ = σ/σ₀ = 1/0.707 = 1.414, β ≈ 0.58
11–30. H₀: σ₁² = σ₂², H₁: σ₁² ≠ σ₂²;  F₀ = s₁²/s₂² = 2.25 × 10⁻⁴/3.24 × 10⁻⁴ = 0.69,
       do not reject H₀.

       Since σ₁² = σ₂², the test on means in Exercise 11–7 is appropriate. If λ = σ₁/σ₂ =
       √2.5 = 1.58, then using α = 0.01, β = 0.10, the OC curves give n ≈ 75.
11–31. H₀: σ₁² = σ₂², H₁: σ₁² > σ₂²;  F₀ = s₁²/s₂² = 0.9027/0.0294 = 30.69

       F₀ > F₀.₀₁,₈,₁₀ = 5.06, so reject H₀.

       If λ = √4 = 2, α = 0.01, and taking n₁ ≈ n₂ = 10 (say), we get β ≈ 0.65.
(0.984 − 0.907)
11–32. (b) H0 : µ1 − µ2 = 0, t0 = q = 0.0843, do not reject H0 .
1 1
H1 : µ1 − µ2 6= 0 11.37( 25 + 30 )

11–33. H₀: µ_d = 0, H₁: µ_d ≠ 0

             t₀ = (d̄ − 0)/(s_d/√n) = (5.0 − 0)/(15.846/√10) = 0.998, do not reject H₀.
11–34. Using µD = µA − µB , t0 = −1.91, do not reject H0 .
11–35. H₀: µ_d = 0, H₁: µ_d ≠ 0. Reject H₀ if |t₀| > t₀.₀₂₅,₅ = 2.571.

             t₀ = (3 − 0)/(1.41/√6) = 5.21  ∴ reject H₀
11–36. t0 = 2.39, reject H0 .

11–37. H₀: p = 0.70, H₁: p ≠ 0.70;  Z₀ = (699 − 700)/√(1000(0.7)(0.3)) = −0.069, do not
       reject H₀.

11–38. H₀: p = 0.025, H₁: p ≠ 0.025;  Z₀ = (18 − 200)/√(8000(0.975)(0.025)) = −13.03,
       reject H₀.

11–39. The “best” test will maximize the probability that H₀ is rejected, so we want to

             max Z₀ = (X̄₁ − X̄₂)/√(σ₁²/n₁ + σ₂²/n₂)

       subject to n₁ + n₂ = N.

       Since for a given sample, X̄₁ − X̄₂ is fixed, this is equivalent to

             min L = σ₁²/n₁ + σ₂²/n₂

       subject to n₁ + n₂ = N. Since n₂ = N − n₁, we have

             min L = σ₁²/n₁ + σ₂²/(N − n₁)

       and from dL/dn₁ = 0 we find

             σ₁/σ₂ = n₁/n₂,

       which says that the observations should be assigned to the populations in the same
       ratio as the standard deviations.
11–40. z0 = 6.26, reject H0 .
11–41. H₀: p₁ = p₂, H₁: p₁ < p₂;  Z₀ = (0.01 − 0.021)/√(0.016(0.984)(1/1000 + 1/1200)) =
       −2.023, do not reject H₀.
0.016(0.984)( 1000 + 1200 )

11–42. H₀: p₁ = p₂, H₁: p₁ ≠ p₂;  Z₀ = (0.042 − 0.064)/√(0.053(0.947)(2/500)) = −1.55,
       do not reject H₀.

11–43. Let 2σ² = σ₁²/n₁ + σ₂²/n₂ be the specified sample variance. If we minimize c₁n₁ + c₂n₂
       subject to the constraint σ₁²/n₁ + σ₂²/n₂ = 2σ², we obtain the solution

             n₁/n₂ = √(σ₁²c₂/(σ₂²c₁)).

11–44. H₀: µ₁ = 2µ₂, H₁: µ₁ > 2µ₂

             Z₀ = (X̄₁ − 2X̄₂)/√(σ₁²/n₁ + 4σ₂²/n₂)

11–45. H₀: σ² = σ₀², H₁: σ² ≠ σ₀²

       β = P(χ²_{1−α/2,n−1} ≤ (n − 1)S²/σ₀² ≤ χ²_{α/2,n−1} | σ² = σ₁² ≠ σ₀²)
         = P((σ₀²/σ₁²)χ²_{1−α/2,n−1} ≤ (n − 1)S²/σ₁² ≤ (σ₀²/σ₁²)χ²_{α/2,n−1})

       which can be evaluated using tables of χ².

11–46. H₀: σ₁² = σ₂², H₁: σ₁² ≠ σ₂²

       β = P(F_{1−α/2,u,v} ≤ S₁²/S₂² ≤ F_{α/2,u,v} | σ₁²/σ₂²)
         = P((σ₂²/σ₁²)F_{1−α/2,u,v} ≤ (σ₂²/σ₁²)(S₁²/S₂²) ≤ (σ₂²/σ₁²)F_{α/2,u,v})

       Since (σ₂²/σ₁²)(S₁²/S₂²) follows an F-distribution, β may be evaluated by using tables
       of F.

11–47. (a) Assume the class intervals are defined as follows:

Class Interval Oi Ei (Oi − Ei )2 /Ei


−∞ < X < 11 6 6.15 0.004
11 ≤ X < 16 11 10.50 0.024
16 ≤ X < 21 16 19.04 0.485
21 ≤ X < 26 28 24.68 0.447
26 ≤ X < 31 22 24.54 0.263
31 ≤ X < 36 19 14.63 1.305
36 ≤ X < 41 11 12.36 0.150
41 ≤ X 4 5.10 0.237
χ20 = 2.915

The expected frequencies are obtained by evaluating n[Φ((cᵢ − x̄)/s) − Φ((cᵢ₋₁ − x̄)/s)],
where cᵢ is the upper boundary of cell i. For our problem,

      Eᵢ = 117[Φ((cᵢ − 25.61)/9.02) − Φ((cᵢ₋₁ − 25.61)/9.02)].

Since χ20 = 2.915 < χ20.05,5 = 11.070, do not reject H0 .


(b) To use normal probability paper for data expressed in a histogram, find the
cumulative probability associated with each interval, and plot this against
the upper boundary of each cell.
Cell Upper Bound Observed Frequency Pj
11 6 0.051
16 11 0.145
21 16 0.282
26 28 0.521
31 22 0.709
36 19 0.872
41 11 0.966
45 4 1.000
Normal probability plot.
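The chi-square statistic in 11–47(a) is just the sum of the tabulated (Oᵢ − Eᵢ)²/Eᵢ terms, which can be recomputed from the observed and expected counts:

```python
# 11-47(a): chi-square goodness-of-fit statistic from the tabulated cell counts.
O = [6, 11, 16, 28, 22, 19, 11, 4]
E = [6.15, 10.50, 19.04, 24.68, 24.54, 14.63, 12.36, 5.10]
chi2 = sum((o - e) ** 2 / e for o, e in zip(O, E))
assert abs(chi2 - 2.915) < 0.001   # matches chi-square_0 = 2.915
```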

11–48. Estimate λ̂ = x̄ = 4.9775. The expected frequencies are


Defects 0 1 2 3 4 5 6 7 8 9 10 11 12
Oi 4 13 34 56 70 70 58 42 25 15 9 3 1
Ei 2.76 13.72 34.15 56.66 70.50 70.50 70.18 58.22 41.40 25.76 14.25 3.21 1.33

Three cells have expected values less than 5, so they are combined with other cells
to get:
Defects 0–1 2 3 4 5 6 7 8 9 10–12
Oi 17 34 56 70 70 58 42 25 15 13
Ei 16.48 34.15 56.66 70.50 70.50 70.18 58.22 41.40 25.76 18.79

χ20 = 1.8846, χ20.05,8 = 15.51, do not reject H0 , the data could follow a Poisson
distribution.
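The same goodness-of-fit check can be scripted. A sketch with scipy (the pmf-based expected counts differ in places from the printed Ei row, so the statistic comes out smaller, but the conclusion is unchanged):

```python
from scipy.stats import poisson, chi2

observed = [4, 13, 34, 56, 70, 70, 58, 42, 25, 15, 9, 3, 1]  # defects 0..12
n = sum(observed)  # 400
lam = 4.9775       # lambda-hat = sample mean

# Expected counts: Poisson pmf for 0..11, lumping the upper tail into the last cell
expected = [n * poisson.pmf(k, lam) for k in range(12)]
expected.append(n * poisson.sf(11, lam))

# Combine cells with small expected counts (0 with 1, and 10-12 together)
obs_c = [observed[0] + observed[1]] + observed[2:10] + [sum(observed[10:])]
exp_c = [expected[0] + expected[1]] + expected[2:10] + [sum(expected[10:])]
chi2_0 = sum((o - e) ** 2 / e for o, e in zip(obs_c, exp_c))
crit = chi2.ppf(0.95, len(obs_c) - 1 - 1)  # df = 10 - 1 - 1 = 8
```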
11–49. x Oi Ei (Oi − Ei )2 /Ei
0 967 1000 1.089
1 1008 1000 0.064
2 975 1000 0.625
3 1022 1000 0.484
4 1003 1000 0.009
5 989 1000 0.121
6 1001 1000 0.001
7 981 1000 0.361
8 1043 1000 1.849
9 1011 1000 0.121
χ20 = 4.724 < χ20.05,9 = 16.919. Therefore, do not reject H0 .
11–50. (a) Assume that data given are the midpoints of the class intervals.
Class Interval Oi Ei (Oi − Ei )2 /Ei
X < 2.095 0 1.79∗
2.095 ≤ X < 2.105 16 6.65 6.77
2.105 ≤ X < 2.115 28 22.18 1.53
2.115 ≤ X < 2.125 41 56.39 4.20
2.125 ≤ X < 2.135 74 108.92 11.19
2.135 ≤ X < 2.145 149 159.60 0.70
2.145 ≤ X < 2.155 256 178.02 34.16
2.155 ≤ X < 2.165 137 150.81 1.26
2.165 ≤ X < 2.175 82 96.56 2.20
2.175 ≤ X < 2.185 40 47.59 1.21
2.185 ≤ X < 2.195 19 18.09 0.04
2.195 ≤ X < 2.205 11 5.12 3.31
2.205 ≤ X 0 1.28∗

∗ Grouped into the adjacent cell.
χ20 = 66.57 > χ20.05,8 = 15.507, reject H0 .

(b) Upper Cell Bound Observed Frequency Pj


2.105 16 0.019
2.115 28 0.052
2.125 41 0.100
2.135 74 0.186
2.145 149 0.361
2.155 256 0.661
2.165 137 0.872
2.175 82 0.918
2.185 40 0.965
2.195 19 0.987
2.205 11 1.000

11–51. X(j) P(j) X(j) P(j)


188.12 0.0313 203.62 0.5313
193.71 0.0938 204.55 0.5938
193.73 0.1563 208.15 0.6563
195.45 0.2188 211.14 0.7188
200.81 0.2813 219.54 0.7813
201.63 0.3438 221.31 0.8438
202.20 0.4063 224.39 0.9063
202.21 0.4688 226.16 0.9688

11–52. χ20 = 11.649 < χ20.05,6 = 12.592. Do not reject.

11–53. χ20 = 0.0331 < χ20.05,1 = 3.841. Do not reject.

11–54. χ20 = 25.554 > χ20.05,9 = 16.919. Reject H0 .

11–55. χ20 = 2.465 < χ20.05,4 = 9.488. Do not reject.

11–56. χ20 = 10.706 > χ20.05,3 = 7.81. Reject H0 .

11–57. The observed and expected frequencies are (expected in parentheses):

                    IA              A         Total
        L       216 (170.08)   245 (290.92)    461
        M       226 (234.28)   409 (400.72)    635
        H       114 (151.64)   297 (259.36)    411
        Total       556            951        1507

χ20 = 34.909, reject H0 . Based on these data, physical activity is not independent
of socioeconomic status.
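scipy reproduces both the expected frequencies and the test statistic of 11–57 in a single call:

```python
from scipy.stats import chi2_contingency

# Rows: L, M, H socioeconomic status; columns: IA (inactive), A (active)
observed = [[216, 245],
            [226, 409],
            [114, 297]]

chi2_0, p_value, df, expected = chi2_contingency(observed)
# chi2_0 is about 34.9 with df = 2, so independence is rejected
```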

11–58. χ20 = 13.6289 < χ20.05,8 = 15.507. Do not reject.

11–59. Expected frequencies (row proportions p̂i· , column proportions p̂·j ):

        17  62  55      p̂1· = 0.304    p̂·1 = 0.127
        22  81  71      p̂2· = 0.394    p̂·2 = 0.465
        17  62  54      p̂3· = 0.302    p̂·3 = 0.408

        χ20 = Σi Σj (Oij − Eij )²/Eij
            = 2.88 + 1.61 + 0.16 + 2.23 + 0.79 + 13.17 + 0 + 5.23 + 6 = 22.06

        χ20.05,4 = 9.488

∴ reject H0 , pricing strategy and facility conditions are not independent

11–60. (a)              Non Defective   Defective   Total
           Machine 1    468 (473.5)     32 (26.5)     500
           Machine 2    479 (473.5)     21 (26.5)     500
           Total        947             53           1000

           χ20 = 0.064 + 1.141 + 0.013 + 1.41 = 2.897
           χ20.05,1 = 3.841, do not reject H0 ; the populations do not differ
(b) homogeneity
(c) yes

Chapter 12

12–1. (a) Analysis of Variance


Source DF SS MS F P
Factor 3 80.17 26.72 3.17 0.047
Error 20 168.33 8.42
Total 23 248.50
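The F ratio in this table follows directly from the printed sums of squares; a quick arithmetic check, with scipy supplying the p-value:

```python
from scipy.stats import f

# Sums of squares and degrees of freedom from the ANOVA table above
ss_factor, df_factor = 80.17, 3
ss_error, df_error = 168.33, 20

ms_factor = ss_factor / df_factor        # 26.72
ms_error = ss_error / df_error           # 8.42
f0 = ms_factor / ms_error                # about 3.17
p_value = f.sf(f0, df_factor, df_error)  # about 0.047
```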

(b)

(c) Tukey’s pairwise comparisons

Family error rate = 0.0500


Individual error rate = 0.0111
Critical value = 3.96
Intervals for (column level mean) - (row level mean)

1 2 3
2 -5.357
4.024

3 -0.857 -0.190
8.524 9.190

4 -2.190 -1.524 -6.024


7.190 7.857 3.357

(d)

12–2. (a) Analysis of Variance for Obs


Source DF SS MS F P
Flowrate 2 3.648 1.824 3.59 0.053
Error 15 7.630 0.509
Total 17 11.278

(b)

12–3. (a) Analysis of Variance for Strength


Source DF SS MS F P
Technique 3 489740 163247 12.73 0.000
Error 12 153908 12826
Total 15 643648
(b) Tukey’s pairwise comparisons

Family error rate = 0.0500


Individual error rate = 0.0117
Critical value = 4.20
Intervals for (column level mean) - (row level mean)

1 2 3
2 -423
53

3 -201 -15
275 460

4 67 252 30
543 728 505

12–4. (a) Random effects

Analysis of Variance for Output


Source DF SS MS F P
Loom 4 0.34160 0.08540 5.77 0.003
Error 20 0.29600 0.01480
Total 24 0.63760

(b) σ̂τ² = (MSLoom − MSE )/n = (0.08540 − 0.01480)/5 = 0.01412

(c) σ̂² = MSE = 0.01480
(d) 0.035
(e)

12–5. (a) Analysis of Variance for Density


Source    DF   SS        MS        F     P
Temp       3   0.13911   0.04637   2.62  0.083
Error     18   0.31907   0.01773
Total     21   0.45818
(b) µ̂ = 21.70, τ̂1 = 0.023 τ̂2 = −0.166 τ̂3 = 0.029 τ̂4 = 0.059
(c)

12–6. (a) Analysis of Variance for Conductivity


Source DF SS MS F P
Coating 4 1060.50 265.13 16.35 0.000
Error 15 243.25 16.22
Total 19 1303.75
(b) µ̂ = 139.25, τ̂1 = 5.75 τ̂2 = 6.00 τ̂3 = −7.75 τ̂4 = −10.00 τ̂5 = 6.00
(c) 142.87 ≤ µ1 ≤ 147.13, 7.363 ≤ µ1 − µ2 ≤ 24.137

(d) Tukey’s pairwise comparisons


Family error rate = 0.0500
Individual error rate = 0.00747
Critical value = 4.37
Intervals for (column level mean) - (row level mean)

1 2 3 4
2 -9.049
8.549

3 4.701 4.951
22.299 22.549

4 6.951 7.201 -6.549


24.549 24.799 11.049

5 -9.049 -8.799 -22.549 -24.799


8.549 8.799 -4.951 -7.201

(e)

12–7. (a) Analysis of Variance for Response Time


Source DF SS MS F P
Circuit 2 260.9 130.5 4.01 0.046
Error 12 390.8 32.6
Total 14 651.7
(b) Tukey’s pairwise comparisons
Family error rate = 0.0500
Individual error rate = 0.0206
Critical value = 3.77
Intervals for (column level mean) - (row level mean)

1 2
2 -17.022
2.222

3 -7.222 0.178
12.022 19.422
(c) c1 = y1· − 2y2· + y3· , SSc1 = 14.4
c2 = y1· − y3· , SSc2 = 246.53
Only c2 is significant at α = 0.05
(d) 0.88

12–8. (a) Analysis of Variance for Shape


Source DF SS MS F P
Nozzle 4 0.10218 0.02554 8.92 0.000
Efflux 5 0.06287 0.01257 4.39 0.007
Error 20 0.05730 0.00286
Total 29 0.22235

(c)

12–9. (a) Analysis of Variance for Strength


Source DF SS MS F P
Chemical 3 12.95 4.32 2.38 0.121
Bolt 4 157.00 39.25 21.61 0.000
Error 12 21.80 1.82
Total 19 191.75
(c)

12–10. µ = (1/4)Σµi = 220/4 = 55, τ1 = 50 − 55 = −5, τ2 = 60 − 55 = 5,
       τ3 = 50 − 55 = −5, τ4 = 60 − 55 = 5. Then

           Φ² = nΣτi² /(aσ²) = n(100)/(4 · 25) = n,   Φ = √n.

       With β < 0.1 and α = 0.05, the OC curves give n = 7.
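Instead of reading the OC curves, β can be computed exactly from the noncentral F distribution; a sketch with scipy (an exact computation may indicate a somewhat smaller n than a chart reading):

```python
from scipy.stats import f, ncf

def anova_beta(n, taus, sigma_sq, alpha=0.05):
    """Type II error of the fixed-effects one-way ANOVA F test with n obs
    per treatment; noncentrality nc = n*sum(tau_i^2)/sigma^2 = a*Phi^2."""
    a = len(taus)
    nc = n * sum(t * t for t in taus) / sigma_sq
    fcrit = f.ppf(1 - alpha, a - 1, a * (n - 1))
    return ncf.cdf(fcrit, a - 1, a * (n - 1), nc)

beta = anova_beta(7, [-5, 5, -5, 5], 25)  # Exercise 12-10 with n = 7
```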
12–11. µ = (1/5)Σµi = 940/5 = 188, τi = µi − 188, i = 1, 2, . . . , 5
       τ1 = −13, τ2 = 2, τ3 = −28, τ4 = 12, τ5 = 27

           Φ² = (n/a)(Στi² /σ²) = (n/5)(1830/100) = 3.66n,   Φ = 1.91√n

       If β ≤ 0.05, α = 0.01, the OC curves give n = 3. If n = 3, then β ≈ 0.03.

12–12. The test statistic for the two-sample t-test (with n1 = n2 = n) is

           t0 = (ȳ1· − ȳ2· )/(Sp √(2/n)) ∼ t(2n − 2)

       so that

           t0² = (ȳ1· − ȳ2· )²(n/2)/Sp²
               = [ y1·²/(2n) + y2·²/(2n) − y1· y2· /n ] / Sp²

       But since y1· y2· /n = [ (y1· + y2· )² − y1·² − y2·² ]/(2n), the last equation
       becomes

           t0² = [ y1·²/n + y2·²/n − (y1· + y2· )²/(2n) ] / Sp² = SSTreatments / Sp²

       Note that Sp² = Σi Σj (yij − ȳi· )²/(2n − 2) = M SE . Therefore
       t0² = SSTreatments /M SE , and since the square of tu is F1,u (in general), we
       see that the two tests are equivalent.
12–13. V( Σi ci yi· ) = Σi V(ci yi· ) = Σi ci² V(yi· )
                      = Σi ci² V( Σj yij ) = Σi ci² (ni σ²)
                      = σ² Σi ni ci²

       with sums over i = 1, . . . , a and j = 1, . . . , ni .

12–14. For 4 treatments, a set of orthogonal contrasts is

           3y1· − y2· − y3· − y4· ,   2y2· − y3· − y4· ,   y3· − y4·

       Assuming equal n, the contrast sums of squares are

           Q1² = (3y1· − y2· − y3· − y4· )²/(12n),   Q2² = (2y2· − y3· − y4· )²/(6n),
           Q3² = (y3· − y4· )²/(2n)

       Now

           Q1² + Q2² + Q3² = [ 9 Σ yi·² − 6 ΣΣ(i<j) yi· yj· ] / (12n)

       and since

           ΣΣ(i<j) yi· yj· = (1/2)[ y··² − Σ yi·² ],

           Q1² + Q2² + Q3² = [ 12 Σ yi·² − 3y··² ] / (12n) = Σ yi·²/n − y··²/(4n)
                           = SSTreatments
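This identity can also be verified numerically on arbitrary data; a small sketch with numpy:

```python
import numpy as np

rng = np.random.default_rng(1)
a, n = 4, 5
y = rng.normal(10.0, 2.0, size=(a, n))  # a treatments, n observations each
totals = y.sum(axis=1)                  # treatment totals y_i.

ss_treatments = (totals ** 2).sum() / n - totals.sum() ** 2 / (a * n)

contrasts = [np.array([3, -1, -1, -1]),  # the orthogonal set above
             np.array([0, 2, -1, -1]),
             np.array([0, 0, 1, -1])]
# SS of a contrast with coefficients c: (c . totals)^2 / (n * sum(c^2))
ss_contrasts = sum((c @ totals) ** 2 / (n * (c ** 2).sum()) for c in contrasts)
```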

12–15. (a) 15µ̂ + 5τ̂1 + 5τ̂2 + 5τ̂3 = 307
           5µ̂ + 5τ̂1 = 104
           5µ̂ + 5τ̂2 = 111
           5µ̂ + 5τ̂3 = 92

           If Σ τ̂i = 0 (i = 1, 2, 3), then µ̂ = 20.47, τ̂1 = 0.33, τ̂2 = 1.73, and
           τ̂3 = −2.07. The estimate of τ1 − τ2 is

           τ̂1 − τ̂2 = 0.33 − 1.73 = −1.40

       (b) If τ̂3 = 0, then µ̂ = 18.40, τ̂1 = 2.40, τ̂2 = 3.80, and τ̂3 = 0. These
           estimators differ from those found in part (a). However, note that

           τ̂1 − τ̂2 = 2.40 − 3.80 = −1.40

           which agrees with part (a), because contrasts in the τi are uniquely estimated.

(c) Estimate Using


Function Part (a) Solution Part (b) Solution
µ + τ1 20.80 20.80
2τ1 − τ2 − τ3 1.00 1.00
µ + τ1 + τ2 22.53 24.60

Chapter 13

13–1. Analysis of Variance for Wear


Source DF SS MS F P
CS 2 0.0317805 0.0158903 15.94 0.000
DC 2 0.0271854 0.0135927 13.64 0.000
CS*DC 4 0.0006873 0.0001718 0.17 0.950
Error 18 0.0179413 0.0009967
Total 26 0.0775945

13–2. Analysis of Variance for Finish


Source DF SS MS F P
Drying 2 27.4 13.7 0.01 0.986
Paint 1 355.6 355.6 0.38 0.601
Drying*Paint 2 1878.8 939.4 5.03 0.026
Error 12 2242.7 186.9
Total 17 4504.4

13–3. −23.93 ≤ µ1 − µ2 ≤ 5.15

13–4. Analysis of Variance for Strength


Source DF SS MS F P
operator 2 2.250 1.125 0.29 0.759
machine 3 28.833 9.611 2.46 0.160
operator*machine 6 23.417 3.903 0.84 0.565
Error 12 56.000 4.667
Total 23 110.500

13–5. The results would be a mixed model. The test statistics would be:

Effect F0
Operator 0.241
Machine 2.46
Operator∗ Machine 0.84

There is no change in conclusions.



13–6. Analysis of Variance for time


Source DF SS MS F P
operator 2 0.01005 0.00503 0.07 0.937
engineer 1 0.04688 0.04688 0.62 0.512
operator*engineer 2 0.15005 0.07503 1.26 0.350
Error 6 0.35785 0.05964
Total 11 0.56483

13–7. Analysis of Variance for current


Source DF SS MS F P
glass 1 14450.0 14450.0 273.79 0.000
phos 2 933.3 466.7 8.84 0.004
glass*phos 2 133.3 66.7 1.26 0.318
Error 12 633.3 52.8
Total 17 16150.0

13–8.

There does not appear to be a problem with constant variance across levels of either
factor.

13–9. Analysis of Variance for strength


Source DF SS MS F P
Conc 2 7.7639 3.8819 10.62 0.001
Freeness 2 19.3739 9.6869 26.50 0.000
Time 1 20.2500 20.2500 55.40 0.000
Conc*Freeness 4 6.0911 1.5228 4.17 0.015
Conc*Time 2 2.0817 1.0408 2.85 0.084
Freeness*Time 2 2.1950 1.0975 3.00 0.075
Conc*Freeness*Time 4 1.9733 0.4933 1.35 0.290
Error 18 6.5800 0.3656
Total 35 66.3089

13–10. Estimated Effects and Coefficients for warpage (coded units)


Term Effect Coef SE Coef T P
Constant 1.3250 0.01398 94.81 0.000
A 0.2250 0.1125 0.01398 8.05 0.000
B -0.1625 -0.0813 0.01398 -5.81 0.000
C -0.4500 -0.2250 0.01398 -16.10 0.000
A*B -0.5125 -0.2563 0.01398 -18.34 0.000
A*C 0.0000 0.0000 0.01398 0.00 1.000
B*C 0.2875 0.1438 0.01398 10.29 0.000
A*B*C 0.0625 0.0313 0.01398 2.24 0.056
Based on the p-values, factors A, B, C, and the interactions AB and BC are
significant at the 5% level of significance.

13–11. Using only the significant factors and interactions, the resulting residuals are as
follows.

13–12. Estimated Effects and Coefficients for score (coded units)


Term Effect Coef SE Coef T P
Constant 182.781 0.4891 373.68 0.000
A -9.063 -4.531 0.4891 -9.26 0.000
B -1.312 -0.656 0.4891 -1.34 0.198
C -2.687 -1.344 0.4891 -2.75 0.014
D 3.937 1.969 0.4891 4.02 0.001
A*B 4.062 2.031 0.4891 4.15 0.001
A*C 0.688 0.344 0.4891 0.70 0.492
A*D -2.187 -1.094 0.4891 -2.24 0.040
B*C -0.563 -0.281 0.4891 -0.57 0.573
B*D -0.188 -0.094 0.4891 -0.19 0.850
C*D 1.688 0.844 0.4891 1.72 0.104
A*B*C -5.187 -2.594 0.4891 -5.30 0.000
A*B*D 4.687 2.344 0.4891 4.79 0.000
A*C*D -0.938 -0.469 0.4891 -0.96 0.352
B*C*D -0.938 -0.469 0.4891 -0.96 0.352
A*B*C*D 2.437 1.219 0.4891 2.49 0.024

13–13.

13–14. See solution for 13–12 for the standard errors.

13–15. Estimated Effects and Coefficients for strength


Term Effect Coef SE Coef T P
Constant 2872.06 40.47 70.97 0.000
A 1430.88 715.44 40.47 17.68 0.000
B 3506.62 1753.31 40.47 43.33 0.000
C -168.37 -84.19 40.47 -2.08 0.054
D 443.37 221.69 40.47 5.48 0.000
E 394.13 197.06 40.47 4.87 0.000
A*B 1168.37 584.19 40.47 14.44 0.000
A*C 93.37 46.69 40.47 1.15 0.266
A*D 31.62 15.81 40.47 0.39 0.701
A*E 30.88 15.44 40.47 0.38 0.708
B*C -130.87 -65.44 40.47 -1.62 0.125
B*D -44.12 -22.06 40.47 -0.55 0.593
B*E -43.37 -21.69 40.47 -0.54 0.599
C*D 80.88 40.44 40.47 1.00 0.333
C*E -93.38 -46.69 40.47 -1.15 0.266
D*E 193.38 96.69 40.47 2.39 0.030
Main effects A, B, D, E and interactions AB and DE are significant.

13–16. (a)

(b)

(c) Estimated Effects and Coefficients for inches


Term Effect Coef SE Coef T P
Constant 35.938 0.6355 56.55 0.000
A -16.125 -8.062 0.6355 -12.69 0.000
B 3.125 1.562 0.6355 2.46 0.057
C -1.125 -0.562 0.6355 -0.89 0.417
D -1.125 -0.562 0.6355 -0.89 0.417
A*B -4.375 -2.188 0.6355 -3.44 0.018
A*C -0.625 -0.313 0.6355 -0.49 0.644
A*D -3.125 -1.563 0.6355 -2.46 0.057
B*C 1.625 0.812 0.6355 1.28 0.257
B*D 0.125 0.063 0.6355 0.10 0.925
C*D -0.625 -0.312 0.6355 -0.49 0.644

13–17. Block 1 Block 2


(1) a
ab b
ac c
bc abc

13–18. Block 1 Block 2


(1) ad a abc
ab bd b bcd
ac cd c acd
bc abcd d abd

13–19. Block 1 Block 2 Block 3 Block 4


(1) a c d
ab b abc abd
bcd cd bd bc
acd abcd ad ac

13–20. AC and BDE confounded: ABCDE generalized interaction

Block 1 Block 2 Block 3 Block 4


(1) a b ab
ac c abc bc
bd abd d ad
abcd bcd acd cd
be abe e ae
abce bce ace ce
de ade bde abde
acde cde abcde bcde
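The assignment above follows from the parity of the two defining contrasts, L(AC) = x1 + x3 and L(BDE) = x2 + x4 + x5 (mod 2). A sketch that regenerates the table (block 0 is the principal block):

```python
from itertools import combinations

def block(run):
    """Block index (0..3) from the defining contrasts for AC and BDE."""
    l1 = (run.count("a") + run.count("c")) % 2                    # L(AC)
    l2 = (run.count("b") + run.count("d") + run.count("e")) % 2   # L(BDE)
    return 2 * l2 + l1

# All 32 treatment combinations of the 2^5, in lowercase-letter notation
runs = ["".join(s) for r in range(6) for s in combinations("abcde", r)]
blocks = {b: sorted(run if run else "(1)" for run in runs if block(run) == b)
          for b in range(4)}
```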

13–21. (a) Estimated Effects and Coefficients for strength


Term Effect Coef SE Coef T P
Constant 50.50 7.377 6.85 0.000
Block -3.50 7.377 -0.47 0.648
A -57.00 -28.50 7.377 -3.86 0.005
B -13.25 -6.62 7.377 -0.90 0.395
C 26.25 13.12 7.377 1.78 0.113
A*B 7.25 3.62 7.377 0.49 0.636
A*C -2.75 -1.38 7.377 -0.19 0.857
B*C 2.50 1.25 7.377 0.17 0.870

(b)

(c) & (d) This design is not as efficient as possible. If we were to confound a differ-
ent interaction in each replicate this would provide some information on all
interactions.

13–22. Please refer to the original reference for an analysis of the data from this experiment.

13–23. (a) I = ABCD. Alias Structure:

       `A = A + BCD      `AB = AB + CD     `CE = CE + ABDE
       `B = B + ACD      `AC = AC + BD     `AD = AD + BC
       `C = C + ABD      `AE = AE + BCDE   `BD = BD + AC
       `D = D + ABC      `BC = BC + AD     `CD = CD + AB
       `E = E + ABCDE    `BE = BE + ACDE   `DE = DE + ABCE

(b) Design: 2^(5−1) with D = ABC

`A = 0.238 `AB = −0.024 `BD = 0.042


`B = −0.16 `AC = 0.0042 `CD = −0.024
`C = −0.043 `BC = −0.026 `BE = 0.1575
`D = 0.0867 `AD = −0.026 `CE = −0.029
`E = −0.242 `AE = 0.059 `DE = 0.036

Conclusion: assuming 3 and 4 factor interactions insignificant, factors A & E


are important; possible also B and BE.

13–24. (a) The generators used were I = ACE and I = BDE.


(b) I = ACE = BDE = ABCD

13–25. (a) D = ABC


(b) Term Effect Coef SE Coef T P
Constant 2.04538 0.007423 275.55 0.000
A 0.06675 0.03338 0.007423 4.50 0.021
B 0.02625 0.01313 0.007423 1.77 0.175
C 0.02025 0.01012 0.007423 1.36 0.266
D -0.00375 -0.00187 0.007423 -0.25 0.817
Only factor A appears to be significant.

(c)

13–26. 2^(3−1) with 2 replicates.

13–27. 2^(4−1) with I = ABCD. Aliases:

       `A = A + BCD    `AB = AB + CD
       `B = B + ACD    `AC = AC + BD
       `C = C + ABD    `AD = AD + BC
       `D = D + ABC

A B C D = ABC
(1) − − − − 190
a + − − + 174
b − + − + 181
ab + + − − 183
c − − + + 177
ac + − + − 181
bc − + + − 188
abc + + + + 173
`A = −6.25    `AB = −0.25      Sweetener (A) and temperature (D)
`B = 0.75     `AC = 0.75       influence taste.
`C = −2.25    `AD = 0.75
`D = −9.25
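The effect estimates above can be reproduced from the sign table; a minimal sketch (each effect is its contrast divided by N/2 = 4):

```python
from math import prod

# Runs and taste responses for the 2^(4-1) design in 13-27 (D = ABC)
runs = ["(1)", "a", "b", "ab", "c", "ac", "bc", "abc"]
y = [190, 174, 181, 183, 177, 181, 188, 173]

def effect(letters):
    """Effect estimate whose +/- column is the product of the given factor columns."""
    return sum(yi * prod(1 if l in r else -1 for l in letters)
               for r, yi in zip(runs, y)) / 4

A, B, C = effect("a"), effect("b"), effect("c")
D = effect("abc")  # D = ABC, so its column is the ABC product
```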

13–28. 2^(5−1) with I = ABCDE


Treatment
A B C D E = ABCD Combination Strength
− − − − + e 800
+ − − − − a 900
− + − − − b 3400
+ + − − + abe 6200
− − + − − c 600
+ − + − + ace 1200
− + + − + bce 3006
+ + + − − abc 5300
− − − + − d 1000
+ − − + + ade 1500
− + − + + bde 4000
+ + − + − abd 6100
− − + + + cde 1500
+ − + + − acd 1100
− + + + − bcd 3300
+ + + + + abcde 6300
`A = 10,994 `AB = 9394 The estimates for A, B, and AB are large
`B = 29,006
`C = −1594
`D = 3394
`E = 2806

13–29. 2^(5−2) with I = ABCD and I = ACE
Treatment
A B C D = ABC E = AC Combination Strength
− − − − + e 800
+ − − + − ade 1500
− + − + + bde 4000
+ + − − − abe 6200
− − + + − cde 1500
+ − + − + ace 1200
− + + − − bce 3006
+ + + + + abcde 6300
`A = 5894 `C = −494 `AB = 8094
`B = 14506 `D = 2094
`E = 94

13–30. 2^(6−3)_III design with defining relation

       I = ABD = ACE = BCF = BCDE = ABEF = ACDF = DEF

A = BD = CE = ABCF = ABCDE = BEF = CDF = ADEF


B = AD = ABCE = CF = CDE = AEF = ABCDF = BDEF
C = ABCD = AE = BF = BDE = ABCEF = ADF = CDEF
D = AB = ACDE = BCDF = BCE = ABDEF = ACF = EF
E = ABDE = AC = BCEF = BCD = ABF = ACDEF = DF
F = ABDF = ACEF = BC = BCDEF = ABE = ACD = DE

Chapter 14

14–1. (a) ŷ = 10.5 − 0.00156 yards


(b) Predictor Coef SE Coef T P
Constant 10.451 2.514 4.16 0.000
yards -0.001565 0.001090 -1.43 0.163

S = 3.414 R-Sq = 7.3% R-Sq(adj) = 3.8%

Analysis of Variance
Source DF SS MS F P
Regression 1 23.99 23.99 2.06 0.163
Residual Error 26 302.97 11.65
Total 27 326.96
(c) −0.00380 ≤ β1 ≤ 0.00068
(d) R2 = 7.3%
(e)

Based on the residual plots there appears to be a severe outlier. This point
should be investigated and if necessary, the point removed and the analysis
rerun.
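Fits like 14–1(a) take one call in scipy; a sketch on made-up (x, y) pairs standing in for the yards/points data (the numbers below are illustrative assumptions, not the textbook data):

```python
from scipy.stats import linregress

x = [1800, 2200, 1500, 2500, 2000, 1700, 2300, 1900]  # hypothetical x-values
y = [8, 6, 11, 4, 7, 10, 5, 9]                        # hypothetical responses

fit = linregress(x, y)                   # slope, intercept, rvalue, pvalue, stderr
yhat = fit.intercept + fit.slope * 1800  # point prediction at x0 = 1800
```

The `rvalue` field squared gives the R² reported in the Minitab output.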

14–2. ŷ = 10.5 − 0.00156(1800) = 7.63 or approximately 8.


6 ≤ E(y|x0 = 1800) ≤ 9.27

14–3. (a) ŷ = 31.6 − 0.0410 displacement


(b) Predictor Coef SE Coef T P
Constant 31.648 1.812 17.46 0.000
displace -0.040975 0.005406 -7.58 0.000

S = 1.976 R-Sq = 81.5% R-Sq(adj) = 80.1%

Analysis of Variance
Source DF SS MS F P
Regression 1 224.43 224.43 57.45 0.000
Residual Error 13 50.79 3.91
Total 14 275.21
(c) R2 = 81.5%
(d) ŷ = 20.381, 19.374 ≤ E(y|x0 = 275) ≤ 21.388

14–4. ŷ = 31.6 − 0.0410(275) = 20.381, 10.37 ≤ E(y|x0 = 275) ≤ 21.39

14–5.

14–6. (a) ŷ = 13.3 + 3.32 taxes


(b) Predictor Coef SE Coef T P
Constant 13.320 2.572 5.18 0.000
taxes 3.3244 0.3903 8.52 0.000
S = 2.961 R-Sq = 76.7% R-Sq(adj) = 75.7%

Analysis of Variance
Source DF SS MS F P
Regression 1 636.16 636.16 72.56 0.000
Residual Error 22 192.89 8.77
Total 23 829.05
(c) 76.7%
(d)

14–7. (a) ŷ = 93.3 + 15.6x


(b) Predictor Coef SE Coef T P
Constant 93.34 10.51 8.88 0.000
x 15.649 4.345 3.60 0.003

S = 11.63 R-Sq = 48.1% R-Sq(adj) = 44.4%

Analysis of Variance
Source DF SS MS F P
Regression 1 1755.8 1755.8 12.97 0.003
Residual Error 14 1895.0 135.4
Lack of Fit 8 1378.6 172.3 2.00 0.207
Pure Error 6 516.4 86.1
Total 15 3650.8

(c) 7.997 ≤ β1 ≤ 23.299


(d) 74.828 ≤ β0 ≤ 111.852
(e) (126.012, 138.910)

14–8.

14–9. (a) ŷ = −6.34 + 9.21 temp


(b) Predictor Coef SE Coef T P
Constant -6.336 1.668 -3.80 0.003
temp 9.20836 0.03377 272.64 0.000

S = 1.943 R-Sq = 100.0% R-Sq(adj) = 100.0%

Analysis of Variance
Source DF SS MS F P
Regression 1 280583 280583 74334.36 0.000
Residual Error 10 38 4
Total 11 280621

(c) t0 = (β̂1 − β1,0 )/√( M SE /SXX ) = (9.20836 − 10)/√( 4/3309 ) = −23.41;
    t0.025,10 = 2.228; reject H0 : β1 = 10.
(d) 525.58 ≤ E(y|X = 50) ≤ 529.91
(e) 521.22 ≤ yx=58 ≤ 534.28

14–10.

14–11. (a) ŷ = 77.7895 + 11.8634x


(b) Predictor Coef SE Coef T P
Constant 77.863 4.199 18.54 0.000
hydrocar 11.801 3.485 3.39 0.003

S = 3.597 R-Sq = 38.9% R-Sq(adj) = 35.5%

Analysis of Variance

Source DF SS MS F P
Regression 1 148.31 148.31 11.47 0.003
Residual Error 18 232.83 12.94

(c) 38.9%
(d) 4.5661 ≤ β1 ≤ 19.1607

14–12. (a)

(b)

14–13. (a) AvgSize = −1922.7 + 564.54 Level


(b) Predictor Coef SE Coef T P
Constant -1922.7 530.9 -3.62 0.003
Level 564.54 32.74 17.24 0.000

S = 459.0 R-Sq = 95.5% R-Sq(adj) = 95.2%

Analysis of Variance
Source DF SS MS F P
Regression 1 62660784 62660784 297.38 0.000
Residual Error 14 2949914 210708
Total 15 65610698
(c) (0.0015, 0.0019)
(d) 95.5%
(e)

14–14. x = OR Grade, y = Statistics Grade

(a) ŷ = −0.0280 + 0.9910x


(b) r = 0.9033
(c) t0 = r√(n − 2) / √(1 − r²) = 0.9033√18 / √(1 − 0.8160) = 8.93, reject H0 .

(d) Z0 = (arctanh(0.9033) − arctanh(0.5)) √17 = 3.88, reject H0 .
(e) 0.7676 ≤ ρ ≤ 0.9615
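Both statistics in 14–14 are one-liners; a sketch (n = 20, so that n − 2 = 18 and n − 3 = 17 as in the solution):

```python
from math import sqrt, atanh

r, n = 0.9033, 20

# t test of H0: rho = 0
t0 = r * sqrt(n - 2) / sqrt(1 - r ** 2)     # about 8.93

# Fisher z (arctanh) test of H0: rho = 0.5
z0 = (atanh(r) - atanh(0.5)) * sqrt(n - 3)  # about 3.88
```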

14–15. x = weight, y = BP

(a) ŷ = 69.1044 + 0.4194x


(b) r = 0.7735
(c) t0 = r√(n − 2) / √(1 − r²) = 0.7735√23 / √(1 − 0.5983) = 5.85, reject H0 .

(d) Z0 = (arctanh(0.7735) − arctanh(0.6)) √23 = 1.61, do not reject.
(e) 0.5513 ≤ ρ ≤ 0.8932

14–16. Note that SSR = β̂1 Sxy = β̂1² Sxx , and

           V (β̂1 ) = E(β̂1²) − [E(β̂1 )]² = σ²/Sxx ,

       so E(β̂1²) = β1² + σ²/Sxx . Therefore

           E(SSR ) = E(β̂1²)Sxx = σ² + β1² Sxx
           E(M SR ) = E(SSR /1) = σ² + β1² Sxx

14–17. E(β̂1 ) = E(Sxy /Sxx ) = (1/Sxx ) E(Sxy )
              = (1/Sxx ) E[ Σi (xi1 − x̄1 )yi ]
              = (1/Sxx ) Σi (xi1 − x̄1 )E(yi )
              = (1/Sxx ) Σi (xi1 − x̄1 )(β0 + β1 xi1 + β2 xi2 )
              = β1 + β2 [ Σi xi2 (xi1 − x̄1 ) ] / Sxx

       In general, β̂1 is a biased estimator of β1 .

14–18. V (β̂1 ) = σ²/Sxx , which is minimized by making Sxx as large as possible.
       Since Σi (xi − x̄)² = Sxx , place the x’s as far from x̄ as possible. If n is
       even, put n/2 trials at each end of the region of interest. If n is odd, put
       1 trial at the center and (n − 1)/2 trials at each end. These designs should
       be used only when you are positive that the relationship between y and x is
       linear.

14–19. L = Σi wi (yi − β0 − β1 xi )²

           ∂L/∂β0 = −2 Σi wi (yi − β̂0 − β̂1 xi ) = 0
           ∂L/∂β1 = −2 Σi wi xi (yi − β̂0 − β̂1 xi ) = 0

Simplification of these two equations gives the normal equations for weighted least
squares.
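Solving those normal equations is a short linear-algebra exercise; a sketch with numpy (in matrix form, β̂ = (XᵀWX)⁻¹XᵀWy):

```python
import numpy as np

def wls_line(x, y, w):
    """Weighted least squares fit of y = b0 + b1*x, minimizing
    sum_i w_i * (y_i - b0 - b1*x_i)^2."""
    X = np.column_stack([np.ones_like(x), x])
    W = np.diag(w)
    return np.linalg.solve(X.T @ W @ X, X.T @ W @ y)  # [b0_hat, b1_hat]

x = np.array([1.0, 2.0, 3.0, 4.0])
y = 2.0 + 3.0 * x  # exact line: any positive weights recover it
b0, b1 = wls_line(x, y, np.array([1.0, 2.0, 0.5, 4.0]))
```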
14–20. If y = (β0 + β1 x + ²)−1 , then 1/y = β0 + β1 x + ² is a straight-line regression model.
The scatter diagram of y ∗ = 1/y versus x is linear.

x 10 15 18 12 9 8 11 6
y∗ 5.88 7.69 11.11 6.67 5.00 4.76 5.56 4.17

14–21. The no-intercept model would be the form y = β1 x + ². This model is likely not a
good one for this problem, because there is no data near the point x = 0, y = 0,
and it is probably unwise to extrapolate the linear relationship back to the origin.
The intercept is often just a parameter that improves the fit of the model to the
data in the region where the data were collected.
14–22. n = 14, Σxi = 65.262, Σxi² = 385.194, x̄ = 4.662
       Σyi = 208, Σyi² = 3490, ȳ = 14.857, Σxi yi = 1148.08
       Sxx = 80.989, Sxy = 178.473, Syy = 399.714

β̂1 = 2.204
β̂0 = 4.582

ŷ = 4.582 + 2.204x
       r = Sxy /(Sxx Syy )^(1/2) = 0.9919,   R² = 0.9839

H0 : ρ = 0, H1 : ρ ≠ 0; t0 = 27.08 > t0.05,12 = 1.782. ∴ reject H0 .
A strong correlation does not imply a cause and effect relationship.

Chapter 15

15–1. (a) ŷ = 7.30 + 0.0183x1 − 0.399x4


(b) Predictor Coef SE Coef T P
Constant 7.304 5.179 1.41 0.176
x1 0.018299 0.004972 3.68 0.002
x4 -0.3986 0.1912 -2.08 0.053

S = 1.922 R-Sq = 64.1% R-Sq(adj) = 59.9%

Analysis of Variance

Source DF SS MS F P
Regression 2 112.263 56.131 15.19 0.000
Residual Error 17 62.824 3.696
Total 19 175.086
(c)

(d) The M SE has improved with the x1 x4 model, but the R2 and adjusted R2
have decreased.

15–2. (a) ŷ = −27.9 + 0.0136x1 + 30.7x2 − 0.0670x3


(b) Predictor Coef SE Coef T P
Constant -27.892 6.035 -4.62 0.000
x1 0.013597 0.003247 4.19 0.001
x2 30.685 6.513 4.71 0.000
x3 -0.06701 0.01916 -3.50 0.003

S = 1.167 R-Sq = 87.6% R-Sq(adj) = 85.2%

Analysis of Variance

Source DF SS MS F P
Regression 3 153.305 51.102 37.54 0.000
Residual Error 16 21.782 1.361
Total 19 175.086

(c)

15–3. (−0.8024, 0.0044)

15–4. (−0.1076, −0.0264)



15–5. (a) ŷ = −1.81 + 0.00360x2 + 0.194x7 − 0.00482x8


(b)

(c) Predictor Coef SE Coef T P


Constant -1.808 7.901 -0.23 0.821
x2 0.0035981 0.0006950 5.18 0.000
x7 0.19396 0.08823 2.20 0.038
x8 -0.004815 0.001277 -3.77 0.001

S = 1.706 R-Sq = 78.6% R-Sq(adj) = 76.0%

Analysis of Variance
Source DF SS MS F P
Regression 3 257.094 85.698 29.44 0.000
Residual Error 24 69.870 2.911
Total 27 326.964

15–6. (a) ŷ = 33.45 − 0.05435x1 + 1.0782x6


(b) Predictor Coef SE Coef T P
Constant 33.449 1.576 21.22 0.000
x1 -0.054349 0.006329 -8.59 0.000
x6 1.0782 0.6997 1.54 0.138

S = 2.834 R-Sq = 82.9% R-Sq(adj) = 81.3%

Analysis of Variance

Source DF SS MS F P
Regression 2 856.24 428.12 53.32 0.000
Residual Error 22 176.66 8.03
Total 24 1032.90

(c)

(d) x6 is not significant with x1 included in the model.



15–7. (a) ŷ = −103 + 0.605 x1 + 8.92 x2 + 1.44 x3 + 0.014 x4


(b) Predictor Coef SE Coef T P
Constant -102.7 207.9 -0.49 0.636
x1 0.6054 0.3689 1.64 0.145
x2 8.924 5.301 1.68 0.136
x3 1.437 2.392 0.60 0.567
x4 0.0136 0.7338 0.02 0.986

S = 15.58 R-Sq = 74.5% R-Sq(adj) = 59.9%

Analysis of Variance
Source DF SS MS F P
Regression 4 4957.2 1239.3 5.11 0.030
Residual Error 7 1699.0 242.7
Total 11 6656.3
(c) H0 : β3 = 0, F0 = 0.361 (not significant)
H0 : β4 = 0, F0 = 0.0004 (not significant)
(d)

15–8. (a) y = 62.4 + 1.55 x1 + 0.510 x2 + 0.102 x3 − 0.144 x4


(b) Predictor Coef SE Coef T P
Constant 62.41 70.07 0.89 0.399
x1 1.5511 0.7448 2.08 0.071
x2 0.5102 0.7238 0.70 0.501
x3 0.1019 0.7547 0.14 0.896
x4 -0.1441 0.7091 -0.20 0.844
S = 2.446 R-Sq = 98.2% R-Sq(adj) = 97.4%
Analysis of Variance
Source DF SS MS F P
Regression 4 2667.90 666.97 111.48 0.000
Residual Error 8 47.86 5.98
Total 12 2715.76
(c) H0 : β4 = 0, F0 = 0.04 (not significant)
(d) The t statistics are given in part (b)
(e) H0 : β2 = β3 = β4 = 0,
SSR (β2 , β3 , β4 |β1 , β0 ) = SSR (β1 , β2 , β3 , β4 |β0 ) − SSR (β1 |β0 )
= 2667.90 − 1450.08
= 1217.82
F0 = (1217.82/3)/5.98 = 67.88; at least one of the variables is significant.
(f) (−1.1588, 2.1792)
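The extra-sum-of-squares F statistic in part (e) is simple arithmetic, with scipy supplying the critical value:

```python
from scipy.stats import f

ss_full = 2667.90     # SSR(b1, b2, b3, b4 | b0)
ss_reduced = 1450.08  # SSR(b1 | b0)
r, mse, df_error = 3, 5.98, 8

f0 = ((ss_full - ss_reduced) / r) / mse  # about 67.88
fcrit = f.ppf(0.95, r, df_error)         # about 4.07
```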

15–9. (a) y = −26219 + 189x − 0.331x2


(b) Predictor Coef SE Coef T P
Constant -26219 11911 -2.20 0.079
x 189.20 80.24 2.36 0.065
x2 -0.3312 0.1350 -2.45 0.058

S = 45.20 R-Sq = 87.3% R-Sq(adj) = 82.2%

Analysis of Variance
Source DF SS MS F P
Regression 2 70284 35142 17.20 0.006
Residual Error 5 10213 2043
Total 7 80497
(c) See the t-test results in part (b)
(d)

15–10. (a) y = −4.33 + 4.89x − 2.59x2


(b) Predictor Coef SE Coef T P
Constant -4.3330 0.8253 -5.25 0.001
x 4.887 1.379 3.54 0.009
x2 -2.5855 0.4886 -5.29 0.001

S = 0.7017 R-Sq = 91.9% R-Sq(adj) = 89.6%

Analysis of Variance
Source DF SS MS F P
Regression 2 39.274 19.637 39.89 0.000
Residual Error 7 3.446 0.492
Total 9 42.720
(c) see part (b)

(d)

15–11. ŷ = 759.39 − 7.607x0 − 0.331(x0 )2

15–13. (a) y = −4.5 + 1.38x + 1.47x2


(b) Predictor Coef SE Coef T P
Constant -4.46 14.63 -0.30 0.768
x 1.384 5.497 0.25 0.807
x2 1.4670 0.4936 2.97 0.016

S = 1.657 R-Sq = 99.6% R-Sq(adj) = 99.5%

Analysis of Variance
Source DF SS MS F P
Regression 2 5740.6 2870.3 1044.99 0.000
Residual Error 9 24.7 2.7
Total 11 5765.3
(c) see part (b)

15–15. The fitted model is ŷ = 11.503 + 0.153x1 − 0.094x2 − 0.0306x1 x2


The t-statistic for H0 : β3 = 0 is t0 = 1.79. We conclude that the slopes are the
same.

15–16. y = β0 + β1 x1 + β2 (x1 − x∗ )x2 + ε, where x2 is an indicator variable with x2 = 0 if


x1 ≤ x∗ and x2 = 1 if x1 > x∗ .

15–17. y = β0 + β1 x1 + β2 (x1 − x∗ )x2 + β3 x3 + ε, where x2 and x3 are indicator
       variables with x2 = x3 = 0 if x1 ≤ x∗ and x2 = x3 = 1 if x1 > x∗ . β3 estimates
       the effect of the discontinuity.

15–18. The model is as in Exercise 15–16, except now x∗ is unknown and must be
       estimated. This is a nonlinear regression problem. It could be solved by using
       one-dimensional or line search methods to obtain trial values of x∗ .

15–19. b̂1 = 0.594 b̂4 = −0.336

15–20. b̂1 = 0.441 b̂2 = 0.505 b̂3 = −0.315

15–21. V IF1 = 1.2, V IF4 = 1.2
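A VIF is 1/(1 − R_j²), where R_j² comes from regressing predictor j on the other predictors. A sketch with numpy (the orthogonal two-column example is illustrative, not the exercise data):

```python
import numpy as np

def vif(X):
    """Variance inflation factors VIF_j = 1/(1 - R_j^2) for each column of X."""
    n, k = X.shape
    out = []
    for j in range(k):
        others = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        coef, *_ = np.linalg.lstsq(others, X[:, j], rcond=None)
        sse = ((X[:, j] - others @ coef) ** 2).sum()
        sst = ((X[:, j] - X[:, j].mean()) ** 2).sum()
        out.append(sst / sse)  # = 1/(1 - R_j^2)
    return out

# Orthogonal, centered columns give VIF = 1 for both predictors
vifs = vif(np.array([[1.0, 1.0], [1.0, -1.0], [-1.0, 1.0], [-1.0, -1.0]]))
```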

15–22. (a) All possible regressions from Minitab displaying the best two models for each
combination of variables.

x x x x x x x x x
Vars R-Sq R-Sq(adj) C-p S 1 2 3 4 5 6 7 8 9

1 54.5 52.7 20.4 2.3929 X


1 35.2 32.7 39.3 2.8548 X
2 74.3 72.3 3.1 1.8324 X X
2 66.0 63.2 11.2 2.1097 X X
3 78.6 76.0 0.9 1.7062 X X X
3 77.8 75.0 1.7 1.7410 X X X
4 80.1 76.7 1.4 1.6812 X X X X
4 79.5 75.9 2.0 1.7073 X X X X
5 80.7 76.3 2.8 1.6941 X X X X X
5 80.7 76.3 2.9 1.6957 X X X X X
6 81.2 75.8 4.4 1.7118 X X X X X X
6 81.1 75.6 4.5 1.7174 X X X X X X
7 81.4 74.9 6.2 1.7442 X X X X X X X
7 81.3 74.8 6.2 1.7470 X X X X X X X
8 81.6 73.8 8.0 1.7814 X X X X X X X X
8 81.4 73.6 8.2 1.7895 X X X X X X X X
9 81.6 72.3 10.0 1.8302 X X X X X X X X X

(b) Stepwise regression from Minitab:

Alpha-to-Enter: 0.15 Alpha-to-Remove: 0.15


Response is y on 9 predictors, with N = 28

Step 1 2 3
Constant 21.788 14.713 -1.808
x8 -0.00703 -0.00681 -0.00482
T-Value -5.58 -7.05 -3.77
P-Value 0.000 0.000 0.001

x2 0.00311 0.00360
T-Value 4.40 5.18
P-Value 0.000 0.000

x7 0.194
T-Value 2.20
P-Value 0.038

S 2.39 1.83 1.71


R-Sq 54.47 74.33 78.63
R-Sq(adj) 52.72 72.27 75.96
C-p 20.4 3.1 0.9

The stepwise procedure found variables x2 , x7 , and x8 significant.


(c) Forward selection

Forward selection. Alpha-to-Enter: 0.25


Response is y on 9 predictors, with N = 28
Step 1 2 3 4
Constant 21.788 14.713 -1.808 -1.822

x8 -0.00703 -0.00681 -0.00482 -0.00401


T-Value -5.58 -7.05 -3.77 -2.87
P-Value 0.000 0.000 0.001 0.009

x2 0.00311 0.00360 0.00382


T-Value 4.40 5.18 5.42
P-Value 0.000 0.000 0.000

x7 0.194 0.217
T-Value 2.20 2.45
P-Value 0.038 0.023

x9 -0.0016
T-Value -1.31
P-Value 0.202

S 2.39 1.83 1.71 1.68


R-Sq 54.47 74.33 78.63 80.12
R-Sq(adj) 52.72 72.27 75.96 76.66
C-p 20.4 3.1 0.9 1.4

Forward selection found variables x2 , x7 , x8 , and x9 significant.



(d) Backward elimination


Backward elimination. Alpha-to-Remove: 0.1
Response is y on 9 predictors, with N = 28
Step 1 2 3 4 5 6 7
Constant -7.292 -7.294 -9.130 -7.695 -4.627 -1.822 -1.808

x1 0.0008 0.0008
T-Value 0.40 0.42
P-Value 0.690 0.681

x2 0.00363 0.00363 0.00363 0.00358 0.00371 0.00382 0.00360


T-Value 4.32 4.59 4.69 4.76 5.13 5.42 5.18
P-Value 0.000 0.000 0.000 0.000 0.000 0.000 0.000

x3 0.12 0.12 0.17 0.17


T-Value 0.47 0.49 0.75 0.77
P-Value 0.643 0.632 0.461 0.451

x4 0.032 0.032 0.037 0.035 0.026


T-Value 0.77 0.80 1.00 0.97 0.78
P-Value 0.453 0.431 0.329 0.342 0.445

x5 0.000
T-Value 0.00
P-Value 1.000

x6 0.0016 0.0016 0.0015


T-Value 0.49 0.51 0.48
P-Value 0.630 0.618 0.639

x7 0.154 0.154 0.189 0.193 0.235 0.217 0.194


T-Value 1.02 1.10 1.72 1.79 2.54 2.45 2.20
P-Value 0.324 0.284 0.102 0.088 0.019 0.023 0.038

x8 -0.0039 -0.0039 -0.0042 -0.0044 -0.0037 -0.0040 -0.0048


T-Value -1.90 -1.95 -2.34 -2.50 -2.48 -2.87 -3.77
P-Value 0.074 0.066 0.030 0.021 0.021 0.009 0.001

x9 -0.0018 -0.0018 -0.0017 -0.0017 -0.0018 -0.0016


T-Value -1.26 -1.30 -1.26 -1.28 -1.40 -1.31
P-Value 0.222 0.210 0.221 0.213 0.176 0.202

S 1.83 1.78 1.74 1.71 1.70 1.68 1.71


R-Sq 81.56 81.56 81.39 81.18 80.65 80.12 78.63
R-Sq(adj) 72.34 73.80 74.88 75.80 76.25 76.66 75.96
C-p 10.0 8.0 6.2 4.4 2.9 1.4 0.9

Backward elimination found variables x2 , x7 , and x8 significant.



15–23. (a) All possible regressions from Minitab displaying the best two models for each
combination of variables.
24 cases used; 1 case contains missing values.
x x
x x x x x x x x x 1 1
Vars R-Sq R-Sq(adj) C-p S 1 2 3 4 5 6 7 8 9 0 1

1 80.9 80.1 5.6 2.9679 X


1 77.1 76.1 10.7 3.2514 X
2 82.6 81.0 5.2 2.8974 X X
2 82.5 80.9 5.4 2.9082 X X
3 84.3 82.0 5.0 2.8231 X X X
3 84.0 81.5 5.5 2.8552 X X X
4 85.0 81.9 6.0 2.8283 X X X X
4 84.9 81.7 6.2 2.8411 X X X X
5 86.7 83.0 5.8 2.7400 X X X X X
5 85.8 81.8 7.1 2.8347 X X X X X
6 88.6 84.5 5.3 2.6141 X X X X X X
6 87.6 83.2 6.6 2.7244 X X X X X X
7 89.6 85.1 5.9 2.5649 X X X X X X X
7 89.0 84.1 6.8 2.6465 X X X X X X X
8 90.2 84.9 7.2 2.5786 X X X X X X X X
8 90.0 84.6 7.4 2.6068 X X X X X X X X
9 90.5 84.4 8.8 2.6285 X X X X X X X X X
(b) Stepwise regression
Alpha-to-Enter: 0.15 Alpha-to-Remove: 0.15
Response is y on 11 predictors, with N = 24
N(cases with missing observations) = 1 N(all cases) = 25
Step 1
Constant 34.43

x1 -0.0482
T-Value -9.66
P-Value 0.000

S 2.97
R-Sq 80.93
R-Sq(adj) 80.06
C-p 5.6
The stepwise procedure found x1 significant.

(c) Forward selection

Forward selection. Alpha-to-Enter: 0.25


Response is y on 11 predictors, with N = 24
N(cases with missing observations) = 1 N(all cases) = 25
Step 1 2 3
Constant 34.43 33.50 32.36

x1 -0.0482 -0.0544 -0.1034


T-Value -9.66 -8.40 -2.65
P-Value 0.000 0.000 0.015

x6 1.05 1.02
T-Value 1.44 1.43
P-Value 0.164 0.169

x3 0.070
T-Value 1.28
P-Value 0.217

S 2.97 2.90 2.86


R-Sq 80.93 82.65 83.95
R-Sq(adj) 80.06 80.99 81.55
C-p 5.6 5.2 5.5

The forward selection procedure found x1 , x3 , and x6 significant.



(d) Backward elimination


Backward elimination. Alpha-to-Remove: 0.1
Response is y on 11 predictors, with N = 24
N(cases with missing observations) = 1 N(all cases) = 25

Step 1 2 3 4 5 6 7 8 9
Constant -17.6442 -18.5202 -3.7497 -0.9652 -0.6957 -4.4555 -1.9112 -8.9148 0.3409

x1 -0.142 -0.142 -0.139 -0.127 -0.102 -0.089 -0.061


T-Value -2.57 -2.68 -2.66 -2.54 -2.18 -2.11 -1.50
P-Value 0.024 0.019 0.019 0.023 0.044 0.050 0.152

x2 -0.076 -0.075 -0.096 -0.092


T-Value -0.93 -0.97 -1.30 -1.26
P-Value 0.369 0.348 0.215 0.227

x3 0.231 0.230 0.240 0.224 0.140 0.146 0.099 0.035


T-Value 2.34 2.44 2.58 2.48 2.26 2.44 1.79 0.96
P-Value 0.037 0.030 0.022 0.026 0.038 0.026 0.090 0.348

x4 2.4 2.4
T-Value 0.87 0.90
P-Value 0.402 0.383

x5 6.8 6.7 6.8 6.4 6.1 6.6 3.0 3.6 2.9


T-Value 2.06 2.34 2.40 2.31 2.15 2.47 1.83 2.23 2.02
P-Value 0.062 0.036 0.031 0.036 0.047 0.024 0.083 0.038 0.057

x6 1.11 1.14 1.42 1.37 0.66


T-Value 0.88 0.99 1.30 1.26 0.70
P-Value 0.398 0.338 0.215 0.226 0.494

x7 -4.2 -4.1 -3.4 -3.8 -3.9 -3.6


T-Value -1.47 -1.67 -1.46 -1.72 -1.74 -1.67
P-Value 0.167 0.118 0.165 0.106 0.100 0.114

x8 0.28 0.28 0.30 0.29 0.28 0.31 0.26 0.324 0.246


T-Value 2.11 2.20 2.40 2.31 2.26 2.54 2.12 2.72 2.81
P-Value 0.056 0.046 0.031 0.035 0.038 0.021 0.048 0.014 0.011

x9 -0.02
T-Value -0.05
P-Value 0.959

x10 -0.0119 -0.0121 -0.0126 -0.0121 -0.0115 -0.0133 -0.0113 -0.0142 -0.0099
T-Value -1.82 -2.09 -2.21 -2.15 -2.01 -2.65 -2.21 -2.90 -5.00
P-Value 0.093 0.057 0.044 0.049 0.061 0.017 0.040 0.009 0.000

x11 -2.4 -2.5 -2.3


T-Value -0.89 -0.93 -0.87
P-Value 0.393 0.370 0.400

S 2.75 2.65 2.63 2.61 2.65 2.61 2.74 2.83 2.82


R-Sq 91.04 91.04 90.48 89.97 88.90 88.57 86.70 85.04 84.31
R-Sq(adj) 82.83 84.15 84.36 84.62 84.05 84.53 83.00 81.89 81.96
C-p 12.0 10.0 8.8 7.4 6.9 5.3 5.8 6.0 5.0

Backward elimination found x5, x8, and x10 significant.



15–24. (a) All possible regressions from Minitab displaying the best two models for each
combination of variables.

x x x x
Vars R-Sq R-Sq(adj) C-p S 1 2 3 4

1 67.5 64.5 138.7 8.9639 X


1 66.6 63.6 142.5 9.0771 X
2 97.9 97.4 2.7 2.4063 X X
2 97.2 96.7 5.5 2.7343 X X
3 98.2 97.6 3.0 2.3087 X X X
3 98.2 97.6 3.0 2.3121 X X X
4 98.2 97.4 5.0 2.4460 X X X X

(b) Stepwise regression

Alpha-to-Enter: 0.15 Alpha-to-Remove: 0.15


Response is y on 4 predictors, with N = 13
Step 1 2 3 4
Constant 117.57 103.10 71.65 52.58

x4 -0.738 -0.614 -0.237


T-Value -4.77 -12.62 -1.37
P-Value 0.001 0.000 0.205

x1 1.44 1.45 1.47


T-Value 10.40 12.41 12.10
P-Value 0.000 0.000 0.000

x2 0.416 0.662
T-Value 2.24 14.44
P-Value 0.052 0.000

S 8.96 2.73 2.31 2.41


R-Sq 67.45 97.25 98.23 97.87
R-Sq(adj) 64.50 96.70 97.64 97.44
C-p 138.7 5.5 3.0 2.7

The stepwise procedure found x1 and x2 significant.



(c) Forward selection

Forward selection. Alpha-to-Enter: 0.25


Response is y on 4 predictors, with N = 13

Step 1 2 3
Constant 117.57 103.10 71.65

x4 -0.738 -0.614 -0.237


T-Value -4.77 -12.62 -1.37
P-Value 0.001 0.000 0.205

x1 1.44 1.45
T-Value 10.40 12.41
P-Value 0.000 0.000

x2 0.42
T-Value 2.24
P-Value 0.052

S 8.96 2.73 2.31


R-Sq 67.45 97.25 98.23
R-Sq(adj) 64.50 96.70 97.64
C-p 138.7 5.5 3.0

Forward selection found x1, x2, and x4 significant.



(d) Backward elimination

Backward elimination. Alpha-to-Remove: 0.1


Response is y on 4 predictors, with N = 13
Step 1 2 3
Constant 62.41 71.65 52.58

x1 1.55 1.45 1.47


T-Value 2.08 12.41 12.10
P-Value 0.071 0.000 0.000

x2 0.510 0.416 0.662


T-Value 0.70 2.24 14.44
P-Value 0.501 0.052 0.000

x3 0.10
T-Value 0.14
P-Value 0.896

x4 -0.14 -0.24
T-Value -0.20 -1.37
P-Value 0.844 0.205

S 2.45 2.31 2.41


R-Sq 98.24 98.23 97.87
R-Sq(adj) 97.36 97.64 97.44
C-p 5.0 3.0 2.7

Backward elimination found x1 and x2 significant.

15–25. VIF1 = 38.5, VIF2 = 254.4, VIF3 = 46.9, VIF4 = 282.5. The variance inflation
factors indicate a problem with multicollinearity.

Chapter 16

16–1. H0: µ̃ = 7.0    n = 10
      H1: µ̃ ≠ 7.0

      α = 0.05

      CR: R ≤ R*α (Table X)

      R⁺ = 8, R⁻ = 2 ⇒ R = 2

      Since R > R*α, do not reject H0

16–2. H0: µ̃ = 8.5
      H1: µ̃ ≠ 8.5

      α = 0.05

      R⁺ = 8, R⁻ = 11, R = min(8, 11) = 8, R*0.05 = 5

      Since R > R*0.05, do not reject H0

16–3. (a) H0: µ̃ = 3.5    Critical Region: R⁻ ≤ R*α
          H1: µ̃ > 3.5

      (b) n = 10, α = 0.05, R*0.05 = 1
          R⁻ = 3. Since R⁻ > R*0.05, do not reject H0

      (c) Probability of not rejecting H0 (µ̃ = 3.5) when µ̃ = 4.5:

          p = P(X > 1) = ∫₁^∞ (1/β)e^(−(1/β)x) dx = e^(−1/β)

16–4. n = 10, σ = 1
      H0: µ = 0
      H1: µ > 0

      (a) α = 0.025
      (b) p = P(X > 0) = P(Z > −1) = 1 − Φ(−1) = 0.8413

          R*0.05 = 1

          β = 1 − Σ_{x=0}^{1} C(10, x)(0.1587)^x (0.8413)^(10−x) = 0.487

16–5. H0: µ̃_d = 0
      H1: µ̃_d ≠ 0

      α = 0.05

      CR: R < R*0.05 = 1

      R⁻ = 2, R⁺ = 6 ⇒ R = 2
      Since R is not less than R*0.05, do not reject H0

16–8. H0: µ = 7     R⁺ = 50.5
      H1: µ ≠ 7     R⁻ = 4.5
      R = min(50.5, 4.5) = 4.5
      R*0.05 = 8; since R ≤ R*0.05, reject H0

16–9. H0: µ = 8.5    n = 20
      H1: µ ≠ 8.5

      α = 0.05

      CR: R ≤ R*0.05 = 52

      R⁺ = 19 + 10.5 + · · · + 1 = 88.5
      R⁻ = 12 + 20 + · · · + 3.5 = 121.5

      R = 88.5 > R*0.05
      ∴ do not reject H0; conclude titanium content is 8.5%

16–10. H0: µ = 8.5    µ_R = 20(21)/4 = 105
       H1: µ ≠ 8.5    σ²_R = 20(21)(41)/24 = 717.5

       Z0 = (85 − 105)/√717.5 = −0.747

16–12. R⁺ = 24.5, R⁻ = 11.5, R = min(24.5, 11.5) = 11.5, R*0.05 = 3

       Since R > R*0.05, do not reject H0: µ_d = 0.

16–13. H0: µ1 = µ2    n1 = 8, n2 = 9
       H1: µ1 > µ2

       CR: R1 ≤ R*α = 51 (Table IX)

       α = 0.05

       R1 = 78, R2 = 75
       Since R1 = 78 is not less than or equal to 51, do not reject H0. Conclude the two
       circuits are equivalent.

16–14. n1 = n2 = 6
       R1 = 40
       R2 = 6(13) − 40 = 38
       R*0.05 = 26

       Do not reject H0

Airline Minutes Rank


D 0 1
D 1 2.5
A 1 2.5
A 2 4
A 3 5
A 4 6.5
D 4 6.5
A 8 8
D 9 9
D 10 10
D 13 11
A 15 12

16–15. n1 = n2 = 10; n > 8, so use the large-sample approximation

       H0: µ1 = µ2
       H1: µ1 ≠ µ2

       α = 0.05

       CR: |Z0| > Z0.025 = 1.96

       µ_R1 = (20)(21)/4 = 105,   σ²_R1 = 10(10)(21)/12 = 175

       R1 = 1 + 2 + 3 + · · · + 19.5 = 77

       Z0 = (77 − 105)/√175 = −2.117;   |Z0| = 2.117 > 1.96

       Reject H0; conclude unit 2 is superior.

16–16. H0: techniques do not differ        Technique   Ranks               Ri
       H1: techniques differ               1           14   11.5  6    7   38.5
                                           2           16   11.5  9    15  51.5
                                           3           5    8     10   13  36
                                           4           1.5  3     1.5  4   10

       S² = (1/15)[1495 − 16(17)²/4] = 22.6

       K = (1/22.6)[38.5²/4 + 51.5²/4 + 36²/4 + 10²/4 − 16(17)²/4] = 10.03

       χ²0.05,3 = 7.81 < K, so reject H0; conclude the techniques differ
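The tie-corrected Kruskal–Wallis computation in 16–16 can be reproduced directly from the rank table; a minimal sketch:

```python
# Kruskal-Wallis statistic with the tie-corrected S^2, as in 16-16.
ranks = [
    [14, 11.5, 6, 7],
    [16, 11.5, 9, 15],
    [5, 8, 10, 13],
    [1.5, 3, 1.5, 4],
]
N = sum(len(row) for row in ranks)       # total observations, 16
correction = N * (N + 1) ** 2 / 4        # N(N+1)^2/4 term

# S^2 = [sum of squared ranks - N(N+1)^2/4] / (N - 1)
s2 = (sum(r * r for row in ranks for r in row) - correction) / (N - 1)

# K = [sum of Ri^2/ni - N(N+1)^2/4] / S^2
k = (sum(sum(row) ** 2 / len(row) for row in ranks) - correction) / s2

print(round(s2, 1), round(k, 2))  # 22.6 10.03
```

Since K = 10.03 exceeds the chi-square critical value 7.81, this agrees with the rejection above.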

16–17. H0: methods do not differ           Method   Ranks                 ri
       H1: methods differ                  1        10.5  9   12  7   5   43.5
       α = 0.10                            2        10.5  15  14  8   6   53.5
       CR: K > χ²0.10,2 = 4.61             3        1     4   3   2   13  23

       K = [12/(15(16))][43.5²/5 + 53.5²/5 + 23²/5] − 3(16) = 4.835

       At the 0.1 significance level, reject H0.



16–18. H0: manufacturers do not differ     Mfr   Ranks                 Ri
       H1: they differ                     A     12  1   3   9   13   38
                                           B     4   17  14  7   20   62
                                           C     19  15  16  11  18   79
                                           D     5   10  2   8   6    31

       K = [12/(20(21))][(38² + 62² + 79² + 31²)/5] − 3(21) = 8.37

       χ²0.05,3 = 7.81 < K. Reject H0; conclude the manufacturers differ.



Chapter 17

17–1. (a) X-bar Chart: U CL = 34.32 + 0.577(5.65) = 37.58


LCL = 34.32 − 0.577(5.65) = 31.06
R chart: U CL = 2.115(5.65) = 12
LCL = 0

There is one observation beyond the upper control limit. Removal of this point
results in the following control charts:

The process now appears to be in control.

(b) σ̂ = R̄/d2 = 5.74/2.326 = 2.468,   PCR = (USL − LSL)/(6σ̂) = 20/(6(2.468)) = 1.35

    PCRK = min[(45 − 34.09)/(3(2.468)), (34.09 − 25)/(3(2.468))] = min[1.474, 1.2277] = 1.2277

(c) 0.205%


17–2. P(X̄ < µ + 3σ/√n | µ_X̄ = µ + 1.5σ)

         = P(Z < [µ + 3σ/√n − (µ + 1.5σ)]/(σ/√n))

         = P(Z < 3 − 1.5√n)

         = probability of failing to detect the shift on the 1st sample following the shift.

      [P(Z < 3 − 1.5√n)]³ = probability of failing to detect the shift for 3 consecutive
      samples following the shift.

      For n = 4, [P(Z < 0)]³ = (0.5)³ = 0.125

      For n = 4 with 2-sigma limits,

      [P(Z < 2 − 1.5√4)]³ = [P(Z < −1)]³ = (0.1587)³ = 0.003997

17–3. (a) ARL = 1/α


(b) ARL = 1/(1 − β)
(c) If k changes from 3 to 2, the in-control ARL will get much shorter (from about
370 to 20). This is not desirable.
(d) For a 1-sigma shift, β ≈ 0.8, so the ARL is approximately ARL = 1/(1 − 0.8) = 5.
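The in-control ARL figures in (c) can be checked numerically; a sketch under normality, with α = 2(1 − Φ(k)) for k-sigma limits (the "370 to 20" above are rough roundings of 370.4 and 22.0):

```python
import math

def in_control_arl(k):
    """In-control ARL = 1/alpha for an X-bar chart with k-sigma limits,
    where alpha = 2*(1 - Phi(k)) under normality."""
    phi_k = 0.5 * (1 + math.erf(k / math.sqrt(2)))  # standard normal CDF
    alpha = 2 * (1 - phi_k)
    return 1 / alpha

print(round(in_control_arl(3)))  # 370
print(round(in_control_arl(2)))  # 22
```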

17–4. X̄ = 362.75/25 = 14.51,   R̄ = 8.60/25 = 0.34

      (a) X̄ Chart: UCL = 14.706, CL = 14.51, LCL = 14.314
          R Chart: UCL = 0.719, CL = 0.34, LCL = 0

      (b) σ̂ = R̄/d2 = 0.34/2.326 = 0.14617,   µ̂ = 14.51

          6σ natural tolerance limits: 14.51 ± 3(0.14617) = 14.51 ± 0.4385
          P(X > 14.90) = 0.00379,   P(X < 14.10) = 0.00252
          Fraction defective = 0.00631

      (c) PCR = (15 − 14)/(6(0.146)) = 1.141    PCRK = min[1.119, 1.164] = 1.119

17–5. D/2

17–6. (a) X̄ = 214.25/20 = 10.7125,   R̄ = 133/20 = 6.65

          X̄ Chart: UCL = 10.7125 + 0.729(6.65) = 15.56
                    LCL = 10.7125 − 0.729(6.65) = 5.86
          R Chart: UCL = 2.282(6.65) = 15.175, LCL = 0; process in control

      (b) σ̂ = R̄/d2 = 6.65/2.059 = 3.23

          PCR = (15 − 5)/(6(3.23)) = 0.516

17–7. (a) Normal probability plot

The normality assumption appears to be satisfied.



(b)

The process appears to be in control.

17–8. (a) Normal probability plot

The normality assumption appears to be satisfied.



(b)

The process appears to be in control.

17–9.

The process is out of control.



17–10. p = 0.05,   UCL = 0.05 + 3(0.0218) = 0.115

       P(p̂ > 0.115 | p = 0.08) = 1 − P(Z < (0.115 − 0.08)/0.0654) = 1 − P(Z < 0.535)
                                = 1 − 0.7036 = 0.2964,
       the probability of detecting the shift on the first sample following the shift.

       P(detecting before 3rd sample) = 1 − (0.7036)² = 0.5049

17–11. For the detection probability to equal 0.5, the magnitude of the shift must bring
       the fraction nonconforming exactly to the upper control limit. That is,
       δ = k√(p(1 − p)/n), where δ is the magnitude of the shift. Solving for n gives
       n = (k/δ)²p(1 − p). For example, if k = 3, p = 0.01 (the in-control fraction
       nonconforming), and δ = 0.04, then n = (3/0.04)²(0.01)(0.99) ≈ 56.
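The sample-size formula above is a one-liner to evaluate; a sketch (rounding up, since n must be an integer):

```python
import math

def n_for_half_detection(k, p, delta):
    """Smallest n with k*sqrt(p(1-p)/n) <= delta, so a shift of size delta
    lands on or beyond the control limit (detection probability >= 0.5)."""
    return math.ceil((k / delta) ** 2 * p * (1 - p))

print(n_for_half_detection(3, 0.01, 0.04))  # 56
```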

17–12. (a) P CR = 1.5


(b) About 7 defective parts per million.
(c) P CRk = 1 P CR unchanged.
(d) About 0.135 percent defective.

17–13. Center the process at µ = 100. The probability that a shift to µ = 105 will be
detected on the first sample following the shift is about 0.15. A p-chart with n = 7
would perform about as well.

17–14.

The process appears to be in control.

17–15.

The process is out of control. Removing two out-of-control points and revising the
limits results in:

The process is now in control.

17–16.

The process appears to be out of control.

17–17. U CL = 16.485; detection probability = 0.434


17–18. U CL = 19.487, CL = 10, LCL = 0.513

17–19.

Since the sample sizes are equal, the c and u charts are equivalent.

17–20.

17–21.

17–22.

17–23.

17–24. (a) R(t) = ∫_t^∞ f(x) dx = 1 if t < α;   (β − t)/(β − α) if α ≤ t ≤ β;   0 if t > β

       (b) MTTF = ∫₀^∞ R(t) dt = (α + β)/2

       (c) h(t) = f(t)/R(t) = 1/(β − t),   α ≤ t ≤ β

       (d) H(t) = ∫₀^t h(u) du = −ln[(β − t)/(β − α)]

           e^(−H(t)) = (β − t)/(β − α) = R(t)

17–25. R_S(t) = e^(−λ_s t),   λ_s = λ1 + λ2 + λ3 = 7.6 × 10⁻²

       (a) R_S(60) = e^(−7.6×10⁻²·60) = 0.0105
       (b) MTTF = 1/λ_s = 1/(7.6 × 10⁻²) = 13.16 hours
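The series-system arithmetic in 17–25 is easy to verify; a sketch (only the total rate 7.6 × 10⁻² is used, since the split among the three components does not matter for a series system of exponentials):

```python
import math

# Series system of independent exponential components: the system
# failure rate is the sum of the component failure rates.
lam_s = 7.6e-2

R60 = math.exp(-lam_s * 60)   # R_S(60)
mttf = 1 / lam_s              # mean time to system failure

print(round(R60, 4), round(mttf, 2))  # 0.0105 13.16
```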

17–26. λ1 = λ2 = λ3 = λ4 = λ5 = 0.002

       (a) R(1000) = Σ_{k=2}^{5} C(5, k)(0.367)^k (0.633)^(5−k) = 0.6056

       (b) R(1000) = 1 − (0.633)⁵ = 0.8984

17–27. R(1000) = Σ_{k=0}^{3} e⁻¹(1)^k/k! = 0.98104

17–28. λ = 1/160 = 6.25 × 10⁻³

17–29. 0.84, 0.85

17–30. If θ̂ is the maximum likelihood estimator of θ and φ = g(θ) is a single-valued
       function of θ, then φ̂ = g(θ̂) is the MLE of φ. To prove this, note that L(θ),
       the likelihood function, has a maximum at θ = θ̂. Furthermore, θ = g⁻¹(φ), so
       the likelihood function is L[g⁻¹(φ)], which has a maximum at θ̂ = g⁻¹(φ), i.e., at
       φ = g(θ̂). In the problem stated, R is of the form g(θ) = e^(−t/θ), so the problem
       is solved.

17–31. (a) 3842


(b) [913.63, ∞)

17–32. (a) R̂(300) = e^(−300/θ̂) = e^(−300/3842) = 0.9249

           R̂_L(300) = e^(−300/θ̂_L) = e^(−300/913.63) = 0.72

       (b) L̂0.9 = 3842 ln(1/0.9) = 404.795



Chapter 18

18–1. (a) λ = 2 arrivals/hr, µ = 3 services/hr, ρ = 2/3.

          P(n > 5) = 1 − P(n ≤ 5) = 1 − Σ_{j=0}^{5} (1/3)(2/3)^j = 0.088

      (b) L = ρ/(1 − ρ) = 2,   Lq = λ²/[µ(µ − λ)] = 4/3

      (c) W = 1/(µ − λ) = 1 hour
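The M/M/1 quantities in 18–1 can be checked in a few lines; a sketch using the standard formulas quoted above:

```python
# M/M/1 sanity check for 18-1: P(n > 5), L, Lq, W.
lam, mu = 2.0, 3.0
rho = lam / mu

# Under M/M/1, p_j = (1 - rho) rho^j, so P(n > 5) = rho^6.
p_gt_5 = 1 - sum((1 - rho) * rho**j for j in range(6))

L = rho / (1 - rho)                 # mean number in system
Lq = lam**2 / (mu * (mu - lam))     # mean number in queue
W = 1 / (mu - lam)                  # mean time in system

print(round(p_gt_5, 3), round(L, 3), round(Lq, 3), round(W, 3))  # 0.088 2.0 1.333 1.0
```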

18–2. (a) State 0 = rain; State 1 = clear.

          P = [ 0.7  0.3
                0.1  0.9 ]

      (b) p0 = 0.7p0 + 0.1p1
          1 = p0 + p1

          This implies p0 = 1/4, p1 = 3/4.

      (c) To find p11⁽³⁾, note that

          P⁽³⁾ = P³ = [ 0.412  0.588
                        0.196  0.804 ]

          Thus, p11⁽³⁾ = 0.804.

      (d) f10⁽²⁾ = p10⁽²⁾ − f10⁽¹⁾p00⁽¹⁾ = 0.16 − (0.1)(0.7) = 0.09

      (e) µ0 = 1/p0 = 4 days
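The three-step transition matrix and the steady state for this weather chain can be checked by direct matrix multiplication; a sketch:

```python
# Check P^3 and the steady state for the two-state weather chain in 18-2.
P = [[0.7, 0.3],
     [0.1, 0.9]]

def matmul(A, B):
    """2x2 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

P3 = matmul(matmul(P, P), P)
print(round(P3[1][1], 3))  # 0.804

# Steady state from p0 = 0.7 p0 + 0.1 p1 with p0 + p1 = 1:
p0 = 0.1 / (0.3 + 0.1)
print(round(p0, 2), round(1 - p0, 2))  # 0.25 0.75
```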

18–3.
      P = [ p      1 − p
            1 − p  p     ]

      P^∞ = [ 1/2  1/2
              1/2  1/2 ]

18–4. (a)
          P = [ 1 − 2λ∆t   2λ∆t                 0
                µ∆t        1 − (µ + (3/2)λ)∆t   (3/2)λ∆t
                0          µ∆t                  1 − µ∆t ]

      (b) The steady-state balance equations are

          −2λp0 + µp1 = 0
          2λp0 − (µ + (3/2)λ)p1 + µp2 = 0
          (3/2)λp1 − µp2 = 0
          p0 + p1 + p2 = 1

          Solving yields

          p0 = µ²/(µ² + 2λµ + 3λ²)
          p1 = 2λµ/(µ² + 2λµ + 3λ²)
          p2 = 3λ²/(µ² + 2λµ + 3λ²)

18–5. (a) pii = 1 implies that State i is an absorbing state, so States 0 and 3 are
          absorbing.

      (b) p0 = 1. The system never leaves State 0.

      (c) b10 = (2/3)·1 + (1/6)b10 + (1/6)b20
          b20 = (2/3)b10 + (1/6)b20
          ⇒ b10 = 20/21,   b20 = 16/21

          b13 = (1/6)b13 + (1/6)b23
          b23 = (2/3)b13 + (1/6)b23 + (1/6)·1
          ⇒ b13 = 1/21,   b23 = 5/21

          p0 = (1/2)(20/21) + (1/2)(16/21) = 18/21

      (d) p0 = (1/4)·1 + (1/4)(20/21) + (1/4)(16/21) + (1/4)·1 = 39/42

18–6. (a)
          P = [ 1  0  0  0  · · ·  0  0  0
                q  0  p  0  · · ·  0  0  0
                0  q  0  p  · · ·  0  0  0
                            ...
                0  0  0  0  · · ·  q  0  p
                0  0  0  0  · · ·  0  0  1 ]

      (b)
          P = [ 1    0    0    0    0
                0.7  0    0.3  0    0
                0    0.7  0    0.3  0
                0    0    0.7  0    0.3
                0    0    0    0    1   ]

          b10 = 0.7(1) + 0.3b20
          b20 = 0.7b10 + 0.3b30
          b30 = 0.7b20

          b10 = 0.953,   b20 = 0.845,   b30 = 0.591

          b34 = 0.7b24 + 0.3(1)
          b24 = 0.7b14 + 0.3b34
          b14 = 0.3b24

          b14 = 0.0465,   b24 = 0.155,   b34 = 0.408
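The absorption probabilities in 18–6(b) can be checked by iterating the b-equations to their fixed point; a sketch:

```python
# Absorption-at-0 probabilities for the random walk in 18-6(b),
# solved by fixed-point iteration on the b-equations.
p, q = 0.3, 0.7   # up / down one-step probabilities

b = {1: 0.0, 2: 0.0, 3: 0.0}       # b[i] = P(absorbed at 0 | start in state i)
for _ in range(200):               # the map is a contraction, so this converges
    b[1] = q * 1.0 + p * b[2]
    b[2] = q * b[1] + p * b[3]
    b[3] = q * b[2]                # state 4 is absorbing and contributes 0

print(round(b[1], 3), round(b[2], 3), round(b[3], 3))  # 0.953 0.845 0.591
```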

18–7. (a)
          P = [ 0      p      0      1 − p
                1 − p  0      p      0
                0      1 − p  0      p
                p      0      1 − p  0     ]

          A = [1 0 0 0]

      (b,c) The transition matrix P is doubly stochastic, since each row and each
          column sum to one. For such matrices, it is easy to show that the steady-state
          probabilities are all equal. Thus, for p = q = 1/2 or p = 4/5, q = 1/5, we
          have p1 = p2 = p3 = p4 = 1/4.

18–9. λ = 1/10, µ = 1/3, ρ = 3/10.

      (a) P(wait) = 1 − p0 = ρ = 3/10.

      (b) Lq = λ²/[µ(µ − λ)] = 9/70.

      (c) Wq = λ/[µ(µ − λ)] = 3 ⇒ λ = 1/6 = criterion for adding service.

      (d) P(Wq > 10) = ∫_{10}^∞ λ(1 − ρ)e^(−(µ−λ)w) dw = (3/10)e^(−7/3) ≈ 0.03.

      (e) P(W > 10) = e^(−(1/3)(7/10)(10)) ≈ 0.10.

      (f) ρ = 1 − p0 = 3/10.

18–10. λ = 15, µ = 27, N = 3, ρ = 15/27 = 5/9.

       (a) L = (5/9)[1 − 4(5/9)³ + 3(5/9)⁴] / {[1 − (5/9)][1 − (5/9)⁴]} ≈ 0.83.

       (b) p3 = (5/9)³(4/9)/[1 − (5/9)⁴] ≈ 0.084.

       (c) Since both the number in the system and the number in the queue are the
           same when the system is empty.

18–11. The general steady-state results for the M/M/s queueing system are

       ρ = λ/(sµ)
       p0 = { Σ_{n=0}^{s−1} (sρ)ⁿ/n!  +  (sρ)ˢ/[s!(1 − ρ)] }⁻¹
       L = sρ + (sρ)^(s+1) p0 / [s(s!)(1 − ρ)²]
       Lq = (sρ)^(s+1) p0 / [s(s!)(1 − ρ)²]
       W = L/λ
       Wq = W − 1/µ

       In this problem, we have λ = 0.0416, µ = 0.025.

       (a) ρ = 0.555 for s = 3.

       (b) Plugging into the appropriate equations above, we find that p0 = 0.1732.

           Then Lq = 0.3725.

           And finally, W = Wq + 1/µ = Lq/0.0416 + 40 = 48.96 min.

       (c) ρ = 0.832 for s = 2.

           Plugging into the appropriate equations above, we find that p0 = 0.0917.

           Then Lq = 3.742.

           And finally, W = Lq/0.0416 + 40 = 129.96 min.

18–12. λ = 18, µ = 30, φ = λ/µ = 0.6, ρ = 0.6/s.

       p0 = [ Σ_{j=0}^{s−1} φ^j/j!  +  φˢ/(s!(1 − ρ)) ]⁻¹

       s = 1 ⇒ p0 = 0.4.

       s = 2 ⇒ p0 = 0.5384,

       and P(wait) = 1 − p0 − p1 = 1 − 0.5384 − [(0.6)¹/1!](0.5384) = 0.14.

18–13. λ = 50, µ = 20, φ = 2.5, ρ = 2.5/s.

       (a) Cj = (λ_{j−1} λ_{j−2} · · · λ0)/(µj µ_{j−1} · · · µ1),

           where λj = λ for j < s and λj = 0 for j ≥ s, and µj = jµ for j ≤ s and
           µj = sµ for j > s. Therefore,

           Cj = (λ/µ)^j / j!,   j = 0, 1, . . . , s,

           pj = [(λ/µ)^j / j!] p0 for j = 0, 1, . . . , s, and pj = 0 otherwise, where

           p0 = [ Σ_{j=0}^{s} (λ/µ)^j/j! ]⁻¹

       (b) ps = [(λ/µ)ˢ/s!] [ Σ_{j=0}^{s} (λ/µ)^j/j! ]⁻¹ ≤ 0.05

           Try s = 5 ⇒ ps = 0.065;

           s = 6 ⇒ ρ = 0.416, ps = 0.0354 (which works).

       (c) p6 = 0.0354

       (d) Utilization would change from 41.6% to 100ρ = 100(0.5/6) = 8.33%.

       (e) φ = 50/12 = 4.166;   p6 = 0.377 is the fraction getting a busy tone.

Chapter 19

19–1. Let n denote the number of coin flips. Then the number of heads observed is
X ∼ Bin(n, 0.5). Therefore, we can expect to see about n/2 heads over the long
term.

19–2. If π̂n denotes the estimator for π after n darts have been thrown, then it is easy
to see that π̂n ∼ (4/n)Bin(n, π/4). Then E(π̂n ) = π, and we can expect to see the
estimator converge towards π as n becomes large.

19–3. By the Law of the Unconscious Statistician,

      E(Î_n) = E[ ((b − a)/n) Σ_{i=1}^{n} f(a + (b − a)U_i) ]
             = (b − a) E[f(a + (b − a)U_i)]
             = (b − a) ∫₀¹ f(a + (b − a)u) · 1 du
             = I
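The estimator analyzed in 19–3 is straightforward to implement; a minimal sketch, tried on ∫₀¹ x² dx = 1/3 (the test function and seed are illustrative choices, not from the text):

```python
import random

def mc_integral(f, a, b, n, seed=1):
    """Unbiased Monte Carlo estimate of the integral of f over [a, b]:
    (b - a)/n * sum of f(a + (b - a) U_i) over n uniforms U_i."""
    rng = random.Random(seed)
    return (b - a) / n * sum(f(a + (b - a) * rng.random()) for _ in range(n))

# Example: integral of x^2 over [0, 1] is exactly 1/3.
est = mc_integral(lambda x: x * x, 0.0, 1.0, 100_000)
print(round(est, 3))
```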

19–4. (a) The exact answer is Φ(2) − Φ(0) = 0.4772. The n = 1000 result will tend to
          be closer than the n = 10 result.

      (b) We can instead integrate over [0, 4], say, since the integral over [4, 10] is
          ≈ 0. This strategy will prevent the "waste" of observations on the trivial
          tail region.

      (c) The exact answer is 0.

19–5. (a)
customer arrival time begin service service time depart time wait
1 3 3.0 6.0 9.0 0.0
2 4 9.0 5.5 14.5 5.0
3 6 14.5 4.0 18.5 8.5
4 7 18.5 1.0 19.5 11.5
5 13 19.5 2.5 22.0 6.5
6 14 22.0 2.0 24.0 8.0
7 20 24.0 2.0 26.0 4.0
8 25 26.0 2.5 28.5 1.0
9 28 28.5 4.0 32.5 0.5
10 30 32.5 2.5 35.0 2.5

Time   Event             Customers in System

 3     Cust 1 arrival    1
 4     Cust 2 arrival    1 2
 6     Cust 3 arrival    1 2 3
 7     Cust 4 arrival    1 2 3 4
 9     Cust 1 depart     2 3 4
13     Cust 5 arrival    2 3 4 5
14     Cust 6 arrival    2 3 4 5 6
14.5   Cust 2 depart     3 4 5 6
18.5   Cust 3 depart     4 5 6
19.5   Cust 4 depart     5 6
20     Cust 7 arrival    5 6 7
22     Cust 5 depart     6 7
24     Cust 6 depart     7
25     Cust 8 arrival    7 8
26     Cust 7 depart     8
28     Cust 9 arrival    8 9
28.5   Cust 8 depart     9
30     Cust 10 arrival   9 10
32.5   Cust 9 depart     10
35     Cust 10 depart    (none)
Thus, the last customer leaves at time 35.
(b) The average waiting time for the 10 customers is 4.75.
(c) The maximum number of customers in the system is 5 (between times 14 and
14.5).
(d) The average number of customers over the first 30 minutes is calculated by
    adding up all of the customer-minutes from the second table (one customer
    from times 3 to 4, two customers from times 4 to 6, etc.):

    (1/30) ∫₀^30 L(t) dt = 79.5/30 = 2.65
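The hand simulation in 19–5 can be reproduced with the single-server (FIFO) recursion "begin service at the later of your arrival and the previous departure"; a sketch using the arrival and service times from part (a):

```python
# Recompute the waiting times in 19-5 from the arrival and service data.
arrivals = [3, 4, 6, 7, 13, 14, 20, 25, 28, 30]
services = [6, 5.5, 4, 1, 2.5, 2, 2, 2.5, 4, 2.5]

waits, depart = [], 0.0
for a, s in zip(arrivals, services):
    begin = max(a, depart)        # wait for the previous customer's departure
    waits.append(begin - a)
    depart = begin + s

print(depart, sum(waits) / len(waits), max(waits))  # 35.0 4.75 11.5
```

This matches the table: the last customer leaves at time 35, and the average wait is 4.75.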

19–6. The following table gives a history for this (S, s) inventory system.

day initial stock customer order end stock reorder? lost orders
1 20 10 10 no 0
2 10 6 4 yes 0
3 20 11 9 no 0
4 9 3 6 yes 0
5 20 20 0 yes 0
6 20 6 14 no 0
7 14 8 6 no 0

By using s = 6, we had no lost orders.

19–7. (a) Here is the complete table for the generator.

i 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16
Xi 0 1 16 15 12 13 2 11 8 9 14 7 4 5 10 3 0
Thus, U1 = 1/16, U2 = 6/16.
(b) Yes (see the table).
(c) Since the generator cycles, we have X0 = X16 = · · · = X144 = 0. Then
X145 = 1, X146 = 6, . . . , X150 = 2.

19–8. (a) Using the algorithm given in Example 19–8 of the text, we find that X1 =
          1422014746 and X2 = 456328559. Since Ui = Xi/(2³¹ − 1), we have U1 =
          0.6622, U2 = 0.2125.

19–9. (a) X = −(1/2) ln(1 − U).

      (b) X = −(1/2) ln(0.25) = 0.693.

19–10. (a) Z = Φ−1 (0.25) = −0.6745.


(b) X = µ + σZ = 1 + 3Z = −1.0235.

19–11. f(x) = |x|/4,   −2 < x < 2.

       (a) If −2 < x < 0, then F(x) = ∫_{−2}^{x} (−t/4) dt = 1/2 − x²/8.

           If 0 < x < 2, then F(x) = 1/2 + ∫₀^x (t/4) dt = 1/2 + x²/8.

           Thus, for 0 < U < 1/2, we set F(X) = 1/2 − X²/8 = U.

           Solving, we get X = −√(4 − 8U).

           For 1/2 < U < 1, we set F(X) = 1/2 + X²/8 = U.

           Solving this time, we get X = √(8U − 4).

           Recap:

           X = −√(4 − 8U) if 0 < U < 1/2;   X = √(8U − 4) if 1/2 < U < 1.

       (b) X = √(8(0.6) − 4) = 0.894.
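The two-branch inversion in 19–11 translates directly to code; a sketch:

```python
import math

def inv_cdf(u):
    """Inverse of F for f(x) = |x|/4 on (-2, 2), using the two CDF branches:
    F(x) = 1/2 - x^2/8 for x < 0 and F(x) = 1/2 + x^2/8 for x >= 0."""
    if u < 0.5:
        return -math.sqrt(4 - 8 * u)
    return math.sqrt(8 * u - 4)

print(round(inv_cdf(0.6), 3))  # 0.894
```

Feeding uniforms through `inv_cdf` produces variates with the triangular-style density above.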

19–12. (a)
x p(x) F (x) U
−2.5 0.35 0.35 [0,0.35)
1.0 0.25 0.60 [0.35,0.60)
10.5 0.40 1.0 [0.60,1.0)
(b) U = 0.86 yields X = 10.5.

19–13. (a) F(X) = 1 − e^(−(X/α)^β) = U.

           Solving for X, we obtain X = α[−ln(1 − U)]^(1/β).

       (b) X = (1.5)[−ln(0.34)]^(1/2) = 1.558.

19–14. We have

       Z1 = √(−2 ln(0.45)) cos(2π(0.12)) = 0.921

       and

       Z2 = √(−2 ln(0.45)) sin(2π(0.12)) = 0.865.
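The Box–Muller transform used in 19–14 is a two-liner; a sketch reproducing the numbers above:

```python
import math

def box_muller(u1, u2):
    """Box-Muller transform: two independent U(0,1)s -> two independent N(0,1)s."""
    r = math.sqrt(-2 * math.log(u1))
    return r * math.cos(2 * math.pi * u2), r * math.sin(2 * math.pi * u2)

z1, z2 = box_muller(0.45, 0.12)
print(round(z1, 3), round(z2, 3))  # 0.921 0.865
```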

19–15. Σ_{i=1}^{12} U_i − 6 = 1.07.
i=1

19–16. Since the Xi's are IID exponential(λ) random variables, we know that their m.g.f. is

       M_{Xi}(t) = λ/(λ − t),   t < λ,   i = 1, 2, . . . , n.

       Then the m.g.f. of Y = Σ_{i=1}^{n} Xi is

       M_Y(t) = Π_{i=1}^{n} M_{Xi}(t) = [λ/(λ − t)]ⁿ.

       We will be done as soon as we can show that this m.g.f. matches the one corresponding
       to the p.d.f. from Equation (19–4), namely,

       M_Y(t) = ∫₀^∞ e^{ty} λⁿ e^{−λy} y^{n−1}/(n − 1)! dy
              = [λⁿ/(n − 1)!] ∫₀^∞ e^{−(λ−t)y} y^{n−1} dy
              = [λⁿ/(n − 1)!] ∫₀^∞ e^{−u} [u/(λ − t)]^{n−1} du/(λ − t)    (u = (λ − t)y)
              = {λⁿ/[(λ − t)ⁿ(n − 1)!]} ∫₀^∞ e^{−u} u^{n−1} du
              = {λⁿ/[(λ − t)ⁿ(n − 1)!]} Γ(n)
              = [λ/(λ − t)]ⁿ.

       Since both versions of M_Y(t) match, the two versions of Y must come from the same
       distribution, and we are done.
19–17. X = −(1/λ) ln(Π_{i=1}^{2} U_i) = −(1/3) ln((0.73)(0.11)) = 0.841.

19–18. (a) To get a Bernoulli(p) random variable Xi, simply set

           Xi = 1 if Ui ≤ p,   Xi = 0 if Ui > p.

           (Note that there are other allocations of the uniforms that will do the trick
           just as well.)

       (b) Suppose X1, X2, . . . , Xn are IID Bernoulli(p)'s, generated according to (a).
           To get a Binomial(n, p) random variable, let Y = Σ_{i=1}^{n} Xi.
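The convolution method in 19–18 can be sketched as follows (the seed and the n = 10, p = 0.3 example are illustrative choices, not from the text):

```python
import random

def bernoulli(p, rng):
    """Bernoulli(p) from a single uniform, per part (a)."""
    return 1 if rng.random() <= p else 0

def binomial(n, p, rng):
    """Binomial(n, p) as a sum of n IID Bernoulli(p)'s, per part (b)."""
    return sum(bernoulli(p, rng) for _ in range(n))

rng = random.Random(7)
draws = [binomial(10, 0.3, rng) for _ in range(20_000)]
print(round(sum(draws) / len(draws), 1))  # sample mean should be near np = 3
```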

19–19. Suppose success (S) on trial i corresponds to Ui ≤ 0.25, and failure (F ) corresponds
to Ui > 0.25. Then, from the sequence of uniforms in Problem 19–15, we have
F F F F S, i.e., we require X = 5 trials before observing the first success.

19–20. The grand sample mean is

       Z̄_b = (1/b) Σ_{i=1}^{b} Z_i = 4,

       while

       V̂_B = [1/(b − 1)] Σ_{i=1}^{b} (Z_i − Z̄_b)² = 1.

       So the 90% batch means CI for µ is

       µ ∈ Z̄_b ± t_{α/2,b−1} √(V̂_B/b)
           = 4 ± t_{0.05,2} √(1/3)
           = 4 ± 2.92 √(1/3)
           = 4 ± 1.686
19–21. The 90% confidence interval is of the form

       [−2.5, 3.5] = X̄ ± t_{α/2,b−1} y = 0.5 ± t_{0.05,4} y = 0.5 ± 2.132 y.

       Since the half-length of the CI is 3, we must have y = 3/2.132 = 1.407.

       The 95% CI will therefore be of the form

       X̄ ± t_{0.025,4} y = 0.5 ± (2.776)(1.407) = 0.5 ± 3.91 = [−3.41, 4.41].



19–23. The 95% batch means CI for µ is

       µ ∈ Z̄_b ± t_{α/2,b−1} √(V̂_B/b)
           = 100 ± t_{0.025,4} √(250/5)
           = 100 ± 2.776 √50
           = 100 ± 19.63

19–25. (a) (i) Both are exponential(1).

           (ii) Cov(Ui, 1 − Ui) = −V(Ui) = −1/12.

           (iii) Yes.

       (b) After a little algebra,

           V((X̄_n + Ȳ_n)/2) = (1/4)[V(X̄_n) + V(Ȳ_n) + 2Cov(X̄_n, Ȳ_n)]
                             = (1/2)[V(X̄_n) + Cov(X̄_n, Ȳ_n)]
                             = (1/(2n))[V(X_i) + Cov(X_i, Y_i)]
                             ≤ (1/(2n)) V(X_i)
                             = V(X̄_{2n}).

           So the variance decreases compared to V(X̄_{2n}).

       (c) You get zero, which is the correct answer.
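The variance reduction in 19–25 is easy to see empirically for a monotone function of a uniform; a sketch (the target E[e^U] = e − 1 and the seed are illustrative choices, not from the text):

```python
import math
import random

# Antithetic variates as in 19-25: pair each U with 1 - U when
# estimating E[e^U], U ~ U(0,1). True value is e - 1.
rng = random.Random(42)
n = 10_000
u = [rng.random() for _ in range(n)]

plain = [math.exp(x) for x in u]                          # ordinary draws X_i
anti = [(math.exp(x) + math.exp(1 - x)) / 2 for x in u]   # paired (X_i + Y_i)/2

def var(v):
    m = sum(v) / len(v)
    return sum((x - m) ** 2 for x in v) / (len(v) - 1)

print(var(anti) < var(plain))  # True: antithetic pairing cuts the variance
```

Since e^u is monotone, Cov(e^U, e^(1−U)) < 0, which is exactly the condition that makes the inequality in (b) strict.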

19–26. (a) E(C) = E(X̄) − E[k(Y − E(Y))] = µ − k[E(Y) − E(Y)] = µ.

       (b) V(C) = V(X̄) + k²V(Y) − 2k Cov(X̄, Y). Comment: it would be nice, in
           terms of minimizing variance, if k Cov(X̄, Y) > 0.

       (c) (d/dk)V(C) = 2kV(Y) − 2 Cov(X̄, Y) = 0.

           This implies that the critical (minimizing) point is

           k = Cov(X̄, Y)/V(Y).

           Thus, the optimal variance is

           V(C) = V(X̄) + [Cov(X̄, Y)/V(Y)]² V(Y) − 2 [Cov(X̄, Y)/V(Y)] Cov(X̄, Y)
                = V(X̄) − Cov²(X̄, Y)/V(Y).

19–27. They are exponential(1).

19–28. They should look normal.

19–29. You should have a bivariate normal distribution (with correlation 0), centered at
zero with symmetric tails in all directions.