Probability and Statistics in Engineering - Hines, Montgomery, Goldsman, Borror 4e Solutions (Thedrunkard1234)
Chapter 1
1–2. [Venn diagrams in the universe U with events A, B, C.]
(a) Shadings of A, B, and C in U.
(b) Shading both sides verifies the distributive laws A∪(B∩C) = (A∪B)∩(A∪C) and A∩(B∪C) = (A∩B)∪(A∩C).
(c) A∩B = A    (d) A∪B = B
(e)–(f) Shadings of the remaining combinations of A and B in U.
1–5. S = {(t1 , t2 ): t1 ∈ R, t2 ∈ R, t1 ≥ 0, t2 ≥ 0}
[Sketches in the (t1, t2) plane: the regions A, B, and C, with the boundary line t1 + t2 = 0.3 (intercepts at 0.15 on each axis); and, for the last part, the line y − x = 1 inside the square 0 ≤ x, y ≤ 24.]
1–8. {0, 1}A = {∅, {a}, {b}, {c}, {d}, {a, b}, {a, c}, {a, d}, {b, c}, {b, d},
{c, d}, {a, b, c}, {a, b, d}, {a, c, d}, {b, c, d}, {a, b, c, d}}
1–9. N = Not Defective, D = Defective
(a) S = {N N N, N N D, N DN, N DD, DN N, DN D, DDN, DDD}
(b) S = {N N N N, N N N D, N N DN, N DN N, DN N N }
1–11. 6 · 5 = 30 routes
1–16. There are 5^12 possibilities, so the probability of randomly selecting one is 5^(−12).
1–17. C(8,2) = 28 comparisons
1–18. C(40,2) = 780 tests
1–19. P(40,2) = 40!/38! = 1560 tests
1–20. C(10,5) = 252
1–21. C(5,1)·C(5,1) = 25
      C(5,2)·C(5,2) = 100
1–29. A: over 60;  M: male;  F: female
P(F|A) = P(F)·P(A|F) / [P(F)·P(A|F) + P(M)·P(A|M)]
       = (0.4)(0.01) / [(0.4)(0.01) + (0.6)(0.2)] = 0.004/0.124 ≈ 0.0323
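The Bayes computation above is easy to check numerically; a minimal Python sketch (the function name is illustrative, not from the text):

```python
# Bayes' theorem for Problem 1-29: posterior P(F|A) from priors and likelihoods.
def posterior(prior_f, lik_f, prior_m, lik_m):
    num = prior_f * lik_f
    return num / (num + prior_m * lik_m)

p = posterior(0.4, 0.01, 0.6, 0.2)
print(round(p, 4))  # 0.0323
```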
1–30. A: defective
Bi : production on machine i
If
  P(A2 ∪ A3 ∪ ··· ∪ Ak) = Σ_{i=2}^{k} P(Ai) − Σ_{2≤i<j≤k} P(Ai∩Aj) + Σ_{2≤i<j<r≤k} P(Ai∩Aj∩Ar) − Σ_{2≤i<j<r<ℓ≤k} P(Ai∩Aj∩Ar∩Aℓ) + ···    (Eq. 1)
then we must show
  P(A1 ∪ A2 ∪ ··· ∪ Ak) = Σ_{i=1}^{k} P(Ai) − Σ_{1≤i<j≤k} P(Ai∩Aj) + Σ_{1≤i<j<r≤k} P(Ai∩Aj∩Ar) − Σ_{1≤i<j<r<ℓ≤k} P(Ai∩Aj∩Ar∩Aℓ) + ···    (Eq. 2)
By Thm. 1–3,
  P(A1 ∪ (A2 ∪ A3 ∪ ··· ∪ Ak)) = P(A1) + P(A2 ∪ A3 ∪ ··· ∪ Ak) − P((A1∩A2) ∪ ··· ∪ (A1∩Ak))    (Eq. 3)
So, applying Eq. 1 to both unions appearing in Eq. 3,
  P(A1 ∪ A2 ∪ ··· ∪ Ak)
  = P(A1) + [Σ_{i=2}^{k} P(Ai) − Σ_{2≤i<j≤k} P(Ai∩Aj) + Σ_{2≤i<j<r≤k} P(Ai∩Aj∩Ar) − ···]
    − [Σ_{i=2}^{k} P(A1∩Ai) − Σ_{2≤i<j≤k} P(A1∩Ai∩Aj) + Σ_{2≤i<j<r≤k} P(A1∩Ai∩Aj∩Ar) − ···]
  = Σ_{i=1}^{k} P(Ai) − Σ_{1≤i<j≤k} P(Ai∩Aj) + Σ_{1≤i<j<r≤k} P(Ai∩Aj∩Ar) + ··· + (−1)^{k−1}·P(A1∩A2∩···∩Ak)
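The identity can also be verified by brute force on a small finite sample space with equally likely outcomes; a quick sketch (the random events are only an illustration):

```python
# Brute-force check of inclusion-exclusion for k = 4 random events.
from itertools import combinations
import random

random.seed(1)
omega = range(20)
events = [set(random.sample(omega, random.randint(1, 15))) for _ in range(4)]

# Left side: P(A1 U ... U Ak) computed directly.
direct = len(set().union(*events)) / len(omega)

# Right side: alternating sum over all non-empty subsets of the events.
incl_excl = 0.0
for m in range(1, len(events) + 1):
    for combo in combinations(events, m):
        incl_excl += (-1) ** (m - 1) * len(set.intersection(*combo)) / len(omega)

assert abs(direct - incl_excl) < 1e-12
```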
1–35. P(B) = 1 − (365)(364)···(365 − n + 1)/365^n

n      10     20     21     22     23     24     25     30     40     50     60
P(B)   0.117  0.411  0.444  0.476  0.507  0.538  0.569  0.706  0.891  0.970  0.994
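The table entries follow directly from the product formula; a short check:

```python
# Birthday problem (Problem 1-35): P(B) = 1 - (365*364*...*(365-n+1))/365^n.
def p_match(n):
    p_no_match = 1.0
    for i in range(n):
        p_no_match *= (365 - i) / 365
    return 1 - p_no_match

for n in (10, 23, 50):
    print(n, round(p_match(n), 3))  # 0.117, 0.507, 0.970
```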
1–36. P(win on 1st throw) = 6/36 + 2/36 = 8/36
1–39. P(B3|A) = P(B3)·P(A|B3) / Σ_{i=1}^{3} P(Bi)·P(A|Bi)
             = (0.5)(0.3) / [(0.2)(0.2) + (0.3)(0.5) + (0.5)(0.3)] = 0.441
1–40. F: structural failure;  DS: diagnosis as structural failure
P(F|DS) = P(F)·P(DS|F) / [P(F)·P(DS|F) + P(F′)·P(DS|F′)]
        = (0.25)(0.9) / [(0.25)(0.9) + (0.75)(0.2)] = 0.225/(0.225 + 0.150) = 0.6
Chapter 2
2–1. RX = {0, 1, 2, 3, 4}
P(X = 0) = C(4,0)C(48,5)/C(52,5),  P(X = 1) = C(4,1)C(48,4)/C(52,5),
P(X = 2) = C(4,2)C(48,3)/C(52,5),  P(X = 3) = C(4,3)C(48,2)/C(52,5),
P(X = 4) = C(4,4)C(48,1)/C(52,5)
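These hypergeometric probabilities can be checked with the standard-library binomial coefficient `math.comb`:

```python
# Problem 2-1: distribution of the number of aces in a 5-card hand.
from math import comb

def p(x):
    """P(X = x) for x aces drawn from a 52-card deck in 5 cards."""
    return comb(4, x) * comb(48, 5 - x) / comb(52, 5)

probs = [p(x) for x in range(5)]
assert abs(sum(probs) - 1.0) < 1e-12  # the pmf sums to 1
print(round(probs[0], 4))  # 0.6588
```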
2–2. µ = 0·(1/6) + 1·(1/6) + 2·(1/3) + 3·(1/12) + 4·(1/6) + 5·(1/12) = 26/12
σ² = 0²·(1/6) + 1²·(1/6) + 2²·(1/3) + 3²·(1/12) + 4²·(1/6) + 5²·(1/12) − (26/12)² = 83/36
2–3. ∫₀^∞ c·e^{−x} dx = 1 ⇒ c = 1, so
f(x) = e^{−x} if x ≥ 0;  0 otherwise
µ = ∫₀^∞ x e^{−x} dx = [−x e^{−x}]₀^∞ + ∫₀^∞ e^{−x} dx = 1
σ² = ∫₀^∞ x² e^{−x} dx − 1² = [−x² e^{−x}]₀^∞ + ∫₀^∞ 2x e^{−x} dx − 1
   = [−2x e^{−x}]₀^∞ + ∫₀^∞ 2 e^{−x} dx − 1 = 2 − 1 = 1
x      −1    0     +1   +2    ow
pX(x)  1/5   1/10  2/5  3/10  0

E(X) = (−1)(1/5) + 0·(1/10) + 1·(2/5) + 2·(3/10) = 4/5
V(X) = (−1)²(1/5) + 0²·(1/10) + 1²·(2/5) + 2²·(3/10) − (4/5)² = 29/25

FX(x) = 0,     x < −1
      = 1/5,   −1 ≤ x < 0
      = 3/10,  0 ≤ x < 1
      = 7/10,  1 ≤ x < 2
      = 1,     x ≥ 2
2–9. P(X < 30) = P(X ≤ 29) = Σ_{x=0}^{29} e^{−20}(20)^x/x! = 0.978
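A quick numerical check of this Poisson sum:

```python
# Poisson cdf for Problem 2-9: P(X <= 29) with mean 20.
from math import exp, factorial

def poisson_cdf(k, mean):
    return sum(exp(-mean) * mean**x / factorial(x) for x in range(k + 1))

print(round(poisson_cdf(29, 20), 3))  # 0.978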
= 0; ow
2–11. (a) ∫₀² kx dx + ∫₂⁴ k(4−x) dx = 1 ⇒ k = 1/4, and fX(x) ≥ 0 for k = 1/4
(b) µ = ∫₀² (1/4)x² dx + ∫₂⁴ (1/4)(4x − x²) dx = 2
    σ² = ∫₀² (1/4)x³ dx + ∫₂⁴ (1/4)(4x² − x³) dx − 2² = 2/3
a2
σ2 =
6
2–13. From Chebyshev's inequality, 1 − 1/k² = 0.75 ⇒ k = 2; with µ = 14 and σ = √2, the interval is [14 − 2√2, 14 + 2√2].
2–14. (a) ∫_{−1}^{0} kt² dt = 1 ⇒ k = 3
(b) µ = ∫_{−1}^{0} 3t³ dt = 3[t⁴/4]_{−1}^{0} = −3/4
    σ² = ∫_{−1}^{0} 3t⁴ dt − (−3/4)² = 3[t⁵/5]_{−1}^{0} − 9/16 = 3/5 − 9/16 = 3/80
= 1; t>0
2–15. (a) k(1/2 + 1/4 + 1/8) = 1 ⇒ k = 8/7
(b) µ = (8/7)[1·(1/2) + 2·(1/2)² + 3·(1/2)³] = 11/7
    σ² = (8/7)[1²·(1/2) + 2²·(1/2)² + 3²·(1/2)³] − (11/7)² = 26/49
2–20. µ = 7, σ² = 105/18 = 35/6
2–22. µ = 0, σ² = 25, σ = 5
P[|X − µ| ≥ kσ] = P[|X| ≥ 5k] = 0 if k > 1, and = 1 if 0 < k ≤ 1.
From Chebyshev's inequality, the upper bound is 1/k².
2–24. F(x) = ∫_{−∞}^{x} (1/(σπ)) du / {1 + ((u−µ)²/σ²)};  −∞ < x < ∞
Let t = (u−µ)/σ, dt = (1/σ) du, and
F(x) = ∫_{−∞}^{(x−µ)/σ} (1/π)·dt/(1+t²) = 1/2 + (1/π) tan⁻¹((x−µ)/σ);  −∞ < x < ∞
2–25. ∫₀^{π/2} k sin y dy = 1 ⇒ k[−cos y]₀^{π/2} = 1 ⇒ k = 1
µ = ∫₀^{π/2} y sin y dy = [sin y − y cos y]₀^{π/2} = sin(π/2) = 1
2–26. Assume X continuous.
µ_k = ∫_{−∞}^{∞} (x−µ)^k fX(x) dx = ∫_{−∞}^{∞} [Σ_{j=0}^{k} C(k,j)(−µ)^j x^{k−j}] fX(x) dx
    = Σ_{j=0}^{k} (−1)^j C(k,j) µ^j ∫_{−∞}^{∞} x^{k−j} fX(x) dx
    = Σ_{j=0}^{k} (−1)^j C(k,j) µ^j µ′_{k−j}
Chapter 3
fZ (z) = e−z ; z ≥ 0
= 0; ow
3–6. (a) E(Di) = Σ_{d=0}^{9} d·p_{Di}(d) = (1/10)(1 + 2 + ··· + 9) = 4.5
(b) V(Di) = Σ_{d=0}^{9} d²·p_{Di}(d) − (4.5)² = (1/10)[1² + 2² + ··· + 9²] − 20.25 = 8.25
(c) d:  0 1 2 3 4 5 6 7 8 9
    y:  4 3 2 1 0 0 1 2 3 4

    y      0    1    2    3    4    ow
    pY(y)  0.2  0.2  0.2  0.2  0.2  0

E(Y) = (2/10)(1 + 2 + 3 + 4) = 2
V(Y) = (2/10)(1² + 2² + 3² + 4²) − 2² = 2
3–7. R = revenue/gal:
R = 0.92 if A < 0.7;  R = 0.98 if A ≥ 0.7
E(R) = 0.92·P(A < 0.7) + 0.98·P(A ≥ 0.7) = 0.92(0.7) + 0.98(0.3) = 93.8¢/gal
3–8. MX(t) = ∫_β^∞ e^{tx}(1/θ)e^{−(x−β)/θ} dx = (1/θ)e^{β/θ} ∫_β^∞ e^{−x(1/θ − t)} dx
          = (1 − θt)^{−1} e^{βt}, for 1/θ − t > 0
M′X(t) = e^{βt}[β(1 − θt)^{−1} + θ(1 − θt)^{−2}]
M″X(t) = e^{βt}[β²(1 − θt)^{−1} + 2βθ(1 − θt)^{−2} + 2θ²(1 − θt)^{−3}]
E(X) = M′X(0) = β + θ
V(X) = M″X(0) − (β + θ)² = β² + 2βθ + 2θ² − (β + θ)² = θ²
3–9. (a) FY(y) = P(2X² ≤ y) = P(−√(y/2) ≤ X ≤ +√(y/2)) = P(0 ≤ X ≤ √(y/2))
   = ∫₀^{√(y/2)} e^{−x} dx = [−e^{−x}]₀^{√(y/2)} = 1 − e^{−√(y/2)}
fY(y) = F′Y(y) = (1/4)(y/2)^{−1/2}·e^{−(y/2)^{1/2}};  y > 0
      = 0; otherwise
3–10. Note that as stated, Y > y₀ ⇒ signal read (not Y > |y₀|), so
P(Y > y) = ∫_{tan⁻¹ y}^{π/2} (1/(2π)) dx + ∫_{tan⁻¹ y + π}^{3π/2} (1/(2π)) dx
FY(y) = 1 − P(Y > y) = 1/2 + (1/π) tan⁻¹ y;  y ≥ 0
fY(y) = (1/π)·1/(1 + y²);  −∞ < y < +∞
Note the symmetry in 1/(1 + y²).
dE[L(X, S)]/dS = (6/8)·10⁻⁶·S − 5/4 = 0 ⇒ S = (5/3)·10⁶
3–12. µG = E(G) = ∫₀⁴ g·fG(g) dg = ∫₀⁴ (g²/8) dg = 8/3.
E(G²) = ∫₀⁴ (g³/8) dg = 8.
σ²G = V(G) = E(G²) − (E(G))² = 8/9.
H(G) = (3 + 0.05G)²
H′(G) = (0.1)(3 + 0.05G)
H″(G) = 0.005.
Thus,
µA ≈ H(µG) + (1/2)H″(µG)σ²G = H(8/3) + (1/2)(0.005)(8/9) = 9.82
and
σ²A ≈ [H′(µG)]²σ²G = [(0.1)(3 + 0.05(8/3))]²(8/9) = 0.0873
3–14. y = 3/(1+x)² ⇒ x = √3·y^{−1/2} − 1
dx/dy = −(√3/2)y^{−3/2}, so |dx/dy| = (√3/2)y^{−3/2}
fY(y) = fX(x)·|dx/dy| = fX(√3·y^{−1/2} − 1)·(√3/2)y^{−3/2}
      = exp[−√3·y^{−1/2} + 1]·(√3/2)y^{−3/2};  0 ≤ y ≤ 3
3–15. With equal probabilities pX(1) = ··· = pX(6) = 1/6,
MX(t) = Σ_{x=1}^{6} e^{tx}·(1/6)
E(X) = M′X(0) = 7/2
V(X) = M″X(0) − [M′X(0)]² = 35/12
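The stated moments of a fair die follow directly from the pmf; a one-line check:

```python
# Problem 3-15: mean and variance of a fair six-sided die.
faces = range(1, 7)
mean = sum(x / 6 for x in faces)
var = sum(x * x / 6 for x in faces) - mean**2

assert abs(mean - 3.5) < 1e-12     # 7/2
assert abs(var - 35 / 12) < 1e-12  # 35/12
```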
a = 4b^{3/2}/√π.
(b) Similarly,
µX = E(X) = ∫₀^∞ a·x³e^{−bx²} dx = (a/(2b²))Γ(2) = 2/√(πb)
and
E(X²) = (a/(2b^{5/2}))Γ(5/2) = 3/(2b).
These facts imply that
σ²X = E(X²) − [E(X)]² = 3/(2b) − 4/(πb) = (3π − 8)/(2πb).
Now we have
H(x) = 18x²
H(µX) = 18µ²X = 18·(4/(πb)) = 72/(πb)
H′(µX) = 36µX = 36·(2/√(πb)) = 72/√(πb)
H″(µX) = 36
Then
µY ≈ H(µX) + (1/2)H″(µX)·σ²X = 72/(πb) + (1/2)(36)(3π − 8)/(2πb) = 27/b
and
σ²Y ≈ (H′(µX))²σ²X = (72/√(πb))²·(3π − 8)/(2πb) = 2592(3π − 8)/(π²b²)
3–17. E(Y) = 1, V(Y) = 1, H(Y) = √(Y² + 36)
E(X) ≈ H(µY) + (1/2)H″(µY)σ²Y
V(X) ≈ [H′(µY)]²σ²Y
3–18. E(P) = ∫₀¹ (1 + 3r)·6r(1 − r) dr = ∫₀¹ (6r + 12r² − 18r³) dr = 5/2
3–20. V = (π/4)X²·1
E(V) = (π/4)E(X²) = (π/4)[V(X) + (E(X))²] = (π/4)·[25·10⁻⁶ + 2²] = 3.14162
3–22. (a) ∫₀¹ k(1−x)^{a−1}x^{b−1} dx = 1.
Let Γ(p) = ∫₀^∞ u^{p−1}e^{−u} du define the gamma function. Integrating by parts and applying L'Hôpital's rule, it can be shown that Γ(p + 1) = p·Γ(p).
Make the variable change u = v² to show that Γ(1/2) = √π.
(b) E(X^k) = [Γ(a+b)/(Γ(a)·Γ(b))] ∫₀¹ x^{(b+k)−1}(1−x)^{a−1} dx
           = [Γ(a+b)/(Γ(a)·Γ(b))] · Γ(a)Γ(b+k)/Γ(a+b+k).
Then E(X) = [(a+b−1)!/((a−1)!(b−1)!)] · [(a−1)!b!/(a+b)!] = b/(a+b)
E(X²) = [(a+b−1)!/((a−1)!(b−1)!)] · [(a−1)!(b+1)!/(a+b+1)!] = b(b+1)/((a+b+1)(a+b))
so V(X) = E(X²) − [E(X)]² = ab/((a+b)²(a+b+1))
3–23. MX(t) = 1/2 + (1/4)e^t + (1/8)e^{2t} + (1/8)e^{3t}
M′X(t) = (1/4)e^t + (1/4)e^{2t} + (3/8)e^{3t}
M″X(t) = (1/4)e^t + (1/2)e^{2t} + (9/8)e^{3t}
(a) E(X) = M′X(0) = 7/8
    V(X) = M″X(0) − [M′X(0)]² = 15/8 − (7/8)² = 71/64
(b) x:  0 1 2 3 ow        y      0    1    4    ow
    y:  4 1 0 1            pY(y)  1/8  3/8  1/2  0

FY(y) = 0;    y < 0
      = 1/8;  0 ≤ y < 1
      = 1/2;  1 ≤ y < 4
      = 1;    y ≥ 4
= 0 for odd r.
3–25. If ∫_{−∞}^{∞} |x|^r f(x) dx = k < ∞, then ∫_{−∞}^{∞} |x|^n f(x) dx < ∞, where 0 ≤ n < r.
Proof: If |x| ≤ 1, then |x|^n ≤ 1, and if |x| > 1, then |x|^n ≤ |x|^r. Then
∫_{−∞}^{∞} |x|^n f(x) dx = ∫_{|x|≤1} |x|^n·f(x) dx + ∫_{|x|>1} |x|^n·f(x) dx
  ≤ ∫_{|x|≤1} 1·f(x) dx + ∫_{|x|>1} |x|^r f(x) dx ≤ 1 + k < ∞
3–26. MX(t) = exp(µt + σ²t²/2) ⇒ ψX(t) = ln MX(t) = µt + σ²t²/2
dψX(t)/dt|_{t=0} = [µ + σ²t]_{t=0} = µ
d²ψX(t)/dt²|_{t=0} = σ²
d^rψX(t)/dt^r|_{t=0} = 0;  r ≥ 3
3–27. From Table XV, using the first column with scaling: u1 = 0.10480, u2 = 0.22368, u3 = 0.24130, u4 = 0.42167, u5 = 0.37570, u6 = 0.77921, ..., u20 = 0.07056.
FX(x) = 0;      x < 0
      = 0.5;    0 ≤ x < 1
      = 0.75;   1 ≤ x < 2
      = 0.875;  2 ≤ x < 3
      = 1;      x ≥ 3
So, x1 = 0, x2 = 0, x3 = 0, x4 = 0, x5 = 0, x6 = 2, ..., x20 = 0
FT(t) = 1 − e^{−t/4}
Thus, we set t = −4 ln(1 − u).
From Table XV, using the second column with scaling: u1 = 0.15011, u2 = 0.46573, u3 = 0.48360, ..., u10 = 0.36257
So,
t1 = −4 ln(0.84989) = 0.6506
t2 = −4 ln(0.53427) = 2.5074
...
t10 = −4 ln(0.63143) = 1.8391
Chapter 4
4–1. (a) x 0 1 2 3 4 5
pX (x) 27/50 11/50 6/50 3/50 2/50 1/50
y 0 1 2 3 4
pY (y) 20/50 15/50 10/50 4/50 1/50
(b) y 0 1 2 3 4
pY |0 (y) 11/27 8/27 4/27 3/27 1/27
(c) x 0 1 2 3 4 5
pX|0 (x) 11/20 4/20 2/20 1/20 1/20 1/20
4–2. (a) y 1 2 3 4 5 6 7 8 9
pY (y) 44/100 26/100 12/100 8/100 4/100 2/100 2/100 1/100 1/100
x 1 2 3 4 5 6
pX (x) 26/100 21/100 17/100 15/100 11/100 10/100
(b) y 1 2 3 4 5 6 7 8 9
x=1 pY |1 (y) 10/26 6/26 3/26 2/26 1/26 1/26 1/26 1/26 1/26
x=2 pY |2 (y) 8/21 5/21 3/21 2/21 1/21 1/21 1/21 0 0
x=3 pY |3 (y) 8/17 5/17 2/17 1/17 1/17 0 0 0 0
x=4 pY |4 (y) 7/15 4/15 2/15 1/15 1/15 0 0 0 0
x=5 pY |5 (y) 6/11 3/11 1/11 1/11 0 0 0 0 0
x=6 pY |6 (y) 5/10 3/10 1/10 1/10 0 0 0 0 0
4–3. (a) ∫₀^{10}∫₀^{100} (k/1000) dx1 dx2 = 1 ⇒ k = 1
(b) fX1(x1) = ∫₀^{10} (1/1000) dx2 = 1/100;  0 ≤ x1 ≤ 100;  0 otherwise
    fX2(x2) = ∫₀^{100} (1/1000) dx1 = [x1/1000]₀^{100} = 1/10;  0 ≤ x2 ≤ 10;  0 otherwise
4–8. (a) x1        51    52    53    54    55
         pX1(x1)   0.28  0.28  0.22  0.09  0.13

         x2        51    52    53    54    55
         pX2(x2)   0.18  0.15  0.35  0.12  0.20

(b)      x2           51    52    53     54    55    ow
         pX2|51(x2)   6/28  7/28  5/28   5/28  5/28  0
         pX2|52(x2)   5/28  5/28  10/28  2/28  6/28  0
         pX2|53(x2)   5/22  1/22  10/22  1/22  5/22  0
         pX2|54(x2)   1/9   1/9   5/9    1/9   1/9   0
         pX2|55(x2)   1/13  1/13  5/13   3/13  3/13  0

E(X2|x1 = 51) = Σ_{x2=51}^{55} x2·pX2|51(x2) = 52.857
E(X2|x1 = 52) = Σ_{x2=51}^{55} x2·pX2|52(x2) = 52.964
E(X2|x1 = 53) = Σ_{x2=51}^{55} x2·pX2|53(x2) = 53.0
E(X2|x1 = 54) = Σ_{x2=51}^{55} x2·pX2|54(x2) = 53.0
E(X2|x1 = 55) = Σ_{x2=51}^{55} x2·pX2|55(x2) = 53.461
4–9. fX1(x1) = ∫₀¹ 6x1²x2 dx2 = [3x1²x2²]_{x2=0}^{1} = 3x1²;  0 ≤ x1 ≤ 1
fX2(x2) = ∫₀¹ 6x1²x2 dx1 = [2x1³x2]_{x1=0}^{1} = 2x2;  0 ≤ x2 ≤ 1
fX1|x2(x1) = fX1,X2(x1,x2)/fX2(x2) = 6x1²x2/(2x2) = 3x1²;  0 ≤ x1 ≤ 1
fX2|x1(x2) = fX1,X2(x1,x2)/fX1(x1) = 2x2;  0 ≤ x2 ≤ 1
E(X1|x2) = ∫₀¹ x1·fX1|x2(x1) dx1 = 3/4
E(X2|x1) = ∫₀¹ x2·fX2|x1(x2) dx2 = 2/3
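The two conditional means can be confirmed by simple numerical integration (midpoint rule; the grid size is arbitrary):

```python
# Problem 4-9: E(X1|x2) = integral of x1*3*x1^2 and E(X2|x1) = integral of x2*2*x2 on [0,1].
N = 10000
h = 1.0 / N
e_x1 = sum((i + 0.5) * h * 3 * ((i + 0.5) * h) ** 2 for i in range(N)) * h
e_x2 = sum((i + 0.5) * h * 2 * ((i + 0.5) * h) for i in range(N)) * h

assert abs(e_x1 - 0.75) < 1e-6     # 3/4
assert abs(e_x2 - 2 / 3) < 1e-6    # 2/3
```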
4–10. (a) fX1(x1) = 2x1·e^{−x1²};  x1 ≥ 0
fX2(x2) = 2x2·e^{−x2²};  x2 ≥ 0
Since fX1|x2(x1) = fX1(x1) and fX2|x1(x2) = fX2(x2), X1 and X2 are independent.
(c) E(X1|x2) = E(X1) = ∫₀^∞ x1(2x1e^{−x1²}) dx1 = ∫₀^∞ 2x1²e^{−x1²} dx1
Let u = x1, du = dx1, dv = 2x1e^{−x1²} dx1, v = −e^{−x1²}, and integrate by parts, so
E(X1) = [−x1e^{−x1²}]₀^∞ + ∫₀^∞ e^{−x1²} dx1 = 0 + √π/2
Similarly, E(X2) = √π/2.
4–11. z = xy, t = x ⇒ x = t, y = z/t
|∂x/∂t ∂x/∂z; ∂y/∂t ∂y/∂z| = |1 0; −z/t² 1/t| = 1/t
so p(z, t) = g(t)h(z/t)|1/t|
Thus ℓ(z) = ∫_{−∞}^{∞} g(t)h(z/t)|1/t| dt
4–12. a = s1·s2, t = s1, so s2 = a/t, s1 = t.
The Jacobian is 1/t.
4–15. Y = X1 + X2 + X3 + X4
E(Y ) = 80, V (Y ) = 4(9) = 36
4–16. If X1 and X2 are independent, then
pX2|x1(x2) = pX2(x2) and pX1|x2(x1) = pX1(x1) for all x1, x2.
Then
pX1,X2(x1,x2) = pX2|x1(x2)·pX1(x1) = pX2(x2)·pX1(x1) for all x1, x2.
4–23. (a) fX(x) = ∫₀^{√(1−x²)} (2/π) dy = (2/π)√(1−x²);  −1 < x < +1
fY(y) = ∫_{−√(1−y²)}^{√(1−y²)} (2/π) dx = (4/π)√(1−y²);  0 < y < 1
(b) fX|y(x) = fX,Y(x,y)/fY(y) = (2/π)/[(4/π)√(1−y²)] = 1/(2√(1−y²));  −√(1−y²) < x < √(1−y²)
fY|x(y) = fX,Y(x,y)/fX(x) = (2/π)/[(2/π)√(1−x²)] = 1/√(1−x²);  0 < y < √(1−x²)
(c) E(X|y) = ∫_{−√(1−y²)}^{√(1−y²)} x/(2√(1−y²)) dx = [x²/(4√(1−y²))]_{−√(1−y²)}^{√(1−y²)} = 0
E(Y|x) = ∫₀^{√(1−x²)} y/√(1−x²) dy = [y²/(2√(1−x²))]₀^{√(1−x²)} = √(1−x²)/2
Change X and Y , reversing roles to show E(Y |X) = E(Y ) if X and Y are inde-
pendent.
4–25. E(X|y) = ∫_{−∞}^{∞} x·fX|y(x) dx = ∫_{−∞}^{∞} x·fX,Y(x,y)/fY(y) dx
E[E(X|Y)] = ∫_{−∞}^{∞} E(X|y)·fY(y) dy = ∫_{−∞}^{∞} [∫_{−∞}^{∞} x·fX,Y(x,y)/fY(y) dx] fY(y) dy
          = ∫_{−∞}^{∞} x [∫_{−∞}^{∞} fX,Y(x,y) dy] dx = ∫_{−∞}^{∞} x·fX(x) dx = E(X).
4–26. w = s + d; let y = s − d, so s = (w+y)/2, d = (w−y)/2.
s = 10 → w + y = 20;  s = 40 → w + y = 80;  d = 10 → w − y = 20;  d = 30 → w − y = 60.
[Sketches: the rectangle 10 < s < 40, 10 < d < 30 in the (s, d) plane maps to the parallelogram in the (w, y) plane bounded by w + y = 20, w + y = 80, w − y = 20, and w − y = 60, with w running from 20 to 70.]
The Jacobian is J = |∂s/∂w ∂s/∂y; ∂d/∂w ∂d/∂y| = |1/2 1/2; 1/2 −1/2| = −1/2
∴ fW,Y(w,y) = (1/30)(1/20)(1/2) = 1/1200;  10 < (w+y)/2 < 40, 10 < (w−y)/2 < 30
fW(w) = ∫_{20−w}^{w−20} (1/1200) dy = (w−20)/600;  20 < w < 40
      = ∫_{w−60}^{w−20} (1/1200) dy = 20/600;  40 < w ≤ 50
      = ∫_{w−60}^{80−w} (1/1200) dy = (70−w)/600;  50 < w < 70
      = 0;  ow
4–27. (a) fY(y) = ∫₀¹ (x+y) dx = y + 1/2;  0 < y < 1
fX|y(x) = fX,Y(x,y)/fY(y) = (x+y)/(y + 1/2);  0 < x < 1, 0 < y < 1
E(X|Y = y) = ∫₀¹ x(x+y)/(y + 1/2) dx = [x³/3 + x²y/2]₀¹/(y + 1/2) = (2 + 3y)/(3(1 + 2y));  0 < y < 1.
(b) fX(x) = ∫₀¹ (x+y) dy = [xy + y²/2]_{y=0}^{1} = x + 1/2;  0 < x < 1
E(X) = ∫₀¹ [x² + (x/2)] dx = [(x³/3) + (x²/4)]₀¹ = 7/12
(c) E(Y) = ∫₀¹ [y² + (y/2)] dy = [(y³/3) + (y²/4)]₀¹ = 7/12
4–28. (a) ∫₀^∞∫₀^∞ k(1 + x + y)/[(1+x)⁴(1+y)⁴] dx dy = 1 ⇒ k = 9/2
(b) fX(x) = ∫₀^∞ (9/2)(1 + x + y) dy/[(1+x)⁴(1+y)⁴]
   = (9/2)/(1+x)⁴ ∫₀^∞ dy/(1+y)⁴ + (9/2)x/(1+x)⁴ ∫₀^∞ dy/(1+y)⁴ + (9/2)/(1+x)⁴ ∫₀^∞ y dy/(1+y)⁴
   = 3(3 + 2x)/(4(1+x)⁴);  x > 0
4–29. (a) k∫₀^∞∫₀^∞ (1 + x + y)^{−n} dx dy = 1 ⇒ k∫₀^∞ [(1 + x + y)^{−n+1}/(−n+1)]_{x=0}^{∞} dy = 1
so (k/(n−1)) ∫₀^∞ (1+y)^{−(n−1)} dy = 1 ⇒ k/((n−1)(n−2)) = 1 ⇒ k = (n−1)(n−2)
(b) FX,Y(x,y) = P(X ≤ x, Y ≤ y) = k∫₀^x∫₀^y (1 + s + t)^{−n} dt ds
   = k[(1 + s + y)^{−n+2}/((−n+2)(−n+1)) − (1 + s)^{−n+2}/((−n+2)(−n+1))]_{s=0}^{x}
[Sketch: the unit square in the (x, y) plane, with x the request time and y the receipt time, showing the lines x = y and x = y + (1/4).]
4–35. (a) FZ(z) = P(Z ≤ z) = P(a + bX ≤ z) = P(X ≤ (z−a)/b) = FX((z−a)/b)
fZ(z) = fX((z−a)/b)·|1/b|.
(b) FZ(z) = P(1/X ≤ z) = P(X ≥ 1/z) = 1 − FX(1/z)
fZ(z) = fX(1/z)·(1/z²)
The range space for Z is determined from the range space of X and the definition of the transformation.
Chapter 5
x p(x)
0 (1 − p)4
1 4p(1 − p)3
2 6p2 (1 − p)2
3 4p3 (1 − p)
4 p4
otherwise 0
5–2. P(X ≥ 5) = Σ_{x=5}^{6} C(6,x)(0.95)^x(0.05)^{6−x} = 6(0.95)⁵(0.05) + (0.95)⁶ = 0.9672
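A quick check of this binomial tail:

```python
# Problem 5-2: P(X >= 5) for X ~ Binomial(n=6, p=0.95).
from math import comb

p_tail = sum(comb(6, x) * 0.95**x * 0.05**(6 - x) for x in (5, 6))
print(round(p_tail, 4))  # 0.9672
```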
5–3. Assume independence and let W represent the number of orders received.
P(W ≥ 4) = Σ_{w=4}^{12} C(12,w)(0.5)^w(0.5)^{12−w} = Σ_{w=4}^{12} C(12,w)(0.5)^{12}
         = 1 − (0.5)^{12} Σ_{w=0}^{3} C(12,w) = 0.9270
5–5. P(X > 2) = 1 − P(X ≤ 2) = 1 − Σ_{x=0}^{2} C(50,x)(0.02)^x(0.98)^{50−x} = 0.0784.
5–6. MX(t) = E[e^{tX}] = Σ_{x=0}^{n} e^{tx} C(n,x) p^x(1−p)^{n−x} = (pe^t + q)^n, where q = 1 − p
E[X] = M′X(0) = [n(pe^t + q)^{n−1}·pe^t]_{t=0} = np
E[X²] = M″X(0) = np[e^t(n−1)(pe^t + q)^{n−2}(pe^t) + (pe^t + q)^{n−1}e^t]_{t=0} = (np)² − np² + np
5–7. P(p̂ ≤ 0.03) = P(X/100 ≤ 0.03) = P(X ≤ 3) = Σ_{x=0}^{3} C(100,x)(0.01)^x(0.99)^{100−x} = 0.9816
5–8. P(p̂ > p + √(pq/n)) = P(p̂ > 0.07 + √((0.07)(0.93)/200)) = P(X > 200(0.088)) = P(X > 17.6)
   = 1 − P(X ≤ 17) = 1 − Σ_{x=0}^{17} C(200,x)(0.07)^x(0.93)^{200−x} = 0.1649
P(X = 5) = p⁴(1 − p) = f(p)
df(p)/dp = 4p³ − 5p⁴ = 0 ⇒ p = 4/5
[Plot: f(p) versus p on 0 ≤ p ≤ 1; f peaks near 0.082 at p = 0.8.]
(c)
5–13. MX(t) = pe^t/(1 − qe^t), where q = 1 − p
E[X] = M′X(0) = [((1 − qe^t)(pe^t) + (pe^t)(qe^t))/(1 − qe^t)²]_{t=0} = ((1−q)p + pq)/(1−q)² = p/p² = 1/p
E[X²] = M″X(0) = [((1 − qe^t)²(pe^t) + 2pe^t(1 − qe^t)(qe^t))/(1 − qe^t)⁴]_{t=0} = (1+q)p/(1−q)³ = (1+q)/p²
V(X) = q/p²
5–14.
5–15.
5–16. (a) P(X = 8) = C(7,2)(0.1)³(0.9)⁵ = 0.0124
(b) P(X > 8) = Σ_{x=9}^{∞} C(x−1,2)(0.1)³(0.9)^{x−3}
5–17. P(X = 4) = C(3,1)(0.8)²(0.2)² = 0.0768
P(X < 4) = P(X = 2) + P(X = 3) = (0.8)² + C(2,1)(0.8)²(0.2) = 0.896
X = X1 + X2 + ··· + Xr
The moment generating function for the geometric is pe^t/(1 − qe^t), so
MX(t) = Π_{i=1}^{r} MXi(t) = [pe^t/(1 − qe^t)]^r
E[X] = M′X(t)|_{t=0} = r/p
Continuing,
V(X) = [M″X(t)|_{t=0}] − (r/p)² = rq/p²
5–19. E[X] = r/p = 5/0.8 = 6.25,  V(X) = rq/p² = (5)(0.2)/(0.8)² = 1.5625
P(X ≤ 7) = Σ_{x=4}^{7} C(x−1,3)(0.8)⁴(0.2)^{x−4}
(b) 1 − 4(60)(1/4)⁵ ≈ 0.7656
5–23. P(Y1=4, Y2=1, Y3=3, Y4=2) = [10!/(4!1!3!2!)](0.2)⁴(0.2)¹(0.2)³(0.4)² ≈ 0.005
5–24. P(Y1=0, Y2=0, Y3=0, Y4=10) = [10!/(0!0!0!10!)](0.2)⁰(0.2)⁰(0.2)⁰(0.4)^{10}
P(Y1=5, Y2=0, Y3=0, Y4=5) = [10!/(5!0!0!5!)](0.2)⁵(0.2)⁰(0.2)⁰(0.4)⁵
5–25. (a) P(X ≤ 2) = Σ_{x=0}^{2} C(4,x)C(21,5−x)/C(25,5) ≈ 0.98
(b) P(X ≤ 2) = Σ_{x=0}^{2} C(5,x)(4/25)^x(21/25)^{5−x} ≈ 0.97
5–26. The approximation improves as n/N decreases. n = 5, N = 100 is a better condition than n = 5, N = 25.
We could instead use the binomial approximation; now we want n such that
0.05 ≥ C(n,0)(7/25)⁰(18/25)^n = (18/25)^n.
We find that n ≈ 9.
5–28. MX(t) = E[e^{tX}] = Σ_{x=0}^{∞} e^{tx}·e^{−c}c^x/x! = e^{−c} Σ_{x=0}^{∞} (ce^t)^x/x! = e^{−c}e^{ce^t} = e^{c(e^t − 1)}
5–29. P(X < 10) = P(X ≤ 9) = Σ_{x=0}^{9} e^{−25}(25)^x/x! ≈ 0.0002
5–30. P(X > 20) = P(X ≥ 21) = Σ_{x=21}^{∞} e^{−10}(10)^x/x!
    = 1 − P(X ≤ 20) = 1 − Σ_{x=0}^{20} e^{−10}(10)^x/x! = 0.002
5–31. P(X > 5) = P(X ≥ 6) = 1 − P(X ≤ 5) = 1 − Σ_{x=0}^{5} e^{−4}4^x/x! ≈ 0.2149
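Both Poisson tails (5-30 and 5-31) can be checked with the same helper:

```python
# Poisson tails: 5-30 with mean 10, 5-31 with mean 4.
from math import exp, factorial

def poisson_cdf(k, mean):
    return sum(exp(-mean) * mean**x / factorial(x) for x in range(k + 1))

print(round(1 - poisson_cdf(20, 10), 3))  # 5-30: ~0.002
print(round(1 - poisson_cdf(5, 4), 4))    # 5-31: 0.2149
```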
y          x  p(x)         x·p(x)
0          0  e^{−2}       0
1          1  2e^{−2}      2e^{−2}
2          2  2e^{−2}      4e^{−2}
3 or more  3  1 − 5e^{−2}  3 − 15e^{−2}
E[X] = 3 − 9e^{−2} = 1.78
Y  0 1 2 3 4 5 6 7 8 9 ≥10
X  0 1 2 3 4 5 6 7 8 9 10
pX(x) = e^{−c}c^x/x!,  x = 0, 1, 2, ..., 9
pX(10) = Σ_{i=10}^{∞} e^{−c}c^i/i! = 1 − Σ_{i=0}^{9} e^{−c}c^i/i!
We could also have done this problem using a Poisson approximation. For (a), we would use λ = 0.025 errors/page with 50 pages. Then c = 50(0.025) = 1.25, and we would eventually obtain P(X ≥ 1) = 1 − e^{−1.25}(1.25)⁰/0! = 0.7135, which is a bit off from our exact answer. For (b), we would take c = n(0.025), eventually yielding n = 160 after trial and error.
5–38. P(X = 0) = e^{−c}c⁰/0! with c = 10000(0.0001) = 1, so P(X = 0) = e^{−1} = 0.3679,
and P(X ≥ 2) = 1 − P(X ≤ 1) = 0.265
5–40. Kendall and Stuart state: “the liability of individuals to accident varies.” That
is, the individuals who compose a population have different degrees of accident
proneness.
5–41. Use Table XV and scaling by 10−5 .
(a) From Col. 3 of Table XV,
Realization 1 Realization 2
u1 = 0.01536 < 0.5 ⇒ x1 = 1 u1 = 0.63661 > 0.5 ⇒ x1 = 0
u2 = 0.25595 < 0.5 ⇒ x2 = 1 u2 = 0.53342 > 0.5 ⇒ x2 = 0
u3 = 0.22527 < 0.5 ⇒ x3 = 1 u3 = 0.88231 > 0.5 ⇒ x3 = 0
u4 = 0.06243 < 0.5 ⇒ x4 = 1 u4 = 0.48235 < 0.5 ⇒ x4 = 1
u5 = 0.81837 > 0.5 ⇒ x5 = 0 u5 = 0.52636 > 0.5 ⇒ x5 = 0
u6 = 0.11008 < 0.5 ⇒ x6 = 1 u6 = 0.87529 > 0.5 ⇒ x6 = 0
u7 = 0.56420 > 0.5 ⇒ x7 = 0 u7 = 0.71048 > 0.5 ⇒ x7 = 0
u8 = 0.05463 < 0.5 ⇒ x8 = 1 u8 = 0.51821 > 0.5 ⇒ x8 = 0
x=6 x=1
Continue to get three more realizations.
y = x^{1/3}  [two simulated realizations, #1 and #2, plotted]
Chapter 6
6–1. fX(x) = 1/4;  0 < x < 4
P(1/2 < X < 7/4) = ∫_{1/2}^{7/4} dx/4 = 5/16,  P(9/4 < X < 27/8) = ∫_{9/4}^{27/8} dx/4 = 9/32
6–2. fX(x) = 4/34;  143/4 < x < 177/4
P(X < 40) = ∫_{143/4}^{40} (4/34) dx = 1/2
P(40 < X < 42) = ∫_{40}^{42} (4/34) dx = 4/17
6–3. fX(x) = 1/2,  0 ≤ x ≤ 2
FY(y) = P(Y ≤ y) = P(X ≤ (y−5)/2) = ∫₀^{(y−5)/2} dx/2 = (y−5)/4
So fY(y) = 1/4,  5 < y < 9
6–4. The p.d.f. of profit, X, is fX(x) = 1/2000;  0 < x < 2000
Y = broker's fees = 50 + 0.06X
6–5. MX(t) = E(e^{tX}) = ∫_α^β e^{tx} dx/(β−α) = [e^{tx}/((β−α)t)]_α^β = (e^{tβ} − e^{tα})/((β−α)t)
E(X) = M′X(0) = (1/(β−α))·[t^{−1}βe^{tβ} − t^{−2}e^{tβ} − t^{−1}αe^{tα} + t^{−2}e^{tα}]_{t→0} = (β+α)/2
and
E(X²) = M″X(0) = (1/(β−α))·(β³ − α³)/3
V(X) = E(X²) − [E(X)]² = (β−α)²/12
6–6. E(X) = (β+α)/2 = 0 ⇒ β + α = 0
V(X) = (β−α)²/12 = 1 ⇒ β² − 2αβ + α² = 12
⇒ α = −√3, β = +√3
y          FY(y)
y < 1      0
1 ≤ y < 2  0.3
2 ≤ y < 3  0.5
3 ≤ y < 4  0.9
y ≥ 4      1
Generate realizations of ui ∼ Uniform[0,1] random numbers as described in Section
6–6; use these in the inverse as yi = FY−1 (ui ), i = 1, 2, . . .. For example, if u1 =
0.623, then y1 = FY−1 (0.623) = 3.
6–8.
1
fX (x) = ; 0<x<4
4
= 0; otherwise
6–9. MX(t) = ∫₀^∞ e^{tx}·λe^{−λx} dx = λ∫₀^∞ e^{x(t−λ)} dx
MX(t) = λ[e^{x(t−λ)}/(t−λ)]_{x=0}^{∞} = λ/(λ−t) = 1/(1 − t/λ),  t < λ
E(X) = M′X(0) = [λ(λ−t)^{−2}]_{t=0} = 1/λ
E(X²) = M″X(0) = [2λ(λ−t)^{−3}]_{t=0} = 2/λ²
V(X) = E(X²) − [E(X)]² = 2/λ² − 1/λ² = 1/λ²
P ∗ = 1000, X > 1
= 750, X ≤ 1
6–11. P(X < 1/2) = ∫₀^{1/2} (1/3)e^{−x/3} dx = 1 − e^{−(1/3)(1/2)} = 0.154,
or 15.4% experience failure in the first six months.
6–12. P* = profit, T = life length
P* = rY − dY if T > Y;  = rT − dY if T ≤ Y
E(P*) = (rY − dY)·P(T ≥ Y) − dY·P(T ≤ Y) + r∫₀^Y tθe^{−θt} dt = r(θ^{−1} − θ^{−1}e^{−θY}) − dY
dE(P*)/dY = re^{−θY} − d = 0 ⇒ Y = −θ^{−1} ln(d/r)
For Y to be positive, 0 < d/r < 1.
P(X ≤ 2) = 1 − e^{−2λ}
P(X ≤ 3) = 1 − e^{−3λ}
1 − e^{−2λ} = (2/3)(1 − e^{−3λ}) ⇒ 1 = 3e^{−2λ} − 2e^{−3λ}
Only λ = 0 satisfies this condition; but we must have λ > 0, so there is no value of λ for which P(X ≤ 2) = (2/3)·P(X ≤ 3)
6–15.
CI = C, X > 15;
= C + Z, X ≤ 15
CII = 3C, X > 15
= 3C + Z, X ≤ 15
6–17. Γ(p) = ∫₀^∞ x^{p−1}e^{−x} dx
Let x = y² ⇒ dx/dy = 2y. So Γ(p) = 2∫₀^∞ y^{2p−1}e^{−y²} dy and Γ(1/2) = 2∫₀^∞ e^{−y²} dy.
Γ(1/2)² = 4∫₀^∞ e^{−y²} dy · ∫₀^∞ e^{−x²} dx = 4∫₀^∞∫₀^∞ e^{−(x²+y²)} dx dy = 4(π/4) = π  (switching to polar coordinates)
So Γ(1/2) = √π.
6–19. Y = X1 + ··· + X10, where
g(xi) = 7e^{−7xi} if xi > 0;  0 otherwise
P(Y > 1) = ∫₁^∞ [7/Γ(10)](7y)⁹e^{−7y} dy = Σ_{k=0}^{9} e^{−7·1}·7^k/k! = 0.8305
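The Erlang tail above equals a Poisson cumulative probability, which is easy to check:

```python
# Problem 6-19: P(Y > 1) for Y ~ Erlang(r=10, lambda=7) equals P(Poisson(7) <= 9).
from math import exp, factorial

p = sum(exp(-7.0) * 7.0**k / factorial(k) for k in range(10))
print(round(p, 4))  # 0.8305
```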
6–20. λ = 6; t = 4 ⇒ λt = 24
P(X ≥ 24) = 1 − P(X ≤ 23) = 1 − Σ_{x=0}^{23} e^{−24}(24)^x/x!
6–21. MX(t) = E(e^{tX}) = ∫₀^∞ e^{tx}[λ/Γ(r)](λx)^{r−1}e^{−λx} dx = [λ^r/Γ(r)] ∫₀^∞ x^{r−1}e^{−x(λ−t)} dx
The integral converges if (λ − t) > 0, or λ > t. Let u = x(λ−t), dx/du = 1/(λ−t). So
MX(t) = [λ^r/Γ(r)] ∫₀^∞ (u/(λ−t))^{r−1} e^{−u} (λ−t)^{−1} du
      = (λ/(λ−t))^r · [1/Γ(r)] ∫₀^∞ u^{r−1}e^{−u} du
      = (λ/(λ−t))^r · [1/Γ(r)]·Γ(r) = (λ/(λ−t))^r = (1 − (t/λ))^{−r},  where λ > t
6–22. P(Y > 24) = ∫_{24}^∞ [0.25/Γ(4)](0.25y)³e^{−0.25y} dy = Σ_{k=0}^{3} e^{−0.25·24}(0.25·24)^k/k! = 0.1512

P(X < 60) = 1 − Σ_{k=0}^{3} e^{−0.1·60}(0.1·60)^k/k! = 0.8488
6–24. E(X) = [λ^r/Γ(r)] ∫_u^∞ x(x−u)^{r−1}e^{−λ(x−u)} dx
Let y = λ(x−u) ⇒ dx/dy = 1/λ:
E(X) = [1/Γ(r)] ∫₀^∞ (y/λ + u)·y^{r−1}e^{−y} dy
     = [1/(λΓ(r))] ∫₀^∞ y^r e^{−y} dy + [u/Γ(r)] ∫₀^∞ y^{r−1}e^{−y} dy
     = Γ(r+1)/(λΓ(r)) + uΓ(r)/Γ(r) = r/λ + u
6–26.
Γ(λ + r) λ−1
fX (x) = x (1 − x)r−1
Γ(λ)Γ(r)
λ = r = 1 gives
Γ(2)
fX (x) = x0 (1 − x)0
Γ(1)Γ(1)
½
1 if 0 < x < 1
=
0 otherwise
9
6–27. λ = 2, r = 1
Γ(3)
fX (x) = x(1 − x)0
Γ(2)Γ(1)
½
2x if 0 < x < 1
=
0 otherwise
λ = 1, r = 2
Γ(3)
fX (x) = x0 (1 − x)
Γ(1)Γ(2)
½
2(1 − x) if 0 < x < 1
=
0 otherwise
Let u = y^β ⇒ dy = β^{−1}u^{(1/β)−1} du
E(X) = β∫₀^∞ (δu^{1/β} + γ)u^{1−1/β}e^{−u}·β^{−1}u^{(1/β)−1} du = γ∫₀^∞ e^{−u} du + δ∫₀^∞ u^{1/β}e^{−u} du
     = γ + δΓ(1 + 1/β)
Let y = (x−γ)/δ ⇒ dx = δ dy:
E(X²) = ∫₀^∞ (δy + γ)²(β/δ)y^{β−1}e^{−y^β}·δ dy
With u = y^β ⇒ dy = β^{−1}u^{(1/β)−1} du,
E(X²) = ∫₀^∞ (δu^{1/β} + γ)²·β·u^{1−1/β}e^{−u}·β^{−1}u^{(1/β)−1} du = δ²Γ(1 + 2/β) + 2γδΓ(1 + 1/β) + γ²
So
V(X) = E(X²) − [E(X)]² = δ²[Γ(1 + 2/β) − Γ²(1 + 1/β)]
6–31. F(x) = 1 − e^{−((x−γ)/δ)^β}
F(1.5) = 1 − e^{−((1.5−1)/0.5)²} ≈ 0.63
6–32. F(x) = 1 − e^{−((x−0)/400)^{1/3}}
1 − F(600) = e^{−(600/400)^{1/3}} ≈ 0.32
6–33. F(x) = 1 − e^{−((x−0)/400)^{1/2}}
1 − F(800) = e^{−2^{1/2}} ≈ 0.24
6–35. F(x) = 1 − e^{−((x−0)/200)^{1/4}}
(a) 1 − F(1000) = e^{−5^{1/4}} ≈ 0.22
(b) E(X) = 0 + 200·Γ(5) = 200·24 = 4800
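The Weibull reliability values above can be reproduced with a small helper (parameter names follow the text's γ, δ, β):

```python
# Weibull cdf F(x) = 1 - exp(-((x - gamma)/delta)^beta); checks for 6-31 and 6-32.
from math import exp

def weibull_cdf(x, gamma, delta, beta):
    return 1 - exp(-(((x - gamma) / delta) ** beta))

print(round(weibull_cdf(1.5, 1.0, 0.5, 2.0), 2))        # 6-31: ~0.63
print(round(1 - weibull_cdf(600, 0, 400, 1 / 3), 2))    # 6-32: ~0.32
```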
6–36. P* = profit:
P* = $100 if x ≥ 8760;  = −$50 if x < 8760
E(P*) = −50∫₀^{8760} 20000^{−1}e^{−x/20000} dx + 100∫_{8760}^∞ 20000^{−1}e^{−x/20000} dx
      = −50(1 − e^{−876/2000}) + 100e^{−876/2000} = $46.80/set
6–37. r/λ = 20, (r/λ²)^{1/2} = 10 ⇒ r = 4, λ = 0.2
P(X ≤ 15) = F(15) = 1 − Σ_{k=0}^{3} e^{−3}3^k/k! = 0.3528
6–38. (a) Use Table XV, Col. 1 with scaling and Equation 6–35.
Note: Since r is an integer, an alternate scheme which may be more efficient here
is to let xi = xi1 + xi2 , where xij is exponential with parameter λ = 4.
(b) Using the gamma variates in Problem 6–38(c) and Table XV, Col. 3 entry #25,
(0.000383)1/2
y1 = = 0.03645
(0.28834)1/2
(0.05885)1/2
y2 = = 1.102797
(0.04839)1/2
..
.
etc.
Chapter 7
(a) c = 1.56
(b) c = 1.96
(c) c = 2.57
(d) c = −1.645
7–4. P (Z ≥ Zα ) = α ⇒ Φ(Zα ) = 1 − α.
7–6. (a) P(X > 680) = 1 − Φ((680−600)/60) = 1 − Φ(1.33) = 0.09176
(b) P(X ≤ 550) = Φ((550−600)/60) = Φ(−5/6) = 1 − Φ(5/6) = 0.20327
7–7. P(X > 500) = 1 − Φ((500−485)/30) = 1 − Φ(0.5) = 0.30854, i.e., 30.854%
7–8. (a) P(X ≥ 28.5) = 1 − Φ((28.5−30)/1.1) = 1 − Φ(−1.36) = Φ(1.36) = 0.91308
(b) P(X ≤ 31) = Φ((31−30)/1.1) = 0.819
(c) P(|X−30| > 2) = 1 − [Φ(2/1.1) − Φ(−2/1.1)] = 1 − [0.96485 − 0.03515] = 0.0703
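These Φ values come from tables with z rounded to two decimals; a computed check via the error function agrees to table precision:

```python
# Standard normal cdf via math.erf, checking Problem 7-6.
from math import erf, sqrt

def phi(z):
    """Standard normal cdf Phi(z)."""
    return 0.5 * (1 + erf(z / sqrt(2)))

p_a = 1 - phi((680 - 600) / 60)   # 7-6(a), table value 0.09176
p_b = phi((550 - 600) / 60)       # 7-6(b), table value 0.20327
print(round(p_a, 4), round(p_b, 4))
```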
7–10. MX(t) = E(e^{tX}) = (1/(σ√(2π))) ∫_{−∞}^{∞} e^{tx}e^{−(x−µ)²/(2σ²)} dx
= (1/√(2π)) ∫_{−∞}^{∞} e^{t(yσ+µ)}e^{−y²/2} dy   (letting y = (x−µ)/σ)
= (e^{µt}/√(2π)) ∫_{−∞}^{∞} e^{−(y² − 2σty)/2} dy
= (e^{µt}/√(2π)) ∫_{−∞}^{∞} e^{−(y² − 2σty + σ²t² − σ²t²)/2} dy
= (e^{µt}/√(2π)) ∫_{−∞}^{∞} e^{−(y−σt)²/2}·e^{σ²t²/2} dy
= e^{µt + (1/2)σ²t²}·(1/√(2π)) ∫_{−∞}^{∞} e^{−w²/2} dw   (letting w = y − σt)
= e^{µt + (1/2)σ²t²}
7–11. FY(y) = P(aX + b ≤ y) = P(X ≤ (y−b)/a)
= Φ(((y−b)/a − µ)/σ) = Φ((y − b − aµ)/(aσ)) = Φ((y − (aµ + b))/(aσ))
(b) P(X > c) = 0.9 ⇒ 1 − Φ((c−12)/0.02) = 0.9 ⇒ Φ((c−12)/0.02) = 0.1
⇒ (c−12)/0.02 = −1.28 ⇒ c = 12 − 0.0256 = 11.97
(c) P(11.95 ≤ X ≤ 12.05) = Φ((12.05−12)/0.02) − Φ((11.95−12)/0.02) = Φ(2.5) − Φ(−2.5) = 0.9876
(c) Φ((7.2−7.25)/0.1) − Φ((6.8−7.25)/0.1) = Φ(−0.5) − Φ(−4.5) ≈ 1 − Φ(0.5) = 0.3085
(d) Φ((7.2−6.75)/0.1) − Φ((6.8−6.75)/0.1) ≈ 1 − Φ(0.5) = 0.3085
Since E(PA∗ ) < E(PB∗ ) when k < 0.1368, use process B; When k ≥ 0.1368, use
process A.
Then, with φ the standard normal density,
dE(P)/dµ = −(C + R2)φ(8 − µ) + (C + R1)φ(6 − µ) = 0,
or
(C + R2)/(C + R1) = φ(6 − µ)/φ(8 − µ) = e^{14 − 2µ}
Thus,
µ = 7 − (1/2)·ln((C + R2)/(C + R1)).
(a) We have
P(62 ≤ X ≤ 72) = Φ((72−70)/4) − Φ((62−70)/4) = Φ(0.5) − Φ(−2) = 0.69146 − 0.02275 = 0.66871.
7–20. E(Y) = E(Σ_{i=1}^{n} Xi) = Σ_{i=1}^{n} E(Xi) = nµ, so
E(Zn) = (E(Y) − nµ)/√(σ²n) = 0
V(Y) = V(Σ_{i=1}^{n} Xi) = Σ_{i=1}^{n} V(Xi) = nσ²
V(Zn) = V(Y)/(nσ²) = 1
7–21. E(X̄) = E((1/n)Σ_{i=1}^{n} Xi) = (1/n)Σ_{i=1}^{n} E(Xi) = nµ/n = µ
V(X̄) = V((1/n)Σ_{i=1}^{n} Xi) = (1/n²)Σ_{i=1}^{n} V(Xi) = nσ²/n² = σ²/n
Y ∼ N(0.05, 0.0025)
P(Y < 0) = Φ((0 − 0.05)/0.05) = Φ(−1) = 1 − Φ(1) = 0.15866.
Then
P(5.7 < Y < 6.3) = Φ((6.3 − 6.0)/0.3464) − Φ((5.7 − 6.0)/0.3464) = Φ(0.866) − Φ(−0.866) = 0.6156.
With independence,
Y = Σ_{i=1}^{50} Xi,  E(Y) = 0,  V(Y) = 50/12
P(Y > 5) = 1 − Φ((5 − 0)/√(50/12)) = 1 − Φ(2.45) = 0.00714.
Y = Σ_{i=1}^{100} Xi.
Assuming that the Xi's are independent, we use the central limit theorem to approximate the distribution of Y as N(100, 0.01). Then
P(Y > 102) = P(Z > (102 − 100)/0.1) = 1 − Φ(20) ≈ 0.
If µ = 12, then
P (11.8 < X < 12.2) = Φ(1.33) − Φ(−1.33) = 0.8164,
or 18.4% defective. This is the optimal value of the mean.
9
n
X
7–28. Y = E(Xi ), where Xi is the travel time between pair i.
i=1
n
X
E(Y ) = E(Xi ) = 30
i=1
n
X
V (Y ) = V (Xi )
i=1
= (0.4)2 + (0.6)2 + (0.3)2 + (1.2)2 + (0.9)2 + (0.4)2 + (0.4)2 = 3.18.
Thus,
µ ¶
32 − 30
P (Y ≤ 32) = Φ √ = Φ(1.12) = 0.86864.
3.18
7–29. p = 0.08, n = 200, np = 16, √(npq) = 3.84.
(a) P(X ≤ 16) = Φ((16.5 − 16)/3.84) = Φ(0.13) = 0.55172.
(b) Φ((15.5 − 16)/3.84) − Φ((14.5 − 16)/3.84) = Φ(−0.13) − Φ(−0.39) = 0.1.
(c) Φ((20.5 − 16)/3.84) − Φ((11.5 − 16)/3.84) = Φ(1.17) − Φ(−1.17) = 0.758.
(d) Φ((14.5 − 16)/3.84) − Φ((13.5 − 16)/3.84) = Φ(−0.39) − Φ(−0.65) = 0.09.
⇒ 0.167√n = 1.96 ⇒ n ≈ 139
7–31. Z1 = √(−2 ln(u1))·cos(2πu2), Z2 = √(−2 ln(u1))·sin(2πu2). Note that the sine and cosine calculations are carried out in radians.
u1 u2 z1 z2
0.15011 0.46573 −1.902 0.416
0.48360 0.93093 1.093 −0.507
0.39975 0.06907 1.229 0.569
u1 u2 z1 z2
0.02011 0.08539 2.402 1.429
0.97265 0.61680 −0.175 −0.158
0.16656 0.42751 −1.700 0.833
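The tabulated pairs follow from the Box–Muller formulas above; a short check of the first pair:

```python
# Box-Muller transform (Problem 7-31): two uniforms -> two standard normals.
from math import log, sqrt, cos, sin, pi

def box_muller(u1, u2):
    r = sqrt(-2 * log(u1))
    return r * cos(2 * pi * u2), r * sin(2 * pi * u2)

z1, z2 = box_muller(0.15011, 0.46573)
assert abs(z1 - (-1.902)) < 1e-3  # matches the first tabulated z1
assert abs(z2 - 0.416) < 1e-3     # matches the first tabulated z2
```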
x1 3x1
10 + 1.732(2.402) = 14.161 42.483
10 + 1.732(1.429) = 12.475 37.425
10 + 1.732(−0.175) = 9.697 29.091
10 + 1.732(−0.158) = 9.726 29.179
10 + 1.732(−1.700) = 7.056 21.167
10 + 1.732(0.833) = 11.443 34.328
x2 −2x2
20(0.81647) = 16.329 −32.658
20(0.30995) = 6.199 −12.398
20(0.76393) = 15.279 −30.558
20(0.07856) = 1.571 −3.142
20(0.06121) = 1.224 −2.448
20(0.27756) = 5.551 −11.102
y1 = 9.825
y2 = 25.027
y3 = −1.467
y4 = 26.039
y5 = 18.719
y6 = 23.226
(−1.902)2 = 3.618
(0.416)2 = 0.173
(1.093)2 = 1.195
(−0.507)2 = 0.257
(1.229)2 = 1.510
yi = µY + σzi , i = 1, 2, . . . , n.
xi = eyi , i = 1, 2, . . . , n.
√
Let y1 = x1 /x22 .
f(x) = φ((x − µ)/σ)/(σΦ((r − µ)/σ)) if x ≤ r
     = 0 if x > r
For our problem, r = 2600, µ = 2500, and σ = 50. Now, after a bit of calculus,
E(X) = ∫_{−∞}^{∞} x f(x) dx = µ − [σ/(Φ((r−µ)/σ)·√(2π))]·exp[−(r − µ)²/(2σ²)]
     = 2500 − [50/(0.9772·√(2π))]·exp[−(2600 − 2500)²/(2(50)²)] = 2497.24
7–37. E(X) = e62.5 , V (X) = e125 (e25 − 1), median(X) = e50 , mode(X) = e25
7–38. W is lognormal with parameters 17.06 and 7.0692, i.e., ln(W) ∼ N(17.06, 7.0692).
Assuming that the interval [ln(L), ln(R)] is symmetric about 17.06, we obtain ln(L) = 17.06 − c and ln(R) = 17.06 + c, so that
Φ(c/2.6588) − Φ(−c/2.6588) = 0.90.
This means that c/2.6588 = 1.645, or c = 4.374.
For a selected value of k, the quantity in brackets assumes a value, say c; thus,
((x1 − µ1)/σ1)² − 2ρ(x1 − µ1)(x2 − µ2)/(σ1σ2) + ((x2 − µ2)/σ2)² − c = 0,
which is a quadratic in x1 − µ1 and x2 − µ2. If we write the general second-degree equation as Ax² + Bxy + Cy² + Dx + Ey + F = 0, we can determine the nature of the curve from the second-order terms. In particular, if B² − 4AC < 0, the curve is an ellipse. In any case,
B² − 4AC = (2ρ/(σ1σ2))² − 4/(σ1²σ2²) = 4(ρ² − 1)/(σ1²σ2²) < 0,
the last inequality a result of the fact that ρ² < 1 (for ρ ≠ 0). Thus, we have an ellipse.
(b) Let σ1² = σ2² = σ² and ρ = 0. Then the equation of the curve becomes
((x1 − µ1)/σ)² + ((x2 − µ2)/σ)² − c = 0,
which is a circle with center (µ1, µ2) and radius σ√c.
7–44. F(r) = P(R ≤ r) = P(√(X1² + X2²) ≤ r) = P(X1² + X2² ≤ r²)
= ∫∫_A (1/(2πσ²)) exp[−(t1² + t2²)/(2σ²)] dt1 dt2,
where A = {(t1, t2): t1² + t2² ≤ r²}.
Thus,
7–45. Using the fact that Σ_{i=1}^{n} Xi² has a χ²_n distribution, we obtain
f(r) = r^{n−1}e^{−r²/2}/(2^{(n−2)/2}Γ(n/2));  r ≥ 0
So we have
f(y1, y2) = [1/(2πσ1σ2√(1−ρ²))]·|y2|·exp{−[ (y1y2/σ1)² − 2ρ(y1y2/σ1)(y2/σ2) + (y2/σ2)² ]/(2(1−ρ²))}
So the marginal is
fY1(y1) = ∫_{−∞}^{∞} f(y1, y2) dy2 = [√(1−ρ²)/(πσ1σ2)]·[(y1/σ1 − ρ/σ2)² + (1−ρ²)/σ2²]^{−1};  −∞ < y1 < ∞
FY(y) = P(Y ≤ y) = P(Z² ≤ y) = P(−√y ≤ Z ≤ √y) = 2∫₀^{√y} (1/√(2π)) e^{−z²/2} dz
Take z = √u so that dz = (2√u)^{−1} du. Then
FY(y) = ∫₀^y (1/√(2π)) u^{(1/2)−1}e^{−u/2} du
where
A = {(x1, x2, ..., xn): Σ_{i=1}^{n} xi² ≤ y}.
x1 = y 1/2 cos(θ1 )
x2 = y 1/2 sin(θ1 ) cos(θ2 )
x3 = y 1/2 sin(θ1 ) sin(θ2 ) cos(θ3 )
..
.
xn−1 = y 1/2 sin(θ1 ) sin(θ2 ) · · · sin(θn−2 ) cos(θn−1 )
xn = y 1/2 sin(θ1 ) sin(θ2 ) · · · sin(θn−2 ) sin(θn−1 )
[The Jacobian ∆n of this transformation is the n×n determinant of partial derivatives of (x1, ..., xn) with respect to (y, θ1, ..., θn−1); for example, its first row is (cos(θ1)/(2√y), −√y sin(θ1), 0, ..., 0). Expanding the determinant produces a factor y^{(n/2)−1}/2 together with powers of sin(θ1), ..., sin(θn−2).]
This transformation gives variables whose limits are much easier. In the region
covered by A, we have 0 ≤ θi ≤ π for i = 1, 2, . . . , n − 2, and 0 < θn−1 < 2π. Thus,
P(Σ_{i=1}^n Xi² ≤ y*)
= ∫ · · · ∫_A (1/(2π)^{n/2}) (1/2) y^{(n/2)−1} e^{−y/2} |Δn| dy dθn−1 · · · dθ1
= (1/(2(2π)^{n/2})) ∫_0^{y*} y^{(n/2)−1} e^{−y/2} dy ∫_0^{2π} dθn−1 ∫_0^π sin(θn−2) dθn−2 · · · ∫_0^π sin^{n−2}(θ1) dθ1
= K ∫_0^{y*} y^{(n/2)−1} e^{−y/2} dy ≡ F(y*)
Thus, f(y) = K y^{(n/2)−1} e^{−y/2}. To evaluate K, use ∫_0^∞ f(y) dy = 1. This finally gives
f(y) = (1/(2^{n/2} Γ(n/2))) y^{(n/2)−1} e^{−y/2}, y ≥ 0.
7–49. For x ≥ 0, F(x) = P(|Z| ≤ x) = Φ(x) − Φ(−x) = 2Φ(x) − 1, so that
f(x) = (2/√(2π)) e^{−x²/2}, x > 0
= 0, otherwise
Chapter 8
Mean = 34.767
Variance = 1.828
Standard Dev = 1.352
Skewness = 0.420
Kurtosis = 2.765
Minimum = 32.100
Maximum = 37.900
n = 64
Mean = 89.476
Variance = 17.287
Standard Dev = 4.158
Skewness = 0.251
Kurtosis = 1.988
Minimum = 82.600
Maximum = 98.000
n = 90
8–4.
Number of Defects Frequency Relative Freq
1 1 0.0067
2 14 0.0933
3 11 0.0733
4 21 0.1400
5 10 0.0667
6 18 0.1200
7 15 0.1000
8 14 0.0933
9 9 0.0600
10 15 0.1000
11 4 0.0267
12 4 0.0267
13 6 0.0400
14 5 0.0333
15 1 0.0067
16 1 0.0067
17 1 0.0067
150 1.0000
x̄ = 6.9334, s2 = 12.5056, R = 16, x̃ = 6.5, MO = 4. The data appear to follow a
Poisson distribution, though s2 seems to be somewhat greater than x̄.
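The summary statistics for this frequency table can be verified directly (Python sketch; the counts are those in the table above):

```python
# Recompute x-bar and s^2 for the defect-count frequency table.
freqs = {1: 1, 2: 14, 3: 11, 4: 21, 5: 10, 6: 18, 7: 15, 8: 14,
         9: 9, 10: 15, 11: 4, 12: 4, 13: 6, 14: 5, 15: 1, 16: 1, 17: 1}

n = sum(freqs.values())                       # total observations
total = sum(x * f for x, f in freqs.items())
xbar = total / n
ss = sum(x * x * f for x, f in freqs.items())
s2 = (ss - n * xbar ** 2) / (n - 1)           # sample variance
```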
8–6.
Class Interval Frequency Relative Freq
32 ≤ X < 33 6 0.094
33 ≤ X < 34 11 0.172
34 ≤ X < 35 22 0.344
35 ≤ X < 36 14 0.219
36 ≤ X < 37 6 0.094
37 ≤ X < 38 5 0.077
64 1.000
x̄ = 34.7672, s2 = 1.828, x̃ = (34.6 + 34.7)/2 = 34.65. The data appear to follow a
normal distribution.
8–7.
Class Interval Frequency Relative Freq
82 ≤ X < 84 6 0.067
84 ≤ X < 86 14 0.156
86 ≤ X < 88 18 0.200
88 ≤ X < 90 11 0.122
90 ≤ X < 92 14 0.156
92 ≤ X < 94 8 0.088
94 ≤ X < 96 12 0.133
96 ≤ X < 98 6 0.067
98 ≤ X < 100 1 0.011
x̄ = 89.4755, s² = 17.2870. The data appear to follow either a gamma or a Weibull distribution.
n = 19
8–10.(a,b)
83 4
84 3
85 3
86 7 7
87 7 5 8 6 9 4
88 5 6 3 2 3 5 3 6 7 49
89 8 2 0 9 8 6 3 8 3 7
90 8 3 1 9 4 1 4 6 4 3507
91 5 1 0 0 8 2 8 6 1 1620
92 7 3 7 6 7 2 2 2
93 3 2 4 3 0 7
94 7 2 2 4
95 6
96 1
97
98 8
99
100 3
(c) x̄ = 90.6425, s2 = 7.837, s = 2.799
(d) x̃ = median = 90.45. There are several modes, e.g., 91.0, 91.1, 92.7.
8–12. (a)
Frequency
32 5 6 9 8 1 7 6
33 1 6 6 8 4 681656 11
34 2 5 3 7 7 27697160167656173 22
35 6 1 0 4 1 320149857 14
36 2 8 8 4 6 8 6
37 9 8 1 6 3 5
(b) x̄ = 34.7672, s2 = 1.828
(c)
Frequency
32 1 5 6 7 8 9 6
33 1 1 4 5 6 666688 11
34 0 1 1 1 2 23355666667777779 22
35 0 0 1 1 1 234456789 14
36 2 4 6 8 8 8 6
37 1 3 6 8 9 5
(d) x̃ = 34.65
8–13. (a)
Frequency
82 6 9 2
83 0 1 6 7 4
84 0 1 1 1 2 569 8
85 0 1 1 1 4 4 6
86 1 1 1 4 4 44677 10
87 3 3 3 3 5 667 8
88 2 2 3 6 8 5
89 1 1 4 6 6 7 6
90 0 0 1 1 3 45666 10
91 1 2 4 7 4
92 1 4 4 3
93 1 1 2 27 5
94 1 1 1 33467 8
95 1 2 3 6 4
96 1 3 4 8 4
97 3 8 2
98 0 1
8–18. The descriptive measures developed in this chapter are for numerical data only.
The mode, however, does have some meaning. For these data, the mode is the
letter e.
8–19. (a)
Σ_{i=1}^n (Xi − X̄) = Σ_{i=1}^n Xi − nX̄ = Σ_{i=1}^n Xi − Σ_{i=1}^n Xi = 0
(b)
Σ_{i=1}^n (Xi − X̄)² = Σ_{i=1}^n (Xi² + X̄² − 2XiX̄)
= Σ_{i=1}^n Xi² + nX̄² − 2X̄ Σ_{i=1}^n Xi
= Σ_{i=1}^n Xi² + nX̄² − 2nX̄²
= Σ_{i=1}^n Xi² − nX̄²
8–25. a = x̄
8–27. There is no guarantee that LN is an integer. For example, if we want a 10% trimmed
mean with 23 observations, then we would have to trim 2.3 observations from each
end. Since we cannot do this, some other procedure must be used. A reasonable
alternative is to calculate the trimmed mean with two observations trimmed from
each end, then to repeat this procedure with three observations trimmed from each
end, and finally to interpolate between the two different values of the trimmed
mean.
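The interpolation procedure described here can be sketched as follows (Python; the data set is hypothetical, chosen only to give n = 23):

```python
def trimmed_mean(data, k):
    """Mean after trimming k observations from each end."""
    s = sorted(data)[k:len(data) - k]
    return sum(s) / len(s)

def interp_trimmed_mean(data, prop):
    """Trimmed mean when prop*n is not an integer: interpolate
    between trimming k and k+1 observations from each end."""
    g = prop * len(data)      # e.g. 0.10 * 23 = 2.3
    k = int(g)
    frac = g - k
    lo, hi = trimmed_mean(data, k), trimmed_mean(data, k + 1)
    return (1 - frac) * lo + frac * hi

data = [1] * 20 + [50, 60, 70]   # hypothetical sample, n = 23
tm = interp_trimmed_mean(data, 0.10)
```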
Chapter 9
9–1. Since
f(xi) = (1/(σ√(2π))) exp[−(xi − µ)²/(2σ²)],
we have
f(x1, x2, . . . , x5) = Π_{i=1}^5 f(xi) = Π_{i=1}^5 (1/(σ√(2π))) exp[−(xi − µ)²/(2σ²)]
= (1/(2πσ²))^{5/2} exp[−(1/(2σ²)) Σ_{i=1}^5 (xi − µ)²]
9–2. Since
f(xi) = λe^{−λxi},
we have
f(x1, x2, . . . , xn) = Π_{i=1}^n f(xi) = Π_{i=1}^n λe^{−λxi} = λⁿ exp(−λ Σ_{i=1}^n xi)
Of course,
pX1(x1) = Σ_{x2=0}^{1} pX1,X2(x1, x2) and pX2(x2) = Σ_{x1=0}^{1} pX1,X2(x1, x2)
So pX1(0) = M/N, pX1(1) = 1 − (M/N), pX2(0) = M/N, pX2(1) = 1 − (M/N).
√
9–7. Use estimated standard error S/ n.
9–11. N (0, 1)
9–14.
MX(t) = E(e^{tX}) = ∫_0^∞ e^{tx} (1/(2^{n/2} Γ(n/2))) x^{(n/2)−1} e^{−x/2} dx
= (1/(2^{n/2} Γ(n/2))) ∫_0^∞ x^{(n/2)−1} e^{−x[(1/2)−t]} dx
Then, evaluating the gamma integral, MX(t) = (1 − 2t)^{−n/2} for t < 1/2.
9–16. Let T = Z/√(χ²n/n) = Z√(n/χ²n). Now
E(T) = E(Z)E(√(n/χ²n)) = 0, because E(Z) = 0.
V(T) = E(T²) = E(Z²)E(n/χ²n) = E(n/χ²n), since E(Z²) = 1.
E(n/χ²n) = ∫_0^∞ (n/s) (1/(2^{n/2} Γ(n/2))) s^{(n/2)−1} e^{−s/2} ds
= (n/(2^{n/2} Γ(n/2))) ∫_0^∞ s^{(n/2)−2} e^{−s/2} ds
= (n/(2^{(n/2)−1} Γ(n/2))) ∫_0^∞ (2u)^{(n/2)−2} e^{−u} du
= nΓ((n/2) − 1)/(2Γ(n/2)), if n > 2
= nΓ((n/2) − 1)/(2((n/2) − 1)Γ((n/2) − 1)), if n > 2
= n/(n − 2), if n > 2
E(F²m,n) = (n/m)² E(X²) E(1/Y²).
E(1/Y²) = ∫_0^∞ (1/y²) (1/(2^{n/2} Γ(n/2))) y^{(n/2)−1} e^{−y/2} dy
= (1/(2^{n/2} Γ(n/2))) ∫_0^∞ y^{(n/2)−3} e^{−y/2} dy
= (1/(2^{n/2} Γ(n/2))) ∫_0^∞ 2(2u)^{(n/2)−3} e^{−u} du
= 1/((n − 2)(n − 4)), if n > 4
Thus,
V(Fm,n) = (n/m)² E(X²) E(1/Y²) − (n/(n − 2))²
= n²(2m + m²)/(m²(n − 2)(n − 4)) − n²/(n − 2)² = 2n²(m + n − 2)/(m(n − 2)²(n − 4))
9–18. X(1) is greater than t if and only if every observation is greater than t. Then P(X(1) > t) = [1 − F(t)]ⁿ, so FX(1)(t) = 1 − [1 − F(t)]ⁿ. Similarly, FX(n)(t) = [F(t)]ⁿ.
9–19.
F(t) = 0, t < 0;  1 − p, 0 ≤ t < 1;  1, t ≥ 1
Then
9–20.
fX(1)(t) = n[1 − Φ((t − µ)/σ)]^{n−1} (1/(σ√(2π))) exp[−(t − µ)²/(2σ²)]
fX(n)(t) = n[Φ((t − µ)/σ)]^{n−1} (1/(σ√(2π))) exp[−(t − µ)²/(2σ²)]
F (t) = 1 − e−λt
Treat F (X(n) ) as a random variable giving the fraction of objects in the population
having values of X ≤ X(n) .
This gives
E(Y) = ∫_0^1 n yⁿ dy = n/(n + 1).
Treat F (X(1) ) as a random variable giving the fraction of objects in the population
having values of X ≤ X(1) .
Let Y = F (X(1) ). Then dy = f (X(1) )dx(1) , and thus f (y) = n(1 − y)n−1 , 0 ≤ y ≤ 1.
This gives
E(Y) = n ∫_0^1 y(1 − y)^{n−1} dy = nβ(2, n) = nΓ(2)Γ(n)/Γ(n + 2) = n!1!/(n + 1)! = 1/(n + 1)
Chapter 10
10–1. Both estimators are unbiased. Now, V (X 1 ) = σ 2 /2n while V (X 2 ) = σ 2 /n. Since
V (X 1 ) < V (X 2 ), X 1 is a more efficient estimator than X 2 .
10–2. E(θ̂1 ) = µ, E(θ̂2 ) = (1/2)E(2X1 − X6 + X4 ) = (1/2)(2µ − µ + µ) = µ. Both
estimators are unbiased.
V (θ̂1 ) = σ 2 /7,
µ ¶2
1
V (θ̂2 ) = V (2X1 − X6 + X4 )
2
µ ¶ µ ¶
1 1
= [4V (X1 ) + V (X6 ) + V (X4 )] = 6σ 2 = 3σ 2 /2
4 4
θ̂1 has a smaller variance than θ̂2 .
10–3. Since θ̂1 is unbiased, M SE(θ̂1 ) = V (θ̂1 ) = 10.
M SE(θ̂2 ) = V (θ̂2 ) + (Bias)2 = 4 + (θ − θ/2)2 = 4 + θ2 /4.
√
If θ < 24 = 4.8990, θ̂2 is a better estimator of θ than θ̂1 , because it would have
smaller M SE.
10–4. M SE(θ̂1 ) = V (θ̂1 ) = 12, M SE(θ̂2 ) = V (θ̂2 ) = 10,
M SE(θ̂3 ) = E(θ̂3 − θ)2 = 6. θ̂3 is a better estimator because it has smaller M SE.
10–5. E(S 2 ) = (1/24)E(10S12 + 8S22 + 6S32 ) = (1/24)(10σ 2 + 8σ 2 + 6σ 2 )
= (1/24)24σ 2 = σ 2
10–6. Any linear estimator of µ is of the form θ̂ = Σ_{i=1}^n ai Xi, where the ai are constants. θ̂ is an unbiased estimator of µ only if E(θ̂) = µ, which implies that Σ_{i=1}^n ai = 1. Now V(θ̂) = Σ_{i=1}^n ai² σ². Thus we must choose the ai to minimize V(θ̂) subject to the constraint Σ ai = 1. Let λ be a Lagrange multiplier. Then
F(ai, λ) = Σ_{i=1}^n ai² σ² − λ(Σ_{i=1}^n ai − 1)
10–7. L(α) = Π_{i=1}^n α^{Xi} e^{−α}/Xi! = α^{ΣXi} e^{−nα} / Π_{i=1}^n Xi!
ln L(α) = (Σ_{i=1}^n Xi) ln α − nα − ln(Π_{i=1}^n Xi!)
d ln L(α)/dα = Σ_{i=1}^n Xi/α − n = 0
α̂ = Σ_{i=1}^n Xi/n = X̄
10–8. For the Poisson distribution, E(X) = α = µ01 . Also, M10 = X. Thus α̂ = X is the
moment estimator of α.
10–9. L(λ) = Π_{i=1}^n λe^{−λti} = λⁿ e^{−λ Σ ti}
ln L(λ) = n ln λ − λ Σ_{i=1}^n ti
d ln L(λ)/dλ = (n/λ) − Σ_{i=1}^n ti = 0
λ̂ = n / Σ_{i=1}^n ti = (t̄)^{−1}
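The MLE λ̂ = 1/t̄ of 10–9 is easy to confirm numerically (Python sketch; the failure times are hypothetical):

```python
from math import log

# MLE for the exponential rate: lambda-hat = n / sum(t_i) = 1 / t-bar.
times = [1.2, 0.7, 3.1, 2.2, 0.9, 1.4]   # hypothetical data
n = len(times)
lam_hat = n / sum(times)

def loglik(lam):
    """Exponential log-likelihood: n*ln(lam) - lam*sum(t_i)."""
    return n * log(lam) - lam * sum(times)
```

Perturbing λ̂ in either direction lowers the log-likelihood, which confirms it is the maximizer.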
10–11. If X is a gamma random variable, then E(X) = r/λ and V(X) = r/λ². Thus E(X²) = (r + r²)/λ². Now M1′ = X̄ and M2′ = (1/n) Σ_{i=1}^n Xi². Equating moments, we obtain
r/λ = X̄,   (r + r²)/λ² = (1/n) Σ_{i=1}^n Xi²
or
λ̂ = X̄ / [(1/n) Σ_{i=1}^n Xi² − X̄²]
r̂ = X̄² / [(1/n) Σ_{i=1}^n Xi² − X̄²]
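These moment estimators satisfy the defining identities r̂/λ̂ = x̄ and r̂/λ̂² = (1/n)Σxi² − x̄², which gives a quick sanity check (Python; the sample is hypothetical):

```python
# Method-of-moments estimators for the gamma parameters.
x = [2.1, 3.4, 1.8, 2.9, 4.0, 2.5, 3.1, 2.2]   # hypothetical data
n = len(x)
xbar = sum(x) / n
m2 = sum(v * v for v in x) / n                  # second raw moment
lam_hat = xbar / (m2 - xbar ** 2)
r_hat = xbar ** 2 / (m2 - xbar ** 2)
```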
10–13. L(p) = Π_{i=1}^n (1 − p)^{Xi − 1} p = pⁿ(1 − p)^{ΣXi − n}
ln L(p) = n ln p + (Σ_{i=1}^n Xi − n) ln(1 − p). From d ln L(p)/dp = 0, we obtain
(n/p̂) − (Σ_{i=1}^n Xi − n)/(1 − p̂) = 0
p̂ = n / Σ_{i=1}^n Xi = 1/X̄
Equating moments for the binomial,
np = X̄,   np − np² + n²p² = (1/N) Σ_{i=1}^N Xi²
n̂ = X̄² / [X̄ − (1/N) Σ_{i=1}^N (Xi − X̄)²],   p̂ = X̄/n̂
10–17. L(p) = Π_{i=1}^N C(n, Xi) p^{Xi}(1 − p)^{n−Xi} = [Π_{i=1}^N C(n, Xi)] p^{ΣXi} (1 − p)^{nN − ΣXi}
ln L(p) = Σ_{i=1}^N ln C(n, Xi) + (Σ_{i=1}^N Xi) ln p + (nN − Σ_{i=1}^N Xi) ln(1 − p)
d ln L(p)/dp = Σ_{i=1}^N Xi/p − (nN − Σ_{i=1}^N Xi)/(1 − p) = 0
p̂ = X̄/n
10–18. L = Π_{i=1}^n (β/δ)((Xi − γ)/δ)^{β−1} exp[−((Xi − γ)/δ)^β]
Thus,
P(|θ̂ − θ| ≥ ε) ≤ (1/ε²) E(θ̂ − θ)²
a* = (σ²/n2) / (σ²/n1 + σ²/n2) = n1/(n1 + n2)
10–21. L(γ) = Π_{i=1}^n (γ + 1)Xi^γ = (γ + 1)ⁿ Π_{i=1}^n Xi^γ
ln L(γ) = n ln(γ + 1) + γ Σ_{i=1}^n ln Xi
d ln L(γ)/dγ = n/(γ + 1) + Σ_{i=1}^n ln Xi = 0
γ̂ = −1 − n / Σ_{i=1}^n ln Xi
n
Y Pn
10–22. L(γ) = λe−λ(Xi −X` ) = λn e−λ( i=1 Xi −nX` )
i=1
à n !
X
`n L(λ) = n `n λ − λ Xi − nX`
i=1
, Ã n !
d `n L(λ) X
=n λ− Xi − nX` =0
dλ i=1
10–23. Assume Xℓ unknown, and we want to maximize n ln λ − λ Σ_{i=1}^n (Xi − Xℓ) with respect to Xℓ, subject to Xi ≥ Xℓ. Thus we want Σ_{i=1}^n (Xi − Xℓ) to be a minimum, subject to Xi ≥ Xℓ. Thus X̂ℓ = min(X1, X2, . . . , Xn) = X(1).
10–24. E(G) = E[K Σ_{i=1}^{n−1} (Xi+1 − Xi)²] = K Σ_{i=1}^{n−1} E(Xi+1 − Xi)²
= K Σ_{i=1}^{n−1} E(X²i+1 − 2XiXi+1 + Xi²)
= K Σ_{i=1}^{n−1} [E(X²i+1) − 2E(XiXi+1) + E(Xi²)]
= K Σ_{i=1}^{n−1} [(σ² + µ²) − 2µ² + (σ² + µ²)] = K[2(n − 1)σ²],
since independence gives E(XiXi+1) = µ². Choosing K = 1/[2(n − 1)],
G = (1/(2(n − 1))) Σ_{i=1}^{n−1} (Xi+1 − Xi)²
is an unbiased estimator of σ².
6
10–25. f(x1, x2, . . . , xn|µ) = (2πσ²)^{−n/2} exp[−(1/2) Σ(xi − µ)²/σ²],
f(µ) = (2πσ0²)^{−1/2} exp[−(1/2)(µ − µ0)²/σ0²]
f(µ|x1, x2, . . . , xn) = c^{1/2}(2π)^{−1/2} exp{ −(c/2) [µ − (1/c)(nx̄/σ² + µ0/σ0²)]² }
where c = n/σ² + 1/σ0²
10–26. f(x1, x2, . . . , xn|1/σ²) = (2πσ²)^{−n/2} exp[−(1/2) Σ(xi − µ)²/σ²]
f(1/σ²) = (1/Γ(m + 1)) (mσ0²)^{m+1} (1/σ²)^m e^{−mσ0²/σ²}
The posterior density for 1/σ² is gamma with parameters m + (n/2) + 1 and mσ0² + Σ(xi − µ)².
10–30. From Exercise 10–25, and using the fact that the Bayes estimator for µ under a squared-error loss function is µ̂ = (1/c)[nx̄/σ² + µ0/σ0²], we have
µ̂ = [25/40 + 1/8]^{−1} [25(4.85)/40 + 4/8] = 4.708
10–31. µ̂ = 1.05 − (0.1/2)(2π)^{−1/2} [ e^{−(1/2)[(1.20−1.05)/(0.1/2)]²} − e^{−(1/2)[(0.98−1.05)/(0.1/2)]²} ] / [ Φ((1.20 − 1.05)/(0.1/2)) − Φ((0.98 − 1.05)/(0.1/2)) ]
Since r is integer, tables of the Poisson distribution could be used to find L and U .
10–35. (a) f(x1|θ) = 2x/θ², f(θ) = 1, 0 < θ < 1, f(x1, θ) = 2x/θ², and
f(x1) = ∫_x^1 (2x/θ²) dθ = −2x/θ |_x^1 = 2 − 2x; 0 < x < 1
f(θ|x1) = f(x1, θ)/f(x1) = 2x/[θ²(2 − 2x)]
f(p|x1) = 24 p^{x+1}(1 − p)^{2−x} / [Γ(x + 2)Γ(3 − x)]
10–39. (a) x̄ − Zα/2 σ/√n ≤ µ ≤ x̄ + Zα/2 σ/√n
74.03533 ≤ µ ≤ 74.03666
(b) x̄ − Zα σ/√n ≤ µ
74.0356 ≤ µ
10–43. For the total width to be 8, the half-width must be 4, therefore n = (Zα/2 σ/E)2 =
[(1.96)25/4]2 = 150.06 ' 150 or 151.
10–51. 13
10–62. σ 2 ≤ 193.09
10–67. n = 4057
10–68. p ≤ 0.00348
10–69. n = (Zα/2 /E)2 p(1 − p) = (2.575/0.01)2 p(1 − p) = 66306.25p(1 − p). The most
conservative choice of p is p = 0.5, giving n = 16576.56 or n = 16577 homeowners.
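The arithmetic of 10–69 in a couple of lines (Python):

```python
from math import ceil

# Sample size for estimating a proportion: n = (z/E)^2 * p(1-p),
# with the conservative choice p = 0.5.
z = 2.575   # z for 99% confidence (alpha/2 = 0.005)
E = 0.01    # desired half-width
p = 0.5
n = ceil((z / E) ** 2 * p * (1 - p))
```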
10–74. Since X and S 2 are independent, we can construct confidence intervals for µ and σ 2
such that we are 90 percent confident that both intervals provide correct conclusions
by constructing a 100(0.90)1/2 percent confidence interval for each parameter. That
is, we need a 95 percent confidence interval on µ and σ 2 . Thus, 3.938 ≤ µ ≤ 4.057
and 0.0049 ≤ σ 2 ≤ 0.0157 provides the desired simultaneous confidence intervals.
10–75. Assume that all three variances are equal. A 95 percent simultaneous confidence
interval on µ1 − µ2 , µ1 − µ3 , and µ2 − µ3 will require that the individual intervals
use α/3 = 0.05/3 = 0.0167.
for 6 < µ ≤ 12. From the normal tables, the 90% interval estimate for µ is centered
at 8 and is from 8 − (1.795)(1.054) = 6.108 to 9.892. Since 6.108 < 9 < 9.892, we
have no evidence to reject H0 .
10–77. The posterior density for 1/σ² is gamma with parameters r + (n/2) and λ + Σ(xi − µ)². For r = 3, λ = 1, n = 10, µ = 5, Σ(xi − 5)² = 4.92, the Bayes estimate of 1/σ² is (1/σ²)* = (3 + 5)/(1 + 4.92) = 1.35. The interval (L, U) satisfies
0.90 = ∫_L^U (1/Γ(8)) (5.92)^8 (1/σ²)^7 e^{−5.92/σ²} d(1/σ²)
10–78. Z = ∫_{−∞}^{∞} (θ̂ − θ)² f(θ|x1, x2, . . . , xn) dθ
= θ̂² ∫_{−∞}^{∞} f(θ|x1, . . . , xn) dθ − 2θ̂ ∫_{−∞}^{∞} θ f(θ|x1, . . . , xn) dθ + ∫_{−∞}^{∞} θ² f(θ|x1, . . . , xn) dθ
Let
µθ = ∫_{−∞}^{∞} θ f(θ|x1, . . . , xn) dθ and τθ = ∫_{−∞}^{∞} θ² f(θ|x1, . . . , xn) dθ
Then
Z = θ̂² − 2θ̂µθ + τθ
dZ/dθ̂ = 2θ̂ − 2µθ = 0, so θ̂ = µθ.
Chapter 11
11–6. (a) H0: µ = 3500, H1: µ ≠ 3500. Z0 = (x̄ − µ0)/√(σ²/n) = (3250 − 3500)/√(1000/12) = −27.39
|Z0| = 27.39 > Z0.005 = 2.575, reject H0.
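A quick check of the statistic in 11–6(a) (Python; σ² = 1000 and n = 12 as read from the solution):

```python
from math import sqrt

# One-sample Z test: Z0 = (xbar - mu0) / sqrt(sigma^2 / n).
xbar, mu0, sigma2, n = 3250, 3500, 1000, 12
z0 = (xbar - mu0) / sqrt(sigma2 / n)
reject = abs(z0) > 2.575   # two-sided test at alpha = 0.01
```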
11–7. (a) H0: µ1 = µ2, H1: µ1 ≠ µ2. Z0 = (x̄1 − x̄2)/√(σ1²/n1 + σ2²/n2) = 1.349
11–9. H0: µ1 − µ2 = 0, H1: µ1 − µ2 ≠ 0. Z0 = (x̄1 − x̄2)/√(σ1²/n1 + σ2²/n2) = 2.656
11–10. H0: µ1 − µ2 = 0, H1: µ1 − µ2 > 0. Z0 = (x̄1 − x̄2)/√(σ1²/n1 + σ2²/n2) = −6.325
11–11. H0: µ1 − µ2 = 0, H1: µ1 − µ2 < 0. Z0 = (x̄1 − x̄2)/√(σ1²/n1 + σ2²/n2) = −7.25
11–16. (a) H0: µ = 7.5, H1: µ < 7.5. t0 = (6.997 − 7.5)/(1.279/√8) = −1.112
|t0| < t0.05,7 = 1.895, do not reject H0; the true scrap rate is not less than 7.5%.
(b) n = 5
(c) 0.95
11–17. n = 3
11–18. d = |δ|/σ = 20/10 = 2, n = 3
11–19. (a) H0: µ1 = µ2, H1: µ1 > µ2.
t0 = (x̄1 − x̄2)/(sp√(1/n1 + 1/n2)) = (25.617 − 21.7)/(0.799√(1/6 + 1/6)) = 8.49 > t0.01,10 = 2.7638, reject H0.
(b) H0: µ1 − µ2 = 5, H1: µ1 − µ2 > 5. t0 = (x̄1 − x̄2 − 5)/(sp√(1/n1 + 1/n2)) = 2.35, do not reject H0.
(c) Using sp = 0.799 as an estimate of σ, d = (µ1 − µ2)/(2σ) = 5/[2(0.799)] = 3.13; with n1 = n2 = 6 and α = 0.01, the OC curves give β ≈ 0, so power ≈ 1.
(d) OC curves give n = 5.
(b) H0: µ1 = µ2, H1: µ1 > µ2. t0 = (x̄1 − x̄2)/(sp√(1/n1 + 1/n2)) = (12.5 − 10.2)/(9.886√(1/8 + 1/9)) = 0.48, do not reject H0.
11–23. (a) H0: µ1 = µ2, H1: µ1 ≠ µ2. sp = √((1480 + 1425)/18) = 12.704
t0 = (20.0 − 15.8)/(12.704√(1/10 + 1/10)) = 0.74. Reject H0 if |t0| > t0.005,9 = 3.250; do not reject H0.
(b) d = 10/[2(12.7)] = 0.39, Power = 0.13, n* = 19
(c) n1 = n2 = 75
(c) n1 = n2 = 75
11–24. (a) H0: µ1 = µ2, H1: µ1 ≠ µ2. t0 = (x̄1 − x̄2)/(sp√(1/n1 + 1/n2)) = (20.0 − 21.5)/(1.40√(1/10 + 1/10)) = −2.40, reject H0.
(b) Use sp = 1.40 as an estimate of σ. Then d = |µ1 − µ2|/(2σ) = 2/[2(1.40)] = 0.7. If α = 0.05 and n1 = n2 = 10, the OC curves give β ≈ 0.5. For β ≈ 0.15, we must have n1 = n2 ≈ 30.
(c) F0 = s1²/s2² = 2.25/1.69 = 1.33, do not reject H0.
(d) λ = σ1/σ2 = 2, α = 0.05, n1 = n2 = 10, OC curves give β ≈ 0.50.
11–25. H0: µ1 = µ2, H1: µ1 ≠ µ2. t0 = (x̄1 − x̄2)/(sp√(1/n1 + 1/n2)) = (8.75 − 8.63)/(0.57√(1/12 + 1/18)) = 0.56, do not reject H0.
11–33. H0: µd = 0, H1: µd ≠ 0. t0 = (d̄ − 0)/(sd/√n) = (5.0 − 0)/(15.846/√10) = 0.998, do not reject H0.
11–34. Using µD = µA − µB , t0 = −1.91, do not reject H0 .
11–35. H0: µd = 0, H1: µd ≠ 0. Reject H0 if |t0| > t0.025,5 = 2.571.
t0 = (3 − 0)/(1.41/√6) = 5.21, so reject H0.
11–36. t0 = 2.39, reject H0 .
11–38. H0: p = 0.025, H1: p ≠ 0.025. Z0 = (18 − 200)/√(8000(0.025)(0.975)) = −13.03, reject H0.
11–39. The “best” test will maximize the probability that H0 is rejected, so we want to maximize
Z0 = (X̄1 − X̄2)/√(σ1²/n1 + σ2²/n2)
subject to n1 + n2 = N.
11–43. Let 2σ² = σ1²/n1 + σ2²/n2 be the specified sample variance. If we minimize c1n1 + c2n2 subject to the constraint σ1²/n1 + σ2²/n2 = 2σ², we obtain the solution
n1/n2 = √(σ1²c2/(σ2²c1)).
11–44. H0: µ1 = 2µ2, H1: µ1 > 2µ2. Z0 = (X̄1 − 2X̄2)/√(σ1²/n1 + 4σ2²/n2)
11–45. H0: σ² = σ0², H1: σ² ≠ σ0²
β = P(χ²1−α/2,n−1 ≤ (n − 1)S²/σ0² ≤ χ²α/2,n−1 | σ² = σ1² ≠ σ0²)
= P((σ0²/σ1²)χ²1−α/2,n−1 ≤ (n − 1)S²/σ1² ≤ (σ0²/σ1²)χ²α/2,n−1)
Since (σ22 /σ12 )(S12 /S22 ) follows an F -distribution, β may be evaluated by using tables
of F .
Three cells have expected values less than 5, so they are combined with other cells
to get:
Defects 0–1 2 3 4 5 6 7 8 9 10–12
Oi 17 34 56 70 70 58 42 25 15 13
Ei 16.48 34.15 56.66 70.50 70.50 70.18 58.22 41.40 25.76 18.79
χ20 = 1.8846, χ20.05,8 = 15.51, do not reject H0 , the data could follow a Poisson
distribution.
11–49. x Oi Ei (Oi − Ei )2 /Ei
0 967 1000 1.089
1 1008 1000 0.064
2 975 1000 0.625
3 1022 1000 0.484
4 1003 1000 0.009
5 989 1000 0.121
6 1001 1000 0.001
7 981 1000 0.361
8 1043 1000 1.849
9 1011 1000 0.121
χ20 = 4.724 < χ20.05,9 = 16.919. Therefore, do not reject H0 .
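The χ²0 value for 11–49 can be recomputed directly from the table (Python):

```python
# Chi-square goodness-of-fit statistic for the digit-frequency table.
observed = [967, 1008, 975, 1022, 1003, 989, 1001, 981, 1043, 1011]
expected = 1000   # 10000 digits over 10 equally likely categories
chi2 = sum((o - expected) ** 2 / expected for o in observed)
```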
11–50. (a) Assume that data given are the midpoints of the class intervals.
Class Interval Oi Ei (Oi − Ei )2 /Ei
X < 2.095 0 1.79∗
2.095 ≤ X < 2.105 16 6.65 6.77
2.105 ≤ X < 2.115 28 22.18 1.53
2.115 ≤ X < 2.125 41 56.39 4.20
2.125 ≤ X < 2.135 74 108.92 11.19
2.135 ≤ X < 2.145 149 159.60 0.70
2.145 ≤ X < 2.155 256 178.02 34.16
2.155 ≤ X < 2.165 137 150.81 1.26
2.165 ≤ X < 2.175 82 96.56 2.20
2.175 ≤ X < 2.185 40 47.59 1.21
2.185 ≤ X < 2.195 19 18.09 0.04
2.195 ≤ X < 2.205 11 5.12 3.31
2.205 ≤ X 0 1.28∗
∗
Group into next cell
χ20 = 66.57 > χ20.05,8 = 15.507, reject H0 .
IA A Total
L 216 245 461
170.08 290.92
χ20 = 34.909, reject H0 . Based on this data, physical activity is not independent of
socioeconomic status.
X3 X 3
(Oij − Eij )2
χ20 = = 2.88 + 1.61 + 0.16 + 2.23 + 0.79 + 13.17
i=1 j=1
Eij
+ 0 + 5.23 + 6 = 22.06
χ20.05,4 = 9.488
Chapter 12
(b)
2
1 2 3
2 -5.357
4.024
3 -0.857 -0.190
8.524 9.190
(d)
3
(b)
5
1 2 3
2 -423
53
3 -201 -15
275 460
4 67 252 30
543 728 505
6
1 2 3 4
2 -9.049
8.549
3 4.701 4.951
22.299 22.549
(e)
10
1 2
2 -17.022
2.222
3 -7.222 0.178
12.022 19.422
(c) c1 = y1· − 2y2· + y3· , SSc1 = 14.4
c2 = y1· − y3· , SSc2 = 246.53
Only c2 is significant at α = 0.05
(d) 0.88
(c)
But since −y1·y2· = y1·²/2 + y2·²/2 − (y1· + y2·)²/2, the last equation becomes
t0² = [y1·²/n + y2·²/n − (y1· + y2·)²/(2n)] / Sp² = SSTreatments/Sp²
Note that Sp² = Σ_{i=1}^2 Σ_{j=1}^n (yij − ȳi·)²/(2n − 2) = MSE. Therefore t0² = SSTreatments/MSE, and since the square of tu is F1,u (in general), we see that the two tests are equivalent.
12–13. V(Σ_{i=1}^a ci yi·) = Σ_{i=1}^a ci² V(yi·) = Σ_{i=1}^a ci² V(Σ_{j=1}^{ni} yij) = σ² Σ_{i=1}^a ni ci²
τ̂1 − τ̂2 = −1.40
(b) If τ̂3 = 0, then µ̂ = 18.40, τ̂1 = 2.40, τ̂2 = 3.80, and τ̂3 = 0. These estimators differ from those found in part (a). However, note that
τ̂1 − τ̂2 = 2.40 − 3.80 = −1.40,
which agrees with part (a), because contrasts in the τi are uniquely estimated.
Chapter 13
13–5. The results would be a mixed model. The test statistics would be:
Effect F0
Operator 0.241
Machine 2.46
Operator∗ Machine 0.84
13–8.
There does not appear to be a problem with constant variance across levels of either
factor.
13–11. Using only the significant factors and interactions, the resulting residuals are as
follows.
13–13.
13–16. (a)
(b)
(b)
(c) & (d) This design is not as efficient as possible. If we were to confound a differ-
ent interaction in each replicate this would provide some information on all
interactions.
13–22. Please refer to the original reference for an analysis of the data from this experiment.
(c)
A B C D = ABC
(1) − − − − 190
a + − − + 174
b − + − + 181
ab + + − − 183
c − − + + 177
ac + − + − 181
bc − + + − 188
abc + + + + 173
Sweetener (A) & Temperature (D)
A = −6.25 `AB = −0.25 influence taste.
B = 0.75 `AC = 0.75
C = −2.25 `AD = 0.75
D = −9.25
13–29. 2^{5−2} design with defining relations I = ABCD and I = ACE
Treatment
A B C D = ABC E = AC Combination Strength
− − − − + e 800
+ − − + − ade 1500
− + − + + bde 4000
+ + − − − abe 6200
− − + + − cde 1500
+ − + − + ace 1200
− + + − − bce 3006
+ + + + + abcde 6300
`A = 5894 `C = −494 `AB = 8094
`B = 14506 `D = 2094
`E = 94
13–30. 2^{6−3}_{III} design
Chapter 14
Analysis of Variance
Source DF SS MS F P
Regression 1 23.99 23.99 2.06 0.163
Residual Error 26 302.97 11.65
Total 27 326.96
(c) −0.00380 ≤ β1 ≤ 0.00068
(d) R2 = 7.3%
(e)
Based on the residual plots there appears to be a severe outlier. This point
should be investigated and if necessary, the point removed and the analysis
rerun.
Analysis of Variance
Source DF SS MS F P
Regression 1 224.43 224.43 57.45 0.000
Residual Error 13 50.79 3.91
Total 14 275.21
(c) R2 = 81.5%
(d) ŷ = 20.381, 19.374 ≤ E(y|x0 = 275) ≤ 21.388
14–5.
4
Analysis of Variance
Source DF SS MS F P
Regression 1 636.16 636.16 72.56 0.000
Residual Error 22 192.89 8.77
Total 23 829.05
(c) 76.7%
(d)
Analysis of Variance
Source DF SS MS F P
Regression 1 1755.8 1755.8 12.97 0.003
Residual Error 14 1895.0 135.4
Lack of Fit 8 1378.6 172.3 2.00 0.207
Pure Error 6 516.4 86.1
Total 15 3650.8
14–8.
Analysis of Variance
Source DF SS MS F P
Regression 1 280583 280583 74334.36 0.000
Residual Error 10 38 4
Total 11 280621
14–10.
Analysis of Variance
Source DF SS MS F P
Regression 1 148.31 148.31 11.47 0.003
Residual Error 18 232.83 12.94
(c) 38.9%
(d) 4.5661 ≤ β ≤ 19.1607
14–12. (a)
(b)
Analysis of Variance
Source DF SS MS F P
Regression 1 62660784 62660784 297.38 0.000
Residual Error 14 2949914 210708
Total 15 65610698
(c) (0.0015, 0.0019)
(d) 95.5%
(e)
14–15. x = weight, y = BP
V(β̂1) = E(β̂1²) − [E(β̂1)]² = σ²/Sxx, so
E(β̂1²) = β1² + σ²/Sxx
Therefore
E(SSR) = E(β̂1²)Sxx = σ² + β1²Sxx
E(MSR) = E(SSR/1) = σ² + β1²Sxx
14–17. E(β̂1) = E(Sxy/Sxx) = (1/Sxx)E(Sxy)
= (1/Sxx) E[Σ_{i=1}^n (xi1 − x̄1)yi]
= (1/Sxx) Σ_{i=1}^n (xi1 − x̄1)E(yi)
= (1/Sxx) Σ_{i=1}^n (xi1 − x̄1)(β0 + β1xi1 + β2xi2)
= β1 + β2 Σ_{i=1}^n xi2(xi1 − x̄1)/Sxx
In general, β̂1 is a biased estimator of β1.
14–19. L = Σ_{i=1}^n wi(yi − β0 − β1xi)²
∂L/∂β0 = −2 Σ_{i=1}^n wi(yi − β̂0 − β̂1xi) = 0
∂L/∂β1 = −2 Σ_{i=1}^n wixi(yi − β̂0 − β̂1xi) = 0
Simplification of these two equations gives the normal equations for weighted least squares.
14–20. If y = (β0 + β1 x + ²)−1 , then 1/y = β0 + β1 x + ² is a straight-line regression model.
The scatter diagram of y ∗ = 1/y versus x is linear.
x 10 15 18 12 9 8 11 6
y∗ 5.88 7.69 11.11 6.67 5.00 4.76 5.56 4.17
14–21. The no-intercept model would be the form y = β1 x + ². This model is likely not a
good one for this problem, because there is no data near the point x = 0, y = 0,
and it is probably unwise to extrapolate the linear relationship back to the origin.
The intercept is often just a parameter that improves the fit of the model to the
data in the region where the data were collected.
14–22. n = 14, Σxi = 65.262, Σxi² = 385.194, x̄ = 4.662
Σyi = 208, Σyi² = 3490, ȳ = 14.857, Σxiyi = 1148.08
Sxx = 80.989, Sxy = 178.473, Syy = 399.714
β̂1 = Sxy/Sxx = 2.204
β̂0 = ȳ − β̂1x̄ = 4.582
ŷ = 4.582 + 2.204x
r = Sxy/(SxxSyy)^{1/2} = 0.9919, R² = 0.9839
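The hand computation in 14–22 from the summary sums (Python sketch):

```python
from math import sqrt

# Least-squares fit from summary sums only.
n = 14
Sx, Sxx_raw, Sy, Syy_raw, Sxy_raw = 65.262, 385.194, 208.0, 3490.0, 1148.08
xbar, ybar = Sx / n, Sy / n
Sxx = Sxx_raw - Sx ** 2 / n          # corrected sum of squares of x
Syy = Syy_raw - Sy ** 2 / n          # corrected sum of squares of y
Sxy = Sxy_raw - Sx * Sy / n          # corrected cross-product sum
b1 = Sxy / Sxx
b0 = ybar - b1 * xbar
r = Sxy / sqrt(Sxx * Syy)
```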
Chapter 15
Analysis of Variance
Source DF SS MS F P
Regression 2 112.263 56.131 15.19 0.000
Residual Error 17 62.824 3.696
Total 19 175.086
(c)
(d) The M SE has improved with the x1 x4 model, but the R2 and adjusted R2
have decreased.
Analysis of Variance
Source DF SS MS F P
Regression 3 153.305 51.102 37.54 0.000
Residual Error 16 21.782 1.361
Total 19 175.086
(c)
Analysis of Variance
Source DF SS MS F P
Regression 3 257.094 85.698 29.44 0.000
Residual Error 24 69.870 2.911
Total 27 326.964
Analysis of Variance
Source DF SS MS F P
Regression 2 856.24 428.12 53.32 0.000
Residual Error 22 176.66 8.03
Total 24 1032.90
(c)
Analysis of Variance
Source DF SS MS F P
Regression 4 4957.2 1239.3 5.11 0.030
Residual Error 7 1699.0 242.7
Total 11 6656.3
(c) H0 : β3 = 0, F0 = 0.361 (not significant)
H0 : β4 = 0, F0 = 0.0004 (not significant)
(d)
Analysis of Variance
Source DF SS MS F P
Regression 2 70284 35142 17.20 0.006
Residual Error 5 10213 2043
Total 7 80497
(c) See the t-test results in part (b)
(d)
Analysis of Variance
Source DF SS MS F P
Regression 2 39.274 19.637 39.89 0.000
Residual Error 7 3.446 0.492
Total 9 42.720
(c) see part (b)
(d)
Analysis of Variance
Source DF SS MS F P
Regression 2 5740.6 2870.3 1044.99 0.000
Residual Error 9 24.7 2.7
Total 11 5765.3
(c) see part (b)
15–18. The model is as in Exercise 15–16, except now X ∗ is unknown and must be
estimated. This is a nonlinear regression problem. It could be solved by using
one-dimensional or line search methods, which could be used to obtain the trial
values of x∗ .
15–22. (a) All possible regressions from Minitab displaying the best two models for each
combination of variables.
x x x x x x x x x
Vars R-Sq R-Sq(adj) C-p S 1 2 3 4 5 6 7 8 9
Step 1 2 3
Constant 21.788 14.713 -1.808
x8 -0.00703 -0.00681 -0.00482
T-Value -5.58 -7.05 -3.77
P-Value 0.000 0.000 0.001
x2 0.00311 0.00360
T-Value 4.40 5.18
P-Value 0.000 0.000
x7 0.194
T-Value 2.20
P-Value 0.038
x7 0.194 0.217
T-Value 2.20 2.45
P-Value 0.038 0.023
x9 -0.0016
T-Value -1.31
P-Value 0.202
x1 0.0008 0.0008
T-Value 0.40 0.42
P-Value 0.690 0.681
x5 0.000
T-Value 0.00
P-Value 1.000
15–23. (a) All possible regressions from Minitab displaying the best two models for each
combination of variables.
24 cases used; 1 case contains missing values.
x x
x x x x x x x x x 1 1
Vars R-Sq R-Sq(adj) C-p S 1 2 3 4 5 6 7 8 9 0 1
x1 -0.0482
T-Value -9.66
P-Value 0.000
S 2.97
R-Sq 80.93
R-Sq(adj) 80.06
C-p 5.6
The stepwise procedure found x1 significant.
x6 1.05 1.02
T-Value 1.44 1.43
P-Value 0.164 0.169
x3 0.070
T-Value 1.28
P-Value 0.217
Step 1 2 3 4 5 6 7 8 9
Constant -17.6442 -18.5202 -3.7497 -0.9652 -0.6957 -4.4555 -1.9112 -8.9148 0.3409
x4 2.4 2.4
T-Value 0.87 0.90
P-Value 0.402 0.383
x9 -0.02
T-Value -0.05
P-Value 0.959
x10 -0.0119 -0.0121 -0.0126 -0.0121 -0.0115 -0.0133 0.0113 -0.0142 -0.0099
T-Value -1.82 -2.09 -2.21 -2.15 -2.01 -2.65 -2.21 -2.90 -5.00
P-Value 0.093 0.057 0.044 0.049 0.061 0.017 0.040 0.009 0.000
15–24. (a) All possible regressions from Minitab displaying the best two models for each
combination of variables.
x x x x
Vars R-Sq R-Sq(adj) C-p S 1 2 3 4
x2 0.416 0.662
T-Value 2.24 14.44
P-Value 0.052 0.000
Step 1 2 3
Constant 117.57 103.10 71.65
x1 1.44 1.45
T-Value 10.40 12.41
P-Value 0.000 0.000
x2 0.42
T-Value 2.24
P-Value 0.052
x3 0.10
T-Value 0.14
P-Value 0.896
x4 -0.14 -0.24
T-Value -0.20 -1.37
P-Value 0.844 0.205
15–25. V IF1 = 38.5, V IF2 = 254.4, V IF3 = 46.9, V IF4 = 282.5. The variance inflation
factors indicate a problem with multicollinearity.
Chapter 16
16–1. H0: µ̃ = 7.0, H1: µ̃ ≠ 7.0, n = 10, α = 0.05
R+ = 8, R− = 2 ⇒ R = 2
16–2. H0: µ̃ = 8.5, H1: µ̃ ≠ 8.5, α = 0.05
R+ = 8, R− = 11, R = min(8, 11) = 8, R*0.05 = 5
Since R > R*0.05, do not reject H0
16–4. n = 10, σ = 1, H0: µ = 0, H1: µ > 0
(a) α = 0.025, R*0.05 = 1
(b) p = P(X > 0) = P(Z > −1) = 1 − Φ(−1) = 0.8413
β = 1 − Σ_{x=0}^{1} C(10, x)(0.1587)^x(0.8413)^{10−x} = 0.487
16–5. H0: µ̃d = 0, H1: µ̃d ≠ 0, α = 0.05
Critical region: R < R*0.05 = 1
R− = 2, R+ = 6 ⇒ R = 2
Since R is not less than R*, do not reject H0
16–8. H0 : µ = 7 R+ = 50.5
H1 : µ 6= 7 R− = 4.5
R = min(50.5, 4.5) = 4.5
R0.05 = 8 reject H0
16–9. H0: µ = 8.5, H1: µ ≠ 8.5, n = 20, α = 0.05
Critical region: R ≤ R*0.05 = 52
R+ = 19 + 10.5 + · · · + 1 = 88.5
R− = 12 + 20 + · · · + 3.5 = 121.5
R = 88.5 > R*0.05, so do not reject H0; conclude the titanium content is 8.5%.
16–10. H0: µ = 8.5, H1: µ ≠ 8.5
µR = 20(21)/4 = 105, σR² = 20(21)(41)/24 = 717.5
Z0 = (85 − 105)/√717.5 = −0.747
Do not reject H0.
16–13. H0 : µ1 = µ2 n1 = 8 n2 = 9
H 1 : µ1 > µ 2
α = 0.05
R2 = 75, R1 = 78
Since R2 is not less than or equal to 51, do not reject H0 . Conclude the two
circuits are equivalent.
16–14. n1 = n2 = 6
R1 = 40, R2 = 6(13) − 40 = 38, R*0.05 = 26
Do not reject H0
α = 0.05
77 − 105
Z0 = √ = −2.117; |Z0 | = 2.117 > 1.96
175
Reject H0 ; conclude unit 2 is superior.
Chapter 17
There is one observation beyond the upper control limit. Removal of this point
results in the following control charts:
(b) σ̂ = R̄/d2 = 5.74/2.326 = 2.468, PCR = (USL − LSL)/(6σ̂) = 20/[6(2.468)] = 1.35
PCRk = min[(45 − 34.09)/(3(2.468)), (34.09 − 25)/(3(2.468))] = min[1.474, 1.2277] = 1.2277
(c) 0.205%
17–2. P(X̄ < µ + 3σ/√n | µX̄ = µ + 1.5σ)
= P(Z < [µ + 3σ/√n − (µ + 1.5σ)]/(σ/√n))
= P(Z < 3 − 1.5√n)
= probability of failing to detect the shift on the first sample following the shift.
[P(Z < 3 − 1.5√n)]³ = probability of failing to detect the shift for 3 consecutive samples following the shift.
17–4. X̄ = 362.75/25 = 14.51, R̄ = 8.60/25 = 0.344
(a) X̄ Chart: UCL = 14.706, CL = 14.51, LCL = 14.314
R Chart: UCL = 0.719, CL = 0.344, LCL = 0
17–5. D/2
214.25 133
17–6. (a) X = = 10.7125 R= = 6.65
20 20
X Chart: U CL = 10.7125 + 0.729(6.65) = 15.56
LCL = 5.86
R Chart: U CL = 2.282(6.65) = 15.175, LCL = 0; process in control
(b)
(b)
17–9.
17–11. For the detection probability to equal 0.5, the magnitude of the shift must bring the fraction nonconforming exactly to the upper control limit. That is, δ = k√(p(1 − p)/n), where δ is the magnitude of the shift. Solving for n gives n = (k/δ)² p(1 − p). For example, if k = 3, p = 0.01 (the in-control fraction nonconforming), and δ = 0.04, then n = (3/0.04)²(0.01)(0.99) ≈ 56.
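The sample-size formula of 17–11 in code (Python):

```python
from math import ceil

# n = (k/delta)^2 * p * (1 - p): sample size so that a shift of size
# delta in the fraction nonconforming is detected with probability 0.5.
k, p, delta = 3, 0.01, 0.04
n = ceil((k / delta) ** 2 * p * (1 - p))
```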
17–13. Center the process at µ = 100. The probability that a shift to µ = 105 will be
detected on the first sample following the shift is about 0.15. A p-chart with n = 7
would perform about as well.
17–14.
17–15.
The process is out of control. Removing two out-of-control points and revising the
limits results in:
17–16.
17–19.
Since the sample sizes are equal, the c and u charts are equivalent.
17–20.
17–21.
17–22.
17–23.
17–24. (a) R(t) = ∫_t^∞ f(x) dx = 1 if t < α; (β − t)/(β − α) if α ≤ t ≤ β; 0 if t > β
(b) ∫_0^∞ R(t) dt = (α + β)/2
(c) h(t) = f(t)/R(t) = 1/(β − t), α ≤ t ≤ β
(d) H(t) = ∫_0^t h(t) dt = −ln[(β − t)/(β − α)]
e^{−H(t)} = (β − t)/(β − α) = R(t).
17–26. λ1 = λ2 = λ3 = λ4 = λ5 = 0.002
(a) R(1000) = Σ_{k=2}^{5} C(5, k)(0.367)^k(0.633)^{5−k} = 0.6056
17–27. R(1000) = Σ_{k=0}^{3} e^{−1}(1)^k/k! = 0.98104
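A check of 17–27 (Python): the system survives if a Poisson(1) count of failures is at most 3.

```python
from math import exp, factorial

# Poisson survival probability: P(N <= 3) with mean 1.
R = sum(exp(-1.0) * 1.0 ** k / factorial(k) for k in range(4))
```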
Chapter 18
(b)
p0 = 0.7p0 + 0.1p1
1 = p0 + p1
18–3.
· ¸
p 1−p
P =
1−p p
· ¸
∞ 1/2 1/2
P =
1/2 1/2
18–4. (a)
P = [ 1 − 2λΔt    2λΔt                  0
      µΔt         1 − (µ + (3/2)λ)Δt    (3/2)λΔt
      0           µΔt                   1 − µΔt ]
(b)
−2λp0 + µp1 = 0
2λp0 − (µ + (3/2)λ)p1 + µp2 = 0
(3/2)λp1 − µp2 = 0
p0 + p1 + p2 = 1
Solving yields
p0 = µ²/(µ² + 2λµ + 3λ²)
p1 = 2λµ/(µ² + 2λµ + 3λ²)
p2 = 3λ²/(µ² + 2λµ + 3λ²)
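The solution of 18–4 can be verified against the balance equations for any sample rates (Python; λ = 1, µ = 2 are arbitrary test values):

```python
# Steady-state probabilities from the closed-form solution.
lam, mu = 1.0, 2.0
den = mu ** 2 + 2 * lam * mu + 3 * lam ** 2
p0 = mu ** 2 / den
p1 = 2 * lam * mu / den
p2 = 3 * lam ** 2 / den

# Residuals of the balance equations from part (b); all should be 0.
r1 = -2 * lam * p0 + mu * p1
r2 = 2 * lam * p0 - (mu + 1.5 * lam) * p1 + mu * p2
r3 = 1.5 * lam * p1 - mu * p2
```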
18–5. (a) pii = 1 implies that State i is an absorbing state, so States 0 and 3 are absorbing.
(b) p0 = 1. The system never leaves State 0.
2 1 1
(c) b10 = · 1 + b10 + b20
3 6 6
2 1
b20 = b10 + b20
3 6
20 16
b10 = , b20 =
21 21
1 1
b13 = b13 + b23
6 6
2 1 1
b23 = b13 + b23 + · 1
3 6 6
1 5
b13 = , b23 =
21 21
1 20 1 16 18
p0 = · + · =
2 21 2 21 21
1 1 20 1 16 1 39
(d) p0 = ·1+ · + · + ·1=
4 4 21 4 21 4 42
18–6. (a)
1 0 0 0 ··· 0 0 0
q 0 p 0 ··· 0 0 0
0 q 0 p ··· 0 0 0
P = ..
.
0 0 0 0 ··· q 0 p
0 0 0 0 ··· 0 0 1
(b)
1 0 0 0 0
0.7 0 0.3 0 0
P =
0 0.7 0 0.3 0
0 0 0.7 0 0.3
0 0 0 0 1
b10 = 0.953
b20 = 0.845
b30 = 0.591
b14 = 0.0465
b24 = 0.155
b34 = 0.408
18–7. (a)
P = [ 0      p      0      1 − p
      1 − p  0      p      0
      0      1 − p  0      p
      p      0      1 − p  0 ]
A = [1 0 0 0]
(b,c) The transition matrix P is said to be doubly stochastic since each row and each
column add to one. For such matrices, it is easy to show that the steady-state
probabilities are all equal. Thus, for p = q = 1/2 or p = 4/5, q = 1/5, we
have p1 = p2 = p3 = p4 = 1/4.
18–10. λ = 15, µ = 27, N = 3, ρ = 15/27 = 5/9.
(a) L = (5/9)[1 − 4(5/9)³ + 3(5/9)⁴] / {[1 − (5/9)][1 − (5/9)⁴]} ≈ 0.83
(b) P(N = 3) = (5/9)³(4/9)/[1 − (5/9)⁴] ≈ 0.084
(c) Since both the number in the system and the number in the queue are the same when the system is empty.
18–11. The general steady-state equations for the M/M/s queueing system are
ρ = λ/(sµ)
p0 = { [Σ_{n=0}^{s−1} (sρ)ⁿ/n!] + (sρ)^s/[s!(1 − ρ)] }^{−1}
L = sρ + (sρ)^{s+1} p0 / [s(s!)(1 − ρ)²]
Lq = (sρ)^{s+1} p0 / [s(s!)(1 − ρ)²]
W = L/λ
Wq = W − 1/µ
Then Lq = 0.3725.
1 Lq
And finally, W = Wq + = + 40 = 48.96 min
µ 0.0416
(c) ρ = 0.832 for s = 2.
Then Lq = 3.742.
Lq
And finally, W = + 40 = 129.96 min.
0.0416
s = 1 ⇒ p0 = 0.4.
s = 2 ⇒ p0 = 0.5384
(0.6)1
and P (wait) = 1 − p0 − p1 = 1 − 0.5384 − (0.5384) = 0.14.
1!
Try s = 5 ⇒ ps = 0.065
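The M/M/s formulas of 18–11 as a function; for s = 1 they must reduce to the familiar M/M/1 results, which makes a convenient check (Python sketch):

```python
from math import factorial

def mms_measures(lam, mu, s):
    """Steady-state measures of an M/M/s queue (requires lam < s*mu)."""
    rho = lam / (s * mu)
    p0 = 1.0 / (sum((s * rho) ** k / factorial(k) for k in range(s))
                + (s * rho) ** s / (factorial(s) * (1 - rho)))
    Lq = (s * rho) ** (s + 1) * p0 / (s * factorial(s) * (1 - rho) ** 2)
    L = s * rho + Lq
    W = L / lam
    Wq = W - 1 / mu
    return p0, L, Lq, W, Wq
```

With s = 1, λ = 1, µ = 2 (so ρ = 1/2), these return p0 = 1 − ρ, L = ρ/(1 − ρ), and W = 1/(µ − λ), as they should.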
Chapter 19
19–1. Let n denote the number of coin flips. Then the number of heads observed is
X ∼ Bin(n, 0.5). Therefore, we can expect to see about n/2 heads over the long
term.
19–2. If π̂n denotes the estimator for π after n darts have been thrown, then it is easy
to see that π̂n ∼ (4/n)Bin(n, π/4). Then E(π̂n ) = π, and we can expect to see the
estimator converge towards π as n becomes large.
19–4. (a) The exact answer is Φ(2) − Φ(0) = 0.4772. The n = 1000 result will tend to
be closer than the n = 10 result.
(b) We can instead integrate over [0, 4], say, since the integral over [4, 10] is ≈ 0. This
strategy will prevent the “waste” of observations on the trivial tail region.
(c) The exact answer is 0.
19–5. (a)
customer arrival time begin service service time depart time wait
1 3 3.0 6.0 9.0 0.0
2 4 9.0 5.5 14.5 5.0
3 6 14.5 4.0 18.5 8.5
4 7 18.5 1.0 19.5 11.5
5 13 19.5 2.5 22.0 6.5
6 14 22.0 2.0 24.0 8.0
7 20 24.0 2.0 26.0 4.0
8 25 26.0 2.5 28.5 1.0
9 28 28.5 4.0 32.5 0.5
10 30 32.5 2.5 35.0 2.5
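The table can be generated mechanically (variable names ours): each customer begins service at the later of their arrival time and the previous customer's departure time, and the wait is the difference between the begin-service and arrival times.

```python
# Arrival and service times from the 19-5 table.
arrivals = [3.0, 4.0, 6.0, 7.0, 13.0, 14.0, 20.0, 25.0, 28.0, 30.0]
services = [6.0, 5.5, 4.0, 1.0, 2.5, 2.0, 2.0, 2.5, 4.0, 2.5]

depart = 0.0
waits = []
for a, s in zip(arrivals, services):
    begin = max(a, depart)      # wait until the server frees up
    waits.append(begin - a)     # time spent in the queue
    depart = begin + s          # this customer's departure time

print(waits)  # [0.0, 5.0, 8.5, 11.5, 6.5, 8.0, 4.0, 1.0, 0.5, 2.5]
```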
19–6. The following table gives a history for this (S, s) inventory system.
day initial stock customer order end stock reorder? lost orders
1 20 10 10 no 0
2 10 6 4 yes 0
3 20 11 9 no 0
4 9 3 6 yes 0
5 20 20 0 yes 0
6 20 6 14 no 0
7 14 8 6 no 0
i    0  1  2  3  4  5  6  7  8  9  10 11 12 13 14 15 16
Xi   0  1  6  15 12 13 2  11 8  9  14 7  4  5  10 3  0
Thus, U1 = 1/16, U2 = 6/16.
(b) Yes (see the table).
(c) Since the generator cycles, we have X0 = X16 = · · · = X144 = 0. Then
X145 = 1, X146 = 6, . . . , X150 = 2.
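The table is consistent with the congruential recurrence X_{i+1} = (5X_i + 1) mod 16 (our reading of the generator, inferred from consecutive entries such as U1 = 1/16, U2 = 6/16), which has full period 16:

```python
def lcg(x):
    """One step of the generator X_{i+1} = (5 X_i + 1) mod 16 (inferred from the table)."""
    return (5 * x + 1) % 16

xs = [0]
for _ in range(16):
    xs.append(lcg(xs[-1]))

print(xs)  # [0, 1, 6, 15, 12, 13, 2, 11, 8, 9, 14, 7, 4, 5, 10, 3, 0]
# Full period: all 16 residues appear before the seed 0 recurs, so the
# sequence repeats every 16 steps, e.g., X_145 = X_1 and X_150 = X_6.
```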
19–8. (a) Using the algorithm given in Example 19–8 of the text, we find that X1 =
1422014746 and X2 = 456328559. Since Ui = Xi/(2^31 − 1), we have U1 =
0.6622, U2 = 0.2125.
Thus, for 0 < U < 1/2, we set F(X) = 1/2 − X²/8 = U.
Solving, we get X = −√(4 − 8U).
For 1/2 < U < 1, we set F(X) = 1/2 + X²/8 = U.
Solving this time, we get X = √(8U − 4).
Recap:
X = −√(4 − 8U) for 0 < U < 1/2;  X = √(8U − 4) for 1/2 < U < 1.
(b) X = √(8(0.6) − 4) = 0.894.
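The two-branch inverse above can be coded directly (a sketch; the function name is ours):

```python
from math import sqrt

def inverse_cdf(u):
    """Inverse of the two-branch c.d.f. above, for 0 < u < 1."""
    if u < 0.5:
        return -sqrt(4 - 8 * u)   # from F(x) = 1/2 - x^2/8 = u
    return sqrt(8 * u - 4)        # from F(x) = 1/2 + x^2/8 = u

print(round(inverse_cdf(0.6), 3))  # 0.894
```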
19–12. (a)
x p(x) F (x) U
−2.5 0.35 0.35 [0,0.35)
1.0 0.25 0.60 [0.35,0.60)
10.5 0.40 1.0 [0.60,1.0)
(b) U = 0.86 yields X = 10.5.
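In code, the table lookup is a scan of the cumulative column (a sketch; the function name is ours):

```python
def discrete_inverse(u):
    """Map a uniform u in [0, 1) to X via the cumulative table of 19-12."""
    for x, F in [(-2.5, 0.35), (1.0, 0.60), (10.5, 1.0)]:
        if u < F:          # u falls in the interval ending at F
            return x

print(discrete_inverse(0.86))  # 10.5
```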
19–13. (a) F(X) = 1 − e^(−(X/α)^β) = U. Solving, X = α[−ln(1 − U)]^(1/β).
19–14. We have
Z1 = √(−2 ln(0.45)) cos(2π(0.12)) = 0.921
and
Z2 = √(−2 ln(0.45)) sin(2π(0.12)) = 0.865.
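The Box–Muller pair above checks out numerically (a sketch; the function name is ours):

```python
from math import log, cos, sin, sqrt, pi

def box_muller(u1, u2):
    """Transform two independent uniforms into two independent N(0,1) variates."""
    r = sqrt(-2.0 * log(u1))
    return r * cos(2 * pi * u2), r * sin(2 * pi * u2)

z1, z2 = box_muller(0.45, 0.12)
print(round(z1, 3), round(z2, 3))  # 0.921 0.865
```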
19–15. Σ_{i=1}^{12} Ui − 6 = 1.07.
19–16. Since the Xi ’s are IID exponential(λ) random variables, we know that their m.g.f.
is
MXi(t) = λ/(λ − t),  t < λ,  i = 1, 2, . . . , n.
Then the m.g.f. of Y = Σ_{i=1}^{n} Xi is
MY(t) = Π_{i=1}^{n} MXi(t) = [λ/(λ − t)]^n.
We will be done as soon as we can show that this m.g.f. matches that corresponding
to the p.d.f. from Equation (19–4), namely,
MY(t) = ∫₀^∞ e^(ty) λ^n e^(−λy) y^(n−1)/(n − 1)! dy
      = [λ^n/(n − 1)!] ∫₀^∞ e^(−(λ−t)y) y^(n−1) dy
      = [λ^n/(n − 1)!] ∫₀^∞ e^(−u) [u/(λ − t)]^(n−1) du/(λ − t)     (substituting u = (λ − t)y)
      = {λ^n/[(λ − t)^n (n − 1)!]} ∫₀^∞ e^(−u) u^(n−1) du
      = {λ^n/[(λ − t)^n (n − 1)!]} Γ(n)
      = [λ/(λ − t)]^n.
Since the two versions of MY(t) match, the uniqueness of moment generating functions
implies that both versions of Y have the same distribution, and we are done.
19–17. X = −(1/λ) ln(Π_{i=1}^{n} Ui) = −(1/3) ln((0.73)(0.11)) = 0.841.
(Note that there are other allocations of the uniforms that will do the trick
just as well.)
(b) Suppose X1, X2, . . . , Xn are IID Bernoulli's, generated according to (a). To
get a Binomial(n, p), let Y = Σ_{i=1}^{n} Xi.
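A sketch of (b) (names and seed are ours): each uniform yields one Bernoulli(p), and summing n of them gives a Binomial(n, p) variate.

```python
import random

def binomial(n, p, rng):
    """Sum of n Bernoulli(p)'s: U_i <= p counts as a success."""
    return sum(1 for _ in range(n) if rng.random() <= p)

rng = random.Random(42)
samples = [binomial(10, 0.3, rng) for _ in range(100_000)]
print(sum(samples) / len(samples))  # close to n*p = 3
```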
19–19. Suppose success (S) on trial i corresponds to Ui ≤ 0.25, and failure (F ) corresponds
to Ui > 0.25. Then, from the sequence of uniforms in Problem 19–15, we have
F F F F S, i.e., we require X = 5 trials before observing the first success.
while
V̂B = [1/(b − 1)] Σ_{i=1}^{b} (Zi − Z̄b)² = 1.
19–26. (a) E(C) = E(X̄) − E[k(Y − E(Y ))] = µ − [k(E(Y ) − E(Y ))] = µ.
(b) V (C) = V (X̄) + k 2 V (Y ) − 2k Cov(X̄, Y ). Comment: It would be nice, in
terms of minimizing variance, if k Cov(X̄, Y ) > 0.
(c)
(d/dk) V(C) = 2kV(Y) − 2 Cov(X̄, Y) = 0.
This implies that the critical (minimizing) point is
k = Cov(X̄, Y)/V(Y).
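A small numerical illustration of the control-variate idea in 19–26 (the toy model, seed, and all names are ours): estimate µ = E(X) using a control Y with known mean, with k estimated from the sample.

```python
import random

# Toy setup (our construction): X = Y + eps, with Y ~ Uniform(0, 1)
# (known mean 1/2) and eps ~ N(0, 0.1^2), so mu = E(X) = 0.5.
rng = random.Random(7)
n = 100_000
ys = [rng.random() for _ in range(n)]
xs = [y + rng.gauss(0, 0.1) for y in ys]

xbar = sum(xs) / n
ybar = sum(ys) / n

# Minimizing coefficient k = Cov(X, Y) / V(Y), estimated from the sample.
cov = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / (n - 1)
vy = sum((y - ybar) ** 2 for y in ys) / (n - 1)
k = cov / vy

# Controlled estimator C = Xbar - k(Ybar - E(Y)); still unbiased for mu.
c = xbar - k * (ybar - 0.5)
print(round(c, 3))  # close to 0.5, with much less variance than xbar alone
```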
19–29. You should have a bivariate normal distribution (with correlation 0), centered at
zero with symmetric tails in all directions.