
Problem Sheet-4

Solutions

Statistics for Business and Economics, Exercises 7.18 to 7.41, 7.44 to 7.46, and 7.50 to 7.53.
18. a. E(x̄) = μ = 200

b. σ_x̄ = σ/√n = 50/√100 = 5

c. Normal with E(x̄) = 200 and σ_x̄ = 5

d. It shows the probability distribution of all possible sample means that can be observed with random samples of size 100. This distribution can be used to compute the probability that x̄ is within a specified distance of μ.

19. a. The sampling distribution is normal with

E(x̄) = μ = 200

σ_x̄ = σ/√n = 50/√100 = 5

For ± 5, 195 ≤ x̄ ≤ 205

Using the standard normal probability table:

At x̄ = 205, z = (x̄ − μ)/σ_x̄ = 5/5 = 1, so P(z ≤ 1) = .8413

At x̄ = 195, z = (x̄ − μ)/σ_x̄ = −5/5 = −1, so P(z ≤ −1) = .1587

P(195 ≤ x̄ ≤ 205) = .8413 − .1587 = .6826

b. For ± 10, 190 ≤ x̄ ≤ 210

Using the standard normal probability table:

At x̄ = 210, z = (x̄ − μ)/σ_x̄ = 10/5 = 2, so P(z ≤ 2) = .9772

At x̄ = 190, z = (x̄ − μ)/σ_x̄ = −10/5 = −2, so P(z ≤ −2) = .0228

P(190 ≤ x̄ ≤ 210) = .9772 − .0228 = .9544
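
The table-based calculations in problems 18 and 19 can be reproduced numerically. A minimal Python sketch (not part of the original solutions; the helper name within_prob and the use of SciPy are our choices):

    from math import sqrt
    from scipy.stats import norm

    def within_prob(mu, sigma, n, d):
        """P(mu - d <= x_bar <= mu + d) for the sampling distribution of the mean."""
        se = sigma / sqrt(n)               # standard error of the mean
        z = d / se                         # z value at the upper limit
        return norm.cdf(z) - norm.cdf(-z)

    print(within_prob(200, 50, 100, 5))    # ~0.6827; the table gives .6826
    print(within_prob(200, 50, 100, 10))   # ~0.9545; the table gives .9544

The small differences from the printed answers come from rounding z to two decimals when using the table.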

20. σ_x̄ = σ/√n

σ_x̄ = 25/√50 = 3.54

σ_x̄ = 25/√100 = 2.50

σ_x̄ = 25/√150 = 2.04

σ_x̄ = 25/√200 = 1.77

The standard error of the mean decreases as the sample size increases.
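
The same pattern can be seen with a short loop. A sketch under the same assumptions as above (plain Python, names ours):

    from math import sqrt

    sigma = 25
    for n in (50, 100, 150, 200):
        se = sigma / sqrt(n)               # standard error of the mean
        print(f"n = {n:4d}  standard error = {se:.2f}")
    # n =   50  standard error = 3.54
    # n =  100  standard error = 2.50
    # n =  150  standard error = 2.04
    # n =  200  standard error = 1.77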

21. a. σ_x̄ = σ/√n = 10/√50 = 1.41

b. n/N = 50/50,000 = .001

Use σ_x̄ = σ/√n = 10/√50 = 1.41

c. n/N = 50/5000 = .01

Use σ_x̄ = σ/√n = 10/√50 = 1.41

d. n/N = 50/500 = .10

Use σ_x̄ = √((N − n)/(N − 1)) (σ/√n) = √((500 − 50)/(500 − 1)) (10/√50) = 1.34

Note: Only case (d), where n/N = .10, requires the use of the finite population correction factor.
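
A sketch of the comparison in problem 21 (the function name fpc_standard_error is ours, not from the text):

    from math import sqrt

    def fpc_standard_error(sigma, n, N):
        """Standard error of the mean with the finite population correction factor."""
        return sqrt((N - n) / (N - 1)) * sigma / sqrt(n)

    sigma, n = 10, 50
    for N in (50_000, 5_000, 500):
        plain = sigma / sqrt(n)                      # ignoring the correction factor
        corrected = fpc_standard_error(sigma, n, N)
        print(f"N = {N:6d}  n/N = {n / N:.3f}  "
              f"without FPC = {plain:.2f}  with FPC = {corrected:.2f}")
    # Only N = 500 (n/N = .10) changes the result noticeably: 1.41 versus 1.34.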

22. a. E(x̄) = 51,800 and σ_x̄ = σ/√n = 4000/√60 = 516.40

(Graph: a normal curve for x̄ centered at E(x̄) = 51,800.)

The normal distribution for x̄ is based on the Central Limit Theorem.

b. For n = 120, E(x̄) remains $51,800 and the sampling distribution of x̄ can still be approximated by a normal distribution. However, σ_x̄ is reduced to 4000/√120 = 365.15.

c. As the sample size is increased, the standard error of the mean, σ_x̄, is reduced. This appears logical from the point of view that larger samples should tend to provide sample means that are closer to the population mean. Thus, the variability in the sample mean, measured in terms of σ_x̄, should decrease as the sample size is increased.

23. a. With a sample of size 60, σ_x̄ = 4000/√60 = 516.40

At x̄ = 52,300, z = (52,300 − 51,800)/516.40 = .97

P(x̄ ≤ 52,300) = P(z ≤ .97) = .8340

At x̄ = 51,300, z = (51,300 − 51,800)/516.40 = −.97

P(x̄ < 51,300) = P(z < −.97) = .1660

P(51,300 ≤ x̄ ≤ 52,300) = .8340 − .1660 = .6680

b. σ_x̄ = 4000/√120 = 365.15

At x̄ = 52,300, z = (52,300 − 51,800)/365.15 = 1.37

P(x̄ ≤ 52,300) = P(z ≤ 1.37) = .9147

At x̄ = 51,300, z = (51,300 − 51,800)/365.15 = −1.37

P(x̄ < 51,300) = P(z < −1.37) = .0853

P(51,300 ≤ x̄ ≤ 52,300) = .9147 − .0853 = .8294

24. a. Normal distribution with E(x̄) = 17.5 and

σ_x̄ = σ/√n = 4/√50 = .57

b. Within 1 week means 16.5 ≤ x̄ ≤ 18.5

At x̄ = 18.5, z = (18.5 − 17.5)/.57 = 1.75, so P(z ≤ 1.75) = .9599

At x̄ = 16.5, z = −1.75, so P(z < −1.75) = .0401

P(16.5 ≤ x̄ ≤ 18.5) = .9599 − .0401 = .9198

c. Within 1/2 week means 17.0 ≤ x̄ ≤ 18.0

At x̄ = 18.0, z = (18.0 − 17.5)/.57 = .88, so P(z ≤ .88) = .8106

At x̄ = 17.0, z = −.88, so P(z < −.88) = .1894

P(17.0 ≤ x̄ ≤ 18.0) = .8106 − .1894 = .6212

25. σ_x̄ = σ/√n = 100/√90 = 10.54. This value for the standard error can be used for parts (a) and (b) below.

a. z = (512 − 502)/10.54 = .95, so P(z ≤ .95) = .8289

z = (492 − 502)/10.54 = −.95, so P(z < −.95) = .1711

Probability = .8289 − .1711 = .6578

b. z = (525 − 515)/10.54 = .95, so P(z ≤ .95) = .8289

z = (505 − 515)/10.54 = −.95, so P(z < −.95) = .1711

Probability = .8289 − .1711 = .6578

The probability of being within 10 of the mean on the Mathematics portion of the test is exactly the
same as the probability of being within 10 on the Critical Reading portion of the SAT. This is
because the standard error is the same in both cases. The fact that the means differ does not affect the
probability calculation.

c. σ_x̄ = σ/√n = 100/√100 = 10.0. The standard error is smaller here because the sample size is larger.

z = (504 − 494)/10.0 = 1.00, so P(z ≤ 1.00) = .8413

z = (484 − 494)/10.0 = −1.00, so P(z < −1.00) = .1587

Probability = .8413 − .1587 = .6826

The probability is larger here than it is in parts (a) and (b) because the larger sample size has made
the standard error smaller.

26. a. z = (x̄ − 939)/(σ/√n)

Within ± 25 means x̄ − 939 must be between −25 and +25.

The z value for x̄ − 939 = −25 is just the negative of the z value for x̄ − 939 = 25, so we show only the computation of z for x̄ − 939 = 25.

n = 30: z = 25/(245/√30) = .56, P(−.56 ≤ z ≤ .56) = .7123 − .2877 = .4246

n = 50: z = 25/(245/√50) = .72, P(−.72 ≤ z ≤ .72) = .7642 − .2358 = .5284

n = 100: z = 25/(245/√100) = 1.02, P(−1.02 ≤ z ≤ 1.02) = .8461 − .1539 = .6922

n = 400: z = 25/(245/√400) = 2.04, P(−2.04 ≤ z ≤ 2.04) = .9793 − .0207 = .9586

b. A larger sample increases the probability that the sample mean will be within a specified distance of the population mean. In the automobile insurance example, the probability of being within 25 of μ ranges from .4246 for a sample of size 30 to .9586 for a sample of size 400.
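
The table in part (a) can be generated directly. A minimal sketch, assuming SciPy is available (variable names are ours):

    from math import sqrt
    from scipy.stats import norm

    mu, sigma, d = 939, 245, 25            # automobile insurance example, within +/- 25 of the mean
    for n in (30, 50, 100, 400):
        z = d / (sigma / sqrt(n))          # z value for x_bar - mu = +25
        prob = norm.cdf(z) - norm.cdf(-z)  # P(|x_bar - mu| <= 25)
        print(f"n = {n:4d}  z = {z:.2f}  probability = {prob:.4f}")
    # The probability rises from about .42 at n = 30 to about .96 at n = 400.
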
27. a. σ_x̄ = σ/√n = 2.30/√50 = .3253

At x̄ = 22.18, z = (x̄ − μ)/(σ/√n) = (22.18 − 21.68)/.3253 = 1.54, so P(z ≤ 1.54) = .9382

At x̄ = 21.18, z = −1.54, and P(z < −1.54) = .0618, thus

P(21.18 ≤ x̄ ≤ 22.18) = .9382 − .0618 = .8764

b. σ_x̄ = σ/√n = 2.05/√50 = .2899

At x̄ = 19.30, z = (x̄ − μ)/(σ/√n) = (19.30 − 18.80)/.2899 = 1.72, so P(z ≤ 1.72) = .9573

At x̄ = 18.30, z = −1.72, and P(z < −1.72) = .0427, thus

P(18.30 ≤ x̄ ≤ 19.30) = .9573 − .0427 = .9146

c. In part (b) we have a higher probability of obtaining a sample mean within $.50 of the population
mean because the standard error for female graduates (.2899) is smaller than the standard error for
male graduates (.3253).

d. With n = 120, σ_x̄ = σ/√n = 2.05/√120 = .1871

At x̄ = 18.50, z = (18.50 − 18.80)/.1871 = −1.60

P(x̄ < 18.50) = P(z < −1.60) = .0548

28. a. This is a graph of a normal distribution with E(x̄) = 95 and

σ_x̄ = σ/√n = 14/√30 = 2.56

b. Within 3 strokes means 92 ≤ x̄ ≤ 98

z = (98 − 95)/2.56 = 1.17   z = (92 − 95)/2.56 = −1.17

P(92 ≤ x̄ ≤ 98) = P(−1.17 ≤ z ≤ 1.17) = .8790 − .1210 = .7580

The probability the sample mean will be within 3 strokes of the population mean of 95 is .7580.

c. σ_x̄ = σ/√n = 14/√45 = 2.09

Within 3 strokes means 103 ≤ x̄ ≤ 109

z = (109 − 106)/2.09 = 1.44   z = (103 − 106)/2.09 = −1.44

P(103 ≤ x̄ ≤ 109) = P(−1.44 ≤ z ≤ 1.44) = .9251 − .0749 = .8502

The probability the sample mean will be within 3 strokes of the population mean of 106 is .8502.

d. The probability of being within 3 strokes for female golfers is higher because the sample size is
larger.

29. μ = 183, σ = 50

a. n = 30. Within 8 means 175 ≤ x̄ ≤ 191

z = (x̄ − μ)/(σ/√n) = 8/(50/√30) = .88

P(175 ≤ x̄ ≤ 191) = P(−.88 ≤ z ≤ .88) = .8106 − .1894 = .6212

b. n = 50. Within 8 means 175 ≤ x̄ ≤ 191

z = (x̄ − μ)/(σ/√n) = 8/(50/√50) = 1.13

P(175 ≤ x̄ ≤ 191) = P(−1.13 ≤ z ≤ 1.13) = .8708 − .1292 = .7416

c. n = 100. Within 8 means 175 ≤ x̄ ≤ 191



z = (x̄ − μ)/(σ/√n) = 8/(50/√100) = 1.60

P(175 ≤ x̄ ≤ 191) = P(−1.60 ≤ z ≤ 1.60) = .9452 − .0548 = .8904

d. None of the sample sizes in parts (a), (b), and (c) are large enough. The sample size will need to
be greater than n = 100, which was used in part (c).

30. a. n / N = 40 / 4000 = .01 < .05; therefore, the finite population correction factor is not necessary.

b. With the finite population correction factor:

σ_x̄ = √((N − n)/(N − 1)) (σ/√n) = √((4000 − 40)/(4000 − 1)) (8.2/√40) = 1.29

Without the finite population correction factor:

σ_x̄ = σ/√n = 1.30

Including the finite population correction factor provides only a slightly different value for σ_x̄ than when the correction factor is not used.

c. z = (x̄ − μ)/σ_x̄ = 2/1.30 = 1.54, so P(z ≤ 1.54) = .9382

P(z < −1.54) = .0618

Probability = .9382 − .0618 = .8764

31. a. E(p̄) = p = .40

b. σ_p̄ = √(p(1 − p)/n) = √(.40(.60)/100) = .0490

c. Normal distribution with E(p̄) = .40 and σ_p̄ = .0490

d. It shows the probability distribution for the sample proportion p̄.

32. a. E(p̄) = .40

σ_p̄ = √(p(1 − p)/n) = √(.40(.60)/200) = .0346

Within ± .03 means .37 ≤ p̄ ≤ .43

z = (p̄ − p)/σ_p̄ = .03/.0346 = .87, so P(z ≤ .87) = .8078

P(z < −.87) = .1922

P(.37 ≤ p̄ ≤ .43) = .8078 − .1922 = .6156

b. z = (p̄ − p)/σ_p̄ = .05/.0346 = 1.44, so P(z ≤ 1.44) = .9251

P(z < −1.44) = .0749

P(.35 ≤ p̄ ≤ .45) = .9251 − .0749 = .8502
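
The proportion calculations follow the same template as the ones for the mean. A sketch (the helper name proportion_within is ours), assuming SciPy:

    from math import sqrt
    from scipy.stats import norm

    def proportion_within(p, n, d):
        """P(p - d <= p_bar <= p + d) under the normal approximation."""
        se = sqrt(p * (1 - p) / n)         # standard error of the sample proportion
        z = d / se
        return norm.cdf(z) - norm.cdf(-z)

    print(proportion_within(0.40, 200, 0.03))   # ~0.614; the table gives .6156
    print(proportion_within(0.40, 200, 0.05))   # ~0.851; the table gives .8502

Differences from the printed answers again reflect rounding z to two decimals.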

33. σ_p̄ = √(p(1 − p)/n)

σ_p̄ = √((.55)(.45)/100) = .0497

σ_p̄ = √((.55)(.45)/200) = .0352

σ_p̄ = √((.55)(.45)/500) = .0222

σ_p̄ = √((.55)(.45)/1000) = .0157

The standard error of the proportion, σ_p̄, decreases as n increases.

34. a. σ_p̄ = √((.30)(.70)/100) = .0458

Within ± .04 means .26 ≤ p̄ ≤ .34

z = (p̄ − p)/σ_p̄ = .04/.0458 = .87, so P(z ≤ .87) = .8078

P(z < −.87) = .1922

P(.26 ≤ p̄ ≤ .34) = .8078 − .1922 = .6156

b. σ_p̄ = √((.30)(.70)/200) = .0324

z = (p̄ − p)/σ_p̄ = .04/.0324 = 1.23, so P(z ≤ 1.23) = .8907

P(z < −1.23) = .1093



P(.26 ≤ p̄ ≤ .34) = .8907 − .1093 = .7814

c. σ_p̄ = √((.30)(.70)/500) = .0205

z = (p̄ − p)/σ_p̄ = .04/.0205 = 1.95, so P(z ≤ 1.95) = .9744

P(z < −1.95) = .0256

P(.26 ≤ p̄ ≤ .34) = .9744 − .0256 = .9488

d. σ_p̄ = √((.30)(.70)/1000) = .0145

z = (p̄ − p)/σ_p̄ = .04/.0145 = 2.76, so P(z ≤ 2.76) = .9971

P(z < −2.76) = .0029

P(.26 ≤ p̄ ≤ .34) = .9971 − .0029 = .9942

e. With a larger sample, there is a higher probability p̄ will be within ± .04 of the population proportion p.

35. a. σ_p̄ = √(p(1 − p)/n) = √(.30(.70)/100) = .0458

(Graph: a normal curve for p̄ centered at E(p̄) = .30.)

The normal distribution is appropriate because np = 100(.30) = 30 and n(1 − p) = 100(.70) = 70 are both greater than 5.
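
A small sketch (ours) of the rule of thumb used here to justify the normal approximation:

    def normal_approx_ok(p, n, threshold=5):
        """Rule of thumb: both np and n(1 - p) should be at least 5."""
        return n * p >= threshold and n * (1 - p) >= threshold

    print(normal_approx_ok(0.30, 100))   # True: np = 30 and n(1 - p) = 70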

b. P(.20 ≤ p̄ ≤ .40) = ?

z = (.40 − .30)/.0458 = 2.18, so P(z ≤ 2.18) = .9854

P(z < −2.18) = .0146

P(.20 ≤ p̄ ≤ .40) = .9854 − .0146 = .9708

c. P(.25 ≤ p̄ ≤ .35) = ?

z = (.35 − .30)/.0458 = 1.09, so P(z ≤ 1.09) = .8621

P(z < −1.09) = .1379

P(.25 ≤ p̄ ≤ .35) = .8621 − .1379 = .7242

36. a. This is a graph of a normal distribution with a mean of E(p̄) = .55 and

σ_p̄ = √(p(1 − p)/n) = √(.55(1 − .55)/200) = .0352

b. Within ± .05 means .50 ≤ p̄ ≤ .60

z = (p̄ − p)/σ_p̄ = (.60 − .55)/.0352 = 1.42   z = (p̄ − p)/σ_p̄ = (.50 − .55)/.0352 = −1.42

P(.50 ≤ p̄ ≤ .60) = P(−1.42 ≤ z ≤ 1.42) = .9222 − .0778 = .8444

c. This is a graph of a normal distribution with a mean of E(p̄) = .45 and

σ_p̄ = √(p(1 − p)/n) = √(.45(1 − .45)/200) = .0352

d. σ_p̄ = √(p(1 − p)/n) = √(.45(1 − .45)/200) = .0352

Within ± .05 means .40 ≤ p̄ ≤ .50

z = (p̄ − p)/σ_p̄ = (.50 − .45)/.0352 = 1.42   z = (p̄ − p)/σ_p̄ = (.40 − .45)/.0352 = −1.42

P(.40 ≤ p̄ ≤ .50) = P(−1.42 ≤ z ≤ 1.42) = .9222 − .0778 = .8444

e. No, the probabilities are exactly the same. This is because σ_p̄, the standard error, and the width of the interval are the same in both cases. Notice that the formula for the standard error involves p(1 − p), so the standard error is the same whenever one value of p equals 1 minus the other. In part (b), p = .55 and 1 − p = .45; in part (d), p = .45 and 1 − p = .55.
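
The symmetry argument in part (e) is easy to verify numerically. A quick sketch (ours):

    from math import sqrt

    n = 200
    for p in (0.55, 0.45):
        se = sqrt(p * (1 - p) / n)       # p(1 - p) is the same for p and 1 - p
        print(f"p = {p:.2f}  standard error = {se:.4f}")
    # Both lines print 0.0352, which is why parts (b) and (d) give the same probability.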

f. For n = 400, σ_p̄ = √(p(1 − p)/n) = √(.55(1 − .55)/400) = .0249

Within ± .05 means .50 ≤ p̄ ≤ .60

z = (p̄ − p)/σ_p̄ = (.60 − .55)/.0249 = 2.01   z = (p̄ − p)/σ_p̄ = (.50 − .55)/.0249 = −2.01

P(.50 ≤ p̄ ≤ .60) = P(−2.01 ≤ z ≤ 2.01) = .9778 − .0222 = .9556

The probability is larger than in part (b). This is because the larger sample size has reduced the standard error from .0352 to .0249.

37. a. Normal distribution with

E(p̄) = .12

σ_p̄ = √(p(1 − p)/n) = √((.12)(1 − .12)/540) = .0140

b. z = (p̄ − p)/σ_p̄ = .03/.0140 = 2.14, so P(z ≤ 2.14) = .9838

P(z < −2.14) = .0162

P(.09 ≤ p̄ ≤ .15) = .9838 − .0162 = .9676

c. z = (p̄ − p)/σ_p̄ = .015/.0140 = 1.07, so P(z ≤ 1.07) = .8577

P(z < −1.07) = .1423

P(.105 ≤ p̄ ≤ .135) = .8577 − .1423 = .7154

38. a. It is a normal distribution with

E(p̄) = .42

σ_p̄ = √(p(1 − p)/n) = √((.42)(.58)/300) = .0285

b. z = (p̄ − p)/σ_p̄ = .03/.0285 = 1.05, so P(z ≤ 1.05) = .8531

P(z < −1.05) = .1469

P(.39 ≤ p̄ ≤ .45) = .8531 − .1469 = .7062

c. z = (p̄ − p)/σ_p̄ = .05/.0285 = 1.75, so P(z ≤ 1.75) = .9599

P(z < −1.75) = .0401

P(.37 ≤ p̄ ≤ .47) = .9599 − .0401 = .9198

d. The probabilities would increase. This is because the increase in the sample size makes the standard error, σ_p̄, smaller.

39. a. Normal distribution with E(p̄) = p = .75 and

σ_p̄ = √(p(1 − p)/n) = √(.75(1 − .75)/450) = .0204

b. z = (p̄ − p)/σ_p̄ = .04/.0204 = 1.96, so P(z ≤ 1.96) = .9750

P(z < −1.96) = .0250

P(.71 ≤ p̄ ≤ .79) = P(−1.96 ≤ z ≤ 1.96) = .9750 − .0250 = .9500

c. Normal distribution with E(p̄) = p = .75 and

σ_p̄ = √(p(1 − p)/n) = √(.75(1 − .75)/200) = .0306

d. z = (p̄ − p)/σ_p̄ = .04/.0306 = 1.31, so P(z ≤ 1.31) = .9049

P(z < −1.31) = .0951

P(.71 ≤ p̄ ≤ .79) = P(−1.31 ≤ z ≤ 1.31) = .9049 − .0951 = .8098

e. The probability of the sample proportion being within .04 of the population proportion falls from .9500 to .8098 when the sample size is reduced from 450 to 200. So there is a gain in precision by increasing the sample size from 200 to 450. If the extra cost of using the larger sample size is not too great, we should probably do so.

40. a. E(p̄) = .76

σ_p̄ = √(p(1 − p)/n) = √(.76(1 − .76)/400) = .0214

Normal distribution because np = 400(.76) = 304 and n(1 − p) = 400(.24) = 96

b. z = (.79 − .76)/.0214 = 1.40, so P(z ≤ 1.40) = .9192

P(z < −1.40) = .0808

P(.73 ≤ p̄ ≤ .79) = P(−1.40 ≤ z ≤ 1.40) = .9192 − .0808 = .8384

c. σ_p̄ = √(p(1 − p)/n) = √(.76(1 − .76)/750) = .0156

z = (.79 − .76)/.0156 = 1.92, so P(z ≤ 1.92) = .9726

P(z < −1.92) = .0274

P(.73 ≤ p̄ ≤ .79) = P(−1.92 ≤ z ≤ 1.92) = .9726 − .0274 = .9452

41. a. E(p̄) = .17

σ_p̄ = √(p(1 − p)/n) = √((.17)(1 − .17)/800) = .0133

The distribution is approximately normal because np = 800(.17) = 136 > 5 and n(1 − p) = 800(.83) = 664 > 5.

b. z = (.19 − .17)/.0133 = 1.51, so P(z ≤ 1.51) = .9345

P(z < −1.51) = .0655

P(.15 ≤ p̄ ≤ .19) = P(−1.51 ≤ z ≤ 1.51) = .9345 − .0655 = .8690

c. σ_p̄ = √(p(1 − p)/n) = √((.17)(1 − .17)/1600) = .0094

z = (.19 − .17)/.0094 = 2.13, so P(z ≤ 2.13) = .9834

P(z < −2.13) = .0166

P(.15 ≤ p̄ ≤ .19) = P(−2.13 ≤ z ≤ 2.13) = .9834 − .0166 = .9668

42. The random numbers corresponding to the first seven universities selected are

122, 99, 25, 55, 115, 102, 61

The third, fourth and fifth columns of Table 7.1 were needed to find 7 random numbers of 133 or
less without duplicate numbers.

Author’s note: The universities identified are: Clarkson U. (122), U. of Arizona (99), UCLA (25),
U. of Maryland (55), U. of New Hampshire (115), Florida State U. (102), Clemson U. (61).

43. a. With n = 100, we can approximate the sampling distribution with a normal distribution having

E(x̄) = 8086

σ_x̄ = σ/√n = 2500/√100 = 250

b. z = (x̄ − μ)/(σ/√n) = 200/(2500/√100) = .80, so P(z ≤ .80) = .7881

P(z < −.80) = .2119

P(7886 ≤ x̄ ≤ 8286) = P(−.80 ≤ z ≤ .80) = .7881 − .2119 = .5762

The probability that the sample mean will be within $200 of the population mean is .5762.

c. At x̄ = 9000, z = (9000 − 8086)/(2500/√100) = 3.66

P(x̄ ≥ 9000) = P(z ≥ 3.66) ≈ 0

Yes, the research firm should be questioned. A sample mean this large is extremely unlikely (almost zero probability) if a simple random sample is taken from a population with a mean of $8086.

44. a. Normal distribution with

E(x̄) = 406

σ_x̄ = σ/√n = 80/√64 = 10

b. z = (x̄ − μ)/(σ/√n) = 15/(80/√64) = 1.50, so P(z ≤ 1.50) = .9332

P(z < −1.50) = .0668

P(391 ≤ x̄ ≤ 421) = P(−1.50 ≤ z ≤ 1.50) = .9332 − .0668 = .8664

c. At x̄ = 380, z = (x̄ − μ)/(σ/√n) = (380 − 406)/(80/√64) = −2.60

P(x̄ ≤ 380) = P(z ≤ −2.60) = .0047

Yes, this is an unusually low-performing group of 64 stores. The probability of a sample mean annual sales per square foot of $380 or less is only .0047.

45. With n = 60 the central limit theorem allows us to conclude that the sampling distribution is approximately normal.

a. This means 14 ≤ x̄ ≤ 16

At x̄ = 16, z = (16 − 15)/(4/√60) = 1.94, so P(z ≤ 1.94) = .9738

P(z < −1.94) = .0262

P(14 ≤ x̄ ≤ 16) = P(−1.94 ≤ z ≤ 1.94) = .9738 − .0262 = .9476

b. This means 14.25 ≤ x̄ ≤ 15.75

At x̄ = 15.75, z = (15.75 − 15)/(4/√60) = 1.45, so P(z ≤ 1.45) = .9265

P(z < −1.45) = .0735

P(14.25 ≤ x̄ ≤ 15.75) = P(−1.45 ≤ z ≤ 1.45) = .9265 − .0735 = .8530

46. μ = 27,175, σ = 7400

a. σ_x̄ = 7400/√60 = 955

b. z = (x̄ − μ)/σ_x̄ = 0/955 = 0

P(x̄ > 27,175) = P(z > 0) = .50

Note: This could have been answered easily without any calculations; 27,175 is the expected value of the sampling distribution of x̄.

c. z = (x̄ − μ)/σ_x̄ = 1000/955 = 1.05, so P(z ≤ 1.05) = .8531

P(z < −1.05) = .1469

P(26,175 ≤ x̄ ≤ 28,175) = P(−1.05 ≤ z ≤ 1.05) = .8531 − .1469 = .7062

d. σ_x̄ = 7400/√100 = 740

z = (x̄ − μ)/σ_x̄ = 1000/740 = 1.35, so P(z ≤ 1.35) = .9115

P(z < −1.35) = .0885

P(26,175 ≤ x̄ ≤ 28,175) = P(−1.35 ≤ z ≤ 1.35) = .9115 − .0885 = .8230

47. a. σ_x̄ = √((N − n)/(N − 1)) (σ/√n)

N = 2000: σ_x̄ = √((2000 − 50)/(2000 − 1)) (144/√50) = 20.11

N = 5000: σ_x̄ = √((5000 − 50)/(5000 − 1)) (144/√50) = 20.26

N = 10,000: σ_x̄ = √((10,000 − 50)/(10,000 − 1)) (144/√50) = 20.31

Note: With n/N ≤ .05 for all three cases, common statistical practice would be to ignore the finite population correction factor and use σ_x̄ = 144/√50 = 20.36 for each case.

b. N = 2000

z = 25/20.11 = 1.24, so P(z ≤ 1.24) = .8925

P(z < −1.24) = .1075

Probability = P(−1.24 ≤ z ≤ 1.24) = .8925 − .1075 = .7850

N = 5000

z = 25/20.26 = 1.23, so P(z ≤ 1.23) = .8907

P(z < −1.23) = .1093

Probability = P(−1.23 ≤ z ≤ 1.23) = .8907 − .1093 = .7814

N = 10,000

z = 25/20.31 = 1.23, so P(z ≤ 1.23) = .8907

P(z < −1.23) = .1093

Probability = P(−1.23 ≤ z ≤ 1.23) = .8907 − .1093 = .7814

All probabilities are approximately .78, indicating that a sample of size 50 will work well for all three firms.

 500
48. a. x = = = 20
n n

n = 500/20 = 25 and n = (25)2 = 625

b. For  25,

25
z= = 1.25 P(z ≤ 1.25) = .8944
20

P(z < -1.25) = .1056

Probability = P(-1.25  z  1.25) = .8944 - .1056 = .7888
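
Part (a) solves σ/√n = 20 for n, and part (b) then evaluates the resulting probability. A sketch of both steps, assuming SciPy (names ours):

    import math
    from scipy.stats import norm

    sigma, target_se = 500, 20
    n = math.ceil((sigma / target_se) ** 2)        # invert sigma / sqrt(n) = target_se
    print(n)                                       # 625

    prob = norm.cdf(25 / target_se) - norm.cdf(-25 / target_se)
    print(round(prob, 4))                          # ~0.7887; the table gives .7888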

49. Sampling distribution of x̄: σ_x̄ = σ/√n = σ/√30

(Graph: a normal curve for x̄ with an area of .05 in each tail, below 1.9 and above 2.1, centered at μ.)

μ = (1.9 + 2.1)/2 = 2

The area below x̄ = 2.1 must be 1 − .05 = .95. An area of .95 in the standard normal table shows z = 1.645.

Thus,

z = (2.1 − 2.0)/(σ/√30) = 1.645

Solving for σ:

σ = (.1)√30/1.645 = .33
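
The 1.645 used here is the inverse of the standard normal distribution at .95, so the whole solution can be expressed in a few lines. A sketch assuming SciPy (names ours):

    from math import sqrt
    from scipy.stats import norm

    n = 30
    z = norm.ppf(0.95)                  # ~1.645, leaves an area of .05 in the upper tail
    sigma = (2.1 - 2.0) * sqrt(n) / z   # solve z = (2.1 - mu) / (sigma / sqrt(n)) with mu = 2.0
    print(round(sigma, 2))              # 0.33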

50. p = .28

a. This is the graph of a normal distribution with E(p̄) = p = .28 and

σ_p̄ = √(p(1 − p)/n) = √(.28(1 − .28)/240) = .0290

b. Within ± .04 means .24 ≤ p̄ ≤ .32

z = (.32 − .28)/.0290 = 1.38   z = (.24 − .28)/.0290 = −1.38

P(.24 ≤ p̄ ≤ .32) = P(−1.38 ≤ z ≤ 1.38) = .9162 − .0838 = .8324

c. Within ± .02 means .26 ≤ p̄ ≤ .30

z = (.30 − .28)/.0290 = .69   z = (.26 − .28)/.0290 = −.69

P(.26 ≤ p̄ ≤ .30) = P(−.69 ≤ z ≤ .69) = .7549 − .2451 = .5098


51. σ_p̄ = √(p(1 − p)/n) = √((.40)(.60)/400) = .0245

P(p̄ ≥ .375) = ?

z = (.375 − .40)/.0245 = −1.02, and P(z < −1.02) = .1539

P(p̄ ≥ .375) = 1 − .1539 = .8461

52. a. σ_p̄ = √(p(1 − p)/n) = √((.40)(1 − .40)/380) = .0251

Within ± .04 means .36 ≤ p̄ ≤ .44

z = (.44 − .40)/.0251 = 1.59   z = (.36 − .40)/.0251 = −1.59

P(.36 ≤ p̄ ≤ .44) = P(−1.59 ≤ z ≤ 1.59) = .9441 − .0559 = .8882

b. We want P(p̄ ≥ .45)

z = (p̄ − p)/σ_p̄ = (.45 − .40)/.0251 = 1.99

P(p̄ ≥ .45) = P(z ≥ 1.99) = 1 − .9767 = .0233

53. a. Normal distribution with E(p̄) = .15 and

σ_p̄ = √(p(1 − p)/n) = √((.15)(.85)/150) = .0292

b. P(.12 ≤ p̄ ≤ .18) = ?

z = (.18 − .15)/.0292 = 1.03, so P(z ≤ 1.03) = .8485

P(z < −1.03) = .1515

P(.12 ≤ p̄ ≤ .18) = P(−1.03 ≤ z ≤ 1.03) = .8485 − .1515 = .6970
