KEY FORMULAS
Prem S. Mann • Introductory Statistics, Ninth Edition
Chapter 2 • Organizing and Graphing Data
• Relative frequency of a class = f / ∑f
• Percentage of a class = (Relative frequency) × 100%
• Class midpoint or mark = (Upper limit + Lower limit) / 2
• Class width = Upper boundary − Lower boundary
• Cumulative relative frequency = Cumulative frequency / Total observations in the data set
• Cumulative percentage = (Cumulative relative frequency) × 100%
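As a brief illustration of the Chapter 2 formulas, the Python sketch below computes relative frequencies, percentages, class midpoints, and cumulative relative frequencies for a small frequency table. The class limits and frequencies are made-up example values, not data from the text.

```python
# Sketch of the Chapter 2 formulas (classes and frequencies are hypothetical).
classes = [(0, 9), (10, 19), (20, 29), (30, 39)]   # (lower limit, upper limit)
frequencies = [5, 12, 8, 5]

total = sum(frequencies)                            # ∑f
cumulative = 0
for (lower, upper), f in zip(classes, frequencies):
    rel_freq = f / total                            # relative frequency = f / ∑f
    percentage = rel_freq * 100                     # percentage of a class
    midpoint = (upper + lower) / 2                  # class midpoint (mark)
    cumulative += f
    cum_rel_freq = cumulative / total               # cumulative relative frequency
    print(f"{lower}-{upper}: midpoint={midpoint}, rel.freq={rel_freq:.3f}, "
          f"%={percentage:.1f}, cum.rel.freq={cum_rel_freq:.3f}")
```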
Chapter 3 • Numerical Descriptive Measures
• Mean for ungrouped data: μ = ∑x / N and x̄ = ∑x / n
• Mean for grouped data: μ = ∑mf / N and x̄ = ∑mf / n, where m is the midpoint and f is the frequency of a class
• Weighted mean for ungrouped data = ∑xw / ∑w
• k% trimmed mean = Mean of the values after dropping k% of the values from each end of the ranked data
• Median for ungrouped data = Value of the middle term in a ranked data set
• Range = Largest value − Smallest value
• Variance for ungrouped data:
  σ² = [∑x² − (∑x)²/N] / N and s² = [∑x² − (∑x)²/n] / (n − 1)
  where σ² is the population variance and s² is the sample variance
• Standard deviation for ungrouped data:
  σ = √{[∑x² − (∑x)²/N] / N} and s = √{[∑x² − (∑x)²/n] / (n − 1)}
  where σ and s are the population and sample standard deviations, respectively
• Coefficient of variation = (σ/μ) × 100% or (s/x̄) × 100%
• Variance for grouped data:
  σ² = [∑m²f − (∑mf)²/N] / N and s² = [∑m²f − (∑mf)²/n] / (n − 1)
• Standard deviation for grouped data:
  σ = √{[∑m²f − (∑mf)²/N] / N} and s = √{[∑m²f − (∑mf)²/n] / (n − 1)}
• Chebyshev’s theorem: For any number k greater than 1, at least (1 − 1/k²) of the values for any distribution lie within k standard deviations of the mean.
• Empirical rule: For a specific bell-shaped distribution, about 68% of the observations fall in the interval (μ − σ) to (μ + σ), about 95% fall in the interval (μ − 2σ) to (μ + 2σ), and about 99.7% fall in the interval (μ − 3σ) to (μ + 3σ).
• Q1 = First quartile, given by the value of the middle term among the (ranked) observations that are less than the median
  Q2 = Second quartile, given by the value of the middle term in a ranked data set
  Q3 = Third quartile, given by the value of the middle term among the (ranked) observations that are greater than the median
• Interquartile range: IQR = Q3 − Q1
• The kth percentile: Pk = Value of the (kn/100)th term in a ranked data set
• Percentile rank of xi = (Number of values less than xi / Total number of values in the data set) × 100

Chapter 4 • Probability
• Classical probability rule for a simple event: P(Ei) = 1 / Total number of outcomes
• Classical probability rule for a compound event: P(A) = Number of outcomes in A / Total number of outcomes
• Relative frequency as an approximation of probability: P(A) = f / n
• Conditional probability of an event: P(A∣B) = P(A and B) / P(B) and P(B∣A) = P(A and B) / P(A)
• Condition for independence of events: P(A) = P(A∣B) and/or P(B) = P(B∣A)
• For complementary events: P(A) + P(Ā) = 1
• Multiplication rule for dependent events: P(A and B) = P(A) P(B∣A)
• Multiplication rule for independent events: P(A and B) = P(A) P(B)
• Joint probability of two mutually exclusive events: P(A and B) = 0
• Addition rule for mutually nonexclusive events: P(A or B) = P(A) + P(B) − P(A and B)
• Addition rule for mutually exclusive events: P(A or B) = P(A) + P(B)
• n factorial: n! = n(n − 1)(n − 2) · · · 3 · 2 · 1
• Number of combinations of n items selected x at a time: nCx = n! / [x!(n − x)!]
• Number of permutations of n items selected x at a time: nPx = n! / (n − x)!
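To show how the Chapter 3 computational formulas are applied, the sketch below evaluates the sample mean, the shortcut formula s² = [∑x² − (∑x)²/n] / (n − 1), the standard deviation, and the coefficient of variation, then compares the data with Chebyshev’s bound for k = 2 (using the sample mean and standard deviation in place of μ and σ). The data values are hypothetical.

```python
import math

# Hypothetical sample data (not from the text).
x = [4, 7, 8, 10, 12, 15, 16]
n = len(x)

sum_x = sum(x)
sum_x2 = sum(v * v for v in x)

x_bar = sum_x / n                               # sample mean: x̄ = ∑x / n
s2 = (sum_x2 - sum_x ** 2 / n) / (n - 1)        # shortcut formula for s²
s = math.sqrt(s2)                               # sample standard deviation
cv = s / x_bar * 100                            # coefficient of variation (%)

# Chebyshev's theorem: at least 1 - 1/k² of the values lie within k std devs.
k = 2
within = sum(1 for v in x if abs(v - x_bar) <= k * s) / n
print(f"mean={x_bar:.2f}, s2={s2:.2f}, s={s:.2f}, CV={cv:.1f}%")
print(f"Chebyshev bound for k={k}: {1 - 1/k**2:.2f}; observed fraction: {within:.2f}")
```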
Chapter 5 • Discrete Random Variables and Their Probability Distributions
• Mean of a discrete random variable x: μ = ∑xP(x)
• Standard deviation of a discrete random variable x: σ = √[∑x²P(x) − μ²]
• Binomial probability formula: P(x) = nCx p^x q^(n−x)
• Mean and standard deviation of the binomial distribution: μ = np and σ = √(npq)
• Hypergeometric probability formula: P(x) = (rCx · N−rCn−x) / NCn
• Poisson probability formula: P(x) = λ^x e^(−λ) / x!
• Mean, variance, and standard deviation of the Poisson probability distribution: μ = λ, σ² = λ, and σ = √λ
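A short Python sketch of the binomial and Poisson formulas above; the parameter values (n = 10, p = 0.3, λ = 4) are arbitrary examples, not from the text.

```python
import math

def binomial_pmf(x, n, p):
    """P(x) = nCx * p^x * q^(n-x), with q = 1 - p."""
    q = 1 - p
    return math.comb(n, x) * p**x * q**(n - x)

def poisson_pmf(x, lam):
    """P(x) = lambda^x * e^(-lambda) / x!"""
    return lam**x * math.exp(-lam) / math.factorial(x)

# Example parameters (hypothetical).
n, p, lam = 10, 0.3, 4
print(binomial_pmf(3, n, p))                  # P(exactly 3 successes in 10 trials)
print(n * p, math.sqrt(n * p * (1 - p)))      # binomial mean and standard deviation
print(poisson_pmf(2, lam))                    # P(exactly 2 occurrences)
```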
Chapter 6 • Continuous Random Variables and the Normal Distribution
• z value for an x value: z = (x − μ) / σ
• Value of x when μ, σ, and z are known: x = μ + zσ

Chapter 7 • Sampling Distributions
• Mean of x̄: μx̄ = μ
• Standard deviation of x̄ when n/N ≤ .05: σx̄ = σ/√n
• z value for x̄: z = (x̄ − μ) / σx̄
• Population proportion: p = X/N
• Sample proportion: p̂ = x/n
• Mean of p̂: μp̂ = p
• Standard deviation of p̂ when n/N ≤ .05: σp̂ = √(pq/n)
• z value for p̂: z = (p̂ − p) / σp̂
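The sketch below applies the z-value formulas for an individual observation and for a sample mean; μ, σ, n, and the observed values are made-up examples.

```python
import math

# Hypothetical population parameters and observations.
mu, sigma = 100, 15
x = 118
n, x_bar = 36, 104

z_x = (x - mu) / sigma                    # z value for an individual x
x_from_z = mu + 1.5 * sigma               # x recovered from a given z (here z = 1.5)

sigma_xbar = sigma / math.sqrt(n)         # standard deviation of x̄ (assumes n/N ≤ .05)
z_xbar = (x_bar - mu) / sigma_xbar        # z value for the sample mean

print(z_x, x_from_z, z_xbar)
```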
Chapter 8 • Estimation of the Mean and Proportion
• Point estimate of μ = x̄
• Confidence interval for μ using the normal distribution when σ is known: x̄ ± zσx̄, where σx̄ = σ/√n
• Confidence interval for μ using the t distribution when σ is not known: x̄ ± tsx̄, where sx̄ = s/√n
• Margin of error of the estimate for μ: E = zσx̄ or tsx̄
• Determining sample size for estimating μ: n = z²σ²/E²
• Confidence interval for p for a large sample: p̂ ± zsp̂, where sp̂ = √(p̂q̂/n)
• Margin of error of the estimate for p: E = zsp̂, where sp̂ = √(p̂q̂/n)
• Determining sample size for estimating p: n = z²pq/E²

Chapter 9 • Hypothesis Tests about the Mean and Proportion
• Test statistic z for a test of hypothesis about μ using the normal distribution when σ is known: z = (x̄ − μ)/σx̄, where σx̄ = σ/√n
• Test statistic for a test of hypothesis about μ using the t distribution when σ is not known: t = (x̄ − μ)/sx̄, where sx̄ = s/√n
• Test statistic for a test of hypothesis about p for a large sample: z = (p̂ − p)/σp̂, where σp̂ = √(pq/n)
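To tie the Chapter 8 and Chapter 9 formulas together, the sketch below builds a 95% confidence interval for μ with σ known and computes the large-sample test statistic for a proportion. All numbers (x̄, σ, n, p̂, and the hypothesized p) are hypothetical examples.

```python
import math

# Chapter 8: confidence interval for mu when sigma is known (hypothetical values).
x_bar, sigma, n = 84.0, 12.0, 64
z = 1.96                                     # z for a 95% confidence level
sigma_xbar = sigma / math.sqrt(n)            # σx̄ = σ / √n
margin = z * sigma_xbar                      # margin of error E = zσx̄
print(f"95% CI for mu: ({x_bar - margin:.2f}, {x_bar + margin:.2f})")

# Chapter 9: large-sample test statistic for a proportion (hypothetical values).
p0, p_hat, n2 = 0.50, 0.56, 400              # hypothesized p, observed p̂, sample size
sigma_phat = math.sqrt(p0 * (1 - p0) / n2)   # σp̂ = √(pq/n) under the null hypothesis
z_stat = (p_hat - p0) / sigma_phat
print(f"z = {z_stat:.2f}")
```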