Simple Random and Systematic Sampling
Simple random sampling and systematic sampling provide the foundation for almost all of the
more complex sampling designs that are based on probability sampling. They are also usually
the easiest designs to implement. These two designs highlight a trade-off inherent in all sampling
designs: do we select sample units at random to minimize the risk of introducing biases into the
sample or do we select sample units systematically to ensure that sample units are well-
distributed throughout the population?
Both designs involve selecting n sample units from the N units in the population and can be
implemented with or without replacement.
Simple Random Sampling
Advantages:
• Easy to implement
• Requires little advance knowledge about the target population
Disadvantages:
How it is implemented:
Every possible sample of size n drawn from the population must have an equal chance of being selected; consequently, all units within the population have the same probability of being included in the sample.
Estimating the Population Mean
The population mean (μ) is the true average number of entities per sample unit and is estimated
with the sample mean ($\hat{\mu}$ or $\bar{y}$), which has an unbiased estimator:

$$\hat{\mu} = \frac{\sum_{i=1}^{n} y_i}{n}$$
where yi is the value from each unit in the sample and n is the number of units in the sample.
The population variance (σ2) is estimated with the sample variance (s2) which has an unbiased
estimator:
$$s^2 = \frac{\sum_{i=1}^{n} (y_i - \bar{y})^2}{n-1}$$
Variance of the estimate $\hat{\mu}$ is: $\widehat{\mathrm{var}}(\hat{\mu}) = \left(\frac{N-n}{N}\right)\frac{s^2}{n}$.
The standard error of the estimate is the square root of the variance of the estimate, which, as always, is the standard deviation of the sampling distribution of the estimate. Standard error is a useful gauge of how precisely a parameter has been estimated and is a function of the variation inherent in the population (σ²) and the size of the sample (n).
Standard error of $\hat{\mu}$ is: $SE(\hat{\mu}) = \sqrt{\left(\frac{N-n}{N}\right)\frac{s^2}{n}}$.
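As a concrete illustration, the mean, sample variance, and FPC-corrected standard error can be computed as below. This is a sketch only; the five counts and the frame size N = 100 are invented for illustration, not taken from the text.

```python
import math

def srs_estimates(sample, N):
    """Mean and standard error from a simple random sample of n of N units."""
    n = len(sample)
    mu_hat = sum(sample) / n                               # sample mean
    s2 = sum((y - mu_hat) ** 2 for y in sample) / (n - 1)  # sample variance
    var_mu = ((N - n) / N) * s2 / n                        # FPC-adjusted variance
    return mu_hat, math.sqrt(var_mu)

# hypothetical data: counts from n = 5 of N = 100 plots
mu_hat, se = srs_estimates([4, 7, 2, 5, 6], N=100)         # mu_hat = 4.8
```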
The quantity $\left(\frac{N-n}{N}\right)$ is the finite population correction factor which adjusts variance of the
estimator (not variance of the population which does not change with n) to reflect the amount of
information that is known about the population through the sample. Simply, as the amount of
information we know about the population through sampling increases, the remaining uncertainty
decreases. Therefore, the correction factor reflects the proportion of the population that remains
unknown. Consequently, as the number of sampling units measured (n) approaches the total
number of sampling units in the population (N), the finite population correction factor approaches zero, so the amount of uncertainty in the estimate also approaches zero.

[Figure: the finite population correction factor (FPC) plotted against n for N = 100; the FPC declines from 1 toward 0 as n approaches N.]

If we ignore the finite population correction factor, as we have to do when N is unknown, the effect on the variance of the estimator is slight when N is large. When N is small, however, the variance of the estimator can be overestimated appreciably.
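The effect of ignoring the correction factor can be checked numerically; in this sketch, the frame and sample sizes are arbitrary choices for illustration:

```python
def fpc(N, n):
    """Finite population correction: the fraction of the population unsampled."""
    return (N - n) / N

# Ignoring the FPC amounts to treating it as 1, which inflates the variance
# of the estimator. The inflation is slight when N is large relative to n,
# but appreciable when N is small:
small_frame = fpc(N=30, n=20)     # 1/3: variance overstated threefold if ignored
large_frame = fpc(N=10000, n=20)  # 0.998: essentially no effect
```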
The population total $\tau = \sum_{i=1}^{N} y_i = N\mu$ is estimated with the sample total ($\hat{\tau}$), which has an unbiased estimator:

$$\hat{\tau} = N\hat{\mu} = \frac{N}{n}\sum_{i=1}^{n} y_i$$
where N is the total number of sample units in a population, n is the number of units in the
sample, and yi is the value measured from each sample unit.
In studies of wildlife populations, the total number of entities in a population is often referred to as
“abundance” and is traditionally represented with the symbol N. Consequently, there is real
potential for confusing the number of entities in the population with the number of sampling units
in the sampling frame. Therefore, in the context of sampling theory, we’ll use τˆ to represent the
population total and N to represent the number of sampling units in a population. Later, when
addressing wildlife populations specifically, we’ll use N to represent abundance to remain
consistent with the literature in that field.
Because the estimator τˆ is simply the number of sample units in the population N times the
mean number of entities per sample unit, μ̂ , the variance of the estimate τˆ reflects both the
number of units in the sampling universe N and the variance associated with μ̂ . An unbiased
estimate for the variance of the estimate τˆ is:
$$\widehat{\mathrm{var}}(\hat{\tau}) = N^2\,\widehat{\mathrm{var}}(\hat{\mu}) = N^2\left(\frac{s^2}{n}\right)\left(\frac{N-n}{N}\right)$$
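Under the same assumptions as before (an invented sample of five counts from a frame of N = 100 units), the total and its standard error follow directly:

```python
import math

def srs_total(sample, N):
    """Population total and its standard error under simple random sampling."""
    n = len(sample)
    mu_hat = sum(sample) / n
    s2 = sum((y - mu_hat) ** 2 for y in sample) / (n - 1)
    tau_hat = N * mu_hat                          # tau_hat = N * mu_hat
    var_tau = N ** 2 * (s2 / n) * ((N - n) / N)   # N^2 times var(mu_hat)
    return tau_hat, math.sqrt(var_tau)

tau_hat, se = srs_total([4, 7, 2, 5, 6], N=100)   # tau_hat = 480.0
```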
In the case of simple random sampling, the population proportion follows the mean exactly; that
is, p = μ. If this idea is new to you, convince yourself by working through an example. Say we
generate a sample of size 10, where 4 entities have a value of 1 and 6 entities have a value of 0
(e.g., 1 = presence of a trait, 0 = absence of a trait). The proportion of entities in the sample with
the trait is 4/10 = 0.40, which also equals the sample mean: (1+1+1+1+0+0+0+0+0+0)/10 = 4/10 = 0.40. Cosmic.
It follows that the population proportion (p) is estimated with the sample proportion ( p̂ ) which has
an unbiased estimator:
$$\hat{p} = \hat{\mu} = \frac{\sum_{i=1}^{n} y_i}{n}.$$
Because we are dealing with dichotomous proportions (sample unit does or does not have the
trait), the population variance σ2 is computed based on variance for a binomial which is the
proportion of the population with the trait (p) times the proportion that does not have that trait (1 –
p) or p(1 − p). The estimate of the population variance $s^2$ is $\hat{p}(1-\hat{p})$.

Variance of the estimate $\hat{p}$ is: $\widehat{\mathrm{var}}(\hat{p}) = \left(\frac{N-n}{N}\right)\frac{s^2}{n-1} = \left(\frac{N-n}{N}\right)\frac{\hat{p}(1-\hat{p})}{n-1}$.

Standard error of $\hat{p}$ is: $SE(\hat{p}) = \sqrt{\left(\frac{N-n}{N}\right)\frac{\hat{p}(1-\hat{p})}{n-1}}$.
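The 4-of-10 example from the text can be run through these formulas directly; note that the frame size N = 100 is an assumption added here for illustration, since the text's example does not specify one:

```python
import math

def srs_proportion(sample, N):
    """Proportion and its standard error from 0/1 data under SRS."""
    n = len(sample)
    p_hat = sum(sample) / n
    var_p = ((N - n) / N) * p_hat * (1 - p_hat) / (n - 1)
    return p_hat, math.sqrt(var_p)

# 4 of 10 sampled units show the trait
p_hat, se = srs_proportion([1, 1, 1, 1, 0, 0, 0, 0, 0, 0], N=100)  # p_hat = 0.4
```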
Determining how many sample units (n) to measure requires that we establish the degree of
precision that we require for the estimate we wish to generate; we denote a quantity B, the
desired bound of the error of estimation, which we define as the half-width of the confidence
interval we want to result around the estimate we will generate from the proposed sampling effort.
To establish the number of sample units to measure to estimate the population mean μ at a
desired level of precision B with simple random sampling, we set Z × SE( y ) (the formula for a
confidence interval) equal to B and solve this expression for n. We use Z to denote the upper α/2
point of the standard normal distribution for simplicity (although we could use the Student’s t
distribution), where α is the same value we used to establish the width of a confidence interval,
the rate at which we are willing to tolerate Type I errors.
We set $B = Z\sqrt{\left(\frac{N-n}{N}\right)\frac{\sigma^2}{n}}$ and solve for n:

$$n = \frac{1}{\frac{1}{n_0} + \frac{1}{N}}; \qquad n_0 = \frac{Z^2\sigma^2}{B^2}.$$
If we anticipate n to be small relative to N, we can ignore the population correction factor and use
only the formula for n0 to gauge sample size.
Example: Estimate the average amount of money μ for a hospital’s accounts receivable. Note,
however, that no prior information exists with which to estimate population variance σ2 but we
know that most receivables lie within a range of about $100 and there are N = 1000 accounts.
How many samples are needed to estimate μ with a bound on the error of estimation B = $3 with
95% confidence (α = 0.05, Z = 1.96) using simple random sampling?
Although it is ideal to have data with which to estimate σ2, the range is often approximately equal
to 4 σ, so one-fourth of the range might be used as an approximate value of σ.
$$\sigma \approx \frac{\text{range}}{4} = \frac{100}{4} = 25$$

$$n_0 = \frac{Z^2\sigma^2}{B^2} = \frac{1.96^2 \times 25^2}{3^2} \approx 266.8; \qquad n = \frac{1}{\frac{1}{n_0} + \frac{1}{N}} = \frac{1}{\frac{1}{266.8} + \frac{1}{1000}} = \frac{1}{0.00375 + 0.00100} \approx 210.6$$

Therefore, about 211 samples are needed to estimate μ with a bound on the error of estimation B = $3.
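The same calculation can be sketched in Python. With Z = 1.96 exactly, n works out to about 211; the common shortcut Z ≈ 2 gives n₀ ≈ 277.8 and n ≈ 218 instead, so the answer depends on which convention is used:

```python
import math

def n_for_mean(sigma, B, N, Z=1.96):
    """Sample size to estimate a population mean to within B under SRS."""
    n0 = (Z ** 2 * sigma ** 2) / B ** 2
    return math.ceil(1 / (1 / n0 + 1 / N))

# hospital example: sigma ~ range/4 = 25, B = $3, N = 1000 accounts
n = n_for_mean(sigma=25, B=3, N=1000)   # 211 with Z = 1.96; 218 with Z = 2
```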
To establish the number of sample units to measure to estimate the population total τ at a desired
level of precision B with simple random sampling, we set $B = Z\sqrt{N(N-n)\frac{\sigma^2}{n}}$ and solve for n:

$$n = \frac{1}{\frac{1}{n_0} + \frac{1}{N}}; \qquad n_0 = \frac{N^2 Z^2 \sigma^2}{B^2}$$
And as with establishing n for the population mean, if N is large relative to n, the finite population correction factor can be ignored and the formula for sample size reduces to n0.
Example: What sample size is necessary to estimate the caribou population we examined to
within B = 2000 animals of the true total with 90% confidence (α = 0.10)?
Using s2 = 919 from earlier and Z = 1.645, which is the upper α = 0.10/2 = 0.05 point of the
normal distribution:
$$n_0 = \frac{N^2 Z^2 s^2}{B^2} = \frac{286^2 \times 1.645^2 \times 919}{2000^2} \approx 51; \qquad n = \frac{1}{\frac{1}{51} + \frac{1}{286}} \approx 44.$$
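As a sketch, the caribou calculation uses the same formula with the values given in the text (s² = 919, B = 2000, N = 286):

```python
import math

def n_for_total(sigma2, B, N, Z=1.645):
    """Sample size to estimate a population total to within B under SRS."""
    n0 = (N ** 2 * Z ** 2 * sigma2) / B ** 2   # here n0 is roughly 51
    return math.ceil(1 / (1 / n0 + 1 / N))

n = n_for_total(sigma2=919, B=2000, N=286)     # 44
```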
Systematic Sampling
Occasionally, selecting sample units at random can introduce logistical challenges that preclude
collecting data efficiently. If we suspect that the chances of introducing a bias are low or if ideal
dispersion of sample units throughout the population is a higher priority than minimizing potential
biases, then it might be most appropriate to choose samples non-randomly. As in simple random
sampling, systematic sampling is a type of probability sampling where each element in the
population has a known and equal probability of being selected. The probabilistic framework is
maintained through the selection of one or more random starting points. Although sometimes more convenient, systematic sampling provides less protection against introducing biases into the sample than simple random sampling does.
Estimators for systematic sampling and simple random sampling are identical; only the method of
sample selection differs. Therefore, systematic sampling is used most often to simplify the
process of selecting a sample or to ensure ideal dispersion of sample units throughout the
population.
Advantages:
• Easy to implement
• Maximum dispersion of sample units throughout the population
• Requires minimum knowledge of the population
Disadvantages:
How it is implemented:
To select a 1-in-k systematic sample, choose one sample unit at random from among the first k units in the sampling frame and then select every kth unit thereafter; choosing a random starting point among the first 10 units and taking every 10th unit thereafter, for example, yields a 1-in-10 systematic sample. The example in the figure is a 1-in-8 sample drawn from a population of N = 300; this yields n = 28. Note that the sample size drawn will vary and depends on the location of the first unit drawn.
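A minimal sketch of the selection procedure, assuming a one-dimensional sampling frame (with N = 300 and k = 8, a linear frame yields n = 37 or 38 depending on the random start; a spatial frame like the figure's can yield other sample sizes):

```python
import random

def systematic_sample(frame, k):
    """Draw a 1-in-k systematic sample: random start among the first k
    units, then every kth unit thereafter."""
    start = random.randrange(k)             # random starting point, index 0..k-1
    return frame[start::k]

frame = list(range(300))                    # N = 300 sampling units
sample = systematic_sample(frame, k=8)      # n = 37 or 38, depending on the start
```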
The population mean (μ) is estimated with: $\hat{\mu} = \frac{\sum_{i=1}^{n} y_i}{n}$
The population variance (σ²) is estimated with: $s^2 = \frac{\sum_{i=1}^{n}(y_i - \bar{y})^2}{n-1}$
Variance of the estimate $\hat{\mu}$ is: $\widehat{\mathrm{var}}(\hat{\mu}) = \left(\frac{N-n}{N}\right)\frac{s^2}{n}$.

Standard error of $\hat{\mu}$ is: $SE(\hat{\mu}) = \sqrt{\left(\frac{N-n}{N}\right)\frac{s^2}{n}}$.

Variance of the estimate $\hat{\tau}$ is: $\widehat{\mathrm{var}}(\hat{\tau}) = N^2\,\widehat{\mathrm{var}}(\hat{\mu}) = N^2\left(\frac{s^2}{n}\right)\left(\frac{N-n}{N}\right)$.

Standard error of $\hat{\tau}$ is: $SE(\hat{\tau}) = \sqrt{N^2\left(\frac{s^2}{n}\right)\left(\frac{N-n}{N}\right)}$.
The population proportion (p) is estimated with: $\hat{p} = \hat{\mu} = \frac{\sum_{i=1}^{n} y_i}{n}$.

Variance of the estimate $\hat{p}$ is: $\widehat{\mathrm{var}}(\hat{p}) = \left(\frac{N-n}{N}\right)\frac{s^2}{n-1} = \left(\frac{N-n}{N}\right)\frac{\hat{p}(1-\hat{p})}{n-1}$.