APPLIED STATISTICS

COURSE CODE: ENEE1019IU

Lecture 4:
Chapter 4: Probability and Distribution
(2 credits: 2 for lectures, 0 for lab work)

Instructor: TRAN THANH TU
Email: [email protected]
CHAPTER 3: DESCRIPTIVE STATISTICS (RECALL)

• 3.1. Measures of location
• 3.2. Measures of variability
• 3.3. Measures of distribution shape, relative location, and detecting outliers
• 3.4. Five-number summaries and box plots
• 3.5. Measures of association between two variables (self-reading)
CHAPTER 4: PROBABILITY AND DISTRIBUTION

• 4.1. Introduction to probability
• 4.2. Discrete probability distributions
• 4.3. Continuous probability distributions
• 4.4. Sampling and sampling distributions (self-reading)
4.1. INTRODUCTION TO PROBABILITY

• Random experiments:
A random experiment is a process that generates well-defined experimental outcomes. On any single repetition or trial, the outcome that occurs is determined completely by chance.
By specifying all the possible experimental outcomes, we identify the sample space for a random experiment. The sample space for a random experiment is the set of all experimental outcomes.
An experimental outcome is also called a sample point to identify it as an element of the sample space.
• Assigning probabilities:
The three approaches most frequently used are the classical, relative frequency, and subjective methods. Regardless of the method used, two basic requirements for assigning probabilities must be met:
1. The probability assigned to each experimental outcome must be between 0 and 1: 0 ≤ P(Ei) ≤ 1 for each sample point Ei.
2. The probabilities of all the experimental outcomes must sum to 1: P(E1) + P(E2) + . . . + P(En) = 1.

The classical method of assigning probabilities is appropriate when all the experimental outcomes are equally likely. If n experimental outcomes are possible, a probability of 1/n is assigned to each experimental outcome. When using this approach, the two basic requirements for assigning probabilities are automatically satisfied.
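A minimal sketch of the classical method in Python (the fair-die example is hypothetical, not from the slides):

```python
from fractions import Fraction

# Classical method: n equally likely outcomes each get probability 1/n.
# Hypothetical experiment: one roll of a fair six-sided die.
sample_space = [1, 2, 3, 4, 5, 6]
n = len(sample_space)
probs = {outcome: Fraction(1, n) for outcome in sample_space}

# Both basic requirements hold automatically:
assert all(0 <= p <= 1 for p in probs.values())   # 0 <= P(Ei) <= 1
assert sum(probs.values()) == 1                   # probabilities sum to 1
print(probs[1])  # 1/6
```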
The relative frequency method of assigning probabilities is appropriate when data are available to estimate the proportion of the time the experimental outcome will occur if the experiment is repeated a large number of times. When using this approach, the two basic requirements for assigning probabilities are automatically satisfied.

The subjective method of assigning probabilities is most appropriate when one cannot realistically assume that the experimental outcomes are equally likely and when little relevant data are available.
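A short sketch of the relative frequency method (the observed data below are hypothetical, not from the slides):

```python
from collections import Counter

# Relative frequency method: estimate P(x) as
# (number of times x was observed) / (total number of observations).
# Hypothetical data: number of customers waiting, recorded on 20 days.
observations = [0, 1, 1, 2, 0, 3, 1, 2, 2, 1, 0, 1, 4, 2, 1, 0, 2, 3, 1, 2]
n = len(observations)
probs = {x: count / n for x, count in sorted(Counter(observations).items())}

print(probs)                # {0: 0.2, 1: 0.35, 2: 0.3, 3: 0.1, 4: 0.05}
print(sum(probs.values()))  # 1.0 -> both basic requirements are satisfied
```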
• Events and their probabilities:
An event is a collection of sample points (e.g. "less than 10", "equal to 10", "more than 10", etc.).
The probability of any event is equal to the sum of the probabilities of the sample points in the event.
→ We calculate the probability of a particular event by adding the probabilities of the sample points (experimental outcomes) that make up the event.
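A small worked sketch of this rule (the two-dice distribution is a hypothetical illustration, not from the slides):

```python
from fractions import Fraction

# P(event) = sum of the probabilities of the sample points in the event.
# Hypothetical experiment: the total of two fair dice; f(t) = (6 - |t - 7|) / 36.
f = {total: Fraction(6 - abs(total - 7), 36) for total in range(2, 13)}

p_less_than_10 = sum(p for total, p in f.items() if total < 10)
print(p_less_than_10)  # 5/6
```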
• Some basic relationships of probability:
Complement: the complement of A is defined to be the event consisting of all sample points that are not in A. The complement of A is denoted by Aᶜ. Computing probability using the complement: P(A) = 1 − P(Aᶜ).
Addition law: helpful when we are interested in knowing the probability that at least one of two events occurs. That is, with events A and B we are interested in knowing the probability that event A or event B or both occur.
The addition law involves two concepts related to the combination of events: the union of events and the intersection of events.
- The union of A and B is the event containing all sample points belonging to A or B or both. The union is denoted by A ∪ B.
- The intersection of A and B is the event containing the sample points belonging to both A and B. The intersection is denoted by A ∩ B.
The addition law provides a way to compute the probability that event A or event B or both occur. In other words, the addition law is used to compute the probability of the union of two events:
P(A ∪ B) = P(A) + P(B) − P(A ∩ B)
Two events are said to be mutually exclusive if the events have no sample points in common; in that case P(A ∩ B) = 0, so the addition law reduces to P(A ∪ B) = P(A) + P(B).
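A one-step numeric sketch of the addition law (the probabilities are hypothetical, not from the slides):

```python
# Addition law: P(A ∪ B) = P(A) + P(B) - P(A ∩ B).
p_a, p_b, p_a_and_b = 0.30, 0.40, 0.12
p_a_or_b = p_a + p_b - p_a_and_b
print(p_a_or_b)  # 0.58
```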
• Conditional probability: suppose we have an event A with probability P(A). If we obtain new information and learn that a related event, denoted by B, has already occurred, we will want to take advantage of this information by calculating a new probability for event A.
This new probability of event A is called a conditional probability and is written P(A ∣ B) ("the probability of A given B"):
P(A ∣ B) = P(A ∩ B) / P(B)
• Multiplication law: used to compute the probability of the intersection of two events. The multiplication law is based on the definition of conditional probability:
P(A ∩ B) = P(B) P(A ∣ B) = P(A) P(B ∣ A)
• Independent events: two events A and B are independent if P(A ∣ B) = P(A), or equivalently P(B ∣ A) = P(B); otherwise, they are dependent.
For independent events, the multiplication law reduces to:
P(A ∩ B) = P(A) P(B)
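A compact sketch tying conditional probability, the multiplication law, and independence together (all numbers hypothetical, not from the slides):

```python
# Conditional probability and the multiplication law.
p_b = 0.40          # P(B)
p_a_and_b = 0.12    # P(A ∩ B)

p_a_given_b = p_a_and_b / p_b   # P(A|B) = P(A ∩ B) / P(B)
print(p_a_given_b)              # 0.3

# Multiplication law recovers the intersection:
print(p_b * p_a_given_b)        # 0.12 = P(A ∩ B)

# Independence check: A and B are independent iff P(A ∩ B) = P(A) * P(B).
p_a = 0.30
print(abs(p_a_and_b - p_a * p_b) < 1e-12)  # True -> independent in this example
```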
• Bayes' theorem: applied when probability revision is needed (new information about the events becomes available).
Two-event case:
P(A1 ∣ B) = P(A1) P(B ∣ A1) / [P(A1) P(B ∣ A1) + P(A2) P(B ∣ A2)]
n mutually exclusive events:
P(Ai ∣ B) = P(Ai) P(B ∣ Ai) / [P(A1) P(B ∣ A1) + P(A2) P(B ∣ A2) + . . . + P(An) P(B ∣ An)]
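A short sketch of Bayesian revision (the prior and conditional probabilities are hypothetical, not from the slides):

```python
# Bayes' theorem: revise prior probabilities P(Ai) after event B occurs.
prior = {"A1": 0.65, "A2": 0.35}        # mutually exclusive, exhaustive events
likelihood = {"A1": 0.02, "A2": 0.05}   # P(B | A1), P(B | A2)

# Denominator: P(B) by the law of total probability.
p_b = sum(prior[a] * likelihood[a] for a in prior)

posterior = {a: prior[a] * likelihood[a] / p_b for a in prior}
print(posterior)  # revised (posterior) probabilities P(A1 | B), P(A2 | B)
```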
4.2. DISCRETE PROBABILITY DISTRIBUTIONS

• Random variables
• Developing discrete probability distributions
• Bivariate distributions, covariance, and financial portfolios
• Binomial probability distribution
• Poisson probability distribution
• Hypergeometric probability distribution
• Random variables: a random variable is a numerical description of the outcome of an experiment.
Discrete random variables: a random variable that may assume either a finite number of values or an infinite sequence of values such as 0, 1, 2, . . . is referred to as a discrete random variable.
Continuous random variables: a random variable that may assume any numerical value in an interval or collection of intervals is called a continuous random variable. Experimental outcomes based on measurement scales such as time, weight, distance, and temperature can be described by continuous random variables.
• Developing discrete probability distributions:
The probability distribution for a random variable describes how probabilities are distributed over the values of the random variable. For a discrete random variable x, a probability function, denoted by f(x), provides the probability for each value of the random variable.
The relative frequency method of assigning probabilities to values of a random variable is applicable when reasonably large amounts of data are available. We then treat the data as if they were the population and use the relative frequency method to assign probabilities to the experimental outcomes. The use of the relative frequency method to develop discrete probability distributions leads to what is called an empirical discrete distribution.
A primary advantage of defining a random variable and its probability distribution is that once the probability distribution is known, it is relatively easy to determine the probability of a variety of events that may be of interest to a decision maker.
In the development of a probability function for any discrete random variable, the following two conditions must be satisfied:
f(x) ≥ 0 and Σ f(x) = 1
The simplest example of a discrete probability distribution given by a formula is the discrete uniform probability distribution: f(x) = 1/n, where n is the number of values the random variable may assume.
Expected value: the expected value, or mean, of a random variable is a measure of the central location for the random variable. For a discrete random variable x:
E(x) = µ = Σ x f(x)
Variance: summarizes the variability in the values of a random variable:
Var(x) = σ² = Σ (x − µ)² f(x)
Recall (Lecture 3): the sample (weighted) mean is x̄ = Σ(xi wi) / Σ wi, and in the Lecture 3 example, variance = 256/4 = 64.
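A runnable sketch of these two formulas (the distribution below is hypothetical, not from the slides):

```python
# Expected value and variance of a discrete random variable.
# Hypothetical distribution: x = number of units sold per day.
f = {0: 0.18, 1: 0.39, 2: 0.24, 3: 0.14, 4: 0.04, 5: 0.01}

assert abs(sum(f.values()) - 1.0) < 1e-9            # condition: Σ f(x) = 1

mu = sum(x * p for x, p in f.items())               # E(x) = Σ x f(x)
var = sum((x - mu) ** 2 * p for x, p in f.items())  # Var(x) = Σ (x − µ)² f(x)
print(mu, var)  # 1.5 1.25
```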
• Bivariate distributions, covariance, correlation:
A probability distribution involving two random variables is called a bivariate probability distribution. Recall (Lecture 3):
Covariance of random variables x and y: σ_xy = [Var(x + y) − Var(x) − Var(y)] / 2
Correlation of random variables x and y: ρ_xy = σ_xy / (σ_x σ_y)
Linear combination of random variables x and y, z = ax + by, in which a is the coefficient of x and b is the coefficient of y:
E(ax + by) = a E(x) + b E(y)
Var(ax + by) = a² Var(x) + b² Var(y) + 2ab σ_xy
Linear combination of p random variables, z = a1 x1 + a2 x2 + . . . + ap xp:
Expected value: E(z) = a1 E(x1) + a2 E(x2) + . . . + ap E(xp)
Variance: Var(z) = the sum of ai² Var(xi) over all i, plus 2 ai aj σ_xixj summed over all pairs i < j.
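A short portfolio-style sketch of these formulas (all summary values hypothetical, not from the slides):

```python
import math

# Correlation from covariance, and a linear combination z = a*x + b*y.
e_x, e_y = 0.08, 0.05          # expected values (e.g. two asset returns)
var_x, var_y = 0.0025, 0.0001  # variances
cov_xy = -0.0003               # covariance σ_xy

corr = cov_xy / (math.sqrt(var_x) * math.sqrt(var_y))
print(corr)  # -0.6

a, b = 0.4, 0.6                # portfolio weights
e_z = a * e_x + b * e_y                                   # E(ax + by)
var_z = a**2 * var_x + b**2 * var_y + 2 * a * b * cov_xy  # Var(ax + by)
print(round(e_z, 3), round(var_z, 6))  # 0.062 0.000292
```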
• Binomial probability distribution: a discrete probability distribution that has many applications. It is associated with a multiple-step experiment that we call the binomial experiment: a sequence of n identical, independent trials, each with two possible outcomes (success or failure), where the probability of success p does not change from trial to trial.
The probability distribution associated with the number of successes x is called the binomial probability distribution. For the binomial probability distribution, x is a discrete random variable with the probability function f(x), applicable for values of x = 0, 1, 2, . . . , n:
f(x) = [n! / (x!(n − x)!)] p^x (1 − p)^(n−x)
with E(x) = np and Var(x) = np(1 − p).
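A minimal sketch of the binomial probability function (n and p below are hypothetical, not from the slides):

```python
from math import comb

def binom_pmf(x: int, n: int, p: float) -> float:
    """Binomial probability f(x) = [n! / (x!(n-x)!)] * p**x * (1-p)**(n-x)."""
    return comb(n, x) * p**x * (1 - p)**(n - x)

n, p = 10, 0.30
print(round(binom_pmf(3, n, p), 4))                             # P(x = 3) ≈ 0.2668
print(round(sum(binom_pmf(x, n, p) for x in range(n + 1)), 6))  # 1.0
print(n * p, n * p * (1 - p))                                   # E(x) = 3.0, Var(x) = 2.1
```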
• Poisson probability distribution: useful in estimating the number of occurrences over a specified interval of time or space (e.g. the number of arrivals at a car wash in one hour, the number of repairs needed in 10 miles of highway, or the number of leaks in 100 miles of pipeline).
- Telecommunications example: telephone calls arriving in a system.
- Astronomy example: photons arriving at a telescope.
- Chemistry example: the molar mass distribution of a living polymerization.
- Biology example: the number of mutations on a strand of DNA per unit length.
- Management example: customers arriving at a counter or call center.
- Finance and insurance example: the number of losses or claims occurring in a given period of time.
- Earthquake seismology example: an asymptotic Poisson model of seismic risk for large earthquakes.
The Poisson probability function, where µ is the expected number of occurrences in an interval:
f(x) = (µ^x e^(−µ)) / x!, for x = 0, 1, 2, . . .
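A minimal sketch of the Poisson probability function (the arrival rate is hypothetical, not from the slides):

```python
from math import exp, factorial

def poisson_pmf(x: int, mu: float) -> float:
    """Poisson probability f(x) = mu**x * e**(-mu) / x!."""
    return mu**x * exp(-mu) / factorial(x)

# Hypothetical example: on average mu = 10 cars arrive at a car wash per hour.
print(round(poisson_pmf(5, 10), 4))  # P(exactly 5 arrivals in an hour) ≈ 0.0378
```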
4.3. CONTINUOUS PROBABILITY DISTRIBUTIONS

• A fundamental difference separates discrete and continuous random variables in terms of how probabilities are computed.
- For a discrete random variable, the probability function f(x) provides the probability that the random variable assumes a particular value.
- With continuous random variables, the counterpart of the probability function is the probability density function, also denoted by f(x).
→ The difference is that the probability density function does not directly provide probabilities.
→ The area under the graph of f(x) corresponding to a given interval does provide the probability that the continuous random variable x assumes a value in that interval.
• Continuous probability distribution: a probability distribution in which the random variable x can take on any value (is continuous).
• Because there are infinitely many values that x could assume, the probability of x taking on any one specific value is zero.
The continuous distributions covered in this section:
- Uniform probability distribution
- Normal probability distribution
- Normal approximation of binomial probabilities
- Exponential probability distribution
• Uniform probability distribution:
Uniform distributions are probability distributions with equally likely outcomes. There are two types of uniform distributions: discrete and continuous.
- In a discrete uniform distribution, each outcome is discrete.
- In a continuous uniform distribution, outcomes are continuous and infinite.
For a continuous uniform random variable on the interval [a, b], the density function is:
f(x) = 1/(b − a) for a ≤ x ≤ b, and f(x) = 0 elsewhere
with E(x) = (a + b)/2 and Var(x) = (b − a)²/12.
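A small sketch of computing a uniform probability as an area (the flight-time interval is hypothetical, not from the slides):

```python
# Continuous uniform on [a, b]: P(c <= x <= d) is the area of a rectangle
# of height 1/(b - a) over the part of [c, d] that lies inside [a, b].
a, b = 120.0, 140.0   # hypothetical: a flight time between 120 and 140 minutes

def uniform_prob(c: float, d: float) -> float:
    lo, hi = max(a, c), min(b, d)
    return max(0.0, hi - lo) / (b - a)

print(uniform_prob(120, 130))        # 0.5
print((a + b) / 2)                   # E(x) = 130.0
print(round((b - a) ** 2 / 12, 2))   # Var(x) ≈ 33.33
```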
• Normal probability distribution: the most important probability distribution for describing a continuous random variable.
Normal curve: the form, or shape, of the normal distribution is illustrated by the bell-shaped normal curve.
Characteristics:
- The entire family of normal distributions is differentiated by two parameters: the mean µ and the standard deviation σ.
- The highest point on the normal curve is at the mean, which is also the median and mode of the distribution.
- The mean of the distribution can be any numerical value: negative, zero, or positive. (The slide illustrates three normal distributions with the same standard deviation but three different means: −10, 0, and 20.)
- The normal distribution is symmetric, with the shape of the normal curve to the left of the mean a mirror image of the shape of the normal curve to the right of the mean. The tails of the normal curve extend to infinity in both directions and theoretically never touch the horizontal axis. Because it is symmetric, the normal distribution is not skewed; its skewness measure is zero.
- The standard deviation determines how flat and wide the normal curve is. Larger values of the standard deviation result in wider, flatter curves, showing more variability in the data. (The slide illustrates two normal distributions with the same mean but different standard deviations.)
- Probabilities for the normal random variable are given by areas under the normal curve. The total area under the curve for the normal distribution is 1. Because the distribution is symmetric, the area under the curve to the left of the mean is 0.5 and the area under the curve to the right of the mean is 0.5.
- The percentages of values in some commonly used intervals are:
a. 68.3% of the values of a normal random variable are within plus or minus one standard deviation of its mean.
b. 95.4% of the values of a normal random variable are within plus or minus two standard deviations of its mean.
c. 99.7% of the values of a normal random variable are within plus or minus three standard deviations of its mean.
Standard normal probability distribution: a random variable that has a normal distribution with a mean of zero and a standard deviation of one is said to have a standard normal probability distribution.
Computing probabilities for any normal probability distribution: convert x to the standard normal random variable z:
z = (x − µ) / σ
→ We can interpret z as the number of standard deviations that the normal random variable x is from its mean µ.
→ Use the standard normal table to look up the corresponding probability.
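In code, the table lookup can be replaced by the standard normal cumulative distribution function; a sketch (the N(10, 2) example is hypothetical, not from the slides):

```python
from math import erf, sqrt

def normal_cdf(x: float, mu: float = 0.0, sigma: float = 1.0) -> float:
    """P(X <= x) for X ~ N(mu, sigma), via the z-transform and the error function."""
    z = (x - mu) / sigma   # number of standard deviations x lies from the mean
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

mu, sigma = 10, 2
print(normal_cdf(10, mu, sigma))                                        # 0.5 (left of mean)
print(round(normal_cdf(12, mu, sigma) - normal_cdf(8, mu, sigma), 3))   # ≈ 0.683 (±1σ)
print(round(normal_cdf(16, mu, sigma) - normal_cdf(4, mu, sigma), 3))   # ≈ 0.997 (±3σ)
```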
(The original slides reproduce the cumulative standard normal probability table for z < 0 and for z > 0, used for these lookups.)
• Normal approximation of binomial probabilities:
The binomial random variable is the number of successes in the n trials, and probability questions pertain to the probability of x successes in the n trials. Binomial probability refers to the probability of exactly x successes on n repeated trials in an experiment which has two possible outcomes (commonly called a binomial experiment).
In cases where np ≥ 5 and n(1 − p) ≥ 5, the binomial distribution can be approximated by a normal distribution with:
µ = np and σ = √(np(1 − p))
Because a continuous distribution is used to approximate a discrete one, a continuity correction of ±0.5 is applied: P(x = k) is approximated by the normal area between k − 0.5 and k + 0.5.
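A sketch of the approximation with the continuity correction (n and p are hypothetical, not from the slides; the helper repeats the normal CDF from the previous sketch so the block is self-contained):

```python
from math import erf, sqrt

def normal_cdf(x: float, mu: float, sigma: float) -> float:
    return 0.5 * (1.0 + erf((x - mu) / (sigma * sqrt(2.0))))

n, p = 100, 0.10
mu, sigma = n * p, sqrt(n * p * (1 - p))   # valid: np = 10 >= 5 and n(1-p) = 90 >= 5

# P(x = 12) ≈ normal area between 11.5 and 12.5 (continuity correction):
approx = normal_cdf(12.5, mu, sigma) - normal_cdf(11.5, mu, sigma)
print(round(approx, 4))  # ≈ 0.1062
```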
• Exponential probability distribution:
In probability theory and statistics, the exponential distribution is the probability distribution of the time between events in a Poisson point process, i.e., a process in which events occur continuously and independently at a constant average rate (the time between arrivals at a car wash, the time required to load a truck, the distance between major defects in a highway, and so on). It is a particular case of the gamma distribution.
Density function, where µ is the mean (e.g. the expected time between occurrences):
f(x) = (1/µ) e^(−x/µ), for x ≥ 0
Computing probabilities for the exponential distribution: as with any continuous probability distribution, the area under the curve corresponding to an interval provides the probability that the random variable assumes a value in that interval. The cumulative probability is:
P(x ≤ x0) = 1 − e^(−x0/µ)
Relationship between the Poisson and exponential distributions:
- The Poisson distribution provides an appropriate description of the number of occurrences per interval.
- The exponential distribution provides a description of the length of the interval between occurrences.
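A short sketch of exponential probabilities (the 15-minute mean is hypothetical, not from the slides):

```python
from math import exp

mu = 15.0   # hypothetical: loading a truck takes 15 minutes on average

def expon_cdf(x0: float) -> float:
    """P(x <= x0) = 1 - e**(-x0 / mu)."""
    return 1.0 - exp(-x0 / mu)

print(round(expon_cdf(6), 4))                  # P(x <= 6) ≈ 0.3297
print(round(expon_cdf(18) - expon_cdf(6), 4))  # P(6 <= x <= 18) ≈ 0.3691
```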
4.4. SAMPLING AND SAMPLING DISTRIBUTIONS (SELF-READING)

• The sampled population is the population from which the sample is drawn.
• The target population is the population we want to make inferences about.
• A frame is a list of the elements that the sample will be selected from.

Topics:
- The sampling problems
- Selecting a sample
- Point estimation
- Introduction to sampling distributions
- Sampling distribution of x̄
- Sampling distribution of p̄
- Properties of point estimators
- Other sampling methods
Numerical characteristics of a population are called parameters.
Sampling problems:
- Sampling errors
- Lack of sample representativeness
- Difficulty in estimating the sample size
- Lack of knowledge about the sampling process
- Lack of resources
- Lack of cooperation
- Lack of existing appropriate sampling frames for larger populations
• Selecting a sample:
Sampling from a finite population: a simple random sample of size n from a finite population of size N is a sample selected such that each possible sample of size n has the same probability of being selected.
One procedure for selecting a simple random sample from a finite population is to use a table of random numbers to choose the elements for the sample one at a time in such a way that, at each step, each of the elements remaining in the population has the same probability of being selected. Sampling n elements in this way will satisfy the definition of a simple random sample from a finite population.
Sampling can be done without replacement (a selected element is not returned to the population, so it can appear in the sample at most once) or with replacement (a selected element is returned and may be selected again). A sketch of both follows.
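A minimal sketch of simple random sampling in Python (the population size and values are hypothetical, not from the slides):

```python
import random

population = list(range(1, 2501))   # a finite population of N = 2500 elements
n = 30

sample_without = random.sample(population, n)   # without replacement: no repeats
sample_with = random.choices(population, k=n)   # with replacement: repeats possible
print(len(set(sample_without)) == n)            # True
```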
• Selecting a sample:
Sampling from an infinite population: when we cannot develop a list of all the elements that could be produced, the population is considered infinite. A random sample of size n from an infinite population is a sample selected such that the following conditions are satisfied:
1. Each element selected comes from the same population.
2. Each element is selected independently.
Care and judgment must be exercised in implementing the selection process for obtaining a random sample from an infinite population, to prevent selection bias.
• Point estimation:
To estimate the value of a population parameter, we compute a corresponding characteristic of the sample, referred to as a sample statistic. By making these computations, we perform the statistical procedure called point estimation.
We refer to:
- the sample mean x̄ as the point estimator of the population mean µ,
- the sample standard deviation s as the point estimator of the population standard deviation σ, and
- the sample proportion p̄ as the point estimator of the population proportion p.
The numerical value obtained for x̄, s, or p̄ is called the point estimate.
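A small sketch of computing point estimates from a sample (the data are hypothetical, not from the slides):

```python
import statistics

sample = [52, 48, 55, 60, 47, 51, 49, 58, 53, 50]   # e.g. salaries in $1000s
in_category = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]        # 1 = element has the trait

x_bar = statistics.mean(sample)              # point estimate of µ
s = statistics.stdev(sample)                 # point estimate of σ (n − 1 divisor)
p_bar = sum(in_category) / len(in_category)  # point estimate of p
print(x_bar, round(s, 2), p_bar)             # 52.3 4.27 0.5
```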
• Introduction to sampling distributions:
If we consider the process of selecting a simple random sample as an experiment, the sample mean x̄ is the numerical description of the outcome of the experiment.
→ The sample mean x̄ is a random variable.
→ x̄ has a mean or expected value, a standard deviation, and a probability distribution.
Because the various possible values of x̄ are the result of different simple random samples, the probability distribution of x̄ is called the sampling distribution of x̄.
Knowledge of this sampling distribution and its properties will enable us to make probability statements about how close the sample mean x̄ is to the population mean µ.
• Sampling distribution of x̄: the probability distribution of all possible values of the sample mean x̄.
The sampling distribution of x̄ has an expected value or mean, a standard deviation, and a characteristic shape or form.
Expected value of x̄: E(x̄) = µ
Standard deviation of x̄ (also called the standard error of the mean):
- Finite population: σ_x̄ = √((N − n)/(N − 1)) · (σ/√n), where √((N − n)/(N − 1)) is the finite population correction factor.
- Infinite population (the population is large, or somehow cannot be fully identified): the correction factor equals 1, so σ_x̄ = σ/√n.
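A sketch of the standard error computation (all numbers hypothetical, not from the slides):

```python
import math

sigma, n, N = 4000.0, 30, 2500   # population std. dev., sample size, population size

se_infinite = sigma / math.sqrt(n)            # σ/√n, infinite-population form
fpc = math.sqrt((N - n) / (N - 1))            # finite population correction factor
se_finite = fpc * se_infinite
print(round(se_infinite, 1), round(se_finite, 1))  # 730.3 726.0
```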
Form of the sampling distribution of x̄:
- If the population has a normal distribution, the sampling distribution of x̄ is normally distributed for any sample size.
- If the population does not have a normal distribution, the central limit theorem is helpful in identifying the shape of the sampling distribution of x̄: as the sample size becomes larger, the sampling distribution of x̄ can be approximated by a normal distribution.
Practical value of the sampling distribution of x̄:
Whenever a simple random sample is selected and the value of the sample mean is used to estimate the value of the population mean µ, we cannot expect the sample mean to exactly equal the population mean.
The practical reason we are interested in the sampling distribution of x̄ is that it can be used to provide probability information about the difference between the sample mean and the population mean.
Relationship between the sample size and the sampling distribution of x̄:
As the sample size is increased, the standard error of the mean decreases. As a result, a larger sample size provides a higher probability that the sample mean is within a specified distance of the population mean.
• Sampling distribution of p̄: p̄ is the point estimator of the population proportion p.
The sampling distribution of p̄ is the probability distribution of all possible values of the sample proportion p̄.
Expected value of p̄: E(p̄) = p
Standard deviation of p̄:
- Finite population: σ_p̄ = √((N − n)/(N − 1)) · √(p(1 − p)/n)
- Infinite population: σ_p̄ = √(p(1 − p)/n)
Form of the sampling distribution of p̄: the sampling distribution of p̄ can be approximated by a normal distribution whenever:
np ≥ 5 and n(1 − p) ≥ 5
Practical value of the sampling distribution of p̄: it can be used to provide probability information about the difference between the sample proportion and the population proportion.
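A sketch of using the normal approximation for p̄ (p and n are hypothetical, not from the slides):

```python
import math

p, n = 0.60, 100
assert n * p >= 5 and n * (1 - p) >= 5   # approximation conditions hold

se_p = math.sqrt(p * (1 - p) / n)        # standard error of p̄ (infinite population)
z = 0.05 / se_p                          # how many standard errors 0.05 represents
prob = math.erf(z / math.sqrt(2))        # P(|p̄ − p| <= 0.05) = Φ(z) − Φ(−z)
print(round(se_p, 3), round(prob, 2))    # 0.049 0.69
```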
• Properties of point estimators: three properties of good point estimators are unbiasedness, efficiency, and consistency.
Unbiasedness: the sample statistic θ̂ is an unbiased estimator of the population parameter θ if E(θ̂) = θ.
Efficiency: given two unbiased point estimators of the same population parameter, the point estimator with the smaller standard error is said to have greater relative efficiency than the other.
Consistency: a point estimator is consistent if the values of the point estimator tend to become closer to the population parameter as the sample size becomes larger. In other words, a large sample size tends to provide a better point estimate than a small sample size.
(The slides illustrate unbiasedness and relative efficiency with sampling-distribution figures.)
• Other sampling methods:
- Stratified random sampling
- Cluster sampling
- Systematic sampling
- Convenience sampling
- Judgment sampling
