Chapter 5

Measurement Techniques and Sampling Methods

Introduction
 As you know from earlier chapters, a variable is a condition or characteristic that
can take on different values or categories. A key point now is that many variables
are difficult to accurately measure, and if psychologists do not measure the
variables they study accurately, then their research is flawed. It’s like the GIGO
principle: garbage in, garbage out. If you fail to measure your variables
accurately, you will obtain useless data and, therefore, useless results.

Defining Measurement
 When we measure, we attempt to identify and characterize the dimensions,
quantity, capacity, or degree of something. More formally, measurement refers to
the act of measuring, and it is conducted by assigning symbols or numbers to
something according to a specific set of rules. This definition is based on the work
of the famous Harvard psychologist Stanley Smith Stevens.

Scales of Measurement
 In addition to helping define measurement, Stevens showed that measurement
can be categorized by the type of information that is communicated by the
symbols assigned to the variables of interest.
 Based on his work, we usually identify four levels of measurement, which provide
different kinds and amounts of information. Stevens called these four levels
of measurement the “scales of measurement”: nominal scale, ordinal
scale, interval scale, and ratio scale. You can also refer to these as variables:
nominal variables, ordinal variables, interval variables, and ratio variables.

 Nominal Scale
 The nominal scale is the simplest and most basic type of measurement. It is a nonquantitative scale
of measurement because it identifies types rather than amounts of something. A
nominal scale uses symbols, such as words or numbers, to classify or categorize
the values of a variable (i.e., nominal scaled variables) into groups or types.
Numbers can be used to label the categories of a nominal variable, but these
numbers serve only as markers, not as indicators of amount or quantity. For
example, you might mark the categories of the variable “gender” with 1 = female
and 2 = male.
 Ordinal Scale
 An ordinal scale is a rank-order scale of measurement. Any variable where the
levels can be ranked (but you don’t know if the distance between the levels is the
same) is an ordinal variable. It allows you to determine which person is higher or
lower on a variable of interest, but it does not allow you to know exactly how
much higher or lower one person is than another. Some examples of
ordinal-level variables are the order of finish in a marathon and social class
(e.g., high, medium, and low).
 Interval Scale
 The third level of measurement, the interval scale, has equal distances between
adjacent numbers on the scale (called equal intervals) as well as the
characteristics of the lower-level scales (i.e., marking/naming of levels and rank
ordering of levels). For example, the difference between 1° and 2° Fahrenheit is
the same amount of temperature as the distance between 50° and 51°
Fahrenheit. Although the distance between adjacent points on an interval scale is
equal, an interval scale does not possess an absolute zero point. The zero point is
somewhat arbitrary. You can see this by noting that neither 0° Celsius nor 0°
Fahrenheit means no temperature. Zero degrees Celsius (0°C) is the freezing
point of water, and zero degrees Fahrenheit (0°F) is 32 degrees below freezing.
 Ratio Scale
 The fourth level of measurement, the ratio scale, is the highest (i.e., most
quantitative) level of measurement. A ratio scale has an absolute zero point as
well as the characteristics of the lower-level scales. It marks/names the values of
the variable (as in nominal scales), provides rank ordering of the values of the
variable (as in ordinal scales), and has equal distances between the values of the
variable (as in interval scales).
 In addition, only ratio scales have a true or absolute zero point (where 0 means
none). Some examples of ratio-level variables are weight, height, response time,
Kelvin temperature, and annual income. If your annual income is zero dollars,
then you earned no annual income.

Psychometric Properties of Good Measurement


 So, what is needed to obtain good measurement? The two major properties of
good measurement are reliability and validity.
 Overview of Reliability and Validity
 Reliability refers to the consistency or stability of the scores of your measurement
instrument. Validity refers to the extent to which your measurement procedure is
measuring what you think it is measuring (and not something else) and whether
you have used and interpreted the scores correctly. If you are going to have
validity, you must have some reliability, but reliability is not enough to ensure
validity.
 Reliability
 Reliability in psychological testing and research refers to the consistency or
stability of scores obtained from assessments or research instruments. It is
quantified using a reliability coefficient, a type of correlation coefficient
indicating how strongly and positively sets of scores relate; values above .70
are conventionally taken to indicate strong consistency.
 Four primary types of reliability are:
 Test-Retest Reliability: Measures the consistency of scores over time by
administering the same test twice after a period and correlating the two sets of
scores. The interval between tests affects the reliability coefficient; longer
intervals may result in lower reliability.
 Equivalent-Forms Reliability: Assesses the consistency of scores on two
different forms of the same test, designed to measure the same thing, by
correlating scores from both forms. This type is common in standardized tests
like the SAT or GRE.
 Internal Consistency Reliability: Evaluates how well items on a test measure
a single construct by assessing the uniformity of responses to all items.
Coefficient alpha (Cronbach’s alpha) is commonly used to report this reliability,
with higher values indicating better measurement consistency.
 Interrater Reliability: Focuses on the consistency of measurements made by
different observers or raters by correlating their assessments or calculating the
percentage of their agreement on observations.
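 To make these ideas concrete, here is a minimal sketch in Python, using
made-up scores, of how three of these coefficients can be computed: test-retest
reliability as a Pearson correlation, coefficient alpha for internal
consistency, and percentage agreement for interrater reliability. All data and
variable names are illustrative assumptions, not values from any real
instrument.

```python
import numpy as np

# Hypothetical scores for six people who took the same test twice (test-retest).
time1 = np.array([10, 12, 9, 15, 11, 14])
time2 = np.array([11, 12, 10, 14, 12, 15])

# Test-retest reliability: the Pearson correlation between the two administrations.
test_retest_r = np.corrcoef(time1, time2)[0, 1]

def cronbach_alpha(items: np.ndarray) -> float:
    """Coefficient alpha for an item-score matrix (rows = people, columns = items)."""
    k = items.shape[1]                          # number of items
    item_var = items.var(axis=0, ddof=1).sum()  # sum of the per-item variances
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the total scores
    return (k / (k - 1)) * (1 - item_var / total_var)

# Hypothetical responses of five people to a four-item scale.
responses = np.array([
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 2, 3],
    [4, 4, 4, 5],
])

# Interrater reliability as percentage agreement between two raters' category
# codes for the same ten observations.
rater1 = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 1])
rater2 = np.array([1, 0, 1, 0, 0, 1, 0, 1, 1, 1])
pct_agreement = (rater1 == rater2).mean() * 100

print(f"Test-retest r:     {test_retest_r:.2f}")
print(f"Cronbach's alpha:  {cronbach_alpha(responses):.2f}")
print(f"Percent agreement: {pct_agreement:.0f}%")
```
 Each printed coefficient can then be judged against the .70 guideline
described above (percentage agreement is usually judged on its own 0–100 scale).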
 Validity
 Validity in psychological measurement refers to the accuracy of the conclusions,
interpretations, or actions based on test scores. Tests here include any
measurement device, such as standardized tests, survey instruments, or
observational coding. It's essential to understand that the instruments
themselves aren't valid or invalid; rather, it's the interpretations and actions
derived from the test scores that bear validity.
 Validity encompasses all aspects of construct validity, which involves verifying
that the constructs like intelligence or depression are accurately represented in
the research. Constructs might also include experimental settings, such as
impoverished neighborhoods, within which the research is conducted. Each
construct requires precise operational definitions to ensure accurate
representation.
 Operationalization defines how constructs are measured in a study. For example,
poverty might be operationally defined through income levels and participation in
welfare programs, while depression could be defined by scores on a psychological
inventory. Validity depends on whether these operational definitions accurately
represent the intended constructs.
 Validation is the ongoing process of gathering evidence to support the
interpretations of test scores. This evidence is gathered by:
 Content Validity: assesses whether the items, tasks, or questions on a test or
instrument adequately represent the construct's domain. This evaluation is based
on expert judgment, typically requiring multiple experts who are well-versed in
the specific construct. These experts analyze the test items to ensure
comprehensive and accurate representation of the construct by addressing the
following questions:
o Face Validity: Do the items appear to directly measure the construct?
This is a preliminary judgment about whether the test seems to measure
what it's supposed to measure.
o Comprehensive Representation: Does the test cover all the relevant
aspects of the construct without omitting any significant areas?
o Relevance: Are all the items relevant to the construct, with no unrelated
content included?
 If the experts conclude that the test meets these criteria—representing the
construct accurately, covering all necessary content, and excluding irrelevant
items—it is considered to have content validity. This form of validity ensures that
the test is suitable for drawing meaningful and accurate conclusions about the
construct in question.
 Internal Structure Validity: concerns whether tests or instruments measure a
single construct or multiple dimensions of a multidimensional construct. Some
tests, like the Rosenberg Self-Esteem Scale, measure a single, global construct
such as self-esteem.
 In contrast, the Harter Self-Esteem Scale measures global self-esteem along with
five other dimensions: social acceptance, scholastic competence, physical
appearance, athletic competence, and behavioral conduct, making it a
multidimensional instrument.
 To analyze the internal structure of a test, researchers often employ factor
analysis, a statistical technique that helps identify the interrelationships among
items and determines how many distinct dimensions (or factors) these items
represent. This analysis reveals whether the test is unidimensional (measuring a
single factor) or multidimensional (measuring multiple factors). Knowing the
number of dimensions is crucial to avoid misinterpretation of the test results.
 Additionally, indexes of homogeneity such as item-to-total correlation and
coefficient alpha (a measure of internal consistency reliability) are used to gauge
how closely related the items are within the same dimension or factor. High
values on these indexes suggest that the items measure the same construct or a
specific dimension of a multidimensional construct effectively.
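 As an illustration of the internal-structure idea, the sketch below (Python,
with fabricated responses) performs one common screening step that often
precedes a full factor analysis: examining the eigenvalues of the items'
correlation matrix and counting how many exceed 1 (the Kaiser criterion). A
complete factor analysis would use dedicated routines; the data here were
constructed so that items 1–2 and items 3–4 form two separate clusters.

```python
import numpy as np

# Fabricated responses: 8 people x 4 items. Items 1-2 were written to hang
# together, and items 3-4 to hang together, independently of the first pair.
items = np.array([
    [4, 5, 2, 2],
    [2, 2, 4, 5],
    [5, 4, 4, 4],
    [3, 3, 1, 2],
    [4, 4, 5, 4],
    [1, 2, 3, 3],
    [5, 5, 2, 1],
    [2, 3, 5, 5],
])

# Correlation matrix of the items, then its eigenvalues (largest first).
corr = np.corrcoef(items, rowvar=False)
eigenvalues = np.linalg.eigvalsh(corr)[::-1]

# Kaiser criterion: retain as many factors as there are eigenvalues above 1.
n_factors = int((eigenvalues > 1).sum())
print("Eigenvalues:", np.round(eigenvalues, 2))
print("Suggested number of dimensions:", n_factors)
```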
 Validity Based on Relations to Other Variables: involves assessing how test scores
correlate with one or more established criteria (standards or benchmarks). This
form of validity evidence uses a validity coefficient, a correlation coefficient
indicating how well the test scores align with the expected criteria in both
direction and magnitude.
 There are several types of validity evidence within this category:
 Criterion-related Validity: This type of evidence assesses how well test scores
predict or relate to a known criterion, such as performance on an established test
or in future scenarios. There are two subtypes:
 Predictive Validity: It measures how well test scores can forecast future
criterion performance. For instance, predicting college success based on test
scores obtained earlier.
 Concurrent Validity: It evaluates how well test scores correspond with current
performance on a related criterion, such as comparing a new depression scale
with the Beck Depression Inventory among a sample where depression is
expected.
 Convergent and Discriminant Validity: These assess the degree to which the
test scores correlate appropriately with scores from other tests measuring the
same or different constructs.
 Convergent Validity: Indicates that scores on the test in question are related to
scores on other, independently measured tests of the same construct.
 Discriminant Validity: Demonstrates that scores on the test are not related to
scores from tests measuring different constructs. It is essential for establishing
that a test is measuring a distinct construct.
 Known Groups Validity: This type involves demonstrating that the test can
distinguish between groups known to differ on the construct it measures. For
example, a gender roles test should show, as hypothesized, that females score
higher on femininity and males score higher on masculinity.
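 Because several of these forms of evidence reduce to correlations, a brief
sketch may help. The Python example below computes hypothetical convergent and
discriminant validity coefficients for an invented new depression scale; the
scores and comparison measures are assumptions made for illustration only.

```python
import numpy as np

# Hypothetical scores for ten people.
new_scale   = np.array([12, 18, 9, 22, 15, 7, 20, 14, 11, 17])   # new depression scale
established = np.array([13, 19, 8, 21, 16, 9, 22, 13, 10, 18])   # established depression measure
vocab_test  = np.array([55, 40, 62, 48, 50, 58, 45, 52, 60, 47]) # unrelated construct

# Convergent evidence: the new scale should correlate strongly with an
# independent measure of the same construct.
convergent_r = np.corrcoef(new_scale, established)[0, 1]

# Discriminant evidence: the new scale should correlate weakly with a
# measure of a different construct.
discriminant_r = np.corrcoef(new_scale, vocab_test)[0, 1]

print(f"Convergent validity coefficient:   {convergent_r:.2f}")
print(f"Discriminant validity coefficient: {discriminant_r:.2f}")
```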

Using Reliability and Validity Information


 When using standardized tests or evaluating empirical research, it's crucial to
consider the norming group—the reference group upon which reported reliability
and validity evidence is based. This group's characteristics significantly impact
the generalizability of the test's reliability and validity:
 Importance of the Norming Group: If the population you plan to use the test with
differs significantly from the norming group, the reliability and validity of the test
for your population may not hold. This discrepancy arises because the test might
not accurately reflect or measure the constructs relevant to a different group.
 Evaluation of Research Articles: When assessing journal articles, consider how
the researchers have handled reliability and validity:
o Appropriateness of Measures: Check if the measures used were suitable
for the study's participants.
o Evidence of Reliability and Validity: Evaluate the amount and quality of
evidence provided to support the measures' reliability and validity.

Sources of Information about Tests


 Mental Measurements Yearbook (MMY): This comprehensive guide offers
reviews and information about a wide range of standardized tests. It is a valuable
resource for assessing the quality and suitability of tests for specific purposes.
 Tests in Print (TIP): This publication serves as an index to the tests covered by
the Mental Measurements Yearbook, providing vital data about the availability
and scope of tests. It helps identify which tests are currently in use and
supported by up-to-date research.
 Online Databases: Important sources of empirical research literature include:
o PsycINFO: A robust database for psychology and related fields, providing
abstracts and citations to scholarly literature.
o PsycARTICLES: Offers full-text articles from journals published by the
American Psychological Association and affiliated journals.
o SocINDEX: Covers the broad spectrum of sociological studies.
o MEDLINE: A comprehensive source for medical information, including
aspects of psychology and psychiatry that intersect with medical fields.
o ERIC: Focuses on educational literature and resources, providing
extensive information relevant to educational testing.
 Specialized Books and Handbooks:
 Handbook of Research Design and Social Measurement by Miller (1991):
This handbook offers insights into the methodologies and measurement tools
used in social science research.
 Tests: A Comprehensive Reference for Assessments in Psychology,
Education, and Business by Maddox (1997): Provides detailed descriptions
and evaluations of various tests used across these fields.
 Taking the Measure of Work: A Guide to Validated Scales for
Organizational Research and Diagnosis by Fields (2002): Focuses on
measurement scales used in organizational settings.
 Measures of Personality and Social Psychological Attitudes by Robinson,
Shaver, and Wrightsman (1991): Contains validated measures for assessing
various psychological and social attitudes.

Sampling Methods
 Whenever you review published research, it is important to critically examine the
sampling methods used (i.e., how the researcher obtained the research
participants) so that you can judge the quality of the study.
 Furthermore, if you ever conduct an empirical research study on your own, you
will need to select research participants and to use the best sampling method
that is appropriate and feasible in your situation.
 In experimental research, random samples are usually not used because the
focus is primarily on the issue of causation, and random assignment is far more
important than random sampling for constructing a strong experimental research
design.
 Conversely, in survey research, random samples often are used and are quite
important if the researcher intends to generalize directly to a population based
on his or her single research study results. Political polls are a common example
where the researcher needs to generalize to a population based on a single
sample.

Terminology used in Sampling


 When discussing sampling in research, it's crucial to grasp some fundamental
concepts to understand how researchers draw conclusions from their studies.
Here's a detailed breakdown of these key terms and their significance in the
sampling process:
 Element: This is the basic unit from which a sample is selected. In the context of
a research population, an element could be an individual person or an item
depending on the study's focus.
 Population: The full set of elements from which the sample is drawn. This
includes everyone or everything that meets the criteria of the study.
 Sampling: This is the process of selecting a subset of elements from a
population to represent the whole. This subset is what we call a sample.
 Representative Sample: A sample that accurately reflects the characteristics
of the population from which it was drawn. It's like a microcosm of the
population, ensuring that the results of the study can be generalized back to the
population.
 Equal Probability of Selection Method (EPSEM): A sampling technique
where each element in the population has an equal chance of being included in
the sample. This method supports the creation of a representative sample by
giving each element a fair chance of selection, reducing bias.
 Statistic vs. Parameter:
o Statistic: A numerical measure that describes an aspect of a sample (e.g.,
average age of participants in the sample).
o Parameter: A numerical measure that describes an aspect of a population
(e.g., average age of all individuals eligible for the study).
 Sampling Error: The difference between a statistic derived from a sample and
the actual parameter of the population. Sampling error highlights the
discrepancies that can arise due to the sample not perfectly representing the
entire population.
 Census: This is the collection of data from every element in the population.
Unlike sampling, a census aims to eliminate sampling error entirely by including
all eligible elements in the data collection.
 Sampling Frame: A list or database containing elements from which the sample
is drawn. This frame is crucial for defining who or what can be included in the
study.
 Response Rate: The percentage of selected individuals who actually participate
in the study. A high response rate is generally indicative of a successful sampling
effort and is vital for the validity of study conclusions.
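 The statistic/parameter/sampling-error distinction is easy to demonstrate in
code. The sketch below (Python, with a fabricated population of ages and a
fixed random seed) draws one simple random sample and shows how far the sample
mean (a statistic) falls from the population mean (a parameter).

```python
import random

random.seed(1)  # fixed seed so the demonstration is reproducible

# Hypothetical population: ages of 1,000 eligible adults.
population = [random.randint(18, 80) for _ in range(1000)]
parameter = sum(population) / len(population)   # population mean (parameter)

# Draw a simple random sample of 50 elements (an EPSEM) and compute the statistic.
sample = random.sample(population, k=50)
statistic = sum(sample) / len(sample)           # sample mean (statistic)

sampling_error = statistic - parameter
print(f"Parameter (population mean age): {parameter:.2f}")
print(f"Statistic (sample mean age):     {statistic:.2f}")
print(f"Sampling error:                  {sampling_error:+.2f}")
```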

Random Sampling Techniques


 The two major types of sampling used in psychological research are random
sampling and nonrandom sampling. When the goal is to generalize from a
specific sample to a population, random sampling methods are preferred because
they produce representative samples.
 Nonrandom sampling methods generally produce biased samples (i.e., samples
that are not representative of a known population). Any particular research
sample might (or might not) be representative, but your chances are much
greater if you use a random sampling method (in particular, if you use an equal
probability of selection method).
 It is especially important that the demographic characteristics of nonrandom
samples be described in detail in research reports so that readers can understand
the exact characteristics of the research participants. Researchers and readers of
reports can then make generalizations based on what the famous research
methodologist (and past APA president) Donald Campbell called proximal
similarity.
 Campbell’s idea is that you can generalize research results to different people,
places, settings, and contexts to the degree that the people in the field are
similar to those described in the research study.
 Simple Random Sampling
 Simple random sampling is a fundamental and straightforward method within the
category of equal probability selection methods (EPSEM). It ensures that every
member of the population has an identical chance of being selected for the
sample, a characteristic crucial for creating representative samples from which
results can be generalized to the broader population.
 How Simple Random Sampling Works
 The Hat Model: This is a classic illustrative method to explain simple random
sampling:
 Write each participant's name on equal-sized slips of paper.
 Place all slips into a hat and mix them thoroughly.
 Draw slips one at a time until the desired sample size is reached.
 This process ensures that each draw is independent and all members have an
equal chance of being selected.
 Sampling Without Replacement: In practice, simple random sampling is
typically done without replacement. Once a name is drawn, it is not returned to
the hat. This approach prevents any individual from being selected more than
once, maintaining the integrity and diversity of the sample.
 Modern Techniques: Using Technology
 While the hat model is illustrative, modern simple random sampling usually
involves technology, especially random number generators, which are more
practical and scalable for large populations:
 Random Number Tables: Before computers, researchers used pre-published
tables of random numbers to select participants.
 Online Random Number Generators: These tools are now widely used for
simple random sampling. Websites like Randomizer.org,
PsychicScience.org/random.aspx, and Random.org allow researchers to generate
random numbers associated with participant identifiers. Here's how you might
use such a tool:
 Enter how many sets of numbers you need (typically one for one sample).
 Specify the number of random numbers needed (equal to your sample size).
 Define the range of numbers that corresponds to your list of participants.
 Opt for numbers in each set to be unique (sampling without replacement).
 Decide whether to sort the numbers (optional based on preference).
 Generate the numbers and match them to participant identifiers to form your
sample.
 Practical Example
 Using the online generator, you might select a sample from a population, such as
past presidents of an association, by matching the randomly generated numbers
to their membership or identifier numbers. This method ensures each selected
individual corresponds to a random number generated, adhering to the principles
of simple random sampling.
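 In code, the hat model reduces to a single call. The sketch below uses
Python's standard-library random.sample, which draws without replacement, so no
member can be chosen twice; the frame of forty numbered members is
hypothetical.

```python
import random

random.seed(42)  # fixed seed so the draw is reproducible

# Hypothetical sampling frame: 40 members, identified by number.
frame = [f"member_{i:02d}" for i in range(1, 41)]

# random.sample draws WITHOUT replacement -- the software equivalent of
# pulling slips from a hat and not returning them.
chosen = random.sample(frame, k=10)
print(chosen)
```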
 Advantages of Simple Random Sampling
 Fairness: Every member of the population has an equal chance of selection.
 Unbiased: Minimizes sampling bias, enhancing the representativeness of the
sample.
 Generalizable: Facilitates reliable generalization from the sample to the
population.
 Stratified Random Sampling
 Stratified random sampling is a type of sampling technique used in research to
ensure that the sample reflects the population more accurately compared to
simple random sampling. This method involves dividing the population into
homogeneous subgroups known as strata before the sample is drawn, and then
performing random sampling within these strata.
 How Stratified Sampling Works:
 Define Strata: The population is divided into different groups based on one or
more stratification variables, which could be categorical (e.g., gender, ethnicity)
or quantitative (e.g., age, income levels). Each group must be mutually exclusive
and collectively exhaustive, covering the entire population without overlap.
 Random Sampling Within Strata: A random sample is then drawn from each
stratum. This can be done using any standard random sampling method, such as
using random number tables or digital random number generators.
 Combining Samples: The individual samples from each stratum are combined
to form the final sample, which will ideally have more balanced characteristics
reflective of the population.
 Stratified sampling can be further classified into two types based on how the
samples are drawn relative to the population proportions:
 Proportional Stratified Sampling: Here, the size of the sample drawn from
each stratum is proportional to the size of the stratum in the population. For
example, if 60% of the population is female, then 60% of the sample is also
female. This method ensures that each subgroup is accurately represented in the
sample.
 Disproportional Stratified Sampling: In this approach, the samples are not
drawn in proportion to the stratum sizes in the population. This method might be
used when certain groups within the population are of particular interest and
need to be oversampled to ensure sufficient representation in the sample.
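 A minimal sketch of proportional stratified sampling, assuming a hypothetical
frame in which 60% of elements are female and 40% male, appears below; the
helper function and its names are illustrative, not a standard library API.

```python
import random

random.seed(7)

# Hypothetical sampling frame with one stratification variable (gender).
frame = ([("F", i) for i in range(600)] +   # 60% of the population is female
         [("M", i) for i in range(400)])    # 40% is male

def proportional_stratified_sample(frame, strata_key, n):
    """Draw n elements so each stratum's share of the sample matches its
    share of the population."""
    strata = {}
    for element in frame:
        strata.setdefault(strata_key(element), []).append(element)
    sample = []
    for members in strata.values():
        n_stratum = round(n * len(members) / len(frame))  # proportional allocation
        sample.extend(random.sample(members, n_stratum))
    return sample

sample = proportional_stratified_sample(frame, strata_key=lambda e: e[0], n=100)
print(sum(1 for s in sample if s[0] == "F"), "females,",
      sum(1 for s in sample if s[0] == "M"), "males")
```
 With n = 100 the printed counts are 60 females and 40 males, mirroring the
population proportions exactly.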
 Advantages of Stratified Sampling:
 Increased Accuracy: Stratified sampling can produce more accurate results
than simple random sampling, particularly when there are significant differences
between strata in the population.
 Efficiency: It often requires a smaller sample size to achieve the same level of
accuracy as simple random sampling, making it more cost-effective.
 Improved Representativeness: By ensuring each subgroup is represented
according to its prevalence in the population (proportional) or according to the
research needs (disproportional), this method can provide a sample that is highly
representative of the population or key groups within it.
 Practical Applications:
 Stratified sampling is particularly useful in scenarios where the population is
diverse and segments of the population are expected to behave differently in
relation to the research study. It is widely used in market research, health
studies, and any context where population subgroups are expected to exhibit
distinct characteristics relevant to the study.
 Cluster Random Sampling
 Cluster random sampling is a versatile and cost-effective sampling method used
especially when it is impractical or too costly to conduct simple or stratified
random sampling. It involves dividing the population into pre-existing segments
or clusters that each contain multiple elements, and then selecting whole clusters
randomly to form the sample.
 Types of Cluster Sampling:
 Cluster sampling can be executed in one of two main formats: one-stage and
two-stage cluster sampling, each suitable for different research needs and
contexts.
 One-Stage Cluster Sampling:
 In this simpler form of cluster sampling, entire clusters are selected randomly,
and all elements within these selected clusters are included in the sample.
 Example: If a researcher selects 15 schools as clusters, every student in each of
these 15 schools would be included in the sample.
 Two-Stage Cluster Sampling:
 This more complex form involves two steps: first, clusters are selected randomly;
second, from each of these chosen clusters, a random sample of elements is
drawn.
 Example: A researcher might first select 30 classrooms randomly and then
randomly choose 10 students from each classroom to form the sample.
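 The classroom example can be sketched directly. In the hypothetical Python
code below, 30 of 100 classrooms (the clusters) are selected at stage one, and
10 students are drawn from each at stage two; cluster sizes and identifiers
are assumptions.

```python
import random

random.seed(3)

# Hypothetical population: 100 classrooms (clusters), each listing 25 students.
clusters = {room: [f"room{room}_student{s}" for s in range(1, 26)]
            for room in range(1, 101)}

# Stage 1: randomly select 30 classrooms.
chosen_rooms = random.sample(list(clusters), k=30)

# Stage 2: randomly select 10 students from each chosen classroom.
sample = [student
          for room in chosen_rooms
          for student in random.sample(clusters[room], k=10)]

print(len(sample), "students sampled")  # 30 clusters x 10 students = 300
```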
 Advantages of Cluster Sampling:
 Cost-Effectiveness: Reduces travel and administrative costs because the data
collection is concentrated within selected clusters.
 Feasibility: Makes sampling possible when it is difficult to list all the elements in
the population but easier to list clusters.
 Scalability: Easily scalable, particularly useful in large, geographically dispersed
populations.
 Equal Probability of Selection Method (EPSEM):
 Cluster sampling can be an EPSEM method, particularly when all clusters are
approximately equal in size or when adjustments are made to ensure each
element has an equal chance of selection.
 If clusters vary significantly in size, advanced weighting techniques may be
necessary to adjust the probabilities and ensure the sample remains
representative of the population.
 Considerations and Challenges:
 Representativeness: To maintain representativeness, it's essential to ensure
that the clusters themselves are as heterogeneous as the overall population,
containing a diverse mix of the population’s characteristics.
 Sampling Bias: There is potential for higher sampling bias if the clusters chosen
are not representative of the population, or if the variability within clusters is
lower than between clusters.
 Systematic Sampling
 Systematic sampling is an efficient alternative to simple random sampling that
often yields similarly representative samples. This method involves a fixed,
regular interval known as the sampling interval, and is particularly useful when a
complete list of the population is available.
 Steps in Systematic Sampling:
 Determine the Sampling Interval (k): This is calculated by dividing the total
population size by the desired sample size. The sampling interval, k, represents
the frequency at which elements are selected from the population to form the
sample.
 Select the Starting Point: Randomly choose a number between 1 and k. This
initial number is your starting point in the population list from which you will
begin sampling.
 Select Every kth Element: After determining the starting point, proceed to
select every kth element in the list to be part of the sample. For example, if
k = 10 and your starting point is 5, your sample will include the 5th, 15th,
25th, and so on, individuals in the population list.
 Example of Systematic Sampling:
 Assuming a population size of 100 and a desired sample size of 10:
 Step 1: Calculate k, which would be 100/10 = 10.
 Step 2: Suppose the random starting point is 5.
 Step 3: The sample would then include individuals numbered 5, 15, 25, 35, ...,
up to 95.
 This method is straightforward and ensures that the sample is spread evenly
throughout the population list.
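 The three steps translate almost line for line into code. This sketch assumes
a numbered population list of 100 and a desired sample size of 10, matching
the worked example above.

```python
import random

random.seed(11)

population = list(range(1, 101))    # numbered population list of size 100
n = 10                              # desired sample size

k = len(population) // n            # Step 1: sampling interval, 100 / 10 = 10
start = random.randint(1, k)        # Step 2: random starting point between 1 and k
sample = population[start - 1::k]   # Step 3: every kth element from the start

print("k =", k, "| start =", start)
print("Sample:", sample)            # e.g., a start of 5 gives 5, 15, ..., 95
```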
 Advantages of Systematic Sampling:
 Simplicity and Speed: Easier to implement than simple random sampling,
especially when dealing with large populations.
 Cost-effective: Reduces the time and resources needed to select a sample.
 Ensures Coverage: Systematically moves through the population, ensuring that
all segments are covered.
 Potential Issue: Periodicity
 Periodicity Issue: This occurs if the arrangement of the population list has a
cyclical pattern that matches the sampling interval. For example, if the list cycles
every 12 people and your interval is also 12, you might end up sampling a biased
subset that doesn't represent the whole population.
 Mitigation: To prevent periodicity, ensure that the list does not have an ordering
that matches the sampling interval or shuffle the list to disrupt any patterns
before sampling.
 Practical Application
 Systematic sampling is particularly useful in industrial and quality control settings
where a continuous flow of production can be sampled at regular intervals. It's
also used in field surveys and when dealing with large databases, where it can be
cumbersome to generate a large number of random numbers.

Nonrandom Sampling Techniques


 Nonrandom sampling methods are often used when random sampling is
impractical due to constraints like time, cost, or access to a complete population
list. While these methods generally produce samples that may not be as
statistically representative of the population as random sampling methods, they
can still provide valuable insights, especially in exploratory or qualitative
research. Here's an overview of four major nonrandom sampling techniques:
 1. Convenience Sampling
 Definition: Involves selecting participants who are readily available or easiest to
reach.
 Example: College students participating in studies for course credit.
 Characteristics: This method is the quickest and least costly but may introduce
significant bias, limiting the generalizability of the findings.
 2. Quota Sampling
 Definition: The researcher defines quotas for certain groups within the
population and fills these quotas using convenience sampling.
 Example: A quota might require sampling equal numbers of participants across
different racial groups.
 Characteristics: Quota sampling attempts to improve representativeness by
ensuring diversity within the sample, though it still relies on nonrandom selection
methods.
 3. Purposive Sampling
 Definition: Researchers select participants based on specific characteristics and
criteria relevant to the research question.
 Example: A study might specifically look for adolescents aged 14–17 diagnosed
with obsessive-compulsive disorder.
 Characteristics: This method is useful for targeting a specific subset of the
population, particularly when studying populations with distinct characteristics
necessary for the research.
 4. Snowball Sampling
 Definition: Existing study participants recruit future participants from among
their acquaintances.
 Example: Research on rare characteristics, such as political power in a
community, where participants may need to help researchers identify other
potential participants.
 Characteristics: Particularly effective for reaching "hidden" populations that are
difficult for researchers to access directly. However, it can introduce bias because
the sample may not be representative of the broader population.
 Key Considerations for Nonrandom Sampling
 Bias and Representativeness: Nonrandom samples are generally less
representative of the population, which can limit the extent to which findings can
be generalized.
 Practicality: These methods are often more feasible than random sampling in
many real-world scenarios, especially when resources are limited or the
population is hard to access.
 Use in Research: They are particularly useful in exploratory, qualitative, or
initial phases of research to gather preliminary data when random sampling is
not possible.

Random Selection and Random Assignment


 Random Selection
 Purpose: The goal of random selection is to create a representative sample from
a larger population. This method ensures that every individual in the population
has an equal chance of being included in the sample.
 Use in Research: Random selection is fundamental in survey research and
studies aiming to generalize findings to a larger population. By utilizing random
sampling techniques (e.g., simple random sampling, stratified random sampling),
researchers can obtain a sample that mirrors the diversity and characteristics of
the entire population.
 Outcome: If executed correctly using an EPSEM (Equal Probability of Selection
Method), random selection results in a sample that is representative of the
population, thereby enhancing the validity of generalizations made from the
study.
 Random Assignment
 Purpose: Random assignment is used to distribute participants in an
experimental study into different groups, such as treatment and control groups,
in a way that makes these groups equivalent at the start of the experiment.
 Use in Research: This technique is essential in experimental research where the
objective is to establish causality between variables. By randomly assigning
participants to groups, researchers ensure that any differences observed in the
outcomes can be attributed to the treatment effect rather than pre-existing
differences among participants.
 Outcome: Random assignment helps in controlling for confounding variables,
thus increasing the internal validity of the experiment. It ensures that the groups
are comparable, which is crucial for testing the effects of the experimental
treatment.
 Key Differences
 Application: Random selection is applied during the sampling phase of research
when deciding who will be part of the study, aiming for a representative
sample. Random assignment, on the other hand, is used during the
experimental design phase to decide how participants are placed into different
study conditions or groups.
 Goal: The primary goal of random selection is representativeness of the sample,
while random assignment focuses on creating equivalent groups for accurate
testing of hypotheses about causal relationships.
 Examples
 Random Selection: Suppose a researcher wants to study dietary habits in Ann
Arbor, Michigan. They might use simple random sampling to select 1,000 adult
residents to participate in the study, ensuring the sample reflects the overall
population.
 Random Assignment: In a clinical trial testing a new drug, researchers start
with a group of volunteers (a sample possibly obtained through convenience
sampling) and use random assignment to allocate equal numbers of participants
to either the treatment group receiving the drug or the control group receiving a
placebo.
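 A random assignment sketch, assuming a hypothetical volunteer sample of 20,
appears below; shuffling the list and splitting it in half gives every
participant an equal chance of landing in either condition.

```python
import random

random.seed(5)

# Hypothetical volunteer sample (perhaps obtained through convenience sampling).
volunteers = [f"participant_{i:02d}" for i in range(1, 21)]

# Shuffle, then split in half: the two groups are equivalent on average
# because chance alone decides who goes where.
random.shuffle(volunteers)
treatment = volunteers[:10]   # receives the drug
control = volunteers[10:]     # receives the placebo

print("Treatment:", treatment)
print("Control:  ", control)
```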

Determining the Sample Size When Random Sampling Is Used


 Key Considerations for Determining Sample Size
 Population Size: If the population is very small (100 people or fewer), it might
be feasible and advisable to include the entire population in the study rather than
a sample.
 Desired Confidence Level and Power: Larger sample sizes decrease the
likelihood of missing an effect or relationship present in the population. They
increase the statistical power of the study, which is the probability of correctly
rejecting the null hypothesis.
 Review of Literature: Checking how many participants were used in similar
studies can provide a benchmark for what might be necessary for your study.
 Use of Sample Size Calculators: Tools like G*Power are invaluable for
calculating sample size based on specific statistical requirements, including
effect size, desired power, and alpha level. These calculators require some
knowledge of inferential statistics; a code sketch of the same kind of
calculation appears after this list.
 Sample Size Tables: Some texts and resources offer tables with recommended
sample sizes based on various assumptions and statistical models. These tables
can serve as a starting point, but adjustments may be needed based on the
specifics of your study.
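 As a code-based alternative to the calculators mentioned above, the sketch
below performs the same kind of power calculation in Python using the
statsmodels library; the effect size, alpha, and power values are illustrative
assumptions for a two-group design.

```python
# A minimal power-analysis sketch (an alternative to G*Power).
from statsmodels.stats.power import TTestIndPower

# Assumptions: two independent groups, a medium effect size (Cohen's d = 0.5),
# alpha = .05 (two-sided), and desired power = .80.
analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.80)

print(f"Required sample size per group: {n_per_group:.0f}")  # about 64
```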
 Additional Factors Influencing Sample Size
 Population Heterogeneity: More diverse populations require larger samples to
accurately reflect that diversity in the study's findings.
 Subgroup Analyses: If the research involves analyzing data separately for
different subgroups (e.g., by gender, age, ethnicity), each subgroup must be
adequately represented in the sample.
 Precision of Estimates: Narrower (more precise) confidence intervals require
larger samples. For instance, estimating a parameter with a 95% confidence
interval of ±4% requires a larger sample than a ±5% interval.
 Strength of Effects: Detecting weaker relationships or smaller effect sizes
generally requires larger samples to distinguish the effect from random noise in
the data.
 Sampling Method Efficiency: Less efficient sampling methods, like cluster
sampling, may require larger samples compared to more efficient methods like
proportional stratified sampling.
 Expected Response Rate: Lower expected response rates necessitate a larger
initial sample size to ensure that enough data is collected, considering not all
selected participants will participate.
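 The precision point above can be made concrete with the standard sample-size
formula for estimating a proportion, n = z^2 * p(1 - p) / E^2. The short
sketch below assumes the most conservative case (p = .5) and a 95% confidence
level.

```python
import math

def n_for_margin(margin: float, z: float = 1.96, p: float = 0.5) -> int:
    """Sample size needed to estimate a proportion within +/- margin
    at 95% confidence (z = 1.96), assuming p = 0.5 (worst case)."""
    return math.ceil(z**2 * p * (1 - p) / margin**2)

print(n_for_margin(0.05))  # +/- 5% -> 385
print(n_for_margin(0.04))  # +/- 4% -> 601 (narrower interval, larger sample)
```
 Tightening the margin from ±5% to ±4% raises the required sample from 385 to
601, illustrating why more precise estimates demand larger samples.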
 Statistical Techniques Requirements
 Different statistical analyses might require different sample sizes to achieve
reliable results. Specific guidelines or tables, such as those in statistical textbooks
or software manuals, can provide detailed recommendations based on the type of
analysis planned.

Sampling in Qualitative Research


 Qualitative research is distinct in its approach to sampling because it seeks depth
over breadth, focusing intensely on understanding the nuances of particular
cases rather than generalizing findings to a larger population. The sampling
strategies used in qualitative studies are tailored to meet the specific goals of
gaining in-depth understanding of phenomena within their natural settings.
 Key Characteristics of Sampling in Qualitative Research
 Purposive Sampling: This is the most common form of sampling in qualitative
research. Researchers deliberately choose participants based on specific
characteristics or qualities that relate to the research question. The goal is to
select individuals, groups, or settings rich in information that are pertinent to the
phenomenon being studied.
 Theoretical Sampling: Often used in grounded theory research, theoretical
sampling involves selecting participants based on emerging results to develop or
refine ongoing theoretical constructs. Selection continues until no new
information is obtained (theoretical saturation).
 Continuous Selection: Unlike quantitative methods where sampling is typically
defined at the beginning of the study, qualitative sampling often involves an
ongoing process of identifying and selecting participants throughout the research
process. This allows the study to adapt and delve deeper into emerging themes
or unexpected insights.
 Mixed Sampling
 The concept of mixed sampling combines qualitative and quantitative methods to
enhance the robustness of the research design. By integrating both approaches,
researchers can leverage the strengths of each:
 Qualitative methods: Provide depth, context, and detailed understanding.
 Quantitative methods: Offer breadth, generalizability, and the ability to
measure and compare.
 Mixed sampling allows for more comprehensive data collection and analysis,
making it possible to address complex research questions that require both
detailed understanding and broader generalizations.
 Application and Benefits
 The adaptability of qualitative sampling methods makes them particularly suited
to exploring complex issues, developing new theories, and providing detailed
insights into human behavior and social processes. These methods are
instrumental in health studies, anthropology, psychology, and other social
sciences where understanding the context and subtleties of human experiences
is crucial.
