Sample Design

The document discusses sampling design and the different types of sampling methods. It describes probability sampling methods such as simple random, cluster, and stratified sampling, in which every member of the population has an equal chance of being selected, and non-probability sampling methods such as convenience and judgemental sampling, which do not use random selection and carry a higher risk of bias. The key is choosing the right sampling method based on your research goals and population.

A sample design is a definite plan for obtaining a sample from a given population. It refers to the technique or procedure the researcher would adopt in selecting items for the sample. Sample design also leads to a procedure that determines the number of items to be included in the sample, i.e., the size of the sample.

Sampling design can be divided into two main categories: probability and non-probability sampling. In probability sampling, every person in the target population has an equal chance of being selected for the sample. In non-probability sampling, some individuals in the group will be more likely to be selected than others.

Take a close look at your research goals (including the level of accuracy desired and your budget) to determine which type of sampling will best help you achieve those goals.

Probability sampling

Probability sampling ensures that every member of the population has an equal probability of being selected for your research. There are four main types of probability sampling: simple random, cluster, systematic, and stratified.


Simple random sampling

As the name suggests, simple random sampling is both simple and random. With this method, you may choose your sample with a random number generator or by drawing from a hat, for example, to provide you with a completely random subset of your group. This allows you to draw generalized conclusions about the whole population based on the data provided by the subset (sample).

As an example, let's say that your population is the employees of your company. You take each of your 1,500 employees and randomly assign a number to each one. Then, using a random number generator, you select 150 numbers. Those 150 employees are your sample.
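
A minimal sketch of that procedure in Python (the roster of IDs is hypothetical; the 1,500 and 150 figures follow the example above):

    import random

    # Hypothetical roster: employee IDs 1 through 1500.
    population = list(range(1, 1501))

    # Draw 150 IDs uniformly at random, without replacement, so every
    # employee has the same chance of ending up in the sample.
    sample = random.sample(population, k=150)
    print(len(sample))  # 150
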
Cluster sampling
In cluster sampling, your population is divided into subgroups (clusters) that have similar characteristics to the whole population. Instead of selecting individuals, you randomly select one or more entire subgroups for your sample.

There is a higher probability of error with this method because there could be differences between the clusters, and you cannot guarantee that the sample you use is truly representative of the entire population you're studying.

Let's look at your company again. The 1,500 employees are spread across 25 offices with close to the same number of employees in each office. You use cluster sampling to choose the employees of four offices as your sample.
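
A rough sketch of that cluster draw, assuming a hypothetical list of (employee, office) pairs:

    import random

    # Hypothetical data: 1,500 employees spread evenly across 25 offices.
    employees = [(f"emp_{i}", f"office_{i % 25}") for i in range(1500)]
    offices = sorted({office for _, office in employees})

    # Randomly select 4 whole offices (clusters)...
    chosen = set(random.sample(offices, k=4))

    # ...and take every employee in those offices as the sample.
    sample = [name for name, office in employees if office in chosen]
    print(len(sample))  # 240 here, since each office has 60 employees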


Stratified sampling

In stratified random sampling, you divide a population into smaller subgroups called strata. The strata are based on shared attributes of the individuals, such as income, age range, or education level. This method is used when you believe those attributes matter for what you are measuring and you want each subgroup represented in proportion to the broader population.

Back at your company, you have 900 male employees and 600 female employees. You want your sample to represent the gender balance in your company, so you sort employees into two strata based on gender. Using random sampling within each group, you select 90 men and 60 women for a sample of 150 people.
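
A sketch of that proportional stratified draw, using hypothetical name lists for the two strata:

    import random

    # Hypothetical strata: 900 men and 600 women.
    men = [f"m_{i}" for i in range(900)]
    women = [f"w_{i}" for i in range(600)]

    # Sample each stratum at the same 10% rate so the sample keeps
    # the company's 60/40 gender balance.
    sample = random.sample(men, k=90) + random.sample(women, k=60)
    print(len(sample))  # 150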

Non-probability sampling

In non-probability samples, the criteria for selection are not random, and the chances of being included in the sample are not equal. While it's easier and less expensive to perform non-probability sampling, there is a higher risk of sampling bias, and inferences about the full population are weaker.

Non-probability sampling is most often used in exploratory or qualitative research, where the goal is to develop an understanding of a small or underrepresented population.

There are five main types of non-probability sampling: convenience, judgemental (purposive), voluntary response, snowball, and quota.


Convenience sampling

In convenience sampling, the sample consists of individuals who are most accessible to the researcher. It may be easy to collect initial information this way, but the results cannot be generalized to your target population.

Back at your company, you're in a rush to get some preliminary data about your idea. You turn to your colleagues in the marketing department as your sample and collect information from them. This sample gives you initial data but is not representative of the views of all employees in the company.


Judgemental or purposive sampling

In this type of non-probability sampling, the researcher uses their expertise to choose a sample that they believe will be most useful in reaching their research objectives. Judgemental sampling is frequently used in qualitative research, where statistical inference is unnecessary or the population is quite small. To be effective, the sample must have clear inclusion and exclusion criteria.

For example, the latest research you're performing for your company explores the experiences of employees with disabilities. You purposively choose employees with support needs as your sample to assess their experiences and needs in your organization.
Voluntary response sampling

Based on ease of access like convenience sampling, voluntary response sampling is when people volunteer to participate in your research. Because some people are more likely to volunteer than others, there will likely be some bias involved.

Consider your company again. You send a survey out to all employees to gather information about employee satisfaction. The survey is voluntary, and the employees who respond have strong opinions. There's no way to be certain that these responses are indicative of the opinions of all employees.


Snowball sampling

The snowball sampling method is used when your population is difficult to access. You reach out to the members of the population that you can reach and then count on these participants to recruit others for your study. The number of participants “snowballs” as each participant recruits more.

Your company produces an app designed to help people with mental illnesses. Due to HIPAA laws, there is no efficient legal or ethical way to collect a list of individuals who might participate in your research. You reach out to people you know who suffer from depression and ask them to refer others who may be interested in trying your app for research purposes and providing you with information about their experiences.
Quota sampling

With quota sampling, your population is divided into categories determined by the researcher. Depending on the research, you may need a particular number of males or females, or you may need your sample to represent a certain income level or age range. Bias may occur simply because of the categories chosen by the researchers.

An example of quota sampling would be deciding that your research would be easiest if you reached out to C-level executives for their input on the new management app you've designed. By choosing only the highest-level managers, you may be omitting input from other management levels that could be valuable. However, if C-suite managers are the target audience for your app, this is a fast way to gain insights.
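
As an illustration (the names, roles, and quotas below are hypothetical), quota filling simply takes whoever is available until each category's quota is met; note that nothing here is random:

    # Hypothetical quotas and a first-come stream of willing respondents.
    quotas = {"executive": 2, "manager": 2}
    respondents = [
        ("ann", "executive"), ("bob", "manager"), ("cia", "executive"),
        ("dan", "manager"), ("eve", "executive"), ("fay", "manager"),
    ]

    sample = []
    counts = {role: 0 for role in quotas}
    for name, role in respondents:
        if role in quotas and counts[role] < quotas[role]:
            sample.append(name)
            counts[role] += 1

    print(sample)  # ['ann', 'bob', 'cia', 'dan'] -- first come, first sampled
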
There are five key steps in sampling design.

1. Define the target population

What population do you want to study? Determine who will provide you with the most useful information for your research and help you meet your objectives.

2. Choose a sample frame

A sample frame is the group of people from which you'll pull your sample.

3. Select a sampling method

Choose a sampling method based on your research needs. Take your time and find the best method for your specific study.

4. Determine the sample size

Use a sample size calculator or a standard formula to determine the necessary sample size for your study; a sketch follows this list.

5. Execute the sample

Implement your research plan according to your chosen methodology.
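
For step 4, one common choice (an illustrative assumption here, not the only option) is Cochran's formula for estimating a proportion, with a finite-population correction:

    import math

    def sample_size(population, z=1.96, margin=0.05, p=0.5):
        """Cochran's sample-size formula with finite-population correction.

        z: z-score for the confidence level (1.96 for 95% confidence).
        margin: acceptable margin of error.
        p: expected proportion; 0.5 is the most conservative choice.
        """
        n0 = (z ** 2) * p * (1 - p) / margin ** 2   # infinite-population size
        n = n0 / (1 + (n0 - 1) / population)        # correct for finite N
        return math.ceil(n)

    print(sample_size(1500))  # about 306 for the 1,500-employee company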

Reliability is consistency across time (test-retest reliability), across items (internal consistency), and across researchers (inter-rater reliability). Validity is the extent to which the scores actually represent the variable they are intended to measure. Validity is a judgment based on various types of
evidence.

The purpose of establishing reliability and validity in research is essentially to ensure that data are sound and replicable and that the results are accurate. Evidence of validity and reliability is a prerequisite for assuring the integrity and quality of a measurement instrument (Kimberlin & Winterstein, 2008).
Reliability can be estimated by comparing different versions of the same measurement. Validity
is harder to assess, but it can be estimated by comparing the results to other relevant data or
theory.

Validity:

Validity refers to whether a test measures what it aims to measure.

The validity of a research study refers to how well the results among the study participants
represent true findings among similar individuals outside the study. This concept of validity
applies to all types of clinical studies, including those about prevalence, associations,
interventions, and diagnosis.

Validity isn't determined by a single statistic, but by a body of research that demonstrates the relationship between the test and the behavior it is intended to measure. Four frequently cited types are content validity, criterion-related validity, construct validity, and face validity; a fuller list of seven follows below.

Validity is how researchers talk about the extent to which results represent reality. Research methods, quantitative or qualitative, are methods of studying real phenomena; validity refers to how much of the phenomenon they measure versus how much “noise,” or unrelated information, is captured by the results.

Validity and reliability make the difference between “good” and “bad” research reports. Quality
research depends on a commitment to testing and increasing the validity as well as the reliability
of your research results.

Any research worth its salt is concerned with whether what is being measured is what is intended to be measured, and considers the ways in which observations are influenced by the circumstances in which they are made.

The basis on which our conclusions are made plays an important role in addressing the broader substantive issues of any given study.

For this reason, we are going to look at the various validity types that have been formulated as a part of legitimate research methodology.

Here are the 7 key types of validity in research:


1. Face validity
2. Content validity
3. Construct validity
4. Internal validity
5. External validity
6. Statistical conclusion validity
7. Criterion-related validity

1. Face validity

Face validity is how valid your results seem based on what they look like. This is the least
scientific method of validity, as it is not quantified using statistical methods.

Face validity is not validity in a technical sense of the term.  It is concerned with whether it
seems like we measure what we claim.

Here we look at how valid a measure appears on the surface and make subjective judgments based on that.

For example:

- Imagine you give a survey that appears valid to the respondents, with questions selected because they look valid to the administrator.
- The administrator asks a group of random people, untrained observers, whether the questions appear valid to them.

In research it's never enough to rely on face judgments alone; more quantifiable methods of validity are necessary in order to draw acceptable conclusions. There are many instruments of measurement to consider, so face validity is useful in cases where you need to quickly distinguish one approach from another.

Face validity should never be trusted on its own merits.

2. Content validity

Content validity is whether or not the measure used in the research covers all of the content in
the underlying construct (the thing you are trying to measure).

This is also a subjective measure, but unlike face validity we ask whether the content of a
measure covers the full domain of the content. If a researcher wanted to measure introversion,
they would have to first decide what constitutes a relevant domain of content for that trait.

Content validity is considered a subjective form of measurement because it still relies on people's perceptions to measure constructs that would otherwise be difficult to assess. Where content validity distinguishes itself (and becomes useful) is through its use of experts in the field or of individuals belonging to a target population. Such a study can be made more objective through the use of rigorous statistical tests.

For example, you could have a content validity study that informs researchers how well items used in a survey represent their content domain, how clear they are, and the extent to which they maintain the theoretical factor structure assessed by the factor analysis.

3. Construct validity

A construct represents a collection of behaviors that are associated in a meaningful way to create
an image or an idea invented for a research purpose. Construct validity is the degree to which
your research measures the construct (as compared to things outside the construct).

Depression, for example, is a construct that represents a personality trait which manifests itself in behaviors such as oversleeping, loss of appetite, difficulty concentrating, and so on.

The existence of a construct is made manifest by observing the collection of related indicators, and any one indicator may be associated with several constructs: a person with difficulty concentrating may have A.D.D. but not depression.

Construct validity is the degree to which inferences can be made from operationalization
(connecting concepts to observations) in your study to the constructs on which those
operationalizations are based.  To establish construct validity you must first provide evidence
that your data supports the theoretical structure.

You must also show that you control the operationalization of the construct, in other words,
show that your theory has some correspondence with reality.

- Convergent Validity – the degree to which an operation is similar to other operations it should theoretically be similar to.
- Discriminant Validity – whether a scale adequately differentiates, or fails to differentiate, between groups that should or should not differ on theoretical grounds or on the basis of previous research.
- Nomological Network – a representation of the constructs of interest in a study, their observable manifestations, and the interrelationships among and between these. According to Cronbach and Meehl, a nomological network has to be developed for a measure in order for it to have construct validity.
- Multitrait-Multimethod Matrix – the six major considerations when examining construct validity, according to Campbell and Fiske. These include evaluations of convergent validity and discriminant validity; the others are trait-method unit, multi-method/trait, truly different methodology, and trait characteristics.

4. Internal validity
Internal validity refers to the extent to which the independent variable can accurately be stated to
produce the observed effect.

If the effect on the dependent variable is due only to the independent variable(s), then internal validity is achieved. This is the degree to which an observed result can be attributed to your manipulation rather than to other factors.

Put another way, internal validity is how you can tell that your research “works” in a research
setting. Within a given study, does the variable you change affect the variable you’re studying?


5. External validity

External validity refers to the extent to which the results of a study can be generalized beyond the sample; that is, the extent to which you can apply your findings to other people and settings.

Think of this as the degree to which a result can be generalized. How well do the research results
apply to the rest of the world?

A laboratory setting (or other research setting) is a controlled environment with fewer variables.
External validity refers to how well the results hold, even in the presence of all those other
variables.

6. Statistical conclusion validity

Statistical conclusion validity is a determination of whether a relationship or covariation exists between the cause and effect variables.

This type of validity requires:

- Adequate sampling procedures
- Appropriate statistical tests
- Reliable measurement procedures

This is the degree to which a conclusion is credible or believable.

7. Criterion-related validity
Criterion-related validity (also called instrumental validity) is a measure of the quality of your
measurement methods.  The accuracy of a measure is demonstrated by comparing it with a
measure that is already known to be valid.

In other words, your measure is supported if it has a high correlation with other measures that are known to be valid on the basis of previous research.

For this to work, you must know that the criterion has been measured well, and be aware that appropriate criteria do not always exist.

What you are doing is checking the performance of your operationalization against a criterion.

The criterion you use as a standard of judgment accounts for the different approaches you would use:

- Predictive Validity – the operationalization's ability to predict what it is theoretically able to predict; the extent to which a measure predicts expected outcomes.
- Concurrent Validity – the operationalization's ability to distinguish between groups that it theoretically should be able to distinguish between. This is where a test correlates well with a measure that has been previously validated.
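
A minimal sketch of this check, correlating made-up scores on a new measure with scores on a previously validated criterion measure (statistics.correlation requires Python 3.10+):

    from statistics import correlation  # Python 3.10+

    # Hypothetical scores for the same eight respondents.
    new_measure = [12, 15, 9, 20, 14, 17, 11, 18]
    criterion = [11, 16, 10, 19, 13, 18, 12, 17]

    # A strong positive correlation is evidence of criterion-related validity.
    r = correlation(new_measure, criterion)
    print(round(r, 2))  # close to 1 for these illustrative numbers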

When we look at validity in survey data, we are asking whether the data represent what we think they should represent.

We depend on the respondents' mindset and attitude to give us valid data. In other words, we depend on them to answer all questions honestly and conscientiously. We also depend on whether they are able to answer the questions that we ask. When questions are asked that the respondent cannot comprehend, the data do not tell us what we think they do.

Reliability refers to the consistency of a measure. Psychologists consider three types of consistency: over time (test-retest reliability), across items (internal consistency), and across different researchers (inter-rater reliability).

There are two types of reliability – internal and external reliability.

- Internal reliability assesses the consistency of results across items within a test.
- External reliability refers to the extent to which a measure varies from one use to another.

The term reliability in psychological research refers to the consistency of a quantitative research study or measuring test.
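
As an illustration of internal consistency, here is a small sketch of Cronbach's alpha computed from made-up item scores (rows are respondents, columns are test items):

    def cronbach_alpha(rows):
        """Cronbach's alpha for rows of per-item scores."""
        k = len(rows[0])          # number of items
        items = list(zip(*rows))  # scores grouped by item

        def var(xs):              # sample variance (n - 1 denominator)
            m = sum(xs) / len(xs)
            return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

        total_var = var([sum(r) for r in rows])  # variance of total scores
        return (k / (k - 1)) * (1 - sum(var(i) for i in items) / total_var)

    # Hypothetical scores: 5 respondents x 4 items, each rated 1-5.
    scores = [
        [4, 5, 4, 4],
        [2, 3, 2, 3],
        [5, 5, 4, 5],
        [3, 3, 3, 2],
        [1, 2, 2, 1],
    ]
    print(round(cronbach_alpha(scores), 2))  # ~0.97; values near 1 = consistent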
