Unit 2.2 SEC SYBA
What is sampling?
In survey research, sampling is the process of using a subset of a population to represent the whole population. To help illustrate
this further, let’s look at data sampling methods with examples below.
Let’s say you wanted to do some research on everyone in North America. To ask every person would be almost impossible. Even
if everyone said “yes”, carrying out a survey across different states, in different languages and timezones, and then collecting and
processing all the results, would take a long time and be very costly.
Sampling allows large-scale research to be carried out with a more realistic cost and time-frame because it uses a smaller number
of individuals in the population with representative characteristics to stand in for the whole.
However, when you decide to sample, you take on a new task. You have to decide who is part of your sample list and how to
choose the people who will best represent the whole population. How you go about that is what the practice of sampling is all
about.
Sampling definitions
Although the idea of sampling is easiest to understand when you think about a very large population, it makes sense to use
sampling methods in research studies of all types and sizes. After all, if you can reduce the effort and cost of doing a study, why
wouldn’t you? And because sampling allows you to research larger target populations using the same resources as you would
smaller ones, it dramatically opens up the possibilities for research.
Sampling is a little like having gears on a car or bicycle. Instead of always turning a set of wheels of a specific size and being
constrained by their physical properties, it allows you to translate your effort to the wheels via the different gears, so you’re
effectively choosing bigger or smaller wheels depending on the terrain you’re on and how much work you’re able to do.
Sampling allows you to “gear” your research so you’re less limited by the constraints of cost, time, and complexity that come
with different population sizes.
It allows us to do things like carry out exit polls during elections, map the spread and effects of epidemics across geographical areas, and carry out nationwide census research that provides a snapshot of society and culture.
Types of sampling
Sampling strategies in research vary widely across different disciplines and research areas, and from study to study.
There are two major types of sampling methods: probability and non-probability sampling.
Probability sampling, also known as random sampling, is a kind of sample selection where randomisation is used
instead of deliberate choice. Each member of the population has a known, non-zero chance of being selected.
Non-probability sampling techniques are where the researcher deliberately picks items or individuals for the sample
based on non-random factors such as convenience, geographic availability, or costs.
As we delve into these categories, it’s essential to understand the nuances and applications of each method to ensure that the
chosen sampling strategy aligns with the research goals.
There’s a wide range of probability sampling methods to explore and consider. Here are some of the best-known options.
1. Simple random sampling
With simple random sampling, every element in the population has an equal chance of being selected as part of the sample. It’s
something like picking a name out of a hat. Simple random sampling can be done by anonymising the population – e.g. by
assigning each item or person in the population a number and then picking numbers at random.
Pros: Simple random sampling is easy to do and cheap. Designed to ensure that every member of the population has an equal
chance of being selected, it reduces the risk of bias compared to non-random sampling.
Cons: It offers no control for the researcher and may lead to unrepresentative groupings being picked by chance.
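The "numbers out of a hat" idea above is easy to sketch with Python's standard library. The population list below is a hypothetical illustration; the key point is sampling without replacement, so every member has the same chance of selection.

```python
import random

def simple_random_sample(population, n, seed=None):
    """Draw n members without replacement; every member has an equal chance."""
    rng = random.Random(seed)          # seeded only so the example is reproducible
    return rng.sample(population, n)   # sampling without replacement

# Assign each person a number, then pick numbers at random (hypothetical population):
people = [f"person_{i}" for i in range(1, 2001)]
chosen = simple_random_sample(people, 200, seed=42)
print(len(chosen))  # 200
```

In a real study the "hat" would be your sampling frame, i.e. the full list of the population you can actually reach.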
2. Systematic sampling
With systematic sampling the random selection only applies to the first item chosen. A rule then applies so that every nth item or
person after that is picked.
Best practice is to sort your list in a random way to ensure that selections won’t be accidentally clustered together. This is commonly achieved using a random number generator. If that’s not available, you might instead order your list by something unrelated to what you’re measuring, such as alphabetically by first name, before applying your interval.
Next, you need to decide your sampling interval – for example, if your sample will be 10% of your full list, your sampling
interval is one in 10 – and pick a random start between one and 10 – for example three. This means you would start with person
number three on your list and pick every tenth person.
Pros: Systematic sampling is efficient and straightforward, especially when dealing with populations that have a clear order. It
ensures a uniform selection across the population.
Cons: There’s a potential risk of introducing bias if there’s an unrecognized pattern in the population that aligns with the
sampling interval.
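The interval rule described above is a one-liner with Python slicing. This is a minimal sketch, assuming a hypothetical list of 100 people and a one-in-10 sampling interval; only the starting point is random.

```python
import random

def systematic_sample(population, interval, seed=None):
    """Pick a random start in [0, interval), then every interval-th member after it."""
    rng = random.Random(seed)
    start = rng.randrange(interval)     # the only random choice in the method
    return population[start::interval]  # every nth member from the start

# A 1-in-10 interval over 100 people yields a 10% sample:
people = [f"person_{i}" for i in range(1, 101)]
sample = systematic_sample(people, 10, seed=3)
print(len(sample))  # 10
```

Note that if the list has a hidden periodic pattern matching the interval, this sketch inherits the bias described in the cons above.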
3. Stratified sampling
Stratified sampling involves random selection within predefined groups. It’s a useful method for researchers wanting to
determine what aspects of a sample are highly correlated with what’s being measured. They can then decide how to subdivide
(stratify) it in a way that makes sense for the research.
For example, suppose you want to measure the height of students at a university where 80% of students are female and 20% are male. We
know that gender is highly correlated with height, and if we took a simple random sample of 200 students (out of the 2,000 who
attend the university), we could by chance get 200 females and not one male. This would bias our results and we would
underestimate the height of students overall. Instead, we could stratify by gender and make sure that 20% of our sample (40
students) are male and 80% (160 students) are female.
Pros: Stratified sampling enhances the representation of all identified subgroups within a population, leading to more accurate
results in heterogeneous populations.
Cons: This method requires accurate knowledge about the population’s stratification, and its design and execution can be more
intricate than other methods.
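The 80/20 gender example can be sketched in Python. The student records, strata labels and proportional-allocation helper below are hypothetical illustrations; note that with awkward proportions, rounding each stratum's share can make the total differ slightly from the target n.

```python
import random

def stratified_sample(population, strata_key, n, seed=None):
    """Sample within each stratum in proportion to its share of the population."""
    rng = random.Random(seed)
    strata = {}
    for item in population:
        strata.setdefault(strata_key(item), []).append(item)
    sample = []
    for members in strata.values():
        # proportional allocation; rounding may shift the total slightly off n
        share = round(n * len(members) / len(population))
        sample.extend(rng.sample(members, share))
    return sample

# 2,000 students: 80% female, 20% male, as in the height example above
students = [("F", i) for i in range(1600)] + [("M", i) for i in range(400)]
sample = stratified_sample(students, lambda s: s[0], 200, seed=7)
females = sum(1 for s in sample if s[0] == "F")
print(females, len(sample) - females)  # 160 40
```

Guaranteeing 160 females and 40 males removes the chance of the all-female fluke a simple random sample could produce.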
4. Cluster sampling
With cluster sampling, groups rather than individual units of the target population are selected at random for the sample. These
might be pre-existing groups, such as people in certain zip codes or students belonging to an academic year.
Cluster sampling can be done by selecting the entire cluster, or in the case of two-stage cluster sampling, by randomly selecting
the cluster itself, then selecting at random again within the cluster.
Pros: Cluster sampling is economically beneficial and logistically easier when dealing with vast and geographically dispersed
populations.
Cons: Due to potential similarities within clusters, this method can introduce a greater sampling error compared to other
methods.
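Two-stage cluster sampling can be sketched as two rounds of random selection: first pick clusters, then pick units within them. The clusters below (students grouped by a hypothetical academic year) are illustrative assumptions.

```python
import random

def two_stage_cluster_sample(clusters, n_clusters, per_cluster, seed=None):
    """Stage 1: randomly pick clusters. Stage 2: randomly pick units within each."""
    rng = random.Random(seed)
    chosen = rng.sample(list(clusters), n_clusters)             # stage 1: clusters
    sample = []
    for name in chosen:
        sample.extend(rng.sample(clusters[name], per_cluster))  # stage 2: units
    return sample

# Hypothetical pre-existing groups, e.g. students by academic year
clusters = {year: [f"{year}-student_{i}" for i in range(50)]
            for year in ["FY", "SY", "TY"]}
sample = two_stage_cluster_sample(clusters, 2, 10, seed=1)
print(len(sample))  # 20
```

Selecting whole clusters (one-stage cluster sampling) would simply skip the second `rng.sample` call and take every unit in each chosen cluster.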
The non-probability sampling methodology doesn’t offer the same bias-removal benefits as probability sampling, but there are
times when these types of sampling are chosen for expediency or simplicity. Here are some forms of non-probability sampling
and how they work.
1. Convenience sampling
People or elements in a sample are selected on the basis of their accessibility and availability. If you are doing a research survey
and you work at a university, for example, a convenience sample might consist of students or co-workers who happen to be on
campus with open schedules who are willing to take your questionnaire.
This kind of sample can have value, especially if it’s done as an early or preliminary step, but significant bias will be introduced.
Pros: Convenience sampling is the most straightforward method, requiring minimal planning, making it quick to implement.
Cons: Due to its non-random nature, the method is highly susceptible to bias, and the results often generalize poorly to the real world.
2. Quota sampling
Like the probability-based stratified sampling method, this approach aims to achieve a spread across the target population by
specifying who should be recruited for a survey according to certain groups or criteria.
For example, your quota might include a certain number of males and a certain number of females. Alternatively, you might want
your samples to be at a specific income level or in certain age brackets or ethnic groups.
Pros: Quota sampling ensures certain subgroups are adequately represented, making it great for when random sampling isn’t
feasible but representation is necessary.
Cons: The selection within each quota is non-random, and researchers’ discretion can influence who is recruited, both of which strongly increase the risk of bias.
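A sketch of how quotas fill in practice, assuming a hypothetical stream of walk-in respondents tagged with a group label. The non-random nature noted in the cons is visible in the code: whoever arrives first is counted, with no randomisation at all.

```python
def quota_sample(respondents, group_key, quotas):
    """Accept respondents in arrival order until each group's quota is full."""
    counts = {g: 0 for g in quotas}
    sample = []
    for person in respondents:        # non-random: first come, first counted
        g = group_key(person)
        if g in counts and counts[g] < quotas[g]:
            sample.append(person)
            counts[g] += 1
        if counts == quotas:          # stop once every quota is filled
            break
    return sample

# Hypothetical stream of walk-in respondents: every third person is male
stream = [("M", i) if i % 3 == 0 else ("F", i) for i in range(100)]
sample = quota_sample(stream, lambda p: p[0], {"M": 10, "F": 10})
print(len(sample))  # 20
```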
3. Purposive sampling
Participants for the sample are chosen consciously by researchers based on their knowledge and understanding of the research
question at hand or their goals.
Also known as judgment sampling, this technique is unlikely to result in a representative sample, but it is a quick and fairly easy
way to get a range of results or responses.
Pros: Purposive sampling targets specific criteria or characteristics, making it ideal for studies that require specialised
participants or specific conditions.
Cons: It’s highly subjective and based on researchers’ judgment, which can introduce biases and limit the study’s real-world
application.
4. Snowball sampling
With this approach, people recruited to be part of a sample are asked to invite those they know to take part, who are then asked to
invite their friends and family and so on. The participation radiates through a community of connected individuals like a snowball
rolling downhill.
Pros: Especially useful for hard-to-reach or secretive populations, snowball sampling is effective for certain niche studies.
Cons: The method can introduce bias due to the reliance on participant referrals, and the choice of initial seeds can significantly
influence the final sample.
A research strategy refers to a step-by-step plan of action that gives direction to the researcher’s thought process. It enables a
researcher to conduct the research systematically and on schedule. The main purpose is to introduce the principal components of
the study such as the research topic, areas, major focus, research design and finally the research methods.
The choice of research strategy depends on factors such as:
● Research questions.
● Research objectives.
● Amount of time available.
● Resources at the researcher’s disposal.
● Philosophical underpinnings of the researcher.
Research Strategy
Research strategy helps a researcher choose the right data collection and analysis procedure. Thus, it is of utmost importance to
choose the right strategy while conducting the research. The following section will focus on the different types of strategies that
can be used.
Figure 1: Types of research strategy
● Qualitative: This strategy is generally used when the researcher wants to understand the underlying reasons for, or people’s opinions on, certain facts or a problem. It does not involve numerical data. It provides insights into the research problem and hence helps in achieving the research objectives. Various methods that can be used include interviews, observations, open-ended surveys and focus group discussions.
● Quantitative: It involves the collection of primary or secondary data which is in numerical form. Under this strategy,
the researcher can collect the data by using questionnaires, polls and surveys or through secondary sources. This
strategy mainly focuses on when, where, what and how often a specific phenomenon occurs.
● Descriptive: This is generally used when the researcher wants to describe a particular situation. This involves
observing and describing the behaviour patterns of either an individual, community or any group. One thing that
distinguishes it from other forms of research strategies is that subjects are observed in a completely unchanged
environment. Under this approach surveys, observations and case studies are mainly used to collect the data and to
understand the specific set of variables.
● Analytical: This involves the use of already available information. Here the researcher, in an attempt to understand a complex problem, studies and analyses the available data. It mainly concerns the cause-and-effect relationship. Scientifically based problem-solving approaches mainly use this strategy.
● Action: This strategy aims at finding solutions to an immediate problem. It is generally applied by an agency, company
or by government in order to address a particular problem and find possible solutions to it. For example, finding which
strategy could best work out to motivate physically challenged students.
● Basic: Under this strategy no generalizations are made, so that the subject can be understood in a better and more precise way. It involves the investigation and analysis of a phenomenon. Although its findings are not directly applicable in the real world, they work towards enhancing the knowledge base.
● Critical: It works towards analyzing the claims regarding a particular society. For example, a researcher can focus on
any conclusion or theory made regarding a particular society or culture and test it empirically through a survey or
experiment.
● Interpretive: This strategy is similar to the qualitative research strategy. However, rather than using hypothesis testing, interpretation is done through a sense-making process. In simple terms, this strategy uses human experience in order to understand phenomena.
● Exploratory: It is mainly used to gain insights into a problem or situation rather than to provide a conclusive solution to the research problem. This research strategy is generally undertaken when there is very little or no earlier study on the research topic.
● Predictive: It deals with developing an understanding of the future of the research problem and has its foundation
based on probability. This is generally very popular among companies and organizations.
The main research strategies can be summarised as follows:
● Descriptive research strategy: Involves observing and describing the behaviour pattern of an individual, community or group. Used when the researcher wants to describe a particular situation. Example: understanding the social status of working women in a specific region of a country.
● Analytical research strategy: Involves the use of already available information. Used to examine the cause-and-effect relationship between two or more variables. Example: understanding the impact of certain policy decisions on the gross domestic product of an economy.
● Action research strategy: Aims at finding solutions to an immediate problem. Applied by agencies, companies or governments in order to address a problem and find possible solutions. Example: determining which strategy would work best to motivate physically challenged students.
● Basic research strategy: Involves investigation and analysis of a phenomenon. Works towards enhancing the existing knowledge base. Example: identifying the reason behind the breakout of certain epidemics in certain regions.
● Critical research strategy: Focuses on critically analyzing the prior findings of a piece of research. Works towards analyzing the claims regarding a particular society or phenomenon. Example: analyzing the claims made by another study regarding temperature conditions in the next 10 years.
● Interpretive research strategy: Uses human experience in order to understand a research problem. Applicable when the researcher wants to understand the underlying reasons for, or people’s opinions on, certain facts or a problem. Example: determining and analysing the problems faced by women in their society or household.
● Exploratory research strategy: Used to gain insights into a problem or situation rather than to provide a conclusive solution. Undertaken when there is very little or no earlier study on the research topic. Example: understanding in depth the problems faced by working women in Northern India.
● Predictive research strategy: Deals with developing an understanding of the future of a research problem, with its foundation based on probability. Used for studies and problems that require prediction of future trends. Example: predicting future sales or an increase in customers before the launch of certain new products.
Research Design
A research design is the overall plan for conducting a study. Key components of a research design include:
● Research Objectives: Clearly define the goals and objectives of the research study. What is the research trying to
achieve or investigate?
● Research Questions or Hypotheses: Formulating specific research questions or hypotheses that address the objectives
of the study. These questions guide the research process.
● Data Collection Methods: Determining how data will be collected, whether through surveys, experiments,
observations, interviews, archival research, or a combination of these methods.
● Sampling: Deciding on the target population and selecting a sample that represents that population. Sampling methods
can vary, such as random sampling, stratified sampling, or convenience sampling.
● Data Collection Instruments: Developing or selecting the tools and instruments needed to collect data, such as
questionnaires, surveys, or experimental equipment.
● Data Analysis: Defining the statistical or analytical techniques that will be used to analyze the collected data. This may
involve qualitative or quantitative methods, depending on the research goals.
● Time Frame: Establishing a timeline for the research project, including when data will be collected, analyzed, and
reported.
● Ethical Considerations: Addressing ethical issues, including obtaining informed consent from participants, ensuring
the privacy and confidentiality of data, and adhering to ethical guidelines.
● Resources: Identifying the resources needed for the research, including funding, personnel, equipment, and access to
data sources.
● Data Presentation and Reporting: Planning how the research findings will be presented and reported, whether
through written reports, presentations, or other formats.
There are various research designs, such as experimental, observational, survey, case study, and longitudinal designs, each suited
to different research questions and objectives. The choice of research design depends on the nature of the research and the goals
of the study.
A well-constructed research design is crucial because it helps ensure the validity, reliability, and generalizability of research
findings, allowing researchers to draw meaningful conclusions and contribute to the body of knowledge in their field.
1. Experimental Method
Controlled Experiments: In controlled experiments, researchers manipulate one or more independent variables and measure their
effects on dependent variables while controlling for confounding factors.
2. Observational Method
Naturalistic Observation: Researchers observe and record behavior in its natural setting without intervening. This method is often
used in psychology and anthropology.
Structured Observation: Observations are made using a predetermined set of criteria or a structured observation schedule.
3. Survey Method
Questionnaires: Researchers collect data by administering structured questionnaires to participants. This method is widely used
for collecting quantitative research data.
Interviews: In interviews, researchers ask questions directly to participants, allowing for more in-depth responses. Interviews can
take on structured, semi-structured, or unstructured formats.
4. Case Study Method
Single-Case Study: Focuses on a single individual or entity, providing an in-depth analysis of that case.
Multiple-Case Study: Involves the examination of multiple cases to identify patterns, commonalities, or differences.
5. Content Analysis
Researchers analyze textual, visual, or audio data to identify patterns, themes, and trends. This method is commonly used in
media studies and social sciences.
6. Historical Research
Researchers examine historical documents, records, and artifacts to understand past events, trends, and contexts.
7. Action Research
Researchers work collaboratively with practitioners to address practical problems or implement interventions in real-world
settings.
8. Ethnographic Research
Researchers immerse themselves in a particular cultural or social group to gain a deep understanding of their behaviors, beliefs,
and practices.
9. Cross-Sectional and Longitudinal Surveys
Cross-sectional surveys collect data from a sample of participants at a single point in time.
Longitudinal surveys collect data from the same participants over an extended period, allowing for the study of changes over
time.
10. Meta-Analysis
Researchers conduct a quantitative synthesis of data from multiple studies to provide a comprehensive overview of research
findings on a particular topic.
11. Mixed-Methods Research
Combines qualitative and quantitative research methods to provide a more holistic understanding of a research problem.
12. Grounded Theory
A qualitative research method that aims to develop theories or explanations grounded in the data collected during the research process.
13. Simulation and Modeling
Researchers use mathematical or computational models to simulate real-world phenomena and explore various scenarios.
14. Survey Experiments
Combines elements of surveys and experiments, allowing researchers to manipulate variables within a survey context.
15. Cross-Sequential Research
Combines elements of cross-sectional and longitudinal research to examine both age-related changes and cohort differences.
The selection of a specific research design method should align with the research objectives, the type of data needed, available
resources, ethical considerations, and the overall research approach. Researchers often choose methods that best suit the nature of
their study and research questions to ensure that they collect relevant and valid data.
Research Design Examples (these are for your understanding; you can write examples of your own too)
Research designs can vary significantly depending on the research questions and objectives. Here are some examples of research
designs across different disciplines:
● Experimental Design: A pharmaceutical company conducts a randomized controlled trial (RCT) to test the efficacy of
a new drug. Participants are randomly assigned to two groups: one receiving the new drug and the other a placebo. The
company measures the health outcomes of both groups over a specific period.
● Observational Design: An ecologist observes the behavior of a particular bird species in its natural habitat to
understand its feeding patterns, mating rituals, and migration habits.
● Survey Design: A market research firm conducts a survey to gather data on consumer preferences for a new product.
They distribute a questionnaire to a representative sample of the target population and analyze the responses.
● Case Study Design: A psychologist conducts a case study on an individual with a rare psychological disorder to gain
insights into the causes, symptoms, and potential treatments of the condition.
● Content Analysis: Researchers analyze a large dataset of social media posts to identify trends in public opinion and
sentiment during a political election campaign.
● Historical Research: A historian examines primary sources such as letters, diaries, and official documents to
reconstruct the events and circumstances leading up to a significant historical event.
● Action Research: A school teacher collaborates with colleagues to implement a new teaching method in their
classrooms and assess its impact on student learning outcomes through continuous reflection and adjustment.
● Ethnographic Research: An anthropologist lives with and observes an indigenous community for an extended period
to understand their culture, social structures, and daily lives.
● Cross-Sectional Survey: A public health agency conducts a cross-sectional survey to assess the prevalence of smoking
among different age groups in a specific region during a particular year.
● Longitudinal Study: A developmental psychologist follows a group of children from infancy through adolescence to
study their cognitive, emotional, and social development over time.
● Meta-Analysis: Researchers aggregate and analyze the results of multiple studies on the effectiveness of a specific
type of therapy to provide a comprehensive overview of its outcomes.
● Mixed-Methods Research: A sociologist combines surveys and in-depth interviews to study the impact of a
community development program on residents’ quality of life.
● Grounded Theory: A sociologist conducts interviews with homeless individuals to develop a theory explaining the
factors that contribute to homelessness and the strategies they use to cope.
● Simulation and Modeling: Climate scientists use computer models to simulate the effects of various greenhouse gas
emission scenarios on global temperatures and sea levels.
● Case-Control Study: Epidemiologists investigate a disease outbreak by comparing a group of individuals who
contracted the disease (cases) with a group of individuals who did not (controls) to identify potential risk factors.
These examples demonstrate the diversity of research designs used in different fields to address a wide range of research
questions and objectives. Researchers select the most appropriate design based on the specific context and goals of their study.
Internal Validity vs. External Validity in Research
How do you determine whether a psychology study is trustworthy and meaningful? Two characteristics that can help you assess
research findings are internal and external validity.
Internal validity measures how well a study is conducted (its structure) and how accurately its results reflect
the studied group.
External validity relates to how applicable the findings are in the real world.
These two concepts help researchers gauge if the results of a research study are trustworthy and meaningful.
Internal Validity
Internal validity is the extent to which a research study establishes a trustworthy cause-and-effect relationship. This type of
validity depends largely on the study's procedures and how rigorously it is performed.
Internal validity is important because once established, it makes it possible to eliminate alternative explanations for a finding. If
you implement a smoking cessation program, for instance, internal validity ensures that any improvement in the subjects is due to
the treatment administered and not something else.
Internal validity is not a "yes or no" concept. Instead, we consider how confident we can be with study findings based on whether
the research avoids traps that may make those findings questionable. The less chance there is for "confounding," the higher the
internal validity and the more confident we can be.
Confounding refers to uncontrollable variables that come into play and can confuse the outcome of a study, making us unsure of
whether we can trust that we have identified the cause-and-effect relationship.
In short, you can only be confident that a study is internally valid if you can rule out alternative explanations for the findings.
Three criteria are required to assume cause and effect in a research study:
● Covariation: the presumed cause and the effect vary together.
● Temporal precedence: the cause occurs before the effect.
● No plausible alternative explanations for the observed relationship.
To ensure the internal validity of a study, you want to consider aspects of the research design that will increase the likelihood that
you can reject alternative hypotheses. Many factors can improve internal validity in research, including:
Blinding: Participants—and sometimes researchers—are unaware of what intervention they are receiving (such
as using a placebo on some subjects in a medication study) to avoid having this knowledge bias their
perceptions and behaviors, thus impacting the study's outcome
Experimental manipulation: Manipulating an independent variable in a study (for instance, giving smokers a
cessation program) instead of just observing an association without conducting any intervention (examining the
relationship between exercise and smoking behavior)
Random selection: Choosing participants at random or in a manner in which they are representative of the
population that you wish to study
Randomization or random assignment: Randomly assigning participants to treatment and control groups,
ensuring that there is no systematic bias between the research groups
Strict study protocol: Following specific procedures during the study so as not to introduce any unintended
effects; for example, doing things differently with one group of study participants than you do with another
group
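Of the factors above, random assignment is the most mechanical and can be sketched in code. This is a minimal illustration with hypothetical participant IDs and group names: shuffle the pool, then deal participants round-robin so the groups stay balanced and no systematic difference drives who ends up where.

```python
import random

def random_assignment(participants, groups=("treatment", "control"), seed=None):
    """Shuffle participants, then deal them round-robin into groups so group
    sizes stay balanced and assignment carries no systematic bias."""
    rng = random.Random(seed)
    pool = list(participants)
    rng.shuffle(pool)                  # randomise the order first
    assignment = {g: [] for g in groups}
    for i, person in enumerate(pool):
        assignment[groups[i % len(groups)]].append(person)
    return assignment

# 100 hypothetical participants split into two equal arms
arms = random_assignment(range(100), seed=5)
print(len(arms["treatment"]), len(arms["control"]))  # 50 50
```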
Just as there are many ways to ensure internal validity, a list of potential threats should be considered when planning a study.
Attrition: Participants dropping out or leaving a study, which means that the results are based on a biased
sample of only the people who did not choose to leave (and possibly who all have something in common, such
as higher motivation)
Confounding: A situation in which changes in an outcome variable can be thought to have resulted from some
type of outside variable not measured or manipulated in the study
Diffusion: This refers to the results of one group transferring to another through the groups interacting and talking with or observing one another; this can also lead to another issue called resentful demoralization, in which a control group tries less hard because they feel resentful about the group that they are in
Experimenter bias: An experimenter behaving in a different way with different groups in a study, which can
impact the results (and is eliminated through blinding)
Historical events: May influence the outcome of studies that occur over a period of time, such as a change in
the political leader or a natural disaster that occurs, influencing how study participants feel and act
Instrumentation: This involves "priming" participants in a study in certain ways with the measures used,
causing them to react in a way that is different than they would have otherwise reacted
Maturation: The impact of time as a variable in a study; for example, if a study takes place over a period of
time in which it is possible that participants naturally change in some way (i.e., they grew older or became
tired), it may be impossible to rule out whether effects seen in the study were simply due to the impact of time
Statistical regression: The tendency of participants with extreme scores on a measure to score closer to the average when measured again, rather than this change being a direct effect of an intervention
Testing: Repeatedly testing participants using the same measures influences outcomes; for example, if you give
someone the same test three times, it is likely that they will do better as they learn the test or become used to
the testing process, causing them to answer differently
External Validity
External validity refers to how well the outcome of a research study can be expected to apply to other settings. This is important
because, if external validity is established, it means that the findings can be generalized to similar individuals or populations.
External validity affirmatively answers the question: Do the findings apply to similar people, settings, situations, and time
periods?
Population validity and ecological validity are two types of external validity. Population validity refers to whether you can
generalize the research outcomes to other populations or groups. Ecological validity refers to whether a study's findings can be
generalized to additional situations or settings.
Another term, transferability, refers to whether results transfer to situations with similar characteristics. Transferability relates to external validity and applies to qualitative research designs.
If you want to improve the external validity of your study, there are many ways to achieve this goal. Factors that can enhance
external validity include:
Field experiments: Conducting a study outside the laboratory, in a natural setting
Inclusion and exclusion criteria: Setting criteria as to who can be involved in the research, ensuring that the
population being studied is clearly defined
Psychological realism: Making sure participants experience the events of the study as being real by telling
them a "cover story," or a different story about the aim of the study so they don't behave differently than they
would in real life based on knowing what to expect or knowing the study's goal
Replication: Conducting the study again with different samples or in different settings to see if you get the
same results; when many studies have been conducted on the same topic, a meta-analysis can also be used to
determine if the effect of an independent variable can be replicated, therefore making it more reliable9
Reprocessing or calibration: Using statistical methods to adjust for external validity issues, such as
reweighting groups if a study had uneven groups for a particular characteristic (such as age)
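As a minimal sketch of the reweighting idea above, the following Python snippet shows how post-stratification weights can adjust a sample that over-represents one age group. All numbers here (the age bands, the population and sample proportions, and the group mean scores) are hypothetical, chosen only to illustrate the calculation:

```python
# Post-stratification reweighting sketch (hypothetical numbers):
# if young adults are over-represented in the sample, their responses
# are down-weighted so the weighted result matches the population mix.

population_share = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}  # assumed population mix
sample_share     = {"18-34": 0.50, "35-54": 0.30, "55+": 0.20}  # observed in the study

# Weight for each age group = population share / sample share
weights = {g: population_share[g] / sample_share[g] for g in population_share}

# Hypothetical mean outcome score within each age group
group_means = {"18-34": 4.0, "35-54": 3.2, "55+": 2.5}

# Unweighted estimate reflects the (biased) sample composition;
# the weighted estimate reflects the population composition.
unweighted = sum(sample_share[g] * group_means[g] for g in group_means)
weighted   = sum(sample_share[g] * weights[g] * group_means[g] for g in group_means)

print(round(unweighted, 2))
print(round(weighted, 2))
```

Because the over-represented young adults score highest in this made-up data, the weighted estimate comes out lower than the raw sample mean, which is exactly the kind of correction calibration is meant to provide.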
External validity is threatened when a study does not take into account the interaction of variables in the real world.10 Threats to
external validity include:
Pre- and post-test effects: When the pre- or post-test is in some way related to the effect seen in the study,
such that the cause-and-effect relationship disappears without these added tests
Sample features: When some feature of the sample used was responsible for the effect (or partially
responsible), leading to limited generalizability of the findings
Selection bias: Also considered a threat to internal validity, selection bias describes differences between groups
in a study that may relate to the independent variable—like motivation or willingness to take part in the study,
or specific demographics of individuals being more likely to take part in an online survey11
Situational factors: Factors such as the time of day of the study, its location, noise, researcher characteristics,
and the number of measures used may affect the generalizability of findings
While rigorous research methods can ensure internal validity, the same tight controls may limit external validity, since findings from a highly controlled setting may not generalize to everyday conditions.
Internal Validity vs. External Validity
Internal validity and external validity are two research concepts that share a few similarities while also having several differences.
Similarities
One of the similarities between internal validity and external validity is that both factors should be considered when designing a
study. This is because both have implications in terms of whether the results of a study have meaning.
Neither internal validity nor external validity is an "either/or" concept. Instead, you always need to decide to what
degree a study performs in terms of each type of validity.
Each of these concepts is also typically reported in research articles published in scholarly journals. This is so that other
researchers can evaluate the study and make decisions about whether the results are useful and valid.
Differences
The essential difference between internal validity and external validity is that internal validity refers to the structure of a study
(and its variables) while external validity refers to the universality of the results. But there are further differences between the two
as well.
For instance, internal validity focuses on showing that a difference is due to the independent variable alone, whereas
external validity focuses on whether the results can be translated to the world at large.
Internal validity and external validity aren't mutually exclusive. You can have a study with good internal validity that is
nonetheless irrelevant to the real world. You could also conduct a field study that is highly relevant to the real world but
doesn't produce trustworthy results in terms of knowing which variables caused the outcomes.
Examples of Validity
Perhaps the best way to understand internal validity and external validity is with examples.
An example of a study with good internal validity would be if a researcher hypothesizes that using a particular mindfulness app
will reduce negative mood. To test this hypothesis, the researcher randomly assigns a sample of participants to one of two groups:
those who will use the app over a defined period and those who engage in a control task.
The researcher ensures that there is no systematic bias in how participants are assigned to the groups. They do this by blinding
the research assistants so they don't know which groups the subjects are in during the experiment.
A strict study protocol is also used to outline the procedures of the study. Potential confounding variables are measured along
with mood, such as the participants' socioeconomic status, gender, age, and other factors. If participants drop out of the study,
their characteristics are examined to make sure there is no systematic bias in terms of who stays in.
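The random-assignment step in this example can be sketched in a few lines of Python. The participant IDs, the fixed seed, and the 10/10 split are all hypothetical details added for illustration:

```python
import random

# Sketch of unbiased random assignment: every participant has the same
# chance of landing in either group, so no systematic difference is
# built into the groups at the start of the study.

random.seed(42)  # fixed seed so the assignment procedure is reproducible
participants = [f"P{i:02d}" for i in range(1, 21)]  # 20 hypothetical IDs

shuffled = participants[:]
random.shuffle(shuffled)

app_group     = shuffled[:10]   # will use the mindfulness app
control_group = shuffled[10:]   # will do the control task

# Assignment is stored separately from outcome data, so research
# assistants scoring mood can remain blind to group membership.
assignment = {pid: ("app" if pid in app_group else "control")
              for pid in participants}

print(len(app_group), len(control_group))
```

The key property, mirrored in the prose above, is that group membership depends only on chance, not on any characteristic of the participants.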
An example of a study with good external validity would be if, in the above example, the participants used the mindfulness app at
home rather than in the laboratory. This shows that results appear in a real-world setting.
To further ensure external validity, the researcher clearly defines the population of interest and chooses a representative sample.
They might also replicate the study's results using different technological devices.
Takeaways
Setting up an experiment so that it has both sound internal validity and external validity involves being mindful from the start
about factors that can influence each aspect of your research.
It's best to spend extra time designing a structurally sound study that has far-reaching implications rather than to quickly rush
through the design phase only to discover problems later on. Only when both internal validity and external validity are high can
strong conclusions be made about your results.
What is Survey Research Design?
Survey research design is a fundamental method in the field of research where the primary method of data collection is through
surveys. This type of research design allows researchers to collect structured data from individuals or groups to gain deeper
insights into their thoughts, behaviors, or experiences related to a specific topic. Online surveys or forms typically consist of
structured questions, each tailored to gather specific information, making them a versatile tool in both quantitative and
qualitative research.
Survey design is highly valued in research because it is an accessible and efficient way for respondents to share their
perspectives. By leveraging survey research, organizations can quickly gauge public opinions, understand trends within a
population, and identify issues or areas for improvement. This method is widely used in academic, business, and government
research to uncover data that can lead to actionable solutions or further study.
One of the key strengths of survey research is its ability to provide a snapshot of trends or opinions within a population,
allowing researchers to generalize findings and make informed decisions. Additionally, surveys can be used to test hypotheses,
track changes over time, or serve as the foundation for more in-depth studies. As a result, survey research design remains a
cornerstone of modern research.
Survey research methods can be broadly categorized into two main types: quantitative and qualitative survey designs. A
quantitative survey design is typically used in large-scale research and focuses on gathering numerical data through
closed-ended questions. These may include multiple-choice questions or dichotomous responses, which can be analyzed quickly
using statistical tools. The primary goal of quantitative surveys is to obtain a general snapshot of trends within your population of
interest, making them an ideal method for studying large datasets efficiently.
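As an illustration of the quick statistical summaries closed-ended questions lend themselves to, the following Python sketch tallies responses to a single multiple-choice item. The response data here are hypothetical:

```python
from collections import Counter

# Summarising a closed-ended survey item (hypothetical responses):
# counts and percentages are the typical first step in analysing
# quantitative survey data.

responses = ["agree", "agree", "neutral", "disagree", "agree",
             "neutral", "agree", "disagree", "agree", "neutral"]

counts = Counter(responses)
total = len(responses)
percentages = {option: 100 * n / total for option, n in counts.items()}

for option in ("agree", "neutral", "disagree"):
    print(f"{option}: {counts[option]} ({percentages[option]:.0f}%)")
```

Because every answer falls into one of a fixed set of options, this kind of summary scales to thousands of respondents with no extra analysis effort, which is the efficiency advantage the paragraph above describes.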
In contrast, a qualitative survey design is often employed in smaller-scale studies. This type of survey relies on open-ended
questions that allow respondents to elaborate on their thoughts, attitudes, or behaviors. Qualitative data is typically collected in
interview format and is analyzed and reported in the respondents' own words, often in the form of direct quotes. Qualitative
surveys offer in-depth insights into the motivations behind responses, providing rich, detailed data that goes beyond numbers.
Both quantitative and qualitative survey methods have their advantages and can be applied depending on the research
objectives. When choosing the right survey design, it's crucial to consider not only the type of data you need but also the time
frame over which it will be collected.
For example, a longitudinal survey study involves collecting data at multiple points over a defined time period to examine
changes in key variables. In this approach, surveys are administered at least twice: once at the beginning and once at the end of
the study period. Researchers may also choose to collect data at intervals throughout the study. By contrast, a cross-sectional
survey study collects data at a single point in time, providing a snapshot of opinions, behaviors, or trends at that specific
moment.
Within these two designs - longitudinal and cross-sectional - researchers can choose from various methods of survey
administration. Time efficiency is often a key factor in determining which method to use. Online surveys and electronic
questionnaires have become increasingly popular due to their ease of access and the ability to reach a large audience quickly.
Additionally, these methods require less time for analysis, as participants complete them independently. However, other methods,
such as phone or face-to-face interviews, may provide more depth but can take longer to administer and analyze due to the need
for trained interviewers and manual processing of responses.
Selecting the right survey research method - whether quantitative or qualitative, longitudinal or cross-sectional - depends on
your research goals, the data you need to collect, and the resources available for both data collection and analysis.
Time frame consideration: The time frame is crucial to the design choice for both longitudinal and cross-sectional studies.
Question Bank
Short answer questions
1. Various sources of research ideas
2. Different methods of data collection
3. Discuss in short the goals, ethics, and principles of research
4. What are psychological variables and how do we operationalize them?
5. What is a research problem and a hypothesis? Give types and criteria.