Measurement

Operationalization
Introduction
In research, measurement assigns numbers or other symbols to variables to quantify them. This
allows researchers to compare and analyze data in a meaningful way. However, many concepts
in research are abstract or complex and cannot be directly measured. This is where
operationalization comes in.
Operationalization Defined
Operationalization is the process of defining an abstract concept or variable in a way that it can
be measured. It involves specifying how the concept will be measured using concrete and
observable indicators. This makes it possible to collect data on the concept and use that data to
test hypotheses.
Why Operationalization is Important
Operationalization is essential for several reasons:
• It makes concepts measurable. Without operationalization, it would be impossible to
collect data on many important concepts in research.
• It makes research more rigorous. By specifying how concepts will be measured,
operationalization reduces ambiguity and makes it possible to replicate research studies.
• It improves communication among researchers. Operationalization helps researchers to
understand each other's work and to compare and contrast findings.
Steps in Operationalization
The process of operationalization typically involves the following steps:
1. Define the concept. The first step is clearly defining the concept you want to measure.
This may involve consulting with experts in the field or reviewing the literature on the
topic.
2. Identify indicators. Indicators are observable variables that can be used to measure the
concept. For example, to measure stress, you might use indicators such as the hours of
sleep per night, heart rate, and blood pressure.
3. Develop measurement procedures. Once you have identified your indicators, you need
to develop guidelines for measuring them. This may involve creating surveys, designing
experiments, or using existing data sources.
4. Refine and test your measures. Refining and testing your measures is essential to ensure
they are reliable and valid. Reliability refers to the consistency of a measure, while validity
refers to its accuracy.
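As a minimal illustration of these steps, an operationalization plan can be written down as a simple data structure. The sketch below uses Python; the concept definition, indicator names, and procedures are hypothetical examples, not a prescribed scheme.

```python
# A minimal sketch of an operationalization plan for the abstract
# concept "stress", following the four steps above. The indicators
# and procedures are illustrative assumptions.
operationalization = {
    "concept": "stress",
    "definition": "A state of mental or emotional strain arising "
                  "from demanding circumstances.",
    "indicators": ["hours_of_sleep", "resting_heart_rate", "blood_pressure"],
    "procedures": {
        "hours_of_sleep": "self-report sleep diary over 14 days",
        "resting_heart_rate": "wearable monitor, morning reading",
        "blood_pressure": "clinic measurement, seated, after 5 min rest",
    },
}

def is_complete(plan):
    """Step 4 (refine and test) starts by checking that every
    indicator has a measurement procedure before any reliability
    or validity analysis."""
    return all(ind in plan["procedures"] for ind in plan["indicators"])

print(is_complete(operationalization))  # True
```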
Real-Life Examples of Operationalization
Here are some real-life examples of operationalization:
• Concept: Intelligence
• Indicators: IQ score, GPA, standardized test scores
• Measurement procedures: Administering IQ tests, collecting GPA data, administering
standardized tests
• Concept: Motivation
• Indicators: Self-reported motivation levels, effort expended on tasks, persistence in the
face of challenges
• Measurement procedures: Administering surveys, coding behavior, tracking task
completion
• Concept: Customer satisfaction
• Indicators: Satisfaction ratings, likelihood to recommend, repurchase behavior
• Measurement procedures: Administering surveys, tracking customer loyalty programs,
monitoring sales data
Conclusion
Operationalization is an essential part of the research process. It allows researchers to measure
abstract concepts in a reliable, valid, and replicable way. This makes it possible to collect data
that can be used to test hypotheses and make informed decisions.

Measurement Structure
In research, measurement structure refers to the underlying relationship between the observed
indicators and the latent construct they are intended to measure. It is a crucial aspect of
measurement theory, ensuring that the measures used are reliable and valid for assessing the
constructs of interest. Understanding measurement structure is essential for conducting
rigorous research and drawing meaningful conclusions from data.
Conceptualizing Measurement Structure
Measurement structure can be conceptualized as a hierarchy, with the latent construct at the
top and the observed indicators at the bottom. The latent construct represents the underlying
concept or phenomenon the researcher is interested in measuring, while the observed
indicators are specific, observable variables that are believed to reflect the latent
construct. The relationship between the latent construct and the observed indicators is often
depicted through a structural equation model (SEM).
Types of Measurement Structures
There are two main types of measurement structures:
1. Reflective Measurement Structure: In a reflective measurement structure, the observed
indicators are assumed to reflect the underlying latent construct. This means that the
latent construct causes the observed indicators to vary. For example, in a study of student
motivation, the observed indicators might include grades, participation in class
discussions, and completion of assignments. These indicators are assumed to reflect the
underlying latent construct of motivation (see the simulation sketch after this list).
2. Formative Measurement Structure: In a formative measurement structure, the observed
indicators are assumed to cause the latent construct to vary. This means that the
observed indicators form or define the latent construct. For example, in a customer
satisfaction study, the observed indicators might include product quality ratings,
customer service, and value for money. These indicators are not simply reflections of
satisfaction but contribute to the overall satisfaction assessment.
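One way to see the reflective logic is a small simulation: if a single latent variable generates several indicators plus noise, those indicators should be positively intercorrelated. This is a sketch with assumed loadings and sample size, not a full structural equation model.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Reflective structure: one latent construct (e.g., motivation)
# causes each observed indicator, plus indicator-specific noise.
latent = rng.normal(size=n)
loadings = [0.8, 0.7, 0.6]          # assumed factor loadings
indicators = np.column_stack([
    lam * latent + rng.normal(scale=0.5, size=n) for lam in loadings
])

# Indicators sharing a common latent cause correlate positively.
print(np.round(np.corrcoef(indicators, rowvar=False), 2))
```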
Importance of Measurement Structure
Measurement structure is important for several reasons:
1. Reliability: A sound measurement structure ensures that the observed indicators
consistently measure the latent construct. This means that the measures are reliable and
can be trusted to provide accurate information about the construct.
2. Validity: A sound measurement structure ensures that the observed indicators measure
the latent construct they are intended to measure. This means the measures are valid and can
be used to draw meaningful conclusions about the construct.
3. Theoretical Understanding: Understanding measurement structure helps researchers
develop a deeper understanding of the constructs they measure. This can lead to more
refined theories and better research designs.
Real-Life Examples of Measurement Structure
1. Measuring Employee Engagement: In a study of employee engagement, the observed
indicators might include job satisfaction, commitment to the organization, and
willingness to go the extra mile. These indicators would be used to measure the latent
construct of engagement.
2. Assessing Student Learning: In a study of student learning, the observed indicators might
include exam scores, project performance, and class participation. These indicators
would be used to measure the latent construct of knowledge.
3. Evaluating Marketing Effectiveness: In a marketing effectiveness study, the observed
indicators might include brand awareness, purchase intent, and customer satisfaction.
These indicators would be used to measure the latent construct of marketing
effectiveness.
Conclusion
Measurement structure plays a fundamental role in ensuring the quality and interpretability of
research results. By understanding and applying measurement structure concepts, researchers
can create reliable and valid measures that accurately assess the constructs of interest. This, in
turn, leads to more rigorous research and more meaningful conclusions.

Measurement Validity: Face Validity, Predictive Validity, Criterion Validity, Convergent Validity, and Discriminant Validity
Measurement validity is a crucial aspect of research, ensuring that the measures used accurately
assess the constructs of interest. It evaluates the degree to which a measure measures what it
intends to measure. There are five primary types of measurement validity: face validity,
predictive validity, criterion validity, convergent validity, and discriminant validity.
Face Validity
Face validity is the simplest and most subjective form of validity. It refers to the extent to which
a measure appears, on the surface, to measure what it claims to measure. Face validity is assessed through
informal judgment by field experts or the target population.
Characteristics of Face Validity:
• Subjective assessment of the measure's appropriateness.
• Based on the judgment of experts or the target population.
• Evaluates the measure's relevance and comprehensiveness.
Example of Face Validity:
• A survey measuring employee satisfaction might include items such as "I am satisfied
with my job" and "I feel valued at my workplace." These items seem to address the
concept of employee satisfaction directly.
Predictive Validity
Predictive validity assesses the extent to which a measure can predict future outcomes or behaviors. It
involves correlating the measure with a known or established criterion in the future. Predictive
validity is essential for standards used in selection or placement decisions.
Characteristics of Predictive Validity:
• Correlation between the measure and a future criterion.
• Evaluates the measure's ability to predict future outcomes.
• Useful for selection and placement decisions.
Example of Predictive Validity:
• A standardized aptitude test for college admissions might be evaluated for predictive
validity by examining its correlation with college GPA. A high correlation would indicate
that the test can effectively predict academic performance.
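In practice, predictive validity is typically quantified as the correlation between the measure and the later criterion. The sketch below simulates this with made-up aptitude scores and GPAs; the numbers are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500

# Simulated data: aptitude test taken at admission, GPA observed later.
aptitude = rng.normal(500, 100, size=n)
gpa = 2.0 + 0.002 * aptitude + rng.normal(scale=0.4, size=n)

# The validity coefficient is the correlation with the future criterion.
r = np.corrcoef(aptitude, gpa)[0, 1]
print(f"predictive validity coefficient: r = {r:.2f}")
```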
Criterion Validity
Criterion validity assesses the extent to which a measure correlates with an established or
accepted measure of the same construct. It involves comparing the measure to a criterion that
is already considered valid. Criterion validity can be concurrent (comparing the measure to a
criterion simultaneously) or predictive (comparing the measure to a criterion in the future).
Characteristics of Criterion Validity:
• Correlation between the measure and a criterion measure.
• Evaluates the measure's agreement with an established measure.
• Can be concurrent or predictive.
Example of Criterion Validity:
• A new measure of job stress might be evaluated for criterion validity by comparing it to
an existing and well-established measure. A high correlation would indicate that the new
measure accurately reflects the same construct as the established measure.
Convergent Validity
Convergent validity assesses the extent to which a measure correlates with other measures of the same
construct. This indicates that the measure is indeed measuring the construct it is intended to
measure.
Characteristics of Convergent Validity:
• Correlation between the measure and other measures of the same construct.
• Evaluates the measure's convergence with other measures.
• Demonstrates consistent measurement of the construct.
Example of Convergent Validity:
• A measure of intelligence might be evaluated for convergent validity by examining its
correlation with other cognitive ability tests. High correlations would indicate that the
measure consistently assesses intelligence.
Discriminant Validity
Discriminant validity assesses the extent to which a measure is distinct from measures of
different constructs. This indicates that the measure captures the specific construct it is intended
to measure and not other related constructs.
Characteristics of Discriminant Validity:
• Lack of correlation between the measure and measures of different constructs.
• Evaluates the measure's distinctness from other measures.
• Demonstrates specificity of measurement.
Example of Discriminant Validity:
• A measure of anxiety might be evaluated for discriminant validity by examining its
correlation with measures of depression. Low correlations would indicate that the
anxiety measure is distinct from measures of depression, suggesting that it is specifically
measuring anxiety.
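Convergent and discriminant validity are often inspected together in a single correlation matrix: measures of the same construct should correlate highly with each other and only weakly with measures of different constructs. The sketch below uses simulated data and hypothetical column names.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
n = 300

# Hypothetical scores: two anxiety measures share a common construct;
# the depression measure is generated independently for illustration.
anxiety = rng.normal(size=n)
df = pd.DataFrame({
    "anxiety_scale_a": anxiety + rng.normal(scale=0.4, size=n),
    "anxiety_scale_b": anxiety + rng.normal(scale=0.4, size=n),
    "depression_scale": rng.normal(size=n),
})

corr = df.corr().round(2)
print(corr.loc["anxiety_scale_a", "anxiety_scale_b"])   # high -> convergent evidence
print(corr.loc["anxiety_scale_a", "depression_scale"])  # near zero -> discriminant evidence
```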
Conclusion
Face validity, predictive validity, criterion validity, convergent validity, and discriminant validity
are essential tools for evaluating the quality of measures used in research. By assessing these
types of validity, researchers can ensure that their measures accurately reflect the constructs
they are intended to measure, leading to more rigorous research and more meaningful
conclusions.

Measurement Reliability: Assessing Consistency of Measures


In research, measurement reliability refers to the consistency and dependability of a measure. It
indicates the extent to which a measure produces the same results when administered
repeatedly under similar conditions. Reliable measurements are essential for accurate and
meaningful research findings.
Types of Measurement Reliability
There are several different types of measurement reliability, each focusing on various aspects of
consistency:
1. Test-Retest Reliability:
Test-retest reliability assesses the consistency of a measure over time. It involves administering
the measure to the same group of individuals twice, with a reasonable interval between
administrations. A high correlation between the two sets of scores indicates that the measure is
reliable over time.
Example:
• Administering a standardized test to a group of students twice, a few weeks apart, and
comparing their scores to evaluate the test's reliability over time.
2. Internal Consistency Reliability:
Internal consistency reliability assesses the consistency of a measure across its items. It reflects
the extent to which the items within a measure assess the same underlying construct. A high
internal consistency reliability coefficient, such as Cronbach's alpha, indicates that the items
consistently measure the same construct.
Example:
• Evaluating the internal consistency reliability of a survey measuring employee
satisfaction by examining the correlations between individual items and the overall
satisfaction score.
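Cronbach's alpha can be computed directly from a respondents-by-items matrix using its standard formula, alpha = k/(k-1) * (1 - sum of item variances / variance of the total score). A minimal numpy sketch with hypothetical responses:

```python
import numpy as np

def cronbach_alpha(items):
    """items: 2-D array, rows = respondents, columns = scale items."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical responses to a 4-item satisfaction scale (1-5 Likert).
responses = [
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 2, 3],
    [1, 2, 1, 2],
]
print(round(cronbach_alpha(responses), 2))  # high alpha -> consistent items
```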
3. Parallel Forms Reliability:
Parallel forms reliability assesses the consistency of a measure across equivalent versions of the
measure. It involves administering two different but equivalent forms of the measure to the
same group of individuals and comparing their scores. A high correlation between the scores on
the two forms indicates that the measure is reliable across different versions.
Example:
• Developing two parallel forms of a multiple-choice test and administering them to a
group of students to evaluate the parallel-forms reliability of the test.
4. Alternate-Forms Reliability:
Alternate-forms reliability, similar to parallel-forms reliability, assesses the consistency of a
measure across different versions of the measure. However, alternate-forms reliability typically
involves using existing forms of the measure that are not necessarily designed to be equivalent.
Example:
• Comparing the scores of a group of students on two different existing forms of an
aptitude test to evaluate the alternate-forms reliability of the test.
5. Rater Reliability:
Rater reliability assesses the consistency of a measure when administered by different raters or
observers. It is especially important for measures that involve subjective judgment or evaluation. A high rater
reliability coefficient indicates that different raters consistently apply the same criteria when
evaluating the measure.
Example:
• Evaluating the rater reliability of a performance assessment by having multiple observers
evaluate the same set of performances and comparing their ratings.
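For categorical ratings from two raters, consistency can be summarized with raw percent agreement and with Cohen's kappa, which corrects for chance agreement. The sketch below implements kappa from its definition; the ratings are hypothetical.

```python
import numpy as np

def cohens_kappa(r1, r2):
    """Chance-corrected agreement between two raters' categorical ratings."""
    r1, r2 = np.asarray(r1), np.asarray(r2)
    p_o = np.mean(r1 == r2)                      # observed agreement
    cats = np.union1d(r1, r2)
    # Expected chance agreement from each rater's marginal proportions.
    p_e = sum(np.mean(r1 == c) * np.mean(r2 == c) for c in cats)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical performance ratings (1 = poor ... 3 = good) from two observers.
rater1 = [3, 2, 3, 1, 2, 3, 2, 1, 3, 2]
rater2 = [3, 2, 3, 1, 3, 3, 2, 1, 2, 2]
print(f"agreement: {np.mean(np.array(rater1) == np.array(rater2)):.2f}")
print(f"kappa: {cohens_kappa(rater1, rater2):.2f}")
```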
Importance of Measurement Reliability
Measurement reliability is crucial for several reasons:
1. Accurate Results: Reliable measures produce consistent and accurate results, reducing
measurement error and increasing confidence in research findings.
2. Replicable Research: Reliable measures allow for replicable research, ensuring that
similar results are obtained when studies are replicated.
3. Meaningful Comparisons: Reliable measures enable meaningful comparisons between
individuals or groups, as the scores reflect consistent measurement of the construct.
4. Decision-Making: Reliable measures inform decision-making in various contexts, such as
selection, placement, or assessment.
Conclusion
Measurement reliability is a fundamental aspect of research methodology. By understanding and
assessing measurement reliability, researchers can ensure that their measures are consistent
and accurate, leading to more rigorous and meaningful research outcomes.

Measurement: Survey, Questionnaire, and Test


Introduction
Surveys, questionnaires, and tests are commonly used tools for gathering data in research. While
they share some similarities, they also have distinct characteristics and applications.
Understanding the differences between these methods is crucial for selecting the most
appropriate approach for a given research question.
Surveys
Surveys are broad-based data collection instruments that typically employ a variety of question
formats to gather information from a large sample of individuals. They are often used to describe
populations, assess attitudes, opinions, or behaviors, and track changes over time.
Characteristics of Surveys:
• Structured or semi-structured format
• Diverse question types (e.g., multiple-choice, open-ended)
• Large sample size
• Descriptive or exploratory research
• Gather information on a range of topics
Examples of Surveys:
• Employee satisfaction surveys
• Customer satisfaction surveys
• Community needs assessments
• Political opinion polls
• Market research surveys
Questionnaires
Questionnaires are similar to surveys but are typically narrower in scope, focusing on a specific
topic or construct. They often use a standardized format with fixed questions to gather
structured data.
Characteristics of Questionnaires:
• Standardized format
• Structured questions (e.g., Likert scales, rating scales)
• Moderate to large sample size
• Measure specific constructs or variables
• Used in both quantitative and qualitative research
Examples of Questionnaires:
• Personality inventories
• Attitude scales
• Health surveys
• Demographic questionnaires
• Academic achievement tests
Tests
Tests assess an individual's knowledge, skills, or abilities. They typically involve standardized
tasks or questions that are administered under controlled conditions.
Characteristics of Tests:
• Standardized administration
• Objective scoring
• Measure specific skills or knowledge
• Used for assessment, selection, or placement
• Often norm-referenced or criterion-referenced
Examples of Tests:
• Standardized aptitude tests (e.g., SAT, ACT)
• Intelligence tests
• Achievement tests
• Skill-based tests (e.g., typing tests, language proficiency tests)
• Psychological assessments
Choosing the Right Method
The choice between a survey, questionnaire, or test depends on the research question and
objectives.
• Surveys: Suitable for broad-based data collection, exploring opinions, and tracking
changes.
• Questionnaires: Ideal for measuring specific constructs or variables in a standardized
manner.
• Tests: Appropriate for assessing knowledge, skills, or abilities under controlled
conditions.
Real-Life Examples
• Survey: A researcher might use a survey to assess employee satisfaction levels across a
large organization.
• Questionnaire: A psychologist might use a standardized personality questionnaire to
measure anxiety levels in a group of clients.
• Test: An educational institution might use an achievement test to evaluate students'
proficiency in a subject.

Measurement: Scales and Response Options with a Focus on Likert Scales
Likert scales, or summative scales, are commonly used in surveys and questionnaires to measure
attitudes, opinions, and perceptions. They are a type of ordinal scale, meaning that the response
options are ordered meaningfully, but the distance between them is not necessarily equal.
Structure of Likert Scales
Likert scales typically consist of statements or questions, each with a set of response options
representing different levels of agreement or disagreement. The response options are generally
numbered, with higher numbers indicating stronger agreement or more positive sentiment.
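Scoring a summative scale usually means mapping the ordered response labels to numbers and then summing (or averaging) across items. A minimal sketch, assuming a 5-point agreement scale with illustrative wording:

```python
# Assumed 5-point agreement scale; the labels are illustrative.
SCALE = {
    "strongly disagree": 1,
    "disagree": 2,
    "neutral": 3,
    "agree": 4,
    "strongly agree": 5,
}

# One respondent's answers to a hypothetical three-item scale.
answers = ["agree", "strongly agree", "neutral"]
scores = [SCALE[a] for a in answers]

# A summative (Likert) scale score is the sum or mean of item scores.
print(sum(scores), sum(scores) / len(scores))  # 12 4.0
```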
Issues with Likert Scales
Despite their widespread use, Likert scales are not without their limitations. Some of the critical
issues associated with Likert scales include:
1. Social Desirability Bias: Respondents may provide responses that they believe will make
them appear more favorable or socially acceptable.
2. Acquiescence Bias: Respondents may tend to agree with statements regardless of their
true opinions.
3. Response Range Restriction: Respondents may not use the full range of response
options, limiting the variability of the data.
4. Linguistic Ambiguity: The wording of statements or questions can influence how
respondents interpret and respond to them.
5. Cultural Differences: Response patterns on Likert scales may vary across different
cultures.
Addressing Likert Scale Issues
Researchers can take several steps to minimize the limitations of Likert scales:
1. Careful Item Construction: Statements or questions should be clear, concise, and
unambiguous.
2. Pilot Testing: Pre-testing the scale with a small sample can identify potential issues with
wording or response options.
3. Reverse Scoring: Including reverse-scored items can help to detect and control for
acquiescence bias (see the sketch after this list).
4. Attention to Language: Use inclusive language appropriate for the target audience.
5. Cultural Considerations: Adapt the scale or use culturally appropriate items when
working with diverse populations.
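Reverse scoring (point 3 above) flips the scale for negatively worded items so that a high number always means the same thing across items. On a scale running from scale_min to scale_max, the reversed score is scale_min + scale_max - score; a minimal sketch:

```python
def reverse_score(score, scale_min=1, scale_max=5):
    """Flip a response on a Likert scale, e.g. 5 -> 1 and 4 -> 2 on a 1-5 scale."""
    return scale_min + scale_max - score

# A hypothetical negatively worded item ("My job is NOT rewarding")
# answered "strongly agree" (5) should count as low satisfaction (1)
# when summed with positively worded items.
print([reverse_score(s) for s in [1, 2, 3, 4, 5]])  # [5, 4, 3, 2, 1]
```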
Real-Life Examples of Likert Scales
Likert scales are used in a wide variety of research settings, including:
• Customer Satisfaction Surveys: Customer satisfaction with products, services, or overall
experience.
• Employee Engagement Surveys: Assessing employee attitudes, motivation, and
organizational commitment.
• Personality Assessments: Measuring personality traits such as extraversion,
agreeableness, conscientiousness, neuroticism, and openness to experience.
• Attitudinal Surveys: Gauging public opinion on social, political, or environmental issues.
Conclusion
Likert scales are a versatile and widely used tool for measuring attitudes, opinions, and
perceptions. While they have certain limitations, researchers can take steps to minimize these
issues and ensure the validity and reliability of their data. By carefully constructing items, pilot
testing the scale, and considering cultural differences, researchers can utilize Likert scales to
gather meaningful insights into the attitudes and opinions of their participants.

Measurement: Response and Rater Bias with a Focus on Likert Scales
In research, response and rater bias are two significant sources of error that can affect the
validity and accuracy of data collected using Likert scales. Understanding and addressing these
biases is crucial for ensuring the reliability and trustworthiness of research findings.
Response Bias
Response bias refers to systematic errors in respondents' answers that arise from factors
unrelated to the research question. It can manifest in various forms, including:
1. Social Desirability Bias: Respondents tend to provide responses that they believe will
make them appear more favorable or socially acceptable. This can lead to an
overrepresentation of positive responses and an underrepresentation of negative or
neutral responses.
2. Acquiescence Bias: Respondents tend to agree with statements regardless of their
genuine opinions. This can result in an inflated level of agreement across all items,
regardless of their content.
3. Extreme Response Style: Respondents consistently choose extreme response options
(e.g., always agree or always disagree) regardless of the item's content. This can distort
the distribution of responses and mask meaningful variation.
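Acquiescence and extreme response style leave detectable footprints in raw Likert data, so a simple screen can flag respondents who answer mostly at the endpoints or mostly in agreement. A sketch assuming 1-5 responses; any flagging cutoff would be a researcher's judgment call:

```python
import numpy as np

# Hypothetical 1-5 Likert responses, rows = respondents, columns = items.
responses = np.array([
    [5, 5, 1, 5, 1, 5],   # mostly endpoints -> possible extreme response style
    [4, 4, 5, 4, 4, 5],   # mostly agreement -> possible acquiescence
    [3, 2, 4, 3, 2, 4],   # varied, mid-range responding
])

extreme_share = np.isin(responses, [1, 5]).mean(axis=1)  # endpoint use per respondent
agree_share = (responses >= 4).mean(axis=1)              # agreement rate per respondent
print(extreme_share)  # [1.0, 0.33, 0.0] approximately
print(agree_share)    # [0.67, 1.0, 0.33] approximately
```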
Rater Bias
Rater bias refers to systematic errors in raters' evaluations or ratings that arise from factors
unrelated to the individuals or objects being evaluated. It can manifest in various forms,
including:
1. Halo Effect: Raters' overall impression of an individual or object influences their ratings
on specific items. A positive overall impression can lead to higher ratings on all items,
while a negative impression can lead to lower ratings.
2. Leniency or Strictness Bias: Raters consistently assign higher or lower ratings than other
raters. This can result in inconsistent or inaccurate ratings across individuals or objects.
3. Central Tendency Bias: Raters tend to cluster their ratings around the middle of the scale,
avoiding extreme ratings. This can mask true differences between individuals or objects.
Minimizing Response and Rater Bias in Likert Scales
Researchers can take several steps to minimize response and rater bias in Likert scales:
1. Careful Item Construction: Statements or questions should be clear, concise, unbiased,
and unambiguous.
2. Pilot Testing: Pre-testing the scale with a small sample can identify potential issues with
wording or response options that may introduce bias.
3. Reverse Scoring: Including reverse-scored items can help to detect and control for
acquiescence bias.
4. Training and Standardization: Raters should be trained to recognize and minimize bias
in their evaluations. Consistent rating procedures and standards should be implemented.
5. Blind Rating: When possible, raters should be blinded to participants' identities or other
potentially biasing information.
6. Statistical Techniques: Statistical methods can be used to identify and control for rater
bias, such as using multilevel modeling or adjusting for rater effects.
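One simple instance of the statistical adjustment mentioned in point 6 is centering each rater's scores on that rater's own mean, which removes constant leniency or strictness differences while preserving each rater's relative rankings. The ratings below are hypothetical.

```python
import numpy as np

# Hypothetical ratings of the same 4 performances by 3 raters (rows).
ratings = np.array([
    [4.0, 3.5, 4.5, 4.0],   # lenient rater
    [2.5, 2.0, 3.0, 2.5],   # strict rater
    [3.0, 2.5, 3.5, 3.0],
])

# Subtracting each rater's mean removes constant leniency/strictness,
# leaving only how each rater ranks performances relative to their norm.
adjusted = ratings - ratings.mean(axis=1, keepdims=True)
print(adjusted)  # identical rows here: the raters agree on relative order
```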
Real-Life Examples of Response and Rater Bias
Response and rater bias can have significant implications in various research settings:
• Customer Satisfaction Surveys: Social desirability bias may lead customers to overstate
their satisfaction to avoid appearing critical.
• Employee Performance Evaluations: Rater bias can influence promotion decisions or
performance-based pay, potentially unfairly disadvantaging certain employees.
• Personality Assessments: Acquiescence bias may inflate scores on personality traits,
while extreme response style can distort the overall profile.
• Educational Assessments: The halo effect or leniency bias can lead to inflated or deflated
grades, affecting students' academic progress and future opportunities.
Conclusion
Response and rater bias are pervasive challenges in research that can significantly impact the
accuracy and validity of findings. By understanding the nature of these biases and implementing
strategies to minimize their effects, researchers can enhance the reliability and trustworthiness
of their data and produce more meaningful insights.

Measurement: Other Measurement Types
In addition to surveys, questionnaires, and tests, researchers can employ several other
measurement types to gather data for their studies. These methods provide alternative
approaches to collecting information and can be particularly useful when traditional methods
are not feasible or appropriate.
Observational Measurement
Observational measurement involves directly observing and recording the behavior or activities
of individuals or groups. It is often used in field settings to study behavior in its natural context.
Types of Observational Measurement:
• Structured Observation: Involves observing and recording predetermined behaviors or
events using a standardized coding scheme.
• Unstructured Observation: Involves making open-ended observations and recording
detailed notes without a predetermined framework.
Examples of Observational Measurement:
• Observing children's play behavior in preschool to study social interactions and
communication patterns.
• Recording the frequency and duration of customer interactions with store displays to
assess the effectiveness of marketing strategies.
• Monitoring employee behavior during work meetings to evaluate team dynamics and
collaboration.
Trace Measurement
Trace measurement involves collecting indirect evidence of behavior or activities by analyzing
physical traces or records left behind. It can be used to study past or ongoing behavior without
directly observing individuals.
Types of Trace Measurement:
• Physical Traces: Analyzing physical evidence such as footprints, fingerprints, or wear
patterns to infer behavior.
• Records and Artifacts: Examining written records, digital data, or artifacts to understand
past activities or decisions.
Examples of Trace Measurement:
• Analyzing website traffic data to track user behavior and identify popular content.
• Examining financial records to assess spending patterns and identify potential fraud.
• Studying historical documents to understand past events, social norms, or cultural
practices.
Content Analysis
Content analysis involves systematically analyzing and interpreting the content of written,
spoken, or visual materials to draw inferences about the underlying meaning or intentions.
Types of Content Analysis:
• Manifest Content Analysis: Focuses on the explicit and observable content of the
material.
• Latent Content Analysis: Seeks to uncover deeper meanings and underlying themes
within the material.
Examples of Content Analysis:
• Analyzing newspaper articles to identify framing biases and political agendas.
• Examining social media posts to understand public sentiment and opinion on specific
topics.
• Studying corporate advertisements to decode persuasive strategies and cultural
representations.
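Manifest content analysis often begins with simple counts of explicit terms across documents. A minimal sketch using only Python's standard library; the documents and category keywords are invented for illustration:

```python
from collections import Counter
import re

documents = [
    "The new policy will cut taxes and create jobs.",
    "Critics say the policy favors the wealthy over jobs.",
]

# Manifest coding scheme: count explicit, observable keywords per category.
categories = {"economy": {"taxes", "jobs"}, "evaluation": {"critics", "favors"}}

counts = Counter()
for doc in documents:
    words = re.findall(r"[a-z]+", doc.lower())
    for cat, keywords in categories.items():
        counts[cat] += sum(w in keywords for w in words)

print(counts)  # Counter({'economy': 3, 'evaluation': 2})
```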
Archival Data
Archival data refers to existing records, documents, or artifacts collected and preserved for
historical, legal, or other purposes. It can provide valuable insights into past events, trends, or
behaviors.
Types of Archival Data:
• Official Records: Government records, census data, historical archives, and legal
documents.
• Personal Records: Diaries, letters, memoirs, and personal photographs.
• Organizational Records: Company records, financial statements, marketing materials,
and employee files.
Examples of Archival Data:
• Analyzing historical census data to study population growth, migration patterns, and
demographic trends.
• Examining corporate archives to understand business strategies, organizational changes,
and decision-making processes.
• Studying personal diaries and letters to gain insights into individual experiences, social
norms, and cultural practices.
Structured Interviews
Structured interviews involve asking participants a series of predetermined questions in a
standardized format. The questions are typically closed-ended, with fixed response options or
categories.
Characteristics of Structured Interviews:
• Standardized questions and response options
• Highly structured interview format
• Focus on collecting quantitative data
Examples of Structured Interviews:
• Conducting telephone surveys to collect demographic information and opinions on
current issues.
• Administering standardized personality assessments to measure specific traits or
constructs.
• Carrying out structured interviews with employees to gather data on job satisfaction,
workplace culture, or organizational climate.
Open Interviews
Open interviews involve asking participants open-ended questions that allow them to elaborate
on their thoughts, experiences, and perspectives. The interviewer has more flexibility in guiding
the conversation.
Characteristics of Open Interviews:
• Open-ended questions
• Flexible and conversational interview format
• Focus on collecting qualitative data
Examples of Open Interviews:
• Conducting in-depth interviews with experts or stakeholders to gain insights into complex
issues or policy decisions.
• Carrying out qualitative research to explore individual experiences, perceptions, and
beliefs.
• Gathering rich narrative data to understand social phenomena, cultural practices, or
personal histories.
