BRM - Instrument Preparation and Data Collection


Business Research Methods

Alemseged Gerezgiher
(BSc, MBA, PhD)

Part V
Instrument Preparation and
Data Collection
Chapter Five: Data Sources, Instrument preparation and
Data Collection

Variables, Hypotheses,
Measurement and Scaling

Data Types and Sources

Data Collection Techniques

Reliability Testing of Instruments

Validity Testing of Instruments


Introduction
 Proper data collection, retention, and sharing are vital to
the research enterprise.
 It requires careful identification and definition of variables
and their measurements.
 In survey (cause-effect) research, setting hypotheses is
required for proper data collection.
 Identification of the types of data, their sources, and the
tools and techniques of collection is also required.
 Data collection is the process of gathering and measuring
information on variables of interest in an accepted,
systematic fashion.
 Data collection methods vary by discipline and data type, but
the emphasis on ensuring accurate collection remains the same.
Variable Identification and Definition
Concepts vs Variables
 A concept is a generalized idea about objects, occurrences,
processes, etc.
 Concepts are highly subjective as their understanding varies from
person to person and therefore, as such, may not be measurable.
 An image, perception or concept that is capable of measurement-hence
capable of taking on different values-is called a variable.
◦ A representation of a concept in terms of its variation in degree,
variety or occurrence.
◦ A characteristic of a thing that can assume varying degrees or values.
 In other words, a concept that can be measured is called a variable.
 Measurability is the main difference between a concept and a variable.
 A concept cannot be measured whereas a variable can be subjected to
measurement by crude/refined or subjective/ objective units of
measurement.
Variable Identification…
Concept vs Variable

Concepts
 Examples: effectiveness, satisfaction, excellence, rich, etc.
 Subjective impressions; no uniformity in their understanding
among different people.
 As such, they cannot be measured.

Variables
 Examples: gender (male/female), attitude (good/bad), age (X years,
Y months), weight (X kg), income ($ per year), religion (Orthodox,
Muslim, Protestant, etc.), etc.
 Measurable, though the degree of precision varies from scale to
scale and from variable to variable (e.g., attitude is subjective,
income is objective).
Variable identification…
Conversion of Concepts into Variables
It is critical to survey research to understand how to go
from ideas to concepts to variables – operationalization.
If you are using a concept in your study, you need to consider its
operationalization – i.e., how it will be measured.
In most cases, to operationalize a concept you first need to go
through the process of identifying indicators.
Concepts are converted into variables using a set of rules,
benchmarks or yardsticks called indicators.
Indicators are a set of criteria reflective of the concept-which can
then be converted into variables.
 The process is called operationalization of concepts.
Variable identification…
Conversion of Concepts into Variables
Problem:
◦ In the majority of cases conceptual definitions are not
directly observable
◦ Theoretical concepts need to be operationalized
Operationalization
◦ …links the language of theory to the language of empirical
measures
◦ Is the process of deciding how to measure concepts
Task:
◦ Find empirical measures for theoretical concepts
Variable identification…
Conversion of Concepts into Variables
Simple concepts need only one variable
◦ E.g.: Age, gender, income/profit/sales, product
line, capital, market share, staff size,
productivity, performance, production capacity,
etc.
Complex concepts (multiple aspects) need more
variables
◦ E.g.: Satisfaction, commitment, well-being,
welfare, Intelligence, effectiveness, attitude,
motivation, product preference/taste, etc.
Variable identification…
Steps in converting concepts to variables
 Define concept (nominal definition)
What do you mean by the concept? (search
literature)
 Identify aspects (dimensions) of concepts
Use a tree diagram (and theoretical framework)
 Formulate indicators for every aspect of
interest
 Make variables that can measure indicators
(Items in a questionnaire)
Variable identification…
Conversion of concepts into variables
Concept: Rich
 Indicators: (a) income; (b) assets
 Variables: (a) income per year; (b) total value of assets
 Decision level (working definition): (1) if income > $100,000;
(2) if total assets > $300,000

Concept: High academic achievement
 Indicators: (a) average marks; (b) aggregate marks
 Variables: (a) percentage of marks obtained in an exam;
(b) percentage of aggregate marks
 Decision level: (1) if > 70%; (2) if > 90%

Concept: Effectiveness of a health program
 Indicators: (a) number of patients; (b) changes in mortality
 Variables: (a) number of participants served in a month;
(b) changes in the child death rate
 Decision level: (1) whether the difference in the before-and-after
level is satisfactorily significant; (2) increase or decrease in the rate
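As a hedged illustration (not part of the original slides), the minimal Python sketch below turns the "Rich" row of the table into a measurable decision rule; the function name and the treatment of the two thresholds as alternative cut-offs are assumptions.

# Minimal sketch: thresholds come from the table above; treating them as
# alternative cut-offs (income OR assets) is an illustrative assumption.

def classify_rich(income_per_year: float, total_assets: float) -> bool:
    """Operationalized decision rule for the concept 'rich'."""
    return income_per_year > 100_000 or total_assets > 300_000

# Illustrative respondent: $120,000/year income, $50,000 in assets
print(classify_rich(120_000, 50_000))  # True, via the income indicator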
Variable identification
Types of Variables
 Knowledge of the different variables and the way they are
measured plays a crucial role in research.
 Variables are important in bringing clarity and specificity to
the conceptualization of a research problem and to the
development of research instruments.
 There are a number of ways of classifying variables. Here we
look at them from three perspectives:
a) The causal relationship
b) The design of the study and
c) The unit of measurement
Variable identification…

Types of Variables – from the viewpoint of the causal relationship
 In studies that attempt to investigate a causal relationship,
four sets of variables may operate:
1. Independent (change) variable (IV) – the cause assumed to be
responsible for bringing about changes in a phenomenon or situation
◦also known as the predictor variable
◦this variable is the ‘cause’
◦can be manipulated/treated or allowed to vary
Variable identification…

Types of Variables – from the viewpoint of the causal relationship
2. Dependent (outcome/effect) variable (DV) – the outcome of the
changes brought about by the introduction of an independent variable
also known as the criterion variable
this variable is the ‘effect’
should only vary in response to the IV
Variable identification…
Types of Variables – from the viewpoint of the causal relationship
3. Extraneous variable (EV) – factors operating in a real-life
situation that may affect changes in the dependent variable.
◦ Independent variables that have not been controlled.
 These factors, not measured in the study, may increase or
decrease the magnitude or strength of the relationship
between the independent and dependent variables.
Variable identification…
Types of Variables – from the viewpoint of the causal relationship
4. Intervening (linking or connecting) variable: sometimes
called a confounding variable. It links the independent and
dependent variables.
In certain situations the relationship between an IV and a DV
cannot be estimated without the intervention of another variable.
The cause variable (IV) will have the assumed effect only in the
presence of an intervening variable.
E.g., income (IV) → medical care/nutrition (linking variable) → longevity (DV)
Variable identification…
Types of Variables – from the viewpoint of the study design
 A study that examines association or causation may be a controlled
experiment, a quasi-experiment, or an ex post facto (non-experimental)
study.
 In controlled experiments the independent (cause) variable may be introduced
or manipulated either by the researcher or by someone else who is providing
the service. In these situations, there are two sets of variables:
1. Active variables – those variables that can be manipulated, changed
or controlled.
2. Attribute variables – those that cannot be manipulated, changed or
controlled, and reflect the characteristics of the study population, e.g.,
age, gender, education, etc.
Variable identification…
Types of Variables – from the viewpoint of the unit of measurement
 From the viewpoint of the unit of measurement, there are two
ways of categorizing variables:
a) Categorical (qualitative) variables – categorize or group
individuals, objects, properties and responses based on common
characteristics.
 They are measured on nominal or ordinal measurement scales
Categorical variables can be of three types:
Constant;
Dichotomous (e.g. yes/no type); and
Polytomous (e.g. High, medium, low type).
Variable identification…
Types of Variables – from the viewpoint of the unit of measurement
b) Quantitative/numeric variables, on the other hand, have
continuity in their measurement.
They are measured on interval or ratio measurement scales
This could be also discrete or continuous
Continuous Variable: A quantitative variable, which can be measured
with an arbitrary degree of precision. Any two points on a scale of a
continuous variable have an infinite number of values in between. It is
generally measured.
Discrete Variable: A quantitative variable where values can differ only
by well-defined steps with no intermediate values possible. It is generally
counted.
Examples of quantitative variables: age, income, an attitude score, etc.
Measurement Scales
 Measurement is often viewed as being the basis of all
scientific inquiry, and measurement techniques and
strategies are therefore an essential component of research
methodology
 Measurement can be defined as a process through which
researchers describe, explain, and predict the phenomena
and constructs of our daily existence
• For example, we measure how long we have lived in
years, our financial success in Birr, and the distance
between two points in kilometers
• Important life decisions are based on performance on
standardized tests that measure intelligence, aptitude,
achievement, or individual adjustment
Measurement…
 It is simply the way you measure the different variables
identified in your study
 It is important to understand that the way you measure
the variables in your study determines whether a study
is quantitative or qualitative in nature, and the type of
analysis that can be performed
• The concept of measurement is important in research
studies in two key areas
• First, measurement enables researchers to quantify
abstract constructs and variables
Measurement…
• Second, the level of statistical sophistication used to
analyze data derived from a study is directly dependent on
the scale of measurement used to quantify the variables of
interest
• There are two basic categories of data: nonmetric and
metric
• Nonmetric data (which cannot be quantified) are
predominantly used to describe and categorize.
• Metric data are used to examine amounts and magnitudes.
• There are four main scales of measurement subsumed
under the broader categories of nonmetric and metric
measurement: nominal scales, ordinal scales, interval
scales, and ratio scales
Measurement…
Nominal measurement scales and data
◦ Used only to qualitatively classify or categorize objects;
not to quantify.
◦ No absolute zero point.
◦ Cannot be ordered in a quantitative sequence.
◦ Impossible to conduct standard mathematical operations.
◦ Examples include gender, religious and political
affiliation, and marital status.
◦ Purely descriptive (frequency and percentage) and
cannot be manipulated mathematically.
 Example
◦ Your gender: 1. Male 0. Female
Measurement…
Ordinal measurement scales and data
◦ Build on nominal measurement.
◦ Categorize a variable and its relative magnitude in
relation to other variables.
◦ Represent an ordering of variables with some number
representing more than another.
◦ Information about relative position but not the interval
between the ranks or categories.
◦ Qualitative in nature.
◦ Example would be finishing position of runners in a
race.
◦ Lack the mathematical properties necessary for
sophisticated statistical analyses.
Measurement…
Ordinal…
Ordinal scales help to rank-order preferences, likes and dislikes,
choices, etc.
Example:
Which Kaizen implementation pillars has your organization been using?
(Please rank from 1 = most to 4 = least.)

No.  Kaizen Pillar                        Rank
1    Quality control circle (QCC)         ____
2    5S                                   ____
3    Operation standard and time study    ____
4    Elimination of wastes (MUDAs)        ____
Measurement…
Interval measurement scales and data
◦ Quantitative in nature.
◦ Build on ordinal measurement.
◦ Provide information about both order and distance between
values of variables.
◦ Numbers scaled at equal distances.
◦ No absolute zero point; zero point is arbitrary.
◦ Addition and subtraction are possible.
◦ Lack of an absolute zero point makes division and
multiplication impossible.
◦ Mean and standard deviation can be calculated
E.g. Age in years: a) < 19 b) 19-24 c) 25-30 d) 31-35 etc
Measurement…
Interval scale
Likert scales are typical examples of interval scales.
Example: PERCEIVED QUALITY IMPROVEMENT – With regard to your company's
situation, please indicate the perceived quality improvement obtained
since Kaizen implementation on a scale from 1 = "strongly disagree"
to 7 = "strongly agree".

Code  Item (rated 1-7)
PQ1   Implementation of Kaizen enabled us to improve product/service quality
PQ2   Implementation of Kaizen has improved our ability to meet customers' specifications
PQ3   Since we introduced Kaizen there has been a reduction in the percentage of defective products
PQ4   Since we introduced Kaizen we have reduced rework in production
PQ5   Implementation of Kaizen enabled us to meet technical specifications
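A minimal sketch, assuming the PQ1-PQ5 items are treated as interval data: a respondent's perceived-quality score can then be summarised by the item mean. The response values below are hypothetical.

# Hypothetical single respondent's answers on the 1-7 agreement scale
import statistics

responses = {"PQ1": 6, "PQ2": 5, "PQ3": 7, "PQ4": 6, "PQ5": 5}

# Mean (and standard deviation) are meaningful because the scale is treated
# as interval; on a purely ordinal reading only median/mode would apply.
composite = statistics.mean(responses.values())
print(round(composite, 2))  # 5.8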
Measurement…
• Ratio measurement scales and data
◦ Identical to the interval scale, except that they have an
absolute zero point.
◦ Unlike with interval scale data, all mathematical operations
are possible.
◦ Examples include height, weight, and time.
◦ Highest level of measurement and the most powerful.
◦ Allows for the use of sophisticated statistical techniques.
◦ Some examples of ratio scales are those pertaining to actual
age, income, and the number of organizations individuals
have worked for.
 Examples
◦ Your age in years:____________
◦ Your monthly income in Birr:_________
◦ Number of years the firm has been in business ________
Measurement…
Mathematical/statistical manipulations on the different measurement scales
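As a hedged summary of the manipulations conventionally permitted at each level (standard practice, not reproduced from the original slide's table), the sketch below encodes them as a simple lookup; each level also inherits the operations listed before it (e.g., a ratio-scaled variable also supports means, medians and modes).

# Hedged summary of conventionally permissible statistics per scale of
# measurement; each level also inherits the operations of the levels
# listed before it (nominal -> ordinal -> interval -> ratio).

PERMISSIBLE_STATISTICS = {
    "nominal":  ["frequency", "percentage", "mode", "chi-square test"],
    "ordinal":  ["median", "percentiles", "Spearman rank correlation"],
    "interval": ["mean", "standard deviation", "Pearson correlation", "t-test"],
    "ratio":    ["geometric mean", "coefficient of variation",
                 "all arithmetic operations, including multiplication and division"],
}

for scale, operations in PERMISSIBLE_STATISTICS.items():
    print(f"{scale:>8}: {', '.join(operations)}")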
Hypotheses Setting
 Hypotheses are untested, tentative statements that specify a
relationship between two or more variables.
 Example:
◦ Age of employees affects turnover intention.
 Hypotheses should be clearly stated at the beginning of
a study.
◦ You do not have to have a hypothesis to conduct research;
general research questions are enough.
Hypotheses…
Characteristics of a hypothesis
◦ States a relationship between two or more variables
◦ Is stated affirmatively (not as a question)
◦ Can be tested with empirical evidence
◦ Most useful when it makes a comparison
◦ States how multiple variables are related
◦ Theory or underlying logic of the relationship makes
sense
Hypotheses…
 Positive and Negative (Inverse) Relationships
◦ Positive: as values of the independent variable increase,
the values of the dependent variable also increase
 Example: as age increases, income also increases
◦ Negative (inverse): as values of the independent variable
increase, the values of the dependent variable decrease
(or vice versa)
 Example: criminal behavior decreases with improved
socio-economic status
Hypotheses…
Two-directional Hypotheses
◦ More general expression of a hypothesis
◦ Usually default in statistical packages
◦ Suggests that groups are different or concepts
related, but without specifying the exact
direction of the difference
Example: Men and women have different
views on business start-ups.
Hypotheses…
One-directional hypotheses
◦ More specific expression of a hypothesis
◦ Specifies the precise direction of the
relationship between the dependent and
independent variables.
◦ Example: Men tend to start businesses at a younger
age than women (see the sketch below).
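A minimal sketch of how the directional choice maps onto a statistical test, using SciPy's independent-samples t-test (the alternative argument requires SciPy 1.6 or later). The founding-age samples are synthetic and purely illustrative.

# Two-directional vs one-directional hypothesis tests (illustrative data)
from scipy import stats

age_men = [24, 27, 29, 31, 26, 28, 30]
age_women = [29, 32, 30, 34, 31, 33, 28]

# Two-directional hypothesis: the groups differ (no direction specified)
_, p_two_sided = stats.ttest_ind(age_men, age_women, alternative="two-sided")

# One-directional hypothesis: men start younger, i.e. mean(men) < mean(women)
_, p_one_sided = stats.ttest_ind(age_men, age_women, alternative="less")

print(f"two-tailed p = {p_two_sided:.3f}, one-tailed p = {p_one_sided:.3f}")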
Variables, measurement and hypotheses
No.  Variable  Description                     Type        Measurement           Hypothesis
1    AHH       Age of household head           Continuous  Number of years       +/-
2    SHH       Sex of the household head       Dummy       0 = female, 1 = male  +/-
3    HHEL      Household head education level  Continuous  Years of schooling    +
4    MMV       Marketed milk volume            Continuous  Litres                +
5    LH        Livestock holding (TLU)         Continuous  Number                +
6    NMCH      Number of milking camel heads   Continuous  Number                +
7    ECP       Experience in camel production  Continuous  Number of years       +
8    AC        Access to credit                Dummy       0 = yes, 1 = no       +
9    DM        Distance to market              Continuous  Kilometres            -
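A minimal sketch of the codebook above as a data structure, with the hypothesised sign recorded so that estimated coefficients can later be checked against expectations; the dictionary layout and helper function are illustrative assumptions.

# Codebook: variable -> (description, type, measurement, hypothesised sign)
CODEBOOK = {
    "AHH":  ("Age of household head",          "continuous", "years",              "+/-"),
    "SHH":  ("Sex of household head",          "dummy",      "0=female, 1=male",   "+/-"),
    "HHEL": ("Household head education level", "continuous", "years of schooling", "+"),
    "MMV":  ("Marketed milk volume",           "continuous", "litres",             "+"),
    "LH":   ("Livestock holding (TLU)",        "continuous", "number",             "+"),
    "NMCH": ("Number of milking camel heads",  "continuous", "number",             "+"),
    "ECP":  ("Experience in camel production", "continuous", "years",              "+"),
    "AC":   ("Access to credit",               "dummy",      "0=yes, 1=no",        "+"),
    "DM":   ("Distance to market",             "continuous", "kilometres",         "-"),
}

def sign_matches_hypothesis(variable: str, estimated_coefficient: float) -> bool:
    """Check whether an estimated coefficient agrees with the hypothesised sign."""
    expected = CODEBOOK[variable][3]
    if expected == "+/-":
        return True  # no direction was hypothesised
    return (estimated_coefficient > 0) == (expected == "+")

print(sign_matches_hypothesis("DM", -0.42))  # True: negative sign was expected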


Data Types and Sources
 Data is a collection of facts such as:
◦ Numbers
◦ Words
◦ Figures
◦ Measurements
◦ Observations
◦ Descriptions
 Data is the actual measurements or observations that
result from an investigation or survey
 or the values (response) of the variable associated with
an element of a population or a sample
Data type…
 Data can be classified in different ways:
 Based on the type of variable
i) numerical (quantitative) data
ii) categorical (qualitative) data
 Based on Time frame
i) Cross-sectional data
ii) Longitudinal data
 Based on Source
i) Primary data
ii) Secondary data
Data type…
 Based on the type of variable
 Qualitative data
 are measurements that cannot be measured on a natural
numerical scale; they can only be classified into one of a
group of categories.
 Categorical data may be nominal or ordinal.
 Quantitative data
 are data that are expressed numerically or they are
numerical observations of variables.
 Quantitative data are obtained using either the interval or
ratio scale of measurement.
 Further, Numerical data may be discrete or continuous.
Data type..
 Based on Time frame
 Cross-sectional data-
 are data collected from a sample at the same or
approximately the same point in time.
 Longitudinal data-
 are data collected over several time periods or at
successive points in time.
Data type…
 Based on Source
 Primary Data:
 are original data collected by the researcher in the process of
investigation.
 are data that are collected from primary source.
 Secondary Data:
 are data that are collected /obtained from published or unpublished
data collected by others.
are data that are collected from secondary sources, such as journals,
reports, books, the internet, newspapers, etc.
 Before using secondary data the investigator should examine the
following aspects:
 Whether the data are suitable for the purpose of investigation
 Whether the data are adequate for the purpose of the investigation
Data Collection Methods
 It is the process of gathering and measuring information on
variables of interest in an accepted systematic fashion.
 Data collection methods vary by discipline and data types;
but the emphasis on ensuring accurate collection remains
the same.
 Consequences from improperly collected data:
Inability to repeat and validate the study.
Distorted, inaccurate findings.
Wasted resources.
Misleading other researchers to pursue fruitless avenues
of investigation.
Misleading policy recommendations
Data collection…
 Data can be acquired from Secondary and primary sources
or from both.
Secondary Sources of data
◦ Secondary sources are those, which have been collected
by other individuals or agencies.
◦ As much as possible secondary data should always be
considered first, if available.
 Why reinvent the wheel if the data already exist?
 But, when dealing with secondary data you should ask:
 Is the owner of the data making them available to you?
 Is it free of charge? If not, how will you pay?
 Are the data suitable for your investigation?
 Is a description of the sampling technique (i.e., how the
sample was collected) available?
Data collection…
Sources of Secondary Data
 Secondary data may be acquired from various sources:
 Documents: reports of various kinds, books, periodicals, reference
books (encyclopedias), university publications (theses, dissertations,
etc.), policy documents, statistical compilations, proceedings,
personal documents (historical documents), data archives, etc.
 The Internet
Advantages of Secondary data
 Can be found more quickly and cheaply.
 Most research on past events or distant places has to rely on
secondary data sources.
Limitations
◦ Authenticity: not much may be known about whether the data are
genuine, credible, representative, complete, or up to date.
Data collection…
Quantitative Primary Data Collection Methods
 This method involves the collection of data so that
information can be quantified and subjected to statistical
treatment.
 Primary data may be collected through:
Direct personal observation method, or
Survey questionnaire
From a literature search, or by combining them.
Data collection…

The Observation Method
◦ Observation includes the full range of monitoring
behavioral and non-behavioral activities.
 Advantages
 It is less demanding and has less bias.
 One can collect data at the time it occurs and need
not depend on reports by others.
 with this method one can capture the whole event as
it occurs.
Data collection…

Weakness of the Method
 The observer normally must be at the scene of the
event when it takes place.
◦ But it is often difficult or impossible to predict
when and where an event will occur.
 It is also a slow and expensive process.
 Its most reliable results are restricted to data that can
be determined by an open or deliberate action or
surface indicator.
 Limited as a way to learn about the past, and difficult
to gather information on such topics as intentions,
attitudes, opinions and preferences.
Data collection…
The Survey Questionnaire Method:
 To survey is to ask people questions in a questionnaire –
mailed to respondents or administered by interviewers.
Strength of the Survey Method
 It is a versatile or flexible method - capable of many
different uses.
 Surveys tend to be more efficient and economical than
observations
Weakness of the Method
◦ The quality of information secured depends heavily on
the ability and willingness of the respondents.
 A respondent may interpret questions or concept
differently from what was intended by the researcher.
 A respondent may deliberately mislead the researcher
by giving false information.
Data collection…
 Surveys could be carried out through:
 Face to face personal interview
 By telephone interview
 By mail or e-mail, or
 By a combination of all these.
 In a survey method, the preparation of a questionnaire is
very central.
Questionnaire Design
 Actual instrument design begins by drafting specific measurement
questions in the form of a questionnaire.
 Questionnaires are easy to analyze.
Data entry and tabulation can be easily done with many
computer software packages.
 Questionnaires are familiar to most people.
Nearly everyone has had some experience completing
questionnaires and they generally do not make people
apprehensive.
 Questionnaires reduce bias.
There is uniform question presentation.
The researcher's own opinions will not influence the answer.
Questionnaire Design
The main Components of a questionnaire
◦ Covering letter: brief purpose of the survey, who is
doing it, time involved, etc.
◦ Identification data: respondent’s name, address, time
and date of interview, code of interviewer, etc.
◦ Instruction: Include clear and concise instructions on
how to complete the questionnaire.
◦ Information sought: major portion of the questionnaire
Questionnaire Design
 When the goals of a study can be expressed in a few clear
and concise sentences, the design of the questionnaire
becomes considerably easier.
 Hence, ask only questions that directly address the study
goals.
◦ Avoid the temptation to ask questions because it would
be "interesting to know".
Questionnaire Design
 As a general rule, long questionnaires get less response
than short questionnaires.
◦ Hence, keep your questionnaire short to maximize the
response rate – ask only the essentials.
 Minimizing the number of questions is highly desirable,
but we should never try to ask two questions in one.
Questionnaire Design
 In developing a survey instrument the following issues
need to be considered carefully:
 Question content
 Question wording
 Response form
 Question sequence
Questionnaire Design
1. Question Content
 Question content depends on the respondent’s:

◦ ability, and
◦ willingness to answer the question accurately.
a) Respondents’ ability:
◦ The respondent information level should be
assessed.
 Questions that overtax the respondent’s recall
ability may not be appropriate.
Questionnaire Design
b) Willingness of respondent to answer
◦ Even if respondents have the information, they may be
unwilling to give it.
◦ Some of the main reasons for unwillingness:
 The situation is not appropriate for disclosing the
information – embarrassing or sensitive
 Disclosure of information is a potential threat to the
respondent
 The topic is irrelevant or uninteresting to them.
Questionnaire Design
 To secure more complete and truthful information
 Use indirect statements i.e., “other people”
 Change the design of the questioning process.
Apply appropriate questioning sequences that lead a
respondent gradually from "safe" questions to
those that are more sensitive.
 Begin with non-threatening and interesting
questions.
Questionnaire Design
Different types of questions
 Types of questions depend on research question and affect
the nature of analysis
◦ Attributes – characteristics of respondents (e.g., age,
sex, etc.)
◦ Behaviour – what people do
◦ Beliefs – what people believe
◦ Knowledge – what people know
◦ Attitudes – what is desirable
Questionnaire Design

 Questions should be
◦ Relevant
◦ reliable – the same individual should give the same
response on repetition, and different people should
understand the question in the same way
◦ discriminating – should capture sufficient variation
◦ increasing response rates – sensitive questions and
poor survey administration can reduce response rates
Questionnaire Design
 Questions should be
◦ Simple and short
◦ About issues respondents have knowledge of
◦ With same meaning to all
 Questions should not be
◦ Double-barrelled – do not ask two questions in one
◦ Leading – push people to answer in a certain way
◦ Avoid words like usually, often, sometimes,
occasionally, seldom, etc.
Questionnaire Design
2. Question Wording: Using Shared Vocabulary
 In a survey the two parties must understand each
other and this is possible only if the vocabulary used
is common to both parties.
 So, don’t use uncommon words or long sentences or
abbreviations and make items as brief as possible.
And, don’t use emotionally loaded or vaguely
defined words.
Questionnaire Design
3. Response structure or format -
 Refers to the degree and form of the structure imposed on
the responses.
◦ Open-ended or closed questions
a) Open Ended Questions
◦ In open-ended questions respondents can give any
answer.
 They may express themselves extensively.
 The freedom may be limited to choosing a word in a
"fill-in" question.
Questionnaire Design
Advantage
◦ Permit an unlimited number of answers
◦ Respondents can qualify and clarify responses
◦ Permit creativity, self expression, etc.
Limitations
 responses may not be consistent.
 Some responses may be irrelevant
 Comparison and statistical analysis difficult.
 Articulate and highly literate respondents have an
advantage, etc.
Questionnaire Design
b) Closed Questions
◦ Generally preferable in large surveys.
 dichotomous or multiple-choice questions.
Advantages
◦ Easier and quicker for respondents to answer
◦ Easier to compare the answers of different respondents
◦ Easier to code and statistically analyze
◦ Are less costly to administer
◦ reduce the variability of responses
◦ make fewer demands on interviewer skill, etc.
◦ don’t discriminate against the less talkative
Questionnaire Design
Limitations
◦ Can suggest ideas that the respondents would not
otherwise have
◦ too many choices can confuse respondents
 During the construction of closed-ended questions:
 The response categories provided should be exhaustive.
 They should include all the possible responses that
might be expected.
 The answer categories must be mutually exclusive.
Questionnaire Design
4) Question Sequence – the order of the questions
 The order in which questions are asked can affect the
overall data collection activity.
 Grouping questions that are similar will make the
questionnaire easier to complete, and the respondent will
feel more comfortable.
◦ Questions that use the same response formats, or those
that cover a specific topic, should appear together.
Questionnaire Design
 Questions that jump from one unrelated topic to another
are not likely to produce high response rates.
 Each question should follow comfortably from the
previous question.
 Transitions between questions should be smooth.
Questionnaire Design
5) Physical Characteristics of a Questionnaire
 An improperly laid out questionnaire can lead respondents
to miss questions and can confuse them.
 So, take time to design a good layout
◦ ease to navigate within and between sections
◦ ease to use the questionnaire in the field; e.g., questions
on recto and codes on verso sides of the questionnaire
◦ leave sufficient space for open-ended questions
◦ questionnaire should be spread out properly.
Questionnaire Design
 Putting more than one question on a line will result in
some respondents skipping the second question.
 Abbreviating questions will result in misinterpretation
of the question.
Formats for Responses
◦ A variety of methods are available for presenting a
series of response categories.
 Boxes
 Blank spaces
Questionnaire Design
Providing Instructions
◦ Every questionnaire, whether self-administered by the
respondent or administered by an interviewer, should
contain clear instructions.
 General instructions: basic instructions to be followed in
completing it.
 Introduction: If a questionnaire is arranged into subsections
it is useful to introduce each section with a short statement
concerning its content and purpose.
Questionnaire Design
 Specific Instructions: Some questions may require
special instructions.
 Interviewer instructions: it is important to provide clear
complementary instructions to the interviewer where
appropriate.
Questionnaire Design
6) Reproducing the questionnaire
 A neatly reproduced instrument will encourage a higher
response rate, thereby providing better data.
◦ Pilot Survey: The final test of a questionnaire is to try
it on representatives of the target audience.
◦ If there are problems with the questionnaire, they
almost always show up here.
Qualitative data collection approaches

◦ Qualitative data can be acquired from:
 case studies,
 rapid appraisal methods,
 focus group discussions, and
 key informant interviews.
i) Case studies
 A case study research involves a detailed investigation of
a particular case.
Through Interviews or Through Direct observation
(field visits).
Qualitative data…

ii) Participatory Rapid Appraisal (PRA)
◦ PRA is a systematic but semi-structured activity of expert
observation, often carried out by a multidisciplinary team.
 The method:
 takes only a short time to complete,
 tends to be relatively cheap, and
 makes use of more 'informal' data collection
procedures.
 Includes interviews with individuals, households and key
informants as well as group interview techniques.
Qualitative data…

iii) Focus group discussions (FGDs)
A FGD is a group discussion guided by a facilitator,
during which group members talk freely and
spontaneously about a certain topic.
The group members are selected by the researcher and are
expected to have experience of, or opinions on, the topic.
It is more than a question-answer interaction.
 group members discuss the topic and interact among
themselves with guidance from the facilitator.
Qualitative data…

Why use focus groups?
◦ The main purpose of a focus group research is to draw
upon respondents’ attitudes, feelings, beliefs,
experiences and reactions.
 attitudes, feelings and beliefs are more likely to be
revealed through interaction in a social setting.
◦ Compared to individual interviews, which aim to
obtain individual attitudes, beliefs and feelings, focus
groups elicit a multiplicity of views and emotional
processes within a group context.
Qualitative data…

Strengths and weaknesses of FGDs
 FGDs provide valuable information in a short period
of time and at relatively low cost if the groups have
been well chosen, in terms of composition and number.
◦ BUT, FGDs should not be used for quantitative
purposes, such as the testing of hypotheses or the
generalization of findings to larger areas, which
would require more elaborate surveys.
Qualitative data…

 It may also be risky to use FGDs as a single tool.
◦ because in group discussions people tend to centre their
opinions on the most common ones; and
◦ because, with very sensitive topics, group members
may hesitate to express their feelings and experiences
freely.
 Therefore, it is advisable to combine FGDs with other
methods (in-depth interviews).
Qualitative data…

iv) Key Informant Interview
◦ an interviewing process for gathering information
from opinion leaders such as elected officials,
government officials, and business leaders, etc.
◦ This technique is particularly useful for:
 Raising community awareness about socio-
economic issues
 Learning minority viewpoints
 Gaining a deeper understanding of opinions and
perceptions, etc.
Triangulation
Triangulation
 refers to the use of more than one approach to the
investigation of a research question in order to
enhance confidence in the findings.
 The purpose of triangulation is to obtain
confirmation of findings through convergence of
different perspectives.
Why use triangulation
◦ By combining multiple methods and empirical materials,
researchers can hope to overcome the weaknesses, biases
and problems that are associated with any single method.
Triangulation

Taxonomy of triangulation
1. Data triangulation: Involves gathering data at different
times and situations, from different subjects using
different sampling techniques.
◦ Surveying relevant stakeholders about the impact of a
policy intervention would be an example.
E.g.: using survey data together with time-series data.
Triangulation

2. Investigator triangulation: involves using more than
one field researcher to collect and analyze the data
relevant to a specific research object.
 Asking scientific experimenters to attempt to
replicate each other’s work is an example.
3. Theoretical triangulation: involves making explicit
references to more than one theoretical tradition to
analyze data.
 This is intrinsically a method that allows for
different disciplinary perspectives.
Triangulation

4. Methodological triangulation: the combination of different
research methods, or of different varieties of the same
method – there are two forms of methodological triangulation.
Within method triangulation involves making use of
different varieties of the same method.
Making use of alternative econometric estimators
would be an example.
Between method triangulation involves making use of
different methods.
Using ‘quantitative’ and ‘qualitative’ methods in
combination is an example.
Instrument Testing and Validation:
Reliability vs Validity
Reliability and Validity of A Measurement
 Measurement involves assigning scores to individuals so
that they represent some characteristic of the individuals.
 But, how do researchers know that the scores actually
represent the characteristic, especially when it is a
construct like intelligence, self-esteem, corruption etc?
 Researchers do not simply assume that their measures
work. Instead, they collect data to demonstrate that they
work.
 If their research does not demonstrate that a measure
works, they stop using it.
 In evaluating a measurement method, researchers
consider two general dimensions: reliability and validity.
Reliability and Validity of A Measurement
 Reliability: refers to the consistency of a measure. Three
types of reliability:
a) over time (test-retest reliability),
b) across items (internal consistency), and
c) across different researchers (inter-rater reliability).
a) Test-Retest Reliability
 When researchers measure a construct that they assume to be
consistent across time, then the scores they obtain should also
be consistent across time.
 E.g., Intelligence is generally thought to be consistent across
time. A person who is highly intelligent today will be highly
intelligent next week implying that a good measure of
intelligence should produce roughly the same scores at both
times.
Reliability and Validity of A Measurement
 Assessing test-retest reliability requires using the
measure on a group of people at one time, using it again
on the same group of people at a later time, and then
looking at test-retest correlation between the two sets of
scores.
 This is typically done by graphing the data in a scatter
plot and computing Pearson’s r.
 In general, a test-retest correlation of +.80 or greater is
considered to indicate good reliability.
 Again, high test-retest correlations make sense when the
construct being measured is assumed to be consistent
over time, e.g., intelligence. Other constructs, e.g., mood,
are not assumed to be stable over time.
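A minimal sketch of a test-retest reliability check, assuming the same instrument was administered twice to the same respondents; the scores below are illustrative.

# Test-retest reliability via Pearson's r on two administrations
from scipy.stats import pearsonr

time1 = [42, 55, 38, 61, 47, 50, 44, 58]
time2 = [45, 53, 40, 63, 46, 52, 41, 57]

r, _ = pearsonr(time1, time2)
print(f"test-retest r = {r:.2f}")  # roughly +.80 or higher suggests good reliability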
Reliability and Validity of A Measurement
b) Internal Consistency
 It is the consistency of people’s responses across the
items on a multiple-item measure.
 All the items on such measures are supposed to reflect
the same underlying construct, so people’s scores on
those items should be correlated with each other.
 If people’s responses to the different items are not
correlated with each other, then it doesn’t make sense to
claim that they are all measuring the same underlying
construct.
 Like test-retest reliability, internal consistency can only
be assessed by collecting and analyzing data. One
approach is to look at a split-half correlation.
Reliability and Validity of A Measurement
 Perhaps the most common measure of internal
consistency used by researchers is a statistic
called Cronbach’s α (the Greek letter alpha).
 Conceptually, α is the mean of all possible split-half
correlations for a set of items. For example, there are 252
ways to split a set of 10 items into two sets of five.
Cronbach’s α would be the mean of the 252 split-half
correlations.
 Again, a value of +.80 or greater is generally taken to
indicate good internal consistency.
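A minimal sketch of Cronbach's α computed from its usual formula, assuming rows are respondents and columns are items of one scale; the response matrix is illustrative.

# Cronbach's alpha = k/(k-1) * (1 - sum of item variances / variance of totals)
import numpy as np

items = np.array([
    [4, 5, 4, 5],
    [2, 3, 2, 2],
    [5, 5, 4, 5],
    [3, 3, 3, 4],
    [4, 4, 5, 4],
])  # 5 respondents x 4 items

k = items.shape[1]
sum_item_variances = items.var(axis=0, ddof=1).sum()
total_score_variance = items.sum(axis=1).var(ddof=1)
alpha = (k / (k - 1)) * (1 - sum_item_variances / total_score_variance)
print(f"Cronbach's alpha = {alpha:.2f}")  # +.80 or above is generally taken as good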
Reliability and Validity of A Measurement
c) Interrater Reliability
 Many behavioural measures involve significant judgment on the
part of an observer or a rater.
 Inter-rater reliability is the extent to which different observers
are consistent in their judgments.
 For example, if you were interested in measuring university
students’ social skills, you could make video recordings of them
as they interacted with another student whom they are meeting
for the first time. Then you could have two or more observers
watch the videos and rate each student’s level of social skills.
 Interrater reliability is often assessed using Cronbach’s α when
the judgments are quantitative or an analogous statistic
called Cohen’s κ (the Greek letter kappa) when they are
categorical.
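A minimal sketch of inter-rater agreement on categorical judgments using Cohen's κ via scikit-learn; the two raters' labels are illustrative.

# Cohen's kappa for two observers rating the same set of video clips
from sklearn.metrics import cohen_kappa_score

rater_a = ["high", "low", "medium", "high", "low", "medium", "high", "low"]
rater_b = ["high", "low", "medium", "medium", "low", "medium", "high", "low"]

kappa = cohen_kappa_score(rater_a, rater_b)
print(f"Cohen's kappa = {kappa:.2f}")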
Reliability and Validity of A Measurement
Validity
 Validity is the extent to which the scores from a measure
represent the variable they are intended to measure.
 But how do researchers make this judgment? We have
already considered one factor that they take into account
—reliability.
 When a measure has good test-retest reliability and
internal consistency, researchers should be more
confident that the scores represent what they are
supposed to.
 Outside of statistical research, reliability and validity are
used interchangeably. For research and testing, there are
subtle differences.
Overview of Reliability and Validity
 Validity is defined as the extent to which an instrument
measures what it purports to measure.
 Example: If you take the ACT five times, you should get
roughly the same results every time. (Reliability)
 The ACT is valid (and reliable) because it measures what
a student learned in high school. (Validity)
 Tests that are valid are also reliable. However, tests that
are reliable aren’t always valid.
 For example, let’s say your thermometer was a degree
off. It would be reliable (giving you the same results each
time) but not valid (because the thermometer wasn’t
recording the correct temperature).
Reliability and Validity of A Measurement
 Here, we consider three basic kinds: face validity, content
validity, and criterion validity.
 Face Validity

a) Face validity is the extent to which a measurement method
appears “on its face” to measure the construct of interest.
 Most people would expect a self-esteem questionnaire to
include items about whether they see themselves as a person
of worth and whether they think they have good qualities. So
a questionnaire that included these kinds of items would have
good face validity.
 Face validity is at best a very weak kind of evidence that a
measurement method is measuring what it is supposed to. It is
also the case that many established measures in research work
quite well despite lacking face validity.
Reliability and Validity of A Measurement
b) Content Validity
 Content validity is the extent to which a measure
“covers” the construct of interest.
 Like face validity, content validity is not usually assessed
quantitatively. Instead, it is assessed by carefully
checking the measurement method against the conceptual
definition of the construct. E.g. Dimensions of employee
reward system
c) Criterion Validity
 Criterion validity is the extent to which people’s scores
on a measure are correlated with other variables (known
as criteria) that one would expect them to be correlated
with.
 E.g., people’s scores on a measure of poverty should be
negatively correlated with their income.
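A minimal sketch of such a criterion-validity check: scores on a hypothetical poverty measure should correlate negatively with income, the criterion here; the values are illustrative.

# Criterion validity: correlate the measure with an external criterion
from scipy.stats import pearsonr

poverty_score = [8, 3, 6, 2, 7, 4, 9, 1]
monthly_income = [1200, 4800, 2100, 5600, 1500, 3900, 900, 6200]

r, p = pearsonr(poverty_score, monthly_income)
print(f"r = {r:.2f}")  # a clearly negative r supports criterion validity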
Questionnaire Validation in a Nutshell
1) Establish face validity: there are two important steps in
this process.
a) Have experts or people who understand your topic check
whether the questions effectively capture the topic under
investigation.
b) Have a psychometrician (i.e., an expert on questionnaire
construction) review the instrument.
2) Pilot Test: the second step is to pilot test the survey on a
subset of your intended population.
3) Clean Dataset: after collecting pilot data, enter the
responses into a spreadsheet and clean the data.
Questionnaire Validation in a Nutshell
4) Principal Component Analysis: identify underlying
components using principal components analysis (PCA).
Questions that measure the same thing should load onto
the same factors. Factor loadings range from -1.0 to 1.0;
retain items with loadings of ±0.60 or higher (see the
sketch after this slide).
5) Cronbach’s Alpha: check the internal consistency of
questions loading onto the same factors. This step
basically checks the correlation between questions
loading onto the same factor. It is a measure of reliability
in that it checks whether the responses are consistent. A
standard test of internal consistency is Cronbach’s Alpha
(CA). Cronbach Alpha values range from 0 – 1.0. In
most cases the value should be at least 0.60 or higher.
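An illustrative sketch of step 4 using scikit-learn: extract component loadings from standardised pilot responses and flag items below the ±0.60 cut-off (step 5, Cronbach's α, was sketched earlier under internal consistency). The data matrix and item labels are hypothetical.

# PCA loadings from pilot data; flag items below the +/-0.60 cut-off
import numpy as np
from sklearn.decomposition import PCA

X = np.array([
    [5, 4, 5, 2, 1],
    [4, 4, 4, 1, 2],
    [2, 1, 2, 5, 4],
    [1, 2, 1, 4, 5],
    [5, 5, 4, 2, 2],
    [2, 2, 1, 5, 5],
], dtype=float)  # 6 pilot respondents x 5 questionnaire items

# Standardise items so that loadings fall on the usual -1..+1 scale
X_std = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)

pca = PCA(n_components=2).fit(X_std)

# Loadings = component eigenvectors scaled by the square root of their eigenvalues
loadings = pca.components_.T * np.sqrt(pca.explained_variance_)

for i, item_loadings in enumerate(loadings, start=1):
    keep = bool(np.any(np.abs(item_loadings) >= 0.60))
    print(f"Q{i}: loadings {np.round(item_loadings, 2)} -> {'keep' if keep else 'review'}")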
Questionnaire Validation in a Nutshell
6) Revise (if needed): the final step is revising the survey
based on information gleaned from the PCA. Consider
that even though a question does not adequately load
onto a factor, you might retain it because it is important.
You can always analyze it separately. If the question is
not important you can remove it from the survey.
Similarly, if removing a question greatly improves the
factor loading for a group of questions, you might just
remove it from its factor loading group and analyze it
separately.
