Unit II
Research Design and Measurement
Research design – Definition – types of
research design – exploratory and causal
research design – Descriptive and
experimental design – different types of
experimental design – Validity of findings –
internal and external validity – Variables in
Research – Measurement and scaling –
Different scales – Construction of instrument –
Validity and Reliability of instrument.
RESEARCH DESIGN
Meaning
• A research project conducted scientifically
has a specific framework of research
from the problem identification to the
presentation of the report.
• This framework of conducting research
is known as research design.
Definition
According to Kerlinger, “Research design is the
plan, structure, & strategy of
investigation conceived so as to obtain
answers to research questions and to control
variance”.
According to William Zikmund, “Research design is defined as a master plan specifying the methods and procedures for collecting and analyzing the needed information”.
According to Green and Tull, “A research
design is the specification of methods
and procedures for acquiring the
information needed. It is the overall
operational pattern or framework of the
project that stipulates what information
is to be collected from which sources
by what procedures”.
Features of good research design
• Objectivity
• Reliability
• Validity
• Generalizability
• Sufficient information
• Other features: adaptability, flexibility, efficiency, etc.
Factors affecting research design
• Research questions
• Time and budget limits
• Research objective
• Research problem
• Personal experiences
• Target audience.
Types of Research Design
1. Exploratory Research Design
2. Descriptive Research Design
3. Experimental/Causal Research Design
EXPLORATORY RESEARCH DESIGNS
• Exploratory means to explore hidden things that are not clearly visible.
• Exploratory research is a type of research
conducted for a problem that has not been
clearly defined.
• It is done during the preliminary stage of research.
• Data are collected through observation and
interviews
For example:
• A juice shop owner who wants to increase sales may explore adding more varieties of juice; increasing the variety may attract more customers.
• Following are the different ways of
conducting exploratory research
1. Secondary data analysis
2. Qualitative research (depth interviews, focus groups, projective techniques)
3. Pilot surveys
4. Expert surveys
5. Case study
6. Two-Tiered design.
DESCRIPTIVE RESEARCH DESIGN
• Describes the characteristics of a population or phenomenon, e.g. its gender composition and age groupings.
CAUSAL RESEARCH DESIGN
• Examines whether one variable X causes another variable Y; causality must satisfy specific criteria.
• Longitudinal studies observe the same sample repeatedly over time.
• Comparative experiment
• Ex: examining the growth of children based on two health drinks (Complan & Horlicks)
Important Concepts Used in
Research design
Variable: a concept which can take on different quantitative values is called a variable.
Ex: weight, height, income
Continuous variable : Age
Non-continuous variable: No. of children
1. Dependent variable:
If one variable depends upon or is a
consequence of the other variable, it
is termed as a dependent variable.
2. Independent variable:
If the variable is antecedent to the
dependent variable it is termed as an
independent variable
For ex:
• Height depends upon age (height is the dependent variable; age is the independent variable)
• Height depends on gender
3.Extraneous variables: These are the
variables other than the independent
variables which influence the response of
test units to treatments.
Basic Principles of Experimental Design
1. Principle of Replication
2. Principle of Randomization
3. Principle of Local Control

1. Principle of Replication (Reproduction):
• According to the principle of replication, the
experiment should be repeated more
than once.
• By doing so the statistical accuracy of the
experiments is increased.
For ex: suppose we are to examine the effect of two varieties of rice. We may divide the field into two parts, grow one variety in each part, and compare the yields on that basis. But if we are to apply the principle of replication, we divide the field into several parts, grow one variety in half of the parts and the other variety in the remaining parts, and then compare the yields of the two varieties.
2. Principle of Randomization
• The principle of randomization provides protection against the effects of extraneous factors: treatments are assigned to experimental units at random, so that the variation caused by extraneous factors can be treated as chance and the conclusions drawn will be realistic.
3. Principle of Local Control
• Through the principle of local control, we can measure and eliminate the variability due to extraneous factors by first dividing the experimental field into several homogeneous blocks (barricades).
• Each such block is then divided into parts equal to the number of treatments.
• The treatments are then randomly assigned to these parts of the block.
• Dividing the field into several homogeneous
parts is known as ‘blocking’.
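As a rough illustration, the following sketch (in Python, with purely illustrative block and treatment names) combines the three principles: homogeneous blocks give local control, every treatment appears in each block (replication), and treatments are shuffled within each block (randomization).

```python
import random

# Illustrative randomized-block layout: the block and treatment
# names are assumptions for this sketch, not from the text.
treatments = ["A", "B", "C", "D", "E"]
blocks = ["Block-1", "Block-2", "Block-3"]

design = {}
for block in blocks:
    plots = treatments[:]      # replication: every treatment in every block
    random.shuffle(plots)      # randomization within the block
    design[block] = plots      # local control: assignment stays inside the block

for block, plots in design.items():
    print(block, plots)
```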
TYPES OF EXPERIMENTAL DESIGN
1. QUASI EXPERIMENTAL
DESIGN
a. Pretest and posttest with
Experimental Group
b. Posttest only with Experimental
and Control group
2. TRUE EXPERIMENTAL
DESIGN
a. Pretest and Posttest with
Experimental and Control Group
b. Blind studies
c. Ex Post Facto Designs
3. STATISTICAL DESIGN
i. Completely Randomized Design
(C.R. Design)
ii. Randomized Block Design (R.B.
Design)
iii. Latin Square Design (L.S. Design)
iv. Factorial Designs
1. QUASI – EXPERIMENTAL DESIGNS
• It does not measure the true cause-and-effect relationship.
• This is so because there is no comparison between groups.
• This experimental design is the weakest of all designs.
a. Pretest and posttest with Experimental Group
• An experimental group (without a control group) is measured before and after exposure to the treatment (O = observation/measurement, X = exposure to treatment):

Experimental Group:  O1  X  O2

b. Posttest only with Experimental and Control Group
• Only a posttest measurement is taken for both groups; only the experimental group receives the treatment:

Experimental Group:  X  O1
Control Group:          O2

2. TRUE EXPERIMENTAL DESIGN
a. Pretest and Posttest with Experimental and Control Group
• Both groups are measured before and after, but only the experimental group is exposed to the treatment:

Experimental Group:  O1  X  O2
Control Group:       O3     O4

c. Ex Post Facto Designs
• The treatment has already taken place before the groups are studied; the researcher does not manipulate it.
For ex:
• Training programs might have been
introduced in an organization 2 years earlier.
• Some might have already gone through the
training while others might not.
• To study the effects of training on work
performance, performance data might now be
collected for both groups.
• Since the study does not immediately follow
after the training, but much later, it is an ex
post facto design.
3. STATISTICAL DESIGN
a. Completely Randomized Design (C.R.
Design)
b. Randomized Block Design (R.B. Design)
c. Latin Square Design (L.S. Design)
d. Factorial Designs
a. Completely Randomized Design (C.R. Design)
• It involves only two principles: the principle of replication and the principle of randomization.
• The subjects are randomly assigned to the experimental treatments.
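A minimal sketch of this idea, with made-up subject IDs and treatment labels: subjects are shuffled once and split evenly across treatments, with no blocking.

```python
import random

# Completely randomized design (sketch): subjects are assigned to
# treatments purely at random. IDs and labels are illustrative.
subjects = [f"S{i}" for i in range(1, 13)]   # 12 hypothetical subjects
treatments = ["T1", "T2", "T3"]

random.shuffle(subjects)
group_size = len(subjects) // len(treatments)
assignment = {
    t: subjects[i * group_size:(i + 1) * group_size]
    for i, t in enumerate(treatments)
}
print(assignment)
```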
b. Randomized Block Design (R.B. Design)
• It applies the principle of local control in addition to replication and randomization.
• Subjects are first divided into relatively homogeneous groups, known as blocks; the number of subjects in each block equals the number of treatments, and treatments are assigned at random within each block.
c. Latin Square Design (L.S. Design)
• Used when there are two blocking factors, here fertility level (columns) and seed difference (rows); each treatment (A to E) appears exactly once in every row and every column:

                  FERTILITY LEVEL
                  I   II   III  IV   V
Seed         X1   A   B    C    D    E
Difference   X2   B   C    D    E    A
             X3   C   D    E    A    B
             X4   D   E    A    B    C
             X5   E   A    B    C    D
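The table above is a cyclic Latin square, which can be generated mechanically: each row is the previous row shifted by one position, which guarantees that every treatment occurs once per row and once per column. A small sketch:

```python
# Generate the 5 x 5 cyclic Latin square shown above.
treatments = ["A", "B", "C", "D", "E"]
n = len(treatments)

square = [[treatments[(row + col) % n] for col in range(n)] for row in range(n)]

for label, row in zip(["X1", "X2", "X3", "X4", "X5"], square):
    print(label, " ".join(row))
```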
d. Factorial Design:
• The factorial experiment design is used to
test two or more variables at the same
time.
• Factorial designs can be of two types:
i. Simple factorial design
ii. Complex factorial design.
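In a simple factorial design, every level of one factor is crossed with every level of the other, so a 2 x 2 design yields four treatment cells. A brief sketch (the factor names and levels are assumptions for illustration):

```python
from itertools import product

# Cross two illustrative factors to enumerate all treatment cells
# of a 2 x 2 simple factorial design.
fertilizer = ["low", "high"]
irrigation = ["drip", "flood"]

for fert, irr in product(fertilizer, irrigation):
    print(f"fertilizer={fert}, irrigation={irr}")
```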
Validity in Experimentation
• A measuring instrument selected by the researcher is said to be valid when it measures what it purports to measure.
• Ex: a weighing machine is valid if it accurately measures weight.
Reliability
• Refers to stability and consistency through a
series of measurements.
• The reliability of a measure is its capacity to
yield the same results in repeated
applications to the same events.
Internal validity: It refers to the confidence we place in the cause-and-effect relationship.
• It addresses the question, “To what extent does the research design permit us to say that the independent variable A causes a change in the dependent variable B?”
• Internal validity tries to examine whether the
observed effect on a dependent variable is
actually caused by the treatments (independent
variables) in question.
• External validity: External validity refers to
the generalization of the results of an
experiment. The concern is whether the result
of an experiment can be generalized beyond
the experimental situations.
Factors Affecting Internal Validity of
the Experiment
• Maturation: biological or psychological changes in the respondents themselves over the course of the study.
Ex:
• Test the impact of new compensation
program on sales productivity.
• If this program were tested over a year’s
time, some of the sales people probably would
mature as a result of more selling
experience or gain increased knowledge.
• Their sales productivity might improve
because of their knowledge and
experience rather than the compensation
program.
Testing
• Testing effects only occur in a before-and-
after study.
Instrumentation
• A change in the wording of questions or a change in interviewers causes an instrumentation effect.
Selection bias
• Sample bias that results from differential
selection of respondents.
Mortality
• Some subjects withdraw from the experiment
before it is completed.
Factors Affecting External Validity
• The environment at the time of the test may be different from the environment of the real world where these results are to be generalized.
• The population used for the test may not be similar to the population to which the results of the experiment are to be applied.
Environments of Conducting Experiments
• Laboratory Environment - In a laboratory experiment, the researcher conducts the experiment in an artificial environment constructed exclusively for the experiment.
• Field Environment - The field experiment is
conducted in actual market conditions.
There is no attempt to change the real-life nature
of the environment.
Variables in Research
Variable
A variable is anything that can take on differing or varying values; the values can differ at various times for the same object or person.
Types of Variables
• Dependent Variable
• Independent variable
• Moderating variable
• Extraneous variable
• Intervening variable
1. Dependent variable (DV):
If one variable depends upon or is a
consequence of the other variable, it is
termed as a dependent variable.
2. Independent variable (IV):
If the variable is antecedent to the dependent
variable it is termed as an independent
variable
Ex: Smoking (IV) causes cancer (DV)
3. Moderating variable (MV):
A moderating variable is a second
independent variable that is
included because it is believed to
have a significant contributory
effect on the originally stated IV -
DV relationship.
4.Extraneous variables (EV):
These are the variables other
than the independent
variables which influence the
response of test units to
treatments.
5. Intervening variable (IVV):
The intervening variable may be defined as “that factor which theoretically affects the observed phenomenon but cannot be seen, measured or manipulated”.
Measurement and Scaling
Measurement: the assignment of numbers or symbols to characteristics of objects according to certain rules. The four basic scales of measurement are:
• NOMINAL SCALE: numbers serve only as labels to identify or classify objects. Example: male = 1, female = 2.
• ORDINAL SCALE: numbers indicate the rank order of objects. Example: ranking brands by preference.
• INTERVAL SCALE: the intervals between scale points are equal, but there is no true zero. Example: temperature in Celsius.
• RATIO SCALE: possesses a true zero point, so ratios of values are meaningful. Example: weight, height, income.
Other scaling techniques include the Thurstone scale, the Stapel scale, and multidimensional scaling.
Two main categories of Attitudinal Scale
RATING SCALES
Rating scales have several response
categories and are used to elicit responses
with regard to object, event, or person studied.
RANKING SCALES
• Ranking scales make comparisons
between or among objects, events, or persons
and elicit the preferred choice and
ranking among them.
RATING SCALE
1. GRAPHIC RATING SCALE
• Respondents rate the object by placing a mark at the appropriate position on a line that runs from one extreme of the criterion variable to the other.
• This is a continuous scale: the respondent is asked to tick his preference on a graph.
Examples:
Please put a tick mark (•) on the following line to indicate your
preference for fast food.
Please indicate how much do you like fast food by pointing to the
face that best shows your attitude and taste. If you do not prefer it
at all, you would point to face one. In case you prefer it the most,
you would point to face seven.
2. ITEMIZED RATING SCALE
• Respondents choose from a limited number of ordered response categories, each identified by a number or a brief description.
i. Guttman Scales/Scalogram
• Consists of statements to which a
respondent expresses his agreement or
disagreement.
• It is also known as cumulative scale
• Under this technique the respondents are
asked to answer in respect of each item
whether they agree or disagree with it.
• Ex: Customers’ expectations of Reliance Fresh

Item No.  Expectation
(i)       Would you expect price discounts in Reliance Fresh?
(ii)      Do you need free door delivery service?

Response patterns (√ = agree, X = disagree):

Respondent  Item 1  Item 2  Item 3  Item 4  Score
1.          √       √       √       √       4
2.          √       √       √       X       3
3.          √       √       X       X       2
4.          √       X       X       X       1
5.          X       X       X       X       0

• Because the scale is cumulative, agreement with a later item implies agreement with all earlier items; here only respondent 1 is agreeable to item 4.
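A minimal sketch of Guttman (cumulative) scoring, using the illustrative response patterns from the table above: a respondent's score is the number of items agreed with, and in a perfect scalogram no agreement follows a disagreement when items are ordered from easiest to hardest.

```python
# Response patterns from the example above: 1 = agree (√), 0 = disagree (X).
responses = [
    [1, 1, 1, 1],  # respondent 1, score 4
    [1, 1, 1, 0],  # respondent 2, score 3
    [1, 1, 0, 0],  # respondent 3, score 2
    [1, 0, 0, 0],  # respondent 4, score 1
    [0, 0, 0, 0],  # respondent 5, score 0
]

def is_perfect_pattern(pattern):
    # Cumulative property: no agreement may follow a disagreement.
    return all(a >= b for a, b in zip(pattern, pattern[1:]))

for r in responses:
    print(r, "score:", sum(r), "cumulative:", is_perfect_pattern(r))
```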
ii. Likert scale
• Respondents indicate their degree of agreement with each statement, typically on a five-point scale from “strongly disagree” (1) to “strongly agree” (5); the item scores are summed to give the total attitude score.
Criteria for Good Measurement
Reliability
Methods of testing reliability:
• Test-retest reliability: the same scale is administered to the same respondents at two points in time and the two sets of scores are correlated.
• Split-half reliability: the items are split into two halves and the scores on the two halves are correlated.
• Cronbach’s Alpha: a coefficient of internal consistency based on the average correlation among items.
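As a rough illustration of the internal-consistency idea, the sketch below computes Cronbach's alpha from a small made-up score matrix, using the standard formula alpha = k/(k-1) * (1 - sum of item variances / variance of total scores).

```python
import statistics

# Illustrative scores: 3 items rated by 5 respondents (made-up data).
items = [
    [4, 5, 3, 4, 5],  # item 1 scores across respondents
    [3, 5, 3, 4, 4],  # item 2
    [4, 4, 2, 5, 5],  # item 3
]

k = len(items)
item_vars = [statistics.variance(scores) for scores in items]
totals = [sum(resp) for resp in zip(*items)]   # total score per respondent
alpha = (k / (k - 1)) * (1 - sum(item_vars) / statistics.variance(totals))
print(f"Cronbach's alpha = {alpha:.3f}")
```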
Validity
The validity of a scale refers to the question whether we are measuring what we want to measure.
Different ways to measure Validity
• Content validity: the extent to which the instrument provides adequate coverage of the topic under study.
• Concurrent validity: the extent to which the scale correlates with another measure of the same construct taken at the same time.
• Predictive validity: the ability of the scale to predict a future criterion.
Sensitivity
• The ability of an instrument to accurately measure variability in responses.