
UNIT – II

Research Design
and Measurement
Research design – Definition – types of
research design – exploratory and causal
research design – Descriptive and
experimental design – different types of
experimental design – Validity of findings –
internal and external validity – Variables in
Research – Measurement and scaling –
Different scales – Construction of instrument –
Validity and Reliability of instrument.
RESEARCH DESIGN
Meaning
• A research project conducted scientifically
has a specific framework of research
from the problem identification to the
presentation of the report.
• This framework of conducting research
is known as research design.
Definition
According to Kerlinger, “Research design is the
plan, structure, & strategy of
investigation conceived so as to obtain
answers to research questions and to control
variance”.
According to William Zikmund, "Research design
is defined as a master plan specifying the
methods and procedures for collecting and
analyzing the needed information".
According to Green and Tull, “A research
design is the specification of methods
and procedures for acquiring the
information needed. It is the overall
operational pattern or framework of the
project that stipulates what information
is to be collected from which sources
by what procedures”.
Features of good research design
• Objectivity
• Reliability
• Validity
• Generalizability
• Sufficient information
• Other features- adaptability, flexibility,
efficiency etc.
Factors affecting research design
• Research questions
• Time and budget limits
• Research objective
• Research problem
• Personal experiences
• Target audience.
Types of Research Design
• Exploratory Research Design
• Descriptive Research Design
• Experimental/Causal Research Design
EXPLORATORY RESEARCH DESIGNS
• Exploratory means to explore the hidden
things, which are not clearly visible.
• Exploratory research is a type of research
conducted for a problem that has not been
clearly defined.
• It is done during the preliminary stage.
• Data are collected through observation and
interviews.
For example:
• A juice shop owner, in order to increase sales,
can include more varieties. Increasing the
variety will attract more customers.
• Following are the different ways of
conducting exploratory research
1. Secondary data analysis
2. Qualitative research (depth interviews,
focus groups, projective techniques)
3. Pilot surveys
4. Expert surveys
5. Case study
6. Two-Tiered design.

Application of Exploratory research Design


• Investigating an issue
• Gaining information
• Establishing priorities
• Clarifying concepts
• Framing problems
• Knowing market trends
Significance of Exploratory research Design
• New discoveries
• Enhances knowledge
• Wide range of techniques
• Directs future research
• Strategic planning

Limitations of exploratory research design


• Leads to wrong decisions
• Incorrect information
• Cannot be generalized
• Costly
DESCRIPTIVE RESEARCH DESIGNS
• Descriptive studies are designed to describe
something.
• Ex: a study of a class in terms of
 - the percentage of members who are in their
   senior and junior years
 - gender composition
 - age groupings
 - number of business courses taken
• Descriptive studies are undertaken in
organizations to learn about and describe
characteristics of a group of employees as for
example
The age
Educational level
Job status
• Example:
a bank manager wants to have a profile of the
individuals who have loan payments
outstanding for 6 months and more.
It would include details of their average age,
earnings, nature of occupation, full-time/
part-time employment status, etc.
HYPOTHESIS TESTING
• Studies that engage in hypotheses testing
usually explain
the nature of certain relationships, or
establish the differences among groups or
the independence of two or more factors in
a situation.
Example:
• A marketing manager wants to know if the
sales of the company will increase if he
doubles the advertising dollars.
• Here, the manager would like to know the
nature of the relationship that can be
established between advertising and sales
by testing the hypothesis: if advertising is
increased, then sales will also go up.
Example: the testing of hypothesis such as:
More men than women are
whistleblowers,
• It establishes the difference between two
groups - men and women - in regard to
their whistle-blowing behaviour.
CAUSAL STUDY
• Causal study is able to state that variable X

causes variable Y.

• So when variable X is removed or altered in

some way, problem Y is solved.

• The study in which the researcher wants to

delineate the cause of one or more problems

is called a causal study.


A causal study question:

Does smoking cause cancer?


Smoking – independent variable
Cancer – dependent variable
The Time Dimension
Cross-sectional research designs: two
criteria
1. The study is carried out at a single moment
in time.
2. Therefore, its applicability is temporally
specific.
 Longitudinal studies: three criteria
1. The study involves selection of a
representative group as a panel.
2. There are repeated measurements of the
researched variable on this panel over fixed
intervals of time.
3. Once selected, the panel composition
needs to stay constant over the study
period.
What is an Experiment?
• The process of examining the truth of a
statistical hypothesis, relating to some
research problem, is known as experiment.
• Absolute experiment
• Ex: examining the growth of children based on
one health drink (Complan)

• Comparative experiment
• Ex: examining the growth of children based on
two health drinks (Complan & Horlicks)
Important Concepts Used in
Research design
Variable: a concept which can take on
different quantitative value is called a variable.
Ex: weight, height, income
Continuous variable : Age
Non-continuous variable: No. of children
1. Dependent variable:
If one variable depends upon or is a
consequence of the other variable, it
is termed as a dependent variable.
2. Independent variable:
If the variable is antecedent to the
dependent variable it is termed as an
independent variable
For ex:
• Height depends upon age
• Height depends on gender
3.Extraneous variables: These are the
variables other than the independent
variables which influence the response of
test units to treatments.

Examples: Store size, government policies,


temperature, food intake, geographical
location, etc.
4. Experimental and control groups:
 When a group is exposed to usual
conditions, it is termed a ‘control group’.
When a group is exposed to some novel
(new) or special condition, it is termed
as experimental group.
Groups                        Treatment         Treatment effect (% increase in
                                                production over pre-piece-rate system)
Experimental Group 1          $1.00 per piece   10
Experimental Group 2          $1.50 per piece   15
Experimental Group 3          $2.00 per piece   20
Control Group (no treatment)  Old hourly rate   0
5. Treatments:
The different conditions under which
experimental and control groups are put are
usually referred to as ‘treatments’.
Ex: selling cookies with a free gift and without
a free gift
Basic principles of Experimental
design
1. Principle of Replication

2. Principle of Randomization
3. Principle of Local Control
1. Principle of Replication (Reproduction):
• According to the principle of replication, the
experiment should be repeated more
than once.
• By doing so the statistical accuracy of the
experiments is increased.
For ex: suppose we are to examine the effect
of two varieties of rice. For this purpose
we may
• divide the field into two parts,
• grow one variety in one part, and
• the other variety in the other part.
• We can then compare the yield of the
two parts and draw conclusions on that
basis.
But if we are to apply the principle of
replication to this experiment, then
• we first divide the field into several parts,
• grow one variety in half of these parts, and
• the other variety in the remaining parts.
• We can then collect data on the yield of the
two varieties and draw conclusions by
comparing the same.
• The results so obtained will be more
reliable in comparison to the conclusion
we draw without applying the principle of
replication.
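The gain from replication can be shown with a minimal Python sketch; the plot yields below are hypothetical numbers chosen purely for illustration:

```python
from statistics import mean

# Hypothetical yields (kg per plot) for two rice varieties, each
# grown on five replicated parts of the field.
variety_a = [42, 45, 44, 41, 43]
variety_b = [38, 40, 39, 41, 37]

# With replication we compare average yields over several plots
# rather than a single pair, which makes the conclusion more reliable.
print(mean(variety_a))  # 43
print(mean(variety_b))  # 39
```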
2. Principle of Randomization
• The principle of randomization provides
protection.
• It avoids bias in the experiment.
For ex:
• If we grow one variety of rice in the first
half of the parts of a field and the other variety
in the other half, then it is just possible
that the soil fertility (productiveness) may be
different in the first half in comparison to
the other half. If this is so, our results would
not be realistic.
3. Principle of Local Control
• Through the principle of local control we can
eliminate the variability.
• According to the principle of local control, we
first divide the field into several
homogeneous parts known as blocks.
• And then each such block is divided into parts
equal to the number of treatments.
• Then the treatments are randomly assigned to
these parts of block.
• Dividing the field into several homogeneous
parts is known as ‘blocking’.
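The steps of local control (blocking, dividing each block into as many plots as treatments, then randomizing within each block) can be sketched in Python; the block and treatment names are hypothetical:

```python
import random

random.seed(1)  # fixed seed so the illustration is reproducible

treatments = ["A", "B", "C"]  # e.g. three fertiliser treatments
blocks = ["block1", "block2", "block3", "block4"]  # homogeneous field parts

# Each block is divided into as many plots as there are treatments,
# and the treatments are assigned to those plots at random.
layout = {}
for block in blocks:
    plots = treatments[:]   # one plot per treatment in this block
    random.shuffle(plots)   # randomization within the block
    layout[block] = plots
```

Because every treatment appears once in every block, block-to-block differences (such as soil fertility) affect all treatments equally.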
TYPES OF EXPERIMENTAL DESIGN
1. QUASI EXPERIMENTAL
DESIGN
a. Pretest and posttest with
Experimental Group
b. Posttest only with Experimental
and Control group
2. TRUE EXPERIMENTAL
DESIGN
a. Pretest and Posttest with
Experimental and Control Group
b. Blind studies
c. Ex Post Facto Designs
3. STATISTICAL DESIGN
i. Completely Randomized Design
(C.R. Design)
ii. Randomized Block Design (R.B.
Design)
iii. Latin Square Design (L.S. Design)
iv. Factorial Designs
1. QUASI-EXPERIMENTAL DESIGNS
• It does not measure the true cause-and-effect
relationship.
• This is so because there is no comparison
between groups.
• This experimental design is the weakest of
all designs.
a. Pretest and posttest with
Experimental Group
• An experimental group (without a control

group) may be given a pretest, exposed to a

treatment, and then given a posttest to

measure the effects of the treatment.


Group                Pretest Score   Treatment introduced   Posttest Score
Experimental Group   O1              X                      O2

Treatment Effect = (O2 – O1)

O – Observation or measurement
X – Exposure of a group to an experimental
treatment
b. Posttest only with Experimental
and Control Group
• Here only the experimental group is exposed
to the treatment, not the control group.
• The effects of the treatment are studied by
assessing the difference in the outcomes,
that is, the posttest scores of the experimental
and control groups.
Group                Treatment introduced   Outcome
Experimental Group   X                      O1
Control Group                               O2

Treatment Effect = (O1 – O2)


2. TRUE EXPERIMENTAL DESIGN
• Experimental designs, which include both
the treatment and control groups and
record information both before and
after the experimental group is exposed to
the treatment is known as true experimental
design.
a. Pretest and Posttest with
Experimental and Control Group
• Two groups – one experimental and the
other control – both are exposed to the
pretest and posttest.
Group                Pretest   Treatment introduced   Posttest
Experimental Group   O1        X                      O2
Control Group        O3                               O4
Treatment Effect = ((O2 – O1) – (O4 – O3))
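The two treatment-effect formulas (with and without a control group) can be written as one-line functions; a minimal Python sketch with hypothetical scores:

```python
def effect_without_control(o1, o2):
    """Pretest/posttest with one experimental group only: O2 - O1."""
    return o2 - o1

def effect_with_control(o1, o2, o3, o4):
    """(O2 - O1) - (O4 - O3): subtracting the control group's change
    removes change that would have happened anyway (maturation,
    testing effects, etc.)."""
    return (o2 - o1) - (o4 - o3)

# Hypothetical scores: experimental group moves 60 -> 75, while the
# control group drifts 58 -> 63 over the same period.
print(effect_without_control(60, 75))       # 15
print(effect_with_control(60, 75, 58, 63))  # 10
```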


b. Blind Studies:
• In case of pharmaceutical companies
experimenting with the newly developed
drugs in the prototype (trial) stage ensure
that the subjects in the experimental and
control groups are kept unaware of who is
given the drug.
• Such studies are known as blind studies.
c. Ex Post Facto Designs:

• Subjects who have already been exposed to a

stimulus and those not so exposed are

studied.
For ex:
• Training programs might have been
introduced in an organization 2 years earlier.
• Some might have already gone through the
training while others might not.
• To study the effects of training on work
performance, performance data might now be
collected for both groups.
• Since the study does not immediately follow
after the training, but much later, it is an ex
post facto design.
3. STATISTICAL DESIGN
a. Completely Randomized Design (C.R.
Design)
b. Randomized Block Design (R.B. Design)
c. Latin Square Design (L.S. Design)
d. Factorial Designs
a. Completely Randomized Design (C.R.
Design)
• It involves only two principles, viz., the
principle of replication and the
principle of randomization.
• It is the simplest possible research design and
its procedure of analysis is also easier.
• The essential characteristic of the design is
that subjects are randomly assigned to
experimental treatments.
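Random assignment in a C.R. design can be sketched in Python; the subject and treatment labels are hypothetical:

```python
import random

random.seed(7)  # fixed seed so the illustration is reproducible

subjects = [f"S{i}" for i in range(1, 13)]  # 12 hypothetical subjects
treatments = ["T1", "T2", "T3"]

# Randomization: shuffle the subjects; replication: each treatment
# is then applied to a whole group of subjects, not just one.
random.shuffle(subjects)
assignment = {t: subjects[i::len(treatments)] for i, t in enumerate(treatments)}
```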
b. Randomized Block Design (R.B.
Design)

• It is an improvement over the C.R. Design

• In the R.B. Design the principle of local

control can be applied along with the other

two principles of experimental designs.


c. Latin square design (L.S. Design):

• The number of blocks will be equal to the

number of treatments
FERTILITY LEVEL (columns) vs SEED DIFFERENCE (rows)

      I    II   III   IV   V
X1    A    B    C     D    E
X2    B    C    D     E    A
X3    C    D    E     A    B
X4    D    E    A     B    C
X5    E    A    B     C    D
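A Latin square with this cyclic pattern can be generated programmatically; a short Python sketch:

```python
def latin_square(treatments):
    """Build an n x n Latin square by cyclically shifting the first
    row, so every treatment occurs exactly once per row and column."""
    n = len(treatments)
    return [[treatments[(row + col) % n] for col in range(n)]
            for row in range(n)]

square = latin_square(["A", "B", "C", "D", "E"])
print(square[0])  # ['A', 'B', 'C', 'D', 'E']
print(square[1])  # ['B', 'C', 'D', 'E', 'A']
```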
d. Factorial Design:
• The factorial experiment design is used to
test two or more variables at the same
time.
• Factorial designs can be of two types:
i. Simple factorial design
ii. Complex factorial design.
Validity
Validity in Experimentation
The researcher must make sure that any
measuring instrument he selects measures
what it purports to measure; only then is it
said to be valid.
Ex: Weighing machine
Reliability
• Refers to stability and consistency through a
series of measurements.
• The reliability of a measure is its capacity to
yield the same results in repeated
applications to the same events.
Internal validity: It refers to the
confidence we place in the cause – and –
effect relationship.
• It addresses the question, “ to what extent
does the research design permit us to say
that the independent variable A causes a
change in the Dependent variable B?”
• Internal validity tries to examine whether the
observed effect on a dependent variable is
actually caused by the treatments (independent
variables) in question.
• External validity: External validity refers to
the generalization of the results of an
experiment. The concern is whether the result
of an experiment can be generalized beyond
the experimental situations.
Factors Affecting Internal Validity of
the Experiment
• Maturation
Ex:
• Test the impact of new compensation
program on sales productivity.
• If this program were tested over a year’s
time, some of the sales people probably would
mature as a result of more selling
experience or gain increased knowledge.
• Their sales productivity might improve
because of their knowledge and
experience rather than the compensation
program.
Testing
• Testing effects only occur in a before-and-
after study.
Instrumentation
• A change in wording of questions, a change
in interviewers cause instrumentation effect
Selection bias
• Sample bias that results from differential
selection of respondents.
Mortality
• Some subjects withdraw from the experiment
before it is completed.
Factors Affecting External Validity
• The environment at the time of the test
may be different from the
environment of the real world where
these results are to be generalized.
• The population used for the test
may not be similar to the
population where the results of the
experiments are to be applied.
Environments of Conducting Experiments
• Laboratory Environment - In a
laboratory experiment, the researcher
conducts the experiment in an artificial
environment constructed exclusively for
the experiment.
• Field Environment - The field experiment is
conducted in actual market conditions.
There is no attempt to change the real-life nature
of the environment.
Variables in Research

Variable
A variable is anything that can take on

differing or varying values. The values

can differ at various times for the same

object or person.
Types of Variables
• Dependent Variable

• Independent variable

• Moderating variable

• Extraneous variable

• Intervening variable
1. Dependent variable (DV):
If one variable depends upon or is a
consequence of the other variable, it is
termed as a dependent variable.
2. Independent variable (IV):
If the variable is antecedent to the dependent
variable it is termed as an independent
variable
Ex: Smoking causes Cancer
3. Moderating variable (MV):
A moderating variable is a second
independent variable that is
included because it is believed to
have a significant contributory
effect on the originally stated IV -
DV relationship.
4.Extraneous variables (EV):
These are the variables other
than the independent
variables which influence the
response of test units to
treatments.
5. Intervening variable (IVV):
The intervening variable (IVV) may
be defined a “that factor which
theoretically affects the observed
phenomenon but cannot be seen,
measured or manipulated”.
Measurement and Scaling
Measurement:

• The term ‘measurement’ means assigning


numbers or some other symbols to the
characteristics of certain objects.

Ex: A teacher counts the number of students in


a class, classifies them as male or female.

• How well we like a song, a painting is also a


measurement
• Scaling: Scaling is an extension of
measurement. Scaling involves creating a
continuum on which measurements on
objects are located.
Types of Measurement Scale

• NOMINAL SCALE

• ORDINAL SCALE

• INTERVAL SCALE

• RATIO SCALE
NOMINAL SCALE:

In nominal scale, numbers are used to


identify or categorize objects or events.

For example, the population of any town


may be classified according to gender as
‘males’ and ‘females’ or according to
religion into ‘Hindus’, ‘Muslims’, and
‘Christians’.
Example: (dichotomous scale – elicit ‘Yes’
or ‘No’ answer)

Are you married?

(a) Yes (b) No

•Married person may be assigned a No. 1.

• Unmarried person may be assigned a No. 2.

Do you have a car?

(a) Yes (b) No


• The assigned numbers cannot be added,
subtracted, multiplied or divided.
• The only arithmetic operations that can be
carried out are the count of each category.
• Therefore, a frequency distribution table
can be prepared for the nominal scale
variables.
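Counting categories on nominal data can be shown in a minimal Python sketch; the response codes are hypothetical:

```python
from collections import Counter

# Hypothetical nominal-scale codes: 1 = married, 2 = unmarried.
responses = [1, 2, 1, 1, 2, 1, 2, 2, 1, 1]

# Counting category members is the only legitimate arithmetic on
# nominal data, so the summary is a frequency distribution.
freq = Counter(responses)
print(freq[1], freq[2])  # 6 4
```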
ORDINAL SCALE:

•This is the next higher level of measurement.

•The ordinal scale places events in order.

•Rank orders represent ordinal scales.

•The use of an ordinal scale implies a statement of


‘greater than’ or ‘less than’ without stating how
much greater or less.
Example:
Rank the following attributes while choosing a restaurant
for dinner. The most important attribute may be ranked 1,
the next important may be assigned a rank of 2 and so on.
In the ordinal scale, the assigned ranks cannot
be added, multiplied, subtracted or divided.
One can compute the median and percentiles of
the distribution. The other major statistical
analyses which can be carried out are the rank
order correlation coefficient and the sign test.
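A positional statistic such as the median is valid for ordinal data; a minimal Python sketch with hypothetical ranks:

```python
from statistics import median

# Hypothetical ranks given to one restaurant attribute
# (1 = most important); only the order is meaningful.
ranks = [1, 2, 2, 3, 1, 4, 2, 3]

# The median depends only on position in the sorted order, so it is
# valid for ordinal data; the arithmetic mean generally is not,
# because the intervals between ranks are not equal.
print(median(ranks))  # 2.0
```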
INTERVAL SCALE:

•The interval scale measurement is the next

higher level of measurement.

• it has all the characteristics of ordinal scale.

•In addition, the units of measure or intervals

between successive positions are equal.

Ex: Marks: 0-10, 11-20, 21-30, 31-40


RATIO SCALE:
•This is the highest level of measurement.

 It possesses all the features of the nominal,
ordinal, and interval scales
 It has order, distance, and a true zero point

Example:

Measures of weight, height, length, etc

 All mathematical and statistical operations can


be carried out using the ratio scale data.
SCALING

• Scaling may be considered as an extension of

measurement. It involves creating a continuum

upon which measured objects are located


Scaling techniques or classification
of scales/Attitude scales

Rating Scales
• Graphic Rating Scale
• Itemized Rating Scale:
 - Guttman Scale/Scalogram
 - Likert Scale
 - Semantic Differential Scale
 - Thurstone Scale
 - Stapel Scale
 - Multi Dimensional Scaling

Ranking Scales
• Method of Paired Comparison
• Method of Rank Order
Two main categories of Attitudinal Scale
RATING SCALES
Rating scales have several response
categories and are used to elicit responses
with regard to object, event, or person studied.
RANKING SCALES
• Ranking scales make comparisons
between or among objects, events, or persons
and elicit the preferred choice and
ranking among them.
RATING SCALE
1. GRAPHIC RATING SCALE
• Respondents rate the objects by placing a
mark at the appropriate position on
a line that runs from one extreme of the
criterion variable to another.
Graphic Rating Scale – This is a continuous
scale and the respondent is asked to tick his
preference on a graph.
Examples:
Please put a tick mark (•) on the following line to indicate your
preference for fast food.

Alternative Presentation of Graphic Rating Scale –

Please indicate how much do you like fast food by pointing to the
face that best shows your attitude and taste. If you do not prefer it
at all, you would point to face one. In case you prefer it the most,
you would point to face seven.
2. ITEMIZED RATING SCALE

•In the itemized rating scale, the respondents are

provided with a scale that has a number of

brief descriptions associated with each of the

response categories.
i. Guttman Scales/Scalogram
• Consists of statements to which a
respondent expresses his agreement or
disagreement.
• It is also known as cumulative scale
• Under this technique the respondents are
asked to answer in respect of each item
whether they agree or disagree with it.
• Ex: Customer’s expectation on Reliance Fresh

Item No.   Expectation
(i)        Would you expect price discounts in Reliance Fresh?
(ii)       Do you need free door delivery service?
(iii)      Would you expect longer store opening hours?
(iv)       Would you anticipate a play area for children?
Response in Scalogram Analysis

Respondent   Item Number              Respondent
Number       (i)   (ii)  (iii)  (iv)  Score
1.           √     √     √      √     4
2.           √     √     √      X     3
3.           √     √     X      X     2
4.           √     X     X      X     1
5.           X     X     X      X     0

• A score of 4 means that the respondent is in
agreement with all the statements of the items.
• A score of 3 means that the respondent does
not agree with item (iv).
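Scoring a cumulative scale amounts to counting agreements; a minimal Python sketch reproducing the table above (True = agree):

```python
# Hypothetical scalogram responses to four items, ordered from
# easiest to hardest to endorse.
responses = {
    1: [True, True, True, True],
    2: [True, True, True, False],
    3: [True, True, False, False],
    4: [True, False, False, False],
    5: [False, False, False, False],
}

# In a perfect cumulative (Guttman) scale, agreement with a harder
# item implies agreement with every easier one, so the respondent's
# score is simply the number of agreements.
scores = {r: sum(answers) for r, answers in responses.items()}
print(scores)  # {1: 4, 2: 3, 3: 2, 4: 1, 5: 0}
```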
ii. Likert scale

 The respondents are given a certain number


of items (statements) on which they are
asked to express their degree of
agreement/disagreement.

 This is also called a summated scale because


the scores on individual items can be added
together to produce a total score for the
respondent.
• The scale is named after its inventor,
psychologist Rensis Likert
Example of a Likert Scale:
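Because the Likert scale is summated, a respondent's total is just the sum of the item scores; a minimal Python sketch with hypothetical responses:

```python
# Hypothetical 5-point Likert responses (1 = strongly disagree,
# 5 = strongly agree) from one respondent to four statements.
item_scores = [4, 5, 3, 4]

# "Summated" scale: the item scores are simply added to give the
# respondent's total attitude score.
total = sum(item_scores)
print(total)  # 16
```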
iii. Semantic Differential Scale

 This scale is widely used to compare the


images of competing brands, companies
or services.

 Here the respondent is required to rate each


attitude or object on a number of five-or
seven-point rating scales.
• This scale is bounded at each end by bipolar
adjectives or phrases.

• The difference between Likert and Semantic


differential scale is that in Likert scale, a
number of statements (items) are presented to
the respondents to express their degree of
agreement/disagreement.

• However, in the semantic differential scale,


bipolar adjectives or phrases are used.
Example of Semantic Differential Scale:
Example of Semantic Differential Scale: (Pictorial Profile)
iv. Thurstone Scale
• Thurstone and his colleagues constructed
scales for the measurement of opinions and
beliefs of human groups.
PROCEDURE:
Large number of statements pertaining to the
subject of enquiry are collected through
literature survey, personal experience, and
discussions with knowledgeable persons
Second is the selection of statements
Statements should be brief
Ambiguous statements should be avoided
The statements must be related to attitude
On the basis of above procedure, the
researcher selects some 20 to 30 statements
The scale values are equally spaced
The statements are embodied in a
questionnaire.
v. Stapel Scale
• Used as an alternative to the semantic
differential scale.
• The scale measures how close to or distant
from a given adjective the stimulus is
perceived to be.
vi. Multi Dimensional Scaling (MDS)
• It consists of a group of analytical
techniques which are used to study
consumer attitudes related to
perceptions and preferences.
• It is a computer based technique.
RANKING SCALES
i. Method of paired comparison:
• Under this method, the respondent can
express his attitude by making a choice
between two objects, for ex. Between a
flavour of soft drink A and another soft
drink B.
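Paired-comparison responses are summarized by tallying how often each object is preferred; a minimal Python sketch with hypothetical choices:

```python
from collections import Counter

# Hypothetical choices by 10 respondents, each picking the preferred
# flavour in the pair (soft drink A vs soft drink B).
choices = ["A", "A", "B", "A", "B", "A", "A", "B", "A", "A"]

# Tallying the wins for each object summarizes the paired comparison.
wins = Counter(choices)
print(wins["A"], wins["B"])  # 7 3
```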
ii. Method of Rank order:
• Rank order scales are comparative scales.
• For ex: a respondent may be asked to rank
three motorcycle brands on attributes
such as cost, mileage, style and pick-up
and so on.
Construction of Instrument
• The purpose of scale construction is to design
questionnaire that provides a quantitative
measurement of an abstract theoretical variable
Approaches by which scales can be developed:
• Arbitrary scales: developed on ad hoc basis
may or may not measure the concepts.
• Cumulative scales – Guttman’s scalogram
analysis
• Consensus scaling – Thurstone scale
• Stapel scale
• Multidimensional scaling
Measurement Error
This occurs when the observed measurement on a construct or
concept deviates from its true values.

Reasons

 Mood, fatigue and health of the respondent

 Variations in the environment in which measurements are taken

 A respondent may not understand the question being asked and


the interviewer may have to rephrase the same. While rephrasing
the question the interviewer’s bias may get into the responses.
• Some of the questions in the questionnaire may
be ambiguous; errors may also be committed at
the time of coding or entering data from the
questionnaire into the spreadsheet.
Criteria for good measurement
Reliability

Reliability is concerned with consistency,


accuracy and predictability of the scale.

Methods to measures Reliability

 Test–retest reliability

 Split-half reliability

 Cronbach’s Alpha
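Cronbach's alpha can be computed directly from its standard formula, alpha = k/(k-1) × (1 - sum of item variances / variance of total scores); a minimal Python sketch with hypothetical item scores, using population variances:

```python
from statistics import pvariance

def cronbach_alpha(items):
    """items: one list of scores per item, all over the same
    respondents, in the same respondent order."""
    k = len(items)
    # Total score per respondent across all items.
    totals = [sum(scores) for scores in zip(*items)]
    item_var = sum(pvariance(scores) for scores in items)
    return k / (k - 1) * (1 - item_var / pvariance(totals))

# Hypothetical 3-item Likert scale answered by 5 respondents.
items = [
    [4, 5, 3, 4, 2],
    [4, 4, 3, 5, 2],
    [5, 4, 2, 4, 3],
]
print(round(cronbach_alpha(items), 2))  # 0.86
```

Values closer to 1 indicate that the items measure the construct consistently.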
Validity
Criteria for good measurement
The validity of a scale refers to the question
whether we are measuring what we want to
measure.
Different ways to measure Validity
 Content validity
 Concurrent validity
 Predictive validity
Sensitivity

Sensitivity refers to an instrument’s ability to


accurately measure the variability in a
concept.
