
ASSIGNMENT

ON

MODULE III: QUANTITATIVE RESEARCH METHODS

Experimental research methods; Between-group designs: Two-group designs, ANOVA designs, Factorial designs; Within-group designs; Small N designs

SUBMITTED TO,
DR. SHERIL ELIZABETH JOSE
ASSISTANT PROFESSOR
GOVERNMENT COLLEGE FOR WOMEN
TRIVANDRUM

SUBMITTED BY,
RIHANA A K
RESEARCH SCHOLAR
GOVERNMENT COLLEGE FOR WOMEN
TRIVANDRUM
INTRODUCTION

Quantitative research deals with objective measurements and includes statistical or numerical analysis of data collected through polls, questionnaires or surveys. Variables can be manipulated as well as controlled in quantitative research. Basically, variables are manipulated to examine cause-effect relationships, or to carry out comparative or interventional analysis within a specified population. Quantitative research deals with numbers. It relies more on convergent reasoning than on divergent reasoning, which means that the researcher tries to find solutions to a research problem with the help of standardised tools rather than creative ideas. It mainly focuses on quantifying relationships between variables.

The characteristics of quantitative research are as follows:

1) Clearly defined research questions: Based on the research problem, the researcher frames
clearly defined research questions and the answers to these questions are sought objectively.
2) Representative sample: The researcher selects a sample from a specified population from
which data is aimed to be collected. These samples are representative of the population, so
that the results achieved can be generalised to the population.

3) Manipulation/control of variables: As mentioned before, quantitative research deals with variables and, as required, the researcher manipulates them (for example, increases or decreases their levels) and also controls the extraneous variables that could affect the research study.

4) Structured and standardised tools used for data collection: Quantitative research deals with numbers, and the data are collected with the help of structured or standardised research instruments. The data are analysed with the help of empirical evidence. The data are collected in the form of numbers and statistics, often arranged in tables, charts, figures, or other non-textual forms.

5) It is reliable and valid: Since the study is done under controlled observation involving scientific investigation, it can be replicated or repeated to yield similar results; quantitative research is therefore high on reliability. Further, as quantitative research involves the use of standard and structured instruments (which are variable specific), it is valid as well.

6) Generalisability: Since quantitative research is conducted in a well-planned manner and is highly reliable as well as valid, the results obtained through this method can be generalised and can also be used to effectively predict results and infer causal relationships.
EXPERIMENTAL RESEARCH DESIGNS

The term “experimental design” may be used in two different ways (Kirk, 1968).

i) It may be used to refer to the sequence of steps necessary to conduct an experiment (stating
the hypothesis, detailing the data collection process, and so on).

ii) It may be used to refer to the plan by which subjects are assigned to experimental conditions. The experimental design may be relatively simple, as when one group of subjects is exposed to an independent variable and another is not. On the other hand, it may be much more complex, involving two or more independent variables and repeated measurements of the dependent variables.

The overall blueprint of the experiment is called experimental design. It contains the
specification of the plan and structure of the entire experiment. For the sake of precision, the
variables and their measures are defined and specific instructions for the experimental
conditions are clearly written. A good experimental design minimizes the influence of
extraneous or uncontrolled variation and increases the likelihood that an experiment will
produce valid and consistent results.

It may be defined as the study of the relationships among variables, both those manipulated and those measured. It enables the researcher to improve the conditions under which observations are made and thus to arrive at more precise results. It enables the researcher to relate a given consequent to a specific antecedent rather than to a vague conglomeration of antecedents. As a scientific method it gives more precise, accurate and reliable results; it is essentially observation under controlled conditions. It acts on the law of the single variable and its causal factors, and studies cause-and-effect relationships. It is a systematic and logical method for answering questions, in which the researcher seeks to evaluate something new, thereby contributing to the already acquired fund of knowledge.

The three essential elements in an experiment are control, manipulation and observation. The experimenter has to treat the research conditions as entirely new, as if they had not existed previously. It is a method in which we study the effect of an independent variable on a dependent variable. Whatever we know about the environment is possible only through observation; all experiments rest on observation and on generalization of the observed facts, and it is also possible to test internal validity.
Definitions:

“It is a method of testing hypotheses.” - Jahoda

“An experiment is an observation under controlled conditions.” - F. S. Chapin

“Experimental research is the description and analysis of what will be, or what will occur, under carefully controlled conditions.” - John W. Best

“Experiment is a means of proving the hypothesis whereby the causal relation between two facts is studied.” - Greenwood

“The essence of an experiment may be described as observing the effect on a dependent variable of the manipulation of an independent variable.” - Festinger

Characteristics:

(i) It is based on the law of the single variable.

(ii) This method of research is most used in educational/social research where the factors can be controlled.

(iii) The experimental method is a method of testing clear, specific hypotheses of different intentions.

(iv) It is a bias-free estimation of the true effect.

(v) It emphasizes control of conditions and the manipulation of certain variables under controlled conditions.

(vi) It sets out, more or less, causal-type relationships between phenomena.

(vii) It uses standardized tools for experimentation and makes the evidence highly objective.

(viii) The sample is selected with great precaution and great care is taken to safeguard against extraneous factors.

(ix) This method helps in developing laws, postulates and theories.

(x) It allows for precision and definiteness.
Basic Elements Of Valid Experimental Design

a) Factor: The independent variables of an experiment are often called the factors of the experiment. An experiment always has at least one factor, or independent variable; otherwise it would not be an experiment. It is possible for an experiment to have more than one independent variable. To have an experiment, it is necessary to vary some independent variable, or some factor.

b) Level: A level is a particular value of an independent variable. Level refers to the degree or intensity of a factor. Any factor may be presented at one or more of several levels, including a zero level.

c) Condition: This is the broadest term used in discussing independent variables. It refers to a particular way in which subjects are treated.

d) Main effect: Main effect is the effect of one independent variable, averaged over all levels
of another independent variable.

e) Interaction: An interaction occurs when the effect of one independent variable depends on the level of another independent variable.

f) Treatment: The term treatment refers to a particular set of experimental conditions. For example, in a 2×2 factorial experiment, the subjects are assigned to one of the four treatments. In experiments, a treatment is something that researchers administer to experimental units.

Two particular elements of a design provide control over so many different threats to validity
that they are basic to good experimental designs;

(1) the existence of a control group or a control condition and,

(2) the random allocation of subjects to groups. Random allocation ensures that the groups will be equal in all respects, except as they may differ by chance. It provides control over the internal threats to validity and allows one to conclude that the dependent variable is associated with the independent variable and not with any other variables.

In discussing experimental design, Campbell & Stanley (1963) have used some symbols with
which a student/reader is expected to be acquainted.

R: Random selection of subjects or random assignment of treatment to experimental groups.

X: Treatment or experimental variable which is manipulated. When treatments are compared they are labelled X1, X2, X3 and so on.

O: Observation or measurement or test. Where there is more than one O, arbitrary subscripts O1, O2, O3 and so on are used.

Types Of Designs

Designs can be classified into a simple threefold classification by asking some key questions. First, does the design use random assignment to groups? If random assignment is used, the design can be called a randomized experiment or true experiment. If random assignment is not used, then a second question must be asked: does the design use either multiple groups or multiple waves of measurement? If the answer is yes, the design can be labeled a quasi-experimental design; if no, it can be called a non-experimental design. This threefold classification is especially useful for describing designs with respect to internal validity.
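The threefold classification above can be sketched as a small decision function (a minimal illustration; the function name and its boolean inputs are assumptions, not from the source):

```python
# Sketch of the threefold design classification as a decision rule.
# The labels follow the text; the function itself is illustrative.
def classify_design(random_assignment, multiple_groups_or_waves):
    if random_assignment:
        return "randomized (true) experiment"
    if multiple_groups_or_waves:
        return "quasi-experimental design"
    return "non-experimental design"

print(classify_design(True, False))    # randomized (true) experiment
print(classify_design(False, True))   # quasi-experimental design
print(classify_design(False, False))  # non-experimental design
```

Note that the first question dominates: a design with random assignment is a true experiment regardless of how many groups or waves it uses.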

True Experimental Design

The experimenter has complete control over the experiment: the who, what, when, where, and how. Control over the who of the experiment means that the experimenter can assign subjects to conditions randomly; for example, we can put ‘A’ in group 1, ‘B’ in group 2, ‘C’ in group 1, ‘D’ in group 2, and so on. Control over the what, when, where, and how of the experiment means that the experimenter has complete control over the way the experiment is to be conducted.

True experiments include two major types of experimental designs:

A) single factor design (between subjects and within subjects),

B) two factor design (factorial design)

Experimental designs differ in relation to the research purpose. Control group designs are also called true experimental designs. A discussion of control group designs and two factor designs is given below.
Control Group Design

Two parallel experiments are set up, identical in all respects except that only one includes the
treatment being explored by the experiment. The people in both groups should be similar.
Ideally, these are selected and assigned randomly, though in practice some groups come as
one (such as school classes) or are selected on a pseudo-random basis (such as people on the
street).The control group may have no treatment, with nothing happening to them, or they
may have a neutral treatment, such as when a placebo is used in a medical pharmaceutical
experiment.

Types of Control Group Design:

There are four types of control group design, which are described hereunder.

Post-Test Only, Equivalent Group Design

This design is the most effective and useful true experimental design, which minimizes the threats to experimental validity. In Campbell and Stanley's notation the design may be diagrammed as:

R X O1
R   O2

There are two groups. One group (R1) is given the treatment (X) and is usually called the experimental group; the other group (R2) is not given any treatment and is called the control group. Both groups are formed on the basis of random assignment of subjects and hence they are equivalent. Moreover, the subjects of both groups are initially randomly drawn from the population (R). This fact controls for selection and experimental mortality.

Besides this, in this design no pre-test is needed for either group, which saves time and money. As both groups are tested after the experimental group has received the treatment, the most appropriate statistical tests are those which compare the means of O1 and O2. Thus either the t-test or ANOVA is used as the appropriate statistical test.

Let us take an example. Suppose the experimenter, with the help of the table of random
numbers, selects 50 students out of a total of 500 students. Subsequently, these 50 students
are randomly assigned to two groups. The experimenter is interested in evaluating the effect
of punishment over retention of verbal task. The hypothesis is that punishment enhances the
retention score. One group is given punishment (X) while learning a task and another group
receives no such punishment while learning a task. Subsequently, both groups are given the
test of retention. A simple comparison of the mean retention scores of the two groups is carried out through the t-test, which provides the basis for refuting or accepting the hypothesis.
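The comparison in this example can be sketched in code. The retention scores below are invented for illustration; only the pooled-variance t statistic itself follows the standard formula:

```python
# Post-test-only comparison of two randomly formed groups on retention.
# The scores are hypothetical illustration values, not real data.
from statistics import mean, variance

punishment_group = [14, 16, 15, 17, 18, 16, 15, 17]   # learned with punishment (X)
control_group    = [12, 13, 11, 14, 12, 13, 12, 14]   # learned without punishment

def pooled_t(a, b):
    """Independent-samples t statistic with pooled variance."""
    na, nb = len(a), len(b)
    sp2 = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    return (mean(a) - mean(b)) / (sp2 * (1 / na + 1 / nb)) ** 0.5

t = pooled_t(punishment_group, control_group)
print(f"t({len(punishment_group) + len(control_group) - 2}) = {t:.2f}")
```

With 14 degrees of freedom, a t value beyond about ±2.145 would let the experimenter reject the null hypothesis at the 5% level.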

The only problem with the post-test is that there is no direct indication of what actual change
is found in the treatment group. This is corrected by measuring them before and after the
treatment. The control group is still useful as additional factors may have had an effect,
particularly if the treatment occurs over a long time or in a unique context.

a) Pretest-Posttest Control Group Design

This is also called the classic controlled experimental design, or the randomized pre-test/post-test design, because it:

1) Controls the assignment of subjects to the experimental (treatment) and control groups through the use of a table of random numbers. This procedure guarantees that all subjects have the same chance of being in the experimental or control group. Because of strict random assignment of subjects, it is assumed that the two groups are equivalent on all important dimensions and that there are no systematic differences between the two groups. Researchers may substitute matching for random assignment: subjects in the two groups are matched on a list of characteristics that might affect the outcome of the research (e.g., sex, race, income). This may be cheaper, but matching on more than three or four characteristics is very difficult, and if the researcher does not know which characteristics to match on, this compromises internal validity.

2) Controls the timing of the independent variable (treatment) and which group is exposed to
it. Both groups experience the same conditions, with the exception of the experimental group,
which receives the influence of the independent variable (treatment) in addition to the shared
conditions of the two groups.

3) Controls all other conditions under which the experiment takes place. Nothing but the
intervention of the independent (treatment) variable is assumed to produce the observed
changes in the values of the dependent variable.

The steps in the classic controlled experiment are:

 Randomly assign subjects to treatment or control groups;
 Administer the pre-test to all subjects in both groups;
 Ensure that both groups experience the same conditions, except that in addition the experimental group experiences the treatment;
 Administer the post-test to all subjects in both groups;
 Assess the amount of change in the value of the dependent variable from the pre-test to the post-test for each group separately.

The difference in the control group’s score from the pre-test to the post-test indicates the change in the value of the dependent variable that could be expected to occur without exposure to the treatment (independent) variable X:

Control group post-test score – control group pre-test score = control group difference

The difference in the experimental group’s score from the pre-test to the post-test indicates the change in the value of the dependent variable that could be expected to occur with exposure to the treatment (independent) variable X:

Experimental group post-test score – experimental group pre-test score = experimental group difference (on the dependent variable)

The difference between the change in the experimental group and the change in the control group is the amount of change in the value of the dependent variable that can be attributed solely to the influence of the independent (treatment) variable X:

Experimental group difference – control group difference = difference attributable to X (the manipulation of the independent variable)
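This arithmetic can be sketched with made-up group means (all numbers below are illustrative assumptions, not data from the text):

```python
# Difference-in-differences for a pretest-posttest control group design.
# Group mean scores on the dependent variable (invented for illustration).
exp_pre, exp_post = 40.0, 55.0   # experimental group, before and after X
ctl_pre, ctl_post = 41.0, 46.0   # control group, no treatment

exp_change = exp_post - exp_pre  # change with exposure to X
ctl_change = ctl_post - ctl_pre  # change expected without X

# The change attributable solely to the independent variable X
effect_of_x = exp_change - ctl_change
print(effect_of_x)               # 10.0
```

Here the experimental group improved by 15 points, but 5 of those points would have occurred anyway (the control group's change), leaving 10 points attributable to the treatment.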

This design follows all the same steps as the classic pre-test/post-test design except that it
omits the pre-test. There are many situations where a pre-test is impossible because the
participants have already been exposed to the treatment, or it would be too expensive or too
time-consuming.

For large groups, this design can control most of the threats to internal and external validity
as does the classic controlled experimental design. For example, it eliminates the threat to
internal validity of pre-testing by eliminating the pre-test. It may also decrease the problem of
experimental mortality by shortening the length of the study (no pre-test). For small groups,
however, a pre-test is necessary. Also, a pre-test is necessary if the researcher wants to
determine the exact amount of change attributable to the independent variable alone.
For many true experimental designs, pretest-posttest designs are the preferred method to
compare participant groups and measure the degree of change occurring as a result of
treatments or interventions.

Pretest-posttest designs grew from the simpler posttest only designs, and address some of the
issues arising with assignment bias and the allocation of participants to groups.

One example is education, where researchers want to monitor the effect of a new teaching
method upon groups of children. Other areas include evaluating the effects of counseling,
testing medical treatments, and measuring psychological constructs. The only stipulation is
that the subjects must be randomly assigned to groups, in a true experimental design, to
properly isolate and nullify any nuisance or confounding variables.

Problems with Pretest-Posttest Designs

The main problem with this design is that it improves internal validity but sacrifices external
validity. There is no way of judging whether the process of pre-testing actually influenced the
results because there is no baseline measurement against groups that remained completely
untreated. For example, children given an educational pretest may be inspired to try a little
harder in their lessons, and both groups would outperform children not given a pretest, so it
becomes difficult to generalise the results to encompass all children.

The other major problem, which afflicts many sociological and educational research
programs, is that it is impossible and unethical to isolate all of the participants completely. If
two groups of children attend the same school, it is reasonable to assume that they mix
outside of classrooms and share ideas, potentially contaminating the results. On the other
hand, if the children are drawn from different schools to prevent this, the chance of selection
bias arises, because randomization is not possible.

The two-group control group design is an exceptionally useful research method, as long as its limitations are fully understood. For extensive and particularly important research, many researchers use the Solomon four-group method, a design that is more costly but avoids many weaknesses of the simple pretest-posttest designs.

b) Solomon Four Group Design

The Solomon four-group design developed by Solomon (1949) is really a combination of the two equivalent-group designs described above, namely the posttest-only design and the pretest-posttest design, and represents the first direct attempt to control the threats to external validity. The design may be diagrammed as:

R O1 X O2
R O3   O4
R    X O5
R      O6

In this design four groups are randomly set up by the experimenter, and two simultaneous experiments are conducted; hence the advantages of replication are available here. The effect of the X treatment is replicated in four ways: O2 > O1, O2 > O4, O5 > O6 and O5 > O3. This design makes it possible to evaluate the main effect of testing as well as the reactive effect of testing, thus increasing the external validity or generalisability. Factorial analysis of variance can be used as the appropriate statistical test. Because the design is complex from the methodological as well as the statistical point of view, it is less preferred than the two true experimental designs above.

FACTORIAL DESIGN

A factorial design is one in which two or more variables, or factors, are employed in such a way that all the possible combinations of selected values of each variable are used (McBurney & White, 2007). According to Singh (1998), a factorial design is a design in which selected values of two or more independent variables are manipulated in all possible combinations so that their independent as well as interactive effects upon the dependent variable may be studied.

On the basis of the above definitions it can be said that a factorial design is one in which two or more independent variables are manipulated in all possible combinations, and thus the factorial design enables the experimenter to study the independent effects as well as the interactive effects of two or more independent variables.

Terms related to factorial design

Factors: The term factor is broadly used to include any independent variable that is manipulated by the investigator in the experiment, whether directly or through selection. Sometimes it is possible to manipulate the independent variable directly; for example, a researcher studying the effect of different drug dosages on patient recovery may select three dosages, 2 mg, 4 mg and 6 mg, and administer the drug to the subjects. Further, the researcher may find that age is another important variable that may influence the rate of recovery from the disease. This second independent variable, age, cannot be directly manipulated by the researcher; the manipulation of the variable ‘age’ is achieved through selection of the sample. The researcher may then divide the subjects into three age groups.
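Manipulation through selection can be sketched as simple binning. The subject IDs, ages and cut-points below are illustrative assumptions, not from the text:

```python
# "Manipulating" age by selecting subjects into three age groups.
subjects = [("S01", 23), ("S02", 41), ("S03", 67), ("S04", 35),
            ("S05", 58), ("S06", 29), ("S07", 72), ("S08", 45)]

def age_group(age):
    """Assign a subject to one of three age groups (cut-points assumed)."""
    if age < 40:
        return "young"
    if age < 60:
        return "middle"
    return "older"

groups = {}
for sid, age in subjects:
    groups.setdefault(age_group(age), []).append(sid)

print({g: ids for g, ids in groups.items()})
```

The researcher does not change anyone's age; the "levels" of the factor arise entirely from how the sample is selected and grouped.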

Main Effect:

This is the simplest effect of a factor on a dependent variable: the effect of the factor alone, averaged across the levels of the other factors. According to McBurney & White (2007), a main effect in a factorial experiment is the effect of one independent variable, averaged over all levels of another independent variable.

Interaction :

The conclusions based on the main effects of two independent variables may at times be misleading unless we also take into consideration the interaction effect of the two variables. According to McBurney & White (2007), an interaction occurs when the effect of one independent variable depends on the level of another independent variable.

An interaction is the variation among the differences between means for different levels of one factor over different levels of the other factor. For example, a cholesterol reduction clinic has two diets and one exercise regime. It was found that exercise alone was effective and diet alone was effective in reducing cholesterol levels (a main effect of exercise and a main effect of diet). For those patients who did not exercise, the two diets worked equally well; those who followed diet A and exercised got the benefits of both (main effect of diet A plus main effect of exercise). However, those patients who followed diet B and exercised got the benefit of both plus a bonus, an interaction effect (main effect of diet B, main effect of exercise, plus an interaction effect of diet and exercise).
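The diet-and-exercise example can be sketched with invented cell means (none of these numbers come from the text; they are chosen only to exhibit an interaction):

```python
# 2x2 cell means: change in cholesterol level (negative = reduction).
cell_means = {
    ("A", "no_exercise"): -10,
    ("A", "exercise"):    -18,   # diet A effect plus exercise effect
    ("B", "no_exercise"): -10,   # the two diets work equally well alone
    ("B", "exercise"):    -26,   # diet B + exercise: both effects plus a bonus
}

# Simple effect of exercise at each level of diet
exercise_at_a = cell_means[("A", "exercise")] - cell_means[("A", "no_exercise")]
exercise_at_b = cell_means[("B", "exercise")] - cell_means[("B", "no_exercise")]

# Interaction: the exercise effect differs depending on the diet
interaction = exercise_at_b - exercise_at_a
print(exercise_at_a, exercise_at_b, interaction)   # -8 -16 -8
```

Because the effect of exercise is not the same at both diet levels, averaging over diets (the main effect of exercise) would hide the extra benefit unique to diet B.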

Types of Interaction:

1) Antagonistic interaction: When the main effects are non-significant but the interaction is significant. In this situation the two independent variables tend to reverse each other's effects.

2) Synergistic interaction: When a higher level of one independent variable enhances the effect of another independent variable.

3) Ceiling effect interaction: When a higher level of one independent variable reduces the differential effect of another variable; that is, one variable has a smaller effect when paired with a higher level of a second variable (McBurney & White, 2007).

All of these types of interaction are common in psychological research.

Randomisation: Randomisation is the process by which experimental units are allocated to treatments by a random process and not by any subjective process. The treatments should be allocated to units in such a way that each treatment is equally likely to be applied to each unit.
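Random allocation can be sketched with the standard-library random module (the subject labels and group sizes are illustrative):

```python
# Randomly allocate 20 experimental units to two equal-sized treatments.
import random

subjects = [f"S{i:02d}" for i in range(1, 21)]
random.shuffle(subjects)            # a random, not subjective, process

treatment_group = subjects[:10]
control_group = subjects[10:]

# Every unit lands in exactly one group; each unit was equally likely
# to receive either treatment.
print(len(treatment_group), len(control_group))   # 10 10
```

Shuffling first and then splitting guarantees the two groups differ only by chance, which is exactly what the equivalence of randomly formed groups rests on.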

Blocking: This is the procedure by which experimental units are grouped into homogeneous clusters in an attempt to improve the comparison of treatments, by randomly allocating the treatments within each cluster or block.

SIMPLE TWO FACTOR DESIGN

In the two factor design we have two independent variables, each of which has two values or levels. This is known as a two-by-two (2×2) factorial design because each of the two variables has two levels.
Importance of Interaction

A main effect is an average effect. It can be misleading when an interaction is present. When an interaction is present, we should examine the effect of any factor of interest at each level of the interacting factor before making an interpretation (Minium et al., 2001).

The two factor design is really made up of several one factor experiments. In addition to main effects, the factorial design also allows us to test simple effects. For example, suppose we have a 2×2 design in which factor A has two levels, A1 and A2, and factor B has two levels, B1 and B2.

Main effects compare differences among the levels of one factor averaged across all levels of the other. However, this particular design consists of four one-way experiments, and we may analyse each of them separately. We may be interested in the effect of A (both levels) specifically under condition B2. Simple effects refer to the results of these one factor analyses. To make such comparisons, the interaction must first be significant.

Types of Factorial Research

1. Within-subjects design: a research design in which each participant experiences every condition of the experiment or study.

A. Advantages
i. do not need as many participants
ii. equivalence is certain

B. Disadvantages
i. effects of repeated testing
ii. dependability of treatment effects
iii. irreversibility of treatment effects

2. Between-subjects design: a research design in which each participant experiences only one of the conditions in the experiment or study.

A. Advantages
i. effects of testing are minimized

B. Disadvantages
i. equivalency is less assured
ii. greater number of participants needed

3. Mixed factorial design: a research design that combines between- and within-subject variables in the same design.

Advantage of factorial design

Factorial design enables the researcher to manipulate and control two or more independent variables simultaneously. With this design we can study the separate and combined effects of a number of independent variables. The factorial design is more precise than the single factor design (Kerlinger, 2007). With a factorial design we can find the independent, or main, effects of the independent variables as well as the interactive effects of two or more independent variables. The results of a factorial experiment are more comprehensive and can be generalised to a wider range, owing to the manipulation of several independent variables in one experiment.

Limitation of factorial design

Sometimes, especially when more than three independent variables, each with three or more levels, are to be manipulated together, the experimental setup and statistical analysis become very complicated. In factorial experiments, when the number of treatments or treatment combinations becomes large, it becomes difficult for the experimenter to select a homogeneous group.

Analysis of variance (ANOVA)

Analysis of variance (abbreviated as ANOVA) is an extremely useful technique for research in the fields of economics, biology, education, psychology, sociology, business/industry and several other disciplines. This technique is used when multiple sample cases are involved. As stated earlier, the significance of the difference between the means of two samples can be judged through either the z-test or the t-test, but difficulty arises when we need to examine the significance of the differences amongst more than two sample means at the same time. The ANOVA technique enables us to perform this simultaneous test and as such is considered an important tool of analysis in the hands of a researcher. Using this technique, one can draw inferences about whether the samples have been drawn from populations having the same mean. The ANOVA technique is important in all those situations where we want to compare more than two populations, such as comparing the yield of crop from several varieties of seed, the gasoline mileage of four automobiles, or the smoking habits of five groups of university students. In such circumstances one generally does not want to consider all possible pairs of populations, two at a time, for that would require a great number of tests before arriving at a decision. It would also consume a lot of time and money, and even then certain relationships might be left unidentified (particularly the interaction effects). Therefore, one quite often uses the ANOVA technique to investigate the differences among the means of all the populations simultaneously.
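A one-way ANOVA on the seed-variety example can be sketched by hand (the yields are invented illustration values; the F ratio follows the standard between-groups/within-groups decomposition):

```python
# One-way ANOVA comparing mean crop yield across three seed varieties.
from statistics import mean

groups = {
    "variety_1": [20, 21, 23, 22],
    "variety_2": [28, 27, 29, 30],   # invented to differ from the others
    "variety_3": [21, 22, 20, 23],
}

grand_mean = mean(x for g in groups.values() for x in g)
k = len(groups)                            # number of groups
n = sum(len(g) for g in groups.values())   # total observations

# Variation attributed to the factor (between groups) vs chance (within)
ss_between = sum(len(g) * (mean(g) - grand_mean) ** 2 for g in groups.values())
ss_within = sum((x - mean(g)) ** 2 for g in groups.values() for x in g)

f_ratio = (ss_between / (k - 1)) / (ss_within / (n - k))
print(f"F({k - 1}, {n - k}) = {f_ratio:.1f}")
```

Here F(2, 9) comes out far above the 5% critical value of about 4.26, so the hypothesis that the three population means are equal would be rejected, without running three separate pairwise tests.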

WHAT IS ANOVA?

Professor R. A. Fisher was the first to use the term ‘variance’ and, in fact, it was he who developed a very elaborate theory concerning ANOVA, explaining its usefulness in practical fields.

“The essence of ANOVA is that the total amount of variation in a set of data is broken down into two types: that amount which can be attributed to chance and that amount which can be attributed to specified causes.” There may be variation between samples and also within sample items. ANOVA consists in splitting the variance for analytical purposes. Hence, it is a method of analysing the variance to which a response is subject into its various components corresponding to various sources of variation.

Through this technique one can examine whether various varieties of seeds, fertilizers or
soils differ significantly, so that a policy decision can be taken accordingly concerning a
particular variety in the context of agricultural research. Similarly, the differences among
various types of feed prepared for a particular class of animal, or various types of drugs
manufactured for curing a specific disease, may be studied and judged significant or not
through the application of the ANOVA technique.

Likewise, the manager of a big concern can analyse the performance of its various salesmen
to find out whether their performances differ significantly. Thus, through the
ANOVA technique one can, in general, investigate any number of factors which are
hypothesized or said to influence the dependent variable. One may as well investigate the
differences amongst various categories within each of these factors, which may have a large
number of possible values. If we take only one factor and investigate the differences amongst
its various categories, we are said to use one-way ANOVA; if we investigate two factors at
the same time, we use two-way ANOVA. In a two-or-more-way ANOVA, the interaction
(i.e., the inter-relation between two independent variables/factors), if any, between two
independent variables affecting a dependent variable can also be studied for better decisions.

The basic principle of ANOVA

The basic principle of ANOVA is to test for differences among the means of the populations
by examining the amount of variation within each of these samples, relative to the amount of
variation between the samples. In terms of variation within a given population, it is
assumed that the values Xij differ from the mean of that population only because of
random effects, i.e., there are influences on Xij which are unexplainable; whereas in
examining differences between populations we assume that the difference between the mean
of the jth population and the grand mean is attributable to what is called a 'specific factor' or
what is technically described as the treatment effect.

Thus while using ANOVA, we assume that each of the samples is drawn from a normal
population and that each of these populations has the same variance. We also assume that all
factors other than the one or more being tested are effectively controlled. This, in other
words, means that we assume the absence of many factors that might affect our conclusions
concerning the factor(s) to be studied. In short, we have to make two estimates of the
population variance, viz., one based on the between-samples variance and the other based on
the within-samples variance. The said two estimates of population variance are then
compared using the F-test, wherein we work out:

F = (estimate of population variance based on between-samples variance) /
(estimate of population variance based on within-samples variance)

This value of F is to be compared to the F-limit for the given degrees of freedom. If the F
value we work out equals or exceeds the F-limit value (to be seen from F tables No. 4(a) and
4(b) given in the appendix), we may say that there are significant differences between the
sample means.
ANOVA TECHNIQUE

a) One-way (or single factor) ANOVA:

Under one-way ANOVA, we consider only one factor; the reason this factor is important is
that several possible samples can occur within it, and we wish to determine whether there
are differences within that factor. The technique involves the following steps:

(i) Obtain the mean of each sample

(ii) Work out the mean of the sample means

(iii) Take the deviations of the sample means from the mean of the sample means, calculate
the squares of such deviations, multiply each by the number of items in the corresponding
sample, and then obtain their total. This is known as the sum of squares for variance between
the samples (or SS between).

(iv) Divide the result of step (iii) by the degrees of freedom between the samples to
obtain the variance or mean square (MS) between samples.

(v) Obtain the deviations of the values of the sample items for all the samples from
corresponding means of the samples and calculate the squares of such deviations and then
obtain their total. This total is known as the sum of squares for variance within samples (or
SS within).

(vi) Divide the result of step (v) by the degrees of freedom within samples to obtain the
variance or mean square (MS) within samples.

(vii) For a check, the sum of squares of deviations for total variance can also be worked out
by adding the squares of deviations when the deviations for the individual items in all the
samples have been taken from the mean of the sample means.

(viii) Finally, F-ratio may be worked out as under:

F-ratio = MS between / MS within
This ratio is used to judge whether the difference among several sample means is significant
or is just a matter of sampling fluctuations. For this purpose we look into the table giving
the values of F for the given degrees of freedom at different levels of significance. If the
worked-out value of F is less than the table value of F, the difference is taken as
insignificant, i.e., due to chance, and the null hypothesis of no difference between sample
means stands. In case the calculated value of F happens to be equal to or more than its
table value, the difference is considered significant (which means the samples could not
have come from the same universe) and the conclusion may be drawn accordingly. The
higher the calculated value of F is above the table value, the more definite and sure one can
be about one's conclusions.
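The steps above can be sketched in pure Python. This is a minimal illustration, assuming three equal-sized sample groups of hypothetical scores (so that the mean of the sample means equals the grand mean); the function name and data are not from the text.

```python
# A minimal sketch of steps (i)-(viii) above, assuming equal-sized samples.
def one_way_anova(samples):
    k = len(samples)                                  # number of samples
    n = sum(len(s) for s in samples)                  # total number of items
    means = [sum(s) / len(s) for s in samples]        # step (i): sample means
    grand_mean = sum(means) / k                       # step (ii): mean of sample means
    # Step (iii): squared deviations of sample means, weighted by sample size
    ss_between = sum(len(s) * (m - grand_mean) ** 2
                     for s, m in zip(samples, means))
    ms_between = ss_between / (k - 1)                 # step (iv)
    # Step (v): squared deviations of items from their own sample mean
    ss_within = sum((x - m) ** 2
                    for s, m in zip(samples, means) for x in s)
    ms_within = ss_within / (n - k)                   # step (vi)
    f_ratio = ms_between / ms_within                  # step (viii)
    return ss_between, ss_within, f_ratio

samples = [[6, 7, 3, 8], [5, 5, 3, 7], [5, 4, 3, 4]]
ss_b, ss_w, f = one_way_anova(samples)
print(ss_b, ss_w, f)   # 8.0 24.0 1.5
```

The resulting F would then be compared with the tabulated F-limit for (k − 1, n − k) = (2, 9) degrees of freedom at the chosen significance level.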

Two-Way ANOVA

Two-way ANOVA technique is used when the data are classified on the basis of two factors.
For example, the agricultural output may be classified on the basis of different varieties of
seeds and also on the basis of different varieties of fertilizers used. A business firm may have
its sales data classified on the basis of different salesmen and also on the basis of sales in
different regions. In a factory, the various units of a product produced during a certain period
may be classified on the basis of different varieties of machines used and also on the basis of
different grades of labour. Such a two-way design may or may not have repeated
measurements of each factor. The ANOVA technique is a little different in the case of
repeated measurements, where we also compute the interaction variation.
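For the balanced, repeated-measurements case, the split of variation into two main effects, the interaction, and the within-cell (error) variation can be sketched as follows. This is an illustrative sketch: the function name and the cell data are hypothetical, not from the text.

```python
# Sums-of-squares decomposition for a balanced two-way ANOVA with replication.
# cells[i][j] holds the replicate scores for level i of factor A, level j of B.
def two_way_ss(cells):
    a, b = len(cells), len(cells[0])
    r = len(cells[0][0])                                   # replicates per cell
    cell_means = [[sum(c) / r for c in row] for row in cells]
    a_means = [sum(row) / b for row in cell_means]         # factor A level means
    b_means = [sum(cell_means[i][j] for i in range(a)) / a for j in range(b)]
    grand = sum(a_means) / a                               # grand mean (balanced)
    ss_a = b * r * sum((m - grand) ** 2 for m in a_means)  # main effect of A
    ss_b = a * r * sum((m - grand) ** 2 for m in b_means)  # main effect of B
    ss_ab = r * sum((cell_means[i][j] - a_means[i] - b_means[j] + grand) ** 2
                    for i in range(a) for j in range(b))   # interaction A x B
    ss_within = sum((x - cell_means[i][j]) ** 2
                    for i in range(a) for j in range(b) for x in cells[i][j])
    return ss_a, ss_b, ss_ab, ss_within

cells = [[[1, 3], [5, 7]],
         [[2, 4], [6, 8]]]
ss_a, ss_b, ss_ab, ss_w = two_way_ss(cells)
```

Each sum of squares, divided by its degrees of freedom, gives a mean square, and F-ratios for the two factors and the interaction are formed against the within-cell mean square. In this made-up data set the interaction sum of squares comes out to zero, i.e., the effects are purely additive.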

Two-way ANOVA extends one-way ANOVA to include two independent or explanatory
variables. It is like an experimental study that gives numerical results with two explanatory
variables which are categorical in nature. The independent variables are referred to as factors
and may each have several levels. It is assumed that the dependent variable is affected by both
factors, and a subject may be exposed to one level of each of the two explanatory categorical
variables. As there are two independent variables involved, the impact that a change in one
independent variable produces on the outcome may or may not depend on the other
independent variable.

Types of Two-Way ANOVA

Two-way ANOVA can also have subtypes:

 Two-way repeated measures ANOVA: two independent variables, both measured
using the same participants
 Two-way mixed ANOVA: two independent variables, one measured using
different participants and the other measured using the same participants

Assumptions of Two-way ANOVA:

Two-way ANOVA as an analytic method is appropriate for a study with a quantitative
outcome and two or more categorical explanatory variables:
 Normality: it is assumed that the parent populations from which the samples have been
drawn are normally distributed
 Independence: the chosen samples are independent of each other.
 Homogeneity of Variances: it is assumed that the scatter of scores or the variance among
the samples is uniform or in other words the obtained scores are uniformly scattered around
their means in each sample.
 Sample size: it is important that the samples chosen are of the same size.

Models of Two-Way ANOVA


Two-way ANOVA has different models at its disposal:

Structural model with interaction: the structural model of ANOVA suggests that the
combinations of explanatory variables being explored each have their respective populations
with corresponding means. The pattern of these means may be arbitrary, which implies that
the outcome produced by changing any one variable may depend upon the level of the other
variable.
No-interaction (additive) model: the no-interaction additive model places conditions on the
population means of the outcomes. A change in one explanatory variable is supposed to
have the same effect on the outcome at every level of the other explanatory variable. In other
words, the effect of a change in one explanatory variable does not depend upon the level of
the other variable.

Procedures of Two Way ANOVA

 To begin with, two-way ANOVA requires formulation of the hypotheses, which are:
the first factor's population means are equal; the second factor's population means are
equal; and the two factors do not show any interaction.

 The treatment groups are then formed by making all possible combinations of the
two factors. For instance, if the first factor has 3 levels and the second one has 2
levels, there will be 3 x 2 = 6 different treatment groups.
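Forming the treatment groups from all possible combinations of the two factors amounts to taking a Cartesian product, sketched below; the factor names and levels are illustrative only, not from the text.

```python
from itertools import product

# Hypothetical factors: the first with 3 levels, the second with 2 levels.
seed_variety = ["A", "B", "C"]
fertilizer = ["low", "high"]

# Every combination of one level from each factor is one treatment group.
treatment_groups = list(product(seed_variety, fertilizer))
print(len(treatment_groups))   # 3 x 2 = 6
```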

Advantages of Two-Way ANOVA


Following are the advantages of two-way ANOVA:

 Usually requires a smaller total sample size
 More efficient to study two factors at once than separately
 Removes some part of the random variability, as part of that variability is explained by
the second factor, easing the search for significant differences
 Interactions between factors can be examined to find how the effect of a change in one
variable varies with the level of the other factor
 Statistical significance can be obtained for each factor, for the interaction effect, or any
combination of these

Applications of Two-Way ANOVA

Two-way ANOVA is applied to the data when the researcher wishes:

 To study the effects of multiple levels of a single independent variable or factor on a
dependent variable
 To carry out simultaneous analysis of two independent variables, saving the expenses
and resources of two separate researches
 To generalize results better and have a broader interpretation of the results

Some highlights of applying ANOVA to the data are:
 Focus of analysis: the purpose of two-way or multi-factorial ANOVA is to analyse
the mean differences across main effects, interaction effects and simple effects.
 Number of independent variables: the independent variable is referred to as a factor
here, so it follows that more than one independent variable is involved in
multi-factor ANOVA.
 Scale of measurement of the dependent variable: the dependent variable should be
continuously scaled, i.e., on an interval or ratio scale.
 Relationship of the participants across groups being compared: multi-factor
ANOVA may be used for within-group (related subjects) or between-group
(independent) designs.
 Assumptions: the dependent variable must meet the assumptions of normality,
homogeneity of variance and covariance, and independence of scores.

SMALL N DESIGNS

Small-N designs are single-case experiments; they are used frequently in special education
and clinical settings (behaviour modification).

•they are true experiments.


•they involve manipulation of one or more IVs and comparison of outcome on the DV
•causal statements can be made if there is adequate control of confounding variables.
•Although there are several variations on the Small N design, typically, they involve
observing an individual before and after treatment
•The number of observations made before and after treatment varies across designs.
•Other factors that can also vary are the number of before and after treatment cycles, and also
the number of treatments (IVs).
•Statistical methods are not used to analyze the data; instead, results are represented visually
on a graph and one looks for meaningful before-and-after differences.
•We need to begin by having a stable trend
•Look for consistency and change in levels
•Look for consistency and change in trends
•Change in levels and in trends should be immediate and noticeable.
•For levels, you can calculate averages for each phase.
•Changes in trends should be noticeably different

Dealing with unstable data:
•Wait until the data stabilize
•Average a set of two or more observations
•Look for patterns within the inconsistencies
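Computing the average level for each phase, as suggested above, can be sketched as follows. The ABAB observation series and the behaviour being counted are hypothetical, purely for illustration.

```python
# Group observations by phase and average them, to judge changes in level
# across the A (baseline) and B (treatment) phases of an ABAB design.
def phase_levels(observations, phase_labels):
    levels = {}
    for score, phase in zip(observations, phase_labels):
        levels.setdefault(phase, []).append(score)
    return {phase: sum(scores) / len(scores) for phase, scores in levels.items()}

obs = [9, 8, 9, 3, 2, 2, 8, 9, 8, 2, 3, 2]             # e.g. tantrums per session
phases = ["A1"] * 3 + ["B1"] * 3 + ["A2"] * 3 + ["B2"] * 3
levels = phase_levels(obs, phases)
# In this made-up series the level drops sharply in each B (treatment) phase
# and returns towards baseline in A2, the pattern a reversal design looks for.
```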

Types of Small N designs/ single-case experimental designs

a) SIMPLE REVERSAL DESIGN: ABA OR ABAB DESIGNS

•These types of designs are necessary to demonstrate cause and effect, so long as you
expect the treatment to have a short-term effect.
•They maximize the likelihood that observed changes are related to the treatment, and can
be used to show the presence and absence of an effect.
•ABA design: no replication; we just want to make sure that after the treatment is
taken away, behaviour goes back to baseline.
•ABAB design: replication, to make sure that the change did not happen randomly but
really because of the treatment.
•Advantage: can establish the existence of a cause-and-effect relationship.

•Problem: irreversibility of treatment. These designs are not appropriate if long-lasting
treatment effects are expected.

b) MULTIPLE BASELINE DESIGN


•To evaluate treatments with long-lasting effects.
Can compare results across:
•Participants: same treatment, different participants
•Behaviors: same treatment, different problems
•Situations: same treatment, different situations

Advantages
• eliminates need for a reversal
Disadvantages
•clarity of results can be compromised by individual differences between participants or
between behaviors
•when a single participant is used to examine more than one behavior, the treatment for
one behavior may produce changes in the other behavior

CONCLUSION
Research can be categorised into quantitative and qualitative methods. The
characteristics of quantitative research include clearly defined research questions, a
representative sample, manipulation/control of variables, and structured and standardised
tools used for data collection; quantitative methods are also reliable and valid, and
generalisation is possible. The strengths and limitations of quantitative research were also
discussed. Further, the unit covered the methods of quantitative research, including
experimental research, non-experimental research, laboratory experiments, field
experiments and field studies. In experimental research, the researcher can
manipulate the predictor or independent variable(s) as per the requirements of the
research to examine the cause-effect relationship; the researcher conducts the
experiment within a controlled environment. In non-experimental research, the
researcher cannot manipulate the independent variable(s). Non-experimental
research is high on external validity and can be generalised to a larger population.
Field experiments are conducted in a natural setting within the environment of the
participants; the experimenter can manipulate the independent variable(s), but control is
low. Field studies are non-experimental in nature, as the researcher cannot
manipulate any variable(s), and the study is carried out in a natural setting.

References

Edwards, A. L. (1980). Experimental Design in Psychological Research. New York: Holt,
Rinehart and Winston.

Kothari, C. R. (1986). Research Methodology: Methods and Techniques. New Delhi:
Wiley Eastern Ltd.

Singh, A. K. (2006). Tests, Measurements, and Research Methods in Behavioural
Sciences. Patna: Bharati Bhavan Publishers and Distributors.
