Experimental Research
Experimental research is any research conducted with a scientific approach, in which one set of
variables is kept constant while another set is measured as the subject of the experiment.
Experimental research is one of the foundational quantitative research methods.
It is important for experimental research to establish cause and effect, which means it should be
definite that the effects observed in an experiment are due to the manipulated cause. Naturally
occurring events can make it difficult for researchers to draw such conclusions. For instance, a
cardiology student researching the effect of food on cholesterol may find that most heart patients
are non-vegetarians or have diabetes; both are factors (causes) that can contribute to a heart
attack (effect), so the effect of diet cannot be isolated without controlling for them.
Aim
The aim of experimental design is to efficiently obtain sufficient data—with the least effort and cost—
from which scientifically and statistically valid conclusions can be drawn. Understanding the
experimental goals and processes as well as the amount and sources of experimental variability are all
important to designing successful experiments.
1. Biological Variation.
Biological variation depends on the characteristics of the population being studied. For example,
measuring the height of a random group of people will show larger variability than a study
limited to people of one age or sex. For human gene expression, the coefficient of variation
ranges from 20 to 100%.
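The coefficient of variation mentioned above is simply the standard deviation divided by the mean, expressed as a percentage. A minimal sketch in Python, using invented expression values for illustration:

```python
import statistics

def coefficient_of_variation(values):
    """Coefficient of variation (CV) as a percentage:
    sample standard deviation divided by the mean, times 100."""
    return statistics.stdev(values) / statistics.mean(values) * 100

# Hypothetical expression levels for one gene across six subjects.
expression = [12.1, 9.8, 15.3, 11.0, 20.4, 8.7]
print(f"CV = {coefficient_of_variation(expression):.1f}%")
```

Because the CV is scale-free, it allows the variability of genes (or populations) measured on very different scales to be compared directly.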
2. Process Variation. Process variation refers to variability in the data that is exhibited when the
same sample is run independently multiple times. Process variation results from the following:
1. Random (or Common-Cause) Variation: These include unpredictable and natural
variations that may affect some, but not all, samples (e.g., a pipetting error). Efforts
should be made to identify and reduce them, but they can never be completely eliminated.
Taking the most accurate measurements possible and carefully following the
experimental protocol or standard operating procedure (SOP) are also part of controlling
random variations.
2. System Variation. System variation comes from the instrument used to take
measurements. The variability of the measurement system contributes to the process
variability and can be a common cause or a special cause. A standard ruler is an example
of a measurement system. Accuracy is usually taken to be half of the smallest division
mark (e.g., ±0.5 mm, if the ruler has 1 mm marks). This is based on the assumption that
estimating halfway between any two marks is relatively easy, while smaller fractions are
not as accurately estimated by eye. Additional implicit assumptions are that the person
taking the measurement has good eyesight and that the ruler markings are accurate. If the
ruler manufacturer mismarked the ruler, the ruler would have a bias.
3. Experimental Variation. Experimental variation is the total variation seen in an experiment and
comes from both the process and biological population variability.
1. Pretest-Posttest
A pretest-posttest design lets a researcher compare a measurement taken before the treatment
with the same measurement taken after it. The assumption is that any difference between the
before and after scores is due to the treatment. Administering both tests helps account for
confounding from the setting and from individual characteristics.
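The before/after comparison can be summarized with a paired t statistic: the mean of the per-subject change scores divided by its standard error. A minimal sketch, with invented scores for five subjects:

```python
import math
import statistics

def paired_t(pre, post):
    """Paired t statistic for pretest-posttest scores:
    mean of per-subject differences divided by its standard error."""
    diffs = [after - before for before, after in zip(pre, post)]
    mean_d = statistics.mean(diffs)
    se = statistics.stdev(diffs) / math.sqrt(len(diffs))
    return mean_d, mean_d / se

# Hypothetical scores for five subjects before and after a treatment.
pre = [52, 60, 45, 70, 58]
post = [58, 64, 50, 71, 66]
mean_diff, t_stat = paired_t(pre, post)
print(f"mean change = {mean_diff:.1f}, t = {t_stat:.2f}")
```

Pairing each subject with themselves is what removes stable individual characteristics from the comparison: only within-subject change enters the statistic.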
2. Homogeneous Sampling
This approach involves selecting people who are highly similar on the particular trait being
measured, which removes individual differences as a problem when interpreting the results. The
more similar the subjects in the sample are, the better those traits are controlled for.
3. Covariate
Covariate analysis is a statistical approach in which controls are placed on the dependent
variable through statistical analysis: the influence of other variables (covariates) is removed
from the explained variance of the dependent variable. In other words, covariates are used to
describe the relationship between the independent and dependent variables more precisely, by
removing other variables that might otherwise explain that relationship.
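One common way to remove a covariate's influence is to regress the dependent variable on the covariate by ordinary least squares and analyze the residuals (the part of the outcome the covariate does not explain). A minimal sketch, with the variable names and data invented for illustration:

```python
def residualize(y, x):
    """Remove the linear influence of covariate x from outcome y:
    fit y = a + b*x by ordinary least squares, return the residuals."""
    n = len(y)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    a = my - b * mx
    return [yi - (a + b * xi) for xi, yi in zip(x, y)]

# Hypothetical test scores (y) adjusted for hours of prior study (x).
scores = [55, 61, 58, 72, 69]
study_hours = [2, 4, 3, 8, 7]
adjusted = residualize(scores, study_hours)
```

Comparing groups on the residuals rather than the raw scores is the core idea behind analysis of covariance (ANCOVA).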
4. Matching
Matching means deliberately, rather than randomly, assigning subjects to the various groups. For
example, in a study involving intelligence you might place an equal number of high achievers in
each group; with the high achievers represented in both groups, their differences cancel out.
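A simple matching procedure sorts subjects on the trait, pairs adjacent subjects, and sends one member of each pair to each group. A minimal sketch, with invented (name, IQ score) records:

```python
import random

def matched_assignment(subjects, trait, seed=None):
    """Sort subjects by a matching trait, pair adjacent subjects,
    and randomly send one member of each pair to each group."""
    rng = random.Random(seed)
    ranked = sorted(subjects, key=trait)
    group_a, group_b = [], []
    for i in range(0, len(ranked) - 1, 2):
        pair = [ranked[i], ranked[i + 1]]
        rng.shuffle(pair)  # randomize which pair member goes where
        group_a.append(pair[0])
        group_b.append(pair[1])
    return group_a, group_b

# Hypothetical (name, IQ score) records.
students = [("A", 130), ("B", 95), ("C", 128),
            ("D", 99), ("E", 110), ("F", 112)]
g1, g2 = matched_assignment(students, trait=lambda s: s[1], seed=0)
```

Because each pair contributes one member to each group, the groups end up with near-identical distributions on the matched trait.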
A true experimental design requires:
1. A control group (participants who are similar to the experimental group but who do
not receive the experimental treatment) and an experimental group (participants
who do receive the treatment)
2. A variable that can be manipulated by the researcher
3. Random distribution of subjects to groups
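The random-distribution requirement can be sketched with the standard library's `random` module: shuffle the subject pool and split it evenly into control and experimental groups (subject IDs below are invented):

```python
import random

def randomly_assign(subjects, seed=None):
    """Shuffle subjects and split them evenly into a
    control group and an experimental group."""
    rng = random.Random(seed)
    pool = list(subjects)
    rng.shuffle(pool)
    half = len(pool) // 2
    return pool[:half], pool[half:]

# Twenty hypothetical subject IDs, assigned reproducibly with a seed.
control, experimental = randomly_assign(range(1, 21), seed=42)
print("control:", sorted(control))
print("experimental:", sorted(experimental))
```

Random assignment gives every subject the same chance of landing in either group, so, on average, known and unknown confounds are balanced between them.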
True experimental research design has the following types:
1. Post-test Only Design/After Only Design– This type of design has two randomly
assigned groups: an experimental group and a control group. Neither group is
pretested before the implementation of the treatment. The treatment is applied to the
experimental group and the post-test is carried out on both groups to assess the effect
of the treatment or manipulation. This type of design is common when it is not
possible to pretest the subjects.
2. Pretest-Posttest Design/Before and After Design – The subjects are again
randomly assigned to either the experimental or the control group. Both groups are
pretested on the dependent variable. The experimental group receives the treatment,
and both groups are post-tested to examine the effects of manipulating the
independent variable on the dependent variable.
3. Solomon Four-Group Design – Subjects are randomly assigned to one of four
groups: two experimental groups and two control groups. Only two of the groups
are pretested. One pretested group and one unpretested group receive the treatment,
and all four groups receive the post-test. The pretest scores originally observed are
then compared with the effects of the independent variable on the dependent
variable as seen in the post-test results. This design is essentially a combination of
the previous two and is used to eliminate potential sources of error, such as the
pretest itself influencing the outcome.
4. Factorial Design – The researcher manipulates two or more independent variables
(factors) simultaneously to observe their effects on the dependent variable. This
design allows for the testing of two or more hypotheses in a single project. One
example would be a researcher who wanted to test two different protocols for burn
wounds with the frequency of the care being administered in 2, 4, and 6 hour
increments.
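The cells of the factorial design described above are simply every combination of factor levels: two protocols crossed with three care frequencies give a 2×3 design with six conditions. A minimal sketch (protocol names are invented placeholders):

```python
from itertools import product

# Hypothetical factor levels from the burn-wound example.
protocols = ["protocol A", "protocol B"]
frequencies_h = [2, 4, 6]  # hours between care administrations

# Each combination of factor levels is one cell of the 2x3 design.
cells = list(product(protocols, frequencies_h))
for protocol, hours in cells:
    print(f"{protocol}, care every {hours} h")
print(f"{len(cells)} experimental conditions")
```

Crossing the factors this way is what lets a single experiment test both main effects (protocol, frequency) and their interaction.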
State A: O1 O2 O3 O4 X O5 O6 O7 O8
State B: O1 O2 O3 O4 -  O5 O6 O7 O8
(In this standard design notation, each O is an observation and X is the treatment: State A
receives the treatment after the fourth observation, while State B, the comparison condition,
does not.)
3. Pretest-Posttest Nonequivalent Group: With this design, a control group and
an experimental group are compared; however, the groups are chosen and
assigned out of convenience rather than through randomization. This might be
the method of choice for our study on work experience as it would be difficult to
choose students in a college setting at random and place them in specific groups
and classes. We might ask students to participate in a one-semester work
experience program. We would then measure all of the students’ grades prior to
the start of the program and then again after the program. Those students who
participated would be our treatment group; those who did not would be our
control group.
INTERNAL VALIDITY
Internal validity concerns the strength and control of a research design and its ability to
determine causal relationships between independent and dependent variables. For example, to
determine whether a certain treatment is effective at treating depression, the study would
have to be controlled so that other factors are not responsible for the observed depression levels.
EXTERNAL VALIDITY
It refers to the degree to which the results of an empirical investigation can be
generalized to and across individuals, settings, and times.
Types of external validity
External validity can be divided into
1. Population validity: How representative is the sample of the population? The
more representative, the more confident we can be in generalizing from the
sample to the population. How widely does the finding apply? Generalizing
across populations occurs when a particular research finding works across many
different kinds of people, even those not represented in the sample.
2. Ecological validity: Ecological validity is present to the degree that a result
generalizes across settings.
Threats of External Validity of Experimental Research Design
1. Interaction effect of testing: Pre-testing interacts with the
experimental treatment and causes some effect such that the results will not generalize to
an untested population. Example: In a physical performance experiment, the pre-test
clues the subjects to respond in a certain way to the experimental treatment that would
not be the case if there were no pre-test.
2. Interaction effects of selection biases and experimental treatment: An effect of some
selection factor of intact groups interacting with the experimental treatment that would
not be the case if the groups were randomly selected. Example: The results of an
experiment in which teaching method is the experimental treatment, used with a class of
low achievers, do not generalize to heterogeneous ability students.
3. Reactive effects of experimental arrangements: An effect that is due simply to the fact
that subjects know that they are participating in an experiment and experiencing the
novelty of it — the Hawthorne effect. Example: An experiment in remedial reading
instruction has an effect that does not occur when the remedial reading program, which is
the experimental treatment, is implemented in the regular program.
4. Multiple-treatment interference: When the same subjects receive two or more treatments,
as in a repeated-measures design, there may be a carryover effect between treatments
such that the results cannot be generalized to single treatments. Example: In a drug
experiment the same animals are administered four different drug doses in some
sequence. The effects of the second through fourth doses cannot be separated from the
possible delayed effects of the preceding doses.
Advantages of Experimental Research
1. Researchers have greater control over variables, allowing them to obtain the desired measurements.
2. Experimental research designs are repeatable and therefore, results can be
checked and verified.
3. Results are extremely specific.
4. Once the results are analyzed, they can be applied to other, similar situations.
5. Cause and effect relationships can be established for a hypothesis, allowing researchers to
analyze them in greater detail.
6. Experimental research can be used in association with other research methods.