
EXPERIMENTAL RESEARCH

Experimental research is any research conducted with a scientific approach, in which one set of variables is kept constant while another set is measured as the subject of the experiment. Experimental research is one of the foundational quantitative research methods.

The simplest example of experimental research is a laboratory test. As long as the research is conducted under scientifically acceptable conditions, it qualifies as experimental research. A true experiment is considered successful only when the researcher confirms that a change in the dependent variable is due solely to the manipulation of the independent variable.

It is important for experimental research to establish the cause and effect of a phenomenon, which means it should be definite that the effects observed in an experiment are due to the manipulated cause. Naturally occurring events can make it difficult for researchers to draw such conclusions. For instance, a cardiology student researching the effect of food on cholesterol may find that most heart patients are non-vegetarian or have diabetes; these are other factors (causes) that can also result in a heart attack (the effect) and so confound the conclusion.

Aim
The aim of experimental design is to obtain sufficient data efficiently, with the least effort and cost, from which scientifically and statistically valid conclusions can be drawn. Understanding the experimental goals and processes, as well as the amount and sources of experimental variability, is important to designing successful experiments.

Why Psychologists Conduct Experiments


1. Researchers conduct experiments to test hypotheses about the causes of behavior.
2. Experiments allow researchers to decide whether a treatment or program effectively changes
behavior.

Logic of Experimental Research


Researchers manipulate an independent variable in an experiment to observe the effect on behavior, as
assessed by the dependent variable.
Control is the essential ingredient of experiments; experimental control is gained through manipulation,
holding conditions constant, and balancing. Experimental control allows researchers to make the causal
inference that the independent variable caused the observed changes in the dependent variable.

Components/ elements of experimental research design


1. Independent variable
2. Dependent variable
3. Experimental group
4. Control group
5. Randomization
6. Pretesting
7. Post-testing
8. Control of extraneous variables
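
A minimal Python sketch of the randomization component above: a pool of hypothetical participants is randomly split into an experimental group and a control group (all IDs and numbers are illustrative, not from the text).

    import random

    # Hypothetical participant IDs
    participants = [f"P{i:02d}" for i in range(1, 21)]

    # Randomization: shuffle the pool, then split it in half
    random.seed(123)                         # fixed seed so the example is reproducible
    random.shuffle(participants)
    experimental_group = participants[:10]   # will receive the treatment (independent variable)
    control_group = participants[10:]        # will not receive the treatment

    print("Experimental group:", experimental_group)
    print("Control group:     ", control_group)
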
Sources of Variation
All experimental data have variability that comes from several sources. Understanding these sources can
lead to improved experimental design and results.

1. Biological Variation.
Biological variation depends on the characteristics of the population being studied. For example, measuring the height of a random group of people will show larger variability than a study limited to people of one age or sex. Also, for human gene expression, the coefficient of variation ranges from 20 to 100% (a short sketch after this list shows how the coefficient of variation is computed).
2. Process Variation. Process variation refers to variability in the data that is exhibited when the
same sample is run independently multiple times. Process variation results from the following:
1. Random (or Common-Cause) Variation: These include unpredictable and natural
variations that may affect some, but not all, samples (e.g., a pipetting error). Efforts
should be made to identify and reduce them, but they can never be completely eliminated.
Taking the most accurate measurements possible and carefully following the
experimental protocol or standard operating procedure (SOP) are also part of controlling
random variations.
2. System Variation. System variation comes from the instrument used to take
measurements. The variability of the measurement system contributes to the process
variability and can be a common cause or a special cause. A standard ruler is an example
of a measurement system. Accuracy is usually taken to be half of the smallest division
mark (e.g., ±0.5 mm, if the ruler has 1 mm marks). This is based on the assumption that
estimating halfway between any two marks is relatively easy, while smaller fractions are
not as accurately estimated by eye. Additional implicit assumptions are that the person
taking the measurement has good eyesight and that the ruler markings are accurate. If the
ruler manufacturer mismarked the ruler, the ruler would have a bias.
3. Experimental Variation. Experimental variation is the total variation seen in an experiment and
comes from both the process and biological population variability.
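
A minimal Python sketch, using made-up height measurements, of how the coefficient of variation (sample standard deviation divided by the mean, expressed as a percentage) quantifies biological variability:

    import numpy as np

    # Hypothetical height measurements (cm) for two samples
    mixed_ages = np.array([152.0, 160.5, 175.2, 181.0, 168.3, 190.1, 158.7])
    one_age_group = np.array([170.2, 172.5, 169.8, 171.1, 173.0, 170.6, 172.0])

    def coefficient_of_variation(x):
        # CV as a percentage: (sample standard deviation / mean) * 100
        return np.std(x, ddof=1) / np.mean(x) * 100

    print(f"CV, mixed ages:    {coefficient_of_variation(mixed_ages):.1f}%")
    print(f"CV, one age group: {coefficient_of_variation(one_age_group):.1f}%")

    # The more homogeneous sample shows the smaller CV, illustrating how
    # restricting the population reduces biological variation.
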

Techniques to control variations


Random assignment leads directly to the concern of controlling extraneous variables. Extraneous variables are any factors that might influence the cause-and-effect relationship the researcher is trying to establish; these factors confound, or confuse, the results of a study. Several methods for dealing with them are shown below.
1. Pretest-posttest
2. Homogeneous sampling
3. Covariate
4. Matching

1. Pretest-Posttest
A pretest-posttest approach allows a researcher to compare a measurement taken before the treatment with the same measurement taken after the treatment. The assumption is that any difference between the before and after scores is due to the treatment. Taking both measurements helps account for confounding from the setting and from individual characteristics.
2. Homogeneous Sampling
This approach involves selecting people who are highly similar on the particular trait being measured. It reduces the problem of individual differences when interpreting the results: the more similar the subjects in the sample are, the better those traits are controlled for.
3. Covariate
Covariate analysis is a statistical approach in which controls are placed on the dependent variable through statistical analysis: the influence of other variables is removed from the explained variance of the dependent variable. Covariates help explain more about the relationship between the independent and dependent variables. Although this is a difficult concept, the point is that covariates are used to describe the relationship between the independent and dependent variable in greater detail by statistically removing other variables that might account for it (a minimal sketch using the pretest score as a covariate appears after this list).
4. Matching
Matching is the deliberate, rather than random, assignment of subjects to groups. For example, in a study involving intelligence, you might place matched high achievers in both groups; by balancing the groups on achievement, you cancel out that difference.
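
A minimal Python sketch, with simulated data, of covariate adjustment in a pretest-posttest experiment: the pretest score is entered as a covariate so that the estimated treatment effect reflects posttest differences over and above what the pretest predicts. This assumes the statsmodels library is available; all variable names and numbers are illustrative.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(42)
    n = 40  # participants per group

    # Hypothetical data: posttest depends on pretest, treatment, and noise
    pretest = rng.normal(50, 10, 2 * n)
    group = np.repeat(["control", "treatment"], n)
    effect = np.where(group == "treatment", 5.0, 0.0)
    posttest = 0.8 * pretest + effect + rng.normal(0, 5, 2 * n)

    data = pd.DataFrame({"pretest": pretest, "group": group, "posttest": posttest})

    # ANCOVA-style model: posttest explained by group, adjusting for the pretest covariate
    model = smf.ols("posttest ~ C(group) + pretest", data=data).fit()
    print(model.params)  # the C(group)[T.treatment] coefficient is the adjusted treatment effect
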

Characteristics of Experimental Research


1. Can establish cause and effect (e.g., evidence that the treatment worked).
2. The researcher manipulates the independent variable (e.g., the researcher actively applies the treatment to the participants).
3. The researcher has control over the assignment of groups to treatments.
4. The researcher can randomly assign participants to groups (this is different from random selection of subjects).
5. High control over extraneous variables.
6. The researcher considers the context of the problem and then chooses an appropriate design and setting for the experiment.

SYMBOLS USED IN EXPERIMENTAL RESEARCH DESIGN

In the design notation used below, O denotes an observation or measurement (O1, O2, ... for successive observations) and X denotes exposure to the treatment (the independent variable).

TYPES OF EXPERIMENTAL RESEARCH DESIGN
There are three primary types of experimental research design:

1. Pre-experimental research design


2. True experimental research design
3. Quasi-experimental research design

I. Pre-Experimental Research Design


This is the simplest form of experimental research design. A group, or various groups, is kept under observation after factors are considered for cause and effect. It is usually conducted to determine whether further investigation of the target group or groups is needed, which is why it is considered cost-effective. Pre-experimental research design is further divided into three types:

I. One-shot Case Study Research Design


II. One-group Pretest-posttest Research Design
III. Static-group Comparison
1. One-shot case study design: A single group is studied at a single point in time
after some treatment that is presumed to have caused change. The carefully studied single
instance is compared to general expectations of what the case would have looked like had
the treatment not occurred and to other events casually observed. No control or
comparison group is employed.

2. One-group pretest-posttest design: A single case is observed at two time points,


one before the treatment and one after the treatment. Changes in the outcome of interest
are presumed to be the result of the intervention or treatment. No control or comparison
group is employed.
3. Static-group comparison: A group that has experienced some treatment is
compared with one that has not. Observed differences between the two groups are
assumed to be a result of the treatment.

2. True Experimental Research Design


True experimental research is the most accurate form of experimental research design, as it relies on statistical analysis to support or reject a hypothesis. It is the only type of experimental design that can establish a cause-and-effect relationship within a group or groups. In a true experiment, three conditions must be satisfied:

1. A control group (participants who are similar to the experimental group but do not receive the experimental treatment) and an experimental group (participants to whom the experimental treatment is applied)
2. A variable that can be manipulated by the researcher
3. Random distribution (random assignment of participants to groups)
True experimental research design has the following types:
1. Post-test Only Design/After-Only Design – This design has two randomly assigned groups: an experimental group and a control group. Neither group is pretested before the treatment is implemented. The treatment is applied to the experimental group, and a post-test is carried out on both groups to assess its effect. This design is common when it is not possible to pretest the subjects (a minimal analysis sketch for this design appears after this list).
2. Pretest-Posttest Design/Before-and-After Design – The subjects are again randomly assigned to either the experimental or the control group. Both groups are pretested on the dependent variable. The experimental group receives the treatment, and both groups are post-tested to examine the effect of manipulating the independent variable on the dependent variable.

3. Solomon Four-Group Design – Subjects are randomly assigned to one of four groups: two experimental groups and two control groups. Only two of the groups are pretested. One pretested group and one unpretested group receive the treatment, and all four groups receive the post-test. The originally observed values of the dependent variable are then compared with the effects of the independent variable on the dependent variable as seen in the post-test results. This design is essentially a combination of the previous two and is used to rule out potential sources of error, such as the effect of the pretest itself.
4. Factorial Design – The researcher manipulates two or more independent variables (factors) simultaneously to observe their effects on the dependent variable. This design allows two or more hypotheses to be tested in a single project. One example would be a researcher testing two different protocols for burn wounds (one factor) crossed with the frequency of care, administered at 2-, 4-, and 6-hour intervals (a second factor).

5. Crossover Design/Repeated-Measures Design/Counterbalancing Design – Subjects in this design are exposed to more than one treatment, and they are randomly assigned to different orders of the treatments. The groups compared have an equal distribution of characteristics, and there is a high level of similarity among the subjects exposed to the different conditions. Crossover designs are excellent research tools; however, there is some concern that the response to the second treatment or condition will be influenced by the subjects' experience with the first treatment. In this type of design, the subjects serve as their own controls.
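
A minimal Python sketch of the post-test-only design mentioned above, using simulated scores: participants are randomly assigned to two groups and the post-test scores are compared with an independent-samples t-test. SciPy is assumed to be available, and all numbers are made up.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    # Hypothetical pool of 60 participants, randomly split into two groups of 30
    participants = np.arange(60)
    rng.shuffle(participants)
    experimental_ids, control_ids = participants[:30], participants[30:]

    # Simulated post-test scores (in practice these would come from the assigned participants)
    experimental_scores = rng.normal(75, 8, size=30)   # treated group
    control_scores = rng.normal(70, 8, size=30)        # untreated group

    # Compare the two groups on the post-test only
    result = stats.ttest_ind(experimental_scores, control_scores)
    print(f"t = {result.statistic:.2f}, p = {result.pvalue:.4f}")
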

3. Quasi-Experimental Research Design


The word “quasi” indicates resemblance. A quasi-experimental research design is similar to a true experimental design but is not exactly the same; the difference between the two lies in the assignment to a control group. In this design, an independent variable is manipulated, but the participants are not randomly assigned to conditions. Because the independent variable is manipulated before the dependent variable is measured, the directionality problem is eliminated. Quasi-experimental research is used in field settings where random assignment is either irrelevant or not feasible.
Types of quasi-experimental research design are:
1. Interrupted Time Series Design: This design uses several waves of observation before and after the introduction of the independent (treatment) variable X. It is diagrammed as follows:

O1 O2 O3 O4 X O5 O6 O7 O8

2. Interrupted Time Series Design with Comparison Group: The addition of a


second time series for a comparison group helps to provide a check on some of
the threats to validity of the Single Interrupted Time Series Design discussed
above, especially history. This design uses several waves of observation in both
groups (treatment and comparison groups) before and after the introduction of the
independent variable X in the treatment group. It is diagrammed as follows:

State A: O1 O2 O3 O4 X O5 O6 O7 O8
State B: O1 O2 O3 O4 - O5 O6 O7 O8
3. Pretest-Posttest Nonequivalent Group Design: With this design, a control group and an experimental group are compared; however, the groups are chosen and assigned out of convenience rather than through randomization. This might be the method of choice for a study on work experience, as it would be difficult to choose students in a college setting at random and place them in specific groups and classes. We might ask students to participate in a one-semester work experience program, measure all of the students’ grades before the start of the program and again after it, and treat those who participated as the treatment group and those who did not as the control group (a short numerical sketch of this comparison follows).
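
A minimal Python sketch, with made-up grade data, of how the pretest-posttest nonequivalent-group comparison above can be summarized: the average change in each group is computed, and the difference between the two changes is taken as the estimated program effect, under the assumption that both groups would otherwise have changed alike.

    import numpy as np

    # Hypothetical grades (0-100) for the work-experience study described above
    treatment_pre = np.array([72.0, 68.5, 75.0, 70.2, 66.8])   # participants, before the program
    treatment_post = np.array([78.5, 74.0, 80.1, 75.5, 72.0])  # participants, after the program
    control_pre = np.array([71.5, 69.0, 74.2, 70.0, 67.5])     # non-participants, before
    control_post = np.array([73.0, 70.5, 75.0, 71.2, 68.0])    # non-participants, after

    # Average change within each (non-randomized) group
    treatment_change = np.mean(treatment_post - treatment_pre)
    control_change = np.mean(control_post - control_pre)

    print(f"Change, participants:     {treatment_change:.2f}")
    print(f"Change, non-participants: {control_change:.2f}")
    print(f"Estimated program effect: {treatment_change - control_change:.2f}")
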
INTERNAL VALIDITY
Internal validity concerns the strength and control of a research design and its ability to determine causal relationships between independent and dependent variables. For example, to determine whether a certain type of treatment is effective at treating depression, the study would have to be controlled so that other factors are not responsible for the observed depression levels.

Internal validity threats of experimental research design


1. History--the specific events that occur between the first and second measurement.
Example: In a short experiment designed to investigate the effect of computer-based
instruction, subjects missed some instruction because of a power failure at the school.
2. Maturation--processes within the subjects that act as a function of the passage of time; for instance, if the project lasts a few years, most participants may improve their performance regardless of treatment. Example: The performance of first graders in a learning experiment begins decreasing after 45 minutes because of fatigue.
3. Pre-testing--the effects of taking a test on the outcome of taking a second test. Example: In an experiment in which performance on a logical reasoning test is the dependent variable, a pre-test cues the subjects about the post-test.
4. Instrumentation--the changes in the instrument, observers, or scorers which may produce
changes in outcomes. Example: Two examiners for an instructional experiment
administered the post-test with different instructions and procedures.
5. Statistical regression--also known as regression to the mean. This threat is caused by the selection of subjects on the basis of extreme scores or characteristics ("give me the forty worst students and I guarantee that they will show immediate improvement right after my treatment"). Example: In an experiment involving reading instruction, subjects grouped because of poor pre-test reading scores show considerably greater gain than do the groups who scored average and high on the pre-test. A short simulation of this phenomenon appears after this list.
6. Selection of subjects--the biases that may result from the selection of comparison groups. Randomization (random assignment) of group membership is a counter-attack against this threat; however, when the sample size is small, randomization may lead to Simpson's paradox, which has been discussed in an earlier lesson. Example: The experimental group in an instructional experiment consisted of a high-ability class, while the comparison group was an average-ability class.
7. Experimental mortality--the loss of subjects. For example, a Web-based instruction project entitled Eruditio started with 161 subjects, and only 95 of them completed the entire module; those who stayed in the project all the way to the end may have been more motivated to learn and thus achieved higher performance. Example: In a health experiment designed to determine the effect of various exercises, the subjects who find the exercise most difficult stop participating.
8. Selection-maturation interaction--the interaction of comparison-group selection with maturation, which may lead to confounded outcomes and the erroneous interpretation that the treatment caused the effect.
9. Design contamination--subjects in different groups learn about or are influenced by one another's treatment. Example: In an expectancy experiment, students in the experimental and comparison groups “compare notes” about what they were told to expect.
10. Compensatory rivalry (John Henry effect): When subjects in some treatments receive
goods or services believed to be desirable and this becomes known to subjects in other
groups, social competition may motivate the latter to attempt to reverse or reduce the
anticipated effects of the desirable treatment levels. Saretsky (1972) named this the “John
Henry” effect in honor of the steel driver who, upon learning that his output was being
compared with that of a steam drill, worked so hard that he outperformed the drill and
died of overexertion.
11. Resentful demoralization. If subjects learn that their group receives less desirable goods
or services, they may experience feelings of resentment and demoralization. Their
response may be to perform at an abnormally low level, thereby increasing the magnitude
of the difference between their performance and that of groups that receive the desirable
goods or services.
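
A minimal Python simulation, with made-up test scores, of the statistical regression threat described above: subjects selected for extremely low pre-test scores improve on the post-test even though no treatment is applied, purely because of measurement noise.

    import numpy as np

    rng = np.random.default_rng(7)

    # Hypothetical stable "true ability" plus noisy test scores; no treatment is ever applied
    true_ability = rng.normal(100, 10, size=1000)
    pretest = true_ability + rng.normal(0, 10, size=1000)
    posttest = true_ability + rng.normal(0, 10, size=1000)

    # Select the 40 worst pre-test scorers, as in the quip above
    worst = np.argsort(pretest)[:40]

    print(f"Worst group, mean pre-test:  {pretest[worst].mean():.1f}")
    print(f"Worst group, mean post-test: {posttest[worst].mean():.1f}")
    # The post-test mean moves back toward 100 with no intervention at all:
    # apparent "improvement" caused entirely by regression to the mean.
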

EXTERNAL VALIDITY
It refers to the degree to which the results of an empirical investigation can be
generalized to and across individuals, settings, and times.
Types of external validity
External validity can be divided into
1. Population validity: How representative is the sample of the population? The
more representative, the more confident we can be in generalizing from the
sample to the population. How widely does the finding apply? Generalizing
across populations occurs when a particular research finding works across many
different kinds of people, even those not represented in the sample.
2. Ecological validity: Ecological validity is present to the degree that a result
generalizes across settings.
Threats to External Validity of Experimental Research Design
1. Interaction effect of testing: Pre-testing interacts with the experimental treatment and causes some effect such that the results will not generalize to an untested population. Example: In a physical performance experiment, the pre-test cues the subjects to respond in a certain way to the experimental treatment that would not be the case if there were no pre-test.
2. Interaction effects of selection biases and experimental treatment: An effect of some
selection factor of intact groups interacting with the experimental treatment that would
not be the case if the groups were randomly selected. Example: The results of an
experiment in which teaching method is the experimental treatment, used with a class of
low achievers, do not generalize to heterogeneous ability students.

3. Reactive effects of experimental arrangements: An effect that is due simply to the fact
that subjects know that they are participating in an experiment and experiencing the
novelty of it — the Hawthorne effect. Example: An experiment in remedial reading
instruction has an effect that does not occur when the remedial reading program, which is
the experimental treatment, is implemented in the regular program.
4. Multiple-treatment interference: When the same subjects receive two or more treatments, as in a repeated-measures design, there may be a carryover effect between treatments such that the results cannot be generalized to single treatments. Example: In a drug experiment, the same animals are administered four different drug doses in some sequence; the effects of the second through fourth doses cannot be separated from the possible delayed effects of the preceding doses.
Advantages of Experimental Research
1. Researchers have stronger control over variables, which helps them obtain the desired results.
2. Experimental research designs are repeatable, so results can be checked and verified.
3. Results are extremely specific.
4. Once the results are analyzed, they can be applied to various other similar situations.
5. The cause and effect stated by a hypothesis can be tested, allowing researchers to analyze the relationship in greater detail.
6. Experimental research can be used in combination with other research methods.

Disadvantages of experimental research design


1. Experimental research can create artificial situations that do not always represent real-life situations. This is largely due to the fact that all other variables are tightly controlled, which may not create a fully realistic situation.
2. Because the situations are very controlled and do not often represent real life, the
reactions of the test subjects may not be true indicators of their behaviors in a non-
experimental environment.
3. Human error also plays a key role in the validity of the project as discussed in previous
modules.
4. It may not be really possible to control all extraneous variables. The health, mood, and
life experiences of the test subjects may influence their reactions and those variables may
not even be known to the researcher.
5. Experimental research designs help to ensure internal validity but sometimes at the
expense of external validity. When this happens, the results may not be generalizable to
the larger population.
