
Experimental Methods

The document discusses experimental methods and designs. It explains that the experimental method allows researchers to systematically study cause-and-effect relationships by manipulating independent variables and measuring their impact on dependent variables while controlling for other influences. Key aspects covered include random assignment, operationalizing variables, counterbalancing, blinding procedures, threats to validity, and examples of basic and more advanced experimental designs like factorial designs. The overall goal of experimental research is to establish internal validity and determine causal relationships between variables.

EXPERIMENTAL METHODS
THE EXPERIMENTAL METHOD
✖ The experimental method is just a systematic way of acquiring knowledge,
but it has played a pivotal role in helping us to understand the physical
world.
✖ Correlation is no proof of causality.
✖ J.S. Mill (1882) proposed the 'Method of Difference' as a way of identifying cause-and-effect relationships.
✖ In this design, one or more independent variables are manipulated by the
researcher (as treatments),
✖ subjects are randomly assigned to different treatment levels (random
assignment), and the results of the treatments on outcomes (dependent
variables) are observed.
✖ The unique strength of experimental research is its internal validity
(causality): its ability to link cause and effect through treatment
manipulation, while controlling for the spurious effects of extraneous
variables.
✖ Experimental research is best suited for explanatory research (rather than
for descriptive or exploratory research), where the goal of the study is to
examine cause-effect relationships.
✖ Experimental research can be conducted in laboratory or field settings.
THE ADVANTAGES OF THE EXPERIMENTAL METHOD
✖ They enable us to determine causal relationships between variables.
✖ They enable us to produce events to order, rather than waiting for them to occur naturally, and allow us to control the circumstances in which they occur.
THE BASIC PRINCIPLES OF THE EXPERIMENTAL METHOD
✖ Manipulate one variable (IV).
✖ Measure the effects of these manipulations on another variable (DV).
✖ Keep all other variables as constant as possible.

✖ Variables that have unwanted influences on our experimental results are called confounding variables.
✖ An extraneous variable is any variable that you’re not investigating that
can potentially affect the dependent variable of your research study.

✖ A confounding variable is a type of extraneous variable that not only affects the dependent variable, but is also related to the independent variable.
✖ Example of a confounding variable
✖ You collect data on sunburns and ice cream consumption. You find that
higher ice cream consumption is associated with a higher probability of
sunburn. Does that mean ice cream consumption causes sunburn?
✖ Here, the confounding variable is temperature: hot temperatures cause
people to both eat more ice cream and spend more time outdoors under the
sun, resulting in more sunburns.
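The ice-cream/sunburn example can be made concrete with a short simulation (a hypothetical illustration, not part of the original slides; all numbers are invented): temperature drives both variables, producing a strong correlation with no causal link between them.

```python
import random

random.seed(0)

# Simulate 1000 days: hot temperatures (the confound) raise both ice cream
# consumption and sunburn counts, even though neither causes the other.
ice_cream, sunburns = [], []
for _ in range(1000):
    temperature = random.uniform(10, 35)               # confounding variable (deg C)
    ice_cream.append(temperature * 2 + random.gauss(0, 5))
    sunburns.append(temperature * 0.5 + random.gauss(0, 2))

def pearson(x, y):
    """Pearson correlation coefficient, computed from first principles."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

r = pearson(ice_cream, sunburns)
print(f"correlation = {r:.2f}")  # strongly positive, yet no causal link
```

Conditioning on temperature (e.g. comparing only days with similar temperatures) would make the spurious association largely disappear.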
ELIMINATING CONFOUNDING VARIABLES
✖ The effects of many confounding variables can be minimized by careful
attention to how the experiment is conducted.
✖ It is more difficult to deal with confounding variables that stem from the participants themselves.
✖ Random allocation to groups
✖ Individual differences between participants will still add ‘noise’ (variability)
to the data, but at least they will not produce systematic differences between
the conditions.
✖ Experiments in which randomization is not possible are called quasi-
experiments
✖ Experimental research can be grouped into two broad categories: true
experimental designs and quasi-experimental designs.
✖ Both designs require treatment manipulation, but while true experiments
also require random assignment, quasi-experiments do not.
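Random allocation can be sketched in a few lines (a minimal illustration; the function name and participant labels are invented for this example): shuffle the participant list, then deal participants into groups, so individual differences cannot differ systematically between conditions.

```python
import random

def random_assignment(participants, groups=("treatment", "control"), seed=None):
    """Randomly allocate participants to groups; individual differences
    still add noise, but no longer differ systematically between groups."""
    rng = random.Random(seed)
    shuffled = list(participants)
    rng.shuffle(shuffled)
    # Deal shuffled participants round-robin into the groups
    return {g: shuffled[i::len(groups)] for i, g in enumerate(groups)}

allocation = random_assignment([f"P{i}" for i in range(1, 21)], seed=42)
print({g: len(ps) for g, ps in allocation.items()})  # {'treatment': 10, 'control': 10}
```

In a quasi-experiment this step is impossible: group membership (e.g. smoker vs. non-smoker) is fixed before the study begins.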
REDUCING 'NOISE' IN THE DATA
✖ Randomization does not remove the effects of potential confounding variables altogether.
✖ Good experimental design aims to reduce the amount of variation introduced into the results by factors other than the independent variable, in order to maximize the chances of detecting effects of the independent variable.
OPERATIONAL DEFINITIONS OF INDEPENDENT AND DEPENDENT VARIABLES
✖ Examples?
BETWEEN-GROUPS VERSUS WITHIN-GROUPS DESIGNS
COUNTERBALANCING
✖ Counterbalancing controls for order effects by presenting the conditions in different orders to different groups of participants.
✖ For example, you might want to test whether people react positively or
negatively to a series of images.
✖ Counterbalancing is a technique used to deal with order effects when using
a repeated measures design
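Full counterbalancing can be generated mechanically (a minimal sketch; condition labels are invented): every possible presentation order is produced, and participant groups are then spread across the orders.

```python
from itertools import permutations

def counterbalanced_orders(conditions):
    """Full counterbalancing: return every possible presentation order
    of the conditions in a repeated measures design."""
    return list(permutations(conditions))

orders = counterbalanced_orders(["A", "B"])
print(orders)  # [('A', 'B'), ('B', 'A')] -- the classic AB/BA scheme
```

With more conditions the number of orders grows factorially (3 conditions already give 6 orders), which is why partial schemes such as Latin squares are often used instead.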
MATCHED PAIRS DESIGN
✖ This is a special version of the between-groups design, in which
participants are paired up on the basis of variables that might have an effect
on the dependent variable.
✖ Matched pairs design is an experimental design where pairs of participants
are matched in terms of key variables, such as age and IQ.
✖ One member of each pair is then placed into the experimental group and
the other member into the control group.
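The matched pairs procedure above can be sketched as code (an illustrative sketch, assuming matching on a single variable such as IQ; names and scores are invented): rank participants on the matching variable, pair adjacent participants, then randomly assign one member of each pair to each group.

```python
import random

def matched_pairs(participants, key, seed=None):
    """Sort participants on a matching variable, pair adjacent participants,
    then randomly split each pair between experimental and control groups."""
    rng = random.Random(seed)
    ranked = sorted(participants, key=key)
    experimental, control = [], []
    for i in range(0, len(ranked) - 1, 2):
        pair = [ranked[i], ranked[i + 1]]
        rng.shuffle(pair)            # random assignment within the pair
        experimental.append(pair[0])
        control.append(pair[1])
    return experimental, control

people = [{"name": f"P{i}", "iq": iq}
          for i, iq in enumerate([98, 121, 105, 110, 99, 120, 104, 111])]
exp_group, ctrl_group = matched_pairs(people, key=lambda p: p["iq"], seed=1)
print(len(exp_group), len(ctrl_group))  # 4 4
```

Because each pair is closely matched, the two groups start out nearly equivalent on the matching variable, while the within-pair shuffle preserves random assignment.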
RELIABILITY AND VALIDITY
✖ Results are reproducible – reliability.
✖ Results are measuring what we think they are measuring, and not something else – validity.
✖ External – Internal (validity)
✖ Can results be highly reliable but not valid?
FACTORS AFFECTING RELIABILITY
✖ Sample size
✖ The stability of the phenomenon being investigated
FACTORS AFFECTING VALIDITY
✖ Time threats
✖ History
✖ Maturation
✖ Repeated testing
✖ Instrumentation change
✖ Group threats
 Initial non-equivalence of groups
 Selection–maturation interaction
 Regression towards the mean
 Differential mortality
✖ Participant reactivity threats
 Experimenter effects
 Hawthorne effect
CONT.
✖ Evaluation apprehension
✖ Experimenter effects
✖ Control group’s awareness of its status
✖ Blind and double-blind procedures
✖ In a single-blind experiment, participants do not know which group they
have been placed in until after the experiment has finished.

✖ Example: Single-blind vaccine study
✖ You have developed a new flu vaccine. In order to test the effectiveness of your new treatment, you run an experiment, giving half of your participants the flu vaccine and the other half a fake vaccine that will have no effect (to control for the placebo effect).
✖ In double-blind experiments, the group assignment is hidden from both the
participant and the person administering the experiment.

✖ In the flu vaccine study that you are running, you have recruited several
experimenters to administer your vaccine and measure the outcomes of your
participants.
✖ If these experimenters knew which vaccines were real and which were fake,
they might accidentally reveal this information to the participants, thus
influencing their behavior and indirectly the results.
IMPORTANCE OF BLINDING
✖ Blinding helps ensure a study’s internal validity, or the extent to which you
can be confident any link you find in your study is a true cause-and-effect
relationship.
✖ Since non-blinded studies can result in participants modifying their
behavior or researchers finding effects that do not really exist, blinding is an
important tool to avoid research bias in all types of scientific research.

✖ Risk of unblinding
✖ Unblinding occurs when researchers have blinded participants or
experimenters, but they become aware of who received which treatment
before the experiment has ended.
✖ This may result in the same outcomes as would have occurred without any
blinding.
TYPES OF EXPERIMENTAL DESIGN
✖ ‘Post-test only/control group’ design
✖ ‘Pre-test/post-test control group’ design
✖ Solomon four-group design: The four groups in this design are (see figure
below):
1. A treatment group with both pre-intervention and post-intervention
measurements (a.k.a. pretest and posttest)
2. A control group with both pretest and posttest measurements
3. A treatment group with only a posttest measurement
4. A control group with only a posttest measurement
The objective is to assess the efficacy of the treatment (or intervention).
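The four-group layout can be written out as data (an illustrative sketch; the field names are invented for this example), which makes the purpose of each comparison explicit.

```python
# Solomon four-group design, written out as a small table.
solomon_design = [
    {"group": 1, "pretest": True,  "treatment": True,  "posttest": True},
    {"group": 2, "pretest": True,  "treatment": False, "posttest": True},
    {"group": 3, "pretest": False, "treatment": True,  "posttest": True},
    {"group": 4, "pretest": False, "treatment": False, "posttest": True},
]

# Comparing groups 1 vs 3 (and 2 vs 4) isolates any effect of the pretest
# itself, while 1 vs 2 (and 3 vs 4) estimates the treatment effect.
pretested = [g["group"] for g in solomon_design if g["pretest"]]
print(pretested)  # [1, 2]
```

Every group receives a posttest; only the pretest and the treatment vary, which is what lets the design separate testing effects from treatment effects.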
EXPERIMENTAL DESIGNS AND THREATS TO VALIDITY
MORE ADVANCED EXPERIMENTAL DESIGNS (DISCUSSION)
✖ LONGITUDINAL VERSUS CROSS-SECTIONAL DESIGNS
✖ MULTIFACTORIAL DESIGNS
✖ MULTIPLE DEPENDENT VARIABLES
✖ MIXED DESIGN
FACTORIAL RESEARCH DESIGN
✖ A factorial research design is an experiment that has a minimum of two
factors. These factors, also known as independent variables, are what the
researcher controls.
✖ A 2x2 factorial design example would be the following:
✖ A researcher wants to evaluate two groups, 10-year-old boys and 10-year-old girls, and how taking (or not taking) a summer enrichment course affects their math test scores.
✖ In this case, there are two factors: sex (boys vs. girls) and enrichment (taking vs. not taking the summer course), each with two levels. Thus, this would be written as 2x2, where the first factor has two levels and the second factor has two levels.
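The cells of the 2x2 design are simply the cross-product of the two factors' levels (a minimal sketch; the factor and level names follow the slide example but the data structure is invented):

```python
from itertools import product

# A 2x2 factorial design: every combination of the two factors' levels
# becomes one experimental condition (cell).
factors = {
    "sex": ["boys", "girls"],
    "enrichment": ["course", "no course"],
}

conditions = list(product(*factors.values()))
print(len(conditions))   # 4 cells in a 2x2 design
for c in conditions:
    print(c)
```

A 3x2 design would instead yield 6 cells; in general the number of conditions is the product of the numbers of levels, which is why factorial designs grow quickly as factors are added.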
