Expe Psych Notes
Response Measures of the Dependent Variable include:

a. Where one is unable to identify the extraneous variable

Two kinds of control of the independent variable

Experimental designs with control groups are powerful. The addition of a control group combined with random assignment greatly increases our understanding of what is occurring in our experiment. As noted, if a relationship is found, these designs allow us to rule out many competing interpretations. Therefore, with these designs, we can separate the effects of our independent variable from such factors as individual histories, maturation, participant selection, testing effects, participant expectancies, regression effects, and various other possibilities.

Representation of the use of a control group as a technique

Counterbalancing focuses on controlling for order effects by varying the order of conditions.

❖ Practice effect – performance is influenced by familiarity with the task (positive: participants perform better in a later condition; negative: performance declines due to fatigue).

❖ Adaptation effect – reactions change as participants become accustomed to the treatment, as in sound experiments, where repeated exposure produces sensory adaptation or habituation.
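Varying the order of conditions is done systematically through counterbalancing. A minimal Python sketch of complete counterbalancing, where every possible order is assigned to some group (the condition names are illustrative assumptions, not from the notes):

```python
from itertools import permutations

# Complete counterbalancing: every possible order of the conditions is
# used for some group, so practice and fatigue effects are spread evenly
# across conditions. Condition names below are hypothetical illustrations.
conditions = ["silence", "classical", "pop"]

orders = list(permutations(conditions))
for i, order in enumerate(orders, start=1):
    print(f"Group {i}: {' -> '.join(order)}")
```

With k conditions there are k! possible orders, so complete counterbalancing becomes impractical beyond a few conditions; partial schemes such as Latin squares are then used instead.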
✓ For organismic variables, individual differences are generally equalized (level of motivation, amount of food eaten on the experimental day, romantic relationships, money matters).

Exercising extraneous-variable control

Social Variable: Demand Characteristics

➢ Cues may nudge participants to consciously or unconsciously change their responses. Participants might guess at the aim of the study, and their interpretations could influence their behaviors or responses.

❖ Good subject – the participant tries to be helpful and to confirm the researcher's hypothesis.
❖ Negative subject – the participant attempts to act in ways that refute the researcher's hypothesis; also called the "screw you" effect.
❖ Apprehensive subject – the participant attempts to produce the most socially desirable answers, out of fear of being judged.
How to control:

Other potential sources of error

➢ A control group serves as the "normal" group in experimental research, whereas a comparison group is used to identify normal groups in nonexperimental research.
➢ In a comparison group, the participants are already formed together and are selected because of common characteristics.
➢ An evidence report obtained through experimentation is more reliable than one obtained through the use of nonexperimental methods.
➢ In nonexperimental methods it is also usually more difficult to define the variables studied than in the case where they are actually produced, as in an experiment.

There is less error variance in experimental research.

➢ Lack of control makes it harder to reduce error variance.

1. Exploratory Experiment
➢ "I wonder what would happen if I did this?"
2. Confirmatory Experiment – conducted to test an explicit hypothesis and allow the researcher to predict how an experiment can turn out.
➢ "I'll bet this would happen if I did this."
3. Crucial Experiment – an experiment that purports to test all possible hypotheses simultaneously. Example: the Stroop test, developed by John Ridley Stroop in 1935, helped researchers understand additional brain mechanisms and was expanded to aid brain-damage and psychopathology research.
4. Pilot study (pilot experiment) – a preliminary experiment, one conducted prior to the major experiment.
➢ Usually run with only a small number of subjects, to suggest what specific values should be assigned to the variables being studied,
➢ to try out certain procedures to see how well they work,
➢ and, more generally, to find out what mistakes might be made in conducting the actual experiment so that the experimenter can be ready for them.
➢ It may reveal whether the experiment even needs to be conducted, unless it is designed to confirm previous findings.
➢ It suggests extraneous variables and hints on how to control them.
➢ The experimenter may be led to modify the original problem in such a way that the experiment becomes more valuable.
5. Field studies – efforts are made to discover relationships in the real social structure of everyday life.
➢ May take the form of a quasi-experimental design.
➢ Since field studies occur in everyday-life settings, control is markedly reduced relative to laboratory research.
➢ Error variance (noise) is usually considered increased.
➢ Useful for testing new medications to establish cause and effect, yet complexities arise in explaining behavior or attitudes.
➢ People are aware they are being observed in a lab, which can alter their behavior (Hawthorne effect).

Planning an Experiment

The antecedent conditions of the hypothesis must be satisfied. If the antecedent conditions of the hypothesis are not satisfied, the evidence report will be irrelevant to the hypothesis and further progress in the inquiry is prohibited.

➢ What apparatus will best allow manipulation and observation of the phenomenon of interest?
➢ What extraneous variables may contaminate the phenomenon of primary interest and are therefore in need of control?
➢ Which events should be observed and which should be ignored?
➢ How can the behavioral data best be observed, recorded, and quantified?

Outline for an Experimental Plan

Label the experiment. The title should be clearly specified, as well as the time and location of the experiment.

Survey the literature.
➢ It helps in formulating the problem.

➢ State the problem, preferably as a question.
➢ State your hypothesis.
➢ Define your variables. They must be operationally defined; if not, the hypothesis is untestable.
➢ Specify your apparatus.
➢ State the extraneous variables that need to be controlled and the ways in which you will control them.
➢ Select the design most appropriate to the problem.
➢ Indicate a manner of selecting your participants, the way in which they will be assigned to groups, and the number to be in each group.
➢ List the steps of your experimental procedure, including ethical principles to be followed.
➢ Specify the type of statistical analysis to be used.
➢ State the possible evidence reports. Will the results tell you something about your hypothesis no matter how they come out?
➢ Determine to what extent you will be able to generalize your findings.
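The assignment step above ("the way in which they will be assigned to groups") is most simply handled by random assignment. A minimal Python sketch, with hypothetical participant IDs and group labels:

```python
import random

def random_assign(participants, groups, seed=None):
    """Shuffle the participant list, then deal it round-robin into groups."""
    rng = random.Random(seed)
    pool = list(participants)
    rng.shuffle(pool)                      # every ordering equally likely
    return {g: pool[i::len(groups)] for i, g in enumerate(groups)}

# Hypothetical use: 20 participant IDs split into treatment and control.
assignment = random_assign(range(1, 21), ["treatment", "control"], seed=42)
for group, members in assignment.items():
    print(group, sorted(members))
```

Fixing the seed makes the assignment reproducible for the experimental record; omitting it gives a fresh random assignment each run.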
Understanding Experimental Design: Variables and Validity

1. Defined and explained independent and dependent variables using examples from psychology experiments.
2. Understood and applied the concept of operational definitions, emphasizing their importance in ensuring clarity and replicability in research.
3. Applied the different levels of measurement (nominal, ordinal, interval, ratio) and their applications in psychological research.
4. Explained reliability and validity and how these concepts apply to experimental design.
5. Familiarized the concept of internal validity and its importance in establishing causality.
6. Familiarized and addressed the eight classic threats to internal validity as identified by Campbell and colleagues.
7. Identified, explained, and applied techniques for controlling extraneous variables in experimental design to ensure internal validity and reliability of research findings.

Important considerations in choosing a research design in psychological research
✓ Nature of the Research Question
✓ Feasibility of Manipulating Variables
✓ Ethical Considerations
✓ Control Over Extraneous Variables
✓ Internal vs. External Validity
✓ Availability of Resources
✓ Measurement Precision
✓ Longitudinal vs. Cross-Sectional Design
✓ Generalizability of Findings
✓ Potential for Replication

Importance of experimental design
➢ Control over variables – researchers can isolate the effect of the independent variable on the dependent variable by controlling extraneous variables.
➢ Establishing causality – by manipulating the independent variable and observing the changes in the dependent variable, researchers can infer causality.
➢ Researchers have significant control over the variables, which allows them to isolate the independent variable's impact on the dependent variable, reducing the influence of confounding variables.
➢ Experimental designs generally have high internal validity, which means that the observed effects can be confidently attributed to the manipulated variables.
➢ Replicability is essential for verifying results and building a body of evidence.

The Independent and Dependent Variable

An experiment's independent variable (IV) is the dimension that the experimenter intentionally manipulates or changes to observe its effects on another variable.

❖ It is the antecedent the experimenter chooses to vary.
❖ IVs are sometimes aspects of the physical environment that can be brought under the experimenter's direct control.
❖ It is considered the "cause" in a cause-and-effect relationship within the study.
❖ The independent variable can have different levels or conditions.

The variable is "independent" in the sense that its values are created by the experimenter and are not affected by anything else that happens in the experiment.

In a true experiment, we test the effects of a manipulated IV, not the effects of different kinds of subjects. Researchers conducting a true experiment have to be certain that treatment groups do not consist of people who differ on preexisting characteristics.

Example: If we gave introverts their exam on blue paper and extroverts their exam on yellow paper, we would not know whether differences in their exam grades were caused by the color of the paper or by the personality differences.

The dependent variable is the particular behavior we expect to observe because of the experimental treatment. It is the outcome we are trying to explain. It is also called the dependent measure.

❖ The dependent variable is the outcome that the researcher observes and measures.
❖ It is considered the "effect" in a cause-and-effect relationship.
❖ This means the researcher expects that changes in the independent variable will cause changes in the dependent variable.
❖ Dependent variables are also referred to as measures, effects, outcomes, or results.

➢ Imagine you are conducting an experiment to test the effect of different types of music on concentration levels while studying.

Operational Definitions

The independent variable and the dependent variable each have two definitions: a conceptual definition and an operational definition.

➢ Clarity and consistency – conceptual definitions clarify the study's focus, while operational definitions transform variables into consistent measurements.
➢ Replicability – operational definitions help researchers replicate studies, validating findings through similar measurement methods.
➢ Interpretation of results – understanding what was measured (operational definition) and its broader implications (conceptual definition) is important for interpreting research findings.
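For the music-and-concentration example above, the two definitions of each variable might be recorded like this; the specific levels and measures below are illustrative assumptions, not from the notes:

```python
# Conceptual vs. operational definitions for one IV and one DV.
# All names, levels, and measures here are hypothetical illustrations.
independent_variable = {
    "conceptual": "type of background music",
    "operational": "audio track played during a 10-minute study session",
    "levels": ["silence", "classical", "lyrical pop"],
}
dependent_variable = {
    "conceptual": "concentration while studying",
    "operational": "number of comprehension questions answered correctly "
                   "out of 20 after the session",
    "scale": "ratio",  # a true zero (no correct answers) is possible
}
print(independent_variable["levels"], dependent_variable["scale"])
```

Writing both definitions down before data collection keeps the measurement consistent and makes the study replicable by others.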
Defining Scales of Measurement

Levels

➢ Nominal data – often used in diagnostic categorization, demographic data collection, and grouping variables in experimental designs.
➢ Ordinal data – common in survey research, particularly with Likert scales and rankings, as well as in assessments of performance or preference.
➢ Interval data – used in standardized testing (e.g., IQ tests, psychological scales) where equal intervals are assumed but a true zero is not present.
➢ Ratio data – utilized in physiological measurements (e.g., reaction times, duration of behaviors) where the presence of a true zero allows for meaningful ratio comparisons.

Evaluating Operational Definitions

Reliability is the consistency and dependability of experimental procedures and measurements. Good operational definitions are reliable: if we apply them in more than one experiment, they ought to work in similar ways each time.

1. Interrater Reliability – the degree of agreement among different observers or raters when they are assessing the same phenomenon.

Example:
Researchers are studying children's aggressive behaviors in a playground setting. Multiple observers are tasked with recording the frequency of aggressive acts by the children.

Operational construct: Aggression

➢ To assess interrater reliability, researchers would compare the observations recorded by different observers.

2. Test-Retest Reliability – refers to the stability and consistency of a measure over time: consistency between an individual's scores on the same test taken at two or more different times.

Consider a psychological study that measures levels of anxiety using a standardized questionnaire.

❖ To assess test-retest reliability, the anxiety questionnaire is administered to the same group of participants twice, several weeks apart, and reliability is determined by correlating the scores from the two test administrations.

3. Interitem Reliability – including both internal consistency and split-half reliability, interitem reliability ensures that the items on a test or scale are consistent with each other and accurately measure the same underlying construct.

a. Internal Consistency – a type of interitem reliability that examines how well the items on a test or scale correlate with each other. The most common measure of internal consistency is Cronbach's alpha.

Example:
Suppose a researcher is developing a new scale to measure self-esteem, consisting of 10 items (questions).

❖ In assessing internal consistency, the researcher would calculate Cronbach's alpha for the 10 items on the self-esteem scale. A high alpha value (typically above 0.70) indicates that the items are highly correlated and consistently measure the same construct.

b. Split-Half Reliability – the test is split into two halves, and the consistency of the scores between the two halves is assessed. It is a way to estimate internal consistency by considering the correlation between equivalent halves of a test.

Example:
Operational construct: Self-esteem (measured by the sum of responses to 10 scale items).

➢ Split-half reliability is assessed by splitting the 10-item scale into two halves (e.g., odd-numbered items vs. even-numbered items) and calculating the correlation between the scores on the two halves.
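The reliability indices described above (test-retest correlation, Cronbach's alpha, split-half correlation) are all correlation-based and can be computed with Python's standard library alone. A minimal sketch using made-up Likert responses for a hypothetical 4-item scale:

```python
from statistics import mean, stdev, variance

def pearson_r(x, y):
    """Pearson correlation (used for test-retest and split-half estimates)."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / ((len(x) - 1) * stdev(x) * stdev(y))

def cronbach_alpha(items):
    """items: one list of scores per item, respondents in the same order."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]   # per-respondent totals
    return (k / (k - 1)) * (1 - sum(variance(i) for i in items) / variance(totals))

# Made-up responses: 4 items x 5 respondents, 1-5 Likert scores.
items = [
    [4, 5, 2, 3, 4],
    [4, 4, 2, 3, 5],
    [5, 5, 1, 3, 4],
    [3, 4, 2, 2, 4],
]
alpha = cronbach_alpha(items)

# Split-half: correlate totals of odd-numbered items with even-numbered ones.
odd_half = [sum(s) for s in zip(*items[0::2])]
even_half = [sum(s) for s in zip(*items[1::2])]
split_half = pearson_r(odd_half, even_half)

print(f"Cronbach's alpha = {alpha:.2f}, split-half r = {split_half:.2f}")
```

The split-half correlation is often stepped up with the Spearman-Brown formula, 2r/(1 + r), to estimate the reliability of the full-length test.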
Validity is the extent to which an operational definition accurately measures or represents the intended construct: the soundness of an operational definition; in experiments, the principle of actually studying the variables intended to be manipulated or measured.

1. Face Validity – the degree to which a manipulation or measurement technique is self-evident. It is the most basic form of validity, assessed by examining whether the measure seems appropriate at face value.

Suppose researchers are developing a new questionnaire to measure neuroticism.

➢ If the questionnaire items intuitively seem to measure the construct, then the questionnaire has good face validity.

2. Content Validity – the degree to which the content of a measure reflects the content of what is being measured.

Consider a test designed to measure BEPP.

Operational construct: Psychometrician board exam.

➢ To ensure content validity, the questionnaire should include the four major board subjects. Experts in psychology education review the test to confirm that it adequately represents the entire domain expected for professional practice (skills and knowledge).

3. Predictive Validity – the degree to which a measuring instrument yields information allowing prediction of actual behavior or performance.

Operational construct: Academic potential (measured by scores on the college admissions test).

➢ If students' scores on the MCAT correlate strongly with their future grades in GE courses, the test demonstrates high predictive validity, indicating that it is effective in predicting future academic performance.

4. Concurrent Validity – the degree to which scores on the measuring instrument correlate with another known standard for measuring the variable being studied.

Imagine researchers are developing a new scale and want to compare it with an established inventory scale.

Operational construct sample: Depression (measured by the new depression scale and the BDI).

➢ Concurrent validity assessment: a high correlation between the scores from the new scale and the old, well-validated test indicates good concurrent validity, suggesting that the new scale measures what it intends to measure, consistent with the established measure.

5. Construct Validity – the degree to which an operational definition accurately represents the construct it is intended to manipulate or measure.

6. Convergent Validity – this aspect of construct validity examines whether the measure correlates well with other measures that it theoretically should be related to.

7. Discriminant Validity – this aspect assesses whether the measure does not correlate with measures of different, unrelated constructs.

Researchers are developing a new scale to measure social anxiety.

Operational construct: Social anxiety (measured by the new scale).

➢ Researchers would show that scores on the new social anxiety scale correlate strongly with scores on existing, well-established measures of social anxiety.
➢ Researchers would also ensure that the new scale does not correlate strongly with unrelated constructs (general intelligence or physical health).

Evaluating the Experiment: Internal Validity

Internal validity is one of the most important concepts in experimentation.

➢ It is the certainty that the changes in behavior observed across treatment conditions in the experiment were actually caused by the independent variable.

❑ Extraneous variables are any variables other than the independent variable that might affect the dependent variable: a variable other than an independent or dependent variable; a variable that is not the focus of an experiment but can produce effects on the dependent variable if not controlled.

➢ They can include differences among subjects, equipment failures, and inconsistent instructions (anything that varies within the experiment).
❑ Confounding variables are a specific type of extraneous variable that systematically varies with the independent variable and can create an alternative explanation for the observed relationship between the independent and dependent variables. A confounding variable is intertwined with the independent variable, making it difficult to separate its effects from those of the independent variable.

➢ Suppose a researcher was interested in the effects of age on communicator persuasiveness. She hypothesized that older communicators would be more persuasive than younger communicators, even if both presented the same arguments. She set up an experiment with two experimental groups. Subjects listened to either an 18-year-old man or a 35-year-old man presenting the same 3-minute argument in favor of gun control. After listening to one of the communicators, subjects rated how persuaded they were by the argument they had just heard. As the researcher predicted, subjects who heard the older man speak were more persuaded.

Classic Threats to Internal Validity

1. History – a threat to internal validity in which an outside event or occurrence might have produced effects on the dependent variable.
2. Maturation – a threat to internal validity produced by internal (physical or psychological) changes in subjects.
3. Testing effects – a threat to internal validity produced by a previous administration of the same test or other measure.
4. Instrumentation – a threat to internal validity produced by changes in the measuring instrument itself.
5. Selection – a threat to internal validity that can occur when nonrandom procedures are used to assign subjects to conditions or when random assignment fails to balance out differences among subjects across the different conditions of the experiment.
6. Statistical Regression – a threat to internal validity that can occur when subjects are assigned to conditions on the basis of extreme scores on a test; upon retest, the scores of extreme scorers tend to regress toward the mean even without any treatment.
7. Subject Mortality (Attrition) – a threat to internal validity produced by differences in dropout rates across the conditions of the experiment.
8. Selection Interactions – a family of threats to internal validity produced when a selection threat combines with one or more of the other threats to internal validity; when a selection threat is already present, other threats can affect some experimental groups but not others.
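The statistical-regression threat (item 6 above) is easy to demonstrate by simulation, under the simple assumption that each observed score is a stable true score plus random measurement error; everything below is a hypothetical illustration:

```python
import random

rng = random.Random(0)

# Each observed score = stable true score (100) + random measurement error.
def observed(true_score):
    return true_score + rng.gauss(0, 15)

people = [100.0] * 10_000          # identical true scores, no treatment given
test1 = [observed(t) for t in people]
test2 = [observed(t) for t in people]

# Select the extreme scorers on test 1, as a poorly designed study might.
extreme = [i for i, s in enumerate(test1) if s >= 120]
mean_t1 = sum(test1[i] for i in extreme) / len(extreme)
mean_t2 = sum(test2[i] for i in extreme) / len(extreme)

print(f"test 1 mean of extreme scorers: {mean_t1:.1f}")
print(f"retest mean of the same people: {mean_t2:.1f}")
```

Even though nothing changed between the two testings, the group selected for extreme first-test scores falls back toward the mean on retest: exactly the pattern a treatment effect (or deterioration) would produce if regression were overlooked.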