Effect of Post-Encoding Emotion On Long-Term Memory: Modulation of Emotion Category and Memory Strength
Bo Wang
To cite this article: Bo Wang (2021) Effect of post-encoding emotion on long-term memory:
Modulation of emotion category and memory strength, The Journal of General Psychology,
148:2, 192-218, DOI: 10.1080/00221309.2020.1769543
Ample evidence has shown that emotion induced after encoding can
enhance memory (e.g., Dudai, 2004; Schönauer, 2018). For instance, in a
study by Nielson, Yee, and Erickson (2005), participants encoded a list of
words and then watched either a neutral video (i.e., tooth brushing) or a
negative video (i.e., dental surgery). The researchers found that the video-
induced emotion enhanced delayed performance of free recall and
recognition memory. Such an enhancement effect has been replicated by
subsequent studies (e.g., Judde & Rickard, 2010; Liu, Graham, & Zorawski,
2008; Nielson & Lorber, 2009; Nielson & Powless, 2007; Wang & Fu, 2010;
Wang & Sun, 2015, 2017). However, two questions remain to be answered:
(1) Does the effect differ depending on a specific category of emotion (e.g.,
sadness, fear, anger)? (2) Does memory strength modulate the effect?
Method
Participants
A sample size estimation was conducted using G*Power Version 3.1.9.2.
Using moderate parameters (power = 0.95, effect size f = 0.25),
the analysis estimated a sample size of 90. Because of the need to
THE JOURNAL OF GENERAL PSYCHOLOGY 197
Table 1. Brief Descriptions for the Six Videos Used in the Current Study.

Neutral: A professor showed how to repair a CD-ROM drive. Validated in Wang and Ren (2017).
Anger: A man rushed into a cake shop and brutally beat a pregnant woman. Validated in Wang and Ren (2017).
Sadness: A woman narrated her tragic experience in the Wenchuan Earthquake. Validated in Wang and Fu (2010).
Fear: Part of a film that describes a curse born when someone dies in the grip of a powerful rage or extreme sorrow. Validated in the pilot study described here.
Disgust: A surgeon demonstrated a dental surgery. Validated in Nielson et al. (2005).
Happiness: A comic short play by three famous comedians in mainland China. Validated in Wang and Fu (2010).
Stimuli
Six 3-min videos were used, one for each emotion condition: neutral,
anger, sadness, fear, disgust, and happiness. Table 1 presents brief
descriptions of these videos, which had been validated in prior studies
(e.g., Wang & Fu, 2010; Wang & Sun, 2015), except for the one used to
induce fear. A pilot study, however, showed that 8 out of the 13
participants reported fear as the dominant emotion they experienced
during the video presentation. Furthermore, participants showed a
significant increase in arousal ratings (before watching: M = 4.85,
SE = 0.37; after watching: M = 6.62, SE = 0.46), F(1, 42) = 345.99,
p < .001, ηp² = 0.45.
Most of the words were translated versions of the English words used in
the study by Nielson et al. (2005). The original English words were chosen
from a normative database (Paivio, Yuille, & Madigan, 1968) in which all
the words (e.g., apple, child, cottage) were highly imageable and concrete,
thus rendering all the words (targets and distractors) equally "memorable".
To ensure that all the words consisted of two Chinese characters (e.g.,
天空), six words whose translations would have consisted of three Chinese
characters were excluded. The final word list consisted of 30 target words
for the study phase and 104 distractor words for the memory test phase
(see Appendix I for all the words and their parameters). The target and
distractor words did not significantly differ in pleasantness, F(1, 132)
= 0.18, p = .67, arousal, F(1, 132) = 0.46, p = .50, concreteness,
F(1, 132) < 0.001, p = .98, and
Figure 1. The Self-Assessment Manikin used to assess participants’ mood (A) and arousal (B)
before and after video presentation.
the questionnaires, all items in a questionnaire were presented for the
same period of time for all participants. After each item disappeared from
the screen, participants were instructed to immediately press a key to make
a response. All participants finished the questionnaire tasks.
At the stage of the delayed test, which began 30 minutes after the end of
encoding, participants first rated their current mood and arousal on the
scales used after video presentation, and were then presented with a list
of the 30 old words and 104 new words. In each trial, a fixation cross
appeared for 500 ms, followed by a word presented on the screen for
1500 ms (see Note 1). Right after a word disappeared, participants pressed
a key to indicate whether the word was old or new ("F" for old words; "J"
for new words). After identifying a word as old, participants made an "R"
(i.e., "remember") or "K" (i.e., "know") response (McCabe & Geraci, 2009)
by pressing "G" or "H", respectively. An "R" response indicates conscious
retrieval of details related to the encoding of a word, whereas a "K"
response indicates only familiarity with the word and no retrieval of any
details related to the encoding episode (Yonelinas, 2002). A practice block
of six trials (two old words and four new words) ensured that all
participants understood the instructions regarding "R" and "K" responses.
Participants were encouraged to be as accurate as possible. The detailed
instructions are presented in Appendix II. Words were presented in random
order for each participant.
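The two-step response coding described above can be sketched as follows. This is an illustrative reconstruction, not the author's analysis code; the function name and trial representation are my own assumptions, while the key mapping ("F"/"J", then "G"/"H") follows the procedure in the text.

```python
def code_trial(is_old, old_new_key, rk_key=None):
    """Classify one recognition-test trial from the two keypresses.

    "F" = judged old, "J" = judged new; after an "old" judgment,
    "G" = "R" (remember) and "H" = "K" (know).
    """
    if old_new_key == "J":  # word judged new: no R/K judgment follows
        return "miss" if is_old else "correct rejection"
    # word judged old: an R/K judgment must follow
    response = "R" if rk_key == "G" else "K"
    if is_old:
        return response + " hit"           # counts toward Rhit or Khit
    return response + " false alarm"       # counts toward Rfa or Kfa
```

For example, an old word judged old with a subsequent "G" press is coded as an "R hit".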
Statistical analyses
According to the two-high-threshold model (Snodgrass & Corwin, 1988)
and prior studies (e.g., Wang, 2013), overall recognition was calculated by
subtracting false alarm rates from hit rates. In order to allow comparison
with studies that used d' as the dependent variable, we also conducted an
analysis on d'. In accordance with Sharot and Yonelinas (2008), recollection
was computed by subtracting the proportion of new items receiving "R"
(i.e., "remember") responses from the proportion of old items receiving "R"
responses. Familiarity was computed by the formula:

K = Khit/(1 - Rhit) - Kfa/(1 - Rfa)

where Khit, Rhit, Kfa, and Rfa respectively represent "K" hit rates, "R" hit
rates, "K" false alarm rates, and "R" false alarm rates. Post hoc tests were
based on Bonferroni adjustment.
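A minimal sketch of the four dependent measures defined above. The function names are mine; d' is computed with the standard z-transform via Python's statistics.NormalDist, and since the paper does not specify a correction for extreme rates, hit or false alarm rates of exactly 0 or 1 would need an adjustment (e.g., Snodgrass & Corwin's) before the transform.

```python
from statistics import NormalDist

z = NormalDist().inv_cdf  # inverse of the standard normal CDF

def overall_recognition(hit, fa):
    """Two-high-threshold measure: hit rate minus false alarm rate."""
    return hit - fa

def d_prime(hit, fa):
    """Signal-detection sensitivity: z(hits) minus z(false alarms)."""
    return z(hit) - z(fa)

def recollection(r_hit, r_fa):
    """Proportion of old items given "R" minus new items given "R"."""
    return r_hit - r_fa

def familiarity(k_hit, r_hit, k_fa, r_fa):
    """K = Khit/(1 - Rhit) - Kfa/(1 - Rfa): "K" rates conditioned on
    trials not already labeled "R" (independence correction)."""
    return k_hit / (1 - r_hit) - k_fa / (1 - r_fa)
```

With the neutral once-presented rates from Tables 4 and 5 (hit = .64, FA = .05; Rhit = .62, Khit = .02, Rfa = .03, Kfa = .02), overall recognition is .59, recollection is .59, and familiarity is about .032.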
Results
Participants’ characteristics
Table 2 presents participants' characteristics as revealed from the
questionnaires. Participants across the six emotion conditions did not
significantly differ in arousal predisposition, emotion reappraisal,
emotion suppression, state anxiety, trait anxiety, or depression scores.
Figure 2. Mood ratings at time 1 (before video presentation) and time 2
(after video presentation) in the six emotion categories (i.e., the groups
who were presented with the six types of videos). All groups rated
happiness on a scale where 1 = most unpleasant and 9 = most pleasant.
Error bars represent standard errors.
Table 3. Pairwise Comparisons with Confidence Intervals for Mood and Arousal Ratings Immediately after Video Presentation.

Mood:
  Neutrality-Anger: CI = [1.54, 3.27], p < .001
  Neutrality-Sadness: CI = [0.13, 1.83], p = .024
  Neutrality-Fear: CI = [1.08, 2.78], p < .001
  Neutrality-Disgust: CI = [0.22, 1.92], p = .014
  Neutrality-Happiness: CI = [-3.28, -1.58], p < .001
Arousal:
  Neutrality-Anger: CI = [-3.69, -1.46], p < .001
  Neutrality-Sadness: CI = [-2.71, -0.50], p = .005
  Neutrality-Fear: CI = [-3.65, -1.44], p < .001
  Neutrality-Disgust: CI = [-2.21, 0.009], p = .052
  Neutrality-Happiness: CI = [-3.48, -1.27], p < .001

Note. The confidence intervals (CIs) are for the differences in mood/arousal ratings between the neutrality condition and each other emotion condition. Mood/arousal ratings before video presentation were included as covariates.
Figure 3. Arousal ratings at time 1 (before video presentation) and time 2
(after video presentation) in the six emotion categories (i.e., the groups
who were presented with the six types of videos). All groups rated arousal
on a scale where 1 = least aroused and 9 = most aroused. Error bars
represent standard errors.
There was no significant change in arousal ratings over the period of video
presentation in the neutral condition (p = .057). In the anger, sadness,
fear, and happiness conditions, there were significant increases in arousal
ratings (p = .001, p = .041, p < .001, and p = .001, respectively). In the
disgust condition, however, arousal ratings did not significantly change
over the period of video presentation (p = .77).
Furthermore, prior to video presentation, participants across the six
emotion conditions had comparable arousal ratings, F(5, 90) = 1.00,
p = .42, ηp² = 0.05. Immediately after video presentation, however, there
was a significant main effect of Emotion Category, F(5, 90) = 6.80,
p < .001, ηp² = 0.27. Results of the planned pairwise comparisons are
presented in the second row of Table 3.
Based on participants' retrospective reports, we calculated the percentage
of participants reporting the intended emotion as their dominant emotion
during video presentation. For participants who watched the videos intended
to induce neutrality, anger, sadness, fear, disgust, and happiness, 75.00%,
93.75%, 75.00%, 93.75%, 87.50%, and 81.25%, respectively, reported the
intended emotion as the dominant emotion they experienced during video
presentation. Only one male participant (1/16, 6.25%) reported "surprise"
in the "sadness" condition. Therefore, the videos elicited the specific
emotions as expected.
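These percentages are consistent with 16 participants per condition (cf. the 1/16 reported above). A quick arithmetic check, with the per-condition counts inferred from the percentages rather than stated directly in the paper:

```python
# Counts of participants reporting the intended emotion as dominant,
# out of 16 per condition (inferred: e.g., 93.75% of 16 is 15).
counts = {"neutrality": 12, "anger": 15, "sadness": 12,
          "fear": 15, "disgust": 14, "happiness": 13}

percentages = {emotion: 100 * n / 16 for emotion, n in counts.items()}
```

This reproduces the reported values, e.g., 15/16 gives 93.75% and 13/16 gives 81.25%.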
Table 4. Overall Hit Rates and False Alarm Rates for Words Presented Once and Four Times in the Six Emotion Categories.

Presented once, M (SE):
  Overall hit rates: Neutrality 0.64 (0.05); Anger 0.60 (0.05); Sadness 0.69 (0.05); Fear 0.68 (0.05); Disgust 0.57 (0.05); Happiness 0.66 (0.05)
  Overall false alarm rates: Neutrality 0.05 (0.03); Anger 0.15 (0.03); Sadness 0.07 (0.03); Fear 0.05 (0.03); Disgust 0.03 (0.03); Happiness 0.07 (0.03)
Presented four times, M (SE):
  Overall hit rates: Neutrality 0.65 (0.05); Anger 0.67 (0.05); Sadness 0.77 (0.05); Fear 0.67 (0.05); Disgust 0.63 (0.05); Happiness 0.78 (0.05)
  Overall false alarm rates: Neutrality 0.05 (0.03); Anger 0.15 (0.03); Sadness 0.07 (0.03); Fear 0.05 (0.03); Disgust 0.03 (0.03); Happiness 0.07 (0.03)

Note. These hit rates and false alarm rates are not adjusted and are the same as simple proportions of hits and false alarms.
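As a consistency check (my own arithmetic, not from the paper), the adjusted overall-recognition means reported in the text can be recovered from the unadjusted rates in Table 4, up to rounding of the tabled means:

```python
# Rates from Table 4, in condition order:
# neutrality, anger, sadness, fear, disgust, happiness.
hits_once = [0.64, 0.60, 0.69, 0.68, 0.57, 0.66]
hits_four = [0.65, 0.67, 0.77, 0.67, 0.63, 0.78]
false_alarms = [0.05, 0.15, 0.07, 0.05, 0.03, 0.07]

# Adjusted recognition (hits minus false alarms) for each presentation
# condition, averaged to collapse over Presentation Times.
recognition = [
    ((h1 - fa) + (h4 - fa)) / 2
    for h1, h4, fa in zip(hits_once, hits_four, false_alarms)
]
```

Within rounding, this matches the condition means in the text, e.g., about 0.48 for anger, 0.59 for neutrality, and 0.65 for happiness.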
Figure 4. Overall recognition (hit rates minus false alarm rates) as a
function of presentation times in the six emotion conditions. Overall
recognition was adjusted in that it was calculated by subtracting false
alarm rates from hit rates. Error bars represent standard errors.
A 6 (Emotion Category) × 2 (Presentation Times: once vs. four times)
ANOVA on overall recognition showed a significant main effect of
Presentation Times, F(1, 90) = 7.14, p = .009, ηp² = 0.07, indicating
better overall recognition for words presented four times (M = 0.62,
SE = 0.02) than for words presented once (M = 0.57, SE = 0.02). There was
also a significant main effect of Emotion Category, F(5, 90) = 2.76,
p = .023, ηp² = 0.13. As shown in Figure 4, the interaction between
Emotion Category and Presentation Times was not significant, F(5, 90) < 1.
With regard to the significant main effect of Emotion Category, planned
contrasts showed that overall recognition in the anger condition (M = 0.48,
SE = 0.04) was significantly lower than in the neutral condition (M = 0.59,
SE = 0.04), p = .049. There were no significant differences between the
other emotion conditions and the neutral condition (all ps > .26). Post hoc
tests showed that overall recognition in the anger condition was
significantly lower than in the happiness condition (M = 0.65, SE = 0.04),
p = .037, and the sadness condition (M = 0.65, SE = 0.04), p = .039. All
other comparisons were not significant (all ps > .20). The pattern of
results was similar when d' was used as the dependent variable.
Table 5. Overall Hit Rates and False Alarm Rates Concerning "R" and "K" Responses for Words Presented Once and Four Times in the Six Emotion Categories.

Presented once, M (SE):
  "R" hit rates: Neutrality 0.62 (0.05); Anger 0.47 (0.05); Sadness 0.60 (0.05); Fear 0.64 (0.05); Disgust 0.55 (0.05); Happiness 0.58 (0.05)
  "K" hit rates: Neutrality 0.02 (0.03); Anger 0.13 (0.03); Sadness 0.08 (0.03); Fear 0.04 (0.03); Disgust 0.02 (0.03); Happiness 0.09 (0.03)
  "R" false alarm rates: Neutrality 0.03 (0.02); Anger 0.10 (0.02); Sadness 0.04 (0.02); Fear 0.03 (0.02); Disgust 0.02 (0.02); Happiness 0.02 (0.02)
  "K" false alarm rates: Neutrality 0.02 (0.01); Anger 0.05 (0.01); Sadness 0.04 (0.01); Fear 0.02 (0.01); Disgust 0.01 (0.01); Happiness 0.05 (0.01)
Presented four times, M (SE):
  "R" hit rates: Neutrality 0.62 (0.05); Anger 0.62 (0.05); Sadness 0.73 (0.05); Fear 0.64 (0.05); Disgust 0.62 (0.05); Happiness 0.73 (0.05)
  "K" hit rates: Neutrality 0.03 (0.02); Anger 0.05 (0.02); Sadness 0.03 (0.02); Fear 0.03 (0.02); Disgust 0.02 (0.02); Happiness 0.05 (0.02)
  "R" false alarm rates: Neutrality 0.03 (0.02); Anger 0.10 (0.02); Sadness 0.04 (0.02); Fear 0.03 (0.02); Disgust 0.02 (0.02); Happiness 0.02 (0.02)
  "K" false alarm rates: Neutrality 0.02 (0.01); Anger 0.05 (0.01); Sadness 0.04 (0.01); Fear 0.02 (0.01); Disgust 0.01 (0.01); Happiness 0.05 (0.01)

Note. These hit rates and false alarm rates are not adjusted and are the same as simple proportions of hits and false alarms.
Effect on recollection
Table 5 shows overall hit rates and false alarm rates for words presented
once and four times in the six emotion categories. The ANOVA on
recollection (i.e., adjusted hit rates for "R" responses) showed a
significant main effect of Presentation Times, F(1, 90) = 11.47, p = .001,
ηp² = 0.11, indicating that words presented four times (M = 0.62,
SE = 0.02) were recollected more often than words presented once (M = 0.54,
SE = 0.02). There was also a significant main effect of Emotion Category,
F(5, 90) = 2.57, p = .032, ηp² = 0.13. As shown in Figure 5, the
interaction between Emotion Category and Presentation Times was not
significant, F(5, 90) = 1.41, p = .23, ηp² = 0.07. Planned contrasts showed
that recollection in the anger condition (M = 0.45, SE = 0.04) was
significantly lower than in the neutral condition (M = 0.59, SE = 0.04),
p = .025. Post hoc tests showed that participants who watched the
anger-inducing video recollected fewer words than those who watched the
sadness-inducing video (M = 0.63, SE = 0.04), p = .047. There was a trend
for participants who watched the anger-inducing video to recollect fewer
words than those who watched the happiness-inducing video (M = 0.63,
SE = 0.04), p = .053. All other comparisons were not significant (all
ps > .14).
Effect on familiarity
The ANOVA on familiarity (i.e., adjusted hit rates for "K" responses) also
showed a significant main effect of Presentation Times, F(1, 90) = 9.04,
p = .003, ηp² = 0.09, characterized by lower familiarity for words
presented four times (M = 0.004, SE = 0.006) than for words presented once
(M = 0.03, SE = 0.009). The main effect of Emotion Category was not
significant, F(5, 90) = 0.62, p = .69, ηp² = 0.03. There was a significant
interaction between Presentation Times and Emotion Category, as shown in
Figure 5. Recollection as a function of presentation times in the six
emotion conditions. Recollection was adjusted in that it was derived by
subtracting the proportion of new items receiving "R" (i.e., "remember")
responses from the proportion of old items receiving "R" responses. Error
bars represent standard errors.
Figure 6. Familiarity as a function of presentation times in the six
emotion conditions. Familiarity was adjusted in that it was computed from
"K" (i.e., "know") responses: K = Khit/(1 - Rhit) - Kfa/(1 - Rfa), where
Khit, Rhit, Kfa, and Rfa respectively represent "K" hit rates, "R" hit
rates, "K" false alarm rates, and "R" false alarm rates. Error bars
represent standard errors.
Figure 6, F(5, 90) = 3.15, p = .012, ηp² = 0.15. For words presented four
times, planned contrasts showed no significant differences between the
emotion conditions and the neutral condition (all ps > .25). For words
presented once, planned contrasts showed that familiarity in the anger
condition (M = 0.07, SE = 0.02) was significantly higher than in the
neutral condition (M = 0.002, SE = 0.02), p = .012. No significant
differences were found between the other emotion conditions and the neutral
condition (all ps > .10). Whether for words presented four times or once,
post hoc tests showed no significant differences among the emotion
conditions (all ps > .26).
Post hoc tests were also conducted between the "once" and "four times"
conditions within each emotion condition. The results showed that
familiarity for words presented four times was lower than for words
presented once in both the anger and sadness conditions (p < .001 and
p = .019, respectively). In the other emotion conditions, no significant
differences were observed (ps > .11). Therefore, the source of the
interaction lies in the differences between one and four presentations in
some (but not all) emotion categories.
Discussion
Building on past studies that examined the enhancing effect of
post-encoding emotion on long-term memory, the current study sought to
answer two questions: (1) Does the enhancing effect vary depending on the
specific category of emotion? (2) Is the enhancing effect contingent on
memory strength? Based on the differential mechanisms underlying discrete
emotions (e.g., Sinha et al., 1992) and the distinct effects of discrete
emotions on memory (e.g., Levine & Burgess, 1997), the effect of
post-encoding emotion was expected to vary depending on the specific
category of emotion. In addition, it was hypothesized that memory strength
would modulate the effect, such that weak rather than strong memories would
be enhanced.
The current findings show that participants who watched the anger-inducing
video had lower overall recognition than those who watched the
sadness-inducing or happiness-inducing video. Furthermore, this pattern was
generally replicated when recollection was used as the dependent variable,
although it should be noted that the difference between the anger and
happiness conditions was marginal. Regardless of whether overall
recognition or recollection was used, no significant differences were
observed between the neutral condition and the other emotion conditions
(i.e., sadness, fear, disgust, and happiness). Furthermore, no significant
differences were found for the remaining pairwise comparisons (i.e., anger
vs. fear, anger vs. disgust, sadness vs. fear, sadness vs. disgust, sadness
vs. happiness, fear vs. disgust, fear vs. happiness, and disgust vs.
happiness). Taken together, the current results provide partial support for
the hypothesis that the effect of post-encoding emotion varies depending on
the specific category of emotion. It must be noted, however, that this
holds when comparing one emotion condition with another, but not when
comparing an emotion condition with the neutral condition.
With regard to the second question, the current study shows that memory
strength does not modulate the effect of post-encoding emotion on overall
recognition or recollection. With familiarity as the dependent variable,
however, there was an interaction between memory strength and emotion
category (i.e., familiarity was lower for words presented four times than
for words presented once in the "anger" and "sadness" categories). Taken
together, the current findings provide preliminary support for the
hypothesis
Note
1. Test words remained on screen for only 1500 ms, rather than until a
response was given, because this ensured that all participants spent
exactly the same amount of time viewing each word. This practice is in
accordance with Wang and Ren (2017).
Funding
This study was supported by a Grant from the National Natural Science Foundation of
China [No. 31100736].
References
Beck, A. T., Ward, C. H., Mendelson, M., Mock, J., & Erbaugh, J. (1961). An inventory for
measuring depression. Archives of General Psychiatry, 4, 561–571.
Calder, A. J., Lawrence, A. D., & Young, A.W. (2001). Neuropsychology of fear and loath-
ing. Nature Reviews Neuroscience, 2(5), 352–363. doi:10.1038/35072584
Collet, C., Vernet-Maury, E., Delhomme, G., & Dittmar, A. (1997). Autonomic system
response patterns specificity to basic emotions. Journal of the Autonomic Nervous System,
62(1–2), 45–57. doi:10.1016/S0165-1838(96)00108-7
Coren, S. (1988). Prediction of insomnia from arousability predisposition scores: Scale
development and cross-validation. Behaviour Research and Therapy, 26(5), 415–420. doi:
10.1016/0005-7967(88)90076-9
Dudai, Y. (2004). The neurobiology of consolidations, or, how stable is the engram?
Annual Review of Psychology, 55, 51–86. doi:10.1146/annurev.psych.55.090902.142050
Dunsmoor, J. E., Murty, V. P., Davachi, L., & Phelps, E. A. (2015). Emotional learning
selectively and retroactively strengthens memories for related events. Nature, 520(7547),
345–348. doi:10.1038/nature14106
Ekman, P., & Cordaro, D. (2011). What is meant by calling emotions basic. Emotion
Review, 3(4), 364–370. doi:10.1177/1754073911410740
Gable, P., & Harmon-Jones, E. (2010). The blues broaden, but the nasty narrows.
Psychological Science, 21(2), 211–215. doi:10.1177/0956797609359622
Gross, J. J., & John, O. P. (2003). Individual differences in two emotional regulations proc-
esses: Implications for affect, relationships and well-being. Journal of Personality and
Social Psychology, 85(2), 348–362. doi:10.1037/0022-3514.85.2.348
Jack, R., Garrod, O., & Schyns, P. (2014). Dynamic facial expressions of emotion transmit
an evolving hierarchy of signals over time. Current Biology: CB, 24(2), 187–192. doi:10.
1016/j.cub.2013.11.064
Judde, S., & Rickard, N. (2010). The effect of postlearning presentation of music on long-
term word-list retention. Neurobiology of Learning and Memory, 94(1), 13–20. doi:10.
1016/j.nlm.2010.03.002
Kaplan, R. L., Damme, I. V., & Levine, L. J. (2012). Motivation matters: Differing effects of
pre-goal and post-goal emotions on attention and memory. Frontiers in Psychology, 3,
404. doi:10.3389/fpsyg.2012.00404
Kazén, M., Künne, T., Frankenberg, H., & Quirin, M. (2012). Inverse relation
between cortisol and anger and their relation to performance and explicit memory.
Biological Psychology, 91(1), 28–35. doi:10.1016/j.biopsycho.2012.05.006
Lang, P. J., Bradley, M. M., & Cuthbert, B. N. (2008). International affective picture system
(IAPS): Affective ratings of pictures and instruction manual. Technical Report A-7.
University of Florida, Gainesville, FL.
Levine, L., & Burgess, S. L. (1997). Beyond general arousal: Effects of specific emotions on
memory. Social Cognition, 15(3), 157–181. doi:10.1521/soco.1997.15.3.157
Liu, D. L. J., Graham, S., & Zorawski, M. (2008). Enhanced selective memory consolidation
following post-learning pleasant and aversive arousal. Neurobiology of Learning and
Memory, 89(1), 36–46. doi:10.1016/j.nlm.2007.09.001
Madan, C. R., Scott, S., & Kensinger, E. (2018). Positive emotion enhances association-
memory. Emotion, 19(4), 733–740.
Mather, M., & Sutherland, M. (2009). Disentangling the effects of arousal and valence on
memory for intrinsic details. Emotion review: Journal of the International Society for
Research on Emotion, 1(2), 118–119. doi:10.1177/1754073908100435
McCabe, D. P., & Geraci, L. D. (2009). The influence of instructions and terminology on
the accuracy of remember–know judgments. Consciousness and Cognition, 18(2),
401–413. doi:10.1016/j.concog.2009.02.010
McCullough, A. M., & Yonelinas, A. P. (2013). Cold-pressor stress after learning enhances
familiarity-based recognition memory in men. Neurobiology of Learning and Memory,
106, 11–17. doi:10.1016/j.nlm.2013.06.011
McGaugh, J. L. (2004). The amygdala modulates the consolidation of memories of emo-
tionally arousing experiences. Annual Review of Neuroscience, 27, 1–28. doi:10.1146/
annurev.neuro.27.070203.144157
McGaugh, J. L. (2018). Emotional arousal regulation of memory consolidation. Current
Opinion in Behavioral Sciences, 19, 55–60. doi:10.1016/j.cobeha.2017.10.003
Nielson, K. A., & Arentsen, T. J. (2012). Memory modulation in the classroom:
Selective enhancement of college examination performance by arousal induced after
lecture. Neurobiology of Learning and Memory, 98(1), 12–16. doi:10.1016/j.nlm.2012.04.
002
Nielson, K. A., & Lorber, W. (2009). Enhanced post-learning memory consolidation
is influenced by arousal predisposition and emotion regulation but not by stimulus
valence or arousal. Neurobiology of Learning and Memory, 92(1), 70–79. doi:10.1016/j.
nlm.2009.03.002
Nielson, K. A., & Powless, M. (2007). Positive and negative sources of emotional arousal
enhance long-term word-list retention when induced as long as thirty minutes after
learning. Neurobiology of Learning and Memory, 88(1), 40–47. doi:10.1016/j.nlm.2007.03.
005
Nielson, K. A., Yee, D., & Erickson, K. I. (2005). Memory enhancement by a semantically
unrelated emotional arousal source induced after learning. Neurobiology of Learning and
Memory, 84(1), 49–56. doi:10.1016/j.nlm.2005.04.001
Paivio, A., Yuille, J. C., & Madigan, S. A. (1968). Concreteness, imagery, [Database] and
meaningfulness values for 925 nouns. Journal of Experimental Psychology, 76(1, Pt.2),
1–25. doi:10.1037/h0025327
Rainville, P., Bechara, A., Naqvi, N., & Damasio, A. R. (2006). Basic emotions are
associated with distinct patterns of cardiorespiratory activity. International Journal of
Psychophysiology, 61(1), 5–18. doi:10.1016/j.ijpsycho.2005.10.024
Rajaram, S. (1993). Remembering and knowing: Two means of access to the personal past.
Memory & Cognition, 21(1), 89–102. doi:10.3758/bf03211168
Russell, J. A. (1980). A circumplex model of affect. Journal of Personality and Social
Psychology, 39(6), 1161–1178. doi:10.1037/h0077714
Saarimäki, H., Gotsopoulos, A., Jääskeläinen, I. P., Lampinen, J., Vuilleumier, P.,
Hari, R., Sams, M., & Nummenmaa, L. (2016). Discrete neural signatures of basic emotions.
Cerebral Cortex, 26(6), 2563–2573. doi:10.1093/cercor/bhv086
Schoch, S. F., Cordi, M. J., & Rasch, B. (2017). Modulating influences of memory strength
and sensitivity of the retrieval test on the detectability of the sleep consolidation effect.
Neurobiology of Learning and Memory, 145, 181–189. doi:10.1016/j.nlm.2017.10.009
Schönauer, M. (2018). Sleep spindles: Timed for memory consolidation. Current Biology:
CB, 28(11), R656–R658. doi:10.1016/j.cub.2018.03.046
Sharot, T., & Yonelinas, A. P. (2008). Differential time-dependent effects of emotion on
recollective experience and memory for contextual information. Cognition, 106(1),
538–547. doi:10.1016/j.cognition.2007.03.002
Sinha, R., Lovallo, W. R., & Parson, O. A. (1992). Cardiovascular differentiation of emo-
tions. Psychosomatic Medicine, 54(4), 422–435. doi:10.1097/00006842-199207000-00005
Snodgrass, J. G., & Corwin, J. (1988). Pragmatics of measuring recognition memory:
Applications to dementia and amnesia. Journal of Experimental Psychology: General,
117(1), 34–50. doi:10.1037/0096-3445.117.1.34
Spielberger, C. D., Gorsuch, R. L., Lushene, P. R., Vagg, P. R., & Jacobs, A. G. (1983).
Manual for the State-Trait Anxiety Inventory (Form Y). Palo Alto: Consulting
Psychologists Press, Inc.
Wais, P. E., Mickes, L., & Wixted, J. T. (2008). Remember/know judgments probe degrees
of recollection. Journal of Cognitive Neuroscience, 20(3), 400–405. doi:10.1162/jocn.2008.
20041
Wang, B. (2013). Facial expression influences recognition memory for faces: Robust
enhancement effect of fearful expression. Memory (Hove, England), 21(3), 301–314. doi:
10.1080/09658211.2012.725740
Wang, B. (2015). Negative emotion elicited in high school students enhances consolidation
of item memory, but not source memory. Consciousness and Cognition, 33, 185–195. doi:
10.1016/j.concog.2014.12.015
Wang, B., & Fu, X. (2010). Gender differences in the effects of post-learning emotion on
consolidation of item memory and source memory. Neurobiology of Learning and
Memory, 93(4), 572–580. doi:10.1016/j.nlm.2010.02.005
Wang, B., & Ren, Y. (2017). Effect of post-encoding emotion on recollection and familiarity
for pictures. Quarterly Journal of Experimental Psychology (2006), 70(7), 1236–1253. doi:
10.1080/17470218.2016.1178311
Wang, B., & Sun, B. (2015). Timing matters: Negative emotion elicited 5 min but not
30 min or 45 min after learning enhances consolidation of internal-monitoring source
memory. Acta Psychologica, 157, 56–64. doi:10.1016/j.actpsy.2015.02.006
Wang, B., & Sun, B. (2017). Post-encoding emotional arousal enhances consolidation of
item memory, but not reality-monitoring source memory. The Quarterly Journal of
Experimental Psychology, 70(3), 461–472. doi:10.1080/17470218.2015.1134604
Yonelinas, A. (1999). The contribution of recollection and familiarity to recognition and
source-memory judgments: A formal dual-process model and an analysis of receiver
operating characteristics. Journal of Experimental Psychology: Learning Memory and
Cognition, 25(6), 1415–1434. doi:10.1037/0278-7393.25.6.1415
Yonelinas, A. P. (2002). The nature of recollection and familiarity: A review of 30 years of
research. Journal of Memory and Language, 46(3), 441–517. doi:10.1006/jmla.2002.2864
Yonelinas, A. P., Parks, C. M., Koen, J. D., Jorgenson, J., & Mendoza, S. P. (2011). The
effects of post-encoding stress on recognition memory: Examining the impact of skydiv-
ing in young men and women. Stress (Amsterdam, Netherlands), 14(2), 136–144. doi:10.
3109/10253890.2010.520376
Appendix I
The Chinese words (with their English translations and parameters) used in the study.
English Chinese Type Pleasantness Arousal Familiarity Concreteness
Lip 嘴唇 Target 5.79 5.89 6.74 7.74
Plank 板条 Distractor 5.32 5.21 5.63 5.58
Gem 宝石 Distractor 6.58 6.37 5.89 6.16
Newspaper 报纸 Distractor 5.79 5.95 6.53 7.16
Leopard 豹子 Distractor 6.16 6 5.79 6.79
Refrigerator 冰箱 Distractor 6.32 5.95 7.05 6.79
Spinach 菠菜 Distractor 5.47 6.11 6.95 7.32
Meadow 草甸 Distractor 5.84 5.84 5.53 5.63
Lawn 草坪 Distractor 6.11 5.89 6.58 6.95
Fork 叉子 Distractor 5.42 5.74 6.63 6.53
Hammer 锤子 Distractor 5.74 5.47 6.79 7.11
Hall 大厅 Distractor 6.16 6.26 7 7.63
University 大学 Distractor 5.89 5.68 7.11 7
Geese 鹅肉 Distractor 5.26 4.89 5.16 5.05
Bowl 饭碗 Distractor 6.68 6.16 7.32 7.47
Tomb 坟墓 Distractor 4.53 4.47 5.37 5.74
Cane 甘蔗 Distractor 6.63 6.37 7.05 7
Piano 钢琴 Distractor 6.74 6.26 6.21 7
Dove 鸽子 Distractor 6.42 6.32 6.68 6.95
Factory 工厂 Distractor 5.37 5.21 5.79 6.53
Palace 宫殿 Distractor 5.53 6.11 5.42 6.84
Shore 海岸 Distractor 6.79 6.37 7 7.37
Beaver 海狸 Distractor 5.63 5.32 4.58 4.63
Ocean 海洋 Distractor 6.21 6 6.79 6.84
River 河流 Distractor 6 5.74 7 6.84
Flood 洪水 Distractor 4.95 4.95 5.37 6.32
Fox 狐狸 Distractor 5.32 5.63 6.05 6.16
Lake 湖泊 Distractor 6.26 6 6.21 6.95
Bouquet 花束 Distractor 7.11 7.11 7.37 7.26
Garden 花园 Distractor 6.26 6.58 6.95 6.89
Fire 火焰 Distractor 5.53 5.42 6.95 7.21
Jail 监狱 Distractor 4.42 4.63 5.05 5.95
Claw 脚爪 Distractor 5.42 5.32 5.79 5.79
Saloon 轿车 Distractor 6.05 5.89 6.05 6.79
Church 教堂 Distractor 6.37 6.05 6.16 6.53
Money 金钱 Distractor 6.95 6.53 7.42 7.37
Policeman 警察 Distractor 5.42 5.68 6.16 7.16
Bar 酒吧 Distractor 5.68 5.63 5.47 6
Hotel 酒店 Distractor 5.79 5.74 6.58 6.63
Alcohol 酒精 Distractor 5.16 5 5.79 6.11
Coffee 咖啡 Distractor 5.74 5.79 6.21 6.05
Truck 卡车 Distractor 5.74 5.32 6.63 6.95
Oven 烤箱 Distractor 6.16 6.21 6.68 6.89
Dawn 黎明 Distractor 6.74 6.89 7.32 7.11
Mule 骡子 Distractor 5.68 5.74 5.53 5.84
Green 绿色 Distractor 6.58 6.21 7.16 7.16
Vest 马甲 Distractor 5.84 5.84 5.47 6.58
Hoof 马蹄 Distractor 5.05 5.37 5.63 6.32
Caterpillar 毛虫 Distractor 4.47 5.26 6.16 6.53
Rose 玫瑰 Distractor 7 6.68 7.37 7.37
Dollar 美元 Distractor 6.68 6.84 6.21 6.42
Flour 面粉 Distractor 6.32 6 6.58 7.26
Star 明星 Distractor 6.16 5.89 6.26 7.16
Ink 墨水 Distractor 5.63 5.79 6.32 6.79
Barrel 木桶 Distractor 5.53 5.32 6.32 7.16
Priest 牧师 Distractor 5.95 5.89 5.68 5.79
Boy 男孩 Distractor 6.32 6.32 7.42 7.26
Nun 尼姑 Distractor 5 5.05 5.37 5.63
Lemonade 柠檬 Distractor 6.37 6.53 7.16 7.42
Woman 女人 Distractor 5.74 5.79 6.47 6.79
Flask 瓶子 Distractor 6.13 6.11 6.95 7.5
English Chinese Type Pleasantness Arousal Familiarity Concreteness
Cattle 肉牛 Distractor 6.21 5.89 6.16 6.74
Meat 肉片 Distractor 6.53 6.26 6.89 7.16
Forest 森林 Distractor 6.42 5.95 6.47 7.11
Mountain 山峰 Distractor 6.58 6.26 7.16 7.11
Valley 山谷 Distractor 5.84 5.95 6.16 6.47
Cottage 山寨 Distractor 5.89 5.47 6.16 6.11
Clock 时钟 Distractor 5.63 5.79 6.53 6.32
Arm 手臂 Distractor 5.58 5.53 7.11 7.63
Shotgun 手枪 Distractor 5.42 5.21 5.89 6.95
Book 书本 Distractor 6.53 6.21 7.79 7.84
Leaf 树叶 Distractor 6.63 6.26 7.16 7.53
Harp 竖琴 Distractor 6.11 5.79 5.68 6.32
Water 水域 Distractor 5.42 5.05 5.74 5.58
Tower 塔楼 Distractor 5.84 5.47 5.74 6.79
Tablespoon 汤匙 Distractor 6.42 6.05 7 7.21
Candy 糖果 Distractor 6.58 6.11 7 7.21
Peach 桃子 Distractor 6.58 6.53 7.26 7.53
Pupil 瞳孔 Distractor 5.84 5.42 6.26 6.89
Skull 头骨 Distractor 5.16 4.95 5.37 6
Toast 吐司 Distractor 7 6.84 7 7.21
Rattle 响板 Distractor 5.16 4.58 5.11 4.84
Trumpet 小号 Distractor 5.32 5.21 5.05 6.16
Wheat 小麦 Distractor 5.58 5.05 5.47 5.84
Bird 小鸟 Distractor 6.37 6.05 6.58 7.26
Hut 小屋 Distractor 6.11 6 6.26 6.37
Shoes 鞋子 Distractor 6.11 5.95 6.68 6.58
Letter 信件 Distractor 6.16 6.05 6.63 6.79
Blood 血液 Distractor 4.63 4.58 6.58 6.21
Chair 椅子 Distractor 6.37 6.16 7.37 7.47
Engine 引擎 Distractor 5.42 5.11 5.95 5.63
Infant 婴儿 Distractor 6.37 6.16 6.79 6.63
Camp 营地 Distractor 6.21 6.11 6.11 6.05
Coin 硬币 Distractor 6.68 6.74 7.37 7.74
Fisherman 渔夫 Distractor 5.79 5.37 5.84 6.16
Umbrella 雨伞 Distractor 5.79 5.79 6.37 6.95
Tomahawk 战斧 Distractor 5.05 5.11 5.37 5.68
Steamer 蒸笼 Distractor 6.26 6.53 6.89 6.95
Plant 植物 Distractor 5.95 5.79 6.68 6.42
Paper 纸张 Distractor 5.47 6.11 6.53 7.42
Elbow 肘部 Distractor 6.11 5.95 6.79 6.89
Bullet 子弹 Distractor 5.74 5.74 5.53 6.16
Soccer 足球 Distractor 5.68 5.58 5.95 6.74
Diamond 钻石 Distractor 6.95 6.68 6.21 6.32
Note. (1) A total of 19 participants (mean age = 20.21 years, SD = 0.98) were recruited to rate the words. Each word was presented at the screen center, and participants rated its pleasantness, arousal, familiarity, and concreteness on 9-point Likert scales, with 1 indicating the lowest degree and 9 the highest. For instance, a concreteness rating of 1 indicates that a word is extremely abstract (not concrete), whereas a rating of 9 indicates that a word is extremely concrete. (2) The following three words were not used in the study of Nielson et al. (2005): 足球 (soccer), 面粉 (flour), and 树叶 (leaf).
Appendix II
The instructions for the delayed memory test.
Now we are going to test your memory for the words previously presented. Please first focus your attention on the "+" at the screen center, and then pay close attention to the word at the screen center. If you think you saw the word during the previous study phase, please press the key that corresponds to the letter above "saw"; if you think you did not see the word during the previous study phase, please press the key that corresponds to the letter above "did not see".
After you press the key to indicate that you saw a word, you will be asked to make a further judgment. If you can consciously recollect an episode related to studying the word (e.g., something that happened during encoding, or a thought that occurred to you during the study phase), please press the key that corresponds to the letter above "remember". If you feel that the word is merely familiar, but you cannot retrieve any details from the study phase or state why the word feels familiar, please press the key that corresponds to the letter above "know".