Contents
1. The Origins of MFT
2. The Current Theory
  2.1 Nativism: There is a “first draft” of the moral mind
  2.2 Cultural learning: The first draft gets edited during development within a particular culture
  2.3 Intuitionism: Intuitions come first, strategic reasoning second
  2.4 Pluralism: There were many recurrent social challenges, so there are many moral foundations
3. Empirical Findings
  3.1 Methods and measures
  3.2 Moral foundations and political ideology
  3.3 Moral foundations and other psychological constructs
  3.4 Cross-cultural differences and intergroup relations
  3.5 Implicit moral cognition
4. Future Directions
  4.1 Criticisms of the theory
  4.2 Getting specific: What does it take to be a foundation?
  4.3 Looking ahead: New directions for moral foundations research
5. Conclusion
Acknowledgments
References
Abstract
Where does morality come from? Why are moral judgments often so similar across cul-
tures, yet sometimes so variable? Is morality one thing, or many? Moral Foundations
Theory (MFT) was created to answer these questions. In this chapter, we describe
the origins, assumptions, and current conceptualization of the theory and detail the
empirical findings that MFT has made possible, both within social psychology and
beyond. Looking toward the future, we embrace several critiques of the theory and
specify five criteria for determining what should be considered a foundation of human
morality. Finally, we suggest a variety of future directions for MFT and moral psychology.
“The supreme goal of all theory is to make the irreducible basic elements as simple
and as few as possible without having to surrender the adequate representation of
a single datum of experience.” (Einstein, 1934, p. 165)
“I came to the conclusion that there is a plurality of ideals, as there is a plurality of
cultures and of temperaments. . .There is not an infinity of [values]: the number of
human values, of values which I can pursue while maintaining my human sem-
blance, my human character, is finite—let us say 74, or perhaps 122, or 27, but
finite, whatever it may be. And the difference this makes is that if a man pursues
one of these values, I, who do not, am able to understand why he pursues it or
what it would be like, in his circumstances, for me to be induced to pursue it. Hence
the possibility of human understanding.” (Berlin, 2001, p. 12)
Aristotle was an early moral pluralist, dismissed by Kohlberg (1971) for promot-
ing a “bag of virtues.” Gilligan (1982) was a pluralist when she argued that the
“ethic of care” was not derived from (or reducible to) the ethic of justice. Isaiah
Berlin said, in our opening quotation, that there are a finite but potentially large
number of moral ideals that are within the repertoire of human beings and that
an appreciation of the full repertoire opens the door to mutual understanding.
We are unabashed pluralists, and in this chapter, we will try to convince you
that you should be, too. In the first two parts of this chapter, we present a plu-
ralist theory of moral psychology—Moral Foundations Theory (MFT). In part
three, we will provide an overview of empirical results that we and others have
obtained using a variety of measures developed to test the theory. We will show
that the pluralism of MFT has led to discoveries that had long been missed by
monists. In part four, we will discuss criticisms of the theory and future research
directions that are motivated in part by those criticisms. We will also propose
specific criteria that researchers can use to decide what counts as a foundation.
Throughout the chapter, we will focus on MFT’s pragmatic validity (Graham
et al., 2011)—that is, its scientific usefulness for both answering existing ques-
tions about morality and allowing researchers to formulate new questions.
We grant right at the start that our particular list of moral foundations is
unlikely to survive the empirical challenges of the next several years with no
changes. But we think that our general approach is likely to stand the test of
time. We predict that 20 years from now moral psychologists will mostly
be pluralists who draw on both cultural and evolutionary psychology to
examine the psychological mechanisms that lead people and groups to hold
divergent moral values and beliefs.
We also emphasize, at the outset, that our project is descriptive, not nor-
mative. We are not trying to say who or what is morally right or good. We
are simply trying to analyze an important aspect of human social life. Cul-
tures vary morally, as do individuals within cultures. These differences often
lead to hostility, and sometimes violence. We think it would be helpful for
social psychologists, policy makers, and citizens more generally to have a
language in which they can describe and understand moralities that are
not their own. We think a pluralistic approach is necessary for this descrip-
tive project. We do not know how many moral foundations there really are.
There may be 74, or perhaps 122, or 27, or maybe only 5, but certainly more
than one. And moral psychologists who help people to recognize the inher-
ent pluralism of moral functioning will be at the forefront of efforts to pro-
mote the kind of “human understanding” that Berlin described.
So now we are up to three. Or maybe it’s four? Fiske (1991) proposed that
moral judgment relies upon the same four “relational models” that are used to
think about and enact social relationships: Communal Sharing, Authority
Ranking, Equality Matching, and Market Pricing (see also Rai & Fiske, 2011).
Having worked with both Fiske and Shweder, Haidt wanted to integrate
the two theories into a unified framework for studying morality across cul-
tures. But despite many points of contact, the three ethics and four relational
models could not be neatly merged or reconciled. They are solutions to dif-
ferent problems: categorizing explicit moral discourse (for Shweder) and
analyzing interpersonal relationships (for Fiske). After working with the
two theories throughout the 1990s—the decade in which evolutionary psy-
chology was reborn (Barkow, Cosmides, & Tooby, 1992)—Haidt sought to
construct a theory specifically designed to bridge evolutionary and anthro-
pological approaches to moral judgment. He worked with Craig Joseph,
who was studying cultural variation in virtue concepts (Joseph, 2002).
The first step was to broaden the inquiry beyond the theories of Fiske and
Shweder to bring in additional theories about how morality varies across cul-
tures. Schwartz and Bilsky’s (1990) theory of values offered the most prom-
inent approach in social psychology. Haidt and Joseph also sought out theorists
who took an evolutionary approach, trying to specify universals of human
moral nature. Brown (1991) offered a list of human universals including many
aspects of moral psychology, and de Waal (1996) offered a list of the “building
blocks” of human morality that can be seen in other primates.
Haidt and Joseph (2004) used the analogy of taste to guide their review of
these varied works. The human tongue has five discrete taste receptors (for
sweet, sour, salt, bitter, and umami). Cultures vary enormously in their cui-
sines, which are cultural constructions shaped by historical events, yet the
world’s many cuisines must ultimately please tongues equipped with just five
innate and universal taste receptors. What are the best candidates for being the
innate and universal “moral taste receptors” upon which the world’s many
cultures construct their moral cuisines? What are the concerns, perceptions,
and emotional reactions that consistently turn up in moral codes around
the world, and for which there are already-existing evolutionary explanations?
Haidt and Joseph identified five best candidates: Care/harm, Fairness/
cheating, Loyalty/betrayal, Authority/subversion, and Sanctity/degrada-
tion.1 We believe that there are more than five; for example, Haidt
1. Prior to 2012, we used slightly different terms: Harm/care, Fairness/reciprocity, Ingroup/loyalty, Authority/respect, and Purity/sanctity.
Just as there are taste receptors designed to yield pleasure when sweetness is tasted, there are cognitive mod-
ules that yield pleasure when fair exchanges occur, and displeasure when one
detects cheaters. In the moral domain, the problems to be solved are social
and the human mind evolved a variety of mechanisms that enable individuals
(and perhaps groups) to solve those problems within the “moral matrices”—
webs of shared meaning and evaluation—that began to form as humans
became increasingly cultural creatures during the past half-million years
(see Haidt, 2012, chapter 9, which draws on Richerson & Boyd, 2005;
Tomasello, Carpenter, Call, Behne, & Moll, 2005).
MFT proposes that the human mind is organized in advance of experi-
ence so that it is prepared to learn values, norms, and behaviors related to a
diverse set of recurrent adaptive social problems (specified below in
Table 2.1). We think of this innate organization as being implemented by
sets of related modules which work together to guide and constrain
responses to each particular problem. But you do not have to embrace mod-
ularity, or any particular view of the brain, to embrace MFT. You only need
to accept that there is a first draft of the moral mind, organized in advance of
experience by the adaptive pressures of our unique evolutionary history.
Sperber (2005) suggested that the modules present at or soon after birth are “learning modules.” That is,
they are innate templates or “learning instincts” whose function is to gen-
erate a host of more specific modules as the child develops. They generate
“the working modules of acquired cognitive competence” (p. 57). They are
a way of explaining phenomena such as preparedness (Seligman, 1971).
For example, children in traditional Hindu households are frequently
required to bow, often touching their heads to the floor or to the feet of
revered elders and guests. Bowing is used in religious contexts as well, to show
deference to the gods. By the time a Hindu girl reaches adulthood, she will
have developed culturally specific knowledge that makes her automatically
initiate bowing movements when she encounters, say, a respected politician
for the first time. Note that this knowledge is not just factual knowledge—it
includes feelings and motor schemas for bowing and otherwise showing def-
erence. Sperber (2005) refers to this new knowledge—in which a pattern of
appraisals is linked to a pattern of behavioral outputs—as an acquired module,
generated by the original “learning module.” But one could just as well drop
the modularity language at this point and simply assert that children acquire all
kinds of new knowledge, concepts, and behavioral patterns as they employ
their innately given moral foundations within a particular cultural context.
A girl raised in a secular American household will have no such experiences
in childhood and may reach adulthood with no specialized knowledge or abil-
ity to detect hierarchy and show respect for hierarchical authorities.
Both girls started off with the same sets of universal learning modules—
including the set we call the Authority/subversion foundation. But in the
Hindu community, culture and psyche worked together to generate a host
of more specific authority-respecting abilities (or modules, if you prefer).
In the secular American community, such new abilities were not generated,
and the American child is more likely to hold anti-authoritarian values as an
adult. An American adult may still have inchoate feelings of respect for some
elders and might even find it hard to address some elders by first name (see
Brown & Ford, 1964). But our claim is that the universal (and incomplete)
first draft of the moral mind gets filled in and revised so that the child can suc-
cessfully navigate the moral “matrix” he or she actually experiences.
This is why we chose the architectural metaphor of a “foundation.” Ima-
gine that thousands of years ago, extraterrestrial aliens built 100 identical
monumental sites scattered around the globe. But instead of building entire
buildings, they just built five solid stone platforms, in irregular shapes, and
left each site like that. If we were to photograph those 100 sites from the air
today, we would probably be able to recognize the similarity across the sites,
even though at each site people would have built diverse structures out of
local materials. The foundations are not the finished buildings, but the founda-
tions constrain the kinds of buildings that can be built most easily. Some
societies might build a tall temple on just one foundation, and let the other
foundations decay. Other societies might build a palace spanning multiple
foundations, perhaps even all five. You cannot infer the exact shape and
number of foundations by examining a single photograph, but if you collect
photos from a few dozen sites, you can.
Similarly, the moral foundations are not the finished moralities, although they
constrain the kinds of moral orders that can be built. Some societies build
their moral order primarily on top of one or two foundations. Others use
all five. You cannot see the foundations directly, and you cannot infer
the exact shape and number of foundations by examining a single culture’s
morality. But if you examine ethnographic, correlational, and experimental
data from a few dozen societies, you can. And if you look at the earliest
emergence of moral cognition in babies and toddlers, you can see some
of them as well (as we will show in Section 4.2). MFT is a theory about
the universal first draft of the moral mind and about how that draft gets
revised in variable ways across cultures.
Drawing on this work (including Nisbett & Wilson, 1977; Wegner &
Bargh, 1998), Haidt (2001) formulated the Social Intuitionist Model
(SIM), defining moral intuition as the sudden appearance in consciousness of a moral judgment, without any conscious awareness of having gone through steps of search, weighing evidence, or inferring a conclusion.
In other words, the SIM proposed that moral evaluations generally occur rap-
idly and automatically, products of relatively effortless, associative, heuristic
processing that psychologists now refer to as System 1 thinking (Kahneman,
2011; Stanovich & West, 2000; see also Bastick, 1982; Bruner, 1960; Simon,
1992, for earlier analyses of intuition that influenced the SIM). Moral evalu-
ation, on this view, is more a product of the gut than the head, bearing a closer
resemblance to esthetic judgment than principle-based reasoning.
This is not to say that individuals never engage in deliberative moral rea-
soning. Rather, Haidt’s original formulation of the SIM was careful to state
that this kind of effortful System 2 thinking, while seldom the genesis of our
moral evaluations, was often initiated by social requirements to explain,
defend, and justify our intuitive moral reactions to others. This notion that
moral reasoning is done primarily for socially strategic purposes rather than
to discover the honest truth about who did what to whom, and by what stan-
dard that action should be evaluated, is the crucial “social” aspect of the SIM.
We reason to prepare for social interaction in a web of accountability concerns
(Dunbar, 1996; Tetlock, 2002). We reason mostly so that we can support our
judgments if called upon by others to do so. As such, our moral reasoning, like
our reasoning about virtually every other aspect of our lives, is motivated
(Ditto, Pizarro, & Tannenbaum, 2009; Kunda, 1990). It is shaped and
directed by intuitive, often affective processes that tip the scales in support
of desired conclusions. Reasoning is more like arguing than like rational, dis-
passionate deliberation (Mercier & Sperber, 2010), and people think and act
more like intuitive lawyers than intuitive scientists (Baumeister & Newman,
1994; Ditto et al., 2009; Haidt, 2007a, 2007b, 2012).
The SIM is the prequel to MFT. The SIM says that most of the action in
moral judgment is in rapid, automatic moral intuitions. These intuitions
were shaped by development within a cultural context, and their output
can be edited or channeled by subsequent reasoning and self-presentational
concerns. Nonetheless, moral intuitions tend to fall into families or catego-
ries. MFT was designed to say exactly what those categories are, why we are
5. There is an intense debate as to whether this competition of groups versus groups counts as group-level selection, and whether group-level selection shaped human nature. On the pro side, see Haidt (2012), Chapter 9. On the con side, see Pinker (2012).
3. EMPIRICAL FINDINGS
In this chapter, we argue for the pragmatic validity of MFT, and of
moral pluralism in general. Debates over our theoretical commitments—
such as nativism and pluralism—can go on for centuries, but if a theory pro-
duces a steady stream of novel and useful findings, that is good evidence for
its value. MFT has produced such a stream of findings, from researchers both
within and outside of social psychology. Through its theoretical constructs,
and the methods developed to measure them, MFT has enabled empirical
advances that were not possible using monistic approaches. In this section,
we review some of those findings, covering work on political ideology,
mechanisms that give rise to the intuition, which are inaccessible.) (2) Implicit
measures—Reaction time and other methods of implicit social cognition have
been modified to bypass self-report and capture reactions to foundation-
related words, sentences, and pictures (see Section 3.5.1). (3) Psychophysiolog-
ical and neuroscience methods—These are also intended to bypass self-report, and
measure nonconscious and affective reactions more directly, via facial micro-
expressions, event-related potentials, or neuroimaging (see Section 3.5.2). (4)
Text analysis—The Moral Foundations Dictionary has been useful for measuring
foundation-related word use in a wide range of applications and disciplines,
from computer science analyses of blogs (Dehghani, Gratch, Sachdeva, &
Sagae, 2011) to digital humanities analyses of eighteenth-century texts
(Pasanek, 2009) to political science analyses of the discourse of political elites
(Clifford & Jerit, in press). The many methods developed have provided ini-
tial convergent and discriminant validity for our pluralistic model (see e.g.,
Graham et al., 2011), and several of them demonstrate the intuitive nature
of moral judgment. Materials for most of the methods described in
Table 2.2 can be found at www.MoralFoundations.org.
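As a rough illustration of the text-analysis method, a dictionary-based scorer simply counts how often foundation-related words appear in a text. The word lists below are invented stand-ins for illustration only; they are not the actual Moral Foundations Dictionary entries:

```python
# Minimal sketch of dictionary-based foundation scoring.
# The word lists are illustrative stand-ins, NOT the real
# Moral Foundations Dictionary.
import re
from collections import Counter

FOUNDATION_WORDS = {
    "care": {"harm", "suffer", "compassion", "cruel"},
    "fairness": {"fair", "justice", "cheat", "equal"},
    "loyalty": {"loyal", "betray", "traitor", "patriot"},
    "authority": {"obey", "respect", "tradition", "order"},
    "sanctity": {"pure", "sacred", "disgust", "degrade"},
}

def foundation_scores(text: str) -> dict:
    """For each foundation, return the proportion of words in `text`
    that match that foundation's word list."""
    words = re.findall(r"[a-z]+", text.lower())
    if not words:
        return {f: 0.0 for f in FOUNDATION_WORDS}
    counts = Counter(words)
    return {
        f: sum(counts[w] for w in wordlist) / len(words)
        for f, wordlist in FOUNDATION_WORDS.items()
    }

scores = foundation_scores("We must respect tradition and keep the sacred pure.")
```

Real applications of the Moral Foundations Dictionary use stemmed word lists and much larger vocabularies, but the core logic is this kind of normalized frequency count per foundation.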
[Figure 2.1: Endorsement of the five foundations (Care, Fairness, Loyalty, Authority, Sanctity) plotted against self-reported political identity, from strongly liberal to strongly conservative. Upper left panel: relevance to moral decisions (0 = never, 5 = always); upper right panel: agreement with foundation-relevant moral statements; bottom panel: average amount of money required to violate foundation-related taboos.]
in the very foundations upon which moral arguments could rest). Consistent
with the intuitionist tradition, arguments about culture-war issues such as gay
marriage, abortion, art, and welfare spending should not be expected to influ-
ence or convince people on the other side, because attitudes about specific issues
are based on deep intuitions, not on the specific reasons put forth during a debate.
A number of studies using a variety of different methods and samples, con-
ducted by several different research groups, have now replicated that first
empirical finding. Graham et al. (2009), for example, used four different
methods and consistently found that liberals valued Care and Fairness more than
did conservatives, whereas conservatives valued Loyalty, Authority, and Sanc-
tity more than did liberals (see Figure 2.1). Using a simple self-report political
orientation scale (very liberal to very conservative) and examining large Internet
samples, Graham et al. (2009) show this pattern in explicit judgments of moral
relevance (upper left panel, Figure 2.1), agreement with foundation-relevant
moral statements (upper right panel, Figure 2.1), and willingness to engage
in foundation-related “taboo” acts for money (bottom panel, Figure 2.1). In
each case, Care and Fairness are valued highly across the political spectrum, with
liberals on average endorsing them slightly more than conservatives. Loyalty,
Authority, and Sanctity, in contrast, show a clear linear increase in importance
moving from extreme liberals to extreme conservatives. In a fourth study,
Graham et al. (2009) found the same pattern of liberal-conservative differences
comparing the frequency of foundation-related words used in the sermons of
liberal and conservative churches (see Table 2.2).
Additional evidence of the robustness of this basic pattern of foundation dif-
ferences is reported by Graham, Nosek, and Haidt (2012), who obtained the
same results in a representative sample of U.S. citizens. Graham et al. (2011) have
also replicated this ideological pattern using respondents at YourMorals.org
from 11 different world regions (see Section 3.4 and Table 2.3).
Finally, McAdams et al. (2008) conducted life narrative interviews with a
group of highly religious and politically engaged adults and coded their
responses for themes related to the five moral foundations. They found what
they characterized as “strong support” for MFT:
When asked to describe in detail the nature and development of their own religious
and moral beliefs, conservatives and liberals engaged in dramatically different
forms of moral discourse. Whereas conservatives spoke in moving terms about
respecting authority and order, showing deep loyalty to family and country,
and working hard to keep the self pure and good, liberals invested just as much
emotion in describing their commitments to relieve the suffering of others and their
concerns for fairness, justice, and equality. (McAdams et al., 2008, p. 987).
At Level 1 are dispositional traits such as the Big 5. These are global, decontextualized
traits that describe broad patterns of cognitive or emotional responding.
At Level 2 are what McAdams calls characteristic adaptations, including values,
goals, and moral strivings that are often reactions (or adaptations) to the con-
texts and challenges an individual encounters. Characteristic adaptations are
therefore more conditional and domain-specific than dispositional traits and
are thus more variable across life stages and situational contexts. Finally, at
Level 3 in McAdams’s framework are integrative life stories—the personal nar-
ratives that people construct to make sense of their values and beliefs. For
many people, these life stories include an account of the development of
their current moral beliefs and political ideology. Haidt, Graham, and
Joseph (2009) elaborated McAdams’ third level for work in political psy-
chology by pointing out that many such stories are not fully self-authored,
but rather are often “borrowed” from ideological narratives and stereotypes
commonly held in the culture.
We view the moral and personality traits measured by our various
methods (as summarized in Table 2.2) as Level 2 characteristic adaptations,
linked closely to particular dispositional traits (Level 1). We cannot measure
moral foundations directly—we cannot see the “first draft” of the moral
mind. All we can do is read the finished books and quantify the differences
among individuals and groups. All we can do is measure the morality of a
person and quantify the degree to which that person’s morality is based
on each foundation. (We sometimes say that a person scored high on a par-
ticular foundation, but that is a shorthand way of saying that their morality, as
we measure it, relies heavily on virtues and concepts related to that founda-
tion.) An individual’s morality is constructed as they grow up in a particular
culture, with particular life experiences. But two siblings who bring different
dispositional traits to otherwise similar contexts and experiences will develop
different moral and political characteristic adaptations. As young adults, they
will then find different ideological narratives compelling and may come to
self-identify as members of different political parties.
For example, substantial evidence suggests that political conservatism is
associated with personality characteristics that incline individuals toward a
general resistance to novelty and change. In a comprehensive meta-analysis
of the psychological correlates of conservatism, Jost, Glaser, Kruglanski, and
Sulloway (2003) found that, compared to liberals, conservatives have higher
needs for order, structure, and closure, and report lower levels of openness to
experience. Conservatives have been found to respond less positively to
novel stimuli at physiological and attentional levels as well (Amodio, Jost,
Master, & Yee, 2007; Hibbing & Smith, 2007; Oxley et al., 2008; Shook &
Fazio, 2009). Similarly, a growing body of literature has revealed a relation
between greater political conservatism and heightened levels of disgust sen-
sitivity (Dodd et al., 2012; Helzer & Pizarro, 2011; Inbar, Pizarro, & Bloom,
2009; Inbar, Pizarro, Iyer, & Haidt, 2012; Smith, Oxley, Hibbing, Alford, &
Hibbing, 2011). Together, this constellation of dispositional tendencies may
provide the emotional infrastructure underlying conservative reverence for
long-established institutions and highly structured systems of social hierar-
chy and sexual propriety. Conversely, individuals with lower need for struc-
ture, greater openness to experience, and dampened disgust sensitivity
should be less anxious about challenging traditional authority structures, life-
style, and sexual practices. These dispositional tendencies may in turn afford
greater attraction to liberal policy positions seeking to “reform” traditional
values and institutions to reflect greater equality for historically oppressed
social groups and a less restrictive view of sexual purity and moral contam-
ination more generally.
Providing empirical support for the causal connections between person-
ality characteristics, moral concerns, and political ideology is a challenging
task, and more research in this area is clearly needed. A small set of studies,
however, has directly examined these types of associations. Lewis and Bates
(2011) measured the Big Five personality traits, moral foundations, and
political ideology and found that higher scores on Care–Fairness were
related to greater openness, neuroticism, and agreeableness, and that higher
Loyalty–Authority–Sanctity scores were associated with greater conscien-
tiousness and extraversion, and lower levels of neuroticism. Importantly,
and consistent with McAdams’ three-level personality model, moral foun-
dation endorsements mediated the relationship between Big Five traits and
political ideology.
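The mediation logic of such studies can be sketched on simulated data (the variable names and effect sizes below are invented for illustration, not the published estimates): if foundation endorsement mediates the trait–ideology link, the trait's direct effect on ideology should shrink once foundation scores are added as a predictor.

```python
# Sketch of simple trait -> foundations -> ideology mediation
# on simulated data. All effect sizes are made up for illustration.
import numpy as np

rng = np.random.default_rng(0)
n = 5000
openness = rng.normal(size=n)                      # Level 1 dispositional trait
foundations = 0.6 * openness + rng.normal(size=n)  # mediator, e.g. Care-Fairness score
ideology = 0.5 * foundations + rng.normal(size=n)  # outcome, e.g. liberalism

def ols_slopes(X, y):
    """Return OLS slope coefficients for y ~ X (intercept included, then dropped)."""
    X = np.column_stack([np.ones(len(y)), X])
    return np.linalg.lstsq(X, y, rcond=None)[0][1:]

# Total effect of the trait (c path), then its direct effect
# controlling for the mediator (c' path).
total = ols_slopes(openness, ideology)[0]
direct = ols_slopes(np.column_stack([openness, foundations]), ideology)[0]
```

Here the trait affects ideology only through the mediator, so `direct` falls toward zero while `total` stays positive; published analyses use formal mediation tests (e.g., bootstrapped indirect effects) rather than this bare two-regression comparison.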
In a similar study, Hirsh, DeYoung, Xu, and Peterson (2010) used a
more fine-grained measure of the Big Five personality traits that separates
each trait into two separate “aspects” (DeYoung, Quilty, & Peterson,
2007). Like Lewis and Bates (2011), they found an overall measure of agree-
ableness to be a significant predictor of greater endorsement of the Care and
Fairness foundations, but that when examined at the level of aspects, this
relation was limited to the aspect of agreeableness they term compassion.
The other aspect of agreeableness, politeness, was not related to Care–
Fairness scores but was, in fact, predictive of higher scores on the Authority
foundation. Also, where Lewis and Bates (2011) found openness to be a sig-
nificant predictor of Care–Fairness, Hirsh et al. (2010) found no significant
relation, but they did find a negative relation between openness (particularly
the intellect aspect) and the Authority and Sanctity foundations. Hirsh et al.
(2010) also found an association between greater overall conscientiousness
and endorsement of Loyalty, Authority, and Sanctity foundations, but these
relations were driven only by the orderliness (not the industriousness) aspect
of that trait. Subtle differences between these and the Lewis and Bates (2011)
findings notwithstanding, the Hirsh et al. (2010) findings are consistent with
the general thrust of MFT, and their study again provides evidence that
moral foundation endorsements mediated the relationships between person-
ality factors and political ideology.
Finally, in an attempt to integrate research on conservative sensitivity to
threatening stimuli with MFT, Van Leeuwen and Park (2009) examined
whether a conservative pattern of moral foundation endorsement mediated
the relationship between perceived social dangers and political conservatism.
They found that the tendency to emphasize Loyalty, Authority, and Sanctity
over Care and Fairness was related to both explicit and implicit conservatism
in the expected directions, and that it also partially mediated the relationship
between Belief in a Dangerous World and conservatism. The authors argue
that these results suggest that a basic inclination to perceive the environment
as dangerous may lead to greater endorsement of the Loyalty, Authority, and
Sanctity foundations, perhaps due to the perceived protection these group-
oriented values seem to provide.
information, over and above their ideology, about their social group prej-
udices. These results speak to the tight relationship between social and moral
judgment, while also demonstrating the predictive and discriminant validity
of the five foundations.
Graham, Nosek, et al. (2012) used MFT to examine the moral stereotypes
liberals and conservatives hold about each other. Participants filled out the
MFQ either normally, as a “typical” liberal, or as a “typical” con-
servative. Overall, participants correctly simulated the general liberal-
conservative pattern predicted by MFT. That is, the typical liberal scores were
higher than the typical conservative scores on Care and Fairness, and the typical
conservative scores were higher than the typical liberal scores on Loyalty,
Authority, and Sanctity. However, participants’ estimations of these differences
were exaggerated. In fact, the differences in moral foundation scores that par-
ticipants reported for the typical liberal and the typical conservative were sig-
nificantly larger than the actual differences observed between even the most
extreme partisans. Although participants who identified as liberals, moderates,
and conservatives all exaggerated these stereotypes, they did so to varying
degrees. Liberals exaggerated these partisan stereotypes more than conservatives
and moderates did when estimating all five founda-
tions. Most importantly, conservatives tended to be relatively accurate in their
beliefs about how much liberals valued Care and Fairness, but liberals estimated
that conservatives valued these foundations far less than they actually did.
MFT’s pluralistic approach thus allows not only for a better understanding of
the moral differences between liberals and conservatives but also for a more
nuanced understanding of the moral stereotypes that contribute to the seem-
ingly intractable nature of partisan conflict.
In terms of judgments of individuals rather than groups, Federico,
Weber, Ergun, and Hunt (in press) asked two groups of respondents (pro-
fessors solicited from liberal and conservative colleges and visitors to
Mechanical Turk) to evaluate the extent to which 40 of the most influential
people of the twentieth century were “moral exemplars.” A moral exemplar
was defined simply as “a highly moral person.” The target individuals had
previously been rated by a separate sample of social science professors as
to how much each individual embodied each of the five moral foundations.
The results were generally quite consistent with the predictions of MFT,
although subtle and important differences did emerge. Overall, there was
substantial agreement across the ideological spectrum on what led an indi-
vidual to be perceived as virtuous, with both liberal and conservative
respondents relying most heavily in their moral evaluations on the targets’
Moral Foundations Theory 83
[Figure: moral foundation valuation scores (0–5), plotted separately for Care (C), Fairness (F), Loyalty (L), Authority (A), and Sanctity (S); y-axis labeled "Moral foundation valuation."]
These two clusters offered no surprises. They are just what you would
expect from our common stereotypes about liberals and conservatives,
and from the findings of Graham et al. (2009) and Jost et al. (2003). How-
ever, the other two groups were different. The third group, dubbed "libertarians,"
scored low on all five moral foundations; they tended to value hedonism
and self-direction highly on the Schwartz Values Scale (Schwartz &
Bilsky, 1990) and showed high levels of atheism. The fourth group,
labeled “religious left,” scored relatively high on all five foundations, on reli-
gious participation, and on the Schwartz values of benevolence, tradition,
conformity, security, and spirituality. Importantly, neither the libertarians
nor the religious left fits neatly into the categories of “liberal” or “conser-
vative,” but their unique moral and psychological identity was detectable
when their moralities were analyzed using the five scores of the MFQ.
The left-right dimension is indeed useful as a first pass (Jost, 2006). But
the pluralism of MFT gives us greater resolution and detects groups that
do not fit well on that one dimension.
In a similar vein, Weber and Federico (in press) used a mixed model latent
class analysis to argue for a more heterogeneous approach to understanding
political ideology after identifying six discrete ideological groups (consistent
liberals, libertarians, social conservatives, moderates, consistent conservatives,
inconsistent liberals). They found each group to have unique sets of economic
and social policy preferences that were reflected in distinct patterns of moral
foundation endorsement. Further, Care and Fairness concerns were most
related to an ideological preference dimension of equality–inequality, while
Loyalty, Authority, and Sanctity were most aligned with the ideological pref-
erence dimension of openness-conformity (Federico et al., in press).
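The general logic of these cluster-based analyses can be sketched with a toy example: given five-dimensional MFQ profiles, an unsupervised algorithm can recover groups without any reference to the left-right dimension. Below is a minimal k-means illustration on simulated data — the profile means are illustrative stand-ins for the group patterns described above, not real MFQ statistics, and k-means is a simpler stand-in for the latent class models the studies actually used:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated MFQ profiles; columns: Care, Fairness, Loyalty, Authority, Sanctity.
# Cluster means are illustrative stand-ins for the group patterns described above.
liberal_like = rng.normal([4.0, 4.0, 2.0, 2.0, 1.5], 0.3, size=(50, 5))
libertarian_like = rng.normal([2.0, 2.0, 1.5, 1.5, 1.0], 0.3, size=(50, 5))
X = np.vstack([liberal_like, libertarian_like])

def kmeans(X, k, iters=25):
    """Plain k-means: assign each profile to its nearest centroid,
    then recompute centroids from the assigned profiles."""
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        centroids = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    return labels, centroids

labels, centroids = kmeans(X, k=2)
print(np.round(centroids, 2))  # two distinct five-foundation profiles
```

With real MFQ data one would use model-based clustering (e.g., latent class analysis) and choose the number of groups by fit statistics rather than fixing k in advance; the point here is only that multidimensional foundation profiles, not a single left-right score, are what make such groups detectable.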
The most extensive examination of an ideological subgroup that cannot
be easily placed along a simple liberal-conservative dimension is the work of
Iyer, Koleva, Graham, Ditto, and Haidt (2012) that set out to identify the
cognitive, affective, and moral characteristics of self-identified libertarians.
Libertarians are an increasingly influential group in American politics, with
their ideological positions gaining attention through the popularity of the
Tea Party movement and media coverage of the Presidential campaign of
Congressman Ron Paul (R-TX). Libertarian values, however, presented
a challenge for MFT, as the primary value that libertarians espouse—
individual liberty—was not well captured by the existing five foundations.
Indeed, the original conception of MFT (Haidt & Joseph, 2004) took
Shweder’s ethic of autonomy and created foundations that represented
the liberal vision of positive liberty, where individual freedom is defined
86 Jesse Graham et al.
[Figure: mean endorsement (0–5) of Care/harm, Fairness/cheating, Loyalty/betrayal, Authority/subversion, Sanctity/degradation, economic liberty, and lifestyle liberty, plotted by political identification (liberals, conservatives, libertarians).]
Does that mean that libertarians have no morality—or, at least, less con-
cern with moral issues than liberals or conservatives? Or might it be that their
core moral value was simply not represented among the five foundations mea-
sured by the MFQ? Consistent with the latter position, when Iyer et al. exam-
ined the items tapping the value placed on liberty as a moral concern, they
found that libertarians did indeed score higher than both liberals and conser-
vatives. This relative valuation of liberty was found both on items tapping
concerns about economic and property-related freedoms (typically valued
by political conservatives more than liberals) as well as lifestyle freedoms (typ-
ically valued by political liberals more than conservatives). Similar findings
emerged from the Good Self measure (Barriga, Morrison, Liau, & Gibbs,
2001), where libertarians reported valuing being independent more than other
groups, as well as from the Schwartz Values Scale (Schwartz, 1992), on which
libertarians scored highest of all groups on valuing self-direction.
Iyer et al. also identified a number of other interesting psychological
characteristics of their libertarian sample. Perhaps reflecting the emotional
underpinnings of their focus on individual liberty, libertarians scored higher
than liberals or conservatives on a scale measuring psychological reactance
(Brehm & Brehm, 1981; Hong, 1996). Libertarians also showed a relatively
cerebral as opposed to emotional cognitive style (e.g., high in need for cog-
nition, low empathizing, and high systematizing [Baron-Cohen, 2009]) and
lower interdependence and social relatedness (e.g., low collectivism, low on
all three subscales of the Identification with All of Humanity Scale).
Together, these findings paint a consistent portrait of the moral psychology
of libertarianism. Libertarians—true to their own descriptions of themselves—
value reason over emotion and show more autonomy and less interdependence.
Their central moral value, therefore, is one that grants people the right to be left
alone. MFT’s five moral foundations appeared to be inadequate in capturing
libertarians’ moral concerns, but the approach that gave birth to these founda-
tions served us well in examining this new group, and stimulated us to consider
Liberty/oppression as a candidate for addition to our list of foundations (see
Section 4.1.5, and further discussion in Haidt, 2012, chapter 8).
3.3.1 Attitudes
Koleva, Graham, Iyer, Ditto, and Haidt (2012) illustrated the utility of MFT’s
pluralistic framework for understanding the psychological underpinnings of specific
policy issues. In two web studies (N = 24,739), we used scores on the MFQ
to predict moral disapproval and attitude stands on 20 hot-button issues, such as
same-sex marriage, abortion, torture, and flag desecration/protection. We
found that MFQ scores predicted attitudes on these issues, even after partialling
out participants’ ideology, gender, religiosity, and other demographic variables.
We expected that the foundations would predict variation based on overlapping
content—for example, people who scored high on the Care foundation would
be particularly sensitive to issues involving violence or cruelty, and this was
indeed true, in general. But unexpectedly, the Sanctity foundation emerged
as the strongest predicting foundation for most issues. For example, people
who score high on the Loyalty foundation tend to be more patriotic, and
therefore more strongly in favor of "protecting" the flag from desecration, but scores
on the Sanctity foundation were even more predictive. Some people see the flag
as merely a piece of cloth; others see it as a sacred object, containing a nonmaterial
essence that must be protected. These findings about the importance of Sanctity
in ongoing political controversies could not have been obtained using moral the-
ories that limited the moral domain to issues of Care or Fairness.
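The "partialling out" step in such analyses amounts to asking whether foundation scores improve prediction of an issue attitude once ideology and demographics are already in the model. A hedged sketch with simulated data (the variable names, effect sizes, and single-issue setup are all illustrative assumptions, not the study's actual data or model):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Simulated respondents; all names and effect sizes are illustrative.
ideology = rng.normal(size=n)        # left-right self-placement (standardized)
religiosity = rng.normal(size=n)     # demographic covariate
sanctity = 0.6 * ideology + rng.normal(size=n)  # MFQ Sanctity score
# Attitude (e.g., support for flag protection) depends on Sanctity beyond ideology.
attitude = 0.5 * ideology + 0.8 * sanctity + rng.normal(size=n)

def r_squared(predictors, y):
    """R^2 from an ordinary least squares fit with an intercept."""
    X = np.column_stack([np.ones(len(y))] + predictors)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return 1 - (y - X @ beta).var() / y.var()

r2_base = r_squared([ideology, religiosity], attitude)           # covariates only
r2_full = r_squared([ideology, religiosity, sanctity], attitude)  # + foundation score
print(f"covariates only: R^2 = {r2_base:.2f}; with Sanctity: R^2 = {r2_full:.2f}")
```

A nonzero increment in R² after the covariate step is the pattern the text describes: MFQ scores carry predictive information about issue attitudes beyond ideology and demographics alone.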
Another advantage of using a multifaceted approach like MFT is that it
helps us understand how a person could hold different attitudes across issues
that appear to engender similar moral concerns. For example, even though
abortion, euthanasia, and the death penalty all evoke arguments for the sanc-
tity of life, opposition to the first two was best predicted by Sanctity, whereas
opposition to the third was best predicted by Care scores. This may help
explain why liberals, who score low on Sanctity concerns (Graham et al.,
2009; Haidt & Graham, 2007), do not generally oppose access to abortion
and euthanasia, but do tend to oppose the death penalty.
Aside from refining our understanding of ideological opinions, these
findings suggest novel approaches to persuasion and attitude change. For
example, Feinberg and Willer (2013) showed that framing messages about
the environment in terms of Sanctity, rather than just Care, increased con-
servatives’ support for environmental policies, presumably because this
framing triggers intuitions which resonate with conservatives.
3.3.2 Emotion
In a related line of inquiry, researchers have examined the interplay between
morality and emotion, particularly the emotion of disgust, in shaping moral
judgments and ideological attitudes and self-identification. Much of this
work has either explicitly drawn on MFT or offers indirect evidence that
supports its premises. For example, Horberg, Oveis, Keltner, and Cohen
(2009) showed that an individual’s trait propensity toward feeling disgust,
an emotion that is related to Sanctity concerns (Rozin et al., 2008), as well
as experimental inductions of disgust, intensified the moral importance of
maintaining physical and spiritual purity. This effect was specific: other emo-
tions, such as trait or state anger, fear, or sadness did not have an effect on
judgments related to purity, and disgust did not affect nonpurity moral judg-
ments, such as Care/harm or justice. Finally, Preston and Ritter (2012)
showed that the concepts of religion and cleanliness are linked such that
priming religion increased the mental accessibility of cleanliness-related
concepts and the desirability of cleaning products, whereas priming thoughts
of personal cleanliness increased ratings of the value ascribed to religious
beliefs. This work underscores the relevance of experiences with and con-
cerns about the Sanctity foundation to moral judgment.
Building on the finding that conservatives tend to moralize Sanctity concerns
more than liberals (Graham et al., 2009), Helzer and Pizarro (2011)
reported two experiments in which subtle reminders of physical purity—
standing by a hand sanitizer and using hand wipes—led participants to report
being more politically conservative and more disapproving of sexual purity
violations, like incest or masturbation. Similarly, Inbar, Pizarro, and Bloom
(2011) found that experimental inductions of disgust led participants to report
more negative attitudes toward gay men but not toward lesbians or other out-
groups. However, unlike Helzer and Pizarro (2011), these researchers did not
find a general effect of disgust on political attitudes or on self-identification.
Finally, Inbar, Pizarro, Iyer, and Haidt (2012) showed that self-identified con-
servatives, both in the United States and around the world, reported greater
propensity toward feeling contamination disgust, and that disgust sensitivity
predicted voting patterns in the United States. Interestingly, Jarudi (2009)
found that conservatives were more sensitive to purity concerns about sex
(e.g., anal sex), but not about food (e.g., eating fast food), even though disgust
sensitivity was related to disapproval in both domains.
Finally, several studies have examined the role of anger and contempt, in
addition to disgust, in response to foundation-related violations. For exam-
ple, Russell and Giner-Sorolla (2011) gave participants scenarios that
depicted violations of Care, Fairness, or Sanctity and assessed their moral
judgments, anger, and disgust. Next, participants were asked to generate cir-
cumstances that could change their opinion and then to reevaluate the sce-
narios assuming these new circumstances. Whereas ratings of disgust did not
change during reevaluation, anger for the harm and fairness violations was
attenuated by the new circumstances.
Skitka, Mullen, Griffin, Hutchinson, and Chamberlin (2002) found that when liberals were tired, distracted, or
under cognitive load, they showed levels of personal attributions such as
victim-blaming akin to those of conservatives. The authors posited “moti-
vated correction” as the process liberals undergo to bring these automatic
reactions in line with their conscious egalitarian goals and values. Similarly,
Eidelman, Crandall, Goodman, and Blanchar (2012) found that low-effort
thought (induced by cognitive load, time pressure, or alcohol) increased
aspects of conservatism such as acceptance of hierarchy and preference for
the status quo.
Graham (2010) tested whether MFT could provide an organizing
framework for such findings, with the hypothesis that liberals intuitively
respond to Loyalty, Authority, and Sanctity cues more strongly than would
be suggested by their explicitly endorsed moral opinions. Using several
implicit measures of reactions to foundation-related stimuli—evaluative
priming, the AMP, and event-related brain potentials using EEG (see
Table 2.2)—the author found support for this hypothesis and found no
evidence of such implicit–explicit discrepancy for conservatives. More-
over, when randomly assigned to give their first “gut” reactions on the
MFQ, participants across the political spectrum indicated that their answers
were the same as their consciously endorsed opinions, indicating that lib-
erals are unaware of the discrepancy between their implicit and explicit
moralities. In contrast to these studies, Wright and Baril (2011) found that
cognitive load or ego depletion manipulations decreased MFQ endorse-
ments of Loyalty, Authority, and Sanctity among conservatives. Although
two large studies using different samples have failed to replicate this effect,
more work needs to be done to test whether conservatives also have
implicit–explicit discrepancies in their moralities, particularly for Care
and Fairness concerns.
In a facial electromyography study (Cannon et al., 2011), disgust-related muscle
activity was highest for Sanctity violations and second highest for Fairness violations,
while corrugator activity (angry microexpression) was highest for violations of Care.
Moreover, muscle activity differentially predicted severity of explicit moral
judgments for different types of concerns, with disgust expressions predicting
harsher Sanctity and Fairness judgments, anger expressions predicting harsher
Care judgments, and smiling predicting less harsh Loyalty judgments.
In a vignette study contrasting judgments about Care (accidental vs.
intentional assault) and Sanctity (accidental vs. intentional incest), Young
and Saxe (2011) found that intentionality was central to the Care judgments
but was much less crucial for Sanctity judgments. They followed up this
finding with an fMRI study and found that the right temporoparietal junc-
tion (TPJ)—an area implicated in theory of mind reasoning, and hence
intentionality judgments—was more involved in Care judgments than in
Sanctity judgments.
Two other studies have looked for links between moral foundations and
brain structures or responses. Lewis, Kanai, Bates, and Rees (2012) gave sub-
jects the MFQ and then collected structural MRI brain scans. They found a
variety of significant and interpretable relationships, including: (1) Scores on
the Care and Fairness foundations (combined) were associated with larger
gray-matter volume in the dorsomedial prefrontal cortex (DMPFC, an area
associated with mentalizing and empathy) and (2) Sanctity scores were asso-
ciated with more gray-matter volume in the left anterior insula (a region
active in several moral emotions including disgust). They also found that
high scores on the Authority and Sanctity foundations were associated with
more gray-matter volume in the subcallosal gyrus, although they did not
know how to interpret this finding.
Parkinson et al. (2011) wrote vignettes to trigger a range of moral intu-
itions, inspired partly by MFT, and then carried out an fMRI study. They
found that stories about people committing intentional physical harm pref-
erentially activated regions associated with understanding and imagining
actions; stories about sexual deviance preferentially activated many areas
associated with affective processing (including the amygdalae and the ante-
rior insula); and stories about dishonesty preferentially activated brain areas
associated with reasoning about mental states (including the DMPFC and
the TPJ). Their interpretation of these results was strongly supportive of
the pluralist approach we emphasize in this chapter:
These results provide empirical support for philosophical arguments against the
existence of a functional or anatomical module common and peculiar to all moral
judgments... Separate systems were found to characterize different kinds of moral
judgment... It is likely that moral judgment is even more multidimensional than
what is suggested here, given that there remain other domains of morality that
were not examined in the current study (e.g., disrespect, betrayal of an in-group,
fairness). These results suggest that, just as disparate systems are now understood
to subserve aspects of cognitive faculties once thought to be monolithic (e.g.,
memory, attention), distinct systems subserve different types of moral judgment.
Future research may benefit from working toward a taxonomy of these systems
as Haidt and Graham (2007) have suggested (Parkinson et al., 2011, p. 3171).
4. FUTURE DIRECTIONS
In this section, we look toward the future of moral foundations
research, with special attention paid to new areas of inquiry and the evolu-
tion of the theory itself. We begin by describing notable recent critiques of
MFT, which we see as essential for helping to shape its future development.
We then offer five criteria for foundationhood, to guide future discussions of
what exactly the list of foundations should be, and what it would take to
change or expand our current list. Finally, we give additional consideration
to what will characterize the next several years of research in MFT and in
moral psychology more generally.
(see Section 4.2.4), but we reject their claim that nativists are obligated to
point to specific neural circuits, or to genes for those circuits. Given that
nobody can find a set of genes that, collectively, explains 5% of the variance
in how tall people are (Gudbjartsson et al., 2008), what chance is there that
anyone will find a set of genes that code for mental modules such as loyalty or
sanctity whose expression is far more subject to cultural influence than is
height? To insist that nativists must point to genes is to ban nativism from
psychology.
And yet, psychology has made enormous strides in recent years because
of a flood of nativist findings. Personality psychology has been transformed
by the discovery that nearly all personality traits are heritable (Bouchard,
1994; Turkheimer, 2000). Developmental psychology has been transformed
by the discovery that infants have a great deal of innate knowledge about the
physical world (Baillargeon, 1987; Spelke, 2000), and even about the social
world (Hamlin, Wynn, & Bloom, 2007). These findings have earth-shaking
implications for moral psychology, rendering blank slate or pure learning
approaches nonstarters. None of these findings were reduced to “hand
waving” by their authors’ failure to point to specific genes or brain circuits.
It may have been a defensible strategy in the 1970s to assume that the mind is
a blank slate and then require nativists to shoulder the burden of proof, but
nowadays, we believe, the discussion should focus on how exactly moral
knowledge is innate, not whether it is (Tooby et al., 2005).
Nonetheless, Suhler and Churchland do point out places in which our
“how exactly” discussion has been vague or underspecified, giving us an
opportunity to improve the theory. In response to their critique, we offered
a more detailed discussion of moral modularity (Haidt & Joseph, 2011; see
also Haidt & Joseph, 2007). We have also tried to be much more specific in
this chapter about what exactly a foundation is, and how you know when
something is innate (see Section 4.2).
morality (which is universally applicable). But both men believed that real
morality (postconventional, for Kohlberg; the moral domain, for Turiel) was
something the child identified for herself during social interactions with
peers, aided by the process of role-taking. Cognitive developmentalists car-
ried out a variety of cross-cultural studies, but the goal of these studies—and
their consistent conclusion—was that the fundamental stuff of morality did
not vary across cultures (Hollos, Leis, & Turiel, 1986; Kohlberg, 1969).
Again, as Kohlberg (1971) asserted: “Virtue is ultimately one, not many,
and it is always the same ideal form regardless of climate or culture... The
name of this ideal form is justice.” Any cross-cultural differences in the abil-
ity to reason about justice were explained as developmental differences: chil-
dren in some cultures did not have as many opportunities for role-taking in
egalitarian interactions, but if they did have those opportunities, they would
have reached the same endpoint.
Of MFT’s four main claims, cultural learning has received the least direct
criticism. Following Piaget, Kohlberg, and Turiel, researchers in the
cognitive-developmental tradition could argue that MFT has overstated
the role of cultural learning and underplayed the role of self-construction
by conscious reasoning about care and fairness. This argument was made
by Turiel, Killen, and Helwig (1987) against Shweder et al. (1987). But none
have advanced such a critique against MFT yet.
Haidt's data on the differences between liberals and conservatives is interesting, but
is his interpretation correct? It seems possible, for instance, that his five foundations
of morality are simply facets of a more general concern about harm. What, after
all, is the problem with desecrating a copy of the Qu'ran? There would be no
problem but for the fact that people believe that the Qu'ran is a divinely authored
text. Such people almost surely believe that some harm could come to them or to
their tribe as a result of such sacrileges—if not in this world, then in the next (p. 89
[see also pages 180–181]).
Harris makes his monist critique in the context of the larger normative argu-
ment that science should determine human values and pronounce which
moral views are correct based on which ones lead to the greatest happiness
(which can be measured in the brain by neuroscientific techniques). For the
person morally offended by the desecration of a holy book, Harris suggests
simply discarding the incorrect view that any deity exists who would cause
harm because of it. Once that illusion is gone, one can correctly see,
according to Harris, that desecrating a holy book is morally acceptable
because it causes no harm. Moral monism is thus necessary for such a project,
which requires a single standard by which to measure moral rightness or
wrongness. For Harris, that standard is human welfare, defined in a rather
narrow way: the absence of suffering.
But even if one agrees with Harris’s normative views, would the reduction
of all morality to harm help us understand how morality actually works? Or
would it be (to paraphrase William James) another attempt to clean up the litter
the world actually contains? A monist model in which all moral judgments
(even those based on explicitly harmless transgressions) are produced by a
felt guilty about (or ways in which they were not living up to their values),
honesty violations come up more frequently than any other kind of concern
(see Iyer, 2010, on treating honesty as a separate foundation). We are cur-
rently investigating all of these as part of the method-theory coevolution
of MFT.
There is not one taste receptor on the tongue whose output tells us "delicious!"
Rather, we posit that there are a variety of rapid, automatic reactions to pat-
terns in the social world. When we detect such patterns, moral modules fire,
and a fully enculturated person has an affectively valenced experience. Not just
a feeling of “good!” or “bad!,” but an experience with a more specific “flavor”
to it, such as “cruel!,” “unfair!,” “betrayal!,” “subversive!,” or “sick!” If a
moral reaction can be elicited quickly and easily, with a variety of images,
bumper stickers, or one-sentence stories, that is a point in favor of its
foundationhood. Reactions to unequal distributions among children are often
visible on the face of the disadvantaged child within one second (LoBue,
Chiong, Nishida, DeLoache, & Haidt, 2011), and fMRI studies repeatedly
show that people have rapid, affectively laden reactions to being cheated,
and those reactions tend to activate brain areas related to emotion, including
the anterior insula and the orbitofrontal cortex (Rilling et al., 2002; Sanfey
et al., 2003). In an fMRI study of economic games, fair offers (compared
to unfair offers of the same value) activated neural reward circuitry, while
accepting unfair offers activated self-control circuitry (Tabibnia, Satpute, &
Lieberman, 2008). It is easy to trigger rapid and affectively laden judgments
of unfairness using still photos, bumper stickers, or a single number on a com-
puter screen that reveals one’s partner’s choice in a cooperative game. The
same is true for images of harm or cruelty activating the Care foundation
(e.g., Luo et al., 2006), and stories about sexual violations activating the Sanc-
tity foundation (e.g., Parkinson et al., 2011). There has been less research on
automatic reactions to violations of Loyalty and Authority, but here too stud-
ies have shown split-second reactions to sentences, words, or pictures showing
violations of these foundations (Cannon et al., 2011; Graham, 2010).
The case for innateness grows much stronger when a behavior or ability
is found in nonhuman primates (particularly chimpanzees and bonobos) and
when it can be shown to emerge in young children before they have been
exposed to relevant teaching or reinforcement. Contrary to Suhler and
Churchland (2011), we do not believe that claims about innateness need
to point to specific genes or brain areas. Rather, nativists must offer some
reason for believing that a behavior or ability is “organized in advance of
experience.”
de Waal (1996) has long argued that the “building blocks” of human
morality are present in other primates. We believe that such building blocks
have been shown for the Care foundation (i.e., empathy and nurturance;
Hrdy, 2009; Preston & de Waal, 2002), the Loyalty foundation (coalitional
behavior and intercoalitional conflict; de Waal, 1982), and the Authority
foundation (rank and deference; Boehm, 1999, 2012). There is some evi-
dence for precursors of Fairness (Brosnan, 2006), but it is more anecdotal,
and the limited lab evidence (e.g., Brosnan & de Waal, 2003) has been dis-
puted (Brauer, Call, & Tomasello, 2006; see also Hammerstein, 2003). We
know of no evidence that nonhuman primates have any building blocks of
the Sanctity foundation, such as the emotion of disgust, or even contamina-
tion sensitivity (see Rozin & Fallon, 1987). We presume that Sanctity is the
most recently evolved foundation, perhaps coevolving with human religi-
osity in the past one or two hundred thousand years.
Recent findings in developmental psychology strongly support the
nativist claims of MFT. The fourth row of Table 2.2 lists examples of such
research. In the past 6 years, infants and young children have been shown to
have surprisingly sophisticated social-cognitive abilities, often including
affective reactions to third-party violators (i.e., puppets who do bad things
to other puppets). For example, infants do not like puppets who harm
others, but they do like puppets who help others (Hamlin et al., 2007).
Infants are also sensitive to third-party fairness violations (Sloane et al.,
2012); interestingly, this sensitivity predicted infants’ own altruistic sharing
behavior (Schmidt & Sommerville, 2011). Children as young as three are
adept at sharing rewards equally, but only when they both cooperated to
produce the benefit (Hamann, Warneken, Greenberg, & Tomasello,
2011). Infants notice markers of ingroup membership and prefer members
of their ingroup (Kinzler et al., 2007), and even prefer those who help similar
others and harm dissimilar others (Hamlin, Mahajan, Liberman, & Wynn,
in press). We know of no research on how infants process markers of author-
ity and respect, or of purity, sanctity, or contagion; we hope that such
research will be done in the future. But we do note that children’s games are
often based on a single foundation, giving children the opportunity to prac-
tice a portion of their moral repertoire. For example, the game of “Simon
Says” appoints a leader who commands followers, and the game of cooties is
about contagion and how to remove contagion (i.e., with a “cooties shot”).
The concept of “cooties” is not found universally, but it has been identified
in several far-flung cultures (Hirschfeld, 2002; Samuelson, 1980), it seems to
emerge with no encouragement from adults, and it emerges in Western soci-
eties that discourage the use of caste and contagion as moral categories.
Importantly, cooties games tend to emerge around the age of 7 or 8
(Opie & Opie, 1969), which is the age at which disgust sensitivity becomes
pronounced (Rozin & Fallon, 1987). In other words, these games seem to
reflect the externalization of children’s developing social-emotional abilities,
not the internalization of prevailing cultural norms.
We are hopeful that as more and more researchers make use of MFT's methods and
constructs, the benefits of moral pluralism can be realized in more and more
content areas and disciplines. Here are a few areas we see as particularly
fertile.
4.3.2.2 Development
Second, developmental psychologists are just beginning to test the earliest
signs of emergence for moral concerns other than care and fairness. There
is much fertile research ground here for both infant/toddler studies and
lifespan development studies—do the “binding” concerns of Loyalty,
Authority, and Sanctity become more important as people get older,
become parents, or take on leadership positions at work? What are the dif-
ferent patterns of emergence and developmental trajectories for different
foundational concerns?
5. CONCLUSION
A cherished maxim in psychology comes from Lewin (1951): “There
is nothing so practical as a good theory.” Putting this maxim together with
Einstein’s maxim at the opening of this chapter, we think MFT is a good
theory. It is a practical theory—complete with a set of well-validated mea-
surement tools—which has quickly yielded a great variety of new findings,
in many fields. It is a non-Procrustean theory which does not force
researchers to “surrender the adequate representation” of experience.
And it is an open and revisable theory, offering an initial list of foundations
along with a list of criteria for how to revise the list. MFT is a theory in
motion, a theory to be expanded, constricted, refined, and built upon.
Above all, we think it is the right theory for our age—a golden age of
cross-disciplinary research in which most scientists studying morality have
at least some familiarity with findings in neighboring fields. Conferences
on moral psychology nowadays often include researchers who study chim-
panzees, psychopaths, infants, hunter-gatherers, or people with brain dam-
age. MFT gives this varied set of researchers a common language for talking
about the moral domain. It calms the sometimes-divisive nature-nurture
debate by distinguishing the first draft of the moral mind from the
experiential editing process.
We think MFT is practical in another way too: it helps researchers as well
as the general public look beyond the moral values that are dearest to them,
and understand those who live in a different moral matrix. We close with a
final quote from Berlin (2001), who explains one reason why pluralism is so
practical:
If I am a man or a woman with sufficient imagination (and this I do need), I can
enter into a value system which is not my own, but which is nevertheless some-
thing I can conceive of men pursuing while remaining human, while remaining
creatures with whom I can communicate, with whom I have some common
values—for all human beings must have some common values or they cease
to be human, and also some different values else they cease to differ, as in fact
Moral Foundations Theory 119
they do. That is why pluralism is not relativism—the multiple values are objective,
part of the essence of humanity rather than arbitrary creations of men's subjective
fancies.
ACKNOWLEDGMENTS
The authors wish to thank Trish Devine, Jeremy Frimer, Ashby Plant, Linda Skitka, and the
USC Values, Ideology, and Morality Lab for helpful comments on a draft of this chapter.
REFERENCES
Ambady, N., & Rosenthal, R. (1992). Thin slices of expressive behavior as predictors of
interpersonal consequences: A meta-analysis. Psychological Bulletin, 111, 256–274.
Amodio, D. M., Jost, J. T., Master, S. L., & Yee, C. M. (2007). Neurocognitive correlates of
liberalism and conservatism. Nature Neuroscience, 10, 1246–1247.
Baillargeon, R. (1987). Object permanence in 3 1/2- and 4 1/2-month-old infants. Devel-
opmental Psychology, 23(5), 655–664.
Bargh, J. A., & Chartrand, T. L. (1999). The unbearable automaticity of being. American
Psychologist, 54, 462–479.
Baril, G., & Wright, J. C. (2012). Different types of moral cognition: Moral stages versus
moral foundations. Personality and Individual Differences, 53, 468–473.
Barkow, J., Cosmides, L., & Tooby, J. (Eds.), (1992). The adapted mind: Evolutionary psychology
and the generation of culture. New York: Oxford University Press.
Baron-Cohen, S. (2009). Autism: The empathizing-systemizing (E-S) theory. The Year in
Cognitive Neuroscience. Annals of the New York Academy of Sciences, 1156, 68–80.
Barrett, H. C., & Kurzban, R. (2006). Modularity in cognition: Framing the debate. Psycho-
logical Review, 113, 628–647.
Barriga, A. Q., Morrison, E. M., Liau, A. K., & Gibbs, J. C. (2001). Moral cognition:
Explaining the gender difference in antisocial behavior. Merrill-Palmer Quarterly, 47,
532–562.
Bastick, T. (1982). Intuition: How we think and act. Chichester, England: Wiley.
Baumard, N., André, J. B., & Sperber, D. (2013). A mutualistic approach to morality. Behav-
ioral and Brain Sciences, 36, 59–122.
Baumeister, R. F., & Newman, L. S. (1994). How stories make sense of personal experiences:
Motives that shape autobiographical narratives. Personality and Social Psychology Bulletin,
20, 676–690.
Berlin, I. (1969). Four essays on liberty. USA: Oxford University Press.
Berlin, I. (2001). My intellectual path. In H. Hardy (Ed.), The power of ideas (pp. 1–23).
Princeton, NJ: Princeton University Press.
Bloom, P. (2010). How do morals change? Nature, 464, 490.
Bobbio, A., Nencini, A., & Sarrica, M. (2011). Il Moral Foundation Questionnaire: Analisi
della struttura fattoriale della versione italiana. Giornale di Psicologia, 5, 7–18.
Boehm, C. (1999). Hierarchy in the forest: The evolution of egalitarian behavior. Cambridge, MA:
Harvard University Press.
Boehm, C. (2012). Moral origins: The evolution of virtue, altruism, and shame. New York: Basic.
Bouchard, T. J. J. (1994). Genes, environment, and personality. Science, 264, 1700–1701.
Bowlby, J. (1969). Attachment and loss: Vol. 1. Attachment. New York: Basic.
Brauer, J., Call, J., & Tomasello, M. (2006). Are apes really inequity averse? Proceedings of the
Royal Society B, 273, 3123–3128.
Brehm, S. S., & Brehm, J. W. (1981). Psychological reactance: A theory of freedom and control.
London: Academic Press, Inc.
120 Jesse Graham et al.
Brosnan, S. F. (2006). Nonhuman species’ reactions to inequity and their implications for
fairness. Social Justice Research, 19, 153–185.
Brosnan, S. F., & de Waal, F. B. (2003). Monkeys reject unequal pay. Nature, 425, 297–299.
Brown, D. E. (1991). Human universals. Philadelphia: Temple University Press.
Brown, R. W., & Ford, M. (1964). Address in American English. In D. Hymes (Ed.), Lan-
guage in culture and society (pp. 234–244). New York: Harper & Row.
Bruneau, E. G., Dufour, N., & Saxe, R. (2012). Social cognition in members of conflict
groups: Behavioural and neural responses in Arabs, Israelis and South Americans to each
other’s misfortunes. Philosophical Transactions of the Royal Society Biological Sciences, 367,
717–730.
Bruner, J. S. (1960). The process of education. Cambridge, MA: Harvard University Press.
Cannon, P. R., Schnall, S., & White, M. (2011). Transgressions and expressions: Affective
facial muscle activity predicts moral judgments. Social Psychological and Personality Science,
2, 325–331.
Clifford, S., & Jerit, J. (in press). How words do the work of politics: Moral Foundations
Theory and the debate over stem-cell research. Journal of Politics.
Cochran, G., & Harpending, H. (2009). The 10,000 year explosion: How civilization accelerated
human evolution. New York: Basic.
Conover, P. J., & Feldman, S. (1981). The origins and meaning of liberal-conservative self-
identifications. American Journal of Political Science, 25(4), 617–645.
Cosmides, L., & Tooby, J. (1994). Origins of domain specificity: The evolution of functional
organization. In L. A. Hirschfeld & S. A. Gelman (Eds.), Mapping the mind: Domain spec-
ificity in cognition and culture (pp. 85–116). Cambridge, UK: Cambridge University Press.
Dawkins, R. (1976). The selfish gene. New York: Oxford University Press.
De Waal, F. (1982). Chimpanzee politics: Power and sex among apes. London: Jonathan Cape.
De Waal, F. B. M. (1996). Good natured: The origins of right and wrong in humans and other ani-
mals. Cambridge, MA: Harvard University Press.
Dehghani, M., Sagae, K., Sachdeva, S., & Gratch, J. (in press). Linguistic analysis of the
debate over the construction of the ‘Ground Zero Mosque’. Journal of Information
Technology & Politics.
DeLoache, J. S., & LoBue, V. (2009). The narrow fellow in the grass: Human infants associate
snakes and fear. Developmental Science, 12(1), 201–207.
DeYoung, C. G., Quilty, L. C., & Peterson, J. B. (2007). Between facets and domains: 10
aspects of the Big Five. Journal of Personality and Social Psychology, 93, 880–896.
Ditto, P., & Koleva, S. P. (2011). Moral empathy gaps and the American culture war. Emotion
Review, 3(3), 331–332 (special issue on “Morality and Emotion” edited by Joshua
Greene).
Ditto, P. H., Liu, B., & Wojcik, S. P. (2012). Is anything sacred anymore? Commentary on
target article. Mind perception is the essence of morality (K. Gray, L. Young, & A. Waytz).
Psychological Inquiry, 23, 155–161.
Ditto, P. H., Pizarro, D. A., & Tannenbaum, D. (2009). Motivated moral reasoning. In B. H.
Ross (Series Ed.) & D. M. Bartels, C. W. Bauman, L. J. Skitka, & D. L. Medin (Eds.),
Psychology of learning and motivation: Vol. 50. Moral judgment and decision making
(pp. 307–338). San Diego, CA: Academic Press.
Dodd, M., Balzer, A., Jacobs, C., Gruszczynski, M., Smith, K., & Hibbing, J. (2012). The
political left rolls with the good and the political right confronts the bad: Connecting
physiology and cognition to preferences. Philosophical Transactions of the Royal Society
B, 367(1589), 640–649.
Douglas, M. (1966). Purity and danger. London: Routledge and Kegan Paul.
Duckitt, J. (2001). A cognitive-motivational theory of ideology and prejudice. In M. P.
Zanna (Ed.), Advances in experimental social psychology: Vol. 33. (pp. 41–113). San Diego:
Academic Press.
Dunbar, R. (1996). Grooming, gossip, and the evolution of language. Cambridge, MA: Harvard
University Press.
Durkheim, E. (1925/1973). Moral education (E. Wilson & H. Schnurer, Trans.). New York:
The Free Press.
Eidelman, S., Crandall, C. S., Goodman, J. A., & Blanchar, J. C. (2012). Low-effort thought
promotes political conservatism. Personality and Social Psychology Bulletin, 38, 808–820.
Einstein, A. (1934). On the method of theoretical physics. Philosophy of Science, 1, 163–169.
Ekman, P. (1992). An argument for basic emotions. Cognition and Emotion, 6, 169–200.
Ekman, P. (1994). All emotions are basic. In P. Ekman & R. Davidson (Eds.), The nature of
emotion (pp. 15–19). New York: Oxford University Press.
Ekman, P., Sorenson, E. R., & Friesen, W. V. (1969). Pan-cultural elements in facial displays
of emotion. Science, 164, 86–88.
Faulkner, J., Schaller, M., Park, J. H., & Duncan, L. A. (2004). Evolved disease-avoidance
mechanisms and contemporary xenophobic attitudes. Group Processes & Intergroup Rela-
tions, 7, 333–353.
Federico, C. M., Weber, C. R., Ergun, D., & Hunt, C. (in press). Mapping the connections
between politics and morality: The multiple sociopolitical orientations involved in moral
intuition. Political Psychology.
Feinberg, M., & Willer, R. (2013). The moral roots of environmental attitudes. Psychological
Science.
Ferguson, M. J. (2007). On the automatic evaluation of end-states. Journal of Personality and
Social Psychology, 92, 596–611.
Fiske, A. P. (1991). Structures of social life: The four elementary forms of human relations: Communal
sharing, authority ranking, equality matching, market pricing. New York: Free Press.
Fiske, S. T. (1992). Thinking is for doing: Portraits of social cognition from daguerreotype to
laser photo. Journal of Personality and Social Psychology, 63, 877–889.
Fodor, J. A. (1983). Modularity of mind: An essay on faculty psychology. Cambridge, MA: MIT
Press.
Frank, R. (1988). Passions within reason: The strategic role of the emotions. New York: Norton.
Freud, S. (1923/1962). The ego and the id (J. Riviere, Trans.). New York: Norton.
Frimer, J. A., Biesanz, J. C., Walker, L. J., & MacKinlay, C. W. (in press). Liberals and con-
servatives rely on common moral foundations when making moral judgments about
influential people. Journal of Personality and Social Psychology.
Gigerenzer, G. (2007). Gut feelings: The intelligence of the unconscious. New York: Viking Press.
Gilligan, C. (1982). In a different voice: Psychological theory and women’s development. Cambridge,
MA: Harvard University Press.
Glenn, A. L., Iyer, R., Graham, J., Koleva, S., & Haidt, J. (2009). Are all types of morality
compromised in psychopathy? Journal of Personality Disorders, 23(4), 384–398.
Goodall, J. (1986). The chimpanzees of Gombe: Patterns of behavior. Cambridge, MA: Belknap
Press.
Gould, S. J., & Lewontin, R. C. (1979). The spandrels of San Marco and the Panglossian
paradigm: A critique of the adaptationist programme. Proceedings of the Royal Society of
London, 205B, 581–598.
Graham, J. (2010). Left gut, right gut: Ideology and automatic moral reactions, Doctoral dis-
sertation. Retrieved from ProQuest Dissertations and Theses (AAT 3437423).
Graham, J. (2013). Beyond economic games: A mutualistic approach to the rest of moral life
[Commentary on Baumard, André, & Sperber]. Behavioral and Brain Sciences.
Graham, J., & Haidt, J. (2010). Beyond beliefs: Religions bind individuals into moral com-
munities. Personality and Social Psychology Review, 14, 140–150.
Graham, J., & Haidt, J. (2012). Sacred values and evil adversaries: A Moral Foundations
approach. In P. Shaver & M. Mikulincer (Eds.), The social psychology of morality: Exploring
the causes of good and evil. New York: APA Books.
Graham, J., Haidt, J., & Nosek, B. A. (2009). Liberals and conservatives rely on different sets
of moral foundations. Journal of Personality and Social Psychology, 96, 1029–1046.
Graham, J., & Iyer, R. (2012). The unbearable vagueness of “essence”: Forty-four clarifica-
tion questions for Gray, Young, and Waytz. Psychological Inquiry, 23, 162–165.
Graham, J., Meindl, P., & Beall, E. (2012). Integrating the streams of morality research: The
case of political ideology. Current Directions in Psychological Science, 21, 373–377.
Graham, J., Nosek, B. A., & Haidt, J. (2012). The moral stereotypes of liberals and conservatives:
Exaggeration of differences across the political spectrum. PLoS One, 7, e50092.
Graham, J., Nosek, B. A., Haidt, J., Iyer, R., Koleva, S., & Ditto, P. H. (2011). Mapping the
moral domain. Journal of Personality and Social Psychology, 101, 366–385.
Gray, K., Young, L., & Waytz, A. (2012). Mind perception is the essence of morality.
Psychological Inquiry, 23, 101–124.
Greene, J. D., Morelli, S. A., Lowenberg, K., Nystrom, L. E., & Cohen, J. D. (2008). Cognitive
load selectively interferes with utilitarian moral judgment. Cognition, 107, 1144–1154.
Greenwald, A. G. (2012). There is nothing so theoretical as a good method. Perspectives on
Psychological Science, 7, 99–108.
Gudbjartsson, D. F., Walters, G. B., Thorleifsson, G., Stefansson, H., Halldorsson, B. V.,
Zusmanovich, P., et al. (2008). Many sequence variants affecting diversity of adult
human height. Nature Genetics, 40, 609–615.
Haidt, J. (2001). The emotional dog and its rational tail: A social intuitionist approach to
moral judgment. Psychological Review, 108, 814–834.
Haidt, J. (2007a). The new synthesis in moral psychology. Science, 316, 998–1002.
Haidt, J. (2007b). Moral psychology and the misunderstanding of religion. http://www.
edge.org/3rd_culture/haidt07/haidt07_index.html Retrieved on July 20, 2012.
Haidt, J. (2012). The righteous mind: Why good people are divided by politics and religion.
New York: Pantheon.
Haidt, J., & Bjorklund, F. (2008). Social intuitionists answer six questions about moral
psychology. In W. Sinnott-Armstrong (Ed.), Moral psychology, Vol. 2: The cognitive science
of morality: Intuition and diversity (pp. 181–217). Cambridge, MA: MIT Press.
Haidt, J., & Graham, J. (2007). When morality opposes justice: Conservatives have moral
intuitions that liberals may not recognize. Social Justice Research, 20, 98–116.
Haidt, J., & Graham, J. (2009). Planet of the Durkheimians, where community, authority,
and sacredness are foundations of morality. In J. Jost, A. C. Kay & H. Thorisdottir (Eds.),
Social and psychological bases of ideology and system justification (pp. 371–401). New York:
Oxford University Press.
Haidt, J., Graham, J., & Joseph, C. (2009). Above and below left-right: Ideological narratives
and moral foundations. Psychological Inquiry, 20, 110–119.
Haidt, J., & Hersh, M. A. (2001). Sexual morality: The cultures and reasons of liberals and
conservatives. Journal of Applied Social Psychology, 31, 191–221.
Haidt, J., & Joseph, C. (2004). Intuitive ethics: How innately prepared intuitions generate
culturally variable virtues. Daedalus, 133, 55–66.
Haidt, J., & Joseph, C. (2007). The moral mind: How 5 sets of innate intuitions guide the
development of many culture-specific virtues, and perhaps even modules. In P.
Carruthers, S. Laurence & S. Stich (Eds.), The innate mind: Vol. 3. (pp. 367–391).
New York: Oxford.
Haidt, J., & Joseph, C. (2011). How moral foundations theory succeeded in building on
sand: A response to Suhler and Churchland. Journal of Cognitive Neuroscience, 23,
2117–2122.
Haidt, J., Koller, S., & Dias, M. (1993). Affect, culture, and morality, or is it wrong to eat your
dog? Journal of Personality and Social Psychology, 65, 613–628.
Hamann, K., Warneken, F., Greenberg, J. R., & Tomasello, M. (2011). Collaboration
encourages equal sharing in children but not in chimpanzees. Nature, 476, 328–331.
Hamilton, W. D. (1964). The genetical evolution of social behavior. II. Journal of Theoretical
Biology, 7, 17–52.
Hamlin, J. K., Mahajan, N., Liberman, Z., & Wynn, K. (in press). Not like me = bad: Infants
prefer those who harm dissimilar others. Psychological Science.
Hamlin, K., Wynn, K., & Bloom, P. (2007). Social evaluation by preverbal infants. Nature,
450, 557–559.
Hammerstein, P. (2003). Why is reciprocity so rare in social animals? In P. Hammerstein
(Ed.), Genetic and cultural evolution of cooperation (pp. 55–82). Cambridge: MIT.
Harris, S. (2010). The moral landscape: How science can determine human values. New York: Free
Press.
Helzer, E. G., & Pizarro, D. A. (2011). Dirty liberals!: Reminders of physical cleanliness
influence moral and political attitudes. Psychological Science, 22, 517–522.
Henrich, J., Heine, S. J., & Norenzayan, A. (2010). The weirdest people in the world? The
Behavioral and Brain Sciences, 33, 61–83.
Henrich, N., & Henrich, J. (2007). Why humans cooperate: A cultural and evolutionary explana-
tion. New York: Oxford University Press.
Herdt, G. H. (1981). Guardians of the flutes. New York: Columbia University Press.
Hibbing, J. R., & Smith, K. B. (2007). The biology of political behavior. The Annals of the
American Academy of Political and Social Science, 617, 6–14.
Hirschfeld, L. A. (2002). Why don’t anthropologists like children? American Anthropologist,
104, 611–627.
Hirsh, J. B., DeYoung, C. G., Xu, X., & Peterson, J. B. (2010). Compassionate liberals and
polite conservatives: Associations of agreeableness with political ideology and moral
values. Personality and Social Psychology Bulletin, 36, 655–664.
Hollos, M., Leis, P., & Turiel, E. (1986). Social reasoning in Ijo children and adolescents in
Nigerian communities. Journal of Cross-Cultural Psychology, 17, 352–374.
Hong, S. (1996). Refinement of the Hong psychological reactance scale. Educational and Psy-
chological Measurement, 56, 173–182.
Horberg, E. J., Oveis, C., Keltner, D., & Cohen, A. B. (2009). Disgust and the moralization
of purity. Journal of Personality and Social Psychology, 97(6), 963–976.
Hrdy, S. B. (2009). Mothers and others: The evolutionary origins of mutual understanding.
Cambridge, MA: Harvard.
Hunter, J. D. (1991). Culture wars: The struggle to define America. New York: Basic Books.
Hutcherson, C. A., & Gross, J. J. (2011). The moral emotions: A social–functionalist account
of anger, disgust, and contempt. Journal of Personality and Social Psychology, 100,
719–737.
Inbar, Y., Pizarro, D. A., & Bloom, P. (2009). Conservatives are more easily disgusted than
liberals. Cognition and Emotion, 23, 714–725.
Inbar, Y., Pizarro, D. A., & Bloom, P. (2011). Disgusting smells cause decreased liking of gay
men. Emotion, 12, 23–27.
Inbar, Y., Pizarro, D. A., & Cushman, F. (2012). Benefiting from misfortune: When harmless
actions are judged to be morally blameworthy. Personality and Social Psychology Bulletin,
38, 52–62.
Inbar, Y., Pizarro, D., Iyer, R., & Haidt, J. (2012). Disgust sensitivity, political conservatism,
and voting. Social Psychological and Personality Science, 3, 537–544.
Inbar, Y., Pizarro, D. A., Knobe, J., & Bloom, P. (2009). Disgust sensitivity predicts intuitive
disapproval of gays. Emotion, 9(3), 435.
Iyer, R. (2009). What are the basic foundations of morality? http://www.polipsych.com/
2009/11/13/what-are-the-basic-foundations-of-morality/ Retrieved on June 26, 2012.
Iyer, R. (2010). The case for honesty as a moral foundation. http://www.polipsych.com/
2010/12/07/the-case-for-honesty-as-a-moral-foundation/ Retrieved on June 26,
2012.
Iyer, R., Graham, J., Koleva, S., Ditto, P., & Haidt, J. (2010). Beyond identity politics: Moral
psychology and the 2008 Democratic primary. Analyses of Social Issues and Public Policy,
10, 293–306.
Iyer, R., Koleva, S. P., Graham, J., Ditto, P. H., & Haidt, J. (2012). Understanding libertarian
morality: The psychological roots of an individualist ideology. PLoS One, 7, e42366.
Iyer, R., Read, S. J., & Correia, J. (2010). Functional justice: Productivity and well-being
goals define fairness. Available at SSRN: http://ssrn.com/abstract=1691969 or
http://dx.doi.org/10.2139/ssrn.1691969.
James, W. (1890/1950). The principles of psychology. New York: Dover.
James, W. (1909/1987). A pluralistic universe. New York: Library of America.
Janoff-Bulman, R., & Sheikh, S. (2012). The forbidden, the obligatory, and the permitted:
Moral regulation and political orientation. Paper presented to the Society for Personality
and Social Psychology annual conference, San Diego, CA.
Janoff-Bulman, R., Sheikh, S., & Baldacci, K. (2008). Mapping moral motives: Approach,
avoidance, and political orientation. Journal of Experimental Social Psychology, 44,
1091–1099.
Janoff-Bulman, R., Sheikh, S., & Hepp, S. (2009). Proscriptive versus prescriptive morality:
Two faces of moral regulation. Journal of Personality and Social Psychology, 96, 521–537.
Jarudi, I. N. (2009). Everyday morality and the status quo: Conservative concerns about
moral purity, moral evaluations of everyday objects, and moral objections to perfor-
mance enhancement, Doctoral dissertation, Yale University.
Jones, B. (2012). The morality of representation: Constituent moral foundations and
position-taking in Congress. Social Science Research Network. http://ssrn.com/
abstract=2018491, http://dx.doi.org/10.2139/ssrn.2018491.
Joseph, C. (2002). Morality and the virtues in Islam, Doctoral dissertation, University of
Chicago.
Jost, J. T. (2006). The end of the end of ideology. American Psychologist, 61, 651–670.
Jost, J. T. (2009). Group morality and ideology: Left and right, right and wrong. Paper pres-
ented to the Society for Personality and Social Psychology annual conference, Tampa,
FL.
Jost, J. T., Glaser, J., Kruglanski, A. W., & Sulloway, F. J. (2003). Political conservatism as
motivated social cognition. Psychological Bulletin, 129, 339.
Joyce, R. (2006). The evolution of morality. Cambridge, MA: The MIT Press.
Kahneman, D. (2011). Thinking, fast and slow. New York: Farrar, Straus and Giroux.
Kesebir, P., & Pyszczynski, T. (2011). A moral-existential account of the psychological fac-
tors fostering intergroup conflict. Social and Personality Psychology Compass, 5, 878–890.
Killen, M., & Smetana, J. G. (2006). Handbook of moral development. Mahwah, New Jersey:
Erlbaum.
Kim, K. R., Kang, J., & Yun, S. (2012). Moral intuitions and political orientation: Similarities
and differences between Korea and the United States. Psychological Reports, 111, 173–185.
Kinzler, K. D., Dupoux, E., & Spelke, E. S. (2007). The native language of social cognition.
Proceedings of the National Academy of Sciences of the United States of America, 104(30),
12577–12580. http://dx.doi.org/10.1073/pnas.0705345104.
Knutson, K., Krueger, F., Koenigs, M., Hawley, A., Escobedo, J., Vasudeva, V., et al. (2009).
Behavioral norms for condensed moral vignettes. Social Cognitive and Affective Neurosci-
ence, 5(4), 378–384.
Kohlberg, L. (1969). Stage and sequence: The cognitive-developmental approach to social-
ization. In D. A. Goslin (Ed.), Handbook of socialization theory and research (pp. 347–480).
Chicago: Rand McNally.
Kohlberg, L. (1971). From is to ought: How to commit the naturalistic fallacy and get away
with it in the study of moral development. In T. Mischel (Ed.), Psychology and genetic epis-
temology (pp. 151–235). New York: Academic Press.
Kohlberg, L., Levine, C., & Hewer, A. (1983). Moral stages: A current formulation and a response
to critics. Basel, Switzerland: Karger.
Koleva, S. (2011). Birds of a moral feather: The role of morality in romantic attraction and
relationship satisfaction, Doctoral dissertation. Retrieved from ProQuest Dissertations
and Theses (AAT 3472884).
Koleva, S., Graham, J., Iyer, Y., Ditto, P. H., & Haidt, J. (2012). Tracing the threads: How
five moral concerns (especially purity) help explain culture war attitudes. Journal of
Research in Personality, 46(2), 184–194.
Koleva, S., & Haidt, J. (2012). Let’s use Einstein’s safety razor, not Occam’s Swiss Army knife
or Occam’s chainsaw. Commentary on target article, Mind perception is the essence of moral-
ity (K. Gray, L. Young, & A. Waytz). Psychological Inquiry, 23, 175–178.
Koonz, C. (2003). The Nazi conscience. Cambridge, MA: Belknap.
Kunda, Z. (1990). The case for motivated reasoning. Psychological Bulletin, 108, 480–498.
Kurzban, R., Dukes, A., & Weeden, J. (2010). Sex, drugs and moral goals: Reproductive
strategies and views about recreational drugs. Proceedings of the Royal Society B, 277,
3501–3508.
Lewin, K. (1951). Field theory in social science. Chicago: University of Chicago Press.
Lewis, G. J., & Bates, T. C. (2011). From left to right: How the personality system allows
basic traits to influence politics via characteristic moral adaptations. British Journal of Psy-
chology, 102, 1–13.
Lewis, G. J., Kanai, R., Bates, T. C., & Rees, G. (2012). Moral values are associated with
individual differences in regional brain volume. Journal of Cognitive Neuroscience, 24,
1657–1663.
Lilienfeld, S. O., Ammirati, R., & Landfield, K. (2009). Giving debiasing away: Can psycho-
logical research on correcting cognitive errors promote human welfare? Perspectives on
Psychological Science, 4, 390–398.
LoBue, V., Chiong, C., Nishida, T., DeLoache, J., & Haidt, J. (2011). When getting some-
thing good is bad: Even 3-year-olds react to inequality. Social Development, 20,
154–170.
Luo, Q., Nakic, M., Wheatley, T., Richell, R., Martin, A., & Blair, R. J. R. (2006). The
neural basis of implicit moral attitude—An IAT study using event-related fMRI.
NeuroImage, 30, 1449–1457.
Makiniemi, J., Pirttila-Backman, A., & Pieri, M. (in press). The endorsement of the moral
foundations in food-related moral thinking in three European countries. Journal of Agri-
cultural and Environmental Ethics.
Marcus, G. (2004). The birth of the mind. New York: Basic.
Marcus, G. (2008). Kluge: The haphazard construction of the human mind. Boston: Houghton
Mifflin.
Markowitz, E. M., & Shariff, A. F. (2012). Climate change and moral judgement: Psycho-
logical challenges and opportunities. Nature Climate Change, 2, 243–247.
Marler, P. (1991). The instinct to learn. In S. Carey & R. Gelman (Eds.), The epigenesis of
mind: Essays on biology and cognition. Mahwah, NJ: Erlbaum.
Marques, J. M., Yzerbyt, V. Y., & Leyens, J. P. (1988). The ‘black sheep effect’: Extremity of
judgments towards ingroup members as a function of group identification. European Jour-
nal of Social Psychology, 18, 1–16.
McAdams, D. P. (1995). What do we know when we know a person? Journal of Personality,
63, 365–396.
McAdams, D., Albaugh, M., Farber, E., Daniels, J., Logan, R., & Olson, B. (2008). Family
metaphors and moral intuitions: How conservatives and liberals narrate their lives. Journal
of Personality and Social Psychology, 95(4), 978–990.
McAdams, D. P., & Pals, J. L. (2006). A new Big Five: Fundamental principles for an inte-
grative science of personality. American Psychologist, 61, 204–217.
McClosky, H., & Zaller, J. (1984). The American ethos: Public attitudes toward capitalism and
democracy. Cambridge, MA: Harvard University Press.
McGregor, H. A., Lieberman, J. D., Greenberg, J., Solomon, S., Arndt, J., Simon, L., et al.
(1998). Terror management and aggression: Evidence that mortality salience motivates
aggression against worldview-threatening others. Journal of Personality and Social Psychol-
ogy, 74(3), 590.
Mercier, H., & Sperber, D. (2010). Why do humans reason? Arguments for an argumentative
theory. The Behavioral and Brain Sciences, 34, 57–74.
Mikhail, J. (2007). Universal moral grammar: Theory, evidence and the future. Trends in
Cognitive Sciences, 11(4), 143–152.
Mineka, S., & Cook, M. (1988). Social learning and the acquisition of snake fear in monkeys.
In T. R. Zentall & J. B. G. Galef (Eds.), Social learning: Psychological and biological perspec-
tives (pp. 51–74). Hillsdale, NJ: Lawrence Erlbaum.
Motyl, M. (2012). How moral migration geographically polarizes the electorate. Invited talk
given at the Society for Experimental Social Psychology’s annual conference, Austin, TX.
Motyl, M., & Pyszczynski, T. (2009). The existential underpinnings of the cycle of violence
and terrorist and counterterrorist pathways to peaceful resolutions. International Review of
Social Psychology, 22, 267–291.
Narvaez, D. (2008). The social-intuitionist model: Some counter-intuitions. In W. A.
Sinnott-Armstrong (Ed.), Moral psychology, Vol. 2, The cognitive science of morality: Intuition
and diversity (pp. 233–240). Cambridge, MA: MIT Press.
Narvaez, D. (2010). Moral complexity: The fatal attraction of truthiness and the importance
of mature moral functioning. Perspectives on Psychological Science, 5, 163–181.
Navarrete, C. D., & Fessler, D. M. T. (2006). Disease avoidance and ethnocentrism: The
effects of disease vulnerability and disgust sensitivity on intergroup attitudes. Evolution
and Human Behavior, 27, 270–282.
Neuberg, S. L., Kenrick, D. T., & Schaller, M. (2010). Evolutionary social psychology. In S.
T. Fiske, D. T. Gilbert & G. Lindzey (Eds.), Handbook of social psychology (pp. 761–796).
(5th ed.). New York: John Wiley & Sons.
Nisbett, R. E., & Wilson, T. D. (1977). Telling more than we can know: Verbal reports on
mental processes. Psychological Review, 84, 231–259.
Nosek, B. A., Graham, J., & Hawkins, C. B. (2010). Implicit political cognition. In B.
Gawronski & B. K. Payne (Eds.), Handbook of implicit social cognition (pp. 548–564).
New York, NY: Guilford.
Nucci, L., & Turiel, E. (1978). Social interactions and the development of social concepts in
preschool children. Child Development, 49, 400–407.
Oaten, M., Stevenson, R. J., & Case, T. I. (2009). Disgust as a disease avoidance mechanism.
Psychological Bulletin, 135, 303–321.
Oishi, S., & Graham, J. (2010). Social ecology: Lost and found in psychological science. Per-
spectives on Psychological Science, 5, 356–377.
Opie, I., & Opie, P. (1969). Children’s games in street and playground. Oxford, UK: Clarendon
Press.
Oxley, D. (2010). Fairness, justice and an individual basis for public policy, Doctoral disser-
tation, University of Nebraska.
Oxley, D. R., Smith, K. B., Alford, J. R., Hibbing, M. V., Miller, J. L., Scalora, M., et al.
(2008). Political attitudes vary with physiological traits. Science, 321, 1667–1670.
Parkinson, C., Sinnott-Armstrong, W., Koralus, P. E., Mendelovici, A., McGeer, V., &
Wheatley, T. (2011). Is morality unified? Evidence that distinct neural systems underlie
judgments of harm, dishonesty, and disgust. Journal of Cognitive Neuroscience, 23,
3162–3180.
Schnall, S., Haidt, J., Clore, G. L., & Jordan, A. H. (2008). Disgust as embodied moral judg-
ment. Personality and Social Psychology Bulletin, 34, 1096–1109.
Schwartz, S. H. (1992). Universals in the content and structure of values. In M. P. Zanna
(Ed.), Advances in experimental social psychology: Vol. 25. (pp. 1–65). New York: Academic
Press.
Schwartz, S. H., & Bilsky, W. (1990). Toward a theory of the universal content and structure
of values: Extensions and cross-cultural replications. Journal of Personality and Social Psy-
chology, 58, 878–891.
Seligman, M. E. P. (1971). Phobias and preparedness. Behavior Therapy, 2, 307–320.
Sherif, M., Harvey, O. J., White, B. J., Hood, W., & Sherif, C. (1954/1961). Intergroup conflict
and cooperation: The Robbers Cave experiment. Norman, OK: University of Oklahoma
Institute of Group Relations.
Shook, N. J., & Fazio, R. H. (2009). Political ideology, exploration of novel stimuli, and
attitude formation. Journal of Experimental Social Psychology, 45(4), 995–998.
Shweder, R. A. (1990). In defense of moral realism: Reply to Gabennesch. Child Develop-
ment, 61, 2060–2067.
Shweder, R. A., Mahapatra, M., & Miller, J. (1987). Culture and moral development.
In J. Kagan & S. Lamb (Eds.), The emergence of morality in young children (pp. 1–83).
Chicago: University of Chicago Press.
Shweder, R. A., Much, N. C., Mahapatra, M., & Park, L. (1997). The “big three” of moral-
ity (autonomy, community, and divinity), and the “big three” explanations of suffering.
In A. Brandt, & P. Rozin (Eds.), Morality and health (pp. 119–169). New York:
Routledge.
Simon, H. (1992). What is an “explanation” of behavior? Psychological Science, 3, 150–161.
Skitka, L. J., Mullen, E., Griffin, T., Hutchinson, S., & Chamberlin, B. (2002). Dispositions,
scripts, or motivated correction?: Understanding ideological differences in explanations
for social problems. Journal of Personality and Social Psychology, 83(2), 470.
Sloane, S., Baillargeon, R., & Premack, D. (2012). Do infants have a sense of fairness? Psy-
chological Science, 23, 196–204.
Smith, K. B., Oxley, D. R., Hibbing, M. V., Alford, J. R., & Hibbing, J. R. (2011). Linking
genetics and political attitudes: Re-conceptualizing political ideology. Political Psychology,
32, 369–397.
Spelke, E. S. (2000). Core knowledge. American Psychologist, 55, 1233–1243.
Sperber, D. (1994). The modularity of thought and the epidemiology of representations. In
L. A. Hirschfeld, & S. A. Gelman (Eds.), Mapping the mind: Domain specificity in cognition
and culture (pp. 39–67). Cambridge, UK: Cambridge University Press.
Sperber, D. (2005). Modularity and relevance: How can a massively modular mind be
flexible and context-sensitive? In P. Carruthers, S. Laurence, & S. P. Stich (Eds.),
The innate mind: Structure and contents: Vol. 1. (pp. 53–68). New York: Oxford
University Press.
Spiro, M. (1956). Kibbutz: Venture in utopia. Cambridge, MA: Harvard University Press.
Stanovich, K. E., & West, R. F. (2000). Individual differences in reasoning: Implications for the
rationality debate? The Behavioral and Brain Sciences, 23, 645–726.
Stenner, K. (2005). The authoritarian dynamic. New York: Cambridge University Press.
Suhler, C. L., & Churchland, P. (2011). Can innate, modular “foundations” explain moral-
ity? Challenges for Haidt’s moral foundations theory. Journal of Cognitive Neuroscience,
23(9), 2103–2116.
Tabibnia, G., Satpute, A. B., & Lieberman, M. D. (2008). The sunny side of fairness: Pref-
erence for fairness activates reward circuitry (and disregarding unfairness activates self-
control circuitry). Psychological Science, 19, 339–347.
Tamborini, R. (2011). Moral intuition and media entertainment. Journal of Media Psychology:
Theories, Methods, and Applications, 23, 39–45.
Tamborini, R., Eden, A., Bowman, N. D., Grizzard, M., & Lachlan, K. A. (2012). The influ-
ence of morality subcultures on the acceptance and appeal of violence. Journal of Com-
munication, 62(1), 136–157.
Tannenbaum, D., Uhlmann, E. L., & Diermeier, D. (2011). Moral signals, public outrage,
and immaterial harms. Journal of Experimental Social Psychology, 47, 1249–1254.
Tetlock, P. E. (2002). Social-functionalist frameworks for judgment and choice: The intu-
itive politician, theologian, and prosecutor. Psychological Review, 109, 451–472.
Tomasello, M., Carpenter, M., Call, J., Behne, T., & Moll, H. (2005). Understanding and
sharing intentions: The origins of cultural cognition. The Behavioral and Brain Sciences, 28
(5), 675–735.
Tooby, J., & Cosmides, L. (1992). The psychological foundations of culture. In J. H.
Barkow, L. Cosmides, & J. Tooby (Eds.), The adapted mind: Evolutionary psychology
and the generation of culture (pp. 19–136). New York: Oxford University Press.
Tooby, J., Cosmides, L., & Barrett, H. C. (2005). Resolving the debate on innate ideas:
Learnability constraints and the evolved interpenetration of motivational and conceptual
functions. In P. Carruthers, S. Laurence, & S. Stich (Eds.), The innate mind: Structure and
contents (pp. 305–337). New York: Oxford University Press.
Triandis, H. C. (1995). Individualism and collectivism. Boulder, CO: Westview.
Trivers, R. L. (1971). The evolution of reciprocal altruism. The Quarterly Review of Biology,
46, 35–57.
Turiel, E. (1979). Distinct conceptual and developmental domains: Social-convention and
morality. Nebraska Symposium on Motivation. Lincoln, NE: University of Nebraska Press.
Turiel, E. (1983). The development of social knowledge: Morality and convention. Cambridge,
England: Cambridge University Press.
Turiel, E., Killen, M., & Helwig, C. C. (1987). Morality: Its structure, function, and vagaries.
In J. Kagan, & S. Lamb (Eds.), The emergence of morality in young children (pp. 155–243).
Chicago: University of Chicago Press.
Turkheimer, E. (2000). Three laws of behavior genetics and what they mean. Current Direc-
tions in Psychological Science, 9, 160–164.
Van Berkum, J. J. A., Holleman, B., Nieuwland, M., Otten, M., & Murre, J. (2009). Right or
wrong? The brain’s fast response to morally objectionable statements. Psychological Science,
20(9), 1092–1099.
Van Leeuwen, F., & Park, J. H. (2009). Perceptions of social dangers, moral foundations, and
political orientation. Personality and Individual Differences, 47, 169–173.
Van Leeuwen, F., Park, J. H., Koenig, B. L., & Graham, J. (2012). Regional variation in
pathogen prevalence predicts endorsement of group-focused moral concerns. Evolution
and Human Behavior, 33, 429–437.
Van Vugt, M., & Park, J. H. (2009). Guns, germs, and sex: How evolution shaped our inter-
group psychology. Social and Personality Psychology Compass, 3(6), 927–938.
Vauclair, C., & Fischer, R. (2011). Do cultural values predict individuals’ moral attitudes? A
cross-cultural multilevel approach. European Journal of Social Psychology, 41(5), 645–657.
Walster, E., Walster, G., & Berscheid, E. (1978). Equity: Theory and research. Boston: Allyn &
Bacon.
Waytz, A., Epley, N., & Cacioppo, J. T. (2010). Social cognition unbound. Current Directions
in Psychological Science, 19(1), 58.
Weber, C., & Federico, C. M. (2007). Interpersonal attachment and patterns of ideological
belief. Political Psychology, 28(4), 389–416.
Weber, C., & Federico, C. M. (in press). Moral foundations and heterogeneity in ideological
preferences. Political Psychology.
Wegner, D. M., & Bargh, J. A. (1998). Control and automaticity in social life. In D.
Gilbert, S. T. Fiske, & G. Lindzey (Eds.), Handbook of social psychology: Vol. 1.
(pp. 446–496). (4th ed.). New York: McGraw-Hill.
130 Jesse Graham et al.
Wheatley, T., & Haidt, J. (2005). Hypnotic disgust makes moral judgments more severe. Psy-
chological Science, 16, 780–784.
Wiessner, P. (2005). Norm enforcement among the Ju/’hoansi Bushmen. Human Nature, 16,
115–145.
Wilson, D. S. (2002). Darwin’s cathedral: Evolution, religion, and the nature of society. Chicago:
University of Chicago Press.
Winegard, B., & Deaner, R. O. (2010). The evolutionary significance of Red Sox nation:
Sport fandom as a byproduct of coalitional psychology. Evolutionary Psychology, 8(3),
432–446.
Winterich, K., Zhang, Y., & Mittal, V. (2012). How political identity and charity positioning
increase donations: Insights from Moral Foundations Theory. International Journal of
Research in Marketing: Special Issue on Consumer Identities, 29, 346–354.
Wright, R. (1994). The moral animal. New York: Pantheon.
Wright, J. C., & Baril, G. (2011). The role of cognitive resources in determining our moral
intuitions: Are we all liberals at heart? Journal of Experimental Social Psychology, 47,
1007–1012.
Young, L., & Saxe, R. (2011). When ignorance is no excuse: Different roles for intent across
moral domains. Cognition, 120, 202–214.
Zajonc, R. B. (1980). Feeling and thinking: Preferences need no inferences. American Psy-
chologist, 35, 151–175.