Thinking, Fast and Slow by Daniel Kahneman - Summary & Notes

This document summarizes key ideas from Daniel Kahneman's book Thinking, Fast and Slow, which describes the two systems that govern how our minds work: System 1, which operates automatically and quickly, and System 2, which carries out effortful cognition. Key points include: intuitive judgments often substitute easier questions for harder ones; overconfidence is fed by hindsight bias; attention and self-control are limited resources; priming effects unconsciously influence our thoughts and actions; and cognitive ease shapes memory and decision making.

Introduction

Valid intuitions develop when experts have learned to recognize familiar elements
in a new situation and to act in a manner that is appropriate to it.
The essence of intuitive heuristics: when faced with a difficult question, we often
answer an easier one instead, usually without noticing the substitution.
We are prone to overestimate how much we understand about the world and to
underestimate the role of chance in events. Overconfidence is fed by the illusory
certainty of hindsight. My views on this topic have been influenced by Nassim
Taleb, the author of The Black Swan.

Part 1: Two Systems


Chapter 1: The Characters of the Story
System 1 operates automatically and quickly, with little or no effort and no sense of
voluntary control.
System 2 allocates attention to the effortful mental activities that demand it,
including complex computations. The operations of System 2 are often associated
with the subjective experience of agency, choice, and concentration.
I describe System 1 as effortlessly originating impressions and feelings that are the
main sources of the explicit beliefs and deliberate choices of System 2. The
automatic operations of System 1 generate surprisingly complex patterns of ideas,
but only the slower System 2 can construct thoughts in an orderly series of steps.

In rough order of complexity, here are some examples of the automatic activities that
are attributed to System 1:

Detect that one object is more distant than another.


Orient to the source of a sudden sound.

The highly diverse operations of System 2 have one feature in common: they require
attention and are disrupted when attention is drawn away. Here are some examples:

Focus on the voice of a particular person in a crowded and noisy room.


Count the occurrences of the letter a in a page of text.
Check the validity of a complex logical argument.
It is the mark of effortful activities that they interfere with each other, which is why it
is difficult or impossible to conduct several at once.
The gorilla study illustrates two important facts about our minds: we can be blind to
the obvious, and we are also blind to our blindness.
One of the tasks of System 2 is to overcome the impulses of System 1. In other
words, System 2 is in charge of self-control.
The best we can do is a compromise: learn to recognize situations in which
mistakes are likely and try harder to avoid significant mistakes when the stakes are
high.

Chapter 2: Attention and Effort


People, when engaged in a mental sprint, become effectively blind.
As you become skilled in a task, its demand for energy diminishes. Talent has
similar effects.
One of the significant discoveries of cognitive psychologists in recent decades is
that switching from one task to another is effortful, especially under time pressure.

Chapter 3: The Lazy Controller


It is now a well-established proposition that both self-control and cognitive effort
are forms of mental work. Several psychological studies have shown that people
who are simultaneously challenged by a demanding cognitive task and by a
temptation are more likely to yield to the temptation.
People who are cognitively busy are also more likely to make selfish choices, use
sexist language, and make superficial judgments in social situations. A few drinks
have the same effect, as does a sleepless night.
Baumeister’s group has repeatedly found that an effort of will or self-control is
tiring; if you have had to force yourself to do something, you are less willing or less
able to exert self-control when the next challenge comes around. The phenomenon
has been named ego depletion.
The evidence is persuasive: activities that impose high demands on System 2
require self-control, and the exertion of self-control is depleting and unpleasant.
Unlike cognitive load, ego depletion is at least in part a loss of motivation. After
exerting self-control in one task, you do not feel like making an effort in another,
although you could do it if you really had to. In several experiments, people were
able to resist the effects of ego depletion when given a strong incentive to do so.
Restoring glucose levels can counteract mental depletion.
Chapter 4: The Associative Machine
Priming effects take many forms. If the idea of EAT is currently on your mind
(whether or not you are conscious of it), you will be quicker than usual to recognize
the word SOUP when it is spoken in a whisper or presented in a blurry font. And of
course you are primed not only for the idea of soup but also for a multitude of food-
related ideas, including fork, hungry, fat, diet, and cookie.
Priming is not limited to concepts and words; your actions and emotions can be
primed by events of which you are not even aware, including simple gestures.
Money seems to prime individualism: reluctance to be involved with, depend on, or
accept demands from others.
Note: the effects of primes are robust but not necessarily large; likely only a few in
a hundred voters will be affected.

Chapter 5: Cognitive Ease


Cognitive ease: no threats, no major news, no need to redirect attention or
mobilize effort.
Cognitive strain: affected by both the current level of effort and the presence of
unmet demands; requires increased mobilization of System 2.
Memories and thinking are subject to illusions, just as the eyes are.
Predictable illusions inevitably occur if a judgement is based on an impression of
cognitive ease or strain.
A reliable way to make people believe in falsehoods is frequent repetition, because
familiarity is not easily distinguished from truth.
If you want recipients to believe something, the general principle is to reduce
cognitive strain: make the font legible, use high-quality paper to maximize contrast,
print in bright colours, use simple language, put things in verse (to make them
memorable), and if you quote a source, make sure its name is easy to pronounce.
Weird example: stocks with pronounceable tickers do better over time.
Mood also affects performance: happy moods dramatically improve accuracy.
Good mood, intuition, creativity, gullibility and increased reliance on System 1 form
a cluster.
At the other pole, sadness, vigilance, suspicion, an analytic approach, and
increased effort also go together. A happy mood loosens the control of System 2
over performance: when in a good mood, people become more intuitive and more
creative but also less vigilant and more prone to logical errors.

Chapter 6: Norms, Surprises, and Causes


We can detect departures from the norm (even small ones) within two-tenths of a
second.

Chapter 7: A Machine for Jumping to Conclusions


Jumping to conclusions is efficient if the conclusions are likely to be correct and the
costs of an occasional mistake acceptable, and if the jump saves much time and
effort. Jumping to conclusions is risky when the situation is unfamiliar, the stakes
are high, and there is no time to collect more information.

A Bias to Believe and Confirm

The operations of associative memory contribute to a general confirmation bias.


When asked, "Is Sam friendly?" different instances of Sam’s behavior will come to
mind than would if you had been asked "Is Sam unfriendly?" A deliberate search
for confirming evidence, known as positive test strategy, is also how System 2
tests a hypothesis. Contrary to the rules of philosophers of science, who advise
testing hypotheses by trying to refute them, people (and scientists, quite often)
seek data that are likely to be compatible with the beliefs they currently hold.

Exaggerated Emotional Coherence (Halo Effect)

If you like the president’s politics, you probably like his voice and his appearance
as well. The tendency to like (or dislike) everything about a person—including
things you have not observed—is known as the halo effect.
To counter this, you should decorrelate errors: to get useful information from
multiple sources, make sure the sources are independent, then compare.
The principle of independent judgments (and decorrelated errors) has immediate
applications for the conduct of meetings, an activity in which executives in
organizations spend a great deal of their working days. A simple rule can help:
before an issue is discussed, all members of the committee should be asked to
write a very brief summary of their position.

What You See is All There is (WYSIATI)


The measure of success for System 1 is the coherence of the story it manages to
create. The amount and quality of the data on which the story is based are largely
irrelevant. When information is scarce, which is a common occurrence, System 1
operates as a machine for jumping to conclusions.
WYSIATI: What you see is all there is.
WYSIATI helps explain some biases of judgement and choice, including:
Overconfidence: As the WYSIATI rule implies, neither the quantity nor the quality
of the evidence counts for much in subjective confidence. The confidence that
individuals have in their beliefs depends mostly on the quality of the story they can
tell about what they see, even if they see little.
Framing effects: Different ways of presenting the same information often evoke
different emotions. The statement that the odds of survival one month after surgery
are 90% is more reassuring than the equivalent statement that mortality within one
month of surgery is 10%.
Base-rate neglect: Recall Steve, the meek and tidy soul who is often believed to
be a librarian. The personality description is salient and vivid, and although you
surely know that there are more male farmers than male librarians, that statistical
fact almost certainly did not come to your mind when you first considered the
question.

Chapter 9: Answering an Easier Question


We often generate intuitive opinions on complex matters by substituting the target
question with a related question that is easier to answer.
The present state of mind affects how people evaluate their happiness.
Affect heuristic: people let their likes and dislikes determine their beliefs
about the world. Your political preference determines the arguments that you find
compelling.
If you like the current health policy, you believe its benefits are substantial and its
costs more manageable than the costs of alternatives.

Part 2: Heuristics and Biases


Chapter 10: The Law of Small Numbers
A random event, by definition, does not lend itself to explanation, but collections of
random events do behave in a highly regular fashion.
Large samples are more precise than small samples.
Small samples yield extreme results more often than large samples do.

A Bias of Confidence Over Doubt

The strong bias toward believing that small samples closely resemble the
population from which they are drawn is also part of a larger story: we are prone to
exaggerate the consistency and coherence of what we see.

Cause and Chance

Our predilection for causal thinking exposes us to serious mistakes in evaluating
the randomness of truly random events.

Chapter 11: Anchoring Effects


The phenomenon we were studying is so common and so important in the
everyday world that you should know its name: it is an anchoring effect. It occurs
when people consider a particular value for an unknown quantity before estimating
that quantity. What happens is one of the most reliable and robust results of
experimental psychology: the estimates stay close to the number that people
considered—hence the image of an anchor.

The Anchoring Index

The anchoring measure would be 100% for people who slavishly adopt the anchor
as an estimate, and zero for people who are able to ignore the anchor altogether.
The value of 55% that was observed in this example is typical. Similar values have
been observed in numerous other problems.
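
As a rough sketch of how such an anchoring index can be computed, here is a small example; the anchors and group means below are invented for illustration, not figures from the book:

```python
# Hypothetical illustration of the anchoring index described above.
# Two groups estimate the same unknown quantity after seeing different
# anchors; all numbers here are invented for the example.

high_anchor, low_anchor = 1200, 180        # anchors shown to each group
mean_estimate_high = 850                   # mean estimate of the high-anchor group
mean_estimate_low = 300                    # mean estimate of the low-anchor group

# Anchoring index: difference between the groups' mean estimates as a
# fraction of the difference between the anchors (0% = anchor ignored,
# 100% = anchor adopted outright).
anchoring_index = (mean_estimate_high - mean_estimate_low) / (high_anchor - low_anchor)
print(f"Anchoring index: {anchoring_index:.0%}")   # -> 54%, close to the typical ~55%
```
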
Powerful anchoring effects are found in decisions that people make about money,
such as when they choose how much to contribute to a cause.
In general, a strategy of deliberately "thinking the opposite" may be a good
defense against anchoring effects, because it negates the biased recruitment of
thoughts that produces these effects.

Chapter 12: The Science of Availability


The availability heuristic, like other heuristics of judgment, substitutes one question
for another: you wish to estimate the size of a category or the frequency of an
event, but you report an impression of the ease with which instances come to
mind. Substitution of questions inevitably produces systematic errors.
You can discover how the heuristic leads to biases by following a simple
procedure: list factors other than frequency that make it easy to come up with
instances. Each factor in your list will be a potential source of bias.
Resisting this large collection of potential availability biases is possible, but
tiresome. You must make the effort to reconsider your impressions and intuitions
by asking such questions as, "Is our belief that thefts by teenagers are a major
problem due to a few recent instances in our neighborhood?" or "Could it be that I
feel no need to get a flu shot because none of my acquaintances got the flu last
year?" Maintaining one’s vigilance against biases is a chore—but the chance to
avoid a costly mistake is sometimes worth the effort.

The Psychology of Availability

For example, people:

believe that they use their bicycles less often after recalling many rather than few
instances
are less confident in a choice when they are asked to produce more arguments to
support it
are less confident that an event was avoidable after listing more ways it could have
been avoided
are less impressed by a car after listing many of its advantages

The difficulty of coming up with more examples surprises people, and they subsequently
change their judgement.

The following are some conditions in which people "go with the flow" and are affected
more strongly by ease of retrieval than by the content they retrieved:

when they are engaged in another effortful task at the same time
when they are in a good mood because they just thought of a happy episode in
their life
if they score low on a depression scale
if they are knowledgeable novices on the topic of the task, in contrast to true
experts
when they score high on a scale of faith in intuition
if they are (or are made to feel) powerful
Chapter 13: Availability, Emotion, and Risk
The affect heuristic is an instance of substitution, in which the answer to an easy
question (How do I feel about it?) serves as an answer to a much harder question
(What do I think about it?).
Experts tend to measure risk more objectively, weighing the total number of lives
saved or something similar, while many citizens also distinguish between “good”
and “bad” types of deaths.
An availability cascade is a self-sustaining chain of events, which may start from
media reports of a relatively minor event and lead up to public panic and large-
scale government action.
The Alar tale illustrates a basic limitation in the ability of our mind to deal with small
risks: we either ignore them altogether or give them far too much weight—nothing
in between.
In today’s world, terrorists are the most significant practitioners of the art of
inducing availability cascades.
Psychology should inform the design of risk policies that combine the experts’
knowledge with the public’s emotions and intuitions.

Chapter 14: Tom W’s Specialty


The representativeness heuristic is involved when someone says "She will win the
election; you can see she is a winner" or "He won’t go far as an academic; too
many tattoos."

One sin of representativeness is an excessive willingness to predict the occurrence of
unlikely (low base-rate) events. Here is an example: you see a person reading The New
York Times on the New York subway. Which of the following is a better bet about the
reading stranger?

She has a PhD.


She does not have a college degree.

Representativeness would tell you to bet on the PhD, but this is not necessarily wise.
You should seriously consider the second alternative, because many more
nongraduates than PhDs ride in New York subways.
The second sin of representativeness is insensitivity to the quality of evidence.

There is one thing you can do when you have doubts about the quality of the evidence:
let your judgments of probability stay close to the base rate.

The essential keys to disciplined Bayesian reasoning can be simply summarized:

Anchor your judgment of the probability of an outcome on a plausible base rate.


Question the diagnosticity of your evidence.
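
As a rough numerical sketch of what anchoring on a base rate and questioning diagnosticity means, here is the odds form of Bayes' rule applied to the subway example; the base rate and likelihood ratio are invented assumptions:

```python
# Hedged illustration of disciplined Bayesian reasoning in odds form.
# Invented numbers: suppose 1 in 50 subway riders has a PhD (base rate),
# and reading the NYT is 4 times as likely for a PhD as for a non-PhD
# (the "diagnosticity" of the evidence, i.e. the likelihood ratio).

prior_odds = 1 / 49               # base rate of 1 in 50, expressed as odds
likelihood_ratio = 4.0            # how strongly the evidence favours the hypothesis

posterior_odds = prior_odds * likelihood_ratio
posterior_prob = posterior_odds / (1 + posterior_odds)

print(f"Posterior probability of 'has a PhD': {posterior_prob:.1%}")
# -> about 7.5%: even fairly diagnostic evidence leaves the low base rate dominant.
```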

Chapter 15: Linda: Less is More


When you specify a possible event in greater detail you can only lower its
probability. The problem therefore sets up a conflict between the intuition of
representativeness and the logic of probability.
conjunction fallacy: when people judge a conjunction of two events to be more
probable than one of the events in a direct comparison.
Representativeness belongs to a cluster of closely related basic assessments that
are likely to be generated together. The most representative outcomes combine
with the personality description to produce the most coherent stories. The most
coherent stories are not necessarily the most probable, but they are plausible, and
the notions of coherence, plausibility, and probability are easily confused by the
unwary.

Chapter 17: Regression to the Mean


An important principle of skill training: rewards for improved performance work
better than punishment of mistakes. This proposition is supported by much
evidence from research on pigeons, rats, humans, and other animals.

Talent and Luck

My favourite equations:
success = talent + luck
great success = a little more talent + a lot of luck

Understanding Regression
The general rule is straightforward but has surprising consequences: whenever the
correlation between two scores is imperfect, there will be regression to the mean.
If the correlation between the intelligence of spouses is less than perfect (and if
men and women on average do not differ in intelligence), then it is a mathematical
inevitability that highly intelligent women will be married to husbands who are on
average less intelligent than they are (and vice versa, of course).
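
A small simulation sketch of this rule: when two scores share only part of their determinants (an imperfect correlation, set arbitrarily to 0.5 here), extreme values of one score pair, on average, with much less extreme values of the other:

```python
# Illustration of regression to the mean with an imperfect correlation.
# score2 shares only part of its variance with score1 (r = 0.5, chosen arbitrarily).
import random

random.seed(0)
r = 0.5
pairs = []
for _ in range(100_000):
    score1 = random.gauss(0, 1)
    # score2 = r * score1 + independent noise, so corr(score1, score2) = r
    score2 = r * score1 + (1 - r**2) ** 0.5 * random.gauss(0, 1)
    pairs.append((score1, score2))

# Among cases where the first score is extreme (top ~2%), the second score
# is, on average, much closer to the mean.
extreme = [s2 for s1, s2 in pairs if s1 > 2.0]
avg_s1 = sum(s1 for s1, _ in pairs if s1 > 2.0) / len(extreme)
print(f"Average first score in the extreme group:  {avg_s1:.2f}")                     # about 2.4
print(f"Average second score in the same group:    {sum(extreme)/len(extreme):.2f}")  # about 1.2
```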

Chapter 18: Taming Intuitive Predictions


Some predictive judgements, like those made by engineers, rely largely on lookup
tables, precise calculations, and explicit analyses of outcomes observed on similar
occasions. Others involve intuition and System 1, in two main varieties:
Some intuitions draw primarily on skill and expertise acquired by repeated
experience. The rapid and automatic judgements of chess masters, fire chiefs, and
doctors illustrate these.
Others, which are sometimes subjectively indistinguishable from the first, arise
from the operation of heuristics that often substitute an easy question for the
harder one that was asked.
We are capable of rejecting information as irrelevant or false, but adjusting for
smaller weaknesses in the evidence is not something that System 1 can do. As a
result, intuitive predictions are almost completely insensitive to the actual predictive
quality of the evidence.

A Correction for Intuitive Predictions

Recall that the correlation between two measures—in the present case reading
age and GPA—is equal to the proportion of shared factors among their
determinants. What is your best guess about that proportion? My most optimistic
guess is about 30%. Assuming this estimate, we have all we need to produce an
unbiased prediction. Here are the directions for how to get there in four simple
steps:
Start with an estimate of average GPA.
Determine the GPA that matches your impression of the evidence.
Estimate the correlation between your evidence and GPA.
If the correlation is .30, move 30% of the distance from the average to the
matching GPA.
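
A minimal sketch of those four steps as arithmetic; the average GPA, the matching GPA, and the correlation below are invented illustrative values:

```python
# Hedged sketch of the four-step correction for an intuitive prediction.
# All numbers are illustrative assumptions, not data from the book.

average_gpa = 3.0          # step 1: baseline (the average outcome)
matching_gpa = 3.8         # step 2: the GPA that "matches" your impression of the evidence
correlation = 0.30         # step 3: estimated correlation between evidence and outcome

# Step 4: move from the baseline toward the matching value by the correlation.
corrected_prediction = average_gpa + correlation * (matching_gpa - average_gpa)
print(f"Intuitive (matching) prediction: {matching_gpa}")
print(f"Regressed prediction:            {corrected_prediction:.2f}")   # 3.24
```
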
Part 3: Overconfidence
Chapter 19: The Illusion of Understanding
From Taleb: narrative fallacy: our tendency to reshape the past into coherent
stories that shape our views of the world and expectations for the future.
As a result, we tend to overestimate skill, and underestimate luck.
Once humans adopt a new view of the world, we have difficulty recalling our old
view, and how much we were surprised by past events.
Outcome bias: our tendency to put too much blame on decision makers for bad
outcomes vs. good ones.
This both encourages excessive risk aversion and disproportionately rewards risky
behaviour that happens to pay off (the entrepreneur who gambles big and wins).
At best, knowing which of two firms has the stronger CEO lets you pick the more
successful firm about 60% of the time, only 10 percentage points better than
random guessing.

Chapter 20: The Illusion of Validity


We often vastly overvalue the evidence at hand, discounting its amount and quality
in favour of the better story, and in other cases we simply follow the people we
love and trust with little or no evidence.
The illusion of skill is maintained by powerful professional cultures.
Experts/pundits are rarely better (and often worse) than random chance, yet often
believe at a much higher confidence level in their predictions.

Chapter 21: Intuitions vs. Formulas


A number of studies have concluded that algorithms are better than expert judgement,
or at least as good.

The research suggests a surprising conclusion: to maximize predictive accuracy, final
decisions should be left to formulas, especially in low-validity environments.

More recent research went further: formulas that assign equal weights to all the
predictors are often superior, because they are not affected by accidents of sampling.

In a memorable example, Dawes showed that marital stability is well predicted by a
formula:
frequency of lovemaking minus frequency of quarrels

The important conclusion from this research is that an algorithm that is constructed on
the back of an envelope is often good enough to compete with an optimally weighted
formula, and certainly good enough to outdo expert judgment.

Intuition can be useful, but only when applied systematically.

Interviewing

To implement a good interview procedure:

Select some traits required for success (six is a good number). Try to ensure they
are independent.
Make a list of questions for each trait, and think about how you will score it from
1 to 5 (what would warrant a 1, what would make a 5).
Collect information as you go, assessing each trait in turn.
Then add up the scores at the end.
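
A short sketch of that scoring procedure as an equal-weight sum; the trait names and 1-to-5 ratings are invented placeholders:

```python
# Hedged sketch of the structured-interview scoring procedure described above:
# rate each of ~6 independent traits from 1 to 5 as you go, then sum at the end.
# Trait names and ratings are invented placeholders.

TRAITS = ["technical skill", "reliability", "communication",
          "initiative", "teamwork", "composure"]

def total_score(ratings: dict) -> int:
    """Sum the 1-5 ratings over the fixed list of traits (equal weights)."""
    assert set(ratings) == set(TRAITS), "rate every trait, nothing else"
    assert all(1 <= r <= 5 for r in ratings.values()), "ratings must be 1-5"
    return sum(ratings.values())

candidate = {"technical skill": 4, "reliability": 5, "communication": 3,
             "initiative": 4, "teamwork": 3, "composure": 4}
print(total_score(candidate))   # 23 out of a possible 30
```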

Chapter 22: Expert Intuition: When Can We Trust It?


When can we trust intuition/judgements? The answer comes from the two basic
conditions for acquiring a skill:

an environment that is sufficiently regular to be predictable


an opportunity to learn these regularities through prolonged practice

When both these conditions are satisfied, intuitions are likely to be skilled.

Whether professionals have a chance to develop intuitive expertise depends essentially
on the quality and speed of feedback, as well as on sufficient opportunity to practice.

Among medical specialties, anesthesiologists benefit from good feedback, because the
effects of their actions are likely to be quickly evident. In contrast, radiologists obtain
little information about the accuracy of the diagnoses they make and about the
pathologies they fail to detect. Anesthesiologists are therefore in a better position to
develop useful intuitive skills.
Chapter 23: The Outside View
The inside view: when we focus on our specific circumstances and search for evidence
in our own experiences.

Also: when you fail to account for unknown unknowns.

The outside view: when you take into account a proper reference class/base rate.

Planning fallacy: plans and forecasts that are unrealistically close to best-case
scenarios could be improved by consulting the statistics of similar cases

Reference class forecasting: the treatment for the planning fallacy

The outside view is implemented by using a large database, which provides information
on both plans and outcomes for hundreds of projects all over the world, and can be
used to provide statistical information about the likely overruns of cost and time, and
about the likely underperformance of projects of different types.

The forecasting method that Flyvbjerg applies is similar to the practices recommended
for overcoming base-rate neglect:

Identify an appropriate reference class (kitchen renovations, large railway projects,
etc.).
Obtain the statistics of the reference class (in terms of cost per mile of railway, or of
the percentage by which expenditures exceeded budget). Use the statistics to
generate a baseline prediction.
Use specific information about the case to adjust the baseline prediction, if there
are particular reasons to expect the optimistic bias to be more or less pronounced
in this project than in others of the same type.
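
A minimal sketch of those three steps as arithmetic; the inside-view estimate, the reference-class overrun, and the case-specific adjustment are invented figures, only meant to show the baseline-plus-adjustment logic:

```python
# Hedged sketch of reference-class forecasting for a cost estimate.
# All figures are invented for illustration.

inside_view_estimate = 1_000_000   # the planners' own bottom-up cost estimate
reference_class_overrun = 0.45     # step 2: typical overrun observed in similar projects (45%)
specific_adjustment = -0.05        # step 3: reason to expect slightly less optimism bias here

baseline_forecast = inside_view_estimate * (1 + reference_class_overrun)
adjusted_forecast = baseline_forecast * (1 + specific_adjustment)

print(f"Inside-view estimate:    ${inside_view_estimate:,.0f}")
print(f"Baseline (outside view): ${baseline_forecast:,.0f}")   # $1,450,000
print(f"Adjusted forecast:       ${adjusted_forecast:,.0f}")   # $1,377,500
```
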
Organizations face the challenge of controlling the tendency of executives
competing for resources to present overly optimistic plans. A well-run organization
will reward planners for precise execution and penalize them for failing to
anticipate difficulties, and for failing to allow for difficulties that they could not have
anticipated—the unknown unknowns.

Chapter 24: The Engine of Capitalism


Optimism bias: the tendency to always view the positive outcomes or angles of events.
Danger: losing track of reality and underestimating the role of luck, as well as the risk
involved.

To mitigate the optimism bias, you should (a) be aware of the biases and planning
fallacies that can affect those who are predisposed to optimism, and (b) perform a
premortem:

The procedure is simple: when the organization has almost come to an important
decision but has not formally committed itself, Klein proposes gathering for a brief
session a group of individuals who are knowledgeable about the decision. The
premise of the session is a short speech: "Imagine that we are a year into the
future. We implemented the plan as it now exists. The outcome was a disaster.
Please take 5 to 10 minutes to write a brief history of that disaster."

Part 4: Choices
Chapter 25: Bernoulli’s Error
theory-induced blindness: once you have accepted a theory and used it as a tool in
your thinking, it is extraordinarily difficult to notice its flaws.

Chapter 26: Prospect Theory


It’s clear now that there are three cognitive features at the heart of prospect theory.
They play an essential role in the evaluation of financial outcomes and are
common to many automatic processes of perception, judgment, and emotion. They
should be seen as operating characteristics of System 1.
Evaluation is relative to a neutral reference point, which is sometimes referred to
as an "adaptation level."
For financial outcomes, the usual reference point is the status quo, but it can also
be the outcome that you expect, or perhaps the outcome to which you feel entitled,
for example, the raise or bonus that your colleagues receive.
Outcomes that are better than the reference points are gains. Below the reference
point they are losses.
A principle of diminishing sensitivity applies to both sensory dimensions and the
evaluation of changes of wealth.
The third principle is loss aversion. When directly compared or weighted against
each other, losses loom larger than gains. This asymmetry between the power of
positive and negative expectations or experiences has an evolutionary history.
Organisms that treat threats as more urgent than opportunities have a better
chance to survive and reproduce.

Loss Aversion

The “loss aversion ratio” has been estimated in several experiments and is usually
in the range of 1.5 to 2.5.
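
A sketch of a value function with these three features (a reference point, diminishing sensitivity, and loss aversion); the exponent of 0.88 and the loss-aversion coefficient of 2.25 are the commonly cited estimates from Tversky and Kahneman's 1992 cumulative prospect theory paper, used here only as plausible defaults:

```python
# Hedged sketch of a prospect-theory-style value function.
# alpha (diminishing sensitivity) and lam (loss aversion) follow the
# commonly cited Tversky & Kahneman (1992) estimates; treat them as assumptions.

def value(outcome: float, reference: float = 0.0,
          alpha: float = 0.88, lam: float = 2.25) -> float:
    """Subjective value of an outcome relative to a reference point."""
    x = outcome - reference            # evaluate gains and losses, not final wealth
    if x >= 0:
        return x ** alpha              # diminishing sensitivity to gains
    return -lam * (-x) ** alpha        # losses loom larger than gains

print(value(100))    # ~57.5  : value of a $100 gain
print(value(-100))   # ~-129.5: a $100 loss hurts roughly 2.25x as much
```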

Chapter 27: The Endowment Effect


Endowment effect: for certain goods, the status quo is preferred, particularly for
goods that are not regularly traded or for goods intended “for use” - to be
consumed or otherwise enjoyed.
Note: not present when owners view their goods as carriers of value for future
exchanges.

Chapter 28: Bad Events


The brain responds quicker to bad words (war, crime) than happy words (peace,
love).
If you are set to look for it, the asymmetric intensity of the motives to avoid losses
and to achieve gains shows up almost everywhere. It is an ever-present feature of
negotiations, especially of renegotiations of an existing contract, the typical
situation in labor negotiations and in international discussions of trade or arms
limitations. The existing terms define reference points, and a proposed change in
any aspect of the agreement is inevitably viewed as a concession that one side
makes to the other. Loss aversion creates an asymmetry that makes agreements
difficult to reach. The concessions you make to me are my gains, but they are your
losses; they cause you much more pain than they give me pleasure.

Chapter 29: The Fourfold Pattern


Whenever you form a global evaluation of a complex object—a car you may buy,
your son-in-law, or an uncertain situation—you assign weights to its characteristics.
This is simply a cumbersome way of saying that some characteristics influence
your assessment more than others do.
The conclusion is straightforward: the decision weights that people assign to
outcomes are not identical to the probabilities of these outcomes, contrary to the
expectation principle. Improbable outcomes are overweighted—this is the
possibility effect. Outcomes that are almost certain are underweighted relative to
actual certainty.
When we looked at our choices for bad options, we quickly realized that we were
just as risk seeking in the domain of losses as we were risk averse in the domain
of gains.
Certainty effect: at high probabilities, we seek to avoid loss and therefore accept
worse outcomes in exchange for certainty, and take high risk in exchange for
possibility.
Possibility effect: at low probabilities, we seek a large gain despite risk, and avoid
risk despite a poor outcome.

Indeed, we identified two reasons for this effect.

First, there is diminishing sensitivity. The sure loss is very aversive because the
reaction to a loss of $900 is more than 90% as intense as the reaction to a loss of
$1,000.
The second factor may be even more powerful: the decision weight that
corresponds to a probability of 90% is only about 71, much lower than the
probability.
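
That gap between stated probability and decision weight can be sketched with a probability weighting function; the functional form and the parameter gamma = 0.61 below come from Tversky and Kahneman's 1992 estimates (an assumption here), and they roughly reproduce the weight of 71 for a 90% probability mentioned above:

```python
# Hedged sketch of a probability weighting function that underweights
# high probabilities and overweights low ones. The form and gamma = 0.61
# follow Tversky & Kahneman (1992); treat the parameter as an assumption.

def decision_weight(p: float, gamma: float = 0.61) -> float:
    """Map a stated probability p to a decision weight in [0, 1]."""
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

print(round(100 * decision_weight(0.90), 1))  # -> 71.2: an "almost certain" gain is underweighted
print(round(100 * decision_weight(0.01), 1))  # -> 5.5:  a rare event is overweighted
```
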
Many unfortunate human situations unfold in the top right cell. This is where people
who face very bad options take desperate gambles, accepting a high probability of
making things worse in exchange for a small hope of avoiding a large loss. Risk
taking of this kind often turns manageable failures into disasters.

Chapter 30: Rare Events


The probability of a rare event is most likely to be overestimated when the
alternative is not fully specified.
Emotion and vividness influence fluency, availability, and judgments of probability
—and thus account for our excessive response to the few rare events that we do
not ignore.
Adding vivid details, salience and attention to a rare event will increase the
weighting of an unlikely outcome.
When this doesn’t occur, we tend to neglect the rare event.

Chapter 31: Risk Policies


There were two ways of construing decisions i and ii:

narrow framing: a sequence of two simple decisions, considered separately


broad framing: a single comprehensive decision, with four options

Broad framing was obviously superior in this case. Indeed, it will be superior (or at least
not inferior) in every case in which several decisions are to be contemplated together.

Decision makers who are prone to narrow framing construct a preference every time
they face a risky choice. They would do better by having a risk policy that they routinely
apply whenever a relevant problem arises. Familiar examples of risk policies are
"always take the highest possible deductible when purchasing insurance" and "never
buy extended warranties." A risk policy is a broad frame.

Chapter 32: Keeping Score


Agency problem: when the incentives of an agent conflict with the objectives
of a larger group, such as when a manager continues investing in a project
because he has backed it, even though cancelling it is in the firm's best interest.
Sunk-cost fallacy: the decision to invest additional resources in a losing account,
when better investments are available.
Disposition effect: the preference to end things on a positive note, seen in
investing as a much stronger tendency to sell winners (and “end positive”) than
to sell losers.
An instance of narrow framing.

Regret

People expect to have stronger emotional reactions (including regret) to an
outcome produced by action than to the same outcome when it is produced by
inaction.
To inoculate against regret: be explicit about your anticipation of it, and consider it
when making decisions. Also try and preclude hindsight bias (document your
decision-making process).
Also know that people generally anticipate more regret than they will actually
experience.

Chapter 33: Reversals


You should make sure to keep a broad frame when evaluating something; seeing
cases in isolation is more likely to lead to a System 1 reaction.

Chapter 34: Frames and Reality


The framing of something influences the outcome to a great degree.
For example, your moral feelings are attached to frames, to descriptions of reality
rather than to reality itself.
Another example: the best single predictor of whether or not people will donate
their organs is the designation of the default option that will be adopted without
having to check the box.

Part 5: Two Selves


Chapter 35: Two Selves
Peak-end rule: The global retrospective rating was well predicted by the average of
the level of pain reported at the worst moment of the experience and at its end.
We tend to overweight the end of an experience when remembering the whole.
Duration neglect: The duration of the procedure had no effect whatsoever on the
ratings of total pain.
Generally: we tend to ignore the duration of an event when evaluating an
experience.
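
A tiny sketch of the peak-end rule and duration neglect together; the minute-by-minute pain ratings below are invented:

```python
# Hedged illustration of the peak-end rule and duration neglect.
# Invented minute-by-minute pain ratings (0-10) for two procedures.

short_procedure = [2, 4, 8, 7]                # ends near its peak
long_procedure = [2, 4, 8, 7, 5, 4, 3, 2, 1]  # same peak, longer, ends mildly

def remembered_pain(ratings):
    """Peak-end rule: retrospective rating ~ average of worst moment and final moment."""
    return (max(ratings) + ratings[-1]) / 2

for name, ratings in [("short", short_procedure), ("long", long_procedure)]:
    print(f"{name}: total pain = {sum(ratings)}, remembered pain = {remembered_pain(ratings)}")
# The long procedure involves more total pain (36 vs 21) but is remembered
# as less painful (4.5 vs 7.5): duration is neglected, the peak and end dominate.
```
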
Confusing experience with the memory of it is a compelling cognitive illusion—and
it is the substitution that makes us believe a past experience can be ruined.

Chapter 37: Experienced Well-Being


One way to improve experience is to shift from passive leisure (TV watching) to
active leisure, including socializing and exercising.
The second-best predictor of feelings of a day is whether a person did or did not
have contacts with friends or relatives.
It is only a slight exaggeration to say that happiness is the experience of spending
time with people you love and who love you.
Can money buy happiness? Being poor makes one miserable, being rich may
enhance one’s life satisfaction, but does not (on average) improve experienced
well-being.
Severe poverty amplifies the effect of other misfortunes of life.
The satiation level beyond which experienced well-being no longer increases was
a household income of about $75,000 in high-cost areas (it could be less in areas
where the cost of living is lower). The average increase of experienced well-being
associated with incomes beyond that level was precisely zero.

Chapter 38: Thinking About Life


Experienced well-being is on average unaffected by marriage, not because
marriage makes no difference to happiness but because it changes some aspects
of life for the better and others for the worse (how one’s time is spent).
One reason for the low correlations between individuals’ circumstances and their
satisfaction with life is that both experienced happiness and life satisfaction are
largely determined by the genetics of temperament. A disposition for well-being is
as heritable as height or intelligence, as demonstrated by studies of twins
separated at birth.
The importance that people attached to income at age 18 also anticipated their
satisfaction with their income as adults.
The people who wanted money and got it were significantly more satisfied than
average; those who wanted money and didn’t get it were significantly more
dissatisfied. The same principle applies to other goals—one recipe for a
dissatisfied adulthood is setting goals that are especially difficult to attain.
Measured by life satisfaction 20 years later, the least promising goal that a young
person could have was "becoming accomplished in a performing art."

The focusing illusion:


Nothing in life is as important as you think it is when you are thinking about
it.

Miswanting: bad choices that arise from errors of affective forecasting; a common
example is the focusing illusion causing us to overweight the effect of purchases on our
future well-being.

Conclusions
Rationality

Rationality is logical coherence—reasonable or not. Econs are rational by this
definition, but there is overwhelming evidence that Humans cannot be. An Econ
would not be susceptible to priming, WYSIATI, narrow framing, the inside view, or
preference reversals, which Humans cannot consistently avoid.
The definition of rationality as coherence is impossibly restrictive; it demands
adherence to rules of logic that a finite mind is not able to implement.
The assumption that agents are rational provides the intellectual foundation for the
libertarian approach to public policy: do not interfere with the individual’s right to
choose, unless the choices harm others.
Thaler and Sunstein advocate a position of libertarian paternalism, in which the
state and other institutions are allowed to nudge people to make decisions that
serve their own long-term interests. The designation of joining a pension plan as
the default option is an example of a nudge.

Two Systems

What can be done about biases? How can we improve judgments and decisions,
both our own and those of the institutions that we serve and that serve us? The
short answer is that little can be achieved without a considerable investment of
effort. As I know from experience, System 1 is not readily educable. Except for
some effects that I attribute mostly to age, my intuitive thinking is just as prone to
overconfidence, extreme predictions, and the planning fallacy as it was before I
made a study of these issues. I have improved only in my ability to recognize
situations in which errors are likely: "This number will be an anchor…," "The
decision could change if the problem is reframed…" And I have made much more
progress in recognizing the errors of others than my own.
The way to block errors that originate in System 1 is simple in principle: recognize
the signs that you are in a cognitive minefield, slow down, and ask for
reinforcement from System 2.
Organizations are better than individuals when it comes to avoiding errors,
because they naturally think more slowly and have the power to impose orderly
procedures. Organizations can institute and enforce the application of useful
checklists, as well as more elaborate exercises, such as reference-class
forecasting and the premortem.
At least in part by providing a distinctive vocabulary, organizations can also
encourage a culture in which people watch out for one another as they approach
minefields.
The corresponding stages in the production of decisions are the framing of the
problem that is to be solved, the collection of relevant information leading to a
decision, and reflection and review. An organization that seeks to improve its
decision product should routinely look for efficiency improvements at each of these
stages.
There is much to be done to improve decision making. One example out of many is
the remarkable absence of systematic training for the essential skill of conducting
efficient meetings.
Ultimately, a richer language is essential to the skill of constructive criticism.
Decision makers are sometimes better able to imagine the voices of present
gossipers and future critics than to hear the hesitant voice of their own doubts.
They will make better choices when they trust their critics to be sophisticated and
fair, and when they expect their decision to be judged by how it was made, not only
by how it turned out.
