Thinking, Fast and Slow by Daniel Kahneman - Summary & Notes
Valid intuitions develop when experts have learned to recognize familiar elements
in a new situation and to act in a manner that is appropriate to it.
The essence of intuitive heuristics: when faced with a difficult question, we often
answer an easier one instead, usually without noticing the substitution.
We are prone to overestimate how much we understand about the world and to
underestimate the role of chance in events. Overconfidence is fed by the illusory
certainty of hindsight. My views on this topic have been influenced by Nassim
Taleb, the author of The Black Swan.
In rough order of complexity, examples of the automatic activities attributed to System 1 include: detecting that one object is more distant than another, orienting to the source of a sudden sound, completing the phrase "bread and...", detecting hostility in a voice, answering 2 + 2 = ?, and driving a car on an empty road.
The highly diverse operations of System 2 have one feature in common: they require attention and are disrupted when attention is drawn away. Examples include: bracing for the starter gun in a race, focusing on the voice of a particular person in a crowded and noisy room, counting the occurrences of the letter a in a page of text, filling out a tax form, and checking the validity of a complex logical argument.
If you like the president’s politics, you probably like his voice and his appearance
as well. The tendency to like (or dislike) everything about a person—including
things you have not observed—is known as the halo effect.
To counter this, you should decorrelate errors: to get useful information from multiple sources, make sure the sources are independent, then compare.
The principle of independent judgments (and decorrelated errors) has immediate
applications for the conduct of meetings, an activity in which executives in
organizations spend a great deal of their working days. A simple rule can help:
before an issue is discussed, all members of the committee should be asked to
write a very brief summary of their position.
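A quick simulation (not from the book; the numbers are illustrative) of why independent errors help: averaging independent estimates cancels noise, while a shared bias, such as everyone anchoring on the first opinion voiced in the meeting, does not average out.

```python
import random

random.seed(0)
TRUE_VALUE = 100.0

def estimate(shared_bias):
    """One person's noisy estimate: truth + any shared bias + individual noise."""
    return TRUE_VALUE + shared_bias + random.gauss(0, 10)

def committee_average(n, correlated):
    # If errors are correlated, every member shares the same bias
    # (e.g. they all anchored on the first opinion voiced in the meeting).
    shared = random.gauss(0, 10) if correlated else 0.0
    return sum(estimate(shared) for _ in range(n)) / n

trials = 2000
for correlated in (False, True):
    mae = sum(abs(committee_average(8, correlated) - TRUE_VALUE)
              for _ in range(trials)) / trials
    label = "correlated" if correlated else "independent"
    print(f"{label} errors: average miss of an 8-person committee ≈ {mae:.1f}")
```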
The strong bias toward believing that small samples closely resemble the
population from which they are drawn is also part of a larger story: we are prone to
exaggerate the consistency and coherence of what we see.
The anchoring measure would be 100% for people who slavishly adopt the anchor
as an estimate, and zero for people who are able to ignore the anchor altogether.
The value of 55% that was observed in this example is typical. Similar values have
been observed in numerous other problems.
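A sketch of how such an anchoring measure is computed; the specific numbers below are illustrative, not quoted from a particular study:

```python
def anchoring_index(mean_estimate_high, mean_estimate_low, anchor_high, anchor_low):
    """Ratio of the spread in group estimates to the spread in anchors.
    100% means people adopt the anchor wholesale; 0% means they ignore it."""
    return (mean_estimate_high - mean_estimate_low) / (anchor_high - anchor_low)

# Two groups estimate the same quantity after seeing a high anchor (1,200)
# or a low anchor (180); their mean estimates come out at 844 and 282.
print(f"{anchoring_index(844, 282, 1200, 180):.0%}")  # 55%
```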
Powerful anchoring effects are found in decisions that people make about money,
such as when they choose how much to contribute to a cause.
In general, a strategy of deliberately "thinking the opposite" may be a good
defense against anchoring effects, because it negates the biased recruitment of
thoughts that produces these effects.
Ease of retrieval often matters more than the amount retrieved. For example, people:
believe that they use their bicycles less often after recalling many rather than few instances
are less confident in a choice when they are asked to produce more arguments to
support it
are less confident that an event was avoidable after listing more ways it could have
been avoided
are less impressed by a car after listing many of its advantages
The difficulty of coming up with more examples surprises people, and they subsequently
change their judgement.
The following are some conditions in which people "go with the flow" and are affected
more strongly by ease of retrieval than by the content they retrieved:
when they are engaged in another effortful task at the same time
when they are in a good mood because they just thought of a happy episode in
their life
if they score low on a depression scale
if they are knowledgeable novices on the topic of the task, in contrast to true
experts
when they score high on a scale of faith in intuition
if they are (or are made to feel) powerful
Chapter 13: Availability, Emotion, and Risk
The affect heuristic is an instance of substitution, in which the answer to an easy
question (How do I feel about it?) serves as an answer to a much harder question
(What do I think about it?).
Experts tend to measure risk more objectively, for example by the total number of lives saved, while many citizens also distinguish between “good” and “bad” kinds of death.
An availability cascade is a self-sustaining chain of events, which may start from
media reports of a relatively minor event and lead up to public panic and large-
scale government action.
The Alar tale illustrates a basic limitation in the ability of our mind to deal with small
risks: we either ignore them altogether or give them far too much weight—nothing
in between.
In today’s world, terrorists are the most significant practitioners of the art of
inducing availability cascades.
Psychology should inform the design of risk policies that combine the experts’
knowledge with the public’s emotions and intuitions.
In the subway example (is a stranger reading The New York Times more likely to hold a PhD or to lack a college degree?), representativeness would tell you to bet on the PhD, but this is not necessarily wise. You should seriously consider the second alternative, because many more nongraduates than PhDs ride in New York subways.
The second sin of representativeness is insensitivity to the quality of evidence.
There is one thing you can do when you have doubts about the quality of the evidence:
let your judgments of probability stay close to the base rate.
My favourite equations:
success = talent + luck
great success = a little more talent + a lot of luck
Understanding Regression
The general rule is straightforward but has surprising consequences: whenever the
correlation between two scores is imperfect, there will be regression to the mean.
If the correlation between the intelligence of spouses is less than perfect (and if
men and women on average do not differ in intelligence), then it is a mathematical
inevitability that highly intelligent women will be married to husbands who are on
average less intelligent than they are (and vice versa, of course).
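A short simulation of success = talent + luck makes the regression effect concrete (my own illustration): the top scorers on one occasion are, on average, much closer to the mean the next time, because their good luck does not repeat.

```python
import random

random.seed(1)

talents = [random.gauss(0, 1) for _ in range(10_000)]  # stable talent

def success(talent):
    return talent + random.gauss(0, 1)  # success = talent + luck

day1 = sorted(((success(t), t) for t in talents), reverse=True)
top100 = day1[:100]  # best performers on the first occasion

avg_day1 = sum(score for score, _ in top100) / 100
avg_day2 = sum(success(t) for _, t in top100) / 100
print(f"top 100 on day 1: {avg_day1:.2f}; the same people on day 2: {avg_day2:.2f}")
```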
Recall that the correlation between two measures—in the present case reading
age and GPA—is equal to the proportion of shared factors among their
determinants. What is your best guess about that proportion? My most optimistic
guess is about 30%. Assuming this estimate, we have all we need to produce an
unbiased prediction. Here are the directions for how to get there in four simple
steps:
Start with an estimate of average GPA.
Determine the GPA that matches your impression of the evidence.
Estimate the correlation between your evidence and GPA.
If the correlation is .30, move 30% of the distance from the average to the
matching GPA.
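These steps translate directly into one line of arithmetic; a minimal sketch with made-up numbers (an average GPA of 3.0 and an intuitive match of 3.8 are my assumptions):

```python
def regressive_prediction(baseline, intuitive_match, correlation):
    """Move from the baseline toward the intuitive estimate only in
    proportion to the correlation between the evidence and the outcome."""
    return baseline + correlation * (intuitive_match - baseline)

# Step 1: average GPA is 3.0.  Step 2: the evidence "feels like" a 3.8 student.
# Step 3: the estimated correlation is .30.  Step 4: move 30% of the way.
print(regressive_prediction(baseline=3.0, intuitive_match=3.8, correlation=0.30))  # 3.24
```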
Part 3: Overconfidence
Chapter 19: The Illusion of Understanding
From Taleb: narrative fallacy: our tendency to reshape the past into coherent
stories that shape our views of the world and expectations for the future.
As a result, we tend to overestimate skill, and underestimate luck.
Once humans adopt a new view of the world, we have difficulty recalling our old
view, and how much we were surprised by past events.
Outcome bias: our tendency to judge a decision by its outcome rather than by the quality of the decision process, blaming decision makers for sound decisions that turned out badly and giving them too little credit for sound decisions that merely look obvious in hindsight.
This both fuels risk aversion and leads us to disproportionately reward risky behaviour that happens to pay off (the entrepreneur who gambles big and wins).
At best, knowing which of two firms has the stronger CEO lets you predict which firm will be more successful only about 60% of the time, roughly 10 percentage points better than random guessing.
More recent research went further: formulas that assign equal weights to all the
predictors are often superior, because they are not affected by accidents of sampling.
The important conclusion from this research is that an algorithm that is constructed on
the back of an envelope is often good enough to compete with an optimally weighted
formula, and certainly good enough to outdo expert judgment.
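A back-of-the-envelope formula of this kind might look like the sketch below: standardize each predictor and give them all equal weight. The predictors and numbers are hypothetical, not taken from the studies Kahneman cites.

```python
from statistics import mean, stdev

# Hypothetical predictors for a batch of loan applicants.
applicants = [
    {"income": 40, "years_employed": 2, "credit_score": 650},
    {"income": 55, "years_employed": 5, "credit_score": 700},
    {"income": 70, "years_employed": 8, "credit_score": 720},
    {"income": 90, "years_employed": 12, "credit_score": 760},
]

def equal_weight_score(candidate, population):
    """'Improper' linear model: z-score each predictor against the
    population and simply add the z-scores, with no fitted weights."""
    score = 0.0
    for key, value in candidate.items():
        values = [p[key] for p in population]
        score += (value - mean(values)) / stdev(values)
    return score

print(equal_weight_score({"income": 60, "years_employed": 6, "credit_score": 710}, applicants))
```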
Interviewing
Select some traits required for success (six is a good number). Try to ensure they
are independent.
Make a list of questions for each trait, and think about how you will score it from 1-
5 (what would warrant a 1, what would make a 5).
Collect information as you go, assessing each trait in turn.
Then add up the scores at the end.
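A minimal sketch of that scoring procedure; the trait names are placeholders of my own, not Kahneman's:

```python
# Six hypothetical traits, each scored 1-5 from the candidate's answers,
# one trait at a time, before forming any overall impression.
TRAITS = ["technical skill", "reliability", "communication",
          "initiative", "teamwork", "integrity"]

def interview_score(ratings):
    """Structured interview: rate each trait independently, then add the scores."""
    assert set(ratings) == set(TRAITS), "rate every trait, and nothing else"
    assert all(1 <= r <= 5 for r in ratings.values()), "each score runs from 1 to 5"
    return sum(ratings.values())

candidate = {"technical skill": 4, "reliability": 5, "communication": 3,
             "initiative": 4, "teamwork": 3, "integrity": 5}
print(interview_score(candidate))  # 24 out of a possible 30
```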
Skilled intuition requires two conditions: an environment that is sufficiently regular to be predictable, and an opportunity to learn these regularities through prolonged practice. When both conditions are satisfied, intuitions are likely to be skilled.
Among medical specialties, anesthesiologists benefit from good feedback, because the
effects of their actions are likely to be quickly evident. In contrast, radiologists obtain
little information about the accuracy of the diagnoses they make and about the
pathologies they fail to detect. Anesthesiologists are therefore in a better position to
develop useful intuitive skills.
Chapter 23: The Outside View
The inside view: when we focus on our specific circumstances and search for evidence
in our own experiences.
The outside view: when you take into account a proper reference class/base rate.
Planning fallacy: plans and forecasts that are unrealistically close to best-case scenarios and could be improved by consulting the statistics of similar cases.
The outside view is implemented by using a large database, which provides information
on both plans and outcomes for hundreds of projects all over the world, and can be
used to provide statistical information about the likely overruns of cost and time, and
about the likely underperformance of projects of different types.
The forecasting method that Flyvbjerg applies is similar to the practices recommended for overcoming base-rate neglect: identify an appropriate reference class of similar projects, obtain the statistics of that reference class to produce a baseline prediction, and use specific information about the case at hand to adjust the baseline.
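A sketch of the resulting arithmetic, with invented overrun figures standing in for the real reference-class database:

```python
from statistics import median

# Cost overruns in a hypothetical reference class of comparable projects,
# expressed as (actual cost / originally estimated cost).
reference_class_overruns = [1.1, 1.3, 1.4, 1.5, 1.7, 1.8, 2.2]

def outside_view_forecast(inside_view_estimate):
    """Baseline prediction: scale the plan's own estimate by the typical
    overrun observed in comparable projects."""
    return inside_view_estimate * median(reference_class_overruns)

print(outside_view_forecast(10_000_000))  # inside view: $10M; outside view: $15M
```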
To mitigate optimism bias, you should a) be aware of the biases and planning fallacies that affect those who are predisposed to optimism, and b) perform a premortem:
The procedure is simple: when the organization has almost come to an important
decision but has not formally committed itself, Klein proposes gathering for a brief
session a group of individuals who are knowledgeable about the decision. The
premise of the session is a short speech: "Imagine that we are a year into the
future. We implemented the plan as it now exists. The outcome was a disaster.
Please take 5 to 10 minutes to write a brief history of that disaster."
Part 4: Choices
Chapter 25: Bernoulli’s Error
theory-induced blindness: once you have accepted a theory and used it as a tool in
your thinking, it is extraordinarily difficult to notice its flaws.
Loss Aversion
The “loss aversion ratio” has been estimated in several experiments and is usually
in the range of 1.5 to 2.5.
First, there is diminishing sensitivity. The sure loss is very aversive because the
reaction to a loss of $900 is more than 90% as intense as the reaction to a loss of
$1,000.
The second factor may be even more powerful: the decision weight that
corresponds to a probability of 90% is only about 71, much lower than the
probability.
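Putting both factors together in a worked example, using the commonly cited value-function exponent of roughly 0.88 and the 0.71 decision weight mentioned above (the exact parameters are illustrative):

```python
ALPHA = 0.88   # diminishing sensitivity (a commonly cited estimate)
W_90 = 0.71    # decision weight attached to a 90% probability

def loss_value(amount):
    """Subjective (negative) value of losing `amount` dollars."""
    return -(amount ** ALPHA)

sure_loss = loss_value(900)        # lose $900 for certain
gamble = W_90 * loss_value(1000)   # 90% chance to lose $1,000

print(f"sure loss of $900:     {sure_loss:.0f}")
print(f"90% chance of -$1,000: {gamble:.0f}")
# The gamble feels less bad than the sure loss, so people become risk
# seeking in the domain of losses, even though both options carry the
# same expected loss of $900.
```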
Many unfortunate human situations unfold in the top right cell of the fourfold pattern, where people face a high probability of a large loss. This is where people who face very bad options take desperate gambles, accepting a high probability of making things worse in exchange for a small hope of avoiding a large loss. Risk taking of this kind often turns manageable failures into disasters.
Broad framing was obviously superior in this case. Indeed, it will be superior (or at least
not inferior) in every case in which several decisions are to be contemplated together.
Decision makers who are prone to narrow framing construct a preference every time
they face a risky choice. They would do better by having a risk policy that they routinely
apply whenever a relevant problem arises. Familiar examples of risk policies are
"always take the highest possible deductible when purchasing insurance" and "never
buy extended warranties." A risk policy is a broad frame.
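A quick illustration of what a broad frame buys you, using the coin-flip gamble Kahneman discusses (win $200 on heads, lose $100 on tails); the simulation itself is mine:

```python
import random

random.seed(2)

def flip():
    """One coin flip: win $200 on heads, lose $100 on tails."""
    return 200 if random.random() < 0.5 else -100

# Narrow frame: each flip on its own looks risky (you might lose $100).
# Broad frame: evaluate the whole bundle of 100 flips together.
trials = 10_000
losing_bundles = sum(1 for _ in range(trials)
                     if sum(flip() for _ in range(100)) < 0)
print(f"chance that a 100-flip bundle loses money: {losing_bundles / trials:.2%}")
```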
Regret
Miswanting: bad choices that arise from errors of affective forecasting; a common example is the focusing illusion causing us to overweight the effect of purchases on our future well-being.
Conclusions
Rationality
Two Systems
What can be done about biases? How can we improve judgments and decisions,
both our own and those of the institutions that we serve and that serve us? The
short answer is that little can be achieved without a considerable investment of
effort. As I know from experience, System 1 is not readily educable. Except for
some effects that I attribute mostly to age, my intuitive thinking is just as prone to
overconfidence, extreme predictions, and the planning fallacy as it was before I
made a study of these issues. I have improved only in my ability to recognize
situations in which errors are likely: "This number will be an anchor…," "The
decision could change if the problem is reframed…" And I have made much more
progress in recognizing the errors of others than my own.
The way to block errors that originate in System 1 is simple in principle: recognize
the signs that you are in a cognitive minefield, slow down, and ask for
reinforcement from System 2.
Organizations are better than individuals when it comes to avoiding errors,
because they naturally think more slowly and have the power to impose orderly
procedures. Organizations can institute and enforce the application of useful
checklists, as well as more elaborate exercises, such as reference-class
forecasting and the premortem.
At least in part by providing a distinctive vocabulary, organizations can also
encourage a culture in which people watch out for one another as they approach
minefields.
An organization is a factory that manufactures judgments and decisions, and like any factory it must ensure the quality of its products in the initial design, in fabrication, and in final inspection. The corresponding stages in the production of decisions are the framing of the problem that is to be solved, the collection of relevant information leading to a decision, and reflection and review. An organization that seeks to improve its decision product should routinely look for efficiency improvements at each of these stages.
There is much to be done to improve decision making. One example out of many is
the remarkable absence of systematic training for the essential skill of conducting
efficient meetings.
Ultimately, a richer language is essential to the skill of constructive criticism.
Decision makers are sometimes better able to imagine the voices of present
gossipers and future critics than to hear the hesitant voice of their own doubts.
They will make better choices when they trust their critics to be sophisticated and
fair, and when they expect their decision to be judged by how it was made, not only
by how it turned out.