Topic 3 Summary
Probability
When choosing between alternatives, we gather and interpret information so that we can reach a conclusion that allows us to choose the best course of action. Two aspects of this matter:
o The likelihood that particular outcomes have happened, are happening, or will happen (we often have incomplete information about the past and present, and always have incomplete information about the future, so when these matter we have to make a judgement, i.e. assess likelihoods). For example, do we have an accurate view of the current state of the market, of whether we are competitive, of the facts relating to an incident or case? These relate to the past and present. Is there going to be an expansion in demand, is the product going to be a winner, will this client do what he says? These relate to the future. Together, these are the risk or probability aspects of Decision Theory (see Week 1).
o The attractiveness of outcomes (i.e. their subjective value or utility) either to me or my
organisation. This relates to the utility part of Decision Theory (see Week 1).
We will explore the argument that people base much of their gathering and interpretation of evidence on System 1 thinking, which can lead to biased interpretations and conclusions, and that we can help them improve by highlighting this and then presenting System 2 forms of thinking (where they exist).
We will begin by asking a very simple question: how good are you at assessing risk and probability
and using it to inform decisions? We touched on the concept of self-knowledge in Week 1 when we
discussed people's tendency to be overconfident in their own ability.
Consider the child abuse and breast cancer screening problems (these were part of the judgement and decision-making pre-module questionnaire and will be reviewed in this week's webinar). Responses to these problems constitute evidence of System 1 thinking. People either focus on the 'evidence' and make some intuitive System 1 adjustments, or they focus only on the 'initial likelihood' and use that. Often they cannot engage System 2 because they have never been taught it, even though System 2 thinking exists for these situations: Bayes' Theorem.
There is much evidence that experts, including expert witnesses, fall foul of this bias: they focus only on the evidence and neglect the base rate, i.e. they forget that rare things occur rarely and common things occur commonly. Because they do not follow System 2 thinking, they misperceive the risk and end up taking a bad decision as a result.
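As a rough illustration of the System 2 calculation these problems require, the following sketch (in Python) applies Bayes' Theorem to a hypothetical screening scenario; the base rate, sensitivity and false-positive rate are assumed for illustration and are not the figures from the questionnaire problems.

    # Hypothetical screening problem (illustrative figures only).
    base_rate = 0.01        # P(condition): 1% of the population affected
    sensitivity = 0.80      # P(positive test | condition)
    false_positive = 0.10   # P(positive test | no condition)

    # System 1 with base rate neglect: judging mainly from the test's accuracy.
    intuitive_estimate = sensitivity  # ~80%

    # System 2: Bayes' Theorem gives P(condition | positive test).
    p_positive = sensitivity * base_rate + false_positive * (1 - base_rate)
    posterior = (sensitivity * base_rate) / p_positive

    print(f"Evidence-only (intuitive) estimate: {intuitive_estimate:.0%}")
    print(f"Bayesian posterior:                 {posterior:.1%}")  # about 7.5%

The gap between the two figures is the cost of base rate neglect: because the condition is rare, most positive results come from the much larger group of unaffected people.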
Bayesian inference tasks are also key to management decisions, e.g., will the group complete a project
on time if one of the key engineers gets sick unexpectedly? Does a customer place more weight on
quality than price if her yearly income is above average?
In the jelly bean problem, the bowl with 1 red bean out of 10 offers objectively better odds of winning
than the bowl with 9 red beans out of 100 (10% vs. 9%). However, Denes-Raj and Epstein (1994)
showed that participants often chose the bowl that offered the lower probability of winning! This is not rational behaviour: that bowl contains more winning red beans (nine) even though the proportion of winners is smaller. The authors suggested that this choice is driven by an affect-laden, automatic, intuitive mode of thinking (i.e., System 1 thinking). In contrast, considering the objective probabilities in both cases requires engaging a more rational, analytical, deliberative mode of thought (i.e., System 2 thinking). Participants in this study often knew the probabilities in both bowls,
but their choices seemed to be driven by their intuitions and feelings concerning the number of
winning red beans.
The ratio bias effect refers to people’s tendency to judge an event as more likely when presented as a
large-numbered ratio, such as 10/100, than as a smaller-numbered but equivalent ratio, such as 1/10.
The study conducted by Denes-Raj and Epstein shows that this tendency can lead people to make
suboptimal choices.
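The arithmetic behind the objectively correct choice is trivial once it is made explicit; the short sketch below simply compares the two proportions given above.

    # Jelly bean bowls (Denes-Raj & Epstein, 1994): which offers the better odds?
    p_small_bowl = 1 / 10    # 1 winning red bean out of 10
    p_large_bowl = 9 / 100   # 9 winning red beans out of 100

    print(f"Small bowl: {p_small_bowl:.0%}, large bowl: {p_large_bowl:.0%}")
    print("Better choice:", "small bowl" if p_small_bowl > p_large_bowl else "large bowl")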
Some authors have noted that the ratio bias effect reflects people’s tendency to pay too much attention
to numerators in ratios (i.e., the number of times a target event has happened) and insufficient
attention to denominators (i.e., the overall opportunities for it to happen). In other words, people often
show denominator neglect. This tendency can lead to incorrect risk assessments in several domains,
e.g. people judge cancer as riskier when it is described as killing 1,286 out of 10,000 people than as
killing 24.14 out of 100 people (Yamagishi, 1997), or 36,500 people dying of cancer every year seems
riskier than 100 dying every day (Bonner & Newell, 2008).
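Putting each pair of figures on a common scale, a quick System 2 check, shows why the judgements reported in these studies are mistaken; the figures below are the ones quoted above.

    # Yamagishi (1997): proportions of people killed.
    risk_a = 1286 / 10000    # 12.86%
    risk_b = 24.14 / 100     # 24.14% -- the "smaller-looking" description is the larger risk
    print(f"1,286 out of 10,000 = {risk_a:.2%};  24.14 out of 100 = {risk_b:.2%}")

    # Bonner & Newell (2008): the same toll on different time scales.
    print(f"36,500 per year = {36500 / 365:.0f} per day, i.e. the same as 100 per day")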
Implications:
o People are often bad at assessing risk and probability and this can lead to poor judgements
and inappropriate decisions.
o People need to be taught how to reason using System 2 forms of thinking, but rarely are.
o However, there are some numerical and graphical representations that can facilitate
assessments of risk and probability.
Research by Gigerenzer and others has shown that performance in Bayesian reasoning problems (e.g.,
the child abuse and breast cancer screening problems from Webinar 3) can be improved significantly
by presenting information using natural frequencies instead of conditional probabilities (which express
the probability of some event conditional on the occurrence of some other event).
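To make the contrast concrete, the sketch below re-expresses the same hypothetical screening figures used earlier as natural frequencies (counts out of a notional 1,000 people); the numbers remain illustrative assumptions rather than Hoffrage and Gigerenzer's exact materials.

    population = 1000
    base_rate, sensitivity, false_positive = 0.01, 0.80, 0.10

    with_condition = round(population * base_rate)                            # 10 people
    true_positives = round(with_condition * sensitivity)                      # 8 of them test positive
    false_positives = round((population - with_condition) * false_positive)   # 99 healthy people also test positive

    print(f"Of {population} people, {with_condition} have the condition; "
          f"{true_positives} of them test positive, as do {false_positives} of the rest.")
    print(f"P(condition | positive) = {true_positives}/{true_positives + false_positives} "
          f"= {true_positives / (true_positives + false_positives):.1%}")

Framed this way, the answer can be read off the counts (8 out of 107) rather than computed from conditional probabilities.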
Hoffrage and Gigerenzer (1998) conducted a study involving doctors with around 14 years of
professional experience. They were asked to indicate the probability that a woman who has a positive
mammogram result actually has breast cancer (similar to the problem we discussed in class). Doctors
who were given conditional probabilities gave answers ranging from 1% to 90%, and very few of
them gave the correct response! In contrast, those who were given natural frequencies were more
likely to give the correct response or to give an estimate that was close to it. Similar results have been
found in studies involving medical students, lawyers and law students.
The specific mechanism that can explain this beneficial effect is still a matter of debate. Gigerenzer
argues that natural frequencies simplify the mental computations required, and that our minds are
adapted to reason with natural frequencies. In contrast, others (e.g., Sloman et al., 2003) have argued
that frequencies improve performance because they clarify the set structure of the problem and how
key elements relate to one another.
Facilitating judgements of probability: absolute vs. relative risks
The pill scare example illustrates how the use of relative risks (e.g., “the risk increased by 100%”) can
lead to confusion. In this example, it led women to worry significantly about the risk of suffering
blood clots associated with taking third-generation contraceptive pills. This resulted in an increase in
unwanted pregnancies and abortions.
However, while the relative risk increase was indeed 100%, the absolute risk increase was only 1 in
7,000 (i.e., from 1 in 7,000 to 2 in 7,000). The use of absolute risks can often facilitate risk
understanding and avoid unnecessary anxieties (or unrealistic expectations concerning the
effectiveness of a medical treatment or a given policy).
The 2015 announcement by the World Health Organisation concerning the increase in cancer risk
associated with eating processed meat can also help to illustrate these ideas. It is key to consider
whether information is communicated using relative risks (e.g., “eating processed meat increases the
risk of developing bowel cancer by around 17%”) or absolute risks (e.g., “Out of every 1,000 people
who eat the lowest amount of meat, 56 are expected to develop bowel cancer; Out of every 1,000
people who eat the largest amount, 66 are expected to develop bowel cancer”).
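A short sketch of the two framings, using the figures quoted above, makes the difference explicit.

    def relative_increase(baseline, new):
        return (new - baseline) / baseline

    def absolute_increase(baseline, new):
        return new - baseline

    # Pill scare: blood-clot risk rose from 1 in 7,000 to 2 in 7,000.
    print(f"Pill scare: relative +{relative_increase(1/7000, 2/7000):.0%}, "
          f"absolute +{absolute_increase(1/7000, 2/7000):.4%} (1 extra case per 7,000 women)")

    # WHO / processed meat: bowel cancer cases per 1,000 people, lowest vs. highest intake.
    print(f"Processed meat: relative +{relative_increase(56/1000, 66/1000):.1%}, "
          f"absolute +{absolute_increase(56/1000, 66/1000):.0%} (10 extra cases per 1,000 people)")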
Recent research has indicated that visual aids or graphs are a promising means to help people to
overcome difficulties in assessing risks and probabilities. For example, icon arrays (i.e., matrices of
stick figures, faces, circles or squares) can improve performance in Bayesian reasoning problems (see, e.g., Brase, 2009) and can help to overcome common biases such as denominator neglect.
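To give a rough sense of how icon arrays work, the sketch below prints a simple text-based array in which affected cases are marked within the full population, keeping the denominator visible (real icon arrays, as in Brase, 2009, use stick figures or faces rather than characters).

    def icon_array(affected, total, per_row=10, hit="X", miss="."):
        """Return a text grid with `affected` marked cases among `total` cases."""
        icons = [hit] * affected + [miss] * (total - affected)
        return "\n".join("".join(icons[i:i + per_row]) for i in range(0, total, per_row))

    # e.g. 9 winners out of 100: the 91 non-winners stay just as visible as the 9 winners.
    print(icon_array(9, 100))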
More generally, well-designed graphs can facilitate the communication of complex quantitative
information. However, graphs can be a double-edged sword! The work by Stone et al. (2003) shows
that depicting the same risk information using different graphic formats (simple bar charts vs. stacked
bar charts) can affect people’s risk perceptions and the amount of money that they are willing to pay
for products.
The work by Sun et al. (2011) also shows that manipulations in scales (e.g., distances along axes) can
have a significant impact on people’s preferences. Indeed, there is increasing evidence that
manipulations and distortions in graphs can mislead decision makers and alter their choices.
Overconfidence
If you have tried activity 3.2.1 in the online resources, you probably found that your confidence intervals for the estimation tasks were not perfectly calibrated. (If you have not yet tried it, have a go now before you continue.) You may have demonstrated some overconfidence.
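A minimal sketch of how interval calibration can be scored is given below; the intervals and true values are hypothetical, not the items from activity 3.2.1. For 90% confidence intervals, a well-calibrated judge should capture the true value roughly nine times out of ten.

    # (lower bound, upper bound, true value) for three hypothetical 90% intervals
    intervals = [
        (1000, 5000, 6650),
        (50, 150, 102),
        (10, 30, 45),
    ]

    hits = sum(lower <= truth <= upper for lower, upper, truth in intervals)
    hit_rate = hits / len(intervals)

    print(f"Hit rate: {hit_rate:.0%} (well-calibrated 90% intervals should hit ~90%)")
    # A hit rate well below the stated confidence level is a sign of overconfidence.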
Overconfidence can be defined as the tendency for our beliefs to be held with stronger degrees of
conviction than is justified.
As Will Rogers, the cowboy, actor and political commentator from the early 20th century, said: "It's not what we don't know that gives us trouble, it's what we know that ain't so."
Donald Rumsfeld, US Secretary of Defense (1975-1977 and 2001-2006), famously
said: “There are known knowns: there are things we know we know. We also know there are known
unknowns: that is to say we know there are some things we do not know. But there are also unknown
unknowns – the things we don’t know we don’t know.”
Beyond these categories lies a key concept related to overconfidence: things that we think we know, but actually do not, because they are incorrect. In other words, the biggest problem is not what people know they do not know, but rather what they think they know that is not true.
Overconfidence can occur with experts as well as novices. Overconfidence can be a major reason for
organisational and business failure. Think how many start-up entrepreneurs are over-optimistic in their
predictions about the potential uptake of their own product or service, and are therefore overconfident
that it will succeed. This is brought into focus when you consider that the UK five-year survival rate for businesses launched in 2012 and still active in 2017 was just 43% (ONS 2017).
Optimism bias
Related to overconfidence is the optimism bias: the tendency to judge the likelihood of good things happening as too high and of bad things as too low. When people think about downside risk they are almost never pessimistic enough. They tend to be over-optimistic about success and to under-predict the likelihood of failure. As a result, they do not build plans and contingencies for managing situations in which predicted successes do not occur. Dunning, Heath and Suls (2004) discuss some important implications of optimism bias and overconfidence for applied settings, including the workplace.
If we are well calibrated and have low confidence in our judgement, we will anticipate being incorrect and take appropriate precautions or organise contingencies. These measures help to mitigate the individual and organisational failures linked to overconfidence and optimism (Roxburgh 2003).
To make progress, we need to understand why overconfidence and optimism bias occur. Once you have this understanding, you can start to reduce the negative impact of these two tendencies. This issue is considered in the following section.
Confirmation bias
One reason people are overconfident in their judgements is the way they collect the evidence on which those judgements are based. People tend to seek out and favour information that confirms their assumptions, preconceptions or hypotheses. As a result, they do not consider disconfirming evidence, which would provide valuable feedback on their judgement. This faulty search strategy is known as the confirmation bias.
Confirmation bias is a form of System 1 thinking: it involves looking for confirming evidence, and not looking for (or actively avoiding) disconfirming evidence, when testing ideas about what has happened, is happening or is going to happen.
In trying to understand how false premises about weapons of mass destruction (WMD) led to the decision to invade Iraq without any apparent consideration of the risks involved, and to the failure to develop contingency plans to deal with those risks, the US Senate Intelligence Committee reported:
The Intelligence Community suffered from a collective presumption that Iraq had an active and growing WMD programme. This 'group think' dynamic led intelligence community analysts, collectors and managers both to interpret ambiguous evidence as collectively indicative of a WMD programme and to ignore or minimize evidence that Iraq did not have active and expanding weapons of mass destruction programmes. This presumption was so strong and formalised that intelligence community mechanisms established to challenge assumptions and group think were not utilised.
*************