Adapting The Adaptive Toolbox - Set of Cognitive Mechanisms
Author: Marieke Sweers (student number 3046907)
Supervisors: Maria Otworowska, MSc., Dr. H. Todd Wareham, Dr. Iris van Rooij
One of the main challenges in cognitive science is to explain how people make reasonable
inferences in daily life. Theories that attempt to explain this either fail to capture infer-
ence in its full generality or they seem to postulate intractable computations. One account
that seems to aspire to achieve generality without running into the problem of computa-
tional intractability is “the adaptive toolbox” by Gigerenzer and Todd (1999b). This theory
proposes that humans have a toolbox, adapted through learning and/or evolution to the
environment. Such a toolbox contains heuristics, each of them computationally tractable,
and a selector which selects a heuristic for every situation so that the toolbox can solve the
type of inference problems that people solve in their daily life. In this project we investi-
gate whether such a toolbox can have adapted and under what circumstances. We propose
a formalization of an adaptive toolbox and two formalizations of the adaptation process
and analyze them with computational complexity theory. Our results show that applying a
toolbox is doable in a reasonable amount of time, but adapting a toolbox can only be done
efficiently when certain restrictions are placed on the environment. If these restrictions
occur in the environment and the adaptation processes exploit them, humans could indeed
have adapted an adaptive toolbox.
Acknowledgements
The process of making my thesis was long and often difficult. At multiple points during
the one-and-a-half-year process I wondered whether obtaining a master's degree was
worth all the trouble, but at the end of it all I am happy that I went through it. I
would like to thank some people who helped me during the process.
First of all, my supervisors, who all had my best interests at heart and who all took a great deal
of their time for me. They have helped me at all stages of the thesis. Maria was my daily
supervisor and I would like to thank her for being there almost every week to have deep
thoughts about the toolbox theory. Todd I would like to thank for the long complexity
analysis sessions we had (over mail and in person). My friends used to joke that half the
work of my thesis was writing e-mails. Also, I felt comforted by his words, quoted from
Douglas Adams (don’t panic!), which were necessary at a few points in time. I would like
to thank Iris for helping me develop my research skills and going beyond that by telling
me about matters like the downsides of perfectionism. I am grateful that I got a chance to
work with you three.
Then my family: my parents and my sister, for being there when I needed to talk to
them and supporting me unconditionally. Furthermore, my friends, who supported me
too. Especially Thomas who sat next to me in the tk (the AI computer room) for countless
hours and whom I distracted from his own thesis by making him proofread some text or
small parts of the analyses. I would also like to thank all the people in the tk who kept me
company during the long process, mostly Franc who also provided cookies :), and lastly all
the other people who helped me during the process in one way or another whom I haven’t
mentioned above.
Contents
1 Introduction 1
1.1 The adaptive toolbox . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2 Motivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.3 Outline . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
2 Background 4
2.1 The mind and the environment . . . . . . . . . . . . . . . . . . . . . . . . . 4
2.2 The adaptive toolbox . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
2.2.1 Heuristics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
2.2.2 The selection mechanism . . . . . . . . . . . . . . . . . . . . . . . . 6
2.2.3 Ecological rationality . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
2.2.4 Adaptation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
2.3 Previous research . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
2.3.1 Research on toolbox heuristics . . . . . . . . . . . . . . . . . . . . . 8
2.3.2 Research on toolbox adaptation . . . . . . . . . . . . . . . . . . . . . 8
3 Methods 10
3.1 Computational complexity theory . . . . . . . . . . . . . . . . . . . . . . . . 10
3.1.1 Classical computational complexity theory . . . . . . . . . . . . . . . 12
3.1.2 Parameterized computational complexity theory . . . . . . . . . . . . 16
3.2 The research questions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
3.3 Formalizing the adaptive toolbox . . . . . . . . . . . . . . . . . . . . . . . . 19
3.3.1 The environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
3.3.2 The adaptive toolbox . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
3.3.3 Ecological rationality . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
4 Results 26
4.1 RQ1: Is the application of the adaptive toolbox tractable? . . . . . . . . . 26
4.1.1 Introducing TOOLBOX APPLICATION . . . . . . . . . . . . . . . . . . . 26
4.1.2 Analyzing TOOLBOX APPLICATION . . . . . . . . . . . . . . . . . . . . 27
4.2 RQ2: Is the adaptive toolbox tractably adaptable in general? . . . . . . . . 29
4.2.1 Introducing TOOLBOX ADAPTATION and TOOLBOX READAPTATION . . 29
4.2.2 Analyzing TOOLBOX ADAPTATION . . . . . . . . . . . . . . . . . . . . 32
4.2.3 Analyzing TOOLBOX READAPTATION . . . . . . . . . . . . . . . . . . . 40
4.3 RQ3: Are there restrictions under which the adaptive toolbox is tractably
adaptable? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
4.3.1 Introducing the parameters . . . . . . . . . . . . . . . . . . . . . . . 43
4.3.2 Fixed-parameter intractability results . . . . . . . . . . . . . . . . . . 44
4.3.3 Fixed-parameter tractability results . . . . . . . . . . . . . . . . . . . 46
4.4 Summary of the results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
5 Discussion 53
5.1 The toolbox formalization . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
5.2 Plausibility of current tractability results . . . . . . . . . . . . . . . . . . . . 54
5.3 Other parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
5.4 The role of amax in intractability . . . . . . . . . . . . . . . . . . . . . . . . . 56
5.5 Some last remarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
References 58
Appendices 62
Chapter 1
Introduction
During a lifetime humans make millions of decisions. These can be decisions with little
impact, such as choosing what to eat for dinner or whether or not to do the laundry
today, or decisions with large impact, such as choosing whom to marry or choosing what
educational degree to pursue. Gerd Gigerenzer, Peter Todd and the ABC research group
have proposed a theory which is intended to capture how humans make such decisions
(2008; 2015; 1999b).
Different models of decision making have been proposed previously (Anderson &
Milson, 1989; Laplace, 1951). According to Gigerenzer et al., these models assume
that humans can make perfect decisions and can use unlimited time, knowledge and
computational power to do so, while humans do not actually have these resources to make de-
cisions (Gigerenzer & Todd, 1999a). Taking unlimited time to decide what to eat at a
restaurant would result in the restaurant closing before dinner was served or, worse,
in starvation. Moreover, humans do not have unlimited knowledge. One cannot see
into the future and know the exact consequences of marrying someone. Lastly, even if
humans were omniscient creatures, they still do not have the computational power to
integrate all that knowledge.
1.1 The adaptive toolbox

Gigerenzer and colleagues instead propose the adaptive toolbox, a metaphor for a set of
cognitive mechanisms, the tools, each of them adapted to different situations. The tools
are called heuristics, which are rules of thumb. They are all
fast and frugal, meaning that they can make decisions quickly using little information.
As such, the heuristics do not pose an unrealistic computational demand on humans.
Gigerenzer and colleagues have shown that the heuristics work well (Czerlinski, Gige-
renzer, & Goldstein, 1999; Gigerenzer, 2008), and claim that this is so because they
have been adapted to fit to the environment through evolution and/or learning (Wilke
& Todd, 2010) by changing the heuristics successively in small steps.
1.2 Motivation
The adaptive toolbox account is promising since it does not seem to propose unfeasible
resources and puts high emphasis on the environment. It may therefore be able to
explain resource-bounded human decision making. If the account is accurate it can
be used in numerous ways, for example to facilitate rational thinking by presenting
information in a format to which the cognitive system is adapted (Chase, Hertwig, &
Gigerenzer, 1998; Gigerenzer & Todd, 1999a) or to build human-level rationality into
AI systems.
However, to date, the adaptive toolbox account has not been completely worked
out. For example, it is stated that in each situation one heuristic is used but it is un-
clear how such a heuristic is selected from the entire set of heuristics. In this research
we put forth one possible formalization of the adaptive toolbox which includes a selec-
tion mechanism and determine whether under this formalization the adaptive toolbox
is fast (uses little time resources) with computational complexity analyses. This will
potentially advance the theory by rekindling the debate on how heuristics are selected.
Furthermore, there has been very little research on the adaptation process (through
evolution and/or learning) of an adaptive toolbox. Toolbox adaptation is not trivial, for
there is a large number of configurations a toolbox might have. For example, the num-
ber of heuristics, their configuration (in terms of decisions that can be made depending
on the information that they use) and the selection mechanism may all vary. Therefore
the number of possible toolboxes (which may or may not perform well) is huge and
adapting a toolbox such that it performs well may not be easy. Schmitt and Martignon
(2006) looked into the time resources needed for the adaptation process of one heuris-
tic, called Take The Best, but did not look into adapting an entire toolbox. However,
it is very important for the adaptive toolbox account to determine whether a complete
adaptive toolbox can have adapted, because humans cannot possess an adaptive toolbox
if it has not been adapted. This research is a first step in determining whether the tool-
box can have adapted. Using computational complexity analyses, we investigated the
time resources needed to adapt a toolbox and determined under what circumstances a
plausible amount of time is needed, where a plausible amount is some time which is
polynomial with respect to the size of the environment. We found that under certain
restrictions on the environment a toolbox is indeed adaptable in polynomial time.
1.3 Outline
The thesis is structured as follows. First we give an overview of the adaptive toolbox
account and review some prior research (Chapter 2). The methods section (Chapter 3)
includes an introduction to the formal concepts and tools of computational complexity
theory, the threefold research question and our formalization of an adaptive toolbox.
We use computational complexity theory to answer the research questions in the results
section (Chapter 4) and end with a general discussion (Chapter 5).
Chapter 2
Background
In this chapter we give an overview of the adaptive toolbox account. First, we briefly
explain how the mind and the environment shape human decision making. We continue
with an overview of the adaptive toolbox itself, which includes an explanation of how
heuristics work, the necessity of a fast and frugal selection mechanism and the notion
of ecological rationality and adaptability. Lastly, we discuss some prior research into the
adaptive toolbox.
sistency (always prefer a over b) and transitivity (if a is preferred over b and b is preferred
over c, then a is preferred over c)—bounded rationality is measured with correspondence
criteria—accuracy, frugality (use of little information), and speed—which measure
performance relative to the external world. This is deemed a more appropriate
performance measure, as humans need to perform well in the environment in which
they live, not to perform perfectly internally.
2.2.1 Heuristics
A heuristic is a mechanism which uses little information in order to make fast decisions
which are still accurate. To date, Gigerenzer and colleagues have proposed nearly a
dozen heuristics (Gigerenzer, 2008; Gigerenzer & Gaissmaier, 2011). Each heuristic in
the toolbox fits to a certain part of the environment, where the environment is a set of all
the situations in which a decision needs to be made that an individual may encounter. If
there were a separate heuristic for every situation, the toolbox could perform very
well, since every heuristic could be fit precisely to its own situation. However, since the
number of situations a human can come across in their lifetime is near infinite, the number
of heuristics needed to cover all possible situations would not be encodable in a brain.
Gigerenzer et al. avoid this by stating that the heuristics are able to generalize well
over different situations because they are so simple, using little information. Due to this
generalization the heuristics cannot match any situation precisely (Gigerenzer & Todd,
1999a, pg.19), but instead give ‘good enough’ decisions (Gigerenzer, 2008, pg.23).
An example heuristic is Take The Best. This heuristic has been proposed as a descrip-
tion of the processes by which people determine which of two alternatives has a higher
value of some variable based on an ordered list of pieces of information. It does not
1 Gigerenzer et al. state that in some situations other systems, like logic and probability theory, are
used instead of the toolbox (Gigerenzer, 2008). In this thesis we focus on those situations in which the
toolbox is used.
combine all information to come to a decision. Instead, Take The Best searches through
the list successively, deciding which alternative to choose based only on the first piece of
information that distinguishes the two. For example, it can be used to determine which
of two cities is larger based on a list of information which states whether the cities have
a train station, have soccer teams or are capitals. If both have a train station, this piece
of information cannot be used to decide and the second piece is used, whether the cities
have a soccer team. The first information piece which differentiates the two alternatives
is used to make a decision.
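As an illustration, the following minimal Python sketch implements the comparison rule just described; the cue names, the cue ordering and the toy city data are assumptions made for the example and are not part of Gigerenzer's specification.

```python
# A minimal sketch of Take The Best, assuming cues are already ordered by
# (subjective) validity and that each cue maps an alternative to True/False.

def take_the_best(alt_a, alt_b, ordered_cues):
    """Return the alternative judged to have the higher criterion value."""
    for cue in ordered_cues:
        value_a, value_b = cue(alt_a), cue(alt_b)
        if value_a != value_b:          # first discriminating cue decides
            return alt_a if value_a else alt_b
    return alt_a                        # no cue discriminates: default to the first

# Illustrative cues for the city-size example (hypothetical data).
cities = {
    "A": {"train_station": True,  "soccer_team": True,  "capital": False},
    "B": {"train_station": True,  "soccer_team": False, "capital": False},
}
cues = [
    lambda c: cities[c]["train_station"],
    lambda c: cities[c]["soccer_team"],
    lambda c: cities[c]["capital"],
]
print(take_the_best("A", "B", cues))   # -> "A": decided by the soccer-team cue
```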
2.2.3 Ecological rationality
If a heuristic performs well in a certain environment, it is fit to that environment and
is said to be ecologically rational in that environment. The degree to which a heuristic
exploits the structure of the environment is called its ecological rationality (Gigerenzer
& Todd, 1999a, pg.13). It is claimed by Gigerenzer et al. that heuristics, which have
bounded rationality because they are so simple, can perform well because they have a
high ecological rationality.
Gigerenzer states that “Heuristics aim at satisficing solutions (i.e., results that are
good enough), which can be found even when optimizing is unfeasible” (Gigerenzer,
2008). So, in order for a toolbox to have a high ecological rationality a satisfactory
solution must always be found. The term ‘satisfactory solution’ in this context is not
clearly defined by Gigerenzer and colleagues, although it should be a solution which is
good enough.
2.2.4 Adaptation
It is stated that an adapted toolbox is ecologically rational because it is adapted to the
environment through evolution and/or learning (Gigerenzer & Goldstein, 1999; Todd,
2001; Wilke & Todd, 2010). The term ‘adapted’ is used as both the process of changing
the toolbox to fit to the environment and the property of the toolbox.
Both the heuristics themselves and their specific configuration (e.g. ordering of cues
in Take The Best) are supposedly adapted. By exploiting the structure of the environ-
ment the adaptive toolbox is postulated to have a high accuracy even though it is fast
(Czerlinski et al., 1999; Gigerenzer & Todd, 1999a, pg.18).
Adapting heuristics
The heuristics are presumably constructed by recombining heuristics and building blocks,
small principles that for example determine how information is searched for or how a
decision is made based on the information. The building blocks may be evolved capaci-
ties, such as recognition memory, forgetting unnecessary information, imitating others
(Gigerenzer & Todd, 1999a; Wilke & Todd, 2010), and emotions (Gigerenzer, 2001;
Gigerenzer & Todd, 1999a).
2.3 Previous research
2.3.1 Research on toolbox heuristics
Take The Best has been evaluated with empirical studies and computer simulations.
Empirical studies provided evidence that Take The Best is used by humans (Bergert
& Nosofsky, 2007; Bröder, 2000; Dieckmann & Rieskamp, 2007), but this has been
questioned by others (Hilbig & Richter, 2011; Newell, 2005; Newell & Shanks, 2003;
Newell, Weston, & Shanks, 2003). With computer simulations Take The Best has been
compared to other heuristics and methods like multiple linear regression in real-world
environments where one had to decide which of two alternatives (e.g. persons) had
a higher value (e.g. attractiveness) based on cues. The tests indicated that Take The
Best predicts new data as well as, or better than, other methods such as multiple linear
regression (Czerlinski et al., 1999; Gigerenzer & Goldstein, 1999). Other heuristics,
such as the recognition heuristic (used to decide between alternatives based only on
recognition information), have also been evaluated (Borges, Goldstein, Ortmann, &
Gigerenzer, 1999; Goldstein & Gigerenzer, 1999; Pohl, 2006).
al. (2015) suggest that the processes of evolution alone could not produce ecologically
rational toolboxes. For their argument they used a simple environment which was
structured as a toolbox as formalized in this thesis. That is, there was only one correct
action in each situation and it was determined by this ‘environment toolbox’. They did
a mathematical analysis and computer simulations and found that even in this simple
environment, it seemed that the toolbox has too many degrees of freedom to have been
created by a random process, such as evolution. It was not ruled out that the toolbox is
adapted through the combination of evolution and learning, or learning alone.
In this thesis, we evaluate a part of the adaptive toolbox account, the process of
adapting the toolbox to fit to the environment, by evolution and/or learning. The com-
plexity of adapting the entire toolbox is analyzed using computational complexity theory
and we determine under what circumstances it is tractable.
Chapter 3
Methods
In this chapter we present the methods used. We give a short background in com-
putational complexity theory (Section 3.1), present the three-fold research question
(Section 3.2) and give the formalization of an adaptive toolbox (Section 3.3).
capacities.
When analyzing the complexity of a model, one first defines a function F that rep-
resents the model as a mapping from input I to output O: F : I → O. The function
specifies this mapping, without stating how this output is computed from the input.
In computational complexity theory, one tries to determine the resources required to
compute a function by any algorithm. The traveling salesman problem (TSP) is an example
of such a function: its task is to find a short route through a set of cities. Note that the TSP
function does not specify how this route is to be found. We use TSP as a running example
during the rest of this overview.
A function can be defined as a search function or a decision function. The search
version asks for an object called a solution that satisfies some criterion if there is such
an object. The decision version merely asks whether a solution exists (van Rooij, 2003).
The output of the decision version is thus either ‘yes’ or ‘no’. An instance of the function for
which the output is ‘yes’ is called a yes-instance; if the output is ‘no’, the instance is
called a no-instance. A function F : I → O is solved by an algorithm A if it gives the
correct output o ∈ O for any instance i ∈ I, i.e. if A outputs ‘yes’ if i is a yes-instance
and ‘no’ if i is a no-instance.
The search version of TSP is to find a route shorter than length k (the output) given
a set of cities and a pairwise distance between them (the input). Here, any route shorter
than length k is a solution. The decision function takes the same input, but asks whether
there is a solution (a route shorter than length k). Formally the TSP search and decision
functions are defined as follows:
and ending in the same city, such that the total cost of the tour is smaller
than or equal to B?
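For illustration, the sketch below states the TSP decision question as a brute-force Python function. The encoding of the input as a distance matrix and a bound B is an assumption made for the example, and the exhaustive search is of course not an efficient algorithm; it merely makes the input/output specification concrete.

```python
from itertools import permutations

def tsp_decision(dist, B):
    """Decide whether there is a tour visiting every city exactly once,
    starting and ending in the same city, with total cost <= B.

    dist: square matrix, dist[i][j] = cost of travelling from city i to j.
    """
    n = len(dist)
    for order in permutations(range(1, n)):          # fix city 0 as the start
        tour = (0,) + order + (0,)
        cost = sum(dist[a][b] for a, b in zip(tour, tour[1:]))
        if cost <= B:
            return True                              # yes-instance
    return False                                     # no-instance

# Example: 4 cities with symmetric distances (hypothetical numbers).
dist = [[0, 2, 9, 10],
        [2, 0, 6, 4],
        [9, 6, 0, 3],
        [10, 4, 3, 0]]
print(tsp_decision(dist, 21))   # True: e.g. the tour 0-1-3-2-0 costs 2+4+3+9 = 18
```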
running time for intractable functions grows so quickly for larger inputs, it is assumed
that people cannot solve those functions. Even if humans can do a high number of
computations a second, say ten thousand computations, it would take over a day to
solve an instance of input size 30 of an intractable function. It is even assumed that
only a subset of all tractable functions is solvable by humans (van Rooij, 2008, pg.948).
Tractability of a function is then a necessary but not sufficient condition for computa-
tional plausibility. We assume that evolution cannot solve intractable functions either,
because with reasonably large input size, for example a hundred, the time to solve an
intractable function exceeds the time that the earth has existed (Dalrymple, 2001). Therefore
we say that a model of a cognitive function, or of the evolution of a cognitive function,
must be tractable.
The class of functions which can be solved in polynomial time is called P . To prove
that a function is tractable, one must give an algorithm which can solve that function in
polynomial time.
Definition 3.1. P is the class of decision problems which are solvable in polynomial time.
To explain how we can prove that a problem is intractable, we need the class NP,
where NP stands for non-deterministic polynomial time.1 The definition for this class
is abstract. A function is in the class NP if there is an algorithm A which can determine
in polynomial time whether a given candidate solution of a function F : I → O for a
yes-instance iyes is correct.
Definition 3.2. NP is the class of decision problems for which the solution of a yes-instance
can be verified in polynomial time.
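The definition can be illustrated with a polynomial-time verifier for the TSP decision problem: given a candidate tour, checking it is easy even if finding it is not. The sketch below assumes the same distance-matrix encoding as the earlier TSP sketch.

```python
def verify_tsp_solution(dist, B, tour):
    """Check in polynomial time whether 'tour' witnesses a yes-instance of TSP.

    tour: sequence of city indices starting and ending in the same city.
    """
    n = len(dist)
    if len(tour) != n + 1 or tour[0] != tour[-1]:
        return False
    if sorted(tour[:-1]) != list(range(n)):          # every city exactly once
        return False
    cost = sum(dist[a][b] for a, b in zip(tour, tour[1:]))
    return cost <= B

dist = [[0, 2, 9, 10],
        [2, 0, 6, 4],
        [9, 6, 0, 3],
        [10, 4, 3, 0]]
print(verify_tsp_solution(dist, 21, [0, 1, 3, 2, 0]))   # True, checked in O(n) time
```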
F is NP-hard. Assuming that the hardest function in NP is not solvable in polynomial
time, an NP-hard function F is not solvable in polynomial time either. One can prove
that a function F is NP-hard with a polynomial-time reduction.
• Could a probabilistic algorithm solve the function faster than any deterministic
algorithm? Such an algorithm makes guesses and outputs the correct answer for
a high number of inputs. This notion of probabilistic algorithms is captured by
Input    O(n²)           O(2ⁿ)           O(2ⁿ)              O(2^k n²), 10,000 steps/sec
size n   100 steps/sec   100 steps/sec   10,000 steps/sec   k=2       k=10      k=25
2 0.04 sec 0.04 sec 0.02 msec 0.0016 sec 0.41 sec 3.7 hrs
5 0.25 sec 0.32 sec 0.19 msec 0.01 sec 2.56 sec 23.3 hrs
10 1.00 sec 10.2 sec 0.10 sec 0.04 sec 10.2 sec 3.9 days
15 2.25 sec 5.46 min 3.28 sec 0.09 sec 23 sec 8.7 days
20 4.00 sec 2.91 hrs 1.75 min 0.16 sec 41 sec 15.5 days
30 9.00 sec 4.1 mths 1.2 days 0.36 sec 1.5 min 5.0 wks
50 25.0 sec 8.4 ×104 yrs 8.4 centuries 1.0 sec 4.3 min 3.2 mths
100 1.67 min 9.4 ×1019 yrs 9.4 ×1017 yrs 4.0 sec 17 min 1.1 yrs
1000 2.78 hrs 7.9 ×10290 yrs 7.9 ×10288 yrs 6.7 min 28 hrs 106 yrs
Table 3.1: The computing time for algorithms which run in polynomial time (n²), exponential
time (2ⁿ) and fixed-parameter tractable time (2^k n²) as a function of the input size n
and the parameter k (in the case of the fixed-parameter tractable algorithms). In columns 2
and 3 it is assumed that a hundred computing steps per second are taken, while in columns
4 to 7 ten thousand computing steps per second are taken. Adapted from Tables 2.1 and
2.2 in van Rooij (2003).
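The growth pattern summarized in Table 3.1 can be illustrated with a few lines of arithmetic; the Python sketch below computes raw running times in seconds from the step counts and step rates assumed in the table (it reproduces the qualitative pattern rather than the exact rounded entries).

```python
# Illustrate how n^2, 2^n and 2^k * n^2 running times scale (times in seconds).
def seconds(steps, steps_per_sec):
    return steps / steps_per_sec

for n in (10, 30, 100):
    poly = seconds(n ** 2, 100)               # O(n^2) at 100 steps/sec
    expo = seconds(2 ** n, 10_000)            # O(2^n) at 10,000 steps/sec
    fpt  = seconds(2 ** 2 * n ** 2, 10_000)   # O(2^k * n^2) with k = 2
    print(f"n={n:4d}  n^2: {poly:10.2f} s   2^n: {expo:.3g} s   2^k*n^2 (k=2): {fpt:.4f} s")
```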
either.
Even though the above ways of trying to deal with intractability do not work, there is
a useful and widely-applicable way of coping with intractability. In the next section we
show how this can be done.
for the parameter size, but if k is 25, the function is already unfeasible for input size
2, taking over 3 hours to calculate. The class of functions which can be solved in
fixed-parameter tractable time is called FPT. To prove that a function is in FPT, one must
give an algorithm which solves that function in fp-tractable time.
The class FPT is the fixed-parameter analogue of the class P and there is also
an analogue to NP-hardness, called W[x]-hardness, where W[x] is a class in the W-hierarchy
{W[1], W[2], . . . , W[P], . . . , XP} (see Downey & Fellows, 2013, for the detailed
definition of these classes). Parameterized functions that are W[x]-hard are at
least as hard to solve as any problem in W[x]. It is assumed that FPT ≠ W[x], and
thus W[x]-hard functions are postulated not to be solvable in fixed-parameter tractable
time. W[x]-hard functions are said to be fixed-parameter intractable. To prove that a
parameterized function is W[x]-hard we can do a parameterized reduction from another
W[x]-hard parameterized problem.
With a parameterized reduction from κ1-F1 to κ2-F2, κ1-F1 can be solved indirectly by
κ2-F2 by reducing an instance of κ1-F1 in fp-tractable time to an instance of κ2-F2 and
then solving it with an algorithm which solves κ2-F2.
Using Lemma 3.1 and 3.2 (taken from Wareham, 1999), additional results can be
obtained from fp-(in)tractability results. That is, given some fp-(in)tractability results
for a parameterized function, usually more parameter sets can be obtained for which
the function is fp-(in)tractable.
Lemma 3.1. If function F is fp-tractable relative to parameter-set κ, then F is fp-tractable
for any parameter-set κ′ such that κ ⊂ κ′.
tion process, we need to know whether there is a tractable formalization of the toolbox.
If there is no such formalization of the toolbox, there does not exist a fast(-and-frugal)
toolbox and thus determining whether or not it can have adapted becomes redundant,
because humans cannot use an intractable toolbox. The first question therefore asks
whether there is a formalization of the adaptive toolbox which is tractable. This ques-
tion is answered using the adaptive toolbox formalization from Section 3.3.2. If there is
indeed a tractable adaptive toolbox, the main topic can be addressed. Is the adaptation
process tractable? The second and third question cover this question in two steps. The
second question concerns the tractability of adapting the formalization of the adaptive toolbox in
general. No restrictions are posed, neither on the mind nor on the environment. However,
it is plausible that some parameters need to be restricted in order to make the adaptive
toolbox tractably adaptable, e.g. posing size constraints on the toolbox or constraining
the value of the required ecological rationality. Schmitt and Martignon showed that
adapting Take The Best is intractable, which is a strong indication that adapting the
entire toolbox is intractable as well. The third question addresses this: If the toolbox
is not tractably adaptable in general, are there restrictions which do make it tractably
adaptable?
3. If not, are there restrictions under which the adaptive toolbox is tractably
adaptable?
In the next section the environment, the toolbox and ecological rationality are for-
malized. These formalizations are used in Chapter 4 where the research questions are
addressed.
tory in these situations. Our formalization of the adaptive toolbox is described next and
the term ecological rationality is formalized last.
[Column headers of Table 3.2 (rotated in the original): the information pieces 'The sun is shining', 'I am hungry', 'I am tired', 'I am outside', 'It is raining' and the five actions, including 'Eat an ice cream' and 'Eat a sandwich'.]
T T F T F 1 1 0 0 0
F F F T T 0 0 1 0 0
F T F T F 0 1 0 0 0
F F T F T 0 0 0 1 0
F T T F T 0 1 0 1 0
T F F F F 0 0 0 0 1
Table 3.2: An example environment. Each row is one situation. Columns one to five
denote whether an information piece is true (T) or false (F) in the situation; columns six
to ten denote whether an action is satisfactory (1) or unsatisfactory (0) in the situation.
Row one states that in the situation when the sun is shining, an agent is hungry, not
tired and outside and it is not raining, the satisfactory actions are eating an ice cream
and eating a sandwich.
multiple actions are satisfactory. The set of actions which are satisfactory in at least one
situation in S is called A.
An environment E = (S, A) is a set of situations and the set of satisfactory actions
for those situations. The environment does not need to contain all possible situations
({T, F}^|I|), only those that an agent would come across. Thus, S ⊆ {T, F}^|I|. A multi-
valued function D_E : S → A maps a situation to a set of actions in environment E. An
example environment is shown in Table 3.2. It contains 5 pieces of information and 6
of all 32 possible situations. For each situation the satisfactory actions are listed.
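To make the formalization concrete, the following minimal Python sketch encodes part of the example environment of Table 3.2 as a mapping from situations, tuples of truth values over I, to their sets of satisfactory actions; this mapping plays the role of D_E. The labels for the last three actions are assumptions, since only the first two action names appear in the caption.

```python
# Information pieces I (column order of Table 3.2) and actions A.
# The labels of the last three actions are assumed for illustration.
I = ("sun_shining", "hungry", "tired", "outside", "raining")
A = ("eat_ice_cream", "eat_sandwich", "go_outside", "run_around", "sleep")

# Environment E = (S, A): the multi-valued function D_E maps each situation
# (a tuple of truth values over I) to its set of satisfactory actions.
D_E = {
    (True,  True,  False, True,  False): {"eat_ice_cream", "eat_sandwich"},  # row 1
    (False, False, False, True,  True):  {"go_outside"},                     # row 2
    (False, True,  False, True,  False): {"eat_sandwich"},                   # row 3
    # ... remaining rows of Table 3.2 omitted for brevity
}

S = set(D_E)   # the situations an agent may come across, S is a subset of {T, F}^|I|
```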
Fast and Frugal Trees One of the heuristics that Gigerenzer and colleagues propose
as a tool in the toolbox is the fast-and-frugal tree (Gigerenzer & Gaissmaier, 2011;
Martignon, Vitouch, Takezawa, & Forster, 2003). This tree is a one-reason decision
mechanism, because it decides what to do based on just one piece of information.
A fast-and-frugal tree contains internal nodes and leaf nodes. Each internal node has
exactly two children, of which at least one is a leaf node. Only the last internal node
has two leaves as children. The size of a fast-and-frugal tree is defined as the number of
internal nodes it contains. See Figure 3.1a (left) for an example tree. Here, the green
nodes are internal and the blue nodes are leaf nodes. The children of cue-node c1 are
cue c2 and action a1 .
To make a decision, an agent uses a fast-and-frugal tree by metaphorically walking
through the tree over internal nodes until she arrives at a leaf node, an exit. The route of
traversal is determined by the situation in which the agent finds herself and the values
of the internal nodes. The internal nodes are cues, functions which evaluate whether
a piece of information is true or false in a situation. For example, for the information
piece: ‘I am hungry’ a cue can either ask: ‘Am I hungry?’ or ‘Am I not hungry?’. We
call a cue positive if it evaluates whether an information piece is true and negative if
it evaluates whether it is false. If a cue evaluates to true in the situation the agent
proceeds to the leaf child, otherwise she proceeds to the cue child. The leaf nodes are
actions and reaching an action is equivalent to deciding to perform that action. If none
of the cues evaluates to true, the last action of the heuristic is performed. We call this
the default action. The tree is called fast because there is an exit node at each internal
node and thus a decision can be made very quickly. It is frugal in its use of information,
because each cue evaluates only a single piece of information.
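Because every internal node of a fast-and-frugal tree has exactly one leaf child, such a tree can be represented compactly as an ordered list of (cue, exit action) pairs plus a default action. The Python sketch below follows this reading of the formalization; the representation and names are ours, not Gigerenzer's.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass(frozen=True)
class Cue:
    """Asks whether one information piece is true (positive) or false (negative)."""
    info_index: int          # index into the tuple of information pieces
    positive: bool = True

    def holds(self, situation: Tuple[bool, ...]) -> bool:
        value = situation[self.info_index]
        return value if self.positive else not value

@dataclass
class FastFrugalTree:
    exits: List[Tuple[Cue, str]]   # internal nodes: (cue, action at its leaf exit)
    default_action: str            # action at the final leaf

    def decide(self, situation: Tuple[bool, ...]) -> str:
        for cue, action in self.exits:
            if cue.holds(situation):       # first cue that holds triggers its exit
                return action
        return self.default_action        # no cue held: perform the default action

# The size of the tree is the number of internal nodes, i.e. len(exits).
tree = FastFrugalTree(exits=[(Cue(1), "eat_sandwich"), (Cue(4, positive=False), "go_outside")],
                      default_action="sleep")
print(tree.decide((False, False, True, False, True)))   # -> "sleep"
```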
The adaptive toolbox Instead of formalizing a toolbox in which only one heuristic is a
fast-and-frugal tree, as Gigerenzer et al. propose, we formalized all heuristics as
fast-and-frugal trees. We found that at least some of the heuristics that Gigerenzer has proposed
can be rewritten as fast-and-frugal trees (see Appendix A), so that the set of heuris-
tics in our formalization represents more than just one heuristic. Moreover, if we find
that the toolbox adaptation is already intractable with this smaller toolbox (the toolbox
containing a subset of all heuristics), it is likely that toolbox adaptation for a toolbox
which includes other heuristics is also intractable. We cannot make a similar gener-
Figure 3.1: a) A fast-and-frugal tree. The tree contains cues (in green; c1 , c2 , c3 and
c4 ) and actions associated to them (in blue; a1 , a2 , a3 , a4 and a5 ). Each cue ci ∈ C is
a simple Boolean function which asks whether one piece of information ii ∈ I, part of
the information in the environment, is true or false. If the cue evaluates to true, the
action attached to that cue is executed; otherwise the next cue function is tried. In this
example tree, the tree traversal stops at the latest when c4 returns false. In that case action
a5 is executed. The tree has size four as it contains four cues.
b) A toolbox. Cues are named by the function they evaluate. For example, the first cue
of the selector evaluates whether i4 is true, while the second cue evaluates whether i2 is
false. The selector is represented in orange. The selector is traversed from left to right.
If a selector-cue is evaluated to true, the corresponding heuristic is executed. When the
last cue of the selector returns false, the first heuristic is executed.
alization when toolbox adaptation is tractable with this subset of tools, because then
the extra tools might make toolbox adaptation intractable. Lastly, this is a first step in
the investigation of the tractability of the toolbox. Later research can include a higher
diversity of heuristics, including those that cannot be rewritten as fast-and-frugal trees.
As explained in Section 2.2.2, no formal definition of a selector has been given by
Gigerenzer et al., although they have stated that the selector should be some fast-and-
frugal mechanism in order for the whole toolbox to be fast and frugal. We formalized
the selector as a fast-and-frugal tree as well because this is a simple mechanism3 . Again,
if we find that toolbox adaptation is intractable with this simple version of a selector, it
will probably also be intractable for a more complicated selector.
The selector’s leaf nodes are heuristics instead of actions. If none of the cues eval-
uates to true in a situation, the first heuristic is performed.4 We call this the default
heuristic, as it is executed when no other heuristic is applicable.
An example toolbox can be seen in Figure 3.1b. Here, an agent starts at the top left
node, which evaluates i3 , and traverses the selector to the right until a cue evaluates to
true. Then a heuristic is traversed until a decision is made.
er(T, E) = |{s ∈ S : T(s) ∈ D_E(s)}| / |S|

is the ecological rationality of a toolbox T. Here, T(s) is the action chosen by toolbox T
in situation s and D_E(s) is the set of satisfactory actions according to environment E. If
T(s) is in the set D_E(s), a satisfactory action is given. We say that a toolbox is ecologically
rational when its ecological rationality er is greater than or equal to some minimal ecological
rationality ermin, i.e. er ≥ ermin, where ermin is some value between zero and one.
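Given this definition, the ecological rationality of a toolbox is simply the fraction of situations in which the action it chooses is satisfactory. The short Python sketch below makes this explicit; it assumes the D_E mapping of the earlier environment sketch and a toolbox object with a decide method.

```python
def ecological_rationality(toolbox, D_E):
    """Fraction of situations s in which the toolbox's action T(s) is in D_E(s)."""
    satisfied = sum(1 for s, good_actions in D_E.items()
                    if toolbox.decide(s) in good_actions)
    return satisfied / len(D_E)

def is_ecologically_rational(toolbox, D_E, er_min):
    """A toolbox is ecologically rational when er >= er_min (with 0 <= er_min <= 1)."""
    return ecological_rationality(toolbox, D_E) >= er_min
```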
The formalizations of the environment, the adaptive toolbox and the ecological ratio-
3 Technically, although the heuristics and the selector are trees, the entire toolbox is not a tree, since
the two nodes (the first and the last of the selector) are parents for the first heuristic node.
4 We chose to have the toolbox execute the first heuristic rather than executing the last heuristic. The
first heuristic can be seen as the most important heuristic. For example, one may first want to check
whether there is a predator. Finding yourself in a situation where no heuristic was picked by the selector,
it is a safe bet to use the most important heuristic.
nality given above are used in the functions with which we model toolbox application
and adapting a toolbox. In the next section these functions are defined and used to
formulate the research questions.
Chapter 4
Results
In this chapter, we present both our three research questions and the results of the complexity
analyses answering these questions. For each of the first two research questions we introduce
the functions that we will subsequently analyze with computational complexity
theory. For the last research question, the functions derived for the second research
question are analyzed with parameterized complexity theory.
The function TOOLBOX APPLICATION is as follows.
TOOLBOX APPLICATION
Input: A toolbox T , a situation s.
Output: The action a which is associated with situation s according to T .
use (see Chapter 2), it is valid to assume there will not be duplicate cues, as the second of
the two cues would simply be redundant.
The function can thus be solved in polynomial time and is therefore tractable.
Proof of correctness
When executing Algorithm 1, an agent goes through the selector cues of the given
toolbox one at a time, which takes at most time |selector|. For each cue, the agent
determines whether a cue is true by looking up the truth value of the piece of informa-
tion that the cue evaluates in the given situation. If the cue is true, the corresponding
heuristic is executed. The agent goes through at most |heuristic| cues of the heuristic
in the same manner as the selector: if a cue evaluates to true in the situation, the corre-
sponding action is chosen. If none of the cues in the selector evaluates to true, the first
heuristic is executed. Using this algorithm, the agent applies the toolbox in the exact
manner as proposed in our formalization. Thus, it gives the action belonging to the
given situation according to our formalized adaptive toolbox, which is the output that
should be given according to the function TOOLBOX APPLICATION.
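Algorithm 1 is described above only in prose. One possible Python rendering of it is sketched below, reusing the Cue and FastFrugalTree classes from the earlier sketch for both the heuristics and the selector; it is an illustration of our formalization, not the thesis's verbatim algorithm.

```python
class Toolbox:
    def __init__(self, selector_exits, heuristics):
        # selector_exits: list of (Cue, heuristic index) pairs;
        # heuristics: list of FastFrugalTree objects.
        self.selector_exits = selector_exits
        self.heuristics = heuristics

    def decide(self, situation):
        """TOOLBOX APPLICATION: walk the selector, then the chosen heuristic.

        Needs at most |selector| + |heuristic| cue look-ups, i.e. polynomial time.
        """
        for cue, h_index in self.selector_exits:
            if cue.holds(situation):
                return self.heuristics[h_index].decide(situation)
        # No selector cue held: fall back on the default (first) heuristic.
        return self.heuristics[0].decide(situation)
```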
Given the above, the answer to research question 1 is yes: the application of the adaptive
toolbox is in P. We continue with the results for the second research question.
In this section, two functions are introduced which will be used to answer research
questions 2 and 3. These functions need to be solved to adapt a toolbox, either by
making it from scratch or by readapting it to a slightly changed environment. Because
we analyze functions and not an algorithm solving the functions, the functions are
generic, i.e., they model both evolution and learning.
at the bottom of a heuristic) = |I| + 1 + 2(|I| + 1)² + |I| + 1 = 2(|I|² + 3|I| + 2). Thus, the entire input is:
2(|I|² + 3|I| + 2) + |I|.
Toolbox Adaptation
This function models adapting a toolbox to an environment from scratch. The input is
an environment and a minimal ecological rationality; the output is an adaptive toolbox
which performs well enough, i.e., has an ecological rationality equal to or higher than
the value defined in the input. This function is defined as follows:
TOOLBOX ADAPTATION
Input: An environment E = (S, A), the positive integers #h, |h| and nc de-
noting the maximum number of heuristics, the maximum size of a heuristic
and the maximum number of negative cues in the entire toolbox respectively,
and the minimal ecological rationality ermin ∈ [0, 1].
Question: Is there an adaptive toolbox T with at most nc negative cues,
with at most #h heuristics each at most of size |h| and with an ecological
rationality er ≥ ermin for environment E?
Environment E, the ecological rationality er and toolbox T are defined as in Section 3.3.
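TOOLBOX ADAPTATION only specifies what counts as a solution; it does not say how one should be found. As an illustration of why tractability is in question, the sketch below decides a deliberately restricted slice of the problem (a single heuristic built from positive cues only) by exhaustive search, reusing the classes and the ecological-rationality function from the earlier sketches; even this restricted candidate space grows exponentially with the allowed heuristic size.

```python
from itertools import permutations, product

def single_heuristic_toolboxes(num_info, actions, max_size):
    """Yield every toolbox consisting of one fast-and-frugal tree with positive
    cues only and at most max_size internal nodes (a restricted slice of the
    full candidate space, for illustration)."""
    for size in range(max_size + 1):
        for cue_indices in permutations(range(num_info), size):
            for acts in product(actions, repeat=size + 1):   # exit actions + default
                exits = [(Cue(i), a) for i, a in zip(cue_indices, acts)]
                tree = FastFrugalTree(exits=exits, default_action=acts[-1])
                yield Toolbox(selector_exits=[], heuristics=[tree])

def toolbox_adaptation_bruteforce(D_E, num_info, actions, max_size, er_min):
    """Decide a restricted version of TOOLBOX ADAPTATION by exhaustive search."""
    return any(ecological_rationality(tb, D_E) >= er_min
               for tb in single_heuristic_toolboxes(num_info, actions, max_size))
```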
Toolbox Readaptation
This function models adapting an existing toolbox to a new, slightly changed environ-
ment, which we call readapting. This new environment is mostly the same as the prior
environment, that is, one new situation is added to the old environment. We analyze
this function to determine how hard it is to adapt the toolbox in small steps. It could
very well be that adapting a toolbox relative to a large set of environmental changes
(and in the extreme a whole new environment) is intractable but that incrementally
adapting to each of the individual environmental changes in that set is tractable rela-
tive to each individual change.
The input of the function is an environment, a minimal ecological rationality, a
toolbox which has at least that ecological rationality in the given environment, a new
situation-action pair and the changes which can be made to the toolbox. Readaptation
will be done via the following toolbox-structure changes:
• Change an action
One can make more heuristics by adding a cue-action pair to the selector, thereby
adding a heuristic with just one action. No more complicated changes, such as switching
two heuristics, are in the list. Although it is plausible that such mechanisms exist—think
of learning to change the order without having to switch the heuristic bit by bit, or
cross-over of (bits of) genes in evolution—each such mechanism can be simulated by
using a constant-length sequence of actions drawn from the set of four given above.
For example, a switch in the order of two heuristics A and B could be simulated by
deleting heuristic A and then adding it again after B. Note that such simulations will
not change the complexity of a function from something not solvable in polynomial time
to something that is solvable in polynomial time.
The output of the function is a toolbox which is made using the changes in C and which
performs well enough in the new environment, i.e., has an ecological rationality equal
to or higher than the given value in the input.
TOOLBOX READAPTATION
Input: An environment E = (S, A), the positive integers #h, |h| and nc de-
noting the maximum number of heuristics, the maximum size of a heuristic
and the maximum number of negative cues in the entire toolbox respec-
tively, the minimal ecological rationality ermin ∈ [0, 1], an adaptive toolbox
T which has at most #h heuristics where each heuristic is at most size |h|,
with at most nc negative cues and ecological rationality er ≥ ermin , a new
s-a pair e and a set of changes which can be made C = {delete a cue-action
pair, add a cue-action pair, change a cue, change an action}.
Question: Is there an adaptive toolbox T′ reconfigured with changes from
C which has at most #h heuristics where each heuristic is at most size |h|,
with at most nc negative cues and ecological rationality er ≥ ermin in the
new environment E′ = E ∪ e?
RQ2: Is the function TOOLBOX ADAPTATION in P, and if not, is
TOOLBOX READAPTATION then in P?
DOMINATING SET
Input: A graph G = (V, E) and a positive integer k ≤ |V|.
Question: Is there a dominating set V′ for G with |V′| ≤ k?
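For concreteness, the sketch below verifies candidate dominating sets for the graph of Figure 4.1 in Python; since the figure itself only shows the vertices, the adjacency used here is read off the neighborhood-situations of Table 4.1 and should be taken as an assumption about the intended graph.

```python
def is_dominating_set(adj, candidate):
    """Check whether every vertex lies in the closed neighborhood of some
    vertex in 'candidate' (adj: dict mapping a vertex to its set of neighbors)."""
    dominated = set()
    for v in candidate:
        dominated |= adj[v] | {v}
    return dominated == set(adj)

# The graph of Figure 4.1, reconstructed from the neighborhood rows of Table 4.1.
adj = {1: {3, 7}, 2: {3}, 3: {1, 2, 4}, 4: {3, 5, 6},
       5: {4, 7}, 6: {4}, 7: {1, 5}}
print(is_dominating_set(adj, {2, 6, 7}))   # True: a dominating set of size k = 3
print(is_dominating_set(adj, {3, 5}))      # False: vertex 6 is not dominated
```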
Figure 4.1: An example instance, iDSx = (G, k), of DOMINATING SET, where parameter
k is 3. As there is a dominating set of size 3 (nodes 2, 6 and 7), this is a yes-instance.
Note that no dominating set of size 2 exists.
• #h is set to 1.
• |h| is set to k − 1.
• nc is set to k.
• ermin is set to 0.5
Situation v1 v2 v3 v4 v5 v6 v7 x1 x2 x3 x4 x5 x6 x7 av1 av2 av3 av4 av5 av6 av7 ax1 ax2 ax3 ax4 ax5 ax6 ax7
n(v1 ) T F T F F F T F F F F F F F 1 0 1 0 0 0 1 0 0 0 0 0 0 0
n(v2 ) F T T F F F F F F F F F F F 0 1 1 0 0 0 0 0 0 0 0 0 0 0
n(v3 ) T T T T F F F F F F F F F F 1 1 1 1 0 0 0 0 0 0 0 0 0 0
n(v4 ) F F T T T T F F F F F F F F 0 0 1 1 1 1 0 0 0 0 0 0 0 0
n(v5 ) F F F T T F T F F F F F F F 0 0 0 1 1 0 1 0 0 0 0 0 0 0
n(v6 ) F F F T F T F F F F F F F F 0 0 0 1 0 1 0 0 0 0 0 0 0 0
n(v7 ) T F F F T F T F F F F F F F 1 0 0 0 1 0 1 0 0 0 0 0 0 0
x1 F F F F F F F T F F F F F F 0 0 0 0 0 0 0 1 0 0 0 0 0 0
x2 F F F F F F F F T F F F F F 0 0 0 0 0 0 0 0 1 0 0 0 0 0
x3 F F F F F F F F F T F F F F 0 0 0 0 0 0 0 0 0 1 0 0 0 0
x4 F F F F F F F F F F T F F F 0 0 0 0 0 0 0 0 0 0 1 0 0 0
x5 F F F F F F F F F F F T F F 0 0 0 0 0 0 0 0 0 0 0 1 0 0
x6 F F F F F F F F F F F F T F 0 0 0 0 0 0 0 0 0 0 0 0 1 0
x7 F F F F F F F F F F F F F T 0 0 0 0 0 0 0 0 0 0 0 0 0 1
Table 4.1: The environment of the instance of TOOLBOX ADAPTATION with ermin = 0.5
which is constructed from the example instance iDSx of DOMINATING SET (Figure 4.1).
Each row is one situation and its name is displayed in the first column, where n(vi) is a
neighborhood-situation and xi is an x-situation. Columns 2 to 15 denote the information
pieces I, where each vi ∈ I (columns 2 to 8) is a vertex information piece and each
xi ∈ I (columns 9 to 15) is an x information piece. A value T or F in column i and row s
denotes that information piece i is true or false, respectively, in situation s. Columns 16
to 29 denote the set A of actions that are correct in at least one situation, where each
avi is a vertex-action and each axi is an x-action. A value 1 or 0 in column a and row s
denotes that action a is correct or incorrect, respectively, in situation s.
are x-actions, denoted as ax . An axi action is unsatisfactory for all situations, ex-
cept for situation xi . In short, the action az of any situation s is satisfactory if the
information piece iz is true.
From instance iDSx, shown in Figure 4.1, we can make instance iTAx using transformation
algorithm ATA. The parameters belonging to this instance are: #h = 2, |h| = 3,
nc = 0, ermin = 0.5. See Table 4.1 for the constructed environment.
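The environment part of the transformation ATA can also be written out directly. The Python sketch below constructs the 2|V| situations described above (those of Table 4.1 for the example graph) and returns them as a D_E-style mapping in the format of the earlier environment sketch; the variable names are ours, and the parameter settings of the reduction are left to the surrounding text.

```python
def transform_dominating_set_environment(adj):
    """Construct the environment of the TOOLBOX ADAPTATION instance built by ATA
    from a DOMINATING SET graph (adj: vertex -> set of neighbors).

    For each vertex v there is a neighborhood-situation n(v) in which exactly the
    information pieces of v's closed neighborhood are true and exactly the
    corresponding vertex-actions are satisfactory; for each vertex there is
    additionally one x-situation with its own x information piece and x-action.
    """
    vertices = sorted(adj)
    n = len(vertices)
    info = [f"v{v}" for v in vertices] + [f"x{j}" for j in range(1, n + 1)]

    D_E = {}
    for v in vertices:                                   # neighborhood-situations n(v)
        closed = adj[v] | {v}
        situation = tuple(p.startswith("v") and int(p[1:]) in closed for p in info)
        D_E[situation] = {f"a_v{u}" for u in closed}
    for j in range(1, n + 1):                            # x-situations
        situation = tuple(p == f"x{j}" for p in info)
        D_E[situation] = {f"a_x{j}"}
    return info, D_E    # 2|V| situations, 2|V| information pieces, 2|V| actions

# For the graph of Figure 4.1 this yields the 14 situations of Table 4.1.
```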
Running time of algorithm ATA: Setting the parameters takes one step for each parameter
(4 steps in total). Each of the |V| situations n(v) of the environment is set by
looking up the neighbors of v in G. This takes time |V|. Setting the value of all information
pieces takes one step for each piece (2|V| steps). Setting an information piece
for each of the |V| x-situations takes one step as well (2|V| steps). The action az of
any situation s is set to satisfactory if the information piece iz is true (2|V|). Making
the whole instance takes time 4 + 5|V|² + 4|V|², which has complexity O(|V|²). The
transformation thus runs in polynomial time with respect to the size of the instance of
DOMINATING SET.
Step 2. If iDS is a yes-instance then iTA = ATA(iDS) is a yes-instance
Given a yes-instance iDS = (G(V, E), k), there is a set of vertices V′ ⊆ V which
dominates all of the vertices v ∈ V. That is, for every vertex v ∈ V, a vertex in its
closed neighborhood is in V′. For instance iTA, which is constructed from iDS using
algorithm ATA, a toolbox can be constructed with one heuristic of size k − 1. The
selector cue is set to v1. This could have been any cue, as there is only one heuristic
and as this heuristic is also the default and thus the only one which is used. All vertices
in dominating set V′ except one, called vk, are cues in the heuristic. That is, for each
vi ≠ vk ∈ V′ there is an information piece vi as cue. Each cue vi has a corresponding
action avi and the default action is avk, the action belonging to vk. Figure 4.2a illustrates
the constructed toolbox for instance iTAx.
There are |V | + |X| = 2|V | situations in the environment, so in order for the toolbox
to obtain an ecological rationality ≥ 0.5, a satisfactory action needs to be given in at
least |V | situations. For all |V | neighborhood-situations n(v) a satisfactory action is
given because the actions of the heuristic are set as the dominating set. That is, in any
neighborhood-situation n(v), a cue vi is true if and only if an action avi is satisfactory and
if and only if vertex vi is in the closed neighborhood of v. Because there is a dominating
set for iDS , there is at least one such vi cue and action for n(v). Either this vi is a cue-
action pair in the heuristic, and then a satisfactory action is chosen for situation v, or
none of the cues vj in the heuristic are vi and the default action is performed. However,
as there is at least one vertex vi for each v in the dominating set, and none of the cue
pairs corresponding to the dominating set (except vk ) were true, the last vertex, vk ,
must dominate v and thus vertex vk corresponding to action avk must dominate v. As
such, for any neighborhood-situation a satisfactory action is chosen. There are no ax
actions in this toolbox and thus no actions which satisfy any x-situation. The ecological
rationality of this toolbox is thus exactly

er = |V| / (2|V|) = ermin = 0.5.    (4.1)
The single heuristic has the correct length (k − 1). Thus, if instance iDS of DOMINATING
SET is a yes-instance, the transformed instance iTA of TOOLBOX ADAPTATION is also a
yes-instance.
Step 3. If iTA = ATA(iDS) is a yes-instance then iDS is a yes-instance
Given a yes-instance iTA = (E′, ermin = 0.5), there is a toolbox T such that the ecological
rationality is equal to or higher than 0.5 in environment E′ with one heuristic of length
k − 1. This means that T gives a satisfactory action for at least |V| situations, as there
are 2|V| situations in E′. We show that it is only possible to satisfy |V| situations if there
is a dominating set of size at most k in graph G of iDS = (G, k). The selector cue may
evaluate any information piece, since the default heuristic is always applied because
there are no other heuristics. We go through three toolbox types:
This means that we do not even need to assume there may only be positive cues,
as it does not matter what value the cues have.
er ≥ (m + |V| − m) / (2|V|) = ermin = 0.5.    (4.4)
The m x-situations are satisfied only if there are m x-actions in the toolbox. The
|V | − m neighborhood-situations must then be satisfied by the k − m actions which
are left, because none of the x-actions are associated with any neighborhood-
situations. If this is possible there is a dominating set of size ≤ k for iDS . This
is so because there are k − m v-actions which correspond to vertices, a partial
dominating set, with which at least |V | − m vertices are covered. There are at
most m unsatisfied neighborhood-situations (corresponding to the vertices not
dominated by this partial dominating set). The partial dominating set may be m
pieces larger, since the dominating set is allowed to be size k. The at-most-m not-
dominated vertices can be covered by choosing those m vertices as the rest of the
dominating set. The total size of the dominating set is then ≤ k and all |V | vertices
are covered. Here too, it does not matter how many negative cues there are in the
toolbox, because as long as for all situations a satisfactory action is chosen, it does
not matter how one arrives at that action.
The three types of toolboxes above form the only possible toolboxes that can be made
given a transformed instance of T OOLBOX A DAPTATION. Thus, if an instance of T OOLBOX
A DAPTATION is a yes-instance, there is at least one viable toolbox of one of the toolbox
types, i.e., a toolbox with er ≥ ermin with one heuristic of size k − 1. We have shown
that if a viable toolbox of any of those types can be made for an instance, there is a
possible dominating set of at most size k for the instance of DOMINATING SET. Thus, we
have proved that iDS is a yes-instance if iTA = ATA(iDS) is a yes-instance.
We have now proven that all three conditions of a correct polynomial-time reduction
hold for our reduction from DOMINATING SET to TOOLBOX ADAPTATION (with ermin =
0.5), thereby proving that DS polynomial-time reduces to TA. This means that TOOLBOX
ADAPTATION is at least as hard as DOMINATING SET and, because DOMINATING SET is
NP-hard, we prove that TOOLBOX ADAPTATION is NP-hard as well. The answer to the
first part of research question 2 is no, TOOLBOX ADAPTATION is not in P.
Discussion
The reduction above proves that adapting a toolbox is not tractable if the toolbox needs
to have an ecological rationality of at least 0.5. This proof can be adapted for any
minimal ecological rationality writable as
ermin = |V| / (|V| + |X|),    (4.5)
where |X| is the number of x-situations, x-information pieces and x-actions, which can
be any non-negative integer. Note that Equations 4.1 to 4.4 in steps 2 and 3
of the proof can be replaced by their generalized versions in Equations 4.6 to 4.9. The
reduction still holds relative to these generalized versions.
er = |V| / (|V| + |X|) = ermin    (4.6)

er = k / (|V| + |X|) < ermin    (4.7)

er = |V| / (|V| + |X|) = ermin    (4.8)

er ≥ (m + |V| − m) / (|V| + |X|) = ermin    (4.9)
We have shown that adapting a toolbox is not tractable in general for a minimal
ecological rationality writable as Equation 4.5. However, although this means that for each
such ermin at least one instance exists which is not solvable in polynomial time, there
also exist instances for at least some such values of ermin where adapting a toolbox
can be done in polynomial time. This is true as long as the number of situations for
which a satisfactory action needs to be given is small. For example, in an instance
where the toolbox needs to perform a satisfactory action in only one situation in an
entire environment E = (S, A), the minimal ecological rationality is 1/|S|. A toolbox can
be constructed in polynomial time which contains one heuristic with one action a ∈ A
which is associated with at least one situation s ∈ S (Figure 4.3a). Constructing a
toolbox which performs a satisfactory action in at least two situations in E (ermin = 2/|S|)
can also be done in polynomial time, even if these situations need different actions
(Figure 4.3b). The first cue in the only heuristic in a toolbox is set to cue ii and an
action a which is satisfactory in a situation where ii is true is paired with the cue. The
default action of the heuristic is set to an action b which is satisfactory in a situation
where ii is false.
It is not clear how constructing a toolbox in such a way generalizes to having to
give a satisfactory action in a higher number of situations. Since we know that for
any ermin writable as Equation 4.5 there exists an instance which is not solvable in
polynomial time, we know for certain that, unless P = NP, a generalized polynomial-time
construction of a toolbox with high enough ecological rationality is impossible.
(a) The toolbox which is constructed for a yes-instance of TOOLBOX ADAPTATION. All
actions avz correspond to a vertex vz in a dominating set V′.
(b) The toolbox which is constructed for a yes-instance of TOOLBOX READAPTATION. All
actions ki correspond to a vertex vi in a dominating set V′.
Figure 4.2: Example toolboxes for TOOLBOX ADAPTATION (a) and TOOLBOX READAPTATION (b).
(a) A toolbox which always performs action a, no matter what the situation is.
(b) A toolbox which performs action a in all situations where ii is true and b in all
situations where ii is false.
Figure 4.3: Simple toolboxes with er ≥ ermin for sufficiently small values of ermin.
• #h is set to 1.
• |h| is set to k.
• nc is set to 0.
• ermin is set to 1.
• For each vertex vi ∈ V one information piece called vi is made, and an additional
piece, called d. For each neighborhood-situation n(v), all information pieces vi ∈ I
are set to false, except those information pieces corresponding to the closed neigh-
borhood of v and piece d which are set to true. For situation ∅, all information
pieces are set to false.
• Action set A contains three actions (v, a and b). For each neighborhood-situation
n(v), the action v is satisfactory, the other actions are unsatisfactory. For the ∅-
situation all actions except a are unsatisfactory; a is satisfactory.
Algorithm ATR constructs the new situation e by setting all information pieces to false
except piece d, which is set to true. Action b is satisfactory, the others are unsatisfactory.
Algorithm ATR constructs toolbox T as a toolbox with one heuristic and selector cue
v1. The first k cues of the heuristic are random cues (other than d) and the last cue is d.
It has action v for all cues except for the default action, which is a. See Figure 4.2a
for the toolbox. This prior toolbox performs perfectly in E: if none of the cues above
d evaluates to true, cue d will catch the neighborhood-situation so that a satisfactory
action, v, is still performed. Situation ∅ is also satisfied, because none of the cues is true
for ∅.
Finally, the set of changes C can be found in Section 4.2.1.
From instance iDSx, shown in Figure 4.1, we can make instance iTRx using transformation
algorithm ATR. The parameters belonging to this instance are: #h = 1, |h| = 3,
nc = 0 and ermin = 1. The environment E and new situation e are shown in Table 4.2
and toolbox T can be found in Figure 4.2a.
Running time of algorithm ATR: Setting the parameters takes one step each (4 steps
in total). For each vertex v of the |V| vertices in DOMINATING SET, a situation with |V| + 1
information pieces and three actions is constructed by determining the neighbors of v in
|V| steps and then setting the |V| + 1 information pieces and 3 actions. Constructing the
situations ∅ and e takes in total time 2(|V| + 1 + 3). Constructing an instance iTR takes
time 4 + |V|(2|V| + 1) + (2|V| + 1 + 3), which has complexity O(|V|²). The transformation
thus runs in polynomial time with respect to the size of the instance of DOMINATING SET.
Situation   v1  v2  v3  v4  v5  v6  v7  d  |  v  a  b
n(v1)        T   F   T   F   F   F   T  T  |  1  0  0
n(v2)        F   T   T   F   F   F   F  T  |  1  0  0
n(v3)        T   T   T   T   F   F   F  T  |  1  0  0
n(v4)        F   F   T   T   T   T   F  T  |  1  0  0
n(v5)        F   F   F   T   T   F   T  T  |  1  0  0
n(v6)        F   F   F   T   F   T   F  T  |  1  0  0
n(v7)        T   F   F   F   T   F   T  T  |  1  0  0
∅            F   F   F   F   F   F   F  F  |  0  1  0
e            F   F   F   F   F   F   F  T  |  0  0  1

Table 4.2: The environment E (situations n(v1) to n(v7) and ∅) and the new situation e constructed from instance iDSx. The left-hand columns give the truth value of each information piece; the right-hand columns indicate which of the actions v, a and b are satisfactory (1) in each situation.
when action a is the default action and cue d paired with action b is directly above. The
only way a neighborhood-situation n(v) is satisfied is that at least one of the k cues is an
information piece which is true in situation n(v). Since 'true' for information piece vi
directly codes that vi is in the closed neighborhood of v, the vertex is dominated. This
holds for all vertices, since all have a corresponding situation. Thus, if an instance of
TOOLBOX READAPTATION is a yes-instance, the corresponding instance of DOMINATING SET
is also a yes-instance.
We have proven that all three conditions for a correct polynomial-time reduction
hold for our reduction from DOMINATING SET to TOOLBOX READAPTATION, thereby proving
that DOMINATING SET polynomial-time reduces to TOOLBOX READAPTATION. As DOMINATING SET is NP-hard, this proves
that TOOLBOX READAPTATION is NP-hard as well. The answer to the second part of research
question 2 is no, TOOLBOX READAPTATION is not in P (unless P = NP).
Discussion
We have proven that, in addition to TOOLBOX ADAPTATION, even the seemingly simpler
function TOOLBOX READAPTATION is NP-hard. However, this second result only holds
when the minimal ecological rationality is 1.0, which means that the toolbox must perform
perfectly.
size of a heuristic (defined as the maximum number of cues in a heuristic), are parameters
for the size of the toolbox. Together they completely constrain the size of the
toolbox. It seems plausible that restricting these parameters makes it easier to adapt a
toolbox, because it restricts the search space. The parameters nc and pc represent the
number of negative and positive cues in the toolbox, respectively. Combined, they give
the total number of cues in the selector and heuristics of a toolbox. The minimal ecological
rationality, ermin, states how well a toolbox needs to perform. It seems plausible
that it is easier to find a toolbox if ermin is restricted, because if the minimal ecological
rationality is low, there will probably be many toolboxes which reach this value and
thus perform well enough. The parameters #h, |h|, nc, pc and ermin are all related to
the toolbox by either restricting the size or the performance of a toolbox.
The other four parameters involve the environment. The number of information
pieces in the environment, |I|, restricts the number of toolbox candidates by restricting
the number of cues that can be made. Parameters |A| and amax are the number of
actions and the maximum number of satisfactory actions per situation, respectively. Lastly, we look
at the correlation in the environment. This restriction is suggested by Gigerenzer et
al. to explain why a toolbox can perform well (Martignon & Hoffrage, 1999). It is
stated that little information is needed in order for the toolbox to perform well, because
information is often highly correlated (Todd, 2001). However, it might also be used to
explain why a toolbox is able to adapt quickly. An information piece is fully correlated
with a second if it has the same value (true or false) as the second for all situations.
If two pieces of information are fully correlated, this means that one of the pieces is
redundant, because all the information can be obtained by attending to only one of
the two pieces. We define parameter corr as the number of groups of fully correlated
information pieces.
Name    Definition
#h      The number of heuristics in a toolbox
|h|     The length of a heuristic, counted in the number of cues
nc      The number of negative cues in a toolbox
pc      The number of positive cues in a toolbox
ermin   The minimal ecological rationality a toolbox should have
|I|     The number of information pieces in an environment
|A|     The number of actions
amax    The maximum number of satisfactory actions per situation
corr    The number of groups of fully correlated information pieces

Table 4.3: The parameters which may be causing the intractability of TOOLBOX ADAPTATION and TOOLBOX READAPTATION.
TOOLBOX ADAPTATION
We show that the polynomial-time reduction from DOMINATING SET to TOOLBOX ADAPTATION
(Section 4.2.2) is also a parameterized reduction with respect to parameters #h,
|h|, nc, pc and ermin. To prove this we need to prove that the four points in Definition 3.5
hold. We already proved that the transformation runs in polynomial time. This means
it also runs in fp-tractable time, O(f(k) · n^c). We can set f(k) to a constant function,
independent of k, so that O(f(k) · n^c) becomes O(n^c), which is some polynomial-time
function. Points 2 and 3 are already proved for the polynomial-time reduction. This
leaves us with proving point 4, whether the parameters #h, |h|, nc, pc and ermin are
dependent only on k. That is, whether the parameters can be written as some function
of k, independent of the input size of DOMINATING SET.
Parameter |h| is set to k − 1, a function of k alone (f(k) = k − 1). Parameter nc is
also dependent only on k, as nc = k. Since any number of positive cues is allowed,
pc can likewise be set to k in this reduction. Parameters
#h and ermin are constants and thus trivial functions of k (g(k) = 1 and h(k) = 0.5).
This means that point 4 holds as well, and thus the reduction in Section 4.2.2 is also a
parameterized reduction from k-DOMINATING SET to {#h, |h|, nc, pc, ermin}-TOOLBOX
ADAPTATION.
It follows that {#h, |h|, nc, pc, ermin}-TOOLBOX ADAPTATION is at least as hard as
k-DOMINATING SET, and thus W[2]-hard as well. Given Lemma 3.2, TOOLBOX ADAPTATION
is also fp-intractable for any subset of the parameter set.
TOOLBOX READAPTATION
We show that the polynomial-time reduction from DOMINATING SET to TOOLBOX READAPTATION
(Section 4.2.3) is also a parameterized reduction with respect to parameters #h,
|h|, nc, pc and ermin. To prove this we need to prove that the four points in Definition 3.5
hold. We proved that the transformation runs in polynomial time, which means it also
runs in fp-tractable time, O(f(k) · n^c); this proves point 1. We also proved points 2
and 3. Both parameters |h| and pc equal k, and the other parameters are constants in this
reduction: #h is 1, nc is 0, ermin is 1, amax is 1 and |A| is 3. Thus, point 4 also
holds, and {#h, |h|, nc, pc, ermin, amax, |A|}-TOOLBOX READAPTATION is W[2]-hard.
TOOLBOX READAPTATION is also fp-intractable for any subset of the parameter set.
can be constructed from a prior T. Therefore, if a toolbox T' with er ≥ ermin can be
found, the output of function TOOLBOX READAPTATION is also 'yes'.
Step 1: Generating toolbox set T from I and A. One can generate all possible
toolboxes by simply constructing all combinations. We assume that any toolbox
only contains useful cues (see Section 4.1.2 for an explanation of useful cues), so that
the maximum size of a heuristic and that of a
selector are both |I| + 1. In our further calculations of the number of toolboxes,
we ignore this assumption for simplicity. As such, the resulting number of
generated toolboxes is an upper bound.
A heuristic of length |I| + 1 contains |I| + 1 cues and |I| + 2 actions. Each cue
can have 2|I| different values and each action can have |A| different values. The
number of possible heuristics is thus (2|I|)^(|I|+1) × |A|^(|I|+2). A toolbox contains at
most |I| + 1 heuristics, using the same reasoning as for the size of a heuristic.
Further, a toolbox contains a selector with |I| + 1 cues, each of which can have
2|I| different values. The number of possible selectors is thus (2|I|)^(|I|+1). In total,
there are (2|I|)^(|I|+1) × |A|^(|I|+2) × (|I| + 1) × (2|I|)^(|I|+1) possible toolboxes. We call this
set of toolboxes T and write its size as a function f1(|I|, |A|).
A toolbox can be created by adding cue-action pairs. The number of cue-action
pairs in a toolbox is at most (|I| + 2) × (|I| + 1). Thus, at most (|I| + 2) × (|I| + 1) steps need to be taken to
create one toolbox. We write the time to create one toolbox as a function f2(|I|).
The time to generate the entire toolbox set T is then f1(|I|, |A|) × f2(|I|) and thus
depends only on |I| and |A|.
Step 2: Finding a viable toolbox in set T. In this step all toolboxes in T are evaluated.
To evaluate the ecological rationality of one toolbox, one has to go through
all situations s ∈ S and determine whether or not a satisfactory action is given by
that toolbox. This is done by evaluating at most 2(|I| + 1) information pieces in
the selector and the chosen heuristic combined. Thus for each toolbox we need time
|S| × 2(|I| + 1). As explained in Section 3.3.1, the total set of situations is {T, F}^|I|,
so that there are 2^|I| situations. |S| × 2(|I| + 1) can therefore be rewritten as 2^|I| × 2(|I| + 1),
which is some function f3(|I|), dependent only on |I|. To evaluate whether a
toolbox follows the constraints #h, |h| and nc, one has to go through all cues in
the toolbox and keep track of the number of heuristics, the number of negative
cues and the size of the largest heuristic. This takes at most time f2(|I|), as the time to
create one toolbox is the number of cues in the toolbox.
The whole algorithm thus takes time f1(|I|, |A|) × f2(|I|) + f1(|I|, |A|) × f3(|I|) + f2(|I|) =
f4(|I|, |A|). The running time of this algorithm is thus dependent only on the number
of information pieces and actions. The complexity of the algorithm can be written as
O(f4(|I|, |A|) × |I|^0).
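The brute-force algorithm can be sketched as follows; for brevity the sketch only enumerates toolboxes consisting of a single heuristic and ignores the selector, negative cues and the constraints #h, |h| and nc (the names brute_force and the callable satisfactory are illustrative, not part of the formalization). The point of the sketch is that both the number of candidates and the cost of evaluating each candidate are bounded by functions of |I| and |A| alone.

    from itertools import product

    def apply_heuristic(cue_actions, default_action, situation):
        # Return the action of the first cue that is true in the situation.
        for piece, action in cue_actions:
            if situation[piece]:
                return action
        return default_action

    def brute_force(pieces, actions, satisfactory, er_min):
        """Search all single-heuristic candidates for one with er >= er_min."""
        situations = [dict(zip(pieces, values))                    # all 2^|I| situations
                      for values in product([True, False], repeat=len(pieces))]
        for length in range(len(pieces) + 1):                      # heuristic length
            for cues in product(pieces, repeat=length):            # piece tested by each cue
                for acts in product(actions, repeat=length + 1):   # cue actions plus default
                    candidate = (list(zip(cues, acts[:-1])), acts[-1])
                    hits = sum(apply_heuristic(*candidate, s) in satisfactory(s)
                               for s in situations)
                    if hits / len(situations) >= er_min:
                        return candidate
        return None                                                # no candidate reaches er_min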
Parameter corr
We prove that the functions TOOLBOX ADAPTATION and TOOLBOX READAPTATION that are
in FPT for parameter set {|I|, |A|} are also in FPT when the parameter set includes
corr instead of |I|. We do this by giving an algorithm which solves the functions in time
f(corr, |A|) × |I| for some function f. This algorithm makes use of the algorithm above, by inserting Step 0
before Steps 1 and 2.
Step 0: Finding a set of uncorrelated information pieces Ic . In this step the groups
of correlated information pieces are determined. First, two information pieces are
compared in each situation. If they have the same value in each situation they
are correlated and therefore put in one group, otherwise they are put in separate
groups.
Every other information piece ii is appointed to a group by comparing it to the
existing groups. This is done by picking one information piece ij from a group I 0
and determining whether ii and ij are correlated. If they are, ii is assigned to I 0 ;
otherwise ii is compared to another group that has not been compared to ii . If no
such group is available, ii is assigned to a new group. All i ∈ I are assigned to a
group in this manner.
All information pieces in one group always have the same value in any situation.
If the value of one information piece is known, the values of the other pieces in
the group can be directly inferred. Therefore there is no loss of information if only
one information piece is used. One random information piece is chosen from each
group. These pieces form a new set, called Ic, and its size is corr, the number of
groups of fully correlated information pieces.
Comparing two information pieces takes time 2|S| and each information piece
needs to be compared with at most the number of correlated groups in the environment,
corr. Above we defined the maximum number of situations as 2^|I|.
Only one information piece per group of correlated pieces can contribute to the
number of situations, as each situation is different from the next. The number
of situations is therefore at most 2^corr. For each information piece at most corr
comparisons are made and there are |I| information pieces. Picking one information
piece from each group takes time corr. The total time of this step is then
2 × 2^corr × corr × |I| + corr, which can be written as some function f5(corr) × |I|.
Step 1 and 2. The two steps from above are now run. However, as input they do
not receive I and A, but Ic and A. In the running time we can simply replace
the size of I with the size of Ic (corr). The running time for Steps 1 and 2 is then
f4(corr, |A|).
The whole algorithm takes time f5(corr) × |I| + f4(corr, |A|), which is of complexity
O(f(corr, |A|) × |I|) for some function f. This is a fixed-parameter tractable running time
with respect to corr and |A|. Thus, {corr, |A|}-TOOLBOX ADAPTATION and {corr, |A|}-
TOOLBOX READAPTATION are in FPT.
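Step 0 can be sketched in a few lines (situations given as a list of dicts; the name uncorrelated_pieces is illustrative). Using the whole column of truth values of a piece as a dictionary key plays the role of the pairwise comparisons described above: pieces with identical columns end up in the same group.

    def uncorrelated_pieces(pieces, situations):
        """Return one representative information piece per group of fully
        correlated pieces; the result plays the role of the set Ic."""
        groups = {}                                   # value column -> first piece seen with it
        for piece in pieces:
            column = tuple(s[piece] for s in situations)
            groups.setdefault(column, piece)          # identical columns join the same group
        return list(groups.values())                  # the returned set has size corr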
Parameter amax
We prove that the functions TOOLBOX ADAPTATION and TOOLBOX READAPTATION that are
in FPT for the parameter sets {|I|, |A|} and {corr, |A|} are also in FPT when the parameter
sets include amax instead of |A|.
The total number of actions, |A|, depends on the number of situations and on the
number of satisfactory actions per situation, amax. There can be at most amax different
actions for each situation, so |A| ≤ amax × |S|.
The first algorithm (Steps 1 and 2) takes time f4(|I|, |A|) and solves both {|I|, |A|}-
TOOLBOX ADAPTATION and {|I|, |A|}-TOOLBOX READAPTATION in fp-tractable time. The
number of situations in this algorithm is 2^|I| and we can therefore bound |A| by amax ×
2^|I|, such that the time to run the algorithm depends only on amax
and |I|. Therefore, {|I|, amax}-TOOLBOX ADAPTATION and {|I|, amax}-TOOLBOX READAPTATION
are in FPT as well.
The second algorithm (Steps 0, 1 and 2) takes time f5(corr) × |I| + f4(corr, |A|) and solves
both {corr, |A|}-TOOLBOX ADAPTATION and {corr, |A|}-TOOLBOX READAPTATION in fp-tractable
time. The number of situations in this algorithm is 2^corr and thus we can
bound |A| by amax × 2^corr, so that the running time can be rewritten as depending only on
corr, amax and |I|. The entire time to run the second algorithm is then of the form
f(corr, amax) × |I| for some function f. Therefore, {corr, amax}-TOOLBOX ADAPTATION and {corr, amax}-
TOOLBOX READAPTATION are in FPT as well.
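Written out, and assuming as above that the running-time bound is monotone in |A|, the substitution for the second algorithm is the one-line rewriting

    |A| \le a_{max} \cdot |S| \le a_{max} \cdot 2^{corr}
    \quad\Longrightarrow\quad
    f_5(corr) \cdot |I| + f_4(corr, |A|)
    \;\le\; f_5(corr) \cdot |I| + f_4\!\left(corr,\, a_{max} \cdot 2^{corr}\right)
    \;=\; O\!\left(f(corr, a_{max}) \cdot |I|\right)

for some function f; the same rewriting with |I| in place of corr gives the bound for the {|I|, amax} parameter set.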
                              ∅           #h          |h|         #h,|h|      nc,pc       nc,pc,#h    nc,pc,|h|   nc,pc,#h,|h|
∅                             NP-hard     W[2]-hard   W[2]-hard   W[2]-hard   W[2]-hard   W[2]-hard   W[2]-hard   W[2]-hard
ermin                         W[2]-hard   W[2]-hard   W[2]-hard   W[2]-hard   W[2]-hard   W[2]-hard   W[2]-hard   W[2]-hard
|A|                           ?           ?           ?           ?           ?           ?           ?           ?
ermin, |A|                    ?           ?           ?           ?           ?           ?           ?           ?
amax                          ?           ?           ?           ?           ?           ?           ?           ?
ermin, amax                   ?           ?           ?           ?           ?           ?           ?           ?
amax, |A|                     ?           ?           ?           ?           ?           ?           ?           ?
ermin, amax, |A|              ?           ?           ?           ?           ?           ?           ?           ?
|I|                           ?           ?           ?           ?           ?           ?           ?           ?
ermin, |I|, amax, |A|         FPT         FPT         FPT         FPT         FPT         FPT         FPT         FPT
corr                          ?           ?           ?           ?           ?           ?           ?           ?
ermin, corr                   ?           ?           ?           ?           ?           ?           ?           ?
corr, |A|                     FPT         FPT         FPT         FPT         FPT         FPT         FPT         FPT
ermin, corr, |A|              FPT         FPT         FPT         FPT         FPT         FPT         FPT         FPT
corr, amax                    FPT         FPT         FPT         FPT         FPT         FPT         FPT         FPT
ermin, corr, amax             FPT         FPT         FPT         FPT         FPT         FPT         FPT         FPT
corr, amax, |A|               FPT         FPT         FPT         FPT         FPT         FPT         FPT         FPT
ermin, corr, amax, |A|        FPT         FPT         FPT         FPT         FPT         FPT         FPT         FPT
corr, |I|                     ?           ?           ?           ?           ?           ?           ?           ?
ermin, corr, |I|              ?           ?           ?           ?           ?           ?           ?           ?
corr, |I|, |A|                FPT         FPT         FPT         FPT         FPT         FPT         FPT         FPT
ermin, corr, |I|, |A|         FPT         FPT         FPT         FPT         FPT         FPT         FPT         FPT
corr, |I|, amax               FPT         FPT         FPT         FPT         FPT         FPT         FPT         FPT
ermin, corr, |I|, amax        FPT         FPT         FPT         FPT         FPT         FPT         FPT         FPT
corr, |I|, amax, |A|          FPT         FPT         FPT         FPT         FPT         FPT         FPT         FPT
ermin, corr, |I|, amax, |A|   FPT         FPT         FPT         FPT         FPT         FPT         FPT         FPT
Table 4.4: A summary of the results for TOOLBOX ADAPTATION. An entry in a certain column and row denotes whether the parameters of that column and row together make the function TOOLBOX ADAPTATION FPT, NP-hard (only in the case when the set is empty) or W[2]-hard. For some sets, this cannot be inferred from the results; this is indicated with a question mark.
                              ∅           #h          |h|         #h,|h|      nc,pc       nc,pc,#h    nc,pc,|h|   nc,pc,#h,|h|
∅                             NP-hard     W[2]-hard   W[2]-hard   W[2]-hard   W[2]-hard   W[2]-hard   W[2]-hard   W[2]-hard
ermin                         W[2]-hard   W[2]-hard   W[2]-hard   W[2]-hard   W[2]-hard   W[2]-hard   W[2]-hard   W[2]-hard
|A|                           W[2]-hard   W[2]-hard   W[2]-hard   W[2]-hard   W[2]-hard   W[2]-hard   W[2]-hard   W[2]-hard
ermin, |A|                    W[2]-hard   W[2]-hard   W[2]-hard   W[2]-hard   W[2]-hard   W[2]-hard   W[2]-hard   W[2]-hard
amax                          W[2]-hard   W[2]-hard   W[2]-hard   W[2]-hard   W[2]-hard   W[2]-hard   W[2]-hard   W[2]-hard
ermin, amax                   W[2]-hard   W[2]-hard   W[2]-hard   W[2]-hard   W[2]-hard   W[2]-hard   W[2]-hard   W[2]-hard
amax, |A|                     W[2]-hard   W[2]-hard   W[2]-hard   W[2]-hard   W[2]-hard   W[2]-hard   W[2]-hard   W[2]-hard
ermin, amax, |A|              W[2]-hard   W[2]-hard   W[2]-hard   W[2]-hard   W[2]-hard   W[2]-hard   W[2]-hard   W[2]-hard
|I|                           ?           ?           ?           ?           ?           ?           ?           ?
ermin, |I|, amax, |A|         FPT         FPT         FPT         FPT         FPT         FPT         FPT         FPT
corr                          ?           ?           ?           ?           ?           ?           ?           ?
ermin, corr                   ?           ?           ?           ?           ?           ?           ?           ?
corr, |A|                     FPT         FPT         FPT         FPT         FPT         FPT         FPT         FPT
ermin, corr, |A|              FPT         FPT         FPT         FPT         FPT         FPT         FPT         FPT
corr, amax                    FPT         FPT         FPT         FPT         FPT         FPT         FPT         FPT
ermin, corr, amax             FPT         FPT         FPT         FPT         FPT         FPT         FPT         FPT
corr, amax, |A|               FPT         FPT         FPT         FPT         FPT         FPT         FPT         FPT
ermin, corr, amax, |A|        FPT         FPT         FPT         FPT         FPT         FPT         FPT         FPT
corr, |I|                     ?           ?           ?           ?           ?           ?           ?           ?
ermin, corr, |I|              ?           ?           ?           ?           ?           ?           ?           ?
corr, |I|, |A|                FPT         FPT         FPT         FPT         FPT         FPT         FPT         FPT
ermin, corr, |I|, |A|         FPT         FPT         FPT         FPT         FPT         FPT         FPT         FPT
corr, |I|, amax               FPT         FPT         FPT         FPT         FPT         FPT         FPT         FPT
ermin, corr, |I|, amax        FPT         FPT         FPT         FPT         FPT         FPT         FPT         FPT
corr, |I|, amax, |A|          FPT         FPT         FPT         FPT         FPT         FPT         FPT         FPT
ermin, corr, |I|, amax, |A|   FPT         FPT         FPT         FPT         FPT         FPT         FPT         FPT
Table 4.5: A summary of the results for TOOLBOX READAPTATION. An entry in a certain column and row denotes whether the parameters of that column and row together make the function TOOLBOX READAPTATION FPT, NP-hard (only in the case when the set is empty) or W[2]-hard. For some sets, this cannot be inferred from the results; this is indicated with a question mark.
Chapter 5
Discussion
Gigerenzer and colleagues assume that the adaptive toolbox has been created through
evolution and/or learning (Wilke & Todd, 2010). To assess whether this assumption
is plausible, we analyzed the computational complexity of adapting a toolbox. With these
analyses we could determine whether the adaptation process is tractable, which is a
necessary condition for computational plausibility. These analyses are an important step
in determining whether humans can have an adaptive toolbox. First, we presented
our fast-and-frugal toolbox formalization, in which both the selector and the heuristics
are formalized as fast-and-frugal trees. We analyzed the complexity of applying and
adapting this type of toolbox and found that although applying the toolbox is tractable,
adapting a toolbox is intractable in general. Yet when certain restrictions are placed on
the environment the adaptation process can be tractable. If the adaptation processes
do exploit these restrictions, it is possible to adapt the adaptive toolbox in a reasonable
amount of time. Thus, it is possible that humans have an adaptive toolbox.
We first discuss our toolbox formalization in Section 5.1 and then continue with the
plausibility of the restrictions in Section 5.2. Lastly, we discuss some parameters which
might be interesting to explore in Section 5.3.
heuristics analyzed. As such, our research is a first attempt to analyze the complexity
of a complete toolbox. We chose to represent the selector as a fast-and-frugal tree because
this is a fast mechanism and, as Gigerenzer et al. state, the selector should be
fast and frugal in order for the entire toolbox to be fast and frugal. However, there are
probably many other possible selector formalizations. These other possible
selection mechanisms must still be fast and frugal and will therefore have to run in
(fixed-parameter) tractable time. If they are rewritable to our formalization in (fixed-parameter)
tractable time, our (fp-)tractability results will hold for a toolbox with that
other selector formalization. For selector formalizations that are not rewritable in polynomial
time to our formalization, new analyses must be done. We leave the investigation
of these selector formalizations open to future research.
In our formalization, the heuristics were all formalized as fast-and-frugal trees. We
proved that most of the heuristics proposed by Gigerenzer et al. can be written as fast-
and-frugal trees (see Appendix A). Thus our results hold for this subset of heuristics as
well. Most likely the intractability results hold when the other heuristics, the ones not
included in this subset, are included in the toolbox, because these heuristics are about
as simple as fast-and-frugal trees and thus do not seem to contain any computational
power which can overcome the intractability of adapting the toolbox. However, we
cannot be certain. We leave the investigation of the heuristics which are not rewritable
as fast-and-frugal trees to future research.
Information: |I| and corr The number of information pieces, |I|, is very large in the
real world. Just by reading the front page of a single newspaper we find more
than a hundred Boolean pieces of information. Therefore, parameter |I| is not
restricted to some small number. A more plausible candidate is parameter corr, the
number of groups of fully correlated information pieces, because the number of information
pieces can be very large while corr is small and, as Gigerenzer et al. point
out, information is often correlated in the world (Czerlinski et al., 1999). In this
thesis we chose to analyze only the case of full correlation of information and
leave the case of partial correlation as a topic for future research. As information
in the world is often not fully correlated, it is important to know whether adapting
a toolbox is fp-tractable when corr is replaced in the parameter sets by some
parameter for partially correlated information. We chose not to pursue this topic in this thesis,
as it is uncertain how choosing only one information piece from each group would
decrease the ecological rationality in that case. The pieces of information chosen from each
group would probably still be correlated with one another, so that there could still be redundant
information. Moreover, picking one information piece from a group would mean
ignoring the extra information that the other pieces in that group provide.
Actions: |A| and amax The number of actions that are satisfactory in at least one
situation, |A|, is very high in reality. For example, a high number of food types can
be chosen to satisfy a person’s hunger, even though choosing a specific dish is just
one type of action. A multitude of other types of actions are possible, for example
choosing jobs, or choosing a house to live in. Thus, parameter |A| is not restricted
to some small number. However, it is more likely that amax, the maximum number
of satisfactory actions in one situation, is restricted. For example, if a person is
hungry and the weather is warm, then a salad or an ice cream may be two of the few types
of food which will satisfy them.
The parameters corr and amax should be further explored to determine whether they are
indeed small. Afterwards, it needs to be determined whether these (possible) restric-
tions of the environment are exploited by the adaptation processes, i.e., whether these
processes have endowed the adaptive toolbox with a limited number of information
pieces and actions.
non-compensatory information and scarce information as environmental structures in
which Take The Best might work better than some other algorithms, such as multiple linear
regression (Czerlinski et al., 1999; Martignon & Hoffrage, 1999). An example of non-compensatory
information is the ordering of a dictionary, where the first letter is more
important than the second letter in ordering the words, the second letter is more important
than the third, and so on. The information of a letter cannot be compensated for by the
information from the letters which follow it, even if their information is combined. For
example, the word 'azz' would always come before the word 'zaa' in the dictionary, even
though 'zaa' has two a's instead of one. Although these
structures were proposed to explain why applying a heuristic works well, they may also be
interesting for adapting a toolbox.
Other environmental structures proposed by Gigerenzer et al. are very
specific. They are not general restrictions but single rules, for example the
rule that people are attracted to large cities (for more examples, see Todd & Gigerenzer,
2007). How these solitary rules can be included in our adaptation formalization is not
clear.
5.5 Some last remarks
We have taken a first step in analyzing the process of adapting a toolbox. We have determined
that it is not trivial to adapt a toolbox, but that there are certain restrictions
on the environment under which it can be adapted. What now remains is determining
whether these restrictions on the environment actually hold and testing whether
adaptive processes, evolution and/or learning, make and have made use of them. If these
restrictions turn out not to be used, we have suggested some other parameters which may be
interesting to investigate.
By determining the complexity of the adaptation process, we can assess whether it
is computationally plausible that the adaptive toolbox has adapted. Only if the process
is tractable can the toolbox have adapted. If the process were intractable, the time
to adapt a toolbox would be so long that the adaptation process would have had to
start before the earth existed, even for relatively small input sizes. Only if the adaptive
toolbox has adapted can it exist as a mechanism for resource-bounded decision making
in humans, because only then can a toolbox have been created.
References
Anderson, J. R., & Milson, R. (1989). Human memory: An adaptive perspective. Psy-
chological Review, 96(4), 703-719.
Ausiello, G., Crescenzi, P., Gambosi, G., Kann, V., Marchetti-Spaccamela, A., & Protasi,
M. (1999). Complexity and approximation: Combinatorial optimization problems
and their approximability properties. Springer-Verlag, Berlin Heidelberg.
Bergert, F. B., & Nosofsky, R. M. (2007). A response-time approach to comparing
generalized rational and take-the-best models of decision making. Journal of Ex-
perimental Psychology: Learning, Memory, and Cognition, 33(1), 107–129.
Borges, B., Goldstein, D. G., Ortmann, A., & Gigerenzer, G. (1999). Can ignorance beat
the stock market? In G. Gigerenzer, P. M. Todd, & ABC Research Group (Eds.),
Simple heuristics that make us smart (p. 59-72). Oxford University Press.
Bröder, A. (2000). Assessing the empirical validity of the “Take-the-best” heuristic as
a model of human probabilistic inference. Journal of Experimental Psychology:
Learning, Memory, and Cognition, 26(5), 1332-1346.
Chase, V. M., Hertwig, R., & Gigerenzer, G. (1998). Visions of rationality. Trends in
cognitive sciences, 2(6), 206–214.
Cooper, R. (2000). Simple heuristics could make us smart; but which heuristics do we
apply when? Behavioral and Brain Sciences, 23(05), 746–746.
Czerlinski, J., Gigerenzer, G., & Goldstein, D. G. (1999). How good are simple heuris-
tics? In G. Gigerenzer, P. M. Todd, & ABC Research Group (Eds.), Simple heuristics
that make us smart (p. 97-118). Oxford University Press.
Dalrymple, G. B. (2001). The age of the earth in the twentieth century: a problem
(mostly) solved. Geological Society, London, Special Publications, 190(1), 205–
221.
Deineko, V. G., Hoffmann, M., Okamoto, Y., & Woeginger, G. J. (2006). The traveling
salesman problem with few inner points. Operations Research Letters, 34(1), 106–
110.
Dieckmann, A., & Rieskamp, J. (2007). The influence of information redundancy on
probabilistic inferences. Memory & Cognition, 35(7), 1801–1813.
Downey, R. G., & Fellows, M. R. (1999). Parameterized complexity. Springer-Verlag.
Downey, R. G., & Fellows, M. R. (2013). Fundamentals of parameterized complexity
(Vol. 4). Springer, Berlin.
Downey, R. G., Fellows, M. R., & Stege, U. (1999). Parameterized complexity: A frame-
work for systematically confronting computational intractability. In Contemporary
trends in discrete mathematics: From DIMACS and DIMATIA to the future (Vol. 49, pp.
49–99).
Fortnow, L. (2009). The status of the P versus NP problem. Communications of the ACM,
52(9), 78–86.
Garey, M. R., & Johnson, D. S. (1979). Computers and intractability: A guide to the
theory of NP-completeness. W. H. Freeman.
Gigerenzer, G. (2001). The adaptive toolbox. In G. Gigerenzer & R. Selten (Eds.),
Bounded rationality: The adaptive toolbox (p. 37-50). MIT Press.
Gigerenzer, G. (2008). Why heuristics work. Perspectives on Psychological Science, 3(1),
20–29.
Gigerenzer, G. (2015). Simply rational: Decision making in the real world. Oxford
University Press.
Gigerenzer, G., & Brighton, H. (2009). Homo heuristicus: Why biased minds make
better inferences. Topics in Cognitive Science, 1(1), 107–143.
Gigerenzer, G., & Gaissmaier, W. (2011). Heuristic decision making. Annual review of
psychology, 62, 451–482.
Gigerenzer, G., & Goldstein, D. G. (1999). Betting on one good reason: The take the
best heuristic. In G. Gigerenzer, P. M. Todd, & ABC Research Group (Eds.), Simple
heuristics that make us smart (p. 75-95). Oxford University Press.
Gigerenzer, G., & Sturm, T. (2012). How (far) can rationality be naturalized? Synthese,
187(1), 243–268.
Gigerenzer, G., & Todd, P. M. (1999a). Fast and frugal heuristics: The adaptive toolbox.
In G. Gigerenzer, P. M. Todd, & ABC Research Group (Eds.), Simple heuristics that
make us smart (p. 3-34). Oxford University Press.
Gigerenzer, G., & Todd, P. M. (1999b). Simple heuristics that make us smart. Oxford
University Press, USA.
Goldstein, D. G., & Gigerenzer, G. (1999). The recognition heuristic: How ignorance
makes us smart. In G. Gigerenzer, P. M. Todd, & ABC Research Group (Eds.),
Simple heuristics that make us smart (p. 37-58). Oxford University Press.
Hilbig, B. E., & Richter, T. (2011). Homo heuristicus outnumbered: Comment on
Gigerenzer and Brighton (2009). Topics in Cognitive Science, 3(1), 187–196.
Laplace, P. S. (1951). A philosophical essay on probabilities (F. W. Truscott & F. L. Emory,
Trans.). J. Wiley.
Martignon, L., & Hoffrage, U. (1999). Why does one-reason decision making work? A
case study in ecological rationality. In G. Gigerenzer, P. M. Todd, & ABC Research
Group (Eds.), Simple heuristics that make us smart (p. 119-140). Oxford University
Press.
Martignon, L., Vitouch, O., Takezawa, M., & Forster, M. R. (2003). Naive and yet enlight-
ened: From natural frequencies to fast and frugal decision trees. In D. Hardman &
L. Macchi (Eds.), Thinking: Psychological perspective on reasoning, judgment, and
decision making (pp. 189–211). Wiley.
Newell, B. R. (2005). Re-visions of rationality? Trends in Cognitive Sciences, 9(1), 11 -
15.
Newell, B. R., & Shanks, D. R. (2003). Take the best or look at the rest? factors
influencing “one-reason” decision making. Journal of Experimental Psychology:
Learning, Memory, and Cognition, 29(1), 53-63.
Newell, B. R., Weston, N. J., & Shanks, D. R. (2003). Empirical tests of a fast-and-frugal
heuristic: Not everyone “takes-the-best”. Organizational Behavior and Human De-
cision Processes, 91(1), 82–96.
Otworowska, M., Sweers, M., Wellner, R., Uhlmann, M., Todd, W., & Van Rooij, I.
(2015). How did Homo Heuristicus become ecologically rational? In Proceedings
of the EuroAsianPacific Joint Conference on Cognitive Science (pp. 324–329).
Pohl, R. F. (2006). Empirical tests of the recognition heuristic. Journal of Behavioral
Decision Making, 19(3), 251–271.
Rieskamp, J., & Otto, P. E. (2006). SSL: a theory of how people learn to select strategies.
Journal of Experimental Psychology: General, 135(2), 207–237.
Schmitt, M., & Martignon, L. (2006). On the complexity of learning lexicographic
strategies. The Journal of Machine Learning Research, 7, 55–83.
Simon, H. A. (1956). Rational choice and the structure of the environment. Psycholog-
ical review, 63(2), 129-138.
Simon, H. A. (1990). Invariants of human behavior. Annual review of psychology, 41(1),
1–20.
Todd, M., Fiddick, L., & Krauss, S. (2000). Ecological rationality and its contents.
Thinking & Reasoning, 6(4), 375–384.
Todd, P. M. (2001). Fast and frugal heuristics for environmentally bounded minds.
In G. Gigerenzer & R. Selten (Eds.), Bounded rationality: The adaptive toolbox
(p. 51-70). MIT Press.
Todd, P. M., & Gigerenzer, G. (2007). Environments that make us smart: Ecological
rationality. Current Directions in Psychological Science, 16(3), 167–171.
van Rooij, I. (2003). Tractable cognition: Complexity theory in cognitive psychology
(Unpublished doctoral dissertation). University of Victoria, Canada.
van Rooij, I. (2008). The tractable cognition thesis. Cognitive science, 32(6), 939–984.
van Rooij, I., Wright, C. D., & Wareham, T. (2012). Intractability and the use of heuris-
tics in psychological explanations. Synthese, 187(2), 471–487.
Wareham, H. T. (1999). Systematic parameterized complexity analysis in computational
phonology (Unpublished doctoral dissertation). University of Victoria, Canada.
Wigderson, A. (2006). P, NP and mathematics: A computational complexity perspective.
In Proceedings of the International Congress of Mathematicians: Madrid, August 22–
30, 2006: Invited lectures (pp. 665–712).
Wilke, A., & Todd, P. M. (2010). Past and present environments: The evolution of
decision making. Psicothema, 22(1), 4–8.
Appendices
Appendix A
Gigerenzer et al. have proposed a list of heuristics, including the recognition heuristic,
the fluency heuristic, Take The Best, tallying, satisficing, 1/N, the default heuristic, tit-for-tat,
imitate the majority and imitate the successful (Gigerenzer, 2008; Gigerenzer &
Sturm, 2012). Here we discuss in turn whether these heuristics can be rewritten as fast-and-frugal
trees. All heuristics in this list can be represented as fast-and-frugal trees,
except for the fluency heuristic and tallying, which would require non-Boolean
information and trees with non-binary branching.
The recognition and fluency heuristics work by choosing the alternative which is
recognized over the unrecognized one or by choosing the alternative which is recog-
nized faster, respectively. The recognition heuristic can be implemented as a fast-and-
frugal tree. One can add a cue which states ‘I have seen this alternative before’. How-
ever, the fluency heuristic contains information about how quickly an alternative is
recognized, which is a specific time value. This information cannot be represented as a
Boolean variable without a loss of information. As such the fluency heuristic cannot be
implemented as a fast-and-frugal tree, which can only contain Boolean information.
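As a small illustration (the function and argument names are ours, not Gigerenzer et al.'s), the recognition heuristic written as a fast-and-frugal tree with a recognition cue per alternative could look as follows.

    def recognition_tree(one_recognized, two_recognized):
        """Pick the recognized alternative; fall back on a default action if
        recognition does not discriminate between the two alternatives."""
        cues = [
            (one_recognized and not two_recognized, "pick alternative one"),
            (two_recognized and not one_recognized, "pick alternative two"),
        ]
        for cue_value, action in cues:
            if cue_value:                 # the first true cue determines the action
                return action
        return "guess"                    # default action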
Take The Best can be described as a fast-and-frugal tree, although some information
pieces need to be combined. Take The Best compares the value of some information
piece for two alternatives. For example, it compares whether two cities have a train
station. This can be encoded in two cues. The first cue asks whether the first alter-
native has a higher value than the second (where higher means the alternative has a
train station while the other does not, or it is not known for the other). The second
cue asks whether the second alternative has a higher value. The action associated with
the first cue is then 'pick alternative one', while the action associated with the second cue
is 'pick alternative two'. However, this encoding relies on the fact that multiple information
pieces can be evaluated in one cue, because two information pieces need to be
compared. In our present formalization, cues are formalized as always evaluating only
one information piece. Alternatively, there may be information pieces which are combinations
of other information pieces; in that way, a cue would only have to evaluate one
information piece. In our formalization we made no assumptions about whether or not this is
the case.
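As an illustration of this encoding, one level of such a tree for a single cue dimension (say, 'has a train station') could be sketched as follows; the derived pieces first_higher and second_higher stand in for the combined information pieces mentioned above and are not part of the formalization.

    def ttb_tree_step(first_value, second_value):
        """One cue dimension of Take The Best for two alternatives; an unknown
        value can be passed as False, as in the description above."""
        first_higher = first_value and not second_value    # combined information piece
        second_higher = second_value and not first_value   # combined information piece
        if first_higher:
            return "pick alternative one"
        if second_higher:
            return "pick alternative two"
        return None    # the cue does not discriminate; TTB moves on to the next cue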
Tallying is used to choose one of multiple alternatives. This is done by assigning
each alternative a value by adding 1 to the value if an information piece is true and
subtracting 1 if the information piece is false. The alternative with the highest value is
chosen. This is, to our knowledge, not representable by a fast-and-frugal tree, because
the subtotal needs to be kept in memory and tallying does not stop until it has passed
every cue.
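For contrast, tallying itself can be sketched as follows (names are illustrative); the running subtotal per alternative is exactly the memory that a fast-and-frugal tree, which stops at the first true cue, does not have.

    def tally(alternatives, pieces):
        """alternatives maps each name to a dict of Boolean information pieces;
        add 1 for every true piece and subtract 1 for every false one."""
        scores = {name: sum(1 if values[p] else -1 for p in pieces)
                  for name, values in alternatives.items()}
        return max(scores, key=scores.get)   # choose the alternative with the highest tally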
Satisficing is used when one would like to know which alternative has the highest
value of some variable. For example, one would like to choose the house with the biggest
garden. Satisficing searches through the alternatives and chooses the first one that
exceeds a certain aspiration level. This could be represented as a fast-and-frugal tree
where each cue evaluates one alternative: if that alternative has a value higher than the
aspiration level, that alternative is chosen.
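A sketch of this representation, with one cue per alternative (the names are illustrative):

    def satisficing_tree(alternatives, values, aspiration):
        """Return the first alternative whose value exceeds the aspiration level."""
        for alt in alternatives:              # one cue-action pair per alternative
            if values[alt] > aspiration:      # cue: 'the value of alt exceeds the aspiration level'
                return alt                    # paired action: 'choose alt'
        return None                           # default action: no alternative is chosen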
1/N is used when resources need to be allocated to multiple alternatives. This heuris-
tic states that resources need to be allocated equally among the N options. There is
always only one action to take, and thus it can be implemented as a trivial fast-and-
frugal tree, containing no cues and only one action.
The default heuristic is used for choosing between different alternatives. It simply
takes the default alternative if there is one. This can be implemented by simply
starting a fast-and-frugal tree with a cue 'is there a default alternative?' and, if that is the
case, the default alternative is taken.
Tit-for-tat is used for choosing between (mostly two) alternatives when interacting
with someone. It is best known from the iterated prisoner's dilemma. First the cooperative choice is
picked, and thereafter the choice the other person made in the previous round is copied. This can be implemented
as a fast-and-frugal tree with a first cue 'this is the first move', whose associated action is
'the cooperative action'. The other cues are 'the other person chose alternative a
last time', with associated action a, for all alternatives a.
Imitate the majority or the successful. In imitate the majority, the action that the
majority of people has performed is chosen. It can be implemented in one cue-action pair: the
cue is 'the majority of people (I know) choose action a', while the action is a. Imitate
the successful works in a similar way, only the most successful person is imitated.