Introduction to Modeling Cognitive Processes
About this ebook

An introduction to computational modeling for cognitive neuroscientists, covering both foundational work and recent developments. 

Cognitive neuroscientists need sophisticated conceptual tools to make sense of their field’s proliferation of novel theories, methods, and data. Computational modeling is such a tool, enabling researchers to turn theories into precise formulations. This book offers a mathematically gentle and theoretically unified introduction to modeling cognitive processes. Theoretical exercises of varying degrees of difficulty throughout help readers develop their modeling skills.
 
After a general introduction to cognitive modeling and optimization, the book covers models of decision making; supervised learning algorithms, including Hebbian learning, delta rule, and backpropagation; the statistical model analysis methods of model parameter estimation and model evaluation; the three recent cognitive modeling approaches of reinforcement learning, unsupervised learning, and Bayesian models; and models of social interaction. All mathematical concepts are introduced gradually, with no background in advanced topics required. Hints and solutions for exercises and a glossary follow the main text. All code in the book is Python, with the Spyder editor in the Anaconda environment. A GitHub repository with Python files enables readers to access the computer code used and start programming themselves. The book is suitable as an introduction to modeling cognitive processes for students across a range of disciplines and as a reference for researchers interested in a broad overview.
Language: English
Publisher: The MIT Press
Release date: Feb 8, 2022
ISBN: 9780262362313
Length: 491 pages (5 hours)
    Book preview

    Introduction to Modeling Cognitive Processes - Tom Verguts

    Preface and Acknowledgments

    Cognitive science and cognitive neuroscience have witnessed an explosion of novel theories, tools, and data in recent decades. This increasing sophistication requires conceptual tools to make sense of it all. Computational modeling is just such a tool; it allows turning verbal theories into precise formulations, thus aiding both the integration of behavioral, electroencephalography (EEG), functional magnetic resonance imaging (fMRI), and other kinds of data, and the testing of theories. As a result, computational modeling is omnipresent in modern cognitive science and cognitive neuroscience. At the same time, the general public is increasingly interested in the relation between cognition and computation; witness the popularity of topics like deep learning.

    Oddly, however, there are very few textbooks targeted at an audience of cognitive neuroscientists. Note: I will use the term cognitive neuroscience in a very inclusive manner, encompassing cognitive neuroscience, cognitive science, neuroscience, psychology, and all their intersections. Most books on the market are targeted toward computer scientists or engineers. One excellent book (McLeod, Plunkett, & Rolls, 1998) is targeted toward a cognitive neuroscience audience; I used it for several years in my modeling cognitive processes class. However, that book is now becoming outdated. There are good, more recent books targeted at the same cognitive neuroscience audience (Farrell & Lewandowsky, 2018), but they tend to emphasize statistical model analysis, whereas I wanted a stronger focus on model building in my class. This prompted me to write this textbook. It also covers model analysis (in chapters 6 and 7), but the emphasis is on model building (in all the other chapters). The audience is meant to be inclusive: the book caters to novice readers, but also to active researchers who want to refresh their memory or study a field of modeling that they have less experience with.

    Readers of the research literature typically find a chaotic and overwhelming variety of models on the market. This can make it hard to see the proverbial forest for the trees. With this text, I had the specific goal of providing the reader with a unified perspective on modeling. To keep it concise and self-contained, I had to exclude many interesting models and modeling approaches. Relatedly, I do not cover only the latest hits (although some of these are discussed as well), but intend to show that the same modeling principles have been recycled throughout the last decades. I hope the text is a useful starting point for readers to dig into the more specialized literature themselves.

    Thematically, after chapter 1, the book starts with models of decision making (chapter 2), followed by supervised learning (chapters 3–5), statistical model analysis (chapters 6 and 7), reinforcement learning (RL; sometimes called semisupervised learning) (chapters 8 and 9), unsupervised learning (chapter 10), and finally Bayesian (chapter 11) and social-interaction (chapter 12) models. I use a subset (chapters 1–7, 9, and 10) in my modeling cognitive processes (MCP) class at Ghent University. Note that the division of material across chapters is model-based rather than topic-based (as a typical introduction to psychology textbook would be). For example, the topic of language appears in different chapters (e.g., chapters 4, 5, and 12). The only exception to this rule is chapter 12, which describes various models of social interactions. Although the book is intended as an integrated whole, it is possible to read the chapters individually. However, I do recommend reading chapter 1 first, and reading chapters 4–5 and chapters 6–7 together due to their thematic overlap.

    The language of modeling is mathematical, but don’t be afraid. Anyone with a high-school background and a willingness to invest some effort should be able to follow most of this book, and at least the main arguments of each chapter. Some of the material is slightly more advanced, but this material can be skipped without danger of losing the main argument. There are also theoretical exercises throughout the book to help you develop your modeling skills. Exercises vary a lot in difficulty, so that researchers with different backgrounds can profit. The more advanced exercises are indicated with an asterisk (∗). Hints and solutions to the exercises of the book are provided at the end.

    In addition, the book contains a glossary with some of the key technical terms that I use. The definitions provided there are intended to refresh your memory; they are typically not sufficient to understand a concept. You cannot understand the derivative of a function just by reading a two-line definition. If you need to learn about or refresh some basic concepts, there are excellent websites available to do just that. I recommend Xaktly.com, which offers very good, graphical, and intuitive tutorials on many mathematical concepts.

    One of the advantages of a mathematical model is that ideally, it is precise enough to be implemented in a computer program. A lot of effort in modeling, therefore, is also directed toward computer code implementation. For the code in this book, I used the free Python language, with the Spyder editor in the Anaconda environment. Visit Anaconda.org (www.anaconda.org) to download the environment and find tutorials. There are several good tutorials on Python available. For a thorough theoretical introduction, I recommend Punch and Enbody (2014) or my own website (https://sites.google.com/view/pp02psychopy), which focuses on programming experiments but also introduces Python in a practical manner.

    In this book, I refer to some packages that you can download and install. It is advised to create a virtual environment in Anaconda for each collection of packages that you use together. In this way, you can install the appropriate versions of basic packages such as numpy or scipy in each environment without interference from other versions. For example, in my Anaconda installation, I have one environment for using OpenAI Gym and TensorFlow, and another environment for using PsychoPy (Peirce, 2007).
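
    For example, a setup along these lines can be created from the Anaconda command line (a hypothetical sketch; the environment names and version numbers are placeholders of mine, not taken from the book):

        conda create --name rl-env python=3.9
        conda activate rl-env
        pip install gym tensorflow

        conda create --name psychopy-env python=3.8
        conda activate psychopy-env
        pip install psychopy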

    Modeling can only be learned by doing, so exercise and exploration are indispensable. For this reason, the code for generating all figures and the coding exercises are also provided (as Python .py files) on GitHub. GitHub is a platform to share computer code. Here, each coding project (such as a chapter in this book) is conceptualized as an online tree consisting of the different programs belonging to the project. The owners of a tree can directly change code. If other users have suggestions for code changes, they can create a branch in the tree (in practice, by forking the repository and opening a pull request), which the owners can then inspect and subsequently either reject (cut off the branch) or merge with the tree. In this way, joint code creation for a project is possible. To get started, you can just download the code for your inspiration and exploration. But if you have suggestions, do feel free to add a branch! The GitHub repository for this book can be found at

    https://github.com/CogComNeuroSci/modeling-master/tree/master/code%20by%20chapter.

    If you want to use this book for teaching, I can also send you extra exercises, tests, and their solutions that I use in class. Some extra material (currently only color pictures) is also posted at https://sites.google.com/view/mcp-website; in the book, this is referred to as the MCP website.

    I thank Mehdi Senoussi and Pieter Huycke for providing excellent help in teaching the course that started this textbook, for feedback on the different chapters, and for developing several exercises to accompany the text. I thank the MCP students for suffering through the first years where I tried out a preliminary draft of this book, as well as for providing useful comments on several passages. Thanks are also due to Esther De Loof for Python inspiration, and to Anna Marzecova and Jonas Simoens for teaching support. For comments on parts of an earlier draft, I thank Kobe Desender, Wim Fias, Rob Hartsuiker, Clay Holroyd, and Francis Tuerlinckx. I thank Martin Butz and three anonymous reviewers for their feedback; and Philip Laughlin and his team from the MIT Press for supporting the project and providing practical advice. All remaining errors in the text are mine. Finally, I thank my partner, Ann, and my daughters, Laura-Line and Yente, for being there for me.

    On the day that I write these words (in April 2021), the vaccination program against the SARS-CoV-2 virus that led to a worldwide pandemic is in full swing. It has been a difficult year for many people and for many reasons, not least for those who had to learn a new skill, such as modeling. This fact can be understood from a modeling perspective: as will become clear from reading the book, the most efficient way to train a model is to let it do something first, followed by looking at feedback on its performance. Providing feedback was more difficult because of the need to maintain social distance, thus impairing the learning experience. This is one lesson to take away from the crisis: feedback is crucial for learning any skill, and writing a book is no exception. I therefore welcome any comments the reader of this book may have, regardless of whether you read it as a teacher, a student, a researcher, or just for fun.

    1 What Is Cognitive Modeling?

    Don’t just stand there, optimize something!

    —Daniel S. Levine

    The Use of Models

    What is cognitive modeling? For several years, I have attempted to answer this question at family and cocktail parties. It’s not so easy to figure this out because the term has different meanings in different contexts, including ecology, astronomy, meteorology, and economics. So let’s start by making clear how it will be used in this book. In cognitive neuroscience, modeling is a tool to help construct better theories of cognition and behavior. We have empirical tools in psychology, such as response time measurement, electroencephalogram (EEG) measurement, and questionnaires. However, modeling is a conceptual rather than an empirical tool. In particular, it provides a formal language that helps build more precise theories.

    Why do we need this conceptual tool? Models have various advantages. First, they allow for making novel predictions that don’t obviously follow from the theory. A schoolbook example of modeling concerns Newton’s theory of gravity (an example borrowed from John Kruschke¹). At an intuitive (nonprecise) level, the theory of gravity states that due to a force called gravity, each pair of objects in the universe is attracted to each other. In itself, this is not so useful or interesting, but we can make it interesting by formalizing it into a model. Thus, Newton’s model also specifies that gravity is proportional to the product of the masses of the two objects, and further, that gravity is inversely proportional to the square of the distance between the objects. From these two principles (and some other laws that I’ll ignore for now), one can predict that all objects will fall toward the Earth with the same acceleration (and thus, at any given moment, the same speed). Would anyone be able to make this prediction without the formal model?
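
    To make this prediction explicit (in standard notation; the derivation below also uses Newton’s second law, one of the other laws just alluded to):

        F = G \frac{m_1 m_2}{r^2}, \qquad F = m a
        \quad\Rightarrow\quad a = \frac{F}{m} = \frac{G M_{\mathrm{Earth}}}{r^2}

    Because the falling object’s own mass m cancels, every object near the Earth experiences the same gravitational acceleration.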

    Second, models allow integration of existing data in a well-organized conceptual framework. The theory of gravity allows formalizing the finding (first discovered by Galileo) that the period of a pendulum of a given length is independent of the amplitude of its swing, which allowed the construction of precise clocks. The theory also helps us understand how tides work; why the Moon orbits the Earth, the Earth the Sun, and the Sun the center of our galaxy; and many more phenomena. Quite a heavy lift for such a simple concept! Our understanding of the world would simply not be the same without the theory of gravity—and the theory of gravity would not be the same without the conceptual tool of modeling.
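
    For instance, the small-angle approximation of the pendulum’s period (a standard textbook result, not one derived in this book) is

        T \approx 2\pi \sqrt{\frac{L}{g}}

    which depends only on the pendulum’s length L and the gravitational acceleration g, not on the amplitude of the swing or the mass of the bob.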

    Obviously, I am not going to talk about physics in this book. Yet, in cognitive neuroscience, modeling is also an extremely useful tool; for a philosophical and historical background to cognitive modeling, see Butz and Kutter (2017). You may have heard about the replication crisis in psychology, meaning that many empirical effects in psychology could not be replicated (Open Science Collaboration, 2015). Several measures have been taken to address it, including preregistration of study designs and Bayesian statistics. Although such measures are definitely relevant and worthwhile, they have tended to concentrate on improving the statistical methods. However, some have argued that the deeper crisis resides not only in the methods, but also in the lack of theory that psychologists develop (Muthukrishna & Henrich, 2019; Oberauer & Lewandowsky, 2019; van Rooij & Baggio, 2020). Modeling is a tool to help develop such theories. I hope to demonstrate this in this book.

    A crucial concept in cognitive modeling is levels of modeling. Modeling can be done at a high level, describing interactions between large units; or at a low level, where one considers interactions between smaller units. These levels are illustrated on the y-axis of figure 1.1. This is called the spatial scale in figure 1.1, in the sense that higher levels denote interactions between larger spatial units.

    Figure 1.1

    Levels (or scales) of modeling. The spatiotemporal scales of several real-world phenomena that can be modeled are indicated on the graph.

    Starting at the top, one can consider interactions at the social level, between individuals (social agents), or even between entire groups of individuals. For example, modeling at this level demonstrates the precise conditions under which cooperation between individuals pays off more than defection (Nowak, 2006). A general conclusion from this modeling work is that there is always a strong attraction toward defection, but sufficiently strong interactions between individuals can make cooperation stronger, thus leading to profitable interactions (see chapter 12).

    Another example of a social-level model is the Lotka-Volterra model, which considers interactions between predators (e.g., lions) and prey (e.g., deer). The model makes the rather intuitive assumption that local interactions between predators and prey increase the number of predators but decrease the number of prey. Formalizing this notion, it turns out that both population sizes will oscillate across time, alternating between periods with more and fewer animals. Furthermore, they oscillate in a synchronized way: if the number of prey increases, then the number of predators follows suit. Nothing and nobody needs to coordinate this synchrony—it follows from the local interactions among the animals. And we can understand this consequence only by turning the initial assumptions into a formal model.
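
    To see the oscillation emerge, here is a minimal Euler-integration sketch of the standard Lotka-Volterra equations (the parameter values and starting populations are illustrative choices of mine, not taken from the book):

        # Lotka-Volterra predator-prey dynamics, integrated with simple Euler steps
        alpha, beta = 1.0, 0.1      # prey growth rate; prey lost per interaction
        delta, gamma = 0.075, 1.5   # predators gained per interaction; predator death rate
        prey, pred = 10.0, 5.0      # illustrative starting populations
        dt = 0.001                  # integration time step

        for step in range(30001):
            d_prey = alpha * prey - beta * prey * pred    # interactions decrease prey
            d_pred = delta * prey * pred - gamma * pred   # interactions increase predators
            prey += dt * d_prey
            pred += dt * d_pred
            if step % 5000 == 0:    # print snapshots; both populations rise and fall together
                print(f"t = {step * dt:4.1f}   prey = {prey:6.2f}   predators = {pred:6.2f}")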

    Descending on the spatial scale, one can consider interactions within a single person or within a single brain. Here, one may be interested in modeling how cognitive modules interact to explain vision (Poggio & Bizzi, 2004), motor processing (Wolpert et al., 1995), or indeed any other cognitive function. As another example, Jonathan Cohen, Matthew Botvinick, and several colleagues have developed an influential series of models about how humans and other primates implement cognitive control, which is the ability to control one’s own cognitive processes, such as to suppress one’s instantaneous urges in favor of more appropriate responses (Botvinick et al., 2001). For example, a canonical task for measuring cognitive control is the Stroop task, in which subjects must name the ink color (say, green) of words that spell a different color name (say, orange); people have the automatic tendency to read the word instead (Google Stroop and try the test online). It is thought that the extent to which people can suppress their urge to read the word instead of naming the color is a measure of cognitive control.

    A typical interpretation of the cognitive processes involved in solving this task goes as follows: One cognitive module processes words, one cognitive module processes colors, and a third one processes verbal or manual actions. The anatomical connectivity structure between the word-processing module and the response module is much stronger than between color-processing and response modules because pronouncing words is a much more common, and thus more practiced, task than naming colors. Thus, the challenge in the Stroop task is that, despite its weaker anatomical connectivity, the color module must dominate the response module to solve the task correctly. For this purpose, a module is postulated that monitors whether the other three modules are acting appropriately. If they are not, the monitoring module intervenes to make sure that the color module has a larger impact on the response module. Several cognitive control models address this challenge and have this basic structure (Brown & Braver, 2005; Verguts & Notebaert, 2008). At the neural level, the monitoring module is typically thought to involve both medial and lateral frontal cortices, possibly in interaction with basal ganglia structures.
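
    As a toy sketch of this pathway-competition idea (all numbers below are arbitrary illustrations of mine; this is not the Botvinick et al. (2001) model itself):

        # Incongruent Stroop trial: both the word and the color pathway are active
        word_input, color_input = 1.0, 1.0
        w_word, w_color = 2.0, 1.0   # the word pathway has the stronger connectivity

        def color_minus_word_drive(control):
            # control (>= 1) amplifies the task-relevant color pathway
            drive_word = w_word * word_input               # pushes toward reading the word
            drive_color = control * w_color * color_input  # pushes toward naming the color
            return drive_color - drive_word                # positive: the correct response wins

        for control in (1.0, 1.5, 2.5):
            print(f"control = {control:3.1f} -> net drive = {color_minus_word_drive(control):+5.2f}")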

    At a still lower level, further descending on the y-axis of figure 1.1, researchers study the interactions between single neurons (Wong & Wang, 2006). But there is no need to stop at the single-neuron level: modeling can (and indeed does) continue to any level where one has data.

    Figure 1.1 is an approximation: even at the brain level alone, eleven spatial levels of investigation have been distinguished (Sejnowski, 2020). Confusion sometimes arises when people do not clearly distinguish among these levels. Thus, an often-heard criticism is that a specific model is not biologically plausible (often without details about what specifically is implausible). However, whether biological plausibility is relevant, and to what extent, depends entirely on the question that the modeler is attempting to address. Whether a model is biologically plausible thus has a different meaning and relevance at each level. For example, the social-interaction model mentioned here considers interactions between very simple agents. These conceptual agents do not consist of biological cells with dendrites, axons, glutamate receptors, sodium-potassium pumps, and so on. In this sense, the model and the agents that it considers are not biologically plausible. However, this doesn’t matter at this level of modeling. In fact, including all this biological detail would detract massively from the goal of modeling interactions between individuals. It is precisely because some biological details are ignored that it is at all possible to formulate useful models at the social level. In contrast, a neural model that aims to understand communication at the cell membrane requires the concept of a sodium-potassium pump to be of any use for understanding such communication. For this reason, modeling is to a large extent an art: it requires formulating a useful level of abstraction, where useful means that the model allows for integrating and understanding data at that specific level and for deriving novel empirical tests. But what exact level of detail is useful will differ across levels. Ideally, we should also know how to connect explanations across different levels (e.g., from neuron to brain to social level). However, just as in other sciences, such cross-level connections are typically very hard to make.

    As another example, several authors have suggested that mental diseases, such as schizophrenia, are due to altered connectivity in the brain (Bassett et al., 2008). For example, using network theory, van den Heuvel et al. (2013) found that the highly connected brain hubs in schizophrenic patients were less mutually connected than the same regions in healthy control subjects. In constructing and testing this model, many neural details were deliberately ignored. This is in some sense a loss. However, it did allow the modelers to concentrate on the connectivity structure between the neural hubs and on how this structure is altered in schizophrenia. Again, it is precisely because they ignored some details that they could achieve the right level of abstraction, focus on connectivity among neural areas, and derive useful empirical predictions from the abstract connectivity structure.

    This book mostly concerns cognitive modeling; it primarily targets the middle part of the vertical axis of figure 1.1. It addresses cognition at the level of the individual subject or individual brain, and it typically considers interactions among several neural modules within a single subject (or brain). Note that the term module can also be understood at different levels; it can refer to a neural column, a brain region, or a cognitive module, depending on the level of granularity of the investigation.

    This doesn’t mean that the other levels are irrelevant. In fact, in cognitive modeling, one is typically interested in a few levels at the same time. The models I consider here aim to be cognitively plausible; that is, they must account for behavioral data such as response times and accuracies of subjects in behavioral experiments. But at the same time, the models should ideally also be neurally plausible; that is, they should be consistent with systems-level neuroscience or sometimes single-cell neuroscience. Different models will place different emphases on each of these levels. Obviously, in cognitive modeling, the cognitive plausibility question is a very important one; but it is not the only one.

    Time Scales of Modeling

    In addition to levels (spatial scales) of modeling, one can identify different time scales of modeling. In line with Christiansen and Chater (2016a), I distinguish at least three time scales. The fastest time scale considered in this book is that of cognitive processing and human behavior. At this time scale, people utter sentences, remember telephone numbers, play tennis, and so on. This is the time scale of the Stroop model previously discussed (see the horizontal axis of figure 1.1) and the time scale that will be considered in chapter 2. A slightly slower time scale is that of knowledge acquisition. This is the time scale at which new information (e.g., learning a book chapter) or skills (e.g., learning to read) are acquired. A lot of interest in the cognitive modeling literature has been devoted to this time scale. I accordingly direct most of my attention in this book (i.e., chapters 3–5 and 8–11) to it. Finally, the slowest time scale considered here is that of cultural change. At this time scale, human languages are transmitted across generations. Because this is the least-studied time scale (at least in cognitive neuroscience), only one chapter (chapter 12) considers it. An even slower time scale is that of genetic change, which is not discussed in this book. An important theoretical goal is formulating common principles across temporal scales, such as processing and acquisition (Rao & Ballard, 1999) or cultural and genetic change (Dawkins, 1976). To understand cognition and behavior, we eventually have to know how all these time scales relate to one another.

    Striving for a Goal

    The evolutionary biologist Theodosius Dobzhansky (figure 1.2) famously said, “Nothing in biology makes sense except in the light of evolution.” If one accepts that cognition is a natural phenomenon, then this principle applies to cognition too. From this perspective, the function of cognition is to optimize an agent’s interactions with the world. For example, in reasoning, arguments are not generated by humans applying a set of argumentation rules in an arbitrary fashion. Humans reason in order to convince others (Mercier & Sperber, 2011), which presumably optimizes the agent’s interactions. More generally, it has been argued that the real reason for an organism to have a brain is so it can adaptively move within its environment (Wolpert, 2012). From this perspective, cognition is just a sophisticated way of choosing the right movement.

    Figure 1.2

    Theodosius Dobzhansky (1900–1975), one of the founders of the modern synthesis theory of biology (and inventor of great quotes). Reprinted with permission from the American Philosophical Society.

    From a biological perspective, whether cognitive agents arrive at true conclusions is less relevant than whether the conclusions are useful for the agent. More generally, I postulate that humans act because they think it is useful for them in some way. In other words, they act because they are motivated to act. From an evolutionary perspective, one might say that organisms have evolved over millions of years to be maximally adaptive. Therefore, what they do at the current time can probably be considered as striving for adaptivity in some sense. This is the founding assumption of the field of behavioral ecology (Shennan, 2002), and it can account for several of the biases that have been documented in the social psychology literature (Haselton et al., 2009). In biology, it has been debated to what extent the assumption is valid, but I’ll bypass that discussion here and simply explore how far the assumption takes us in unifying models of cognitive processing.

    One can formulate this intuition of adaptivity using the central concept of optimization. This is so central that I will attach a name to it: the optimization principle. The term optimization means that one maximizes or minimizes a function of some variable. For example, consider the function y = f (x) = (x − 1)², shown in figure 1.3. This function is very easy to minimize by choosing x = 1. Indeed, the function y = f (x) reaches its minimal value of zero when x = 1; all other values would lead to values f (x) > 0. If one wanted to maximize this function instead, the solution would be at the boundary of parameter space: indeed, only x = −∞ and +∞ maximize this function.

    Figure 1.3

    The steepest descent on the function f (x) = (x − 1)².
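
    In code, steepest descent on this function looks as follows (a minimal sketch; the step size of 0.1 and the starting point are arbitrary illustrative choices):

        def f(x):
            return (x - 1) ** 2

        def f_prime(x):
            return 2 * (x - 1)   # derivative of f

        x = 4.0                  # arbitrary starting point
        for step in range(25):
            x = x - 0.1 * f_prime(x)   # take a small step against the gradient
        print(f"x = {x:.4f}, f(x) = {f(x):.6f}")   # x approaches 1, f(x) approaches 0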

    Bringing these concepts together, I can formulate a core idea of this book: human behavior is directed toward achieving some goal. This can be formalized as attempting to optimize some function. The idea that human behavior is always directed toward achieving a certain goal can hardly be called a novel insight. It pervades motivational psychology (Deci & Ryan, 2008), and more broadly, literature, art, and indeed virtually any conversation about humans and their behavior. However, what may surprise you is that this concept of motivation occupies such a central position in cognitive modeling. Someone with only casual experience of modeling often comes away thinking that cognitive models are mainly about associations between stimuli and responses, that cognitive models are a sophisticated form of regression, or that they are games intended to satisfy the intellectual curiosity of the modeler. Models can be all of those things. But crucially, most cognitive models have a goal function at their core, which the model is motivated to optimize (Lieder & Griffiths, 2020). There are some exceptions to this rule; not all models can be considered as being involved in optimizing something. But to a first approximation, we can say that models attempt to reach a goal; and attempting to reach a goal can be formalized as optimizing a mathematical function.

    Is the optimization principle really true? This is again hard to tell, but there are at least two reasons to accept it anyway. The first reason is exemplified by the quote of Dobzhansky mentioned earlier: given that humans are the descendants of animals across millions of years of evolution, it makes sense to assume that they attempt to be as adaptive as possible. The second, more pragmatic reason is that an extensive mathematical and statistical apparatus has been developed for optimizing functions. Application of the optimization principle allows us to exploit that apparatus and formulate models of behavior that are grounded in goal-directedness. Also, note that the central claim is not that behavior is optimal, merely that humans attempt to optimize a goal. This is quite a difference: a funny but slightly misguided criticism of optimality theory is that a chess player gains nothing with the advice, “Simply optimize your probability of winning the game.” This unhelpful statement assumes that the chess player knows how to do this, but the difficulty (and fun) of the game, of course, is that he or she does not know for sure and merely attempts to maximize this probability. Instead, the central claim is that much of (and perhaps all) cognition and behavior can be understood as attempts to climb or descend the function that is being optimized. And, to reiterate, a standard mathematical apparatus is available to formalize that notion. Let’s now delve a bit more into that apparatus.

    Optimization

    The function that is optimized is called a goal function. The idea here is that the organism’s goal is to optimize this function. The concept of having a goal function that is optimized is used in several fields. In economics, as developed by Adam Smith, Jeremy Bentham, and other founders of that discipline, consumers are considered to be homo economicus, whose sole interest is to maximize their own well-being. Well-being is typically (and conveniently) operationalized by the monetary value that a consumer can receive from each of a number of options. As a simple example, suppose that a farmer has the option to plant 1 apple tree at a cost of 10€, with an expected number of 50 apples, for which she receives 1€ each. Alternatively, she can plant 2 trees at a cost of 7€ each, with an expected number of 40 apples each, but in this case she gets only .70€ per apple (perhaps due to apple inflation). Simple arithmetic then demonstrates that the expected return in the first case is 50 × 1 − 10 = 40€; and in the second case, 2 × 40 × .70 − 2 × 7 = 42€. Optimality dictates, therefore, that she should plant two trees.

    This principle also forms the basis of expected utility theory (von Neumann & Morgenstern, 1944; see also the discussion of game theory in chapter 12), which generalizes the notion of monetary value optimization. According to expected utility theory, each object x has a utility (or value) v(x), where v(·) is a function measuring the value of object x. Subjects choose the object x with maximal v(x). More generally, suppose that we have an option bundle (x₁, p₁; x₂, p₂), which means that object xᵢ can be obtained with probability pᵢ. The expected value of this option bundle then equals p₁v(x₁) + p₂v(x₂). For example, should you play a lottery that costs 2€ but delivers 100€ with 1% probability? Suppose that the value of each monetary amount x is simply the amount itself: v(x) = x. Then the no-play option delivers you 0€; but the play option delivers you 0.99 × (−2) + 0.01 × 98 = −1 (if you win, the 100€ prize still cost you the 2€ ticket, for a net gain of 98€). Because the no-play option has a higher value (0 > −1), you should not play in this particular situation with this set of assumptions. From this perspective, life is a search for the best option bundles available in the world.
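
    The same computations in code (a minimal sketch, assuming the linear utility v(x) = x from the text; the function name is my own):

        def expected_value(bundle, v=lambda x: x):
            # bundle: list of (outcome, probability) pairs
            return sum(p * v(x) for x, p in bundle)

        # The farmer's options: certain outcomes, so each bundle has one branch
        one_tree = expected_value([(50 * 1.00 - 10, 1.0)])          # 40 euros
        two_trees = expected_value([(2 * 40 * 0.70 - 2 * 7, 1.0)])  # 42 euros

        # The lottery: pay 2 euros; with 1% probability, win 100 (a net gain of 98)
        no_play = expected_value([(0, 1.0)])
        play = expected_value([(-2, 0.99), (98, 0.01)])             # -1 euro
        print(one_tree, two_trees, no_play, play)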

    Recent economic theories question the assumptions of the standard model, according to which humans exclusively attempt to maximize their own monetary value, and instead emphasize that humans value cooperation and social interactions more generally for achieving economic and psychological well-being (Nowak,
