From Molecule to Metaphor: A Neural Theory of Language
About this ebook

In From Molecule to Metaphor, Jerome Feldman proposes a theory of language and thought that treats language not as an abstract symbol system but as a human biological ability that can be studied as a function of the brain, as vision and motor control are studied. This theory, he writes, is a "bridging theory" that works from extensive knowledge at two ends of a causal chain to explicate the links between. Although the cognitive sciences are revealing much about how our brains produce language and thought, we do not yet know exactly how words are understood or have any methodology for finding out. Feldman develops his theory in computer simulations—formal models that suggest ways that language and thought may be realized in the brain. Combining key findings and theories from biology, computer science, linguistics, and psychology, Feldman synthesizes a theory by exhibiting programs that demonstrate the required behavior while remaining consistent with the findings from all disciplines.

After presenting the essential results on language, learning, neural computation, the biology of neurons and neural circuits, and the mind/brain, Feldman introduces specific demonstrations and formal models of such topics as how children learn their first words, words for abstract and metaphorical concepts, understanding stories, and grammar (including "hot-button" issues surrounding the innateness of human grammar). With this accessible, comprehensive book Feldman offers readers who want to understand how our brains create thought and language a theory of language that is intuitively plausible and also consistent with existing scientific data at all levels.

Language: English
Release date: January 25, 2008
ISBN: 9780262296885


    From Molecule to Metaphor - Jerome Feldman

    1 | The Mystery of Embodied Language

    Each of us is the world’s greatest expert on one human mind—our own. But Nature (or God if you’d prefer) did not endow us with the ability to comprehend how our minds work. You can’t understand by introspection even something as basic as your eye movements as you read these words. Cognitive scientists can predict where most people will focus their gaze—almost everyone pauses at “read these words” because it is unusual to see a sentence that talks about itself. When it comes to the mental processes involved in understanding the meaning of the text, scientists cannot explain even the basics, such as how the meaning of a word is represented in the brain.

    This book contains a good deal of technical detail on various scientific subjects, but the central theme of our story on language and thought is based on two simple related principles:

    Thought is structured neural activity.

    Language is inextricable from thought and experience.

    All of our thought and language arises from our genetic endowment and from our experience. Language and culture are, of course, carried by the family and the community. But each child has to rebuild it all in his or her own mind. From the child’s internal perspective, all social and cultural interactions start as additional inputs that must somehow be understood and incorporated using existing knowledge. This is more than a truism. A neural theory of language depends critically on looking at the problem from the perspective of the mind/brain that is learning and using language. The human brain is a system of neurons intricately linked together. Neurons work via electrochemistry. How can such a physical system— biological, chemical, and electrical—have ideas and communicate them through language? In other words, how does all of this biology, chemistry, and electricity give rise to thought and language?

    The link between neurons and physical behavior is easy to see in a well-understood case such as the knee-jerk reflex—the lifting of your leg when your doctor taps below your kneecap. Neural connections run from the sensing neurons in the knee, through one link in the spinal cord, back to motor neurons that drive the leg muscles. Although the complete underlying chemistry is very complex, your doctor doesn’t need to think about that. She can quickly see if your knee is functioning correctly by taking an information-processing perspective. The question doctors ask is, Are the signals from the knee being effectively transmitted to the spinal cord, correctly received there, and appropriate signals being transmitted to the leg muscles? From this perspective, the problem of behavior (whether your knee jerks) becomes an information-processing problem involving circuitry and signals. As we will see, this information-processing view also applies to learned automatic behaviors, like driving a car or understanding language.
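To make the information-processing framing concrete, here is a toy sketch of the reflex arc as a signal chain. All the numbers (thresholds, gains) are invented for illustration; nothing here is physiological data.

```python
def knee_jerk(tap_force):
    """Toy reflex arc: sensory signal -> one spinal relay -> motor command.

    The thresholds and gains below are invented for illustration only.
    """
    sensory = tap_force                          # stretch receptors fire with the tap
    relay = sensory if sensory > 0.2 else 0.0    # spinal relay passes only real taps
    motor = 1.5 * relay                          # motor neurons drive the thigh muscle
    return motor > 0.5                           # does the leg kick?

print(knee_jerk(0.1))  # False: too light a tap
print(knee_jerk(0.8))  # True: the leg kicks
```

The doctor’s diagnostic question maps directly onto this chain: is each stage receiving its input and transmitting an appropriate signal to the next?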

    Now imagine that your doctor, instead of tapping your knee, asks you to lift your leg. The link between an input signal (the doctor’s words) and the output (lifting) now does involve the brain and is much more complex, but the neural information processing perspective is still the key to understanding the behavior. The sound waves produced by the doctor’s speech strike your ear and are converted to frequency signals involving millions of neurons. Between these incoming neural signals and the neural command involved in volitionally lifting your leg, there is an enormous amount of neural information processing. This is the physical embodiment of your understanding of what the doctor says and what you choose to do about it. Should you decide to kick, a neural signal from the motor control region of your brain is sent to a circuit in your spinal cord, which will activate the lifting circuitry and muscles in your leg. This assumes that your doctor asked politely—if she were nasty or arrogant, you would be more likely to stomp out of the office or worse.

    Many of the neural circuits used in moving are also used in perceiving motion. If you watch a video of someone else kicking, this activates some of the same brain circuits that you use for kicking with your own leg (Buccino et al. 2001). Of course, if all of your kicking circuitry were active, you would kick. Now imagine you are told a story about someone else kicking. Recent biological evidence suggests that you can understand such stories by imagining yourself kicking (Hauk et al. 2004; Tettamanti et al. 2005). Brain imaging studies reveal that much of the neural activity required for you to understand someone else moving his or her leg overlaps significantly with the activity involved in actually moving your own leg. More generally, we can say the following:

     Understanding language about perceiving and moving involves much of the same neural circuitry as do perceiving and moving themselves.

     Neural computation links our experience of hearing and speaking to the experience of perception, motion, and imagination.

     So we need to know more about neural computation to understand language.

    The brain is made up of some 100 billion neurons, each connected, on average, to thousands of other neurons. This comes to some 100 trillion connections. The neurons and their connections (axons, dendrites, and synapses) are biological structures that work by means of chemistry. We will learn a lot in this book about this magnificent structure and how it develops, but we need a few initial insights now. Any thought or action involves a significant fraction of the billions of neurons—the computation is massively parallel. The brain is self-controlling and self-modifying; that is, no central controller tells each part what to do and no external monitor guides its learning. Neural computation involves continuously finding a best match between the inputs and the current brain state, including our goals.

    The brain is constantly active, computing inferences, predictions, and actions with each evolving situation. There has been enormous evolutionary pressure toward brains that can respond fast and effectively in complex situations. For example, a common housefly can sense changes in air currents and quickly shift directions, which is why fly swatters have holes.

    To help get a feel for how your best-match circuitry works, look for a minute or so at figure 1.1. This wire-frame cube can be seen in two different ways, with either corner A or corner G appearing closer to you. If you are having trouble seeing corner A as closer, focus on it and imagine it coming out of the paper toward you. This figure is called the Necker Cube, after the nineteenth-century Swiss naturalist Louis Necker, who discovered that the image will spontaneously flip between the two interpretations as you stare at it. We never see a mixture of the two versions—it’s always one coherent whole (Gestalt).

    Figure 1.1

    An ambiguous image; A or G can appear closer to you.
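The cube’s spontaneous flipping can be mimicked with a toy best-match network: one unit per interpretation, each inhibiting the other so that only one is ever dominant, with slowly accumulating fatigue eventually handing dominance to the rival. All parameters here (gains, rates, noise level) are invented for illustration; this is a sketch of the idea, not a model from this book.

```python
import random

def necker_flips(steps=2000, seed=1):
    """Two mutually inhibiting units ('A nearer' vs. 'G nearer') with
    slow adaptation; returns the dominant percept at each time step."""
    random.seed(seed)
    a = g = 0.5            # activations of the two interpretations
    fa = fg = 0.0          # slowly accumulating fatigue for each unit
    percepts = []
    for _ in range(steps):
        # each unit is driven by the shared image (1.0), suppressed by
        # its rival and by its own fatigue, with a little noise
        drive_a = 1.0 - 2.0 * g - fa + random.gauss(0, 0.1)
        drive_g = 1.0 - 2.0 * a - fg + random.gauss(0, 0.1)
        a = min(1.0, max(0.0, a + 0.1 * (drive_a - a)))
        g = min(1.0, max(0.0, g + 0.1 * (drive_g - g)))
        fa += 0.005 * (a - fa)   # fatigue slowly tracks activity
        fg += 0.005 * (g - fg)
        percepts.append("A" if a > g else "G")
    return percepts

percepts = necker_flips()
flips = sum(1 for p, q in zip(percepts, percepts[1:]) if p != q)
print("both percepts occur:", "A" in percepts and "G" in percepts)
print("flips spontaneously:", flips > 0)
```

At every step exactly one interpretation dominates, mirroring the one-coherent-whole observation; the mutual inhibition is what enforces that coherence.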

    This coherence principle also holds for language. If you read the sentence “Josh threw a ball,” you picture him hurling a sphere, probably about the size of a baseball. But if you read “Josh threw a ball for charity,” you are more likely to imagine him sponsoring a dance. As in the case of the Necker Cube, we always get one coherent reading at a time, although it can easily change. Consider the following: “Josh threw a ball for charity, but missed the clown’s nose.” This takes us back to the original version.

    The general properties of neural computation described earlier largely determine the nature of our language and thought, but there is still a significant conceptual gap. Thought involves ideas, feelings, and reasoning, and language somehow links those ideas, feelings, and reasoning to perceived and spoken sounds (or signs in the case of signed languages). We know all of this must be accomplished by a physical brain in a physical body. The question is, how?

    This is not the standard “where” question, asking which parts of the brain are used in thought and language. It is misleading to talk about a brain area computing some function—areas don’t compute, neural circuits do. It is like saying that U.S. cars are made by Detroit. The Detroit area is certainly important in automobile manufacture, but all cars have parts made in many places and some American cars are assembled in other places. Current technology can localize brain function only crudely, but by putting together findings from different kinds of studies, we are able to address the central question of how the circuitry underlying thought and language can work.

    A quarter of a century ago, it was unimaginable that such questions about the neural basis of language could be answered scientifically. Yet today, the basic components of an answer are in place.

    The Embodied Mind

    One simple insight has driven much of the scientific study of how the structure and function of the brain result in thought and language. Human language and thought are crucially shaped by the properties of our bodies and the structure of our physical and social environment. Language and thought are not best studied as formal mathematics and logic, but as adaptations that enable creatures like us to thrive in a wide range of situations. This is essentially the same as the principle of continuity of the American pragmatists, most notably William James and John Dewey. It is unquestioned in contemporary science, except in the case of language.

    The embodied approach entails several crucial questions. How much, and in exactly what ways, are thought and language products of our bodies? How, exactly, does our embodied nature shape the way we think and communicate? Here are some of the findings discussed in the course of this book:

     Concrete words and concepts directly label our embodied experience. Think of such short words in English as knee, kick, ask, red, want, sad.

     Spatial relations, for example, concepts expressed by words such as in, through, above, and around, can be seen as derived from specialized circuitry in the visual system: topographic maps of the visual field, orientation-sensitive cells.

     What is technically called aspect in linguistics—the way we conceptualize the structure of events, reason about events, and express events in language—appears to stem from the neural structure of our system of motor control.

     Abstract thought grows out of concrete embodied experiences, typically sensory-motor experiences. Much of abstract thought makes use of reasoning based on the underlying embodied experience.

     Our systems of abstract and metaphorical thought and language arise from everyday experiences and a basic neural learning mechanism.

     Grammar consists of neural circuitry pairing embodied concepts with sound (or sign).

     Grammar is not a separate faculty, but depends on embodied conceptual and phonological systems.

     Children first learn grammar by pairing sound combinations with familiar experiences.

    Thought and language are thus very strongly shaped by the nature of our bodies, our brains, and our experience functioning in the everyday world.

    What this means is that any approach to an embodied theory of language requires mechanisms of neural computation used for other purposes and adapted to thought and language—detailed structures in the visual system, the motor system, and basic neural learning mechanisms. This has profound consequences:

     Thought (including abstract thought) and language make use of important brain structures found in other mammals. Most of the brain mechanisms used in thought and language are not unique to human beings.

     Thought and language are neural systems. They work by neural computation, not formal symbol manipulation. The differences between these modes of computation and why they matter are examined as we go along.

     Thought and language are not disembodied symbol systems that happen to be realized in the human brain through its computation properties. Instead, thought and language are inherently embodied. They reflect the structure of human bodies and have the inherent properties of neural systems as well as the external physical and social environment.

    The consequences of these findings for philosophy, politics, mathematics, and linguistics have been described elsewhere and are reprised in chapter 27. This book focuses on the scientific foundations of neural computation and embodied language and their consequences for how we think about our societies and ourselves.

    The Integrated, Multimodal Nature of Language

    Because language is complex, linguists have traditionally broken its study artificially into levels or modules given names such as phonetics, phonology, morphology, syntax, the lexicon, semantics, discourse, and pragmatics. Most linguists specialize in the study of just one level or at the border between two adjacent subfields. Such focused studies have told us a great deal about language and are still the norm.

    However, real language is embodied, integrated, and multimodal. When your doctor asks you to lift your leg, your understanding involves a rich interaction among many neural systems. There is systematic structure to how all these components fit together to constitute language. The rules or patterns of language are called constructions, and these integrate different facets of language—for example, phonology, pragmatics, semantics, and syntax. A request construction might specify a grammatical form, an intonation pattern, pragmatic constraints, and the intended meaning. When your doctor asks you to lift your leg, all of these features are involved.

    This integrated, multifaceted nature of language is hard to express in traditional theories, which focus on the separate levels and sometimes view each level as autonomous. But constructions can provide a natural description of the links between form and meaning that characterize the neural circuitry underlying real human language. They offer a high-level computational description of a neural theory of language (NTL).

    An NTL does more than just provide a neural implementation of standard theories of thought and language. Rather it permits a more accurate and full account of our thought and language and the way they fit together. In particular, it allows the embodied and neural character of thought and language to take center stage. The neural theory of language described in this book helps us characterize the integrated, embodied nature of language. The following two concrete examples illustrate what this means.

    Spinning Your Wheels

    Imagine yourself trying to teach the meaning and usage of a phrase like spinning your wheels to a friend who knows English but comes from a culture where the phrase isn’t used. Let’s begin with the simplest, literal meaning of this idiomatic expression. If your friend’s culture did not have cars, the task would be enormous. You would first have to explain what an automobile is, how it works, and how a car’s wheels might spin in mud, in sand, or on ice without the car moving. You would also have to explain the typical effect of this on the driver, namely, frustration at not being able to get the car to move. All of this is part of a knowledge structure called a frame that systematically relates cars, wheels, motion, attitudes, and so on to the situations in which the wheels are spinning but the car doesn’t move. The expression spinning your wheels evokes the entire conceptual framework, with all the appropriate knowledge and attitudes.

    The phrase spinning your wheels can be used either literally or metaphorically. In many cases, spin your wheels works in a simple and literal manner. If you’re spinning your wheels, don’t step on the gas is an expression that is probably new to you, but you can interpret it easily because the sentence connects directly to your own frustrating experiences or those you’ve seen in movies. But variations that are superficially just as simple are not allowed with this idiom; for example, He’s spinning her wheels is not acceptable, although it could make sense (he might be driving her car) and it does fit a pattern found in other English idioms, for example, He fixed her wagon.

    Spinning your wheels makes use of ordinary grammatical constructions of English. It is a verb phrase; it has a verb (spinning) followed by a noun phrase (possessive pronoun + wheels). The verb has a normal suffix -ing. The phrase can be modified by some of the standard grammar rules of English—one can say, We used to be spinning our wheels.

    Also, the idiom is defined in relation to a knowledge frame with an image. In the image, there is a car whose drive wheels are spinning. The car is not moving. The driver of the car is trying to get the car to move, is putting a lot of energy into it, and is frustrated that the car is stuck. The most salient part of the scene is the spinning of the wheels of the car. The noun wheels refers to the wheels of the car and the verb spin refers to what the wheels are doing in the scene. These words become anchor points in metaphorical uses of the phrase.

    For example, there is a general conceptual metaphor in which achieving a purpose is conceptualized as reaching a destination, with progress experienced as moving closer to the destination. Consider the example, I’m spinning my wheels working at this job. The general metaphor is that a career is a journey and career advancement is forward motion. If you’re spinning your wheels, you are not moving, not making progress toward life goals, even though you are putting effort into it. The sentence implies that you are not advancing in your career. You are putting a lot of effort into it, not getting anywhere, and feel frustrated. Here, the metaphor maps the knowledge about the stuck-car scene onto the situation in which there is no advancement in your career.

    The phrase spinning your wheels (like hundreds, if not thousands, of other motivated idioms in English) illustrates the multimodal nature of language. As an idiom, it is like a word of English; you have to learn it, and what you know about it does not follow from general rules. The words involved in the idiom (spin and wheels) have images that fit the salient part of a cultural image (a car spinning its wheels) with knowledge about the image (no motion, desire for motion, lots of effort, frustration). And the common metaphorical meanings make use of maps from this frame-and-scene semantics to various abstract domains (purposeful actions, careers, love relationships).

    To know how to use spinning your wheels correctly, you need to have integrated knowledge involving at least grammar (the constructions), lexicon (the words), semantics (identity of the subject and the pronoun), a cultural image and associated knowledge, and standard conceptual metaphors. There must be precise linkages across all these modalities: the -ing has to fit on the same verb (spin) that (a) precedes the noun phrase in the verb phrase construction, (b) has an image that fits into the wheel-spinning-on-a-car image, and (c) is part of the cultural knowledge associated with the image, which entails lack of motion. Also, the lack of motion can stand as the base of at least three different metaphors: lack of progress in an activity, lack of advancement in a career, and lack of development in a relationship. The remarkable fact is that these metaphors are productive—we can apply them in novel situations and will be understood. For example, you will be spinning your wheels if you try to understand this book without doing the mental exercises.

    Waltzing into a Recession

    The waltz is a dance to music with a 123, 123, … rhythm. The dance partners move in sweeping circular paths, concentrating attention primarily on each other (rather than on where they are going). Ideally, it is a dance one enjoys and is swept up in. To waltz is to perform such a dance. When you waltz, you move; and, since verbs of motion can take directional modifiers (e.g., onto the terrace), there are sentences like Harry and Sadie waltzed onto the terrace.

    Now imagine reading the following sentence in the news: France waltzed into a recession. Sentences like this are very common in news stories (check it out), and this one is immediately understandable, even though its subject is not animate and the path (into a recession) is abstract and does not indicate a physical direction. The use of waltz in this sentence appears to violate the rules of English—waltzing is done only by people. Why is this acceptable?

    The answer has to do with the complex interaction between grammar and metaphor. In understanding the example sentence, a number of common metaphors are being used.

     Countries are metaphorically conceptualized as people. Since a person can be the agent of waltz, so can a metaphorical person.

     Change is metaphorically conceptualized as motion, with economic change as a special case. More is metaphorically up and less is down. In economics, an increase in gross domestic product is conceptualized as an upward motion, whereas a decrease is a downward motion. States of an economy are conceptualized metaphorically as locations, that is, bounded regions in space. A recession is thus a metaphorical hole: it is an economic state seen as a region in which the economy of a country is pulled downward for a significant length of time. When a country is in such a metaphorical hole, it tries to climb out of it, pull itself out of it, or induce another party to help it get out.

     To understand this sentence, the brain must activate these existing metaphorical structures to form what is called a conceptual blend, consisting of all the metaphors linked together.

     In this sentence, France is a metaphorical person and into a recession metaphorically indicates a direction toward a physical location. Thus, with these metaphors, waltz fits within its normal grammatical constraints on the kinds of subjects and modifiers that fit with it.

     The connotation of the sentence follows from how the metaphors apply to our knowledge of waltzing. These metaphors connote that France was enjoying itself, that it was not paying attention to where it was going economically and, as a result, fell into the recession/hole.

    Grammar is the study of the principles by which elements fit together in sentences to produce certain meanings. Here, the grammar of the sentence—what elements can fit with the verb waltz to form the sentence—depends on the complex of metaphors used to understand what is being talked about, which can be quite subtle. Another common metaphorical use of waltz implies an easy achievement, as in Josh waltzed into the end zone. We don’t accept this reading for France waltzed into a recession, because we don’t believe that countries have a goal of getting into recessions.

    In general, we see that understanding a sentence involves finding the best match between what was spoken and our current mental state. The brain is inherently a best-match computer; its massively parallel, interconnected structure allows it to combine many factors in understanding a sentence (or image, etc.), as we saw with figure 1.1. Finding the best match for language input includes evoking metaphors, as we have seen in several examples. More details on how this works are presented in chapter 16.
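A minimal sketch of best-match interpretation, echoing the Josh threw a ball examples from earlier in the chapter: each candidate reading of the ambiguous word is a frame with a prior preference and a feature set, and the frame that best matches the surrounding words wins. All frames, priors, and features below are invented for illustration, not taken from any real lexicon.

```python
# Each reading of the ambiguous word "ball" is a frame: a prior
# preference plus the context features it expects. These entries are
# invented for illustration only.
FRAMES = {
    "ball/sphere": (0.5, {"throw", "catch", "sport", "hand"}),
    "ball/dance":  (0.0, {"throw", "charity", "gala", "music"}),
}

def best_match(context_words):
    """Pick the frame whose prior plus feature overlap is largest."""
    scores = {name: prior + len(features & context_words)
              for name, (prior, features) in FRAMES.items()}
    return max(scores, key=scores.get)

print(best_match({"josh", "throw"}))              # -> ball/sphere
print(best_match({"josh", "throw", "charity"}))   # -> ball/dance
```

The extra word charity tips the match toward the dance frame, just as in the sentence pair discussed earlier; the brain combines vastly more factors in parallel, but the best-match principle is the same.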

    You can see the productive aspect of language at work by substituting different verbs of motion in the example sentence, for example, France stumbled into a recession ; France rushed into a recession ; and so on. With almost any verb of motion you get a somewhat different image of France’s economic progress. Each of these imagined situations is predictable from the meaning of the embodied action and the metaphors involved. We can also immediately understand France is spinning its wheels on unemployment.

    This is language understanding in real life and is what this book tries to explain. The scientific explanation of language begins with the brain and neural computation.

    The Three-Part Bridge

    The bridge between neural structure and meaningful language rests on three pillars:

    Neural computation Our present understanding of how the general theory of computation can be applied to the structure and development of the neural circuitry of the brain. This background provides an account of what it means for the brain to compute, and how that computation differs crucially from the operation of a standard digital computer.

    The embodied nature of thought and language Using neural computation to account for what has been discovered about how thought and language are embodied, as in the preceding examples.

    The integrated organization of language Language is organized in terms of constructions, each of which integrates many aspects—meaning, context, affect, phonological form, and so on. Language learning and use revolve around our ability to flexibly combine these multifaceted constructions.

    The role of this bridge, as with any scientific theory, is to provide adequate descriptions, explanations, and predictions about natural phenomena. The natural phenomena we are studying are thought and language. The scientific technique I use is computational modeling, in particular, modeling using neural computation. An adequate model must actually exhibit the required behavior and be consistent with existing data from neuroscience, cognitive and developmental psychology, and cognitive linguistics. This book is a first attempt to explore the power of this approach.

    The goal is to outline the whole story of how the brain gives rise to thought and language—enough to allow further scientific work to proceed along these lines. Currently we have nothing close to a complete neural computational model of thought and language, but such a model is the ultimate goal of the approach taken here.

    One of the hardest parts of our journey of discovery is understanding the vehicle that is carrying us—the information processing perspective. Any explanation of language and thought will obviously involve some kind of information processing story. The tricky bit is that we need to use one kind of information processing system, conventional computing theory and programs, to discuss and model a quite different kind of information processing system, the brain. The following two chapters attempt to spell out both the necessity of using an information processing perspective and the critical importance of keeping the discussion grounded in the reality of the brain and human experience.

    2 | The Information Processing Perspective

    Neuroscientists speak of neurons as processing information and communicating by sending and receiving signals. They also talk of neurons as performing computations. In fact, neural computation has become the standard way of thinking about how the brain works. But neurons are just cells, tiny biological entities that are alive and function by means of chemistry. Why can we say that neurons process information and perform computations?

    Since neural computation and the information processing perspective on the brain are central to a neural theory of language, it is important that I say, in as simple, clear, and straightforward terms as possible, just what I, and others who study the brain, mean by neural computation. We are interested in a complex cell, the neuron, but it is easier to start by understanding a much simpler cell—the amoeba—in information processing terms.

    Amoebas as Information Processors

    The amoeba, one of the simplest and best known of living things, is depicted in figure 2.1. Amoebas are one-celled animals, somewhat smaller than a period on this page, that have remained essentially unchanged in their billion-year existence. Various members of the amoeba family live in a variety of environments including fresh and salt water, and the digestive systems of people with amoebic dysentery. Though primitive (the whole animal is a single cell), they exhibit many of the vital behaviors of all animals. Much about the way our neurons function can be learned from considering amoebas. We look at some of the details of cell structure and function in the next chapter. For now, I just want to discuss what is involved in thinking about an amoeba from the information processing perspective.

    Figure 2.1

    Drawings of an amoeba ingesting food.

    Three Ways of Thinking about an Amoeba

    A Chemical Factory

    The amoeba is what it looks like—a tiny gelatinous blob of complex molecules. To even start talking about its behavior, we need a way to think about it—to conceptualize an amoeba in terms of something else we know how to think about. A simple and straightforward way to conceptualize the amoeba is from the chemical perspective, as a chemical plant—an organized system of about a million complex molecules. This view is required for many scientific purposes, but additional perspectives are needed for a full understanding.

    A Creature with Needs, Desires, and Goals

    The basic life problems of the amoeba are not very different from our own, as Antonio Damasio (Damasio 2003) states nicely:

    All living organisms from the humble amoeba to the human are born with devices designed to solve automatically, no proper reasoning required, the basic problems of life. Those problems are: finding sources of energy; incorporating and transforming energy; maintaining a chemical balance of the interior compatible with the life process; maintaining the organism’s structure by repairing its wear and tear; and fending off external agents of disease and physical injury.

    The chemical perspective in itself doesn’t help much when we are trying to understand the amoeba’s behavior. A common way of modeling behavior is through a personification metaphor: try thinking of an amoeba as if it were a person with needs, desires, goals, and ways of satisfying them. Then think of yourself as that amoeba/person, and construct an imaginative simulation of what you would do if you were an amoeba with such needs, desires, and goals.

    Imagine being a small aquatic blob, equipped with only crude feelers, trying to survive in a hostile environment. From such an empathetic perspective, one might say that the amoeba is trying to find food and avoid danger. It is hard to imagine really being an amoeba, because the amoeba has no coherent control. The membrane of the amoeba regularly pushes out in one direction or another. If the rest of the body flows behind, the whole amoeba moves—somewhat like the U.S. Congress.

    To understand how the amoeba behaves, both the chemical and the empathetic perspectives are required at once. You need both the chemistry and the amoeba-as-person metaphor. Bringing such multiple perspectives to bear simultaneously is commonplace in science. Throughout this book, we use multiple perspectives. In what follows, we show how the chemical and empathetic perspectives work together to explain the amoeba’s behavior. Words from the empathetic perspective (the amoeba-as-person metaphor) are indicated in italics.

    It is no surprise that amoebas, like the rest of us animals, need to eat. But how does a one-celled animal know what to eat? If I were an amoeba, how would I know what to eat? Here is the answer from the chemical perspective: the amoeba’s outer membrane contains complex molecules that react differently to different molecules in the environment. This general mechanism of chemical detectors in the cell membrane has evolved to play a central role in our immune and nervous systems.

    For the amoeba, some membrane detectors match up well with amoeba-food (for example, bacteria). When the food-detector and amoeba-food molecules come together, the resulting chemical reaction leads to shape changes in the amoeba’s membrane molecules, eventually resulting in the amoeba incorporating the potential food. Similarly, different membrane detector molecules react with amoeba-hazard molecules in the environment, causing reactions that retract amoeba tissue from the threat. That is about what we’d expect from a one-celled animal. The amoeba is a chemical system that reacts to chemicals that fit its irritant detectors differently from ones that activate its food detectors.
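The detector story above is, at its core, a tiny stimulus-to-response mapping, and that is exactly what makes the information processing reading natural. The sketch below is purely illustrative: the molecule names, detector sets, and the `respond` function are invented for this example and are not part of the book's model. It simply shows the membrane-detector mechanism read as computation: classify an incoming molecule by which detector class it matches, then produce the corresponding behavior.

```python
# Illustrative sketch (invented for this example, not the author's model):
# an amoeba membrane viewed as a mapping from input-molecule classes to
# behavioral responses. Detector sets and names are hypothetical.

FOOD_DETECTORS = {"bacterium", "algal-fragment"}   # hypothetical food signatures
IRRITANT_DETECTORS = {"acid", "toxin"}             # hypothetical hazard signatures

def respond(molecule: str) -> str:
    """Map a detected molecule to a behavior, as an information processor would."""
    if molecule in FOOD_DETECTORS:
        return "ingest"    # detector reaction leads to engulfing the food
    if molecule in IRRITANT_DETECTORS:
        return "retract"   # detector reaction pulls tissue away from the threat
    return "ignore"        # no detector matches; no behavioral change

print(respond("bacterium"))  # ingest
print(respond("acid"))       # retract
```

Note that nothing in this sketch requires a central controller: each "decision" is just a local match between a molecule and a detector, which is the point of the chemical account.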

    Although amoebas normally function as independent cells, certain species can (in times of famine) assemble themselves into a multicelled creature that can sense light and heat and that gives off amoeba spores, which can last indefinitely without food. The molecule that signals this need for community action, cAMP, is also a major messenger in our own cellular communication.

    Here we have a reasonable, and comprehensible, account of amoeba behavior: from the amoeba-as-person perspective, we can conceptualize what the amoeba has to do. It needs to eat and needs to avoid hazards that threaten it. From a purely chemical perspective, of course, amoebas do not have needs, foods, avoidance behavior, or irritants. The chemical perspective allows for a description of how these life needs are satisfied.

    The Information Processing Perspective

    Now let’s think about the amoeba from an information processing perspective. This is another generally useful way of talking about living things, and how they satisfy their needs and goals. It allows us to pose questions like the following:

     What information does the amoeba use to survive?

     How does it categorize the information inputs it gets, and how does it respond to each category?

     What is its reaction time?

     How does it know when to replicate by dividing?

     Can it remember and, if so, what is its memory capacity?

     Does it learn and, if so, how?

     Do amoebas communicate with one another?

    As observers of amoebas, we may decide to ask such questions, and doing this requires an information processing perspective. From this perspective, the two types of chemical reactions (to food and to irritants) can be seen as enabling the amoeba to distinguish two kinds of inputs. In general, we always try to understand new things by relating them to familiar concepts. The information processing stance is often useful because we have rich knowledge about computing, memory, and learning, and we can use this wisdom to help us understand what amoebas and other living things do.
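To make questions such as "can it remember?" and "does it learn?" concrete, here is one way an information processing observer might operationalize them: compare a memoryless responder with one whose response weakens after repeated exposure to the same stimulus (habituation). Everything below is an invented toy for illustration, not a claim about real amoeba chemistry.

```python
# Toy illustration (invented for this example): operationalizing the
# information-processing questions "can it remember?" and "does it learn?"
# A habituating responder weakens its reaction to a repeated stimulus;
# a memoryless one reacts identically every time.

class MemorylessAmoeba:
    def react(self, stimulus: str) -> float:
        return 1.0  # always a full-strength response; no state carried over

class HabituatingAmoeba:
    def __init__(self):
        self.counts = {}  # minimal "memory": how often each stimulus was seen
    def react(self, stimulus: str) -> float:
        n = self.counts.get(stimulus, 0)
        self.counts[stimulus] = n + 1
        return 1.0 / (1 + n)  # response strength decays with repeated exposure

a, b = MemorylessAmoeba(), HabituatingAmoeba()
print([a.react("touch") for _ in range(3)])  # [1.0, 1.0, 1.0]
print([b.react("touch") for _ in range(3)])  # [1.0, 0.5, 0.33...]
```

The point is methodological: asking whether the animal behaves like the first class or the second is an empirical question one can pose only after adopting the information processing stance.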

    An additional set of information processing questions concerns communication. Since amoebas do not reproduce sexually, they normally have nothing to communicate about, but other single-celled creatures, including yeast, do communicate using molecular signals.

    Communication and Coordinated Evolution

    Much smaller than an amoeba, a yeast cell makes its living by eating sugars (fermentation). The carbon dioxide released in this process is what makes bread rise and gives beer its carbonation. Yeast cells sometimes reproduce by division, as amoebas do. But they also can engage in sexual reproduction, and this requires communication among cells.

    In general, communication between cells was a major evolutionary advance and a prerequisite for the appearance of multicelled creatures like ourselves. Individual cells, like the amoeba, survive by carefully controlling their internal chemistry, and it goes against their nature to allow outside agitators. Of the 4 billion years since life began, about two-thirds was required to evolve the simplest multicellular organisms and their coordination mechanisms.

    The basic mechanism of this communication is molecular matching. It is particularly simple in yeast: specific molecules (pheromones) released by one cell interact with detector molecules in the walls of other yeast cells (again as in figure 2.1). This can give rise to quite complex transformations in the receiving cell; dozens of steps in this chain are already known. As a result of
