How We Learn: Why Brains Learn Better Than Any Machine . . . for Now
Ebook · 505 pages · 6 hours


About this ebook

“There are words that are so familiar they obscure rather than illuminate the thing they mean, and ‘learning’ is such a word. It seems so ordinary, everyone does it. Actually it’s more of a black box, which Dehaene cracks open to reveal the awesome secrets within.”--The New York Times Book Review

An illuminating dive into the latest science on our brain's remarkable learning abilities and the potential of the machines we program to imitate them


The human brain is an extraordinary learning machine. Its ability to reprogram itself is unparalleled, and it remains the best source of inspiration for recent developments in artificial intelligence. But how do we learn? What innate biological foundations underlie our ability to acquire new information, and what principles modulate their efficiency?

In How We Learn, Stanislas Dehaene works at the boundaries of computer science, neurobiology, and cognitive psychology to explain how learning really works and how to make the best use of the brain’s learning algorithms in our schools and universities, as well as in everyday life and at any age.
Language: English
Publisher: Penguin Books
Release date: Jan 28, 2020
ISBN: 9780525559894

Reviews for How We Learn

Rating: 4.2 out of 5 stars

21 ratings · 3 reviews


  • Rating: 3 out of 5 stars
    3/5
    Maybe 3.5 stars. Interesting stuff about how the learning process works within the brain, with some contrast/compare of computer learning algorithms. Some suggestions about how schools should restructure educational techniques in light of knowledge about how learning works.

    I felt disappointed that there was very little discussion of how what is learned is actually encoded in the brain for future retrieval. The simple fact is that scientists just don’t know how this works, but I would have loved to hear some intelligent, informed speculation. I’m not sure if this is the cautious scientist sticking to scientific facts, or if the mysteries of how knowledge is stored are still so unfathomable that they’re just not worth speculating about. But I would expect a book about how the mind learns to address this question, if only to say that nobody has any real idea how information is encoded in the brain.
  • Rating: 5 out of 5 stars
    5/5
    Very interesting. Many of the studies he refers to are also used in similar books like Make It Stick and Why We Sleep. This isn't a bad thing; it's helpful for remembering because it spaces out learning (as all these texts advocate).
  • Rating: 4 out of 5 stars
    4/5
    "How We Learn" ought to be a text studied for a degree in education, and it would be great for new parents as well. Dehaene draws conclusions from the latest brain studies done on infants and children and argues that education requires more one-on-one interaction between the instructor and the student (or parent and child) and less lecturing--and no grades! Frequent testing and consistent error feedback are essential for learning to take hold, but grades are simply demoralizing.

    I found the organization of the book particularly helpful. I learned about the neurological function of sleep in memory retention, what is true and false about brain plasticity, the grim realities of trauma and addiction, the effect of music education, the benefit of bilingualism from an early age, and the enormous potential of the small child’s mind. In fact, I’ll never look at small children or even tiny babies the same way again after reading this book. When I go to bed, I’m also going to make it a habit to dwell on the positive aspects of the day, or something neat I learned, just before I fall asleep, to maximize the chances that my brain will hold onto it. Adults can’t learn as easily as children, but we can still learn quite a bit with the right techniques.

    Machine learning, on the other hand, has a long way to go. As the author points out, a human must input massive amounts of data to get AI to do just a few things. We’re the opposite: with a few pieces of data, we can do massive numbers of things.

    I received an advanced readers copy from the publisher and was encouraged to submit an honest review.

Book preview

How We Learn - Stanislas Dehaene


Praise for How We Learn

[An] entertaining survey of how science—from brain scans to psychological tests—is helping inspire pedagogy . . . Dehaene challenges many tropes. He argues that children digest new concepts best not when passively listening to a teacher but by rephrasing ideas in their own words and thoughts. He stresses attention, consolidation, engagement, and feedback are all key to learning.

Financial Times

There are words that are so familiar they obscure rather than illuminate the thing they mean, and ‘learning’ is such a word. It seems so ordinary, everyone does it. Actually it’s more of a black box, which Dehaene cracks open to reveal the awesome secrets within. . . . His explanation of the basic machinery of the brain is an excellent primer.

The New York Times Book Review

This is an absorbing, mind-enlarging book, studded with insights. . . . Could have significant real-world results.

The Sunday Times (London)

[An] expert overview of learning. . . . Never mind our opposable thumb, upright posture, fire, tools, or language; it is education that enabled humans to conquer the world. . . . Dehaene’s fourth insightful exploration of neuroscience will pay dividends for attentive readers.

Kirkus Reviews

[Dehaene] rigorously examines our remarkable capacity for learning. The baby brain is especially awesome and not a ‘blank slate.’ . . . Dehaene’s portrait of the human brain is fascinating.

Booklist

A richly instructive [book] for educators, parents, and others interested in how to most effectively foster the pursuit of knowledge.

Publishers Weekly

Penguin Books

HOW WE LEARN

Stanislas Dehaene is the director of the Cognitive Neuroimaging Unit in Saclay, France, and the professor of experimental cognitive psychology at the Collège de France. He is currently the president of the Scientific Council of the French Ministry of Education.

ALSO BY STANISLAS DEHAENE

Consciousness and the Brain: Deciphering How the Brain Codes Our Thoughts

Reading in the Brain: The New Science of How We Read

The Number Sense: How the Mind Creates Mathematics

Book title, How We Learn, Subtitle, Why Brains Learn Better Than Any Machine . . . for Now, author, Stanislas Dehaene, imprint, Viking

PENGUIN BOOKS

An imprint of Penguin Random House LLC

penguinrandomhouse.com

First published in the United States of America by Viking, an imprint of Penguin Random House LLC, 2020

Published in Penguin Books 2021

Copyright © 2020 by Stanislas Dehaene

Penguin supports copyright. Copyright fuels creativity, encourages diverse voices, promotes free speech, and creates a vibrant culture. Thank you for buying an authorized edition of this book and for complying with copyright laws by not reproducing, scanning, or distributing any part of it in any form without permission. You are supporting writers and allowing Penguin to continue to publish books for every reader.

Based in part on Apprendre!, published in France by Odile Jacob, Paris, in 2018.

This page constitutes an extension to the copyright page.

ISBN 9780525559900 (paperback)

The Library of Congress has cataloged the hardcover edition as follows:

Names: Dehaene, Stanislas, author.

Title: How we learn : why brains learn better than any machine . . . for now / Stanislas Dehaene.

Other titles: Apprendre! English

Description: [New York, New York] : Viking, [2020] | Translation of: Apprendre! : les talents du cerveau, le défi des machines. | Includes bibliographical references and index.

Identifiers: LCCN 2019036724 (print) | LCCN 2019036725 (ebook) | ISBN 9780525559887 (hardcover) | ISBN 9780525559894 (ebook)

Subjects: LCSH: Learning, Psychology of. | Cognitive psychology. | Neuroplasticity. | Cognitive science.

Classification: LCC BF318 .D44 2020 (print) | LCC BF318 (ebook) | DDC 153.1/5—dc23

LC record available at https://lccn.loc.gov/2019036724

LC ebook record available at https://lccn.loc.gov/2019036725

Cover art and design by Owen Gildersleeve


For Aurore, who was born this year,

and for all those who once were babies.

Begin by making a more careful study of your pupils, for it is clear that you know nothing about them.

Jean-Jacques Rousseau, Emile, or On Education (1762)

This is a strange and amazing fact: we know every nook and cranny of the human body, we have catalogued every animal on the planet, we have described and baptized every blade of grass, but we have left psychological techniques to their empiricism for centuries, as if they were of lesser importance than those of the healer, the breeder or the farmer.

Jean Piaget, La pédagogie moderne (1949)

If we don’t know how we learn, how on earth do we know how to teach?

L. Rafael Reif, president of MIT (March 23, 2017)

CONTENTS

INTRODUCTION

Part One

What Is Learning?

CHAPTER 1 Seven Definitions of Learning

CHAPTER 2 Why Our Brain Learns Better Than Current Machines

Part Two

How Our Brain Learns

CHAPTER 3 Babies’ Invisible Knowledge

CHAPTER 4 The Birth of a Brain

CHAPTER 5 Nurture’s Share

CHAPTER 6 Recycle Your Brain

Part Three

The Four Pillars of Learning

CHAPTER 7 Attention

CHAPTER 8 Active Engagement

CHAPTER 9 Error Feedback

CHAPTER 10 Consolidation

CONCLUSION Reconciling Education with Neuroscience

ILLUSTRATIONS

ACKNOWLEDGMENTS

NOTES

BIBLIOGRAPHY

INDEX

CREDITS

INTRODUCTION

In September 2009, an extraordinary child forced me to drastically revise my ideas about learning. I was visiting the Sarah Hospital in Brasilia, a neurological rehabilitation center whose white architecture was inspired by Oscar Niemeyer, and with which my laboratory has collaborated for about ten years. The director, Lucia Braga, asked me to meet one of her patients, Felipe, a young boy only seven years old who had spent more than half his life in a hospital bed. She explained to me how, at the age of four, he had been shot in the street—unfortunately not such a rare event in Brazil. The stray bullet had severed his spinal cord, rendering him almost completely paralyzed (tetraparetic). It had also destroyed the visual areas of his brain: he was fully blind. To help him breathe, an opening had been made in his trachea, at the base of his neck. And for over three years, he had been living in a hospital room, locked within the coffin of his inert body.

In the corridor leading to his room, I remember bracing myself at the thought of having to face a broken child. And then I meet . . . Felipe, a lovely little boy like any other seven-year-old—talkative, full of life, and curious about everything. He speaks flawlessly with an extensive vocabulary and asks me mischievous questions about French words. I learn that he has always been passionate about languages and never misses an opportunity to enrich his trilingual vocabulary (he speaks Portuguese, English, and Spanish). Although he is blind and bedridden, he escapes into his imagination by writing his own novels, and the hospital team has encouraged him in this path. In a few months, he learned to dictate his stories to an assistant, then write them himself using a special keyboard connected to a computer and sound card. The pediatricians and speech therapists take turns at his bedside, transforming his writings into real, tactile books with embossed illustrations that he proudly sweeps with his fingers, using the little sense of touch that he has left. His stories speak of heroes and heroines, mountains and lakes that he will never see, but that he dreams of like any other little boy.

Meeting Felipe deeply moved me, and it persuaded me to take a closer look at what is probably the greatest talent of our brain: the ability to learn. Here was a child whose very existence posed a challenge to neuroscience. How do our brain’s cognitive faculties resist such a radical upheaval of their environment? Why could Felipe and I share the same thoughts, despite our extraordinarily different sensory experiences? How do different human brains converge on the same concepts, almost regardless of how and when they learn them?

Many neuroscientists are empiricists: together with the English Enlightenment philosopher John Locke (1632–1704), they presume that the brain simply draws its knowledge from its environment. In this view, the main property of cortical circuits is their plasticity, their ability to adapt to their inputs. And, indeed, nerve cells possess a remarkable ability to constantly adjust their synapses according to the signals they receive. Yet if this were the brain’s main drive, my little Felipe, deprived of visual and motor inputs, should have become a profoundly limited person. By what miracle did he manage to develop entirely normal cognitive abilities?

Felipe’s case is by no means unique. Everybody knows the story of Helen Keller (1880–1968) and Marie Heurtin (1885–1921), both of whom were born deaf and blind and yet, after years of grueling social isolation, learned sign language and ultimately became brilliant thinkers and writers.¹ Throughout these pages, we will meet many other individuals who, I hope, will radically alter your views on learning. One of them is Emmanuel Giroux, who has been blind since the age of eleven but became a top-notch mathematician. Paraphrasing the fox in Antoine de Saint-Exupéry’s The Little Prince (1943), Giroux confidently states: In geometry, what is essential is invisible to the eye. It is only with the mind that you can see well. How does this blind man manage to swiftly navigate within the abstract spaces of algebraic geometry, manipulating planes, spheres, and volumes without ever seeing them? We will discover that he uses the same brain circuits as other mathematicians, but that his visual cortex, far from remaining inactive, has actually repurposed itself to do math.

I will also introduce you to Nico, a young painter who, while visiting the Marmottan Museum in Paris, managed to make an excellent copy of Monet’s famous painting Impression, Sunrise (see figure 1 in the color insert). What is so exceptional about this? Nothing, besides the fact that he accomplished it with only a single hemisphere, his left one—the right half of his brain was almost fully removed at the age of three! Nico’s brain learned to squeeze all his talents into half a brain: speech, writing, and reading, as usual, but drawing and painting too, which are generally thought to be functions of the right hemisphere, and also computer science and even wheelchair fencing, a sport in which he has reached the rank of champion in Spain. Forget everything you were told about the respective roles of both hemispheres, because Nico’s life proves that anyone can become a creative and talented artist without a right hemisphere! Cerebral plasticity seems to work miracles.

We will also visit the infamous orphanages of Bucharest, where children were left from birth in near-total abandonment—and yet, years later, some of them, adopted before the age of one or two, went on to have almost normal school careers.

All these examples illustrate the extraordinary resilience of the human brain: even major trauma, such as blindness, the loss of a hemisphere, or social isolation, cannot extinguish the spark of learning. Language, reading, mathematics, artistic creation: all these unique talents of the human species, which no other primate possesses, can survive massive injuries, such as the removal of a hemisphere or the loss of sight and motor skills. Learning is a vital principle, and the human brain has an enormous capacity for plasticity—to change itself, to adapt. Yet we will also discover dramatic counterexamples, where learning seems to freeze and remain powerless. Consider pure alexia, the inability to read a single word. I have personally studied several adults, all of them once excellent readers, in whom a tiny stroke restricted to a minuscule brain area rendered them incapable of deciphering words as simple as dog or mat. I remember a brilliant trilingual woman, a faithful reader of the French newspaper Le Monde, who was deeply saddened that, after her brain injury, every page of the daily press looked to her like Hebrew. Her determination to relearn to read was as strong as her stroke had been severe. Yet after two years of perseverance, her reading level still did not exceed that of a kindergartner: it took her several seconds to decipher a single word, letter by letter, and she stumbled on every one. Why couldn’t she learn? And why do some children with dyslexia, dyscalculia, or dyspraxia struggle just as hopelessly to acquire reading, calculation, or writing, while others surf smoothly through those fields?

Brain plasticity almost seems temperamental: sometimes it overcomes massive difficulties, and other times it leaves children and adults who are otherwise highly motivated and intelligent with debilitating disabilities. Does it depend on particular circuits? Do these circuits lose their plasticity over the years? Can plasticity be reopened? What are the rules that govern it? How can the brain be so effective from birth and throughout a child’s youth? What algorithms allow our brain circuits to form a representation of the world? Would understanding them help us learn better and faster? Could we draw inspiration from them in order to build more efficient machines, artificial intelligences that would ultimately imitate us or even surpass us? These are some of the questions that this book attempts to answer, in a radically multidisciplinary manner, drawing on recent scientific discoveries in cognitive science and neuroscience, but also in artificial intelligence and education.

WHY LEARN?

Why do we have to learn in the first place? The very existence of the capacity to learn raises questions. Wouldn’t it be better for our children to immediately know how to speak and think, right from day one, like Athena, who, according to legend, emerged into the world from Zeus’s skull, fully grown and armed, as she let out her war cry? Why aren’t we born pre-wired, with pre-programmed software and exactly the pre-loaded knowledge necessary to our survival? In the Darwinian struggle for life, shouldn’t an animal who is born mature, with more knowledge than others, end up winning and spreading its genes? Why did evolution invent learning in the first place?

My answer is simple: a complete pre-wiring of the brain is neither possible nor desirable. Impossible, really? Yes, because if our DNA had to specify all the details of our knowledge, it simply would not have the necessary storage capacity. Our twenty-three chromosomes contain three billion pairs of the letters A, C, G, T—the molecules adenine, cytosine, guanine, and thymine. How much information does that represent? Information is measured in bits: a binary decision, 0 or 1. Since each of the four letters of the genome codes for two bits (we can code them as 00, 01, 10, and 11), our DNA therefore contains a total of six billion bits. Remember, however, that in today’s computers, we count in bytes, which are sequences of eight bits. The human genome can thus be reduced to about 750 megabytes—the contents of an old-fashioned CD-ROM or a small USB key! And this basic calculation does not even take into account the many redundancies that abound in our DNA.

From this modest amount of information, inherited from millions of years of evolution, our genome, initially confined to a single fertilized egg, manages to set up the whole body plan—every molecule of every cell in our liver, kidneys, muscles, and, of course, our brain: eighty-six billion neurons, a thousand trillion connections. . . . How could our genome possibly specify each one of them? Assuming that each of our nerve connections encodes only one bit, which is certainly an underestimate, the capacity of our brain is on the order of one hundred terabytes (about 10¹⁵ bits), or a hundred thousand times more than the information in our genome. We are faced with a paradox: the fantastic palace that is our brain contains a hundred thousand times more detail than the architect’s blueprints that are used to build it! I see only one explanation: the structural frame of the palace is built following the architect’s guidelines (our genome), while the details are left to the project manager, who can adapt the blueprints to the terrain (the environment). Pre-wiring a human brain in all its detail would be strictly impossible, which is why learning is needed to supplement the work of genes.
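Since the argument above is pure arithmetic, it is easy to verify. A minimal sketch of the bookkeeping (variable names and decimal-unit conventions are my own; the text’s figures are taken at face value):

```python
# Back-of-envelope check of the genome-versus-brain estimates above.
base_pairs = 3_000_000_000            # letters A, C, G, T in the genome
genome_bits = base_pairs * 2          # 2 bits per letter -> 6 billion bits
genome_mb = genome_bits / 8 / 1e6     # 750.0 MB: one CD-ROM or a small USB key

synapses = 10 ** 15                   # "a thousand trillion connections"
brain_bits = synapses * 1             # at least 1 bit per connection (an underestimate)
brain_tb = brain_bits / 8 / 1e12      # 125.0 TB, i.e. on the order of 100 TB

print(genome_mb)                      # 750.0
print(round(brain_bits / genome_bits))  # 166667: roughly a hundred thousand times
```

The ratio of roughly 10⁵ between the brain’s capacity and the genome’s is what drives the architect-and-blueprint conclusion: the genome cannot possibly specify every connection.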

This simple bookkeeping argument, however, fails to explain why learning is so universally widespread in the animal world. Even simple organisms devoid of any cortex, such as earthworms, fruit flies, and sea cucumbers, learn many of their behaviors. Take the little worm called the nematode, or C. elegans. In the past twenty years, this millimeter-size animal became a laboratory star, in part because its architecture is under strong genetic determinism and can be analyzed down to the smallest detail. Most individual specimens have exactly 959 cells, including 302 neurons, whose connections are all known and reproducible. And yet it learns.² Researchers initially considered it as a kind of robot just able to swim back and forth, but they later realized that it possesses at least two forms of learning: habituation and association. Habituation refers to an organism’s capacity to adapt to the repeated presence of a stimulus (for example, a molecule in the water in which the animal lives) and eventually cease to respond to it. Association, on the other hand, consists of discovering and remembering what aspects of the environment predict sources of food or danger. The nematode worm is a champion of association: it can remember, for instance, which tastes, smells, or temperature levels were previously associated with food (bacteria) or with a repellent molecule (the smell of garlic) and use this information to choose an optimal path through its environment.
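To make the two mechanisms concrete, here is a deliberately toy sketch (not a model of C. elegans; the functions, rates, and update rules are invented for illustration): habituation shrinks the response to a repeated stimulus, while association strengthens the link between a cue and an outcome that follows it.

```python
# Toy illustration of the two learning forms described above.
# Not a nematode model; all constants and rules are invented for illustration.

def habituate(response, decay=0.5, steps=5):
    """Repeated exposure weakens the reflex toward zero."""
    history = []
    for _ in range(steps):
        response *= decay             # each repetition damps the response
        history.append(response)
    return history

def associate(strength, outcome, rate=0.3, steps=5):
    """A cue repeatedly paired with food grows in predictive strength."""
    history = []
    for _ in range(steps):
        strength += rate * (outcome - strength)  # move toward the observed outcome
        history.append(strength)
    return history

print(habituate(1.0))       # response fades geometrically toward 0
print(associate(0.0, 1.0))  # association grows monotonically toward 1
```

The error-driven update in `associate` (change proportional to the gap between prediction and outcome) is the classic delta-rule form of associative learning; the worm’s actual circuitry is, of course, far richer.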

With such a small number of neurons, the worm’s behavior could have been fully pre-wired. However, it is not. The reason is that it is highly advantageous, indeed indispensable for its survival, to adapt to the specific environment in which it is born. Even two genetically identical organisms will not necessarily encounter the same ecosystem. In the case of the nematode, the ability to quickly adjust its behavior to the density, chemistry, and temperature of the place in which it lands allows it to be more efficient. More generally, every animal must quickly adapt to the unpredictable conditions of its current existence. Natural selection, Darwin’s remarkably efficient algorithm, can certainly succeed in adapting each organism to its ecological niche, but it does so at an appallingly slow rate. Whole generations must die, due to lack of proper adaptation, before a favorable mutation can increase the species’ chance of survival. The ability to learn, on the other hand, acts much faster—it can change behavior within the span of a few minutes, which is the very quintessence of learning: being able to adapt to unpredictable conditions as quickly as possible.

This is why learning evolved. Over time, the animals that possessed even a rudimentary capacity to learn had a better chance of surviving than those with fixed behaviors—and they were more likely to pass their genome (now including genetically driven learning algorithms) on to the next generation. In this manner, natural selection favored the emergence of learning. The evolutionary algorithm discovered a good trick: it is useful to let certain parameters of the body change rapidly in order to adjust to the most volatile aspects of the environment.

Naturally, several aspects of the physical world are strictly invariable: gravitation is universal; the propagation of light and sound does not change overnight; and that is why we do not have to learn how to grow ears, eyes, or the labyrinths that, in our vestibular system, keep track of our body’s acceleration—all these properties are genetically hardwired. However, many other parameters, such as the spacing of our two eyes, the weight and length of our limbs, or the pitch of our voice, all vary, and this is why our brain must adapt to them. As we shall see, our brains are the result of a compromise—we inherit, from our long evolutionary history, a great deal of innate circuitry (coding for all the broad intuitive categories into which we subdivide the world: images, sounds, movements, objects, animals, people . . .) but also, perhaps, to an even greater extent, some highly sophisticated learning algorithm that can refine those early skills according to our experience.

HOMO DOCENS

If I had to sum up, in one word, the singular talents of our species, I would answer with learning. We are not simply Homo sapiens, but Homo docens—the species that teaches itself. Most of what we know about the world was not given to us by our genes: we had to learn it from our environment or from those around us. No other animal has managed to change its ecological niche so radically, moving from the African savanna to deserts, mountains, islands, polar ice caps, cave dwellings, cities, and even outer space, all within a few thousand years. Learning has fueled it all. From making fire and designing stone tools to agriculture, exploration, and atomic fission, the story of humanity is one of constant self-reinvention. At the root of all these accomplishments lies one secret: the extraordinary ability of our brain to formulate hypotheses and select those that fit with our environment.

Learning is the triumph of our species. In our brain, billions of parameters are free to adapt to our environment, our language, our culture, our parents, or our food. . . . These parameters are carefully chosen: over the course of evolution, the Darwinian algorithm carefully delineated which brain circuits should be pre-wired and which should be left open to the environment. In our species, the contribution of learning is particularly large since our childhood extends over many more years than it does for other mammals. And because we possess a unique knack for language and mathematics, our learning device is able to navigate vast spaces of hypotheses that recombine into potentially infinite sets—even if they are always grounded in fixed and invariable foundations inherited from our evolution.

More recently, humanity discovered that it could increase this remarkable ability even further with the help of an institution: the classroom. Pedagogy is an exclusive privilege of our species: no other animal actively teaches its offspring by setting aside specific time to monitor their progress, difficulties, and errors. The invention of the school, an institution which systematizes the informal education present in all human societies, has vastly increased our brain potential. We have discovered that we can take advantage of the exuberant plasticity of the child brain to instill in it a maximum amount of information and talent. Over centuries, our school system has continued to improve in efficiency, starting earlier and earlier in childhood and now lasting for fifteen years or more. Increasing numbers of brains benefit from higher education. Universities are neural refineries where our brain circuits acquire their best talents.

Education is the main accelerator of our brain. It is not difficult to justify its presence in the top spots in government spending: without it, our cortical circuits would remain diamonds in the rough. The complexity of our society owes its existence to the multiple improvements that education brings to our cortex: reading, writing, calculation, algebra, music, a sense of time and space, a refinement of memory. . . . Did you know, for example, that the short-term memory of a literate person, the number of syllables she can repeat, is almost double that of an adult who never attended school and remained illiterate? Or that IQ increases by several points for each additional year of education and literacy?

LEARNING TO LEARN

Education magnifies the already considerable faculties of our brain—but could it perform even better? At school and at work, we constantly tinker with our brain’s learning algorithms, yet we do so intuitively, without paying attention to how to learn. No one has ever explained to us the rules by which our brain memorizes and understands or, on the contrary, forgets and makes mistakes. It truly is a pity, because the scientific knowledge is extensive. An excellent website, put together by the British Education Endowment Foundation (EEF),³ lists the most successful educational interventions—and it gives a very high ranking to the teaching of metacognition (knowing the powers and limits of one’s own brain). Learning to learn is arguably the most important factor for academic success.

Fortunately, we now know a lot about how learning works. Thirty years of research, at the boundaries of computer science, neurobiology, and cognitive psychology, have largely elucidated the algorithms that our brain uses, the circuits involved, the factors that modulate their efficacy, and the reasons why they are uniquely efficient in humans. In this book, I will discuss all those points in turn. When you close this book, I hope you will know much more about your own learning processes. It seems fundamental, to me, that every child and every adult realize the full potential of his or her own brain and also, of course, its limits. Contemporary cognitive science, through the systematic dissection of our mental algorithms and brain mechanisms, gives new meaning to the famous Socratic adage Know thyself. Today, the point is no longer just to sharpen our introspection, but to understand the subtle neuronal mechanics that generate our thoughts, in an attempt to use them in optimal accordance with our needs, goals, and desires.

The emerging science of how we learn is, of course, of special relevance to all those for whom learning is a professional activity: teachers and educators. I am deeply convinced that one cannot properly teach without possessing, implicitly or explicitly, a mental model of what is going on in the minds of the learners. What sort of intuitions do they start with? What steps do they have to take in order to move forward? What factors can help them develop their skills?

While cognitive neuroscience does not have all the answers, we begin to understand that all children start off life with a similar brain architecture—a Homo sapiens brain, radically different from that of other apes. I am not denying, of course, that our brains vary: the quirks of our genomes, as well as the whimsies of early brain development, grant us slightly different strengths and learning speeds. However, the basic circuitry is the same in all of us, as is the organization of our learning algorithms. There are therefore fundamental principles that any teacher must respect in order to be most effective. In this book, we will see many examples. All young children share abstract intuitions in the domains of language, arithmetic, logic, and probability, thus providing a foundation on which higher education must be grounded. And all learners benefit from focused attention, active engagement, error feedback, and a cycle of daily rehearsal and nightly consolidation—I call these factors the four pillars of learning, because, as we shall see, they lie at the foundation of the universal human learning algorithm present in all our brains, children and adults alike.

At the same time, our brains do exhibit individual variations, and in some extreme cases, a pathology can appear. The reality of developmental pathologies, such as dyslexia, dyscalculia, dyspraxia, and attention disorders, is no longer in doubt. Fortunately, as we increasingly understand the common architecture from which these quirks arise, we also discover that simple strategies exist to detect and compensate for them. One of the goals of this book is to spread this growing scientific knowledge, so that every teacher, and also every parent, can adopt an optimal teaching strategy. While children vary dramatically in what they know, they still share the same learning algorithms. Thus, the pedagogical tricks that work best with all children are also those that tend to be the most efficient for children with learning disabilities—they simply need to be applied with greater focus, patience, systematicity, and tolerance to error.

And the latter point is crucial: while error feedback is essential, many children lose confidence and curiosity because their errors are punished rather than corrected. In schools worldwide, error feedback is often synonymous with punishment and stigmatization—and later in this book I will have much to say about the role of school grades in perpetuating this confusion. Negative emotions crush our brain’s learning potential, whereas providing the brain with a fear-free environment may reopen the gates of neuronal plasticity. There will be no progress in education without simultaneously considering the emotional and cognitive facets of our brain—in today’s cognitive neuroscience, both are considered key ingredients of the learning cocktail.

THE CHALLENGE OF MACHINES

Today, human intelligence faces a new challenge: we are no longer the only champions of learning. In all fields of knowledge, learning algorithms are challenging our species’ unique status. Thanks to them, smartphones can now recognize faces and voices, transcribe speech, translate foreign languages, control machines, and even play chess or Go—much better than we can. Machine learning has become a billion-dollar industry that is increasingly inspired by our brains. How do these artificial algorithms work? Can their principles help us understand what learning is? Are they already able to imitate our brains, or do they still have a long way to go?

While the current advances in computer science are fascinating, their limits are evident. Conventional deep learning algorithms mimic only a small part of our brain’s functioning, the one that, I argue, corresponds to the first stages of sensory processing, the first two or three hundred milliseconds during which our brain operates in an unconscious manner. This type of processing is in no way superficial: in a fraction of a second, our brain can recognize a face or a word, put it in context, understand it, and even integrate it into a small sentence. . . . The limitation, however, is that the process remains strictly bottom-up, without any real capacity for reflection. Only in the subsequent stages, which are much slower, more conscious, and more reflective, does our brain manage to deploy all its abilities of reasoning, inference, and flexibility—features that today’s machines are still far from matching. Even the most advanced computer architectures fall short of any human infant’s ability to build abstract models of the world.

Even within their fields of expertise—for example, the rapid recognition of shapes—modern-day algorithms encounter a second problem: they are much less effective than our brain. The state of the art in machine learning involves running millions, even billions, of training attempts on computers. Indeed, machine learning has become virtually synonymous with big data: without massive data sets, algorithms have a hard time extracting abstract knowledge that generalizes to new situations. In other words, they do not make the best use of data.

In this contest, the infant brain wins hands down: babies do not need more than one or two repetitions to learn a new word. Their brain makes the most of extremely scarce data, a competence that still eludes today’s computers. Neuronal learning algorithms often come close to optimal computation: they manage to extract the true essence from the slightest observation. If computer scientists hope to achieve the same performance in machines, they will have to draw inspiration from the many learning tricks that evolution integrated into our brain: attention, for example, which allows us to select and amplify relevant information; or sleep, an algorithm by which our brain synthesizes what it learned on previous days. New machines with these properties are beginning to emerge, and their performance is constantly improving—they will undoubtedly compete with our brains in the near future.

According to an emerging theory, the reason that our brain is still superior to machines is that it acts as a statistician. By constantly attending to probabilities and uncertainties, it optimizes its ability to learn. During its evolution, our brain seems to have acquired sophisticated algorithms that constantly keep track of the uncertainty associated with what it has learned—and such a systematic attention to probabilities is, in a precise mathematical sense, the optimal way to make the most of each piece of information.

Recent experimental data support this hypothesis. Even babies understand probabilities: from birth, probabilistic inferences seem to be deeply embedded in their brain circuits. Children act like little budding scientists: their brains teem with hypotheses, which resemble scientific theories that their experiences put to the test. Reasoning with probabilities, in a largely unconscious manner, is deeply inscribed in the logic of our learning. It allows any of us to gradually reject false hypotheses and retain only the theories that make sense of the data. And, unlike other animal species, humans seem to use this sense of probabilities to acquire scientific theories from the outside world. Only Homo sapiens manages to systematically generate abstract symbolic thoughts and to update their plausibility in the face of new observations.

Innovative computer algorithms are beginning to incorporate this new vision of learning. They are called Bayesian, after the Reverend Thomas Bayes (1702–61), who outlined the rudiments of this theory as early as the eighteenth century. My hunch is that Bayesian algorithms will revolutionize machine learning—indeed, we will see that they are already able to extract abstract information with an efficiency close to that of a human scientist.
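The core of the Bayesian idea can be shown in a few lines of code. The sketch below is my own toy illustration, not an algorithm from this book: it applies Bayes' rule to two hypothetical competing hypotheses, showing how a single observation shifts their relative plausibility.

```python
# Toy illustration of Bayesian updating (not from the book).
# Two competing hypotheses start out equally plausible; one
# observation, better explained by hypothesis A, shifts belief.

def bayes_update(priors, likelihoods):
    """Apply Bayes' rule: posterior ∝ prior × likelihood, normalized."""
    unnormalized = [p * l for p, l in zip(priors, likelihoods)]
    total = sum(unnormalized)
    return [u / total for u in unnormalized]

priors = [0.5, 0.5]          # hypotheses A and B, equally plausible a priori
likelihoods = [0.9, 0.1]     # the observation fits A far better than B

posteriors = bayes_update(priors, likelihoods)
print(posteriors)            # belief now strongly favors hypothesis A
```

Repeating the update with each new observation, using the posterior as the next prior, gradually rejects hypotheses that fail to explain the data, which is precisely the behavior the theory attributes to the learning brain.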


•   •   •

Our journey into the contemporary science of learning is a three-part trip.

In the first part, entitled What Is Learning?, we start by defining what it means for humans or animals—or indeed any algorithm or machine—to learn something. The idea is simple: to learn is to progressively form, in silicon and neural circuits alike, an internal model of the outside world. When I walk around a new town, I form a mental map of its layout—a miniature model of its streets and passageways. Likewise, a child who is learning to ride a bike is shaping, in her neural circuits, an unconscious simulation of how the actions on the pedals and handlebars affect the bike’s stability. Similarly, a computer algorithm learning to recognize faces is acquiring template models of the various possible shapes of eyes, noses, mouths, and their combinations.

But how do we set up the proper mental model? As we shall see, the learner’s mind can be likened to a giant machine with millions of tunable parameters whose settings collectively define what is learned (for instance, where the streets are likely to be in our mental map of the neighborhood). In the brain, the parameters are synapses, the connections between neurons, which can vary in strength; in most present-day computers, they are the tunable weights or probabilities that specify the strength of each
