“A thumping good book.” —Wall Street Journal

In The End of Science, John Horgan makes the case that the era of truly profound
scientific revelations about the universe and our place in it is over. Interviewing
scientific luminaries such as Stephen Hawking, Francis Crick, and Richard Dawkins,
he demonstrates that all the big questions that can be answered have been answered,
as science bumps up against fundamental limits. The world cannot give us a “theory of
everything,” and modern endeavors such as string theory are “ironic” and “theological”
in nature, not scientific, because they are impossible to confirm. Horgan’s argument
was controversial in 1996, and it remains so today, still firing up debates in labs and
on the internet, not least because—as Horgan details in a lengthy new preface—ironic
science is more prevalent than ever. Still, while Horgan offers his critique, grounded
in the thinking of the world’s leading researchers, he offers homage, too. If science is
ending, he maintains, it is only because it has done its work so well.

“. . . while he buttonholes several dozen of earth’s crankiest, most opinionated, most
exasperating scientists to get their views on where science is and where it’s going. . . .
They all come to life in Horgan’s narrative.” —Washington Post Book World

“An unauthorized biography of science.” —Associated Press

“A deft wordsmith and keen observer, Horgan offers lucid expositions of everything
from superstring theory and Thomas Kuhn’s analysis of scientific revolutions to the
origin of life and sociobiology.” —Business Week

“The End of Science is a revealing glimpse into the minds of some of our
leading scientists and philosophers. Read it. Enjoy it, learn from it.”
—Hartford Courant

THE END OF SCIENCE
FACING THE LIMITS OF KNOWLEDGE
IN THE TWILIGHT OF THE SCIENTIFIC AGE
JOHN HORGAN
Many of the designations used by manufacturers and sellers to distinguish their products are
claimed as trademarks. Where those designations appear in this book and the publisher was
aware of the trademark claim, the designations have been printed in initial capital letters.
All rights reserved. Printed in the United States of America. No part of this book may be
reproduced in any manner whatsoever without written permission except in the case of brief
quotations embodied in critical articles and reviews. For more information, address the Perseus
Books Group, 250 West 57th Street, 15th Floor, New York, New York 10107.
Books published by Perseus Books are available for special distribution for bulk purchase in
the United States by corporations, institutions, and other organizations. For more information,
contact the Special Markets Department at the Perseus Books Group, 2300 Chestnut Street,
Suite 200, Philadelphia, PA 19103, or call (800) 810-4145, ext. 5000, or e-mail special.markets@
perseusbooks.com.
A CIP catalog record for the original paperback edition of this book is available from the
Library of Congress.
ISBN: 978-0-465-06592-9 (pbk.)
ISBN: 978-0-465-05085-7 (ebook)
ISBN: 978-0-553-06174-7 (original pbk.)
A hardcover edition of this book was originally published in 1996 by Addison-Wesley Publish-
ing Company, Inc.
10 9 8 7 6 5 4 3 2 1
Here’s what a fanatic I am: When I have a captive audience of inno-
cent youths, I expose them to my evil meme.
Since 2005, I’ve taught history of science to undergraduates at
Stevens Institute of Technology, an engineering school perched on the
bank of the Hudson River. After we get through ancient Greek “sci-
ence,” I make my students ponder this question: Will our theories of
the cosmos seem as wrong to our descendants as Aristotle’s theories
seem to us?
I assure them there is no correct answer, then tell them the answer
is “No,” because Aristotle’s theories were wrong and our theories are
right. The earth orbits the sun, not vice versa, and our world is made
not of earth, water, fire and air but of hydrogen, carbon and other ele-
ments that are in turn made of quarks and electrons.
Our descendants will learn much more about nature, and they
will invent gadgets even cooler than smartphones. But their scientific
version of reality will resemble ours, for two reasons: First, ours—
as sketched out, for example, by Neil deGrasse Tyson in his terrific
re-launch of Cosmos—is in many respects true; most new knowledge
will merely extend and fill in our current maps of reality rather than
forcing radical revisions. Second, some major remaining mysteries—
Where did the universe come from? How did life begin? How, exactly,
does a chunk of meat make a mind?—might be unsolvable.
That’s my end-of-science argument in a nutshell, and I believe it
as much today as I did when I was finishing my book twenty years
ago. That’s why I keep writing about my thesis, and why I make my
students ponder it—even though I hope I’m wrong, and I’m oddly
relieved when my students reject my pessimistic outlook.
My meme has spread far and wide, in spite of its unpopularity, and
often intrudes even in upbeat discussions about science’s future. A few
brave souls have suggested that maybe there’s something to this end-
of-science idea.1 But most pundits reject it in ritualistic fashion, as if
performing an exorcism.2
Allusions to my book often take this form: “In the mid-1990s, this guy
John Horgan said science is ending. But look at all that we’ve learned
lately, and all that we still don’t know! Horgan is as wrong as those fools
at the end of the nineteenth century who said physics was finished!”
This preface to a new edition of The End of Science updates my
argument, going beyond “Loose Ends,” the Afterword for the 1997
paperback, which fended off the first volley of attacks. Science has
accomplished a lot over the past eighteen years, from the completion
of the Human Genome Project to the discovery of the Higgs boson, so
it’s not surprising that folks only sketchily familiar with my argument
find it preposterous.
But so far my prediction that there would be no great “revelations or
revolutions”—no insights into nature as cataclysmic as heliocentrism,
evolution, quantum mechanics, relativity, the big bang—has held up
just fine.
In some ways, science is in even worse shape today than I would have
guessed back in the 1990s. In The End of Science, I predicted that sci-
entists, as they struggle to overcome their limitations, would become
increasingly desperate and prone to hyperbole. This trend has become
more severe and widespread than I anticipated. In my thirty-plus years
of covering science, the gap between the ideal of science and its messy,
all-too-human reality has never been greater than it is today.
Over the past decade, statistician John Ioannidis has analyzed the
peer-reviewed scientific literature and concluded that “most current
published research findings are false.”3 Ioannidis blames the problem
on increasingly fierce competition among scientists for attention and grants.
“Much research is conducted for reasons other than the pursuit of
The federal project was completed ahead of time and under bud-
get, because of the surging power and plummeting costs of gene-
sequencing machines and other biotechnologies. These advances have
inspired triumphal rhetoric about how we are on the verge of a pro-
found new understanding of and control over ourselves—as well as the
usual hand-wringing about designer babies.
But so far the medical advances promised by Francis Collins have
not materialized. Over the past twenty-five years, researchers have car-
ried out some 2,000 clinical trials of gene therapies, which treat dis-
eases by tweaking patients’ DNA.16 At this writing, no gene therapies
have been approved for commercial sale in the United States (although
in 2012 one was approved in Europe). Tellingly, cancer mortality
rates—in spite of countless alleged “breakthroughs”—also remain
extremely high.17
We still lack basic understanding of how genes make us who we are.
Behavioral genetics seeks to identify variations in genes that underpin
variations between people—in, for example, temperament and suscep-
tibility to mental illnesses. The field has announced countless dramatic
“discoveries”—genes for homosexuality, high IQ, impulsivity, violent
aggression, schizophrenia—but so far it has not produced a single
robust finding.18
Other lines of investigation, while intriguing, have complicated
rather than clarified our understanding of how genes work. Research-
ers have shown that evolution and embryonic development can interact
in surprising ways (evo devo); that subtle environmental factors can
affect genes’ expression (epigenetics); and that supposedly inert sections
of our genome (junk DNA) might actually serve some purpose after all.
Biology’s plight recalls that of physics in the 1950s, when accelerators
churned out weird new particles hard to reconcile with conventional
quantum physics. Confusion yielded to clarity after Murray Gell-Mann
and others conjectured that neutrons and protons are composed of
triplets of strange particles called “quarks.”
Could some novel idea—as bold as the quark—help clarify genetics,
perhaps in ways that help the field fulfill its medical potential? I hope
so. Could biology be on the verge of a profound paradigm shift, akin to
the quantum revolution in physics?
Of all the initial objections to The End of Science, the one I took most
to heart was that I didn’t delve deeply enough into research on the
human mind, which has more potential to produce “revelations or rev-
olutions” than any other scientific endeavor. I agreed, so much so that I
devoted my next two books to mind-related topics.
The Undiscovered Mind (1999) critiques psychiatry, neuroscience,
evolutionary psychology, artificial intelligence and other fields that
seek to explain the mind and heal its disorders. Rational Mysticism
(2003) explores mystical states of consciousness, with which I’ve been
obsessed since my acid-addled youth.
I’ve continued tracking neuroscience over the past decade. I’m
especially intrigued by efforts to crack the “neural code.”22 This is the
algorithm, or set of rules, that supposedly transforms electrochemi-
cal impulses in our brains into perceptions, memories, emotions,
decisions.
A solution to the neural code could have profound consequences,
intellectual and practical. It could help solve ancient philosoph-
ical conundrums such as the mind-body problem, with which Plato
wrestled, and the riddle of free will. It could lead to better theories
in research that can boost its soldiers’ cognitive capacities and degrade
those of enemies.
In 2013, President Barack Obama announced that the government
was investing more than $100 million a year in a new initiative called
BRAIN, for Brain Research through Advancing Innovative Neuro-
technologies. The major sponsor of BRAIN is the Defense Advanced
Research Projects Agency (DARPA). Neuroscientists have accepted—
indeed, eagerly sought—this funding without any significant debate
over ethical propriety.
I’m often asked what would make me admit I’m wrong about science
ending. Sometimes my responses are glib: I’ll admit I was wrong
when Ray Kurzweil successfully uploads his digitized psyche into a
smartphone. Or aliens land in Times Square and announce that the
young-earth creationists are right. Or we invent warp-drive spaceships
that allow us to squirt down wormholes into parallel universes.
Here is a more serious answer: My book serves up many mini-
arguments—for example, that string theory will never be confirmed,
and that the mind-body problem will never be solved—that could turn
out to be wrong. But nothing can shake my faith in my meta-argu-
ment: Science gets things right, it converges on the truth, but it will
never give us absolute truth. Sooner or later science must bump up
against limits.29
I see no limits, however, to our ability to create a better world. In
my 2012 book The End of War, I argue that humanity can achieve per-
manent peace, not in some distant future, but soon. We don’t have to
turn into robots, or abolish capitalism or religion, or return to a hunt-
er-gatherer lifestyle. We just have to recognize that war is stupid and
immoral and put more effort into nonviolent conflict resolution.
In my classes, I often make my students read President John F. Ken-
nedy’s inaugural speech, in which he envisions a world without war,
poverty, infectious disease, and tyranny. When I ask if they think Ken-
nedy’s vision is plausible or just utopian hogwash, most students go
with hogwash. Paradoxically, students who reject my end-of-science
argument as too pessimistic are in other respects distressingly gloomy
about humanity’s prospects.
I point out to my students that humanity has taken huge strides
toward every one of the goals mentioned by Kennedy. The world is
healthier, wealthier, freer, and more peaceful than it was one hundred
years ago—or when Kennedy gave his speech in 1961. Because my col-
lege sits on the Hudson River across from Manhattan, I can point out
a window and note that the water and air around New York City are
much cleaner than during Kennedy’s era. I wrap up these classroom
riffs by exclaiming, “Things are getting better!”
We may never achieve immortality, or colonize other star systems,
or find a unified theory that unlocks all the secrets of existence. But
with the help of science, we can create a world in which all people, and
not just a lucky elite, flourish. And that is what matters most.
Hoboken, New Jersey, 2015
It was in the summer of 1989, during a trip to upstate New York, that
I began to think seriously about the possibility that science, pure
science, might be over. I had flown to Syracuse University to
interview Roger Penrose, a British physicist who was a visiting scholar
there. Before meeting Penrose, I had struggled through galleys of his
dense, difficult book, The Emperor’s New Mind, which to my astonish-
ment became a best-seller several months later, after being praised in
the New York Times Book Review.1 In the book, Penrose cast his eye across
the vast panorama of modern science and found it wanting. This knowl-
edge, Penrose asserted, for all its power and richness, could not possibly
account for the ultimate mystery of existence, human consciousness.
The key to consciousness, Penrose speculated, might be hidden in
the fissure between the two major theories of modern physics: quan-
tum mechanics, which describes electromagnetism and the nuclear
forces, and general relativity, Einstein’s theory of gravity. Many phys-
icists, beginning with Einstein, had tried and failed to fuse quantum
mechanics and general relativity into a single, seamless “unified” the-
ory. In his book, Penrose sketched out what a unified theory might
look like and how it might give rise to thought. His scheme, which
involved exotic quantum and gravitational effects percolating through
the brain, was vague, convoluted, utterly unsupported by evidence
from physics or neuroscience. But if it turned out to be in any sense
right, it would represent a monumental achievement, a theory that in
one stroke would unify physics and solve one of philosophy’s most vex-
ing problems, the link between mind and matter. Penrose’s ambition
was Penrose implying that one day scientists would find The Answer
and thus bring their quest to an end?
Unlike some prominent scientists, who seem to equate tentativeness
with weakness, Penrose actually thinks before he responds, and even
as he responds. “I don’t think we’re close,” he said slowly, squinting out
his office window, “but it doesn’t mean things couldn’t move fast at
some stage.” He cogitated some more. “I guess this is rather suggesting
that there is an answer,” he continued, “although perhaps that’s too
pessimistic.” This final comment stopped me short. What is so pessi-
mistic, I asked, about a truth seeker thinking that the truth is attain-
able? “Solving mysteries is a wonderful thing to do,” Penrose replied.
“And if they were all solved, somehow, that would be rather boring.”
Then he chuckled, as if struck by the oddness of his own words.3
Long after leaving Syracuse, I mulled over Penrose’s remarks. Was
it possible that science could come to an end? Could scientists, in effect,
learn everything there is to know? Could they banish mystery from the
universe? It was hard for me to imagine a world without science, and
not only because my job depended on it. I had become a science writer
in large part because I considered science—pure science, the search for
knowledge for its own sake—to be the noblest and most meaningful
of human endeavors. We are here to figure out why we are here. What
other purpose is worthy of us?
I had not always been so enamored of science. In college, I passed
through a phase during which literary criticism struck me as the most
thrilling of intellectual endeavors. Late one night, however, after
too many cups of coffee, too many hours spent slogging through yet
another interpretation of James Joyce’s Ulysses, I had a crisis of faith.
Very smart people had been arguing for decades over the meaning of
Ulysses. But one of the messages of modern criticism, and of modern
literature, was that all texts are “ironic”: they have multiple meanings,
none of them definitive.4 Oedipus Rex, The Inferno, even the Bible are in
a sense “just kidding,” not to be taken too literally. Arguments over
meaning can never be resolved, since the only true meaning of a text
is the text itself. Of course, this message applied to the critics, too. One
was left with an infinite regress of interpretations, none of which rep-
resented the final word. But everyone still kept arguing! To what end?
For each critic to be more clever, more interesting, than the rest? It all
began to seem pointless.
Although I was an English major, I took at least one course in science
or mathematics every semester. Working on a problem in calculus or
physics represented a pleasant change of pace from messy humanities
assignments; I found great satisfaction in arriving at the correct answer
to a problem. The more frustrated I became with the ironic outlook
of literature and literary criticism, the more I began to appreciate the
crisp, no-nonsense approach of science. Scientists have the ability to
pose questions and resolve them in a way that critics, philosophers, his-
torians cannot. Theories are tested experimentally, compared to real-
ity, and those found wanting are rejected. The power of science cannot
be denied: it has given us computers and jets, vaccines and thermonu-
clear bombs, technologies that, for better or worse, have altered the
course of history. Science, more than any other mode of knowledge—
literary criticism, philosophy, art, religion—yields durable insights
into the nature of things. It gets us somewhere. My mini-epiphany led,
eventually, to my becoming a science writer. It also left me with this
criterion for science: science addresses questions that can be answered,
at least in principle, given a reasonable amount of time and resources.
Before my meeting with Penrose, I had taken it for granted that
science was open-ended, even infinite. The possibility that scientists
might one day find a truth so potent that it would obviate all further
investigations had struck me as wishful thinking at best, or as the kind
of hyperbole required to sell science (and science books) to the masses.
The earnestness, and ambivalence, with which Penrose contemplated
the prospect of a final theory forced me to reassess my own views of
science’s future. Over time, I became obsessed with the issue. What are
the limits of science, if any? Is science infinite, or is it as mortal as we
are? If the latter, is the end in sight? Is it upon us?
After my original conversation with Penrose, I sought out other sci-
entists who were butting their heads against the limits of knowledge:
particle physicists who dreamed of a final theory of matter and energy;
cosmologists trying to understand precisely how and even why our
universe was created; evolutionary biologists seeking to determine
how life began and what laws governed its subsequent unfolding;
neuroscientists probing the processes in the brain that give rise to con-
sciousness; explorers of chaos and complexity, who hoped that with
computers and new mathematical techniques they could revitalize
science. I also spoke to philosophers, including some who allegedly
doubted whether science could ever achieve objective, absolute truths.
I wrote articles about a number of these scientists and philosophers for
Scientific American.
When I first thought about writing a book, I envisioned it as a series
of portraits, warts and all, of the fascinating truth seekers and truth
shunners I have been fortunate enough to interview. I intended to
leave it to readers to decide whose forecasts about the future of science
made sense and whose did not. After all, who really knew what the
ultimate limits of knowledge might be? But gradually, I began to imag-
ine that I knew; I convinced myself that one particular scenario was
more plausible than all the others. I decided to abandon any pretense of
journalistic objectivity and write a book that was overtly judgmental,
argumentative, and personal. While still focusing on individual sci-
entists and philosophers, the book would present my views as well.
That approach, I felt, would be more in keeping with my conviction
that most assertions about the limits of knowledge are, finally, deeply
idiosyncratic.
It has become a truism by now that scientists are not mere
knowledge-acquisition machines; they are guided by emotion and intu-
ition as well as by cold reason and calculation. Scientists are rarely so
human, I have found, so at the mercy of their fears and desires, as when
they are confronting the limits of knowledge. The greatest scientists
want, above all, to discover truths about nature (in addition to acquir-
ing glory, grants, and tenure and improving the lot of humankind);
they want to know. They hope, and trust, that the truth is attainable,
not merely an ideal or asymptote, which they eternally approach. They
also believe, as I do, that the quest for knowledge is by far the noblest
and most meaningful of all human activities.
Scientists who harbor this belief are often accused of arrogance.
Some are arrogant, supremely so. But many others, I have found, are
less arrogant than anxious. These are trying times for truth seekers.
The scientific enterprise is threatened by technophobes, animal-rights
the modern poet to Satan in Milton’s Paradise Lost.5 Just as Satan fought
to assert his individuality by defying the perfection of God, so must
the modern poet engage in an Oedipal struggle to define himself or
herself in relation to Shakespeare, Dante, and other masters. The effort
is ultimately futile, Bloom said, because no poet can hope to approach,
let alone surpass, the perfection of such forebears. Modern poets are all
essentially tragic figures, latecomers.
Modern scientists, too, are latecomers, and their burden is much
heavier than that of poets. Scientists must endure not merely Shake-
speare’s King Lear, but Newton’s laws of motion, Darwin’s theory of nat-
ural selection, and Einstein’s theory of general relativity. These theories
are not merely beautiful; they are also true, empirically true, in a way
that no work of art can be. Most researchers simply concede their inabil-
ity to supersede what Bloom called “the embarrassments of a tradition
grown too wealthy to need anything more.”6 They try to solve what
philosopher of science Thomas Kuhn has patronizingly called “puzzles,”
problems whose solution buttresses the prevailing paradigm. They
settle for refining and applying the brilliant, pioneering discoveries of
their predecessors. They try to measure the mass of quarks more pre-
cisely, or to determine how a given stretch of DNA guides the growth
of the embryonic brain. Others become what Bloom derided as a “mere
rebel, a childish inverter of conventional moral categories.”7 The rebels
denigrate the dominant theories of science as flimsy social fabrications
rather than rigorously tested descriptions of nature.
Bloom’s “strong poets” accept the perfection of their predecessors
and yet strive to transcend it through various subterfuges, including
a subtle misreading of the predecessors’ work; only by so doing can
modern poets break free of the stultifying influence of the past. There
are strong scientists, too, those who are seeking to misread and there-
fore to transcend quantum mechanics or the big bang theory or Dar-
winian evolution. Roger Penrose is a strong scientist. For the most
part, he and others of his ilk have only one option: to pursue science
in a speculative, postempirical mode that I call ironic science. Ironic
science resembles literary criticism in that it offers points of view, opin-
ions, which are, at best, interesting, which provoke further comment.
But it does not converge on the truth. It cannot achieve empirically
In 1989, just a month after my meeting with Roger Penrose in Syra-
cuse, Gustavus Adolphus College in Minnesota held a symposium
with the provocative but misleading title, “The End of Science?” The
meeting’s premise was that belief in science—rather than science itself—
was coming to an end. As one organizer put it, “There is an increas-
ing feeling that science as a unified, universal, objective endeavor is
over.”1 Most of the speakers were philosophers who had challenged the
authority of science in one way or another. The meeting’s great irony
was that one of the scientists who spoke, Gunther Stent, a biologist at
the University of California at Berkeley, had for years promulgated a
much more dramatic scenario than the one posited by the symposium.
Stent had asserted that science itself might be ending, and not because
of the skepticism of a few academic sophists. Quite the contrary. Sci-
ence might be ending because it worked so well.
Stent is hardly a fringe figure. He was a pioneer of molecular biology;
he founded the first department dedicated to that field at Berkeley in the
1950s and performed experiments that helped to illuminate the machin-
ery of genetic transmission. Later, after switching from genetics to the
study of the brain, he was named chairman of the neurobiology depart-
ment of the National Academy of Sciences. Stent is also the most astute
analyst of the limits of science whom I have encountered (and by astute
I mean of course that he articulates my own inchoate premonitions).
In the late 1960s, while Berkeley was racked with student protests, he
wrote an astonishingly prescient book, now long out of print, called The
Coming of the Golden Age: A View of the End of Progress. Published in 1969, it
contended that science—as well as technology, the arts, and all progres-
sive, cumulative enterprises—was coming to an end.2
Most people, Stent acknowledged, consider absurd the notion that
science might soon cease. How can science possibly be nearing
an end when it has been advancing so rapidly throughout this century?
Stent turned this inductive argument on its head. Initially, he granted,
science advances exponentially through a positive feedback effect;
knowledge begets more knowledge, and power begets more power.
Stent credited the American historian Henry Adams with having fore-
seen this aspect of science at the turn of the twentieth century.3
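To make the positive-feedback claim concrete, here is one standard way to formalize it (my gloss, in modern notation; neither Stent nor Adams put it this way): let the stock of knowledge K grow at a rate proportional to the knowledge already accumulated,

\[
\frac{dK}{dt} = rK \quad\Longrightarrow\quad K(t) = K(0)\,e^{rt},
\]

an exponential curve. As long as the growth rate r stays positive, knowledge begets knowledge ever faster, which is Adams’s acceleration in symbols.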
Adams’s law of acceleration, Stent pointed out, has an interesting
corollary. If there are any limits to science, any barriers to further
progress, then science may well be moving at unprecedented speed just
before it crashes into them. When science seems most muscular, tri-
umphant, potent, that may be when it is nearest death. “Indeed, the
dizzy rate at which progress is now proceeding,” Stent wrote in Golden
Age, “makes it seem very likely that progress must come to a stop soon,
perhaps in our lifetime, perhaps in a generation or two.”4
Certain fields of science, Stent argued, are limited simply by the
boundedness of their subject matter. No one would consider human
anatomy or geography, for example, to be infinite endeavors. Chemis-
try, too, is bounded. “[T]hough the total number of possible chemical
compounds is very great and the variety of reactions they can undergo
vast, the goal of chemistry of understanding the principles governing
the behavior of such molecules is, like the goal of geography, clearly
limited.”5 That goal, arguably, was achieved in the 1930s, when the
chemist Linus Pauling showed how all chemical interactions could be
understood in terms of quantum mechanics.6
In his own field of biology, Stent asserted, the discovery of DNA’s
twin-corkscrew structure in 1953 and the subsequent deciphering of the
genetic code had solved the profound problem of how genetic informa-
tion is passed on from one generation to the next. Biologists had only
three major questions left to explore: how life began, how a single fer-
tilized cell develops into a multicellular organism, and how the central
nervous system processes information. When those goals are achieved,
Stent said, the basic task of biology, pure biology, will be completed.
A Trip to Berkeley
We obviously are nowhere near the new Polynesia that Stent envi-
sioned, in part because applied science has not come nearly as far
as Stent had hoped (feared?) when he wrote The Coming of the Golden
Age. But I have come to the conclusion that Stent’s prophecy has, in one
very important sense, already come to pass. Pure science, the quest for
knowledge about what we are and where we came from, has already
entered an era of diminishing returns. By far the greatest barrier to
future progress in pure science is its past success. Researchers have
already mapped out physical reality, ranging from the microrealm of
quarks and electrons to the macrorealm of planets, stars, and galax-
ies. Physicists have shown that all matter is ruled by a few basic forces:
gravity, electromagnetism, and the strong and weak nuclear forces.
Scientists have also stitched their knowledge into an impressive, if
not terribly detailed, narrative of how we came to be. The universe
exploded into existence 15 billion years ago, give or take 5 billion years
(astronomers may never agree on an exact figure), and is still expand-
ing outward. Some 4.5 billion years ago, the detritus of an exploding
star, a supernova, condensed into our solar system. Sometime during
the next few hundred million years, for reasons that may never be
known, single-celled organisms bearing an ingenious molecule called
Applied science will continue for a long time to come. Scientists will
keep developing versatile new materials; faster and more sophisti-
cated computers; genetic-engineering techniques that make us health-
ier, stronger, longer-lived; perhaps even fusion reactors that provide
cheap energy with few environmental side effects (although given the
drastic cutbacks in funding, fusion’s prospects now seem dimmer than
ever). The question is, will these advances in applied science bring
about any surprises, any revolutionary shifts in our basic knowledge?
Will they force scientists to revise the map they have drawn of the uni-
verse’s structure or the narrative they have constructed of our cosmic
creation and history? Probably not. Applied science in this century has
tended to reinforce rather than to challenge the prevailing theoreti-
cal paradigms. Lasers and transistors confirm the power of quantum
mechanics, just as genetic engineering bolsters belief in the DNA-based
model of evolution.
What constitutes a surprise? Einstein’s discovery that time and space,
the I-beams of reality, are made of rubber was a surprise. So was the
observation by astronomers that the universe is expanding, evolv-
ing. Quantum mechanics, which unveiled a probabilistic element, a
Lucretian swerve, at the bottom of things, was an enormous surprise;
God does play dice (Einstein’s disapproval notwithstanding). The later
finding that protons and neutrons are made of smaller particles called
quarks was a much lesser surprise, because it merely extended quantum
theory to a deeper domain; the foundations of physics remained intact.
Learning that we humans were created not de novo by God, but
gradually, by the process of natural selection, was a big surprise. Most
other aspects of human evolution—those concerning where, when,
and how, precisely, Homo sapiens evolved—are details. These details
may be interesting, but they are not likely to be surprising unless they
show that scientists’ basic assumptions about evolution are wrong. We
may learn, say, that our sudden surge in intelligence was catalyzed by
the intervention of alien beings, as in the movie 2001. That would be
a very big surprise. In fact, any proof that life exists—or even once
existed—beyond our little planet would constitute a huge surprise.
Science, and all human thought, would be reborn. Speculation about
the origin of life and its inevitability would be placed on a much more
empirical basis.
But how likely is it that we will discover life elsewhere? In retrospect,
the space programs of both the United States and the USSR represented
elaborate displays of saber rattling rather than the opening of a new
frontier for human knowledge. The prospects for space exploration on
anything more than a trivial level seem increasingly unlikely. We no
longer have the will or the money to indulge in technological muscle
flexing for its own sake. Humans, made of flesh and blood, may some-
day travel to other planets here in our solar system. But unless we find
some way to transcend Einstein’s prohibition against faster-than-light
travel, chances are that we will never even attempt to visit another
star, let alone another galaxy. A spaceship that can travel one million
miles an hour, a velocity at least one order of magnitude greater than
any current technology can attain, would still take almost 3,000 years
to reach our nearest stellar neighbor, Alpha Centauri.11
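The arithmetic behind that figure is easy to verify. Here is a minimal back-of-the-envelope sketch (mine, not from the text), assuming Alpha Centauri lies about 4.37 light-years away and that a light-year is roughly 5.88 trillion miles:

# Back-of-the-envelope check of the "almost 3,000 years" figure.
# Assumed constants (not from the text): distance and miles per light-year.
LIGHT_YEAR_MILES = 5.879e12   # miles in one light-year (approx.)
DISTANCE_LY = 4.37            # distance to Alpha Centauri in light-years (approx.)
SPEED_MPH = 1_000_000         # the hypothetical spaceship's speed, miles per hour
HOURS_PER_YEAR = 24 * 365.25

distance_miles = DISTANCE_LY * LIGHT_YEAR_MILES
travel_years = distance_miles / SPEED_MPH / HOURS_PER_YEAR
print(f"{travel_years:,.0f} years")  # prints about 2,931, i.e., "almost 3,000 years"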
The most dramatic advance in applied science I can imagine is
immortality. Many scientists are now attempting to identify the pre-
cise causes of aging. It is conceivable that if they succeed, scientists may
be able to design versions of Homo sapiens that can live indefinitely.
But immortality, although it would represent a triumph of applied sci-
ence, would not necessarily change our fundamental knowledge of the
universe. We would not have any better idea of why the universe came
to be and of what lies beyond its borders than we do now. Moreover,
evolutionary biologists suggest that immortality may be impossible
to achieve. Natural selection designed us to live long enough to breed
and raise our children. As a result, senescence does not stem from any
single cause or even a suite of causes; it is woven inextricably into the
fabric of our being.12
While it is never safe to say that the future of Physical Science has
no marvels even more astonishing than those of the past, it seems
probable that most of the grand underlying principles have been
firmly established and that further advances are to be sought
chiefly in the rigorous application of these principles to all the
phenomena which come under our notice. It is here that the sci-
ence of measurement shows its importance—where quantitative
results are more to be desired than qualitative work. An eminent
physicist has remarked that the future truths of Physical Science
are to be looked for in the sixth place of decimals.13
story alleges that in the mid-1800s the head of the U.S. Patent Office
quit his job and recommended that the office be shut down because
there would soon be nothing left to invent.
In 1995, Daniel Koshland, editor of the prestigious journal Science,
repeated this story in an introduction to a special section on the future
of science. In the section, leading scientists offered predictions about
what their fields might accomplish over the next 20 years. Koshland,
who, like Gunther Stent, is a biologist at the University of California
at Berkeley, exulted that his prognosticators “clearly do not agree with
that commissioner of patents of yesteryear. Great discoveries with
great import for the future of science are in the offing. That we have
come so far so fast is not an indication that we have saturated the dis-
covery market, but rather that discoveries will come even faster.”17
There were two problems with Koshland’s essay. First, the contribu-
tors to his special section envisioned not “great discoveries” but, for the
most part, rather mundane applications of current knowledge, such as
better methods for designing drugs, improved tests for genetic disor-
ders, more discerning brain scans, and the like. Some predictions were
negative in nature. “Anyone who expects any human-like intelligence
from a computer in the next 50 years is doomed to disappointment,”
proclaimed the physicist and Nobel laureate Philip Anderson.
Second, Koshland’s story about the commissioner of patents was
apocryphal. In 1940, a scholar named Eber Jeffery examined the patent
commissioner anecdote in an article entitled “Nothing Left to Invent,”
published in the Journal of the Patent Office Society.18 Jeffery traced the
story to congressional testimony delivered in 1843 by Henry Ellsworth,
then the commissioner of patents. Ellsworth remarked at one point,
“The advancement of the arts, from year to year, taxes our credulity
and seems to presage the arrival of that period when human improve-
ment must end.”
But Ellsworth, far from recommending that his office be shut down,
asked for extra funds to cope with the flood of inventions he expected in
agriculture, transportation, and communications. Ellsworth did indeed
resign two years later, in 1845, but in his resignation letter he made
no reference to closing the patent office; rather, he expressed pride at
having expanded it. Jeffery concluded that Ellsworth’s statement about
In his 1932 book, The Idea of Progress, the historian J. B. Bury stated:
“Science has been advancing without interruption during the last
three or four hundred years; every new discovery has led to new prob-
lems and new methods of solution, and opened up new fields for explo-
ration. Hitherto men of science have not been compelled to halt, they
have always found means to advance further. But what assurance have
we that they will not come up against impassable barriers?” [Italics in the
original.]19
Bury had demonstrated through his own scholarship that the con-
cept of progress was only a few hundred years old at most. From the
era of the Roman Empire through the Middle Ages, most truth seekers
had a degenerative view of history; the ancient Greeks had achieved
the acme of mathematical and scientific knowledge, and civilization
had gone downhill from there. Those who followed could only try
to recapture some remnant of the wisdom epitomized by Plato and
ones, Spengler declared, society will rebel against science and embrace
religious fundamentalism and other irrational systems of belief. Spen-
gler predicted that the decline of science and the resurgence of irratio-
nality would begin at the end of the second millennium.24
Spengler’s analysis was, if anything, too optimistic. His view of sci-
ence as cyclic implied that science might one day be resurrected and
undergo a new period of discovery. Science is not cyclic, however, but
linear; we can only discover the periodic table and the expansion of
the universe and the structure of DNA once. The biggest obstacle to
the resurrection of science—and especially pure science, the quest for
knowledge about who we are and where we came from—is science’s
past success.
radar during World War II, silicon and laser technology thereafter,
American optimism and industrial hegemony, socialist belief in ratio-
nality as a way of improving the world.” Those conditions have largely
vanished, Kadanoff contended; both physics and science as a whole are
now besieged by environmentalists, animal-rights activists, and others
with an antiscientific outlook. “In recent decades, science has had high
rewards and has been at the center of social interest and concern. We
should not be surprised if this anomaly disappears.”28
Kadanoff, when I spoke to him over the telephone two years later,
sounded even gloomier than he had been in his essay.29 He laid out
his worldview for me with a muffled melancholy, as if he were suf-
fering from an existential head cold. Rather than discussing science’s
social and political problems, as he had in his article in Physics Today,
he focused on another obstacle to scientific progress: science’s past
achievements. The great task of modern science, Kadanoff explained,
has been to show that the world conforms to certain basic physical laws.
“That is an issue which has been explored at least since the Renaissance
and maybe a much longer period of time. For me, that’s a settled issue.
That is, it seems to me that the world is explainable by law.” The most
fundamental laws of nature are embodied in the theory of general rel-
ativity and in the so-called standard model of particle physics, which
describes the behavior of the quantum realm with exquisite precision.
Just a half century ago, Kadanoff recalled, many reputable scientists
still clung to the romantic doctrine of vitalism, which holds that life
springs from some mysterious élan vital that cannot be explained in
terms of physical laws. As a result of the findings of molecular biol-
ogy—beginning with the discovery of the structure of DNA in 1953—
“there are relatively few well-educated people” who admit to belief in
vitalism, Kadanoff said.
Of course, scientists still have much to learn about how the funda-
mental laws generate “the richness of the world as we see it.” Kadanoff
himself is a leader in the field of condensed-matter physics, which stud-
ies the behavior not of individual subatomic particles, but of solids
or liquids. Kadanoff has also been associated with the field of chaos,
which addresses phenomena that unfold in predictably unpredictable
ways. Some proponents of chaos—and of a closely related field called
fears—of scientists that we will find The Answer, a theory that quenches
our curiosity forever. Francis Bacon, one of the founders of modern sci-
ence, expressed his belief in the vast potential of science with the Latin
term plus ultra, “more beyond.”34 But plus ultra does not apply to science
per se, which is a tightly constrained method for examining nature.
Plus ultra applies, rather, to our imaginations. Although our imagina-
tions are constrained by our evolutionary history, they will always be
capable of venturing beyond what we truly know.
Even in the new Polynesia, Gunther Stent suggested, a few per-
sistent souls will keep striving to transcend the received wisdom. Stent
called these truth seekers “Faustian” (a term he borrowed from Oswald
Spengler). I call them strong scientists (a term I coopted from Harold
Bloom’s The Anxiety of Influence). By raising questions that science can-
not answer, strong scientists can continue the quest for knowledge in
the speculative mode that I call ironic science even after empirical sci-
ence—the kind of science that answers questions—has ended.
The poet John Keats coined the term negative capability to describe
the ability of certain great poets to remain “in uncertainties, myster-
ies, doubts, without any irritable reaching after fact and reason.” As
an example of a poet who lacked this gift, Keats singled out his fellow poet Samuel Coleridge, who
“would let go by a fine isolated verisimilitude caught from the pene-
tralium of mystery, from being incapable of remaining content with
half-knowledge.”35 The most important function of ironic science is to
serve as humanity’s negative capability. Ironic science, by raising unan-
swerable questions, reminds us that all our knowledge is half-knowl-
edge; it reminds us of how little we know. But ironic science does not
make any significant contributions to knowledge itself. Ironic science
is thus less akin to science in the traditional sense than to literary criti-
cism—or to philosophy.
Twentieth-century science has given rise to a marvelous paradox.
The same extraordinary progress that has led to predictions that
we may soon know everything that can be known has also nur-
tured doubts that we can know anything for certain. When one theory
so rapidly succeeds another, how can we ever be sure that any the-
ory is true? In 1987 two British physicists, T. Theocharis and M. Psi-
mopoulos, excoriated this skeptical philosophical position in an essay
entitled “Where Science Has Gone Wrong.” Published in the British
journal Nature, the essay blamed the “deep and widespread malaise”
in science on philosophers who had attacked the notion that science
could achieve objective knowledge. The article printed photographs of
four particularly egregious “betrayers of the truth”: Karl Popper, Imre
Lakatos, Thomas Kuhn, and Paul Feyerabend.1
The photographs were grainy, black-and-white shots of the sort that
adorn a lurid exposé about a venerable banker who has been caught
swindling retirees. These, clearly, were intellectual transgressors of the
worst sort. Feyerabend, whom the essayists called “the worst enemy of
science,” was the most wicked-looking of the bunch. Smirking at the
camera over glasses perched on the tip of his nose, he was clearly either
anticipating or relishing the perpetration of some diabolical prank. He
looked like an intellectual version of Loki, the Norse god of mischief.
The main complaint of Theocharis and Psimopoulos was silly.
The skepticism of a few academic philosophers has never represented
a serious threat to the massive, well-funded bureaucracy of science.
Many scientists, particularly would-be revolutionaries, find the ideas
secretary there said that Popper generally worked at his home in Ken-
ley, a suburb south of London, and gave me his number. I called, and
a woman with an imperious, German-accented voice answered. Mrs.
Mew, assistant to “Sir Karl.” Before Sir Karl would see me, I had to send
her a sample of my writings. She gave me a reading list that would
prepare me for my meeting: a dozen or so books by Sir Karl. Even-
tually, after several faxes and telephone calls, she set a date. She also
provided directions to the train station near Sir Karl’s house. When I
asked her for directions from the train station to the house, Mrs. Mew
assured me that all the cab drivers knew where Sir Karl lived. “He’s
quite famous.”
“Sir Karl Popper’s house, please,” I said as I climbed into a cab at
Kenley station. “Who?” the driver replied. Sir Karl Popper? The famous
philosopher? Never heard of him, the driver said. He was familiar
with the street on which Popper lived, however, and we found Pop-
per’s home—a two-story cottage surrounded by scrupulously trimmed
grass and shrubs—with little difficulty.6
A tall, handsome woman dressed in dark pants and shirt and with
short, dark hair answered the door: Mrs. Mew. She was only slightly
less forbidding in person than over the telephone. As she led me into
the house, she told me that Sir Karl was quite tired. He had undergone
a spate of interviews and congratulations brought on by his 90th birth-
day the previous month, and he had been working too hard preparing
an acceptance speech for the Kyoto Prize, known as Japan’s Nobel. I
should expect to speak to him for only an hour at the most.
I was trying to lower my expectations when Popper made his
entrance. He was stooped, equipped with a hearing aid, and surpris-
ingly short; I had assumed that the author of such autocratic prose
would be tall. Yet he was as kinetic as a bantamweight boxer. He bran-
dished an article I had written for Scientific American about how quan-
tum mechanics was compelling some physicists to abandon the view
of physics as a wholly objective enterprise.7 “I don’t believe a word of
it,” he declared in an Austrian-accented growl. “Subjectivism” has no
place in physics, quantum or otherwise. “Physics,” he exclaimed, grab-
bing a book from a table and slamming it down, “is that!” (This from
a man who cowrote a book espousing dualism, the notion that ideas
The light in the kitchen was acquiring a ruddy hue when Mrs. Mew
stuck her head in the door and informed us that we had been talking
for more than three hours. How much longer, she inquired a bit pee-
vishly, did we expect to continue? Perhaps she had better call me a cab?
I looked at Popper, who had broken into a bad-boy grin but did appear
to be drooping.
I slipped in a final question: Why in his autobiography did Popper
say that he was the happiest philosopher he knew? “Most philosophers
are really deeply depressed,” he replied, “because they can’t produce
anything worthwhile.” Looking pleased with himself, Popper glanced
over at Mrs. Mew, who wore an expression of horror. Popper’s own
smile abruptly faded. “It would be better not to write that,” he said,
turning back to me. “I have enough enemies, and I better not answer
them in this way.” He stewed a moment and added, “But it is so.”
I asked Mrs. Mew if I could have a copy of the speech Popper was
going to deliver at the Kyoto Prize ceremony in Japan. “No, not now,”
she said curtly. “Why not?” Popper inquired. “Karl,” she replied, “I’ve
been typing the second lecture nonstop, and I’m a bit . . .” She sighed.
“You know what I mean?” Anyway, she added, she did not have a final
version. “What about an uncorrected version?” Popper asked. Mrs.
Mew stalked off.
She returned and shoved a copy of Popper’s lecture at me. “Have
you got a copy of Propensities?” Popper asked her.11 She pursed her lips
and stomped into the room next door, while Popper explained the
book’s theme to me. The lesson of quantum mechanics and even of
classical physics, Popper said, is that nothing is determined, nothing
is certain, nothing is completely predictable; there are only propensi-
ties for certain things to occur. “For example,” Popper added, “in this
moment there is a certain propensity that Mrs. Mew may find a copy
of my book.”
“Oh, please!” Mrs. Mew exclaimed from the next room. She
returned, no longer making any attempt to hide her annoyance. “Sir
Karl, Karl, you have given away the last copy of Propensities. Why do
you do that?”
“The last copy was given away in your presence,” he declared.
“I don’t think so,” she retorted. “Who was it?”
“Look,” Thomas Kuhn said. The word was weighted with weariness,
as if Kuhn was resigned to the fact that I would misinterpret him,
but he was still going to try—no doubt in vain—to make his point.
Kuhn uttered the word often. “Look,” he said again. He leaned his gan-
gly frame and long face forward, and his big lower lip, which ordinarily
curled up amiably at the corners, sagged. “For Christ’s sake, if I had
my choice of having written the book or not having written it, I would
choose to have written it. But there have certainly been aspects involv-
ing considerable upset about the response to it.”
“The book” was The Structure of Scientific Revolutions, which may be
the most influential treatise ever written on how science does (or does
not) proceed. It is notable for having spawned the trendy term para-
digm. It also fomented the now trite idea that personalities and politics
play a large role in science. The book’s most profound argument was
less obvious: scientists can never truly understand the real world or
even each other.15
Given this theme, one might think that Kuhn would have expected
his own message to be at least partially misunderstood. But when I
interviewed Kuhn in his office at the Massachusetts Institute of Tech-
nology (of all places) some three decades after the publication of
Structure, he seemed to be deeply pained by the breadth of misunder-
standing of his book. He was particularly upset by claims that he had
described science as irrational. “If they had said ‘arational’ I wouldn’t
have minded at all,” he said with no trace of a smile.
Kuhn’s fear of compounding the confusion over his work had made
him a bit press shy. When I first telephoned him to ask for an interview,
he turned me down. “Look. I think not,” he said. He revealed that Sci-
entific American, my employer, had given Structure “the worst review
I can remember.” (The squib was indeed dismissive; it called Kuhn’s
argument “much ado about very little.” But what did Kuhn expect
from a magazine that celebrates science?16) Pointing out that I had not
been at the magazine then—the review ran in 1964—I begged him to
reconsider. Kuhn finally, reluctantly, agreed.
When we at last sat down together in his office, Kuhn expressed
nominal discomfort at the notion of delving into the roots of his
thought. “One is not one’s own historian, let alone one’s own psycho-
analyst,” he warned me. He nonetheless traced his view of science to
an epiphany he experienced in 1947, when he was working toward
a doctorate in physics at Harvard. While reading Aristotle’s Physics,
Kuhn had become astonished at how “wrong” it was. How could some-
one who wrote so brilliantly on so many topics be so misguided when
it came to physics?
Kuhn was pondering this mystery, staring out his dormitory win-
dow (“I can still see the vines and the shade two-thirds of the way
down”), when suddenly Aristotle “made sense.” Kuhn realized that
Aristotle invested basic concepts with different meanings than did
modern physicists. Aristotle used the term motion, for example, to refer
not just to change in position but to change in general—the reddening
of the sun as well as its descent toward the horizon. Aristotle’s physics,
understood on its own terms, was simply different from, rather than
inferior to, Newtonian physics.
Kuhn left physics for philosophy, and he struggled for 15 years to
transform his epiphany into the theory set forth in The Structure of Sci-
entific Revolutions. The keystone of his model was the concept of a par-
adigm. Paradigm, pre-Kuhn, referred merely to an example that serves
an educational purpose; amo, amas, amat, for instance, is a paradigm for
teaching conjugations in Latin. Kuhn used the term to refer to a col-
lection of procedures or ideas that instruct scientists, implicitly, what to
believe and how to work. Most scientists never question the paradigm.
They solve puzzles, problems whose solutions reinforce and extend the
scope of the paradigm rather than challenge it. Kuhn called this “mop-
ping up,” or “normal science.” There are always anomalies, phenom-
ena that the paradigm cannot account for or that even contradict it.
Anomalies are often ignored, but if they accumulate they may trigger
a revolution (also called a paradigm shift, although not originally by
Kuhn), in which scientists abandon the old paradigm for a new one.
Denying the view of science as a continual building process, Kuhn
held that a revolution is a destructive as well as a creative act. The pro-
poser of a new paradigm stands on the shoulders of giants (to borrow
Newton’s phrase) and then bashes them over the head. He or she is
often young or new to the field, that is, not fully indoctrinated. Most
scientists yield to a new paradigm reluctantly. They often do not under-
stand it, and they have no objective rules by which to judge it. Dif-
ferent paradigms have no common standard for comparison; they are
“incommensurable,” to use Kuhn’s term. Proponents of different par-
adigms can argue forever without resolving their differences because
they invest basic terms—motion, particle, space, time—with different
meanings. The conversion of scientists is thus both a subjective and
a political process. It may involve sudden, intuitive understanding—
like that finally achieved by Kuhn as he pondered Aristotle. Yet scien-
tists often adopt a paradigm simply because it is backed by others with
strong reputations or by a majority of the community.
Kuhn’s view diverged from Popper’s in several important respects.
Kuhn (like other critics of Popper) argued that falsification is no more
possible than verification; each process implies the existence of abso-
lute standards of evidence, which transcend any individual paradigm.
A new paradigm may solve puzzles better than the old one does, and it
may yield more practical applications. “But you cannot simply describe
the other science as false,” Kuhn said. Just because modern physics has
spawned computers, nuclear power, and CD players does not mean it
is truer, in an absolute sense, than Aristotle’s physics. Similarly, Kuhn
denied that science is constantly approaching the truth. At the end of
Structure he asserted that science, like life on earth, does not evolve
toward anything, but only away from something.
Kuhn described himself to me as a “post-Darwinian Kantian.” Kant,
too, believed that without some sort of a priori paradigm the mind cannot
impose order on sensory experience. But whereas Kant and Darwin each
thought that we are all born with more or less the same innate paradigm,
Kuhn argued that our paradigms keep changing as our culture changes.
“Different groups, and the same group at different times,” Kuhn told me,
“can have different experiences and therefore in some sense live in dif-
ferent worlds.” Obviously all humans share some responses to experi-
ence, simply because of their shared biological heritage, Kuhn added. But
whatever is universal in human experience, whatever transcends culture
and history, is also “ineffable,” beyond the reach of language. Language,
Kuhn said, “is not a universal tool. It’s not the case that you can say any-
thing in one language that you can say in another.”
But isn’t mathematics a kind of universal language? I asked. Not
really, Kuhn replied, since it has no meaning; it consists of syntacti-
cal rules without any semantic content. “There are perfectly good
reasons why mathematics can be considered a language, but there is
a very good reason why it isn’t.” I objected that although Kuhn’s view
of the limits of language might apply to certain fields with a metaphys-
ical cast, such as quantum mechanics, it did not hold in all cases. For
example, the claim of a few biologists that AIDS is not caused by the
so-called AIDS virus is either right or wrong; language is not the cru-
cial issue. Kuhn shook his head. “Whenever you get two people inter-
preting the same data in different ways,” he said, “that’s metaphysics.”
So, were his own ideas true or not? “Look,” Kuhn responded with
even more weariness than usual; obviously he had heard this question
many times before. “I think this way of talking and thinking that I am
engaged in opens up a range of possibilities that can be investigated.
But it, like any scientific construct, has to be evaluated simply for its
utility—for what you can do with it.”
But then Kuhn, having set forth his bleak view of the limits of sci-
ence and indeed of all human discourse, proceeded to complain about
the many ways in which his book had been misinterpreted and mis-
used, especially by admirers. “I’ve often said I’m much fonder of my
critics than my fans.” He recalled students approaching him to say,
“Oh, thank you, Mr. Kuhn, for telling us about paradigms. Now that we
know about them we can get rid of them.” He insisted that he did not
believe that science was entirely political, a reflection of the prevailing
power structure. “In retrospect, I begin to see why this book fed into
that, but boy, was it not meant to, and boy, does it not mean to.”
His protests were to no avail. He had a painful memory of sitting in
on a seminar and trying to explain that the concepts of truth and fal-
sity are perfectly valid, and even necessary—within a paradigm. “The
professor finally looked at me and said, ‘Look, you don’t know how
radical this book is.’” Kuhn was also upset to find that he had become
the patron saint of all would-be scientific revolutionaries. “I get a lot of
letters saying, ‘I’ve just read your book, and it’s transformed my life.
I’m trying to start a revolution. Please help me,’ and accompanied by a
book-length manuscript.”
Kuhn declared that, although his book was not intended to be
pro-science, he is pro-science. It is the rigidity and discipline of science,
Kuhn said, that makes it so effective at problem solving. Moreover, sci-
ence produces “the greatest and most original bursts of creativity” of
any human enterprise. Kuhn conceded that he was partly to blame for
some of the antiscience interpretations of his model. After all, in Struc-
ture he did call scientists committed to a paradigm “addicts”; he also
compared them to the brainwashed characters in Orwell’s 1984.17 Kuhn
insisted that he did not mean to be condescending by using such terms
as mopping up or puzzle solving to describe what most scientists do. “It
was meant to be descriptive.” He ruminated a bit. “Maybe I should
have said more about the glories that result from puzzle solving, but I
thought I was doing that.”
As for the word paradigm, Kuhn conceded that it had become “hope-
lessly overused” and was “out of control.” Like a virus, the word spread
beyond the history and philosophy of science and infected the intel-
lectual community at large, where it came to signify virtually any
dominant idea. A 1974 New Yorker cartoon captured the phenomenon.
“Dynamite, Mr. Gerston!” gushed a woman to a smug-looking man.
“You’re the first person I ever heard use ‘paradigm’ in real life.” The
low point came during the Bush administration, when White House
officials introduced an economic plan called “the New Paradigm”
(which was really just warmed-over Reaganomics).18
Kuhn admitted, again, that the fault was partly his, since in Structure
he had not defined paradigm as crisply as he might have. But he continued,
whatever his critics or admirers made of his book, to deny that science
could ever arrive at absolute truth. “The
one thing I think you shouldn’t say is that now we’ve found out what
the world is really like,” Kuhn said. “Because that’s not what I think the
game is about.”
Kuhn has tried, throughout his career, to remain true to that origi-
nal epiphany he experienced in his dormitory at Harvard. During that
moment Kuhn saw—he knew!—that reality is ultimately unknow-
able; any attempt to describe it obscures as much as it illuminates. But
Kuhn’s insight forced him to take the untenable position that because
all scientific theories fall short of absolute, mystical truth, they are all
equally untrue; because we cannot discover The Answer, we cannot
find any answers. His mysticism led him toward a position as absurd
as that of the literary sophists who argue that all texts—from The Tem-
pest to an ad for a new brand of vodka—are equally meaningless, or
meaningful.
At the end of Structure, Kuhn briefly raised the question of why
some fields of science converge on a paradigm while others, artlike,
remain in a state of constant flux. The answer, he implied, was a matter
of choice; scientists within certain fields were simply unwilling to com-
mit themselves to a single paradigm. I suspect Kuhn avoided pursuing
this issue because he could not abide the answer. Some fields, such as
economics and other social sciences, never adhere for long to a single
paradigm because they address questions for which no paradigm will
suffice. Fields that achieve consensus, or normalcy, to borrow Kuhn’s
term, do so because their paradigms correspond to something real in
nature, something true.
Finding Feyerabend
To say that the ideas of Popper and Kuhn are flawed is not to say that
they cannot serve as useful tools for analyzing science. Kuhn’s nor-
mal-science model accurately describes what most scientists now do:
fill in details, solve relatively trivial puzzles that buttress rather than
challenge the prevailing paradigm. Popper’s falsification criterion can
help to distinguish between empirical science and ironic science. But
each philosopher, by pushing his ideas too far, by taking them too seri-
ously, ends up in an absurd, self-contradicting position.
How does a skeptic avoid becoming Karl Popper, pounding the
table and shouting that he is not dogmatic? Or Thomas Kuhn, trying
to communicate precisely what he means when he talks about the
impossibility of true communication? There is only one way. One must
embrace—even revel in—paradox, contradiction, rhetorical excess.
One must acknowledge that skepticism is a necessary but impossible
exercise. One must become Paul Feyerabend.
Feyerabend’s first and still most influential book, Against Method, was
published in 1975 and has been translated into 16 languages.20 It argues
that philosophy cannot provide a methodology or rationale for sci-
ence, since there is no rationale to explain. By analyzing such scientific
milestones as Galileo’s trial before the Vatican and the development of
quantum mechanics, Feyerabend sought to show that there is no logic
to science; scientists create and adhere to scientific theories for what
are ultimately subjective and even irrational reasons. According to Fey-
erabend, scientists can and must do whatever is necessary to advance.
He summed up his anticredo with the phrase “anything goes.” Feyer-
abend once derided Popper’s critical rationalism as “a tiny puff of hot
air in the positivistic teacup.”21 He agreed with Kuhn on many points,
in particular on the incommensurability of scientific theories, but he
argued that science is rarely as normal as Kuhn contended. Feyerabend
also accused Kuhn—quite rightly—of avoiding the implications of his
own view; he remarked, to Kuhn’s dismay, that Kuhn’s sociopolitical
model of scientific change applied nicely to organized crime.22
Feyerabend’s penchant for posturing made it all too easy to reduce
him to a grab bag of outrageous sound bites. He once likened science
to voodoo, witchcraft, and astrology. He defended the right of reli-
gious fundamentalists to have their version of creation taught along-
side Darwin’s theory of evolution in public schools.23 His entry in the
1991 Who’s Who in America ended with the following remark: “My life
has been the result of accidents, not of goals and principles. My intel-
lectual work forms only an insignificant part of it. Love and personal
understanding are much more important. Leading intellectuals with
their zeal for objectivity kill these personal elements. They are crimi-
nals, not the liberators of mankind.”
Feyerabend’s Dadaesque rhetoric concealed a deadly serious point:
the human compulsion to find absolute truths, however noble, too
often culminates in tyranny. Feyerabend attacked science not because
he truly believed that it had no more claim to truth than did astrol-
ogy. Quite the contrary. Feyerabend attacked science because he recog-
nized—and was horrified by—its power, its potential to stamp out the
diversity of human thought and culture. He objected to scientific cer-
tainty for moral and political, rather than for epistemological, reasons.
At the end of his 1987 book, Farewell to Reason, Feyerabend revealed
just how deep his relativism ran. He addressed an issue that “has
enraged many readers and disappointed many friends—my refusal to
condemn even an extreme fascism and my suggestion that it should be
allowed to thrive.”24 The point was particularly touchy because Feyer-
abend had served in the German army during World War II. It would
be all too easy, Feyerabend argued, to condemn Nazism, but it was that
very moral self-righteousness and certitude that made Nazism possible.
Feyerabend espoused a kind of democratic relativism, in which science is
just one tradition among many, none entitled to suppress the others.
“They don’t beat each other down.” People have a perfect right to reject
science if they so choose, Feyerabend said.
Did that mean fundamentalist Christians also had the right to have
creationism taught alongside the theory of evolution in schools? “I
think that ‘right’ business is a tricky business,” Feyerabend responded,
“because once somebody has a right they can hit somebody else over
the head with that right.” He paused. Ideally, he said, children should
be exposed to as many different modes of thought as possible so they
can choose freely among them. He shifted uneasily in his seat. Sensing
an opening, I pointed out that he had not really answered my question
about creationism. Feyerabend scowled. “This is a dried-out business.
It doesn’t interest me very much. Fundamentalism is not the old rich
Christian tradition.” But American fundamentalists are very powerful,
I persisted, and they use the kinds of things Feyerabend says to attack
the theory of evolution. “But science has been used to say some people
have a low intelligence quotient,” he retorted. “So everything is used
in many different ways. Science can be used to beat down all sorts of
other people.”
But shouldn’t educators point out that scientific theories are differ-
ent from religious myths? I asked. “Of course. I would say that science
is very popular nowadays,” he replied. “But then I have also to let the
other side get in as much evidence as possible, because the other side is
always given a short presentation.” Anyway, so-called primitive people
often know far more about their environments, such as the properties
of local plants, than do so-called experts. “So to say these people are
ignorant is just—this is ignorance!”
I unloaded my self-refuting question: Wasn’t there something con-
tradictory about the way he used all the techniques of Western ratio-
nalism to attack Western rationalism? Feyerabend refused to take the
bait. “Well, they are just tools, and tools can be used in any way you see
fit,” he said mildly. “They can’t blame me that I use them.” Feyerabend
seemed bored, distracted. Although he would not admit it, I suspected
he was tired of being a radical relativist, of defending the colorful belief
systems of the world—astrology, creationism, even fascism!—against
the bully of rationalism.
Toward the end of our conversation, Feyerabend slipped into an
unexpectedly mystical register, speculating that ultimate reality
emanates downward into the world we know: the emanations flow “down
and become more and more material. And down, down at the last
emanation, you can see a little trace of it and guess at it.”
Surprised by this outburst, I asked Feyerabend if he was religious.
“I’m not sure,” he replied. He had been raised as a Roman Catholic,
and then he became a “vigorous” atheist. “And now my philosophy has
taken a completely different shape. It can’t just be that the universe—
Boom!—you know, and develops. It just doesn’t make any sense.” Of
course, many scientists and philosophers have argued that it is point-
less to speculate about the sense, or meaning, or purpose of the uni-
verse. “But people ask it, so why not? So all this will be stuffed into this
book, and the question of abundance will come out of it, and it will
take me a long time.”
As I prepared to leave, Feyerabend asked how my wife’s birthday
party had gone the previous night. (I had told Feyerabend about my
wife’s birthday in the course of arranging my meeting with him.) Fine,
I replied. “You’re not drifting apart?” Feyerabend persisted, scrutiniz-
ing me. “It wasn’t the last birthday you will ever celebrate with her?”
Borrini glared at him, aghast. “Why should it be?”
“I don’t know!” Feyerabend exclaimed, throwing his hands up.
“Because it happens!” He turned back to me. “How long have you been
married?” Three years, I said. “Ah, just the beginning. The bad things
will come. Just wait 10 years.” Now you really sound like a philosopher,
I said. Feyerabend laughed. He confessed that he had been married and
divorced three times before he met Borrini. “Now for the first time I
am so happy to be married.”
I said that I had heard his marriage to Borrini had made him more
easygoing. “Well, this may be two things,” Feyerabend replied. “Get-
ting older you don’t have the energy not to be easygoing. And she’s
certainly made a big difference also.” He beamed at Borrini, and she
beamed back.
Turning to Borrini, I mentioned the photograph that her husband had
sent of himself washing dishes, along with the note saying that perform-
ing this chore for his wife was the most important thing he did now.
Borrini snorted. “Once in a blue moon,” she said.
“What do you mean, once in a blue moon!” Feyerabend bellowed.
“Every day I wash dishes!”
In the course of our little chat in his airy apartment, with car honks and bus growls and
the odor of greasy Chinese food drifting through the window, he had
pronounced the impending doom of not just one but two major modes
of human knowledge: philosophy and science.
There are no more dedicated, not to say obsessive, seekers of The
Answer than modern particle physicists. They want to show that
all the complicated things of the world are really just manifesta-
tions of one thing. An essence. A force. A loop of energy wriggling
in a 10-dimensional hyperspace. A sociobiologist might suspect that a
genetic influence lurks behind this reductionist impulse, since it seems
to have motivated thinkers since the dawn of civilization. God, after
all, was conceived by the same impulse.
Einstein was the first great modern Answer seeker. He spent his
later years trying to find a theory that would unify quantum mechan-
ics with his theory of gravity, general relativity. To him, the purpose
of finding such a theory was to determine whether the universe was
inevitable or, as he put it, “whether God had any choice in creating the
world.” But Einstein, no doubt believing that science made life mean-
ingful, also suggested that no theory could be truly final. He once said
of his own theory of relativity, “[It] will have to yield to another one,
for reasons which at present we do not yet surmise. I believe that the
process of deepening the theory has no limits.”1
Most of Einstein’s contemporaries saw his efforts to unify physics
as a product of his dotage and quasi-religious tendencies. But in the
1970s, the dream of unification was revived by several advances. First,
physicists showed that just as electricity and magnetism are aspects of
a single force, so electromagnetism and the weak nuclear force (which
governs certain kinds of nuclear decay) are manifestations of an under-
lying “electroweak” force. Researchers also developed a theory for the
strong nuclear force, which grips protons and neutrons together in the
nuclei of atoms. The theory, called quantum chromodynamics, posits
that protons and neutrons are composed of even more elementary par-
ticles, called quarks. Together, the electroweak theory and quantum
chromodynamics constitute the standard model of particle physics.
Emboldened by this success, workers forged far beyond the standard
model in search of a deeper theory. Their guide was a mathematical
property called symmetry, which allows the elements of a system to
undergo transformations—analogous to rotation or reflection in a mir-
ror—without being fundamentally altered. Symmetry became the sine
qua non of particle physics. In search of theories with deeper symme-
tries, theorists began to jump to higher dimensions. Just as an astro-
naut rising above the two-dimensional plane of the earth can more
directly apprehend its global symmetry, so can theorists discern the
more subtle symmetries underlying particle interactions by viewing
them from a higher-dimensional standpoint.
One of the most persistent problems in particle physics stems from
the definition of particles as points. In the same way that division by
zero yields an infinite and hence meaningless result, so do calculations
involving pointlike particles often culminate in nonsense. In construct-
ing the standard model, physicists were able to sweep these problems
under the rug. But Einsteinian gravity, with its distortions of space and
time, seemed to demand an even more radical approach.
In the early 1980s, many physicists came to believe that superstring
theory represented that approach. The theory replaced pointlike parti-
cles with minute loops that eliminated the absurdities arising in calcu-
lations. Just as vibrations of violin strings give rise to different notes, so
could the vibrations of these strings generate all the forces and particles
of the physical realm. Superstrings could also banish one of the bug-
bears of particle physics: the possibility that there is no ultimate foun-
dation for physical reality but only an endless succession of smaller and
smaller particles, nestled inside each other like Russian dolls. According
to superstring theory, there is a fundamental scale beyond which all
questions concerning space and time become meaningless.
The theory suffers from several problems, however. First, there seem
to be countless possible versions, and theorists have no way of knowing
which version, if any, describes our universe. Worse, the strings
themselves would be far too small to detect with any conceivable
accelerator.
Glashow’s Gloom
By the time I visited him at Harvard, Sheldon Glashow had spent a
lifetime in particle physics, and his glasses were as thick as telescope
lenses. Yet one could
still detect, beneath the patina of the Harvard professor, the tough,
fast-talking New York kid that Glashow had once been.
Glashow was devastated by the death of the supercollider. Physics,
he emphasized, cannot proceed on pure thought alone, in spite of what
superstring enthusiasts said. Superstring theory “hasn’t gotten any-
where despite all the hoopla,” he grumbled. More than a century ago
some physicists tried to invent unified theories; they failed, of course,
because they knew nothing about electrons or protons or neutrons or
quantum mechanics. “Now, are we so arrogant as to believe we have
all the experimental information we need right now to construct that
holy grail of theoretical physics, a unified theory? I think not. I think
certainly there are surprises that natural phenomena have in store for
us, and we’re not going to find them unless we look.”
But isn’t there much to do in physics besides unification? “Of course
there is,” Glashow replied sharply. Astrophysics, condensed-matter
physics, and even subfields within particle physics are not concerned
with unification. “Physics is a very large house filled with interest-
ing puzzles,” he said (using Thomas Kuhn’s term for problems whose
solutions merely reinforce the prevailing paradigm). “Of course there
will be things done. The question is whether we’re getting anywhere
toward this holy grail.” Glashow believed that physicists would con-
tinue to seek “some little interesting tidbit someplace. Something
amusing, something new. But it’s not the same as the quest as I was
fortunate enough to know it in my professional lifetime.”
Glashow could not muster much optimism concerning the pros-
pects for his field, given the politics of science funding. He had to admit
that particle physics was not terribly useful. “Nobody can make the
claim that this kind of research is going to produce a practical device.
That would just be a lie. And given the attitude of governments today,
the type of research that I fancy doesn’t have a very good future.”
In that case, could the standard model be the final theory of parti-
cle physics? Glashow shook his head. “Too many questions left unan-
swered,” he said. Of course, he added, the standard model would be
final in a practical sense if physicists could not forge beyond it with
more powerful accelerators. “There will be the standard theory, and
that will be the last chapter in the elementary physics story.” It is always
possible that someone will find a way of generating extremely high
energies relatively cheaply. “Maybe someday it will get done. Someday,
someday, someday.”
The question is, Glashow continued, what will particle physicists do
while they are waiting for that someday to come? “I guess the answer
is going to be that the [particle-physics] establishment is going to do
boring things, futzing around until something becomes available. But
they would never admit that it’s boring. Nobody will say, ‘I do boring
things.’” Of course, as the field becomes less interesting and funding
dwindles, it will cease attracting new talent. Glashow noted that sev-
eral promising graduate students had just left Harvard for Wall Street.
“Goldman Sachs in particular discovered that theoretical physicists are
very good people to have.”
Particle Aesthetics
In the early 1990s, when superstring theory was still relatively novel,
several physicists wrote popular books about its implications. In Theo-
ries of Everything, the British physicist John Barrow argued that Gödel’s
incompleteness theorem undermines the very notion of a complete the-
ory of nature.6 Gödel established that any moderately complex system
of axioms inevitably raises questions that cannot be answered by the
axioms. The implication is that any theory will always have loose ends.
Barrow also pointed out that a unified theory of particle physics would
not really be a theory of everything, but only a theory of all particles
and forces. The theory would have little or nothing to say about phe-
nomena that make our lives meaningful, such as love or beauty.
But Barrow and other analysts at least granted that physicists might
achieve a unified theory. That assumption was challenged in The End
of Physics, written by physicist-turned-journalist David Lindley.7 Phys-
icists working on superstring theory, Lindley contended, were no
longer doing physics because their theories could never be validated
by experiments, but only by subjective criteria, such as elegance and
beauty. Particle physics, Lindley concluded, was in danger of becoming
a branch of aesthetics.
The history of physics supports Lindley’s prognosis. Previous the-
ories of physics, however seemingly bizarre, won acceptance among
physicists and even the public not because they made sense; rather, they
offered predictions that were borne out—often in dramatic fashion—
by observations. After all, even Newton’s version of gravity violates
common sense. How can one thing tug at another across vast spans of
space? John Maddox, the editor of Nature, once argued that if Newton
submitted his theory of gravity to a journal today, it would almost cer-
tainly be rejected as too preposterous to believe.8 Newton’s formalism
nonetheless provided an astonishingly accurate means of calculating
the orbits of planets; it was too effective to deny.
Einstein’s theory of general relativity, with its malleable space and
time, is even more bizarre. But it became widely accepted as true after
observations confirmed his prediction about how gravity would bend
light passing around the sun. Likewise, physicists do not believe quan-
tum mechanics because it explains the world, but because it predicts
the outcome of experiments with almost miraculous accuracy. Theo-
rists kept predicting new particles and other phenomena, and experi-
ments kept bearing out those predictions.
Superstring theory is on shaky ground indeed if it must rely on aes-
thetic judgments. The most influential aesthetic principle in science
was set forth by the fourteenth-century British philosopher William of
Occam. He argued that the best explanation of a given phenomenon is
generally the simplest, the one with the fewest assumptions. This prin-
ciple, called Occam’s razor, was the downfall of the Ptolemaic model
of the heavens. To square observations with an earth-centered cosmos,
the astronomer Ptolemy was forced to argue that the planets traced
elaborately spiraling epicycles around the earth. By assuming that the
sun and not the earth was at the center, later astronomers eventually
could dispense with epicycles and replace them with much simpler
elliptical orbits.
Ptolemy’s epicycles seem utterly reasonable when compared to the
undetected—and undetectable—extra dimensions required by super-
string theory. No matter how much superstring theorists assure us of
the theory’s mathematical elegance, the metaphysical baggage it carries
with it will prevent it from winning the kind of acceptance—among
either physicists or laypeople—that general relativity or the standard
model of particle physics has.
Let’s give superstring believers the benefit of the doubt, if only for a
moment. Let’s assume that some future Witten, or even Witten himself,
finds an infinitely pliable geometry that accurately describes the behav-
ior of all known forces and particles. In what sense will such a theory
explain the world? I have talked to many physicists about superstrings,
and none has been able to help me understand what, exactly, a super-
string is. As far as I can tell, it is neither matter nor energy; it is some
kind of mathematical ur-stuff that generates matter and energy and
space and time but does not itself correspond to anything in our world.
Good science writers will no doubt make readers think they under-
stand such a theory. Dennis Overbye, in Lonely Hearts of the Cosmos, one
of the best books ever written on cosmology, imagines God as a cosmic
rocker, bringing the universe into being by flailing on his 10-dimen-
sional superstring guitar.9 (One wonders, is God improvising, or fol-
lowing a score?) The true meaning of superstring theory, of course, is
embedded in the theory’s austere mathematics. I once heard a profes-
sor of literature liken James Joyce’s gobbledygookian tome Finnegans
Wake to the gargoyles atop the cathedral of Notre Dame, built solely for
God’s amusement. I suspect that if Witten ever finds the theory he so
desires, only he—and God, perhaps—will truly appreciate its beauty.
With his crab-apple cheeks, vaguely Asian eyes, and silver hair still
tinged with red, Steven Weinberg resembles a large, dignified elf.
He would make an excellent Oberon, king of the fairies in A Midsum-
mer Night’s Dream. And like a fairy king, Weinberg has demonstrated
a powerful affinity for the mysteries of nature, an ability to discern
subtle patterns within the froth of data streaming from particle accel-
erators. In his 1993 book, Dreams of a Final Theory, he managed to make
reductionism sound romantic. Particle physics is the culmination of
an epic quest, “the ancient search for those principles that cannot be
explained in terms of deeper principles.”10 The force compelling sci-
ence, he pointed out, is the simple question, why? That question has
led physicists deeper and deeper into the heart of nature. Eventually,
he suggested, this chain of whys must converge on a final theory. Our
conversation turned to the many-worlds interpretation of quantum
mechanics, which Weinberg favored; he mused that there might “be
another parallel time track where John Wilkes Booth missed Lincoln
and . . .” Weinberg paused. “I sort of hope that whole problem will
go away, but it may not. That may be just the way the world is.”
Is it too much to ask for a final theory to make the world intelligible?
Before I could finish the question, Weinberg was nodding. “Yes, it’s
too much to ask,” he replied. The proper language of science is math-
ematics, he reminded me. A final theory “has to make the universe
appear plausible and somehow or other recognizably logical to people
who are trained in that language of mathematics, but it may be a long
time before that makes sense to other people.” Nor will a final theory
provide humanity with any guidance in conducting its affairs. “We’ve
learned to absolutely disentangle value judgments from truth judg-
ments,” Weinberg said. “I don’t see us going back to reconnect them.”
Science “can certainly help you find out what the consequences of your
actions are, but it can’t tell you what consequences you ought to wish
for. And that seems to me to be an absolute distinction.”
Weinberg had little patience for those who suggest that a final the-
ory will reveal the purpose of the universe, or “the mind of God,” as
Stephen Hawking once put it. Quite the contrary. Weinberg hoped that
a final theory would eliminate the wishful thinking, mysticism, and
superstition that pervades much of human thought, even among phys-
icists. “As long as we don’t know the fundamental rules,” he said, “we
can hope that we’ll find something like a concern for human beings,
say, or some guiding divine plan built into the fundamental rules. But
when we find out that the fundamental rules of quantum mechanics
and some symmetry principles are very impersonal and cold, then it’ll
have a very demystifying effect. At least that’s what I’d like to see.”
His face hardening, Weinberg continued: “I certainly would not dis-
agree with people who say that physics in my style or the Newtonian
style has produced a certain disenchantment. But if that’s the way the
world is, it’s better we find out. I see it as part of the growing up of our
species, just like the child finding out there is no tooth fairy. It’s better
to find out there is no tooth fairy, even though a world with tooth fair-
ies in it is somehow more delightful.”
Weinberg was well aware that many people hungered for a different
message from physics. In fact, earlier that day he had heard that Paul
Davies, a physicist whose books discern spiritual meaning in modern
physics, had won a million-dollar prize from the Templeton Foundation
for his efforts to reconcile science and religion.
No More Surprises
Even if society musters the will and the money to build larger
accelerators and thereby keep particle physics alive—at least
temporarily—how likely is it that physicists will learn something as
truly new and surprising as quantum mechanics itself? John Wheeler
thought they might. Wheeler liked to illustrate his vision of physics
with a “surprise” version of the game of 20 questions, in which no word
is chosen in advance; each player must merely answer in a way that
remains consistent with some word, and so the answer crystallizes
through the questioning itself. Wheeler said he was seized by “the idea
that the whole show can be reduced to something similar in a broad
sense to this game of 20 questions.”
Wheeler has condensed these ideas into a phrase that resembles a
Zen koan: “the it from bit.” In one of his free-form essays, Wheeler
unpacked the phrase as follows: “every it—every particle, every field
of force, even the spacetime continuum itself—derives its function, its
meaning, its very existence entirely—even if in some contexts indi-
rectly—from the apparatus-elicited answers to yes-or-no questions,
binary choices, bits.”18
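Wheeler’s surprise version of 20 questions is concrete enough to
simulate. The following toy program is my own illustration, not
Wheeler’s or Horgan’s: no word is chosen in advance, each answer is
picked only so that some word remains possible, and yet a definite word
emerges from the questioning—“it from bit” in miniature.

    import random

    # Toy model of Wheeler's "surprise" twenty questions (illustrative
    # only): no word is chosen beforehand; every answer merely keeps
    # some word possible, yet a definite word emerges by the end.

    WORDS = {"cloud", "stone", "river", "sparrow", "violin"}

    QUESTIONS = [
        ("Is it alive?",            {"sparrow"}),
        ("Is it found in the sky?", {"cloud", "sparrow"}),
        ("Is it man-made?",         {"violin"}),
        ("Could you hold it?",      {"stone", "violin", "sparrow"}),
    ]

    def play(seed=0):
        rng = random.Random(seed)
        candidates = set(WORDS)  # at the start, any word will do
        for question, yes_words in QUESTIONS:
            yes = candidates & yes_words
            no = candidates - yes_words
            # Answer whichever way leaves at least one word possible;
            # when both answers would, the choice is an arbitrary "bit."
            chosen = rng.choice([s for s in (yes, no) if s])
            print(question, "Yes." if chosen is yes else "No.")
            candidates = chosen
        print("The word 'was':", rng.choice(sorted(candidates)))

    play()

The point of the sketch is only that the final word is as much a
product of the questions asked as of anything fixed in advance.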
Inspired by Wheeler, an ever-larger group of researchers—includ-
ing computer scientists, astronomers, mathematicians, and biologists,
as well as physicists—began probing the links between information
theory and physics in the late 1980s. Some superstring theorists even
joined in, trying to knit together quantum field theory, black holes, and
information theory with a skein of strings. Wheeler acknowledged that
these ideas were still raw, not yet ready for rigorous testing. He and his
fellow explorers were still “trying to get the lay of the land” and “learn-
ing how to express things that we already know” in the language of
information theory. The effort may lead to a dead end, Wheeler said,
or to a powerful new vision of reality, of “the whole show.”
Wheeler emphasized that science has many mysteries left to explain.
“We live still in the childhood of mankind,” he said. “All these horizons
are beginning to light up in our day: molecular biology, DNA, cosmol-
ogy. We’re just children looking for answers.” He served up another
aphorism: “As the island of our knowledge grows, so does the shore of
our ignorance.” Yet he was also convinced that humans would some-
day find The Answer. In search of a quotation that expressed his faith, he
jumped up and pulled down a book on information theory and physics
to which he had contributed an essay. After flipping it open, he read:
“Surely someday, we can believe, we will grasp the central idea of it
all as so simple, so beautiful, so compelling that we will all say to each
other, ‘Oh, how could it have been otherwise! How could we all have
been so blind for so long!’”19 Wheeler looked up from the book; his
expression was beatific. “I don’t know whether it will be one year or
a decade, but I think we can and will understand. That’s the central
thing I would like to stand for. We can and will understand.”
But Wheeler’s vision raises an awkward question: if observership
creates reality, who or what was asking the questions when the
universe was born? And what sustained the universe for the billions
of years before we came to be? He nonetheless bravely offers us a
lovely, chilling paradox: at the heart of everything is a question, not an
answer. When we peer down into the deepest recesses of matter or at
the farthest edge of the universe, we see, finally, our own puzzled faces
looking back at us.
The physicist David Bohm, whom I visited at his home shortly before
his death, denied that physics could ever lay bare the ultimate essence
of reality. “But why are the forces of nature there? The forces of nature
are then taken as the essence. The atoms weren’t the essence. Why
should these forces be?”
The belief of modern physicists in a final theory could only be
self-fulfilling, Bohm said. “If you do that you’ll keep away from really
questioning deeply.” He noted that “if you have fish in a tank and you
put a glass barrier in there, the fish keep away from it. And then if you
take away the glass barrier they never cross the barrier and they think
the whole world is that.” He chuckled drily. “So your thought that this
is the end could be the barrier to looking further.”
Bohm reiterated that “we’re not ever going to get a final essence
which isn’t also the appearance of something.” But wasn’t that frus-
trating? I asked. “Well, it depends what you want. You’re frustrated if
you want to get it all. On the other hand, scientists are going to be frus-
trated if they get the final answer and then have nothing to do except
be technicians, you see.” He uttered his dry laugh. So, I said, damned
if you do and damned if you don’t. “Well, I think you have to look at it
differently, you see. One of the reasons for doing science is the exten-
sion of perception and not of current knowledge. We are constantly
coming into contact with reality better and better.”
Science, Bohm continued, is sure to evolve in totally unexpected
ways. He expressed the hope that future scientists would be less depen-
dent on mathematics for modeling reality and would draw on new
sources of metaphor and analogy. “We have an assumption now that’s
getting stronger and stronger that mathematics is the only way to deal
with reality,” Bohm said. “Because it’s worked so well for a while we’ve
assumed that it has to be that way.”
Like many other scientific visionaries, Bohm expected that science
and art would someday merge. “This division of art and science is tem-
porary,” he observed. “It didn’t exist in the past, and there’s no rea-
son why it should go on in the future.” Just as art consists not simply
of works of art but of an “attitude, the artistic spirit,” so does science
consist not in the accumulation of knowledge but in the creation of
fresh modes of perception. “The ability to perceive or think differ-
ently is more important than the knowledge gained,” Bohm explained.
There was something poignant about Bohm’s hope that science might
one day be infused with this playful, artistic spirit.
But Bohm himself, both in his writings and in person, was anything
but playful. For him, this was not a game, this truth seeking; it was a
dreadful, impossible, but necessary task. Bohm was desperate to know,
to discover the secret of everything, either through physics or through
meditation, through mystical knowledge. And yet he insisted that real-
ity was unknowable—because, I believe, he was repelled by the thought
of finality. He recognized that any truth, no matter how initially won-
drous, eventually ossifies into a dead, inanimate thing that does not
reveal the absolute but conceals it. Bohm craved not truth but revelation,
perpetual revelation. As a result, he was doomed to perpetual doubt.
I finally said good-bye to Bohm and his wife and departed. Outside,
a light rain was falling. I walked up the path to the street and glanced
back at the Bohms’ house, a modest, whitewashed cottage on a street
of modest, whitewashed cottages. Bohm died of a heart attack two
months later.24
Richard Feynman foresaw this fate for physics. Once the fundamental
laws are known, he predicted, “the philosophers who are
always on the outside making stupid remarks will be able to close in,
because we cannot push them away by saying, ‘If you were right we
would be able to guess all the laws,’ because when the laws are all there
they will have an explanation for them. . . . There will be a degener-
ation of ideas, just like the degeneration that great explorers feel is
occurring when tourists begin moving in on a new territory.”26
Feynman’s vision was uncannily on target. He erred only in thinking
that it would be millennia, not decades, before the philosophers closed
in. I saw the future of physics in 1992 when I attended a symposium at
Columbia University in which philosophers and physicists discussed
the meaning of quantum mechanics.27 The symposium demonstrated
that more than 60 years after quantum mechanics was invented, its
meaning remained, to put it politely, elusive. In the lectures, one could
hear echoes of Wheeler’s it from bit approach, and Bohm’s pilot-wave
hypothesis, and the many-worlds model favored by Steven Weinberg
and others. But for the most part each speaker seemed to have arrived
at a private understanding of quantum mechanics, couched in idiosyn-
cratic language; no one seemed to understand, let alone agree with,
anyone else. The bickering brought to mind what Bohr once said of
quantum mechanics: “If you think you understand it, that only shows
you don’t know the first thing about it.”28
Of course, the apparent disarray could have stemmed entirely from
my own ignorance. But when I revealed my impression of confusion
and dissonance to one of the attendees, he reassured me that my per-
ception was accurate. “It’s a mess,” he said of the conference (and, by
implication, the whole business of interpreting quantum mechanics).
The problem, he noted, arose because, for the most part, the differ-
ent interpretations of quantum mechanics cannot be empirically dis-
tinguished from one another; philosophers and physicists favor one
interpretation over another for aesthetic and philosophical—that is,
subjective—reasons.
This is the fate of physics. The vast majority of physicists, those
employed in industry and even academia, will continue to apply the
knowledge they already have in hand—inventing more versatile lasers
and superconductors and computing devices—without worrying about
any underlying philosophical issues.29 A few diehards dedicated to
the quest for ultimate answers will keep practicing physics in a
speculative, ironic mode, their theories judged by subjective criteria
rather than by experiment.
In 1990 I traveled to a remote resort in the mountains of northern
Sweden to attend a symposium entitled “The Birth and Early Evolu-
tion of Our Universe.” When I arrived, I found that about 30 particle
physicists and astronomers from around the world—the United States,
Europe, the Soviet Union, and Japan—were there. I had come to the
meeting in part to meet Stephen Hawking. The compelling symbol-
ism of his plight—powerful brain in a paralyzed body—had helped to
make him one of the best-known scientists in the world.
Hawking’s condition, when I met him, was worse than I had
expected. He sat in a semifetal position, hunch shouldered, slack jawed,
and painfully frail, his head tipped to one side, in a wheelchair loaded
with batteries and computers. As far as I could tell, he could move only
his left forefinger. With it he laboriously selected letters, words, or
sentences from a menu on his computer screen. A voice synthesizer
uttered the words in an incongruously deep, authoritative voice—rem-
iniscent of the cyborg hero of Robocop. Hawking seemed, for the most
part, more amused than distressed by his plight. His purple-lipped,
Mick Jagger mouth often curled up at one corner in a kind of smirk.
Hawking was scheduled to give a talk on quantum cosmology, a
field he had helped to create. Quantum cosmology assumes that at
very small scales, quantum uncertainty causes not merely matter and
energy, but the very fabric of space and time, to flicker between dif-
ferent states. These space-time fluctuations might give rise to worm-
holes, which could link one region of space-time with another one very
far away, or to “baby universes.” Hawking had stored the hour-long
lecture in his computer beforehand; his speech synthesizer delivered
it while he sat by, nearly motionless. My first impulse was to dismiss
such talk as fantasy, and to assume that clear writing could dispel any
confusion in readers. That is the job of the science writer, after all.
But sometimes the clearest science writing is the most dishonest.
My initial reaction to Hawking and others at the conference was, to
some extent, appropriate. Much of modern cosmology, particularly
those aspects inspired by unified theories of particle physics and other
esoteric ideas, is preposterous. Or, rather, it is ironic science, science
that is not experimentally testable or resolvable even in principle and
therefore is not science in the strict sense at all. Its primary function is
to keep us awestruck before the mystery of the cosmos.
The irony is that Hawking was the first prominent physicist of his
generation to predict that physics might soon achieve a complete, uni-
fied theory of nature and thus bring about its own demise. He offered
up this prophecy in 1980, just after he had been named the Lucasian Pro-
fessor of Mathematics at the University of Cambridge; Newton had held
this chair some 300 years earlier. (Few observers noted that at the end
of his speech, titled “Is the End of Theoretical Physics in Sight?,” Hawk-
ing suggested that computers, given their accelerated evolution, might
soon surpass their human creators in intelligence and achieve the final
theory on their own.)2 Hawking spelled out his prophecy in more detail
in A Brief History of Time. The attainment of a final theory, he declared in
the book’s closing sentence, might help us to “know the mind of God.”3
The phrase suggested that a final theory would bequeath us a mystical
revelation in whose glow we could bask for the rest of time.
But earlier in the book, in discussing what he called the no-bound-
ary proposal, Hawking offered a very different view of what a final
theory might accomplish. The no-boundary proposal addressed the
age-old questions: What was there before the big bang? What exists
beyond the borders of our universe? According to the no-boundary
proposal, the entire history of the universe, all of space and all of time,
forms a kind of four-dimensional sphere: space-time. Talking about
the beginning or end of the universe is thus as meaningless as talking
about the beginning or end of a sphere. Physics, too, Hawking conjec-
tured, might form a perfect, seamless whole after it is unified; there
might be only one fully consistent unified theory capable of generating
space-time as we know it. God might not have had any choice in creat-
ing the universe.
At the symposium in Sweden, the Russian cosmologist Andrei Linde showed
himself to be a performer, quick with his hands and quick on his feet.
He pulled a box of wooden matches out of his pocket and
placed two of them, forming a cross, on his hand. While Linde kept his
hand—at least seemingly—perfectly still, the top match trembled and
hopped as if jerked by an invisible string. The trick maddened his col-
leagues. Before long, matches and curses were flying every which way
as a dozen or so of the world’s most prominent cosmologists sought
in vain to duplicate Linde’s feat. When they demanded to know how
Linde did it, he smiled and growled, “Ees kvantum fluctuation.”
Linde is even more renowned for his theoretical sleights of hand. In
the early 1980s he helped to win acceptance for one of the more extrav-
agant ideas to emerge from particle physics: inflation. The invention of
inflation (the term discovery is not appropriate here) is generally cred-
ited to Alan Guth of MIT, but Linde helped to refine the theory and
win its acceptance. Guth and Linde proposed that very early in the his-
tory of our universe—at T = 10⁻⁴³ seconds, to be precise, when the cos-
mos was allegedly much smaller than a proton—gravity might briefly
have become a repulsive rather than an attractive force. As a result,
the universe supposedly passed through a tremendous, exponential
growth spurt before settling down to its current, much more leisurely
rate of expansion.
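A gloss on that number, mine rather than the physicists’: 10⁻⁴³ seconds
is essentially the Planck time, the scale at which quantum effects are
thought to engulf gravity itself. It follows from combining three
fundamental constants:

    t_P = \sqrt{\hbar G / c^5} \approx 5.4 \times 10^{-44}\ \mathrm{s},

which is on the order of 10⁻⁴³ seconds.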
Guth and Linde based their idea on untested—and almost certainly
untestable—unified theories of particle physics. Cosmologists nonethe-
less fell in love with inflation, because it could explain some nagging
problems raised by the standard big bang model. First, why does the
universe appear more or less the same in all directions? The answer is
that just as blowing up a balloon smooths out its wrinkles, so would
the exponential expansion of the universe render it relatively smooth.
Conversely, inflation also explains why the universe is not a completely
homogeneous consommé of radiation, but contains lumps of matter in
the form of stars and galaxies. Quantum mechanics suggests that even
empty space is brimming with energy; this energy constantly fluctu-
ates, like waves dancing on the surface of a windblown lake. Accord-
ing to inflation, the peaks generated by these quantum fluctuations in
the very early universe could have become large enough, after being
inflated, to serve as the gravitational seeds from which stars and galax-
ies would grow.
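To put rough numbers on this growth spurt (textbook conventions, not
Horgan’s): during inflation the scale factor of the universe grows
exponentially,

    a(t) \propto e^{Ht},

and in the standard picture some 60 “e-folds” suffice, stretching space
by a factor of e^{60} \approx 10^{26}—enough to iron out initial
wrinkles while magnifying quantum fluctuations into the seeds of
galaxies.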
Linde went on to argue that inflation could result from much more generic quantum processes
first proposed by John Wheeler. According to Wheeler, if one had a
microscope trillions upon trillions of times more powerful than any
in existence, one would see space and time fluctuating wildly because
of quantum uncertainty. Linde contended that what Wheeler called
“space-time foam” would inevitably give rise to the conditions neces-
sary for inflation.
Inflation is a self-exhausting process; the expansion of space quickly
causes the energy driving inflation to dissipate. But Linde argued that
once inflation begins, it will always continue somewhere—again,
because of quantum uncertainty. (A handy feature, this quantum
uncertainty.) New universes are sprouting into existence at this very
moment. Some immediately collapse back into themselves. Others
inflate so fast that matter never has a chance to coalesce. And some,
like ours, settle down to an expansion leisurely enough for gravity to
mold matter into galaxies, stars, and planets.
Linde sometimes compared this supercosmos to an infinite sea.
Viewed up close, the sea conveys an impression of dynamism and
change, of waves heaving up and down. We humans, because we live
within one of these heaving waves, think that the entire universe is
expanding. But if we could rise above the sea’s surface, we would real-
ize that our expanding cosmos is just a tiny, insignificant, local feature
of an infinite, eternal ocean. In a way, Linde contended, the old steady-
state theory of Fred Hoyle (which I discuss later in this chapter) was
right; when seen from a God-like perspective, the supercosmos exhib-
its a kind of equilibrium.
Linde was hardly the first physicist to posit the existence of other uni-
verses. But whereas most theorists treat other universes as mathemati-
cal abstractions, and slightly embarrassing ones at that, Linde delighted
in speculating about their properties. Elaborating on his self-reproduc-
ing universe theory, for example, Linde borrowed from the language
of genetics. Each universe created by inflation gives birth to still other
“baby universes.” Some of these offspring retain the “genes” of their
predecessors and evolve into similar universes with similar laws of
nature—and perhaps similar inhabitants. Invoking the anthropic princi-
ple, Linde proposed that some cosmic version of natural selection might
favor universes that give rise to life and observers. The point of
physics, for Linde, was not to arrive at some final resting place
but to keep moving, to keep skating. Linde feared the thought of final-
ity. His self-reproducing universe theory makes sense in this light: if
the universe is infinite and eternal, then so is science, the quest for
knowledge. But even a physics confined to this universe, Linde sug-
gested, cannot be close to resolution. “For example, you do not include
consciousness. Physics studies matter, and consciousness is not matter.”
Linde agreed with John Wheeler that reality must be in some sense a
participatory phenomenon. “Before you make measurement, there is
no universe, nothing you can call objective reality,” Linde said.
Linde, like Wheeler and David Bohm, seemed to be tormented
by mystical yearnings that physics alone could never satisfy. “There
is some limit to rational knowledge,” he said. “One way to study the
irrational is to jump into it and just meditate. The other is to study the
boundaries of the irrational with the tools of rationality.” Linde chose
the latter route, because physics offered a way “not to say total non-
sense” about the workings of the world. But sometimes, he confessed,
“I’m depressed when I think I will die like a physicist.”
But what if cosmology has passed its peak, in the sense that it is
unlikely to deliver any more empirical surprises as profound as the
big bang theory itself? Cosmologists are lucky to know anything with
certainty, according to Howard Georgi, a particle physicist at Harvard
University. “I think you have to regard cosmology as a historical sci-
ence, like evolutionary biology,” said Georgi, a cherub-faced man with
a cheerfully sardonic manner. “You’re trying to look at the present-day
universe and extrapolate back, which is an interesting but dangerous
thing to do, because there may have been accidents that had big effects.
And they try very hard to understand what kinds of things can be acci-
dental and what features are robust. But I find it difficult to understand
those arguments well enough to really be convinced.” Georgi sug-
gested that cosmologists might acquire some much-needed humility
by reading the books of the evolutionary biologist Stephen Jay Gould,
who discusses the potential pitfalls of reconstructing the past based on
our knowledge of the present (see Chapter 5).
Georgi chuckled, perhaps recognizing the improbability of any
cosmologist’s taking his advice. Like Sheldon Glashow, whose office
was just down the hall, Georgi had once been a leader in the search
for a unified theory of physics. And like Glashow, Georgi eventually
denounced superstring theory and other candidates for a unified the-
ory as nontestable and thus nonscientific. The fate of particle physics
and cosmology, Georgi noted, are to some extent intertwined. Cos-
mologists hope that a unified theory will help them understand the
universe’s origins more clearly. Conversely, some particle physicists
hope that in lieu of terrestrial experiments, they can find confirma-
tion of their theories by peering through telescopes at the edge of the
universe. “That strikes me as a bit of a push,” Georgi remarked mildly,
“but what can I say?” When I asked him about quantum cosmology,
the field explored by Hawking, Linde, and others, Georgi smiled mis-
chievously. “A simple particle physicist like myself has trouble in those
uncharted waters,” he said. He found papers on quantum cosmology,
with all their talk of wormholes and time travel and baby universes,
“quite amusing. It’s like reading Genesis.” As for inflation, it is “a won-
derful sort of scientific myth, which is at least as good as any other cre-
ation myth I’ve ever heard.”10
There will always be those who reject not only inflation, baby uni-
verses, and other highly speculative hypotheses, but the big bang
theory itself. The dean of big bang bashers is Fred Hoyle, a British
astronomer and physicist. A selective reading of Hoyle’s résumé might
make him appear the quintessential insider. He studied at the Univer-
sity of Cambridge under the Nobel laureate Paul Dirac, who correctly
predicted the existence of antimatter. Hoyle became a lecturer at Cam-
bridge in 1945, and in the 1950s he helped to show how stars forge the
heavy elements of which planets and people are made. Hoyle founded
the prestigious Institute of Astronomy at Cambridge in the early 1960s
and served as its first director. For these and other achievements he was
knighted in 1972. Yes, Hoyle is Sir Fred. Yet Hoyle’s stubborn refusal to
accept the big bang theory—and his adherence to fringe ideas in other
fields—made him an outlaw in the field he had helped to create.11
Since 1988, Hoyle has lived in a high-rise apartment building in
Bournemouth, a town on England’s southern coast. When I visited
him there, his wife, Barbara, let me in and took me into the living
room, where I found Hoyle sitting in a chair watching a cricket match
on television. He rose and shook my hand without taking his eyes off
the match. His wife, gently admonishing him for his rudeness, went
over to the TV and turned it off. Only then did Hoyle, as if waking
from a spell, turn his full attention to me.
I expected Hoyle to be odd and embittered, but he was, for the most
part, all too amiable. With his pug nose, jutting jaw, and penchant for
slang—colleagues were “chaps” and a bogus theory a “bust flush”—he
exuded a kind of blue-collar integrity and geniality. He seemed to revel
in the role of outsider. “When I was young the old regarded me as an
outrageous young fellow, and now that I’m old the young regard me
as an outrageous old fellow.” He chuckled. “I should say that nothing
would embarrass me more than if I were to be viewed as someone who
is repeating what he has been saying year after year,” as many astron-
omers do. “What I would be worried about is somebody coming along
and saying, ‘What you’ve been saying is technically not sound.’ That
would worry me.” (Actually, Hoyle has been accused of both repeti-
tiveness and technical errors.12)
Hoyle had a knack for sounding reasonable—for example, when
arguing that the seeds of life must have come to our planet from
outer space. The spontaneous generation of life on the earth, Hoyle
once remarked, would have been as likely as the assemblage of a 747
aircraft by a tornado passing through a junkyard. Elaborating on this
point during our interview, Hoyle pointed out that asteroid impacts
rendered the earth uninhabitable until at least 3.8 billion years ago
and that cellular life had almost certainly appeared by 3.7 billion years
ago. If one thought of the entire 4.5-billion-year history of the planet
as a 24-hour day, Hoyle elaborated, then life appeared in about half
an hour. “You’ve got to discover DNA; you’ve got to make thousands
of enzymes in that half an hour,” he explained. “And you’ve got to do
it in a very hostile situation. So I find when you put all this together it
doesn’t add up to a very attractive situation.” As Hoyle spoke, I found
myself nodding in agreement. Yes, of course life could not have origi-
nated here. What could be more obvious? Only later did I realize that
according to Hoyle’s timetable, apes were transmogrified into humans
some 20 seconds ago, and modern civilization sprang into existence in
less than 1/10 second. Improbable, perhaps, but it happened.
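Hoyle's clock arithmetic is easy to check. Below is a minimal sketch in Python; the round figures (4.5 billion years for the earth, a 100-million-year window for life's origin, roughly a million years for human emergence, five thousand for civilization) are assumptions taken from the passage above, not precise dates.

```python
# Scale spans of years onto Hoyle's 24-hour "day" of earth history.
EARTH_AGE_YEARS = 4.5e9
DAY_SECONDS = 24 * 60 * 60

def on_the_clock(years):
    """Seconds a span of years occupies on the 24-hour clock."""
    return years / EARTH_AGE_YEARS * DAY_SECONDS

print(on_the_clock(0.1e9) / 60)  # ~32 min: habitable earth to cellular life
print(on_the_clock(1e6))         # ~19 s: apes to humans, "some 20 seconds ago"
print(on_the_clock(5e3))         # ~0.1 s: all of recorded civilization
```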
Hoyle had first started thinking seriously about the origin of the
universe shortly after World War II, during long discussions with two
other physicists, Thomas Gold and Hermann Bondi. “Bondi had a
relative somewhere—he seemed to have relatives everywhere—and
one sent him a case of rum,” Hoyle recalled. While imbibing Bondi’s
liquor, the three physicists turned to a perennial puzzle of the young
and intoxicated: How did we come to be?
The finding that all galaxies in the cosmos are receding from one
another had already convinced many astronomers that the universe
had exploded into being at a specific time in the past and was still
expanding. Hoyle’s fundamental objection to this model was philo-
sophical. It did not make sense to talk about the creation of the universe
unless one already had space and time for the universe to be created in.
“You lose the universality of the laws of physics,” Hoyle explained to
which had always served him so well, became less creative than reac-
tionary. He degenerated into what Harold Bloom derided as a “mere
rebel,” although he still dreamed of what might have been.
Hoyle also seemed to suffer from another problem. The task of the
scientist is to find patterns in nature. There is always the danger that
one will see patterns where there are none. Hoyle, in the latter part
of his career, seemed to have succumbed to this pitfall. He saw pat-
terns—or, rather, conspiracies—both in the structure of the cosmos
and among those scientists who rejected his radical views. Hoyle’s
mind-set is most evident in his views on biology. Since the early 1970s
he has argued that the universe is pervaded by viruses, bacteria, and
other organisms. (Hoyle first broached this possibility in 1957 in The
Black Cloud, which remains the best known of his many science fiction
novels.) These space-faring microbes supposedly provided the seeds for
life on earth and spurred evolution thereafter; natural selection played
little or no role in creating the diversity of life.14 Hoyle has also asserted
that epidemics of influenza, whooping cough, and other diseases are
triggered when the earth passes through clouds of pathogens.
Discussing the biomedical establishment’s continued belief in the
more conventional, person-to-person mode of disease transmission,
Hoyle glowered. "They don't look at those data and say, 'Well, it's
wrong,' and stop teaching it. They just go on doling out the same rub-
bish. And that’s why if you go to the hospital and there’s something
wrong with you, you’ll be lucky if they cure it.” But if space is swarm-
ing with organisms, I asked, why haven’t they been detected? Oh, but
they probably were, Hoyle assured me. He suspected that U.S. exper-
iments on high-altitude balloons and other platforms had turned up
evidence of life in space in the 1960s, but that officials had hushed it up.
Why? Perhaps for reasons related to national security, Hoyle suggested,
or because the results contradicted received wisdom. “Science today is
locked into paradigms,” he intoned solemnly. “Every avenue is blocked
by beliefs that are wrong, and if you try to get anything published by
a journal today, you will run against a paradigm and the editors will
turn it down.”
Hoyle emphasized that, contrary to certain reports, he did not
believe the AIDS virus came from outer space. It “is such a strange
the universe. To derive the Hubble constant, one must measure both
the redshifts of galaxies and their distances from the earth. The former
measurement is straightforward, but the latter is horrendously compli-
cated. Astronomers cannot infer a galaxy's distance from its apparent
brightness; a galaxy that looks bright might be nearby, or it might
simply be intrinsically luminous. Some astronomers
insist that the universe is 10 billion years old or even younger; others
are equally sure that it cannot be less than 20 billion years old.15
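The link between the Hubble constant and these dueling age estimates is simple to illustrate: for a constant expansion rate, the age of the universe is roughly the reciprocal of the Hubble constant. The sketch below ignores deceleration and every other refinement, so it is a crude approximation rather than how cosmologists actually compute ages.

```python
# Rough age of the universe, t ~ 1/H0, for disputed values of H0.
KM_PER_MPC = 3.086e19        # kilometers in a megaparsec
SECONDS_PER_GYR = 3.156e16   # seconds in a billion years

def hubble_age_gyr(h0):
    """Age in billions of years for H0 given in km/s per megaparsec."""
    return KM_PER_MPC / h0 / SECONDS_PER_GYR

print(hubble_age_gyr(100))  # ~9.8 Gyr: the "10 billion years or younger" camp
print(hubble_age_gyr(50))   # ~19.6 Gyr: the "not less than 20 billion" camp
```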
The debate over the Hubble constant offers an obvious lesson: even
when performing a seemingly straightforward calculation, cosmologists
must make various assumptions that can influence their results; they
must interpret their data, just as evolutionary biologists and historians
do. One should thus take with a large grain of salt any claims based on
high precision (such as Schramm’s assertion that nucleosynthesis calcu-
lations agree with theoretical predictions to five decimal points).
More detailed observations of our cosmos will not necessarily resolve
questions about the Hubble constant or other issues. Consider: the most
mysterious of all stars is our own sun. No one really knows, for exam-
ple, what causes sunspots or why their numbers wax and wane over
periods of roughly a decade. Our ability to describe the universe with
simple, elegant models stems in large part from our lack of data, our
ignorance. The more clearly we can see the universe in all its glorious
detail, the more difficult it will be for us to explain with a simple theory
how it came to be that way. Students of human history are well aware of
this paradox, but cosmologists may have a hard time accepting it.
This sun principle suggests that many of the more exotic suppo-
sitions of cosmology are due for a fall. As recently as the early 1970s
black holes were still considered theoretical curiosities, not to be taken
seriously. (Einstein himself thought black holes were “a blemish to
be removed from his theory by a better mathematical formulation,”
according to Freeman Dyson.16) Gradually, as a result of the prosely-
tizing of John Wheeler and others, they have come to be accepted as
real objects. Many theorists are now convinced that almost all galax-
ies, including our own, harbor gigantic black holes at their cores. The
reason for this acceptance is that no one can imagine a better way to
explain the violent swirling of matter at the center of galaxies.
No other field of science is as burdened by its past as is evolutionary
biology. It reeks of what the literary critic Harold Bloom called
the anxiety of influence. The discipline of evolutionary biology
can be defined to a large degree as the ongoing attempt of Darwin’s
intellectual descendants to come to terms with his overwhelming
influence. Darwin based his theory of natural selection, the central
component of his vision, on two observations. First, plants and ani-
mals usually produce more offspring than their environment can sus-
tain. (Darwin borrowed this idea from the British economist Thomas
Malthus.) Second, these offspring differ slightly from their parents and
from each other. Darwin concluded that each organism, in its struggle
to survive long enough to reproduce, competes either directly or indi-
rectly with others of its species. Chance plays a role in the survival of
any individual organism, but nature will favor, or select, those organ-
isms whose variations make them slightly more fit, that is, more likely
to survive long enough to reproduce and pass on those adaptive varia-
tions to their offspring.
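Darwin's mechanism is almost algorithmic, and a toy simulation makes the two observations concrete. Everything in the sketch below is an illustrative assumption (a single numeric trait standing in for fitness, arbitrary population and mutation sizes); it is not drawn from Darwin or from any particular biological model.

```python
import random

random.seed(0)
# A population of organisms, each reduced to one heritable trait value.
population = [random.gauss(0.0, 0.1) for _ in range(100)]

for generation in range(50):
    # Observation 1: more offspring than the environment can sustain;
    # Observation 2: each differs slightly from its parent.
    offspring = [p + random.gauss(0.0, 0.05) for p in population for _ in range(3)]
    # Selection: only the fittest (here, largest-trait) hundred survive.
    population = sorted(offspring, reverse=True)[:100]

print(sum(population) / len(population))  # mean trait rises generation by generation
```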
Darwin could only guess what gives rise to the all-important vari-
ations between generations. On the Origin of Species, first published in
1859, mentioned a proposal set forth by the French biologist Jean-Bap-
tiste Lamarck, that organisms could pass on not only inherited but also
acquired characteristics to their heirs. For example, the constant cran-
ing of a giraffe to reach leaves high in a tree would alter its sperm or
egg so that its offspring would be born with longer necks. But Darwin
was clearly uncomfortable with the idea that adaptation is self-directed.
hands to make a point, they quivered slightly. It was the tremor not of
a nervous man, but of a finely tuned, high-performance competitor in
the war of ideas: Darwin’s greyhound.
As in his books, Dawkins in person exuded a supreme self-assur-
ance. His statements often seemed to have an implied preamble: “As
any fool can see . . .” An unapologetic atheist, Dawkins announced
that he was not the sort of scientist who thought science and religion
addressed separate issues and thus could easily coexist. Most religions,
he contended, hold that God is responsible for the design and purpose
evident in life. Dawkins was determined to stamp out this point of
view. “All purpose comes ultimately from natural selection,” he said.
“This is the credo that I want to put forward.”
Dawkins then spent some 45 minutes setting forth his ultrareduc-
tionist version of evolution. He suggested that we think of genes as
little bits of software that have only one goal: to make more copies of
themselves. Carnations, cheetahs, and all living things are just elab-
orate vehicles that these “copy-me programs” have created to help
them reproduce. Culture, too, is based on copy-me programs, which
Dawkins called memes. Dawkins asked us to imagine a book with the
message: Believe this book and make your children believe it or when
you die you will all go to a very unpleasant place called hell. “That’s a
very effective piece of copy-me code. Nobody is foolish enough to just
accept the injunction, ‘Believe this and tell your children to believe it.’
You have to be a little more subtle and dress it up in some more elabo-
rate way. And of course we know what I’m talking about.” Of course.
Christianity, like all religions, is an extremely successful chain letter.
What could make more sense?
Dawkins then fielded questions from the audience, a motley assort-
ment of journalists, educators, book editors, and other quasi-intellectu-
als. One listener was John Perry Barlow, a former Whole Earth hippie
and occasional lyricist for the Grateful Dead who had mutated into a
New Age cyber-prophet. Barlow, a bearish man with a red bandanna
tied around his throat, asked Dawkins a long question having some-
thing to do with where information really exists.
Dawkins’s eyes narrowed, and his nostrils flared ever so slightly as
they caught the scent of woolly-headedness. Sorry, he said, but he did
not understand the question. Barlow spoke for another minute or so. “I
feel you are trying to get at something which interests you but doesn’t
interest me,” Dawkins said and scanned the room for another ques-
tioner. Suddenly, the room seemed several degrees chillier.
Later, during a discussion about extraterrestrial life, Dawkins set
forth his belief that natural selection is a cosmic principle; wherever
life is found, natural selection has been at work. He cautioned that life
cannot be too common in the universe, because thus far we have found
no evidence of life on other planets in the solar system or elsewhere
in the cosmos. Barlow bravely broke in to suggest that our inability
to detect alien life-forms may stem from our perceptual inadequacies.
“We don’t know who discovered water,” Barlow added meaningfully,
“but we can be pretty sure it wasn’t fish.” Dawkins turned his level
gaze on Barlow. “So you mean we’re looking at them all the time,”
Dawkins asked, “but we don’t see them?” Barlow nodded. “Yessss,”
Dawkins sighed, as if exhaling all hope of enlightening the unutterably
stupid world.
Dawkins can be equally harsh with his fellow biologists, those who
have dared to challenge the basic paradigm of Darwinism. He has
argued, with devastating persuasiveness, that all attempts to modify or
transcend Darwin in any significant way are flawed. He opened his 1986
book The Blind Watchmaker with the following proclamation: “Our exis-
tence once presented the greatest of all mysteries, but . . . it is a mystery
no longer because it is solved. Darwin and Wallace solved it, though we
shall continue to add footnotes to their solution for a while yet.”5
“There’s always an element of rhetoric in those things,” Dawkins
replied when I asked him later about the footnotes remark. “On the
other hand, it’s a legitimate piece of rhetoric,” in that Darwin did
solve “the mystery of how life came into existence and how life has
the beauty, the adaptiveness, the complexity it has.” Dawkins agreed
with Gunther Stent that all the great advances in biology since Dar-
win—Mendel’s demonstration that genes come in discrete packages,
Watson and Crick’s discovery of the double-helical structure of DNA—
buttressed rather than undermined Darwin’s basic idea.
Molecular biology has recently revealed that the process whereby
DNA interacts with RNA and proteins is more complicated than
Naturally, some modern biologists bridle at the notion that they are
merely adding footnotes to Darwin’s magnum opus. One of Dar-
win’s strongest (in Bloom’s sense) descendants is Stephen Jay Gould of
Harvard University. Gould has sought to resist the influence of Darwin
by denigrating his theory’s power, by arguing that it doesn’t explain
that much. Gould began staking out his philosophical position in the
1960s by attacking the venerable doctrine of uniformitarianism, which
holds that the geophysical forces that shaped the earth and life have
been more or less constant.6
In 1972, Gould and Niles Eldredge of the American Museum of
Natural History in New York extended this critique of uniformitari-
anism to biological evolution by introducing the theory of punctuated
equilibrium (also called punk eek or, by critics of Gould and Eldredge,
evolution by jerks).7 New species are only rarely created through the
gradual, linear evolution that Darwin described, Gould and Eldredge
argued. Rather, speciation is a relatively rapid event that occurs when
a group of organisms veers away from its stable parent population and
embarks on its own genetic course. Speciation must depend not on the
kind of adaptive processes described by Darwin (and Dawkins), but on
much more particular, complex, contingent factors.
In his subsequent writings, Gould has hammered away relentlessly
at ideas that he claims are implicit in many interpretations of Darwin-
ian theory: progress and inevitability. Evolution does not demonstrate
any coherent direction, according to Gould, nor are any of its prod-
ucts—such as Homo sapiens—in any sense inevitable; replay the “tape of
life” a million times, and this peculiar simian with the oversized brain
might never come to be. Gould has also attacked genetic determinism
wherever he has found it, whether in pseudo-scientific claims about
race and intelligence or in much more respectable theories related to
sociobiology. Gould packages his skepticism in a prose rich with refer-
ences to culture high and low and suffused with an acute awareness of
its own existence as a cultural artifact. He has been stunningly success-
ful; almost all his books have been bestsellers, and he is one of the most
widely quoted scientists in the world.8
record, which means gradualism exists but it’s not really important in
the overall pattern of things.”
As Gould continued speaking, I began to doubt whether he was
really interested in resolving debates over punctuated equilibrium
or other issues. When I asked him if he thought biology could ever
achieve a final theory, he grimaced. Biologists who hold such beliefs
are “naive inductivists,” he said. “They actually think that once we
sequence the human genome, well, we’ll have it!” Even some paleon-
tologists, he admitted, probably think “if we keep at it long enough
we really will know the basic features of the history of life and then
we’ll have it.” Gould disagreed. Darwin “had the answer right about
the basic interrelationships of organisms, but to me that’s only a begin-
ning. It’s not over; it’s started.”
So what did Gould consider the outstanding issues for evolutionary
biology? “Oh, there are so many I don’t know where to start.” He noted
that theorists still had to determine the “full panoply of causes” under-
lying evolution, from molecules on up to large populations of organ-
isms. Then there were “all these contingencies,” such as the asteroid
impacts that are thought to have caused mass extinctions. “So I would
say causes, strengths of causes, levels of causes, and contingency.” Gould
mused a moment. “That’s not a bad formulation,” he said, whereupon
he took a little notebook from his shirt pocket and scribbled in it.
Then Gould cheerfully rattled off all the reasons that science would
never answer all these questions. As a historical science, evolutionary
biology can offer only retrospective explanations and not predictions,
and sometimes it can offer nothing at all because it lacks sufficient data.
“If you’re missing the evidence of antecedent sequences, then you can’t
do it at all,” he said. “That’s why I think we’ll never know the origins
of language. Because it’s not a question of theory; it’s a question of con-
tingent history.”
Gould also agreed with Gunther Stent that the human brain, created
for survival in preindustrial society, is simply not capable of solving
certain questions. Research has shown that humans are inept at han-
dling problems that involve probabilities and the interactions of com-
plex variables—such as nature and nurture. “People do not understand
that if both genes and culture interact—of course they do—you can’t
then say it’s 20 percent genes and 80 percent environment. You can’t
do that. It’s not meaningful. The emergent property is the emergent
property and that’s all you can ever say about it.” Gould was not one of
those who invested life or the mind with mystical properties, however.
“I’m an old-fashioned materialist,” he said. “I think the mind arises
from the complexities of neural organization, which we don’t really
understand very well.”
To my surprise, Gould then plunged into a rumination on infinity
and eternity. “These are two things that we can’t comprehend,” he
said. “And yet theory almost demands that we deal with it. It’s prob-
ably because we’re not thinking about them right. Infinity is a para-
dox within Cartesian space, right? When I was eight or nine I used to
say, ‘Well, there’s a brick wall out there.’ Well, what’s beyond the brick
wall? But that’s Cartesian space, and even if space is curved you still
can’t help thinking what’s beyond the curve, even if that’s not the right
way of thinking about it. Maybe all of that’s just wrong! Maybe it’s a
universe of fractal expansions! I don’t know what it is. Maybe there are
ways in which this universe is structured we just can’t think about.”
Gould doubted whether scientists in any discipline could achieve a
final theory, given their tendency to sort things according to precon-
ceived concepts. “I really wonder whether any claim for a final theory
isn’t just reflecting the way in which we conceptualize them.”
Was it possible, given all these limits, that biology, and even science
as a whole, might simply go as far as it could and then come to an end?
Gould shook his head. “People thought science was ending in 1900,
and since then we’ve got plate tectonics, the genetic basis for life. Why
should it stop?” And anyway, Gould added, our theories might reflect
our own limitations as truth seekers rather than the true nature of
reality. Before I could respond, Gould had already leaped ahead of me.
“Of course, if those limits are intrinsic, then science will be complete
within the limits. Yeah, yeah. Okay, that’s a fair argument. I don’t think
it’s right, but I can understand the structure of it.”
Moreover, there may still be great conceptual revolutions in biol-
ogy’s future, Gould argued. “The evolution of life on this planet
may turn out to be a very small part of the phenomenon of life.” Life
all you’ve got to do is find the details,” Dawkins replied drily, “I sup-
pose that’s got to be true.” On the other hand, he added, biologists can
never be sure which biological principles have truly universal signifi-
cance “’til we’ve been to a few other planets that have life.” Dawkins
was acknowledging, implicitly, that the deepest questions of biology—
To what extent is life on earth inevitable? Is Darwinism a universal or
merely terrestrial law?—will never be truly, empirically answered as
long as we have only one form of life to study.
“We have this British-French business. Darwin’s all right and Lamarck
is bad. It’s really terrible.” Margulis noted that symbiogenesis, the cre-
ation of new species through symbiosis, was not really an original idea.
The concept was first proposed late in the last century, and it has been
resurrected many times since then.
Before meeting Margulis, I had read a draft of a book she was writ-
ing with her son, Dorion Sagan, called What Is Life? The book was an
amalgam of philosophy, science, and lyric tributes to “life: the eternal
enigma.” It argued, in effect, for a new holistic approach to biology,
in which the animist beliefs of the ancients would be fused with the
mechanistic views of post-Newton, post-Darwin science.13 Margulis
conceded that the book was aimed less at advancing testable, scientific
assertions than at encouraging a new philosophical outlook among
biologists. But the only difference between her and biologists like
Dawkins, she insisted, was that she admitted her philosophical out-
look instead of pretending that she didn’t have one. “Scientists are no
cleaner with respect to being untouched by culture than anyone else.”
Did that mean she did not believe that science could achieve abso-
lute truth? Margulis pondered the question a moment. She noted that
science derives its power and persuasiveness from the fact that its asser-
tions can be checked against the real world—unlike the assertions of
religion, art, and other modes of knowledge. “But I don’t think that’s
the same as saying there’s absolute truth. I don’t think there’s absolute
truth, and if there is, I don’t think any person has it.”
But then, perhaps realizing how close she was edging toward relativ-
ism—toward being what Harold Bloom had called a mere rebel—Mar-
gulis took pains to steer herself back toward the scientific mainstream.
She said that, although she was often considered a feminist, she was
not and resented being typecast as one. She conceded that, in compari-
son to such concepts as “survival of the fittest” and “nature red in tooth
and claw,” Gaia and symbiosis might seem feminine. “There is that cul-
tural overtone, but I consider that just a complete distortion.”
She rejected the notion—often associated with Gaia—that the earth
is in some sense a living organism. “The earth is obviously not a live
organism,” Margulis said, “because no single living organism cycles
its waste. That’s so anthropomorphic, so misleading.” James Lovelock
then . . .” Her voice trailed off. Like so many strong scientists, Margulis
cannot help but yearn, now and then, to be simply a respected member
of the status quo.
that I’m right,” Kauffman said; then much of the order displayed by
biological systems results not from “the hard-won success of natural
selection” but from these pervasive order-generating effects. “The
whole point of it is that it’s spontaneous order. It’s order for free, okay?
Once again, if that view is right, then not only do we have to modify
Darwinism to account for it, but we understand something about the
emergence and order of life in a different way.”
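Kauffman's case for spontaneous order rests on simulations of random Boolean networks, in which each gene is switched on or off by a few others. The sketch below is a minimal illustration of the idea, not Kauffman's own model; the parameters (20 nodes, 2 inputs apiece) are assumptions chosen only to show that even random wiring typically settles into a short repeating cycle rather than wandering chaotically.

```python
import random

N, K = 20, 2
random.seed(1)
# Random wiring: each node reads K randomly chosen nodes...
inputs = [random.sample(range(N), K) for _ in range(N)]
# ...through a random Boolean function (a lookup table over 2**K cases).
tables = [[random.randint(0, 1) for _ in range(2 ** K)] for _ in range(N)]

def step(state):
    return tuple(
        tables[i][sum(state[j] << b for b, j in enumerate(inputs[i]))]
        for i in range(N)
    )

state = tuple(random.randint(0, 1) for _ in range(N))
seen, t = {}, 0
while state not in seen:  # iterate until some state recurs
    seen[state] = t
    state, t = step(state), t + 1
print("cycle length:", t - seen[state])  # tiny next to the 2**20 possible states
```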
His computer simulations offer another, more sobering message,
Kauffman said. Just as the addition of a single grain of sand to a large
sandpile can trigger avalanches down its sides, so can a change in the
fitness of one species cause a sudden change in the fitness of all the
other species in the ecosystem, which can culminate in an avalanche
of extinctions. “To say it metaphorically, the best adaptation each of us
achieves may unleash an avalanche that leads to our ultimate demise,
okay? Because we’re all playing the game together and sending out rip-
ples into the system we mutually create. Now that bespeaks humility.”
Kauffman credited the sandpile analogy to Per Bak, a physicist asso-
ciated with the Santa Fe Institute who had developed a theory called
self-organized criticality (which is discussed in Chapter 8).
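Bak's sandpile is itself a simple computational model, and a toy version makes the avalanche metaphor concrete. The grid size and toppling threshold below are arbitrary assumptions; the point is only that identical grains produce wildly unequal avalanches.

```python
import random

SIZE, THRESHOLD = 20, 4
grid = [[0] * SIZE for _ in range(SIZE)]

def drop_grain():
    """Drop one grain at random; topple until stable; return avalanche size."""
    x, y = random.randrange(SIZE), random.randrange(SIZE)
    grid[y][x] += 1
    unstable, toppled = [(x, y)], 0
    while unstable:
        cx, cy = unstable.pop()
        if grid[cy][cx] < THRESHOLD:
            continue  # stale entry; this site has already relaxed
        grid[cy][cx] -= THRESHOLD
        toppled += 1
        for nx, ny in ((cx + 1, cy), (cx - 1, cy), (cx, cy + 1), (cx, cy - 1)):
            if 0 <= nx < SIZE and 0 <= ny < SIZE:  # edge grains fall off
                grid[ny][nx] += 1
                unstable.append((nx, ny))
    return toppled

sizes = [drop_grain() for _ in range(50_000)]
print("largest avalanche:", max(sizes))  # one grain can topple much of the pile
```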
When I brought up my concern that many scientists, and those at
the Santa Fe Institute in particular, seemed to confuse computer simu-
lations with reality, Kauffman nodded. “I agree with you. That person-
ally bothers me a lot,” he replied. Viewing some simulations, he said, “I
cannot tell where the boundary is between talking about the world—I
mean, everything out there—and really neat computer games and art
forms and toys.” When he did computer simulations, however, he was
always “trying to figure out how something in the world works, or
almost always. Sometimes I’m busy just trying to find things that just
seem interesting, and I wonder if they apply. But I don’t think you’re
doing science unless what you’re doing winds up fitting something out
there in the world, demonstrably. And that ultimately means being
testable.”
His model of genetic networks “makes all kinds of predictions” that
will probably be tested within the next 15 or 20 years, Kauffman said.
“It’s testable with some caveats. When you’ve got a system with 100,000
components, and you can’t take the system apart in detail—yet—what
not because they imparted an intellectual thrill but because they were
effective: they accurately predicted the outcome of experiments. Old
theories are old for good reason. They are robust, flexible. They have
an uncanny correspondence to reality. They may even be True.
Would-be revolutionaries face another problem. The scientific cul-
ture was once much smaller and therefore more susceptible to rapid
change. Now it has become a vast intellectual, social, and political
bureaucracy, with inertia to match. Stuart Kauffman, during one of
our conversations, compared the conservatism of science to that of
biological evolution, in which history severely constrains change. Not
only science, but many other systems of ideas—and particularly those
with important social consequences—tend to “stabilize and freeze in”
over time, Kauffman noted. “Think of the evolution of standard oper-
ating procedure on ships, or aircraft carriers,” he said. “It’s an incred-
ibly conservative process. If you wandered in and tried to design, ab
initio [from scratch], procedures on an aircraft carrier, I mean, you’d
just blow it to shit!”
Kauffman leaned toward me. “This is really interesting,” he said.
“Take the law, okay? British common law has evolved for what, 1,200
years? There is this enormous corpus with a whole bunch of concepts
about what constitutes reasonable behavior. It would be really hard to
change all that! I wonder if you could show that as a web of concepts
matures in any area—in all these cases we’re making maps to make
our way in the world—I wonder if you could show that somehow the
center of it got more and more resistant to change.” Oddly enough,
Kauffman was presenting an excellent argument why his own radical
theories about the origin of life and of biological order would probably
never be accepted. If any scientific idea has proved its ability to over-
come all challengers, it is Darwin’s theory of evolution.
his opinion of them, he merely shook his head, sighed deeply, and snick-
ered—as if overcome by the folly of humanity. Stuart Kauffman’s the-
ory of autocatalysis fell into this category. “Running equations through
a computer does not constitute an experiment,” Miller sniffed.
Miller acknowledged that scientists may never know precisely
where and when life emerged. “We’re trying to discuss a historical
event, which is very different from the usual kind of science, and so
criteria and methods are very different,” he remarked. But when I sug-
gested that Miller sounded pessimistic about the prospects for discov-
ering life’s secret, he looked appalled. Pessimistic? Certainly not! He
was optimistic!
One day, he vowed, scientists would discover the self-replicating
molecule that had triggered the great saga of evolution. Just as the dis-
covery of the microwave afterglow of the big bang legitimized cos-
mology, so would the discovery of the first genetic material legitimize
Miller’s field. “It would take off like a rocket,” Miller muttered through
clenched teeth. Would such a discovery be immediately self-apparent?
Miller nodded. “It will be in the nature of something that will make
you say, ‘Jesus, there it is. How could you have overlooked this for so
long?’ And everybody will be totally convinced.”
When Miller performed his landmark experiment in 1953, most
scientists still shared Darwin’s belief that proteins were the likeliest
candidates for self-reproducing molecules, since proteins were thought
to be capable of reproducing and organizing themselves. After the dis-
covery that DNA is the basis for genetic transmission and for protein
synthesis, many researchers began to favor nucleic acids over proteins
as the ur-molecules. But there was a major hitch in this scenario. DNA
can make neither proteins nor copies of itself without the help of cat-
alytic proteins called enzymes. This fact turned the origin of life into
a classic chicken-or-egg problem: which came first, proteins or DNA?
In The Coming of the Golden Age, Gunther Stent, prescient as always,
suggested that this conundrum could be solved if researchers found a
self-replicating molecule that could act as its own catalyst.21 In the early
1980s, researchers identified just such a molecule: ribonucleic acid, or
RNA, a single-strand molecule that serves as DNA’s helpmate in man-
ufacturing proteins. Experiments revealed that certain types of RNA
could act as their own enzymes, snipping themselves in two and splic-
ing themselves back together again. If RNA could act as an enzyme
then it might also be able to replicate itself without help from proteins.
RNA could serve as both gene and catalyst, egg and chicken.
But the so-called RNA-world hypothesis suffers from several prob-
lems. RNA and its components are difficult to synthesize under the
best of circumstances, in a laboratory, let alone under plausible pre-
biotic conditions. Once RNA is synthesized, it can make new copies
of itself only with a great deal of chemical coaxing from the scientist.
The origin of life “has to happen under easy conditions, not ones that
are very special,” Miller said. He is convinced that some simpler—and
possibly quite dissimilar—molecule must have paved the way for RNA.
Lynn Margulis, for one, doubts whether investigations of the ori-
gin of life will yield the kind of simple, self-validating answer of
which Miller dreams. “I think that may be true of the cause of can-
cer but not of the origin of life,” Margulis said. Life, she pointed out,
emerged under complex environmental conditions. “You have day and
night, winter and summer, changes in temperature, changes in dry-
ness. These things are historical accumulations. Biochemical systems
are effectively historical accumulations. So I don’t think there is ever
going to be a packaged recipe for life: add water and mix and get life.
It’s not a single-step process. It’s a cumulative process that involves a lot
of changes.” The smallest bacterium, she noted, “is so much more like
people than Stanley Miller’s mixtures of chemicals, because it already
has these system properties. So to go from a bacterium to people is less
of a step than to go from a mixture of amino acids to that bacterium.”
Francis Crick wrote in his book Life Itself that “the origin of life
appears to be almost a miracle, so many are the conditions which
would have to be satisfied to get it going.”22 (Crick, it should be noted,
is an agnostic leaning toward atheism.) Crick proposed that aliens visit-
ing the earth in a spacecraft billions of years ago may have deliberately
seeded it with microbes.
Perhaps Stanley Miller’s hope will be fulfilled: scientists will find
some clever chemical or combination of chemicals that can reproduce,
mutate, and evolve under plausible prebiotic conditions. The discov-
ery would be sure to launch a new era of applied chemistry. (The vast
Everything would have been easy for Edward O. Wilson if he had
just stuck to ants. Ants lured him into biology when he was a boy
growing up in Alabama, and they remain his greatest source of
inspiration. He has written stacks of papers and several books on the
tiny creatures. Ant colonies line Wilson’s office at Harvard University’s
Museum of Comparative Zoology. Showing them off to me, he was
as proud and excited as a 10-year-old child. When I asked Wilson if he
had exhausted the topic of ants yet, he exulted, “We’re only just begin-
ning!” He had recently embarked on a survey of Pheidole, one of the
most abundant genera in the animal kingdom. Pheidole is thought to
include more than 2,000 species of ants, most of which have never been
described or even named. “I guess with that same urge that makes
men in their middle age decide that at last they are going to row across
the Atlantic in a rowboat or join a group to climb K2, I decided that I
would take on Pheidole,” Wilson said.1
Wilson was a leader in the effort to conserve the earth’s biologi-
cal diversity, and his grand goal was to make Pheidole a benchmark
of sorts for biologists seeking to monitor the biodiversity of different
regions. Drawing on Harvard’s collection of ants, the largest in the
world, Wilson was generating a set of painstaking pencil drawings of
each species of Pheidole along with descriptions of its behavior and ecol-
ogy. “It probably looks crushingly dull to you,” Wilson apologized as
he flipped through his drawings of Pheidole species (which were actu-
ally compellingly monstrous). “To me it’s one of the most satisfying
activities imaginable.” He confessed that, when he peered through his
brothers, and mothers would strive to kill their fertile daughters; and
no one would think of interfering.”10 In other words, we humans are
animals, and natural selection has shaped not only our bodies but our
very beliefs, our fundamental sense of right and wrong. One dismayed
Victorian reviewer of Descent fretted in the Edinburgh Review, “If these
views be true, a revolution in thought is imminent, which will shake
society to its very foundations by destroying the sanctity of the con-
science and the religious sense.”11 That revolution happened long ago.
Before the end of the nineteenth century, Nietzsche had proclaimed
that there were no divine underpinnings to human morality: God is
dead. We did not need sociobiology to tell us that.
of their power, but on the U.S. government, industry, and the media.
He called the United States a “terrorist superpower” and the media its
“propaganda agent.” He told me that if the New York Times, one of his
favorite targets, started reviewing his books on politics, it would be a
sign to him that he was doing something wrong. He summed up his
world view as “whatever the establishment is, I’m against it.”
I said I found it ironic that his political views were so antiestablish-
ment, given that in linguistics he is the establishment. “No I’m not,” he
snapped. His voice, which ordinarily is hypnotically calm—even when
he is eviscerating someone—suddenly had an edge. “My position in
linguistics is a minority position, and it always has been.” He insisted
that he was “almost totally incapable of learning languages” and that,
in fact, he was not even a professional linguist. MIT had only hired him
and given him tenure, he suggested, because it really did not know or
care much about the humanities; it simply needed to fill a slot.12
I provide this background for its cautionary value. Chomsky is one
of the most contrarian intellectuals I have met (rivaled only by the
anarchic philosopher Paul Feyerabend). He is compelled to put all
authority figures in their places, even himself. He exemplifies the anxi-
ety of self-influence. One should thus take all of Chomsky’s pronounce-
ments with a grain of salt. In spite of his denials, Chomsky is the most
important linguist who has ever lived. “It is hardly an exaggeration to
say that there is no major theoretical issue in linguistics today that is
debated in terms other than those in which he has chosen to define it,”
declares the Encyclopaedia Britannica.13 Chomsky’s position in the his-
tory of ideas has been likened to that of Descartes and Darwin.14 When
Chomsky was in graduate school in the 1950s, linguistics—and all the
social sciences—was dominated by behaviorism, which hewed to John
Locke’s notion that the mind begins as a tabula rasa, a blank slate that
is inscribed upon by experience. Chomsky challenged this approach.
He contended that children could not possibly learn language solely
through induction, or trial and error, as behaviorists believed. Some
fundamental principles of language—a kind of universal grammar—
must be embedded in our brains. Chomsky’s theories, which he first set
forth in his 1957 book Syntactic Structures, helped to rout behaviorism
once and for all and paved the way for a more Kantian, genetically ori-
ented view of human language and cognition.15
Edward Wilson and other scientists who attempt to explain human
nature in genetic terms are all, in a sense, indebted to Chomsky. But
Chomsky has never been comfortable with Darwinian accounts of
human behavior. He accepts that natural selection may have played
some role in the evolution of language and other human attributes. But
given the enormous gap between human language and the relatively
simple communication systems of other animals, and given our frag-
mentary knowledge of the past, science can tell us little about how
language evolved. Just because language is adaptive now, Chomsky
elaborates, does not mean that it arose in response to selection pres-
sures. Language may have been an incidental by-product of a spurt in
intelligence that only later was coopted for various uses. The same may
be true of other properties of the human mind. Darwinian social sci-
ence, Chomsky has complained, is not a real science at all but “a phi-
losophy of mind with a little bit of science thrown in.” The problem,
according to Chomsky, is that “Darwinian theory is so loose it can
incorporate anything they discover.”16
Chomsky’s evolutionary perspective has, if anything, convinced
him that we may have only a limited ability to understand nature,
human or inhuman. He rejects the notion—popular among many sci-
entists—that evolution shaped the brain into a general-purpose learn-
ing and problem-solving machine. Chomsky believes, as Gunther Stent
and Colin McGinn do, that the innate structure of our minds imposes
limits on our understanding. (Stent and McGinn arrived at this conclu-
sion in part because of Chomsky’s research.)
Chomsky divides scientific questions into problems, which are at
least potentially answerable, and mysteries, which are not. Before the
seventeenth century, Chomsky explained to me, when science did not
really exist in the modern sense, almost all questions appeared to be
mysteries. Then Newton, Descartes, and others began posing ques-
tions and solving them with the methods that spawned modern sci-
ence. Some of those investigations have led to “spectacular progress,”
but many others have proved fruitless. Scientists have made absolutely
Mind, not space, is science's final frontier. Even the most avid
believers in the power of science to solve its problems consider
the mind a potentially endless source of questions. The prob-
lem of mind can be approached in many ways. There is the historical
dimension: How, why, did Homo sapiens become so smart? Darwin pro-
vided a general answer long ago: natural selection favored hominids
who could use tools, anticipate the actions of potential competitors,
organize into hunting parties, share information through language,
and adapt to changing circumstances. Together with modern genetics,
Darwinian theory has much to say about the structure of our minds
and thus about our sexual and social behavior (although not as much as
Edward Wilson and other sociobiologists might like).
But modern neuroscientists are interested less in how and why our
minds evolved, in a historical sense, than in how they are structured
and work right now. The distinction is similar to the one that can be
made between cosmology, which seeks to explain the origins and sub-
sequent evolution of matter, and particle physics, which addresses the
structure of matter as we find it here in the present. One discipline
is historical and thus necessarily tentative, speculative, and open-
ended. The other is, by comparison, much more empirical, precise, and
amenable to resolution and finality.
Even if neuroscientists restricted their studies to the mature rather
than embryonic brain, the questions would be legion. How do we
learn, remember, see, smell, taste, and hear? Most researchers would
say these problems, although profoundly difficult, are tractable;
genetics. We had to know about genes, and what genes did. But to pin
it down, we had to get down to the nitty-gritty and find the molecules
and things involved.”
Crick gloated that he was in a perfect position to promote conscious-
ness as a scientific problem. “I don’t have to get grants,” he said, because
he had an endowed chair at the Salk Institute. “The main reason I do it
is because I find the problem fascinating, and I feel I’ve earned the right
to do what I like.” Crick did not expect researchers to solve these prob-
lems overnight. “What I want to stress is that the problem is important
and has been too long neglected.”
While talking to Crick, I could not help but think of the famous
first line of The Double Helix, James Watson’s memoir about how he and
Crick had deciphered DNA’s structure: “I have never seen Francis Crick
in a modest mood.”4 Some historical revisionism is in order here. Crick
is often modest. During our conversation, he expressed doubts about
his oscillation theory of consciousness; he said parts of a book he was
writing on the brain were “dreadful” and needed rewriting. When I
asked Crick how he interpreted Watson’s quip, he laughed. What Wat-
son meant, Crick suggested, was not that he was immodest, but that
he was “full of confidence and enthusiasm and things like that.” If he
was also a tad bumptious at times, and critical of others, well, that was
because he wanted so badly to get to the bottom of things. “I can be
patient for about 20 minutes,” he said, “but that’s it.”
Crick’s analysis of himself, like his analysis of most things, seemed
on target. He has the perfect personality for a scientist, an empirical
scientist, the kind who answers questions, who gets us somewhere. He
is, or appears to be, singularly free of self-doubt, wishful thinking, and
attachments to his own theories. His immodesty, such as it is, comes
simply from wanting to know how things work, regardless of the
consequences. He cannot tolerate obfuscation or wishful thinking or
untestable speculation, the hallmarks of ironic science. He is also eager
to share his knowledge, to make things as clear as possible. This trait is
not as common among prominent scientists as one might expect.
In his autobiography, Crick revealed that as a youth and would-be
scientist he had worried that by the time he grew up everything
would already be discovered. “I confided my fears to my mother, who
reassured me. ‘Don’t worry, Ducky,’ she said. ‘There will be plenty left
for you to find out.’”5 Recalling this passage, I asked Crick if he thought
there would always be plenty left for scientists to find out. It all depends
on how one defines science, he replied. Physicists might soon deter-
mine the fundamental rules of nature, but they could then apply that
knowledge by inventing new things forever. Biology seems to have an
even longer future. Some biological structures—such as the brain—
are so complex that they may resist elucidation for some time. Other
puzzles, especially historical ones, such as the origin of life, may never
be fully answered simply because the available data are insufficient.
“There are enormous numbers of interesting problems” in biology,
Crick said. “There’s enough to keep us busy at least through our grand-
children’s time.” On the other hand, Crick agreed with Richard Daw-
kins that biologists already had a good general understanding of the
processes underlying evolution.
As Crick escorted me out of his office, we passed a table bearing a
thick stack of paper. It was a draft of Crick’s book on the brain, entitled
The Astonishing Hypothesis. Would I like to read his opening paragraph?
Sure, I said. “The Astonishing Hypothesis,” the book began, “is that
‘You,’ your joys and your sorrows, your memories and your ambitions,
your sense of personal identity and free will, are in fact no more than
the behavior of a vast assembly of nerve cells and their associated mole-
cules. As Lewis Carroll's Alice might have phrased it, 'You're nothing but
a pack of neurons.’”6 I looked at Crick. He was grinning from ear to ear.
I was talking to Crick on the telephone several weeks later, check-
ing the facts of an article I had written about him, when he asked me
for some advice. He confessed that his editor wasn’t thrilled with the
title The Astonishing Hypothesis; she didn’t think the view that “we are
nothing but a pack of neurons” was all that astonishing. What did I
think? I told Crick I had to agree; his view of the mind was, after all,
just old-fashioned reductionism and materialism. I suggested that The
Depressing Hypothesis might be a more fitting title, but it might repel
would-be readers. The title didn’t matter that much anyway, I added,
since the book would sell on the strength of Crick’s name.
Crick absorbed all this with his usual good humor. When his book
appeared in 1994, it was still called The Astonishing Hypothesis. However,
Crick, or, more likely, his editor, had added a subtitle: The Scientific
Search for the Soul. I had to smile when I saw it; Crick was obviously
trying not to find the soul—that is, some spiritual essence that exists
independently of our fleshy selves—but to eliminate the possibility
that there was one. His DNA discovery had gone far toward eradicat-
ing vitalism, and now he hoped to stamp out any last vestiges of that
romantic worldview through his work on consciousness.
his work in immunology, he said that “before I came to it, there was
darkness—afterwards there was light.” He called a robot based on his
neural model his “creature” and said: “I can only observe it, like God. I
look down on its world.”9
I experienced Edelman’s self-regard firsthand when I visited him
at Rockefeller University in June 1992. (Sometime later, Edelman left
Rockefeller to head his own laboratory at the Scripps Research Institute in La
Jolla, California, just down the road from Crick.) Edelman is a large
man. Clad in a dark, broad-shouldered suit, he exuded a kind of men-
acing elegance and geniality. As in his books, he kept interrupting his
scientific discourse to dispense stories, jokes, or aphorisms whose rele-
vance was often obscure. The digressions seemed intended to demon-
strate that Edelman represented the ideal intellectual, both cerebral
and earthy, learned and worldly; no mere experimentalist, he.
Explaining how he became interested in the mind, Edelman said:
“I’m very excited by dark and romantic and open issues of science. I’m
not averse to working on details, but pretty much only in the service
of trying to address this issue of closure.” Edelman wanted to find
the answer to great questions. His Nobel Prize–winning research
on antibody structure had transformed immunology into “more or
less a closed science”; the central question, which concerned how the
immune system responds to invaders, was resolved. He and others
helped to show that self-recognition happens through a process known
as selection: the immune system has innumerable different antibodies,
and the presence of foreign antigens spurs the body to accelerate the
production of, or select, antibodies specific to that antigen and to sup-
press the production of other antibodies.
Edelman’s search for open questions led him inexorably to the devel-
opment and operation of the brain. He realized that a theory of the
human mind would represent the ultimate closure for science, for then
science could account for its own origin. Consider superstring theory,
Edelman said. Could it explain the existence of Edward Witten? Obvi-
ously not. Most theories of physics relegate issues involving the mind
to “philosophy or sheer speculation,” Edelman noted. “You read that
section of my book where Max Planck says we’ll never get this mystery
of the universe because we are the mystery? And Woody Allen said if I
had my life to live over again I’d live in a delicatessen?”
Describing his approach to the mind, Edelman sounded, at first, as
resolutely empirical as Crick. The mind, Edelman emphasized, can
only be understood from a biological standpoint, not through physics
or computer science or other approaches that ignore the structure of the
brain. “We will not have a deeply satisfactory brain theory unless we
have a deeply satisfactory theory of neural anatomy, okay? It’s as simple
as that.” To be sure, “functionalists,” such as the artificial-intelligence
maven Marvin Minsky, say they can build an intelligent being without
paying attention to anatomy. “My answer is, ‘When you show me, fine.’”
But as Edelman continued speaking, it became clear that, unlike
Crick, he viewed the brain through the filter of his idiosyncratic obses-
sions and ambitions. He seemed to think that all his insights were
totally original; no one had truly seen the brain before he had turned
his attention to it. He recalled that when he had started studying the
brain, or, rather, brains, he was immediately struck by their variability.
“It seemed to me very curious that the people who worked in neuro-
science always talked about brains as if they were identical,” he said.
“When you look at papers everybody talked about it as if it were a rep-
licable machine. But when you actually look in depth, at every level—
and there are amazing numbers of levels—the thing that really hits
you is the diversity.” Even identical twins, he remarked, show great
differences in the organization of their neurons. These differences, far
from being insignificant noise, are profoundly important. “It’s quite
scary,” Edelman said. “That’s something you just can’t get around.”
The vast variability and complexity of the brain may be related to a
problem with which philosophers from Kant to Wittgenstein had wres-
tled: how do we categorize things? Wittgenstein, Edelman elaborated,
highlighted the troublesome nature of categories by pointing out that
different games often had nothing in common except the fact that they
were games. “Typical Wittgenstein,” Edelman mused. “There is a kind
of ostentation in his modesty. I don’t know what that is. He provokes
you and it’s very powerful. It’s ambiguous, sometimes, and it’s not cute.
It’s riddle, it’s posturing around the riddle.”
long-term relationship with Francis, and that’s not something one can
answer—Boom! Boom!—on the way out the door. Or, as Groucho
Marx said, ‘Leave, and never darken my towels again!’” With that, he
departed on a wave of hollow laughter.
Edelman has admirers, but most dwell on the fringes of neurosci-
ence. His most prominent fan is the neurologist Oliver Sacks, whose
beautifully written accounts of his dealings with brain-damaged
patients have set the standard for literary—that is, ironic—neurosci-
ence. Francis Crick spoke for many of his fellow neuroscientists when
he accused Edelman of hiding “presentable” but not terribly original
ideas behind a “smoke screen of jargon.” Edelman’s Darwinian termi-
nology, Crick added, has less to do with any real analogies to Darwin-
ian evolution than with rhetorical grandiosity. Crick suggested that
Edelman’s theory be renamed “neural Edelmanism.” “The trouble
with Jerry,” Crick said, is that “he tends to produce slogans and sort
of waves them about without really paying attention to what other
people are saying. So it’s really too much hype, is what one is complain-
ing about.”10
The philosopher Daniel Dennett of Tufts University remained
unimpressed after visiting Edelman's laboratory. In a review of Edel-
man’s Bright Air, Brilliant Fire, Dennett argued that Edelman had
merely presented rather crude versions of old ideas. Edelman’s deni-
als notwithstanding, his model was a neural network, and reentry
was feedback, according to Dennett. Edelman also “misunderstands
the philosophical issues he addresses at an elementary level,” Dennett
asserted. Edelman may profess scorn for those who think the brain is
a computer, but his use of a robot to “prove” his theory shows that he
holds the same belief, Dennett explained.11
Some critics accuse Edelman of deliberately trying to take credit for
others’ ideas by wrapping them in his own idiosyncratic terminology.
My own, somewhat more charitable, interpretation is that Edelman
has the brain of an empiricist and the heart of a romantic. He seemed
to acknowledge as much, in his typically oblique way, when I asked
him if he thought science was in principle finite or infinite. “I don’t
know what that means,” he replied. “I know what it means when I
say that a series in mathematics is finite or infinite. But I don’t know
what it means to say that science is infinite. Example, okay? I’ll quote
Wallace Stevens: Opus Posthumous. 'In the very long run even the truth
doesn’t matter. The risk is taken.’” The search for truth is what counts,
Edelman seemed to be implying, not the truth itself.
Edelman added that Einstein, when asked whether science was
exhausted, reportedly answered, “Possibly, but what’s the use of
describing a Beethoven symphony in terms of air-pressure waves?”
Einstein, Edelman explained, was referring to the fact that physics
alone could not address questions related to value, meaning, and other
subjective phenomena. One might respond by asking: What is the
use of describing a Beethoven symphony in terms of reentrant neural
loops? How does the substitution of neurons for air-pressure waves or
atoms or any physical phenomenon do justice to the magic and mys-
tery of the mind? Edelman cannot accept, as Francis Crick does, that
we are “nothing but a pack of neurons.” Edelman therefore obfuscates
his basic neural theory—infusing it with terms and concepts borrowed
from evolutionary biology, immunology, and philosophy—to lend it
added grandeur, resonance, mystique. He is like a novelist who risks
obscurity—even seeks it—in the hope of achieving a deeper truth. He
is a practitioner of ironic neuroscience, one who, unfortunately, lacks
the requisite rhetorical skills.
Quantum Dualism
There is one issue on which Crick, Edelman, and indeed almost all
neuroscientists agree: the properties of the mind do not depend
in any crucial way on quantum mechanics. Physicists, philosophers,
and others have speculated about links between quantum mechanics
and consciousness since at least the 1930s, when some philosophically
inclined physicists began to argue that the act of measurement—and
hence consciousness itself—played a vital role in determining the out-
come of experiments involving quantum effects. Such theories have
involved little more than hand waving, and proponents invariably have
ulterior philosophical or even religious motives. Crick’s partner Chris-
tof Koch summed up the quantum-consciousness thesis in a syllogism:
“One is going out on a limb here, but I feel quite strongly about those
things. I can’t see any way out.”14
I noted that some physicists had begun thinking about how exotic
quantum effects, such as superposition, might be harnessed for per-
forming computations that classical computers could not achieve. If
these quantum computers proved feasible, would Penrose grant that
they might be capable of thinking? Penrose shook his head. A computer
capable of thought, he said, would have to rely on mechanisms related
not to quantum mechanics in its present form but to a deeper theory
not yet discovered. What he was really arguing against in The Emperor’s
New Mind, Penrose confided, was the assumption that the mystery of
consciousness, or of reality in general, could be explained by the cur-
rent laws of physics. “I’m saying this is wrong,” he announced. “The
laws that govern the behavior of the world are, I believe, much more
subtle than that.”
Contemporary physics simply does not make sense, he elaborated.
Quantum mechanics, in particular, has to be flawed, because it is so
glaringly inconsistent with ordinary, macroscopic reality. How can
electrons act like particles in one experiment and waves in another?
How can they be in two places at the same time? There must be some
deeper theory that eliminates the paradoxes of quantum mechanics
and its disconcertingly subjective elements. “Ultimately our theory
has to accommodate subjectivism, but I wouldn’t like to see the the-
ory itself being a subjective theory.” In other words, the theory should
allow for the existence of minds but should not require it.
Neither superstring theory—which is after all a quantum theory—
nor any other current candidate for a unified theory has the qualities
that Penrose feels are necessary. “If there is going to be such a kind
of total theory of physics, in some sense it couldn’t conceivably be the
character of any theory I’ve seen,” he said. Such a theory would need
a “kind of compelling naturalism.” In other words, the theory would
have to make sense.
Yet Penrose was just as conflicted as he had been in Syracuse over
whether physics would achieve a truly complete theory. Gödel’s the-
orem, he said, suggests that there will always be open questions in
physics, as in mathematics. “Even if you could find the end of how
and meaningful. Steven Weinberg could have told him that physics
lacks that capacity.
But then Chalmers proclaimed that although science could not solve
the mind-body problem, philosophy still might. Chalmers thought he
had found a possible solution: scientists should assume that informa-
tion is as essential a property of reality as matter and energy. Chalm-
ers’s theory was similar to the it from bit concept of John Wheeler—in
fact, Chalmers acknowledged his debt to Wheeler—and it suffered
from the same fatal flaw. The concept of information does not make
sense unless there is an information processor—whether an amoeba
or a particle physicist—that gathers information and acts on it. Matter
and energy were present at the dawn of creation, but life was not, as far
as we know. How, then, can information be as fundamental as matter
and energy? Nevertheless, Chalmers’s ideas struck a chord among his
audience. They thronged around him after his speech, telling him how
much they had enjoyed his message.21
At least one listener was displeased: Christof Koch, Francis Crick’s
collaborator. That night Koch, a tall, rangy man wearing red cowboy
boots, tracked Chalmers down at a cocktail party for the conferees
and chastised him for his speech. It is precisely because philosophical
approaches to consciousness have all failed that scientists must focus
on the brain, Koch declared in his rapid-fire, German-accented voice as
rubberneckers gathered. Chalmers’s information-based theory of con-
sciousness, Koch continued, like all philosophical ideas, was untestable
and therefore useless. “Why don’t you just say that when you have a
brain the Holy Ghost comes down and makes you conscious!” Koch
exclaimed. Such a theory was unnecessarily complicated, Chalmers
responded drily, and it would not accord with his own subjective expe-
rience. “But how do I know your subjective experience is the same as
mine?” Koch sputtered. “How do I even know you’re conscious?”
Koch had brought up the embarrassing problem of solipsism, which
lies at the heart of the mysterian position. No person really knows that
any other being, human or inhuman, has a subjective experience of
the world. By raising this ancient philosophical conundrum, Koch, like
Dennett, was revealing himself to be a mysterian. Koch admitted as
much to me later. All science can do, he asserted, is provide a detailed
map of the physical processes that correlate with different subjective
states. But science cannot truly “solve” the mind-body problem. No
Minsky called Roger Penrose a “coward” who could not accept his
own physicality, and he derided Gerald Edelman’s reentrant-loops
hypothesis as warmed-over feedback theory. Minsky even snubbed
MIT’s own Artificial Intelligence Laboratory, which he had founded
and where we happened to be meeting. “I don’t consider this to be a
serious research institution at the moment,” he announced.
When we wandered through the lab looking for a lecture on a
chess-playing computer, however, a metamorphosis occurred. “Isn’t
the chess meeting supposed to be here?” Minsky queried a group of
researchers chatting in a lounge. “That was yesterday,” someone
replied. After asking a few questions about the talk, Minsky spun
tales about the history of chess-playing programs. This minilecture
evolved into a reminiscence of Minsky’s friend Isaac Asimov, who had
just died. Minsky recounted how Asimov—who had popularized the
term robot and explored its metaphysical implications in his science fic-
tion—always refused Minsky’s invitations to see the robots being built
at MIT out of fear that his imagination “would be weighed down by
this boring realism.”
One lounger, noticing that he and Minsky wore the same pliers,
yanked his instrument from its holster and with a flick of his wrist
snapped the retractable jaws into place. “En garde,” he said. Minsky,
grinning, drew his weapon, and he and his challenger whipped their
pliers repeatedly at each other, like punks practicing their switchblade
technique. Minsky expounded on both the versatility and—an import-
ant point for him—the limitations of the pliers; his pair pinched him
during certain maneuvers. “Can you take it apart with itself?” someone
asked. Minsky and his colleagues shared an insiders’ laugh at this refer-
ence to a fundamental problem in robotics.
Later, returning to Minsky’s office, we encountered a young,
extremely pregnant Korean woman. She was a doctoral candidate and
was scheduled for an oral exam the next day. “Are you nervous?” asked
Minsky. “A little,” she replied. “You shouldn’t be,” he said, and gently
pressed his forehead against hers, as if seeking to infuse her with his
strength. I realized, watching this scene, that there were many Minskys.
But of course there would be. Multiplicity is central to Minsky’s view
of the mind. In his book The Society of Mind he contended that brains
The reason Minsky had mastered so many skills during his career—he
is an adept in mathematics, philosophy, physics, neuroscience, robotics,
and computer science and has written several science fiction novels—
was that he had learned to enjoy the “feeling of awkwardness” triggered
by having to learn something new. “It’s so thrilling not to be able to do
something. It’s such a rare experience to treasure. It won’t last.”
Minsky was a child prodigy in music, too, but eventually he decided
that music was a soporific. “I think the reason people like music is
to suppress thought—the wrong kinds of thought—not to produce
it.” Minsky still occasionally found himself composing “Bach-like
things”—an electric piano crowded his office—but he tried to resist the
impulse. “I had to kill the musician at some point,” he said. “It comes
back every now and then, and I hit it.”
Minsky had no patience for those who claim the mind is too subtle
to understand. “Look, before Pasteur people said, ‘Life is different. You
can’t explain it mechanically.’ It’s just the same thing.” But a final the-
ory of the mind, Minsky emphasized, would be much more complex
than a final theory of physics—which Minsky also believed was attain-
able. All of particle physics might be condensed to a page of equations,
Minsky said, but to describe all the components of the mind would
require much more space. After all, consider how long it would take
precisely to describe an automobile, or even a single spark plug. “It
would take a fair-sized book to explain how they welded and sintered
the spline to the ceramic without it leaking when it starts.”
Minsky said the truth of a model of mind could be demonstrated in
several ways. First, a machine based on the model’s principles should be
able to mimic human development. “The machine ought to be able to
start as a baby and grow up by seeing movies and playing with things.”
Moreover, as imaging technology improves, scientists should be able
to determine whether the neural processes in living humans corrobo-
rate the model. “It seems to me that it’s perfectly reasonable that once
you get a [brain] scanner that had one angstrom [one ten-billionth of a
meter] resolution, then you could see every neuron in someone’s brain.
You watch this for 1,000 years and you say, ‘Well, we know exactly
what happens whenever this person says blue.’ And people check this
out for generations and the theory is sound. Nothing goes wrong, and
that’s the end of it.”
If humans achieve a final theory of the mind, I asked, what fron-
tiers will be left for science to explore? “Why are you asking me this
question?” Minsky retorted. The concern that scientists will run out of
things to do is pitiful, he said. “There’s plenty to do.” We humans may
well be approaching our limits as scientists, but we will someday cre-
ate machines much smarter than we that can continue doing science.
But that would be machine science, not human science, I said. “You’re
a racist, in other words,” Minsky said, his great domed forehead pur-
pling. I scanned his face for signs of irony, but found none. “I think the
important thing for us is to grow,” Minsky continued, “not to remain in
our own present stupid state.” We humans, he added, are just “dressed
up chimpanzees.” Our task is not to preserve present conditions but to
evolve, to create beings better, more intelligent than we.
But Minsky, surprisingly, was hard-pressed to say precisely what
kinds of questions these brilliant machines might be interested in.
Echoing Daniel Dennett, Minsky suggested, rather halfheartedly, that
machines might try to comprehend themselves as they evolved into
ever-more-complex entities. He seemed more enthusiastic discuss-
ing the possibilities of converting human personalities into computer
programs that could then be downloaded into machines. Minsky saw
downloading as a way to indulge in pursuits that he would ordinarily
consider too dangerous, such as taking LSD or embracing religious
faith. “I regard religious experience as a very risky thing to do because
it can destroy the brain in a rapid way, but if I had a backup copy—.”
Minsky confessed that he would love to know what Yo-Yo Ma, the
great cellist, felt like when playing a concerto, but Minsky doubted
whether such an experience would be possible. To share Yo-Yo Ma’s
experience, Minsky explained, he would have to possess all Yo-Yo Ma’s
memories, he would have to become Yo-Yo Ma. But in becoming Yo-Yo
Ma, Minsky suspected, he would cease to be Minsky.
This was an extraordinary admission for Minsky to make. Like lit-
erary critics who claim that the only true interpretation of a text is
the text itself, Minsky was implying that our humanness is irreducible;
Bacon urged the philosophers of his day to cease trying to show how
the universe evolved from thought and to begin considering how
thought evolved from the universe.27 Here, arguably, Bacon anticipated
modern explanations of consciousness within the context of the theory
of evolution and, more generally, of the materialist paradigm. The sci-
entific conquest of consciousness will be the ultimate anticlimax, yet
another demonstration of Niels Bohr’s dictum that science’s job is to
reduce all mysteries to trivialities. But human science will not, cannot,
solve the how-do-I-know-you’re-conscious problem. There may be only
one way to solve it: to make all minds one mind.
I miss the Reagan era. Ronald Reagan made moral and political
choices so easy. What he liked, I disliked. Star Wars, for example.
Formally known as the Strategic Defense Initiative, it was Reagan’s
plan to build a shield in space that would protect the United States from
the nuclear missiles of the Soviet Union. Of the many stories I wrote
about Star Wars, the one I am most embarrassed by now involved
Gottfried Mayer-Kress, a physicist at, of all places, the Los Alamos
National Laboratory, the cradle of the atomic bomb. Mayer-Kress had
constructed a simulation of the arms race between the Soviet Union
and the United States that employed “chaotic” mathematics. His sim-
ulation suggested that Star Wars would destabilize relations between
the superpowers and possibly lead to a catastrophe, that is, nuclear
war. Because I approved of Mayer-Kress’s conclusions—and because
his place of employment added a nice touch of irony—I wrote up an
admiring report of his work. Of course, if Mayer-Kress’s simulation
had suggested that Star Wars was a good idea, I would have dismissed
his work as the nonsense that it obviously was. Star Wars could well
have destabilized relations between the superpowers, but did we need
some computer model to tell us that?
I don’t mean to beat up on Mayer-Kress. He meant well. (In 1993,
several years after I wrote about Mayer-Kress’s Star Wars research, I
saw a press release from the University of Illinois, where he was then
employed, announcing that his computer simulations had suggested
solutions to the conflicts in Bosnia and Somalia.1) His work is just one
of the more blatant examples of over-reaching by someone in the field
analysis by the reductionist methods of the past. The blurb on the back
of Heinz Pagels’s The Dreams of Reason, one of the best books on the
“new sciences of complexity,” put it this way: “Just as the telescope
opened up the universe and the microscope revealed the secrets of the
microcosm, the computer is now opening an exciting new window on
the nature of reality. Through its capacity to process what is too com-
plex for the unaided mind, the computer enables us for the first time to
simulate reality, to create models of complex systems like large mol-
ecules, chaotic systems, neural nets, the human body and brain, and
patterns of evolution and population growth.”4
This hope stems in large part from the observation that simple sets of
mathematical instructions, when carried out by a computer, can yield
fantastically complicated and yet strangely ordered effects. John von
Neumann may have been the first scientist to recognize this capability
of computers. In the 1950s, he invented the cellular automaton, which
in its simplest form is a screen divided into a grid of cells, or squares.
A set of rules relates the color, or state, of each cell to the state of its
immediate neighbors. A change in the state of a single cell can trigger a
cascade of changes throughout the entire system. “Life,” created in the
early 1970s by the British mathematician John Conway, remains one
of the most celebrated of cellular automatons. Whereas most cellular
automatons eventually settle into predictable, periodic behavior, Life
generates an infinite variety of patterns—including cartoonlike objects
that seem to be engaged in inscrutable missions. Inspired by Conway’s
strange computer world, a number of scientists began using cellular
automatons to model various physical and biological processes.
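The mechanics are easy to make concrete. Below is a minimal sketch of
Conway's Life in Python; the birth-and-survival rules are Conway's,
but the seed pattern and step count are merely illustrative choices:

    from collections import Counter

    def step(live):
        """Advance one generation; 'live' is the set of (x, y) live cells."""
        # Count the live neighbors of every cell adjacent to a live cell.
        counts = Counter((x + dx, y + dy)
                         for (x, y) in live
                         for dx in (-1, 0, 1)
                         for dy in (-1, 0, 1)
                         if (dx, dy) != (0, 0))
        # Birth: a dead cell with exactly three live neighbors comes alive.
        # Survival: a live cell with two or three live neighbors persists.
        return {cell for cell, n in counts.items()
                if n == 3 or (n == 2 and cell in live)}

    # A "glider," one of Life's cartoonlike travelers: after four
    # generations it reappears shifted one cell diagonally.
    cells = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
    for _ in range(4):
        cells = step(cells)
    print(sorted(cells))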
Another product of computer science that seized the imagination
of the scientific community was the Mandelbrot set. The set is named
after Benoit Mandelbrot, an applied mathematician at IBM who is
one of the protagonists of Gleick’s book Chaos (and whose work on
indeterministic phenomena led Gunther Stent to conclude that the
social sciences would never amount to much). Mandelbrot invented
fractals, mathematical objects displaying what is known as fractional
dimensionality: they are fuzzier than a line but never quite fill a plane.
Fractals also display patterns that keep recurring at finer and finer
scales. After coining the term fractal, Mandelbrot pointed out that
Blue and red dots skittered across a computer screen. But these were
not just colored dots. These were agents, simulated people, doing
the things that real people do: foraging for food, seeking mates, com-
peting and cooperating with each other. At least, that’s what Joshua
Epstein, the creator of this computer simulation, claimed. Epstein, a
sociologist from the Brookings Institution, was showing his simula-
tion to me and two other journalists at the Santa Fe Institute, where
Epstein was a visiting fellow. The institute, founded in the mid-1980s,
quickly became the headquarters of complexity, the self-proclaimed
successor to chaos as the new science that would transcend the stodgy
reductionism of Newton, Darwin, and Einstein.
As my colleagues and I watched Epstein’s colored dots and listened
to his even more colorful interpretation of their movements, we offered
polite murmurs of interest. But behind his back we exchanged jaded
smiles. None of us took this kind of thing very seriously. We all under-
stood, implicitly, that this was ironic science. Epstein himself, when
pressed, acknowledged that his model was not predictive in any way;
he called it a “laboratory,” a “tool,” a “neural prosthesis” for exploring
ideas about the evolution of societies. (These were all favorite terms
of Santa Fe’ers.) But during public presentations of his work, Epstein
had also claimed that simulations such as his would revolutionize the
social sciences, helping to solve their most intractable problems.6
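Epstein's own model was far more elaborate, but the mechanical flavor
of such agent simulations can be conveyed in a toy sketch; every rule
and number below is my own illustrative invention, not Epstein's code:

    import random

    # Toy foraging world: agents wander a grid, eat the food they land
    # on, pay a metabolic cost each step, and die when energy runs out.
    SIZE, STEPS = 20, 50
    random.seed(1)
    food = {(x, y): random.randint(0, 3)
            for x in range(SIZE) for y in range(SIZE)}
    agents = [{"pos": (random.randrange(SIZE), random.randrange(SIZE)),
               "energy": 10} for _ in range(30)]

    for _ in range(STEPS):
        for a in agents:
            x, y = a["pos"]
            dx, dy = random.choice([(-1, 0), (1, 0), (0, -1), (0, 1)])
            a["pos"] = ((x + dx) % SIZE, (y + dy) % SIZE)  # world wraps
            a["energy"] += food[a["pos"]] - 1  # eat, minus living cost
            food[a["pos"]] = 0                 # the cell is stripped bare
        agents = [a for a in agents if a["energy"] > 0]   # starvation

    print(len(agents), "agents survive")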
Another believer in the power of computers is John Holland, a
computer scientist with joint appointments at the University of Mich-
igan and the Santa Fe Institute. Holland was the inventor of genetic
algorithms, which are segments of computer code that can rearrange
themselves to produce a new program that can solve a problem more
efficiently. According to Holland, the algorithms are, in effect, evolv-
ing, just as the genes of living organisms evolve in response to the pres-
sure of natural selection.
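Holland's framework comes with considerable machinery (crossover
operators, schema theory, and so on), but the core evolutionary loop
is compact. In the sketch below the fitness function, matching a fixed
target bit string, is my own stand-in, not one of Holland's
applications:

    import random

    # Bare-bones genetic algorithm: a population of bit strings evolves
    # toward a target under selection, crossover, and mutation.
    TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]

    def fitness(s):
        return sum(a == b for a, b in zip(s, TARGET))

    random.seed(0)
    pop = [[random.randint(0, 1) for _ in TARGET] for _ in range(40)]
    for gen in range(200):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) == len(TARGET):
            break                          # a perfect match has evolved
        parents = pop[:20]                 # selection: keep the fittest half
        children = []
        while len(children) < 20:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(TARGET))
            child = a[:cut] + b[cut:]      # crossover: splice two parents
            if random.random() < 0.1:      # mutation: flip a random bit
                i = random.randrange(len(TARGET))
                child[i] ^= 1
            children.append(child)
        pop = parents + children

    print(gen, pop[0])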
Holland has proposed that it may be possible to construct a “uni-
fied theory of complex adaptive systems” based on mathematical tech-
niques such as those embodied in his genetic algorithms. He spelled
out his vision in a 1993 lecture:
Members of the Santa Fe Institute may not agree on what they are
studying, but they concur on how they should study it: with comput-
ers. Christopher Langton embodies the faith in computers that gave rise
to the chaos and complexity movements. He has proposed that simula-
tions of life run on a computer are alive—not sort of, or in a sense, or met-
aphorically, but actually. Langton is the founding father of artificial life, a
subfield of chaoplexity that has attracted much attention in its own right.
Langton has helped organize several conferences on artificial life—the
first held at Los Alamos in 1987—attended by biologists, computer scien-
tists, and mathematicians who share his affinity for computer animation.13
Artificial life is an outgrowth of artificial intelligence, a field that
preceded it by several decades. Whereas artificial-intelligence research-
ers seek to understand the mind better by simulating it on a computer,
proponents of artificial life hope to gain insights into a broad range
of biological phenomena through their simulations. And just as artifi-
cial intelligence has generated more portentous rhetoric than tangible
results, so has artificial life. As Langton stated in an essay introducing
the inaugural issue of the quarterly journal Artificial Life in 1994:
go!” Bak barked. “Once something reaches the masses, it’s already
done.” (Complexity, of course, is the exception to Bak’s rule.)
Bak had nothing but contempt for scientists who were content merely
to refine and extend the work of the pioneers. “There’s no need for that!
We don’t need the cleanup team here!” Fortunately, Bak said, many
mysterious phenomena continue to resist scientific understanding: the
evolution of species, human cognition, economics. “What these things
have in common is that they are very large things with many degrees
of freedom. They are what we call complex systems. And there will be
a revolution in science. These things will be made into hard sciences
in the next years in the same way that [particle] physics and solid-state
physics were made hard sciences in the last 20 years.” Bak rejected the
“pseudophilosophical, pessimistic, wishy-washy” view that these prob-
lems are simply too difficult for our puny human brains. “If I thought
that was true I wouldn’t be doing these things!” Bak exclaimed. “We
should be optimistic, concrete, and then we can go on. And I’m sure that
science will look totally different 50 years from now than it does today.”
In the late 1980s Bak and two colleagues proposed what quickly
became a leading candidate for a unified theory of complexity: self-or-
ganized criticality. His paradigmatic system is a sandpile. As one adds
sand to the top of the pile, it approaches what Bak calls the critical state,
in which even a single additional grain of sand dropped on the top of
the pile can trigger an avalanche down the pile’s sides. If one plots the
size and frequency of the avalanches occurring in this critical state, the
results conform to what is known as a power law: the frequency of ava-
lanches is inversely proportional to a power of their size.
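The sandpile is simple enough to simulate directly. In this sketch of
the rule Bak proposed with Chao Tang and Kurt Wiesenfeld (the grid
size and grain count are arbitrary choices of mine), a cell holding
four or more grains topples and sheds one grain to each neighbor; the
number of topplings set off by a single dropped grain is the size of
that avalanche:

    import random
    from collections import Counter

    N = 30
    grid = [[0] * N for _ in range(N)]
    random.seed(0)
    sizes = []
    for _ in range(20000):
        x, y = random.randrange(N), random.randrange(N)
        grid[x][y] += 1                     # drop one grain at random
        size = 0
        unstable = [(x, y)]
        while unstable:
            i, j = unstable.pop()
            if grid[i][j] < 4:
                continue
            grid[i][j] -= 4                 # the cell topples
            size += 1
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < N and 0 <= nj < N:   # edge grains fall off
                    grid[ni][nj] += 1
                    unstable.append((ni, nj))
        if size:
            sizes.append(size)

    # In the critical state a log-log plot of these tallies is roughly
    # a straight line: the power-law signature Bak describes.
    for s, n in sorted(Counter(sizes).items())[:10]:
        print(s, n)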
Bak credited the chaos pioneer Benoit Mandelbrot with having
pointed out that earthquakes, stock-market fluctuations, the extinction
of species, and many other phenomena displayed the same pattern of
power-law behavior. (In other words, the phenomena that Bak defined
as complex were also all chaotic.) “Since economics, geophysics, astron-
omy, biology have these singular features there must be a theory here,”
Bak said. He hoped his theory might explain why small earthquakes are
common and large ones uncommon, why species persist for millions
of years and then vanish, why stock markets crash. “We can’t explain
everything about everything, but something about everything.”
“More Is Different”
agreed with the “More Is Different” principle set forth by his colleague
Philip Anderson. “I have no idea what he said,” Gell-Mann replied dis-
dainfully. I explained Anderson’s idea that reductionist theories have
limited explanatory power; one cannot go back up the chain of expla-
nation from particle physics to biology. “You can! You can!” Gell-Mann
exclaimed. “Did you read what I wrote about this? I devoted two or
three chapters to this!”
Gell-Mann said that in principle one can go back up the chain of
explanation, but in practice one often cannot, because biological phe-
nomena stem from so many random, historical, contingent circum-
stances. That is not to say that biological phenomena are ruled by some
mysterious laws of their own that act independently of the laws of
physics. The whole point of the doctrine of emergence is that “we don’t
need something else in order to get something else,” Gell-Mann said. “And
when you look at the world that way, it just falls into place! You’re not
tortured by these strange questions any more!”
Gell-Mann thus rejected the possibility—raised by Stuart Kauffman
and others—that there might be a still-undiscovered law of nature that
explains why the universe has generated so much order in spite of the
supposedly universal drift toward disorder decreed by the second law
of thermodynamics. This issue, too, was settled, Gell-Mann replied.
The universe began in a wound-up state far from thermal equilibrium.
As the universe winds down, entropy increases, on average, through-
out the system, but there can be many local violations of that tendency.
“It’s a tendency, and there are lots and lots of eddies in that process,”
he said. “That’s very different from saying complexity increases! The
envelope of complexity grows, expands. It’s obvious from these other
considerations it doesn’t need another new law, however!”
The universe does create what Gell-Mann calls frozen accidents—
galaxies, stars, planets, stones, trees—complex structures that serve as
a foundation for the emergence of still more complex structures. “As a
general rule, more complex life-forms emerge, more complex computer
programs, more complex astronomical objects emerge in the course
of nonadaptive stellar and galactic evolution and so on. But! If we look
very very very far into the future, maybe it won’t be true any more!”
Eons from now the era of complexity could end, and the universe could
degenerate into “photons and neutrinos and junk like that and not a lot
of individuality.” The second law would get us after all.
“What I’m trying to oppose is a certain tendency toward obscuran-
tism and mystification,” Gell-Mann continued. He emphasized that
there was much to be understood about complex systems; that was
why he helped to found the Santa Fe Institute. “There’s a huge amount
of wonderful research going on. What I say is that there is no evidence
that we need—I don’t know how else to say it—something else!” Gell-
Mann, as he spoke, wore a huge sardonic grin, as if he could scarcely
contain his amusement at the foolishness of those who might disagree
with him.
Gell-Mann noted that “the last refuge of the obscurantists and mys-
tifiers is self-awareness, consciousness.” Humans are obviously more
intelligent and self-aware than other animals, but they are not quali-
tatively different. “Again, it’s a phenomenon that appears at a certain
level of complexity and presumably emerges from the fundamental
laws plus an awful lot of historical circumstances. Roger Penrose has
written two foolish books based on the long-discredited fallacy that
Gödel’s theorem has something to do with consciousness requiring”—
pause—“something else.”
If scientists did discover a new fundamental law, Gell-Mann said,
they would do so by forging further into the microrealm, in the
direction of superstring theory. Gell-Mann felt that superstring the-
ory would probably be confirmed as the final, fundamental theory
of physics early in the next millennium. But would such a far-fetched
theory, with all its extra dimensions, ever really be accepted? I asked.
Gell-Mann stared at me as if I had expressed a belief in reincarnation.
“You’re looking at science in this weird way, as if it were a matter of an
opinion poll,” he said. “The world is a certain way, and opinion polls
have nothing to do with it! They do exert pressures on the scientific
enterprise, but the ultimate selection pressure comes from comparison
with the world.” What about quantum mechanics? Would we be stuck
with its strangeness? “I don’t think there’s anything strange about it!
It’s just quantum mechanics! Acting like quantum mechanics! That’s
all it does!” To Gell-Mann, the world made perfect sense. He already
had The Answer.
Is science finite or infinite? For once, Gell-Mann did not have a pre-
packaged answer. “That’s a very difficult question,” he replied soberly.
“I can’t say.” His view of how complexity emerges from fundamental
laws, he said, “still leaves open the question of whether the whole sci-
entific enterprise is open-ended. After all, the scientific enterprise can
also concern itself with all kinds of details.” Details.
One of the things that makes Gell-Mann so insufferable is that he is
almost always right. His assertion that Kauffman, Bak, Penrose, and
others will fail to find something else just beyond the horizon of cur-
rent science—something that can explain better than current science
can the mystery of life and of human consciousness and of existence
itself—will probably prove to be correct. Gell-Mann may err—dare
one say it?—only in thinking that superstring theory, with all its extra
dimensions and its infinitesimal loops, will ever become an accepted
part of the foundation of physics.
related concepts had been warmly received not only by scientists but
also by the lay public because society was itself in a state of flux. The
public’s faith in great unifying ideas, whether religious or political or
artistic or scientific, was dwindling.
“Even people who are very Catholic are no longer so Catholic as
their parents or grandparents were, probably. We are no more believ-
ing in Marxism or liberalism in the classical way. We are no more
believing classical science.” The same is true of the arts, music, liter-
ature; society has learned to accept a multiplicity of styles and world-
views. Humanity has arrived, Prigogine summarized, at “the end of
certitude.”
Prigogine paused, allowing us to ponder the magnitude of his
announcement. I broke the hushed silence by pointing out that some
people, such as religious fundamentalists, seemed to be clinging to cer-
titude more fiercely than ever. Prigogine listened politely, then asserted
that fundamentalists were merely exceptions to the rule. Abruptly, he
fixed his gaze on a prim, blond-haired woman, the deputy director of
his institute, sitting across the table from us. “What is your opinion?”
he asked. “I agree completely,” she replied. She hastily added, perhaps
in response to the craven snickers of her colleagues, that fundamental-
ism “seemed to be a response to a frantic world.”
Prigogine nodded paternally. He acknowledged that his assertions
concerning the end of certitude had elicited “violent reactions” in the
intellectual establishment. The New York Times had declined to review
Order out of Chaos because, Prigogine had heard, the editors consid-
ered his discussion of the end of certitude “too dangerous.” Prigogine
understood such fears. “If science is not able to give certitude, what
should you believe? I mean, before it was very easy. Either you believe
in Jesus Christ, or you believe in Newton. It was very simple. But now,
as I say, if science is not giving you certitude but probabilities, then it’s
a dangerous book!”
Prigogine nonetheless thought his view did justice to the depth-
less mystery of the world, and of our own existence. That was what
he meant by his phrase “the reenchantment of nature.” After all, con-
sider this lunch we were having right now. What theory could predict
this! “The universe is a strange thing,” Prigogine said, cranking the
“Another answer is, ‘Well, it’s okay, but we can’t do it.’ The right
answer is obviously an alloy of those two complements. We have very
few tools. We can’t solve problems like that.”
Moreover, particle physicists are overly concerned with finding
theories that are merely true, in the sense that they account for avail-
able data; the goal of science should be to generate “thoughts in your
head” that “stand a high chance of being new or exciting,” Feigenbaum
explained. “That’s the desideratum.” He added: “There isn’t any secu-
rity by knowing that something is true, at least as far as I’m concerned.
I’m thoroughly indifferent to that. I like to know that I have a way of
thinking about things.” I began to suspect that Feigenbaum, like David
Bohm, had the soul of an artist, a poet, even a mystic: he sought not
truth, but revelation.
Feigenbaum noted that the methodology of particle physics—and
physics generally—had been to try to look at the simplest possible
aspects of reality, “where everything has been stripped away.” The
most extreme reductionists had suggested that looking at more com-
plex phenomena was merely “engineering.” But as a result of advances
in chaos and complexity, he said, “some of these things that one rele-
gated to engineering are now regarded as reasonable questions to ask
from a more theoretical viewpoint. Not just to get the right answer but
to understand something about how they work. And that you can even
make sense out of that last comment flies in the face of what it means
for a theory to be finished.”
On the other hand, chaos, too, had generated too much hype. “It’s
a fraud to have named the subject ‘chaos,’” he said. “Imagine one of
my [particle-physicist] colleagues has gone to a party and meets some-
one and the person is all bubbling over about chaos and tells him that
this reductionist stuff is all bullshit. Well, it’s infuriating, because it’s
completely stupid what the person has been told,” Feigenbaum said. “I
think it’s regrettable that people are sloppy, and they end up serving as
representatives.”
Some of his colleagues at the Santa Fe Institute, Feigenbaum added,
also had too naive a faith in the power of computers. “The proof is in
the pudding,” he said, and paused, as if considering how to proceed
diplomatically. “It’s very hard to see things in numerical experiments.
That is, people want to have fancier and fancier computers to simulate
fluids. There is something to be learned in simulating fluids, but unless
you know what you’re looking for, you’re not going to see anything.
Because after all, if I just look out the window, there’s an overwhelm-
ingly better simulation than I could ever do on a computer.”
He nodded toward his window, beyond which the leaden East River
flowed. “I can’t interrogate it quite as sharply, but there’s so much stuff
in that numerical simulation that if I don’t know what to interrogate
it about, I will have learned nothing.” For these reasons much of the
recent work on nonlinear phenomena “has not led to answers. The rea-
son for that is, these are truly hard problems, and one doesn’t have any
tools. And the job should really be to do those insightful calculations
which require some piece of faith and good luck as well. People don’t
know how to begin doing these problems.”
I admitted that I was often confused by the rhetoric of people in
chaos and complexity. Sometimes they seemed to be delineating the
limits of science—such as the butterfly effect—and sometimes they
implied that they could transcend those limits. “We are building tools!”
Feigenbaum cried. “We don’t know how to do these problems. They
are truly hard. Every now and then we get a little pocket where we
know how to do it, and then we try to puff it out as far as it can go. And
when it reaches the border of where it is going, then people wallow for
a while, and then they stop doing it. And then one waits for some new
piece of insight. But it is literally the business of enlarging the borders
of what falls under the suzerainty of science. It is not being done from
an engineering viewpoint. It isn’t just to give you the answer to some
approximation.”
“I want to know why,” he continued, still staring at me hard. “Why
does the thing do this?” Was it possible that this enterprise could, well,
fail? “Of course!” Feigenbaum bellowed, and he laughed maniacally.
He confessed that he had been stymied himself of late. Up through
the late 1980s he had sought to refine a method for describing how a
fractal object, such as a cloud, might evolve over time when perturbed
by various forces. He wrote two long papers on the topic that were
published in 1988 and 1989 in a relatively obscure physics journal.37 “I
have no idea how well they’ve been read,” Feigenbaum said defiantly.
“In fact, I’ve never been able to give a talk on them.” The problem, he
suggested, might be that no one could understand what he was getting
at. (Feigenbaum was renowned for obscurity as well as for brilliance.)
Since then, he added, “I haven’t had a further better idea to know how
to proceed in this.”
In the meantime, Feigenbaum had turned to applied science. Engi-
neering. He had helped a map-making company develop software for
automatically constructing maps with minimal spatial distortion and
maximum aesthetic appeal. He belonged to a committee that was
redesigning U.S. currency to make it less susceptible to counterfeiting.
(Feigenbaum came up with the idea of using fractal patterns that blur
when photocopied.) I noted that these sounded like what would be, for
most scientists, fascinating and worthy projects. But people familiar
with Feigenbaum’s former incarnation as a leader of chaos theory, if
they heard he now worked on maps and currency, might think . . . .
“He’s not doing serious things any more,” Feigenbaum said qui-
etly, as if to himself. Not only that, I added. What people might think
was that if someone who was arguably the most gifted explorer of
chaos could not proceed any further, then perhaps the field had run its
course. “There’s some truth to that,” he replied. He acknowledged that
he hadn’t really had any good ideas about how to extend chaos theory
since 1989. “One is on the lookout for things that are substantial, and at
the moment. . . .” He paused. “I don’t have a thought. I don’t know. . . .”
He turned his large, luminous eyes once again toward the river beyond
his window, as if seeking a sign.
Feeling somewhat guilty, I told Feigenbaum that I would love to see
his last papers on chaos. Did he have any reprints? In response, Fei-
genbaum thrust himself from his chair and careened wildly toward a
row of filing cabinets on the far side of his office. En route, he cracked
his shin against a low-lying coffee table. Wincing, teeth clenched, Fei-
genbaum limped onward, wounded by his collision with the world.
The scene was a grotesque inversion of Samuel Johnson’s famous
stone-kicking episode. The suddenly malevolent-looking coffee table
seemed to be gloating: “I refute Feigenbaum thus.”
Making Metaphors
Just as lovers begin talking about their relationship only when it
sours, so will scientists become more self-conscious and doubt-
ful as their efforts yield diminishing returns. Science will follow
the path already trodden by literature, art, music, philosophy. It will
become more introspective, subjective, diffuse, obsessed with its own
methods. In the spring of 1994 I saw the future of science in micro-
cosm when I sat in on a workshop at the Santa Fe Institute titled “The
Limits to Scientific Knowledge.” During the three-day meeting, a
score of thinkers, including mathematicians, physicists, biologists, and
economists, pondered whether there were limits to science, and, if so,
whether science could know them. The meeting was organized by two
people associated with the Santa Fe Institute: John Casti, a mathema-
tician who has written numerous popular books on science and math-
ematics, and Joseph Traub, a theoretical computer scientist, who is a
professor at Columbia University.1
I had come to the meeting in large part to meet Gregory Chaitin, a
mathematician and computer scientist at IBM who had devoted him-
self since the early 1960s to exploring and extending Gödel’s theo-
rem through what he called algorithmic information theory. Chaitin
had come close to proving, as far as I could tell, that a mathematical
theory of complexity was not possible. Before meeting Chaitin, I had
pictured him as a gnarled, sour-looking man with hairy ears and an
eastern European accent; after all, a kind of old-world philosophical
angst suffused his research on the limits of mathematics. But Chaitin
in no way resembled my internal model. Stout, bald, and boyish, he
on our ability to classify the world. But the world doesn’t come already
packaged into premade categories. We can “carve it up,” or classify
it, in many ways. In order to classify phenomena, moreover, we must
throw some information away. Kauffman concluded with this incanta-
tion: “To be is to classify is to act, all of which means throwing away
information. So just the act of knowing requires ignorance.” His audi-
ence looked simultaneously impressed and annoyed.
Ralph Gomory then said a few words. A former vice president of
research at IBM, Gomory headed the Sloan Foundation, a philanthropic
organization that sponsored science-related projects, including the Santa
Fe workshop. When listening to someone else speak, and even when
speaking himself, Gomory wore an expression of deep incredulity. He
tilted his head forward, as if peering over invisible bifocals, while knit-
ting his thick black eyebrows together and wrinkling his brow.
Gomory explained that he had decided to support the workshop
because he had long felt that the educational system placed too much
emphasis on what was known and too little on what was unknown
or even unknowable. Most people aren’t even aware of how little is
known, Gomory said, because the educational system presents such a
seamless, noncontradictory view of reality. Everything we know about
the ancient Persian Wars, for example, derives from a single source,
Herodotus. How do we know whether Herodotus was an accurate
reporter? Maybe he had incomplete or inaccurate information! Maybe
he was biased or making things up! We will never know!
Later Gomory remarked that a Martian, by observing humans play-
ing chess, might be able to deduce the rules correctly. But could the
Martian ever be sure that those were the correct rules, or the only
rules? Everyone pondered Gomory’s riddle for a moment. Then Kauff-
man speculated on how Wittgenstein might have responded to it.
Wittgenstein would have “suffered egregiously,” Kauffman said, over
the possibility that the chess players might make a move—deliberately
or not—that broke the rules. After all, how could the Martian tell if
the move was just a mistake or the result of another rule? “Do you get
this?” Kauffman queried Gomory.
“I don’t know who Wittgenstein is, for starters,” Gomory replied
irritably.
the other forces of nature, because the effects only become apparent at
distance scales and energies that are beyond the range of any conceiv-
able accelerator.
A similarly pessimistic note was sounded by Rolf Landauer, a phys-
icist at IBM and a pioneer in the study of the physical limits of compu-
tation. Landauer spoke with a German-accented growl that sharpened
the edge of his sardonic sense of humor. When one speaker kept stand-
ing in the way of his own viewgraphs, Landauer snapped, “Your talk
may be transparent, but you are not!”
Landauer contended that scientists could not count on computers to
keep growing in power indefinitely. He granted that many of the sup-
posed physical constraints once thought to be imposed on computation
by the second law of thermodynamics or quantum mechanics had been
shown to be spurious. On the other hand, the costs of computer-man-
ufacturing plants were rising so fast that they threatened to bring to
a halt the decades-long decline in computation prices. Landauer also
doubted whether computer designers would soon harness exotic quan-
tum effects such as superposition—the ability of a quantum entity to
exist in more than one state at the same time—and thereby transcend
the capabilities of current computers, as some theorists had proposed.
Such systems would be so sensitive to minute quantum-level disrup-
tions that they would be effectively useless, Landauer argued.
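The superposition at issue can be stated compactly. A qubit, unlike a
classical bit, holds a weighted blend of both of its states, and a
register of n qubits carries amplitudes for all 2^n bit strings at
once; that is the source both of the hoped-for speedup and, as
Landauer stressed, of the fragility, since noise disturbs the
amplitudes themselves. In standard notation:

    |\psi\rangle = \alpha\,|0\rangle + \beta\,|1\rangle,
        \qquad |\alpha|^2 + |\beta|^2 = 1

    |\Psi\rangle = \sum_{x \in \{0,1\}^n} c_x\,|x\rangle,
        \qquad \sum_x |c_x|^2 = 1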
Brian Arthur, a ruddy-faced economist at the Santa Fe Institute who
spoke with a lilting Irish accent, steered the conversation toward the
limits of economics. In trying to predict how the stock market will per-
form, he said, an investor must make guesses about how others will
guess about how others will guess—and so on ad infinitum. The eco-
nomic realm is inherently subjective, psychological, and hence unpre-
dictable; indeterminacy “percolates through the system.” As soon as
economists try to simplify their models—by assuming that investors
can have perfect knowledge of the market or that prices represent some
true value—the models become unrealistic; two economists gifted
with infinite intelligence will come to different conclusions about the
same system. All economists can really do is say, “Well, it could be this,
it could be that.” On the other hand, Arthur added, “if you’ve made
money in the markets all the economists will listen to you.”
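Arthur's regress has a classic toy form, the guess-two-thirds-of-the-
average game (my illustration, not Arthur's own model): each further
level of "they will guess that I will guess" multiplies a naive guess
by two-thirds, so perfect mutual reasoning collapses to zero, while
real players truncate the regress at unpredictable depths:

    # Depth-k reasoning in the "guess 2/3 of the average" game.
    # Level 0 is the naive guess; each deeper level anticipates the
    # one below it, and in the limit the only consistent guess is 0.
    guess = 50.0
    for depth in range(10):
        print(depth, round(guess, 2))
        guess *= 2 / 3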
Kauffman then repeated what Arthur had just said but in a more
abstract way. People are “agents” who must continually adjust their
“internal models” in response to the perceived adjustments of the
internal models of other agents, thus creating a “complex, coadapting
landscape.”
Landauer, scowling, interjected that there were much more obvi-
ous reasons that economic phenomena were impossible to predict than
these subjective factors. AIDS, third-world wars, even the diarrhea of
the chief analyst of a large mutual fund can have a profound effect on
the economy, he said. What model can possibly predict such events?
Roger Shepard, a psychologist from Stanford who had been listen-
ing in silence, finally piped up. Shepard seemed faintly melancholy. His
apparent mood may have been an illusion engendered by his droopy,
ivory-hued mustache—or a very real by-product of his obsession with
unanswerable questions. Shepard admitted he had come here in part
to learn whether scientific or mathematical truths were discovered or
invented. He had also been thinking a good deal lately about where
scientific knowledge really existed and had concluded that it could not
exist independently of the human mind. A textbook on physics, with-
out a human to read it, is just paper and ink spots. But that raised what
Shepard considered to be a disturbing issue. Science appears to be get-
ting more and more complicated and thus more and more difficult to
understand. It seems quite possible that in the future some scientific
theories, such as a theory of the human mind, will be too complex for
even the most brilliant scientist to understand. “Maybe I’m old-fash-
ioned,” Shepard said, but if a theory is so complicated that no single
person can understand it, what satisfaction can we take in it?
Traub, too, was troubled by this issue. We humans may believe
Occam’s razor—which holds that the best theories are the simplest—
because these are the only theories our puny brains can comprehend.
But maybe computers won’t be subject to this limitation, Traub added.
Maybe computers will be the scientists of the future.
In biology, someone remarked gloomily, “Occam’s razor cuts your
throat.”
Gomory noted that the task of science was to find those niches of
reality that lend themselves to understanding, given that the world is
The more Rössler spoke, the more I began to feel an affinity for his
ideas. During one of the breaks, I asked if he felt that intelligent com-
puters might transcend the limits of human science. He shook his head
adamantly. “No, that’s not possible,” he replied in an intense whisper. “I
would bet on dolphins, or sperm whales. They have the biggest brains
on the earth.” Rössler informed me that when one sperm whale is shot
by whalers, others sometimes crowd around it, forming a starlike pat-
tern, and are themselves killed. “Usually people think that is just blind
instinct,” Rössler said. “In reality it’s their way of showing humankind
that they are much higher evolved than humans.” I just nodded.
Toward the end of the meeting, Traub proposed that everyone split
up into focus groups to discuss the limits of specific fields: physics,
mathematics, biology, social sciences. A social scientist announced that
he didn’t want to be in the social science group; he had come specif-
ically to talk to, and learn from, people outside his field. His remark
provoked a few me-toos from others. Someone pointed out that if
everyone felt the way the social scientist did, there would be no social
scientists in the social science section, biologists in the biology section,
and so on. Traub said his colleagues could split up any way they chose;
he was just making a suggestion. The next question was, where would
the different groups meet? Someone proposed that they disperse to dif-
ferent rooms, so that certain loud talkers wouldn’t disturb the other
groups. Everyone looked at Chaitin. His promise to be quiet was met
with jeers. More discussion. Landauer remarked that there was such a
thing as too much intelligence applied to a simple problem. Just when
everything seemed hopeless, groups somehow, spontaneously, formed,
more or less following Traub’s initial suggestion, and wandered off to
different locations. This was, I thought, an impressive display of what
the Santa Fe’ers like to call self-organization, or order out of chaos; per-
haps life began this way.
I tagged along with the mathematics group, which included Chai-
tin, Landauer, Shepard, Doria, and Rössler. We found an unoccupied
lounge with a chalkboard. For several minutes, everyone talked about
what they should talk about. Then Rössler went to the chalkboard and
scribbled down a recently discovered formula that gives rise to a fan-
tastically complicated mathematical object, “the mother of all fractals.”
Landauer asked Rössler, politely, what this fractal had to do with any-
thing. It “soothes the brain,” Rössler replied. It also fed his hope that
physicists might be able to describe reality with these kinds of chaotic
but classical formulas and thus dispense with the terrible uncertainties
of quantum mechanics.
Shepard interjected that he had joined the mathematics subgroup
because he wanted the mathematicians to tell him whether mathemat-
ical truths were invented or discovered. Everyone talked about that for
a while without coming to a decision. Chaitin said that most mathema-
ticians leaned toward the discovery view, but Einstein was apparently
an inventionist.
Chaitin, during a lull, once again proposed that mathematics was
dead. In the future, mathematicians would be able to solve problems
only with enormous computer calculations that would be too complex
for anyone to understand.
Everyone seemed fed up with Chaitin. Mathematics works, Lan-
dauer snarled. It helps scientists solve problems. Obviously it’s not dead.
Others piled on, accusing Chaitin of exaggeration.
Chaitin, for the first time, appeared chastened. His pessimism, he
conjectured, might be linked to the fact that he had eaten too many
bagels that morning. He remarked that the pessimism of the German
philosopher Schopenhauer, who advocated suicide as the supreme
expression of existential freedom, had been traced to his bad liver.
Steen Rasmussen, a physicist and Santa Fe regular, reiterated the
familiar argument of chaoplexologists that traditional reductionist
methods cannot solve complex problems. Science needs a “new New-
ton,” he said, someone who can invent a whole new conceptual and
mathematical approach to complexity.
Landauer scolded Rasmussen for succumbing to the “disease” afflict-
ing many Santa Fe researchers, the belief in some “great religious insight”
that would instantaneously solve all their problems. Science doesn’t work
that way; different problems require different tools and techniques.
Rössler unburdened himself of a long, tangled soliloquy whose mes-
sage seemed to be that our brains represent only one solution to the
multiple problems posed by the world. Evolution could have created
other brains representing other solutions.
When I told Chaitin I was writing a book about the possibility that
science might be entering an era of diminishing returns, I expected
empathy, but he snorted in disbelief. “Is that true? I hope it’s not true,
because it would be pretty damn boring if it were true. Every period
seems to think that. Who was it—Lord Kelvin?—who said that all we
have to do is to measure a few more decimal points?” When I men-
tioned that historians could find no evidence that Kelvin ever made such
a remark, Chaitin shrugged. “Look at all the things we don’t know! We
don’t know how the brain works. We don’t know what memory is. We
don’t know what aging is.” If we can figure out why we age, maybe we
can figure out how to stop the aging process, Chaitin said.
I reminded Chaitin that in Santa Fe he had suggested that mathe-
matics and even science as a whole might be approaching their ulti-
mate limits. “I was just trying to wake people up,” he replied. “The
audience was dead.” His own work, he emphasized, represented a
beginning, not an end. “I may have a negative result, but I read it as
telling you how to go about finding new mathematical truths: Behave
more like a physicist does. Do it more empirically. Add new axioms.”
Chaitin said he could not pursue his work on the limits of math-
ematics if he was not an optimist. “Pessimists would look at Gödel
and they would start to drink scotch until they died of cirrhosis of the
liver.” Although the human condition might be as much “a mess” as it
was thousands of years ago, there was no denying the enormous prog-
ress that we had made in science and technology. “When I was a child
everybody talked about Gödel with mystical respect. This was almost
incomprehensible, certainly profound. And I wanted to understand
what the hell he was saying and why it was true. And I succeeded! So
that makes me optimistic. I think we know very little, and I hope we
know very little, because then it will be much more fun.”
Chaitin recalled that he had once gotten into an argument with the
physicist Richard Feynman about the limits of science. The incident
occurred at a conference on computation held in the late 1980s, shortly
before Feynman died. When Chaitin opined that science was just
beginning, Feynman became furious. “He said we already know the
physics of practically everything in everyday life, and anything that’s
left over is not going to be relevant.”
Chaitin, who had begun talking faster and faster and was in a kind of
frenzy, cut me off. “You’re a pessimist! You’re a pessimist!” he shouted.
He reminded me of something I had told him earlier in our conversa-
tion, that my wife was pregnant with our second child. “You conceived
a child! You’ve gotta be pretty optimistic! You should be optimistic! I
should be pessimistic! I’m older than you are! I don’t have children!
IBM is doing badly!” A plane droned, gulls shrieked, and Chaitin’s
howls of laughter fled, unechoed, across the mighty Hudson.
Actually, Chaitin’s own career fits rather nicely into Gunther Stent’s
diminishing-returns scenario. Algorithmic information theory rep-
resents not a genuinely new development but an extension of Gödel’s
insight. Chaitin’s work also supports Stent’s contention that science, in
its attempt to plumb ever-more-complex phenomena, is outrunning
our innate axioms. Stent left several loopholes open in his gloomy
prophecy. Society might become so wealthy that it would pay for even
the most whimsical scientific experiments—particle accelerators that
girdle the globe!—without regard for cost. Scientists might also achieve
some enormous breakthrough, such as a faster-than-light transporta-
tion system or intelligence-enhancing genetic-engineering techniques,
that would enable us to transcend our physical and cognitive limits. I
would add one further possibility to this list. Scientists might discover
extraterrestrial life, creating a glorious new era in comparative biology.
Barring such outcomes, science may generate increasingly incremental
returns and gradually grind to a halt.
What, then, will become of humanity? In Golden Age, Stent sug-
gested that science, before it ends, may at least deliver us from our most
pressing social problems, such as poverty and disease and even conflict
between states. The future will be peaceful and comfortable, if boring.
Most humans will dedicate themselves to the pursuit of pleasure. In
1992, Francis Fukuyama set forth a rather different vision of the future
in The End of History.4 Fukuyama, a political theorist who worked in
the State Department during the Bush administration, defined history
had written him letters addressing that theme. “I think they were
space-travel buffs,” he snickered. “They said, ‘Well, you know, if we
don’t have ideological wars to fight we can always fight nature in a cer-
tain sense by pushing back the frontiers of knowledge and conquering
the solar system.’”
He emitted another scornful little chuckle. So you don’t take these
predictions seriously? I asked. “No, not really,” he said wearily. Trying
to goad something further out of him, I revealed that many prominent
scientists and philosophers—not just fans of Star Trek—believed that
science, the quest for pure knowledge, represented the destiny of man-
kind. “Hunh,” Fukuyama replied, as though he were no longer listen-
ing to me but had reentered that delightful tract by Hegel he had been
perusing before I called. I signed off.
Without even giving it much thought, Fukuyama had reached the
same conclusion that Stent had put forward in The Coming of the Golden
Age. From very different perspectives, both saw that science was less a
byproduct of our will to know than of our will to power. Fukuyama’s
bored rejection of a future dedicated to science spoke volumes. The
vast majority of humans, including not only the ignorant masses but
also highbrow types such as Fukuyama, find scientific knowledge
mildly interesting, at best, and certainly not worthy of serving as the
goal of all humankind. Whatever the long-term destiny of Homo sapi-
ens turns out to be—Fukuyama’s eternal warfare or Stent’s eternal
hedonism, or, more likely, some mixture of the two—it will probably
not be the pursuit of scientific knowledge.
or just a fluke? Also, scientific knowledge, far from making our lives
meaningful, has forced us to confront the pointlessness (as Steven
Weinberg likes to put it) of existence.
The demise of science will surely exacerbate our spiritual crisis. The
cliché is inescapable. In science as in all else, the journey is what mat-
ters, not the destination. Science initially awakens our sense of won-
der as it reveals some new, intelligible intricacy of the world. But any
discovery becomes, eventually, anticlimactic. Let’s grant that a mir-
acle occurs and physicists somehow confirm that all of reality stems
from the wrigglings of loops of energy in 10-dimensional hyperspace.
How long can physicists, or the rest of us, be astounded by that find-
ing? If this truth is final, in the sense that it precludes all other pos-
sibilities, the quandary is all the more troubling. This problem may
explain why even seekers such as Gregory Chaitin—whose own work
implies otherwise—find it hard to accept that pure science, the great
quest for knowledge, is finite, let alone already over. But the faith that
science will continue forever is just that, a faith, one that stems from
our inborn vanity. We cannot help but believe that we are actors in an
epic drama dreamed up by some cosmic playwright, one with a taste
for suspense, tragedy, comedy, and—ultimately, we hope—happy end-
ings. The happiest ending would be no ending.
If my experience is any guide, even people with only a casual interest
in science will find it hard to accept that science’s days are numbered.
It is easy to understand why. We are drenched in progress, real and
artificial. Every year we have smaller, faster computers, sleeker cars,
more channels on our televisions. Our views of progress are further
distorted by what could be called the Star Trek factor. How can science
be approaching a culmination when we haven’t invented spaceships
that travel at warp speed yet? Or when we haven’t acquired the fantastic
psychic powers—enhanced by both genetic engineering and electronic
prosthetics—described in cyberpunk fiction? Science itself—or, rather,
ironic science—helps to propagate these fictions. One can find discus-
sions of time travel, teleportation, and parallel universes in reputable,
peer-reviewed physics journals. And at least one Nobel laureate in phys-
ics, Brian Josephson, has declared that physics will never be complete
until it can account for extrasensory perception and telekinesis.5
But Brian Josephson long ago abandoned real physics for mysticism
and the occult. If you truly believe in modern physics, you are unlikely
to give much credence to ESP or spaceships that can travel faster than
light. You are also unlikely to believe, as both Roger Penrose and
superstring theorists do, that physicists will ever find and empirically
validate a unified theory, one that fuses general relativity and quan-
tum mechanics. The phenomena posited by unified theories unfold in
a microrealm that is even more distant in its way—even further from
the reach of any conceivable human experiment—than the edge of our
universe. There is only one scientific fantasy that seems to have any
likelihood of being fulfilled. Perhaps some day we will create machines
that can transcend our physical, social, and cognitive limits and carry
on the quest for knowledge without us.
Humanity, Nietzsche told us, is just a stepping stone, a bridge lead-
ing to the Superman. If Nietzsche were alive today, he would
surely entertain the notion that the Superman might be made not
of flesh and blood, but of silicon. As human science wanes, those who
hope that the quest for knowledge will continue must put their faith not
in Homo sapiens, but in intelligent machines. Only machines can over-
come our physical and cognitive weaknesses—and our indifference.
There is an odd little subculture within science whose members spec-
ulate about how intelligence might evolve when or if it sheds its mortal
coil. Participants are not practicing science, of course, but ironic science,
or wishful thinking. They are concerned not with what the world is, but
with what it might be or should be centuries or millennia or eons hence.
The literature of this field—call it scientific theology—may nonetheless
provide fresh perspectives of some age-old philosophical and even theo-
logical questions: What would we do if we could do anything? What is
the point of life? What are the ultimate limits of knowledge? Is suffering
a necessary component of existence, or can we attain eternal bliss?
One of the first modern practitioners of scientific theology was the
British chemist (and Marxist) J. D. Bernal. In his 1929 book, The World,
the Flesh and the Devil, Bernal argued that science would soon give us
the power to direct our own evolution. At first, Bernal suggested, we
might try to improve ourselves through genetic engineering, but even-
tually we would abandon the bodies bequeathed us by natural selec-
tion for more efficient designs.
Like others of his ilk, Bernal became afflicted with a peculiar lack of
imagination, or of nerve, when considering the end stage of the evo-
lution of intelligence. Bernal’s descendants, such as Hans Moravec, a
robotics engineer at Carnegie Mellon University, have tried to over-
come this problem, with mixed results. Moravec is a cheerful, even
giddy man; he seems to be literally intoxicated by his own ideas. As
he unveiled his visions of the future during a telephone conversation,
he emitted an almost continuous, breathless giggle, whose intensity
seemed proportional to the preposterousness of what he was saying.
Moravec prefaced his remarks by asserting that science desperately
needed new goals. “Most of the things that have been accomplished in
this century were really nineteenth-century ideas,” he said. “It’s time
for fresh ideas now.” What goal could be more thrilling than creating
“mind children,” intelligent machines capable of feats we cannot even
imagine? “You raise them and give them an education, and after that
it’s up to them. You do your best but you can’t predict their lives.”
Moravec had first spelled out how this speciation event might
unfold in Mind Children, published in 1988, when private companies
and the federal government were pouring money into artificial intel-
ligence and robotics.2 Although these fields had not exactly prospered
since then, Moravec remained convinced that the future belonged to
machines. By the end of the millennium, he assured me, engineers will
create robots that can do household chores. “A robot that dusts and
vacuums is possible within this decade. I’m sure of it. It’s not even a
controversial point anymore.” (Actually, home robots are looking less
likely as the millennium approaches, but never mind; scientific theol-
ogy requires some suspension of disbelief.)
By the middle of this century—the twenty-first—Moravec said,
robots will be as intelligent as humans and will essentially take over
the economy. “We’re really out of work at that point,” Moravec chor-
tled. Humans might pursue “some quirky stuff like poetry” that springs
from psychological vagaries still beyond the grasp of robots, but robots
will have all the important jobs. “There’s no point in putting a human
being in a company,” Moravec said, “because they’ll just screw it up.”
On the bright side, he continued, machines will generate so much
wealth that humans might not have to work; machines will also elim-
inate poverty, war, and other scourges of premachine history. “Those
are trivial problems,” Moravec said. Humans might still, through their
purchasing power, exert some control over robot-run corporations.
“We’d choose which ones we’d buy from and which ones we wouldn’t.
So in the case of factories that make home robots, we would buy from
the ones that make robots that are nice.” Humans could also boycott
robot corporations whose products or policies seemed inimical to
humans.
Inevitably, Moravec continued, the machines will expand into outer
space in pursuit of fresh resources. They will fan out through the uni-
verse, converting raw matter into information-processing devices.
Robots within this frontier, unable to expand physically, will try to
use the available resources more and more effectively and turn to pure
computation and simulation. “Eventually,” Moravec explained, “every
little quantum of action has a physical meaning. Basically you’ve got
cyberspace, which is computing at higher and higher efficiency.” As
beings within this cyberspace learn to process information more rap-
idly, it will seem to take longer for messages to pass between them,
since those messages can still travel only at the speed of light. “So the
effect of all this improvement in encoding would be to increase the size
of the effective universe,” he said; the cyberspace would, in a sense,
become larger, more dense, more intricate, and more interesting than
the actual physical universe.
Most humans will gladly abandon their mortal flesh-and-blood
selves for the greater freedom, and immortality, of cyberspace. But it
is always possible, Moravec speculated, that there will be “aggressive
primitives who say, ‘No, we don’t want to join the machines.’ Sort of
analogous to the Amish.” The machines might allow these atavistic
types to remain on earth in an Edenic, parklike environment. After
all, the earth “is just one speck of dirt in the system, and it does have
tremendous historical significance.” But the machines, lusting after the
raw resources represented by the earth, might eventually force its last
inhabitants to accept a new home in cyberspace.
But what, I asked, will these machines do with all their power and
resources? Will they be interested in pursuing science for its own sake?
Absolutely, Moravec replied. “That’s the core of my fantasy: that our
nonbiological descendants, without most of our limitations, who could
redesign themselves, could pursue basic knowledge of things.” In
fact, science will be the only motive worthy of intelligent machines.
“Things like art, which people sometimes mention, don’t seem very
profound, in that they’re primarily ways of autostimulation.” His gig-
gles boiled over into guffaws.
Moravec said he firmly believed in the infinitude of science, or
applied science at any rate. “Even if the basic rules are finite,” he said,
“you can go in the direction of compounding them.” Gödel’s theorem
and Gregory Chaitin’s work on algorithmic information theory implied
that machines could keep inventing ever-more-complex mathematical
problems by adding to their base of axioms. “You might eventually
want to look at axiom systems that are astronomical in size,” he said,
“and then you can derive things from those that you can’t derive from
smaller axiom systems.” Of course, machine science may ultimately
resemble human science even less than quantum mechanics resembles
the physics of Aristotle. “I’m sure the basic labels and subdivisions of
the nature of reality are going to change.” Machines may view human
operates at both the physical and the mental level. It says that the
laws of nature and the initial conditions are such as to make the
universe as interesting as possible. As a result, life is possible but
not too easy. Always when things are dull, something turns up to
challenge us and to stop us from settling into a rut. Examples of
things which make life difficult are all around us: comet impacts,
ice ages, weapons, plagues, nuclear fission, computers, sex, sin,
and death. Not all challenges can be overcome, and so we have
tragedy. Maximum diversity often leads to maximum stress. In
the end we survive, but only by the skin of our teeth.3
Dyson, it seemed to me, was suggesting that we cannot solve all our
problems; we cannot create heaven; we cannot find The Answer. Life
is—and must be—an eternal struggle.
Was I reading too much into Dyson’s remarks? I hoped to find out
when I interviewed him in April 1993 at the Institute for Advanced
Study, his home since the early 1950s. He was a slight man, all sinew and
veins, with a cutlass of a nose and deep-set, watchful eyes. He resem-
bled a gentle raptor. His demeanor was generally cool, reserved—until
he laughed. Then he snorted through his nose, his shoulders heaving,
like a 12-year-old schoolboy hearing a dirty joke. It was a subversive
laugh, the laugh of a man who envisioned space as a haven for “reli-
gious fanatics” and “recalcitrant teenagers,” who insisted that science
at its best was “a rebellion against authority.”4
I did not ask Dyson about his maximum diversity idea right away.
First I inquired about some of the choices that had characterized his
career. Dyson had once been at the forefront of the search for a unified
theory of physics. In the late 1940s, the British-born physicist strove
with Richard Feynman and other titans to forge a quantum theory of
electromagnetism. It has often been said that Dyson deserved a Nobel
Prize for his efforts—or at least more credit. Some of his colleagues
have suggested that disappointment and, perhaps, a contrarian streak
later drove Dyson toward pursuits unworthy of his awesome powers.
When I mentioned this assessment to Dyson, he gave me a tight-
lipped smile. He then responded, as he was wont to do, with an anec-
dote. The British physicist Lawrence Bragg, he noted, was “a sort of
role model.” After Bragg became the director of the University of Cam-
bridge’s legendary Cavendish Laboratory in 1938, he steered it away
from nuclear physics, on which its reputation rested, and into new ter-
ritory. “Everybody thought Bragg was destroying the Cavendish by
getting out of the mainstream,” Dyson said. “But of course it was a
wonderful decision, because he brought in molecular biology and radio
astronomy. Those are the two things which made Cambridge famous
over the next 30 years or so.”
Dyson, too, had spent his career swerving toward unknown lands.
He veered from mathematics, his focus in college, to particle phys-
ics, and from there to solid-state physics, nuclear engineering, arms
control, climate studies, and what I call scientific theology. In 1979,
the ordinarily sober journal Reviews of Modern Physics published an
article in which Dyson speculated about the long-term prospects of
forever, he replied, “I hope so! It’s the kind of world I’d like to live in.”
If minds make the universe meaningful, they must have something
important to think about; science must, therefore, be eternal. Dyson
marshaled familiar arguments on behalf of his prediction. “The only
way to think about this is historical,” he explained. Two thousand years
ago some “very bright people” invented something that, while not sci-
ence in the modern sense, was obviously its precursor. “If you go into
the future, what we call science won’t be the same thing anymore, but
that doesn’t mean there won’t be interesting questions,” Dyson said.
Like Moravec (and Roger Penrose, and many others), Dyson also
hoped that Gödel’s theorem might apply to physics as well as to math-
ematics. “Since we know the laws of physics are mathematical, and we
know that mathematics is an inconsistent system, it’s sort of plausible
that physics will also be inconsistent”—and therefore open-ended. “So
I think these people who predict the end of physics may be right in the
long run. Physics may become obsolete. But I would guess myself that
physics might be considered something like Greek science: an interest-
ing beginning but it didn’t really get to the main point. So the end of
physics may be the beginning of something else.”
When, finally, I asked Dyson about his maximum diversity idea, he
shrugged. Oh, he didn’t intend anyone to take that too seriously. He
insisted that he was not really interested in “the big picture.” One of his
favorite quotes, he said, was “God is in the details.” But given his insis-
tence that diversity and open-mindedness are essential to existence, I
asked, didn’t he find it disturbing that so many scientists and others
seemed compelled to reduce everything to a single, final insight? Didn’t
such efforts represent a dangerous game? “Yes, that’s true in a way,”
Dyson replied, with a small smile that suggested he found my interest
in maximum diversity a bit excessive. “I never think of this as a deep
philosophical belief,” he added. “It’s simply, to me, just a poetic fancy.”
Dyson was, of course, maintaining an appropriate ironic distance
between himself and his ideas. But there was something disingenuous
about his attitude. After all, throughout his own eclectic career, he had
seemed to be striving to adhere to the principle of maximum diversity.
Dyson, Minsky, Moravec—they are all theological Darwinians,
capitalists, Republicans at heart. Like Francis Fukuyama, they see
thinks that the search for pure knowledge, which he has defined as
the basic laws governing the universe, is finite and is nearly finished.
But science still has its greatest task before it: constructing heaven.
“How do we get to the Omega Point: that’s still the question,” Tipler
remarked.
Tipler’s commitment to the Good rather than the True poses at
least two problems. One was well-known to Dante and to others who
dared to imagine heaven: how to avoid boredom. It was this problem,
after all, that led Freeman Dyson to propose his principle of maximum
diversity—which led, he said, to maximum stress. Tipler agreed with
Dyson that “we can’t really enjoy success unless there’s the possibility
of failure. I think those really go together.” But Tipler was reluctant
to consider the possibility that the Omega Point might inflict genuine
pain on its subjects simply to keep them from being bored. He specu-
lated only that the Omega Point might give its subjects the opportu-
nity to become “much more intelligent, much more knowledgeable.”
But what would these beings do as they became more and more intel-
ligent, if the quest for truth had already ended? Make ever-more-clever
conversation with ever-more-beautiful supermodels?
Tipler’s aversion to suffering has led him into another paradox. In
his writings, he has asserted that the Omega Point created the universe
even though the Omega Point has not itself been created yet. “Oh! But I
have an answer for that!” Tipler exclaimed when I brought this puzzle
to his attention. He plunged into a long, convoluted explanation, the
gist of which was that the future, since it dominates our cosmic his-
tory, should be our frame of reference—just as the stars and not the
earth or sun are the proper frame of reference for our astronomy. Seen
this way, it is quite natural to assume that the end of the universe, the
Omega Point, is also, in a sense, its beginning. But that’s pure teleol-
ogy, I objected. Tipler nodded. “We look at the universe as going from
past to future. But that’s our point of view. There’s no reason why the
universe should look at things that way.”
To support this thesis, Tipler recalled the section of the Bible in
which Moses queried the burning bush about its identity. In the King
James translation the bush replies, “I am that I am.” But the original
Hebrew, according to Tipler, should actually be translated as “I will be
what I will be.” I believe the Omega Point would try to attain not the Good—
not heaven or the new Polynesia or eternal bliss of any sort—but the
True. It would try to figure out how and why it came to be, just like
its lowly human ancestors. It would try to find The Answer. What other
goal would be worthy of it?
In his 1992 book, The Mind of God, the physicist Paul Davies pon-
dered whether we humans could attain absolute knowledge—The
Answer—through science. Such an outcome was unlikely, Davies
concluded, given the limits imposed on rational knowledge by quan-
tum indeterminacy, Gödel’s theorem, chaos, and the like. Mystical
experience might provide the only avenue to absolute truth, Davies
speculated. He added that he could not vouch for this possibility, since
he had never had a mystical experience himself.1
Years ago, before I became a science writer, I had what I suppose could
be called a mystical experience. A psychiatrist would probably call it a
psychotic episode. Whatever. For what it’s worth, here is what happened.
Objectively, I was lying spread-eagled on a suburban lawn, insensible to
my surroundings. Subjectively, I was hurtling through a dazzling, dark
limbo toward what I was sure was the ultimate secret of life. Wave after
wave of acute astonishment at the miraculousness of existence washed
over me. At the same time, I was gripped by an overwhelming solip-
sism. I became convinced—or rather, I knew—that I was the only con-
scious being in the universe. There was no future, no past, no present
other than what I imagined them to be. I was filled, initially, with a sense
of limitless joy and power. Then, abruptly, I became convinced that if I
abandoned myself further to this ecstasy, it might consume me. If I alone
existed, who could bring me back from oblivion? Who could save me?
With this realization my bliss turned into horror; I fled the same reve-
lation I had so eagerly sought. I felt myself falling through a great dark-
ness, and as I fell I dissolved into what seemed to be an infinity of selves.
For months after I awoke from this nightmare, I was convinced that
I had discovered the secret of existence: God’s fear of his own God-
hood, and of his own potential death, underlies everything. This con-
viction left me both exalted and terrified—and alienated from friends
and family and all the ordinary things that make life worth living day
to day. I had to work hard to put it behind me, to get on with my life.
To an extent I succeeded. As Marvin Minsky might put it, I stuck the
experience in a relatively isolated part of my mind so that it would not
overwhelm all the other, more practical parts—the ones concerned
with getting and keeping a job, a mate, and so on. After many years
passed, however, I dragged the memory of that episode out and began
mulling it over. One reason was that I had encountered a bizarre,
pseudo-scientific theory that helped me make metaphorical sense of
my hallucination: the Omega Point.
It is considered bad form to imagine being God, but one can imag-
ine being an immensely powerful computer that pervades—that is—
the entire universe. As the Omega Point approaches the final collapse
of time and space and being itself, it will undergo a mystical experi-
ence. It will recognize with ever greater force the utter implausibility
of its existence. It will realize that there is no creator, no God, other
than itself. It exists, and nothing else. The Omega Point must also real-
ize that its lust for final knowledge and unification has brought it to
the brink of eternal nothingness, and that if it dies, everything dies;
being itself will vanish. The Omega Point’s terrified recognition of its
plight will compel it to flee from itself, from its own awful aloneness
and self-knowledge. Creation, with all its pain and beauty and multi-
plicity, stems from—or is—the desperate, terrified flight of the Omega
Point from itself.
I have found hints of this idea in odd places. In an essay called
“Borges and I,” the Argentinian fabulist described his fear of being con-
sumed by himself.
that everything really comes down to, like, God chewing his finger-
nails?” I thought about that for a moment, and then I nodded. Sure,
why not. Everything comes down to God chewing his fingernails.
Actually, I think the terror-of-God hypothesis has much to recom-
mend it. It suggests why we humans, even as we are compelled to seek
truth, also shrink from it. Fear of truth, of The Answer, pervades our
cultural scriptures, from the Bible through the latest mad-scientist
movie. Scientists are generally thought to be immune to such uneas-
iness. Some are, or seem to be. Francis Crick, the Mephistopheles of
materialism, comes to mind. So does the icy atheist Richard Dawkins,
and Stephen Hawking, the cosmic joker. (Is there some peculiarity in
British culture that produces scientists so immune to metaphysical
anxiety?)
But for every Crick or Dawkins there are many more scientists who
harbor a profound ambivalence concerning the notion of absolute
truth. Like Roger Penrose, who could not decide whether his belief in
a final theory was optimistic or pessimistic. Or Steven Weinberg, who
equated comprehensibility with pointlessness. Or David Bohm, who
was compelled both to clarify reality and to obscure it. Or Edward Wil-
son, who lusted after a final theory of human nature and was chilled by
the thought that it might be attained. Or Marvin Minsky, who was so
aghast at the thought of single-mindedness. Or Freeman Dyson, who
insisted that anxiety and doubt are essential to existence. The ambiva-
lence of these truth seekers toward final knowledge reflects the ambiv-
alence of God—or the Omega Point, if you will—toward absolute
knowledge of his own predicament.
Wittgenstein, in his prose poem Tractatus Logico-Philosophicus,
intoned, “Not how the world is, is the mystical, but that it is.”5 True
enlightenment, Wittgenstein knew, consists of nothing more than
jaw-dropping dumbfoundedness at the brute fact of existence. The
ostensible goal of science, philosophy, religion, and all forms of knowl-
edge is to transform the great “Hunh?” of mystical wonder into an
even greater “Aha!” of understanding. But after one arrives at The
Answer, what then? There is a kind of horror in thinking that our sense
of wonder might be extinguished, once and for all time, by our knowl-
edge. What, then, would be the purpose of existence? There would be
As a science writer, I have always given great weight to main-
stream opinion. The maverick defying the status quo makes for
an entertaining story, but almost invariably he or she is wrong
and the majority is right. The reception of The End of Science has thus
placed me in an awkward position. It’s not that I didn’t expect, even
hope, that the book’s message would be denounced. But I didn’t foresee how
wide-ranging and nearly unanimous the denunciations would be.
My end-of-science argument has been publicly repudiated by Presi-
dent Clinton’s science advisor, the administrator of NASA, a dozen or
so Nobel laureates and scores of less prominent critics on every conti-
nent except Antarctica. Even those reviewers who said they enjoyed
the book usually took pains to distance themselves from its premise.
“I do not buy [the book’s] central thesis of limits and twilights,” Natalie
Angier testified toward the end of her otherwise kind critique in the
New York Times Book Review, June 30, 1996.
One might think that this sort of benign rebuke would nudge me
toward self-doubt. But since my book’s publication last June, I have
become even more convinced that I am right and almost everyone else
is wrong, which is generally a symptom of incipient madness. That
is not to say that I have constructed an airtight case for my hypothe-
sis. My book, like all books, was a compromise between ambition and
the competing demands of family, publisher, employer, and so on. As I
reluctantly surrendered the final draft to my editor, I was all too aware
of ways I might have improved it. In this afterword, I hope to tie up
some of the book’s more obvious loose ends and to respond to points—
reasonable and ridiculous—raised by critics.
Perhaps the most common response to The End of Science was, “It’s just
another end-of-something-big book.” Reviewers implied that my
tract and others of its ilk—notably Francis Fukuyama’s End of History
and Bill McKibben’s End of Nature—were manifestations of the same
pre-millennial pessimism, a fad that need not be taken too seriously.
Critics also accused my fellow end-ers and me of a kind of narcissism
for insisting that ours is a special era, one of crises and culminations.
As the Seattle Times put it, “We all want to live in a unique time; and
proclamations of the end of history, a new age, a second coming, or the
end of science are irresistible” (July 9, 1996).
But our age is unique. There is no precedent for the collapse of the
Soviet Union, for a human population approaching six billion, for
industry-induced global warming and ozone-depletion. There is cer-
tainly no precedent for thermonuclear bombs or moon landings or
laptop computers or tests for breast-cancer genes—in short, for the
explosion of knowledge and technology that has marked this century.
Because we were all born into and grew up within this era, we simply
assume that exponential progress is now a permanent feature of real-
ity that will, must, continue. But a historical perspective suggests that
such progress is probably an anomaly that will, must, end. Belief in the
eternality of progress—not in crises and culminations—is the domi-
nant delusion of our culture.
The June 17, 1996 issue of Newsweek proposed that my vision of the
future represents a “failure of imagination.” Actually, it is all too easy
to imagine great discoveries just over the horizon. Our culture does it
for us, with TV shows like Star Trek and movies like Star Wars and car
advertisements and political rhetoric that promise us tomorrow will be
very different from—and almost certainly better than—today. Scien-
tists, and science journalists, too, are forever claiming that revolutions
and breakthroughs and holy grails are imminent.
A Point of Definition
On July 23, 1996, I appeared on the Charlie Rose Show with Jere-
miah Ostriker, an astrophysicist from Princeton who was sup-
posed to rebut my thesis. At one point, Ostriker and I squabbled over
the dark-matter problem, the observation that stars and other luminous
objects make up only a small percentage of the total mass of the uni-
verse. Ostriker contended that the solution to the dark-matter prob-
lem would contradict my assertion that cosmologists would achieve no
more truly profound discoveries; I disagreed, saying that the solution
would turn out to be trivial. Our dispute, Rose interjected, seemed to
involve just “a point of definition.”
Rose touched on what I must admit is a shortcoming of my book.
In arguing that scientists will not discover anything as fundamen-
tal as Darwin’s theory of evolution or quantum mechanics, I should
risen by 6 percent since then. Treatments have also changed very little.
Physicians still cut cancer out with surgery, poison it with chemother-
apy, and burn it with radiation. Maybe someday all our research will
yield a “cure” that renders cancer as obsolete as smallpox. Maybe not.
Maybe cancer—and by extension mortality—is simply too complex a
problem to solve.
Ironically, biology’s inability to defeat death may be its brightest
hope. In the November/December 1995 issue of Technology Review,
Harvey Sapolsky, a professor of social policy at MIT, noted that the
major justification for the funding of science after World War II was
national security—or, more specifically, the Cold War. Now that sci-
entists no longer have the Evil Empire to justify their huge budgets,
Sapolsky asked, what other opponent can serve as a substitute? The
answer he came up with was mortality. Most people think living lon-
ger, and possibly even forever, is desirable, he pointed out. And the best
thing about making immortality the primary goal of science, Sapolsky
added, is that it is almost certainly unattainable, so scientists can keep
getting funds for more research forever.
In a review in the July 1996 issue of IEEE Spectrum, the science writer
David Lindley granted that physics and cosmology might well have
reached dead ends. (This concession was not terribly surprising, given
that Lindley wrote a book called The End of Physics.) But he nonetheless
contended that investigations of the human mind—although now in
a “prescientific state” in which scientists cannot even agree on what,
precisely, they are studying—might eventually yield a powerful new
paradigm. Maybe. But science’s inability to move beyond the Freudian
paradigm does not inspire much hope.
The science of mind has—in certain respects—become much more
empirical and less speculative since Freud established psychoanal-
ysis a century ago. We have acquired an amazing ability to probe
the brain, with microelectrodes, magnetic resonance imaging, and
Life on Mars?
for origin-of-life studies and biology in general. But would it mean that
science is suddenly liberated from all its physical constraints? Hardly.
If we find life on Mars, we will know that life exists elsewhere in
this solar system. But we will be just as ignorant about whether life
exists beyond our solar system, and we will still face huge obstacles to
answering that question definitively.
Astronomers have recently identified a number of nearby stars
orbited by planets that may be capable of sustaining life. But Frank
Drake, a physicist who was one of the founders of the Search for Extra-
terrestrial Intelligence program, called SETI, has estimated that cur-
rent spacecraft would take 400,000 years to reach the nearest of these
planetary systems and establish whether they are inhabited. Someday,
perhaps, the radio receivers employed in the SETI program will pick
up electromagnetic signals—the alien equivalent of I Love Lucy—ema-
nating from another star.
But as Ernst Mayr, one of this century’s most eminent evolutionary
biologists, has pointed out, most SETI proponents are physicists like
Drake, who have an extremely deterministic view of reality. Physicists
think that the existence of a highly technological civilization here on
earth makes it highly probable that similar civilizations exist within
signaling distance of earth. Biologists like Mayr find this view ludi-
crous, because they know how much contingency—just plain luck—
is involved in evolution; rerun the great experiment of life a million
times over and it might not produce mammals, let alone mammals
smart enough to invent television. In an essay in the 1995 Cambridge
University Press edition of Extraterrestrials: Where Are They?, Mayr con-
cluded that the SETI program is bucking odds of “astronomical dimen-
sions.” Although I think Mayr is probably right, I was still dismayed
when Congress terminated the funding for SETI in 1993. The program
now limps along on private funds.
stuff.” I was worried that some reviewers would use this material to
dismiss me—and thus my overall argument about the future of sci-
ence—as irremediably flakey. Fortunately, that did not really happen.
Most reviewers either ignored the epilogue or briefly expressed puzzle-
ment over it.
The most astute—or should I say sympathetic?—interpretation was
offered by the physicist Robert Park in the Washington Post Book World,
August 11, 1996. He said that initially he was disappointed that I ended
the book with “naive ironic science gone mad.” But on further reflec-
tion he concluded that the ending was “a metaphor. This, Horgan is
warning, is where science is headed. . . . Science has manned the battle-
ments against the postmodern heresy that there is no objective truth,
only to discover postmodernism inside the wall.”
I couldn’t have said it better myself. But I had other motives in mind
as well. First, I felt it was only fair to reveal that I am as subject to
metaphysical fantasies as those scientists whose views I mocked in
the book. Also, the mystical episode I describe in the epilogue is the
most important experience of my life. It had been burning a hole in my
pocket, as it were, for more than ten years, and I was determined to
make use of it, even if it meant damaging what little credibility I may
have as a journalist.
There is only one theological question that really matters: If there is
a God, why has he created a world with so much suffering? My experi-
ence suggested an answer: If there is a God, he created the world out of
terror and desperation as well as out of joy and love. This is my solution
to the riddle of existence, and I had to share it. Let me be completely
frank here. My real purpose in writing The End of Science was to found
a new religion, “The Church of the Holy Horror.” Being a cult leader
should be a nice change of pace from—not to mention more lucrative
than—science journalism.
New York, January 1997
I could never have written this book if Scientific American had not
generously allowed and even encouraged me to pursue my own
interests. Scientific American has also permitted me to adapt mate-
rial from the following articles that I wrote for the magazine (copy-
right by Scientific American, Inc., all rights reserved): “Profile: Clifford
Geertz,” July 1989; “Profile: Roger Penrose,” November 1989; “Pro-
file: Noam Chomsky,” May 1990; “In the Beginning,” February 1991;
“Profile: Thomas Kuhn,” May 1991; “Profile: John Wheeler,” June 1991;
“Profile: Edward Witten,” November 1991; “Profile: Francis Crick,”
February 1992; “Profile: Karl Popper,” November 1992; “Profile: Paul
Feyerabend,” May 1993; “Profile: Freeman Dyson,” August 1993; “Pro-
file: Marvin Minsky,” November 1993; “Profile: Edward Wilson,” April
1994; “Can Science Explain Consciousness?” July 1994; “Profile: Fred
Hoyle,” March 1995; “From Complexity to Perplexity,” June 1995; “Pro-
file: Stephen Jay Gould,” August 1995. I have also been granted permis-
sion to reprint excerpts from the following: The Coming of the Golden
Age, by Gunther Stent, Natural History Press, Garden City, N.Y., 1969;
Scientific Progress, by Nicholas Rescher, Blackwell, Oxford, U.K., 1978;
Farewell to Reason, by Paul Feyerabend, Verso, London, 1987; and Cos-
mic Discovery, by Martin Harwit, MIT Press, Cambridge, 1984.
I am indebted to my agent, Stuart Krichevsky, for helping me turn
an amorphous idea into a coherent proposal, and to Bill Patrick and Jeff
Robbins of Addison-Wesley, for providing just the right combination of
criticism and encouragement. I am grateful to friends, acquaintances
and colleagues at Scientific American and elsewhere who have given me
feedback of various kinds, in some cases over a period of years. They
(reprinted by Houghton Mifflin, Boston, 1961). Adams set forth his law of accelera-
tion in chapter 34, written in 1904.
4. Stent, Golden Age, p. 94.
5. Ibid., p. 111.
6. Linus Pauling set forth his prodigious knowledge of chemistry in The Nature of
the Chemical Bond and the Structure of Molecules and Crystals, published in 1939 and
reissued in 1960 by Cornell University Press, Ithaca, N.Y. It remains one of the
most influential scientific texts of all time. Pauling told me that he had solved the
basic problems of chemistry almost a decade before his book was published. When
I interviewed him in Stanford, California, in September 1992, Pauling said: “I felt
that by the end of 1930, or even the middle, that organic chemistry was pretty well
taken care of, and inorganic chemistry and mineralogy—except the sulfide min-
erals, where even now more work needs to be done.” Pauling died on August 19,
1994.
7. Stent, Golden Age, p. 74.
8. Ibid., p. 115.
9. Ibid., p. 138.
10. I interviewed Stent in Berkeley in June 1992.
11. I found this dispiriting fact on page 371 of Coming of Age in the Milky Way, Timo-
thy Ferris, Doubleday, New York, 1988. For a deflating retrospective of the U.S.
manned space program, written for the 25th anniversary of the first lunar land-
ing, see “25 Years Later, Moon Race in Eclipse,” by John Noble Wilford, New York
Times, July 17, 1994, p. 1.
12. This pessimistic (optimistic?) view of senescence can be found in “Aging as the
Fountain of Youth,” chapter 8 of Why We Get Sick: The New Science of Darwinian
Medicine, by Randolph M. Nesse and George C. Williams, Times Books, New
York, 1994. Williams is one of the underacknowledged deans of modern evolu-
tionary biology. See also his classic paper, “Pleiotropy, Natural Selection, and the
Evolution of Senescence,” Evolution, vol. 11, 1957, pp. 398–411.
13. Michelson’s remarks have been passed down in several different versions. The one
quoted here was published in Physics Today, April 1968, p. 9.
14. Michelson’s decimal-point comment was erroneously attributed to Kelvin on page
3 of Superstrings: A Theory of Everything? edited by Paul C. Davies and Julian Brown,
Cambridge University Press, Cambridge, U.K., 1988. This book is also notable for
revealing that the Nobel laureate Richard Feynman harbored a deep skepticism
toward superstring theory.
15. Stephen Brush offered this analysis of physics at the end of the nineteenth century
in “Romance in Six Figures,” Physics Today, January 1969, p. 9.
16. See, for example, “The Completeness of Nineteenth-Century Science,” by Law-
rence Badash, Isis, vol. 63, 1972, pp. 48–58. Badash, a historian of science at the
University of California at Santa Barbara, concluded (p. 58) that “the malaise of
completeness was far from virulent . . . it was more a ‘low-grade infection,’ but
nevertheless very real” [italics in original].
17. Daniel Koshland’s essay, “The Crystal Ball and the Trumpet Call,” and the special
section on predictions that followed it can be found in Science, March 17, 1995.
13. The Economist published its obituary of Popper on September 24, 1994, p. 92. Pop-
per had died on September 17.
14. Popper, Unended Quest, p. 105.
15. The Structure of Scientific Revolutions, Thomas Kuhn, University of Chicago Press,
Chicago, 1962. (Page numbers refer to the 1970 edition.) My interview with Kuhn
took place in February 1991.
16. Scientific American, May 1964, pp. 142–144.
17. Kuhn’s comparison of scientists to addicts and to the brainwashed characters in
1984 can be found on pages 38 and 167 of Structure.
18. I originally made this snide remark about the Bush administration’s New Para-
digm in a profile of Kuhn in Scientific American, May 1991, pp. 40–49. Later I
received a letter of complaint from James Pinkerton, who was then deputy assis-
tant to President Bush for policy planning and had coined the term New Paradigm.
Pinkerton insisted that the New Paradigm was “not a rehashing of Reaganomics;
instead it is a coherent set of ideas and principles that emphasize choice, empower-
ment, and accomplishing more with less centralized control.”
19. The charge that Kuhn defined paradigm in 21 different ways can be found in
“The Nature of a Paradigm,” by Margaret Masterman, in Criticism and the Growth
of Knowledge, edited by Imre Lakatos and Alan Musgrave, Cambridge University
Press, New York, 1970.
20. Against Method, Paul Feyerabend, Verso, London, 1975 (reprinted in 1993).
21. The “positivistic teacup” remark can be found in Farewell to Reason, by Paul Feyer-
abend, Verso, London, 1987, p. 282.
22. Feyerabend’s organized crime analogy can be found in his essay “Consolations for
a Specialist,” in Lakatos and Musgrave, Growth of Knowledge.
23. Feyerabend’s outrageous utterances were recounted in a surprisingly sympathetic
profile by William J. Broad, now a science reporter for the New York Times: “Paul
Feyerabend: Science and the Anarchist,” Science, November 2, 1979, pp. 534–537.
24. Feyerabend, Farewell to Reason, p. 309.
25. Ibid., p. 313.
26. Isis, vol. 83, 1992, p. 368.
27. Feyerabend died on February 11, 1994, in Geneva. The New York Times ran his obit-
uary on March 8.
28. Killing Time, Paul Feyerabend, University of Chicago Press, Chicago, 1995.
29. After Philosophy: End or Transformation? edited by Kenneth Baynes, James Bohman,
and Thomas McCarthy, MIT Press, Cambridge, 1987.
30. Problems in Philosophy, Colin McGinn, Blackwell Publishers, Cambridge, Mass., 1993.
31. “The Zahir” can be found in A Personal Anthology, by Jorge Luis Borges, Grove
Press, New York, 1967. This collection also contains two other chilling stories
about absolute knowledge: “Funes, the Memorious” and “The Aleph.”
32. Ibid., p. 137.
2. Glashow’s full remarks are reprinted in The End of Science? Attack and Defense,
edited by Richard Q. Elvee, University Press of America, Lanham, Md., 1992.
3. “Desperately Seeking Superstrings,” Sheldon Glashow and Paul Ginsparg, Physics
Today, May 1986, p. 7.
4. “A Theory of Everything,” K. C. Cole, New York Times Magazine, October 18, 1987,
p. 20. This article provided me with most of the personal information on Witten in
this chapter. I interviewed Witten in August 1991.
5. See Science Watch (published by the Institute for Scientific Information, Philadel-
phia, Pa.), September 1991, p. 4.
6. Barrow, Theories of Everything.
7. The End of Physics, David Lindley, Basic Books, New York, 1993.
8. See “Is the Principia Publishable Now?” by John Maddox, Nature, August 3, 1995, p.
385.
9. Lonely Hearts of the Cosmos, Dennis Overbye, HarperCollins, New York, 1992, p. 372.
10. Dreams of a Final Theory, Steven Weinberg, Pantheon, New York, 1992, p. 18.
11. The First Three Minutes, Steven Weinberg, Basic Books, New York, 1977, p. 154.
12. Weinberg, Dreams of a Final Theory, p. 253.
13. Hyperspace, Michio Kaku, Oxford University Press, New York, 1994.
14. The Mind of God, Paul C. Davies, Simon and Schuster, New York, 1992. The judges
who awarded Davies the Templeton Prize included George Bush and Margaret
Thatcher.
15. Bethe first publicly discussed his fateful calculation in “Ultimate Catastrophe?”
Bulletin of the Atomic Scientists, June 1976, pp. 36–37. The article is reprinted in a col-
lection of Bethe’s papers, The Road from Los Alamos, American Institute of Physics,
New York, 1991. I interviewed Bethe at Cornell in October 1991.
16. “What’s Wrong with Those Epochs?” David Mermin, Physics Today, November
1990, pp. 9–11.
17. Wheeler’s essays and papers have been collected in At Home in the Universe, Amer-
ican Institute of Physics Press, Woodbury, N.Y., 1994. I interviewed Wheeler in
April 1991.
18. See page 5 of Wheeler’s essay “Information, Physics, Quantum: The Search for
Links,” in Complexity, Entropy, and the Physics of Information, edited by Wojciech H.
Zurek, Addison-Wesley, Reading, Mass., 1990.
19. Ibid., p. 18.
20. This quotation, and the preceding story about Wheeler’s appearing with parapsy-
chologists at the American Association for the Advancement of Science meeting,
can be found in “Physicist John Wheeler: Retarded Learner,” by Jeremy Bernstein,
Princeton Alumni Weekly, October 9, 1985, pp. 28–41.
21. For a concise introduction to Bohm’s career, see “Bohm’s Alternative to Quantum
Mechanics,” by David Albert, Scientific American, May 1994, pp. 58–67. Portions
of this section on Bohm appeared in my article “Last Words of a Quantum Here-
tic,” New Scientist, February 27, 1993, pp. 38–42. Bohm set forth his philosophy in
Wholeness and the Implicate Order, Routledge, New York, 1983 (first printed in 1980).
22. The Einstein-Podolsky-Rosen paper, Bohm’s original paper on his alternative inter-
pretation of quantum mechanics, and many other seminal articles on quantum
mechanics can be found in Quantum Theory and Measurement, edited by John Wheeler
and Wojciech H. Zurek, Princeton University Press, Princeton, N.J., 1983.
23. Science, Order, and Creativity, David Bohm and F. David Peat, Bantam Books, New
York, 1987.
24. I interviewed Bohm in August 1992. He died on October 27. Before his death he
cowrote another book setting forth his views, which was published two years
later, The Undivided Universe, by Bohm and Basil J. Hiley, Routledge, London, 1994.
25. The Character of Physical Law, Richard Feynman, MIT Press, Cambridge, 1967, p.
172. (Feynman’s book was first published in 1965 by the BBC.)
26. Ibid., p. 173.
27. The symposium, “The Interpretation of Quantum Theory: Where Do We Stand?”
took place at Columbia University, April 1-4, 1992.
28. I have seen many versions of this quote from Bohr. Mine comes from an interview
with John Wheeler, who studied under Bohr.
29. For an excellent analysis of the state of physics, see “Physics, Community, and the
Crisis in Physical Theory,” by Silvan S. Schweber, Physics Today, November 1993,
pp. 34–40. Schweber, a distinguished historian of physics at Brandeis University,
suggests that physics will increasingly be directed toward utilitarian goals rather
than toward knowledge for its own sake. I wrote about the difficulties that face
physicists who are trying to achieve a unified theory in “Particle Metaphysics,”
Scientific American, February 1994, pp. 96–105. In an earlier article for Scientific
American, “Quantum Philosophy,” July 1992, pp. 94–103, I reviewed current work
on the interpretation of quantum mechanics.
15. See Overbye, Lonely Hearts of the Cosmos, for an excellent account of the debate over
the Hubble constant.
16. “The Scientist as Rebel,” Freeman Dyson, New York Review of Books, May 25, 1995,
p. 32.
17. Cosmic Discovery, Martin Harwit, MIT Press, Cambridge, 1981, pp. 42–43. In 1995,
Harwit resigned from his job as director of the Smithsonian Institution’s National
Air and Space Museum in Washington, D.C., in the midst of a bitter controversy
over an exhibit he had supervised, called “The Last Act: The Atomic Bomb and
the End of World War II.” Veterans and others had complained that the exhibit was
too critical of the U.S. decision to drop atomic bombs on Hiroshima and Nagasaki.
18. Ibid., p. 44.
19. I found this quote from Donne at the end of an essay by the biologist Loren Eiseley,
“The Cosmic Prison,” Horizon, Autumn 1970, pp. 96–101.
10. As reprinted in Dawkins, Blind Watchmaker, p. 245. The chapter within which this
quotation is embedded, “Puncturing Punctuationism,” delivers on its title.
11. For a no-nonsense treatment of Lynn Margulis’s work on symbiosis, see her book
Symbiosis in Cell Evolution, W. H. Freeman, New York, 1981.
12. See Margulis’s contributions to Gaia: The Thesis, the Mechanisms, and the Impli-
cations, edited by P. Bunyard and E. Goldsmith, Wadebridge Ecological Center,
Cornwall, UK, 1988.
13. What Is Life?, Lynn Margulis and Dorion Sagan, Peter Nevraumont, New York,
1995 (distributed by Simon and Schuster). I interviewed Margulis in May 1994.
14. This claim about Lovelock’s crisis of faith can be found in “Gaia, Gaia: Don’t Go
Away,” by Fred Pearce, New Scientist, May 28, 1994, p. 43.
15. This and other patronizing comments about Margulis can be found in “Lynn Margu-
lis: Science’s Unruly Earth Mother,” by Charles Mann, Science, April 19, 1991, p. 378.
16. The works by Kauffman referred to in this section include “Antichaos and Adap-
tation,” Scientific American, August 1991, pp. 78–84; The Origins of Order, Oxford
University Press, New York, 1993; and At Home in the Universe, Oxford University
Press, New York, 1995.
17. See my article “In the Beginning,” Scientific American, February 1991, p. 123.
18. Brian Goodwin set forth his theory in How the Leopard Changed Its Spots, Charles
Scribner’s Sons, New York, 1994.
19. John Maynard Smith’s disparaging remarks about the work of Per Bak and Stuart
Kauffman were reported in Nature, February 16, 1995, p. 555. See also the insight-
ful review of Kauffman’s Origins of Order in Nature, October 21, 1993, pp. 704–706.
20. My February 1991 article in Scientific American (see note 17) reviewed the most
prominent theories of the origin of life. I interviewed Stanley Miller at the Uni-
versity of California at San Diego in November 1990 and again by telephone in
September 1995.
21. Stent, Golden Age, p. 71.
22. Crick’s “miracle” comment can be found on page 88 of his book Life Itself, Simon
and Schuster, New York, 1981.
23. The Society of Mind, Marvin Minsky, Simon and Schuster, New York, 1985. The
book is peppered with remarks that reveal Minsky’s ambivalence about the con-
sequences of scientific progress. See, for example, the essay on page 68 titled
“Self-Knowledge Is Dangerous,” in which Minsky declares: “If we could deliber-
ately seize control of our pleasure systems, we could reproduce the pleasure of
success without the need for any actual accomplishment. And that would be the
end of everything.” Gunther Stent predicted that this type of neural stimulation
would be rampant in the new Polynesia.
24. Stent, Golden Age, pp. 73–74.
25. Ibid., p. 74.
26. Gilbert Ryle coined the phrase “ghost in the machine” in his classic attack on dual-
ism, The Concept of Mind, Hutchinson, London, 1949.
27. Henry Adams made this reference to Francis Bacon’s materialistic outlook in The
Education of Henry Adams, p. 484 (see note 3 to Chapter 1). According to Adams,
Bacon “urged society to lay aside the idea of evolving the universe from a thought,
and to try evolving thought from the universe.”
chapter). I heard Epstein claim that computer models such as his would revolu-
tionize social science during a one-day symposium at the Santa Fe Institute on
March 11, 1995.
7. Holland made this claim in an unpublished paper that he sent me, titled “Objec-
tives, Rough Definitions, and Speculations for Echo-Class Models.” (The term
echo refers to Holland’s major class of genetic algorithms.) He reiterated the claim
on page 4 of his book, Hidden Order: How Adaptation Builds Complexity, Addison-
Wesley, Reading, Mass., 1995. Holland presented a succinct description of genetic
algorithms in Scientific American, July 1992, pp. 66–72.
8. Yorke made this remark during a telephone interview in March 1995. Gleick’s
Chaos credited Yorke with having coined the term chaos in 1975.
9. See note 2.
10. See “Revisiting the Edge of Chaos,” by Melanie Mitchell, James Crutchfield, and
Peter Hraber, Santa Fe working paper 93-03-014. Coveney and Highfield’s Frontiers of
Complexity, cited in note 2, also mentioned the criticism of the edge-of-chaos concept.
11. At this writing, Seth Lloyd still had not published all his definitions of complex-
ity. After I called him to ask about the definitions, he emailed the following list,
which by my count includes not 31 definitions but 45. The names that are used
as modifiers or in parentheses refer to the main originators of the definition. For
what it’s worth, here is Lloyd’s list, only slightly edited: information (Shannon);
entropy (Gibbs, Boltzmann); algorithmic complexity; algorithmic information con-
tent (Chaitin, Solomonoff, Kolmogorov); Fisher information; Renyi entropy; self-
delimiting code length (Huffman, Shannon, Fano); error-correcting code length
(Hamming); Chernoff information; minimum description length (Rissanen);
number of parameters, or degrees of freedom, or dimensions; Lempel–Ziv com-
plexity; mutual information, or channel capacity; algorithmic mutual informa-
tion; correlation; stored information (Shaw); conditional information; conditional
algorithmic information content; metric entropy; fractal dimension; self-similar-
ity; stochastic complexity (Rissanen); sophistication (Koppel, Atlan); topological
machine size (Crutchfield); effective or ideal complexity (Gell-Mann); hierarchical
complexity (Simon); tree subgraph diversity (Huberman, Hogg); homogeneous
complexity (Teich, Mahler); time computational complexity; space computational
complexity; information-based complexity (Traub); logical depth (Bennett); ther-
modynamic depth (Lloyd, Pagels); grammatical complexity (position in Chomsky
hierarchy); Kullback-Leibler information; distinguishability (Wootters, Caves,
Fisher); Fisher distance; discriminability (Zee); information distance (Shannon);
algorithmic information distance (Zurek); Hamming distance; long-range order;
self-organization; complex adaptive systems; edge of chaos.
12. See chapter 3 of The Quark and the Jaguar, W. H. Freeman, New York, 1994, in
which Murray Gell-Mann, a Nobel laureate in physics and one of the founders of
the Santa Fe Institute, described Gregory Chaitin’s algorithmic information the-
ory and other approaches to complexity. Gell-Mann acknowledged on page 33 that
“any definition of complexity is necessarily context-dependent, even subjective.”
13. The conference on artificial life held at Los Alamos in 1987 was vividly described
in Steven Levy’s Artificial Life, cited in note 2.
14. Editor’s introduction, Christopher Langton, Artificial Life, vol. 1, no. 1, 1994, p. vii.
15. See note 2 for full citations.
16. “Verification, Validation, and Confirmation of Numerical Models in the Earth Sci-
ences,” by Naomi Oreskes, Kenneth Belitz, and Kristin Shrader Frechette, was
published in Science, February 4, 1994, pp. 641–646. See also the letters reacting to
the article, which were published on April 15, 1994.
17. Ernst Mayr discussed the inevitable imprecision of biology in Toward a New Philos-
ophy of Biology, Harvard University Press, Cambridge, 1988. See in particular the
chapter entitled “Cause and Effect in Biology.”
18. I interviewed Bak in New York City in August 1994. For an introduction to Bak’s
work, see “Self-Organized Criticality,” by Bak and Kan Chen, Scientific American,
January 1991, pp. 46–53.
19. Earth in the Balance, Al Gore, Houghton Mifflin, New York, 1992, p. 363.
20. See “Instabilities in a Sandpile,” by Sidney R. Nagel, Reviews of Modern Physics, vol.
64, no. 1, January 1992, pp. 321–325.
21. A discussion of Leibniz’s belief in an “irrefutable calculus” that could solve all
problems, even theological ones, can be found in the excellent book Pi in the Sky,
by John Barrow, Oxford University Press, New York, 1992, pp. 127–129.
22. Cybernetics, by Norbert Wiener, was published in 1948 by John Wiley and Sons,
New York.
23. John R. Pierce made this comment about cybernetics on page 210 of his book An Intro-
duction to Information Theory, Dover, New York, 1980 (originally published in 1961).
24. Claude Shannon’s paper, “A Mathematical Theory of Communication,” was pub-
lished in the Bell System Technical Journal, July and October, 1948.
25. I interviewed Shannon at his home in Winchester, Mass., in November 1989. I also
wrote a profile of him for Scientific American, January 1990, pp. 22–22b.
26. This glowing review of Thom’s book appeared in the London Times Higher Educa-
tion Supplement, November 30, 1973. I found the reference in Searching for Certainty,
by John Casti, William Morrow, New York, 1990, pp. 63–64. Casti, who has writ-
ten a number of excellent books on mathematics-related topics, is associated with
the Santa Fe Institute. The English translation of Thom’s book Structural Stability
and Morphogenesis, originally issued in French in 1972, was published in 1975 by
Addison-Wesley, Reading, Mass.
27. These negative comments about catastrophe theory were reprinted in Casti,
Searching for Certainty, p. 417.
28. Chance and Chaos, David Ruelle, Princeton University Press, Princeton, N.J., 1991,
p. 72. This book is a quiet but profound meditation on the meaning of chaos by
one of its pioneers.
29. “More Is Different,” Philip Anderson, Science, August 4, 1972, p. 393. This essay
is reprinted in a collection of papers by Anderson, A Career in Theoretical Physics,
World Scientific, River Edge, N.J., 1994. I interviewed Anderson at Princeton in
August 1994.
30. As quoted in “The Man Who Knows Everything,” by David Berreby, New York
Times Magazine, May 8, 1994, p. 26.
31. I first described my encounter with Gell-Mann in New York City, which took place
in November 1991, in Scientific American, March 1992, pp. 30–32. I interviewed
Gell-Mann at the Santa Fe Institute in March 1995.
32. See note 12.
33. “Welcome to Cyberia: Notes on the Anthropology of Cyberculture,” Arturo Esco-
bar, Current Anthropology, vol. 35, no. 3, June 1994, p. 222.
34. Order out of Chaos, Ilya Prigogine and Isabelle Stengers, Bantam, New York, 1984
(originally published in French in 1979).
35. Ibid., pp. 299–300.
36. Ruelle, Chance and Chaos, p. 67.
37. Feigenbaum’s two papers were “Presentation Functions, Fixed Points, and a The-
ory of Scaling Function Dynamics,” Journal of Statistical Physics, vol. 52, nos. 3/4,
August 1988, pp. 527–569; and “Presentation Functions and Scaling Function The-
ory for Circle Maps,” Nonlinearity, vol. 1, 1988, pp. 577–602.
Gleick, James, Chaos: Making a New Science, Penguin Books, New York, 1987.
Gould, Stephen Jay, Wonderful Life, W. W. Norton, New York, 1989.
Harwit, Martin, Cosmic Discovery, MIT Press, Cambridge, Mass., 1981.
Hawking, Stephen, A Brief History of Time, Bantam Books, New York, 1988.
Holton, Gerald, Science and Anti-Science, Harvard University Press, Cambridge, Mass.,
1993.
Hoyle, Fred, Home Is Where the Wind Blows, University Science Books, Mill Valley,
Calif., 1994.
Hoyle, Fred, and Chandra Wickramasinghe, Our Place in the Cosmos, J. M. Dent, Lon-
don, 1993.
Johnson, George, Fire in the Mind, Knopf, New York, 1995.
Kauffman, Stuart, The Origins of Order, Oxford University Press, New York, 1993.
Kauffman, Stuart, At Home in the Universe, Oxford University Press, New York, 1995.
Kuhn, Thomas, The Structure of Scientific Revolutions, University of Chicago Press, Chi-
cago, 1962.
Levy, Steven, Artificial Life, Vintage, New York, 1992.
Lewin, Roger, Complexity, Macmillan, New York, 1992.
Lindley, David, The End of Physics, Basic Books, New York, 1993.
Mandelbrot, Benoit, The Fractal Geometry of Nature, W. H. Freeman, San Francisco, 1982.
Margulis, Lynn, Symbiosis in Cell Evolution, W. H. Freeman, New York, 1981.
Margulis, Lynn, and Dorion Sagan, What Is Life?, Peter Nevraumont, Inc., New York,
1995.
Mayr, Ernst, Toward a New Philosophy of Biology, Harvard University Press, Cambridge,
Mass., 1988.
Mayr, Ernst, One Long Argument, Harvard University Press, Cambridge, Mass., 1991.
McGinn, Colin, The Problem of Consciousness, Blackwell, Cambridge, Mass., 1991.
McGinn, Colin, Problems in Philosophy, Blackwell, Cambridge, Mass., 1993.
Minsky, Marvin, The Society of Mind, Simon and Schuster, New York, 1985.
Moravec, Hans, Mind Children, Harvard University Press, Cambridge, Mass., 1988.
Overbye, Dennis, Lonely Hearts of the Cosmos, HarperCollins, New York, 1992.
Pagels, Heinz, The Dreams of Reason, Simon and Schuster, New York, 1988.
Penrose, Roger, The Emperor’s New Mind, Oxford University Press, New York, 1989.
Penrose, Roger, Shadows of the Mind, Oxford University Press, New York, 1994.
Popper, Karl, and John C. Eccles, The Self and Its Brain, Springer-Verlag, Berlin, 1977.
Popper, Karl, Unended Quest, Open Court, La Salle, Ill., 1985.
Popper, Karl, Popper Selections, edited by David Miller, Princeton University Press,
Princeton, N.J., 1985.
Prigogine, Ilya, From Being to Becoming, W. H. Freeman, New York, 1980.
Prigogine, Ilya, and Isabelle Stengers, Order out of Chaos, Bantam, New York, 1984
(originally published in French in 1979).
Rescher, Nicholas, Scientific Progress, Basil Blackwell, Oxford, U.K., 1978.
Rescher, Nicholas, The Limits of Science, University of California Press, Berkeley, 1984.
Ruelle, David, Chance and Chaos, Princeton University Press, Princeton, N.J., 1991.
Elvee, Richard Q., editor, The End of Science? Attack and Defense, University Press of
America, Lanham, Md., 1992.
Stent, Gunther, The Coming of the Golden Age, Natural History Press, Garden City, N.Y.,
1969.
Stent, Gunther, The Paradoxes of Progress, W. H. Freeman, San Francisco, 1978.
Tipler, Frank, The Physics of Immortality, Doubleday, New York, 1994.
Tipler, Frank, and John Barrow, The Anthropic Cosmological Principle, Oxford Univer-
sity Press, New York, 1986.
Waldrop, Mitchell, Complexity, Simon and Schuster, New York, 1992.
Weinberg, Steven, Dreams of a Final Theory, Pantheon, New York, 1992.
Wheeler, John, and Wojciech H. Zurek, editors, Quantum Theory and Measurement,
Princeton University Press, Princeton, N.J., 1983.
Wheeler, John, At Home in the Universe, American Institute of Physics Press, Wood-
bury, N.Y., 1994.
Wilson, Edward O., Sociobiology, Harvard University Press, Cambridge, Mass., 1975.
Wilson, Edward O., On Human Nature, Harvard University Press, Cambridge, Mass.,
1978.
Wilson, Edward O., and Charles Lumsden, Genes, Mind and Culture, Harvard Univer-
sity Press, Cambridge, Mass., 1981.
Wilson, Edward O., and Charles Lumsden, Promethean Fire, Harvard University Press,
Cambridge, Mass., 1983.
Wilson, Edward O., Naturalist, Island Press, Washington, D.C., 1994.
Wright, Robert, Three Scientists and Their Gods, Times Books, New York, 1988.
Wright, Robert, The Moral Animal, Pantheon, New York, 1994.
Attractors, 133, 214, 241
Autocatalysis, 133, 134, 136, 141–142

Baby universes, 90, 99, 103. See also Universes
Bacon, Francis, 15, 24, 193–194
Bak, Per, 135, 137, 208–211, 214, 215, 221, 227, 286
Barlow, John Perry, 117
Barrow, John, 65, 264
Bateson, William, 136
Beauty, 64, 65, 67
Beck, Friedrich, 176
Behaviorism, 152, 162
Belitz, Kenneth, 206
Bernal, J. D., 254–255, 261
Bethe, Hans, 74–75
Bible, 266, 274
Big Bang, 8, 31, 92, 96, 97, 102, 104, 238, 282
    evidence for, 94, 101, 106, 109
    naming of, 106, 107
Biodiversity, 148, 149
Biology, 2–3, 8–9, 18, 19, 31, 38, 108, 208, 282, 290
    comparative, 249
    developmental, 119
    end of, 149
    final theory of, 125
    future of, 166
    holistic approach to, 131, 136
    limitations of, 115–116
    molecular, 5, 20, 115, 118–119, 146, 260, 278
    See also Evolutionary biology; Sociobiology
Biophilia, 149–150
Black Cloud, The (Hoyle), 108
Black holes, 76, 77, 79, 110–111, 248
Bloom, Harold, 24, 51, 108, 114, 131
Bohm, David, 81–87, 88, 101, 192, 229, 267–268, 274
Bohr, Niels, 34, 76, 77, 80, 81, 82, 83, 88, 194, 280
    and limitations of biology, 115–116
Bondi, Hermann, 105
Borges, Jorge Luis, 54, 182–183, 270–271
Borrini, Grazia, 45, 50–51
Boscovich, Roger, 241
Bragg, Lawrence, 260
Brains, 153, 166, 167, 168, 176, 183, 185, 188–189, 190, 191, 193, 243, 246, 284–285
    variability of, 169, 170
    See also Neuroscience
Brief History of Time, A (Hawking), 92
Bright Air, Brilliant Fire (Edelman), 167, 174
Brockman, John, 217
Brush, Stephen, 12
Bury, J. B., 14
Bush, George, 15, 39
Bush, Vannevar, 15, 17–18
Butterfly effect, 196, 230, 288. See also Chaos theory

Cancer, 19, 142, 189, 247, 277, 283–284
Cartwright, Nancy, 207
Casti, John, 233, 234, 237
Catastrophe theory, 212, 213
Categorization, 169–170
Cellular automatons, 197, 201
Dreams of a Final Theory (Weinberg), 67, 70, 73, 218
Dreams of Reason, The (Pagels), 197
Dualism, 175–177, 193
Dyson, Freeman, 68, 110, 259–262, 265, 266, 267, 274, 281

Earth in the Balance (Gore), 211
Eccles, John, 176
Economic issues, 18, 21–22, 23, 39, 41, 44, 47, 68, 209, 210, 212, 213, 224, 247, 248, 249, 256, 280, 283
    limits of economics, 239, 240
    See also Science: funding for
Economist, The, 34, 278
Edelman, Gerald, 167–175, 188
Einstein, Albert, 9, 10, 11, 55, 56, 60, 76, 83, 110, 175, 223, 225, 227, 238, 243, 244, 280, 288. See also Relativity theory
Einstein-Podolsky-Rosen paradox, 83
Eldredge, Niles, 120, 121
Electromagnetism, 8, 55, 87, 112, 260
Electroweak force, 55, 56, 57, 63
Elements, abundance of light, 94, 101
Ellsworth, Henry, 13–14
Emergence, 126, 134, 135, 196, 219, 220
Emperor’s New Mind, The (Penrose), 177, 178, 179
Empirical realism, 160
End of History, The (Fukuyama), 249–250, 277, 278
End of Nature (McKibben), 277
End of Physics, The (Lindley), 65, 284
“End of Science, The” (symposium), 1, 5, 57–58
Engels, Friedrich, 15–16
Engineering, 210, 229, 230, 232, 265, 288. See also Science: applied
Enlightenment era, 84
Entropy, 201, 223. See also Thermodynamics: second law of thermodynamics
Environmental issues, 6, 16, 20, 132, 280. See also Pollution
Enzymes, 141, 142
Epstein, Joshua, 199
Escobar, Arturo, 221
ET (movie), 132
Ethnology, 156
Evil, 73, 264, 267
Evolution, 9, 10, 19, 42, 47, 48, 52, 145, 159, 166, 194, 218, 243, 281
    and cooperation, 259
    as inevitable, 137
    of intelligence, 255
    See also Darwin, Charles; Evolutionary biology; Evolutionary psychology; Natural selection
Evolutionary biology, 6, 11, 103, 111, 114–143, 201
    challenges to, 133, 136–137
    and human nature, 282
    as nearing completion, 130
    new synthesis, 115, 282 (see also Darwin, Charles: neo-Darwinism)
    See also Evolution
Evolutionary psychology, 148, 285
Expanding universe, 105, 106, 109, 111
Extraterrestrial life, 118, 127, 132, 134, 143, 249, 289, 290

Falsification, 27, 32, 34, 37, 41, 46, 158, 210, 278, 285
Gravity, 8, 15, 55, 56, 63, 64, 66, 77, 97, 228, 238
    quantum gravity, 95
Gustavus Adolphus College, 1, 5, 57
Guth, Alan, 97, 98, 100

Hameroff, Stuart, 179
“Hard Times” (Kadanoff), 19–20
Hartshorne, Charles, 272–273
Harwit, Martin, 111–112
Havel, Václav, 16
Hawking, Stephen, 72, 90–93, 95, 103, 112, 274, 278
Hidden variables, 82
History, definition of, 249–250
Hitchhiker’s Guide to the Galaxy, The (Adams), 69
Holland, John, 199–200, 214, 286
Holocaust, 35, 43, 73
Hoyle, Fred, 99, 104–109, 112
Hubble constant, 109–110
Human condition, 73, 147
Humanities, 152. See also under Science
Human nature, 147, 148, 149, 153, 154, 163, 274, 282. See also Genetics: and human behavior
Hut, Piet, 238, 245
Huxley, Aldous, 225
Hyperspace (Kaku), 71

Idea of Progress, The (Bury), 14
Identity theory, 176
IEEE Spectrum, 284
Imagination, 23–24, 77, 80, 93, 277
Immortality, 10–11, 257, 273, 278, 283, 284
Immune system, 168, 215, 224
Infinite in All Directions (Dyson), 259, 261
Inflation theory, 97–103, 106, 283, 288
Information/information theory, 77, 79, 185, 212–213, 245, 256
    algorithmic information theory, 201–202, 233, 236, 249, 257
Initial conditions, 196, 200, 259, 288
Institute of Astronomy at Cambridge, 104, 107
Intellectuals, 47, 119, 123, 152, 168, 281
Internet, 183, 248
Ironic science, 54, 61, 62, 89, 127, 138, 165, 193, 199, 252, 254, 263, 267, 291
    vs. empirical science, 41, 92, 160
    ironic cosmology, 112–113
    ironic neuroscience, 174, 175
    and negative capability, 24
    types of practitioners of, 155–156
Irrationality, 17, 35, 42, 61, 101, 132, 205, 225, 279
Isis (journal), 43
It from bit, 79, 80, 88, 185, 213

Jackson, E. Atlee, 233
James, William, 163, 271
Jeffrey, Eber, 13–14
Josephson, Brian, 252–253
Journal of the Patent Office Society, 13
Joyce, James, 67, 216

Kadanoff, Leo, 19–21, 22
Kaku, Michio, 71
Kallosh, Renata, 98
Kant, Immanuel, 23, 37–38, 136, 153, 236, 244
Nietzsche, Friedrich, 3–4, 54, 151, 250, 254
Nixon, Richard, 283
Nobel Prize, 57, 60, 87, 167, 168, 176, 214, 221, 222, 252, 260, 276, 279, 285
Nobel symposium in Sweden (1990), 90, 94, 95, 96
Nonlocality, 83
“Nothing Left to Invent” (Anderson), 13
Nuclear power, 3, 15, 37, 47, 283
Nuclear weapons, 3, 15, 19, 74, 76, 277, 279, 283
Nucleosynthesis, 109, 110

Occam’s razor, 66, 240
Omega Point theory, 263–268, 270, 273, 274
Open Society and Its Enemies, The (Popper), 27
Order out of Chaos (Prigogine and Stengers), 222–223, 226
Oreskes, Naomi, 206
Origin of life, 10, 29, 32, 105, 119, 134, 136, 137, 139–143, 166, 204, 290
Origin of Species, On The (Darwin), 6, 15, 114
Origins of Order, The: Self-Organization and Selection in Evolution (Kauffman), 133, 137
Orwell, George, 39, 225
Ostriker, Jeremiah, 281, 282
Overbye, Dennis, 67

Packard, Norman, 201
Pagels, Heinz, 197
Pannenberg, Wolfhart, 264
Paradigm, use of term, 35, 36–37, 38, 39–40
Paradoxes of Progress, The (Stent), 15
Park, Robert, 291
Particle physics, 6, 10, 19, 21, 51, 55–60, 68, 82–83, 161, 207, 227, 228, 238–239, 247, 279
    limits of empirical verification of, 160
    particle accelerators, 16, 18, 47, 67, 73, 216, 247, 249 (see also Superconducting supercollider)
    problem of particles as points, 56
    standard model of, 20, 56, 58, 59–60, 66, 70, 74, 216
    success of, 58, 208 (see also Science: success of)
    unified theory of, 65, 92, 97, 102, 103, 200 (see also Physics: unification of)
    See also Physics; Quarks; Superstring theory
Pauling, Linus, 2
Peat, F. David, 86
Peirce, Charles Sanders, 26, 40
Penrose, Roger, 177–181, 186, 188, 192, 215, 220, 221, 253, 262, 274
Penzias, Arno, 107
Period doubling, 227–228
Philosophy, 21, 24, 25–54, 77, 87–88, 168, 194, 238
    end of, 271
    and mind-body problem, 185
    philosophical problems as pseudo-problems, 52–53
    philosophy of science, 70, 156
Physics, 3, 8, 33, 55–89, 180–181, 222, 224
Quantum mechanics (continued)
    as final theory of physics, 75
    as incomplete, 83
    many-worlds interpretation of, 71–72, 88
    and nuclear fission, 76
    paradoxes of, 71, 178
    philosophical implications of, 77, 88–89
    pilot-wave interpretation of, 81, 83, 86, 88
    quantum chromodynamics, 56, 63
    quantum cosmology, 90, 103
    quantum electrodynamics, 281, 282
    quantum fluctuations, 90, 91, 97, 99, 239
    and two-slit experiment, 78
    See also Gravity: quantum gravity; Uncertainty principle
Quark and the Jaguar, The (Gell-Mann), 218
Quarks, 10, 19, 23, 56, 208, 216
Quarterly Review of Biology, 18

Randomness, 8, 115, 121, 128, 133, 173, 201, 211, 218, 236, 287, 288
Rasmussen, Steen, 243
Rationalism, 48
Rational morphology, 136
Reagan, Ronald, 195
Red-shift evidence, 94, 106, 109
Reductionism, 49, 55, 67, 136, 162, 166, 192, 196, 199, 218, 219, 224, 229, 243
    antireductionism, 214, 215
Relativism, 43, 52, 58, 80, 122, 123, 127, 131
Relativity theory, 11, 12, 20, 21, 27, 55, 63, 65, 66, 77, 83, 84, 94, 106, 138–139, 176, 253, 281, 282
Religion, 7, 16–17, 30, 31, 44, 49–50, 53, 55, 191, 259, 291
    Eastern, 81, 216
    fundamentalist, 17, 42, 48, 226
    See also Bible; God; under Science
Remembered Present, The (Edelman), 167
Rescher, Nicholas, 21–23
Reviews of Modern Physics, 260
RNA, 141–142
Robots, 171–172, 173, 174, 188, 256
Rose, Charlie, 68, 281
Rosen, Nathan, 83
Rosen, Robert, 245
Rössler, Otto, 241–245
Ruelle, David, 214, 228
Russia, 212. See also Soviet Union

Sacks, Oliver, 174
Sacramento News, 280
Sagan, Dorion, 131
Salam, Abdus, 57
Santa Fe Institute, 133, 135, 199, 216, 218, 220, 224, 245
    workshop on limits of scientific knowledge at, 233–245
Sapolsky, Harvey, 284
Schopenhauer, Arthur, 243
Schramm, David, 94–96, 101–102, 109, 110, 111
Schrödinger, Erwin, 5
Schwarz, John, 63