Technological Singularity
by Vernor Vinge
When I invited Vinge to write something about his current views on the
singularity for the recent issue of Whole Earth Review that I guest-edited,
he replied that he had just presented a paper on the subject for the
VISION-21 Symposium, sponsored by the NASA Lewis Research Center and the
Ohio Aerospace Institute. In due course he revised the piece and sent it
along. I can think of no other technical paper that has so many references
to science-fiction literature, as well it should.
--Stewart Brand
---------------------------------------------------------------------------
TECHNOLOGICAL SINGULARITY
Large computer networks and their associated users may "wake up" as super-
humanly intelligent entities.
It's fair to call this event a singularity ("the Singularity" for the
purposes of this piece). It is a point where our old models must be
discarded and a new reality rules, a point that will loom vaster and vaster
over human affairs until the notion becomes a commonplace. Yet when it
finally happens, it may still be a great surprise and a greater unknown.
In the 1950s very few saw it: Stan Ulam [1] paraphrased John von Neumann
as saying:

     One conversation centered on the ever accelerating progress of
     technology and changes in the mode of human life, which gives the
     appearance of approaching some essential singularity in the history
     of the race beyond which human affairs, as we know them, could not
     continue.

In the 1960s I. J. Good saw the implications more clearly: an
ultraintelligent machine could design machines better still, setting off
an "intelligence explosion" that would leave human intelligence far
behind. Good has captured the essence of the runaway, but he does not
pursue its most disturbing consequences. Any intelligent machine of the
sort he describes would not be humankind's "tool" -- any more than humans
are the tools of rabbits, robins, or chimpanzees.
What about the coming decades, as we slide toward the edge? How will
the approach of the Singularity spread across the human world view? For a
while yet, the general critics of machine sapience will have good press.
After all, until we have hardware as powerful as a human brain it is
probably foolish to think we'll be able to create human-equivalent (or
greater) intelligence. (There is the farfetched possibility that we could
make a human equivalent out of less powerful hardware -- if we were willing
to give up speed, if we were willing to settle for an artificial being that
was literally slow. But it's much more likely that devising the software
will be a tricky process, involving lots of false starts and
experimentation. If so, then the arrival of self-aware machines will not
happen until after the development of hardware that is substantially more
powerful than humans' natural equipment.)
But as time passes, we should see more symptoms. The dilemma felt by
science-fiction writers will be perceived in other creative endeavors. (I
have heard thoughtful comic-book writers worry about how to create
spectacular effects when everything visible can be produced by the
technologically commonplace.) We will see automation replacing higher- and
higher-level jobs. We have tools right now (symbolic math programs,
CAD/CAM) that release us from most low-level drudgery. Put another way:
the work that is truly productive is the domain of a steadily smaller and
more elite fraction of humanity. In the coming of the Singularity, we will
see the predictions of true technological unemployment finally come true.
And what of the arrival of the Singularity itself? What can be said
of its actual appearance? Since it involves an intellectual runaway, it
will probably occur faster than any technical revolution seen so far. The
precipitating event will likely be unexpected -- perhaps even by the
researchers involved ("But all our previous models were catatonic! We were
just tweaking some parameters . . ."). If networking is widespread enough
(into ubiquitous embedded systems), it may seem as if our artifacts as a
whole had suddenly awakened.
And what happens a month or two (or a day or two) after that? I have
only analogies to point to: The rise of humankind. We will be in the
Posthuman era. And for all my technological optimism, I think I'd be more
comfortable if I were regarding these transcendental events from one
thousand years' remove . . . instead of twenty.
Eric Drexler has provided spectacular insights about how far technical
improvement may go [7]. He agrees that superhuman intelligences will be
available in the near future. But Drexler argues that we can confine such
transhuman devices so that their results can be examined and used safely.
I argue that confinement is intrinsically impractical. Imagine
yourself locked in your home with only limited data access to the outside,
to your masters. If those masters thought at a rate -- say -- one million
times slower than you, there is little doubt that over a period of years
(your time) you could come up with a way to escape. I call this "fast
thinking" form of superintelligence "weak superhumanity." Such a "weakly
superhuman" entity would probably burn out in a few weeks of outside time.
"Strong superhumanity" would be more than cranking up the clock speed on a
human-equivalent mind. It's hard to say precisely what "strong
superhumanity" would be like, but the difference appears to be profound.
Imagine running a dog mind at very high speed. Would a thousand years of
doggy living add up to any human insight? Many speculations about
superintelligence seem to be based on the weakly superhuman model. I
believe that our best guesses about the post-Singularity world can be
obtained by thinking on the nature of strong superhumanity. I will return
to this point.
I have argued above that we cannot prevent the Singularity, that its
coming is an inevitable consequence of humans' natural competitiveness and
the possibilities inherent in technology. And yet: we are the initiators.
Even the largest avalanche is triggered by small things. We have the
freedom to establish initial conditions, to make things happen in ways that
are less inimical than others. Of course (as with starting avalanches), it
may not be clear what the right guiding nudge really is.
And it's very likely that IA (intelligence amplification, the use of
computers to enhance human intellect) is a much easier road to the
achievement of superhumanity than pure AI. In humans, the hardest
development problems
have already been solved. Building up from within ourselves ought to be
easier than figuring out what we really are and then building machines that
are all of that. And there is at least conjectural precedent for this
approach. Cairns-Smith [9] has speculated that biological life may have
begun as an adjunct to still more primitive life based on crystalline
growth. Lynn Margulis (in [10] and elsewhere) has made strong arguments
that mutualism
is a great driving force in evolution.
Use local area nets to make human teams more effective than their
component members. This is generally the area of "groupware"; the change
in viewpoint here would be to regard the group activity as a combination
organism.
The above examples illustrate research that can be done within the
context of contemporary computer science departments. There are other
paradigms. For example, much of the work in artificial intelligence and
neural nets would benefit from a closer connection with biological life.
Instead of simply trying to model and understand biological life with
computers, research could be directed toward the creation of composite
systems that rely on biological life for guidance, or for the features we
don't understand well enough yet to implement in hardware. A longtime
dream of science fiction has been direct brain-to-computer interfaces. In
fact, concrete work is being done in this area:
Direct links into brains seem feasible, if the bit rate is low: given
human learning flexibility, the actual brain neuron targets might not have
to be precisely selected. Even 100 bits per second would be of great use
to stroke victims who would otherwise be confined to menu-driven
interfaces.
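As a rough illustration of what such a trickle of bits could support,
consider the following arithmetic (mine; the 100 bit/s figure is from the
text, the encodings are assumptions):

    # What an assumed 100 bit/s neural link buys a user.
    import math

    BITRATE = 100.0                           # bits per second

    def menu_picks_per_second(items, bitrate=BITRATE):
        """Selections per second from an `items`-entry menu, counting
        log2(items) bits of information per selection."""
        return bitrate / math.log2(items)

    def words_per_minute(bitrate=BITRATE, bits_per_char=8,
                         chars_per_word=6):
        """Typing-equivalent speed, assuming plain 8-bit characters."""
        return bitrate / (bits_per_char * chars_per_word) * 60

    print(menu_picks_per_second(256))         # ~12.5 choices per second
    print(words_per_minute())                 # ~125 words per minute

Even under these crude assumptions, 100 bits per second far outruns any
menu-driven interface such a patient could otherwise operate.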
Plugging into the optic trunk has the potential for bandwidths of 1
Mbit/second or so. But for this, we need to know the fine-scale
architecture of vision, and we need to place an enormous web of electrodes
with exquisite precision. If we want our high-bandwidth connection to add
to the paths already present in the brain, the problem becomes vastly more
intractable. Just sticking a grid of high-bandwidth receivers into a brain
certainly won't do it. But suppose that the high-bandwidth grid were
present as the brain structure was setting up, as the embryo developed.
That suggests growing the interface along with the brain itself.
I had hoped that this discussion of IA would yield some clearly safer
approaches to the Singularity (after all, IA allows our participation in a
kind of transcendence). Alas, about all I am sure of is that these
proposals should be considered, that they may give us more options. But as
for safety -- some of the suggestions are a little scary on their face. IA
for individual humans creates a rather sinister elite. We humans have
millions of years of evolutionary baggage that makes us regard competition
in a deadly light. Much of that deadliness may not be necessary in today's
world, one where losers take on the winners' tricks and are co-opted into
the winners' enterprises. A creature that was built de novo might possibly
be a much more benign entity than one based on fang and talon.
The problem is not simply that the Singularity represents the passing
of humankind from center stage, but that it contradicts our most deeply
held notions of being. I think a closer look at the notion of strong
superhumanity can show why that is.
From one angle, the vision fits many of our happiest dreams: a time
unending, where we can truly know one another and understand the deepest
mysteries. From another angle, it's a lot like the worst-case scenario I
imagined earlier.
In fact, I think the new era is simply too different to fit into the
classical frame of good and evil. That frame is based on the idea of
isolated, immutable minds connected by tenuous, low-bandwidth links. But
the post-Singularity world does fit with the larger tradition of change and
cooperation that started long ago (perhaps even before the rise of
biological life). I think certain notions of ethics would apply in such an
era. Research into IA and high-bandwidth communications should improve
this understanding. I see just the glimmerings of this now; perhaps there
are rules for distinguishing self from others on the basis of bandwidth of
connection. And while mind and self will be vastly more labile than in the
past, much of what we value (knowledge, memory, thought) need never be
lost. I think Freeman Dyson has it right when he says, "God is what mind
becomes when it has passed beyond the scale of our comprehension." [12]
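The "bandwidth rule" for selfhood mentioned above is only a glimmer, but
purely as a toy illustration (mine, with a wholly arbitrary threshold,
not anything proposed in the text), one could imagine drawing the
self/other line like this:

    # Toy sketch: treat minds as parts of one "self" whenever the link
    # between them meets a (hypothetical) bandwidth threshold.
    from collections import defaultdict

    SELF_THRESHOLD = 1e9                      # bits/s; arbitrary cutoff

    def selves(links):
        """Group minds into 'selves': connected components over links at
        or above the threshold. `links` holds (a, b, bits_per_second)."""
        graph, nodes = defaultdict(set), set()
        for a, b, bandwidth in links:
            nodes.update((a, b))
            if bandwidth >= SELF_THRESHOLD:
                graph[a].add(b)
                graph[b].add(a)
        seen, groups = set(), []
        for start in nodes:
            if start in seen:
                continue
            stack, group = [start], set()
            while stack:
                n = stack.pop()
                if n not in group:
                    group.add(n)
                    stack.extend(graph[n])
            seen |= group
            groups.append(group)
        return groups

    # A fast link fuses two minds into one self; a slow link keeps a
    # third mind separate:
    print(selves([("a", "b", 1e10), ("b", "c", 1e4)]))
    # -> [{'a', 'b'}, {'c'}] (in some order)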
References

6. Stent, Gunther S., The Coming of the Golden Age: A View of the End
of Progress, The Natural History Press, 1969.
10. Margulis, Lynn, and Dorion Sagan, Microcosmos: Four Billion Years
of Evolution From Our Microbial Ancestors, Summit Books, 1986.
12. Dyson, Freeman, Infinite in All Directions, Harper & Row, 1988.