Chapter 8

BIO-INFORMATION TECHNOLOGY
has to break many hydrogen bonds to make a hole for itself, but it cannot
pay for these broken bonds by forming new hydrogen bonds with water.
There is a special configuration of the system of water and phospholipid
molecules which has a very low Gibbs free energy — the lipid bilayer.
In this configuration, all the hydrophilic polar heads are in contact with
water, while the hydrophobic nonpolar tails are in the interior of the double
membrane, away from the water, and in close contact with each other, thus
maximizing their mutual Van der Waals attractions. (The basic structure
of biological membranes is the lipid bilayer just described, but there are
also other components, such as membrane-bound proteins, caveolae, and
ion pores.)
The mechanism of self-organization of supramolecular structures is one
of the most important universal mechanisms of biology. Chemical reactions
take place spontaneously when the change in Gibbs free energy produced
by the reaction is negative, i.e., chemical reactions take place in such a
direction that the entropy of the universe increases. When spontaneous
chemical reactions take place, the universe moves from a less probable con-
figuration to a more probable one. The same principle controls the mo-
tion of larger systems, where molecules arrange themselves spontaneously
to form supramolecular structures. Self-assembling collections of molecules
move in such a way as to minimize their Gibbs free energy, thus maximizing
the entropy of the universe.
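The criterion for spontaneity used here can be written out explicitly. For a process occurring at constant temperature T and pressure, the standard thermodynamic relations (stated as a reminder; the enthalpy change ΔH is not otherwise used in this chapter) give

\Delta G = \Delta H - T\,\Delta S_{system}, \qquad \Delta S_{universe} = \Delta S_{system} - \frac{\Delta H}{T} = -\frac{\Delta G}{T}

so that a decrease in the Gibbs free energy of the system is the same thing as an increase in the entropy of the universe.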
Biological structures of all kinds are formed spontaneously from their
components because assembly information is written onto their joining sur-
faces in the form of complementary surface contours and complementary
patterns of excess charge2. Matching pieces fit together, and the Gibbs free
energy of the system is minimized. Virtually every structure observed in
biology is formed in this way — by a process analogous to crystallization,
except that biological structures can be far more complex than ordinary
crystals.
Researchers in microelectronics, inspired by the self-assembly of biologi-
cal structures, dream of using the same principles to generate self-organizing
integrated circuits with features so small as to approach molecular dimen-
sions. As we mentioned in Chapter 7, the speed of a computing operation
is limited by the time that it takes an electrical signal (moving at approx-
imately the speed of light) to traverse a processing unit. The desire to
produce ever greater computation speeds as well as ever greater memory
scope. The new microscope’s resolution was so great that single atoms could
be observed. The scanning tunneling microscope consists of a supersharp
conducting tip, which is brought near enough to a surface so that quantum
mechanical tunneling of electrons can take place between tip and surface
when a small voltage is applied. The distance between the supersharp tip
and the surface is controlled by means of a piezoelectric crystal. As the
tip is moved along the surface, its distance from the surface (and hence the
tunneling current) is kept constant by applying a voltage to the piezoelec-
tric crystal, and this voltage as a function of position gives an image of the
surface.
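The feedback principle just described can be illustrated with a small numerical sketch. The Python fragment below is not a model of a real instrument: the exponential current law, the decay constant, the current setpoint and the feedback gain are all illustrative assumptions, chosen only to show how holding the tunneling current constant makes the recorded tip height trace the surface profile.

import math

# An illustrative sketch of constant-current feedback: the tunneling current
# falls off exponentially with the tip-surface gap, and a feedback loop adjusts
# the tip height (in the real instrument, via the voltage applied to a
# piezoelectric crystal) so that the current stays at a setpoint.  The record
# of tip heights then traces the surface.  All numerical values are arbitrary.

KAPPA = 10.0                          # decay constant of the current, 1/nm
I_SETPOINT = math.exp(-KAPPA * 0.5)   # current corresponding to the desired 0.5 nm gap
GAIN = 0.5                            # fraction of the height error corrected per step

def tunneling_current(gap_nm):
    """Tunneling current as a function of the tip-surface gap (arbitrary units)."""
    return math.exp(-KAPPA * gap_nm)

def scan(surface_heights, steps_per_point=50):
    """Record the tip height needed to hold the current constant at each position."""
    tip_height = surface_heights[0] + 1.0           # start well above the surface
    image = []
    for h in surface_heights:
        for _ in range(steps_per_point):
            current = tunneling_current(tip_height - h)
            # Too much current means the tip is too close: raise it, and vice versa.
            error_nm = math.log(current / I_SETPOINT) / KAPPA
            tip_height += GAIN * error_nm
        image.append(tip_height)
    return image

if __name__ == "__main__":
    # A flat surface with a single 0.1 nm bump.
    surface = [0.1 if 8 <= i <= 11 else 0.0 for i in range(20)]
    print([round(z, 3) for z in scan(surface)])

Because the current depends exponentially on the gap, even the 0.1 nm bump in this example changes the current by a factor of e, which is why the instrument's vertical resolution is so extraordinary.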
Variations on the scanning tunneling microscope allow single atoms
to be deposited or manipulated on a surface. Thus there is a hope that
nanoscale circuit templates can be constructed by direct manipulation of
atoms and molecules, and that the circuits can afterwards be reproduced
using autoassembly mechanisms.
The scanning tunneling microscope makes use of a quantum mechanical
effect: Electrons exhibit wavelike properties, and can tunnel small distances
into regions of negative kinetic energy — regions which would be forbidden
to them by classical mechanics. In general it is true that for circuit ele-
ments with feature sizes in the nanometer range, quantum effects become
important. For conventional integrated circuits, the quantum effects which
are associated with this size-range would be a nuisance, but workers in nan-
otechnology hope to design integrated circuits which specifically make use
of these quantum effects.
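The tunneling referred to here can be made quantitative with the standard barrier-penetration estimate (the symbols below are the usual ones and do not appear elsewhere in this excerpt). For an electron of mass m and energy E meeting a barrier of height V and width d, the transmission probability falls off roughly as

T \sim e^{-2\kappa d}, \qquad \kappa = \frac{\sqrt{2m(V - E)}}{\hbar}

The extreme sensitivity of this probability to the barrier width d is what makes the tunneling current such a precise probe of the tip-surface distance, and it is also the reason why quantum effects can no longer be ignored when circuit features shrink toward nanometer dimensions.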
Oesterhelt et al., Quart. Rev. Biophys. 24, 425-478 (1991); W. Stoeckenius and R. Bogomolni, Ann. Rev. Biochem. 52, 587-616 (1982).
Fig. 8.1 A Threshold Logic Unit (TLU) of the type proposed by McCulloch and Pitts.
The quantity a is called the activation. If the activation exceeds the threshold θ, the unit "fires", i.e. it produces an output y given by

y = \begin{cases} 1 & \text{if } a \geq \theta \\ 0 & \text{if } a < \theta \end{cases} \qquad (8.2)
The decisions taken by a TLU can be given a geometrical interpretation:
The input signals can be thought of as forming the components of a vector,
x = (x_1, x_2, ..., x_N), in an N-dimensional space called pattern space. The weights also form a vector, w = (w_1, w_2, ..., w_N), in the same space. If we
write an equation setting the scalar product of these two vectors equal to
some constant,
\mathbf{w} \cdot \mathbf{x} \equiv \sum_{j=1}^{N} w_j x_j = \theta \qquad (8.3)
then this equation defines a hyperplane in pattern space, called the decision
hyperplane. The decision hyperplane divides pattern space into two parts:
(1) input pulse patterns which will produce firing of the TLU, and (2)
patterns which will not cause firing.
The position and orientation of the decision hyperplane can be changed
by altering the weight vector w and/or the threshold θ. Therefore it is
convenient to put the threshold and the weights on the same footing by
introducing an augmented weight vector,
\mathbf{W} = (w_1, w_2, \ldots, w_N, \theta) \qquad (8.4)

and an augmented input pattern vector,

\mathbf{X} = (x_1, x_2, \ldots, x_N, -1) \qquad (8.5)
In the (N+1)-dimensional augmented pattern space, the decision hyperplane now passes through the origin, and equation (8.3) can be rewritten in the form

\mathbf{W} \cdot \mathbf{X} \equiv \sum_{j=1}^{N+1} W_j X_j = 0 \qquad (8.6)
Those input patterns for which the scalar product W · X is positive or zero
will cause the unit to fire, but if the scalar product is negative, there will
be no response.
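It is worth spelling out why the augmentation works. With the definitions (8.4) and (8.5), the extra components fold the threshold into the scalar product:

\mathbf{W} \cdot \mathbf{X} = \sum_{j=1}^{N} w_j x_j + \theta\,(-1) = \mathbf{w} \cdot \mathbf{x} - \theta

so the firing condition a ≥ θ of equation (8.2) becomes simply W · X ≥ 0, and the decision hyperplane of equation (8.3) becomes the set of points satisfying W · X = 0, which indeed passes through the origin of the augmented space.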
If we wish to “teach” a TLU to fire when presented with a particular
pattern vector X, we can evaluate its scalar product with the current aug-
mented weight vector W. If this scalar product is negative, the TLU will
not fire, and therefore we know that the weight vector needs to be changed.
If we replace the weight vector by

\mathbf{W}' = \mathbf{W} + \gamma\,\mathbf{X} \qquad (8.7)

where γ is a small positive number, then the new augmented weight vector W′ will point in a direction more nearly the same as the direction of X. This
change will be a small step in the direction of making the scalar product
positive, i.e. a small step in the right direction.
Why not take a large step instead of a small one? A small step is best
because there may be a whole class of input patterns to which we would
like the TLU to respond by firing. If we make a large change in weights to
help a particular input pattern, it may undo previous learning with respect
to other patterns.
It is also possible to teach a TLU to remain silent when presented with a particular input pattern vector. To do so, we evaluate the augmented scalar product W · X as before, but now, when we desire silence rather than firing, it is a non-negative scalar product that signals an error, and the correction is a small step in the opposite direction, replacing W by W − γX.
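The two learning rules just described, a step of +γX when the unit failed to fire but should have, and a step of −γX when it fired but should have remained silent, can be put into a short program. The sketch below is illustrative only: the training task (the logical AND function), the learning rate and the number of passes through the data are arbitrary choices, not anything prescribed in the text.

import random

# A minimal sketch of a Threshold Logic Unit and its error-correcting training.
# The threshold is folded into an augmented weight vector W (equation 8.4) and
# each input pattern into an augmented vector X (equation 8.5); the unit fires
# when the scalar product W.X is positive or zero.

GAMMA = 0.1   # the small positive step size of equation (8.7)

def dot(a, b):
    return sum(u * v for u, v in zip(a, b))

def fires(W, X):
    """Output y of the TLU: 1 if the augmented scalar product is >= 0, else 0."""
    return 1 if dot(W, X) >= 0 else 0

def train(patterns, targets, n_inputs, epochs=100):
    """Adjust the augmented weight vector by small steps, as described in the text."""
    W = [random.uniform(-0.5, 0.5) for _ in range(n_inputs + 1)]   # weights plus threshold
    for _ in range(epochs):
        for x, t in zip(patterns, targets):
            X = list(x) + [-1]                     # augmented input pattern
            y = fires(W, X)
            if y == 0 and t == 1:                  # should have fired: W' = W + gamma X
                W = [w + GAMMA * xi for w, xi in zip(W, X)]
            elif y == 1 and t == 0:                # should have stayed silent: W' = W - gamma X
                W = [w - GAMMA * xi for w, xi in zip(W, X)]
    return W

if __name__ == "__main__":
    patterns = [(0, 0), (0, 1), (1, 0), (1, 1)]
    targets = [0, 0, 0, 1]                         # fire only for the pattern (1, 1)
    W = train(patterns, targets, n_inputs=2)
    print([fires(W, list(x) + [-1]) for x in patterns])    # expected: [0, 0, 0, 1]

Because the four patterns of the AND function can be separated by a single hyperplane, repeated small steps of this kind eventually place the decision hyperplane so that all of them are classified correctly.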
Genetic algorithms
Genetic algorithms represent a second approach to machine learning and
to computational problems involving optimization. Like neural network computation, this alternative approach is inspired by biology, in this case by the Darwinian concept of natural selection. In a
genetic algorithm, the hardware is that of a conventional computer; but the
software creates a population and allows it to evolve in a manner closely
analogous to biological evolution.
One of the most important pioneers of genetic algorithms was John
Henry Holland (1929– ). After attending MIT, where he was influenced
by Norbert Wiener, Holland worked for IBM, helping to develop the 701.
He then continued his studies at the University of Michigan, obtaining the
first Ph.D. in computer science ever granted in America. Between 1962
and 1965, Holland taught a graduate course at Michigan called “Theory
of Adaptive Systems”. His pioneering course became almost a cult, and
together with his enthusiastic students he applied the genetic algorithm
approach to a great variety of computational problems. One of Holland’s
students, David Goldberg, even applied a genetic algorithm program to the
problem of allocating natural gas resources.
The programs developed by Holland and his students were modelled
after the natural biological processes of reproduction, mutation, selection
and evolution. In biology, the information passed between generations is
contained in chromosomes — long strands of DNA where the genetic mes-
sage is written in a four-letter language, the letters being adenine, thymine,
guanine and cytosine. Analogously, in a genetic algorithm, the information
is coded in a long string, but instead of a four-letter language, the code is
binary: The chromosome-analogue is a long string of 0’s and 1’s, i.e., a long
binary string. One starts with a population that has sufficient diversity so
that natural selection can act.
The genotypes are then translated into phenotypes. In other words,
the information contained in the long binary string (analogous to the geno-
type of each individual) corresponds to an entity, the phenotype, whose
fitness for survival can be evaluated. The mapping from genotype to phe-
notype must be such that very small changes in the binary string will not
produce radically different phenotypes. From the initial population, the
most promising individuals are selected to be the parents of the next generation, and of these, the fittest are allowed to produce the largest number of offspring. Before reproduction takes place, however, random mutations are introduced into the binary strings.
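The cycle just described, a population of binary strings, evaluation of the fitness of the corresponding phenotypes, selection of the most promising individuals as parents, reproduction weighted toward the fittest, and random mutation, can be sketched in a few lines of code. Everything specific below is an illustrative assumption: the fitness function simply measures the fraction of 1's in the string (so that a single flipped bit changes the phenotype only slightly), and the population size, mutation rate and number of generations are arbitrary.

import random

# A minimal sketch of a genetic algorithm of the kind described above.  The
# chromosome-analogue is a binary string; the fittest members of each
# generation are selected as parents and produce mutated offspring.

STRING_LENGTH = 16
POP_SIZE = 40
N_PARENTS = 10
MUTATION_RATE = 0.01      # probability of flipping each bit
GENERATIONS = 60

def random_genotype():
    return [random.randint(0, 1) for _ in range(STRING_LENGTH)]

def fitness(genotype):
    """Illustrative phenotype: the fraction of 1's in the binary string."""
    return sum(genotype) / STRING_LENGTH

def mutate(genotype):
    """Flip each bit independently with a small probability."""
    return [1 - bit if random.random() < MUTATION_RATE else bit for bit in genotype]

def evolve():
    population = [random_genotype() for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        # The most promising individuals become the parents of the next generation,
        parents = sorted(population, key=fitness, reverse=True)[:N_PARENTS]
        weights = [fitness(p) for p in parents]
        # and the fittest among them produce the largest number of offspring.
        # Random mutations occur before reproduction.
        population = [mutate(random.choices(parents, weights=weights)[0])
                      for _ in range(POP_SIZE)]
    return max(population, key=fitness)

if __name__ == "__main__":
    best = evolve()
    print("fittest genotype:", "".join(map(str, best)), "fitness:", fitness(best))

Real genetic algorithms usually also recombine pairs of parent strings, but even this stripped-down version shows the population climbing steadily toward the fittest genotype.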
Artificial life
As Aristotle pointed out, it is difficult to define the precise border between
life and nonlife. It is equally difficult to give a precise definition of artificial
life. Of course the term means “life produced by humans rather than by
nature”, but what is life? Is self-replication the only criterion? The phrase
“produced by humans” also presents difficulties. Humans have played a
role in creating domestic species of animals and plants. Can cows, dogs,
and high-yield wheat varieties be called "artificial life"? In one sense, they
can. These species and varieties certainly would not have existed without
human intervention.
We come nearer to what most people might call “artificial life” when we
take parts of existing organisms and recombine them in novel ways, using
the techniques of biotechnology. For example, Steen Willadsen7 , working at
the Animal Research Station in Cambridge, England, was able to construct
chimeras by operating under a microscope on embryos at the eight-cell
stage. The zona pellucida is a transparent shell that surrounds the cells of
7 Willadsen is famous for having made the first verified and reproducible clone of a
mammal. In 1984 he made two genetically identical lambs from early sheep embryo
cells.
the embryo. Willadsen was able to cut open the zona pellucida, to remove
the cells inside, and to insert a cell from a sheep embryo together with
one from a goat embryo. The chimeras which he made in this way were
able to grow to be adults, and when examined, their cells proved to be
a mosaic, some cells carrying the sheep genome while others carried the
genome of a goat. By the way, Willadsen did not create his chimeras in
order to produce better animals for agriculture. He was interested in the
scientifically exciting problem of morphogenesis: How is the information of
the genome translated into the morphology of the growing embryo?
Human genes are now routinely introduced into embryos of farm an-
imals, such as pigs or sheep. The genes are introduced into regulatory
sequences which cause expression in mammary tissues, and the adult an-
imals produce milk containing human proteins. Many medically valuable
proteins are made in this way. Examples include human blood-clotting
factors, interleukin-2 (a protein which stimulates T-lymphocytes), colla-
gen and fibrinogen (used to treat burns), human fertility hormones, human
hemoglobin, and human serum albumin.
Transgenic plants and animals in which the genes of two or more species
are inherited in a stable Mendelian way have become commonplace in mod-
ern laboratory environments, and, for better or for worse, they are also
becoming increasingly common in the external global environment. These
new species might, with some justification, be called “artificial life”.
In discussing the origin of life in Chapter 3, we mentioned that a long
period of molecular evolution probably preceded the evolution of cells. In
the early 1970’s, S. Spiegelman performed a series of experiments in which
he demonstrated that artificial molecular evolution can be made to take
place in vitro. Spiegelman prepared a large number of test tubes in which
RNA replication could take place. The aqueous solution in each of the
test tubes consisted of RNA replicase, ATP, UTP (uridine triphosphate), GTP (guanosine triphosphate), CTP (cytidine triphosphate) and buffer. He
then introduced RNA from a bacteriophage into the first test tube. Af-
ter a predetermined interval of time, during which replication took place,
Spiegelman transferred a drop of solution from the first test tube to a new
tube, uncontaminated with RNA. Once again, replication began and after
an interval a drop was transferred to a third test tube. Spiegelman re-
peated this procedure several hundred times, and at the end he was able to
demonstrate that the RNA in the final tube differed from the initial sample,
and that it replicated faster than the initial sample. The RNA had evolved
by the classical Darwinian mechanisms of mutation and natural selection.
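The selective dynamics of this serial-transfer procedure can be imitated in a simple simulation. The sketch below is not a chemical model of RNA replication: the growth law, the mutation step and every numerical parameter are illustrative assumptions, intended only to show how repeated growth and transfer favor the fastest replicators.

import random

# An illustrative sketch of serial-transfer selection.  In each "tube" the
# molecules multiply for a fixed interval (represented by weighting each
# variant by its growth factor over that interval), a small drop is carried to
# the next tube, and occasional copying errors alter a molecule's replication
# rate.  Faster replicators come to dominate.

SAMPLE_SIZE = 200        # molecules in the drop transferred to each new tube
GROWTH_ROUNDS = 10       # replication rounds during the stay in one tube
MUTATION_RATE = 0.02     # chance that a transferred molecule carries an altered rate
N_TUBES = 100            # number of serial transfers

def next_tube(rates):
    """Composition of the drop carried onward from a tube seeded with `rates`."""
    # After GROWTH_ROUNDS rounds, a variant replicating at rate r has multiplied
    # by r ** GROWTH_ROUNDS, so the drop is a correspondingly weighted sample.
    weights = [r ** GROWTH_ROUNDS for r in rates]
    drop = random.choices(rates, weights=weights, k=SAMPLE_SIZE)
    # Copying is imperfect: occasionally a molecule's replication rate changes.
    return [max(0.5, r + random.gauss(0.0, 0.1)) if random.random() < MUTATION_RATE else r
            for r in drop]

if __name__ == "__main__":
    rates = [1.0] * SAMPLE_SIZE        # the RNA introduced into the first tube
    for _ in range(N_TUBES):
        rates = next_tube(rates)
    print("mean replication rate after the transfers:", round(sum(rates) / len(rates), 3))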
ing its progeny with the duplicated instruction tape, thus making the new
automaton both functional and fertile.
In presenting his kinematic model at the Hixon Symposium (organized
by Linus Pauling in the late 1940’s), von Neumann remarked that “...it is
clear that the instruction [tape] is roughly effecting the function of a gene.
It is also clear that the copying mechanism B performs the fundamental act
of reproduction, the duplication of the genetic material, which is clearly the
fundamental operation in the multiplication of living cells. It is also easy
to see how arbitrary alterations of the system...can exhibit certain traits
which appear in connection with mutation, lethality as a rule, but with a
possibility of continuing reproduction with a modification of traits”.
It is very much to von Neumann’s credit that his kinematic model (which
he invented several years before Crick and Watson published their DNA
structure) was organized in much the same way that we now know the
reproductive apparatus of a cell to be organized. Nevertheless he was dis-
satisfied with the model because his automaton contained too many “black
boxes”. There were too many parts which were supposed to have certain
functions, but for which it seemed very difficult to propose detailed mech-
anisms by which the functions could be carried out. His kinematic model
seemed very far from anything which could actually be built8.
Von Neumann discussed these problems with his close friend, the Polish-
American mathematician Stanislaw Ulam, who had for a long time been
interested in the concept of self-replicating automata. When presented
with the black box difficulty, Ulam suggested that the whole picture of an
automaton floating on a lake containing its parts should be discarded. He
proposed instead a model which later came to be known as the Cellular
Automaton Model. In Ulam’s model, the self-reproducing automaton lives
in a very special space. For example, the space might resemble an infinite checkerboard, in which each square constitutes a multi-state cell. The state
of each cell in a particular time interval is governed by the states of its
near neighbors in the preceding time interval according to relatively simple
laws. The automaton would then consist of a special configuration of cell
states, and its reproduction would correspond to production of a similar
8 Von Neumann’s kinematic automaton was taken seriously by the Mission IV Group,
part of a ten-week program sponsored by NASA in 1980 to study the possible use of
advanced automation and robotic devices in space exploration. The group, headed by
Richard Laing, proposed plans for self-reproducing factories, designed to function on
the surface of the moon or the surfaces of other planets. Like von Neumann's kinematic
automaton, to which they owed much, these plans seemed very far from anything that
could actually be constructed.
generation unless exactly three of its eight neighbors is alive. In that case,
the cell will be ‘born’ in the next generation”.
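Conway's rules are simple enough to state in a few lines of code. The sketch below assumes the standard complete rule set, of which only the birth rule is quoted in the excerpt above: a live cell survives into the next generation when two or three of its eight neighbors are alive and dies otherwise, while a dead cell is born when exactly three of its neighbors are alive. The starting configuration used here, a glider, is one of the simple patterns that made the game famous.

# One generation of Conway's Life on a small finite grid (cells beyond the
# edge are treated as dead).

def next_generation(grid):
    """Apply Life's rules once to a grid of 0's (dead) and 1's (alive)."""
    rows, cols = len(grid), len(grid[0])

    def live_neighbors(r, c):
        return sum(grid[i][j]
                   for i in range(r - 1, r + 2)
                   for j in range(c - 1, c + 2)
                   if (i, j) != (r, c) and 0 <= i < rows and 0 <= j < cols)

    new = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            n = live_neighbors(r, c)
            if grid[r][c] == 1:
                new[r][c] = 1 if n in (2, 3) else 0    # survival
            else:
                new[r][c] = 1 if n == 3 else 0         # birth
    return new

if __name__ == "__main__":
    grid = [[0] * 8 for _ in range(8)]
    for r, c in [(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)]:   # a glider
        grid[r][c] = 1
    for _ in range(4):                       # after four generations the glider has
        grid = next_generation(grid)         # moved one cell diagonally down the grid
    print("\n".join("".join("#" if cell else "." for cell in row) for row in grid))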
Originally Conway’s Life game was played by himself and by his col-
leagues at Cambridge University’s mathematics department in their com-
mon room: At first the game was played on table tops at tea time. Later
it spilled over from the tables to the floor, and tea time began to extend
far into the afternoons. Finally, wishing to convert a wider audience to
his game, Conway submitted it to Martin Gardner, who wrote a popular
column on “Mathematical Games” for the Scientific American. In this way
Life spread to MIT’s Artificial Intelligence Laboratory, where it created
such interest that the MIT group designed a small computer specifically
dedicated to rapidly implementing Life’s rules.
The reason for the excitement about Conway’s Life game was that it
seemed capable of generating extremely complex patterns, starting from rel-
atively simple configurations and using only its simple rules. Ed Fredkin,
the director of MIT’s Artificial Intelligence Laboratory, became enthusias-
tic about cellular automata because they seemed to offer a model for the
way in which complex phenomena can emerge from the laws of nature,
which are after all very simple. In 1982, Fredkin (who was independently
wealthy because of a successful computer company which he had founded)
organized a conference on cellular automata on his private island in the
Caribbean. The conference is notable because one of the participants was
a young mathematical genius named Stephen Wolfram, who was destined
to refine the concept of cellular automata and to become one of the leading
theoreticians in the field10.
One of Wolfram’s important contributions was to explore exhaustively
the possibilities of 1-dimensional cellular automata. No one before him had
looked at 1-dimensional CA’s, but in fact they had two great advantages:
The first of these advantages was simplicity, which allowed Wolfram to ex-
plore and classify the possible rule sets. Wolfram classified the rule sets
into 4 categories, according to the degree of complexity which they gen-
erated. The second advantage was that the configurations of the system
in successive generations could be placed under one another to form an
easily-surveyed 2-dimensional visual display. Some of the patterns gener-
ated in this way were strongly similar to the patterns of pigmentation on
the shells of certain molluscs. The strong resemblance seemed to suggest
that Wolfram’s 1-dimensional cellular automata might yield insights into
10 As many readers probably know, Stephen Wolfram was also destined to become a millionaire by inventing the elegant symbol-manipulating program system, Mathematica.
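A one-dimensional, two-state cellular automaton of the kind described above can be programmed in a few lines. In the sketch below, each cell's next state is determined by its own state and the states of its two nearest neighbors, and successive generations are printed one beneath another to give the two-dimensional display mentioned in the text. The particular rule number (30) and the use of periodic boundary conditions are illustrative choices, not anything taken from the text.

# A one-dimensional, two-state cellular automaton with nearest-neighbor rules.

def step(cells, rule_number):
    """Compute the next generation under the given elementary rule (0-255)."""
    rule = [(rule_number >> i) & 1 for i in range(8)]   # output for each 3-cell neighborhood
    n = len(cells)
    return [rule[(cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]]
            for i in range(n)]                           # periodic boundary conditions

if __name__ == "__main__":
    width, generations = 63, 30
    cells = [0] * width
    cells[width // 2] = 1                                # a single live cell in the middle
    for _ in range(generations):                         # stack the generations to form
        print("".join("#" if c else "." for c in cells)) # a two-dimensional display
        cells = step(cells, rule_number=30)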
Among the scientists who were attracted to the artificial life conferences
was the biologist Thomas Ray, a graduate of Florida State University and
Harvard, and an expert in the ecology of tropical rain forests. In the late
1970’s, while he was working on his Harvard Ph.D., Ray happened to have
a conversation with a computer expert from the MIT Artificial Intelligence
Lab, who mentioned to him that computer programs can replicate. To
Ray’s question “How?”, the AI man answered “Oh, it’s trivial”.
Ray continued to study tropical ecologies, but the chance conversation
from his Cambridge days stuck in his mind. By 1989 he had acquired an
academic post at the University of Delaware, and by that time he had
also become proficient in computer programming. He had followed with
interest the history of computer viruses. Were these malicious creations in
some sense alive? Could it be possible to make self-replicating computer
programs which underwent evolution by natural selection? Ray considered
John Holland’s genetic algorithms to be analogous to the type of selection
imposed by plant and animal breeders in agriculture. He wanted to see what
would happen to populations of digital organisms that found their own cri-
teria for natural selection — not humanly imposed goals, but self-generated
and open-ended criteria growing naturally out of the requirements for sur-
vival.
Although he had a grant to study tropical ecologies, Ray neglected the
project and used most of his time at the computer, hoping to generate
populations of computer organisms that would evolve in an open-ended
and uncontrolled way. Luckily, before starting his work in earnest, Thomas
Ray consulted Christopher Langton and his colleague James Farmer at the
Center for Nonlinear Studies in New Mexico. Langton and Farmer realized
that Ray’s project could be a very dangerous one, capable of producing
computer viruses or worms far more malignant and difficult to eradicate
than any the world had yet seen. They advised Ray to make use of Tur-
ing’s concept of a virtual computer. Digital organisms created in such a
virtual computer would be unable to live outside it. Ray adopted this plan,
and began to program a virtual world in which his freely evolving digital
organisms could live. He later named the system “Tierra”.
Ray’s Tierra was not the first computer system to aim at open-ended
evolution. Steen Rasmussen, working at the Danish Technical University,
had previously produced a system called “VENUS” (Virtual Evolution in
a Nonstochastic Universe Simulator) which simulated the very early stages
of the evolution of life on earth. However, Ray’s aim was not to understand
the origin of life, but instead to produce digitally something analogous to
digital intelligence, truly rooted in the nature of the medium, rather than
brutishly copied from organic nature. It would be a fundamentally alien in-
telligence, but one that would complement rather than duplicate our talents
and abilities”.
In Thomas Ray’s experiments, the source of thermodynamic information
is the electrical power needed to run the computer. In an important sense
one might say that the digital organisms in Ray’s Tierra system are living.
This type of experimentation is in its infancy, but since it combines the
great power of computers with the even greater power of natural selection,
it is hard to see where it might end, and one can fear that it will end badly
despite the precaution of conducting the experiments in a virtual computer.
Have Thomas Ray and other “a-lifers”13 created artificial living organ-
isms? Or have they only produced simulations that mimic certain aspects of
life? Obviously the answer to this question depends on the definition of life,
and there is no commonly agreed-upon definition. Does life have to involve
carbon chemistry? The a-lifers call such an assertion “carbon chauvinism”.
They point out that elsewhere in the universe there may exist forms of life
based on other media, and their program is to find medium-independent
characteristics which all forms of life must have.
In the present book, especially in Chapter 4, we have looked at the
phenomenon of life from the standpoint of thermodynamics, statistical me-
chanics and information theory. Seen from this viewpoint, a living organism
is a complex system produced by an input of thermodynamic information in
the form of Gibbs free energy. This incoming information keeps the system
very far away from thermodynamic equilibrium, and allows it to achieve a
statistically unlikely and complex configuration. The information content
of any complex (living) system is a measure of how unlikely it would be
to arise by chance. With the passage of time, the entropy of the universe
increases, and the almost unimaginably improbable initial configuration
of the universe is converted into complex free-energy-using systems that
could never have arisen by pure chance. Life maintains itself and evolves
by feeding on Gibbs free energy, that is to say, by feeding on the enormous
improbability of the initial conditions of the universe.
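The statement that the information content of a complex system measures how unlikely it would be to arise by chance can be given its usual quantitative form (a reminder of the standard definitions of information theory and statistical mechanics, not a formula appearing in this excerpt):

I = -\log_2 P \ \ \text{bits}, \qquad S = k_B \ln W

where P is the probability that the configuration would occur by chance and W is the number of microscopic states compatible with a given macroscopic state. The smaller P is, the larger the information required to specify the configuration, and Boltzmann's relation ties this statistical measure of improbability to the thermodynamic entropy whose increase accompanies every consumption of Gibbs free energy.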
All of the forms of artificial life that we have discussed derive their
complexity from the consumption of free energy. For example, Spiegelman’s
evolving RNA molecules feed on the Gibbs free energy of the phosphate
bonds of their precursors, ATP, GTP, UTP, and CTP. This free energy