Chapter 8

BIO-INFORMATION
TECHNOLOGY

The merging of information technology and biotechnology


Information technology and biology are today the two most rapidly devel-
oping fields of science. Interestingly, these two fields seem to be merging,
each gaining inspiration and help from the other. For example, computer
scientists designing both hardware and software are gaining inspiration from
physiological studies of the mechanism of the brain; and conversely, neu-
rophysiologists are aided by insights from the field of artificial intelligence.
Designers of integrated circuits wish to prolong the period of validity of
Moore’s law; but they are rapidly approaching physical barriers which will
set limits to the miniaturization of conventional transistors and integrated
circuits. They gain inspiration from biology, where the language of molec-
ular complementarity and the principle of autoassembly seem to offer hope
that molecular switches and self-assembled integrated circuits may one day
be constructed.
Geneticists, molecular biologists, biochemists and crystallographers
have now obtained so much information about the amino acid sequences
and structures of proteins and about the nucleotide sequences in genomes
that the full power of modern information technology is needed to store
and to analyze this information. Computer scientists, for their part, turn
to evolutionary genetics for new and radical methods of developing both
software and hardware — genetic algorithms and simulated evolution.

Self-assembly of supramolecular structures — Nanoscience


In previous chapters, we saw that the language of molecular complementarity (the “lock and key” fitting discovered by Paul Ehrlich) is the chief mechanism by which information is stored and transferred in biological systems. Biological molecules have physical shapes and patterns of excess charge¹ which are recognized by complementary molecules because they fit
together, just as a key fits the shape of a lock. Examples of biological “lock
and key” fitting are the fit between the substrate of an enzyme and the
enzyme’s active site, the recognition of an antigen by its specific antibody,
the specificity of base pairs in DNA and RNA, and the autoassembly of
structures such as viruses and subcellular organelles.

¹ They also have patterns of polarizable groups and reactive groups, and these patterns can also play a role in recognition.
One of the best studied examples of autoassembly through the mecha-
nism of molecular complementarity is the tobacco mosaic virus. The assem-
bled virus has a cylindrical form about 300 nm long (1 nm = 1 nanometer = 10⁻⁹ meters = 10 Ångstroms), with a width of 18 nm. The cylindrically
shaped virus is formed from about 2000 identical protein molecules. These
form a package around an RNA molecule with a length of approximately
6400 nucleotides. The tobacco mosaic virus can be decomposed into its
constituent molecules in vitro, and the protein and RNA can be separated
and put into separate bottles, as was discussed in Chapter 4.
If, at a later time, one mixes the protein and RNA molecules together in
solution, they spontaneously assemble themselves into new infective tobacco
mosaic virus particles. The mechanism for this spontaneous autoassembly
is a random motion of the molecules through the solvent until they ap-
proach each other in such a way that a fit is formed. When two molecules
fit closely together, with their physical contours matching, and with com-
plementary patterns of excess charge also matching, the Gibbs free energy
of the total system is minimized. Thus the self-assembly of matching com-
ponents proceeds spontaneously, just as every other chemical reaction pro-
ceeds spontaneously when the difference in Gibbs free energy between the
products and reactants is negative. The process of autoassembly is analo-
gous to crystallization, except that the structure formed is more complex
than an ordinary crystal.
A second very well-studied example of biological autoassembly is the
spontaneous formation of bilayer membranes when phospholipid molecules
are shaken together in water. Each phospholipid molecule has a small polar
(hydrophilic) head, and a long nonpolar (hydrophobic) tail. The polar head
is hydrophilic — water-loving — because it has large excess charges with
which water can form hydrogen bonds. By contrast, the non-polar tail
of a phospholipid molecule has no appreciable excess charges. The tail is
hydrophobic — it hates water — because to fit into the water structure it
has to break many hydrogen bonds to make a hole for itself, but it cannot
pay for these broken bonds by forming new hydrogen bonds with water.
There is a special configuration of the system of water and phospholipid
molecules which has a very low Gibbs free energy — the lipid bilayer.
In this configuration, all the hydrophilic polar heads are in contact with
water, while the hydrophobic nonpolar tails are in the interior of the double
membrane, away from the water, and in close contact with each other, thus
maximizing their mutual Van der Waals attractions. (The basic structure
of biological membranes is the lipid bilayer just described, but there are
also other components, such as membrane-bound proteins, caveolae, and
ion pores.)
The mechanism of self-organization of supramolecular structures is one
of the most important universal mechanisms of biology. Chemical reactions
take place spontaneously when the change in Gibbs free energy produced
by the reaction is negative, i.e., chemical reactions take place in such a
direction that the entropy of the universe increases. When spontaneous
chemical reactions take place, the universe moves from a less probable con-
figuration to a more probable one. The same principle controls the mo-
tion of larger systems, where molecules arrange themselves spontaneously
to form supramolecular structures. Self-assembling collections of molecules
move in such a way as to minimize their Gibbs free energy, thus maximizing
the entropy of the universe.
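The equivalence between these two statements can be made explicit with two standard thermodynamic relations (a worked aside using textbook formulas, not equations from this chapter). At constant temperature T and pressure, the heat ΔH released by a reacting system raises the entropy of the surroundings by -ΔH/T, so that

    \Delta S_{universe} = \Delta S_{system} - \frac{\Delta H}{T} = -\frac{\Delta G}{T},
    \qquad \text{where} \quad \Delta G = \Delta H - T\,\Delta S_{system}

Hence ΔG < 0 for the reacting system is precisely the statement that the entropy of the universe increases.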
Biological structures of all kinds are formed spontaneously from their
components because assembly information is written onto their joining sur-
faces in the form of complementary surface contours and complementary
patterns of excess charge². Matching pieces fit together, and the Gibbs free
energy of the system is minimized. Virtually every structure observed in
biology is formed in this way — by a process analogous to crystallization,
except that biological structures can be far more complex than ordinary
crystals.

² Patterns of reactive or polarizable groups also play a role.
Researchers in microelectronics, inspired by the self-assembly of biologi-
cal structures, dream of using the same principles to generate self-organizing
integrated circuits with features so small as to approach molecular dimen-
sions. As we mentioned in Chapter 7, the speed of a computing operation
is limited by the time that it takes an electrical signal (moving at approx-
imately the speed of light) to traverse a processing unit. The desire to
produce ever greater computation speeds as well as ever greater memory densities, motivates the computer industry’s drive towards ultraminiaturization.
Currently the fineness of detail in integrated circuits is limited by diffrac-
tion effects caused by the finite wavelength of the light used to project an
image of the circuit onto a layer of photoresist covering the chip where the
circuit is being built up. For this reason, there is now very active research on
photolithography using light sources with extremely short wavelengths, in
the deep ultraviolet, or even X-ray sources, synchrotron radiation, or elec-
tron beams. The aim of this research is to produce integrated circuits whose
feature size is in the nanometer range — smaller than 100 nm. In addition
to these efforts to create nanocircuits by “top down” methods, intensive
research is also being conducted on “bottom up” synthesis, using principles
inspired by biological self-assembly. The hope is to make use of “the spontaneous association of molecules, under equilibrium conditions, into stable, structurally well-defined aggregates, joined by non-covalent bonds”³.

³ G.M. Whitesides et al., Science 254, 1312-1314 (1991).
The Nobel laureate French chemist J.-M. Lehn pioneered the field of
supramolecular chemistry by showing that it is possible to build nanoscale
structures of his own design. Lehn and his coworkers at the University of
Strasbourg used positively-charged metal ions as a kind of glue to join larger
structural units at points where the large units exhibited excess negative
charges. Lehn predicts that the supramolecular chemistry of the future will
follow the same principles of self-organization which underlie the growth of
biological structures, but with a greatly expanded repertory, making use of
elements (such as silicon) that are not common in carbon-based biological
systems.
Other workers in nanotechnology have concentrated on the self-assembly
of two-dimensional structures at water-air interfaces. For example, Thomas
Bjørnholm, working at the University of Copenhagen, has shown that a
nanoscale wire can be assembled spontaneously at a water-air interface,
using metal atoms complexed with DNA and a DNA template. The use of
a two-dimensional template to reproduce a nanostructure can be thought
of as “microprinting”. One can also think of self-assembly at surfaces as the
two-dimensional version of the one-dimensional copying process by which
a new DNA or RNA strand assembles itself spontaneously, guided by the
complementary strand.
In 1981, Gerd Binnig and Heinrich Rohrer of IBM’s Research Center in Switzerland announced their invention of the scanning tunneling microscope. The new microscope’s resolution was so great that single atoms could
be observed. The scanning tunneling microscope consists of a supersharp
conducting tip, which is brought near enough to a surface so that quantum
mechanical tunneling of electrons can take place between tip and surface
when a small voltage is applied. The distance between the supersharp tip
and the surface is controlled by means of a piezoelectric crystal. As the
tip is moved along the surface, its distance from the surface (and hence the
tunneling current) is kept constant by applying a voltage to the piezoelec-
tric crystal, and this voltage as a function of position gives an image of the
surface.
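The sensitivity of this feedback arrangement rests on a standard result of quantum mechanics (quoted here as an aside, not derived in this book): the tunneling current falls off exponentially with the width d of the gap,

    I \propto V e^{-2\kappa d}, \qquad \kappa = \frac{\sqrt{2m\phi}}{\hbar}

where m is the electron mass and φ is the work function of the surface. For typical work functions of a few electron volts, the current changes by roughly an order of magnitude when d changes by only about 0.1 nm, which is why individual atoms stand out.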
Variations on the scanning tunneling microscope allow single atoms
to be deposited or manipulated on a surface. Thus there is a hope that
nanoscale circuit templates can be constructed by direct manipulation of
atoms and molecules, and that the circuits can afterwards be reproduced
using autoassembly mechanisms.
The scanning tunneling microscope makes use of a quantum mechanical
effect: Electrons exhibit wavelike properties, and can tunnel small distances
into regions of negative kinetic energy — regions which would be forbidden
to them by classical mechanics. In general it is true that for circuit ele-
ments with feature sizes in the nanometer range, quantum effects become
important. For conventional integrated circuits, the quantum effects which
are associated with this size-range would be a nuisance, but workers in nan-
otechnology hope to design integrated circuits which specifically make use
of these quantum effects.

Molecular switches; bacteriorhodopsin


The purple, salt-loving archaebacterium Halobacterium halobium (recently
renamed Halobacterium salinarum) possesses one of the simplest structures
that is able to perform photosynthesis. The purple membrane fraction of this bacterium’s cytoplasmic membrane contains only two kinds of
molecules — lipids and bacteriorhodopsin. Nevertheless, this simple struc-
ture is able to trap the energy of a photon from the sun and to convert it
into chemical energy.
The remarkable purple membrane of Halobacterium has been studied in
detail by Walter Stoeckenius, D. Oesterhelt⁴, Lajos Keszthelyi and others.

⁴ D. Oesterhelt and W. Stoeckenius, Nature New Biol. 233, 149-152 (1971); D. Oesterhelt et al., Quart. Rev. Biophys. 24, 425-478 (1991); W. Stoeckenius and R. Bogomolni, Ann. Rev. Biochem. 52, 587-616 (1982).

EBSCOhost - printed on 9/2/2023 9:28 AM via SEOUL NATIONAL UNIVERSITY. All use subject to https://www.ebsco.com/terms-of-use
March 8, 2012 7:54 World Scientific Book - 9in x 6in neweda

178 INFORMATION THEORY AND EVOLUTION

It can be decomposed into its constituent molecules. The lipids from


the membrane and the bacteriorhodopsin can be separated from each other
and put into different bottles. At a later time, the two bottles can be taken
from the laboratory shelf, and their contents can be shaken together in
water. The result is the spontaneous formation of tiny vesicles of purple
membrane.
In the self-organized two-component vesicles, the membrane-bound pro-
tein bacteriorhodopsin is always correctly oriented, just as it would be in
the purple membrane of a living Halobacterium. When the vesicles are illu-
minated, bacteriorhodopsin absorbs H+ ions from the water on the inside,
and releases them outside.
Bacteriorhodopsin consists of a chain of 248 amino acids, linked to the
retinal chromophore. The amino acids are arranged in 7 helical segments,
each of which spans the purple membrane, and these are joined on the mem-
brane surface by short nonhelical segments of the chain. The chromophore
is in the middle of the membrane, surrounded by α-helical segments. When
the chromophore is illuminated, its color is temporarily bleached, and it
undergoes a cis-trans isomerization which disrupts the hydrogen-bonding
network of the protein. The result is that a proton is released on the out-
side of the membrane. Later, a proton is absorbed from the water in the
interior of the membrane vesicle, the hydrogen-bonding system of the pro-
tein is reestablished, and both the protein and the chromophore return to
their original conformations. In this way, bacteriorhodopsin functions as a
proton pump. It uses the energy of photons to transport H+ ions across the
membrane, from the inside to the outside, against the electrochemical gra-
dient. In the living Halobacterium, this H+ concentration difference would
be used to drive the synthesis of the high-energy phosphate bond of adeno-
sine triphosphate (ATP), the inward passage of H+ through other parts of
the cytoplasmic membrane being coupled to the reaction ADP + Pᵢ → ATP
by membrane-bound reversible ATPase.
Bacteriorhodopsin is interesting as a component of one of the simplest
known photosynthetic systems, and because of its possible relationship to
the evolution of the eye (as was discussed in Chapter 3). In addition,
researchers like Lajos Keszthelyi at the Institute of Biophysics of the Hun-
garian Academy of Sciences in Szeged are excited about the possible use of
bacteriorhodopsin in optical computer memories⁵. Arrays of oriented and
partially dehydrated bacteriorhodopsin molecules in a plastic matrix can be
used to construct both 2-dimensional and 3-dimensional optical memories using the reversible color changes of the molecule. J. Chen and coworkers⁶
have recently constructed a prototype 3-dimensional optical memory by
orienting the proteins and afterwards polymerizing the solvent into a solid
polyacrylamide matrix. Bacteriorhodopsin has extraordinary stability, and
can tolerate as many as a million optical switching operations without dam-
age.

⁵ A. Der and L. Keszthelyi, editors, Bioelectronic Applications of Photochromic Pigments, IOS Press, Amsterdam, Netherlands (2001).
⁶ J. Chen et al., Biosystems 35, 145-151 (1995).

Neural networks, biological and artificial


In 1943, W. McCulloch and W. Pitts published a paper entitled A Logical
Calculus of the Ideas Immanent in Nervous Activity. In this pioneering
paper, they proposed the idea of a Threshold Logic Unit (TLU), which
they visualized not only as a model of the way in which neurons function in
the brain but also as a possible subunit for artificial systems which might be
constructed to perform learning and pattern-recognition tasks. Problems
involving learning, generalization, pattern recognition and noisy data are
easily handled by the brains of humans and animals, but computers of the
conventional von Neumann type find such tasks especially difficult.
Conventional computers consist of a memory and one or more central
processing units (CPUs). Data and instructions are repeatedly transferred
from the memory to the CPUs, where the data is processed and returned to
the memory. The repeated performance of many such cycles requires a long
and detailed program, as well as high-quality data. Thus conventional com-
puters, despite their great speed and power, lack the robustness, intuition,
learning powers and powers of generalization which characterize biologi-
cal neural networks. In the 1950’s, following the suggestions of McCulloch
and Pitts, and inspired by the growing knowledge of brain structure and
function which was being gathered by histologists and neurophysiologists,
computer scientists began to construct artificial neural networks - massively
parallel arrays of TLU’s.
The analogy between a TLU and a neuron can be seen by comparing
Figure 5.2, which shows a neuron, with Figure 8.1, which shows a TLU. As
we saw in Chapter 5, a neuron is a specialized cell consisting of a cell body
(soma) from which an extremely long, tubelike fiber called an axon grows.
The axon is analogous to the output channel of a TLU. From the soma, a
number of slightly shorter, rootlike extensions called dendrites also grow.
The dendrites are analogous to the input channels of a TLU.

Fig. 8.1 A Threshold Logic Unit (TLU) of the type proposed by McCulloch and Pitts.

In a biological neural network, branches from the axon of a neuron are connected to the dendrites of many other neurons; and at the points of
connection there are small, knoblike structures called synapses. As was dis-
cussed in Chapter 5, the “firing” of a neuron sends a wave of depolarization
out along its axon. When the pulselike electrical and chemical disturbance
associated with the wave of depolarization (the action potential) reaches
a synapse, where the axon is connected with another neuron, transmitter
molecules are released into the synaptic cleft. The neurotransmitter molecules travel across the synaptic cleft to a dendrite of the next neuron in the net, where they are bound to receptors. There are
many kinds of neurotransmitter molecules, some of which tend to make the
firing of the next neuron more probable, and others which tend to inhibit
its firing. When the neurotransmitter molecules are bound to the receptors, they cause a change in the dendritic membrane potential, either increasing or decreasing its polarization. The post-synaptic potentials from the
dendrites are propagated to the soma; and if their sum exceeds a threshold
value, the neuron fires. The subtlety of biological neural networks derives
from the fact that there are many kinds of neurotransmitters and synapses,
and from the fact that synapses are modified by their past history.

Fig. 8.2 A perceptron, introduced by Rosenblatt in 1962. The perceptron is similar to a TLU, but its input is preprocessed by a set of association units (A-units). The A-units are not trained, but are assigned a fixed Boolean functionality.
Turning to Figure 8.1, we can compare the biological neuron with the
Threshold Logic Unit of McCulloch and Pitts. Like the neuron, the TLU
has many input channels. To each of the N channels there is assigned a weight, w_1, w_2, ..., w_N. The weights can be changed; and the set of weights gives the TLU its memory and learning capabilities. Modification of weights in the TLU is analogous to the modification of synapses in a neuron, depending on their history. In the most simple type of TLU, the input signals are either 0 or 1. These signals, multiplied by their appropriate weights, are summed, and if the sum exceeds a threshold value θ, the TLU “fires”, i.e. a pulse of voltage is transmitted through the output channel to the next TLU in the artificial neural network.
Let us imagine that the input signals, x_1, x_2, ..., x_N, can take on the values 0 or 1. The weighted sum of the input signals will then be given by

    a = \sum_{j=1}^{N} w_j x_j \qquad (8.1)

The quantity a is called the activation. If the activation exceeds the threshold θ, the unit “fires”, i.e. it produces an output y given by

    y = \begin{cases} 1 & \text{if } a \ge \theta \\ 0 & \text{if } a < \theta \end{cases} \qquad (8.2)
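Equations (8.1) and (8.2) are simple enough to be expressed directly in code. The following is a minimal sketch in Python (the function name and the AND example are ours, not part of any standard library):

    import numpy as np

    def tlu_output(x, w, theta):
        """McCulloch-Pitts Threshold Logic Unit: fire (output 1) if the
        weighted sum of the 0/1 input signals reaches the threshold."""
        a = np.dot(w, x)                 # activation, equation (8.1)
        return 1 if a >= theta else 0    # firing rule, equation (8.2)

    # With weights (1, 1) and threshold 1.5, the TLU computes logical AND:
    print(tlu_output([1, 1], [1.0, 1.0], 1.5))   # -> 1
    print(tlu_output([1, 0], [1.0, 1.0], 1.5))   # -> 0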
The decisions taken by a TLU can be given a geometrical interpretation: The input signals can be thought of as forming the components of a vector, x = (x_1, x_2, ..., x_N), in an N-dimensional space called pattern space. The weights also form a vector, w = (w_1, w_2, ..., w_N), in the same space. If we write an equation setting the scalar product of these two vectors equal to some constant,

    \mathbf{w} \cdot \mathbf{x} \equiv \sum_{j=1}^{N} w_j x_j = \theta \qquad (8.3)

then this equation defines a hyperplane in pattern space, called the decision
hyperplane. The decision hyperplane divides pattern space into two parts:

(1) input pulse patterns which will produce firing of the TLU, and (2)
patterns which will not cause firing.
The position and orientation of the decision hyperplane can be changed
by altering the weight vector w and/or the threshold θ. Therefore it is
convenient to put the threshold and the weights on the same footing by introducing an augmented weight vector,

    \mathbf{W} = (w_1, w_2, ..., w_N, \theta) \qquad (8.4)

and an augmented input pattern vector,

    \mathbf{X} = (x_1, x_2, ..., x_N, -1) \qquad (8.5)

In the (N+1)-dimensional augmented pattern space, the decision hyperplane now passes through the origin, and equation (8.3) can be rewritten in the form

    \mathbf{W} \cdot \mathbf{X} \equiv \sum_{j=1}^{N+1} W_j X_j = 0 \qquad (8.6)

Those input patterns for which the scalar product W · X is positive or zero
will cause the unit to fire, but if the scalar product is negative, there will
be no response.
If we wish to “teach” a TLU to fire when presented with a particular
pattern vector X, we can evaluate its scalar product with the current aug-
mented weight vector W. If this scalar product is negative, the TLU will
not fire, and therefore we know that the weight vector needs to be changed.
If we replace the weight vector by

    \mathbf{W}' = \mathbf{W} + \gamma \mathbf{X} \qquad (8.7)

where γ is a small positive number, then the new augmented weight vector W′ will point in a direction more nearly the same as the direction of X. This change will be a small step in the direction of making the scalar product positive, i.e. a small step in the right direction.
Why not take a large step instead of a small one? A small step is best
because there may be a whole class of input patterns to which we would
like the TLU to respond by firing. If we make a large change in weights to
help a particular input pattern, it may undo previous learning with respect
to other patterns.
It is also possible to teach a TLU to remain silent when presented with
a particular input pattern vector. To do so, we evaluate the augmented
scalar product W · X as before, but now, when we desire silence rather than firing, we wish the scalar product to be negative, and if it is positive, we know that the weight vector must be changed. In changing the weight
vector, we can again make use of equation (8.7), but now γ must be a small
negative number rather than a small positive one.
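The teaching procedure just described, equation (8.7) applied with a positive γ when firing is desired and a negative γ when silence is desired, can be sketched as follows (an illustrative fragment with our own naming; target = 1 means the TLU should fire, target = 0 that it should remain silent):

    import numpy as np

    def train_step(W, x, target, gamma=0.1):
        """One learning step for a TLU in augmented form.
        W is the augmented weight vector (w_1, ..., w_N, theta);
        x is the raw input pattern, augmented here with -1 as in (8.5)."""
        X = np.append(np.asarray(x, dtype=float), -1.0)
        fires = 1 if np.dot(W, X) >= 0 else 0
        if fires != target:
            # small step toward X to encourage firing (positive gamma),
            # or away from X to discourage it, equation (8.7)
            W = W + (gamma if target == 1 else -gamma) * X
        return W

    # e.g. W = train_step(np.zeros(3), [1, 1], target=1)

Repeated over a whole training set, this is essentially Rosenblatt’s perceptron learning rule.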
Two sets of input patterns, A and B, are said to be linearly separable if
they can be separated by some decision hyperplane in pattern space. Now
suppose that the four sets, A, B, C, and D, can be separated by two decision
hyperplanes. We can then construct a two-layer network which will identify
the class of an input signal belonging to any one of the sets.
The first layer consists of two TLU’s. The first TLU in this layer is
taught to fire if the input pattern belongs to A or B, and to be silent if
the input belongs to C or D. The second TLU is taught to fire if the input
pattern belongs to A or D, and to be silent if it belongs to B or C. The
second layer of the network consists of four output units which are not
taught, but which are assigned a fixed Boolean functionality. The first
output unit fires if the signals from the first layer are given by the vector y = {1, 1} (class A); the second fires if y = {1, 0} (class B), the third if y = {0, 0} (class C), and the fourth if y = {0, 1} (class D). Thus the simple
two-layer network functions as a classifier. The output units in the second
layer are analogous to the “grandmother’s face cells” whose existence in
the visual cortex is postulated by neurophysiologists. These cells will fire
if and only if the retina is stimulated with a particular class of patterns.
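A sketch of this two-layer classifier in code (illustrative naming of our own; the first-layer weights and thresholds are assumed to have been trained as described above):

    def classify(x, W1, theta1, W2, theta2):
        """Two-layer classifier: two trained TLUs followed by four
        output units with fixed Boolean functionality."""
        y1 = 1 if sum(w * xi for w, xi in zip(W1, x)) >= theta1 else 0
        y2 = 1 if sum(w * xi for w, xi in zip(W2, x)) >= theta2 else 0
        # y1 fires for classes A and B; y2 fires for classes A and D
        classes = {(1, 1): 'A', (1, 0): 'B', (0, 0): 'C', (0, 1): 'D'}
        return classes[(y1, y2)]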
This very brief glance at artificial neural networks does not do justice to
the high degree of sophistication which network architecture and training
algorithms have achieved during the last two decades. However, the sug-
gestions for further reading at the end of this chapter may help to give the
reader an impression of the wide range of problems to which these networks
are now being applied.
Besides being useful for computations requiring pattern recognition,
learning, generalization, intuition, and robustness in the face of noisy data,
artificial neural networks are important because of the light which they
throw on the mechanism of brain function. For example, one can compare
the classifier network with the discoveries of Kuffler, Hubel and Wiesel
concerning pattern abstraction in the mammalian retina and visual cortex
(Chapter 5).


Genetic algorithms

Genetic algorithms represent a second approach to machine learning and
to computational problems involving optimization. Like neural network
computation, this alternative approach has been inspired by biology, in particular by the Darwinian concept of natural selection. In a
genetic algorithm, the hardware is that of a conventional computer; but the
software creates a population and allows it to evolve in a manner closely
analogous to biological evolution.
One of the most important pioneers of genetic algorithms was John
Henry Holland (1929– ). After attending MIT, where he was influenced
by Norbert Wiener, Holland worked for IBM, helping to develop the 701.
He then continued his studies at the University of Michigan, obtaining the
first Ph.D. in computer science ever granted in America. Between 1962
and 1965, Holland taught a graduate course at Michigan called “Theory
of Adaptive Systems”. His pioneering course became almost a cult, and
together with his enthusiastic students he applied the genetic algorithm
approach to a great variety of computational problems. One of Holland’s
students, David Goldberg, even applied a genetic algorithm program to the
problem of allocating natural gas resources.
The programs developed by Holland and his students were modelled
after the natural biological processes of reproduction, mutation, selection
and evolution. In biology, the information passed between generations is
contained in chromosomes — long strands of DNA where the genetic mes-
sage is written in a four-letter language, the letters being adenine, thymine,
guanine and cytosine. Analogously, in a genetic algorithm, the information
is coded in a long string, but instead of a four-letter language, the code is
binary: The chromosome-analogue is a long string of 0’s and 1’s, i.e., a long
binary string. One starts with a population that has sufficient diversity so
that natural selection can act.
The genotypes are then translated into phenotypes. In other words,
the information contained in the long binary string (analogous to the geno-
type of each individual) corresponds to an entity, the phenotype, whose
fitness for survival can be evaluated. The mapping from genotype to phe-
notype must be such that very small changes in the binary string will not
produce radically different phenotypes. From the initial population, the
most promising individuals are selected to be the parents of the next gen-
eration, and of these, the fittest are allowed to produce the largest number of offspring. Before reproduction takes place, however, random mutations and chromosome crossing can occur. For example, in chromosome crossing, the chromosomes of two individuals are broken after the nth binary digit,
and two new chromosomes are formed, one with the head of the first old
chromosome and the tail of the second, and another with the head of the
second and the tail of the first. This process is analogous to the biological
crossings which allowed Thomas Hunt Morgan and his “fly squad” to map
the positions of genes on the chromosomes of fruit flies, while the mutations
are analogous to those studied by Hugo de Vries and Hermann J. Muller.
After the new generation has been produced, the genetic algorithm ad-
vances the time parameter by a step, and the whole process is repeated:
The phenotypes of the new generation are evaluated and the fittest selected
to be parents of the next generation; mutation and crossings occur; and
then fitness-proportional reproduction. Like neural networks, genetic algo-
rithms are the subject of intensive research, and evolutionary computation
is a rapidly growing field.
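The cycle described above, selection of the fittest, crossing, mutation and fitness-proportional reproduction, can be sketched in a few lines (an illustrative toy with a made-up fitness function; serious genetic algorithms differ in many details):

    import random

    def fitness(chromosome):
        # Toy phenotype evaluation: simply count the 1's. A real
        # application would decode the binary string into a phenotype
        # and measure that phenotype's performance.
        return sum(chromosome)

    def cross(a, b):
        """Chromosome crossing: break both strings after the n-th
        binary digit and exchange heads and tails."""
        n = random.randint(1, len(a) - 1)
        return a[:n] + b[n:], b[:n] + a[n:]

    def mutate(c, rate=0.01):
        return [bit ^ 1 if random.random() < rate else bit for bit in c]

    def next_generation(population):
        # Fitness-proportional choice of parents
        # (the added 1 keeps every selection weight positive).
        weights = [1 + fitness(c) for c in population]
        children = []
        while len(children) < len(population):
            a, b = random.choices(population, weights=weights, k=2)
            children.extend(mutate(child) for child in cross(a, b))
        return children[:len(population)]

    # Start from a diverse random population, as the text requires.
    population = [[random.randint(0, 1) for _ in range(32)]
                  for _ in range(50)]
    for _ in range(100):
        population = next_generation(population)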
Evolutionary methods have been applied not only to software, but also
to hardware. Some of the circuits designed in this way defy analysis using
conventional techniques — and yet they work astonishingly well.

Artificial life

As Aristotle pointed out, it is difficult to define the precise border between
life and nonlife. It is equally difficult to give a precise definition of artificial
life. Of course the term means “life produced by humans rather than by
nature”, but what is life? Is self-replication the only criterion? The phrase
“produced by humans” also presents difficulties. Humans have played a
role in creating domestic species of animals and plants. Can cows, dogs,
and high-yield wheat varieties be called “artificial life”? In one sense, they
can. These species and varieties certainly would not have existed without
human intervention.
We come nearer to what most people might call “artificial life” when we
take parts of existing organisms and recombine them in novel ways, using
the techniques of biotechnology. For example, Steen Willadsen⁷, working at
the Animal Research Station, Cambridge, England, was able to construct
chimeras by operating under a microscope on embryos at the eight-cell
stage. The zona pellucida is a transparent shell that surrounds the cells of
the embryo. Willadsen was able to cut open the zona pellucida, to remove
the cells inside, and to insert a cell from a sheep embryo together with
one from a goat embryo. The chimeras which he made in this way were
able to grow to be adults, and when examined, their cells proved to be
a mosaic, some cells carrying the sheep genome while others carried the
genome of a goat. By the way, Willadsen did not create his chimeras in
order to produce better animals for agriculture. He was interested in the
scientifically exciting problem of morphogenesis: How is the information of
the genome translated into the morphology of the growing embryo?

⁷ Willadsen is famous for having made the first verified and reproducible clone of a mammal. In 1984 he made two genetically identical lambs from early sheep embryo cells.
Human genes are now routinely introduced into embryos of farm an-
imals, such as pigs or sheep. The genes are coupled to regulatory sequences which cause expression in mammary tissues, and the adult an-
imals produce milk containing human proteins. Many medically valuable
proteins are made in this way. Examples include human blood-clotting
factors, interleukin-2 (a protein which stimulates T-lymphocytes), colla-
gen and fibrinogen (used to treat burns), human fertility hormones, human
hemoglobin, and human serum albumin.
Transgenic plants and animals in which the genes of two or more species
are inherited in a stable Mendelian way have become commonplace in mod-
ern laboratory environments, and, for better or for worse, they are also
becoming increasingly common in the external global environment. These
new species might, with some justification, be called “artificial life”.
In discussing the origin of life in Chapter 3, we mentioned that a long
period of molecular evolution probably preceded the evolution of cells. In
the early 1970’s, S. Spiegelman performed a series of experiments in which
he demonstrated that artificial molecular evolution can be made to take
place in vitro. Spiegelman prepared a large number of test tubes in which
RNA replication could take place. The aqueous solution in each of the
test tubes consisted of RNA replicase, ATP, UTP (uridine triphosphate), GTP (guanosine triphosphate), CTP (cytidine triphosphate) and buffer. He
then introduced RNA from a bacteriophage into the first test tube. Af-
ter a predetermined interval of time, during which replication took place,
Spiegelman transferred a drop of solution from the first test tube to a new
tube, uncontaminated with RNA. Once again, replication began and after
an interval a drop was transferred to a third test tube. Spiegelman re-
peated this procedure several hundred times, and at the end he was able to
demonstrate that the RNA in the final tube differed from the initial sample,
and that it replicated faster than the initial sample. The RNA had evolved
by the classical Darwinian mechanisms of mutation and natural selection. Mistakes in copying had produced mutant RNA strands which competed for the supply of energy-rich precursor molecules (ATP, UTP, GTP and
CTP). The most rapidly-reproducing mutants survived. Was Spiegelman’s
experiment merely a simulation of an early stage of biological evolution?
Or was evolution of an extremely primitive life-form actually taking place
in his test tubes?
G.F. Joyce, D.P. Bartel and others have performed experiments in which
strands of RNA with specific catalytic activity (ribozymes) have been made
to evolve artificially from randomly coded starting populations of RNA. In
these experiments, starting populations of 10¹³ to 10¹⁵ randomly coded RNA molecules are tested for the desired catalytic activity, and the most
successful molecules are then chosen as parents for the next generation.
The selected molecules are replicated many times, but errors (mutations)
sometimes occur in the replication. The new population is once again tested
for catalytic activity, and the process is repeated. The fact that artificial
evolution of ribozymes is possible can perhaps be interpreted as supporting
the “RNA world” hypothesis, i.e., the hypothesis that RNA preceded DNA
and proteins in the early history of terrestrial life.
In Chapter 4, we mentioned that John von Neumann speculated on the
possibility of constructing artificial self-reproducing automata. In the early
1940’s, a period when there was much discussion of the Universal Turing
Machine, he became interested in constructing a mathematical model of the
requirements for self-reproduction. Besides the Turing machine, another
source of his inspiration was the paper by Warren McCulloch and Walter
Pitts entitled A logical calculus of the ideas immanent in nervous activity,
which von Neumann read in 1943. In his first attempt (the kinematic
model), he imagined an extremely large and complex automaton, floating
on a lake which contained its component parts.
Von Neumann’s imaginary self-reproducing automaton consisted of four
units, A, B, C and D. Unit A was a sort of factory, which gathered com-
ponent parts from the surrounding lake and assembled them according to
instructions which it received from other units. Unit B was a copying unit,
which reproduced sets of instructions. Unit C was a control apparatus,
similar to a computer. Finally D was a long string of instructions, analo-
gous to the “tape” in the Turing machine described in Chapter 7. In von
Neumann’s kinematic automaton, the instructions were coded as a long bi-
nary number. The presence of what he called a “girder” at a given position
corresponded to 1, while its absence corresponded to 0. In von Neumann’s
model, the automaton completed the assembly of its offspring by injecting its progeny with the duplicated instruction tape, thus making the new
automaton both functional and fertile.
In presenting his kinematic model at the Hixon Symposium (organized by Linus Pauling in the late 1940’s), von Neumann remarked that “...it is
clear that the instruction [tape] is roughly effecting the function of a gene.
It is also clear that the copying mechanism B performs the fundamental act
of reproduction, the duplication of the genetic material, which is clearly the
fundamental operation in the multiplication of living cells. It is also easy
to see how arbitrary alterations of the system...can exhibit certain traits
which appear in connection with mutation, lethality as a rule, but with a
possibility of continuing reproduction with a modification of traits”.
It is very much to von Neumann’s credit that his kinematic model (which
he invented several years before Crick and Watson published their DNA
structure) was organized in much the same way that we now know the
reproductive apparatus of a cell to be organized. Nevertheless he was dis-
satisfied with the model because his automaton contained too many “black
boxes”. There were too many parts which were supposed to have certain
functions, but for which it seemed very difficult to propose detailed mech-
anisms by which the functions could be carried out. His kinematic model
seemed very far from anything which could actually be built⁸.

⁸ Von Neumann’s kinematic automaton was taken seriously by the Mission IV Group, part of a ten-week program sponsored by NASA in 1980 to study the possible use of advanced automation and robotic devices in space exploration. The group, headed by Richard Laing, proposed plans for self-reproducing factories, designed to function on the surface of the moon or the surfaces of other planets. Like von Neumann’s kinematic automaton, to which they owed much, these plans seemed very far from anything that could actually be constructed.
Von Neumann discussed these problems with his close friend, the Polish-
American mathematician Stanislaw Ulam, who had for a long time been
interested in the concept of self-replicating automata. When presented
with the black box difficulty, Ulam suggested that the whole picture of an
automaton floating on a lake containing its parts should be discarded. He
proposed instead a model which later came to be known as the Cellular
Automaton Model. In Ulam’s model, the self-reproducing automaton lives
in a very special space. For example, the space might resemble an infinite
checkerboard, each square would constitute a multi-state cell. The state
of each cell in a particular time interval is governed by the states of its
near neighbors in the preceding time interval according to relatively simple
laws. The automaton would then consist of a special configuration of cell
states, and its reproduction would correspond to production of a similar
configuration of cell states in a neighboring region of the cell lattice.


Von Neumann liked Ulam’s idea, and he began to work in that direction.
However, he wished his self-replicating automaton to be able to function
as a universal Turing machine, and therefore the plans which he produced
were excessively complicated. In fact, von Neumann believed complexity
to be a necessary requirement for self-reproduction. In his model, the cells
in the lattice were able to have 29 different states, and the automaton
consisted of a configuration involving hundreds of thousands of cells. Von
Neumann’s manuscript on the subject became longer and longer, and he
did not complete it before his early death from prostate cancer in 1957.
The name “cellular automaton” was coined by Arthur Burks, who edited
von Neumann’s posthumous papers on the theory of automata.
Arthur Burks had written a Ph.D. thesis in philosophy on the work
of the nineteenth century thinker Charles Sanders Peirce, who is today
considered to be one of the founders of semiotics⁹. He then studied electrical
engineering at the Moore School in Philadelphia, where he participated
in the construction of ENIAC, one of the first general purpose electronic
digital computers, and where he also met John von Neumann. He worked
with von Neumann on the construction of a new computer, and later Burks
became the leader of the Logic of Computers Group at the University of
Michigan. One of Burks’ students at Michigan was John Holland, the
pioneer of genetic algorithms. Another student of Burks, E.F. Codd, was
able to design a self-replicating automaton of the von Neumann type using
a cellular automaton system with only 8 states (as compared with von
Neumann’s 29). For many years, enthusiastic graduate students at the
Michigan group continued to do important research on the relationships
between information, logic, complexity and biology.

⁹ Semiotics is defined as the study of signs (see Appendix 2).
Meanwhile, in 1968, the mathematician John Horton Conway, working
in England at Cambridge University, invented a simple game which greatly
increased the popularity of the cellular automaton concept. Conway’s game,
which he called “Life”, was played on an infinite checker-board-like lattice
of cells, each cell having only two states, “alive” or “dead”. The rules which
Conway proposed are as follows: “If a cell on the checkerboard is alive, it
will survive in the next time step (generation) if there are either two or
three neighbors also alive. It will die of overcrowding if there are more
than three live neighbors, and it will die of exposure if there are fewer than
two. If a cell on the checkerboard is dead, it will remain dead in the next generation unless exactly three of its eight neighbors are alive. In that case,
the cell will be ‘born’ in the next generation”.
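Conway’s rules translate almost word for word into code. A minimal sketch (storing only the coordinates of live cells, which conveniently approximates the infinite checkerboard):

    from collections import Counter

    def life_step(live):
        """One generation of Conway's Life; `live` is a set of (x, y)
        coordinates of living cells."""
        # Count live neighbors for every cell adjacent to a live cell.
        counts = Counter((x + dx, y + dy)
                         for (x, y) in live
                         for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                         if (dx, dy) != (0, 0))
        # Birth on exactly 3 live neighbors; survival on 2 or 3.
        return {cell for cell, n in counts.items()
                if n == 3 or (n == 2 and cell in live)}

    # The famous "glider" reappears, shifted diagonally, every 4 steps:
    glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
    for _ in range(4):
        glider = life_step(glider)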
Originally Conway’s Life game was played by himself and by his col-
leagues at Cambridge University’s mathematics department in their com-
mon room: At first the game was played on table tops at tea time. Later
it spilled over from the tables to the floor, and tea time began to extend far into the afternoons. Finally, wishing to convert a wider audience to
his game, Conway submitted it to Martin Gardner, who wrote a popular
column on “Mathematical Games” for the Scientific American. In this way
Life spread to MIT’s Artificial Intelligence Laboratory, where it created
such interest that the MIT group designed a small computer specifically
dedicated to rapidly implementing Life’s rules.
The reason for the excitement about Conway’s Life game was that it
seemed capable of generating extremely complex patterns, starting from rel-
atively simple configurations and using only its simple rules. Ed Fredkin,
the director of MIT’s Artificial Intelligence Laboratory, became enthusias-
tic about cellular automata because they seemed to offer a model for the
way in which complex phenomena can emerge from the laws of nature,
which are after all very simple. In 1982, Fredkin (who was independently
wealthy because of a successful computer company which he had founded)
organized a conference on cellular automata on his private island in the
Caribbean. The conference is notable because one of the participants was
a young mathematical genius named Stephen Wolfram, who was destined
to refine the concept of cellular automata and to become one of the leading
theoreticians in the field¹⁰.
One of Wolfram’s important contributions was to explore exhaustively
the possibilities of 1-dimensional cellular automata. No one before him had
looked at 1-dimensional CA’s, but in fact they had two great advantages:
The first of these advantages was simplicity, which allowed Wolfram to ex-
plore and classify the possible rule sets. Wolfram classified the rule sets
into 4 categories, according to the degree of complexity which they gen-
erated. The second advantage was that the configurations of the system
in successive generations could be placed under one another to form an
easily-surveyed 2-dimensional visual display. Some of the patterns gener-
ated in this way were strongly similar to the patterns of pigmentation on
the shells of certain molluscs. The strong resemblance seemed to suggest
that Wolfram’s 1-dimensional cellular automata might yield insights into
the mechanism by which the pigment patterns are generated.

¹⁰ As many readers probably know, Stephen Wolfram was also destined to become a millionaire by inventing the elegant symbol-manipulating program system, Mathematica.
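A 1-dimensional, two-state cellular automaton of the kind Wolfram studied takes only a few lines to program. The sketch below uses the now-standard “elementary CA” convention (an 8-bit rule number whose k-th bit gives the successor of the neighborhood coding k); printing successive generations under one another produces exactly the 2-dimensional picture described above:

    def evolve(rule, cells, steps):
        """Evolve an elementary cellular automaton with wraparound ends.
        Bit k of `rule` is the next state of a cell whose neighborhood
        (left, self, right) encodes k = 4*left + 2*self + right."""
        history = [cells]
        for _ in range(steps):
            n = len(cells)
            cells = [(rule >> (4 * cells[(i - 1) % n]
                               + 2 * cells[i]
                               + cells[(i + 1) % n])) & 1
                     for i in range(n)]
            history.append(cells)
        return history

    # Rule 30 (one of Wolfram's chaotic, "class 3" rules), started from
    # a single live cell; the output resembles mollusc-shell pigmentation.
    row = [0] * 63
    row[31] = 1
    for generation in evolve(30, row, 20):
        print(''.join('#' if c else ' ' for c in generation))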


In general, cellular automata seemed to be promising models for gain-
ing insight into the fascinating and highly important biological problem
of morphogenesis: How does the fertilized egg translate the information on
the genome into the morphology of the growing embryo, ending finally with
the enormously complex morphology of a fully developed and fully differ-
entiated multicellular animal? Our understanding of this amazing process
is as yet very limited, but there is evidence that as the embryo of a multi-
cellular animal develops, cells change their state in response to the states
of neighboring cells. In the growing embryo, the “state” of a cell means the
way in which it is differentiated, i.e., which genes are turned on and which
off - which information on the genome is available for reading, and which
segments are blocked. Neighboring cells signal to each other by means of
chemical messengers¹¹. Clearly there is a close analogy between the way
complex patterns develop in a cellular automaton, as neighboring cells in-
fluence each other and change their states according to relatively simple
rules, and the way in which the complex morphology of a multicellular
animal develops in the growing embryo.

¹¹ We can recall the case of slime mold cells which signal to each other by means of the chemical messenger, cyclic AMP (Chapter 3).
Conway’s Life game attracted another very important worker to the
field of cellular automata: In 1971, Christopher Langton was working as
a computer programmer in the Stanley Cobb Laboratory for Psychiatric
Research at Massachusetts General Hospital. When colleagues from MIT
brought to the laboratory a program for executing Life, Langton was im-
mediately interested. He recalls “It was the first hint that there was a
distinction between the hardware and the behavior which it would sup-
port... You had the feeling that there was something very deep here in this
little artificial universe and its evolution through time. [At the lab] we had
a lot of discussions about whether the program could be open ended - could
you have a universe in which life could evolve?”
Later, at the University of Arizona, Langton read a book describing von
Neumann’s theoretical work on automata. He contacted Arthur Burks, von
Neumann’s editor, who told him that no self-replicating automaton had
actually been implemented, although E.F. Codd had proposed a simplified
plan with only 8 states instead of 29. Burks suggested to Langton that he
should start by reading Codd’s book.
When Langton studied Codd’s work, he realized that part of the prob-
lem was that both von Neumann and Codd had demanded that the self-
reproducing automaton should be able to function as a universal Turing machine, i.e., as a universal computer. When Langton dropped this demand
(which he considered to be more related to mathematics than to biology)
he was able to construct a relatively simple self-reproducing configuration
in an 8-state 2-dimensional lattice of CA cells. As they reproduced them-
selves, Langton’s loop-like cellular automata filled the lattice of cells in a
manner reminiscent of a growing coral reef, with actively reproducing loops
on the surface of the filled area, and “dead” (nonreproducing) loops in the
center.
Langton continued to work with cellular automata as a graduate stu-
dent at Arthur Burks’ Logic of Computers Group at Michigan. His second
important contribution to the field was an extension of Wolfram’s classifi-
cation of rule sets for cellular automata. Langton introduced a parameter
λ to characterize various sets of rules according to the type of behavior
which they generated. Rule sets with a value near to the optimum (λ =
0.273) generated complexity similar to that found in biological systems.
This value of Langton’s λ parameter corresponded to a borderline region
between periodicity and chaos.
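Operationally, Langton’s λ is simply the fraction of entries in a CA rule table that lead away from a designated quiescent state. A sketch under that standard definition (our own code, reusing the elementary-CA rule-table format of the previous example):

    def langton_lambda(rule_table, quiescent=0):
        """Fraction of transitions in a rule table whose result is
        not the quiescent state: Langton's lambda parameter."""
        outputs = list(rule_table.values())
        return sum(1 for s in outputs if s != quiescent) / len(outputs)

    # The rule table of elementary rule 30 maps each of the
    # 8 neighborhoods to bit k of the number 30:
    rule30 = {k: (30 >> k) & 1 for k in range(8)}
    print(langton_lambda(rule30))   # -> 0.5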
After obtaining a Ph.D. from Burks’ Michigan group, Christopher Lang-
ton moved to the Center for Nonlinear Studies at Los Alamos, New Mexico,
where in 1987 he organized an “Interdisciplinary Workshop on the Synthe-
sis and Simulation of Living Systems” - the first conference on artificial
life ever held. Among the participants were Richard Dawkins, Aristid Lindenmayer, John Holland, and Richard Laing. The noted Oxford biologist
and author Richard Dawkins was interested in the field because he had
written a computer program for simulating and teaching evolution. Aristid Lindenmayer and his coworkers in Holland had written programs capable of simulating the morphogenesis of plants in an astonishingly realistic way.
As was mentioned above, John Holland pioneered the development of ge-
netic algorithms, while Richard Laing was the leader of NASA’s study to
determine whether self-reproducing factories might be feasible.
Langton’s announcement for the conference, which appeared in the Sci-
entific American, stated that “Artificial life is the study of artificial systems
that exhibit behavior characteristic of natural living systems... The ulti-
mate goal is to extract the logical form of living systems. Microelectronic
technology and genetic engineering will soon give us the capability to create
new life in silico as well as in vitro. This capacity will present humanity
with the most far-reaching technical, theoretical, and ethical challenges it
has ever confronted. The time seems appropriate for a gathering of those involved in attempts to simulate or synthesize aspects of living systems”.


In the 1987 workshop on artificial life, a set of ideas which had gradually
emerged during the previous decades of work on automata and simulations
of living systems became formalized and crystallized: All of the participants
agreed that something more than reductionism was needed to understand
the phenomenon of life. This belief was not a revival of vitalism; it was
instead a conviction that the abstractions of molecular biology are not in
themselves sufficient. The type of abstraction found in Darwin’s theory of
natural selection was felt to be nearer to what was needed. The viewpoints
of thermodynamics and statistical mechanics were also helpful. What was
needed, it was felt, were insights into the flow of information in complex
systems; and computer simulations could give us this insight. The fact that
the simulations might take place in silico did not detract from their validity.
The logic and laws governing complex systems and living systems were felt
to be independent of the medium.
As Langton put it, “The ultimate goal of artificial life would be to create
‘life’ in some other medium, ideally a virtual medium where the essence
of life has been abstracted from the details of its implementation in any
particular model. We would like to build models that are so lifelike that
they cease to be models of life and become examples of life themselves”.
Most of the participants at the first conference on artificial life had until
then been working independently, not aware that many other researchers
shared their viewpoint. Their conviction that the logic of a system is largely
independent of the medium echoes the viewpoint of the Macy Conferences
on cybernetics in the 1940’s, where the logic of feedback loops and control
systems was studied in a wide variety of contexts, ranging from biology and
anthropology to computer systems. A similar viewpoint can also be found
in biosemiotics (Appendix 2), where, in the words of the Danish biologist
Jesper Hoffmeyer, “the sign, rather than the molecule” is considered to be
the starting point for studying life. In other words, the essential ingredient
of life is information; and information can be expressed in many ways. The
medium is less important than the message.
The conferences on artificial life have been repeated each year since
1987, and European conferences devoted to the new and rapidly growing
field have also been organized. Langton himself moved to the Santa Fe
Institute, where he became director of the institute’s artificial life program
and editor of a new journal, Artificial Life. The first three issues of the
journal have been published as a book by the MIT Press, and the book
presents an excellent introduction to the field.


Among the scientists who were attracted to the artificial life conferences
was the biologist Thomas Ray, a graduate of Florida State University and
Harvard, and an expert in the ecology of tropical rain forests. In the late
1970’s, while he was working on his Harvard Ph.D., Ray happened to have
a conversation with a computer expert from the MIT Artificial Intelligence
Lab, who mentioned to him that computer programs can replicate. To
Ray’s question “How?”, the AI man answered “Oh, it’s trivial”.
Ray continued to study tropical ecologies, but the chance conversation
from his Cambridge days stuck in his mind. By 1989 he had acquired an
academic post at the University of Delaware, and by that time he had
also become proficient in computer programming. He had followed with
interest the history of computer viruses. Were these malicious creations in
some sense alive? Could it be possible to make self-replicating computer
programs which underwent evolution by natural selection? Ray considered
John Holland’s genetic algorithms to be analogous to the type of selection
imposed by plant and animal breeders in agriculture. He wanted to see what
would happen to populations of digital organisms that found their own cri-
teria for natural selection — not humanly imposed goals, but self-generated
and open-ended criteria growing naturally out of the requirements for sur-
vival.
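The distinction can be made concrete with a bare-bones genetic algorithm in Holland's style (an illustrative sketch; the bit-string target and parameter values are invented for this example). The fitness function below is precisely the kind of externally imposed, breeder-like criterion that Ray wanted to eliminate:

import random

TARGET = 20 * [1]                          # the "breeder's" goal

def fitness(genome):                       # humanly imposed criterion
    return sum(g == t for g, t in zip(genome, TARGET))

def evolve(pop_size=50, generations=40, p_mut=0.02):
    pop = [[random.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]     # truncation selection
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = random.sample(parents, 2)
            cut = random.randrange(len(TARGET))
            child = a[:cut] + b[cut:]      # one-point crossover
            child = [1 - g if random.random() < p_mut else g
                     for g in child]       # point mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

print(fitness(evolve()))                   # climbs toward the maximum, 20

In Tierra, by contrast, nothing plays the role of TARGET: an organism's only "score" is whether it manages to get itself copied.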
Although he had a grant to study tropical ecologies, Ray neglected the
project and used most of his time at the computer, hoping to generate
populations of computer organisms that would evolve in an open-ended
and uncontrolled way. Luckily, before starting his work in earnest, Thomas
Ray consulted Christopher Langton and his colleague James Farmer at the
Center for Nonlinear Studies in New Mexico. Langton and Farmer realized
that Ray’s project could be a very dangerous one, capable of producing
computer viruses or worms far more malignant and difficult to eradicate
than any the world had yet seen. They advised Ray to make use of Tur-
ing’s concept of a virtual computer. Digital organisms created in such a
virtual computer would be unable to live outside it. Ray adopted this plan,
and began to program a virtual world in which his freely evolving digital
organisms could live. He later named the system “Tierra”.
Ray’s Tierra was not the first computer system to aim at open-ended
evolution. Steen Rasmussen, working at the Danish Technical University,
had previously produced a system called “VENUS” (Virtual Evolution in
a Nonstochastic Universe Simulator) which simulated the very early stages
of the evolution of life on earth. However, Ray’s aim was not to understand
the origin of life, but instead to produce digitally something analogous to
the evolutionary explosion of diversity that occurred on earth at the start
of the Cambrian era. He programmed an 80-byte self-reproducing digital
organism which he called “Ancestor”, and placed it in Tierra, his virtual
Garden of Eden.
Ray had programmed a mechanism for mutation into his system, but
he doubted that he would be able to achieve an evolving population with
his first attempt. As it turned out, Ray never had to program another
organism. His 80-byte Ancestor reproduced and populated his virtual earth,
changing under the action of mutation and natural selection in a way that
astonished and delighted him.
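Tierra itself is a virtual machine with its own small assembly language, and the sketch below is emphatically not Ray's system. It is a toy (in Python, with all names and parameter values invented) that keeps just two Tierran ingredients: copying a genome costs CPU time in proportion to its length, and a "reaper" removes the oldest organisms when the soup is full. Even this stripped-down version exhibits selection without any explicit fitness function, since shorter genomes replicate faster:

import random

SOUP_CAPACITY = 200
P_MUT = 0.005                            # per-byte copy-error rate

def replicate(genome):
    """Copy a genome with point mutations and rare deletions/insertions."""
    child = [random.randrange(256) if random.random() < P_MUT else b
             for b in genome]
    if len(child) > 1 and random.random() < P_MUT * len(child):
        del child[random.randrange(len(child))]          # deletion
    if random.random() < P_MUT * len(child):
        child.insert(random.randrange(len(child) + 1),
                     random.randrange(256))              # insertion
    return child

soup = [{"genome": [0] * 80, "progress": 0, "born": 0}]  # an "Ancestor"
for tick in range(200_000):
    org = random.choice(soup)            # one tick of CPU time
    org["progress"] += 1
    if org["progress"] >= len(org["genome"]):            # copy finished
        org["progress"] = 0
        soup.append({"genome": replicate(org["genome"]),
                     "progress": 0, "born": tick})
        if len(soup) > SOUP_CAPACITY:                    # the reaper
            soup.pop(min(range(len(soup)),
                         key=lambda i: soup[i]["born"]))

print(sum(len(o["genome"]) for o in soup) / len(soup))   # mean genome length

Running the loop shows the mean genome length drifting away from the ancestral 80 bytes, typically downward, since a shorter program finishes its copy sooner. Parasitism, the next phenomenon Ray observed, requires the real Tierran instruction set, in which one organism can execute another's copy loop.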
In his freely evolving virtual zoo, Ray found parasites, and even hy-
perparasites, but he also found instances of altruism and symbiosis. Most
astonishingly of all, when he turned off the mutations in his Eden, his or-
ganisms invented sex (using mechanisms which Ray had introduced to allow
for parasitism). They had never been told about sex by their creator, but
they seemed to find their own way to the Tree of Knowledge.
Thomas Ray expresses the aims of his artificial life research as follows:12
“Everything we know about life is based on one example: Life on Earth.
Everything we know about intelligence is based on one example: Human
intelligence. This limited experience burdens us with preconceptions, and
limits our imaginations... How can we go beyond our conceptual limits, find
the natural form of intelligent processes in the digital medium, and work
with the medium to bring it to its full potential, rather than just imposing
the world we know upon it by forcing it to run a simulation of our physics,
chemistry and biology?...”
“In the carbon medium it was evolution that explored the possibilities
inherent in the medium, and created the human mind. Evolution listens
to the medium it is embedded in. It has the advantage of being mindless,
and therefore devoid of preconceptions, and not limited by imagination.”
“I propose the creation of a digital nature - a system of wildlife reserves
in cyberspace in the interstices between human colonizations, feeding off
unused CPU-cycles and permitted a share of our bandwidth. This would
be a place where evolution can spontaneously generate complex information
processes, free from the demands of human engineers and market analysts
telling it what the target applications are - a place for a digital Cambrian
explosion of diversity and complexity...”
“It is possible that out of this digital nature, there might emerge a
digital intelligence, truly rooted in the nature of the medium, rather than
brutishly copied from organic nature. It would be a fundamentally alien
intelligence, but one that would complement rather than duplicate our talents
and abilities”.

12 T. Ray, http://www.hip.atr.co.jp/~ray/pubs/pubs.html
In Thomas Ray’s experiments, the source of thermodynamic information
is the electrical power needed to run the computer. In an important sense
one might say that the digital organisms in Ray’s Tierra system are living.
This type of experimentation is in its infancy, but since it combines the
great power of computers with the even greater power of natural selection,
it is hard to see where it might end, and one can fear that it will end badly
despite the precaution of conducting the experiments in a virtual computer.
Have Thomas Ray and other “a-lifers”13 created artificial living organ-
isms? Or have they only produced simulations that mimic certain aspects of
life? Obviously the answer to this question depends on the definition of life,
and there is no commonly agreed-upon definition. Does life have to involve
carbon chemistry? The a-lifers call such an assertion “carbon chauvinism”.
They point out that elsewhere in the universe there may exist forms of life
based on other media, and their program is to find medium-independent
characteristics which all forms of life must have.
In the present book, especially in Chapter 4, we have looked at the
phenomenon of life from the standpoint of thermodynamics, statistical me-
chanics and information theory. Seen from this viewpoint, a living organism
is a complex system produced by an input of thermodynamic information in
the form of Gibbs free energy. This incoming information keeps the system
very far away from thermodynamic equilibrium, and allows it to achieve a
statistically unlikely and complex configuration. The information content
of any complex (living) system is a measure of how unlikely it would be
to arise by chance. With the passage of time, the entropy of the universe
increases, and the almost unimaginably improbable initial configuration
of the universe is converted into complex free-energy-using systems that
could never have arisen by pure chance. Life maintains itself and evolves
by feeding on Gibbs free energy, that is to say, by feeding on the enormous
improbability of the initial conditions of the universe.
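In rough quantitative terms (standard relations, not a quotation from Chapter 4), the Gibbs free energy that a process dissipates at temperature T exports entropy to the surroundings and sets an upper bound on the cybernetic information it can pay for:

\Delta S_{\mathrm{surroundings}} = \frac{\Delta G_{\mathrm{consumed}}}{T},
\qquad
I_{\mathrm{max}} = \frac{\Delta G_{\mathrm{consumed}}}{k_{B}\,T\,\ln 2}\ \mathrm{bits},

where k_B is Boltzmann's constant. The statistically unlikely configuration of the organism is bought at the price of a larger entropy increase outside it.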
All of the forms of artificial life that we have discussed derive their
complexity from the consumption of free energy. For example, Spiegelman’s
evolving RNA molecules feed on the Gibbs free energy of the phosphate
bonds of their precursors, ATP, GTP, UTP, and CTP. This free energy
is the driving force behind the artificial evolution which Spiegelman
observed. In his experiment, thermodynamic information in the form of
high-energy phosphate bonds is converted into cybernetic information.

13 In this terminology, ordinary biologists are “b-lifers”.
Similarly, in the polymerase chain reaction, discussed in Chapter 3, the
Gibbs free energy of the phosphate bonds in the precursor molecules dATP,
dTTP, dGTP and dCTP drives the reaction. With the aid of the enzyme DNA
polymerase, the soup of precursors is converted into a highly improbable
configuration consisting of identical copies of the original sequence. Despite
the high improbability of the resulting configuration, the entropy of the
universe has increased in the copying process. The improbability of the set
of copies is less than the improbability of the high energy phosphate bonds
of the precursors.
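The bookkeeping can be made explicit (our notation, not the book's). Doubling in each cycle gives, after n cycles,

N = N_{0}\,2^{\,n}

copies, and the entropy balance of the copying process is

\Delta S_{\mathrm{universe}} = \Delta S_{\mathrm{copies}} + \frac{\Delta G_{\mathrm{bonds}}}{T} > 0,

where the configurational term \Delta S_{\mathrm{copies}} is negative (a set of identical copies is improbable) but is outweighed by the positive entropy exported when the high-energy phosphate bonds of the precursors are consumed.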
The polymerase chain reaction reflects, on a small scale, what happens
on a much larger scale in all living organisms. Their complexity is such that
they never could have originated by chance, but although their improbabil-
ity is extremely great, it is less than the still greater improbability of the
configurations of matter and energy from which they arose. As complex sys-
tems are produced, the entropy of the universe continually increases, i.e.,
the universe moves from a less probable configuration to a more probable
one.

Suggestions for further reading


(1) P. Friedland and L.H. Kedes, Discovering the secrets of DNA, Comm.
of the ACM, 28, 1164-1185 (1985).
(2) E.F. Meyer, The first years of the protein data bank, Protein Science
6, 1591-7, July (1997).
(3) C. Kulikowski, Artificial intelligence in medicine: History, evolution
and prospects, in Handbook of Biomedical Engineering, J. Bronzino,
editor, 181.1-181.18, CRC and IEEE Press, Boca Raton Fla., (2000).
(4) C. Gibas and P. Jambeck, Developing Bioinformatics Computer
Skills, O'Reilly, (2001).
(5) F.L. Carter, The molecular device computer: point of departure for
large-scale cellular automata, Physica D, 10, 175-194 (1984).
(6) K.E. Drexler, Molecular engineering: an approach to the development
of general capabilities for molecular manipulation, Proc. Natl. Acad.
Sci USA, 78, 5275-5278 (1981).
(7) K.E. Drexler, Engines of Creation, Anchor Press, Garden City, New
York, (1986).
(8) D.M. Eigler and E.K. Schweizer, Positioning single atoms with a
scanning tunnelling microscope, Nature, 344, 524-526 (1990).
(9) E.D. Gilbert, editor, Miniaturization, Reinhold, New York, (1961).
(10) R.C. Haddon and A.A. Lamola, The molecular electronic devices and
the biochip computer: present status, Proc. Natl. Acad. Sci. USA,
82, 1874-1878 (1985).
(11) H.M. Hastings and S. Waner, Low dissipation computing in biological
systems, BioSystems, 17, 241-244 (1985).
(12) J.J. Hopfield, J.N. Onuchic and D.N. Beratan, A molecular shift reg-
ister based on electron transfer, Science, 241, 817-820 (1988).
(13) L. Keszthelyi, Bacteriorhodopsin, in Bioenergetics, P. Gräber and
G. Milazzo (editors), Birkhäuser Verlag, Basel, Switzerland, (1997).
(14) F.T. Hong, The bacteriorhodopsin model membrane as a prototype
molecular computing element, BioSystems, 19, 223-236 (1986).
(15) L.E. Kay, Life as technology: Representing, intervening and molecu-
larizing, Rivista di Storia della Scienzia, II, 1, 85-103 (1993).
(16) A.P. Alivisatos et al., Organization of ’nanocrystal molecules’ using
DNA, Nature, 382, 609-611, (1996).
(17) T. Bjørnholm et al., Self-assembly of regioregular, amphiphilic poly-
thiophenes into highly ordered pi-stacked conjugated thin films and
nanocircuits, J. Am. Chem. Soc. 120, 7643 (1998).
(18) L.J. Fogel, A.J. Owens, and M.J. Walsh, Artificial Intelligence
Through Simulated Evolution, John Wiley, New York, (1966).
(19) L.J. Fogel, A retrospective view and outlook on evolutionary algo-
rithms, in Computational Intelligence: Theory and Applications, in
5th Fuzzy Days, B. Reusch, editor, Springer-Verlag, Berlin, (1997).
(20) P.J. Angeline, Multiple interacting programs: A representation for
evolving complex behaviors, Cybernetics and Systems, 29 (8), 779-
806 (1998).
(21) X. Yao and D.B. Fogel, editors, Proceedings of the 2000 IEEE Sym-
posium on Combinations of Evolutionary Programming and Neural
Networks, IEEE Press, Piscataway, NJ, (2001).
(22) R.M. Brady, Optimization strategies gleaned from biological evolu-
tion, Nature 317, 804-806 (1985).
(23) K. De Jong, Adaptive system design — a genetic approach, IEEE
Trans. Syst., Man, and Cybern., 10, 566-574 (1980).
(24) W.B. Dress, Darwinian optimization of synthetic neural systems,
IEEE Proc. ICNN 4, 769-776 (1987).
(25) J.H. Holland, A mathematical framework for studying learning in
classifier systems, Physica D, 22, 307-313 (1986).
(26) R.F. Albrecht, C.R. Reeves, and N.C. Steele (editors), Artificial Neu-
ral Nets and Genetic Algorithms, Springer Verlag, (1993).
(27) L. Davis, editor, Handbook of Genetic Algorithms, Van Nostrand
Reinhold, New York, (1991).
(28) Z. Michalewicz, Genetic Algorithms + Data Structures = Evolution
Programs, Springer-Verlag, New York, (1992), second edition, (1994).
(29) K.I. Diamantaras and S.Y. Kung, Principal Component Neural Net-
works: Theory and Applications, John Wiley and Sons, New York,
(1996).
(30) A. Garliauskas and A. Soliunas, Learning and recognition of visual
patterns by human subjects and artificial intelligence systems, Infor-
matica, 9 (4), (1998).
(31) A. Garliauskas, Numerical simulation of dynamic synapse-dendrite-
soma neuronal processes, Informatica, 9 (2), 141-160, (1998).
(32) U. Seifert and B. Michaelis, Growing multi-dimensional self-
organizing maps, International Journal of Knowledge-Based Intel-
ligent Engineering Systems, 2 (1), 42-48, (1998).
(33) S. Mitra, S.K. Pal, and M.K. Kundu, Fingerprint classification using
fuzzy multi-layer perceptron, Neural Computing and Applications, 2,
227-233 (1994).
(34) M. Verleysen (editor), European Symposium on Artificial Neural Net-
works, D-Facto, (1999).
(35) R.M. Golden, Mathematical Methods for Neural Network Analysis
and Design, MIT Press, Cambridge MA, (1996).
(36) S. Haykin, Neural Networks: A Comprehensive Foundation,
Macmillan, New York, (1994).
(37) M.A. Gronroos, Evolutionary Design of Neural Networks, Thesis,
Computer Science, Department of Mathematical Sciences, Univer-
sity of Turku, Finland, (1998).
(38) D.E. Goldberg, Genetic Algorithms in Search, Optimization and Ma-
chine Learning, Addison-Wesley, (1989).
(39) M. Mitchell, An Introduction to Genetic Algorithms, MIT Press,
Cambridge MA, (1996).
(40) L. Davis (editor), Handbook of Genetic Algorithms, Van Nostrand
and Reinhold, New York, (1991).
(41) J.H. Holland, Adaptation in Natural and Artificial Systems, MIT
Press, Cambridge MA, (1992).
(42) J.H. Holland, Hidden Order: How Adaptation Builds Complexity,
Addison-Wesley, (1995).
(43) W. Banzhaf, P. Nordin, R.E. Keller and F. Francone, Genetic Pro-
gramming - An Introduction; On the Automatic Evolution of Com-
puter Programs and its Applications, Morgan Kaufmann, San Fran-
cisco CA, (1998).
(44) W. Banzhaf et al. (editors), GECCO-99: Proceedings of the Genetic
and Evolutionary Computation Conference, Morgan Kaufmann, San
Francisco CA, (2000).
(45) W. Banzhaf, Editorial Introduction, Genetic Programming and
Evolvable Machines, 1, 5-6, (2000).
(46) W. Banzhaf, The artificial evolution of computer code, IEEE Intelli-
gent Systems, 15, 74-76, (2000).
(47) J.J. Grefenstette (editor), Proceedings of the Second International
Conference on Genetic Algorithms and their Applications, Lawrence
Erlbaum Associates, Hillsdale New Jersey, (1987).
(48) J. Koza, Genetic Programming: On the Programming of Computers
by means of Natural Selection, MIT Press, Cambridge MA, (1992).
(49) J. Koza et al., editors, Genetic Programming 1997: Proceedings of
the Second Annual Conference, Morgan Kaufmann, San Francisco,
(1997).
(50) W.B. Langdon, Genetic Programming and Data Structures, Kluwer,
(1998).
(51) D. Lundh, B. Olsson, and A. Narayanan, editors, Bio-Computing and
Emergent Computation 1997, World Scientific, Singapore, (1997).
(52) P. Angeline and K. Kinnear, editors, Advances in Genetic Program-
ming: Volume 2, MIT Press, (1997).
(53) J.H. Holland, Adaptation in Natural and Artificial Systems, The Uni-
versity of Michigan Press, Ann Arbor, (1975).
(54) David B. Fogel and Wirt Atmar (editors), Proceedings of the First
Annual Conference on Evolutionary Programming, Evolutionary
Programming Society, La Jolla California, (1992).
(55) M. Sipper et al., A phylogenetic, ontogenetic, and epigenetic view
of bioinspired hardware systems, IEEE Transactions on Evolutionary
Computation, 1, 1 (1997).
(56) E. Sanchez and M. Tomassini, editors, Towards Evolvable Hardware,
Lecture Notes in Computer Science, 1062, Springer-Verlag, (1996).
(57) J. Markoff, A Darwinian creation of software, New York Times, Sec-
tion C, p.6, February 28, (1990).
(58) A. Thompson, Hardware Evolution: Automatic design of electronic
circuits in reconfigurable hardware by artificial evolution, Distin-
guished dissertation series, Springer-Verlag, (1998).
(59) W. McCulloch and W. Pitts, A Logical Calculus of the Ideas Imma-
nent in Nervous Activity, Bulletin of Mathematical Biophysics, 7,
115-133, (1943).
(60) F. Rosenblatt, Principles of Neurodynamics, Spartan Books, (1962).
(61) C. von der Malsburg, Self-Organization of Orientation Sensitive Cells
in the Striate Cortex, Kybernetik, 14, 85-100, (1973).
(62) S. Grossberg, Adaptive Pattern Classification and Universal Recod-
ing: 1. Parallel Development and Coding of Neural Feature Detec-
tors, Biological Cybernetics, 23, 121-134, (1976).
(63) J.J. Hopfield and D.W. Tank, Computing with Neural Circuits: A
Model, Science, 233, 625-633, (1986).
(64) R.D. Beer, Intelligence as Adaptive Behavior: An Experiment in
Computational Neuroethology, Academic Press, New York, (1990).
(65) S. Haykin, Neural Networks: A Comprehensive Foundation, IEEE
Press and Macmillan, (1994).
(66) S.V. Kartalopoulos, Understanding Neural Networks and Fuzzy
Logic: Concepts and Applications, IEEE Press, (1996).
(67) D. Fogel, Evolutionary Computation: The Fossil Record, IEEE Press,
(1998).
(68) D. Fogel, Evolutionary Computation: Toward a New Philosophy of
Machine Intelligence, IEEE Press, Piscataway NJ, (1995).
(69) J.M. Zurada, R.J. Marks II, and C.J. Robinson, editors, Computa-
tional Intelligence: Imitating Life, IEEE Press, (1994).
(70) J. Bezdek and S.K. Pal, editors, Fuzzy Models for Pattern Recogni-
tion: Methods that Search for Structure in Data, IEEE Press, (1992).
(71) M.M. Gupta and G.K. Knopf, editors, Neuro-Vision Systems: Prin-
ciples and Applications, IEEE Press, (1994).
(72) C. Lau, editor, Neural Networks. Theoretical Foundations and Anal-
ysis, IEEE Press, (1992).
(73) T. Bäck, D.B. Fogel and Z. Michalewicz, editors, Handbook of Evo-
lutionary Computation, Oxford University Press, (1997).
(74) D.E. Rumelhart and J.L. McClelland, Parallel Distributed Process-
ing: Explorations in the Microstructure of Cognition, Volumes I and
II, MIT Press, (1986).
(75) J. Hertz, A. Krogh and R.G. Palmer, Introduction to the Theory of
Neural Computation, Addison Wesley, (1991).
(76) J.A. Anderson and E. Rosenfeld, Neurocomputing: Foundations of
Research, MIT Press, (1988).
(77) R.C. Eberhart and R.W. Dobbins, Early neural network development
history: The age of Camelot, IEEE Engineering in Medicine and
Biology 9, 15-18 (1990).
(78) T. Kohonen, Self-Organization and Associative Memory, Springer-
Verlag, Berlin, (1984).
(79) T. Kohonen, Self-Organizing Maps, Springer-Verlag, Berlin, (1997).
(80) G.E. Hinton, How neural networks learn from experience, Scientific
American 267, 144-151 (1992).
(81) K. Swingler, Applying Neural Networks: A Practical Guide, Aca-
demic Press, New York, (1996).
(82) B.K. Wong, T.A. Bodnovich and Y. Selvi, Bibliography of neural net-
work business applications research: 1988-September 1994, Expert
Systems 12, 253-262 (1995).
(83) I. Kaastra and M. Boyd, Designing neural networks for forecast-
ing financial and economic time series, Neurocomputing 10, 251-273
(1996).
(84) T. Poddig and H. Rehkugler, A world model of integrated financial
markets using artificial neural networks, Neurocomputing 10, 251-273
(1996).
(85) J.A. Burns and G.M. Whitesides, Feedforward neural networks in
chemistry: Mathematical systems for classification and pattern recog-
nition, Chem. Rev. 93, 2583-2601, (1993).
(86) M.L. Astion and P.W. Wilding, The application of backpropagation
neural networks to problems in pathology and laboratory medicine,
Arch. Pathol. Lab. Med. 116, 995-1001 (1992).
(87) D.J. Maddalena, Applications of artificial neural networks to prob-
lems in quantitative structure activity relationships, Exp. Opin.
Ther. Patents 6, 239-251 (1996).
(88) W.G. Baxt, Application of artificial neural networks to clinical
medicine, [Review], Lancet 346, 1135-8 (1995).
(89) A. Chablo, Potential applications of artificial intelligence in telecom-
munications, Technovation 14, 431-435 (1994).
(90) D. Horwitz and M. El-Sibaie, Applying neural nets to railway engi-
neering, AI Expert, 36-41, January (1995).
(91) J. Plummer, Tighter process control with neural networks, 49-55, Oc-
tober (1993).
(92) T. Higuchi et al., Proceedings of the First International Conference
on Evolvable Systems: From Biology to Hardware (ICES96), Lecture
Notes in Computer Science, Springer-Verlag, (1997).
(93) S.A. Kauffman, Antichaos and adaptation, Scientific American, 265,
78-84, (1991).
(94) S.A. Kauffman, The Origins of Order, Oxford University Press,
(1993).
(95) M.M. Waldrop, Complexity: The Emerging Science at the Edge of
Order and Chaos, Simon and Schuster, New York, (1992).
(96) H.A. Simon, The Sciences of the Artificial, 3rd Edition, MIT Press,
(1996).
(97) M.L. Hooper, Embryonic Stem Cells: Introducing Planned Changes
into the Animal Germline, Harwood Academic Publishers, Philadel-
phia, (1992).
(98) F. Grosveld, (editor), Transgenic Animals, Academic Press, New
York, (1992).
(99) G. Köhler and C. Milstein, Continuous cultures of fused cells secret-
ing antibody of predefined specificity, Nature, 256, 495-497 (1975).
(100) S. Spiegelman, An approach to the experimental analysis of precellu-
lar evolution, Quarterly Reviews of Biophysics, 4, 213-253 (1971).
(101) M. Eigen, Self-organization of matter and the evolution of biological
macromolecules, Naturwissenschaften, 58, 465-523 (1971).
(102) M. Eigen and W. Gardiner, Evolutionary molecular engineering
based on RNA replication, Pure and Applied Chemistry, 56, 967-978
(1984).
(103) G.F. Joyce, Directed molecular evolution, Scientific American 267
(6), 48-55 (1992).
(104) N. Lehman and G.F. Joyce, Evolution in vitro of an RNA enzyme
with altered metal dependence, Nature, 361, 182-185 (1993).
(105) E. Culotta, Forcing the evolution of an RNA enzyme in the test tube,
Science, 257, 31 July, (1992).
(106) S.A. Kauffman, Applied molecular evolution, Journal of Theoretical
Biology, 157, 1-7 (1992).
(107) H. Fenniri, Combinatorial Chemistry. A Practical Approach, Oxford
University Press, (2000).
(108) P. Seneci, Solid-Phase Synthesis and Combinatorial Technologies,
John Wiley & Sons, New York, (2001).
(109) G.B. Fields, J.P. Tam, and G. Barany, Peptides for the New Millen-
nium, Kluwer Academic Publishers, (2000).
(110) Y.C. Martin, Diverse viewpoints on computational aspects of molecu-
lar diversity, Journal of Combinatorial Chemistry, 3, 231-250, (2001).
(111) C.G. Langton et al., editors, Artificial Life II: Proceedings of the
Workshop on Artificial Life Held in Santa Fe, New Mexico, Addison-
Wesley, Reading MA, (1992).
(112) W. Aspray and A. Burks, eds., Papers of John von Neumann on
Computing and Computer Theory, MIT Press, (1987).
(113) M. Conrad and H.H. Pattee, Evolution experiments with an artificial
ecosystem, J. Theoret. Biol., 28, (1970).
(114) C. Emmeche, Life as an Abstract Phenomenon: Is Artificial Life Pos-
sible?, in Toward a Practice of Artificial Systems: Proceedings of the
First European Conference on Artificial Life, MIT Press, Cambridge
MA, (1992).
(115) C. Emmeche, The Garden in the Machine: The Emerging Science of
Artificial Life, Princeton University Press, Princeton NJ, (1994).
(116) S. Levy, Artificial Life: The Quest for a New Creation, Pantheon,
New York, (1992).
(117) K. Lindgren and M.G. Nordahl, Cooperation and Community Struc-
ture in Artificial Ecosystems, Artificial Life, 1, 15-38 (1994).
(118) P. Husbands and I. Harvey (editors), Proceedings of the 4th Confer-
ence on Artificial Life (ECAL ’97), MIT Press, (1997).
(119) C.G. Langton, (editor), Artificial Life: An Overview, MIT Press,
Cambridge MA, (1997).
(120) C.G. Langton, ed., Artificial Life, Addison-Wesley, (1987).
(121) A.A. Beaudry and G.F. Joyce, Directed evolution of an RNA enzyme,
Science, 257, 635-641 (1992).
(122) D.P. Bartel and J.W. Szostak, Isolation of new ribozymes from a
large pool of random sequences, Science, 261, 1411-1418 (1993).
(123) K. Kelly, Out of Control, www.kk.org/outofcontrol/index.html,
(2002).
(124) K. Kelly, The Third Culture, Science, February 13, (1998).
(125) S. Blakeslee, Computer life-form “mutates” in an evolution experi-
ment, natural selection is found at work in a digital world, New York
Times, November 25, (1997).
(126) M. Ward, It’s life, but not as we know it, New Scientist, July 4,
(1998).
(127) P. Guinnessy, “Life” crawls out of the digital soup, New Scientist,
April 13, (1996).
(128) L. Hurst and R. Dawkins, Life in a test tube, Nature, May 21, (1992).
(129) J. Maynard Smith, Byte-sized evolution, Nature, February 27, (1992).
(130) W.D. Hillis, Intelligence as an Emergent Behavior, in Artificial In-
telligence, S. Graubard, ed., MIT Press, (1988).
(131) T.S. Ray, Evolution and optimization of digital organisms, in Sci-
entific Excellence in Supercomputing: The IBM 1990 Contest Prize
Papers, K.R. Billingsly, E. Derohanes, and H. Brown, III, editors,
The Baldwin Press, University of Georgia, Athens GA 30602, (1991).
(132) S. Lloyd, The calculus of intricacy, The Sciences, October, (1990).
(133) M. Minsky, The Society of Mind, Simon and Schuster, (1985).
(134) D. Pines, ed., Emerging Synthesis in Science, Addison-Wesley,
(1988).
(135) P. Prusinkiewicz and A. Lindenmayer, The Algorithmic Beauty of
Plants, Springer-Verlag, (1990).
(136) T. Toffoli and N. Margolus, Cellular Automata Machines: A New
Environment for Modeling, MIT Press, (1987).
(137) M.M. Waldrop, Complexity: The Emerging Science at the Edge of
Order and Chaos, Simon and Schuster, (1992).
(138) T.S. Ray et al., Kurzweil's Turing Fallacy, in Are We Spiritual Ma-
chines?: Ray Kurzweil vs. the Critics of Strong AI, J. Richards, ed.,
Viking, (2002).
(139) T.S. Ray, Aesthetically Evolved Virtual Pets, in Artificial Life 7
Workshop Proceedings, C.C. Maley and E. Boudreau, eds., (2000).
(140) T.S. Ray and J.F. Hart, Evolution of Differentiation in Digital Or-
ganisms, in Artificial Life VII, Proceedings of the Seventh Interna-
tional Conference on Artificial Life, M.A. Bedau, J.S. McCaskill,
N.H. Packard, and S. Rasmussen, eds., MIT Press, (2000).
(141) T.S. Ray, Artificial Life, in Frontiers of Life, Vol. 1: The Origins of
Life, R. Dulbecco et al., eds., Academic Press, (2001).
(142) T.S. Ray, Selecting naturally for differentiation: Preliminary evolu-
tionary results, Complexity, 3 (5), John Wiley and Sons, (1998).
(143) K. Sims, Artificial Evolution for Computer Graphics, Computer
Graphics, 25 (4), 319-328 (1991).
(144) K. Sims, Galapagos, http://web.genarts.com/galapagos, (1997).