
Proceedings of the 3rd National Conference; INDIACom-2009

Computing For Nation Development, February 26-27, 2009

Bharati Vidyapeeth's Institute of Computer Applications and Management, New Delhi

DNA COMPUTING: APPLICATIONS AND CHALLENGES


Medha Sidher, BVICAM (IP), [email protected]
Aditya Miglani, BPIOBS (IP), [email protected]
Sandeep Kataria, BVICAM (IP), [email protected]

ABSTRACT
Think of DNA as software and enzymes as hardware. Put them together in a test tube. The way in which these molecules undergo chemical reactions with each other allows simple operations to be performed as a byproduct of the reactions. Scientists tell the devices what to do by controlling the composition of the DNA software molecules. It is a completely different approach from pushing electrons around a dry circuit in a conventional computer. DNA computing is a discipline that aims at harnessing individual molecules at the nanoscopic level for computational purposes. Computation with DNA molecules holds an inherent interest for researchers in computer science and biology. With its vast parallelism and high-density storage, DNA computing approaches are employed to solve many combinatorial problems. DNA has been explored as an excellent material and a fundamental building block for building large-scale nanostructures, constructing individual nanomechanical devices, and performing computations. Molecular-scale autonomous programmable computers have been demonstrated that allow input and output information to be in molecular form. This paper presents a survey of recent advances in DNA computing and discusses major achievements and challenges for researchers in the foreseeable future.
KEYWORDS
DNA (deoxyribonucleic acid), Biomolecular computing, Swarm intelligence
INTRODUCTION
As we all know, DNA, the basis of our existence, has now entered our day-to-day problem solving as well, by means of DNA computing. Computers have affected our lives greatly, whether in solving complex problems or in making a safety pin using machines. But the day is not far off when DNA computers will come into sight and carry out operations in the same way as is done in our bodies. Biomolecular computing, where computations are performed by biomolecules, is challenging traditional approaches to computation both theoretically and technologically. The idea that molecular systems can perform computations is not new and was indeed more natural in the pre-transistor age. The essential role of information processing in evolution, and the ability to address these issues on laboratory timescales at the molecular level, was first addressed by Adleman's key experiment (Adleman 1994), which demonstrated that the tools of laboratory molecular biology could be used to program computations with DNA in vitro. DNA computing approaches can be performed either in vitro (purely chemical) or in vivo (i.e. inside cellular life forms). The huge information storage capacity of DNA and the low energy dissipation of DNA processing led to an explosion of interest in DNA computing. Artificial intelligence methods are used to address the combinatorial issues in DNA computing. DNA computing operates in natural, noisy environments, such as a glass of water. It involves an evolvable platform for computation in which the computer construction machinery itself is embedded.

Copyright © INDIACom 2009
DNA STRUCTURE
Deoxyribonucleic acid (DNA) is a nucleic acid that contains the genetic instructions for the development and functioning of living organisms. All living things contain DNA genomes. DNA is the major example of a biological molecule that stores information and can be manipulated, via enzymes and nucleic acid interactions, to retrieve information. Just as a string of binary data is encoded with zeros and ones, a strand of DNA is encoded with four bases (known as nucleotides), represented by the letters A, T, C, and G. Each strand, according to chemical convention, has a 5' and a 3' end; hence, any single strand has a natural orientation. Bonding occurs by the pairwise attraction of bases: A bonds with T and G bonds with C. The pairs (A, T) and (G, C) are therefore known as complementary base pairs. DNA computing relies on developing algorithms that solve problems using the information encoded in the sequence of nucleotides that make up DNA's double helix, and then breaking and making new bonds between them to reach the answer. Because the nucleotides are spaced every 0.35 nm along the DNA molecule, DNA has a remarkable data density, estimated at one bit per cubic nanometre, and can potentially hold exabytes (10^18 bytes) of information in a gram of DNA. DNA computing is also massively parallel and can reach approximately 10^20 operations per second, compared to existing teraflop supercomputers.
Another important property of DNA is its double-stranded nature. The bases A and T, and C and G, can bind together, forming base pairs. Therefore, every DNA sequence has a natural complement. For example, if sequence S is AATTCGCGT, its complement S' is TTAAGCGCA. Both S and S' will hybridize to form double-stranded DNA. This complementarity can be used for error correction. If an error occurs in one of the strands of double-stranded DNA, repair enzymes can restore the proper DNA sequence by using the complementary strand as a reference. In DNA replication, there is one error for every 10^9 copied bases, whereas hard drives have one error for every 10^13 bits even with Reed-Solomon correction.
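The complementarity described above is easy to sketch in code. The following is a minimal illustration using the example sequence from the text; no real biochemistry is modelled, only the base-pairing rule itself:

```python
# Watson-Crick pairing rule: A<->T, C<->G.
COMPLEMENT = str.maketrans("ATCG", "TAGC")

def complement(strand: str) -> str:
    """Return the base-by-base complement of a DNA strand."""
    return strand.translate(COMPLEMENT)

S = "AATTCGCGT"
print(complement(S))  # -> TTAAGCGCA, the text's example
```

Note that applying the rule twice recovers the original strand, which is exactly the property repair enzymes exploit when they use one strand as a reference for the other.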

DNA COMPUTING
DNA computing is a novel and fascinating development at the
interface of computer science and molecular biology. It is
advantageous over electronic computers in power use, space
use, and efficiency, due to its ability to compute in a highly
parallel fashion. It also has a theoretical role in cryptography,
where in particular it allows unbreakable one-time pads to be
efficiently constructed and used. It has emerged in recent years not simply as an exciting technology for information processing, but also as a catalyst for knowledge transfer between information processing, nanotechnology, and biology. This area of research has the potential to change our understanding of the theory and practice of computing.
BIOMOLECULAR COMPUTING
Biomolecular computers are molecular-scale, programmable, autonomous computing devices made of biological molecules. They hold the promise of direct computational analysis of biological information in its native biomolecular form, avoiding its conversion into an electronic representation. This has led to the pursuit of autonomous, programmable computers that can be modelled as finite automata. An automaton can be stochastic, namely having two or more competing transitions for each state-symbol combination, each with a prescribed probability. A stochastic automaton is useful for processing uncertain information, like most biological information. Because of the stochastic nature of biomolecular systems, a stochastic biomolecular computer would be more favorable for analyzing biological information than a deterministic one. Stochastic molecular automata have been constructed in which stochastic choice is realized by means of competition between alternative paths, and choice probabilities are programmed by the relative molar concentrations of the software molecules coding for the alternatives. This approach was used in the construction of a molecular computer capable of probabilistic logical analysis of disease-related molecular indicators.
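A stochastic automaton of this kind can be sketched in software. In the toy model below, the transition probabilities stand in for the relative molar concentrations of the competing software molecules; the states, symbols, and numbers are all invented for illustration. Rather than sampling runs, the sketch propagates probability mass exactly, which is deterministic and easy to check:

```python
from collections import defaultdict

# (state, symbol) -> list of competing transitions with probabilities,
# mimicking competition between alternative molecular reaction paths.
transitions = {
    ("S0", "a"): [("S0", 0.7), ("S1", 0.3)],
    ("S0", "b"): [("S1", 1.0)],
    ("S1", "a"): [("S1", 1.0)],
    ("S1", "b"): [("S0", 0.5), ("S1", 0.5)],
}

def acceptance_probability(word, start="S0", accept={"S1"}):
    """Propagate probability mass through every competing transition."""
    dist = {start: 1.0}
    for sym in word:
        nxt = defaultdict(float)
        for state, p in dist.items():
            for tgt, q in transitions.get((state, sym), []):
                nxt[tgt] += p * q
        dist = dict(nxt)
    return sum(p for s, p in dist.items() if s in accept)

print(acceptance_probability("ab"))  # -> 0.85
```

In the molecular realization, raising the concentration of one software molecule simply raises the corresponding probability in this table.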
PARALLEL COMPUTATION
In the cell, DNA is modified biochemically by a variety of enzymes, which are tiny protein machines that read and process DNA according to nature's design. There is a wide variety and number of these operational proteins, which manipulate DNA at the molecular level. For example, there are enzymes that cut DNA and enzymes that paste it back together. Other enzymes function as copiers, and others as repair units. Molecular biology, biochemistry, and biotechnology have developed techniques that allow us to perform many of these cellular functions in the test tube. It is this cellular machinery, along with some synthetic chemistry, that makes up the palette of operations available for computation. Just as a CPU has a basic suite of operations, such as addition, bit-shifting, and logical operators (AND, OR, NOT, NOR), that allow it to perform even the most complex calculations, DNA has cutting, copying, pasting, repairing, and many others. Note that in the test tube, enzymes do not function sequentially, working on one DNA molecule at a time. Rather, many copies of an enzyme can work on many DNA molecules simultaneously. This is the power of DNA computing: it can work in a massively parallel fashion.
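The enzymatic "instruction set" described above can be caricatured with string operations. The analogues below (the recognition site and strands are invented; a restriction enzyme is modelled as a split, ligase as a join, polymerase as replication) show how the operations compose:

```python
def cut(strand, site):
    """Restriction-enzyme analogue: split the strand at each recognition site."""
    return strand.split(site)

def paste(fragments, linker=""):
    """Ligase analogue: join fragments back into one strand."""
    return linker.join(fragments)

def copy(strand, n):
    """Polymerase analogue: many identical copies, all processable at once."""
    return [strand] * n

# Cutting at the (hypothetical) site GAATTC, then re-ligating,
# recovers the original strand.
frags = cut("AAGAATTCTTGAATTCCC", "GAATTC")
print(frags)                     # -> ['AA', 'TT', 'CC']
print(paste(frags, "GAATTC"))    # -> AAGAATTCTTGAATTCCC
```

The parallelism the text emphasizes corresponds to applying these operations to every strand in `copy(...)` at once, rather than one at a time.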
ADLEMAN'S EXPERIMENT
In 1994, a US proof-of-principle study showed that DNA could be used to solve mathematical problems, which attracted considerable interest from researchers hoping that DNA would one day replace silicon as the basis for a new wave of computers. But the initial excitement has since dampened as scientists have realized that there are numerous problems inherent to DNA computing and that they will have to live with their silicon-based computers for quite a while yet. The field consequently changed its focus, and in essence, research into DNA computing is now chiefly concerned with "investigating processes in cells that can be viewed as logical computations and then looking to use these computations to our advantage".
It was Leonard Adleman, professor of computer science and molecular biology at the University of Southern California, USA, who pioneered the field when he built the first DNA-based computer. Intrigued by the molecule's immense capacity to store information in a very small space, he set out to solve a classic puzzle in mathematics: the so-called Hamiltonian Path problem, a close relative of the well-known Traveling Salesman problem. This seemingly simple puzzle (a salesman must visit a number of cities that are interconnected by a limited series of roads without passing through any city more than once) is actually quite a killer, and even the most advanced supercomputers would take years to calculate the optimal route for 50 cities. Adleman solved the problem for seven cities within a second, using DNA molecules in a standard reaction tube. He represented each of the seven cities as separate, single-stranded DNA molecules, 20 nucleotides long, and all possible paths between cities as DNA molecules composed of the last ten nucleotides of the departure city and the first ten nucleotides of the arrival city. Mixing the DNA strands with DNA ligase and adenosine triphosphate (ATP) resulted in the generation of all possible random paths through the cities. However, the majority of these paths were not applicable to the situation: they were either too long or too short, or they did not start or finish in the right city. Adleman then filtered out all the paths that did not start or end with the correct molecule and those that did not have the correct length and composition. Any remaining DNA molecules represented a solution to the problem.
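Adleman's generate-and-filter protocol can be mimicked in silico. The sketch below uses a small hypothetical four-city directed graph (not Adleman's actual seven-city instance): every edge sequence of the right length plays the role of the random ligation products, the sticky-end match is modelled as consecutive edges sharing a city, and the final checks correspond to his purification and length-selection steps.

```python
import itertools

# Hypothetical 4-city instance; city names and edges are illustrative.
cities = ["ATL", "BOS", "CHI", "DET"]
edges = [("ATL", "BOS"), ("BOS", "CHI"), ("ATL", "CHI"),
         ("CHI", "DET"), ("BOS", "DET")]

def hamiltonian_paths(cities, edges, start, end):
    """Generate-and-filter, mimicking Adleman's protocol:
    form every edge sequence of the right length (the 'ligation' step),
    then keep only chains with correct endpoints that visit each city
    exactly once (the purification and gel-electrophoresis steps)."""
    n = len(cities)
    solutions = []
    for seq in itertools.product(edges, repeat=n - 1):
        path = [seq[0][0]]
        ok = True
        for a, b in seq:
            if a != path[-1]:      # edges must splice end-to-end
                ok = False
                break
            path.append(b)
        if ok and path[0] == start and path[-1] == end \
              and sorted(path) == sorted(cities):
            solutions.append(path)
    return solutions

print(hamiltonian_paths(cities, edges, "ATL", "DET"))
```

The brute-force enumeration here is exactly what the test tube performs chemically, except that the tube evaluates all candidate paths simultaneously rather than one at a time.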
The computation in Adleman's experiment chugged along at 10^14 operations per second, a rate of 100 teraflops, or 100 trillion floating-point operations per second; the world's fastest supercomputer at the time, the Earth Simulator, owned by the NEC Corporation in Japan, ran at just 35.8 teraflops. Clearly, computing with DNA has massive advantages over silicon-based machines.
CLASSES OF DNA COMPUTING
There are three apparent classes of DNA computing: (1) intra-molecular, (2) inter-molecular, and (3) supra-molecular. The Japanese project led by Hagiya focuses on intra-molecular DNA computing, constructing programmable state machines in single DNA molecules, which operate by means of advanced intra-molecular conformational transitions. Inter-molecular DNA computing, of which Adleman's experiment is an example, focuses on the hybridization between different DNA molecules as a basic step of computations. Finally, supra-molecular DNA computing, as pioneered by Winfree, harnesses the process of self-assembly of rigid DNA molecules with different sequences to perform computations. Supra-molecular assembly is the creation of molecular assemblies that are beyond the scale of one molecule. The self-assembly of small molecular building blocks programmed to form larger, nanometre-sized elements is an important goal of molecular nanotechnology.


INTELLIGENT SYSTEMS BASED DNA COMPUTING


The various uncommon properties of DNA have made it possible to solve some problems (or at least to think about their solutions). Artificial intelligence systems could be taken to new heights if we could use DNA computing in them. Some of the applications are discussed below.
DNA CHIPS
DNA chips are the prototype global technology for genetics, because they let us look at the behavior of thousands of genes at once. DNA-chip technology will be key to meeting one of the biggest scientific challenges of the coming century: the analysis of how all the genes in an organism work together as a very complex system.
DNA chips promise to carry the science of understanding
genomes to a whole new level, and to bring tools for getting
DNA-sequence information out of research labs into doctors'
offices, the better to tailor-fit medical treatments to an
individual's particular genetic makeup. Researchers in the field expect that DNA chips will enable clinicians, and in some cases even patients themselves, to quickly and inexpensively detect the presence of a whole array of genetically based diseases and conditions, including AIDS, Alzheimer's disease, cystic fibrosis, and some forms of cancer. Moreover, the technology could make it possible to conduct widespread disease screening cost-effectively and to monitor the effectiveness of patient therapies more closely.
What gives DNA chips their power in the real world is their
flexibility, compact size, speed, and low cost. Scientists can put
not just a hundred but hundreds of thousands of distinct DNA
sequences on a microscopic grid a few centimeters across.
Then, using fluorescent molecular tags that light up when a
complementary strand binds to a particular spot, a person (or a
robot) can read out which sequences on the chip find their
complement in an unknown sample.
DNA chips can gather an incredible variety of data very
quickly. And because chips can be mass-produced, they will
likely be very inexpensive in the near future. That will allow
easy collection of genetic information from many, many
individuals, opening up all kinds of opportunities to help
doctors diagnose and treat their patients.
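The fluorescent readout described above reduces to a complementarity test: a probe lights up when its complementary strand is present in the sample. The toy model below (probe and sample sequences are invented for illustration) captures that logic:

```python
# Watson-Crick pairing rule used to decide whether a probe finds
# its complement in the unknown sample.
COMPLEMENT = str.maketrans("ATCG", "TAGC")

def lit_probes(chip_probes, sample_strands):
    """Return the probes whose complementary strand is present in the
    sample -- a toy model of fluorescent readout on a DNA chip."""
    return [p for p in chip_probes
            if p.translate(COMPLEMENT) in sample_strands]

probes = ["AATT", "GGCA", "TTCG"]          # spots printed on the chip
sample = {"TTAA", "AAGC", "CCCC"}          # strands in the unknown sample
print(lit_probes(probes, sample))          # -> ['AATT', 'TTCG']
```

A real chip runs this test for hundreds of thousands of probes in one hybridization step, which is the source of the technology's speed.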
Expression Analysis
One way DNA chips allow scientists to observe genes working together is called "expression analysis". (Remember that to "express" a gene as a protein, cells first transcribe the gene's DNA sequence into a complementary mRNA copy; then a ribosome translates the mRNA sequence into the string of amino acids that makes up the protein.) Cells constantly switch genes on or off as conditions change. To understand a cell's behavior in response to a stimulus (the presence of a hormone, say, or a toxin, or some environmental signal), it would be handy to have a minute-to-minute reading of which genes are turned on.


DNA chips are just about perfect for tracking this kind of
minute-to-minute change in gene expression.
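In practice, such an expression readout reduces to comparing fluorescence intensities between two conditions. The toy analysis below (gene names, intensities, and the threshold are invented for illustration) flags genes whose expression changed by more than a chosen log-ratio, the standard way chip data are screened:

```python
import math

# Fluorescence intensities before and after a stimulus (toy values).
before = {"geneA": 100.0, "geneB": 95.0, "geneC": 10.0}
after = {"geneA": 105.0, "geneB": 12.0, "geneC": 160.0}

def switched(before, after, threshold=2.0):
    """Genes whose |log2 fold-change| meets the threshold, i.e. genes
    that appear to have been switched on or off by the stimulus."""
    return sorted(g for g in before
                  if abs(math.log2(after[g] / before[g])) >= threshold)

print(switched(before, after))   # -> ['geneB', 'geneC']
```

Here geneB was switched off (roughly eightfold down) and geneC switched on (sixteenfold up), while geneA's small fluctuation falls below the threshold.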
Mapping Our Differences
Pick any two people in the world, and you would find their
DNA is 99.9 percent identical. The remaining 0.1 percent is the genetic basis of all of humanity's differences, from the shape of our faces, to the way some people get cancer, to the fact that some patients respond to a certain drug while others don't.
Scientists are now starting to use DNA chips to map out tiny
one-letter variations in the 3-billion-nucleotide human genome.
These pinpoint differences are called "single nucleotide
polymorphisms," or SNPs. Identifying them will help
researchers understand the basis for human variation.

SWARM INTELLIGENCE
This technique is unique in itself and lies closely related to biology (and hence DNA). A swarm can be defined as a set of (mobile) agents that are liable to communicate directly or indirectly (by acting on their local environment) with each other, and that collectively carry out distributed problem solving. Similarly, our body can be defined as a swarm of cells and tissues which, unlike swarms of bees or ants, stick relatively firmly together. The swarm of cells constituting a human body is a very different kind of swarm from that of the social insects. The body swarm is not built on ten thousand nearly identical units such as a bee society. Rather, it should be seen as a swarm of swarms, i.e., a huge swarm of more or less overlapping swarms of very different kinds. And the minor swarms are again swarm-entities, so that we get a hierarchy of swarms. In layman's terms, swarm intelligence is a technique inspired by the collective intelligence of social animals such as birds, ants, fish, and termites. These social animals require no leader. Their collective behaviours emerge from interactions among individuals, in a process known as self-organization. Each individual may not be intelligent, but together they perform complex collaborative behaviours. Its present uses are to assist the study of human social behaviour by observing other social animals and to solve various optimization problems. We can mention three main techniques at present: models of bird flocking, the ant colony optimization (ACO) algorithm, and the particle swarm optimization (PSO) algorithm. In the particle swarm, there is no central control: no one gives orders. Each particle is a simple agent acting upon local information. Yet, the swarm as a whole is
able to perform tasks whose degree of complexity is well beyond the capabilities of the individual. Kaewkamnerdpong and Bentley proposed a new swarm algorithm, called the Perceptive Particle Swarm Optimization (PPSO) algorithm, which extends the conventional PSO algorithm for applications in the physical world. The PPSO algorithm is designed to handle real-world physical control problems, including programming or controlling agents of nanotechnology, for example nanorobots, or, we can say, DNA computers.
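The PSO scheme just described, each particle acting on local information plus the swarm's best-known point, is short enough to sketch directly. The minimal implementation below minimizes a simple sphere function; the inertia and attraction weights are conventional textbook choices, not values from this paper:

```python
import random

def pso(f, dim=2, n_particles=20, iters=100, seed=0):
    """Minimal particle swarm optimization: no central control; each
    particle is pulled toward its own best and the swarm's best point."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # each particle's best position
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # swarm's best position
    w, c1, c2 = 0.7, 1.5, 1.5                   # inertia, cognitive, social weights
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            v = f(pos[i])
            if v < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], v
                if v < gbest_val:
                    gbest, gbest_val = pos[i][:], v
    return gbest, gbest_val

best, val = pso(lambda p: sum(x * x for x in p))
print(best, val)
```

No particle knows the global landscape, yet the swarm as a whole homes in on the minimum, which is exactly the self-organization the section describes.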
CONCLUSION
We have seen that DNA computers, if ever realized, can provide the benefits of being a cheap, energy-efficient resource. DNA's unique property of parallelism, i.e. solving a problem in a highly parallel manner, yields different combinations of answers at the same time. But given error rates that rise steeply with problem size, it has not yet been possible to implement a practically working DNA computer. On the "classical" front, problem-specific computers may prove to be the first practical use of DNA computation, for several reasons. First, a problem-specific computer will be easier to design and implement, with less need for functional complexity and flexibility. Secondly, DNA computing may prove to be entirely inefficient for a wide range of problems, and directing efforts toward universal models may be diverting energy away from its true calling. Thirdly, the types of hard computational problems that DNA-based computers may be able to solve effectively are of sufficient economic importance that a dedicated processor would be financially reasonable. Despite the identified inefficiencies, it is certainly possible that in some instances a DNA-based computing system may prove to be the best solution. And of course we are talking about DNA here, the genetic code of life itself. It has certainly been the molecule of this century and most likely the next one. Considering all the attention that DNA has garnered, it isn't too hard to imagine that one day we might have the tools and talent to produce a small integrated desktop machine that uses DNA, or a DNA-like biopolymer, as a computing substrate along with a set of designer enzymes. Certainly, though, the notion of including a DNA computer as part of a car's control system is rather laughable today.

FUTURE SCOPE
Since the boom in DNA computing research in the mid-1990s, there has been a significant decrease in the number of technical papers and conferences related to the topic. The reason for this precipitous fall from grace turns out to be that, while DNA computing provides a good theoretical framework within which to design algorithms, the ability to construct a DNA-based computer is limited by a number of implementation-level problems.
The first problem, which has already been alluded to, has to do with the volume complexity that goes along with DNA computing. While other algorithms may be more efficient in terms of volume complexity, the issues of scale are universally problematic for DNA computing. One contributor to this problem is the fact that strand pairing during the annealing process is subject to a whole host of errors.
A related problem with DNA computing is that there is no universal method of data representation. In today's computer systems, for example, the binary representation is universally agreed upon. DNA computing, however, has no such standard. This is primarily because there is no DNA-based operation to extract a strand if it has a particular value at a particular position. Extraction in DNA computing is performed solely by value and without respect to the value's position within the strand, meaning that position information must be built into the sequence itself. This inclusion of positional information only exacerbates the problems of volume complexity outlined earlier.
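The workaround mentioned above, building positional information into the sequence itself, can be illustrated with a toy encoding. All codewords below are invented, and a real design would also have to avoid cross-hybridization between them; the point is only that each bit's position tag must be spliced into the strand, lengthening it:

```python
# Hypothetical codewords: each bit is written as a position tag
# followed by a value codeword, so extraction "by value" can still
# recover where the bit sat in the strand.
POS = {0: "ACGT", 1: "AGGT", 2: "ATGT"}   # position tags
VAL = {"0": "CCAA", "1": "GGTT"}          # value codewords
BLOCK = 8                                  # tag + value = 8 bases per bit

def encode(bits):
    return "".join(POS[i] + VAL[b] for i, b in enumerate(bits))

def extract(strand, value):
    """Positions whose value codeword matches -- mimicking extraction
    by value with positional info built into the sequence."""
    found = []
    for i, tag in POS.items():
        block = strand[BLOCK * i: BLOCK * (i + 1)]
        if block.startswith(tag) and block.endswith(VAL[value]):
            found.append(i)
    return found

strand = encode("101")
print(strand, extract(strand, "1"))   # -> ACGTGGTTAGGTCCAAATGTGGTT [0, 2]
```

Note the cost: three bits became twenty-four bases, which is precisely the volume-complexity penalty the text describes.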
Finally, one of the biggest problems facing the field of DNA
computing is that no efficient implementation has been
produced for testing, verification, and general experimentation.
While Adleman's initial experiment was performed in a lab,
many of the subsequent algorithms in DNA computing have
never been implemented or tested. The reason for this is that
the resources required to execute these algorithms are both
expensive and hard to find.
Despite all of the difficulties outlined above, there are still a
number of researchers working on topics related to DNA
computing. While the number is fewer than in past years, much of their research seems to be motivated by a ground-up approach, focused on answering basic questions about DNA computing. Some more recent work has attempted to address the issues of data representation, and other work the ability to emulate today's circuit-based computing in a DNA-based system.
It remains to be seen whether or not DNA computing will
become a viable method of problem solving in the future.
REFERENCES
[1] Ezziane Z 2006 DNA computing: applications and challenges Nanotechnology 17 R27-R39 (Dubai University College, College of Information Technology)
[2] Sakakibara Y and Suyama A 2000 Intelligent DNA chips: logical operation of gene expression profiles on DNA computers Genome Informatics 11 33-42
[3] Bonabeau E, Dorigo M and Theraulaz G 1999 Swarm Intelligence: From Natural to Artificial Systems (Oxford: Oxford University Press)
