DNA Computing
ABSTRACT
Think of DNA as software and enzymes as hardware. Put them together in a test tube. The way in which these molecules undergo chemical reactions with each other allows simple operations to be performed as a byproduct of the reactions. Scientists tell the devices what to do by controlling the composition of the DNA software molecules. It is a completely different approach from pushing electrons around a dry circuit in a conventional computer. DNA computing is a discipline that aims at harnessing individual molecules at the nanoscopic level for computational purposes. Computation with DNA molecules holds an inherent interest for researchers in computer science and biology. With its vast parallelism and high-density storage, DNA computing approaches have been employed to solve many combinatorial problems. DNA has been explored as an excellent material and a fundamental building block for building large-scale nanostructures, constructing individual nanomechanical devices, and performing computations. Molecular-scale autonomous programmable computers have been demonstrated that allow input and output information to be in molecular form. This section presents recent advances in DNA computing and discusses major achievements and challenges for researchers in the foreseeable future.
KEYWORDS
DNA (deoxyribonucleic acid), biomolecular computing, swarm intelligence
INTRODUCTION
DNA, the basis of our existence, has now entered our day-to-day problem-solving lives as well, through DNA computing. Computers have affected our lives greatly, whether in solving complex problems or in running the machines that make something as simple as a safety pin. The day is not far off when DNA computers will come into sight and carry out operations in much the same way as they are carried out in our bodies. Biomolecular computing, where computations are performed by biomolecules, is challenging traditional approaches to computation both theoretically and technologically. The idea that molecular systems can perform computations is not new and was indeed more natural in the pre-transistor age. The essential role of information processing in evolution, and the ability to address these issues at the molecular level on laboratory timescales, was first addressed by Adleman's key experiment (Adleman 1994), which demonstrated that the tools of laboratory molecular biology can be used to perform computation. By way of comparison with electronic data storage, DNA replication makes roughly one error for every 10^9 copied bases, whereas hard drives have one error for every 10^13 bits after Reed-Solomon correction.
DNA COMPUTING
DNA computing is a novel and fascinating development at the
interface of computer science and molecular biology. It is
advantageous over electronic computers in power use, space
use, and efficiency, due to its ability to compute in a highly
parallel fashion. It also has a theoretical role in cryptography,
where in particular it allows unbreakable one-time pads to be
efficiently constructed and used. It has emerged in recent years,
not simply as an exciting technology for information
processing, but also as a catalyst for knowledge transfer
between information processing, nanotechnology, and biology.
This area of research has the potential to change our
understanding of the theory and practice of computing.
BIOMOLECULAR COMPUTING
Biomolecular computers are molecular-scale, programmable, autonomous computing devices made of biological molecules. They hold the promise of processing inputs and outputs directly in molecular form, giving them an inherent advantage in biological environments.
It was Leonard Adleman, professor of computer science and
molecular biology at the University of Southern California,
USA, who pioneered the field when he built the first DNA
based computer. Intrigued by the molecule's immense capacity
to store information in a very small space, he set out to solve a
classic puzzle in mathematics, the so-called Hamiltonian Path problem, a close relative of the better-known Traveling Salesman problem. This seemingly simple puzzle (a salesman must visit a number of cities that are interconnected by a limited series of roads, without passing through any city more than once) is actually extremely hard, and even the most advanced supercomputers would take years to calculate the optimal route for 50 cities. Adleman solved the problem for seven cities
within a second, using DNA molecules in a standard reaction
tube. He represented each of the seven cities as separate,
single-stranded DNA molecules, 20 nucleotides long, and all
possible paths between cities as DNA molecules composed of
the last ten nucleotides of the departure city and the first ten
nucleotides of the arrival city. Mixing the DNA strands with
DNA ligase and adenosine triphosphate (ATP) resulted in the
generation of all possible random paths through the cities.
However, the majority of these paths were not applicable to the
situation: they were either too long or too short, or they did not start or finish in the right city. Adleman then filtered out all the paths that did not start and end with the correct molecules, as well as those that did not have the correct length and composition.
Any remaining DNA molecules represented a solution to the
problem.
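To make this generate-and-filter strategy concrete, the sketch below simulates it in ordinary software. It is only an illustration under stated assumptions: the seven city labels and the road map are made up (Adleman's actual graph was different), random walks stand in for ligation, and the filtering steps are string checks rather than gel electrophoresis and affinity purification.

    import random

    random.seed(0)
    BASES = "ACGT"

    # Hypothetical 7-city instance; in Adleman's experiment each city was
    # represented by a random 20-nucleotide single strand.
    city_names = ["A", "B", "C", "D", "E", "F", "G"]
    cities = {c: "".join(random.choice(BASES) for _ in range(20)) for c in city_names}

    # Hypothetical directed road map (Adleman's actual graph was different).
    roads = [("A", "B"), ("A", "C"), ("B", "C"), ("B", "D"), ("C", "D"),
             ("C", "E"), ("D", "E"), ("D", "F"), ("E", "F"), ("E", "G"),
             ("F", "G"), ("B", "G")]

    # Each road strand joins the last ten nucleotides of the departure city
    # to the first ten nucleotides of the arrival city, as described above.
    road_strands = {(u, v): cities[u][10:] + cities[v][:10] for (u, v) in roads}
    print("Road A->B strand:", road_strands[("A", "B")])

    def random_ligation(max_steps=10):
        """Crudely mimic random ligation: follow roads at random from a random start."""
        here = random.choice(city_names)
        path = [here]
        for _ in range(max_steps):
            options = [v for (u, v) in roads if u == here]
            if not options:
                break
            here = random.choice(options)
            path.append(here)
        return path

    def passes_filters(path, start="A", end="G"):
        """Filtering steps: correct endpoints, correct length (140 nt), each city once."""
        dna = "".join(cities[c] for c in path)
        return (dna.startswith(cities[start]) and dna.endswith(cities[end])
                and len(dna) == 20 * len(city_names)
                and len(set(path)) == len(city_names))

    # Generate a large pool of random paths and keep only those that survive.
    pool = [random_ligation() for _ in range(200_000)]
    solutions = {tuple(p) for p in pool if passes_filters(p)}
    print("Surviving Hamiltonian paths:", solutions)

With this particular made-up map, the only path that survives every filter is A-B-C-D-E-F-G; in Adleman's wet-lab version, the corresponding molecules were picked out by gel electrophoresis and sequence-specific probes.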
The computation in Adleman's experiment chugged along at 10^14 operations per second, a rate of 100 teraflops, or 100 trillion floating-point operations per second; the world's fastest supercomputer, Earth Simulator, owned by the NEC Corporation in Japan, runs at just 35.8 teraflops. Clearly, computing with DNA has massive advantages over silicon-based machines.
CLASSES OF DNA COMPUTING
There are three apparent classes of DNA computing: (1) intra-molecular, (2) inter-molecular, and (3) supra-molecular. The Japanese project led by Hagiya focuses on intra-molecular DNA computing, constructing programmable state machines in single DNA molecules, which operate by means of intramolecular conformational transitions. Inter-molecular DNA computing, of which Adleman's experiment is an example, focuses on the hybridization between different DNA molecules as a basic step of computation. Finally, supra-molecular DNA computing, as pioneered by Winfree, harnesses the process of self-assembly of rigid DNA molecules with different sequences to perform computations. Supra-molecular assembly is the creation of molecular assemblies that are beyond the scale of a single molecule. The self-assembly of small molecular building blocks programmed to form larger, nanometre-sized elements is an important goal of molecular nanotechnology.
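As a purely illustrative, in-silico analogue of such algorithmic self-assembly, the toy sketch below grows a pattern row by row, with each new "tile" determined by an XOR of its two neighbours in the row above, the same local rule used in the well-known Sierpinski-triangle tile assemblies; the actual chemistry of sticky-end matching is abstracted away entirely.

    def grow_assembly(rows=16):
        """Grow a triangular 'crystal' row by row; each new tile is the XOR of
        the two tiles diagonally above it (0-padding at the edges)."""
        assembly = [[1]]                            # seed row: a single '1' tile
        for _ in range(rows - 1):
            prev = [0] + assembly[-1] + [0]
            assembly.append([prev[i] ^ prev[i + 1] for i in range(len(prev) - 1)])
        return assembly

    for row in grow_assembly():
        print("".join("#" if tile else "." for tile in row).center(40))

Running it prints a Sierpinski triangle, a simple demonstration that purely local assembly rules can compute a global pattern.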
DNA chips are well suited to tracking minute-to-minute changes in gene expression.
Mapping Our Differences
Pick any two people in the world and you will find that their DNA is 99.9 percent identical. The remaining 0.1 percent is the genetic basis of all of humanity's differences, from the shape of our faces to why some people get cancer and why some patients respond to a certain drug while others don't.
Scientists are now starting to use DNA chips to map out tiny
one-letter variations in the 3-billion-nucleotide human genome.
These pinpoint differences are called "single nucleotide
polymorphisms," or SNPs. Identifying them will help
researchers understand the basis for human variation.
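At the sequence level, the idea is simply a position where two aligned genomes differ by a single letter. The fragment below illustrates that definition on two short, made-up sequences; real SNP genotyping is done by hybridization on DNA chips, not by direct string comparison.

    # Two short, made-up "genomes" of the same length, already aligned.
    person_a = "ATGCCGTAACGTTAGCGTAC"
    person_b = "ATGCCGTAACGATAGCGTAC"

    # A SNP, at this level of abstraction, is any position where the two differ.
    snps = [(i, a, b) for i, (a, b) in enumerate(zip(person_a, person_b)) if a != b]
    print("SNPs (position, allele A, allele B):", snps)
    print(f"Sequences are {1 - len(snps) / len(person_a):.1%} identical")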
SWARM INTELLIGENCE
This technique is unique in itself and is closely related to biology (and hence to DNA). A swarm can be defined as a set of (mobile) agents that communicate directly or indirectly (by acting on their local environment) with each other, and that collectively carry out distributed problem solving. Similarly, our body can be defined as a swarm of cells
and tissues which, unlike the swarms of bees or ants, stick
relatively firmly together. Here the swarm of cells constituting
a human body is a very different kind of swarm from that of the
social insects. The body swarm is not built from ten thousand nearly identical units, as a bee society is. Rather, it should be seen as a swarm of swarms, i.e., a huge swarm of more or less overlapping swarms of very different kinds, and the minor swarms are again swarm-entities, so that we get a hierarchy of swarms. In layman's terms, it is a technique inspired by the collective intelligence of social animals such as birds, ants, fish and termites. These social animals require no leader. Their collective behaviours emerge from interactions among individuals, in a process known as self-organization. Each individual may not be intelligent, but together they perform complex collaborative behaviours. Its present uses are to assist in the study of human social behaviour by observing other social animals and to solve various optimization problems.
Three main techniques can be mentioned at present: models of bird flocking, the ant colony optimization (ACO) algorithm, and the particle swarm optimization (PSO) algorithm. In the particle swarm there is no central controller: each particle adjusts its velocity using its own best-known position and the best position found so far by the swarm, so that good solutions emerge from purely local interactions.
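As a minimal sketch of the PSO idea (not any particular published implementation), the code below moves a handful of particles through a search space, each one pulled toward its own best position and toward the swarm's best position; the objective function and all parameter values are arbitrary choices for illustration.

    import random

    random.seed(1)

    def sphere(x):
        """Toy objective to minimise; any function of a position vector would do."""
        return sum(xi * xi for xi in x)

    def pso(f, dim=2, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
        """Minimal particle swarm: velocities blend inertia, a pull toward each
        particle's personal best, and a pull toward the swarm's global best."""
        pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
        vel = [[0.0] * dim for _ in range(n_particles)]
        pbest = [p[:] for p in pos]                 # personal best positions
        gbest = min(pbest, key=f)[:]                # best position found by anyone
        for _ in range(iters):
            for i in range(n_particles):
                for d in range(dim):
                    r1, r2 = random.random(), random.random()
                    vel[i][d] = (w * vel[i][d]
                                 + c1 * r1 * (pbest[i][d] - pos[i][d])
                                 + c2 * r2 * (gbest[d] - pos[i][d]))
                    pos[i][d] += vel[i][d]
                if f(pos[i]) < f(pbest[i]):
                    pbest[i] = pos[i][:]
                    if f(pbest[i]) < f(gbest):
                        gbest = pbest[i][:]
        return gbest, f(gbest)

    print(pso(sphere))   # converges near the origin, the minimum of the toy objective

Despite having no central controller, the swarm reliably converges close to the minimum of the toy objective.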
FUTURE SCOPE
Since the boom in DNA computing research in the mid-1990s, there has been a significant decrease in the number of technical papers and conferences related to the topic. The reason for this precipitous fall from grace is that, while DNA computing provides a good theoretical framework within which to design algorithms, the ability to construct a DNA-based computer is limited by a number of implementation-level problems.
The first problem, which has already been alluded to, has to do with the volume complexity that goes along with DNA-based brute-force search: the amount of DNA required grows exponentially with the size of the problem instance.
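A rough back-of-the-envelope sketch of that blow-up, assuming one strand per candidate ordering of the cities, 20 nucleotides per city, and an average nucleotide mass of about 330 daltons:

    from math import factorial

    AVG_NT_MASS_G = 330 / 6.022e23        # ~330 daltons per nucleotide, in grams

    def brute_force_dna_mass(n_cities, nt_per_city=20):
        """Rough mass of DNA needed if every ordering of the cities is represented
        by at least one strand, as in an Adleman-style brute-force search."""
        candidate_strands = factorial(n_cities)          # all orderings of the cities
        mass_per_strand = n_cities * nt_per_city * AVG_NT_MASS_G
        return candidate_strands * mass_per_strand

    for n in (7, 20, 30):
        print(f"{n:2d} cities: ~{brute_force_dna_mass(n):.2e} g of DNA")

Under these assumptions, seven cities need a negligible amount of DNA, twenty cities need on the order of a gram, and thirty cities already require tens of billions of kilograms, which is why brute-force DNA search does not scale.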