1. Introduction
Computer chip manufacturers are furiously racing to make the next microprocessor that will topple speed records. Sooner or later, though, this competition is bound to hit a wall. Microprocessors made of silicon will eventually reach their limits of speed and miniaturization, and chip makers will need a new material to produce faster computing speeds. Scientists believe they have found the material they need to build the next generation of microprocessors: millions of natural supercomputers exist inside living organisms, including your body. DNA (deoxyribonucleic acid) molecules, the material our genes are made of, have the potential to perform calculations many times faster than the world's most powerful human-built computers. DNA might one day be integrated into a computer chip to create a so-called biochip that will push computers even faster. DNA molecules have already been harnessed to solve complex mathematical problems. Despite their respective complexities, biological and mathematical operations have some similarities: the very complex structure of a living being is the result of applying simple operations to initial information encoded in a DNA sequence (genes).
For the same reasons that DNA was presumably selected as the genetic material of living organisms -- its stability and predictability in reactions -- DNA strands can also be used to encode information for mathematical systems.
While still in their infancy, DNA computers will be capable of storing billions of times more data than your personal computer. Scientists are using genetic material to create nano-computers that might take the place of silicon-based computers in the next decade. Israeli scientists have built a DNA computer so tiny that a trillion of them could fit in a test tube and perform a billion operations per second with 99.8 percent accuracy. Instead of using figures and formulas to solve a problem, the microscopic computer's input, output and software are made up of DNA molecules, which store and process encoded information in living organisms. Scientists see such DNA computers as future competitors for their more conventional cousins, because miniaturization is reaching its limits and DNA has the potential to be much faster than conventional computers. "We have built a nanoscale computer made of biomolecules that is so small you cannot run them one at a time. When a trillion computers run together they are capable of performing a billion operations," said Professor Ehud Shapiro of the Weizmann Institute in Israel. It is the first programmable autonomous computing machine in which the input, output, software and hardware are all made of biomolecules.
Although too simple to have any immediate applications, it could form the basis of a future DNA computer that could potentially operate within human cells, acting as a monitoring device to detect potentially disease-causing changes and synthesizing drugs to fix them. The model could also form the basis of computers that screen DNA libraries in parallel without sequencing each molecule, which could speed up the acquisition of knowledge about DNA.
2. History of Invention
Enormous Potential
DNA can hold more information in a cubic centimeter than a trillion CDs. The double helix molecule that contains human genes stores data on four chemical bases -- known by the letters A, T, C and G -- giving it massive memory capability that scientists are only just beginning to tap into. "The living cell contains incredible molecular machines that manipulate information-encoding molecules such as DNA and RNA (its chemical cousin) in ways that are fundamentally very similar to computation," said Shapiro, the head of the research team that developed the DNA computer. "Since we don't know how to effectively modify these machines or create new ones just yet, the trick is to find naturally existing machines that, when combined, can be steered to actually compute," he added. Writing in the science journal Nature, Shapiro and his team describe their DNA computer, which is a molecular model of one of the simplest computing machines -- the automaton, which can answer certain yes-or-no questions. Data is represented by pairs of molecules on a strand of DNA, and two naturally occurring enzymes act as the hardware that reads, copies and manipulates the code.
When it is all mixed together in a test tube, the software and hardware operate on the input molecule to create the output. The DNA computer also has very low energy consumption, so if it were put inside a cell it would not require much energy to work. DNA computing is a very young branch of science that started less than a decade ago, when Leonard Adleman of the University of Southern California pioneered the field by using DNA in a test tube to solve a mathematical problem. Scientists around the globe are now trying to marry computer technology and biology by using nature's own design to process information.
For biological systems, DNA is a master problem solver, storing and manipulating prodigious amounts of information. Recently, researchers have been investigating whether the problem-solving power of DNA can be used to solve nonbiological problems, specifically, problems from computer science that are out of the reach of traditional computers. Would a DNA computer actually work? To answer this question, one must turn to mathematics. In a groundbreaking 1994 paper, Leonard Adleman described a laboratory experiment involving DNA and a problem known as the Directed Hamiltonian Path Problem. Because of their massive complexity, this problem and others like it have eluded solution by conventional computers; no known algorithm can solve them in a reasonable amount of time. DNA has the potential to solve these kinds of problems because of its capacity to store a great deal of information compactly and to speedily perform
operations on that information. The idea is that DNA could be used to perform massively parallel searches much more quickly than is possible on traditional computers. In his experiment, Adleman took the first step toward making this idea a reality: he got test tubes of DNA to solve a particular instance of the Directed Hamiltonian Path Problem. Adleman's device did not solve the problem in all its generality; what it did do was provide the first physical implementation of the idea of using DNA for computation. But Adleman's device can carry out one and only one set of calculations, leaving open the question of whether an all-purpose DNA computer is a practical possibility. Before venture capital firms rush to develop the first prototype, they will want hard evidence that a DNA computer would actually work. And this raises two questions that are fundamentally mathematical: can any problem be simulated by a DNA computer, and can an all-purpose DNA computer be constructed theoretically?
Information Storage in DNA
For traditional computers, these two questions have been answered affirmatively. And this is why traditional computers work: what they do can be represented mathematically and proven to produce the right answers. Fuses may blow, screens may freeze up, and the Pentium chip might be faulty, but the underlying mathematics is flawless. What the DNA computer needed was similar mathematical bedrock. Happily for the venture capitalists, mathematicians have provided the proof. These results lift the DNA computer out of the realm of science fiction and put it on a firm mathematical foundation.
3. A Fledgling Technology
DNA computers can't be found at your local electronics store yet. The technology is still in development, and didn't even exist as a concept a decade ago. In 1994, Leonard Adleman introduced the idea of using DNA to solve complex mathematical problems. Adleman, a computer scientist at the University of Southern California, came to the conclusion that DNA had computational potential after reading the book "Molecular Biology of the Gene," written by James Watson, who co-discovered the structure of DNA in 1953. In fact, DNA is very similar to a computer hard drive in how it stores permanent information about genes. The success of the Adleman DNA computer proved that DNA can be used to solve complex mathematical problems. However, this early DNA computer is far from challenging silicon-based computers in terms of speed. The Adleman DNA computer created a group of possible answers very quickly, but it took days for Adleman to narrow down the possibilities. Another drawback of his DNA computer is that it requires human assistance. The goal of the DNA computing field is to create a device that can work independently of human involvement. Three years after Adleman's experiment, researchers at the University of Rochester developed logic gates made of DNA. Logic gates are a vital part of how your computer carries out the functions that you command it to do. These gates convert binary code moving through the computer into a series of signals that the computer uses to perform operations. Currently, logic gates interpret input signals from silicon transistors, and convert those
signals into an output signal that allows the computer to perform complex functions. The Rochester team's DNA logic gates are the first step toward creating a computer that has a structure similar to that of an electronic PC. Instead of using electrical signals to perform logical operations, these DNA logic gates rely on DNA code. They detect fragments of genetic material as input, splice together these fragments and form a single output. For instance, a genetic gate called the "And gate" links two DNA inputs by chemically binding them so they're locked in an end-to-end structure, similar to the way two Legos might be fastened by a third Lego between them. The researchers believe that these logic gates might be combined with DNA microchips to create a breakthrough in DNA computing.
DNA computer components -- logic gates and biochips -- will take years to develop into a practical, workable DNA computer. If such a computer is ever built, scientists say that it will be more compact, accurate and efficient than conventional computers. In the next section, we'll look at how DNA computers could surpass their silicon-based predecessors, and at what tasks these computers would perform.
4. The Experiment
The Adleman experiment
Adleman is often called the inventor of DNA computers. His article in a 1994 issue of the journal Science outlined how to use DNA to solve a well-known mathematical problem, called the directed Hamiltonian Path problem, a close relative of the "traveling salesman" problem. The goal of the problem is to find a route between a number of cities that passes through each city exactly once. As you add more cities, the problem becomes more difficult. Adleman chose to find such a route among seven cities.
You could probably draw this problem out on paper and come to a solution faster than Adleman did using his DNA test-tube computer. Here are the steps taken in the Adleman DNA computer experiment:
1. Strands of DNA represent the seven cities. In genes, genetic coding is represented by the letters A, T, C and G. Some sequence of these four letters represents each city and each possible flight path.
2. These molecules are then mixed in a test tube, and some of the DNA strands stick together. A chain of these strands represents a possible answer.
3. Within a few seconds, all of the possible combinations of DNA strands, representing possible answers, are created in the test tube.
4. Adleman eliminates the wrong molecules through chemical reactions, which leaves behind only the flight paths that connect all seven cities.
There is no better way to understand how something works than by going through an example step by step. So let's solve our own directed Hamiltonian Path problem, using the DNA methods demonstrated by Adleman. The concepts are the same, but the example has been simplified to make it easier to follow and present. Suppose that I live in L.A. and need to visit four cities: Dallas, Chicago, Miami, and NY, with NY being my final destination. The airline I'm taking has a specific set of connecting flights that restricts which routes I can take (e.g. there is a flight from L.A. to Chicago, but no flight from Miami to Chicago). What should my itinerary be if I want to visit each city only once?
It should take you only a moment to see that there is only one route. Starting from L.A. you need to fly to Chicago, Dallas, Miami and then to N.Y. Any other choice of cities will force you to miss a destination, visit a city twice, or not make it to N.Y. For this example you obviously don't need the help of a computer to find a solution. For six, seven, or even eight cities, the problem is still manageable. However, as the number of cities increases, the problem quickly gets out of hand. Assuming a random distribution of connecting routes, the number of itineraries you need to check increases exponentially. Pretty soon you will run out of pen and paper listing all the possible routes, and it becomes a problem for a computer... or perhaps DNA. The method Adleman used to solve this problem is basically a brute-force "shotgun" approach: he first generated all the possible itineraries and then selected the correct one. This is the advantage of DNA: it's small, and there are combinatorial techniques that can quickly generate many different data strings. Since the enzymes work on many DNA molecules at once, the selection process is massively parallel. When the number of cities gets to around one hundred, it could take hundreds of years of conventional computer time to solve the problem, even with the most advanced parallel processing available.
Adleman developed a method of manipulating DNA which, in effect, conducts trillions of computations in parallel. Essentially he coded each city and each possible flight as a short sequence of bases; for example, one city might be coded as GCAG and another as TCGG.
Specifically, the method based on Adleman's experiment would be as follows:
1. Generate all possible routes.
2. Select itineraries that start with the proper city and end with the final city.
3. Select itineraries with the correct number of cities.
4. Select itineraries that contain each city only once.
Part I: Generate all possible routes
Strategy: Encode city names in short DNA sequences. Encode itineraries by connecting the city sequences for which routes exist.
DNA can simply be treated as a string of data. For example, each city can be represented by a "word" of six bases:
1. Los Angeles: GCTACG
2. Chicago: CTAGTA
3. Dallas: TCGTAC
4. Miami: CTACGG
5. New York: ATGCCG
The entire itinerary can be encoded by simply stringing together these DNA sequences that represent specific cities. For example, the route L.A. -> Chicago -> Dallas -> Miami -> New York would simply be GCTACGCTAGTATCGTACCTACGGATGCCG, or equivalently it could be represented in double-stranded form with its complement sequence. So how do we generate this? Synthesizing short single-stranded DNA is now a routine process, so encoding the city names is straightforward. The molecules can be made by a machine called a DNA synthesizer or even custom-ordered from a third party. Itineraries can then be produced from the city encodings by linking them together in proper order. To accomplish this you can take advantage of the fact that DNA hybridizes with its complementary sequence. For example, you can encode the routes between cities by encoding the complement of the second half (last three letters) of the departure city and the first half (first three letters) of the arrival city. The route between Miami (CTACGG) and NY (ATGCCG) can be made by taking the second half of the coding for Miami (CGG) and the first half of the coding for NY (ATG). This gives CGGATG. By taking the complement of this you get GCCTAC, which not only uniquely represents the route from Miami to NY, but will connect the DNA representing Miami and NY by hybridizing itself to the second half of the code representing Miami (...CGG) and the first half of the code representing NY (ATG...). Random itineraries can be made by mixing city encodings with the route encodings. Finally, the DNA strands can be connected together by an enzyme called ligase. What we are left with are strands of DNA representing itineraries with a random number of cities and a random set of routes. We can be confident that we have all possible combinations, including the correct one, by using an excess of DNA encodings, say 10^13 copies of each city and each route between cities. Remember, DNA is a highly compact data format, so the numbers are on our side.
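Because each of these manipulations has a plain computational meaning, Part I can be mirrored in ordinary code. Below is a minimal Python sketch, not anything Adleman ran: the city words are the ones listed above, splint builds the route strands exactly as just described, and the flight map is a hypothetical one chosen to be consistent with the example (there is a flight from L.A. to Chicago, but none from Miami to Chicago). Exhaustive enumeration stands in for random hybridization and ligation.

    # City words from the text; COMP applies Watson-Crick pairing (A<->T, C<->G).
    COMP = str.maketrans("ATCG", "TAGC")
    CITY = {"LA": "GCTACG", "Chicago": "CTAGTA", "Dallas": "TCGTAC",
            "Miami": "CTACGG", "NY": "ATGCCG"}

    # Hypothetical flight map, consistent with the example's unique solution.
    FLIGHTS = [("LA", "Chicago"), ("LA", "Dallas"), ("LA", "Miami"),
               ("Chicago", "Dallas"), ("Chicago", "Miami"),
               ("Dallas", "Miami"), ("Miami", "NY")]

    def splint(dep, arr):
        # Route strand: complement of (last 3 bases of dep + first 3 of arr).
        return (CITY[dep][3:] + CITY[arr][:3]).translate(COMP)

    print(splint("Miami", "NY"))  # GCCTAC, as derived above

    def all_itineraries(max_cities=5):
        # Enumeration stands in for random hybridization + ligation.
        paths, frontier = [], [[c] for c in CITY]
        for _ in range(max_cities):
            paths += frontier
            frontier = [p + [arr] for p in frontier
                        for dep, arr in FLIGHTS if p[-1] == dep]
        return ["".join(CITY[c] for c in p) for p in paths]

With 10^13 copies of each strand, the chemistry produces essentially every such string at once; the code just makes the candidate pool explicit.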
Part II: Select itineraries that start and end with the correct cities
Strategy: Selectively copy and amplify only the sections of DNA that start with L.A. and end with NY, using the Polymerase Chain Reaction.
After Part I, we now have a test tube full of various lengths of DNA that encode possible routes between cities. What we want are routes that start with L.A. and end with NY. To accomplish this we can use a technique called the Polymerase Chain Reaction (PCR), which allows you to produce many copies of a specific sequence of DNA. PCR is an iterative process that cycles through a series of copying events using an enzyme called polymerase. Polymerase will copy a section of single-stranded DNA starting at the position of a primer, a short piece of DNA complementary to one end of the section of DNA that you're interested in. By selecting primers that flank the section of DNA you want to amplify, the polymerase preferentially amplifies the DNA between these primers, doubling the amount of DNA containing this sequence. After many iterations of PCR, the DNA you're working on is amplified exponentially. So to selectively amplify the itineraries that start and stop with our cities of interest, we use primers that are complementary to L.A. and NY. What we end up with after PCR is a test tube full of double-stranded DNA of various lengths, encoding itineraries that start with L.A. and end with NY.
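In the simulated version, PCR's selective amplification reduces to keeping only the strands whose first word is L.A. and whose last word is NY; a sketch, reusing the city words above:

    LA, NY = "GCTACG", "ATGCCG"  # city words for L.A. and NY

    def pcr_select(strands):
        # Keep only itineraries that begin with L.A. and end with NY.
        return [s for s in strands if s.startswith(LA) and s.endswith(NY)]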
Part III: Select itineraries that contain the correct number of cities
Strategy: Sort the DNA by length and select the DNA whose length corresponds to 5 cities.
Our test tube is now filled with DNA-encoded itineraries that start with L.A. and end with NY, where the number of cities in between varies. We now want to select those itineraries that are five cities long. To accomplish this we can use a technique called gel electrophoresis, which is a common procedure used to resolve the size of DNA. The basic principle behind gel electrophoresis is to force DNA through a gel matrix using an electric field. DNA is a negatively charged molecule under most conditions, so if placed in an electric field it will be attracted to the positive potential. However, since the charge density of DNA is constant (charge per length), long pieces of DNA move as fast as short pieces when suspended in a fluid. This is why you use a gel matrix: the gel is made up of a polymer that forms a meshwork of linked strands, and the DNA is forced to thread its way through the tiny spaces between these strands, which slows down the DNA at different rates depending on its length. What we typically end up with after running a gel is a series of DNA bands, with each band corresponding to a certain length. We can then simply cut out the band of interest to isolate DNA of a specific length. Since we know that each city is encoded with 6 base pairs of DNA, knowing the length of the itinerary gives us the number of cities. In this case we would isolate the DNA that was 30 base pairs long (5 cities times 6 base pairs).
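Gel electrophoresis likewise reduces, in simulation, to a filter on strand length (six bases per city, thirty for a five-city itinerary):

    def gel_select(strands, n_cities=5, word_len=6):
        # Keep only strands whose length corresponds to n_cities cities.
        return [s for s in strands if len(s) == n_cities * word_len]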
Part IV: Select itineraries that have a complete set of cities
Strategy: Successively filter the DNA molecules by city, one city at a time. Since the DNA we start with contains five cities, we will be left with strands that encode each city once.
DNA containing a specific sequence can be purified from a sample of mixed DNA by a technique called affinity purification. This is accomplished by attaching the complement of the sequence in question to a substrate like a magnetic bead. The beads are then mixed with the DNA, and DNA which contains the sequence you're after hybridizes with the complement sequence on the beads. These beads can then be retrieved and the DNA isolated. So we now affinity purify five times, using a different city complement for each run. For example, for the first run we use L.A.'-beads (where the ' indicates the complement strand) to fish out DNA sequences which contain the encoding for L.A. (which should be all the DNA because of step 3); for the next run we use Dallas'-beads, and then Chicago'-beads, Miami'-beads, and finally NY'-beads. The order isn't important. If an itinerary is missing a city, then it will not be "fished out" during one of the runs and will be removed from the candidate pool. What we are left with are the itineraries that start in L.A., visit each city once, and end in NY. This is exactly what we are looking for. If the answer exists, we will retrieve it at this step.
Reading out the answer
One possible way to find the result would be to simply sequence the DNA strands. However, since we already have the sequence of the city encodings, we can use an alternate method called graduated PCR. Here we do a series of PCR amplifications using the primer corresponding to L.A. together with a different primer for each city in succession. By measuring the various lengths of DNA for each PCR product, we can piece together the final sequence of cities in our itinerary. For example, we know that the DNA itinerary starts with L.A. and is 30 base pairs long, so if the PCR product for the L.A. and Dallas primers is 18 base pairs long, we know Dallas is the third city in the itinerary (18 divided by 6). Finally, if we were careful in our DNA manipulations, the only DNA left in our test tube should be the itinerary encoding L.A., Chicago, Dallas, Miami, and NY. So if the succession of primers used is L.A. & Chicago, L.A. & Dallas, L.A. & Miami, and L.A. & NY, then we would get PCR products with lengths 12, 18, 24, and 30 base pairs.
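The five bead runs and the graduated-PCR readout can be sketched the same way. In the simulation below, affinity_select keeps only strands containing every city word, and read_out recovers the city order from the word positions, just as the PCR product lengths do; the city words are the ones defined earlier.

    CITY = {"LA": "GCTACG", "Chicago": "CTAGTA", "Dallas": "TCGTAC",
            "Miami": "CTACGG", "NY": "ATGCCG"}

    def words(strand):
        # Split a strand into its 6-base city words.
        return [strand[i:i + 6] for i in range(0, len(strand), 6)]

    def affinity_select(strands):
        # One bead run per city; the order of the runs is irrelevant.
        return [s for s in strands if set(words(s)) == set(CITY.values())]

    def read_out(strand):
        # Graduated PCR: a product of length 6*k places a city at position k.
        city_of = {code: name for name, code in CITY.items()}
        return [city_of[w] for w in words(strand)]

    print(read_out("GCTACGCTAGTATCGTACCTACGGATGCCG"))
    # ['LA', 'Chicago', 'Dallas', 'Miami', 'NY']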
The incredible thing is that once the DNA sequences had been created, Adleman simply "just added water" to initiate the "computation". The DNA strands then began their highly efficient process of creating new sequences based on the input sequences. If an "answer" to the problem for a given set of inputs existed, then it should be amongst these trillions of sequences. The next (difficult) step was to isolate the "answer" sequences. To do this Adleman used a range of DNA tools. For example, one technique can test for the correct start and end sequences, indicating that the strand has a solution for the start and end cities. Another step involved selecting only those strands which have the correct length, based on the total number of cities in the problem (remembering that each city is visited once). Finally, another technique was used to determine whether the sequence for each city was included in the strand. Any strands left after these processes encoded a solution.
His attempt at solving a seven-city, 14-flight map took seven days of lab work. This particular problem can be solved manually in a few minutes, but the key point about Adleman's work is that it will work on a much larger scale, where manual or conventional computing techniques become overwhelmed. "The DNA computer provides enormous parallelism... in one fiftieth of a teaspoon of solution approximately 10 to the power 14 DNA 'flight numbers' were simultaneously concatenated in about one second".
Caveats
Adleman's experiment solved a seven-city problem, but there are two major shortcomings preventing a large scaling up of his computation. The complexity of the traveling salesman problem simply doesn't disappear when applying a different method of solution -- it still increases exponentially. For Adleman's method, what scales exponentially is not the computing time, but rather the amount of DNA. Unfortunately this places some hard restrictions on the number of cities that can be solved; after the Adleman article was published, more than a few people pointed out that using his method to solve a 200-city Hamiltonian Path problem would take an amount of DNA that weighed more than the earth. Another factor that places limits on his method is the error rate of each operation. Since these operations are not deterministic but stochastically driven (we are doing chemistry here), each step contains statistical errors, limiting the number of iterations you can do successively before the probability of producing an error becomes greater than that of producing the correct result. For example, an error rate of 1% is fine for 10 iterations, giving less than 10% error, but after 100 iterations this error grows to 63%.
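The error arithmetic is easy to verify: assuming independent errors at a rate of 1% per operation, the chance of at least one error after n operations is 1 - 0.99^n.

    # Probability of at least one error after n operations at 1% per step.
    for n in (10, 100):
        print(n, round(1 - 0.99 ** n, 3))  # 10 -> 0.096 (<10%), 100 -> 0.634 (63%)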
Conclusions of Experiment
So will DNA ever be used to solve a traveling salesman problem with a higher number of cities than can be done with traditional computers? Well, considering that the record is a whopping 13,509 cities, it certainly will not be done with the procedure described above. It took the group that set that record only three months, using three Digital AlphaServer 4100s (a total of 12 processors) and a cluster of 32 Pentium-II PCs. The solution was possible not because of brute-force computing power, but because they used some very efficient branching rules. This first demonstration of DNA computing used a rather unsophisticated algorithm, but as the formalism of DNA computing becomes refined, new algorithms perhaps will one day allow DNA to overtake conventional computation and set a new record. On the side of the "hardware" (or should I say "wetware"), improvements in biotechnology are happening at a rate similar to the advances made in the semiconductor industry. For instance, look at sequencing: what once took a graduate student 5 years to do for a PhD thesis takes Celera just one day. With the amount of government-funded research dollars flowing into genetic-related R&D, and with the large potential payoffs from the lucrative pharmaceutical and medical-related markets, this isn't surprising. Just look at the number of advances in DNA-related technology that happened in the last five years. Today we have not one but several companies making "DNA chips," where DNA strands are attached to a silicon substrate in large arrays (for example Affymetrix's gene chip). Production technology of MEMS is advancing rapidly, allowing for novel integrated small-scale DNA processing devices. The Human Genome Project is producing rapid innovations in sequencing technology. The future of DNA manipulation is speed, automation, and miniaturization. And of course we are talking about DNA here, the genetic code of life itself. It certainly has been the molecule of this century, and most likely the next one. Considering all the attention that DNA has garnered, it isn't too hard to imagine that one day we might have the tools and talent to produce a small integrated desktop machine that uses DNA, or a DNA-like biopolymer, as a computing substrate along with a set of designer enzymes.
Perhaps it won't be used to play Quake IV or surf the web -- things that traditional computers are good at -- but it certainly might be used in the study of logic, encryption, genetic programming and algorithms, automata, language systems, and lots of other interesting things that haven't even been invented yet.
5. The Restricted Model
Since Adleman's original experiment, several methods to reduce error and improve efficiency have been developed. The problems with implementing a DNA computer can be separated into two types:
o Physical obstructions: difficulties with large-scale systems and coping with errors
o Logical obstructions: concerns about the versatility of molecular computers and their capacity to efficiently accommodate a wide variety of computational problems
The Restricted model of DNA computing addresses several physical problems with the unrestricted model: it simplifies the physical obstructions in exchange for some additional logical considerations. The purpose of this restructuring is to simplify biochemical operations and reduce the errors due to physical obstructions. The model permits only simple operations on test tubes of DNA, such as:
o Merging: pour two test tubes into one to perform a union
o Detection: confirm the presence or absence of DNA in a given test tube
Despite these restrictions, this model can still solve NP-complete problems such as the 3-colourability problem, which asks whether a map can be colored with three colors in such a way that no two adjacent territories have the same color. Certain assumptions must be made about the oligonucleotides used in the manipulations:
o Under easily achievable conditions (temperature, pH, etc.), each oligonucleotide reliably forms stable hybrids with its Watson-Crick complement
o Under easily achievable conditions, each oligonucleotide reliably dissociates from its Watson-Crick complement
o Under neither of the conditions above does any oligonucleotide form hybrids with itself, with another oligonucleotide (other than its complement), or with another oligonucleotide's Watson-Crick complement
Error control is achieved mainly through logical operations, such as running all DNA samples that show positive results a second time to reduce false positives. Some molecular proposals, such as using DNA with a peptide backbone for greater stability, have also been put forward.
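To make the 3-colourability problem above concrete, here is a minimal conventional sketch in Python; the example graph is hypothetical, and the exhaustive loop stands in for what a restricted-model DNA computer would do in parallel, with detection reporting whether any valid colouring survives the filtering steps.

    from itertools import product

    def three_colourable(n, edges):
        # Try every assignment of one of three colours to the n territories.
        # A DNA computer would encode all 3**n candidates as strands, filter
        # them in parallel, and use detection to see whether any remain.
        for colouring in product(range(3), repeat=n):
            if all(colouring[u] != colouring[v] for u, v in edges):
                return True  # a colouring with no same-coloured neighbours
        return False

    # Hypothetical 4-territory map: territory 0 borders 1, 2 and 3; 1 borders 2.
    print(three_colourable(4, [(0, 1), (0, 2), (0, 3), (1, 2)]))  # True
    # A map where every territory borders every other needs four colours:
    print(three_colourable(4, [(0, 1), (0, 2), (0, 3),
                               (1, 2), (1, 3), (2, 3)]))          # False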
6. DNA Manipulation
DNA, with its unique data structure and ability to perform many parallel operations, allows you to look at a computational problem from a different point of view. Transistor-based computers typically handle operations in a sequential manner. Of course there are multi-processor computers, and modern CPUs incorporate some parallel processing, but in general, in the basic von Neumann architecture computer, instructions are handled sequentially. A von Neumann machine, which is what all modern CPUs are, basically repeats the same "fetch and execute cycle" over and over again; it fetches an instruction and the appropriate data from main memory, and it executes the instruction. It does this many, many times in a row, really, really fast. The great Richard Feynman, in his Lectures on Computation, summed up von Neumann computers by saying, "the inside of a computer is as dumb as hell, but it goes like mad!" DNA computers, however, are non-von Neumann, stochastic machines that approach computation in a different way from ordinary computers, for the purpose of solving a different class of problems. Typically, increasing the performance of silicon computing means faster clock cycles (and larger data paths), where the emphasis is on the speed of the CPU and not on the size of the memory. For example, will doubling the clock speed or doubling your RAM give you better performance? For DNA computing, though, the power comes from the memory capacity and parallel processing. If forced to behave sequentially, DNA loses its appeal. For example, let's look at the read and write rate of DNA. In bacteria, DNA can
be replicated at a rate of about 500 base pairs a second. Biologically this is quite fast (10 times faster than human cells) and considering the low error rates, an impressive achievement. But this is only 1000 bits/sec, which is a snail's pace when compared to the data throughput of an average hard drive. But look what happens if you allow many copies of the replication enzymes to work on DNA in parallel. First of all, the replication enzymes can start on the second replicated strand of DNA even before they're finished copying the first one. So already the data rate jumps to 2000 bits/sec. But look what happens after each replication is finished - the number of DNA strands increases exponentially (2^n after n iterations). With each additional strand, the data rate increases by 1000 bits/sec. So after 10 iterations, the DNA is being replicated at a rate of about 1Mbit/sec; after 30 iterations it increases to 1000 Gbits/sec. This is beyond the sustained data rates of the fastest hard drives. Now let's consider how you would solve a nontrivial example of the traveling salesman problem (# of cities > 10) with silicon vs. DNA. With a von Neumann computer, one naive method would be to set up a search tree, measure each complete branch sequentially, and keep the shortest one. Improvements could be made with better search algorithms, such as pruning the search tree when one of the branches you are measuring is already longer than the best candidate. A method you certainly would not use would be to first generate all possible paths and then search the entire list. Why? Well, consider that the entire list of routes for a 20 city problem could theoretically take 45 million GBytes of memory (18! routes with 7 byte words)! Also for a 100 MIPS computer, it would take two years just to generate all paths (assuming one instruction cycle to generate each city in every path).
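Both back-of-the-envelope figures are easy to check; here is a quick sketch, where the 1000 bits/sec per strand and the 18!-route count are the article's own numbers:

    import math

    PER_STRAND = 1000  # bits/sec: ~500 bases/sec at 2 bits per base

    def parallel_rate(iterations):
        # 2**n strands being copied at once after n doublings.
        return (2 ** iterations) * PER_STRAND

    print(parallel_rate(10))  # ~1e6 bits/sec: about 1 Mbit/sec
    print(parallel_rate(30))  # ~1.07e12 bits/sec: about 1000 Gbit/sec

    # Memory needed to list every route of the 20-city problem explicitly:
    print(math.factorial(18) * 7 / 1e9)  # ~4.5e7 GB, i.e. ~45 million GBytes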
However, using DNA computing, this method becomes feasible! 10^15 is just a nanomole of material, a relatively small number for biochemistry. Also, routes no longer have to be searched through sequentially. Operations can be done all in parallel.
7. Conclusion
A Successor to Silicon
Silicon microprocessors have been the heart of the computing world for more than 40 years. In that time, manufacturers have crammed more and more electronic devices onto their microprocessors. In accordance with Moore's Law, the number of electronic devices put on a microprocessor has doubled every 18 months. Moore's Law is named after Intel founder Gordon Moore, who predicted in 1965 that microprocessors would double in complexity every two years. Many have predicted that Moore's Law will soon reach its end, because of the physical speed and miniaturization limitations of silicon microprocessors. DNA computers have the potential to take computing to new levels, picking up where Moore's Law leaves off. There are several advantages to using DNA instead of silicon:
o As long as there are cellular organisms, there will always be a supply of DNA.
o Unlike the toxic materials used to make traditional microprocessors, DNA biochips can be made cleanly.
o DNA computers are many times smaller than today's computers.
DNA's key advantage is that it will make computers smaller than any computer that has come before, while at the same time holding more data. One pound of DNA has the capacity to store more information than all the electronic computers ever built, and the computing power of a teardrop-sized DNA computer, using DNA logic gates, will be more powerful than the world's most powerful supercomputer. More than 10 trillion DNA molecules can fit into an area no larger than 1 cubic centimeter (0.06 cubic inches). With this small amount of DNA, a computer would be able to hold 10 terabytes of data and perform 10 trillion calculations at a time. By adding more DNA, more calculations could be performed. Unlike conventional computers, which operate linearly, taking on tasks one at a time, DNA computers perform many calculations in parallel. It is this parallelism that allows DNA to solve complex mathematical problems in hours, whereas it might take electrical computers hundreds of years to complete them. The first DNA computers are unlikely to feature word processing, e-mailing and solitaire programs. Instead, their powerful computing ability will be used by national governments for cracking secret codes, or by airlines wanting to map more efficient routes. Studying DNA computers may also lead us to a better understanding of a more complex computer -- the human brain.
8. Bibliography
1. Adleman, L. 1994. Molecular computation of solutions to combinatorial problems. Science 266:1021-1024.
2. Lipton, R. J. Speeding up computations via molecular biology. (unpublished manuscript)
3. Boneh, D., Lipton, R. J. Making DNA computers error resistant. (unpublished manuscript)
4. Kari, L. 1997. DNA computing: the arrival of biological mathematics. (unpublished manuscript)
5. Adleman, L. 1995. On constructing a molecular computer. (unpublished manuscript)