This article originally appeared on the BeyeNETWORK
There have been studies conducted as early as 1994 that show DNA computing is a viable alternative to today's standards of electronic circuitry. A study in 1999 by the Defense Advanced Research Projects Agency (DARPA) showed that 10^8 terabytes could fit in one gram of DNA. Their experiments went on to demonstrate encryption, decryption and programming of specific DNA molecules with images and other artifacts. They concluded the study by discussing how they applied the technology to solving an NP-complete problem.
The timelines for government and scientific advancement of Nanotechnology are far ahead of the predictions made in previous articles. The commercial market still has a long way to go. In this article we will review the 1999 DARPA grant and then speculate as to the machines and software needed to build such a system today.
My current hypothesis is as follows: Nanotechnology is changing at ten times the rate of standard technology (standard being that which abides by Moore’s Law).
What is DNA computing?
DNA computing is the ability to drive computations, store data, and retrieve data through the structure and use of DNA molecules. To understand DNA computing, we must first understand the molecule. Molecules are made up of atoms; each atom contains protons, neutrons, and electrons. A molecule can be a combination of atoms (such as a water molecule, H2O). The Nanotechnology aspect comes in our ability to alter and control the makeup of the atoms within the molecule. DNA computing takes strands of DNA, along with proteins and enzymes, and programs specific states into each molecule in the double-helix strand.
The idea of DNA computing mirrors the action of DNA itself. One strand of DNA houses the "RAM" or memory; the other strand is a backup (like RAID 0+1 on disk). The enzymes are the motors that copy, search, access, and read/write the information in the DNA strands. When the DNA is put into an aqueous solution (like water) and the data is added, the data or information finds the appropriate DNA component to combine with and attaches itself. The data is usually in the form of a chemical solution with its own enzymes, providing motion or movement to the atoms. Once the atoms bind, they cannot be unbound without changing the environment. Changing the environment may mean making it "unfriendly" to the data, so that the enzymes uncouple the chemically bonded elements (the data) and return them to their previous state. For the basics of DNA computing, read the following reference: http://arstechnica.com/reviews/2q00/dna/dna-1.html. Here's an example that might simplify the thinking:
Take 200 people, each one wearing a different color shirt, and each one standing in a line in a very specific order. Then introduce another person wearing a new color (the data). This person's job is to find the matching shirt, then stand in line with the other individual, or tie a rope around their waists (so that they are "bonded" together). Imagine 200 people released at the same time to find their matching shirt colors; they all bond in parallel (at the same time). It's very fast. Untying the ropes around their waists requires 200 knot specialists who, all at the same time, untie the knots and then dissolve the ropes. This is the unbinding of the DNA molecules.
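The shirt-matching analogy can be sketched in ordinary code. This is only a toy simulation, with made-up strands, of how every molecule in the pool "searches" for its complement at once; in real DNA computing the parallelism is chemical, so the whole pool binds in one pass regardless of its size:

```python
# Toy simulation of the shirt-matching analogy: each strand in the pool
# waits for a matching "data" strand. The chemistry does all pairings
# simultaneously; here we fake that with a constant-time dictionary lookup.
COMPLEMENT = str.maketrans("ATCG", "TAGC")

def complement(strand: str) -> str:
    """Watson-Crick complement: A<->T, C<->G (the 'matching shirt')."""
    return strand.translate(COMPLEMENT)

def bind_all(pool: set[str], data_strands: set[str]) -> list[tuple[str, str]]:
    """Pair each data strand with its complement in the pool, 'in parallel'."""
    return [(s, complement(s)) for s in data_strands if complement(s) in pool]

pool = {"ATCG", "GGCC", "TTAA"}
data = {"TAGC", "CCGG"}        # complements of ATCG and GGCC
print(bind_all(pool, data))    # both pairs bind in a single pass
```

Unbinding (the knot specialists) would simply be the reverse operation: under a changed "environment," every bound pair separates at once.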
This is a simplistic example of how DNA computing works. To understand the data sets and to generate "software" that programs specific information requires knowledge of chemistry and biology. The DARPA report in 1999 shows different experiments they had actually completed (as of 2000), demonstrating reconfiguration, massively parallel data sets, and searches. It is well known that searching across one gram of DNA in an aqueous solution might take one to three seconds. Imagine, 10^8 terabytes searched in parallel in less than three seconds!
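To put that claim in perspective, here is a quick back-of-envelope calculation, taking the figures quoted above at face value (the numbers are the article's, not independently verified):

```python
# Effective search throughput implied by the quoted figures:
# ~10^8 terabytes held in one gram of DNA, searched in ~3 seconds.
terabytes = 1e8                      # DARPA figure for one gram of DNA
seconds = 3.0                        # worst-case search time quoted above
bytes_total = terabytes * 1e12       # terabytes -> bytes
throughput = bytes_total / seconds   # bytes "examined" per second
print(f"{throughput:.2e} bytes/s")   # prints 3.33e+19 bytes/s
```

That effective rate is only possible because every strand is probed at the same time; no serial machine scans data at anything like that speed.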
“Experimental Demonstrations: The BMC experimental demonstrations include moderate scale solution of computationally hard problems. A number of our BMC experiments encode large amounts of binary data (e.g., possible solutions to search problems) within distinct DNA strands and then process this data in massively parallel fashion to execute arithmetic and logical operations on the data. The nano-structures constructed in experimental demonstrations consist of DNA crossover molecules that self assemble into large lattices that can execute computations, as well as DNA molecules that reconfigure for possible use as motors.”, DARPA 1999 – Yearly Report, http://www.cs.duke.edu/~reif/BMC/reports/BMC.FY99.reports/BMC.DARPA.FY99.html
How do we model systems like this?
In 1994 a DNA computing experiment proved that data can be stored, replicated, searched, and retrieved from DNA structures. "DNA bases represented the information bits: ATCG (nucleotides) spaced every 0.35 nanometers along the DNA molecule, giving DNA a remarkable data density of nearly 18 Mbits per inch." This provides hope for computing power. Each nucleotide can represent a bit. Not only does the bit type make a difference, but so does the order or sequence. A "T" in the third position means something completely different than a "T" in the first position, leading to limitless possibilities for computation.
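The idea of nucleotides as information carriers can be sketched as a simple codec. The bit-to-base mapping below is arbitrary and purely illustrative (any consistent assignment works); note that with four symbols each base could in principle carry two bits, and that position in the sequence carries meaning, just as in the "T in the third position" example:

```python
# Illustrative mapping of bit pairs onto DNA bases (four symbols = 2 bits each).
TO_BASE = {"00": "A", "01": "T", "10": "C", "11": "G"}
TO_BITS = {v: k for k, v in TO_BASE.items()}

def encode(bits: str) -> str:
    """Map a bit string (even length) onto a DNA base sequence."""
    return "".join(TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(strand: str) -> str:
    """Recover the original bit string from the base sequence."""
    return "".join(TO_BITS[b] for b in strand)

strand = encode("0110110010")
print(strand)                       # prints TCGAC
assert decode(strand) == "0110110010"
```

Because meaning depends on position, the same base in two different slots decodes to different bits of the message, which is exactly what gives a short strand its expressive power.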
Furthermore, each of these nucleotides can be complemented and hybridized; in other words, they can produce double-stranded DNA. For error correction this is very important: it gives the nano-computer a chance to correct against what should be a comparable equivalent (copy) of the data, much like a RAID 0+1 disk array. For example, with four possible values per base, each base spaced 0.35 nanometers apart, the storage sizes are impressive. If we want to model the information, we must first consider a two-dimensional (2D) model: atoms with electrons; different atoms tied together through valence bonding, providing a surface area to the molecule; and a chemical makeup. A three-dimensional (3D) model then allows for multiple twists of the atomic layers.
The Nanohousing software must allow for an intermixture of chemical models, bonding, and surface areas. Moving forward, we will have to think about and construct systems and interactions in multi-dimensional space and in parallel. Thinking and coding serially won't work anymore.
Nanohousing is already here. Unfortunately, the commercial availability of the technology makes it difficult (as of the printing of this article) to create the first Nanohouse. However, the notion of this "wet technology" (the place where man-made and nature-made technology cross, and we can't tell the difference) raises an ultimate question: are we walking computers, or will computational DNA co-exist peacefully within each of us as we live our lives in the future?
The point is: what was predicted to happen in a prior phase of my Nanotech timeline has already occurred. Compression and encryption algorithms have already been developed, tested, and used in DNA computing. Terabyte-sized storage has already been reached, and quantum-level parallel operations have also been created, used, and proven successful.
There have been similar reports of success from governments and research labs all over the world. In the United States, the timeline for commercializing this technology was five years behind that of the government's plan. Apply my new hypothesis, and we are only eight months away from a commercially available computing platform capable of handling atomic-level operations. Does anyone want to buy my first Nanohousing thumb drive? It will probably hold about 10^2 terabytes and will run on a USB port. It will cost $200,000 apiece. The encryption algorithms will be nearly unbreakable. I'm ready for investors!
In case you're curious, you're a researcher, or you wish to get in touch with me, I'd love to hear your thoughts, comments, and feedback on this issue – both critical and thoughtful perspectives.