CH391L/S13/DNA Computing

Introduction

DNA computing is the performance of computations using biological molecules, specifically DNA, instead of traditional silicon. The idea of using molecules for computation was first proposed in 1959 by American physicist Richard Feynman, but it took until 1994 for the first successful experiment in molecular computation.

History

A computer scientist at the University of Southern California named Leonard Adleman first came up with the idea of using DNA for computing purposes after reading "Molecular Biology of the Gene," a book written by James Watson (co-discoverer of the structure of DNA).

"The principle of Leonard Adleman's DNA computer to solve the 'Travelling salesman' problem."

In 1994 Adleman published an article in the journal Science[1] demonstrating how DNA could be used to solve a well-known mathematical problem, the directed Hamiltonian Path problem, a close relative of the "traveling salesman" problem. The goal is to find a path that starts and ends at designated cities and passes through each of the other cities exactly once. This experiment was significant because it showed that information could indeed be processed using the interactions between strands of synthetic DNA.

"He represented each of the seven cities as separate, singlestranded DNA molecules, 20 nucleotides long, and all possible paths between cities as DNA molecules composed of the last ten nucleotides of the departure city and the first ten nucleotides of the arrival city. Mixing the DNA strands with DNA ligase and adenosine triphosphate (ATP) resulted in the generation of all possible random paths through the cities. However, the majority of these paths were not applicable to the situation–they were either too long or too short, or they did not start or finish in the right city. Adleman then filtered out all the paths that neither started nor ended with the correct molecule and those that did not have the correct length and composition. Any remaining DNA molecules represented a solution to the problem."[2]

Challenges

Adleman's experiment is an example of an NP-complete problem (NP stands for "nondeterministic polynomial time"), a class for which no known efficient (i.e. polynomial-time) algorithm exists. It has been proposed that these types of problems are ideal for the massive parallelism of biology: whereas traditional computers are designed to perform one calculation at a time very quickly, a tube of DNA strands can explore billions of potential answers simultaneously.

However, because the number of candidate solutions to such problems grows exponentially with the size of the problem, even relatively modest instances would require enormous amounts of DNA to represent every possible answer. For example, it has been estimated that scaling the Hamiltonian Path problem up to 200 cities would require a mass of DNA exceeding that of the Earth. For this reason and others, the field has shifted its focus to other areas.
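
A rough back-of-envelope calculation makes the point; the per-strand mass below is an assumed order-of-magnitude figure, not a value from the literature:

 import math
 
 EARTH_MASS_G = 5.97e27    # mass of the Earth in grams
 STRAND_MASS_G = 1e-18     # assumed mass of one path-encoding strand
                           # (order-of-magnitude guess, thousands of nt)
 
 n_cities = 200
 # One molecule per candidate ordering of the cities (~200! of them).
 # Work in log space, since 200! overflows ordinary floats.
 log10_molecules = math.log10(math.factorial(n_cities))
 log10_mass_g = log10_molecules + math.log10(STRAND_MASS_G)
 
 print(f"candidate paths : ~10^{log10_molecules:.0f}")
 print(f"total DNA mass  : ~10^{log10_mass_g:.0f} g")
 print(f"mass of Earth   : ~10^{math.log10(EARTH_MASS_G):.0f} g")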

Storage

[Figure: Church's methodology for encoding and decoding 700 terabytes of information in DNA.]

DNA can be treated as a highly stable and compact storage medium. In 2012, George Church, a bioengineer and geneticist at Harvard, stored roughly 1,000 times more data in DNA than had previously been achieved, encoding 700 terabytes of information in a single gram of DNA. For comparison, storing the same amount of information on the densest electronic medium would require about 330 pounds of hard drives.

His methodology was to synthesize strands carrying 96-bit data blocks, with each base representing a single binary value (e.g., T and G = 1; A and C = 0). The "data" in question was 70 billion copies of his latest book, "Regenesis: How Synthetic Biology Will Reinvent Nature and Ourselves."
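
A minimal Python sketch of this one-bit-per-base mapping follows. The A/C = 0, T/G = 1 assignment comes from the text above, but the rule for picking between the two candidate bases for each bit is invented for illustration and is not Church's actual scheme:

 # One bit per base: A/C encode 0, T/G encode 1 (per the text).
 # Alternating which base carries each bit avoids long homopolymer
 # runs, which sequence poorly; the alternation is an assumed detail.
 ZERO, ONE = "AC", "TG"
 
 def bits_to_dna(bits):
     return "".join((ONE if b == "1" else ZERO)[i % 2]
                    for i, b in enumerate(bits))
 
 def dna_to_bits(seq):
     return "".join("1" if base in ONE else "0" for base in seq)
 
 payload = "".join(format(ord(c), "08b") for c in "DNA")
 strand = bits_to_dna(payload)
 assert dna_to_bits(strand) == payload
 print(payload, "->", strand)   # 24 bits -> 24 bases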

To read the DNA, Church simply sequenced it using next-generation sequencing, converting each of the bases back into binary. Each strand of DNA had a 19-bit address block at the start, so the strands could be sequenced out of order and then sorted back into usable data using the addresses.
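
Continuing the sketch, out-of-order reassembly by address could look like the following. The 19-bit address and 96-bit payload sizes are from the text; the exact field layout is an assumption for illustration:

 import random
 
 ADDR_BITS, DATA_BITS = 19, 96   # field sizes described in the text
 
 def make_strands(bitstream):
     """Chop a bitstream into addressed blocks: a 19-bit address
     prefix followed by a 96-bit payload (final block zero-padded)."""
     blocks = [bitstream[i:i + DATA_BITS].ljust(DATA_BITS, "0")
               for i in range(0, len(bitstream), DATA_BITS)]
     return [format(addr, f"0{ADDR_BITS}b") + block
             for addr, block in enumerate(blocks)]
 
 def reassemble(strands):
     """Sequencing reads come back in arbitrary order; sort by the
     address prefix, then concatenate the payloads."""
     keyed = sorted((int(s[:ADDR_BITS], 2), s[ADDR_BITS:]) for s in strands)
     return "".join(payload for _, payload in keyed)
 
 message = "".join(format(ord(c), "08b")
                   for c in "Regenesis: How Synthetic Biology")
 strands = make_strands(message)
 random.shuffle(strands)          # simulate out-of-order sequencing
 assert reassemble(strands).startswith(message)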

References

  1. Adleman, L. M. (1994). Molecular computation of solutions to combinatorial problems. Science 266(5187): 1021-1024.
  2. Parker, J. (2003). Computing with DNA. EMBO Reports 4(1): 7-10.