# Talk:CH391L/S13/DNA Computing


• Kevin Baldridge 16:39, 1 April 2013 (EDT): How did he select against the strands that were not valid answers? The combinatorial approach generates every possible answer, which yields so many possibilities that I don't see how this improves on a traditional computer's brute-force approach if you can't solve the problem of selecting the valid answers.
• Dwight Tyler Fields 13:34, 8 April 2013 (EDT): This was a common criticism of using DNA for combinatorial problems. The initial reaction was quick, but it took Adleman a week to purify his results down to the correct subset of valid answers. First he used PCR amplification to amplify only strands that start and end at the correct cities, A and G. Then he used gel electrophoresis to filter out all strands of the wrong length (a strand visiting each of the seven towns exactly once contains links between exactly seven towns, so it has a fixed length). Finally, he used affinity purification to discard strands that do not visit every town (strands that do not contain town A are discarded first, then those that do not contain B, and so on). Any strands left over must represent a valid route. In this case, the only valid route was encoded by the strand ABBCCDDEEFFG: A to B to C to D to E to F to G.
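Adleman's three purification steps can be mirrored in software. Below is a minimal sketch, assuming a made-up seven-town edge set (not Adleman's actual graph): enumerate every walk the "combinatorial soup" could encode, then apply the three filters in order.

```python
# Toy in-silico mirror of Adleman's purification steps.
# The towns and edge set here are illustrative, not Adleman's actual graph.
from itertools import product

TOWNS = "ABCDEFG"
EDGES = {("A", "B"), ("B", "C"), ("C", "D"), ("D", "E"),
         ("E", "F"), ("F", "G"), ("A", "D"), ("B", "E"), ("C", "G")}

def all_walks(max_len=7):
    """Enumerate every edge-respecting walk of up to max_len towns
    (the 'combinatorial soup' of randomly ligated strands)."""
    walks = []
    for n in range(2, max_len + 1):
        for seq in product(TOWNS, repeat=n):
            if all((seq[i], seq[i + 1]) in EDGES for i in range(n - 1)):
                walks.append("".join(seq))
    return walks

def adleman_filter(walks):
    # Step 1 ("PCR"): keep only walks starting at A and ending at G
    walks = [w for w in walks if w[0] == "A" and w[-1] == "G"]
    # Step 2 ("gel electrophoresis"): keep only walks of exactly 7 towns
    walks = [w for w in walks if len(w) == 7]
    # Step 3 ("affinity purification"): keep only walks visiting every town
    walks = [w for w in walks if all(t in w for t in TOWNS)]
    return walks

print(adleman_filter(all_walks()))  # -> ['ABCDEFG']
```

With this toy edge set, only one strand survives all three filters, so it must be the Hamiltonian path, just as in the wet-lab version.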
• Siddharth Das 16:39, 1 April 2013 (EDT): Granted, this methodology isn't as elegant as computation on conventional computers, but "brute force" allows finding solutions to highly combinatorial problems. In other words, DNA computing vs. electronic computing is in essence memory vs. time, respectively. The amount of memory an electronic computer would need to allocate to screen for solutions to combinatorial problems, factorization, and the like is prohibitive. For example, in RSA cryptography data is protected by a product of two large primes (795028841 X 25209506681 = 20042284878777186721) shared between two parties. As an "eavesdropper", the number 20042284878777186721 (the lock) means nothing to you, so you are forced to factorize an incredibly huge number. Without the RSA private key, the eavesdropper must rely on simply "brute forcing" the decryption process, which is where DNA computing would be advantageous.
• Jeffrey E. Barrick 16:42, 1 April 2013 (EDT):DNA cryptography?
• Benjamin Gilman 17:15, 3 April 2013 (EDT): It's kind of out there, but DNA sequences could be pretty useful as key pads for decrypting a Vernam Cipher. DNA pieces can be generated randomly, copied accurately, and destroyed if necessary. Although it's not truly random, just taking an existing sequence block (like a chunk of a genome from NCBI) would be close enough to make the cipher difficult to break. Maybe someday spies will be carrying tubes of DNase around with them.
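Benjamin's one-time-pad idea is easy to sketch in code. The 2-bits-per-base mapping and the pad sequence below are arbitrary choices for illustration; any agreed-upon DNA sequence at least four times the message length (in bases) would serve as the shared key.

```python
# Sketch of a Vernam (one-time pad) cipher keyed by a DNA sequence.
# The base-to-bits mapping and pad sequence are illustrative assumptions.
BASE_BITS = {"G": 0b00, "C": 0b01, "T": 0b10, "A": 0b11}

def dna_keystream(seq, nbytes):
    """Pack bases into key bytes, 4 bases (2 bits each) per byte."""
    if len(seq) < 4 * nbytes:
        raise ValueError("DNA key too short for message")
    key = bytearray()
    for i in range(nbytes):
        b = 0
        for base in seq[4 * i:4 * i + 4]:
            b = (b << 2) | BASE_BITS[base]
        key.append(b)
    return bytes(key)

def vernam(data: bytes, seq: str) -> bytes:
    """XOR data with the DNA-derived pad; applying it twice decrypts."""
    key = dna_keystream(seq, len(data))
    return bytes(d ^ k for d, k in zip(data, key))

msg = b"meet at dawn"
pad = "ATCGGCTAGGATCCATGCAATTCGGACTATCGGCTAGGATCCATGCAA"  # 48 bases for 12 bytes
ct = vernam(msg, pad)          # encrypt
assert vernam(ct, pad) == msg  # XOR with the same pad decrypts
```

As with any one-time pad, the scheme is only as good as the key: reusing the pad, or drawing it from a public genome as suggested above, trades true secrecy for convenience.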
• Kevin Baldridge 16:45, 1 April 2013 (EDT): I think I might have shared this paper before, but it is very directly related to some of the topics you discuss here: Single Cell Programmable Biocomputers
• Dwight Tyler Fields 13:56, 8 April 2013 (EDT): I did some digging and found this interesting 2012 iGEM project by Team Tsinghua that cites the paper. Their idea was to use quorum sensing to turn a whole bacterial colony into a logic gate of sorts.
• Alvaro E. Rodriguez M. 17:24, 4 April 2013 (EDT): Here is the NPR news article, "Tiny DNA Switches Aim To Revolutionize 'Cellular' Computing"; there is a 4-minute audio clip that we could listen to in class. Also, could you add a small section on it, or talk about it in class?
• Dwight Tyler Fields 14:27, 8 April 2013 (EDT): Great find Alvaro. There is a great animated infographic at the end of the article that helps explain the concept.
• Neil R Gottel 20:40, 4 April 2013 (EDT): During class, we were wondering why Church didn't use a two-bits-per-nucleotide system (such as G=00, C=01, T=10, A=11). Using only one bit per nucleotide doubles the amount of DNA needed, but there's a very good reason: if parts of your data have 2-bit repeats (like 10101010101010 or 00000000000000), then you're gonna have a hell of a time during sequencing, because it's difficult for next-gen sequencers to differentiate between something like TTTTTTT and TTTTTTTT. Assigning two possible nucleotides to each bit value lets you arbitrarily "complicate" the repeated region into a sequence that won't mess up your sequencing results.
• Aurko Dasgupta 00:10, 5 April 2013 (EDT): Darn, that's a really significant downside to 2bits/base. Next-gen sequencing ruins everything! (/s)
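The run-avoiding encoding Neil describes can be sketched in a few lines. Each bit value gets two candidate bases (here 0 → A/C and 1 → G/T, the assignment used in Church's scheme as I understand it), so the encoder can always pick a base that differs from the previous one and never emit a homopolymer run.

```python
# Sketch of a one-bit-per-base encoding that avoids homopolymer runs.
# The 0 -> A/C, 1 -> G/T assignment follows the published Church scheme;
# the greedy "pick a different base" rule is one simple way to use it.
BIT_BASES = {0: "AC", 1: "GT"}

def encode(bits):
    seq = []
    for bit in bits:
        choices = BIT_BASES[bit]
        # pick whichever candidate base differs from the last base emitted
        base = choices[0] if not seq or seq[-1] != choices[0] else choices[1]
        seq.append(base)
    return "".join(seq)

def decode(seq):
    return [0 if base in "AC" else 1 for base in seq]

bits = [1] * 8 + [0] * 8 + [1, 0] * 4      # long bit runs that would be 2-bit repeats
seq = encode(bits)
assert decode(seq) == bits                  # encoding round-trips
assert all(a != b for a, b in zip(seq, seq[1:]))  # no two identical adjacent bases
```

The same redundancy that halves the information density is what guarantees every bit stream, even all-zeros, maps to a sequencer-friendly strand.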
• Aurko Dasgupta 00:07, 5 April 2013 (EDT): I think this qualifies as a DNA computation. Andy Ellington's lab developed a way to test for extremely low concentrations of specific DNA sequences based on hairpin assembly. It's supposed to be a technique of molecular diagnostics that can be used in personalized medicine.
• Thomas Wall 00:22, 5 April 2013 (EDT): Maybe this would be helpful, or maybe impossible, but could there be a comparison of what electronic computers do better vs. the strengths of DNA computers?
• Gabriel Wu 14:44, 5 April 2013 (EDT): It's unclear if Feynman was the first to propose DNA computing, but in a 1959 talk at the American Physical Society, "There's Plenty of Room at the Bottom", Feynman proposed miniaturizing computers, machines, and data storage. He used the analogy of DNA to suggest that highly complex information (the information that encodes people, for example) can be stored on an extremely small scale. If anything, it sounds like he came up with the idea of DNA as a storage medium (sorry, George Church) more than DNA as a computational platform. That said, Feynman never explicitly said that we should try to store data in DNA; he simply suggested that since biology can do it, we can too.
• Gabriel Wu 14:47, 5 April 2013 (EDT): We should come up with a list of computer science terms. Figure out which ones haven't been adopted by synthetic biologists yet and make that an iGEM project.