Talk:Molecular computing
 Please contact directly: ramonrocabcn@gmail.com (sorry, mailing was previously unavailable; please try again)
http://openwetware.org/wiki/Molecular_computing
http://openwetware.org/wiki/User_talk:Ramon_Roca
http://openwetware.org/wiki/Talk:Molecular_computing
http://www.youtube.com/user/rmnDeliriaLegitur
<a name="BIBLIO">BIBLIO</a>
 <a name="Ayers"></a> Ayers, R.U.;
Information, entropy and progress. A new evolutionary paradigm.
American Institute of Physics, 1994.
Consciousness, from computer science and cognitive
psychology, is a kind
of prioritizing metafunction that selects from the cacophony of internal
signals and creates a linear real-time narrative of what's going on,
moment by moment; it appears to be a natural consequence of information
exchange among elements of a distributed process system.
Any arbitrary shape can be regarded as a locus of intersecting surfaces of nth order (generated in terms of Fourier descriptors). The total amount of information embodied in simple shapes is determined by the shape category, the number of surfaces and the form or order of the surfaces' defining equations (e.g., quadratic, cubic, etc.).  <a name="Brunat"></a> Brunat Blay, J.M. & Ventura Capell, E.;
Informació i codis.
Universitat Politècnica de Catalunya, 2001.

In general, if A is an alphabet and n ≥ 1 is an
integer, the set C = A^{n} is a code. Factoring a
word of C* consists of splitting off symbols consecutively, n at a time.
Colloquially, but significantly, we can say
that entropy is the limit of compression.
Shannon's theorem: In general, if a block code C has M = q^{k} words, we can put it in bijection with the total code of length k over the same alphabet. Thus each word of C carries k = log_{q}M information digits (which determine the word) and n - k redundant ones. The information rate R(C), or amount of information per symbol, is (log_{q}M)/n; the redundancy rate is 1 - R(C). Colloquially, a good code, in the sense of a low probability of error, is expensive in the sense of being costly to transmit because of its great length. A code (with natural restrictions) chosen at random is reasonably likely to have these properties.
Unfortunately, the proof is not constructive, ... Moreover, if a sufficiently small probability of error is desired, the theorem implies that we might have to increase the length n to impractical values.
The order of a finite field is its cardinality, that is, the number of elements it has. Linear codes are vector subspaces; [...] finally, orthogonality is the foundation of the so-called parity-check matrix of a code. [Over a finite field, orthogonality has certain properties different from the ordinary ones.] For example, a nonzero vector can be perpendicular to itself. A linear code is defined either by giving a basis of it (generator matrix) or by giving a homogeneous system of linear equations with a parity-check matrix (the generator matrix of the orthogonal dual code).
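The generator/parity-check machinery described above can be made concrete; a minimal sketch (my own example, not from the book) using the [7,4,3] binary Hamming code:

```python
# Illustrative sketch: the [7,4,3] binary Hamming code, defined by a
# generator matrix G and a parity-check matrix H over GF(2).
import numpy as np

# Systematic generator matrix G = [I_4 | P] and check matrix H = [P^T | I_3].
P = np.array([[1, 1, 0],
              [1, 0, 1],
              [0, 1, 1],
              [1, 1, 1]])
G = np.hstack([np.eye(4, dtype=int), P])
H = np.hstack([P.T, np.eye(3, dtype=int)])

# Every codeword c = mG satisfies the homogeneous system H c^T = 0 (mod 2).
assert not (G @ H.T % 2).any()

def encode(msg):
    """Encode a 4-bit message into a 7-bit codeword."""
    return np.array(msg) @ G % 2

def syndrome(word):
    """The syndrome H w^T is zero exactly for codewords."""
    return H @ np.array(word) % 2

c = encode([1, 0, 1, 1])
assert not syndrome(c).any()   # a valid codeword has zero syndrome
c[2] ^= 1                      # flip one bit
assert syndrome(c).any()       # the single error is detected
```

Here the code's words form a 4-dimensional subspace of GF(2)^7, exactly the "linear codes are vector subspaces" statement above.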
The minimum distance of a perfect code (one whose packing and covering radii coincide) is odd. The Golay codes G23 (binary, [23,12,7]_{2}, 3-error-correcting) and G11 (ternary, [11,6,5]_{3}, 2-error-correcting) are perfect, systematic and linear (cyclic equivalents exist). Best's theorem: all perfect codes over any alphabet, with p ≥ 3 and p ≠ 6, 8, are equivalent either to Rep2(n) or to G23. Using the Fourier transform we can describe the Reed-Solomon code RSq(s) as the set of transforms of all polynomials of degree ≤ q - d - 1 or, if you will, of all the words with zeros from coordinate q - d onward.
[If C is a linear (n,k,d) code, then n - k ≥ d - 1 (Singleton bound). A linear (n,k,d) code is called maximum distance separable if d = n - k + 1.]  <a name="Cotterill"></a> Cotterill, R.;
Biophysics, an introduction.
Wiley, 2003.  Cover, T.M. & Thomas, J.A.;
Elements of information theory.
Wiley, 1991.

Entropy is the minimum descriptive complexity of a
random variable, and mutual information (a measure of interdependence
of variables) is the communication rate in
the presence of noise. Indeed, we consider Kolmogorov complexity (the
minimal description or program length) to be more fundamental than
Shannon entropy. It is the ultimate data compression and leads to a
logically consistent procedure for inference. [Occam's razor: the
simplest explanation is best.] The quantities H (entropy), I (information), C (capacity), D (relative entropy), K (Kolmogorov complexity) and W (doubling rate)
arise naturally in the following areas:
 data compression,
 data transmission,
 network information theory,
 ergodic theory,
 hypothesis testing,
 statistical mechanics,
 inference,
 gambling and investment,
 probability theory,
 complexity theory.
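The claim that entropy is the minimum descriptive complexity (and the limit of compression) is easy to illustrate numerically; a minimal sketch, my own rather than Cover & Thomas's code:

```python
# Shannon entropy H(X) = -sum p log2(p) of a discrete distribution, in bits.
from math import log2

def entropy(probs):
    """Shannon entropy in bits; the 0*log(0) terms are taken as 0."""
    return -sum(p * log2(p) for p in probs if p > 0)

# A fair coin carries exactly one bit per toss...
assert abs(entropy([0.5, 0.5]) - 1.0) < 1e-12
# ...while a biased coin carries less: its outcomes compress below 1 bit/symbol.
assert entropy([0.9, 0.1]) < 1.0
# A uniform source over 4 symbols needs exactly 2 bits per symbol.
assert entropy([0.25, 0.25, 0.25, 0.25]) == 2.0
```

The last line is the source-coding view: no lossless code can average fewer bits per symbol than the entropy.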
 Creighton, T.E.;
Proteins: Structures and molecular properties.
Freeman, 1997.
 French, A.P.;
Vibraciones y ondas (Curso de física del M.I.T.).
Reverté, 2000.
 Greub, W.;
Linear algebra.
Springer, 1981.  <a name="Kampis"></a>Kampis, G.;
Self-modifying systems in biology and cognitive science. A new framework for dynamics, information and complexity.
Pergamon, 1991.
 symbol alphabet = numbers
 admissible symbol strings = state manifolds
 grammar = constraints
 axioms = initial conditions
 rules of inference = vector field
 proof sequence = trajectory
 theorem = attractor
The Turing-Church thesis can be interpreted as an isomorphism between formal and dynamical systems:
Determinism refers to a computational procedure (formal domain, necessity in models) whereas causality refers to a relationship between things (natural domain, necessity in reality). In other words: according to this view, instead of asking why things happen, one should ask how they happen. It is not enough to reproduce things properly, an idea sometimes expressed by contrasting 'explanatory' and 'predictive' models, or 'Erklärung' and 'Verstehen', or by drawing a distinction between 'models' and 'metaphors' or 'simulations'. Statistical description is cumulative, whereas dynamical description is generative. Encodings must be done separately from dynamics; the levels of representation must not mix. Models cannot be read out directly; they have to be interpreted. A model will be called relevant if it is both adequate and interpretable. Models (as observables plus descriptions) are not reducible to descriptions (variables plus relations). A concrete situation where this principle applies is the replacement of a 'statistical' or inductive law (let us say, Kepler's laws) with a dynamical or deductive law (e.g. Newton's equations of motion). I/O systems are referred to logical monism, expressed by Zeno's paradoxes (viz., the system remains constant until something changes, and any change modifies the whole system: no one can swim twice in the same river). The encoding is the real memory of the system. Validity can be ensured only by considering complete information; but this does not mean a direct use of the complete information.
hierarchical sequences of questions:
With Gödel numbering (for instance, using the original symbols as exponents of a prime factorization) it becomes possible to encode and decode state transformations to and from states directly. The chess tree (a modest finite game) has about 10^{120} elements. However, what Waddington calls [in developmental biology] "morphogenetic fields" were not imagined as real fields but as nominal models which can perhaps be reinterpreted in terms of positional information. What distinguishes a mere label (a formal but meaningless object) from a symbol is its interpretation. Component-systems (a finite set of diverse and interacting entity blocks, such as linguistic or biological ones) have creative, unpredictable dynamics. Such structures [LEGO-like] were foreseen (but later abandoned) by J. von Neumann under the name 'kinematic automata'. The result [organized complexity] is neither simplicity nor disordered complexity but something "between the crystal and the smoke" (H. Atlan). Elements, sets, sets of sets, etc. belong to different types in Russellian set theory, and are not allowed to mix freely. This makes them qualitatively different. They can start to represent information in type structures, as is the case in von Neumann's definition of the integers, where every number corresponds to an element of a different level of the hierarchy [although current set theory is type-free, not qualitative]. Therefore, component-systems cannot have dynamic algorithms [because of unknown elements]. Von Neumann's self-reproducing automata possessed both excitable and non-excitable states, and the construction of automata by other automata was conceived as a growth process involving the transformation of non-excitable cells into excitable ones by special signals.
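The prime-exponent encoding mentioned above can be sketched directly; a toy example of my own, not Kampis's:

```python
# Gödel numbering: encode a sequence of symbols (small non-negative integers)
# as the exponents of consecutive primes, and decode by refactoring.

def primes(k):
    """First k primes by trial division (enough for a toy example)."""
    found = []
    n = 2
    while len(found) < k:
        if all(n % p for p in found):
            found.append(n)
        n += 1
    return found

def godel_encode(symbols):
    """Encode symbols s1..sk as p1^s1 * p2^s2 * ... (a prime factorization)."""
    n = 1
    for p, s in zip(primes(len(symbols)), symbols):
        n *= p ** s
    return n

def godel_decode(n, k):
    """Recover the k exponents from the prime factorization of n."""
    out = []
    for p in primes(k):
        e = 0
        while n % p == 0:
            n //= p
            e += 1
        out.append(e)
    return out

code = godel_encode([3, 1, 2])   # 2^3 * 3^1 * 5^2 = 600
assert code == 600
assert godel_decode(600, 3) == [3, 1, 2]
```

Unique factorization is what makes the decoding unambiguous, which is exactly why the encoding can serve as a "real memory" for state sequences.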
Wittgenstein: Logic, once established, may be used for describing formal implications, but the rules themselves do not follow any logic.
(Relative) complexity is not a number but a way of looking at things. A typical chaos-producing mechanism is the iterated 1D map (symbolic dynamics) x_{n+1} = a - x_{n}^{2}, x ∈ [-a, a], a ∈ [0, 2], which leads to chaos in a finite large interval of a. The most plastic representation of the idea of chaos is given by a taffy-pulling machine (Rössler, 1987; it makes cookies by filling and folding, cut & paste, the pastry automatically). This mathematical effect of chaos is best seen in the Bernoulli-shift transformation s: x_{n+1} = s(x_{n}) = 2x_{n} mod 1, x ∈ [0, 1] (that is, we multiply by two and cut off the part which grew bigger than one). Fractals generate an illusion of complexity. [x_{n+1} = a x_{n} (1 - x_{n})] This equation specifies those points, the Poincaré map, where the trajectory cuts the line [within a system with partition {0, 1}; that is, the whole space is cut into two parts: i.e., the only real rule is that after 0 there cannot be 0].
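Both maps quoted above are easy to iterate; a hedged numerical sketch (my own, with arbitrarily chosen starting points), showing boundedness of the quadratic map and the sensitive dependence of the Bernoulli shift:

```python
# The quadratic map x -> a - x^2 and the Bernoulli shift x -> 2x mod 1.

def quadratic(x, a=1.8, steps=60):
    """Iterate x_{n+1} = a - x_n^2; the interval [-a, a] is invariant."""
    for _ in range(steps):
        x = a - x * x
    return x

def bernoulli(x, steps=30):
    """Iterate x_{n+1} = 2 x_n mod 1 (double and drop the integer part)."""
    for _ in range(steps):
        x = (2 * x) % 1.0
    return x

# The quadratic orbit stays inside the invariant interval [-a, a].
assert abs(quadratic(0.3)) <= 1.8

# The Bernoulli shift doubles any perturbation at each step: two orbits
# starting 1e-10 apart are macroscopically separated after 30 iterations.
assert abs(bernoulli(0.1234567) - bernoulli(0.1234567 + 1e-10)) > 0.01
```

The shift map makes the "partition" remark above concrete: each iteration exposes one more binary digit of the initial condition, so the orbit is literally a symbol sequence.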
Badii utilized the idea that chaos is made up of combinations of the periodic orbits. A 'primitive' is defined by two conditions: it is periodic (i.e., in the infinite sequence there are arbitrarily long repetitions of it) and it cannot be broken down into other primitives. The [hierarchical] tree [structure] is constituted by the primitives and their admissible combinations (the first level, the primitive combination; 2nd level, pairwise combinations; nth level, n-ary combinations, and so on).
This procedure [Gödel numbering], which is itself algorithmic, ensures the existence of a dynamics without the use of any further information. All we need is a component-system which produces them by complexity-increasing procedures. The 'privileged zero' properties and the imaginary Turing machines that make knots in their tapes (or apply other tricks) can produce complexity beyond any fixed limit. New types of thinking machines, molecular computers, and evolutionary technology can be envisaged at this point.
For this theoretical framework, where all automata can be conceived as embedded configurations in some cellular space, von Neumann formulated the following:
 (1) Computability. When are automata logically universal? Can an individual automaton be logically universal?
 (2) Constructibility. Can an automaton be constructed by another automaton? What kinds of automata can be constructed? Does there exist an automaton that has universal constructing ability?
 (3) Self-reproduction. Does there exist a self-reproducing automaton in some cellular space? Do self-reproducing automata exist which can do other things besides being self-reproducing?
 Von Neumann also posed a fourth question, concerning the evolvability of automata.
 Von Neumann was able to structurally realize this system in a two-dimensional cellular automaton array with 29 states per cell, with a simple neighbourhood function where the next state of a cell depends on 5 cells: on itself and its four neighbours (up, down, right, and left). With this he gave a positive answer to (1)-(3).
 (Laing, 1977) He has constructed self-inspecting Turing machines that resemble molecules operating on each other and on themselves (a design related, at a distance, to Markov normal systems, and W. Stahl's molecular 'list processors').
 Leff, H.S. & Rex, A.F. (eds.);
Maxwell's demon: Entropy, information, computing.
Hilger, 1990. Purcell and Pound discovered (1951) the existence of negative temperature.
 Classical demons and orthogonality in Hilbert space.  Reality is not subdivided.
The information-bearing degrees of freedom of a computer interact with the thermal reservoir represented by the remaining degrees of freedom. This interaction plays two roles. First of all, it acts as a sink for the energy dissipation involved in the computation. This energy dissipation has an unavoidable minimum arising from the fact that the computer performs irreversible operations. Secondly, the interaction acts as a source of noise causing errors. In particular, thermal fluctuations give a supposedly switched element a small probability of remaining in its initial state, even after the switching force has been applied for a long time. It is shown, in terms of two simple models, that this source of error is dominated by one of two other error sources: (1) incomplete switching due to inadequate time allowed for switching, and (2) decay of stored information due to thermal fluctuations.
R. Landauer; Irreversibility and heat generation in the computing process. IBM J. Res. Dev. 5, 183-191 (1961).
The biosynthesis of messenger RNA is discussed as a physical example of reversible computation [as a temporary memory device]. For a typical irreversible computer, which throws away about one bit per operation, kT ln 2 is the approximate lower bound on the energy dissipation of such machines; for a logically reversible computer it is kT ln 1 = 0, by construction. In biological systems, apparently, the speed and flexibility of irreversible erasure is worth its extra cost in free energy (kT ln 4 per nucleotide, in this case).
C.H. Bennett; Logical reversibility of computation. IBM J. Res. Dev. 17, 525-532 (1973).
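These bounds are concrete numbers; a small sanity check of my own, using the CODATA value of Boltzmann's constant and an assumed temperature of 300 K:

```python
# Landauer's bound kT ln 2 per erased bit, and the kT ln 4 per-nucleotide
# figure quoted above, evaluated at an assumed T = 300 K.
from math import log

k = 1.380649e-23   # Boltzmann constant, J/K (CODATA)
T = 300.0          # assumed room temperature, K

landauer = k * T * log(2)        # minimum dissipation per irreversible bit erasure
per_nucleotide = k * T * log(4)  # erasure cost of a 4-letter (2-bit) symbol

assert 2.8e-21 < landauer < 2.9e-21                  # about 2.87e-21 J per bit
assert abs(per_nucleotide - 2 * landauer) < 1e-30    # ln 4 = 2 ln 2: two bits
```

The second assertion restates why a nucleotide costs kT ln 4: a 4-symbol alphabet carries exactly two bits per symbol.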
[...] This completes our description of how three indefinite capacity counters may be employed to represent the actions of a Turing tape: one counter for each right and left tape segment, and one counter for manipulating the contents of the first two.
[In order to represent two distinct numbers as a single number we employ Gödelization.] We have thus managed to carry out the desired tape transformations employing only two counters, and (save for matters of bookkeeping detail of the sort indicated earlier) the Minsky result can be obtained.
R. Laing; Maxwell's demon and computation. Phil. Sci. 41, 171-178 (1974).
The enzymatic apparatus of DNA replication, transcription, and translation appears to be nature's closest approach to a Brownian computer (i.e., a model allowing thermal noise to influence the trajectory so strongly that it becomes a random walk through the entire accessible, low-potential-energy portion of the computer's configuration space), dissipating 20-100 kT per step.
Because the intrinsic error rate is determined by the difference in barriers opposing correct and incorrect transitions, it is a function of the particular chemical hardware, and does not represent a fundamental thermodynamic limit. In practice it can be made arbitrarily small by increasing the size and complexity of the recognition sites (to increase the potential energy difference ΔE between correct and incorrect reaction paths), by lowering the temperature (to increase the Boltzmann ratio e^{ΔE/kT} of correct to error transitions without changing ΔE) and by making the apparatus larger and more massive (to reduce tunneling).
The excess [fourfold higher concentration] phosphate keeps the enzyme [polynucleotide phosphorylase] from running backwards and synthesizing random RNA, but it also means that the cycle of specific synthesis followed by nonspecific degradation must waste about kT ln 4 per nucleotide even in the limit of zero speed. For an organism that has already spent around 20 kT per nucleotide to produce RNA with near maximal speed and accuracy, the extra 1.4 kT is obviously a small price to pay for being able to dispose of the RNA in a summary manner, without taking it back to its birthplace.
Levin's and Chaitin's definition of algorithmic entropy: A program is self-delimiting if it needs no special symbol, other than the digits 0 and 1 of which it is composed, to mark its end; discrete objects other than binary strings, e.g., integers, or coarse-grained cells in a continuous phase space, may be defined by indexing them by binary strings in a standard way (as a Monte Carlo program describes distributions of macrostates).
C.H. Bennett; The thermodynamics of computation: a review. Int. J. Theor. Phys. 21, 905-940 (1982).
There really is no software, in the strict sense of disembodied information, but only inactive and relatively static hardware. Thus, the handling of information is inevitably tied to the physical universe. Evolution and the origin of life can be viewed as an optimization process, in which fluctuations (e.g., mutations) take us from one metastable ecology to a new one. [As with the oscillations of a laser, calling these temporal and spatial patterns "the only allowed behavior" seems strained.] We might, with equal justice, refer to the revolution of an electron around a hydrogen nucleus, or the rotation of a water wheel, as self-organization. [Noise can be a qualitative influence on the development of systems, as noise-induced transitions are invoked in the simulated annealing technique for optimization in the presence of complex constraints. In equilibrium, noise is defined by the temperature.]
R. Landauer; Computation: A fundamental physical view. Phys. Scr. 35, 88-95 (1987).
[Incomprehensible, but the following phrases are appealing:] E. Lubkin; Keeping the entropy of measurement: Szilard revisited.
C.H. Bennett; Notes on computation (198?).
One bit of information is somehow equivalent to k ln 2 units of entropy, or about 2.3 x 10^{-24} cal/Kelvin. Chemical Brownian computers such as RNA polymerase are of course quantum systems, but because of the high temperature and short thermal de Broglie wavelength, quantum effects are subtle and qualitative (e.g., zero-point and tunneling corrections to reaction rates) rather than quantitative. Feynman, instead of incorporating the direction of the computation (forward as opposed to backward) in the Hamiltonian, included it in the initial condition, which was now not a single basis state but rather a wavepacket-like superposition of basis states [i.e., as Markovian processes].
 Nei, M.;
Molecular evolutionary genetics.
Columbia University Press, 1987.
 Reichl, L.E.;
The transition to chaos. In conservative classical systems: Quantum manifestations.
Springer, 1992.
The book begins with discussions of nonlinear resonance, Noether's theorem, integrability, KAM theory, and a definition of chaotic behavior; it continues with a detailed discussion of area-preserving maps with emphasis on self-similarity, integrable and non-integrable quantum systems, spectral properties, path integrals, and periodically driven systems; and concludes by showing how to apply the ideas to stochastic systems.
[The Kolmogorov entropy (the sum of the positive Lyapunov exponents) is equivalent to the loss of information per unit time, and makes it possible to distinguish between chaotic (divergent), regular (quasi-convergent) and stochastic (equiprobable distributions) behaviors. Fluxes couple symmetrically (Onsager's reciprocity relation), with different tensorial character (scalar, vectorial, matricial) but specifically and anisotropically (the Curie-Prigogine principle). F. Montero & F. Morán; Biofísica. Procesos de autoorganización en biología. Universidad Complutense, 1992]
 Strang, G.;
Introduction to applied mathematics.
Wellesley, 1986.
 Westhead, D.R.; Parish, J.M. & Twyman, R.M.;
Bioinformatics.
BIOS, 2002.
 Asimov, I.;
Breve historia de la química.
Alianza, 1981.  du Sautoy, M.;
La música de los números primos.
Acantilado, 2007.  Hawking, S. (ed.);
God created the integers.
Penguin, 2005.  Gracián, E.;
Los números primos.
RBA, 2010.
Simple harmonic motion: sinusoidal vibration.
[z = a + jb]
Let us now broaden the interpretation of the symbol j [j^{2} = -1], regarding it as an instruction to apply a 90° counterclockwise rotation to the displacement that precedes it. [...] A magnitude of the form jb (with b real) is called pure imaginary. From the mathematical point of view this term is perhaps unfortunate, because in the extension of the number concept from real to complex, an "imaginary" component such as jb stands on an equal footing with a real component such as a. But when applied to the analysis of one-dimensional oscillations, this terminology fits perfectly, as we have already seen, the physically real and non-real parts of an imagined two-dimensional motion.
[cos θ + j sin θ = e^{jθ}]
This result is very important, mathematically speaking, because it provides a clear connection between plane geometry (represented by the trigonometric functions) and algebra (represented by the exponential function). R.P. Feynman regarded it as "an amazing jewel ... the most remarkable formula in mathematics". It was established by Leonhard Euler in 1748.
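Both Euler's formula and the rotation interpretation of j can be checked numerically; a quick sketch of my own:

```python
# Euler's formula e^{j*theta} = cos(theta) + j sin(theta), and j as a
# 90-degree counterclockwise rotation in the complex plane.
import cmath
import math

for theta in (0.0, 0.5, math.pi / 3, 2.0):
    lhs = cmath.exp(1j * theta)
    rhs = complex(math.cos(theta), math.sin(theta))
    assert abs(lhs - rhs) < 1e-12

# Multiplying by j rotates a displacement by 90° counterclockwise:
# j * (a + jb) = -b + ja.
z = 3 + 4j
assert 1j * z == complex(-4, 3)
```

The second check is exactly the "instruction to rotate" reading of j described in the excerpt.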
The resultant of two or more harmonic vibrations is simply the sum of the individual vibrations. If two simple harmonic motions have very similar frequencies, the combined disturbance exhibits what is called beating [the frequency of the combined oscillation is half the sum of the individual frequencies]. [...] It is remarkable, however, that a great variety of deformations of physical systems, among which we may include elongation, shortening, bending and torsion (or combinations thereof), result in restoring forces proportional to the displacement and therefore give rise to simple harmonic vibrations (or a superposition of harmonic vibrations).
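The beating statement follows from the identity cos A + cos B = 2 cos((A-B)/2) cos((A+B)/2); a numerical sketch of my own (the 440/444 Hz pair is an arbitrary example):

```python
# Beats: the sum of two equal-amplitude sinusoids at nearby frequencies f1, f2
# equals a carrier at the average frequency (f1+f2)/2 modulated by a slow
# envelope at (f1-f2)/2.
import math

f1, f2 = 440.0, 444.0  # example frequencies, Hz

for t in (0.0, 0.001, 0.0123, 0.5):
    s = math.cos(2 * math.pi * f1 * t) + math.cos(2 * math.pi * f2 * t)
    carrier = math.cos(2 * math.pi * (f1 + f2) / 2 * t)
    envelope = 2 * math.cos(2 * math.pi * (f1 - f2) / 2 * t)
    assert abs(s - envelope * carrier) < 1e-9
```

Audibly, the loudness rises and falls |f1 - f2| times per second, since the ear hears both signs of the envelope.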
[...] That is, not the dynamics but an external standardization makes replication sustainable (Rössler's conclusion to the Wigner paradox). Such a standardizing mechanism is easy to implement in trivial or 'pseudo' self-reproducing systems, where it is the reproducing (and not the reproduced) machine that does the whole process anyway. Component specificity is outside dynamics. It is a class characteristic. There is an inherent discreteness present here, a standardization made by the system. In such systems there is no premium for accuracy because the interactions are digitized. The origin of the world, the origin of music and the origin of arithmetic may thus have a common root with the origin of error-free replication, for which the natural units are necessary [...]. Now if there are such systems, logical and discrete categories are no longer just modes of description but modes of operation for them. (Logical operations seem to be a preferred mode of organized systems.) The easiest and most complete access to component information is ensured by a one-dimensional arrangement of the components.
The word itself [information] comes from the Latin informatio, which means illumination, exposition, outline, unfolding or 'body of knowledge'. Complexity only deals with syntactic properties (i.e., relationships between symbols). From this point of view [pragmatic information, introduced by E. von Weizsäcker] communication is conceived in the form of repeated cycles of communicational acts in which novelty and reinforcement ('Erstmaligkeit und Bestätigung') are communicated. Together they shape the value, the meaning and, through this, the syntactic information content of messages. [...] the question is whether there is any induced [behavioural] change in the system. If there is none, there is no information. Most authors agree that information is not law-like but event-like; it is no accident that Thom associates information with the emergence of forms, the latter being his answer to the problem of the emergence of events in law-like systems. Information corresponds to a selective (i.e. unequal) dissipation of energy over the degrees of freedom of a system (Conrad 1972). [...] Its thermodynamical implications are related to the principle of equipartition of equilibrium systems. Less entropy does not imply more information; it only means there is information [as Schrödinger related informational organization to order]. A machine is a prototypic mechanism system; there is a clear (etymological) relationship between the two words. In this respect, a machine can be defined as a definite set of relations over a fixed set of parts. It is the boundary conditions of machines that make them machines, systems amenable to a simplistic treatment [so teleonomic behaviour is realized by boundary conditions]. Continuous self-measurement of systems is adopted in certain interpretations of quantum mechanics because, for a quantum system to exist in a definite state, a reduction of the free wave function to a bound wave function is necessary. The notions of activation and inhibition can be good behavioural synonyms for the action of 'information' in systems. [Logic gates, with 2 inputs and 4 possible combinations of the inputs, are related to the physical limits of computation, because of logical irreversibility due to information erasure (dissipation of energy), as the "windows" through which the component-component interactions occur.] This [evolutionary] process ensures, through the achieved stages of replication, the stable existence of referential relations (which otherwise would disappear). It is this stable relationship, the stability of which has nothing to do with the nature of the signs involved, which makes it possible for a symbolic description to emerge in terms of signifier-signified relations.
Does mathematics impose constraints on physics?
Obviously, the answer is no !!?? "If there is a finite number of different symbols, or a method of representing any symbol by finite means, then a symbol can be translated into binary code, and so for each operation on words or symbols there is a corresponding computable function" (Johnson-Laird, 1983). [Lambda calculus: (i) reduction (makes a number from a function), (ii) expansion (makes a function from a number) and (iii) change of variable.] This inequivalence [between flowcharts and recursive functions] has to do with potentially infinite programs (recall again that in the finite domain everything is solvable). The theorem, due to M.S. Paterson and C.E. Hewitt (1970), is this: there are recursive programs which cannot be represented by flowcharts.
[Prout's hypothesis: H = 1.]
<a
href="../info/La%20prisa%20y%20el%20silencio.html">La prisa y el
silencio</a>
THE PRIME PROJECT SPECTRA (Golay codes and Gödel numbers):
• UNITY: Mendeleiev Suite (atoms/stars and notes)
• COMPLEXITY: Riemann Symphony (molecules/galaxies and harmony)
Michio Kaku (La Vanguardia, 18/V/2010) goes deeper into string theory, today the most plausible candidate for encompassing all the phenomena of the cosmos, down to its quantum scale. "Everything is vibration and every elementary particle a note; physics studies harmonies; chemistry, melodies. The Universe is a symphony!" The existence of parallel universes, or a multiverse, would be confirmed by the discovery of the S particle.
[from "El País", August 2005] Theoretical physicist Michio Kaku considers music an analogy for the cosmos:
 • subatomic particles are notes,
 • physics is harmony,
 • chemistry is melody, and
 • the universe, the complete symphony.
[Music is communication as a pure state, a language directed to
emotions.]
addenda:
 Sánchez Ramos, E.; Tablas de logaritmos, trigonométricas y de cálculo de intereses; Hernando, 1960 (twenty-second edition, stereotyped by galvanism and considerably enlarged).
 Flammarion, C.; Astronomía popular; Montaner y Simón, 1963. (Bethe cycle: pp. 249 & 529.)
 Honingh, A.; Measures of consonances in a goodness-of-fit model for equal-tempered scales. (http)
[The authors] have checked how well several equal temperament systems approximate the frequency ratios of just intonation. As test sets, we chose intervals from the just major scale and intervals from the harmonic series. To weight the intervals, Euler's Gradus function and Helmholtz's roughness function have been used. It turns out that for these cases a division of the octave into 12, 19, 41 or 53 would be a good choice.
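A check in the spirit of Honingh's question (my own sketch, not the paper's method) compares just-intonation ratios against the nearest step of an n-tone equal temperament:

```python
# How far (in cents, 1/100 of a 12-TET semitone) is a just ratio from the
# nearest step of an n-tone equal division of the octave?
import math

def best_error_cents(ratio, divisions):
    """Distance in cents from a just ratio to the nearest n-TET step."""
    cents = 1200 * math.log2(ratio)
    step = 1200 / divisions
    return abs(cents - round(cents / step) * step)

# The just perfect fifth 3/2 is matched to within 2 cents by 12-TET...
assert best_error_cents(3 / 2, 12) < 2
# ...while the just major third 5/4 misses by more than 13 cents in 12-TET
assert best_error_cents(5 / 4, 12) > 13
# and by less than 8 cents in 19-TET.
assert best_error_cents(5 / 4, 19) < 8
```

This already shows why 12 is good for fifths while larger divisions such as 19 improve the thirds, consistent with the conclusions quoted above.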
 VirtualDemo FFT: Adobe Audition 1.0 (Cool Edit Pro, Syntrillium), 1992-2003.
 [Everest, F.A. (1994). The Master Handbook of Acoustics. McGraw-Hill].
 <a href="http://www.utm.edu/research/primes/">The Prime Pages (prime number research, records and resources)</a>
 Gauss: The problem of distinguishing prime numbers from composite numbers and of resolving the latter into their prime factors is known to be one of the most important and useful in arithmetic.
 Primes are the building blocks (factors) of the positive integers (the prime factorization of an integer determines its properties). A rough estimate for the nth prime is n log n.
n log n.  A perfect number is one whose proper divisors sum to
the number itself. e.g. The number 6 has proper divisors 1, 2 and 3 and 1 + 2 + 3 = 6, 28 has divisors 1, 2, 4, 7 and 14 and 1 + 2 + 4 + 7 + 14
= 28.  Euclid
also showed that if the number 2^{n}  1 is prime then the number 2^{n1}(2^{n}  1) is a perfect number. The mathematician Euler (much later in 1747) was able to show that all even perfect numbers are of this form. It is not known to this day whether there are
any odd perfect numbers.  The next important developments were made by Fermat
at the beginning of the 17^{th} Century. He proved a speculation of Albert Girard that every prime number of the form 4 n + 1 can be written in a unique way as the sum of two squares and was able to show
how any number could be written as a sum of four squares.  Fermat's Little Theorem: This states that if p
is a prime then for any integer x we have x^{p} = x modulo p.
by n.
This proves one half of what has been called the Chinese hypothesis which dates from about 2000 years earlier, that an integer n is prime if and only if the number 2^{n}  2 is divisible  The statement that the density of primes is 1/log(n) is known as the Prime Number Theorem.
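The unproved half of the Chinese hypothesis is in fact false: some composites also divide 2^n - 2. A quick check of my own, using the classic counterexample n = 341:

```python
# Fermat's little theorem guarantees n | 2^n - 2 for prime n, but the
# converse fails: 341 = 11 * 31 is a Fermat pseudoprime to base 2.

def divides_2n_minus_2(n):
    """True if n divides 2^n - 2 (modular exponentiation keeps it fast)."""
    return pow(2, n, n) == 2 % n

assert divides_2n_minus_2(7)      # prime: guaranteed by the theorem
assert divides_2n_minus_2(341)    # composite, yet it passes the test
assert 341 == 11 * 31
assert not divides_2n_minus_2(9)  # most small composites do fail
```

This is why the divisibility test is only a primality *filter* (the basis of Fermat-style probabilistic tests), not a proof of primality.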
 Goldbach's and other conjectures: Every even n > 2 is the sum of two primes (Schinzel: every integer n > 17 is the sum of three distinct primes). Every odd n > 5 is the sum of three primes. Every even number is the difference of two primes. The prime gap following a prime p is bounded by a constant times (log p)^{2}.
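Goldbach's conjecture remains open, but a finite spot check is trivial; a sketch of my own over small even numbers:

```python
# A finite verification of Goldbach's conjecture for small even numbers.
# This of course proves nothing about the general case.

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

def goldbach_pair(n):
    """Return primes (p, q) with p + q = n and p <= q, or None if none exist."""
    for p in range(2, n // 2 + 1):
        if is_prime(p) and is_prime(n - p):
            return p, n - p
    return None

# Every even number from 4 to 1998 has at least one prime decomposition.
assert all(goldbach_pair(n) for n in range(4, 2000, 2))
assert goldbach_pair(100) == (3, 97)
```

The search returns the decomposition with the smallest prime, so e.g. 100 = 3 + 97 rather than 47 + 53.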
 Galileo: (n^{2}, 2n - 1) Euclidean space. Stevinus: f_{i}/f_{d} = p/4 reciprocal space.
 The number of electrons a shell (n) can hold is given by 2n^{2} (http).
excerpts from
<a href="http://www.dspguide.com/pdfbook.htm">"The
Scientist and Engineer's Guide to Digital Signal Processing"
(Steven W.
Smith, 2001)</a>
(Ch.5.) Linear systems
Even/Odd Decomposition. Equations for even/odd decomposition:
x_{E}[n] = (x[n] + x[N-n])/2
x_{O}[n] = (x[n] - x[N-n])/2
This may seem a strange definition of left-right symmetry, since N/2 - 1/2 (between two samples) is the exact center of the signal, not N/2. Likewise, this off-center symmetry means that sample zero needs special handling. This decomposition is part of an important concept in DSP called circular symmetry. It is based on viewing the end of the signal as connected to the beginning of the signal. When even and odd signals are viewed in this circular manner, there are actually two lines of symmetry, one at point x[N/2] and another at point x[0]. In an odd signal, point 0 and point N/2 always have a value of zero. In an even signal, point 0 and point N/2 are equal to the corresponding points in the original signal. The mathematics of Fourier analysis inherently views the signal as being circular, although it usually has no physical meaning in terms of where the data came from.
(Ch.11) Fourier Transform Pairs
Harmonics. If a signal is periodic with frequency f, the only frequencies composing the signal are integer multiples of f, i.e., f, 2f, 3f, 4f, etc. These frequencies are called harmonics. The first harmonic is f, the second harmonic is 2f, the third harmonic is 3f, and so forth. The first harmonic (i.e., f) is also given a special name, the fundamental frequency.
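The harmonics statement can be seen directly in a spectrum; a small numpy sketch of my own, not taken from Smith's book:

```python
# A periodic, non-sinusoidal signal has spectral energy only at integer
# multiples of its fundamental frequency f.
import numpy as np

N = 256
n = np.arange(N)
f = 8  # fundamental: 8 cycles per N samples

# A periodic signal built from the fundamental plus its 2nd and 3rd harmonics.
x = (np.sin(2 * np.pi * f * n / N)
     + 0.5 * np.sin(2 * np.pi * 2 * f * n / N)
     + 0.25 * np.sin(2 * np.pi * 3 * f * n / N))

spectrum = np.abs(np.fft.rfft(x))
peaks = set(np.nonzero(spectrum > 1.0)[0])
assert peaks == {f, 2 * f, 3 * f}   # energy only at f, 2f and 3f
```

Because each component completes a whole number of cycles in the N samples, every bit of energy lands exactly on one FFT bin; all other bins are at numerical noise level.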
(Ch.27) Data Compression
Huffman Encoding
The idea is to assign frequently used characters
fewer bits, and seldom used characters more bits. Huffman encoding
takes this idea to the extreme. In mathematical terms, the optimal
situation is
reached
when the number of bits used for each character is proportional to the
logarithm
of the character's probability of occurrence. A clever feature of
Huffman encoding is how the variable length codes can be packed
together. Now consider a Huffman encoded data stream, where each
character can have a variable number of
bits. How do you separate one character from the next? The answer lies
in the proper selection of the Huffman codes that enable the correct
separation. Each code consists of two parts: a number of zeros before a
one, and an optional binary code after the one. This allows the binary
data stream to be separated into codes without the need for delimiters
or other markers between the codes. The uncompression program looks at the stream of ones and zeros until a valid code is formed, and then starts over looking for the next character. The way that the codes are formed ensures that no ambiguity exists in the separation.
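The delimiter-free separation can be sketched with a generic tree-based Huffman coder. This builds codes by repeatedly merging the two least frequent symbols, rather than the particular zeros-then-a-one table described above; the text and all names are illustrative:

```python
import heapq
from collections import Counter

def huffman_codes(text):
    """Return {char: bitstring}; frequent characters get shorter codes."""
    counts = Counter(text)
    # Heap entries: (frequency, unique tie-breaker, {char: code-so-far}).
    heap = [(f, i, {c: ""}) for i, (c, f) in enumerate(counts.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        # Prefix the two subtrees with 0 and 1, then merge them.
        merged = {c: "0" + code for c, code in c1.items()}
        merged.update({c: "1" + code for c, code in c2.items()})
        heapq.heappush(heap, (f1 + f2, tie, merged))
        tie += 1
    return heap[0][2]

def encode(text, codes):
    return "".join(codes[c] for c in text)

def decode(bits, codes):
    """Read bits until a valid code forms, emit the character, start over."""
    inverse = {v: k for k, v in codes.items()}
    out, buf = [], ""
    for b in bits:
        buf += b
        if buf in inverse:
            out.append(inverse[buf])
            buf = ""
    return "".join(out)

text = "this is an example of huffman encoding"
codes = huffman_codes(text)
bits = encode(text, codes)
assert decode(bits, codes) == text  # round trip with no delimiters
```

Because no code is a prefix of another, the decoder never needs markers between characters, exactly as the excerpt claims.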
A more
sophisticated version of
the Huffman approach is called arithmetic encoding. In this scheme,
sequences of characters are represented by individual codes, according
to their probability of
occurrence. To implement Huffman or
arithmetic encoding, the compression and uncompression algorithms must
agree on the binary codes used to represent each character (or groups
of characters). This can be handled in one of two
ways. The simplest is to use a predefined encoding table that is always
the
same, regardless of the information being compressed. More complex
schemes use encoding optimized for the particular data being used. This
requires
that the encoding table be included in the compressed file for use by
the
uncompression program. Both methods are common.
<a name="LZW"></a>LZW compression
LZW compression is named after
its developers, A. Lempel and J. Ziv, with later modifications by Terry
A. Welch. It is the foremost technique for general purpose data
compression due to its simplicity and versatility. Typically, you can
expect LZW to compress text, executable code, and
similar data files to about one-half their original size. LZW also
performs
well when presented with extremely redundant data files, such as
tabulated
numbers, computer source code, and acquired signals. Compression ratios
of 5:1
are common for these cases. LZW is the basis of several personal
computer utilities that claim to "double the capacity of your hard
drive". LZW compression is always used in GIF image files, and offered
as an option in TIFF and PostScript.
LZW compression uses a code table. A common choice
is to provide 4096 entries in the table. Uncompression is achieved by
taking each code from the compressed file, and translating it through
the code table to find what character or characters it represents. The
LZW method achieves compression by using codes 256 through 4095 to
represent sequences of bytes. The longer the sequence assigned to a
single code, and the more often the sequence is repeated, the higher
the compression achieved. Although this is a simple approach, there are
two major obstacles that need to be overcome: (1) how to determine what
sequences should be in the code table, and (2) how to provide the
uncompression program the same code table used by the compression
program. The LZW algorithm exquisitely solves both these problems.
When the LZW program starts to encode a file, the
code table contains only the first 256 entries, with the remainder of
the table being blank. This means that the first codes going into the
compressed file are simply the single bytes from the input file being
converted to 12 bits. As the encoding continues, the LZW algorithm
identifies repeated sequences in the data, and adds them to the code
table. Compression starts the second time a sequence is encountered.
The key point is that a sequence from the input file is not added to
the code table until it has already been placed in the compressed file
as individual characters (codes 0 to 255). This is important because it
allows the uncompression program to reconstruct the code table directly
from the compressed data, without having to transmit the code table
separately.
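The encode/decode round trip can indeed be sketched in a few dozen lines. This minimal version lets the table grow without the 4096-entry cap and makes no attempt at the efficiency tricks discussed next; the key point is that the decompressor rebuilds the table from the codes themselves:

```python
def lzw_compress(data):
    """LZW-encode a byte string into a list of integer codes."""
    table = {bytes([i]): i for i in range(256)}
    next_code = 256
    result, seq = [], b""
    for byte in data:
        candidate = seq + bytes([byte])
        if candidate in table:
            seq = candidate
        else:
            result.append(table[seq])
            table[candidate] = next_code  # new sequence = old sequence + one byte
            next_code += 1
            seq = bytes([byte])
    if seq:
        result.append(table[seq])
    return result

def lzw_decompress(codes):
    """Rebuild the code table from the codes themselves; none is transmitted."""
    table = {i: bytes([i]) for i in range(256)}
    next_code = 256
    prev = table[codes[0]]
    out = [prev]
    for code in codes[1:]:
        if code in table:
            entry = table[code]
        else:
            # Special case: the code refers to the sequence being defined right now.
            entry = prev + prev[:1]
        out.append(entry)
        table[next_code] = prev + entry[:1]
        next_code += 1
        prev = entry
    return b"".join(out)

data = b"ABABABABABABAB" * 10
codes = lzw_compress(data)
assert lzw_decompress(codes) == data
assert len(codes) < len(data)  # redundant input compresses well
```

Note the special case in the decoder: it fires when the compressor emits a code in the very step that defines it, which is the one situation where the decoder's table is briefly one entry behind.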
Only a few dozen lines of code are
required for the most elementary LZW programs. The real difficulty lies
in the efficient management of the
code table. The brute force approach results in large memory
requirements
and a slow program execution. Several tricks are used in commercial LZW
programs to improve their performance. For instance, the memory problem
arises because it is not known beforehand how long each of the character
strings for each code will be. Most LZW programs handle this by taking
advantage of the redundant nature of the code table. This pattern
always holds: every code can be expressed as a previous code plus one
new character. The execution time of the compression algorithm is
limited by searching the code table to determine if a match is present.
In other words, don't assign the 4096 codes to sequential locations in memory. Rather, divide the memory into sections based on what sequences will be looked up.
 <img src="LZWmethod.jpg" title="" alt="" style="width: 850px; height: 503px;">
[from "El País", August 2005]
 subatomic particles are notes,
 physics is harmony,
 chemistry is melody, and
 the universe, the complete symphony.
some links to explore
elements of number theory online
 <a href="http://mathworld.wolfram.com/GoldbachConjecture.html">Goldbach Conjecture - from MathWorld</a>
 <a href="http://www.ieeta.pt/%7Etos/goldbach.html">Goldbach conjecture verification</a>
 <a href="http://www.maths.warwick.ac.uk/%7Emiles/MA426/">MA426 Elliptic curves</a>
 <a href="http://www.millersv.edu/%7Ebikenaga/numth/numnote.html">Notes on Elementary Number Theory</a>
 <a href="http://www.homeofprimes.com/StatisticPrimes.html">Prime Statistics</a>
various
 <a href="http://arxiv.org/abs/cond-mat/0303110">[cond-mat/0303110] Information Entropy and Correlations in Prime Numbers</a>
 <a href="http://arxiv.org/abs/cond-mat/0307301">[cond-mat/0307301] Fermat's last theorem, the Riemann hypothesis, and synergistic glasses</a>
 <a href="http://arxiv.org/abs/cond-mat/0309199">[cond-mat/0309199] Small world effect in natural numbers network</a>
 <a href="http://arxiv.org/abs/cond-mat/0309251">[cond-mat/0309251] Toward a dynamical model for prime numbers</a>
 <a href="http://arxiv.org/abs/cond-mat/0310079">[cond-mat/0310079] On metric structure of ultrametric spaces</a>
 <a href="http://arxiv.org/abs/cond-mat/0310148">[cond-mat/0310148] Hidden structure in the randomness of the prime number sequence</a>
 <a href="http://arxiv.org/abs/cond-mat/0310317">[cond-mat/0310317] The Easiest Hard Problem: Number Partitioning</a>
 <a href="http://arxiv.org/abs/cond-mat/0311158">[cond-mat/0311158] Microscopic realizations of the Trap Model</a>
 <a href="http://arxiv.org/abs/math-ph/0310016">[math-ph/0310016] Thermodynamics of the Farey Fraction Spin Chain</a>
 <a href="http://arxiv.org/abs/math.NT/0404128">[math/0404128] From Physics to Number Theory via Noncommutative Geometry. Part I: Quantum Statistical Mechanics of Q-lattices</a>
 <a href="http://www.sciencedirect.com/science?_ob=ArticleURL&_udi=B6TVG491BHNN9&_user=625012&_handle=WWAAAWVMsSAYWAUUWAUZCBWDAEAWAZYZWBVWVU&_fmt=full&_coverDate=10%2F01%2F2003&_rdoc=4&_orig=browse&_srch=%23toc%235534%232003%23996719998%23456513%21&_cdi=5534&_artOutline=Y&view=c&_acct=C000031722&_version=1&_urlVersion=0&_userid=625012&md5=73dcc8833d1df95837389c558beb728a#sec1">ScienceDirect - Physica A: Statistical Mechanics and its Applications: Theory of analogous force on number sets</a>
 <a href="http://scitation.aip.org/getabs/servlet/GetabsServlet?prog=normal&id=PLEEE8000069000003036106000001&idtype=cvips&gifs=Yes">Families and clustering in a natural numbers network</a>
 <a href="http://scitation.aip.org/getabs/servlet/GetabsServlet?prog=normal&id=PLEEE8000068000002026206000001&idtype=cvips&gifs=Yes">Zeta function zeros, powers of primes, and quantum chaos</a>
 <a href="http://www.sciencedirect.com/science?_ob=ArticleURL&_udi=B6TVG4CNGJB02&_user=625012&_handle=BWAAWCVMsSAYZAUUWAAUYAYDBBVAAUCDZYABVYCBYUUYYBCVU&_fmt=summary&_coverDate=10%2F01%2F2004&_rdoc=43&_orig=browse&_srch=%23toc%235534%232004%23996589999%23513146%21&_cdi=5534&view=c&_acct=C000031722&_version=1&_urlVersion=0&_userid=625012&md5=a00c3e7da0dd4632a06b47ca077b3bfa">The gaps between the gaps: some patterns in the prime number sequence</a>
 <a href="http://arxiv.org/abs/math-ph/0503030">[math-ph/0503030] Generalized Number Theoretic Spin Chain - Connections to Dynamical Systems and Expectation Values</a>
 <a href="http://www.arxiv.org/abs/quant-ph/0507201">[quant-ph/0507201] Fermi-Dirac statistics and the number theory</a>
 <a href="http://www.arxiv.org/abs/cond-mat/0507662">[cond-mat/0507662] Cluster Approximation for the Farey Fraction Spin Chain</a>
</body></html>