Talk:Molecular computing

From OpenWetWare

  • Please contact directly: ramonrocabcn@gmail.com (sorry, mailing was previously unavailable: try again)

http://openwetware.org/wiki/Molecular_computing
http://openwetware.org/wiki/User_talk:Ramon_Roca
http://openwetware.org/wiki/Talk:Molecular_computing
http://www.youtube.com/user/rmnDeliriaLegitur (index)

BIBLIO

  1. Ayers, R.U.;
    Information, entropy and progress. A new evolutionary paradigm.
    American Institute of Physics, 1994.

          Consciousness, from computer science and cognitive psychology, is a kind of prioritizing metafunction that selects from the cacophony of internal signals and creates a linear real-time narrative of what's going on, moment by moment; it appears to be a natural consequence of information exchange among the elements of a distributed processing system.
          Any arbitrary shape can be regarded as a locus of intersecting surfaces of nth order (generated in terms of Fourier descriptors). The total amount of information embodied in simple shapes is determined by the shape category, the number of surfaces and the form or order of the surface-defining equations (e.g., quadratic, cubic, etc.).


  2. Brunat Blay, J.M. & Ventura Capell, E.;
    Informació i codis.
    Universitat Politècnica de Catalunya, 2001.

          In general, if A is an alphabet and n ≥ 1 is an integer, the set C = A^n is a code. Factoring a word of C* consists of splitting off symbols n at a time, consecutively. Colloquially but significantly, we can say that entropy is the limit of compression.
          Shannon's theorem: In general, if a block code C has M = q^k words, we can put it in bijection with the total code of length k over the same alphabet. Thus each word of C carries k = log_q M information digits (which determine the word) and n - k redundant ones. The information rate R(C), or amount of information per symbol, is (log_q M)/n; the redundancy rate is 1 - R(C). Colloquially, a good code, in the sense of a low probability of error, is expensive in the sense of being costly to transmit because of its great length. A code (with natural restrictions) chosen at random is reasonably likely to have these properties.
          Unfortunately, the proof is not constructive, ... Moreover, if a sufficiently small probability of error is wanted, the theorem implies that we might have to increase the length n to impractical values.
          The order of a finite field is its cardinality, that is, the number of elements it has. Linear codes are vector subspaces; [...] finally, orthogonality is the foundation of the so-called control (parity-check) matrix of a code. [Over a finite field, orthogonality has certain properties different from the ordinary ones.] For example, a nonzero vector can be perpendicular to itself. A linear code is defined by giving a basis for it (generator matrix) or by giving a homogeneous system of linear equations with a control matrix of the code (which generates the dual, orthogonal, code).
          The minimum distance of a perfect code (one whose packing and covering radii coincide) is odd. The Golay codes G23 (binary, [23,12,7], 3-error-correcting) and G11 (ternary, [11,6,5], 2-error-correcting) are perfect, systematic and linear (cyclic equivalents exist). Best's theorem: all perfect codes over any alphabet with p ≥ 3 and p ≠ 6, 8 are equivalent either to Rep2(n) or to G23. Using the Fourier transform we can describe the Reed-Solomon code RSq(s) as the set of transforms of all polynomials of degree ≤ q-d-1 or, equivalently, of all words with zeros from coordinate q-d onward.

        [If C is a linear (n,k,d)-code, then n - k ≥ d - 1 (Singleton bound). A linear (n,k,d)-code is called maximum distance separable if d = n - k + 1.]
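
        The generator-matrix and control-matrix machinery quoted above can be made concrete with the classic [7,4,3] binary Hamming code. This is an illustrative sketch, not from the source; the matrices are the standard systematic form G = [I | P], H = [P^T | I], with all arithmetic mod 2:

```python
# The [7,4,3] binary Hamming code as a linear code: a generator matrix G
# (a basis of the code) and a parity-check (control) matrix H (which
# generates the dual code). Every codeword c satisfies H . c^T = 0.

G = [  # generator matrix: codeword = message . G (mod 2)
    [1, 0, 0, 0, 1, 1, 0],
    [0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]
H = [  # control matrix: a nonzero syndrome flags an error
    [1, 1, 0, 1, 1, 0, 0],
    [1, 0, 1, 1, 0, 1, 0],
    [0, 1, 1, 1, 0, 0, 1],
]

def encode(msg):
    """Multiply the 4-bit message by G over GF(2)."""
    return [sum(m * g for m, g in zip(msg, col)) % 2 for col in zip(*G)]

def syndrome(word):
    """H . word^T over GF(2); all zeros means a valid codeword."""
    return [sum(h * w for h, w in zip(row, word)) % 2 for row in H]

print(encode([1, 0, 1, 1]))  # 4 information digits + 3 redundant ones
```

        Here k = 4, n = 7, so the information rate R(C) = k/n = 4/7, and the minimum distance d = 3 respects the Singleton bound d ≤ n - k + 1 = 4.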

  3. Cotterill, R.;
    Biophysics, an introduction.
    Wiley, 2003.

  4. Cover, T.M. & Thomas, J.A.;
    Elements of information theory.
    Wiley, 1991.

          Entropy is the minimum descriptive complexity of a random variable, and mutual information (a measure of the interdependence of variables) is the communication rate in the presence of noise. Indeed, we consider Kolmogorov complexity (the minimal description or program length) to be more fundamental than Shannon entropy. It is the ultimate data compression and leads to a logically consistent procedure for inference. [Occam's razor: the simplest explanation is best.] The quantities H (entropy), I (information), C (capacity), D (relative entropy), K (Kolmogorov complexity), W (doubling rate), arise naturally in the following areas:

        • data compression,
        • data transmission,
        • network information theory,
        • ergodic theory,
        • hypothesis testing,
        • statistical mechanics,
        • inference,
        • gambling and investment,
        • probability theory,
        • complexity theory.
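
        As a minimal numeric companion to the quantities H and D listed above (my sketch, not from the book), both can be computed directly for discrete distributions:

```python
from math import log2

def entropy(p):
    """Shannon entropy H(X), in bits, of a discrete distribution."""
    return -sum(x * log2(x) for x in p if x > 0)

def kl(p, q):
    """Relative entropy D(p||q) between two distributions, in bits."""
    return sum(x * log2(x / y) for x, y in zip(p, q) if x > 0)

# A fair coin carries exactly 1 bit; a biased coin carries less, and
# D(p||q) = 0 exactly when the two distributions coincide.
print(entropy([0.5, 0.5]), entropy([0.9, 0.1]))
print(kl([0.5, 0.5], [0.9, 0.1]))
```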

  5. Creighton, T.E.;
    Proteins: Structures and molecular properties.

    Freeman, 1997.

  6. French, A.P.;
    Vibraciones y ondas (Curso de física del M.I.T.).
    Reverté, 2000.

    •     Simple harmonic motion: sinusoidal vibration. [z = a + jb] Let us now extend the interpretation of the symbol j [j^2 = -1], regarding it as an instruction to apply a 90° counterclockwise rotation to the displacement it precedes. [...] A quantity of the form jb (with b real) is called pure imaginary. From the mathematical point of view this term is perhaps unfortunate, because in extending the concept of number from real to complex, an "imaginary" component such as jb stands on an equal footing with a real component such as a. But when applied to the analysis of one-dimensional oscillations, this terminology fits perfectly, as we have seen, the physically real and non-real parts of an imagined two-dimensional motion.
          [cos θ + j sin θ = e^(jθ)] This result is very important, mathematically speaking, because it provides a clear connection between plane geometry (represented by the trigonometric functions) and algebra (represented by the exponential function). R.P. Feynman regarded it as "an amazing jewel ... the most remarkable formula in mathematics". It was established by Leonhard Euler in 1748.
          The resultant of two or more harmonic vibrations is simply the sum of the separate vibrations. If two simple harmonic motions have very similar frequencies, the combined disturbance exhibits what is called beating [the resultant oscillates at half the sum of the individual frequencies, its amplitude modulated at half the difference]. [...] It is remarkable, however, that a great variety of deformations of physical systems, including stretching, compression, bending and torsion (or combinations thereof), give rise to restoring forces proportional to the displacement and therefore produce simple harmonic vibrations (or a superposition of harmonic vibrations).
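
          The beat phenomenon can be checked numerically. The sketch below (my example, not the book's) verifies the identity cos a + cos b = 2 cos((a+b)/2) cos((a-b)/2): the resultant oscillates at the average frequency while its amplitude is modulated at half the difference, so the ear hears beats at the full difference frequency.

```python
from math import cos, pi

# Two close frequencies (Hz): the superposition beats at |f1 - f2| = 4 Hz.
f1, f2 = 440.0, 444.0

def superposition(t):
    """Direct sum of the two harmonic vibrations."""
    return cos(2*pi*f1*t) + cos(2*pi*f2*t)

def carrier_times_envelope(t):
    """Carrier at the average frequency, envelope at half the difference."""
    return 2 * cos(2*pi*(f1 + f2)/2 * t) * cos(2*pi*(f1 - f2)/2 * t)

print(superposition(0.01), carrier_times_envelope(0.01))
```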

  7. Greub, W.;
    Linear algebra.
    Springer, 1981.

  8. Kampis, G.;
    Self-modifying systems in biology and cognitive science. A new framework for dynamics, information and complexity.
    Pergamon, 1991.

          The Turing-Church thesis can be interpreted as an isomorphism between formal and dynamic systems:
        • symbol alphabet = numbers
        • admissible symbol strings = state manifolds
        • grammar = constraints
        • axioms = initial conditions
        • rules of inference = vector field
        • proof sequence = trajectory
        • theorem = attractor

        Determinism refers to a computational procedure (formal domain, necessity in models) whereas causality refers to a relationship between things (natural domain, necessity in reality). In other words: according to this view, instead of asking why things happen, one should ask how they happen. It is not enough to reproduce things properly - an idea sometimes expressed by contrasting 'explanatory' and 'predictive' models, or 'Erklärung' and 'Verstehen', or by drawing a distinction between 'models' and 'metaphors' or 'simulations'. Statistical description is cumulative, whereas dynamical description is generative. Encodings must be done separately from dynamics; the levels of representation must not mix. Models cannot be read out directly; they have to be interpreted. A model will be called relevant if it is both adequate and interpretable. Models (as observables plus descriptions) are not reducible to descriptions (variables plus relations). A concrete situation where this principle applies is the replacement of a 'statistical' or inductive law (let us say, Kepler's laws) with a dynamical or deductive law (e.g. Newton's equations of motion). I/O systems are referred to logical monism, expressed by Zeno's paradoxes (viz., the system remains constant until something changes, and a change modifies the whole system - no one can swim twice in the same river). The encoding is the real memory of the system. Validity can be ensured only by considering complete information; but that does not mean a direct use of the complete information.
        With Gödel numbering (for instance, using the original symbols as exponents of a prime factorization) it becomes possible to encode and decode state transformations to and from states directly. The chess tree (a modest finite game) has about 10^120 elements. However, what Waddington calls [in developmental biology] "morphogenetic fields" were not imagined as real fields but as nominal models which can perhaps be reinterpreted in terms of positional information. What distinguishes a mere label (a formal but meaningless object) from a symbol is its interpretation. Component systems (a finite set of diverse and interacting entity blocks, such as linguistic or biological ones) have creative, unpredictable dynamics. Such structures [LEGO-like] were foreseen (but later abandoned) by J. von Neumann under the name 'kinematic automata'. The result [organized complexity] is neither simplicity nor disordered complexity but something "between the crystal and the smoke" (H. Atlan). Elements, sets, sets of sets, etc. belong to different types in Russellian set theory, and are not allowed to mix freely. This makes them qualitatively different. They can start to represent information in type structures, as is the case in von Neumann's definition of the integers, where every number corresponds to an element of a different level of the hierarchy [although current set theory is type-free, not qualitative]. Therefore, component-systems cannot have dynamic algorithms [because of unknown elements]. Von Neumann's self-reproducing automata possessed both excitable and nonexcitable states, and the construction of automata by other automata was conceived as a growth process involving the transformation of nonexcitable cells into excitable ones by special signals.
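
        Gödel numbering as sketched here (symbols as exponents of a prime factorization) is easy to make concrete. In this illustrative sketch (mine, not Kampis's), symbols are assumed to be small positive integers and the prime list bounds the sequence length:

```python
# Gödel numbering: the i-th symbol of a sequence becomes the exponent
# of the i-th prime, so a whole sequence is packed into one integer.
PRIMES = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]

def godel_encode(symbols):
    """Encode a sequence of small positive integers as one number."""
    n = 1
    for p, s in zip(PRIMES, symbols):
        n *= p ** s
    return n

def godel_decode(n, length):
    """Recover the exponents by repeated division by each prime."""
    out = []
    for p in PRIMES[:length]:
        e = 0
        while n % p == 0:
            n //= p
            e += 1
        out.append(e)
    return out

print(godel_encode([3, 1, 2]))  # 2^3 * 3^1 * 5^2 = 600
```
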
        Wittgenstein: Logic, once established, may be used for describing formal implications, but the rules themselves do not follow any logic.
        (Relative) Complexity is not a number but a way of looking at things. A typical chaos-producing mechanism is the iterated 1D map (symbolic dynamics) x_{n+1} = a - x_n^2, with x in [-a, a] and a in [0, 2], which leads to chaos in a finite, large interval of a. The most vivid representation of the idea of chaos is by means of a taffy-pulling machine (Rössler, 1987; it makes cookies by filling and folding -cut&paste- the pastry automatically). This mathematical effect of chaos is best seen in the Bernoulli shift transformation s: x_{n+1} = s(x_n) = 2 x_n mod 1, with x in [0, 1] (that is, we multiply by two and cut off the part which grew bigger than one). Fractals generate an illusion of complexity. [x_{n+1} = a x_n (1 - x_n)] This equation specifies those points, the Poincaré map, where the trajectory cuts the line [within a system with partition {0, 1}; that is, the whole space is cut into two parts: i.e., the only real rule is that after 0 there cannot be 0].
        Badii utilized the idea that chaos is made up of combinations of the periodic orbits. A 'primitive' is defined by two conditions: it is periodic (i.e., in the infinite sequence there are arbitrarily long repetitions of it) and it cannot be broken down into other primitives. The [hierarchical] tree [structure] is constituted by the primitives and their admissible combinations (the first level, the primitives themselves; the 2nd level, pairwise combinations; the n-th level, n-ary combinations, and so on).
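
        The maps quoted above can be iterated directly. A small sketch (mine, not Kampis's) of the Bernoulli shift and the logistic map, with the two-cell symbolic-dynamics partition; exact rationals are used where floating point would obscure a periodic orbit:

```python
from fractions import Fraction

def bernoulli(x):
    """Bernoulli shift: multiply by two, drop the integer part."""
    return (2 * x) % 1

def logistic(x, a=4.0):
    """The quadratic map in its logistic form x -> a x (1 - x)."""
    return a * x * (1 - x)

def itinerary(x, steps, f=bernoulli):
    """Symbolic dynamics: partition the interval into two cells, 0 and 1."""
    bits = []
    for _ in range(steps):
        bits.append(0 if x < 0.5 else 1)
        x = f(x)
    return bits

# 1/3 lies on a period-2 orbit of the shift; its itinerary alternates.
print(itinerary(Fraction(1, 3), 8))
```

        Sensitive dependence shows up immediately: two starting points a billionth apart produce itineraries that disagree within a few dozen doublings.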
        This procedure [Gödel numbering], which is itself algorithmic, ensures the existence of a dynamics without the use of any further information. All we need is a component-system which produces them by complexity-increasing procedures. The 'privileged zero' properties and the imaginary Turing machines that make knots in their tapes (or apply other tricks) can produce complexity beyond any fixed limit. New types of thinking machines, molecular computers, and evolutionary technology can be envisaged at this point.
        For this theoretical framework, where all automata can be conceived as embedded configurations in some cellular space, von Neumann formulated the following hierarchical sequences of questions:
    • (1) Computability. When are automata logically universal? Can an individual automaton be logically universal?
    • (2) Constructibility. Can an automaton be constructed by another automaton? What kind of automata can be constructed? Does there exist an automaton that has universal constructing ability?
    • (3) Self-reproduction. Does there exist a self-reproducing automaton in some cellular space? Do self-reproducing automata exist which can do other things besides being self-reproducing?
    • Von Neumann also posed a fourth question, concerning the evolvability of automata.
      • Von Neumann was able to structurally realize this system in a two-dimensional cellular automaton array with 29 states per cell, with a simple neighbourhood function where the next state of a cell depends on 5 cells: itself and its four neighbours (up, down, right, and left). With this he gave a positive answer to (1)-(3).
      • (Laing, 1977) He has constructed self-inspecting Turing machines, that resemble molecules operating on each other and on themselves (a design related, at a distance, to Markov normal systems, and W. Stahl's molecular 'list processors').
        [...] That is, not the dynamics but an external standardization makes replication sustainable (Rössler's conclusion to the Wigner paradox). Such a standardizing mechanism is easy to implement in trivial or 'pseudo'-self-reproducing systems, where it is the reproducing (and not the reproduced) machine that does the whole process anyway. Component specificity is outside dynamics. It is a class-characteristic. There is an inherent discreteness present here, a standardization made by the system. In such systems there is no premium for accuracy because the interactions are digitized. The origin of the world, the origin of music and the origin of arithmetic may thus have a common root with the origin of error-free replication, to which the natural units are necessary [...]. Now if there are such systems, logical and discrete categories are no longer just modes of description but modes of operation for them. (Logical operations seem to be a preferred mode of organized systems.) The easiest and most complete access to component information is ensured by a one-dimensional arrangement of the components.
        The word itself [information] comes from the Latin informatio, which means illumination, exposition, outline, unfolding or 'body of knowledge'. Complexity only deals with syntactic properties (i.e., relationships between symbols). From this point of view [pragmatic information, introduced by E. von Weizsäcker] communication is conceived in the form of repeated cycles of communicational acts in which novelty and reinforcement ('Erstmaligkeit und Bestätigung') are communicated. Together they shape the value, the meaning and, through this, the syntactic information content of messages. [...] the question is whether there is any induced [behavioural] change in the system. If there is none, there is no information. Most authors agree that information is not law-like but event-like; it is no accident that Thom associates information with the emergence of forms, the latter being his answer to the problem of the emergence of events in law-like systems. Information corresponds to a selective (i.e. unequal) dissipation of energy over the degrees of freedom of a system (Conrad, 1972). [...] Its thermodynamical implications are related to the principle of equipartition of equilibrium systems. Less entropy does not imply more information; it only means there is information [as Schrödinger related information -organization- to order]. A machine is a prototypical mechanistic system; there is a clear (etymological) relationship between the two words. In this respect, a machine can be defined as a definite set of relations over a fixed set of parts. It is the boundary conditions of machines that make them machines, systems amenable to a simplistic treatment [so teleonomic behaviour is realized by boundary conditions]. Continuous self-measurement of systems is adopted in certain interpretations of quantum mechanics because, for a quantum system to exist in a definite state, a reduction of the free wave function to a bound wave function is necessary.
The notion of activation and inhibition can provide good behavioural synonyms for the action of 'information' in systems. [Logical gates -2 inputs and 4 possible combinations of the inputs- are related to the physical limits of computation, because of logical irreversibility due to information erasure (dissipation of energy), as the "windows" through which the component-component interactions occur.] This [evolutionary] process ensures, through the achieved stages of replication, the stable existence of referential relations (which otherwise would disappear). It is this stable relationship, the stability of which has nothing to do with the nature of the signs involved, which makes it possible for a symbolic description to emerge in terms of signifier-signified relations.
        Does mathematics impose constraints on physics? Obviously, the answer is no [[!!??]]. "If there is a finite number of different symbols, or a method of representing any symbol by finite means, then a symbol can be translated into binary code, and so for each operation on words or symbols there is a corresponding computable function" (Johnson-Laird, 1983). [Lambda-calculus: (i) reduction (makes a number from a function), (ii) expansion (makes a function from a number) and (iii) change of variable.] This inequivalence [between flowcharts and recursive functions] has to do with potentially infinite programs (Recall again that in the finite domain everything is solvable). The theorem, due to M.S. Paterson and C.E. Hewitt (1970), is this: there are recursive programs which cannot be represented by flowcharts of a constant number of variables.

  9. Leff, H.S. & Rex A.F. (ed.);
    Maxwell's demon: Entropy, information, computing.
    Hilger, 1990.

          The information-bearing degrees of freedom of a computer interact with the thermal reservoir represented by the remaining degrees of freedom. This interaction plays two roles. First of all, it acts as a sink for the energy dissipation involved in the computation. This energy dissipation has an unavoidable minimum arising from the fact that the computer performs irreversible operations. Secondly, the interaction acts as a source of noise causing errors. In particular, thermal fluctuations give a supposedly switched element a small probability of remaining in its initial state, even after the switching force has been applied for a long time. It is shown, in terms of two simple models, that this source of error is dominated by one of two other error sources: (1) incomplete switching due to inadequate time allowed for switching, and (2) decay of stored information due to thermal fluctuations.
      R. Landauer; Irreversibility and heat generation in the computing process. IBM J. Res. Dev. 5, 183-191 (1961).

          The biosynthesis of messenger RNA is discussed as a physical example of reversible computation [as a temporary memory device]. For a typical irreversible computer, which throws away about one bit per operation, kT ln 2 is the approximate lower bound on the energy dissipation of such machines; for a logically reversible computer, it is kT ln 1 = 0, by construction. In biological systems, apparently, the speed and flexibility of irreversible erasure are worth its extra cost in free energy (kT ln 4 per nucleotide, in this case).
      C.H. Bennett; Logical reversibility of computation. IBM J. Res. Dev. 17, 525-532 (1973).

          [...] This completes our description of how three indefinite capacity counters may be employed to represent the actions of a Turing tape: one counter for each right and left tape segment, and one counter for manipulating the contents of the first two.
      [In order to represent two distinct numbers as a single number we employ Gödelization.] We have thus managed to carry out the desired tape transformations employing only two counters and (save for matters of bookkeeping detail of the sort as indicated earlier) the Minsky result can be obtained.
      R. Laing; Maxwell's demon and computation. Phil. Sci. 41, 171-178 (1974).

          The enzymatic apparatus of DNA replication, transcription, and translation appears to be nature's closest approach to a Brownian computer (i.e., a model allowing thermal noise to influence the trajectory so strongly that it becomes a random walk through the entire accessible, low-potential-energy portion of the computer's configuration space), dissipating 20-100 kT per step.
      Because the intrinsic error rate is determined by the difference in barriers opposing correct and incorrect transitions, it is a function of the particular chemical hardware, and does not represent a fundamental thermodynamic limit. In practice it can be made arbitrarily small by increasing the size and complexity of the recognition sites (to increase the potential energy difference ΔE between correct and incorrect reaction paths), by lowering the temperature (to increase the Boltzmann ratio e^(ΔE/kT) of correct to error transitions without changing ΔE) and by making the apparatus larger and more massive (to reduce tunneling).
      The excess [fourfold higher concentration] phosphate keeps the enzyme [polynucleotide phosphorylase] from running backwards and synthesizing random RNA, but it also means that the cycle of specific synthesis followed by nonspecific degradation must waste about kT ln 4 per nucleotide even in the limit of zero speed. For an organism that has already spent around 20 kT per nucleotide to produce RNA with near maximal speed and accuracy, the extra 1.4 kT is obviously a small price to pay for being able to dispose of the RNA in a summary manner, without taking it back to its birthplace.
          Levin and Chaitin's definition of algorithmic entropy: A program is self-delimiting if it needs no special symbol, other than the digits 0 and 1 of which it is composed, to mark its end; discrete objects other than binary strings, e.g., integers, or coarse-grained cells in a continuous phase space, may be defined by indexing them by binary strings in a standard way (as a Monte Carlo program describes distributions of macrostates).
      C.H. Bennett; The thermodynamics of computation - a review. Int. J. Theor. Phys. 21, 905-940 (1982).
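
          A quick back-of-the-envelope check of the figures quoted in these abstracts (my arithmetic, using the SI value of Boltzmann's constant): kT ln 2 per erased bit and kT ln 4 per nucleotide at room temperature.

```python
from math import log

k_B = 1.380649e-23       # Boltzmann constant, J/K (exact SI value)
T = 300.0                # room temperature, K

bit_cost = k_B * T * log(2)          # Landauer bound per erased bit
nucleotide_cost = k_B * T * log(4)   # the kT ln 4 quoted for RNA disposal

print(f"{bit_cost:.2e} J/bit")       # about 2.87e-21 J
```

          Note that kT ln 4 = 2 kT ln 2, and ln 4 ≈ 1.386, which is where the "extra 1.4 kT" in the quotation comes from.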

          There really is no software, in the strict sense of disembodied information, but only inactive and relatively static hardware. Thus, the handling of information is inevitably tied to the physical universe. Evolution and the origin of life can be viewed as an optimization process, in which fluctuations (e.g., mutations) take us from one metastable ecology to a new one. [As oscillations of a laser, under conditions where these temporal and spatial patterns are the only allowed behavior, seems strained.] We might, with equal justice, refer to the revolution of an electron around a hydrogen nucleus, or the rotation of a water wheel, as self-organization. [Noise can be a qualitative influence on the development of systems, as noise-induced transitions are invoked in the simulated annealing technique for optimization in the presence of complex constraints. In equilibrium, noise is defined by the temperature.]
      R. Landauer; Computation: A fundamental physical view. Phys. Scr. 35, 88-95 (1987).

          [Largely incomprehensible; but the following phrases are appealing:]
        • Purcell and Pound discovered (1951) the existence of negative temperature.
        • Classical demons and orthogonality in Hilbert space.
        • Reality is not subdivided.
      E. Lubkin; Keeping the entropy of measurement: Szilard revisited.

          One bit of information is somehow equivalent to k ln 2 units of entropy, or about 2.3 x 10^-24 cal/Kelvin. Chemical Brownian computers such as RNA polymerase are of course quantum systems, but because of the high temperature and short thermal de Broglie wavelength, quantum effects are subtle and quantitative (e.g., zero-point and tunneling corrections to reaction rates) rather than qualitative. Feynman, instead of incorporating the direction of the computation (forward as opposed to backward) in the Hamiltonian, included it in the initial condition, which was now not a single basis state but rather a wave-packet-like superposition of basis states [i.e., as Markovian processes].
      C.H. Bennett; Notes on computation (198?).

  10. Nei, M.;
    Molecular evolutionary genetics.
    Columbia University, 1987.

  11. Reichl, L.E.;
    The transition to chaos. In conservative classical systems: Quantum manifestations.
    Springer, 1992.

          The book begins with discussions of nonlinear resonance, Noether's theorem, integrability, KAM theory, and a definition of chaotic behavior; it continues with a detailed discussion of area-preserving maps with emphasis on self-similarity, integrable and nonintegrable quantum systems, spectral properties, path integrals, and periodically driven systems; and concludes by showing how to apply the ideas to stochastic systems.

          [The Kolmogorov entropy (the sum of the positive Lyapunov exponents) is equivalent to the loss of information per unit time, and allows one to distinguish among chaotic (divergent), regular (quasi-convergent) and stochastic (equiprobable distributions) behaviours. Fluxes couple symmetrically (Onsager's reciprocity relation), with different tensorial character -scalar, vectorial, matricial- but specifically and anisotropically (the Curie-Prigogine principle).
      F. Montero & F. Morán; Biofísica. Procesos de autoorganización en biología. Universidad Complutense, 1992]


  12. Strang, G.;
    Introduction to applied mathematics.
    Wellesley-Cambridge Press, 1986.

  13. Westhead, D.R.; Parish, J.M. & Twyman, R.M.;
    Bioinformatics.
    BIOS, 2002.

  14. Asimov, I.;
    Breve historia de la química.
    Alianza, 1981.

    • [Prout's hypothesis: H = 1.]


  15. du Sautoy, M.;
    La música de los números primos.
    Acantilado, 2007.

  16. Hawking, S. (ed.);
    God created the integers.
    Penguin, 2005.

  17. Gracián, E.;
    Los números primos.
    RBA, 2010.


Haste and silence

THE PRIME PROJECT -SPECTRA- (Golay codes and Gödel numbers):

• UNITY: Mendeleev Suite (atoms/stars and notes)
• COMPLEXITY: Riemann Symphony (molecules/galaxies and harmony)

Michio Kaku (La Vanguardia, 18/V/2010) goes deep into string theory, today the most
plausible candidate to encompass all the phenomena of the cosmos, down to the quantum
scale. "Everything is vibration and each elementary particle a note; physics studies
harmonies; chemistry, melodies. The Universe is a symphony!" The existence of parallel
universes -the multiverse- would be confirmed by the discovery of the S particle.
[from "El País", august 2005] Theoretical physicist Michio Kaku considers music an
analogy of the cosmos:
  • subatomic particles are notes,
  • physics is harmony,
  • chemistry is melody, and
  • the universe, the complete symphony.
[Music is communication as a pure state, a language directed to emotions.]

COSMOS (Guy Cogeval & Jean Clair, Montreal Museum)


addenda:

    • Sánchez Ramos, E.; Tablas de logaritmos, trigonométricas y de cálculo de intereses
      (twenty-second edition, stereotyped by galvanism and considerably enlarged);
      Hernando, 1960.

    • Flammarion, C.; Astronomía popular; Montaner y Simón, 1963. (Bethe cycle: pp. 249 & 529.)

    • Honingh, A.; Measures of consonances in a goodness-of-fit model for equal-tempered scales. (http)
    In this paper we have checked how well several equal-temperament systems approximate the frequency ratios of just intonation. As test sets, we chose intervals from the just major scale and intervals from the harmonic series. To weight the intervals, Euler's Gradus function and Helmholtz's roughness function have been used. It turns out that for these cases a division of the octave into 12, 19, 41 or 53 parts would be a good choice.
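
    The goodness-of-fit idea can be sketched by measuring, in cents, how far the nearest step of an n-tone equal temperament lands from a just-intonation ratio. This is an illustrative reimplementation, not Honingh's weighted model (no Gradus or roughness weighting):

```python
from math import log2

def cents_error(ratio, n):
    """Deviation, in cents, of the nearest step of n-tone equal
    temperament from a just-intonation frequency ratio."""
    steps = log2(ratio) * n            # exact position in scale steps
    return abs(steps - round(steps)) * 1200 / n

# The just perfect fifth 3/2 under the divisions named in the paper:
for n in (12, 19, 41, 53):
    print(n, round(cents_error(3 / 2, n), 2))
```

    For the fifth, 12-TET is off by about 2 cents and 53-TET by well under a tenth of a cent, consistent with those divisions scoring well.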

    • VirtualDemo FFT: Adobe Audition 1.0 (Cool Edit Pro Syntrillium), 1992-2003.
    • [Everest, F. A. (1994). The Master HandBook Of Acoustics. McGraw Hill].

      • Gauss: The problem of distinguishing prime numbers from composite numbers, and of resolving the latter into their prime factors, is known to be one of the most important and useful in arithmetic.
      • Primes are the building blocks (factors) of the positive integers (the prime factors of an integer determine its properties). A rough estimate for the nth prime is n log n.
      • A perfect number is one whose proper divisors sum to the number itself. E.g., the number 6 has proper divisors 1, 2 and 3, and 1 + 2 + 3 = 6; 28 has divisors 1, 2, 4, 7 and 14, and 1 + 2 + 4 + 7 + 14 = 28.
      • Euclid also showed that if the number 2^n - 1 is prime then the number 2^(n-1)(2^n - 1) is a perfect number. The mathematician Euler (much later, in 1747) was able to show that all even perfect numbers are of this form. It is not known to this day whether there are any odd perfect numbers.
      • The next important developments were made by Fermat at the beginning of the 17th century. He proved a speculation of Albert Girard that every prime number of the form 4n + 1 can be written in a unique way as the sum of two squares, and was able to show how any number could be written as a sum of four squares.
      • Fermat's Little Theorem: This states that if p is a prime then for any integer x we have x^p ≡ x (mod p).
        This proves one half of what has been called the Chinese hypothesis, which dates from about 2000 years earlier: that an integer n is prime if and only if the number 2^n - 2 is divisible by n.
      • The statement that the density of primes is 1/log(n) is known as the Prime Number Theorem.
      • Goldbach's and other conjectures: Every even n > 2 is the sum of two primes (Schinzel: every integer n > 17 is the sum of three distinct primes). Every odd n > 5 is the sum of three primes. Every even number is the difference of two primes. The prime gap following a prime p is bounded by a constant times (log p)^2.
          • Galileo: (n^2, 2n-1) -euclidean space-. Stevinus: fi/fd = π/4 -reciprocal space-.
          • The number of electrons (n) a shell can hold is given by 2n^2 (http).
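
      Several of these statements are easy to verify computationally. A small sketch (illustrative only; trial division, not an efficient algorithm):

```python
from math import log

def is_prime(n):
    """Trial division, adequate for small n."""
    if n < 2:
        return False
    f = 2
    while f * f <= n:
        if n % f == 0:
            return False
        f += 1
    return True

def nth_prime(n):
    """Walk the integers counting primes until the nth is found."""
    count, m = 0, 1
    while count < n:
        m += 1
        if is_prime(m):
            count += 1
    return m

# The rough estimate quoted above: the nth prime is about n log n.
print(nth_prime(100), round(100 * log(100)))  # 541 vs the estimate 461
```

      Fermat's Little Theorem and the perfect-number examples check out the same way, e.g. pow(x, 7, 7) == x % 7 for every integer x, and the proper divisors of 28 sum to 28.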


          excerpts from

          "The Scientist and Engineer's Guide to Digital Signal Processing" (Steven W. Smith, 2001)

          (Ch.5.) Linear systems

          Even/Odd Decomposition

              Equations for even/odd decomposition:
          xe[n] = (x[n] + x[N-n]) / 2
          xo[n] = (x[n] - x[N-n]) / 2

              This may seem a strange definition of left-right symmetry, since N/2-1/2 (between two samples) is the exact center of the signal, not N/2. Likewise, this off-center symmetry means that sample zero needs special handling. This decomposition is part of an important concept in DSP called circular symmetry. It is based on viewing the end of the signal as connected to the beginning of the signal. When even and odd signals are viewed in this circular manner, there are actually two lines of symmetry, one at point  x[N/2] and another at point x[0]. In an odd signal, point 0 and point N/2 always have a value of zero. In an even signal, point 0 and point N/2 are equal to the corresponding points in the original signal. The mathematics of Fourier analysis inherently views the signal as being circular, although it usually has no physical meaning in terms of where the data came from.
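
          The decomposition and its circular symmetry can be verified on a short signal. In this sketch (my example, not the book's), taking the index N-n modulo N handles the special sample zero mentioned above; the signal values are arbitrary:

```python
N = 8
x = [3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0, 6.0]

# Circular even/odd decomposition; indices are taken modulo N so that
# x[N-0] wraps around to x[0], treating the signal as circular.
xe = [(x[n] + x[(N - n) % N]) / 2 for n in range(N)]
xo = [(x[n] - x[(N - n) % N]) / 2 for n in range(N)]

print(xe)
print(xo)
```

          As the text states, xo is zero at points 0 and N/2, xe equals the original signal there, and the two parts sum back to x.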

          (Ch.11) Fourier Transform Pairs

          Harmonics 

              If a signal is periodic with frequency f, the only frequencies composing the signal are integer multiples of f, i.e., f, 2f, 3f, 4f, etc. These frequencies are called harmonics. The first harmonic is f, the second harmonic is 2f, the third harmonic is 3f, and so forth. The first harmonic (i.e., f) is also given a special name, the fundamental frequency.
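This can be illustrated with a naive DFT: a periodic signal built from a fundamental plus harmonics shows energy only at integer multiples of the fundamental's bin. A sketch (not from the book; bin 4 is an arbitrary choice of fundamental):

```python
import math

def dft_magnitudes(x):
    # Naive discrete Fourier transform magnitudes (O(N^2); fine for a demo).
    N = len(x)
    return [abs(sum(x[n] * complex(math.cos(2 * math.pi * k * n / N),
                                   -math.sin(2 * math.pi * k * n / N))
                    for n in range(N)))
            for k in range(N // 2)]

# Fundamental at bin 4 plus its 2nd and 3rd harmonics (bins 8 and 12).
N = 64
x = [math.sin(2 * math.pi * 4 * n / N)
     + 0.5 * math.sin(2 * math.pi * 8 * n / N)
     + 0.25 * math.sin(2 * math.pi * 12 * n / N) for n in range(N)]

mags = dft_magnitudes(x)
# Energy appears only at integer multiples of the fundamental.
peaks = [k for k, m in enumerate(mags) if m > 1.0]
assert peaks == [4, 8, 12]
```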

          (Ch.27) Data Compression

          Huffman Encoding

              The idea is to assign frequently used characters fewer bits, and seldom used characters more bits. Huffman encoding takes this idea to the extreme. In mathematical terms, the optimal situation is reached when the number of bits used for each character is proportional to the logarithm of the character's probability of occurrence.  A clever feature of Huffman encoding is how the variable length codes can be packed together. Now consider a Huffman encoded data stream, where each character can have a variable number of bits. How do you separate one character from the next? The answer lies in the proper selection of the Huffman codes that enables the correct separation. Each code consists of two parts: a number of zeros before a one, and an optional binary code after the one. This allows the binary data stream to be separated into codes without the need for delimiters or other markers between the codes. The uncompression program looks at the stream of ones and zeros until a valid code is formed, and then starts over looking for the next character. The way that the codes are formed ensures that no ambiguity exists in the separation.
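The self-delimiting property described above follows from the codes being prefix-free: no code is a prefix of another, so reading bits until a valid code forms is unambiguous. A minimal sketch (not from the book) using Python's heapq:

```python
import heapq
from collections import Counter

def huffman_codes(text):
    # Build a Huffman code: frequent characters get short codes, rare
    # ones long.  A tie-breaking counter in each heap entry keeps the
    # comparisons away from the tree payload.
    heap = [(f, i, ch) for i, (ch, f) in enumerate(Counter(text).items())]
    heapq.heapify(heap)
    i = len(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)
        f2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, i, (left, right)))
        i += 1
    codes = {}
    def walk(node, prefix):
        if isinstance(node, str):
            codes[node] = prefix or "0"
        else:
            walk(node[0], prefix + "0")
            walk(node[1], prefix + "1")
    walk(heap[0][2], "")
    return codes

def decode(bits, codes):
    # Prefix-free codes separate themselves: read bits until a valid
    # code forms, emit the character, start over -- no delimiters needed.
    inverse = {v: k for k, v in codes.items()}
    out, buf = [], ""
    for b in bits:
        buf += b
        if buf in inverse:
            out.append(inverse[buf])
            buf = ""
    return "".join(out)

text = "abracadabra"
codes = huffman_codes(text)
bits = "".join(codes[ch] for ch in text)
assert decode(bits, codes) == text
# 'a' occurs most often, so it gets the shortest code.
assert len(codes["a"]) == min(len(c) for c in codes.values())
```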

              A more sophisticated version of the Huffman approach is called arithmetic encoding. In this scheme, sequences of characters are represented by individual codes, according to their probability of occurrence.  To implement Huffman or arithmetic encoding, the compression and uncompression algorithms must agree on the binary codes used to represent each character (or group of characters). This can be handled in one of two ways. The simplest is to use a predefined encoding table that is always the same, regardless of the information being compressed. More complex schemes use encoding optimized for the particular data being used. This requires that the encoding table be included in the compressed file for use by the uncompression program. Both methods are common.

          LZW compression


              LZW compression is named after its developers, A. Lempel and J. Ziv, with later modifications by Terry A. Welch. It is the foremost technique for general purpose data compression due to its simplicity and versatility. Typically, you can expect LZW to compress text, executable code, and similar data files to about one-half their original size. LZW also performs well when presented with extremely redundant data files, such as tabulated numbers, computer source code, and acquired signals. Compression ratios of 5:1 are common for these cases. LZW is the basis of several personal computer utilities that claim to "double the capacity of your hard drive". LZW compression is always used in GIF image files, and offered as an option in TIFF and PostScript. 

              LZW compression uses a code table. A common choice is to provide 4096 entries in the table. Uncompression is achieved by taking each code from the compressed file, and translating it through the code table to find what character or characters it represents. The LZW method achieves compression by using codes 256 through 4095 to represent sequences of bytes. The longer the sequence assigned to a single code, and the more often the sequence is repeated, the higher the compression achieved. Although this is a simple approach, there are two major obstacles that need to be overcome: (1) how to determine what sequences should be in the code table, and (2) how to provide the uncompression program the same code table used by the compression program. The LZW algorithm exquisitely solves both these problems.

              When the LZW program starts to encode a file, the code table contains only the first 256 entries, with the remainder of the table being blank. This means that the first codes going into the compressed file are simply the single bytes from the input file being converted to 12 bits. As the encoding continues, the LZW algorithm identifies repeated sequences in the data, and adds them to the code table. Compression starts the second time a sequence is encountered. The key point is that a sequence from the input file is not added to the code table until it has already been placed in the compressed file as individual characters (codes 0 to 255). This is important because it allows the uncompression program to reconstruct the code table directly from the compressed data, without having to transmit the code table separately.

              Only a few dozen lines of code are required for the most elementary LZW programs. The real difficulty lies in the efficient management of the code table. The brute force approach results in large memory requirements and slow program execution. Several tricks are used in commercial LZW programs to improve their performance. For instance, the memory problem arises because it is not known beforehand how long the character string for each code will be. Most LZW programs handle this by taking advantage of the redundant nature of the code table: every code can be expressed as a previous code plus one new character. The execution time of the compression algorithm is limited by searching the code table to determine if a match is present. To speed this up, don't assign the 4096 codes to sequential locations in memory; rather, divide the memory into sections based on what sequences will be stored there.
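The key property described above — the decoder rebuilding the table from the compressed stream itself — can be shown in a few lines. A minimal sketch (not from the book): it works on byte strings, ignores the 12-bit packing and the 4096-entry cap, and handles the one case where a code arrives before its table entry exists:

```python
def lzw_compress(data):
    # Table starts with the 256 single-byte entries; repeated sequences
    # are added as codes 256 upward while encoding.
    table = {bytes([i]): i for i in range(256)}
    seq, out = b"", []
    for byte in data:
        candidate = seq + bytes([byte])
        if candidate in table:
            seq = candidate
        else:
            out.append(table[seq])
            table[candidate] = len(table)  # new entry = old sequence + 1 char
            seq = bytes([byte])
    if seq:
        out.append(table[seq])
    return out

def lzw_decompress(codes):
    # Rebuild the same table from the compressed stream: each new entry
    # is a previous entry plus one character, so no table is transmitted.
    table = {i: bytes([i]) for i in range(256)}
    prev = table[codes[0]]
    out = [prev]
    for code in codes[1:]:
        # A code can refer to the entry being defined right now; in that
        # case the entry is the previous sequence plus its own first byte.
        entry = table[code] if code in table else prev + prev[:1]
        out.append(entry)
        table[len(table)] = prev + entry[:1]
        prev = entry
    return b"".join(out)

data = b"TOBEORNOTTOBEORTOBEORNOT"
codes = lzw_compress(data)
assert lzw_decompress(codes) == data
assert len(codes) < len(data)  # repeated sequences were compressed
```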

          [from "El País", August 2005]
             
          Theoretical physicist Michio Kaku considers music an analogy of the cosmos:
          • subatomic particles are notes,
          • physics is harmony,
          • chemistry is melody, and
          • the universe, the complete symphony.


          some links to explore

          elements of number theory online 

          various

          [cond-mat/0303110] Information Entropy and Correlations in Prime Numbers
          [cond-mat/0307301] Fermat's last theorem, the Riemann hypothesis, and synergistic glasses
          [cond-mat/0309199] Small world effect in natural numbers network
          [cond-mat/0309251] Toward a dynamical model for prime numbers
          [cond-mat/0310079] On metric structure of ultrametric spaces
          [cond-mat/0310148] Hidden structure in the randomness of the prime number sequence
          [cond-mat/0310317] The Easiest Hard Problem: Number Partitioning
          [cond-mat/0311158] Microscopic realizations of the Trap Model
          [math-ph/0310016] Thermodynamics of the Farey Fraction Spin Chain
          [math/0404128] From Physics to Number Theory via Noncommutative Geometry. Part I: Quantum Statistical Mechanics of Q-lattices
          ScienceDirect - Physica A: Statistical Mechanics and its Applications : Theory of analogous force on number sets
          Families and clustering in a natural numbers network
          Zeta function zeros, powers of primes, and quantum chaos

          The gaps between the gaps: some patterns in the prime number sequence
          [math-ph/0503030] Generalized Number Theoretic Spin Chain-Connections to Dynamical Systems and Expectation Values
          [quant-ph/0507201] Fermi-Dirac statistics and the number theory
          [cond-mat/0507662] Cluster Approximation for the Farey Fraction Spin Chain
