Talk:Molecular computing
- Please contact directly: ramonrocabcn@gmail.com (sorry, mailing was previously unavailable; try again)
http://openwetware.org/wiki/Molecular_computing
http://openwetware.org/wiki/User_talk:Ramon_Roca
http://openwetware.org/wiki/Talk:Molecular_computing
http://www.youtube.com/user/rmnDeliriaLegitur
<html><body><a href="indice.html">(indice)</a><br><h2><a name="BIBLIO">BIBLIO</a></h2><ol><li><a name="Ayers"></a> Ayres, R.U.;<br><b>Information, entropy and progress. A new evolutionary paradigm.</b><br>
American Institute of Physics, 1994.<br><br><ul>
Consciousness, from computer science and cognitive psychology, is a kind of prioritizing metafunction that selects from the cacophony of internal signals and creates a linear, real-time narrative of what is going on, moment by moment; it appears to be a natural consequence of information exchange among elements of a distributed processing system.<br>
Any arbitrary shape can be regarded as a locus of intersecting surfaces of nth order, generated in terms of Fourier descriptors. The total amount of information embodied in simple shapes is determined by the shape category, the number of surfaces and the form or order of the surface-defining equations (e.g., quadratic, cubic, etc.).
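The Fourier-descriptor idea in this excerpt can be sketched numerically: sample a closed contour, treat the boundary points as complex numbers, and keep the leading DFT coefficients. A minimal sketch (the function name and truncation scheme are illustrative choices, not from Ayres):

```python
import numpy as np

def fourier_descriptors(points, n_coeffs=8):
    """Describe a closed 2D contour by the leading coefficients of the
    discrete Fourier transform of its boundary, read as complex samples.
    The number of coefficients kept controls how much shape information
    is retained."""
    z = points[:, 0] + 1j * points[:, 1]   # boundary as complex samples
    coeffs = np.fft.fft(z) / len(z)        # normalized DFT coefficients
    return coeffs[:n_coeffs]

# A square sampled at its 4 corners; the descriptors characterize the shape
square = np.array([[1.0, 1.0], [-1.0, 1.0], [-1.0, -1.0], [1.0, -1.0]])
desc = fourier_descriptors(square, n_coeffs=4)
```

The zeroth coefficient is the centroid (zero for this centered square), and the magnitude of the first coefficient measures the overall size of the contour.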
</ul></li><br><br><li><a name="Brunat"></a> Brunat Blay, J.M. & Ventura
Capell, E.;<br><b style="color: rgb(0, 0, 0);">Informació i codis.</b><br>
Universitat Politècnica de Catalunya, 2001.<br><br><ul>
In general, if <i>A</i> is an alphabet and <i>n</i> ≥ 1 is an integer, the set <i>C</i> = <i>A</i><sup><i>n</i></sup> is a code. Factoring a word of <i>C</i>* consists of separating its symbols <i>n</i> at a time, consecutively. Colloquially, but significantly, we can say that entropy is the limit of compression. <br>
Shannon's theorem: in general, if a block code <i>C</i> has <i>M</i> = <i>q</i><sup><i>k</i></sup> words, it can be put in bijection with the total code of length <i>k</i> over the same alphabet. Thus, in each word of <i>C</i> there are <i>k</i> = log<sub><i>q</i></sub><i>M</i> information digits (which determine the word) and <i>n</i> - <i>k</i> redundant ones. The information rate <i>R</i>(<i>C</i>), or amount of information per symbol, is (log<sub><i>q</i></sub><i>M</i>)/<i>n</i>; the redundancy rate is 1 - <i>R</i>(<i>C</i>). Colloquially, a good code, in the sense of low error probability, is expensive in the sense of being costly to transmit because of its large length. A code (with natural restrictions) chosen at random is reasonably likely to have these properties.<br>
Unfortunately, the proof is not constructive, ... Moreover, if a sufficiently small error probability is wanted, the theorem implies that we might have to increase the length <i>n</i> to impracticable values. <br>
The order of a finite field is its cardinality, that is, the number of elements it has. Linear codes are vector subspaces; [...] finally, orthogonality is the foundation of the so-called check matrix of a code. [Over a finite field, orthogonality has certain properties that differ from the ordinary ones.] For example, a nonzero vector can be perpendicular to itself. A linear code is defined by giving a basis for it (generator matrix) or by giving a homogeneous system of linear equations with a check matrix of the code (which generates the dual, orthogonal, code).<br>
The minimum distance of a perfect code (one whose packing and covering radii coincide) is odd. The Golay codes <i>G23</i> (binary, [23,12,7]<sub>2</sub>, <i>3</i>-error-correcting) and <i>G11</i> (ternary, [11,6,5]<sub>3</sub>, <i>2</i>-error-correcting) are perfect, systematic and linear (cyclic equivalents exist). Best's theorem: all perfect codes over any alphabet, with p ≥ 3 and p ≠ 6, 8, are equivalent either to Rep2(n) or to <i>G23</i>. Using the Fourier transform we can describe the Reed-Solomon code RSq(δ) as the set of transforms of all polynomials of degree ≤ <i>q</i> - δ - 1 or, if preferred, of all words with zeros from coordinate <i>q</i> - δ onward.
</ul><br>
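The rate and redundancy formulas quoted above, and the "perfect code" property claimed for the Golay codes, can be checked directly; a minimal sketch (function names are mine, not Brunat & Ventura's):

```python
from math import comb, log

def information_rate(q, M, n):
    """Information rate R(C) = (log_q M) / n for a q-ary code with M words
    of length n; the redundancy rate is 1 - R(C)."""
    return log(M, q) / n

def is_perfect(n, k, t, q=2):
    """Sphere-packing check: a q-ary (n, k) code correcting t errors is
    perfect when the Hamming balls of radius t exactly fill the space,
    i.e. q^k * sum_{i<=t} C(n,i)*(q-1)^i == q^n."""
    ball = sum(comb(n, i) * (q - 1) ** i for i in range(t + 1))
    return q ** k * ball == q ** n

# The binary Golay code G23 = [23,12,7], a 3-error corrector, is perfect:
R = information_rate(2, 2 ** 12, 23)       # 12/23 information digits per symbol
golay_perfect = is_perfect(23, 12, 3, q=2)
```

The same check confirms the ternary Golay code G11 with `is_perfect(11, 6, 2, q=3)`.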
[If C is a linear (n,k,d)-code, then n - k ≥ d - 1 (Singleton bound). A linear (n,k,d)-code is called maximum distance separable if d = n - k + 1.] </li><br><li><a name="Cotterill"></a> Cotterill, R.;<br><b>Biophysics, an introduction.</b><br>
Wiley, 2003. </li><br><li>Cover, T.M. & Thomas, J.A.;<br><b>Elements of
information theory.</b><br>
Wiley, 1991.<br><br><ul>
Entropy is the minimum descriptive complexity of a random variable, and mutual information (a measure of interdependence of variables) is the communication rate in the presence of noise. Indeed, we consider Kolmogorov complexity (the minimal description or program length) to be more fundamental than Shannon entropy. It is the ultimate data compression and leads to a logically consistent procedure for inference. [Occam's razor: the simplest explanation is best.] The quantities H (entropy), I (information), C (capacity), D (relative entropy), K (Kolmogorov complexity), W (doubling rate),
arise naturally in the following areas: <br><br><ul><ul><li> data
compression,</li><li>data
transmission, <br></li><li>network
information theory, <br></li><li>ergodic
theory, <br></li><li>hypothesis
testing, <br></li><li>statistical
mechanics, <br></li><li>inference, <br></li><li>gambling and investment, <br></li><li>probability
theory, <br></li><li>complexity
theory. </li></ul></ul></ul><br></li><li> Creighton, T.E.;<b><br>
Proteins: Structures and molecular properties.</b><br>
Freeman, 1997.<br></li><br><li>French, A.P.;<br><span style="font-weight: bold;">Vibraciones
y ondas (Curso de
física del M.I.T.).</span><br>
Reverté, 2000. <br></li><ul><br>
Simple harmonic motion: sinusoidal vibration. [<i>z</i> = <i>a</i> + <i>jb</i>] Let us now broaden the interpretation of the symbol <i>j</i> [<i>j</i><sup>2</sup> = -1], regarding it as an instruction to apply a 90° counterclockwise rotation to the displacement that precedes it. [...] A quantity of the form <i>jb</i> (with <i>b</i> real) is called purely imaginary. From the mathematical point of view this term is perhaps unfortunate, because in extending the concept of number from the reals to the complex numbers, an "imaginary" component such as <i>jb</i> stands on an equal footing with a real component such as <i>a</i>. But when applied to the analysis of one-dimensional oscillations, this terminology fits perfectly, as we have already seen, the physically real and non-real parts of an imagined two-dimensional motion.<br>
[cos θ + <i>j</i> sin θ = e<sup><i>j</i>θ</sup>] This result is very important, mathematically speaking, because it provides a clear connection between plane geometry (represented by the trigonometric functions) and algebra (represented by the exponential function). R.P. Feynman regarded it as "an amazing jewel ... the most remarkable formula in mathematics". It was established by Leonhard Euler in 1748.<br>
The resultant of two or more harmonic vibrations is simply the sum of the separate vibrations. If two simple harmonic motions have very similar frequencies, the combined disturbance exhibits what is called beating [whose frequency is half the sum of the individual frequencies]. [...] It is nevertheless remarkable that a great variety of deformations of physical systems, among them elongation, shortening, bending and torsion (or combinations of these), give rise to restoring forces proportional to the displacement and therefore produce simple harmonic vibrations (or a superposition of harmonic vibrations).
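Both claims in this excerpt, Euler's identity and the beat arising from two nearby frequencies, can be verified numerically; a small sketch (the frequency values are illustrative):

```python
import cmath
import math

# Numerical check of Euler's formula e^{jθ} = cos θ + j sin θ, and of the
# beat produced by summing two close frequencies: the sum oscillates at the
# mean frequency (half the sum, as in the quote) inside a slow envelope.
theta = 0.7
euler_lhs = cmath.exp(1j * theta)
euler_rhs = complex(math.cos(theta), math.sin(theta))

f1, f2 = 440.0, 442.0     # two nearby frequencies (Hz), illustrative values
carrier = (f1 + f2) / 2   # fast oscillation: half the sum of the frequencies
envelope = (f2 - f1) / 2  # slow amplitude modulation producing the beats
```

Here `abs(euler_lhs - euler_rhs)` vanishes to machine precision, which is the content of the "amazing jewel".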
</ul><br><li> Greub, W.;<br><b>Linear algebra.</b><br>
Springer, 1981.<br></li><br><li><a name="Kampis"></a>Kampis, G.;<br><b style="color: rgb(0, 0, 0);">Self-modifying systems in biology
and cognitive science. A new framework for dynamics, information and
complexity.</b><br>
Pergamon, 1991.<br><ul><br>
The Turing-Church thesis can be interpreted as an isomorphism between formal and dynamic systems:
<ul><ul><li> symbol
alphabet = numbers </li><li> admissible
symbol strings = state manifolds</li><li> grammar =
constraints</li><li> axioms =
initial conditions</li><li> rules of
inference = vector field</li><li> proof sequence
= trajectory</li><li> theorem =
attractor</li></ul></ul></ul><br>
Determinism refers to a computational procedure (formal domain, necessity in models) whereas causality refers to a relationship between things (natural domain, necessity in reality). In other words: according to this view, instead of asking why things happen, one should ask how they happen. It is not enough to reproduce things properly - an idea sometimes expressed by contrasting 'explanatory' and 'predictive' models, or 'Erklärung' and 'Verstehen', or by drawing a distinction between 'models' and 'metaphors' or 'simulations'. Statistical description is cumulative, whereas dynamical description is generative. Encodings must be done separately from dynamics; the levels of representation must not mix. Models cannot be read out directly; they have to be interpreted. A model will be called relevant if it is both adequate and interpretable. Models (as observables plus descriptions) are not reducible to descriptions (variables plus relations). A concrete situation where this principle applies is the replacement of a 'statistical' or inductive law (let us say, Kepler's laws) with a dynamical or deductive law (e.g. Newton's equations of motion). I/O systems are referred to logical monism, expressed by Zeno's paradoxes (viz., the system remains constant until something changes, and the change modifies the whole system - no one can swim twice in the same river). The encoding is the real memory of the system. Validity can be ensured only by considering complete information; but that does not mean a direct use of the complete information.<br>
With Gödel numbering (for instance, using the original symbols as exponents of a prime factorization) it becomes possible to encode and decode state transformations to and from states directly. The chess tree (a modest finite game) has about 10<sup>120</sup> elements.
However, what Waddington calls [in
developmental
biology] "morphogenetic fields" were not imagined as real fields but as
nominal models which can perhaps be reinterpreted in terms of
positional information. What distinguishes a mere label (a formal but
meaningless object) from a symbol is its interpretation. Component
systems (a finite set of diverse and interacting entity blocks, as
linguistic or biological ones) have creative unpredictable dynamics.
Such structures [LEGO-like] were foreseen (but later abandoned) by J.
von Neumann under the name 'kinematic automata'. The result [organized
complexity] is neither simplicity nor disordered complexity but
something "between the crystal and the smoke" (H. Atlan). Elements,
sets, sets of sets, etc. belong to different types in the Russellian
set theory, and are not allowed to mix freely. This makes them
qualitatively different. They can start to represent information in
type structures, as is the case in von Neumann's definition of the integers, where every number corresponds to an element of a different level of the hierarchy [although current set theory is type-free, not qualitative]. Therefore, component-systems cannot have dynamic algorithms [because of unknown elements]. Von Neumann's self-reproducing automata possessed both excitable and nonexcitable states, and the construction of automata by other automata was conceived as a growth process involving the transformation of nonexcitable cells into excitable ones by special signals.<br>
Wittgenstein: Logic, once established, may be used for describing formal implications, but the rules themselves do not follow any logic. <br>
(Relative) Complexity is not a number but a way of looking at things. A typical chaos-producing mechanism is the iterated 1D map (symbolic dynamics) x<sub>n+1</sub> = a - x<sub>n</sub><sup>2</sup>, x ∈ [-a,a], a ∈ [0,2], which leads to chaos in a finite large interval of a. The most vivid representation of the idea of chaos is by means of a taffy-pulling machine (Rössler, 1987; it makes cookies by filling and folding - cut&amp;paste - the pastry automatically). The mathematical effect of chaos is best seen in the Bernoulli shift transformation σ: x<sub>n+1</sub> = σ(x<sub>n</sub>) = 2x<sub>n</sub> mod 1, x ∈ [0,1] (that is, we multiply by two and cut off the part which grew bigger than one). Fractals generate an illusion of complexity. [x<sub>n+1</sub> = ax<sub>n</sub>(1-x<sub>n</sub>)] This equation specifies those points, the Poincaré map, where the trajectory cuts the line [within a system with partition 0,1; that is, the whole space is cut into two parts: i.e., the only real rule is that after 0 there cannot be 0].<br>
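The chaotic character of the quoted 1D maps can be quantified by the Lyapunov exponent; for the logistic map at a = 4 it equals ln 2, one bit of information lost per iteration. A sketch (the parameter choices are mine):

```python
import math

def logistic_lyapunov(a=4.0, x0=0.2, n=100_000, skip=100):
    """Estimate the Lyapunov exponent of the logistic map x -> a x (1 - x)
    as the orbit average of ln|f'(x)| = ln|a(1 - 2x)|.  For a = 4 the
    exact value is ln 2, the loss of information per iteration that the
    text identifies with the Kolmogorov entropy of the map."""
    x = x0
    for _ in range(skip):            # discard the transient
        x = a * x * (1.0 - x)
    acc = 0.0
    for _ in range(n):
        acc += math.log(abs(a * (1.0 - 2.0 * x)))
        x = a * x * (1.0 - x)
    return acc / n

lam = logistic_lyapunov()            # ~ ln 2 ≈ 0.693: chaotic (positive)
```

A positive exponent marks divergent (chaotic) behaviour; regular orbits give zero or negative values, which is exactly the distinction drawn in the Reichl note further down.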
<span style="text-decoration: underline;">Badii
utilized the idea that
chaos is made up of
combinations of the periodic orbits</span>. A 'primitive' is defined by
two
conditions: it is periodic (i.e., in the infinite sequence there are
arbitrarily long repetitions of it) and it cannot be broken down to
other primitives. The [hierarchical] tree [structure] is constituted by
the primitives and their admissible combinations (the first level, the
primitive combination; 2nd level, pairwise combination; n-th level,
n-ary combinations, and so on). <br>
This procedure [Gödel numbering], which is itself
algorithmic, ensures the existence of a dynamics without the use of any
further information. All we need is a component-system which produces
them by complexity-increasing procedures. The 'privileged zero'
properties and the imaginary Turing machines that make knots in their
tapes (or apply other tricks) can produce complexity beyond any fixed
limit. New types of thinking machines, molecular computers, and
evolutionary technology can be envisaged at this point. <br>
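The Gödel-numbering construction mentioned here, symbols as exponents of a prime factorization, is easy to make concrete; a minimal sketch (the truncated prime list is illustrative):

```python
def godel_encode(symbols, primes=(2, 3, 5, 7, 11, 13)):
    """Gödel numbering as described in the text: use the (integer) symbols
    of a string as exponents of consecutive primes, so the whole string
    becomes a single natural number."""
    n = 1
    for p, s in zip(primes, symbols):
        n *= p ** s
    return n

def godel_decode(n, primes=(2, 3, 5, 7, 11, 13)):
    """Recover the exponent sequence by repeated division; unused primes
    yield trailing zeros, so symbols should be >= 1 when the string
    length itself must be recoverable."""
    out = []
    for p in primes:
        e = 0
        while n % p == 0:
            n //= p
            e += 1
        out.append(e)
    return out

code = godel_encode([3, 1, 2])   # 2^3 * 3^1 * 5^2 = 600
```

Unique factorization guarantees that `godel_decode` inverts `godel_encode`, which is what lets state transformations be encoded and decoded "directly".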
For this theoretical framework, where all automata
can be
conceived as embedded configurations in some cellular space, von
Neumann formulated the following
hierarchical sequence of questions: </li><ul><li>(1) <span style="font-weight: bold;">Computability</span>.
When
are
automata logically universal? Can an individual automaton be logically
universal? </li><li>(2) <span style="font-weight: bold;">Constructibility</span>.
Can an automaton
be constructed by another
automaton? What kind of automata can be constructed? Does there exist
an automaton that has universal constructing ability? </li><li>(3) <span style="font-weight: bold;">Self-reproduction</span>.
Does there exist a self-reproducing automaton in
some cellular space? Do self-reproducing
automata exist which can do other things besides being
self-reproducing? </li><li>Von Neumann also posed a fourth question, concerning the <span
style="font-weight: bold;">evolvability</span>
of automata.</li><ul><li>Von Neumann was able to structurally realize this system in a
two-dimensional cellular automaton array with 29 states per cell,
with a simple neighbourhood function where the next state of a cell
depends on 5 cells: on itself and four neighbours (up, down, right, and
left). With this he gave a positive answer to (1)-(3). <br></li><li>(Laing, 1977) He
has constructed self-inspecting Turing machines that resemble molecules operating on each other and on themselves (a design related, at a distance, to Markov normal systems, and W. Stahl's molecular 'list processors').<br></li></ul></ul>
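The five-cell von Neumann neighbourhood used in the 29-state construction can be illustrated with a far simpler rule; the parity rule below is an arbitrary stand-in, not von Neumann's actual transition table:

```python
def step(grid):
    """One update of a toy two-state cellular automaton on a toroidal grid
    whose rule reads exactly the five-cell von Neumann neighbourhood
    described in the text: the cell itself plus up, down, left and right.
    The rule (become 1 when the neighbourhood sum is odd) is illustrative."""
    h, w = len(grid), len(grid[0])
    nxt = [[0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            s = (grid[i][j]
                 + grid[(i - 1) % h][j] + grid[(i + 1) % h][j]
                 + grid[i][(j - 1) % w] + grid[i][(j + 1) % w])
            nxt[i][j] = s % 2
    return nxt

seed = [[0, 0, 0], [0, 1, 0], [0, 0, 0]]
gen1 = step(seed)   # the single live cell spreads to its 4 neighbours
```

Even this trivial rule shows the "growth by signals" flavour: activity propagates outward one von Neumann neighbourhood per step.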
[...] That is, not the dynamics but an external standardization makes replication sustainable (Rössler's conclusion to the Wigner paradox). Such a standardizing mechanism is easy to implement in trivial or 'pseudo'-self-reproducing systems, where it is the reproducing (and not the reproduced) machine that does the whole process anyway. Component specificity is outside dynamics. It is a class-characteristic. There is an inherent discreteness present here, a standardization made by the system. In such systems there is no premium for accuracy because <u>the interactions are digitized. The origin of the world, the origin of music and the origin of arithmetic may thus have a common root with the origin of error-free replication, for which the natural units are necessary</u> [...]. <u>Now if there are such systems, logical and discrete categories are no longer just modes of description but modes of operation for them</u>. (Logical operations seem to be a preferred mode of organized systems.) The easiest and most complete access to component information is ensured by a one-dimensional arrangement of the components. <br>
The word itself [information] comes from the Latin <i>informatio</i>, which means illumination, exposition, outline, unfolding or 'body of knowledge'. Complexity only deals with syntactic properties
(i.e., relationships between symbols). From this point of view
[pragmatic information,
introduced by E. von Weizsäcker] communication
is conceived in the form of repeated cycles of communicational acts in
which novelty and reinforcement ('Erstmaligkeit und Bestätigung') are
communicated. They together shape
the value, the meaning and, through this, the syntactic information
content of messages. [...] the question is whether there is any induced
[behavioural] change in the system. If there is none, there is no
information. Most authors agree that information is not law-like but
event-like; it is no accident that Thom associates information with the
emergence of forms, the latter being his answer to the problem of
emergence of events in law-like systems. Information corresponds to a
selective (i.e. unequal) dissipation of energy over the degree of
freedom of a system (Conrad 1972). [...] Its thermodynamical
implications are related to the principle of equipartition of
equilibrium systems. Less entropy does not imply more information, it
only means there is information [as Schrödinger related information
-organization- to order]. A machine is a prototypic mechanism system;
there is a clear (etymological) relationship between the two words. In
this respect,
a machine can be defined as a definite set of relations over a fixed
set of parts. It is the boundary conditions of the machines that make
them machines, systems amenable to a simplistic treatment [so
teleonomic behaviour is realized by boundary conditions]. Continuous self-measurement of systems is adopted in certain quantum mechanics interpretations because for a quantum system to exist in a definite state, a reduction of the free wave function to a bound wave function is necessary. The notions of activation and inhibition can be good behavioural synonyms for the action of 'information' in systems.
[Logical gates -2 inputs and 4 possible combinations of the inputs- are
related to the physical limits of computation, because of logical
irreversibility due to information erasure (dissipation of energy), as
the "windows" through which the component-component interactions
occur.] This [evolutionary] process ensures through the achieved stages
of replication the stable existence of referential relations (which
otherwise would disappear). It is this stable relationship, the
stability of which has nothing to do with the nature of the signs
involved,
which makes it possible for a symbolic description to emerge in terms of
signifier-signified relations.<br>
<u>Does mathematics impose constraints on physics?</u>
Obviously, the answer is no [[!!??]]. "If there is a finite number of
different symbols, or a method of representing any symbol by finite
means, then a symbol can be translated into binary code, and so for
each operation on words or symbols there is a corresponding computable
function" (Johnson-Laird, 1983). [Lambda-calculus: (i) reduction (makes
a number from a function), (ii) expansion (makes a function from a
number) and (iii) change of variable.] This inequivalence [between
flowcharts and recursive functions] has to do with potentially infinite
programs (Recall again that in the finite domain everything is
solvable). The theorem, due to M.S. Paterson and C.E. Hewitt (1970), is
this: there are recursive programs which cannot be represented by
flowcharts of a constant number of variables.<br><br><li> Leff, H.S. & Rex A.F. (ed.);<br><b>Maxwell's demon: Entropy, information, computing.</b><br>
Hilger, 1990.<br><ul><br>
The information-bearing
degrees of freedom of a computer interact with the thermal reservoir
represented by the remaining degrees of freedom. This interaction plays
two roles. First of all, it acts as a sink for the energy dissipation
involved in the computation. This energy dissipation has an unavoidable
minimum arising from the fact that the computer performs irreversible
operations. Secondly, the interaction acts as a source of noise causing
errors. In particular thermal fluctuations give a supposedly switched
element a small probability of remaining in its initial state, even
after the switching force has been applied for a long time. It is
shown, in terms of two simple models, that this source of error is
dominated by one of two other error sources: (1) incomplete
switching due to inadequate time allowed for switching, and (2) decay
of stored information due to thermal fluctuations.<br>
R. Landauer; <b>Irreversibility and heat generation in the computing process.</b> <i>IBM J. Res. Dev.</i> <b>5</b>, 183-191 (1961).<br><br>
The biosynthesis of messenger RNA is discussed as a physical example of reversible computation [as a temporary memory device]. For a typical irreversible computer, which throws away about one bit per operation, <i>kT</i> ln 2 is the approximate lower bound on the energy dissipation of such machines; for a logically reversible computer it is <i>kT</i> ln 1 = 0, by construction. In biological systems, apparently, the speed and flexibility of irreversible erasure is worth its extra cost in free energy (<i>kT</i> ln 4 per nucleotide, in this case).<br>
C.H. Bennett; <b>Logical reversibility of computation.</b> <i>IBM J. Res. Dev.</i> <b>17</b>, 525-532 (1973).<br><br>
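The kT ln 2 and kT ln 4 figures from the Landauer and Bennett excerpts are simple to put in SI units; a sketch at an assumed room temperature of 300 K:

```python
import math

# Landauer's bound: erasing one bit dissipates at least kT ln 2.  The mRNA
# figure from the excerpt (kT ln 4 per nucleotide) sits on the same scale.
k_B = 1.380649e-23   # Boltzmann constant, J/K (exact value in the 2019 SI)
T = 300.0            # assumed room temperature, K

landauer_bit = k_B * T * math.log(2)    # minimum cost of one bit erasure, J
per_nucleotide = k_B * T * math.log(4)  # RNA degradation-cycle cost, J
```

At 300 K the bound is about 2.87 x 10<sup>-21</sup> J per bit, and the per-nucleotide cost is exactly twice that, since ln 4 = 2 ln 2.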
[...] This completes our description of how three
indefinite capacity counters may be employed to represent the actions
of a Turing tape: one counter for each right and left tape segment, and
one counter for manipulating the contents of the first two. <br>
[In order to represent two distinct numbers as a
single number we employ Gödelization.] We have thus managed to carry
out the desired tape transformations employing only two counters and
(save for matters of bookkeeping detail of the sort as indicated
earlier) the Minsky result can be obtained. <br>
R. Laing; <b>Maxwell's demon and computation.</b> <i>Phil. Sci.</i> <b>41</b>, 171-178 (1974).<br><br>
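Laing's Gödelization step, packing two counter contents into one number, can be made concrete with the classic 2<sup>a</sup>·3<sup>b</sup> encoding; a sketch:

```python
def pack(a, b):
    """Represent the two counter contents a and b as the single number
    2^a * 3^b, the Gödelization trick used to reduce the three-counter
    Turing-tape representation to two counters."""
    return 2 ** a * 3 ** b

def unpack(n):
    """Recover (a, b) by counting divisions by 2 and by 3; unique
    factorization makes this unambiguous."""
    a = b = 0
    while n % 2 == 0:
        n //= 2
        a += 1
    while n % 3 == 0:
        n //= 3
        b += 1
    return a, b

packed = pack(5, 7)   # 2^5 * 3^7 = 69984
```

This is the bookkeeping that lets the Minsky two-counter result be obtained from the three-counter tape simulation described in the quote.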
The enzymatic apparatus of DNA replication, transcription, and translation appears to be nature's closest approach to a Brownian computer (i.e., a model allowing thermal noise to influence the trajectory so strongly that it becomes a random walk through the entire accessible, low-potential-energy portion of the computer's configuration space), dissipating 20-100 <i>kT</i> per step. <br>
Because the intrinsic error rate is determined by the difference in barriers opposing correct and incorrect transitions, it is a function of the particular chemical hardware, and does not represent a fundamental thermodynamic limit. In practice it can be made arbitrarily small by increasing the size and complexity of the recognition sites (to increase the potential energy difference ΔE between correct and incorrect reaction paths), by lowering the temperature (to increase the Boltzmann ratio e<sup>ΔE/<i>kT</i></sup> of correct to error transitions without changing ΔE) and by making the apparatus larger and more massive (to reduce tunneling).<br>
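Bennett's Boltzmann-ratio argument can be sketched as a two-path model: if correct transitions outnumber errors by e<sup>ΔE/kT</sup>, the error probability falls exponentially in ΔE. The numbers below are illustrative, not from the paper:

```python
import math

def error_probability(delta_E_in_kT):
    """Probability of taking the incorrect reaction path in a two-path
    model where the correct-to-error transition ratio is the Boltzmann
    factor e^{ΔE/kT}, with ΔE given in units of kT."""
    ratio = math.exp(delta_E_in_kT)   # correct-to-error transition ratio
    return 1.0 / (1.0 + ratio)

# Enlarging the recognition site (raising ΔE) suppresses errors exponentially:
p_small = error_probability(2.0)   # ΔE = 2 kT
p_large = error_probability(10.0)  # ΔE = 10 kT, orders of magnitude fewer errors
```

This is why the intrinsic error rate is a hardware property rather than a thermodynamic limit: ΔE can in principle be engineered upward without bound.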
The excess [fourfold higher concentration] phosphate
keeps the enzyme [polynucleotide phosphorylase] from running backwards
and synthesizing random RNA, but it also means that the cycle of
specific synthesis followed by nonspecific degradation must waste about
<i>kT</i> ln 4 per nucleotide
even in the limit of zero speed. For an organism that has already spent
around 20<i>kT</i> per nucleotide to
produce RNA with near maximal speed and accuracy, the extra 1.4<i>kT</i>
is obviously a small price to
pay for being able to dispose of the RNA in a summary manner, without
taking it back to its birthplace.<br>
Levin and Chaitin's definition of algorithmic entropy: a program is self-delimiting if it needs no special symbol, other than the digits 0 and 1 of which it is composed, to mark its end; discrete objects other than binary strings, e.g., integers, or coarse-grained cells in a continuous phase space, may be defined by indexing them by binary strings in a standard way (as a Monte Carlo program describes distributions of macrostates).<br>
C.H. Bennett; <b>The thermodynamics of computation - a review.</b> <i>Int. J. Theor. Phys.</i> <b>21</b>, 905-940 (1982).<br><br>
There really is no software, in
the strict sense of disembodied information, but only inactive and
relatively static hardware. Thus, the handling of information is
inevitably tied to the physical universe. Evolution and the origin of
life
can be viewed as an optimization process, in which fluctuations (e.g.,
mutations) take us from one metastable ecology to a new one. [The analogy with the oscillations of a laser, under conditions where these temporal and spatial patterns are the only allowed behavior, seems strained.] We
might, with equal justice, refer to the revolution of an electron around a hydrogen nucleus, or the rotation of a water wheel, as self-organization. [Noise can be a qualitative influence on the development of systems, as noise-induced transitions are invoked in the simulated annealing technique for optimization in the presence of complex constraints. In equilibrium, noise is defined by the
temperature.]<br>
R. Landauer; <b>Computation: A fundamental physical view</b>. <i>Phys. Scr.</i> <b>35</b>, 88-95 (1987). <br><br>
[Incomprehensible, but the following phrases are appealing:]<br><ul><ul><li>Purcell and
Pound
discovered (1951) the existence of
negative temperature.</li><li>Classical
demons
and orthogonality in Hilbert space.</li><li>Reality is not
subdivided.</li></ul></ul>
E. Lubkin; <b>Keeping
the entropy of measurement: Szilard revisited.</b><br><br>
One bit of information is somehow equivalent to <i>k</i> ln 2 units of entropy, or about 2.3 x 10<sup>-24</sup> cal/Kelvin. Chemical Brownian computers such as RNA polymerase are of course quantum systems, but because of the high temperature and short thermal de Broglie wavelength, quantum effects are subtle and quantitative (e.g., zero-point and tunneling corrections to reaction rates) rather than qualitative. Feynman, instead of incorporating the
direction of
the computation (forward as opposed to backward) in the Hamiltonian,
included it into the initial condition, which was now not a single
basis state but rather a wave-packet-like superposition of basis states
[i.e., as Markovian processes].<br>
C.H. Bennett; <b>Notes on computation</b> (198?). <br></ul></li><br><li>Nei,
M.;<br><b>Molecular evolutionary genetics.</b><br>
Columbia University, 1987.<br></li><br><li>Reichl,
L.E.;<br><b>The transition to chaos. In conservative
classical systems: Quantum manifestations.</b><br>
Springer, 1992.<br><ul><br>
The book begins with discussions of nonlinear resonance, Noether's theorem, integrability, KAM theory, and a definition of chaotic behavior; it continues with a detailed discussion of area-preserving maps with emphasis on self-similarity, integrable and nonintegrable quantum systems, spectral properties, path integrals, and periodically driven systems; and concludes by showing how to apply the ideas to stochastic systems.<br><br>
[The Kolmogorov entropy (the sum of the positive Lyapunov exponents) is equivalent to the loss of information per unit of time, and makes it possible to distinguish chaotic (divergent), regular (quasi-convergent) and stochastic (equiprobable-distribution) behaviours. Fluxes couple symmetrically (Onsager's reciprocity relation), with different tensorial character (scalar, vectorial, matricial) but specifically and anisotropically (the Curie-Prigogine principle).
</ul><a name="Montero_Moran"></a><ul>
F. Montero & F. Morán; <b>Biofísica. Procesos de autoorganización en biología.</b> Universidad Complutense, 1992]
</ul><br></li><br><li> Strang,
G.; <br><b>Introduction to applied mathematics.</b><br>
Wellesley, 1986.<br></li><br><li> Westhead,
D.R.; Parish, J.M. &
Twyman, R.M.;<br><b>Bioinformatics</b>.<br>
BIOS, 2002. </li><br><li>Asimov, I.;<br><b>Breve historia de la química.</b><br>
Alianza, 1981. </li><br><ul>
[Prout's hypothesis: H = 1.]
</ul><br><br><li>du Sautoy, M.;<br><span style="font-weight: bold;">La música de los números primos.</span><br>
Acantilado, 2007. </li><br><li>Hawking, S. (ed.);<br><span style="font-weight: bold;">God created the integers.</span><br>
Penguin, 2005. </li><br><li>Gracián, E.;<br><span style="font-weight: bold;">Los números primos.</span><br>
RBA, 2010. </li></ol><hr style="width: 100%; height: 2px;"><hr style="width: 100%; height: 2px;"><a
href="../info/La%20prisa%20y%20el%20silencio.html">La prisa y el
silencio</a><br><br>
THE PRIME PROJECT -SPRECTRA- (Golay codes and Gödel numbers):<br><br>
• UNITY: Mendeleiev Suite (atoms/stars and notes)<br>
• COMPLEXITY: Riemann Symphony (molecules/galaxies and harmony)<br><br>
Michio Kaku (La Vanguardia, 18/V/2010) delves into string theory, today the most plausible candidate for encompassing all the phenomena of the cosmos, down to its quantum scale. "Everything is vibration and every elementary particle a note; physics studies harmonies; chemistry, melodies. The Universe is a symphony!" The existence of parallel universes, or a multiverse, would be confirmed by the discovery of the S particle.<br>
[from "El País", August 2005] Theoretical physicist Michio Kaku considers music an analogy of the cosmos:<br><ul><li>subatomic particles are notes,</li><li>physics is harmony,</li><li>chemistry is melody, and</li><li>the universe, the complete symphony.</li></ul>
[Music is communication as a pure state, a language directed to emotions.]<br><br>
COSMOS (Guy Cogeval & Jean Clair, Montreal Museum of Fine Arts)<br><hr style="width: 100%; height: 2px;"><br><h2><span style="text-decoration: underline;">addenda:</span></h2><ul><ul><li>Sánchez Ramos, E.; <span style="font-weight: bold;">Tablas
de
logaritmos, trigonométricas y de
cálculo de intereses <br>
(vigésima segunda edición, estereotipada al
galvanismo y considerablemente aumentada);</span>
Hernando,
1960.</li></ul><br><ul><li>Flammarion, C.; <span style="font-weight: bold;">Astronomía
popular</span>; Montaner y Simón, 1963. (Bethe cycle: pp. 249 &
529.) </li></ul><br><ul><li>Honingh, A.; <span style="font-weight: bold;">Measures
of
consonances in a goodness-of-fit model for equal-tempered scales. </span>(http)</li></ul></ul><div style="margin-left: 120px;"> In this paper we
have checked how well
several equal temperament systems approximate the frequency ratios from
just intonation. As test sets, we chose intervals from the just major
scale and intervals from the harmonic series. To weight the intervals,
Euler's Gradus function and Helmholtz' roughness function have been
used. It turns out that for these cases a division of the octave into
12, 19, 41 or 53 parts would be a good choice.
</div><br><ul><ul><li>VirtualDemo FFT: <span style="font-weight: bold;">Adobe
Audition 1.0</span> (Cool Edit Pro Syntrillium), 1992-2003. </li><li style="margin-left: 40px;">[Everest, F. A. (1994). The Master
Handbook of Acoustics. McGraw-Hill].</li></ul></ul><hr style="width: 100%; height: 2px;"><ul><ul></ul></ul><ul><ul><li><a id="rdf:#$uO5hB1" href="http://www.utm.edu/research/primes/"
last_modified="1055333164" add_date="1055333170"
last_visit="1055424846">The Prime Pages (prime number research,
records and resources)</a></li></ul></ul><ul style="margin-left: 40px;"><ul><li><span style="text-decoration: underline; font-weight: bold;">Gauss:</span>
The problem of distinguishing prime numbers from composite numbers and
of resolving the latter into their prime factors is known to be one of
the most important and useful in arithmetic.</li><li>Primes are the
building blocks (factors) of the positive integers (the prime
factorization of an integer determines its properties). A rough
estimate for the <span style="font-style: italic;">n</span>th prime is
<span style="font-style: italic;">n log n</span>.</li><li>A <i>perfect number</i> is one whose proper divisors sum to
the number itself; e.g., 6 has proper divisors 1, 2 and 3, with
1 + 2 + 3 = 6, and 28 has divisors 1, 2, 4, 7 and 14, with 1 + 2 + 4 + 7 + 14
= 28. </li><li>Euclid
also showed that if the number 2<sup><i>n</i></sup> - 1 is prime then
the number 2<sup><i>n</i>-1</sup>(2<sup><i>n</i></sup> - 1) is a
perfect number. The mathematician Euler
(much later in 1747) was able to show that <i>all</i> even perfect
numbers are of this form. It is not known to this day whether there are
any <i>odd</i> perfect numbers.</li><li>The next important developments were made by Fermat
at the beginning of the 17<sup>th</sup> Century. He proved a
speculation of Albert
Girard that every prime number of the form 4 <i>n</i> + 1 can be
written in a unique way as the sum of two squares and was able to show
how any number could be written as a sum of four squares.</li><li><i>Fermat's Little Theorem</i>: This states that if <i>p</i>
is a prime then for any integer <span style="font-style: italic;">x</span>
we have <i>x</i><sup><i>p</i></sup>
= <span style="font-style: italic;">x</span> modulo <i>p</i>. <br>
This proves one half of what has been called the <i>Chinese hypothesis</i>
which dates from about 2000 years earlier, that an integer <i>n</i> is
prime if and only if the number 2<sup><i>n</i></sup> - 2 is divisible
by <i>n</i>. </li><li>The statement that the density of primes is 1/log(<i>n</i>) is
known as the <i>Prime Number Theorem</i>. </li><li>Goldbach's and other conjectures: Every even <span
style="font-style: italic;">n</span>>2 is the sum of two primes
(Schinzel: every integer <span style="font-style: italic;">n</span>>17
is the sum of three distinct primes). Every odd
<span style="font-style: italic;">n</span>>5 is the sum of three
primes. Every even number is the difference of two primes. The prime
gap following a prime <span style="font-style: italic;">p</span> is bounded by a
constant times (<span style="font-style: italic;">log p</span>)<sup>2</sup>.</li></ul></ul><ul><ul><li>Galileo: (n<sup>2</sup>, 2n-1)
-Euclidean space-. Stevinus: <span style="font-family: symbol;">f</span><sub>i</sub>/<span
style="font-family: symbol;">f</span><sub>d</sub> = <span
style="font-family: symbol;">p</span>/4 -reciprocal space-.</li><li>The number of electrons (<span style="font-style: italic;">n</span>)
a shell can hold is given by 2<span style="font-style: italic;">n</span><sup>2</sup>
(http).</li></ul></ul><br><hr style="width: 100%; height: 2px;">excerpts from <br><h3><a href="http://www.dspguide.com/pdfbook.htm">"<span
style="font-weight: bold;">The
Scientist and Engineer's Guide to Digital Signal Processing"
(Steven W.
Smith, 2001)</span></a></h3><h3>(Ch.5.) Linear systems<br></h3><span style="text-decoration: underline;">Even/Odd Decomposition</span><br><br>
Equations for even/odd decomposition:<br><div style="margin-left: 320px;"><span style="font-style: italic;">x</span><sub
style="font-style: italic;">e</sub><span style="font-style: italic;">
[n ] = (x [n] + x [N-n])/2</span><br style="font-style: italic;"><span style="font-style: italic;">x</span><sub
style="font-style: italic;">o</sub><span style="font-style: italic;">
[n ] = (x [n] - x [N-n])/2</span><br></div><br>
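A minimal sketch of these two equations in Python (the function name and plain-list implementation are illustrative choices; the index <i>N−n</i> is taken modulo <i>N</i>, so sample zero pairs with itself, which is the circular convention described below):

```python
# Even/odd decomposition of an N-point signal, using the circular
# convention: x[N - n] is read modulo N, so x[0] pairs with itself.
def even_odd_decompose(x):
    N = len(x)
    xe = [(x[n] + x[(N - n) % N]) / 2 for n in range(N)]
    xo = [(x[n] - x[(N - n) % N]) / 2 for n in range(N)]
    return xe, xo

x = [1.0, 2.0, 4.0, 8.0]
xe, xo = even_odd_decompose(x)
# The two parts sum back to the original signal...
assert all(abs(e + o - v) < 1e-12 for e, o, v in zip(xe, xo, x))
# ...and the odd part is zero at points 0 and N/2, as the text notes.
assert xo[0] == 0 and xo[len(x) // 2] == 0
```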
This may seem a strange definition of left-right
symmetry, since <span style="font-style: italic;">N/2-1/2</span>
(between two samples) is the exact center of
the signal, not <span style="font-style: italic;">N/2</span>.
Likewise, this off-center symmetry means that
sample zero needs special handling. This decomposition is part of an
important concept in DSP called circular symmetry. It is based on
viewing the end of the signal as connected to the beginning of the
signal. When even and odd signals are viewed in this circular manner,
there are actually two lines of symmetry, one at point <span
style="font-style: italic;">x[N/2]</span> and
another at point <span style="font-style: italic;">x[0]</span>. In an
odd signal, point <span style="font-style: italic;">0</span> and point
<span style="font-style: italic;">N/2</span> always
have a value of zero. In an even signal, point <span
style="font-style: italic;">0</span> and point <span
style="font-style: italic;">N/2</span> are
equal to the corresponding points in the original signal. The
mathematics of Fourier analysis inherently views the signal as being
circular, although it usually has no physical meaning in terms of where
the data came from.<br><h3>(Ch.11) Fourier Transform Pairs</h3><span style="text-decoration: underline;">Harmonics</span> <br><br>
If a signal is periodic
with
frequency <span style="font-style: italic;">f</span>, the only
frequencies composing the signal are integer
multiples of <span style="font-style: italic;">f</span>, i.e., <span
style="font-style: italic;">f</span>, 2<span
style="font-style: italic;">f</span>, 3<span
style="font-style: italic;">f</span>, 4<span
style="font-style: italic;">f</span>, etc. These
frequencies are called harmonics. The first harmonic is <span
style="font-style: italic;">f</span>, the second
harmonic is 2<span style="font-style: italic;">f</span>,
the third harmonic is 3<span style="font-style: italic;">f</span>, and
so forth. The first harmonic (i.e., <span style="font-style: italic;">f</span>)
is also given a special name, the fundamental frequency.
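This is easy to check numerically; a sketch (the sample rate, the square-wave test signal and the amplitude threshold are arbitrary choices, not from the book):

```python
import numpy as np

# A discrete signal that repeats every `period` samples is periodic with
# fundamental f = fs/period; its spectrum can only hold multiples of f.
fs, N, period = 1000, 1000, 100           # 1 s of data, fundamental f = 10 Hz
square = np.where(np.arange(N) % period < period // 2, 1.0, -1.0)
amps = np.abs(np.fft.rfft(square)) / N    # with a 1 s window, bin k is k Hz
peaks = [k for k, a in enumerate(amps) if a > 1e-6]
f = fs // period
# Every bin with energy is a harmonic of f (the odd ones, for a square wave).
assert all(k % f == 0 for k in peaks)
```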
<h3>(Ch.27) Data Compression</h3><span style="text-decoration: underline;">Huffman Encoding</span><br><br>
The idea is to assign frequently used characters
fewer bits, and seldom used characters more bits. Huffman encoding
takes this idea to the extreme. In mathematical terms, the optimal
situation is
reached
when the number of bits used for each character is proportional to the
negative logarithm of the character's probability of occurrence. A clever feature of
Huffman encoding is how the variable length codes can be packed
together. Now consider a Huffman encoded data stream, where each
character can have a variable number of
bits. How do you separate one character from the next? The answer lies
in the proper selection of the Huffman codes that enable the correct
separation. Each code consists of two parts: a number of zeros before a
one, and an optional binary code after the one. This allows the binary
data stream to be separated into codes without the need for delimiters
or other markers between the codes. The uncompression program looks
at the stream of ones and zeros until a valid code is formed, and then
starts over looking for the next character. The way that the codes
are formed ensures that no ambiguity exists in the separation.<br><br>
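A minimal sketch of Huffman code construction (the standard heap-based algorithm in Python; the function name and the test string are illustrative, not the book's program):

```python
import heapq
from collections import Counter

def huffman_codes(text):
    """Build a Huffman code (char -> bit string) for the characters of text."""
    freq = Counter(text)
    # Heap of (weight, tiebreak, tree); a tree is a char or a (left, right) pair.
    heap = [(w, i, ch) for i, (ch, w) in enumerate(freq.items())]
    heapq.heapify(heap)
    i = len(heap)
    while len(heap) > 1:                      # repeatedly merge the two lightest
        w1, _, t1 = heapq.heappop(heap)
        w2, _, t2 = heapq.heappop(heap)
        heapq.heappush(heap, (w1 + w2, i, (t1, t2)))
        i += 1
    codes = {}
    def walk(tree, prefix):                   # read codes off the tree paths
        if isinstance(tree, tuple):
            walk(tree[0], prefix + "0")
            walk(tree[1], prefix + "1")
        else:
            codes[tree] = prefix or "0"
    walk(heap[0][2], "")
    return codes

codes = huffman_codes("aaaaaaab bbbcc d")
# Frequent characters get short codes, rare ones long codes...
assert len(codes["a"]) <= len(codes["d"])
# ...and no code is a prefix of another, so the stream self-delimits.
vals = list(codes.values())
assert not any(x != y and y.startswith(x) for x in vals for y in vals)
```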
A more
sophisticated version of
the Huffman approach is called arithmetic encoding. In this scheme,
sequences of characters are represented by individual codes, according
to their probability of
occurrence. To implement Huffman or
arithmetic encoding, the compression and uncompression algorithms must
agree on the binary codes used to represent each character (or groups
of characters). This can be handled in one of two
ways. The simplest is to use a predefined encoding table that is always
the
same, regardless of the information being compressed. More complex
schemes use encoding optimized for the particular data being used. This
requires
that the encoding table be included in the compressed file for use by
the
uncompression program. Both methods are common.<br><span style="text-decoration: underline;"><br><a name="LZW"></a>LZW compression</span><br><br>
LZW compression is named after
its developers, A. Lempel and J. Ziv, with later modifications by Terry
A. Welch. It is the foremost technique for general purpose data
compression due to its simplicity and versatility. Typically, you can
expect LZW to compress text, executable code, and
similar data files to about one-half their original size. LZW also
performs
well when presented with extremely redundant data files, such as
tabulated
numbers, computer source code, and acquired signals. Compression ratios
of 5:1
are common for these cases. LZW is the basis of several personal
computer utilities that claim to "double the capacity of your hard
drive". LZW compression is always used in GIF image files, and offered
as an option in TIFF and PostScript. <br><br>
LZW compression uses a code table. A common choice
is to provide 4096 entries in the table. Uncompression is achieved by
taking each code from the compressed file, and translating it through
the code table to find what character or characters it represents. The
LZW method achieves compression by using codes 256 through 4095 to
represent sequences of bytes. The longer the sequence assigned to a
single code, and the more often the sequence is repeated, the higher
the compression achieved. Although this is a simple approach, there are
two major obstacles that need to be overcome: (1) how to determine what
sequences should be in the code table, and (2) how to provide the
uncompression program the same code table used by the compression
program. The LZW algorithm exquisitely solves both these problems.<br><br>
When the LZW program starts to encode a file, the
code table contains only the first 256 entries, with the remainder of
the table being blank. This means that the first codes going into the
compressed file are simply the single bytes from the input file being
converted to 12 bits. As the encoding continues, the LZW algorithm
identifies repeated sequences in the data, and adds them to the code
table. Compression starts the second time a sequence is encountered.
The key point is that a sequence from the input file is not added to
the code table until it has already been placed in the compressed file
as individual characters (codes 0 to 255). This is important because it
allows the uncompression program to reconstruct the code table directly
from the compressed data, without having to transmit the code table
separately.<br><br>
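The scheme just described fits in a few lines; a minimal illustrative LZW coder (not the book's program; the function names and the classic test string are arbitrary choices):

```python
def lzw_encode(data: bytes):
    table = {bytes([i]): i for i in range(256)}  # codes 0-255: single bytes
    seq, out = b"", []
    for byte in data:
        nxt = seq + bytes([byte])
        if nxt in table:
            seq = nxt                     # keep extending a known sequence
        else:
            out.append(table[seq])        # emit code for the known prefix
            table[nxt] = len(table)       # new sequence gets the next code
            seq = bytes([byte])
    if seq:
        out.append(table[seq])
    return out

def lzw_decode(codes):
    table = {i: bytes([i]) for i in range(256)}
    prev = table[codes[0]]
    out = [prev]
    for code in codes[1:]:
        # The only code that can be unknown is the one just being defined:
        entry = table.get(code, prev + prev[:1])
        table[len(table)] = prev + entry[:1]   # rebuild the encoder's table
        out.append(entry)
        prev = entry
    return b"".join(out)

data = b"TOBEORNOTTOBEORTOBEORNOT"
codes = lzw_encode(data)
# The repeated phrases come out as single codes, so the stream shrinks,
# and the decoder recovers the data from the codes alone.
assert lzw_decode(codes) == data and len(codes) < len(data)
```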
Only a few dozen lines of code are
required for the most elementary LZW programs. The real difficulty lies
in the efficient management of the
code table. The brute force approach results in large memory
requirements
and a slow program execution. Several tricks are used in commercial LZW
programs to improve their performance. For instance, the memory problem
arises because it is not known beforehand how long each of the character
strings for each code will be. Most LZW programs handle this by taking
advantage of the redundant nature of the code table. This pattern
always holds: every code can be expressed as a previous code plus one
new character. The execution time of the compression algorithm is
limited by searching the code table to determine if a match is present.
In other words, don't assign the 4096 codes to sequential locations in
memory. Rather, divide the memory into sections based on what sequences
will be stored there.<br><ul><ul><ul><ul><ul><li style="margin-left: 40px;"><img src="LZWmethod.jpg"
title="" alt="" style="width: 850px; height: 503px;"></li></ul></ul></ul></ul></ul><hr style="width: 100%; height: 2px;">[from "El País", august 2005]<br>
<br><div style="margin-left: 40px;">Theoretical physicist Michio Kaku
considers music an analogy of the cosmos:<br><ul><li>subatomic particles are notes, <br></li><li>physics is harmony, <br></li><li>chemistry is melody, and <br></li><li>the universe, the complete symphony.</li></ul></div><br><hr style="width: 100%; height: 2px;"><h2><font size="+1">some links to explore</font><br></h2><h4> elements of number theory online </h4><ul><li><a id="rdf:#$tO5hB1"
href="http://mathworld.wolfram.com/GoldbachConjecture.html"
last_modified="1055332946" add_date="1055332951"
last_visit="1107251585" last_charset="ISO-8859-1">Goldbach Conjecture
-- from MathWorld</a></li><li><a id="rdf:#$sO5hB1"
href="http://www.ieeta.pt/%7Etos/goldbach.html"
last_modified="1055332884" add_date="1055332939"
last_visit="1055332962">Goldbach conjecture verification</a>
<dl><li><a id="rdf:#$xO5hB1"
href="http://www.maths.warwick.ac.uk/%7Emiles/MA426/"
last_modified="1058785765" add_date="1058785963"
last_visit="1058785765">MA426 Elliptic curves</a></li><li><a id="rdf:#$wO5hB1"
href="http://www.millersv.edu/%7Ebikenaga/numth/numnote.html"
last_modified="1056721023" add_date="1056721040"
last_visit="1056725426">Notes on Elementary Number Theory</a> </li><li><a id="rdf:#$vO5hB1"
href="http://www.homeofprimes.com/StatisticPrimes.html"
last_modified="1055333666" add_date="1055333778"
last_visit="1055333971">Prime Statistics</a></li></dl></li></ul><h4>various</h4><dl><p></p><dt><a id="rdf:#$XO5hB1" href="http://arxiv.org/abs/cond-mat/0303110"
last_modified="1047030276" add_date="1047030374"
last_visit="1061287661">[cond-mat/0303110] Information Entropy and
Correlations in Prime Numbers</a></dt><dt><a id="rdf:#$YO5hB1" href="http://arxiv.org/abs/cond-mat/0307301"
last_modified="1058270914" add_date="1058271012"
last_visit="1058270914">[cond-mat/0307301] Fermats last theorem, the
Riemann hypothesis, and synergistic glasses</a></dt><dt><a id="rdf:#$WO5hB1" href="http://arxiv.org/abs/cond-mat/0309199"
last_modified="1063105009" add_date="1063105116"
last_visit="1063105009">[cond-mat/0309199] Small world effect in
natural numbers network</a></dt><dt><a id="rdf:#$ZO5hB1" href="http://arxiv.org/abs/cond-mat/0309251"
last_modified="1063358049" add_date="1063358068"
last_visit="1063358049">[cond-mat/0309251] Toward a dynamical model
for prime numbers</a></dt><dt><a id="rdf:#$+O5hB1" href="http://arxiv.org/abs/cond-mat/0310079"
last_modified="1065515861" add_date="1065516014"
last_visit="1065516438">[cond-mat/0310079] On metric structure of
ultrametric spaces</a></dt><dt><a id="rdf:#$0P5hB1" href="http://arxiv.org/abs/cond-mat/0310148"
last_modified="1065771920" add_date="1065771951"
last_visit="1065771920">[cond-mat/0310148] Hidden structure in the
randomness of the prime number sequence</a></dt><dt><a id="rdf:#$1P5hB1" href="http://arxiv.org/abs/cond-mat/0310317"
last_modified="1066294051" add_date="1066294073"
last_visit="1066294051">[cond-mat/0310317] The Easiest Hard Problem:
Number Partitioning</a><br></dt><dt><a id="rdf:#$3P5hB1" href="http://arxiv.org/abs/cond-mat/0311158"
last_modified="1068545726" add_date="1068545776"
last_visit="1068545726">[cond-mat/0311158] Microscopic realizations of
the Trap Model</a></dt><dt><a id="rdf:#$2P5hB1" href="http://arxiv.org/abs/math-ph/0310016"
last_modified="1066297999" add_date="1066298057"
last_visit="1066297999">[math-ph/0310016] Thermodynamics of the Farey
Fraction Spin Chain</a></dt><dt><a id="rdf:#$4P5hB1" href="http://arxiv.org/abs/math.NT/0404128"
last_modified="1081854614" add_date="1081854659"
last_visit="1081854614">[math/0404128] From Physics to Number Theory
via Noncommutative Geometry. Part I: Quantum Statistical Mechanics of
Q-lattices</a></dt><dt><a id="rdf:#$.O5hB1"
href="http://www.sciencedirect.com/science?_ob=ArticleURL&_udi=B6TVG-491BHNN-9&_user=625012&_handle=W-WA-A-A-WV-MsSAYWA-UUW-AUZCBWDAEA-WAZYZWBV-WV-U&_fmt=full&_coverDate=10%2F01%2F2003&_rdoc=4&_orig=browse&_srch=%23toc%235534%232003%23996719998%23456513%21&_cdi=5534&_artOutline=Y&view=c&_acct=C000031722&_version=1&_urlVersion=0&_userid=625012&md5=73dcc8833d1df95837389c558beb728a#sec1"
last_modified="1065167673" add_date="1065167758"
last_visit="1065167673">ScienceDirect - Physica A: Statistical
Mechanics and its Applications : Theory of analogous force on number
sets</a></dt><dt><a id="rdf:#$67afU2"
href="http://scitation.aip.org/getabs/servlet/GetabsServlet?prog=normal&id=PLEEE8000069000003036106000001&idtype=cvips&gifs=Yes"
add_date="1097239329" last_charset="ISO-8859-1">Families and
clustering in a natural numbers network</a></dt><dt><a id="rdf:#$icqq31"
href="http://scitation.aip.org/getabs/servlet/GetabsServlet?prog=normal&id=PLEEE8000068000002026206000001&idtype=cvips&gifs=Yes"
last_modified="1100861472" add_date="1100861437"
last_visit="1100861708" last_charset="ISO-8859-1">Zeta function zeros,
powers of primes, and quantum chaos</a></dt><dd><br></dd><dt><a id="rdf:#$Q8.ry3"
href="http://www.sciencedirect.com/science?_ob=ArticleURL&_udi=B6TVG-4CNGJB0-2&_user=625012&_handle=B-WA-A-W-CV-MsSAYZA-UUW-AAUYAYDBBV-AAUCDZYABV-YCBYUUYYB-CV-U&_fmt=summary&_coverDate=10%2F01%2F2004&_rdoc=43&_orig=browse&_srch=%23toc%235534%232004%23996589999%23513146%21&_cdi=5534&view=c&_acct=C000031722&_version=1&_urlVersion=0&_userid=625012&md5=a00c3e7da0dd4632a06b47ca077b3bfa"
last_modified="1105629299" add_date="1105629114" last_charset="UTF-8">The
gaps
between
the
gaps:
some
patterns
in
the prime number sequence</a></dt><dt><a id="rdf:#$T3p.P1" href="http://arxiv.org/abs/math-ph/0503030"
add_date="1111158263" last_charset="ISO-8859-1">[math-ph/0503030]
Generalized Number Theoretic Spin Chain-Connections to Dynamical
Systems and Expectation Values</a></dt><dt><a id="rdf:#$13zvP1"
href="http://www.arxiv.org/abs/quant-ph/0507201" add_date="1122301371"
last_charset="ISO-8859-1">[quant-ph/0507201] Fermi-Dirac statistics
and the number theory</a></dt><dt><a id="rdf:#$LGyCi"
href="http://www.arxiv.org/abs/cond-mat/0507662" add_date="1122642067"
last_charset="ISO-8859-1">[cond-mat/0507662] Cluster Approximation for
the Farey Fraction Spin Chain</a></dt></dl><hr style="width: 100%; height: 2px;"></body></html>