User:Linas/Wacky thoughts

Wild ideas and epiphanies

You should have an open mind, but not so open that your brain falls out.

It's impossible to make forward progress in life without also having understanding. The nature of understanding is that it often comes in the form of revelations, A-Ha! moments, epiphanies, and sudden increases in intuition.

Sometimes, that new understanding is wild, far-fetched, nutty to the point of incoherence. In a way, the wildest new insights are the very best: these are the insights that lead to breakthroughs, and open up vast new regions. The crazier it sounds, the deeper it may be. Thus, one should not be ashamed of wild and wonderful thoughts, however half-baked and ill-formed they may be.

Below follows a compendium of some of my better wild ideas, worded as coherently as I can manage. No holds barred, though: if it needed to sound nutty to get the point across, then the nutty wording was used. They may be wrong, they may be built on false assumptions, and they may make incorrect deductions; they may be "not even wrong", in the words of Pauli. I have not yet made the effort to make these rigorous, although I intend to. I only claim that they are inspired daydreams through topics in math and physics that may offer insight and understanding, at least to me.

What am I doing, really?

I'm dabbling in quantum chaos, trying to understand, well, really basic things about quantum mechanics that should be obvious, but are not. Things like wave function collapse, and all that jazz. At the moment, I'm trying to figure out what a wave function is. I think a wave function has something to do with ergodic geodesics on manifolds, but I can't prove anything. Not long ago, I realized I didn't really understand what a holomorphic function is; what I was taught in college on this topic lulled me into a false sense of security. In particular, I had no idea just how insanely hyperbolic these things are (and they are, in both very literal and very abstract senses).

Well, from there, it didn't take long to realize that I'm deeply confused about what a number actually is. Integers are bizarre; this is already known to number theorists, whose obsession with prime numbers is popularly known. Real numbers are even more bizarre: not only do they entail things like the continuum hypothesis, but there are also other completions of the rational numbers, known as the p-adic numbers. Real numbers do bizarre things, like play the equidistribution theorem game. Rational numbers are hardly blameless, as they possess deep symmetries under the modular group SL(2,Z), which is the symmetry group of almost all popularly described fractals. Some people think that fractals are weird. These people don't understand that fractals are merely an overt manifestation of the insane shit hidden under the placid, smooth sea we call the rational numbers.

Mathematics education

The problem is, I was brainwashed in first grade into thinking that I knew what an integer was; only recently have I made strides in resolving the deep misconceptions about integers that were planted in my young, pliable mind. The American school system continues to utterly fail in the teaching of mathematics at the elementary school level; to this day, 6- and 7-year-olds add 1+2 to get 3, and never, ever learn about modular arithmetic until far, far too late. See, modular arithmetic is a topic that is not only entirely appropriate for kindergarten, 1st grade, and 2nd grade, but it will make things like vulgar fractions plain and evident, instead of confusing, boring and dull. There is absolutely no reason whatsoever why 4th and 5th graders can't be introduced to the permutation group, or middle schoolers can't learn the basics of Galois theory. It is a shame and a disgrace that, more than a hundred years after Weierstrass, children graduate from high school with no conception of the monsters lurking just under the surface of the real numbers.
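
To make the vulgar-fractions claim concrete: in arithmetic modulo a prime, every nonzero number has a multiplicative inverse, so a "fraction" is just another whole number. A throwaway sketch (mine, not from any curriculum; the brute-force search is purely illustrative):

```python
# In arithmetic mod 7, a "fraction" like 1/3 is just the whole number that
# gives 1 when multiplied by 3 -- no new kind of object is needed.

def inverse_mod(a, p):
    """Multiplicative inverse of a modulo a prime p, by brute force."""
    for x in range(1, p):
        if (a * x) % p == 1:
            return x
    raise ValueError(f"{a} has no inverse mod {p}")

third = inverse_mod(3, 7)          # "1/3" mod 7 is 5, since 3 * 5 = 15 = 2*7 + 1
assert (3 * third) % 7 == 1
two_thirds = (2 * third) % 7       # "2/3" is then just 2 * (1/3), namely 3
assert (3 * two_thirds) % 7 == 2
```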

Anyway, it has been a big, backwards journey in my education; once upon a time, I studied quantum field theory and supersymmetry in grad school, and actually wrote a thesis on the confinement of quarks in the nucleon. But that was before I found out that I didn't know what numbers were.

Numerology and physics

Integers hold many deep, dark secrets. Dare I say, maybe even the secrets of the universe? I read a paper on string theory the other day that equated the strong coupling coefficient to 3 = 2^2 − 1 (plus corrections), the weak coupling coefficient to 7 = 2^3 − 1 (plus corrections), the fine structure constant to 127 = 2^7 − 1 (plus corrections on the order of ten percent), and Planck's constant to 2^127 − 1. The careful reader will note that these are a series of Mersenne primes. Well, I fell out of my chair at that point. Of course, it's just numerology, but it rattles the brain-pan, nonetheless.
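
For what it's worth, that chain really is a chain of Mersenne primes, and this is cheap to verify; a sketch (mine) using the Lucas-Lehmer test:

```python
# Lucas-Lehmer test: for odd prime p, the Mersenne number 2^p - 1 is prime
# iff s_{p-2} == 0, where s_0 = 4 and s_{i+1} = s_i^2 - 2 (mod 2^p - 1).

def lucas_lehmer(p):
    """True if 2^p - 1 is prime, for an odd prime exponent p."""
    m = (1 << p) - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m
    return s == 0

assert (1 << 2) - 1 == 3           # p = 2 is the one case the test skips; 3 is prime
for p in (3, 7, 127):
    assert lucas_lehmer(p)         # 7, 127, and 2^127 - 1 are all prime
assert not lucas_lehmer(11)        # 2^11 - 1 = 2047 = 23 * 89 is not
```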

Lie Groups, ergodic theory, Cantor topology

Another article I read recently pointed out that the charges of many simple Lie algebras, when used as generators of free groups, generate the disjoint cosets that underlie the Banach-Tarski paradox. In particular, SU(3) is a simple Lie group; its charges are fractional, and thus the problem of quark confinement may be a manifestation of the Banach-Tarski paradox. (Oh, I didn't understand the article, and it may well have been not even wrong, but...) That should make anybody fall out of their chair.

Well, perhaps this idea is not so far-fetched. The Anosov diffeomorphism on the tangent bundle of SL(2,R) is an example of a flow that has what it takes to be ergodic. And it's known that ergodic systems have topologies of the Cantor set, or something very much like it. And the types of sets that lead to the Banach-Tarski paradox are the kind of systems studied in the symbolic dynamics approach to chaos theory. So perhaps the leap is not that great.

The problem, of course, with these ideas is that SU(3) is kind of round and homogeneous; whereas chaos requires a hyperbolic space. So, how does one bridge the gap? To the rescue comes the fact that space and time are related by the Lorentz group SL(2,C), or, more prosaically, SO(3,1). The hyperbolic bits are in space-time itself. What we want to look for now is the manifestation of ergodic motion in the classical theory of Lie group-valued fiber bundles on Minkowski spacetime.

My KAM conjecture

Recently, I have noticed that dozens of different kinds of fractals all have the modular group symmetry; I have a web page, http://www.linas.org/math/sl2z.html, exploring this relation. As I've been studying the modular group and modular forms, I am suddenly struck by the fact that the KAM torus also probably has the modular group symmetry. How can this be? Well, we have this suspicious link: the Jacobian elliptic functions and elliptic integrals occur naturally in the classical equations of motion of the pendulum. But the elliptic functions also have a close relationship to modular forms, which have modular group symmetry. And the KAM torus is nothing more (and nothing less) than the study of the chaotic dynamics of the perturbed pendulum (well, planetary orbits are elliptical too). So this is my very own fall-out-of-chair conjecture: that the modular group symmetry extends not only to fractals, but also to dynamical systems, starting first and foremost with the KAM torus.
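
The pendulum half of that suspicious link is easy to exhibit numerically. A sketch (mine): the exact pendulum period is (2/π)·K(sin(θ₀/2)) in units of the small-angle period, where K is the complete elliptic integral of the first kind, computed here via the arithmetic-geometric mean:

```python
import math

def agm(a, b, tol=1e-15):
    """Arithmetic-geometric mean of a and b."""
    while abs(a - b) > tol * a:
        a, b = (a + b) / 2, math.sqrt(a * b)
    return a

def ellipk(k):
    """Complete elliptic integral of the first kind, K(k) = pi / (2 AGM(1, sqrt(1-k^2)))."""
    return math.pi / (2 * agm(1.0, math.sqrt(1.0 - k * k)))

def pendulum_period(theta0, T0=1.0):
    """Exact period of a pendulum with amplitude theta0 (radians),
    in units of the small-angle period T0 = 2 pi sqrt(L/g)."""
    return T0 * (2 / math.pi) * ellipk(math.sin(theta0 / 2))

# small amplitude: essentially the harmonic-oscillator period
assert abs(pendulum_period(0.01) - 1.0) < 1e-4
# large amplitude: the period grows, diverging as theta0 -> pi
assert pendulum_period(3.0) > 2.0
```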

If that didn't make things go bump, think of it this way: the modular group is usually defined in terms of the upper half-plane, which, ohh, by the way, has this hyperbolic Poincaré metric on it. That means that geodesics will have positive Lyapunov exponents. But the upper half-plane can also be thought of as a Riemann surface, which, oh, by the way, is a Kähler manifold, which is a symplectic manifold. But, duhh, this is what Hamiltonian dynamics is all about: it's the expression of the symplectic group. So why, exactly, are we surprised that dynamical systems are chaotic? More precisely, why is it that we don't see the modular group at every turn?
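
To make the modular-group action on the upper half-plane concrete, here is a sketch (mine) that reduces a point τ to the standard fundamental domain |Re τ| ≤ 1/2, |τ| ≥ 1, using the generators T: τ → τ+1 and S: τ → −1/τ:

```python
def reduce_to_fundamental_domain(tau):
    """Return (tau', word): tau' in the SL(2,Z) fundamental domain and the
    sequence of generators T, S applied to get there."""
    word = []
    while True:
        n = round(tau.real)          # translate Re(tau) into [-1/2, 1/2]
        if n != 0:
            tau -= n
            word.append(f"T^{-n}")
        if abs(tau) < 1:
            tau = -1 / tau           # apply S when inside the unit circle
            word.append("S")
        else:
            return tau, word

tau, word = reduce_to_fundamental_domain(0.3 + 0.1j)
assert abs(tau) >= 1 - 1e-12 and abs(tau.real) <= 0.5 + 1e-12
assert tau.imag > 0                  # the action preserves the upper half-plane
```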

The Linas–Riemann conjecture

I've also been working on a conjecture that generalizes the Riemann hypothesis for dynamical systems. The conjecture starts by noticing that almost all of the discrete eigenvalues of the transfer operators of iterated functions are less than one. This essentially means that almost all iterated function systems have dynamics that consist mostly of decaying eigenstates. This is a fancy variant of the Lefschetz fixed-point theorem. Now we make the leap: the other place where we see decaying eigenstates is in quantum mechanics. Here, one establishes a simple model for some physical system, and then perturbs the system to get closer to the "true physics". The perturbations mean that the eigenstates of the simple system are not the true eigenstates of the "real" system: the "true" eigenstates are mixtures, and so it appears that states "decay". But note also that the "true" system is invariably chaotic: for example, the "true" planetary orbits, the KAM torus, billiards, etc.

So my hypothesis is this: the true eigenstates of chaotic quantum systems are associated with the zeros of some zeta-type function for that dynamical system, and specifically, those zeros are organized in a straight line up the imaginary axis. If one tries to study the Hamiltonian dynamics of this system by considering discrete time intervals, one gets an iterated function system with decaying eigenstates. By focusing on the iteration, one is essentially focusing all attention on the "perturbations", and thus will find only decaying eigenstates for the iterated map. However, these decaying eigenstates are really only the "off-diagonal" entries of the true Hamiltonian. The reason that number-theoretic things like Dirichlet characters and L-functions enter the picture is that iterated functions have periodic or almost-periodic orbits, and periodicity is naturally described in terms of the cyclic group.

The Riemann zeta itself is associated, I think, with the (inverted??) harmonic oscillator, which I think connects through the Gauss-Kuzmin-Wirsing operator (GKW operator). The line of reasoning I'm currently pursuing is: the harmonic oscillator implies the pendulum implies the elliptic functions implies modular forms implies the modular group implies the de Rham curve, which is the dyadic iteration of two contraction mappings and is continuous, non-differentiable, and a generalization of both the Koch snowflake curve and the Minkowski question mark function, which both have modular group symmetry and tie to the GKW operator, which is a transfer operator that has direct ties to the Riemann zeta. Oh, and Farey numbers show up in there. As does Pell's equation, which has, dohh, a deep relationship to the Riemann hypothesis as well, and whose solutions form a semigroup inside the modular group. What a coincidence... wherever you look, the modular group is always a step or two away from the Riemann zeta. Why is that? I don't think it's a coincidence.
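
Since the Minkowski question mark function is a hinge of that chain, here is a sketch (mine) computing ?(x) from the continued fraction x = [0; a₁, a₂, ...], using ?(x) = Σ_k (−1)^(k+1) 2^(1−(a₁+...+a_k)):

```python
def question_mark(x, depth=40):
    """Minkowski ?(x) for x in [0, 1], via the continued fraction of x."""
    result = 0.0
    sign = 1.0
    total = 0
    for _ in range(depth):
        if x == 0:
            break
        a = int(1 / x)               # next continued-fraction term
        x = 1 / x - a
        total += a
        if total > 60:               # 2^-total is below double precision
            break
        result += sign * 2.0 ** (1 - total)
        sign = -sign
    return result

assert abs(question_mark(0.5) - 0.5) < 1e-12       # ?(1/2) = 1/2
assert abs(question_mark(1 / 3) - 0.25) < 1e-12    # ?(1/3) = 1/4
golden = (5 ** 0.5 - 1) / 2
assert abs(question_mark(golden) - 2 / 3) < 1e-8   # ?(1/phi) = 2/3: quadratic
                                                   # irrational -> rational
```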

Based on the Artin conjecture I'm wildly guessing that there are symplectic manifolds whose geodesics have a self-similarity that is best described by a non-abelian symmetry. That symplectic dynamics is essentially hyperbolic dynamics is essentially chaotic dynamics is essentially why the Riemann zeta looks so chaotic.

Or let me rephrase that in another way: a tweak of the Riemann zeta function will yield a modular form, but precisely how is opaque to me right now. However, once this is established, it will link the Riemann zeta to Kähler manifolds, and thus to the Hamiltonian that Sir Michael Berry conjectures for the zeros.

Quantizing Riemannian manifolds

The equations of motion for the geodesics of a Riemannian manifold are given by the Hamiltonian flow on the cotangent bundle of that manifold. What I'd like to know: how does one quantize this classical system? What is the quantum Hamiltonian corresponding to any given Riemannian manifold? What is the spectrum of a given Riemann surface? Do Riemann surfaces correspond one-to-one with Dirichlet L-functions, or is that my imagination? If they don't, then why not?

Quantum madness

Surely the fact that the 3+1 Minkowski spacetime that we live in happens to be hyperbolic is somehow deeply at the root of quantum mechanics. Why would I think this? Well, Hamiltonian dynamics on a hyperbolic manifold is chaotic, as a rule. The hint of the connection comes from the exactly-solvable models in two dimensions. Here, we see Fuchsian groups involved in fractals, we see modular forms that have semi-fractal shapes, and we have theta functions that represent the Heisenberg group. Yet the Heisenberg group is, in a certain way, all about quantum mechanics. And theta functions also play yet another role in the Hurwitz zeta function. Sooo....

The Heisenberg group is an example of a sub-Riemannian manifold. These occur during the investigation of geodesic bundles. But it's also the Hamiltonian flow of the classical harmonic oscillator. Hmm...

Current working hypothesis

My current working hypothesis regarding wave function collapse is that a bunch of paths (maybe? geodesics of some (symplectic?) manifold?) (aka Feynman paths) terminate abruptly (equivalently, their contribution to the measure evaporates to zero). This only happens when the measuring device can be sufficiently chaotic somehow; chaos is a necessary ingredient. The non-conservation of measure is the same mechanism as is at play in the Banach-Tarski paradox. In that paradox, measure seems to be magically created out of thin air; it's some strange thing that lies between the rational numbers and the reals. Conversely, in wave function collapse, measure is made to disappear. Chaos is needed to jam the trajectories into the "sinks" in the measure.

If this sounds totally absurd, well, there already is a manifestation of the phenomenon that is widely accepted: the non-conservation of bosons. Consider a photon: it starts somewhere, and then it terminates somewhere else. Every last path that contributes to the Feynman path integral of a photon eventually terminates on some fermion. Boom. The end.

QCD confinement

Some wild thoughts on QCD and confinement. From the point of view of the quarks, the interior of a baryon probably looks like a big hyperbolic space. In the metric of this space, the quarks feel only a modest force; that is asymptotic freedom. In such a space, the trajectories of the quarks would be ergodic. Confinement is probably a result that belongs to rigidity theory, in that weak perturbations of a manifold do not change the structure of the manifold; here, perturbations that try to dislodge a quark do not change the fact that it is confined. Hyperbolic surfaces, under spectral analysis, typically exhibit both a continuous spectrum and a discrete spectrum. For higher-dimensional hyperbolic spaces, one might expect to have only a discrete spectrum. Indeed, the absence of a continuous spectrum is the sign of confinement: in physics, such models of confinement are known as bag models or chiral bag models; these, by design, have a discrete spectrum. So the basic program to prove QCD confinement is then to show that the quarks can be thought of as living in a hyperbolic space, presumably a homogeneous space, and to show that the spectrum of geodesics in that space is purely discrete. Discreteness of the spectrum implies that would-be scattering states can only be bound states. Indeed, confinement is probably a property of just about any compact Lie group, so the real question is, why is nature using SU(3)?

Why neutrinos have mass

Actually, every spinor has to have a mass. Why? Because spinors are topological defects in what would otherwise be flat space-time. That sounds completely insane; how can that be? If I take a frame of reference (i.e. those things that make up a frame bundle) and parallel transport it, I get back something that is twisted around whenever space is curved; indeed, this is more or less the definition of curvature. The amount of turning-around is given by a value in the rotation group, or, more precisely, its Lie algebra. However, that rep of the algebra is the adjoint representation, and these are more or less always reducible. To talk about rotation "correctly", we should talk about the fundamental rep, and not the adjoint rep. But the fundamental rep doesn't act on frames, it acts on spinors. It is this tension between the adjoint rep on tangent spaces and the fact that it's reducible to a product of fundamental reps that makes all the physicists go wild over supersymmetry. As well they should; it's amazing the geometers haven't quite figured this out yet.

Anyway, it's in this sense that I should be equating a non-vanishing curvature tensor with its representation as a (product of) spinors. And so, presto-chango, mumbo-jumbo, the presence of a spinor as a point particle in space is just a localized "charge". In fact, it's the supersymmetric Hopf charge generator of , isn't it? That is, the physical particle we call a fermion is actually just a supersymmetric version of the Dirac monopole, right? Well, the careful reader will note that a spinor has the wrong number of indices to be equated with a tensor, which is why mumbo-jumbo was needed. But the same reader will also know that there's no such thing as a magnetic monopole. Dirac solved this problem by running the "extra" magnetic flux lines down a solenoid, the so-called Dirac string, where they re-emerge at the other end as an anti-monopole. We solve the missing-index problem the same way: run the indices down a supersymmetric "solenoid" so that the other (three) indices emerge out the other end.

So we have that the presence of a fermion indicates a twist in space, i.e. a non-zero curvature, at the point where the spinor sits. Think of the Aharonov-Bohm effect: as I parallel-transport my way around a fermion, I come back to the same place, but not quite the same orientation. As in Aharonov-Bohm, I don't have to come in contact with the spinor to feel its presence from afar. Thus, I conclude that space is curved (in exactly the same way as Aharonov-Bohm concludes there's a phase shift due to a threading of the loop. Or maybe the Berry phase is a better example, since it's not H^1 but H^2 that we want. Right?) And so, in this way, it's "inevitable" that fermions have masses, since they're really one and the same thing. OK, now that that's settled, on to the tricky part: why does the mu neutrino weigh more than the electron neutrino?

UFO propulsion

One of the grand insights of cranks who study perpetual motion machines, time travel, Tesla coils and UFO propulsion is that "twisting" or "spinning" have something to do with something. Indeed, rotating magnetic fields seem to be a recurring theme. Curiously, this theme of spinning and rotation permeates to the very top: we also have things like Roger Penrose's twistor theory.

I was recently struck by how this theme is also embedded into core physics curricula, without ever being made overt. For example: in action-angle coordinates, the action carries the units of angular momentum. Indeed, in the Bohr atom, the Sommerfeld-Wilson quantization condition is that

∮ p dq = n h

which states that the electron's orbit must consist of an integer number of wavelengths: i.e. something spinning or orbiting is quantized into an integer. This is not new news. What did strike me was that the word action also means "the integral of the Lagrangian". Well, that's not new news either. What struck me was that this is not just another meaning for the word "action": in fact, the "action" of "action-angle" is the same thing as the action of field theory. In particular, the action of quantum field theory carries units of ħ, that is, the units of angular momentum. Could it be that the very nature of quantization in quantum mechanics and quantum field theory is just a big, up-scale version of the same quantization one sees in the representations of SU(2)?
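
As a sanity check on the action-quantization theme, a sketch (mine): imposing the quantization condition m v r = n ħ on a circular Coulomb orbit reproduces the Bohr energy levels:

```python
# Balancing m v^2 / r = k e^2 / r^2 against the quantization m v r = n hbar
# gives r_n proportional to n^2 and E_n = -m k^2 e^4 / (2 hbar^2 n^2).

HBAR = 1.054571817e-34     # J s
ME = 9.1093837015e-31      # electron mass, kg
E_CHG = 1.602176634e-19    # elementary charge, C
K_E = 8.9875517923e9       # Coulomb constant, N m^2 / C^2

def bohr_energy_eV(n):
    """Energy of the n-th Bohr orbit, in electron-volts."""
    return -(ME * K_E**2 * E_CHG**4) / (2 * HBAR**2 * n**2) / E_CHG

assert abs(bohr_energy_eV(1) + 13.606) < 0.01    # the Rydberg energy
assert abs(bohr_energy_eV(2) + 3.401) < 0.01     # scales as 1/n^2
```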

Again (as discussed elsewhere on this page), one might say that the grading of superalgebras and supermanifolds is again just an expression of the tension between the fundamental representation and the adjoint representation of a Lie group: it's as if nature can't make up her mind, and wants to use both reps. The manifestation of that tension is angular momentum: angular momentum is the outcome of the action of a symmetry group on a manifold, and the action of symmetry groups on manifolds is quantized by means of the representations of that group.

Theory of Everything

The problem with current TOEs is that they have quantum mechanics built in as an assumption, rather than as an outcome. Yet the failure to fully understand the Riemann hypothesis seems to give the lie to such an approach. There is some basic quantization that ties together low-dimensional manifolds, simple groups, symmetric spaces and the like, all passing through the portal of Dirichlet L-functions; it seems to me that the solution to the Riemann hypothesis will be the Rosetta stone that will enable a true TOE to emerge.

This can be put in simpler terms: many second-year grad students in physics will get to calculate their first one-loop scattering amplitude for a pair of fermions. In the process, they will encounter the dilogarithm, probably for the first time. What is the dilog? Oh, not to worry, it's an obscure function that only number theorists care about, will come the professor's reply. But why is that?

Big Boom Number Theory

Given all the above, I fully expect a big boom in number theory in the next few decades, one which will unify the treatment of previously disparate work. The stuff going on with polylogarithm ladders can't possibly be a lone accident, as compared with what's going on with the partition function of number theory. There is just too much in common between the various "coincidences" there, and other crazy things like Indra's pearls. This "boom" will give a new understanding of Riemann surfaces and their relationship to geometric quantization. Or you can choose to believe I'm crazy, because I have no evidence for this.

Maybe I am a crank. Who knows. I find it remarkable that simple things like taking the derivative of some plain-old boring function lead to structures like Faà di Bruno's formula, which has absolutely remarkable number theory lurking below its surface. In high-school calculus, they don't tell you that number theory and Taylor series can be joined in this way. Maybe my teachers didn't know. After all, the fractals in Newton's method were only observed recently (in the last few decades).
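
Those Newton's-method fractals are easy to poke at numerically; a sketch (mine) for p(z) = z³ − 1, recording which cube root of unity each starting point converges to (the boundaries between the basins are the fractal):

```python
import cmath

ROOTS = [cmath.exp(2j * cmath.pi * k / 3) for k in range(3)]

def newton_basin(z, max_iter=100, tol=1e-10):
    """Index of the root of z^3 = 1 that Newton's method converges to from z."""
    for _ in range(max_iter):
        if abs(z) < tol:
            return None                       # derivative vanishes; undefined
        z = z - (z**3 - 1) / (3 * z**2)       # Newton step for p(z) = z^3 - 1
        for k, r in enumerate(ROOTS):
            if abs(z - r) < tol:
                return k
    return None

# The Newton map commutes with rotation by a cube root of unity, so the three
# basins are 120-degree rotations of each other:
assert newton_basin(2 + 0j) == 0
assert newton_basin(2 * ROOTS[1]) == 1
assert newton_basin(2 * ROOTS[2]) == 2
```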

Cryptography and Bell polynomials

I note that the study of umbral calculus, and in general of Sheffer sequences and sequences of binomial type, is reminiscent of the study of irreducible polynomials over finite fields, and thus has a certain resemblance to the goings-on of elliptic curve cryptography. Cryptography is based on the fact that strong mixing is hard to undo, coupled to the fact that, for finite fields, that mixing can be precisely replicated without round-off errors. Put another way: the mathematics of encryption is closely related to the mathematics of ergodic theory, just re-expressed on subsets of the rational numbers. More importantly, there is a very specific connection: the Minkowski question mark connects the algebraic numbers to the rational numbers, and the rational numbers to the dyadic numbers. Rather than working with one Galois field at a time, one may work with multiple such fields "at once", "in parallel", as it were. When one does this, one is led to consider the Cantor set structure of the dyadics, and how the symmetry structure of the Cantor set manifests itself in the binomial coefficients and the Stirling numbers; considered in terms of generating functions, these are expressed by means of Bell polynomials. Can one use this "parallelism" of representations of rational numbers to attack the ergodic and mixing problems of cryptography? Use the time evolution of ergodic systems, encoded via Bell polynomials, to track the path of a code point during cryptographic mixing, to break codes in a massively parallel fashion? OK, so I'm rambling now. This is all B.S. Never mind.
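
For the record, the combinatorial objects invoked here are easy to compute; a sketch (mine) of Stirling numbers of the second kind, the Bell numbers, and the single-variable Bell (Touchard) polynomials B_n(x) = Σ_k S(n,k) x^k:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def stirling2(n, k):
    """Stirling number of the second kind: partitions of n items into k blocks."""
    if n == k:
        return 1
    if k == 0 or k > n:
        return 0
    # a new item either joins one of k existing blocks, or starts its own
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

def bell_polynomial(n, x):
    """Single-variable Bell (Touchard) polynomial B_n(x) = sum_k S(n,k) x^k."""
    return sum(stirling2(n, k) * x**k for k in range(n + 1))

# At x = 1 these collapse to the Bell numbers, counting all set partitions:
assert [bell_polynomial(n, 1) for n in range(6)] == [1, 1, 2, 5, 15, 52]
assert bell_polynomial(3, 2) == 22      # x + 3x^2 + x^3 evaluated at x = 2
```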

Is there any special relation between irreducible polynomials and elliptic curve cryptography besides the fact that one can define an elliptic curve over certain finite fields that can be constructed by taking the ring of polynomials modulo a prime and quotient the ideal generated by an irreducible polynomial in the ring? 142.150.204.138 19:01, 1 March 2006 (UTC)
Probably not. That's why I call this page "wacky thoughts". At one point, when I wrote this, I had some brainstorm of an idea that has since passed, and which I did not develop any further. I suppose it's foolish to state that I keep having this nagging feeling that some crypto algorithms are more vulnerable than they seem. However, to suggest such a thing would buck the conventional wisdom of a cryptography community far more knowledgeable and experienced than I. Of course, I could hand-wave some more about my nagging feeling, and hypothesize some half-baked connections leading in all sorts of directions, but any reader should understand that my handwaving is in the category of "not even wrong" statements. (I did recently attend a talk on non-commutative projective geometry, where a connection was drawn between elliptic curves over finite fields and a certain 3D non-abelian projective variety. So to answer your question directly, "yes, there are more connections, but who knows if any of them can be used for code breaking.") linas 01:27, 3 March 2006 (UTC)

CP Violation and the geometric phase

In solid state physics, the geometric phase or Berry phase is used to explain the anomalous Hall effect in semiconductor materials such as GaAs. It is sometimes said that, in order to have the geometric phase play a role, one must have broken time-reversal symmetry in the description of the semiconductor. But broken T-symmetry, in the face of conserved CPT, implies broken CP symmetry. Is it possible that CP-violation seen in kaon and B meson decay is somehow explainable by means of the geometric phase?

The topology of the Cantor Set

A prime focus of the study of fractals, chaos and dynamical systems should be understanding how the Cantor set is mapped into the particular chaotic system at hand. In particular, one can say that a dynamical system is "fully understood" only if one can give a complete, exact mapping from the Cantor set onto the system. There are only a few systems that can be fully characterized in this way: the Koch snowflake curve pops to mind. In many other systems, e.g. the circle map, the presence of the Cantor set is visually/intuitively obvious, but a precise mapping is lacking.
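
The circle map makes a convenient test bench here; a sketch (mine) of the standard map θ → θ + Ω − (K/2π) sin(2πθ) and its winding number, whose mode-locking intervals (the Arnold tongues) carve the Cantor-like structure out of the Ω axis:

```python
import math

def winding_number(omega, K, n=10000):
    """Average rotation per step of the standard circle map (lifted to the line)."""
    theta = 0.0
    for _ in range(n):
        theta = theta + omega - (K / (2 * math.pi)) * math.sin(2 * math.pi * theta)
    return theta / n

# K = 0: no coupling, so the winding number is exactly Omega
assert abs(winding_number(0.4, 0.0) - 0.4) < 1e-9
# K = 1: Omega = 0.5 sits inside the 1/2 mode-locked tongue
assert abs(winding_number(0.5, 1.0) - 0.5) < 1e-3
```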

What benefit is gained by looking at chaos in this way? For one, it gives a way of classifying a system. How many different core types are there? Who knows? Another reason is that the Cantor set has a remarkable set of symmetries or self-similarities: these are given by a monoid subset of the modular group. The modular group itself is an example of a Fuchsian group, which is a case of a Kleinian group, and these are in turn connected to huge branches of mathematics and physics in various deep ways. Thus, it should be possible to use the symmetries of the Cantor set to study and classify, e.g., the scale dependence of the Lyapunov exponent (a topic which is currently very vague: Mandelbrot hand-waves about multifractal measures, others talk about the scaling of dripping faucets, and that's all).

Perhaps all dynamical systems can be understood to be orbifolds, where the group action is not time evolution, but is rather the action of the SL(2,Z) monoid. One should then ask for the geometric invariants and indexes that apply to the dynamical system. These invariants have yet to be discovered or given a name.

(Co-)Homology of operators

Consider the Hankel transform of a sequence. This can be considered to be an operator on the Banach space of sequences. It has a kernel, i.e. a number of sequences for which it is zero. Some of the sequences in the kernel can be generated by the binomial transform. Three questions, then: 1) What is the description of the complete kernel for this transform? 2) What is the structure of the quotient space? 3) What is the general theory for dealing with the kernels and quotient spaces of operators?
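
The binomial-transform remark can be checked numerically: the Hankel transform of a sequence (the sequence of determinants of its Hankel matrices) is invariant under the binomial transform, an observation due to Layman. A sketch (mine), using exact rational arithmetic:

```python
from fractions import Fraction
from math import comb

def det(m):
    """Determinant by Gaussian elimination over exact fractions."""
    m = [[Fraction(x) for x in row] for row in m]
    n, d = len(m), Fraction(1)
    for i in range(n):
        p = next((r for r in range(i, n) if m[r][i]), None)
        if p is None:
            return Fraction(0)
        if p != i:
            m[i], m[p] = m[p], m[i]
            d = -d
        d *= m[i][i]
        for r in range(i + 1, n):
            f = m[r][i] / m[i][i]
            for c in range(i, n):
                m[r][c] -= f * m[i][c]
    return d

def hankel_transform(a):
    """h_k = det of the (k+1) x (k+1) Hankel matrix [a_{i+j}]."""
    n = len(a) // 2 + 1
    return [det([[a[i + j] for j in range(k + 1)] for i in range(k + 1)])
            for k in range(n)]

def binomial_transform(a):
    return [sum(comb(n, k) * a[k] for k in range(n + 1)) for n in range(len(a))]

cat = [comb(2 * n, n) // (n + 1) for n in range(9)]   # Catalan numbers
# Invariance: the binomial transform leaves the Hankel transform unchanged
assert hankel_transform(cat) == hankel_transform(binomial_transform(cat))
assert hankel_transform(cat) == [Fraction(1)] * 5     # Catalan Hankel dets are all 1
```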

Wacky thought: The Inverse Rademacher Problem

In 1937, Hans Rademacher gave an explicit series for the partition function p(n) of number theory. The partition numbers occur in the series expansion of Euler's infinite product:

1 / Π_{n≥1} (1 − q^n) = Σ_{n≥0} p(n) q^n

The product on the left is very highly singular on the unit circle: there is a pole at every possible root of unity, at every location where q^n = 1 for some n. Rademacher's achievement was to give an explicit expression for the p(n) on the right-hand side.
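
As an aside, the p(n) themselves can be computed without the full Rademacher machinery; a sketch (mine) of Euler's pentagonal-number recurrence p(n) = Σ_{k≥1} (−1)^(k+1) [ p(n − k(3k−1)/2) + p(n − k(3k+1)/2) ]:

```python
def partition_numbers(n_max):
    """p(0)..p(n_max) via Euler's pentagonal-number recurrence."""
    p = [1] + [0] * n_max
    for n in range(1, n_max + 1):
        k, total = 1, 0
        while True:
            # generalized pentagonal numbers k(3k-1)/2 and k(3k+1)/2
            for g in (k * (3 * k - 1) // 2, k * (3 * k + 1) // 2):
                if g > n:
                    break
                total += (-1) ** (k + 1) * p[n - g]
            if k * (3 * k - 1) // 2 > n:
                break
            k += 1
        p[n] = total
    return p

p = partition_numbers(100)
assert p[:8] == [1, 1, 2, 3, 5, 7, 11, 15]
assert p[100] == 190569292     # the value Hardy and Ramanujan made famous
```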

Let us now consider a sum of the form

f(z) = Σ_{n≥0} a_n z^n

with the coefficients a_n some "arbitrary" constants. An afternoon spent graphing this function on the unit disk will convince one that "almost all" "random" sequences a_n yield functions that are highly singular on the boundary of the unit disk, with poles and zeros seemingly densely interleaved along the unit circle. A bit of staring will also hint that these are arranged in some fractal, self-similar pattern that is a bit hard to put one's finger on, but is clearly present when graphed visually.

The "inverse Rademacher problem" is then to describe the behaviour of f(z) on the unit circle, in terms of a product of poles lying on the unit circle. By "describe", I mean to prove/clarify the following connections:

  • Prove that "almost any" sequence a_n really does result in a function whose poles are dense on the unit circle. This is a kind of ergodic theorem for analytic functions. Clearly, this excludes the entire contents of Hardy space.
  • The poles of Euler's product occur at all rational angles. The unit circle, with the rational angles removed, forms a fat Cantor set, with measure one. Then, given "almost any" a_n, does a similar structure result? Given an explicit set of a_n, can a "diffeomorphism" between the two Cantor sets be given explicitly? That is, can we enumerate the poles on the edge of the disk, in the same way that we can enumerate the disjoint bits of the Cantor set? Since elements of the Cantor set can be enumerated with a string of binary digits x, can one give an explicit function giving the angular location of a pole as a function of x?
  • The Cantor set can be envisioned as an infinite binary tree. The tree can be manipulated via a set of rotation operators, which turn out to be hyperbolic remappings on the real number line. These rotations form a monoid inside the modular group SL(2,Z). It is this monoid that gives the Cantor set its fractal structure. An important "generalization" of the modular group are the Fuchsian groups. Conjecture: given "almost any" a_n, there is a corresponding Fuchsian group that shuffles and exchanges poles for poles on the unit circle. Given an explicit set of a_n, can this group be specified explicitly? Is this group adequate to explain the fractal-like behaviour of f(z) near the unit circle? What form does it take?
--linas 04:52, 23 March 2006 (UTC)


Measure theory/Cantor set conjecture

The following conjecture is either wrong, or is a theorem in some textbook, with someone's name attached to it. I'm trying to figure out which. Conjecture: the space of all measures on the unit interval is homeomorphic to the space of measures on the Cantor set, and is isomorphic under the equivalence classes given by the action of the Fuchsian groups.

A handwaving proof starts with a lemma: the unit interval, with the set of all rational numbers removed, is isomorphic to the Cantor set. To prove the lemma, note that the unit interval with the set of all dyadic rationals removed is isomorphic to the Cantor set; this is the so-called "fat" Cantor set, one whose measure equals one. But it's well known that the dyadic rationals are mapped to the rationals by means of the Minkowski question mark function, so this completes the proof of the lemma. The lemma has a corollary: any function defined on the Cantor set is isomorphic to a function defined on the unit interval with the rationals removed. Replace the word "function" by the word "measure", and we are done with part one.

What happens if we are interested in discontinuities that are not at the rationals, but are at other numbers (e.g. various transcendentals, etc.)? Lemma: Any function on the unit interval which has at most ℵ₀ discontinuities can be remapped to a function whose discontinuities lie at the rationals. Here, ℵ₀ is the countable infinity, aleph-null. Furthermore, this mapping can be explicitly given by a Fuchsian group. The mapping is in a certain sense "measure-preserving"; furthermore, the Fuchsian group that provides the map is the unique one that preserves the measure (i.e. I think there's an isomorphism there as well). This concludes the hand-waving proof.

Clearly, there are details to be spelled out, but if you suspend disbelief for a moment, I think there are broad applications in ergodic theory and related areas. The theorem states basically that, in order to understand the motion of an iterated map on the unit interval, it is sufficient to understand a corresponding shuffle of the elements of a Cantor set (or more precisely, a restructuring of the infinite binary tree), together with the Fuchsian group that maps the binary tree into the unit interval. More generally, the theorem, if true, would explain some general features seen in iterated systems: period doubling (the Cantor set as a binary tree) and the occasional appearance of the Farey fractions (this is just the mapping of the question mark function at work).

linas 17:19, 19 June 2006 (UTC)

Integrable systems

It is known, but perhaps not well-known, that a separable state in quantum mechanics corresponds to the Segre variety in algebraic geometry. The reason for this is almost trivial: the Segre map is the categorical product on the category of projective varieties. That is, the product of any two projective varieties is given by the Segre map. Now, projective space, endowed with the obvious inner product, the Hermitian form, is just projective Hilbert space, which is the natural space for discussing quantum mechanics.
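For the smallest case this is completely explicit: the Segre embedding of P¹ × P¹ into P³ is cut out by one quadric, and a two-qubit state is separable precisely when its amplitudes satisfy that quadric. A toy Python check (function names are mine):

```python
def on_segre_quadric(c, tol=1e-9):
    """A two-qubit amplitude vector (c00, c01, c10, c11) lies on the Segre
    variety (i.e. is a separable product state) iff c00*c11 - c01*c10 = 0."""
    c00, c01, c10, c11 = c
    return abs(c00 * c11 - c01 * c10) < tol

def product_state(a, b):
    """Segre map: amplitudes (a0, a1) and (b0, b1) -> their tensor product."""
    (a0, a1), (b0, b1) = a, b
    return (a0 * b0, a0 * b1, a1 * b0, a1 * b1)
```

Any output of the Segre map passes the test; the Bell state (1, 0, 0, 1)/√2 fails it, i.e. it is entangled.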

Now, separability is important for quantum mechanics because it typically allows a problem to be easily solved, e.g. by separation of variables. Mathematicians have fancier words for this: a system of equations is integrable if there exists a foliation for the phase space on which the equations are formulated.

Another curious but little-known fact is that the Schroedinger equation can be expressed as the Hamiltonian flow of an ordinary energy functional on projective Hilbert space. This gives the Schroedinger equation a nice geometrical interpretation. Doubly so, as Hamiltonian flows are essentially classical objects. The probability of wave function collapse is given by a distance metric, the Fubini-Study metric.

So far, everything above is known fact. Now, here's the crazy thought: Can a quantum mechanical meaning be assigned to some general projective variety? It makes sense that this might be so: projective varieties behave sort-of like foliations; this is the content of Hilbert's Nullstellensatz and the coordinate ring. If they were true foliations, that would imply that a system of equations becomes integrable on the surface of a variety. Furthermore, it is a basic principle that the intersection of any two varieties is a variety, which is something one wants of a foliation as well. Thus, do varieties generalize the notion of separation of variables? The natural topology for algebraic varieties is the Zariski topology; is a broadening of the definition of this topology also the natural topology for the discussion of both foliations and for integrability conditions for differential equations?

Further connections between operator theory and algebraic varieties show up in K-theory and the theory of schemes and locally ringed spaces. One talks of the spectrum of a ring, which bears resemblance to spectral theory in general: there is the classical correspondence between commutative algebras and operators.

linas 16:39, 5 July 2006 (UTC)

Rogue waves, solitons and mesons

One of the hot science news topics in oceanography is the acknowledgement that giant rogue waves exist, and are probably responsible for most deep-water maritime disasters. The waves are relatively long-lived (minutes to hours), but are not the traditional solitons seen in, for example, tidal bores or tsunamis. As such, they may possibly represent a new class of solutions to non-linear differential equations, existing in some netherworld between solitons and the linear waves that are solutions to the wave equation.

Now, for the crazy thought: there is, in fact, a very similar phenomenon in particle physics, and this phenomenon has a name: the waves are called mesons. Here's the hand-waving proof. All baryons are conjectured to be solitons; this has been recognized since the 1960's, when T.H. Skyrme showed that the nucleon (aka proton/neutron) may be described by a topological soliton, the Skyrmion. The Skyrmion not only "explains" the stability of the baryon, but also gives the correct mass (to within 20%) and axial coupling constant (to within 10%). When coupled to quarks, such as in the chiral bag model, the agreement is further improved. The field from which the Skyrmion is constructed is the pion field. The novelty of Skyrme's construction is to treat the pion field not as the linear Klein-Gordon field commonly used in quantum field theory, but with a non-linear sigma model. The non-linearity is what allows the soliton solutions to exist.

Now, here's the kicker: besides the pions, there are other, shorter-lived mesons as well. These mesons are normally described in terms of the quark model: they consist of a u-dbar quark pair in some excited, metastable state. They are fairly long-lived, and have a definite mass. But they are not solitons: they decay. In fact, there is (currently) no known solution of the non-linear sigma model that describes these mesons. In fact, the non-linear sigma model does not even account for the pion mass or lifetime either.
To understand the pion, one must work with the quark model. So, here's my crazy hypothesis: is it possible that there are "freak wave" solutions to the non-linear sigma model, and that these freak waves describe the mesons? Conversely, when the mathematics behind freak waves in deep ocean waters is finally developed and understood, will these solutions be found to correspond to resonances, having a metastability and a characteristic lifetime and decay profile? More precisely, will it be shown that there is no continuum of freak waves, and that rather, they are "quantized", occurring only with certain, distinct parameters labelled by integers? (Critique: the lifetime of the charged pion has to do with the fact that quark mass eigenstates are not weak interaction eigenstates; reconciling this with a "freak wave" description seems absurd. Similarly, the neutral pion decays into a pair of photons, and not into small-amplitude, dissipating Klein-Gordon waves. Reconciling this seems crazy as well.)

linas 17:08, 12 July 2006 (UTC)

Fishing for quanta across the event horizon

The article on event horizons was recently re-written, and the question came up "can one touch the event horizon by extending a pole to it?" and "what happens if one dangles a fishing line down across the event horizon?" If you are not sure of the answers to these questions, then the article on event horizons contains more or less the right answer, at least as of July 2006. The following comes from Talk:Event horizon:

There's an entertaining way to try to intuit the physics behind the above questions. Suppose one is a distant observer, far from a black hole. Suppose one drops a fishing line with hook down towards the black hole, with the hook crossing the Schwarzschild radius. Now one gives the line a tug, attempting to slow the fall of the whole thing. As the line slows, the event horizon for much of the line appears to recede off to the distance. As the hook is on the other side of the Schwarzschild radius, the receding horizon ends up stretching the line. As the line is stretched longer and longer, it's perhaps not surprising that it eventually breaks. By pulling the line hard enough to stop the fall, the event horizon recedes to infinity, and the fishing line would need to get stretched to infinity, clearly an impossibility.

This picture is doubly nice because it provides intuition about the microscopic mechanism of breaking: the fishing line breaks because each of the atoms in the fishing line gets pulled apart from its neighbours, as the distances between them increase as the line is slowed down. The unbounded stress that the fishing line feels is now not at all mysterious: the atoms are simply moving away from each other. It would perhaps be entertaining to try to understand quantum mechanics based on this insight, as it is simple, direct, and mechanical. The simplest model must surely be solvable: ask what the 1-D quantum harmonic oscillator does if the one and only spatial dimension is slowly stretched. I'm guessing that a system in the ground state evolves continuously into a superposition of excited states, and thus (if it were an electrically charged harmonic oscillator) must decay by a cascade of photons. That is, a slow stretching can be viewed as a perturbation (solvable via perturbation theory) that mixes the ground state with the excited states. Thus, a ground-state oscillator, when stretched, is no longer in the ground state. Perhaps this is also a simple and direct way of understanding Unruh radiation? Perhaps this is an entirely equivalent formulation of Unruh radiation? Anyone care to work the problem and post the results somewhere? Interesting ...
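As a toy version of the stretching argument, one can check the sudden approximation: if stretching amounts to changing the oscillator frequency from ω₁ to ω₂, the old ground state has overlap less than one with the new ground state, so the remaining probability necessarily goes into excited states. A rough numerical sketch (ħ = m = 1; the frequencies and quadrature parameters are illustrative choices, not derived from the black-hole setup):

```python
import math

def psi0(x, w):
    """Harmonic-oscillator ground-state wavefunction, hbar = m = 1, frequency w."""
    return (w / math.pi) ** 0.25 * math.exp(-w * x * x / 2)

def ground_state_overlap(w1, w2, n=4000, half_width=10.0):
    """Numerical overlap <0_w2 | 0_w1> by a plain Riemann sum on [-L, L]."""
    dx = 2 * half_width / n
    return sum(psi0(-half_width + i * dx, w1) * psi0(-half_width + i * dx, w2)
               for i in range(n)) * dx
```

The sum reproduces the closed form √2 (ω₁ω₂)^{1/4} / √(ω₁+ω₂); for ω₁ = 1, ω₂ = 2 the overlap is about 0.97 < 1, so roughly 6% of the probability has leaked into excited states.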

linas 04:58, 10 July 2006 (UTC)

The secret fractal life of quantum field theory

It is a little-known fact that one-dimensional lattice models, such as the Potts model, have a self-similar fractal aspect to them that is rarely discussed or even mentioned. More profoundly, this hints at a broader phenomenon, which seems to be discussed not at all.

The one-dimensional Ising and Potts models have the remarkable property that their configuration space can be mapped to the real number line. Consider, for example, the Ising model on a lattice with 32 locations. Each location can be spin-up or spin-down: a binary value (more generally, a p-adic number). Every possible configuration of the 32-position lattice can be represented by a 32-bit binary number: a 32-bit int on the computer. Divided by 2^32, this maps to the unit interval. Given a point on the unit interval, there is a unique corresponding configuration of the system. Compute (for example) the energy of that configuration: plot the energy along the real number line. The resulting graph looks horridly discontinuous everywhere; on closer inspection it can be recognized as a fractal belonging to a certain well-known class of fractals: the derivative of the Minkowski question mark function (the resemblance is more easily seen by integrating along the real number line). "Resemblance" is the right word: it comes quite close, but is not quite exactly the question mark function, as some experimentation will show. To be precise, the resemblance is strongest for the Kac model, a variant of the Potts model with infinite-range forces. (See User:Linas/Lattice models for discussion/development)
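A minimal sketch of the construction just described (free boundary conditions; the lattice size and coupling are illustrative choices):

```python
def ising_energy(k, n, j=1.0):
    """Energy of the 1-D Ising configuration encoded by the bits of k:
    bit i gives the spin (+1/-1) at site i; nearest-neighbour coupling j,
    free boundaries: E = -j * sum_i s_i s_{i+1}."""
    spins = [1 if (k >> i) & 1 else -1 for i in range(n)]
    return -j * sum(spins[i] * spins[i + 1] for i in range(n - 1))

def energy_curve(n, j=1.0):
    """Sample (x, E(x)) over the whole configuration space, x = k / 2^n."""
    return [(k / 2 ** n, ising_energy(k, n, j)) for k in range(2 ** n)]
```

Cumulative-summing the second coordinate of `energy_curve(n)` for moderate n already displays the question-mark-like staircase described above.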

The question mark function is not only a "self-similar fractal", but its self-similarities can be precisely expressed by the action of a dyadic groupoid subset of the modular group: it's self-similar in a certain precise way. From this point, we jump. A little bit of experimentation shows that just about any local interaction between sites in the lattice will result in a fractal-like curve. How can the action of the groupoid on such a curve be defined? Unclear. Do these ideas generalize to two and higher-dimension lattices? Clearly yes, although the representation on the real-number line no longer holds (well, one could use a space-filling curve, but that's not the point). What is the groupoid describing the self-similarity of the resulting object? Unclear. Can one use this technique to gain insight, or even solve previously unsolvable models? Unclear. Traditionally, symmetry groups in physics are associated with conserved currents and charges, via Noether's theorem. Here, the action is not of a group, but a groupoid, and it is discrete, not continuous. Actually, quasi-continuous: it has a continuous limit. What are the currents, charges and conserved quantities associated with the action of the groupoid? Unclear. But given that any stochastic model or quantum field theory would appear to have this kind of fractal structure in its configuration space, it seems worthwhile to explore this idea further.

linas 05:39, 30 August 2006 (UTC)

Using QFT to understand differential equations

There are several interesting and difficult aspects to the study of differential equations. Given a particular solution, is it stable against perturbations? If the equation is chaotic, how does one describe solutions? How do they behave under perturbations? Consider, then, the space of all possible "small" perturbations of a solution of a differential equation. In physics, this space is the space on which quantum field theory is formulated: it is the very very large space on which Feynman path integrals and partition functions are set. In a certain sense, one can say that, by performing the second quantization of field theory, one is developing a framework for the discussion of all possible perturbations simultaneously. What I am noting here is that this quantization can be done not only for Klein-Gordon scalar fields on four-dimensional space-time, but for any differential equation, in any number of dimensions. By making this change of focus, can one gain insight into integrability conditions for differential systems? Can one gain insight into non-integrable systems? In a certain sense, this is exactly the research program that is driving the study of quantum groups, the Yang-Baxter equation, non-commutative geometry and myriad related topics. Unfortunately, these topics are often so dense and difficult that, usually, one loses the forest for the trees when studying them.

linas 05:39, 30 August 2006 (UTC)

How to prove irreversibility

How can one show that a dynamical system with deterministic, time-reversible (i.e. energy-conservative) equations of motion can somehow become irreversible? I think this question now has a clear answer in principle, even if actual examples are hard to prove in practice. Here's how it's done:

First one must prove that a solution to the equations of motion is ergodic over some portion of phase space. Ergodicity implies that the trajectory of a classical "point particle" will be dense on some subset of phase space of non-zero measure. Here, the motion of a "classical point particle" should be understood to mean a single solution to the differential equations subject to some initial conditions. A trajectory that is dense can be visualized as a space-filling curve. When a trajectory is dense, one is justified in considering the closure of this set. Consider, for example, a discrete dynamical system that can only take values that are rational numbers. The rationals are dense in the reals, and the closure of the rationals may be taken to be the reals (well, also the p-adics...). The natural topology of the reals is fundamentally different than that on the rationals. In particular it is finer, which means that there are fewer functions that can be called smooth. When one considers physical observables on the closure, (such as correlation functions, densities, spectra and the like), these observables will usually have a radically different structure on the closure, than they do on the dense subset.

The canonical example of something that has a radically different form on the closure of a dense set, as compared to the dense set itself is the transfer operator. When the trajectories of a time-reversible dynamical system are considered, the transfer operator will be unitary, encapsulating the reversibility. The eigenvalues of the transfer operator will lie on the unit circle on the complex plane. But when one considers the transfer operator on the topological closure, one discovers that its spectrum is quite different, typically having eigenvalues that are not on the unit circle. Eigenvalues whose magnitude is less than one correspond to states that decay over time. Such eigenvalues are typically paired with others whose magnitude is greater than one: these are the states that "blow up" in the future, and are thus non-physical. In this way, one is forced into a situation which is manifestly irreversible: one can only keep the decaying states (and the steady states). (The eigenvalue one corresponds to the steady state(s): this is the largest allowed eigenvalue, and is also why the transfer operator is sometimes called the "Frobenius-Perron operator", in reference to the Frobenius-Perron theorem.)
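The standard toy example of this spectral change is the transfer operator of the Bernoulli (bit-shift) map x → 2x mod 1. On L² its spectrum is the whole closed unit disk, but restricted to polynomials (standing in here for the "smooth functions on the closure") the operator becomes triangular with decaying eigenvalues 2⁻ⁿ, the Bernoulli polynomials being the eigenfunctions. A small sketch:

```python
from math import comb

def transfer_matrix(d):
    """Matrix of the Bernoulli-map transfer operator
    (Lf)(x) = (f(x/2) + f((x+1)/2)) / 2
    on polynomials of degree <= d, in the monomial basis.  Since
    L x^n = 2^(-n-1) * (x^n + (x+1)^n), the matrix is triangular and its
    eigenvalues are the diagonal entries 2^-n."""
    M = [[0.0] * (d + 1) for _ in range(d + 1)]
    for n in range(d + 1):
        for k in range(n + 1):
            M[k][n] += comb(n, k) * 2.0 ** (-n - 1)   # expand (x+1)^n
        M[n][n] += 2.0 ** (-n - 1)                    # the extra x^n term
    return M
```

All eigenvalues except the leading 1 (the steady state, constant function) lie strictly inside the unit circle: the decaying states.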

So under what conditions is it acceptable to consider the closure of a dense ergodic set? It is acceptable when one can show that, by varying the initial conditions of the PDE's (but keeping the energy constant), the resulting trajectories result in the closure. That is, when the union of all possible trajectories forms the closure.

So has this sequence of steps ever been done for anything resembling a realistic physical system? I believe it has: it has been done for the Kac-Zwanzig model. This is a simple model of a single one-dimensional classical particle moving in a fixed, external potential, that is weakly coupled to N harmonic oscillators. It is completely deterministic and energy-conservative, even in the case where the frequencies of the harmonic oscillators are more-or-less uniformly and randomly distributed. In such a case, the set of oscillators can be envisioned as a model of a heat bath weakly coupled to the moving particle. For most fixed, external potentials, the system is chaotic but not ergodic (right?) for finite N. But something interesting happens in the limit of large N. It has been recently shown by Gil Ariel in a PhD thesis that this system has a strong limit theorem, a strong form of a central limit theorem: that, when averaging over all possible initial conditions, the system behaves as a stochastic process. Now, stochastic processes and measure theory are really the same thing, just having two different vocabularies. A strong limit theorem is a theorem that states that the closure of a set of trajectories is equal to the union of the set of trajectories taken over all initial conditions -- this is exactly what we want. In essence, Ariel has proven the time-irreversibility of a more-or-less realistic system that is purely deterministic and reversible for finite N. A similar form of irreversibility has been proven before for much simpler, less "physically realistic" systems: the Baker's map, and I believe all Axiom A systems.

(Since WP has no article on strong limits: an example of a strong limit theorem is strong mixing: the core idea is that one can freely interchange the order in which two different limits can be taken, with the difference being at most a set of measure zero).

The general study of irreversibility boils down to finding ergodic systems, and proving strong limit theorems on them. Off-topic, but I am very intrigued by the fact that the "classical" exactly-solvable ergodic systems, e.g. Anosov flows and, in general, motion in homogeneous spaces, have a rich self-similar structure, and, when graphed, can strongly resemble Brownian motion. I'm also intrigued that there are integrable systems with highly non-trivial constants of motion: e.g. the Lax pairs of the Toda lattice. Hmmm.

Some people say that "in real life", it's quantum mechanics that describes nature. So, a brief note: what happens when one quantizes a deterministic but ergodic dynamical system? Well, one finds that the spacing between energy levels goes as 1/N, and one finds that the wave functions are fractal, and space-filling. Here, "fractal and space-filling" means that, even for very small volumes of space, one can find a large number of wave functions all of whose probabilities are significant (and not exponentially small). They are also "space filling" in the sense that, by properly choosing phases between them, they can be made to destructively interfere over large regions. This has been shown for deterministic diffusion, and for dynamical billiards. The net result of having fractal wave functions with tightly-spaced energy levels is that it becomes (literally) impossible to prepare a pure state. That is, the only states that can be prepared are states which, over time, will come to occupy the entire space: that will macroscopically have the appearance of diffusing and spreading, despite the fact that the time evolution is completely and utterly unitary. The unitary evolution simply makes the wave functions (which occupied all of space) no longer interfere destructively, thus giving the appearance of spreading (when in fact "they were there all along"). The closely-spaced energy levels means that the ratio of energies between two energy levels seems to be irrational, and thus one has what looks like decoherence.

That is how I currently understand irreversibility in physics and mathematics.

linas 03:46, 5 October 2006 (UTC)

How to prove the Riemann hypothesis

The Riemann hypothesis arises when a stronger topological separation axiom is imposed on the topologies that commonly occur in abstract algebra, such as on the spectrum of a ring. In particular, there is a short exact sequence that maps spectra of operators to the kernels of the next space in the sequence, having the flavour of Fredholm theory. The spectrum of the operator in the Hilbert-Polya conjecture occurs early in this exact sequence, and is in this sense "dual" to the "usual spectrum" (obeying the weaker separation axiom). (Of course, in the prototypical case, the spaces are the spaces of functions on the Cantor set, which is pervasive in chaotic dynamical systems, and also occurs as the limiting set for cusps of cusped Riemann surfaces, modular forms, the Fuchsian group, etc. But the general case is more general). (Thus, for example, the eigenfunctions for the simple harmonic oscillator form a discrete spectrum only when square-integrability is imposed; otherwise the spectrum is continuous and is defined on the entire complex plane.) Remember: even if I can't (yet) prove this, you heard it here first.

linas 17:48, 25 November 2006 (UTC)

Fundamental group of Sierpinski carpet

What is the fundamental group of the Sierpinski carpet? The Menger sponge? For some of the difficulties in determining this, see the article on the Hawaiian earring.

linas 16 December 2006

Homologies of computing systems

I've recently been reading about what I thought were two completely unrelated concepts: parallel computation and algebraic topology. Hah! More fool me! They are deeply related!

The centerpiece of algebraic topology is homology theory, which, in the abstract, studies chain complexes over free abelian groups. Algebraic topology is notable, as it is one of the very first areas of mathematics from which the ideas of category theory sprung. Now, category theory, in its most basic and primitive form, uses certain common algebraic structures to make statements about other algebraic structures: it's algebra applied to algebra. One of its core ideas is the monoid, which is a category with one object (but many morphisms, whose compositions form a monoid). Now, a monoid is like a group, except it doesn't have inverses. Like a group, a monoid can act on a set, almost exactly in the same way as a group action. This is called an act. Just as monoids are categories with one object, acts are... prototype categories: they have many objects and morphisms. Now, here's the punchline: acts are also one and exactly the same thing as semiautomata, which are finite state machines, but with no output alphabet. Hmm!
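The act/semiautomaton identification is concrete: a transition table is precisely an action of the free monoid of input words on the state set. A toy sketch (names are mine):

```python
def act(delta, state, word):
    """Apply a word (string over the input alphabet) to a state:
    the free monoid of words acts on states by composing transitions."""
    for letter in word:
        state = delta[(state, letter)]
    return state

# Toy two-state semiautomaton over alphabet {'0', '1'}: '1' toggles, '0' holds.
delta = {('s0', '0'): 's0', ('s0', '1'): 's1',
         ('s1', '0'): 's1', ('s1', '1'): 's0'}
```

The action property holds by construction: acting by the concatenation w1 + w2 is the same as acting by w1, then by w2, which is exactly the monoid-act axiom.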

Wait, there's more! The problem with finite automata is that they are not enough to describe parallel computation. To study parallel computation, one needs to pass messages between computers; finite automata are not enough, and one needs to have the concept of communicating sequential processes. Now, how does one get there? One takes a monoid (gasp!) and makes some of its elements commute: i.e. one abelianizes portions of the monoid. The construction is due to Antoni Mazurkiewicz, and is known to computer scientists as a trace monoid. Let's see... something abelian, related to categories... where have we heard of that before? This begs a number of questions. What if one applies the Grothendieck group construction to the abelian portion of a trace monoid, and then uses this abelianized bit to build a K-theory?
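Equality of words in a Mazurkiewicz trace monoid can be decided concretely via the Cartier-Foata normal form, which groups a word into successive "steps" of pairwise-commuting letters. A small Python sketch (names are mine; `independent` lists the commuting pairs):

```python
def foata_normal_form(word, independent):
    """Cartier-Foata normal form of a word in a trace monoid.
    `independent` is a set of frozenset pairs of letters that commute.
    Two words are trace-equivalent iff their normal forms coincide."""
    def dependent(x, y):
        return x == y or frozenset((x, y)) not in independent
    levels = []                        # each level: pairwise-independent letters
    for x in word:
        lvl = 0
        for i in range(len(levels) - 1, -1, -1):
            if any(dependent(x, y) for y in levels[i]):
                lvl = i + 1            # must sit above the highest dependent letter
                break
        if lvl == len(levels):
            levels.append([])
        levels[lvl].append(x)
    return tuple(tuple(sorted(level)) for level in levels)
```

For instance, with a and b commuting, 'ab' and 'ba' normalize to the same single step, while 'aa' needs two steps since a letter never commutes with itself.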

Recall that the Grothendieck group is a construction that takes an abelian monoid, and defines a formal inverse, so that one obtains an abelian group. Curiously, there is an analogous construction for general, non-abelian monoids. This construction again defines formal inverses, which are in fact monoid morphisms, and can be used to construct the quotient monoid. The result, however, is not always a group. This quotient monoid is the syntactic monoid, and is in fact the set of strings that are recognized by a finite state machine. Thus, in this strange sense, a finite state machine is the general, non-abelian generalization of the Grothendieck group!

Can one define some sort of connecting homomorphism on trace monoids, and use the snake lemma to build up a long exact sequence out of the short exact sequences that monoids already have? What is the resulting theory? Do the non-commuting parts of the trace monoid imply that it is a theory of non-commutative geometry, or simply that it is a theory of petri nets and distributed computing? Which is it?

There's another way to make this connection. The word problem in algebra is the problem of determining when two strings of letters describe the same algebraic object. It's been known for quite a long time that the word problem for groups is unsolvable: it can be recast in terms of formal language theory, and specifically in terms of Turing machines, and the undecidability of the halting problem then shows that there are some groups with unsolvable word problems. So the connection between algebra and computation is already there, in this particular domain. Is the connection broader? I think it is: the word problem for trace monoids is surely expressible in the language of non-commutative geometry. All the right pieces are there; what's missing are the grand theorems.

There is in fact yet another well-known, existing connection between algebra and computation. In algebraic geometry, one has the notion of a variety, which is a geometrical object described by a series of algebraic equations. One of the hard problems in algebraic geometry is the problem of finding a nice, orthogonal set of equations, the Gröbner basis, for describing the variety. The difficulty of finding Gröbner bases has led to the study of algorithms for their computation. In a certain hazy sense, one can say that the problem of finding Gröbner bases is just the word problem on algebraic varieties; and one has some traction in this connection to computer science, as there are even various algorithms that can be explicitly studied.

Funny thing is: the study of algebraic geometry leads to the study of schemes, which in turn leads to algebraic stacks and the derived category. But wait... the derived category is something from homotopy theory. So once again, the question arises: can one make a derived category, but not over abelian categories, but instead over trace monoids, which agrees perfectly with the derived category on the abelian part of the trace monoid? Who knows? Where shall I find the time to find the answer?

linas 19 April 2007

Quantum lambda calculus

The study of integers leads naturally to the idea of modular forms. Modular forms themselves can be understood as a certain type of analytic function, or structure, on infinite binary trees. This follows because the structure of the modular group can be understood to be a binary tree: this is explicitly visible in the self-similarity of various modular functions, such as the Euler function. The study of modular forms can be generalized a bit, for example, to automorphic forms. In a certain sense, the binary tree is what is left over after ignoring the group structure of the modular group.
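The binary-tree structure hiding inside the modular group is made explicit by the Stern-Brocot tree: the free monoid on two matrices L and R inside SL(2,Z) enumerates the vertices of an infinite binary tree, one positive rational per vertex. A small sketch:

```python
def stern_brocot(bits):
    """Walk the Stern-Brocot tree: '0' = left step (matrix L = [[1,0],[1,1]]),
    '1' = right step (matrix R = [[1,1],[0,1]]).  The free monoid on {L, R}
    inside SL(2,Z) enumerates the tree; the vertex reached is the mediant
    (a+b)/(c+d) of the accumulated matrix [[a, b], [c, d]]."""
    a, b, c, d = 1, 0, 0, 1            # start at the identity matrix
    for bit in bits:
        if bit == '0':                 # multiply on the right by L
            a, b, c, d = a + b, b, c + d, d
        else:                          # multiply on the right by R
            a, b, c, d = a, a + b, c, c + d
    return (a + b, c + d)              # (numerator, denominator)
```

Every positive rational appears exactly once, in lowest terms: the root '' gives 1/1, '0' gives 1/2, '1' gives 2/1, '01' gives 2/3, and so on.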

Now the infinite binary tree is a special case of a free object. Another interesting example of a free object is a term algebra. Can the concept of a modular form (as an analytic structure on a tree) be extended to an analytic structure on a term algebra? That is, can the concept of modularity be extended to structures that are not groups, structures that lack inverses, or at least, might not be closed under inverses? Or is the concept of modularity strictly limited to groups? Now, combinatorial analysis takes some steps in this direction when it looks at analytic structures anchored on combinatorial identities, but it does not go all the way.

To go "all the way" would require finding an analytic structure for embedding all of lambda calculus. One can do this in a naive (and unsatisfactory) kind of way by mapping the syntax tree of a given lambda expression onto a binary tree. However, the syntactic structure seems to get glossed over, so one wants to do a bit better, and explicitly exhibit the self-similarity of recursive functions (e.g. as studied in denotational semantics).

Achieving this would be an interesting mind-bender, since lambda calculus is naturally set in Cartesian-closed categories, and of course, maps to all computation via Turing machines. The goal is to provide a bridge across the space of "discrete things" -- the finite mathematics of computation, to the space of things that live in the cardinality of the continuum, analytic functions. By analogy, "integers are to modular forms as lambda calculus is to what?"

There is a sort-of bridge there, already, but it's dual, in a sense: this is the idea of topological finite automata as generalizations of quantum finite automata, which are in turn generalizations of non-deterministic finite automata. An interesting program would be to extend this generalization to full-fledged Turing machines, and then, via the Church-Turing thesis, bridge across to lambda calculus. That is, define what a "quantum lambda calculus" would be.
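For reference, the finite end of that bridge is easy to sketch: a measure-once quantum finite automaton drives a unit vector around by one unitary per input letter, and accepts with the squared amplitude left in an accepting subspace. A one-qubit toy (the rotation angle is an illustrative choice):

```python
import math

def qfa_accept_prob(word, theta=math.pi / 4):
    """One-qubit, measure-once quantum finite automaton: each letter 'a'
    rotates the state vector by theta; the acceptance probability is the
    squared amplitude remaining on the accepting basis state |0>."""
    c, s = 1.0, 0.0                            # start in |0>
    ct, st = math.cos(theta), math.sin(theta)
    for letter in word:
        if letter == 'a':
            c, s = ct * c - st * s, st * c + ct * s
    return c * c
```

With theta = pi/4, the empty word is accepted with probability 1, a single 'a' with probability 1/2, and 'aa' with probability 0: acceptance is not a 0/1 matter, which is exactly how QFAs generalize nondeterministic acceptance.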

linas 8 April 2008

The origins of Quantum Mechanics

In some deep sense, Hamiltonian systems, such as symplectic geometry and contact manifolds, satisfy the Markov property. That's essentially what the time derivative is doing: it's saying "forget the distant past, only the immediate past matters". But if they are Markovian, this means that the Ornstein isomorphism theorem must apply to these systems. Well, of course, there are specific examples, Sinai's billiards and the Anosov flow. But more generally? What does this mean? Well, there must be some irreducibly ergodic nub or core to the system. Probabilities arise. But wait: we know, from the Fisher information metric, that, when there's a probability, the correct description really involves the square root of the probability! But the square root of the probability is just the quantum wave function. That's where quantum mechanics comes from, at its heart. Where does it live? Well, the purely classical trajectories are amassed as a result of the asymptotic equipartition property. The quantum trajectories are just off to each side of this peak. Planck's constant ħ simply provides the width of this peak.
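The square-root claim can be checked on the simplest family: for the two-outcome distribution p(θ) = (cos²θ, sin²θ), the square roots (cos θ, sin θ) traverse the unit circle at unit speed, so the Fisher information should come out as the constant 4, the round metric in √p coordinates. A quick numerical check:

```python
import math

def fisher_info(theta, h=1e-6):
    """Fisher information I(theta) = sum_i p_i'(theta)^2 / p_i(theta) for the
    two-outcome family p(theta) = (cos^2 theta, sin^2 theta), with the
    derivatives taken by central differences."""
    def p(t):
        return (math.cos(t) ** 2, math.sin(t) ** 2)
    total = 0.0
    for i in range(2):
        dp = (p(theta + h)[i] - p(theta - h)[i]) / (2 * h)
        total += dp * dp / p(theta)[i]
    return total
```

The value is 4 for every θ (away from the coordinate degeneracies at multiples of π/2), which is the statement that in the ψ = √p coordinates the metric is the ordinary round one: the wave-function coordinates are the ones in which the geometry is simple.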

linas 22 July 2012