
Talk:Entropy as an arrow of time



WikiProject Physics (Unassessed)
This article is within the scope of WikiProject Physics, a collaborative effort to improve the coverage of Physics on Wikipedia. If you would like to participate, please visit the project page, where you can join the discussion and see a list of open tasks.
This article has not yet received a rating on Wikipedia's content assessment scale.
This article has not yet received a rating on the project's importance scale.

Some thoughts

This article asks why the universe began with low entropy.

But heat death of the universe says this: "Recent work in inflationary cosmology has inverted the heat death notion, suggesting that the early universe was in a thermal equilibrium and thus heat death–like state before cosmic expansion. Meanwhile, in an expanding universe, some believe the maximum possible entropy increases far more quickly than the actual entropy with each time increment, pushing the universe continually further away from an equilibrium state despite increasing entropy. Furthermore, the very notion of thermodynamic modelling of the universe has been questioned, since the effects of such factors as gravity and quantum phenomena are very difficult to reconcile with simple thermodynamic models, rendering the utility of such models as predictive systems highly doubtful according to some."

In what sense did the universe begin at minimal entropy, if entropy is defined as thermal equilibrium?

Oneismany 08:55, 3 May 2006 (UTC)[reply]

Entropy's pretty fucked up, eh?

--Davidknippers 06:59, 12 May 2006 (UTC)[reply]

To answer the above question: the early universe is not considered to have been at minimal entropy, because its spectral distribution was that of a black body (and it was consequently at thermal equilibrium).

Tkircher 05:40, 30 July 2006 (UTC)[reply]

The true answer is that this statement is simply false. I will correct it soon. The early universe was NOT in thermal equilibrium, which is a state of high entropy. Details will appear soon in heat death of the universe. Dan Gluck 20:12, 8 September 2006 (UTC) I have added the details there and you're invited to read them. Dan Gluck 12:37, 11 September 2006 (UTC)[reply]

Entropy increases in both directions

Chris Wallace (computer scientist) demonstrated ([1]) that if the laws of physics are reversible, then given an arbitrary state, entropy will increase in both directions of time. There is a nifty interactive simulation (simplified) of this on Lloyd Allison's home page([2]), but that seems to be down at the moment. --njh 03:21, 2 December 2006 (UTC)[reply]

NB arbitrary state -- not Big bang. 1Z 16:11, 3 April 2007 (UTC)[reply]
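A minimal numerical sketch of this point (my own illustration, not the Wallace/Allison simulation; the particle numbers, cluster width and bin count are arbitrary choices): take a perfectly reversible system, free-streaming particles on a circle, start it in a special clustered state, and the coarse-grained entropy rises whether you run time forwards or backwards.

<syntaxhighlight lang="python">
import numpy as np

def coarse_entropy(x, bins=20):
    """Shannon entropy of the binned (coarse-grained) particle positions."""
    counts, _ = np.histogram(x % 1.0, bins=bins, range=(0.0, 1.0))
    p = counts[counts > 0] / counts.sum()
    return -np.sum(p * np.log(p))

rng = np.random.default_rng(0)
n = 100_000
x0 = 0.5 + 0.01 * rng.standard_normal(n)   # special clustered (low-entropy) state at t = 0
v = rng.standard_normal(n)                 # random velocities

# Free streaming on the unit circle is exactly reversible: evolving to -t is the
# same as reversing all velocities and evolving to +t.  Yet the coarse-grained
# entropy grows in BOTH time directions away from the special state.
for t in [-2.0, -0.5, -0.1, 0.0, 0.1, 0.5, 2.0]:
    print(f"t = {t:+.1f}   coarse-grained entropy = {coarse_entropy(x0 + v * t):.3f}")
</syntaxhighlight>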

Oh my

An equivalent way of saying this is that the information possessed by the demon about which atoms are considered "fast" or "slow" can itself be considered a form of entropy, known as information entropy. - wtfhax
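For what it's worth, the bookkeeping behind that sentence can be made concrete with the Szilard/Landauer argument: the demon's one-bit "fast"/"slow" record per molecule carries Shannon entropy, and erasing that record costs at least k_B ln 2 of thermodynamic entropy per bit. A toy calculation of my own, with arbitrary numbers:

<syntaxhighlight lang="python">
import numpy as np

k_B = 1.380649e-23  # J/K

def record_entropy_bits(p_fast):
    """Shannon entropy (bits) of the demon's one-bit 'fast'/'slow' label per molecule."""
    p = np.array([p_fast, 1.0 - p_fast])
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

n_molecules = 1_000_000                         # arbitrary choice
bits = n_molecules * record_entropy_bits(0.5)   # demon labels each molecule with equal odds

# Landauer's bound: erasing the record creates at least k_B ln 2 of entropy per bit,
# which is the "information entropy" referred to above.
S_record = bits * k_B * np.log(2)
print(f"{bits:.3e} bits of record  ->  at least {S_record:.3e} J/K of entropy to erase it")
</syntaxhighlight>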

Three scenarios

The three scenarios at the end of the article miss the fact that the universe most likely is not expanding. Einstein's theory with conservation of energy, as opposed to the Big Bang, predicts the Hubble constant as it is observed (accelerating, with acceleration 2.5E-36 1/s²). From it the theory predicts the density of the universe (6.5E-27 kg/m³), the acceleration of space probes (7E-10 m/s²), and the size of pieces of non-luminous matter (1 m). It also predicts distances to some quasars. Usually, when a theory predicts so many things right and none wrong, it is worth looking into, especially when it was created by Einstein. Jim 19:43, 18 June 2006 (UTC)[reply]

What you have written is completely false. The theory of the Big Bang is today fully accepted by virtually all cosmologists. Dan Gluck 14:24, 8 September 2006 (UTC)[reply]

Change of name of article

FYI, I am contemplating moving this article to arrow of time (physics) and possibly splitting it into several articles. I admit I am prejudiced: I think that cosmological explanations for the arrow of time are teleological hogwash, right up there with the work of the Bogdanov brothers. Also, entropy does have something to do with it, but this article fails to sketch this correctly. linas 15:47, 2 August 2006 (UTC)[reply]

Starting arrow of time (physics) sounds like it might be a reasonable idea as long as you reference it as it is discussed in physics books; however moving this article doesn’t make any sense. I was the one who moved it here in the first place. The entropy page was getting too long; see: Talk:Entropy/Archive2#Article is now 46 kilobytes long (the limit is 32) for a discussion of this. Moreover, as far as I am aware of, the term “arrow of time” was first used by Arthur Eddington in his 1928 discussions about “entropy”, see for example pgs. 69-88 and 295 in Eddington’s book The Nature of the Physical World. Hence the name of the article entropy (arrow of time).
As this book was written in a loose tone (without equations) for philosophers and for the general public, the idea spread rapidly from there. Harold Blum's 1950 Time's Arrow and Evolution cites Eddington's 1928 book as the source of the term, as based on the argument that the second law of thermodynamics, i.e. the law that energetically quantifies the phenomenon of irreversibility, points in the direction of all real events in time.
Certainly, cleaning, removing teleological references, reshuffling things, etc., or even starting a new article, such as arrow of time (physics) or arrow of time (cosmology), or even a new category, i.e. category:arrow of time, may be in order, but moving the article is a bad idea – entropy is the arrow of time as it was originally defined by Eddington. Thanks:--Sadi Carnot 18:12, 2 August 2006 (UTC)[reply]
OK, well, there has been a lot of progress in mathematics and physics since 1928, or 1950, or even since Stephen Hawking invoked cosmology as an explanation (in the mid-1980's ???). None of the modern theories of the arrow of time deny the importance of entropy; however, they don't use entropy as the starting point; rather it is something that is a computable outcome.
Now, I suppose we could turn this article into a "what Eddington thought about it all" article, but ... I wouldn't be the one to do that. Instead, what I saw was that the article on Arrow of time seemed to provide the layman's overview of the general idea, while this article seemed to somehow be more in-depth, more physics-based.
I tried to capture what I believe is the state of the art in understanding the arrow of time in the section titled "dynamical systems" that I added this morning. As best as I understand it, the idea of the arrow of time is explained by the structure of certain topological vector spaces, and the fact that operators on these spaces are not time-symmetric even though the underlying microscopic physical systems are time-reversible. I am not sure how widespread or how accepted this idea is, but there are some books out on it, and relatively famous people like David Ruelle and many others (e.g. Caroline Series, David Mumford) have been writing about aspects of it. I believe this formulation is fully consistent with formulations of entropy -- e.g. the Kolmogorov entropy can be precisely computed for some of these models. No one is denying that entropy is important, but it's not the cause, it's the effect.
So the net result is we have an article that talks about the arrow of time in several different ways -- from the old statistical mechanics point of view, from the cosmology point of view, from the weak-interaction/CP-violation point of view, and from the dynamical systems point of view. If you don't want to rename the article, then I'd suggest moving most of the content out of this article, and leaving this one to describe Eddington's ideas, and nothing more. Personally, I'd rather rename it, and re-organize it so that the discussion of entropy becomes historically interesting, but no longer central to the modern understanding (as I understand it). linas 23:41, 2 August 2006 (UTC)[reply]
Linas, I would suggest starting up a second page and then move some of the physics content there. Remember, more than half of this page was written by people who contributed their writing, with the understanding that it was going to go on the entropy page. Regarding grammar, if you use abbreviations, such as PSL(2,R), please state what these are in word form (assume the reader may have printed this article out and is reading it somewhere); also, I wouldn’t use the phrase “as of 2006”. Something about it doesn’t sound right. We shouldn’t be filling articles with the latest research and theories from last month. If we did, it would be more like "science weekly" rather than an encyclopedia article. Just some food for thought. --Sadi Carnot 14:34, 5 August 2006 (UTC)[reply]
Linas, entropy is still the concept used regarding this matter in almost every field in physics, so it is not only of historical value. Additionally, I agree that to say the second law is the CAUSE of the arrow of time is wrong, because it is not the cause - IT IS the arrow of time, in the sense that this is the simplest (and at any rate the most widely used) way to encompass almost all known phenomena related to the arrow of time. In any case, it is not only of historic interest. I think what you mean is that the current research in the field involves trying to identify a more accurate description of, and mathematical insight into, the arrow of time.

Regarding the article, this just means that we should separate the last part of the article under "current research", as opposed to the general discussion of entropy. I have done this, and am still doing some cleanup; as soon as I finish I'll put down the cleanup tag. Hope you're OK with this. Dan Gluck 20:00, 8 September 2006 (UTC) Regarding the name, I think the best option is either not to change it, or to change it to "thermodynamic arrow of time". Arrow of time (physics) is not good because all arrows of time are physical. Dan Gluck 20:14, 8 September 2006 (UTC)[reply]

Essay on irreversibility

After a hopelessly long discussion with User:Jpod2 on my talk page about entropy and irreversibility and what not, I wrote a small essay that mostly captures what I believe to be the "state of the art" about the arrow of time. I post it below; it is far too raw (and controversial?) to be a real article. linas 04:10, 5 October 2006 (UTC)[reply]

How to prove irreversibility

How can one show that a dynamical system with deterministic, time-reversible (i.e. energy-conserving) equations of motion can somehow become irreversible? I think this question now has a clear answer in principle, even if actual examples are hard to prove in practice. Here's how it's done:

First one must prove that a solution to the equations of motion is ergodic over some portion of phase space. Ergodicity implies that the trajectory of a classical "point particle" will be dense on some subset of phase space of non-zero measure. Here, the motion of a "classical point particle" should be understood to mean a single solution to the differential equations subject to some initial conditions. A trajectory that is dense can be visualized as a space-filling curve. When a trajectory is dense, one is justified in considering the closure of this set. Consider, for example, a discrete dynamical system that can only take values that are rational numbers. The rationals are dense in the reals, and the closure of the rationals may be taken to be the reals (well, also the p-adics...). The natural topology of the reals is fundamentally different than that on the rationals. When one considers physical observables on the closure, (such as correlation functions, densities, spectra and the like), these observables will usually have a radically different structure on the closure, than they do on the dense subset.
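(For concreteness, here is the simplest textbook instance of an ergodic system with a dense orbit, an irrational rotation of the circle; the observable, the starting point and the constants below are illustrative choices of mine. A single trajectory equidistributes, so its time average converges to the space average, and the largest gap it leaves on the circle shrinks toward zero -- which is the sense in which one may pass to the closure.)

<syntaxhighlight lang="python">
import numpy as np

# Irrational rotation of the circle: x_{n+1} = x_n + alpha (mod 1), alpha irrational.
# Every orbit is dense and equidistributed (Weyl), so the time average of a smooth
# observable along one trajectory converges to its space average.
alpha = np.sqrt(2.0) - 1.0
N = 200_000
orbit = (0.1 + alpha * np.arange(N)) % 1.0            # one trajectory, x_0 = 0.1

f = lambda x: np.cos(2.0 * np.pi * x) ** 2            # a smooth observable
print("time average :", f(orbit).mean())
print("space average:", 0.5)                          # integral of cos^2(2 pi x) on [0, 1)

# Density of the orbit: the largest gap it leaves on the circle (including the
# wrap-around gap) shrinks as N grows.
s = np.sort(orbit)
gaps = np.diff(np.r_[s, s[0] + 1.0])
print("largest uncovered gap:", gaps.max())
</syntaxhighlight>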

The canonical example of something that has a radically different form on the closure of a dense set, as compared to the dense set itself is the transfer operator. When the trajectories of a time-reversible dynamical system are considered, the transfer operator will be unitary, encapsulating the reversibility. The eigenvalues of the transfer operator will lie on the unit circle on the complex plane. But when one considers the transfer operator on the topological closure, one discovers that its spectrum is quite different, typically having eigenvalues that are not on the unit circle. Eigenvalues whose magnitude is less than one correspond to states that decay over time. Such eigenvalues are typically paired with others whose magnitude is greater than one: these are the states that "blow up" in the future, and are thus non-physical. In this way, one is forced into a situation which is manifestly irreversible: one can only keep the decaying states (and the steady states). (The eigenvalue one corresponds to the steady state(s): this is the largest allowed eigenvalue, and is also why the transfer operator is sometimes called the "Frobenius-Perron operator", in reference to the Frobenius-Perron theorem.)
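(A toy version of the spectral statement is easy to compute, using the non-invertible doubling map rather than a reversible flow; this is my own illustration of the phenomenon, not the construction for reversible systems described above. Represent the Frobenius-Perron operator of x → 2x mod 1 on polynomials of degree ≤ d, a space of smooth observables: the matrix is triangular with eigenvalues 2^-n, i.e. one steady state at eigenvalue 1 and a ladder of decaying modes strictly inside the unit circle.)

<syntaxhighlight lang="python">
import numpy as np
from math import comb

# Transfer (Frobenius-Perron) operator of the doubling map T(x) = 2x mod 1:
#     (L f)(x) = ( f(x/2) + f((x+1)/2) ) / 2
# acting on polynomials of degree <= d.  Column n holds the coefficients of L x^n.
d = 8
L = np.zeros((d + 1, d + 1))
for n in range(d + 1):
    for k in range(n + 1):
        L[k, n] += 0.5 * comb(n, k) / 2 ** n   # from the ((x+1)/2)^n term
    L[n, n] += 0.5 / 2 ** n                    # from the (x/2)^n term

eig = np.sort(np.abs(np.linalg.eigvals(L)))[::-1]
print(eig)   # -> 1, 1/2, 1/4, ..., 2^-d : one steady state plus decaying modes
</syntaxhighlight>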

So under what conditions is it acceptable to consider the closure of a dense ergodic set? It is acceptable when one can show that, by varying the initial conditions of the PDE's (but keeping the energy constant), the resulting trajectories result in the closure. That is, when the union of all possible trajectories forms the closure.

So has this sequence of steps ever been done for anything resembling a realistic physical system? I believe it has: it has been done for the Kac-Zwanzig model. This is a simple model of a single one-dimensional classical particle moving in a fixed, external potential, weakly coupled to N harmonic oscillators. It is completely deterministic and energy-conserving, even in the case where the frequencies of the harmonic oscillators are more-or-less uniformly and randomly distributed. In such a case, the set of oscillators can be envisioned as a model of a heat bath weakly coupled to the moving particle. For most fixed, external potentials, the system is chaotic but not ergodic (right?) for finite N. But something interesting happens in the limit of large N. It has been recently shown by Gil Ariel in a PhD thesis that this system has a strong limit theorem, a strong form of a central limit theorem: that, when averaging over all possible initial conditions, the system behaves as a stochastic process. Now, stochastic processes and measure theory are really the same thing, just having two different vocabularies. A strong limit theorem is a theorem that states that the closure of a set of trajectories is equal to the union of the set of trajectories taken over all initial conditions -- this is exactly what we want. In essence, Ariel has proven the time-irreversibility of a more-or-less realistic system that is purely deterministic and reversible for finite N. A similar form of irreversibility has been proven before for much simpler, less "physically realistic" systems: the baker's map, and I believe all Axiom A systems.
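(To see the flavour of the Kac-Zwanzig behaviour numerically, here is a crude sketch of my own, not Ariel's construction; the double-well potential, the uniform frequency window, the 1/N bath masses and the initial condition are all arbitrary choices. The full system is Hamiltonian and reversible, yet the distinguished particle's energy drains into the bath and, for large N, does not come back on any timescale one cares to simulate.)

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(1)

# One distinguished particle in a double well V(Q) = Q^4/4 - Q^2/2, coupled to N
# harmonic oscillators with randomly spread frequencies and mass 1/N each
# (weak coupling).  All of these modelling choices are arbitrary.
N = 1000
omega = rng.uniform(0.5, 1.5, size=N)      # bath frequencies
m = 1.0 / N                                # bath masses

def dV(Q):
    return Q ** 3 - Q

def accel(Q, q):
    a_bath = -omega ** 2 * (q - Q)                       # oscillator accelerations
    a_part = -dV(Q) + np.sum(m * omega ** 2 * (q - Q))   # back-reaction on the particle
    return a_part, a_bath

# initial state: bath relaxed onto the particle, particle given some kinetic energy
Q, VQ = 0.0, 1.2
q, vq = np.full(N, Q), np.zeros(N)

dt, steps = 0.01, 20_000
aQ, aq = accel(Q, q)
for s in range(steps):
    # velocity-Verlet step for the full (particle + bath) Hamiltonian system
    VQ += 0.5 * dt * aQ;  vq += 0.5 * dt * aq
    Q  += dt * VQ;        q  += dt * vq
    aQ, aq = accel(Q, q)
    VQ += 0.5 * dt * aQ;  vq += 0.5 * dt * aq
    if s % 2500 == 0:
        E_particle = 0.5 * VQ ** 2 + 0.25 * Q ** 4 - 0.5 * Q ** 2
        print(f"t = {s * dt:6.1f}   particle energy = {E_particle:+.3f}")
</syntaxhighlight>

Running the same integrator with all velocities reversed retraces the trajectory exactly, which is the point: the irreversibility seen here is statistical, not dynamical.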

(Since WP has no article on strong limits: an example of a strong limit theorem is strong mixing: the core idea is that one can freely interchange the order in which two different limits can be taken, with the difference being at most a set of measure zero).

The general study of irreversibility thus boils down to finding ergodic systems and proving strong limit theorems on them. Off-topic, but I am very intrigued by the fact that the "classical" exactly-solvable ergodic systems, e.g. Anosov flows and, in general, motion in homogeneous spaces, have a rich self-similar structure, and, when graphed, can strongly resemble Brownian motion. I'm also intrigued that there are integrable systems with highly non-trivial constants of motion: e.g. the Lax pairs of the Toda lattice. Hmmm.
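(The Toda remark can be checked directly: in Flaschka variables the open Toda lattice is a Lax equation dL/dt = [B, L], so the spectrum of the tridiagonal Lax matrix is a full set of non-trivial constants of motion. A quick numerical check of my own, with an arbitrary initial condition:)

<syntaxhighlight lang="python">
import numpy as np

# Open Toda lattice: H = sum p_n^2/2 + sum exp(q_n - q_{n+1}).
# In Flaschka variables a_n = exp((q_n - q_{n+1})/2)/2, b_n = -p_n/2 the motion is
# a Lax equation, so the eigenvalues of the tridiagonal Lax matrix are conserved.
N = 5

def rhs(y):
    q, p = y[:N], y[N:]
    ex = np.exp(q[:-1] - q[1:])     # exp(q_n - q_{n+1})
    dp = np.zeros(N)
    dp[1:] += ex                    # + exp(q_{n-1} - q_n)
    dp[:-1] -= ex                   # - exp(q_n - q_{n+1})
    return np.concatenate([p, dp])

def lax_matrix(y):
    q, p = y[:N], y[N:]
    a = 0.5 * np.exp(0.5 * (q[:-1] - q[1:]))
    b = -0.5 * p
    return np.diag(b) + np.diag(a, 1) + np.diag(a, -1)

rng = np.random.default_rng(3)
y = rng.standard_normal(2 * N)                    # arbitrary initial condition
spec0 = np.sort(np.linalg.eigvalsh(lax_matrix(y)))

dt, T = 0.001, 10.0                               # classic RK4 integration
for _ in range(int(T / dt)):
    k1 = rhs(y); k2 = rhs(y + 0.5*dt*k1); k3 = rhs(y + 0.5*dt*k2); k4 = rhs(y + dt*k3)
    y = y + (dt / 6.0) * (k1 + 2*k2 + 2*k3 + k4)

spec1 = np.sort(np.linalg.eigvalsh(lax_matrix(y)))
print("Lax eigenvalues at t=0:", spec0)
print("Lax eigenvalues at t=T:", spec1)
print("max drift:", np.max(np.abs(spec1 - spec0)))   # integrator error only
</syntaxhighlight>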

Some people say that "in real life", it's quantum mechanics that describes nature. So, a brief note: what happens when one quantizes a deterministic but ergodic dynamical system? Well, one finds that the spacing between energy levels goes as 1/N, and one finds that the wave functions are fractal, and space-filling. Here, "fractal and space-filling" means that, even for very small volumes of space, one can find a large number of wave functions all of whose probabilities are significant (and not exponentially small). They are also "space filling" in the sense that, by properly choosing phases between them, they can be made to destructively interfere over large regions. This has been shown for deterministic diffusion, and for dynamical billiards. The net result of having fractal wave functions with tightly-spaced energy levels is that it becomes (literally) impossible to prepare a pure state. That is, the only states that can be prepared are states which, over time, will come to occupy the entire space: they will macroscopically have the appearance of diffusing and spreading, despite the fact that the time evolution is completely and utterly unitary. The unitary evolution simply makes the wave functions (which occupied all of space) no longer interfere destructively, thus giving the appearance of spreading (when in fact "they were there all along"). The closely-spaced energy levels mean that the ratio of energies between two energy levels seems to be irrational, and thus one has what looks like decoherence.
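(The last point, unitary evolution that nevertheless looks like spreading and decoherence, has a stripped-down illustration of my own, not a computation of fractal eigenfunctions: take a state spread over many closely spaced, effectively incommensurate energy levels; its survival probability decays and stays near zero for times up to the enormous recurrence time, even though the evolution is exactly unitary and nothing is ever lost. The level distribution and weights below are arbitrary choices.)

<syntaxhighlight lang="python">
import numpy as np

# |psi(t)> = sum_n c_n exp(-i E_n t) |n>  with a dense spectrum (spacing ~ 1/N).
# Survival probability |<psi(0)|psi(t)>|^2 = |sum_n |c_n|^2 exp(-i E_n t)|^2.
rng = np.random.default_rng(2)
N = 2000
E = np.sort(rng.uniform(0.0, 1.0, N))     # closely spaced, effectively incommensurate levels
c = rng.standard_normal(N) + 1j * rng.standard_normal(N)
c /= np.linalg.norm(c)
w = np.abs(c) ** 2

for t in [0, 5, 20, 100, 1_000, 10_000]:
    amp = np.sum(w * np.exp(-1j * E * t))
    print(f"t = {t:6d}   survival probability = {abs(amp) ** 2:.4f}")
# The phases never "lose" anything; they simply stop adding up coherently,
# which is what macroscopic spreading / apparent decoherence looks like here.
</syntaxhighlight>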

That is how I currently understand irreversibility in physics and mathematics. linas 04:10, 5 October 2006 (UTC)[reply]

Second-person

This article is a good read, but it's too chatty and adopts a second-person perspective. This should be avoided, per the style guidelines. Chris Cunningham 16:17, 15 February 2007 (UTC)[reply]

I've changed the cleanup tag to a more specific one and tried to make some things "better" (whether I succeeded or not is another story). A fairly easy way to conform to guidelines would be to simply use the third person ("one") more often. Esn 05:50, 5 March 2007 (UTC)[reply]

A Mistake?

The Second Law of Thermodynamics allows for the entropy to remain the same. If the entropy is constant in either direction of time, there would be no preferred direction. However, the entropy can only be a constant if the system is in the highest possible state of disorder, such as a gas that always was, and always will be, uniformly spread out in its container. The existence of a thermodynamic arrow of time implies that the system is highly ordered in one time direction, which would by definition be the "past". Thus this law is about the boundary conditions rather than the equations of motion of our world.

Should it not be the 'highest possible state of order', and 'that the system is highly disordered in one time direction'? Forgive me if I am wrong. --210.84.41.237 05:46, 21 April 2007 (UTC)[reply]

Oh, and I found another example:

A more orderly state has become less orderly since the energy has become less concentrated and more diffused.

--210.84.41.237 05:51, 21 April 2007 (UTC)[reply]

Even if this cloud was highly ordered in the past: it isn't in the future, so why should it have been in the past? Statistical math is never about order, it's about approximation. We can't pinpoint a gas molecule in time; position and speed can't be measured at the same time. So time isn't about that. Might it be that time could rather be explained in terms of states? For example, two photons either hit each other or they don't. A single photon could not be affected by time, as it has no state change. State changes can happen with multiple particles, in any direction. All these quantum-scale events are almost random states, although some information (our world) is conserved within this system and so builds up next states based upon previous states. Since we are at a much larger scale (the result of too many minor states), to us the direction of time isn't reversible the way it is in quantum mechanics. Well, I guess the whole talk is hypothetical, as in our information world we can't observe something going backward in time (well, ahem, if we exclude the air flight to Elbonia). Peter-art —Preceding comment was added at 21:28, 31 October 2007 (UTC)[reply]

state changes instead of time ?

Reading some examples involving gas clouds, I wonder whether that is the right thing to use as an example. How can a gas cloud be highly ordered in the past? It isn't in the future, so why should it have been in the past? Using statistical math doesn't make past things fixed, or framed; statistics is about approximations. We can't pinpoint a gas molecule in time, as position and speed can't be measured at the same time. So I wondered whether time is about that at all.

Might it be that time could rather be explained in terms of states? For example, two photons either hit each other or they don't. That's called a state change. A universe with a single photon could not be affected by time: it has no possibility of a state change and nothing to take distance from.

In our world, state changes happen with multiple particles, in any time direction. At the quantum scale, events are "almost" random states. But some information (our world) is conserved within this system and so builds up next states based upon previous states. In our large-scale world (the result of many minor QM states), the direction of time isn't reversible to us the way it is in quantum mechanics. Well, I guess the whole talk is hypothetical, as in our information world we can't observe something going backward in time (that is, if we exclude the air trip to Elbonia).

If I remember correctly, something like this has been discussed at Edge.org by Julian Barbour, as he writes in this article, The End of Time. Peter-art —Preceding comment was added at 21:52, 31 October 2007 (UTC)[reply]

Reference commentary

I suggest that the commentary currently being inserted into the references be copied here (where it belongs) for a proper debate to take place. I will respond further in a week (busy at moment). --Michael C. Price talk 17:45, 28 January 2008 (UTC)[reply]

removing claims of proofs of the second law until they are no longer controversial

I suggest removing mention of putative proofs of the 2nd law. For over 100 years people have been unsuccessfully trying to prove this from microscopic physics. I just checked the latest of such proofs: "Mechanical Proof of the Second Law of Thermodynamics Based on Volume Entropy" by Michele Campisi, submitted to Stud. Hist. Phil. Mod. Phys. I think I found 6 errors:
1. The energy eigenstates could have accidental degeneracies, not due to any symmetry. The author assumes no degeneracy because there are no symmetries.
2. In equilibrium, only the expectation value of the density matrix is expected to be diagonal, not the density matrix itself. That is too strong an assumption. In fact, at t_f, the density matrix is expected to be wildly oscillating.
3. The no-level-crossing assumption is unjustified.
4. Eqn 19 does not follow from eqn 18. There is a term ln(K+3/2)*Sum_{n=0..K}(p_n - p'_n) which must be subtracted from the RHS of eqn 19. This subtraction invalidates the final inequality (S_f >= S_i).
5. The fact that the entropy seems to be monotonically increasing for any t_f > t_i, regardless of whether equilibrium has been achieved, is not what one would expect for small systems (there is no specialization to macroscopic systems) and means there is something wrong with the proof (points 1-4 above), unlike what the author claims, namely that the entropy only increases when the density matrix happens to be diagonal.
6. For a closed system, any microstate is equally probable FOR ERGODIC SYSTEMS (equal a priori probabilities, but showing that the system is ergodic is the trick), and the ordering of INITIAL probabilities assumed is not an equilibrium state, any more than the opposite ordering of probabilities ("negative temperature").

Next, I will debunk Gemmer, Jochen; Otte, Alexander & Mahler, Günter (2001), "Quantum Approach to a Derivation of the Second Law of Thermodynamics", Phys. Rev. Lett. 86 (10). Give me a few days, or it might be weeks.

My personal opinion about this is that the second law is not provable from microscopic dynamics (or initial conditions) without cheating (usually unintentional) and unjustified circular assumptions (ones which get the 2nd law as the result). I suspect there is something missing from our microscopic equations of motion, either quantum or classical, that once inserted would give irreversibility and the second law.