Talk:H-theorem

From Wikipedia, the free encyclopedia
WikiProject Physics (Rated Start-class, Mid-importance)

Origin of the H formula?[edit]

How did Boltzmann come up with his definition for H? (Yosofun 06:42, 12 Apr 2005 (UTC))

Fluctuation theorem[edit]

I'm cutting the following line:

The final resolution of the time reversibility paradox is found in the Fluctuation Theorem.

There is stuff which could be said about the Fluctuation Theorem as a modern generalisation of work on the prediction of entropy increase.

But the FT has nothing to say about the Arrow of time, because, like Boltzmann's H-theorem, the entropy increase it predicts so well arises because one assumes one can ignore correlations and similar 'hidden information' at times after the initial time. If you start with a non-equilibrium state and try to use the Fluctuation Theorem to predict backwards how that non-equilibrium state came to be, it will predict that it arose spontaneously as a fluctuation from an equilibium state -- because those are the only possibilities it knows about.

The cause of time asymmetry in the observed universe must be looked for elsewhere. Jheald 12:28, 21 October 2005 (UTC)

Hi everybody![edit]

While reading my personal copy of Tolman's book, I saw the chapter on the H-theorem. The Shannon information entropy is talked about in another book I have been reading, Feynman's Lectures on Computation, which contains a lot about Shannon's ideas. I find this subject interesting, and I can help with the article by adding some inline citations if I can find something Tolman said in his book that matches material in the article needing an inline source.

According to the history of entropy article, Shannon information entropy is related to "H"; interestingly, Shannon recognized his own formula by reading the same book I am reading now, Tolman.

By the way Jheald, nice to see you have been working on this article! Also thanks so much for recommending the Penrose book, I have been studying it very intently. I am kinda stuck on chapter 12 but I have been skipping ahead and reading the later chapters as well.

Penrose seems to talk about just about everything in his book, but I don't remember the H-theorem; and yes, I have looked in the index, but I was wondering if it is called something else.

Hobojaks (talk) 07:00, 22 February 2009 (UTC)

Note that Tolman's book was written in 1938, though it's still an outstanding book. Shannon only widely introduced the idea of the Shannon entropy ten years later. Relationships between the two are touched on in the Shannon entropy article. They are still the subject of some controversy. The link between the two ideas has been propounded most vigorously by E.T. Jaynes and his subsequent followers, from 1957 onwards; others still decry it, though as early as 1930 G.N. Lewis had written that "Gain in entropy always means loss of information, and nothing more".
Penrose touches on the idea of the H-theorem, and the related issue of Loschmidt's paradox, in section 27.5 (page 696 in the UK hardback); though he doesn't mention either by name, he essentially takes the idea of coarse graining as read, and quickly moves on to the gravitational and cosmological issues he's most interested in.
H.D. Zeh's The Physical Basis of the Direction of Time, now into its fifth edition, is often given as the standard overview reference in this area; though note that it's written at a professional level, not a popular one. Chapter 3 looks at thermodynamics, before the book gets into decoherence and then black hole and cosmological thermodynamics. Jheald (talk) 11:03, 22 February 2009 (UTC)

Been reading Tolman[edit]

But I have to say it is a pretty hard book to digest in a short while. I get the feeling that I am going to be reading it for quite some time, because I find the section on quantum mechanics particularly difficult, and yet required to understand some of his most profound ideas.

I found an article on Boltzmann_entropy which I am starting to understand is related to H.

The citations I found from Tolman so far don't need to go in the first paragraph if that makes the first paragraph seem too wordy, but I think the actual page numbers and wording might belong someplace in the article?

One thing that I have found in my experience to be true is that citations take more work than typing, so maybe the main contribution here is providing actual inline page numbers to a source for some of the material.

Um, I don't want to sound overbearing, and it's really good the way you're working through this; but can I suggest that you wait until you understand what is in the article, so that you're confident you have a solid grasp of the big picture, before you start trying to rewrite it?
While you're getting there, questions like "I don't understand this" or "This doesn't seem quite right to me" or "Why is the article saying that" are maybe more useful than trying to rewrite it as you're going along.
If you have access to a good university library, you also might find that some more modern "introduction to statistical mechanics" books may give a quicker, more accessible presentation. It's always good to have a look at what more than one source of exposition is saying (and the references would be good for the article, too!) Jheald (talk) 21:35, 24 February 2009 (UTC)
I'm not saying there isn't a lot wrong with this article, because there most definitely is. But I do think the best way to fix it is from the big picture first; i.e. to do some fairly comprehensive reading first, and work out what the most important things we want to say are first, rather than starting in on it piecemeal. Jheald (talk) 22:30, 24 February 2009 (UTC)
I agree with this methodology. However, this suggests a more major undertaking than I had originally planned. I had at first thought to work on providing inline citations for the existing article, with possibly some slight rewordings to bring the Wikipedia text into agreement with those sources. That is OK, though, because the more I learn about the H-theorem the more interesting it becomes. Also, the concept of the H-theorem is deeper and more complicated than I first thought it would be! I especially appreciate your help in working through this. I am sure that there are current and future readers of this discussion who are, or will be, equally grateful for your contributions.

Hobojaks (talk) 21:08, 25 February 2009 (UTC)

Some material for possible inclusion[edit]

In classical thermodynamics, the H-theorem is a principle first introduced by Boltzmann in 1872. The H-theorem demonstrates the tendency of the molecules of a system to approach their equilibrium Maxwell-Boltzmann distribution.[1] The quantity called H by Boltzmann can be regarded as an appropriate measure of the extent to which the condition of the system deviates from that corresponding to equilibrium.[2] This quantity H tends to decrease with time toward a minimum, and thus the system tends to approach its equilibrium condition.[3]

The H-theorem has been restated in quantum mechanical form.[4]

The decrease in the quantity H is related to the increase in the entropy[citation needed]. For example, the approach to equilibrium of an ideal gas in an irreversible process can be demonstrated using an approach based on the Boltzmann equation[citation needed].

It appears to predict an irreversible increase in entropy, despite microscopically reversible dynamics. This has led to much discussion.

ergodic hypothesis[5] —Preceding unsigned comment added by 76.191.171.210 (talk) 07:23, 24 February 2009 (UTC)

Hobojaks (talk) 19:41, 23 February 2009 (UTC)

Problem list[edit]

The quantity H is defined as the integral over velocity space :


\displaystyle H \ \stackrel{\mathrm{def}}{=}\ \int P \, ({\ln P}) \, d^3 v = \left\langle { \ln P } \right\rangle

(1)

where P(v) is the probability density over velocities. H is a forerunner of Shannon's information entropy.
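As a sanity check on this definition (my own numeric sketch; the unit-variance Gaussian and all numbers are illustrative assumptions, not from any source cited on this page), one can evaluate H for a Maxwell-Boltzmann velocity distribution. Since the distribution factorizes over the three velocity components, the integral over d^3 v is three times the one-dimensional integral:

```python
import numpy as np

# Hypothetical numeric check: Boltzmann's H = integral of P ln P over d^3 v
# for a Maxwell-Boltzmann (Gaussian) velocity distribution, sigma = 1 per
# component in arbitrary units. P(v) factorizes into three identical 1-D
# Gaussians, so the 3-D integral equals three times the 1-D integral.
sigma = 1.0
v = np.linspace(-10.0, 10.0, 20001)
dv = v[1] - v[0]
p = np.exp(-v**2 / (2 * sigma**2)) / np.sqrt(2 * np.pi * sigma**2)

H_1d = np.sum(p * np.log(p)) * dv   # 1-D integral of p ln p
H = 3 * H_1d                        # 3-D value over velocity space

# Closed-form value for a 3-D Gaussian: -(3/2) ln(2 pi e sigma^2)
H_exact = -1.5 * np.log(2 * np.pi * np.e * sigma**2)
print(round(H, 6), round(H_exact, 6))  # the two should agree closely
```

The agreement with the closed-form Gaussian value is just a check that the definition is being read correctly; note that H comes out negative here, consistent with the sign discussion further down this page.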

(1) Exactly what is velocity space? Would it be better to call it phase space?

  • The integral isn't over phase space. That was an adaptation suggested by Gibbs some years later. (Tolman p. 165 and onwards). Boltzmann's original integral was over the 3-dimensional space of possible molecular velocities, not the 6N-dimensional phase space.
Actually I have managed to find somewhat of a source for this, but it is only in a problem set. The source is Fundamentals of Statistical and Thermal Physics, Reif 1965. The index lists two places where it talks about the H-theorem; the first is in a problem set. These problems do define H as an integral containing a d^3 term, but it is multiplied by a distribution function times the log of a distribution function, which is just a little different from a probability. So still no source for the current formula, but possibly just the slightest hope for it? Hobojaks (talk) 04:05, 27 February 2009 (UTC)
  • On further consideration, it looks like this may well be wrong. Tolman presents Boltzmann as working in a 6-dimensional "μ-space". (Not the 6N-dimensional phase space, though).
On the other hand, the key input to the H-theorem is the time-reversibility of the transition coefficients. I don't think that depends on position, so it may not be impossible that Boltzmann first presented the derivation purely for the velocity distribution. We would need to check a reference that actually goes into the detail of the development of Boltzmann's thought to be sure - probably something by S. Brush. Jheald (talk) 22:12, 24 February 2009 (UTC)
I have been studying Tolman pg 45. As near as I can tell, "μ-space" is one of two flavors of phase space which he draws a distinction between: "...The phase space for any individual kind of element (molecule) is called the μ-space for that species of element." The other flavor of phase space is γ-space. According to Tolman, the key distinction of "μ-space" is that if you switch two identical molecules this does not really count as two different states in "μ-space", but it would in γ-space. Hobojaks (talk) 23:20, 25 February 2009 (UTC) I have discovered that in addition to page 45, the whole of section 27, starting on page 74, is dedicated to explaining "μ-space". In section 27 it is made clear that "μ-space" is the phase space of a single molecule, and that a γ-space for a whole system of many identical molecules can be made up by combining the μ-spaces of the individual particles. He then explains a procedure for eliminating all the permutations that come about by switching two identical particles, adding that this concept will be of great importance when switching to quantum mechanics. The μ-space of a single molecule of a monatomic gas would be six-dimensional, but in general the μ-space of a molecule has 2f dimensions, corresponding to the generalized coordinate q and generalized momentum p for each of its f degrees of freedom. Hobojaks (talk) 08:43, 26 February 2009 (UTC)

1(a) Also a problem with the formula in question is that it says P is the probability, but it does not say the probability of what. I have found a citation in Reif, Fundamentals of Statistical and Thermal Physics, which may be relevant. The formula there is for a limited special case of an ideal monatomic gas, and the problem tells the student to assume that for f they can use the equilibrium Maxwell distribution.

H=\int d^3\mathbf{v} f\,\log\,f[6]

The student is then asked to show that H = −S/k, where S is the entropy per unit volume of a monatomic ideal gas.

Is the student perhaps being guided down the path that Boltzmann once followed? I can't be sure of this, but I can make one observation: there need to be symbols for two different versions of H, one for the extensive property of the entire system and one for the intensive property. The H in the specialized Reif version of the H-theorem is the H per unit volume, and clearly not the H for the 'system' as a whole. This is a point that I struggled with considerably, and I would like to get confirmation that I am correct on it?
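For what it's worth, here is a sketch of the computation Reif's problem seems to be driving at (my own working from the stated definitions, not a quotation from Reif). Take f to be the Maxwell velocity distribution normalized to the number density n:

f(\mathbf{v}) = n \left( \frac{m}{2\pi k T} \right)^{3/2} e^{-m v^2 / 2 k T}

Then \ln f = \ln n + \tfrac{3}{2} \ln \frac{m}{2\pi k T} - \frac{m v^2}{2 k T}, and using \int f \, \frac{m v^2}{2} \, d^3\mathbf{v} = \tfrac{3}{2} n k T,

H = \int f \ln f \, d^3\mathbf{v} = n \left[ \ln n + \tfrac{3}{2} \ln \frac{m}{2\pi k T} - \tfrac{3}{2} \right]

so that −kH agrees with the entropy per unit volume of a monatomic ideal gas up to an additive constant proportional to n (classically that constant is undetermined; the h^3 term in the Sackur-Tetrode equation fixes it). If this is right, it supports the reading that Reif's H is indeed the intensive, per-unit-volume quantity, and H = −S/k holds up to that constant.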

In order to prevent future readers of this article from suffering the difficulty I have suffered, I propose that we develop some notation to distinguish between H and H per unit volume in the article, and clearly indicate which one we are talking about each time we give a formula!

Note that Tolman's formulas for H also use the notation where there is an f function, but where f is more generalized.

Perhaps substituting f for P, with a clear statement that f is the equilibrium Maxwell distribution and that the formula is a specialization, is in order. Note that the start of the article gives a very restricted definition of the H-theorem, which is no longer true of the more general presentation the article currently gives.

(2) I can't find this exact formula for H anywhere in Tolman, who lists a number of different forms. It looks like there is a strange cube term on the differential operator, as if it were the third derivative of v, which I guess is velocity space; the article really does not say.

  • It's maybe an evolution of notation since Tolman's time, but the meaning is quite clear. The d^3v is being used as shorthand for dv_1 dv_2 dv_3.

(3) Shouldn't there be multiple integration symbols, \int \cdots \int?

  • Again, it's an evolution of notation. The integral is understood to be an integral over volume elements d^3v, rather than a series of iterated one-dimensional integrals. The use of a single integral sign to indicate an integral over all the elements of a set is not unusual.

(4) Would it be better to have \delta p\, and \delta q\, indicating inexact differentials of momentum and generalized coordinate? Especially in a variant of the formula that uses a summation rather than an integration?

  • Boltzmann's integral wasn't an integral over phase space, it was an integral over velocity space.

(5) As near as I can tell, there are at least three different versions of the H-theorem, at least in the context in which Tolman uses the term. There is the original version of Boltzmann; then there is the version of Gibbs, which modifies Boltzmann's early version; and then there is the quantum mechanical version.

  • Yes. This is something the article could perhaps make clearer. The basic structure of the argument is the same in each case, however. The quantum mechanical version is (IMO) the one that is most stripped down to fundamentals, and the easiest to see what is going on in, so in my view the article is right to choose it to show how the H-theorem comes about.

(6) There is another redefinition of entropy currently underway, which includes the concept that a black hole can have an entropy. Yet this new, black-hole-inclusive version of entropy is in contradiction with the H-theorem, at least if I understand correctly, but this is neglected in the article.

  • See Black hole information paradox, and in particular recent discussions on the talk page there. There is sort of a connection: in both cases information you used to have about the system is being thrown away in your later description of the system, despite a deterministic evolution. With the H-theorem we know how that's happening. With the BH information paradox it is not yet quite so clear. It may be because the BH is not always being described in a fully quantum mechanical way, with a definitely finite number of states; it may also be related to information becoming unobservable in decoherence.
Zeh has written on this and it is covered in his book; but I don't think raising the issue of BH unitarity, which is still quite murky, is helpful in understanding the H-theorem, which is much simpler and well understood. It might be worth a mention in the "See also" section, if one could judiciously word the right covering phrase to make the connection. But anything else here would be an unnecessary diversion into confusion, when what we should be trying to do is make the main ideas and issues of the H-theorem come over as clearly and simply as we can.

(7) Some older authors (including Tolman) equate H with the macroscopic entropy; it seems this important idea has taken a back seat to the idea of equating H with Shannon entropy? According to Tolman this association is one of the greatest achievements of physics.

  • No, I think what Tolman is saying is quite right. The connection with Shannon entropy (for those who accept the connection) just makes the achievement even richer.
Gibbs's version of H (and its quantum analogue), apart from a minus sign and a completely irrelevant factor of k_B, is essentially the standard, most general expression for microscopic entropy. I would say that the identification of this entropy at a microscopic level with classical thermodynamic entropy, so that one can write
S = -k \sum p \log p
is definitely one of the greatest achievements of physics, without any doubt whatsoever.
If you go with Jaynes, then Shannon's work makes Gibbs's formula even more of a fundamental intellectual achievement, because Shannon's work tells us that we can give a precise meaning to what is otherwise quite abstract mathematics. Shannon shows that Gibbs's −Σ p ln p is the natural measure of information. So the H in the formula precisely connects information with classical thermodynamic entropy. The importance of H is precisely that it connects with both.
But classical thermodynamic entropy increases. How can we relate that to an amount of information, and to deterministic dynamics? This is where the H theorem comes in. It shows that if we approximate what's going on in the molecular collisions, or in quantum processes, making the processes probabilistic instead of deterministic because we can't keep track of the full details of the system - i.e. if the only information we can keep track of is in the macroscopic variables, then we systematically lose information; so if we use S to describe classical thermodynamic entropy, which is a state function of the macroscopic variables of the system, S will tend to increase.
So in short: the idea, that S given by the formula above, directly relates to classical thermodynamic entropy, is and remains absolutely fundamental.
The further idea, that that S can naturally be interpreted as the amount of Shannon information, which would be needed to define the microscopic system fully, but which is not conveyed by macroscopic variables alone, is the icing on the cake, which people can take or leave according to taste. Jheald (talk) 21:16, 24 February 2009 (UTC)
I think a lot of the hostility shown towards Shannon's information entropy is a gut feeling that it somehow dilutes Boltzmann's (and Gibbs's) great achievement! It doesn't, of course, as Shannon's contribution is only secondary and derivative :P Physchim62 (talk) 23:59, 25 February 2009 (UTC)
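The information-loss mechanism described in this thread can be made concrete with a toy calculation (my own sketch; the particular partition into cells and all numbers are illustrative assumptions). Coarse-graining, here meaning replacing the probabilities inside each cell of a partition by their cell average, can only decrease H = Σ p ln p, i.e. only increase the entropy −kH:

```python
import numpy as np

# Toy illustration: coarse-graining never increases H = sum p ln p.
# "Coarse-graining" here replaces the probabilities within each cell of a
# partition by their cell average, discarding the within-cell detail --
# a stand-in for keeping only macroscopic information.
rng = np.random.default_rng(42)
p = rng.random(12)
p /= p.sum()                              # a generic distribution on 12 states

cells = p.reshape(4, 3)                   # partition: 4 cells of 3 states each
p_cg = np.repeat(cells.mean(axis=1), 3)   # cell-averaged distribution

H = np.sum(p * np.log(p))
H_cg = np.sum(p_cg * np.log(p_cg))
print(H_cg <= H)                          # True: x ln x is convex (Jensen)
```

By Jensen's inequality applied to the convex function x ln x, each cell's averaged contribution is no larger than the sum it replaces, so the inequality holds for any distribution and any partition, not just this random example.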

(8) There are actually two classical forms of the H-theorem: the original old-school Boltzmann version, and the 1902 Gibbs version, based on a different quantity, essentially the value of double-bar H in Tolman's notation. This latter is a quantity

"which characterizes the condition of the ensemble of systems which we use as appropriate for representing the continued behavior of the actual system of interest." (Tolman p. 166)


Some important formula[edit]

It would seem to me that equations 47.5 to 47.9 on pages 135-136 of Tolman's book, in the section titled "Definition of the quantity H", are very important. When I get a chance I will start to transcribe these here for possible inclusion. —Preceding unsigned comment added by Hobojaks (talkcontribs) 21:14, 24 February 2009 (UTC)

They are all variants on H = \sum p_i \log p_i, which is really the key idea. Best to go with that, I think. Jheald (talk) 22:36, 24 February 2009 (UTC)

A good inline reference for that formula would be (Tolman 1938, p. 460, formula 104.7).

If you look closely, in that version there is a double bar over the H, and it might be nice to have the double bar over the Wikipedia version as well. Apparently in Tolman's thinking, double-bar H is a property of an ensemble, whereas H with no bar is a property of an individual system.

Hobojaks (talk) 20:46, 25 February 2009 (UTC)

Something like H or \overline{\overline{H}}? Physchim62 (talk) 12:29, 26 February 2009 (UTC)


\overline{\overline{H}}= \sum p_k \log p_k[7]

This now looks exactly like the text from Tolman's chapter "The Quantum Mechanical H-Theorem". Maybe someone who understands the guidelines for inline citations can help me get the format exactly correct for this important citation; then I propose that we include this line in the article.

It may also be wise to start with the 'quantum mechanical' formula, as Jheald has suggested, using the rationale that it is the simplest form of the H-theorem.

Not just that, but it gives a nice general notation based on transition probabilities between states, into which terms the classical versions can easily be cast as well. Jheald (talk) 11:26, 1 March 2009 (UTC)

The double bar tells us that we must constantly remember that this is not a property of a single system but an average property of an ensemble of systems.

I would use H, without the double bar, the first time you introduce it -- after all, it's only Tolman who uses the double bar; it's not "standard" notation. For other uses, we could then qualify H in different ways, e.g. perhaps with a subscript, if we really want to emphasise that it is being calculated for a different probability distribution. Jheald (talk) 11:26, 1 March 2009 (UTC)

Also, since we are contemplating including Boltzmann's original H quantity in this article, it will help to distinguish between the two quantities.

As a side note, Gibbs had a really nasty habit of taking over other people's terminology and introducing his own ideas under the same name as another concept. Boltzmann's H-theorem, Clausius's entropy, and Hamilton's vector are important items on this list.

I have not really found anybody who has commented on the somewhat cosmic significance of the fact that Boltzmann in effect proved that the "true entropy" has to be a negative quantity. Sign switching was something that Gibbs delighted in, also switching the sign of the scalar part of the square of a quaternion; hence V 'dot' V = −V², with Gibbs's hermaphroditical notation on the left and Hamilton's, in my point of view more elegant, notation on the right. It would be interesting if a citation for this observation could be found, because I don't think it is OR on my part. It is in part what drives my interest in the H-theorem. My other favorite book on thermodynamics relegates the H-theorem to an appendix, yet it is still a good source. Hobojaks (talk) 19:15, 26 February 2009 (UTC)

Notes[edit]

  1. ^ Tolman 1938, p. 134.
  2. ^ Tolman 1938, p. 134.
  3. ^ Tolman 1938, p. 134.
  4. ^ Tolman 1938, ch. XII, "The Quantum Mechanical H-Theorem".
  5. ^ Tolman 1938, p. 65.
  6. ^ Reif 1965, p. 546.
  7. ^ Tolman 1938, p. 460, formula 104.7.

Hobojaks (talk) 19:20, 26 February 2009 (UTC)

Plan for a little editing.[edit]

(1) Complete. I am changing the order of the quantum mechanical version of the H-theorem with the other section.

As per the discussion, this is the more relevant definition of the H-theorem. This may require some cleanup.

(2) Complete. I am moving the discussion of Shannon entropy into the first, quantum mechanical, section of the article.

(3) Complete. I am pasting in the inline citation from Tolman for the first formula on entropy, as suggested by Jheald.

(4) First stage now complete. With these changes made, my work will be done with the quantum mechanical section, and I plan to leave the first paragraph of the article alone for a while to let these changes cool down.

(5) I would then like to reconstruct the section on the original Boltzmann H-theorem along the lines developed by Tolman, providing the proper inline citations, with a note about the formula offered by Reif.

5(a) My first step will be to copy formulas 47.5 to 47.9, found on Tolman page 135.

Tolman is pretty good about tracing the historical development of the H-theorem. I plan on making it clear that this is Tolman's development of the concept, leaving any parts of Boltzmann's original formulation not covered by Tolman as a bow for another Ulysses. The two developments are close, but to the extent that Boltzmann's development differs from Tolman's, so far I have only Tolman's account.

I now have all the formulas I wanted to add to the article, except for one that I am still working on the proper math tags for.

See the really ugly period at the start of the lower part of the formula, which I used to get the equals signs to line up?

H = \sum_\gamma ( n_\gamma \log n_\gamma - n_\gamma \log \delta v_\gamma )[1]

\quad = \sum_\gamma n_\gamma \log n_\gamma + \text{const}[2]

I plan on changing the current formula for H per unit volume to make it more in line with the format and definitions of Reif, with the appropriate inline citations, or possibly doing away with it altogether? But this would require more discussion. Can anybody provide an inline source citation for this formula?

(6) This will leave the article on the H-theorem with still more work to do, but I feel like maybe it will be someone else's turn to take it from there for a while; I plan on a cooling-off period where I might participate in discussion but not do any more editing of the article. Hobojaks (talk) 20:36, 28 February 2009 (UTC)

No time now, but I'll try and comment in more detail on Monday. Jheald (talk) 11:28, 1 March 2009 (UTC)

Icky notation[edit]

I found things here that look like this:

ναβ(pβ-pα)

How uncouth.

I changed that to this:

ναβ(pβ − pα)

Note: In non-TeX math notation,

  • variables are to be italicized; parentheses and digits are NOT;
  • a minus sign is not a stubby little hyphen;
  • binary operators minus, plus, equals, etc. are preceded and followed by a space (don't worry about that in TeX; the software does it automatically); with binary operators I make the space non-breakable. (Minus and plus as unary operators are not followed by a space; thus +5 or −5 is correct.)

See Wikipedia:Manual of Style (mathematics).

Also:

 \int f(x)\,dx = g(x) + \text{constant} \,

is correct, but

 \int f(x)\,dx = g(x) + constant \,

is wrong. "Constant" should be set in text mode, not left to get italicized as if the letters were variables. This also affects spacing in some contexts. Michael Hardy (talk) 18:54, 2 March 2009 (UTC)

I feel like I have been blessed by the math-tag gods here!
You definitely made the formulas look a lot better. I will need to study your techniques; I am still mastering math tags.
The double bar over the H, however, seems to me very important. Taking it off comes at the expense of having an equation from a verifiable source.
The double bar shows that the quantity we are talking about is an average over an ensemble, and that it is different from the other quantity H used in other sections of the article.

The "Criticism" section is complete nonsense[edit]

I am not sure who the "source" being cited is, but the criticism clearly shows that its author has zero understanding of the proof presented in the article, and zero understanding of thermodynamics in general.

1) It's the very point of the proof that one divides the evolution into short steps that are nonetheless long enough for the "thermalization" to occur in the interval, producing nonzero probabilities for indistinguishable microstates. On the other hand, because this time can still be essentially arbitrarily short for macroscopic systems, the first-order perturbation theory becomes effectively exact, so there is no error introduced by the first-order approximation of Fermi's rule. At any rate, even if a higher-order corrected rule were used instead, one could still prove the inequality.

2) The derivation clearly talks about the probabilities of all possible microstates of the system. It makes no assumption about the number of "particles" or other degrees of freedom that describe the system. So in this sense, the derivation is absolutely exact and surely doesn't neglect any interactions. Only the "critic" is introducing some assumptions about the number of particles and the organization of the Hilbert space as a tensor product. The proof is doing nothing of the sort and is completely general in this sense.

3) It's completely absurd that an isolated system with many (N) particles will "sit in one microstate". Quite the contrary: it will evolve toward an equal mixture of all microstates with equal probabilities. That's what the proof shows, and what every sane person knows from the 2nd law of thermodynamics and everyday life, anyway.--Lumidek (talk) 09:27, 3 April 2009 (UTC)


I have to agree that this critique is absurd. Even if a case can be advanced for the potential invalidity of the H-theorem proof, it must be made under what most would consider rather extreme assumptions. Contrary to the criticism of its applicability, the H-theorem assumes rather mundane conditions, e.g. non-interacting particles. Despite what the critic states, non-interaction represents most gases under "ordinary" conditions to an excellent degree for most applications.

While it is possible to prepare states that violate this theorem (e.g. Jaynes, 1971), they are systems which violate the assumptions of the theory, specifically with respect to the strength of potentials. This is not necessarily the case in N-body quantum systems, since even these may be treated as systems of N non-interacting quantum particles, with successive perturbations (in the form of potentials) approaching the "true" system. The H-theorem will be perfectly valid for such a system provided the potentials are sufficiently weak. It should be noted that a system can always be placed in a state that will violate the H-theorem, even in the absence of potentials. What is physically important is that the number of accessible states that continue to violate the H-theorem as the system evolves decreases with exponential rapidity. For macroscopic systems this is essentially immediate, even if the initial state was H-theorem violating.

This section is out of place in this article, except perhaps as a note linking to another page, if the critic should like to create one, that discusses scenarios in which the H-theorem is violated; these would necessarily be limited to cases which strongly violate the assumptions supporting the theory. 68.19.49.85 (talk) 17:28, 14 October 2009 (UTC)
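The structure of the argument being defended in this thread can be illustrated with a toy model (my own construction, not taken from the article or its sources). Take symmetric transition probabilities between n microstates, which is the time-reversibility input to the H-theorem, and check numerically that H = Σ p ln p never increases and that the distribution heads toward the uniform mixture:

```python
import numpy as np

# Toy model of the H-theorem's mechanism: with symmetric (time-reversible)
# transition probabilities, H = sum p ln p is non-increasing. Build a
# symmetric rate matrix Q (non-negative off-diagonal, zero row sums), then
# step with T = I + dt*Q, a symmetric doubly stochastic matrix for small dt.
rng = np.random.default_rng(1)
n = 8
Q = rng.random((n, n))
Q = (Q + Q.T) / 2
np.fill_diagonal(Q, 0.0)
np.fill_diagonal(Q, -Q.sum(axis=1))
dt = 0.05
T = np.eye(n) + dt * Q      # symmetric; rows and columns each sum to 1

p = rng.random(n)
p /= p.sum()                # arbitrary non-equilibrium initial distribution

Hs = []
for _ in range(300):
    Hs.append(np.sum(p * np.log(p)))
    p = T @ p

# H decreases monotonically toward its minimum -ln(n), the uniform mixture.
print(Hs[0], Hs[-1], -np.log(n))
```

The symmetry of T is doing all the work here: it makes T doubly stochastic, and a doubly stochastic map always drives Σ p ln p down or leaves it fixed, which is the discrete skeleton of the proof being discussed above.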

Terribly written article![edit]

1) If Boltzmann proved the H-theorem, why in the world does the article start off with the quantum version of it? First the classical treatment should be given, followed by the quantum version.

2) The article says to look up the Wikipedia entry on Shannon's information entropy in order to better understand the definition of H. This is ridiculous! The H-theorem article should not only be self-contained, but it should be able to explain the H-theorems's foundations better than any other article, since after all, it's about the H-theorem!

3) There are misplaced interjections, such as mentioning "a recent controversy called the Black hole information paradox." at the end of the Quantum mechanical H-theorem section, and a discussion about Loschmidt's paradox and molecular chaos at the end of the Boltzmann's H-theorem section. These should be discussed appropriately in the Analysis section or in their own section.

4) The Analysis section quotes various researchers without references and makes a personal attack on Gull in the paragraph beginning "(It may be interesting ...". Overall it reads like an incoherent rant.

5) The article is full of sentences devoid of any information, like: "This has led to much discussion." "and thus a better feel for the meaning of H." " something must be wrong" " and therefore begs the question." (even linking to begs the question) "But this would not be very interesting."

"With such an assumption, one moves firmly into the domain of predictive physics: if the assumption goes wrong, it may produce predictions which are systematically and reproducibly wrong." — Preceding unsigned comment added by Michael assis (talkcontribs) 20:32, 10 November 2012 (UTC)

  1. ^ Tolman 1938, p. 135, formula 47.7.
  2. ^ Tolman 1938, p. 135, formula 47.7.