Talk:Entropy in thermodynamics and information theory


Entropy in thermodynamics and information theory[edit]

This article was created from content copied from the information theory article. I believe this topic justifies its own article for the following reasons (130.94.162.64 02:29, 5 December 2005 (UTC)):

  • There is popular interest in the topic. (As seen from various comments on talk pages and so forth.)
  • There are deep, undeniable connections between thermodynamics and information theory.
  • People have been contributing information on this topic into various articles where it doesn't properly belong, and that information should have its place.
  • It will take a considerable amount of space to adequately explain the topic.
  • It will avoid the need to duplicate this information and clutter up other related articles such as entropy, reversible computing, information theory, and thermodynamics.
  • Those who wish to deny certain connections between thermodynamics and information theory will have a place to contribute verifiable information that supports their viewpoint, rather than just deleting information that would tend to oppose it.
  • There are without doubt some proposed connections between information theory and thermodynamics that simply are not true. (Zeilinger's principle looks like one example to me.)
  • The Wikipedia community has successfully written articles from NPOV about far more controversial topics than this one.

Just a reminder: Wikipedia is not the place for Original Research. If you want to add something, you need verifiable sources to back it up. I'm not that familiar with Wikipedia policy, but it's probably O.K. to mention recent and ongoing research from a Neutral Point Of View, as long as one places it in proper context and does not make grandiose claims about it. (BTW, I'm the same person as 130.94.162.64; my computer just died on me.)-- 130.94.162.61 19:03, 6 December 2005 (UTC)

Zeilinger's Principle[edit]

Does anyone want to write an article on this one? I know we don't have an article on every Principle that comes along, (nor do we want to), but it looks like some researchers went to a lot of trouble to refute it. I'm not sure how important it is. Maybe someone can just explain it a little more in this article. -- 130.94.162.64 03:24, 5 December 2005 (UTC)

On second thought, I do think some more explanation of Zeilinger's principle is in order. I seem to recall it was popular a number of years ago. -- 130.94.162.61 18:55, 6 December 2005 (UTC)

Difficulty of Research[edit]

The whole subject of information theory is murky, difficult to research, and shrouded in secrecy, (especially, for some reason I am as yet unaware of, as it relates to the things discussed in this article). Little academic research of note has been published on it for the last forty years. The journals that once covered information theory now cover mainly coding theory, its canonical application.

Er, shrouded in secrecy? If it's a scientific topic, then there are published papers on it. If there are no published papers on it, then it's not a scientific topic. This whole section reeks of POV. Take this off until more external citations are available. It's not that I'm uninterested in the topic; I just don't buy the "shrouded in secrecy" argument for any scientific topic.

4.249.3.221 (talk) 16:06, 28 June 2009 (UTC)

I would like to show, by a little example, why I believe this is so. Let me tell you, from an information-theoretic standpoint, that information surely is not quantized. Consider what Shannon's theorem tells us: that information can be transmitted across a noisy channel at any rate less than the channel capacity. That channel capacity can be made as small as one wishes, so that, for example, only a thousandth of a bit at a time can be transmitted through it over the noise, and yet by and by that information can be accumulated and put back together at the other end of the channel with near-perfect fidelity.
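
As an aside, here is a minimal Python sketch (my own illustration, not from any source discussed here) of that point: a binary symmetric channel with crossover probability 0.49 has a capacity of roughly 0.0003 bits per use, yet a single bit pushed through it 100,000 times and decoded by majority vote comes out right essentially every time. The particular numbers are arbitrary choices for illustration.

 import math
 import random

 p = 0.49                               # crossover probability: the channel is almost pure noise
 capacity = 1 + p * math.log2(p) + (1 - p) * math.log2(1 - p)   # C = 1 - H_b(p), bits per use
 print(f"capacity per channel use: {capacity:.6f} bits")        # roughly 0.0003 bits

 bit = 1                                # the one bit we want to convey
 n = 100_000                            # channel uses (a crude repetition code)
 received = [bit ^ (random.random() < p) for _ in range(n)]     # each copy flipped with probability p
 decoded = int(sum(received) > n / 2)   # majority vote over the accumulated noisy copies
 print(f"sent {bit}, decoded {decoded}")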

Now consider a certain TLA (Three Letter Agency) that must work continually with information that it wishes to keep out of reach of an adversary. Every employee of that TLA (and there are many, many thousands of them) is, in effect, a communications channel that conveys classified information to the adversary. Employees are human. They talk in their sleep. They publish papers. They have friends. What they know influences their actions, perhaps in ever so subtle degrees. An academic might avoid mention of a certain subject in a paper or conversation, and inadvertently make it conspicuous by its absence. People continually let little bits of information slip that, taken individually by themselves, would be harmless slips.

But consider a top secret memo that must be circulated widely in the TLA. Now the redundancy of that information is very high, and those little bits of information that inevitably slip can by and by be aggregated by the adversary. A powerful adversary or a determined researcher can conceivably find, aggregate, and reconstruct much information in this way. And the aggregate "leakage" channel capacity of all those employees is no doubt high indeed for that TLA.

I hope this makes clear one potential use for this theory, which might explain some of the difficulty in researching it.

As another user put it in another talk page:

"Information theory leaks information."

-- 130.94.162.61 08:10, 7 December 2005 (UTC)

This example sounds like a lengthy description of Steganography. However, the fact that Information Theory can help with the detection of hidden messages does not mean that the whole subject is "murky, difficult to research, and shrouded in secrecy"; at most, the example suggests that some applications of the subject may be "difficult to research." Similarly, the fact that Number Theory is the foundation of all public key ciphers does not imply that Number Theory is "murky, difficult to research, and shrouded in secrecy."
That said, the topic addressed by this particular page lies at the intersection of Statistical Mechanics, Information Theory and perhaps Thermodynamics, so the number of experts who can do the topic justice is probably small.

StandardPerson (talk) 04:33, 6 July 2011 (UTC)

Updated, re-editing needed[edit]

I expanded the text on this subject in the Information theory article, then spotted this page.

So I've copied it all across to here. Some editing may be needed to fit it into the evolving structure of the page here; it would then probably also be worth cutting down the treatment in the Information theory article.

But I'm calling it a night just for now. -- Jheald 00:01, 10 December 2005 (UTC).

I had to make a slight correction. We cannot speak of a "joint entropy" of two distributions that are not jointly observable. The "joint distribution" formed by considering them as statistically independent random variables is a completely artificial (not to mention rather misleading) construction. (It completely fails to take into account which variable we observe first!) Maybe someone who knows more about this will expand on it. All we can really include in the article is information from verifiable sources, such as Hirschman's paper. If you have Original Research on this topic, you will have to write a paper on it so that we can refer to it here in the article. But you might think about the Difficulty of Research before you do this. :) -- 130.94.162.64 15:20, 20 December 2005 (UTC)
We need an Expert on the subject to help with this article. -- 130.94.162.64 00:21, 19 March 2006 (UTC)


The continuous case[edit]

H[f] = -\int_{-\infty}^{\infty} f(x) \log f(x)\, dx

I see no difficulties with this representation as long as f(x) is a probability density function (p.d.f.). It is the mean value of the information of f(x), and its exponential represents the volume of the region covered by a uniform distribution (in analogy to the cardinality in the discrete case). Why would it not represent the logarithm of the volume covered by any p.d.f.? A Gaussian p.d.f. covers a volume equal to

\sqrt{2 \pi e \, \sigma^2}, where \sigma^2 is the variance.
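
As a check on that number (my own worked step, following the standard definition of differential entropy): for a Gaussian density with variance \sigma^2,

h[f] = -\int_{-\infty}^{\infty} f(x) \ln f(x)\, dx = \tfrac{1}{2} \ln\left(2 \pi e \, \sigma^2\right), \qquad e^{h[f]} = \sqrt{2 \pi e \, \sigma^2},

while for a uniform density on [a, b], h = \ln(b-a) and e^{h} = b - a, which is exactly the "volume" (length) of its support.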

Kjells 09:06, 29 March 2007 (UTC)

continuous case needs explanation[edit]

It is easy to see that generalizing the Shannon formula to the continuous case leads to an infinite entropy. Most irrational numbers carry an infinite quantity of information. In fact, in the integral formula, the differential term dx, which is supposed to appear twice, has been removed from the log argument, avoiding an infinite result. I am surprised that this trick is never discussed.
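
To spell out the step being alluded to (my own reconstruction of the standard limiting argument, not text from the article): discretizing a density f into bins of width \Delta gives

H_\Delta = -\sum_i f(x_i)\,\Delta\, \log\bigl(f(x_i)\,\Delta\bigr) \approx -\int f(x)\,\log f(x)\, dx \;-\; \log\Delta,

which diverges as \Delta \to 0; the differential entropy keeps only the first term, i.e. the \Delta (the "dx") inside the logarithm has been dropped.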

Entropy anecdote[edit]

I already mentioned that no history of the concept would be complete without this famous anecdote: Shannon asked John von Neumann which name he should give to the new concept he discovered: -\sum_i p_i \log_2 p_i. Von Neumann replied: "Call it H." Shannon: "H? Why H?" Von Neumann: "Because that's what Boltzmann called it."

Algorithms 20:04, 7 June 2007 (UTC)

Do you have a source for this anecdote? 138.231.176.8 Frédéric Grosshans (talk) 13:15, 28 June 2011 (UTC)
"My greatest concern was what to call it. I thought of calling it ‘information’, but the word was overly used, so I decided to call it ‘uncertainty’. When I discussed it with John von Neumann, he had a better idea. Von Neumann told me, ‘You should call it entropy, for two reasons. In the first place your uncertainty function has been used in statistical mechanics under that name, so it already has a name. In the second place, and more important, nobody knows what entropy really is, so in a debate you will always have the advantage."
Conversation between Claude Shannon and John von Neumann regarding what name to give to the “measure of uncertainty” or attenuation in phone-line signals: M. Tribus, E.C. McIrvine, Energy and information, Scientific American, 224 (September 1971), pp. 178–184 . Cuzkatzimhut (talk) 22:34, 26 February 2012 (UTC)

Units in the continuous case[edit]

I think there needs to be some explanation on the matter of units for the continuous case.

H[f] = -\int_{-\infty}^{\infty} f(x) \log_2 f(x)\, dx

f(x) will have the unit 1/x. Unless x is dimensionless, the unit of entropy will include the log of a unit, which is weird. This is a strong reason why, in the continuous case, it is more useful to use the relative entropy of a distribution, where the general form is the Kullback-Leibler divergence from the distribution to a reference measure m(x). It could be pointed out that a useful special case of the relative entropy is:

H_{relative}[f] = -\int_{x_{min}}^{x_{max}} f(x) \log_2 \bigl(f(x)\,(x_{max}-x_{min})\bigr)\, dx

which corresponds to taking m(x) to be a rectangular distribution between x_{min} and x_{max}. It is the entropy of a general bounded signal, and it gives the entropy in bits.
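
Spelling out that special case (my own worked step from the standard Kullback-Leibler definition): with a uniform reference measure m(x) = 1/(x_{max}-x_{min}) on [x_{min}, x_{max}],

D_{KL}(f \| m) = \int_{x_{min}}^{x_{max}} f(x) \log_2 \frac{f(x)}{m(x)}\, dx = \int_{x_{min}}^{x_{max}} f(x) \log_2 \bigl(f(x)\,(x_{max}-x_{min})\bigr)\, dx,

so the H_{relative}[f] above is just -D_{KL}(f \| m); the argument of the logarithm is dimensionless, and the result comes out in bits.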

Petkr 13:38, 6 October 2007 (UTC)

Paragraph about evolutionary algorithms should be removed[edit]

The paragraph cited below is very confusing and should be removed. Not only does it fail to clarify anything about the topic of the article, it actually makes the article much harder to understand by invoking a whole lot of concepts from the area of evolutionary algorithms. For readers with no knowledge of that area, this paragraph will be completely incomprehensible. If an example is needed, it should be as simple as possible.

Average information may be maximized using Gaussian adaptation - one of the evolutionary algorithms - keeping the mean fitness - i. e. the probability of becoming a parent to new individuals in the population - constant (and without the need for any knowledge about average information as a criterion function). This is illustrated by the figure below, showing Gaussian adaptation climbing a mountain crest in a phenotypic landscape. The lines in the figure are part of a contour line enclosing a region of acceptability in the landscape. At the start the cluster of red points represents a very homogeneous population with small variances in the phenotypes. Evidently, even small environmental changes in the landscape, may cause the process to become extinct.

Enemyunknown (talk) 03:48, 17 December 2008 (UTC)


Paragraph should be expanded & clarified, or given its own topic

While I agree that the paragraph quoted above is very dense, surely this is an argument for expansion and clarification, rather than deletion.

This paragraph addresses questions that are widely misunderstood or misrepresented in Creationism / Intelligent_design and Evolutionary Biology, yet a clear understanding of the issue should be pivotal for intellectually honest and well-informed participants in the debate about design.

The excision of this paragraph would amount to intellectual cowardice, especially while stubs to other topics (e.g. Black Holes) remain.

StandardPerson (talk) 05:15, 6 July 2011 (UTC)

Proposed merger[edit]

FilipeS (talk · contribs) has proposed that the content of the article be moved into History of thermodynamics.

I would oppose this merger, because:

  • This is a distinct topic, of current interest and current disagreement, important in a full current understanding of thermodynamics, and well worthy of a full discussion at the length presented here. (See also the points made at the top of the page by the original anon who created this article).
  • A one-paragraph WP summary style mention would be appropriate in the history article; anything more would be excessive in that context. But a one-paragraph overview, directing the interested reader here for the full story, would be good both for there and for here. Jheald (talk) 09:04, 7 July 2009 (UTC)
Upon reflection, I must agree. I will withdraw the proposal. FilipeS (talk) 11:24, 7 July 2009 (UTC)

Information is physical[edit]

I think it would be appropriate to add a link to Reversible computing in the section "Information is physical", since this is an area of (at least remotely) practical implications.

Original research?[edit]

I like this article, but the sentence

"This article explores what links there are between the two concepts..."

confuses me.

Isn't an exploration of links between two concepts just original research? — Preceding unsigned comment added by 80.42.63.247 (talk) 10:21, 13 September 2013 (UTC)

Negentropy[edit]

This section needs work to clarify the disparity between what I think was Brillouin's initial over-broad belief (expressed in his 1953 book "Science and Information Theory") that any operation on one bit of information had a thermodynamic cost of kT ln 2 (where k is Boltzmann's constant), and our current understanding (due largely to Rolf Landauer) that some data operations are thermodynamically reversible in principle, while others have a thermodynamic cost. This understanding was already implicit in Szilard's 1929 analysis of his one-molecule engine (nicely explained at the end of the preceding section), where he showed that the cost is associated not with any one step but with the whole cycle of the engine's operation, comprising the acquisition, exploitation, and resetting of one bit of information about the molecule. CharlesHBennett (talk) 20:12, 16 February 2014 (UTC)
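
For a sense of scale (a standard figure, not part of the comment above): the Landauer cost of erasing one bit at room temperature works out to

k_B T \ln 2 \approx (1.38 \times 10^{-23}\,\mathrm{J/K}) \times (300\,\mathrm{K}) \times 0.693 \approx 2.9 \times 10^{-21}\,\mathrm{J}.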

Isn't there a more explicit deep connection?[edit]

From Shannon's booklet: "Thus when one meets the concept of entropy in communication theory, he has a right to be rather excited - a right to suspect that he has hold of something that may turn out to be basic and important. ... for unless I am quite mistaken, it is an important aspect of the more general significance of this theory."

H = -K \sum_i p_i \log_2 p_i, which corresponds directly to the classical general form of entropy, S = -k_B \sum_i p_i \ln p_i. With K=1, this H is Shannon information in units of bits (base 2). S has units of k_B = joules/temperature. But temperature is a measure of the average kinetic energy per molecule, joules/molecule, so the joules in k_B can cancel. This means S has a more fundamental unit of "molecules", a count very much like "bits". To precisely make the count in terms of molecules, the molecules would be counted in terms of the average kinetic energy: a slower-than-average molecule would get a count a little less than 1, a faster molecule a count greater than 1. The result would be converted to base 2 to make the count in bits. So I'm making a bit-wise count of physical entropy. I don't know how to get absolute entropy, but it should work for entropy changes, i.e. \Delta S = \Delta H \ln 2, where H is counted before and after as described, and \ln 2 is just a log-base conversion. This is the converse of what Szilard said in 1929, 1 bit = k_B \ln 2. But my method of counting bits obviates the need for k_B and shows the \ln 2 to be a log-base conversion factor rather than being explained as 2 required states. Ywaz (talk) 21:49, 21 February 2015 (UTC)
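
For comparison, the textbook conversion between the two quantities (the standard Szilard/Landauer relation, stated here without endorsing the dimensional argument above) reads

\Delta S = k_B \ln 2 \; \Delta H_{\mathrm{bits}}, \qquad k_B \ln 2 \approx 9.57 \times 10^{-24}\,\mathrm{J/K}\ \text{per bit}.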