
Talk:Bell's theorem

From Wikipedia, the free encyclopedia

This is an old revision of this page, as edited by Almaionescu (talk | contribs) at 16:10, 24 December 2013 (→Check before you doubt). The present address (URL) is a permanent link to this revision, which may differ significantly from the current revision.

WikiProject Mathematics (B-class, High-priority)
This article is within the scope of WikiProject Mathematics, a collaborative effort to improve the coverage of mathematics on Wikipedia. If you would like to participate, please visit the project page, where you can join the discussion and see a list of open tasks.
This article has been rated as B-class on Wikipedia's content assessment scale.
This article has been rated as High-priority on the project's priority scale.

WikiProject Physics (B-class, High-importance)
This article is within the scope of WikiProject Physics, a collaborative effort to improve the coverage of Physics on Wikipedia. If you would like to participate, please visit the project page, where you can join the discussion and see a list of open tasks.
This article has been rated as B-class on Wikipedia's content assessment scale.
This article has been rated as High-importance on the project's importance scale.


Jaynes (undue weight)

The theoretical section near the end concludes with Jaynes' criticism of Bell's assumptions. However, Jaynes later agreed that Bell's conclusions seemed to be true, and indeed, shocking. I am hunting for a good literature citation to support this claim. Richard Gill (talk) 17:29, 9 August 2013 (UTC)[reply]

Thanks J-Wiki for adding some nice references here! I have asked Steve Gull for his proof. It is a two-page note (a really neat little argument, very different from standard proofs). He faxed it to me some years ago; I can't find it right now, but he'll send it to me again. If I scan it and put it on the internet, then, together with the discussion at the end of Jaynes' paper where Jaynes explicitly refers to Gull's proof (it appeared in a conference volume at which they both presented papers), everything is properly referenced. Richard Gill (talk) 13:21, 14 August 2013 (UTC)[reply]
Thanks for further addressing this topic, and clarifying it. It would be great if the Gull proof could be made generally available. J-Wiki (talk) 01:27, 15 August 2013 (UTC)[reply]
Here's a link: [1]. I plan to LaTeX this and make it more generally available. Richard Gill (talk) 14:58, 22 September 2013 (UTC)[reply]
And Steve Gull has even posted it on his own webpage [2]: [3]. Richard Gill (talk) 15:07, 22 September 2013 (UTC)[reply]
Professor Gill, thanks to you and Professor Gull for posting the sketch proof. I've placed a question I have about it on your user talk page. J-Wiki (talk) 20:03, 28 September 2013 (UTC)[reply]

Final comments

The final comments had a small reference to the detection loophole still being an issue for some. I have rewritten this to accord more with what I believe is mainstream current thinking: the mainstream view is *not* (and never was) that the nondetected photons are different from the others and that local hidden variables still have a chance. The mainstream view was always that this is a hypothetical possibility which, moreover, most people did not take seriously, on general physical grounds. But the mainstream view is no longer that this is a minor philosophical nuisance which we shouldn't worry about. I think the mainstream view is that it *is* important to do an experiment which closes all "practical" loopholes simultaneously - I mean loopholes which can be closed by practical arrangements in the experiment, not abstract philosophical loopholes which no-one can do anything about except make them appear ridiculous. In fact it is clear that the top experimental groups are now vying to be the first to attain this goal, so in the last few years it has changed status from some kind of optional extra which no-one should worry about into something important which will be a big step in physics. I reference two persons on this; their statements can be found in:

M. Giustina, A. Mech, S. Ramelow, B. Wittmann, J. Kofler, J. Beyer, A. Lita, B. Calkins, T. Gerrits, S. W. Nam, R. Ursin, and A. Zeilinger (2013). Bell violation using entangled photons without the fair-sampling assumption. Nature 497, 227--230.

B.G. Christensen, K.T. McCusker, J. Altepeter, B. Calkins, T. Gerrits, A. Lita, A. Miller, L.K. Shalm, Y. Zhang, S.W. Nam, N. Brunner, C.C.W. Lim, N. Gisin, and P.G. Kwiat (2013). Detection-Loophole-Free Test of Quantum Nonlocality, and Applications. arXiv:1306.5772 [quant-ph]

Z. Merali (2011). Quantum Mechanics Braces for the Ultimate Test. Science, 18 March 2011: 1380–1382.

N. Brunner, D. Cavalcanti, S. Pironio, V. Scarani, and S. Wehner (2013). Bell nonlocality. arXiv:1303.2849 [quant-ph]

One reason for this importance is that quantum cryptographic applications depending on quantum entanglement are not safe if one can "fake" quantum entanglement by classical physical means. So various quantum communication protocols only become as secure as they are claimed to be once all practical loopholes have been closed simultaneously - and can be closed simultaneously, with ease, in mass-produced quantum communication networks. Richard Gill (talk) 14:27, 14 August 2013 (UTC)[reply]

Probability Calculation in Original Bell's Inequalities

I made this edit last year and it was immediately undone, but either I'm right or the assumptions behind these calculations need to be explicitly stated. We have 3 statistical coin flips, which I am interpreting as independent events. There is a 99% chance that A = B and a 99% chance that B = C. It's the next line where there's an issue. The article adds the 1% chance that A and B mismatch and the 1% chance that B and C mismatch to get a 2% chance that A and C mismatch. This violates basic probability rules because it ignores the possibility that A = C and neither equals B - the possibility that A = -1, B = +1 and C = -1. If that is an impossible situation then it needs to be stated, and if it is a possible situation then the probability calculations need to be changed to my edit from last year. — Preceding unsigned comment added by Slick023 (talkcontribs) 14:06, 13 October 2013 (UTC)[reply]

The point is not to calculate a probability (under some assumptions about independence etc) but to give a guaranteed bound for the probability, a bound guaranteed irrespective of possible dependencies. Boris Tsirelson (talk) 14:44, 13 October 2013 (UTC)[reply]
That's not how probability works. There is no such thing as a 100% confidence interval and nothing is "guaranteed". If you were to flip a coin, the bounds on what percent of the time you can get heads are 0% and 100%. Any result is possible, but probability distributions dictate a band of numbers that are most likely to occur. The point is to calculate the probability using probability distributions correctly and to demonstrate that the deviation between those and the experimental results is statistically significant. Slick023 (talk) 13:58, 26 October 2013 (UTC)[reply]
Right; but you write about relations between (theoretical) probabilities and (empirical) frequencies, while the Bell inequalities are for probabilities only. True, confidence intervals and all that appear when we turn to experiments. But this is a separate matter; and in this matter there is nothing special about Bell inequalities. What is somewhat special (but not unique) is the theoretical bounds on probabilities. This does not contradict what you write; it is a different aspect of the problem. Here, probabilities are not derived from experiments. Intervals for them are not at all confidence intervals. They are theoretical bounds. Afterwards they should be compared with experimental data via the usual statistical procedures, which is unproblematic. Boris Tsirelson (talk) 18:06, 26 October 2013 (UTC)[reply]
You misunderstood me; if theoretical bounds existed then so would 100% confidence intervals. You can calculate the number of times A & C are theoretically expected to be the same, but it provides no guarantees. However, I think I've discovered my grievance with the calculation here. I was interpreting "A & B are the same 99% of the time" as a probability, whereas I am supposed to interpret it as A & B being the same exactly 99% of the time? Because that is the only way the addition doesn't break basic probability rules. Slick023 (talk) 23:39, 26 October 2013 (UTC)[reply]
Also right; it is not easy for me to understand what you really mean, but I try.
Generally, for a pair of events A and B, we have the joint distribution consisting of 4 probabilities that form a 2×2 array. Three degrees of freedom (since the sum is 1). If they are independent, only two degrees of freedom remain. If instead A implies B, another two degrees of freedom. There are a lot of possible special cases and one general case.
For three events A, B and C we have a three-dimensional array of size 2×2×2=8, and (generally) 7 degrees of freedom. Now imagine that we know that A and B are the same at least 99% of the time. (This is just probability; whether you interpret it as the limit of frequency, over time etc., or not, is irrelevant for now.) Then our 8 numbers are restricted by an inequality (containing the constant 0.99). Knowing the same for B and C restricts our 8 numbers by a similar inequality (also with 0.99 inside). It appears that these two restrictions (combined) imply another inequality, with 0.98 inside, for A and C. And moreover, the 0.98 is tight, in the sense that it is reached by some 2×2×2 array satisfying the former two restrictions. (But, of course, it is technically much easier to do it another way, without the 2×2×2 array.) This is not (yet) related to frequencies; just (theoretical) probabilities. Also not related to any independence; it is quite the general case, with the most general dependencies allowed. Boris Tsirelson (talk) 06:00, 27 October 2013 (UTC)[reply]
That's a good way of putting it. Thanks. — Arthur Rubin (talk) 06:12, 3 November 2013 (UTC)[reply]
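A minimal numerical sketch of this bound (my own illustration, assuming only three ±1-valued variables A, B, C with an arbitrary joint distribution over the 2×2×2 outcomes, not code from any of the cited sources): it checks that P(A≠C) never exceeds P(A≠B) + P(B≠C) for randomly drawn joint distributions, and exhibits a distribution with 99% agreement on both pairs for which A and C disagree exactly 2% of the time, so the 0.98 figure is indeed tight.

    import itertools
    import numpy as np

    rng = np.random.default_rng(0)
    outcomes = list(itertools.product([-1, 1], repeat=3))   # the 8 joint outcomes (a, b, c)

    def mismatch_probs(p):
        # Return (P(A!=B), P(B!=C), P(A!=C)) for a joint distribution p over the 8 outcomes.
        pab = sum(pi for pi, (a, b, c) in zip(p, outcomes) if a != b)
        pbc = sum(pi for pi, (a, b, c) in zip(p, outcomes) if b != c)
        pac = sum(pi for pi, (a, b, c) in zip(p, outcomes) if a != c)
        return pab, pbc, pac

    # The inequality P(A!=C) <= P(A!=B) + P(B!=C) holds for every joint distribution,
    # independent or not, because the event {A!=C} is contained in {A!=B} union {B!=C}.
    for _ in range(10_000):
        p = rng.dirichlet(np.ones(8))        # a random 2x2x2 joint distribution
        pab, pbc, pac = mismatch_probs(p)
        assert pac <= pab + pbc + 1e-12

    # Tightness of the 0.98 agreement bound: 1% of the mass makes A differ from B (and C),
    # another 1% makes C differ from B (and A); then A and C differ exactly 2% of the time.
    p_tight = np.zeros(8)
    p_tight[outcomes.index((1, 1, 1))] = 0.98
    p_tight[outcomes.index((-1, 1, 1))] = 0.01
    p_tight[outcomes.index((1, 1, -1))] = 0.01
    print(mismatch_probs(p_tight))           # -> (0.01, 0.01, 0.02)

The bound needs no independence assumption: it is just the union bound, which is the point of Boris Tsirelson's explanation above.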
The entire section, titled "Original Bell's inequality", which talks about these 99% coin flips, is confusing and misleading. It smells like some novice's attempt to somehow rationalize the inequalities in some intuitive way, but it fails to actually do that. Rather, it appears to be founded on some unjustified assumptions, which, taken literally, lead to the above discussion. Perhaps the entire section should be cut or completely re-written? User:Linas (talk) 03:16, 18 November 2013 (UTC)[reply]
I am new to this theory, but the whole discussion of coin flip correlations seems to be looking at the wrong problem. An attempt to measure a quantum variable, where the probability of the correct result depends on the angle of measurement, is a different problem from a series of random binary decisions. If the probability of a matching result is cos^2(theta), then 5.7 degrees gives 99% and 11.4 degrees gives 96%. There is no reason at all to suppose the A+B correlation and the B+C correlation should somehow correspond in a linear way. Natty Stott (talk) 19:09, 9 December 2013 (UTC)[reply]
Hope you understand that "the whole discussion" follows carefully refereed scientific articles, discussed a lot by very, very competent, highly motivated, and very astonished experts of different kinds, all over the world, over decades. Boris Tsirelson (talk) 20:22, 9 December 2013 (UTC)[reply]

Let me try to explain the A, B, C story. Suppose you have a pair of particles which you can measure at distant locations. Suppose that the measurement devices have settings, which are angles, e.g. you measure something called spin in some direction. You choose the direction, for each particle separately. Suppose the measurement outcome is binary (e.g. spin up, spin down). Suppose the two particles are perfectly correlated in the sense that whenever you measure them both in the same direction you get identically the same outcome (i.e. both spin up or both spin down). The only way to imagine how this works is that both particles leave their common source with the outcomes they will deliver, when measured in any possible direction, somehow encoded in them both. How else could particle 1 know how to deliver the same answer as particle 2 when measured in the same direction? (They don't know in advance how they are going to be measured...)

Start with both settings equal to one another, say both at 0 degrees to some common reference direction. All the pairs of particles give the same outcome (each pair is either both spin up or both spin down). Now increase Alice's setting to +1 degree leaving Bob's at 0 degrees. A small fraction of the pairs, say f, now give different outcomes. If instead we had left Alice's setting at 0 degrees but decreased Bob's to -1 degrees, then again a fraction f of the pairs of particles turn out to give different outcomes.

It should not be hard to convince yourself that if Alice's setting is put at +1 degree and Bob's at -1 degree, at most a fraction 2f of the pairs can give different outcomes! Richard Gill (talk) 21:19, 17 December 2013 (UTC)[reply]
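Connecting this to the cos^2(theta) remark earlier in this thread: if the matching probability at relative angle theta were cos^2(theta) (the photon-polarization form; for spin-1/2 pairs the analogous form is cos^2(theta/2), with the same conclusion), then the mismatch fraction at 2 degrees is roughly four times the mismatch at 1 degree, not at most twice - exactly the violation of the 2f bound. A quick check, as a sketch under that assumption:

    import math

    def mismatch(theta_deg):
        # Mismatch probability at relative angle theta, assuming matching prob = cos^2(theta).
        theta = math.radians(theta_deg)
        return 1.0 - math.cos(theta) ** 2    # = sin^2(theta)

    f = mismatch(1.0)       # Alice at +1 deg, Bob at 0 deg   -> about 0.0003
    g = mismatch(1.0)       # Alice at  0 deg, Bob at -1 deg  -> the same fraction
    both = mismatch(2.0)    # Alice at +1 deg, Bob at -1 deg  -> about 0.0012

    # The local realist argument above demands: both <= f + g = 2f.
    # The cos^2 form gives about 4f instead, since sin^2(2x) is roughly 4*sin^2(x) for small x.
    print(f"f = {f:.6f}, 2f = {2 * f:.6f}, prediction at 2 degrees = {both:.6f}")
    print("2f bound respected?", both <= f + g)   # -> False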

I put a version of my explanation into the article. Richard Gill (talk) 12:30, 22 December 2013 (UTC)[reply]

Hot steaming mess....

Wow. Not a bad article, but certainly not a good one. I'm not surprised, given the contentious nature of the topic. Issues:

  • Section "Importance of the theorem" is interesting, but has a number of non sequiturs in it, while also pointlessly repeating content from the intro.
  • Mentions of "faster than light" come out of nowhere, and are not really appropriate. The FTL nature of wave function collapse is interesting, but not really germane to the argument.
  • Section "Original Bell's inequality" appears to be some kind of strange, flawed attempt at a simplified explanation. It encourages wild misunderstandings, see talk section immediately above. It needs to be cut or entirely re-written.
  • Section "Bell inequalities are violated by quantum mechanical predictions" starts talking about observables X and Y. This is the first occurance of X and Y. What are they? Why should they commute? Most everything important to Bell's thm do NOT commute! Worse, it uses lambda as an expectation value; but the previous section defines lambda as a hidden variable! The symbol E is used as expected value earlier, yet here its used as a projector. That makes no sense. Big WTF here. OK, I cut the offending part, see below.
  • ...an overall lack of flow and organization. The whole thing is clearly a compendium of random facts inserted by random authors. Ugh.

What to do? User:Linas (talk) 03:09, 18 November 2013 (UTC)[reply]

Agreed, quite a mess.
However mentions of faster than light (superluminal) communication are not so inappropriate: after all, the conclusion of Bell's theorem is that (assuming that QM predictions do indeed fit reality to a sufficient degree) one of three formerly uncontroversial assumptions about the nature of reality has to be discarded: either we must discard locality (in favour of superluminal communication), or we must discard freedom (to choose measurement settings how we like, AKA no conspiracy), or we must discard something called realism, roughly meaning that outcomes of measurements which were not actually performed can also be considered part of reality (AKA counterfactual definiteness). Sorry, I am using some somewhat technical language here. The whole Bell story started with people being unhappy that the wave function of one particle in one place would instantaneously collapse when something is measured on another particle far away. But this is only a problem if you think that wave functions are real things. The genius of Bell (and his predecessors EPR) was that he showed that weird things happen not only at the level of wave functions (which after all might be considered just some computational device, not something physically real) but also at the level of hard outcomes of real lab measurements.
The section on the original Bell inequality is indeed quite a mess, though one can recognise in it one of the oldest versions of Bell's theorem, and one which is used in many (not bad) popular explanations too.
The commuting observables were any of Alice's and any of Bob's (spin of one particle in one direction, spin of the other particle in some other direction). The non-commuting observables are any pair of Alice's spins, or any pair of Bob's spins.
What to do? Good question... Richard Gill (talk) 21:34, 17 December 2013 (UTC)[reply]
How about going through the article removing material which is not properly referenced and which is of specialist importance? Maybe that way we can recover a kind of viable living core to the article. Then after that, people who want to add specialist or controversial material can do so, but hopefully in a balanced way. Richard Gill (talk) 21:45, 17 December 2013 (UTC)[reply]

Apart from being a big mess, the article is filled with subtle caveats which, though much discussed in the philosophy of science, distract from the big picture and represent a particular point of view which, though legitimate, is not mainstream. Because of this, the basic argument does not come across. Richard Gill (talk) 14:29, 20 December 2013 (UTC)[reply]

I have cut a lot of the crap, and tried to improve what is left. Someone else, please help too! Richard Gill (talk) 12:32, 22 December 2013 (UTC)[reply]
For a proper understanding of the importance of the theorem (in my opinion, so this is open for discussion) all resulting cases should be treated, even if in short paragraphs (short paragraphs are actually recommendable, as too much information may be overwhelming). The superluminal communication case can be referenced with this experiment that places a lower bound on the speed: http://www.nature.com/nature/journal/v454/n7206/full/nature07121.html?free=2 ; http://arxiv.org/abs/1303.0614 Alma (talk) 11:08, 23 December 2013 (UTC)[reply]
Nice references! I was aware of the Gisin et al. work. I too think that a discussion needs to be added about the metaphysical consequences of Bell's theorem - i.e., about "all resulting cases". What are they?
Many present day writers point out that (supposing a sufficiently loophole-free experiment indeed gets done in a few years, i.e. that Nature is convincingly seen to take the side of quantum mechanics) we will have three basic choices. After all, Bell's theorem can be formulated as stating that quantum mechanics is incompatible with the conjunction of *three* basic principles: realism (more precisely and technically: counterfactual definiteness), locality (more precisely and technically: local relativistic causality) and freedom (the no-conspiracy principle, or no super-determinism). So if we believe that Nature agrees (if only to a decent enough approximation) with quantum theory, then we are logically forced to abandon at least one of the triple: locality, realism, and freedom. The presently most popular choice is to abandon realism, in the sense of admitting fundamental, irreducible quantum randomness as a fact of Nature. However there is a strong school of people who prefer to abandon locality (in particular, the Bohmians; and Bell himself belonged to this category). There are just a few people who prefer to abandon freedom - most notably, Gerard 't Hooft (Nobel prize laureate).
There is a smaller but noisier category who disagrees with the validity of Bell's theorem and therefore have no need to make a choice of which beloved principle to abandon: they can keep hold of all.
Since I have written quite a lot about this myself, and also have a declared preference - abandon realism - I should not take the lead in writing such an overview. However, for what it's worth, my latest is http://arxiv.org/abs/1207.5103. I also like very much the writings of Boris Tsirelson on these topics. He has an excellent article on Citizendium: http://en.citizendium.org/wiki/Entanglement_(physics). And recently I came across http://arxiv.org/abs/1310.3288, which is about using photons from distant galaxies to further ensure freedom: the setting choices are determined outside of the backwards lightcone of the source. Richard Gill (talk) 15:22, 23 December 2013 (UTC)[reply]
Thank you Richard! I'm not sure you've seen this. Regarding the consequences of the theorem, I honestly see no point in treating the fourth case. A theorem is a theorem and no amount of denying will change that. As to who should write about them, this article needs attention from an expert and you are the expert. The Wikipedia guidelines ask for articles to be presented in a neutral manner - which you already did above by covering each consequence. I have already asked for involvement in cleaning some articles on the Physics portal talk page, but so far received almost no reply, so there is little hope that someone else with your level of expertise will help with this article. About Bohm, do you think pointing out the loophole of his theory would fit in here? (Correct me if I'm wrong - the issue is 'no backreaction': the particles don't influence the pilot wave.) Alma (talk) 16:28, 23 December 2013 (UTC)[reply]
About the fourth category: a mathematical theorem is a mathematical theorem. But physicists might like to deny that the mathematical concepts which are used in the theorem correspond correctly to the physical or metaphysical concepts which are needed in the real world. So you could agree with the theorem as a piece of pure mathematics, but you could deny it has any relevance to physics. Some of the fourth category people are of this kind. Actually they do serve a useful purpose, namely to keep us sharp, refine our concepts, improve our theorems (weaken the assumptions / strengthen the conclusions). Richard Gill (talk) 17:47, 23 December 2013 (UTC)[reply]
About Bohmian mechanics: yes I think it fits in here. Bell himself was very attracted to it. There are a smallish number of very serious people doing serious work within that framework. Richard Gill (talk) 17:49, 23 December 2013 (UTC)[reply]

Cut text

I removed the following text from the article:

In the usual quantum mechanical formalism, the observables X and Y are represented as self-adjoint operators on a Hilbert space. To compute the correlation, assume that X and Y are represented by matrices in a finite dimensional space and that X and Y commute; this special case suffices for our purposes below. The von Neumann measurement postulate states: a series of measurements of an observable X on a series of identical systems in state $\psi$ produces a distribution of real values. By the assumption that observables are finite matrices, this distribution is discrete. The probability of observing λ is non-zero if and only if λ is an eigenvalue of the matrix X, and moreover the probability is $\langle\psi \mid \mathrm{E}_X(\lambda)\,\psi\rangle$, where $\mathrm{E}_X(\lambda)$ is the projector corresponding to the eigenvalue λ. The system state immediately after the measurement is $\mathrm{E}_X(\lambda)\,\psi / \lVert \mathrm{E}_X(\lambda)\,\psi \rVert$. From this, we can show that the correlation of commuting observables X and Y in a pure state $\psi$ is $\langle\psi \mid X Y\,\psi\rangle$. We apply this fact in the context of the EPR paradox.

There are multiple issues here:

  • What the heck are X, Y?
  • Why should I assume they commute?
  • lambda is introduced in a previous section as a hidden variable. Here it is used as an eigenvalue. That's confusing.
  • E is introduced in previous sections as an expected value. Here it is used as a projection. This is misleading. Also, no one but no one ever uses E to stand for projection.
  • The last few sentences don't make sense...

So I cut the thing. User:Linas (talk) 03:51, 18 November 2013 (UTC)[reply]

Good. I have cut a whole lot more repetitious material, also cut a lot of nonsense. Richard Gill (talk) 12:31, 22 December 2013 (UTC)[reply]
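For the record, the correlation formula the cut text was reaching for is the standard one: for commuting X and Y and a pure state ψ, the correlation is ⟨ψ|XY ψ⟩, with outcome probabilities ⟨ψ|E_X(λ) ψ⟩ given by the spectral projectors. A small self-contained sketch (the particular X, Y and ψ below are just an illustration, not taken from the article):

    import numpy as np

    # Two commuting self-adjoint observables on C^2 (x) C^2: Alice's and Bob's z-spins.
    sz = np.diag([1.0, -1.0])
    I2 = np.eye(2)
    X = np.kron(sz, I2)     # spin of particle 1 along z
    Y = np.kron(I2, sz)     # spin of particle 2 along z; X and Y commute

    # A pure state: (|up,up> + |down,down>)/sqrt(2), perfectly correlated outcomes.
    psi = np.zeros(4)
    psi[0] = psi[3] = 1 / np.sqrt(2)

    def projector(A, lam):
        # Projector E_A(lam) onto the lam-eigenspace of the self-adjoint matrix A.
        vals, vecs = np.linalg.eigh(A)
        cols = vecs[:, np.isclose(vals, lam)]
        return cols @ cols.T

    # von Neumann postulate: P(lambda) = <psi| E_X(lambda) |psi>.
    for lam in (+1.0, -1.0):
        print("P(X =", lam, ") =", psi @ projector(X, lam) @ psi)   # 0.5 each

    # Correlation of the commuting pair in the pure state: <psi| X Y |psi>.
    print("E[XY] =", psi @ X @ Y @ psi)   # -> 1.0: the two outcomes always agree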

The usual picture

The article includes the usual picture of a (negative) cosine curve (quantum mechanics: the singlet correlations) and a piecewise linear curve (local realism). The suggestion is made that local realism can only give us the second of the two. But actually, many many different curves are possible under local realism. Perhaps, in some sense, the piecewise linear curve is the best that local realism can do. But in what sense? Has anyone actually proved something about this? Richard Gill (talk) 12:30, 22 December 2013 (UTC)[reply]

I understand your concern with the strictness of the representation. There are two things here. On one hand, the picture is intended to provide a visual aid for the general public ("it looks something like this"). On the other hand, this article is of interest to (and was presumably mostly elaborated by) theoretical physicists. Here I will use the very good point that Simon Singh made in his book on the proof of Fermat's Last Theorem: while mathematical proofs remain true for all time, scientific theories and demonstrations can never be as flawless, and can afford that because they can be tested. Based on this I offer the following opinion, which is mine and may or may not be applicable: I think a risk/benefit analysis should suffice (how difficult is it to find the proof, if it exists; how fundamentally wrong is the representation - would showing all the possibilities of local realism contradict the current representation? hopefully not, because wouldn't that disprove the theorem?; what is the impact on the reader; is it enough to place a caveat that the representation is not necessarily strict but a good approximation for a number of cases; etc.). Not all facts in all encyclopedias are rigorously proved, and the effort to create a flawless article may not be proportional to the benefit. Alma (talk) 11:47, 23 December 2013 (UTC)[reply]
I have edited the legend of this picture so that it is hopefully both true and useful. In the meantime I am just trying to find out whether any editors of Wikipedia are aware of any results in this direction.
Fortunately, the correctness of Bell's theorem (as a mathematical theorem, not as a statement about physical reality!) is independent of the picture. And on further thought, I realized that the picture does illustrate several possible proofs of the theorem. The usual proof goes by showing that quantum mechanics allows correlations of +/- 0.70... at the angles pi/4, 3pi/4 etc., while under local realism one can go no further than +/-0.5. The picture illustrates the proof because the red curve does represent the best that local realism can do in this particular respect. Another proof of Bell's theorem, also sketched in the article, works by showing that the QM correlation is smooth (and hence flat) at its peaks and valleys (at the angles 0, pi, 2pi, ...) while any local realist correlation which achieves the same summits and depths is necessarily "pointed" at those points. The picture does indeed illustrate this too, since the red curve does exhibit this necessary feature of local realist correlations under the same constraints. So I added to the legend of the picture some remarks pointing out these connections between the text of the article and the picture itself.
Actually I think the article - as it was up to a few days ago - was mostly written by amateurs and/or non-specialists, together with people with strong personal opinions that the conventional understanding of Bell and all that was wrong. Hence it was an incredible hodgepodge of conflicting material, repetitions, and technical niceties; moreover it was rather out of date, representing the field as it was maybe ten years ago, not as it is today. There has been an awful lot of progress, both theoretical and experimental, and this has changed the relative importance of various issues. Richard Gill (talk) 14:58, 23 December 2013 (UTC)[reply]
Much better than before; thanks to Richard Gill. Boris Tsirelson (talk) 17:45, 23 December 2013 (UTC)[reply]
I posted my questions about what the picture could be telling us (in what sense is the saw-tooth the best approximation to the cosine) as an arXiv preprint http://arxiv.org/abs/1312.6403v1. I suspect that Steve Gull's proof of Bell's theorem using Fourier analysis could provide some good tools. See also http://arxiv.org/abs/1307.6839 by Kent and Pitalua-Garcia. Richard Gill (talk) 09:28, 24 December 2013 (UTC)[reply]
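For readers who want the numbers behind the picture, here is a small sketch (assuming the usual parametrisation: singlet correlation E_QM(theta) = -cos(theta) versus the straight-line curve E_LR(theta) = 2*theta/pi - 1 on [0, pi]); the largest gap, about 0.21, occurs at theta = pi/4 and 3pi/4, the angles the CHSH-style argument exploits:

    import numpy as np

    theta = np.linspace(0, np.pi, 9)      # relative angle between the two settings
    E_qm = -np.cos(theta)                 # the (negative) cosine curve: quantum prediction
    E_lr = 2 * theta / np.pi - 1          # the piecewise-linear local realist curve

    for t, q, l in zip(theta, E_qm, E_lr):
        print(f"theta = {t:5.3f}   QM = {q:+.3f}   linear = {l:+.3f}")

    # At theta = pi/4 the cosine gives about -0.707 while the straight line gives -0.5;
    # the same 0.207 gap recurs (with opposite sign) at 3*pi/4.
    i = int(np.argmin(np.abs(theta - np.pi / 4)))
    print("gap at pi/4:", abs(E_qm[i] - E_lr[i]))   # about 0.207

Whether the saw-tooth is the "best" local realist approximation in some precise sense is exactly the open question raised above.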

Wording

This wording looks a bit over-complicated and circumlocutious to me: '...one is forced to reject locality, realism, or the possibility of nondeterminism (the last leads to alternative superdeterministic theories, none of which has yet replicated the predictions of QM...'.

We have a double negative, '...forced to reject...the possibility of nondeterminism...'. Would it not be better to say something along the lines of, '...forced to assert (super)determinism'?

'Realism' also seems a bad word to use in this article. It is somewhat ambiguous and might be considered a bit biased against QM. Martin Hogbin (talk) 15:20, 23 December 2013 (UTC)[reply]

The sentence you quote is a bad one, I did not yet attempt to improve it.
But: realism is a technical term in this field. It is not biased against QM. And the metaphysical consequence of Bell's theorem is that assuming the predictions of QM are roughly true and that Bell's theorem is essentially correct (correct mathematical implementation of the key concepts), then we must reject at least one of realism, locality, or freedom. Personally I like to believe in QM and I am happy to reject realism. I think this is also the mainstream opinion, at least, among people who know what we are talking about...
There are alternative labels to give to these three concepts, most of them rather technical and unhelpful for general readers. Richard Gill (talk) 15:29, 23 December 2013 (UTC)[reply]
I cleaned up that bad sentence (from the intro). There is as yet no superdeterministic theory which explains QM. Gerard 't Hooft seems to be the only notable physicist who thinks that one is possible and who has made some first (as yet very incomplete) steps. The theory has to explain somehow how experimenters who choose measurement settings by tossing coins, or using pseudo-random numbers with seed equal to their wife's birthdate, observe extraordinary correlations between these settings and the outcomes of measurements made on distant pairs of photons, which can only be explained by the distant photons somehow knowing in advance what the measurement settings on the other side of the experiment were going to be. The problem with superdeterminism is that it is essentially ludicrous. It might explain some important part of physics at the Planck scale but it doesn't (and can't!) scale up. Richard Gill (talk) 15:39, 23 December 2013 (UTC)[reply]
I also defined the three concepts. Richard Gill (talk) 15:48, 23 December 2013 (UTC)[reply]

Here is what Boris Tsirelson has to say, on http://en.citizendium.org/wiki/Entanglement_(physics)#Nonlocality_and_entanglement:

The words "nonlocal" and "nonlocality" occur frequently in the literature on entanglement, which creates a lot of confusion: it seems that entanglement means nonlocality! This situation has two causes, pragmatical and philosophical.

Here is the pragmatical cause. The word "nonlocal" sounds good. The phrase "non-CFD" (where CFD denotes counterfactual definiteness) sounds much worse, but is also incorrect; the correct phrase, involving both CFD and locality (and no-conspiracy, see the lead) is prohibitively cumbersome. Thus, "nonlocal" is often used as a conventional substitute for "able to produce empirical entanglement".

The philosophical cause. Many people feel that CFD is more trustworthy than RLC (relativistic local causality), and NC (no-conspiracy) is even more trustworthy. Being forced to abandon one of them, these people are inclined to retain NC and CFD at the expense of abandoning RLC.

However, quantum theory is compatible with RLC+NC. A violation of RLC+NC is called faster-than-light communication (rather than entanglement); it has never been observed, and never predicted by quantum theory. Thus RLC and NC are corroborated, while CFD is not. In this sense CFD is less trustworthy than RLC and NC.

Richard, I think your definitions of the terms are excellent; clear and concise. It is just the word 'realism' that I find problematic. Without reading your definition of what the term is intended to mean in this article, the reader may be puzzled as to whether you are referring to Scientific realism, Philosophical realism, or something else. Intuitively it could be taken by some readers to suggest that QM is in some way 'unreal' or that it does not represent reality (whether that is true or not remains to be seen, of course). Bearing in mind Boris' and your comments, which seem to say that no simple term is actually the correct one, is there another word or phrase that we could use? Martin Hogbin (talk) 12:28, 24 December 2013 (UTC)[reply]
"Realism and freedom are part of statistical thinking on causality: they relate to counterfactual reasoning, and to the distinction between selecting on X=x and do-ing X=x, respectively.", as per his paper here http://arxiv.org/pdf/1207.5103v2.pdf. Alma (talk) 13:26, 24 December 2013 (UTC)[reply]

Check before you doubt

Hi Sławomir, the references you undid because you 'doubt' they are free belong to an open-source collection. Please check before you doubt (example here: https://archive.org/details/IntroductionToQuantumMechanics_718). After convincing yourself whether they are open source or not, please put them back. Regarding the compatibility, I fixed that just before you undid everything. I was still working on it when you started working on it, as you can check by the timestamps. Alma (talk) 15:05, 24 December 2013 (UTC)[reply]

That linked work contains a standard copyright notice: "No part of this publication may be reproduced, stored in a retrieval system or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, scanning or otherwise, except as permitted under Sections 107 or 108 of the 1976 United States Copyright Act, without either the prior written permission of the Publisher." That seems pretty clear cut to me, and the Internet Archive is violating this notice. Other than that, the only things I undid were to put the resources back into templates (you had removed the templates, along with, in some cases, important reference information like page numbers, titles, and doi codes) and to restore the named references. For information on the fields in citation templates (including the URL field), see Template:Citation. It's better to use these templates than to format the references in an ad hoc manner, since they will then have a uniform appearance, are easier for automated processes to parse and keep up to date, and often contain more complete bibliographical details. Sławomir Biały (talk) 15:26, 24 December 2013 (UTC)[reply]
Sławomir, isn't there a way to check whether content belongs to the public domain or not? Leggett's incompatibility theorem is stored by a university and is part of a course. The notice on the Griffiths QM book limits reproduction only and does not refer to storage or retrieval; should I have assumed these? And yes, I was still working on the templates at the same time as you - why not ask me to undo the changes? Alma (talk) 15:38, 24 December 2013 (UTC)[reply]
The Griffiths text is clearly not in the public domain, unless there is explicit evidence that the publisher has released it. The primary reason I interrupted this sequence of edits as I did is that, if allowed to continue as it seemed, it was not only removing the references from the templates but also appeared to be removing vital bibliographic data. I opted to "revert" rather than deal with this by normal editing largely because it would be quite awkward to recover this deleted data. For example, the edit to the Leggett reference removed the title of the article, the page numbers of the article, the volume and issue information, and the doi. The edit to the Griffiths reference removed the publisher and the year. The edit to the EPR paper removed the doi, the volume, issue, and page number, and so forth. It's not clear to me that your intention was to restore this necessary information in the future (or to put things back into templates). If that was indeed the case, why remove the information in the first place? It would simply make more work for later. Sławomir Biały (talk) 15:59, 24 December 2013 (UTC)[reply]
I wasn't getting the look I wanted for the references before saving the page (there was no way to check how it would look; I don't know why, but before saving it looked one way and after saving it looked different), and I was working with both the page and a Word sheet. If you had asked, you would have given me the chance to explain that. Working on references is tedious and does not massively change the article, so someone who is not into details will probably do something else. On Griffiths' book I take your word that it's not in the public domain; how about Leggett's? Alma (talk) 16:10, 24 December 2013 (UTC)[reply]