Quantum correlation


In Bell test experiments, the term quantum correlation has come to mean the expectation value of the product of the measurement outcomes on the two sides, that is, the average of the products of the paired outcomes. In John Bell's 1964 paper that inspired the Bell tests, it was assumed that the outcomes A and B could each take only one of two values, −1 or +1. It followed that the product, too, could only be −1 or +1, so that the average value of the product would be

    E = \frac{N_{++} + N_{--} - N_{+-} - N_{-+}}{N_{\mathrm{total}}},

where, for example, N_{++} is the number of simultaneous occurrences ("coincidences") of the outcome +1 on both sides of the experiment.
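As a minimal sketch, this ideal estimator can be computed directly from the four coincidence counts; the function name and the example counts below are made-up for illustration:

```python
def correlation(n_pp, n_mm, n_pm, n_mp):
    """Average of the product A*B from coincidence counts.

    n_pp = number of (+1, +1) coincidences, n_mm = (-1, -1),
    n_pm = (+1, -1), n_mp = (-1, +1).
    """
    n_total = n_pp + n_mm + n_pm + n_mp
    # (N++ + N-- - N+- - N-+) / N_total
    return (n_pp + n_mm - n_pm - n_mp) / n_total

# Perfectly anticorrelated outcomes give E = -1 (illustrative counts):
print(correlation(0, 0, 50, 50))    # -> -1.0
# Uncorrelated outcomes give E = 0:
print(correlation(25, 25, 25, 25))  # -> 0.0
```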

In actual experiments, though, detectors are not perfect and there are usually many null outcomes. The correlation can still be estimated using the sum of coincidences, since zeros do not contribute to the average, but in practice, instead of dividing by N_{\mathrm{total}} (the number of emitted pairs), it has become customary to divide by

    N_{++} + N_{--} + N_{+-} + N_{-+},

the total number of observed coincidences. The legitimacy of this method relies on the assumption that the observed coincidences constitute a fair sample of the emitted pairs.
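A short simulation can illustrate why this works: null outcomes (recorded as 0 here) drop out of the numerator automatically, and the denominator counts only the observed coincidences. The source model and the 0.7 detection probability below are hypothetical choices for the sketch, not values from the article:

```python
import random

random.seed(0)

def trial():
    # Hypothetical source: a perfectly anticorrelated pair; each side
    # is detected with probability 0.7, otherwise recorded as 0 (null).
    a, b = random.choice([(1, -1), (-1, 1)])
    if random.random() > 0.7:
        a = 0
    if random.random() > 0.7:
        b = 0
    return a, b

pairs = [trial() for _ in range(10000)]

# Keep only the trials where both sides fired ("coincidences").
coincidences = [(a, b) for a, b in pairs if a != 0 and b != 0]

# Divide by the number of observed coincidences, not the emitted pairs.
estimate = sum(a * b for a, b in coincidences) / len(coincidences)
print(estimate)  # -> -1.0: every observed coincidence has product -1
```

Under fair sampling the lost pairs do not bias the estimate; if detection probability depended on the outcomes, this estimator could be misleading, which is the substance of the fair-sampling assumption.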

Following local realist assumptions as in Bell's 1964 paper, the estimated quantum correlation will converge after a sufficient number of trials to

    E(a, b) = \int A(a, \lambda)\, B(b, \lambda)\, \rho(\lambda)\, d\lambda,

where a and b are detector settings and λ is the hidden variable, drawn from a distribution ρ(λ).
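This integral can be approximated by Monte Carlo sampling over λ. The sketch below uses one classic toy local hidden variable model (λ uniform on [0, 2π), A = sign(cos(λ − a)), B = −sign(cos(λ − b)), an assumption of this example, not a model given in the article), which yields a correlation linear in the angle difference rather than the quantum −cos(a − b):

```python
import math
import random

random.seed(1)

def sign(x):
    return 1 if x >= 0 else -1

def A(a, lam):
    return sign(math.cos(lam - a))

def B(b, lam):
    return -sign(math.cos(lam - b))

def E(a, b, n=100000):
    # Monte Carlo estimate of ∫ A(a,λ) B(b,λ) ρ(λ) dλ
    # with ρ uniform on [0, 2π).
    total = sum(A(a, lam) * B(b, lam)
                for lam in (random.uniform(0, 2 * math.pi) for _ in range(n)))
    return total / n

print(E(0.0, 0.0))           # -> -1.0 (perfect anticorrelation at equal settings)
print(E(0.0, math.pi / 2))   # close to 0 for this toy model
```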

The quantum correlation is the key statistic in the CHSH and some of the other "Bell inequalities", tests of which open the way for experimental discrimination between quantum mechanics on the one hand and local realism or local hidden variable theory on the other.
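To make the discrimination concrete, here is a sketch of the CHSH combination S = E(a,b) − E(a,b′) + E(a′,b) + E(a′,b′), evaluated with the quantum singlet-state prediction E(x, y) = −cos(x − y); local realism bounds |S| ≤ 2, while quantum mechanics reaches 2√2. The specific angles are the standard optimal choice for this prediction:

```python
import math

def E_quantum(x, y):
    # Quantum prediction for the singlet state.
    return -math.cos(x - y)

# Standard optimal settings (radians):
a, a2 = 0.0, math.pi / 2
b, b2 = math.pi / 4, 3 * math.pi / 4

S = (E_quantum(a, b) - E_quantum(a, b2)
     + E_quantum(a2, b) + E_quantum(a2, b2))

print(abs(S))  # |S| = 2*sqrt(2) ≈ 2.828, exceeding the local realist bound of 2
```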


J. S. Bell, Speakable and Unspeakable in Quantum Mechanics, (Cambridge University Press 1987) ISBN 0-521-52338-9
