# Wave function collapse


In quantum mechanics, wave function collapse is said to occur when a wave function—initially in a superposition of several eigenstates—appears to reduce to a single eigenstate (by "observation"). It is the essence of measurement in quantum mechanics and connects the wave function with classical observables like position and momentum. Collapse is one of two processes by which quantum systems evolve in time; the other is continuous evolution via the Schrödinger equation.[1] However, in this role, collapse is merely a black box for thermodynamically irreversible interaction with a classical environment.[2][3] Calculations of quantum decoherence predict apparent wave function collapse when a superposition forms between the quantum system's states and the environment's states. Significantly, the combined wave function of the system and environment continues to obey the Schrödinger equation.[4]

In 1927, Werner Heisenberg used the idea of wave function reduction to explain quantum measurement.[5] Nevertheless, the idea remained contentious, for if collapse were a fundamental physical phenomenon, rather than just the epiphenomenon of some other process, it would mean nature was fundamentally stochastic, i.e. nondeterministic, an undesirable property for a theory.[2][6] This issue remained unresolved until quantum decoherence entered mainstream opinion after its reformulation in the 1980s.[2][4][7] Decoherence explains the perception of wave function collapse in terms of interacting large- and small-scale quantum systems, and is commonly taught at the post-introductory level (e.g. in the Cohen-Tannoudji textbook).[8] The quantum filtering approach[9][10][11] and the introduction of the quantum causality non-demolition principle[12] allow for a classical-environment derivation of wave function collapse from the stochastic Schrödinger equation.

## Mathematical description

Before collapse, the wave function may be any square-integrable function. This function is expressible as a linear combination of the eigenstates of any observable. Observables represent classical dynamical variables, and when one is measured by a classical observer, the wave function is projected onto a random eigenstate of that observable. The observer simultaneously measures the classical value of that observable to be the eigenvalue of the final state.[13]

### Mathematical background

The quantum state of a physical system is described by a wave function, which is in turn an element of a projective Hilbert space. This can be expressed in Dirac or bra–ket notation as a vector:

${\displaystyle |\psi \rangle =\sum _{i}c_{i}|\phi _{i}\rangle .}$

The kets ${\displaystyle \scriptstyle {|\phi _{1}\rangle ,|\phi _{2}\rangle ,|\phi _{3}\rangle \cdots }}$ specify the different quantum "alternatives" available, each a particular quantum state. They form an orthonormal eigenvector basis; formally,

${\displaystyle \langle \phi _{i}|\phi _{j}\rangle =\delta _{ij}.}$

where ${\displaystyle \delta _{ij}}$ is the Kronecker delta.

An observable (i.e. a measurable parameter of the system) is associated with each eigenbasis, with each quantum alternative having a specific value or eigenvalue, ei, of the observable. A "measurable parameter of the system" could be the usual position r and the momentum p of (say) a particle, but also its energy E, the z-components of spin (sz), orbital (Lz) and total angular (Jz) momenta, etc. In the basis representation these are respectively ${\displaystyle \scriptstyle {|\mathbf {r} ,t\rangle ,|\mathbf {p} ,t\rangle ,|E\rangle ,|s_{z}\rangle ,|L_{z}\rangle ,|J_{z}\rangle ,\cdots }}$.

The coefficients c1, c2, c3, ... are the probability amplitudes corresponding to each basis state ${\displaystyle \scriptstyle {|\phi _{1}\rangle ,|\phi _{2}\rangle ,|\phi _{3}\rangle \cdots }}$. These are complex numbers. The squared modulus of ci, that is |ci|2 = ci*ci (where * denotes the complex conjugate), is the probability of measuring the system to be in the state ${\displaystyle \scriptstyle |\phi _{i}\rangle }$.

For simplicity in the following, all wave functions are assumed to be normalized; the total probability of measuring all possible states is unity:

${\displaystyle \langle \psi |\psi \rangle =\sum _{i}|c_{i}|^{2}=1.}$
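As a numerical illustration, the normalization condition and the Born probabilities |ci|2 can be checked directly for a small coefficient vector (the three amplitudes below are illustrative, not taken from any particular physical system):

```python
import numpy as np

# Hypothetical 3-state system: coefficients c_i in an orthonormal
# eigenbasis {|phi_1>, |phi_2>, |phi_3>} (illustrative values).
c = np.array([0.6, 0.48 + 0.64j, 0.0])
c = c / np.linalg.norm(c)      # enforce <psi|psi> = 1

probs = np.abs(c) ** 2         # Born rule: P_i = |c_i|^2 = c_i* c_i
print(probs)                   # individual probabilities
print(probs.sum())             # total probability, equal to 1
```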

### The process

With these definitions the process of collapse is easy to describe. For any observable, the wave function is initially some linear combination of the eigenbasis ${\displaystyle \{|\phi _{i}\rangle \}}$ of that observable. When an external agency (an observer or experimenter) measures the observable associated with the eigenbasis ${\displaystyle \{|\phi _{i}\rangle \}}$, the wave function collapses from the full ${\displaystyle |\psi \rangle }$ to just one of the basis eigenstates, ${\displaystyle |\phi _{i}\rangle }$, that is:

${\displaystyle |\psi \rangle \rightarrow |\phi _{i}\rangle .}$

The probability of collapsing to a given eigenstate ${\displaystyle |\phi _{k}\rangle }$ is the Born probability, ${\displaystyle P_{k}=|c_{k}|^{2}}$. Post-measurement, other elements of the wave function vector, ${\displaystyle c_{i\neq k}}$, have "collapsed" to zero, and ${\displaystyle c_{k}=1}$.
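This rule can be sketched as a small Monte Carlo simulation: sample an outcome k with probability |ck|2, then replace the coefficient vector by the corresponding basis eigenstate. The two-state amplitudes below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def measure(c, rng):
    """Projective measurement in the eigenbasis: pick outcome k with
    Born probability P_k = |c_k|^2, then collapse the coefficient
    vector so that c_k = 1 and all other entries are zero."""
    probs = np.abs(c) ** 2
    k = rng.choice(len(c), p=probs)
    collapsed = np.zeros_like(c)
    collapsed[k] = 1.0
    return k, collapsed

c = np.array([np.sqrt(0.36), np.sqrt(0.64)])   # P = (0.36, 0.64)
outcomes = [measure(c, rng)[0] for _ in range(10_000)]
print(np.mean(outcomes))   # fraction of outcome 1, close to 0.64
```

Repeating the measurement on the collapsed state returns the same outcome with certainty, since the post-measurement vector has a single nonzero entry.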

More generally, collapse is defined for an operator ${\displaystyle {\hat {Q}}}$ with eigenbasis ${\displaystyle \{|\phi _{i}\rangle \}}$. If the system is in state ${\displaystyle |\psi \rangle }$ and ${\displaystyle {\hat {Q}}}$ is measured, the probability of the system collapsing to the state ${\displaystyle |\phi _{i}\rangle }$ (and of the corresponding eigenvalue being recorded) is ${\displaystyle |\langle \phi _{i}|\psi \rangle |^{2}}$. Note that this is not the probability that the particle was already in state ${\displaystyle |\phi _{i}\rangle }$; it is in state ${\displaystyle |\psi \rangle }$ until it is projected onto an eigenstate of ${\displaystyle {\hat {Q}}}$.
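A minimal sketch of this more general prescription, using the Pauli-X matrix as a sample observable (any Hermitian matrix would do): diagonalize the operator, then compute |⟨φi|ψ⟩|2 for each eigenvector.

```python
import numpy as np

# Measure a sample observable Q (Hermitian matrix) on state psi.
Q = np.array([[0.0, 1.0],
              [1.0, 0.0]])           # Pauli-X as the example observable
psi = np.array([1.0, 0.0])           # system prepared in |0>

evals, evecs = np.linalg.eigh(Q)     # eigenbasis {|phi_i>} of Q
amps = evecs.conj().T @ psi          # amplitudes <phi_i|psi>
probs = np.abs(amps) ** 2            # Born probabilities
print(evals)    # eigenvalues -1 and +1
print(probs)    # 0.5 each: |0> is not an eigenstate of Pauli-X
```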

However, we never observe collapse to a single eigenstate of a continuous-spectrum operator (e.g. position, momentum, or a scattering Hamiltonian), because such eigenfunctions are non-normalizable. In these cases, the wave function partially collapses to a linear combination of "close" eigenstates (necessarily involving a spread in eigenvalues) that embodies the imprecision of the measurement apparatus. The more precise the measurement, the tighter the range. Calculation of probability proceeds identically, except that the sum becomes an integral over the expansion coefficient ${\displaystyle c(q,t)\,dq}$.[14] This phenomenon is unrelated to the uncertainty principle, although increasingly precise measurements of one operator (e.g. position) will naturally spread out the wave function's expansion coefficients with respect to another, incompatible operator (e.g. momentum), lowering the probability of measuring any particular value of the latter.
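Partial collapse under a finite-resolution measurement can be modelled, in a simple hand-built sketch (not a standard library routine), by multiplying the wave function by a window function whose width is set by the apparatus and then renormalizing; the packet narrows around the measured value without ever becoming a delta function:

```python
import numpy as np

# Finite-resolution position measurement on a 1-D grid: instead of
# projecting onto a single (non-normalizable) position eigenstate,
# multiply psi by a Gaussian window of width sigma centred on the
# measured value x0.  sigma models the precision of a hypothetical
# apparatus; all numbers here are illustrative.
x = np.linspace(-10, 10, 2001)
dx = x[1] - x[0]

psi = np.exp(-x**2 / 8)                        # broad initial packet
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)    # normalize on the grid

x0, sigma = 1.0, 0.5                           # outcome and resolution
window = np.exp(-(x - x0)**2 / (4 * sigma**2))
psi_post = psi * window                        # partial collapse
psi_post /= np.sqrt(np.sum(np.abs(psi_post)**2) * dx)

# The post-measurement packet is narrower and centred near x0.
width_before = np.sqrt(np.sum(x**2 * np.abs(psi)**2) * dx)
mean_after = np.sum(x * np.abs(psi_post)**2) * dx
width_after = np.sqrt(np.sum((x - mean_after)**2 * np.abs(psi_post)**2) * dx)
print(width_before, width_after, mean_after)
```

Shrinking sigma tightens the post-measurement packet further, mimicking an increasingly precise apparatus.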

### Determination of the preferred basis

The complete set of orthogonal functions onto which a wave function can collapse is also called the preferred basis.[2] There is no theoretical foundation for taking the preferred basis to be the eigenstates of observables such as position or momentum; indeed, the eigenstates of position are not even physical, owing to the infinite energy associated with them. A better approach is to derive the preferred basis from basic principles. It has been argued that only a special class of dynamical equations can collapse the wave function.[15] By applying one axiom of quantum mechanics together with the assumption that the preferred basis depends on the total Hamiltonian, a unique set of equations is obtained from the collapse equation, determining the preferred basis for general situations. Depending on the system Hamiltonian and wave function, these determining equations may yield a preferred basis of energy eigenfunctions, quasi-position eigenfunctions, or mixed energy and quasi-position eigenfunctions (i.e., energy eigenfunctions for the interior of a macroscopic object and quasi-position eigenfunctions for the particles on its surface), and so on.

### Quantum decoherence

Wave function collapse is not fundamental from the perspective of quantum decoherence.[16] There are several equivalent approaches to deriving the appearance of collapse, such as the density-matrix approach, and each has the same effect: decoherence irreversibly converts the "averaged" or "environmentally traced over" density matrix from a pure state to a reduced mixture, giving the appearance of wave function collapse.
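The density-matrix picture can be illustrated with the smallest possible model: one system qubit perfectly correlated with a two-state environment. Tracing over the environment removes the off-diagonal coherences, which is the apparent collapse described above (the states and coupling here are illustrative):

```python
import numpy as np

# Toy decoherence model: a qubit in superposition becomes entangled
# with a two-state "environment"; tracing the environment out turns
# the reduced density matrix from a pure state into a mixture.
up = np.array([1.0, 0.0])
down = np.array([0.0, 1.0])

# System alone: (|0> + |1>)/sqrt(2), a pure state with coherences.
sys_state = (up + down) / np.sqrt(2)
rho_pure = np.outer(sys_state, sys_state.conj())

# System-environment state after a perfectly correlating interaction:
# (|0>|E0> + |1>|E1>)/sqrt(2), with orthogonal environment states.
joint = (np.kron(up, up) + np.kron(down, down)) / np.sqrt(2)
rho_joint = np.outer(joint, joint.conj())

# Partial trace over the environment (the second tensor factor).
rho_reduced = rho_joint.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)

print(rho_pure)      # off-diagonals 0.5: coherent superposition
print(rho_reduced)   # off-diagonals 0: apparent collapse to a mixture
```

The reduced matrix is diagonal, so every subsequent measurement statistic on the system alone is indistinguishable from a classical 50/50 mixture, even though the joint state remains pure and evolves unitarily.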

## History and context

The concept of wave function collapse was introduced by Werner Heisenberg in his 1927 paper on the uncertainty principle, "Über den anschaulichen Inhalt der quantentheoretischen Kinematik und Mechanik", and incorporated into the mathematical formulation of quantum mechanics by John von Neumann in his 1932 treatise Mathematische Grundlagen der Quantenmechanik.[17] Consistent with Heisenberg, von Neumann postulated that there were two processes of wave function change:

1. The probabilistic, non-unitary, non-local, discontinuous change brought about by observation and measurement, as outlined above.
2. The deterministic, unitary, continuous time evolution of an isolated system that obeys the Schrödinger equation (or a relativistic equivalent, i.e. the Dirac equation).
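The second process can be sketched numerically for a two-level system: build U(t) = exp(-iHt) (with ħ = 1) from the eigendecomposition of a sample Hamiltonian and check that, unlike collapse, this evolution preserves the norm of the state.

```python
import numpy as np

# Process 2 in miniature: unitary evolution U(t) = exp(-iHt), hbar = 1,
# for a two-level system with an illustrative Hamiltonian.
H = np.array([[0.0, 1.0],
              [1.0, 0.0]])          # Hermitian Hamiltonian (Pauli-X)

def evolve(psi, H, t):
    """Apply U(t) = exp(-iHt), built from the eigendecomposition of H."""
    evals, evecs = np.linalg.eigh(H)
    U = evecs @ np.diag(np.exp(-1j * evals * t)) @ evecs.conj().T
    return U @ psi

psi0 = np.array([1.0, 0.0], dtype=complex)
psi_t = evolve(psi0, H, t=0.7)
print(np.linalg.norm(psi_t))        # unitarity preserves the norm: 1.0
```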

In general, quantum systems exist in superpositions of those basis states that most closely correspond to classical descriptions, and, in the absence of measurement, evolve according to the Schrödinger equation. However, when a measurement is made, the wave function collapses—from an observer's perspective—to just one of the basis states, and the property being measured uniquely acquires the eigenvalue of that particular state, ${\displaystyle \lambda _{i}}$. After the collapse, the system again evolves according to the Schrödinger equation.

By explicitly dealing with the interaction of object and measuring instrument, von Neumann[1] attempted to demonstrate the consistency of these two processes of wave function change.

He was able to prove the possibility of a quantum mechanical measurement scheme consistent with wave function collapse. However, he did not prove the necessity of such a collapse. Although von Neumann's projection postulate is often presented as a normative description of quantum measurement, it was conceived by taking into account experimental evidence available during the 1930s (in particular the Compton-Simon experiment was paradigmatic), but many important present-day measurement procedures do not satisfy it (so-called measurements of the second kind).[18][19][20]

The existence of the wave function collapse is required in

- the Copenhagen interpretation
- objective-collapse theories

On the other hand, the collapse is considered a redundant or optional approximation in

- interpretations based on quantum decoherence, such as consistent histories
- the many-worlds interpretation
- the de Broglie–Bohm theory

The cluster of phenomena described by the expression wave function collapse is a fundamental problem in the interpretation of quantum mechanics, and is known as the measurement problem. The problem is deflected by the Copenhagen Interpretation, which postulates that this is a special characteristic of the "measurement" process. Everett's many-worlds interpretation deals with it by discarding the collapse-process, thus reformulating the relation between measurement apparatus and system in such a way that the linear laws of quantum mechanics are universally valid; that is, the only process according to which a quantum system evolves is governed by the Schrödinger equation or some relativistic equivalent.

Originating from de Broglie–Bohm theory, but no longer tied to it, is the physical process of decoherence, which causes an apparent collapse. Decoherence is also important for the consistent histories interpretation. A general description of the evolution of quantum mechanical systems is possible by using density operators and quantum operations. In this formalism (which is closely related to the C*-algebraic formalism) the collapse of the wave function corresponds to a non-unitary quantum operation.

The significance ascribed to the wave function varies from interpretation to interpretation, and varies even within an interpretation (such as the Copenhagen Interpretation). If the wave function merely encodes an observer's knowledge of the universe then the wave function collapse corresponds to the receipt of new information. This is somewhat analogous to the situation in classical physics, except that the classical "wave function" does not necessarily obey a wave equation. If the wave function is physically real, in some sense and to some extent, then the collapse of the wave function is also seen as a real process, to the same extent.

## References

1. ^ a b J. von Neumann (1932). Mathematische Grundlagen der Quantenmechanik (in German). Berlin: Springer. English translation: J. von Neumann (1955). Mathematical Foundations of Quantum Mechanics. Princeton University Press.
2. ^ a b c d Schlosshauer, Maximilian (2005). "Decoherence, the measurement problem, and interpretations of quantum mechanics". Rev. Mod. Phys. 76 (4): 1267–1305. Bibcode:2004RvMP...76.1267S. doi:10.1103/RevModPhys.76.1267.
3. ^ Giacosa, Francesco (2014). "On unitary evolution and collapse in quantum mechanics". Quanta. 3 (1): 156–170. doi:10.12743/quanta.v3i1.26.
4. ^ a b Zurek, Wojciech Hubert (2009). "Quantum Darwinism". Nature Physics. 5: 181–188. Bibcode:2009NatPh...5..181Z. doi:10.1038/nphys1202.
5. ^ Heisenberg, W. (1927). "Über den anschaulichen Inhalt der quantentheoretischen Kinematik und Mechanik". Z. Phys. 43: 172–198. Translated as "The actual content of quantum theoretical kinematics and mechanics".
6. ^ L. Bombelli. "Wave-Function Collapse in Quantum Mechanics". Topics in Theoretical Physics. Retrieved 2010-10-13.
7. ^ M. Pusey; J. Barrett; T. Rudolph (2012). "On the reality of the quantum state".
8. ^ C. Cohen-Tannoudji (2006) [1973]. Quantum Mechanics (2 volumes). New York: Wiley. p. 22.
9. ^ V. P. Belavkin (1979). Optimal Measurement and Control in Quantum Dynamical Systems (Technical report). Copernicus University, Toruń. pp. 3–38.
10. ^ V. P. Belavkin (1992). "Quantum stochastic calculus and quantum nonlinear filtering". Journal of Multivariate Analysis. 42 (2): 171–201. doi:10.1016/0047-259X(92)90042-E.
11. ^ V. P. Belavkin (1999). "Measurement, filtering and control in quantum open dynamical systems". Reports on Mathematical Physics. 43 (3): A405–A425. Bibcode:1999RpMP...43..405B. doi:10.1016/S0034-4877(00)86386-7.
12. ^ V. P. Belavkin (1994). "Nondemolition principle of quantum measurement theory". Foundations of Physics. 24 (5): 685–714. Bibcode:1994FoPh...24..685B. doi:10.1007/BF02054669.
13. ^ Griffiths, David J. (2005). Introduction to Quantum Mechanics, 2e. Upper Saddle River, New Jersey: Pearson Prentice Hall. pp. 106–109. ISBN 0131118927.
14. ^ Griffiths, David J. (2005). Introduction to Quantum Mechanics, 2e. Upper Saddle River, New Jersey: Pearson Prentice Hall. pp. 100–105. ISBN 0131118927.
15. ^ S. Mei (2013). "On the origin of preferred-basis and evolution pattern of wave function".
16. ^ Zurek, Wojciech H. (2003). "Decoherence, einselection, and the quantum origins of the classical". Reviews of Modern Physics. 75: 715. http://arxiv.org/abs/quant-ph/0105127
17. ^ C. Kiefer (2002). "On the interpretation of quantum theory – from Copenhagen to the present day".
18. ^ W. Pauli (1958). "Die allgemeinen Prinzipien der Wellenmechanik". In S. Flügge. Handbuch der Physik (in German). V. Berlin: Springer-Verlag. p. 73.
19. ^ L. Landau & R. Peierls (1931). "Erweiterung des Unbestimmtheitsprinzips für die relativistische Quantentheorie". Zeitschrift für Physik (in German). 69 (1–2): 56. Bibcode:1931ZPhy...69...56L. doi:10.1007/BF01391513.
20. ^ Discussions of measurements of the second kind can be found in most treatments on the foundations of quantum mechanics, for instance: J. M. Jauch (1968). Foundations of Quantum Mechanics. Addison-Wesley. p. 165; B. d'Espagnat (1976). Conceptual Foundations of Quantum Mechanics. W. A. Benjamin. pp. 18, 159; and W. M. de Muynck (2002). Foundations of Quantum Mechanics: An Empiricist Approach. Kluwer Academic Publishers. section 3.2.4.