Measurement in quantum mechanics

The framework of quantum mechanics requires a careful definition of measurement. The issue of measurement lies at the heart of the problem of the interpretation of quantum mechanics, for which there is currently no consensus.

Measurement from a practical point of view

Measurement is viewed in different ways in the many interpretations of quantum mechanics; however, despite the considerable philosophical differences, they almost universally agree on the practical question of what results from a routine quantum-physics laboratory measurement. To describe this, the Copenhagen interpretation provides a simple framework and is used implicitly in this section; the utility of this approach has been verified countless times[citation needed], and all other interpretations are necessarily constructed so as to give the same quantitative predictions in almost every case.

Qualitative overview

The quantum state of a system is a mathematical object that fully describes the quantum system. One typically imagines some experimental apparatus and procedure which "prepares" this quantum state; the mathematical object then reflects the setup of the apparatus. Once the quantum state has been prepared, some aspect of it is measured (for example, its position or energy). If the experiment is repeated, so as to measure the same aspect of the same quantum state prepared in the same way, the result of the measurement will often be different.

The expected result of the measurement is in general described by a probability distribution that specifies the likelihoods that the various possible results will be obtained.[1] (This distribution can be either discrete or continuous, depending on what is being measured.)

The measurement process is often said to be random and indeterministic. (However, there is considerable dispute over this issue; in some interpretations of quantum mechanics, the result merely appears random and indeterministic, while in others the indeterminism is core and irreducible.) This is because an important aspect of measurement is wavefunction collapse, the nature of which varies according to the interpretation adopted.

What is universally agreed, however, is that if the measurement is immediately repeated, without re-preparing the state, one finds the same result as the first measurement.[2] As a result, after measuring some aspect of the quantum state, we normally update the quantum state to reflect the result of the measurement; it is this updating that ensures that an immediate re-measurement, without re-preparation, gives the same result as the first. The updating of the quantum state model is called wavefunction collapse.

Quantitative details

The mathematical relationship between the quantum state and the probability distribution is, again, widely accepted among physicists, and has been experimentally confirmed countless times. This section summarizes this relationship, which is stated in terms of the mathematical formulation of quantum mechanics.

Measurable quantities ("observables") as operators

It is a postulate of quantum mechanics that all measurements have an associated operator (called an observable operator, or just an observable), with the following properties:

  1. The observable is a Hermitian (self-adjoint) operator mapping a Hilbert space (namely, the state space, which consists of all possible quantum states) into itself.
  2. The observable's eigenvalues are real. The possible outcomes of the measurement are precisely the eigenvalues of the given observable.
  3. For each eigenvalue there are one or more corresponding eigenvectors (which in this context are called eigenstates), which will make up the state of the system after the measurement.
  4. The observable has a set of eigenvectors which span the state space. It follows that each observable generates an orthonormal basis of eigenvectors (called an eigenbasis). Physically, this is the statement that any quantum state can always be represented as a superposition of the eigenstates of an observable.
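As an informal numerical illustration of these postulates, the following sketch (Python with NumPy; the 2×2 Hermitian matrix is an arbitrary choice, not taken from the text) shows that such an operator has real eigenvalues and an orthonormal eigenbasis:

```python
import numpy as np

# An arbitrary 2x2 Hermitian matrix standing in for an observable
# on a two-dimensional state space.
A = np.array([[1.0, 2.0 - 1.0j],
              [2.0 + 1.0j, 3.0]])

# eigh is specialized to Hermitian matrices: it returns real eigenvalues
# (the possible measurement outcomes) and an orthonormal eigenbasis.
eigenvalues, eigenvectors = np.linalg.eigh(A)

print(eigenvalues)  # real numbers: the possible outcomes
print(np.allclose(eigenvectors.conj().T @ eigenvectors, np.eye(2)))  # True: orthonormal eigenbasis
```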

Important examples of observables are:

  • The Hamiltonian operator, representing the total energy of the system; with the special case of the nonrelativistic Hamiltonian operator: $\hat{H} = \frac{\hat{p}^2}{2m} + V(\hat{x})$.
  • The momentum operator: $\hat{p} = -i\hbar\,\frac{\partial}{\partial x}$ (in the position basis).
  • The position operator: $\hat{x} = x$, where $\hat{x} = i\hbar\,\frac{\partial}{\partial p}$ (in the momentum basis).

Operators can be noncommuting. Two Hermitian operators commute if (and only if) there is at least one basis of vectors, each of which is an eigenvector of both operators (this is sometimes called a simultaneous eigenbasis). Noncommuting observables are said to be incompatible and cannot in general be measured simultaneously. In fact, they are related by an uncertainty principle, as a consequence of the Robertson–Schrödinger relation.
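As a concrete check of (in)compatibility, the following sketch (Python with NumPy; the Pauli matrices are chosen here as a standard example of two noncommuting observables) computes the commutator:

```python
import numpy as np

# Pauli matrices: two standard observables on a two-level system.
sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)
sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)

# The commutator [A, B] = AB - BA vanishes exactly when a simultaneous
# eigenbasis exists; here it does not, so the observables are incompatible.
commutator = sigma_x @ sigma_z - sigma_z @ sigma_x
print(np.allclose(commutator, 0))  # False
```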

Measurement probabilities and wavefunction collapse

There are a few possible ways to mathematically describe the measurement process (both the probability distribution and the collapsed wavefunction). The most convenient description depends on the spectrum (i.e., set of eigenvalues) of the observable.

Discrete, nondegenerate spectrum

Let $\hat{O}$ be an observable, and suppose that it has discrete eigenstates $|n\rangle$ (in bra–ket notation) for $n = 1, 2, 3, \ldots$ and corresponding eigenvalues $O_1, O_2, O_3, \ldots$, no two of which are equal.

Assume the system is prepared in state $|\psi\rangle$. Since the eigenstates of an observable form a basis (the eigenbasis), it follows that $|\psi\rangle$ can be written in terms of the eigenstates as

$$|\psi\rangle = c_1|1\rangle + c_2|2\rangle + c_3|3\rangle + \cdots$$

(where $c_1, c_2, \ldots$ are complex numbers). Then measuring $\hat{O}$ can yield any of the results $O_1, O_2, O_3, \ldots$, with corresponding probabilities given by

$$\operatorname{Prob}(O_n) = \frac{|c_n|^2}{\langle\psi|\psi\rangle} = \frac{|\langle n|\psi\rangle|^2}{\langle\psi|\psi\rangle}.$$

Usually $|\psi\rangle$ is assumed to be normalized, in which case this expression reduces to

$$\operatorname{Prob}(O_n) = |c_n|^2 = |\langle n|\psi\rangle|^2.$$

If the result of the measurement is $O_n$, then the system's quantum state after the measurement is

$$|\psi_{\text{after}}\rangle = |n\rangle,$$

so any immediately repeated measurement of $\hat{O}$ will yield the same result $O_n$. (This phenomenon is called wavefunction collapse.)
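These formulas can be illustrated numerically. The following sketch (Python with NumPy; the observable, the prepared state, and the random seed are arbitrary choices) computes the Born-rule probabilities, simulates one measurement, and collapses the state:

```python
import numpy as np

rng = np.random.default_rng(0)

# Arbitrary observable and prepared state on a 3-dimensional state space.
O = np.diag([1.0, 2.0, 3.0])        # eigenvalues 1, 2, 3
psi = np.array([1.0, 1.0j, 2.0])
psi = psi / np.linalg.norm(psi)     # normalize |psi>

eigenvalues, eigenstates = np.linalg.eigh(O)

# Born rule: Prob(O_n) = |<n|psi>|^2 for a normalized state.
amplitudes = eigenstates.conj().T @ psi
probabilities = np.abs(amplitudes) ** 2
probabilities = probabilities / probabilities.sum()   # guard against rounding

# Simulate a single measurement and collapse onto the obtained eigenstate.
n = rng.choice(len(eigenvalues), p=probabilities)
result = eigenvalues[n]
psi_after = eigenstates[:, n]       # post-measurement state |n>

print(probabilities, result)
```

A repeated measurement of the same observable on psi_after reproduces the same result, since psi_after is already an eigenstate.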

Continuous, nondegenerate spectrum

Let $\hat{O}$ be an observable, and suppose that it has a continuous spectrum of eigenvalues filling the interval $(a,b)$. Assume further that each eigenvalue $x$ in this range is associated with a unique eigenstate $|x\rangle$.

Assume the system is prepared in state $|\psi\rangle$, which can be written in terms of the eigenbasis as

$$|\psi\rangle = \int_a^b \psi(x)\,|x\rangle\,dx$$

(where $\psi(x)$ is a complex-valued function). Then measuring $\hat{O}$ can yield a result anywhere in the interval $(a,b)$, with probability density function $|\psi(x)|^2/\langle\psi|\psi\rangle$; i.e., a result between $y$ and $z$ will occur with probability

$$\operatorname{Prob}(y \le x \le z) = \frac{\int_y^z |\psi(x)|^2\,dx}{\langle\psi|\psi\rangle}.$$

Again, $|\psi\rangle$ is often assumed to be normalized, in which case this expression reduces to

$$\operatorname{Prob}(y \le x \le z) = \int_y^z |\psi(x)|^2\,dx.$$

If the result of the measurement is $x$, then the new wave function will be the eigenstate

$$|\psi_{\text{after}}\rangle = |x\rangle.$$

Alternatively, it is often possible and convenient to analyze a continuous-spectrum measurement by taking it to be the limit of a different measurement with a discrete spectrum. For example, an analysis of scattering involves a continuous spectrum of energies, but by adding a "box" potential (which bounds the volume in which the particle can be found), the spectrum becomes discrete. In the limit of larger and larger boxes, this approach need not involve any approximation, but rather can be regarded as an equally valid formalism in which the problem can be analyzed.
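A minimal numerical sketch of this idea (Python with NumPy; the Gaussian wave packet and the interval endpoints are arbitrary choices) discretizes the interval into small bins and recovers the probability of a result between y and z from the probability density:

```python
import numpy as np

# Discretize the interval (a, b) into N points of spacing dx; a Gaussian
# wave packet is used as the (hypothetical) prepared state.
a, b, N = -5.0, 5.0, 1000
x = np.linspace(a, b, N)
dx = x[1] - x[0]

psi = np.exp(-x**2 / 2)                                   # unnormalized psi(x)
density = np.abs(psi)**2 / np.sum(np.abs(psi)**2 * dx)    # |psi(x)|^2 / <psi|psi>

# Probability of a measurement result between y and z.
y, z = 0.0, 1.0
prob = np.sum(density[(x >= y) & (x <= z)]) * dx
print(prob)
```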

Degenerate spectra

If there are multiple eigenstates with the same eigenvalue (called degeneracies), the analysis is a bit less simple to state, but not essentially different. In the discrete case, for example, instead of finding a complete eigenbasis, it is a bit more convenient to write the Hilbert space as a direct sum of eigenspaces. The probability of measuring a particular eigenvalue is the squared norm of the component of the state vector in the corresponding eigenspace, and the new state after the measurement is the (normalized) projection of the original state vector onto the appropriate eigenspace.
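A short sketch of the degenerate case (Python with NumPy; the observable and state are arbitrary, with eigenvalue 1 given a two-dimensional eigenspace) builds the projector onto each eigenspace and applies the rule just described:

```python
import numpy as np

# Arbitrary degenerate observable on C^3: eigenvalue 1.0 has a
# two-dimensional eigenspace, eigenvalue 2.0 a one-dimensional one.
O = np.diag([1.0, 1.0, 2.0])
psi = np.array([1.0, 1.0, 1.0]) / np.sqrt(3)

eigenvalues, eigenvectors = np.linalg.eigh(O)

# Group eigenvectors by eigenvalue and build the projector onto each eigenspace.
for value in np.unique(np.round(eigenvalues, 10)):
    columns = eigenvectors[:, np.isclose(eigenvalues, value)]
    P = columns @ columns.conj().T            # projector onto the eigenspace
    prob = np.vdot(psi, P @ psi).real         # squared norm of the component
    psi_after = (P @ psi) / np.sqrt(prob)     # normalized projection
    print(value, prob)
```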

Density matrix formulation

Instead of performing quantum-mechanics computations in terms of wavefunctions (kets), it is sometimes necessary to describe a quantum-mechanical system in terms of a density matrix. The analysis in this case is formally slightly different, but the physical content is the same, and indeed this case can be derived from the wavefunction formulation above. The result for the discrete, degenerate case, for example, is as follows:

Let $\hat{O}$ be an observable, and suppose that it has discrete eigenvalues $O_1, O_2, O_3, \ldots$, associated with eigenspaces $V_1, V_2, V_3, \ldots$ respectively. Let $P_n$ be the projection operator onto the eigenspace $V_n$.

Assume the system is prepared in the state described by the density matrix ρ. Then measuring $\hat{O}$ can yield any of the results $O_1, O_2, O_3, \ldots$, with corresponding probabilities given by

$$\operatorname{Prob}(O_n) = \operatorname{Tr}(P_n\rho),$$

where $\operatorname{Tr}$ denotes trace. If the result of the measurement is $O_n$, then the new density matrix will be

$$\rho_n = \frac{P_n\rho P_n}{\operatorname{Tr}(P_n\rho)}.$$

Alternatively, one can say that the measurement process results in the new density matrix

$$\rho' = \sum_n P_n\rho P_n,$$

where the difference is that $\rho'$ is the density matrix describing the entire ensemble, whereas $\rho_n$ is the density matrix describing the sub-ensemble whose measurement result was $O_n$.
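The same rules can be sketched directly in the density-matrix language (Python with NumPy; the projector and the density matrix below are arbitrary two-dimensional examples):

```python
import numpy as np

# Arbitrary rank-1 projector on C^2 and a valid density matrix.
P = np.array([[1.0, 0.0],
              [0.0, 0.0]])
rho = np.array([[0.7, 0.2],
                [0.2, 0.3]])

prob = np.trace(P @ rho).real                  # Prob(O_n) = Tr(P_n rho)
rho_n = P @ rho @ P / prob                     # sub-ensemble with this result
Q = np.eye(2) - P                              # projector onto the other eigenspace
rho_prime = P @ rho @ P + Q @ rho @ Q          # entire ensemble after measurement

print(prob)
print(rho_n)
print(rho_prime)
```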

Statistics of measurement

As detailed above, the result of measuring a quantum-mechanical system is described by a probability distribution. Some properties of this distribution are as follows:

Suppose we take a measurement corresponding to observable $\hat{O}$, on a system whose quantum state is $|\psi\rangle$ (assumed normalized). Then:

  • The mean (expectation) value of the measurement is $\langle\hat{O}\rangle = \langle\psi|\hat{O}|\psi\rangle$.
  • The variance of the measurement is $\sigma_O^2 = \langle\psi|\hat{O}^2|\psi\rangle - \langle\psi|\hat{O}|\psi\rangle^2$, and the standard deviation is $\sigma_O = \sqrt{\sigma_O^2}$.

These are direct consequences of the above formulas for measurement probabilities.
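For instance, the following sketch (Python with NumPy; the observable and state are arbitrary) checks that the expectation value computed as $\langle\psi|\hat{O}|\psi\rangle$ agrees with the mean of the outcome distribution:

```python
import numpy as np

# Arbitrary observable and normalized state on a 3-dimensional state space.
O = np.diag([1.0, 2.0, 3.0])
psi = np.array([1.0, 1.0, 2.0]) / np.sqrt(6)

eigenvalues, eigenstates = np.linalg.eigh(O)
probabilities = np.abs(eigenstates.conj().T @ psi) ** 2

mean_direct = np.vdot(psi, O @ psi).real            # <psi|O|psi>
mean_from_probs = np.sum(eigenvalues * probabilities)
variance = np.vdot(psi, O @ (O @ psi)).real - mean_direct**2

print(np.isclose(mean_direct, mean_from_probs))     # True
print(np.sqrt(variance))                            # standard deviation
```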

Example

Suppose that we have a particle in a 1-dimensional box, set up initially in the ground state $|1\rangle$. As can be computed from the time-independent Schrödinger equation, the energy of this state is $E_1 = \frac{\pi^2\hbar^2}{2mL^2}$ (where $m$ is the particle's mass and $L$ is the box length), and the spatial wavefunction is $\psi_1(x) = \sqrt{2/L}\,\sin(\pi x/L)$ for $0 \le x \le L$. If the energy is now measured, the result will be $E_1$ with certainty, and this measurement will not affect the wavefunction.

Next suppose that the particle's position is measured. The position $x$ will be measured with probability density

$$P(x) = |\psi_1(x)|^2 = \frac{2}{L}\sin^2\!\left(\frac{\pi x}{L}\right).$$

If the measurement result was $x = S$, then the wavefunction after measurement will be the position eigenstate $|S\rangle$. If the particle's position is immediately measured again, the same position will be obtained.

The new wavefunction can, like any wavefunction, be written as a superposition of eigenstates of any observable. In particular, using the energy eigenstates $|n\rangle$, with spatial wavefunctions $\psi_n(x) = \sqrt{2/L}\,\sin(n\pi x/L)$, we have

$$|S\rangle = \sum_n |n\rangle\,\langle n|S\rangle = \sum_n \psi_n(S)\,|n\rangle.$$

If we now leave this state alone, it will smoothly evolve in time according to the Schrödinger equation. But suppose instead that an energy measurement is immediately taken. Then the possible energy values $E_n = \frac{n^2\pi^2\hbar^2}{2mL^2}$ will be measured with relative probabilities

$$\operatorname{Prob}(E_n) \propto |\langle n|S\rangle|^2 = \frac{2}{L}\sin^2\!\left(\frac{n\pi S}{L}\right),$$

and moreover if the measurement result is $E_n$, then the new state will be the energy eigenstate $|n\rangle$.

So in this example, due to the process of wavefunction collapse, a particle initially in the ground state can end up in any energy level after just two subsequent measurements of non-commuting observables.
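The relative probabilities in this example are easy to tabulate numerically. The following sketch (Python with NumPy) uses units in which $\hbar = m = 1$, a box of length $L = 1$, and a hypothetical measurement result $S = 0.3L$:

```python
import numpy as np

# Units with hbar = m = 1; hypothetical position-measurement result S.
L = 1.0
S = 0.3 * L
n = np.arange(1, 6)

E = n**2 * np.pi**2 / (2 * L**2)                          # energy levels E_n
relative_prob = (2 / L) * np.sin(n * np.pi * S / L) ** 2  # |psi_n(S)|^2 (relative, unnormalized)

print(np.column_stack((n, E, relative_prob)))
```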

Wavefunction collapse

The process in which a quantum state becomes one of the eigenstates of the operator corresponding to the measured observable is called "collapse", or "wavefunction collapse". The final eigenstate appears randomly with a probability equal to the square of its overlap with the original state.[1] The process of collapse has been studied in many experiments, most famously in the double-slit experiment. The wavefunction collapse raises serious questions regarding "the measurement problem",[3] as well as questions of determinism and locality, as demonstrated in the EPR paradox and later in GHZ entanglement. (See below.)

In the last few decades, major advances have been made toward a theoretical understanding of the collapse process. This new theoretical framework, called quantum decoherence, supersedes previous notions of instantaneous collapse and provides an explanation for the absence of quantum coherence after measurement. While this theory correctly predicts the form and probability distribution of the final eigenstates, it does not explain the randomness inherent in the choice of final state.

von Neumann measurement scheme

The von Neumann measurement scheme, the ancestor of quantum decoherence theory, describes measurements by taking into account the measuring apparatus, which is also treated as a quantum object. Let the quantum state be in the superposition $|\psi\rangle = \sum_n c_n|\psi_n\rangle$, where $|\psi_n\rangle$ are eigenstates of the operator that needs to be measured. In order to make the measurement, the measured system described by $|\psi\rangle$ needs to interact with the measuring apparatus described by the quantum state $|\phi\rangle$, so that the total wave function before the interaction is $|\psi\rangle|\phi\rangle$. During the interaction of object and measuring instrument the unitary evolution is supposed to realize the following transition from the initial to the final total wave function:

$$|\psi\rangle|\phi\rangle \;\longrightarrow\; \sum_n c_n|\psi_n\rangle|\phi_n\rangle,$$

where $|\phi_n\rangle$ are orthonormal states of the measuring apparatus. The unitary evolution above is referred to as premeasurement. The relation with wave function collapse is established by calculating the final density operator of the object, $\sum_n |c_n|^2\,|\psi_n\rangle\langle\psi_n|$, from the final total wave function. This density operator is interpreted by von Neumann as describing an ensemble of objects, each being after the measurement with probability $|c_n|^2$ in the state $|\psi_n\rangle$.

The transition

$$|\psi\rangle \;\to\; \rho = \sum_n |c_n|^2\,|\psi_n\rangle\langle\psi_n|$$

is often referred to as weak von Neumann projection, the wave function collapse or strong von Neumann projection

$$|\psi\rangle \;\to\; |\psi_n\rangle$$

being thought to correspond to an additional selection of a subensemble by means of observation.
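The premeasurement transition and the resulting ensemble can be sketched for a two-level object and a two-state pointer (Python with NumPy; the coefficients and basis choices are arbitrary):

```python
import numpy as np

# Object eigenstates |psi_n>, orthonormal pointer states |phi_n>, and
# arbitrary normalized coefficients c_n.
c = np.array([0.6, 0.8])
psi_n = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
phi_n = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]

# Final total state after premeasurement: sum_n c_n |psi_n> (x) |phi_n>.
total = sum(c[k] * np.kron(psi_n[k], phi_n[k]) for k in range(2))

# Partial trace over the apparatus gives the object's final density operator.
rho_total = np.outer(total, total.conj()).reshape(2, 2, 2, 2)
rho_object = np.trace(rho_total, axis1=1, axis2=3)

print(rho_object)   # diag(|c_1|^2, |c_2|^2): weak von Neumann projection
```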

In case the measured observable has a degenerate spectrum, weak von Neumann projection is generalized to Lüders projection

$$|\psi\rangle\langle\psi| \;\to\; \sum_n P_n|\psi\rangle\langle\psi|P_n, \qquad P_n = \sum_m |\psi_{nm}\rangle\langle\psi_{nm}|,$$

in which the vectors $|\psi_{nm}\rangle$ for fixed $n$ are the degenerate eigenvectors of the measured observable. For an arbitrary state described by a density operator $\rho$, Lüders projection is given by

$$\rho \;\to\; \sum_n P_n\rho P_n.$$

Measurements of the second kind

In a measurement of the second kind the unitary evolution during the interaction of object and measuring instrument is supposed to be given by

$$|\psi\rangle|\phi\rangle = \sum_n c_n|\psi_n\rangle|\phi\rangle \;\longrightarrow\; \sum_n c_n|\chi_n\rangle|\phi_n\rangle,$$

in which the states $|\chi_n\rangle$ of the object are determined by specific properties of the interaction between object and measuring instrument. They are normalized but not necessarily mutually orthogonal. The relation with wave function collapse is analogous to that obtained for measurements of the first kind, the final state of the object now being $|\chi_n\rangle$ with probability $|c_n|^2$. Note that many present-day measurement procedures are measurements of the second kind, some even functioning correctly only as a consequence of being of the second kind. For instance, a photon counter detects a photon by absorbing and hence annihilating it, thus ideally leaving the electromagnetic field in the vacuum state rather than in a state corresponding to the number of detected photons; the Stern–Gerlach experiment, too, would not function at all if it really were a measurement of the first kind.[4]
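As a variation on the premeasurement sketch above, a measurement of the second kind can be modelled by letting the object end up in states $|\chi_n\rangle$ that are normalized but not orthogonal (Python with NumPy; all states and coefficients are arbitrary):

```python
import numpy as np

# Coefficients c_n, non-orthogonal (but normalized) final object states chi_n,
# and orthonormal pointer states phi_n.
c = np.array([0.6, 0.8])
chi_n = [np.array([1.0, 0.0]),
         np.array([1.0, 1.0]) / np.sqrt(2)]    # not orthogonal to chi_1
phi_n = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]

# Final total state: sum_n c_n |chi_n> (x) |phi_n>.
total = sum(c[k] * np.kron(chi_n[k], phi_n[k]) for k in range(2))

# Probability of pointer outcome n: project the total state onto |phi_n> of
# the apparatus; because the chi_n are normalized, this reproduces |c_n|^2.
for k in range(2):
    projector = np.kron(np.eye(2), np.outer(phi_n[k], phi_n[k].conj()))
    prob = np.vdot(total, projector @ total).real
    print(k, prob)
```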

Decoherence in quantum measurement

One can also introduce the interaction with the environment $|e\rangle$, so that, in a measurement of the first kind, after the interaction the total wave function takes a form

$$\sum_n c_n|\psi_n\rangle|\phi_n\rangle|e_n\rangle,$$

which is related to the phenomenon of decoherence.

The above is completely described by the Schrödinger equation, and there are no interpretational problems with it. Now the problematic wavefunction collapse does not need to be understood as a process on the level of the measured system, but can also be understood as a process on the level of the measuring apparatus, or as a process on the level of the environment. Studying these processes provides considerable insight into the measurement problem by avoiding the arbitrary boundary between the quantum and classical worlds, though it does not explain the presence of randomness in the choice of final eigenstate. If the set of states

$$\{|\psi_n\rangle\}, \quad \{|\phi_n\rangle\}, \quad \text{or} \quad \{|e_n\rangle\}$$

represents a set of states that do not overlap in space, the appearance of collapse can be generated by either the Bohm interpretation or the Everett interpretation, both of which deny the reality of wavefunction collapse. According to their supporters, both predict the same probabilities for collapses to various states as the conventional interpretation does. The Bohm interpretation is held to be correct only by a small minority of physicists, since there are difficulties with its generalization for use with relativistic quantum field theory. However, there is no proof that the Bohm interpretation is inconsistent with quantum field theory, and work to reconcile the two is ongoing. The Everett interpretation easily accommodates relativistic quantum field theory.

Philosophical problems of quantum measurements

What physical interaction constitutes a measurement?

Until the advent of quantum decoherence theory in the late 20th century, a major conceptual problem of quantum mechanics and especially the Copenhagen interpretation was the lack of a distinctive criterion for a given physical interaction to qualify as "a measurement" and cause a wavefunction to collapse. This is best illustrated by the Schrödinger's cat paradox. Certain aspects of this question are now well understood in the framework of quantum decoherence theory, such as an understanding of weak measurements, and quantifying what measurements or interactions are sufficient to destroy quantum coherence. Nevertheless, there remains less than universal agreement among physicists on some aspects of the question of what constitutes a measurement.

Does measurement actually determine the state?

The question of whether (and in what sense) a measurement actually determines the state is one which differs among the different interpretations of quantum mechanics. (It is also closely related to the understanding of wavefunction collapse.) For example, in most versions of the Copenhagen interpretation, the measurement determines the state, and after measurement the state is definitely what was measured. But according to the many-worlds interpretation, measurement determines the state in a more restricted sense: In other "worlds", other measurement results were obtained, and the other possible states still exist.

Is the measurement process random or deterministic?

As described above, there is universal agreement that quantum mechanics appears random, in the sense that all experimental results yet uncovered can be predicted and understood within a framework in which quantum-mechanical measurements are fundamentally random. Nevertheless, it is not settled[5] whether this is true, fundamental randomness, or merely "emergent" randomness resulting from underlying hidden variables which deterministically cause measurement results to happen a certain way each time. This continues to be an area of active research.[6]

If there are hidden variables, they would have to be "nonlocal".

Does the measurement process violate locality?

In physics, the Principle of locality is the concept that information cannot travel faster than the speed of light (also see special relativity). It is known experimentally (see Bell's theorem, which is related to the EPR paradox) that if quantum mechanics is deterministic (due to hidden variables, as described above), then it is nonlocal (i.e. violates the principle of locality). Nevertheless, there is not universal agreement among physicists on whether quantum mechanics is nondeterministic, nonlocal, or both.[5]

References

  1. ^ a b J. J. Sakurai (1994). Modern Quantum Mechanics (2nd ed.). p. 24. ISBN 0201539292.
  2. ^ J. J. Sakurai (1994). Modern Quantum Mechanics (2nd ed.). p. 25. ISBN 0201539292.
  3. ^ George S. Greenstein and Arthur G. Zajonc (2006). The Quantum Challenge: Modern Research On The Foundations Of Quantum Mechanics (2nd ed.). ISBN 076372470X.
  4. ^ M. O. Scully, W. E. Lamb and A. Barut (1987). "On the theory of the Stern–Gerlach apparatus" (PDF). Foundations of Physics. 17: 575–583. Retrieved 9 November 2012.
  5. ^ a b Hrvoje Nikolić (2007). "Quantum mechanics: Myths and facts". Foundations of Physics. 37: 1563–1611. Retrieved 9 November 2012.
  6. ^ S. Gröblacher et al. (2007). "An experimental test of non-local realism". Nature. 446: 871. Retrieved 9 November 2012.
