Gottesman–Knill theorem


In quantum computing, the Gottesman–Knill theorem is a theoretical result by Daniel Gottesman and Emanuel Knill stating that an important subclass of quantum circuits, called stabilizer circuits, can be efficiently simulated on a classical computer. Stabilizer circuits are circuits that use only gates from the normalizer of the qubit Pauli group, known as the Clifford group; despite the name, it is not directly related to Clifford algebras.

The Gottesman–Knill theorem was published in a single-author paper by Gottesman, in which he credits Knill with the result via private communication.[1]

Formal statement

Theorem: A quantum circuit using only the following elements can be simulated efficiently on a classical computer:

  1. Preparation of qubits in computational basis states,
  2. Quantum gates from the Clifford group (Hadamard gates, controlled-NOT gates, and phase gates), and
  3. Measurements in the computational basis.
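Such a circuit can be simulated by tracking the state's stabilizer generators rather than its exponentially many amplitudes: each Clifford gate updates the generators by a fixed conjugation rule. Below is a minimal sketch in Python; the class name and interface are illustrative (not from the paper), and the update rules follow the standard binary-symplectic tableau formulation.

```python
# Minimal stabilizer-tableau sketch (illustrative helper, not the paper's code).
# Each generator is a sign bit plus per-qubit (x, z) bits; Clifford gates act
# by conjugation on the generators instead of updating 2**n amplitudes.

class StabilizerSim:
    def __init__(self, n):
        # |0...0> is stabilized by Z_1, ..., Z_n
        self.n = n
        self.x = [[0] * n for _ in range(n)]                 # x-part of generator i
        self.z = [[int(i == j) for j in range(n)] for i in range(n)]  # z-part
        self.r = [0] * n                                     # sign bit: 0 -> +, 1 -> -

    def h(self, q):            # Hadamard: X <-> Z, Y -> -Y
        for i in range(self.n):
            self.r[i] ^= self.x[i][q] & self.z[i][q]
            self.x[i][q], self.z[i][q] = self.z[i][q], self.x[i][q]

    def s(self, q):            # Phase gate: X -> Y, Y -> -X, Z -> Z
        for i in range(self.n):
            self.r[i] ^= self.x[i][q] & self.z[i][q]
            self.z[i][q] ^= self.x[i][q]

    def cnot(self, c, t):      # CNOT: X_c -> X_c X_t, Z_t -> Z_c Z_t
        for i in range(self.n):
            self.r[i] ^= self.x[i][c] & self.z[i][t] & (self.x[i][t] ^ self.z[i][c] ^ 1)
            self.x[i][t] ^= self.x[i][c]
            self.z[i][c] ^= self.z[i][t]

    def generators(self):
        out = []
        for i in range(self.n):
            paulis = "".join("IXZY"[self.x[i][q] + 2 * self.z[i][q]]
                             for q in range(self.n))
            out.append(("-" if self.r[i] else "+") + paulis)
        return out

sim = StabilizerSim(2)
sim.h(0)          # ZI -> XI
sim.cnot(0, 1)    # XI -> XX, IZ -> ZZ
print(sim.generators())   # Bell state: ['+XX', '+ZZ']
```

Note that the simulated circuit prepares a maximally entangled Bell state, yet the bookkeeping is just a few bit operations per gate, in line with the theorem.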

The Gottesman–Knill theorem shows that even some highly entangled states can be simulated efficiently. Several important types of quantum algorithms use only Clifford gates, most importantly the standard algorithms for entanglement purification and for quantum error correction. From a practical point of view, stabilizer circuits have been simulated in O(n log n) time using the graph state formalism.
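The compactness behind this efficiency can be made concrete: a generic n-qubit state requires 2^n complex amplitudes, whereas a stabilizer state such as the n-qubit GHZ state is fixed by only n Pauli generators. A small illustrative snippet (the helper function is hypothetical, using one standard choice of GHZ generators):

```python
# Illustrative scaling comparison (hypothetical helper, for exposition only):
# the n-qubit GHZ state (|0...0> + |1...1>)/sqrt(2) is fully specified by
# n Pauli generators, versus 2**n amplitudes for a generic state.

def ghz_generators(n):
    # One standard generating set: X on every qubit, plus adjacent Z Z pairs
    gens = ["X" * n]
    for i in range(n - 1):
        gens.append("I" * i + "ZZ" + "I" * (n - i - 2))
    return gens

for n in (3, 20):
    print(n, "qubits:", len(ghz_generators(n)), "generators vs", 2 ** n, "amplitudes")
```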

References

  1. ^ Gottesman, Daniel (1998). "The Heisenberg Representation of Quantum Computers". arXiv:quant-ph/9807006v1.