Parity learning

Parity learning is a problem in machine learning. An algorithm that solves this problem must find a function ƒ, given some samples (x, ƒ(x)) and the assurance that ƒ computes the parity of bits at some fixed locations. The samples are generated using some distribution over the input. The problem is easy to solve using Gaussian elimination, provided that a sufficient number of samples (from a distribution which is not too skewed) are given to the algorithm.
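
As an illustration, the following Python sketch recovers the hidden index set from noiseless samples by Gauss-Jordan elimination over GF(2). It assumes enough samples have been drawn that the system determines the secret uniquely; the function and variable names (solve_parity, secret, and so on) are illustrative, not from any standard library.

    import numpy as np

    def solve_parity(X, y):
        # Solve X s = y over GF(2); s is the 0/1 characteristic vector
        # of the fixed locations whose parity f computes.
        A = np.concatenate([X % 2, y.reshape(-1, 1) % 2], axis=1)
        m, n = X.shape
        row, pivots = 0, []
        for col in range(n):
            # Find a row with a 1 in this column to serve as the pivot.
            hit = next((r for r in range(row, m) if A[r, col]), None)
            if hit is None:
                continue  # free variable; left as 0 below
            A[[row, hit]] = A[[hit, row]]
            # Clear the column from every other row (XOR = addition mod 2).
            for r in range(m):
                if r != row and A[r, col]:
                    A[r] ^= A[row]
            pivots.append(col)
            row += 1
        s = np.zeros(n, dtype=int)
        for r, col in enumerate(pivots):
            s[col] = A[r, n]
        return s

    # Toy usage: the hidden parity reads bits 0 and 2 of 4-bit inputs.
    rng = np.random.default_rng(0)
    secret = np.array([1, 0, 1, 0])
    X = rng.integers(0, 2, size=(20, 4))
    print(solve_parity(X, (X @ secret) % 2))  # -> [1 0 1 0]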

Noisy version ("Learning Parity with Noise")

In Learning Parity with Noise (LPN), the samples may contain some error. Instead of samples (x, ƒ(x)), the algorithm is provided with (x, y), where y = 1 − ƒ(x) with some small probability (and y = ƒ(x) otherwise). The noisy version of the parity learning problem is conjectured to be hard.[citation needed]
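
A minimal sketch of how such noisy samples might be generated, assuming a uniform input distribution and an independent flip of each label with probability eta (the noise rate); the names lpn_samples and eta are hypothetical, chosen only for this example.

    import numpy as np

    def lpn_samples(secret, m, eta, rng):
        # Draw m uniform inputs and their parity labels, then flip each
        # label independently with probability eta.
        X = rng.integers(0, 2, size=(m, len(secret)))
        flips = rng.random(m) < eta
        y = ((X @ secret) + flips) % 2
        return X, y

    rng = np.random.default_rng(1)
    secret = np.array([1, 0, 1, 0, 1, 1, 0, 1])
    X, y = lpn_samples(secret, m=1000, eta=0.125, rng=rng)
    # Roughly a fraction eta of the labels disagree with the true parity.
    print((y != (X @ secret) % 2).mean())

The presence of even a small amount of noise is what defeats plain Gaussian elimination: each row operation XORs equations together, so every derived equation compounds the errors of all the rows combined into it.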
