Progress in artificial intelligence

Figure: Progress in machine classification of images. The error rate of AI systems by year; the red line marks the error rate of a trained human on the same task.

Artificial intelligence applications have been used in a wide range of fields including medical diagnosis, stock trading, robot control, law, scientific discovery and toys. However, many AI applications are not perceived as AI: "A lot of cutting edge AI has filtered into general applications, often without being called AI because once something becomes useful enough and common enough it's not labeled AI anymore."[1] "Many thousands of AI applications are deeply embedded in the infrastructure of every industry."[2] In the late 1990s and early 21st century, AI technology became widely used as elements of larger systems,[2][3] but the field is rarely credited for these successes.

To allow comparison with human performance, artificial intelligence can be evaluated on constrained and well-defined problems. Such tests have been termed subject-matter expert Turing tests. Smaller problems also provide more achievable goals, and there is an ever-increasing number of positive results.

Performance evaluation

Alan Turing based his famous Turing test on language, which he regarded as the defining ability of human beings.[4] Yet there are many other useful abilities that can be described as showing some form of intelligence, and evaluating them separately gives better insight into the comparative success of artificial intelligence in different areas.

In what has been called the Feigenbaum test, Edward Feigenbaum, the inventor of expert systems, argued for subject-specific expert tests.[5] A 2003 paper by Jim Gray of Microsoft suggested extending the Turing test to speech understanding, speaking, and the recognition of objects and behavior.[6]

Broad classes of outcome for an AI test may be given as follows (a schematic sketch follows the list):

  • optimal: it is not possible to perform better
  • super-human: performs better than all humans
  • high-human: performs better than most humans
  • par-human: performs similarly to most humans
  • sub-human: performs worse than most humans
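
The sketch below is a minimal illustration of these classes, assuming a single numeric benchmark score per system (higher is better), a sample of scores from individual humans on the same well-defined task, and an arbitrary similarity margin around the median human score. The enum, the classify function, and the margin value are hypothetical conveniences for illustration, not part of any standard evaluation protocol.

    from enum import Enum

    class OutcomeClass(Enum):
        OPTIMAL = "optimal"          # it is not possible to perform better
        SUPER_HUMAN = "super-human"  # performs better than all humans
        HIGH_HUMAN = "high-human"    # performs better than most humans
        PAR_HUMAN = "par-human"      # performs similarly to most humans
        SUB_HUMAN = "sub-human"      # performs worse than most humans

    def classify(ai_score, human_scores, optimal_score=None, margin=0.05):
        """Classify an AI score against a sample of human scores.

        Assumes higher scores are better; `margin` is the relative band
        around the median human score that counts as "similar".
        """
        if optimal_score is not None and ai_score >= optimal_score:
            return OutcomeClass.OPTIMAL
        if ai_score > max(human_scores):
            return OutcomeClass.SUPER_HUMAN
        median = sorted(human_scores)[len(human_scores) // 2]
        if ai_score > median * (1 + margin):
            return OutcomeClass.HIGH_HUMAN
        if ai_score >= median * (1 - margin):
            return OutcomeClass.PAR_HUMAN
        return OutcomeClass.SUB_HUMAN

    # Example: an AI scoring 0.92 against humans scoring 0.70-0.95
    print(classify(0.92, [0.70, 0.80, 0.85, 0.90, 0.95]))  # OutcomeClass.HIGH_HUMAN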

Optimal

  • Checkers (English draughts): weakly solved in 2007; perfect play by both sides leads to a draw[7]
  • Rubik's Cube: optimal solutions known; no starting position requires more than 20 moves[8]
  • Heads-up limit Texas hold'em poker: essentially solved in 2015[9]

Super-human

  • Chess: computers have defeated the strongest human players[10]
  • Go: AlphaGo defeated top professional Lee Sedol in 2016[11]
  • Reversi (Othello)[13]
  • Scrabble[14]
  • Backgammon[16]
  • Arimaa: the Arimaa Challenge was won in 2015[17]
  • Jeopardy!: IBM's Watson defeated the champions Ken Jennings and Brad Rutter in 2011[18][19]

High-human

  • Crossword puzzles[22][23][24]

Par-human

  • Classification of images[21]
  • One-shot classification of handwritten characters (Omniglot)[25]
  • Translation between many language pairs[26]

Sub-human

  • Self-driving cars: still require human intervention in difficult conditions[27]

See also

References

  1. ^ "AI set to exceed human brain power". CNN.com. July 26, 2006.
  2. ^ a b Kurzweil 2005, p. 264
  3. ^ National Research Council (1999), "Developments in Artificial Intelligence", Funding a Revolution: Government Support for Computing Research, National Academy Press, ISBN 0-309-06278-0, OCLC 246584055  under "Artificial Intelligence in the 90s"
  4. ^ Turing, Alan (October 1950), "Computing Machinery and Intelligence", Mind, LIX (236): 433–460, ISSN 0026-4423, doi:10.1093/mind/LIX.236.433, retrieved 2008-08-18 
  5. ^ Feigenbaum, Edward A. (2003). "Some challenges and grand challenges for computational intelligence". Journal of the ACM. 50 (1): 32–40. doi:10.1145/602382.602400. 
  6. ^ Gray, Jim (2003). "What Next? A Dozen Information-Technology Research Goals". Journal of the ACM. 50 (1): 41–57. arXiv:cs/9911005 [cs.GL]. Bibcode:1999cs.......11005G. 
  7. ^ Schaeffer, J.; Burch, N.; Bjornsson, Y.; Kishimoto, A.; Muller, M.; Lake, R.; Lu, P.; Sutphen, S. (2007). "Checkers is solved". Science. 317 (5844): 1518–1522. Bibcode:2007Sci...317.1518S. CiteSeerX 10.1.1.95.5393. PMID 17641166. doi:10.1126/science.1144079. 
  8. ^ "God's Number is 20". 
  9. ^ Bowling, M.; Burch, N.; Johanson, M.; Tammelin, O. (2015). "Heads-up limit hold'em poker is solved". Science. 347 (6218): 145–9. Bibcode:2015Sci...347..145B. PMID 25574016. doi:10.1126/science.1259433. 
  10. ^ See, for example: https://www.chess.com/news/komodo-beats-nakamura-in-final-battle-1331
  11. ^ AlphaGo versus Lee Sedol
  12. ^ "Computer software sets new record for solving jigsaw puzzle". 
  13. ^ Reversi#Computer opponents
  14. ^ Sheppard, B. (2002). "World-championship-caliber Scrabble". Artificial Intelligence. 134: 241–275. doi:10.1016/S0004-3702(01)00166-7. 
  15. ^ Computer bridge#Computers versus humans
  16. ^ Tesauro, Gerald (March 1995). "Temporal difference learning and TD-Gammon". Communications of the ACM. 38 (3): 58–68. doi:10.1145/203330.203343. 
  17. ^ "The Arimaa Challenge". 2015. Retrieved Jan 12, 2017. 
  18. ^ "Watson beats Jeopardy grand-champions". The New York Times. February 17, 2011. http://www.nytimes.com/2011/02/17/science/17jeopardy-watson.html
  19. ^ Jackson, Joab. "IBM Watson Vanquishes Human Jeopardy Foes". PC World. IDG News. Retrieved 2011-02-17. 
  20. ^ [1]
  21. ^ "Microsoft researchers say their newest deep learning system beats humans -- and Google - VentureBeat - Big Data - by Jordan Novet". VentureBeat. 
  22. ^ "Proverb, the Crossword-Solving Computer Program". American Crossword Puzzle tournament. Retrieved Dec 18, 2016. 
  23. ^ Keim, Greg A.; Shazeer, Noam; Littman, Michael L.; Agarwal, Sushant; Cheves, Catherine M.; Fitzgerald, Joseph; Grosland, Jason; Jiang, Fan; Pollard, Shannon; Weinmeister, Karl (1999). "Proverb: The probabilistic cruciverbalist". In Proceedings of the Sixteenth National Conference on Artificial Intelligence, 710–717. Menlo Park, Calif.: AAAI Press.
  24. ^ Wernick, Adam (24 Sep 2014). "'Dr. Fill' vies for crossword solving supremacy, but still comes up short". Public Radio International. Retrieved Dec 18, 2016. 
  25. ^ "One-shot Learning with Memory-Augmented Neural Networks; Page 5: Table 1". 19 May 2016. Retrieved 2017-06-04. 4.2. Omniglot Classification: "The network exhibited high classification accuracy on just the second presentation of a sample from a class within an episode (82.8%), reaching up to 94.9% accuracy by the fifth instance and 98.1% accuracy by the tenth." 
  26. ^ There are several ways of evaluating machine translation systems. People competent in a second language frequently outperform machine translation systems, but the average person is often less capable. Some machine translation systems, such as Google Translate, handle a large number of languages and as a result have a broader competence than most humans; very few humans, for example, can translate from Arabic to Polish, French to Swahili, or Armenian to Vietnamese. When comparing across several languages, machine translation systems will tend to outperform humans.
  27. ^ Harris, Mark (12 Jan 2016). "Google reports self-driving car mistakes: 272 failures and 13 near misses". The Guardian. Retrieved Dec 18, 2016.