Weak artificial intelligence

From Wikipedia, the free encyclopedia
Revision as of 00:16, 18 October 2021

Weak artificial intelligence (weak AI) is artificial intelligence that implements a limited part of a mind, or, as narrow AI,[1][2][3] is focused on one narrow task. In John Searle's terms it “would be useful for testing hypotheses about minds, but would not actually be minds”.[4] It is contrasted with artificial general intelligence, defined as a machine with the ability to apply intelligence to any problem rather than just one specific problem. It is also contrasted with strong AI, a machine that has consciousness, sentience and mind by virtue of running a computer program.[5][6]

Terminology

“Weak AI” is sometimes called “narrow AI”, but the latter is usually interpreted as a set of subfields within the former.[by whom?] Testing hypotheses about minds or parts of minds is typically not part of narrow AI; instead, narrow AI implements a superficial lookalike of some such feature. Many currently existing systems that claim to use “artificial intelligence” operate as narrow AI focused on a specific problem, and are not weak AI in the traditional sense.

Siri, Cortana, and Google Assistant are all examples of narrow AI, but they are not good examples of weak AI[citation needed][discuss], as they operate within a limited, pre-defined range of functions. They do not implement parts of minds; they use natural language processing together with predefined rules. In particular, they are not examples of strong AI, as they exhibit no genuine intelligence or self-awareness. AI researcher Ben Goertzel, on his blog in 2010, called Siri "VERY narrow and brittle", as evidenced by its frustrating results when asked questions outside the limits of the application.[7]
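The brittleness described above can be illustrated with a toy sketch. The rules, replies, and function names below are hypothetical, chosen only to show the general pattern of "natural language processing together with predefined rules"; this is not the actual implementation of Siri, Cortana, or Google Assistant.

```python
import re

# Toy rule-based "assistant": each rule maps a predefined input
# pattern to a canned reply. Everything here is illustrative.
RULES = [
    (re.compile(r"what time is it", re.I),
     lambda m: "It is 12:00."),
    (re.compile(r"weather in (\w+)", re.I),
     lambda m: f"The weather in {m.group(1)} is sunny."),
]

def respond(utterance: str) -> str:
    """Return the first matching rule's reply, or fail opaquely."""
    for pattern, handler in RULES:
        match = pattern.search(utterance)
        if match:
            return handler(match)
    # Outside its pre-defined range of functions, the system is
    # "brittle": it has no general understanding to fall back on.
    return "Sorry, I don't understand."

print(respond("What time is it?"))          # matched by a rule
print(respond("Weather in Paris, please"))  # matched by a rule
print(respond("Why do minds exist?"))       # outside the rules
```

Within its predefined range the system appears capable, but any question outside that range exposes that no part of a mind is being implemented, only pattern matching against fixed rules.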

Impact

Some commentators[who?] think that weak AI could be dangerous because of this "brittleness", failing in unpredictable ways. Weak AI could cause disruptions in the electric grid, damage nuclear power plants, cause global economic problems, and misdirect autonomous vehicles.[8]

See also

References

  1. ^ io9.com on narrow AI. Published 1 April 2013; retrieved 16 February 2014. http://io9.com/how-much-longer-before-our-first-ai-catastrophe-464043243
  2. ^ AI researcher Ben Goertzel explains why he became interested in AGI instead of narrow AI. Published 18 October 2013; retrieved 16 February 2014. http://intelligence.org/2013/10/18/ben-goertzel/
  3. ^ TechCrunch discusses AI app building regarding narrow AI. Published 16 October 2015; retrieved 17 October 2015. https://techcrunch.com/2015/10/15/machine-learning-its-the-hard-problems-that-are-valuable/
  4. ^ The Cambridge Handbook of Artificial Intelligence. Frankish, Keith; Ramsey, William M. (eds.). Cambridge, UK. 12 June 2014. p. 342. ISBN 978-0-521-87142-6. OCLC 865297798.
  5. ^ Searle, John R. (September 1980). "Minds, brains, and programs" (PDF). Behavioral and Brain Sciences. 3 (3): 417–424. doi:10.1017/s0140525x00005756. ISSN 0140-525X.
  6. ^ Perera Molligoda Arachchige, Arosh S.; Svet, Afanasy (10 September 2021). "Integrating artificial intelligence into radiology practice: undergraduate students' perspective". European Journal of Nuclear Medicine and Molecular Imaging. doi:10.1007/s00259-021-05558-y. ISSN 1619-7089.
  7. ^ Ben Goertzel blog post. Published 6 February 2010; retrieved 16 February 2014. http://multiverseaccordingtoben.blogspot.com/2010/02/siri-new-iphone-personal-assistant-some.html
  8. ^ Retrieved 16 February 2014. http://io9.com/how-much-longer-before-our-first-ai-catastrophe-464043243