Weak AI

From Wikipedia, the free encyclopedia

Weak artificial intelligence (weak AI) is artificial intelligence that implements a limited part of the mind, or, as narrow AI,[1][2][3] is focused on one narrow task. In John Searle's terms it "would be useful for testing hypotheses about minds, but would not actually be minds".[4]

It is contrasted with strong AI, a term that is defined variously across the literature.

Scholars such as Antonio Lieto have argued that current research in both AI and cognitive modelling is aligned with the weak-AI hypothesis (which should not be confused with the "general" vs "narrow" AI distinction), and that the popular assumption that cognitively inspired AI systems espouse the strong-AI hypothesis is ill-posed and problematic, since "artificial models of brain and mind can be used to understand mental phenomena without pretending that they are the real phenomena that they are modelling"[5] (p. 85) (as the strong-AI assumption, by contrast, implies).

Terminology

"Weak AI" is sometimes called "narrow AI", but the latter is usually interpreted as a subfield of the former.[by whom?] Narrow AI typically does not involve hypothesis testing about minds or parts of minds, but rather implements some superficial lookalike feature. Many currently existing systems that claim to use "artificial intelligence" likely operate as narrow AI focused on a specific problem, and are not weak AI in the traditional sense.

Siri, Cortana, and Google Assistant are all examples of narrow AI, but they are not good examples of weak AI,[citation needed][discuss] as they operate within a limited pre-defined range of functions. They do not implement parts of minds; they use natural language processing together with predefined rules. Nor are they examples of strong AI, as they exhibit neither genuine intelligence nor self-awareness. Writing on his blog in 2010, AI researcher Ben Goertzel called Siri "VERY narrow and brittle", as evidenced by the annoying results it gives when asked questions outside the limits of the application.[6]
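The "narrow and brittle" behaviour described above can be illustrated with a toy sketch (not drawn from any real assistant's implementation): a system that matches input against predefined rules answers well inside its pre-defined range, but has no fallback understanding outside it. The keywords and canned answers here are invented for illustration.

```python
# Toy illustration of a rule-based "narrow AI" assistant: predefined
# keyword rules, no model of mind behind them. Queries outside the
# rule set expose the system's brittleness.

RULES = {
    "weather": "It is sunny today.",
    "time": "It is 12:00.",
}

def respond(query: str) -> str:
    # Match the query against each predefined rule in turn.
    for keyword, answer in RULES.items():
        if keyword in query.lower():
            return answer
    # Outside its narrow range, the system fails uninformatively.
    return "Sorry, I don't understand."

print(respond("What's the weather like?"))  # handled by a rule
print(respond("Can you write me a poem?"))  # outside the rules: brittle
```

However sophisticated the pattern matching becomes, a system of this shape stays narrow: it implements a superficial lookalike of conversation rather than any part of a mind.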

Impact

Some commentators[who?] think that, because of this "brittleness", weak AI could be dangerous and fail in unpredictable ways. Weak AI could cause disruptions in the electric grid, damage nuclear power plants, cause global economic problems, and misdirect autonomous vehicles.[1]

See also

References

  1. ^ a b Dvorsky, George (1 April 2013). "How Much Longer Before Our First AI Catastrophe?". Gizmodo. Retrieved 27 November 2021.
  2. ^ Muehlhauser, Luke (18 October 2013). "Ben Goertzel on AGI as a Field". Machine Intelligence Research Institute. Retrieved 27 November 2021.
  3. ^ Chalfen, Mike (15 October 2015). "The Challenges Of Building AI Apps". TechCrunch. Retrieved 27 November 2021.
  4. ^ Frankish, Keith; Ramsey, William M., eds. (12 June 2014). The Cambridge Handbook of Artificial Intelligence. Cambridge, UK. p. 342. ISBN 978-0-521-87142-6. OCLC 865297798.
  5. ^ Lieto, Antonio (2021). Cognitive Design for Artificial Minds. London, UK: Routledge, Taylor & Francis. ISBN 9781138207929.
  6. ^ Goertzel, Ben (6 February 2010). "Siri, the new iPhone "AI personal assistant": Some useful niche applications, not so much AI". The Multiverse According to Ben. Retrieved 27 November 2021.