Weak AI

From Wikipedia, the free encyclopedia

Weak AI (also known as narrow AI)[1][2] is non-sentient computer intelligence that is typically focused on a single narrow task; its intelligence is limited to that task. In 2011 Singularity Hub wrote: "As robots and narrow artificial intelligences creep into roles traditionally occupied by humans, we’ve got to ask ourselves: is all this automation good or bad for the job market?"[3]

Siri is a good example of narrow intelligence. Although it is a sophisticated example of weak AI, Siri operates only within a limited, pre-defined range; it has no genuine intelligence, no self-awareness, and no life. In Forbes (2011), Ted Greenwald wrote: "The iPhone/Siri marriage represents the arrival of hybrid AI, combining several narrow AI techniques plus access to massive data in the cloud."[4] AI researcher Ben Goertzel, on his blog in 2010, stated that Siri was "VERY narrow and brittle", as evidenced by the annoying results it returns when asked questions outside the limits of the application.[5]

Some commentators think weak AI could be dangerous. In 2013 George Dvorsky stated via io9: "Narrow AI could knock out our electric grid, damage nuclear power plants, cause a global-scale economic collapse, misdirect autonomous vehicles and robots..."[6] In the following quote, the Stanford Center for Internet and Society contrasts strong AI with weak AI and notes that the growth of narrow AI already presents "real issues":

Weak or "narrow" AI, in contrast, is a present-day reality. Software controls many facets of daily life and, in some cases, this control presents real issues. One example is the May 2010 "flash crash" that caused a temporary but enormous dip in the market.[7]

— Ryan Calo, Center for Internet and Society, Stanford Law School, 30 August 2011.

The following two excerpts from Singularity Hub summarise weak (narrow) AI:

When you call the bank and talk to an automated voice you are probably talking to an AI…just a very annoying one. Our world is full of these limited AI programs which we classify as “weak” or “narrow” or “applied”. These programs are far from the sentient, love-seeking, angst-ridden artificial intelligences we see in science fiction, but that’s temporary. All these narrow AIs are like the amino acids in the primordial ooze of the Earth.[8]

We’re slowly building a library of narrow AI talents that are becoming more impressive. Speech recognition and processing allows computers to convert sounds to text with greater accuracy. Google is using AI to caption millions of videos on YouTube. Likewise, computer vision is improving so that programs like Vitamin D Video can recognize objects, classify them, and understand how they move. Narrow AI isn’t just getting better at processing its environment, it’s also understanding the difference between what a human says and what a human wants.[9]

— Aaron Saenz, Singularity Hub, 10 August 2010.

See also

  • Weak AI, an artificial intelligence system that is only intended to be applicable to a specific kind of problem (e.g. computer chess) and is not intended to display human-like intelligence in general; see strong AI
  • Weak AI hypothesis, the position in the philosophy of artificial intelligence that machines can demonstrate intelligence but do not necessarily have a mind, mental states or consciousness.

References

  1. ^ io9.com mentions narrow AI. Published 1 April 2013. Retrieved 16 February 2014. http://io9.com/how-much-longer-before-our-first-ai-catastrophe-464043243
  2. ^ AI researcher Ben Goertzel explains why he became interested in AGI instead of narrow AI. Published 18 October 2013. Retrieved 16 February 2014. http://intelligence.org/2013/10/18/ben-goertzel/
  3. ^ Published 29 March 2011. Retrieved 16 February 2014. https://singularityhub.com/2011/03/29/cnbc-gives-5-minutes-to-robots-vs-economy-debate-video/
  4. ^ Retrieved 16 February 2014. http://www.forbes.com/sites/tedgreenwald/2011/10/13/how-smart-machines-like-iphone-4s-are-quietly-changing-your-industry/
  5. ^ Ben Goertzel blog post. Published 6 February 2010. Retrieved 16 February 2014. http://multiverseaccordingtoben.blogspot.com/2010/02/siri-new-iphone-personal-assistant-some.html
  6. ^ Retrieved 16 February 2014. http://io9.com/how-much-longer-before-our-first-ai-catastrophe-464043243
  7. ^ Retrieved 16 February 2014. http://cyberlaw.stanford.edu/blog/2011/08/sorcerers-apprentice-or-why-weak-ai-interesting-enough
  8. ^ Published 10 August 2010. Retrieved 16 February 2014. https://singularityhub.com/2010/08/10/we-live-in-a-jungle-of-artificial-intelligence-that-will-spawn-sentience/
  9. ^ Published 10 August 2010. Retrieved 16 February 2014. https://singularityhub.com/2010/08/10/we-live-in-a-jungle-of-artificial-intelligence-that-will-spawn-sentience/