Eric Horvitz

Eric Joel Horvitz is an American computer scientist and a Distinguished Scientist at Microsoft, where he serves as director of Microsoft Research's main Redmond lab.[1]

Biography

Horvitz received his PhD (1990) and his MD from Stanford University.[2] His doctoral dissertation, Computation and action under bounded resources, and follow-on research introduced models of bounded rationality grounded in probability and decision theory. His doctoral advisors were Ronald A. Howard, George B. Dantzig, Edward H. Shortliffe, and Patrick Suppes.

He is currently a Distinguished Scientist at Microsoft, where he serves as director of Microsoft Research's main Redmond lab. He has been elected a Fellow of the Association for the Advancement of Artificial Intelligence (AAAI), the National Academy of Engineering (NAE), the American Academy of Arts and Sciences, and the American Association for the Advancement of Science (AAAS). He was elected to the ACM CHI Academy in 2013 and named an ACM Fellow in 2014 "for contributions to artificial intelligence, and human-computer interaction".[3]

He was awarded the Feigenbaum Prize, a biennial award for sustained and high-impact contributions to the field of artificial intelligence, in recognition of his development of computational models of perception, reflection, and action, and their application to time-critical decision making and to intelligent information, traffic, and healthcare systems.

He serves on the Scientific Advisory Committee of the Allen Institute for Artificial Intelligence (AI2) and is also chair of the Section on Information, Computing, and Communications of the American Association for the Advancement of Science (AAAS).

He has served as president of the Association for the Advancement of AI (AAAI), on the NSF Computer & Information Science & Engineering (CISE) Advisory Board, and on the council of the Computing Community Consortium (CCC).

Work

Horvitz's research interests span theoretical and practical challenges in developing systems that perceive, learn, and reason. His contributions include advances in the principles and applications of machine learning and inference, information retrieval, human-computer interaction, bioinformatics, and e-commerce.

Horvitz played a significant role in the use of probability and decision theory in artificial intelligence. His work raised the credibility of artificial intelligence in other areas of computer science and computer engineering, influencing fields ranging from human-computer interaction to operating systems, and helped establish the link between artificial intelligence and decision science. For example, he coined the concept of bounded optimality, a decision-theoretic approach to bounded rationality.[4]
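
A minimal formal sketch of the idea, using notation from common textbook treatments of bounded optimality rather than from Horvitz's own work: among the programs that can actually run on a fixed machine M, a bounded-optimal agent executes the program that maximizes expected utility in its environment E,

  \ell^{*} \;=\; \operatorname*{arg\,max}_{\ell \in \mathcal{L}_M} \; \mathbb{E}\big[\, U(\mathrm{Agent}(\ell, M),\, E) \,\big],

where \mathcal{L}_M denotes the set of programs executable on M. Optimality is thus defined over feasible programs rather than over idealized, computationally unbounded reasoners.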

He studied the use of probability and utility to guide automated reasoning for decision making. These methods include reasoning about streams of problems[5] that arrive from the environment over time. In related work, he applied probability and machine learning to identify hard problems and to guide theorem proving.[6]
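
The guiding idea can be illustrated with a toy metareasoning loop that keeps deliberating only while the estimated value of further computation exceeds the cost of the time it consumes. The sketch below is a simplified illustration of that stopping rule under assumed names (refine, anytime_decide, time_cost_per_step), not code drawn from Horvitz's papers:

  import random

  def refine(current_quality):
      """One unit of extra computation: propose a candidate decision and keep
      whichever of the current and candidate decisions scores better (here the
      'decision' is abstracted to a quality score in [0, 1])."""
      candidate_quality = random.uniform(0.0, 1.0)
      return max(current_quality, candidate_quality)

  def anytime_decide(time_cost_per_step=0.02, max_steps=1000):
      """Continue an anytime computation until the realized improvement per step
      (a crude stand-in for the expected value of computation) drops below the
      cost of the time that step consumes."""
      quality = 0.0
      steps_used = 0
      for _ in range(max_steps):
          new_quality = refine(quality)
          gain = new_quality - quality
          quality = new_quality
          steps_used += 1
          if gain <= time_cost_per_step:  # time cost now outweighs the gain
              break
      return quality, steps_used

  if __name__ == "__main__":
      final_quality, steps = anytime_decide()
      print(f"decision quality {final_quality:.3f} after {steps} step(s)")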

He has posed long-term challenge problems for AI[7] and has espoused a vision of open-world AI,[8] in which machine intelligences can understand and perform well in the larger world, where they encounter situations they have not seen before.

He has explored synergies between human and machine intelligence, developing methods that learn how people and AI systems can complement one another. He is a founder of the AAAI Conference on Human Computation and Crowdsourcing.[9]

He has co-authored probability-based methods for enhancing privacy, including a model of altruistic data sharing called community sensing[10] and an approach called stochastic privacy.[11]

Horvitz has spoken about artificial intelligence in venues including NPR and the Charlie Rose show.[12][13][14] His online talks include both technical lectures and presentations for general audiences, such as the TEDx Austin talk "Making Friends with Artificial Intelligence". His research has been featured in The New York Times and Technology Review.[15][16][17][18]

Asilomar AI Study

He served as President of the AAAI from 2007 to 2009. As AAAI President, he convened and co-chaired the Asilomar AI study, which culminated in a meeting of AI scientists at Asilomar in February 2009. The study considered the nature and timing of AI successes and reviewed concerns about directions in AI development, including the potential loss of control over computer-based intelligences, as well as efforts that could reduce those concerns and enhance long-term societal outcomes. The study was the first meeting of AI scientists to address concerns about superintelligence and loss of control of AI, and it attracted public interest.[19]

In coverage of the Asilomar study, he said that scientists must study and respond to notions of superintelligent machines and to concerns about artificial intelligence systems escaping from human control.[20] In a later NPR interview, he said that, given the cost of such outcomes, investments in scientific studies of superintelligence would be valuable for guiding proactive efforts even if people believed the probability of losing control of AI to be low.[21]

One Hundred Year Study of AI

In 2014, Horvitz and his wife defined and funded the One Hundred Year Study of Artificial Intelligence at Stanford University.[22][23] A Stanford press release stated that sets of committees over a century will "study and anticipate how the effects of artificial intelligence will ripple through every aspect of how people work, live and play." A framing memo for the study calls out 18 topics of attention, including monitoring and addressing the possibilities of superintelligence and of loss of control of AI.

Publications

Books:

  • 1990. Computation and action under bounded resources. PhD dissertation, Stanford University.
  • 1990. Toward normative expert systems: The Pathfinder project. With David Earl Heckerman and Bharat N. Nathwani. Knowledge Systems Laboratory, Stanford University.

Articles, a selection:

References

  1. ^ "Eric Horvitz: Distinguished Scientist". 
  2. ^ "Big Thinkers Event: Eric Horvitz, Machine Intelligence and the Open World". Retrieved 12 March 2011. 
  3. ^ "Eric Horvitz". ACM Fellows 2014. 
  4. ^ Mackworth, Alan (July 2008). "Introduction of Eric Horvitz" (PDF). AAAI Presidential Address. 
  5. ^ Horvitz, Eric (February 2001), "Principles and Applications of Continual Computation", Artificial Intelligence Journal (Elsevier Science) 126: 159–196 
  6. ^ Horvitz, Eric J.; Ruan, Y.; Gomes, C.; Kautz, H.; Selman, B.; Chickering, D.M. (July 2001), "A Bayesian Approach to Tackling Hard Computational Problems" (PDF), Proceedings of the Conference on Uncertainty in Artificial Intelligence: 235–244 
  7. ^ Selman, B.; Brooks, R.; Dean, T.; Horvitz, E.; Mitchell, T.; Nilsson, N. (August 1996), "Challenge Problems for Artificial Intelligence", Proceedings of AAAI-96, Thirteenth National Conference on Artificial Intelligence, Portland, Oregon (AAAI Press): 1340–1345 
  8. ^ Horvitz, Eric (July 2008), "Artificial Intelligence in the Open World", AAAI Presidential Lecture 
  9. ^ Horvitz, Eric (February 2013), "Harnessing Human Intellect for Computing" (PDF), Computing Research News 25 (2) 
  10. ^ Krause, A.; Horvitz, E.; Kansal, A.; Zhao, F. (April 2008), "Toward Community Sensing", IPSN 2008 
  11. ^ Singla, A.; Horvitz, E.; Kamar, E.; White, R.W. (July 2014), "Stochastic Privacy" (PDF), AAAI 
  12. ^ Hansen, Liane (21 March 2009). "Meet Laura, Your Virtual Personal Assistant". NPR. Retrieved 16 March 2011. 
  13. ^ Kaste, Martin (11 Jan 2011). "The Singularity: Humanity's Last Invention?". NPR. Retrieved 14 Feb 2011. 
  14. ^ Rose, Charlie. "A panel discussion about Artificial Intelligence". 
  15. ^ Markoff, John (10 April 2008). "Microsoft Introduces Tool for Avoiding Traffic Jams". New York Times. Retrieved 16 March 2011. 
  16. ^ Markoff, John (17 July 2000). "Microsoft Sees Software 'Agent' as Way to Avoid Distractions". New York Times. Retrieved 16 March 2011. 
  17. ^ Lohr, Steve; Markoff, John (24 June 2010). "Smarter Than You Think: Computers Learn to Listen, and Some Talk Back". New York Times. Retrieved 12 March 2011. 
  18. ^ Waldrop, M. Mitchell (March–April 2008). "TR10: Modeling Surprise". Technology Review. Retrieved 12 March 2011. 
  19. ^ Markoff, John (26 July 2009). "Scientists Worry Machines May Outsmart Man". New York Times. 
  20. ^ Markoff, John (26 July 2009). "Scientists Worry Machines May Outsmart Man". New York Times. 
  21. ^ Siegel, Robert (11 January 2011). "The Singularity: Humanity's Last Invention?". NPR. 
  22. ^ You, Jia (9 January 2015). "A 100-year study of artificial intelligence? Microsoft Research's Eric Horvitz explains". Science. 
  23. ^ Markoff, John (15 December 2014). "Study to Examine Effects of Artificial Intelligence". New York Times. 

External links