Eric Horvitz

From Wikipedia, the free encyclopedia

Eric Joel Horvitz (/ˈhɔːrvɪts/) is an American computer scientist and Technical Fellow at Microsoft, where he serves as director of Microsoft Research Labs, including research centers in Redmond, Washington; Cambridge, Massachusetts; New York City; Montreal, Canada; Cambridge, United Kingdom; and Bangalore, India.[1][2]


Horvitz received his PhD (1990) and his MD from Stanford University.[3] His doctoral dissertation, Computation and action under bounded resources,[4] and follow-on research introduced models of bounded rationality grounded in probability and decision theory. He did his doctoral work under advisors Ronald A. Howard, George B. Dantzig, Edward H. Shortliffe, and Patrick Suppes.

He is currently Technical Fellow at Microsoft, where he serves as director of Microsoft Research Labs and Microsoft Research AI. He has been elected a Fellow of the Association for the Advancement of Artificial Intelligence (AAAI) and of the American Association for the Advancement of Science (AAAS), and a member of the National Academy of Engineering (NAE) and the American Academy of Arts and Sciences. He was elected to the ACM CHI Academy in 2013 and named an ACM Fellow in 2014 "for contributions to artificial intelligence, and human-computer interaction."[5]

In 2015, he was awarded the AAAI Feigenbaum Prize,[6] a biennial award for sustained, high-impact contributions to the field of artificial intelligence through the development of computational models of perception, reflection, and action, and their application to time-critical decision making and to intelligent information, traffic, and healthcare systems.

In 2015, he was also awarded the ACM-AAAI Allen Newell Award.[7]

He serves on the Scientific Advisory Committee of the Allen Institute for Artificial Intelligence (AI2) and is also chair of the Section on Information, Computing, and Communications of the American Association for the Advancement of Science (AAAS).

He has served as president of the Association for the Advancement of Artificial Intelligence (AAAI), on the NSF Computer & Information Science & Engineering (CISE) Advisory Board, and on the council of the Computing Community Consortium (CCC).

He was elected to the American Philosophical Society in 2018.[8]


Horvitz's research interests span theoretical and practical challenges with developing systems that perceive, learn, and reason. His contributions include advances in principles and applications of machine learning and inference, information retrieval, human-computer interaction, bioinformatics, and e-commerce.

Horvitz played a significant role in the use of probability and decision theory in artificial intelligence. His work raised the credibility of artificial intelligence in other areas of computer science and computer engineering, influencing fields ranging from human-computer interaction to operating systems. His research helped establish the link between artificial intelligence and decision science. As an example, he coined the concept of bounded optimality, a decision-theoretic approach to bounded rationality.[9]

He studied the use of probability and utility to guide automated reasoning for decision making, including methods for solving streams of problems that arrive over time.[10] In related work, he applied probability and machine learning to identify hard problems and to guide theorem proving.[11]

He has issued long-term challenge problems for AI[12] and has espoused a vision of open-world AI,[13] in which machine intelligences can understand and perform well in the larger world, even when they encounter situations they have not seen before.

He has explored synergies between human and machine intelligence, developing methods that learn how people and AI systems complement one another. He is a founder of the AAAI Conference on Human Computation and Crowdsourcing (HCOMP).[14]

He co-authored probability-based methods to enhance privacy, including a model of altruistic sharing of data called community sensing[15] and stochastic privacy.[16]

Horvitz speaks publicly on artificial intelligence, including appearances on NPR and Charlie Rose.[17][18][19] His online talks include both technical lectures and presentations for general audiences (TEDx Austin: Making Friends with Artificial Intelligence). His research has been featured in The New York Times and MIT Technology Review.[20][21][22][23] He is interviewed in Do You Trust This Computer?, a 2018 documentary on artificial intelligence.

AI and Society

Asilomar AI Study

He served as president of the AAAI from 2007 to 2009. In that role, he convened and co-chaired the Asilomar AI study, which culminated in a meeting of AI scientists at Asilomar in February 2009. The study considered the nature and timing of AI successes, reviewed concerns about directions in AI development, including the potential loss of control over computer-based intelligences, and examined efforts that could reduce those concerns and enhance long-term societal outcomes. It was the first meeting of AI scientists to address concerns about superintelligence and loss of control of AI, and it attracted public interest.[24]

In coverage of the Asilomar study, he said that scientists must study and respond to notions of superintelligent machines and to concerns about artificial intelligence systems escaping human control.[24] In a later NPR interview, he said that investment in scientific study of superintelligences would be valuable for guiding proactive efforts, even for those who believe the probability of losing control of AI is low, given the high cost of such outcomes.[25]

One Hundred Year Study on Artificial Intelligence

In 2014, Horvitz and his wife defined and funded the One Hundred Year Study on Artificial Intelligence at Stanford University.[26][27] According to Horvitz, the gift, which may grow in the future, is sufficient to fund the study for a century.[27] A Stanford press release stated that committees convened over the century will "study and anticipate how the effects of artificial intelligence will ripple through every aspect of how people work, live and play." A framing memo for the study calls out 18 topics, including monitoring and addressing the possibility of superintelligences and loss of control of AI.

In 2014, the committee that chose the panelists to write the study's reports was led by Horvitz[27][28] and included:[27]

  • Russ Altman, a Stanford bioengineering and computer science professor
  • Yoav Shoham, a Stanford computer science professor
  • Deirdre K. Mulligan, a lawyer and a professor in the School of Information at the University of California, Berkeley
  • Tom M. Mitchell, the chairman of the machine learning department at Carnegie Mellon University
  • Barbara J. Grosz, a Harvard computer science professor
  • Alan Mackworth, a University of British Columbia computer science professor

Horvitz's white paper outlining the project suggested 18 areas for consideration, including law, ethics, the economy, war, and crime.[27][29]

As of 2016, the panel consists of 17 tech leaders, including executives from Google, Facebook, Microsoft, and IBM, and is chaired by Peter Stone of the University of Texas at Austin.[30][31]

2016 Report

The 2015 study panel released a report in September 2016, titled "Artificial Intelligence and Life in 2030", examining the mostly positive effects of AI on a typical North American city where self-driving cars, package-delivering robots, and surveillance drones become commonplace.[31] The panel advocated increased public and private spending on the industry, recommended greater AI expertise at all levels of government, and recommended against blanket government regulation.[30][32] Panel chair Peter Stone argues that AI will not automatically replace human workers but will instead supplement the workforce and create new jobs in tech maintenance.[30] Panelists stated that military AI applications were outside their current scope and expertise;[27] in Foreign Policy, Mark Hagerott asked the panelists to give more consideration to security in the future, complaining that "AI and cyber crime get (only) passing reference in the context of credit card crime in this report".[33]

While focusing mainly on the next fifteen years, the report dismissed the idea that superintelligent AI would ever be a threat to humanity, making the controversial claim that superhuman robots are probably not "even possible".[32][34] Stone stated that "it was a conscious decision not to give credence to this in the report."[27]

Founding of Partnership on AI

He served as founding co-chair of the Partnership on AI, a non-profit, multi-stakeholder organization founded and funded by Amazon, Facebook, DeepMind, Google, IBM, and Microsoft. A September 2016 press release states that the organization will be guided by balanced leadership that includes "academics, non-profits, and specialists in policy and ethics." Its stated goals include conducting research, recommending best practices, and publishing in "areas such as ethics, fairness, and inclusivity; transparency, privacy, and interoperability; collaboration between people and AI systems; and the trustworthiness, reliability, and robustness" of AI technologies.

Selected publications



  • 1990. Computation and action under bounded resources.
  • 1990. Toward normative expert systems: The Pathfinder project. With David Earl Heckerman and Bharat N. Nathwani. Knowledge Systems Laboratory, Stanford University.

Articles, a selection:

  • Horvitz, Eric J. (July 2008), "Artificial Intelligence in the Open World", AAAI Presidential Lecture, Chicago, Ill.
  • Horvitz, E. (February 2001), "Principles and Applications of Continual Computation", Artificial Intelligence, 126: 159–196
  • Horvitz, Eric J.; Breese, John S.; Henrion, Max (1988), "Decision theory in expert systems and artificial intelligence" (PDF), International Journal of Approximate Reasoning, 2 (3): 247–302
  • Henrion, Max; Breese, John S.; Horvitz, Eric J. (1991), "Decision analysis and expert systems", AI Magazine, 12 (4): 64
  • Shwe, Michael A.; et al. (1991), "Probabilistic diagnosis using a reformulation of the INTERNIST-1/QMR knowledge base", Methods of Information in Medicine, 30 (4): 241–255
  • Horvitz, Eric; Cooper, G.F.; Heckerman, D. (August 1989), "Reflection and action under scarce resources: Theoretical principles and empirical study", Proceedings of the IJCAI: 1121–1127
  • Horvitz, Eric J. (August 1988), "Reasoning under varying and uncertain resource constraints", Proceedings of AAAI: 111–116
  • Horvitz, Eric J. (1987), Reasoning about beliefs and actions under computational resource constraints, arXiv:1304.2759, Bibcode:2013arXiv1304.2759H
  • Horvitz, E. (May 1999), "Principles of Mixed-Initiative User Interfaces", Proceedings of CHI 1999, Pittsburgh, Penn.
  • Kamar, E.; Hacker, S.; Horvitz, E. (June 2012), "Combining Human and Machine Intelligence in Large-scale Crowdsourcing" (PDF), AAMAS 2012, Valencia, Spain

References


  1. ^ "Eric Horvitz: Distinguished Scientist".
  2. ^ Gershgorn, Dave. "Microsoft's new head of research has spent his career building powerful AI—and making sure it's safe". Quartz. Retrieved 2017-08-25.
  3. ^ [dead link]
  4. ^
  5. ^ "Eric Horvitz". ACM Fellows. 2014.
  6. ^ "The AAAI Feigenbaum Prize". AAAI. Retrieved 14 April 2016.
  7. ^ "ERIC HORVITZ - Award Winner". ACM. Retrieved 27 April 2016.
  8. ^ "Election of New Members at the 2018 Spring Meeting | American Philosophical Society".
  9. ^ Mackworth, Alan (July 2008). "Introduction of Eric Horvitz" (PDF). AAAI Presidential Address.
  10. ^ Horvitz, Eric (February 2001), "Principles and Applications of Continual Computation", Artificial Intelligence, 126: 159–196
  11. ^ Horvitz, Eric J.; Ruan, Y.; Gomes, C.; Kautz, H.; Selman, B.; Chickering, D.M. (July 2001), "A Bayesian Approach to Tackling Hard Computational Problems" (PDF), Proceedings of the Conference on Uncertainty and Artificial Intelligence: 235–244
  12. ^ Selman, B.; Brooks, R.; Dean, T.; Horvitz, E.; Mitchell, T.; Nilsson, N. (August 1996), "Challenge Problems for Artificial Intelligence", Proceedings of AAAI-96, Thirteenth National Conference on Artificial Intelligence, Portland, Oregon: 1340–1345
  13. ^ Horvitz, Eric (July 2008), "Artificial Intelligence in the Open World", AAAI Presidential Lecture
  14. ^ Horvitz, Eric (February 2013), "Harnessing Human Intellect for Computing" (PDF), Computing Research News, 25 (2)
  15. ^ Krause, A.; Horvitz, E.; Kansal, A.; Zhao, F. (April 2008), "Toward Community Sensing", Ipsn 2008
  16. ^ Singla, A.; Horvitz, E.; Kamar, E.; White, R.W. (July 2014), "Stochastic Privacy" (PDF), AAAI, arXiv:1404.5454, Bibcode:2014arXiv1404.5454S
  17. ^ Hansen, Liane (21 March 2009). "Meet Laura, Your Virtual Personal Assistant". NPR. Retrieved 16 March 2011.
  18. ^ Kaste, Martin (11 Jan 2011). "The Singularity: Humanity's Last Invention?". NPR. Retrieved 14 Feb 2011.
  19. ^ Rose, Charlie. "A panel discussion about Artificial Intelligence".
  20. ^ Markoff, John (10 April 2008). "Microsoft Introduces Tool for Avoiding Traffic Jams". New York Times. Retrieved 16 March 2011.
  21. ^ Markoff, John (17 July 2000). "Microsoft Sees Software 'Agent' as Way to Avoid Distractions". New York Times. Retrieved 16 March 2011.
  22. ^ Lohr, Steve, and Markoff, John (24 June 2010). "Smarter Than You Think: Computers Learn to Listen, and Some Talk Back". New York Times. Retrieved 12 March 2011.
  23. ^ Waldrop, M. Mitchell (March–April 2008). "TR10: Modeling Surprise". Technology Review. Retrieved 12 March 2011.
  24. ^ a b Markoff, John (26 July 2009). "Scientists Worry Machines May Outsmart Man". The New York Times.
  25. ^ Siegel, Robert (11 January 2011). "The Singularity: Humanity's Last Invention?". NPR.
  26. ^ You, Jia (9 January 2015). "A 100-year study of artificial intelligence? Microsoft Research's Eric Horvitz explains". Science.
  27. ^ a b c d e f g Markoff, John (15 December 2014). "Study to Examine Effects of Artificial Intelligence". The New York Times. Retrieved 1 October 2016.
  28. ^ Markoff, John (1 September 2016). "How Tech Giants Are Devising Real Ethics for Artificial Intelligence". The New York Times. Retrieved 1 October 2016.
  29. ^ "One-Hundred Year Study of Artificial Intelligence: Reflections and Framing". Eric Horvitz. 2014. Retrieved 1 October 2016.
  30. ^ a b c d Dussault, Joseph (4 September 2016). "AI in the real world: Tech leaders consider practical issues". Christian Science Monitor. Retrieved 1 October 2016.
  31. ^ a b "Report: Artificial intelligence to transform urban cities". Houston Chronicle. 1 September 2016. Retrieved 1 October 2016.
  32. ^ a b Peter Stone et al. "Artificial Intelligence and Life in 2030." One Hundred Year Study on Artificial Intelligence: Report of the 2015–2016 Study Panel, Stanford University, Stanford, CA, September 2016. Accessed October 1, 2016.
  33. ^ Mark Hagerott (9 September 2016). "The problem with the Stanford report's sanguine estimate on artificial intelligence". Foreign Policy. Retrieved 1 October 2016.
  34. ^ Knight, Will (1 September 2016). "Artificial intelligence wants to be your bro, not your foe". MIT Technology Review. Retrieved 1 October 2016.

External links