
AI@50, formally known as the "Dartmouth Artificial Intelligence Conference: The Next Fifty Years" (July 13–15, 2006), was a conference organized by James Moor, commemorating the 50th anniversary of the Dartmouth workshop, which effectively inaugurated the field of artificial intelligence research. Five of the original ten attendees were present: Marvin Minsky, Ray Solomonoff, Oliver Selfridge, Trenchard More, and John McCarthy.[1]

The conference was sponsored by Dartmouth College, General Electric, and the Frederick Whittemore Foundation; it also received a $200,000 grant from the Defense Advanced Research Projects Agency (DARPA), which called for a report of the proceedings that would:

  • Analyze progress on AI's original challenges during the first 50 years, and assess whether the challenges were "easier" or "harder" than originally thought, and why
  • Document what the AI@50 participants believe are the major research and development challenges facing this field over the next 50 years, and identify what breakthroughs will be needed to meet those challenges
  • Relate those challenges and breakthroughs against developments and trends in other areas such as control theory, signal processing, information theory, statistics, and optimization theory.[2]

A summary report by the conference director, James Moor, was published in AI Magazine.[3]

Conference Program and links to published papers

AI: Past, Present, Future

The Future Model of Thinking

The Future of Network Models

The Future of Learning & Search

The Future of AI

The Future of Vision

  • Eric Grimson, Intelligent Medical Image Analysis: Computer Assisted Surgery and Disease Monitoring
  • Takeo Kanade, Artificial Intelligence Vision: Progress and Non-Progress
  • Terry Sejnowski, A Critique of Pure Vision

The Future of Reasoning

  • Alan Bundy, Constructing, Selecting and Repairing Representations of Knowledge
  • Edwina Rissland, The Exquisite Centrality of Examples
  • Bart Selman, The Challenge and Promise of Automated Reasoning

The Future of Language and Cognition

The Future of the Future

AI and Games

Future Interactions with Intelligent Machines

Selected Submitted Papers: Future Strategies for AI

Selected Submitted Papers: Future Possibilities for AI

References

  1. ^ Nilsson, Nils J. (2009). The Quest for Artificial Intelligence. Cambridge University Press. pp. 80–81. ISBN 978-0-521-12293-1.
  2. ^ Knapp, Susan (2006-07-06). "Dartmouth receives grant from DARPA to support AI@50 conference". Dartmouth College Office of Public Affairs. Archived from the original on 2010-06-07. Retrieved 2010-06-11.
  3. ^ Moor, James (2006). "The Dartmouth College Artificial Intelligence Conference: The Next Fifty Years" (PDF). AI Magazine. 27 (4): 87–91. ISSN 0738-4602.
  4. ^ Knapp, Susan (2006-07-24). "Artificial Intelligence: Past, Present, and Future". Vox of Dartmouth. Retrieved 2010-06-11.
  5. ^ Russell, Stuart (2006-07-12). "The Approach of Modern AI". Archived from the original (PPT) on 2012-03-24. Retrieved 2010-06-11.
  6. ^ Solomonoff, Ray J. (2006). "Machine Learning -- Past and Future" (PDF). Retrieved 2008-07-25.
  7. ^ Langley, Pat (2006). "Intelligent Behavior in Humans and Machines" (PDF). Retrieved 2008-07-25.
  8. ^ Kurzweil, Ray (14 July 2006). "Why We Can Be Confident of Turing Test Capability Within a Quarter Century". Archived from the original on 10 August 2006. Retrieved 25 July 2006.
  9. ^ Hall, J. Storrs (2007). "Self-improving AI: An Analysis". Minds and Machines. 17 (3): 249–259. doi:10.1007/s11023-007-9065-3. Self-improvement was one of the aspects of AI proposed for study in the 1956 Dartmouth conference. Turing proposed a “child machine” which could be taught in the human manner to attain adult human-level intelligence. In latter days, the contention that an AI system could be built to learn and improve itself indefinitely has acquired the label of the bootstrap fallacy. Attempts in AI to implement such a system have met with consistent failure for half a century. Technological optimists, however, have maintained that such a system is possible, producing, if implemented, a feedback loop that would lead to a rapid exponential increase in intelligence. We examine the arguments for both positions and draw some conclusions. Self-archive Archived 2010-02-15 at the Wayback Machine
  10. ^ Bringsjord, Selmer (December 2008). "The Logicist Manifesto: At Long Last Let Logic-Based AI Become a Field Unto Itself". Journal of Applied Logic. 6 (4): 502–525. doi:10.1016/j.jal.2008.09.001. This paper is a sustained argument for the view that logic-based AI should become a self-contained field, entirely divorced from paradigms that are currently still included under the AI “umbrella”—paradigms such as connectionism and the continuous systems approach. The paper includes a self-contained summary of logic-based AI, as well as rebuttals to a number of objections that will inevitably be brought against the declaration of independence herein expressed. Self-archive
  11. ^ Müller, Vincent C. (March 2007). "Is There a Future for AI Without Representation?". Minds and Machines. 17 (1): 101–115. doi:10.1007/s11023-007-9067-1. This paper investigates the prospects of Rodney Brooks’ proposal for AI without representation. It turns out that the supposedly characteristic features of “new AI” (embodiment, situatedness, absence of reasoning, and absence of representation) are all present in conventional systems: “New AI” is just like old AI. Brooks’ proposal boils down to the architectural rejection of central control in intelligent agents—which, however, turns out to be crucial. Some of the more recent cognitive science suggests that we might do well to dispose of the image of intelligent agents as central representation processors. If this paradigm shift is achieved, Brooks’ proposal for cognition without representation appears promising for full-blown intelligent agents—though not for conscious agents. Self-archive Archived 2009-11-17 at the Wayback Machine
  12. ^ Thórisson, Kristinn R. (March 2007). "Integrated A.I. systems". Minds and Machines. 17 (1): 11–25. doi:10.1007/s11023-007-9055-5. The broad range of capabilities exhibited by humans and animals is achieved through a large set of heterogeneous, tightly integrated cognitive mechanisms. To move artificial systems closer to such general-purpose intelligence we cannot avoid replicating some subset—quite possibly a substantial portion—of this large set. Progress in this direction requires that systems integration be taken more seriously as a fundamental research problem. In this paper I make the argument that intelligence must be studied holistically. I present key issues that must be addressed in the area of integration and propose solutions for speeding up rate of progress towards more powerful, integrated A.I. systems, including (a) tools for building large, complex architectures, (b) a design methodology for building realtime A.I. systems and (c) methods for facilitating code sharing at the community level.
  13. ^ Steinhart, Eric (October 2007). "Survival as a Digital Ghost". Minds and Machines. 17 (3): 261–271. doi:10.1007/s11023-007-9068-0. You can survive after death in various kinds of artifacts. You can survive in diaries, photographs, sound recordings, and movies. But these artifacts record only superficial features of yourself. We are already close to the construction of programs that partially and approximately replicate entire human lives (by storing their memories and duplicating their personalities). A digital ghost is an artificially intelligent program that knows all about your life. It is an animated auto-biography. It replicates your patterns of belief and desire. You can survive after death in a digital ghost. We discuss a series of digital ghosts over the next 50 years. As time goes by and technology advances, they are progressively more perfect replicas of the lives of their original authors.
  14. ^ Schmidt, Colin T. A. (October 2007). "Children, Robots and... the Parental Role". Minds and Machines. 17 (3): 273–286. doi:10.1007/s11023-007-9069-z. The raison d’être of this article is that many a spry-eyed analyst of the works in intelligent computing and robotics fail to see the essential concerning applications development, that of expressing their ultimate goal. Alternatively, they fail to state it suitably for the lesser-informed public eye. The author does not claim to be able to remedy this. Instead, the visionary investigation offered couples learning and computing with other related fields as part of a larger spectre to fully simulate people in their embodied image. For the first time, the social roles attributed to the technical objects produced are questioned, and so with a humorous illustration.
  15. ^ Anderson, Michael; Susan Leigh Anderson (March 2007). "The status of machine ethics: a report from the AAAI Symposium". Minds and Machines. 17 (1): 1–10. doi:10.1007/s11023-007-9053-7. This paper is a summary and evaluation of work presented at the AAAI 2005 Fall Symposium on Machine Ethics that brought together participants from the fields of Computer Science and Philosophy to the end of clarifying the nature of this newly emerging field and discussing different approaches one could take towards realizing the ultimate goal of creating an ethical machine.
  16. ^ Guarini, Marcello (March 2007). "Computation, Coherence, and Ethical Reasoning". Minds and Machines. 17 (1): 27–46. doi:10.1007/s11023-007-9056-4. Theories of moral, and more generally, practical reasoning sometimes draw on the notion of coherence. Admirably, Paul Thagard has attempted to give a computationally detailed account of the kind of coherence involved in practical reasoning, claiming that it will help overcome problems in foundationalist approaches to ethics. The arguments herein rebut the alleged role of coherence in practical reasoning endorsed by Thagard. While there are some general lessons to be learned from the preceding, no attempt is made to argue against all forms of coherence in all contexts. Nor is the usefulness of computational modelling called into question. The point will be that coherence cannot be as useful in understanding moral reasoning as coherentists may think. This result has clear implications for the future of Machine Ethics, a newly emerging subfield of AI.

External links

Notes and comments

Conference blogger Meg Houston Maker provided on-the-scene coverage of the conference, including entries on: