AI@50


[Image: AI@50Logo.png, the 2006 AI@50 logo]

AI@50, formally known as the "Dartmouth Artificial Intelligence Conference: The Next Fifty Years" (July 13–15, 2006), commemorated the 50th anniversary of the 1956 Dartmouth Conference, which effectively inaugurated the field of artificial intelligence. Five of the original ten attendees were present: Marvin Minsky, Ray Solomonoff, Oliver Selfridge, Trenchard More, and John McCarthy.[1][2]

In addition to sponsorship from Dartmouth College, General Electric, and the Frederick Whittemore Foundation, the conference received a $200,000 grant from DARPA, which called for a report of the proceedings that would:

  • Analyze progress on AI's original challenges during the first 50 years, and assess whether the challenges were "easier" or "harder" than originally thought, and why
  • Document what the AI@50 participants believe are the major research and development challenges facing this field over the next 50 years, and identify what breakthroughs will be needed to meet those challenges
  • Relate those challenges and breakthroughs to developments and trends in other areas such as control theory, signal processing, information theory, statistics, and optimization theory.

Note

Many of the historic and distinguished AI researchers invited to present papers at this conference may well deposit their taxpayer-funded papers in their individual or institutional repositories long before DARPA's official report is openly published on the Web or otherwise made freely available to the public. This page therefore exists primarily to centralize links to the authors' sites and their self-archived papers.

Conference Program and links to published papers

AI — Past, Present, Future

The Future Model of Thinking

The Future of Network Models

The Future of Learning & Search

The Future of AI

The Future of Vision

  • Eric Grimson, Intelligent Medical Image Analysis: Computer Assisted Surgery and Disease Monitoring
  • Takeo Kanade, Artificial Intelligence Vision: Progress and Non-Progress
  • Terry Sejnowski, A Critique of Pure Vision

The Future of Reasoning

  • Alan Bundy, Constructing, Selecting and Repairing Representations of Knowledge
  • Edwina Rissland, The Exquisite Centrality of Examples
  • Bart Selman, The Challenge and Promise of Automated Reasoning

The Future of Language and Cognition

The Future of the Future

AI and Games

Future Interactions with Intelligent Machines

Selected Submitted Papers: Future Strategies for AI

  • J. Storrs Hall, Self-improving AI: An Analysis[6]
  • Selmer Bringsjord, The Logicist Manifesto[7]
  • Vincent C. Müller, Is There a Future for AI Without Representation?[8]
  • Kristinn R. Thórisson, Integrated A.I. Systems

Selected Submitted Papers: Future Possibilities for AI

Notes and comments

  • Meg Houston Maker [1], conference notes:
  • AI@50 Opening [2]
  • AI — Past, Present, Future [3] — Brief abstracts of papers by John McCarthy and Marvin Minsky
  • First Polling Question [16]
  • Second Polling Question [17]
  • Third Polling Question [18]
  • Fourth Polling Question [19]
  • Fifth Polling Question [20]
  • Final Polling Question [23]

References

  1. ^ Moor, James (2006). "The Dartmouth College Artificial Intelligence Conference: The Next Fifty Years" (PDF). AI Magazine. 27 (4): 87–91.
  2. ^ Nilsson, Nils J. (2009). The Quest for Artificial Intelligence. Cambridge University Press. ISBN 0521122937. pp. 80–81.
  3. ^ Solomonoff, Ray J. (2006). "Machine Learning -- Past and Future" (PDF). Retrieved 2008-07-25.
  4. ^ Langley, Pat (2006). "Intelligent Behavior in Humans and Machines" (PDF). Retrieved 2008-07-25.
  5. ^ Kurzweil, Ray (2006-07-14). "Why We Can Be Confident of Turing Test Capability Within a Quarter Century". Retrieved 2006-07-25.
  6. ^ Hall, J. Storrs (2007). "Self-improving AI: An Analysis". Minds and Machines. 17 (3): 249–259. doi:10.1007/s11023-007-9065-3. Retrieved 2010-06-10. Self-improvement was one of the aspects of AI proposed for study in the 1956 Dartmouth conference. Turing proposed a "child machine" which could be taught in the human manner to attain adult human-level intelligence. In latter days, the contention that an AI system could be built to learn and improve itself indefinitely has acquired the label of the bootstrap fallacy. Attempts in AI to implement such a system have met with consistent failure for half a century. Technological optimists, however, have maintained that such a system is possible, producing, if implemented, a feedback loop that would lead to a rapid exponential increase in intelligence. We examine the arguments for both positions and draw some conclusions. Self-archive
  7. ^ Bringsjord, Selmer (2008). "The Logicist Manifesto: At Long Last Let Logic-Based AI Become a Field Unto Itself". Journal of Applied Logic. 6 (4): 502–525. doi:10.1016/j.jal.2008.09.001. Retrieved 2010-06-10. This paper is a sustained argument for the view that logic-based AI should become a self-contained field, entirely divorced from paradigms that are currently still included under the AI "umbrella"—paradigms such as connectionism and the continuous systems approach. The paper includes a self-contained summary of logic-based AI, as well as rebuttals to a number of objections that will inevitably be brought against the declaration of independence herein expressed. Self-archive
  8. ^ Müller, Vincent C. (2007). "Is There a Future for AI Without Representation?". Minds and Machines. 17 (1): 101–115. doi:10.1007/s11023-007-9067-1. Retrieved 2010-06-10. This paper investigates the prospects of Rodney Brooks' proposal for AI without representation. It turns out that the supposedly characteristic features of "new AI" (embodiment, situatedness, absence of reasoning, and absence of representation) are all present in conventional systems: "New AI" is just like old AI. Brooks' proposal boils down to the architectural rejection of central control in intelligent agents—which, however, turns out to be crucial. Some of the more recent cognitive science suggests that we might do well to dispose of the image of intelligent agents as central representation processors. If this paradigm shift is achieved, Brooks' proposal for cognition without representation appears promising for full-blown intelligent agents—though not for conscious agents.

External links