Soar (cognitive architecture)
Soar is a cognitive architecture, created by John Laird, Allen Newell, and Paul Rosenbloom at Carnegie Mellon University, now maintained by John Laird's research group at the University of Michigan. It is both a view of what cognition is and an implementation of that view through a computer programming architecture for artificial intelligence (AI). Since its beginnings in 1983 and its presentation in a paper in 1987, it has been widely used by AI researchers to model different aspects of human behavior.
The main goal of the Soar project is to handle the full range of capabilities of an intelligent agent, from highly routine tasks to extremely difficult, open-ended problems. On the view underlying Soar, this requires the ability to create representations and to use appropriate forms of knowledge (such as procedural, declarative, and episodic); Soar must therefore address a collection of mechanisms of the mind. Also underlying the Soar architecture is the view that a symbolic system is essential for general intelligence (see the brief comment on neats versus scruffies), a position known as the physical symbol system hypothesis. The views of cognition underlying Soar are tied to the psychological theory expressed in Allen Newell's book, Unified Theories of Cognition.
While symbol processing remains the core mechanism in the architecture, recent versions of the theory incorporate non-symbolic representations and processes, including reinforcement learning, imagery processing, and emotion modeling (Laird, 2008). Soar's capabilities have always included a mechanism for creating new representations, by a process known as "chunking". Ultimately, Soar's goal is to achieve general intelligence, though this is acknowledged to be an ambitious and possibly very long-term goal.
Soar is based on a production system, and uses explicit production rules to govern its behavior (these are roughly of the form "if... then...", as also used in expert systems). Problem solving in Soar is modeled as search through a problem space (the collection of different states reachable by the system at a particular time) for a goal state (which represents the solution to the problem). This is implemented by searching for states that bring the system gradually closer to its goal. Each move consists of a decision cycle, which has an elaboration phase (in which a variety of pieces of knowledge bearing on the problem are brought into Soar's working memory) and a decision procedure (which weighs what was found in the previous phase and assigns preferences to ultimately decide the action to be taken). In addition to problem-space search, however, Soar can be used to instantiate reasoning techniques, such as reinforcement learning, that do not require detailed internal models of the environment. In this way, Soar behaves flexibly when varying amounts of task knowledge are available.
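The decision cycle described above can be illustrated with a minimal sketch. The rule format, state representation, and numeric preference scheme below are simplified assumptions for illustration, not Soar's actual data structures:

```python
# Sketch of a Soar-style decision cycle (illustrative assumptions only).
# Elaboration fires all matching rules until quiescence, collecting
# operator proposals and preferences into working memory; the decision
# procedure then weighs those preferences to select one operator.

def elaboration(working_memory, rules):
    """Fire every matching rule until no new preferences appear."""
    changed = True
    while changed:
        changed = False
        for condition, proposal in rules:
            if condition(working_memory) and proposal not in working_memory["preferences"]:
                working_memory["preferences"].append(proposal)
                changed = True
    return working_memory

def decision(working_memory):
    """Select the operator with the strongest preference.
    Returning None models an impasse: no operator can be chosen."""
    prefs = working_memory["preferences"]
    if not prefs:
        return None
    return max(prefs, key=lambda p: p["value"])["operator"]

# Hypothetical task: move a counter from 3 toward a goal of 14.
wm = {"state": 3, "goal": 14, "preferences": []}
rules = [
    (lambda w: w["state"] < w["goal"],
     {"operator": "add-1", "value": 1}),
    (lambda w: w["goal"] - w["state"] >= 10,
     {"operator": "add-10", "value": 2}),
]

wm = elaboration(wm, rules)
chosen = decision(wm)
print(chosen)  # "add-10": both rules fire, but add-10 carries the stronger preference
```

Each pass through elaboration and decision corresponds to one move through the problem space; repeating the cycle on the updated state would carry the system toward the goal.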
SOAR originally stood for State, Operator And Result, reflecting this representation of problem solving as the application of an operator to a state to get a result. According to the project FAQ, the Soar development community no longer regards Soar as an acronym, so it is no longer spelled in all capitals, though the name still reflects the core of the implementation.
If the decision procedure just described cannot determine a unique course of action, Soar may use different strategies, known as weak methods, to resolve the impasse. These methods are appropriate to situations in which knowledge is not abundant. Examples include means-ends analysis (which may calculate the difference between each available option and the goal state) and a type of hill climbing. When a solution is found by one of these methods, Soar uses a learning technique called chunking to transform the course of action taken into a new rule. The new rule can then be applied whenever Soar encounters the situation again (that is, there will no longer be an impasse).
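The interplay of an impasse, a weak method, and chunking can be sketched as follows. This is an illustrative toy, not Soar's actual mechanism: the weak method here is a simple means-ends comparison, and "chunks" are modeled as a cached situation-to-operator table:

```python
# Sketch of resolving an impasse with a weak method, then "chunking"
# the result into a new rule (illustrative assumptions, not Soar's
# actual implementation).

def means_ends(state, goal, operators):
    """Weak method: pick the operator whose result minimizes the
    remaining difference to the goal (a form of means-ends analysis)."""
    return min(operators, key=lambda op: abs(goal - op(state)))

learned_rules = {}  # situation -> operator; stands in for learned chunks

def decide(state, goal, operators):
    situation = (state, goal)
    if situation in learned_rules:
        # A chunk already covers this situation: no impasse, no search.
        return learned_rules[situation]
    # Impasse: no rule applies, so fall back to the weak method...
    chosen = means_ends(state, goal, operators)
    # ...and chunk the outcome into a new rule for next time.
    learned_rules[situation] = chosen
    return chosen

add1 = lambda s: s + 1
add10 = lambda s: s + 10

op = decide(3, 14, [add1, add10])   # impasse resolved by means-ends analysis
print(op(3))                         # 13: add10 leaves the smaller difference
op_again = decide(3, 14, [add1, add10])  # same situation: the chunk fires directly
```

The second call returns the cached operator without invoking the weak method, mirroring how a chunk prevents the same impasse from recurring.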
- Laird, John E. (2012). The Soar Cognitive Architecture. MIT Press. ISBN 978-0262122962.
- Newell, Allen (December 1990). Unified Theories of Cognition. Harvard University Press. ISBN 978-0674920996.
- Laird, Rosenbloom, and Newell (1987). "Soar: An Architecture for General Intelligence". Artificial Intelligence, 33: 1–64.
- Old Soar FAQ.
- Laird (2008). "Extending the Soar Cognitive Architecture".
- Lehman, Laird, and Rosenbloom (2006). "A Gentle Introduction to Soar: 2006 update".
- Rosenbloom, Laird, and Newell (1993). The Soar Papers: Readings on Integrated Intelligence. Information Sciences Institute.