Computational cognition (sometimes referred to as computational cognitive science) is the study of the computational basis of learning and inference through mathematical modeling, computer simulation, and behavioral experiments. In psychology, it is an approach that develops computational models based on experimental results. It seeks to understand how humans process information.
There are two main purposes for producing artificial intelligence: to produce intelligent behaviors regardless of the quality of the results, and to model the intelligent behaviors found in nature. At the beginning of its existence, artificial intelligence was not expected to emulate human cognition. In the 1960s, economist Herbert Simon and Allen Newell attempted to formalize human problem-solving skills by using the results of psychological studies to develop programs that implement the same problem-solving techniques that people use. Their work laid the foundation for symbolic AI and computational cognition, and even contributed to advances in cognitive science and cognitive psychology.
The field of symbolic AI is based on the physical symbol systems hypothesis of Simon and Newell, which states that aspects of cognitive intelligence can be expressed through the manipulation of symbols. John McCarthy, however, focused more on the initial purpose of artificial intelligence: to break down the essence of logical and abstract reasoning, regardless of whether or not humans employ the same mechanisms.
Over the following decades, progress in artificial intelligence focused increasingly on logic-based and knowledge-based programs, veering away from the original purpose of symbolic AI. Researchers came to believe that artificial intelligence might never be able to imitate some intricate processes of human cognition, such as perception or learning, and began to take a "sub-symbolic" approach that creates intelligence without explicitly representing knowledge. This movement led to the emerging disciplines of computational modeling, connectionism, and computational intelligence.
Since it contributes more to the understanding of human cognition than artificial intelligence does, computational cognitive modeling emerged from the need to define various cognitive functions (such as motivation, emotion, or perception) by representing them in computational models of mechanisms and processes. A computational model studies a complex system by means of specific algorithms and extensive computational resources, using adjustable variables to produce computer simulations. A simulation is carried out by adjusting the variables, changing one alone or combining several, and observing the effect on the outcomes. The results help experimenters predict what would happen in the real system if similar changes occurred.
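To make the adjust-and-observe loop concrete, here is a minimal, hypothetical sketch in Python: a toy recall simulation whose decay and rehearsal parameters are invented for illustration, not drawn from any published model.

```python
import random

def simulate_recall(decay_rate, rehearsals, trials=1000, seed=0):
    """Toy simulation: recall probability rises with rehearsal and
    falls with decay. All parameters are illustrative placeholders,
    not values from any published memory model."""
    rng = random.Random(seed)
    p_recall = min(1.0, rehearsals * 0.1) * (1.0 - decay_rate)
    recalled = sum(1 for _ in range(trials) if rng.random() < p_recall)
    return recalled / trials

# Change one variable at a time and observe the effect on the outcome.
baseline = simulate_recall(decay_rate=0.2, rehearsals=5)
more_decay = simulate_recall(decay_rate=0.6, rehearsals=5)
more_rehearsal = simulate_recall(decay_rate=0.2, rehearsals=8)
```

Comparing the three runs shows the experimenter's logic: faster decay lowers simulated recall, extra rehearsal raises it, and each prediction could then be checked against behavior in the real system.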
When computational models attempt to mimic human cognitive functioning, all the details of the function must be known for them to transfer and display properly through the models, allowing researchers to thoroughly understand and test an existing theory because no variables are vague and all variables are modifiable. Consider the model of memory built by Atkinson and Shiffrin in 1968, which showed how rehearsal leads to long-term memory, where the rehearsed information is stored. Despite the advance it made in revealing the function of memory, this model fails to answer crucial questions: How much information can be rehearsed at a time? How long does it take for information to transfer from rehearsal to long-term memory? Similarly, other computational models raise more questions about cognition than they answer, making their contributions to the understanding of human cognition much less significant than those of other cognitive approaches.
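The gap described above can be made concrete with a toy sketch of the multi-store idea. The short-term capacity and per-rehearsal transfer probability below are hypothetical placeholders, precisely the quantities the original model leaves unspecified.

```python
import random

def multistore_trial(items, rehearsals, p_transfer=0.3, stm_capacity=7, seed=1):
    """Illustrative sketch of the Atkinson-Shiffrin multi-store idea:
    items enter a capacity-limited short-term store, and each rehearsal
    gives an item a chance to transfer to the long-term store. The
    transfer probability and capacity here are invented placeholders."""
    rng = random.Random(seed)
    short_term = items[:stm_capacity]   # capacity-limited buffer
    long_term = set()
    for _ in range(rehearsals):
        for item in short_term:
            if rng.random() < p_transfer:
                long_term.add(item)
    return long_term

stored = multistore_trial(list("ABCDEFGHIJ"), rehearsals=3)
```

To run the sketch at all, numeric values had to be assumed for capacity and transfer rate; this is exactly the sense in which building a computational model exposes the questions a verbal theory leaves open.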
Nevertheless, computational cognitive models can still contribute to the study of cognition, especially when combined with other research approaches, as demonstrated by John Anderson's ACT-R (Adaptive Control of Thought-Rational). ACT-R is a cognitive architecture that Anderson developed by combining the functions of computational models with the findings of cognitive neuroscience. The model is based on the theory that the brain consists of several modules that perform specialized functions separately from one another. Since it focuses only on the properties appropriate for understanding the specific cognitive function of memory, the ACT-R model is classified as a symbolic approach to cognitive science.
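As a rough illustration of the symbolic, rule-based style that architectures like ACT-R embody, the following toy production system is a sketch only; the rule format and the memory-retrieval example are invented for illustration and do not reflect ACT-R's actual syntax or modules.

```python
def run_productions(goal, rules, facts):
    """Minimal production-system sketch in the spirit of symbolic
    architectures: a rule fires when its condition matches the current
    goal, updating the goal until it reaches 'done' or no rule applies."""
    trace = []
    while goal != "done":
        for condition, action in rules:
            if condition == goal:
                trace.append(condition)
                goal = action(facts)
                break
        else:
            break  # no rule matches: impasse
    return trace

# Hypothetical rules for retrieving an addition fact from memory.
facts = {("3", "4"): "7"}
rules = [
    ("add", lambda f: "retrieve"),
    ("retrieve", lambda f: "done" if ("3", "4") in f else "fail"),
]
trace = run_productions("add", rules, facts)
```

The trace of fired rules is the symbolic analogue of a step-by-step account of how a cognitive function unfolds, which is what makes such models testable against behavioral data.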
Another approach, which deals more with the semantic content of cognitive science, is connectionism, or neural network modeling. Connectionism relies on the idea that the brain consists of simple units, or nodes, and that behavioral responses come primarily from the layers of connections between the nodes rather than from the environmental stimulus itself.
Connectionist networks differ from computational modeling specifically in two functions: neural back-propagation and parallel processing. Neural back-propagation is a method connectionist networks use to show evidence of learning. After a connectionist network produces a response, the simulated results are compared to real-life situational results. The feedback provided by the backward propagation of errors is then used to improve the accuracy of the network's subsequent responses. The second function, parallel processing, stems from the belief that knowledge and perception are not limited to specific modules but are distributed throughout the cognitive networks. The presence of parallel distributed processing has been shown in psychological demonstrations such as the Stroop effect, in which the brain seems to analyze the perception of color and the meaning of language at the same time. However, this theoretical approach has been continually challenged on the grounds that the two cognitive functions, color perception and word forming, operate separately and simultaneously rather than in parallel with each other.
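A minimal backpropagation sketch shows the mechanism described above: a small feedforward network compares its output to a target and propagates the error backward to adjust its weights. The architecture, learning rate, and XOR task are illustrative choices, not taken from any particular connectionist study.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# XOR: a classic test case for multi-layer connectionist networks.
DATA = [([0.0, 0.0], 0.0), ([0.0, 1.0], 1.0),
        ([1.0, 0.0], 1.0), ([1.0, 1.0], 0.0)]

def forward(params, x):
    """Forward pass: 2 inputs -> 2 hidden units -> 1 output."""
    w1, b1, w2, b2 = params
    h = [sigmoid(w1[j][0] * x[0] + w1[j][1] * x[1] + b1[j]) for j in range(2)]
    y = sigmoid(w2[0] * h[0] + w2[1] * h[1] + b2)
    return h, y

def total_error(params):
    return sum((forward(params, x)[1] - t) ** 2 for x, t in DATA)

def train(epochs=3000, lr=0.5, seed=42):
    rng = random.Random(seed)
    w1 = [[rng.uniform(-1, 1), rng.uniform(-1, 1)] for _ in range(2)]
    b1 = [0.0, 0.0]
    w2 = [rng.uniform(-1, 1), rng.uniform(-1, 1)]
    b2 = 0.0
    params = (w1, b1, w2, b2)
    before = total_error(params)
    for _ in range(epochs):
        for x, target in DATA:
            h, y = forward(params, x)
            # Backward propagation: push the output error back through each layer.
            dy = (y - target) * y * (1.0 - y)
            for j in range(2):
                dh = dy * w2[j] * h[j] * (1.0 - h[j])  # uses pre-update weight
                w2[j] -= lr * dy * h[j]
                w1[j][0] -= lr * dh * x[0]
                w1[j][1] -= lr * dh * x[1]
                b1[j] -= lr * dh
            b2 -= lr * dy
    return before, total_error(params)

before, after = train()
```

The comparison of the network's error before and after training is the "evidence of learning" the paragraph refers to: the backward-propagated feedback steadily reduces the mismatch between simulated and target responses.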
The field of cognition may have benefited from connectionist networks, but because of the complicated system, setting up neural network models can be quite a tedious task, and the results may be less interpretable than the system they are trying to model. Therefore, the results can serve as evidence for a broad theory of cognition without explaining the particular processes happening within the cognitive function. Other disadvantages of connectionism lie in the research methods it employs and the hypotheses it tests, which have often proven inaccurate or ineffective, taking connectionist models further from an accurate representation of how the brain functions. These issues make neural network models ineffective for studying higher forms of information processing and hinder connectionism from advancing the general understanding of human cognition.
- Berkeley Computational Cognitive Science Lab
- Flinders Artificial Intelligence and Cognitive Science Group
- MIT Computational Cognitive Science Group
- NYU Computation and Cognition Lab
- Stanford Computation and Cognition Lab
- UCI Memory and Decision Lab
- Jacob Feldman
- McCorduck, Pamela (2004). Machines Who Think (2 ed.). Natick, MA: A. K. Peters, Ltd. pp. 100–101. ISBN 1-56881-205-1.
- Haugeland, John (1985). Artificial Intelligence: The Very Idea. Cambridge, MA: MIT Press. ISBN 0-262-08153-9.
- Crevier, Daniel (1993). AI: The Tumultuous Search for Artificial Intelligence. New York, NY: BasicBooks. pp. 145–215. ISBN 0-465-02997-3.
- Sun, Ron (2008). "Introduction to computational cognitive modeling". The Cambridge Handbook of Computational Psychology. New York: Cambridge University Press. ISBN 978-0521674102.
- "Stanford Encyclopedia of Philosophy, Computer Simulations in Science".
- "National Institute of Biomedical Imaging and Bioengineering, Computational Modeling".
- Eysenck, Michael (2012). Fundamentals of Cognition. New York, NY: Psychology Press. ISBN 978-1848720718.
- Polk, Thad; Seifert, Colleen (2002). Cognitive Modeling. Cambridge, MA: MIT Press. ISBN 0-262-66116-0.
- Anderson, James; Pellionisz, Andras; Rosenfeld, Edward (1993). Neurocomputing 2: Directions for Research. Cambridge, MA: MIT Press. ISBN 978-0262510752.
- Rumelhart, David; McClelland, James (1986). Parallel distributed processing, Vol. 1: Foundations. Cambridge, MA: MIT Press. ASIN B008Q6LHXE.
- Cohen, Jonathan; Dunbar, Kevin; McClelland, James (1990). "On The Control Of Automatic Processes: A Parallel Distributed Processing Account Of The Stroop Effect". Psychological Review. 97 (3): 332–361. doi:10.1037/0033-295x.97.3.332.
- Garson, James; Zalta, Edward (Spring 2015). "Connectionism". The Stanford Encyclopedia of Philosophy. Stanford University.