Language of thought hypothesis

From Wikipedia, the free encyclopedia

In philosophy of mind, the language of thought hypothesis (LOTH), put forward by the American philosopher Jerry Fodor, describes thoughts as represented in a "language" (sometimes known as mentalese) that allows complex thoughts to be built up by combining simpler thoughts in various ways. In its most basic form, the theory states that thought follows the same rules as language: thought has syntax.

Drawing on empirical data from linguistics and cognitive science to describe mental representation from a philosophical vantage point, the hypothesis states that thinking takes place in a language of thought (LOT): cognition and cognitive processes are only "remotely plausible" when expressed as a system of representations that is "tokened" by a linguistic or semantic structure and operated upon by means of a combinatorial syntax.[1] Linguistic tokens used in this mental language describe elementary concepts, which are operated upon by logical rules establishing causal connections that allow for complex thought. Both the syntax and the semantics have a causal effect on the properties of this system of mental representations.

These mental representations are not present in the brain in the same way as symbols are present on paper; rather, the LOT is supposed to exist at the cognitive level, the level of thoughts and concepts. The LOTH has wide-ranging significance for a number of domains in cognitive science. It relies on a version of functionalist materialism, which holds that mental representations are actualized and modified by the individual holding the propositional attitude, and it challenges eliminative materialism and connectionism. It implies a strongly rationalist model of cognition in which many of the fundamentals of cognition are innate.[2][3][4]


The hypothesis applies to thoughts that have propositional content, and is not meant to describe everything that goes on in the mind. It appeals to the representational theory of thought to explain what those tokens actually are and how they behave: there must be a mental representation that stands in some unique relationship with the subject of the representation and has specific content. Complex thoughts get their semantic content from the content of their basic constituent thoughts and the relations those constituents bear to one another. Thoughts can only relate to each other in ways that do not violate the syntax of thought, and that syntax can be expressed in first-order predicate calculus.

The thought "John is tall" is clearly composed of two sub-parts, the concept of John and the concept of tallness, combined in a manner that may be expressed in first-order predicate calculus as a predicate 'T' ("is tall") that holds of the entity 'j' (John). A fully articulated proposal for a LOT would have to take into account greater complexities, such as quantification and propositional attitudes (the various attitudes people can have towards statements; for example, I might believe, or see, or merely suspect that John is tall).
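As a sketch, the example and the complexities mentioned above might be written in first-order notation as follows; the symbols T, j, i, and the Believes operator are illustrative choices, not fixed notation from any particular LOT proposal:

```latex
% "John is tall": the predicate T ("is tall") holds of the entity j (John)
T(j)

% Quantification, e.g. "everyone is tall":
\forall x\, T(x)

% A propositional attitude, e.g. "i believes that John is tall",
% treating belief as a hypothetical two-place attitude operator:
\mathrm{Believes}(i,\, T(j))
```

The last line shows why attitudes complicate the picture: the embedded thought T(j) occurs as a constituent of a larger thought rather than as a free-standing assertion.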


Presuppositions[edit]

1. There can be no higher cognitive processes without mental representation. The only plausible psychological models represent higher cognitive processes as representational and computational; thought needs a representational system as an object upon which to compute. We must therefore attribute a representational system to organisms for cognition and thought to occur.

2. There is a causal relationship between our intentions and our actions. Because mental states are structured in a way that causes our intentions to manifest themselves in what we do, there is a connection between how we view the world and ourselves and what we do.


Objections[edit]

Some philosophers have argued that our public language is our mental language: a person who speaks English thinks in English. Others contend that people who do not know a public language (e.g. babies, aphasics) can think, and that therefore some form of mentalese must be present innately.[citation needed]

The notion that mental states are causally efficacious and representational diverges from the views of behaviorists like Gilbert Ryle, who held that there is no break between the cause of a mental state and the effect of behavior; rather, Ryle proposed that people act in some way because they are in a disposition to act in that way. An objection to this point comes from John Searle in the form of biological naturalism, a nonrepresentational theory of mind that accepts the causal efficacy of mental states. Searle divides intentional states into low-level brain activity and high-level mental activity: the lower-level, nonrepresentational neurophysiological processes, rather than some higher-level mental representation, have causal power over intention and behavior.[citation needed]

Tim Crane, in his book The Mechanical Mind,[5] states that, while he agrees with Fodor, his reasons are very different. One logical objection challenges LOTH's explanation of how sentences in natural languages get their meaning. On that view, "Snow is white" is true if and only if P is true in the LOT, where P means in the LOT what "Snow is white" means in the natural language. But any symbol manipulation is in need of some way of deriving what those symbols mean.[5] If the meaning of natural-language sentences is explained in terms of sentences in the LOT, then sentences in the LOT must get their meaning from somewhere else, and there seems to be an infinite regress of sentences getting their meaning. Sentences in natural languages get their meaning from their users (speakers, writers),[5] so sentences in mentalese must get their meaning from the way in which they are used by thinkers, and so on ad infinitum. This regress is often called the homunculus regress.[5]
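The regress can be written schematically as follows, where m(·) stands for "the meaning of", S is a natural-language sentence, and P_1, P_2, … are placeholder LOT sentences; this notation is an illustrative gloss, not notation from Crane's text:

```latex
\begin{align*}
  m(S)   &= m(P_1) && \text{the natural-language sentence $S$ is interpreted by LOT sentence $P_1$}\\
  m(P_1) &= m(P_2) && \text{but $P_1$ itself needs an interpreter}\\
         &\;\;\vdots && \text{and so on, with no non-semantic ground floor}
\end{align*}
```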

Daniel Dennett accepts that homunculi may be explained by other homunculi, but denies that this yields an infinite regress: each explanatory homunculus is "stupider" or more basic than the homunculus it explains, and the regress bottoms out at a basic level that is so simple that it needs no interpretation.[5] John Searle points out that it still follows that the bottom-level homunculi are manipulating some sorts of symbols.

LOTH implies that the mind has some tacit knowledge of the logical rules of inference and of the linguistic rules of syntax (sentence structure) and semantics (concept or word meaning).[5] If LOTH cannot show that the mind knows it is following the particular set of rules in question, then the mind is not computational, because it is not governed by computational rules.[5][2] Critics also point to the apparent incompleteness of any such set of rules in explaining behavior: many conscious beings behave in ways contrary to the rules of logic, and this irrational behavior is not accounted for by any rules, showing that there is at least some behavior that does not accord with the set of rules.[5]

Another objection within the representational theory of mind concerns the relationship between propositional attitudes and representation. Dennett points out that a chess program can have the attitude of "wanting to get its queen out early" without having a representation or rule that explicitly states this. A multiplication program on a computer computes in the binary language of 1s and 0s, yielding representations that do not correspond to any propositional attitude.[2]

Susan Schneider has more recently developed a version of LOT that departs from Fodor's approach in numerous ways. In her book The Language of Thought: A New Philosophical Direction, Schneider argues that Fodor's pessimism about the success of cognitive science is misguided, and she outlines an approach that integrates LOT with neuroscience. She also stresses that LOT need not be wedded to the extreme view that all concepts are innate. She fashions a new theory of mental symbols, and a related two-tiered theory of concepts, in which a concept's nature is determined by its LOT symbol type and its meaning.[3]

Relation to connectionism[edit]

Connectionism is a more recent approach to artificial intelligence that accepts much of the same theoretical framework as LOTH, namely that mental states are computational and causally efficacious and, very often, that they are representational. However, connectionism stresses the possibility of thinking machines, most often realized as neural networks: interconnected sets of nodes in which mental states can create memory by modifying the strength of the connections over time. Two defining features of a neural network model are the interpretation of its units and its learning algorithm. "Units" can be interpreted as neurons or groups of neurons. A learning algorithm changes connection weights over time, allowing networks to modify their connections through experience. An activation is a numerical value that represents the state of a unit at a given time, and connectionist networks change over time via activation spreading: the propagation of activation from an activated unit to the other units connected to it.
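The ingredients named above, units, connection weights, activations, and a learning algorithm, can be sketched in a few lines; the particular squashing function, learning rule, and numbers below are illustrative assumptions, not any specific connectionist model from the literature:

```python
import math

def activation(inputs, weights):
    """Activation of a unit: a squashed weighted sum of its inputs."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1.0 / (1.0 + math.exp(-total))  # logistic squashing into (0, 1)

def hebbian_update(weights, inputs, output, rate=0.1):
    """A simple Hebbian learning rule: connections between co-active
    units are strengthened, modifying the network over time."""
    return [w + rate * i * output for w, i in zip(weights, inputs)]

# Two input units feeding one output unit.
weights = [0.2, -0.1]
inputs = [1.0, 0.5]

out = activation(inputs, weights)          # numerical activation value
weights = hebbian_update(weights, inputs, out)  # connection strengths change

print(out)
print(weights)
```

Nothing in this sketch is a sentence-like symbol: the network's "knowledge" lives in the weight values, which is exactly the feature connectionists cite against a symbol-manipulating LOT.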

Since connectionist models can change over time, supporters of connectionism claim that connectionism can solve the problems that LOTH poses for classical AI. Those problems show that machines built on a LOT-style syntactic framework are often far better than human minds at solving certain problems and storing data, yet far worse at tasks the human mind handles with ease, such as recognizing facial expressions and objects in photographs and understanding nuanced gestures.[5] Fodor defends LOTH by arguing that a connectionist model is just a realization or implementation of the classical computational theory of mind, and therefore necessarily employs a symbol-manipulating LOT.

Fodor and Zenon Pylyshyn use the notion of cognitive architecture in their defense. Cognitive architecture is the set of basic functions of an organism with representational input and output. They argue that it is a law of nature that cognitive capacities are productive, systematic, and inferentially coherent: an organism has the ability to produce and understand sentences of a certain structure if it can understand one sentence of that structure.[6] A cognitive model must have a cognitive architecture that explains these laws and properties in a way compatible with the scientific method. Fodor and Pylyshyn say that a cognitive architecture can explain the property of systematicity only by appealing to a system of representations, and that connectionism either employs a cognitive architecture of representations or it does not: if it does, then connectionism uses a LOT; if it does not, then it is empirically false.[2]
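The systematicity point can be illustrated with a toy combinatorial syntax; the names, the relation "Loves", and the string encoding are hypothetical choices for illustration only:

```python
# A minimal combinatorial syntax: any system that can build the thought
# Loves(John, Mary) from its constituents can, by the same rule, build
# Loves(Mary, John). Understanding one sentence of a structure brings
# with it the ability to understand its systematic variants.
def thoughts(names, relation):
    """All structured thoughts Relation(x, y) buildable from the parts."""
    return [f"{relation}({a}, {b})" for a in names for b in names if a != b]

print(thoughts(["John", "Mary"], "Loves"))
```

Fodor and Pylyshyn's challenge to connectionism is precisely that this guarantee, that the variants come for free, falls out of a combinatorial system of representations and is not obviously delivered by a network of weighted connections.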

Connectionists have responded to Fodor and Pylyshyn by denying that connectionism uses a LOT, by denying that cognition is essentially a function that uses representational input and output, or by denying that systematicity is a law of nature that rests on representation.[citation needed]

Empirical testing[edit]

Since its formulation, LOTH has been subjected to empirical testing; not all experiments have confirmed the hypothesis.

  • In 1971, Roger Shepard and Jacqueline Metzler tested Pylyshyn’s particular hypothesis that all symbols are understood by the mind in virtue of their fundamental mathematical descriptions.[7] Shepard and Metzler’s experiment consisted of showing a group of subjects a 2-D line drawing of a 3-D object, and then that same object at some rotation. According to Shepard and Metzler, if Pylyshyn were correct, then the amount of time it took to identify the object as the same object would not depend on the degree of rotation of the object. Their finding that the time taken to recognize the object was proportional to its rotation contradicts this hypothesis.
  • There may be a connection between prior knowledge of the relations that hold between objects in the world and the time it takes subjects to recognize those objects. For example, subjects are less likely to recognize a hand that is rotated in a way that would be physically impossible for an actual hand.[citation needed] Later experiments have also supported the idea that the mind may manipulate mathematical descriptions better as topographical wholes.[citation needed] These findings have illuminated what the mind is not doing when it manipulates symbols.[citation needed]
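The contrast between the two predictions in the Shepard and Metzler experiment can be sketched as a pair of toy models; the intercept and slope values below are illustrative assumptions, not fitted experimental parameters:

```python
# Two hypothetical predictions for time to judge "same object"
# at a given rotation angle.

def predicted_rt_rotation(angle_deg, intercept=1.0, s_per_degree=0.02):
    """What the data showed: recognition time grows linearly with
    the angle of rotation (values here are illustrative)."""
    return intercept + s_per_degree * angle_deg

def predicted_rt_symbolic(angle_deg, constant=1.0):
    """The prediction attributed to Pylyshyn in the text: recognition
    time is independent of rotation angle."""
    return constant

for angle in (0, 60, 120, 180):
    print(angle, predicted_rt_rotation(angle), predicted_rt_symbolic(angle))
```

The experiment's force lies in this divergence: a flat response-time curve would have supported purely symbolic manipulation, while the observed rotation-dependent curve did not.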

See also[edit]

References[edit]

  1. ^ Stanford Encyclopedia of Philosophy
  2. ^ a b c d Murat Aydede (2004-07-27). "The Language of Thought Hypothesis". 
  3. ^ a b Schneider, Susan (2011). The Language of Thought: A New Philosophical Direction. Cambridge, MA: MIT Press. 
  4. ^ Fodor, Jerry A. (1975-01-01). The Language of Thought. Harvard University Press. ISBN 9780674510302. 
  5. ^ a b c d e f g h i Crane, Tim (2005). The mechanical mind : a philosophical introduction to minds, machines and mental representation (2nd, repr. ed.). London: Routledge. ISBN 978-0-415-29031-9. 
  6. ^ James Garson (2010-07-27). "Connectionism". 
  7. ^ Shepard, Roger N.; Metzler, Jacqueline (1971-02-19). "Mental Rotation of Three-Dimensional Objects". Science 171 (3972): 701–703. doi:10.1126/science.171.3972.701. PMID 5540314. 
  • Ravenscroft, Ian. Philosophy of Mind. Oxford University Press, 2005. p. 91.
  • Fodor, Jerry A. The Language of Thought. Crowell Press, 1975. p. 214.
  • John R. Searle (June 29, 1972). "Chomsky's Revolution in Linguistics". New York Review of Books. 
