Naturalization of intentionality

From Wikipedia, the free encyclopedia

According to Franz Brentano, intentionality refers to the "aboutness" of mental states. It cannot be a physical relation between a mental state and what that state is about (its object), because in a physical relation each of the relata must exist, whereas the objects of mental states might not.

Several features of intentionality raise problems for a naturalistic account, because they are unusual among physical relations. Representation is unique: when 'x represents y' is true, the relation differs from ordinary relations such as 'x is next to y', 'x caused y', or 'x met y'. For instance, when 'x represents y' is true, y need not exist; this is not so for 'x is the square root of y', 'x caused y', or 'x is next to y'. Similarly, when 'x represents y' is true, 'x represents z' can still be false, even when y = z. Intentionality thus allows relations to nonexistent objects: "Billy can love Santa and Jane can search for unicorns even if Santa does not exist and there are no unicorns."
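These two peculiarities can be made concrete with a toy sketch (an illustration of my own, not from the literature): if "representing" is modeled as a relation to a name or description rather than to the object itself, then an agent can represent y without representing z even when y and z are the same object, and can represent things that do not exist at all.

```python
# Toy model: an agent's representations are keyed by names (modes of
# presentation), not by the objects those names pick out.

venus = object()  # one and the same planet under two names
referents = {"Hesperus": venus, "Phosphorus": venus, "Santa": None}  # Santa has no referent

billy = {"Hesperus", "Santa"}  # the names under which Billy represents things

def represents(agent, name):
    # Representation is sensitive to the name used, not the referent.
    return name in agent

# Billy represents Hesperus but not Phosphorus, though both name the same object:
assert represents(billy, "Hesperus")
assert not represents(billy, "Phosphorus")
assert referents["Hesperus"] is referents["Phosphorus"]

# Billy represents Santa even though Santa does not exist:
assert represents(billy, "Santa")
assert referents["Santa"] is None
```

No physical relation behaves this way: 'x is next to y' fails whenever y does not exist, and holds of z whenever it holds of y and y = z.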

History[edit]

Franz Brentano, the nineteenth-century philosopher, spoke of mental states as involving presentations of the objects of our thoughts. This reflects his view that one cannot desire something without having a representation of it in one's mind.

Dennis Stampe was one of the first philosophers in modern times to suggest a theory of content according to which content is a matter of reliable causes.

Fred Dretske's book Knowledge and the Flow of Information (1981) was a major influence on the development of informational theories, and although the theory developed there is not a teleological theory, Dretske (1986, 1988, 1991) later produced an informational version of teleosemantics. He begins with a concept of carrying information that he calls "indicating", explains that indicating is not equivalent to representing, and then suggests that a representation's content is what it has the function of indicating.

Related theories[edit]

Teleosemantics, also known as biosemantics, refers to the class of theories of mental content that use a teleological notion of function. Teleosemantics is best understood as a general strategy for underwriting the normative nature of content, rather than as any particular theory. What all teleological theories have in common is the idea that semantic norms are ultimately derivable from functional norms.

The theory of asymmetric dependence comes from Jerry Fodor, who says that his theory "distinguishes merely informational relations on the basis of their higher-order relations to each other: informational relations depend upon representational relations, but not vice versa." He illustrates the theory as follows: "if tokens of a mental state type are reliably caused by horses, cows-on-dark-nights, zebras-in-the-mist and Great Danes, then they carry information about horses, etc. If, however, such tokens are caused by cows-on-dark-nights, etc. because they are caused by horses, but not vice versa, then they represent horses (or the property horse)."

Alternative theories[edit]

Indeterminacy of translation

The twentieth-century American philosopher Willard Van Orman Quine believed that linguistic terms do not have distinct meanings that accompany them, because there are no such entities as "meanings". In his books Word and Object (1960) and Ontological Relativity (1968), Quine outlines his thesis by considering the methods available to a field linguist attempting to translate an unknown language. His thesis, the indeterminacy of translation, holds that there are many different ways to distribute meanings among words, and that any theory of translation is based upon context. An argument over the correct translation of an unidentified term depends on the possibility that the native could have spoken a different sentence; the same indeterminacy reappears in this argument, since any hypothesis can be defended if one adopts enough compensatory hypotheses about other parts of the language. Quine uses as an example the word "gavagai", spoken by a native upon seeing a rabbit. One can take the simplest route and translate the word as "Lo, a rabbit", but other possible translations, such as "Lo, food" or "Let's go hunting", are equally reasonable given what the linguist knows. Subsequent observations can rule out certain possibilities, as can questioning the natives, but the latter is possible only once the linguist has mastered much of the natives' grammar and vocabulary. This in turn can be done only on the basis of hypotheses derived from simpler, observation-connected bits of language, which, as we have seen, admit multiple interpretations.

The intentional stance

Daniel C. Dennett’s theory of mental content, the intentional stance, tries to view the behavior of things in terms of mental properties. According to Dennett: "Here is how it works: first you decide to treat the object whose behavior is to be predicted as a rational agent; then you figure out what beliefs that agent ought to have, given its place in the world and its purpose. Then you figure out what desires it ought to have, on the same considerations, and finally you predict that this rational agent will act to further its goals in the light of its beliefs. A little practical reasoning from the chosen set of beliefs and desires will in most instances yield a decision about what the agent ought to do; that is what you predict the agent will do."
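Dennett's procedure — attribute beliefs and desires, then predict the action that best furthers the agent's goals in light of those beliefs — can be sketched as a toy program. This is a minimal illustration of my own, not Dennett's formulation; all names and the scoring scheme are hypothetical.

```python
def intentional_stance_predict(beliefs, desires, options):
    """Predict what a rational agent 'ought' to do: pick the option that,
    according to the beliefs attributed to it, best satisfies the desires
    attributed to it."""
    def expected_value(action):
        # Sum the weight of each desire the agent believes this action furthers.
        return sum(weight for goal, weight in desires.items()
                   if goal in beliefs.get(action, []))
    return max(options, key=expected_value)

# Example: a chess program treated as a rational agent.
beliefs = {"capture_queen": ["material_gain"],   # what the agent believes each move yields
           "castle": ["king_safety"]}
desires = {"material_gain": 9, "king_safety": 3}  # what it wants, weighted
print(intentional_stance_predict(beliefs, desires, ["capture_queen", "castle"]))
# prints "capture_queen"
```

The point of the sketch is that the prediction uses only attributed beliefs and desires, with no reference to the agent's physical makeup, which is exactly what distinguishes the intentional stance from the stances below.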

Dennett's thesis has three levels of abstraction:

1. The most tangible is the physical stance, at the level of physics and chemistry, concerned with things such as mass, energy, velocity, and chemical composition.
2. Somewhat more abstract is the design stance, at the level of biology and engineering, concerned with things such as purpose, function, and design.
3. Most abstract is the intentional stance, at the level of software and minds, concerned with things such as belief, thinking, and intent.

Dennett states that the more concrete the level, the more accurate, in principle, our predictions are. By choosing to view an object at a more abstract level, however, one gains computational power: a better overall picture of the object that skips over extraneous details. Switching to a more abstract level carries risks as well as benefits. If we applied the intentional stance to a thermostat that had been heated to 500 °C, trying to understand it through its beliefs about how hot it is and its desire to keep the temperature just right, we would gain no useful information; the problem could not be understood until we dropped down to the physical stance and saw that it had melted. Whether to take a particular stance should be decided by how successful that stance is when applied. Dennett argues that human beliefs and desires are best understood at the level of the intentional stance.
