
Eliminative materialism

From Wikipedia, the free encyclopedia

Eliminativists argue that modern belief in the existence of mental phenomena is analogous to the ancient belief in obsolete theories such as the geocentric model of the universe.

Eliminative materialism (also called eliminativism) is the claim that certain types of mental states that most people believe in do not exist.[1] It is a materialist position in the philosophy of mind. Some supporters of eliminativism argue that no coherent neural basis will be found for many everyday psychological concepts such as belief or desire, since they are poorly defined. Rather, they argue that psychological concepts of behaviour and experience should be judged by how well they reduce to the biological level.[2] Other versions entail the non-existence of conscious mental states such as pain and visual perceptions.[3]

Eliminativism about a class of entities is the view that the class of entities does not exist.[4] For example, materialism tends to be eliminativist about the soul; modern chemists are eliminativist about phlogiston; and modern physicists are eliminativist about the existence of luminiferous aether. Eliminative materialism is the relatively new (1960s–1970s) idea that certain classes of mental entities that common sense takes for granted, such as beliefs, desires, and the subjective sensation of pain, do not exist.[5][6] The most common versions are eliminativism about propositional attitudes, as expressed by Paul and Patricia Churchland,[7] and eliminativism about qualia (subjective interpretations about particular instances of subjective experience), as expressed by Daniel Dennett and Georges Rey.[3] These philosophers often appeal to an introspection illusion.

In the context of materialist understandings of psychology, eliminativism stands in opposition to reductive materialism, which argues that mental states as conventionally understood do exist and directly correspond to physical states of the nervous system.[8] An intermediate position is revisionary materialism, which argues that the mental states in question will prove to be partially reducible to physical phenomena, with some changes needed to the common-sense concept.

Since eliminative materialism claims that future research will fail to find a neuronal basis for various mental phenomena, it must necessarily wait for science to progress further. One might question the position on these grounds, but other philosophers like Churchland argue that eliminativism is often necessary in order to open the minds of thinkers to new evidence and better explanations.[8]

Overview

Various arguments have been put forth both for and against eliminative materialism over the last forty years. Most of the arguments in favor of the view are based on the assumption that people's commonsense view of the mind is actually an implicit theory. It is to be compared and contrasted with other scientific theories in its explanatory success, accuracy, and ability to allow people to make correct predictions about the future. Eliminativists argue that, based on these and other criteria, commonsense "folk" psychology has failed and will eventually need to be replaced with explanations derived from the neurosciences. These philosophers therefore tend to emphasize the importance of neuroscientific research as well as developments in artificial intelligence to sustain their thesis.

Philosophers who argue against eliminativism may take several approaches. Simulation theorists, like Robert Gordon[9] and Alvin Goldman[10] argue that folk psychology is not a theory, but rather depends on internal simulation of others, and therefore is not subject to falsification in the same way that theories are. Jerry Fodor, among others,[11] argues that folk psychology is, in fact, a successful (even indispensable) theory. Another view is that eliminativism assumes the existence of the beliefs and other entities it seeks to "eliminate" and is thus self-refuting.[12]

Schematic overview: Eliminativists suggest that some sciences can be reduced (blue), but that theories that are in principle irreducible will eventually be eliminated (orange).

Eliminativism maintains that the common-sense understanding of the mind is mistaken, and that the neurosciences will one day reveal that the mental states that are talked about in everyday discourse, using words such as "intend", "believe", "desire", and "love", do not refer to anything real. Because of the inadequacy of natural languages, people mistakenly think that they have such beliefs and desires.[2] Some eliminativists, such as Frank Jackson, claim that consciousness does not exist except as an epiphenomenon of brain function; others, such as Georges Rey, claim that the concept will eventually be eliminated as neuroscience progresses.[3][13] Consciousness and folk psychology are separate issues, and it is possible to take an eliminative stance on one but not the other.[4] The roots of eliminativism go back to the writings of Wilfrid Sellars, W.V. Quine, Paul Feyerabend, and Richard Rorty.[5][6][14] The term "eliminative materialism" was first introduced by James Cornman in 1968 while describing a version of physicalism endorsed by Rorty. The later Ludwig Wittgenstein was also an important inspiration for eliminativism, particularly with his attack on "private objects" as "grammatical fictions".[4]

Early eliminativists such as Rorty and Feyerabend often confused two different notions of the sort of elimination that the term "eliminative materialism" entailed. On the one hand, they claimed, the cognitive sciences that will ultimately give people a correct account of the workings of the mind will not employ terms that refer to common-sense mental states like beliefs and desires; these states will not be part of the ontology of a mature cognitive science.[5][6] But critics immediately countered that this view was indistinguishable from the identity theory of mind.[2][15] Quine himself wondered what exactly was so eliminative about eliminative materialism after all:

Is physicalism a repudiation of mental objects after all, or a theory of them? Does it repudiate the mental state of pain or anger in favor of its physical concomitant, or does it identify the mental state with a state of the physical organism (and so a state of the physical organism with the mental state)?[16]

On the other hand, the same philosophers also claimed that common-sense mental states simply do not exist. But critics pointed out that eliminativists could not have it both ways: either mental states exist and will ultimately be explained in terms of lower-level neurophysiological processes or they do not.[2][15] Modern eliminativists have much more clearly expressed the view that mental phenomena simply do not exist and will eventually be eliminated from people's thinking about the brain in the same way that demons have been eliminated from people's thinking about mental illness and psychopathology.[4]

While it was a minority view in the 1960s, eliminative materialism gained prominence and acceptance during the 1980s.[17] Proponents of this view, such as B.F. Skinner, often drew parallels to superseded scientific theories (such as the theory of the four humours, the phlogiston theory of combustion, and the vital force theory of life), all of which have been successfully eliminated, in attempting to establish their thesis about the nature of the mental. In these cases, science did not produce more detailed versions or reductions of these theories, but rejected them altogether as obsolete. Radical behaviorists, such as Skinner, argued that folk psychology is already obsolete and should be replaced by descriptions of histories of reinforcement and punishment.[18] Such views were eventually abandoned. Patricia and Paul Churchland argued that folk psychology will be gradually replaced as neuroscience matures.[17]

Eliminativism is not only motivated by philosophical considerations, but is also a prediction about what form future scientific theories will take. Eliminativist philosophers therefore tend to be concerned with the data coming from the relevant brain and cognitive sciences.[19] In addition, because eliminativism is essentially predictive in nature, different theorists can, and often do, make different predictions about which aspects of folk psychology will be eliminated from folk psychological vocabulary. None of these philosophers are eliminativists "tout court".[20][21][22]

Today, the eliminativist view is most closely associated with the philosophers Paul and Patricia Churchland, who deny the existence of propositional attitudes (a subclass of intentional states), and with Daniel Dennett, who is generally considered to be an eliminativist about qualia and phenomenal aspects of consciousness. One way to summarize the difference between the Churchlands' views and Dennett's view is that the Churchlands are eliminativists when it comes to propositional attitudes, but reductionists concerning qualia, while Dennett is an anti-reductionist with respect to propositional attitudes, and an eliminativist concerning qualia.[4][22][23][24] More recently, Brian Tomasik and Jacy Reese Anthis have put forth various arguments in favor of eliminativism.[25][26]

Arguments for eliminativism

Problems with folk theories

Eliminativists such as Paul and Patricia Churchland argue that folk psychology is a fully developed but non-formalized theory of human behavior. It is used to explain and make predictions about human mental states and behavior. This view is often referred to as the theory of mind or simply "theory-theory", for it is a theory that theorizes the existence of an unacknowledged theory. As a theory in the scientific sense, eliminativists maintain, folk psychology needs to be evaluated on the basis of its predictive power and explanatory success as a research program for the investigation of the mind/brain.[27][28]

Such eliminativists have developed different arguments to show that folk psychology is a seriously mistaken theory that needs to be abolished. They argue that folk psychology excludes from its purview, or has traditionally been mistaken about, many important mental phenomena that can be, and are being, examined and explained by the modern neurosciences. Some examples are dreaming, consciousness, mental disorders, learning processes, and memory abilities. Furthermore, they argue, folk psychology has not developed significantly in the last 2,500 years and is therefore a stagnant theory; the ancient Greeks already had a folk psychology comparable to modern views. In contrast to this lack of development, the neurosciences are a rapidly progressing science complex that, in their view, can explain many cognitive processes that folk psychology cannot.[19][29]

Folk psychology retains characteristics of now obsolete theories or legends from the past. Ancient societies tried to explain the physical mysteries of nature by ascribing mental conditions to them in such statements as "the sea is angry". Gradually, these everyday folk psychological explanations were replaced by more efficient scientific descriptions. Today, eliminativists argue, there is no reason not to accept an effective scientific account of people's cognitive abilities. If such an explanation existed, then there would be no need for folk-psychological explanations of behavior, and the latter would be eliminated the same way as the mythological explanations the ancients used.[30]

Another line of argument is the meta-induction based on what eliminativists view as the disastrous historical record of folk theories in general. Ancient pre-scientific "theories" of folk biology, folk physics, and folk cosmology have all proven to be radically wrong. Eliminativists argue the same in the case of folk psychology. There seems no logical basis, to the eliminativist, for making an exception just because folk psychology has lasted longer and is more intuitive or instinctively plausible than the other folk theories.[29] Indeed, the eliminativists warn, considerations of intuitive plausibility may be precisely the result of the deeply entrenched nature in society of folk psychology itself. It may be that people's beliefs and other such states are as theory-laden as external perceptions and hence intuitions will tend to be biased in favor of them.[20]

Specific problems with folk psychology

Much of folk psychology involves the attribution of intentional states (or, more specifically, a subclass of them, propositional attitudes). Eliminativists point out that these states are generally ascribed syntactic and semantic properties. An example of this is the language of thought hypothesis, which attributes a discrete, combinatorial syntax and other linguistic properties to these mental phenomena. Eliminativists argue that such discrete and combinatorial characteristics have no place in the neurosciences, which speak of action potentials, spiking frequencies, and other effects that are continuous and distributed in nature. Hence, the syntactic structures that folk psychology assumes can have no place in a structure like the brain.[19] Against this there have been two responses. On the one hand, there are philosophers who deny that mental states are linguistic in nature and see this as a straw man argument.[31][32] The other view is represented by those who subscribe to a "language of thought". They assert that mental states can be multiply realized and that functional characterizations are just higher-level characterizations of what is happening at the physical level.[33][34]

It has also been argued against folk psychology that the intentionality of mental states like belief implies that they have semantic qualities: specifically, their meaning is determined by the things they are about in the external world. This makes it difficult to explain how they can play the causal roles they are supposed to play in cognitive processes.[35]

In recent years, this latter argument has been fortified by the theory of connectionism. Many connectionist models of the brain have been developed in which the processes of language learning and other forms of representation are highly distributed and parallel. This would tend to indicate that there is no need for such discrete and semantically endowed entities as beliefs and desires.[36]

Physics eliminates intentionality

If we are to say that a thought is a kind of neural process, we have to say that when we think about Paris there is a network of neurons that is somehow about Paris. Consider the various answers that might be given to the question of how that could be. The neurons cannot be about Paris in the way a picture is, because unlike a picture they do not resemble Paris at all. But neither can they be about Paris in the way that a red octagonal "Stop" sign is about stopping even though it does not resemble that action. For a red octagon, or the word "Stop" for that matter, only means what it does as a matter of convention, only because we interpret the shape in question as representing the action of stopping. And when you think about Paris, no one is assigning a conventional interpretation to such-and-such neurons in your brain so as to make them represent Paris. To suggest that there is some further brain process that assigns such a meaning to the purported "Paris neurons" is merely to commit a homunculus fallacy, and explains nothing. For if we say that one clump of neurons assigns meaning to another, we are saying that the one represents the other as having such-and-such a meaning. That means we now have to explain how the first possesses the meaning or representational content by virtue of which it does so, which entails that we have not solved the first problem at all but only added a second one to it. We have "explained" the meaning of one clump of neurons by reference to meaning implicitly present in another clump, and thus merely initiated a vicious explanatory regress. The only way to break the regress would be to postulate some bit of matter that just has its meaning intrinsically, without deriving it from anything else. But there can be no such bit of matter, because physics has ruled out the existence of clumps of matter of the required sort.[37][38]

Evolution eliminates intentionality

Physicalism needs an account of how a clump of matter, the brain as a whole or a population of neurons wired together into a circuit, has unique propositional content. The best resource, perhaps physicalism's only resource, for explaining how intentionality emerges and what it consists in has to be Darwin's theory of natural selection. There is one huge reason for that supposition. Behavior is guided by intentional states that are purposive; it is a matter of means aimed at ends. Such purposive behavior inherits its purposiveness from the brain states that drive it, which is why the intentionality of the noises and marks we make is derived from the original intentionality of neural circuits. But there is only one physically possible process that builds and operates purposive systems in nature: natural selection. That is why natural selection must have built and must continually shape the intentional causes of purposive behavior. Accordingly, we should look to Darwinian processes to provide a causal account of intentional content. That makes teleosemantics an inevitable research program. Teleosemantics' stock example of how Darwinian processes build intentional content in neural circuitry is the frog's purposive tongue-snapping to feed itself flies. The neural circuitry in the frog that produces fly-snapping has been tuned up phylogenetically by natural selection and ontogenetically, developmentally, by learning via the law of effect (operant conditioning). Teleosemantics claims that the neural circuitry's intentional content consists in those phylogenetic and ontogenetic facts about it.

The problem facing teleosemantics is the indeterminacy of intentional content. Even the most exquisite environmental appropriateness of the behavior produced by some neural circuit's firing won't narrow down its content to one unique proposition. This indeterminacy is referred to as the disjunction problem, and it is faced by all causal theories of content. In the actual environment in which frogs evolved, and in the actual environment in which this frog learned how to make a living, the neural circuitry that was selected for causing the frog's tongue to snap at the fly at x,y,z,t is supposed to have the content "Fly at x,y,z,t." But phylogenetic and ontogenetic Darwinian processes of selection cannot discriminate among indefinitely many alternative neural contents with the same actual effects on tongue-snapping behavior. It is now famous that there is no way any teleosemantic theory can tell whether the content of the relevant frog's neural circuit is "Fly or black moving dot at x,y,z,t," or "Fly or beebee at x,y,z,t," or any of a zillion other disjunctive objects of thought, so long as none of these disjuncts has ever actually been presented to the frog. This is the disjunction problem encountered by all physicalist theories.

Any naturalistic, purely causal, non-semantic account of content will have to rely on Darwinian natural selection to build neural states capable of having content. This is what teleosemantics seeks to do. But that is exactly what a Darwinian process cannot do. The whole point of Darwin's theory is that in the creation of adaptations, nature is not active but passive. What is really going on is environmental filtration: a purely passive and not very discriminating process that prevents most traits below some minimal local threshold from persisting. Natural selection is selection-against. Literal selection-for requires foresight, planning, and purpose. Darwin's achievement was to show that the appearance of purpose belies the reality of purposeless, unforesighted, and unplanned mindless causation. All adaptation requires is selection-against. That was Darwin's point. But the combination of blind variation and selection-against is not possible without disjunctive outcomes.[39][40][41]

It is important to see that 'selection-against' is not the contradictory of 'selection-for.' Why are they not contradictories? That is, why isn't selection against trait T just selection for trait not-T? Simply because there are traits that are neither selected against nor selected for. These are the neutral ones that biologists, especially molecular evolutionary biologists, describe as silent, switched off, junk, non-coding, etc. 'Selection-for' and 'selection-against' are contraries, not contradictories.[39][41]

To see how the process of Darwinian selection-against works in a real case, consider an example: two distinct gene products, one of which is neutral or even harmful to an organism and the other of which is beneficial, coded for by genes right next to each other on the chromosome. This is the phenomenon of genetic linkage. The traits the genes code for will be coextensive in a population because the gene-types are coextensive in that population. Mendelian assortment and segregation do not break up these packages of genes with any efficiency. Only crossover (the breaking up and faulty re-annealing of chromosomal strands) or similar processes can do this. As Darwin realized, no process producing variants in nature picks up on the future usefulness, convenience, need, or adaptational value of anything at all. The only thing evolution (natural selection-against) can do about a free-riding maladaptive or neutral trait, whose genes ride along close to the genes for an adaptive trait, is wait for the genetic material to be broken at just the right place between their respective genes. Once this happens, Darwinian processes can begin to tell the difference between them. But only when environmental vicissitudes break up the DNA on which the two adjacent genes sit can selection-against get started, and then only if one of the two proteins is harmful.[39][41]

Here is Darwinian theory's disjunction problem: the process Darwin discovered cannot tell the difference between these two genes or their traits until crossover breaks the linkage between the gene that is going to increase in frequency and the one that is going to decrease. If they are never separated, it will remain blind to their differences forever. What is worse, and more likely, one gene sequence can code for a favorable trait (a protein required for survival) while a part of the same sequence codes for a maladaptive trait, some gene product that reduces fitness. Natural selection will have an even harder time discriminating these two traits.[39][41]

Apply these features of the process Darwin discovered to the way neural circuits acquire content. First there is a phylogenetic, evolutionary process that builds neural circuitry and its connections. It selects against circuitry that fails to perform functions required for the organism's survival and reproduction. In circumstances of strong competition, in which the bar to survival is set high, this results in neural circuits very finely attuned to their environments. In the case of frogs, neural circuits that send the tongue snapping in even very slightly inaccurate directions are strongly selected against. Whence comes the informational content we ascribe to the circuits that have survived selection-against: 'Fly at x,y,z,t.' But of course the process has been unable to discriminate those circuits from ones that cause tongue-snapping at disjunctive prey such as 'flies or beebees' or 'flies or black spots on screens in the frog's visual field.' We could of course intervene in the course of natural selection to select against neural circuits that have these latter contents, but there are indefinitely many of them, and we would never be able to narrow down content to only one disjunct.

There is an equally daunting proximal-distal indeterminacy problem that also undermines teleosemantics' prospects of identifying unique propositional content in neural circuitry. Is it the stimulation in the visual cortex to which the tongue-snapping neurons respond, or something further upstream, say the retinal excitations? Or is it the photons bouncing off the fly's body, the shape of the fly or its motion, some combination of these, the fly itself, the fly plus the ambient environmental conditions that make it available, or some other factor? As in the disjunction problem, there are indefinitely many links in the causal chain from external sources to the switching on of the right neural circuitry that are equally strongly selected for (i.e., not selected against) as the "referent," "subject," or "topic" of the neural circuits' content.

Move now from phylogenetic to ontogenetic processes. Frogs cannot learn much at all, since they are not subject to substantial operant conditioning, but rats and humans can. Operant conditioning is also a matter of selecting-against; if it were a matter of selecting-for, it would lose all its interest as a nonteleological account of learning. Operant conditioning over a course of training enables rats to learn certain distinctive behaviors. It does so through a process of feedback in the rat's brain that builds neural circuitry by using classical conditioning. Teleosemantics bids us attribute propositional content to these circuits, in particular descriptions of the transient environment that make the behavior the neural circuitry produces the appropriate one. But operant conditioning works by building any and every neural circuit that shares a reinforced effect downstream in whatever behavior is reinforced. Since the behavior does not narrow down the upstream causes of the neural circuitry, it can never narrow down neural content to a unique disjunct.

When it comes to building content, teleosemantics is the only plausible theory, since Darwinian natural selection is the only way to get the appearance of purpose wherever in nature it occurs, and that includes inside the brain. If frogs are hard-wired to snap tongues at flies, we have to treat the neural content (fly at x,y,z,t) as a matter of Darwinian shaping of the relevant neural circuits that control frog tongue-flicking. In more complex organisms, natural selection first hard-wires a capacity to carry information; then learning, classical and operant, shapes the actual informational content of neural circuitry.

If teleosemantics is the only plausible theory, and if it cannot solve the disjunction problem and the proximal-distal problem using evolution by natural selection, then the right conclusion for the materialist is to accept eliminativism by denying that neural states have as their informational content specific, particular, determinate statements that attribute non-disjunctive properties and relations to non-disjunctive subjects.[39][41][42][43][44]

Arguments against eliminativism

Intentionality and consciousness are identical

Some eliminativists reject intentionality while accepting the existence of qualia; others reject qualia while accepting intentionality. Many philosophers argue that intentionality cannot exist without consciousness and vice versa, so any philosopher who accepts one while rejecting the other is being inconsistent; to be consistent, one must accept both qualia and intentionality or reject both together. The philosophers who argue for such a position include Philip Goff, Terence Horgan, Uriah Kriegel, and John Tienson.[45][46] For instance, the philosopher Keith Frankish accepts the existence of intentionality but holds to illusionism about consciousness because he rejects qualia. Philip Goff notes that beliefs are a kind of propositional thought, and asks whether it is coherent to accept the reality of thought while denying the reality of consciousness. That depends on whether there is a constitutive relationship between thought and consciousness. Frankish assumes that he can account for thoughts, such as beliefs and other mental representations, without the postulation of consciousness. In this he follows the dominant view in analytic philosophy, largely unquestioned in the twentieth century, that there is no essential connection between thought and consciousness. However, there is now a growing movement in analytic philosophy defending the thesis that thoughts, and indeed mental representations in general, are identical with (or directly constituted of) forms of phenomenal consciousness. Uriah Kriegel has dubbed this movement the Phenomenal Intentionality Research Program. If its convictions turn out to be correct, then Frankish cannot coherently assert the existence of thought while denying the existence of consciousness: if thought just is a (highly evolved) form of consciousness, doing so would be contradictory.

Intuitive reservations

To many critics, the thesis of eliminativism seems so obviously wrong, on the grounds that people know immediately and indubitably that they have minds, that argumentation seems unnecessary. This sort of intuition pump is illustrated by asking what happens when one asks oneself honestly whether one has mental states.[47] Eliminativists object to such a rebuttal of their position by claiming that intuitions are often mistaken. Analogies from the history of science are frequently invoked to buttress this observation: it may appear obvious that the sun travels around the earth, for example, but for all its apparent obviousness this conception proved wrong nevertheless. Similarly, it may appear obvious that apart from neural events there are also mental conditions; this could equally turn out to be false.[20]

But even if one accepts that people's intuitions are susceptible to error, the objection can be reformulated: if the existence of mental conditions seems perfectly obvious and is central to people's conception of the world, then enormously strong arguments are needed to successfully deny their existence. Furthermore, these arguments, to be consistent, need to be formulated in a way that does not presuppose the existence of entities like "mental states", "logical arguments", and "ideas"; otherwise they are self-contradictory.[48] Those who accept this objection say that the arguments in favor of eliminativism are far too weak to establish such a radical claim; therefore there is no reason to believe in eliminativism.[47]

Self-refutation

Some philosophers, such as Paul Boghossian, have attempted to show that eliminativism is in some sense self-refuting, since the theory itself presupposes the existence of mental phenomena. If eliminativism is true, then the eliminativist must permit an intentional property like truth, supposing that in order to assert something one must believe it. Hence, for eliminativism to be asserted as a thesis, the eliminativist must believe that it is true; if that is the case, then there are beliefs and the eliminativist claim is false.[12][49]

Georges Rey and Michael Devitt reply to this objection by invoking deflationary semantic theories that avoid analysing predicates like "x is true" as expressing a real property. They are construed, instead, as logical devices, so that asserting that a sentence is true is just a disquotational way of asserting the sentence itself. To say "'God exists' is true" is just to say "God exists". This way, Rey and Devitt argue, insofar as dispositional replacements of "claims" and deflationary accounts of "true" are coherent, eliminativism is not self-refuting.[50]

Correspondence theory of truth[edit]

Alex Rosenberg developed a theory of structural resemblance or physical isomorphism to explain how neural states can instantiate truth within the correspondence theory of truth. Neuroscientists use the word "representation" to identify the neural circuits' encoding of inputs from the peripheral nervous system in, for example, the visual cortex. However, neuroscientists use the word "representation" without according it any commitment to intentional content. In fact, there is an explicit commitment to describing neural representations in terms of structures of neural axonal discharges that are physically isomorphic to the inputs that cause them. Suppose that this way of understanding representation in the brain is preserved in the long-term course of research providing an understanding of how the brain processes and stores information. Then there will be considerable vindication for the view of the brain as a neural network whose physical structure is isomorphic to the aspects of its environment that it tracks, and whose representations of those features consist in this physical isomorphism.[41]

Experiments in the 1980s with macaque monkeys isolated a structural resemblance between the input vibrations a finger feels, measured in cycles per second, and the representations of them in neural circuits, measured in action-potential spikes per second. This resemblance between two easily measured variables makes it unsurprising that they would be among the first such structural resemblances to be discovered. Macaques and humans have the same peripheral nervous system sensitivities and can make the same tactile discriminations. Subsequent research into neural processing has increasingly vindicated a structural resemblance or physical isomorphism approach to how information enters the brain, is stored, and is deployed.[39][51]
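The kind of structural resemblance at issue can be sketched in code. The encoding function and its gain below are invented for illustration, not taken from the macaque data; the point is only that relations among the stimuli (their order and ratios) are mirrored among the neural codes.

```python
# Illustrative sketch only: the linear encoding and its gain are assumptions,
# not measurements. "Structural resemblance" here means that relations among
# stimuli are preserved among their codes, with no interpretation needed.

def spike_rate(flutter_hz: float) -> float:
    """Hypothetical neural code: action-potential spikes per second
    as a function of vibration frequency in cycles per second."""
    return 1.5 * flutter_hz  # invented gain

stimuli = [10.0, 20.0, 40.0]             # vibration frequencies (Hz)
codes = [spike_rate(f) for f in stimuli]  # firing rates (spikes/s)

# The ordering of the stimuli is preserved in the codes...
assert sorted(codes) == codes
# ...and so are their ratios: doubling the vibration frequency doubles
# the firing rate, a physical isomorphism between world and code.
assert codes[1] / codes[0] == stimuli[1] / stimuli[0]
```

On this picture the firing rates are "about" the vibrations only in the sense that the two structures share a form, which is exactly the interpretation-free relation the approach requires.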

It is important to emphasize that this isomorphism between brain and world is not a matter of some relationship between reality and a map of reality stored in the brain. Maps require interpretation if they are to be about what they map, and both eliminativism and neuroscience share a commitment to explaining the appearance of aboutness by purely physical relationships between informational states in the brain and what they "represent". The brain-to-world relationship must be a matter of physical isomorphism: sameness of form, outline, and structure that does not require interpretation.[41]

This machinery can be applied to make "sense" of eliminativism in terms of the sentences the eliminativist speaks or writes. When we say that eliminativism is true, that is, that the brain does not store information in the form of unique sentences, statements, or anything else expressing propositions, there is a set of neural circuits that coherently carries this information. There is a possible translation manual that will guide us back from the eliminativist's vocalizations or inscriptions to these neural circuits. These neural structures will differ from the neural circuits of those who explicitly reject eliminativism, in ways that such a translation manual may be able to illuminate: giving us a neurological handle on disagreement, and on the structural differences in neural circuitry, if any, between asserting p and asserting not-p when p expresses the eliminativist thesis.[39]

Criticism[edit]

This physical isomorphism approach faces indeterminacy problems. Any given structure in the brain will be causally related to, and isomorphic in various respects to, many different structures in external reality, and we cannot discriminate the one it is intended to represent, or that it is supposed to be true "of": these locutions are heavy with just the intentionality that eliminativism denies itself. This is a problem of underdetermination or holism that eliminativism shares with intentionality-dependent theories of mind. Here we can only invoke pragmatic criteria for discriminating successful structural representations (the substitute for true ones) from unsuccessful ones (those we would previously have called false).[39]

Daniel Dennett noted that such indeterminacy problems may remain merely hypothetical, without ever arising in reality. Dennett constructs a 4x4 "Quinian crossword puzzle" in which the words written into the grid must satisfy both the across and the down definitions. Because the puzzle imposes multiple simultaneous constraints, it has only one solution. We can likewise think of the brain and its relation to the external world as a very large crossword puzzle that must satisfy exceedingly many constraints, and to which there is only one possible solution. In reality, then, we may end up with only one physical isomorphism between the brain and the external world.[44]
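Dennett's point can be sketched computationally with a toy 2x2 grid and an invented vocabulary (ours, not Dennett's 4x4 puzzle): under the across constraint alone many fill-ins are possible, but adding the down constraint pins the grid down to a single solution.

```python
from itertools import product

# Toy vocabulary, invented for illustration.
words = ["at", "to", "be", "up"]

# Fill a 2x2 grid with two across words; with no further constraints,
# any ordered pair of words is a candidate fill-in.
across_only = list(product(words, repeat=2))

# Now impose the down constraint as well: both columns, read top to
# bottom, must also be words in the vocabulary.
fully_constrained = [
    (a1, a2) for a1, a2 in across_only
    if a1[0] + a2[0] in words and a1[1] + a2[1] in words
]

assert len(across_only) == 16                # underdetermined: many fill-ins
assert fully_constrained == [("at", "to")]   # joint constraints: one grid
```

Adding more rows, columns, and definitions only multiplies the constraints, which is Dennett's reason for expecting the brain-world "puzzle" to admit a unique solution.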

Pragmatic theory of truth[edit]

When indeterminacy problems arose because the brain is physically isomorphic to multiple structures in the external world, it was urged that a pragmatic approach be used to resolve them. Another approach argues that we should use the pragmatic theory of truth from the beginning to decide whether certain neural circuits store true information about the external world. Pragmatism was founded by Charles Sanders Peirce, John Dewey, and William James, and was later refined by developments in the philosophy of science. According to pragmatism, to say that general relativity is true is to say that the theory makes more accurate predictions about events in the world than competing theories (Newtonian mechanics, Aristotle's physics, etc.).

Within pragmatism, then, in what sense can we say that information in brain-A about the external world is true while the same does not hold of information in brain-B? Assume that computer circuits lack intentionality and do not store information using propositions; in what sense can we say that computer-A has true information about the external world while computer-B lacks it? If the computers were installed in autonomous cars, we could test whether computer-A or computer-B successfully completes a cross-country road trip. If computer-A succeeds at the task while computer-B fails, the pragmatist can say that computer-A holds true information about the external world: the information in computer-A allows it to make more accurate predictions (relative to computer-B) and helps it move around successfully in its environment. Similarly, if brain-A has information that enables its organism to make more accurate predictions about the external world and to move around successfully in the environment, then we can say that brain-A has true information about the external world (relative to brain-B).
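The pragmatic comparison between the two computers can be sketched as follows. The data and the scoring rule are invented for illustration; the point is only that, on the pragmatic criterion, "true" information is identified with whatever yields the more accurate predictions.

```python
# Minimal sketch with invented data: a pragmatist "truth" comparison
# between two information stores, judged purely by predictive success.

observed = [2.0, 4.0, 6.0, 8.0]        # what the world actually did

predictions_a = [2.1, 3.9, 6.2, 7.8]   # computer-A's forecasts
predictions_b = [1.0, 5.5, 4.0, 10.0]  # computer-B's forecasts

def total_error(predictions, actual):
    """Sum of absolute prediction errors; lower is pragmatically better."""
    return sum(abs(p, ) if False else abs(p - a) for p, a in zip(predictions, actual))

# On the pragmatic criterion, A's information counts as "true" relative
# to B's, because it yields the more accurate predictions.
assert total_error(predictions_a, observed) < total_error(predictions_b, observed)
```

Nothing in this comparison requires that either store holds propositions "about" the world; success at prediction does all the work, which is exactly what the pragmatic theory of truth proposes.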
Although not advocates of eliminativism, John Shook and Tibor Solymosi argue that pragmatism is a promising program for understanding advancements in neuroscience and integrating that into a philosophical picture of the world.[52]

Criticism[edit]

The reason naturalism cannot be pragmatic in its epistemology starts with its metaphysics. Science tells us that we are components of the natural realm, indeed latecomers in a scheme of things that goes back 13.8 billion years. The universe was not organized around our needs and abilities, and what works for us is just a set of contingent facts that could have been otherwise. Among the explananda of the sciences is the set of things that work for us. Once we have begun discovering things about the universe that work for us, science sets out to explain why these discoveries do so. One explanation for why things work for us must be ruled out as unilluminating, indeed question-begging: that they work for us because they work for us. If something works for us and enables us to meet our needs and wants, then there has to be an explanation of why it does so, reflecting facts about us and the world that produce the needs and the means to satisfy them.[42]

The explanation of why scientific methods work for us must be a causal explanation. It must show what facts about reality make the methods we employ to acquire knowledge suitable for doing so. The explanation has to show that the fact that our methods work (for example, have reliable technological applications) is not a coincidence, still less a miracle or accident. That means there have to be facts, events, and processes operating in reality that brought about our pragmatic success. The demand that success be explained is a consequence of science's epistemology. If the truth of such explanations consists in the fact that they work for us (as pragmatism requires), then it turns out that the explanation of why our scientific methods work is that they work, which is not a satisfying explanation.[42]

Qualia[edit]

Another problem for the eliminativist is the consideration that human beings undergo subjective experiences and hence that their conscious mental states have qualia. Since qualia are generally regarded as characteristics of mental states, their existence does not seem compatible with eliminativism.[53] Eliminativists such as Daniel Dennett and Georges Rey respond by rejecting qualia.[54][55] This is seen as problematic by opponents of eliminativism, since many claim that the existence of qualia seems perfectly obvious. Many philosophers consider the "elimination" of qualia implausible, if not incomprehensible. They assert that, for instance, the existence of pain is simply beyond denial.[53]

Admitting that the existence of qualia seems obvious, Dennett nevertheless states that "qualia" is a theoretical term from an outdated metaphysics stemming from Cartesian intuitions. He argues that a precise analysis shows the term to be, in the long run, empty and riddled with contradictions. The eliminativist's claim with respect to qualia is that there is no unbiased evidence for such experiences when they are regarded as something more than propositional attitudes.[22][56] In other words, eliminativists do not deny that pain exists; they deny that pain exists independently of its effect on behavior. Influenced by Ludwig Wittgenstein's Philosophical Investigations, Dennett and Rey have defended eliminativism about qualia even when other portions of the mental are accepted.

Quining qualia[edit]

Daniel Dennett offers philosophical thought experiments intended to show that qualia do not exist.[57] First, he lists five properties traditionally ascribed to qualia:

  1. They are “directly” or “immediately” graspable during our conscious experiences.
  2. We are infallible about them.
  3. They are “private”: no one can directly access anyone else’s qualia.
  4. They are ineffable.
  5. They are “intrinsic” and “simple” or “unanalyzable.”

Inverted qualia[edit]

The first thought experiment Dennett uses to show that qualia lack the listed properties involves inverted qualia. The inverted qualia case concerns two people who could have different qualia and yet exhibit all the same external physical behavior. The qualia supporter might then present an "intrapersonal" variation: suppose a devious neurosurgeon fiddles with your brain, and you wake up to discover that grass looks red. Would this not be a case where we could confirm the reality of qualia, by noticing how the qualia have changed while every other aspect of conscious experience remains the same?

Not quite, Dennett replies via the next intuition pump, "alternative neurosurgery". There are two different ways the neurosurgeon might have accomplished the inversion. First, she might have tinkered with something "early on", so that the signals coming from the eye when you look at grass carry the information "red" rather than "green"; this would be a genuine qualia inversion. Alternatively, she might instead have tinkered with your memory. Here your qualia would remain the same, but your memory would be altered so that your current green experience contradicts your earlier memories of grass. You would still feel that "the color of grass has changed"; only in this case it is not the qualia that have changed, but your memories. Would you be able to tell which of these scenarios is correct? No: your perceptual experience tells you that something has changed, but not whether your qualia have changed. Dennett concludes that since (by hypothesis) the two different surgical interventions can produce exactly the same introspective effects while only one of them inverts the qualia, nothing in the subject's experience can favor one hypothesis over the other. Unless the subject seeks outside help, the state of his own qualia must be as unknowable to him as the state of anyone else's qualia. This is hardly the privileged access or immediate acquaintance the friends of qualia had supposed qualia to enjoy. It is questionable, in short, that we have direct, infallible access to our conscious experience.

The experienced beer drinker[edit]

The second thought experiment involves beer. Many people think of beer as an acquired taste: one's first sip is often unpleasant, but one gradually comes to enjoy it. But what, Dennett asks, is the "it" here? Compare the flavor of that first taste with the flavor now. Does the beer taste exactly the same then and now, only now you like a taste that you previously disliked? Or has the way beer tastes gradually shifted, so that the taste you disliked at the beginning is not the very same taste you like at the end? In fact, most people simply cannot tell which is the correct analysis. But that is to give up again on the idea that we have special and infallible access to our qualia. Further, when forced to choose, many people feel that the second analysis is more plausible. If one's reactions to an experience are in any way constitutive of it, however, the experience is not so "intrinsic" after all, and another supposed property of qualia falls.

Inverted goggles[edit]

The third thought experiment makes use of inverting goggles. Scientists have devised special eyeglasses that invert up and down for the wearer: when you put them on, everything looks upside down, and subjects at first can barely walk around without stumbling. But when subjects wear the goggles for a while, something surprising occurs: they adapt and become able to walk around as easily as before. When asked whether they adapted by re-inverting their visual field or by simply getting used to walking around in an upside-down world, they cannot say. So, as in the beer-drinking case, either we simply do not have the special, infallible access to our qualia that would allow us to distinguish the two cases, or the way the world looks to us is actually a function of how we respond to the world, in which case qualia are not "intrinsic" properties of experience.

Criticism[edit]

That you must appeal to third-person neurological evidence to determine whether your memory of your qualia has been tampered with does not seem to show that your qualia themselves, past or present, can be known only by appealing to that evidence. You might, for all Dennett has said, still be directly aware of your qualia from the first-person, subjective point of view even if you do not know whether they are the same as or different from the qualia you had yesterday, just as you might genuinely be aware of the article in front of you even if you do not know whether it is the same as or different from the article you saw yesterday. Questions about memory do not necessarily bear on the nature of your awareness of objects present here and now (even if they have an obvious bearing on what you can justifiably claim to know about such objects), whatever those objects happen to be. Dennett's assertion that scientific objectivity requires appealing exclusively to third-person evidence appears mistaken. What scientific objectivity requires is not denial of the first-person subjective point of view, but rather a means of communicating intersubjectively about what one can grasp only from that point of view. Given the relational structure that first-person phenomena like qualia appear to exhibit, a structure that Carnap devoted great effort to elucidating, such a means seems available: we can communicate what we know about qualia in terms of their structural relations to one another. Dennett's position rests on a failure to see that qualia's being essentially subjective is fully compatible with their being relational or non-intrinsic, and thus communicable.

This communicability ensures that claims about qualia are epistemologically objective, that is, they can in principle be grasped and evaluated by all competent observers, even though they are claims about phenomena that are arguably not metaphysically objective, that is, about entities that exist only as grasped by a subject of experience. It is only the former sort of objectivity that science requires. It does not require the latter, and cannot plausibly require it if the first-person realm of qualia is what we know better than anything else.[58]

Illusionism[edit]

Illusionism is an active program within eliminative materialism to explain consciousness as an illusion. It is promoted by the philosophers Daniel Dennett, Keith Frankish, and Jay Garfield, and the neuroscientist Michael Graziano.[59][60] Graziano advances the attention schema theory of consciousness; if the theory gains support from neuroscience, it would explain consciousness as an illusion.[61][62] According to David Chalmers, proponents argue that once consciousness can be explained as an illusion, without the need to suppose a realist view of consciousness, a debunking argument can be constructed against realist views of consciousness.[63] This line of argument draws on other debunking arguments, such as the evolutionary debunking argument in metaethics. Such arguments note that morality can be explained by evolution without positing moral realism, and that this provides a sufficient basis for debunking belief in moral realism.[37]

Debunking Argument for Illusionism (version 1):[63]

  1. There is a correct explanation of our beliefs about consciousness that is independent of consciousness.
  2. If there is a correct explanation of our beliefs about consciousness that is independent of consciousness, those beliefs are not justified.
  3. Our beliefs about consciousness are not justified.

Debunking Argument for Illusionism (version 2):[63]

  1. There is an explanation of our phenomenal intuitions that is independent of consciousness.
  2. If there is an explanation of our phenomenal intuitions that is independent of consciousness, and our phenomenal intuitions are correct, their correctness is a coincidence.
  3. The correctness of phenomenal intuitions is not a coincidence.
  4. Our phenomenal intuitions are not correct.
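The skeleton of version 2 is a short propositional derivation. As a sketch, it can be checked in a proof assistant such as Lean; the symbols E, C, and K below are our labels for the premises, not Chalmers's notation:

```lean
-- E : there is an explanation of our phenomenal intuitions
--     that is independent of consciousness          (premise 1)
-- C : our phenomenal intuitions are correct
-- K : the correctness of those intuitions is a coincidence
example (E C K : Prop)
    (h1 : E)             -- premise 1
    (h2 : E → C → K)     -- premise 2
    (h3 : ¬K) :          -- premise 3
    ¬C :=                -- conclusion 4: our intuitions are not correct
  fun hC => h3 (h2 h1 hC)
```

The derivation makes plain that the argument's force rests entirely on the premises, in particular on premise 3, which realists about consciousness typically accept and illusionists deny.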

Efficacy of folk psychology[edit]

Some philosophers argue that folk psychology is a quite successful theory.[11][64][65] Simulation theorists doubt that people's understanding of the mental can be explained in terms of a theory at all; rather, they argue that people's understanding of others is based on internal simulations of how they themselves would act and respond in similar situations.[9][10] Jerry Fodor is among those who defend folk psychology's success as a theory, on the grounds that it makes for an effective means of everyday communication that can be accomplished with few words, an effectiveness that complex neuroscientific terminology could never achieve.[11]

See also[edit]

References[edit]

  1. ^ Ramsey, William (2016-01-01). "Eliminative Materialism". In Zalta, Edward N. (ed.). The Stanford Encyclopedia of Philosophy (Winter 2016 ed.). Metaphysics Research Lab, Stanford University.
  2. ^ a b c d Lycan, W. G. & Pappas, G. (1972) "What is eliminative materialism?" Australasian Journal of Philosophy 50:149-59.
  3. ^ a b c Rey, G. (1983). "A Reason for Doubting the Existence of Consciousness", in R. Davidson, G. Schwartz and D. Shapiro (eds), Consciousness and Self-Regulation Vol 3. New York, Plenum: 1-39.
  4. ^ a b c d e Ramsey, William, "Eliminative Materialism", The Stanford Encyclopedia of Philosophy (Fall 2008 Edition), Edward N. Zalta (ed.), URL = <http://plato.stanford.edu/archives/fall2008/entries/materialism-eliminative/> Section 4.2.
  5. ^ a b c Rorty, Richard (1970). "In Defence of Eliminative Materialism" in The Review of Metaphysics XXIV. Reprinted Rosenthal, D.M. (ed.) (1971)
  6. ^ a b c Feyerabend, P. (1963) "Mental Events and the Brain" in Journal of Philosophy 40:295-6.
  7. ^ Churchland, Patricia; Churchland, Paul (1998). On the contrary : critical essays, 1987-1997. MIT Press. ISBN 9780262531658. OCLC 42328879.
  8. ^ a b Ramsey, William. "Eliminative Materialism", The Stanford Encyclopedia of Philosophy, http://plato.stanford.edu/entries/materialism-eliminative/#SpeProFolPsy
  9. ^ a b Gordon, R. (1986). Folk psychology as Simulation, Mind and Language 1: 158-171.
  10. ^ a b Goldman, A. (1992). In Defense of the Simulation Theory, Mind and Language7: 104-119.
  11. ^ a b c Fodor, Jerry (1987). Psychosemantics : the problem of meaning in the philosophy of mind. MIT Press. ISBN 9780262061063. OCLC 45844220.
  12. ^ a b Boghossian, P. (1990). "The Status of Content."Philosophical Review. 99: 157-84.
  13. ^ Jackson, F. (1982) "Epiphenomenal Qualia", The Philosophical Quarterly 32:127-136.
  14. ^ Sellars W. (1956). "Empiricism and the Philosophy of Mind", In: Feigl H and Scriven M (eds) The Foundations of Science and the Concepts of Psychology and Psychoanalysis: Minnesota Studies in the Philosophy of Science, Vol. 1. Minneapolis: University of Minnesota Press: 253-329. online
  15. ^ a b Savitt, S. (1974). Rorty's Disappearance Theory, Philosophical Studies 28:433-36.
  16. ^ Quine, W.V.O. (1960) Word and Object. MIT Press. Cambridge, Massachusetts (p. 265)
  17. ^ a b Niiniluoto, Ilkka. Critical Scientific Realism. Pg 156. Oxford University Press (2002). ISBN 0-19-925161-4.
  18. ^ Skinner, B.F. (1971) Beyond Freedom and Dignity. New York: Alfred Knopf.
  19. ^ a b c Churchland, P.S. (1986) Neurophilosophy: Toward a Unified Science of the Mind/Brain. Cambridge, Massachusetts: MIT Press.
  20. ^ a b c Churchland, P.M. and Churchland, P.S. (1998). Intertheoretic Reduction: A Neuroscientist's Field Guide. On the Contrary: Critical Essays, 1987-1997. Cambridge, Massachusetts, The MIT Press: 65-79.
  21. ^ Dennett, D. (1978) The Intentional Stance. Cambridge, Massachusetts: MIT Press.
  22. ^ a b c Dennett, D. (1988) "Quining Qualia" in: Marcel, A and Bisiach, E (eds), Consciousness in Contemporary Science, 42-77. New York, Oxford University Press.
  23. ^ Churchland, P.M. (1985). "Reduction, Qualia and the Direct Inspection of Brain States," in Journal of Philosophy, 82, 8-28.
  24. ^ Churchland, P.M. (1992). A Neurocomputational Perspective: The Nature of Mind and the Structure of Science. Cambridge, Massachusetts: MIT Press. ISBN 0-262-03151-5. Chapt. 3
  25. ^ Tomasik, Brian (2014-08-09). "The Eliminativist Approach to Consciousness". The Center on Long-Term Risk. Retrieved 2020-05-17.
  26. ^ Anthis, Jacy (2018-06-21). "What is sentience?". Sentience Institute. Retrieved 2020-05-17.
  27. ^ Carruthers, P. & Smith, P. (1996) Theories of Theories of Mind. Cambridge: Cambridge University Press
  28. ^ Heal, J. (1994) "Simulation vs. Theory-Theory: What's at Issue?" In C. Peacocke (ed.), Objectivity, Simulation and the Unity of Consciousness Oxford: Oxford University Press.
  29. ^ a b Churchland, P.M. (1981) Eliminative Materialism and the Propositional Attitudes. Journal of Philosophy 78(2): 67-90.
  30. ^ Jackson, F. & Pettit, P. (1990). "In Defense of Folk Psychology". Philosophical Studies 59: 31-54.
  31. ^ Horgan, T. and Graham, G. (1990). In Defense of Southern Fundamentalism, Philosophical Studies 62: 107-134
  32. ^ Dennett, D. (1991). Two Contrasts: Folk Craft Versus Folk Science, and Belief Versus Opinion, in: Greenwood, J. (ed), The Future of Folk Psychology. New York: Cambridge University Press.
  33. ^ McLaughlin, B. and Warfield, T. (1994). "The Allure of Connectionism Reexamined", Synthese 101: 365-400.
  34. ^ Fodor, J. and Pylyshyn, Z. (1984). "Connectionism and Cognitive Architecture: A Critical Analysis", Cognition 28: 3-71.
  35. ^ Stich, S. (1983). From Folk Psychology to Cognitive Science. Cambridge, Massachusetts: MIT Press.
  36. ^ Ramsey, W., Stich, S. and Garon, J. (1990). Connectionism, Eliminativism and the Future of Folk Psychology, Philosophical Perspectives 4: 499-533.
  37. ^ a b Rosenberg, Alex (2012). "Morality: The Bad News". The Atheist's Guide to Reality: Enjoying Life without Illusions. W. W. Norton & Company. pp. 94–115. ISBN 9780393344110.
  38. ^ Rosenberg, Alex (2019). "What Exactly Was Kaiser Thinking; Can Neuroscience Tell Us What Talleyrand Meant?". How History Gets Things Wrong: The Neuroscience of Our Addiction to Stories. The MIT Press. pp. 95–111. ISBN 9780262537995.
  39. ^ a b c d e f g h Rosenberg, Alex. "Eliminativism without Tears" (PDF).
  40. ^ Fodor, Jerry (2011). What Darwin Got Wrong. Farrar, Straus and Giroux. ISBN 0374288798.
  41. ^ a b c d e f g Rosenberg, Alex (2012). "How Jerry Fodor slid down the slippery slope to Anti-Darwinism, and how we can avoid the same fate". European Journal for Philosophy of Science.
  42. ^ a b c Rosenberg, Alex (2018). "Philosophical challenges for scientism (and how to meet them?)". Scientism: Prospects and Problems. pp. 83–105. ISBN 9780190462758.
  43. ^ Dennett, Daniel (1996). "The Evolution of Meanings". Darwin's Dangerous Idea: Evolution and the Meanings of Life. Simon & Schuster. pp. 401–428. ISBN 9780684824710.
  44. ^ a b Dennett, Daniel (2013). "Radical Translation and a Quinian Crossword Puzzle". Intuition Pumps And Other Tools for Thinking. W. W. Norton & Company. pp. 175–178.
  45. ^ Frankish, Keith; Goff, Philip (2017). "Is Realism about Consciousness Compatible with a Scientifically Respectable World View? A response to Keith Frankish's 'Illusionism as a Theory of Consciousness'". Illusionism: as a theory of consciousness (PDF). Imprint Academic.
  46. ^ Horgan, Terence; Tienson, John (2002). "The Intentionality of Phenomenology and the Phenomenology of Intentionality".
  47. ^ a b Lycan, W. "A Particularly Compelling Refutation of Eliminative Materialism" (online). Retrieved Sept. 26, 2006.
  48. ^ John Polkinghorne points out that such philosophers expect more attention to their works than "we would give to the scribblings of a mere automaton".
  49. ^ Boghossian, P. (1991). "The Status of Content Revisited." Pacific Philosophical Quarterly. 71: 264-78.
  50. ^ Devitt, M. & Rey, G. (1991). Transcending Transcendentalism in Pacific Philosophical Quarterly 72: 87-100.
  51. ^ Mountcastle, V.B.; Steinmetz, M.A.; Romo, R. (September 1990). "Frequency discrimination in the sense of flutter: psychophysical measurements correlated with postcentral events in behaving monkey" (PDF). The Journal of Neuroscience. 10 (9): 3032–3044. doi:10.1523/JNEUROSCI.10-09-03032.1990.
  52. ^ Shook, John; Solymosi, Tibor (2014). Pragmatist Neurophilosophy: American Philosophy and the Brain. Bloomsbury.
  53. ^ a b Nagel, T. 1974 "What is it like to be a Bat?" Philosophical Review, 83, 435-456.
  54. ^ Rey, G. (1988). A Question About Consciousness, in H. Otto & J. Tuedio (eds), Perspectives on Mind. Dorderecht: Reidel, 5-24.
  55. ^ Dennett, D. (1978). The Intentional Stance. Cambridge, Massachusetts: MIT Press.
  56. ^ Dennett, Daniel Clement (1991). "Qualia Disqualified; A philosophical Fantasy: Inverted Qualia". Consciousness Explained. Little, Brown and Company. pp. 369–412. ISBN 9780316180658.
  57. ^ Dennett, Daniel (1993). "Quining Qualia".
  58. ^ Feser, Edward (2006). "Consciousness; Eliminativism". Philosophy of Mind (A Beginner's Guide). Oneworld Publications. pp. 116–121. ISBN 9781851684786.
  59. ^ Dennett, Daniel (1991). Consciousness Explained. Little, Brown and Company.
  60. ^ Frankish, Keith (2017). Illusionism: as a theory of consciousness. Imprint Academic.
  61. ^ Graziano, Michael (2013). Consciousness and the Social Brain. Oxford University Press.
  62. ^ Graziano, Michael (2019). Rethinking Consciousness: A Scientific Theory of Subjective Experience. W. W. Norton & Company.
  63. ^ a b c Chalmers, David (2018). "The Meta-Problem Of Consciousness" (PDF). Journal of Consciousness Studies.
  64. ^ Kitcher, P. S. (1984). "In Defense of Intentional Psychology", Journal of Philosophy 81: 89-106.
  65. ^ Lahav, R. (1992). "The Amazing Predictive Power of Folk Psychology", Australasian Journal of Philosophy 70: 99-105.

Further reading[edit]

  • Baker, L. (1987). Saving Belief: A Critique of Physicalism, Princeton, NJ: Princeton University Press. ISBN 0-691-02050-7.
  • Broad, C. D. (1925). The Mind and its Place in Nature. London, Routledge & Kegan. ISBN 0-415-22552-3 (2001 Reprint Ed.).
  • Churchland, P.M. (1979). Scientific Realism and the Plasticity of Mind. New York, Press Syndicate of the University of Cambridge. ISBN 0-521-33827-1.
  • Churchland, P.M. (1988). Matter and Consciousness, revised Ed. Cambridge, Massachusetts, The MIT Press. ISBN 0-262-53074-0.
  • Rorty, Richard. "Mind-body Identity, Privacy and Categories" in The Review of Metaphysics XIX:24-54. Reprinted Rosenthal, D.M. (ed.) 1971.
  • Stich, S. (1996). Deconstructing the Mind. New York: Oxford University Press. ISBN 0-19-512666-1.

External links[edit]