MINDS WITHOUT MEANINGS AND NEUROPHILOSOPHY

While reading Fodor and Pylyshyn’s[1] recent book ‘Minds Without Meanings’, my mind was constantly drawn back to Paul Churchland’s (2012) book ‘Plato’s Camera’. Fodor and Churchland are philosophers, while Pylyshyn is a cognitive scientist. Despite the fact that Churchland and Fodor are philosophers, their respective books are chock-full of empirical data and experimental results as well as theoretical claims. Both Churchland and F and P consider their theories to be empirical theories, subject to empirical refutation like any other theory in psychology and neuroscience. Churchland draws his data primarily from neuroscientific evidence, while F and P primarily use data drawn from cognitive science. In this blog I will consider some key areas of disagreement between F and P and Churchland, reflect on which theory best accounts for the data, and propose some further experiments which will help us decide between the respective theorists’ views on mind and meaning.

F and P begin their book by outlining the nine working assumptions that they make: (1) they are realists about Belief/Desire psychology; (2) they are Naturalists; (3) they accept the Type/Token distinction; (4) they accept the psychological reality of linguistic posits; (5) they assume that propositions have compositional structure; (6) they assume that mental representations have compositional structure; (7) they accept the Representational Theory of Mind; (8) they accept the Computational Theory of Mind; and (9) they argue that thought is prior to language.

There are many objections that could be made to any of the above assumptions; I laid out where I stand in relation to all of them in my last blog. Here I will discuss some assumptions that Churchland objects to. One area where he has always disagreed with Fodor is the language of thought thesis; as a result, he would have serious issues with assumptions 1 and 9 above. Here is Churchland on the Language of Thought argument:

“For now, let me announce that, for better or for worse, the view to be explored and developed in this book is diametrically opposed to the view that humans are capable of cognition precisely because we are born with an innate ‘language of thought’. Fodor has defended this linguaformal view most trenchantly and resourcefully in recent decades, but of course the general idea goes back at least to Kant and Descartes. My own hypothesis is that all three of these acute gentlemen have been falsely taken in by what was, until recently, the only example of a systematic representational system available to human experience, namely, human language. Encouraged further by our own dearly beloved Folk Psychology, they have wrongly read back into the objective phenomenon of cognition-in-general a historically accidental structure that is idiosyncratic to a single species of animal (namely, humans), and which is of profoundly secondary importance even there. We do of course use language-a most blessed development we shall explore in due course-but language like structures do not embody the basic machinery of cognition. Evidently they do not do so for animals and not for humans either, because human neuronal machinery, overall differs from that of other animals in various small degrees, but not in fundamental kind.” (Paul Churchland ‘Plato’s Camera’ p.5)

“As noted earlier, Jerry Fodor is the lucid, forthright, and prototype perpetrator on this particular score, for his theory of cognitive activity is that it is explicitly language like from its inception (see, e.g. Fodor 1975)-a view that fails to capture anything of the very different, sublinguistic styles of representation and computation revealed to us by the empirical neurosciences and by artificial neuromodeling. Those styles go wholly unacknowledged. This would be failure enough. However, the ‘Language of Thought’ hypothesis fails in a second monumental respect, this time ironically by undervaluing the importance of language. Specifically, it fails to acknowledge the extraordinary cognitive novelty that the invention of language represents, and the degree to which it has launched humankind on an intellectual trajectory that is impossible for creatures denied the benefits of that innovation, that is to creatures confined only to the first and second level of learning.” (ibid p.26)

Some of Churchland’s reasoning above is unfair to F and P. So, for example, he argues that animals’ and babies’ neuronal activity doesn’t differ from ours to any great degree. From this fact he concludes that, since neither animals nor babies have language but they share a lot of neural machinery with us, there seems little point in assuming that humans’ primary thought processes involve a language of thought. But this is just a question-begging assumption. We have reason to believe that children as young as four months have concepts of objects, and there is evidence that children, before learning their first words at twelve months, have expectations of causality, number, etc.[2] There is also evidence that animals think using concepts; in fact, it is a working assumption of most cognitive scientists that they do[3]. The fact that adult humans, non-human animals, and human babies share a lot of neural architecture is in principle neutral on the issue of whether normal human adults think using a language of thought. Nothing in F and P’s work precludes the possibility that animals, and babies who have not yet acquired a public language, are thinking in concepts using a proto-language of thought. The degree to which pre-linguistic children and non-human animals have concepts, and whether they can combine these concepts to think, is an open empirical question[4]. So, given the open-ended empirical nature of the debate, Churchland cannot just assume that, because normal language-speaking humans have a lot of neural tissue in common with children and non-human animals, this has any bearing on the debate over the existence of a language of thought. The fact that he does make this move is simply evidence of his begging the question against F and P.

Churchland also uses another argument that he thinks shows that Fodor’s LoT is false. He argues that, by claiming our thinking is done in a language of thought, Fodor ignores the incredible cognitive benefits conferred on our species by having a public language. This argument simply doesn’t work. Fodor believes that our public language is derived from our private language of thought, but it doesn’t follow from this that a public language is of negligible importance. While a private language of thought will give us the ability to combine concepts in a productive manner, and hence to think a potentially infinite number of thoughts, this solitary mode of thinking has its limits. When we have a public language we have not only our own thoughts to rely on but the thoughts of others. A creature born into a particular culture will inherit the combined wisdom of the society it is born into. If the culture keeps written records, then the child will eventually be able to read the thoughts and experiences of people long dead who lived in different places at different times. A shared language also makes it easier for members of a culture to explain to one another how to do various things, which has huge benefits. So a shared language, with the ability to share information, confers enormous cognitive advantages, and nothing F and P have said denies this fact. So again Churchland’s attack hits the wrong target.

Churchland goes on to make further claims about the LoT hypothesis:

“But I am here making a rather more contentious claim, as will be seen by drawing a further contrast with Fodor’s picture of human cognition. On the Language of Thought (LoT) hypothesis, the lexicon of any public language inherits its meanings directly from the meanings of the innate concepts of each individual’s innate LoT. Those concepts derive their meanings, in turn, from the innate set of causal sensitivities they bear to various ‘detectable’ features of the environment. And finally, those sensitivities are fixed in the human genome, according to this view, having been shaped by many millions of years of biological evolution. Accordingly, every normal human at whatever stage of cultural evolution, is doomed to share the same conceptual framework as any other human, a framework that the current public language is doomed to reflect. Cultural evolution may therefore add to that genetic heritage, perhaps considerably, but it cannot undermine or supersede it. The primary core of our comprehensive conception of the world is firmly nailed to the human genome, and it will not change until the human genome has changed.

I disagree. The lexicon of a public language gets its meanings not from its reflection of an innate LoT, but from the framework broadly accepted or culturally entrenched sentences in which they figure, and by the patterns of inferential behaviour made normative thereby. Indeed, the sublinguistic categories that structure any individual’s thought processes are shaped, to a significant degree, by the official structure of the ambient language in which she was raised, not the other way around” (ibid p.28)

I have to admit that I share Churchland’s scepticism about Fodor’s idea that all our concepts are innate. The conclusion, on the face of it, seems incredible; in fact it is so incredible that the majority of theorists have simply rejected the argument outright. Churchland, for example, doesn’t say what he thinks is wrong with Fodor’s argument. Rather, he simply states that he doesn’t accept Fodor’s conclusions and thinks that concepts get their meaning publicly, through our shared culture and developmental history.

Before assessing Churchland’s alternative it is important to consider what evidence Fodor has to support his views on concepts. ‘Minds Without Meanings’ is his most recent attempt to explicate those views, so it is worth working through the arguments sketched there. In it, F and P argue that current views on the nature of concepts are radically wrongheaded, and they dedicate an entire chapter to showing that all other theories of the nature of concepts fail.

They begin by arguing that concepts are not mental images, and give the following four reasons. (1) We have many concepts that apply to things that we cannot picture. (2) Black Swan Arguments: we can have an image of (A) a black swan, but what about an image (B) that shows that all swans are white? Or an image (C) that shows that (A) and (B) are incompatible with each other? This is not possible, because images cannot depict incompatibility; but we do have conceptual knowledge of incompatibility; therefore images are not concepts. (3) Constituent Structure: images have parts, not constituents. If we take an image we can divide it up in as many different ways as we like and put it back together in any arbitrary way. Concepts, however, are combined according to rules; they have a syntax that governs how they can be put together. Pictures do not follow rules like this, therefore they are not concepts (a toy sketch of this contrast follows below). (4) Leibniz’s Law: mental images supposedly occur in the brain (where else could they occur?), but they cannot be identical with any brain area because they have properties that no brain area has. We can have a mental image of a purple cow, but there is no purple in the brain. Therefore, on pain of breaking Leibniz’s Law, we have to admit that mental images are not brain states. But unless we want to become dualists (which contradicts F and P’s naturalist assumption above) we have to argue that mental images are really only something that seems to exist; in reality they are propositional at base. Therefore, since mental images don’t really exist, they cannot be concepts.
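To make the contrast in (3) concrete, here is a toy sketch of my own (not F and P’s formalism; the lexicon and the combination rule are invented for illustration): an image can be carved up and reassembled any way at all, whereas concepts combine only as a syntax permits.

```python
import numpy as np

# An image has mere parts: it can be carved up any way at all, and the pieces
# can be pasted back together arbitrarily; nothing enforces a 'grammar'.
image = np.arange(16).reshape(4, 4)
pieces = [image[:2, :], image[2:, :]]      # any slicing is allowed
rearranged = np.vstack(pieces[::-1])       # and any re-assembly is allowed too

# Concepts, on F and P's view, have constituents that combine by rule.
# A toy syntax: a MODIFIER may combine with a NOUN, but a NOUN may not combine with a NOUN.
LEXICON = {"BROWN": "MODIFIER", "PURPLE": "MODIFIER", "COW": "NOUN", "DOG": "NOUN"}

def combine(left, right):
    """Rule-governed composition: only MODIFIER + NOUN yields a well-formed complex concept."""
    if LEXICON[left] == "MODIFIER" and LEXICON[right] == "NOUN":
        return f"{left} {right}"
    raise ValueError(f"ill-formed combination: {left} + {right}")

print(combine("BROWN", "COW"))    # BROWN COW: licensed by the rule
try:
    combine("COW", "DOG")         # no rule licenses NOUN + NOUN
except ValueError as err:
    print(err)
```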

Secondly, they argue that concepts are not definitions, because: (1) we have very few definitions of any concepts after over two thousand years of philosophers looking for them. (2) Not all concepts can have definitions: logically, some of them must be primitive concepts in terms of which the others are defined, but we have no way of finding out what these primitive concepts are. Some people argue that the primitive concepts are abstract innate concepts like causality, agent, object, etc.; however, F and P argue that these supposed primitives can be broken down further and so are not really primitive (see Spelke 1990 on objects). The other approach is to say that the primitive concepts are sensory concepts; however, there are few concepts that can be explicated in terms of sensory primitives. (3) Fodor’s Paradox: if concepts were definitions we could not learn any concepts. Take the definition ‘Bachelors are unmarried men’. This means that the concept BACHELOR is the same as the concept UNMARRIED MAN. So to learn BACHELOR is to learn that bachelors are unmarried men. But BACHELOR and UNMARRIED MAN are the very same concept, so it follows that you cannot learn the concept BACHELOR unless you already have the concept UNMARRIED MAN (and vice versa). Therefore you cannot learn the concept at all. So something is obviously radically wrong with the definition story.

Thirdly, they argue that concepts are not stereotypes, because concepts compose but stereotypes do not; therefore concepts are not stereotypes. They explicate this with their famous PET FISH and UNCAT examples: the stereotypical pet fish (a goldfish) cannot be computed from the stereotypical pet (a dog or a cat) together with the stereotypical fish (something like a trout).

Fourthly, they argue that concepts cannot be inferential roles. They claim that if concepts are inferential roles then we need to be able to say which conceptual content supervenes on which inferential roles. However, they note that there are really only two ways of doing this. (1) By appealing to holism: but they argue that if holism is true, and every inference that a concept is involved in is constitutive of it, then the content of one’s concepts alters as fast as one’s beliefs do (Minds Without Meanings p. 55), and batty consequences follow from this. So, for example, two people may agree in some judgement about a concept at time t1, but at t2, because both have had their concepts modified by new information, they no longer even share the same concept. If people have their concepts modified by their own idiosyncratic experience from moment to moment, then communication becomes very difficult (a toy sketch of this consequence follows below). (2) By appealing to analyticity: but we have good Quinean reasons to think that appeal to analyticity is a bad way of reasoning, because we cannot explicate analyticity in a non-circular manner.
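Here is a toy sketch of my own (not F and P’s formalism; the ‘belief sets’ are invented) of the consequence they find batty: if a concept’s content is fixed by every belief it figures in, then a single new belief means two thinkers no longer share the concept.

```python
# On the holist picture sketched here, a concept's content is fixed by all the
# beliefs (inferences) it participates in, so any new belief yields a new concept.

def concept_content(label, beliefs):
    """The holist 'content' of a concept: every belief in which it figures."""
    return frozenset(b for b in beliefs if label in b)

alice_beliefs = {("WATER", "is wet"), ("WATER", "is drinkable")}
bob_beliefs = {("WATER", "is wet"), ("WATER", "is drinkable")}

# At t1 Alice and Bob share the concept WATER.
print(concept_content("WATER", alice_beliefs) == concept_content("WATER", bob_beliefs))  # True

# Bob then learns one idiosyncratic new fact ...
bob_beliefs.add(("WATER", "boils at a lower temperature at altitude"))

# ... and by the holist criterion he and Alice no longer share the concept at t2.
print(concept_content("WATER", alice_beliefs) == concept_content("WATER", bob_beliefs))  # False
```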

The fifth reason is directly relevant to Churchland. F and P do not think that concepts can be explicated in terms of connectionist models; they criticised connectionist models in detail in their 1988 paper, and in ‘Minds Without Meanings’ they give an abbreviated version of that argument. Firstly, they note that connectionist models face serious difficulties from the start, because the distinction between thoughts and concepts is not often noted in the literature. For them a thought is a mental representation that expresses a proposition (ibid p. 47). So on an associationist model WATER may be associated primarily with WET. But, they argue, it wouldn’t be right to equate having the thought ‘Water is wet’ with associating the concept WATER with the concept WET. This is because the thought that ‘Water is wet’ has logical form: we predicate the property of wetness of the stuff water. Once we make this predication we are making a claim that is true or false: true if water has the property of wetness, false otherwise. The thought ‘Water is wet’ has logical form and is made true or false by things in the world. So even if concepts are associative nodes within a semantic network, on F and P’s model concepts are distinct from thoughts, and connectionist models cannot explain thought even if they can explain concepts.
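To make the contrast vivid, here is a minimal sketch of my own (not F and P’s formalism; the association table, the Predication class, and the toy ‘world’ are all invented for illustration): an association between WATER and WET is just a weighted link, whereas the thought ‘Water is wet’ predicates WET of WATER and thereby has truth conditions.

```python
# Associationist picture: concepts are nodes linked by strengths.
# Nothing here is true or false; there is only co-activation.
associations = {
    ("WATER", "WET"): 0.9,    # WATER strongly activates WET
    ("WATER", "COLD"): 0.4,
}

# F and P's point: the thought 'Water is wet' predicates WET of WATER,
# and that predication has truth conditions fixed by the world.
class Predication:
    def __init__(self, predicate, subject):
        self.predicate = predicate
        self.subject = subject

    def is_true_in(self, world):
        """True iff the subject falls in the predicate's extension in 'world'."""
        return self.subject in world.get(self.predicate, set())

# A toy 'world': which things actually have which properties.
world = {"WET": {"WATER", "RAIN"}, "COLD": {"ICE"}}

print(Predication("WET", "WATER").is_true_in(world))    # True: water is wet in this world
print(Predication("COLD", "WATER").is_true_in(world))   # False
# The association table, by contrast, assigns no truth value to anything.
```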

However, despite the fact that they think connectionist models are in principle incapable of explaining what thought is, they agree to bracket this consideration. They then consider the question of whether connectionist models can explain what concepts are. Again they argue that they cannot.

They ask us to think of our total set of concepts as something that can be represented as a graph of finitely many labelled nodes, with paths connecting some of them to others (ibid p. 49). However, there is a severe difficulty with this approach. For the connectionist, the content of a concept, whether it is the concept of a dog or of anything else, is provided by the label. But the connectionist model is supposed to explain what the concept is, and it cannot do this by relying on labels, on pain of circularity. F and P note that Quine is right that most theories of meaning suffer from a serious circularity problem. So if a connectionist wants to explain conceptual content without question-begging, he will need another approach.

F and P argue that if we cannot, on pain of circularity, equate the content of a node with its label, then we must say that the content is simply provided by the nodes and their various connections to other nodes. But there is a problem with this approach: it means that corresponding nodes in isomorphic graphs have the same content, whatever the labels of their connected nodes may be (ibid p. 50). This cannot work, because it means that the concept SQUARE could occupy the same location in one graph that the concept ROUND occupies in another, and the two would then have to count as having the same content. So for F and P this argument shows that connectionist models are in principle incapable of explaining what the content of different concepts is; hence they cannot explain what concepts are.
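Here is a toy sketch of my own of the worry (the graphs and labels are invented; this is not F and P’s notation): described purely structurally, with the labels stripped away, the two graphs below are indistinguishable, so structure alone cannot give SQUARE and ROUND different contents, and putting the labels back in presupposes the very contents the model was meant to explain.

```python
# Graph A: node 0 is labelled SQUARE and is linked to SHAPE (1) and FOUR-SIDED (2).
graph_a_labels = {0: "SQUARE", 1: "SHAPE", 2: "FOUR_SIDED"}
graph_a_edges = {(0, 1), (0, 2)}

# Graph B: node 0 is labelled ROUND and is linked to SHAPE (1) and CURVED (2).
graph_b_labels = {0: "ROUND", 1: "SHAPE", 2: "CURVED"}
graph_b_edges = {(0, 1), (0, 2)}

def structure_only(edges):
    """The label-free description of a graph: nothing but its edge structure."""
    return frozenset(edges)

# Purely structurally, the two graphs are identical ...
print(structure_only(graph_a_edges) == structure_only(graph_b_edges))   # True

# ... so if content were fixed by structure alone, node 0 of graph A (SQUARE)
# and node 0 of graph B (ROUND) would have to have the same content.
print(graph_a_labels[0], graph_b_labels[0])   # SQUARE ROUND
```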

They note that Paul Churchland (2006) tries to get over this difficulty by arguing that basic-level concepts are grounded in sensory perception and that this is how they get their content. This approach, they argue, won’t work because it is vulnerable to the same objections that Berkeley raised against concepts being associated with sense data.

This long detour through the five main objections that F and P make to various theories of conceptual content shows why Fodor originally argued that our concepts must be innate. For him, concepts cannot be explained in terms of mental images, conceived as faint impressions of our sensory experiences. They cannot be explained as definitions derived from basic sensory primitives, or as definitions derived from innate metaphysical concepts of causation, agency, etc. They cannot be explained as something we derive from prototypes and statistical generalisation. They cannot be explained as something derived from inferential roles. And they cannot be explained via connectionist models. Therefore, if we have no other explanation of how concepts are acquired, and we think people have concepts, we will be forced to conclude that our concepts must be innate.

F and P now argue that we can avoid paradoxes about how concepts are learned (and, by extension, the claim that all concepts are innate) if we stop thinking of concepts as things that have intensions. Hence they sketch their purely referential theory of concepts, arguing that, intuitions to the contrary notwithstanding, this approach is viable.

I discussed Fodor’s objections to the various theories of concepts in my last blog. In a nutshell, I think he is right in his criticism of concepts as mental images, but that his argument against concepts being prototypes badly misconstrues Eleanor Rosch’s prototype theory. I think Fodor’s arguments against concepts being definitions are pretty convincing, but his argument against inferential role semantics is very weak. Nonetheless, here I just want to discuss Churchland’s (2012) objections to Fodor’s views on concepts.

Churchland argues that our lexicon is largely determined by the public language we have learned in the idiosyncratic environment we happen to have been born into. Any limited pre-linguistic concepts we have will be to an extent over-written by the conceptual abilities that our particular culture gives us. So he disagrees with what he takes to be Fodor’s position: that all our concepts are written into our genome and cannot be radically changed by the culture we are born into. It is unclear that F and P need to accept Fodor’s old argument for innate concepts, since they now think our concepts are determined entirely by their extensions and that intensions play no role. However, their views are still at odds with Churchland’s claim that the lexicon of our public language radically modifies the concepts we use when thinking. For F and P, since our concepts are determined by their extensions, our public language should not really affect the concepts we use to think about the world.

In the above discussion Churchland claimed that Fodor’s theory of concepts was incorrect. However, he did not engage with Fodor’s arguments that there is no way we can acquire concepts other than their being innate or being determined by their extensions. Obviously, the key argument that Churchland would object to is F and P’s claim that concepts cannot be explicated in terms of connectionist models. Churchland has criticised both Fodor and Pylyshyn for what he views as their inadequate understanding of the nature of connectionist models:

“Fodor briefly turns to address, with more than a little scepticism, the prospects for a specifically ‘connectionist’ solution to his problem, but his discussion is hobbled by an outdated and stick-figured conception of how neural networks function, in both their representational and in their computational activities. His own reluctant summary (Fodor 2000, 46-50) wrongly makes localist coding (where each individual cell possesses a proprietary semantic significance) prototypical of this approach, instead of population or vector coding (where semantic significance resides only in the collective activation patterns across large groups of cells). And it wrongly assimilates their computational activities to the working out of ‘associations’ of various strengths between the localist-coded cells that they contain, instead of the very different business of transforming large vectors into other large vectors. (To be fair to Fodor, there have been artificial networks of exactly the kind he describes: Rumelhart’s now ancient ‘past-tense network’ [Rumelhart and McClelland 1986] may have been his introductory and still-dominant conceptual prototype. But that network was functionally inspired to solve a narrowly linguistic problem, rather than biologically inspired to address cognition in general. It in no way represents the mainstream approaches of current neuroanatomically inspired connectionist research). Given Fodor’s peculiar target, his critique is actually correct. But his target on this occasion is, as it happens a straw man. And in the meantime, vector-coding, vector-transforming feed-forward networks-both biological and artificial-chronically perform globally sensitive abductions as naturally and as effortlessly as a baby breathes in and out.” (ibid p.71)

“I here emphasize this fundamental dissociation, between the traditional semantic account of classical empiricism and the account held out to us by a network-embodied Domain-Portrayal Semantics, not just because I wish to criticize, and reject the former. The dissociation is worth emphasizing because the latter has been mistakenly, and quite wrongly, assimilated to the former by important authors in the recent literature (e.g. Fodor and Lepore 1992, 1999). A state-space or domain-portrayal semantics is there characterized as just a high-tech, vector-space version of Hume’s old concept empiricism. This is a major failure of comprehension, and it does nothing to advance the invaluable debate over the virtues and vices of the competing contemporary approaches to semantic theory. To fix permanently in mind the contrast here underscored, we need only to note that Hume’s semantic theory is irredeemably atomistic (simple concepts get their meanings one by one), while domain-portrayal semantics is irreducibly holistic (there are no ‘simple’ concepts, and concepts get their meanings only as a corporate body). Any attempt to portray the latter as just one version of the former will result in nothing but confusion” (ibid p.88)

In the first of the above quotes Churchland is discussing Fodor’s treatment of the Frame Problem in ‘The Mind Doesn’t Work That Way’. In that book Fodor was criticising attempts to use connectionist models to overcome the frame problem. Churchland complains that Fodor’s interpretation of connectionism is wrong because he is working from antiquated models. Churchland has a point; Fodor has an irritating habit of assimilating any theory that disagrees with his own to another form of empiricism. Nonetheless, Fodor’s misunderstanding has no real bearing on his criticism of connectionist models of concepts: we still don’t have an answer to how concepts get their contents in connectionist models.
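To make Churchland’s contrast concrete, here is a minimal sketch of my own (the network sizes, weights, and inputs are invented; this is not Churchland’s or anyone’s actual model) of the difference between localist coding, where one unit carries one meaning, and population or vector coding, where significance lives in whole activation patterns being transformed layer by layer.

```python
import numpy as np

# Localist picture (the one Churchland says Fodor wrongly treats as typical):
# each unit has its own proprietary meaning, and processing is a matter of
# associations of various strengths between such units.
localist_units = ["DOG", "CAT", "FISH"]
dog_activation = [1.0, 0.0, 0.0]          # 'DOG' just is the firing of unit 0

# Population / vector coding: no single unit means anything on its own; a
# representation is a point in activation space, and computation is the
# transformation of whole activation vectors by successive layers of weights.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(3, 5))              # input layer -> hidden layer weights (toy values)
W2 = rng.normal(size=(5, 4))              # hidden layer -> output layer weights

def feed_forward(x):
    """Transform one activation vector into another, layer by layer."""
    hidden = np.tanh(x @ W1)
    return np.tanh(hidden @ W2)

# Two similar inputs land on nearby points in the output activation space;
# the 'meaning' is carried by the whole pattern, not by any single unit.
print(feed_forward(np.array([0.9, 0.1, 0.0])))
print(feed_forward(np.array([0.8, 0.2, 0.0])))
```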

However, whatever we make of F and P’s criticisms of connectionist theories of concept content, Churchland has another argument against their view that concepts are entirely determined by their extensions:

“Before leaving this point, let me emphasize that this is not just another argument for semantic holism. The present argument is aimed squarely at Fodor’s atomism in particular, in that the very kinds of causal/informational connections that he deems necessary to meaning are in general impossible, save as they are made possible by the grace of the accumulated knit of background knowledge deemed essential to meaning by the semantic holist. That alone is what makes subtle, complex, and deeply context dependent features perceptually discriminable by any cognitive system. Indeed it is worth suggesting that the selection pressures to produce these ever more penetrating context-dependent discriminative responses to the environment are precisely what drove the evolutionary development of multi-layered networks and higher cognitive processes in the first place. Without such well-informed discriminative processes thus lifted into place, we would all be stuck at the cognitive level of the mindless mercury column in the thoughtless thermometer and the uncomprehending needle position of the vacant voltmeter.

Beyond that trivial level, therefore, we should adopt it as a (pre-revolutionary) principle that there can be “No Representation without at least some comprehension”… In sum, no cognitive system could ever possess the intricate kinds of causal or informational sensitivities variously deemed necessary by atomistic semantic theories, save by virtue of its possessing some systematic grasp of the world’s categorical/causal structure. The embedding network of presumptive general information so central to semantic holism is not the post facto ‘luxury’ it is on Fodor’s approach. It is epistemologically essential to any discriminative system above the level of an earthworm.” (ibid p.97)

The very points that Churchland makes above are addressed in chapters 4 and 5 of F and P’s ‘Minds Without Meanings’. This part of the dispute is an entirely empirical one, and I will address it in my next blog.

[1] Fodor and Pylyshyn will be referred to as F and P throughout this blog.

[2] For evidence of children’s conceptual abilities pre-learning a public language see Spelke 1990, Carey 2009, and Bloom 2000.

[3] For an excellent discussion of animal cognition and whether animals have concepts see Kristin Andrews ‘The Animal Mind: An Introduction to the Philosophy of Animal Cognition’ (2014)

[4] I discussed this question in my blog-post ‘What are Concepts and which Creatures have them?’
