Keep in mind that there is nothing obviously wrong with assuming that human concepts are complex and composed in some way; that assumption cannot, as indicated, be ruled out on Fodor's grounds. It is also independently plausible because (ignoring for good reasons Fodor's (1998, 2008) very speculative and externalist-driven accounts of how ‘atomic’ concepts could be acquired) composition offers the only viable acquisition alternative. If so, let us assume some kind of ‘guided’ compositional clustering operation that, so far as we know, could be shared with animals. Then let us look elsewhere for an explanation of the uniqueness of human concepts. One plausible line of inquiry is looking to the features that make up
human conceptual capacities (+/–ABSTRACT, POLITY, INSTITUTION, and so on) and inquiring whether at least some of them are likely to be duplicated in animals’ concepts. It is difficult to be confident when speaking of the conceptual capacities of animals, but there is, I think, reason to doubt that animals have such features, or that – if they do – they are capable of employing what they have. While humans might describe and think of a
troop of Hamadryas baboons as having a single form of male-dominant ‘government’ in their social system, it is unlikely that the baboons themselves would think of their form of social organization at all, much less think of it as one of a range of possible forms of political/social organization – authoritarian patriarchic tribal hierarchy, cooperative democratic system, plutocracy, matriarchic statist-capitalist economy . . . Olive baboons are of their natures matriarchal; Hamadryas baboons are definitely not. And even if a troop of Hamadryas baboons should through loss of dominant males become matriarchal, it is not as if the remainder of the troop deliberated whether to become so, and chose to. It appears that they have nothing like the capacity for abstractness afforded routinely in our notions of social institution, or for that matter classes of fruit that include a wide range of different species. Nor could Hamadryas or olive baboons or any other ape think of their organization and the territory over which they have hegemony as we do. Where we can think of London as a territory and set of buildings
or as an institution that could move to another region, nothing in ape behaviors or communicative efforts exhibits this ability to adopt either, or both, ways of thinking. Nor likely could any think of their territories in the following way: “London [the volume of air in its region] is polluted” or “London [its voting population] is voting Conservative this time.” Their concepts for their organization (assuming they have such) and for the territories over which they have hegemony just do not allow for this, nor would either be seen as a species of more general cases (POLITY?) that would invite speculation about whether they could re-organize in a different way, and plan to do so if they decide to. Further, and perhaps most important, if an ape should have or ever develop a concept analogous to our RIVER – say, RIVER-B (‘RIVER-for-a-variety-of-baboon’) – its concept's features would very likely be restricted to those that can readily be extracted from sensory input, and its use would be restricted to meeting current demands, not allowing speculation about what one can expect to find in particular forms of geographic terrain. In a similar vein, it is hard to imagine a chimp developing a homologue to human concepts such as
JOE SIXPACK, SILLIEST IRISHMAN, or – for that matter – SILLY and IRISHMAN. In addition, on at least some plausible views of the lexicon and the meaning-relevant information it contains, mental lexicons must provide in some manner what are called “functional features,” such as TNS for tense (thought of syntactically and structurally) and several others that play roles in the composition of sentential concepts. These, clearly, are not in an ape's repertoire, and they certainly count as ‘abstract.’
The scope of animal concepts appears to be restricted in the ways that animal communication studies of the sort found in the work of
Gallistel and others indicate. To emphasize points made above: their conceptual features do not permit them to refer to the class of fruits, to forms of social institution, to rivers as channels with liquids that flow (distinct from creeks, streams, rivulets, etc.), to creatures such as humans, donkeys, and even ghosts and spirits with psychic continuity, to doors as solid and apertures, and so on. Rather, their concepts appear to involve
ways of gathering and organizing sensory inputs, not abstract notions such as INSTITUTION, PSYCHIC CONTINUITY, and the like that have dominant roles in human concepts. No doubt they have something like a ‘theory of mind’ and can respond to the actions of conspecifics in ways that mirror their own action (and deceit, etc.) strategies and routines. However, there is no obvious reason to assume of them that they understand a conspecific in terms of its executing action plans (projects), deliberating what to do next, and the like. That requires symbol systems that provide for ways of organizing concepts in the ways humans can, given language. Do they think? Why not? We say computers do, and it is apparent that little but usage of the commonsense term “think” turns on that. But can they think in articulated ways provided by boundless numbers of sententially organized concepts? No. Their lack of Merge indicates that.
Another line of inquiry – suggested obliquely above – notes that
human linguistically expressed conceptual packages allow for the operations of affixation in morphology, and for dissection when they appear at a semantic interface in a compositional sentential structure. The concept FRUIT expressed by the relevant morphological root gets different treatments when subjected to morphological variation: one gets
fruity (which makes the associated concept abstract and adjectival), fruitful (dispositional notion), fruitiness (abstract again), fruit (verbal), refruit (produce fruit again), etc. So far as I know, no other creature has concepts that provide for the relevant kinds of morphosyntactic ‘fiddling.’
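Purely by way of illustration – the affix inventory, feature labels, and glosses below are invented for the sketch, not drawn from any analysis in the text – one might picture this ‘fiddling’ as operations that take a root's conceptual package and return a new category together with a shifted construal:

```python
# A toy sketch (not a theory of morphology): affixes as functions that take a
# root's feature bundle and return a new category plus a rough semantic gloss.
# All names, features, and glosses here are invented for illustration.

ROOT_FRUIT = {"root": "fruit", "category": "N", "features": {"+EDIBLE", "+CLASS"}}

AFFIXES = {
    "-y":         lambda r: {**r, "category": "A", "gloss": "abstract, adjectival (fruity)"},
    "-ful":       lambda r: {**r, "category": "A", "gloss": "dispositional (fruitful)"},
    "-iness":     lambda r: {**r, "category": "N", "gloss": "abstract nominal (fruitiness)"},
    "zero-affix": lambda r: {**r, "category": "V", "gloss": "verbal (to fruit)"},
    "re- + zero": lambda r: {**r, "category": "V", "gloss": "iterative verbal (refruit)"},
}

for affix, derive in AFFIXES.items():
    derived = derive(ROOT_FRUIT)
    print(f"fruit + {affix:12s} -> {derived['category']}: {derived['gloss']}")
```

The only point the sketch is meant to make is that the root's conceptual package is preserved while its category and construal are systematically altered – precisely the kind of operation for which there is no evidence in non-human conceptual systems.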
As for dissection: when one encounters sentences such as
Tom is a pig (where Tom is an 8-year-old child), the circumstance of use and the structure of the sentence that predicates being a pig of Tom require for interpretation that one focus on a (usually small) subset of the features commonly taken to be piggish, treating these as the ones ‘called for’ by a specific state of Tom.
If he is wolfing (another metaphor) down (still another) pizza, GREEDY is likely to be one of the features dissected from the others and employed in this circumstance. Human languages and the concepts that they express provide for this kind of dissection, and the desire for
creativity in use routinely exhibited in metaphor depends on it. Perhaps animals have complexes of features for PIG. It is unlikely, though, that they have GREEDY (an abstract notion applied to more than pigs) or that their cognitive systems are equipped to easily dissect one part of their PIG concepts from others and apply that part to a situation, as is common with constructions that call for metaphorical readings. I assume that dissection applies only at an interpretational interface, SEM. Until that point, as indicated, a lexical item's semantic features can be thought of as carried along in a derivation as an atomic ‘package.’ Arguably, however, an animal's concepts remain functionally atomic all the way through whatever kinds of cognitive operations are performed on them. What is known about animal communication systems, and about the limited degree of flexibility in their behaviors, environments, and organization, suggests this.
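To fix ideas, here is a minimal sketch of dissection so construed; the feature labels and the ‘relevance’ test are my own invented stand-ins for whatever contextual mechanism actually does the work:

```python
# Toy illustration of 'dissection' at SEM: a lexical concept is carried through
# the derivation as an atomic package of features; at the interpretive interface,
# the circumstance of use picks out the subset actually 'called for.'
# Feature labels and the relevance test are invented for the sketch.

PIG = {"ANIMAL", "SNOUTED", "GREEDY", "MESSY", "PINKISH"}

def dissect(concept_features, contextual_cues):
    """Return just those features of the concept made salient by the context."""
    return {f for f in concept_features if f in contextual_cues}

# "Tom is a pig," said of an 8-year-old wolfing down pizza:
cues = {"GREEDY", "MESSY"}                 # supplied by the circumstance of use
print(dissect(PIG, cues))                  # {'GREEDY', 'MESSY'} - not SNOUTED or PINKISH
```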
The last two lines of inquiry, and to a degree even the first, point to the fact that the human conceptual materials contained in mental lexicons have properties that might be contributed by, but are certainly exploited by, the compositional operations of a uniquely human language faculty. Were these
properties of human conceptual materials ‘there’ before the introduction of Merge, were they instead invented anew only once the system came into place, or rather do they consist in ‘adaptations’ of prior conceptual materials to a compositional system? I do not attempt to answer that question: I know of no way to decide it one way or another, or to find evidence for a particular proposal. Clearly, however, the concepts humans express in their languages – or at least, many of them – are unique to humans.
I should mention one endless class of concepts that plausibly does depend on Merge. Apes and other creatures lack recursion – at least, in the form found with language. If they lack that, then – as Chomsky suggests – they lack
natural numbers. So NATURAL NUMBER and 53, 914, etc. are all concepts unavailable to other creatures.[5]
There is plenty of evidence of this. While many organisms have an
approximate quantity system, and their approximations respect Weber's Law (as do very young children's), only humans with a partially developed language system have the capacity to enumerate (assuming that they employ it: for discussion, see p. 30). Only humans have the recursive capacity required to develop and employ a number system such as that found in the natural number sequence. Specifically, many organisms can reliably and in short order distinguish sets of objects with 30 members from
those with 15, and with accuracy that decreases in accordance with Weber's Law, sets of 20 from 15, 18 from 15, and so on. However, only humans can reliably distinguish a set with 16 from one with 15 members. They must count in order to do so, employing recursion when they do. The work of Elizabeth Spelke, Marc Hauser, Susan Carey, Randy Gallistel, and some of their colleagues and students offers insight and resources on this and some related issues.
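To make the behavioral contrast concrete, here is a small simulation of my own (the Weber fraction is an arbitrary illustrative value): an approximate quantity system whose noise grows with set size discriminates 30 from 15 almost perfectly, does worse as the ratio shrinks, and falls well short of reliability on 16 versus 15, whereas exact enumeration settles that case trivially.

```python
# Illustrative simulation only: an approximate number system represents a set
# size n as a noisy magnitude whose spread grows in proportion to n (Weber's
# Law), so discrimination depends on the ratio of the two sets being compared.
import random

WEBER_FRACTION = 0.15   # arbitrary value chosen for the illustration

def approximate_estimate(n):
    """Noisy magnitude with scalar variability proportional to n."""
    return random.gauss(n, WEBER_FRACTION * n)

def discrimination_rate(larger, smaller, trials=10_000):
    """Proportion of trials on which the larger set is correctly judged larger."""
    correct = sum(approximate_estimate(larger) > approximate_estimate(smaller)
                  for _ in range(trials))
    return correct / trials

for larger, smaller in [(30, 15), (20, 15), (18, 15), (16, 15)]:
    print(f"{larger} vs {smaller}: correct on about "
          f"{discrimination_rate(larger, smaller):.0%} of trials")

# Exact enumeration (the recursion-dependent route) distinguishes 16 from 15
# every time, with no ratio effect at all.
print("16 vs 15 by counting:", len(range(16)) > len(range(15)))
```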
To summarize: ‘exact’ number concepts aside, it is difficult to draw on the
demonstrated human uniqueness of Merge as it is currently understood to explain why human concepts are unique. We found a more promising way to proceed in the apparently human-specific natures of many of the semantic features that compose human concepts – the fact, for example, that humans even at an extremely young age seem to have the
feature PSYCHIC CONTINUITY built into their concepts for a wide range of organisms, and definitely for humans. Whether this approach ultimately proves successful and explains the difference is unclear. It is reasonably clear, however, that human conceptual resources are indeed
unique.
[1] See Fodor (1998) for his view on “appearance properties.” As for why Chomsky's view of internal MOPs is only ‘like’ Fodor's: Fodor's MOPs are essentially clusters of beliefs, while Chomsky's are clusters of “semantic features” that are carried in a derivation/computation of a sentence/expression from lexical items to a semantic interface SEM. Beliefs have no roles in this story, unless what is on the ‘other side’ of SEM includes systems of I-beliefs, which themselves demand explanation and description in a naturalistic theory of another mental system.
[2] I should emphasize that Fodor's externalism is not the radical one found in the work of – among others – Michael Tye, where the “perceptual content” of a thing is the thing itself (really, commonsense thing – making the view even more puzzling) in its entirety. See Fodor's review of Tye's recent book Consciousness Revisited in the Times Literary Supplement, 16 October 2009. But do not take too seriously Fodor's blaming Putnam for Tye's views. Putnam influenced Fodor's own externalism.
[3] See, however, below and Appendix XII.
[4] Commenting on this paragraph and the difficulties it mentions with determining what is at SEM and what its contents ‘do,’ Chomsky reminded me that, since little (really, nothing) is known about what is on ‘the other side’ of the semantic interface SEM, it is worth mentioning the difference between this case and the phonetic one, PHON, where there is at least some understanding of the relevant articulatory and perceptual systems involved, and thus some understanding of what PHON is and where it should be placed in the mind's architecture. That is a useful reminder not just to me, but to anyone ‘doing semantics,’ whether they pursue the internalist variety I suggest – where syntax and composition serve for compositionality and SEMs ‘configure’ understanding – the different variety of internalism found in Paul Pietroski's work, or various other flavors of internalist and externalist formalist, truth-conditional, Fodorian, and other efforts. That said, I suspect that there are good reasons for adopting an internalist approach, for placing ‘concepts’ in LIs, for relying on morphosyntactic processes for ‘semantic’ compositionality, and for treating SEMs as ‘acting’ in an adverbial form. For further discussion, see below, and Appendix XII.
[5] Obviously apes do not have REAL NUMBER either, but that is irrelevant: neither do most adult humans.
 
Appendix VI: Semantics and how to do it

VI.1 Introduction

Semantics (which I will gloss sometimes as the theory of meaning) is understood by most to be an attempt to construct theories that focus on word–world relationships, whether they be referential (Big Ben referring to a clock and the structure it is in/on) or alethic (based on truth and correctness), so having to do with the truth and correctness of sentences or perhaps propositions (The US invaded Vietnam). Chomsky questions the value of pursuing semantic theories if semantics is understood in this
way. His criticism often focuses on those efforts to construct theories of meaning (semantic theories) that appeal to what the theoretician must suppose are regular word–world connections of the sort required to construct a theory at all. It would not do for theoretical purposes if
mouse were to variably refer to – well, to what, exactly? It does no good to answer, “mice.” The question simply arises again, with an added complication: someone seems to have thought that they have provided an answer, even an illuminating one. The fact is, mouse (and the associated concept MOUSE) can be used by people in whatever context they happen to be in to serve any number of purposes, and to refer to any number of things – computer scrolling devices, a person, any member of any of several species of rodent, a lump of fluff, a toy, a . . . (I do not exclude metaphorical uses; there is no reason to.)
Moreover, to point to the relevance of the discussion of the section above: reference of the
sort that human beings engage in routinely
demands a form of ‘constructivism.’ The use of a concept such as MOUSE assigns to a small grey creature (perhaps
mus musculus) envisaged in some discourse domain not the sensory features of mice (or not just these, as an animal's concept – adjusted to its sensory systems – might), nor properties that mice themselves might actually have in some biophysical science of
mice, but the features that our commonsense concept MOUSE has. (Compare RIVER, HOUSE, etc.) We routinely assign properties such as
psychic continuity to these creatures when we employ the relevant concepts in acts of reference, effectively ‘making’ them into creatures with that property, along with whatever other properties a use of a sentence with
mouse in it might or might not assign. Nothing like that fact – which indicates a form of constructivism on the part of the human mind – is taken up in the usual views of semantics and how to do it. So how do those who attempt semantic theories of the usual sort proceed? I outline some of their aims and strategies and what is wrong with them.
Before proceeding, a caveat: objections to attempts to construct naturalistic semantic theories that involve word–world (head–world) relationships do not apply to
all kinds of formal semantic theories, including some that
introduce mental models conceived as domains (perhaps ‘worlds’) in which the sentences of an I-language are ‘true’ – essentially, true by stipulation. Similar points apply to
efforts to construct “discourse domains.” There may be other objections to efforts like these, but on the face of it, they can be thought of as syntactic (internalist) attempts to ‘express’ the meaning of expressions and their elements, plus any of the formal relations they introduce (some varieties of entailment, for example). In effect, despite use of terms such as “true of” and “refer” or “denote,” they do not go outside the head. Because of this, an internalist can and often does appropriate a range of insightful work in what goes under the name of “formal semantics,” and even parts of pragmatics.
And a reminder: in
remaining in the head, internalist semantics can and should be seen as a form of syntax. Whether internalist semantic efforts focus on the meanings of words and sentences, or on
discourse as in “discourse representation theory” or “dynamic semantics,” the focus (whether practitioners of formal semantics, discourse representation theory, dynamic semantics, and the like concur or not) is on symbols and their potential for employment, not on their actual use by a person on an occasion to refer and say something that he or she holds-true. As indicated, one can
introduce denatured versions of reference and truth to express the potential of a word, sentence, or discourse to be used. One can introduce mental models of varying degrees of complexity. In doing so, one can introduce a ‘relation’ that
Chomsky (1986, 2000) calls “relation R.” It seems to amount to something like this: for each nominal in a referring position in a sentence, place a class of ‘entities’ in the model; for each verb with n arguments, singular sets, or pairs, or triples . . ., and so on. ‘Reference’ in exercises like this is stipulative, as is truth. Why introduce them, then? Intuitively, they seem to respond to a virtually default view of how to understand the meanings of words, sentences, and discourses. I strongly suspect that this view of meaning appears to be the
obvious one because of the
constructivist contributions of the mind (with language) mentioned above. I suspect too that for theoretical purposes there are better ways of capturing what is at stake (ways that avoid the potential for being misled), but if handled carefully, a denatured notion of reference to things and situations in mental models has the advantage that it coordinates internalist (basically syntactic) semantics with formal semantic work that has been in place for a long time. As for ‘real’ reference and truth: for the former, keep in mind that it is something that people do, and do freely. It is not a good topic for naturalistic theorizing. For the latter, it is better to speak of truth-indications, as Chomsky does (see pp. 273–274).
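The flavor of such a stipulated model can be conveyed with a toy construction (entirely my own, not Chomsky's or anyone's formal proposal): referring nominals are assigned classes of ‘entities,’ an n-place verb is assigned a set of n-tuples, and ‘truth’ is nothing more than membership in the stipulated relation.

```python
# A toy 'relation R'-style model, for illustration only: the domain, the entity
# classes assigned to nominals, and the tuple sets assigned to verbs are all
# stipulated, so 'reference' and 'truth' here are internal bookkeeping rather
# than relations to anything outside the head.
from itertools import product

domain = {"d1", "d2", "d3"}                       # stipulated 'entities'

nominals = {                                      # referring nominal -> class of entities
    "London": {"d1"},
    "the Thames": {"d2"},
}
assert all(es <= domain for es in nominals.values())

verbs = {                                         # verb with n arguments -> set of n-tuples
    "flood": {("d2", "d1")},                      # binary: <flooder, flooded>
    "vote": {("d1",)},                            # unary
}

def holds(verb, *nominal_args):
    """Stipulated 'truth': some choice of entities for the nominals lies in
    the tuple set stipulated for the verb."""
    entity_classes = [nominals[n] for n in nominal_args]
    return any(combo in verbs[verb] for combo in product(*entity_classes))

print(holds("flood", "the Thames", "London"))     # True, by stipulation
print(holds("vote", "the Thames"))                # False, by stipulation
```

Nothing in the sketch reaches outside the head; it simply re-expresses, in set-theoretic dress, relations among symbols.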
The theoretical aims of semantic internalists such as Chomsky should be obvious by now, but in case they are not: they want to shift the attention of those who might wish to construct naturalistic theories of meaning from efforts to construct theories of language's use/application (which are bound to fail) to attempts to construct theories of the ‘internal content’ of words such as
mouse and run and the complex expressions in which they appear – and so too for all other ‘words’ (lexical items) and expressions. Accounts that focus on the use of language introduce relations such as reference or denotation to things ‘outside,’ while naturalistic internalist accounts like Chomsky's focus on
attempts to provide biophysical descriptions and explanations of the intrinsic ‘semantic’ (meaning-related as opposed to sound-and-sign-related) properties of linguistic expressions inside the head and the internal combinatory operations (computations) that combine these to form the complex intrinsic contents that sentences/expressions have. If one wants to keep the technical term “semantic theory” at all, they suggest, one must think of a semantic theory as a theory of the intrinsic contents of words and expressions in the head and the ways the language faculty combines them. In this regard, it is like phonology, another form of internalist syntax. No one (with some exceptions noted in the main text) thinks that
phonology, however successful, begins to address, much less solve, the issues raised by acoustic and articulatory phonetics.
Yet externalist semanticists think that they can both address and manage to speak in a responsible way to something similar but even more peculiar, to what is ‘out there’ and how our minds relate to it in a ‘representational’ way. If construed as a naturalistic project, it is hopeless; if not, it is at best a description of how people use language sometimes and, as Wittgenstein noted, cannot be turned into a theory of any sort. If either, it is – as Chomsky pointed out in comments to me when discussing these issues – an effort at what Russell called theft rather than honest toil. The aims of the internalist in contrast are remarkably modest, for they focus entirely on ‘what happens’ in an internal module that allows it to automatically yield intrinsic contents. If the term “intrinsic content” bothers you, substitute for it something like “‘
information’ provided by a linguistic derivation at an I-language's
‘semantic’ interface with other systems in the head.” This simplified picture of semantics on internalist terms might need to be modified or changed to accommodate advances in the theory of language, of course. The point is to indicate the strategy, one that has led to some success.
VI.2 What is wrong with an externalist science of meaning: first pass
 
To many, including most defenders of semantics as it is usually conceived,
semantic internalism seems absurd. Internalism is not rejected because there has been great success in constructing externalist semantic theories. On the contrary: the proposals for how to proceed to construct an externalist semantic theory that can claim to be a science remain programmatic at best with no sign of progress, or identical with internalist model-theoretic efforts, or clearly wrong – a serious fault after several centuries of externalist efforts, one that indicates that something is wrong with the strategy and its basic assumptions. Elementary issues are left untouched, or fobbed off in some manner. It is no surprise. An externalist semanticist with naturalistic scientific intentions cannot hope to deal with the meanings of RIVER, PERSON, MOUSE, CITY, BOOK, or any of the thousands of other concepts routinely expressed at language's semantic interface. Nothing that suits the kinds of features that compose these concepts is actually to be found ‘out there’ in the subject matters of natural sciences that deal with natural objects. For no naturalistic externalist could seriously hold that there is a London ‘out there’ that at the same time (and within the same sentence) is considered for movement upstream to avoid flooding and also seen as a valuable territory with buildings, bridges, streets that everyone (no doubt) will regret abandoning if a move should take place. This abstract/concrete alternation is not an issue for the internal concept/MOP LONDON:
any polity concept invites both ABSTRACT and CONCRETE characterizations. Paul Pietroski nicely illustrates this (2002) with his sentence “France is hexagonal and it is a republic.” But externalist efforts continue, and are far more popular than any of several reasonable internalist options, at least at this point. Even the naïve and the neophytes prefer them. In effect, they are the default positions. Because they are, it is worth trying to state and undermine the assumptions that attract those who maintain externalist versions of natural language semantics. Otherwise, they will continue to attract and distract from serious work.
There is a clue to this popularity in the fact that the naïve and the neophyte are easily drawn to externalist views. It suggests the influence of
commonsense realism. Keeping in mind that the aim is a natural science of linguistic meanings, and remembering that common sense and its practically oriented forms of problem-solving have time and again distracted from
naturalistic scientific research much more than aiding it, this is no surprise. The concepts of common sense are the subject matter of the natural science of the meanings of natural language expressions, but the ways these concepts-meanings are used to think and speak of the world are not. In fact, as we have seen, their uses exhibit the
flexibility and interest- and context-sensitivity that natural language concepts invite and underwrite – flexibility that happily supports the human desire for the satisfactions given by exercising
linguistic creativity. Because of this, these concepts’ uses appear to be beyond the reach of objective naturalistic scientific research, however valuable observations of these uses might prove to be in offering evidence for an internalist and postulational natural science of these concepts. Nevertheless, it is simply a mistake for the natural scientist to look for the meanings of natural language expressions in the uses themselves to which words and sentences are put or the objects and situations that uses of natural language sentences help constitute. No doubt from the point of view of commonsense understanding and the world it envisages, London
is a territory with structures and streets, and it can be moved without moving the territory and its structures and streets. For the practical problem-solving purposes of commonsense understanding, there is nothing wrong with construing a polity in both ways at once. It is in fact a great advantage; it helps underwrite (without exhausting by any means) the cognitive flexibility that dealing with everyday and social issues demands. However, a
science of natural language semantics cannot be content with a London that can have two contrary properties at the same time serving as the natural science referent of the phrase “the meaning/referent of ‘London.’” No target of a natural science of objects outside the head can look like this, conundrums such as photons described as both waves and particles aside: that is just what photons are from the point of view of quantum mechanics, and the ‘puzzle’ is not a puzzle for the science, just a puzzle for commonsense notions of waves and particles. In contrast, a London ‘out there’ with contrary properties is a serious problem for science, however true or false mental models of this London might be. Natural language POLITY concepts inside the head, however, can contain these contrary ‘ways to construe’ in a package of lexical semantic features. And to say that these packages have both ABSTRACT and CONCRETE as semantic features is not to say that they
– LONDON and the family of concepts such as TOWN, CITY, VILLAGE, STATE . . . to which it belongs – are all themselves both abstract and concrete. It is to say that when any one is employed, it and the others can be used to construe something as abstract, as concrete, and even as both in the same sentence (although perhaps in different clauses: “France is a hexagonal republic” is a bit odd). No problems of self-predication should arise either: to let them
arise is to mistake what semantic features are, and what they ‘do’ at a sentence's semantic interface.[1]
Note that from the point of view of naturalistic scientific research, the things and events that constitute the domain of the best science available of a specific domain can be and are taken to exist. We have no better understanding of existence than that the claims of the relevant theory are the best that can be offered, so far as we know at a time. The alternative to this claim of existence for the objects and events of a good theory – some form of instrumentalism, perhaps – is just a way of insisting on the priority of the things and events of common sense joined to a translation scheme that after many tries never succeeds. Getting even more desperate, the
phenomenalist tries to make some odd theoretical objects called “sense impressions,” “sensations,” or “sense data” into the subject matter not only of science, but common sense. Phenomenalism can be ignored, as can
instrumentalism; both are at best hopeless efforts to translate sciences into supposedly more familiar notions. What cannot be ignored are the claims of
commonsense realism; they held even Descartes and Newton (and many lesser scientists) in their grip. But by adopting a ‘two worlds’ view, these claims can be acknowledged and – so long as one does not let them infect the methods and entities of natural science – placed in their proper domain where their effect on science is neutralized. The world of common sense serves the purposes of commonsense understanding. The scientist, however, wants to find out what things are and how they came to be that way in an objective, well-controlled way. For the science of the meanings of natural language expressions, postulating a theory of phenomena found in the head yields this. The theory aims to describe and explain just what the human mind has available to it in terms of the
conceptual resources of
natural languages. In a way, too, it can help make sense of why the commonsense world has the shape and character it does. That fact underscores the differences between the frameworks, and so far as explaining why things seem to be the way they are in the commonsense domain goes, it clearly gives the priority to science and the internalist science of meaning in particular. If so, why again should anyone who wants a science of linguistically expressed concepts/meanings be attracted to externalist speculations?
