The Science of Language

Noam Chomsky

The point
is that if you really accept the achievements of the modern sciences since Newton, what you're trying to do is construct the best explanatory theory you can. And you would like to unify it with other guesses about what the best theories of other phenomena are. But if one of them can't be reduced to one of the others, it doesn't say anything. It just shows you that something is wrong.[C]
JM:
Returning to your remark about language providing innovation, in addition to coordination, integration, and the like. Science – unlike language – seems to offer a different kind of innovation – although again, unique in the animal kingdom. It provides new conceptual materials, new concepts. Language is innovative, but its innovation is compositional; it takes what is available and puts the pieces together.
NC:
The language faculty itself uses the conceptual resources that are available . . . It's a little hard to say what language is ‘itself.’ Does the English language include the word gravitation? We're somewhere in an area now where our linguistic capacities and our science-creating capacities are interacting. We don't understand either of these systems well enough to know where to go on.
JM:
I'm thinking of language as primarily the core system, the computational system that operates so that one form of Merge yields argument structure and another provides various forms of edge effect, such as focus. That [system] seems best suited to dealing with concepts that are – as it were – available – and that will serve our interests as people who use common sense to try to deal with the world and think about what their position is and how they can change it, and the like. Science, on the other hand, really does seem to offer the opportunity to introduce new conceptual materials in a different kind of way.
NC:
The sciences provide completely different kinds of ways of looking at the world, which are completely counter to common sense. In fact, common sense – at least, in the advanced sciences – has been completely abandoned; it's not even a criterion any more. But that's a very modern, very special development, which holds [only in] certain areas.
[1] Descartes offered what amounts to a computational theory of vision, one that indicates that the visual system ‘solves problems’ such as determining visual depth by performing a geometric calculation of sorts entirely in the mind, given ‘input’ concerning the degree of convergence of the eyeballs. It might seem puzzling that he introduced the rudiments of a science of mind that shows how the mind offers humans sensations of depth while insisting that a science of mind that offers sensations of depth and other mental phenomena is out of reach. The puzzle disappears when it becomes clear that what he had in mind by science is a contact mechanics, which – if the discussion above is correct – is heavily tied to common sense and folk physics. For some discussion, see my introduction to the 2009 edition of Chomsky (1966/2002/2009).
 
12 Chomsky's intellectual contributions
 
JM:
Noam, let me ask about what you take to be your most important contributions. Do you want to say anything about that?
NC:
Well, I think that the idea of studying language in all its variety as a biological object ought to become a part of future science – and the recognition that something very similar has to be true of every other aspect of human capacity. The idea that – there was talk of this in Aspects, but I didn't really spell it out – the belief . . .
 
[Wait; I'll start over. B. F.] Skinner's observation is correct that the logic of behaviorism and the logic of evolution are very similar. But I think his conclusion – and the conclusion of others – is wrong, namely, that this shows that they're both correct. Rather, it shows that they're both incorrect, because the logic of behaviorism doesn't work for growth and development, and for the same reason, the notion of natural selection is only going to work in a limited way for evolution.[1]
So there are other factors. As I said in Aspects, there's certainly no possibility of thinking that what a child knows is based on a general procedure applied to experience, and there's also no reason to assume that the genetic endowment is just the result of various different things that happen to have happened in evolutionary history. There must be further factors involved – the kind that Turing [in his work on morphogenesis] was looking for, and others were and are looking for. And the idea that maybe you can do something with that notion is potentially important. It's now more or less agreed that you can do something with that notion for, say, bacteria. If you can also do something with it for the most recent – and by some dimension most complex – outcomes of evolutionary history, like language, that would suggest that maybe it holds all the way through.
JM:
Well, it would be pretty radical progress if we're actually at the stage now where we can begin to ask for language the old question, “Why are things the way they are?” I guess you think we're at that stage.
NC:
To some extent. I think that there are even some answers . . . In recent work, I've been trying to compare what now seems plausible with what seemed plausible ten years ago. And a good deal of machinery that was thought to be needed has in fact been cut away. How far you can go with that – who can tell? That's like asking what really is specific to language. These questions were coming up all along; that's why I brought up that 1974 biolinguistics conference [I mentioned before]. When you read through the transcript, the questions kept coming up – what could it be that is specific to language? How could it be so remote from everything else in the biological world? It didn't make biological sense. But you were stuck with it. Well, by now you're less stuck with it, and you can begin to ask more seriously the basic questions of the biology of language – and some of them you can even answer. There are still huge gaps. Take the first point you mentioned, about the nature of the concepts. We have nothing to say about how they evolved.
JM:
But you do assume that they have to have been in place at the time that language developed through the introduction of Merge . . .
NC:
That seems to be necessary to make sense out of the apparent contributions of language. They had to be, and the reason they had to be is that every living human being has basically the same ones. So they must have been there before the separation – before the trek from Africa – which means roughly fifty thousand years. So they predate fifty thousand years. And there's no real evidence that Merge really existed before roughly that time. Take a look at the work on the evolution of language. It's mostly been misconstrued. There's lots of interesting work showing adaptations of the sensory-motor system that appear to be language-related. So, for example, the ear and articulatory muscles seem to be geared to the range of sounds that are used in language. But that doesn't tell you anything. All it tells you is that whatever grunts hominids were using may have played a role over hundreds of thousands of years in changing the structure of the middle ear. That wouldn't be too surprising. It's like any other animal – take frogs. Take a particular species of frogs; their auditory systems will be correlated with their articulatory systems. But those are precursors of language. Yes, that's going to be true for every organism. So everything that's found about the sensory-motor system – at most, what it's telling you is, well, these are precursors to language of the kind that you find in frogs. But there has to be that point at which you suddenly get that explosive growth – this great leap in creative activity going on. It looks as though it's roughly at the point of the separation of the breeding group all over the world. To the extent that that's true, you've got a really narrow window in which something happened, and the simplest assumption is that what happened is that the recursive procedure developed.
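The recursive procedure in question – Merge – can be rendered as a toy sketch (our illustration, not Chomsky's own notation): a single binary operation that combines two syntactic objects into an unordered set, and, because it applies to its own output, yields unbounded hierarchical structure.

```python
# A minimal sketch of Merge as binary set formation (an illustrative
# assumption, not a definitive implementation): one operation that
# combines two syntactic objects into an unordered set.

def merge(x, y):
    """Combine two syntactic objects into an unordered two-member set."""
    return frozenset([x, y])

# Build the hierarchy {read, {the, book}} from lexical items.
dp = merge("the", "book")   # {the, book}
vp = merge("read", dp)      # {read, {the, book}}

# The structure is hierarchical but carries no intrinsic linear order:
assert merge("the", "book") == merge("book", "the")
# And the object DP is a genuine constituent of VP:
assert dp in vp
```

Nothing here is specific to language; the point of the sketch is only that a single recursive operation suffices for an infinite range of hierarchical objects.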
JM:
In the phonetic domain – you know Laura Petitto's work, of course – Laura's been suggesting that the fact that we can be at least bimodal in our use of language has something to do with the fact that somewhere in the [human] superior temporal gyrus there is something that recognizes certain patterns of sounds and signs that repeat themselves on the order of 1 or 1.5 hertz. The suggestion is that this underlies human linguistic syllabic structure of the sort displayed in all natural languages. That raises two questions. One is with respect to what you just said. The language faculty provides instructions to the articulatory systems, whether they be sign or speech. Those instructions have to be of a particular sort . . .
NC:
. . . well, those systems are going to have certain characteristics that they evolved over millennia – in fact, maybe millions of years. They're going to have their characteristics, whatever they are. And if the capacity for using an infinite system pretty suddenly develops, it'll make use of those properties. Actually, it doesn't seem to me at all impossible . . . If you think of the most elementary properties of the minimalist logic of evolution, everyone has to agree that at some stage of the game, a mutation led to an infinite generative process. You can't get around that unless you believe in miracles. So at some stage, something like Merge developed. Mutations develop in an individual, not in a community, which means that [Merge] developed in some individual. That individual suddenly had the capacity for an infinite range of thought, planning, interpretation, and so on. He or she didn't have to externalize it. In fact, there would be no point in externalizing it, because it was in an individual. Well, if it's in an individual, it's going to get transmitted through children – through the group, somehow. The ability to plan and think and interpret and so on has selectional advantages. So whoever happened to have this would probably do well in reproducing relative to the others. So a small breeding group would become dominant in a very short time. Pretty soon everyone has it. Nothing yet may have been articulated. Somewhere along the line, externalization took place; and that has further advantages. But then the externalization is going to make use of the sensory-motor apparatus that's around. It could have been sign; it could have been song; it could have been anything you've got available. And yes, it'll be adapted to that.
JM:
But doesn't it at least require a system that involves modulation of a linear signal of some sort?
NC:
Well, if you're going to externalize it, it's going to have to come out in time, which means it has to be linearized. The internal system may have no linear order; maybe everything is going on simultaneously.

It's at this point that intricate and interesting questions about the study of the structure of language enter into these considerations. Is there any evidence, for example, that in the narrow syntax mapping to the semantic interface – in that part of language which goes to the semantic interface, but not to the sensory-motor system – there is any evidence for linear ordering? It's an interesting question. It was raised a long time ago by Tanya Reinhart in the seventies, when she was the first one to argue that c-command[2] – which everyone was using; they were calling it different names, but it's now called c-command – didn't involve linearity, just hierarchy. It's a surprising proposal; but it looks as if it's plausible. And then comes more and more work trying to ask whether in fact, in the generation of objects at the semantic interface – so, narrow syntax, however it maps on – there is any linearity at all. Well, if there is none – which is the optimal assumption, because it requires the least complexity – then it would suggest from an evolutionary point of view that in fact externalization is something peripheral, and that the effect of the sensory-motor system on the nature of language is minimal. Here you really are integrating deep inquiries into the structure and nature of language with speculations about evolution. They interact.[C]
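Reinhart's point can be made concrete with a small sketch (a hypothetical illustration; the structures and helper names are ours): c-command is definable entirely in terms of sisterhood and dominance over unordered Merge-built sets, with no appeal to precedence anywhere.

```python
# C-command defined purely on hierarchy (an illustrative sketch).
# Structures are unordered sets built by binary Merge; no notion of
# 'precedes' appears anywhere in the definitions below.

def merge(x, y):
    return frozenset([x, y])

def dominates(node, target):
    """True if `target` occurs anywhere inside `node`."""
    if not isinstance(node, frozenset):
        return False
    return any(child == target or dominates(child, target)
               for child in node)

def c_commands(x, y, root):
    """X c-commands Y iff X's sister (in some constituent) is Y or
    dominates Y."""
    def walk(node):
        if not isinstance(node, frozenset):
            return False
        children = list(node)
        if x in children:
            sister = children[1] if children[0] == x else children[0]
            if sister == y or dominates(sister, y):
                return True
        return any(walk(child) for child in children)
    return walk(root)

vp = merge("read", merge("the", "book"))
assert c_commands("read", "book", vp)      # verb c-commands into its sister
assert not c_commands("book", "read", vp)  # but not the other way around
```

The asymmetry falls out of containment alone, which is the sense in which linear order could be absent from narrow syntax and supplied only at externalization.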
[1] See also Appendix II's discussion of varieties of views of evolution, and its emphasis on the idea that evolutionary theory should now reduce its dependence on what Lewontin and Turing call “history,” and emphasize instead connections to other factors – in Chomsky's terminology, “third factor” considerations.
[2] On c-command and its role, see Appendix VII.
 
13 Simplicity and its role in Chomsky's work
 
JM:
Could we talk a bit more about the notion of simplicity and its development in your work? There's always been that notion of theoretical simplicity; it's continued throughout all of your work. It's simply taken to be characteristic of the nature of scientific investigation. Then there's also that internal simplicity which you pursued in LSLT [The Logical Structure of Linguistic Theory] and Aspects [of the Theory of Syntax] . . . [C]
NC:
. . . and also in the earlier work. That second one leads directly to what's called “the Minimalist Program.” That's just another name for it. At some point – sort of like in the fifties, when you begin to try to reframe the methodological studies of language into a biological perspective – sometimes you can reframe the methodological conditions into empirical hypotheses about how organic systems, or maybe all systems, are formed. And to the extent that you can do that, you can investigate them as empirical hypotheses and look for evidence elsewhere – say, in the formation of snowflakes, or insect navigation, and so on – and see if there really are principles of computational complexity or whatever that are simply a part of nature, just as other natural laws are. And if you can reduce aspects of language to those, you have an account that – in the technical terminology of linguistics, where explanatory adequacy is solving Plato's problem – goes beyond explanatory adequacy.[C] You can begin to ask why the principles of Universal Grammar have these forms and not other forms. It becomes an empirical problem of biology; and it's on a par with others – actually, the kind that Turing was interested in.
 
JM:
That's the third factor.
NC:
That's the third factor.
JM:
What about parameters? Are they in any way a development of that notion of internal simplicity?
NC:
In a certain sense. What actually happened – not in an instant, but if you look back, you can see what happened – was this. Look at structural linguistics, including [Zellig] Harris's work, which was essentially a set of procedures for reducing linguistic materials – a corpus – to some organized form. Harris, by the way, pursued [this project] with the same kind of integrity that Goodman [discussed below] did – and it led to conclusions that seem to me incorrect, for the same reasons. For him there was no truth of the matter: you could do it this way, you could do it that way; it depends on which procedures work. Those were essentially a set of procedures for reducing organized materials to a structural description of a particular type, and it was guided by some methodological considerations of simplicity, among others, such as utility. I spent a long time trying to work on the procedures, and finally convinced myself that they're not going to work, for fundamental reasons. You can see it at the most elementary level.
Let's rethink these issues in terms of language acquisition [for that leads to parameters]. Harris wouldn't have, but it's a parallel question. Reducing a corpus, or organized materials, to a specific form is analogous to taking the data of experience and ending up with an I-language; it's an analogous procedure, [but] the second one happens to be an empirical problem of biology, and that's preferable to the methodological problem of organizing material. The first step would have to be breaking up noises into small units – maybe syllables, maybe phonemes, or whatever. The next step is going to have to be what George Miller, back in the fifties when he was thinking about these things, called “chunking”; you have to have larger units. What's the next larger unit above, say, a phoneme or syllable? From the point of view of the organization of the structure of grammar, the next larger unit is the morpheme. But that can't be, because there cannot be a procedure to find morphemes. The reason is that a morpheme is a notion that is abstract in its relationship to data; something is a morpheme because of the way that it fits into a much broader system. So you're not going to be able to find them by some procedure. That's why Harris's approach of statistical analysis of sequences of elements to find morpheme boundaries just can't work; it can only work for units that are kind of like beads on a string – one occurs after another – and morphemes just aren't like that. It's more or less like that in English, but English is a morphologically impoverished language. Even in English it won't work, but in slightly morphologically richer languages [it's obvious that] it can't work at all. In fact, the next larger unit is going to be something that's sometimes called a phonological word – something that has integrated phonological properties and is more or less like a word, but it's not going to be what we think of as words. If I say whyd'ja leave, “whyd'ja” is a phonological word, but it's a complicated thing from the point of view of its syntax and semantics. But that's the next biggest unit. Well, that was supposed to be a peripheral unit for structural linguistics, but it's going to be the fundamental unit from a procedural point of view. If you want to get anything linguistically more significant, like a morpheme or a phrase, or a construction, or whatever, you're not going to be able to find it by these procedures. They break down right at the first step.

That leads to the natural, but now I think erroneous, conclusion that what you're given by Universal Grammar, your genetic endowment, establishes a format: here's the kind of system that will count as a language. And then the task of the child – from another point of view, the linguist – is to find the optimal instantiation of the format, given the data. That's where the simplicity measure comes in – what's optimal, by some measure – and then you have to spell out the measure. The measure would be all of the notations that people use – phrases, “[t] becomes [š] preceding [iyV],” and so on. These, as I always understood them – going way back to the fifties – are simply expressions of your internally set conception of simplicity. They are ways of mapping a rule system into a form where you can assign a number, namely the number of symbols, and measure the simplicity numerically: ultimately, simplicity is going to be a numerical measure. So all of these are ways of assigning a number to a system of rules, and you hope that the method is going to capture authentic linguistic generalizations, and hence they'll really mean something – they'll be a part of the nature of language and your cognitive structure, and so on. So it is an empirical problem, but there's no computationally feasible method for going from data to finding the optimal instantiation of the format. That's why Morphophonemics of Modern Hebrew ([Chomsky] 1951/1979) gives a relative maximum. It says, “Well, let's pick this one and show that it's better than any slight modification of it.” But it's not really picking one from the data. That was in the forties; and it's inconceivable – it can't be done. It's computationally intractable. So it can't be the method of language acquisition; it can't be the truth about language.
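The evaluation measure described here – map each candidate rule system to a number, its symbol count, and prefer the smaller number – can be rendered as a toy sketch. The rules and the tokenization are invented for illustration; nothing below is an actual phonological analysis.

```python
# A toy rendering of the evaluation measure: assign each rule system
# a number (its total symbol count) and prefer the smaller one.
# The rules and tokenization here are hypothetical.

def symbol_count(rules):
    """Simplicity measure: total number of symbols across all rules."""
    return sum(len(rule.split()) for rule in rules)

# Two notational variants expressing the same palatalization fact:
grammar_a = ["t -> sh / _ i y V"]      # one collapsed rule, 8 symbols
grammar_b = ["t -> sh / _ i y a",      # the same fact spelled out
             "t -> sh / _ i y o",
             "t -> sh / _ i y u"]      # 24 symbols in total

best = min([grammar_a, grammar_b], key=symbol_count)
assert best is grammar_a  # the collapsed rule wins on the measure
```

Comparing two given grammars is trivial; the intractability point in the text is that the space of candidate instantiations of the format explodes combinatorially, so no feasible procedure finds the global minimum from the data.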
Well, this framework – format, instantiation, simplicity measure, evaluation – that framework lasted pretty much through the seventies, and it did raise serious conceptual barriers to trying to find out what's distinctive about language – what's the third factor, so that we can assign it to something else, and the residue will be what's distinctive about language. It's just a barrier: you could raise the question, and pursue it to an extent, but not too much. That's where the principles and parameters approach was important; it separated the question of language acquisition from the question of the format. Language acquisition, under this point of view, was a matter of setting the values of parameters, and the format for Universal Grammar no longer has to meet the condition that it is so restrictive and so highly articulated that it leads to only a small number of choices and therefore makes the computational task tractable. It could [now] turn out that Universal Grammar is very unrestricted. If you have a format-instantiation framework, it's necessary that the format be highly restricted and highly articulated, or you'll never be able to choose an instantiation, or pick one over another.
It's kind of like the projection problem [that Nelson Goodman discussed in his Fact, Fiction, and Forecast]: if you don't have any constraints, you're not going to be able to solve the projection problem. You're going to have to have very narrow constraints to have a feasible approach to picking an instantiation that's the right one – maybe not just one, but at least a small number. So throughout the whole format-instantiation-evaluation framework [period], it was necessary for the format to be highly restricted and highly articulated, with lots of special mechanisms, and so on and so forth – and therefore very little contribution of the third factor, and lots of highly specific components of language. It also made the problem of studying the evolution of language completely hopeless.
The principles and parameters approach broke that impasse by separating the problem of acquisition entirely from the problem: “What's the format?” It leaves all the questions open. But at least the conceptual barrier to studying the third factor is removed. It is then not impossible – and you can try to show that it is true – that the format for grammar actually does involve, to a high degree, principles of computational efficiency, and so on – which may be not only extra-linguistic, but extra-organic – and the acquisition problem is then shunted aside. It's a matter of fixing the parameters.
Of course, that raises another question: “Why does language have principles and parameters, and why these parameters?” That becomes another interesting empirical question, which maybe you can answer on third-factor grounds, and maybe not. In any case, [principles and parameters] made it possible to pursue in a much more serious way a search for factors like eliminating redundancy, simple rule systems, computational efficiency, and so on, on the basis of principles that may well be non-linguistic, not a part of Universal Grammar, and therefore not a part of the distinguishing characteristics of language. By the 1990s, a number of people – I was one, but also Michael Brody, Sam Epstein, and others – felt that there had been enough progress in this approach that it was actually an identifiable research domain, and that's when the name [minimalism] came along, just to identify that domain. It was a matter of picking up old problems and looking at them on the basis of new understandings – and plenty of assumptions, like the assumption that the principles and parameters approach is correct. There are plenty of posits there; no one can tell you what the parameters are. The closest that anyone has tried to get is Mark Baker's overview, which is very interesting but, as many people have pointed out, does not get to what are often called “microparameters,” the kinds of things that Richard Kayne in particular has worked on. They're very different. So what the whole picture of the array of parameters will look like is very much an open question.
JM:
It's a complicated task disentangling all the various contributing factors, dealing with a child's course of development . . .
NC:
But [that's not at all an issue for most linguists]. Most linguists, and social scientists in general, are so data-oriented that they find it scandalous to accept [methodological] principles that really ought to be obvious – for example, the idea [see Chomsky 1986 and the introduction to Chomsky 1980/2005] that you should try to study language acquisition in a pure case, uncontaminated by the innumerable factors that actually enter – your parents speak one language, and the kids on the street speak another. That's obviously going to have all kinds of complicated effects on language acquisition. But if you really want to find the principles of language, you have to abstract away from that. That's why scientists do experiments. Galileo had to fight this battle, one that you would hope would have been over by now. Well, it's not over in his field. And the same thing is true over here [in linguistics]. So, for example, an awful lot of what is in any language – say, Arabic – is the result of historical events that in themselves tell you nothing about the language faculty. Take the Norman Conquest. The Norman Conquest had a huge effect on what became English. But it clearly had nothing to do with the evolution of language – which was all finished long before the Norman Conquest. So if you want to study the distinctive properties of language – what really makes it different from the digestive system – and some day maybe [study] the evolution of those properties, you're going to have to abstract away from the Norman Conquest. But that means abstracting away from the whole mass of data that interests the linguist who wants to work on a particular language. There's no contradiction in this; it's just a sane approach to trying to answer certain kinds of far-reaching questions about the nature of language. But that's often considered scandalous.
JM:
[Switching a bit,] what's your view now of the status of LSLT? Is the work you've been doing recently a return to the project you find in LSLT? How do you see the historical relationship between that work and the Minimalist Program?
NC:
It's different. LSLT was caught between many conflicting impulses. One was to do a distributional analysis, for methodological reasons, for some entity called “language,” whatever that is; and [that impulse] had in the back[ground] motivation[s] like reducing a corpus to a grammar. Another was the biological framework that was just beginning to be thought about. It was discussed in the Skinner review, which was from about the same time – maybe a little later – and in other works that were around. LSLT was sort of caught between them: now drop the first and turn to the second, and [try to] see what you're looking at.
