JM:
OK, now I'd like to get clear about the current status of Universal Grammar (UG). When you begin to focus, in the account of acquisition, on the notion of biological development, it seems to bring into the study of language a lot more – or at least different – issues than had been anticipated before. There are not only the questions of the structure of the particular faculty that we happen to have, and whatever kinds of states it can assume, but also the question of how that particular faculty developed . . .
NC:
How it evolved? Or how it develops in the individual? Genetically, or developmentally?
JM:
Well, certainly genetically in the sense of how it came about biologically, but also the notion of
development in a particular individual, where you have to take into account – as you make very clear in your recent work – the contributions of this third factor that you have been emphasizing. I wonder if that doesn't bring into
question the nature of modularity [of language] – it's an issue that used to be discussed with a set of assumptions that amounted to thinking that one could look at a particular part of the brain and ignore the rest of it.
NC:
I never believed that. Way back about fifty years ago, when we were starting to talk about it, I don't think anyone assumed that that had to be
true. Eric Lenneberg was interested in – we were all interested in – whatever is known about localization, which does tell us something about what the faculty is. But if it was distributed all over the brain, so be it . . .
JM:
It's not so much the matter of localization that is of interest to me, but rather the matter of what you have to take into account in producing an account of development. And that seems to have grown in recent years.
NC:
Well,
the third factor was always in the background. It's just that it was out of reach. And the reason it was out of reach, as I tried to explain in the LSA paper (2005a), was that as long as the concept of Universal Grammar, or linguistic theory, is understood as a format and an evaluation procedure, then
you're almost compelled to assume it is highly language-specific and very highly articulated and restricted, or else you can't deal with the acquisition problem.
That makes it almost impossible to understand how it could follow any general principles. It's not like a logical contradiction, but the two efforts are tending in opposite directions. If you're trying to get Universal Grammar to be articulated and restricted enough so that an evaluation procedure will only have to look at a few examples, given data, because that's all that's permitted, then it's going to be very specific to language, and there aren't going to be general principles at work. It really wasn't until the
principles and parameters conception came along that you could really see a way in which this could be divorced. If there's anything that's right about that, then the format for
grammar is completely divorced from acquisition; acquisition will only be a matter of parameter setting. That leaves lots of questions open about what the parameters are; but it means that whatever is left are the properties of language. There is no conceptual reason any more why they have to be highly articulated and very specific and restricted. A conceptual barrier has been removed to the attempt to see if the third factor actually does something. It took a long time before you could get anywhere with that.
JM:
But as the properties of language become more and more focused on Merge and, say, parameters, the issue of development in the particular individual seems to be becoming more and more difficult, because it seems to involve appeals to other kinds of scientific enterprise that linguists have never in fact touched on before. And I wonder if you think that the study of linguistics is going to have to encompass those other areas.
NC:
To the extent that notions such as efficient computation play a role in determining how the language develops in an individual, that ought to be a general biological, or maybe even a general physical, phenomenon. So if you get any evidence for it from some other
domain, well and good. That's why when Hauser and Fitch and I were writing (Hauser, Chomsky & Fitch 2002), we mentioned optimal foraging strategies. It's why in recent papers I've mentioned things like Christopher Cherniak's work [on non-biological innateness (2005) and on brain wiring (Cherniak, Mokhtarzada, Rodriguez-Esteban & Changizi 2004)], which is suggestive. You're pretty sure that that kind of result will show up in biology all over the place, but it's not much studied in biology. You can see the reasons.
The intuition that biologists have is basically Jacob's, that simplicity is the last thing you'd look for in a biological organism, which makes some sense if you have a long
evolutionary history with lots of accidents, and this and that happens. Then you're going to get a lot of jerry-rigging; and it appears, at least superficially, that when you look at an animal, it's going to be jerry-rigged. So it's tinkering, as Jacob says. And maybe that's true, and maybe it isn't – maybe
it looks true because you don't understand enough. When you don't understand anything, it looks like a pile of gears, levers, and so on. If you understood enough, maybe you'd find there's more to it. But at least the logic makes some sense. On the other hand, the logic wouldn't hold if language is a case of pretty sudden emergence. And that's what the archeological evidence seems to suggest. You have a time span that's pretty narrow.
JM:
To press a point about simplicity for a moment: you've shown, remarkably, that there's a very considerable degree of simplicity in the faculty itself – in what might be taken to be distinctively linguistic aspects of the faculty of language. Would you expect that kind of simplicity in whatever third-factor contributions are going to be required to make sense of the growth of language in a child?
NC:
To the extent that they're real, then yes – to the extent that they contribute to growth. So how does a child get to know the subjacency condition [which restricts movement of a constituent to crossing at most one bounding node]? Well, to the extent that that follows from some principle of
efficient computation, it'll just come about in the same way as cell division comes about in terms of spheres. It won't be because it's genetically determined, or because of experience; it's because that's the way the world works.
JM:
What do you say to someone who comes along and says that the cost of introducing so much simplicity into the faculty of language is having to deal, in the long run, with other factors outside the faculty of language that contribute to the growth of language – and that it also consists, at least in part, of pushing into another area whatever kinds of global considerations might be relevant not only to language itself, but to its use?
NC:
I don't understand why that should be considered a cost; it's a benefit.
JM:
OK; for the linguist interested in producing a good theory, that's plausible.
NC:
In the first place, the question of cost and benefit doesn't arise; it's either true or it isn't. If it is true – to the extent that it's true – it's a source of gratification that carries the study of language to a higher level. Sooner or later, we expect it to be integrated with the whole of science – maybe in ways that haven't been envisioned. So maybe it'll be integrated with the study of insect navigation some day; if so, it's all to the good.
JM:
Inclusiveness: is it still around?
NC:
Yes; it's a natural principle of economy, I think. Plainly, to the extent that language is a system in which the computation just involves rearrangement of what you've already got, it's simpler than if the system adds new
things. If it adds new things, those additions can only be specific to language. Therefore, it's more complex; therefore, you don't want it, unless you can prove that it's there. At least, the burden of proof is on the assumption that you need to add new things. So
inclusiveness is basically the null hypothesis. It says language is just what the world determines, given the initial fact that you're going to have a
recursive procedure. If you're going to have a recursive procedure, the best possible system would be one in which everything else follows from
optimal computation – we're very far from showing that, but insofar as you can show that anything works that way, that's a success. What you're showing here is a property of language that does not have to be attributed to genetic endowment. It's just like the discovery that polyhedra are the construction materials. That means you don't have to look for the genetic coding that tells you why animals such as bees are going to build nests in the form of polyhedra; it's just the way they're going to do it.
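[To make the null hypothesis concrete, here is a minimal sketch of what inclusiveness amounts to computationally: a derived object may contain nothing beyond rearrangements of the lexical items it was built from – no added indices, bar levels, or traces. The nested-frozenset representation and the toy lexical items are illustrative assumptions, not Chomsky's formalism.]

```python
# A minimal, illustrative check of the Inclusiveness Condition: a derivation
# may only rearrange the lexical items it starts with, never add new objects.
# (Representation is an assumption for illustration.)

from collections import Counter

def leaves(obj):
    """Collect the lexical atoms at the leaves of a syntactic object."""
    if isinstance(obj, frozenset):            # an internal node built by Merge
        atoms = []
        for part in obj:
            atoms.extend(leaves(part))
        return atoms
    return [obj]                              # an atom: a lexical item

def satisfies_inclusiveness(derived, lexical_array):
    """True iff the derived object draws only on items in the lexical array."""
    used, available = Counter(leaves(derived)), Counter(lexical_array)
    return all(available[item] >= n for item, n in used.items())

tree = frozenset({"read", frozenset({"the", "book"})})          # pure rearrangement
print(satisfies_inclusiveness(tree, ["read", "the", "book"]))   # True
bad = frozenset({"read", frozenset({"the", "book", "t1"})})     # adds a trace "t1"
print(satisfies_inclusiveness(bad, ["read", "the", "book"]))    # False
```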
JM:
Inclusiveness used to depend to a large extent upon the lexicon as the source of the kind of ‘information’ to be taken into account in a computation; does the lexicon still have the important role that it used to have?
NC:
Unless there's something more primitive than the lexicon. The lexicon is a complicated
notion; you're fudging lots of issues. What about compound nouns, and idioms, and what kinds of constructive procedures go on in developing the lexicon – the kind of thing that Kenneth Hale was playing with? So ‘
lexicon’ is kind of a cover for a big mass of problems. But if there's one aspect of language that is unavoidable, it's that in any language, there's some assembly of the possible properties of the
language – features, which just means linguistic properties. So there's some process of assembly of the features and, then, no more access to the features, except for what has already been assembled. That seems like an overwhelmingly and massively supported property of language, and an extremely natural one from the point of view of computation, or use. So you're going to have to have some kind of lexicon, but what it will be, what its internal structure will be, how morphology fits into it, how compounding fits in, where idioms come in – all of those problems are still sitting there.
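[One way to picture the property just described – features are assembled once into an item, after which the computation has access only to the assembled item – is a sealed, immutable feature bundle. A minimal sketch, with invented feature names and no commitment to how assembly actually works:]

```python
from dataclasses import dataclass

# A lexical item as a sealed bundle of features (feature names invented for
# illustration). Once assembled, the computation handles the item as a unit;
# the bundle itself can no longer be altered or reassembled.

@dataclass(frozen=True)
class LexicalItem:
    phon: str              # phonological label
    features: frozenset    # e.g. category and agreement features

book = LexicalItem("book", frozenset({"N", "count", "3sg"}))

try:
    book.features = frozenset({"V"})   # attempt to re-access and change features
except Exception as e:
    print(type(e).__name__)            # FrozenInstanceError: assembly is final
```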
JM:
Merge – the basic computational principle: how far down does it go?
NC:
Whatever the lexical atoms are, they have to be put together, and the easiest way for them to be put together is for some process to just form the object that consists of
them. That's Merge. If you need more than that, then ok, there's more – and anything more will be specific to language.
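[Merge as characterized here has an almost one-line computational rendering: given two objects, form the object consisting of exactly them. A minimal sketch – the set representation and the toy items are illustrative assumptions:]

```python
# Merge, rendered minimally: given two syntactic objects, form the object
# consisting of exactly them. Applied to its own output, it yields unboundedly
# many hierarchical structures from a finite stock of atoms, adding nothing
# new along the way (consistent with inclusiveness).

def merge(x, y):
    return frozenset({x, y})

dp = merge("the", "book")    # {the, book}
vp = merge("read", dp)       # {read, {the, book}}
print(vp)
```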
JM:
So in principle, von Humboldt might have been right, that the lexicon is not this – I think his term was “completed, inert mass” . . .
NC:
. . . but something created . . .
JM:
. . . something created and put together. But if it's put together, is it put together on an occasion, or is there some sort of storage involved?
NC:
It's got to be storage. We can make up new words, but it's peripheral to the language [system's core computational operations].
As for Humboldt, I think that when he was talking about the energeia and the lexicon, he was actually referring to usage. In fact, almost all the time, when he talks about infinite use of finite means, he doesn't mean what we mean – infinite generation – he means use; so, it's part of your life.
JM:
But he did recognize that use depends rather heavily upon systems that underlie it, and that effectively support and provide the opportunity for the use to . . .
NC:
. . . that's where it fades off into obscurity. I think now that the way that I and others have quoted him has been a bit misleading, in that it
sounds as if he's a precursor of generative grammar, where perhaps instead he's really a precursor of the study of language use as being unbounded, creative, and so on – in a sense, coming straight out of the Cartesian tradition, because that's what
Descartes was talking about. But the whole idea that you can somehow distinguish an internal competence that is already infinite from the use of it is a very hard notion to grasp. In fact, maybe the person who came closest to it that I've found is neither
Humboldt nor Descartes, but [A.W.] Schlegel in those strange remarks that he made about poetry [see Chomsky 1966/2002/2009]. But it was kind of groping around in an area where there was no way of understanding, because the whole idea of a recursive infinity just didn't exist.
JM:
But didn't Humboldt distinguish . . . he did have a distinction between what he called the Form of language and its character, and that seems to track something like a distinction between competence and
use . . .
NC:
It's hard to know what he meant by it. When you read through it, you can see it was just groping through a maze that you can't make any sense of until you at least distinguish, somehow, competence from performance. And that requires having the notion of a recursive procedure and an internal capacity that is ‘there’ and already infinite, and can be used in all the sorts of ways he was talking about. Until you at least begin to make those distinctions, you can't do much except grope in the wilderness.
JM:
But that idea was around, as you've pointed out.
John Mikhail pointed it out in Hume; it was around in the seventeenth and
eighteenth centuries . . .
NC:
. . . something was around. What Hume says, and what John noticed, is that you have an infinite number of responsibilities and duties, so there has to
be some procedure that determines them; there has to be some kind of system. But notice again that it's a system of usage – it determines usage. It's not that there's a class of duties characterized in a finite manner in your brain. It's true that it has to be that way; but that wasn't what he was talking about. You could say it's around in Euclid, in some sense. The idea of a finite axiom system sort of incorporates the idea; but it was never clearly articulated.