JM:
I want to ask some questions about the ‘perfection’ of the language faculty. First, a background matter: if you speak of perfection and in particular
perfection in design of the language faculty – or at least, the mapping to the SEM interface – you seem to be invited to answer the question, “design for what?”
NC:
I think that's misleading. That's because of connotations of the word 'design'. Design suggests a designer, and a function of the designed thing or operation. But in biology, 'design' just means the way it is.
JM:
The structure, whatever it is . . .
NC:
How is the galaxy designed? Because the laws of physics say that that's the way it's designed. It's not for anything, and nobody did it. It's just what happens under certain physical circumstances. I wish there were a better word to use, because it does carry these unfortunate connotations. In a sense – a negative sense – there's a function. If the structure were dysfunctional, it wouldn't survive. And OK, in that sense, it's designed for something. It doesn't mean it's well designed for survival. So take language and
communication. Language is poorly designed for communication, but we get by with it, so it's not dysfunctional enough to disappear [or at least, disappear with regard to its use for communication, which isn't its only use, by any means]. Take, for example, trace erasure [or in the more recent terminology of copies, non-pronunciation of copies]. It's good for efficiency of structure, but it's very bad for communication. Anyone who tries to write a parsing program [encounters it] . . . most of the program is about how to find the gaps. Where are the gaps, and what's in them? If you just repeated – if you spelled out [or pronounced or otherwise exhibited] copies – the problem would be gone. But from a computational point of view, that would be poor design, because it's extra computation, so there's no point in it. So you cut it out. And there's case after case like that. So take garden path sentences and
islands, for example. Islands prevent you from saying things you would like to say. You can't say, “who did you wonder why visited yesterday.” It's a
thought; you know what it means. But the design of language on computational grounds doesn't allow it. To the extent that we understand them, at least, these things follow from efficient computational structure. But computational structure has no function. It's like cells breaking up into spheres instead of cubes: it just works; if they broke up into cubes, that would work too, but they can't [because of third factor constraints on possible shapes – in this case, physical ones]. Here too I think that what you find more and more is just efficient design from a computational point of view, independent of any use you might want to put it to. And I think that from an evolutionary point of view, that is exactly what should be expected. That's what these papers are about that I probably forgot to send you.
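[The gap-finding burden mentioned above can be made concrete with a toy sketch. The mini-lexicon, the example sentences, and the `gap_sites` function below are all invented for illustration; real parsers handle filler-gap dependencies in far more elaborate ways.]

```python
# Toy illustration of the gap-finding problem: when copies go
# unpronounced ("Who did you see __?"), a parser must search for
# where the deleted copy belongs. Lexicon and examples are invented.

TRANSITIVE = {"see", "visit", "meet", "persuade"}  # verbs that take an object
OVERT_OBJECTS = {"i", "you", "he", "she", "we", "they"}  # stand-ins for overt fillers

def gap_sites(words):
    """Return candidate positions for the unpronounced copy: any
    transitive verb whose object slot has no overt filler after it."""
    sites = []
    for i, w in enumerate(words):
        nxt = words[i + 1] if i + 1 < len(words) else None
        if w in TRANSITIVE and nxt not in OVERT_OBJECTS:
            sites.append(i + 1)
    return sites

# "Who did you see __?" -- one candidate gap site, after "see".
print(gap_sites(["who", "did", "you", "see"]))                     # [4]

# "Who did you persuade __ to meet __?" -- two candidate sites; the
# parser has to decide which one the fronted "who" belongs to.
print(gap_sites(["who", "did", "you", "persuade", "to", "meet"]))  # [4, 6]
```

[If copies were pronounced ("Who did you see who?"), this search would vanish: the copy is overt and nothing needs to be recovered – which is the point about good design for communication versus good design for computation.]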
We know almost nothing about the evolution of language, which is why people fill libraries with speculation about it. But we do know something. You can roughly fix the time span. You can argue fifty thousand years more or less, but that doesn't matter; it's basically instantaneous [from an evolutionary point of view]. Something suddenly happened, and then there's this huge explosion of artifacts and everything else. Well, what happened? The only thing that could have happened – it's hard to think of an alternative – is that suddenly the capacity for
recursive enumeration developed. That allows you to take whatever simple thoughts a chimpanzee may have – an act or an action or something – and turn them into an infinite array of thoughts. Well, that carries advantages. But even that is not so
trivial, because Haldane, I think it was, proved – eighty years or so ago now, I guess – that beneficial
mutations almost never survive. The probability of a beneficial mutation surviving is minuscule. It does, of course, happen sometimes, so you get some changes. But that suggests that whatever it was that gave rise to this may have happened many times and just died out. But at some point, by some accident, the beneficial mutation survived. And it survived in an individual; mutation doesn't take place in a group. So the individual that had this property – which does carry advantages: you can talk to yourself, at least; you can plan; you can imagine; things like that – could partially transmit it to offspring. By enough accidents, it could come to dominate a small breeding group. And at that point, there is some reason to
communicate. And so you develop ancillary systems. You know, morphology, phonology, and all the externalization systems. And they are messy. There's no reason for them to be computationally good. You're taking two completely independent systems. The sensory-motor system has apparently been
around for hundreds of thousands of years. It doesn't seem to have adapted to language, or only marginally. So it's just sitting there. You've got this other system – whatever developed internally – and there's every reason to expect that it might be close to computationally perfect, for there are no forces acting on it. So it
would be like cell division. So then, when you're going to map them together, it's going to be a mess.
JM:
But wait, when I think to myself, I think to myself . . .
NC:
In English, yes. But that's when you think to yourself consciously. And of course, we don't know what's going on unconsciously. So consciously, yes, because that is our mode of externalization, and we reinternalize it. Here, I think, is where a lot of the experimentation going on is very misleading. There's a lot of work recently that's showing that before people make a
decision, something is going on in the brain that is related to it. So if it's a decision to pick up a cup, something is going on in the motor areas before you make the decision. I think that's a misinterpretation: the activity comes before the decision becomes conscious. But lots of things are going on unconsciously. There's this philosophical dogma that everything has to be accessible to consciousness. That's just religious belief. Take mice. I don't know whether they're conscious or not, but I assume that they make decisions that are unconscious. So when we talk to ourselves, the part that reaches consciousness is reconstructed in terms of the form of externalization that we use. But I don't think that tells you much about the internal use of
language. It's evidence for it, just like speech is evidence for it.[C]
Anyhow, whatever this first person was who had the mutation, maybe the mutation just gave Merge. That's the simplest
assumption. If that happened, that person would not be conscious of thinking; he or she would just be doing it. He or she would be able to make decisions on the basis of internal planning, observations and expectations, and whatever. Now if enough people in the community had the same mutation, there would come a point where someone had the bright idea of externalizing it, so that they could contact somebody else. This may not have involved any evolutionary step at all. It may have [just been a matter of] using other cognitive faculties to figure out a hard problem. If you look at language – one of the things that we know about it is that most of the
complexity is in the externalization. It is in phonology and morphology, and they're a mess. They don't work by simple rules. Almost everything that's been studied for thousands of years is externalization. When you teach a language, you mostly teach the externalization. Whatever is going on internally, it's not something that we're conscious of. And it's probably very simple. It almost has to be, given the evolutionary conditions.
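[The simplicity claimed for the internal operation can itself be sketched in a few lines. Treating Merge as binary set formation follows standard minimalist presentations; the lexical items and the derivation below are invented for illustration.]

```python
# Minimal sketch of Merge as binary set formation. frozenset is used
# because Merge builds unordered sets, and frozensets can be nested.

def merge(x, y):
    """Combine two syntactic objects into an unordered pair {x, y}."""
    return frozenset([x, y])

# A finite lexicon plus one recursive operation yields an unbounded
# array of hierarchically structured expressions.
vp = merge("read", "books")   # {read, books}
tp = merge("will", vp)        # {will, {read, books}}
clause = merge("she", tp)     # {she, {will, {read, books}}}

# The result is hierarchical, not a flat string:
assert vp in tp and tp in clause

# And embedding can iterate without bound:
s = clause
for _ in range(3):
    s = merge("he-thinks", s)
```

[One operation, no ordering, no pronunciation: everything messy – linearization, morphology, phonology – belongs to the externalization step, which this sketch deliberately omits.]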
JM:
If you give up the idea that you have to answer the question, what is it for . . .
NC:
It's not for anything . . .
JM:
But put it this way: don't you then have to give up also talk about interfaces, and talk about organs, because . . .
NC:
It has to relate to the interfaces, for otherwise it would die out. It would be a
lethal mutation. But lethal mutations are no different from beneficial mutations from nature's point of view; they just die out. And in fact many of them remain. Why do we have an appendix?
JM:
You can't even say that it's for thought, then?
NC:
If it weren't adaptable to thought, it probably would just have died out. But functioning for something is a remote
contingency; that was Haldane's point. If it's beneficial, it'll probably die out anyway, because statistically that's just what happens. But something may survive. And if it survives, it may be for physical reasons. The more that's being learned about evolution and development, the more it looks like most things happen because they have to; there's no other way. Speculations in the 1970s that suggested – at least for me – the principles and
parameters approach to the study of language, such as [François] Jacob's speculations about the proliferation of organisms – well, they turned out to be pretty solid. The idea that basically there's one organism, that the difference – as he put it poetically – the difference between an elephant and a fly is just the rearrangement of the timing of some fixed regulatory mechanisms. It looks more and more like it. There's deep conservation; you find the same thing in bacteria that you find in humans. There's even a theory now that's taken seriously that there's a
universal genome. Around the Cambrian explosion, that one genome developed and every organism's a modification of it.
JM:
Due to difference of timing in development, difference of gene position . . .
NC:
Yes, so it doesn't sound as crazy as it used to. They've found in the kinds of things that they've studied, like bacteria, that the way that evolutionary development takes place seems to be surprisingly uniform, fixed by physical law. If anything like that applies to language, you'd expect that the
internal, unconscious system that is probably mapping linguistic expressions into thought systems at an interface ought to be close to perfect.
JM:
So language came about as a result of an accident – maybe some minor rearrangement of the human genome – and other creatures don't have it because they didn't have the same accident, at least in a form that survived . . .
NC:
In fact, the human line may have had the accident many times, and it just never took off. And the accident could have been – no one knows enough about the brain to say anything – but there was an explosion of brain
size around a
hundred thousand years ago which may have had something to do with it. It might be a consequence of some change in brain configuration about which people know nothing. And it's almost impossible to study it because there's no comparative evidence – other animals don't have it – and you can't do direct experimentation on humans in the way they used to do at McGill [University] . . .
JM:
To our shame . . . What happens then to the strong minimalist thesis?
NC:
Maybe it's even true. Of course, it would have to be pared down to apply just to the
cognitive [conceptual-intentional, or SEM] interface, and the mapping to the sensory-motor interface – strictly speaking – may not even be a part of language in substantial respects, in this technical sense of 'language'. It's just part of the effort to connect these two systems that have nothing to do with each other, and so it could be very messy, not meet any nice computational properties. It's very variable; the Norman invasion changed it radically; it changes from generation to generation, so you get dialects and splits, and so on. And it's the kind of thing you have to learn; a child has to learn that stuff; when you study a language, you have to learn it. And a lot of it is probably pretty rigid. It's not that anything goes; there are certain constraints on the mapping. I think that there's a research project there, to try to figure out [just what they are]. That's what
serious phonology and morphology ought to be – to find out the constraints in which this mapping operates and ask where they come from. Are they computational constraints? I think it opens up new questions. And the same for
syntax. You can find some cases where you can give an argument that computational efficiency explains the principles, but . . .
It's interesting that people have
expectations for language that they never have in biology. I've been working on Universal
Grammar for all these years; can anyone tell you precisely how it works [– how it develops into a specific language, not to mention how that language that develops is used]? It's hopelessly complicated. Can anyone tell you how an insect works? They've been working on a project at MIT for thirty years on nematodes. You know the very few [302] neurons; you know the wiring diagram. But how does the animal work? We don't know that.
JM:
OK. But now what happens to
parameters? I guess you're pretty much committed to saying that all of the research on them should shift to focus on the mapping to the sensory-motor interface, PHON.
NC:
I guess that most of the parameters, maybe all, have to do with the
mappings [to the sensory-motor interface]. It might even turn out that there isn't a finite number of parameters, if there are lots of ways of solving this
mapping problem. In the field, people try to distinguish
roughly between macroparameters and microparameters. So you get Janet Fodor's serious work on this. You get these kinds of things that
Mark Baker is talking about – head-final, polysynthesis [which Baker
suggests are among the best candidates for macroparameters]. It's probable that there's some small store that just may go back to computational issues [hence, mapping to the SEM interface]. But then you get into the microparameters. When you really try to study a language, any two speakers are different. You get into a massive proliferation of parametric differences – the kinds of stuff that
Richard Kayne does when he studies dialects really seriously. Very small changes sometimes have big effects. Well, that could turn out to be one of the ways of solving the cognitive problem of how to connect these unrelated systems. And they vary; they could change easily.