The Genius in All of Us: New Insights Into Genetics, Talent, and IQ

Author: David Shenk

   Lewis Terman’s most important claim for IQ—that it reveals a person’s fixed, innate intelligence—relies entirely on the assertion that individual IQ scores remain the same throughout people’s lives. This simply is not true. While one study reported a majority of people’s scores changing relatively little over time, that same study reported that, “in a nontrivial minority of children, naturalistic IQ change is marked and real.” Other large studies showed a significant majority of students experiencing an IQ swing of 15 points or more over time. (Sternberg and Grigorenko, “The predictive value of IQ,” p. 13.)

It also means that Terman’s IQ test has ironically sown the seeds of its own destruction. In so efficiently documenting narrow bands of academic achievement decade after decade, the test that he devised to prove the fixedness of intelligence inadvertently demonstrated how flexible and buildable intelligence really is.

James Flynn: “At any particular time, factor analysis will extract g (IQ)—and intelligence appears unitary. Over time, real-world cognitive skills assert their functional autonomy and swim freely of g—and intelligence appears multiple. If you want to see g, stop the film and extract a snapshot; you will not see it while the film is running. Society does not do factor analysis.” (Flynn, What Is Intelligence?, p. 18.)

IQ is changeable by as much as 30 points, as reported in Sherman and Key, and by as much as 18 points, as reported in Jones and Bayley. (Sherman and Key results reported in Ceci, On Intelligence, chapter 5; Jones and Bayley, “The Berkeley Growth Study,” pp. 167–73.)

    
Their unavoidable conclusion was that “children develop only as the environment demands development”: Sherman and Key, “The intelligence of isolated mountain children,” pp. 279–90.

   Other studies have demonstrated that IQ scores drift lower during the summer months (except for those attending an academic camp) and that they rise steadily as the school year progresses. In other words, schooling itself has a direct effect on IQ scores. “Contrary to the traditional belief that information contained on IQ tests is potentially available to all children, regardless of environmental conditions,” writes Stephen Ceci, “it has been known for many decades that a child’s experience of schooling exerts a strong influence on intelligence test performance … This relationship is still substantial after potentially confounding variables, such as the tendency for the most intelligent children to begin schooling earlier and remain there longer, are controlled.” (Ceci, On Intelligence, chapter 5.)

To the extent that scores did show some stability across a large population, it seemed largely a function not of innate intelligence but of population inertia. Inertia is the tendency for things to remain in their same relative state—of rest or motion—unless and until something comes along to change the dynamic. It’s true of molecular physics and it’s equally true of human action and populations. Most people performing at the middle of the intellectual pack at age ten are going to be performing at the middle of the intellectual pack at age twenty or thirty. This observation says nothing about intelligence; it’s simple population dynamics. You could say the same thing about almost any trait: by and large, the funniest ten-year-olds are also going to be the funniest twenty-year-olds, the fastest ten-year-olds are also going to be the fastest twenty-year-olds, and the biggest-toed ten-year-olds are also going to be the biggest-toed twenty-year-olds. There will be plenty of individual exceptions, but in a large group, this consistency of order is always going to be the norm.

Another way of illustrating population inertia is to consider the annual New York City marathon, with its tens of thousands of runners. If you were to list the order of runners at the ten-mile mark, and then compare that order to the order at the finish line, you would find a very solid correlation. Almost none of the runners at the finish would be in exactly the same position as before, and of course some would be way off, but on the whole, the correlation of runners’ ten-mile positions to twenty-six-mile positions would be very high. Why? Because by mile ten, runners have already established their pace, their level of endurance, their level of competitiveness, and so on; the pack has taken shape and will keep roughly the same shape throughout the race. Obviously, this correlation has absolutely nothing to do with the underlying cause of each runner’s performance. It simply reflects the dynamic of any competition.

So it is with IQ. Without question, there are wide differences in intellectual abilities throughout life, and if you test one hundred thousand kids at age ten and then test them again at age twenty-six, you’re going to find that, on average, they remain in roughly the same intellectual pecking order. Many individual scores will diverge—IQ scores are known to swing as much as thirty points over time in individuals with changing circumstances—but as a group, the age-ten numbers will correlate rather well with the age-twenty-six numbers.
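The statistical point can be made concrete with a toy simulation. This is only a sketch under assumptions of my own (normally distributed scores with a standard deviation of 15, and an assumed individual drift with a standard deviation of 12 points); it is not drawn from the book or its sources, but it shows how a solid group-level correlation coexists with sizable individual change.

    # Toy illustration of "population inertia": group-level correlation stays high
    # even though many individuals shift a lot. All parameters are assumptions.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000

    age_10 = rng.normal(100, 15, n)        # scores at age ten (mean 100, SD 15)
    drift = rng.normal(0, 12, n)           # assumed individual change by age twenty-six
    age_26 = age_10 + drift

    r = np.corrcoef(age_10, age_26)[0, 1]      # correlation across the whole cohort
    big_movers = np.mean(np.abs(drift) >= 15)  # share who shift 15 or more points

    print(f"correlation, age 10 vs. age 26: {r:.2f}")          # roughly 0.78
    print(f"share shifting 15+ points:      {big_movers:.2f}") # roughly 0.21

The exact numbers depend entirely on the assumed drift; the only point is that rank-order stability in the aggregate says little about how far any individual can move.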

Surprise, surprise: most people who are pretty good at academics at age ten (compared to others the same age) are also pretty good at age twenty-six; most who are excellent at age ten are also excellent at age twenty-six. That’s what IQ stability tells us—and that’s all it tells us. It does not suggest inborn limits, and it doesn’t even hint at the extraordinary power of individuals to change their own circumstances and lift their intellectual performance.

Intelligence scores of infants are not predictive of future scores or life success. That population is still too much in flux; individuals have not yet hit their stride; the pack has not yet taken shape; population inertia has not yet set in.

    
Comparing raw IQ scores over nearly a century, Flynn saw that they kept going up: Nippert, “Eureka!”

    
IQ test takers improved over their predecessors by three points every ten years.

   These comparisons draw on the raw scores—not the weighted scores that are annually recalibrated so that the average is always 100.

    
Using a late-twentieth-century average score of 100, the comparative score for the year 1900 was calculated to be about 60—leading to the truly absurd conclusion, acknowledged Flynn, “that a majority of our ancestors were mentally retarded.”

   This retroactive analysis illustrates the logical flaw in continually using a curved IQ score to dismiss the competence of anyone scoring below 100.
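The renorming arithmetic behind that backward projection can be sketched in a few lines. This is a back-of-the-envelope illustration under assumed gain rates, not Flynn’s actual method (his published estimates depend on which tests and which countries are compared): if raw performance rises by a steady number of points per decade while the scale is repeatedly reset so that the current average is 100, a cohort ten decades back lands far below 100 on the modern scale.

    # Back-of-the-envelope renorming arithmetic; the gain rates are assumptions.
    def backdated_mean(gain_per_decade, decades_back=10, modern_mean=100.0):
        """Average score an earlier cohort would get against modern norms."""
        return modern_mean - gain_per_decade * decades_back

    for rate in (3.0, 4.0, 5.0):
        print(f"{rate:.0f} points/decade -> 1900 cohort averages "
              f"~{backdated_mean(rate):.0f} on modern norms")
    # 3 points/decade gives about 70; a rate of roughly 4 points/decade reproduces
    # the figure of about 60 quoted above; faster-rising tests push it lower still.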

    
“[The intelligence of] our ancestors in 1900 was anchored in everyday reality,” explains Flynn. “We differ from them in that we can use abstractions and logic and the hypothetical.”

Flynn adds:

When [asked]: “What do dogs and rabbits have in common,” Americans in 1900 would be likely to say, “You use dogs to hunt rabbits.” The correct [contemporary test] answer, that both are mammals, assumes that the important thing about the world is to classify it in terms of the taxonic categories of science … Our ancestors found pre-scientific spectacles more comfortable than post-scientific spectacles, [because that’s what] showed them what they considered to be most important about the world … (Flynn, “Beyond the Flynn Effect.”)

    
Examples of abstract notions that simply didn’t exist in the minds of our nineteenth-century ancestors include the theory of natural selection (which entered educated usage in 1864), and the concepts of control group (1875) and random sample (1877).

This comes from a 2006 lecture by James Flynn. An extended excerpt:

Over the last century and a half, science and philosophy have expanded the language of educated people, particularly those with a university education, by giving them words and phrases that greatly increase their critical acumen. Each of these terms stands for a cluster of interrelated ideas that virtually spell out a method of critical analysis applicable to social and moral issues. I will call them “shorthand abstractions” (or SHAs), it being understood that they are abstractions with peculiar analytic significance.

I will name [some] SHAs followed by the date they entered educated usage (dates all from the Oxford English Dictionary on line):

(1) Market (1776: economics). With Adam Smith, this term altered from the merely concrete (a place where you bought something) to an abstraction (the law of supply and demand). It provokes a deeper analysis of innumerable issues. If the government makes university education free, it will have to budget for more takers. If you pass a minimum wage, employers will replace unskilled workers with machines, which will favor the skilled. If you fix urban rentals below the market price, you will have a shortage of landlords providing rental properties. Just in case you think I have revealed my politics, I think the last a strong argument for state housing.

(2) Percentage (1860: mathematics). It seems incredible that this important SHA made its debut into educated usage less than 150 years ago. Its range is almost infinite. Recently in New Zealand, there was a debate over the introduction of a contraceptive drug that kills some women. It was pointed out that the extra fatalities from the drug amounted to 50 in one million (or 0.005 %) while without it, an extra 1000 women (or 0.100 %) would have fatal abortions or die in childbirth.

(3) Natural selection (1864: biology). This SHA has revolutionized our understanding of the world and our place in it. It has taken the debate about the relative influences of nature and nurture on human behavior out of the realm of speculation and turned it into a science. Whether it can do anything but mischief if transplanted into the social sciences is debatable. It certainly did harm in the 19th century when it was used to develop foolish analogies between biology and society. Rockefeller was acclaimed as the highest form of human being that evolution had produced, a use denounced even by William Graham Sumner, the great “Social Darwinist.” I feel it made me more aware that social groups superficially the same were really quite different because of their origins. Black unwed mothers who are forced into that status by the dearth of promising male partners are very different from unwed mothers who choose that status because they genuinely prefer it.
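As a small aside, the percentage comparison in Flynn’s second example works out exactly as quoted; a few lines of arithmetic (simply restating his numbers) make the ratio explicit.

    # Checking the figures in the "percentage" example above.
    with_drug = 50 / 1_000_000         # extra fatalities with the drug
    without_drug = 1_000 / 1_000_000   # fatal abortions or childbirth deaths without it

    print(f"with the drug:    {with_drug:.3%}")     # 0.005%
    print(f"without the drug: {without_drug:.3%}")  # 0.100%
    print(f"risk is {without_drug / with_drug:.0f}x higher without it")  # 20x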
