Power, Sex, Suicide: Mitochondria and the Meaning of Life
Author: Nick Lane
Tags: #Science, #General
Mitochondrial DNA has its uses not just for reconstructing prehistory, but also in forensics, especially in establishing the identity of unknown remains. Such forensic studies are based on exactly the same assumptions—that everyone inherits a single type of mitochondrial DNA, only from their mother. Among the most celebrated forensic cases was that of the last Russian Tsar, Nicholas II, who was shot, along with his family, by a firing squad in 1918. In 1991, the Russians exhumed a Siberian grave containing nine skeletons, one of which was thought to be that of Nicholas II himself.
The trouble was that two bodies were missing; either something funny had been going on, or this was not the correct grave. Mitochondrial DNA was called to the rescue, but the sequence recovered from the skeleton didn't quite match that of the Tsar's living relatives.
Curiously, the putative Tsar’s mitochondrial DNA was heteroplasmic—he had a mixture, so his true identity remained in doubt. The matter was finally laid to rest when the body of the Tsar’s younger brother, Georgij Romanov, the Grand Duke of Russia, was also exhumed. He had died of tuberculosis in 1899, and his grave was known with certainty. Because both should have inherited exactly the same mitochondrial DNA from their mother, a perfect match would establish beyond doubt the identity of the Tsar; and indeed the match was perfect: the Grand Duke, too, was heteroplasmic.
While proving the utility of mitochondrial DNA analysis, the episode raised some awkward practical questions: in particular, exactly how common is heteroplasmy? Mitochondrial heteroplasmy doesn't always derive from paternal 'seepage' into the egg, but can also result from mitochondrial mutations. If the DNA in a single mitochondrion mutates, then both types can be amplified during embryonic development, leading to a mixture in the adult body. Such mixtures tend to come to light only when they cause disease, so their real incidence is not known; if they don't cause disease they can easily be overlooked. The practical bearing on forensics was important enough for several research groups to look into it; and their findings, consistent between the groups, came as a surprise. At least 10 per cent, and perhaps 20 per cent, of humans are heteroplasmic. Much of the mixture appears to come from new mutations, rather than paternal seepage.
These findings have two important implications. First, heteroplasmy is far more common than we had imagined, and this must hold consequences for the ‘selfish’ mitochondrial model of sexes: if we can survive quite happily with two competing populations of mitochondria (without overt disease in most cases) then clearly the conflict between mitochondria has been overstated to some extent. And second, the rate of mitochondrial mutation is far higher than expected. Attempts to calibrate the rate by comparing the sequences of distantly related family members have come to mixed conclusions, but the burden of evidence suggests that one mutation occurs every 40 to 60 generations, which is to say every 800 to 1200 years. In contrast, if we calibrate the rate of divergence on the basis of known colonization dates and fossil evidence, we calculate a rate of about 1 mutation every 6000 to 12 000 years. This discrepancy is substantial. If we use the faster clock to calculate the date of our last common ancestor, Mitochondrial Eve, we are forced to conclude she lived about 6000 years ago, more commensurate with Biblical Eve than African Eve, who supposedly lived 170 000 years ago. Clearly the recent date is not correct, but how do we explain such a big disparity?
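As a rough illustration of how far apart the two calibrations pull the date, the sketch below uses the per-mutation intervals quoted above, while the count of differences separating two living lineages is a purely hypothetical round number, not a figure from the book.

```python
# Naive molecular-clock arithmetic: elapsed time ~ accumulated differences
# multiplied by years per mutation. Rates come from the text; the difference
# count is a hypothetical illustration.

def time_to_ancestor(n_differences, years_per_mutation):
    return n_differences * years_per_mutation

pedigree_rate = 1000      # ~1 mutation per 800-1200 years (family comparisons)
phylogenetic_rate = 9000  # ~1 mutation per 6000-12000 years (fossil/colonization calibration)

n_diff = 20  # hypothetical count of mutations separating two living mitochondrial lineages

print(time_to_ancestor(n_diff, pedigree_rate))      # 20000 -> a very recent 'Eve'
print(time_to_ancestor(n_diff, phylogenetic_rate))  # 180000 -> an ancient, 'African' Eve
```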
An important fossil finding in south-east Australia may give a clue to the answer. The fossil is an anatomically modern human, and is famous as the source of the world's oldest mitochondrial DNA. It was discovered near Lake Mungo in 1969, and later tentatively dated to about 60 000 years old. In 2001, an Australian team reported the mitochondrial DNA sequence, and it came as a shock: nothing similar has ever been found in a living person. The line has fallen extinct.
This raises several deep questions. In particular, we earlier classified the Neanderthals as a separate subspecies that fell extinct, on the basis of an extinct mitochondrial sequence, but we are now faced with an anatomically modern human whose mitochondrial line has done the same thing. Applying the same rules, we would be obliged to say that this human, too, represented a separate subspecies that fell extinct, yet we know from the anatomical appearance that we must share nuclear genes with its population. Presumably, there was some genetic continuity between the populations. The simplest way to reconcile the discrepancy is to conclude that a mitochondrial sequence does not invariably record the history of a population; but this forces us to question our interpretation of the past based on mitochondrial sequences alone.
What might have happened? Imagine an anatomically modern human population living in Australia. Let's say they migrated there from Africa less than 100 000 years ago. Later, a new migrant population arrives, and there is a limited degree of interbreeding. If a new-migrant mother mates with an indigenous father, and they have a healthy daughter, then her mitochondrial DNA will be 100 per cent new-migrant (assuming there is no recombination), but her nuclear genes will be 50 per cent indigenous. If everyone else fails to leave a continuous line of daughters, and our mixed child alone mothers a new population, then the indigenous mitochondrial DNA will fall extinct, while at least some indigenous nuclear genes will survive. In other words, interbreeding is quite compatible with the extinction of a lineage of mitochondrial DNA, and if we try to reconstruct history by mitochondrial DNA alone we might easily be misled. Exactly the same applies to Neanderthals, so we can't conclude from their mitochondrial DNA that they disappeared without trace. (Richard Dawkins comes to a similar conclusion, from different considerations, in The Ancestor's Tale.) But is this scenario likely, or merely a technical possibility? It implies the survival of only a single line of daughters; would all the indigenous mitochondrial lines really fall extinct so easily?
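To see that this is more than a technical possibility, here is a toy drift simulation (a construction for illustration only, not anything from the book): each generation, daughters take their mitochondrial type from a randomly chosen mother but average the indigenous nuclear ancestry of a random mother and a random father. Population size, migrant share, and run length are arbitrary assumptions.

```python
import random

def one_run(n_women=50, migrant_share=0.2, generations=1000, rng=None):
    """Toy model: each woman carries (mitochondrial type, indigenous nuclear fraction)."""
    rng = rng or random.Random()
    women = [("migrant", 0.0) if i < n_women * migrant_share else ("indigenous", 1.0)
             for i in range(n_women)]
    for _ in range(generations):
        next_gen = []
        for _ in range(n_women):
            mother = rng.choice(women)
            father_nuclear = rng.choice(women)[1]  # fathers drawn from the same ancestry pool
            # mitochondria come only from the mother; nuclear ancestry from both parents
            next_gen.append((mother[0], 0.5 * (mother[1] + father_nuclear)))
        women = next_gen
    mito_survives = any(m == "indigenous" for m, _ in women)
    nuclear_share = sum(f for _, f in women) / n_women
    return mito_survives, nuclear_share

rng = random.Random(42)
runs = [one_run(rng=rng) for _ in range(20)]
lost = [nuc for survives, nuc in runs if not survives]
print(f"indigenous mitochondrial type lost in {len(lost)} of 20 runs")
if lost:
    print(f"indigenous nuclear ancestry in those runs: ~{sum(lost) / len(lost):.2f}")
```

Even in runs where the indigenous mitochondrial type disappears entirely, a substantial share of indigenous nuclear ancestry typically remains.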
They might. I mentioned that mitochondrial DNA works like a surname, and surnames easily fall extinct, as first shown by the Victorian polymath Francis Galton in his book Hereditary Genius of 1869. It seems the average 'lifespan' of a surname is only about two hundred years. In the UK, about three hundred families claim descent from William the Conqueror, but none can prove unbroken descent down the male line. All five thousand feudal knighthoods listed in the Domesday Book of 1086 are now extinct, and the average duration of a hereditary title in the middle ages was three generations. In Australia, the 1912 census showed that half the children descended from just one-ninth of the men and one-seventh of the women. The essential point, stressed by Australian fertility expert Jim Cummins, is that reproductive success is extremely unevenly distributed in populations. Most lines fall extinct; and just the same applies to mitochondrial DNA.
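Galton's observation can be restated as a branching-process calculation. The sketch below is an illustration under a simplifying assumption (a Poisson-distributed number of daughters per woman) rather than a reproduction of Galton's own method; iterating the generating function from zero converges on the probability that a single matrilineal line, like a surname, eventually dies out.

```python
import math

def extinction_probability(mean_daughters, iterations=500):
    """Chance a single line eventually dies out, assuming Poisson offspring numbers.

    Iterating q -> exp(m * (q - 1)) from q = 0 gives the probability of
    extinction by generation n, which converges to the eventual value.
    """
    q = 0.0
    for _ in range(iterations):
        q = math.exp(mean_daughters * (q - 1.0))
    return q

for m in (0.9, 1.0, 1.1, 1.5):
    print(f"mean daughters per woman = {m}: extinction probability ~ {extinction_probability(m):.2f}")
```

Even when the mean number of daughters is above one, so the population as a whole grows, a large fraction of individual lines are still expected to die out.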
Is this just neutral drift, or does natural selection come into it? Again the Lake Mungo fossil provides a clue. In 2003, James Bowler, one of the discoverers of the fossil in 1969, and his colleagues showed that the 60 000-year date for the fossil is incorrect. They re-dated the remains to around 40 000 years ago, based on a far more complete analysis of the stratigraphy. The new date is interesting, as it coincides with a period of climate change, when the lakes and rivers dried out and much of south-eastern Australia became an arid desert. In other words, the Mungo line of mitochondrial DNA fell extinct at a time of changing selection pressures.
This raises the spectre of natural selection acting on mitochondrial genes. According to orthodoxy, it doesn’t. If sequence changes accumulate slowly over thousands of years, and the entire trail of changes can be tracked by comparing the genomes of living people, then none of the intermediary changes could have been eliminated by natural selection—the whole succession of changes must have been random, neutral mutations. Yet this cannot explain the discrepancy between a high mutation rate and a slow rate of divergence—of evolution. Natural selection can. If the lines that evolve the fastest (which is to say diverge the most) are eliminated by natural selection, then the survivors must have smaller evolutionary variations. I mentioned earlier that we should not confound a high mutation rate with a high rate of evolution. This is a case in point. The mutation rate is fast but the evolution rate is slower, because a proportion of the mutations have negative consequences, and so are eliminated by selection. The discrepancy is squared by selection.
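In other words, the long-term divergence rate is roughly the raw mutation rate thinned by the fraction of new mutations that survive selection. The figures below are illustrative assumptions chosen to match the intervals quoted earlier, not measurements.

```python
# Toy reconciliation of the two clocks: purifying selection removes most new
# mutations, so few of them persist long enough to register as divergence.

pedigree_interval = 1000     # years per new mutation within families (fast clock)
fraction_surviving = 0.1     # assumed share of mutations that escape purifying selection

long_term_interval = pedigree_interval / fraction_surviving
print(long_term_interval)    # 10000 years per surviving mutation, near the slow, phylogenetic clock
```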
In the case of the Lake Mungo fossil, the extinction of the mitochondrial DNA line might be put down to natural selection, but this would go against the mantra. Could natural selection be the answer? In fact, there is now good evidence that it does.
In 2004, Douglas Wallace, the guru of mitochondrial geneticists, and his group at the University of California, Irvine, published fascinating evidence that natural selection does indeed operate on mitochondrial genes. Wallace himself, in two decades at Emory University in Atlanta, pioneered the mitochondrial typing of human populations, and his work in the early 1980s underpinned the famous 1987 Nature paper by Cann, Stoneking, and Wilson, which we considered at the beginning of this chapter. Wallace's worldwide genetic tree defines a number of mitochondrial lineages, which he termed haplogroups, and which later came to be known as the daughters of Eve. To these groups he assigned alphabetical letters, known as the Emory classification. Bryan Sykes at Oxford later used the letters as the basis of personal names in his popular bestseller The Seven Daughters of Eve, which referred only to European lines.
Wallace (inexplicably unmentioned in Sykes's book) is not just the guru of mitochondrial population genetics, but also of mitochondrial diseases, which number in the hundreds, quite out of proportion to the small number of mitochondrial genes. These diseases are often caused by tiny variations in the mitochondrial sequence. Not surprisingly, given his interest in the grave consequences of such variations for health, Wallace has long suspected that mitochondrial genes might be subject to natural selection. Obviously, if they cause a crippling disease they are likely to be eliminated by natural selection.
Wallace and his colleagues first drew attention to statistical evidence of 'purifying selection' in the early 1990s. Over the following decade, Wallace kept these findings at the back of his mind. In many studies of mitochondrial genetics, he noted repeatedly that the geographical distribution of mitochondrial genes in human populations was not random, as predicted by the theory of neutral drift, but that particular genes thrived in certain places, often a telltale sign of selection at work. Of all the abundant lines of mitochondrial DNA in Africa, for example, only a handful ever left the continent; most remained strictly African. The great variety of mitochondrial DNA in the rest of the world blossomed from just a few selected groups. Similarly, in Asia, of all the mitochondrial variety, only a few types ever managed to settle in Siberia, and later migrate to the Americas. Might it be, asked Wallace, that some mitochondrial genes are adapted to particular climates, and do better there, whereas others are penalized if they leave home?
By 2002, Wallace and colleagues were beginning to look into the matter more seriously, and signalled their outlook in some thoughtful discussion papers, but it was not until 2004 that they finally found proof. The idea is breathtakingly simple, and yet holds important implications for human evolution and health. The mitochondria, they said, have two main roles: to produce energy, and to produce heat. The balance between energy generation and heat production can vary, and the actual setting might be critical to our health. Here’s why.
Much of our internal heat is generated by dissipating the proton gradient across the mitochondrial membranes (see page 183). Since the proton gradient can either power ATP production or heat production, we are faced with alternatives: any protons dissipated to produce heat cannot be used to make ATP. (As we saw in Part 2, the proton gradient has other critical functions too, but if we assume that these remain constant, they don't affect our argument.) If 30 per cent of the proton gradient is used to produce heat, then no more than 70 per cent can be used to produce ATP. Wallace and colleagues realized that this balance could plausibly shift according to the climate. People living in tropical Africa would gain from a tight coupling of protons to ATP production, so generating less internal heat in a hot climate, whereas the Inuit, say, would gain by generating more internal heat in their frigid environment, and so would necessarily generate relatively little ATP. To compensate for their lower ATP production, they would need to eat more.
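As a simple worked version of that trade-off (the 30/70 split echoes the text; the 50/50 'arctic' split and the fixed proton flux are illustrative assumptions, not figures from Wallace's work):

```python
def partition(heat_fraction, proton_flux=100.0):
    """Split a fixed proton flux between heat generation and ATP synthesis."""
    return proton_flux * heat_fraction, proton_flux * (1.0 - heat_fraction)

tropical_heat, tropical_atp = partition(0.3)  # tightly coupled: 30% heat, 70% ATP
arctic_heat, arctic_atp = partition(0.5)      # more uncoupled: 50% heat, 50% ATP

# To make the same amount of ATP, the more uncoupled setting needs
# proportionally more fuel, i.e. more food.
print(tropical_atp / arctic_atp)  # 1.4: roughly 40% more intake for the same ATP
```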
Wallace set out to find any mitochondrial genes that might influence the balance between heat production and ATP generation, and found several variants that plausibly affected heat production (by uncoupling electron flow from ATP production, so that more of the proton gradient is dissipated as heat). The variants that produced the most heat were favoured in the Arctic, as expected, while those that produced the least were found in Africa.
While this seems no more than common sense, the connotations conceal a twist worthy of a murder mystery. Recall from Part 4 (page 183) that the rate of free-radical formation doesn't depend on the speed of respiration, but rather on how fully the respiratory chains are packed with electrons. If electron flow is very sluggish, because there is little demand for energy, electrons build up in the chains and can escape to form free radicals. In Part 4, we saw that a fast rate of free-radical formation can be reduced if electron flow is maintained down the chains, and this can be achieved by dissipating the proton gradient to generate heat. We compared the situation with a hydroelectric dam on a river, in which the overflow channels prevent flooding. The pressing need to dissipate the proton gradient may have overridden its wastefulness, and given rise to endothermy, just as the need to prevent flooding may override the waste of water through the overflow channels. The long and short of it is that raising internal heat generation lowers free-radical formation at rest, whereas lowering internal heat production increases the risk of free-radical production at rest.
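A toy way to put the dam analogy into code (entirely illustrative, with arbitrary units, not a model from the book): treat free-radical leak as proportional to how fully the chain is packed with electrons, where packing rises when electrons flow in faster than ATP demand plus uncoupling can drain them.

```python
def chain_packing(electron_input, atp_demand, uncoupling):
    """Crude index of how full the respiratory chain is (0 = empty, 1 = saturated)."""
    outflow_capacity = atp_demand + uncoupling
    return min(1.0, electron_input / outflow_capacity)

def radical_leak(packing, leak_constant=0.05):
    """Free-radical leak assumed proportional to electron packing."""
    return leak_constant * packing

# At rest, ATP demand is low. A fully coupled chain backs up and leaks more;
# dissipating some of the gradient as heat keeps electrons flowing.
resting_coupled = radical_leak(chain_packing(electron_input=10, atp_demand=4, uncoupling=0))
resting_uncoupled = radical_leak(chain_packing(electron_input=10, atp_demand=4, uncoupling=8))
print(resting_coupled, resting_uncoupled)  # 0.05 vs ~0.042
```

The arithmetic only restates the qualitative point: uncoupling wastes some of the gradient but keeps the chain relatively empty at rest, so fewer electrons escape as free radicals.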