Trick or Treatment

Authors: Simon Singh, Edzard Ernst M.D.

The fate of a nation is of major historic importance, yet the application of the clinical trial would have even greater significance in the centuries ahead. Medical researchers would go on to use clinical trials routinely to decide which treatments worked and which were ineffective. In turn, this would allow doctors to save hundreds of millions of lives around the world because they would be able to cure diseases by confidently relying on proven medicines, rather than mistakenly advocating quack remedies.

Bloodletting, because of its central role in medicine, was one of the first treatments to be submitted to testing via the controlled clinical trial. In 1809, just a decade after Washington had undergone bloodletting on his deathbed, a Scottish military surgeon called Alexander Hamilton set out to determine whether or not it was advisable to bleed patients. Ideally, his clinical trial would have examined the impact of bloodletting on a single disease or symptom, such as gonorrhoea or fever, because the results tend to be clearer if a trial is focused on one treatment for one ailment. However, the trial took place while Hamilton was serving in the Peninsular War in Portugal, where battlefield conditions did not afford him the luxury of conducting an ideal trial – instead, he examined the impact of bloodletting on a broad range of conditions. To be fair to Hamilton, this was not such an unreasonable design for his trial, because at the time bloodletting was touted as a panacea – if physicians believed that bloodletting could cure every disease, then it could be argued that the trial should include patients with every disease.

Hamilton began his trial by dividing a sample of 366 soldiers with a variety of medical problems into three groups. The first two groups were treated by himself and a colleague (Mr Anderson) without resorting to bloodletting, whereas the third group was treated by an unnamed doctor who administered the usual treatment of employing a lancet to bleed his patients. The results of the trial were clear:

‘It had been so arranged, that this number was admitted, alternately, in such a manner that each of us had one third of the whole. The sick were indiscriminately received, and were attended as nearly as possible with the same care and accommodated with the same comforts…Neither Mr Anderson nor I ever once employed the lancet. He lost two, I four cases; whilst out of the other third thirty-five patients died.’

 

The death rate for patients treated with bloodletting was roughly ten times greater than for those patients who avoided bloodletting. This was a damning indictment of drawing blood and a vivid demonstration that it caused deaths rather than saved lives. It would have been hard to argue with the trial’s conclusion, because it scored highly on two of the main factors that determine the quality of a trial.
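The arithmetic behind that ten-fold figure can be checked directly. This is a minimal sketch: the equal group sizes of 122 are an assumption implied by the ‘alternate’ allocation of 366 soldiers into thirds, not a figure stated in Hamilton’s report.

```python
# Quick check of the death-rate comparison in Hamilton's trial.
# Assumption: 366 soldiers split into three equal groups of 122.
group_size = 366 // 3            # 122 patients per group

deaths_no_bleeding = 2 + 4       # Anderson's and Hamilton's groups combined
deaths_bleeding = 35             # the third group, treated with the lancet

rate_no_bleeding = deaths_no_bleeding / (2 * group_size)
rate_bleeding = deaths_bleeding / group_size

ratio = rate_bleeding / rate_no_bleeding
print(f"Death rate without bleeding: {rate_no_bleeding:.1%}")  # 2.5%
print(f"Death rate with bleeding:    {rate_bleeding:.1%}")     # 28.7%
print(f"Ratio: {ratio:.1f}x")                                  # 11.7x
```

On these assumptions the true ratio comes out closer to twelve-fold, so the book’s ‘ten times greater’ is, if anything, an understatement.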

First, the trial was carefully controlled, which means that the separate groups of patients were treated similarly except for one particular factor, namely bloodletting. This allowed Hamilton to isolate the impact of bloodletting. Had the bloodletting group been kept in poorer conditions or given a different diet, then the higher death rate could have been attributed to environment or nutrition, but Hamilton had ensured that all the groups received the ‘same care’ and ‘same comforts’. Therefore bloodletting alone could be identified as being responsible for the higher death rate in the third group.

Second, Hamilton had tried to ensure that his trial was fair by guaranteeing that the groups being studied were on average as similar as possible. He achieved this by avoiding any systematic assignment of patients, such as deliberately steering elderly soldiers towards the bloodletting group, which would have biased the trial against bloodletting. Instead, Hamilton assigned patients to each group ‘alternately’ and ‘indiscriminately’, which today is known as randomizing the allocation of treatments in a trial. If the patients are randomly assigned to groups, then it can be assumed that the groups will be broadly similar in terms of any factor, such as age, income, gender or the severity of the illness, which might affect a patient’s outcome. Randomization even allows unknown factors to be balanced equally across the groups. Fairness through randomization is particularly effective if the initial pool of participants is large, and in this case the number of participants (366 patients) was impressively large. Today medical researchers call this a randomized controlled trial (or RCT) or a randomized clinical trial, and it is considered the gold standard for putting therapies to the test.
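The balancing effect of random allocation is easy to see in a toy simulation. The patient ages below are invented purely for illustration, and the three-way random draw stands in for Hamilton’s alternate admission; the point is only that, with enough patients, any prognostic factor tends to even out across the groups.

```python
# Toy simulation of randomized allocation, the principle behind an RCT.
# Ages are invented; any factor (age, severity, ...) balances similarly.
import random
from statistics import mean

random.seed(1)
patients = [{"age": random.randint(18, 60)} for _ in range(366)]

groups = {0: [], 1: [], 2: []}
for patient in patients:
    groups[random.randrange(3)].append(patient)   # random assignment

for label, members in groups.items():
    avg_age = mean(p["age"] for p in members)
    print(f"group {label}: {len(members)} patients, mean age {avg_age:.1f}")
```

Running this, the three group means land close together without anyone having matched the patients by hand; that is the sense in which randomization makes a comparison ‘fair’ even for factors the experimenter never thought to measure.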

Although Hamilton succeeded in conducting the first randomized clinical trial on the effects of bloodletting, he failed to publish his results. In fact, we know of Hamilton’s research only because his documents were rediscovered in 1987 among papers hidden in a trunk lodged with the Royal College of Physicians in Edinburgh. Failure to publish is a serious dereliction of duty for any medical researcher, because publication has two important consequences. First, it encourages others to replicate the research, which might either reveal errors in the original research or confirm the result. Second, publication is the best way to disseminate new research, so that others can apply what has been learned.

Lack of publication meant that Hamilton’s bloodletting trial had no impact on the widespread enthusiasm for the practice. Instead, it would take a few more years before other medical pioneers, such as the French doctor Pierre Louis, would conduct their own trials and confirm Hamilton’s conclusion. These results, which were properly published and disseminated, repeatedly showed that bloodletting was not a lifesaver, but rather it was a potential killer. In light of these findings, it seems highly likely that bloodletting was largely responsible for the death of George Washington.

Unfortunately, because these anti-bloodletting conclusions were contrary to the prevailing view, many doctors struggled to accept them and even tried their best to undermine them. For example, when Pierre Louis published the results of his trials in 1828, many doctors dismissed his negative conclusion about bloodletting precisely because it was based on the data gathered by analysing large numbers of patients. They slated his so-called ‘numerical method’ because they were more interested in treating the individual patient lying in front of them than in what might happen to a large sample of patients. Louis responded by arguing that it was impossible to know whether or not a treatment might be safe and effective for the individual patient unless it had been demonstrated to be safe and effective for a large number of patients: ‘A therapeutic agent cannot be employed with any discrimination or probability of success in a given case, unless its general efficacy, in analogous cases, has been previously ascertained…without the aid of statistics nothing like real medicine is possible.’

And when the Scottish doctor Alexander MacLean advocated the use of medical trials to test treatments while he was working in India in 1818, critics argued that it was wrong to experiment with the health of patients in this way. He responded by pointing out that avoiding trials would mean that medicine would for ever be nothing more than a collection of untested treatments, which might be wholly ineffective or dangerous. He described medicine practised without any evidence as ‘a continued series of experiments upon the lives of our fellow creatures.’

Despite the invention of the clinical trial and regardless of the evidence against bloodletting, many European doctors continued to bleed their patients, so much so that France had to import 42 million leeches in 1833. But as each decade passed, rationality began to take hold among doctors, trials became more common, and dangerous and useless therapies such as bloodletting began to decline.

Prior to the clinical trial, a doctor decided his treatment for a particular patient by relying on his own prejudices, or on what he had been taught by his peers, or on his misremembered experiences of dealing with a handful of patients with a similar condition. After the advent of the clinical trial, doctors could choose their treatment for a single patient by examining the evidence from several trials, perhaps involving thousands of patients. There was still no guarantee that a treatment that had succeeded during a set of trials would cure a particular patient, but any doctor who adopted this approach was giving his patient the best possible chance of recovery.

Lind’s invention of the clinical trial had triggered a gradual revolution that gained momentum during the course of the nineteenth century. It transformed medicine from a dangerous lottery in the eighteenth century into a rational discipline in the twentieth century. The clinical trial helped give birth to modern medicine, which has enabled us to live longer, healthier, happier lives.

Evidence-based medicine

 

Because clinical trials are an important factor in determining the best treatments for patients, they have a central role within a movement known as evidence-based medicine. Although the core principles of evidence-based medicine would have been appreciated by James Lind back in the eighteenth century, the movement did not really take hold until the mid-twentieth century, and the term itself did not appear in print until 1992, when it was coined by David Sackett at McMaster University, Ontario. He defined it thus: ‘Evidence-based medicine is the conscientious, explicit, and judicious use of current best evidence in making decisions about the care of individual patients.’

Evidence-based medicine empowers doctors by providing them with the most reliable information, and therefore it benefits patients by increasing the likelihood that they will receive the most appropriate treatment. From a twenty-first-century perspective, it seems obvious that medical decisions should be based on evidence, typically from randomized clinical trials, but the emergence of evidence-based medicine marks a turning point in the history of medicine.

Prior to the development of evidence-based medicine, doctors were spectacularly ineffective. Those patients who recovered from disease were usually successful despite the treatments they had received, not because of them. But once the medical establishment had adopted such simple ideas as the clinical trial, then progress became swift. Today the clinical trial is routine in the development of new treatments and medical experts agree that evidence-based medicine is the key to effective healthcare.

However, people outside the medical establishment sometimes find the concept of evidence-based medicine cold, confusing and intimidating. If you have any sympathy with this point of view, then, once again, it is worth remembering what the world was like before the advent of the clinical trial and evidence-based medicine: doctors were oblivious to the harm they caused by bleeding millions of people, indeed killing many of them, including George Washington. These doctors were not stupid or evil; they merely lacked the knowledge that emerges when medical trials flourish.

Recall Benjamin Rush, for example, the prolific bleeder who sued for libel and won his case on the day that Washington died. He was a brilliant, well-educated man and a compassionate one, who was responsible for recognizing addiction as a medical condition and realizing that alcoholics lose the capacity to control their drinking behaviour. He was also an advocate for women’s rights, fought to abolish slavery and campaigned against capital punishment. However, this combination of intelligence and decency was not enough to stop him from killing hundreds of patients by bleeding them to death, and encouraging many of his students to do exactly the same.

Rush was fooled by his respect for ancient ideas coupled with the ad hoc reasons that were invented to justify the use of bloodletting. For example, it would have been easy for Rush to mistake the sedation caused by bloodletting for a genuine improvement, unaware that he was draining the life out of his patients. He was also probably confused by his own memory, selectively remembering those of his patients who survived bleeding and conveniently forgetting those who died. Moreover, Rush would have been tempted to attribute any success to his treatment and to dismiss any failure as the fault of a patient who in any case was destined to die.

Although evidence-based medicine now condemns the sort of bloodletting that Rush indulged in, it is important to point out that evidence-based medicine also means remaining open to new evidence and revising conclusions. For example, thanks to the latest evidence from new trials, bloodletting is once again an acceptable treatment in very specific situations – it has now been demonstrated, for instance, that bloodletting as a last resort can ease the fluid overload caused by heart failure. Similarly, there is now a role for leeches in helping patients recover from some forms of surgery. For example, in 2007 a woman in Yorkshire had leeches placed in her mouth four times a day for a week and a half after having a cancerous tumour removed and her tongue reconstructed. This was because leeches release chemicals that increase blood flow and thus accelerate healing.

Despite being an undoubted force for good, evidence-based medicine is occasionally treated with suspicion. Some people perceive it as being a strategy for allowing the medical establishment to defend its own members and their treatments, while excluding outsiders who offer alternative treatments. In fact, as we have seen already, the opposite is often true, because evidence-based medicine actually allows outsiders to be heard – it endorses any treatment that turns out to be effective, regardless of who is behind it, and regardless of how strange it might appear to be. Lemon juice as a treatment for scurvy was an implausible remedy, but the establishment had to accept it because it was backed up by evidence from trials. Bloodletting, on the other hand, was very much a standard treatment, but the establishment eventually had to reject its own practice because it was undermined by evidence from trials.

There is one episode from the history of medicine that illustrates particularly well how an evidence-based approach forces the medical establishment to accept the conclusions that emerge when medicine is put to the test. Florence Nightingale, the Lady with the Lamp, was a woman with very little reputation, but she still managed to win a bitter argument against the male-dominated medical establishment by arming herself with solid, irrefutable data. Indeed, she can be seen as one of the earliest advocates of evidence-based medicine, and she successfully used it to transform Victorian healthcare.
