How to Read a Paper: The Basics of Evidence-Based Medicine

11 Devlin NJ, Appleby J, Buxton M. Getting the most out of PROMs: putting health outcomes at the heart of NHS decision-making. King's Fund, London, 2010.

12 Basch E. Standards for patient-reported outcome-based performance measures. JAMA: The Journal of the American Medical Association 2013;310(2):139–40. doi: 10.1001/jama.2013.6855.

13 Makoul G, Clayman ML. An integrative model of shared decision making in medical encounters. Patient Education and Counseling 2006;60(3):301–12.

14 Elwyn G, Edwards A, Kinnersley P. Shared decision-making in primary care: the neglected second half of the consultation. The British Journal of General Practice 1999;49(443):477–82.

15 Elwyn G, Edwards A, Kinnersley P, et al. Shared decision making and the concept of equipoise: the competences of involving patients in healthcare choices. The British Journal of General Practice 2000;50(460):892–9.

16 Edwards A, Elwyn G, Hood K, et al. Patient-based outcome results from a cluster randomized trial of shared decision making skill development and use of risk communication aids in general practice. Family Practice 2004;21(4):347–54.

17 Edwards A, Elwyn G, Mulley A. Explaining risks: turning numerical data into meaningful pictures. BMJ: British Medical Journal 2002;324(7341):827.

18 Stiggelbout A, Weijden T, Wit MD, et al. Shared decision making: really putting patients at the centre of healthcare. BMJ: British Medical Journal 2012;344:e256.

19 Elwyn G, Hutchings H, Edwards A, et al. The OPTION scale: measuring the extent that clinicians involve patients in decision-making tasks. Health Expectations 2005;8(1):34–42.

20 Coulter A, Collins A. Making shared decision-making a reality. No decision about me, without me. King's Fund, London, 2011.

21 Gravel K, Légaré F, Graham ID. Barriers and facilitators to implementing shared decision-making in clinical practice: a systematic review of health professionals' perceptions. Implementation Science 2006;1(1):16.

22 Elwyn G, Rix A, Holt T, et al. Why do clinicians not refer patients to online decision support tools? Interviews with front line clinics in the NHS. BMJ Open 2012;2(6). doi: 10.1136/bmjopen-2012-001530.

23 Elwyn G, Lloyd A, Joseph-Williams N, et al. Option Grids: shared decision making made easier. Patient Education and Counseling 2013;90:207–12.

24 Elwyn G, Lloyd A, Williams NJ, et al. Shared decision-making in a multidisciplinary head and neck cancer team: a case study of developing Option Grids. International Journal of Person Centered Medicine 2012;2(3):421–6.

25 Thomson R, Kinnersley P, Barry M. Shared decision making: a model for clinical practice. Journal of General Internal Medicine 2012;27(10):1361–7.

26 March L, Irwig L, Schwarz J, et al. n of 1 trials comparing a non-steroidal anti-inflammatory drug with paracetamol in osteoarthritis. BMJ: British Medical Journal 1994;309(6961):1041–6.

27 Lillie EO, Patay B, Diamant J, et al. The n-of-1 clinical trial: the ultimate strategy for individualizing medicine? Personalized Medicine 2011;8(2):161–73.

28 Moore A, Derry S, Eccleston C, et al. Expect analgesic failure; pursue analgesic success. BMJ: British Medical Journal 2013;346:f2690.

Chapter 17

Criticisms of evidence-based medicine

What's wrong with EBM when it's done badly?

This new chapter is necessary because evidence-based medicine (EBM) has long outlived its honeymoon period. There is, quite appropriately, a growing body of scholarship that offers legitimate criticisms of EBM's assumptions and core approaches. There is also a somewhat larger body of misinformed critique – and a grey zone of ‘anti-EBM’ writing that contains more than a grain of truth but is itself one-sided and poorly argued. This chapter seeks to set out the legitimate criticisms and point the interested reader towards more in-depth arguments.

To inform this chapter, I have drawn on a number of sources: a widely cited short article by BMJ columnist and common-sense general practitioner (GP), Spence [1]; a book by Timmermans and Berg [2] called The Gold Standard: The challenge of evidence-based medicine and standardization in health care; a paper by Timmermans and Mauck [3] on the promises and pitfalls of EBM; a ‘20 years on’ reflection by some EBM gurus [4]; Goldacre's [5] book ‘Bad Pharma’; and some additional materials on evidence-based policymaking referenced in Section ‘Why is ‘evidence-based policymaking’ so hard to achieve?’.

The first thing we need to get clear is the distinction between EBM when it is practised badly (this section) and EBM when it is practised well (next section). As a starter for this section, I am going to reproduce two paragraphs from the preface to this book, written for the first edition way back in 1995 and still unchanged in this fifth edition:

Many of the descriptions given by cynics of what evidence-based medicine is (the glorification of things that can be measured without regard for the usefulness or accuracy of what is measured, the uncritical acceptance of published numerical data, the preparation of all-encompassing guidelines by self-appointed ‘experts’ who are out of touch with real medicine, the debasement of clinical freedom through the imposition of rigid and dogmatic clinical protocols, and the over-reliance on simplistic, inappropriate, and often incorrect economic analyses) are actually criticisms of what the evidence-based medicine movement is fighting against, rather than of what it represents.

Do not, however, think of me as an evangelist for the gospel according to evidence-based medicine. I believe that the science of finding, evaluating and implementing the results of medical research can, and often does, make patient care more objective, more logical, and more cost-effective. If I didn't believe that, I wouldn't spend so much of my time teaching it and trying, as a general practitioner, to practise it. Nevertheless, I believe that when applied in a vacuum (that is, in the absence of common sense and without regard to the individual circumstances and priorities of the person being offered treatment or to the complex nature of clinical practice and policymaking), ‘evidence-based’ decision-making is a reductionist process with a real potential for harm.

Let's unpack these issues further. What does ‘EBM practised badly’ look like?

First, bad EBM cites numbers derived from population studies but asks no upstream questions about where those numbers (or studies) came from. If you have spent time on the wards or in general practice, you will know the type of person who tends to do this: a fast-talking, technically adept individual who appears to know the literature and how to access it (perhaps via apps on their state-of-the-art tablet computer), and who always seems to have an NNT (number needed to treat) or odds ratio at his or her fingertips. But the fast talker is less skilled at justifying why this set of ‘evidence-based’ figures should be privileged over some other set of figures. Their evidence, for example, may come from a single trial rather than a high-quality and recent meta-analysis of all available trials. Self-appointed fast-talking EBM ‘experts’ tend to be unreflective (i.e. they don't spend much time thinking deeply about things) and they rarely engage critically with the numbers they are citing. They may not, for example, have engaged with the arguments about surrogate endpoints I set out on page 81.
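For readers who want to see where figures like these come from, here is a minimal Python sketch of how an NNT and an odds ratio are derived from a trial's event counts. All the numbers are invented for illustration; nothing here is drawn from any study cited in this chapter.

```python
# Illustrative sketch only: deriving an NNT and an odds ratio from a
# trial's 2x2 event counts. All figures below are hypothetical.

def absolute_risk_reduction(events_control, n_control, events_treated, n_treated):
    """ARR: control-group event rate minus treatment-group event rate."""
    return events_control / n_control - events_treated / n_treated

def number_needed_to_treat(events_control, n_control, events_treated, n_treated):
    """NNT = 1/ARR: how many patients must be treated to prevent one extra event."""
    return 1 / absolute_risk_reduction(events_control, n_control,
                                       events_treated, n_treated)

def odds_ratio(events_control, n_control, events_treated, n_treated):
    """Odds of the event on treatment divided by the odds on control."""
    odds_treated = events_treated / (n_treated - events_treated)
    odds_control = events_control / (n_control - events_control)
    return odds_treated / odds_control

# Hypothetical trial: 20/100 events in the control arm, 10/100 on treatment.
# ARR = 0.20 - 0.10 = 0.10, so the NNT is 10; the odds ratio is (10/90)/(20/80).
```

The point of the sketch is the one made in the text: the arithmetic is trivial, so the critical question is never how to compute the figure but which trial's counts deserve to be fed into it.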

Bad EBM considers the world of published evidence to equate to the world of patient need. Hence, it commits two fallacies: it assumes that if (say) a randomised controlled trial (RCT) exists that tested a treatment for a ‘disease’, that disease is necessarily a real medical problem requiring treatment; and it also assumes that if ‘methodologically robust’ evidence does not exist on a topic, that topic is unimportant. This leads to a significant bias. The evidence base will accumulate in conditions that offer the promise of profit to the pharmaceutical and medical device industries – such as the detection, monitoring and management of risk factors for cardiovascular disease [6]; the development and testing of new drug entities for diabetes [7]; or the creation and treatment of non-diseases such as ‘female hypoactive sexual desire’ [8]. Evidence will also accumulate in conditions that government chooses to recognise and prioritise for publicly funded research, but it will fail to accumulate (or will accumulate much more slowly) in Cinderella conditions that industry and/or government deem unimportant, hard-to-classify or ‘non-medical’, such as multi-morbidity [9], physical activity in cardiovascular prevention [10], domestic violence [11] or age-related frailty [12].

Bad EBM has little regard for the patient perspective and fails to acknowledge the significance of clinical judgement. As I pointed out in Section ‘The patient perspective’, the ‘best’ treatment is not necessarily the one shown to be most efficacious in RCTs but the one that fits a particular set of individual circumstances and aligns with the patient's preferences and priorities.

Finally, bad EBM draws on bad research – for example, research that has used weak sampling strategies, unjustified sample sizes, inappropriate comparators, statistical trick-cycling, and so on. Chapter 6 set out some specific ways in which research (and the way it is presented) can mislead. Whilst people behaving in this way will often claim to be members of the EBM community (e.g. their papers may have ‘evidence-based’ in the title), the more scholarly members of that community would strongly dispute such claims.

What's wrong with EBM when it's done well?

Whilst I worry as a clinician about EBM done badly, the academic in me is more interested in its limitations when done well. This is because there are good philosophical reasons why EBM will never be the fount of all knowledge.

A significant criticism of EBM, highlighted by Timmermans and Berg in their book, is the extent to which EBM is a formalised method for imposing an unjustifiable degree of standardisation and control over clinical practice. They argue that in the modern clinical world, EBM can be more or less equated with the production and implementation of clinical practice guidelines. ‘Yet’, they argue (p. 3), ‘such evidence is only rarely available to cover all the decision moments of a guideline. To fill in the blanks and to interpret conflicting statements that might exist in the literature, additional, less objective steps [such as consensus methods] are necessary to create a guideline’ [2].

Because of these (sometimes subtle) gaps in the research base, Timmermans and Berg contend that an ‘evidence-based’ guideline is usually not nearly as evidence-based as it appears to be. But the formalisation of the evidence into guidelines, which may then become ossified in protocols or computerised decision support programmes, lends an unjustified level of significance – and sometimes coercion – to the guideline. The rough edges are sanded down, the holes are filled in and the resulting recommendations start to acquire biblical significance!

One nasty side effect of this ossification is that yesterday's best evidence drags down today's guidelines and clinical pathways. An example is the lowering of blood glucose in type 2 diabetes. For many years, the ‘evidence-based’ assumption was that the more intensively a person's blood glucose was controlled, the better the outcomes would be. But more recently, a large meta-analysis showed that intensive glucose control had no benefit over moderate control, but was associated with a twofold increase in the incidence of severe hypoglycaemia [13]. Yet, UK GPs were still being performance-managed through a scheme called the Quality and Outcomes Framework (QOF) to strive for intensive glucose control after the publication of that meta-analysis had shown an adverse benefit–harm ratio [14]. This is because it takes time for practice and policy to catch up with the evidence – but the existence of the QOF, introduced to make care more evidence-based, actually had the effect of making it less evidence-based!
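A ‘twofold increase’ in harm only becomes interpretable once it is anchored to absolute event rates. The sketch below makes that arithmetic concrete; the 2% baseline rate is an invented figure for illustration, not a number taken from the meta-analysis cited above.

```python
# Hypothetical sketch: turning a relative ("twofold") increase in harm
# into an absolute number needed to harm (NNH). The baseline rate below
# is invented for illustration, not taken from the cited meta-analysis.

def number_needed_to_harm(rate_intensive, rate_moderate):
    """NNH = 1 / absolute risk increase: patients treated per extra harm event."""
    return 1 / (rate_intensive - rate_moderate)

baseline = 0.02                 # suppose 2% severe hypoglycaemia on moderate control
intensive = 2 * baseline        # a twofold relative increase gives 4%
nnh = number_needed_to_harm(intensive, baseline)  # 1 / 0.02 = 50

# With no demonstrated benefit over moderate control, intensive control
# would cause roughly one extra severe episode per 50 patients treated.
```

The same relative risk of 2 would give an NNH of 500 if the baseline rate were 0.2%, which is why a benefit–harm judgement needs the absolute numbers, not just the relative ones.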
