
Box 16.2 Characteristics of a good decision aid (from reference [17])
Decision aids are different from traditional patient information materials because they do not tell people what to do. Instead, they set out the facts and help people to deliberate about the options. They usually contain:
 
  • a description of the condition and symptoms;
  • the likely prognosis with and without treatment;
  • the treatment and self-management support options and outcome probabilities;
  • what's known from the evidence and not known (uncertainties);
  • illustrations to help people understand what it would be like to experience some of the most frequent side effects or complications of the treatment options (often using patient interviews);
  • a means of helping people clarify their preferences;
  • references and sources of further information;
  • the authors' credentials, funding source and declarations of conflict of interest.

Increasingly commonly, decision aids are available online, allowing the patient to click through different steps in the decision algorithm (with or without support from a health professional). In my view, the best way to get your head round shared decision-making tools is to take a look at a few—and if possible, put them to use in practice. The UK National Health Service has a website with links to tools for sharing decisions, from abdominal aortic aneurysm repair to stroke prevention in atrial fibrillation: see http://sdm.rightcare.nhs.uk/pda/. A similar (and more comprehensive) range of decision tools is available from this Canadian site: http://decisionaid.ohri.ca/AZinvent.php.

Option grids

Studies using the ‘OPTION’ instrument suggest that patient involvement in evidence-based decision-making is not always as high as the idealists would like it to be [19]. These days, most health professionals are (allegedly) keen to share decisions with patients in principle, but qualitative and questionnaire research has shown that they perceive a number of barriers to doing so in practice, including time constraints and lack of applicability of the decision support model to the unique predicament of a particular patient [21]. It is relatively uncommon for doctors to refer patients to decision support websites, partly because they feel they are already sharing decisions in routine consultation talk, and partly because they feel that patients do not wish to be involved in this way [22].

The reality of a typical general practice consultation, for example, is a long way from the tidy rationality of a formal decision algorithm. When a patient attends with symptoms suggestive of (say) sciatica, the doctor has 10 minutes to make progress. Typically, they will examine the patient, order some tests and then have a rather blurry conversation about how (on the one hand) the patient's symptoms might resolve with physiotherapy but (on the other hand) they might like to see a specialist because some cases will need an operation. The patient typically expresses a vague preference for either conservative or interventionist management, and the doctor (respecting the patient's ‘empowered’ view) goes along with that preference.

If the doctor is committed to evidence-based shared decision-making, he or she may try a more structured approach, as set out in section ‘Shared decision-making’, for example, by logging on to an online algorithm or by using pie charts or pre-programmed spreadsheets to elicit numerical scores of how much the patient values particular procedures and outcomes vis-à-vis one another. But very often, such tools will have been tried once or twice and then abandoned as technocratic, time-consuming, overly quantitative and oddly disengaged from the unique personal illness narrative that fills the consultation.
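
To make concrete what ‘eliciting numerical scores’ typically involves, here is a minimal and purely illustrative sketch of the weighted-sum arithmetic such a spreadsheet might perform; the options, outcomes and numbers are invented for illustration and are not taken from any published tool.

```python
# Purely illustrative: the kind of weighted-sum 'preference score' a
# shared decision-making spreadsheet might compute. All names and
# numbers below are invented for illustration.

# How much this patient says they value each outcome (0 = not at all, 1 = highly)
patient_weights = {
    "pain relief": 0.9,
    "avoiding an operation": 0.7,
    "quick return to work": 0.5,
}

# Rough ratings of how well each option delivers each outcome (0-1)
option_ratings = {
    "physiotherapy": {"pain relief": 0.5, "avoiding an operation": 1.0, "quick return to work": 0.6},
    "surgery": {"pain relief": 0.8, "avoiding an operation": 0.0, "quick return to work": 0.3},
}

def weighted_value(ratings, weights):
    """Weighted-sum value of one option, given the patient's stated weights."""
    return sum(weights[outcome] * ratings[outcome] for outcome in weights)

for option, ratings in option_ratings.items():
    print(f"{option}: {weighted_value(ratings, patient_weights):.2f}")
```

It is exactly this kind of numerical abstraction, of course, that many clinicians and patients experience as detached from the consultation itself.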

The good news is that our colleagues working in the field of shared decision-making have recently acknowledged that the perfect may be the enemy of the good. Most discussions about management options in clinical practice do not require—and may even be thrown off kilter by—an exhaustive analysis of probabilities, risks and preference scores. What most people want is a brief but balanced list of the options, setting out the costs and benefits of each and including an answer to the question ‘what would happen if I went down this route?’.

Enter the option grid (http://www.optiongrid.org): the product of a collaborative initiative between patients, doctors and academics [23]. An option grid is a one-page table covering a single topic (topics completed so far include sciatica, chronic kidney disease, breast cancer, tonsillitis and a dozen or so more). The grid lists the different options as columns, with each row answering a different question (such as ‘what does the treatment involve?’, ‘how soon would I feel better?’ and ‘how would this treatment affect my ability to work?’). An example is shown in Figure 16.2.

Figure 16.2 Example of an option grid. Source: http://www.optiongrid.org/optiongrids.php. Reproduced with permission of Glyn Elwyn.
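
For readers who think in code, the row-and-column structure of an option grid can be sketched as a simple table: options as columns, frequently asked questions as rows. The sketch below is illustrative only; the questions and answers are invented placeholders rather than the content of any published grid.

```python
# Illustrative only: an option grid as a small table with options as
# columns and frequently asked questions as rows. The questions and
# answers are invented placeholders, not any published grid's content.

questions = [
    "What does the treatment involve?",
    "How soon would I feel better?",
    "How would it affect my ability to work?",
]

grid = {
    "Conservative care": [
        "Exercises and physiotherapy",
        "Usually over weeks to months",
        "Most people can keep working",
    ],
    "Surgery": [
        "An operation under anaesthetic",
        "Often within a few weeks",
        "Time off work to recover",
    ],
}

# Print one row per question, one column per option
question_width, answer_width = 42, 32
print("".ljust(question_width) + "".join(name.ljust(answer_width) for name in grid))
for i, question in enumerate(questions):
    answers = "".join(grid[option][i].ljust(answer_width) for option in grid)
    print(question.ljust(question_width) + answers)
```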

Option grids are developed in a similar way to PROMs, but there is often more of a focus on involvement of the multidisciplinary clinical team, as in this example of an option grid for head and neck cancer management [24]. The distinguishing feature of the option grid approach is that it promotes and supports what has been termed option talk—that is, the discussions and deliberations around the different options [25]. The grids are, in effect, analogue rather than digital in design.

The reason I see this approach as progress beyond the more algorithmic approaches to shared decision-making introduced in section ‘The patient perspective’ is that the information in an option grid is presented in a format that allows both reflection and dialogue. The grid can be printed off (or, indeed, the patient can be given the URL), and the patient can be invited to go away and consider the options before returning for a further consultation. And unlike the previous generation of shared decision-making tools, neither the patient nor the clinician needs to be a ‘geek’ to use them.

n of 1 trials and other individualised approaches

The last approach to involving patients that I want to introduce in this chapter is the n of 1 trial. This is a very simple design in which each participant receives, in randomly allocated order, both the intervention and the control treatment.

An example is probably the best way to explain this. Back in 1994, some Australian GPs wanted to address the clinical issue of which painkiller to use in osteoarthritis [26]. Some patients, they felt, did fine on paracetamol (which has relatively few side effects), while others did not respond so well to paracetamol but obtained great relief from a non-steroidal anti-inflammatory drug (NSAID). In the normal clinical setting, one might try paracetamol first and move to the NSAID if the patient did not respond. But supposing there was a strong placebo effect? The patient might conceivably have limited confidence in paracetamol because it is such a commonplace drug, whereas an NSAID in a fancy package might be subconsciously favoured.

The idea of the n of 1 trial is that all treatments are anonymised, prepared in identical formulations and packaging, and just labelled ‘A’, ‘B’, and so on. The participants do not know which drug they are taking, hence their response is not influenced by whether they ‘believe in’ the treatment. To add to scientific rigour, the drugs may be taken in a sequence such as ABAB or AABB, with ‘washout’ periods in between.
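
For illustration only, the design can be sketched in a few lines of Python: one patient, blinded treatments ‘A’ and ‘B’ given in randomly ordered pairs, and a within-patient comparison at the end. The simulated pain scores and the effect sizes are invented; in a real trial they would come from the participant's symptom diary.

```python
# Sketch only (not any published trial's actual protocol): one patient,
# treatments blinded as 'A' and 'B', given in randomly ordered pairs,
# with the comparison made within that single patient at the end.
import random
import statistics

def n_of_1_sequence(n_pairs=3):
    """Randomly order 'A' and 'B' within each pair, e.g. ABBABA."""
    sequence = []
    for _ in range(n_pairs):
        pair = ["A", "B"]
        random.shuffle(pair)
        sequence.extend(pair)
    return sequence

def simulated_pain_score(treatment):
    """Invented symptom-diary score (lower = less pain) for illustration."""
    true_mean = 4.0 if treatment == "A" else 5.5  # invented treatment effect
    return random.gauss(true_mean, 1.0)

sequence = n_of_1_sequence()
scores = {"A": [], "B": []}
for treatment in sequence:
    scores[treatment].append(simulated_pain_score(treatment))
    # ...in practice a 'washout' period would separate consecutive blocks...

print("Treatment sequence:", "".join(sequence))
print("Mean pain score on A:", round(statistics.mean(scores["A"]), 1))
print("Mean pain score on B:", round(statistics.mean(scores["B"]), 1))
```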

March and colleagues' n of 1 trial of paracetamol versus NSAIDs did confirm the clinical hunch that some patients did markedly better on the NSAID but many did equally well on paracetamol. Importantly, unlike a standard randomised trial, the n of 1 design allowed the researchers to identify which patients were in each category. But the withdrawal rate from the trial was high, partly because when participants found a medication that worked, they just wanted to keep taking it rather than swap to the alternative!

But despite its conceptual elegance and a distant promise of linking to the ‘personalised medicine’ paradigm, in which every patient will have their tests and treatment options individualised to their particular genome, physiome, microbiome and so on, the n of 1 trial has not caught on widely in either research or clinical practice. A review article by Lillie and colleagues [27] suggests why. Such trials are labour intensive to carry out, requiring a high degree of individual personalisation and large amounts of data for every participant. ‘Washout’ periods raise practical and ethical problems (does one have to endure one's arthritis with no pain relief for several weeks to serve the scientific endeavour?). Combining the findings from different participants raises statistical challenges. And the (conceptually simple) science of n of 1 trials has begun to get muddled up with the much more complex and uncertain science of personalised medicine.

In short, the n of 1 trial is a useful design (and one you may be asked about in exams!), but it is not the panacea it was once predicted to be.

A recent (and somewhat untested) alternative approach to individualising treatment regimens has been proposed by Moore and colleagues [28] in relation to pain relief. Their basic argument is that we should ‘expect failure’ (because the number needed to treat for many interventions is more than 2, statistically speaking any individual is more likely not to benefit than to benefit) but ‘pursue success’ (because the ‘average’ response to any intervention masks a subgroup of responders who will do very well on that intervention). They propose a process of guided trial and error, systematically trying one intervention after another until the one that works effectively for this patient is identified. Perhaps this is the n of 1 trial without worrying either about the placebo element or about the fact that one may need to try half a dozen options before finding the best one in the circumstances.
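
The ‘expect failure’ arithmetic rests on a simple relationship: the chance that an individual gains benefit attributable to a treatment is roughly 1 divided by the number needed to treat, so an NNT above 2 means less than a 50% chance of benefit. The sketch below, with invented NNT values and an invented response record, illustrates that arithmetic and the guided trial-and-error loop that follows from it.

```python
# Back-of-envelope sketch of 'expect failure, pursue success'.
# NNT values and the response record below are invented for illustration.

def probability_of_benefit(nnt):
    """Rough chance an individual benefits beyond control: about 1/NNT."""
    return 1.0 / nnt

for nnt in (2, 4, 10):
    p = probability_of_benefit(nnt)
    print(f"NNT {nnt}: roughly {p:.0%} chance of benefit, {1 - p:.0%} of no benefit")

# Guided trial and error: try one option after another until something
# works for *this* patient (the responses here are a hypothetical record).
options = ["paracetamol", "NSAID", "alternative analgesic"]
responded = {"paracetamol": False, "NSAID": True, "alternative analgesic": True}

for drug in options:
    if responded[drug]:
        print(f"Responded to {drug}: continue with it")
        break
    print(f"No useful response to {drug}: move to the next option")
```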

