

Chapter 7

Papers that report trials of complex interventions

Complex interventions

In section ‘What information to expect in a paper describing a randomised controlled trial: the CONSORT statement’, I defined a simple intervention (such as a drug) as one that is well demarcated (i.e. it is easy to say what the intervention comprises) and lends itself to an ‘intervention on’ versus ‘intervention off’ research design. A complex intervention is one that is not well demarcated (i.e. it is hard to say precisely what the intervention is) and which poses implementation challenges for researchers. Complex interventions generally involve multiple interacting components and may operate at more than one level (e.g. both individual and organisational). They include the following.

 
  • Advice or education for patients
  • Education or training for health care staff
  • Interventions that seek active and ongoing input from the participant (e.g. physical activity, dietary interventions, lay support groups or psychological therapy delivered either face to face or via the Internet)
  • Organisational interventions intended to increase the uptake of evidence-based practice (e.g. audit and feedback), which are discussed in more detail in Chapter 15.

Professor Penny Hawe and her colleagues [1] have argued that a complex intervention can be thought of as a ‘theoretical core’ (the components that make it what it is, which researchers must therefore implement faithfully) and additional non-core features that may (indeed, should) be adapted flexibly to local needs or circumstances. For example, if the intervention is providing feedback to doctors on how closely their practice aligns with an evidence-based hypertension guideline, the core of the intervention might be information on what proportion of patients in a given time period achieved the guideline's recommended blood pressure level. The non-core elements might include how the information is given (orally, by letter or by email), whether the feedback is given as numbers or as a diagram or pie chart, whether it is given confidentially or in a group-learning situation, and so on.

Complex interventions generally need to go through a development phase so that the different components can be optimised before being tested in a full-scale randomised controlled trial. Typically, there is an initial development phase of qualitative interviews or observations, and perhaps a small survey to find out what people would find acceptable, which feed into the design of the intervention. This is followed by a small-scale pilot trial (effectively a ‘dress rehearsal’ for a full-scale trial, in which a small number of participants are randomised to see what practical and operational issues come up), and finally the full, definitive trial [2].

Here's an example. One of my PhD students wanted to study the impact of yoga classes on the control of diabetes. She initially spent some time interviewing both people with diabetes and yoga teachers who worked with clients who had diabetes. She designed a small questionnaire to ask people with diabetes if they were interested in yoga, and found that some but not all were. All this was part of her development phase. The previous research literature on the therapeutic use of yoga gave her some guidance on core elements of the intervention—for example, there appeared to be good theoretical reasons why the focus should be on relaxation-type exercises rather than the more physically demanding strength or flexibility postures.

My student's initial interviews and questionnaires gave her a great deal of useful information, which she used to design the non-core elements of the yoga intervention. She knew, for example, that her potential participants were reluctant to travel very far from home, that they did not want to attend more than twice a week, that the subgroup most keen to try yoga were the recently retired (age 60–69), and that many potential participants described themselves as ‘not very bendy’ and were anxious not to overstretch themselves. All this information helped her design the detail of the intervention—such as who would do what, where, how often, with whom, for how long and using what materials or instruments.

To our disappointment, when we tested the carefully designed complex intervention in a randomised controlled trial, it had no impact whatsoever on diabetes control compared to waiting list controls [3]. In the discussion section of the paper reporting the findings of the yoga trial, we offered two alternative interpretations. The first was that, contrary to what previous non-randomised studies had found, yoga has no effect on diabetes control. The second was that yoga may have an impact but that, despite our efforts in the development phase, the complex intervention was inadequately optimised. For example, many people found it hard to get to the group, and several people in each class did not do the exercises because they found them ‘too difficult’. Furthermore, whilst the yoga teachers put a great deal of effort into the twice-weekly classes and gave people a tape and a yoga mat to take home, they did not emphasise to participants that they should practise their exercises every day. As we discovered, hardly any of the participants did any exercises at home.

To optimise yoga as a complex intervention in diabetes, therefore, we might consider measures such as (i) getting a doctor or nurse to ‘prescribe’ it, so that the patient is more motivated to attend every class; (ii) working with the yoga teachers to design special exercises for older, under-confident people who cannot follow standard yoga exercises; and (iii) stipulating more precisely what is expected as ‘homework’.

This example shows that when a trial of a complex intervention produces negative results, this does not necessarily prove that all adaptations of this intervention will be ineffective in all settings. Rather, it tends to prompt the researchers to go back to the drawing board and ask how the intervention can be further refined and adapted to make it more likely to work. Note that because our yoga intervention needs more work, we did not go on directly to the full-scale randomised controlled trial but have returned to the development phase to try to refine the intervention.

Ten questions to ask about a paper describing a complex intervention

In 2008, the Medical Research Council produced updated guidance on evaluating complex interventions, which was summarised in the British Medical Journal [2]. The questions that follow, about how to appraise a paper describing a complex intervention, are based on this guidance.

Question One: What is the problem for which this complex intervention is seen as a possible solution?
It is all too easy to base a complex intervention study on a series of unquestioned assumptions. Teenagers drink too much alcohol and have too much unprotected sex, so surely educational programmes are needed to tell them about the dangers of this behaviour? This does not follow, of course! The problem may be teenage drinking or sexual risk-taking, but the underlying cause of that problem may not be ignorance but (for example) peer pressure and messages from the media. By considering precisely what the problem is, you will be able to look critically at whether the intervention has been (explicitly or inadvertently) designed around an appropriate theory of action (see Question Four).
Question Two: What was done in the developmental phase of the research to inform the design of the complex intervention?
There are no fixed rules about what should be done in a developmental phase, but the authors should state clearly what they did and justify it. If the developmental phase included qualitative research (as is usually the case), see Chapter 12 for detailed guidance on how to appraise such papers. If a questionnaire was used, see Chapter 14. When you have appraised the empirical work using checklists appropriate to the study design(s), consider how these findings were used to inform the design of the intervention. One aspect of the development phase will be to identify a target population and perhaps divide it into sub-populations (e.g. by age, gender, ethnicity, educational level or disease status), each of which might require the intervention to be tailored in a particular way.
Question Three: What were the core and non-core components of the intervention?
To put this question another way: (i) what are the things that should be standardised so they remain the same wherever the intervention is implemented, and (ii) what are the things that should be adapted to context and setting? The authors should state clearly which aspects of the intervention should be standardised and which should be adapted to local contingencies and priorities. An under-standardised complex intervention may yield few generalisable findings; an over-standardised one may be unworkable in some settings and hence lead, overall, to an under-estimate of the potential effectiveness of the core elements. The decision as to what is ‘core’ and what is ‘non-core’ should be made on the basis of the findings of the developmental phase.
Don't forget to unpack the control intervention in just as much detail as you unpack the experimental one. If the control was ‘nothing’ (or waiting list), describe what the participants in the control arm of the trial will not be receiving compared to those in the intervention arm. More likely, the control group will receive a package that includes (for example) an initial assessment, some review visits, some basic advice and perhaps a leaflet or helpline number.
Defining what the control group are offered will be particularly important if the trial addresses a controversial and expensive new care package. In a recent trial of telehealth known as the Whole Systems Demonstrator, the findings were interpreted by some commentators as showing that telehealth installed in people's homes leads to significantly lower use of hospital services and improved survival rates (albeit at high cost per case) [4]. However, the intervention group actually received a combination of two interventions: the telehealth equipment and regular phone calls from a nurse. The control group received no telehealth equipment—but no phone calls from the nurse either. Perhaps it was the human contact, not the technology, that made the difference. Frustratingly, we cannot know. In my view, the study design was flawed, since it cannot tell us whether telehealth ‘works’ or not!
Question Four: What was the theoretical mechanism of action of the intervention?
The authors of a study on a complex intervention should state explicitly how the intervention is intended to work, and that includes a statement of how the different components fit together. This statement is likely to change as the results of the developmental phase are analysed and incorporated into the refinement of the intervention.
It is not always obvious why an intervention works (or why it fails to work), especially if it involves multiple components aimed at different levels (e.g. individual, family and organisation). A few years ago, I reviewed the qualitative sections of research trials on school-based feeding programmes for disadvantaged children [5]. In 19 studies, all of which had tested this complex intervention in a randomised controlled trial (see the linked Cochrane review and meta-analysis [6]), I found a total of six different mechanisms by which this intervention may have improved nutritional status, school performance or both: long-term correction of nutritional deficiencies; short-term relief of hunger; the children feeling valued and looked after; reduced absenteeism; improved school diet inspiring improved home diet; and improved literacy in one generation improving earning power and hence reducing the risk of poverty in the next generation.
When critically appraising a paper on a complex intervention, you will need to make a judgement on whether the mechanisms offered by the authors are adequate. Common sense is a good place to start here, as is discussion among a group of experienced clinicians and service users. You may have to deduce the mechanism of action indirectly if the authors did not state it explicitly. In section ‘Evaluating systematic reviews’, I describe a review by Grol and Grimshaw [7], which showed that only 27% of studies of implementing evidence included an explicit theory of change.
