In the academic detailing example, the methods section is very long and includes details on how the programme of ‘detailing’ was developed, how the detailers were selected and trained, how the sample of doctors was chosen, how the detailers approached the doctors, what supporting materials were used, and how the detailing visits were structured and adapted to the needs and learning styles of different doctors. Whether we agree with their measures of the project's success or not, we can certainly interpret the findings in the light of this detailed information on how they went about it.
The relatively short methods section in the DVT care pathway example may have been a victim of the journal's word length requirements. Authors summarise their methods in order to appear succinct, and thereby leave out all the qualitative detail that would allow you to evaluate the process of quality improvement—that is, to build up a ‘rich picture’ of what the authors actually did. In recognition of this perverse incentive, the authors of the SQUIRE guidelines issued a plea to editors for ‘longer papers’ [9]. A well-written quality improvement study might run to a dozen or more pages, and it will generally take you much longer to read than, say, a tightly written report on a randomised trial. The increasing tendency for journals to include ‘eXtra’ (with the ‘e’ meaning ‘online’) material in an Internet-accessible format is extremely encouraging, and you should hunt such material down whenever it is available.
Question Seven: What were the main findings?
For this question you need to return to your answer to Question Five and find the numbers (for quantitative outcomes) or the key themes (for qualitative data), and ask whether and how these were significant. Just as in other study designs, ‘significance’ in quality improvement case studies is a multifaceted concept. A change in a numerical value may be clinically significant without being statistically significant or vice versa (see section ‘Probability and confidence’), and may also be vulnerable to various biases. For example, in a before and after study, time will have moved on between the ‘baseline’ and ‘post intervention’ measures, and a host of confounding variables including the economic climate, public attitudes, availability of particular drugs or procedures, relevant case law, and the identity of the chief executive, may have changed. Qualitative outcomes may be particularly vulnerable to the Hawthorne effect (staff tend to feel valued and work harder when any change in working conditions aimed at improving performance is introduced, whether it has any intrinsic merits or not) [16].
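The statistical half of that distinction is easy to demonstrate. Below is a minimal sketch in Python (all numbers invented for illustration, and scipy assumed available) of a before and after comparison of length of stay in which the reduction is statistically significant—simply because the sample is large—yet falls well short of a pre-specified minimal clinically important difference.

```python
# Minimal sketch (hypothetical data) of statistical versus clinical
# significance in a before-and-after comparison of length of stay.
from statistics import mean
from scipy.stats import ttest_ind

# Invented lengths of stay (days) before and after an intervention;
# repeating the pattern gives 200 observations per arm.
before = [6.1, 5.9, 6.3, 6.0, 5.8, 6.2, 6.1, 5.9, 6.0, 6.2] * 20
after  = [5.9, 5.8, 6.0, 5.9, 5.7, 6.0, 5.9, 5.8, 5.9, 6.0] * 20

mcid = 1.0  # minimal clinically important difference, set in advance

t_stat, p_value = ttest_ind(before, after, equal_var=False)
diff = mean(before) - mean(after)

print(f"mean reduction = {diff:.2f} days, p = {p_value:.2g}")
# With 200 observations per arm, a ~0.16-day reduction is highly
# statistically significant (p << 0.05), yet nobody would call a
# few hours off a six-day stay clinically important.
print("statistically significant:", p_value < 0.05)
print("clinically significant:   ", diff >= mcid)
```

The converse—a clinically important change that fails to reach statistical significance because the sample was small—is just as common in quality improvement reports, and just as easy to miss.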
In the DVT care pathway example, mean length of stay was reduced by 2 days (a difference that was statistically significant), and financial savings of several hundred Euros per patient were achieved. Furthermore, 40 of 42 eligible patients were actually cared for using the new care pathway (a further 18 patients with DVT did not meet the inclusion criteria), and 62% of all patients achieved the target reduction in length of stay. Overall, 7 of 60 people experienced adverse events, and in only one of these had the care pathway been followed. These figures, taken together, not only tell us that the initiative achieved the goal of saving money but also give us a clear indication of the extent to which the intended changes in the process of care were achieved, and remind us that many patients with DVT are what are known as exceptions—that is, management by a standardised pathway doesn't suit their needs.
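It is worth doing the arithmetic on those figures. The 42 eligible and 18 excluded patients sum to 60, so—as the short sketch below shows (plain Python; all numbers taken from the text)—only about two-thirds of all DVT patients were actually managed on the pathway, and nearly a third were exceptions.

```python
# Back-of-envelope check of the DVT care pathway figures quoted above.
total_dvt  = 42 + 18   # eligible + exceptions = 60 patients in all
on_pathway = 40        # of the 42 eligible, 40 were managed on it
adverse    = 7         # adverse events among all 60 patients

print(f"managed on the pathway: {on_pathway}/{total_dvt} "
      f"= {on_pathway / total_dvt:.0%} of all DVT patients")
print(f"exceptions: {18 / total_dvt:.0%} did not meet the inclusion criteria")
print(f"adverse events: {adverse}/{total_dvt} = {adverse / total_dvt:.0%}, "
      "of which only 1 occurred on the pathway")
```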
In the academic detailing example, the findings show that of the 130 doctors in the target group, 78% received at least one visit, and these people did not differ in demographic characteristics (e.g. age, sex, whether qualified abroad or not) from those who refused a visit. Only one person refused point blank to receive further visits, but getting another visit scheduled proved challenging, and barriers were ‘primarily associated with persuading office staff of the physician's stated intentions for further visits’. In other words, even though the doctor was (allegedly) keen, the detailers had trouble getting past the receptionists—surely a significant qualitative finding about the process of academic detailing, which had not been uncovered in the randomised trial design! Half the doctors could lay their hands on the guidelines at the second visit (and, by implication, half couldn't). But the paper also presented some questionable quantitative outcome data, such as ‘around 90% of practitioners appeared interested in the topics discussed’—an observation which, apart from being entirely subjective, is a Hawthorne effect until proved otherwise. Rather than using the dubious technique of trying to quantify their subjective impressions, perhaps the authors should have either stuck to their primary outcome measure (whether the doctors let them in the door or not) or gone the whole hog and measured compliance with the guidelines.
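Again, a quick sanity check on the percentages helps fix the scale of the findings (plain Python; note that the text does not give the denominator for ‘half the doctors’, so taking it as those visited is my assumption).

```python
# Rough arithmetic behind the academic detailing figures quoted above.
target = 130
visited = round(0.78 * target)   # 78% received at least one visit
print(f"visited at least once:  ~{visited} of {target} doctors")
print(f"declined or never seen: ~{target - visited}")
# 'Half the doctors' had the guidelines to hand at the second visit;
# the denominator is unstated, so this assumes it means those visited.
print(f"guidelines to hand at visit two: ~{visited // 2}")
```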
Question Eight: What was the explanation for the success, failure or mixed fortunes of the initiative—and was this reasonable?
Once again, conventions on the length of papers in journals may make this section frustratingly short. Ideally, the authors will have considered their findings, revisited the contextual factors you identified in Question One, and offered a plausible and reasoned explanation for the former in terms of the latter, including a consideration of alternative explanations. More commonly, explanations are brief and speculative.
Why, for example, was it difficult for academic detailers to gain access to doctors for second appointments? According to the authors, the difficulty arose from ‘customarily short open-diary times for future appointments and operational factors related to the lack of permanent funding for this service’. But an alternative explanation might be that the doctor was uninterested but did not wish to be confrontational, so told the receptionists to stall if approached again!
As in this example, evaluating the explanations given in a paper for disappointing outcomes in a quality improvement project is always a judgement call. Nobody is going to be able to give you a checklist that will allow you to say with 100% accuracy ‘this explanation was definitely plausible, whereas that aspect definitely wasn't’. In a quality improvement case study, the authors of the paper will have told a story about what happened, and you will have to interpret their story using your knowledge of evidence-based medicine, your knowledge of people and organisations, and your common sense.
The DVT care pathway paper, whilst offering very positive findings, offers a realistic explanation of them: ‘The real impact of clinical pathways on length of stay is difficult to ascertain because these non-randomised, partly retrospective, studies might show significant reductions in hospital stay but cannot prove that the only cause of the reduction is the clinical pathway’. Absolutely!
Question Nine: In the light of the findings, what do the authors feel are the next steps in the quality improvement cycle locally?
Quality is not a station you arrive at but a manner of travelling. (If you want a reference for that statement, the best I can offer is Pirsig's [17] ‘Zen and the Art of Motorcycle Maintenance’). To put it another way, quality improvement is a never-ending cycle: when you reach one goal, you set yourself another.
The DVT care pathway team were pleased that they had significantly reduced length of stay, and felt that the way to improve further was to ensure that the care pathway was modified promptly as new evidence and new technologies became available. Another approach, which they did not mention but which would not need to wait for an innovation, might be to apply the care pathway approach to a different medical or surgical condition.
The academic detailing team decided that their next step would be to change the curriculum slightly: rather than covering two unrelated topics, they would use ‘judicious selection of sequential topics allowing subtle reflection of key message elements from previous encounters (e.g. management of diabetes followed by a programme on management of hypertension)’. It is interesting that they did not consider addressing the problem of attrition (42% of doctors did not make themselves available for the second visit).
Question Ten: What did the authors claim to be the generalisable lessons for other teams, and was this reasonable?
At the beginning of this chapter, I argued that the hallmark of research was generalisable lessons for others. There is nothing wrong with improving quality locally without seeking to generate wider lessons, but if the authors have published their work, they are often claiming that others should follow their approach—or at least, selected aspects of it.
In the DVT care pathway example, the authors make no claims about the transferability of their findings. Their sample size was small, and care pathways have already been shown to shorten hospital stay in other comparable conditions. Their reason for publishing appears to be to convey the message ‘If we could do it, so can you’!
In the academic detailing example, the potentially transferable finding was said to be that a whole-population approach to academic detailing (i.e. seeking access to every GP in a particular geographical area), as opposed to targeting only volunteers, can ‘work’. This claim could be true, but because the outcome measures were subjective and not directly relevant to patients, the study fell short of demonstrating it.
Conclusion
In this chapter, I have tried to guide you through how to make judgements about papers on quality improvement studies. As the quote at the end of section ‘What are quality improvement studies—and how should we research them?’ illustrates, such judgements are inherently difficult to make and require you to integrate evidence and information from multiple sources. Hence, whilst quality improvement studies are often small, local and even somewhat parochial, critically appraising such studies is often more of a headache than appraising a large meta-analysis!
References
1. Batalden PB, Davidoff F. What is “quality improvement” and how can it transform healthcare? Quality and Safety in Health Care 2007;16(1):2–3.
2. Marshall M. Applying quality improvement approaches to health care. BMJ 2009;339:b3411.
3. Miltner RS, Newsom JH, Mittman BS. The future of quality improvement research. Implementation Science 2013;8(Suppl 1):S9.
4. Vincent C, Batalden P, Davidoff F. Multidisciplinary centres for safety and quality improvement: learning from climate change science. BMJ Quality & Safety 2011;20(Suppl 1):i73–8.
5. Alexander JA, Hearld LR. The science of quality improvement implementation: developing capacity to make a difference. Medical Care 2011;49:S6–20.
6. Casarett D, Karlawish JH, Sugarman J. Determining when quality improvement initiatives should be considered research. JAMA 2000;283(17):2275–80.
7. Lynn J. When does quality improvement count as research? Human subject protection and theories of knowledge. Quality and Safety in Health Care 2004;13(1):67–70.
8. Greenhalgh T, Russell J, Swinglehurst D. Narrative methods in quality improvement research. Quality & Safety in Health Care 2005;14(6):443–9. doi: 10.1136/qshc.2005.014712.
9. Davidoff F, Batalden P, Stevens D, et al. Publication guidelines for improvement studies in health care: evolution of the SQUIRE Project. Annals of Internal Medicine 2008;149(9):670–6.
10. Verdú A, Maestre A, López P, et al. Clinical pathways as a healthcare tool: design, implementation and assessment of a clinical pathway for lower-extremity deep venous thrombosis. Quality and Safety in Health Care 2009;18(4):314–20.
11. May F, Simpson D, Hart L, et al. Experience with academic detailing services for quality improvement in primary care practice. Quality and Safety in Health Care 2009;18(3):225–31.
12. Fulop N, Protopsaltis G, King A, et al. Changing organisations: a study of the context and processes of mergers of health care providers in England. Social Science & Medicine 2005;60(1):119–30.
13. Rotter T, Kinsman L, James E, et al. Clinical pathways: effects on professional practice, patient outcomes, length of stay and hospital costs. Cochrane Database of Systematic Reviews 2010;(3). doi: 10.1002/14651858.CD006632.pub2.
14. O'Brien M, Rogers S, Jamtvedt G, et al. Educational outreach visits: effects on professional practice and health care outcomes. Cochrane Database of Systematic Reviews 2007;(4):1–62.