How to Read a Paper: The Basics of Evidence-Based Medicine

In Chapter 10, I described the main findings of Grimshaw's [20] 2004 systematic review on guideline implementation. The main conclusion of that review was that despite hundreds of studies costing millions of dollars, no intervention, either educational or otherwise, and either singly or in combination, is guaranteed to change the behaviour of practitioners in an ‘evidence-based’ direction.

Here's where I part company slightly with the EPOC approach. While many EPOC members are still undertaking trials (and reviews of trials) to add to the research base on whether this or that intervention (such as leaflets and other printed educational materials [21], audit and feedback [22] or financial incentives [23] [24]) is or is not effective in changing clinician behaviour, my own view is that this endeavour is misplaced. Not only have no magic bullets been identified yet, but I believe they never will be identified—and that we should stop looking for them.

This is because the implementation of best practice is highly complex; it involves multiple influences operating in different directions [25]; and it is dependent on people. An approach that has a positive effect in one study might have a negative effect in another study, so the notion of an ‘effect size’ of an intervention to change clinician behaviour is not only meaningless but actively misleading. If you have children, you'll know that a childrearing strategy that worked well for your first child might not have worked at all for your second child, for reasons you can't easily explain. It's something to do with human quirkiness (child two is a different individual with a different personality), and also to do with the fact that the context is subtly different in multiple ways, even in the ‘same’ family environment (child two has an older sibling, busier parents, hand-me-down toys, etc.). So it is with organisations, their staff, and evidence-based practice. Even the more refined research approach of looking for ‘mediators’ and ‘moderators’ of the effectiveness of particular interventions [12] is still, in my view, based on the flawed assumption that there is a consistent ‘mediator/moderator effect’ from a particular contextual variable.

Let's think a bit more about the human factor. In a systematic review of the diffusion of organisational-level innovations in health services, I drew this conclusion about the human elements in the adoption of innovations.

People are not passive recipients of innovations. Rather (and to a greater or lesser extent in different individuals), they seek innovations out, experiment with them, evaluate them, find (or fail to find) meaning in them, develop feelings (positive or negative) about them, challenge them, worry about them, complain about them, ‘work round’ them, talk to others about them, develop know-how about them, modify them to fit particular tasks, and attempt to improve or redesign them [25].

These were the key factors my team found to be associated with a person's readiness to adopt health care innovations.

a. General psychological antecedents: A number of personality traits are associated with the propensity to try out and use innovations (e.g. tolerance of ambiguity, intellectual ability, motivation, values and learning style). In short, some people are more set in their ways than others—and these individuals will need more input and take more time to change.
b. Context-specific psychological antecedents: A person who is motivated and capable (in terms of values, goals, specific skills, etc.) to use a particular innovation is more likely to adopt it. Also, if the innovation meets an identified need in the intended adopter, they are more likely to adopt it.
c. Meaning: The meaning that the innovation holds for the person has a powerful influence on his or her decision to adopt it. The meaning attached to an innovation is generally not fixed but can be negotiated and reframed—for example, through discussions with other professionals or others within the organisation. In the example described in section ‘The rise and rise of questionnaire research’, one of the problems was probably that dexamethasone therapy was unconsciously seen by doctors as ‘an old-fashioned palliative care drug, used in older people’. In changing their practice, they had to place this therapy in a new mental schema—as ‘an up-to-date preventive therapy, appropriate for pregnant women’.
d. Nature of the adoption decision: The decision by an individual in an organisation to adopt a particular innovation is rarely independent of other decisions. It may be contingent (dependent on a decision made by someone else in the organisation); collective (the individual has a ‘vote’ but ultimately must follow the decision of a group); or authoritative (the individual is told whether to adopt or not). A good example of promoting evidence-based practice through an authoritative adoption decision is the development of hospital or practice formularies. Drugs of marginal value or poor cost-effectiveness can be removed from the list of drugs that the hospital is prepared to pay for. But (as you may have discovered if you work with an imposed formulary) such policies also inhibit evidence-based practice, because the innovator who is ahead of the game must wait (sometimes years) for a committee decision before implementing a new standard of practice.
e. Concerns and information needs: People are concerned about different things at different stages in the adoption of an innovation. Initially, they need general information (what is the new ‘evidence-based’ practice, what does it cost, and how might it affect me?); in the early adoption stages, they need hands-on information (how do I make it work in practice?); and as they become more confident in the new practice, they need development and adaptation information (can I adapt this practice a bit to suit my circumstances, and if so, how should I do that?).

Having explored the nature of human idiosyncrasy, we should also consider the influence one person can have on another. As Rogers [26] first demonstrated in relation to the adoption of agricultural innovations by Iowa farmers (who are perhaps even more set in their ways than doctors), interpersonal contact is the most powerful method of influence. The main type of interpersonal influence relevant to the adoption of evidence-based practice is the opinion leader. We copy two sorts of people: those we look up to (‘expert opinion leaders’) and those we think are just like us (‘peer opinion leaders’) [27].

An opinion leader who is opposed to a new practice—or even one who is lukewarm and fails to back it—has a great deal of potential wrecking power. But as a systematic review of opinion leader intervention trials showed, just because a doctor is more likely to change his or her prescribing behaviour if a respected opinion leader has already changed, it doesn't necessarily follow that targeting opinion leaders (doctors nominated by other doctors as individuals they would consult or copy) with educational interventions will lead to widespread change in prescribing practice [28]. This is probably because opinion leaders have minds of their own, and also because of the many other influences on practice apart from that one individual. In the real world, so-called ‘social influence policies’ may fail to influence.

Another important model of interpersonal influence, which the pharmaceutical industry has shown to be highly effective, is one-to-one contact between doctors and drug company representatives (discussed in Chapter 6 and known as ‘reps’ in the UK and ‘detailers’ in the USA), whose influence on clinical behaviour can be so dramatic that they have been dubbed the ‘stealth bombers’ of medicine. As the example in section ‘Ten questions to ask about a paper describing a quality improvement initiative’ shows, this tactic has been harnessed by non-commercial change agencies in what is known as academic detailing: the educator books in to see the physician in the same way as industry representatives do, but in this case the ‘rep’ provides objective, complete and comparative information about a range of different drugs and encourages the clinician to adopt a critical approach to the evidence. Whilst dramatic short-term changes in practice have been demonstrated in research trials, the example in the previous chapter shows that in a real-world setting, consistent, positive changes to patient care may be hard to demonstrate [29]. As ever, the intervention should not be seen as a panacea.

A final approach to note in relation to supporting the implementation of evidence-based practice is the use of computerised decision support systems that incorporate the research evidence and can be accessed by the busy practitioner at the touch of a button. Dozens of these systems are currently being developed, piloted and tested in randomised controlled trials; relatively few are in routine use. There have been several systematic reviews of such systems—for example, Garg et al.'s [30] synthesis of 100 empirical studies published in JAMA, and Black et al.'s [31] ‘review of reviews’ covering 13 previous systematic reviews on clinical decision support. Garg et al. showed that around two-thirds of these studies demonstrated improved clinical performance in the decision support arm, with the best results in drug dosing and active clinical care (e.g. management of asthma) and the worst in diagnosis. Systems that included a spontaneous prompt (as opposed to requiring the clinician to activate the system), and those in which the trial was conducted by the people who developed the technology (as opposed to using an ‘off-the-shelf’ product), were the most effective. Black et al.'s more recent review broadly confirmed these findings. Most, but not all, studies seemed to show significant improvements in clinical performance (e.g. following a guideline, actioning preventive care such as immunisation or cancer screening) with computerised decision support, but the impact on patient outcomes was much more variable. Patient outcomes were measured in only around a quarter of studies, and where they were, they usually showed modest or absent impact except in post-hoc subgroup analyses (which have questionable statistical validity).

Note what I said earlier (page 207) about the complexity of the implementation of EBM. I am sceptical of studies that attempt to say ‘computer-based decision support is/is not effective’ or ‘computer-based decision support has an effect of X magnitude’. Such systems work for some people in some circumstances, and our research energies should now be directed at refining what we can say about what sort of computerised decision support, for whom and in what circumstances [32]. Resistance to new technologies by clinicians is one of my current research interests—but if I told you the whole story here I would never finish this book, so if you are interested, make a note to look out for some of my in-progress work in a year or so.

What does an ‘evidence-based organisation’ look like?

‘What does an organisation that promotes the adoption of [evidence-based] innovations look like?’ was one of the questions that my own team addressed in our systematic review of the literature on diffusion of organisational-level innovations [25]. We found that, in general, an organisation will assimilate a new product or practice more readily if it is large, mature (has been established a long time), functionally differentiated (i.e. divided into semi-autonomous departments and units), specialised (a well-developed division of labour, such as specialist services); if it has slack resources (money and staff) to channel into new projects; and if it has decentralised decision-making structures (teams can work autonomously). But although dozens of studies (and five meta-analyses) have been undertaken on the size and structure of organisations, all these determinants account for less than 15% of the variation in organisations' ability to support innovation (and in many studies, they explain none of the variation at all). In other words, it's not usually the structure of the organisation that makes the critical difference in supporting EBM.

More important in our review were less easily measurable dimensions of the organisation—particularly something the organisational theorists call
absorptive capacity
. Absorptive capacity is defined as the organisation's ability to identify, capture, interpret, share, reframe and re-codify new knowledge, to link it with its own existing knowledge base, and to put it to appropriate use [33]. Prerequisites for absorptive capacity include the organisation's existing knowledge and skills base (especially its store of tacit, ‘knowing the ropes’ type knowledge) and pre-existing related technologies; a ‘learning organisation’ culture (in which people are encouraged to learn amongst themselves and share knowledge); and proactive leadership directed towards enabling this knowledge sharing [34].

A major overview by Dopson and her colleagues [35] of high-quality qualitative studies on how research evidence is identified, circulated, evaluated and used in health care organisations found that before it can be fully implemented in an organisation, EBM knowledge must be enacted and made social, entering into the stock of knowledge that is developed and socially shared amongst others in the organisation. In other words, knowledge depends for its circulation on interpersonal networks (who knows whom), and will only spread efficiently through the organisation if these social features are taken into account and barriers are overcome.

Another difficult-to-measure dimension of the evidence-based organisation (i.e. one that is capable of capturing best practice and implementing it widely) is what is known as a receptive context for change. This composite construct, developed in relation to the implementation of best practice in healthcare by Pettigrew and colleagues [36], incorporates a number of organisational features that have been independently associated with an organisation's ability to embrace new ideas and face the prospect of change. In addition to absorptive capacity for new knowledge (see preceding text), the components of receptive context include strong leadership, clear strategic vision, good managerial relations, visionary staff in key positions, a climate conducive to experimentation and risk-taking, and effective data capture systems. Leadership may be especially critical in encouraging organisational members to break out of the convergent thinking and routines that are the norm in large, well-established organisations.
