How to Read a Paper: The Basics of Evidence-Based Medicine

3. What were the core and non-core components of the intervention?
4. What was the theoretical mechanism of action of the intervention?
5. What outcome measures were used, and were these sensible?
6. What were the findings?
7. What process evaluation was performed—and what were the key findings of this?
8. If the findings were negative, to what extent can this be explained by implementation failure and/or inadequate optimisation of the intervention?
9. If the findings varied across different subgroups, to what extent have the authors explained this by refining their theory of change?
10. What further research do the authors believe is needed, and is this justified?

Checklist for a paper that claims to validate a diagnostic or screening test (see Chapter 8)

1. Is this test potentially relevant to my practice?
2. Has the test been compared with a true gold standard?
3. Did this validation study include an appropriate spectrum of participants?
4. Has work-up bias been avoided?
5. Has observer bias been avoided?
6. Was the test shown to be reproducible both within and between observers?
7. What are the features of the test as derived from this validation study?
8. Were confidence intervals given for sensitivity, specificity and other features of the test?
9. Has a sensible ‘normal range’ been derived from these results?
10. Has this test been placed in the context of other potential tests in the diagnostic sequence for the condition?
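Questions 7 and 8 concern the test's performance features and their confidence intervals. As an illustration only — the counts below are invented, and the book itself supplies no code — a minimal Python sketch of how sensitivity, specificity and Wilson score 95% confidence intervals are derived from a 2×2 validation-study table:

```python
import math

def wilson_ci(successes: int, total: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score confidence interval for a proportion."""
    p = successes / total
    denom = 1 + z**2 / total
    centre = (p + z**2 / (2 * total)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / total + z**2 / (4 * total**2))
    return centre - half, centre + half

# Hypothetical 2x2 table (illustrative counts, not from any real study):
#                    disease present   disease absent
#   test positive         tp = 90           fp = 15
#   test negative         fn = 10           tn = 185
tp, fp, fn, tn = 90, 15, 10, 185

sensitivity = tp / (tp + fn)   # proportion of true cases the test detects -> 0.90
specificity = tn / (tn + fp)   # proportion of non-cases the test rules out -> 0.925

lo, hi = wilson_ci(tp, tp + fn)
print(f"sensitivity = {sensitivity:.3f}, 95% CI ({lo:.3f}, {hi:.3f})")
lo, hi = wilson_ci(tn, tn + fp)
print(f"specificity = {specificity:.3f}, 95% CI ({lo:.3f}, {hi:.3f})")
```

The Wilson interval is used here because the simple normal approximation misbehaves when a proportion is near 0 or 1, as sensitivity and specificity often are.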

Checklist for a systematic review or meta-analysis (see Chapter 9)

1. Did the review address an important clinical question?
2. Was a thorough search carried out of the appropriate database(s) and were other potentially important sources explored?
3. Was methodological quality (especially factors that might predispose to bias) assessed and the trials weighted accordingly?
4. How sensitive are the results to the way the review has been performed?
5. Have the numerical results been interpreted with common sense and due regard to the broader aspects of the problem?

Checklist for a set of clinical guidelines (see Chapter 10)

1. Did the preparation and publication of these guidelines involve a significant conflict of interest?
2. Are the guidelines concerned with an appropriate topic, and do they state clearly the goal of ideal treatment in terms of health and/or cost outcome?
3. Was a specialist in the methodology of secondary research (e.g. meta-analyst) involved?
4. Have all the relevant data been scrutinised and are the guidelines' conclusions in keeping with the data?
5. Do they address variations in clinical practice and other controversial areas (e.g. optimum care in response to genuine or perceived underfunding)?
6. Are the guidelines valid and reliable?
7. Are they clinically relevant, comprehensive, and flexible?
8. Do they take into account what is acceptable to, affordable by, and practically possible for patients?
9. Do they include recommendations for their own dissemination, implementation and periodic review?

Checklist for an economic analysis (see Chapter 11)

1. Is the analysis based on a study that answers a clearly defined clinical question about an economically important issue?
2. Whose viewpoint are costs and benefits being considered from?
3. Have the interventions being compared been shown to be clinically effective?
4. Are the interventions sensible and workable in the settings where they are likely to be applied?
5. Which method of economic analysis was used, and was this appropriate?
  • if the interventions produced identical outcomes ⇒ cost-minimisation analysis;
  • if the important outcome is unidimensional ⇒ cost-effectiveness analysis;
  • if the important outcome is multidimensional ⇒ cost-utility analysis;
  • if the cost–benefit equation for this condition needs to be compared with cost–benefit equations for different conditions ⇒ cost–benefit analysis;
  • if a cost–benefit analysis would otherwise be appropriate but the preference values given to different health states are disputed or likely to change ⇒ cost-consequences analysis.
6. How were costs and benefits measured?
7. Were incremental, rather than absolute, benefits compared?
8. Was health status in the ‘here and now’ given precedence over health status in the distant future?
9. Was a sensitivity analysis performed?
10. Were ‘bottom-line’ aggregate scores overused?
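The bullets under question 5 amount to a small decision procedure for choosing the analysis type. The sketch below is one possible encoding of those rules — the ordering of the checks and all parameter names are my reading of the bullets, not the authors':

```python
def choose_economic_analysis(
    identical_outcomes: bool,
    outcome_unidimensional: bool,
    compare_across_conditions: bool,
    preference_values_disputed: bool,
) -> str:
    """Map the decision rules from question 5 onto an analysis type."""
    if identical_outcomes:
        # Outcomes are the same, so only costs need comparing.
        return "cost-minimisation analysis"
    if compare_across_conditions:
        # A cost-benefit framing applies, unless preference values are contested.
        return ("cost-consequences analysis" if preference_values_disputed
                else "cost-benefit analysis")
    if outcome_unidimensional:
        return "cost-effectiveness analysis"
    # Multidimensional outcome, single condition.
    return "cost-utility analysis"

print(choose_economic_analysis(False, True, False, False))
# -> cost-effectiveness analysis
```

Note that the real choice is a matter of judgement: the bullets overlap (a multidimensional outcome may still be compared across conditions), so the function fixes one precedence order among them.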

Checklist for a qualitative research paper (see Chapter 12)

1. Did the article describe an important clinical problem addressed via a clearly formulated question?
2. Was a qualitative approach appropriate?
3. How were (i) the setting and (ii) the participants selected?
4. What was the researcher's perspective, and has this been taken into account?
5. What methods did the researcher use for collecting data—and are these described in enough detail?
6. What methods did the researcher use to analyse the data—and what quality control measures were implemented?
7. Are the results credible, and if so, are they clinically important?
8. What conclusions were drawn, and are they justified by the results?
9. Are the findings of the study transferable to other clinical settings?

Checklist for a paper describing questionnaire research (see Chapter 13)

1. What did the researchers want to find out, and was a questionnaire the most appropriate research design?
2. If an ‘off the peg’ questionnaire (i.e. a previously published and validated one) was available, did the researchers use it (and if not, why not)?
3. What claims have the researchers made about the validity of the questionnaire (its ability to measure what they want it to measure) and its reliability (its ability to give consistent results across time and within/between researchers)? Are these claims justified?
4. Was the questionnaire appropriately structured and presented, and were the items worded appropriately for the sensitivity of the subject area and the health literacy of the respondents?
5. Were adequate instructions and explanations included?
6. Was the questionnaire adequately piloted, and was the definitive version amended in the light of pilot results?
7. Was the sample of potential participants appropriately selected, large enough and representative enough?
8. How was the questionnaire distributed (e.g. by post, email, telephone) and administered (self-completion, researcher-assisted completion), and were these approaches appropriate?
9. Were the needs of particular subgroups taken into account in the design and administration of the questionnaire? For example, what was done to capture the perspective of illiterate respondents or those speaking a different language from the researcher?
10. What was the response rate, and why? If the response rate was low (<70%), have the researchers shown that no systematic differences existed between responders and non-responders?
11. What sort of analysis was carried out on the questionnaire data, and was this appropriate? Is there any evidence of ‘data dredging’—that is, analyses that were not hypothesis driven?
12. What were the results? Were they definitive (statistically significant), and were important negative and non-significant results also reported?
13. Have qualitative data (e.g. free text responses) been adequately interpreted (e.g. using an explicit theoretical framework)? Have quotes been used judiciously to illustrate more general findings rather than to add drama?
14. What do the results mean, and have the researchers drawn an appropriate link between the data and their conclusions?

Checklist for a paper describing a quality improvement study (see Chapter 14)

1. What was the context?
2. What was the aim of the study?
3. What was the mechanism by which the authors hoped to improve quality?
4. Was the intended quality improvement initiative evidence-based?
5. How did the authors measure success, and was this reasonable?
6. How much detail was given on the change process, and what insights can be gleaned from this?
7. What were the main findings?
8. What was the explanation for the success, failure or mixed fortunes of the initiative—and was this reasonable?
9. In the light of the findings, what do the authors feel are the next steps in the quality improvement cycle locally?
10. What did the authors claim to be the generalisable lessons for other teams, and was this reasonable?

Checklist for health care organisations working towards an evidence-based culture for clinical and purchasing decisions (see Chapter 15)

1. Leadership: How often has effectiveness information or evidence-based medicine been discussed at board meetings in the last 12 months? Has the board taken time out to learn about developments in clinical and cost-effectiveness?
2. Investment: What resources is the organisation investing in finding and using clinical effectiveness information? Is there a planned approach to promoting evidence-based medicine that is properly resourced and staffed?
3. Policies and guidelines: Who is responsible for receiving, acting on and monitoring the implementation of evidence-based guidance and policy recommendations such as NICE guidance or Effective Health Care Bulletins? What action has been taken on each of these publications issued to date? Do arrangements ensure that both managers and clinicians play their part in guideline development and implementation?
4. Training: Has any training been provided to staff within the organisation (both clinical and non-clinical) on appraising and using evidence of effectiveness to influence clinical practice?
5. Contracts: How often does clinical and cost-effectiveness information form an important part of contract negotiation and agreement? How many contracts contain terms that set out how effectiveness information is to be used?
6. Incentives: What incentives—both individual and organisational—exist to encourage the practice of evidence-based medicine? What disincentives exist to discourage inappropriate practice and unjustified variations in clinical decision-making?
7. Information systems: Is the potential of existing information systems to monitor clinical effectiveness being used to the full? Is there a business case for new information systems to address the task, and is this issue being considered when IT purchasing decisions are made?
8. Clinical audit: Is there an effective clinical audit programme throughout the organisation, capable of addressing issues of clinical effectiveness and bringing about appropriate changes in practice?

Appendix 2

Assessing the effects of an intervention
Assessing the effects of an intervention