How to Read a Paper: The Basics of Evidence-Based Medicine
Author: Trisha Greenhalgh
It is also important to consider whether the instrument was suitable for all participants and potential participants. In particular, did it take account of the likely range in the sample of physical and intellectual abilities, language and literacy, understanding of numbers or scaling and perceived threat of questions or questioner?
Question Seven: How was the questionnaire administered—and was the response rate adequate?
The methods section of a paper describing a questionnaire study should include details of three aspects of administration: (i) How was the questionnaire distributed (e.g. by post, face to face or electronically)? (ii) How was the questionnaire completed (e.g. self-completion or researcher-assisted)? and (iii) Were the response rates reported fully, including details of participants who were unsuitable for the research or refused to take part? Have any potential response biases been discussed?
The British Medical Journal will not usually publish a paper describing a questionnaire survey if fewer than 70% of people approached completed the questionnaire properly. A number of research studies have examined how to increase the response rate to a questionnaire survey, and several practical strategies have been shown to do so [3].
Another thing to look for in relation to response rates is a table in the paper comparing the characteristics of people who responded with people who were approached but refused to fill out the questionnaire. If there were systematic (as opposed to chance) differences between these groups, the results of the survey will not be generalisable to the population from which the responders were drawn. Responders to surveys conducted in the street, for example, are often older than average (perhaps because they are in less of a hurry!), and less likely to be from an ethnic minority (perhaps because some of the latter are unable to speak the language of the researcher fluently). On the other hand, if the authors of the study have shown that non-responders are pretty similar to responders, you should worry less about generalisability even if response rates were lower than you'd have liked.
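To make the idea of a ‘systematic (as opposed to chance) difference’ concrete, here is a minimal sketch in Python (not taken from the book) of how such a comparison might be run; the age bands and every count are invented purely for illustration.

```python
# A minimal sketch (not from the book) of the responder versus non-responder
# check described above. The age-band counts are invented for illustration.
from scipy.stats import chi2_contingency

# Hypothetical counts of people in each age band who completed the
# questionnaire versus those who were approached but declined.
#                  <40  40-64  65+
responders     = [  55,   80,  90]
non_responders = [  60,   45,  25]

chi2, p, dof, expected = chi2_contingency([responders, non_responders])
print(f"chi-squared = {chi2:.2f}, df = {dof}, p = {p:.4f}")

# A small p-value suggests a systematic (not chance) difference in age profile
# between responders and non-responders, i.e. a threat to generalisability.
```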
Question Eight: How were the data analysed?
Analysis of questionnaire data is a sophisticated science. See these excellent textbooks on social research methods if you're interested in learning the formal techniques [4, 5]. If you are just interested in completing a checklist about a published questionnaire study, try considering these aspects of the study. First, broadly what sort of analysis was carried out, and was this appropriate? In particular, were the correct statistical tests used for quantitative responses, and/or was a recognisable method of qualitative analysis (see section ‘Measuring costs and benefits of health interventions’) used for open-ended questions? It is reassuring (but by no means a flawless test) to learn that one of the paper's authors is a statistician. And as I said in Chapter 5, if the statistical tests used are ones you have never heard of, you should probably smell a rat. The vast majority of questionnaire data can be analysed using commonly used statistical tests such as the chi-squared test, Spearman's rank correlation, Pearson's correlation coefficient, and so on. The commonest mistake of all in questionnaire research is to use no statistical tests at all, and you don't need a PhD in statistics to spot that dodge!
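As a concrete illustration of the ‘commonly used statistical tests’ just mentioned, the short Python sketch below applies Spearman's and Pearson's correlation to invented data; the confidence-versus-knowledge scenario and every number in it are assumptions for illustration only, not findings from any real study.

```python
# A minimal sketch of the commonly used tests mentioned above. The scenario
# (self-rated confidence versus knowledge score) and all numbers are invented.
from scipy.stats import pearsonr, spearmanr

# Hypothetical paired data for ten respondents: confidence on a 1-5 Likert
# scale and score on a 20-item knowledge test.
confidence = [3, 4, 2, 5, 4, 3, 1, 5, 2, 4]
knowledge = [12, 9, 14, 11, 15, 8, 10, 13, 7, 16]

rho, p_rho = spearmanr(confidence, knowledge)  # rank correlation (ordinal data)
r, p_r = pearsonr(confidence, knowledge)       # linear correlation (interval data)

print(f"Spearman rho = {rho:.2f} (p = {p_rho:.3f})")
print(f"Pearson r = {r:.2f} (p = {p_r:.3f})")

# Whatever the outcome, both the statistic and the p-value should be reported,
# whether or not the association reaches statistical significance.
```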
You should also check to ensure that there is no evidence of ‘data dredging’. In other words, have the authors simply thrown their data into a computer and run hundreds of tests, and then dreamt up a plausible hypothesis to go with something that comes out as ‘significant’? In the jargon, all analyses should be hypothesis driven—that is, the hypothesis should be thought up first and then the analysis should be performed, not vice versa.
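A quick calculation shows why data dredging is so treacherous. If m independent tests are each carried out at significance level α, the probability of at least one spuriously ‘significant’ finding is

$$P(\text{at least one false positive}) = 1 - (1 - \alpha)^{m}.$$

With α = 0.05 and m = 20 (both figures chosen here purely for illustration, and assuming the tests are independent), that is 1 − (0.95)^20 ≈ 0.64: roughly a two-in-three chance of turning up ‘something significant’ in pure noise.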
Question Nine: What were the main results?
Consider first what the overall findings were, and whether all relevant data were reported. Are quantitative results definitive (statistically significant), and are relevant non-significant results also reported? It may be just as important to have discovered, for example, that GPs' self-reported confidence in managing diabetes is not correlated to their knowledge about the condition as it would have been to discover that there was a correlation! For this reason, the questionnaire study that only comments on the ‘positive’ statistical associations is internally biased.
Another important question is whether qualitative results have been adequately interpreted (e.g. using an explicit theoretical framework), and whether any quotes have been properly justified and contextualised (rather than ‘cherry picked’ to spice up the paper). Look back at Chapter 6 (‘Papers that report drug trials and other simple interventions’) and remind yourself of the tricks used by unscrupulous marketing people to oversell findings. Check the graphs carefully (especially whether the axes start at zero) and the data tables.
Question Ten: What are the key conclusions?
This is a common-sense question. What do the results actually mean, and have the researchers drawn an appropriate link between the data and their conclusions? Have the findings been placed within the wider body of knowledge in the field (especially any similar or contrasting surveys using the same instrument)? Have the authors acknowledged the limitations of their study and couched their discussion in the light of these (e.g. if the sample was small or the response rate low, did they recommend further studies to confirm the preliminary findings)? Finally, are any recommendations fully justified by the findings? For example, if they have performed a small, parochial study they should not be suggesting changes in national policy as a result! If you are new to critical appraisal you may find such judgements difficult to make, and the best way to get better is to join in journal club discussions (either face to face or online) where a group of you share your common-sense reactions to a chosen paper.
In conclusion, anyone can write down a list of questions and photocopy it—but this doesn't mean that a set of responses to these questions constitutes research! The development, administration, analysis and reporting of questionnaire studies are at least as challenging as the other research approaches described in other chapters of this book. Questionnaire researchers are a disparate bunch, and have not yet agreed on a structured reporting format comparable to CONSORT (RCTs), QUOROM or PRISMA (systematic reviews) and AGREE (guidelines). Whilst a number of suggested structured tools, each designed for slightly different purposes, are now available [14–16], a review of such tools found little consensus and many unanswered questions [17]. I suspect that as such guides come to be standardised and more widely used, papers describing questionnaire research will become more consistent and easier to appraise.
References
1 Boynton PM, Wood GW, Greenhalgh T. A hands on guide to questionnaire research part three: reaching beyond the white middle classes. BMJ 2004;328(7453):1433–6.
2 Boynton PM, Greenhalgh T. A hands on guide to questionnaire research part one: selecting, designing, and developing your questionnaire. BMJ 2004;328(7451):1312–5.
3 Boynton PM. A hands on guide to questionnaire research part two: administering, analysing, and reporting your questionnaire. BMJ 2004;328:1372–5.
4 Robson C. Real world research: a resource for users of social research methods in applied settings. Wiley, Chichester, 2011.
5 Bryman A. Social research methods. Oxford University Press, Oxford, 2012.
6 Dunn SM, Bryson JM, Hoskins PL, et al. Development of the diabetes knowledge (DKN) scales: forms DKNA, DKNB, and DKNC. Diabetes Care 1984;7(1):36–41.
7 Rahmqvist M, Bara A-C. Patient characteristics and quality dimensions related to patient satisfaction. International Journal for Quality in Health Care 2010;22(2):86–92.
8 Phillips D. Quality of life: concept, policy and practice. Routledge, London, 2012.
9 Bradley C, Speight J. Patient perceptions of diabetes and diabetes therapy: assessing quality of life. Diabetes/Metabolism Research and Reviews 2002;18(S3):S64–9.
10 Drewnowski A. Diet image: a new perspective on the food-frequency questionnaire. Nutrition Reviews 2001;59(11):370–2.
11 Adams AS, Soumerai SB, Lomas J, et al. Evidence of self-report bias in assessing adherence to guidelines. International Journal for Quality in Health Care 1999;11(3):187–92.
12 Gilbody S, House A, Sheldon T. Routine administration of Health Related Quality of Life (HRQoL) and needs assessment instruments to improve psychological outcome – a systematic review. Psychological Medicine 2002;32(8):1345–56.
13 Houtkoop-Steenstra H. Interaction and the standardized survey interview: the living questionnaire. Cambridge University Press, Cambridge, 2000.
14 Eysenbach G. Improving the quality of Web surveys: the Checklist for Reporting Results of Internet E-Surveys (CHERRIES). Journal of Medical Internet Research 2004;6(3):e34.
15 Draugalis JR, Coons SJ, Plaza CM. Best practices for survey research reports: a synopsis for authors and reviewers. American Journal of Pharmaceutical Education 2008;72(1):11.
16 Kelley K, Clark B, Brown V, et al. Good practice in the conduct and reporting of survey research. International Journal for Quality in Health Care 2003;15(3):261–6.
17 Bennett C, Khangura S, Brehaut JC, et al. Reporting guidelines for survey research: an analysis of published guidance and reporting practices. PLoS Medicine 2011;8(8):e1001069.
Chapter 14
Papers that report quality improvement case studies
What are quality improvement studies—and how should we research them?
The British Medical Journal (www.bmj.com) mainly publishes research articles. Another leading journal, BMJ Quality and Safety (http://qualitysafety.bmj.com), mainly publishes descriptions of efforts to improve the quality and safety of health care, often in real-world settings such as hospital wards or general practices [1]. If you are studying for an undergraduate exam, you should ask your tutors whether quality improvement studies are going to feature in your exams, because the material covered here is more often contained in postgraduate courses and you may find that it's not on your syllabus. If that is the case, put this chapter aside for after you've passed—you will certainly need it when you are working full time in the real world!
One key way of improving quality is to implement the findings of research and make care more evidence-based. This is discussed in the next chapter. But achieving a high-quality and safe health service requires more than evidence-based practice. Think of the last time you or one of your relatives was in hospital. I'm sure you wanted to have the most accurate diagnostic tests (Chapter 8), the most efficacious drugs (Chapter 6) or non-drug interventions (Chapter 7), and you also wanted the clinicians to follow evidence-based care plans and guidelines (Chapter 10) based on systematic reviews (Chapter 9). Furthermore, if the hospital asked you to help evaluate the service, you would have wanted them to use a valid and reliable questionnaire (Chapter 13).
But did you also care about things like how long you had to wait for an outpatient appointment and/or your operation, the attitudes of staff, the clarity and completeness of the information you were given, the risk of catching an infection (e.g. when staff didn't wash their hands consistently), and the general efficiency of the place? If a member of staff made an error, was this openly disclosed to you and an unreserved apology offered? And if this happened, did the organisation have systems in place to learn from what went wrong and ensure it didn't happen again to someone else? A ‘quality’ health care experience includes all these things and more. The science of quality improvement draws its evidence from many different disciplines including research on manufacturing and air traffic control as well as evidence-based medicine [2–4].