Hello John,
Unless the desire is to evaluate student work, and without wishing to discourage research for its own sake, as a practitioner I wouldn’t place any great value
on a tool to evaluate PICO questions (for example, as part of evaluating the quality of a systematic review, HTA or guideline, beyond simple use of the rubric itself). Let me explain why.
“Output from a search” is indeed the desired outcome of a PICO question, but it would not be an appropriate metric of the question’s quality. No matter how well formulated the PICO question is, producing search outputs and relevant information still requires translating it into search strategies for multiple databases, a knowledge-based activity that very quickly extends beyond the wording of the PICO question itself. The search results then have to be sifted, again an activity that depends in small part on the precision of wording of the PICO question and in large part on the information skills and tacit knowledge of the person sifting. This may also be an iterative process, with the question being revised in response to the literature identified (or not identified). The ‘quality’ of search output, then, depends on the experience of the searcher, the choice of search terms, the precision and accuracy of indexing in the database, the sources used, and the application of inclusion/exclusion criteria, not just the wording of the PICO question. The question could be viewed as simply a starting point for dialogue between the person with the information need and the search specialist, as well as part of communicating the scope of the work to the intended audience. Viewed in this context, I would argue that it is more important to evaluate the quality of search strategies than of PICO questions, and
there is a checklist for this: PRESS (http://www.cadth.ca/en/publication/781, Appendix G).
If the question is about how to evaluate student work, there would have to be a subjective element rather than merely a score against a tool, just as in any critical appraisal. I would suggest that the judgment consist of whether the instructor can refine the question further in a meaningful way, and whether the student can provide explanatory commentary as to why their choice of wording reflects the degree of precision appropriate for the information need, including, for example, what alternative forms of wording were considered and rejected, and why. Alternatively, the test question could describe a clinical information need and the context in which the need has arisen,
provide a (flawed) PICO question produced in that context, and allow the student to demonstrate their understanding through a critique and revision of the question, identifying and explaining the ‘flaws’ of the question in that context.
This is merely my opinion and I would welcome other views.
Best wishes,
Michele
Michele Hilton Boon MA MLIS MCLIP MPH
Programme Manager, Healthcare Improvement Scotland
From: Evidence based health
(EBH) [mailto:[log in to unmask]] On Behalf Of John Epling
Sent: 01 May 2014 19:02
To: [log in to unmask]
Subject: Fwd: Evaluating the quality of a PICO question
(tapping the microphone...) Is this thing on?
I promise I won't send this around again, but I just wanted to check whether anyone had any thoughts about my question: What's the
best way to evaluate the quality of a PICO question? (The context is mainly teaching EBM, but I'm open to other ideas - see more detail below.)
Thanks again.
John
John Epling, MD, MSEd, FAAFP
Associate Professor and Chair
Department of Family Medicine
Co-Director, Studying-Acting-Learning-Teaching Network (SALT-Net)
Associate Professor, Public Health and Preventive Medicine
SUNY-Upstate Medical University
Syracuse, NY
[log in to unmask]
>>> On 4/27/2014 at 9:11 PM, John Epling <[log in to unmask]> wrote:
Greetings all,