Greetings all,
I searched the list archives and could not find this
question addressed (though there are several threads that
discuss the formation and uses of PICO - as well as
Andrew Booth's collection of wild-type PICO variants
from 2009).
A colleague of mine wishes to know:
Has anyone developed and validated a rubric or other
tool for evaluating the quality of a PICO question?
The trick to this question, I think, is defining what
"quality" means: output from a search? adequate
representation of a learning need? or merely being
well-structured (which is somewhat circular, since
structure is part of the definition of a PICO question)?
I imagine the outcome for an output-based validation
study would be the quality of the results of a database
search (MEDLINE, Embase, etc.), but the more I think
about how one would assess that quality, the more things
start spinning in my head.
In the sense of a learning need, I'm also somewhat of the
mind that a "good" PICO question is best "validated" by
the person asking it - does it really represent what they
assess to be their own learning need (or might that be
adjudicated by something like cognitive task analysis)?
Assessing the degree of "well-constructedness" of a PICO
question seems fairly straightforward and may not
require "validation," as it's a relatively artificial
construct to begin with.
Does anyone have any pertinent references I can share
with my colleague? Am I thinking about this the right
way?
Thanks in advance,
John
John Epling, MD, MSEd, FAAFP
Associate Professor and Chair
Department of Family Medicine
Co-Director, Studying-Acting-Learning-Teaching Network
(SALT-Net)
Associate Professor, Public Health and Preventive
Medicine
SUNY-Upstate Medical University
Syracuse, NY
[log in to unmask]
Clinical:
http://www.upstate.edu/findadoc/eplingj
Faculty:
http://www.upstate.edu/faculty/eplingj