Kumara,
 
Several years ago (gosh, almost a decade!), a couple of colleagues (John
Smucny and Fred Tudiver) and I came up with a set of appraisal criteria -
"How to Appraise an Appraisal" - that would at least partially address your
question.  These criteria came from our digestion of concepts from both the
Evidence-Based Medicine Working Group (McMaster and Oxford) and the
Information Mastery Working Group (UVA, now Tufts):
******
How to Appraise an Appraisal
 
Remember the Usefulness Equation
 
Usefulness = (Relevance x Validity) / Work
 
Since the work is already done for us, these resources could be very
useful - but what about the validity and relevance?
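
To make the equation concrete (the numbers here are purely illustrative,
my own, not from any published scoring system): suppose a pre-appraised
review rates 0.9 on relevance and 0.8 on validity, and takes half the work
of critically reading the original trial (Work = 0.5).  Then Usefulness =
(0.9 x 0.8) / 0.5 = 1.44, versus (0.9 x 0.8) / 1.0 = 0.72 for appraising
the original yourself - same evidence, double the usefulness, because the
work term shrinks.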
 
Modes of evidence retrieval: Hunting and Foraging
 
1.      Does the selection process for articles to be reviewed in the
journal include criteria for validity and relevance?  (Do they look for
POEMs?  How much validity assessment is done before the review?)
 
2.      Does the review address a focused, answerable clinical question?
 
3.      Is the abstraction of data from the original article explicit and
structured?  (Can you skip reading the whole article and still feel
comfortable with the validity?)
 
4.      Are there clear recommendations for the integration of this
evidence into practice? (A clinical bottom line)
 
5.      Does the commentary set this evidence within a context of existing
knowledge in the field?  (Is this the only evidence on the question?  What
do other studies say?)
 
6.      Does this review use clinically relevant statistics?  (NNT, NNH,
likelihood ratios - see the quick refresher after these criteria)
 
7.      Is the writing clear and understandable (avoiding jargon and
complicated statistics)?
 
8.      Where does this review fit in the Usefulness Equation?
 
9.      Can you apply this evidence to your patient or population?
******
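
(A quick refresher on the statistics in criterion 6, with made-up example
numbers: NNT = 1 / absolute risk reduction, so if 10% of control patients
and 5% of treated patients have the bad outcome, ARR = 0.10 - 0.05 = 0.05
and NNT = 1 / 0.05 = 20 - treat 20 patients to prevent one outcome.  NNH
is the same calculation applied to harms, and a positive likelihood ratio
of 10 means a positive test result is 10 times as likely in patients with
the disease as in those without it.)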
 
Slawson and Shaughnessy have in recent years presented formalized criteria
for choosing "secondary sources" - aimed mainly at choosing an ongoing
source of "keeping up" literature - but I cannot place my hands on a
reference or source for this; I would refer you to their excellent yearly
Information Mastery Workshop held in Boston, MA.

As for a ranking - if you get a dozen EBM-ers in a room with a pile of
pre-appraised sources, I'm sure you'd get 13 different rankings....

For what it's worth, in my EBM course, I tried to get medical students to
use these criteria (derived from a Slawson/Shaughnessy workshop exercise)
to rank a set of pre-appraised sources that all covered a single clinical
topic.  In retrospect, I think I aimed this at the wrong level of learner -
it might be more appropriate for graduate medical education
("registrar"-level?) than for medical students.

Good luck!

jwe

John Epling, MD, MSEd, FAAFP

Associate Professor and Vice-Chair, Family Medicine
Director, Studying-Acting-Learning-Teaching Network (SALT-Net)
Associate Professor, Public Health and Preventive Medicine
Director, Preventive Medicine Program
SUNY-Upstate Medical University
Syracuse, NY
[log in to unmask]


>>> On 9/7/2009 at 9:48 PM, Kumara Mendis <[log in to unmask]> wrote:

Dear Colleagues
What is a pre-appraised evidence-based resource?
Has there been a *definition*?
Has someone categorized or ranked the commonly known pre-appraised evidence
resources?
Can we differentiate between a good textbook of medicine that is
evidence-based (to the extent that is possible with current electronic
versions) and a pre-appraised evidence resource?
Any clarifications or articles re the above?
Kumara