Matthias Perleth posted:
> I would suggest a more relaxed approach to the hierarchy of evidence
> issue. Levels of evidence have been created to rank study designs -
> regardless of their adequacy or quality - according to their internal
> validity. Thus, RCTs are less likely to be confounded (if they are of
> high quality) than observational studies or case series. Thus, as has
> been mentioned in the discussion, levels of evidence comprise only one
> dimension.
Perhaps I'm missing something in this discussion, but isn't RCT versus
observational design more a matter of the nature of the underlying question
than of a strict versus relaxed approach? If the question is efficacy ("can
it work?"), then the RCT is the gold standard; if an RCT is not ethical or
feasible, then observational designs, with due consideration of their
potential limitations, may have to suffice. If the question is "does it
work?", then observational designs seem more appropriate for studying
effectiveness (as opposed to efficacy).
>...reliance on single studies could be misleading...
Agreed! Isn't that a central point of debate between the frequentist
(Neyman-Pearson and Fisher) and Bayesian camps?
Benjamin Djulbegovic posted:
>This all implies that current scales/checklists are grossly inadequate.
>Should their use be abandoned? Should search for "perfect" checklist/scale
>be also abandoned?
Isn't the real value of these scales/checklists as an inventory of the
potential strengths and weaknesses of each study reviewed, rather than as a
"pass/fail" grade per se? If so, perhaps "necessary but not sufficient"
would be a better description of their role?
David Birnbaum, PhD, MPH
Clinical Assistant Professor
Dept. of Health Care & Epidemiology
University of British Columbia, Canada