My understanding was that RCTs can answer both questions: can it work
(exploratory or explanatory RCTs) and does it work (pragmatic RCTs). It is a
frequent fallacy (FF) to say that RCTs cannot answer the question "does it
work in 'real life'?" - you just have to make sure that the RCT is designed
to replicate "real life" scenarios; there is nothing about the methodology
of RCTs that precludes you from answering "real life" questions. It is true
that most RCTs are not pragmatic, but when both are methodologically strong,
RCTs are still more reliable than observational studies at producing real-life
answers. I agree that issues of safety or ethical limitations are more
relevant in preferring one method over another.
Victor
-----Original Message-----
From: David Birnbaum [mailto:[log in to unmask]]
Sent: Friday, July 14, 2000 1:55 AM
To: [log in to unmask]
Subject: Re: Randomized vs. Non-randomized trials
Matthias Perleth posted:
> I would suggest a more relaxed approach to the hierarchy of evidence
> issue. Levels of evidence have been created to rank study designs -
> regardless of their adequacy or quality - according to their internal
> validity. Thus, RCTs are less likely to be confounded (if they are of
> high quality) than observational studies or case series. Thus, as has
> been mentioned in the discussion, levels of evidence comprise only one
> dimension.
Perhaps I'm missing something in this discussion, but isn't the choice between
RCT and observational designs more a matter of the nature of the underlying
question than of a strict versus relaxed approach? If the question is efficacy
("can it work?"), then the RCT is the gold standard; if an RCT is not ethical
or feasible, then observational designs, with due consideration of their
potential limitations, may have to suffice. If the question is "does it work?",
then observational designs seem more appropriate for studying effectiveness
(as opposed to efficacy).
>...reliance on single studies could be misleading...
Agreed! Isn't that a central point of debate between the Neyman-Pearson/Fisher
and Bayesian camps?
Benjamin Djulbegovic posted:
>This all implies that current scales/checklists are grossly inadequate.
>Should their use be abandoned? Should the search for a "perfect"
>checklist/scale also be abandoned?
Isn't the real value of these scales/checklists an inventory of potential
strengths and weaknesses in each study reviewed, rather than just a "pass/fail"
grade per se? If so, perhaps "necessary but not sufficient" would be a better
description of their role?
David Birnbaum, PhD, MPH
Clinical Assistant Professor
Dept. of Health Care & Epidemiology
University of British Columbia, Canada