I thought someone might have started kicking this one around, particularly
with all the recent discussion on evidence grading. From JAMA, 8/15/2001:
Ioannidis, J et al. "Comparison of Evidence of Treatment Effects in
Randomized and Nonrandomized Studies"
The authors state:
"Although we perused several hundreds of meta-analyses, the vast majority
regarded the randomized design as a prerequisite for eligibility and most of
them did not even cite the nonrandomized studies. This is unfair for
epidemiological research that may offer some complementary insights to those
provided by randomized trials. We propose that future systematic reviews
and meta-analyses should pay more attention to the available nonrandomized
data. It would be wrong to reduce the efforts to promote randomized trials
so as to obtain easy answers from nonrandomized designs. However,
nonrandomized evidence may also be useful and may be helpful in the
interpretation of randomized results."
I can see their point, but I have some trouble with the term "unfair."
Limiting a systematic review or meta-analysis to randomized data provides a
uniform framework for building it, and there is something to be said for
keeping it simple. Perhaps citing nonrandomized trials with discrepant
results would be a helpful middle ground. Any other thoughts?
Dan Sontheimer
Assoc. Director Spartanburg Family Medicine Residency
Spartanburg, SC USA