I hate happy endings, but I can't agree more.

> -----Original Message-----
> From: Doggett, David [SMTP:[log in to unmask]]
> Sent: Friday, August 17, 2001 11:32 AM
> To:   [log in to unmask]
> Subject:      Re: Randomized versus non-randomized studies
>
> I believe I agree with everything Victor Montori said.  The point I think
> we are both making is that a too narrow, literal approach to EBM or HTA
> is not appropriate.  Bruce Guthrie is correct that in EBM as well as in
> HTA there are those who throw up their hands and proclaim "no evidence"
> when confronted with a topic for which there are no RCTs.  Academicians
> seeking to publish meta-analyses and systematic reviews typically confine
> themselves to topics with RCTs.  The physician at the bedside, and
> sometimes policy makers contracting for TAs, do not have this luxury.
> The fact that groups like Cochrane and academic methodologists have not
> addressed these problems has led some people to think that EBM methods
> only include RCTs.  Also, I think some EBM leaders may be reluctant to
> propose methods for non-RCTs for fear of giving too much legitimacy to
> lesser study designs.  The fact is, the systematic and critical methods
> of EBM must be extendable into areas without ideal studies.
>
> D. Doggett
>
>
> -----Original Message-----
> From: Montori, Victor M., M.D. [mailto:[log in to unmask]]
> Sent: Friday, August 17, 2001 11:47 AM
> To: Doggett, David; [log in to unmask]
> Subject: RE: Randomized versus non-randomized studies
>
>
> I take issue with the implied definition of evidence-based medicine.
>
> Evidence-based medicine recognizes a continuum of strength of inference
> related to the strength of study design and conduct (as far as protection
> against bias) that creates a hierarchy.  It recognizes that clinicians
> (because EBM is a clinical paradigm) need to determine the highest level
> of evidence available to answer a specific clinical question.  The
> predominance of the RCT and the systematic review comes from the
> predominance of treatment questions (and the availability of treatment
> studies) in practice.  It just takes a quick look at the Users' Guides
> to the Medical Literature series in JAMA or at the series on the
> Rational Clinical Examination to understand that the scope of EBM is not
> limited to any specific question type or topic.
>
> Consideration of a hierarchy of evidence is only one part of EBM (other
> components include the incorporation of patient values and preferences,
> of reality constraints, and of clinical expertise).
>
> Thus, the methods David attributes to HTA are no different from those
> involved in the clinical practice of EBM.
>
> The need to make policy recommendations based on evidence, and to
> incorporate evidence in an explicit fashion, has been accompanied by the
> need for a classification system for the evidence and a separate one for
> the recommendations.  I would suggest people look at a more modern
> approach to this issue in the most recent ACCP Consensus on
> Antithrombotic treatment (Chest, 2001).  Again, this is different from
> the use of evidence at the bedside.
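>
> (To illustrate the two-axis idea of grading evidence and recommendations
> separately, here is a tiny hypothetical Python sketch.  The labels echo
> but do not reproduce the actual ACCP scheme, which should be consulted
> directly in Chest, 2001:)
>
>     # Axis 1: strength of recommendation (clarity of risk/benefit).
>     # Axis 2: methodologic quality of the supporting evidence.
>     STRENGTH = {1: "benefits clearly outweigh risks",
>                 2: "risk/benefit trade-off less clear"}
>     QUALITY = {"A": "consistent RCTs",
>                "B": "RCTs with limitations",
>                "C": "observational studies only"}
>
>     def grade(strength, quality):
>         """Combine the two axes into a single grade, e.g. '1A'."""
>         return "%d%s (%s; %s)" % (strength, quality,
>                                   STRENGTH[strength], QUALITY[quality])
>
>     print(grade(1, "A"))
>     print(grade(2, "C"))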
>
> V
>
> > -----Original Message-----
> > From: Doggett, David [SMTP:[log in to unmask]]
> > Sent: Friday, August 17, 2001 10:32 AM
> > To:   [log in to unmask]
> > Subject:      Re: Randomized versus non-randomized studies
> >
> > This question highlights the difference between evidence-based medicine
> > (as it has been defined and practiced in systematic reviews) and
> > technology assessment.  EBM meta-analyses and systematic reviews have
> > confined themselves almost exclusively to RCTs.  Thus, the topics
> > covered by EBM are limited to questions addressed by RCTs.  Technology
> > assessment (TA) does not have that luxury.  We must present decision
> > makers with the current state of knowledge, regardless of the source,
> > although it is essential to critically analyze the reliability of the
> > data.
> >
> > I recently gave a talk on meta-analysis of uncontrolled studies at the
> > annual meeting of the International Society of Technology Assessment
> > in Health Care, held here in Philadelphia in June.  Our approach has
> > been to use an evidence hierarchy only to guide our literature searches
> > and inclusion criteria, not to assign points by which to weight
> > evidence.  Thus, if there are a number of double-blind RCTs, we do a
> > meta-analysis of those.  Lesser designs (unblinded RCTs, other
> > controlled studies, uncontrolled studies) are then looked at only for
> > any additional evidence they may provide, such as on special patient
> > groups, prognostic factors, etc.  But if there are no dbRCTs, we use
> > whatever there is on the next level down the evidence hierarchy.
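> >
> > (For readers unfamiliar with the mechanics, a minimal Python sketch of
> > the inverse-variance pooling implied by "we do a meta-analysis of
> > those."  The numbers are hypothetical, the effect measure is assumed
> > to be a log odds ratio, and a real analysis would also assess
> > heterogeneity and consider a random-effects model:)
> >
> >     import math
> >
> >     # Hypothetical per-study estimates (log odds ratios) and standard
> >     # errors extracted from the double-blind RCTs that met inclusion.
> >     effects = [0.42, 0.31, 0.55]
> >     ses = [0.20, 0.15, 0.25]
> >
> >     # Fixed-effect (inverse-variance) pooling: each study is weighted
> >     # by the reciprocal of its variance.
> >     weights = [1.0 / se ** 2 for se in ses]
> >     pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
> >     pooled_se = math.sqrt(1.0 / sum(weights))
> >
> >     lo, hi = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
> >     print("pooled log OR = %.3f, 95%% CI (%.3f, %.3f)" % (pooled, lo, hi))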
> >
> > In addition to the Ioannidis article cited by Sontheimer, there are
> > other interesting articles on randomized versus nonrandomized studies.
> > One is "Randomized, Controlled Trials, Observational Studies, and the
> > Hierarchy of Research Designs," Concato J, Shah N, and Horwitz RI,
> > N Engl J Med, 2000, 342:1887-92.  This study found little difference
> > in effect sizes between 55 RCTs and 44 controlled studies of five
> > different medical topics.
> >
> > On the other hand, another study, "Assignment Methods in
> > Experimentation: When Do Nonrandomized Experiments Approximate Answers
> > From Randomized Experiments?" Heinsman DT and Shadish WR, Psychol
> > Methods, 1996, 1:154-69, found substantial differences in effect sizes
> > between 51 RCTs and 47 controlled trials of four topics in education
> > research.  These two contrasting findings show that the problem is
> > topic specific.  Furthermore, the latter authors went on to do a
> > multiple regression analysis of various study design and reporting
> > variables in the studies.  That is, they correlated the study
> > variables with the effect size.  What they found was that
> > randomization was seventh in the top-ten ranking of study variables
> > affecting the effect size.  Knowing these correlation coefficients,
> > they were then able to adjust the study results for these variables.
> > After adjustment there was little or no difference in the effect sizes
> > of the studies.
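> >
> > (A rough Python sketch of this kind of adjustment, with made-up data.
> > Heinsman and Shadish's actual covariates and model were more
> > elaborate, so treat this only as an illustration of regressing effect
> > size on study variables and removing a confounder's contribution:)
> >
> >     import numpy as np
> >
> >     # One row per study: intercept, randomized (1/0), blinded (1/0),
> >     # attrition rate.  The outcome d is the study's standardized
> >     # effect size.
> >     X = np.array([[1, 1, 1, 0.05],
> >                   [1, 1, 0, 0.10],
> >                   [1, 0, 1, 0.20],
> >                   [1, 0, 0, 0.15],
> >                   [1, 1, 1, 0.12],
> >                   [1, 0, 0, 0.30]])
> >     d = np.array([0.40, 0.35, 0.62, 0.55, 0.38, 0.70])
> >
> >     # Ordinary least squares: effect size regressed on study variables.
> >     beta, _, _, _ = np.linalg.lstsq(X, d, rcond=None)
> >
> >     # Adjust each study's effect size for the attrition variable, so
> >     # the randomized/nonrandomized contrast is compared on a level field.
> >     adjusted = d - X[:, 3] * beta[3]
> >     print("coefficient on randomization:", round(beta[1], 3))
> >     print("adjusted effect sizes:", np.round(adjusted, 3))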
> >
> > Sometimes there are not any controlled trials, only uncontrolled case
> > series.  Then it is necessary to go to the literature and synthesize a
> > historical control.  This is also good practice for assessing the
> > validity of active controls in RCTs without a no-treatment group.
> > This procedure is problematic and has been examined in the study
> > "Randomized versus Historical Controls for Clinical Trials," Sacks H,
> > Chalmers TC, and Smith H Jr, Am J Med, 1982, 72:233-40.  These authors
> > found that using historical controls frequently exaggerates the effect
> > size.  While treatment group results were similar regardless of the
> > comparison design, historical controls usually fared worse than
> > parallel controls, thus accounting for the exaggeration in effect
> > size.  Because of this potential exaggeration, small or modest effect
> > sizes found with historical controls are not very reliable; however,
> > we have seen some situations where the effect size with historical
> > controls was so large and striking that the findings could not be
> > ignored, and in fact were strong evidence that there was no equipoise,
> > and that an RCT might be unethical.
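> >
> > (Along the same lines, a small Python sketch of checking a case series
> > against a synthesized historical control.  The counts are
> > hypothetical, and the crude two-proportion z-test here ignores the
> > between-study heterogeneity that makes historical controls so
> > problematic:)
> >
> >     import math
> >
> >     # Pooled response rate from published control arms (the
> >     # synthesized historical control) vs. an uncontrolled case series.
> >     hist_events, hist_n = 30, 200   # 15% response without treatment
> >     case_events, case_n = 45, 60    # 75% response in the case series
> >
> >     p_hist = hist_events / hist_n
> >     p_case = case_events / case_n
> >
> >     # Two-proportion z-test on the difference in response rates.
> >     p_pool = (hist_events + case_events) / (hist_n + case_n)
> >     se = math.sqrt(p_pool * (1 - p_pool) * (1 / hist_n + 1 / case_n))
> >     print("risk difference = %.2f, z = %.1f"
> >           % (p_case - p_hist, (p_case - p_hist) / se))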
> >
> > This raises a point that has always puzzled me.  RCTs are only
> > considered ethical if there is equipoise.  But what can the evidence
> > be for equipoise?  EBM only recognizes RCTs as valid evidence.  As far
> > as I know, EBM is silent on what the evidence must be for equipoise.
> > Any thoughts anyone?
> >
> > David L. Doggett, Ph.D.
> > Senior Medical Research Analyst
> > Health Technology Assessment and Information Services
> > ECRI, a non-profit health services research organization
> > 5200 Butler Pike
> > Plymouth Meeting, Pennsylvania 19462, U.S.A.
> > Phone: (610) 825-6000 x5509
> > FAX: (610) 834-1275
> > http://www.ecri.org
> > e-mail: [log in to unmask]
> >
> >
> >
> > -----Original Message-----
> > From: Sontheimer, Daniel MD [mailto:[log in to unmask]]
> > Sent: Friday, August 17, 2001 8:30 AM
> > To: [log in to unmask]
> > Subject:
> >
> >
> > I thought someone might have started kicking this one around,
> > particularly with all the recent discussion on evidence grading.  From
> > JAMA, 8/15/2001:
> >
> > Ioannidis J, et al.  "Comparison of Evidence of Treatment Effects in
> > Randomized and Nonrandomized Studies."  The authors state:
> > "Although we perused several hundreds of meta-analyses, the vast
> > majority regarded the randomized design as a prerequisite for
> > eligibility and most of them did not even cite the nonrandomized
> > studies.  This is unfair for epidemiological research that may offer
> > some complementary insights to those provided by randomized trials.
> > We propose that future systematic reviews and meta-analyses should pay
> > more attention to the available nonrandomized data.  It would be wrong
> > to reduce the efforts to promote randomized trials so as to obtain
> > easy answers from nonrandomized designs.  However, nonrandomized
> > evidence may also be useful and may be helpful in the interpretation
> > of randomized results."
> >
> > I can see their point, but have a little trouble with using the term
> > "unfair."  Limiting to randomized data provides a uniform framework
> > for building a systematic review or meta-analysis.  There is something
> > to be said for keeping it simple.  Perhaps citing nonrandomized trials
> > that are discrepant would be helpful.  Any other thoughts?
> >
> > Dan Sontheimer
> > Assoc. Director Spartanburg Family Medicine Residency
> > Spartanburg, SC USA
> >