Dear Eugene (and the list),

I suggest you look at the following UK HTA Methodology report:
Deeks JJ, Dinnes J, D'Amico R, Sowden AJ, Sakarovitch C, Song F, et al. 
Evaluating non-randomised intervention studies. Health Technol Assess 
2003;7(27).


which can be freely downloaded from the web:
http://www.ncchta.org/ProjectData/1_project_record_published.asp?PjtId=1117&status=6


Chapter 3 of the report critically reviews the Benson study you cite, 
together with all the other studies which have attempted to answer the 
question by making multiple comparisons of RCTs and non-randomised studies 
of the same intervention.  The Benson study used very liberal criteria 
for defining results to be "the same", which is why it came to a different 
conclusion from some of the other studies.
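
To make that criterion concrete, here is a minimal sketch in Python, with 
invented numbers purely for illustration (not data from any of these 
studies): pool the observational studies with the Mantel-Haenszel method, 
then ask whether the pooled estimate falls inside the 95% confidence 
interval around the pooled RCT estimate.

    import math

    def mh_odds_ratio(tables):
        # Mantel-Haenszel pooled odds ratio for a list of 2x2 tables.
        # (a, b, c, d) = treated events, treated non-events,
        #                control events, control non-events.
        num = sum(a * d / (a + b + c + d) for a, b, c, d in tables)
        den = sum(b * c / (a + b + c + d) for a, b, c, d in tables)
        return num / den

    def pooled_log_or_ci(tables, z=1.96):
        # Fixed-effect inverse-variance pooled log odds ratio and its
        # 95% CI (continuity corrections omitted for brevity).
        weights, logs = [], []
        for a, b, c, d in tables:
            logs.append(math.log((a * d) / (b * c)))
            weights.append(1.0 / (1/a + 1/b + 1/c + 1/d))
        pooled = sum(w * lg for w, lg in zip(weights, logs)) / sum(weights)
        se = math.sqrt(1.0 / sum(weights))
        return pooled - z * se, pooled + z * se

    # Hypothetical studies of a single intervention.
    rct_tables = [(15, 85, 25, 75), (30, 170, 45, 155)]
    obs_tables = [(40, 160, 70, 130), (55, 245, 80, 220)]

    obs_or = mh_odds_ratio(obs_tables)
    lo, hi = pooled_log_or_ci(rct_tables)
    same = lo <= math.log(obs_or) <= hi
    print("observational MH OR = %.2f" % obs_or)
    print("RCT 95%% CI for OR = (%.2f, %.2f)" % (math.exp(lo), math.exp(hi)))
    print("'same result' under the liberal criterion:", same)

Any observational estimate landing anywhere inside the RCT interval counts 
as "the same" under this rule, however wide that interval is, which is what 
makes the criterion so liberal.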

The report also contains some novel studies that created non-randomised 
comparisons from RCT data, which enables unconfounded comparisons to be 
made between the results of RCTs and non-randomised studies, a more 
powerful type of comparison than that available from studies like Benson's.
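
To illustrate that logic (a toy simulation only, not the report's actual 
resampling methods), one can create patients with a prognostic covariate 
and estimate the same treatment effect twice from the same population: 
once with randomised assignment, and once letting prognosis drive who 
gets treated.

    import random

    random.seed(1)

    def outcome(frail, treated):
        # Binary outcome: frail patients have a higher baseline risk;
        # treatment reduces risk by an absolute 10% in everyone.
        base = 0.5 if frail else 0.2
        effect = -0.10 if treated else 0.0
        return random.random() < base + effect

    def risk_difference(assign, n=100000):
        # Estimated risk difference (treated minus untreated)
        # under a given treatment-assignment rule.
        events = {True: 0, False: 0}
        counts = {True: 0, False: 0}
        for _ in range(n):
            frail = random.random() < 0.4    # prognostic covariate
            treated = assign(frail)
            counts[treated] += 1
            events[treated] += outcome(frail, treated)
        return events[True] / counts[True] - events[False] / counts[False]

    # Randomised: assignment ignores prognosis, so the estimate is
    # close to the true effect of -0.10.
    rd_rct = risk_difference(lambda frail: random.random() < 0.5)
    # "Non-randomised": frail patients are more likely to be treated,
    # so confounding by indication distorts the estimate.
    rd_obs = risk_difference(
        lambda frail: random.random() < (0.8 if frail else 0.2))
    print("randomised estimate:     %+.3f" % rd_rct)
    print("non-randomised estimate: %+.3f" % rd_obs)

Because both estimates come from the same patients and the same 
intervention, any gap between them can only reflect the assignment 
mechanism, which is the sense in which such comparisons are unconfounded.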

In summary, there is plenty of evidence that the "results of RCTs and 
non-randomised studies sometimes differ", but we have a very poor 
understanding of the predictors of when those differences will occur.

Jon Deeks
Senior Medical Statistician
Oxford


At 17:39 08/11/2005 -0500, Eugene Lusty wrote:
>Hi. I'm new to this list. I sent these queries to a list member some time 
>ago, so this may be familiar. However, I was not subscribed to the list at 
>that time, so I'm not sure if this topic was discussed.
>
>
>I'm involved in a debate in which someone is claiming that there is no 
>meaningful objective evidence that the results of RCTs and 'outcomes 
>research' and other observational studies differ significantly when 
>evaluated in the context of a given intervention. This is to say that he
>believes that outcomes research is essentially as accurate and meaningful
>as RCTs, despite its lower position in the 'Levels of Evidence' hierarchy.
>He is basing this opinion on only one study, though it was published in
>the NEJM (comparing RCTs and observational studies):
>
>http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=pubmed&dopt=Abstract&list_uids=10861324&query_hl=1
>
>A comparison of observational studies and randomized, controlled trials.
>
>Benson K, Hartz AJ.
>
>Department of Family Medicine, University of Iowa College of Medicine, 
>Iowa City 52242-1097, USA.
>
>BACKGROUND: For many years it has been claimed that observational studies 
>find stronger treatment effects than randomized, controlled trials. We 
>compared the results of observational studies with those of randomized, 
>controlled trials. METHODS: We searched the Abridged Index Medicus and 
>Cochrane data bases to identify observational studies reported between 
>1985 and 1998 that compared two or more treatments or interventions for 
>the same condition. We then searched the Medline and Cochrane data bases 
>to identify all the randomized, controlled trials and observational 
>studies comparing the same treatments for these conditions. For each 
>treatment, the magnitudes of the effects in the various observational 
>studies were combined by the Mantel-Haenszel or weighted 
>analysis-of-variance procedure and then compared with the combined 
>magnitude of the effects in the randomized, controlled trials that 
>evaluated the same treatment. RESULTS: There were 136 reports about 19 
>diverse treatments, such as calcium-channel-blocker therapy for coronary 
>artery disease, appendectomy, and interventions for subfertility. In most 
>cases, the estimates of the treatment effects from observational studies 
>and randomized, controlled trials were similar. In only 2 of the 19 
>analyses of treatment effects did the combined magnitude of the effect in 
>observational studies lie outside the 95 percent confidence interval for 
>the combined magnitude in the randomized, controlled trials. CONCLUSIONS: 
>We found little evidence that estimates of treatment effects in 
>observational studies reported after 1984 are either consistently larger 
>than or qualitatively different from those obtained in randomized, 
>controlled trials.
>
>I am well aware of the theoretical reasons for which RCTs are considered 
>more reliable than outcomes research and occupy a higher level in the 
>hierarchy. What I'm looking for is something more objective, i.e., are 
>there any important studies which demonstrate the value of RCTs over any 
>and/or all other study designs in an objective, practical sense rather 
>than as a theoretical construct?
>
>To be more specific, here are a few statements which I think may be
>refutable:
>
>"There is no evidence that I am aware of that demonstrates a meaningful
>difference between RCT and outcomes studies for the same or similar
>condition."
>
>"In the modern era (within the last 15 years, from what I've read, there is
>NO difference in conclusions drawn between outcomes studies and RCT. (when
>studying similar conditions)."
>
>"From what I've read...the only justification for the claim that RCT are in
>a meaningful way superior than outcomes studies extrapolated from studies
>done in the 1940's early 50s."
>
>"from what I've read there is no convincing evidence that RCTs produce
>superior (in real terms) evidence than case series>"
>
>Are the above statements correct?
>
>Thanks,
>
>  Russ
>