Agree with all that.
One (current) example:
If we were looking at pharmacotherapies for osteoarthritis, we might restrict the evidence on effectiveness to RCTs. We'd likely lose data on long-term effectiveness, because the RCTs would be mostly or all short term, but we'd gain more by sticking to RCTs and reducing bias. On the other hand, if we were looking at safety, the RCTs would be less useful (because they are short term and detecting safety signals was probably not their primary intention), and we'd probably, on balance, get more useful data from longer-term observational studies. Ideally, of course, safety signals from the RCTs and the observational studies would be consistent - but the point is we'd be missing a key piece of the jigsaw in the systematic review if we stuck rigidly to RCTs.
In other words, as so often, it depends on the question you are trying to answer.
Best wishes
Neal
Professor Neal Maskrey
Consultant Clinical Adviser, Medicines and Prescribing Centre
National Institute for Health and Care Excellence
Ground Floor Building 2000 Vortex Court | Enterprise Way | Wavertree Technology Park | Liverpool L13 1FB | United Kingdom
Tel: +44 (151) 353 7729 | Fax: +44 (151) 220 4334
Honorary Professor of Evidence-informed Decision Making, Keele University, Staffordshire. ST5 5BG.
Web: http://nice.org.uk
-----Original Message-----
From: Evidence based health (EBH) [mailto:[log in to unmask]] On Behalf Of Steve Simon, P.Mean Consulting
Sent: 30 August 2013 19:45
To: [log in to unmask]
Subject: Re: Exclude observational studies in guidelines
On 8/29/2013 3:37 PM, Allan Stubbe Christensen wrote:
> When making national guidelines, is it okay to exclude all
> observational studies and only include RCTs, meta-analyses and
> systematic reviews? In many cases RCTs are clearly better than
> observational studies. But in many areas no good-quality RCTs
> have been performed, though there might be some evidence from
> large cohort studies. Should these just be discarded?
>
> Well, I strongly believe that this is a wrong approach, but what is
> your take on this? And which arguments could best be used against this
> approach?
>
> GRADE is being used for this guideline, but isn't that a misuse, not
> including all the evidence?
There's a motivational statement at my son's school and normally I hate those things, but this one was rather clever. It said "Hard work will beat talent when talent doesn't work hard." I would argue that an observational study will beat a randomized study when the randomized study was done poorly. Several others have already voiced this sentiment.
That being said, setting a quality threshold in a systematic review and looking only at randomized studies is often a very reasonable approach.
You could be even stricter and insist on looking only at randomized and blinded studies. You could be even stricter and insist on looking only at randomized and blinded studies with concealed allocation.
The problem is not drawing a line in the sand; it's insisting on the same line in the sand no matter what the context. But you always need to draw the line somewhere. So you might, for example, include some observational studies but exclude those that used historical controls.
In a perfect world, you would include all the available studies and then perform a sensitivity check by excluding those studies that were of lower quality. But if there is a wealth of data from randomized studies, I can't fault someone for excluding non-randomized studies. There are only so many hours in a day.
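As a concrete aside, here is a minimal sketch of that sensitivity check in plain Python. The studies, effect sizes, and quality flags below are all invented for illustration; a real analysis would use proper meta-analysis software, but the logic is just inverse-variance pooling with and without the lower-quality studies.

import math

# Hypothetical studies: (log odds ratio, standard error, high-quality flag).
studies = [
    (-0.40, 0.10, True),   # large, well-conducted RCT
    (-0.35, 0.15, True),   # smaller RCT
    (-0.80, 0.20, False),  # cohort study with possible confounding
    (-0.70, 0.25, False),  # study with historical controls
]

def pool(subset):
    """Fixed-effect, inverse-variance pooled estimate with a 95% CI."""
    weights = [1.0 / se ** 2 for _, se, _ in subset]
    total = sum(weights)
    est = sum(w * y for w, (y, _, _) in zip(weights, subset)) / total
    half_width = 1.96 * math.sqrt(1.0 / total)
    return est, est - half_width, est + half_width

for label, subset in [("All studies", studies),
                      ("High quality only", [s for s in studies if s[2]])]:
    est, low, high = pool(subset)
    print(f"{label:18s} log OR = {est:.2f} (95% CI {low:.2f} to {high:.2f})")

If the pooled estimate moves materially when the lower-quality studies are dropped, your conclusion is sensitive to study quality, and the stricter "randomized only" line in the sand starts to look more defensible.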
There's also a risk in stratifying studies by quality. I don't have the citation in front of me, but there was a systematic overview of mammography as a screening tool. When you looked at the best seven studies, screening appeared helpful. But when you excluded the five studies that failed to meet a quality threshold, the remaining two studies suggested that screening was worthless. What do you do in such a circumstance? The take-home message, as I see it, is that our belief in the objectivity of methods such as meta-analysis fails to recognize that many of the subjective decisions made during the protocol write-up can strongly influence the outcome.
Here are several peer-reviewed publications that support your perspective that you can't afford to ignore observational studies.
Journal article: Oded Yitschaky, Michael Yitschaky, Yehuda Zadik. Case report on trial: Do you, Doctor, swear to tell the truth, the whole truth and nothing but the truth? Journal of Medical Case Reports. 2011;5(1):179. Abstract: "We are in the era of "evidence based medicine" in which our knowledge is stratified from top to bottom in a hierarchy of evidence. Many in the medical and dental communities highly value randomized clinical trials as the gold standard of care and undervalue clinical reports. The aim of this editorial is to emphasize the benefits of case reports in dental and oral medicine, and encourage those of us who write and read them." [Accessed May 17, 2011]. Available at: http://www.jmedicalcasereports.com/content/5/1/179
Journal article: Bonnie Kaplan, Gerald Giesbrecht, Scott Shannon, Kevin McLeod. Evaluating treatments in health care: The instability of a one-legged stool. BMC Medical Research Methodology. 2011;11(1):65. Abstract: "BACKGROUND: Both scientists and the public routinely refer to randomized controlled trials (RCTs) as being the "gold standard" of scientific evidence. Although there is no question that placebo-controlled RCTs play a significant role in the evaluation of new pharmaceutical treatments, especially when it is important to rule out placebo effects, they have many inherent limitations which constrain their ability to inform medical decision making. The purpose of this paper is to raise questions about over-reliance on RCTs and to point out an additional perspective for evaluating healthcare evidence, as embodied in the Hill criteria. The arguments presented here are generally relevant to all areas of health care, though mental health applications provide the primary context for this essay. DISCUSSION: This article first traces the history of RCTs, and then evaluates five of their major limitations: they often lack external validity, they have the potential for increasing health risk in the general population, they are no less likely to overestimate treatment effects than many other methods, they make a relatively weak contribution to clinical practice, and they are excessively expensive (leading to several additional vulnerabilities in the quality of evidence produced). Next, the nine Hill criteria are presented and discussed as a richer approach to the evaluation of health care treatments. Reliance on these multi-faceted criteria requires more analytical thinking than simply examining RCT data, but will also enhance confidence in the evaluation of novel treatments. SUMMARY: Excessive reliance on RCTs tends to stifle funding of other types of research, and publication of other forms of evidence. We call upon our research and clinical colleagues to consider additional methods of evaluating data, such as the Hill criteria. Over-reliance on RCTs is similar to resting all of health care evidence on a one-legged stool." [Accessed May 24, 2011]. Available at: http://www.biomedcentral.com/1471-2288/11/65
Website: GA Wells, B Shea, D O'Connell, J Peterson, V Welch, M Losos, P Tugwell. The Newcastle-Ottawa Scale (NOS) for assessing the quality of nonrandomised studies in meta-analyses. Description: If you are conducting a systematic overview of nonrandomized studies, you need an objective method for evaluating the quality of these studies. The Newcastle-Ottawa Scale provides a numeric score that you can use for excluding low-quality studies, giving greater weight to higher-quality studies, or for sensitivity analysis. This website was last verified on August 7, 2007. Available at: www.ohri.ca/programs/clinical_epidemiology/oxford.htm
Journal article: Paul Glasziou, Iain Chalmers, Michael Rawlins, Peter McCulloch. When are randomised trials unnecessary? Picking signal from noise. BMJ. 2007;334(7589):349-351. Abstract: "Although randomised trials are widely accepted as the ideal way of obtaining unbiased estimates of treatment effects, some treatments have dramatic effects that are highly unlikely to reflect inadequately controlled biases. We compiled a list of historical examples of such effects and identified the features of convincing inferences about treatment effects from sources other than randomised trials. A unifying principle is the size of the treatment effect (signal) relative to the expected prognosis (noise) of the condition. A treatment effect is inferred most confidently when the signal to noise ratio is large and its timing is rapid compared with the natural course of the condition. For the examples we considered in detail the rate ratio often exceeds 10 and thus is highly unlikely to reflect bias or factors other than a treatment effect. This model may help to reduce controversy about evidence for treatments whose effects are so dramatic that randomised trials are unnecessary." [Accessed April 4, 2011]. Available at: http://www.bmj.com/content/334/7589/349.abstract (A toy calculation illustrating this rate-ratio idea appears after the reference list.)
Journal article: Nick Black. Why we need observational studies to evaluate the effectiveness of health care. BMJ. 1996;312(7040):1215-1218. Excerpt: "The view is widely held that experimental methods (randomised controlled trials) are the "gold standard" for evaluation and that observational methods (cohort and case control studies) have little or no value. This ignores the limitations of randomised trials, which may prove unnecessary, inappropriate, impossible, or inadequate. Many of the problems of conducting randomised trials could often, in theory, be overcome, but the practical implications for researchers and funding bodies mean that this is often not possible. The false conflict between those who advocate randomised trials in all situations and those who believe observational data provide sufficient evidence needs to be replaced with mutual recognition of the complementary roles of the two approaches. Researchers should be united in their quest for scientific rigour in evaluation, regardless of the method used." [Accessed November 9, 2010]. Available at: http://www.bmj.com/content/312/7040/1215.short
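As a footnote to the Glasziou et al. abstract above, here is a toy calculation of the rate-ratio ("signal versus noise") idea. The numbers are invented purely for illustration:

# Suppose a condition kills 30% of untreated patients within a year,
# and almost everyone given a new treatment survives (invented numbers).
events_control, at_risk_control = 30, 100   # 30% die without treatment
events_treated, at_risk_treated = 2, 100    # 2% die with treatment

rate_ratio = (events_control / at_risk_control) / (events_treated / at_risk_treated)
print(f"Rate ratio: {rate_ratio:.0f}")      # prints 15

A ratio that far above 10 is very hard to explain by confounding or other bias alone, which is the paper's argument for accepting some dramatic treatment effects without a randomised trial.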
Steve Simon, [log in to unmask], Standard Disclaimer.
Sign up for the Monthly Mean, the newsletter that dares to call itself average at www.pmean.com/news