The following is my first reaction to Dr. Djulbegovic's email and the two
articles and editorials in the June 22 NEJM.
Benjamin Djulbegovic writes:
>A bit surprised with the silence of this discussion group, I want to draw
>your attention to June 22 issue of New England Journal of Medicine where two
>papers challenged a central tenet of EBM, which is that the validity of
>evidence is a function of the design of the study from which the finding is
>collected. Hence, supremacy of randomized trials as "gold standard" of truth
>was questioned.
I'm not sure that the authors stated this so strongly. Concato et al write
"The popular belief that only randomized, controlled trials produce
trustworthy results and that all observational studies are misleading does a
disservice to patient care, clinical investigation, and the education of
health care professionals." Benson and Hartz state "Our results suggest that
observational studies usually do provide valid information."
>In my opinion, the question of hierarchy of medical evidence represents a
>core issue of EBM movement and if indeed assumption that "not all evidence
>is created equal" is not accepted then there is not much left of the EBM
>paradigm.
Surely you don't use only the observational/randomized distinction in
assessing the evidence. There are other issues: blinding and intention to
treat (ITT), just to name two. And there is good empirical evidence that
both blinding and ITT improve the validity of the findings.
>This is an issue that my colleagues and I have given a lot of
>thoughts during last several years and tried to answer in several papers and
>commentaries in various ways. However, the more I think about it, the more
>it appears that the question of superiority of RCTs vs. observational
>studies is PERHAPS empirically an unanswerable question. For the simple
>reason of the lack of "standard of truth". Current, hierarchy of evidence is
>NORMATIVELY derived, and reflects belief in supremacy of experimental method
>that has served us so well over such a long period of time.
There is some empirical basis for the hierarchy of evidence; some of it is
even cited in the bibliographies of the two papers. Things have changed
recently, though (at least with respect to one dimension of this hierarchy),
if these papers are to be believed.
Perhaps the change is that observational studies are designed better today
than they were in the past. Perhaps the methods of adjusting for confounding
have solved some of the problems with observational studies.
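As a toy illustration (my own, not from either paper), here is how adjusting for a single measured confounder can matter. In this hypothetical simulation, sicker patients are more likely to be treated, so the crude observational comparison makes the treatment look useless; stratifying on the confounder recovers something close to the true effect:

```python
# Toy simulation (hypothetical data, not from either NEJM paper).
# "sick" is a confounder: it raises the chance of treatment and
# lowers the chance of recovery. The true treatment effect is +0.10.
import random

random.seed(1)

rows = []
for _ in range(100_000):
    sick = random.random() < 0.5                    # confounder
    trt = random.random() < (0.8 if sick else 0.2)  # confounded assignment
    p = 0.3 + (0.1 if trt else 0.0) - (0.2 if sick else 0.0)
    rows.append((sick, trt, random.random() < p))   # (sick, treated, recovered)

def rate(subset):
    return sum(rec for _, _, rec in subset) / len(subset)

# Crude (unadjusted) comparison of treated vs. untreated
crude = (rate([r for r in rows if r[1]]) -
         rate([r for r in rows if not r[1]]))

# Adjusted: compare within strata of the confounder, then pool
adjusted = 0.0
for s in (True, False):
    stratum = [r for r in rows if r[0] == s]
    diff = (rate([r for r in stratum if r[1]]) -
            rate([r for r in stratum if not r[1]]))
    adjusted += diff * len(stratum) / len(rows)

print(f"crude effect:    {crude:+.3f}")     # near zero, or even negative
print(f"adjusted effect: {adjusted:+.3f}")  # close to the true +0.10
```

Stratification is only the simplest such method; modern observational studies use regression adjustment, propensity scores, and the like, but the principle is the same.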
Another possibility is that the peer-review process screens out the bad
observational studies. The ones that remain are of high enough quality that
they can compete with the randomized trials. Concato et al argue that this
is not the case, based on some analyses not presented in their paper.
Perhaps (as both papers suggest), the previous studies of observational
versus randomized studies were flawed. They included studies at the low end
of observational research, such as those using historical controls, and then
used this low end to indict all observational research.
Observational studies are known to be superior to randomized trials in some
respects. As Concato et al mention, observational studies are more likely to
include a broad representation of the population at risk. They also note
that randomized trials may use a treatment regimen that is not
representative of clinical practice.
Concato et al seem to be arguing that there is more heterogeneity in the
randomized trials than in the observational studies. This is a very
surprising assertion.
>However, as I just stated, this hypothesis
>that experimental method is superior to observational one appears not to be
>empirically testable [much as normative models of decision making (dealing
>with the question how should we make decisions) cannot be shown to be
>superior to descriptive models of decision making (dealing with the question
>how we actually make decisions)].
The authors of the two papers would probably claim that their approaches
represent empirical testing (and rejection) of the hypothesis about the
superiority of randomized trials.
>As postmodernists would say real life
>defies the precision of normative, mathematically ordained world; the truth
>is elusive and subjective.
Are you so upset about the findings of these papers that you are willing to
accept a postmodernist approach to scientific evidence in its place? Neither
paper seems to make an argument in this direction. In fact, they argue the
opposite: that truth is obtainable. Even at the most extreme, their argument
would only be that we can obtain truth equally well from observational
studies as from randomized studies.
> In deciding what works and what doesn't should we
>then add a "value factor" to each form of scientific evidence? Or, hierarchy
>of findings closest to the truth should be looked across all forms of
>evidence relevant to particular question at hand, as suggested by E.
>Wilson's consilience model. According to this view the 'consilience test'
>takes place when findings obtained from one class of facts coincides with
>findings obtained from another different class of observations.
I am not familiar with the consilience model. Clearly, when all of the
randomized studies and all of the observational studies say the same thing,
we should be happy. What these two papers tell us is that randomized and
observational studies usually do tell us the same thing.
When they do disagree, I would trust the results of the randomized study,
ALL OTHER THINGS BEING EQUAL. How often, though, are all other things equal?
Certainly observational studies have some strengths compared to randomized
studies, as mentioned above.
What I have been saying at journal clubs is that a single well-designed
randomized study will trump any number of observational studies. Perhaps
that statement was a bit harsh.
>Could it be that the current hierarchy of evidence is just not sufficiently
>good enough or that ranking of evidence is not a feasible exercise to begin
>with?
Sometimes the ranking of evidence strikes me as a bit simplistic.
Furthermore, if people in EBM are indeed making comments like "observational
studies are not reliable and should not be funded" and "observational
studies should not be used for defining evidence-based medical care" (as
Benson and Hartz imply), then we need to change things.
Certainly these two articles would encourage us to be more open-minded about
observational studies. The canyon that divides observational studies from
randomized studies isn't so big and so wide after all. It may only be a
narrow crack.
>These are crucial issues to whole idea of EBM, and it would be interesting
>to hear opinions from the members of the group about relative value of
>experimental method vs. observational one and whether creation of hierarchy
>of medical evidence is a feasible idea.
What this article tells me is that assessing evidence cannot be reduced to a
simple checklist. You can't rate journal articles like Siskel and Ebert
rated movies (thumbs up or thumbs down). I apologize if this cultural
reference is unknown to some of my overseas friends.
Another way of saying this is that you can't say that one piece of research
is better than another on the basis of a single factor like whether
randomization was used. You have to look at the entire picture and you have
to accept the fact that randomized studies are prone to different weaknesses
than observational studies.
Still another way of saying this is that the quality of a study cannot be
rated on a single dimension. One article may have higher quality on one
dimension and another might have higher quality on a different dimension.
It also tells me that an arbitrary exclusion of all observational studies
from a meta-analysis may be a mistake. I'll have to think a lot about that
one.
It is an interesting pair of articles and I thank Dr. Djulbegovic for
bringing them to my attention.
Steve Simon, [log in to unmask], Standard Disclaimer.
STATS - Steve's Attempt to Teach Statistics: http://www.cmh.edu/stats