Piersante Sestini wrote:
> Lately, I have been involved in a couple of debates with critics of
> EBM. http://chestjournal.chestpubs.org/content/135/1/245.1.full
> http://www.ncbi.nlm.nih.gov/pubmed/20367853
>
> The charge of my opponents was, basically, that EBM consists largely
> in the application of rigid rules (particularly about critical
> appraisal and hierarchies of evidence). My understanding, however, is
> that those are not rules but heuristics (rules of thumb) based on
> reasonable, rational assumptions, while EBM consists in the systematic
> and judicious integration of expertise and data from clinical
> research to solve a patient's problem (which of course includes
> preferences and values), not in following rules.
This is hard to respond to, but it is an important question. I view
checklists and hierarchies as a necessary evil that is sometimes applied
too rigidly.
As an example, a recent article in the Skeptical Inquirer criticized
case-control studies (Park 2010) and said they were analogous to
election polls which sometimes agree with the results of the election.
Case-control studies are indeed a weak form of evidence, but when they
produce an effect of strong magnitude and are associated with a
plausible mechanism, they can provide convincing evidence. Case-control
studies, for example, correctly identified the link between aspirin use
and Reye's syndrome, a critical step in the prevention of this disease
(Monto 1999).
This is an important point which we sometimes neglect to teach. No one
study should be examined in isolation. It needs to be thought of in the
whole context of knowledge of the problem. So replication is important,
biological mechanisms are important, the presence of a dose response
relationship is important, and so forth. When these things are present,
a case-control study can and should move higher on the hierarchy. When
they are absent, a randomized trial should drop lower on the hierarchy.
Do practitioners of EBM look at the whole picture or do they rigidly
stick to a hierarchy? That's something that could be studied, but it
would be difficult to identify when someone was too rigid in applying a
hierarchy versus appropriately discounting weak evidence. The best
example of this was the fuss over eight randomized trials of
mammography. When the best two trials were pooled, mammography did not
look so good. When the (slightly?) flawed remaining studies were
included, mammography looked much better. So was using only the two best
trials being too rigid, or was it appropriate? I don't think there is a
truly objective answer to that question. A nice summary of the
controversy appears in Jackson 2002.
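For readers who want to see mechanically how a pooled estimate can swing
when additional, possibly flawed trials are added, here is a small
sketch of fixed-effect inverse-variance pooling. The numbers are
entirely made up for illustration; they are not the actual mammography
trial results.

```python
# Illustration only (hypothetical data, not the mammography trials):
# a fixed-effect inverse-variance meta-analysis, first on two near-null
# "best" trials, then with six more favorable (possibly flawed) trials.
import math

def pool(effects, variances):
    """Fixed-effect inverse-variance pooled estimate and its variance."""
    weights = [1.0 / v for v in variances]
    est = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    var = 1.0 / sum(weights)
    return est, var

# Made-up log relative risks (negative = screening looks beneficial).
best_two = ([0.02, -0.01], [0.01, 0.01])
all_eight = ([0.02, -0.01, -0.25, -0.30, -0.20, -0.22, -0.28, -0.18],
             [0.01, 0.01, 0.04, 0.05, 0.03, 0.04, 0.05, 0.03])

for label, (e, v) in [("best two trials", best_two),
                      ("all eight trials", all_eight)]:
    est, var = pool(e, v)
    lo, hi = est - 1.96 * math.sqrt(var), est + 1.96 * math.sqrt(var)
    print(f"{label}: RR = {math.exp(est):.2f} "
          f"(95% CI {math.exp(lo):.2f} to {math.exp(hi):.2f})")
```

The point of the sketch is that neither pooled number is "the" answer;
the answer depends on the defensible but debatable choice of which
trials to admit.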
An important point in favor of EBM is its transparency. If
you are trying to dispute an expert opinion, your only serious option is
to attack the expert. In EBM, the cards are all laid out on the table.
If you don't like the way that studies were selected in a systematic
overview, you can suggest an alternate approach. Compare that with how
you might try to criticize the bibliography in a subjective expert
review. I can't see how you could do this without crawling inside the
mind of the expert to see if the exclusion of some studies was a
deliberate attempt to skew the results or if there was a rational basis
for these exclusions. Now lots of people do try to do this and attribute
base motives to experts that they dislike. I much prefer the objectivity
of debate that EBM makes possible.
Another point in favor of EBM is that EBM is self-critical. If
observational studies are not getting enough respect, then a
systematic overview of observational versus randomized studies (Concato
2000) should answer the question.
This is the irony of the situation. Many critics of EBM use the tools of
EBM to attack it. But actually, this is EBM's greatest strength:
critical research about EBM allows EBM to improve itself. I'm something
of an outsider (I'm a statistician and not a doctor), but in my
experience with EBM (I first became aware of EBM in the late 1990s), it
appears that it was practiced much too rigidly in its early history. It
is still too rigid at times today, but better than earlier.
John Concato, Nirav Shah, Ralph I. Horwitz. Randomized, Controlled
Trials, Observational Studies, and the Hierarchy of Research Designs. N
Engl J Med. 2000;342(25):1887-1892. Available at:
http://content.nejm.org/cgi/content/abstract/342/25/1887
Valerie P. Jackson. Screening Mammography: Controversies and Headlines.
Radiology. 2002;225(2):323-326. [Accessed August 19, 2010]. Available
at: http://radiology.rsna.org/content/225/2/323.short
A. S. Monto. The disappearance of Reye's syndrome--a public health
triumph. N Engl J Med. 1999;340(18):1423-1424. [Accessed August 19,
2010]. Available at: http://www.ncbi.nlm.nih.gov/pubmed/10228195
B. Park. Cell Phones: The High Cost of Scientific Ignorance. Skeptical
Inquirer. 2010 (September/October);34(5):6. Also available on Bob
Park's blog: http://bobpark.physics.umd.edu/WN10/wn062510.html
--
Steve Simon, Standard Disclaimer
Sign up for The Monthly Mean, the newsletter that
dares to call itself "average" at www.pmean.com/news
"Data entry and data management issues with examples
in IBM SPSS," Tuesday, August 24, 11am-noon CDT.
Free webinar. Details at www.pmean.com/webinars