Nicely put Brian.
I would add that it may occasionally be appropriate to pool studies which
reach opposing conclusions on a therapeutic intervention (some showing a
favorable effect and some showing harm), so long as the heterogeneity in
effects can be plausibly explained.
For example, a pooled analysis of steroid use in septic shock finds
no benefit and even suggests harm, because some of the pooled studies are
positive while many others are negative. Further stratification by the
dosage used (a reasonable a priori hypothesis) reveals that low-dose
steroids confer a significant mortality benefit while higher doses
increase mortality.
So putting two and two together can give you more than the sum of the parts;
it just depends on how you add things up.
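The dose-stratification point can be made concrete with a toy fixed-effect pooling exercise. To be clear, every number below is invented for illustration; none of it comes from the actual steroid-in-septic-shock trials. The sketch just shows how a pooled estimate near zero can hide two real, opposite subgroup effects:

```python
import math

# Hypothetical trial summaries: (log relative risk, standard error).
# Negative log RR = mortality benefit; positive = harm.
# These numbers are illustrative only, not from the steroid literature.
low_dose = [(-0.30, 0.15), (-0.25, 0.20)]   # trials suggesting benefit
high_dose = [(0.35, 0.18), (0.20, 0.22)]    # trials suggesting harm

def pool(studies):
    """Fixed-effect inverse-variance pooled log RR and its standard error."""
    weights = [1.0 / se ** 2 for _, se in studies]
    total = sum(weights)
    est = sum(w * y for w, (y, _) in zip(weights, studies)) / total
    return est, math.sqrt(1.0 / total)

all_est, all_se = pool(low_dose + high_dose)  # pooled over everything
lo_est, lo_se = pool(low_dose)                # stratified by dose
hi_est, hi_se = pool(high_dose)

print(f"all trials: {all_est:+.2f} (SE {all_se:.2f})")  # near zero: "no benefit"
print(f"low dose:   {lo_est:+.2f} (SE {lo_se:.2f})")    # negative: benefit
print(f"high dose:  {hi_est:+.2f} (SE {hi_se:.2f})")    # positive: harm
```

With these made-up inputs the overall pooled estimate sits near zero while each dose stratum shows a clear effect in its own direction, which is exactly the pattern an a priori subgroup hypothesis can rescue.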
Eddy
Dr. Eddy Lang MDCM CCFP(EM)CSPQ
Assistant Professor, Attending Staff, Emergency Department
SMBD Jewish General Hospital
McGill University, Montreal Canada
----- Original Message -----
From: "Brian Alper MD" <[log in to unmask]>
To: <[log in to unmask]>
Sent: Saturday, October 08, 2005 7:57 AM
Subject: Re: Meta-analysis - Putting two and two together to get five.
Meta-analysis can be useful or can be misleading. It is critical to check
the methods of the meta-analysis just like it is critical to check the
methods of a randomized trial.
One of the quality criteria for a meta-analysis is making sure it is proper
to combine similar studies. This needs to be evaluated statistically
(homogeneity) but also needs to be evaluated with common sense.
A study showing that beta blockers reduce mortality after a heart attack and
a study showing that antibiotics reduce mortality after pneumonia could
potentially pass a statistical test of homogeneity. But it would still be
inappropriate to combine them in a meta-analysis and conclude that
"medications reduce mortality". The medications tested and the patient
populations are clearly dissimilar and inappropriate to combine.
Two seemingly similar studies with "opposite" results may or may not be
appropriate for combination. If one study is "positive" (i.e. statistically
significant and favoring treatment A over treatment B) and one study is
"negative" (i.e. treatment A fared better than treatment B but did not reach
statistical significance), this may be appropriate for meta-analysis. This
is an example of similar findings and using meta-analysis to aggregate
statistical power.
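The power-aggregation case can be sketched numerically. The two study results below are hypothetical (invented log odds ratios and standard errors): one reaches significance on its own, the other points the same way but does not, and the inverse-variance pooled estimate is more precise than either:

```python
import math

# Hypothetical effect estimates: (log odds ratio, standard error).
# Study A is significant on its own (z = -2.5); study B points the
# same way but is not significant (z = -1.2). Numbers are invented.
study_a = (-0.50, 0.20)
study_b = (-0.30, 0.25)

def fixed_effect(*studies):
    """Inverse-variance fixed-effect pooled estimate and standard error."""
    w = [1.0 / se ** 2 for _, se in studies]
    est = sum(wi * y for wi, (y, _) in zip(w, studies)) / sum(w)
    return est, math.sqrt(1.0 / sum(w))

est, se = fixed_effect(study_a, study_b)
# The pooled SE is smaller than either individual SE, so the pooled
# z statistic is more extreme than study A's alone.
print(f"pooled estimate {est:+.2f}, SE {se:.2f}, z = {est / se:.2f}")
```

This is the legitimate use of meta-analysis: two consistent estimates of the same effect, combined to narrow the confidence interval.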
But if one study strongly favors treatment A over treatment B and another
study strongly favors treatment B over treatment A, and there are only two
studies, the difference between them is large enough that meta-analysis is
inappropriate.
One must remember to check the quality of meta-analyses, including the
appropriateness of studies combined. Cochrane reviews are generally
excellent for their methods, but there is a Cochrane review (CD001094) that
concludes "For children with persistent nasal discharge or older children
with radiographically confirmed sinusitis, the available evidence suggests
that antibiotics given for 10 days will reduce the probability of
persistence in the short to medium-term."
This conclusion is based on combining 4 trials of older children with
radiographically confirmed sinusitis which found benefit for antibiotics AND
2 trials of children with persistent nasal discharge for more than 10 days.
Of these latter two trials, one trial with 188 patients found no benefit and
the other trial had only 13 patients.
Combination of these two groups is inappropriate. The explanation given,
that most children with sinusitis have persistent nasal discharge, does not
justify it, because the converse (that most children with persistent nasal
discharge have sinusitis) has not been established.
This Cochrane review can inappropriately guide clinicians to treat every
child with persistent nasal discharge as if they had radiographically
confirmed sinusitis.
Again, most Cochrane reviews are excellent, but my point is that you can't
allow "meta-analysis" to replace critical thinking.
-----Original Message-----
From: Evidence based health (EBH)
To: [log in to unmask]
Sent: 10/8/05 7:38 AM
Subject: Meta-analysis - Putting two and two together to get five.
Dear List
Can I run this past the list so you can pick holes in this argument:
Meta-analysis - Putting two and two together to get five.
A central feature of Evidence-Based Medicine is the statistical method used
to aggregate studies. We argue that the basis of meta-analysis is
statistically flawed.
There is a hierarchy of evidence: case reports are subject to chance (a
sample size of 1) and can be disregarded. Cohort studies and case-control
studies are often self-selected groups and subject to confounding and bias.
The double-blind, randomised controlled trial (RCT) eliminates these
problems of bias and confounding. If you have a large sample and the result
reaches significance (p value < 0.05), then results this extreme would arise
by chance less than 5% of the time if there were truly no effect.
Yet on systematic review we find two RCTs coming to diametrically opposite
conclusions. Why is that, if bias, confounding and chance have been
excluded? One or both of the studies may be wrong. Now if we assume only
one of the two studies is wrong, taking the mean of the correct value and
the wrong value will not yield a value that is "more correct". This is the
error of meta-analysis.
There can be another argument. It is possible that both studies are correct
and that each reflects the truth in its own population. In that case each
study represents the population from which it is drawn, and we should
weight by the populations they represent, not the sample sizes. Large
samples from small populations will otherwise get undue weight.
Meta-analysis can therefore be misleading and unreliable.
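It is worth noting that standard meta-analytic practice does include a check for exactly this situation: Cochran's Q statistic tests whether the studies disagree more than chance would allow, and two genuinely opposite RCTs should fail it. A toy sketch, with entirely invented numbers:

```python
import math

# Two hypothetical RCTs with opposite, individually clear results:
# (log relative risk, standard error). Numbers are illustrative only.
studies = [(-0.60, 0.20), (0.55, 0.22)]

w = [1.0 / se ** 2 for _, se in studies]
pooled = sum(wi * y for wi, (y, _) in zip(w, studies)) / sum(w)

# Cochran's Q: weighted squared deviations of each study from the
# pooled mean. Under homogeneity, Q ~ chi-square with k - 1 = 1 df;
# the 5% critical value of chi-square(1) is 3.84.
q = sum(wi * (y - pooled) ** 2 for wi, (y, _) in zip(w, studies))

verdict = "heterogeneous" if q > 3.84 else "homogeneous"
print(f"pooled = {pooled:+.2f}, Q = {q:.1f} ({verdict})")
```

Here the pooled mean lands near zero even though neither study is anywhere near zero, and Q flags the pair as heterogeneous, signalling that the simple average is not a meaningful "more correct" value.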
Jacob M Puliyel, MD MRCP M Phil
V Sreenivas PhD
--
___________________________
Jacob M. Puliyel MD MRCP MPhil
Sara Varughese FRCS
eFax UK 07092-124285