Jacob M Puliyel writes:
>The big thing about Evidence Based Medicine is the
>statistical method to aggregate studies. We argue that the
>basis of meta-analysis is statistically flawed.
You wouldn't be the first to make such an argument. Here are my quick comments on a lazy Sunday afternoon.
>There is a hierarchy of evidence: Case reports are subject
>to chance (a sample size of 1) and can be disregarded.
Not so fast. I can think of lots of situations where a case report should not be disregarded. I remember a case report of salmonella transmission from a pet turtle to a family which was attributed to letting the turtle swim around in the family bathtub. If you had a pet turtle, you would be quite foolish if you disregarded this case report. Don't let your kids put the turtle in your bathtub or be sure that you thoroughly disinfect afterwards! You don't need to wait for the RCT to appear before you take action.
Case reports are a poor way of comparing two competing treatments, but they still have significant value in other areas.
>Cohort studies and case-control studies are often
>self-selected groups and subject to confounding and bias.
Uh-oh, I don't like where this is heading. We have learned a lot from cohort and case-control studies, including the very valuable finding linking cigarette smoking and lung cancer. There was not a single randomized trial in the evidence that led to the 1964 Surgeon General's report on smoking, but the evidence of a link was overwhelming.
It sounds like whoever wrote this has a predisposition to distrust all research.
>The double-blind, Randomised Controlled Trial (RCT)
>eliminates these problems of bias and confounding.
Also incorrect. The RCT is an excellent research tool, but it does have problems as well. No research model is perfect.
>If you have a large sample and show a good level of
>significance (p value <0.05) there is less than 5%
>probability of the finding being due to chance.
Um, as far as I know the p-value works just fine for small samples as well; a small sample reduces your power to detect an effect, not the validity of the p-value itself.
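To make that concrete, here is a minimal sketch (in Python, my choice of language, not anything from the original exchange) of an exact two-sided binomial test. With only ten subjects the p-value is still an exact probability, not an approximation:

```python
from math import comb

def binom_two_sided_p(k, n, p=0.5):
    """Exact two-sided binomial test p-value: sum the probabilities of
    every outcome at least as unlikely as the observed count k."""
    probs = [comb(n, i) * p**i * (1 - p)**(n - i) for i in range(n + 1)]
    observed = probs[k]
    return sum(pr for pr in probs if pr <= observed + 1e-12)

# 9 "successes" out of 10 under a 50/50 null: p = 22/1024, about 0.0215,
# computed exactly even at this tiny sample size.
p_small = binom_two_sided_p(9, 10)
```

The catch with small samples is power, not validity: with n = 10 only the most extreme outcomes can ever reach p < 0.05.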
So the first four claims of the writer are arguably false. Let's hope that they do better when the topic shifts to meta-analysis.
>Yet on systematic review we have 2 RTC coming to
>diametrically opposite conclusions. Why is that - if bias
>and confounding and chance have been excluded? One or both
>the studies may be wrong. Now if we assume only one of the
>two studies is wrong - taking the mean value between the
>correct value and the wrong value will not yield a value
>that is 'more correct'. This is the error of
>meta-analysis.
That's the error of a meta-analysis that ignores heterogeneity. Is heterogeneity present in all meta-analyses, and does it result in flawed conclusions in all meta-analyses? Arguing that it is possible to make an error by using meta-analysis naively does not imply that all meta-analyses are flawed, especially considering that the folks performing meta-analyses have long been aware of the problems caused by heterogeneity and have proposed several solutions.
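One of those standard solutions is the random-effects model. Here is a minimal Python sketch of the DerSimonian-Laird method (the language and the made-up numbers are mine, not the original author's): rather than taking a naive mean of a "correct" and a "wrong" study, it estimates the between-study variance and widens the uncertainty accordingly.

```python
from math import sqrt

def dersimonian_laird(effects, variances):
    """Random-effects pooled estimate via the DerSimonian-Laird method.
    Estimates between-study variance tau^2 from Cochran's Q, then
    re-weights each study by 1 / (within-study variance + tau^2)."""
    w = [1 / v for v in variances]                 # fixed-effect weights
    sw = sum(w)
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sw
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))  # Cochran's Q
    df = len(effects) - 1
    c = sw - sum(wi ** 2 for wi in w) / sw
    tau2 = max(0.0, (q - df) / c)                  # between-study variance
    w_star = [1 / (v + tau2) for v in variances]   # random-effects weights
    pooled = sum(wi * e for wi, e in zip(w_star, effects)) / sum(w_star)
    se = sqrt(1 / sum(w_star))
    return pooled, se, tau2

# Two hypothetical "diametrically opposite" trials: the model does not
# pretend they agree; it inflates the standard error to reflect the conflict.
pooled, se, tau2 = dersimonian_laird([0.8, -0.6], [0.04, 0.05])
```

With these numbers the estimated between-study variance is large and the pooled standard error is several times wider than a naive fixed-effect analysis would report, which is exactly the point: the disagreement shows up in the answer instead of being averaged away.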
>There can be another argument. It is possible that both
>studies are correct and they each reflect the truth in
>their different populations. In that case each study
>represents the population from which it is drawn, and we
>have to aggregate the populations they represent - not the
>sample sizes. Large samples from small populations will
>get undue weightage otherwise. Meta-analysis can therefore
>be misleading and unreliable.
The same comments apply as above. Furthermore, if you have two contradictory RCTs, it may be that the patients that you see are dissimilar from either of the two trials. Maybe there is a third reality that applies to your particular patients.
So what can we conclude here? Meta-analysis can be misleading? That's hardly surprising. All research can be misleading if it is conducted poorly. And all research can be misleading if you fail to consider how the patients you see are different from the patients being studied by the researchers themselves.
All of our research tools are flawed. The trick comes from understanding when the flaws are trivial and when they are fatal.
Furthermore, this looks like a "straw man" argument to me. You describe an application of meta-analysis that is so simplistic that it does not represent anything even remotely close to how meta-analysis is practiced today. You cite a "flaw" of meta-analysis that has been well known for at least a decade or two and for which there has been substantial research efforts.
If meta-analysis is indeed a flawed process, cite a particular meta-analysis that has led to an incorrect conclusion and then explain to us why it failed. That would lead to a far more productive debate than some hypothetical speculation.
Finally, to say a research method is flawed is troublesome when you don't offer a serious alternative. There are some folks who argue that a large scale randomized trial is the definitive approach and that meta-analysis is only second best. There are others who argue that a carefully done meta-analysis is superior. Ironically, heterogeneity is one of meta-analysis's strengths, as its proponents have noted. If a research finding is replicated across a variety of patient populations under a variety of research designs, meta-analysis is the best way to show the robustness of the finding. In other words, a uniform finding across heterogeneous studies is far more persuasive than a single finding in a single population.
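That "uniform finding across heterogeneous studies" can even be quantified. A common way is Higgins' I-squared statistic; here is a minimal Python sketch (again my own illustration, with made-up study results, not anything from the original post):

```python
def i_squared(effects, variances):
    """Higgins' I^2: the percentage of total variability across studies
    attributable to heterogeneity rather than chance (0% = consistent)."""
    w = [1 / v for v in variances]
    sw = sum(w)
    pooled = sum(wi * e for wi, e in zip(w, effects)) / sw
    q = sum(wi * (e - pooled) ** 2 for wi, e in zip(w, effects))  # Cochran's Q
    df = len(effects) - 1
    return max(0.0, (q - df) / q) * 100 if q > 0 else 0.0

# Five differently sized hypothetical studies with nearly identical effects:
# variability is no more than chance would predict, so I^2 comes out at 0%.
consistent = i_squared([0.50, 0.48, 0.52, 0.49, 0.51],
                       [0.02, 0.05, 0.01, 0.04, 0.03])
```

A low I-squared across diverse populations and designs is what makes a replicated finding persuasive; a high I-squared is the warning flag that the "two contradictory RCTs" scenario is in play and a naive pooled mean should not be trusted.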
But rather than offer a serious alternative to meta-analysis, the author of these comments seems to be saying that almost all research is flawed. That's a very dangerous thing to be saying, because disregarding good research findings, no matter what research design generated them, can be dangerous to our health.
Rather than asking if meta-analysis is flawed, I would ask "Are we better off or worse off having the tool called meta-analysis in our research arsenal?" And my answer is that we're much better off, as long as we keep these tools out of the hands of amateurs.
I hope this didn't come across as too pejorative. I think the author raises an important question that all of us should think about when we read the latest research studies.
Steve Simon, [log in to unmask]