
I agree with Steve Simon that when there is an apparent clumping, it is best
to try to explain that, and to treat the clumps separately.

I also agree with him that for very homogeneous study results, the fixed and
random effects methods give similar results.  The more heterogeneity there
is, the further apart the results of the two methods will be (the point
estimates will differ somewhat, but mainly, the confidence interval will be
wider for the random effects method).  Therefore, the most conservative
approach is to always use the random effects method.  When there is truly no
heterogeneity, the random effects method will give the same result as the
fixed, and the more heterogeneity is present, the more appropriate the
random effects method will be.
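That behavior (identical results under homogeneity, a wider random effects
interval under heterogeneity) can be sketched numerically.  This is a minimal
illustration using the DerSimonian-Laird estimator of the between-study
variance; it is one common random effects estimator, not the only one, and
the study data below are invented:

```python
import math

def pool(effects, variances):
    """Fixed-effect and DerSimonian-Laird random-effects pooling.

    Returns ((estimate, SE) fixed, (estimate, SE) random)."""
    w = [1.0 / v for v in variances]              # inverse-variance weights
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    se_fixed = math.sqrt(1.0 / sum(w))

    # Cochran's Q and the method-of-moments between-study variance tau^2
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)

    # Random-effects weights add tau^2 to each within-study variance
    w_star = [1.0 / (v + tau2) for v in variances]
    random_ = sum(wi * yi for wi, yi in zip(w_star, effects)) / sum(w_star)
    se_random = math.sqrt(1.0 / sum(w_star))
    return (fixed, se_fixed), (random_, se_random)

# Homogeneous studies: tau^2 = 0, so the two methods agree exactly.
print(pool([0.20, 0.25, 0.22], [0.01, 0.01, 0.01]))

# Heterogeneous studies: wider random-effects confidence interval.
print(pool([0.1, 0.5, 0.9], [0.01, 0.01, 0.01]))
```

With the homogeneous data, Q falls below its degrees of freedom, tau^2 is
truncated to zero, and the two methods coincide; with the heterogeneous data
the random effects standard error is several times the fixed effects one.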

Philosophically, the way to decide which method applies is to ask this
question.  If all of the studies (or centers) expanded their sample sizes
many-fold, would one expect all of the results to converge to the same point
estimate?  If so, then one believes all of the variability between study
results comes from scatter because of small study sizes.  Then weighting
should be solely on the basis of study size (or variance), and the fixed
effects model is appropriate.  If on the other hand one would not expect the
different study results to converge to the same point estimate, because of
real differences in the populations being sampled, then the random effects
model is appropriate.

As you can see, this is a sort of philosophical expectation that one is
guessing at.  The only empirical way to address the question is with
heterogeneity analysis.  Unfortunately, with small numbers of studies (or
centers) heterogeneity analysis is underpowered, so that "non-significant"
heterogeneity may be found in a situation in which there is some real
heterogeneity.  For this reason it may be wise to use the random effects
model even when no "significant" heterogeneity is found.  As mentioned
above, the results will be very similar, but the random effects model will
give a more conservative, wider confidence interval.

However, there is a potential problem with any random effects analysis.  If
the different studies (or centers) have different populations, but the
studies are not a random sample of those different populations, then the
result will be biased to the extent that the sample is biased.  Other
than using many studies (or centers), there is no remedy for this problem.
However, the larger confidence interval of the random effects model again
seems to be more appropriate and conservative than the fixed effects
approach.  Actually, presentation of the simple range of study (or center)
results may be the most appropriate way to deal with this sampling problem.
That way having more samples toward one end of the range does not bias the
result.  But of course, an extreme outlier can shift the range.  Thus,
outliers must be examined carefully for explanations of the heterogeneity.
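The point about the range can be made concrete.  In this invented example,
extra studies piling up inside a clump leave the range unchanged, while a
single outlier shifts it, which is why outliers deserve scrutiny:

```python
def result_range(estimates):
    """Simple range (min, max) of study or center point estimates."""
    return (min(estimates), max(estimates))

clumped = [0.18, 0.20, 0.21, 0.22]
print(result_range(clumped))                 # → (0.18, 0.22)

# More samples toward one end of the range do not move it...
print(result_range(clumped + [0.19, 0.20]))  # → (0.18, 0.22)

# ...but one extreme outlier does.
print(result_range(clumped + [0.60]))        # → (0.18, 0.6)
```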

The bottom line is that, even with the more conservative random effects
method, the results of a meta-analysis (or a multi-center study) will be
biased to the extent that the studies (or centers) are not representative.


David L. Doggett, Ph.D.
Senior Medical Research Analyst
Health Technology Assessment and Information Services
ECRI, a non-profit health services research organization
5200 Butler Pike
Plymouth Meeting, Pennsylvania 19462, U.S.A.
Phone: (610) 825-6000 x5509
FAX: (610) 834-1275
http://www.ecri.org
e-mail: [log in to unmask]



-----Original Message-----
From: Simon, Steve, PhD [mailto:[log in to unmask]]
Sent: Monday, August 20, 2001 7:57 PM
To: [log in to unmask]
Subject: Re: Fixed effects versus random effects models - idiot's guide?


Andrew Booth writes:

>Does anyone have a simple explanation or "rule of thumb" for explaining to
>people when a fixed effects or a random effects method should be used for a
>systematic review.

There is a fair amount of controversy about this. I like to think of a
meta-analysis as a multi-center trial where each center uses a different
protocol. Since a multi-center trial requires random effects, a
meta-analysis should use them as well.

The controversy occurs because many times there will be a sharp disagreement
between studies where some of them will cluster at one point and others will
cluster at a different point. This violates the assumption of normality for
the random effects model.

A new trend is to look for trends that might explain the underlying
heterogeneity (e.g., baseline risk) and incorporate these trends into a
model. This sometimes goes by the name of meta-regression.
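The meta-regression idea can be sketched as a weighted least-squares fit of
effect size on a study-level covariate such as baseline risk, using
inverse-variance (fixed-effect) weights.  This is a minimal single-covariate
sketch with invented data, not a full meta-regression implementation (which
would also model residual heterogeneity):

```python
def meta_regression(effects, variances, covariate):
    """Weighted least-squares intercept and slope of effect size on a
    single study-level covariate, with inverse-variance weights."""
    w = [1.0 / v for v in variances]
    sw = sum(w)
    xbar = sum(wi * xi for wi, xi in zip(w, covariate)) / sw
    ybar = sum(wi * yi for wi, yi in zip(w, effects)) / sw
    slope = (sum(wi * (xi - xbar) * (yi - ybar)
                 for wi, xi, yi in zip(w, covariate, effects))
             / sum(wi * (xi - xbar) ** 2 for wi, xi in zip(w, covariate)))
    intercept = ybar - slope * xbar
    return intercept, slope

# Hypothetical studies where the effect grows with baseline risk.
effects = [0.10, 0.30, 0.52, 0.70]
variances = [0.01, 0.02, 0.01, 0.02]
baseline_risk = [0.05, 0.15, 0.25, 0.35]
print(meta_regression(effects, variances, baseline_risk))
```

A near-zero slope would suggest the covariate does not explain the
heterogeneity; a clear trend suggests it does.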

Some people test for heterogeneity and then choose. I don't like this
approach--if there is little heterogeneity, the random and fixed models will
be close anyway, so why not always choose the random effects model?

I am not an expert in meta-analysis--just an informed consumer. So take my
comments with a grain of salt.

Steve Simon, [log in to unmask], Standard Disclaimer.
STATS: STeve's Attempt to Teach Statistics. http://www.cmh.edu/stats
Watch for a change in servers. On or around June 2001, this page will
move to http://www.childrens-mercy.org/stats