On Mon, 20 Aug 2001 15:40:45 +0100, Andrew Booth <[log in to unmask]>
wrote:
> Does anyone have a simple explanation or "rule of thumb" for explaining to
> people when a fixed effects or a random effects method should be used for
> a systematic review. Although I have had the technicalities explained to
> me at an excellent systematic reviews course I would like a simple way to
> explain it to others when facilitating critical appraisal.

Andrew,

I've been thinking about this too, though I have to admit I usually
duck the issue when facilitating critical appraisal.
Unfortunately I don't think there are any easy rules of thumb, as
this is a highly contested area.

Here's a possible way of getting people to think about the issues
(it would of course need adapting to a political structure that the
participants are familiar with).

I would welcome comments on whether this does clarify the issues,
and how it could be improved.

Disclaimer - I am not an expert in meta-analysis and this is
certainly not a technically perfect explanation!

Sally


Scenario:
Suppose you are the Prime Minister, and you want to find the opinion
of the electorate on a certain topic (e.g. should we adopt the euro).
You ask all the MPs to investigate opinions in their constituency.
When you ask what they have found, some of them give you an answer
based on a careful survey of a large number of people, whereas others
have an informal view from a smaller number of people.

Ask participants: How would you combine their answers?

Possible views:
- Put more weight on the best answers: least biased answers.
- Put more weight on the largest sample sizes: more precise answers.
- Give them equal weighting because you want an answer that
  represents the views of the entire population.
- Other views may well be reasonable (e.g. sample answers in a
  way which reflects the structure of the population in terms of
  factors like urban/rural location), but would probably be difficult
  to implement in a meta-analysis of a clinical intervention.

Then explain:
A fixed-effects analysis simply gives more weight to the larger
(more precise) studies.
A random-effects analysis still takes into account the precision of
the individual studies, but gives relatively more weight to the
smaller studies than a fixed-effects analysis does, in order to
better represent the varying answers from the different studies.
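For anyone who wants to see the weighting in numbers, here is a
minimal sketch (not part of the original explanation, and using
invented effect sizes and variances) of fixed-effect inverse-variance
weights versus random-effects weights from the DerSimonian-Laird
estimate of the between-study variance tau^2:

```python
def dersimonian_laird_tau2(effects, variances):
    """Between-study variance (tau^2) by the DerSimonian-Laird method."""
    w = [1.0 / v for v in variances]
    sw = sum(w)
    pooled = sum(wi * yi for wi, yi in zip(w, effects)) / sw
    q = sum(wi * (yi - pooled) ** 2 for wi, yi in zip(w, effects))
    df = len(effects) - 1
    c = sw - sum(wi ** 2 for wi in w) / sw
    return max(0.0, (q - df) / c)

# Hypothetical studies: a large, a medium and a small one.
effects = [0.10, 0.30, 0.50]
variances = [0.01, 0.04, 0.09]

tau2 = dersimonian_laird_tau2(effects, variances)

# Fixed-effect weights use only the within-study variance;
# random-effects weights add tau^2 to every study's variance.
fixed = [1.0 / v for v in variances]
random_ = [1.0 / (v + tau2) for v in variances]

# Normalise to percentage shares of the total weight: the smallest
# study's share rises under the random-effects model.
fixed_pct = [w / sum(fixed) * 100 for w in fixed]
random_pct = [w / sum(random_) * 100 for w in random_]
print(fixed_pct)
print(random_pct)
```

Running this shows the largest study losing a little of its share,
and the smallest study gaining, when moving from fixed to random
effects.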

If you want to go into more depth, this example allows discussion of
other relevant issues:

In a random-effects analysis, the variation in the answers between
areas is reflected in a wider confidence interval.

If there is substantial variation it is always advisable to
investigate potential sources of this variation rather than to just
give an answer with a wider confidence interval.
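The widening of the confidence interval can also be sketched
numerically (again with invented numbers, and assuming tau^2 has
already been estimated): the between-study variance is added to every
study's variance before pooling, so the pooled standard error grows.

```python
import math

# Hypothetical study effect sizes and within-study variances.
effects = [0.10, 0.30, 0.50]
variances = [0.01, 0.04, 0.09]
tau2 = 0.002  # assume a between-study variance estimated elsewhere

def pooled_ci(variances_used):
    """Inverse-variance pooled estimate with a 95% confidence interval."""
    w = [1.0 / v for v in variances_used]
    est = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    se = math.sqrt(1.0 / sum(w))
    return est - 1.96 * se, est + 1.96 * se

fixed_ci = pooled_ci(variances)
random_ci = pooled_ci([v + tau2 for v in variances])
print(fixed_ci)
print(random_ci)   # wider than the fixed-effects interval
```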

Quality (bias) is always an important issue, and the effect of this
needs to be examined, perhaps by sensitivity analysis.

Random-effects analysis may place more weight on the lower quality
results by putting more weight on smaller studies.


Sally Hollis
Medical Statistics Unit
Lancaster University

Email [log in to unmask]
Tel 01524 593187
Fax 01524 592681
http://www.lancs.ac.uk/users/IHR/shollis.html