Dear All,
FWIW, I think this is a VERY pertinent question. I will base my answers
mainly on my own area of practice (oncology).
SRs are important for reasons of estimating effect-size and bias - as
suggested by others far more experienced than me.
However, there are serious problems. Many SRs are based on
unrepresentative trials. Many "group" treatments together in order to
achieve some form of statistical power. As a result, a SR can draw a
conclusion without answering two important clinical questions:
Does this apply to my patient, if they are older/ sicker/ fitter than
the trial population?
Which of these regimens should I use? (I cannot prescribe
"chemotherapy" or "radiotherapy" in the abstract, and Kev cannot
prescribe "antibiotics").
There is a third problem that comes up with some SRs. The criteria for
inclusion are built around assessing effect size, and are therefore
designed to produce a (relatively) homogeneous group of trials. However,
from a clinical perspective, I am less interested in effect-size than in
the best treatment. This may seem unimportant, until we consider
mixed-modality treatment.
We are currently working on chemo-radiotherapy in lung cancer. In
brief, one can add chemotherapy to radiotherapy, or alter the
radiotherapy dose and fractionation, in order to improve the treatment.
There are two main SRs - a Cochrane one from 2010, and a recently
published one (2012) in JCO. However, each considers a different area,
and thus cannot answer the question as to which is the "best" treatment.
There are reasons to think that some combination of both might be the
best, so it really would be nice to know...
I think that the systematic approach to assessing evidence is important.
However, whether the best thing to do with the evidence is then a MA
seems less clear. We have been working on a novel approach to
summarising and reasoning with the results of clinical trials (which
gives some nice pictures), which I'd be happy to share if people are
interested.
BW,
Matt