Hi,
I appreciate why the methodology of systematic reviews (SRs) is undertaken - to reduce bias, ensure we capture all the relevant papers, etc. But what I'm really asking about is the actual end result of a meta-analysis: the effect size. One could easily say that we do an SR (and M-A) to get a very accurate effect size. But how is that practically useful?
For instance, if you're a clinician you may simply want to know whether an intervention is effective - in which case extreme precision is not as important as a 'yes', 'no', or 'maybe'.
I could well see that if you're weighing up the relative merits of two interventions (effect size, side effects, patient circumstances, etc.) you want to know how effective each intervention is relative to the other. But again, does that have to be massively accurate? I can also see a case for accurate effect sizes when doing cost-effectiveness work.
So, can people please let me know, practically, when such precision is required and when you could probably get away with something less accurate?
Thanks
jon
--
Jon Brassey
TRIP Database
Find evidence fast