Hi Jon,
As a practitioner, I do want to know how good a treatment is, not just that it works, before offering it. This matters even if there is no alternative treatment, since there is always an alternative: no intervention. It is vital to shared decision-making with patients. When I give patients the estimated benefit of a treatment such as warfarin or bisphosphonates, many turn it down, so it matters to patients too.

Kev

On 27 Jan 2013, at 08:06, Jon Brassey <[log in to unmask]> wrote:

Hi,
I appreciate why the methodology of SRs is undertaken - to reduce bias, ensure we get all the papers etc. But what I'm thinking about, when I ask the question, is the actual end result of a meta-analysis: the effect size. One could easily say that we do an SR (and M-A) to get a very accurate effect size. But how is that practically useful?
For instance, if you're a clinician you may simply want to know whether an intervention is effective - in which case extreme precision is not as important as a 'yes', 'no', or 'maybe'.
I could well see that if you're weighing up the relative merits of two interventions (effect size, side effects, patient circumstances etc.) you want to know how effective each is relative to the other. But again, does that have to be massively accurate? I can also see a case for accurate effect sizes when doing cost-effectiveness work.
So, can people please let me know, practically, when such precision is required and when you could probably get away with something less accurate?
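(To make the precision question concrete: a minimal sketch, with hypothetical effect sizes and standard errors, of standard fixed-effect inverse-variance pooling - the basic arithmetic behind a meta-analysis - showing that the pooled estimate is always more precise than any single study. Whether that extra precision changes a clinical decision is the question at hand.)

```python
import math

# Hypothetical studies: (log odds ratio, standard error)
studies = [(0.8, 0.20), (0.7, 0.25), (0.9, 0.30)]

# Fixed-effect inverse-variance weights: w_i = 1 / SE_i^2
weights = [1 / se**2 for _, se in studies]

# Pooled effect: weighted average of the study effects
pooled = sum(w * eff for (eff, _), w in zip(studies, weights)) / sum(weights)

# Pooled standard error: sqrt(1 / sum of weights)
pooled_se = math.sqrt(1 / sum(weights))

# The pooled SE is smaller than every individual study's SE,
# i.e. the meta-analysis gives a narrower confidence interval.
print(f"pooled effect = {pooled:.3f}, pooled SE = {pooled_se:.3f}")
```

Here the pooled SE comes out around 0.14, tighter than the best single study's 0.20 - but a clinician asking only "does it work?" might act identically on either interval.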
 
Thanks
 
jon

--
Jon Brassey
TRIP Database
http://www.tripdatabase.com
Find evidence fast