Jon,
There are 3 reasons "why do SR":
1) it is the best method developed to date that can provide an assessment of the TOTALITY of evidence (mathematicians proved a long time ago that if you drop a piece of evidence, you will get biased evidence)
2) it avoids (or at least minimizes) selective citation bias - still the leading cause of the biased and distorted evidence that continues to plague the current literature
3) you can never predict the results of an SR from individual trials. That is, even if in the majority of cases the results of the SR do not differ from the results of the largest trial, you can never know when that will happen. So, your question "Do we do 10 SRs OR do we do 9 SRs and 100 rapid reviews?" depends on how much you are willing to be wrong, as that one SR you decided not to do (in order to do 100 rapid reviews) may ultimately cost more (in terms of poor patient outcomes, bad decision-making etc.)

Best
Ben

On Jan 27, 2013, at 5:57 AM, "Jon Brassey" <[log in to unmask]> wrote:

Hi Chris,

Thanks for the reply, which came in while I was typing my response to Kev.

I wonder what proportion of clinicians use NNTs and NNHs in discussing risks.  I keep telling my mum to ask her GP what her NNT is for the statins he's suggesting she takes.  It's certainly not been raised in any of the consultations.

But, the above anecdote aside, it'd be interesting to see how different a rapid review could be on the subject.  We know that if the largest RCT is positive and significant, there is around a 95% likelihood that a subsequent meta-analysis will also be positive and significant.  So, if you've found that trial, you've got pretty close to the dichotomous 'yes'.  The issue - for me - becomes how much effort is required to get 'super' accurate, and whether that extra accuracy is worth it.
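As a rough illustration of what that 95% figure means - purely a toy Monte Carlo sketch, where the true effect, trial sizes and number of trials are all invented assumptions rather than figures from any actual review - one can simulate batches of trials, condition on the largest being positive and significant, and count how often the inverse-variance pooled result agrees:

    import numpy as np

    rng = np.random.default_rng(42)

    def one_meta(true_effect=0.3, n_trials=8):
        # Trial sizes vary; inflate one so there is a clear "largest RCT".
        sizes = rng.integers(50, 300, size=n_trials).astype(float)
        sizes[rng.integers(n_trials)] *= 5
        se = 2.0 / np.sqrt(sizes)              # crude per-trial standard errors
        effects = rng.normal(true_effect, se)  # observed trial effects
        z = effects / se
        # Fixed-effect (inverse-variance) pooled z-score
        w = 1.0 / se**2
        pooled_z = (w * effects).sum() / np.sqrt(w.sum())
        return z[np.argmax(sizes)] > 1.96, pooled_z > 1.96

    agree = conditioned = 0
    for _ in range(20000):
        largest_sig, pooled_sig = one_meta()
        if largest_sig:  # condition on a positive, significant largest trial
            conditioned += 1
            agree += pooled_sig
    print(f"P(pooled result also positive & significant) ~ {agree / conditioned:.2f}")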

So, it comes back (although worded slightly differently) to what is the cost benefit of comparing:


  *   An SR (which isn't perfect) but which may cost £50,000-100,000 and take 12 months to perform.
  *   A rapid review that takes a week and costs £1,000.

The former will identify 90% of the trials (say) while the latter might find 65% of the trials.  This figure would vary between topics - but hopefully you get the point.

Will those extra trials affect the effect size sufficiently to justify the cost?
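One quick way to get a feel for that question - again a toy sketch, with invented effect sizes and standard errors, and a random subset standing in for the trials a rapid review happens to find (real rapid reviews miss trials non-randomly, typically the smaller, less well-indexed ones) - is to pool everything and then pool a 65% subset:

    import numpy as np

    rng = np.random.default_rng(7)
    n = 20
    effects = rng.normal(0.25, 0.10, size=n)  # hypothetical trial effect sizes
    ses = rng.uniform(0.05, 0.30, size=n)     # hypothetical standard errors

    def pooled(e, s):
        w = 1.0 / s**2                        # inverse-variance weights
        return (w * e).sum() / w.sum()

    full = pooled(effects, ses)                    # "SR": all 20 trials found
    found = rng.choice(n, size=13, replace=False)  # "rapid review": ~65% of trials
    quick = pooled(effects[found], ses[found])
    print(f"all trials: {full:.3f}   65% subset: {quick:.3f}")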

I think it's a bit bad that that evidence doesn't exist.  If we had £1,000,000 we could have this sort of discussion:

Do we do 10 SRs OR do we do 9 SRs and 100 rapid reviews?
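(Using the top-end SR cost above, the arithmetic behind that choice is: 10 SRs × £100,000 = £1,000,000, versus 9 SRs × £100,000 + 100 rapid reviews × £1,000 = £1,000,000.)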

BW

jon


On Sun, Jan 27, 2013 at 10:42 AM, Chris Del Mar <[log in to unmask]> wrote:
Jon

It matters when the benefits are modest.

Take the example of antibiotics for acute otitis media. The simple dichotomous outcome is yes, antibiotics ARE beneficial compared with none, for pain at 3 days. But the effect size (which is what you can more accurately pinpoint with SR and meta-analysis) is so small that the NNT is somewhere between 10 and 20 (depending on severity etc.) (see the Cochrane review). This benefit is so small (especially compared with the similar NNH for antibiotics - abdominal pain, rashes, diarrhoea etc.) that many patients and their doctors elect to hold off, and use something more direct for the pain and discomfort.
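For readers less used to the arithmetic, the NNT is just the reciprocal of the absolute risk reduction. A minimal worked example - with invented event rates standing in for the review's actual figures - shows how a modest risk difference turns into an NNT in that range:

    import math

    control_rate = 0.25  # hypothetical: still in pain at 3 days, no antibiotics
    treated_rate = 0.18  # hypothetical: same outcome with antibiotics
    arr = control_rate - treated_rate  # absolute risk reduction = 0.07
    nnt = math.ceil(1 / arr)           # = 15, inside the 10-20 range above
    print(f"ARR = {arr:.2f}, NNT = {nnt}")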

Chris

From: Evidence based health (EBH) [mailto:[log in to unmask]] On Behalf Of Jon Brassey
Sent: Sunday, 27 January 2013 6:06 PM
To: [log in to unmask]
Subject: Why do systematic reviews?

Hi,
I appreciate why the methodology of SRs is undertaken - to reduce bias, ensure we get all the papers etc. But what I'm thinking of, when I ask the question, is the actual end result (of a meta-analysis): the effect size. One could easily say that we do an SR (and M-A) to get a very accurate effect size. But how is that practically useful?
For instance, if you're a clinician you may simply want to know whether an intervention is effective - in which case extreme precision is not as important as a 'yes', 'no', or 'maybe'.
I could well see that if you have two interventions and you're weighing up their relative merits (effect size, side effects, patient circumstances etc.), you want to know how effective each intervention is relative to the other. But again, does that have to be massively accurate?  I can also see a case, when doing cost-effectiveness work, for accurate effect sizes.
So, can people please let me know, practically, when such precision is required and when, sometimes, you could probably get away with something less accurate?

Thanks

jon

--
Jon Brassey
TRIP Database
http://www.tripdatabase.com
Find evidence fast




--
Jon Brassey
TRIP Database
http://www.tripdatabase.com
Find evidence fast