Hi Ben, not very convinced by your 3 reasons!
1) Dropping a piece of evidence AT RANDOM does not change bias, though it should (if analysed reasonably) increase uncertainty in the conclusions. If 'all the evidence' is unbiased, so is a randomly selected subset of it.
2) Expending a lot more effort chasing down some of the more difficult references, unpublished studies etc. will reduce bias only if the easier-to-find studies tend to be more biased, or if the biases in easy-to-find and hard-to-find studies tend to cancel out. This is a priori likely if one is a cynic (published, easy-to-find studies positively biased; harder-to-find studies unbiased or negatively biased) - I don't know whether there are methodological studies demonstrating it.
3) But economics applies to reviews as to everything else - no free lunch. By doing 10 SRs you may condemn to worse outcomes (on average) the zillions of patients who might have been helped by the 100 Rapid Reviews you didn't do. I am sure that for many, maybe most, SRs you could keep searching and analysing essentially forever, getting closer and closer to 'perfection', but after 10 years or so (say!) adding very little to the usefulness - while costing a great deal of money and rare expertise (not to mention all the patients left in the dark while the SR is done). There are opportunity costs to expending further effort, and (especially as diminishing returns of evidence found per unit of effort set in) at some point the increased precision will not be worth the extra cost. Is anyone aware of an attempt at a Value of Information analysis comparing a cheaper and a more expensive SR? I vaguely recall attempts to decide how often to update an SR, which involve some of the same issues.
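Point 1 - that dropping evidence at random leaves the pooled answer unbiased but less certain - can be checked with a small simulation. Everything below (number of studies, true effect, standard errors) is invented for illustration; it is a sketch of a fixed-effect meta-analysis, not a model of any real review.

```python
import random
import statistics

random.seed(42)
TRUE_EFFECT = 0.3  # hypothetical true treatment effect


def simulate_studies(k=30):
    """Simulate k unbiased studies, each reporting (estimate, standard error)."""
    studies = []
    for _ in range(k):
        se = random.uniform(0.05, 0.25)
        studies.append((random.gauss(TRUE_EFFECT, se), se))
    return studies


def pooled(studies):
    """Fixed-effect (inverse-variance) pooled estimate and its standard error."""
    weights = [1 / se ** 2 for _, se in studies]
    est = sum(w * e for (e, _), w in zip(studies, weights)) / sum(weights)
    return est, (1 / sum(weights)) ** 0.5


full_ests, sub_ests, full_ses, sub_ses = [], [], [], []
for _ in range(2000):
    studies = simulate_studies()
    e_full, s_full = pooled(studies)
    e_sub, s_sub = pooled(random.sample(studies, 10))  # drop 20 of 30 at random
    full_ests.append(e_full)
    sub_ests.append(e_sub)
    full_ses.append(s_full)
    sub_ses.append(s_sub)

# Both averages sit near the true effect (random dropping introduces no bias),
# but the subset's pooled SE is markedly larger (more uncertainty).
print(f"all 30 studies: mean estimate {statistics.mean(full_ests):.3f}, "
      f"mean SE {statistics.mean(full_ses):.3f}")
print(f"random 10:      mean estimate {statistics.mean(sub_ests):.3f}, "
      f"mean SE {statistics.mean(sub_ses):.3f}")
```

On this toy model both mean estimates land close to 0.3, while the 10-study pooled SE is roughly root-3 times the 30-study one - random omission costs precision, not validity.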
Many Rapid Reviews and SRs don't come to a conclusion at all (regarding effectiveness). Where they do, I suspect these conclusions (e.g. from meta-analyses) should generally be much less certain than they appear, due to potential biases not taken into account, and due to differences between study conditions and the patient in front of you. This will tend to favour smaller/cheaper reviews because, in effect, they are not much worse than even a 'perfect' review, which will still carry large uncertainties when applied to your patient.
Cheers, David
----- Original Message -----
From: "Benjamin Djulbegovic" <[log in to unmask]>
To: [log in to unmask]
Sent: Sunday, 27 January, 2013 2:06:57 PM
Subject: Re: Why do systematic reviews?
Jon,
There are 3 reasons "why do SR":
1) It is the best method developed to date for assessing the "TOTALITY of EVIDENCE" (mathematicians proved long ago that if you drop a piece of evidence, you will get biased evidence)
2) It avoids (or at least minimizes) selective citation bias - still the leading cause of biased and distorted evidence, which continues to plague the current literature
3) You can never predict the results of an SR from individual trials. That is, even if in the majority of cases the results of an SR do not differ from the results of the largest trial, you can never know when that will happen. So, your question "Do we do 10 SRs OR do we do 9 SRs and 100 rapid reviews?" depends on how much you are willing to be wrong, as the one SR that you decided not to do (in order to do 100 rapid reviews) may ultimately cost more (in terms of poor patient outcomes, bad decision-making etc.)
Best
Ben
On Jan 27, 2013, at 5:57 AM, "Jon Brassey" < [log in to unmask] > wrote:
Hi Chris,
Thanks for the reply, which came in while I was typing my response to Kev.
I wonder what proportion of clinicians use NNTs and NNHs in discussing risks. I keep telling my mum to ask her GP what her NNT is for the statins he's suggesting she takes. It's certainly not been raised in any of the consultations.
But, the above anecdote aside, it'd be interesting to see how different a rapid review could be on the subject. We know that when the largest RCT is positive and significant, a subsequent meta-analysis is around 95% likely to also be positive and significant. So, if you find that, you've got pretty close to the dichotomous 'yes'. The issue - for me - becomes how much effort is required to get 'super' accurate, and whether that benefit is worth it.
So, it comes back (although worded slightly differently) to what is the cost benefit of comparing:
* An SR (which isn't perfect) but may cost £50-100,000 and take 12 months to perform.
* A rapid review that takes a week and costs £1,000.
The former will identify 90% of the trials (say) while the latter might find 65% of the trials. This figure would vary between topics - but hopefully you get the point.
Will those extra trials affect the effect size sufficiently to justify the cost?
I think it's a bit bad that this evidence doesn't exist. If we had £1,000,000 we could have this sort of discussion:
Do we do 10 SRs OR do we do 9 SRs and 100 rapid reviews?
BW
jon
On Sun, Jan 27, 2013 at 10:42 AM, Chris Del Mar < [log in to unmask] > wrote:
Jon
It matters when the benefits are modest.
Take the example of antibiotics for acute otitis media. The simple dichotomous outcome is yes, antibiotics ARE beneficial compared with none, for pain at 3 days. But the effect size (which is what you can more accurately pinpoint with an SR and meta-analysis) is so small that the NNT is somewhere between 10 and 20 (depending on severity etc.; see the Cochrane review). This benefit is so small (especially compared with the similar NNH for antibiotics - abdominal pain, rashes, diarrhoea etc.) that many patients and their doctors elect to hold off, and use something more direct for the pain and discomfort.
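For readers less used to NNT arithmetic: the NNT is just the reciprocal of the absolute risk reduction. The event rates below are made up to land in the 10-20 range mentioned above; they are not taken from the Cochrane review.

```python
# Illustrative (invented) event rates: pain at 3 days in 30% of
# untreated children vs 22% of those given antibiotics.
control_event_rate = 0.30
treated_event_rate = 0.22

arr = control_event_rate - treated_event_rate  # absolute risk reduction
nnt = 1 / arr                                  # number needed to treat

print(f"ARR = {arr:.2f}, NNT = {nnt:.1f}")  # ARR = 0.08, NNT = 12.5
```

So an 8-percentage-point absolute benefit means treating about 12-13 children for one to benefit on this outcome - small enough that a comparable NNH can reasonably tip the decision the other way.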
Chris
From: Evidence based health (EBH) [mailto: [log in to unmask] ] On Behalf Of Jon Brassey
Sent: Sunday, 27 January 2013 6:06 PM
To: [log in to unmask]
Subject: Why do systematic reviews?
Hi,
I appreciate why the methodology of SRs is undertaken - to reduce bias, ensure we get all the papers, etc. But what I'm thinking of, when I ask the question, is the actual end result (of a meta-analysis): the effect size. One could easily say that we do an SR (and M-A) to get a very accurate effect size. But how is that practically useful?
For instance, if you're a clinician you may simply want to know whether an intervention is effective - in which case extreme precision is not as important as a 'yes', 'no', or 'maybe'.
I can well see that if you have two interventions and you're weighing up their relative merits (effect size, side effects, patient circumstances etc.), one wants to know how effective each intervention is relative to the other. But again, does that have to be massively accurate? I can also see a case, when doing cost-effectiveness work, for accurate effect sizes.
So, can people please let me know, practically, when such precision is required and when, sometimes, you could probably get away with something less accurate?
Thanks
jon
--
Jon Brassey
TRIP Database
http://www.tripdatabase.com
Find evidence fast