We've been having related conversations around the office in our medical library as well as via the Medical Librarians community on Twitter (and probably also the MEDLIBS-L email list and beyond). Here are some of the points I've been noticing.

There are some SERIOUS problems with systematic reviews, including but not limited to:
 - lack of access to all the evidence;
 - bias in the published record; 
 - a shortage of peer reviewers and journal editors who truly understand the requirements of the methodology.

(See my post last week about the infamous Google Scholar article: <http://etechlib.wordpress.com/2013/01/23/whats-wrong-with-google-scholar-for-systematic-reviews/>)

These problems result in a substantial number of published reviews that call themselves systematic reviews but aren't really. That adds further bias to the published record and influences policy making and clinical decision-making, without delivering the high quality of evidence that we are aiming for and that is claimed for the results of systematic reviews. 

So, we are having many conversations around that. We are also having some interesting conversations about Comparative Effectiveness Reviews: where they fit into the evidence-based medicine hierarchy, how to understand this newer methodology as well as we do the SR methodology, and what the potential role of librarians in #CER might be. 

This blog post on CER on Twitter just opens that conversation to a broader audience:
<http://etechlib.wordpress.com/2013/01/25/hashtags-of-the-week-hotw-comparative-effectiveness-research-week-of-january-21-2013/>

This is a very fruitful area for conversation at large, and I am grateful for this group and the colleagues who think deeply and perceptively about these issues. 

 - Patricia 



On Sun, Jan 27, 2013 at 5:56 AM, Jon Brassey <[log in to unmask]> wrote:
Hi Chris,
 
Thanks for the reply, which came in while I was typing my response to Kev.
 
I wonder what proportion of clinicians use NNTs and NNHs in discussing risks.  I keep telling my mum to ask her GP what her NNT is for the statins he's suggesting she takes.  It's certainly not been raised in any of the consultations.
 
But, the above anecdote aside, it'd be interesting to see how different a rapid review could be on the subject.  We know that if the largest RCT is positive and significant, there is around a 95% chance that a subsequent meta-analysis will also be positive and significant.  So, if you find that trial, you've got pretty close to the dichotomous 'yes'.  The issue, for me, becomes how much effort is required to get 'super' accurate, and whether that benefit is worth it.
 
So, it comes back (although worded slightly differently) to the cost-benefit question of comparing:
 
  • An SR (which isn't perfect) that may cost £50,000-100,000 and take 12 months to perform.
  • A rapid review that takes a week and costs £1,000.
The former will identify (say) 90% of the trials, while the latter might find 65%. These figures would vary between topics, but hopefully you get the point.
 
Will those extra trials affect the effect size sufficiently to justify the cost?
 
I think it's a shame that such evidence doesn't exist.  If we had £1,000,000 we could have this sort of discussion:
 
Do we do 10 SRs OR do we do 9 SRs and 100 rapid reviews?
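(Editorial aside: a minimal sketch of the arithmetic behind that choice, using the illustrative figures from this email - roughly £100,000 per SR, £1,000 per rapid review, and the hypothetical 90% / 65% trial-retrieval rates. None of these numbers are measured values.)

```python
# Back-of-envelope comparison of the two portfolios described above.
# All figures are illustrative assumptions from the email, not data:
# an SR at ~GBP 100,000 finding ~90% of trials, a rapid review at
# GBP 1,000 finding ~65% of trials.

SR_COST = 100_000    # assumed cost per systematic review (GBP)
RR_COST = 1_000      # assumed cost per rapid review (GBP)

def portfolio_cost(n_sr: int, n_rr: int) -> int:
    """Total cost of a mix of systematic reviews and rapid reviews."""
    return n_sr * SR_COST + n_rr * RR_COST

# Option A: 10 systematic reviews -> 10 questions answered, each at ~90% recall
print(portfolio_cost(10, 0))    # -> 1000000

# Option B: 9 systematic reviews plus 100 rapid reviews
# -> 109 questions answered, most at ~65% recall
print(portfolio_cost(9, 100))   # -> 1000000
```

Same million pounds either way; the question is whether the extra recall of the tenth SR is worth more than a hundred additional, less exhaustive answers.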
 
BW
 
jon


On Sun, Jan 27, 2013 at 10:42 AM, Chris Del Mar <[log in to unmask]> wrote:

Jon

 

It matters when the benefits are modest.

 

Take the example of antibiotics for acute otitis media. The simple dichotomous outcome is yes, antibiotics ARE beneficial compared with none, for pain at 3 days. But the effect size (which is what you can more accurately pinpoint with an SR and meta-analysis) is so small that the NNT is somewhere between 10 and 20 (depending on severity etc.) (see the Cochrane review). This benefit is so small (especially compared with the similar NNH for antibiotics – abdominal pain, rashes, diarrhoea etc.) that many patients and their doctors elect to hold off, and use something more direct for the pain and discomfort.
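(Editorial aside: NNT is simply the reciprocal of the absolute risk reduction. A minimal sketch with hypothetical event rates, chosen only so the result lands in the 10-20 range quoted above - these are not the Cochrane review's actual figures.)

```python
# NNT = 1 / ARR, where ARR (absolute risk reduction) is the difference
# in event rates between the control and treatment groups.
# Event rates below are hypothetical, picked only to show how an NNT
# between 10 and 20 arises; they are not the Cochrane figures.

control_event_rate = 0.30   # assumed: proportion still in pain at 3 days without antibiotics
treated_event_rate = 0.23   # assumed: proportion still in pain at 3 days with antibiotics

arr = control_event_rate - treated_event_rate   # absolute risk reduction
nnt = 1 / arr                                    # number needed to treat

print(f"ARR = {arr:.2f}, NNT = {nnt:.0f}")       # ARR = 0.07, NNT = 14
```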

 

Chris

 

From: Evidence based health (EBH) [mailto:[log in to unmask]] On Behalf Of Jon Brassey
Sent: Sunday, 27 January 2013 6:06 PM
To: [log in to unmask]
Subject: Why do systematic reviews?

 

Hi,

I appreciate why the methodology of SRs is undertaken - to reduce bias, to ensure we get all the papers, etc. But what I'm thinking about, when I ask the question, is the actual end result of a meta-analysis: the effect size. One could easily say that we do an SR (and M-A) to get a very accurate effect size. But how is that practically useful?

For instance, if you're a clinician you may simply want to know whether an intervention is effective - in which case extreme precision is not as important as a 'yes', 'no', or 'maybe'.

I can well see that if you have two interventions and you're weighing up their relative merits (effect size, side effects, patient circumstances etc.), you want to know how effective each intervention is relative to the other. But again, does that have to be massively accurate?  I can also see a case, when doing cost-effectiveness work, for accurate effect sizes.

So, can people please let me know, practically, when such precision is required and when you could probably get away with something less accurate?

 

Thanks

 

jon

--

Jon Brassey

TRIP Database

Find evidence fast

 




--
Jon Brassey
TRIP Database
Find evidence fast
 



--
Patricia Anderson, [log in to unmask]
Emerging Technologies Librarian
University of Michigan
http://www.lib.umich.edu/users/pfa