Hi Ahmed,

Thanks. You rightly state "We make trade-offs every day and systematic reviews are no different." But given the lack of evidence to inform those trade-offs, what are they based on - convention?

We know that excluding unpublished studies can have a significant impact on the results of a systematic review, while the evidence around the number of databases to search is fairly thin on the ground.

Is the trade-off not to go after unpublished studies properly based on an honest and unbiased assessment of the evidence?

Someone's making these rules/conventions - they're not based on evidence and they're certainly not transparent.

Best wishes

jon

On 27 April 2017 at 13:29, Manitoba Hunter <[log in to unmask]> wrote:

Hi Jon,

 

You may also find that some of the work on SOR/SAR sheds more light on the underlying problem with trial reports (https://www.dropbox.com/s/0nqnyur9kdlzq62/Norris_2014_ResSynthMethods.pdf?dl=0). The title says it all: "Clinical trial registries are of minimal use for identifying selective outcome and analysis reporting". So, short of hacking researchers' computers and databases for data, where are we as systematic reviewers expected to look for evidence? Feasibility is a huge factor in any successful research endeavor. I've worked on projects where it was difficult to justify the return on investment, but the funding agency wanted that level of sensitivity regardless of whether or not it would change the conclusions. The same question comes up when deciding how many databases to search, setting up filters, etc. We make trade-offs every day and systematic reviews are no different.

 

Best wishes,

 

Ahmed

 

From: Jon Brassey [mailto:jon.brassey@tripdatabase.com]
Sent: Thursday, April 27, 2017 7:03 AM
To: Manitoba Hunter
Cc: EVIDENCE-BASED-HEALTH
Subject: Re: Systematic reviews, what am I missing?

 

Hi Ahmed,

 

Thank you for that.  It's your last paragraph that is particularly relevant.  Two points:

 

  • It is not a dichotomy - I agree. As I see it, evidence synthesis is an umbrella term that covers many review types (e.g. 'rapid', 'systematic'), there is no cliff-edge from 'rapid' to 'systematic', and there is little in the way of evidence to guide where to sit on that spectrum.  It seems eminence (something EBM is meant to rail against) is the order of the day.  If it's not eminence it's 'faith' (again, something EBM is historically against).  But ultimately, when is the evidence synthesis 'good enough'?  My view is that it depends on context, and the EBM community has not really grasped this nettle.  Is it time for Eminence-Based Evidence-Based Medicine (EBEBM)?
  • Cost-benefit - linked to 'when is it good enough?' is the notion of cost-benefit. Again, we have no evidence to guide this.  My view is that evidence synthesis is a classic case of the law of diminishing returns: you don't move from 100 hours of effort to 1,000 hours of effort and get ten times the gain (a toy illustration follows this list). My supposition is that there will be a context-specific sweet spot beyond which any further gains are outweighed by cost.  Linked to that, we talk a lot about reducing waste - might doing too much evidence synthesis (spending 1,000 hours when 100 would suffice) be wasteful and therefore unethical?
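To make the diminishing-returns point concrete, here is a minimal toy sketch (not from the thread; the pool size and per-hour capture probability are invented numbers) assuming a fixed pool of relevant studies and a constant per-hour chance of capturing each one that is still missing:

# Toy model (illustrative only): a fixed pool of relevant studies and a
# constant per-hour chance of finding each study that is still missing.
POOL = 200          # hypothetical number of relevant studies in existence
P_PER_HOUR = 0.01   # hypothetical chance one hour of searching finds any given missed study

def expected_yield(hours: int) -> float:
    """Expected number of studies found after `hours` of searching."""
    return POOL * (1 - (1 - P_PER_HOUR) ** hours)

for hours in (10, 100, 1000):
    print(f"{hours:>5} hours -> ~{expected_yield(hours):.0f} of {POOL} studies")

# Output: 10 hours -> ~19; 100 hours -> ~127; 1000 hours -> ~200.
# Ten times the effort beyond the first 100 hours buys roughly 73 more studies, not ten times the gain.

Under these made-up assumptions the yield curve saturates quickly, which is the sweet-spot argument in miniature; the real question is where that point sits for a given review context.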

We appear to be running blind with little evidence to guide us.  In this evidence vacuum it appears that eminence, faith and arguably 'big business' have subverted it all.

 

Cheers

 

jon

 

On 27 April 2017 at 12:38, Manitoba Hunter <[log in to unmask]> wrote:

Hi Jon,

 

I think you bring up an important point that systematic reviewers and decision makers should consciously be aware of when reviewing the evidence. The broader context is selective outcome reporting (SOR) and selective analysis reporting (SAR). Unpublished data, whether within a published trial report or unpublished because the results of the trial were never published at all, is problematic and very difficult to quantify. If we take the first definition as the letter of the law, then I would challenge anyone to demonstrate a single systematic review that includes 'all relevant studies'. How do we define relevance? These could be published or unpublished studies, of any study design, etc. The term 'relevant' could refer to being limited to the pre-specified PICOTS, but could also be interpreted literally as 'ALL'. Therefore, given what we know about the trial landscape, I don't think it is feasible or realistic to assume that 'all relevant studies on a specific topic' can ever be captured in a single systematic review.

 

The second definition (and the further elaboration by Carol) is more practical, but it still has flaws and carries its own set of biases. For example, it was explained that CT.gov should be searched in addition to ICTRP. While that will definitely increase the sensitivity of the search, Cochrane has failed to mention that you will get different results depending on whether you use the 'basic' or 'advanced' search features. A presentation at the Cochrane Colloquium (I believe in 2012) showed differing results that often favored the 'basic' search. Also, what about all the other trial registries that make up ICTRP? Why is CT.gov singled out as needing to be searched independently in addition to using the WHO portal? I have my own thoughts and possible explanations, but if we are truly going to be systematic, why are we duplicating our searching in only one registry? To complicate things further, the search algorithm used in any database is a black box and can be changed from time to time without any explicit reference to what has changed or how it will affect the search results. Therefore, what was demonstrated a few years ago may not be relevant today, and vice versa.

 

Lastly, there is a wealth of information hidden in drug approval applications (e.g. FDA New Drug Applications) that is rarely hand-searched. There is good reason for that: they were never intended to be user-friendly and take a great deal of time to decipher. And even if we slaved over the FDA reports, what about applications in other countries? What additional or unpublished data is locked away in those files that generally never see the light of day?

 

If we embark on a systematic review on the assumption that it is a dichotomy (get all the data or fail), then we have set ourselves up for failure before we even begin. The goal should be to 'reduce' the effect of selective outcome reporting bias, not to eliminate it. How we go about this also has to take into consideration time, effort, feasibility and the potential cost-benefit ratio, or else we're just hoping for a miracle without a real plan for how to succeed.

 

My five cents…

 

Ahmed

 

 

 

From: Evidence based health (EBH) [mailto:EVIDENCE-BASED-HEALTH@JISCMAIL.AC.UK] On Behalf Of Jon Brassey
Sent: Thursday, April 27, 2017 4:00 AM
To: EVIDENCE-BASED-HEALTH@JISCMAIL.AC.UK
Subject: Systematic reviews, what am I missing?

 

This was stimulated by a recent paper on stem cells (http://www.cell.com/stem-cell-reports/pdf/S2213-6711%2817%2930119-4.pdf) which found that nearly half of stem cell trials go unreported.  This falls within the range from OpenTrials, which reports that 30-50% of trials are never reported.  It got me thinking:

  • Many trials are unpublished.

Further, a couple of definitions of systematic reviews:

 

1) A Brief History of Research Synthesis, Evaluation & the Health Professions, Vol. 25, No. 1, March 2002, 12-37:
SYSTEMATIC REVIEW: The application of strategies that limit bias in the assembly, critical appraisal, and synthesis of all relevant studies on a specific topic. Meta-analysis may be, but is not necessarily, used as part of this process.

 

2) Cochrane's
A systematic review attempts to identify, appraise and synthesize all the empirical evidence that meets pre-specified eligibility criteria to answer a given research question. Researchers conducting systematic reviews use explicit methods aimed at minimizing bias, in order to produce more reliable findings that can be used to inform decision making.

 

The first definition would suggest - as it states 'all relevant studies' - that most systematic reviews are not really systematic reviews as they miss lots of trials.

The second definition is more flexible, saying "...attempts to identify, appraise and synthesize...".  It's not requiring 'all', merely that you 'attempt' to identify all the evidence.

 

So, two questions to the group:

 

  • Based on the first definition, are systematic reviews that don't include 'all relevant studies' not actually systematic reviews?
  • Based on the second definition, any clue as to how hard one should 'attempt' to locate all the evidence?  For instance, systematic reviews tend to try to locate ALL published journal articles but - generally - make a fairly poor attempt at finding unpublished trials.  Who decided that, and is it evidence-based?

I look forward to hearing from you all.

 

Best wishes

 

jon

 

--

 

Jon Brassey

Director, Trip Database

Honorary Fellow at CEBM, University of Oxford

 




--

Jon Brassey

Director, Trip Database

Honorary Fellow at CEBM, University of Oxford

 




--
Jon Brassey
Director, Trip Database
Honorary Fellow at CEBM, University of Oxford