Hi Ahmed,

Thank you for that.  It's the last paragraph that is particularly relevant.
Two points:


   - It is not a dichotomy - I agree; as I see it, evidence synthesis is an
   umbrella term that covers many review types (e.g. 'rapid', 'systematic').
   There is no cliff-edge from 'rapid' to 'systematic', and there is little
   in the way of evidence to guide where one ends and the other begins.  It
   seems eminence (something EBM is meant to rail against) is the order of
   the day.  If it's not eminence it's 'faith' (again, something EBM is
   historically against).  But ultimately, when is an evidence synthesis
   'good enough'?  My view is that it depends on context, and EBM has not
   really grasped this nettle.  Is it time for Eminence-Based Evidence-Based
   Medicine (EBEBM)?
   - Cost-benefit - linked to 'when is it good enough' is the notion of
   cost-benefit. Again, we have no evidence to guide this.  My view is that
   evidence synthesis is a classic case of the law of diminishing returns:
   you don't move from 100 hours of effort to 1,000 hours of effort and get
   ten times the gain. My supposition is that there is a context-specific
   sweet spot beyond which any further gains are outweighed by cost (a toy
   illustration below).  Linked to that, we talk a lot about reducing
   waste - might doing too much evidence synthesis (spending 1,000 hours
   when 100 would suffice) be wasteful and therefore unethical?
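
To make the cost-benefit point concrete, here is a toy sketch in Python.
It is purely illustrative - the saturating yield curve and every number in
it are my own assumptions, not evidence - but it shows the shape of the
argument: if each hour of searching uncovers a fixed fraction of the trials
still missing, the marginal yield per hour keeps falling.

import math

# Toy model: assume each hour of searching uncovers a fixed fraction of the
# trials not yet found (an exponential-saturation curve). The parameters
# (100 relevant trials, 1% recovery rate per hour) are invented for
# illustration only.
def trials_found(hours, total_trials=100, rate_per_hour=0.01):
    return total_trials * (1 - math.exp(-rate_per_hour * hours))

for hours in (10, 100, 1000):
    found = trials_found(hours)
    print(f"{hours:>4} hours -> ~{found:.0f} of 100 trials "
          f"({found / hours:.2f} trials per hour)")

# Output with these assumed numbers:
#   10 hours -> ~10 of 100 trials (0.95 trials per hour)
#  100 hours -> ~63 of 100 trials (0.63 trials per hour)
# 1000 hours -> ~100 of 100 trials (0.10 trials per hour)

On those made-up numbers, moving from 100 to 1,000 hours finds only ~37 more
trials for ten times the effort; whether the real curve looks anything like
this is exactly the evidence we lack.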

We appear to be running blind, with little evidence to guide us.  In this
evidence-vacuum it appears eminence, faith and arguably 'big business' have
subverted it all.

Cheers

jon

On 27 April 2017 at 12:38, Manitoba Hunter <[log in to unmask]> wrote:

> Hi Jon,
>
>
>
> I think you bring up an important point that systematic reviewers and
> decision makers should consciously be aware of when reviewing the evidence.
> The broader context concerns selective outcome reporting (SOR) and
> selective analysis reporting (SAR). Unpublished data, whether buried within
> a published trial report or unpublished because the results of the trial
> were never published at all, is problematic and very difficult to quantify.
> If we take the first definition as the rule of law then I would challenge
> any person to demonstrate a single systematic review that includes ‘all
> relevant studies’. How do we define relevance? These could be published or
> unpublished, of any study design, etc. The term ‘relevant’ could refer to
> being limited to the pre-specified PICOTS, but could also be interpreted
> literally as ‘ALL’. Therefore, I don’t think (given what we know about the
> trial landscape) it would be feasible or realistic to assume ‘all relevant
> studies on a specific topic’ can ever be captured in a single systematic
> review.
>
>
>
> The second definition (and further elaboration by Carol) is more
> practical, but still has flaws and carries its own set of biases. For
> example, it was explained that CT.gov should be searched in addition to
> ICTRP. While that will definitely increase the sensitivity of the search,
> Cochrane has failed to mention that depending on whether you use the
> ‘basic’ or ‘advanced’ search feature you will get different results. A
> presentation at the Cochrane Colloquium (I believe in 2012) showed
> differing results that often favored the ‘basic’ search. Also, what about
> all the other trial registries that make up ICTRP? Why is CT.gov singled
> out as needing to be searched independently in addition to using the WHO
> portal? I have my own thoughts and possible explanations, but if we are
> truly going to be systematic then why are we duplicating in only one
> database? To complicate things further, the algorithm used in any database
> is a black box and can be changed from time to time without any explicit
> reference to what has changed or how this will affect the search results.
> Therefore, what was demonstrated a few years ago may not be relevant today
> and vice versa.
>
>
>
> Lastly, there is a wealth of information hidden in drug approval
> applications (e.g. FDA New Drug Approvals) that are rarely hand-searched.
> There’s good reason for that, as they were never intended to be
> user-friendly and take a great deal of time to decipher. Even if we slaved
> over FDA reports, what about applications in other countries, and what
> additional or unpublished data is locked away in those files that
> generally never sees the light of day?
>
>
>
> If we embark on a systematic review on the assumption that it is a
> dichotomy (get all the data or fail) then we have set ourselves up for
> failure before we even begin. The goal should be to ‘reduce’ the effect of
> selective outcome reporting bias, not to eliminate it. How we go about this
> also has to take into consideration time, effort, feasibility and the
> potential cost-benefit ratio, or else we’re just hoping for a miracle
> without a real plan of how to succeed.
>
>
>
> My five cents…
>
>
>
> Ahmed
>
>
>
>
>
>
>
> *From:* Evidence based health (EBH) [mailto:EVIDENCE-BASED-HEALTH@JISCMAIL.AC.UK]
> *On Behalf Of *Jon Brassey
> *Sent:* Thursday, April 27, 2017 4:00 AM
> *To:* [log in to unmask]
> *Subject:* Systematic reviews, what am I missing?
>
>
>
> I was stimulated by a recent paper on stem cells
> (http://www.cell.com/stem-cell-reports/pdf/S2213-6711%2817%2930119-4.pdf)
> which reported that nearly half of stem cell trials aren't reported.  This
> falls in the range given by OpenTrials, which reports that 30-50% of
> trials aren't reported.  It got me thinking:
>
>    - Many trials are unpublished.
>
>
>    - Even Cochrane (one of the better SR publishers) does a poor job of
>    handling unpublished studies (e.g.
>    http://www.bmj.com/content/346/bmj.f2231)
>
>
>    - Only including published trials can have a profound effect on the
>    outcome of a systematic review (e.g.
>    http://www.nejm.org/doi/full/10.1056/NEJMsa065779 &
>    http://www.bmj.com/content/344/bmj.d7202)
>
> Further, a couple of definitions of systematic reviews:
>
>
>
> 1) A Brief History of Research Synthesis. Evaluation & the Health
> Professions, Vol. 25, No. 1, March 2002, pp. 12-37
> *SYSTEMATIC REVIEW The application of strategies that limit bias in the
> assembly, critical appraisal, and synthesis of all relevant studies on a
> specific topic. Meta-analysis may be, but is not necessarily, used as part
> of this process.*
>
>
>
> 2) Cochrane's
> <http://www.cochranelibrary.com/about/about-cochrane-systematic-reviews.html>
> *A systematic review attempts to identify, appraise and synthesize all the
> empirical evidence that meets pre-specified eligibility criteria to answer
> a given research question. Researchers conducting systematic reviews use
> explicit methods aimed at minimizing bias, in order to produce more
> reliable findings that can be used to inform decision making.*
>
>
>
> The first definition would suggest - as it states '*all relevant studies*'
> - that most systematic reviews are not really systematic reviews, as they
> miss lots of trials.
>
> The second definition is more flexible in saying "*...attempts to
> identify, appraise and synthesize...*": it's not demanding '*all*', merely
> that you '*attempt*' to identify all the evidence.
>
>
>
> So, two questions to the group:
>
>
>
>    - Based on the first definition, are systematic reviews that don't
>    include 'all relevant studies' not actually systematic reviews?
>
>
>    - Based on the second definition, any clue as to how hard one should
>    'attempt' to locate all the evidence?  For instance, systematic
>    reviews tend to try to locate ALL published journal articles while -
>    generally - making a fairly poor attempt at unpublished trials; who
>    decided that, and is it evidence based?
>
> I look forward to hearing from you all.
>
>
>
> Best wishes
>
>
>
> jon
>
>
>
> --
>
>
>
> Jon Brassey
>
> Director, Trip Database <http://www.tripdatabase.com>
>
> Honorary Fellow at CEBM <http://www.cebm.net>, University of Oxford
>
> Creator, Rapid-Reviews.info
>
>
>



-- 
Jon Brassey
Director, Trip Database <http://www.tripdatabase.com>
Honorary Fellow at CEBM <http://www.cebm.net>, University of Oxford
Creator, Rapid-Reviews.info