Hi Jon,

I would expect it to vary by field and by type of institution (industry vs
academia). I would expect the industry trials that are missing to be the
ones with unfavourable results. I would expect the missing trials from
independent academics to be those with dull results, or small trials that
failed to recruit. I may be wrong.

I think in fields where most studies are done by pharmaceutical companies,
it would be reasonable to assume that systematic reviews that don't include
all trials (e.g. those that don't actively look for unpublished trials)
will overestimate benefit.
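
To make that concrete, here is a quick toy calculation in Python (the effect
sizes, standard errors and publication flags below are invented, not taken
from any real trial set): a fixed-effect, inverse-variance pooled estimate
computed once over all trials and once over the "published" subset only.

# Toy numbers only - hypothetical standardised effects and standard errors
effects   = [0.6, 0.5, 0.4, 0.1, 0.0]
ses       = [0.20, 0.25, 0.20, 0.30, 0.25]
published = [True, True, True, False, False]  # unfavourable trials unpublished

def pooled(effs, errs):
    # Inverse-variance weighted (fixed-effect) pooled estimate
    w = [1 / se ** 2 for se in errs]
    return sum(wi * e for wi, e in zip(w, effs)) / sum(w)

all_trials = pooled(effects, ses)
pub_only   = pooled([e for e, p in zip(effects, published) if p],
                    [s for s, p in zip(ses, published) if p])
print(f"all trials:     {all_trials:.2f}")
print(f"published only: {pub_only:.2f} "
      f"({100 * (pub_only / all_trials - 1):.0f}% larger)")

With these made-up numbers the published-only estimate comes out roughly a
third larger than the complete one - the same order of magnitude as the
figures Jon quotes below, though the real inflation depends entirely on
what is missing.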

I have argued elsewhere that those involved in systematic reviews should
present results stratified by funding source -
http://sickpopulations.wordpress.com/2012/01/04/cochrane_reviews/. Note
that since I wrote that, Joel Lexchin has updated his analysis -
http://onlinelibrary.wiley.com/doi/10.1002/14651858.MR000033.pub2/abstract -
the conclusions are similar.
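
In code terms the stratified presentation is just pooling within funder
groups instead of across them - a minimal sketch, with invented funder
labels and numbers:

from collections import defaultdict

# Hypothetical (funder, effect, standard error) triples
trials = [
    ("industry",    0.55, 0.20), ("industry",    0.50, 0.25),
    ("independent", 0.30, 0.20), ("independent", 0.25, 0.30),
]

groups = defaultdict(list)
for funder, eff, se in trials:
    groups[funder].append((eff, se))

for funder, rows in groups.items():
    w = [1 / se ** 2 for _, se in rows]
    est = sum(wi * e for wi, (e, _) in zip(w, rows)) / sum(w)
    print(f"{funder:12s} k={len(rows)}  pooled effect = {est:.2f}")

A review reporting along these lines would make any industry/independent
gap visible at a glance rather than buried in the overall pool.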

Best wishes,
Tom


On 22 October 2013 10:20, Jon Brassey <[log in to unmask]> wrote:

> Hi All,
>
> In the 2008 paper Selective Publication of Antidepressant Trials and Its
> Influence on Apparent Efficacy
> <http://www.nejm.org/doi/full/10.1056/NEJMsa065779>, the authors compared
> the effects of the 74 FDA-registered studies of 12 antidepressant agents
> with the smaller subset of published trials. They reported:
>
> *Separate meta-analyses of the FDA and journal data sets showed that the
> increase in effect size ranged from 11 to 69% for individual drugs and was
> 32% overall*
>
> Two questions:
>
>    - Are people aware of other papers that have tried to quantify the
>    effect of unpublished trials?
>    - If a systematic review does not uncover the unpublished trials, is it
>    reasonable to assume it over-estimates the effects of the intervention?
>
> Best wishes
>
> jon
>
> --
> Jon Brassey
> Trip Database
> http://www.tripdatabase.com
> Find evidence fast