Ahmed, I am all for transparent and public discourse about scientific work (including my own), but I am afraid you did not understand our work well; please read the paper more carefully. Briefly, investigators make their bets BEFORE the trials begin. How often will these bets turn out to be correct? How would you evaluate these bets? Can we not only describe but actually predict the distribution of the outcomes when they are tested in RCTs? What would you say? If a patient (or a policy-maker) asks you (as they have asked me), "Doc, when you study these new treatments, what is the probability that they will be superior to already existing treatments?", what would you tell them?
Try to think about these questions - when you do, I am sure you will end up agreeing with us.
(We actually postulated in advance, without seeing the data, that we would observe what we ended up observing; we did this by formulating the "equipoise hypothesis," linking treatment "success" to the underlying moral principle of trial conduct.)
Best
Ben
PS: As for the point that we looked at "ONLY" 743 trials (close to 300,000 patients): it took us years to obtain the data, plus an effort that can only be characterized as Herculean to extract and analyze it. As of today, this is the only data set that includes a CONSECUTIVE series of trials in which outcomes from all published and unpublished trials were accounted for. Incidentally, we did exclude noninferiority trials (although there were few equivalence and noninferiority trials in our data set).
PPS: There is, however, one caveat to keep in mind about our paper: it describes publicly funded trials only. Many people maintain that industry would never invest in clinical trials if they amounted to a "success" rate of about 50%. This remains a very important, even fundamental, question related to the way we evaluate and discover new treatments (which, by the way, we have also studied, hoping to provide an unbiased answer to this question too).




On Oct 28, 2012, at 8:11 AM, "Dr. Ahmed M. Abou-Setta" <[log in to unmask]> wrote:

Just for transparency, there is a discussion on the Cochrane Collaboration Group's Bulletin Board in LinkedIn around this review. One of the commentators suggested that this review is "another testimony to the necessity for proper implementation of randomization in testing treatments". This was responded to by a second commentator stating that the "EBM model... needs re-evaluation. While in theory the EBM model has merit there is much bias in the "evidence" that is used and often raises questionable quality when there is a profit motive involved in those doing the research." and they make a reference to the book by Ben Goldacre, "Bad Pharma."

Below is my response:

Before we start jumping to conclusions on whether the EBM model is a success or a failure based on a single snapshot in time, we should take a closer look at the evidence. Just from the abstract alone, we can glimpse a few issues that can affect the external validity (generalizability) of the results of this review.

(1) The question being asked is very specific: "What is the likelihood that new treatments being compared to established treatments in randomized trials will be shown to be superior?" Therefore only 'superiority' trials should be looked at, while non-inferiority and underpowered trials should all be excluded from the analyses.

(2) Only four cohorts of consecutive, publicly funded, randomized trials (743 trials) were included in the analyses. With over 25,000 reports of RCTs being published every year in Medline alone, well, I'll let you do the math :) How confident, then, are we that this truly represents RCTs in general?

(3) The authors report: "We found that, on average, new treatments were very slightly more likely to have favorable results than established treatments, both in terms of the primary outcomes targeted and overall survival." OK, but what about safety? The whole purpose of an RCT might have been to prove that one drug has a preferable side-effect profile compared with its competitor.

(4) And of course, we are forgetting one of the main reasons to undertake an RCT: to prove, using an experimental design, that one drug is equivalent to another in efficacy and safety. If we use the rationale that drugs/interventions should only be used if they are 'better' than the competitor, then there would be only ONE treatment option for any ONE disease.

(5) There is a misunderstanding underlying the authors' statement: "Random allocation is only ethical, however, if there is genuine uncertainty about which of the treatment options is preferable. If a patient or their healthcare provider is certain which of the treatments being compared is preferable they should not agree to random allocation, because this would involve the risk that they would be assigned to a treatment they believed to be inferior." Off the bat, this statement sounds as if it reflects only efficacy, while in fact it should encompass other aspects of treatment including effectiveness, availability, compliance, preferences, moral/religious beliefs, safety, cost, etc. If we boil the whole argument down to statistically significant efficacy, then we lose all the shades of grey that are allowed in the EBM model.


Since Ben is an active member on this listserv I didn't want it to look like I was criticizing his work behind his back and so I am posting my opinion here also.

Ahmed




Date: Wed, 17 Oct 2012 05:24:11 -0700
From: [log in to unmask]
Subject: Re: New treatments turn out better than the old ones just slightly more than half the time in randomized trials
To: [log in to unmask]

 
Dear colleagues,
Ben Djulbegovic and a whole host of other distinguished members of our Group have just published a Cochrane Review on the subject:
Djulbegovic B, et al. New treatments compared to established treatments in randomized trials. Cochrane Database of Syst Rev 2012; 10: Art. No. MR000024. http://onlinelibrary.wiley.com/doi/10.1002/14651858.MR000024.pub3/abstract;jsessionid=6739FCF811D60C9C0645840920079841.d03t04
Regards,
Ash