Hi Ahmed,
As a co-author on this study who worked on its specifics, please see my responses to each of your queries below. My answers follow your questions/quotes:

My point is that for the results of your study to be generalized to all RCTs, all RCTs must be answering the same question that you proposed. The fact of the matter is that not all RCTs are performed to prove superiority. Many of them are out to prove equivalence, or a better safety profile. Almost all RCTs submitted for regulatory approval are trying to prove that their intervention is 'similar' to what is already available on the market, not necessarily that it is superior.

The issue that was addressed was not related to generalization. The analogy that comes to mind is the financial market. For each product in the financial market we have a track record: how often stocks, on average, perform better than bonds. Of course, one is aware that not all stocks are "Apple" stocks. Similarly, the issue is not generalizability, or predicting the success rate of kinds of treatments or of a specific trial, but the success rate of the publicly funded RCT SYSTEM. All the included studies assessed superiority only; any studies assessing equivalence were excluded.


If we only look at efficacy outcomes then we are concentrating on the trees and forgetting about the forest.


All the trials in the cohort were designed to assess superiority. If you read the details, the success rate was assessed in three ways, so it did take care of the "forest" and not only the "trees". But it was a forest of similar trees.


This is not a criticism of your work but it means that we can't just take your work and generalize this for all RCTs.


Again, generalization is not the issue here; it is looking at the efficiency of the system. If a patient deciding on participating in an RCT asks what the track record of the RCT system is in general, there was previously no answer: empirical evidence on this question was non-existent. By the way, the distribution of the success rate did not differ according to types of treatment, control, and other variables in our cohort of studies. Also, just imagine if the success rate were high, say 80%. Do you think any patient in his/her right mind would agree to participate in an RCT? On another note, a high success rate would also indicate that uncertainty did not exist and was only faked in order to conduct an RCT, which also answers your next question.


If we could, then we could also generalize that most ethics committees are inefficient, because they are allowing trials to be performed in which the target is not to prove superior efficacy over the comparator (especially when the comparator is a placebo or no treatment).


Coming back to the presentation of the evidence: whenever we discuss options with patients, we have to describe not only the differences in efficacy but also side effects, compliance rates, treatment cost, cost-effectiveness, personal values, availability, etc. Therefore it's not as simple as drug A is better than drug B.

Again, the success rate was assessed in three ways, and one way was the authors' judgments, which take into account several issues apart from cost. Regarding values and other details... when data on all the issues you are pointing to are incorporated on a regular basis in the context of RCTs (maybe as part of CONSORT), we can certainly revisit. Please keep in mind the issue of desirability versus feasibility: we were able to address only the issues that were feasible.

Remember, my comment on the bulletin board was not a criticism of your work as much as it was a reality check that this review does not answer 'all questions related to RCTs', nor can the results be used to prove the 'failure of the EBM model', nor that 'all RCTs that don't reach statistical significance are unethical'.

It is all for the good of science and spirit of being objective.

Best wishes

Ambuj Kumar


On Mon, Oct 29, 2012 at 9:24 AM, Djulbegovic, Benjamin <[log in to unmask]> wrote:
Thank you, Jeremy
Indeed, we have completed a study on the success rate in industry-sponsored trials; hopefully it will be published soon. I completely agree with you about the interpretation of the non-inferiority studies. Nevertheless, an empirical look at non-inferiority/equivalence trials, similar to the one we described for the superiority trials, should be undertaken. (I hope someone else will do it; working on this project has been one of the most time-consuming and arduous projects I have ever been involved in!)

Your point about optimism is a very interesting one. Empirically, we know that only 17% of the trials had treatment effects that matched the original researchers' expectations. This further corroborates our key message: the investigators simply CANNOT predict IN ADVANCE what they are going to discover (although their informed guesses are probably the reason the odds are slightly in favor of new treatments).
However, the researchers would never undertake a trial if they did not believe in (if they were not optimistic about) the results. Due to the equipoise/uncertainty principle, they cannot proceed to test their "sure bets" in RCTs; they have to be sufficiently uncertain to submit their "bets" to testing in RCTs.
The higher the uncertainty, the lower the chance that they will find exactly what they hoped for. Hence, as you well know, we talk about the paradox of equipoise:
"the precept that paradoxically drives discoveries of new treatments while limiting the proportion and rate of new therapeutic discoveries."

The important question to which we are alluding here is: if you invested up to $1B in drug development and hope to reap the benefits from it, how certain would you like to be before submitting it to testing in a rigorous RCT? Or would you rather stack the game in your favor, violating the equipoise requirement?

Thanks again for your insightful comments (as usual)
Best
Ben

On Oct 29, 2012, at 8:03 AM, "Jeremy Howick" <[log in to unmask]> wrote:

Dear Ben,

Thanks for the wonderful study that should challenge the view that new=better.

Are you planning to follow up with a similar study of industry-funded trials? You are correct that industry may not be willing to invest in a proposition with a 50% likelihood of success. On the other hand, everyone is susceptible to 'optimism bias' (http://www.thelancet.com/journals/lancet/article/PIIS0140-6736%2806%2968153-1/fulltext). Moreover, demonstrating non-inferiority is sufficient to gain marketing approval, so your claim that there is a 50% chance of 'success' may require revision. You define success as rejecting the null hypothesis of no difference, whereas success in a non-inferiority trial involves demonstrating rough equivalence (including slightly worse) or better.

Aside: I believe the justification for non-inferiority trials is misguided: http://www.ncbi.nlm.nih.gov/pubmed/19998192 but this is another matter.
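[Editor's illustration] The two notions of "success" distinguished above can be sketched as decision rules on a confidence interval for the treatment-effect difference (new minus established). This is a hypothetical sketch, not from the review; the 0.10 non-inferiority margin is an invented example value:

```python
def superiority_success(ci_lower: float, ci_upper: float) -> bool:
    """Superiority trial: 'success' means the entire confidence interval
    for the difference lies above zero (null of no difference rejected
    in favor of the new treatment)."""
    return ci_lower > 0.0

def noninferiority_success(ci_lower: float, ci_upper: float,
                           margin: float = -0.10) -> bool:
    """Non-inferiority trial: 'success' only requires the lower CI bound
    to sit above a pre-specified margin, so the new treatment may be
    slightly worse, roughly equivalent, or better."""
    return ci_lower > margin

# A 95% CI of (-0.05, 0.08) fails superiority but passes non-inferiority,
# which is why the two "success" rates are not directly comparable.
print(superiority_success(-0.05, 0.08))      # False
print(noninferiority_success(-0.05, 0.08))   # True
```

The same trial result therefore counts as a failure under one definition and a success under the other, which is the substance of the objection above.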

Best wishes,

Jeremy

* please note my email address is now [log in to unmask]
--

Jeremy Howick PhD
Department of Primary Care Health Sciences
Centre for Evidence-Based Medicine
New Radcliffe House, 2nd floor, Walton Street,
Jericho OX2 6NW

From: <Djulbegovic>, "[log in to unmask]" <[log in to unmask]>
Reply-To: "[log in to unmask]" <[log in to unmask]>
Date: Sunday, 28 October 2012 14:49
To: "[log in to unmask]" <[log in to unmask]>
Subject: Re: New treatments turn out better than the old ones just slightly more than half the time in randomized trials

Ahmed, I am all for transparent and public discourse on scientific work (including my own), but I am afraid you did not understand our work well; please read the paper more carefully. Briefly, investigators make their bets BEFORE the trials begin. How often will these bets turn out to be correct? How would you evaluate these bets? Can we actually not only describe but predict the distribution of the outcomes when they are tested in RCTs? What would you say? If a patient (or a policy-maker) asks you (as they asked me) "doc, when you study these new treatments, what is the probability that they will be superior to already existing treatments?", what would you tell them?
Try to think about these questions - when you do, I am sure you will end up agreeing with us.
(We actually postulated in advance, without seeing the data, that we would observe what we ended up observing; we did this by formulating the "equipoise hypothesis", linking treatment "success" to the underlying moral principle of trial conduct.)
Best
Ben
PS: As for the point that we looked at "ONLY" 743 trials (close to 300,000 patients): it took us years to obtain the data, plus an effort that can only be characterized as Herculean to extract and analyze them. As of today, this is the only data set that includes a CONSECUTIVE series of trials in which outcomes from all published and unpublished trials were accounted for. Incidentally, we did exclude non-inferiority trials (although there were a few equivalence and non-inferiority trials in our data set).
PPS: There is, however, one caveat one should have about our paper: it describes publicly funded trials only. Many people maintain that the industry would never invest in clinical trials if it amounted to a "success" rate of about 50%. This remains a very important, even fundamental, question related to the nature of the way we evaluate and discover new treatments (which, by the way, we have also studied, hoping to provide an unbiased answer to this question too).




On Oct 28, 2012, at 8:11 AM, "Dr. Ahmed M. Abou-Setta" <[log in to unmask]> wrote:

Just for transparency, there is a discussion about this review on the Cochrane Collaboration Group's bulletin board on LinkedIn. One commentator suggested that this review is "another testimony to the necessity for proper implementation of randomization in testing treatments". A second commentator responded that the "EBM model... needs re-evaluation. While in theory the EBM model has merit there is much bias in the "evidence" that is used and often raises questionable quality when there is a profit motive involved in those doing the research.", making reference to the book by Ben Goldacre, "Bad Pharma."

Below is my response:

Before we start jumping to conclusions on whether the EBM model is a success or failure based on a single snapshot in time, we should take a closer look at the evidence. Just in the abstract alone, we can get a few glimpses of issues that can affect the external validity (generalizability) of the results of this review.

(1) The question being asked is very specific: "What is the likelihood that new treatments being compared to established treatments in randomized trials will be shown to be superior?" Therefore only 'superiority' trials should be looked at, while non-inferiority and underpowered trials should all be excluded from the analyses.

(2) Only four cohorts of consecutive, publicly funded, randomized trials (743 trials) were included in the analyses. With over 25,000 reports of RCTs being published every year in Medline alone, well, I'll let you do the math :) How confident, then, are we that this truly represents RCTs in general?

(3) The authors report: "We found that, on average, new treatments were very slightly more likely to have favorable results than established treatments, both in terms of the primary outcomes targeted and overall survival." OK, but what about safety? The whole purpose of the RCT might have been to prove that one drug has a preferable side-effect profile compared to the competitor.

(4) And of course, we are forgetting one of the main reasons to undertake an RCT... just to prove, using an experimental design, that one drug is equivalent to another in efficacy and safety. If we use the rationale that drugs/interventions should only be used if they are 'better' than the competitor, then there would be only ONE treatment option for any ONE disease.

(5) There is a potential misunderstanding of the authors' statement: "Random allocation is only ethical, however, if there is genuine uncertainty about which of the treatment options is preferable. If a patient or their healthcare provider is certain which of the treatments being compared is preferable they should not agree to random allocation, because this would involve the risk that they would be assigned to a treatment they believed to be inferior." Off the bat, this statement sounds like it only reflects efficacy, while in fact it should encompass other aspects of treatment including effectiveness, availability, compliance, preferences, moral/religious beliefs, safety, cost, etc. If we boil the whole argument down to statistically significant efficacy, then we lose all the shades of grey that are allowed in the EBM model.


Since Ben is an active member on this listserv I didn't want it to look like I was criticizing his work behind his back and so I am posting my opinion here also.

Ahmed




Date: Wed, 17 Oct 2012 05:24:11 -0700
From: [log in to unmask]
Subject: Re: New treatments turn out better than the old ones just slightly more than half the time in randomized trials
To: [log in to unmask]

Dear colleagues,
Ben Djulbegovic and a whole host of other distinguished members of our Group have just published a Cochrane Review on the subject:
Djulbegovic B, et al. "New treatments compared to established treatments in randomized trials." Cochrane Database of Syst Rev 2012; 10: Art. No. MR000024. http://onlinelibrary.wiley.com/doi/10.1002/14651858.MR000024.pub3/abstract;jsessionid=6739FCF811D60C9C0645840920079841.d03t04
Regards,
Ash 
    



--
Ambuj Kumar, MD, MPH
727-481-2787