Dear Michael,
I would have to look into the Servier case to comment on it specifically. As somebody who has presented for companies at the FDA and the EMA, I can tell you that they do get very picky about pre-specification.
ICH E9 states the regulatory view clearly, for example in section 2.2.1:
"The primary variable should be specified in the protocol, along with the rationale for its selection. Redefinition of the primary variable after unblinding will almost always
be unacceptable, since the biases this introduces are difficult to assess. When the clinical effect defined by the primary objective is to be measured in more than one
way, the protocol should identify one of the measurements as the primary variable on the basis of clinical relevance, importance, objectivity, and/or other relevant
characteristics, whenever such selection is feasible....."
and in Section 5.1, "Prespecification of the Analysis":
"When designing a clinical trial the principal features of the eventual statistical analysis of the data should be described in the statistical section of the protocol. This section should include all the principal features of the proposed confirmatory analysis of the primary variable(s) and the way in which anticipated analysis problems will be handled. In case of exploratory trials this section could describe more general principles and directions."
and in many other places throughout the document. In fact the enormous and rapidly growing literature on adjusting for multiplicity is very much influenced by drug regulation.
As regards NICE, I can also cite a case where NICE based its judgement on a single clinical trial whose authors refused to release the data to others, on the grounds that they alone were uniquely competent to analyse them. I seem to recall also that NICE itself refused to place an economic model it used in the public domain until taken to court. None of this proves that NICE is not a good thing, but what is sauce for the goose is sauce for the gander, and although I think that the FDA and the EMA make plenty of bizarre judgements, they also deserve some credit.
On the other hand, I also headed a university statistics clinic for several years, and I can tell you that in at least one university the majority of independent medical researchers pay no attention to pre-specification whatsoever. In fact, my most common exasperated question to clients was 'how could you plan this study if you did not know how you would analyse it?' I would be surprised if things were better elsewhere. There may be other list members who have similarly been involved in statistics clinics, and it would be interesting to know whether their experience differs.
As somebody who has worked for a pharma company, I can also give an example of being put under pressure by an investigator to change the pre-specified analysis in order to increase the chances of publication, and of my company backing me when I refused to do so. You might like to look into the FDA/NEJM case I posted recently for another curious example.
On a technical note, I think pre-specification is an issue in meta-analysis if Glass's effect size is used, because different investigators will have used different measures; one may then be picking the 'best' outcome from each study. It is less of a problem if the same outcomes are available for every study, even if the 'primary' outcome varied from study to study. Early stopping is also not really a problem for meta-analysts.
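The bias from picking the 'best' outcome per study can be sketched in a small simulation (all numbers are illustrative, not from any trial discussed here): if a meta-analyst takes, from each null study, the largest of several independently measured standardized effects rather than the same outcome each time, the pooled estimate drifts away from zero.

```python
import random
import statistics

random.seed(1)

def simulate_study(n_outcomes=3, n_per_arm=50):
    """One null trial measuring several outcomes; returns a
    Glass-type standardized effect for each outcome."""
    effects = []
    for _ in range(n_outcomes):
        treat = [random.gauss(0, 1) for _ in range(n_per_arm)]
        ctrl = [random.gauss(0, 1) for _ in range(n_per_arm)]
        sd_ctrl = statistics.stdev(ctrl)  # Glass uses the control-group SD
        effects.append((statistics.mean(treat) - statistics.mean(ctrl)) / sd_ctrl)
    return effects

n_studies = 200
# Meta-analyst A takes the largest effect each study reports;
# meta-analyst B takes the same (first) outcome from every study.
picked_best = [max(simulate_study()) for _ in range(n_studies)]
picked_fixed = [simulate_study()[0] for _ in range(n_studies)]

print("pooled effect, 'best' outcome per study:", round(statistics.mean(picked_best), 3))
print("pooled effect, same outcome per study :", round(statistics.mean(picked_fixed), 3))
# The fixed-outcome pool sits near 0; the 'best of three' pool is
# biased upward even though every true effect is exactly zero.
```

The point of the sketch is only that the selection step, not the data, creates the apparent effect.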
Stephen
Stephen Senn
Professor of Statistics
School of Mathematics and Statistics
Direct line: +44 (0)141 330 5141
Fax: +44 (0)141 330 4814
Private Webpage: http://www.senns.demon.co.uk/home.html
University of Glasgow
15 University Gardens
Glasgow G12 8QW
The University of Glasgow, charity number SC004401
________________________________________
From: Michael Power [[log in to unmask]]
Sent: 21 August 2011 14:54
To: Stephen Senn; [log in to unmask]
Subject: RE: Regulation, publication and pre-specification
Stephen,
" Case 4 is biasing and is outlawed in the regulatory framework."
What does outlawed mean in practice?
For example, Servier embroiled NICE in legal proceedings that lasted about 2
years. The argument was over differing interpretations of the risk of bias
in results from a subgroup analysis in a trial of strontium ranelate. NICE's
case boiled down to the fact that these analyses were not prespecified and
the age group was unusual - reading between the lines it seems that they
suspected that other subgroup analyses had been done and the most convenient
results selected for publication. Servier's main defence seemed to be that
they were asked to do this analysis by the regulator, EMA (European
Medicines Agency). If they gave NICE the original data to verify the
analyses and check other, more usual, age groups, I missed this in the
reports.
Michael
-----Original Message-----
From: Stephen Senn <[log in to unmask]>
To: [log in to unmask]
Sent: Sunday, 21 August 2011, 11:37
Subject: Re: Ezetimibe/Simvastatin
Multiplicity is a tricky issue. I too do not believe in the mystic value of
pre-specification. Nevertheless, evidentially there are some different
scenarios one can imagine.
1. Several outcomes were measured; all were analysed and presented.
2. Several outcomes were measured; all were analysed and presented, and one
was pre-specified.
3. Several outcomes were measured, but only the pre-specified one was
presented, and it was always known that this would be the case.
4. Several outcomes were measured and analysed, but one that was not
pre-specified was presented.
For a third party I don't see much difference between 1 and 2 except perhaps
that 2 is indicative of some thinking by those who conducted the trial that
may be useful as secondary information. (However one may have to be very
careful in case 1 to avoid falling into the trap of only paying attention to
the most significant measure.) Case 3 I think is a shame because one would
like to know about the other measures but it is not biasing. Case 4 is
biasing and is outlawed in the regulatory framework. In fact there is at
least one case where the FDA has rapped the knuckles of the NEJM for
publishing a type 4 analysis.
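The trap mentioned under case 1 — implicitly reporting the most significant of several measures — can be illustrated with a minimal simulation (hypothetical numbers, not from any trial in this thread): under the null, a pre-specified outcome is 'significant' about 5% of the time, while the best of five outcomes crosses p < 0.05 far more often.

```python
import math
import random

random.seed(2)

def z_test_p(x, y):
    """Two-sided two-sample z-test p-value, treating the SD as known (= 1)."""
    n = len(x)
    z = (sum(x) / n - sum(y) / n) / math.sqrt(2.0 / n)
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

def null_trial(n_outcomes=5, n_per_arm=40):
    """One trial with no true effect on any outcome; returns all p-values."""
    return [
        z_test_p([random.gauss(0, 1) for _ in range(n_per_arm)],
                 [random.gauss(0, 1) for _ in range(n_per_arm)])
        for _ in range(n_outcomes)
    ]

n_trials = 2000
pvals = [null_trial() for _ in range(n_trials)]
fp_prespecified = sum(p[0] < 0.05 for p in pvals) / n_trials
fp_best_of_five = sum(min(p) < 0.05 for p in pvals) / n_trials

print("false-positive rate, pre-specified outcome :", fp_prespecified)   # near 0.05
print("false-positive rate, most significant of 5:", fp_best_of_five)    # roughly 1 - 0.95**5
```

This is why a type 4 analysis is biasing for a third party even when each individual test is perfectly valid.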
So which was the case here?
Stephen
Stephen Senn
Professor of Statistics
School of Mathematics and Statistics
Direct line: +44 (0)141 330 5141
Fax: +44 (0)141 330 4814
Private Webpage: http://www.senns.demon.co.uk/home.html
University of Glasgow
15 University Gardens
Glasgow G12 8QW
The University of Glasgow, charity number SC004401
________________________________________
From: Piersante Sestini
[[log in to unmask]]
Sent: 21 August 2011 03:36
To: Stephen Senn
Cc: [log in to unmask]
Subject: Re: Ezetimibe/Simvastatin
On 20/08/2011 10.35, Stephen Senn wrote:
> Does anybody know if the endpoint was changed before or after unblinding
of the data?
> Stephen
This is an excellent point.
I don't believe in the mystique of a priori primary outcomes. As Ludwik Fleck
pointed out more than 70 years ago, by that standard we should not accept the
discovery of America, because Columbus's declared primary outcome was to reach India.
Thus there is nothing wrong with reinterpreting data to accept the
conclusions they logically support, provided they are fairly reported.
But here the point is different: it is that of a possibly casuistic (in the
popular, negative sense of the word) choice of an outcome based not on logic
but perhaps on reaching statistical "significance", which by itself lends no
logical strength to the interpretation of the data, or on some other
untold reason.
The argument that analysing the results from a single carotid site is
faster than analysing the three that were already measured and recorded is
ridiculous, and I think it should have been rejected. The two more likely
explanations that I see are (a) the results from that analysis "look better",
and/or (b) a form of self-plagiarism: an attempt to maximise the number of
papers "produced" by the study, by publishing the other results in a
separate paper.
Both explanations appeal to the sponsor (amplifying the selling points) and
to the participating scientists (fattening the CV), and are a consequence of
a "pathological" mechanism in how the scientific literature is evaluated
and used.
I maintain that it is the journal editors who should guard against this kind
of ethical misconduct. It makes no sense to require that clinical trials be
pre-registered if their reports are accepted without questioning whether the
trials have been conducted and reported accordingly.
The problem is that journals also have an interest in publishing more
papers, particularly when those papers have a good chance of raising interest
(and sponsors do help with this), of being cited afterwards (increasing the
journal's impact factor), or of generating income through reprint sales to
drug companies.
Thus, with due exceptions, drug companies, clinical scientists (including
reviewers) and journals are largely in bed together, and breaking these
pathological mechanisms seems difficult. Open disclosure (in this case, of
the reasons for the change in primary outcome and of when it occurred) could
help. Of course, your proposal of moving the publication of clinical trial
results out of this business could also help, although with the danger of
creating one more dumb bureaucracy caring more about rules than about logic.
Nevertheless, while promoting a new journal that enforces stricter rules for
the publication of clinical trials could be successful, forbidding authors
to submit to other journals is not simple.
The worst thing is that many call this process "evidence-based medicine".
Any effort to explain to practitioners and the community that EBM is a
different thing, one that suffers from rather than causes these problems,
would be valuable.
regards,
Piersante Sestini