Dear Stephen,

Thank you. I am thinking through protocols for my project with elements of
analysis that are clear and could be adapted across projects. This is right
on time for me, interesting and most useful :-)

Best
Amy

From:  Stephen Senn <[log in to unmask]>
Date:  Friday, August 1, 2014 at 2:57 AM
To:  Amy Price <[log in to unmask]>, "'Huw Llewelyn [hul2]'"
<[log in to unmask]>, 'Benjamin Djulbegovic' <[log in to unmask]>
Cc:  'Michael Power' <[log in to unmask]>,
"[log in to unmask]"
<[log in to unmask]>, 'Kevork Hopayian'
<[log in to unmask]>
Subject:  RE: Since when did case series become acceptable to prove
efficacy?

The following paper may also be of interest

Added Values
http://onlinelibrary.wiley.com/doi/10.1002/sim.2074/abstract

I distinguish between different questions one might hope to answer in a
clinical trial:

Q1. Was there an effect of treatment in this trial?
Q2. What was the average effect of treatment in this trial?
Q3. Was the treatment effect identical for all patients in the trial?
Q4. What was the effect of treatment for different subgroups of patients?
Q5. What will be the effect of treatment when used more generally (outside
of the trial)?

and state:

"Given an assumption of what might be called local (or weak) additivity,
that is to say
that the eff?ect of treatment was identical for all patients in the trial
(in other words that the
answer to Q3 is Œyesı), then Q1, Q2, & Q4 can all be answered using the same
analysis: a
con?dence interval or posterior distribution for the mean eff?ect of
treatment says it all. The
eff?ect on each patient is the average e?ffect Q2 and is hence the eff?ect
in every subgroup Q4
and if it is implausible that this eff?ect is zero, then the treatment has
an e?ffect Q1. Given
a further assumption of universal (or strong) additivity, this observed
eff?ect is the e?ffect to
every patient to whom it might be applied; this also provides an answer to
Q5.""
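
As a concrete footnote to that passage (a sketch with invented numbers, not
an analysis from the paper): under local additivity, one confidence
interval for the mean effect does the work of Q1, Q2 and Q4.

    # Minimal sketch: a 95% CI for the mean treatment effect in a
    # two-arm trial, using invented data.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    treated = rng.normal(loc=2.0, scale=4.0, size=50)  # hypothetical outcomes
    control = rng.normal(loc=0.0, scale=4.0, size=50)

    diff = treated.mean() - control.mean()             # Q2: the average effect
    se = np.sqrt(treated.var(ddof=1) / 50 + control.var(ddof=1) / 50)
    half = stats.t.ppf(0.975, df=98) * se              # pooled-df approximation
    print(f"mean effect {diff:.2f}, 95% CI ({diff - half:.2f}, {diff + half:.2f})")
    # If 0 lies outside the interval, a zero effect is implausible (Q1);
    # under local additivity the same interval holds in every subgroup (Q4),
    # and under strong (universal) additivity it also answers Q5.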

Stephen

From: Amy Price [[log in to unmask]]
Sent: 31 July 2014 23:41
To: 'Huw Llewelyn [hul2]'; Stephen Senn; 'Benjamin Djulbegovic'
Cc: 'Michael Power'; [log in to unmask]; 'Kevork Hopayian'
Subject: RE: Since when did case series become acceptable to prove efficacy?

I agree, Huw, and thank you, Stephen, for your clarifications. I am keeping
this stream of communication; it is important to see things through the
eyes of those who write research up, for the times when standardization in
communication fails us. I have seen areas that have contributed to a less
than clear understanding because of my own thinking/background and really
appreciate this exchange.
 
Best
Amy
 

From: Huw Llewelyn [hul2] [mailto:[log in to unmask]]
Sent: 31 July 2014 05:21 PM
To: Stephen Senn; 'healingjia Price'; Benjamin Djulbegovic
Cc: 'Michael Power'; [log in to unmask]; 'Kevork Hopayian'
Subject: Re: Since when did case series become acceptable to prove efficacy?
 
Perhaps the most interesting thing about this discussion is that the same
topic of hypothesis testing regarding RCTs is seen from such very different
perspectives by people trained in different disciplines!

Huw. 


From: Stephen Senn <[log in to unmask]>

Date: Thu, 31 Jul 2014 20:42:17 +0200

To: 'healingjia Price'<[log in to unmask]>; 'Djulbegovic,
Benjamin'<[log in to unmask]>

Cc: 'Huw Llewelyn [hul2]'<[log in to unmask]>; 'Michael
Power'<[log in to unmask]>; <[log in to unmask]>;
'Kevork Hopayian'<[log in to unmask]>

Subject: RE: Since when did case series become acceptable to prove efficacy?

 
There seem to be a lot of things being discussed in this thread. I have four
comments.
1)     The main purpose of the falsificationism paper was to show that there
was a fundamental difference between trying to disprove the hypothesis that
the data come from a single distribution (e.g. the drug is a placebo) and
trying to disprove the hypothesis that the data come from two distributions
(the interventional drug is not the same as the active comparator). The
latter case is what you try to do in active control equivalence studies. If
you reject the two-distribution theory, then you end up with one
distribution and the conclusion that the new treatment is equivalent to the
old. (This is seen most clearly in bioequivalence studies but it applies
elsewhere also.) The point of that paper was to claim that all the
statistical fixes in the world do not deal with the fundamental
'philosophical' difference between these two cases. Some of the technical
issues are currently being debated on Deborah Mayo's blog. See

http://errorstatistics.com/2014/06/05/stephen-senn-blood-simple-the-complicated-and-controversial-world-of-bioequivalence-guest-post/
and
http://errorstatistics.com/2014/07/31/roger-berger-on-senns-blood-simple-with-a-response-by-s-senn-guest-posts/
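
To see what one such "fix" looks like in practice, here is a minimal sketch
(invented data, the conventional margins; not code from the paper or the
posts) of the TOST procedure used in bioequivalence, where one tries to
reject the two-distribution hypothesis:

    # Hedged TOST (two one-sided tests) sketch on the log scale; data are
    # invented and the margins are the conventional log(0.8) and log(1.25).
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)
    test = rng.normal(0.02, 0.25, size=24)  # log(AUC), test formulation
    ref = rng.normal(0.00, 0.25, size=24)   # log(AUC), reference formulation

    d = test.mean() - ref.mean()
    se = np.sqrt(test.var(ddof=1) / 24 + ref.var(ddof=1) / 24)
    df = 46                                 # 24 + 24 - 2
    p_low = 1 - stats.t.cdf((d - np.log(0.8)) / se, df)  # H0: d <= log(0.8)
    p_high = stats.t.cdf((d - np.log(1.25)) / se, df)    # H0: d >= log(1.25)
    print(f"difference {d:.3f}, TOST p = {max(p_low, p_high):.4f}")
    # Rejecting both one-sided nulls licenses a claim of equivalence, but,
    # as argued above, no such fix removes the philosophical asymmetry.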
 
2)     95% of (correctly calculated) 95% confidence intervals will contain
the true parameter value. This does not mean (however), for example, that
95% of CIs that exclude 0 do so correctly.  One must beware of invalid
inversion. 
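
A small simulation (all numbers invented, purely illustrative) makes the
point: any one computed interval either contains the truth or it does not,
the 95% is a long-run property, and it does not survive conditioning on the
interval excluding 0.

    # Hedged sketch of invalid inversion: 95% coverage overall does not
    # imply 95% coverage among the CIs that happen to exclude 0.
    import numpy as np

    rng = np.random.default_rng(3)
    n, sims = 25, 20000
    truth = np.where(rng.random(sims) < 0.9, 0.0, 0.5)  # most true effects are 0

    xbar = rng.normal(truth, 1 / np.sqrt(n))   # estimate with se = 1/sqrt(n)
    half = 1.96 / np.sqrt(n)
    covers = (xbar - half <= truth) & (truth <= xbar + half)
    excludes0 = (xbar - half > 0) | (xbar + half < 0)

    print(f"overall coverage:            {covers.mean():.3f}")             # about 0.95
    print(f"coverage when CI excludes 0: {covers[excludes0].mean():.3f}")  # far lower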

 
3) The case series method, as pioneered by Farrington and Whitaker
Farrington CP, Whitaker HJ. Semiparametric analysis of case series data
(with discussion). Journal of the Royal Statistical Society Series
C-Applied Statistics 2006; 55: 1-28.
and the earlier case cross-over method of Marshall et al
Marshall RJ, Jackson RT. Analysis of case-crossover designs. Statistics in
Medicine 1993; 12: 2333-2341.
 
can be powerful ways of assessing causality using the timing of events.
Even where we have clinical trials, there are occasions where it is clear
that we would not accept the results that the pure randomisation analysis
would produce. Such a case is given by the trial of TGN1412, in which 6 out
of 6 healthy volunteers given TGN1412 had severe reactions and the 2 given
placebo did not. A Fisher's exact test of the result does not begin to
describe what everybody knows. This trial, and a much larger one where it
would be foolish to depart from classical analysis, are discussed here:
Senn S. Lessons from TGN1412 and TARGET: implications for observational
studies and meta-analysis. Pharmaceutical statistics 2008; 7: 294-301.
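
The arithmetic for that 2x2 table (events in 6 of 6 on TGN1412 versus 0 of
2 on placebo) can be reproduced as follows, assuming scipy is available:

    # Fisher's exact test on the TGN1412 table: p = 1/28, about 0.036.
    from scipy.stats import fisher_exact

    #          events  no events
    table = [[6, 0],   # TGN1412 (n = 6)
             [0, 2]]   # placebo (n = 2)
    odds_ratio, p = fisher_exact(table, alternative="two-sided")
    print(f"Fisher's exact p = {p:.4f}")
    # A p-value of about 0.04 does not begin to convey the near-certainty
    # that the drug caused the reactions, which is the point made above.
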
However, it is not clear to me from this discussion that case-series
methodology is being appropriately applied in the example cited.
4) Picking up on an earlier thread, one has a natural prejudice in favour of
one's own work, but I think it would be wise to defer judgement on Tamiflu
until (at least) the MUGAS analysis reports:
http://www.mugas.net/mugas/re-analysis-of-clinical-trials/
 
My declaration of interest is here
http://www.senns.demon.co.uk/Declaration_Interest.htm
 
 
Stephen
 

From: healingjia Price [mailto:[log in to unmask]]
Sent: 31 July 2014 16:21
To: Djulbegovic, Benjamin
Cc: Huw Llewelyn [hul2]; Michael Power;
[log in to unmask]; Kevork Hopayian;
[log in to unmask]
Subject: Re: Since when did case series become acceptable to prove efficacy?
 

Thank you for the precision. It clarifies how errors of thought can creep
in through careless phrasing, which is especially illuminating as we are
dealing with uncertainty.

 

Is there a link on the CI statement, Ben, as I would like to understand
this better? Are you saying that the single computed 95% CI is dichotomous,
and as a result can't be 'between' anything, because we have already
established the 95%?

 

Best

Amy

Amy Price 

Empower 2 Go 

Building Brain Potential

http://empower2go.org

Sent from my iPad


On 31 Jul 2014, at 07:04 am, "Djulbegovic, Benjamin"
<[log in to unmask]> wrote:
> 
> Agree with Huw.
> 
> We have to ask ourselves where the laws come from - presumably from testing
> multiple hypotheses over time, resulting in what Quine called a "web of
> knowledge". (I think we may have been confusing here the difference between
> hypotheses, theories, and laws - admittedly not an easy definition to give
> in a short reply.)
> 
> Ben
> 
>  
> 
> PS. BTW, the 95% CI does not say that the true values are between x and y;
> instead, the frequency with which this single computed 95% CI contains the
> true value is either 100% or 0%.
> 
> 
> Sent from my iPad
> 
> ( please excuse typos & brevity)
> 
> 
> On Jul 31, 2014, at 3:15 AM, "Huw Llewelyn [hul2]" <[log in to unmask]> wrote:
>> 
>> Hi Michael
>> 
>> I agree that the null hypothesis is a device to help estimate the probability
>> of replicating a result using multiple readings eg of an RCT. But that RCT
>> will be based on an underlying hypothesis eg that the treatment molecule is
>> able to compete with an endogenous molecule for a receptor and thus improve a
>> patient's symptoms. This interaction would be based partly on the
>> mathematical model representing a general law called 'the law of mass
>> action'. 
>> 
>> If the RCT fails to show a replicable difference in symptoms between this
>> molecule and a placebo then the entire reasoning leading up to the RCT,
>> (including the general 'law of mass action') is called into question. Popper
>> appears to say that the whole theory / hypothesis is 'falsified'. I prefer to
>> say that it is thrown into doubt and that the probability of success of a
>> hypothesis using that rationale in future is lower.
>> 
>> It is possible that some error was made, eg that the molecule used in the
>> RCT was not manufactured properly and was different from the one used in
>> earlier phases of its development. If this supplementary hypothesis is
>> 'verified', and the
>> correct molecule is used in a second RCT which does show a difference which
>> can probably be replicated, then the probability of the original hypothesis /
>> theory will be restored.
>> 
>> What do you think of this?
>> 
>> Best
>> 
>> Huw 
>> 
>> 
>> From: Michael Power <[log in to unmask]>
>> 
>> Sender: "Evidence based health (EBH)" <[log in to unmask]>
>> 
>> Date: Thu, 31 Jul 2014 06:34:44 +0100
>> 
>> To: <[log in to unmask]>
>> 
>> ReplyTo: Michael Power <[log in to unmask]>
>> 
>> Subject: Re: Since when did case series become acceptable to prove efficacy?
>> 
>>  
>>  
>> Hi
>>  
>> Apologies about but-ing into this conversation so late. But, sometimes
>> philosophy seems to me to make things so complicated that practical
>> understanding disappears.
>>  
>> An RCT comparing the effects of treatments A and B does not test any theory
>> analogous to a general law such as F = ma.
>>  
>> A clinical trial is simply a measuring tool, a tool for measuring effects.
>>  
>> The null hypothesis is a way of framing the statistical theory that underpins
>> the measurement of random variation: if the RCT were to be repeated
>> endlessly, the 95% confidence interval(s) it has measured will "capture" the
>> true measured mean 95% of the time.
>>  
>> This is about precision, not accuracy, i.e. the 95% CI captures the true
>> measured mean, NOT the true mean.
>>  
>> The difference between the true mean and the true measured mean is the bias,
>> or systematic error of the measurement tool, and is a consequence of the
>> accuracy of the measuring instrument.
>>  
>> We can measure precision. But we cannot measure bias; we can only estimate it
>> by critical appraisal (not by deduction, as Stephen Senn's 1991 paper would
>> have us believe).
>>  
>> So, there are 3 fundamental problems of induction:
>>  
>> 1.    Trials measure precision, but our heads see the measurements as
>> accurate - this is a psychological problem, which hopefully can be mitigated
>> by education
>> 
>> 
>> 2.    Trial design (e.g. control, randomization, blinding) can hopefully
>> mitigate the risks of bias.  And, critical appraisal can guestimate the risks
>> of bias. But, because important bias can arise from sources we can't control
>> or can't imagine, our mitigations and guestimates leave an uncertain interval
>> of fuzziness around any measurement. Bias can only be measured against a true
>> reference standard, but we are forced to use an artificial "gold standard",
>> and we know from experience that the gold can turn out to be fool's gold.
>> Conclusion: we need outside evidence to calibrate the accuracy of
>> measurements of clinical trials (which will have the Bayesians clapping their
>> hands in joy).
>> 
>> 
>> 3.    We have evidence of mean (measured) effects and guestimates of risks of
>> bias. How do we apply this evidence to the individual in the consultation?
>> This is nowhere near the problem of betting on whether the sun will rise
>> tomorrow, or if E = mc^2 has to be applied to F = ma when calculating the tides
>> or electron orbitals. (Do I hear the Bayesians again?)
>> 
>>  
>> I hope I have falsified the hope of falsification in RCTs!
>>  
>> Michael
>>  
>>  
>>  
>> 
>> From: Evidence based health (EBH)
>> [mailto:[log in to unmask]] On Behalf Of Djulbegovic,
>> Benjamin
>> Sent: 31 July 2014 00:31
>> To: [log in to unmask]
>> Subject: Re: Since when did case series become acceptable to prove efficacy?
>>  
>> 
>> Hi Huw,
>> 
>> Your e-mail succinctly summarizes the epistemological problems that have been
>> debated in science for hundreds of years, going all the way back to Aristotle!
>> Indeed, the problem of applying the probability calculus to single events
>> is one of the eternal issues. The usually proposed solution is to accept the
>> premise of exchangeability of past with future events - whether this
>> assumption is always acceptable is another issue, but so far it has served
>> us pretty well.
>> 
>> I look forward to hearing further thoughts from you and others on these
>> issues, which are more relevant to EBM than people may appreciate at first
>> blush.
>> 
>> Best
>> 
>> Ben 
>> 
>> 
>> Sent from my iPad
>> 
>> ( please excuse typos & brevity)
>> 
>> 
>> On Jul 30, 2014, at 6:23 PM, "Huw Llewelyn [hul2]" <[log in to unmask]> wrote:
>>> 
>>> Dear Amy, Ben, Kev, Stephen and all
>>> Thank you Ben for attaching a link to Stephen Senn's paper.  I would be
>>> grateful for your comments on the questions that I pose below to try to
>>> clarify my understanding.
>>> If I postulate (or 'hypothesise') that it will probably rain tomorrow and it
>>> does rain, then is my 'rain hypothesis' verified and is the alternative 'no
>>> rain hypothesis' falsified?  Do you think that this use of the terms
>>> 'hypothesis', 'verified' and 'falsified' for a single event is appropriate?
>>> If I postulate that it will probably rain tomorrow, I may look for more
>>> facts which make it more probable or less probable that it will rain.  If a
>>> null hypothesis that it will be sunny becomes very improbable, I can decide
>>> to Œrejectı the plan of going to the sea-side tomorrow to Œsunı myself.
>>> Note that the null hypothesis does not include all the other possibilities
>>> e.g. Œnot sunny but dryı.  Do you agree that we can never verify or falsify
>>> a statistical hypothesis about an infinitely large population by direct
>>> observation but only estimate its probability?
>>> The problem with a scientific hypothesis, theory or diagnosis is that it is
>>> not like a single prediction about a statistical null hypothesis about an
>>> unobservable infinite population or an easily accessible falsifiable or
>>> verifiable hypothesis about tomorrow's rain.  It is a title to a group of
>>> predictions about past, present and future consequences, many of which are
>>> interconnected and the probability of many of which cannot easily be
>>> estimated directly by studies.  The result of a single RCT on an infinite
>>> number of patients is one of these consequences that we can try to predict
>>> from a study sample.  Popper appears to say that if one such consequence
>>> becomes improbable, then the overall hypothesis / theory is 'falsified'.
>>> However, to my mind the validity of the overall hypothesis / theory becomes
>>> less probable and an alternative hypothesis / theory may become more
>>> probable.  A resulting decision to reject one course of scientific
>>> investigation and to pursue another is another matter.  Popper points out
>>> that even if all the consequences do remain probable after testing, we still
>>> cannot assume that the overall hypothesis / theory is verified (as I assume
>>> there may be other consequences and alternative hypotheses / theories that
>>> we have not yet considered).  It seems to me that our imaginary model is a
>>> hypothesis when there is an intention to test it against some alternative
>>> but a theory if there is no immediate intention to do so.  Do you agree?
>>> Best
>>> Huw
>>> 
>>> 
>>> From: Evidence based health (EBH) [[log in to unmask]] on
>>> behalf of Amy Price [[log in to unmask]]
>>> Sent: 30 July 2014 20:14
>>> To: [log in to unmask]
>>> Subject: Re: Since when did case series become acceptable to prove efficacy?
>>> 
>>> Thanks Ben and Kev,
>>>  
>>> I found paragraph 2 of Kev's explanation helpful. I used not to love theory,
>>> and thought it took up too much room in the bathwater. After throwing that
>>> baby out I have searched for it. I came to the conclusion that theory became
>>> difficult when it was overstated and built on as if it were a validated fact
>>> without uncertainties, and so it was not the theory that was the issue but
>>> the abuse of it. I have defined in my own mind the term 'pragmatic' as seeing
>>> if something works and at what dosage/intensity etc., and 'explanatory' as
>>> defining why and improving the working. To me it is a cycle, not an either/or
>>> superiority. The problem comes with expecting more than the design can
>>> support.
>>>  
>>> Best
>>> Amy
>>>  
>>> 
>>> From: Evidence based health (EBH)
>>> [mailto:[log in to unmask]] On Behalf Of Djulbegovic,
>>> Benjamin
>>> Sent: 30 July 2014 02:37 PM
>>> To: [log in to unmask]
>>> Subject: Re: Since when did case series become acceptable to prove efficacy?
>>>  
>>> Perhaps, this paper to which I alluded before may help clarify some of the
>>> issues we are discussing
>>>  
>>>                    Senn SJ. Falsificationism and clinical trials. Stat Med.
>>> 1991;10:1679-1692.
>>>  
>>> Stephen used to be active on this group; perhaps he may wish to comment…
>>>  
>>> Best
>>> ben
>>>  
>>>  
>>> 
>>> From: Evidence based health (EBH)
>>> [mailto:[log in to unmask]] On Behalf Of k.hopayian
>>> Sent: Wednesday, July 30, 2014 5:08 AM
>>> To: [log in to unmask]
>>> Subject: Re: Since when did case series become acceptable to prove efficacy?
>>>  
>>> Dear Ben and all,
>>> 
>>> To those who dislike theory, at least please read the next paragraph.
>>> 
>>>  
>>> 
>>> 1 This discussion is important because of its practical consequences. Many
>>> people outside EBP mistakenly believe that EBP holds that anything less than
>>> a clinical trial is poor evidence. So comparing RCTs to the method of basic
>>> science that develops theories can give support to these mistaken beliefs.
>>> (That is not to say that you, Ben, are mistaken.) Observational studies are
>>> no less scientific (in the sense of applying statistics, medical knowledge,
>>> pharmacology etc.) than RCTs; they both use the tools of epidemiology; it is
>>> the risk of bias that differs.
>>> 
>>>  
>>> 
>>> 2 I would argue that the RCT process has a superficial resemblance to the
>>> Popperian method: Null hypothesis - Experiment - Reject/Do not reject null
>>> hypothesis; Theory - Prediction - Experiment - Reject/Do not reject theory.
>>> 
>>> The difference is that theories attempt to explain observations already made
>>> and then predict new observations for testing. The null hypothesis neither
>>> explains current observations nor predicts new ones. It applies
>>> theories that do. The use of the word explanatory in the explanatory vs
>>> pragmatic distinction of trials only adds to the superficial resemblance,
>>> perhaps an example of what a contemporary of Popper said: that philosophy is
>>> the battle against bewitchment by language.
>>> 
>>>  
>>> 
>>> No, it isn't fall yet, but the Parrotia persica outside my front door
>>> proudly displays leaves of varying shades. It was a bare, straggly, ugly
>>> piece of bark when I planted it. Five years later, it is an admirable
>>> exhibit. 
>>> 
>>>  
>>> 
>>> Good luck with the grant application!
>>> 
>>>  
>>> 
>>> Kev
>>> 
>>>  
>>> 
>>> On 26 Jul 2014, at 15:38, Djulbegovic, Benjamin <[log in to unmask]>
>>> wrote:
>>>  
>>> 
>>> This is a "learning" weekend for me, Kev.
>>> 
>>> Working on the grant (what else is new?) and my 99% of perspiration is on
>>> welcome occasions being alleviated by reflection on thoughtful messages and
>>> remarks by people like you… (I confess: when I get tired of writing, I log
>>> back on my e-mail, and the messages from the EBM folks never stop inspiring
>>> me…)
>>> 
>>>  
>>> 
>>> In thinking about your latest example, we may be talking about two different
>>> things: evidence vs. decision-making, which in the clinical trial design
>>> paradigm translates into explanatory trials (whose goal is to provide a
>>> scientific answer to a research question, typically focusing on proof of
>>> a concept or mechanism etc.; the "efficacy" question) vs. pragmatic trials
>>> ("Which treatment of already proven efficacy is better?"; the "effectiveness"
>>> question). [Several years ago we used a regret approach to tackle these
>>> issues; see: Hozo I, Schell MJ, Djulbegovic B. Decision-making when data and
>>> inferences are not conclusive: risk-benefit and acceptable regret approach.
>>> Semin Hematol. Jul 2008;45(3):150-159.]
>>> 
>>> In terms of the application of Popper's discourse on scientific method to
>>> clinical trials, Jeremy Howick informed me that Fisherian hypothesis testing
>>> and Popperian falsificationism are identical. Fisher wrote it first, but
>>> Popper had not read Fisher (until much later, I believe), at which point
>>> Popper acknowledged the similarity… (I slightly edited Jeremy's words, and
>>> if I am not quoting him accurately, I hope he can clarify.)
>>> 
>>>  
>>> 
>>> Enjoy auburn leaves (is it already fall in England?)
>>> 
>>> Best
>>> 
>>> ben
>>> 
>>>  
>>> 
>>>  
>>> 
>>>  
>>> 
>>> From: Evidence based health (EBH)
>>> [mailto:[log in to unmask]] On Behalf Of k.hopayian
>>> Sent: Saturday, July 26, 2014 7:47 AM
>>> To: [log in to unmask]
>>> Subject: Re: Since when did case series become acceptable to prove efficacy?
>>> 
>>>  
>>> 
>>> Hi Ben,
>>> 
>>>  
>>> 
>>> Short reply: I respectfully disagree that a trial is an example of Popper's
>>> scientific method. Popper was concerned with theory. Whether drug A is
>>> better than B is not a theory (although confusingly, the negation of that
>>> statement is called the null hypothesis) in the sense that the biochemistry
>>> and pharmacology around those drugs have theories (such as drug receptors,
>>> enzyme action etc).
>>> 
>>>  
>>> 
>>>  
>>> 
>>> Slightly longer reply: Theories make predictions but not all predictions
>>> come from theories.
>>> 
>>> For example, the laws of physics are applied in engineering. Physics and
>>> engineering have theories. Engineers may design cars. Two cars may be
>>> compared for performance (acceleration, speed, efficiency etc). In comparing
>>> them, a manufacturer may start with an idea (our cars are better than yours)
>>> but that would hardly count as a theory. [By the way, have car manufacturers
>>> ever ever started with a null hypothesis? :-)] The manufacturer's prediction
>>> may be proven wrong (and no doubt get buried in company vaults, car
>>> manufacturers are so different to pharma, aren't they?) but no rejection of
>>> the theory of engineering is necessarily implied. Now if an engineer
>>> designed a car whose performance was beyond the boundaries predicted by the
>>> laws of thermodynamics, that WOULD falsify theory.
>>> 
>>>  
>>> 
>>> It is a lovely sunny weekend here in Suffolk, with many shades of green and
>>> auburn leaves, so out I go. I hope your weekend is a good one too.
>>> 
>>>  
>>> 
>>>  
>>> 
>>>  
>>> 
>>> Dr Kev (Kevork) Hopayian,
>>> 
>>> MD FRCGP
>>> General Practitioner, Leiston, Suffolk,
>>> 
>>> General Practice Trainer, Leiston
>>> 
>>> Hon Sen Lecturer, Norwich Medical School, University of East Anglia
>>> Primary Care Tutor, East Suffolk
>>> 
>>> RCGP Clinical Skills Assessment examiner
>>> 
>>> NHS Senior Appraiser, East Anglia
>>> 
>>> http://www.angliangp.org
>>> 
>>>  
>>> 
>>> On 24 Jul 2014, at 23:40, Djulbegovic, Benjamin <[log in to unmask]> wrote:
>>> 
>>>  
>>> 
>>> Kev,
>>> 
>>> In fact, a clinical trial is a classic example of (Popper's) falsificationist
>>> paradigm... and does test the hypothesis that one drug is better than the
>>> other (H0: A=B; Ha: A<>B, as posited by the classic frequentist statistical
>>> approach). Statistical evidence obtained this way connects one phenomenon
>>> with others, eventually corroborating (or rejecting) theories (groups of
>>> related principles and laws that were built by testing a number of
>>> hypotheses), such as whether beta-blockers' effects are consistent with
>>> biochemical drug-receptor theory...
>>> 
>>> Best
>>> 
>>> Ben
>>> 
>>> PS Stephen Senn had a wonderful article some years ago about falsificationism
>>> in clinical trials... worth reading...
>>> 
>>>  
>>> 
>>> 
>>> Sent from my iPad
>>> 
>>> ( please excuse typos & brevity)
>>> 
>>> 
>>> On Jul 24, 2014, at 5:06 PM, "k.hopayian" <[log in to unmask]> wrote:
>>>> 
>>>> "The definition of breakthrough is "it costs a packet" "  Now that, I
>>>> like.. 
>>>> 
>>>>  
>>>> 
>>>> But I have to disagree on your method of science. Einstein's general
>>>> relativity theory remains a theory despite the experimental results that
>>>> concord with the predictions it makes. What these experiments do is fail to
>>>> falsify the theory, so we stick with it. There are some things it cannot
>>>> explain, which quantum theory does better, so we continue with two
>>>> incompatible but very useful models to explain our world.
>>>> 
>>>>  
>>>> 
>>>> Such models and experiments should not be confused with the method
>>>> employed in trials. Trials are not designed to test a theory (for example,
>>>> a trial of beta-blockers is not testing the theory that there are
>>>> biochemical receptors). The trial is there to establish which of the
>>>> interventions, if any, is superior. No theory/model is falsified by
>>>> such experiments - although beliefs (some cherished) can be dispelled. I
>>>> suppose statisticians/epidemiologists have not helped our understanding by
>>>> using the term hypothesis testing.
>>>> 
>>>>  
>>>> 
>>>> Kev
>>>> 
>>>>  
>>>> 
>>>>  
>>>> 
>>>>  
>>>> 
>>>> Dr Kev (Kevork) Hopayian,
>>>> 
>>>> MD FRCGP
>>>> General Practitioner, Leiston, Suffolk,
>>>> 
>>>> General Practice Trainer, Leiston
>>>> 
>>>> Hon Sen Lecturer, Norwich Medical School, University of East Anglia
>>>> Primary Care Tutor, East Suffolk
>>>> 
>>>> RCGP Clinical Skills Assessment examiner
>>>> 
>>>> NHS Senior Appraiser, East Anglia
>>>> 
>>>> http://www.angliangp.org
>>>> 
>>>>  
>>>> 
>>>> On 24 Jul 2014, at 20:49, Tom Jefferson <[log in to unmask]> wrote:
>>>> 
>>>>  
>>>> 
>>>> The definition of breakthrough is "it costs a packet" - and Sovaldi fits
>>>> the picture.
>>>> This kind of bullshit is replicated in the EU with so-called early
>>>> assessment to get better, innovative drugs earlier to patients who
>>>> desperately need them. So the burden of proof is slowly being pushed back
>>>> to phase IV or beyond, which may be observational, subverting Galileo's
>>>> methods.
>>>> 
>>>> What we should always remember is that Einstein's general relativity theory
>>>> (1915) was a theory and remained a theory until Eddington's natural
>>>> experiment during the 1919 solar eclipse confirmed that gravitation could
>>>> deflect starlight, as the theory had predicted.
>>>> 
>>>>  
>>>> The rise of observational data (even non-comparative) is an involution, not
>>>> an evolution. There are many culprits, most of them in my profession (I am a
>>>> physician), and they will be held to account.
>>>> Greed and science do not mix.
>>>> Nite from Rome.
>>>> 
>>>> Tom.
>>>> 
>>>>  
>>>> 
>>>> On 24 July 2014 17:10, Poses, Roy <[log in to unmask]> wrote:
>>>> 
>>>> Thanks, Tom.  Could not agree more.  But it seems like there is little
>>>> protest about this paradigm shift, your work, of course, excepted.
>>>> Re: "trials are for regulators" - but in the US, the regulators apparently
>>>> decided they don't need so many trials.  The FDA designated Sovaldi/
>>>> sofosbuvir as a "breakthrough" therapy, which apparently allows approvals
>>>> based on much more limited evidence, although the evidence behind that
>>>> "breakthrough" designation itself was not clear.
>>>> 
>>>> See:
>>>> http://www.fda.gov/newsevents/newsroom/pressannouncements/ucm377888.htm
>>>> 
>>>>  
>>>> 
>>>> On Thu, Jul 24, 2014 at 10:45 AM, Tom Jefferson <[log in to unmask]> wrote:
>>>>> 
>>>>> Roy and all evidencers.
>>>>> The scientific method of Galileo has been subverted before our very eyes.
>>>>> Galileo observed, described and then produced a hypothesis or theory, which
>>>>> he then proceeded to test with an experiment. This model has served us
>>>>> well over the last 400 years, with a few exceptions (the already-cited
>>>>> penicillin, for example).
>>>>> What we now witness is a fundamental subversion of the order of things (I
>>>>> am not going to call it a paradigm shift). Observations are fact;
>>>>> case-control studies, case series and cohorts (even retrospective and
>>>>> data-linked ones) are being held out as proof. Trials are for regulators,
>>>>> they say.
>>>>> The origin of all this is complex and partly known. In the Tamiflu story,
>>>>> as we began uncovering the extent of reporting bias affecting the clinical
>>>>> trials that had been used to make policy and justify stockpiling, decision
>>>>> makers turned to observational evidence (of universally recognised poor
>>>>> quality) as props for their unchangeable policies.
>>>>> It is a sad parable of the world we live in.
>>>>> Best wishes,
>>>>> 
>>>>> Tom.
>>>>> 
>>>>>  
>>>>> 
>>>>> On 24 July 2014 16:33, Valerie King <[log in to unmask]> wrote:
>>>>> 
>>>>> Agree Roy.
>>>>> 
>>>>>  
>>>>> 
>>>>> Don't think we can assume, based on these case series in highly selected
>>>>> populations, that the "eradication" rate is >=90% or that SVR is a good
>>>>> surrogate. Also, although the studies were registered with SVR24 as the
>>>>> primary outcome, the FDA let them give SVR12 as part of the "breakthrough"
>>>>> designation, so we don't even have a particularly good surrogate.
>>>>> 
>>>>>  
>>>>> 
>>>>> Virtually all of the subjects in published studies had a very positive
>>>>> treatment prognosis anyway. Over half of subjects had HCV genotype 2, which
>>>>> is the easiest to treat no matter what drug is used. And there is certainly
>>>>> more than a hint in several of the studies of substantial relapse rates
>>>>> after SVR24 is achieved (e.g. ~9% in NEUTRINO). I honestly think that most
>>>>> people are listening to the marketing drumbeat on this drug and not
>>>>> reading the papers for themselves. I'd be happy to have these drugs be the
>>>>> breakthrough that most people seem to think they are, but in my
>>>>> opinion there is currently a lack of data to support that position.
>>>>> 
>>>>>  
>>>>> 
>>>>> Cheers,
>>>>> 
>>>>> Valerie
>>>>> 
>>>>>  
>>>>> 
>>>>> Valerie J. King, MD, MPH
>>>>> 
>>>>> Professor of Family Medicine, and
>>>>> 
>>>>> Public Health & Preventive Medicine
>>>>> 
>>>>> Director of Research
>>>>> 
>>>>> Center for Evidence-based Policy
>>>>> 
>>>>> Oregon Health & Science University
>>>>> 
>>>>> Mailstop MDYCEBP
>>>>> 
>>>>> Suite 250
>>>>> 
>>>>> 3030 SW Moody Ave.
>>>>> 
>>>>> Portland, OR 97201
>>>>> 
>>>>> Voice: 503-494-8694
>>>>> 
>>>>> Fax: 503-494-3807
>>>>> 
>>>>> [log in to unmask]
>>>>> 
>>>>> www.ohsu.edu/policycenter/
>>>>> 
>>>>> Twitter: @drvalking
>>>>> 
>>>>>  
>>>>> 
>>>>>  
>>>>> 
>>>>>  
>>>>> 
>>>>> From: Evidence based health (EBH)
>>>>> [mailto:[log in to unmask]] On Behalf Of Poses, Roy
>>>>> Sent: Thursday, July 24, 2014 07:08
>>>>> To: [log in to unmask]
>>>>> Subject: Re: Since when did case series become acceptable to prove
>>>>> efficacy?
>>>>> 
>>>>>  
>>>>> 
>>>>>  
>>>>> 
>>>>> But it is not clear that SVR is a good surrogate marker.
>>>>> 
>>>>> 
>>>>> 
>>>>> See:
>>>>> http://www.thelancet.com/journals/lancet/article/PIIS0140-6736%2814%2961025-4/fulltext
>>>>> 
>>>>> and the Cochrane review it references:
>>>>> 
>>>>> http://www.ncbi.nlm.nih.gov/pubmed/24585509
>>>>> 
>>>>>  
>>>>> 
>>>>> As an aside, given that hep C infection is a chronic problem and bad
>>>>> effects from it occur long after the original infection, it is extremely
>>>>> strange that no one has ever thought to do a good RCT with long-term
>>>>> follow-up to assess clinical outcomes of ANY hepatitis C treatment.
>>>>> 
>>>>>  
>>>>> 
>>>>> On Wed, Jul 23, 2014 at 4:54 PM, Djulbegovic, Benjamin
>>>>> <[log in to unmask]> wrote:
>>>>> 
>>>>> But, if eradication of viral load is greater than 90% (and we accept
>>>>> viral load as a good surrogate marker), would a single-arm study then be
>>>>> justified?
>>>>> 
>>>>> Ben 
>>>>> 
>>>>> Sent from my iPhone
>>>>> 
>>>>> (Please excuse typos & brevity)
>>>>> 
>>>>> 
>>>>> On Jul 23, 2014, at 4:50 PM, "Poses, Roy" <[log in to unmask]> wrote:
>>>>>> 
>>>>>> That seems reasonable, but certainly does not apply to this particular 
>>>>>> clinical situation and article.  
>>>>>> 
>>>>>>  
>>>>>> 
>>>>>> On Wed, Jul 23, 2014 at 4:47 PM, Steve Simon, P.Mean Consulting
>>>>>> <[log in to unmask]> wrote:
>>>>>> 
>>>>>> On 7/23/2014 9:39 AM, Poses, Roy wrote:
>>>>>>> > I still don't see why case-series without any control groups are now
>>>>>>> > regarded as credible ways to evaluate efficacy of therapy???
>>>>>> I cannot comment on this particular example, but in general you can 
>>>>>> safely dispense with a control group when there is close to 100% 
>>>>>> morbidity or mortality in that control group. In such a setting, any 
>>>>>> improvement is painfully obvious and does not need a rigorous design or 
>>>>>> fancy statistical analysis. Also, it is pretty difficult and probably 
>>>>>> unethical to randomly assign half your patients to be in a group that has 
>>>>>> 100% morbidity or mortality.
>>>>>> 
>>>>>> Steve Simon, [log in to unmask], Standard Disclaimer.
>>>>>> Sign up for the Monthly Mean, the newsletter that
>>>>>> dares to call itself average at www.pmean.com/news
>>>>>> 
>>>>>>  
>>>>> 
>>>>> 
>>>>> --
>>>>> 
>>>>> Dr Tom Jefferson
>>>>> Medico Chirurgo
>>>>> GMC # 2527527
>>>>> www.attentiallebufale.it
>>>> 
>>>> 
>>>> 
>>>> 
>>>> -- 
>>>> Roy M. Poses MD FACP
>>>> President
>>>> Foundation for Integrity and Responsibility in Medicine (FIRM)
>>>> [log in to unmask]
>>>> Clinical Associate Professor of Medicine
>>>> Alpert Medical School, Brown University
>>>> [log in to unmask]
>>>> 
>>>> 
>>>> 
>>>> 
>>>> -- 
>>>> Dr Tom Jefferson
>>>> Medico Chirurgo
>>>> GMC # 2527527
>>>> www.attentiallebufale.it
>>>