To elaborate on what Brian said (and he put it better than I did): the
problem with PP is that the less enthusiastic patients don't take the
treatment - and they tend to be the ones who would have done worse
anyway - so you get a biased result: the treatment looks better than it
should. The equally unenthusiastic people in the control group, however,
stay in the analysis, and they do worse.
To give an example, I was vaguely involved in a study of calcium and
vitamin D for older women at risk of hip fractures. Some women in the
intervention group didn't take their pills - so do we remove them or
not? If we remove them, we are probably removing the people who also
take less care of their health in other ways, and we can't remove their
counterparts from the control group (because the control group had no
pills to take). So when we remove them, the pills look more effective
than they really are.
But if we keep them in, well, the problem is that they didn't take
their pills, so the treatment looks less effective than it really is.
This means we get a result biased towards the null hypothesis - and
that is intention to treat (ITT). It's what we do, because
statisticians like being conservative (with a lower case 'c').
However, it also means that the study doesn't assess the effectiveness
of taking pills; it assesses the effectiveness of being given pills to
take - and this is more like the real-life effect, because the doctor
doesn't make you take pills, but rather gives you pills (or a
prescription). If no one takes the pills (for whatever reason), it
doesn't matter whether they work. The classic example of this is faecal
occult blood screening as a test for bowel cancer - you can tell people
to get a spoonful of poo and send it in a jar for analysis, but 90%
won't. This is why we do ITT: because we want to know the effectiveness
of the treatment in real life.
However, what we really want to know is the effectiveness of the
treatment for those people who would have taken it had it been offered.
(You can ask the doctor what the chances are that the pills will make
you better - but research using ITT doesn't tell us that; it tells us
the chances that the doctor telling you to take the pills will make you
better, and that's not the same thing.) There's a way of doing this: we
estimate the probability that a person in the control group would have
taken the pills, had they been offered them, and weight the control
group by this probability, thereby making sure that the control group
is equivalent to the treatment group *who took the pills*. This is
called CACE (Complier Average Causal Effect) or LATE (Local Average
Treatment Effect) analysis. And (chance to blow my own trumpet) it's
described here: Hewitt, C.E., Torgerson, D.J., & Miles, J.N.V. (2006).
Is there another way to account for contamination in randomised
controlled trials? Canadian Medical Association Journal, 175(4),
347-348.
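In the simplest special case - one-sided non-compliance, where nobody in the control arm can get the treatment - the CACE/LATE estimate reduces to the Wald (instrumental-variable) estimator: the ITT effect divided by the compliance rate. The full weighting approach in the paper handles messier situations; this sketch, with invented numbers, just shows the arithmetic of that simple case.

```python
# Toy numbers (invented) for a trial with one-sided non-compliance:
# nobody in the control arm can get the treatment.
n_treat, n_control = 500, 500
mean_treat, mean_control = 2.4, 2.0   # mean outcomes, analysed as randomised
compliers_in_treat = 350              # how many actually took their pills

itt_effect = mean_treat - mean_control     # effect of being *offered* pills
compliance = compliers_in_treat / n_treat  # P(takes pills | offered pills)

# Wald / instrumental-variable estimator: under the usual assumptions the
# whole ITT effect is produced by the compliers, so divide by their share.
cace = itt_effect / compliance

print(f"ITT effect: {itt_effect:.2f}")
print(f"Compliance: {compliance:.2f}")
print(f"CACE/LATE:  {cace:.2f}")
```

So if only 70% of the treatment arm took their pills, an ITT effect of 0.4 corresponds to an effect of about 0.57 among the compliers.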
This is all pretty complex. And you have the extra problem that
potentially differential dropout could also bias the results. But
that's pretty complex too, and as the editor didn't raise it, I'm not
going to talk about it.
J
2010/1/6 Brian K. Saxby <[log in to unmask]>:
> Hi Ioanna,
>
> Intention to Treat (ITT) analysis versus Per Protocol (PP - those who
> complete the study as you intended, which is pretty much what you have so
> far) is a common issue in clinical trials, so it may be worth looking at the
> EMEA/FDA websites and clinical trial journals for source refs. By looking
> only at completers in the PP analysis, you run the danger of biasing your
> results towards finding a positive result and saying a treatment is
> effective, when actually it's only effective in those who can put up with
> it/buy into it - that's probably where the editor is coming from.
>
> As Jeremy said, for the ITT you should include the data on all cases in the
> group they were allocated to (I'm presuming you have two randomised groups
> in this study?), regardless of whether they completed treatment.
>
> In an ideal trial you'd still have been able to collect follow-up assessment
> data even in those who withdrew from treatment. Your ITT would tell you the
> effect overall, and the PP would tell you the effect of receiving the full
> treatment. But you have some subjects without assessment data? Am I correct
> in thinking you only have two timepoints - baseline and end of trial (EOT)?
> (if you have other timepoints, I strongly suggest including these in your
> analysis model). It doesn't give you much room for imputation methods. For
> those where you have only EOT, I'm not sure there's much you can do without
> a baseline - I'm not a fan of substituting with the group mean, certainly
> not at baseline. For those with baseline but not follow-up, a common
> imputation method for ITT is Last Observation Carried Forward (LOCF), where
> you take the latest post-baseline measurement for a subject and substitute
> that for the missing timepoints. In your case though, if you only have two
> timepoints, you don't have another post-baseline measurement to bring
> forward to EOT. I doubt the editor would be suggesting you use the baseline
> as the LOCF value, as that would effectively mean 25% of your sample
> would show no change (although if your intervention shows an effect even
> with a quarter of your sample not changing, that might be worth reporting!).
> Did the editor make reference to imputation methods, or were they just more
> interested in including the data on those who didn't receive full treatment?
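[To interject on the mechanics: with more than two timepoints, LOCF amounts to carrying each subject's most recent observation forward into any later missed visits - a missing baseline stays missing. A toy sketch with made-up scores:]

```python
# Toy repeated-measures data (invented): one list per subject, in visit
# order; None marks a missed visit.
subjects = {
    "s1": [10.0, 8.0, 7.0],    # completed all visits
    "s2": [12.0, 9.0, None],   # dropped out before end of trial
    "s3": [11.0, None, None],  # baseline only
}

def locf(scores):
    """Fill each gap with the most recent earlier observation (LOCF)."""
    filled, last = [], None
    for s in scores:
        if s is not None:
            last = s
        filled.append(last)
    return filled

imputed = {sid: locf(scores) for sid, scores in subjects.items()}
```

[With only baseline and EOT, as Brian notes, the only value available to carry forward is the baseline itself - which is exactly the "no change" problem he describes.]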
>
> Without an imputation method, it sounds like you'll have a quarter of your
> data missing, so I suggest you present a comparison between those with data
> for the ITT and those without, on any other variables that could be
> relevant. As a reader I'd be looking to see whether I can extrapolate the
> ITT results to the full sample you recruited, or whether there's something
> about these drop-outs that makes them different and unable to tolerate the
> treatment etc.
> I think the concern from the editor is that your 'missing data' is
> systematically linked to withdrawal from treatment (so not truly missing, in
> the random sense of the concept) - unfortunately, with only two timepoints
> it's difficult to do much to tease it apart.
>
> I hope this rambling email helps - I'd be interested to hear how you get on!
>
> Brian
>
>
> On 06/01/2010 05:46, Ioanna Vrouva wrote:
>>
>> Dear All,
>> I would be grateful for your advice concerning the following.
>> I am working on an outcome analysis (with outcome data - CBCL scores -
>> available at intake and end of treatment) regarding a parenting program.
>> It has been impossible to obtain data from parents who dropped out (around
>> 25%). Moreover, another 15% have not provided data at either intake or
>> follow-up, although they completed the intervention.
>> I had previously based the outcome analysis on only those who had data
>> available at both intake and completion. However, the Editor has asked me
>> to present an intention to treat analysis (ITTA).
>> My questions:
>> 1. Should I include all cases for this analysis? (regardless of whether
>> they completed the treatment, and whether they provided data at both
>> intake and the second time point?)
>> 2. When performing an ITTA with missing data, which is the best way to
>> impute the missing data (CBCL scores)? The average? Or?
>> Any guidance/reading suggestions would be hugely appreciated
>> Many thanks and happy New Year
>> Ioanna
>>
>
--
Jeremy Miles
Psychology Research Methods Wiki: www.researchmethodsinpsychology.com