Andrew Holmes wrote:
> Dear Kevin,
>
> At 11:41 17/12/98 -0600, you wrote:
> | Andrew Holmes wrote:
> | > ... Presumably you're interested in assessing whether patients with the
> | > pathological condition in general are different from normals in general,
> | > rather than just testing whether these particular patients are different
> | > from these particular normal controls at the time they were scanned.
> | >
> | > The former requires a random effects analysis, which can be effected by
> | > summarising the data within each subject into one scan for each of the two
> | > conditions. In the context of inferring about population differences,
> | > repeat observations on an individual are of lesser relevance, and no
> | > relevant information is lost by averaging the data in this way.
> |
> | I think this is going too far. Even with a random effects analysis, the
> | conclusions still depend entirely on how representative the subjects are
> | of the populations they are drawn from. There are always selection
> | effects: who is likely to volunteer, who is inaccessible (e.g. homeless
> | or hospitalized), and so on, in addition to the obvious potential
> | differences (including socioeconomic and demographic ones).
>
> I completely agree. This is the sampling issue that hides behind nearly all
> statistical inference. Like most statisticians, I usually start with a
> "random sample from the population" and proceed from there, without worrying
> about what population I'm really sampling from, or whether the sampling is
> truly random.
>
> | A paper done with a fixed effects analysis would be entirely appropriate
> | in my view as long as the authors note that any differences are between
> | the two sample groups shown, and may only generalize to the sampled
> | populations insofar as the subjects chosen are representative of the
> | populations.
>
> I completely disagree! With a fixed effect analysis of repeated measures
> data containing subject by condition interaction you can't generalise
> statistically beyond the *actual* subjects studied. With a single group
> analysis I'm happy with such a "case study" approach, since replications or
> similar experiments will probably be undertaken to build up evidence
> anecdotally across published reports (a sort of conscious Bayesian
> meta-analysis of the sort medics do in their heads all the time!) However,
> since there's double the scope in group comparisons for effects peculiar to
> the particular subjects to throw up a group difference, and since group
> comparisons are rarely replicated, I'm not happy with fixed effects
> analyses of group comparisons.
>
> This stance (random effects for group comparisons of repeated measures data
> with subject by condition interaction, which is nearly always the case) is
> standard in psychology, clinical trials and other areas of research. In
> Neuroimaging it's becoming standard as a number of eminent statistical
> reviewers are demanding it. We've had group-comparison papers analysed as
> you propose returned by Nature with requests for "appropriate random
> effects analyses"!
>
> Arguing that a fixed effects analysis generalises to the population insofar
> as the subjects are representative of the population doesn't hold water,
> since the only population these subjects *are* representative of completely
> is the population consisting of only these subjects! The only exception is
> when there is no variation from subject to subject, in which case every
> subject is representative of the population (but this never happens:-).
>
> If you want to argue that these subjects are representative of a population
> (whatever that is), then you're only defining the population. If there's
> some variability in the response from subject to subject, then these
> subjects are only representative of the population in that they've been
> sampled from it, and so to assess the population mean effect you have to
> account for the potential variability in estimating that from a sample of a
> few subjects - i.e. use a random effects model.
>
> As an aside: If you really want to say something just about these subjects,
> then arguably a (non-parametric) randomisation test is appropriate. Eugene
> Edgington, one of the founding fathers of randomisation testing, has been
> most outspoken in this regard, arguing that you never have random sampling
> from a population, so why try to infer statistically beyond what you've
> got! The introduction of his "Randomization Tests" makes interesting reading!
This is essentially what I meant to say, but I need to consider your points.
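For concreteness, the randomisation test you mention can be sketched as follows (hypothetical per-subject numbers, not real data; the logic is Edgington-style label re-randomisation, with the observed labelling counted in the reference set):

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented per-subject summaries (one number per subject).
patients = np.array([2.1, 1.4, 2.8, 1.9, 2.5, 1.7])
controls = np.array([0.2, -0.3, 0.5, 0.1, -0.1, 0.4])

def randomisation_test(x, y, n_perm=5000, rng=rng):
    """Under the null hypothesis the group labels are exchangeable,
    so re-randomise the labels and ask how often the permuted mean
    difference is at least as extreme as the observed one. The
    resulting inference concerns these subjects only."""
    pooled = np.concatenate([x, y])
    observed = abs(x.mean() - y.mean())
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(pooled)
        diff = abs(perm[:len(x)].mean() - perm[len(x):].mean())
        if diff >= observed:
            count += 1
    # Count the observed labelling itself in the reference set.
    return (count + 1) / (n_perm + 1)

p = randomisation_test(patients, controls)
```

No sampling model is assumed at all, which is exactly why the conclusion cannot be pushed beyond the subjects actually studied.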
> Maybe some of this is wordplay with statistical terms, but I remain convinced!
>
> Look forward to your comments,
>
> -andrew
>
> PS: Usually I Cc. the SPM discussion list, rather than answer personally,
> but in this case I'll leave that decision to you.
I appreciate your thoughtful reply, and I think others would benefit from
it.

Thanks,
Kevin