Hi Greg,
Excerpts from Re: Using fixed-effects stu.. by "Gregory S Berns"@emory.
> I'm not sure I agree with this approach. A complementary issue that is not
> often discussed is a Type II error. I don't believe that we know enough
> about brain systems to make good a priori hypotheses. Doing a pilot
> study to generate a mask still doesn't get away from the problem that
> one, or a few, subjects may drive the fixed analysis, and hence the
> subvolume mask.
Good point: when you use a mask you are assuming you've captured
"the right" voxels. That said, a conjunction analysis is supposed to
address the issue of a few subjects driving significance.
With masking in general, I see it two ways:
If you're going to the trouble of an imaging study, don't you want
to know what's happening everywhere in the brain?
But, alternatively,
If you're going after a subtle (but well-localized) effect, you may
not be able to afford the loss of power that comes with searching
over the whole brain.
It's just another trade-off one has to make in planning an analysis.
With very focal hypotheses in particular, I think that masks can be
useful. Say your fundamental research question is addressed by the
involvement of the anterior cingulate; since that's a fairly well
defined region, you could create an anatomy-based mask that would
safely encompass all the voxels of interest, buying you some power.
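To make the power argument concrete, here is a minimal sketch in Python of the simplest view of the multiple comparisons problem, a Bonferroni correction over voxels. The voxel counts are hypothetical, and real fMRI thresholds would come from random field theory or permutation rather than raw Bonferroni, but it shows how shrinking the search region lowers the threshold a voxel must exceed:

```python
# Sketch: a smaller search volume buys power, under a simple
# Bonferroni view of voxel-wise testing. Voxel counts are made up.
from statistics import NormalDist

def bonferroni_threshold(n_voxels, alpha=0.05):
    """Per-voxel z threshold controlling family-wise error at alpha."""
    return NormalDist().inv_cdf(1 - alpha / n_voxels)

whole_brain = 50_000   # hypothetical whole-brain voxel count
acc_mask = 500         # hypothetical anterior cingulate mask

print(bonferroni_threshold(whole_brain))  # stricter threshold
print(bonferroni_threshold(acc_mask))     # more lenient -> more power
```

A voxel with a moderate z value can survive the mask's threshold while falling short of the whole-brain one, which is exactly the power gain a focal mask buys.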
Of course it would be foolish not to also look at the rest of the
brain, say with a complementary mask... but you create a new multiple
comparisons problem: If you control your false positive rate to (say)
5% in the subvolume and to 5% in the rest of the brain, you have a
greater than 5% chance of a type I error in either the sub or
complementary region. As they are disjoint regions they are
approximately independent tests (specifically, the distributions of
the maxima are approximately independent), so using a 2.5% threshold on
each (good old Bonferroni) should control the false positive rate
appropriately.
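The arithmetic behind that is quick to check. Assuming the two regions really do give independent tests, testing each at 5% nearly doubles the overall false positive rate, while testing each at 2.5% brings it back under 5%:

```python
# Two (approximately) independent tests, each run at level alpha_each,
# inflate the chance of at least one false positive anywhere.
def familywise_rate(alpha_each, n_tests=2):
    """P(at least one false positive) under independence."""
    return 1 - (1 - alpha_each) ** n_tests

print(familywise_rate(0.05))    # 0.0975 -- too liberal overall
print(familywise_rate(0.025))   # ~0.0494 -- controlled at 5%
```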
> If you're going to do 10 subjects for a pilot study, it's not really that
> much more difficult to do 20 or 30, and then do a random effects
> analysis. In our random effects designs, we don't see stability of
> the results until about N=20.
Sounds reasonable... in my experience, when there are fewer than 10
subjects (9 degrees of freedom) you can get problems with noisy
statistic images, something that is nicely dealt with in SnPM (yes, a
shameless plug, but I think SnPM can be very valuable for preliminary
random effects fMRI analyses, before you get those 20 subjects).
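One way to see the low-df problem, sketched below with scipy (not part of the original discussion): with few degrees of freedom the t distribution has heavy tails, so the critical value for an uncorrected p < 0.001 test is much larger, and the statistic image depends heavily on each subject's noisy variance estimate:

```python
# Critical t values for one-sided p < 0.001 at various degrees of
# freedom: low-df tests demand much larger statistics.
from scipy.stats import t

for df in (9, 19, 29):
    print(df, t.ppf(0.999, df))
```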
-Tom
-- Thomas Nichols -------------------- Department of Statistics
http://www.stat.cmu.edu/~nicholst Carnegie Mellon University
[log in to unmask] 5000 Forbes Avenue
-------------------------------------- Pittsburgh, PA 15213