Thanks Brad, indeed that makes sense, and I particularly like your comment
about BET....
Sam, it does sound like you have a good grasp of the norms in
thresholding. You're right that, strictly, one should scale p-thresholds
according to the number of contrasts tested, though you'll find that
people rarely do that in the literature....
I think you're more likely to get bitten, though, by using fixed effects -
do you really have a good excuse not to use mixed effects? Surely the
normal arguments for ME hold here. In this case you would want a 3-stage
analysis like in the manual.
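The contrast scaling mentioned above is just a Bonferroni division of the family-wise alpha by the number of contrasts tested. A minimal, illustrative sketch (the numbers are examples from the discussion, not prescriptions):

```python
# Illustrative Bonferroni scaling across contrasts; numbers are examples.

def bonferroni_alpha(alpha_family: float, n_tests: int) -> float:
    """Per-test alpha that keeps the family-wise error rate at alpha_family."""
    return alpha_family / n_tests

# Two directional contrasts (decoy > target1 and target1 > decoy)
# tested at an overall alpha of 0.05:
per_contrast = bonferroni_alpha(0.05, 2)
print(per_contrast)  # 0.025

# A cluster significant at p = 0.04 would not survive the corrected threshold:
print(0.04 < per_contrast)  # False
```

On this accounting, Sam's p = .04 cluster falls short once both directional contrasts are counted.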
Cheers, Steve.
On Fri, 6 Jan 2006, Bradley Buchsbaum wrote:
>
> Hi Samuel,
>
> Choosing thresholds for fMRI analyses is something of an art. More and
> more often an experiment is conducted for which there is a lot of
> previous research and for which the "neural correlates" of a given task
> paradigm have been pretty well mapped out. So the more you know
> about the neurobiology going in, the greater the justification for using a
> "liberal threshold" (for instance, one often sees P < 0.001 --
> uncorrected for multiple comparisons -- these days). The reason for this
> is that, even though one generally performs statistical comparisons
> for all voxels in the brain, it's generally the case in an advanced
> research program that one has an idea of where one is looking ahead of
> time. But ESP research is something of a new frontier -- and here you
> have no a priori hypotheses -- so it's essential that you use a
> "conservative" threshold. How conservative? I would say more
> conservative than the generally approved "conservative" threshold. The
> thing is, before claiming you've found "the neural correlate of ESP", you
> want to be pretty darn sure it's not a statistical artifact.
>
> But wait, if there is a neural correlate of ESP, can it then be ESP? A
> natural explanation for ESP would seem to be incompatible with the whole
> idea of "extra-sensory perception". By the same token, a true ESP
> believer would say that a null result would actually support the
> existence of ESP!
> It seems to me it's a catch-22.
>
> So perhaps you should be focusing most intently on the voxels *outside*
> the brain -- in which case you must be sure not to run BET on the fMRI
> images.
>
> good luck,
>
> Brad Buchsbaum
>
> Samuel Moulton wrote:
>
> >hi FSL gurus.
> >
> >first of all, thank you for designing such an impressive piece of software and for maintaining this
> >very helpful list.
> >
> >i'm the lone FSL user in my lab and this is my first fMRI study. therefore, i'd really appreciate your
> >help to ensure that i haven't made any dumb mistakes when setting up my analysis. to make
> >matters worse, part of the experiment happens to be on a particularly controversial topic: ESP.
> >because of this, i need to be sure that my analysis adheres to the current standards out there, and
> >that i am conservative (but not overly so). first let me briefly describe my experiment:
> >
> >16 participants completed a very simple guessing task during 5 functional runs. participants
> >were sequentially exposed to two photographs and then had to decide which photograph was
> >randomly selected by the computer as the "target". after they made their choice, they were shown
> >the target picture a second time as feedback. this trial sequence continued ad nauseam (but with
> >different pictures for each trial). for half of the trials, the target picture was presented first
> >(target1--->decoy--->target2), and for the other half the decoy was presented first
> >(decoy--->target1--->target2). target assignment was also counterbalanced across participants
> >such that the pictures in the overall target and decoy sets were identical. by looking for differential
> >activation associated with target vs. decoy exposures, we were testing for ESP. obviously i left off
> >some details (like, for example, that participants had an identical twin or relative viewing the
> >targets for each trial in a separate room), but that's the basic design.
> >
> >this link contains my first and second-level design files and matrices, as well as a histogram i used
> >to settle on a high pass filter cutoff:
> >
> >http://www.courses.fas.harvard.edu/~psy970dn/temp/
> >
> >my goal in this analysis -- given the topic -- was to be completely uncontroversial. i would greatly
> >appreciate if you could scan the files above to see if anything strikes you as even a little dodgy.
> >
> >i also have a couple questions. first, do you think 25s is too low a cutoff for high-pass filtering? i
> >initially used 50s, but revised that downwards after reading some of the posts on this list. as far
> >as i know, the frequencies in my model (see histogram) are the only things i should use to select
> >this parameter.
> >
> >secondly: are the standards of fixed-effects analysis different than the standards of mixed-effects
> >analysis? because i'm not trying to generalize beyond my subject pool, i have a completely fixed-
> >effects analysis. this seems to be a rare situation in psychology. is there any reason to think that i
> >should adopt more conservative thresholds (i.e. lower alpha-levels) for a fixed-effect analysis than
> >a mixed-effects one?
> >
> >thirdly: there are two contrasts that would support claims of ESP: decoy > target1 and target1 >
> >decoy. does this mean i should halve my alpha values? right now i have a cluster significant in the
> >decoy > target1 contrast at p = .04 and nothing in the target1 > decoy contrast, so the answer to
> >this question unfortunately matters. i had no a priori hypothesis about directionality. i've never
> >seen this type of bonferroni correction done in fMRI studies, but it seems necessary here.
> >
> >finally: this may be an impossible question, but what thresholding parameters are the least
> >controversial? right now, i'm using cluster stats with a z-threshold of 3.1 and a p-threshold of
> >0.05 (this is the closest to a standard that i've been able to find). as i mentioned, i do find a blob
> >(91 voxels) of significant activation. this goes away with voxel-based thresholding (p = 0.05). it
> >also disappears when my cluster z-threshold goes below 2.7 or above 3.1. i had no a priori
> >hypothesis about the size or location of potential activation.
> >
> >ok, i think that's enough for now. i hope some of these questions are of general interest to the
> >list.
> >
> >thanks
> >sam
> >
> >
>
>
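Brad's warning about uncorrected thresholds is easy to quantify: at P < 0.001 uncorrected, a whole-brain search implies a fair number of false-positive voxels by chance alone. A back-of-the-envelope sketch, assuming a round in-brain voxel count (an assumption for illustration, not a figure from this study):

```python
# Back-of-the-envelope: expected false positives under uncorrected
# voxelwise testing. The voxel count is an assumed round number.

n_voxels = 20_000        # rough order of magnitude for in-brain voxels
p_uncorrected = 0.001    # the "liberal" uncorrected threshold mentioned above

# Under the null, each voxel independently has a p_uncorrected chance of
# crossing threshold, so the expected number of spurious voxels is:
expected_false_positives = n_voxels * p_uncorrected
print(round(expected_false_positives))  # roughly 20 voxels by chance alone
```

This is why properly corrected (e.g. cluster- or family-wise-corrected) inference matters when, as here, there is no a priori region of interest.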
--
Stephen M. Smith, Professor of Biomedical Engineering
Associate Director, Oxford University FMRIB Centre
FMRIB, John Radcliffe Hospital, Headington, Oxford OX3 9DU, UK
+44 (0) 1865 222726 (fax 222717)
[log in to unmask] http://www.fmrib.ox.ac.uk/~steve