Sam,
Here are my thoughts on what it would take for me (as a skeptic with
regard to ESP, as I suspect most on the list are) to actually believe
your results. That's not to say that your study would on its own
convince me that ESP is a real phenomenon, but it would at least
cause me to think twice, were it to meet these criteria.
1. You would need to use analysis parameters that are within the
range of community standards. For example, a high-pass cutoff of 25
seconds is pretty far outside of usual practice (in my experience,
it's generally > 66 seconds for event-related designs).
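For intuition: a high-pass cutoff of T seconds removes frequencies below
1/T Hz, so the question is whether the slowest frequency of interest in
the design clears the cutoff. A toy calculation (the ~20 s trial spacing
below is made up for illustration, not taken from Sam's design):

```python
# A high-pass cutoff of T seconds removes frequencies below 1/T Hz.
for cutoff_s in (25.0, 50.0, 66.0, 100.0):
    print(f"cutoff {cutoff_s:5.1f} s -> removes f < {1.0 / cutoff_s:.4f} Hz")

# Suppose the slowest design frequency of interest repeats every ~20 s
# (hypothetical; substitute the real design's slowest frequency).
task_period_s = 20.0
task_freq_hz = 1.0 / task_period_s  # 0.05 Hz

# A 25 s cutoff (0.04 Hz) sits uncomfortably close to 0.05 Hz, risking
# removal of task signal; a 66-100 s cutoff (0.015-0.01 Hz) still removes
# slow scanner drift while leaving the task band untouched.
assert task_freq_hz > 1.0 / 66.0  # task signal survives a 66 s cutoff
```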
2. You would need to use a random effects analysis. Though you
claim that you are not trying to generalize beyond your sample, the
problem with a fixed effects analysis is that it can be heavily
influenced by an effect in a small number of subjects, whereas in a
random effects analysis the effect will need to be present in a larger
proportion of individuals. Again, this is a community standards
issue; very few papers are published anymore using fixed effects
analysis.
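To make that concrete, here is a toy simulation (all numbers invented,
nothing from Sam's data): pooling every observation across subjects lets
a single extreme subject drive the group statistic, while a one-sample
test on the per-subject means, where between-subject variance enters the
error term, does not.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# 16 simulated subjects, 100 within-subject observations each.
# 15 subjects have no effect; 1 subject has a huge effect.
n_sub, n_obs = 16, 100
true_effects = np.zeros(n_sub)
true_effects[0] = 5.0  # one outlier subject

# Within-subject data: tight around each subject's true effect.
data = true_effects[:, None] + rng.normal(0.0, 1.0, (n_sub, n_obs))

# "Fixed effects"-style test: pool all observations, ignoring subjects.
t_fixed, p_fixed = stats.ttest_1samp(data.ravel(), 0.0)

# "Random effects"-style test: one-sample t-test on per-subject means.
t_random, p_random = stats.ttest_1samp(data.mean(axis=1), 0.0)

print(f"pooled (fixed-like):        t = {t_fixed:6.2f}, p = {p_fixed:.2g}")
print(f"subject means (random-like): t = {t_random:6.2f}, p = {p_random:.2g}")
```

The pooled test comes out wildly significant on the strength of one
subject; the test on subject means does not, which is exactly why the
random effects result generalizes better to the group.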
3. You need to use a stringent corrected threshold. I think one good
approach would be to use nonparametric methods with FSL's randomise
tool, because it provides the most exact p-values given your data
set. I would recommend a corrected p-value of p < .01; either
cluster-based or voxel-based thresholding would be fine with me.
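For a one-sample group test, the core idea behind the nonparametric
approach is sign flipping: under the null, each subject's effect is
equally likely to be positive or negative, so randomly flipping signs
builds the null distribution empirically. (randomise additionally takes
the maximum statistic across voxels to correct for multiple comparisons;
the sketch below, with made-up data and a hypothetical `sign_flip_p`
helper, shows only the single-test core.)

```python
import numpy as np

rng = np.random.default_rng(42)

# Made-up per-subject contrast values (e.g. one voxel's COPEs, 16 subjects).
copes = rng.normal(0.2, 1.0, size=16)

def sign_flip_p(values, n_perm=10000, rng=rng):
    """One-sample permutation test via sign flipping: under H0 the signs
    of the subject effects are exchangeable, so flip them at random and
    compare the permuted means against the observed mean."""
    observed = values.mean()
    flips = rng.choice([-1.0, 1.0], size=(n_perm, values.size))
    null_means = (flips * values).mean(axis=1)
    # Two-sided p, counting the observed labeling itself (the +1s) so the
    # p-value can never be exactly zero.
    n_extreme = (np.abs(null_means) >= abs(observed)).sum()
    return (n_extreme + 1) / (n_perm + 1)

p = sign_flip_p(copes)
print(f"permutation p = {p:.4f}")
```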
I think it's important to realize that, because the priors on your
effect are so small for most people, you will have to present very
strong evidence to change anyone's mind.
Cheers,
Russ
On Jan 6, 2006, at 3:03 PM, Samuel Moulton wrote:
> hi FSL gurus.
>
> first of all, thank you for designing such an impressive piece of
> software and for maintaining this
> very helpful list.
>
> i'm the lone FSL user in my lab and this is my first fMRI study.
> therefore, i'd really appreciate your
> help to ensure that i haven't made any dumb mistakes when setting
> up my analysis. to make
> matters worse, part of the experiment happens to be on a
> particularly controversial topic: ESP.
> because of this, i need to be sure that my analysis adheres to the
> current standards out there, and
> that i am conservative (but not overly so). first let me briefly
> describe my experiment:
>
> 16 participants completed a very simple guessing task during 5
> functional runs. participants
> were sequentially exposed to two photographs and then had to decide
> which photograph was
> randomly selected by the computer as the "target". after they made
> their choice, they were shown
> the target picture a second time as feedback. this trial sequence
> continued ad nauseam (but with
> different pictures for each trial). for half of the trials, the
> target picture was presented first (target1--->decoy--->target2),
> and for the other half the decoy was presented first
> (decoy--->target1--->target2). target assignment was also
> counterbalanced across participants such that the pictures in the
> overall target and decoy sets were identical.
> by looking for differential
> activation associated with target vs. decoy exposures, we were
> testing for ESP. obviously i left off
> some details (like, for example, that participants had an identical
> twin or relative viewing the
> targets for each trial in a separate room), but that's the basic
> design.
>
> this link contains my first and second-level design files and
> matrices, as well as a histogram i used
> to settle on a high pass filter cutoff:
>
> http://www.courses.fas.harvard.edu/~psy970dn/temp/
>
> my goal in this analysis -- given the topic -- was to be completely
> uncontroversial. i would greatly
> appreciate if you could scan the files above to see if anything
> strikes you as even a little dodgy.
>
> i also have a couple questions. first, do you think 25s is too low
> a cutoff for high-pass filtering? i
> initially used 50s, but revised that downwards after reading some
> of the posts on this list. as far
> as i know, the frequencies in my model (see histogram) are the
> only things i should use to select
> this parameter.
>
> secondly: are the standards of fixed-effects analysis different
> than the standards of mixed-effects
> analysis? because i'm not trying to generalize beyond my subject
> pool, i have a completely fixed-
> effects analysis. this seems to be a rare situation in psychology.
> is there any reason to think that i
> should adopt more conservative thresholds (i.e. lower alpha-levels)
> for a fixed-effect analysis than
> a mixed-effects one?
>
> thirdly: there are two contrasts that would support claims of ESP:
> decoy > target1 and target1 > decoy. does this mean i should halve
> my alpha values? right now i
> have a cluster significant in the
> decoy > target1 contrast at p = .04 and nothing in the target1 >
> decoy contrast, so the answer to
> this question unfortunately matters. i had no a priori hypothesis
> about directionality. i've never
> seen this type of bonferroni correction done in fMRI studies, but
> it seems necessary here.
>
> finally: this may be an impossible question, but what thresholding
> parameters are the least
> controversial? right now, i'm using cluster stats with a z-
> threshold of 3.1 and a p-threshold of
> 0.05 (this is the closest to a standard that i've been able to
> find). as i mentioned, i do find a blob
> (91 voxels) of significant activation. this goes away with voxel-
> based thresholding (p = 0.05). it
> also disappears when my cluster z-threshold goes below 2.7 or above
> 3.1. i had no a priori
> hypothesis about the size or location of potential activation.
>
> ok, i think that's enough for now. i hope some of these questions
> are of general interest to the
> list.
>
> thanks
> sam
---
Russell A. Poldrack, Ph.D.
Assistant Professor
UCLA Department of Psychology
Franz Hall, Box 951563
Los Angeles, CA 90095-1563
phone: 310-794-1224
fax: 310-206-5895
email: [log in to unmask]
web: www.poldracklab.org