Dear FSL experts,
I have a few questions about how FSL computes group-level statistics. One motivation for these questions is that my PI and I are trying to informally compare results from FSL and AFNI, to be sure that we have a reliable analysis pipeline. On the first pass, the results seemed relatively similar for one contrast of interest but very different for another, and we seem to have ruled out all the obvious sources of error. So I’m hoping to get a better understanding of how FSL carries out certain analysis steps.
Here are my distinct questions:
1. Can you give more detail about how the FSL automatic outlier detection option (in FLAME) works? Does it exclude subjects on a voxel-by-voxel basis, or does it exclude an entire subject’s data for a given contrast if that subject seems too noisy? If it is the latter, is there an output available showing which subjects were excluded from a given analysis? I couldn’t seem to find one.
2. The pipeline that I have typically used in FSL for a functional MRI task with multiple runs is to analyze each run separately, and then use a fixed-effects analysis to combine results across runs into a single .gfeat directory for each subject. As I understand it, the appropriate way to do this is to input the COPEs representing each condition vs. baseline, then weight each run equally (as 1) when constructing EVs. The concern is that for my tasks, there can be different numbers of trials in each run, and some runs might not have any trials of a given type, because trial sorting is based in part on subject responses. Is it still appropriate to weight runs equally in this case? I know that FSL weights inputs based on the variance, but I’m still somewhat confused about what effect this has on the results. Also, I know that runs are not supposed to be concatenated in FSL, the way they often are in AFNI and SPM, but should this method give equivalent results to an analysis in one of those other programs in which runs are concatenated?
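To make my confusion about the variance weighting concrete, here is a toy numeric sketch (not FSL code, and not a claim about FLAME’s internals) contrasting a simple equal-weight average of per-run contrast estimates with an inverse-variance-weighted average; all numbers are invented for illustration.

```python
# Toy sketch, NOT FSL internals: equal-weight vs. inverse-variance
# combination of hypothetical per-run contrast estimates.

copes = [1.0, 1.2, 0.4]      # invented per-run contrast estimates
variances = [0.5, 0.5, 2.0]  # invented per-run variances (run 3 is noisier)

# Naive equal weighting: every run contributes the same, noisy or not.
equal = sum(copes) / len(copes)

# Inverse-variance weighting: noisier runs are down-weighted.
weights = [1.0 / v for v in variances]
inv_var = sum(w * c for w, c in zip(weights, copes)) / sum(weights)

print(round(equal, 3))    # noisy run 3 pulls the estimate down
print(round(inv_var, 3))  # run 3 contributes much less here
```

If a variance-based scheme is in play under the hood, then even with EV weights of 1 the effective contribution of a run with few (or zero) trials of a given type would presumably differ from that of the other runs, which is the part I would like to understand better.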
3. I am also somewhat concerned that I might be misunderstanding how FSL weights different trial types in contrasts. To give an example: in my task paradigm, each item was presented twice, once in the first half of the run and once in the second half. In the first-level analysis, I modeled first-half trials separately from second-half trials. This factor ended up not being important, so in the contrasts of interest we included the first-half trials with a weighting of 0.5 and the second-half trials with a weighting of 0.5. I had thought this would be equivalent to collapsing across the factor, i.e., to an estimate weighted by the number of trials in each condition (which was often different for the first and second halves). However, I now believe that the two inputs are simply averaged. How does FSL handle this situation? Is it in fact necessary to re-run the first-level analyses (or to manually weight the contrasts by trial counts) if I want to collapse across a variable, such as first half/second half in the above example?
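Here is a toy arithmetic sketch of the distinction I am asking about (again, invented numbers, not FSL code): a 0.5/0.5 contrast is a simple average of the two condition estimates, whereas collapsing across the factor would weight each estimate by its trial count, and the two answers diverge whenever the counts are unequal.

```python
# Toy sketch with invented numbers: equal 0.5/0.5 contrast weights
# vs. weighting the two condition estimates by their trial counts.

beta_first_half = 2.0      # hypothetical estimate for first-half trials
beta_second_half = 1.0     # hypothetical estimate for second-half trials
n_first, n_second = 12, 4  # hypothetical (unequal) trial counts

# Equal 0.5/0.5 contrast weights: a plain average of the two estimates.
equal_weighted = 0.5 * beta_first_half + 0.5 * beta_second_half

# Trial-count weighting: what truly collapsing the conditions would approximate.
count_weighted = (n_first * beta_first_half
                  + n_second * beta_second_half) / (n_first + n_second)

print(equal_weighted)   # 1.5
print(count_weighted)   # 1.75
```

So if FSL simply averages, the 0.5/0.5 contrast corresponds to the first quantity, not the second, which is what prompts my question about re-running the first-level analyses.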
Thanks,
Michael
--
Michael S. Cohen, Ph.D.
Postdoctoral Researcher
Reber Lab
Department of Psychology
Northwestern University