Sorry for being unclear - I'm not using cluster-size inference, but rather randomise with TFCE. My understanding is that the *_tfce_corrp_tstat* output images are voxelwise corrected for multiple comparisons - is that incorrect? Assuming it is correct, my issue is not with the correction itself, but rather with how I can interrogate the findings (see below).
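(For reference, I'm defining "significant" as corrp > 0.95, since randomise's corrp images store 1-p, so that corresponds to p < .05 FWE-corrected. Something like the following, where the filename is a placeholder for my actual output:

    fslmaths results_tfce_corrp_tstat1 -thr 0.95 -bin sig_mask

)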
As requested, here's a little background on the task/analyses:
TASK: This is a sample (n=66) of healthy participants who performed two (counterbalanced) versions of a Go/NoGo task. One run was a standard Go/NoGo (i.e., respond when you see a square, withhold when you see a triangle). The other run added a 1-back component, whereby participants were to respond if the symbol on the current trial matched the previous trial (e.g., square -> square) and withhold when the symbols differed (e.g., square -> triangle).
ANALYSIS: The first level was nested within both participant and run (1-back, standard), so we modeled the timeseries for each run separately. For each run, we modeled Go and NoGo trials with separate EVs and computed a NoGo - Go contrast. We then ran second-level analyses, still nested within participant, consisting of a fixed-effects analysis that contrasted the NoGo - Go contrasts from the two runs (i.e., 1-back vs. standard). The output of this level was therefore the interaction between two within-participant effects: (NoGo vs. Go) X (1-back vs. standard task). Finally, these were carried up to a third-level (between-subjects) analysis in which the mean across participants was modeled (i.e., a column of ones) and the contrasts were just the positive and negative versions of this mean.
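(For concreteness, the group level was run through randomise along these lines - the filenames and permutation count here are placeholders, not my exact call:

    randomise -i allsubs_interaction_copes -o grouplevel -1 -T -n 5000

where -1 requests the one-sample group-mean test, equivalent to the column-of-ones design, and -T turns on TFCE.)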
I've attached the relevant design and contrast matrices for each level (FYI, in the files, the 1-back version is referred to as cog and the standard version as neu). Regarding Matthew's comment about modeling issues, I'm fairly sure that's not the case: the highest level is just a column of ones, and the second level is just a 1 for the 1-back version and a -1 for the standard version. I've also rechecked the first-level designs, and I'm fairly sure they've been set up correctly. So, overall, I don't think there's an issue with the finding itself.
Instead, I'm wondering if there is a principled (i.e., non-arbitrary) way to break this giant contiguous set of voxels into pieces that I can interrogate separately. By interrogate, I mean extracting the mean across voxels for each condition (i.e., 1-back Go, 1-back NoGo, standard Go, standard NoGo) and seeing which condition differences are driving the effects. I could do that across all 300k significant voxels at once, but that seems silly, as there is likely substantial spatial variation in the pattern of means. So I'm wondering whether there are principled methods out there for creating meaningful subgroups of significant voxels, which I would then examine further; one direction I've been toying with is sketched below.
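In case it helps to make the question concrete, the idea would be to split the significant-voxel mask along the boundaries of an anatomical parcellation and then extract the condition means per region. A rough sketch using the Harvard-Oxford max-probability atlas shipped with FSL (all filenames other than the atlas path are placeholders for my own data, which I'm assuming is in 2mm MNI space to match the atlas):

    ATLAS=$FSLDIR/data/atlases/HarvardOxford/HarvardOxford-cort-maxprob-thr25-2mm.nii.gz

    # label every significant voxel with the atlas region it falls in
    # (sig_mask is the thresholded corrp mask from above)
    fslmaths $ATLAS -mas sig_mask sig_by_region

    # for each of the 48 cortical labels, build a sub-mask and extract the
    # per-subject mean COPE for each condition (one 4D file per condition,
    # subjects concatenated along the 4th dimension)
    for label in $(seq 1 48); do
        fslmaths sig_by_region -thr $label -uthr $label -bin roi_${label}
        nvox=$(fslstats roi_${label} -V | awk '{print $1}')
        [ "$nvox" -eq 0 ] && continue
        for cond in cog_go cog_nogo neu_go neu_nogo; do
            fslmeants -i allsubs_${cond}_cope -m roi_${label} \
                      -o means_${cond}_roi${label}.txt
        done
    done

The obvious alternative would be to raise the corrp threshold until the blob fragments and label the pieces with cluster --oindex, but that threshold choice feels more arbitrary than an anatomical parcellation, which is why I'm asking whether there's a more principled approach.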
Thanks for the help so far!