Hi all
Just about to embark on some functional connectivity analyses and wanted to check my options before proceeding!
Basically we have 4 conditions (A1B1, A1B2, A2B1 and A2B2) and have already run a univariate analysis revealing significant clusters for the interaction between A and B (e.g. contrasts 1 -1 -1 1 and -1 1 1 -1). What we now want to do is use some of these clusters as seeds to test for networks that underlie/support these regions/interactions.
So far I have extracted the time series per session per subject (using fslmeants) from a sphere around the peak voxel of one of these clusters.
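In case it helps anyone reply, the extraction step amounts to something like the numpy sketch below (just a stand-in for what fslmeants does; the image is simulated, and the peak location and 3-voxel radius are illustrative assumptions):

```python
import numpy as np

# Toy stand-in for fslmeants: average a 4D time series within a
# spherical mask around a peak voxel. The data, peak location and
# radius are all illustrative.
rng = np.random.default_rng(0)
data = rng.normal(size=(20, 20, 20, 100))   # x, y, z, time (simulated)
peak = np.array([10, 10, 10])               # peak voxel (assumed)
radius_vox = 3                              # e.g. 6 mm at 2 mm resolution

# Boolean sphere mask centred on the peak
grid = np.indices(data.shape[:3])
dist = np.sqrt(((grid - peak[:, None, None, None]) ** 2).sum(axis=0))
mask = dist <= radius_vox

# One mean value per volume, i.e. the seed time series
seed_ts = data[mask].mean(axis=0)
print(seed_ts.shape)  # one value per volume
```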
1. I think my first option is to run a PPI using these time series: model one EV for the seed time course itself, one for the task effects (onsets and durations) weighted for the interaction across conditions, one for the task effects weighted for all conditions above baseline, and the PPI term as the product of the seed and interaction EVs. Is this correct?
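To make sure I've understood the setup, here is how I picture the EVs being built (a sketch only; the condition timings are made up, and real EVs would of course be HRF-convolved):

```python
import numpy as np

# Sketch of the PPI design: (1) psychological EV (interaction-weighted
# task regressor), (2) task-vs-baseline EV, (3) physiological EV
# (demeaned seed time series), (4) PPI EV (their product).
n_vols = 100
rng = np.random.default_rng(1)
seed_ts = rng.normal(size=n_vols)  # would come from fslmeants

# Hypothetical condition boxcars, one block of 25 volumes each
# (real EVs would be convolved with the HRF)
cond = {c: np.zeros(n_vols) for c in ("A1B1", "A1B2", "A2B1", "A2B2")}
for i, c in enumerate(cond):
    cond[c][i * 25:(i + 1) * 25] = 1.0

weights = {"A1B1": 1, "A1B2": -1, "A2B1": -1, "A2B2": 1}
psych = sum(w * cond[c] for c, w in weights.items())  # interaction EV
task = sum(cond.values())                             # all conditions vs baseline
phys = seed_ts - seed_ts.mean()                       # demeaned seed EV
ppi = psych * phys                                    # the PPI term

design = np.column_stack([psych, task, phys, ppi])
print(design.shape)
```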
2. I have read Jill’s paper about PPI and noted a high propensity for false negatives, especially for event-related designs (such as ours). I’m going to run it anyway, but was wondering: is it possible to create a model in FEAT that assesses functional connectivity by splitting the seed fslmeants time course into separate EVs, one per condition, and running contrasts on these? In our case the GLM would contain 4 EVs (as per our original univariate analysis), each containing the seed’s mean signal only for the TRs (or volumes) belonging to that condition, with the interaction term then defined across these EVs in the contrast matrix.
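Concretely, the splitting I have in mind looks like this (again only a sketch; the per-volume condition labels here are random placeholders for our actual timings):

```python
import numpy as np

# Sketch of option 2: the seed time course split into four EVs, one
# per condition, each nonzero only on that condition's volumes, with
# an interaction contrast across them.
n_vols = 100
rng = np.random.default_rng(2)
seed_ts = rng.normal(size=n_vols)  # would come from fslmeants

# Hypothetical per-volume condition labels 0..3 (A1B1, A1B2, A2B1, A2B2)
labels = rng.integers(0, 4, size=n_vols)

# One EV per condition: seed signal on that condition's TRs, 0 elsewhere
evs = np.zeros((n_vols, 4))
for k in range(4):
    evs[labels == k, k] = seed_ts[labels == k]

contrast = np.array([1, -1, -1, 1])  # the A x B interaction
print(evs.shape)
```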
3. Finally, how about extracting the fslmeants values from the regions showing the interaction contrast in our original analyses, segmenting these per condition, and running a regression on the resulting values in SPSS?
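By which I mean something along these lines (simulated numbers, just to show the shape of the analysis I would hand to SPSS):

```python
import numpy as np

# Sketch of option 3: one mean value per subject per condition from the
# interaction cluster, then the A x B interaction tested on those values
# (here as a one-sample t-test on per-subject interaction scores; the
# data are simulated).
rng = np.random.default_rng(3)
n_sub = 20
# Columns: A1B1, A1B2, A2B1, A2B2
means = rng.normal(size=(n_sub, 4))

# Per-subject interaction score: (A1B1 - A1B2) - (A2B1 - A2B2)
w = np.array([1, -1, -1, 1])
scores = means @ w

# One-sample t-test of the interaction scores against zero
t = scores.mean() / (scores.std(ddof=1) / np.sqrt(n_sub))
print(round(t, 3))
```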
Apologies in advance; I realize this email is laborious! I scoured the jiscmail archives and couldn’t find much that directly answers queries 2 and 3.
Thanks!
Hilary