Dear Andy,
I would suggest redoing the preprocessing based on the first two sessions only (in case you want to discard the third one). As the subject moves several mm in session 3, she may have moved partly out of the FOV at some point, which can result in loss of data for the topmost or lowest parts of the brain. Depending on the preprocessing settings, these parts might then be removed from the whole data set. As the second-level model is based only on voxels that are covered by all the first-level models, a smaller/larger FOV for a single subject has an impact on the number of voxels, the smoothness, and the statistics. So better adjust this right now (as you've already done anyway).
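Just to illustrate the point about the second-level mask: it is the intersection of the subjects' first-level masks, so one subject with a reduced FOV shrinks the analysis for everybody. A minimal numpy sketch with made-up masks (the arrays are hypothetical stand-ins for each subject's mask image):

```python
import numpy as np

# Hypothetical per-subject first-level masks (True = voxel analysed).
# In practice these would be read from each subject's first-level mask image.
rng = np.random.default_rng(0)
masks = [rng.random((4, 4, 4)) > 0.05 for _ in range(10)]

# One subject with a reduced FOV, e.g. the topmost slice lost due to motion
masks[3][:, :, -1] = False

# The second-level model only sees voxels covered by ALL first-level models
group_mask = np.logical_and.reduce(masks)
print(group_mask.sum(), "voxels enter the second-level model")
```

Because of the intersection, the lost top slice of that single subject is missing from the group analysis as well.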
Concerning head motion, I would set up a fixed threshold and then exclude all the "bad" sessions, independent of whether the results "look good" or not. A common criterion is that within-session head motion has to stay within the size of a voxel (usually understood as the displacement from the first to the last volume, or the maximum displacement relative to the first volume). In addition, and IMO at least as important, I would also check for fast scan-to-scan motion ("jumps"). Depending on the frequency of fast motion I would either discard the sessions or add dummy regressors. Note that if you use dummy regressors for e.g. all volumes with >0.5 mm/TR, you should do so for all your subjects. ArtRepair offers some more options, e.g. interpolating bad volumes from temporally neighbouring ones. However, such interpolated data is artificial, so you should still add a regressor at the first level (aka "deweighting"), although in that case a single dummy regressor should be sufficient, e.g. [0 0 0 0 1 1 1 0 0 0 0 0 1 0 0 0 0] instead of four separate ones [0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0; 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0; 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0; 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0]. Another issue is that while interpolation might remove (most of) the artefacts from the data, the head motion parameters still reflect the fast head motion, so you would also have to adjust the motion parameters. The easiest approach therefore seems to be to rely on dummy regressors plus the rp parameters as obtained from realignment.
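A rough sketch of the jump check and the per-volume dummy regressors in numpy (the rp array below is simulated; in practice you would load the rp_*.txt file from realignment, and the 0.5 mm/TR cutoff is just the example threshold from above):

```python
import numpy as np

# Hypothetical realignment parameters (rp_*.txt): n_scans x 6
# (x, y, z translations in mm; pitch, roll, yaw in radians).
rp = np.zeros((17, 6))
rp[5:8, 1] = 0.8        # simulated jump along y covering scans 6-8
rp[12, 0] = 0.7         # single displaced scan

# Scan-to-scan displacement of the translations (mm/TR)
d = np.abs(np.diff(rp[:, :3], axis=0)).max(axis=1)

# Flag every volume that follows a jump above the cutoff
bad = np.flatnonzero(np.r_[False, d > 0.5])

# One dummy (scrubbing) regressor per flagged volume, as in the example above
dummies = np.zeros((rp.shape[0], bad.size))
dummies[bad, np.arange(bad.size)] = 1
```

Note that a sustained displacement flags two volumes (entering and leaving the displaced position), which is why the simulated data above yields four regressors; whether you also want to flag the volumes in between is a judgment call.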
Now, as you still find ventricle activations, there might be 1) some remaining large artefacts in your data (e.g. there seem to be a few small jumps of maybe 0.5 mm, especially along y in the first session), 2) small but highly task-correlated head motion (e.g. head motion accompanying a motor response, a small twitch each time a large/bright/surprising stimulus is presented, respiration-related motion), or 3) task-correlated changes in respiration, which may also globally affect the signal in the ventricles and in nearby blood supply.
Thus I would have a closer look at the motion parameters and their correlations with the stimuli. Set up fixed criteria, then use the remaining/adjusted data, independent of the results of the first-level models (if motion is correlated with certain tasks/stimuli, you might consider a different design for the next experiment, or perhaps turn to a sparse-sampling acquisition). Do NOT remove subjects due to unexpected findings in the T maps, as this is highly subjective. For example, it is normal that some subjects show rather large activations and others almost none. CSF activations in some of the subjects might also occur just by chance (.05 uncorrected is liberal anyway). In case of CSF activations at the second level, I would suggest looking for an explanation (e.g. some global change in blood flow/oxygen saturation). Masking might seem useful here, but it could be quite misleading: boundary voxels in the basal ganglia might still reach significance depending on the size of the mask, and would be interpreted accordingly, although the activation actually results from CSF.
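The correlation check itself is quick to do. A small numpy sketch with simulated data (the block design, the injected z-drift, and the 0.3 screening cutoff are all hypothetical choices for illustration; for a proper check you would correlate against the convolved task regressors from your design matrix):

```python
import numpy as np

# Hypothetical data: does head motion correlate with the task?
n = 100
rng = np.random.default_rng(1)
task = (np.arange(n) % 20 < 5).astype(float)   # simple on/off block design
rp = rng.normal(0.0, 0.05, (n, 6))             # realignment parameters
rp[:, 2] += 0.3 * task                         # simulated task-correlated z motion

# Pearson correlation of each motion parameter with the task regressor
r = np.array([np.corrcoef(task, rp[:, i])[0, 1] for i in range(6)])
flagged = np.flatnonzero(np.abs(r) > 0.3)      # arbitrary screening cutoff
```

If a parameter is flagged for most subjects, that points at a systematic, design-related problem rather than one noisy individual.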
Best,
Helmut