Dear Pei Ling,
Modeling REST is alright then. I would not try to model the instruction in your case, at least not if it occurs very close in time to when subjects move, since you can't really separate the processes due to the instruction from those due to movement execution. That said, I don't think it can explain your findings, as it should affect the first and the second part equally.
Looking at the realignment parameters: these are across the two sessions, aren't they? Did you go with two separate sessions within a single model, or with one long session? Except for that big jump, which I guess is due to different head placement at the second appointment, head motion does not seem to be much of an issue in general. For evaluation I would suggest calculating fast scan-to-scan motion, e.g. based on the formula in http://cibsr.stanford.edu/content/dam/sm/cibsr/documents/tools/methods/artrepair-software/ClinicalSubjectMotionHBM2011.pdf (which is also used in the ArtRepair toolbox). For any volume with motion exceeding e.g. 0.5 mm you could add a dummy regressor to the design, coding that "bad" volume separately; the same goes for the volumes immediately preceding and following each fast-motion event. There are minor jumps at around vol. 460 (see y translation) and at vol. 720 (z translation). I would assume these to be large enough to produce signal artefacts visible to the naked eye, although I'm not sure whether they can explain your unexpected findings. Of course, if the fast motion coincides with a certain condition, and if there are only a few trials of that type, and if the artefact is large enough, then even a single motion event will heavily bias the estimation. The ArtRepair toolbox (http://cibsr.stanford.edu/tools/human-brain-project/artrepair-software.html) also offers options to search for signal artefacts that might stem from the scanner itself, which could be another error source.
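In case it's useful, the scan-to-scan motion and spike-regressor idea could be sketched roughly like this (a minimal numpy sketch, not the exact ArtRepair implementation; the 65 mm head radius for converting rotations to displacement and all function names are my assumptions, and in practice you would load the parameters from SPM's rp_*.txt file):

```python
import numpy as np

def scan_to_scan_motion(rp, head_radius_mm=65.0):
    """Approximate scan-to-scan motion from realignment parameters.

    rp: (n_vols, 6) array with 3 translations (mm) and 3 rotations (rad),
        e.g. read with np.loadtxt from an SPM rp_*.txt file.
    Rotations are converted to arc length at an assumed head radius.
    Returns one displacement value per volume (first volume gets 0).
    """
    params = rp.copy().astype(float)
    params[:, 3:] *= head_radius_mm            # radians -> mm at head surface
    deltas = np.diff(params, axis=0)           # change between successive volumes
    motion = np.sqrt((deltas ** 2).sum(axis=1))
    return np.concatenate([[0.0], motion])     # pad so length matches n_vols

def spike_regressors(motion, threshold=0.5):
    """One dummy regressor per flagged volume, also covering the volume
    before and after each fast-motion event (threshold in mm)."""
    bad = np.where(motion > threshold)[0]
    flagged = set()
    for b in bad:
        for v in (b - 1, b, b + 1):
            if 0 <= v < len(motion):
                flagged.add(v)
    regs = np.zeros((len(motion), len(flagged)))
    for col, v in enumerate(sorted(flagged)):
        regs[v, col] = 1.0                     # a 1 at the flagged volume only
    return regs
```

The resulting matrix can then simply be appended as nuisance regressors to the first-level design.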
Instead of looking only at the thresholded statistics, you could lower the threshold for the contrasts of the first session: does a pattern similar to that of the second session emerge? You could also check the (unthresholded) beta estimate images directly: how do they look, with those from the first session next to those from the second? In principle the beta estimates could be very similar, yet the T maps might still differ to a large extent due to the extra noise in the second session.
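A quick quantitative version of that visual comparison would be to correlate the two sessions' beta maps over in-brain voxels (a minimal sketch, assuming the beta images have already been loaded into arrays, e.g. with nibabel; the function name and mask are my own illustration):

```python
import numpy as np

def compare_beta_maps(beta_ses1, beta_ses2, mask):
    """Pearson correlation between two sessions' beta estimates.

    beta_ses1, beta_ses2: 3-D arrays of beta estimates (in practice the
    data from SPM's beta_*.nii images for matching regressors).
    mask: boolean 3-D array selecting in-brain voxels.
    """
    a = beta_ses1[mask]          # flatten to the masked voxels only
    b = beta_ses2[mask]
    return np.corrcoef(a, b)[0, 1]
```

A high correlation despite diverging T maps would point toward a noise problem in the second session rather than a genuinely different response pattern.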
There's nothing much else to suggest right now I'm afraid.
Hope this helps
Helmut