Dear all,
I was wondering whether a good alternative to slice-timing
interpolation would be to change the microtime onset (at the model
estimation step) for each slice to match its real acquisition time.
For example, if the stimulus is presented at the acquisition onset of
slice 1, I would shift the microtime onset for each following slice
back by TR/n_slices, e.g. 60 ms for slice 2 and 1.74 s (29 x 60 ms)
for slice 30, assuming TR = 1.8 s and 30 slices. I would like to use
this on unnormalized data, so in principle realignment should not
have too much of an impact on this.
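To make the idea concrete, here is a minimal sketch of the per-slice shifts I have in mind. The TR of 1.8 s, the 30 slices, and the ascending sequential acquisition order are assumptions for illustration, not a general prescription:

```python
# Hypothetical sketch: per-slice shift of the microtime onset,
# assuming TR = 1.8 s, 30 slices, ascending sequential acquisition.
TR = 1.8
n_slices = 30
dt = TR / n_slices  # 0.06 s between consecutive slice acquisitions

# Shift for slice k (1-indexed): (k - 1) * dt seconds after slice 1
shifts = [(k - 1) * dt for k in range(1, n_slices + 1)]

print(shifts[0])   # slice 1:  0.0 s (reference slice)
print(shifts[1])   # slice 2:  ~0.06 s
print(shifts[29])  # slice 30: ~1.74 s
```

For an interleaved sequence the acquisition order would of course have to be mapped to slice index first before computing the shifts.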
Is there any reason against such an approach?
Has anyone by any chance already implemented such a "hack", or could
you point me to the relevant lines of code to change?
Thanks a lot,
Martin