> Eventually, I came up with the following idea: Why not create a second,
> equally sized 3-D dataset with timing information (in seconds of shift),
> and, with each preprocessing step, apply every transformation that is
> applied to the time series to these "timing images" as well. This way,
> after all the preprocessing steps are done, you would have two values for
> each voxel/point in time: a contrast value and a time-shift value. Right
> before estimating the time course data, one would build a correctly
> shifted (ideal) time course from that pair and then fit this composed
> time course with the model.
Hi Jochen,
FYI... we actually do something similar to the above. However, we also
use ascending acquisition because, as you mention, the timing differences
between adjacent slices are much smaller in the first place. As for
whether one needs to use interleaved acquisition: if one is concerned about
cross-talk between slices, this is still a reason; however, with a small gap
and today's slice profiles I believe this is less necessary. I'd ask your
physicist about this, though.
Regards,
VDC
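For readers following the thread: the quoted scheme, in which a per-voxel
time-shift value is carried through preprocessing and the ideal regressor
is shifted by that value just before fitting, might be sketched roughly as
below. This is a minimal toy illustration, not anyone's actual pipeline;
the boxcar design, the single-voxel fit, and all names here are assumptions.

```python
import numpy as np

TR = 2.0                    # repetition time in seconds (assumed)
n_vols = 100
t = np.arange(n_vols) * TR  # nominal acquisition times of the volumes

def ideal_regressor(times):
    """Toy block design: 20 s on / 20 s off boxcar (an assumption)."""
    return ((times // 20) % 2).astype(float)

def fit_voxel(ts, shift):
    """Fit one voxel's time series with an ideal time course shifted by
    the voxel's accumulated time-shift value (from the 'timing image')."""
    x = ideal_regressor(t + shift)             # correctly shifted ideal course
    X = np.column_stack([x, np.ones_like(x)])  # regressor plus intercept
    beta, *_ = np.linalg.lstsq(X, ts, rcond=None)
    return beta[0]                             # effect estimate for this voxel

# Simulate a voxel acquired 0.5 s late, with a true effect size of 3.0,
# and recover the effect using its stored shift value.
shift = 0.5
rng = np.random.default_rng(0)
ts = 3.0 * ideal_regressor(t + shift) + rng.normal(0.0, 0.1, n_vols)
print(fit_voxel(ts, shift))
```

In a real pipeline the shift volume would be resampled alongside the data
at every spatial transformation, so each voxel's fit uses whatever timing
it ended up with after preprocessing rather than its nominal slice time.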