Dear Karsten, Rik, and other experts,
this thread continues to touch on very important issues, at least for my
feebleminded brain. So thanks for all the input, clarification, and
communication! Last night I became occupied with another two related
questions:
What would people consider a reasonable jitter, i.e. what is the minimum
SOA - TR difference? Obviously, there would be no point in having a
difference of less than the time required for a single slice acquisition
(well - maybe we could think of some fancy stuff, but let's leave that
aside). On the other hand, lots of folks have stuck with something like 1
s or a bit more. Why not go much lower? If the SOA - TR difference
equals exactly the duration of a single slice acquisition, you would get a
high effective sampling rate (for the entire slice-timed volume acquired)
and be able to ascribe the stimulus onset to the slices (without the
slice-timing procedure). Is there any reason for not doing that, e.g. is it
preferable to sample equivalent time points along the time course
repeatedly at some minimum rate?
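To make the "effective sampling rate" point concrete, here is a minimal sketch (my own illustration, not anyone's actual protocol; the TR, slice time, run length, and fixed-SOA design are all assumed) of the peri-stimulus times at which a given slice samples the HRF when the SOA exceeds the TR by exactly one slice acquisition:

import numpy as np

# Assumed, purely illustrative numbers:
TR = 2.0               # volume repetition time (s)
slice_time = 0.0625    # one slice, e.g. 32 slices per 2 s volume
soa = TR + slice_time  # SOA jittered by exactly one slice duration
n_vols = 200           # number of volumes simulated

scan_times = np.arange(n_vols) * TR                  # acquisition times of one slice
onsets = np.arange(0, scan_times[-1] + soa, soa)     # stimuli at a fixed SOA

# Time of each acquisition relative to the most recent stimulus onset:
psts = np.array([t - onsets[onsets <= t].max() for t in scan_times])

# The distinct peri-stimulus times form the effective sampling grid of the HRF.
grid = np.unique(np.round(psts, 4))
print(f"effective sampling interval ~ {np.diff(grid).min():.4f} s "
      f"({len(grid)} distinct peri-stimulus time points)")

With these numbers the distinct peri-stimulus times end up spaced by one slice duration (0.0625 s), i.e. the slice effectively samples the response far more densely than the nominal TR, which is the situation asked about above.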
I have also re-reviewed Miezin's paper (thrown in by Rik). The amplitude
reduction of the estimated HRF at short SOAs is quite an interesting
phenomenon of considerable impact (albeit outweighed by the
statistical advantage of more repetitions at rapid SOAs). It is stated
that this phenomenon could be due either to HRF saturation or to neuronal
effects. For what it's worth - personally, I would not generally favor
the latter. Any comments?
TIA- andreas