Dear Sandra,
> the 1st level model the microtime resolution has to be set as the number of slices
In fact no, the microtime resolution is not related to the number of slices acquired. It reflects the temporal resolution at which the stimulus ("on-off") functions are sampled. It is not the exact onsets and durations that enter the convolution, but versions rounded to the microtime bins. When relying on a stimulus logfile with a (trustworthy) resolution of ms or tenths of ms, one would have to adjust the microtime resolution accordingly to make use of the full information, e.g. with a TR of 2 s the resolution would be set to 2,000 or 20,000. With a lower resolution, in contrast, onsets and durations are shifted/rounded to the nearest microtime bin; in the extreme case of a microtime resolution of 1, all onsets occurring within a TR would be shifted to either the beginning or the end of that TR and would also last for n * TR seconds (n = 1, 2, 3, ...). The predictor would then be unnecessarily noisy. Accordingly, it is somewhat misleading to speak of "0 s" durations in event-related designs: there is not really a zero-second "on" period, but an "on" period lasting one microtime bin (with the default settings of 16 bins, reference bin 8, and a TR of 2 s, this would be 0.125 s). Put differently, even if a slice is acquired only every few seconds, the predicted amplitude differs between trials depending on the exact stimulus onset.
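To illustrate the rounding, here is a minimal sketch (plain Python, not SPM code; the onset value 0.73 s and the function names are my own illustrative assumptions) of how an onset known to ms precision lands on different microtime bins depending on the resolution T, with bins of width TR / T:

```python
# Sketch (not SPM code): discretizing an onset into microtime bins.
# TR, the onset of 0.73 s, and the function names are illustrative assumptions.
TR = 2.0  # repetition time in seconds


def bin_width(T):
    """Duration of one microtime bin in seconds, for resolution T."""
    return TR / T


def snap_to_bin(onset, T):
    """Shift an onset to the nearest microtime-bin boundary."""
    dt = bin_width(T)
    return round(onset / dt) * dt


onset = 0.73  # event onset in seconds, known to ms precision
print(snap_to_bin(onset, 16))    # default resolution: lands on 0.75 s (bin width 0.125 s)
print(snap_to_bin(onset, 2000))  # ms resolution: onset is effectively preserved
print(snap_to_bin(onset, 1))     # extreme case: shifted to the start of the TR (0.0 s)
```

With T = 2000 the onset survives intact, while with T = 1 it is pulled a full 0.73 s to the TR boundary, which is the "unnecessarily noisy predictor" case described above.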
In practice this shouldn't make much of a difference, as the assumptions are rather crude approximations anyway (does the exact shape of the HRF really hold for every single voxel? Is the "on" period at the neural level really identical to the specified duration?), but since we rely on those assumptions there is no reason not to use the full temporal information available.
Now with regard to the reference microtime bin: when slice-timing to the temporally middle slice and following the "number of slices" logic, you would indeed run into a problem, as you can't use 23 out of 45 and 23 (or 24) out of 46 in a single model. Note that the temporal resolution of the stimulus function would also differ slightly, TR/45 vs. TR/46, which might introduce a bias: if the amplitude of predictor 1 is slightly higher, its beta estimates would be scaled down a little, which might show up as a significant difference between the two sessions.
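To make the mismatch concrete, a small sketch (plain Python, not SPM code; the 1-based bin numbering and centre-of-bin timing are my assumptions for illustration) of which time point the reference bin corresponds to in the two sessions:

```python
# Sketch: time point of the reference microtime bin when the resolution
# equals the number of slices. 1-based bin indexing and centre-of-bin
# timing are assumptions for illustration, not SPM internals.
TR = 2.0  # seconds


def ref_time(n_bins, ref_bin):
    """Centre of the reference microtime bin, in seconds."""
    return (ref_bin - 0.5) * TR / n_bins


print(ref_time(45, 23))  # 45 slices, bin 23: exactly TR/2 = 1.0 s
print(ref_time(46, 23))  # 46 slices, bin 23: about 0.978 s
print(ref_time(46, 24))  # 46 slices, bin 24: about 1.022 s
```

With 45 slices, bin 23 sits exactly at the temporal middle, whereas with 46 slices neither bin 23 nor bin 24 does, so the two sessions cannot be referenced to the same time point under the "number of slices" convention.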
So basically, if you want both sessions in a single model, you could just stick with the defaults of 8 out of 16, or 1,000 out of 2,000, even if the temporal middle is not exactly the same. Alternatively, go with two separate models, but in that case I would do so for all of your subjects, including those with identical settings across sessions. However, to rule out any possible confounds I would simply discard those subjects (easy to say). Failing to find significant differences in sensitivity doesn't rule out a bias. In fact, we know there is a bias due to the different settings anyway (possibly also with regard to partial Fourier or fat suppression being turned on/off); whether you are able to detect it is rather a power issue.
Best
Helmut