> Interestingly, in order of most to least statistical power
You seem to be looking at T values or cluster sizes; statistical power is something different.
> I'm not sure what to make of this. Any ideas?
As with any model-driven method, it often remains unclear whether the predictors are good or not. A simple example: we might detect a significant linear relationship although the data actually reflect a quadratic one. The same holds for fMRI. We rely on stimulus functions, the canonical HRF and the concept of convolution to generate predictors, but these might be suboptimal for particular experiments/stimuli, leaving aside variability between regions and subjects and changes over time.
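The linear-vs-quadratic caveat can be sketched numerically (a hypothetical illustration with synthetic data, not fMRI): a purely linear fit to quadratic data still looks convincing by its fit statistic alone; only the structured residuals reveal the misspecification.

```python
import numpy as np

# Hypothetical example: the "true" response is quadratic,
# but a purely linear predictor still fits very well.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 100)
y = x**2 + rng.normal(0.0, 0.05, size=x.size)  # quadratic ground truth + noise

# Fit a linear model y = a*x + b (the "wrong" but plausible predictor)
a, b = np.polyfit(x, y, deg=1)
resid = y - (a * x + b)
r2 = 1.0 - resid.var() / y.var()

print(f"linear slope = {a:.2f}, R^2 = {r2:.2f}")
# The linear fit explains most of the variance even though the model is
# misspecified -- the residuals, not the fit statistic, carry the
# systematic U-shaped curvature left unexplained.
```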
If the predictors are assumed to be reasonable, and if we want to focus on whole-brain analyses rather than particular slices, then a microtime onset set to the middle time bin is optimal, as it minimizes the discrepancies across the whole brain (as stated previously), regardless of whether it results in higher T values or not.
-> This is because, in the case of microtime resolution 36 and microtime onset 18, the discrepancy should be 0 for slice 18, 1/36 * TR "too late" for slice 17, 2/36 * TR "too late" for slice 16, ..., 17/36 * TR "too late" for slice 1, and 1/36 * TR "too early" for slice 19, ..., 18/36 * TR "too early" for slice 36. Summing up the absolute values, this results in 9 TR, i.e. 18 s of "overall discrepancy" with TR = 2 s.
-> If you go with microtime onset 1 instead, then the discrepancy is zero for slice 1, 1/36 * TR "too early" for slice 2, ..., 35/36 * TR "too early" for slice 36, i.e. 17.5 TR or 35 s of "overall discrepancy".
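The arithmetic in the two arrows above can be sketched as follows (assuming TR = 2 s, 36 slices and a microtime resolution of 36, as in the example):

```python
# Sketch of the "overall discrepancy" arithmetic, assuming
# TR = 2 s, 36 slices and microtime resolution 36.
TR = 2.0
n = 36

def overall_discrepancy(onset):
    # Sum over slices of the distance (in s) between each slice's
    # acquisition bin and the bin the regressor is sampled at.
    return TR * sum(abs(slice_bin - onset) for slice_bin in range(1, n + 1)) / n

print(overall_discrepancy(18))  # 18.0 s -> middle bin minimizes the total
print(overall_discrepancy(1))   # 35.0 s
```

Scanning all possible onsets confirms the total is minimized at the middle bin (18 or 19 give the same sum).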
In your case the predictor based on microtime onset 1 would be better only for slices 1-9. We should also consider that the most dorsal and ventral slices often do not contain any brain volume at all (depending on slice thickness), which would also speak against optimizing microtime onset for a slice that does not end up in the analysis.
In any case, the expected effect of different microtime onsets is difficult to assess without directly contrasting the con images from the different models, leaving aside that the data have been interpolated across slices by realignment, normalisation and smoothing. If you detect significant differences between the models Changed_T0 and Default that conflict with these expectations, then the differently shifted regressors might account not (just) for shifts due to acquisition but (also) for differences in time-to-peak, which means that another predictor based on an even larger shift might result in even higher T values.
Concerning your Defaults_Torben with microtime resolution 36, microtime onset 18 and onsets shifted by -TR/2: this should be similar (in fact identical) to microtime resolution 36, microtime onset 36 with the original onsets, which you haven't set up. In contrast, microtime resolution 36, microtime onset 18 with the original onsets should be similar to microtime resolution 36, microtime onset 1 with onsets shifted by -TR/2.
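A minimal sketch of the first equivalence (assumptions: TR = 2 s, microtime resolution 36, a crude gamma-shaped HRF rather than SPM's canonical one, and hypothetical event onsets): shifting all onsets by -TR/2 while sampling at the middle bin reproduces exactly the regressor obtained from the original onsets sampled at the last bin.

```python
import numpy as np

# Hypothetical setup: TR = 2 s, microtime resolution T = 36, 100 scans,
# a simple gamma-shaped HRF (NOT SPM's canonical HRF).
TR, T, nscans = 2.0, 36, 100
dt = TR / T
t = np.arange(0.0, 32.0, dt)
hrf = t**5 * np.exp(-t)
hrf /= hrf.sum()

def regressor(onsets_s, T0):
    """Stick functions on the microtime grid, convolved with the HRF,
    sampled once per scan at microtime bin T0 (1-based)."""
    u = np.zeros(nscans * T)
    for o in onsets_s:
        u[int(round(o / dt))] += 1.0
    conv = np.convolve(u, hrf)[: nscans * T]
    return conv[(T0 - 1) :: T][:nscans]

onsets = np.arange(10.0, 180.0, 20.0)  # hypothetical event onsets in seconds

r_shifted = regressor(onsets - TR / 2, T0=18)  # onsets shifted -TR/2, middle bin
r_orig36 = regressor(onsets, T0=36)            # original onsets, last bin
print(np.allclose(r_shifted, r_orig36))        # True: identical regressors
```

The two shifts cancel because moving the sampling bin from 18 to 36 delays sampling by exactly TR/2, which mirrors moving the stimuli earlier by TR/2.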
-> From a theoretical perspective (minimizing temporal inaccuracies) it is best to go with the original stimulus onsets, even if you did not use slice timing.
-> If you want to use slice timing you should perform the correction onto the temporally middle slice (again to minimize inaccuracies) and choose a microtime bin as a reference that reflects the middle of the microtime resolution (e.g. the 8th of 16, the 18th of 36)
-> If you slice-time corrected onto another slice, then you should adjust the microtime onset to match the time bin that reflects the acquisition time of the chosen reference slice (e.g. slice timing onto the first acquired slice = microtime onset 1). Also note that the numbers entered to specify the slice order during slice timing reflect spatial properties of the slices (the first acquired slice might e.g. be slice no. 2 or 36), while the number entered for the microtime onset corresponds to the temporal position of the time bins.
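The spatial-vs-temporal distinction in the last arrow can be sketched as follows (a hypothetical interleaved, odd-slices-first acquisition of 36 slices, assuming the microtime resolution equals the number of slices):

```python
# Hypothetical interleaved acquisition, odd slices first:
# spatial slice numbers listed in the order they are acquired.
slice_order = list(range(1, 37, 2)) + list(range(2, 37, 2))

def microtime_onset_for(ref_slice):
    # The microtime onset is the TEMPORAL position (1-based) of the
    # spatially-numbered reference slice in the acquisition order.
    return slice_order.index(ref_slice) + 1

print(microtime_onset_for(1))  # 1  -> slice no. 1 is acquired first
print(microtime_onset_for(2))  # 19 -> slice no. 2 is acquired 19th
```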