Dear everyone,
It's worth noting that for EPI template normalisation your EPIs should have distortions similar to those of the images used to create the EPI template. Otherwise one can easily introduce strong biases. For whatever reason this is hardly ever considered; usually only the issues with aligning distorted EPIs to (differently distorted) T1 volumes are discussed. E.g. if your EPI files show large signal loss in ventromedial PFC, more dorsal regions might get warped "downward". The output then looks nice, like a "complete" brain, but the "vmPFC" in the normalised files of course doesn't reflect vmPFC, but e.g. the subgenual area.
The same holds if you didn't cover the whole brain, i.e. if the most dorsal or ventral regions fall outside the FoV. With EPI normalisation this is much less evident when looking at the normalised data, which might be one reason why people stick to EPI normalisation.
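As a rough sanity check for such FoV clipping, one can flag volumes whose outermost axial slices carry almost no signal. A minimal sketch with numpy on a synthetic array (with real data you would first load the volume, e.g. via nibabel; the function name and thresholds here are just illustrative assumptions, not part of any package):

```python
import numpy as np

def fov_clipped(vol, n_slices=3, thresh_frac=0.05):
    """Flag a volume whose bottom/top axial slices look empty.

    vol: 3D array (x, y, z), z running inferior -> superior.
    Returns (bottom_clipped, top_clipped): True means the outermost
    n_slices carry almost no signal relative to the volume mean,
    hinting that ventral/dorsal regions fell outside the FoV.
    """
    ref = vol.mean()
    bottom = vol[..., :n_slices].mean()
    top = vol[..., -n_slices:].mean()
    return bottom < thresh_frac * ref, top < thresh_frac * ref

# Synthetic example: a "brain" filling the volume except the top slices,
# as if the FoV was placed too low and dorsal cortex was cut off.
vol = np.ones((64, 64, 30))
vol[..., -5:] = 0.0
print(fov_clipped(vol))  # (False, True)
```

This only catches missing coverage, of course, not distortion or signal dropout within the FoV.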
> but any smoothing (>=6mm) would make the procedures almost indistinguishable
Well, the output is much blurrier of course, but if one procedure was more precise beforehand, i.e. resulted in better overlap, it should still retain an advantage in the smoothed data (at least in theory).
Another aspect, which came up while looking at Christopher's poster: when working with atlases it's also quite important to know how the labels were created. Which type of normalisation was used, and which MNI templates/TPMs? E.g. for the Hammersmith n30r83 and the LPBA40 brain atlases the original structural volumes are available, so it is possible to preprocess these like your own data, possibly resulting in better overlap between your data and the atlas, and thus in more precise labels (as better preprocessing may well increase the overlap between the different subjects used for atlas building). For other brain atlases, however, the original data is often not available or rather difficult to obtain.
Best
Helmut