Hello,
 
I'm working with scans from about 80 participants who performed a working memory task. I have preprocessed these data using the following two methods (a rough script sketch of the first one follows the list):
 
1. slice time correction --> realignment --> normalisation (source: EPI mean; template: EPI) --> smoothing
2. slice time correction --> realignment --> coregistration (source: structural; reference: mean EPI) --> normalisation (source: structural; reference: mean EPI) --> smoothing
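
In case it helps, here is a rough sketch of pipeline 1 written as a script, using nipype's SPM wrappers rather than my actual batch code; the file names, slice order, TR, and smoothing kernel below are placeholders, not the real study values:

from nipype.interfaces import spm

func = ['sub01_run1.nii']        # placeholder 4D EPI file
epi_template = 'EPI.nii'         # placeholder path to SPM's EPI template

# 1. slice time correction (placeholder acquisition parameters)
st = spm.SliceTiming(in_files=func, num_slices=36, time_repetition=2.0,
                     time_acquisition=2.0 - 2.0 / 36,
                     slice_order=list(range(1, 37)), ref_slice=18)
st_res = st.run()

# 2. realignment (estimate & reslice, which also writes the mean EPI)
realign = spm.Realign(in_files=st_res.outputs.timecorrected_files,
                      register_to_mean=True, jobtype='estwrite')
re_res = realign.run()

# 3. normalisation: estimate mean EPI -> EPI template, apply to the time series
norm = spm.Normalize(source=re_res.outputs.mean_image,
                     template=epi_template,
                     apply_to_files=re_res.outputs.realigned_files,
                     jobtype='estwrite')
norm_res = norm.run()

# 4. smoothing
smooth = spm.Smooth(in_files=norm_res.outputs.normalized_files, fwhm=[8, 8, 8])
smooth.run()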
 
I get better normalisation results with the first method. However, when I run the individual-level stats, the resulting T-maps do not show the alignment with the template that I see when checking normalisation. I have checked that my script is picking up the correct files. This discrepancy between the normalisation results and the T-maps leads to a shrinking effect at the group level.
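
For reference, this is the kind of check I can run to rule out a simple header/grid mismatch between the T-maps and the template (a minimal sketch with nibabel; the file paths are placeholders):

import numpy as np
import nibabel as nib

tmap = nib.load('spmT_0001.nii')   # placeholder first-level T-map
tpl = nib.load('EPI.nii')          # placeholder EPI template

print('T-map    shape:', tmap.shape[:3])
print('Template shape:', tpl.shape[:3])
print('Same grid:', tmap.shape[:3] == tpl.shape[:3]
      and np.allclose(tmap.affine, tpl.affine))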
 
Attached are screenshots of the normalisation result of one subject (SPM), that individual’s T-map overlaid on the EPI template (using MRICron; at a very low threshold), and a group T-map overlaid on the EPI template (using MRICron).
 
We have also tried several other variants of the preprocessing pipeline (without slice time correction, using Analyze format files instead of NIfTI, and varying the parameters at all stages).
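
Related to the Analyze-vs-NIfTI variant: since Analyze files carry no sform/qform orientation information, one thing I can inspect is how orientation is stored in the headers of the files entering the first-level model (again a minimal nibabel sketch; the file names are placeholders):

import nibabel as nib

for fname in ['swarsub01_run1.nii', 'spmT_0001.nii']:   # placeholder names
    hdr = nib.load(fname).header
    print(fname,
          'sform_code =', int(hdr['sform_code']),
          'qform_code =', int(hdr['qform_code']),
          'voxel size =', hdr.get_zooms()[:3])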
 
Can anyone explain to me what’s going on here? Why am I seeing this shrinking effect at the group level and, sometimes, at the individual level?
 
Best,

- Amri