Hi Amri,
my 2 cents:
> 1. slice time correction --> realignment --> normalisation (source:
> EPI mean; template: EPI) --> smoothing
I think that (1) using plain "normalize" should be discouraged, as it
has effectively been superseded by unified segmentation, which uses a
much more comprehensive approach to matching an input image to a target;
and (2) normalizing the functionals directly is almost never the best
solution, as you will have less spatial resolution as well as worse
image contrast to work with (the exception being strong distortions and
a large group, for which you could then construct your own template).
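For illustration, a minimal batch sketch of what I mean (SPM8's
"Segment" followed by writing out the normalized functionals; field
names as shown in the batch editor, so please verify them in your
installation):
% snip
% sketch: unified segmentation of the structural, then apply the
% resulting parameters to the (coregistered) functionals
spm('defaults','fmri'); spm_jobman('initcfg');
t1  = spm_select(1,'image','Select structural T1');
epi = spm_select([1 Inf],'image','Select coregistered functionals');
[p,nam] = spm_fileparts(t1);
job1{1}.spm.spatial.preproc.data = {t1};            % unified segmentation
spm_jobman('run',job1);                             % writes *_seg_sn.mat
job2{1}.spm.spatial.normalise.write.subj.matname  = ...
    {fullfile(p,[nam '_seg_sn.mat'])};              % created by the step above
job2{1}.spm.spatial.normalise.write.subj.resample = cellstr(epi);
job2{1}.spm.spatial.normalise.write.roptions.vox  = [3 3 3];  % output voxel size
spm_jobman('run',job2);
% snap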
> 2. slice time correction --> realignment --> coregistration (source:
> structural; reference: mean EPI) --> normalisation (source:
> structural; reference: mean EPI) --> smoothing
I would not expect the results to be better if you normalize a
structural (T1, I presume) to a functional template (T2-weighted,
usually). And again, try unified segmentation / VBM8 / New Segment for
that purpose.
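If you do go via the structural, the coregistration step would look
something like this (again a sketch in SPM8 batch syntax; the
estimate-only variant just updates the header, no reslicing):
% snip
% sketch: coregister the structural to the mean EPI, so that the
% normalization parameters estimated from the T1 also fit the functionals
spm('defaults','fmri'); spm_jobman('initcfg');
meanepi = spm_select(1,'image','Select mean EPI');
t1      = spm_select(1,'image','Select structural T1');
job{1}.spm.spatial.coreg.estimate.ref    = {meanepi};
job{1}.spm.spatial.coreg.estimate.source = {t1};
spm_jobman('run',job);
% snap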
> I have better normalisation results with the 1st method. However,
> when I run individual level stats, the resulting T-maps don't
> reflect the alignment with the template that I observe when checking
> normalisation. I have checked, and my script is picking up the
> correct files. This discrepancy between normalisation and T-maps is
> leading to a shrinking effect at the group level.
I would not be surprised to see bad alignment when using normalize ;) No
offense, but as John himself has pointed out repeatedly, the normalize
code is very old and has not been optimized in several years. In SPM12b,
the code behind "normalize" is what is called New Segment in SPM8.
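For what that means in practice, here is a sketch of the corresponding
SPM12 call ("Normalise: Estimate & Write", which uses the segmentation
machinery under the hood; field names are from the current SPM12 batch
editor and may differ in the 12b beta):
% snip
spm('defaults','fmri'); spm_jobman('initcfg');
t1  = spm_select(1,'image','Select structural T1');
epi = spm_select([1 Inf],'image','Select coregistered functionals');
job{1}.spm.spatial.normalise.estwrite.subj.vol      = {t1};
job{1}.spm.spatial.normalise.estwrite.subj.resample = cellstr(epi);
job{1}.spm.spatial.normalise.estwrite.woptions.vox  = [3 3 3];
spm_jobman('run',job);
% snap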
> Attached are screenshots of the normalisation result of one subject
> (SPM), that individual’s T-map overlaid on the EPI template (using
> MRICron; at a very low threshold), and a group T-map overlaid on the
> EPI template (using MRICron).
Actually, as far as can be judged from the images, the misalignment is
not that bad in the group images, and how bad it is will also depend on
how much you smoothed the images. Remember that SPM will only analyze
voxels for which data is available in all subjects (above a certain
threshold). With your n = 80, you are bound to have some misalignment in
some subjects (if you use ... but I repeat myself ;), which leads to
some voxels not being present in all of them, so they are not considered
for the group analysis. Check the mask.img file to see which voxels are
in, and which are out.
out. You can also sum up all con-images using something like
% snip
% select all first-level con images
cons  = spm_select([1 Inf],'image','Select con maps',[],pwd,'.*');
nimgs = size(cons,1);
% use the first image's header as a template for the output
V    = spm_vol(cons(1,:));
sums = zeros(V.dim);
for i = 1:nimgs
    % count, per voxel, how many images contain valid (non-NaN, nonzero) data
    temp = spm_read_vols(spm_vol(cons(i,:)));
    temp(isnan(temp)) = 0;
    sums = sums + (temp ~= 0);
end
% write out the per-voxel subject count
V.fname = [pwd filesep 'con_sum.img'];
spm_write_vol(V,sums);
% snap
to see where overlap is perfect (image value = number of subjects) and
where it is not (don't forget to set interpolation to nearest neighbour
when displaying, if you want to see the actual individual voxel values).
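You can then also check the overlap numerically, along these lines:
% snip
% read the summed image back in and report the fraction of voxels
% that contain data from all subjects
Vs = spm_vol(fullfile(pwd,'con_sum.img'));
s  = spm_read_vols(Vs);
n  = max(s(:));   % equals the number of subjects where overlap is perfect
fprintf('%.1f%% of nonzero voxels are present in all %d images\n', ...
    100*nnz(s == n)/nnz(s > 0), n);
% snap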
Hope this helps,
Marko
--
____________________________________________________
Marko Wilke, MD, PhD
Pediatrician
Head, Experimental Pediatric Neuroimaging
University Children's Hospital
Dept. III (Pediatric Neurology)
Hoppe-Seyler-Str. 1
D - 72076 Tübingen, Germany
Tel. +49 7071 29-83416
Fax +49 7071 29-5473
[log in to unmask]
http://www.medizin.uni-tuebingen.de/kinder/epn/
____________________________________________________