Dear wizards,
while evaluating the advantages and disadvantages of 3D vs. 4D time series
in SPM5 (update 958) I found different results. So I tried to keep all
images identical and still got different results.
Here's the tool chain.
I converted a 146-volume EPI time series into a 4D NIfTI image using
dicomnifti and used fslsplit to produce 146 3D images from it. I also
converted a 3D T1 image using SPM and segmented it, producing a
struct_seg_sn.mat file.
First step: coregistration onto the structural image, run separately for the
4D and the 3D series, produced two identical time series (as expected). I
used the first functional image in both cases.
Second step: realignment estimation (number of passes = 1: register to first
image). Here I got some very small differences: the maximum difference
between the two rp_*.txt files was 0.13e-06. Still no big deal, but
interesting anyway. The resulting series still look the same (judging by the
signal intensity of random images and random voxels within the series).
Comparing the .mat matrices image by image shows a maximum difference of
3.0670e-06.
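In case it helps, this is the kind of comparison I ran, sketched here in
Python rather than MATLAB; the file names in the usage comment are
placeholders for the two runs, not my actual file names:

```python
# Sketch: compare two SPM realignment parameter files (rp_*.txt,
# whitespace-separated text, 6 columns: 3 translations, 3 rotations).

def read_params(path):
    """Read an rp_*.txt file into a list of rows of floats."""
    with open(path) as f:
        return [[float(v) for v in line.split()] for line in f if line.strip()]

def max_abs_diff(rows_a, rows_b):
    """Maximum absolute element-wise difference between two parameter tables."""
    return max(
        abs(a - b)
        for row_a, row_b in zip(rows_a, rows_b)
        for a, b in zip(row_a, row_b)
    )

# Example usage (placeholder file names):
#   a = read_params("rp_3d_run.txt")
#   b = read_params("rp_4d_run.txt")
#   print("max |diff| =", max_abs_diff(a, b))
```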
Now I do "Normalisation -> Write" using the _same_ struct_seg_sn.mat for both
time series.
Displaying the first normalised image of each EPI series already shows a
difference in signal intensity at the origin (520 vs. 523). I checked several
normalised images and found similar small but recognisable differences.
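To quantify such intensity differences, one could compare the normalised
volumes voxel by voxel. A minimal sketch in Python (the comparison helper is
pure Python; loading the NIfTI data is left to an external reader such as
nibabel, which is an assumption here and not part of SPM — within SPM one
would use spm_vol/spm_read_vols instead):

```python
# Sketch: voxel-wise comparison of two images given as flat intensity
# sequences of equal length.

def compare_voxels(vals_a, vals_b, tol=0.0):
    """Return (max absolute difference, count of voxels differing by more than tol)."""
    max_d = 0.0
    n_diff = 0
    for a, b in zip(vals_a, vals_b):
        d = abs(a - b)
        if d > max_d:
            max_d = d
        if d > tol:
            n_diff += 1
    return max_d, n_diff

# Example usage (assumes nibabel is installed; file names are placeholders):
#   import nibabel as nib
#   a = nib.load("w3d_epi_0001.nii").get_fdata().ravel()
#   b = nib.load("w4d_epi_0001.nii").get_fdata().ravel()
#   print(compare_voxels(a, b))
```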
After smoothing (6x6x6 mm) the results look quite different, considering that
the exact same images went in and the same preprocessing was used. I am
attaching two PDFs documenting the difference.
Did I do something very stupid, or is there indeed a bug in 4D image
processing within SPM?
Regards,
Roland
PS: if anyone is interested I can put up the sample dataset for download.
System: Debian etch, matlab R2007a
--
Dr. Roland Marcus Rutschmann <[log in to unmask]>
Institute for Experimental Psychology, University of Regensburg
Universitätsstraße 31, 93053 Regensburg, Germany
Tel: +49 941 943 2533, Fax: +49 941 943 3233
http://www.psychologie.uni-regensburg.de/Rutschmann