> Hello again SPM experts - thank you for your previous messages!  I am still
> confused ... can you please help me understand?  John suggested that the
> co-registration and non-linear deformation would be combined with "write
> normalised" - did you mean under Normalise: "Normalise-Write" or were you
> referring to Segment: writing the images as normalized as they were
> segmented?  Under "Normalise-Write" I only see an option to apply a single
> *.mat warp file - I do not understand how the Time1 to Template warps can be
> applied directly to Time2 images without co-registration.

Yep.  I meant Normalise: "Normalise-Write".  This function automatically reads 
the orientation information from the *.mat files (in older SPM versions) or 
the NIfTI headers (in the current and recommended SPM5 release) of the 
images, so you only need to specify the normalisation parameter file.
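To see why no extra resampling step is needed: coregistration with "estimate only" does not move any voxel data, it only updates the voxel-to-world matrix stored in the image's header (or .mat file).  When a normalised image is written, each output voxel is mapped through the normalisation parameters into subject space and then through the inverse of the source image's voxel-to-world matrix, so the rigid-body step is picked up automatically.  A purely affine sketch in Python (all matrices below are invented for illustration; the real parameter files also carry nonlinear warps):

```python
import numpy as np

def vox_to_source_vox(vox_mni, mni_vox2world, world_affine, src_vox2world):
    """Map an output (MNI) voxel index to a voxel index in a source image.

    The rigid-body coregistration never appears explicitly: it is already
    folded into src_vox2world, the source image's voxel-to-world matrix.
    """
    p = np.append(vox_mni, 1.0)                   # homogeneous coordinates
    world = world_affine @ (mni_vox2world @ p)    # MNI voxel -> subject world
    src = np.linalg.inv(src_vox2world) @ world    # subject world -> source voxel
    return src[:3]

# Toy matrices (assumptions, not SPM values):
mni_vox2world = np.diag([2.0, 2.0, 2.0, 1.0])     # hypothetical 2 mm output grid
affine = np.eye(4); affine[:3, 3] = [5.0, 0, 0]   # toy MNI -> subject mapping
time2_hdr = np.eye(4); time2_hdr[:3, 3] = [-3.0, 0, 0]  # header after coreg

# The same mapping serves Time1 and Time2; only the header matrix differs,
# so applying the Time1 normalisation to Time2 needs no extra step.
print(vox_to_source_vox(np.array([10, 0, 0]), mni_vox2world, affine, time2_hdr))
# maps MNI voxel [10, 0, 0] to source voxel [28, 0, 0] for these toy matrices
```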

>
> However, now I am wondering if it may be better to segment/normalise the
> two time-points independently, and subtract the resulting segments.  If the
> warps from Time1 are applied to Time2 images and then we modify Time2
> images by Time1 warps, what would the resulting values for the Time2 images
> represent? - it would no longer be volume at Time2.  Then the subtraction
> of Time1 from Time2 would also no longer represent the change in volume. 
> Wouldn't this be more difficult to interpret?  Also, isn't it preferable to
> take advantage of the SPM5 improved simultaneous segment/normalize function
> at both time-points?

I'm not entirely certain what the best model would be.  The original 
suggestion was based on the idea that a rigid-body alignment between the two 
images would be more accurate than independently registering each of them to 
a common template.

Doing it properly could involve combining the segmentation with nonlinear 
within-subject registration, but this would require a lot of coding and 
would probably be too complicated to explain in a paper.  Therefore, it is 
not part of SPM.

>
> On another topic: since we have children we tried making a study-specific
> template, but found we got better results segmenting with the SPM default
> templates.  Segments with the study-specific template had more non-brain
> and wrong-tissue included.  I don't know why that would be, but given the
> obviously better segmentation we are sticking with the SPM defaults.

The issue with study-specific templates is that they may begin to drift away 
from the original templates supplied with SPM.  I would suggest using 
whatever currently works best for your data, and I will continue trying to 
improve the algorithms.

Best regards,
-John

> -----Original Message-----
> From: SPM (Statistical Parametric Mapping) [mailto:[log in to unmask]] On
> Behalf Of John Ashburner
> Sent: Thursday, August 21, 2008 5:46 AM
> To: [log in to unmask]
> Subject: Re: [SPM] longitudinal VBM
>
> > Just to verify that I understand, here is the plan:
> > 1) Segment Time1 and Time2 in native space.
> > 2) Coregister Time2 segments to Time1 segments (estimate only).
> > 3) Apply the sn.mat from Time1 -> MNI to Time1 images and "preserve
> > amount" to modulate.
> > 4) Apply the .mat from the Time1->Time2 coreg and the
> > Time1->MNI sn.mat to Time2 images (also selecting "preserve amount").
> > 5) Mask the images to get rid of values < .1
> > 6) Smooth
> > 7) Subtract the MNI-space Time2 segments minus MNI-space Time1 segments.
> >
> > Is this right?  Is there a simple way to combine .mat files for step 4 so
> > the images will not have to be resampled twice?
>
> The rigid body transform as well as the nonlinear deformation would be
> combined when you do a "write normalised".
>
> > Also, when I segment Time1 in native space, then use Normalize to apply
> > the sn.mat and "preserve amount", should I be concerned that I get a
> > different result than when Segment directly produces modulated,
> > normalized segments?
>
> Not too concerned, providing you do a similar thing for both datasets.  The
> reason is that when Segment produces warped tissue class images, it smooths
> the data slightly first, because the warped images are at a lower resolution
> than the originals.  See the following for more info:
>     http://en.wikipedia.org/wiki/Decimation_%28signal_processing%29
>
> >  When I subtract
> > these images I was hoping to get all 0, but have values ranging from
> > about -.2 to .2.
>
> A slight smoothing of the original segmented images would give more similar
> results.  See around line 148 of spm_preproc_write.m:
>         %
>         % Average voxel size of tissue probability maps (ie images that are
>         % written).
>         ovx     = abs(det(p.VG(1).mat(1:3,1:3)))^(1/3);
>         %
>         % FWHM of required smoothness (in voxels).  Don't ask me why it is
>         % calculated like this.  It just seemed like a good idea at the
>         % time.
>         fwhm    = max(ovx./sqrt(sum(p.VF.mat(1:3,1:3).^2))-1,0.1);
>         %
>         % Do the smoothing using the function at around line 193.  Notice
>         % that it uses spm_smoothkern to generate the kernel.  This assumes
>         % that images are continuous over space - rather than just a bunch
>         % of stick functions arranged on a regular grid.
>         dat{k1} = decimate(dat{k1},fwhm);
>
>
> Best regards,
> -John
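For readers working through the quoted MATLAB snippet outside SPM, the FWHM heuristic can be sketched in Python.  The voxel-to-world matrices below are invented for illustration; in SPM they come from p.VG(1).mat and p.VF.mat:

```python
import numpy as np

# Sketch of the pre-resampling smoothing heuristic quoted from
# spm_preproc_write.m.  Hypothetical affines: a 2 mm template grid and a
# 1 x 1 x 1.5 mm native image.
template_mat = np.diag([2.0, 2.0, 2.0, 1.0])   # assumed p.VG(1).mat analogue
source_mat   = np.diag([1.0, 1.0, 1.5, 1.0])   # assumed p.VF.mat analogue

# Average voxel size of the tissue probability maps (the written images).
ovx = abs(np.linalg.det(template_mat[:3, :3])) ** (1.0 / 3.0)

# Source voxel sizes: column norms of the 3x3 part of the affine.
src_vox = np.sqrt((source_mat[:3, :3] ** 2).sum(axis=0))

# FWHM (in source voxels) of the smoothing applied before writing to the
# coarser grid, so that decimation does not alias high frequencies; floored
# at 0.1 voxels.
fwhm = np.maximum(ovx / src_vox - 1.0, 0.1)
print(fwhm)   # roughly [1, 1, 0.33] for these assumed voxel sizes
```

The point of the floor at 0.1 is simply that some smoothing is always applied, even when the output grid is no coarser than the input.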