> I'm having problems using custom tpm images in spm5, I searched on the
> list archive and found this thread:
>
> http://www.jiscmail.ac.uk/cgi-bin/webadmin?A2=ind05&L=SPM&P=R396109&I=-3
>
> which basically says "don't". However, I think there are cases (e.g.
> paediatric, gerontological) where the case for custom atlases is quite
> strong.
Fair enough. It was a little bit of a lazy answer. In SPM2, custom templates
were needed because everyone seems to have different T1-weighted sequences,
field strengths etc. Therefore, because the initial affine registration is
based on the mean-squared difference, these scans would not be so well
matched with the templates released with SPM. The custom priors were
suggested because these would match the templates. Fewer subjects were
needed because the tissue probability maps were smoothed.
In SPM5, the tissue probability maps are warped so that they can be overlaid
on the individual images, so some of the anatomical differences between
populations are accounted for by this warping. Because more of these shape
differences are explained by the segmentation model, there may be less need
for custom tissue probability maps.
>
> So how should one create the tpms?
Average loads of segmented images. You may also need to do some manual
editing - particularly for the CSF class.
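As a rough sketch of the averaging step (not SPM code — a hypothetical helper
assuming the per-subject segmentations have already been warped into a common
atlas space and loaded as arrays):

```python
import numpy as np

def average_tissue_maps(segmentations):
    """Voxel-wise average of per-subject maps for one tissue class.

    segmentations: list of 3-D arrays, one per subject, each holding the
    voxel-wise probability (or binary label) for a single tissue class,
    e.g. grey matter, already aligned to the same atlas space.
    """
    stack = np.stack(segmentations, axis=0)  # shape (n_subjects, x, y, z)
    return stack.mean(axis=0)                # fraction of subjects with that tissue

# Toy example: two "subjects" with tiny 2x2x1 grey-matter maps.
s1 = np.array([[[0.8], [0.2]], [[0.6], [0.4]]])
s2 = np.array([[[0.6], [0.0]], [[0.4], [0.2]]])
tpm = average_tissue_maps([s1, s2])
# tpm[0, 0, 0] is (0.8 + 0.6) / 2 = 0.7
```

In practice you would repeat this for each tissue class (grey, white, CSF)
and check that the classes, plus a background class, sum to one at each voxel.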
> Smoothed or not?
Not if you have enough subjects. If you don't have many subjects, then you may
need to smooth in order to account for the variability that you would have
encountered if you had more subjects.
> If smoothed, what
> kernel width?
It depends.
http://www.jiscmail.ac.uk/cgi-bin/webadmin?A2=ind05&L=SPM&P=R397219&I=-3
You could try different amounts of smoothing to see which works best. It may
be possible to determine the optimal amount of smoothing by doing a proper
model selection. The log-likelihood (shown plotted in the lower-left window)
may be a useful guide. Better fitting models may have larger
log-likelihoods, so trying the segmentation with different amounts of
smoothing and comparing the log-likelihoods may be one way of figuring out
what the best tissue probability maps are.
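The comparison itself is simple once the log-likelihoods are collected — run
the segmentation once per candidate smoothing, note the log-likelihood each run
reports, and keep the priors that gave the largest value. A sketch with purely
illustrative numbers:

```python
# Hypothetical log-likelihoods reported by the segmentation for one subject,
# run once per candidate set of smoothed tissue probability maps
# (keys are smoothing FWHM in mm; values are made up for illustration).
loglik_by_fwhm = {
    0.0: -1.52e6,   # unsmoothed priors
    4.0: -1.48e6,
    8.0: -1.50e6,
}

# Better-fitting models have larger (less negative) log-likelihoods.
best_fwhm = max(loglik_by_fwhm, key=loglik_by_fwhm.get)
# best_fwhm == 4.0 for these illustrative values
```

With several subjects you would compare the summed log-likelihoods across
subjects rather than a single run.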
> What protocol was used for creating the spm5 tpms?
All I know is what is written in spm_templates.man and on the ICBM web pages.
http://www.loni.ucla.edu/ICBM/ICBM_Probabilistic.html
All 452 ICBM subject T1-weighted scans were aligned with the atlas space,
corrected for scan inhomogeneities, and classified into gray matter, white
matter, and cerebrospinal fluid. The 452 tissue maps were separated into
their separate components and each component was averaged in atlas space
across the subjects to create the probability fields for each tissue type.
These fields represent the likelihood of finding gray matter, white matter, or
cerebrospinal fluid at a specified position for a subject that has been
linearly aligned to the atlas space.
I may be wrong, but I think the procedure may be described in:
David E. Rex, Jeffrey Q. Ma, and Arthur W. Toga (2003).
"The LONI Pipeline Processing Environment".
NeuroImage 19:1033-1048.
>
> And are the segmentation results really expected to be worse with
> custom priors than with the spm defaults, and if so why?
Unless you have loads of subjects, the averages may not be as representative
of the whole population. If you have loads of subjects, and everything is
done properly, then custom tissue probability maps may be more suitable.
> P.S. There are two sets of grey/white/csf nii files, one in spm5/tpm/
> and one in spm5/apriori/ the latter appears to be a smoothed version
> of the former -- what are these different sets used for and why?
The old segmentation code is still available to people who may want to use it
(e.g. for multi-spectral segmentation). This old code still uses the data in
the apriori directory. The new code uses the data in the tpm directory.
Best regards,
-John