Hi Dorothee,
> I am performing routine analysis of T1 and EPI imgs in SPM5, Matlab
> 7.0.1 (Windows XP) but invariably run out of memory at the segmentation
> phase.
>
> I perform integrated segmentation/normalisation using Christian Gaser's
> toolbox, and then intend to normalise the EPIs through 'normalise – write'
> applying the parameter file from the segmentation step.
There were some problems with the HMRF segmentation being
memory-hungry; do you have the latest version of CG's toolbox?
http://dbm.neuro.uni-jena.de/new-updates-for-vbm2-and-vbm5-to-save-memory-problems/
HMRF segmentation might still be a little memory-greedy, in which case
you could just use the standard unified segmentation in SPM5 without it
(e.g. by clicking the Segment button). Also see below...
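In case it helps, here is a rough, untested sketch of running the
standard unified segmentation from the Matlab prompt without any HMRF
step, using the lower-level SPM5 functions (the Segment button does much
the same via the job manager; 'T1.nii' is just a placeholder filename,
and the output field names are from memory, so please check them):

  V        = spm_vol('T1.nii');       % anatomical image to segment
  res      = spm_preproc(V);          % estimate unified segmentation
  [po,pin] = spm_prep2sn(res);        % forward/inverse sn-style parameters
  % write native-space c1/c2/c3 segmentations plus a bias-corrected image;
  % each tissue row is [mwc wc c] = [modulated-warped, warped, native]
  wopts = struct('GM',[0 0 1], 'WM',[0 0 1], 'CSF',[0 0 1], ...
                 'biascor',1, 'cleanup',0);
  spm_preproc_write(po, wopts);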
> - could I possibly prevent this by adapting HMRF weighting, and if
> so what factor would be advisable?
I don't think this will change memory requirements, but I'm not 100%
certain.
> - Are the seg_(inv)_sn.mat files definitely finalised at this stage
> or are pars still being written/adapted? So eager to continue analyzing
> ;), I
> applied the par files for normalisation and the resulting imgs 'seem OK',
> so I suppose all required parameters have been written but are perhaps
> still being adapted?
...I think CG's toolbox does the usual SPM5 unified segmentation and
normalisation step first, which produces the sn.mat files. I think the
HMRF is then used as a second step to "write" the segmentations. So
probably the sn.mat files are fine, and only the HMRF step is failing.
You can use the sn.mats to create (modulated) warped segmentations.
Perhaps CG's toolbox can do this with HMRF switched off? If not, take
a look at the following, noting that the format of opts.TISSUE is
[mwc wc c]:
http://www.jiscmail.ac.uk/cgi-bin/webadmin?A2=ind06&L=SPM&P=R427439
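For example, something along these lines might work (again an untested
sketch from memory; the filenames are placeholders, and [mwc wc c] means
[modulated-warped, warped, native] for each tissue class):

  % re-use the already-estimated parameters, skipping the HMRF step
  po = load('T1_seg_sn.mat');                  % forward parameters
  wopts = struct('GM',[1 1 0], 'WM',[1 1 0], 'CSF',[1 1 0], ...
                 'biascor',0, 'cleanup',0);    % [mwc wc c] per tissue
  spm_preproc_write(po, wopts);                % writes (m)wc1*, (m)wc2*, (m)wc3*
  % the same _sn.mat can also drive 'normalise - write' for the EPIs:
  spm_write_sn(spm_vol('epi_run1.nii'), 'T1_seg_sn.mat');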
> - It has been suggested that I switch to Linux, SPM5 unified
> segmentation being somewhat intensive, yet this is not an option. Would
> increasing RAM (eg to 3) be helpful for analysing longitudinal
> multi-subject VBM?
First, note that unified segmentation (with or without CG's HMRF
extras) is done per image, so having multiple subjects and/or
longitudinal data doesn't increase the segmentation memory
requirements at all.
If you really need the HMRF (e.g. if your images are pretty noisy),
then extra RAM and/or Linux might well help. How much will depend
primarily on the number of voxels in each image. I'm afraid I can't
give more precise guidance.
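If it's useful, you can check the matrix size (and hence the rough voxel
count) of an image from the Matlab prompt, e.g. (placeholder filename):

  V = spm_vol('T1.nii');
  prod(V.dim(1:3))    % total number of voxels in the image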
Best,
Ged.