Adam Gibson wrote:
> (1) This saves files starting m* and mwc* which are huge (m* is about
> 200MB).
Hi Adam,
Lines 91-92 of spm_preproc_write set the datatype for these bias
corrected m* images:
'dt', [spm_type('float64') spm_platform('bigend')],...
'pinfo', [1 0 0]',...
If you change 'float64' to 'float32', you'll get single-precision
images at half the size. If that is still too big, you can switch to
other data types, making sure you set the pinfo scaling correctly
(see help spm_vol). I believe something like:
'dt', [spm_type('int16') spm_platform('bigend')],...
'pinfo', [(2^15-1)^-1 0 0]',...
should give you correctly scaled 16-bit integer output, at around
50MB rather than 200MB. If I've got this wrong, you'll have to ask
John ;-)
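For what it's worth, here's an untested sketch of how you could
derive the pinfo scale factor from the data itself, rather than
assuming the intensities lie in roughly [0 1] (which is what the
fixed (2^15-1)^-1 factor above assumes). The filenames are
placeholders for one of your m* images, and it only uses standard SPM
functions (spm_vol, spm_read_vols, spm_write_vol, spm_type,
spm_platform):

V   = spm_vol('msubject.nii');      % header of the float64 m* image (placeholder name)
Y   = spm_read_vols(V);             % load the voxel data into memory
scl = max(abs(Y(:))) / (2^15 - 1);  % map the data range onto the int16 range
Vo       = V;
Vo.fname = 'msubject_int16.nii';    % placeholder output name
Vo.dt    = [spm_type('int16') spm_platform('bigend')];
Vo.pinfo = [scl 0 0]';              % [scale; offset; byte offset]
Vo       = spm_write_vol(Vo, Y);    % data are rescaled and rounded on write

As I understand it, spm_write_vol applies the inverse of the pinfo
scaling when it writes the integers, so reading the file back with
spm_read_vols should recover the original values to within the
quantisation error -- but again, John is the authority here.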
Also, note that this intensity scaling is not part of the Analyze
standard, so be careful if you switch between NIfTI and Analyze, or
between SPM and other software packages.
I'm puzzled as to why your images are so large to start with,
though -- are they especially high-resolution?
> (2) I'm saving the files using spm_preproc_write and then reading them in
> again with spm_read_vols which is clumsy. How can I perform the
> normalisation and segmentation, and keep the images in memory?
I'm afraid I can't help you there -- let me (and/or the list) know if
you make any progress here, as I'd be interested in this too.
Best,
Ged.