May I suggest adding this bit of information to the New Segmentation
documentation/SPM manual? I spent days trying to figure out why I was running
out of memory when writing the deformation fields on a 32-bit Linux virtual
machine hosted on 32-bit Windows XP.
All the best!
Manuel
-------------
Manuel de la Cruz Gutierrez, PhD
Developmental Psychology Lab
Dept. of Psychology, U of Houston
On 2/15/10 7:45 AM, "John Ashburner" <[log in to unmask]> wrote:
> On a 32-bit computer, the most memory that SPM or any other program can
> address at any time is 4 Gbytes (or sometimes only 2 Gbytes). This is because
> the largest number that can be represented with 32 bits is
> 4,294,967,295. A 150 Mbyte file probably has 2 bytes per voxel, so is
> probably about 512x512x300 in size. Then add a bunch of deformations
> and their inverses, a few tissue class images etc., and the amount of
> memory needed soon adds up.
>
> 64-bit computers can handle images of these dimensions, but it seems
> that 32-bit computers simply cannot address enough memory to run this
> part of SPM.
>
> You are essentially searching for the model that most accurately
> characterises the anatomical patterns of difference among the
> populations. I have my own preferred model for doing this, but most
> investigators want to use a method that localises volumetric differences
> of grey matter to specific regions. To make interpretations in terms of
> localised volume differences, the pre-processed data should
> accurately reflect the tissue volumes you are interested in. The more
> accurate the models used for pre-processing, the more
> straightforward the interpretation. I wouldn't have the first clue
> about trying to assign any physiological interpretation to patterns of
> difference that did not account for the stretching and shrinking that
> occurs through spatially normalising the data.
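[To make the stretching/shrinking point concrete, here is a minimal sketch of what Jacobian modulation does, illustrative arithmetic only and not SPM's implementation; the array names, sizes and the 1.5 mm voxel size are assumptions. The warped tissue probabilities are multiplied by the Jacobian determinants of the deformation, so a region that is shrunk or stretched during normalisation keeps its apparent total volume:]

  % Illustrative modulation: warped GM probabilities times Jacobian determinants.
  wc1  = rand(91, 109, 91);              % warped GM probability map (made-up data)
  jac  = 0.5 + rand(91, 109, 91);        % Jacobian determinants of the deformation (made-up data)
  mwc1 = wc1 .* jac;                     % modulated map: local volume changes are re-applied
  vox_ml = prod([1.5 1.5 1.5]) / 1000;   % volume of one 1.5 mm voxel in ml
  fprintf('total GM: %.0f ml\n', sum(mwc1(:)) * vox_ml);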
>
> Best regards,
> -John
>
> On Mon, 2010-02-15 at 11:43 +0000, Dennis Chung Ming-Tak wrote:
>> On Fri, 12 Feb 2010 17:15:36 +0000, John Ashburner wrote:
>>> Realign, Coreg, Normalise and the old Segment are pretty much the same
>>> as in SPM5. However, SPM8 has a new version of the segmentation (see my
>>> email from a few minutes ago) that I think will achieve more accurate
>>> spatial normalisation than the older versions.
>>
>> Given the improvements you listed in the other post, does this dramatically
>> change the structure of the output files? I only ask because my system (1GB
>> physical + 3GB virtual memory in 32-bit XP) reports "out of memory" when I
>> try to import certain images from the new segmentation - specifically the
>> 150ish megabyte MPRAGE files - into DARTEL. The segmented images themselves
>> (i.e. c1 to c5) were produced and can be viewed, but the "out of memory"
>> pops up when the program tries to produce the DARTEL import files (i.e. rc1
>> etc.).
>>
>> I should note that the old Segment's output for these MPRAGE images was
>> processed through DARTEL without trouble, and the 50ish megabyte SPGR files,
>> which were segmented using the new method, could also be imported into
>> DARTEL successfully.
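[A quick way to see why the MPRAGE volumes are so much heavier than the SPGR ones is to check their dimensions before importing. A sketch using SPM's spm_vol; the filename is a placeholder:]

  % Inspect image dimensions and estimate per-volume memory once loaded as single precision.
  V    = spm_vol('c1_mprage.nii');        % placeholder filename; use one of your c1..c5 images
  nvox = prod(V.dim);
  fprintf('%d x %d x %d voxels, about %.0f Mbytes per volume in single precision\n', ...
          V.dim, nvox * 4 / 2^20);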
>>
>>> Dartel is a still more accurate approach to spatial normalisation,
>>> although it is a bit more complicated to use.
>>
>> I've been following the modulated vs. unmodulated debate here on the mailing
>> list. I understand that Dr. Ashburner's position is that an unmodulated
>> comparison merely reflects registration error, whereas others, such as
>> Mechelli et al. (2005) and Gaser, suggest that unmodulated analyses reveal
>> brain matter density/concentration. Now that registration has improved
>> through the new segmentation and DARTEL, does this change the debate in any
>> way? If not, I'm sorry for beating a dead horse. Of course, I am assuming we
>> are not studying subjects with significant deformation or volume reduction
>> that would interfere with registration.
>>
>> Thank you for your attention.
>>
>> Sincerely,
>> Dennis
>>
>>
>