Is there a set formula for estimating the memory requirements for MELODIC on a normal (64-bit Linux) system? The FAQ http://www.fmrib.ox.ac.uk/fslfaq/ suggests 1 GB of RAM, more if using MELODIC with "a large image matrix or large number of timepoints or subjects". But how much more? There have been a number of posts in the past about how so-and-so's dataset failed with 4 GB of RAM or so-and-so's dataset worked with 8 GB of RAM, but that doesn't tell me if processing *my* dataset with MELODIC is feasible.

For instance, suppose I am using the temporal-concatenation approach with a total of x voxels (x = voxels per volume * volumes per session * number of sessions). Does the expected virtual-memory requirement scale like x? Like 2*x? Like x^2? For large datasets, is there a way to trade speed or disk space for lower memory consumption?
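
To make this concrete, here is the sort of back-of-envelope arithmetic I am trying to do (a rough Python sketch; the candidate scalings are just my guesses above, not anything I know about MELODIC's internals, and the numbers are placeholders):

# Rough memory estimate for temporal-concatenation ICA.
# The candidate scalings (x, 2*x, x^2) are only guesses; I don't know
# which, if any, MELODIC actually follows.

voxels_per_volume = 100_000     # placeholder values -- substitute your own
volumes_per_session = 200
sessions = 10
bytes_per_value = 4             # single-precision float

x = voxels_per_volume * volumes_per_session * sessions   # total voxel count
raw_bytes = x * bytes_per_value

def gib(nbytes):
    return nbytes / 2**30

print(f"x (total voxels)         : {x:,}")
print(f"raw data, float32        : {gib(raw_bytes):.2f} GiB")
print(f"hypothesis: memory ~ x   : {gib(raw_bytes):.2f} GiB")
print(f"hypothesis: memory ~ 2*x : {gib(2 * raw_bytes):.2f} GiB")
# memory ~ x^2 would be hopeless for any realistic dataset, so hopefully not that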

FYI, my specific case is x = 109350 voxels per volume * 400 volumes per session * 76 sessions = roughly 3.3 billion voxels. Stored as single-precision floating point (4 bytes per voxel), that's about 12.4 GiB uncompressed. I am running:

melodic --approach=concat --in=files --outdir=result --nomask --nobet --report --bgimage=MNI152_T1_4mm_brain.nii --tr=3 --Sdes=music.mat --Scon=music.con --mmthresh=0.5 --Oall --verbose

And I get an std::bad_alloc during the PCA step (Data size : 5928 x 53516).
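
For what it is worth, here is a rough guess at where that allocation might be going (assuming, and this is purely an assumption, that MELODIC holds the concatenated data matrix in double precision and that the PCA step needs a few working copies of it):

# Back-of-envelope for the matrix reported in the error message.
# Double precision and the number of simultaneous working copies are
# assumptions on my part, not something taken from the MELODIC source.

rows, cols = 5928, 53516           # "Data size" printed by melodic
bytes_per_value = 8                # assume double precision

one_copy_gib = rows * cols * bytes_per_value / 2**30
print(f"one copy of the data matrix : {one_copy_gib:.2f} GiB")
for copies in (2, 3, 4):
    print(f"{copies} simultaneous copies       : {copies * one_copy_gib:.2f} GiB")

Even two or three copies of a matrix that size would already take several GiB, which is why I would like to know the actual scaling rather than keep guessing.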