Hi List,
I am trying to run a randomise analysis on a dataset of about 1500
volumes in MNI space (with a brain mask). On a single machine, I need
about 70 GB of RAM for randomise to load the data, which drops to
about 50 GB while the permutations are running.
Given that, if I understand correctly, every instance of randomise
launched by randomise_parallel has the same memory requirements
(i.e. each one has to load the whole dataset), I can't just throw this
at a cluster with, say, 150 GB of RAM per node using the default
settings.
Ideally I would like to set the number of fragments to 2 or 3 (better
than 1!) and have everything else (run time, permutations per
fragment) follow from that.
Is this easily controllable in randomise_parallel? It didn't look like
it when I perused the script.
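For concreteness, this is the kind of manual split I have in mind: a
rough sketch that divides the permutations across a small, fixed number
of fragments using randomise's -n and --seed options (the filenames are
placeholders, and the fragment outputs would of course still need the
recombination step that randomise_parallel normally handles):

```shell
# Sketch only: print the randomise invocations that a 3-fragment split
# of 5000 permutations would produce (filenames are placeholders).
NPERM=5000
NFRAG=3
# Permutations per fragment, rounded up so all NPERM are covered.
PER_FRAG=$(( (NPERM + NFRAG - 1) / NFRAG ))
for i in $(seq 1 $NFRAG); do
  echo "randomise -i data4D.nii.gz -o frag_${i} -d design.mat" \
       "-t design.con -m mask.nii.gz -n ${PER_FRAG} --seed ${i}"
done
```

Each fragment would then be submitted as its own cluster job, so at
most NFRAG copies of the dataset are in memory at once.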
Thanks a lot in advance!
Nicola