Thank you Stefano,
my problem is that I have plenty of RAM across several nodes (150-196 GB per node), so technically every node can already handle the problem on its own (roughly twice over). I therefore want to make sure I partition the problem optimally within randomise_parallel.
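For concreteness, something along these lines is what I imagine the split boiling down to if I end up doing it by hand (purely illustrative: the file names are placeholders, -T is just an example of the stats options, and I am only relying on randomise's documented -n and --seed flags):

# fragment 1 of 2: half of the 5000 permutations, with its own RNG seed
randomise -i all_subjects_4D -o frag1 -d design.mat -t design.con -m mask -n 2500 --seed=1 -T
# fragment 2 of 2: the other half, with a different seed
randomise -i all_subjects_4D -o frag2 -d design.mat -t design.con -m mask -n 2500 --seed=2 -T
# pooling the per-fragment null distributions back into corrected p-values is the
# step randomise_parallel normally takes care of, which is why I'd rather stay
# within the script than roll this myself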
Anyone done this before?
Thanks!
Nicola
On 10/08/2017 08:57 PM, Stefano Orsolini wrote:
Hi Nicola,
I think that manually handling the number of fragments won't bring you to a sensible solution.
I would suggest a very practical approach: create a swap space as large as the memory you need.
If you can't dedicate a large partition, you can create a swap file on whatever disk you have available (it is awkward to have a 250GB file, but it will work).
In my personal experience I've partitioned a 250GB SSD as swap to get through a similar situation.
See you!
Stefano Orsolini
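P.S. For reference, on a typical Linux node the swap-file route is roughly the following (the 256G size and the /swapfile path are just examples, you need root, and fallocate assumes ext4/xfs; use dd otherwise):

sudo fallocate -l 256G /swapfile   # reserve the space (dd if=/dev/zero also works, just slower)
sudo chmod 600 /swapfile           # swap files must not be readable by other users
sudo mkswap /swapfile              # format the file as swap
sudo swapon /swapfile              # activate it
swapon --show                      # confirm the new swap is in use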
2017-10-08 15:59 GMT+02:00 Nicola Toschi <[log in to unmask]>:
Hi List,
I am trying to run a randomise analysis on a dataset of about 1500 volumes in MNI space (with a brain mask). On a single machine, I need about 70 GB of RAM for randomise to load the data, which drops to about 50 GB once the permutations are running.
Given that, if I understand correctly, every instance of randomise launched by randomise_parallel has the same memory requirements (i.e. it has to load the whole dataset), I can't just throw this at a cluster with, say, 150 GB of RAM per node using the default settings.
Ideally I would like to set the number of fragments to 2 or 3 (better than 1!) and have everything follow from that (time, number of permutations).
Is this easily controllable in randomise_parallel? It didn't look like it when I perused the script.
Thanks a lot in advance!
Nicola