Hi Nicola,

I think that manually managing the number of fragments won't get you to a
sensible solution.

I would suggest a very practical approach: create a swap space as large as the
memory you need.
In my own experience, I have partitioned a 250 GB SSD as swap to get through a
similar situation.
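
For reference, once you have such a partition, enabling it as swap is roughly
this (just a sketch; /dev/sdb1 is a placeholder for whatever device name your
SSD actually gets, so check with lsblk first):

    # assumption: the dedicated SSD/partition is /dev/sdb1
    sudo mkswap /dev/sdb1      # write a swap signature to the partition
    sudo swapon /dev/sdb1      # enable it immediately
    swapon --show              # verify the extra swap is active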

If you can't dedicate a large partition, you can create a swap file on the disk
you do have available (a 250 GB file is awkward, but it will work).
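
Setting up such a swap file goes along these lines (again just a sketch; the
path, file name, and size below are placeholders to adapt to your disk):

    # assumption: ~250 GB free under /data; adjust path and count to your setup
    sudo dd if=/dev/zero of=/data/swapfile bs=1M count=256000   # ~250 GB; dd avoids sparse-file issues
    sudo chmod 600 /data/swapfile                               # swap files must not be world-readable
    sudo mkswap /data/swapfile
    sudo swapon /data/swapfile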

See you!
Stefano Orsolini


2017-10-08 15:59 GMT+02:00 Nicola Toschi <[log in to unmask]>:

> Hi List,
>
> I am trying to run a randomise analysis on a dataset of about 1500 volumes
> in MNI space (with brain mask). On a single machine, I need about 70 GB RAM
> for randomise to be able to load the data, which drops to about 50 GB for
> carrying out permutations.
>
> Given that, if I understand correctly, every instance of randomise
> launched by randomise_parallel will have the same memory requirements (i.e.
> it will have to load the whole dataset), I can't just throw this on a
> cluster which has, say, 150 GB RAM per node with default settings.
>
> Ideally I would like to set the number of fragments to 2 or 3 (better than
> 1!) and have everything follow from that (time, number of permutations).
>
> Is this easily controllable in randomise_parallel (it didn't look like it was
> when I perused the script)?
>
> Thanks a lot in advance!
>
> Nicola
>