Hello Andras,
At peak usage you will need roughly twice the storage requirements of your input file (you can uncompress it to get this value). In the randomise_parallel script you can reduce the PERMS_PER_SLOT variable to increase the number of fragments that will be run.
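
For example (an illustrative sketch only; the exact location of the variable and its shipped default differ between FSL versions, and the value shown here is just a placeholder, so check your own copy of the script):

    # In randomise_parallel (typically $FSLDIR/bin/randomise_parallel):
    # number of permutations handled by each sub-job. Lowering it splits
    # the same total permutation count across more, smaller fragments,
    # so each job needs less memory and more jobs can run in parallel.
    PERMS_PER_SLOT=100   # reduce from the default to create more fragments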

Kind Regards

Matthew
> Dear FSL experts,
> 
> I have questions regarding randomise_parallel:
> 
> I wish to submit these jobs to an HPC, but they keep terminating after a while. This is mainly due to memory errors (as reported in stdout), since the default hard limit for the queues is 512 MB. Do I need to change the code of fsl_sub in order to request h_vmem=moreG? (lines 340, 342, 350)
> 
> I have done so, and 6 GB also led to termination. What is the approximate hard memory threshold for 50-100 subjects with 1x1x1 mm voxel images? I observe that memory usage fluctuates, so what peak do I have to request?
> 
> Also, after a superficial reading of the scripts I can't find a way to increase the number of parallel jobs. Does it make sense to increase it to 100-200 jobs on a reasonably large cluster, and how should I do that?
> 
> Thank you very much.
> András
> 
> -
> Andras Jakab, M.D. Ph.D.
> Post Doctoral Researcher
>