Hi
I am looking to implement DARTEL for warping a large number of subjects
(250 for now, but possibly going up to 750!). I am working on an HPC
cluster. The pre-processing steps (coregistration, segmentation, etc.) can be
performed quite easily (processing one subject per node). However, I'm not
sure about calculating the templates and warp fields, as you need to
specify all subject scans in the command, so I can't really split the
task across multiple processors.
I was wondering which of the following would be my best option:
1. I could compute the templates for a subset of the data and then warp
subsequent subjects to the template. If I can get away with this, I could
apply the warping to subsequent subjects in parallel. In which case, how
many subjects would I need to get an accurate template?
2. If the above is not feasible, is there any way I can parallelise
DARTEL so I can use multiple nodes? Even if there are just some parts of
the code I can run on multiple nodes, that would be helpful. One way I
thought of doing this would be to compute warps and templates for groups
of about 20 subjects per node, and then (somehow) combine these together
to create a warp for the entire cohort.
Any advice or help you can offer on these issues would be very helpful.
Many thanks
Mark
--
Mark Drakesmith
Research Associate
Cardiff University Brain Research Imaging Centre (CUBRIC)
School of Psychology
Cardiff University
Park Place
Cardiff
CF10 3AT
Tel: +44 (0) 29 2087 0354
Fax: +44 (0) 29 2087 0339