Dear Renat and List,
One more detail I forgot to mention about my cluster setup. In order to properly create the spm2.ps output files, I set up a virtual frame buffer, Xvfb. This was also fairly straightforward to set up, and once I got SPM to display its graphics to the Xvfb, everything else stayed the same and the spm2.ps files were created. This is important, as these files provide a snapshot of the results for each subject.
For more on Xvfb, go here:
http://www.xfree86.org/4.0.1/Xvfb.1.html
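The basic idea can be sketched as a small startup fragment (the display number, resolution, and batch-file name below are assumptions for illustration, not my exact setup):

```shell
# Start a virtual framebuffer on display :1 (display number and
# resolution are arbitrary choices for this sketch).
Xvfb :1 -screen 0 1024x768x24 &

# Point X clients, including Matlab/SPM, at the virtual display so
# figure windows and print jobs (the spm2.ps files) render headlessly.
export DISPLAY=:1

# Launch Matlab without the desktop and run your SPM batch as usual;
# run_spm_batch.m is a hypothetical batch file name.
matlab -nodesktop < run_spm_batch.m
```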
I hope this helps,
Matt Senjem.
-----Original Message-----
From: Senjem, Matthew L.
To: 'Automatic digest processor '; 'SPM (Statistical Parametric Mapping) '
Cc: [log in to unmask]
Sent: 5/27/2005 5:43 PM
Subject: RE: running SPM on a Linux cluster
Dear Renat and SPM users,
Here is a solution I have worked out, although it requires a separate
Matlab license for each node on your cluster.
I have set up a small Linux cluster to act as a batching system for MRI
processing tasks, including SPM2 jobs. I am using Sun Grid Engine
(free; http://gridengine.sunsource.net), combined with a separate Matlab
license for each Linux or Sun node.
We already had several Linux systems sharing network drives and running
Matlab, so it was pretty easy to set up.
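As a concrete sketch, an SGE job script for one subject might look like the following (the script and function names are assumptions, not my actual files; each such job starts its own Matlab session, which is why every node needs a license):

```shell
#!/bin/sh
# Hypothetical SGE job script (spm_job.sh): runs one subject's SPM2
# processing in a non-interactive Matlab session on whichever node
# the grid engine assigns.
#$ -S /bin/sh        # run the job under /bin/sh
#$ -cwd              # execute in the submission directory (shared drive)
#$ -j y              # merge stdout and stderr into one log file

# SUBJECT is passed in at submission time, e.g.:
#   qsub -v SUBJECT=subj01 spm_job.sh
# process_subject is a hypothetical m-file wrapping the SPM2 steps.
matlab -nodisplay -nojvm -r "process_subject('$SUBJECT'); exit"
```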
Basically, I have created m-files and some shell scripts to dispatch
Matlab-SPM2 jobs to client machines. It has worked quite well so far,
but I am still ironing out some of the details.
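The dispatch side can be sketched as a small shell function (the subject names, the job-script name, and the DRYRUN preview convention are all illustrative assumptions, not my actual scripts):

```shell
# Submit one SGE job per subject. With DRYRUN=echo the qsub commands
# are printed instead of executed, which is handy for checking the
# submission plan on a machine without a grid engine installed.
dispatch_all() {
    for subj in "$@"; do
        $DRYRUN qsub -N "spm_$subj" -v SUBJECT="$subj" spm_job.sh
    done
}

# Preview the submissions for three subjects; each line shows the
# qsub command that would be run, e.g.:
#   qsub -N spm_subj01 -v SUBJECT=subj01 spm_job.sh
DRYRUN=echo dispatch_all subj01 subj02 subj03
```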
For example, I have taken a template creation script, and modified it so
that the segmentation, normalization, etc., steps for each subject are
dispatched as a job to a node on the grid. Currently I am running with
6 nodes, but each one of those is a dual Pentium4, some of them with HT
enabled, allowing me to process 18 subjects simultaneously across the
cluster.
The results of each subject's normalisation and segmentation are stored
in a .mat file on disk somewhere that is reachable by all the nodes.
Then when they are all done, one unlucky node gets the laborious task of
adding them all together to get the summed template and apriori images.
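One way to express that "wait for everyone, then sum" dependency in SGE itself, rather than watching for the .mat files by hand, is the -hold_jid option, which holds a job until the named jobs have finished (the job names and script name below are assumptions):

```shell
# Hold the summing job until the named per-subject jobs complete;
# sum_template.sh is a hypothetical script that adds up the per-subject
# .mat results into the summed template and apriori images.
qsub -N spm_sum -hold_jid spm_subj01,spm_subj02,spm_subj03 sum_template.sh
```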
Using this setup, I recently created a template of 1412 subjects in
about 32 hours elapsed time. The "adding them up" step took about 8
hours, after the batch processing of all 1412 subjects had completed in
about 24 hours.
I am also working on scripts to process fMRI data using SPM2 across the
grid, but a lot of system calls and specialized scripts are involved
with this. I am planning on breaking this up on a subject/session
level, e.g., if just one subject is selected, with 5 sessions, then each
session is dispatched to a separate node for processing.
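That subject/session split can be sketched the same way as the per-subject dispatch (again, the names and the DRYRUN preview convention are assumptions):

```shell
# Submit one job per session of a subject; with DRYRUN=echo the qsub
# commands are printed rather than executed. fmri_job.sh is a
# hypothetical job script analogous to the per-subject one.
dispatch_sessions() {
    subj=$1; shift
    for sess in "$@"; do
        $DRYRUN qsub -N "fmri_${subj}_s${sess}" \
            -v SUBJECT="$subj",SESSION="$sess" fmri_job.sh
    done
}

# Preview: one subject with 5 sessions becomes 5 independent jobs.
DRYRUN=echo dispatch_sessions subj01 1 2 3 4 5
```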
If you or anyone would like some code snippets to help get started, I
would be happy to help.
Matt Senjem
[log in to unmask]
Information Systems, Radiology Research
Mayo Clinic and Foundation.
------------------------------------------------------------------------
Date: Thu, 26 May 2005 03:52:03 +0100
From: Renat Yakupov <[log in to unmask]>
Subject: running SPM on a Linux cluster
Hello everybody.
Does anybody have experience running Matlab on a Linux cluster? Is it
possible at all? Is Matlab a, what is it called, cluster-aware
application?
What about running it on a multi-processor workstation? How many
processors can Matlab handle?
Thank you.
Renat.
--------------------------------------