On Tue, 4 Jan 2005, Russ Poldrack wrote:
> note that MATLAB does have a new distributed computing toolbox for use on
> clusters (not sure if it works on a single multiprocessor machine), but I
> assume this would require recoding of the SPM matlab code and/or mex files
> to take advantage of it.
>
> http://www.mathworks.com/products/distribtb/
>
> cheers
> russ
> On Jan 4, 2005, at 2:38 PM, Neggers, S.F.W. (Bas) wrote:
>
>> Dear colleague,
>>
>> for one analysis at a time you do not gain much from dual procs when using
>> SPM and the conventional binary libraries, since they cannot split
>> themselves into tiny "calculation packages" for parallel processing. I
>> just looked up a post on the list that offered some tweaked binaries that
>> apparently can split up spm processing, to take advantage of 2 procs:
>>
>> http://www.jiscmail.ac.uk/cgi-bin/wa.exe?A2=ind0312&L=spm&P=R6876&I=-1
>>
>> ....
>> Good luck,
>>
>> Bas
dear Russ, dear Bas,
i have been using pspm (v1.0.1) for a while now, on and off, and have
implemented and adjusted Jejo Koola's scripts for my needs here at our
scanner. on the one hand i use my 'quick spm' setup to check swiftly
on subjects' compliance regarding motion, then also for a
quick-and-dirty impression of activation patterns. in the future i/we
may use it for some kind of semi-real-time or real-time fMRI
experiments with on-the-fly recon and analysis. now, my experiences
are mixed:
- everything depends on network speed: 1 Gb/s is a prerequisite, and a
  local rack system should be beneficial, as it minimizes cable
  distances;
- first-class hardware is an advantage, e.g. SuperMicro server
  motherboards and dual xeons (i unfortunately have no experience
  with AMD processors, nor with 64bit); RAM and RAID should not
  become bottlenecks either. my usual scenario is 10-14 logical xeon
  units, all 2-3 GHz with hyper-threading;
- off-line EPI or PRESTO reconstruction should be parallelized as
  well, if possible (i haven't done that yet; it is of course outside
  the scope of pspm);
- initial booting of multiple distributed matlab sessions via LAM/MPI
costs many seconds(!);
- coregistration and reslicing can potentially profit much from
  parallelization;
- regular smoothing is already very fast and may only profit from MPI
  for large data sets; for smaller data sets single-CPU processing
  may even be faster;
- GLM analysis is unfortunately not implemented for MPI in Jejo
  Koola's pspm scripts, though perhaps that is not possible except
  for some voxel-based calculations(?). in any case, GLM estimation
  for simple experimental designs and large voxel data appears to be
  very fast, a matter of seconds, so the profit may be questionable;
- linux kernel 2.6.x should be an advantage, as job scheduling
  appears much improved compared to 2.4.x, but i haven't got it
  running that way yet. i think adjustments will be needed in pspm
  for it to coexist with the new kernels, MPI, matlab v7.0.1 and so
  on.
pspm is of use to somebody who truly needs recon, preprocessing and
statistical analysis done within seconds. but for regular SPM
analyses we too use the 'canonical' scenario of good hardware and
single-CPU processing, a path i trust more for high-quality analysis.
moreover, to conserve system resources i rely on slackware v10.0, a
tailored & recompiled kernel v2.4.28, and the window manager fvwm
v2.4.18, and refrain from luxury desktop suites.
best, bye,
pisti
-----
[log in to unmask]