I'm having an issue similar to one described for an HPC environment about a year ago (Item #048589), but on an 8-core Mac Pro running Mountain Lion with a recompile of Open Grid Engine for darwin-x64.
The symptoms are the same as in the previous thread - when possumX is run over multiple cores, the final data has missing/dark slices whose positions move over time in a consistent pattern. I've experimented with different total numbers of slices and different requested numbers of CPUs; although the pattern changes, the problem remains. It doesn't appear to be a motion/slice-timing interaction producing really severe saturation in certain slices.
The banding only appears when multiple volumes are simulated over time with motion selected (I'm using motionAllLarge_60s, with a total scan time just over 60 s). Simpler simulations (a single volume, or multiple volumes without motion) don't seem to suffer from this effect.
One thing that seemed a little strange is that the possumX script uses <= rather than < for the process loop:
procnum=0
while [ $procnum -le $nproc ]
do
    echo "${POSSUMDIR}/bin/possum $command --procid=${procnum} -o ${subjdir}/diff_proc/signal_proc_${procnum}" >> ${subjdir}/possum.com
    procnum=$(($procnum + 1))
done
so if 8 CPUs are requested through possumX, it spawns 9 processes - is this working as intended? I thought this might be the source of the problem, since the results from each process have to be integrated at the end, but switching -le to -lt still produces dark banding in the final image_abs.nii.gz. The simulation appears to complete without errors according to the logs (possum.log attached; the per-process logs show no errors).
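For what it's worth, the off-by-one is easy to see in isolation. This is just a standalone sketch with a hypothetical nproc value, not the actual possumX script:

```shell
#!/bin/sh
# Hypothetical: nproc=8, mimicking a request for 8 CPUs.
nproc=8

# Count iterations with -le (as in possumX).
count_le=0
procnum=0
while [ $procnum -le $nproc ]
do
    count_le=$((count_le + 1))
    procnum=$((procnum + 1))
done

# Count iterations with -lt instead.
count_lt=0
procnum=0
while [ $procnum -lt $nproc ]
do
    count_lt=$((count_lt + 1))
    procnum=$((procnum + 1))
done

echo "-le spawns $count_le processes; -lt spawns $count_lt"
# prints: -le spawns 9 processes; -lt spawns 8
```

So with -le the loop covers procid 0 through 8 inclusive, i.e. nproc+1 jobs; whether that extra process is deliberate (e.g. a master/merge job) is exactly what I'm unsure about.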
Any advice greatly appreciated - it looks like a great tool and I'm definitely keen to use it down the line.
Mike Tyszka
Caltech Brain Imaging Center