Hi all - I don't know the timings well, because we parallelise things and I tend to forget about jobs once they are on the queue, but I would expect that, with the same settings, xfibres (bedpostx) should take about 2x as long as diff_pvm (bedpost).

In the patched FSL release, the settings are such that bedpostX might feel about 3x slower than bedpost.

In the unpatched FSL release, the settings were still in test mode, and bedpostX might have taken >10x what you were used to!
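
For reference, one way to compare like with like rather than rely on the release defaults is to pass the MCMC settings to bedpostx explicitly. The sketch below is only illustrative: the flag names are the ones used in recent FSL releases, the values are just example settings, and <subjdir> is a placeholder for a prepared subject directory, so run bedpostx with no arguments first to check exactly which options your version accepts.

   # assumed flags: -n fibres per voxel, -b burn-in period,
   # -j number of MCMC jumps, -s sample every N jumps
   bedpostx <subjdir> -n 2 -b 1000 -j 1250 -s 25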


Cheers

T


On 18 Oct 2007, at 13:21, David Gutman wrote:

I've run both bedpost and bedpostX on various data sets, and the crossing-fiber algorithm tended to take 3-6 times as long. I think it's a lot more computationally intensive.

Fortunately it parallelizes well, and you only have to run bedpostX once on a given data set.


DG



On 10/18/07, Neil Killeen wrote:
Hi

I have noticed that the running times for V 3.3.6/bedpost and V4.0/bedpostx
are hugely different. On my relatively small test data set, each 3.3.6/bedpost
slice runs in a few minutes (diff_pvm). However, 4.0/bedpostx takes a VERY
long time (> 60 min). It is the xfibres process that takes the time.

I am assuming this is not right and I have something to track down, or
could it possibly be something to do with different algorithms and
the particular data set?

thanks
Neil



--
David A Gutman, M.D. Ph.D.
Department of Psychiatry & Behavioral Sciences
Emory University School of Medicine