
Hello,


On 29 Dec 2011, at 19:56, Colin Reveley wrote:


What's the minimum sensible burnin?

With this resolution, what mileage can I gain by increasing the number of jumps from 1250 to, say, 5000? What mileage could I gain by sampling more frequently than default? What impact would these things have on time?


Both burnin and number of jumps have to do with the MCMC sampling of the posterior distribution of the bedpostx model parameters. MCMC sampling is an iterative procedure; the burnin parameter sets the number of initial iterations that are discarded before sampling begins. It should be large enough to ensure convergence of the MCMC chain. Empirically, we have found that for the bedpostx model a burnin value between 1000 and 3000 is a good compromise between convergence and execution time. If you suspect convergence has not been achieved with the default value of 1000 (e.g. you have some crossings that look to you like false positives), you could try a value of 3000-4000.

Regarding the number of jumps, this effectively determines the number of samples that will be drawn from the posterior distribution. Obviously, the more samples, the better they will represent the distribution. But more samples also mean a longer execution time and higher memory requirements (probtrackx considers all samples for tractography). Again, the default value is a good compromise.

As the MCMC does not return *independent* samples, normally a "thinning" of the returned samples is performed. This is done by keeping every Mth sample from the whole set. The default values number-of-jumps=1250 and sample-every=25 will give 1250/25 = 50 independent samples. Therefore, the output merged_*samples images will have 50 volumes.
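To make the arithmetic concrete, here is a dry-run sketch of a bedpostx call with a longer burnin and more jumps. The flag names follow the bedpostx usage line (`bedpostx <subject_dir> [-n fibres] [-w ARD_weight] [-b burnin] [-j jumps] [-s sample_every]`); `subj_dir` is a placeholder for your subject directory, and the command is only echoed here, not executed:

```shell
# Assumed example values: burnin raised to 3000, jumps raised to 5000.
burnin=3000; jumps=5000; every=25

# Dry run: prints the command instead of executing it.
echo "bedpostx subj_dir -b $burnin -j $jumps -s $every"

# Number of (thinned) posterior samples that would be kept per voxel,
# i.e. the number of volumes in the merged_*samples outputs:
echo "samples = $((jumps / every))"   # 5000/25 = 200
```

With the defaults (1250 jumps, sample-every 25) the same arithmetic gives the 50 volumes mentioned above; raising jumps to 5000 quadruples both the sample count and, roughly, the sampling time.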
 

Finally, I find that in the white matter, for much of what I want to do (look at long fibres like SLF1,2,3, ILF etc.), the second distribution is something I want to weight equally, really, with the first in tractography. But I don't quite understand the -w parameter. Does a low value make it more likely that the algorithm will interpret the signal towards the second distribution? In any case, given that I want to emphasize the second distribution, what is a good value for -w?


The -w parameter changes the prior distribution for the secondary volume fractions. I suggest leaving it at 1, but if you want to a priori increase the sensitivity to secondary fibres for a specific dataset, you should decrease it. You could experiment with a few values, e.g. 0.1 to 0.5.
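As a minimal sketch (dry run, command echoed rather than executed), lowering the ARD weight might look like this; `subj_dir` is a placeholder, and 0.5 is just one value from the suggested range:

```shell
# Assumed example: lower the ARD weight on secondary fractions from the
# default (1) to 0.5 to increase a-priori sensitivity to crossing fibres.
w=0.5
echo "bedpostx subj_dir -w $w"
```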


Finally: is there a parallelized (in any way: threads, SGE) version of qboot?

Not yet. You could, however, easily parallelize it yourself by dividing the volume into slices and submitting qboot jobs on the individual slices.
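A dry-run sketch of that slice-wise scheme is below. It only echoes the commands it would submit (drop the echoes to actually run or wrap them in SGE submissions). The fslroi argument order is `<input> <output> xmin xsize ymin ysize zmin zsize`; the qboot flag names (-k/-m/-r/-b) are assumed by analogy with the other FDT tools, so check `qboot --help` before running:

```shell
# Placeholder: set to the z-dimension of your data (e.g. from `fslval data dim3`).
nslices=3

for z in $(seq 0 $((nslices - 1))); do
    # Extract slice z of the data and of the brain mask (dry run: echoed only).
    echo "fslroi data data_slice_$z 0 -1 0 -1 $z 1"
    echo "fslroi nodif_brain_mask mask_slice_$z 0 -1 0 -1 $z 1"
    # One qboot job per slice; flag names assumed from the FDT convention.
    cmd="qboot -k data_slice_$z -m mask_slice_$z -r bvecs -b bvals --logdir=qboot_slice_$z"
    echo "$cmd"
done
```

Once all per-slice jobs finish, the per-slice outputs can be reassembled along z (e.g. with fslmerge -z).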


Cheers,
Stam