Dear Michiel,
Thanks for your answer. Here is the full probtrackx2 command I used:
probtrackx2 --network --seed=${subrois}/${sub}_Listofseeds.txt --samples=${subject}/merged --mask=${subject}/nodif_brain_mask -c 0.2 -S 2000 --steplength=0.5 --nsamples=5000 --fibthresh=0.01 --distthresh=0.0 --sampvox=0.0 --pd --forcedir --opd --dir=${outdir}/${sub} -V 0
Do you think that sampling 5000 streamlines per seed voxel is enough to avoid the random fluctuations you are referring to?
If so, the correct order of analysis would be: (1) thresholding, (2) normalisation and (3) averaging, correct?
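To make sure I understand the order correctly, this is how I picture the three steps on the per-subject network matrices (a minimal sketch with made-up matrices and normalisation factors, just to illustrate the sequence, not actual data):

```python
import numpy as np

# Hypothetical per-subject ROI x ROI streamline-count matrices
# (as written by probtrackx2 --network); values are made up.
subject_mats = [np.array([[0., 12.], [3., 0.]]),
                np.array([[0., 40.], [1., 0.]])]

# Hypothetical per-seed normalisation factors, e.g. streamlines
# seeded per ROI (n_seed_voxels * nsamples); also made up.
norm_factors = [np.array([10000., 5000.]),
                np.array([20000., 5000.])]

threshold = 5.0  # assumed raw-count threshold

processed = []
for mat, norm in zip(subject_mats, norm_factors):
    m = mat.copy()
    m[m < threshold] = 0.0       # (1) threshold the raw counts
    m = m / norm[:, None]        # (2) normalise each seed row
    processed.append(m)

group_mean = np.mean(processed, axis=0)  # (3) average across subjects
```

Is that the sequence you had in mind, with thresholding applied to the raw counts before any division?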
Regarding the normalisation step, I still have some doubts about the best approach.
In the first option you mentioned, is the total number of streamlines seeded calculated as (number of seed voxels * 5000)? If I then use this as my normalisation factor, I am effectively dividing only by the number of seed voxels, since 5000 is a constant factor across seeds, right? Strictly speaking, this seems to be the term that accounts for the different ROI volumes.
The waytotal and the row sum then convey additional information about the probability of reaching one specific ROI relative to the other ROIs, but, just to be sure, do they still take the number of seed voxels into account somewhere in the formula from which they are calculated?
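To make the two options concrete, this is how I currently understand the difference (a toy sketch with invented ROI sizes and counts; the comments state my assumptions, which is exactly what I would like you to confirm):

```python
import numpy as np

# Hypothetical example: three seed ROIs of different sizes.
nsamples = 5000                          # --nsamples, as in the command above
n_seed_voxels = np.array([8, 2, 4])      # assumed ROI sizes in voxels
counts = np.array([[0., 600., 200.],     # assumed raw ROI x ROI streamline counts
                   [150., 0., 50.],
                   [100., 300., 0.]])

# Option 1: divide each row by the total streamlines seeded from that
# ROI (n_seed_voxels * nsamples). Since nsamples is constant, this
# differs between ROIs only through the voxel count.
opt1 = counts / (n_seed_voxels * nsamples)[:, None]

# Option 2: divide each row by its row sum (or by the waytotal), i.e.
# by the streamlines that actually reached a target; my assumption is
# that seed size enters here only indirectly, through how many
# streamlines succeed.
opt2 = counts / counts.sum(axis=1, keepdims=True)
```

Is option 2 really independent of ROI volume in this sense, or does the waytotal formula still contain the seed voxel count explicitly?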
Thanks again in advance; I hope this is clear.
########################################################################
To unsubscribe from the FSL list, click the following link:
https://www.jiscmail.ac.uk/cgi-bin/webadmin?SUBED1=FSL&A=1