Hi Chen-Chia,

Please see below:

On Wed, 10 Oct 2018 at 11:42, Chen-Chia Lan <[log in to unmask]> wrote:
Dear FSL Experts

I have a question regarding how a statistical image is transformed into an uncorrected p-map (or the 1-p map in the output).

What I mean is: if we run 1000 permutations, each voxel will have a null distribution of the statistic at hand, consisting of 1000 observations for that voxel. If I have 30000 voxels in the image, then I will have 30000 individual null distributions. If I want to convert this statistic image into an uncorrected p-map, should I use the null distribution from each particular voxel (the specific one, out of the 30000 distributions, corresponding to that voxel) separately for each voxel?

Yes (conceptually), although doing so in practice would require a lot of memory, as you'd need to store the non-parametric distribution of every voxel. With 1000 permutations, that would be like having a 4D image file with 1000 timepoints. Instead, as the permutation algorithm runs, a counter is incremented at each voxel whenever the permuted statistic reaches the observed one. This requires just a tiny fraction of the memory and gives the same result.
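To make the counter idea concrete, here is a minimal NumPy sketch (not the actual FSL/randomise implementation) of counter-based uncorrected p-values, using sign-flipping for a one-sample test; the subject/voxel counts and the planted effect are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n_subj, n_vox = 20, 500                  # toy sizes, purely illustrative
data = rng.standard_normal((n_subj, n_vox))
data[:, :50] += 1.0                      # plant a true effect in the first 50 voxels

def t_stat(x):
    """One-sample t statistic, computed separately at each voxel."""
    return x.mean(axis=0) / (x.std(axis=0, ddof=1) / np.sqrt(x.shape[0]))

observed = t_stat(data)

n_perm = 1000
# One counter per voxel: how many permuted statistics equal or exceed the
# observed one. No 1000-timepoint null distribution is ever stored.
counts = np.ones(n_vox)                  # start at 1 to include the unpermuted case
for _ in range(n_perm):
    signs = rng.choice([-1.0, 1.0], size=(n_subj, 1))   # random sign-flip per subject
    counts += t_stat(data * signs) >= observed

p_uncorrected = counts / (n_perm + 1)    # per-voxel uncorrected p-values
```

Each voxel's p-value comes only from its own counter, i.e. from its own null distribution, but the memory cost is a single value per voxel rather than 1000.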
 

Or should I pick the maximum statistic across the whole image (that is, the value of the highest voxel) for each permutation? Then I would end up with only "one" null distribution of 1000 observations for the whole image, and I would convert each voxel's statistic into a p-value using this one common null distribution for the whole image.

Yes, that works too, but it no longer gives uncorrected p-values; it gives corrected p-values (corrected in the FWER, i.e. family-wise error rate, sense).
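For contrast with the per-voxel counters, here is the same kind of sketch (again hypothetical sizes, not the actual FSL code) for the maximum-statistic approach, where a single image-wise null distribution yields FWER-corrected p-values:

```python
import numpy as np

rng = np.random.default_rng(1)
n_subj, n_vox = 20, 500                  # toy sizes, purely illustrative
data = rng.standard_normal((n_subj, n_vox))
data[:, :50] += 1.0                      # plant a true effect in the first 50 voxels

def t_stat(x):
    """One-sample t statistic, computed separately at each voxel."""
    return x.mean(axis=0) / (x.std(axis=0, ddof=1) / np.sqrt(x.shape[0]))

observed = t_stat(data)

n_perm = 1000
# One common null distribution for the whole image: only the maximum
# statistic across all voxels is kept from each permutation.
max_null = np.empty(n_perm)
for i in range(n_perm):
    signs = rng.choice([-1.0, 1.0], size=(n_subj, 1))
    max_null[i] = t_stat(data * signs).max()

# FWER-corrected p-value: the fraction of image-wise maxima that reach
# each voxel's observed statistic.
p_fwer = (1 + (max_null[:, None] >= observed[None, :]).sum(axis=0)) / (n_perm + 1)
```

Because every voxel is compared against the same distribution of maxima, these p-values control the chance of any false positive anywhere in the image, which is why they are (weakly) larger than the uncorrected ones.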

All the best,

Anderson

 

Thank you very much!

Chen-Chia

########################################################################

To unsubscribe from the FSL list, click the following link:
https://www.jiscmail.ac.uk/cgi-bin/webadmin?SUBED1=FSL&A=1
