Hi Chris,
> I've looked at both the manual and the paper, but could not find a
> more detailed explanation of how the voxel-based thresholding is
> performed.
The underlying idea is that if there is no difference between, for
example, two groups, then by definition they are the same. If they are
the same, it should not matter if we permute the data, i.e. swap the
labels of some data points. In other words, if the groups are the same,
the labels carry no meaning.
This means that we should (if indeed there is no difference) be able to
test all possible (or some suitably large number) re-labellings of the
data and from that build up an empirical estimate of the
null distribution given our particular statistic and data. If it then
turns out that the original labelling has a statistic that falls
somewhere in the middle of that distribution we can conclude that there
was nothing special about that labelling. This would then be an
indication that the null-hypothesis (the assumption that there is no
difference) was true (or, more correctly, that there is no evidence of it
being false). You can transform this to a p-value by observing how
"unusual" the statistic of the original labelling was. If for example
you have done 1000 re-labellings and you find that the original
labelling was the hundredth largest (i.e. there were 99 re-labellings
that yielded a larger statistic) you would say that the p-value was
100/1000 = 0.1.
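For a single voxel, that procedure can be sketched in a few lines of
plain Python. The data, group sizes and the simple difference-of-means
statistic below are all made up for illustration; randomise itself uses
t/F-type statistics and is far more efficient:

```python
import random

random.seed(0)

# Hypothetical data for one voxel: two groups of subjects.
# (Values and sizes are illustrative, not from any real study.)
group_a = [4.1, 5.0, 4.7, 5.3, 4.9]
group_b = [3.2, 3.8, 3.5, 4.0, 3.6]

def mean(xs):
    return sum(xs) / len(xs)

def perm_pvalue(a, b, n_perm=1000):
    """One-sided permutation p-value for mean(a) - mean(b)."""
    pooled = a + b
    n_a = len(a)
    observed = mean(a) - mean(b)
    # Count re-labellings whose statistic is at least as large as
    # the one given by the original labelling.
    count = 0
    for _ in range(n_perm):
        random.shuffle(pooled)                    # swap the labels
        stat = mean(pooled[:n_a]) - mean(pooled[n_a:])
        if stat >= observed:
            count += 1
    # Count the original labelling as one of the re-labellings
    # (a common convention that avoids p = 0).
    return (count + 1) / (n_perm + 1)

p = perm_pvalue(group_a, group_b)
```

Since the two hypothetical groups barely overlap, very few re-labellings
beat the original one and p comes out small.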
If you do this for each voxel and compare the observed original
statistic to the voxel-specific distribution you get an "uncorrected"
p-value.
If you build the distribution of the maximum t-value (anywhere in the
brain) and compare the statistic of all voxels in the original labelling
to that distribution you get a "corrected" p-value.
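Continuing the sketch above, the corrected version looks like this on a
toy "image" of 50 voxels (again, all sizes, the planted effect in voxel 0
and the difference-of-means statistic are illustrative assumptions, not
what randomise actually computes):

```python
import random
from statistics import fmean as mean

random.seed(1)

# Toy "image": 50 voxels, 10 subjects per group; voxel 0 gets a
# real group difference, the rest are pure noise (all illustrative).
n_vox, n_a, n_b = 50, 10, 10
a = [[random.gauss(0.0, 1.0) for _ in range(n_a)] for _ in range(n_vox)]
b = [[random.gauss(0.0, 1.0) for _ in range(n_b)] for _ in range(n_vox)]
for j in range(n_a):
    a[0][j] += 3.0                               # plant a true effect

data = [a[v] + b[v] for v in range(n_vox)]       # pooled data per voxel
observed = [mean(a[v]) - mean(b[v]) for v in range(n_vox)]

# Build the null distribution of the *maximum* statistic anywhere in
# the image: one shared re-labelling per permutation, applied to all
# voxels at once.
n_perm = 1000
max_null = []
for _ in range(n_perm):
    idx = list(range(n_a + n_b))
    random.shuffle(idx)                          # one re-labelling
    max_null.append(max(
        mean([data[v][i] for i in idx[:n_a]])
        - mean([data[v][i] for i in idx[n_a:]])
        for v in range(n_vox)
    ))

# Corrected p-value: compare each voxel's original statistic to the
# distribution of the maximum (again counting the original labelling).
p_corr = [
    (1 + sum(m >= obs for m in max_null)) / (n_perm + 1)
    for obs in observed
]
```

Only the voxel with the planted effect should survive this comparison;
the noise-only voxels end up with large corrected p-values, which is
exactly the family-wise error control the maximum distribution buys you.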
You may find it useful to download the slides at
http://www.fmrib.ox.ac.uk/fslcourse/lectures/seg_struc/randomise.pdf
for an explanation of the underlying concepts.
Good luck,
Jesper