Dear Ged,
I am trying to think of a Fundamental Topologist's reply:
Perhaps the simplest is that if you want to model an image as a bag of correlated voxels, then you should report and discuss every voxel in every cluster. Unless you do this, you are a closet topologist.
I hope this helps :)
Karl
PS To assign a p-value to a voxel is a category error. The p-value is an attribute of the Euler characteristic, which is an attribute of connected voxels (not a single voxel).
At 18:08 06/05/2010, Justin Chumbley wrote:
Hi Ged,
On 6 May 2010 14:21, DRC SPM <[log in to unmask]> wrote:
- Hi Guillaume,
- This is an interesting philosophical point... Personally, I am slightly more "Voxelist" than I think Justin, Karl, and perhaps you are... You might be surprised to hear that Keith was also Voxelist in one practical regard too, as SurfStat can indeed produce maps of RFT-corrected p-values for "peaks" and clusters, where these maps are defined over all vertices or voxels. In the cluster case, vertices/voxels have the uniform p-value of the cluster they are contained in (or p=1, if they are outside a significant cluster), but in the peak case, despite the arguments from the "Topologist" school, Keith did assign FWE-corrected p-values to every vertex/voxel, and not just the local maxima.
- In fact, based on the lack of information within the clusters, Keith came up with a nice visualisation which combines cluster- and vertex-wise significance, see e.g. http://www.stat.uchicago.edu/~worsley/surfstat/figs/Pm-f.jpg
- I don't think he got around to implementing a similar visualisation for voxel-wise data (SurfStatP returns the peak and cluster results necessary, but I think you are on your own as to how to visualise these), but I've seen no evidence that he had a philosophical objection to this (especially not one that was somehow specific to the voxel-wise but not vertex-wise case).
- Similarly, in permutation testing, comparison to the null distribution of the maximum over the image yields FWE-corrected p-values for every voxel; you can choose to look at these only at local-maxima voxels if you wish, but no topological assumptions are required to control FWE.
- In fact, being able to interpret individual voxels as significant is a key distinction between weak and strong control of FWE made by e.g. Nichols and Hayasaka (2003), p.422: http://dx.doi.org/10.1191/0962280203sm341ra
- Of course, this is all assuming that you can declare voxels as true or false positives, which Justin and Karl have argued against... However, I don't think their arguments have entirely convinced me that you can declare local maxima or clusters as true or false either, if you can't do so for voxels, since the same arguments about continuous and infinitely extended signal would seem to screw up *all* notions of type I and type II error, not just the voxel-wise ones.
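Ged's point about permutation testing can be made concrete with a small sketch. The following is a minimal NumPy illustration, not SPM/SnPM code: the data, effect size, and number of permutations are invented for illustration, and a one-sample design with sign-flipping is assumed (which requires a symmetric null distribution). The key step is comparing every voxel's statistic to the null distribution of the image-wise maximum, which yields an FWE-corrected p-value at every voxel, not only at local maxima.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 20 subjects x 1000 voxels of contrast values,
# with signal injected into the first 50 voxels.
data = rng.normal(0.0, 1.0, size=(20, 1000))
data[:, :50] += 0.8

def t_stat(x):
    """One-sample t statistic at every voxel."""
    return x.mean(axis=0) / (x.std(axis=0, ddof=1) / np.sqrt(x.shape[0]))

observed = t_stat(data)

# Null distribution of the maximum over the image, via sign-flipping.
n_perm = 1000
max_null = np.empty(n_perm)
for i in range(n_perm):
    signs = rng.choice([-1.0, 1.0], size=(data.shape[0], 1))
    max_null[i] = t_stat(data * signs).max()

# FWE-corrected p-value at EVERY voxel: the proportion of permutation
# maxima that meet or exceed each voxel's observed statistic.
p_fwe = (max_null[None, :] >= observed[:, None]).mean(axis=1)
```

One can then threshold `p_fwe` anywhere in the image, or restrict attention to local maxima, but no topological assumption is needed for the FWE control itself.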
Smoothing images with broad-support (e.g. Gaussian) kernels rides
roughshod over the aspiration for strong control. Beyond this
nit-picking, some of this depends on definition. Traditional RFT defines
and controls false positives under the null SPM. Under the null SPM all
positives, no matter where they occur spatially, are false positives.
We considered a definition of false-positives that is more general,
applying also under the alternative SPM (i.e. in the presence of
experimentally-induced activations, even when these extend across the
whole image). We followed the intuition that a false positive must
generally be spatially removed from any underlying activation. To
formalise this, take the example of peak inference. First interpret
significant SPM peaks as indicating the existence of true signal
peaks. Let x indicate the distance between a discovered peak and
the nearest true peak. Then any discovered peak beyond (predefined)
distance x>c from a true peak is defined as false-positive (otherwise
it is true-positive). Under the null SPM, we define x = inf for all
discovered peaks (there are no underlying peaks). All discovered peaks
are therefore spatial false positives, in accordance with the
non-spatial definition of error. Importantly, false positives are now
also defined under the alternative SPM: i.e. observed peaks farther than
c from a true peak. Familywise false-positive error rates and
false-discovery rates can now be defined under the alternative SPM.
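The definition just given can be sketched in a few lines. This is a hypothetical illustration of the classification rule only (the function name, the use of Euclidean distance, and the example coordinates are my own assumptions, not from the cited work): a discovered peak is true-positive if it lies within distance c of some true peak, and false-positive otherwise, with x = inf (hence all false positives) when there are no true peaks.

```python
import numpy as np

def classify_peaks(discovered, true_peaks, c):
    """Label each discovered peak True (true-positive) if it lies
    within distance c of some true peak, else False (false-positive).
    Under the null SPM there are no true peaks, so x = inf for every
    discovery and all discoveries are false positives."""
    discovered = np.atleast_2d(discovered)
    if len(true_peaks) == 0:
        x = np.full(len(discovered), np.inf)
    else:
        true_peaks = np.atleast_2d(true_peaks)
        # Distance from each discovered peak to its nearest true peak.
        d = np.linalg.norm(
            discovered[:, None, :] - true_peaks[None, :, :], axis=2)
        x = d.min(axis=1)
    return x <= c

# A peak 1 unit from a true peak is true-positive at c = 2; a peak
# far away is false-positive.
labels = classify_peaks([[0.0, 0.0], [10.0, 10.0]], [[1.0, 0.0]], c=2.0)
# labels -> array([ True, False])
```

The familywise error rate under the alternative SPM is then the probability of one or more False labels across discoveries, and the false-discovery rate is their expected proportion.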
Note that this definition of a procedure’s spatial error-rate is derived
from true/false classification of peaks. This classification is based on
the spatial accuracy with which the procedure identifies target peaks in
the underlying signal. Spatial accuracy can therefore be examined and
discussed per se. We took this perspective in the work Guillaume cited.
Finally, while the analysis of spatial error (or accuracy) applies most
naturally to peak-level inferences using RFT, one may appraise the
spatial accuracy of other procedures. Keith seemed happy with this (he
was a co-author!), but I think it is rather questionable to preempt how
he would have contributed to this debate now....
With my very best wishes
JC
- Perhaps a fundamentalist Topologist will reply to put me in my place?! ;-)
- Best wishes,
- Ged