Hi all,
Pleased to see that I have been put in my place, as requested ;-)
I have a few more thoughts on this though, for anyone interested...
> Smoothing images with broad support (e.g. Gaussian) Kernels rides
> roughshod over the aspiration for strong control
I see the problem, but I think it's a problem with the entire
philosophy of classical statistics, including voxel-wise and
topological, FDR and FWE. If you think there is signal everywhere, but
wish to know how likely it is that you have a meaningfully large
amount of it in some part of the brain, then arguably you want the
Bayesian posterior probability of exactly that (prob signal > squat |
data). As far as I understand the PPMs, this is what they do, i.e.
they force you to choose the "squat" threshold, and then give you a
probability at every voxel.
This seems more voxelist to me...? Though I accept Karl's
counter-argument that I should then report every voxel's
probability... I gather that some Bayesians would also insist upon
retaining the full posterior distribution of p(signal|data) instead of
summarising it with p(signal>squat|data), but then even visualising
(let alone reporting) that at every voxel would be a bit of a pain...
Perhaps the current PPMs are a nice pragmatic compromise?
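For anyone who wants to play with the idea, here's a minimal sketch of the "prob signal > squat | data" computation, under the simplifying assumption that each voxel's posterior is Gaussian with a known mean and sd (the function and variable names are mine, not SPM's):

```python
import math

def prob_exceeds(post_mean, post_sd, squat):
    """P(signal > squat | data) for a Gaussian posterior N(post_mean, post_sd^2)."""
    z = (squat - post_mean) / post_sd
    return 0.5 * math.erfc(z / math.sqrt(2.0))

# Toy per-voxel posteriors (mean, sd) and an effect-size threshold ("squat")
posteriors = [(0.1, 0.5), (1.2, 0.4), (2.5, 0.6)]
squat = 1.0

# The "PPM": one posterior probability per voxel
ppm = [prob_exceeds(m, s, squat) for m, s in posteriors]
```

The arbitrariness is all in choosing `squat`; once that's fixed, every voxel gets an honest probability rather than a binary survives/doesn't-survive verdict.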
> Let x indicate the distance between a discovered peak and the nearest
> true peak. Then any discovered peak beyond (predefined) distance x>c
> from a true peak is defined as false-positive
I do think this was very nice work, though it arguably replaces one
arbitrary decision (how low should my smoothed signal be before I
decide there is no signal left) with another (how far from my true
peak before I decide it is wrong); I think you might be right that
this peak distance is the lesser of two arbitrarinesses, but does it

also run into some extra problems when we consider more realistically
complicated signals? With real signals, there could be multiple peaks
of differing heights, with differing responses between them; the
peak-to-peak distances for the following 1D signal and detection cases
are the same, but I would argue the second is less of a mistake due to
the pattern of the true signal.
[0 0 1 6 0 0 0] - true signal
[0 0 0 0 0 0 1] - detected
[0 1 3 5 4 3 2] - true signal
[0 0 0 0 0 0 1] - detected
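To make the point concrete, here's a toy sketch of the peak-distance criterion as I understand it (names and the peak definition — strict local maxima of positive values — are my own assumptions, not the authors'):

```python
def peaks(x):
    """Indices of strict local maxima among positive values (endpoints allowed)."""
    n = len(x)
    return [i for i in range(n)
            if x[i] > 0
            and (i == 0 or x[i] > x[i - 1])
            and (i == n - 1 or x[i] > x[i + 1])]

def false_positives(true_sig, detected_sig, c):
    """Detected peaks further than c from every true peak are called false positives."""
    true_pk = peaks(true_sig)
    return [i for i in peaks(detected_sig)
            if min(abs(i - j) for j in true_pk) > c]

# Both of the cases above: the detected peak sits 3 voxels from the true peak,
# so with any c < 3 the criterion scores them as equally wrong...
fp1 = false_positives([0, 0, 1, 6, 0, 0, 0], [0, 0, 0, 0, 0, 0, 1], c=2)
fp2 = false_positives([0, 1, 3, 5, 4, 3, 2], [0, 0, 0, 0, 0, 0, 1], c=2)
```

...even though in the second case the detection lands on the skirt of a broad true response, which is my worry about judging by peak distance alone.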
Anyway, I'm not a fundamentalist Voxelist myself, just aiming to be a
kind of pragmatist and devil's advocate...
Best wishes,
Ged