Dear Chris,
This is a good point. With a typical acquisition resolution nowadays of something like 3x3x3 = 27 mm^3, resampled to 2x2x2 = 8 mm^3 with the SPM defaults, it is in my opinion still somewhat ambiguous whether a single significant voxel (less than one third of the original voxel volume) is really meaningful or not (but define "meaningful").
I think the general problem is that we rely far too much on p-values and commonly don't report effect sizes. There are various methodological papers discussing the limitations of p-values, e.g. the recent Nature article by Nuzzo titled "Scientific method: Statistical errors".
In the case of fMRI, one could ideally display e.g. beta estimates and p-values on a voxel-by-voxel basis, including the "non-significant" voxels. Depending on the threshold, a map might look like a very specific activation pattern, but on closer inspection of the data / sub-threshold voxels, widespread activation might become evident. Just a few days ago I came across a nice paper by Allen et al. (2012) http://statacumen.com/pub/2012_AllenErhardt_DataVis.pdf dealing with exactly that issue. Basically it is about how to improve the information content of figures, e.g. violin plots vs. box plots. They also present figures for fMRI results in which beta/contrast estimates are colour-coded and p-values are coded by different levels of transparency. This way the reader can assess the data and the results much better (leaving aside the quality of the data, e.g. whether the whole brain was really covered or some parts were lost in the group analysis).
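To make the dual-coding idea concrete, here is a minimal toy sketch (my own illustration, not the actual code or data from Allen et al.): colour encodes a simulated beta/contrast map, and transparency encodes a crude stand-in for voxelwise p-values, so sub-threshold voxels fade out gradually instead of vanishing at a hard cutoff.

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend, no display needed
import matplotlib.cm as cm
import matplotlib.colors as mcolors
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)

# Toy 2-D "slice": background noise plus one focal activation blob.
x, y = np.meshgrid(np.linspace(-3, 3, 64), np.linspace(-3, 3, 64))
beta = 0.3 * rng.standard_normal((64, 64)) + 2.0 * np.exp(-(x**2 + y**2))

# Crude stand-in for voxelwise p-values (smaller where |beta| is large);
# in a real analysis these would come from the statistical model.
p = np.clip(np.exp(-3.0 * np.abs(beta)), 1e-6, 1.0)

# Dual coding: colour from beta, alpha from p.
norm = mcolors.Normalize(vmin=-np.abs(beta).max(), vmax=np.abs(beta).max())
rgba = cm.coolwarm(norm(beta))
# Fully opaque at p <= 0.001, fading linearly in -log10(p) toward p = 1.
rgba[..., 3] = np.clip(-np.log10(p) / 3.0, 0.0, 1.0)

fig, ax = plt.subplots()
ax.imshow(np.zeros_like(beta), cmap="gray")  # placeholder "anatomical" underlay
ax.imshow(rgba)                              # dual-coded statistical overlay
fig.savefig("dual_coded_map.png")
```

The point of the continuous alpha ramp is exactly the one above: a reader can still see whether sub-threshold voxels form a widespread pattern, rather than only the voxels surviving an arbitrary threshold.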
Best,
Helmut