Dear Lasse,
> I would argue against reporting *any* results that have not had some sort of formal correction for multiple comparisons
I agree with Tom that you shouldn't build your discussion on rather dubious findings. You might want to look at Woo et al. (2014, http://dx.doi.org/10.1016%2Fj.neuroimage.2013.12.058 ) for a recent note on thresholding (although their focus is on fMRI). But it's never incorrect to also report/show unthresholded findings, and it definitely provides a better overview of the data. Any cut-off is arbitrary, be it an uncorrected or a corrected one (e.g. p = .05 FWE - why .05 and not .049 or .051?), and by looking at sig. results only, major parts of the analysis are unnecessarily hidden.

For example, anatomical variance is larger in some regions; accordingly, it might be more difficult to get a good overlap there, possibly resulting in more widespread activations with lower peak T values. Although non-sig. in your case, this could be interesting for another study focusing on that particular region, e.g. does a more complex normalisation increase overlap, is it useful to go with subject-specific localizers when it comes to fMRI, ... In fMRI, you might also simply fail to detect differences in regions affected by large susceptibility artefacts. The story would then be a different one; in the extreme case, good data quality but no sig. effect vs. those voxels not even being part of the analysis due to the default intensity/masking settings.

Focusing on p values also has some drawbacks when it comes to e.g. studies with low statistical power (e.g. arising from a small sample size, which is common in neuroimaging). The obtained p values would vary greatly if the study were replicated, and accordingly, different regions might survive the threshold, including false positives. If you provide unthresholded data, one might still detect some consistent patterns across studies at a later point. Unfortunately, people tend to consider power issues only once they've failed to find effects, but they don't care about power if they obtain highly sig. effects right from the beginning, even if power is low (which would make the effects/study questionable).
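To make the replication point concrete, here is a toy simulation (not from the discussion above; sample size, effect size, and the z-approximation to the two-sample t-test are all my own illustrative choices) showing how wildly p values fluctuate across exact replications of an underpowered two-group comparison:

```python
import math
import numpy as np

rng = np.random.default_rng(0)
n, d, reps = 15, 0.5, 1000   # 15 subjects per group, true effect d = 0.5

def two_sample_p(a, b):
    # z-test approximation to the two-sample t-test (fine for illustration)
    se = math.sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))
    z = (a.mean() - b.mean()) / se
    return math.erfc(abs(z) / math.sqrt(2))   # two-sided p

# Replicate the identical study 1000 times
pvals = np.array([two_sample_p(rng.normal(d, 1, n), rng.normal(0, 1, n))
                  for _ in range(reps)])

print(f"p < .05 in {(pvals < .05).mean():.0%} of replications")
print(f"p ranges from {pvals.min():.4f} to {pvals.max():.2f}")
```

With these settings only a minority of replications cross p < .05, yet the smallest p values look very convincing on their own - exactly the situation where a single "highly sig." result says little.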
> We used small volume correction (thresholded at voxel-wise p < .05 FWE) to investigate any group differences within these ROIs
Make sure the SVC is performed correctly, i.e. either combine the different ROIs into a single mask file or, alternatively, adjust the threshold for the number of ROIs. Also make sure you use the critical T value reflecting .05 FWE on the voxel level for the small volume, as described in https://www.jiscmail.ac.uk/cgi-bin/webadmin?A2=spm;614fe6e5.1504 , and then look at the corrected peak statistics, rather than e.g. going with .05 uncorrected (probably a common error).
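The two options can be sketched as follows; this is just a minimal illustration using numpy boolean arrays as stand-ins for binary ROI mask images (in practice you would load your NIfTI masks, e.g. with nibabel, and write the union back out as the SVC mask):

```python
import numpy as np

shape = (10, 10, 10)                                  # toy volume dimensions
rng = np.random.default_rng(1)
rois = [rng.random(shape) > 0.9 for _ in range(3)]    # three toy ROI masks

# Option 1: union of all ROIs -> one combined mask, one SVC at p < .05 FWE
combined = np.logical_or.reduce(rois)

# Option 2: keep ROIs separate, but Bonferroni-adjust the FWE level per ROI
alpha_per_roi = 0.05 / len(rois)

print(combined.sum(), "voxels in combined mask; per-ROI alpha =", alpha_per_roi)
```

Either way the family of tests is controlled once, instead of silently running three separate corrections each at .05.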
As stated, I would prefer a proper ROI analysis. With SVC you only learn whether there is a sig. effect or not (for whatever reason, people often don't even provide information like the no. of voxels). With average GM values extracted for anatomical ROIs (or functional labels from e.g. a resting-state parcellation, or spheres around coordinates) you can also provide unbiased effect sizes for the very same voxel selection, and you can safely correlate the average values with e.g. behavioral scores for additional analyses - while you are always going to run into a bias due to the initial voxel selection when relying on values extracted from the sig. voxels of the SVC. Other voxels might be associated (more) strongly with e.g. some disease score, but due to their large variance they might not show up when contrasting the two groups.

Leaving this aside, and especially as it's VBM: is your hypothesis really "somewhere within region xyz", or is it more specific, i.e. can you link the hypothesis to an entire region, e.g. due to some cytoarchitectonic characteristics (in which case you might want to try the labels in the Anatomy toolbox)? I mean, even if it were a significant finding, I don't know what the meaning of a 2-voxel group difference in region xyz would be - could this be considered a plausible finding? GM reduction in really just this subset of voxels?
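The ROI approach above boils down to very little code once the GM values are in a matrix. A minimal numpy sketch (all data here are simulated toy values; in practice the GM matrix would come from your modulated, smoothed segmentations and the mask from an anatomical atlas):

```python
import numpy as np

rng = np.random.default_rng(2)
n_sub, n_vox = 20, 500
gm = rng.normal(0.6, 0.05, (n_sub, n_vox))   # toy GM values, one row per subject
mask = rng.random(n_vox) > 0.8               # fixed anatomical ROI mask
group = np.repeat([0, 1], n_sub // 2)        # two groups of 10

# One average GM value per subject, from the SAME voxels for everyone
roi_mean = gm[:, mask].mean(axis=1)

# Effect size (Cohen's d, pooled SD) - unbiased because the voxel selection
# was fixed in advance rather than picked from the significant SVC voxels
a, b = roi_mean[group == 0], roi_mean[group == 1]
sp = np.sqrt(((len(a) - 1) * a.var(ddof=1) + (len(b) - 1) * b.var(ddof=1))
             / (len(a) + len(b) - 2))
d = (a.mean() - b.mean()) / sp

# Safe to correlate with a behavioral score (toy values here)
score = rng.normal(size=n_sub)
r = np.corrcoef(roi_mean, score)[0, 1]
```

The key design point is that `mask` is defined independently of the group contrast, so `d` and `r` are not inflated by circular voxel selection.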
Best
Helmut