>
>
> - is anyone aware of a paper on the use of uncorrected thresholds to
> define clusters?
>

There are numerous papers out there that use uncorrected thresholds. There
are also papers that use AlphaSim/3dClustSim or other Monte Carlo methods to
derive cluster-extent thresholds for a given voxel-wise p-value, yielding a
corrected cluster-level p-value. McLaren et al. (2010;
http://www.brainmap.wisc.edu/publications/56-Rhesus-Macaque-Brain-Morphometry)
offers some commentary on this approach.
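To make the Monte Carlo idea concrete, here is a toy sketch of the logic
behind tools like AlphaSim/3dClustSim: simulate null maps, threshold them at
the uncorrected voxel p-value, and take the upper tail of the max-cluster-size
distribution as the extent threshold. This is NOT the actual AlphaSim or
3dClustSim code; it assumes independent voxels on a 2D grid (real tools model
the 3D volume and its spatial smoothness), and the grid size, simulation
count, and function names are all illustrative choices of mine.

```python
import random

def largest_cluster(grid, n):
    """Size of the largest 4-connected cluster of True cells in an n x n grid."""
    seen, best = set(), 0
    for i in range(n):
        for j in range(n):
            if grid[i][j] and (i, j) not in seen:
                stack, size = [(i, j)], 0
                seen.add((i, j))
                while stack:
                    x, y = stack.pop()
                    size += 1
                    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        nx, ny = x + dx, y + dy
                        if (0 <= nx < n and 0 <= ny < n
                                and grid[nx][ny] and (nx, ny) not in seen):
                            seen.add((nx, ny))
                            stack.append((nx, ny))
                best = max(best, size)
    return best

def cluster_extent_threshold(n=32, p_voxel=0.005, n_sims=200,
                             alpha=0.05, seed=0):
    """Monte Carlo estimate of the cluster size needed so that a cluster
    of that size occurs in fewer than alpha of null maps, given an
    uncorrected voxel-wise threshold p_voxel."""
    rng = random.Random(seed)
    maxima = []
    for _ in range(n_sims):
        # Null data: each voxel independently exceeds the threshold with
        # probability p_voxel (no spatial smoothness modeled here).
        grid = [[rng.random() < p_voxel for _ in range(n)] for _ in range(n)]
        maxima.append(largest_cluster(grid, n))
    maxima.sort()
    # One more than the (1 - alpha) quantile of the null max-cluster-size
    # distribution: clusters this large are rare under the null.
    return maxima[int((1 - alpha) * n_sims)] + 1
```

With smoothed data, as in a real fMRI analysis, the null clusters are much
larger, which is why the actual tools estimate smoothness from the residuals
rather than assuming independent voxels.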


>
> - is anyone aware of a paper justifying reporting uncorrected peaks in
> tables (for example to facilitate meta-analyses)?
>

People routinely report uncorrected peak values, but make sure the cluster
itself is significant. Note, however, that SPM8 only reports the top three
peaks per cluster. If you really want to report all the peaks, you should use
peak_nii (
http://www.martinos.org/~mclaren/ftp/Utilities_DGM).


>
> - is anyone aware of a paper justifying thresholding images at uncorrected
> levels for illustration purposes?
>

Alex provided good examples of this.


>
> For those who feel like giving advice on responding, the text is below.
>

The reviewer's real concern seems to be with your wording.

You state that the voxels are FDR-corrected, but also that you used an
uncorrected threshold. This doesn't make sense as written. I think what you
meant to say is that you used an uncorrected voxel-wise p-value, an extent
threshold of Y voxels, and kept only clusters whose cluster-level FDR was
below p < 0.05. If you simply looked at the voxel-wise FDR values, the
wording would be misleading, because only the peaks are likely to survive,
not every voxel in the cluster.
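For clarity, the cluster-level FDR step I am describing is just the standard
Benjamini-Hochberg procedure applied to the list of cluster p-values (rather
than to voxel p-values). A minimal sketch, assuming you already have one
p-value per cluster (the function name and example p-values are mine, not
from SPM):

```python
def fdr_bh(pvals, q=0.05):
    """Benjamini-Hochberg: return a boolean list marking which p-values
    survive FDR control at level q."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    # Find the largest rank k with p_(k) <= q * k / m.
    k = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= q * rank / m:
            k = rank
    # Everything at or below that rank survives.
    keep = [False] * m
    for rank, i in enumerate(order, start=1):
        if rank <= k:
            keep[i] = True
    return keep

cluster_p = [0.001, 0.008, 0.04, 0.2, 0.6]
print(fdr_bh(cluster_p))  # → [True, True, False, False, False]
```

Reporting it this way ("clusters defined at p < 0.005 uncorrected, retained
if cluster-level FDR < 0.05") would avoid the impression that every voxel in
each cluster is individually corrected.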


>
> Thank you in advance
> Roberto Viviani
> University of Ulm, Germany.
>
> REVIEWER's TEXT
> What is implied to the reader by stating that "Correction for multiple
> comparisons was obtained through the false discovery rate (FDR) approach"
> is that ALL VOXELS within regions (clusters) listed were above the
> threshold for multiple-comparisons, not just that there was at least one
> (or more) peak voxels within the cluster that exhibited such an effect
> size. In other words, we are concerned with the significance threshold for
> the blobs, not the peaks. Readers are rarely interested in the effect size
> of a particular voxel. Based on the authors' response, I'm concerned a
> false impression is being made (not necessarily by intention, but in
> interpretation). In sum, the threshold for statistical significance for
> outputted results in the appended SPM tables are rather clearly NOT
> CORRECTED FOR MULTIPLE COMPARISONS. I appreciated the authors attaching
> this output to make the point crystal clear.
>
> The authors also state on page 6 of their paper "Cluster-level tests were
> conducted on clusters defined by the threshold of p = 0.005, uncorrected".
> Instead what should have been stated, by the SPM output given, was that the
> threshold for statistical significance was p < .005 UNCORRECTED for
> multiple comparisons, with a cluster size threshold of 50 voxels. PERIOD.
> THERE WAS NO FURTHER CORRECTION FOR MULTIPLE COMPARISONS WHATSOEVER. But
> instead, by the language given, we should expect the cluster sizes to
> represent voxels all with p-values < .05 FDR-corrected - this is extremely
> unlikely.
>