Two additional questions that came in regarding results and thresholding:
> 1. I use p < 0.001 (uncorrected) and a threshold of 20 voxels to begin with. According to your email, am I right in presuming this is cluster-based thresholding and not voxelwise?
It may be helpful to be specific about the difference between
thresholding and correction. If you use a voxelwise threshold of p <
.001 (uncorrected), you will only see voxels that survive this
threshold—but you haven't performed a correction. Similarly, if you
set a 20-voxel cluster extent (assuming the value of 20 is arbitrary),
you've thresholded your data, in that you are not looking at any
clusters smaller than 20 voxels, but no correction for multiple
comparisons is taking place. So you have a combination of voxel- and
cluster-based thresholding, but no correction for multiple comparisons
(as you've described).
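To see why a threshold alone is not a correction, a rough back-of-the-envelope calculation may help (the voxel count below is hypothetical, and it treats voxels as independent, which smoothness violates, but the order of magnitude makes the point):

  # Hypothetical: ~100,000 in-brain voxels tested at p < .001 uncorrected.
  # Treating tests as independent, this is roughly the number of
  # false-positive voxels expected under the null.
  n_voxels = 100_000
  alpha_voxel = 0.001
  expected_false_positives = n_voxels * alpha_voxel
  print(expected_false_positives)  # 100.0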
A common approach (but by no means the only one) would be to:
1) Run results using a voxelwise threshold of p < .001 (uncorrected)
and an extent minimum of 0.
2) Look at the results table to find the smallest cluster that reaches
cluster-level significance.
3) Re-run results using a voxelwise threshold of p < .001
(uncorrected), and now specify the extent so that only clusters large
enough to be significant are displayed.
You now have results that are corrected at the cluster level. You can
press the "save" button to save this thresholded statistical map as a
nifti image.
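Outside the SPM interface, the same cluster-extent thresholding can be sketched in Python with nibabel and scipy. This is only an illustration, not SPM's implementation: the filenames, degrees of freedom, and corrected extent below are hypothetical, and scipy's default cluster connectivity may differ from SPM's.

  import numpy as np
  import nibabel as nib
  from scipy import ndimage, stats

  timg = nib.load('spmT_0001.nii')       # hypothetical SPM t-map
  tmap = timg.get_fdata()

  df = 20                                # error df for the contrast (hypothetical)
  t_cutoff = stats.t.ppf(1 - 0.001, df)  # t equivalent of voxelwise p < .001

  # Smallest cluster reaching cluster-level significance in the
  # results table (step 2 above); hypothetical value
  k_corrected = 85

  # Voxelwise threshold, then label contiguous suprathreshold clusters
  supra = tmap > t_cutoff
  labels, n_clusters = ndimage.label(supra)

  # Zero out clusters smaller than the corrected extent
  for c in range(1, n_clusters + 1):
      cluster = labels == c
      if cluster.sum() < k_corrected:
          supra[cluster] = False

  # Analogous to pressing "save" in the SPM results window
  out = nib.Nifti1Image(np.where(supra, tmap, 0.0), timg.affine, timg.header)
  nib.save(out, 'thresholded_cluster_corrected.nii')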
> 2. I then look for significant peak values using voxel-corrected p values in the table. If they aren't significant, I do SVCs (10 mm radius) around that peak voxel and again only look at the voxel-corrected p (FWE) values. Is this correct? Can I then expect that all the voxels in that cluster survived small volume correction?
There is a problem with using SVC (small volume correction) around
peak values from an analysis: because you are choosing your region
based on where the peaks are in your data, the test within that region
is biased towards significance (see e.g. Kriegeskorte et al. 2009). To
avoid issues of nonindependence, your volumes should be defined based
on something besides the data you are testing. This could be a peak
from a previous study, a macroanatomical landmark or ROI, or an
independent result from the current study (e.g. independent data, or
probably an orthogonal contrast would be okay).
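As an illustration of defining a volume independently, here is a sketch that builds a 10 mm sphere around a peak coordinate taken from a previous study; the coordinate and filenames are hypothetical, and the resulting mask could then be used for SVC:

  import numpy as np
  import nibabel as nib

  ref = nib.load('mask.nii')               # hypothetical image in the analysis space
  shape, affine = ref.shape[:3], ref.affine

  seed_mm = np.array([-42.0, -22.0, 8.0])  # hypothetical a priori peak (MNI mm)
  radius_mm = 10.0

  # mm coordinates of every voxel centre, via the image affine
  ijk = np.indices(shape).reshape(3, -1)
  xyz = (affine @ np.vstack([ijk, np.ones((1, ijk.shape[1]))]))[:3]

  # Keep voxels within 10 mm of the seed
  dist = np.linalg.norm(xyz - seed_mm[:, None], axis=0)
  sphere = (dist <= radius_mm).reshape(shape)

  nib.save(nib.Nifti1Image(sphere.astype(np.uint8), affine), 'svc_sphere_10mm.nii')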
> How do I report the results?
It very much depends on specifically what you do, but some good
general guidelines for various approaches can be found in Poldrack et
al. (2008).
Other relevant and helpful papers include Chumbley & Friston (2009),
Nichols & Hayasaka (2003), and
http://imaging.mrc-cbu.cam.ac.uk/imaging/PrinciplesMultipleComparisons.
References:
Chumbley JR, Friston KJ (2009) False discovery rate revisited: FDR and
topological inference using Gaussian random fields. NeuroImage
44:62-70.
Kriegeskorte N, Simmons WK, Bellgowan PSF, Baker CI (2009) Circular
analysis in systems neuroscience: The dangers of double dipping. Nat
Neurosci 12:535-540.
Nichols T, Hayasaka S (2003) Controlling the familywise error rate in
functional neuroimaging: a comparative review. Statistical Methods in
Medical Research 12:419-446.
Poldrack RA, Fletcher PC, Henson RN, Worsley KJ, Brett M, Nichols TE
(2008) Guidelines for reporting an fMRI study. NeuroImage 40:409-414.
Best regards,
Jonathan
--
Dr. Jonathan Peelle
Department of Neurology
University of Pennsylvania
3 West Gates
3400 Spruce Street
Philadelphia, PA 19104
USA
http://jonathanpeelle.net/