Dear Keith,
> We used an uncorrected p-value in conjunction with the extent threshold
> to generate our SPMt. The statistics show an uncorrected p-value, its
> corresponding z- and t-scores, and a corrected p-value. My question is
> about the corrected p. Is this value calculated based on the pre-set
> parameters specified prior to generating the result - the uncorrected
> p-value and extent threshold? Or is that corrected p-value
> representative of a different type of multiple comparison correction?
The corrected p-value is computed from the observed t-value and the
resel count of the statistical image, using the random field theory
results in:
K.J. Worsley, S. Marrett, P. Neelin, A.C. Vandal, K.J. Friston and
A.C. Evans (1996). A unified statistical approach for determining
significant signals in images of cerebral activation. Human Brain
Mapping 4:58-73.
The corrected p-value does not depend on the voxel threshold you
specified.
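As a rough illustration of why the resel count, not the raw voxel count, drives the corrected p-value, here is a sketch (not SPM's actual code): it keeps only the 3D Euler-characteristic density term from the Worsley et al. approach, which is a good approximation at high thresholds, and compares it with a Bonferroni bound. The voxel and resel counts below are invented for the example.

```python
# Sketch only, not SPM's implementation.
import math

def p_uncorrected(z):
    """One-sided tail probability of a standard Gaussian."""
    return 0.5 * math.erfc(z / math.sqrt(2.0))

def p_rft(z, resels):
    """Approximate FWE-corrected p-value: expected Euler characteristic,
    resels times the 3D EC density of a Gaussian field (Worsley et al. 1996),
    keeping only the 3D term (valid for high thresholds)."""
    ec_density = ((4 * math.log(2)) ** 1.5 / (2 * math.pi) ** 2
                  * (z ** 2 - 1) * math.exp(-z ** 2 / 2))
    return min(1.0, resels * ec_density)

def p_bonferroni(z, n_voxels):
    """Bonferroni bound, treating every voxel as independent."""
    return min(1.0, n_voxels * p_uncorrected(z))

# A peak at z = 4.5 in an image of ~200,000 voxels but only ~300 resels
# (both numbers made up for illustration):
print(p_rft(4.5, 300))             # ~0.03: significant under RFT
print(p_bonferroni(4.5, 200_000))  # ~0.68: hopeless under Bonferroni
```

The point of the comparison: because the smoothed image contains far fewer resels than voxels, the random field correction is much less severe than counting every voxel as an independent test.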
> I'm fairly certain, for instance, that the corrected p isn't Bonferroni
> corrected, as Bonferroni assumes that all values (voxels) are
> independent of one another, which isn't the case in SPM since the
> images are all smoothed prior to analysis.
Correct.
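A quick simulation makes the point concrete (illustrative only; the image size and smoothing kernel are arbitrary choices, not SPM defaults): smoothing induces strong correlation between neighbouring voxels, which is exactly what makes a per-voxel Bonferroni correction conservative.

```python
# Demo of spatial correlation induced by smoothing; not SPM code.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
n_images, size = 500, 64
indep = rng.standard_normal((n_images, size))   # independent "voxels"
smooth = gaussian_filter(indep, sigma=(0, 3))   # smooth along space only

def neighbour_corr(x):
    """Correlation between adjacent voxels, pooled over images."""
    return np.corrcoef(x[:, :-1].ravel(), x[:, 1:].ravel())[0, 1]

print(neighbour_corr(indep))   # ~0: independent voxels
print(neighbour_corr(smooth))  # close to 1: smoothing couples neighbours
```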
> But if I am comparing say "voxel
> 1" from 50 different images, even though each individual voxel within
> a scan has a contribution from its neighbors, from scan to scan each
> "voxel 1" would be completely independent of the others. In that
> case a Bonferroni correction could be applied to generate the
> corrected p-value.
>
After (temporal) model specification, SPM estimates the parameters of
this model and computes a t-map from these parameter estimates (and some
other variables). The important point is that the multiple comparison
problem arises for this (spatial) t-map, i.e. there is no temporal
multiple comparison problem. The temporal autocorrelation is accounted
for in the temporal modelling part.
> Also, some of the results fall within a cluster that has a corrected p
> of 0.001. But when you look at the voxel-level the corrected p-values
> typically pole-vault to greater than .99. Does this mean that only the
> cluster itself is significant, or can you infer that any voxel that
> falls within a significant cluster is itself significant?
Only the size of the cluster is significant, not the individual voxels
within it.
> Or is the
> corrected p-value at the voxel-level just indicative that the
> individual "peaks" that comprise the cluster are not significantly
> different from one another?
No.
> Or does the corrected p-value stipulate
> that the chance of finding, for example, 3 significantly different
> "peaks" within one cluster is .99? Can you apply the same logic to the
> set-level? If the chance of finding so many clusters (say 15) out of
> the given contrast is p<.001, does that mean that every cluster is
> also significant, and every voxel comprising the cluster is also
> significant?
>
No, the other way round: if a cluster p-value is significant, this says
something about the probability of observing this cluster size under the
null hypothesis. If a set p-value is significant, this says something
about the probability of observing the size (cardinality) of this set
under the null hypothesis.
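One way to see what a cluster-level p-value means is a Monte Carlo sketch: how often do smooth null images produce a suprathreshold cluster at least as large as the one observed? (SPM uses closed-form random field theory results rather than simulation; the image size, smoothness, threshold, and observed cluster size below are all invented.)

```python
# Monte Carlo illustration of cluster-level inference; not SPM code.
import numpy as np
from scipy.ndimage import gaussian_filter, label

rng = np.random.default_rng(1)
shape, sigma, z_thresh = (64, 64), 2.0, 2.3   # arbitrary example settings

def max_cluster_size(img, thresh):
    """Size (in voxels) of the largest connected suprathreshold cluster."""
    labels, n = label(img > thresh)
    if n == 0:
        return 0
    return int(np.bincount(labels.ravel())[1:].max())

def smooth_null(rng):
    """One smoothed null image, rescaled to unit variance."""
    img = gaussian_filter(rng.standard_normal(shape), sigma)
    return img / img.std()

# Null distribution of the maximum cluster size
null_max = np.array([max_cluster_size(smooth_null(rng), z_thresh)
                     for _ in range(500)])

observed = 80  # hypothetical observed cluster size, in voxels
p_cluster = (null_max >= observed).mean()
print(p_cluster)
```

A small p_cluster says the observed cluster's *extent* is unlikely under the null; it says nothing about which voxels inside that cluster are individually significant.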
> I apologize in advance for the volume of questions contained within
> this post (and the possible resultant confusion) and a HUGE thank you
> in advance for any input regarding this topic.
My pleasure! Here is another interesting read:
K.J. Friston, A. Holmes, J-B. Poline, C.J. Price and C.D. Frith (1996).
Detecting activations in PET and fMRI: levels of inference and power.
NeuroImage 4:223-235.
Stefan
--
Stefan Kiebel
Functional Imaging Laboratory
Wellcome Dept. of Cognitive Neurology
12 Queen Square
WC1N 3BG London, UK
Tel.: +44-(0)20-7833-7478
FAX : -7813-1420
email: [log in to unmask]