Elizabeth,
Sorry for the delay. First let me say I will help you as much as possible
with respect to assumptions & software, but there are others who have more
experience with structural analyses... hopefully they can send advice
and references to the list. (list members: nudge-nudge)
> I have some questions regarding the use of SPM for structural image
> analysis. Specifically, the assumptions regarding smoothness of the
> images (or the smoothness of the signal we are trying to detect in the
> structural images) and the homogeneity of variance at each spatial
> location. Can these assumptions be met with structural binary grey/
> white matter volumes where the variance between subjects (at each
> voxel) would seem to be greater at the edges of cortical, or
> subcortical structures, and lower in the center of these structures?
First the variance issue, then smoothness, then another issue:
You correctly note that the variance is not assumed to be constant
across voxels; in fact, your concern that between-subject variance
is spatially inhomogeneous is exactly what this addresses. That is,
the variance is estimated locally and is allowed to vary voxel-by-voxel.
The relevant smoothness assumption that SPM requires is that the spatial
correlation structure does not vary with location, that is,
the spatial autocorrelation function is stationary. On unsmoothed or
lightly smoothed images this is a concern, but with sufficient
smoothing it is probably an acceptable assumption.
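To make the smoothing step concrete, here is a minimal sketch (my own
illustration, not SPM code) of smoothing a binary grey-matter map with a
Gaussian kernel specified by its FWHM, using the standard relation
FWHM = sigma * sqrt(8 ln 2); the data and sizes are made up:

```python
# Sketch, not SPM code: Gaussian smoothing of a binary map, FWHM in voxels.
import numpy as np
from scipy.ndimage import gaussian_filter

def smooth_binary_map(img, fwhm_vox):
    """Smooth a binary image with a Gaussian of the given FWHM (in voxels)."""
    sigma = fwhm_vox / np.sqrt(8.0 * np.log(2.0))  # FWHM = sigma*sqrt(8 ln 2)
    return gaussian_filter(img.astype(float), sigma)

rng = np.random.default_rng(0)
# Hypothetical binary grey-matter segmentation (values 0 or 1 only)
binary = (rng.random((32, 32, 32)) > 0.5).astype(float)
smoothed = smooth_binary_map(binary, fwhm_vox=4.0)
# After smoothing the voxel values are continuous in [0, 1]
```

After smoothing, each voxel is a locally weighted average of the binary
segmentation, which is what makes the stationarity (and normality)
assumptions plausible.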
Note that SnPM makes no assumption on the spatial structure of the image.
Another assumption that you don't mention is that of normality of
the data at each voxel, with each group having a common mean. If
the data were not smoothed this assumption would be untenable, as
there would be only two possible data values in the binary images, 0 and 1.
I don't know if any work has been done to check the veracity of this
assumption with sufficient smoothing.
SnPM doesn't require normality, but the GLM/t-test framework it is
based on isn't really meant to be used with binary (unsmoothed) data.
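One could probe this assumption informally. The sketch below (simulated
binary maps, not real grey-matter data; all names and sizes are mine)
smooths each subject's binary image and then applies a Shapiro-Wilk test
to the across-subject values at a single voxel:

```python
# Sketch with simulated data: check approximate normality, across subjects,
# of smoothed binary values at one voxel.
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.stats import shapiro

rng = np.random.default_rng(1)
n_subjects = 20
values = []
for _ in range(n_subjects):
    # Hypothetical binary segmentation for one subject
    binary = (rng.random((16, 16, 16)) > 0.5).astype(float)
    smoothed = gaussian_filter(binary, sigma=3.0)
    values.append(smoothed[8, 8, 8])  # smoothed value at a single voxel

stat, p = shapiro(values)  # a small p would flag non-normality at this voxel
```

In practice one would run this at many voxels and worry about the
resulting multiplicity, but it gives a feel for whether smoothing has
made the marginal distributions roughly Gaussian.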
> In another study with a priori anatomical hypotheses, I have used SPM
> without correction for multiple comparisons, and used permutations for
> the omnibus test of significance (based on the number of
> suprathreshold clusters). Region of interest tests were then used for
> localization. And all was fine!
I'm not sure I understand what you did here; by "permutations" did you use
SnPM? If so, and if you had an a priori hypothesis, then the
permutation distribution of the maximal suprathreshold cluster size will
be conservative, since it is protecting against false positives across
the whole brain. Further, ROI tests for localization suggest
post hoc placement of the regions.
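For reference, the maximal-cluster-size permutation test works roughly as
sketched below (toy 2-D data with the same group sizes as your study;
this is my illustration, not SnPM itself). The whole-brain corrected
p-value is the proportion of permutations whose *largest* suprathreshold
cluster is at least as big as the observed one, which is why it is
conservative for a single a priori region:

```python
# Sketch, not SnPM: permutation distribution of the maximal suprathreshold
# cluster size, giving a whole-brain corrected p-value.
import numpy as np
from scipy.ndimage import label
from scipy.stats import ttest_ind

rng = np.random.default_rng(2)
shape = (20, 20)                                  # toy "brain" slice
g1 = rng.normal(0.0, 1.0, (9,) + shape)           # group 1, n=9
g2 = rng.normal(0.0, 1.0, (10,) + shape)          # group 2, n=10
data = np.concatenate([g1, g2])
labels = np.array([0] * 9 + [1] * 10)
thresh = 2.0                                      # voxel-level t threshold

def t_map(data, labels):
    t, _ = ttest_ind(data[labels == 0], data[labels == 1], axis=0)
    return t

def max_cluster_size(t_img, thr):
    clusters, n = label(t_img > thr)              # connected components
    if n == 0:
        return 0
    return int(np.bincount(clusters.ravel())[1:].max())

observed = max_cluster_size(t_map(data, labels), thresh)
null_max = []
for _ in range(200):
    perm = rng.permutation(labels)                # relabel the groups
    null_max.append(max_cluster_size(t_map(data, perm), thresh))
# Corrected p: fraction of permutations with max cluster >= observed
p_corr = np.mean(np.array(null_max) >= observed)
```

Because the null distribution is built from the maximum over the whole
image, any cluster significant by this test is significant corrected for
the search over all voxels.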
> I am now looking at another set of subjects (binary grey matter maps,
> group 1 n=9, group 2 n=10, age as a confounding covariate, smoothed to
> 8mm, 12 parameter linear affine spatial normalization) and using a
> similar permutation strategy described above.
I assume you mean the permutation strategy described below. OK...
as I understand, you are trying to reconcile the results of four
different methods:
> 1. I have used SPM's correction for multiple comparisons (p<.05) where I
> see a significant group difference localized to a large cluster in an
> anatomical region which generally meets my a priori hypotheses.
To be clear, I will assume that this cluster was deemed significant at the
cluster level ("cluster-level {k,Z}") and not by an uncorrected extent k.
> 2. Native space (e.g., not spatially normalized) regional volumetric
> measurement of the structure apparently different between the groups
> based on SPM . The results of volumetric analysis in raw data revealed
> a statistically significant group volume difference consistent with the
> SPM finding.
This is a post-hoc finding, akin to an ROI analysis based on the
peaks in a statistic image; hence, it's not surprising you also found a
difference.
> 3. Randomly assign subjects to groups (30 permutations) and count
> suprathreshold voxels (not corrected for multiple comparisons) using the
> same criteria as the real group test. The mean number of suprathreshold
> voxels in the random tests is not different from the number of clusters
> in the real test. In addition, I looked for clusters in all the random
> tests which passed correction for multiple comparisons, and found
> significant results about 10% of the time. Though the location of the
> random significant results was not the same as in the group test, they
> tended to be anatomically plausible.
A concern would be a consistent heuristic to identify the cluster of
interest; that concern aside, the metric of interest in a permutation
test is not the mean of the permutation distribution but rather the
proportion of statistic values as or more extreme than the observed
statistic.
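The proportion rule can be sketched in a few lines (toy one-dimensional
data, my own construction; the group sizes match your study):

```python
# Sketch with toy data: the permutation p-value is the proportion of
# permuted statistics as or more extreme than the observed statistic,
# NOT a comparison against the mean of the null distribution.
import numpy as np

rng = np.random.default_rng(3)
g1 = rng.normal(0.5, 1.0, 9)    # group 1, n=9, shifted mean
g2 = rng.normal(0.0, 1.0, 10)   # group 2, n=10
data = np.concatenate([g1, g2])
n1 = len(g1)

observed = data[:n1].mean() - data[n1:].mean()
perm_stats = []
for _ in range(1000):
    shuffled = rng.permutation(data)  # random reassignment to groups
    perm_stats.append(shuffled[:n1].mean() - shuffled[n1:].mean())

# Two-sided p-value: proportion as or more extreme than observed
p_value = np.mean(np.abs(perm_stats) >= abs(observed))
```

Note that even if the mean of `perm_stats` were close to the observed
statistic, the tail proportion is what determines significance.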
Again, your examination of the clusters which survived multiple comparisons
correction is critically dependent on a heuristic that can uniquely
identify a cluster as being of interest or not. For example, requiring
a cluster to contain a voxel overlapping a single pre-specified voxel
(or a small collection of voxels).
> 4. Attempt to use SnPM to perform a similar test both with and without
> variance smoothing (1000 permutations). I did not find any significant
> results (e.g., max pseudo-t = 1.93), though my facility with setting up
> analyses and viewing results in SnPM is not quite up to par yet, and I
> could be missing something.
Did you examine suprathreshold cluster size statistics? To compare with
SPM's parametric result, do not use variance smoothing (0 0 0), and
answer Yes to 'Collect Supra-Threshold stats?'. Running this analysis
will create a large .mat file, but then you will be able to assess
cluster size.
BUT, it is very important to note that SPM96 assesses clusters by size
and height; the joint size/height theory is not easily framed in a
nonparametric test, and hence is not incorporated in SnPM.
So you cannot make a direct comparison to the SPM results, though they
should be roughly comparable.
> I was extremely pleased with the results from the first 2 steps, but
> concerned when I used permutations and SnPM to carefully validate my
> findings.
The first two results are essentially testing the same thing, so it
is not surprising they agree; the third, if the clusters were uniquely
identified, suggests that you found a false positive in the first two
results; the last appeared to be based on intensity, not cluster size,
and hence was measuring something different.
I hope this helps.
-Tom
-- Thomas Nichols -------------------- Department of Statistics
http://www.stat.cmu.edu/~nicholst Carnegie Mellon University
[log in to unmask] 5000 Forbes Avenue
-------------------------------------- Pittsburgh, PA 15213