> [...] the baseline is
> determined from the mean activity and variance across all the voxels in
> the brain [/ROI], which is then used to obtain beta values for each voxel.
I think you're referring to the estimation of the variance components,
used with weighted least squares to estimate the betas. SPM pools all
voxels which survive a main-effects F-test at an uncorrected
alpha-level (set in spm_defaults.m, look for .ufp). The use of
different explicit ("small" or otherise) ROIs would mean different
sets of voxels would be pooled to estimate the non-sphericity, and
hence beta could possible differ.
I guess the argument here would be similar to what Tom Nichols said
earlier in the thread about smoothness estimation. In both cases, if a
very small ROI is used, the estimate is likely to be very unreliable.
On the other hand, both for non-sphericity and for smoothness, I
wonder if one might argue that a respectably large ROI could actually
be better than whole-brain, since both smoothness and non-sphericity
could be non-stationary over the brain, and might be locally better
estimated for the ROI (Tom?).
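As a toy illustration of why the pooled voxel set can matter (this is just a minimal weighted-least-squares sketch, not SPM's actual code or its ReML estimation), the betas depend on the assumed error covariance, which SPM estimates from the pooled voxels; pooling different voxels can therefore yield different betas:

```python
import numpy as np

# Minimal sketch (not SPM code): weighted/generalised least squares betas
# depend on the error covariance V, which in SPM is estimated from the
# pooled voxels surviving the F-test.
rng = np.random.default_rng(0)
n = 20
X = np.column_stack([np.ones(n), rng.normal(size=n)])  # toy design matrix
y = X @ np.array([1.0, 2.0]) + rng.normal(size=n)      # toy voxel data

def wls_beta(X, y, V):
    """Generalised least squares: beta = (X' V^-1 X)^-1 X' V^-1 y."""
    Vi = np.linalg.inv(V)
    return np.linalg.solve(X.T @ Vi @ X, X.T @ Vi @ y)

# Two hypothetical non-sphericity estimates, as might result from
# pooling different sets of voxels (whole brain vs. a small ROI):
V_whole = np.eye(n)                        # i.i.d. errors
V_roi = np.diag(np.linspace(0.5, 2.0, n))  # heteroscedastic errors

b_whole = wls_beta(X, y, V_whole)
b_roi = wls_beta(X, y, V_roi)
print(b_whole, b_roi)  # the two beta estimates generally differ
```

The point of the sketch is only that the same data and design give different betas under different covariance estimates, which is why the choice of pooled voxels is not entirely innocuous.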
Anyway, this is probably a fairly minor point, and is slightly off the
original question that Susie raised: does the SVC only look at voxels
which survive the whole-brain stat thresholding that it follows?
("follows" in the sense that you can only press the SVC button *after*
you've specified an alpha (and optional extent threshold) for the
whole-brain). I think it does, due to the way spm_VOI is coded (see my
previous messages in this thread). This might not matter much, since
if you just choose a fairly lax uncorrected threshold for the
whole-brain, you won't be ignoring any voxels which would have any
chance of passing a stricter and/or corrected threshold for the ROI.
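A toy sketch of the masking behaviour described above (hypothetical statistic values and ROI; this is not spm_VOI itself): if SVC only considers voxels that survived the preceding whole-brain threshold, a stricter initial threshold can only shrink the set of ROI voxels handed to SVC, while a lax one leaves essentially all of them available:

```python
import numpy as np

# Toy sketch (not spm_VOI): statistic values for 100 "voxels", an ROI
# mask, and two whole-brain height thresholds applied before SVC.
rng = np.random.default_rng(1)
t = rng.normal(loc=1.0, size=100)  # hypothetical voxel statistics
roi = np.zeros(100, dtype=bool)
roi[40:60] = True                  # hypothetical 20-voxel ROI

def svc_candidates(t, roi, wholebrain_thresh):
    """ROI voxels SVC would see if it only looks at supra-threshold voxels."""
    surviving = t > wholebrain_thresh
    return int(np.sum(surviving & roi))

lax = svc_candidates(t, roi, 0.0)     # lax uncorrected threshold
strict = svc_candidates(t, roi, 2.5)  # strict (e.g. FWE-style) threshold
print(lax, strict)  # strict <= lax: the strict threshold excludes ROI voxels
```

This matches the practical advice above: choosing a lax whole-brain threshold before pressing SVC avoids silently discarding ROI voxels that a stricter prior threshold would have removed.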
I think it's an important point though, in the sense that I believe
users expect their results after pressing SVC to be independent of the
previous threshold they specified. So they might for example select
FWE 0.01 as their whole-brain threshold, then (perhaps without very
much surviving that) they might click SVC and enter their ROI,
expecting that every voxel within the ROI will be analysed, and
corrected for the ROI. This does not seem to be the case, for reasons
outlined in my previous emails in this thread. Their SVC analysis in
this case might include more voxels if they instead clicked "results"
again, set the whole-brain threshold to uncorrected-0.5, and then
clicked SVC.
So in other words, it's predominantly an issue of
documentation/user-expectations that I am concerned about. UNLESS it
is deliberate that SVC excludes voxels that failed to pass the
whole-brain threshold, which seems less likely (following comments
from Tom and Marko) but hasn't actually been confidently denied by
anyone; in that case, the use of uncorrected-0.5 above could be
"cheating" in some way. It would be good to have this
confirmed/denied. Possibly the usage or documentation of SVC could
also be changed, to clarify that it won't relax a previously very
strict whole-brain threshold.
P.S. I have now done myself what I suggested Mahinda try: re-running
multiple clicks of the "results" button, changing the whole-brain
alpha, and then using SVC. It seems to me that I can indeed reduce the
number of SVC-significant voxels (e.g. noticing changes in the K_E of
the largest cluster) with stricter initial whole-brain alpha (SPM5,
latest updates). Though possibly people think I am doing something