Dear Pär,
To obtain insight into issues like these, you should use random data
throughout -- i.e., you should test your random regressors on
randomly generated random fields, not on real data. I would also not
expect (as you write) conditions B and C to give you more
supra-threshold voxels than A; I would expect the same number of voxels
in all conditions (the null is true in all three cases).
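For instance, a pure-noise simulation along these lines (sketched here in
Python/NumPy rather than MATLAB/SPM, with hypothetical variable names; the
three designs follow the snippets quoted below) should show roughly the
same supra-threshold rate for "Rand1" under all three designs:

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_vox = 200, 5000

# Regressors as in the MATLAB snippets quoted below
rand1 = rng.random(n) * 10
rand2 = rand1 + rng.random(n)          # correlated with rand1 at r > 0.99
ones = np.ones(n)

# Orthogonalize rand2 with respect to the intercept and rand1
beta, *_ = np.linalg.lstsq(np.column_stack([ones, rand1]), rand2, rcond=None)
rand2_orth = rand2 - np.column_stack([ones, rand1]) @ beta

# Pure-noise "data": the null is true at every voxel
Y = rng.standard_normal((n, n_vox))

def t_first(X, Y):
    """OLS t statistic for the first column of X, at every column of Y."""
    XtX_inv = np.linalg.inv(X.T @ X)
    b = XtX_inv @ X.T @ Y
    resid = Y - X @ b
    sigma2 = (resid ** 2).sum(axis=0) / (X.shape[0] - X.shape[1])
    return b[0] / np.sqrt(sigma2 * XtX_inv[0, 0])

designs = {
    "A (Rand1 + Rand2)":      np.column_stack([rand1, rand2, ones]),
    "B (Rand1 only)":         np.column_stack([rand1, ones]),
    "C (Rand1 + Rand2_orth)": np.column_stack([rand1, rand2_orth, ones]),
}
rates = {name: (np.abs(t_first(X, Y)) > 3.0).mean()
         for name, X in designs.items()}
for name, r in rates.items():
    print(f"{name}: supra-threshold fraction {r:.4f}")
```

With Gaussian noise the t statistic is exactly t-distributed under each
design (the correlated regressor inflates the standard error and the
estimate in step, so the ratio is unaffected), which is why the three
rates should agree up to sampling noise.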
I would conjecture that the result you obtain is due to the influence
of voxelwise non-normality of the EPI data on the t statistic in the
extreme conditions of your test. I suspect that the highly
correlated design matrix creates high-leverage sampling points that
fish for chance influential observations across the volume. In the
presence of skewed data, these generate spikes in the t maps over and
above what is modelled by a random t field (i.e., in these extreme
conditions the approximation to the nominal t distribution voxelwise
is no longer applicable).
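One way to probe this conjecture (again a hypothetical Python/NumPy sketch,
not SPM code) is to fit the correlated design to symmetric versus skewed
zero-mean noise and compare the tails of the resulting t maps. Whether the
tail actually inflates depends on the leverage structure of the particular
design; the sketch only makes the comparison observable, it does not assume
an outcome:

```python
import numpy as np

rng = np.random.default_rng(1)
n, n_vox = 200, 5000

rand1 = rng.random(n) * 10
rand2 = rand1 + rng.random(n)               # r(rand1, rand2) > 0.99
X = np.column_stack([rand1, rand2, np.ones(n)])   # the correlated design (A)

def t_first(X, Y):
    """OLS t statistic for the first column of X, at every column of Y."""
    XtX_inv = np.linalg.inv(X.T @ X)
    b = XtX_inv @ X.T @ Y
    resid = Y - X @ b
    sigma2 = (resid ** 2).sum(axis=0) / (X.shape[0] - X.shape[1])
    return b[0] / np.sqrt(sigma2 * XtX_inv[0, 0])

# Null data in both cases: symmetric vs. strongly right-skewed noise
Y_norm = rng.standard_normal((n, n_vox))
Y_skew = rng.exponential(1.0, (n, n_vox)) - 1.0   # zero-mean, skewed

tail_norm = (np.abs(t_first(X, Y_norm)) > 3.0).mean()
tail_skew = (np.abs(t_first(X, Y_skew)) > 3.0).mean()
print(f"tail rate, normal noise: {tail_norm:.4f}")
print(f"tail rate, skewed noise: {tail_skew:.4f}")
```

The same comparison could be run on real resting-state volumes in place of
`Y_skew` to see whether the empirical tails exceed the nominal ones.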
The setting of resting state EPIs is interesting in this context,
since there are publications on connectivity that use densely sampled
seeds, all included together in one design matrix, where each seed in
turn is taken to be of interest and all the others are included as
nuisance covariates.
Best wishes
Roberto Viviani
Dept. of Psychiatry III
University of Ulm, Germany
> Dear SPMers,
> In order to better understand the effects of correlated regressors
> in 1st level SPM design, I've played around with simulated
> regressors. My fMRI data consist of 200 resting state fMRI volumes.
> I have created 3 design matrices:
> (A). A matrix consisting of two random regressors ("Rand1"
> = rand(200,1)*10; "Rand2" = Rand1 + rand(200,1)).
> (B). A matrix consisting of only "Rand1".
> (C). A matrix consisting of "Rand1" and a regressor "Rand2_orth"
> orthogonalized with respect to "Rand1".
> Most surprisingly, when testing for "Rand1" in the first model
> (i.e. [1 0]) I obtained more significant and more extended
> activations than when testing for "Rand1" in the other two models.
> Since the two original regressors "Rand1" and "Rand2" are so highly
> correlated (R>0.994), I would expect that only very little BOLD
> variance would be left to be attributed to each of the mutually
> orthogonal components of the regressors in model (A). And in the
> unlikely case that I still found some significant effects, I would
> strongly expect that testing for the effects of "Rand1" in the
> latter two models (B) and (C) would yield much more significant
> results, since the shared variance is eliminated in those cases.
> The fact that I found the opposite to be true forced me to test
> new models, using similar designs but with new random vectors. I
> replicated the first result of increased significance when
> controlling for a highly correlated vector, and cannot understand
> how this could be the case. After having dug into the SPM code etc.
> I still don't have a clue. Does anyone see how adding a highly
> correlated regressor can push the effects of a random regressor
> above the significance threshold of FWE-corrected clusters?
> Any insight on this would be highly appreciated!
> Pär
>