Dear SPMers,
In order to better understand the effects of correlated regressors in a
1st-level SPM design, I've played around with simulated regressors. My
fMRI data consist of 200 resting-state fMRI volumes. I have created 3
design matrices (see attached picture/pptx; construction sketched below the list):
(A). A matrix consisting of two random regressors ("Rand1" = rand(200,1)*10;
"Rand2" = Rand1 + rand(200,1)).
(B). A matrix consisting of "Rand1" only.
(C). A matrix consisting of "Rand1" and a regressor "Rand2_orth", which is
"Rand2" orthogonalized with respect to "Rand1".
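For reference, the regressors were constructed roughly as follows (a sketch;
no random seed is fixed, so exact values vary per run, and the
orthogonalization can be done with SPM's spm_orth or with an explicit
projection as here):

    % Simulated regressors
    Rand1 = rand(200,1) * 10;          % first random regressor
    Rand2 = Rand1 + rand(200,1);       % near-duplicate of Rand1 (highly correlated)

    % Orthogonalize Rand2 with respect to Rand1 by projecting out Rand1;
    % spm_orth([Rand1 Rand2]) should return the same second column
    Rand2_orth = Rand2 - Rand1 * ((Rand1' * Rand2) / (Rand1' * Rand1));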
Most surprisingly, when testing for "Rand1" in the first model (i.e.
contrast [1 0]), I obtained more significant and more extended activations
than when testing for "Rand1" in the other two models. Since the two
original regressors "Rand1" and "Rand2" are so highly correlated
(R > 0.994), I would expect only very little BOLD variance to be left to
attribute to each of the mutually orthogonal components of the regressors
in model (A). And in the unlikely case that I still found some significant
effects, I would strongly expect that testing for the effects of "Rand1" in
the latter two models (B) and (C) would yield much more significant
results, since the shared variance is eliminated in those cases.

The fact that I found the opposite to be true (see attached pic) prompted
me to test new models with similar designs but new random vectors. I
replicated the first result of increased significance when controlling for
a highly correlated vector, and I cannot understand how this can be the case.
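To quantify that expectation, one can check the collinearity and the
variance-inflation factor c*inv(X'X)*c' of the [1 0] contrast in each
design (a sketch; a constant column is appended here for comparability
with SPM's implicit constant):

    % Collinearity check: R(1,2) > 0.994 in my runs
    R  = corrcoef(Rand1, Rand2);

    % Contrast variance factor for each design; expected: vA >> vB ~ vC
    c  = [1 0 0];
    cB = [1 0];
    XA = [Rand1 Rand2      ones(200,1)];   % model (A)
    XB = [Rand1            ones(200,1)];   % model (B)
    XC = [Rand1 Rand2_orth ones(200,1)];   % model (C)
    vA = c  * pinv(XA' * XA) * c';         % inflated by collinearity
    vB = cB * pinv(XB' * XB) * cB';
    vC = c  * pinv(XC' * XC) * c';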
After digging into the SPM code etc. I still don't have a clue. Does anyone
see how adding a highly correlated regressor can push the effects of a
random regressor above the FWE-corrected cluster-level significance
threshold?
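For reference, the voxel-wise t-test I am reasoning about amounts to the
following in plain MATLAB terms (a sketch with a simulated stand-in for one
voxel's time series; SPM's actual estimation additionally involves
filtering and autocorrelation modelling):

    % Hypothetical data vector standing in for one voxel's time series
    y  = randn(200,1);
    X  = XA;                               % e.g. model (A) from above
    c  = [1 0 0];
    n  = size(X,1);
    b  = pinv(X) * y;                      % parameter estimates
    e  = y - X * b;                        % residuals
    s2 = (e' * e) / (n - rank(X));         % residual variance estimate
    % t depends on BOTH the residual variance and c*inv(X'X)*c', so an
    % extra regressor changes the statistic through two routes
    t  = (c * b) / sqrt(s2 * (c * pinv(X' * X) * c'));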
Any insight on this would be highly appreciated!
Pär