I did some testing with a few random sequences (not actual brain data). When
you put two highly correlated regressors in the same model, you will notice
that while [1 0] becomes stronger, those same areas will also be significant
for [0 -1].

In essence, the estimates become unstable. I believe you notice them getting
larger because the voxels you are looking at are the ones that were most
correlated with the regressor initially. I believe, based on the few
sequences I looked at, that if you look at voxels that were more correlated
with the second covariate, those voxels would become significantly negative
for the first regressor in the combined model. You are pushing the estimates
away from each other.
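
A minimal sketch of the kind of test I mean, in plain MATLAB with made-up
data (the seed, variable names and effect sizes are arbitrary; nothing here
is SPM-specific):

    rng(0);                          % arbitrary seed, just for repeatability
    n   = 200;
    x1  = rand(n,1)*10;              % "Rand1"
    x2  = x1 + rand(n,1);            % "Rand2", highly correlated with x1
    y   = 0.5*x1 + 5*randn(n,1);     % fake "voxel" time course driven by x1 only

    X1  = [x1 ones(n,1)];            % model with x1 alone
    X12 = [x1 x2 ones(n,1)];         % model with both regressors

    b1  = X1  \ y;                   % estimate of the x1 effect, alone
    b12 = X12 \ y;                   % estimates of x1 and x2 together

    % diag(inv(X'*X)) scales the variance of each beta: it is much larger
    % for the correlated pair, i.e. the individual estimates are unstable,
    % and an error in one tends to be compensated by the other.
    disp(diag(inv(X1'*X1))');
    disp(diag(inv(X12'*X12))');
    disp([b1(1) b12(1) b12(2)]);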

The question the contrast now answers is: after accounting for the variance
of the second regressor, does the first regressor explain extra variance?
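
Written out as an explicit model comparison (continuing the sketch above,
and assuming the Statistics Toolbox for fcdf), that question is just a
partial F-test:

    Xred  = [x2 ones(n,1)];                     % reduced model: second regressor only
    Xfull = [x1 x2 ones(n,1)];                  % full model: both regressors
    rss_r = sum((y - Xred *(Xred \y)).^2);      % residual sum of squares, reduced
    rss_f = sum((y - Xfull*(Xfull\y)).^2);      % residual sum of squares, full
    F     = (rss_r - rss_f) / (rss_f/(n - 3));  % does x1 explain extra variance?
    p     = 1 - fcdf(F, 1, n - 3);              % p-value for that extra variance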

Additionally, you must remember that these are now partial correlations,
which can become negative even when both regressors are positively correlated
with the data.
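
A small numerical illustration of that last point (plain MATLAB; corr and
partialcorr are in the Statistics Toolbox, and the numbers are just one
arbitrary example): y is positively correlated with both regressors, yet the
partial correlation with the second one is negative.

    rng(1);                             % arbitrary seed
    n  = 200;
    x1 = randn(n,1);
    x2 = 0.9*x1 + 0.1*randn(n,1);       % nearly a copy of x1
    y  = x1 - 0.2*x2 + 0.05*randn(n,1); % driven by x1, weakly opposed by x2
    corr(y, x1)                         % clearly positive
    corr(y, x2)                         % still positive, because x2 ~ x1
    partialcorr(y, x2, x1)              % negative once x1 is controlled for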

Best Regards, Donald McLaren
=================
D.G. McLaren, Ph.D.
Postdoctoral Research Fellow, GRECC, Bedford VA
Research Fellow, Department of Neurology, Massachusetts General Hospital
and
Harvard Medical School
Website: http://www.martinos.org/~mclaren
Office: (773) 406-2464
=====================



2012/3/15 Pär Flodin <[log in to unmask]>

> Dear SPMers,
> In order to better understand the effects of correlated regressors in a
> 1st-level SPM design, I've played around with simulated regressors. My fMRI
> data consist of 200 resting-state fMRI volumes. I have created 3 design
> matrices:
> (A). A matrix consisting of two random regressors ("Rand1" =
> rand(200,1)*10; "Rand2" = Rand1 + rand(200,1)).
> (B). A matrix consisting of only "Rand1".
> (C). A matrix consisting of "Rand1" and a regressor "Rand2_orth", i.e.
> "Rand2" orthogonalized with respect to "Rand1".
> Most surprisingly, when testing for "Rand1" (i.e. [1 0]) in the first
> model, I obtained more significant and extended activations than when
> testing for "Rand1" in the other two models. Since the two original
> regressors "Rand1" and "Rand2" are so highly correlated (R > 0.994), I
> would expect that only very little BOLD variance was left to be attributed
> to each of the mutually orthogonal components of the regressors in model
> (A). And in the unlikely case I still found some significant effects, I
> would strongly expect that testing for the effects of "Rand1" in the
> latter two models (B) and (C) would yield much more significant results,
> since the shared variance is eliminated in those cases. The fact that I
> found the opposite to be true made me test new models, using similar
> designs but with new random vectors. I replicated the first result of
> increased significance when controlling for a very correlated vector, and
> cannot understand how this could be the case. After having dug through the
> SPM code etc. I still don't have a clue. Does anyone see how adding a very
> correlated regressor can push the effects of a random regressor above the
> FWE-corrected cluster significance threshold?
> Any insight on this would be highly appreciated!
> Pär
>