Dear Dorian,

Dorian P. wrote:
> Dear Will,
>
> I have a question on the topic. Is there a risk that the
> orthogonalization A<-B produces a vector C1 that is still correlated
> with the vector C2 produced by the reverse orthogonalization B<-A? Is
> it possible that we are still measuring correlated activity if we use
> two GLMs with inverted pmods?
>
> Otherwise, why not get the orthogonalised values from A<-B and B<-A and
> use them in one GLM?
>
>

If you do this then you get back the original regressors A and B: the two orthogonalised vectors together span exactly the same space as A and B, so the GLM fit is unchanged.
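
To see this concretely, here is a small sketch in Python/NumPy (illustrative only, not SPM code; the toy regressors are made up for the example). It shows both points: C1 and C2 are generally still correlated with each other, as you suspected, but together they span exactly the same space as A and B, so a GLM containing both is equivalent to one containing the originals.

import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal(100)
B = 0.6 * A + 0.8 * rng.standard_normal(100)   # correlated with A

def orth(x, y):
    # Residual of x after regressing out y: x orthogonalised wrt y
    return x - y * (x @ y) / (y @ y)

C1 = orth(A, B)                     # A <- B
C2 = orth(B, A)                     # B <- A

# C1 and C2 are generally *not* uncorrelated with each other
print(np.corrcoef(C1, C2)[0, 1])    # nonzero

# ... but [C1 C2] spans the same space as [A B]: the original A is
# recovered exactly as a linear combination of C1 and C2
X = np.column_stack([C1, C2])
A_hat = X @ np.linalg.lstsq(X, A, rcond=None)[0]
print(np.allclose(A_hat, A))        # True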

The transformations are perhaps best visualised using Venn diagrams.

See e.g. slide 26 onwards of the Statistical Inference PPT from

http://www.fil.ion.ucl.ac.uk/spm/course/slides09-zurich/

These are really to show the difference between F- and t-tests (the
F-test for the overall effect, the t-test for unique contributions). The
orthogonalisation gives the shared variance to the regressors that are
not orthogonalised.
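
As a hedged numerical sketch of that last point (again toy data, not the SPM implementation): in the design [A, B<-A] the non-orthogonalised regressor A is credited with everything it can explain about the data, shared variance included, so its estimate equals the simple one-regressor fit. The orthogonalised B picks up only its unique contribution, i.e. A gets components 1 and 3 and B gets only component 2 of the decomposition in my earlier message (quoted below).

import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal(200)
B = 0.6 * A + 0.8 * rng.standard_normal(200)      # correlated with A
C = 1.0 * A + 0.5 * B + rng.standard_normal(200)  # toy "BOLD" signal

def orth(x, y):
    # x orthogonalised wrt y
    return x - y * (x @ y) / (y @ y)

# B is orthogonalised wrt A, so the shared variance goes to A
X = np.column_stack([A, orth(B, A)])
beta = np.linalg.lstsq(X, C, rcond=None)[0]

# Because the columns of X are orthogonal, beta[0] equals the simple
# regression of C on A alone: A absorbs the shared variance
print(beta[0], (A @ C) / (A @ A))   # numerically equal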

Best, Will.

>
> Best regards.
> Dorian
>
>
> 2010/2/18 Will Penny <[log in to unmask]>:
>> Dear Bruno,
>>
>> The standard way of thinking about correlated regressors is as follows.
>>
>> Because A and B are correlated what they can explain about a third
>> variable C (eg. BOLD activity) comprises 3 parts
>>
>> 1. That uniquely attributable to A
>> 2. That uniquely attributable to B
>> 3. Shared variance - that which could be explained by either A or B
>>
>> In the orthogonalisation of parametric regressors the second variable is
>> orthogonalised wrt the first. So for AB the shared variance is 'given' to
>> the A regressor (of course it also has component 1). For BA the shared
>> variance is given to the B regressor.
>>
>> So, in your language, it's the second variable that's "cleaned" from the
>> first. The one that is *not* orthogonalised gets all the shared variance.
>>
>> Best,
>>
>> Will.
>>
>> Bruno Oertel wrote:
>>> Dear SPM users,
>>>
>>>
>>> I know the topic has been discussed at length in this forum and I tried to
>>> figure out a solution to my problem by searching the archives, but I am
>>> still not 100 percent sure whether or not I am getting it right.
>>>
>>>
>>> I have a first level design where I defined a single condition (stimulus)
>>> with two parametric modulators (A and B, both 1st order). I tried both
>>> parameter sequences (A-B and B-A) and got different results looking at the
>>> simple contrasts for A and B, respectively, depending on the sequence order.
>>> Since I am mainly interested in the effects of the parameters, I was a bit
>>> confused about this. By searching the archives, I found out that the order
>>> of the parameters matters because the 2nd regressor is orthogonalized to the
>>> 1st regressor. This would explain why I got different results for both
>>> parameters depending on the sequence order.
>>>
>>>
>>> My question now is: is it right to say that by looking at the simple
>>> contrast for A (0 1 0) in the parameter sequence A-B, I am looking at the
>>> effects of A "cleaned" from B and vice versa for B (0 1 0) in parameter
>>> sequence (B-A)? If that is so, can I go on and do a one-sample t-test for A
>>> (from sequence A-B) and B (from sequence B-A), respectively, on the
>>> second-level to get my group results? Is this a valid approach?
>>>
>>>
>>> Thanks in advance for any insights.
>>>
>>> Best,
>>>
>>> Bruno
>>>
>> --
>> William D. Penny
>> Wellcome Trust Centre for Neuroimaging
>> University College London
>> 12 Queen Square
>> London WC1N 3BG
>>
>> Tel: 020 7833 7475
>> FAX: 020 7813 1420
>> Email: [log in to unmask]
>> URL: http://www.fil.ion.ucl.ac.uk/~wpenny/
>>
>
>

--
William D. Penny
Wellcome Trust Centre for Neuroimaging
University College London
12 Queen Square
London WC1N 3BG

Tel: 020 7833 7475
FAX: 020 7813 1420
Email: [log in to unmask]
URL: http://www.fil.ion.ucl.ac.uk/~wpenny/