Hi Dorian
> Dear all,
> I was discussing this in private with another member of the list but
> we cannot fully understand it.
>   
ok I'll try then ..
> 1.
> When we bypass orthogonalization the variance of the model is
> explained by all regressors in a kind of *competition*. I don't
> understand how this competition works statistically but actually I
> need the regressors to compete as much as possible with each other.
> This way I can compare them in a paired t-test at the 2nd level in
> order to find areas where one explains more variance than the other
> (independently of the order I put them in SPM). Does this make sense
> to you?
>   
Without orthogonalization (the usual setup) each regressor fits the 
data, but you are only looking at the 'unique part of variance' for each 
of them. With orthogonalization, the order matters because you 
attribute the maximum of variance to the 1st regressor, then the 2nd, etc. 
(it's like performing a simple linear regression with the 1st regressor, 
then another simple linear regression with the 2nd regressor on the 
residuals of the 1st fit, and so on).
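
If it helps, here is a little toy example of that 'serial' view -- made-up 
numbers and plain numpy rather than SPM code, so just a sketch of the idea:

import numpy as np

rng = np.random.default_rng(0)
n = 200
x1 = rng.standard_normal(n)
x2 = 0.7 * x1 + 0.3 * rng.standard_normal(n)   # made up, deliberately correlated with x1
y = 1.0 * x1 + 0.5 * x2 + rng.standard_normal(n)   # fake 'data'

# joint fit (no orthogonalization): each beta reflects that regressor's unique contribution
X = np.column_stack([x1, x2, np.ones(n)])
beta_joint, *_ = np.linalg.lstsq(X, y, rcond=None)

# 'serial' view: simple regression on x1, then regress the residuals on x2,
# so the overlapping variance is credited entirely to x1
X1 = np.column_stack([x1, np.ones(n)])
b1, *_ = np.linalg.lstsq(X1, y, rcond=None)
X2 = np.column_stack([x2, np.ones(n)])
b2, *_ = np.linalg.lstsq(X2, y - X1 @ b1, rcond=None)

print("joint betas :", beta_joint[:2])
print("serial betas:", b1[0], b2[0])

Swap x1 and x2 in the serial part and the betas change -- that is the order 
dependence.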

The way I think of this is with a diagram: imagine you have 3 conditions 
represented by 3 circles. Case 1, the 3 circles do not overlap - easy, 
each condition gets its own part of variance explained. Case 2, the 
circles overlap - that's where you have several options (which also 
relates to the different sums-of-squares options in the statistical 
packages); the unique part of variance, as in SPM, means you estimate 
the effect of each circle after removing the overlapping part; if instead 
you orthogonalize, you give the 1st regressor its full variance (the full 
circle), then the 2nd its full variance minus the overlap with the 1st 
circle, etc. (hope that makes sense to you :-\ )

Note that in all cases (orthogonalization or not) you can perform a 2nd 
level analysis.
> 2.
> Also discussing with my friend, I thought having one model with 5
> non-orthogonalized (i.e. independent) parametric modulations is like
> having 5 GLMs. Apparently this is not true because in the first case
> we have X variance explained by 5 modulations, while in the second
> case we have X variance explained by 1 modulation each time. But
> wouldn't the comparison in a paired t-test produce the same *winner* ?
> It makes sense logically: if two collinear regressors A and B explain
> 50% of the variance, with regressor A 30% and regressor B 40%, their
> variance overlaps but regressor B will end up with higher T values, no
> matter whether measured in the same GLM or in two separate GLMs. So is it
> better to keep them in the same GLM or split them up? Would the result
> be the same?
>   
I didn't quite understand this -- the sum of squares of the effect is 
computed via a single design matrix whether you orthogonalize or not. 
Regarding the T value, it depends on the error: if orthogonalized, A=30% 
and B=40% make up 70% of the variance, but if not orthogonalized they 
may make up more (the overlapping bit), so T values will vary.
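
As a rough sketch of that (made-up data and plain OLS rather than SPM, so 
only the pattern matters): fit the same data once with [A, B] as they are 
and once with B orthogonalized against A -- the T value of the regressor 
that receives the shared variance changes, the other stays put:

import numpy as np

def t_stats(X, y):
    # OLS t statistics for the columns of X (an intercept is added internally)
    Xc = np.column_stack([X, np.ones(len(y))])
    beta, *_ = np.linalg.lstsq(Xc, y, rcond=None)
    resid = y - Xc @ beta
    sigma2 = resid @ resid / (len(y) - Xc.shape[1])
    cov = sigma2 * np.linalg.inv(Xc.T @ Xc)
    return beta[:-1] / np.sqrt(np.diag(cov)[:-1])

rng = np.random.default_rng(2)
n = 300
a = rng.standard_normal(n)
b = 0.9 * a + 0.4 * rng.standard_normal(n)   # A and B strongly collinear (made up)
y = a + b + rng.standard_normal(n)           # fake data

# same data, two designs: [A, B] as is, and [A, B orthogonalized with respect to A]
b_orth = b - a * (a @ b) / (a @ a)
print("no orthogonalization :", t_stats(np.column_stack([a, b]), y))
print("B orthogonalized on A:", t_stats(np.column_stack([a, b_orth]), y))
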
> 3.
> Somebody advised to orthogonalize modulations (not with each other
> but) with the main condition. Does this produce any benefit?
>
> Help is highly appreciated from experts or non-experts. :)
>   
Well, orthogonalization is a matter of theory more than stats; it all 
depends on how you are thinking of your data. Say you use a standard 
regressor and some orthogonalized modulation: then you mean to model the 
BOLD response the standard way and the remaining variance by the 
modulation. Now if you do not orthogonalize, well, as you said above, 
they 'compete', and a voxel will have some variance explained by the 
standard model and some other part of the variance explained by the 
modulator; the problem is that they are extremely correlated (large 
overlap), and thus the unique part of variance will be small, i.e. it is 
likely (I think) that you end up with nothing significant.
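
A rough illustration of how big that overlap typically is -- made-up onsets 
and parameter values, simple stick functions with no HRF convolution, so 
not real SPM code:

import numpy as np

n_scans = 200
onsets = np.arange(10, 200, 20)                      # made-up onsets, 10 trials
params = np.array([3, 4, 3, 5, 4, 3, 4, 5, 4, 3])    # made-up trial-by-trial parameter

main = np.zeros(n_scans)
mod = np.zeros(n_scans)
main[onsets] = 1.0        # main condition: one stick per trial
mod[onsets] = params      # parametric modulation: stick scaled by the parameter

# (convolution with an HRF is skipped; it would not change the collinearity problem)
print("correlation main vs raw modulator :", round(np.corrcoef(main, mod)[0, 1], 2))

# orthogonalize the modulator against the main regressor: the overlap disappears
mod_orth = mod - main * (main @ mod) / (main @ main)
print("correlation main vs orthogonalized:", round(np.corrcoef(main, mod_orth)[0, 1], 2))

With the raw modulator the two columns are almost the same regressor; 
orthogonalizing (here equivalent to mean-centring) the modulation against 
the main condition is what removes that overlap.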

Hope this helps
Cyril


