Dorian P. wrote:
> Dear Cyril,
> What I mean in point 2 is that regressor A overlaps with B. Together
> they explain 50%; separately, A = 30% and B = 40%. This means they have
> 20% in common (we're talking about a voxel's variance, to be precise).
> From what you say I imagine that when they are put in the same GLM this
> 20% will be lost: A will have its remaining 10% and B its remaining
> 20%. Comparing them at the 2nd level means that B = 2A (quite good for
> detecting that this voxel is explained better by B).
yep that's right :-)
> Alternatively, putting them in separate GLMs will simply reproduce
> their original variance, A = 30% and B = 40%. Comparing them at the 2nd
> level will produce a weaker result, B = 4/3 A. Am I correct in this
> assumption? Does the common variance really get lost (it would be nice
> if so)? If so, I should keep them in the same GLM, un-orthogonalized,
> to get stronger results.
yes the common variance gets 'lost' ..
note this is only one way to deal with it - other methods exist; see
the material about types of sums of squares
an easy read about this is http://www.statsoft.co.uk/textbook/stathome.html
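To make the bookkeeping concrete, here is a minimal numpy sketch (the
data and effect sizes are invented, just to illustrate the idea):

    import numpy as np

    rng = np.random.default_rng(0)
    n = 200
    common = rng.standard_normal(n)            # variance shared by A and B
    A = common + rng.standard_normal(n)
    B = common + rng.standard_normal(n)
    y = A + B + 2 * rng.standard_normal(n)     # 'voxel' time course

    def r2(X, y):
        # R^2 of an ordinary least-squares fit, constant included
        X = np.column_stack([X, np.ones(len(y))])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        return 1 - (y - X @ beta).var() / y.var()

    full   = r2(np.column_stack([A, B]), y)    # joint GLM: A and B together
    only_A = r2(A, y)                          # separate GLM with A alone
    only_B = r2(B, y)                          # separate GLM with B alone

    unique_A = full - only_B                   # what only A adds
    unique_B = full - only_A                   # what only B adds
    shared   = only_A + only_B - full          # the overlap that gets 'lost'
    print(full, only_A, only_B, unique_A, unique_B, shared)

In the joint GLM each regressor is only credited with its unique part;
the shared part belongs to neither of them, which is exactly the 20%
in your example.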
> About point 3, I don't need the main condition at all. If it absorbs
> its variance without *stealing* it from the modulations, that is fine
> with me, but if it takes variance from the modulations I would like to
> keep it non-orthogonalized. I have collapsed all trials into that
> condition to play around with parametric modulations and thus don't
> need the condition itself. Any suggestion here would be useful.
Well, I'm not sure about your design, but if you have only 1 modulation
you could use the modulation parameter directly when modeling each
trial .. There was a recent paper, in NeuroImage I think, where the RT
of each trial was used to model the HRF (instead of modeling an impulse
of duration 0), and this model was compared with the standard approach
(1 column for the regressor + modulation by RT) -- clearly it was
better to directly 'modulate' the 1st regressor, but it may not always
be possible to do so .. cannot think of anything else here ...
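For illustration, here is a toy sketch of the two options (the HRF
shape, onsets and RTs below are all invented values, not SPM's actual
basis set or any real design):

    import numpy as np
    from math import factorial

    dt = 0.1                                     # temporal resolution (s)
    th = np.arange(0, 30, dt)
    hrf = (th**5 * np.exp(-th) / factorial(5)
           - 0.1 * th**15 * np.exp(-th) / factorial(15))  # toy double gamma
    hrf /= hrf.max()

    n = int(100.0 / dt)                          # 100 s of 'scan' time
    onsets = np.array([10.0, 40.0, 70.0])        # hypothetical onsets (s)
    rts    = np.array([0.6, 1.2, 0.9])           # hypothetical RTs (s)

    def conv(x):
        return np.convolve(x, hrf)[:n]

    # (a) standard: impulse regressor + mean-centred RT modulator
    stick = np.zeros(n)
    stick[(onsets / dt).astype(int)] = 1.0
    pmod = np.zeros(n)
    pmod[(onsets / dt).astype(int)] = rts - rts.mean()
    X_standard = np.column_stack([conv(stick), conv(pmod)])

    # (b) 'direct' modulation: one boxcar per trial, duration = RT
    box = np.zeros(n)
    for on, rt in zip(onsets, rts):
        box[int(on / dt):int((on + rt) / dt)] = 1.0
    X_duration = conv(box)

Option (b) builds the RT into the shape of the single regressor rather
than splitting it into a main effect plus a separate modulator.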
> Thank you for answering.
> 2009/3/4 cyril pernet <[log in to unmask]>:
>> Hi Dorian
>>> Dear all,
>>> I was discussing this in private with another member of the list but
>>> we cannot fully understand it.
>> ok I'll try then ..
>>> When we bypass orthogonalization, the variance of the model is
>>> explained by all regressors in a kind of *competition*. I don't
>>> understand how this competition works statistically, but actually I
>>> need the regressors to compete as much as possible with each other.
>>> This way I can compare them in a paired t-test at the 2nd level in
>>> order to find areas where one explains more variance than the other
>>> (independently of the order in which I put them in SPM). Does this
>>> make sense to you?
>> Without orthogonalization (the usual setup) each regressor fits the
>> data, but you are only looking at the 'unique part of variance' for
>> each of them -- with orthogonalization, the order matters because you
>> attribute the maximum of variance to the 1st, then the 2nd, etc ...
>> (it's like performing a simple linear regression with the 1st
>> regressor, then another simple linear regression with the 2nd
>> regressor on the residuals of the 1st fit, etc ..)
>> The way I think of this is using a diagram: imagine you have 3
>> conditions represented by 3 circles. Case 1, the 3 circles do not
>> overlap - easy, each condition gets its own part of variance
>> explained. Case 2, the circles overlap - well, that's where you have
>> several options (this also relates to the different sums-of-squares
>> options in statistical packages); the unique-part-of-variance
>> approach, as in SPM, estimates the effect of each circle after
>> removing the overlapping parts; now, if you orthogonalize, you give
>> the 1st regressor its full variance (the full circle), then the 2nd
>> its full variance minus the overlap with the 1st circle, etc ...
>> (hope that makes sense to you :-\ )
>> Note that in all cases (orthogonalization or not) you can perform a 2nd
>> level analysis.
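The 'regression on residuals' picture can be checked numerically; a
small sketch with two made-up regressors and no intercept:

    import numpy as np

    rng = np.random.default_rng(1)
    n = 200
    x1 = rng.standard_normal(n)
    x2 = 0.7 * x1 + rng.standard_normal(n)     # correlated with x1
    y  = x1 + x2 + rng.standard_normal(n)

    # orthogonalize x2 w.r.t. x1 (x1 keeps its full circle of variance)
    x2_orth = x2 - x1 * (x1 @ x2) / (x1 @ x1)

    # betas from the joint GLM with the orthogonalized pair ...
    X = np.column_stack([x1, x2_orth])
    b_joint = np.linalg.lstsq(X, y, rcond=None)[0]

    # ... equal two sequential simple regressions:
    b1 = (x1 @ y) / (x1 @ x1)                  # fit x1 first
    resid = y - b1 * x1                        # x2 then fits the leftovers
    b2 = (x2_orth @ resid) / (x2_orth @ x2_orth)
    print(b_joint, np.array([b1, b2]))         # identical up to rounding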
>>> Also, discussing with my friend, I thought that having one model
>>> with 5 non-orthogonalized (i.e. independent) parametric modulations
>>> is like having 5 GLMs. Apparently this is not true, because in the
>>> first case we have X variance explained by 5 modulations, while in
>>> the second case we have X variance explained by 1 modulation each
>>> time. But wouldn't the comparison in a paired t-test produce the
>>> same *winner*? It makes sense logically: if two collinear regressors
>>> A and B explain 50% variance, with regressor A 30% and regressor B
>>> 40%, their variance overlaps, but regressor B will end up with
>>> higher T values no matter whether measured in the same GLM or in two
>>> separate GLMs. So is it better to keep them in the same GLM or split
>>> them up? Would the result be the same?
>> didn't quite understand this -- the sum of squares of the effect is
>> computed via a single design matrix whether you orthogonalize or not ..
>> regarding the T value, it depends on the error; if orthogonalized,
>> A = 30% and B = 40% make up 70% of the variance, but if not
>> orthogonalized they may make up more (the overlapping bit), so T
>> values will vary
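A quick sketch of how the T value is computed and why it moves with
both the error term and the overlap between regressors (toy data again,
nothing from a real analysis):

    import numpy as np

    def glm_t(X, y):
        # OLS betas plus a t-value per column of the design matrix X
        n, p = X.shape
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        sigma2 = resid @ resid / (n - p)           # residual variance
        cov = sigma2 * np.linalg.inv(X.T @ X)      # covariance of betas
        return beta, beta / np.sqrt(np.diag(cov))

    rng = np.random.default_rng(2)
    n = 200
    common = rng.standard_normal(n)
    A = common + rng.standard_normal(n)            # A and B overlap
    B = common + rng.standard_normal(n)
    y = A + 2 * B + 2 * rng.standard_normal(n)

    beta, t = glm_t(np.column_stack([A, B, np.ones(n)]), y)
    print(beta, t)

The standard error of each beta depends on both the residual variance
and (X'X)^-1, so the more A and B overlap, the larger the standard
errors and the smaller the t-values for their unique effects.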
>>> Somebody advised orthogonalizing the modulations (not with each
>>> other but) with the main condition. Does this produce any benefit?
>>> Help is highly appreciated, from experts or non-experts. :)
>> well, orthogonalization is a matter of theory more than stats; it all
>> depends on how you are thinking of your data ..
>> say you use a standard regressor and an orthogonalized modulation:
>> then you mean to model the BOLD response the standard way and the
>> remaining variance by the modulation .. now if you do not, well, as
>> you said above, they 'compete', and one voxel will have some variance
>> explained by the standard model and some other part of variance
>> explained by the modulator; the problem is that they are extremely
>> correlated (large overlap) and thus the unique part of variance will
>> be small .. i.e. it is likely (I think) that you end up having
>> nothing significant ..
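That overlap is easy to check directly; a toy example of how correlated
a main regressor and a non-orthogonalized, non-centred modulator end up
after convolution (invented HRF and onsets):

    import numpy as np
    from math import factorial

    dt = 0.1
    th = np.arange(0, 30, dt)
    hrf = th**5 * np.exp(-th) / factorial(5)     # toy single-gamma HRF

    n = 1000                                     # 100 s at dt = 0.1
    idx = (np.arange(10, 95, 10) / dt).astype(int)   # hypothetical onsets
    rng = np.random.default_rng(3)
    param = rng.uniform(0.5, 1.5, len(idx))      # positive, NOT mean-centred

    main = np.zeros(n); main[idx] = 1.0
    mod  = np.zeros(n); mod[idx] = param

    conv = lambda x: np.convolve(x, hrf)[:n]
    r = np.corrcoef(conv(main), conv(mod))[0, 1]
    print(r)                                     # typically > 0.9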
>> Hope this helps
Dr Cyril Pernet,
fMRI Lead Researcher SINAPSE
SFC Brain Imaging Research Center
Division of Clinical Neurosciences
University of Edinburgh
Western General Hospital
[log in to unmask]
The University of Edinburgh is a charitable body, registered in
Scotland, with registration number SC005336.