Dear Dan,

Thank you for pointing out that old entry in the FAQ. I have now changed it to be consistent with the emails on the list and the FSL Course material. We do not recommend orthogonalisation for confounds: it is better to rely on the GLM's naturally conservative behaviour of determining significance only from the unique part of each regressor's signal, not the shared part. Orthogonalisation attributes the shared signal entirely to one EV, which is not the conservative approach if you are uncertain whether the effect was caused by neuronal activation or by a confound. So please ignore the old FAQ entry.
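
For concreteness, here is a minimal numpy sketch of both points (this is not FSL code, and every variable name in it is made up for the illustration): the contrast on the EV of interest comes out exactly the same whether or not that EV is orthogonalised with respect to a confound, and orthogonalising simply hands the shared signal to the confound's parameter estimate.

import numpy as np

rng = np.random.default_rng(0)
n = 200

confound = rng.standard_normal(n)
ev = 0.6 * confound + rng.standard_normal(n)            # EV of interest, correlated with the confound
y = 1.0 * ev + 0.5 * confound + rng.standard_normal(n)  # simulated data

def cope_and_tstat(X, y, c):
    # Ordinary least squares fit; return the contrast estimate and its t-statistic.
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    resid = y - X @ beta
    dof = X.shape[0] - np.linalg.matrix_rank(X)
    sigma2 = resid @ resid / dof
    var_cope = sigma2 * c @ np.linalg.pinv(X.T @ X) @ c
    return c @ beta, c @ beta / np.sqrt(var_cope)

# Model 1: the EV as it is, alongside the confound.
X1 = np.column_stack([ev, confound])

# Model 2: the EV orthogonalised w.r.t. the confound, alongside the confound.
ev_orth = ev - confound * (confound @ ev) / (confound @ confound)
X2 = np.column_stack([ev_orth, confound])

c = np.array([1.0, 0.0])          # contrast on the EV only, excluding the confound

print(cope_and_tstat(X1, y, c))   # identical ...
print(cope_and_tstat(X2, y, c))   # ... to this, to numerical precision

# What does change is the confound's parameter estimate: in model 2 it
# absorbs the signal it shares with the EV, i.e. orthogonalisation
# quietly credits the shared part to the regressor you did not modify.
print(np.linalg.lstsq(X1, y, rcond=None)[0][1],
      np.linalg.lstsq(X2, y, rcond=None)[0][1])

Running this prints the same contrast estimate and t-statistic for both models; only the confound's beta differs, which is exactly the "shared signal attributed to one EV alone" effect described above.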

All the best,
	Mark



On 30 Oct 2013, at 17:32, Dan Ames <[log in to unmask]> wrote:

> Sorry, this looks like it may have become unlinked from the original posting (maybe because my response was to a pretty old thread?). In any case, here's the context:
> 
> Hi,
> 
> You don't have to worry about this.
> There is no orthogonalisation because the results that you
> get with contrasts that exclude the confounds (as they should)
> are *exactly* the same whether you orthogonalise the main
> EVs wrt the confounds or not.  Hence there is no reason to
> have this option as it changes nothing of interest.
> 
> All the best,
> 	Mark
> 
> 
> On 2 Jul 2010, at 20:44, Mona Maneshi wrote:
> 
>> Dear FSL Experts,
>> Hi,
>> 
>> I am using FEAT to do seed-based functional connectivity analysis. I
>> need to add some confounds to my GLM. Because of the large number of
>> confound regressors that I have, I need to use the "Add additional
>> confound EVs" option in the "Stats" tab.
>> 1) I am wondering if my regressor of interest defined in the "Full
>> model setup" (i.e. the averaged time course over the ROI) will be
>> automatically orthogonalized with respect to these confounds (I know
>> that there is an option to orthogonalize regressors defined in "Full
>> model setup" wrt each other, but what about the EVs wrt the confounds?).
>> 2) If not, how can I change the source code to be able to do so?
>> 
>> Thanks in advance and looking forward to hearing from you,
>> Mona