JiscMail - Email discussion lists for the UK Education and Research communities

FSL Archives - FSL@JISCMAIL.AC.UK



Subject:

Re: orthogonalization

From:

Anderson Winkler <[log in to unmask]>

Reply-To:

FSL - FMRIB's Software Library <[log in to unmask]>

Date:

Mon, 20 Dec 2010 19:55:20 -0500

Content-Type:

text/plain

Parts/Attachments:

text/plain (120 lines)

Mark,

I wouldn't be so strict about it... There is a limit up to which it is 
still possible to see a statistically significant response to a 
contrast when the signal in the data matches one EV over and above what 
is taken out by the other, correlated one. In other words, if EV1 is 
strongly correlated with EV2, only a small margin remains in which an 
effect can be detected as significant for EV1 but not for EV2. So the 
higher the correlation, the more conservative a non-orthogonalised 
approach becomes, and the smaller the error must be for anything 
significant to be found at all. One may always argue that if the 
correlation is that high, the redundant regressor should simply be 
dropped and the analysis run without orthogonalisation. Yes, that is 
certainly possible, and maybe even a sensible choice, as is using an 
F-test for the overall effect. It all depends on what research is being 
done and what its purpose is.
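The conservativeness described above can be seen in a small simulation. This is a plain-numpy sketch, not FSL/FEAT code; the toy design, noise level, and all variable names are illustrative assumptions:

```python
# Toy illustration (not FSL code): when two EVs are strongly correlated,
# the t-statistic for a contrast on one of them shrinks, because only
# signal *over and above* the other EV counts toward it.
import numpy as np

rng = np.random.default_rng(0)
n = 200
ev1 = rng.standard_normal(n)
ev2_corr = 0.9 * ev1 + np.sqrt(1 - 0.81) * rng.standard_normal(n)  # r ~ 0.9
ev2_orth = rng.standard_normal(n)                                  # r ~ 0

def t_stat(y, X, c):
    """Ordinary least-squares t-statistic for contrast vector c."""
    beta = np.linalg.pinv(X) @ y
    resid = y - X @ beta
    sigma2 = resid @ resid / (len(y) - X.shape[1])
    var_c = sigma2 * (c @ np.linalg.pinv(X.T @ X) @ c)
    return (c @ beta) / np.sqrt(var_c)

y = ev1 + 0.3 * rng.standard_normal(n)   # true signal follows EV1 only
c = np.array([1.0, 0.0])                 # contrast on EV1

t_corr = t_stat(y, np.column_stack([ev1, ev2_corr]), c)
t_orth = t_stat(y, np.column_stack([ev1, ev2_orth]), c)
assert abs(t_corr) < abs(t_orth)   # correlated design is more conservative
```

An F-test across both EVs, as suggested in the thread, would still recover the joint effect even when neither individual t-contrast survives.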

Though a bit off-thread, this points to another topic: conservativeness 
may not always be the way to go, and what is good for a strong 
scientific paper may not apply everywhere, such as in internal reports, 
discussions, evaluating trends, or gathering some (weak) evidence to 
justify more well-controlled, expensive studies later (epidemiologists 
are good at this). The same holds in clinical settings: MR scanner 
manufacturers, for instance, sell fMRI packages that promise to 
identify "eloquent" cortex for pre-surgical planning, say, to resect a 
tumor. These packages often run simple on/off block designs analysed 
with t-tests, offer no correction for multiple testing, and leave the 
t-statistic threshold as a free parameter for the 
radiologist/neurosurgeon to explore. While it is almost blasphemy for 
us to think of such a thing in research settings, when you have a 
single patient, debilitated and uncooperative, and the objective is to 
save as much tissue as possible while keeping wide, safe surgical 
margins, it makes sense to be much more liberal than if the results 
were for a paper to be submitted to HBM or NeuroImage. OK, FSL is not 
for clinical use, but this extreme example just highlights that 
different labs have different scenarios and objectives and, as such, 
different needs in terms of conservativeness/liberality. I wonder 
whether strong statements to the list, that this or that technique 
should not be used or makes no sense on its own grounds, may end up 
closing the door to valid methods. Even for research, they may hand 
reviewers authoritative-sounding arguments for criticising or rejecting 
papers that are, in reality, fine.

I'm now even afraid of pronouncing the O-word aloud... :) By the way, 
you were the guys who added it to FSL... :)

All the best!

Anderson

PS: We often think in terms of the GLM with least squares, which even 
allows easy matrix computation using the pseudo-inverse (so that 
ill-conditioned designs remain estimable). Thinking instead in terms of 
maximum likelihood, correlated EVs could easily cause convergence 
failures, so orthogonalisation and its interpretation become even more 
important to think about; otherwise there may be no estimates at all.
PS2: I will try to stop mixing s and z in spellings... sorry if anyone 
was bothered by the previous email...
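The pseudo-inverse point in the PS can be sketched as follows. This is a hypothetical numpy example, not FSL code; the perfectly collinear design is an extreme stand-in for "ill-conditioned":

```python
# Sketch: with the Moore-Penrose pseudo-inverse, even a rank-deficient
# design still yields a (minimum-norm) least-squares fit, although the
# individual betas are then not uniquely interpretable.
import numpy as np

rng = np.random.default_rng(1)
n = 50
ev1 = rng.standard_normal(n)
X = np.column_stack([ev1, 2.0 * ev1])     # perfectly collinear: rank 1
y = ev1 + 0.1 * rng.standard_normal(n)

# The normal equations break down: X'X is singular ...
assert np.linalg.matrix_rank(X.T @ X) < X.shape[1]

# ... but pinv still returns the minimum-norm solution, and the fitted
# values (the projection of y onto the design space) are well defined.
beta = np.linalg.pinv(X) @ y
fitted = X @ beta
assert np.mean((y - fitted) ** 2) < np.var(y)
```

Only the fitted projection is unique here; how the joint effect is split between the two columns is arbitrary, which is exactly the interpretation problem discussed in this thread.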


On 12/20/2010 05:15 PM, Mark Jenkinson wrote:
> Hi,
>
> I have an opinion.
>
> I agree with Jeanette much more - there is very, very, very rarely a good reason to use orthogonalisation. This is because the GLM automatically takes correlation into account in the most natural and conservative way: if two (or more) EVs are strongly correlated, then you will only see a statistically significant response to a contrast on one of them if the signal in the data matches it *over and above* what is taken out by the other (correlated) EVs. That is, if the signal can be explained by the remaining EVs, then the GLM will assign it (for that contrast only) to the EVs that do not form part of the contrast. In this way, signal that is ambiguous (it could be due to either EV) cannot be the cause of a significant result. This is the safe and conservative way to do statistics and really should be accepted as such. It is not necessary to do orthogonalisation to get this - it happens automatically.
>
> Also, you don't need to worry about this making all the results go away if there is a strong enough joint signal in the data. You can still find out about joint signals by doing an F-test across any set of EVs that you think might share signal (due to correlation); this will give a statistically significant result if the signal in the data matches any of the EVs (technically, t-contrasts) included in that F-test contrast - including any signal in the joint space (the part that correlates strongly with both EVs). So, because you normally get the conservative result from a t-contrast, but can find out about shared signals using an F-contrast - all without orthogonalisation - I see no need for orthogonalisation in general.
>
> All that orthogonalisation can add here is the ability to force the joint signal to be interpreted (without any support from the data or the model) as being due to one EV and not the other.  If your original model (and therefore your experimental design) is incapable of determining the true cause of the signal then it is very artificial to impose something via orthogonalisation. This is therefore the complete opposite of the conservative approach and really not a good idea to do in the vast majority of cases.
>
> The only situation I have come across where it makes some sense to orthogonalise is when you have a higher-level design that includes highly correlated EVs, but where you can identify some form of causal relationship between them.  For example, disease duration and some measure of disability.  It is likely that the disease (and hence its duration) is the cause of the disability, and the two would be highly correlated.  So it might be that you want to orthogonalise here so that you assign all the main effect to the duration EV and then let the disability EV capture the remaining change over and above duration. This then allows the contrasts on the individual EVs to be interpreted simply. However, it isn't really much more informative than doing the two t-contrasts and then an F-contrast (possibly with contrast masking to separate the positive and negative correlations in the F-contrast, which itself is unsigned).  So even in this case the argument for orthogonalisation is weak.
>
> As for cars - they are also safe without indicators if you can interpret what everyone else on the road is going to do - it is just safer with indicators as interpretation is quite tricky!  :)
>
> And now finally, to Jeanette's original question - I think (and Steve or others can correct me if I'm wrong) that the orthogonalisation in FEAT is done one by one, so that if you do x1 wrt x2 and x2 wrt x1, then it does the second after the first one has been done already, meaning that it uses the new EVs, not the original ones, and so these become orthogonal after the first step and nothing happens in the second step.  It probably isn't quite what you'd want in this condition theoretically, but I honestly cannot ever see anyone needing such a set of orthogonalisations as in the examples given, so in that case I think it is not really a problem.
>
> All the best,
> 	Mark
>
>
> On 20 Dec 2010, at 20:21, Anderson Winkler wrote:
>
>> Hi Jeanette,
>>
>> I think that, ideally, designs should always be well constructed and correlated regressors should never arise. As that isn't a realistic scenario, I think orthogonalisation may be a reasonable choice when one is willing to shift the effect explained by one regressor to another, being fully aware of what that means and clearly disclosing in reports/papers that it was done and how. My take is that orthogonalisation should be used parsimoniously, yet without hesitation if the correlation is strong and one is willing to accept that one EV should absolutely take precedence over another.
>>
>> A difficulty in interpretation would exist anyway without orthogonalisation, as it would be hard to tell whether the effect of interest is being split across multiple correlated regressors, reducing the significance of all of them, maybe to the point of missing a legitimate effect. Orthogonalisation is thus an ad hoc procedure which, although not to be used to "correct" poor designs, can be of some help when there is no other way to obtain a more efficient one.
>>
>> About misusing the feature and going crazy orthogonalising everything with respect to everything else... I think it's a sensible concern, and the fact that it's an easy-to-use feature in FSL shouldn't prevent people from being better educated about it and using it only cautiously. Cars are safe, as long as you drive safely...
>>
>> I look forward to hearing others' opinions on this.
>>
>> All the best,
>>
>> Anderson
>>
>>
>> On 12/20/2010 01:16 PM, Jeanette Mumford wrote:
>>> Hi!
>>>
>>> I'll start off by saying I think orthogonalization of regressors makes
>>> absolutely no sense in almost all cases. Regardless, I'm curious as to
>>> how FEAT handles cases where people go a little wild and start
>>> orthogonalizing every EV with respect to all other EVs.
>>>
>>> For example, my (rusty) geometry tells me that if I have two
>>> regressors, x1 and x2, and I then orthogonalize each one with
>>> respect to the other to get x1_wrt_x2 and x2_wrt_x1, then
>>> cor(x1, x2) = -1 * cor(x1_wrt_x2, x2_wrt_x1).  When I asked FEAT
>>> to orthogonalize each of the two regressors with respect to the
>>> other, it actually didn't alter x2.  Thus, the correlation in the
>>> orthogonalized FEAT model is 0.
>>>
>>> Further, if somebody had 3 or more regressors and (wrongly)
>>> orthogonalized everything with respect to everything else, what does
>>> FEAT do?  It seems with all the orthogonalization it would be very
>>> difficult to correctly interpret hypothesis tests.  Is there a
>>> meaningful interpretation?
>>>
>>> Thanks!
>>> Jeanette
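Mark's description of FEAT's one-by-one orthogonalisation, and Jeanette's observation that x2 comes back unchanged, can be reproduced in a few lines. This is an illustrative numpy sketch, not FEAT source code; the Gram-Schmidt helper and all names are hypothetical:

```python
# Sketch of sequential orthogonalisation: each EV is orthogonalised
# against the *already updated* versions of the others, one step at a time.
import numpy as np

def orth_wrt(a, b):
    """Remove from a its projection onto b (one Gram-Schmidt step)."""
    return a - (a @ b) / (b @ b) * b

rng = np.random.default_rng(2)
x1 = rng.standard_normal(100)
x2 = 0.7 * x1 + rng.standard_normal(100)   # correlated EVs

x1_new = orth_wrt(x1, x2)      # step 1: x1 wrt x2
x2_new = orth_wrt(x2, x1_new)  # step 2: x2 wrt the NEW x1

# Step 2 is a no-op: x2 is already orthogonal to the new x1,
# so x2 is unchanged and the pair ends up orthogonal.
assert np.allclose(x2_new, x2)
assert abs(x1_new @ x2_new) < 1e-8   # zero inner product
```

Because step 1 already made the new x1 orthogonal to x2, step 2 removes (numerically) nothing, which matches the unchanged x2 and zero correlation Jeanette saw in the orthogonalised FEAT model.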


JiscMail is a Jisc service.

View our service policies at https://www.jiscmail.ac.uk/policyandsecurity/ and Jisc's privacy policy at https://www.jisc.ac.uk/website/privacy-notice

For help and support help@jisc.ac.uk

Secured by F-Secure Anti-Virus CataList Email List Search Powered by the LISTSERV Email List Manager