Not that I'm aware of; you must type in the names yourself.

Cheers,
Jeanette


On Mon, May 13, 2013 at 12:04 PM, Iwo Bohr <[log in to unmask]> wrote:
OK, it's maybe not as clear as with collinearity, but I'm getting there with the concept of the overall mean as well.
Thanks for all your explanations.
 
Also, a technical detail question: is it possible in the FEAT GUI to paste in the names of a series of covariates, as is possible with their values?
 
Iwo

From: Jeanette Mumford <[log in to unmask]>
To: [log in to unmask]
Sent: Monday, 13 May 2013, 17:27

Subject: Re: [FSL] FEAT: modelling covariates of interest at 2nd level

Hi,

Glad you understand collinearity.  

Without the overall mean (column of 1s) in your model, you're assuming that the mean of your dependent variable is exactly 0, which isn't likely, and so we include the column of 1s.  This is almost always done by default in statistical packages like R, SPSS, SAS, etc.  If you mean center all of your continuous covariates the interpretation of the parameter associated with the column of 1s is the mean of your dependent variable.  If you do not mean center your continuous covariates, the interpretation is the mean of your dependent variable when all of your continuous covariates are set to 0.
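
If it helps to see this concretely, here is a tiny numpy sketch (illustration only, not FSL code; the covariate name and the numbers are made up):

import numpy as np

rng = np.random.default_rng(0)
n = 20
age = rng.uniform(20, 60, n)             # hypothetical continuous covariate
y = 5 + 0.1 * age + rng.normal(0, 1, n)  # hypothetical dependent variable

# Column of 1s plus the mean-centered covariate:
X_c = np.column_stack([np.ones(n), age - age.mean()])
b_c, *_ = np.linalg.lstsq(X_c, y, rcond=None)
print(b_c[0], y.mean())   # parameter for the 1s column equals the mean of y

# Same design without centering:
X_r = np.column_stack([np.ones(n), age])
b_r, *_ = np.linalg.lstsq(X_r, y, rcond=None)
print(b_r[0])             # now it's the (extrapolated) mean of y at age == 0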

IMPORTANTLY, I'm referring to a FEAT analysis.  Randomise implicitly models the mean when you use the -D option, so we don't typically put the column of 1s in the design matrix for randomise.  This is a very special case.

Cheers,
Jeanette


On Mon, May 13, 2013 at 10:58 AM, Iwo Bohr <[log in to unmask]> wrote:
Thank you Jeanette,
 
While digging further into the question "to orthogonalize or not to orthogonalize", I came across an interesting exchange on the topic with your involvement:
 
Actually, Mark Jenkinson mentioned in it that possibly (though not necessarily) the only situation where orthogonalization could be justified is when related covariates are used at a higher level:
 
This applies to my situation, but even in this case there is a workaround to avoid the generally discouraged orthogonalization, as Mark proposes:
 
"However, it isn't really much more informative that doing the two t-contrasts and then an F-contrast (possibly with contrast masking to separate the positive and negative correlations in the F-contrast, which itself is unsigned). So even in this case it is a weak argument for orthogonalisation."
 
After your email and Mark's old remarks, I think the problem of orthogonalization is more or less clarified for me. However, I'm still struggling a bit with the concept of the overall mean and including it in the model, especially at the 2nd level.
In my case I include a number of covariates in addition to the BOLD statistics from the 1st level. What is the meaning of this overall mean? I can see that, as the name suggests, it refers to variability attributable to all regressors, i.e. BOLD statistics AND covariates? Why is it necessary to include it in the model?
 
Iwo
 
From: Jeanette Mumford <[log in to unmask]>
To: [log in to unmask]
Sent: Monday, 13 May 2013, 16:03
Subject: Re: [FSL] FEAT: modelling covariates of interest at 2nd level

Hi, see below


On Mon, May 13, 2013 at 9:34 AM, Iwo Bohr <[log in to unmask]> wrote:
Dear FSL experts,
I know that this topic has already been dealt with (partially) by Steve Smith on this list (Item #8709 (1 Sep 2006 04:59) - Re: 2nd level covariate of interest), but I would like to double-check that I got his response right.
I would like to include different measures of lesion severity as covariates of interest in a 2nd level analysis (one group).
So in my model I should:
1.      include a group mean regressor (column of ones) alongside EACH covariate, but WITHOUT demeaning the actual scores?

Demeaning won't hurt you and is only necessary if you're also looking at the overall mean (interpreting the column of 1s).  Basically, you can extend this example for your design.
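
If you want to convince yourself, a quick numpy sketch with made-up numbers shows that demeaning only changes the estimate for the column of 1s; the covariate's own estimate is identical either way:

import numpy as np

rng = np.random.default_rng(1)
n = 15
cov = rng.normal(10, 2, n)           # hypothetical severity score
y = 0.5 * cov + rng.normal(0, 1, n)  # hypothetical lower-level COPEs

ones = np.ones(n)
b_raw, *_ = np.linalg.lstsq(np.column_stack([ones, cov]), y, rcond=None)
b_dm, *_ = np.linalg.lstsq(np.column_stack([ones, cov - cov.mean()]), y, rcond=None)
print(b_raw[1], b_dm[1])  # same slope with or without demeaning
print(b_raw[0], b_dm[0])  # only the mean column's estimate differs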

 
2.      in addition: orthogonalize the covariates with respect to each other? It’s important since I would expect quite some degree of correlation between them, since they measure similar things (not identical, though; I want to know which, if any, is the best predictor of BOLD activations)
 
The GLM's p-values automatically reflect the unique variability due to each regressor.  Orthogonalizing basically defeats the purpose of adding additional regressors.  Typically you add additional regressors to adjust your analysis for those effects.  When you orthogonalize you're removing the ability of one regressor's inference to be adjusted for another.  It makes a strong assumption that the shared variability truly belongs to one EV over the other and in 99% of the cases there's no way to make that argument.  Just let the GLM naturally parse out the unique portions for each covariate without orthogonalizing.
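
Here's a quick numpy demonstration (made-up data, not an FSL analysis) of what orthogonalization actually does to the estimates:

import numpy as np

rng = np.random.default_rng(2)
n = 30
ev1 = rng.normal(size=n)
ev2 = 0.7 * ev1 + rng.normal(scale=0.5, size=n)  # correlated with ev1
y = ev1 + ev2 + rng.normal(size=n)

ones = np.ones(n)
b, *_ = np.linalg.lstsq(np.column_stack([ones, ev1, ev2]), y, rcond=None)

# Orthogonalize ev2 with respect to ev1 (remove ev1's projection):
ev2_o = ev2 - ev1 * (ev1 @ ev2) / (ev1 @ ev1)
bo, *_ = np.linalg.lstsq(np.column_stack([ones, ev1, ev2_o]), y, rcond=None)

print(b[1], bo[1])  # ev1's estimate changes: it absorbs all the shared variance
print(b[2], bo[2])  # ev2's estimate is unchanged

So orthogonalizing doesn't "help" ev2 at all; it only silently hands the shared variability to ev1, which is exactly the strong assumption described above.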
 
3.      orthogonalize each covariate wrt the principal regressor (first-level BOLD statistics)?
I'm not sure what you mean here, but as I said above, orthogonalization is not necessary.  Unfortunately, if your EVs are highly collinear, it is what it is, and orthogonalization is not the answer.

Hope that helps,
Jeanette
 
Many thanks in advance, 
Iwo