Hello Tom,

Thank you so much for your input.

1) Wouldn't one expect the covariance estimates to be more reliable?

The covariance estimates are based on the residuals, and if the models are different you'll have different residuals.

Ah... I thought (silly me) that the covariance was estimated from the data alone, independently of the model. So I assumed that the different covariance estimates were a consequence of poor estimation, since I got two very similar W's and one with a different offset (see the image in my original e-mail).
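
Just to convince myself, I tried a quick toy check in plain Python/NumPy (nothing to do with SPM's actual ReML machinery, and the regressors are invented): fitting two different design matrices to the same AR(1) data gives different residual autocorrelation estimates, which I suppose is exactly your point.

import numpy as np

rng = np.random.default_rng(0)
n = 200

# AR(1) noise with lag-1 autocorrelation 0.3
e = np.zeros(n)
for t in range(1, n):
    e[t] = 0.3 * e[t - 1] + rng.standard_normal()

# A boxcar "task" regressor and a crude stand-in for its temporal derivative
box = ((np.arange(n) // 20) % 2).astype(float)
deriv = np.diff(box, prepend=0.0)
y = 2.0 * box + e

def lag1_of_residuals(X, y):
    """OLS fit, then lag-1 autocorrelation of the residuals."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ beta
    return np.corrcoef(r[:-1], r[1:])[0, 1]

X1 = np.column_stack([np.ones(n), box])          # model without the derivative
X2 = np.column_stack([np.ones(n), box, deriv])   # model with the derivative

print(lag1_of_residuals(X1, y))  # the two models yield (slightly)
print(lag1_of_residuals(X2, y))  # different autocorrelation estimates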

3) Is it OK to use the two-out-of-three criterion to select a more correct whitening matrix?

I'm not sure what criterion you're referring to. To use R^2 or Extra Sums of Squares for model comparisons, the models must be nested. As you've discovered, the different estimated W's mean that the models are not nested. Tools for non-nested model comparison include AIC and BIC.

It's a very unscientific criterion, actually. I was thinking the W matrix should be the same for all models, so when two of the three models gave very similar W's, I guessed that the "true" W would be the one that came up twice! I now understand I was completely wrong...

However, I don't quite understand why the models aren't nested. The "whole" design matrix includes the movement parameters as covariates. I read somewhere on the list that the order of the regressors in the design matrix isn't interchangeable, although it is not obvious to me why. If I hadn't included the movement parameters in the model, would the models be nested then?
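
In the meantime, here is the kind of AIC/BIC comparison I understand you to be suggesting, sketched outside SPM in plain Python (the designs and the extra nuisance regressor are invented, and I'm ignoring the whitening question for the moment):

import numpy as np

def aic_bic(X, y):
    """AIC and BIC from the Gaussian log-likelihood of an OLS fit.
    k = number of regression coefficients + 1 for the noise variance."""
    n, p = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    loglik = -0.5 * n * (np.log(2 * np.pi * rss / n) + 1)
    k = p + 1
    return 2 * k - 2 * loglik, k * np.log(n) - 2 * loglik

# Toy data and two non-nested designs for the same y
rng = np.random.default_rng(1)
n = 200
t = np.arange(n)
box = ((t // 20) % 2).astype(float)
y = 2.0 * box + rng.standard_normal(n)
X_a = np.column_stack([np.ones(n), box, np.diff(box, prepend=0.0)])    # derivative-like column
X_b = np.column_stack([np.ones(n), box, np.sin(2 * np.pi * t / 50)])   # some other nuisance column

print("model A:", aic_bic(X_a, y))   # lower AIC/BIC = preferred model
print("model B:", aic_bic(X_b, y))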

Perhaps a simpler way forward is, just for the purposes of model selection, to turn off autocorrelation modeling, pick the best model, and then run it again with autocorrelation modeling.

Indeed. Actually, I'm using SPMd to assess model validity and R^2 to assess model fit. When the model validity is not degraded by adding a regressor (the time derivative, for instance), I go on to compare fits. I just thought you would like to know how useful SPMd is proving to be!
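
For my own notes, this is the two-step procedure as I understand it, sketched in plain Python with a simple Cochrane-Orcutt style AR(1) whitening standing in for SPM's ReML estimate of V (the helper names are mine):

import numpy as np

def ols(X, y):
    """Plain OLS fit; returns the coefficients and the residuals."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta, y - X @ beta

def refit_with_ar1_whitening(X, y):
    """Step 2: estimate rho from the OLS residuals, apply a Cochrane-Orcutt
    style transform (drops the first scan), and refit the chosen design."""
    _, r = ols(X, y)
    rho = np.corrcoef(r[:-1], r[1:])[0, 1]
    Wy = y[1:] - rho * y[:-1]          # whitened data
    WX = X[1:] - rho * X[:-1]          # whitened design
    beta_w, _ = ols(WX, Wy)
    return rho, beta_w

# Step 1 would be: compare the candidate designs with plain OLS (autocorrelation
# modeling off), pick the winner, then call refit_with_ar1_whitening on that design.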

Thanks again.

Rute