Souheil,
> I've been doing some reading (I know always a dangerous thing) and I've
> got myself stuck on what seems to be a question of style:
Yep, it is really a question of style.
> De-meaning the design matrix.
>
> It seems to me, that people who come from psychology are really hung up
> on this point, as opposed to the engineers who are just doing time
> series estimation.
>
> De-meaning doesn't change the parameter estimates, it doesn't change the
> estimate of the parameter variance, so why bother?
> In fact, it seems to me that it makes it harder to ask straightforward
> questions about the signal's properties, like "what is the percent
> signal change above baseline in an event related experiment".
If you demean the data as a preprocessing step, then you also need to
demean the design matrix as a preprocessing step, to reflect the fact that
the data now have zero mean. If you demean neither the data nor the design
matrix, then you have to include a constant regressor in the design matrix
to model the mean instead.
In either approach the mean will be modelled out, making the parameter
estimates approximately the same (as you have noted). Hence, either
approach suffers from exactly the same difficulties when interpreting
questions such as "what is the percent signal change above baseline in an
event-related experiment".
Cheers, Mark Woolrich.