Kalina, Rik & Jesper,
> Would it help to include both covariates in the same model, and use
> a reduced F-test (eg a [0 1] F-contrast in the simple case of 2
> covariates) to see whether, though highly correlated, any additional
> variability is captured by the variable-mini-epoch model than the
> fixed delta function model?
and Jesper suggested a similar approach: first orthogonalizing
the second set of regressors before incorporating them into the
model.
To pin down notation, let's say X_1 are the design matrix columns of
your standard event-related regressors, while X_2 are the RT-related
regressors.
Rik is suggesting you fit [X_1 X_2] and test whether the additional
variability accounted for by X_2 is significant with an F-test. Jesper
is suggesting you create X_2.1, the X_2 columns orthogonalized with
respect to X_1, then fit [X_1 X_2.1] and test the X_2.1 columns with
an F-test.
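The two setups can be sketched in a few lines of numpy. This is a toy
illustration with made-up data, not SPM code; the names (X1, X2, extra_F)
are mine. It builds both designs and computes the extra-sum-of-squares F
for the second set of regressors in each:

```python
import numpy as np

# Toy data: X1 ~ event-related regressors, X2 ~ correlated RT regressors
rng = np.random.default_rng(0)
n = 100
X1 = rng.standard_normal((n, 2))
X2 = X1 @ rng.standard_normal((2, 2)) + 0.3 * rng.standard_normal((n, 2))
y = rng.standard_normal(n)

def rss(X):
    """Residual sum of squares after least-squares fit of X to y."""
    b = np.linalg.lstsq(X, y, rcond=None)[0]
    r = y - X @ b
    return r @ r

# Jesper's step: orthogonalize X2 with respect to X1 (X_2.1)
X2o = X2 - X1 @ np.linalg.lstsq(X1, X2, rcond=None)[0]

def extra_F(X_full, X_red, p_extra):
    """Extra-sum-of-squares F for the columns added to the reduced model."""
    df = n - X_full.shape[1]
    return ((rss(X_red) - rss(X_full)) / p_extra) / (rss(X_full) / df)

F_rik = extra_F(np.hstack([X1, X2]), X1, 2)      # Rik: [X_1 X_2]
F_jesper = extra_F(np.hstack([X1, X2o]), X1, 2)  # Jesper: [X_1 X_2.1]
print(np.isclose(F_rik, F_jesper))  # True: identical F either way
```

The equality follows because [X_1 X_2] and [X_1 X_2.1] span the same
column space, so the full-model residuals are identical.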
Both approaches will work and give the same F-images for comparing the
extra variability accounted for by the second set of regressors, since
the F-test inherently orthogonalizes the effects. Relative to just
fitting [X_1], however, Rik's approach will give different estimates
for the X_1 regressors, while Jesper's approach will give the same
estimates. While the variance estimates for Jesper's and Rik's models
will be the same, they will differ from those of the [X_1] model, so
Jesper's t-maps will differ from the [X_1] model's even though the
parameter estimates are the same.
I would probably use Rik's approach, just because it's easier and you
can test the other direction with the same model (does X_1 account for a
significant additional amount of variability over and above X_2).
The interpretation of the F-images described above is
straightforward; individual columns tested with a t-test are most
easily thought of as signed square roots of F-tests, testing the
additional variability accounted for by that column. Interpretation
of the parameter estimates associated with individual columns is not
straightforward.
If X_k is the kth column of [X_1 X_2], the associated parameter is
expressing the slope of the regression of the data (after
orthogonalization to all design matrix columns *but* X_k) on X_k
orthogonalized to all other columns. Only when your design matrix is
orthogonal (or mostly so) can you get easy interpretation of
parameters (as when you are only fitting either [X_1] or [X_2] alone).
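This is the Frisch-Waugh-Lovell result, and it is easy to verify on toy
data. A numpy sketch (illustrative, not SPM code): the fitted parameter
for column k of X equals the slope from regressing the data, residualized
against all the other columns, on X_k residualized against those same
columns.

```python
import numpy as np

# Toy design with three correlated-ish columns and known true slopes
rng = np.random.default_rng(2)
n = 80
X = rng.standard_normal((n, 3))
y = X @ np.array([2.0, -1.0, 0.5]) + rng.standard_normal(n)

beta = np.linalg.lstsq(X, y, rcond=None)[0]

def resid(A, b):
    """Residual of b after projection onto the columns of A."""
    return b - A @ np.linalg.lstsq(A, b, rcond=None)[0]

k = 1
others = np.delete(X, k, axis=1)       # all columns *but* X_k
xk_orth = resid(others, X[:, k])       # X_k orthogonalized to the rest
y_orth = resid(others, y)              # data orthogonalized to the rest
slope = (xk_orth @ y_orth) / (xk_orth @ xk_orth)
print(np.isclose(slope, beta[k]))      # True
```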
So, to summarize: you can either orthogonalize or not, though it will
have no impact on the F-images, and you can test for the additional
variability explained by one set of regressors in the presence of the
other.
Hopefully you will find one model is clearly superior to the other,
though it is feasible that [X_1] will fit better in some areas and
[X_2] will fit better in others.
-- Thomas Nichols -------------------- Department of Biostatistics
http://www.sph.umich.edu/~nichols University of Michigan
[log in to unmask] 1420 Washington Blvd
-------------------------------------- Ann Arbor, MI 48109-2029