Kalina,
Sorry for the delay...
> Thanks so much for your comments and advice on this. But if I could, let me
> further ask for your opinion on the following related questions --
>
> First, can we safely assume that in this case, testing if X_2 accounts for
> significant additional variability is identical to testing if X_2
> by itself provides a regression function with a significantly better line
> fit than the regression function in X_1 alone? In other words,
> for voxels where the reduced F-test [0 1] is significant, have we
> demonstrated a lack of fit for the first model (the first model is [X_1]
> alone)?
Yes, if you use a [0 1] F-test on the model [X_1 X_2] and you get a
significant result, you have demonstrated lack of fit of the model
[X_1].
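To make the mechanics concrete, here is a small numpy sketch of the extra-sum-of-squares F-test described above: fit the reduced model [X_1] and the full model [X_1 X_2], and compare residual sums of squares. The regressors and data here are simulated stand-ins, not real design matrices.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100

# Hypothetical regressors standing in for X_1 and X_2 (simulated, for illustration).
X1 = rng.standard_normal((n, 1))
X2 = rng.standard_normal((n, 1))
y = X1[:, 0] + 0.5 * X2[:, 0] + rng.standard_normal(n)

def rss(X, y):
    """Residual sum of squares from an OLS fit of y on X."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ beta
    return r @ r

X_full = np.hstack([X1, X2])
rss_reduced = rss(X1, y)       # reduced model [X_1]
rss_full = rss(X_full, y)      # full model [X_1 X_2]

# Extra-sum-of-squares F statistic for the [0 1] test:
# does X_2 explain significant additional variability beyond X_1?
df_extra = X2.shape[1]
df_resid = n - X_full.shape[1]
F = ((rss_reduced - rss_full) / df_extra) / (rss_full / df_resid)
```

Comparing F against an F(df_extra, df_resid) reference distribution then gives the p-value; a significant result is exactly the demonstrated lack of fit of [X_1].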
> Also, if this assumption is true (seems likely to me, but please correct
> me on it) for voxels where the reverse [1 0] F-test is significant, we
> would have demonstrated a lack of fit for the second model (the second
> model is based on [X_2] alone).
Yes, just as before.
> The second question is a bit more conceptual and assumes that the above
> assumption is correct.
>
> Suppose now after comparing the two models' fits, we go back to the
> original models to make inferences about the parameter estimates (I think
> the consensus was that's not possible from the [Y = X_1 X_2] model).
>
> Let's say we have two types of regions:
>
> type A, for which we have found a lack of fit for model_1 [Y=X_1]
> through a significant partial F-test [0 1], and
>
> type B, for which we have found a lack of fit for model_2 [Y=X_2]
> through a significant partial F-test [1 0]
>
> Strictly speaking, we cannot draw conclusions from a model for which we
> have demonstrated a significant lack of fit. Would that imply that the
> proper thing to do would be to use model_1 to draw conclusions about the
> parameter estimates for regions of type A, and to use model_2 to draw
> conclusions about regions of type B? (keeping in mind the different
> interpretation that the parameter estimates would have in the two
> models, e.g., in my case, model_1 models what can be described as
> transient responses at the onset of each trial, while model_2 models
> sustained responses lasting throughout the working cycle).
Well, you've cut to the crux of the problem... getting interpretable
results when there isn't universal support for one model or the other.
Here is my take: If you find that you need both models ([X_1 X_2]) to
fit the data well in general, I would pursue the orthogonalization
strategy suggested before (by Jesper).
I would fit two models [X_1 X_2.1] and [X_1.2 X_2], where X_i.j is
model matrix i orthogonalized with respect to model j. The fitted
values and residual variance of these two models will be identical,
but they give you opportunity to get interpretable t-images for each
model. The [X_1 X_2.1] model will give you t-images for model one,
interpretable as if they were from [X_1] *but* you have the added
advantage that any additional variability that model 2 can account for
will be fitted and removed from the residual error. Likewise, the
model [X_1.2 X_2] gives you interpretable model 2 ([X_2]) images, but
you're soaking up all the experimental variability possible.
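A quick numpy sketch of that strategy (simulated data, hypothetical regressors): orthogonalize X_2 with respect to X_1 by regressing its columns on X_1 and keeping the residuals, then check that [X_1 X_2.1], [X_1.2 X_2], and [X_1 X_2] all produce identical fitted values, since they span the same column space.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100

# Simulated, correlated design partitions (stand-ins for the real X_1, X_2).
X1 = rng.standard_normal((n, 2))
X2 = X1 @ rng.standard_normal((2, 2)) + 0.3 * rng.standard_normal((n, 2))
y = rng.standard_normal(n)

def orth(Xi, Xj):
    """Xi orthogonalized with respect to Xj: residuals of Xi regressed on Xj."""
    beta, *_ = np.linalg.lstsq(Xj, Xi, rcond=None)
    return Xi - Xj @ beta

def fitted(X, y):
    """Fitted values from an OLS fit of y on X."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return X @ beta

X2_1 = orth(X2, X1)   # X_2.1: X_2 orthogonalized w.r.t. X_1
X1_2 = orth(X1, X2)   # X_1.2: X_1 orthogonalized w.r.t. X_2

yhat_a = fitted(np.hstack([X1, X2_1]), y)
yhat_b = fitted(np.hstack([X1_2, X2]), y)
yhat_full = fitted(np.hstack([X1, X2]), y)

# Same column space => same projection, so fitted values
# (and hence residual variance) agree across all three models.
assert np.allclose(yhat_a, yhat_b)
assert np.allclose(yhat_a, yhat_full)
```

The point is that orthogonalization reallocates the shared variability, not the fit itself: each model attributes the overlap to one partition, which is what makes its t-images interpretable.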
There is a short note on this very issue...
Ambiguous Results in Functional Neuroimaging Data Analysis Due
to Covariate Correlation
Alexandre Andrade, Anne-Lise Paradis, Stéphanie Rouquette,
Jean-Baptiste Poline
NeuroImage, Vol. 10, No. 4, Oct 1999, pp. 483-486
JB et al may want to say more.
Hope this helps.
-Tom