Dear Rebecca,
> I had conducted a straightforward analysis for a group mean.
> However, I have now added an additional covariate and followed the
> instructions given in the manual.
>
>         EV1  EV2
> gpmean    1    0
> RT        0    1
> However, when I look at the results of the gpmean for this analysis
> they differ
> somewhat from my previous gp analysis.
> Why is this?
> I assumed that the additional covariate was only included in the RT
> results
> (EV2) and not in the gp mean analysis (EV1), but is this not the case?
Yes and no. You are only testing for EV2 when it is included in the
contrast, but its presence in the model will affect the estimates of
the other parameters.
In general one can say that the GLM is always "conservative" in what
it assigns to a given regressor. Let us say e.g. that your first
regressor signifies group (e.g. patients vs controls) and your second
regressor signifies age. Let us further say that all your controls are
young students and all your patients are old (just as an example).
If we now see a difference between the "groups", we cannot know whether
that effect is due to disease or to age. So if you test for group with a
[1 0] contrast the GLM will not give you significant results (because the
difference might be due to age). Likewise, if you test for age ([0 1]) the
GLM will again not give you strong results (because it might be due to
disease).
Hence, the inclusion of a new regressor will typically affect the
results for the other regressors in the model. It may not be as extreme
as in the sketch above, but as soon as the new regressor is non-
orthogonal to an existing regressor it will "steal" a little of the
effect from that regressor.
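As an illustration of that "stealing", here is a toy ordinary-least-squares simulation in NumPy (not FSL itself; the effect sizes, the near-perfect group/age correlation and the `t_stat` helper are all made up for the sketch):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 40

# Toy data: patients (1) are all old, controls (0) are all young,
# so the group EV and the age EV are almost perfectly correlated.
group = np.repeat([0.0, 1.0], n // 2)
age = 20.0 + 30.0 * group + rng.normal(0.0, 2.0, n)
y = 1.0 * group + 0.05 * age + rng.normal(0.0, 1.0, n)

def t_stat(X, y, contrast):
    """OLS fit of y on X; returns the t-value for the contrast c'beta."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    resid = y - X @ beta
    dof = X.shape[0] - X.shape[1]
    sigma2 = resid @ resid / dof
    c = np.asarray(contrast, dtype=float)
    return (c @ beta) / np.sqrt(sigma2 * c @ np.linalg.pinv(X.T @ X) @ c)

ones = np.ones(n)
X_group_only = np.column_stack([group, ones])
X_with_age = np.column_stack([group, age, ones])

t_small = t_stat(X_group_only, y, [1, 0])    # group tested without age in the model
t_full = t_stat(X_with_age, y, [1, 0, 0])    # group tested with age in the model
print(f"t(group), age omitted:  {t_small:.2f}")
print(f"t(group), age included: {t_full:.2f}")
```

Because the two EVs are nearly collinear, the model cannot decide which of them owns the group difference, and the t-value for the [1 0 0] contrast drops sharply once age is added.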
It can also happen that including another regressor gives you stronger
results. This happens for a regressor that is reasonably close to
orthogonal to the existing regressors and that explains a lot of the
variance in the data, i.e. reduces the "error". Since a t-statistic is
pretty much effect/error, reducing the error will increase the t-values.
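The second case can be sketched the same way (again a toy NumPy simulation; the nuisance EV is hypothetical and is orthogonalised by hand so it cannot take anything from the group effect):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 40

group = np.repeat([0.0, 1.0], n // 2)
ones = np.ones(n)
X_small = np.column_stack([group, ones])

# Hypothetical nuisance EV (e.g. a physiological recording), projected
# onto the orthogonal complement of the existing design so that it is
# exactly orthogonal to both group and mean.
nuisance = rng.normal(0.0, 1.0, n)
nuisance -= X_small @ np.linalg.lstsq(X_small, nuisance, rcond=None)[0]

# The nuisance signal dominates the noise in the data.
y = 1.0 * group + 3.0 * nuisance + rng.normal(0.0, 0.5, n)

def t_stat(X, y, contrast):
    """OLS fit of y on X; returns the t-value for the contrast c'beta."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    resid = y - X @ beta
    dof = X.shape[0] - X.shape[1]
    sigma2 = resid @ resid / dof
    c = np.asarray(contrast, dtype=float)
    return (c @ beta) / np.sqrt(sigma2 * c @ np.linalg.pinv(X.T @ X) @ c)

t_without = t_stat(X_small, y, [1, 0])
t_with = t_stat(np.column_stack([group, nuisance, ones]), y, [1, 0, 0])
print(f"t(group), nuisance omitted:  {t_without:.2f}")
print(f"t(group), nuisance included: {t_with:.2f}")
```

Here the estimate of the group effect itself is unchanged (the new EV is orthogonal to the old design), but the residual error shrinks, so the t-value for the group contrast grows.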
Good luck,
Jesper