Alle Meije,
> One last question I'm afraid, to check that I understand all this:
>
> If I run an analysis with two basis functions (say, the 'canonical' HRF
> and its time derivative (TD)) and I use a T-test, does this actually
> amount to doing a T-test with only one basis function (i.e., HRF + TD)?
> This means that I cannot account for onset delays by including TD *if I
> use the T-test*, doesn't it?
You can use a T-test with a temporal derivative. The temporal
derivative predictor simply soaks up a nuisance source of variation
(variable delay). The contrast [1 0 ...] tests, as always, the
magnitude of the 'canonical' effect; the temporal derivative merely
reduces the residual error and hence may increase the sensitivity.
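To make this concrete, here is a minimal numerical sketch (plain NumPy, not SPM itself; the two regressors and the data are synthetic stand-ins for the HRF-convolved predictors): the [1 0 0] T-contrast recovers the canonical magnitude even though a derivative column is in the model.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
t = np.linspace(0, 4 * np.pi, n)

# Synthetic 'canonical' and 'temporal derivative' regressors (placeholders
# for the true HRF-convolved predictors).
canonical = np.sin(t)
deriv = np.cos(t)  # derivative of the canonical shape

X = np.column_stack([canonical, deriv, np.ones(n)])  # design matrix
beta_true = np.array([2.0, 0.5, 1.0])                # canonical effect = 2.0
y = X @ beta_true + rng.normal(scale=0.5, size=n)

beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)

# T-contrast [1 0 0]: tests the magnitude of the canonical effect only.
c = np.array([1.0, 0.0, 0.0])
resid = y - X @ beta_hat
dof = n - X.shape[1]
sigma2 = resid @ resid / dof
var_c = sigma2 * (c @ np.linalg.inv(X.T @ X) @ c)
t_stat = (c @ beta_hat) / np.sqrt(var_c)
print(beta_hat[0], t_stat)
```

The estimated canonical coefficient stays close to its true value of 2.0, and the T-statistic on it is large; the derivative column has only reduced the residual variance.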
(You *could* do an F contrast with [1 0 ...; 0 1 ...], testing for any
experimentally related variation, but then you won't know the sign of
the effect.)
As an aside, I should take this opportunity to note how nonsensical a
[1 1 ...] contrast is with a temporal derivative. That contrast tests
the average of the canonical and derivative coefficients, which is not
a useful quantity.
> Tom, the time derivative integrates to 0. If I use TD as a column in
> my design matrix, and I use a T-test, would this mean that the TD
> column in my design matrix consumes all the explained (co)variance?
It's not that the temporal derivative integrates to zero, but rather
that the temporal derivative predictor is orthogonalized with respect
to the canonical predictor. Because of this orthogonalization there
is no risk of the TD predictor soaking up canonical-HRF-related
variance.
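A small sketch of that orthogonalization (again plain NumPy with synthetic curves, not the SPM code path): after projecting out the canonical component from the derivative, the canonical coefficient from the two-column fit is identical to the coefficient from a canonical-only fit, so the TD column cannot absorb canonical-related variance.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200
t = np.linspace(0, 6, n)

# Stand-ins for the canonical HRF and a raw (un-orthogonalized) temporal
# derivative; the derivative is deliberately made correlated with the HRF.
canonical = np.exp(-((t - 3) ** 2))
raw_deriv = np.gradient(canonical, t) + 0.3 * canonical

# Orthogonalize the derivative with respect to the canonical (Gram-Schmidt).
proj = (raw_deriv @ canonical) / (canonical @ canonical) * canonical
deriv_orth = raw_deriv - proj
assert abs(deriv_orth @ canonical) < 1e-10  # now orthogonal

y = 2.0 * canonical + rng.normal(scale=0.1, size=n)

# Canonical-only fit vs. fit with the orthogonalized derivative included:
# the canonical estimate is unchanged.
b1, *_ = np.linalg.lstsq(canonical[:, None], y, rcond=None)
b2, *_ = np.linalg.lstsq(np.column_stack([canonical, deriv_orth]), y,
                         rcond=None)
print(b1[0], b2[0])  # canonical estimates agree
```

This is the algebraic reason the [1 0 ...] T-contrast remains a clean test of the canonical magnitude.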
-Tom
--
Thomas Nichols
Department of Biostatistics, University of Michigan
1420 Washington Heights, Ann Arbor, MI 48109-2029
http://www.sph.umich.edu/~nichols
[log in to unmask]