The issue is general, but a specific example will
probably help: my conditions are audio-visual stimuli,
with each condition being a different SOA (stimulus
onset asynchrony) between the auditory stimulus and the
visual stimulus, and one column in the design matrix
for each SOA type.
Suppose I'm doing a contrast to find areas whose activity
increases with increasing SOA.
Then the contrast has, say,
-2 for the 0ms-SOA cols
-1 for the 50ms-SOA cols
0 for the 100ms-SOA cols
1 for the 150ms-SOA cols
2 for the 200ms-SOA cols
etc.
The problem is that this contrast doesn't only find areas
whose activity scales linearly with SOA: it finds any area
that is more active, on average, for the cols with positive
coefficients than for the cols with negative coefficients.
For example, if the beta scores in a given voxel
for the conditions [ 0ms 50ms 100ms 150ms 200ms ]
are [ 1 2 3 4 5 ], i.e. if neural activity really does scale
linearly with increasing SOA, as desired, then
that voxel will give a good contrast score.
However, we'll get an equally big contrast score
if the activity doesn't match the linear-increasing
form of the contrast vector, e.g. if the beta scores
for the conditions [ 0ms 50ms 100ms 150ms 200ms ] are [ 0 0 0 0 5 ].
sum( [ -2 -1 0 1 2 ] .* [ 1 2 3 4 5 ] ) = 10 but also
sum( [ -2 -1 0 1 2 ] .* [ 0 0 0 0 5 ] ) = 10,
so both show up as equally significant (assuming equal
error variance), even though the first one is the better
match to our parametric contrast, coming from a voxel whose
activity really does increase with increasing SOA, whereas
the second voxel just gives a big kick of activity at the
longest SOA only.
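To make the arithmetic concrete, here's a quick MATLAB/Octave
sketch of the two cases (the variable names are just mine,
for illustration):

  c = [ -2 -1 0 1 2 ];           % the parametric contrast vector
  beta_linear = [ 1 2 3 4 5 ];   % voxel whose activity scales with SOA
  beta_step   = [ 0 0 0 0 5 ];   % voxel that only responds at the 200ms SOA

  sum( c .* beta_linear )        % ans = 10
  sum( c .* beta_step )          % ans = 10

Both dot products come out to 10, so the contrast value alone
can't distinguish the two response profiles.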