Dear Tom,
I haven't thought about this particular aspect in detail, but I believe this
is precisely the reason why we need to scale according to the design
matrix. When we use a "canonical HRF", we assume that the evoked BOLD
response is essentially the same each time it is evoked. So if two events
happen to fall near each other in time, we assume a linear summation of the
evoked responses (cf. Boynton et al., 1996). If this is true, then the
signal will increase linearly too. But when we compute the (mean) % BOLD
signal change for a particular type of event, we need to scale it by the
number of linearly summed events, which is indexed by the difference
between the maximum and minimum values of the corresponding column of the
design matrix.
Consider an experiment with two conditions. One is a 30 s blocked
stimulation (EV1), while the other is a sparse event-related condition (EV2)
with enough time for the HRF to return to baseline after each
stimulation. Because we use a linear convolution operator, the magnitude of
the EV1 regressor will be much greater than that of EV2, even though we
assume that an equivalent delta function would evoke HRFs of equivalent
height in both conditions. If we didn't scale for these differences, we
would erroneously conclude that the % BOLD signal change was approximately
an order of magnitude larger for EV1 than for EV2.
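To make the point concrete, here is a rough sketch (toy Python, not FSL code) of the two regressors. The single-gamma HRF, the 1 s sampling, and the run length are all illustrative assumptions, not the canonical HRF:

```python
import math

def hrf(t_sec):
    # Toy single-gamma HRF peaking near 5 s; a stand-in for the canonical HRF,
    # normalised so the response to one brief event sums to ~1 (120 = 5!).
    return (t_sec ** 5) * math.exp(-t_sec) / 120.0

h = [hrf(t) for t in range(30)]  # 30 s kernel, sampled every 1 s

def convolve(stim, kernel):
    # Plain discrete convolution, truncated to the length of the stimulus.
    out = [0.0] * len(stim)
    for i in range(len(stim)):
        for j in range(len(kernel)):
            if i - j >= 0:
                out[i] += stim[i - j] * kernel[j]
    return out

n = 120  # 120 s run at 1 s sampling (assumed)
block = [1.0 if 10 <= t < 40 else 0.0 for t in range(n)]  # 30 s block (EV1)
event = [1.0 if t == 10 else 0.0 for t in range(n)]       # one brief event (EV2)

ev1 = convolve(block, h)
ev2 = convolve(event, h)

pp1 = max(ev1) - min(ev1)  # peak-to-peak height of the blocked regressor
pp2 = max(ev2) - min(ev2)  # peak-to-peak height of the event regressor
print(pp1 / pp2)  # the blocked regressor is several times taller

# Dividing each column by its own peak-to-peak height puts the two
# conditions back on a common scale before computing % signal change.
ev1_scaled = [v / pp1 for v in ev1]
ev2_scaled = [v / pp2 for v in ev2]
```

With this toy HRF the ratio comes out around 5–6; the exact figure depends on the HRF shape, block length, and sampling, but the qualitative point is the same.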
There are more subtle cases where this scaling becomes important. Consider
an experiment with only one event-related condition of interest, in which
the timing of the stimuli is fast (<4 s) but partly dependent on the
participants' responses. Participants with slower RTs will have longer
ISIs, leading to slightly less summation of the HRFs and thus smaller
magnitudes in the data than for participants with faster RTs. If we didn't
scale according to these magnitude differences, it would erroneously appear
that faster subjects had larger % BOLD signal change.
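The RT effect can be sketched the same way: two hypothetical participants seeing the same trials, one at a 2 s ISI and one at a 3 s ISI. Again the single-gamma HRF and the specific timings are illustrative assumptions:

```python
import math

def hrf(t_sec):
    # Same toy single-gamma HRF: peak near 5 s, roughly unit area.
    return (t_sec ** 5) * math.exp(-t_sec) / 120.0

h = [hrf(t) for t in range(30)]

def regressor(onsets, n):
    # Delta functions at the onsets, convolved with the HRF.
    out = [0.0] * n
    for o in onsets:
        for j, hj in enumerate(h):
            if o + j < n:
                out[o + j] += hj
    return out

n = 200  # 200 s run at 1 s sampling (assumed)
fast = regressor(list(range(10, 150, 2)), n)  # faster responder: 2 s ISI
slow = regressor(list(range(10, 150, 3)), n)  # slower responder: 3 s ISI

pp_fast = max(fast) - min(fast)
pp_slow = max(slow) - min(slow)
# The fast responder's regressor summates to a greater height, so dividing
# each participant's column by its own peak-to-peak height removes the
# spurious "faster responder = bigger % signal change" effect.
print(pp_fast, pp_slow)
```

The identical events produce a visibly taller regressor for the 2 s ISI participant, which is exactly the magnitude difference the scaling corrects for.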
So I can only think of cases where we do need to scale according to the
magnitude of the EV in the design matrix. In some designs, such as sparse
sampling, convolution may be unnecessary; the regressor height is then 1.0,
and scaling simply doesn't change the results.
Perhaps others would like to comment, though?
Joe
--------------------
Joseph T. Devlin, Ph. D.
FMRIB Centre, Dept. of Clinical Neurology
University of Oxford
John Radcliffe Hospital
Headley Way, Headington
Oxford OX3 9DU
Phone: 01865 222 738
Email: [log in to unmask]