Hi Joe,
I think the issue here is whether one wants a measure of overall percent
signal change for many summed, or overlapping, events, or a measure of
percent signal change for each single event. For a block design, the former
is usually the case; we are normally interested in the total percent signal
change for the block as a whole. Then it makes sense to normalise based on
the magnitude of the EV for the block.
For a rapid event-related design, we are normally interested in the mean
percent signal change provoked by a single event. In this situation, it
seems to me more correct to normalise based on the magnitude of the ideal
HRF before convolution. Why? Consider a simple case in which we have 2
events, and consider 2 experiments; one sparse and the other rapid. Let's
say we convolve our stimulus-based stick function with an ideal HRF with a
magnitude of 1. Let's say for simplicity that the HRF is just a square wave
of duration 5.
So we have:
Stim onsets:
sparse: 0 0 1 0 0 0 0 0 0 1 0 0 0 0
rapid: 0 0 1 0 1 0 0 0 0 0 0 0 0 0
EVs:
sparse: 0 0 1 1 1 1 1 0 0 1 1 1 1 1
rapid: 0 0 1 1 2 2 2 1 1 0 0 0 0 0
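If it helps to see the convolution step concretely, here is a rough numpy
sketch of it (the stick functions and square HRF are the ones above; the
variable names are just mine):

import numpy as np

# single-event stick functions, as above
stim_sparse = np.array([0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0])
stim_rapid  = np.array([0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0])

# ideal HRF: square wave of magnitude 1, duration 5
hrf = np.ones(5, dtype=int)

# convolve and truncate back to the original length to get the EVs
ev_sparse = np.convolve(stim_sparse, hrf)[:len(stim_sparse)]
ev_rapid  = np.convolve(stim_rapid, hrf)[:len(stim_rapid)]

print(ev_sparse)   # [0 0 1 1 1 1 1 0 0 1 1 1 1 1]
print(ev_rapid)    # [0 0 1 1 2 2 2 1 1 0 0 0 0 0]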
Now let's say that a single event provokes a signal change of 1 percent,
and that our baseline signal is 1000, so each unit of the EV corresponds to
a signal change of 10 units. Then the expected signal change would be:
Signal:
sparse: 0 0 10 10 10 10 10 0 0 10 10 10 10 10
rapid: 0 0 10 10 20 20 20 10 10 0 0 0 0 0
If we fit the EVs to these data, we would in both cases get a beta of 10.
Normalising according to the magnitude of the ideal HRF before convolution
(dividing the beta of 10 by the pre-convolution height of 1, relative to
the baseline of 1000) gives the correct single-event signal change of 1
percent in both cases. Normalising to the EV magnitude (i.e. after
convolution, where the rapid EV peaks at 2) would, however, give 10/2 = 5,
i.e. 0.5 percent: just 1/2 the real, single-event signal change.
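In case a worked version of the numbers is useful, here is a rough numpy
sketch; I'm reading "normalise based on a magnitude" as dividing the fitted
beta by that height relative to baseline, and the variable names are just
mine:

import numpy as np

baseline   = 1000.0   # baseline signal
hrf_height = 1.0      # magnitude of the ideal HRF before convolution

# EVs and expected signal changes from the example above
ev_sparse = np.array([0, 0, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1], float)
ev_rapid  = np.array([0, 0, 1, 1, 2, 2, 2, 1, 1, 0, 0, 0, 0, 0], float)
sig_sparse = 10 * ev_sparse   # 1 percent of 1000 per single event
sig_rapid  = 10 * ev_rapid

for name, ev, sig in [("sparse", ev_sparse, sig_sparse),
                      ("rapid", ev_rapid, sig_rapid)]:
    # least-squares fit of the EV to the signal change: beta = 10 both times
    beta = np.linalg.lstsq(ev[:, None], sig, rcond=None)[0][0]
    psc_hrf = 100 * beta / hrf_height / baseline  # normalise to HRF height
    psc_ev  = 100 * beta / ev.max() / baseline    # normalise to EV height
    print(name, beta, psc_hrf, psc_ev)

# sparse: beta = 10, both normalisations give 1.0 percent
# rapid:  beta = 10, HRF normalisation gives 1.0, EV normalisation gives 0.5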
Hope this is helpful.
Tom