Hi,
On Tue, Aug 16, 2011 at 7:48 PM, mmkleung <[log in to unmask]> wrote:
> Dear all,
>
> I've heard a suggestion that one should never model any event as 0 seconds, as this will make a lot of signal go into the error term in the GLM and reduce the effect size. Is that true? If I presented my stimuli for 1 second, should I model the events as 1s instead of 0s?
The event duration of '0' in SPM is handled as a special case - at
least the last time I looked.
In fact '0' generates an hrf regressor that is very similar to that
for 1 second, so no, the signals won't go into the error term.
If you model your events as 1 second I think you'll see very small
differences in the results.
Regressors for - say - 0.1 seconds are a lot lower in amplitude than those for
events of '0'. This means that their beta parameters have to be
larger in order to fit the same amplitude of response.
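To see the amplitude effect concretely, here is a small sketch (not SPM's actual code) using a double-gamma HRF with SPM-like shape parameters, convolved with boxcars of different durations. For short events the regressor peak scales roughly with duration, so a 0.1 s boxcar gives a much lower regressor than a 1 s one - and hence needs a larger beta. (SPM's special handling of duration '0' scales the stick function so the result resembles the ~1 s case; that scaling is not reproduced here.)

```python
import math
import numpy as np

# Gamma density, used to build a canonical-style double-gamma HRF.
def gamma_pdf(x, a):
    return x ** (a - 1) * np.exp(-x) / math.gamma(a)

dt = 0.01                    # fine time grid, seconds
t = np.arange(0, 32, dt)
# SPM-like shape parameters: peak at ~5 s, undershoot at ~15 s, ratio 1/6
hrf = gamma_pdf(t, 6) - gamma_pdf(t, 16) / 6.0
hrf /= hrf.sum()             # normalise (arbitrary overall scale)

def regressor_peak(duration):
    """Peak of the HRF regressor for a boxcar event of the given duration (s)."""
    stim = (t < duration).astype(float)
    return np.convolve(stim, hrf)[: len(t)].max()

for d in [0.1, 1.0]:
    print(f"duration {d:.1f} s -> regressor peak {regressor_peak(d):.4f}")
```

With this sketch the 1 s regressor peaks roughly ten times higher than the 0.1 s one, which is the scaling the beta parameter then has to absorb.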
It's always tricky comparing events of different durations because of
the assumptions you have to make about linearity of response -
assumptions that seem unlikely to hold at this level.
Best,
Matthew