Hi all,
I have a question regarding the setup of a model for working-memory
tasks with variable delay durations. Currently I model all delays as one
regressor consisting of epochs of variable duration but constant
height, and convolve these with a gamma function. However, I assume that
such a model implies a constant level of underlying neuronal activity
throughout the delay. I would like to compare this assumption with two
other possibilities: a constant increase or a constant decrease (a ramp).
My first question is how to realize these two other models. So far
I use a 3-column format to specify the delay epochs, but this only
yields a square-wave (boxcar) function.
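Just to make my current setup concrete, here is a minimal sketch (my own
illustration, not tied to any particular package) of how a 3-column
specification (onset, duration, height) becomes a boxcar that is then
convolved with a gamma HRF; the HRF shape/scale values and the sampling
step are arbitrary placeholders:

```python
import math

TR = 0.1  # sampling step in seconds (fine grid, chosen for illustration)

def gamma_hrf(t, shape=6.0, scale=0.9):
    """Gamma-density HRF; shape/scale values are illustrative only."""
    if t <= 0.0:
        return 0.0
    return (t ** (shape - 1.0)) * math.exp(-t / scale) / (
        math.gamma(shape) * scale ** shape)

def boxcar(events, n):
    """events: list of (onset_s, duration_s, height) 3-column rows."""
    x = [0.0] * n
    for onset, dur, height in events:
        for i in range(int(onset / TR), min(n, int((onset + dur) / TR))):
            x[i] = height
    return x

def convolve(x, kernel):
    """Discrete convolution of the stimulus function with the HRF."""
    y = [0.0] * len(x)
    for i in range(len(x)):
        for j in range(len(kernel)):
            if i - j >= 0:
                y[i] += x[i - j] * kernel[j] * TR
    return y

n = 600  # 60 s at 0.1 s resolution
hrf = [gamma_hrf(i * TR) for i in range(200)]
delays = [(5.0, 4.0, 1.0), (25.0, 8.0, 1.0)]  # variable duration, same height
regressor = convolve(boxcar(delays, n), hrf)
```

Every epoch enters with the same height, which is exactly the constant-level
assumption described above.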
Would it be a reasonable approach to split a delay period into
smaller bins (say, 100 ms) and to increase/decrease the third
number (the height) with each subsequent bin? I would orthogonalize
these three regressors in order to identify regions that are better
described by an increase than by a constant level.
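To illustrate the binning idea, here is a hedged sketch of generating the
3-column rows for one delay epoch, split into 100 ms bins whose heights
ramp linearly between 0 and 1 (the bin width and the 0-to-1 scaling are my
own choices; any subsequent mean-centering or orthogonalization would come
on top of this):

```python
def ramp_rows(onset, duration, bin_width=0.1, increasing=True):
    """Split one delay epoch into bins with linearly changing height.

    Returns a list of (onset, duration, height) rows in 3-column format,
    with heights running from 0 to 1 (or 1 to 0 for a decreasing ramp).
    """
    n_bins = int(round(duration / bin_width))
    rows = []
    for k in range(n_bins):
        h = k / (n_bins - 1) if n_bins > 1 else 1.0
        if not increasing:
            h = 1.0 - h
        rows.append((onset + k * bin_width, bin_width, h))
    return rows

rows = ramp_rows(10.0, 1.0)  # 1 s delay -> ten 100 ms bins, heights 0..1
```

Convolving these rows with the HRF would then give a smoothly ramping
predicted response instead of a boxcar.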
Another, more traditional approach would be to create a separate
regressor for each delay duration and to model a linear increase as a
specific contrast. However, I think that this approach has several
disadvantages, mainly that the confounding with the flanking event types
(encode and probe) increases, especially for short delay durations, i.e.
the signal variance could not be ascribed as cleanly to the different
event types. Am I right here, or would this second method be the better way?
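For the second approach, the linear-increase contrast I have in mind would
just be mean-centered linear weights across the per-duration regressors,
e.g. for four delay durations (a sketch of my intention, not any package's
API):

```python
def linear_contrast(n_levels):
    """Mean-centered linear contrast weights across n_levels regressors,
    ordered from shortest to longest delay duration."""
    return [k - (n_levels - 1) / 2.0 for k in range(n_levels)]

weights = linear_contrast(4)  # -> [-1.5, -0.5, 0.5, 1.5]
```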
Thanks for any pointers on this issue,
wolf