Hi,
I have been following the discussion about jittered trial designs and had a
couple of questions related to my own design. I am analyzing data from 4
groups on a delayed-response task. The task consists of two runs (32 trials
per run) with 6 different stimulus loads. Each trial starts with a fixed
4 sec period in which the stimulus (a series of letters) is shown, followed
by a variable delay of 2 to 12 seconds. After the delay there is a fixed
1.5 sec period in which the subject has to respond by matching against the
series of letters shown at the start of the trial, and the trial ends with
an ITI of variable duration (3-5 seconds).
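For concreteness, here is roughly how the events of one run lay out (a
Python sketch; the timings and loads below are randomly generated
placeholders, not my actual trial list):

import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
rows, t = [], 0.0
for trial in range(32):                     # 32 trials per run
    load = int(rng.integers(1, 7))          # one of the 6 load levels
    delay = float(rng.uniform(2, 12))       # jittered delay, 2-12 s
    iti = float(rng.uniform(3, 5))          # variable ITI, 3-5 s
    rows.append(dict(trial=trial, trial_type="stim", onset=t,
                     duration=4.0, load=load, delay=delay))
    rows.append(dict(trial=trial, trial_type="response",
                     onset=t + 4.0 + delay, duration=1.5,
                     load=load, delay=delay))
    t += 4.0 + delay + 1.5 + iti            # start of the next trial

events = pd.DataFrame(rows)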
I am able to model both the start of the trial and the 1.5 sec response
period quite well. For the 4 sec stimulus period I use a duration of
1 second and a parametric modulator that codes the load effect. For the
response period I use a duration of 1.5 sec (the length of the whole answer
period) and the same parametric modulator (load).
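In case it helps to see what I mean, here is a minimal sketch of how the
modulated regressors could be built, continuing from the events table
above (nilearn is purely for illustration, t_r and n_scans are made-up
values, and demeaning the load is my own choice so the parametric regressor
is decorrelated from the main effect):

from nilearn.glm.first_level import make_first_level_design_matrix

t_r, n_scans = 2.0, 300                     # made-up acquisition parameters
frame_times = np.arange(n_scans) * t_r

# Main-effect regressors, plus copies modulated by demeaned load so the
# parametric regressors carry only the load-related variance.
main = events[["onset", "duration", "trial_type"]].copy()
main["modulation"] = 1.0
pmod = main.copy()
pmod["trial_type"] = events["trial_type"] + "_x_load"
pmod["modulation"] = events["load"] - events["load"].mean()

design = make_first_level_design_matrix(
    frame_times, pd.concat([main, pmod], ignore_index=True),
    hrf_model="spm")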
My problem is that, since the minimum jittered interval is 2 secs, I am
afraid the signals in those trials overlap with each other. Is there a way
to model this variable jitter interval that could maximize my signal? Is
there a proposed contrast to model the jitter interval? Or should I exclude
the jittered trials with a delay of less than 4 seconds?
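For that last option, what I have in mind is something along these lines
(again just a sketch, using the illustrative events table from above):

# Keep only trials whose jittered delay is at least 4 s, dropping both
# the stimulus and response events of each short trial.
keep = events.loc[events["delay"] >= 4.0, "trial"].unique()
events_clean = events[events["trial"].isin(keep)].reset_index(drop=True)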
Thank you very much for any ideas on how to deal with this.
--
Andres Roman-Urrestarazu