I was assuming that the design was something along the lines of:
<1 second sample
3-4+ second delay
1-2 second test

These should be separable because their shapes are quite different. As the duration of each phase shrinks, they become harder to separate.

Additionally, you could check the design ahead of time by building the design matrix and estimating its efficiency, to see whether the events can be separated with the pre-specified number of trials.
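
For concreteness, here is a minimal sketch of that efficiency check in Python (outside SPM; the function name, the example contrast, and the use of a pseudoinverse are illustrative choices), assuming X is a design matrix whose columns are the HRF-convolved regressors:

import numpy as np

def design_efficiency(X, contrasts):
    # X: (n_scans x n_regressors) design matrix, columns = HRF-convolved regressors.
    # contrasts: one contrast per row, e.g. [1, 0, -1, 0] for sample minus test
    # if the columns are ordered sample, delay, test, constant.
    C = np.atleast_2d(np.asarray(contrasts, dtype=float))
    XtX_inv = np.linalg.pinv(X.T @ X)   # pinv guards against near-singular designs
    # Higher is better; the absolute value is arbitrary, so only compare designs
    # built with the same TR, scaling and contrasts.
    return 1.0 / np.trace(C @ XtX_inv @ C.T)

Comparing this value across candidate schedules and trial counts tells you whether the events can still be separated with the number of trials you have planned.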


Best Regards, 
Donald McLaren, PhD


On Tue, Nov 3, 2015 at 9:53 PM, <[log in to unmask]> wrote:

As an additional note, you might want to try using an m-sequence to optimize your estimation efficiency, since you are planning to do multivariate analysis.
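
In case it helps, here is a minimal sketch (Python/NumPy, outside any toolbox) of how a binary m-sequence can be generated with a linear feedback shift register; the register length and taps are just illustrative, and a design with several event types plus null would need an m-sequence over a larger alphabet, which this sketch does not cover:

import numpy as np

def binary_m_sequence(n_bits=5, taps=(5, 3), seed=None):
    # Fibonacci LFSR; taps (5, 3) implement a primitive feedback polynomial,
    # so the output is a maximal-length sequence of 2**5 - 1 = 31 bits.
    state = list(seed) if seed is not None else [1] * n_bits
    out = []
    for _ in range(2 ** n_bits - 1):
        out.append(state[-1])                 # output bit
        feedback = 0
        for t in taps:
            feedback ^= state[t - 1]
        state = [feedback] + state[:-1]       # shift the feedback bit in
    return np.array(out)                      # 1 = event, 0 = null

order = binary_m_sequence()   # e.g. an on/off ordering of trials vs. null periods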

Also, I am not so sure that it is a good idea to model the three event types (sample, delay, test) as suggested by Donald, because their onsets are pretty close in time (within the same trial). In that case, I would expect strong correlation between the regressors in your GLM, which is extremely bad for your parameter estimation. Perhaps Donald has some further insights regarding this issue?
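
To make the concern concrete, here is a small sketch (Python/SciPy; the HRF parameters, event durations and ITI range are illustrative, not SPM's actual implementation) that builds sample and test regressors for a few delay lengths and prints their correlation:

import numpy as np
from scipy.stats import gamma

def hrf(t):
    # Simple double-gamma HRF (SPM-like shape; parameters are illustrative only).
    return gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0

def regressor(onsets, duration_s, tr=2.0, dt=0.1):
    # Stick functions at `onsets` (seconds), convolved with the HRF, resampled every TR.
    t_hi = np.arange(0.0, duration_s, dt)
    sticks = np.zeros_like(t_hi)
    sticks[np.searchsorted(t_hi, onsets)] = 1.0
    conv = np.convolve(sticks, hrf(np.arange(0.0, 32.0, dt)))[: len(t_hi)]
    return conv[:: int(round(tr / dt))]

rng = np.random.default_rng(1)
n_trials, sample_dur, test_dur = 30, 1.0, 1.5
itis = rng.uniform(2.0, 8.0, size=n_trials)          # jittered ITI (illustrative range)
for delay in (2.0, 4.0, 6.0):
    starts = 5.0 + np.concatenate(([0.0], np.cumsum(sample_dur + delay + test_dur + itis)[:-1]))
    duration = starts[-1] + 40.0
    sample = regressor(starts, duration)
    test = regressor(starts + sample_dur + delay, duration)
    print(f"delay {delay:.0f} s: corr(sample, test) = {np.corrcoef(sample, test)[0, 1]:.2f}")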


Best,

  Ce



----- Original Message -----
From: "MCLAREN, Donald" <[log in to unmask]>
To: [log in to unmask]
Subject: Re: [SPM] jittering question
Date: Nov 4, 2015, 10:35 AM

Use optseq2 to jitter the ITI between trials. Do not jitter the length of the delay period unless you want to change the difficulty of each trial. This won't be ideal, but you could use the candidate designs to build the corresponding design matrices containing the three event types (sample, delay, test) and compute their efficiency. Additionally, if you test it this way, you can modulate correct/incorrect trials to see which jittering scheme leads to the best efficiency over a range of accuracies and trial orders.
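
Here is a rough end-to-end sketch of that procedure in Python (the HRF, event durations, ITI ranges, and the random draws standing in for optseq2 output are all illustrative):

import numpy as np
from scipy.stats import gamma

def hrf(t):
    # Simple double-gamma HRF (SPM-like shape; parameters are illustrative only).
    return gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0

def convolved_regressor(onsets, durations, total_s, tr=2.0, dt=0.1):
    # Boxcar (duration > 0) or stick (duration 0) regressor convolved with the HRF,
    # resampled at every TR.
    t_hi = np.arange(0.0, total_s, dt)
    box = np.zeros_like(t_hi)
    for on, dur in zip(onsets, durations):
        box[(t_hi >= on) & (t_hi < on + max(dur, dt))] = 1.0
    conv = np.convolve(box, hrf(np.arange(0.0, 32.0, dt)))[: len(t_hi)]
    return conv[:: int(round(tr / dt))]

def efficiency(X, C):
    # Higher is better; only comparable across designs with the same scaling and contrasts.
    return 1.0 / np.trace(C @ np.linalg.pinv(X.T @ X) @ C.T)

rng = np.random.default_rng(0)
n_trials, sample_dur, delay_dur, test_dur = 30, 1.0, 4.0, 1.5
C = np.array([[1.0, 0.0, 0.0, 0.0],   # effect of sample
              [0.0, 0.0, 1.0, 0.0]])  # effect of test

for lo, hi in [(4.0, 4.0), (2.0, 6.0), (0.5, 7.5)]:   # candidate ITI jitters, same mean
    itis = rng.uniform(lo, hi, size=n_trials)
    starts = 10.0 + np.concatenate(([0.0], np.cumsum(sample_dur + delay_dur + test_dur + itis)[:-1]))
    total = starts[-1] + 40.0
    regs = [
        convolved_regressor(starts, np.zeros(n_trials), total),                           # sample
        convolved_regressor(starts + sample_dur, np.full(n_trials, delay_dur), total),    # delay
        convolved_regressor(starts + sample_dur + delay_dur, np.zeros(n_trials), total),  # test
    ]
    X = np.column_stack(regs + [np.ones(len(regs[0]))])   # plus a constant column
    print(f"ITI ~ U({lo}, {hi}): efficiency = {efficiency(X, C):.4f}")

The same loop can be run over the actual optseq2 schedules, and over shuffled correct/incorrect labellings, to see how robust each jittering scheme is.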

As the delay period will likely be at least a couple of seconds, the sample and test will be separable events.

Best Regards, 
Donald McLaren, PhD


On Tue, Nov 3, 2015 at 7:12 AM, hamed nili <[log in to unmask]> wrote:

Dear all,

 

I guess this might be a rather boring, old question about design (!), but I am looking for a short recommendation.

The question is about jittering the ISI:

I want to design a simple visual experiment (delayed match-to-sample task) and plan to do both univariate and multivariate analyses on the data.


There are two time windows whose lengths I am considering jittering: the delay period (between sample and test) and the ITI.

Having read some articles and posts on design power and efficiency, am I right that, if I am happy with a canonical HRF, I could go for the following setup:

- jittering the length of the delay period (to be able to estimate responses to both sample and test stimuli, modelling them as separate events)
- jittering the ITI independently of the delay jittering (I first thought of jittering the ITI so that the sum of the delay and the ITI is constant, which would give maximum power for estimating the response to the sample, but then thought this might make the sample special as a consequence); a small sketch of both options follows after this list.
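
For concreteness, here is a tiny sketch (Python; all numbers are placeholders) of the two jittering options I have in mind:

import numpy as np

rng = np.random.default_rng(0)
n_trials = 40
sample_dur, test_dur = 1.0, 1.5                     # placeholder event durations

# Option 1: delay and ITI jittered independently.
delay = rng.choice([3.0, 4.0, 5.0, 6.0], size=n_trials)
iti = rng.choice([2.0, 4.0, 6.0, 8.0], size=n_trials)

# Option 2: the ITI absorbs the delay so that delay + ITI is constant on every trial.
iti_constant_sum = 11.0 - delay

# Either scheme gives trial onsets that could be scored for design efficiency
# before committing to one.
onsets = np.concatenate(([0.0], np.cumsum(sample_dur + delay + test_dur + iti)[:-1]))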

Thanks,

Hamed