Yes. All points not explicitly modeled contribute to the implicit baseline value.

Best Regards, 
Donald McLaren, PhD


On Thu, Sep 10, 2015 at 11:04 AM, Joelle Zimmermann <[log in to unmask]> wrote:
Hi Donald,

Thanks for your feedback. Just one point I want to follow up on with regard to the baseline. Am I correct in saying that SPM takes all the non-stimulus intervals and averages across them? So, for example, if I have 5 min of rest at the beginning, and then 15 s ITIs between trials, it will take all of this non-stimulus time and average across it to get a sort of 'average' baseline condition.

Thanks,
Joelle

On Wed, Sep 9, 2015 at 9:16 PM, MCLAREN, Donald <[log in to unmask]> wrote:
See below.

On Wed, Sep 9, 2015 at 4:10 PM, Joelle Zimmermann <[log in to unmask]> wrote:
Hi Helmut,

Thanks for your guidance. Some comments/questions below:

In my opinion the main limitation seems to be the single condition with a very long trial duration (2 min). Block predictors are already a very crude approximation for longer durations, and in case of a learning paradigm I would especially be surprised if neural activation is constant within a 2 min trial but (possibly) jumps from the 1st to 2nd trial and so on.
Right, I have been considering looking within trials as well. Mechanistically, how could I set this up in SPM (in terms of the conditions)? I guess I could keep 1 condition, but instead of 10 trials, divide each trial into 2 and have 20 'onsets', for example?

By definition, a trial has constant activation throughout. Thus, if you want to look within a "trial", then you would need to define multiple trials within each trial. Keep in mind that BOLD is a noisy measure, which is why you need to average multiple instances of each trial type to get a stable estimate.
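Mechanically, defining multiple trials within each trial just means expanding the onset/duration vectors entered for the condition in the model specification. A minimal sketch of that bookkeeping (the onset times here are made up purely for illustration):

```python
# Sketch: splitting each long trial into shorter sub-trials for the design
# matrix. Onset times (seconds) are hypothetical, for illustration only.

trial_onsets = [300, 435, 570, 705, 840]  # original 2-min trial onsets
trial_duration = 120                      # seconds per original trial
n_subtrials = 2                           # split each trial in two

sub_duration = trial_duration / n_subtrials
sub_onsets = [onset + i * sub_duration
              for onset in trial_onsets
              for i in range(n_subtrials)]

# Each original 120 s trial becomes two 60 s sub-trials, doubling the
# number of onsets entered for the condition.
print(sub_onsets)
print(sub_duration)
```

In SPM you would then enter `sub_onsets` as the onsets vector and `sub_duration` as the (constant) duration, either as one condition or as separate "early" and "late" conditions if you want to contrast them.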
 
 
Another critical aspect is the high-pass filter: due to the trial length, there is probably quite some signal within the low frequencies. Put simply, the paradigm consists of 10 cycles of ~2 1/4 min each, and a default HPF setting will probably remove major parts of e.g. a linear change over time.
I was worried about the HPF based on some of our discussions in the past. I've played around with this, for example setting the HPF to Inf, but it doesn't make a difference in the resulting activation patterns.
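The concern above can be made concrete with a quick frequency check: SPM's default high-pass cutoff is 128 s, and a ~2 1/4 min trial cycle puts the task's fundamental frequency right around (here, just below) that cutoff. A small sketch, with the cycle length assumed from the "~2 1/4 min" figure above:

```python
# Why the default high-pass filter is risky here: the task cycles every
# ~2 1/4 min (assumed 135 s), close to SPM's default 128 s cutoff.
cycle_s = 135            # ~2 1/4 min per trial cycle (assumed value)
default_cutoff_s = 128   # SPM's default high-pass cutoff

task_freq_hz = 1 / cycle_s          # ~0.0074 Hz
cutoff_freq_hz = 1 / default_cutoff_s  # ~0.0078 Hz

# True: the task fundamental falls below the cutoff and would be
# attenuated by the default filter.
print(task_freq_hz < cutoff_freq_hz)
```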

Concerning overall length, some labs indeed prefer short runs (e.g. < 11 min), but usually it shouldn't be that much of a problem. In the lab I used to work in, it was not uncommon to use a single fMRI run exceeding 20 min, which was done successfully with different paradigms and tasks. The data sets I'm currently analyzing are also based on single fMRI runs lasting around 20 - 35 min. With several conditions (say 4 plus a dummy condition for "no-stimulus trials"/blank periods), and in case you're interested in reaction time effects, you might want to go with a larger number of trials per condition than usual (e.g. 60); with 300 short trials and an average ITI of 5 s, this results in a session length of 25 min.
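Reading the numbers above as 5 conditions (4 task plus the dummy) times 60 trials each, the session-length arithmetic checks out:

```python
# Back-of-the-envelope check of the session length quoted above.
n_conditions = 5           # 4 task conditions + 1 no-stimulus dummy
trials_per_condition = 60
avg_spacing_s = 5          # average trial-to-trial spacing (ITI), seconds

total_trials = n_conditions * trials_per_condition
session_min = total_trials * avg_spacing_s / 60

print(total_trials, session_min)  # 300 25.0
```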
You go with an ITI (inter-trial interval?) of only 5 s, and that is the only rest period for the baseline comparison? With my 15 s inter-trial interval, I was worried that this is not enough rest compared to most paradigms, and may not provide enough data points for a good 'baseline' comparison.

The key is not the amount of continuous baseline, but the total amount of non-stimulus time. A general rule of thumb is to have about as much rest time as stimulation time for each condition. In Helmut's example, assume that each trial is 1 s. On average, you will then have 4 s of baseline per trial, which is more than sufficient for getting good estimates. Additionally, in Helmut's example, the ITI would be variable, with short ITIs occurring more often than long ones (see Optseq2 for more details on jittering).
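The jittering idea can be sketched numerically. This is a toy illustration only, not Optseq2's actual algorithm, and every parameter value is assumed:

```python
# Toy sketch of jittered ITIs: draw from an exponential distribution so
# that short ITIs are frequent and long ones rare, while the mean stays
# near a target of ~4 s of baseline per trial. All values illustrative.
import random

random.seed(0)                 # fixed seed for reproducibility
target_mean_iti = 4.0          # desired average baseline per trial, seconds
min_iti, max_iti = 1.0, 12.0   # clamp draws to a practical range

itis = [min(max(random.expovariate(1 / target_mean_iti), min_iti), max_iti)
        for _ in range(300)]

mean_iti = sum(itis) / len(itis)
n_short = sum(1 for iti in itis if iti < target_mean_iti)      # below mean
n_long = sum(1 for iti in itis if iti > 2 * target_mean_iti)   # well above

print(round(mean_iti, 2), n_short, n_long)
```

Note that Optseq2 searches for trial schedules that optimize design efficiency rather than drawing ITIs independently; this sketch only shows the characteristic "many short, few long ITIs" shape.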

If you are dealing with young healthy volunteers, longer runs may be acceptable and provide good data; most of my experience is with older individuals or patient populations, who do better with shorter runs. If you are aiming for 300 trials, you could break them up into four 6-7 minute runs.
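For what it's worth, the run-splitting suggestion is consistent with the ~5 s average spacing mentioned earlier in the thread:

```python
# Quick check of the run-splitting arithmetic: 300 trials over 4 runs,
# assuming the ~5 s average trial spacing quoted earlier in the thread.
total_trials = 300
n_runs = 4
avg_spacing_s = 5

trials_per_run = total_trials / n_runs
run_length_min = trials_per_run * avg_spacing_s / 60

print(trials_per_run, run_length_min)  # 75.0 6.25
```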

 

We're already done with data collection, which is why I want to see at least whether I am finding sensible and interesting patterns in the data we have right now.

Thanks for your help,
Joelle