Hi Donald,

Thanks so much! A couple of follow-up questions are indicated in my inline
responses below.

Cheers,
Daniel


On Thu, Dec 12, 2013 at 9:29 PM, MCLAREN, Donald
<[log in to unmask]> wrote:

> Daniel,
>
> Please see inline responses below.
>
>
> On Wed, Dec 11, 2013 at 10:12 AM, Daniel Weissman <[log in to unmask]> wrote:
>
>> Dear SPMers,
>>
>> I'd like to run an event-related fMRI study to investigate the neural
>> bases of sequential effects involving two trial types: A and B (each trial
>> lasts 2 seconds and involves a stimulus for 300 ms, followed by the
>> subject's response). To this end, I'd like to employ a first-order
>> counterbalanced sequence to produce equal numbers of four sequential trial
>> types.
>>
>> 1) A preceded by A
>> 2) A preceded by B
>> 3) B preceded by A
>> 4) B preceded by B
>>
>> Finally, I'd like to use a constant inter-trial-interval (ITI), because
>> sequential effects in my task will likely change with the ITI. This is a
>> bit different from my normal practice of jittering the ITI and so it raises
>> a few questions in my mind.
>>
>> 1. Although the absence of jitter will make it nearly impossible to
>> observe the common effect of all conditions vs. baseline, my understanding
>> is that I will still be able to perform contrasts involving the 2 main
>> trial types (e.g., A minus B) as well as the 4 sequential trial types
>> (e.g., A preceded by A minus A preceded by B). This is because differences
>> between beta values are relatively stable when no jitter is present, even
>> though the individual beta values themselves are highly unstable. Is this
>> correct?
>>
>
> The absence of jitter reduces the ability to determine the amplitude of
> the events because it can lead to an underdetermined model and unstable
> estimates. However, given that you have 4 trial types, I don't think
> this will be a large concern, especially if you are assuming a canonical
> hemodynamic response.
>
> If you can't get stable estimates of the 4 trial types, then the
> subtractions will also be unstable.
>

****I've always thought that unjittered, rapid event-related designs permit
contrasts between trial types (i.e., subtractions) even though stable
estimates of individual trial types are not possible.  Are you saying this
is not the case?***
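
A small simulation (with made-up timing and an illustrative double-gamma
HRF) can make this concrete: with a constant ITI, the common effect vs.
baseline is estimated very poorly, but the A minus B contrast stays
reasonably efficient. A numpy sketch, assuming TR = 2 s and ITI = 2 s:

    import numpy as np
    from scipy.stats import gamma

    TR, ITI, n_trials, dt = 2.0, 2.0, 128, 0.1   # assumed timing

    def hrf(t):
        # Simple double-gamma HRF approximation (parameters are illustrative)
        return gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0

    onsets = np.arange(n_trials) * ITI
    labels = np.random.permutation(np.repeat([0, 1], n_trials // 2))  # A=0, B=1

    t_hi = np.arange(0, onsets[-1] + 32, dt)       # high-resolution time grid
    h = hrf(np.arange(0, 32, dt))
    X_hi = np.zeros((len(t_hi), 2))
    for cond in (0, 1):
        sticks = np.zeros(len(t_hi))
        sticks[np.round(onsets[labels == cond] / dt).astype(int)] = 1.0
        X_hi[:, cond] = np.convolve(sticks, h)[:len(t_hi)]

    scans = np.arange(0, t_hi[-1], TR)             # downsample to the TR grid
    X = X_hi[np.round(scans / dt).astype(int), :]
    X = np.column_stack([X, np.ones(len(X))])      # add a constant column

    def efficiency(X, c):
        # Higher = more precisely estimated contrast c'beta
        c = np.atleast_2d(c)
        return 1.0 / np.trace(c @ np.linalg.pinv(X.T @ X) @ c.T)

    print(efficiency(X, [1, -1, 0]))      # A minus B
    print(efficiency(X, [0.5, 0.5, 0]))   # all vs. baseline (expect much lower)

(This only illustrates relative efficiency; the exact numbers depend on the
trial sequence and the HRF model.)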

>
> One way to insert jitter into this design is to use miniblocks -->
> ABABABABAB pause BABBABABAB pause. The pauses provide the jitter.
> Another option is to use an ITI that is not a multiple of the TR. This
> way each trial starts at a different point within the TR, so the response
> is sampled at different time points across trials.
>

****In the mini-blocks above, you alternate between trial types A and B. Is
this just a coincidence or are you assuming an alternating design? I was
actually thinking about a first-order counterbalanced design.....
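
(For the second suggestion, a concrete example with made-up numbers: with
TR = 2.0 s and ITI = 2.5 s, successive trial onsets fall 0, 0.5, 1.0, and
1.5 s into a volume, so across trials the response gets sampled at four
different phases of the TR.)

    import numpy as np

    TR, ITI = 2.0, 2.5                 # illustrative numbers only
    onsets = np.arange(16) * ITI
    print(np.unique(onsets % TR))      # -> [0.  0.5  1.  1.5]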


>
>
>>
>> If so, then, at the nuts-and-bolts level, would I model the 4 sequential
>> trial types above (i.e., A preceded by A, A preceded by B, B preceded by A,
>> B preceded by B) and then form contrasts based on the resulting betas? If
>> so, then would the A - B contrast be formed by contrasting
>>
>> AVERAGE (A preceded by A, A preceded by B)
>>
>> MINUS
>>
>> AVERAGE (B preceded by A, B preceded by B)?
>>
>
>
> Yes. This is how to form the contrasts.
>

***Thanks!
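
(To be concrete about the weights: assuming the four sequential regressors
are entered in the order [AA, AB, BA, BB], a hypothetical ordering, with any
nuisance columns after them, the contrast vectors would be:)

    # AA = "A preceded by A", AB = "A preceded by B", etc.
    c_A_minus_B   = [0.5,  0.5, -0.5, -0.5]   # AVERAGE(AA, AB) - AVERAGE(BA, BB)
    c_AA_minus_AB = [1.0, -1.0,  0.0,  0.0]   # one sequential contrast
    c_all_vs_base = [0.25, 0.25, 0.25, 0.25]  # common effect vs. (implicit) baseline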

>
>
>>
>>
>> 2. To be able to look at the common effect of all stimuli vs. baseline
>> (e.g., to functionally define ROIs for subsequent orthogonal contrasts),
>> I've been considering breaking up the run into alternating periods of task
>> and rest as suggested by Rik Henson (2006, Chapter 15 from Human Brain
>> Function, I think). As Rik states on page 209 of this chapter:
>>
>> "Another problem with null events is that, if they are too
>> rare (e.g. less than approximately 33 per cent), they actually
>> become ‘true’ events in the sense that subjects may be
>> expecting an event at the next SOA and so be surprised
>> when it does not occur (the so-called ‘missing stimulus’
>> effect that is well-known in event-related potential (ERP)
>> research). One solution is to replace randomly intermixed
>> null events with periods of baseline between runs of
>> events (i.e. ‘block’ the baseline periods). This will increase
>> the efficiency for detecting the common effect versus
>> baseline, at a slight cost in efficiency for detecting differences
>> between the randomized event-types within each
>> block."
>>
>> In this chapter, Rik doesn't refer to previous event-related studies that
>> have employed this approach. Do you know of any?
>>
>
> I am not aware of studies, but this is exactly what I suggested above. You
> will lose a few trials that don't have any preceding trial, but that
> shouldn't have a huge impact.
>

***Ok - sounds good!

>
>> Also, if I employed Rik's suggested approach, does the following sound
>> reasonable? I'd consider breaking up the 96-trial-long sequence in each run
>> into four 24-trial-long sequences and then separating each of these
>> 24-trial-long sequences with a fixation block. Since trial duration is
>> about 2 seconds, this would amount to alternating between 48-second-long
>> task blocks and (I was considering) 20-second-long fixation blocks. While
>> this is far from the most efficient block design (i.e., 16 seconds on, 16
>> seconds off), I'm thinking it may still allow me to observe the common
>> effect of all stimuli vs. baseline as well as the contrasts between
>> different sequential trial types discussed earlier. Does this sound right?
>>
>
> I think that would be fine. Just remember to have fixation at the
> beginning and end of the run as well. I also think you need more than 96
> trials. I would say you need at least 128 analyzable trials, plus the
> trials at the beginning of each block. If you only want to use correct
> trials, then you will need more than 128 trials. (128 trials is 32 per
> sequential trial type * 4 trial types.)
>
>

****Ah, yes, I should have said 96 trials per run, not total. I plan to
have several runs and so should get way more than 96 trials total.
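
(For reference, the run timing above works out roughly as follows, assuming
~2 s trials, 20 s fixation blocks, and fixation at the start and end of the
run as suggested:)

    trials_per_block, trial_dur = 24, 2.0   # 24 trials * 2 s = 48 s task blocks
    n_blocks, fix_dur = 4, 20.0             # 4 task blocks, 20 s fixation blocks

    task_time = n_blocks * trials_per_block * trial_dur   # 192 s
    fix_time = (n_blocks + 1) * fix_dur                   # before, between, after
    print(task_time + fix_time)             # 292 s = 146 volumes at TR = 2 s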


>> 3) Adding one more level of complexity to the design above, I may also
>> wish to use different regressors to model different trials from a given
>> trial type (e.g., A preceded by B), depending on the speed with which a
>> subject responds. For example, I might want to model the 20% fastest trials
>> separately from the 20% slowest trials within each trial type. In general,
>> event-related fMRI allows for such "post-hoc" creation of trial types, but
>> I just wanted to check whether anyone sees a problem with this procedure in
>> the designs I am asking about.
>>
>
> This isn't necessarily a problem; however, the post-hoc sorting will
> leave you vastly underpowered. At 128 trials (or 32 per event type), 20%
> would be 6-7 trials - which is not enough to get a stable estimate of the
> BOLD response. Also, depending on the task, you might want to simply
> model RT as the duration of the trial. If you want to look at response
> speed, then you would need 640 trials to get enough power. That would
> give 32 trials in each trial-type-by-RT-quintile cell.
>

*****When you say "if you want to look at response speed", do you mean "if
you want to divide each of the four trial types into 5 quintiles based on
RT?"  If so, I understand why you recommend 640 trials. Out of curiosity,
why do you recommend 32 trials for each "condition" or "trial type"?
What's special about 32?
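
(If it comes to that, the quintile bookkeeping could be done post hoc on the
behavioural log before building the model. A minimal numpy sketch with
hypothetical variable names and simulated RTs:)

    import numpy as np

    # cond: sequential trial type per analyzable trial (0=AA, 1=AB, 2=BA, 3=BB)
    # rt:   response time in seconds (simulated here for illustration)
    rng = np.random.default_rng(0)
    cond = np.repeat(np.arange(4), 160)        # 640 trials -> 160 per trial type
    rt = rng.gamma(shape=4, scale=0.15, size=cond.size)

    # Assign each trial to an RT quintile *within* its own trial type
    quintile = np.zeros_like(cond)
    for c in range(4):
        idx = np.where(cond == c)[0]
        edges = np.quantile(rt[idx], [0.2, 0.4, 0.6, 0.8])
        quintile[idx] = np.digitize(rt[idx], edges)   # 0 = fastest, 4 = slowest

    # 4 trial types x 5 quintiles = 20 cells, ~32 trials each at 640 trials
    print(np.bincount(cond * 5 + quintile, minlength=20))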

>
> As an aside, using a duration of 0 seconds prohibits any PPI analysis on
> the data.
>
>
*** I was thinking that not jittering the ITI might complicate PPI
analyses. Does including rest periods between "task blocks" as you
recommend alleviate this complication?

***Also, why would modeling each trial with a duration of 0 seconds
prohibit any PPI analysis on the data? And what's the remedy? Model each
trial with a constant non-zero duration (e.g., 0.5 s) or with a duration
equal to the RT?
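
(As I understand it, the duration only changes the boxcar that gets
convolved with the HRF before entering the design matrix. A small sketch
with made-up onsets and RTs, contrasting duration = 0 with duration = RT:)

    import numpy as np

    dt = 0.1                                  # high-resolution grid (s)
    onsets = np.array([0.0, 12.0, 24.0])      # made-up trial onsets (s)
    rts = np.array([0.45, 0.80, 0.60])        # made-up reaction times (s)

    n = int((onsets[-1] + 32) / dt)
    stick = np.zeros(n)                       # duration = 0: one impulse per trial
    boxcar = np.zeros(n)                      # duration = RT: a box lasting the RT
    for onset, rt in zip(onsets, rts):
        i = int(round(onset / dt))
        stick[i] = 1.0
        boxcar[i:i + int(round(rt / dt))] = 1.0
    # Either regressor would then be convolved with the HRF and downsampled
    # to the TR grid.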

>
>> Finally, if you've read this far, THANK YOU!!!!!
>>
>
>
> Hope you find this useful.
>

***Very much so!  Thanks!!!!!

>
>
>>
>> Best regards,
>> Daniel
>>
>> --
>> Daniel Weissman, PhD
>> Associate Professor
>> Department of Psychology
>> University of Michigan
>> 1012 East Hall
>> 530 Church Street
>> Ann Arbor, MI 48109
>>
>
>


-- 
Daniel Weissman, PhD
Associate Professor
Department of Psychology
University of Michigan
1012 East Hall
530 Church Street
Ann Arbor, MI 48109