Hi Donghoon, Jonathan, Michael,
We've run our analysis using both methods suggested here, to get a
feel for which is 'best'. In our hands, a well-replicated paradigm
(self-reflection) produced more robust (higher t values) and larger
(higher k values) clusters of activation when we concatenated across
runs, adding linear trend regressors and session block regressors, as
suggested here by Michael. Because we've run this paradigm many times
before, we were able to focus specifically on expected regions of
interest, and saw the following improvements using the concatenated
approach:
Default Mode (baseline > semantic control)
  mPFC:        multiple runs  t = 6.38,  k = 95
               concat. runs   t = 6.47,  k = 183
  post. cing.: multiple runs  t = 7.55,  k = 1279
               concat. runs   t = 9.71,  k = 2220

Self-reflection (self > semantic control)
  mPFC:        multiple runs  t = 5.78,  k = 258
               concat. runs   t = 10.06, k = 2467
  post. cing.: multiple runs  t = 5.18,  k = 129
               concat. runs   t = 8.23,  k = 504
The upshot is that in two well-studied contrasts we see larger swaths
of more robust activation in each expected region when concatenating
across runs, rather than analysing as multiple runs with the
high-pass filter.
I'd be interested to hear whether anybody else has run this type of
comparison on their own paradigms, and whether these differences
hold up,
Joe
On Apr 2, 2009, at 5:41 AM, Jonathan Peelle wrote:
> Hi Donghoon
>
> As a bit of an alternative to what Michael suggests, I think if you
> add each session to SPM's first-level analysis separately you will be
> ok. Session regressors are added automatically, and I think the
> standard highpass filter should take care of slow scanner drift.
>
> You will need a separate regressor for each condition, for each
> session. So if you have 9 sessions, you will have 9 columns in your
> design matrix for each condition. To see the average activation for
> a condition, you perform a contrast across these 9 columns; you can
> then take this contrast image up to the second-level analysis.
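>
> For example, with 9 sessions and (say) 3 conditions entered in the
> same order in every session, the columns for condition 1 sit at
> positions 1, 4, 7, and so on. A rough sketch of the averaging
> contrast in MATLAB (ncond and the column ordering are assumptions
> about your particular design):
>
>   nsess = 9; ncond = 3;
>   c = zeros(1, nsess*ncond);   % one column per condition per session
>   c(1:ncond:end) = 1/nsess;    % weight condition 1 in every session
>   % SPM zero-pads the contrast for the session-constant columns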
>
> If you haven't run your study yet, you may want to see if you can
> reduce the number of sessions. It seems that a 10-15 minute session
> is usually well-tolerated by participants, and it does make the
> analysis somewhat easier if you have fewer sessions to keep track of.
>
> Good luck!
>
> Jonathan
>
>
> On Tue, Mar 31, 2009 at 11:20 PM, Michael T Rubens <[log in to unmask]
> > wrote:
>> Concatenate your sessions, adding a linear drift regressor for each
>> session
>> (linspace(-1,1,nscan)) and a scan block regressor for each session
>> except
>> the last one (ones(nscan,1)). Make sure that your sessions are
>> adequately
>> spaced (i.e., no trial onsets within the last 10 seconds of a session).
>> If you
>> have trial onsets that violate this, you should model them with a
>> separate
>> garbage regressor.
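>>
>> A rough MATLAB sketch of building these regressors (the session
>> lengths in nscans are just an illustration; substitute your own):
>>
>>   nscans = repmat(200, 1, 9);     % e.g. 9 sessions of 200 scans
>>   nsess  = numel(nscans);
>>   total  = sum(nscans);
>>   R = []; offset = 0;             % R: user-specified regressors
>>   for s = 1:nsess
>>       n = nscans(s);
>>       drift = zeros(total, 1);    % linear drift for session s
>>       drift(offset+(1:n)) = linspace(-1, 1, n);
>>       R = [R drift];
>>       if s < nsess                % block regressor, all but last
>>           block = zeros(total, 1);
>>           block(offset+(1:n)) = ones(n, 1);
>>           R = [R block];
>>       end
>>       offset = offset + n;
>>   end
>>   % enter R as user-specified regressors for the one concatenated
>>   % session; onsets violating the spacing rule go in a separate
>>   % garbage regressor as above.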
>>
>> Cheers,
>> Michael
>>
>> --
>> Research Associate
>> Gazzaley Lab
>> Department of Neurology
>> University of California, San Francisco
>>
>>
>> On Tue, Mar 31, 2009 at 3:09 PM, Donghoon Lee <[log in to unmask]>
>> wrote:
>>>
>>> Dear SPMers,
>>>
>>> I have a question about beta estimation for a small number of
>>> trials within a session.
>>> For example, I have 12 conditions and 45 trials per condition in a
>>> rapid event-related design.
>>> I divided the full experiment into 9 runs (sessions), so there are
>>> only 5 trials per condition in each session.
>>> The beta estimation is pretty poor because of the small number of
>>> trials in each session, so activation is also low and varies
>>> across sessions.
>>> How does SPM handle this problem? Should I just run one long
>>> session? That would make participants very fatigued, and signal
>>> drift would be a problem.
>>> How can I estimate betas across all trials in the experiment?
>>>
>>> Best wishes,
>>>
>>> Donghoon
>>>
>>
>>
>>
--
Joe Moran, Ph.D.
Department of Brain & Cognitive Sciences
46-5081, MIT
Cambridge, MA 02139
tel: 617.324.5124
fax: 617.324.5311
email: [log in to unmask]
http://web.mit.edu/gabrieli-lab/People/moran.htm