Yes, the -4 means that it estimates a pre-stimulus baseline, which
you might use to normalize the baseline across conditions. I think
that instead of changing MarsBar to match my tool, you should change
the time window values in my code to match MarsBar (that's what
Daphna did).
cheers
rp
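The pre-stimulus baseline correction described above can be sketched as follows. This is an illustrative Python example, not the toolbox's actual (MATLAB) code; the function name, sampling grid, and numbers are all assumptions:

```python
import numpy as np

def baseline_correct(timecourse, times, base_start=-4.0, base_end=0.0):
    """Subtract the mean pre-stimulus signal from an event-related average.

    timecourse : signal values sampled at `times` (seconds; onset = 0).
    Samples falling in [base_start, base_end) -- e.g. -4 to 0 s,
    matching the toolbox's default window start -- are averaged and
    subtracted, so every condition is referenced to a zero baseline.
    """
    times = np.asarray(times, dtype=float)
    timecourse = np.asarray(timecourse, dtype=float)
    mask = (times >= base_start) & (times < base_end)
    return timecourse - timecourse[mask].mean()

# Toy response sampled every 2 s from -4 s to +8 s around event onset
t = np.arange(-4, 10, 2)              # [-4, -2, 0, 2, 4, 6, 8]
sig = np.array([1.0, 1.0, 1.0, 2.0, 3.0, 2.5, 1.5])
corrected = baseline_correct(sig, t)  # pre-stimulus mean (1.0) removed
```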
On Dec 15, 2005, at 7:35 AM, Steven Charles Lacey wrote:
> Daphna,
>
> I tried changing the averaging window in MarsBar by passing the
> following arguments to event_signal: 'window', window=(-4,24), and
> dt=0.0781. I get an error message saying, basically, that window
> can't have a negative value. In MarsBar, window is the time in
> seconds over which to take the average, so it makes sense that it
> wouldn't accept a negative value. But that raises the question:
> what does -4 mean? And what are the units? Is that 4 seconds before
> the onset of the event? Four seconds before the peak of the
> response? In addition, how did you code it in MarsBar, given that
> MarsBar won't accept a negative value?
>
> Thanks,
> Steve
>
> On Tue, 13 Dec 2005, Daphna Shohamy wrote:
>
>> We recently conducted a series of comparisons between the Poldrack
>> ROI toolbox and the MarsBar ROI toolbox, using a synthetic dataset
>> (for individual-subject data) and importing identical ROIs
>> into both tools (note that MB defines ROIs differently, which
>> might also account for some differences). The brief summary is
>> that both lead to very similar results in the designs I tested,
>> including slow and fast event-related designs, and on- vs. off-TR
>> designs.
>>
>> Some of the differences in the results we found earlier appeared
>> to be a result of
>>
>> (a) differences in the default time windows across which the
>> averaging takes place
>> (the default window in RP's toolbox is -4 to 24 s; in the MB
>> toolbox it is 0 to 26 s; you can change the window in the RP
>> toolbox in spm_roi_graph_full_avg.m; in MB, you can easily define
>> the window either in the GUI or in your script)
>>
>> (b) how the voxels are represented: MarsBar accesses the original
>> data, while the RP toolbox apparently uses SPM-produced parameters
>> to mask the voxels, so sometimes this can differ (Russ might be
>> able to elaborate on this).
>>
>> In any case, in our examinations, when controlling for these two
>> things with synthetic and real data, we found similar results with
>> both tools.
>>
>> Hope this helps,
>>
>> Daphna
>>
>>
>> --
>>
>> Daphna Shohamy, Ph.D.
>> Department of Psychology
>> Stanford University
>> Jordan Hall, Bldg. 420
>> Stanford, California 94305
>> Tel: 650-724-9515
>>
>>
>> On Dec 13, 2005, at 6:46 AM, Steven Lacey wrote:
>>
>>> How could I change the averaging window length to perform the
>>> test myself?
>>> -----Original Message-----
>>> From: Russ Poldrack [mailto:[log in to unmask]]
>>> Sent: Tuesday, December 13, 2005 9:42 AM
>>> To: Steven Lacey
>>> Cc: SPM-mailing list; Daphna Shohamy
>>> Subject: Re: [SPM] Brett's MarsBar and Poldrack's roi toolbox
>>> produce different results
>>> Anthony Wagner's lab has recently examined this in detail. I
>>> believe that the differences arose primarily because of
>>> differences in the length of the default averaging window -
>>> however, I am cc'ing Daphna Shohamy here in case she has anything
>>> to add, as she did those analyses.
>>> cheers
>>> russ
>>> On Dec 13, 2005, at 6:34 AM, Steven Lacey wrote:
>>>> Hi,
>>>> I have data from a rapid presentation event-related fMRI study.
>>>> For each subject I wanted to extract the percent signal change
>>>> (psc) associated with four different event types for a given
>>>> roi. For each subject, I wanted 4 estimates of psc, one for each
>>>> event type. I am using SPM99. Originally, I used Russ Poldrack's
>>>> roi toolbox (version 2.31) to do this, which seemed to work
>>>> fine. The hypothesized effects were present and reliable. Later
>>>> I wanted to extract the psc for each voxel in the roi
>>>> separately. I wasn't sure how to do this using the roi toolbox
>>>> and did it in MarsBar (version 0.38.2) instead. As a sanity
>>>> check, I extracted the average psc from the roi with MarsBar and
>>>> compared the results to that obtained with Poldrack's roi
>>>> toolbox. When averaged across subjects the results were similar,
>>>> but not identical. The differences were generally in the size of
>>>> the effect, not its direction. However, at the individual-
>>>> subject level, not only were the sizes of the effects
>>>> different, but the directions were as well! For some subjects the
>>>> estimates of psc were quite similar between the two techniques,
>>>> but for others there were noticeable differences. The
>>>> correlation between psc estimates averaged across the roi for a
>>>> given event type was 0.4, and the rank orderings of subjects
>>>> based upon psc were different between the two techniques. What
>>>> provides some comfort is that the average effects were close and
>>>> the significance tests worked out about the same. However, the F-
>>>> values for psc calculated with the roi toolbox were larger (e.g.,
>>>> ~17) than those calculated with MarsBar (e.g., ~5).
>>>> Does anyone know what could cause these differences? Are there
>>>> differences in the way each toolbox works that could account for
>>>> them? If so, what are they? I would like some confirmation that
>>>> the differences are properties of the methods each toolbox uses
>>>> and not an error on my part.
>>>> Thanks for any information you can provide,
>>>> Steve
>>> ---
>>> Russell A. Poldrack, Ph.D.
>>> Assistant Professor
>>> UCLA Department of Psychology
>>> Franz Hall, Box 951563
>>> Los Angeles, CA 90095-1563
>>> phone: 310-794-1224
>>> fax: 310-206-5895
>>> email: [log in to unmask]
>>> web: www.poldracklab.org
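A per-subject sanity check like the one described in the quoted message (correlating the two tools' PSC estimates and comparing subject rank orderings) can be sketched in Python; the numbers below are made up purely for illustration:

```python
import numpy as np

# Hypothetical per-subject PSC estimates from two tools (illustrative only)
psc_rp = np.array([0.31, 0.12, 0.45, -0.05, 0.22, 0.38])
psc_mb = np.array([0.28, 0.30, 0.40, 0.10, -0.02, 0.35])

r = np.corrcoef(psc_rp, psc_mb)[0, 1]     # Pearson correlation of estimates
rank_rp = np.argsort(np.argsort(psc_rp))  # rank of each subject under tool 1
rank_mb = np.argsort(np.argsort(psc_mb))  # rank of each subject under tool 2
ranks_agree = np.array_equal(rank_rp, rank_mb)
```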
>>
---
Russell A. Poldrack, Ph.D.
Assistant Professor
UCLA Department of Psychology
Franz Hall, Box 951563
Los Angeles, CA 90095-1563
phone: 310-794-1224
fax: 310-206-5895
email: [log in to unmask]
web: www.poldracklab.org