Another follow-up on this issue. I analyzed the data again from scratch
without concatenating the runs/sessions. The most significant clusters are
quite similar, though there are slight differences in significance and
cluster pattern. The new analysis also seems more "sensitive" in general
(though see below for a potential reason).
However, another issue in the group model setup troubles me. The current
design resembles both examples in the FEAT documentation (Paired Group
Difference, and Multi-Session & Multi-Subject). The first (Paired Group
Difference) requires only a second-level analysis to assess the mean group
difference, which is essentially the setup proposed below to examine the
main effect of IV2 (with ev3-ev7 at the group level modeling out the
paired/repeated measure). The second (Multi-Session & Multi-Subject)
requires a second-level analysis to pool over runs/sessions and a third-
level analysis to examine the group mean. In the setup below, I originally
proposed using the first contrast, the average across the subject means
(ev3-ev7), to examine the main effect of IV1. On second thought, and with
reference to the Multi-Session & Multi-Subject example, I found this
problematic: it does not seem to treat subjects as a random variable, but
instead treats the runs/sessions as random and the subjects as fixed. Am I
right? This might be the reason the results turned out more significant.
Should I therefore run a third-level analysis to get the main effect of IV1
from the second-level cope images? In the setup below I didn't set up a
contrast/cope for each subject's pooled runs/sessions effect, but the PE
maps (pe3-pe7) should be the same, right?
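To make the worry concrete: with one EV per subject at the second level, the
error degrees of freedom come from the 15 run-level inputs rather than from
the 5 subjects. Here is a rough sketch of the bookkeeping, with numbers
taken from the setup quoted below; this is only an illustration of the
fixed- vs random-effects distinction, not FLAME's actual estimation:

```python
# Sketch of the degrees-of-freedom difference between the two setups.
# Numbers mirror the quoted design: 5 subjects x 3 runs (levels of IV2)
# = 15 second-level inputs; 7 group EVs (2 coding IV2, ev3-ev7 one per
# subject). Illustrative only -- not FLAME's mixed-effects machinery.
n_sub, n_lvl = 5, 3
n_inputs = n_sub * n_lvl        # 15 rows in the group design
n_ev = 2 + n_sub                # 7 columns: 2 IV2 EVs + 5 subject EVs

# Averaging over ev3-ev7 within one second-level model: subjects are
# fixed effects, so the residual dof reflect runs, not subjects.
dof_fixed = n_inputs - n_ev     # 15 - 7 = 8

# A separate third-level analysis on each subject's pooled cope tests
# across the 5 subject means, i.e. subjects are the random variable.
dof_random = n_sub - 1          # 5 - 1 = 4

print(dof_fixed, dof_random)
```

The larger error dof in the one-stage setup is consistent with the results
looking "more sensitive" than they should under a random-effects reading.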
If it is indeed the case that I have to set up different stages to examine
the effects of different IVs, that seems inefficient and makes the analysis
and interpretation of the results confusing. Does it imply that we should
NOT have designed the study this way in the first place, with one IV
manipulated within a run/session and the other manipulated across runs?
Thanks again for the feedback.
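For reference, the group design and contrasts from my original message
(quoted below) can be reproduced numerically to check what contrast c1
actually estimates. This is a sketch using plain least squares on
simulated, noiseless inputs (the parameter values are made up and FLAME's
mixed-effects weighting is ignored):

```python
import numpy as np

# Rebuild the 15 x 7 group design from the quoted email: rows are
# 5 subjects x 3 levels of IV2; ev1/ev2 code IV2, ev3-ev7 are subject
# indicators. Purely illustrative (plain OLS, not FLAME).
iv2_code = np.array([[1, 1], [-1, 0], [0, -1]])   # (ev1, ev2) per level
rows = []
for lvl in range(3):
    for sub in range(5):
        subj = np.zeros(5)
        subj[sub] = 1
        rows.append(np.concatenate([iv2_code[lvl], subj]))
X = np.array(rows)                                 # shape (15, 7)

# Simulate noiseless lower-level copes from known (made-up) parameters:
# two IV2 effects followed by five subject means.
beta_true = np.array([0.5, -0.3, 2.0, 1.0, 3.0, 2.5, 1.5])
y = X @ beta_true

beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)

# c1 = 0.2*(ev3+...+ev7): the average of the subject means, i.e. the
# main effect of IV1 carried by this cope.
c1 = np.array([0, 0, 0.2, 0.2, 0.2, 0.2, 0.2])
print(np.isclose(c1 @ beta_hat, beta_true[2:].mean()))   # True
```

The design is full rank, so c1 does recover the average of the subject
means; the question above is only about which variance (run-level or
subject-level) that average is tested against.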
On Sat, 18 Sep 2004 07:57:19 +0100, Stephen Smith wrote:
>Hi - this sounds ok, but your method of generating data for approach 2, by
>splitting up the results of approach 1, is probably not optimal - in
>practice you'll still probably have some effects in the temporal
>filtering, and varying amounts of smoothness induced by the unusual way of
>doing motion correction. Better to compare the approaches, if that's what
>you want to do, by doing each totally separately!
>
>Cheers.
>
>
>On Mon, 13 Sep 2004, Xun Liu wrote:
>
>> Just to follow up on this thread, I finished the analysis of Approach 2
>> below. Besides the conceptual complexity, Approach 2 also took much,
>> much longer (e.g., one cope#.feat took more than 10 hours with 12
>> subjects). However, the results are not much different from those of
>> Approach 1 (with concatenated runs). There are some trivial differences
>> but the major clusters/patterns stay the same, especially after
>> thresholding.
>> One thing should be noted though. To better compare the two approaches,
>> I took the pre-processed time series from Approach 1 (i.e.
>> filtered_func_data -- this is the data after all pre-processing steps
>> but before FILM prewhitening, right?) and split them into separate runs,
>> in order to keep the pre-processing part constant. Though there were
>> some subjects with larger across-run motion, I had already taken care to
>> motion-correct those with two-stage motion correction (within-run and
>> then across-run) when I went with Approach 1. The spatial filter and
>> temporal (high-pass) filter should not be affected either. In terms of
>> the signal intensity differences across runs, I had two constant EVs to
>> model the mean shifts of the second and third runs with regard to the
>> first run in Approach 1.
>>
>> P.S. During the estimation, it sometimes pops up an "ndtri domain error"
>> message as the percentage numbers progress. I think the results are not
>> affected by this, but any idea what the message means?
>>
>>
>> >> On Sat, 4 Sep 2004 09:21:50 +0100, Stephen Smith wrote:
>> >>
>> >> >Hi, yes that all makes sense. Approach 2 is what you want and you're
>> >> >right to take out the constant-EV ev1 from your original email.
>> >> >
>> >> >Cheers.
>> >> >
>> >> >
>> >> >On Tue, 31 Aug 2004, Xun Liu wrote:
>> >> >
>> >> >> On Tue, 31 Aug 2004 21:17:33 +0100, Xun Liu wrote:
>> >> >>
>> >> >> >Concatenate run/session or not?
>> >> >> >
>> >> >> >We have a design with 2 independent variables (IV1 and IV2), each
>> >> >> >with 3 levels. IV1 is manipulated across blocks within a run and
>> >> >> >IV2 is manipulated across runs. There are two approaches I can
>> >> >> >think of to analyze the data for the main effects and interaction
>> >> >> >for this design. What are the advantages and disadvantages of each
>> >> >> >approach, from the conceptual and practical perspectives? Is one
>> >> >> >more valid than the other from the statistics point of view?
>> >> >> >Thanks very much.
>> >> >> >
>> >> >> >
>> >> >> >Approach 2:
>> >> >> >Analyze each run separately and model just the 3 conditions/levels
>> >> >> >of IV1 (ev1, ev2, ev3). Then the main effects of IV1 can be set up
>> >> >> >as below.
>> >> >> >
>> >> >> >Contrast ev1 ev2 ev3
>> >> >> >mean (1) 1 1 1
>> >> >> >IV1 (2) 1 -1 0
>> >> >> > (3) 1 0 -1
>> >> >> > (4) 0 1 -1
>> >> >> >
>> >> >> >And then proceed to the group analysis with the EVs/contrasts set
>> >> >> >up as below for the three levels of IV2 (say for 5 subjects).
>> >> >> >
>> >> >> >The .gfeat folder should then include 4 cope#.feat subfolders, one
>> >> >> >for each of the contrasts from the first level. zstat2 to zstat4
>> >> >> >of cope1.feat will assess the main effect of IV2 (zstat1 is the
>> >> >> >overall grand mean of both IVs against baseline). zstat1 of
>> >> >> >cope2.feat to cope4.feat will assess the main effect of IV1.
>> >> >> >zstat2 to zstat4 of cope2.feat to cope4.feat will assess the
>> >> >> >interactions.
>> >> >>
>> >> >> Group ev1 ev2 ev3 ev4 ev5 ev6 ev7
>> >> >> 1 1 1 1 0 0 0 0
>> >> >> 1 1 1 0 1 0 0 0
>> >> >> 1 1 1 0 0 1 0 0
>> >> >> 1 1 1 0 0 0 1 0
>> >> >> 1 1 1 0 0 0 0 1
>> >> >> 1 -1 0 1 0 0 0 0
>> >> >> 1 -1 0 0 1 0 0 0
>> >> >> 1 -1 0 0 0 1 0 0
>> >> >> 1 -1 0 0 0 0 1 0
>> >> >> 1 -1 0 0 0 0 0 1
>> >> >> 1 0 -1 1 0 0 0 0
>> >> >> 1 0 -1 0 1 0 0 0
>> >> >> 1 0 -1 0 0 1 0 0
>> >> >> 1 0 -1 0 0 0 1 0
>> >> >> 1 0 -1 0 0 0 0 1
>> >> >>
>> >> >> Contrast ev1 ev2 ev3 ev4 ev5 ev6 ev7
>> >> >> c1 0 0 0.2 0.2 0.2 0.2 0.2
>> >> >> c2 1 0 0 0 0 0 0
>> >> >> c3 0 1 0 0 0 0 0
>> >> >> c4 1 -1 0 0 0 0 0
>> >> >>
>> >> >
>> >> > Stephen M. Smith DPhil
>> >> > Associate Director, FMRIB and Analysis Research Coordinator
>> >> >
>> >> > Oxford University Centre for Functional MRI of the Brain
>> >> > John Radcliffe Hospital, Headington, Oxford OX3 9DU, UK
>> >> > +44 (0) 1865 222726 (fax 222717)
>> >> >
>> >> > [log in to unmask] http://www.fmrib.ox.ac.uk/~steve
>> >>