Another way of thinking about this issue is the following:
The equation used is Y = X*B + e, which is solved with B = pinv(X)*Y, leaving the residuals
e = Y - X*pinv(X)*Y.
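As a concrete sketch of this ordinary-GLM case (a toy numpy example, not FSL code; the design and data here are made up):

```python
import numpy as np

# Toy design matrix: intercept plus two regressors, 8 observations.
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(8), rng.normal(size=(8, 2))])
Y = rng.normal(size=8)

B = np.linalg.pinv(X) @ Y   # least-squares estimate of the betas
e = Y - X @ B               # residuals: the "e" above

# The residuals are orthogonal to every column of the design.
print(np.allclose(X.T @ e, 0.0))  # True
```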
This works for any GLM where the errors are independent; however, as
soon as you introduce a repeated measure, the model becomes more
complicated. In a repeated-measures design, the "e" applies to the
within-subject effects; the between-subject effects use a different
error term. When you have two within-subject factors, there are three
within-subject error terms: the factor1 error, the factor2 error, and
the factor1*factor2 error. If you include the subject*factor1 and
subject*factor2 interactions, then "e" is the factor1*factor2 error.
If you don't include these subject terms, then "e" is the error term
pooled across the repeated conditions, which is statistically incorrect,
since the between-subject error can't decrease simply because of the
number of times you measure each subject. In repeated-measures designs,
you want to partition the variance due to each factor, and you need
software that computes the separate error terms and applies each one to
the correct effect. This pooling versus partitioning of the error
variance is the reason that effects can get stronger or weaker with the
same data. In summary, this is the basis of why it's recommended that
you form the contrasts early, and why one needs to be very careful with
within-subject models.
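To see why pooling versus partitioning changes the strength of an effect, here is a toy numpy illustration (made-up data; a paired versus unpaired t statistic stands in for the partitioned versus pooled error term):

```python
import numpy as np

# Made-up repeated-measures data: 6 subjects, 2 conditions.
# Subjects differ widely, but the condition effect is small and consistent.
rng = np.random.default_rng(1)
subj_baseline = np.array([10.0, 20.0, 30.0, 40.0, 50.0, 60.0])
cond1 = subj_baseline
cond2 = subj_baseline + 2.0 + rng.normal(scale=0.2, size=6)
n = len(subj_baseline)

# Pooled error (unpaired t): between-subject variance swamps the error term.
pooled_var = (cond1.var(ddof=1) + cond2.var(ddof=1)) / 2.0
t_pooled = (cond2.mean() - cond1.mean()) / np.sqrt(2.0 * pooled_var / n)

# Partitioned error (paired t): subject effects cancel in the differences,
# so only the within-subject variability remains in the denominator.
d = cond2 - cond1
t_paired = d.mean() / (d.std(ddof=1) / np.sqrt(n))

print(abs(t_paired) > abs(t_pooled))  # True: same data, much stronger effect
```

The same numbers yield a large paired t and a negligible unpaired t, which is exactly the pooled-versus-partitioned difference described above.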
Best Regards, Donald McLaren
=================
D.G. McLaren, Ph.D.
Postdoctoral Research Fellow, GRECC, Bedford VA
Research Fellow, Department of Neurology, Massachusetts General Hospital and
Harvard Medical School
Office: (773) 406-2464
On Tue, Aug 30, 2011 at 10:49 PM, James Porter <[log in to unmask]> wrote:
>
> To throw in my two cents, I'd like to note that I've seen very large differences between running the means of single-subject level contrasts compared to performing the contrast at the group level. Sometimes the difference is just a slightly increased/attenuated map seemingly due to changes in degrees of freedom and clustering thresholds (overlay the unthresholded maps in fslview and you can tell they are essentially identical). Other times, the changes amount to large shifts in activation maps that indicate *very* different apportioning of variance across the two methods. In those cases, viewing the two maps in fslview shows that the results are fundamentally different.
>
> According to Mark Jenkinson (who I have zero reason to second-guess), contrasts should be modeled as early as possible in your analysis pipeline to account for covariances between the EVs. See his reply to my query on this issue a while back:
> https://www.jiscmail.ac.uk/cgi-bin/webadmin?A2=ind0707&L=FSL&P=R35758
>
> -Jim
>
>
> On Tue, 30 Aug 2011 10:45:41 -0500, Jeanette Mumford <[log in to unmask]> wrote:
>
> >Hi,
> >
> >It depends on a few things, including whether you had the same number of
> >trials for each of posHit and posMiss. Basically, it is the difference
> >between calculating the average for two sets of data and then averaging this
> >result versus putting all of the numbers together and averaging them all at
> >once.
> >
> >I can't say which way the result would go or whether they would be that
> >different.
> >
> >Jeanette
> >
> >On Tue, Aug 30, 2011 at 9:18 AM, Courtney Haswell <[log in to unmask]
> >> wrote:
> >
> >> Thanks for the reply. I just want to clarify a simple question. If I
> >> have the positive events separated into two EVs, posHit and posMiss,
> >> and the negative events separated into two EVs, negHit and negMiss,
> >> and I run the contrast at the first level as "1 1 -1 -1" is that the
> >> same thing as having only two EVs with all positive events and all
> >> negative events and running "1 -1" at the first level? The contrast
> >> would be looking at positive events vs. negative events. It seems like
> >> it would be different but I am not sure what the effect would be.
> >>
> >> Thanks!
> >>
> >> On Mon, Aug 29, 2011 at 3:33 PM, Jeanette Mumford
> >> <[log in to unmask]> wrote:
> >> > Hi,
> >> >
> >> > Generally I prefer running all contrasts at the first level. Mostly
> >> because
> >> > if there is any correlation between your regressors, this will be
> >> accounted
> >> > for in your contrast estimate. At the second level, as you've done, they
> >> > are assumed to be independent from each other. Plus, I think it makes
> >> the
> >> > analysis much easier at both levels.
> >> >
> >> > Cheers,
> >> > Jeanette
> >> >
> >> > On Mon, Aug 29, 2011 at 10:48 AM, Courtney Haswell
> >> > <[log in to unmask]> wrote:
> >> >>
> >> >> Hi FSL experts!
> >> >>
> >> >> I have a question about my analysis in FEAT and which level I should
> >> >> combine my EVs at. At the first level I have 6 EVs: posHit, posMiss,
> >> neuHit,
> >> >> neuMiss, negHit, and negMiss and my experiment has 9 runs. The contrast
> >> that
> >> >> I am interested in right now is pos vs. neu (so posHit+posMiss vs
> >> >> neuHit+neuMiss). Currently I model each of these EVs at the first level
> >> and
> >> >> then combine the COPEs at the second level for this contrast. So if
> >> after
> >> >> the first level posHit=COPE1, posMiss=COPE2, neuHit=COPE3, and
> >> >> neuMISS=COPE4, then in the second level I would input run1/COPE1,
> >> >> run2/COPE1, run1/COPE2, run2/COPE2 etc and have the statistical model
> >> like
> >> >> an unpaired t-test instead of an average.
> >> >>
> >> >> My question is if it would be better do just do the comparison at the
> >> >> first level and then average at the second when I combine runs. So if
> >> >> posHit=EV1, posMiss=EV2, neuHit=EV3, and neuMiss=EV4, set up the
> >> contrast
> >> >> for each run as "1 1 -1 -1 0 0." Then at the second level I would just
> >> >> average that COPE across runs. Which is the better way to do this
> >> analysis
> >> >> and is there a difference between doing this at the first or second
> >> level?
> >> >>
> >> >> I can send an example FEAT file if it would make more sense. Thanks in
> >> >> advance!
> >> >
> >> >
> >>
> >