Christian -
These are good questions, and covered to some extent in this Tech Report:
http://www.fil.ion.ucl.ac.uk/spm/doc/biblio/Keyword/ANOVA.html
More specifically, I don't think there is an agreed answer to your question
a): many people prefer to partition the error into parts specific to each
contrast (what you call the "classical RFX" approach - and this can
sometimes sidestep the nonsphericity issue, when the contrast vector has
only 1 df), but others prefer to use a pooled error, because if there IS
only one common source of random error, it will be estimated most
efficiently using all the data available (making allowance for
nonsphericity when necessary). A further issue specific to imaging data -
namely, when using RFT correction for multiple comparisons - is that RFT is
very conservative for low error dfs (<~12), so using all your data and
assuming a pooled error can help if you intend to use RFT (FWE).
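To make the trade-off concrete, here is a minimal sketch (toy simulated data, nothing to do with SPM's actual code) of the two error terms for a single contrast: the contrast-specific error (a one-sample t on the per-subject contrast values, error df = n-1) versus a pooled error from the full model's residuals under an assumed sphericity (error df = (n-1)(k-1)):

```python
# Sketch (not SPM code): contrast-specific vs pooled error for one contrast.
# Hypothetical simulated data: n subjects x k conditions, no true effects.
import numpy as np

rng = np.random.default_rng(0)
n, k = 12, 6                            # subjects, conditions (e.g. 2x3 cells)
Y = rng.normal(size=(n, k))             # toy data matrix

c = np.array([1.0, -1.0, 0, 0, 0, 0])   # e.g. a1 - b1

# (1) Contrast-specific error ("classical RFX"): one-sample t on the
# per-subject contrast values; error df = n - 1.
d = Y @ c
t_specific = d.mean() / (d.std(ddof=1) / np.sqrt(n))
df_specific = n - 1

# (2) Pooled error: residual variance after removing subject and condition
# means (assuming sphericity); error df = (n - 1)(k - 1).
R = Y - Y.mean(axis=1, keepdims=True) - Y.mean(axis=0, keepdims=True) + Y.mean()
sigma2 = (R**2).sum() / ((n - 1) * (k - 1))
t_pooled = (Y.mean(axis=0) @ c) / np.sqrt(sigma2 * (c @ c) / n)
df_pooled = (n - 1) * (k - 1)
```

The pooled version buys many more error dfs (55 vs 11 here), which is why it helps RFT; but if the true error variance differs across contrasts (nonsphericity), the pooled estimate is biased for any one contrast unless a correction is applied.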
As for your question b) - which is far from a silly question - yes, I think
one could put all the data into one model, and partition the error by using
TWO F-contrasts, as I described in an appendix of the above document.
Unfortunately, SPM currently only calculates the F-ratio for a single
F-contrast, i.e. the extra variance explained by the full model relative to
the reduced model (with two F-contrasts, one could calculate the F-ratio
for two different subspaces). But this requires someone to rewrite SPM's
F-contrasts... (JB? ;-)
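For reference, the single-F-contrast computation amounts to an extra-sum-of-squares test between a reduced and a full model. A minimal sketch with a hypothetical toy design (again, not SPM's implementation):

```python
# Sketch: F-ratio as the extra variance explained by the full model
# over a reduced model (extra-sum-of-squares test). Toy design only.
import numpy as np

rng = np.random.default_rng(1)
N = 60
X = np.column_stack([np.ones(N), rng.normal(size=(N, 2))])  # full model
X0 = X[:, :1]                                               # reduced model
y = X @ np.array([1.0, 0.5, 0.0]) + rng.normal(size=N)      # simulated data

def rss(M, y):
    # residual sum of squares after projecting y onto the columns of M
    beta, *_ = np.linalg.lstsq(M, y, rcond=None)
    r = y - M @ beta
    return r @ r

df1 = X.shape[1] - X0.shape[1]     # dimensions tested
df2 = N - X.shape[1]               # error df
F = ((rss(X0, y) - rss(X, y)) / df1) / (rss(X, y) / df2)
```

With two F-contrasts, the idea would be to use one such reduction to define the effect of interest and a second to define the error subspace, rather than always using the full model's residuals in the denominator.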
Rik
>Say we have a 2 x 3 design, where each subject has been tested with two
>factors in a design like this, where A and B are the two levels of one
>factor, and 1 2 3 the three levels of another factor (e.g. A and B could
>be men and women, and 1 2 3 three different types of actions, say
>grasping a simple object, grasping a more complex object and grasping
>an even more complex object):
>
> 1 2 3
>A a1 a2 a3
>B b1 b2 b3
>
>The univariate statistic approach to test that kind of design would be
>to do an anova 2x3, and then do post-hocs.
>
>In spm I thus defined contrasts at the first level that combine the
>beta-weights of each basic condition. I then included these 6
>con images
>into a within subject anova at the second level. I can then calculate
>all the usual effects using Ftests, like main effect, interactions etc.
>
>My puzzlement came when I used a simple t-test to, say, compare
>a2 against b2. If I do that within the ANOVA, the t-test uses the
>unexplained variance of the entire design as the error term for the
>contrast, and I get relatively little activation.
>
>If I were to use a more classical RFX approach, I would define the
>a2-b2 contrast at the first level, and use a one-sample t-test at the
>second level to check if the difference is significant. That uses a
>much smaller unexplained variance, of course.
>
>My motivation for using the ANOVA is in part that it is very convenient
>to use: I can mask the results of one contrast directly with another. If
>a priori I'm interested in certain comparisons more than in others,
>though, I feel that using the unexplained variance of the whole ANOVA
>is not legitimate for testing planned contrasts...
>
>So I have two questions:
>
>a) why use the full unexplained variance in the ANOVA for the t-tests?
>
>b) could SPM incorporate an option for planned contrasts within an
>ANOVA-type design, such that one could bring all the basic
>effects into
>a single second level design, while keeping the statistical power of
>planned comparisons? or is that a silly question?
>
>
>Christian
>
>--
>Christian Keysers, PhD
>Assistant Professor
>
>BCN Neuro-Imaging Center
>University of Groningen
>Antonius Deusinglaan 2 (room 120)
>9713 AW Groningen
>
>Phone: +31 50 3638794
>Fax: +31 50 3638875
>