Dear Luca,

> Are there problems with error terms and degrees of freedom estimation in SPM?

Well, it's about partitioned vs. pooled error terms. Forgive me the Wiki link, but see https://en.wikipedia.org/wiki/Mixed-design_analysis_of_variance#Partitioning_the_sums_of_squares_.26_the_logic_of_ANOVA . SPM relies on a pooled error term, which is not (and never was) a dark secret; see e.g. the description in http://www.fil.ion.ucl.ac.uk/~wpenny/publications/rik_anova.pdf . There was a long discussion at https://www.jiscmail.ac.uk/cgi-bin/webadmin?A2=spm;710df270.1101 .

In short, and mainly following Aaron's last response: for mixed designs, a model with partitioned errors is to be preferred (or an approach equivalent to such a model, like the "series of tests"). For fully within-subject designs, a model with a pooled error term can be okay (= the Flexible factorial). However, the pooled error term relies on certain assumptions, and whether these are met in a particular data set would require some testing (see Rik's last response for a script). Also see the statement in the linked Henson and Penny paper: "This is why, for repeated-measures analyses, the partitioning of the error into effect-specific terms is normally preferred over using a pooled error" (partitioned errors are probably also the default in commonly used statistical packages).

The "series of tests" approach I had suggested for mixed designs corresponds to the two-stage procedure in section 4.3.3 of Henson and Penny, and could be adapted to fully within-subject designs as well.
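To make the pooled-vs.-partitioned distinction concrete, here is a minimal numpy sketch for a simulated 2x2 fully within-subject design (all numbers, variable names, and the simulated data are my own, purely for illustration; this is the textbook sums-of-squares logic, not SPM's actual implementation). With partitioned errors, each effect is tested against its own subject-by-effect interaction; with a pooled error, the interaction terms are combined into one error term with more degrees of freedom:

```python
import numpy as np

def ss(x):
    """Sum of squares of the given deviations."""
    return float((np.asarray(x) ** 2).sum())

# Simulated 2x2 within-subject design: y[subject, A, B] (hypothetical data)
rng = np.random.default_rng(1)
n = 12
y = rng.normal(size=(n, 2, 2)) + np.array([[0.0, 0.3], [0.5, 0.9]])

gm   = y.mean()             # grand mean
m_s  = y.mean(axis=(1, 2))  # subject means
m_a  = y.mean(axis=(0, 2))  # factor A means
m_b  = y.mean(axis=(0, 1))  # factor B means
m_ab = y.mean(axis=0)       # A x B cell means
m_as = y.mean(axis=2)       # subject x A means
m_bs = y.mean(axis=1)       # subject x B means

# Effect and subject-by-effect sums of squares (balanced design)
SS_A   = 2 * n * ss(m_a - gm)
SS_B   = 2 * n * ss(m_b - gm)
SS_AB  = n * ss(m_ab - m_a[:, None] - m_b[None, :] + gm)
SS_AS  = 2 * ss(m_as - m_s[:, None] - m_a[None, :] + gm)
SS_BS  = 2 * ss(m_bs - m_s[:, None] - m_b[None, :] + gm)
SS_ABS = ss(y - m_ab[None] - m_as[:, :, None] - m_bs[:, None, :]
            + m_s[:, None, None] + m_a[None, :, None] + m_b[None, None, :] - gm)

# Partitioned error: test A against its own subject-by-A term
F_A_part = (SS_A / 1) / (SS_AS / (n - 1))      # df = (1, n - 1)

# Pooled error: one combined error term across all interaction terms
SS_pool = SS_AS + SS_BS + SS_ABS
df_pool = 3 * (n - 1)
F_A_pool = (SS_A / 1) / (SS_pool / df_pool)    # df = (1, 3*(n - 1))
```

The pooled F has three times the error degrees of freedom, but it is only valid if the three subject-by-effect variances are comparable, which is exactly the assumption mentioned above.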

> is SPM miscomputing degrees of freedom?

Not really, see above. However, SPM uses the degrees of freedom of the whole model, independent of the contrast. For example, I have seen people run a two-sample t-test and report [1 0] and [-1 0] as (de)activations for group 1. That way you have too many degrees of freedom, of course (those from group 2). It is also quite common to run a Full factorial for fully within-subject designs, thus lacking columns for subjects, which is a bad choice (similar to running a two-sample t-test on repeated data instead of a paired t-test).
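The [1 0] point can be seen in a small numpy/scipy sketch (simulated data and all names are my own, for illustration only): in a two-sample GLM the residual variance is pooled over both groups, so a [1 0] contrast inherits the whole model's degrees of freedom, whereas a test on group 1 alone has only n1 - 1:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n1, n2 = 12, 12
g1 = rng.normal(0.5, 1.0, n1)  # hypothetical "group 1" data
g2 = rng.normal(0.0, 1.0, n2)  # hypothetical "group 2" data

# Two-sample GLM: residuals (and hence the error variance) come from
# BOTH groups, so the [1 0] contrast is tested with df = n1 + n2 - 2.
resid = np.concatenate([g1 - g1.mean(), g2 - g2.mean()])
s2_pooled = (resid ** 2).sum() / (n1 + n2 - 2)
t_glm = g1.mean() / np.sqrt(s2_pooled / n1)
df_glm = n1 + n2 - 2    # 22 here

# A test on group 1 by itself only has df = n1 - 1.
t_one, p_one = stats.ttest_1samp(g1, 0.0)
df_one = n1 - 1         # 11 here
```

The extra error degrees of freedom make the [1 0] test look more sensitive than a group-1-only analysis would justify, which is why reporting such contrasts as within-group (de)activations is misleading.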

> If I'm doing an ANOVA and then doing some post-hoc t or f-tests excluding some Groups and things like that

Well, in principle one should then adjust the models accordingly, i.e. set up separate models for those tests. The same holds for post-hoc tests: they should be based on the correct error terms and degrees of freedom, but I doubt there is an ultimate answer (at least that was my impression when looking at different textbooks on statistics in general; lots of different suggestions).

Best

Helmut