On Thu, 5 Nov 2009 14:50:43 +0000, Conor Wild <[log in to unmask]>
wrote:
<snip>
>As Rik mentions in this post
>(https://www.jiscmail.ac.uk/cgi-bin/wa.exe?A2=SPM;Fr%2BU3w;20060311132958-0000;ind06)
>it is
>possible this arises from the fact that RFT is rather conservative for low
>error df's (<~12, though I have 18?), and using all the data to estimate
>a pooled error could help.
That's interesting; I hadn't thought about that. (Replaced your link w/
permalink. :-) )
>There seem to be a lot of assumptions inherent in
>pooling your error term (e.g. only a single source of error!), and unless
>one is ready to understand and accept them, would it not be best to go the
>classical route of the more conservative partitioned error ANOVA? What do
>most of you out there use?
Most people in the SPM community use the pooled error method, if you
measure by posts to this list.
I'm skeptical of the method, insofar as every ANOVA text I own states that at
best it's justified only under certain assumptions (viz. that certain interaction
terms are small). I myself want to go through the math and see if it "really"
matters, because the pooled method is (a) easier to execute in the context of
SPM, and (b) probably gives "better" p-values. (My impression is that the
actual ratio of mean squares is "worse," but the dof is much better. Whether
one is justified in using the "boost" from the latter, I'm not sure.)
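To make the df point concrete, here's a toy sketch (not SPM code; the F ratio and all df values are hypothetical numbers I made up) of how the same mean-square ratio yields a smaller p-value when evaluated with more denominator df, which is the "boost" a pooled error term buys:

```python
# Toy illustration with hypothetical numbers: the same F ratio evaluated
# against F distributions with different error (denominator) df.
# More denominator df alone shrinks the p-value.
from scipy.stats import f

F_ratio = 3.0    # hypothetical mean-square ratio
df_effect = 2    # hypothetical numerator df

p_partitioned = f.sf(F_ratio, df_effect, 12)  # partitioned error: few df
p_pooled = f.sf(F_ratio, df_effect, 60)       # pooled error: many df

print(f"partitioned (df2=12): p = {p_partitioned:.4f}")
print(f"pooled      (df2=60): p = {p_pooled:.4f}")
```

Of course this only shows the df side of the trade-off; in practice the pooled mean-square ratio itself also changes, which is exactly the part I'd want to work through.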
But maybe my concerns are misplaced, and you certainly wouldn't be wrong in
using the pooled error, insofar as that's the community standard in the SPM
crowd.
>Thank you again for your input!
>
>Conor