An interesting discussion - and I may be suffering delayed after-effects of
the festive season, but I don't think that all the comments here about
pooled error analyses are right. I think there are two issues:

In probably rather simple terms: in a pooled error test, done correctly,
the df are greater but still valid - they are not 'inflated'. The error
term is also greater, being pooled from what would otherwise be separate
partitioned error cells. If sphericity holds (or is corrected for), this
should be just as valid as a partitioned error test. As I understand it, it
involves a pooled *estimate* of an error variance that may (or may not) be
the same for all cells - and this estimate gives a more powerful test, with
more df, if the assumptions hold.
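
To make that concrete, here is a toy numerical sketch - my own illustration
in Python/NumPy, not anything from SPM, with an invented balanced
two-factor within-subject design - comparing the partitioned error term for
one factor against the pooled within-subject error:

import numpy as np

rng = np.random.default_rng(1)
n, a, b = 10, 2, 3                        # subjects; levels of factors A and B
y = rng.normal(0, 1, (n, a, b))           # null data, no true condition effects
y += rng.normal(0, 1, (n, 1, 1))          # random subject offsets

grand = y.mean()
sm = y.mean(axis=(1, 2), keepdims=True)   # subject means
am = y.mean(axis=(0, 2), keepdims=True)   # marginal means of factor A

# Partitioned error for the main effect of A: the subject-by-A interaction.
ya = y.mean(axis=2, keepdims=True)        # subject x A cell means
ss_a = n * b * ((am - grand) ** 2).sum()
ss_sa = b * ((ya - sm - am + grand) ** 2).sum()
F_part = (ss_a / (a - 1)) / (ss_sa / ((n - 1) * (a - 1)))

# Pooled error: all subject-by-condition residual variation lumped together.
c = a * b                                 # total within-subject cells
yc = y.reshape(n, c)
resid = yc - yc.mean(axis=1, keepdims=True) - yc.mean(axis=0) + grand
F_pool = (ss_a / (a - 1)) / ((resid ** 2).sum() / ((n - 1) * (c - 1)))

print(F_part, F_pool)

Both denominators estimate the same error variance if additivity/sphericity
hold; the pooled one just estimates it on (n-1)(ab-1) = 45 df instead of
(n-1)(a-1) = 9.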

If many contrasts per subject are taken to the second level, as in the 3
subjects / 12 levels example, and subject effects are not modelled (i.e.
removed from the error term) as the Flexible Factorial allows, then I think
Roberto is right to suggest that the inference is not to the population,
because between- and within-subject effect estimates are mixed up (the
effect mean square incorrectly contains both). But I don't think this is
because there is a problem with the 'pooled' df per se.
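
A similar toy simulation (again my own invented numbers, not the Flexible
Factorial itself) shows what happens with 3 subjects and 12 levels at the
second level when subject effects are left out of the model:

import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_s, n_l, n_sim = 3, 12, 2000             # subjects, levels, null simulations
rate = [0, 0]                             # rejections without / with subject factor
for _ in range(n_sim):
    # null data plus random subject offsets (between-subject variance)
    y = rng.normal(0, 1, (n_s, n_l)) + rng.normal(0, 2, (n_s, 1))
    grand = y.mean()
    lm = y.mean(axis=0)                   # level (condition) means
    sm = y.mean(axis=1, keepdims=True)    # subject means
    ms_l = n_s * ((lm - grand) ** 2).sum() / (n_l - 1)
    # Subject effects NOT modelled: they stay in the error term.
    ss_e0, df_e0 = ((y - lm) ** 2).sum(), n_l * (n_s - 1)
    # Subject effects modelled: error is the subject-by-level residual.
    ss_e1, df_e1 = ((y - lm - sm + grand) ** 2).sum(), (n_s - 1) * (n_l - 1)
    rate[0] += ms_l / (ss_e0 / df_e0) > stats.f.ppf(0.95, n_l - 1, df_e0)
    rate[1] += ms_l / (ss_e1 / df_e1) > stats.f.ppf(0.95, n_l - 1, df_e1)
print(rate[0] / n_sim, rate[1] / n_sim)

In this balanced case the unmodelled subject variance lands in the error
term and the test for the condition effect drifts conservative (well below
0.05); for other effects the mixing can work in the opposite direction.
Either way, the F statistic no longer refers cleanly to the population.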

If someone (a proper statistician?) can clarify in more precise language,
or correct me, that would be great! I recall a fairly recent discussion on
this in which differing opinions about the validity of this approach were
left somewhat in the air...

cheers

Alexa

On 5 January 2011 16:22, Roberto Viviani <[log in to unmask]> wrote:

>> Right: texts say you can't use the pooled method unless certain
>> interaction terms disappear (i.e., are not significant), I think. Though
>> it would be interesting to see exactly why it's wrong if you use the
>> pooled variance when textbooks say you can't. I can't recall seeing a
>> precise exposition on that, other than that you have a larger error term
>> but also more dof's, which work in different directions.
>>
>
> Yes, additivity must hold, but I hadn't connected that to the df problem.
> Maybe I misunderstood your point: I thought you meant that the dfs are not
> right for inference on the population. In fact, I looked it up, finding that
> the old texts say it isn't inference on the population: under the null and
> additivity, "... the expectation of the treatment mean square is equal to
> the expectation of the error mean square. It should be noted that the
> expectation is not with respect to some infinite population of repetitions
> of the experiment but over the possible randomizations of the experiment."
> (Kempthorne, 1973 edition, p. 129).
>
> Unsurprisingly, if you have 3 subjects and 12 levels, say, and inflate dfs
> at the second level by taking all effect estimates there, the inference is
> no longer on the population. It's Fisher-style with RFT used as a shorthand
> for permutation. Therefore, the standard claim of using second-level
> estimation to account for subjects as a random effect and conduct inference
> on the population is strictly speaking no longer valid in this setup.
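>
> To put rough numbers on that example: with subject effects modelled, the
> condition effect would be tested against (3 - 1) x (12 - 1) = 22 error df;
> taking all 36 subject-level estimates to the second level and fitting only
> the 12 condition means leaves 36 - 12 = 24 residual df, and those extra df
> (and the residual itself) now mix between- and within-subject variation.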
>
> I think the discussions of where the interaction goes tend to conclude
> that it inflates the treatment mean square, but the issue is made somewhat
> obscure by controversies as to whether this interaction should be treated
> as random or not.
>
>
>> Your second point:  I'd have to think about it.  :-)
>>
>
> I empathize, as I tend to feel my brain run out of steam when I try to
> figure out what happens with F tests at the second level. As an
> afterthought, using permutation would cure these doubts of mine as well;
> furthermore, it is explicitly inference on the randomization, not on
> repetitions of the experiment.
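>
> As a minimal sketch of what I mean (my own toy code, nothing to do with
> SPM's implementation): sign-flipping subject-level contrast estimates
> gives inference that is explicitly over the randomization:
>
> import numpy as np
>
> rng = np.random.default_rng(3)
> con = rng.normal(0.0, 1.0, 12)          # one contrast estimate per subject
>
> def t_stat(x):
>     return x.mean() / (x.std(ddof=1) / np.sqrt(len(x)))
>
> t_obs = t_stat(con)
> null = [t_stat(con * rng.choice([-1.0, 1.0], size=len(con)))
>         for _ in range(9999)]           # sign-flip null distribution
> p = (1 + sum(abs(t) >= abs(t_obs) for t in null)) / (1 + len(null))
> print(p)                                # two-sided permutation p-value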
>
> Best wishes,
> Roberto Viviani
> Dept. of Psychiatry, University of Ulm, Germany
>
>
>
>  ________________________________________
>> From: [log in to unmask] [[log in to unmask]]
>> Sent: Tuesday, January 04, 2011 5:12 AM
>> To: Fromm, Stephen (NIH/NIMH) [C]
>> Cc: [log in to unmask]
>> Subject: Re: Design matrix for each or all subject(s)?
>>
>> ...
>>
>>> Part of my reluctance is related to my disagreement with the way
>>> repeated measures are handled by SPM, which is a separate topic.  As
>>>  outlined in "ANOVAs and SPM"
>>>    http://www.fil.ion.ucl.ac.uk/~wpenny/publications/rik_anova.pdf
>>> there's the partitioned variance method and pooled variance method.
>>>  IMHO the pooled variance method (the one commonly used by the SPM
>>> community) is incorrect (because it gets df counting wrong), though
>>> that appears to be a minority opinion.
>>>
>> Well, this is an interesting point, but one that would also apply to F
>> tests conducted in analogous designs in textbook univariate situations.
>> I'd expect there to be something on this in that well-researched (indeed
>> by now dated) literature.
>>
>>
>>> On the other hand, if I recall correctly, there was a thread on the
>>> listserv devoted to the topic of the main effect of group which
>>> implicitly showed that the pooled variance method was indeed faulty.
>>>
>> One thing I'd like to know: where was it ever shown that the smoothness
>> of F maps can be estimated from the residuals? That is, irrespective of
>> the numerator df's? That does not seem intuitive to me. Given that
>> residuals are good for estimating the smoothness of t maps (numerator
>> df = 1), it does not follow that they are good for higher df's. When I
>> look at F maps, they seem different from t maps. This seems relevant to
>> the pooled error idea, which relies on F testing.
>>
>> Best wishes,
>> Roberto Viviani
>> Dept. of Psychiatry, University of Ulm, Germany
>>
>
>


-- 
Dr. Alexa Morcom
RCUK Academic Fellow, University of Edinburgh
http://www.ccns.sbms.mvm.ed.ac.uk/people/academic/morcom.html
