
I think you are probably correct. This isn't an issue of pooled versus
partitioned variance, but of the attribution of error to one factor or
another. In a purely between-subject analysis you have only one error term,
and in a within-subject analysis with one factor you will only have one
error term as well. Where there are both between- and within-subject
factors, or multiple within-subject factors, you have multiple error
terms, as noted in many statistical textbooks.
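To make the error-term bookkeeping concrete, here is a minimal sketch (with made-up illustrative data, not from this thread) partitioning the total sum of squares in a one-way within-subject design, where the single error term is the subject-by-condition interaction:

```python
# Sketch: one-way within-subject ANOVA computed by hand. The single
# error term is the subject-by-condition interaction, obtained by
# subtracting the subject and condition sums of squares from the total.
import numpy as np

data = np.array([[1., 2., 3.],   # rows: subjects, columns: conditions
                 [2., 3., 4.],
                 [0., 2., 4.],
                 [1., 3., 5.]])
n_subj, n_cond = data.shape

grand = data.mean()
ss_total = ((data - grand) ** 2).sum()
ss_subj = n_cond * ((data.mean(axis=1) - grand) ** 2).sum()
ss_cond = n_subj * ((data.mean(axis=0) - grand) ** 2).sum()
ss_error = ss_total - ss_subj - ss_cond  # subject x condition interaction

df_cond = n_cond - 1
df_error = (n_subj - 1) * (n_cond - 1)
F = (ss_cond / df_cond) / (ss_error / df_error)
print(ss_cond, ss_error, F)  # -> 18.0 2.0 27.0
```

With two within-subject factors, the analogous partition yields a separate error term for each effect (A x subject, B x subject, AB x subject), which is the multiple-error-term case described above.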

Best Regards, Donald McLaren
=================
D.G. McLaren, Ph.D.
Postdoctoral Research Fellow, GRECC, Bedford VA
Research Fellow, Department of Neurology, Massachusetts General Hospital and
Harvard Medical School
Office: (773) 406-2464
=====================
This e-mail contains CONFIDENTIAL INFORMATION which may contain PROTECTED
HEALTHCARE INFORMATION and may also be LEGALLY PRIVILEGED and which is
intended only for the use of the individual or entity named above. If the
reader of the e-mail is not the intended recipient or the employee or agent
responsible for delivering it to the intended recipient, you are hereby
notified that you are in possession of confidential and privileged
information. Any unauthorized use, disclosure, copying or the taking of any
action in reliance on the contents of this information is strictly
prohibited and may be unlawful. If you have received this e-mail
unintentionally, please immediately notify the sender via telephone at (773)
406-2464 or email.


On Wed, Jan 5, 2011 at 12:36 PM, Alexa Morcom <[log in to unmask]> wrote:

> An interesting discussion - and I may be suffering delayed after-effects of
> the festive season, but I don't think that all the comments here about
> pooled error analyses are right. I think there are 2 issues:
>
> In probably rather simple terms - in a pooled error test, if done
> correctly, the df are greater but are valid. They are not 'inflated'. The
> error is also greater - being pooled from what would be different
> partitioned error cells. If sphericity holds (or is corrected for), this
> should be just as valid as a partitioned error test. I understand it to
> involve a pooled *estimate* of an error variance that may (or may not) be
> the same for all cells - and this estimate is more powerful, with more df,
> if assumptions hold.
>
> If many contrasts per subject are taken to the 2nd level as in the 3
> subjects/ 12 levels example, and subject effects are not modelled
> (discounted from the error term) as the Flexible Factorial allows, then I
> think Roberto is right to suggest that the inference is not to the
> population, as between- and within- subject effect estimates are mixed up
> (mean square effect contains both, incorrectly). But I don't think this is
> because there is a problem with the 'pooled' df per se.
>
> If someone (a proper statistician?) can clarify in more precise language,
> or correct me, that would be great! I recall a fairly recent discussion on
> this in which some differing opinion about the validity of this approach was
> left somewhat in the air...
>
> cheers
>
> Alexa
>
>
> On 5 January 2011 16:22, Roberto Viviani <[log in to unmask]>wrote:
>
>>>  Right: texts say you can't use the pooled method unless certain
>>> interaction terms disappear (i.e., are not significant), I think. Though it
>>> would be interesting to see exactly why it's wrong if you use the pooled
>>> variance when textbooks say you can't. I can't recall seeing a precise
>>> exposition on that, other than that you have a larger error term but also
>>> more dof's, which work in different directions.
>>>
>>
>> Yes, additivity must hold, but I hadn't connected that to the df problem.
>> Maybe I misunderstood your point: I thought you meant that the dfs are not
>> right for inference on the population. In fact, I looked it up, finding that
>> the old texts say it isn't inference on the population: under the null and
>> additivity, "... the expectation of the treatment mean square is equal to
>> the expectation of the error mean square. It should be noted that the
>> expectation is not with respect to some infinite population of repetitions
>> of the experiment but over the possible randomizations of the experiment."
>> (Kempthorne, 1973 edition, p. 129).
>>
>> Unsurprisingly, if you have 3 subjects and 12 levels, say, and inflate dfs
>> at the second level by taking all effects estimates there, the inference is
>> no longer on the population. It's Fisher-style with RFT used as a shorthand
>> for permutation. Therefore, the standard claim of using second-level
>> estimation to account for subjects as a random effect and conduct inference
>> on the population is strictly speaking no longer valid in this setup.
>>
>> I think the discussions on where the interaction goes tend to conclude that
>> it inflates the treatment mean square, but this issue is made somewhat
>> obscure by controversies as to whether this interaction is random or not.
>>
>>
>>> Your second point:  I'd have to think about it.  :-)
>>>
>>
>> I empathize, as I tend to feel my brain run out of steam when I try to
>> figure out what happens with F tests at the second level. As an
>> afterthought, using permutation would cure these doubts of mine as well;
>> furthermore, it is explicitly inference on the randomization, not on
>> repetitions of the experiment.
>>
>> Best wishes,
>> Roberto Viviani
>> Dept. of Psychiatry, University of Ulm, Germany
>>
>>
>>
>>> ________________________________________
>>> From: [log in to unmask] [[log in to unmask]]
>>> Sent: Tuesday, January 04, 2011 5:12 AM
>>> To: Fromm, Stephen (NIH/NIMH) [C]
>>> Cc: [log in to unmask]
>>> Subject: Re: Design matrix for each or all subject(s)?
>>>
>>> ...
>>>
>>>> Part of my reluctance is related to my disagreement with the way
>>>> repeated measures are handled by SPM, which is a separate topic.  As
>>>>  outlined in "ANOVAs and SPM"
>>>>    http://www.fil.ion.ucl.ac.uk/~wpenny/publications/rik_anova.pdf
>>>> there's the partitioned variance method and pooled variance method.
>>>>  IMHO the pooled variance method (the one commonly used by the SPM
>>>> community) is incorrect (because it gets df counting wrong), though
>>>> that appears to be a minority opinion.
>>>>
>>> Well this is an interesting point, but one that would also apply to F
>>> tests conducted in analogous designs in textbook univariate
>>> situations. I'd expect there should be something on this in that
>>> well-researched (indeed by now dated) literature.
>>>
>>>
>>>>  On the other hand, if I recall correctly, there was a thread on the
>>>> listserv devoted to the topic of the main effect of group which
>>>> implicitly showed that the pooled variance method was indeed faulty.
>>>>
>>> One thing I'd like to know: where was it ever shown that the smoothness
>>> of F maps can be estimated from residuals, irrespective of the numerator
>>> df's? That does not seem intuitive to me. Given that residuals are good
>>> for estimating the smoothness of t maps (numerator df = 1), it does not
>>> follow that they are good for higher df's. When I look at F maps, they
>>> seem different from t maps. This seems relevant to the pooled error idea,
>>> which relies on F testing.
>>>
>>> Best wishes,
>>> Roberto Viviani
>>> Dept. of Psychiatry, University of Ulm, Germany
>>>
>>
>>
>
>
> --
> Dr. Alexa Morcom
> RCUK Academic Fellow, University of Edinburgh
> http://www.ccns.sbms.mvm.ed.ac.uk/people/academic/morcom.html
>
> The University of Edinburgh is a charitable body, registered in Scotland,
> with registration number SC005336
>
>