Eric and others:
I assume the model to test the interaction and condition effects would
also include the group main effect, so that the interaction would be
estimated along with the main effects. However, following the
discussion, one would not use this model (given an improperly computed
H0 mean square) to assess the group effect.
darren
On Mon, Apr 6, 2009 at 12:04 PM, Eric Zarahn <[log in to unmask]> wrote:
> Dear Thomas,
>
>
> Quoting Thomas Stephan <[log in to unmask]>:
>
>> Dear Laura, Lennart, Stephen, Darren,
>>
>> let me try to come back to my original question posted last week:
>> https://www.jiscmail.ac.uk/cgi-bin/webadmin?A2=ind0904&L=SPM&P=R12110
>>
>> Following your discussion, and trying to sum things up, you recommend
>> that I should remove the subject factor from the model when testing for
>> the group effect (i.e. use model 1 from Darren's posting below),
>
>
> No. That model #1 was (copying from below):
> (1) y_ijk = g_j + c_k + gc_jk + e_ijk
>
> What should be used to test for the main effect of group (in the absence of
> code that properly computes H0 mean squares other than the residual mean
> square) is the following model:
>
> (3) y_ij = g_j + e_ij
>
> Note that only group is modeled, and that each observation y_ij is the
> average over all conditions.
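
A minimal numerical sketch of this collapse-then-test procedure (the data, group labels, and effect sizes below are invented for illustration): average each subject's responses over conditions, then run a one-way ANOVA on the resulting subject means.

```python
import random

random.seed(0)
n_subj, n_cond = 10, 3

# Simulated responses y_ijk: n_cond values per subject, for two groups
group_a = [[random.gauss(0.0, 1.0) for _ in range(n_cond)] for _ in range(n_subj)]
group_b = [[random.gauss(0.5, 1.0) for _ in range(n_cond)] for _ in range(n_subj)]

# Step 1: average over conditions, giving one observation y_ij per subject
means_a = [sum(s) / n_cond for s in group_a]
means_b = [sum(s) / n_cond for s in group_b]

# Step 2: one-way ANOVA on the subject means (model: y_ij = g_j + e_ij)
mean_a = sum(means_a) / n_subj
mean_b = sum(means_b) / n_subj
grand = (mean_a + mean_b) / 2
ss_between = n_subj * ((mean_a - grand) ** 2 + (mean_b - grand) ** 2)
ss_within = (sum((y - mean_a) ** 2 for y in means_a)
             + sum((y - mean_b) ** 2 for y in means_b))
df_between, df_within = 1, 2 * n_subj - 2
f_group = (ss_between / df_between) / (ss_within / df_within)
print(f"F(1, {df_within}) for the group main effect = {f_group:.3f}")
```

Because the subject means are independent across subjects, the residual of this collapsed one-way model is the appropriate error term for the group effect.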
>
>
>
>
>> and build a separate
>> model including the subject factor (model 2) to test for condition
>> effects,
>> and for group x condition interactions?
>
>
> Yes, the residual mean square from model 2 would be correct when assessing
> the main effect of condition and group x condition interactions.
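
A sketch of the corresponding partitioned-error F tests for model 2, y_ijk = s_i(j) + g_j + c_k + gc_jk + e_ijk, with invented data (numpy assumed available): the within-subject effects are tested against the subject-by-condition residual, i.e. the model-2 residual mean square.

```python
import numpy as np

rng = np.random.default_rng(1)
n_groups, n_subj, n_cond = 2, 10, 3
# y[j, i, k]: group j, subject i (nested in group), condition k
y = rng.normal(size=(n_groups, n_subj, n_cond))
y[:, :, 1] += 0.8                      # build in a condition effect

grand = y.mean()
subj_means = y.mean(axis=2)            # per-subject means over conditions
cond_means = y.mean(axis=(0, 1))       # per-condition means
cell_means = y.mean(axis=1)            # group x condition cell means
group_means = y.mean(axis=(1, 2))

ss_cond = n_groups * n_subj * ((cond_means - grand) ** 2).sum()
ss_gxc = n_subj * ((cell_means - group_means[:, None]
                    - cond_means[None, :] + grand) ** 2).sum()
# Model-2 residual: the condition-by-subject(group) interaction
ss_err = ((y - subj_means[:, :, None] - cell_means[:, None, :]
           + group_means[:, None, None]) ** 2).sum()

df_cond = n_cond - 1
df_gxc = (n_groups - 1) * (n_cond - 1)
df_err = n_groups * (n_subj - 1) * (n_cond - 1)

f_cond = (ss_cond / df_cond) / (ss_err / df_err)
f_gxc = (ss_gxc / df_gxc) / (ss_err / df_err)
print(f"F condition = {f_cond:.2f}, F group x cond = {f_gxc:.2f}")
```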
>
> Eric
>
>
>
>
>
>
>>
>> But even if so, how can we explain the complex values of the standard
>> deviation, originating from the square root of a negative number, that
>> appear in a situation where SPM's numbers should never be negative? Is
>> this a result of using the wrong model, or is it a bug in SPM?
>>
>> Best,
>> Thomas
>>
>> On Mon, 6 Apr 2009 02:32:38 +0100, Stephen J. Fromm <[log in to unmask]>
>> wrote:
>>
>>> On Sun, 5 Apr 2009 18:55:03 -0400, Eric Zarahn <[log in to unmask]>
>>> wrote:
>>>
>>>> Dear Darren,
>>>>
>>>> Not to supplant Laura's response, but the issue is what the
>>>> appropriate error variance (or H0 mean square) estimator is for a
>>>> given effect, not per se what the full model is.
>>>
>>> Exactly. The model is fine, with the caveat that the residual is not
>>> necessarily the right error term to test against.
>>>
>>>> Your model #2 is
>>>> correct for a design (without replications), but it does not
>>>> explicitly provide the correct H0 mean square estimator for the
>>>> different effects. More particularly, the mean squared residual error
>>>> for model #2 would not in general be the correct H0 mean square
>>>> estimator for the main effect of group.
>>>>
>>>> I do not know if the module in SPM computes the correct H0 mean square
>>>> estimator for the effect of interest (does anyone know the answer?). I
>>>> know packages like SAS and SPSS do when the design is correctly
>>>> specified.
>>>
>>> I'm pretty sure SPM always tests against the residual. Perhaps I'm
>>> mistaken,
>>> but I thought this was the reason why most people using SPM in
>>> multifactorial
>>> models use "pooled errors" rather than "partitioned errors" (cf the
>>> Henson/Penny monograph and a few posts to the list). You actually _can_
>>> correctly partition the error, but you have to do that on your own by
>>> setting up different models such that, in each one, the residual is the
>>> correct error for the effect you want to test. In more generic
>>> packages, this is taken care of for you.
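
One way to set up such a model so that its residual is the correct error term for the within-subject effects is to subtract each subject's mean from its data first. A sketch with invented data (numpy assumed): the residual of a condition + group x condition model fit to subject-demeaned data equals the residual of the full model 2 that includes the subject factor.

```python
import numpy as np

rng = np.random.default_rng(2)
n_groups, n_subj, n_cond = 2, 10, 3
y = rng.normal(size=(n_groups, n_subj, n_cond))
y += rng.normal(size=(n_groups, n_subj, 1))   # add per-subject offsets

# Subtracting each subject's mean removes all between-subject variance
y_within = y - y.mean(axis=2, keepdims=True)

# Residual of a condition + group x condition model on the demeaned data
cell = y_within.mean(axis=1, keepdims=True)   # group x condition means
resid_demeaned = ((y_within - cell) ** 2).sum()

# Residual of model 2 on the raw data (subject and cell effects removed)
resid_full = ((y - y.mean(axis=2, keepdims=True)
                 - y.mean(axis=1, keepdims=True)
                 + y.mean(axis=(1, 2), keepdims=True)) ** 2).sum()

gap = abs(resid_demeaned - resid_full)
print(gap)                                    # equal up to rounding
```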
>>>
>>>> An aside that I think is not irrelevant to this discussion is the
>>>> parametrization of the model. Specifically, if the s_i(j) in model #2
>>>> are not constrained to sum to zero within each group, then the model
>>>> might not be estimable. And if they are, then there might (depending
>>>> on the presence of constraints for the other terms) need to be an
>>>> overall grand mean term in the model. In any case, the specific
>>>> parametrization of the model will affect how one expresses expected
>>>> mean squares under H0.
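
The estimability point can be checked directly: with unconstrained subject dummies, each group column of the design matrix is the sum of that group's subject columns, so the matrix is rank deficient. A sketch with invented dimensions:

```python
import numpy as np

n_groups, n_subj, n_cond = 2, 3, 2             # n_subj subjects per group
subj_idx = np.repeat(np.arange(n_groups * n_subj), n_cond)
group_idx = subj_idx // n_subj

X_subj = np.eye(n_groups * n_subj)[subj_idx]   # one dummy per subject
X_group = np.eye(n_groups)[group_idx]          # one dummy per group
X = np.hstack([X_subj, X_group])               # unconstrained design

# Each group column equals the sum of its subjects' columns,
# so rank(X) < number of columns: the parameters are not all estimable.
rank, n_cols = np.linalg.matrix_rank(X), X.shape[1]
print(rank, n_cols)
```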
>>>>
>>>> Eric
>>>>
>>>>
>>>> Quoting Darren Gitelman <[log in to unmask]>:
>>>>
>>>>> Laura:
>>>>>
>>>>> So are you suggesting that if modeling a repeated measures design with
>>>>> a group (between) and a condition (within) factor the equation (and by
>>>>> implication the design) should be
>>>>>
>>>>> (1) y_ijk = g_j + c_k + gc_jk + e_ijk
>>>>>
>>>>> and not
>>>>>
>>>>> (2) y_ijk = s_i(j) + g_j + c_k + gc_jk + e_ijk ?
>>>>>
>>>>>
>>>>> As far as I can tell looking at books on mixed model designs they say
>>>>> the 2nd equation is the correct one for a repeated measures mixed
>>>>> model design. I think the 1st equation would be correct for a standard
>>>>> factorial ANOVA if one assumes independence between all the measures,
>>>>> but I may be misunderstanding you or misunderstanding these designs.
>>>>>
>>>>> -----
>>>>> Darren Gitelman
>>>>>
>>>>>
>>>>>
>>>>> On Sat, Apr 4, 2009 at 2:18 PM, Laura Menenti
>>>>> <[log in to unmask]> wrote:
>>>>>>
>>>>>> d
>>>>>
>>>>>
>>
>>
>
--
-----
Darren Gitelman