On 15 March 2013 09:56, Brad Mattan <[log in to unmask]> wrote:
> Hello everyone,
>
> I have two stats questions I have been mulling over. I have a 2 x 2
> repeated measures factorial design. The dependent variable is response time
> for each of the four conditions. I want to compare two different groups
> (undergraduate vs mature participants). To do so, I know I should
> standardise the RTs to be able to compare young and old participants.
>
Does that mean it's a 2x2x2 design, or a 2x2 mixed design (with age
group as a between-subjects factor)?
> Question 1) One way of standardising is (after trimming outlier RTs) to
> compute the mean RTs for each participant in each condition. Then, within
> each group (e.g., undergraduates), I can take the 4 * n means as one
> distribution and standardise them accordingly. In this case I would be
> standardising within age group to eliminate inter-group variability.
> Alternatively, I could standardise RTs within each subject, effectively
> eliminating inter-subject variability both within and between groups. After
> discussing this with my supervisor, it seems the latter method is more
> common, but I want to be justified in my choice of standardisation method.
> Does anyone know when one method over the other would be more appropriate?
>
I'd standardise everyone at once - if you standardise within each
group (or each subject), you absorb the group difference (and the
degree of freedom that goes with it) into the standardisation itself.
(Also, the best defence is to try all three ways and see if it makes
a difference. If it doesn't, you've got a defence; if it does, you
can try to understand why. And it's easier to understand why with
concrete data.)
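Here's a minimal Python/pandas sketch of the three options, so you
can compare them side by side. The long data layout, file name, and
column names (subject, group, rt) are my assumptions, not from your
post:

import pandas as pd
from scipy.stats import zscore

# Assumed long-format file: one row per subject x condition mean RT,
# with columns: subject, group, condition, rt.
df = pd.read_csv("rts_long.csv")

# Option A: standardise everyone at once (my suggestion above);
# group differences in overall speed survive standardisation.
df["rt_z_all"] = zscore(df["rt"])

# Option B: standardise within each subject; removes between-subject
# (and hence between-group) differences in overall speed.
df["rt_z_subj"] = df.groupby("subject")["rt"].transform(zscore)

# Option C: standardise within each age group; removes only the
# group-level difference in overall speed.
df["rt_z_group"] = df.groupby("group")["rt"].transform(zscore)

Comparing the three columns condition by condition will show you
quickly whether the choice actually matters for your data.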
> Question 2) I expect a significant main effect for one of my
> within-subjects factors. However, for the elderly group, I expect the
> effect size will be larger. Can anyone point me in the right direction for
> a formal comparison of effect sizes? The best way I have thought to do this
> involves creating average RTs for each level of the factor (e.g. high vs low
> exposure), computing paired t-tests to calculate the effect size of the
> difference between the levels for each subject, and lastly computing a
> t-test to compare the mean effect size between groups.
>
This is the interaction effect. The interaction effect asks "does the
effect in one group differ from the effect in the other group?" (For
a 2 x 2 mixed design, the between-group t-test on within-subject
difference scores you describe is essentially that interaction test.)
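To make that concrete, here's a rough Python sketch of the
difference-score version. The file and column names (subject, group,
high, low) are illustrative assumptions, not from your study:

import pandas as pd
from scipy.stats import ttest_ind

# Assumed: one row per subject, with that subject's mean RT in each
# level of the exposure factor (columns: subject, group, high, low).
means = pd.read_csv("subject_means.csv")

# Within-subject effect of exposure for each participant.
means["effect"] = means["high"] - means["low"]

young = means.loc[means["group"] == "undergrad", "effect"]
mature = means.loc[means["group"] == "mature", "effect"]

# Does the within-subject effect differ between the two groups?
t, p = ttest_ind(young, mature)
print(t, p)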
Repeated measures ANOVA is a horrible, horrible technique which should
be (IMHO) banned. Especially for mixed designs. You're much better off
doing a multilevel model (or a structural equation model). It's much
more flexible, and this sort of group comparison is easier to specify
and understand.
Andy Field's SPSS book covers how to do it. Although you'll struggle
a bit more to start with, you'll find life gets much easier once
you're past that first stage.
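If you'd rather not go via SPSS, here's a sketch of the same idea
using Python's statsmodels instead (the model is what matters, not
the software); again, the file and column names are assumptions:

import pandas as pd
import statsmodels.formula.api as smf

# Assumed long-format data: subject, group, condition, rt.
df = pd.read_csv("rts_long.csv")

# Random intercept per subject; the condition:group interaction
# coefficient is the formal test of whether the condition effect
# differs between undergraduate and mature participants.
model = smf.mixedlm("rt ~ condition * group", data=df,
                    groups=df["subject"])
result = model.fit()
print(result.summary())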
J