On 26 October 2010 12:41, SUBSCRIBE PSYCH-POSTGRADS Anonymous
<[log in to unmask]> wrote:
> Hello everyone,
>
> Can one analyse the statistical significance of an interaction between variables, with non-parametric tests?
No. Next question.
> I have had to use non-parametric tests on a set of reaction-time data, as various assumptions of parametric tests have been violated (for some variables it's non-normal distributions; for another, homogeneity of variance is violated). I have small and uneven samples too. I have run the obvious between- and within-subjects analyses, and I understand the results and can report them OK. However, my graphs of the data show clear (and empirically interesting), albeit small, interactions between some of the variables. If I were able to run an ANOVA, say, the interaction would be analysed and its significance given. However, I can't work out how to check the interaction between the variables having had to use non-parametric tests. Is there a way I should be looking at the differences between the variables, for instance using those differences as new variables and running tests on those to see if the differences are significant? I suspect, given my small samples, that these interactions won't be significant; however, for my thesis it would be useful to discuss these interactions and to have an idea of their effect sizes too, so I do need to run a statistical analysis if I can. I am probably being very dense about this, and I thank you for any help you can offer.
>
OK, here's some more explanation.
When you do a non-parametric test you convert the data to ranks. You
ask "how much higher is a score of 5 than a score of 4?" The answer
is, "I don't know, it's just higher". "How much higher is a score of
78 than a score of 5?" Answer: "I don't know, it's just higher".
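A minimal sketch in pure Python of what the rank conversion throws away (the function name is mine, not a standard one): after ranking, 5 and 78 are both just "one rank above 4", and the distance between scores is gone.

```python
def to_ranks(scores):
    """Replace each score with its 1-based rank (average rank for ties)."""
    sorted_scores = sorted(scores)
    return [sum(i + 1 for i, s in enumerate(sorted_scores) if s == x)
            / sorted_scores.count(x)
            for x in scores]

print(to_ranks([4, 5, 9]))    # [1.0, 2.0, 3.0]
print(to_ranks([4, 78, 9]))   # [1.0, 3.0, 2.0] -- 78 is "just higher"
```

Swapping 5 for 78 changes the data enormously but the ranks barely move, which is exactly why "how much higher" has no answer on ranks.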
When you test an interaction, you say "In group A the effect of X is
10, in group B the effect of X is 20; is 20 significantly higher than
10?" In non-parametric tests, it's just higher, so it can't be a
certain amount higher. If you take differences, then you're being
parametric again, and so you can't do that. (Well, not if your data
are truly non-parametric.)
You can't really plot them, either: are you plotting means or
medians? It's possible for two groups to have the same median and yet
a significant Mann-Whitney test, and it's possible for them to have
the same mean and a significant Mann-Whitney test. (Interestingly,
the Mann-Whitney test isn't really non-parametric, because it's a
test of medians, if the distributions in the two groups are the same
shape.) Well, you can plot them, but doing so makes (almost) the same
assumptions as ANOVA.
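To make the same-median point concrete, here is a hand-rolled sketch of the Mann-Whitney test using the large-sample normal approximation (without the tie correction, so slightly conservative; the function and data are illustrative, not from any library). The two groups below share a median of 5, yet the test comes out significant because the bulk of B sits above the bulk of A.

```python
import math
from statistics import median

def mann_whitney_p(a, b):
    """Two-sided Mann-Whitney p-value via the normal approximation
    (no tie correction -- a conservative sketch, not production code)."""
    combined = sorted(a + b)
    # Average rank for tied values.
    rank = {}
    for v in set(combined):
        positions = [i + 1 for i, x in enumerate(combined) if x == v]
        rank[v] = sum(positions) / len(positions)
    r_a = sum(rank[v] for v in a)           # rank sum for group a
    n1, n2 = len(a), len(b)
    u = r_a - n1 * (n1 + 1) / 2             # U statistic for group a
    mu = n1 * n2 / 2
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (u - mu) / sigma
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Identical medians (both 5), very different distributions.
a = [4] * 10 + [5] + [6] * 10
b = [4.5] * 10 + [5] + [100] * 10
print(median(a), median(b))     # 5 and 5
print(mann_whitney_p(a, b))     # below 0.05
```

So a plot of medians would show the two groups as identical while the test declares them different, which is the trap described above.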
So what do you do? Don't do non-parametric tests; they're horrid.
There are two solutions. Are you doing non-parametric tests because
of the distribution? Then either transform the data or bootstrap (or,
if you've got a large sample size, don't worry about it). Are you
doing non-parametric tests because you have an ordered categorical
variable? Then do ordinal logistic regression. Both of these
approaches will allow you to test an interaction.
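The bootstrap route can be sketched in a few lines of pure Python: resample each cell, compute the interaction as a difference-of-differences, and read off a percentile confidence interval. All of the data and names here are hypothetical placeholders for your own 2x2 cells, assuming the interaction of interest is (effect of X in group B) minus (effect of X in group A).

```python
import random

def bootstrap_interaction(a1, a2, b1, b2, n_boot=10_000, seed=1):
    """Percentile-bootstrap 95% CI for the interaction, defined as the
    difference-of-differences: (mean b2 - mean b1) - (mean a2 - mean a1).
    A sketch with made-up cell names, not a canned procedure."""
    rng = random.Random(seed)
    def mean(xs):
        return sum(xs) / len(xs)
    def resample(xs):
        return [rng.choice(xs) for _ in xs]
    stats = sorted(
        (mean(resample(b2)) - mean(resample(b1)))
        - (mean(resample(a2)) - mean(resample(a1)))
        for _ in range(n_boot)
    )
    return stats[int(0.025 * n_boot)], stats[int(0.975 * n_boot)]

# Hypothetical reaction times (ms): effect of X is ~10 in A, ~40 in B.
a1 = [400, 410, 395, 405, 420]; a2 = [412, 418, 402, 419, 428]
b1 = [400, 408, 399, 403, 415]; b2 = [438, 452, 441, 447, 460]
lo, hi = bootstrap_interaction(a1, a2, b1, b2)
print(lo, hi)   # a 95% CI excluding 0 supports a real interaction
```

No distributional assumptions are needed beyond the data being representative, which is what makes the bootstrap a way around the problems that pushed you to ranks in the first place.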
Jeremy