Hi Roberto,
> you may find even more of this stuff against correction in Andrew Gelman's
> blog (he's a professor of statistics at Columbia),
> http://www.stat.columbia.edu/~gelman/blog/.
I haven't read his blog, but I actually saw Andrew talk when he came
to London School of Economics recently -- he said something along the
following lines:
"I don't care about multiple comparison correction, because I've never
made a type I error in my life -- I've never studied anything which is
*exactly* zero ... I've also never made a type II error, because I've
*never claimed* that anything was exactly zero"
It's an extreme position, and perhaps better suited to socio-political
effects (which may indeed never be truly zero) but arguments along
similar lines appear in Justin Chumbley and Karl Friston's work on
topological FDR, relating to spatially smoothed signals
http://linkinghub.elsevier.com/retrieve/pii/S1053811908006472
> But look again: all these arguments against correcting involve having
> multiple independent variables, not multiple dependent variables.
But you could argue that the more dependency there is, the less
severe the multiple comparisons problem becomes; after all, that's
roughly how RFT and permutation testing are able to do better than
Bonferroni. The arguments for using multivariate statistics are
typically about power. Having said that, I do agree with you, because
it's very difficult for readers to judge dependency, so multiple
comparison corrections which either make no assumptions about
dependence (like Bonferroni) or which attempt to model dependence
(explicitly like RFT or implicitly like P.T.) are probably the safest
bet in cases where there are (statistically and/or scientifically)
dependent comparisons.
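To make the point concrete, here's a small simulation sketch (not from
any of the papers above; all numbers and names are made up for
illustration). It generates null data for m correlated tests, then
compares the family-wise critical |t| from a sign-flipping permutation
max-statistic null against the Bonferroni critical value estimated at
per-test level alpha/m. Under strong dependence the permutation
threshold should come out noticeably lower, i.e. less conservative:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, rho = 200, 20, 0.9  # subjects, tests, dependence strength (all arbitrary)
alpha = 0.05

# Null data with strong dependence between the m tests (shared latent factor)
shared = rng.standard_normal((n, 1))
data = np.sqrt(rho) * shared + np.sqrt(1 - rho) * rng.standard_normal((n, m))

def t_stats(x):
    # One-sample t statistic for each of the m columns
    return x.mean(0) / (x.std(0, ddof=1) / np.sqrt(len(x)))

# Sign-flipping permutation null: record the max |t| over tests for each
# permutation, and pool all |t| values to estimate the single-test null
n_perm = 2000
max_t = np.empty(n_perm)
all_t = np.empty((n_perm, m))
for i in range(n_perm):
    flipped = data * rng.choice([-1.0, 1.0], size=(n, 1))
    t = t_stats(flipped)
    all_t[i] = t
    max_t[i] = np.abs(t).max()

perm_crit = np.quantile(max_t, 1 - alpha)             # FWE threshold from max-|t| null
bonf_crit = np.quantile(np.abs(all_t), 1 - alpha / m)  # Bonferroni: per-test alpha/m

print(f"permutation max-|t| critical value: {perm_crit:.2f}")
print(f"Bonferroni critical value:          {bonf_crit:.2f}")
```

Because the permutation null "sees" the correlation structure, its
max-statistic distribution is much tighter than the worst case
Bonferroni assumes, which is the sense in which permutation testing
(and, analytically, RFT) can do better than Bonferroni here.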
All the best,
Ged