Hi all,
in my dispute with Kevin J. McConway I got the impression that within
the statistics community the power of the bootstrap method and its
application to significance tests is not widely appreciated. Here is
*his* view:
One of my worries about this is that there is no test (parametric or
non-parametric) that is robust against all deviations from the
assumptions it relies on. For instance, standard non-parametric tests
are not at all robust against correlation between successive sample
values. In practice this sort of serial correlation is quite common, I
think. In some situations it is, as far as I know, simply not possible
to produce a test that is sensitive to one sort of difference while
being robust against some other sort of difference. I think this is
true, for example, of tests for scale (i.e. variance). If you want a
good test to compare two variances, AND you do not want it to be
adversely affected by differences in the means of the two populations,
AND you do not want to make some specific parametric assumptions about
the form of the distribution, I think it can be shown that there is no
such test.
Here is *my* view:
What about applying the well-known *bootstrap* method, with the ratio
of variances (larger variance divided by smaller variance) as the test
statistic? This test
- tests the equality of the two variances
- *is not* affected by the means of the two populations
- *does not* make any specific assumptions about the form of the
distribution (unlike e.g. the F-test)
So a test with the mentioned properties indeed *exists*, quod erat
demonstrandum.
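To make the proposal concrete, here is a minimal sketch of one way such
a bootstrap test could be implemented (the function names and the
choice to centre each sample at its own mean before pooling are my
illustration, not something fixed by the argument above). Each sample
is mean-centred, so differences in location cannot affect the result;
the centred values are pooled to enforce the null hypothesis of equal
variances, and the observed variance ratio is compared against the
bootstrap distribution of that ratio:

```python
import random
import statistics

def var_ratio(x, y):
    """Larger sample variance divided by smaller sample variance."""
    vx, vy = statistics.pvariance(x), statistics.pvariance(y)
    return max(vx, vy) / min(vx, vy)

def bootstrap_variance_test(x, y, n_boot=2000, seed=0):
    """Bootstrap p-value for the null hypothesis Var(X) = Var(Y).

    Sketch only: centring each sample at its own mean removes any
    influence of the population means; pooling the centred values
    imposes the null of equal variances on the resamples.
    """
    rng = random.Random(seed)
    observed = var_ratio(x, y)
    cx = [v - statistics.fmean(x) for v in x]
    cy = [v - statistics.fmean(y) for v in y]
    pooled = cx + cy
    exceed = 0
    for _ in range(n_boot):
        bx = [rng.choice(pooled) for _ in range(len(x))]
        by = [rng.choice(pooled) for _ in range(len(y))]
        if var_ratio(bx, by) >= observed:
            exceed += 1
    # +1 correction keeps the p-value strictly positive
    return (exceed + 1) / (n_boot + 1)

# Equal variances but very different means: p should typically be large
rng = random.Random(1)
a = [rng.gauss(0.0, 1.0) for _ in range(50)]
b = [rng.gauss(10.0, 1.0) for _ in range(50)]
p_same = bootstrap_variance_test(a, b)

# Clearly unequal variances (sd 1 vs sd 3): p should be small
c = [rng.gauss(0.0, 3.0) for _ in range(50)]
p_diff = bootstrap_variance_test(a, c)
```

Note that no distributional form is assumed anywhere: the resamples are
drawn from the empirical (centred, pooled) data themselves, which is
exactly what distinguishes this from the F-test.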
Regards,
Volker