Colleagues,
I had not really intended to pose this as an AllStat problem, but at the end
of the day it seemed more efficient than trawling friends and contacts to
ask a favour!
I am looking for advice on Equivalence Testing, and particularly on k-sample
equivalence testing - which I have yet to see referenced anywhere.
I guess my problem is two-fold:
1. I am sceptical of the general philosophy of Equivalence Testing, since I
have never been a great believer in trying to prove Null Hypotheses!
2. In addition, since I have a problem with using Multiple Comparison tests
indiscriminately, the concept of k-sample equivalence testing feels
uncomfortable - but this time because of the lack of specification of the
alternative hypothesis!
I can live with the concept of an Equivalence Test to compare a "treatment"
with a standard or a target when it is phrased as:
I require at least a given level of confidence that the true value of a
"treatment" parameter lies within a given tolerance of a target or a standard.
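For concreteness, that criterion could be sketched as follows - a minimal illustration of my own, not anything from the literature: the function name, the large-sample normal approximation, and the 90% interval (equivalent to two one-sided tests at the 5% level) are all assumptions for the sake of the example.

```python
# Sketch of the one-sample equivalence criterion phrased above:
# declare equivalence with the target only if the whole
# (1 - 2*alpha) confidence interval for the "treatment" mean
# lies inside [target - delta, target + delta].
import math
import statistics

def equivalent_to_target(sample, target, delta, z=1.645):
    """TOST-style check: 90% CI (z = 1.645, large-sample normal
    approximation) contained in the tolerance band."""
    n = len(sample)
    mean = statistics.fmean(sample)
    se = statistics.stdev(sample) / math.sqrt(n)
    lower, upper = mean - z * se, mean + z * se
    return (target - delta) <= lower and upper <= (target + delta)

# Example: a sample centred on the target with small spread
sample = [9.9, 10.1, 10.0, 10.2, 9.8, 10.0, 10.1, 9.9, 10.0, 10.0]
print(equivalent_to_target(sample, target=10.0, delta=0.5))  # → True
```

Note that this "proves" the null only in the inverted sense: the role of null and alternative is swapped, so the burden of proof falls on demonstrating closeness.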
Moving to k-sample testing, I find it harder to claim that ALL the
treatments fall within such a tolerance region. I know as well as you that a
non-significant F test in an Analysis of Variance does not mean that all the
treatments are the same. My problem is that testing only the largest deviation
from the target - using the appropriate order-statistic distribution on which
to base the test - is not obviously satisfactory, since the whole concept
needs to take account of the actual pattern of the values of all the
"treatment" means.
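One naive version of the k-sample extension - again my own sketch, with illustrative names and the same large-sample approximation as before - is an intersection-union construction: apply the one-sample check to every treatment and declare k-sample equivalence only if all pass. Since the overall claim holds only when the worst-deviating group passes, the stated level is preserved without a multiplicity correction, though the procedure can be very conservative, which is precisely the discomfort about ignoring the pattern of the means.

```python
# Intersection-union sketch of k-sample equivalence with a target:
# every group's (1 - 2*alpha) CI must lie inside
# [target - delta, target + delta].  Large-sample normal
# approximation; all names here are illustrative.
import math
import statistics

def all_equivalent(groups, target, delta, z=1.645):
    """Declare k-sample equivalence only if the group with the
    largest deviation (and hence every group) passes the
    one-sample CI-in-band check."""
    for g in groups:
        n = len(g)
        mean = statistics.fmean(g)
        se = statistics.stdev(g) / math.sqrt(n)
        if not (target - delta <= mean - z * se
                and mean + z * se <= target + delta):
            return False
    return True

groups = [[9.9, 10.1, 10.0, 10.2, 9.8, 10.0],
          [10.1, 9.9, 10.0, 10.1, 10.0, 9.9]]
print(all_equivalent(groups, target=10.0, delta=0.5))  # → True
```

This only sharpens my question rather than answers it: the decision above depends on each mean separately, not on the joint configuration of the k means, which is exactly the aspect I feel a proper test should exploit.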
I must have missed something - either in my basic logic, or else in the
literature. Any thoughts would be greatly valued.
Kind Regards,
Derek Pike.
Dr Derek J Pike
AB Statistics International, Reading RG31 6NA, Berks, UK
University of Reading, Reading RG6 6FN, Berks, UK