Thanks to all who answered my question.
Summing up the responses:
- most of the people suggested using the F test, which assumes normally
distributed data. As it is a well-known procedure, I will not expand on it.
- some Allstatters suggested different approaches, particularly for the case
of non-normally distributed data. A summary of the responses follows.
********* original question ****************
In a clinical study comparing two drugs I have been asked to compare the
variabilities (the standard deviations) of the scores of a given scale
rather than the means. I have not the faintest idea how to compare two
SDs: has anybody out there any suggestion on how to proceed?
********* selected responses ************
Do you mean you want to compare spread? If the measurements are on a
continuous scale, there are several different methods to compare spread,
of which one of the more trustworthy is the Fligner-Killeen test. See
Conover et al., 1981, Technometrics, which also summarises the many
difficulties in comparing spread. If the measurements are ordinal, the
problem is much harder.
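For readers with SciPy to hand, the Fligner-Killeen test is available as
scipy.stats.fligner. A minimal sketch; the data and variable names below are
invented purely for illustration:

```python
# Fligner-Killeen test for equality of spread (scipy.stats.fligner).
# The two samples are made up: drug_b has visibly larger spread.
from scipy.stats import fligner

drug_a = [12.1, 14.3, 11.8, 15.2, 13.0, 12.7, 14.9, 13.5]
drug_b = [10.2, 18.4, 9.1, 19.7, 8.8, 17.5, 11.0, 20.1]

stat, p = fligner(drug_a, drug_b)
print(f"Fligner-Killeen: statistic={stat:.3f}, p={p:.4f}")
```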
***************
What most respondents will tell you is that the ratio of two
variances (i.e. the square of the ratios of the SDs) has an F
distribution on the null hypothesis, so you can calculate F, and
refer to tables of the F-distribution, with appropriate numbers of
degrees of freedom (generally both substantial) or equivalent
software. It is necessary to divide the larger variance by the
smaller, and interpret this ratio in a 2-tailed manner. Usually
tables list the one-tailed F critical values used for ANOVA etc.
(These are one-tailed with regard to F, though they provide a 2-sided
test for comparison of the means of the original variable.) The one-
tailed p-value then needs to be doubled.
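The procedure just described (larger variance on top, one-tailed p-value
doubled) can be sketched as follows, assuming SciPy and invented data:

```python
# Two-tailed F test on a variance ratio: divide the larger sample
# variance by the smaller and double the one-tailed tail area.
import numpy as np
from scipy.stats import f

a = np.array([12.1, 14.3, 11.8, 15.2, 13.0, 12.7, 14.9, 13.5])
b = np.array([10.2, 18.4, 9.1, 19.7, 8.8, 17.5, 11.0, 20.1])

v_a, v_b = np.var(a, ddof=1), np.var(b, ddof=1)  # sample variances
if v_a >= v_b:
    F, dfn, dfd = v_a / v_b, len(a) - 1, len(b) - 1
else:
    F, dfn, dfd = v_b / v_a, len(b) - 1, len(a) - 1

p_two_sided = min(1.0, 2.0 * f.sf(F, dfn, dfd))  # doubled one-tailed p
print(f"F={F:.3f} on ({dfn}, {dfd}) df, two-tailed p={p_two_sided:.4f}")
```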
HOWEVER, this use of the F-ratio is highly dependent on the
assumption of Gaussian distributional form. It is in reality just as
sensitive to non-normality as to heterogeneity of spread. So it
should only be used if inspection of the data and normal plots
suggest this assumption is a reasonable one.
Distribution-free tests for differences in spread exist, though I
haven't got details to hand. Nevertheless, it would be difficult to
interpret a difference in spread, without trying to say something
about whether a difference in location also exists. You can test for
both shift and difference in spread simultaneously using the 2-sample
Kolmogorov-Smirnov test. This is sensitive against both shift and
change-of-scale alternative hypotheses. But the price you pay for
this is that it is less sensitive for shift than a t or Mann-Whitney
test, and less sensitive for location than an F or equivalent
nonparametric test. To put it another way, a study designed to be
analysed by K-S needs to be considerably larger than you would
normally expect, in order to achieve adequate power.
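The two-sample Kolmogorov-Smirnov test mentioned above is
scipy.stats.ks_2samp; a minimal sketch on invented data:

```python
# Two-sample K-S test: sensitive to differences in both location
# and spread. The samples below are invented for illustration.
from scipy.stats import ks_2samp

x = [12.1, 14.3, 11.8, 15.2, 13.0, 12.7, 14.9, 13.5]
y = [10.2, 18.4, 9.1, 19.7, 8.8, 17.5, 11.0, 20.1]

stat, p = ks_2samp(x, y)
print(f"K-S: D={stat:.3f}, p={p:.4f}")
```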
****************
If the data are Normal, you can do an F test if the groups are
independent, or Pitman's test (which asks whether the difference between
the two variables and their average are correlated) if they are paired. If you
cannot assume a distribution, you have a problem.
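Pitman's test for paired data, as described above, reduces to a correlation
test: compute the within-pair difference and the within-pair average, and
test whether they are correlated (zero correlation corresponds to equal
variances). A sketch with invented paired data, using scipy.stats.pearsonr:

```python
# Pitman's test for equal variances of paired measurements:
# correlate the pairwise difference with the pairwise average.
# The paired samples below are invented for illustration.
import numpy as np
from scipy.stats import pearsonr

x = np.array([12.1, 14.3, 11.8, 15.2, 13.0, 12.7, 14.9, 13.5])  # drug A
y = np.array([11.5, 15.8, 10.9, 17.1, 12.2, 13.4, 16.0, 14.2])  # drug B, same subjects

d = x - y          # within-pair difference
m = (x + y) / 2.0  # within-pair average
r, p = pearsonr(d, m)
print(f"Pitman: r={r:.3f}, p={p:.4f}")
```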
Note that although simple transformations (log, square root, reciprocal)
preserve differences in mean, they do not preserve differences in
variability; otherwise we would not have variance-stabilising
transformations.
****************
There are two methods. If you are confident of normality (unlike tests on
means, the central limit theorem does not come to the rescue here), you
simply take the ratio of the variances, which follows an F distribution
under the null hypothesis.
If you are not so confident, use Levene's test. The usual reference is
Brown and Forsythe (1974), J. Amer. Stat. Assn. 69, 364-376.
'Minitab' and 'SPSS' will do it.
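In SciPy, scipy.stats.levene covers both variants: center='mean' gives the
original Levene statistic, while center='median' gives the Brown-Forsythe
(1974) modification, which is usually recommended for skewed data. A sketch
on invented data:

```python
# Levene's test in its Brown-Forsythe form (center='median').
# The two samples below are invented for illustration.
from scipy.stats import levene

drug_a = [12.1, 14.3, 11.8, 15.2, 13.0, 12.7, 14.9, 13.5]
drug_b = [10.2, 18.4, 9.1, 19.7, 8.8, 17.5, 11.0, 20.1]

stat, p = levene(drug_a, drug_b, center='median')
print(f"Brown-Forsythe: W={stat:.3f}, p={p:.4f}")
```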
***************
Search for the Bartlett Test for equal variances.
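Bartlett's test is scipy.stats.bartlett; note that, like the F test, it is
known to be sensitive to departures from normality. A minimal sketch on
invented data:

```python
# Bartlett's test for equal variances across groups.
# The two samples below are invented for illustration.
from scipy.stats import bartlett

drug_a = [12.1, 14.3, 11.8, 15.2, 13.0, 12.7, 14.9, 13.5]
drug_b = [10.2, 18.4, 9.1, 19.7, 8.8, 17.5, 11.0, 20.1]

stat, p = bartlett(drug_a, drug_b)
print(f"Bartlett: T={stat:.3f}, p={p:.4f}")
```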
***************
Again, thanks to all.
Kind regards
Roberto