A PhD external examiner has stated, in comments for a student, that Pearson's
r cannot be used if the variables involved are not normally distributed.
I have written an article (to appear in Chance) saying that this view is
misguided. The most useful and authoritative reference appears to be
Clarke and Cooke, "A Basic Course in Statistics", p. 328: "Provided X and Y
seem to consist of observations that are symmetrically distributed, we
usually accept the assumption of a joint normal distribution." This
applies, however, only to the step of assigning a significance to r. Using
r as a descriptive measure or screening device does not rely upon the
assumption. We also use r within the regression model, where the x values
are chosen by the experimenter rather than observed as a random variable
(Crow et al., "Statistics Manual", Dover paperback reprint).
Unless someone violently disagrees with the above, my question
to allstat is whether you have come across similar examples of examiners
(or referees of papers) apparently stepping out of their area of
expertise and making dogmatic statements that are not valid but work to
the detriment of the candidate (author).
When this happens, what should be done? Should we have a campaign to
make statistics a true profession? In other words, if you want to give
statistical opinions, you ought to be certified.
R. Allan Reese Email: [log in to unmask]
Associate Manager Direct voice: +44 1482 466845
Graduate Research Institute Voice messages: +44 1482 466844
Hull University, Hull HU6 7RX, UK. Fax: +44 1482 466846
====================================================================
Example of inference: the AA has commented on an examination of accident
statistics showing that women have more accidents than men [per day, per
mile?], but that men, especially those under 25, have more serious
accidents. "The data were American."
Daily Telegraph report, 17 June.