Following on from a thread several months ago regarding the peer review
process, there is a very interesting article in this month's Annals of
Emergency Medicine.
It should be of interest to anyone who has ever had a paper rejected by a
journal, or those (like me) interested in the peer review process itself.
The abstract is below but the full article is also worth a read.
Simon Carley
Anaesthetics / Intensive Care
Stepping Hill Hospital
Stockport
England
[log in to unmask]
Who reviews the reviewers? Feasibility of using a fictitious manuscript to
evaluate peer reviewer performance.
Baxt WG, Waeckerle JF, Berlin JA, Callaham ML. Annals of Emergency Medicine
1998;32:310-317
Study Objective
To determine whether a fictitious manuscript into which purposeful errors
were placed could be used as an instrument to evaluate peer review.
Methods
An instrument for reviewer evaluation was created in the form of a
fictitious manuscript into which deliberate errors were placed in order to
develop an approach for the analysis of peer reviewer performance. The
manuscript described a double-blind, placebo-controlled study purportedly
demonstrating that intravenous propranolol reduced the pain of acute
migraine headache. There were 10 major and 13 minor errors placed in the
manuscript. The work was distributed to all reviewers of Annals of
Emergency Medicine for review.
Results
The manuscript was sent to 262 reviewers; 203 (78%) reviews were returned.
199 reviewers recommended a disposition of the manuscript: 15 recommended
acceptance, 117 rejection, and 67 revision. The 15 who recommended
acceptance identified 17.3% (CI 11.3%-23.4%) of the major and 11.8% (CI
7.3%-16.3%) of the minor errors. The 117 who recommended rejection
identified 39.1% (CI 36.3%-41.9%) of the major and 25.2% (CI 23.0%-27.4%) of
the minor errors. The 67 who recommended revision identified 29.6% (CI
26.1%-33.1%) of the major and 22.0% (CI 19.3%-24.8%) of the minor errors. The
number of errors identified differed significantly across recommended
disposition. 68% of the reviewers did not realise that the conclusions of
the study were not supported by the results.
Conclusion
These data suggest that the use of a preconceived manuscript into which
purposeful errors are placed may be a viable approach to evaluating reviewer
performance. Peer reviewers in this study failed to identify two-thirds of
the major errors in such a manuscript.