Dear Jan,
Thanks for your response. Nothing in my note suggested an either/or
position. My long post and my short response to you describe
historical facts and current practices in reviewing. My intent was to
describe the benefits and problems associated with each of the three
major editorial and conference selection systems.
The three major systems are the selective system, the invitational
system, and the double blind review system. There are also some
hybrid variations and new systems I have not attempted to describe.
Double-blind review is the most widely used in scholarly journals,
followed by a smaller number of invitational and selective systems.
It has been extensively described in earlier posts.
In the selective system, known editors or referees choose among
unsolicited submissions from known authors. Selection is most widely
used in popular magazines and journals of opinion. Selection is also
used for scholarly monograph publishing where publishers select among
proposals supported by external reviewers who are unknown to the
author. Some monograph publishers also invite manuscripts from
specific authors, but this is infrequent. Magazines and journals that
select among uninvited submissions also invite contributors.
Both double blind review and selection are common at conferences,
with differences appearing among fields.
The invitational system is generally used for small conferences and
seminars. In publishing, it is also used for edited anthologies.
Since the vast majority of design and art exhibitions are curated,
the invitational system is the most common form in design and art.
(When single-blind review is used for design and art exhibitions, it
is mostly used for juried competitions.)
My long post explicitly argued for many approaches to conferences and
publishing. As Rachel wrote, approaches and structures must be linked
appropriately to goals. I have worked with all three systems. Each
has virtues and each has flaws.
One issue deserves more careful attention. You write, "If as one
email mentioned it is possible to figure out who wrote a particular
paper submission because of style, subject, attitude or cited
referencing, then the referee is not blind."
I responded to that earlier post by saying that this claim rests on
a mistaken assumption. The fact that many people believe it does not
make it so.
Since this issue has come up again, I will offer a fuller and more
explicit argument to explain why it is unrealistic to believe that
most reviewers can infer author identities.
It is easy to imagine that a reviewer might say, "I know who wrote
this." It is particularly easy to make such a claim if the reviewer
does not actually know the author's identity. In double blind review,
it is impossible to challenge the opinion of anyone who imagines
that he or she knows the identity of an author. I suspect that an
empirical test would reveal an empty claim in the vast majority of
cases.
Few authors in any field are so distinct that the majority of
reviewers can actually draw a correct inference. This requires
expertise in citation patterns and thematic development as well as an
expert sense for style and tone.
Most subject field experts are problematic or mediocre writers and
editors. Some are notoriously bad. I do not see how the sensitivities
that elude them in their own work could suddenly blossom in
evaluating the work they review.
The one exception takes place when authors use parts of their own
widely published manuscripts WITHOUT self-citation. Since such
uncited material must be either plagiarized or the author's own, it
might reveal an author's identity to a particularly well-read
reviewer. The possibility that
an astute reviewer might detect plagiarism explains why careful
authors who cite their own work do so in the third person. They use
the pronoun "I" only to refer to issues and events in the paper under
submission, treating earlier publications as the work of an
independent author.
Since review is blind, use of uncited but recognizable material in an
article should elicit a warning to the editor on possible plagiarism
rather than an inference to identity. The fact that this is rare has
interesting implications.
The ability of reviewers to detect plagiarism has been tested
empirically. The results suggest that most reviewers are insensitive
to plagiarized material, including massive plagiarism from well-known
and widely published material.
Since reviewers rarely identify widely published material by
well-known authors, I do not see why they should be able to identify
the unpublished work of lesser-known authors.
I agree with you completely on the idea of exploring alternatives.
That is what this on-line conference is about.
Best regards,
Ken
--
Ken Friedman, Ph.D.
Associate Professor of Leadership and Strategic Design
Department of Leadership and Organization
Norwegian School of Management
Visiting Professor
Advanced Research Institute
School of Art and Design
Staffordshire University