JiscMail - Email discussion lists for the UK Education and Research communities

PHD-DESIGN Archives

PHD-DESIGN@JISCMAIL.AC.UK

Subject:

Blind Review as Seen by a Referee [Long Post]

From:

Ken Friedman <[log in to unmask]>

Reply-To:

Ken Friedman <[log in to unmask]>

Date:

Mon, 16 Sep 2002 15:11:38 +0200

Content-Type:

text/plain

Parts/Attachments:

text/plain (619 lines)

Blind Review as Seen by a Referee [Long Post]

This post responds to Rosan Chow's inquiry into the blind review
process at Common Ground from the experience of one referee. It also
examines the blind review process in general.

--

Dear Rosan,

John Shackleton and David Durling offered a broad answer to your
questions on the review process and the rejection letter. Several
contributors, including Rachel Cooper, have considered the issues
from an editor's perspective and that of a conference chair. Lubomir
discussed this from a general perspective, and all have noted their
own experiences as authors. I would like to discuss your questions
from a referee's perspective. First, I will discuss a few specific
aspects of Common Ground. Then I will address a broader range of
issues.

Reviewers for most conferences and journals receive a selection of
papers and a questionnaire for each submission.

The Common Ground questionnaire asked ten questions. For each
criterion, we were asked to give a number ranking from 1 to 4. The
reviewer feedback form asked:

Is the paper relevant to this conference?
Are there well defined research questions?
Were clear methods used competently?
Are there clear findings or outcomes?
Does the paper report original work?
Does the title reflect accurately the paper's contents?
Does the abstract reflect accurately the paper's contents?
Are references accurate and complete?
Is the standard of English acceptable?
Are visual sources presented adequately?

Then, we were asked for a recommendation:

(1) Reject
(2) Request major revision
(3) Accept with minor revisions
(4) Accept

There was also a space for added comments to the editors.
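For illustration only, the form can be modeled as a small scoring
routine. The criterion keys and the rule mapping the mean score to a
recommendation are my own assumptions for the sketch, not anything
Common Ground actually used:

```python
# Hypothetical model of a ten-criterion reviewer feedback form,
# each criterion scored 1-4, mapped to an overall recommendation.
# The averaging rule below is an illustrative assumption.

CRITERIA = [
    "relevance", "research_questions", "methods", "findings",
    "originality", "title", "abstract", "references",
    "english", "visual_sources",
]

RECOMMENDATIONS = {
    1: "Reject",
    2: "Request major revision",
    3: "Accept with minor revisions",
    4: "Accept",
}

def summarize(scores: dict) -> str:
    """Map a completed form to a recommendation via the mean score."""
    assert set(scores) == set(CRITERIA), "every criterion must be scored"
    assert all(1 <= v <= 4 for v in scores.values()), "scores run 1-4"
    mean = sum(scores.values()) / len(scores)
    return RECOMMENDATIONS[round(mean)]
```

In practice, of course, editors weigh the criteria and the free-form
comments rather than computing an average; the sketch only shows the
shape of the data a referee returns.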

The Common Ground review process went through several steps. In the
first phase, a small committee of reviewers examined every proposal
in abstract form. Based on the review of abstracts, some proposals
were rejected and a significant number of papers were invited for
submission in full form.

Papers submitted in full form went through a full, standard review by
two referees. In some cases, other experts were asked for an opinion.

There are two forms of blind review: single blind and double blind.
In single-blind review, anonymous referees know the identities of
authors. This is common in book publishing, where publishers send a
manuscript for expert review and commentary. In some cases, the
manuscript includes the full supporting material, including even the
author's biography.

Journals and conferences use double-blind review. In double-blind
review, identities of the author and reviewer are each hidden from
the other. Common Ground used the double-blind review method. In
these notes, I will use the convention of referring to double-blind
review by the simple term, blind review.

As John and David noted, final decisions involved emerging session
themes and subject streams. My sense is that all papers with high
referee scores were invited, and that some papers were also included
because the organizing committee exercised thematic discretion.
Where judgment calls were involved, the organizers sought to balance
issues, approaches, and such factors as geographical spread.

David Durling and I discussed these issues in a general sense. This
is my reflection on our conversations. John and David discuss the
larger framework from their viewpoint as conference chairs.

The phrasing of the rejection letter involves several issues.
Whatever the result of the review process, the editor or chair must
inform the author of the decision.

All conference chairs and editors seek ways to write a friendly and
professional rejection letter. The nicest and easiest way to
summarize the decision is in the kind of statement you received.

Your paper may have been rejected for any number of reasons. One is
poor fit with the specific themes of the conference session tracks.
Another is thematic in the larger sense of the first question on the
reviewer's feedback form: "Is the paper relevant to this
conference?" This question asks whether a paper is relevant to a
large-scale international research conference.

At Common Ground, papers were rejected for many reasons. Some papers
were rejected for several reasons in combination. My sense is that
conceptual, theoretical, and methodological problems were seen as
serious issues. One paper I read would have been a good paper with
major revision, but I doubted that the authors could have done a
major revision in the available time.

The first five questions on the reviewer feedback form involved
substantive issues:

Is the paper relevant to this conference?
Are there well-defined research questions?
Were clear methods used competently?
Are there clear findings or outcomes?
Does the paper report original work?

In contrast, formal issues seem to have been treated gently, with
suggestions and calls for improvement. The final five questions on
the reviewer feedback form involved formal issues:

Does the title reflect accurately the paper's contents?
Does the abstract reflect accurately the paper's contents?
Are references accurate and complete?
Is the standard of English acceptable?
Are visual sources presented adequately?

Richard L. Daft (1995) once wrote a useful article titled, "Why I
Recommended that Your Manuscript Be Rejected and What You Can Do
About It."

Daft studied 111 articles that he had reviewed and recommended for
rejection by the Academy of Management Journal and Administrative
Science Quarterly.

Daft (1995: 167) identified 258 problems in 111 manuscripts. Many
manuscripts were marked by more than one major problem. The most
common problems were,

1. No theory (56)
2. Concepts and operationalization not in alignment (35)
3. Insufficient definition - theory (27)
4. Insufficient rationale - design (27)
5. Macrostructure - organization and flow (26)
6. Amateur style and tone (23)
7. Inadequate research design (22)
8. Not relevant to the field (20)
9. Overengineering (11)
10. Conclusions not in alignment (6)
11. Cutting up the data (5)
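As a quick arithmetic check, the eleven counts above do sum to the
258 problems Daft reports, which works out to roughly 2.3 major
problems per rejected manuscript:

```python
# Tallying Daft's (1995: 167) problem counts: the eleven categories
# should sum to the 258 problems he found in 111 manuscripts.

problem_counts = {
    "No theory": 56,
    "Concepts and operationalization not in alignment": 35,
    "Insufficient definition - theory": 27,
    "Insufficient rationale - design": 27,
    "Macrostructure - organization and flow": 26,
    "Amateur style and tone": 23,
    "Inadequate research design": 22,
    "Not relevant to the field": 20,
    "Overengineering": 11,
    "Conclusions not in alignment": 6,
    "Cutting up the data": 5,
}

total = sum(problem_counts.values())
per_manuscript = total / 111
print(total, round(per_manuscript, 1))  # 258 problems, about 2.3 each
```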

Because design research is an interdisciplinary field, some of these
issues may be irrelevant in judging some submissions. Overall, the
issues are worth considering. If someone were to review all
submissions rejected from Common Ground, I am sure that we would see
a different breakdown and I know we would see problems that are not
on this list.

As it was, nearly sixty reviewers took part in the Common Ground
process. This meant a wide range of experience, an extraordinary
depth of expertise, and a broad spectrum of approaches and
perspectives.

Relatively few submissions to most conferences and journals look like
outstanding proposals to all reviewers. That is why all submissions
went through a first review at the abstract level, followed by two or
three reviews in full submission. Even so, there was reviewer
variation.

Having worked as a reviewer, an editor, and a conference chair, I
have observed that many reasons account for reviewer variation.

The review process involves a wide range of issues. The process
differs from the viewpoints of author, reviewer, and editor or chair.
There are certainly cases in which the process does not work. Despite
this, both Lubomir and David offer sound arguments for the value of
the blind review.

The issue is how the blind review process ought to work when it works
well. In my view, a good review requires an extensive investment of
time by editors, chairs, and reviewers alike. For the reviewer, this
means the kind of extensive annotated discussion that David
mentioned. This is the mark of a good review.

Implicit in this extensive process is a major review apparatus and
intense development work. These are often absent in design research.
We simply lack the number of experienced reviewers we need.

Outstanding reviewers require research experience, research
supervision experience, and editorial experience. Their own research
experience and subject field expertise provides a foundation of
subject and content knowledge. Research supervision experience
involves understanding and sorting out the multiple strands of
theoretical and methodological issues that come together in any
paper, the good as well as the bad. Editorial experience helps a
reviewer to offer careful notes and appropriate suggestions. Even
though I write my notes blind, I understood perfectly well what
Rachel meant when she said that being a referee can sometimes
resemble thesis supervision. Many of the same issues apply in
offering review advice.

One important difference is that my students are free to email at any
time or call me from 8 am to 10 pm any day of the week, while editors
structure my engagement with author queries and letters. Both kinds
of work involve service, but the conditions are different. I serve my
students directly and I am on call when they need me. In contrast, as
a referee, I serve the field. I also serve authors, but I do not
serve authors directly. I work with authors and their work through
the medium of a structured interaction governed by editors or
conference chairs.

Our field has only one long-standing refereed journal, Design
Studies. We have a second outstanding senior journal, Design Issues.
Design Issues uses editorial selection rather than blind review. Most
of the other design research journals are new or relatively new. Of
the new wave of journals, The Journal of Design Management and The
Design Journal are probably the oldest.

We have only in recent years begun holding refereed conferences. As a
result, we do not have as many experienced reviewers with the full
range of experiences we need.

The number of person-hours required for comprehensive reviewing is significant.

The best review process in which I have been involved is that of
MISQ, Management Information Systems Quarterly. The senior editor
assigns articles to an associate editor. The associate editor guides
the process working with the external reviewers. Following the
referee review, the associate editor writes up the reports, makes a
judgment, and refers the material upward to the senior editor for
action.

After the process is complete, the full documentation - including
blind reviews - is available to all parties. The entire process is
also administered with the help of an editorial secretary.

This kind of extensive and intensive process is, in part, why MISQ
has become a premier journal in two fields: management studies and
information studies.

There is a second issue implicit in this kind of process. This is
more extensive use of desk reject than we see in our field. For
Common Ground, I estimate that I spent 6 hours on each paper I
reviewed. In all cases but one, I wrote out full and extensive notes.
One paper was a masterpiece. The second best paper presented the
author's experiences well, while failing to develop external material
adequately.

In my opinion, two of the papers sent to me should have been desk
rejects. They were flawed in terms of substantive claims and
documentation.

This does not mean that I disagreed with the author's views, method,
or interpretation. It means that the author made truth claims that
were false or incorrect or truth claims that could not be judged
because they were simply not supported by the evidence made available
in the paper. These were so flawed on this basic level that such
issues as method hardly came up. Since the chair sent me these
papers, I reviewed them.

A third paper was jumbled and confused, and I recommended rejection
for other reasons. In that case, I used the referee form but declined
to write out my full notes.

My general tendency is to write out full notes in addition to the
referee form. I work my way through the paper, interspersing notes
with the author's text at appropriate points.

While many referees simply fill out the form, others see the
reviewing process as a knowledge-building process and a process that
builds a field. This involves more than what is sometimes labeled the
"gate keeping" process.

As I see it, the gate keeping function is a necessary outcome of
refereeing, but it should be a useful by-product of reviewing, rather
than the focus of the process. It is my view that serving as a
referee or reviewer implies engaging carefully with the content of a
submission. Since this is a required aspect of a serious review, one
may as well write up one's notes for the editors and the authors.

If all reviewers were to do this, we would see much better work in
most fields simply because the editorial process would lead to better
developed articles. As it is, reviewing is sometimes a matter of gate
keeping, and, even worse, it is often a matter of sheer opinionating
by reviewers.

While a review is always a matter of judgment, it is my view that
editorial judgment and referee judgment must be based on reasoned
argument and carefully stated foundations. This takes time. This is
why a field with better reviewers also makes more use of desk reject
by editors and conference chairs. (A desk reject takes place when an
editor simply rejects a paper without passing it on for review.) Good
reviewers are hard to find. A reviewer who writes up full notes and
returns them on deadline is even rarer. Since editors and chairs
value these kinds of reviewers, they generally do not send
questionable submissions unless a potentially promising but possibly
questionable paper raises questions specifically located in a
reviewer's area of expertise.

This process involves judgment. On several occasions, for example,
editors have relied on my review even though my views contrasted with
the others. The main reason for this is that I substantiate my
judgment with reasoned argument.

There are good reviewers in every field. These reviewers nearly
always write up careful, extensive notes. I probably use more time
than most reviewers use because I review across a slightly wider
range of disciplines. As a result, I use each review as an
opportunity to review issues for my own research and learning as well
as for the benefit of authors and editors.

I have seen review notes in several fields, including management,
design research, and information science. I observe that good
reviewers clearly respond to the substantive content of a paper based
on their own subject expertise. Generous reviewers often offer
specific suggestions for improvement.

The field of design research is weak in this area. It is common to
see carefully articulated reviewer notes in the social sciences or
management studies. It is less common in information studies, but
still frequent. In design research, it is far more common to see
simple review forms with numbers filled in. While I have been asked
to edit and review material in the humanities and liberal arts, I
have little knowledge of general reviewing practices in those fields.
I have no idea of editorial and reviewing practices in the natural
sciences or mathematics. Since there are experts from all these
fields among our subscribers, others may be able to shed light on
relevant issues and practices.

The problems we see in reviewing practices for design often have to
do with the age of the field and the process of developing a
research culture.

Even in other fields, however, good reviewing is the result of
careful development and conscious planning by editors and reviewers.

MISQ has attained the standing it has specifically because a series
of successive editors has worked diligently to build a culture that
supports good reviewing and careful, articulate response to authors.

It would be a good thing for our field if more reviewers were to take
their responsibilities seriously enough to share their expertise.

This topic came up last year when Keith Russell asked about reviewing
for Common Ground. I started to write a note on reviewing. The note
grew into an article, and I have been tweaking and polishing it ever
since. I will eventually finish my notes for Keith and make them
available to this list.

What I am trying to do is write a paper that works in two directions.
Seen from one side, it helps reviewers to do a better job. Seen from
the other side, it should help authors to write better articles and
papers.

In the meantime, I have received two interesting forms of real-world
response on my approach to the reviewing process. The first is a best
reviewer award from one of the Academy of Management divisions. The
second is a request from a leading journal to turn my review notes
into a methodology article explaining some of the issues and
confusions surrounding the use of a specific research method.

Your comments on blind review deserve deeper reflection. You assert
that blind review is not blind, and you argue that the reason for
blind review is misplaced scientism. Both these issues deserve
examination.

It sometimes happens that referees think they can guess the
identities of authors. As a referee for Common Ground, I had no
knowledge of authors or of the selection and balancing criteria
applied to any specific paper. I guessed the identity of three
authors, or at least I think I did.

One author's identity was revealed not by self-citation, but by the
careful theoretical and empirical development and robust
intellectual style that characterize all of his work. This
was the first time I reviewed his work, but not the first time I have
read it. His record of publishing in refereed publications suggests
that many others agree with me that his work is worth publishing.
This does not mean that we all agree with his ideas. It means we all
agree that his ideas are well developed and therefore that we must
consider them whether we agree or not. The conference revealed that I
guessed correctly.

I guessed another author based on style of argumentation and a
pattern of inappropriate substantive claims combined with poor
management of sources that I have seen elsewhere.

In a third case, I thought that I identified the author in terms of
subject matter, argumentative style, and citation patterns. The
conference revealed that I was right, but the author did seem to cite
himself.

In all three cases, I informed the chair that I suspected the
identity of the author. My review forms were filled out and my notes
filed. I do not know whether the chair submitted these papers to a
third referee or not, but the conference suggests that other referees
concurred with me in two of three cases.

In great part, blind reviewing is blind. I suspect that many people
who complain about the review process do not have as much experience
of the general situation as they think they have.

While I have serious reservations about the blind review process, my
reservations involve aspects of editorial process and flow. Most
important, they involve cases where authors are asked to respond to
the concerns of two or three referees each of whom conflicts with the
others. This is a far different critique than the idea that blind
review is not blind.

My argument is that blind review may be too blind after the early
decision to accept with revisions. I argue that in these cases,
editors should assign a developmental editor to work with the author
when an article seems promising enough to take forward, while
working to resolve the conflict among reviewers.

My view is that many people who believe they can guess an author's
identity are more often wrong than right.

The second issue that deserves reflection is the central reason that
gave rise to the use of blind review.

You write, "The idea of the 'Blind' refereeing seems to me is a
product of the 'objective' scientific thinking/philosophy that is
used to deny all the personal/institutional values, interests,
feelings, emotions etc involved during the refereeing process."

This is not so.

If you study the history of scientific and scholarly journals, you
will find that there was a time in the development of most fields
when all journals and conferences were under the control of an old
and generally powerful elite. In some fields, this control was so
tight that one editor alone made decisions for the leading journal,
and most others followed his lead. (This is not a sexist use of the
male pronoun. From the seventeenth century through the second
half of the twentieth century, men controlled all major journals and
most minor journals. Even today, past patterns of sexism in the
development of many fields mean that men still outnumber women in
senior journal positions.)

The ability of a few powerful figures to decide what would be
published and what would not established a reasonably clear list of
acceptable and unacceptable authors. This in turn affected the
opportunity of authors to publish in other venues.

The same held true for conferences. Conferences were often small.
Many were invitational. These conferences were frequently exclusive.
The term exclusive does not merely mean that these conferences were
prestigious and influential. It describes the professional and
ideological structure of many academic, scholarly, and scientific
conferences. At many conferences, the organizers did more than decide
who would present. They decided who would be permitted to attend.
They excluded people with whose ideas they disagreed. They frequently
excluded people whom they did not like. It was also common to exclude
people based on gender, race, or religion.

We still hold invitational conferences today, but they are generally
seminars or small, fully financed research conferences.

Large full-field conferences tend to use blind review rather than invitations.

The practice of blind review was developed to allow fields to expand,
and to remove personal prejudice from the selection process. The
process is not perfect. Blind reviewing does not remove ideological
or intellectual bias. In many cases, the process fails to adjust for
the referee's lack of knowledge or other possible flaws.

However, the idea of blind review did NOT primarily arise as "a
product of the 'objective' scientific thinking/philosophy." It began
as a way to seek a form of intellectual objectivity that moved beyond
personal prejudice. You place the term "objectivity" in "quotation
marks" to show that you disapprove of the idea. The idea of
objectivity has another meaning. It involves a sense of ideas and
positions that are distinct from the people who espouse them.

In this sense of the term, objectivity applies to scholarship in
almost all fields that use blind review. This form of objectivity is
not limited to the sciences.

Many fields outside science use the blind review process for the same
reasons that scientists and mathematicians do.

Journals and conferences in the humanities, history, liberal arts,
fine arts, theology, and most other fields use blind review. Blind
review is increasingly used to jury art exhibitions and design
exhibitions.

Interestingly, the practice of blind review for art and design
exhibitions only began in the 1970s. In many cases, it began because
artists and designers demanded it. This demand was often a result of
the well-documented tendency of jurors to discriminate against women
in named submissions while selecting men and women in generally equal
number in blind review.

The blind review model did not emerge as a product of what you label
as scientific thinking.

It was an effort to move beyond the decision power of one powerful
person or a committee of powerful people. The practice is so common
that it is even used by journals and conferences with a strong
post-modernist and deconstructivist leaning. While these journals and
conferences are organized in opposition to the idea of scientific
thinking, they nevertheless use blind review.

Blind review has had many important positive effects. You can see
this when you measure the percentage of journal articles and
conference inclusions by groups that suffered discrimination when
inclusion in journals and conferences rested on individual decisions
by a closed circle of men with common characteristics.

Blind review makes it more difficult to discriminate against people
than it used to be. Together with other factors, this means
that more kinds of people can achieve promotion and tenure in each
field than was possible when only members of an old boys' club made
the decisions.

There are problems with blind review. No one denies this.

Before condemning blind review, you might want to find out what
universities were like in the days BEFORE blind review.

Blind review helped to change the world in good directions. In
addition, it has also helped to improve the quality of scholarship
and science in many fields, and it has expanded perspectives in many
ways even while restricting them in others.

One issue you raise has nothing to do with blind review.

Blind review does NOT "deny all the personal/institutional values,
interests, feelings, emotions etc involved during the refereeing
process."

Authors are free to develop and express values, bring in the
emotions, and describe their feelings. They are not required to
remove their values or their feelings from a submission, just their
names.

It may well be that blind review should be rethought. Rethinking has
three dimensions.

Criticizing the practice as it is now is one dimension. Criticism
requires careful description of the problem as it is. Your
description of the blind review process does not support a robust
critique.

The second dimension of rethinking involves offering an adequate
description of the historical reasons for the practice. If you are
going to bring history into your argument, it is worth examining and
describing the history of journal publishing and blind review.

This history is well documented in general, and it is documented in
detail in many fields. You do not need a thorough historical review
to criticize the blind review process as it is today. However, when
you
argue from history as you do, you should present historical facts
properly.

The third dimension of rethinking involves offering a proposal of where
we ought to go and what we ought to do instead of what we do now. You
have put some good questions forward and opened the floor to ideas.
What do you propose? If we are to rethink these issues, then it is
fair for us to ask where you think we ought to go.

From my perspective, I see a need for more small one-track
conferences such as we had at Ohio and La Clusaz. I like the
single-track conference format better than the omnibus conference
format. When everyone meets in plenum, we develop a richer dialogue
and greater knowledge of each other and emerging threads.

Small invitational research conferences are also particularly
productive. Two good examples stand out. Anders Ekdahl organized an
excellent one-day Nordic conference at Lund University a couple of
years ago. More recently, Staffordshire University hosted a conference
on philosophy in art and design.

Large omnibus conferences such as Common Ground are important for
other reasons. Large conferences allow us all to gather, to meet each
other, and to follow emerging threads according to our interests and
preferences.

Even though I like small conferences better, there are good reasons for both.

These choices among conference formats are linked to the issue of
blind review. In pragmatic terms, I see great difficulty in changing
the practice for large conferences.

Small invitational conferences obviously do not use blind review.

In between, there are probably many approaches. The question is how
to balance opportunities and resources - especially in the light of
the original problems that blind review was created to remedy.

Common Ground had many referees. I would like to hear what others think.

Since you raised the issue, though, you might also respond with
positive suggestions as well as questions.

How do you suggest we move forward?

Best regards,

Ken


References

Daft, Richard L. 1995. "Why I Recommended that Your Manuscript Be
Rejected and What You Can Do About It." Publishing in the
Organizational Sciences. Second Edition. L. L. Cummings and Peter J.
Frost, editors. Thousand Oaks, California: Sage Publications, 164-182.


--

Ken Friedman, Ph.D.
Associate Professor of Leadership and Strategic Design
Department of Leadership and Organization
Norwegian School of Management

Visiting Professor
Advanced Research Institute
School of Art and Design
Staffordshire University
