Thanks to Terence for sharing his fears that the value of attending
conferences is reduced by 'research quality assessments'.
[There are many related issues here: I'm not convinced that 'research
quality' can be assessed by 'impact factors', 'number of citations'
or the 'number of publications'. Nor am I convinced that the
'research quality' can be valued by governments. Many different
problematic controversies seem to come together. I mention five:
1) Peer-reviewed journals & competition. (Do you approve a paper from
a competitor, giving them the benefit of another good publication?
The selection of peer reviewers must be cross-continental to avoid
such conflicts of interest.)
2) Impact factors of journals/citation indices. (Does this lead to
even more specialised journals with even more repetitive articles?
It's already hard to keep up with the main publications.)
3) Costs of peer-reviewed journals. (The costs of academic journals
are becoming very high for libraries: severe cuts are frequent. This
leads to a reduction in availability, not an increase. And I'm
embarrassed to read that publishers ask substantial amounts of money
for my papers on their websites. Surely a PDF of three A4 pages is
not worth 45 dollars?)
4) Research quality assessments. (Who's assessing? And do they really
know enough about the specific research?)
5) Competitions for funding. (Again, who is assessing the
applications? And who pays for the time that the applications took?)
I think these are all based on the same untenable assumption. It is
not possible to value the quality of research, nor the benefits of
attending a conference, in money or numbers.]
Personally, I would like to have access to three different types of research:
1) Research in progress. Novel ideas, applying methods to other
areas, trying new things. There is no guarantee that anything comes
out of it, but it is likely that some ideas prove fruitful and are
developed further.
2) Research findings. The clear and concise reporting of the
question, approach, data, conclusions and discussion.
3) Handbooks and reviews: compilations of research findings and
meta-analyses. These are the standards, as good as we know them at
the moment.
The format in which these three types are presented is related to this:
1) Research in progress. A direct discussion with colleagues, cynics
and students at conferences and meetings, but also through e-mail
lists, blogs and websites. This sharpens the discussions and the
ideas.
2) Research findings. A combination of digital formats (PDF?) and
paper formats: searchable and 'comfortable to study'. (One thing
that would be really useful for some papers is if the original
digital data were made available too. At least it would then be
possible to check whether the conclusions make sense.) A combination
of websites, PDFs, and printed journals.
3) Handbooks in book form. (Maybe with an accompanying website with
a selection of relevant links and discussion fora.)
If a 'research quality assessment' is required, then I would probably
include criteria such as these:
- Did you present your research in progress to your peers? How? And
were there any reactions?
- How did you make your research results available? Is it easily
accessible to those who are likely to benefit? Which parts are hard
to access or not publicly available?
- How does your research relate to 'best practice' and 'current
knowledge' as described in handbooks and meta-analyses?
The conflict might be between two views: research as an ongoing
activity that continuously generates ideas, data, discussions and
suggestions, discussed in short-term, medium-term and long-term
public venues; and research as 'a project with an end report'. Only
in the latter view can research fit the measurable criteria of
'citation indexes', 'impact factors' and 'financial years'.
Attending conferences is essential, as Catherine Harper pointed out.
Unfortunately, it is in direct conflict with 'measurable quality
assessment' and 'project research'.
Karel van der Waarde
[log in to unmask]