Friends,
I tend to agree with the concerns raised in Jeremy Hunsinger's post and with
David Durling's implicit views on the problem of ranking journals. Metrics
are inherently flawed, though a partial metric system may have some value
within a larger research assessment mechanism.
If one must use metrics, I like the Norwegian system best -- all journals
count, and a select group chosen for superior quality -- roughly 10% --
counts at a higher level.
Schemes that attempt to divide journals into narrow bands are destined to
produce prejudicial information and flawed evaluation.
We undertook our study for Australia as a protective measure, a way to
ensure that scholars in design research get proper credit for publishing in
the journals that a broad sampling of our field regards as core. By
default, we were obliged to fit it to the Excellence in Research for
Australia (ERA) structure.
We are now going to undertake a few additional studies to flesh out the
meaning of the earlier responses. In our full report, we intend to address
the questions and problems inherent in metrics, and to argue for strong use
of other measures.
Sometimes one acts out of necessity. In our case, we acted to keep design
and design research in the mix on terms set by a wide sample of designers
and design research scholars rather than on terms set by scholars in other
fields.
In some cases, it should be possible to kill unworthy initiatives. This
seems to be such a case.
Best regards,
Ken Friedman