>journal usage statistics might form the basis of a new metric
>of journal quality.
>
>>What usage statistics do not currently do is provide a
>comparative measure of journal quality or value.
* This is a very welcome initiative, but any measure of "quality" must
in some way include both qualitative and empirical data. Obtaining
accurate qualitative information is difficult.
The eVALUEd team have done some great work here:
http://www.evalued.uce.ac.uk/
The FREQUENCY of document access, derived from web logs and reported by
COUNTER, was never intended to measure a user's perception of the value
that a collection might offer. For that, a different type of information
based on opinion is needed.
But converting a user's opinion (from, say, a focus group) into a
measurable, accurate value that's useful for making library decisions is
not easily done (cf. COUNTER stats).
However, if it can be demonstrated that just a few downloads from an
expensive (humanities?) journal have contributed to securing significant
research funding, or have resulted in more first-class degrees, then
such "value" can be clearly measured and, importantly, reported to the
university finance committee. Such tracking would be very difficult, I
guess?
How do we combine these two kinds of data - qualitative/quantitative -
to give a meaningful result, which can be used with confidence when
making library decisions? Will this UKSG project help to do this?
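As a toy illustration of one way the two kinds of data might be combined
(the function name, weights and scales below are my own assumptions, not
anything defined by COUNTER or the UKSG project):

```python
# Hypothetical sketch: blend a quantitative COUNTER figure with a
# qualitative survey score into one composite "value" number per title.
# The 50/50 weighting and the 0-5 survey scale are illustrative only.

def composite_value(downloads, max_downloads, survey_score,
                    max_score=5, weight=0.5):
    """Blend normalised usage and opinion into a 0-1 value score."""
    usage_part = downloads / max_downloads    # 0-1, from COUNTER stats
    opinion_part = survey_score / max_score   # 0-1, from e.g. a focus group
    return weight * usage_part + (1 - weight) * opinion_part

# Example: a heavily used title with middling opinion scores
print(round(composite_value(8000, 10000, 3.5), 2))  # 0.75
```

The hard part, of course, is not the arithmetic but justifying the
weights - which is exactly the qualitative problem described above.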
"If I was guided solely by usage statistics, I would
>cancel all my subscriptions to humanities journals, which tend
>to publish far fewer issues per year than the monster science titles".
* Thankfully there is no substitute for a well-informed subject
librarian, but it's important to compare like with like. Comparing your
STM titles with your Humanities titles will introduce contextual bias.
Maybe use a factor to equalise the bias?
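As a minimal sketch of what such an equalising factor might look like -
simply dividing raw downloads by issues published per year (my own
assumption; any per-issue or per-article denominator would serve):

```python
# Hypothetical equalising factor: compare titles on downloads per issue
# rather than raw totals, so a 3-issue humanities journal isn't unfairly
# judged against a 24-issue STM title. Figures are invented.

def downloads_per_issue(total_downloads, issues_per_year):
    return total_downloads / issues_per_year

stm_title = downloads_per_issue(12000, 24)        # 500.0 per issue
humanities_title = downloads_per_issue(1500, 3)   # 500.0 per issue
print(stm_title == humanities_title)              # equal use once normalised
```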
Collecting a large national dataset and applying benchmarking
techniques would give a more informed framework for the interpretation
of COUNTER statistics across all subjects. Analysing COUNTER usage by
subject discipline across 20 universities may show trends which
influence your decisions. I think JISC is to look at this?
>
>The initiative launched today by the UKSG sets out to see if
>it will be practical to address this situation in a way
>similar to the way ISI's Impact Factor seeks to compensate for
>the fact that larger journals will tend to be cited more than
>smaller ones.
* One possible way to separate fact from opinion is to see if there is
any correlation between two variables associated with value: usage data
and impact factors. One measures performance; the other is thought to
measure quality (debatable). The observed correlation between the two
may tell us something about value. We can assign a number indicating the
strength of this relationship for each title. At Newcastle I'm doing
this for the top 10 journals in our STM packages. Trouble is, I'm not
convinced about Impact Factors. (I'm very confident about using COUNTER
stats though!) This new initiative moves away from a dependency on
Impact Factors.
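A sketch of the kind of per-title number described above - a rank
correlation (Spearman's rho, in the simple tie-free form) between
downloads and impact factors, with made-up figures:

```python
# Rank correlation between usage and impact factor across a set of
# titles. Figures are invented; this simple Spearman formula assumes
# no tied values.

def ranks(values):
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0] * len(values)
    for position, index in enumerate(order):
        r[index] = position + 1
    return r

def spearman_rho(x, y):
    n = len(x)
    d_squared = sum((a - b) ** 2 for a, b in zip(ranks(x), ranks(y)))
    return 1 - 6 * d_squared / (n * (n ** 2 - 1))

downloads = [9000, 7000, 5000, 3000, 1000]   # COUNTER full-text requests
impact    = [4.2, 3.1, 2.5, 1.8, 0.9]        # published impact factors
print(spearman_rho(downloads, impact))       # 1.0: perfectly monotone here
```

A value near +1 would suggest heavy use tracks high impact; a value near
0 would suggest the two measures are telling us different things.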
>
>Will this initiative succeed?
* Or, reworking the question: would you be confident that a Usage Factor
for Value would be accurate enough to shape your strategic decisions?
* Depends on the accuracy (or should that be precision?) of the
normalised data from participating libraries.
* If analysis is done correctly, the best measure of
performance/value/anything is one based on verified empirical data, so
the results should be encouraging.
If you like this sort of stuff, why not join lib-stats? Send me a mail
and I'll add you to the list.
[log in to unmask]
Rgds
C.