On Mon, 18 Feb 2008, John Smith wrote:
> This attempt/plan to tune and weight the outcome of metrics to mimic the
> previous "panel rankings in each discipline" begs the simple question:
> Were the panel rankings the best way to judge research and distribute
> these funds anyway?
No: They were just what we have been treating as the best way for 20 years.
If anyone has a tried, tested and validated better way, then by all
means, let us cross-validate the metrics (and, for that matter, the
panel rankings) against that better way.
If not, then let us first cross-validate and initialize the metrics
against what we have been relying on for 20 years, and then work toward
calibrating and optimising them.
> This is making the same mistake that was made when we moved from
> paper-based publishing to net-based publishing and carried across all
> the conventions and limitations of the old system.
Nothing of the sort. The transition to the Net was a natural exploitation
of a newly available medium, and is still in the process of optimization.
The RAE stagnated, expensively and inefficiently, for 20 years and, in
this last parallel exercise, has the unique opportunity to pass the
baton on to metrics by using the old way (panels) to validate and
initialize the new way (metrics).
The first and foremost motivation for the transition is to put an end
to the waste of researcher time and money that could have been better
spent otherwise. And the first step is to make sure the new way can at
least duplicate the old way, which we used unquestioningly for 20 years,
at far greater cost in time and money.
Once the metrics have been validated and initialized against what we
already had, we can think of how to go on to optimise them -- not
before, or instead.
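As a concrete (and purely illustrative) sketch of what cross-validating
and initializing the metrics against the panel rankings could look like,
here is a short Python fragment that fits a separate multiple regression
per discipline and takes the standardised coefficients as initial metric
weights. The metric names, the invented data and the ordinary
least-squares model are assumptions for illustration only, not the
actual RAE data or methodology:

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(0)

    # Hypothetical dataset: one row per submitting department.
    n = 200
    disciplines = rng.choice(["Physics", "History", "Biology"], size=n)
    metrics = pd.DataFrame({
        "citations":      rng.poisson(50, n),
        "downloads":      rng.poisson(500, n),
        "h_index":        rng.poisson(20, n),
        "co_authorships": rng.poisson(10, n),
    })
    # Pretend the panel ranking is a noisy combination of the metrics.
    panel_rank = (0.5 * metrics["citations"]
                  + 0.1 * metrics["downloads"]
                  + rng.normal(0, 10, n))
    data = metrics.assign(discipline=disciplines, panel_rank=panel_rank)

    # Fit one multiple regression per discipline; the standardised
    # coefficients serve as initial per-discipline metric weights.
    for disc, group in data.groupby("discipline"):
        X = (group[metrics.columns] - group[metrics.columns].mean()) \
            / group[metrics.columns].std()
        y = (group["panel_rank"] - group["panel_rank"].mean()) \
            / group["panel_rank"].std()
        fit = sm.OLS(y, sm.add_constant(X)).fit()
        print(disc, "R^2 =", round(fit.rsquared, 2))
        print(fit.params.drop("const").round(2))

In the real exercise the same fit would be run, discipline by
discipline, on the actual RAE 2008 panel rankings and on as rich a
battery of candidate metrics as possible, with the resulting weights
re-estimated in each subsequent assessment.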
> Would it not be better, now we have new forms of measurement, to ask
> what these measurements themselves are telling us about research quality
> or value or importance or whatever dimensions we choose to measure it on,
> and build a better (fairer?) form of assessment?
And exactly how do you propose to ask that? Against what face-valid
criterion are you going to test these metrics?
Harnad, S. (2007) Open Access Scientometrics and the UK Research
Assessment Exercise. In Proceedings of the 11th Annual Meeting of the
International Society for Scientometrics and Informetrics 11(1), pp.
27-33, Madrid, Spain. Torres-Salinas, D. and Moed, H. F., Eds.
http://eprints.ecs.soton.ac.uk/13804/
Stevan Harnad
>> -----Original Message-----
>> From: Repositories discussion list [mailto:JISC-
>> [log in to unmask]] On Behalf Of Stevan Harnad
>> Sent: 15 February 2008 11:59
>> To: [log in to unmask]
>> Subject: Re: RAE/REF research
>>
>> On Fri, 15 Feb 2008, Bevan, Simon wrote:
>>
>>> The research undertaken at Cranfield concerning the differences
>>> between the Research Assessment Exercise and the Research Excellence
>>> Framework, discussed in the Times Higher this week, is available at:
>>>
>>> https://dspace.lib.cranfield.ac.uk/bitstream/1826/2248/1/Citation%20Counts-RAE%20Scores-2008..pdf
>>
>> This pilot study has some methodological weaknesses.
>>
>> The remedy for the ostensible problems encountered in this study is
>> for the panel rankings in the parallel metric/panel RAE 2008 to be
>> analysed in a full-scale multiple regression study, using as rich and
>> diverse a spectrum of predictive metrics as possible (not just
>> citation counts!), discipline by discipline. This will reveal which
>> metrics are most predictive of the panel rankings in each discipline,
>> and it will initialize their weights, which can then be calibrated,
>> updated and fine-tuned (as well as augmented with any new metrics) in
>> subsequent continuous assessment.
>>
>> Stevan Harnad
>