This attempt/plan to tune and weight the outcome of metrics to mimic the previous "panel rankings in each discipline" raises a simple question: were the panel rankings the best way to judge research and distribute these funds in the first place?
This repeats the mistake made when we moved from paper-based to net-based publishing and carried across all the conventions and limitations of the old system.
Now that we have new forms of measurement, would it not be better to ask what these measurements themselves tell us about research quality, value, importance, or whatever dimensions we choose to measure, and to build a better (fairer?) form of assessment?
Just a thought. :-)
John Smith,
University of Kent, UK
> -----Original Message-----
> From: Repositories discussion list [mailto:JISC-
> [log in to unmask]] On Behalf Of Stevan Harnad
> Sent: 15 February 2008 11:59
> To: [log in to unmask]
> Subject: Re: RAE/REF research
>
> On Fri, 15 Feb 2008, Bevan, Simon wrote:
>
> > The research undertaken at Cranfield concerning the differences
> > between the Research Assessment Exercise and the Research
> > Excellence Framework, discussed in the Times Higher this week,
> > is available at:
> >
> > https://dspace.lib.cranfield.ac.uk/bitstream/1826/2248/1/Citation%20Counts-RAE%20Scores-2008..pdf
>
> This pilot study has some methodological weaknesses.
>
> The remedy for the ostensible problems encountered in this study
> is for the panel rankings in the parallel metric/panel RAE 2008
> to be analysed in a full-scale multiple regression study, using
> as rich and diverse a spectrum of predictive metrics as possible
> (not just citation counts!), discipline by discipline. This will
> reveal which metrics are most predictive of the panel rankings
> in each discipline, and it will initialise their weights, which
> can then be calibrated, updated and fine-tuned (as well as
> augmented with any new metrics) in subsequent continuous
> assessment.
>
> Stevan Harnad
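
[For readers unfamiliar with the proposal being quoted: the regression-and-weighting step Harnad describes can be sketched in a few lines. This is a minimal illustration, not anyone's actual RAE/REF methodology; the metric names, department count, and all numbers below are fabricated purely to show the mechanics.]

```python
# Sketch: regress panel rankings on a vector of candidate metrics,
# discipline by discipline, and use the fitted coefficients as the
# initial metric weights. All data here are synthetic/hypothetical.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical metrics for 50 departments in one discipline:
# e.g. citation counts, download counts, co-authorship score
# (standardised so coefficients are comparable).
metrics = rng.normal(size=(50, 3))

# Hypothetical panel scores, driven mostly by the first metric,
# plus a little noise (panels are not perfectly metric-driven).
true_weights = np.array([0.7, 0.2, 0.1])
panel_scores = metrics @ true_weights + rng.normal(scale=0.05, size=50)

# Ordinary least squares: the fitted coefficients initialise the
# per-discipline weights; the intercept absorbs the baseline score.
X = np.column_stack([metrics, np.ones(len(metrics))])
coef, *_ = np.linalg.lstsq(X, panel_scores, rcond=None)
weights, intercept = coef[:3], coef[3]

# R^2 tells us how well this metric battery predicts the panel;
# a low R^2 would argue against replacing the panel with metrics.
pred = X @ coef
ss_res = np.sum((panel_scores - pred) ** 2)
ss_tot = np.sum((panel_scores - panel_scores.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
```

In subsequent assessment cycles the weights could then be recalibrated against new outcome data, and new metrics appended as extra columns, which is the "calibrated, updated and fine-tuned" step in the message above.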