I suspect that what fits some disciplines may well not fit others. The RAE and REF (if it ends up functioning as forecast) are one-size-fits-all solutions, whatever their benefits and faults. Proper consultation with academics, rather than a government think-tank approach, would be most likely to get the right result for each research area - and perhaps even allocate some funds for teaching excellence, who knows? Then the system would have the political support that you mention. Of course, it would need the kind of detailed analysis you describe, since what people want may not always correlate perfectly with the best system. I do suspect that genuine consultation would go a long way towards finding it, which is precisely what I hear academics complain does not happen. Clearly, the main fault with the panels was their high cost, which is why the government wants a change! Whether metrics work or not, we're stuck with them now!
Sorry to sound fatalistic. Thanks,
Talat
-----
Dr Talat Chaudhri, Ymgynghorydd Cadwrfa / Repository Advisor
Tîm Cynorthwywyr Pwnc ac E-Lyfrgell / Subject Support and E-Library Team
Gwasanaethau Gwybodaeth / Information Services
Prifysgol Aberystwyth / Aberystwyth University
CADAIR: http://cadair.aber.ac.uk
-----Original Message-----
From: Repositories discussion list [mailto:[log in to unmask]] On Behalf Of Leslie Carr
Sent: 18 February 2008 12:20
To: [log in to unmask]
Subject: Re: RAE/REF research
On 18 Feb 2008, at 12:00, John Smith wrote:
> This attempt/plan to tune and weight the outcome of metrics to mimic
> the previous "panel rankings in each discipline" begs the simple
> question: Were the panel rankings the best way to judge research and
> distribute these funds anyway?
Who knows. They certainly weren't "the only way", they might not have
been "the best way", but they certainly were "the way". An important
property of the RAE was that it had sufficient support to be
politically achievable. Whatever faults it had, at least it (judgement
by peers) was ultimately acceptable to all parties. Whatever replaces
it has to be acceptable to all parties, and that would seem to imply
that it has to be demonstrably similar to "judgement by peers".
> This is making the same mistake that was made when we moved from
> paper-based publishing to net-based publishing and carried across
> all the conventions and limitations of the old system.
I'd argue that we haven't yet moved :-)
> Would it not be better, now we have new forms of measurement, to ask
> what these measurements themselves are telling us about research
> quality or value or importance or whatever dimensions we choose to
> measure it on, and build a better (fairer?) form of assessment?
Or perhaps, before we change horses, we ought to work out what it is
that constitutes fair assessment in the current system, try to target
it through our new set of metrics, and check that we got it right.
> Just a thought :-) .
Just my tuppence.
--
Les Carr