Well indeed… it does reduce burden… but of course it's not completely free of issues… not least of which is the absence of peer review, which seems to be close to the hearts of some academics… (the having of it, that is, not the absence)

===
Dr Simon Kerridge
Director of Research Services
University of Kent

On 26 Jan 2017, at 19:09, Alan Dix <[log in to unmask]> wrote:


I've always been all in favour of Monte Carlo funding, Simon, especially for project grants: use some loose measure of past performance to allocate lottery tickets and see what happens (a rough sketch below). Get rid of all that wasted effort writing proposals and allow academics actually to do real research.
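For concreteness, a minimal sketch of such a weighted lottery (unit names and past-performance scores entirely invented; this illustrates the idea, not any real funder's process):

    import random

    # Hypothetical 'loose measure of past performance': each unit holds
    # lottery tickets in proportion to its score.
    scores = {"Unit A": 3.2, "Unit B": 2.1, "Unit C": 1.4, "Unit D": 0.9}

    def monte_carlo_awards(scores, n_grants, rng=random.Random()):
        """Draw n_grants winners, with replacement, weighted by score."""
        units, weights = zip(*scores.items())
        return rng.choices(units, weights=weights, k=n_grants)

    print(monte_carlo_awards(scores, n_grants=5))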

Alan


On 26 January 2017 at 17:17, Simon Kerridge <[log in to unmask]> wrote:

Thanks Alan,

I was aware that UOA11 used a longer scale (although I thought it was a ten-point one… maybe that was RAE2008…). Anyway, I think that UOA11 was fairly rare in using a longer scale… (could be wrong there too) although it is of course far more sensible, as it allows for adjustments based on panel members' scoring styles… and to me it seems a far superior method.


I have to say that I still like my proposal from last time… not taken up for some unknown reason…

Define, say, 10 metrics… ideally in tension with each other… collect all of them… and then roll a few dice (10-sided, I guess) and just use those (fewer than ten) randomly selected metrics to allocate the funding. It should, I think, reduce game-playing… institutions would have to just do good stuff and try to maximise all metrics… something like the sketch below.
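A minimal sketch of how that dice-roll allocation might work, assuming ten invented metric names and made-up scores (none of this is from any real scheme):

    import random

    # Ten hypothetical metrics, deliberately in tension with one another.
    METRICS = ["citations", "open_access", "phd_completions", "income",
               "collaboration", "public_engagement", "data_sharing",
               "early_career_share", "interdisciplinarity", "societal_impact"]

    def allocate(scores, n_chosen=3, pot=100.0, rng=random.Random()):
        """Roll the dice: pick a random subset of the metrics, only after
        all the data are in, and share the pot in proportion to each
        institution's total on just those metrics.
        scores: {institution: {metric: value}}, values on a common scale."""
        chosen = rng.sample(METRICS, k=n_chosen)
        totals = {inst: sum(vals[m] for m in chosen)
                  for inst, vals in scores.items()}
        grand = sum(totals.values())
        return chosen, {inst: pot * t / grand for inst, t in totals.items()}

Because the selected subset is unknown in advance, the only safe strategy is to do reasonably well on all ten metrics, which is exactly the anti-gaming property suggested above.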

Just a thought…


From: A bibliometrics discussion list for the Library and Research Community [mailto:LIS-BIBLIOMETRICS@JISCMAIL.AC.UK] On Behalf Of Alan Dix
Sent: 26 January 2017 14:39
To: LIS-BIBLIOMETRICS@JISCMAIL.AC.UK
Subject: Re: Use of metrics in the REF - further thoughts

The recommendation seems to be a natural consequence of:

a)  the feedback (I think from virtually all panels) to make a future REF non-selective on staff, as selectivity led to endless game-playing which, inter alia, was damaging to the careers of those who got gamed out; and

b)  not wanting panels to have to read vast swathes of low-quality outputs that won't count for funding anyway.

Having a smaller number of outputs per staff member, but not tied so tightly to specific staff, nicely squares the circle!

Of course, (b) would not be an issue with a largely metrics-based approach, and (a) will almost certainly lead to more staff on teaching-only contracts, so lots of complications!

 

In my sub-panel (SP11) we rated everything on a 12-point scale anyway; I'm not sure how widespread that was. The conversion to 4-point star ratings was done quite late in the process. That is, we effectively made low/mid/high 4* ratings for each output ... or in other words we already have 5* and 6*, Simon! (A guess at what that mapping looks like is sketched below.)
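The exact conversion isn't spelled out here, but a 12-point scale with low/mid/high bands inside each star level suggests something like the following (a hypothetical reconstruction, not the panel's actual rule):

    def to_star(score):
        """Map a 1-12 score to (star, band), assuming three bands per star
        level -- a guess at the structure, not the real SP11 conversion."""
        assert 1 <= score <= 12
        star = (score - 1) // 3 + 1                    # 1* .. 4*
        band = ("low", "mid", "high")[(score - 1) % 3]
        return star, band

    print(to_star(12))  # (4, 'high'): the 'high 4*' playing the role of a 6*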

However, I'd be really worried if these were used for a highly ramped funding formula. In my post-hoc analysis of SP11, GPA was less biased than 4*-heavy (funding-related) measures, and from a basic stats point of view we know that extremal measures have high variability (see the toy simulation below).
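That variability point can be checked with a toy simulation (all numbers synthetic): for a fixed grade profile, the estimated 4* share fluctuates far more, relative to its size, than the GPA does.

    import random

    def relative_spread(n_outputs=200, n_trials=2000, rng=random.Random(1)):
        """Compare the relative spread (coefficient of variation) of GPA
        with that of the 4* share for a synthetic grade profile."""
        grades, probs = (1, 2, 3, 4), (0.1, 0.3, 0.4, 0.2)
        gpas, four_star = [], []
        for _ in range(n_trials):
            sample = rng.choices(grades, weights=probs, k=n_outputs)
            gpas.append(sum(sample) / n_outputs)
            four_star.append(sample.count(4) / n_outputs)
        def cv(xs):
            mean = sum(xs) / len(xs)
            sd = (sum((x - mean) ** 2 for x in xs) / len(xs)) ** 0.5
            return sd / mean
        return cv(gpas), cv(four_star)

    print(relative_spread())  # the 4* share shows the larger relative spread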

Arguably, in the long run, metrics with less of a 'cliff edge' could have some advantages.

In REF 2014, we saw some quite sharp rank changes at the top of 4*-heavy 'league tables' (at least in SP11). Some of that will be due to genuine changes in strength over the period; some to strategic decisions (aka game-playing), such as a Russell Group university only entering half its staff for a high-ranked UoA; but some will also be pure chance. Having a lower leveraging effect for REF would dampen these five-year swings and maybe also reduce some of the fetishisation of REF.

As you say, Ray, this would also spread resources more widely and hence more thinly; closer to the less selective pre-1992 university funding. However, the last time (that I'm aware of) HEFCE did a bang-for-buck measure (I think in the early 2000s), it was the lower-ranked universities that gave the best performance in terms of research output per pound spent, so maybe this would not be a bad thing!

Of course, as I'm at a Russell Group university (which did submit near 100% for my UoA), maybe I should just fight for the status quo ;-)

Alan

On 26 January 2017 at 13:18, Simon Kerridge <[log in to unmask]> wrote:

Or...
Most of your issues could be addressed by converting the "upper 4*" into 5* with an appropriate funding formula to reward them...
... the main problem would be the naming of the category... "galactically recognised", anyone? (to leave room for the 6* needed in 2028...)


-----Original Message-----
From: A bibliometrics discussion list for the Library and Research Community [mailto:LIS-BIBLIOMETRICS@JISCMAIL.AC.UK] On Behalf Of Kent, Ray
Sent: 26 January 2017 11:26
To: LIS-BIBLIOMETRICS@JISCMAIL.AC.UK
Subject: Re: Use of metrics in the REF - further thoughts

Hello

I've been trying to think through some of the 'big questions' posed by the Consultation on REF 2021, notably what might be the outcome of implementing Lord Stern's proposal to decouple outputs from individuals (Recommendation 2 of Stern, 2016).

If Recommendation 2 is implemented, all institutions will seek to maximise their REF score by submitting only the very best (potentially 4*) outputs as judged through peer review. This would be greatly facilitated by the possibility of submitting staff with no outputs, or perhaps only one output.

Faced with a submission consisting entirely, or almost entirely, of 4* outputs, how will REF panels respond? As I see it, panels will have a choice: they can either rate the whole set of outputs as 4* (applying the logic of REF 2014), or reach for bibliometric indicators in an attempt to separate 'top 4*' outputs from 'bottom 4*' outputs, with possibly a third category of 'middle 4*'.

A consequence of choosing to rate a whole set of outputs as 4* is that in REF 2021 there could be many submissions in each UoA that consist almost entirely of 4* outputs. With Impact and Environment contributing only 35%, in total, to the weighting of QR, this would prove a real headache for Research England, HEFCE's successor, and the devolved funding councils. If the field becomes bunched up in this way, QR will be spread much more evenly than was the case following REF 2014. One might argue that this reduction in concentration would be a good thing, but unless Research England steps in to moderate the effect on the 'Golden Triangle' and other research-intensive institutions, they would be likely to see a substantial drop in their QR funding. This would create significant turbulence across the sector, particularly when considered alongside the as yet largely unknown consequences of the TEF and Brexit.

Alternatively, if the REF panels decide to use bibliometrics to assign a 3* rating to 'middle 4*' and 'bottom 4*' outputs (papers that, in REF 2014, would each have received a 4* rating), the effect will be to deflate, relative to REF 2014, the grades awarded to outputs in REF 2021. If that were to happen, come 2021 we could be faced with a situation where a significant number of UK universities have the appearance of being weaker, in research terms, than they were at the time of REF 2014. That doesn't send a good message to Government when we are lobbying for more funding.

I am sure that HEFCE and its equivalent bodies in the devolved administrations have thought about this carefully, and are acutely aware of the unintended consequences that might result if the decoupling of outputs and individuals were to be implemented. However, if as a sector we fail to strongly object to Recommendation 2 in our Consultation responses (the temptation being to focus on less complex issues), the funding councils will have no grounds for rejecting the proposal.

Let's make our voices heard, and say no to Recommendation 2!

When doing so, we should propose an alternative way forward. This could be to retain the approach to outputs taken in REF 2014, or there may be a better way - ideally one that doesn't involve having to submit Individual Staff Circumstances. Any thoughts?

Best regards,
Ray

--
Dr Ray Kent
Director of Research Administration
Research Office
The Royal Veterinary College
Royal College Street
London
NW1 0TU
T: +44 (0)20 7468 1206
E: [log in to unmask]









--
Professor Alan Dix

      University of Birmingham, UK
      and
      Talis, Birmingham, UK

Tiree Tech Wave: http://tireetechwave.org/




--
Professor Alan Dix

      University of Birmingham, UK
      and
      Talis, Birmingham, UK

Tiree Tech Wave: http://tireetechwave.org/