LIS-ELIB Archives -- LIS-ELIB@JISCMAIL.AC.UK
LIS-ELIB, November 2007

Subject:      UK Research Evaluation Framework: Validate Metrics Against Panel Rankings
From:         Stevan Harnad <[log in to unmask]>
Reply-To:     Stevan Harnad <[log in to unmask]>
Date:         Thu, 22 Nov 2007 16:57:38 +0000
Content-Type: TEXT/PLAIN (208 lines)

    ** Cross-Posted ** Fully hyperlinked version of this posting:
    http://openaccess.eprints.org/index.php?/archives/333-guid.html

SUMMARY: Three things need to be remedied in the UK's proposed 
HEFCE/RAE Research Evaluation Framework:
http://www.hefce.ac.uk/pubs/hefce/2007/07_34/
    (1) Ensure as broad, rich, diverse and forward-looking a battery of 
candidate metrics as possible -- especially online metrics -- in all 
disciplines.
    (2) Make sure to cross-validate them against the panel rankings in 
the last parallel panel/metric RAE in 2008. The initialized weights 
can then be fine-tuned and optimized by peer panels in ensuing years.
    (3) Stress that it is important -- indeed imperative -- that all 
University Institutional Repositories (IRs) now get serious about 
systematically archiving all their research output assets (especially 
publications) so they can be counted and assessed (as well as 
accessed!), along with their IR metrics (downloads, links, 
growth/decay rates, harvested citation counts, etc.).
    If these three things are systematically done -- (1) comprehensive 
metrics, (2) cross-validation and calibration of weightings, and (3) a 
systematic distributed IR database from which to harvest them -- 
continuous scientometric assessment of research will be well on its 
way worldwide, making research progress and impact more measurable and 
creditable, while at the same time accelerating and enhancing it.
Once one sees the whole report, it turns out that the HEFCE/RAE 
Research Evaluation Framework is far better, far more flexible, and 
far more comprehensive than is reflected in either the press release 
or the Executive Summary.

It appears that there is indeed the intention to use many more metrics 
than the three named in the executive summary (citations, funding, 
students), that the metrics will be weighted field by field, and that 
there is considerable open-mindedness about further metrics and about 
corrections and fine-tuning with time. Even for the humanities and 
social sciences, where "light touch" panel review will be retained for 
the time being, metrics too will be tried and tested.

This is all very good, and an excellent example for other nations, 
such as Australia (also considering national research assessment with 
its Research Quality Framework), the US (not very advanced yet, but no 
doubt listening) and the rest of Europe (also listening, and planning 
measures of its own, such as EurOpenScholar).

There is still one prominent omission, however, and it is a crucial 
one:

The UK is conducting one last parallel metrics/panel RAE in 2008. That 
is the last and best chance to test and validate the candidate metrics 
-- as rich and diverse a battery of them as possible -- against the 
panel rankings. In all other fields of metrics -- biometrics, 
psychometrics, even weather forecasting metrics -- before deployment
the metric predictors first need to be tested and shown to be valid, 
which means showing that they do indeed predict what they were 
intended to predict. That means they must correlate with a "criterion" 
metric that has already been validated, or that has "face-validity" of 
some kind.

The RAE has been using the panel rankings for two decades now (at a 
great cost in wasted time and effort to the entire UK research 
community -- time and effort that could instead have been used to 
conduct the research that the RAE was evaluating: this is what the 
metric RAE is primarily intended to remedy).

But if the panel rankings have been unquestioningly relied upon for two
decades already, then they are a natural criterion against which the 
new battery of metrics can be validated, initializing the weights of 
each metric within a joint battery, as a function of what percentage 
of the variation in the panel rankings each metric can predict.

This is called "multiple regression" analysis: N "predictors" are 
jointly correlated with one (or more) "criterion" (in this case the 
panel rankings, but other validated or face-valid criteria could also 
be added, if there were any). The result is a set of "beta" weights on 
each of the metrics, reflecting their individual predictive power, in 
predicting the criterion (panel rankings). The weights will of course 
differ from discipline to discipline.
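
As a purely illustrative sketch of that validation step -- synthetic
data, invented metric names, and scikit-learn standing in for whatever
statistical tooling would actually be used -- the beta weights fall out
of an ordinary least-squares fit on z-scored variables:

    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)
    n = 120  # e.g. departments within one discipline

    # Hypothetical battery of candidate metrics, one row per department
    names = ["citations_per_paper", "prior_funding", "downloads",
             "links", "phd_students"]
    X = rng.normal(size=(n, len(names)))

    # Synthetic stand-in for the 2008 panel rankings (the criterion)
    y = (0.5 * X[:, 0] + 0.3 * X[:, 2] + 0.1 * X[:, 1]
         + rng.normal(scale=0.5, size=n))

    # z-score predictors and criterion so the fitted coefficients are
    # comparable, standardized "beta" weights
    Xz = (X - X.mean(axis=0)) / X.std(axis=0)
    yz = (y - y.mean()) / y.std()

    model = LinearRegression().fit(Xz, yz)
    print(f"variance in panel rankings predicted: "
          f"R^2 = {model.score(Xz, yz):.2f}")
    for name, beta in zip(names, model.coef_):
        print(f"  {name:20s} beta = {beta:+.2f}")

The fit would of course be run separately for each discipline's unit of
assessment, since the weights differ by discipline.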

Now these beta weights can be taken as an initialization of the metric 
battery. With time, "super-light" panel oversight can be used to 
fine-tune and optimize those weightings (and new metrics can always be 
added too), to correct errors and anomalies and make them reflect the 
values of each discipline.

(The weights can also be systematically varied to use the metrics to 
re-rank in terms of different blends of criteria that might be 
relevant for different decisions: RAE top-sliced funding is one sort 
of decision, but one might sometimes want to rank in terms of 
contributions to education, to industry, to internationality, to 
interdisciplinarity. Metrics can be calibrated continuously and can 
generate different "views" depending on what is being evaluated. But, 
unlike the much abused "university league table," which ranks on one 
metric at a time (and often a subjective, opinion-based one rather than an
objective one), the RAE metrics could generate different views simply 
by changing the weights on some selected metrics, while retaining the 
other metrics as the baseline context and frame of reference.)
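
To make the re-weighting idea concrete, a toy continuation of the sketch
above (both the weight blends and the metric matrix are invented for
illustration; they are not HEFCE's):

    import numpy as np

    rng = np.random.default_rng(1)
    Xz = rng.normal(size=(120, 5))  # z-scored metric battery, as above

    # Two invented weight blends over the same five metrics
    views = {
        "research (RAE top-slice)": np.array([0.5, 0.1, 0.3, 0.1, 0.0]),
        "education":                np.array([0.1, 0.0, 0.2, 0.1, 0.6]),
    }

    for label, w in views.items():
        scores = Xz @ w                 # composite score per department
        top = np.argsort(-scores)[:5]   # highest-scoring first
        print(f"{label}: top departments (by index): {top}")

The baseline battery stays fixed; only the weights on selected metrics
change, so each "view" remains anchored to the same underlying evidence.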

To accomplish all that, however, the metric battery needs to be rich 
and diverse, and the weight of each metric in the battery has to be 
initialised in a joint multiple regression on the panel rankings. It 
is very much to be hoped that HEFCE will commission this all-important 
validation exercise on the invaluable and unprecedented database they 
will have with the unique, one-time parallel panel/ranking RAE in 
2008.

That is the main point. There are also some less central points:

The report says -- a priori -- that REF will not consider journal 
impact factors (average citations per journal), nor author impact 
(average citations per author): only average citations per paper, per 
department. This is a mistake. In a metric battery, these other 
metrics can be included, to test whether they make any independent 
contribution to the predictivity of the battery. The same applies to 
author publication counts, number of publishing years, number of 
co-authors -- even to impact before the evaluation period. (The research
output of included versus non-included staff could possibly be treated in
a similar way, with the number and proportion of staff included also
serving as metrics.)
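
Testing for such an independent contribution is straightforward in the
regression framework sketched above: add the candidate metric to the
battery and see whether the predicted variance rises. A toy example with
synthetic data (the "extra" column standing in for, say, the journal
impact factor):

    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(2)
    n = 120
    base = rng.normal(size=(n, 4))  # core battery of metrics
    # Candidate extra metric, deliberately correlated with the first one
    extra = 0.8 * base[:, 0] + rng.normal(scale=0.6, size=n)
    y = base @ np.array([0.5, 0.2, 0.2, 0.1]) + rng.normal(scale=0.5, size=n)

    r2_base = LinearRegression().fit(base, y).score(base, y)
    X_full = np.column_stack([base, extra])
    r2_full = LinearRegression().fit(X_full, y).score(X_full, y)

    # Negligible delta: the metric adds no independent information.
    # Substantial delta: excluding it a priori throws away predictivity.
    print(f"delta R^2 from adding the candidate metric: "
          f"{r2_full - r2_base:.3f}")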

A large battery of jointly validated and weighted metrics will also make
it possible to correct the potential bias of relying too heavily on prior
funding, even where prior funding is highly correlated with the panel
rankings: otherwise a self-fulfilling prophecy would simply collapse the
dual RAE/RCUK funding system into a multiplier on prior RCUK funding.

Self-citations should not be simply excluded: they should be included 
independently in the metric battery, for validation. So should 
measures of the size of the citation circle (endogamy) and degree of 
interdisciplinarity.

Nor should the metric battery omit the newest and some of the most 
important metrics of all, the online, web-based ones: downloads of 
papers, links, growth rates, decay rates, hub/authority scores. All of 
these will be provided by the UK's growing network of Institutional
Repositories, which will serve both as the record-keepers -- for the
papers and their usage metrics alike -- and as the access-providers,
thereby maximizing usage and hence the metrics themselves.
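
For instance, the hub/authority scores mentioned above can be computed
with Kleinberg's HITS algorithm over a link or citation graph; a toy
sketch using networkx, with invented paper names:

    import networkx as nx

    # Toy directed graph of links/citations between repository papers
    G = nx.DiGraph([
        ("paperA", "paperB"), ("paperA", "paperC"),
        ("paperB", "paperC"), ("paperD", "paperC"),
        ("paperC", "paperE"),
    ])

    hubs, authorities = nx.hits(G, max_iter=1000)
    for p in sorted(authorities, key=authorities.get, reverse=True):
        print(f"{p}: authority = {authorities[p]:.3f}, hub = {hubs[p]:.3f}")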

REF should put much, much more emphasis on ensuring that the UK 
network of Institutional Repositories systematically and 
comprehensively records its research output and its metric performance 
indicators.

But overall, thumbs up for a promising initiative that is likely to 
serve as a useful model for the rest of the research world in the 
online era.

References

Harnad, S., Carr, L., Brody, T. and Oppenheim, C. (2003) Mandated online
RAE CVs Linked to University Eprint Archives: Improving the UK Research
Assessment Exercise whilst making it cheaper and easier. Ariadne 35.
http://www.ecs.soton.ac.uk/~harnad/Temp/Ariadne-RAE.htm

Brody, T., Kampa, S., Harnad, S., Carr, L. and Hitchcock, S. (2003)
Digitometric Services for Open Archives Environments. In Proceedings of
European Conference on Digital Libraries 2003, pp. 207-220, Trondheim,
Norway. http://eprints.ecs.soton.ac.uk/7503/

Harnad, S. (2006) Online, Continuous, Metrics-Based Research Assessment.
Technical Report, ECS, University of Southampton.
http://eprints.ecs.soton.ac.uk/12130/

Harnad, S. (2007) Open Access Scientometrics and the UK Research
Assessment Exercise. In Proceedings of 11th Annual Meeting of the
International Society for Scientometrics and Informetrics 11(1), pp.
27-33, Madrid, Spain. Torres-Salinas, D. and Moed, H. F., Eds.
http://eprints.ecs.soton.ac.uk/13804/

Brody, T., Carr, L., Harnad, S. and Swan, A. (2007) Time to Convert to
Metrics. Research Fortnight pp. 17-18.
http://eprints.ecs.soton.ac.uk/14329/

Brody, T., Carr, L., Gingras, Y., Hajjem, C., Harnad, S. and
Swan, A.  (2007) Incentivizing the Open Access Research Web:
Publication-Archiving, Data-Archiving and Scientometrics. CTWatch
Quarterly 3(3). 
http://eprints.ecs.soton.ac.uk/14418/

    Fully Hyperlinked version of this posting:
    http://openaccess.eprints.org/index.php?/archives/333-guid.html

Stevan Harnad
AMERICAN SCIENTIST OPEN ACCESS FORUM:
http://amsci-forum.amsci.org/archives/American-Scientist-Open-Access-Forum.html
     http://www.ecs.soton.ac.uk/~harnad/Hypermail/Amsci/

UNIVERSITIES and RESEARCH FUNDERS:
If you have adopted or plan to adopt a policy of providing Open Access
to your own research article output, please describe your policy at:
     http://www.eprints.org/signup/sign.php
     http://openaccess.eprints.org/index.php?/archives/71-guid.html
     http://openaccess.eprints.org/index.php?/archives/136-guid.html

OPEN-ACCESS-PROVISION POLICY:
     BOAI-1 ("Green"): Publish your article in a suitable toll-access journal
     http://romeo.eprints.org/
OR
     BOAI-2 ("Gold"): Publish your article in an open-access journal if/when
     a suitable one exists.
     http://www.doaj.org/
AND
     in BOTH cases self-archive a supplementary version of your article
     in your own institutional repository.
     http://www.eprints.org/self-faq/
     http://archives.eprints.org/
     http://openaccess.eprints.org/
