Alan
I think you’ve hit the nail on the head in one important respect. We have literally hundreds of bibliometric indicators to choose
from now (including >40 variants of the h-index). Trouble is, we don’t have much of a handle on the dependent variable (is it ‘excellence’, ‘quality’, ‘significance’, ‘rigour’, ‘visibility’, ‘utility’, ‘scholarly attention’?) so we don’t actually know WHAT
we’re measuring. My intuition, like yours, is that different indicators may measure these aspects differentially. This is an important area where I’d like to see more research, but at the moment, the construct validity of most bibliometric practice is pretty
weak, and that’s not a good place to be.
Ian
From: A bibliometrics discussion list for the Library and Research Community
[mailto:[log in to unmask]] On Behalf Of Alan Dix
Sent: 31 January 2018 15:15
To: [log in to unmask]
Subject: Re: FW: Announcing the Metrics Toolkit
Thanks Ian, Lizzie,
The Waltman and Traag article mentions the Italian equivalent of REF. I've done some reviewing for that and liked the way they combine IF, citations and expert evaluations.
Between them they really pick out the issues of journal metrics (e.g. IF) as predictors of document-level metrics (e.g. long-term citations), and of either as a proxy for some abstract notion of research quality.
In REF language one could argue that long-term citation is a possible proxy for significance whereas the venue's impact factor might be a *better* indicator of rigour (or maybe accuracy/truth).
Alan
On 31 January 2018 at 14:11, Elizabeth Gadd <[log in to unmask]> wrote:
Hi Alan, all,
There is another defence of the impact factor here:
https://arxiv.org/abs/1703.02334
You make a good point about the h-index. In my view this is a far more problematic metric than the IF but it doesn’t get nearly as much bad press!
Lizzie
From: A bibliometrics discussion list for the Library and Research Community [mailto:[log in to unmask]] On Behalf Of Rowlands, Ian
Sent: 31 January 2018 10:55
To: [log in to unmask]
Subject: Re: FW: Announcing the Metrics Toolkit
Hi Alan
You should read this for a nuanced defence of impact factors in research evaluation:
https://arxiv.org/abs/1612.04075
There are glaring dangers in terms of conflating journal and article level impact, but I don’t think the objections to impact factors are entirely to do with the construction of the metric!
Ian
From: A bibliometrics discussion list for the Library and Research Community [mailto:[log in to unmask]] On Behalf Of Alan Dix
Sent: 31 January 2018 10:17
To: [log in to unmask]
Subject: Re: FW: Announcing the Metrics Toolkit
... at the risk of getting myself lined up in front of a firing squad ...
Although everyone hates the impact factor, in my field at least, and despite the odd anomaly, the IF ranking is actually not so far from my subjective ranking of venue quality.
Furthermore, 'better' venues typically have more effective review mechanisms and higher standards.
That is, quality of venue is an indicator of quality of publication (there, I said it :-/). It is not a particularly great one, and certainly not the preferred one if other, more output-specific information is available. However, for assessments where no other information is available (e.g. roughly assessing candidates with recent publications), quality of venue will continue to be used as a proxy for output quality, and not completely without merit.
So, having good advice on appropriate venue-level metrics does seem really important.
I've been particularly alarmed recently to see journal metrics using variants of the h-index. IF based on arithmetic averages is bad enough, but the h-index is far, far worse. It is not a bad metric for comparing people in the same field at similar stages of their careers, but for venues it is so volume-dependent as to be totally misleading: if two equal-quality journals merge, their combined h-index suddenly goes up - not sensible!
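(To make the volume-dependence concrete, here is a minimal Python sketch. The citation counts are entirely made up for illustration: two hypothetical journals with identical per-paper citation profiles, i.e. identical "quality", end up with a higher h-index after merging simply because there are more papers.)

```python
def h_index(citations):
    """h = largest h such that at least h papers have >= h citations each."""
    h = 0
    for i, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= i:
            h = i
        else:
            break
    return h

# Two hypothetical journals with identical citation profiles per paper
journal_a = [10, 8, 6, 5, 5, 3, 2, 1]
journal_b = [10, 8, 6, 5, 5, 3, 2, 1]

print(h_index(journal_a))              # -> 5
print(h_index(journal_a + journal_b))  # -> 6: same quality, bigger h
```

Nothing about the papers changed; only the volume did.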
Alan
On 31 January 2018 at 09:32, Elizabeth Gadd <[log in to unmask]> wrote:
Dear colleagues
Looks like the Metrics Toolkit has now been launched. I’ve only had a quick look, but I really like the concept. Currently there is no category for journal metrics, so some are listed as measures of journal article impact instead, which I can see being problematic from a responsible metrics standpoint. There are some key indicators missing (SNIP, SJR), but I’m guessing it will develop over time.
One to watch!
Lizzie
From: [log in to unmask] [mailto:[log in to unmask]] On Behalf Of Heather Coates
Sent: 30 January 2018 18:56
To: [log in to unmask]
Subject: Announcing the Metrics Toolkit
Colleagues,
Today, we’ve launched the Metrics Toolkit, a resource built to help researchers and evaluators navigate the ever-changing research metrics landscape.
The Metrics Toolkit includes 27 expert-written, time-saving summaries for the most popular research metrics including the Journal Impact Factor and Altmetric Attention Score.
Even better, the Metrics Toolkit includes an app that can recommend discipline-specific metrics to satisfy your specific use cases.
Best of all, the Metrics Toolkit carries a CC-BY license, so you can reuse its content as you wish!
Best wishes,
Heather, Robin & Stacy
Creators, Metrics Toolkit
--
Heather Coates
--
Professor Alan Dix
University of Birmingham, UK
Alan walks Wales: http://www.alandix.com/alanwalkswales
Tiree Tech Wave: http://tireetechwave.org/