Chris
       Whilst I agree with the suggestion re alt-metrics, I am still of the
opinion that requiring and specifying metrics in this document is premature.
The set of mandatory things in a standard should be the absolute minimum
necessary. One can then add many desirable things; desirable things with
wide uptake can then become mandatory in future.

I think we all agree that having and publishing metrics of all sorts is
a good thing and to be encouraged. Mandating it at this stage seems
unnecessary, and to my eyes it stands apart from the other requirements
in the draft document. I think it is likely that the market will do the
sifting here, and those who cannot provide this information will
eventually lose custom.

As for gaming the JIF - it's a side issue here, but I would use stronger
words than 'allegedly'. We have evidence that it certainly did happen,
and I have yet to see evidence that the behaviours described are no longer
possible, which leads me to suspect that it probably still does happen.
Or, of course, the evidence that the system is fixed may exist and I am
simply not aware of it.

On 24/05/13 09:53, Chris Rusbridge wrote:
> Hans, In terms of "what to measure", I'd suggest looking at the alt-metrics movement, as they are building up a great deal of experience in this area.
>
> It may be worth thinking about the "impact of measuring" issue, but I suspect it's small. The issue mainly comes into effect at extremely small scales (if I understand it correctly). If I measure the rooms in my house, the measurement has no discernible effect. I think you're right, though, to be concerned about "gaming the system" (a somewhat different effect, I suspect). Given how smart humans can be (and how dumb), we'll never be free of this, but that doesn't mean that publishing information about take-up and impact is a bad thing. Rather, we have to try to ensure our community norms evolve to make this gaming unacceptable, as gaming the JIF clearly is (though it still allegedly happens).
>
> When IRs publish information about aspects of take-up, e.g. downloads, citations, etc., I find that very useful and interesting, both as author and consumer.
>
> --
> Chris Rusbridge
> Mobile: +44 791 7423828
> Email: [log in to unmask]
> Adopt the email charter! http://emailcharter.org/
>
> On 24 May 2013, at 09:09, Hans Pfeiffenberger wrote:
>
>> On 22.05.13 09:24, Angus Whyte wrote:
>>> The criterion could be better expressed as 'repositories must publish information to enable journals and depositors to assess their take-up in the community they aim to serve, and the level of access to deposited items, e.g. how frequently these are accessed by repository users'
>>>
>>> In either that or the original wording, is this relevant to supporting a decision on repository recommendation in this context? If so, should it be 'important' or 'mandatory' for data repositories to publish this information?
>>>
>>> Angus
>> Dear Angus,
>>
>> There are two points:
>>
>> - should we *already* ask for a measure of take-up?
>> - how to measure it (counting downloads? metadata views? likes?) and how to normalize and compare that (per community?)
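>>
>> To make the normalization question concrete, here is a minimal sketch in Python, assuming purely hypothetical download counts and community sizes (every repository name and number below is invented, and downloads-per-member is only one of many conceivable normalizations):
>>
>>     # Hypothetical data: (repository, community, annual downloads,
>>     # estimated size of the community served); all values invented.
>>     observations = [
>>         ("repo-a", "oceanography", 12000, 3000),
>>         ("repo-b", "oceanography",  4500, 3000),
>>         ("repo-c", "linguistics",   1800,  400),
>>     ]
>>
>>     for repo, community, downloads, size in observations:
>>         # Downloads per community member: comparable across communities
>>         # of very different sizes, but it says nothing about actual
>>         # value to that community.
>>         print(f"{repo} ({community}): {downloads / size:.1f} per member")
>>
>> Even this trivial normalization presupposes that "community size" can be estimated at all, which is part of the open question.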
>>
>> As a former physicist I like to say that you cannot observe a system without modifying it, perhaps even destroying it.
>> And I guess that in many communities the repository landscape is still so fragile that the destroying clause may apply.
>> => so: no *mandatory* criterion on measure of take-up *yet*
>>
>> As to the counting of activity at a repository site, this can be manipulated even more easily than the notorious Impact Factor. You would set in motion a race for something ... probably without evidence that it actually has a strong correlation with community take-up (whatever *that* means) or, indeed, with actual value to a community.
>>
>> best,
>>
>> Hans