Hi Jenny,
Thanks for asking a practical question. By way of a limited answer: we
are operating at a small scale and grass-roots level here; however,
efficiency has still been considered at every turn.
A focus on increasing system sophistication/load is a goal worth
keeping (I think), as is the ability to assign uploading and the
adding/moderating of metadata to the most useful (and available)
person. I expect this balance of workload to change over time.
Metadata
The metadata entry form of the submission UI in our repository has 5
mandatory fields* for all deposits:
> Author, Title, Copyright Date, Abstract, Keyword, Research Code
Other important metadata fields are also being filled; however, the
cost of that depends upon the resources available, and that cost is yet
to be calculated. All content is loaded by library staff, and all
metadata standards are maintained by library staff.
Time: it takes a librarian about 20 minutes to load the work and add
metadata (the full complement, not the mandatory 5).
Automation
We plan to automate whatever we can, and to design and refine systems
to reduce the keystrokes required. If in some way open access can be
tied in to the deposit and reporting of research, there may be
information management efficiencies and 'benefits' to be gained, aside
from the reduction of workload for submitters and librarians.** At some
point it may be feasible to replace upload and moderation undertaken by
the library with user submission.
For example, drop-down menus backed by authority files are kept up to
date, and implementing this for the standard research codes is a
priority.
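The keystroke-saving automation described above could be sketched along the following lines: validate a deposit's research code against an authority file so that drop-down options and batch loads stay consistent. This is a hypothetical illustration only; the codes, labels, and function names are invented for the example, not our actual configuration.

```python
# Hypothetical sketch: keep submission metadata consistent with an
# authority file of standard research codes. The codes and labels
# below are invented examples, not the repository's real list.

AUTHORITY_CODES = {
    "270000": "Biological Sciences",
    "370000": "Studies in Human Society",
    "420000": "Language and Culture",
}

def validate_research_code(code):
    """Return the authorised label for a code, or raise if unknown."""
    label = AUTHORITY_CODES.get(code)
    if label is None:
        raise ValueError("Research code %r is not in the authority file" % code)
    return label

def dropdown_options():
    """(code, label) pairs for the submission UI drop-down, sorted by code."""
    return sorted(AUTHORITY_CODES.items())
```

Keeping a single authority list as the source of truth means the drop-down, the validator, and any bulk-loading script cannot drift apart.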
Copyright
Admittedly, I see this as an education issue, but there are also ways
to make it more efficient through system design. A few well-posed
questions in an upload interface would sort out when copyright is or
isn't an issue (and when it might require human oversight). When an IR
can interoperate with databases or websites like SHERPA or OakList
(which hold copyright policies), that will be very helpful. In the
meantime, library staff and brain power are being used to sort this
out.
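Those "few well-posed questions" could be expressed as a simple triage rule at upload time. The sketch below is illustrative only: the questions, version names, and decision rules are assumptions for the example, not our actual workflow or any real interface.

```python
# Hypothetical triage of copyright questions in an upload interface.
# Outcomes: 'deposit' (load it), 'refer' (human oversight needed),
# 'stop' (cannot proceed). All rules here are illustrative assumptions.

def copyright_triage(is_author, publisher_allows_self_archiving, version):
    """Decide what to do with a submitted work.

    is_author: does the depositor hold author rights?
    publisher_allows_self_archiving: True, False, or None (unknown).
    version: 'preprint', 'postprint', or 'publisher_pdf'.
    """
    if not is_author:
        return "stop"        # depositor must hold or clear the rights
    if publisher_allows_self_archiving is True:
        if version in ("preprint", "postprint"):
            return "deposit"  # straightforward case: load it
        return "refer"        # publisher PDFs usually need checking
    if publisher_allows_self_archiving is False:
        return "stop"
    return "refer"            # policy unknown: a librarian decides
```

Even a crude rule like this would let the clear-cut cases through immediately and reserve librarian time for the genuinely ambiguous ones.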
Time: issues with copyright can take 1 minute to 2 hours (cumulatively)
for a librarian to resolve (or not resolve). After half an hour I would
question the merit of pursuing it, unless there were other benefits
(e.g. a new depositor, and/or all their deposits are affected in this
way).
Recruitment
Our first priority is obtaining works from academics who a) are
interested and b) whose works can be made available efficiently.
Depositors are emailed links to their works; if they spot errors,
particularly in subject attribution, we ask for their
feedback/corrections. Ideally, they would already be in the system and
have the permission/ability to do this themselves (within bounds).
I hope this sheds some light on one example for you.
Regards, Ingrid
*for theses there are extra fields to meet ETD metadata standards
**theses are a slightly different matter: because of the requirement to
deposit both print and electronic copies, the nature of the deposit and
the relationship with the depositors is different
Ingrid Mason
Digital Research Repository Coordinator
ResearchArchive@Victoria
Victoria University of Wellington
ph: 64-4-463 6844
fx: 64-4-471 2070
em: [log in to unmask]
Location: Kelburn Campus, Rankine Brown, RB501A
- - research deposited in ResearchArchive@Victoria can be found
within 1 day via the national research hub nzresearch.org and within 2-3
days via the search engine Google - -
-----Original Message-----
From: Repositories discussion list
[mailto:[log in to unmask]] On Behalf Of Delasalle, Jenny
Sent: Saturday, 27 September 2008 1:24 a.m.
To: [log in to unmask]
Subject: Re: processing times
Thanks, yes: but what I really want to know about is whether other
repositories are processing items significantly faster than us, and if
so, what are the factors that make the difference?
I do have three potential solutions to my metadata creation problem:
1) Employ more cataloguing staff
2) Minimise the processing and cataloguing by requiring academics to do
it
3) Employ a technical wizard to automate stuff.
All of these could work in conjunction with a fourth option, which is to
compromise the thoroughness of the metadata record, which could be a
temporary approach to be addressed by any of the other methods at a
later date. Or indeed it might turn out to be a permanent solution. The
point of detailed metadata records is what functionality they support,
either in terms of search/reporting within our own repository, or
interoperability with others. This is hard to judge because we're trying
to look into the future at what technology might enable us to do with
our metadata.
But what I still need to know is what difference any of these three
methods would make, in a very practical, concrete sense.
I can work out how many cataloguing staff I would need if I make no
other changes. I think it's unlikely that we can ever persuade all our
academics to make the cataloguing effort for us (even if they could be
relied upon for quality, and I certainly wouldn't rely on them to do the
copyright checks). I have no idea what technical solutions could be
developed, how long they would take or how much they would cost,
although I'm sure that there is a lot of potential there.
I have been offered another solution, which is that at Northampton they
train the admin staff within the departments in record creation. That's
fine if I want metadata-only records: these don't require copyright
checks and they don't require a suitable version to be supplied by the
authors. It would be a much larger project to change our full text
deposit method to be at a departmental level, and to train the admin
staff to check that we have an appropriate version. But it is an
alternative model that I could consider. There would still need to be
editorial control, of course, and I would not expect admin staff to be
able to add LCSHs.
What I would very much like to know is, what are other repositories
doing?
Kind regards
Jen
Jenny Delasalle
E-Repositories Manager
Research & Innovation Unit
University of Warwick Library
Gibbet Hill Road
Coventry CV4 7AL
United Kingdom
Tel: (+44) (0) 24 765 75793
http://go.warwick.ac.uk/repositories
> -----Original Message-----
> From: Repositories discussion list
> [mailto:[log in to unmask]] On Behalf Of Peter Cliff
> Sent: 26 September 2008 13:44
> To: [log in to unmask]
> Subject: Re: processing times
>
> Hey Mahendra,
>
> Mahendra Mahey wrote:
> > I am not sure this is about 'dumbing down' pre-existing beautifully
> > crafted metadata. I think (correct me if I am wrong,
> Pete, Phil) it is
> > about:
> >
> > * having a strategy to cope with a large amount of content to
> > deposit into a repository, with limited resources and pressure
> > to show a repository brimming with 'stuff'
> > * making content available quickly - exposing it to the web so
> > that it can be discovered quickly (hopefully?)
> > * increasing the amount of content in the repository quickly
> > * making a judgement about using a quick-fix strategy where
> > there is simply not the time to catalogue the content to the
> > high standards you originally started out with (I am sure Jenny
> > has done the maths in terms of how long it would take to
> > catalogue the content)
> >
> > Is that right?
>
> Yep. On all the points. Talat's experience as repository
> manager suggests that adding metadata after the deposit takes
> a long time - Talat, is it longer than it'd be on creation?
>
> I'm not talking about getting the metadata wrong (which I
> think would be a hassle to fix - imagine suddenly realising
> you had to change your subject classification scheme) but
> getting the metadata incomplete - so you have the same
> problem as creating metadata on submission, but delayed so
> that you can prioritise deposit. (Why do today...? ;-))
>
> As for automated augmentation of metadata - well, that would
> be doable and perhaps should be part of the tool - and from
> what I know of SWORD, it'll allow for metadata updates.
>
> Pete Cliff
> RSP/UKOLN
>