JISC-REPOSITORIES Archives, March 2014

Subject: Re: DC OAI-PMH
From: Philip Hunter <[log in to unmask]>
Reply-To: [log in to unmask]
Date: Fri, 21 Mar 2014 17:16:47 +0000
Content-Type: text/plain
Parts/Attachments: text/plain (425 lines)

Here's my tuppence worth on DC, which comes from a different angle (I'm
interested in the history). First I'll look at what DC was supposed to be, and
to be for, at the time of its first formal discussion in 1995. Then I'll
discuss Brin and Page's paper on the development of Google, published in 1998.
Google always was the competition here.

The DCMI goals.

The position paper which followed the first DCMI workshop (1-3 March 1995)
[http://dublincore.org/workshops/dc1/report.shtml] says a number of interesting
things:

“Goals of the workshop included (1) fostering a common understanding of the
needs, strengths, shortcomings, and solutions of the stakeholders; and (2)
reaching consensus on a core set of metadata elements to describe networked
resources.”

And that:

“Given that the majority of current networked information objects are
recognizably "documents", and that the metadata records are immediately needed
to facilitate resource discovery on the Internet, the proposed set of metadata
elements (The Dublin Core) is intended to describe the essential features of
electronic documents that support resource discovery.”

So essentially the purpose of creating the DC metadata standard was to provide
an easily understood and interoperable set of metadata elements to describe
networked resources (a pre-web phrase) which would support resource discovery.
And it was understood that metadata records were “immediately needed.”

We get a strong sense of the completely different environment that existed in
March 1995 from the following remarks:

 “The explosive growth of interest in the Internet and the World Wide Web in the
past five years has created a digital extension of the academic research library
for certain kinds of materials. Valuable collections of texts, images and sounds
from many scholarly communities--collections that may even be the subject of
state-of-the-art discussions in these communities--now exist only in electronic
form and may be accessible from the Internet. Knowledge regarding the
whereabouts and status of this material is often passed on by word of mouth
among members of a given community. For outsiders, however, much of this
material is so difficult to locate that it is effectively unavailable.”

This was written three years before Google made its appearance as the Stanford
search engine. At this time resource discovery was very difficult in the open
and uncontrolled space of the web. Many scholarly collections were in the
controlled space of in-house databases and electronic archives (the digital
extensions of the academic research library). So you might be able to access
the resources if you knew the database existed, and if there was some kind of
internet interface. The report expands on the problem:

“A number of well-designed locator services, such as Lycos, are now available
that automatically index every resource available on the Web and maintain
up-to-date databases of locations. But it has not yet been demonstrated that
indexes contain sufficiently rich resource descriptions, especially if the
location databases are large and span many fields of study. Moreover, a huge
number of resources on the Internet have no description at all beyond a
filename which may or may not carry semantic content. If these resources are to
be discovered through a systematic search, they must be described by someone
familiar with their intellectual content, preferably in a form appropriate for
inclusion in a database of pointers to resources. But current attempts to
describe electronic resources according to formal standards (e.g., the TEI
header or MARC cataloging) can accommodate only a small subset of the most
important resources.”

The key sentence here is “but it has not yet been demonstrated that indexes
contain sufficiently rich resource descriptions, especially if the location
databases are large and span many fields of study.”

Is this true? I think we now know that the alternative approach to resource
discovery taken by Google has demonstrated that search engine indexes do
contain 'sufficiently rich resource descriptions', which facilitate resource
discovery, and I think we knew that was going to be the case fifteen years ago.

The Google approach to resource discovery.

The first detailed description of what Google was supposed to do appears in the
Brin and Page paper:

[http://infolab.stanford.edu/~backrub/google.html 'The Anatomy of a Large-Scale
Hypertextual Web Search Engine', Sergey Brin and Lawrence Page {sergey,
[log in to unmask], Computer Science Department, Stanford University,
Stanford, CA 94305]

“Apart from the problems of scaling traditional search techniques to data of
this magnitude [Brin and Page were talking about a database covering some 24
million pages in 1998], there are new technical challenges involved with using
the additional information present in hypertext to produce better search
results. This paper addresses this question of how to build a practical large-
scale system which can exploit the additional information present in hypertext.
Also we look at the problem of how to effectively deal with uncontrolled
hypertext collections where anyone can publish anything they want.”

That was the goal. Note that they are talking about both controlled and
uncontrolled hypertext collections. We tend not to talk in this way anymore,
though I daresay they still do inside Google.

In the detail of the document, Brin and Page note that:

“...as the collection size grows, we need tools that have very high precision
(number of relevant documents returned, say in the top tens of results). Indeed,
we want our notion of "relevant" to only include the very best documents since
there may be tens of thousands of slightly relevant documents. This very high
precision is important even at the expense of recall (the total number of
relevant documents the system is able to return). There is quite a bit of recent
optimism that the use of more hypertextual information can help improve search
and other applications ... In particular, link structure... and link text
provide a lot of information for making relevance judgments and quality
filtering. Google makes use of both link structure and anchor text.”
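
(For reference, these are the standard IR notions being invoked - my gloss, not
a quotation from the paper:

\[
\text{precision} = \frac{|\text{relevant} \cap \text{retrieved}|}{|\text{retrieved}|},
\qquad
\text{recall} = \frac{|\text{relevant} \cap \text{retrieved}|}{|\text{relevant}|}
\]

Their point is that on the web the denominator of recall is enormous, so a
system should optimise precision over the top-ranked results and let recall
go.)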

In other words, Google was intended to work both with link structure and with
the actual text of documents (the hypertextual documents) in order to judge
relevance and quality. That's what they did, and it is what they are still
doing. We despaired that they weren't much interested in formal metadata, but
then, as they say in their paper, metadata options were being misused.

One of their principal design goals was

“to build an architecture that can support novel research activities on large-
scale web data. To support novel research uses, Google stores all of the actual
documents it crawls in compressed form. One of our main goals in designing
Google was to set up an environment where other researchers can come in quickly,
process large chunks of the web, and produce interesting results that would have
been very difficult to produce otherwise. In the short time the system has been
up, there have already been several papers using databases generated by Google,
and many others are underway.”

They explore the differences between the Web and Well Controlled Collections
(section 3.2):

“The web is a vast collection of completely uncontrolled heterogeneous
documents. Documents on the web have extreme variation internal to the
documents, and also in the external meta information that might be available.
For example, documents differ internally in their language (both human and
programming), vocabulary (email addresses, links, zip codes, phone numbers,
product numbers), type or format (text, HTML, PDF, images, sounds), and may even
be machine generated (log files or output from a database). On the other hand,
we define external meta information as information that can be inferred about a
document, but is not contained within it. Examples of external meta information
include things like reputation of the source, update frequency, quality,
popularity or usage, and citations. Not only are the possible sources of
external meta information varied, but the things that are being measured vary
many orders of magnitude as well. For example, compare the usage information
from a major homepage, like Yahoo's which currently receives millions of page
views every day with an obscure historical article which might receive one view
every ten years. Clearly, these two items must be treated very differently by a
search engine.”

“Another big difference between the web and traditional well controlled
collections is that there is virtually no control over what people can put on
the web. Couple this flexibility to publish anything with the enormous influence
of search engines to route traffic and companies which deliberately manipulating
search engines for profit become a serious problem. This problem... has not been
addressed in traditional closed information retrieval systems.”

So they are using metadata to create rankings for resource discovery, just not
metadata descriptions which have been deliberately created; those they in fact
avoid, so as not to be misled in their rankings. The metadata they use is *the
document itself*, and information associated with it. This metadata includes
words in the document, the position of the words, and even information about
the fonts used.
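
A toy illustration of that idea (mine, not code from the paper, and the
weights are invented): each occurrence of a query word is a "hit" recording
where and how the word appeared, and ranking sums weighted hits.

# Invented weights, echoing the paper's plain vs. fancy hits: occurrences
# in titles and anchor text count for more than plain body text.
HIT_WEIGHTS = {
    "title": 8.0,     # word appears in the page title
    "anchor": 6.0,    # word appears in anchor text pointing at the page
    "large_font": 3.0,
    "plain": 1.0,
}

def score(hits):
    """Sum weighted hits for one query word in one document.

    `hits` is a list of (position, kind) pairs; early positions get a
    small boost, echoing the paper's use of word position.
    """
    total = 0.0
    for position, kind in hits:
        boost = 1.5 if position <= 100 else 1.0
        total += HIT_WEIGHTS.get(kind, 1.0) * boost
    return total

# A single title hit outranks several plain-text occurrences:
print(score([(3, "title")]))                    # 12.0
print(score([(200, "plain"), (300, "plain")]))  # 2.0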

I think we know that this works. It is an approach which expressly does not
require the user to type in ever more complicated search strings in order to get
relevant results. In essence a DC metadata record is an extended search string
with a formal structure. The recasting of DC in the form of the SWAP profile,
based on a FRBR-style approach to structured metadata, made the search string
much more complicated, but not necessarily more efficient in terms of resource
discovery.

***
The creation of the Dublin Core metadata standard was of its time and of its
particular circumstances. It addressed issues extrapolated from experience with
traditional closed information retrieval systems, and attempted to apply the
responses to those issues in the web environment. It will work within a closed
information retrieval system, such as a library or a network of libraries,
where standards can be agreed and implemented in a more or less standardised
way, but it isn't ideally adapted to the conditions of the uncontrolled web, as
we have found. And much interesting information is now outside the academic
digital library, and entirely without DC metadata, or even any formal metadata
at all. Which is why Google is our first port of call in looking for
information.

CERIF is a metadata format which is ideal for use within a closed information
retrieval system, and has a future ahead of it in connection with scientific
research. It will not have a significant future beyond closed environments, even
if the closed environments are within web clouds, because it is too complicated,
and consequently too expensive to implement properly. RDF and the FRBR approach
to record creation had the same problems.
 
I leave out any discussion of other forms of information retrieval systems which
use text-mining techniques, and are able to find and sort documents which have
relevance, but not obvious relevance, to a query. I've seen these in operation
and they work rather well. These tools are of more use to researchers than the
development of ever more complex metadata.

Best,

Philip Hunter
[log in to unmask]


Quoting Andy Powell <[log in to unmask]>:

> History again...
>
> Re: 'the DC flat file format' makes no sense to me.
>
> Unfortunately, this is a misunderstanding that, to this day, the DCMI has not
> managed to overcome. I don't really know why - I spent long enough trying, so
> I see it as something of a personal failure.
>
> My suspicion is that the 'flat' use of so-called simple DC in things like
> the OAI-PMH played a large part in promoting the misunderstanding and quite
> probably did harm to the adoption of both DC and OAI-PMH (though I may be out
> of touch) over the long term. Unfortunately, the alternative, and correct,
> world view of DC as being closely aligned with the RDF model struggled with
> the same kind of adoption issues as RDF itself.
>
> I don't know CERIF, but my suspicion is that it probably represents a more
> realistic middle ground in terms of likelihood of adoption against expressive
> capability in the repositories space.
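
As a small sketch of the 'DC as RDF' world view Andy describes above (using
rdflib; the resource and values are invented), the same description becomes
triples about a resource rather than fields in a flat record:

from rdflib import Graph, Literal, URIRef
from rdflib.namespace import DC

g = Graph()
paper = URIRef("http://example.org/repository/47853")
# Each DC element is a property of the resource, not a slot in a record.
g.add((paper, DC.title, Literal("An example paper")))
g.add((paper, DC.creator, Literal("A. Author")))
g.add((paper, DC.identifier, URIRef("https://doi.org/10.1234/example")))

print(g.serialize(format="turtle"))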
>
> Andy Powell
> Head of Strategic Communications
>
> Eduserv
>
> [log in to unmask] | 01225 474 319 | 07989 476 710
> www.eduserv.org.uk | http://www.twitter.com/andypowe11 |
> http://www.eduserv.org.uk/blog | http://www.linkedin.com/company/eduserv
>
> Eduserv is a company limited by guarantee (registered in England & Wales,
> company number: 3763109) and a charity (charity number 1079456), whose
> registered office is at Royal Mead, Railway Place, Bath, BA1 1SR.
> -----Original Message-----
> From: Repositories discussion list [mailto:[log in to unmask]]
> On Behalf Of Paul Walk
> Sent: 20 March 2014 17:54
> To: [log in to unmask]
> Subject: Re: DC OAI-PMH
>
> Anna,
>
> 'the DC flat file format' makes no sense to me.
>
> CERIF or Dublin Core (or many other things) can be serialised to XML -
> whereupon they are often conveyed in a file.
>
> CERIF has an entity-relationship model behind it - I think this must be what
> you mean by 'normalised'. But so does Dublin Core.
>
> Also - the word 'standard' is used variously in these discussions. I think
> the most usual meaning in this context is "agreement on what terms to use and
> in what arrangement". I don't see that CERIF is a standard in this sense, any
> more than Dublin Core is, as either will need extra constraints to be
> applied.
>
> I think, perhaps, that the main point you are making is that "we can use
> CERIF". I agree. We could also use Dublin Core. However - CERIF has gained
> enough momentum for it to be the approach that I would back for future
> development.
>
> So, we may be in essential agreement about what to do (if not why)
>
> :-)
>
> Cheers,
>
> Paul
>
> On 20 Mar 2014, at 17:26, Anna Clements <[log in to unmask]> wrote:
>
> >
> > ... we don't need a new standard .. we can use CERIF. It will need
> > guidelines agreed, as is happening for OpenAIRE, but because it is a
> > normalised data structure (unlike the DC flat file format) it is inherently
> > easier to identify where specific data items should be recorded. The
> > semantic model within CERIF also allows flexible and scalable use of
> > vocabularies and the mapping between them; and the ability to record
> > time-stamped, role-based relationships between entities provides rich, and
> > again scalable, contextual information.
> >
> > Anna
> >
> > ______________________________________________________
> > Anna Clements | Head of Research Data and Information Services
> >
> > University of St Andrews Library | North Street | St Andrews | KY16
> > 9TR|
> > T:01334 462761 | @AnnaKClements
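
A minimal sketch of the time-stamped, role-based links Anna describes above (a
toy model with invented field names, not the actual CERIF schema):

from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class Link:
    source: str                  # e.g. a person identifier
    target: str                  # e.g. a publication identifier
    role: str                    # drawn from a declared vocabulary
    start: date                  # when the relationship began
    end: Optional[date] = None   # open-ended if None

# One person, one publication, two roles over different periods:
links = [
    Link("person:42", "pub:7", "author", date(2013, 6, 1)),
    Link("person:42", "pub:7", "corresponding-author",
         date(2013, 6, 1), date(2014, 3, 1)),
]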
> >
> > ________________________________________
> > From: Repositories discussion list [[log in to unmask]]
> > on behalf of Jez Cope [[log in to unmask]]
> > Sent: 20 March 2014 17:02
> > To: [log in to unmask]
> > Subject: Re: DC OAI-PMH
> >
> > I had a similar experience for the exceptionally simple use case of
> > trying to map DOIs onto repository records, in the naive hope of allowing
> > users to look up a green OA copy of a paper from its DOI.
> >
> > I picked two repositories at random to try and do this with and found
> > two completely different ways of reporting the DOI: one in dc:relation
> > and one in dc:identifier.
> >
> > I suspect the problem is that for things like this, DC is too generic
> > and therefore too open to interpretation.
> >
> > If anyone's interested, the code is here:
> >
> > https://github.com/jezcope/doi2oa
> >
> > Of course, coming up with a new standard does put me in mind of this
> > cautionary tale:
> >
> > https://xkcd.com/927/
> >
> > Jez
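
A minimal sketch of the lookup Jez describes (standard library only; the
endpoint and identifier below are illustrative): fetch one oai_dc record and
scan both dc:identifier and dc:relation for something DOI-shaped.

import re
import urllib.request
import xml.etree.ElementTree as ET

DC_NS = "{http://purl.org/dc/elements/1.1/}"
DOI_RE = re.compile(r"10\.\d{4,9}/\S+")

def find_dois(base_url, identifier):
    url = (f"{base_url}?verb=GetRecord&metadataPrefix=oai_dc"
           f"&identifier={identifier}")
    with urllib.request.urlopen(url) as response:
        tree = ET.parse(response)
    dois = set()
    # Repositories disagree on placement, so check both elements.
    for tag in ("identifier", "relation"):
        for element in tree.iter(DC_NS + tag):
            match = DOI_RE.search(element.text or "")
            if match:
                dois.add(match.group())
    return dois

# e.g. against the EPrints record cited further down:
# find_dois("http://sro.sussex.ac.uk/cgi/oai2", "oai:sro.sussex.ac.uk:47853")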
> >
> > Chris Keene <[log in to unmask]> writes:
> >
> >> In the early days of repositories I know a lot of work went into defining
> >> standards for making them interoperable and exposing their data, notably
> >> the OAI initiative. I'm hoping some who were involved in (or who followed)
> >> those developments could help enlighten me.
> >>
> >> For a number of years I've been curious about the reasoning behind
> >> adopting Dublin Core via OAI-PMH as the de facto way to harvest and obtain
> >> metadata from a repository. (DC isn't the only format, but it is by far
> >> the most commonly used.)
> >>
> >> To use data exposed by a system - such as a repository - the first thing I
> would have thought you need to do is interpret the incoming information.
> >>
> >> When reading information from an IR, the system/script that is importing
> >> it needs to establish a number of things:
> >> - common bibliographic fields: title, authors, date, publisher, vol/issue,
> >>   issn/isbn, publication title etc.
> >> - DOI
> >> - link to IR record
> >> - is full text available? If so, where, and in what format?
> >> - what type of item it is
> >> - description, citation, subjects etc.
> >>
> >> While using a common standard (DC) is clearly a good thing, processing
> >> the above can be a challenge, especially as different repository software
> >> platforms and versions can present key pieces of information in different
> >> ways. This is perhaps made a little harder as there is no field to specify
> >> the software/version in the metadata output.
> >>
> >> I'll give a couple of examples.
> >> To extract the vol/issue/publication title involves looking at all the
> >> "dc:identifier" fields, identifying which identifier contains a citation,
> >> and then deconstructing the citation to extract the data (and parsing
> >> citations is no easy process in itself).
> >>
> >> To determine whether a record has the full text openly available, i.e. OA
> >> (with an EPrints system): check to see if there is a dc:format - if it
> >> exists, there is a file associated with the record.
> >> But to check it is OA, and not locked down (which is quite common), find
> >> the dc:identifier which starts with the same domain name as the OAI
> >> interface, presume it is a link to the full text, and try to access it; if
> >> you succeed (HTTP status code 200) then it is OA. Though if you only have
> >> the metadata to work with and can't retrieve the URL while processing the
> >> record, you obviously can't do this.
> >> DSpace provides quite different data via OAI-PMH, so this method would not
> >> work.
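
A rough sketch of those two heuristics as code (my simplification of what
Chris describes, EPrints-shaped; the record is assumed to be already parsed
into a dict mapping DC field names to lists of values):

import urllib.error
import urllib.request

def looks_like_citation(value):
    # Guess which dc:identifier carries the citation: citation strings
    # tend to contain digits and commas and are not bare URLs.
    return ("," in value
            and any(ch.isdigit() for ch in value)
            and not value.startswith("http"))

def is_probably_oa(record, oai_domain):
    """record example: {"format": ["application/pdf"],
                        "identifier": ["http://...", "Journal, 12(3)..."]}"""
    if not record.get("format"):
        return False  # no dc:format => no file attached (EPrints)
    for value in record.get("identifier", []):
        if value.startswith(oai_domain):
            # Presume a same-domain link is the full text; a 200 response
            # suggests it is not locked down. (Not possible if you can
            # only see the metadata and cannot fetch URLs.)
            try:
                with urllib.request.urlopen(value) as response:
                    return response.status == 200
            except urllib.error.URLError:
                return False
    return False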
> >>
> >> The reason I bring this up now is that I'm currently trying to improve
> >> how our repository records are displayed in our discovery system (Primo,
> >> from Ex Libris); the metadata is currently so poor that we have hidden
> >> them.
> >> A key concept of these systems is that they know which items the user has
> >> access to (across all the library's collections and subscriptions), and
> >> by default return only those which the user can access. While Primo has
> >> quite a complex system for configuring how records are imported, it
> >> doesn't extend to the sort of logic described above.
> >>
> >> So from my specific use case (and other dabbling in this area), the data
> >> provided by OAI-PMH DC seems difficult to work with.
> >>
> >> I'd be interested to learn a bit of the history of the thinking behind
> >> how this approach came about, and whether there are better approaches to
> >> processing the data than those I have described here.
> >>
> >> Regards, and thanks in advance for any insights. Chris
> >>
> >> For reference here are two examples (you may find using Firefox, view
> >> source, works best).
> >> EPrints (record with a file attached, but not OA):
> >> http://sro.sussex.ac.uk/47853/
> >> OAI-PMH:
> >> http://sro.sussex.ac.uk/cgi/oai2?verb=GetRecord&metadataPrefix=oai_dc&identifier=oai:sro.sussex.ac.uk:47853
> >>
> >> DSpace:
> >> https://www.era.lib.ed.ac.uk/handle/1842/164
> >> http://www.era.lib.ed.ac.uk/dspace-oai/request?verb=GetRecord&metadataPrefix=oai_dc&identifier=oai:www.era.lib.ed.ac.uk:1842/164
> >>
> >>
> >> Chris Keene - Technical Development Manager, University of Sussex
> >> Library
> >> Contact: http://www.sussex.ac.uk/profiles/150000
> >
> > --
> > Jez Cope, Academic Digital Technologist Centre for Sustainable
> > Chemical Technologies, University of Bath
> > http://people.bath.ac.uk/jc619
> >
> > Please note: I check email at fixed intervals and aim to respond
> > within 24 hours of receiving your message. If you need a response
> > sooner, please use the following (in order of decreasing preference):
> > IM (Jabber/XMPP): [log in to unmask]
> > Skype: jezcope
> > Twitter: @jezcope
> > Tel: +44(0)1225 38 5827
>
> -------------------------------------------
> Paul Walk
> http://www.paulwalk.net
> -------------------------------------------
>



---------------------------------------------------
This mail sent through http://www.easynetdial.co.uk
