My tuppence worth on DC, which comes from a different angle (I'm interested in
the history). First I'll look at what DC was supposed to be, and to be for, at
the time of its first formal discussion in 1995. Then I'll discuss Brin and
Page's paper on the development of Google, published in 1998. Google always
was the competition here.
The DCMI goals.
The position paper which followed the first DCMI workshop (1-3 March 1995)
[http://dublincore.org/workshops/dc1/report.shtml] says a number of interesting
things:
“Goals of the workshop included (1) fostering a common understanding of the
needs, strengths, shortcomings, and solutions of the stakeholders; and (2)
reaching consensus on a core set of metadata elements to describe networked
resources.”
And that:
“Given that the majority of current networked information objects are
recognizably "documents", and that the metadata records are immediately needed
to facilitate resource discovery on the Internet, the proposed set of metadata
elements (The Dublin Core) is intended to describe the essential features of
electronic documents that support resource discovery.”
So essentially the purpose of creating the DC metadata standard was to provide
an easily understood and interoperable set of metadata elements to describe
networked resources (a pre-web phrase) which would support resource discovery.
And it was understood that metadata records were “immediately needed.”
We get a strong sense of the completely different environment that existed in
March 1995 from the following remarks:
“The explosive growth of interest in the Internet and the World Wide Web in the
past five years has created a digital extension of the academic research library
for certain kinds of materials. Valuable collections of texts, images and sounds
from many scholarly communities--collections that may even be the subject of
state-of-the-art discussions in these communities--now exist only in electronic
form and may be accessible from the Internet. Knowledge regarding the
whereabouts and status of this material is often passed on by word of mouth
among members of a given community. For outsiders, however, much of this
material is so difficult to locate that it is effectively unavailable.”
This was written three years before Google made its appearance as the Stanford
search engine. At this time resource discovery was very difficult in the open
and uncontrolled space of the web. Many scholarly collections were in the
controlled space of in-house databases and electronic archives (the digital
extensions of the academic research library). So you might be able to access
the resources if you knew the database existed, and if there was some kind of
internet interface.
The report expands on the problem:
“A number of well-designed locator services, such as Lycos, are now available
that automatically index every resource available on the Web and maintain up-to-
date databases of locations. But it has not yet been demonstrated that indexes
contain sufficiently rich resource descriptions, especially if the location
databases are large and span many fields of study. Moreover, a huge number of
resources on the Internet have no description at all beyond a filename which may
or may not carry semantic content. If these resources are to be discovered
through a systematic search, they must be described by someone familiar with
their intellectual content, preferably in a form appropriate for inclusion in a
database of pointers to resources. But current attempts to describe electronic
resources according to formal standards (e.g., the TEI header or MARC
cataloging) can accommodate only a small subset of the most important
resources.”
The key sentence here is “but it has not yet been demonstrated that indexes
contain sufficiently rich resource descriptions, especially if the location
databases are large and span many fields of study.”
Is this true? I think we now know that the alternative approach to resource
discovery taken by Google has demonstrated that search engine indexes do contain
'sufficiently rich resource descriptions', which facilitate resource discovery,
and I think we knew that was going to be the case fifteen years ago.
The Google approach to resource discovery.
The first detailed description of what Google was supposed to do appears in the
Brin and Page paper:
['The Anatomy of a Large-Scale Hypertextual Web Search Engine', Sergey Brin and
Lawrence Page {sergey, [log in to unmask]}, Computer Science Department,
Stanford University, Stanford, CA 94305.
http://infolab.stanford.edu/~backrub/google.html]
“Apart from the problems of scaling traditional search techniques to data of
this magnitude [Brin and Page were talking about a database covering some 24
million pages in 1998], there are new technical challenges involved with using
the additional information present in hypertext to produce better search
results. This paper addresses this question of how to build a practical large-
scale system which can exploit the additional information present in hypertext.
Also we look at the problem of how to effectively deal with uncontrolled
hypertext collections where anyone can publish anything they want.”
That was the goal. Note that they are talking about both controlled and
uncontrolled hypertext collections. We tend not to talk in this way anymore,
though I daresay they still do inside Google.
In the detail of the document Brin and Page note that:
“...as the collection size grows, we need tools that have very high precision
(number of relevant documents returned, say in the top tens of results). Indeed,
we want our notion of "relevant" to only include the very best documents since
there may be tens of thousands of slightly relevant documents. This very high
precision is important even at the expense of recall (the total number of
relevant documents the system is able to return). There is quite a bit of recent
optimism that the use of more hypertextual information can help improve search
and other applications ... In particular, link structure... and link text
provide a lot of information for making relevance judgments and quality
filtering. Google makes use of both link structure and anchor text.”
In other words, Google was intended to work both with link structure and with
the actual text of documents (the hypertextual documents) in order to judge
relevance and quality. That's what they did, and it is what they are still
doing. We despaired that they weren't much interested in formal metadata, but
then, as they say in their paper, metadata options were being misused.
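To make the link-structure idea concrete, here is a minimal sketch in Python of
a PageRank-style calculation: rank flows around a link graph by power
iteration, so a page's importance comes from the pages that point to it. The
toy graph, the damping factor and the iteration count are my own illustrative
assumptions, not values from the Brin and Page paper.

# Minimal PageRank-style power iteration over a toy link graph.
# The graph, damping factor and iteration count are illustrative
# assumptions, not values from Brin and Page's paper.

links = {
    "a": ["b", "c"],
    "b": ["c"],
    "c": ["a"],
    "d": ["c"],  # "d" links out, but nothing links to it
}

def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        # Every page keeps a small baseline share of rank...
        new_rank = {p: (1.0 - damping) / len(pages) for p in pages}
        # ...and passes the rest along its outgoing links.
        for page, outlinks in links.items():
            share = rank[page] / len(outlinks)
            for target in outlinks:
                new_rank[target] += damping * share
        rank = new_rank
    return rank

for page, score in sorted(pagerank(links).items(), key=lambda kv: -kv[1]):
    print(page, round(score, 3))

The point of the sketch is that the ranking signal comes out of the link graph
itself; no deliberately authored description of any page is needed.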
One of their principal design goals was
“to build an architecture that can support novel research activities on large-
scale web data. To support novel research uses, Google stores all of the actual
documents it crawls in compressed form. One of our main goals in designing
Google was to set up an environment where other researchers can come in quickly,
process large chunks of the web, and produce interesting results that would have
been very difficult to produce otherwise. In the short time the system has been
up, there have already been several papers using databases generated by Google,
and many others are underway.”
They explore the differences between the Web and Well Controlled Collections
(section 3.2):
“The web is a vast collection of completely uncontrolled heterogeneous
documents. Documents on the web have extreme variation internal to the
documents, and also in the external meta information that might be available.
For example, documents differ internally in their language (both human and
programming), vocabulary (email addresses, links, zip codes, phone numbers,
product numbers), type or format (text, HTML, PDF, images, sounds), and may even
be machine generated (log files or output from a database). On the other hand,
we define external meta information as information that can be inferred about a
document, but is not contained within it. Examples of external meta information
include things like reputation of the source, update frequency, quality,
popularity or usage, and citations. Not only are the possible sources of
external meta information varied, but the things that are being measured vary
many orders of magnitude as well. For example, compare the usage information
from a major homepage, like Yahoo's which currently receives millions of page
views every day with an obscure historical article which might receive one view
every ten years. Clearly, these two items must be treated very differently by a
search engine.”
“Another big difference between the web and traditional well controlled
collections is that there is virtually no control over what people can put on
the web. Couple this flexibility to publish anything with the enormous influence
of search engines to route traffic and companies which deliberately manipulate
search engines for profit become a serious problem. This problem... has not been
addressed in traditional closed information retrieval systems.”
So they are using metadata to create rankings for resource discovery, just not
metadata descriptions which have been deliberately created; those they in fact
avoid, because deliberately created descriptions can mislead rankings. The
metadata they use is *the document itself*, and the information associated with
it: the words in the document, the position of the words, and even information
about the fonts used.
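To illustrate, here is a toy version of that idea in Python. The paper
describes an index built of 'hits', each recording a word occurrence together
with its position, font size and capitalisation; the field names and structure
below are my own assumptions, not Google's actual encoding.

from dataclasses import dataclass

@dataclass
class Hit:
    word_id: int       # id of the word in the lexicon
    position: int      # word offset within the document
    font_size: int     # relative font size (larger = more prominent)
    capitalised: bool  # capitalisation bit

# A toy inverted index: word id -> document id -> hits for that word.
# Ranking can then weight, say, a large-font capitalised title word
# more heavily than the same word in small body text.
index = {
    42: {
        7: [Hit(word_id=42, position=0, font_size=4, capitalised=True)],
    },
}

In other words, the 'metadata' is generated from the document in the course of
indexing, rather than supplied alongside it.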
I think we know that this works. It is an approach which expressly does not
require the user to type in ever more complicated search strings in order to get
relevant results. In essence a DC metadata record is an extended search string
with a formal structure. The recasting of DC in the form of the SWAP profile,
based on a FRBR-style approach to structured metadata, made the search string
much more complicated, but not necessarily more efficient in terms of resource
discovery.
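To make the comparison concrete, here is a small Python sketch of a simple DC
record treated as exactly that: an extended, fielded search string. The record
contents and the query mapping are illustrative assumptions on my part, not any
DCMI-specified serialisation.

# A simple DC record flattened into a fielded query string.
# The record and the mapping are illustrative, not a DCMI spec.

record = {
    "dc:title": "The Anatomy of a Large-Scale Hypertextual Web Search Engine",
    "dc:creator": ["Sergey Brin", "Lawrence Page"],
    "dc:date": "1998",
    "dc:type": "Text",
}

def record_to_query(record):
    """Flatten a simple DC record into a fielded query string."""
    parts = []
    for element, value in record.items():
        values = value if isinstance(value, list) else [value]
        field = element.split(":", 1)[1]  # drop the "dc:" prefix
        parts.extend(f'{field}:"{v}"' for v in values)
    return " AND ".join(parts)

print(record_to_query(record))

Each layer of structure added to the record (SWAP, FRBR) lengthens that query
without obviously improving what it retrieves.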
***
The creation of the Dublin Core metadata standard was of its time and its
particular circumstances. It addressed issues extrapolated from experience with
traditional closed information retrieval systems, and attempted to apply the
responses to these issues in the web environment. It will work within a closed
information retrieval system, such as a library or a network of libraries, where
standards can be agreed, and implemented in a more or less standardised way, but
it isn't ideally adapted to the conditions of the uncontrolled web, as we have
found. And much interesting information is now outside the academic digital
library, and entirely without DC metadata, or even any formal metadata at all.
Which is why Google is our first port of call when looking for information.
CERIF is a metadata format which is ideal for use within a closed information
retrieval system, and has a future ahead of it in connection with scientific
research. It will not have a significant future beyond closed environments, even
if the closed environments are within web clouds, because it is too complicated,
and consequently too expensive to implement properly. RDF and the FRBR approach
to record creation had the same problems.
I leave out any discussion of other forms of information retrieval systems which
use text-mining techniques, and are able to find and sort documents which have
relevance, but not obvious relevance, to a query. I've seen these in operation
and they work rather well. These tools are of more use to researchers than the
development of ever more complex metadata.
Best,
Philip Hunter
[log in to unmask]
Quoting Andy Powell <[log in to unmask]>:
> History again...
>
> Re: 'the DC flat file format' makes no sense to me.
>
> Unfortunately, this is a misunderstanding that, to this day, the DCMI has not
> managed to overcome. I don't really know why - I spent long enough trying so
> I see it as something of a personal failure.
>
> My suspicion is that the 'flat' use of, so-called, simple DC in things like
> the OAI-PMH played a large part in promoting the misunderstanding and quite
> probably did harm to the adoption of both DC and OAI-PMH (though I may be out
> of touch) over the long term. Unfortunately, the alternative, and correct,
> world view of DC as being closely aligned with the RDF model struggled with
> the same kind of adoption issues as RDF itself.
>
> I don't know CERIF, but my suspicion is that it probably represents a more
> realistic middle ground in terms of likelihood of adoption against expressive
> capability in the repositories space.
>
> Andy Powell
> Head of Strategic Communications
>
> Eduserv
>
> [log in to unmask] | 01225 474 319 | 07989 476 710
> www.eduserv.org.uk | http://www.twitter.com/andypowe11 |
> http://www.eduserv.org.uk/blog | http://www.linkedin.com/company/eduserv
>
> Eduserv is a company limited by guarantee (registered in England & Wales,
> company number: 3763109) and a charity (charity number 1079456), whose
> registered office is at Royal Mead, Railway Place, Bath, BA1 1SR.
> -----Original Message-----
> From: Repositories discussion list [mailto:[log in to unmask]]
> On Behalf Of Paul Walk
> Sent: 20 March 2014 17:54
> To: [log in to unmask]
> Subject: Re: DC OAI-PMH
>
> Anna,
>
> 'the DC flat file format' makes no sense to me.
>
> CERIF or Dublin Core (or many other things) can be serialised to XML -
> whereupon they are often conveyed in a file.
>
> CERIF has an entity-relationship model behind it - I think this must be what
> you mean by 'normalised'. But so does Dublin Core.
>
> Also - the word 'standard' is used variously in these discussions. I think
> the most usual meaning in this context is "agreement on what terms to use and
> in what arrangement". I don't see that CERIF is a standard in this sense, any
> more than Dublin Core is, as either will need extra constraints to be
> applied.
>
> I think, perhaps, that the main point you are making is that "we can use
> CERIF". I agree. We could also use Dublin Core. However - CERIF has gained
> enough momentum for it to be the approach that I would back for future
> development.
>
> So, we may be in essential agreement about what to do (if not why)
>
> :-)
>
> Cheers,
>
> Paul
>
> On 20 Mar 2014, at 17:26, Anna Clements <[log in to unmask]> wrote:
>
> >
> > ... we don't need a new standard .. we can use CERIF. It will need
> guidelines agreed, as is happening for OpenAIRE, but being a normalised data
> structure (unlike the DC flat file format) it is inherently easier to
> identify where specific data items should be recorded. The semantic model
> within CERIF also allows flexible and scalable use of vocabularies and the
> mapping between them; and the ability to record time-stamped, role-based
> relationships between entities provides rich, and again scalable, contextual
> information.
> >
> > Anna
> >
> > ______________________________________________________
> > Anna Clements | Head of Research Data and Information Services
> >
> > University of St Andrews Library | North Street | St Andrews | KY16
> > 9TR|
> > T:01334 462761 | @AnnaKClements
> >
> > ________________________________________
> > From: Repositories discussion list [[log in to unmask]]
> > on behalf of Jez Cope [[log in to unmask]]
> > Sent: 20 March 2014 17:02
> > To: [log in to unmask]
> > Subject: Re: DC OAI-PMH
> >
> > I had a similar experience for the exceptionally simple use case of
> > trying to map DOIs onto repository records, in the naive hope of allowing
> > users to look up a green OA copy of a paper from its DOI.
> >
> > I picked two repositories at random to try and do this with and found
> > two completely different ways of reporting the DOI: one in dc:relation
> > and one in dc:identifier.
> >
> > I suspect the problem is that for things like this, DC is too generic
> > and therefore too open to interpretation.
> >
> > If anyone's interested, the code is here:
> >
> > https://github.com/jezcope/doi2oa
> >
> > Of course, coming up with a new standard does put me in mind of this
> > cautionary tale:
> >
> > https://xkcd.com/927/
> >
> > Jez
> >
> > Chris Keene <[log in to unmask]> writes:
> >
> >> In the early days of repositories I know a lot of work went into defining
> standards for making them interoperable and exposing their data, notably
> the OAI initiative. I'm hoping some who were involved in (or who followed)
> those developments could help enlighten me.
> >>
> >> For a number of years I've been curious about the reasoning behind
> adopting Dublin Core via OAI-PMH as the de facto way to harvest and obtain
> metadata from a repository. (DC isn't the only format, but it is by far the
> most commonly used).
> >>
> >> To use data exposed by a system - such as a repository - the first thing I
> would have thought you need to do is interpret the incoming information.
> >>
> >> When reading information from an IR, the system/script that is importing
> it needs to establish a number of things:
> >> - common bibliographic fields; title, authors, date, publisher, vol/issue,
> issn/isbn, publication title etc.
> >> - DOI
> >> - link to IR record
> >> - is full text available? if so where, and in what format.
> >> - what type of item is it.
> >> - Description, citation, subjects etc.
> >>
> >> While using a common standard (DC) is clearly a good thing,
> >> processing the above can be a challenge, especially as different
> >> repository software platforms and versions can present key pieces of
> >> information in different ways. This is perhaps made a little harder
> >> as there is no field to specify the software/version in the metadata
> >> output.
> >>
> >> I'll give a couple of examples:
> >> Extracting the vol/issue/publication title involves looking at all the
> "dc:identifier" fields, identifying which identifier contains a citation, and
> then deconstructing the citation to extract the data (and parsing citations
> is no easy process in itself).
> >>
> >> To determine if a record has the full text openly available, i.e. OA (with
> an EPrints system): check to see if there is a dc:format - if it exists there
> is a file associated with the record.
> >> But to check it is OA, and not locked down (which is quite common), find
> the dc:identifier which starts with the same domain name as the OAI
> interface, presume it is a link to the full text, and try to access it; if you
> succeed (HTTP status code 200) then it is OA. Though if you only have the
> metadata to work with and can't try to retrieve the URL while processing the
> record, you obviously can't do this.
> >> DSpace provides quite different data via OAI-PMH so this method would not
> work.
> >>
> >> The reason I bring this up now is that I'm currently trying to improve how
> our repository records are displayed in our discovery system (Primo, from Ex
> Libris); the metadata is currently so poor that we have hidden them.
> >> A key concept of these systems is that they know which items the user has
> access to (across all the library's collections and subscriptions), and by
> default only return those which the user can access. While Primo has quite
> a complex system for configuring how records are imported, it doesn't extend
> to the sort of logic described above.
> >>
> >> So from my specific use case (and other dabbling in this area) the data
> provided by OAI-PMH DC seems difficult to work with.
> >>
> >> I'd be interested to learn a bit of the history of how
> this approach came about, and whether there are better approaches to
> processing the data than those I have described here.
> >>
> >> Regards, and thanks in advance for any insights. Chris
> >>
> >> For reference here are two examples (you may find using Firefox, view
> >> source, works best) Eprints (record with a file attached, but not OA)
> >> http://sro.sussex.ac.uk/47853/ oai
> >> http://sro.sussex.ac.uk/cgi/oai2?verb=GetRecord&metadataPrefix=oai_dc
> >> &identifier=oai:sro.sussex.ac.uk:47853
> >>
> >> Dspace
> >> https://www.era.lib.ed.ac.uk/handle/1842/164
> >> http://www.era.lib.ed.ac.uk/dspace-oai/request?verb=GetRecord&metadat
> >> aPrefix=oai_dc&identifier=oai:www.era.lib.ed.ac.uk:1842/164
> >>
> >>
> >> Chris Keene - Technical Development Manager, University of Sussex
> >> Library
> >> Contact: http://www.sussex.ac.uk/profiles/150000
> >
> > --
> > Jez Cope, Academic Digital Technologist Centre for Sustainable
> > Chemical Technologies, University of Bath
> > http://people.bath.ac.uk/jc619
> >
> > Please note: I check email at fixed intervals and aim to respond
> > within 24 hours of receiving your message. If you need a response
> > sooner, please use the following (in order of decreasing preference):
> > IM (Jabber/XMPP): [log in to unmask]
> > Skype: jezcope
> > Twitter: @jezcope
> > Tel: +44(0)1225 38 5827
>
> -------------------------------------------
> Paul Walk
> http://www.paulwalk.net
> -------------------------------------------
>