A couple of comments on Andy's email:
1. We have a repository with more than one million links -
PictureAustralia, which points to distributed images. It currently
contains 1,078,129 metadata records, contributed by 38 distributed
agencies.
2. Most of these are harvestable by search engines from
http://www.pictureaustralia.org/url-lists.html. Only the NLA's images
are excluded from this list, but they are made available to some search
engines via OAI-PMH.
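To make this concrete, here is a rough Python sketch (not the NLA's
actual tooling, and assuming the page is ordinary HTML whose <a href>
values point at the per-agency link lists) of what a crawler-side
harvest of that URL-list page might look like:

    # Sketch: collect the link-list URLs a search engine would follow
    # from the PictureAustralia url-lists page. Page structure assumed.
    from html.parser import HTMLParser
    from urllib.request import urlopen

    class LinkCollector(HTMLParser):
        """Collect every href attribute from <a> tags."""
        def __init__(self):
            super().__init__()
            self.links = []

        def handle_starttag(self, tag, attrs):
            if tag == "a":
                for name, value in attrs:
                    if name == "href" and value:
                        self.links.append(value)

    html = urlopen("http://www.pictureaustralia.org/url-lists.html").read()
    collector = LinkCollector()
    collector.feed(html.decode("utf-8", errors="replace"))
    for link in collector.links:
        print(link)  # each is a candidate page for the crawler to fetch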
To follow on from Lorcan's points:
3. "If one is looking towards creating large scale aggregations of data,
or if one is anticipating trying to provide metasearch environments
across repositories, I think there is potentially a lot of value in
working towards a simple consistent schema which is accompanied by some
'data entry' guidelines to ensure consistency."
The NLA uses Dublin Core for PictureAustralia, mapped from the
contributors' existing schemas. The data entry guidelines are available
at http://www.nla.gov.au/guidelines/metaguide.html. The architecture
supports discovery of the richer metadata as a two-part process.
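To illustrate the two-part process, here is a minimal Python sketch
with invented field names: step one reduces a richer agency record to
simple Dublin Core for discovery, and dc:identifier carries the link
back to the full record for step two.

    # Sketch only: the agency field names and URL are hypothetical.
    def to_dublin_core(agency_record: dict) -> dict:
        return {
            "dc:title": agency_record.get("caption"),
            "dc:creator": agency_record.get("photographer"),
            "dc:date": agency_record.get("date_taken"),
            "dc:subject": agency_record.get("keywords"),
            # step two: follow this link to the contributor's richer record
            "dc:identifier": agency_record.get("full_record_url"),
        }

    record = {
        "caption": "Parkes Place, Canberra",
        "photographer": "unknown",
        "date_taken": "1965",
        "keywords": ["Canberra", "streetscapes"],
        "full_record_url": "http://example.org/agency/records/12345",
    }
    print(to_dublin_core(record))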
4. "If one wants to traverse this aggregated/federated corpus with a
controlled vocabulary there is merit in asking that people use the same
one, or use several between which mappings have been created."
We support discovery of controlled vocabularies as text, but do not
exploit their hierarchies. There doesn't seem to be an interoperable
core for mapping between thesauri, as there is for descriptive metadata.
And given the broad coverage of the images, it is unlikely that
agreement could be reached on a single controlled vocabulary.
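In the absence of an interoperable core for thesaurus mapping, the
pragmatic fallback is a hand-built pairwise crosswalk, sketched below
with invented vocabulary and term names. Note that pairwise mapping
scales badly - n vocabularies need n(n-1)/2 crosswalks - which is
exactly why a single agreed vocabulary would be attractive, were it
achievable.

    # Sketch: a pairwise term crosswalk between two hypothetical thesauri.
    CROSSWALK = {
        ("agency-a-thesaurus", "motor cars"): ("agency-b-thesaurus", "automobiles"),
        ("agency-a-thesaurus", "dwellings"): ("agency-b-thesaurus", "houses"),
    }

    def translate(vocab: str, term: str, target_vocab: str):
        """Return the equivalent term in target_vocab, or None if unmapped."""
        mapped = CROSSWALK.get((vocab, term))
        if mapped and mapped[0] == target_vocab:
            return mapped[1]
        return None

    print(translate("agency-a-thesaurus", "motor cars", "agency-b-thesaurus"))
    # -> "automobiles"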
Thanks,
Debbie
Debbie Campbell
Director, Coordination Support Branch
National Library of Australia
Parkes Place
Canberra ACT 2600
Australia
em: [log in to unmask]; ph: +61 2 6262 1673; fx: +61 2 6273 2545
Australia's Research Online www.arrow.edu.au
-----Original Message-----
From: The CETIS Metadata Special Interest Group
[mailto:[log in to unmask]] On Behalf Of Andy Powell
Sent: Thursday, 7 April 2005 7:59 AM
To: [log in to unmask]
Subject: Re: cordra
On Wed, 6 Apr 2005, Dan Rehak wrote:
> First, as noted and described in the links, you have to let the
> googlebot in, and you need to give it a list of links to *all* of the
> content that you want to be indexed. You probably don't want to have a
> human-readable page with a million links, so an appropriate solution
> is to recognize when the googlebot is visiting and give it a different
> view of your site -- the page with the links.
Or have a fairly shallow browse tree which end-users and Google can
crawl sensibly?
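Back-of-the-envelope: with a branching factor of 100 links per page,
every one of a million records is three clicks from the root (100 x 100
x 100 = 1,000,000), so an ordinary crawler can reach everything without
any special treatment. A small Python sketch of the arithmetic:

    # Sketch: depth of a browse tree and the branch-page path to a record.
    def browse_depth(num_records: int, branching: int = 100) -> int:
        """Levels needed so every record is reachable from the root."""
        depth, reach = 0, 1
        while reach < num_records:
            reach *= branching
            depth += 1
        return depth

    def branch_path(record_id: int, num_records: int = 1_000_000,
                    branching: int = 100) -> list:
        """Indices of the browse pages leading down to this record."""
        path, n = [], record_id
        for _ in range(browse_depth(num_records, branching)):
            path.append(n % branching)
            n //= branching
        return list(reversed(path))

    print(browse_depth(1_000_000))  # -> 3
    print(branch_path(123_456))     # -> [12, 34, 56]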
Show me a repository in the UK (or anywhere) with a million links? OK,
I'm sure that some exist... but if we limit ourselves to thinking about
learning object repositories or eprint archives, then if we get above
1,000 objects we're doing well. In most cases 10,000 is still a distant
dream?
And in the case of eprints, most links into the eprint archive will come
directly from external pages (e.g. from an academic's list of
publications); the internal links within the archive are neither here
nor there. In that sense, the objects in the repository become just like
any other resource on the Web - they sit at the end of URLs that people
will use to create links.
The same will be true of learning object repositories, unless people put
daft authentication challenges in the way or design their systems in
such a way that people can't make direct links in to the content.
Now, I agree that there's an issue about how deep Google will crawl. But
one of the interesting features of the Google Scholar discussions is
that Google seem to be willing to modify their crawling strategies in
order to pull in high-quality stuff.
So I'd anticipate that the environment will change significantly over
the next year or so in terms of what Google does and doesn't get to.
> Next you have to make sure that googlebot will harvest all of the
> links. The various descriptions indicate that it is not by default an
> exhaustive harvest, and the googlebot will revisit the site many times.
>
> Once google harvests, it has to index what it found. Again, by default
> it doesn't treat learning content in any special way. Does DC:Title
> mean anything special? How do I get precise search results using the
> metadata that is associated with the content?
W.r.t. both these points, there do appear to be indications that Google
is tentatively considering the use of OAI-PMH to get at stuff in
repositories - at least for DSpace repositories. What impact this may
have, even if Google does start to do this, is debatable in the current
environment, since people use OAI-PMH somewhat inconsistently (in terms
of how they construct their metadata records and links to the object) -
but, again, it's potentially quite an interesting development.
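For what it's worth, the harvest itself is simple. Here is a minimal
Python sketch of a single ListRecords request for simple Dublin Core;
the endpoint URL is invented, but the verb, metadataPrefix, and
namespaces are standard OAI-PMH / oai_dc (no resumptionToken handling,
so it only fetches the first batch).

    # Sketch: one OAI-PMH ListRecords request against a hypothetical endpoint.
    from urllib.request import urlopen
    import xml.etree.ElementTree as ET

    NS = {
        "oai": "http://www.openarchives.org/OAI/2.0/",
        "dc": "http://purl.org/dc/elements/1.1/",
    }

    url = ("http://repository.example.ac.uk/oai/request"
           "?verb=ListRecords&metadataPrefix=oai_dc")
    tree = ET.parse(urlopen(url))

    for record in tree.iterfind(".//oai:record", NS):
        title = record.findtext(".//dc:title", default="(no title)",
                                namespaces=NS)
        # The inconsistency mentioned above shows up here: dc:identifier
        # may be a splash-page URL, a direct file URL, or not a URL at all.
        identifier = record.findtext(".//dc:identifier", default="",
                                     namespaces=NS)
        print(title, "->", identifier)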
And the issue of metadata-based approaches vs. full-text indexing is
clearly contentious. Is it fair to say that there are few examples of
really successful services based on end-user-created metadata? There
are exceptions of course - arXiv is one. Is it also fair to say that
cataloguer-created metadata is expensive - to the point that it doesn't
scale up well to cataloguing stuff in the Internet environment?
And is it fair to say that in the learning object world there are likely
to be even fewer examples of good-quality metadata created by end-users,
since the properties and allowed values in the educational parts of LOM
are so fuzzy? The evidence I've seen (e.g. Jean Godby's work at OCLC) is
that people don't actually create much metadata that isn't essentially
Dublin Core-like.
Given that we're typically not willing to pay cataloguers to describe
stuff in repositories, and we may not be able to rely on the quality of
end-user supplied metadata (particularly educational metadata), my
suspicion is that we're still a long way from being able to create
really good discovery services based solely on the metadata in
repositories.
Now, it seems to me, the answer lies in some hybrid approach where you
mix end-user supplied metadata, automatically content-derived metadata,
and full-text indexing, and you get the best of all three. And this is
the direction I'd like to see Google Scholar going in.
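To show the shape of the idea (and nothing more), here is a toy Python
sketch that blends the three signals with arbitrary weights; the field
names and weights are illustrative, not a real ranking formula.

    # Sketch: a weighted blend of metadata, auto-derived, and full-text
    # matching. Each score is the fraction of query terms in that field.
    def hybrid_score(query: str, doc: dict, w_meta: float = 0.4,
                     w_auto: float = 0.2, w_text: float = 0.4) -> float:
        terms = query.lower().split()
        def fraction(field: str) -> float:
            text = doc.get(field, "").lower()
            return sum(t in text for t in terms) / len(terms)
        return (w_meta * fraction("metadata")
                + w_auto * fraction("derived")
                + w_text * fraction("fulltext"))

    doc = {
        "metadata": "Introductory statistics learning object",
        "derived": "statistics probability histograms",  # e.g. keyphrases
        "fulltext": "This module introduces descriptive statistics...",
    }
    print(hybrid_score("statistics module", doc))  # roughly 0.7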
> I also understand that the googlebot makes many ranking decisions --
> what to harvest, what to index, what to display, so the google view of
> your repository, and what the user in the google search result sees,
> may both be different from what you have or what you would see from a
> direct repository search.
>
> There have also been problems with content that has a URI that is a
> persistent ID, e.g., a PURL, a DOI. Google thinks that the content is
> "owned" by the URL owner. The pagerank for http://resolver/id is based
> on the pagerank of "resolver", not of the actual content.
Don't get me started on identifiers! :-) But just to note that this is
one of the problems with any identifier that can only be used on the Web
by mapping it to a URL by some sort of proxy (and the same is true of
PURLs). Essentially this approach breaks the current Web, particularly
for services like Google that try to infer knowledge from the linkages
between stuff.
That said, I thought some limited experiments I'd done indicated that
Google treated HTTP redirects reasonably sensibly - i.e. that it passed
the PageRank on to the linked resource. But perhaps I misunderstood what
I was seeing...
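The experiment is easy to repeat. Here is a small Python sketch that
asks a resolver-style URL for its redirect status without following it;
the URL is a placeholder. Crawlers conventionally treat a 301
(permanent) redirect as passing the link's weight to the target, while
a 302 (temporary) is more ambiguous - which is why the status code
matters for PURL- and DOI-style resolvers.

    # Sketch: inspect the redirect a resolver returns, without following it.
    import http.client
    from urllib.parse import urlparse

    def redirect_of(url: str):
        parts = urlparse(url)
        conn = http.client.HTTPConnection(parts.netloc)
        conn.request("HEAD", parts.path or "/")
        resp = conn.getresponse()
        return resp.status, resp.getheader("Location")

    status, target = redirect_of("http://purl.example.org/net/some-id")
    print(status, target)  # e.g. 302 and the resource's current location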
> But I think they
> have been working on this problem for some collections, like Crossref.
Yes. If you are sitting on a collection that Google think is valuable
(i.e. of value to Google's end-users) then Google are probably willing
to talk to you about how they can get at your content.
> So while Google Scholar helps, it does not yet solve the problem of
> getting precise results from all the content in the repositories.
Agreed... but I think the future lies in sensible dialogue with services
like Google and not simply knocking them because they don't use the same
notions of metadata as we do?
Andy
--
Distributed Systems, UKOLN, University of Bath, Bath, BA2 7AY, UK
http://www.ukoln.ac.uk/ukoln/staff/a.powell/ +44 1225 383933
Resource Discovery Network http://www.rdn.ac.uk/