Hi Scott,
Thanks for these thoughts. It's so nice to see that you're finally
starting to appreciate the joys of metadata; I knew you'd come round
eventually... ;-)
> One of the things I find quite difficult in discussions of LOM is a
> kind of vagueness of purpose; LOM seems to be considered to be
> metadata for discovery, for use, and for the management of objects.
I agree, LOM really suffers from trying to be all things to all
people. We can't really fault the developers of LOM for being
ambitious, but this vagueness of purpose has resulted in serious
implementation issues further down the line.
> For example, the recent suggestions on identifying authority lists for
> keywords and subject provenance would fit within a metadata management
> 'use case', but I'm not sure this could be used in discovery (too
> 'expensive' in processing time compared with the more usual
> thesaurus-lookup or pre-normalized index approach).
>
> Perhaps the "future of LOM" should be to look at breaking up the
> specification into:
>
> - metadata for discovering learning objects (probably DC + a couple of
> LOM fields; an Attribute Set in other words)
> - metadata for learning object deployment and use (technical format,
> rights, etc)
> - metadata for learning object management (all that classification
> stuff, metametadata etc, more like a traditional catalogue schema)
>
> In the case of systems that use adaptive learning and/or intelligent
> tutoring there is a case for these being combined, but in the HE/FE
> scenario it makes more sense I think to break the spec up, as the
> resulting profiles in each usage area will have a greater degree of
> homogeneity than profiles of the whole LOM.
There has been quite widespread discussion over the last few years
about the pros and cons of disaggregating LOM and moving towards more
granular metadata specifications. I'm inclined to agree that it would
be very useful to have multiple discrete metadata schemas that could be
aggregated in different configurations and application profiles to meet
the requirements of many different communities, applications and use
cases. I suppose ultimately this leads us towards the RDF / Semantic
Web model; however, I think we need to be pragmatic about the way we
move from the kind of metadata we have now (LOM and DC) to the kind of
metadata we want in the future (??)
We still require considerable debate in order to identify what kind of
metadata we actually need and how granular any future metadata
specifications and schemas should be. I rather like the functional
characteristics you've outlined above; however, we also need to think
carefully about what type of metadata we need to fulfill these key
functions. For example, where would accessibility metadata fit in?
And what about educational / pedagogic metadata, quality metadata,
contextual metadata, user-generated metadata, etc., etc.?
> Typically the only thing 'usefully' shared at the moment is the
> discovery metadata.
>
> I think Lorna may also have suggested something similar some time ago,
> but I can't be sure!
I'm sure I probably did! :-)
Bye
Lorna
>
> Anyway, just an idea...
>
> Responding to Boon's comments on searching, I think the future will
> yield two types of federated search:
>
> (1) searching open-access collections via Google or more specialized
> services, using metadata harvesting as the primary approach
> (2) searching restricted-access collections via parallel search
> requiring identity assertions via SAML or similar
>
> (You can't do (2) with a harvesting approach unless you use a rights
> expression language, in which case ContentGuard will impose a patent
> toll on you.)
>
> On the performance side of parallel search, in practical terms adding
> more targets can actually speed up a search rather than slow it down
> - it takes roughly the same time to get 10 records each from 100
> targets as from 10 (assuming you have plenty of spare threads), but
> you amass a local cache of 1,000 records instead of 100, so you don't
> hit the network again as often in the same session because you can
> iterate over the cached results (25 at a time, typically). The issue
> with scaling federated parallel search is memory usage, not response
> time, as the threads multiply to quite horrific numbers when you have
> lots of users cross-searching lots of targets.
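
Just to make Scott's point concrete, here's a rough Python sketch of the
thread-per-target search and local result cache he describes. Everything
here is made up for illustration (the target names, the search_target
stub), not taken from any real product:

    from concurrent.futures import ThreadPoolExecutor

    def search_target(target, query, count=10):
        # placeholder: a real client would speak SRU, Z39.50 or similar
        return [f"{target}:record-{i}" for i in range(count)]

    def federated_search(targets, query, per_target=10, max_threads=100):
        cache = []
        with ThreadPoolExecutor(max_workers=max_threads) as pool:
            futures = [pool.submit(search_target, t, query, per_target)
                       for t in targets]
            for f in futures:
                cache.extend(f.result())  # elapsed time ~ slowest target, not the sum
        return cache

    results = federated_search(["target-%d" % n for n in range(100)], "metadata")
    first_page = results[:25]   # later pages come straight from the local cache

With enough spare threads the elapsed time is set by the slowest target
rather than by how many you ask, which is exactly why the cost of adding
targets shows up as memory rather than response time.
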
>
> It's swings and roundabouts though - with harvesting you need ever
> greater storage capacity, with parallel search you need lots of RAM or
> great RAM caching. It's just that the former is cheaper these days, so
> (1) looks like a good bet to me for open collections. Me, I've been
> doing a lot of work on (2) lately...
>
> -S
>
> On 11 Apr 2005, at 06:21, Dempsey,Lorcan wrote:
>
>>
>> A few comments on this interesting thread .... I deliberately take a
>> pragmatic short-term view. Maybe magic will emerge further out ...
>>
>> 1. Metadata fields.
>>
>> In another post on the blog Boon mentions I comment on some recent
>> discussion about MARC and XML:
>> http://orweblog.oclc.org/archives/000616.html. There I discuss what I
>> call the 'classical' library metadata stack:
>>   - encoding (e.g. ISO 2709 / Z39.2)
>>   - 'content designation' or 'element sets' (e.g. various MARC formats)
>>   - content values (e.g. cataloging rules, authority files,
>>     terminologies, ...)
>>
>> Putting to one side how effective this approach is ;-) one of the
>> issues that experiences with harvesting have clearly shown is the
>> difficulty of creating a consolidated resource from data that is
>> progressively less uniform as you move up the stack. One of the issues
>> with consolidating IEEE LOM metadata will be the absence of content
>> standards and the variety within the 'element sets'.
>>
>> (I am not saying that the 'classical library metadata stack' should be
>> adopted, merely using it to identify some levels of interoperability.
>> And certainly not suggesting that anybody look at something of the
>> complexity of AACR!)
>>
>> 2. Terminologies
>>
>> Scott Wilson suggested that the recent discussion of terminologies on
>> this list would have benefited from some use cases. This is clearly so.
>> If you are interested in creating a specialised resource for a defined
>> community, then a specialised vocabulary which you can grow based on
>> your understanding of your domain and your users' practice may be
>> sensible. If you want to create large aggregated resources across many
>> repositories, or if you want to build services on top of distributed
>> repositories, or if you want to 'publish' your resource into a larger
>> federation/aggregation, then there is benefit in looking for more
>> consistent general approaches. Clearly in each case there are trade-offs
>> (this is putting to one side questions about the value of controlled
>> vocabularies in the first place).
>>
>> 3. An ideal world
>>
>> Well an ideal world will never exist ;-) Which does not mean that we
>> should not work towards it. But in working towards it we should bear in
>> mind what is likely to remain hypothetical and unfulfilled and what is
>> likely to be achieved. This involves questions of cost, of service
>> development, of technology and so on. Cost is an issue that tends to be
>> ignored in many discussions: much of our current metadata creation
>> activity simply will not scale for cost reasons.
>>
>> 4. So ...
>>
>> If one is looking towards creating large scale aggregations of data, or
>> if one is anticipating trying to provide metasearch environments across
>> repositories, I think there is potentially a lot of value in working
>> towards a simple consistent schema which is accompanied by some 'data
>> entry' guidelines to ensure consistency.
>>
>> If one wants to traverse this aggregated/federated corpus with a
>> controlled vocabulary there is merit in asking that people use the same
>> one, or use several between which mappings have been created.
>>
>> 5. But ...
>>
>> Of course this does not address the issue of working between this corpus
>> of data - over which you collectively can make some design decisions -
>> and data which is outwith your control. Which comes back to Boon's msg
>> below.
>>
>>
>>
>> Lorcan
>>
>> Lorcan Dempsey [http://orweblog.oclc.org]
>> OCLC Research [http://www.oclc.org/research/]
>>
>> -----Original Message-----
>> From: The CETIS Metadata Special Interest Group
>> [mailto:[log in to unmask]] On Behalf Of Boon Low
>> Sent: Thursday, April 07, 2005 7:46 AM
>> To: [log in to unmask]
>> Subject: Re: cordra
>>
>>
>>
>>
>> So while Google Scholar helps, it does not yet solve the problem of
>> getting precise results from all the content in the repositories.
>>
>>
>>
>> Ideally, the most precise way of getting what you want is through
>> subject/managed databases and searching metadata fields. But if you use
>> a digital library these days, you are redirected to 3rd-party databases
>> and end up dealing with lots of user interfaces. As the use of learning
>> objects becomes ubiquitous (we speculate), islands of LORs will pop up.
>> Dealing with fragmentation, like that in the libraries, would become a
>> main issue.
>>
>> And the solution, federated search technology, is in a mess. How much
>> computing power is required to deal with 10 databases simultaneously?
>> Scale that up for multi-user, multi-target environments such as
>> universities.. add the preference for all results to be dynamically
>> pooled, automatically deduplicated, ranked and filtered (not to mention
>> the computation each of those plausible algorithms requires), plus the
>> target/network fluctuation to address, and it's no surprise people are
>> resorting to Googling or dealing with individual databases. And
>> libraries and product vendors alike are looking into harvesting/caching
>> solutions to meet the federated search demands, e.g. Encompass EJOS -
>> http://encompass.endinfosys.com/ejos_description.htm - for caching
>> journal content locally.
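
(An aside from me: even the simplest of the steps Boon lists above is
real computation once results are pooled. A toy Python sketch of
de-duplicating merged results on a normalised title key - the record
fields are invented for illustration, not taken from any spec:)

    def dedup(records):
        """Merge results pooled from several targets, keeping the first
        record seen for each normalised title."""
        seen, merged = set(), []
        for rec in records:
            key = "".join(ch for ch in rec["title"].lower() if ch.isalnum())
            if key not in seen:
                seen.add(key)
                merged.append(rec)
        return merged

    pooled = [{"title": "Learning Object Metadata", "source": "target-1"},
              {"title": "Learning object metadata!", "source": "target-2"}]
    print(dedup(pooled))   # the second record is dropped as a duplicate
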
>>
>> What Google has demonstrated is a preference to do away with the
>> fragmentation, to embrace one robust and pragmatic view of
>> repositories. I agree with Andy about the hybrid approach mixing the
>> use of centralised fulltext indices and disaggregated views of metadata
>> repositories. It is more intuitive for a general user to discover
>> something by "search it and see" and then slice the results using
>> metadata accordingly (something Google can't do, but libraries are well
>> poised to), instead of considering which LOM fields and classification
>> headings to begin with (vice versa for other scenarios, I'm sure). You
>> may be interested in a recent blog post:
>> http://orweblog.oclc.org/archives/000615.html , discussing these two
>> polarised views of repositories (Google vs. meta-search) and the
>> feasible views in between. The latter, I think, merits more development,
>> as we are doing here. I think also this is not about Google vs. Cordra,
>> but rather how Cordra would also provide for a Google-like view.
>>
>>
>> Best wishes
>>
>> Boon
>>
>> -----
>> Boon Low
>> System Development, EGEE Training
>> National e-Science Centre
>> http://homepages.ed.ac.uk/boon/
>>
>>
>> On 6 Apr 2005, at 22:59, Andy Powell wrote:
>>
>>
>> On Wed, 6 Apr 2005, Dan Rehak wrote:
>>
>>
>>
>> First, as noted and described in the links, you have to let the
>> googlebot in, and you need to give it a list of links to *all* of the
>> content that you want to be indexed. You probably don't want to have a
>> human readable page with a million links, so an appropriate solution is
>> to recognize when the googlebot is visiting and give it a different
>> view of your site -- the page with the links.
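
(A minimal sketch of what Dan is suggesting, assuming a plain Python WSGI
front end to the repository; the page-building functions and the /object/
URL scheme are hypothetical, not CORDRA's or any product's:)

    def human_page():
        # the normal browse/search interface
        return "<html>...browse interface...</html>"

    def link_list_page():
        # one plain <a href> per object, purely for the crawler
        links = "".join(f'<a href="/object/{i}">object {i}</a>' for i in range(1000))
        return "<html>" + links + "</html>"

    def app(environ, start_response):
        ua = environ.get("HTTP_USER_AGENT", "")
        body = link_list_page() if "Googlebot" in ua else human_page()
        start_response("200 OK", [("Content-Type", "text/html")])
        return [body.encode("utf-8")]
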
>>
>>
>>
>> Or have a fairly shallow browse tree which end-users and Google can
>> crawl sensibly?
>>
>> Show me a repository in the UK (or anywhere) with a million links? OK,
>> I'm sure that some exist... but if we limit ourselves to thinking about
>> learning object repositories or eprint archives then if we get above
>> 1000 objects we're doing well. In most cases 10,000 is a distant dream
>> still?
>>
>> And in the case of eprints, most links into the eprint archive will be
>> directly from external pages (e.g. from an academic's list of
>> publications); the internal links within the archive are neither here
>> nor there. In that sense, the objects in the repository become just
>> like any other resource on the Web - they sit at the end of URLs that
>> people will use to create links.
>>
>> The same will be true of learning object repositories, unless people
>> put daft authentication challenges in the way or design their systems
>> in such a way that people can't make direct links into the content.
>>
>> Now, I agree that there's an issue about how deep Google will crawl.
>> But one of the interesting features of the Google Scholar discussions
>> is that Google seem to be willing to modify their crawling strategies
>> in order to pull in high-quality stuff.
>>
>> So I'd anticipate that the environment will change significantly over
>> the next year or so in terms of what Google does and doesn't get to.
>>
>>
>>
>> Next you have to make sure that googlebot will harvest all of the
>> links. The various descriptions indicate that it is not by default an
>> exhaustive harvest, and the googlebot will revisit the site many times.
>>
>> Once Google harvests, it has to index what it found. Again, by default
>> it doesn't treat learning content in any special way. Does DC:Title
>> mean anything special? How do I get precise search results using the
>> metadata that is associated with the content?
>>
>>
>>
>> W.r.t. both these points, there do appear to be indications that Google
>> is tentatively considering the use of OAI-PMH to get at stuff in
>> repositories - at least for DSpace repositories. What impact this may
>> have, even if Google does start to do this, is debatable in the current
>> environment, since people use OAI-PMH somewhat inconsistently (in terms
>> of how they construct their metadata records and links to the object) -
>> but, again, it's potentially quite an interesting development.
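
(For anyone who hasn't played with a harvester, 'using OAI-PMH to get at
stuff' boils down on the wire to something like the sketch below - a
single ListRecords request for simple Dublin Core. The base URL is a
placeholder; each repository exposes its own, and paging via
resumptionToken is omitted:)

    from urllib.request import urlopen
    from urllib.parse import urlencode

    base_url = "http://repository.example.org/oai"            # placeholder
    params = {"verb": "ListRecords", "metadataPrefix": "oai_dc"}
    with urlopen(base_url + "?" + urlencode(params)) as resp:
        xml = resp.read()   # an XML list of <record>s carrying oai_dc metadata
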
>>
>> And the issue of metadata-based approaches vs. full-text indexing is
>> clearly contentious. Is it fair to say that there are few examples of
>> really successful services based on end-user created metadata? There
>> are exceptions of course - arXiv is one. Is it also fair to say that
>> cataloguer created metadata is expensive - to the point that it doesn't
>> scale up well to cataloguing stuff in the Internet environment?
>>
>> And is it fair to say that in the learning object world there are
>> likely to be even fewer examples of good quality metadata created by
>> end-users, since the properties and allowed values in the educational
>> parts of LOM are so fuzzy - the evidence I've seen (e.g. Jean Godby's
>> work at OCLC) is that people don't actually create much metadata that
>> isn't essentially Dublin Core-like.
>>
>> Given that we're typically not willing to pay cataloguers to describe
>> stuff in repositories and we may not be able to rely on the quality of
>> end-user supplied metadata (particularly educational metadata), my
>> suspicion is that we're still a long way from being able to create
>> really good discovery services based solely on the metadata in
>> repositories.
>>
>> Now, it seems to me, the answer lies in some hybrid approach where you
>> mix end-user supplied metadata, automatically content-derived metadata,
>> and full-text indexing, and you get the best of both worlds. And this
>> is the direction I'd like to see Google Scholar going in.
>>
>>
>>
>> I also understand that the googlebot makes many ranking decisions --
>> what to harvest, what to index, what to display -- so the Google view
>> of your repository, and what the user in the Google search result sees,
>> may both be different from what you have or what you would see from a
>> direct repository search.
>>
>> There have also been problems with content that has a URI that is a
>> persistent ID, e.g., a PURL, a DOI. Google thinks that the content is
>> "owned" by the URL owner. The PageRank for http://resolver/id is based
>> on the PageRank of "resolver", not of the actual content.
>>
>>
>>
>> Don't get me started on identifiers! :-) But just to note that this is
>> one of the problems with any identifier that can only be used on the
>> Web by mapping it to a URL by some sort of proxy (and the same is true
>> of PURLs). Essentially this approach breaks the current Web,
>> particularly for services like Google that try to infer knowledge from
>> the linkages between stuff.
>>
>> That said, I thought I'd done some limited experiments that seemed to
>> indicate that Google treated HTTP redirects reasonably sensibly - i.e.
>> that it passed on the PageRank to the linked resource. But perhaps I
>> misunderstood what I was seeing...
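
(Andy's 'limited experiments' are easy to repeat, at least the HTTP half
of them: ask a proxy-style resolver for an identifier without following
redirects and look at the status code and Location header it returns. A
minimal Python sketch - the resolver host and path are placeholders:)

    from http.client import HTTPConnection

    conn = HTTPConnection("purl.example.org")       # placeholder resolver host
    conn.request("HEAD", "/some-identifier")        # placeholder identifier path
    resp = conn.getresponse()
    # http.client does not follow redirects, so this shows the resolver's own
    # answer, e.g. 302 plus a Location header pointing at the actual resource
    print(resp.status, resp.getheader("Location"))
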
>>
>>
>>
>> But I think they have been working on this problem for some
>> collections, like Crossref.
>>
>>
>>
>> Yes. If you are sitting on a collection that Google think is valuable
>> (i.e. of value to Google's end-users) then Google are probably willing
>> to talk to you about how they can get at your content.
>>
>>
>>
>> So while Google Scholar helps, it does not yet solve the problem of
>> getting precise results from all the content in the repositories.
>>
>>
>>
>> Agreed... but I think the future lies in sensible dialogue with
>> services like Google and not simply knocking them because they don't
>> use the same notions of metadata as we do?
>>
>> Andy
>> --
>> Distributed Systems, UKOLN, University of Bath, Bath, BA2 7AY, UK
>> http://www.ukoln.ac.uk/ukoln/staff/a.powell/          +44 1225 383933
>> Resource Discovery Network http://www.rdn.ac.uk/
>>
>>
--
Lorna M. Campbell
Assistant Director, CETIS
University of Strathclyde
+44 (0)141 548 3072
http://www.cetis.ac.uk/