Apologies for any cross-posting.
I've been asked to provide some advice on the implementation of Dublin
Core for the description of state government Web pages.
As I understand it, there are several good reasons to create and manage
DC metadata in a separate file that's associated with a page through a
<LINK REL="metadata" HREF=""> reference within the page, rather than
coding all the metadata directly into the HTML as META elements.
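For clarity, the two approaches might look roughly like this (element values and the external filename are invented for illustration):

```html
<!-- Approach 1: DC metadata embedded directly in the page as META elements -->
<HEAD>
  <TITLE>Example Page</TITLE>
  <META NAME="DC.Title" CONTENT="Example Page">
  <META NAME="DC.Creator" CONTENT="Smith, Jane">
  <META NAME="DC.Date" CONTENT="1999-06-01">
</HEAD>

<!-- Approach 2: the page links out to a separately managed metadata record -->
<HEAD>
  <TITLE>Example Page</TITLE>
  <LINK REL="metadata" HREF="example-page.meta.html">
</HEAD>
```

The second form lets the metadata be maintained (and validated) independently of the page itself, which is part of its appeal.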
This is the approach taken by D-Lib Magazine
http://www.dlib.org
and the DC site itself
http://purl.org/dc/
(though their implementation in XML is a bit different)
My question concerns support for such an approach in the major server
search and indexing software. I would specifically like to know if
anyone has experience trying to use either Ultraseek Server or MS Site
Server to crawl, index, and provide access to pages that link out to
external metadata.
This is distinct from the (also important) question of whether the
software can correctly parse and index XML and RDF from that file once
it gets to it. If the only approach currently supported by the major
products is for externally linked metadata files saved as a set of HTML
META elements, that seems like a reasonable approach, though I would
think markup in XML would be highly preferable.
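For what it's worth, the XML/RDF form of such an external record might look roughly like the following sketch (the URL and values are invented for illustration, and this follows the general RDF/XML pattern used for Dublin Core rather than any one product's requirements):

```xml
<?xml version="1.0"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:dc="http://purl.org/dc/elements/1.1/">
  <!-- One description per resource; rdf:about identifies the page described -->
  <rdf:Description rdf:about="http://www.example.org/example-page.html">
    <dc:title>Example Page</dc:title>
    <dc:creator>Smith, Jane</dc:creator>
    <dc:date>1999-06-01</dc:date>
  </rdf:Description>
</rdf:RDF>
```

Whether the crawlers in question would follow the LINK and parse a record like this is exactly what I am hoping someone can speak to.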
Any practical advice on this matter would be greatly appreciated.
=======================================================
Cal Lee
Electronic Records Project Archivist
Kansas State Historical Society
Phone: 785-272-8681, ext. 280 Fax: 785-272-8682
http://da.state.ks.us/itab/erc/
http://www.kshs.org/archives/recmgt.htm
"Obsolete power corrupts obsoletely."
- Ted Nelson