Now everybody will cry out, "That's horribly cryptic, and XML or HTML
will not need to be read or written by humans; it will all be encoded
and managed by software." But that is true for MARC too. And I fail to
see (after 20 years of library computing and programming) why XML
should be preferred over USMARC tagging (which is not the best of
formats either), which, I repeat, keeps things together that belong
together, and in a very efficient way.
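
To make that grouping concrete: in USMARC a title statement travels as
one field, with its parts as subfields of that field (the record
content below is invented for illustration):

    245 10 $a Cataloging rules : $b an introduction / $c by J. Smith.

The tag (245), the indicators, and the subfield codes ($a title, $b
remainder of title, $c statement of responsibility) stay physically
together in the record.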
>>I presume the answer to the question of "why XML instead of MARC" is
>>that XML will be a cross-industry (as opposed to library-specific)
>>standard way of representing metadata and it will be supported by Web
>>browsers. XML is perfectly capable of grouping related fields so that
>>things that belong together stay together - such a data design can be
>>expressed in the XML DTD.
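
Such a design might be sketched in a DTD fragment like the following
(the element names are invented for illustration and are not taken
from any published Dublin Core DTD):

    <!ELEMENT titlestmt (title, subtitle?, responsibility?)>
    <!ELEMENT title          (#PCDATA)>
    <!ELEMENT subtitle       (#PCDATA)>
    <!ELEMENT responsibility (#PCDATA)>

A validating parser would then insist that the parts, when present,
appear together inside a single <titlestmt> element - the same "keep
together what belongs together" that MARC achieves with subfields.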
>>But this question also leads me to ask *who* we think will be adding
>>this metadata? I've assumed that it would be the publisher since it
>>makes sense to add this data upstream and in a regulated way. Those
>>publishers who already have metadata tend to use SGML, so the files
>>could be converted to XML relatively easily (depending on compliance
>>with XML rules of "well-formedness", such as no tag minimization, no
>>exclusions, etc. - many of us are already creating our SGML DTDs with
>>an eye on making them XML-compliant too).
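
To make the well-formedness point concrete: SGML tag minimization lets
end tags be omitted where the DTD makes them unambiguous, but XML
requires every end tag, so a converter has to supply the missing ones
(invented fragment):

    <!-- SGML with minimized end tags: -->
    <list><item>first<item>second</list>

    <!-- the well-formed XML equivalent: -->
    <list><item>first</item><item>second</item></list>

Exclusions - an SGML DTD's way of forbidding an element anywhere
inside another, even when nested - have no XML counterpart and simply
have to be designed out of the DTD.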
>>Whether the metadata currently being produced maps on to Dublin Core
>>is another matter.... Anyway, if it isn't going to be the publisher
>>who adds this stuff, who else will do it? Authors? Libraries? Third
>>parties? What's been the general assumption in the Dublin Core Group?
>>Cliff Morgan
>>John Wiley & Sons Ltd