On Mon, 6 May 1996 [log in to unmask] wrote:
> > Now of course this means that the WWW browser needs to understand
> > multipart MIME stuff but that shouldn't be too tricky to do.
>
> Anything involving getting 30 or 100 software vendors to change their
> products needs a far stronger business case than the metadata group
> can ever provide. `your users will benefit when they use your product
> to search the web, if everyone else implements this, people use it, and
> the webcrawlers index it, and ...' Forget it.
Well, I disagree completely with the "forget it" bit. Otherwise we'd all
still be using HTML 1.0 and the CERN line mode browser and we might as
well go home now. WWW browser implementors _do_ implement neat new
features all the time. Not all of them are eye-candy stuff like <BLINK>
and <FRAME>; how about proxy support, multilingual support, user agent
spoofing (a great X Mosaic feature :-) ), active objects, etc.? And we
know the browser writers/index generators are interested in metadata
already because of the rumblings from W3C and the various hype, erm, I
mean press announcements we've seen recently (e.g. Netscape using
Harvest). Indexing is getting important and people are realising that.
However, metadata generation is something that is going to take time; even
if you let people stick it in HTML files, someone has to create it, someone
has to index it, etc., etc. Someone has to write/glue together some code
somewhere, be it web browser vendors or webcrawler writers. That's
something I think we've got to accept. Look how long GILS is taking,
even with the US Federal Government giving it a boot up the botty. :-)
> Note that Netscape Navigator already understands multipart MIME messages,
> but with a compltely different semantics -- this is how `server push'
> animations are done. You won't get that to change easily, since it's
> *extremely* widely deployed.
Different semantics to multipart/mixed and multipart/alternative??? That
isn't MIME in that case: the RFCs tell you what the semantics of the
multipart content types are, and I wouldn't be at all surprised if
Netscrape have broken yet another standard along with HTML/SGML.
(Unfortunately the Netscrape site is as ever too slow to get to at the
moment (and it's 3am!) so I can't check this.) Or am I misunderstanding
what you mean by semantics here?
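(For the record, as I understand it Netscape's server push doesn't
actually redefine multipart/mixed; it uses its own experimental subtype,
which at least keeps it out of the RFC's way. Roughly:)

```
Content-Type: multipart/mixed; boundary=gc0p4Jq0M
  -- RFC 1521: independent parts, presented one after another

Content-Type: multipart/x-mixed-replace; boundary=ThisRandomString
  -- Netscape server push: each new part *replaces* the previous one
```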
> I know I keep saying this, but I'm not very interested in a metadata standard
> for the year 2010. Nor even in one for the year 1998. And if you require
> coordinated multi-vendor software changes, you'll be lucky to get it by then.
Well, years 1996 and 1997 are pretty much out of the question. I doubt
that we'll all be able to write and ship the multiple, independent
implementations of code required to progress a standards track protocol in
under a year. We _might_ be able to get a few rough-and-ready
implementations out there for folks to try out in a year or so, but there
won't be much data in them for a while. Even the WWW took a while to get
going (I remember when it was in the same league as Hyper-G, alongside the
then rapidly growing gopher). I would say at least 1998 for the _start_
of a new metadata standard. If you're after a standard today, there's
always MARC and Z39.2/Z39.50. And it's GILS compliant. :-)
I don't think we want or even need coordinated multi-vendor software
changes. What is needed is a killer app, Mosaic style, that gets everybody
generating and using metadata. Something that makes it worth their while
creating the stuff (just like Mosaic's GUI made it worthwhile bothering
with inserting HTML tags into documents). It might be something like Silk
or Harvest modified to understand WF, DCES embedded in HTML files and the
DCES-SGML DTD, for example. Maybe agent (yuck, nasty word) technology that
sits on your desktop and gives you a nice, user-friendly search
mechanism that is constantly updated for your favourite search term; I
don't know. However, once one killer app is out there, the sheep
will follow (or perish - the definition of a killer app :-)). Coordinating
multi-vendor changes is ISO committee thinking, not free market
thinking. Making a kick-ass application that everyone wants, and is
willing to put the extra effort into in order to gain some benefits, is
what we should be thinking about. It worked for Marc A.... $$$ :-)
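(To show the sort of embedding I mean - the element names here are just
my guess, the exact convention is still up for grabs - something like:)

```html
<HEAD>
<TITLE>Some Document</TITLE>
<!-- Hypothetical DCES element names; purely illustrative -->
<META NAME="DC.title" CONTENT="Some Document">
<META NAME="DC.creator" CONTENT="Knight, Jon">
<META NAME="DC.subject" CONTENT="metadata; Dublin Core">
</HEAD>
```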
> The MIME stuff has a certain elegance. But anything that alters existing
> non-DublinCore metadata (e.g. by wrapping it), or that alters the way an
> HTML file is delivered, is doomed to failure. Two years of hard experience
> on the various IETF mailing lists and meetings has demonstrated this clearly.
Err, the point of wrapping the non-DCES metadata is that you don't have to
alter it, you just wrap it. To my mind, altering the metadata means going
in and fiddling with bits inside the unencoded packages, which I didn't see
anyone proposing (maybe I missed it?). At the moment few browsers
understand any metadata at all, and none of them generate the accept
headers proposed for MIME wrapping of metadata, so it's not like proposing
to wrap metadata formats in MIME is going to break stuff. Library
OPACs are going to carry on exchanging raw MARC between systems without
MIME encoding it. The world isn't going to fall apart if we decide to
wrap up metadata for use in applications we don't have yet.
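(If it helps to make the wrapping concrete, here's a quick sketch in
Python - purely illustrative, the field values are made up - of bundling
an untouched metadata record alongside an HTML object in a
multipart/mixed wrapper:)

```python
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText

# The metadata record, left exactly as it is (values made up).
metadata = MIMEText("Title: Example Page\nCreator: Knight, Jon\n", "plain")
metadata.add_header("Content-Description", "Dublin Core metadata")

# The object itself, also untouched.
page = MIMEText("<HTML><BODY>Hello</BODY></HTML>", "html")

# The wrapper just puts the two side by side; nothing inside is altered.
wrapper = MIMEMultipart("mixed")
wrapper.attach(metadata)
wrapper.attach(page)

print(wrapper["Content-Type"].split(";")[0])  # multipart/mixed
```

The point of the sketch is that the two parts are carried as-is; a client
that doesn't want the wrapper simply never asks for it.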
And we don't alter the way HTML files are delivered to non-WF-aware
browsers; they don't generate the correct accept headers to get MIME
encoded metadata wrapped round their objects, and so they just get the
objects with no multipart content types (which is, after all, all they
know how to process). WF-aware browsers also wouldn't break existing HTTP
servers, as the server would just return them the object (or no match if
all they wanted was the metadata). Allowing metadata to be transported
with HTML files without breaking them is the whole point of developing
the cool DCES-embedded-in-HTML stuff (or so I thought - was I wrong?).
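(A sketch of the two exchanges I have in mind - the headers shown are
just placeholders, not a settled registration:)

```
# A WF-aware client asks for the wrapped form:
GET /page.html HTTP/1.0
Accept: multipart/mixed, text/html

HTTP/1.0 200 OK
Content-Type: multipart/mixed; boundary=wf
(metadata part, then the untouched HTML object)

# An old browser never sends that accept header:
GET /page.html HTTP/1.0
Accept: text/html

HTTP/1.0 200 OK
Content-Type: text/html
(the HTML object alone, exactly as today)
```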
The most likely "browsers" to implement a MIME (or SGML or whatever) based
WF initially aren't end user browsers anyway; IMHO the WWW client side of
the indexing engines is where this code will appear first (if anywhere).
There are far fewer of these deployed than Netscapes, and their
owners/implementors are likely to have a vested interest in getting their
hands on any and all metadata that they can in order to improve their
service. Of course, if even these people aren't interested in getting
metadata, one has to start questioning who _is_ interested in this
outside of the 50 of us that met in Warwick! :-)
Apologies for any rambling, but it's very late (or is that early?) and my
MBONE recording tools _still_ don't want to record Stu's session at the
Paris WWW conference. Time for bed, methinks...
Tatty bye,
Jim'll
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
Jon "Jim'll" Knight, Researcher, Sysop and General Dogsbody, Dept. Computer
Studies, Loughborough University of Technology, Leics., ENGLAND. LE11 3TU.
* I've found I now dream in Perl. More worryingly, I enjoy those dreams. *