Following eLib's push for Dublin Core to be taken up by eLib projects, I
have added DC metadata to the header of the home page at Sociological
Research Online. I would, however, question its effect at this stage, and
wonder how widely this metadata is actually used across the wider Web.
The metadata for the home page of Sociological Research Online amounts to
1429 bytes of 'hidden' information. This page is downloaded around 200
times per day, and about 70% of our traffic comes from outside the UK. That
makes an additional daily server load of some 300KB for this one page
alone; if metadata were implemented across the whole site, the total would
be considerably higher, perhaps 2MB served per day (assuming the DC for
other pages was considerably shorter, c. 200 bytes each).
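For what it is worth, the figures above can be checked with a quick
back-of-envelope sketch (the byte and download counts are the approximate
figures quoted above, not fresh measurements):

```python
# Rough check of the per-page metadata overhead quoted above.
HOME_PAGE_DC_BYTES = 1429   # size of the DC block in the home page header
DOWNLOADS_PER_DAY = 200     # approximate daily downloads of that page

daily_extra_bytes = HOME_PAGE_DC_BYTES * DOWNLOADS_PER_DAY
print(f"Extra load from home page metadata: "
      f"{daily_extra_bytes / 1024:.0f} KB/day")
```

This comes out at a little under 300KB/day for the home page alone, in
line with the estimate given.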
The question I would put is: how much do we gain in return? How widely is
this metadata currently read, and how much is that set to rise? As an
aside, it would seem more sensible if metadata could be stored in separate
files, as robot exclusion lists (robots.txt) are now. In the meantime, is
the extra burden on the servers worth the return?
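To illustrate the aside about separate files: one could imagine a single
sidecar file served in the spirit of /robots.txt, holding DC records for
pages that robots could fetch once, rather than shipping the metadata with
every page download. The file name and syntax below are entirely invented
for illustration; no such convention exists at present:

```
# /metadata.txt -- hypothetical sidecar file, modelled on /robots.txt
Page: /socresonline/
DC.Title: Sociological Research Online
DC.Publisher: University of Surrey
```

Robots interested in metadata would retrieve this one file; ordinary
readers would never download it.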
I welcome some healthy debate on this,
Regards,
Stuart Peters
____________________________________________________________________________
SOCIOLOGICAL RESEARCH ONLINE
Editor: Liz Stanley
Book Review Editors: Victoria Alexander and Sue Heath
Editorial and IT Officer: Stuart Peters
Department of Sociology http://www.socresonline.org.uk/socresonline/
University of Surrey mailto:[log in to unmask]
Guildford, Surrey GU2 5XH tel: (+44) (0)1483 259292
United Kingdom fax: (+44) (0)1483 259356