I thought I'd bung this out as an Internet Draft and see if there was
any interest among HTTP implementors. Comments welcome... !
Martin
INTERNET-DRAFT Martin Hamilton
draft-???-00.txt Loughborough University
Expires in six months April 1996
Experimental HTTP methods to support indexing and searching
Filename: draft-XXXX.txt
Status of this Memo
This document is an Internet-Draft. Internet-Drafts are working
documents of the Internet Engineering Task Force (IETF), its
areas, and its working groups. Note that other groups may also
distribute working documents as Internet-Drafts.
Internet-Drafts are draft documents valid for a maximum of six
months and may be updated, replaced, or obsoleted by other
documents at any time. It is inappropriate to use Internet-
Drafts as reference material or to cite them other than as ``work
in progress.''
To learn the current status of any Internet-Draft, please check
the ``1id-abstracts.txt'' listing contained in the Internet-
Drafts Shadow Directories on ftp.is.co.za (Africa), nic.nordu.net
(Europe), munnari.oz.au (Pacific Rim), ds.internic.net (US East
Coast), or ftp.isi.edu (US West Coast).
Abstract
This document proposes some experimental mechanisms which may be
deployed within HTTP [1] to provide a local search capability on the
information being made available by an HTTP server, and reduce both
the bandwidth consumed by indexing agents, and the amount of work
done by HTTP servers during the indexing process.
1. Introduction
As the number of HTTP servers deployed has increased, providing
searchable indexes of the information which they make available has
itself become a growth industry. As a result there are now a large
number of "web crawlers", "web wanderers" and suchlike.
These indexing agents typically act independently of each other, and
do not share the information which they retrieve from the servers
being indexed. This can be a major cause for frustration on the part
of the server maintainer, who sees multiple requests for the same
information coming from different indexers. It also results in a
large amount of redundant network traffic - with these repeated
requests for the same objects, and the objects themselves, often
travelling over the same physical infrastructure. It can be
conjectured that the volume of indexing-related traffic will in some
cases be responsible for degraded network performance, though the
author does not have any statistics with which to back up this
supposition.
The HTTP protocol has supported the "conditional GET" feature for
some time. This allows clients to request that an object only be
returned if it has been modified since a particular date and time,
hence the use of the HTTP header name "If-Modified-Since" to refer to
it. It is hoped that all indexing agents deployed on the Internet at
large will make use of conditional GET when gathering the information
they index.
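By way of illustration, an indexing agent which already holds a copy
of an object might revisit it with a request along the following
lines (the path, host and date are purely illustrative):

GET /index.html HTTP/1.1
If-Modified-Since: Mon, 01 Apr 1996 07:34:31 GMT
Host: www.lut.ac.uk

If the object has not changed since the date given, the server
returns a "304 Not Modified" response with no body, and the agent can
continue to use the copy it already holds.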
Whether or not conditional GET is supported, the normal approach to
indexing an HTTP server is to transfer the full content of each
object being indexed back to the indexer. Typically the only objects
which the index server is interested in will be those from which
plain text can readily be extracted - perhaps only HTML [2]
documents, or those documents which are served up with a top-level
Internet Media Type of "text". The web crawler's data-gathering
process normally uses hyperlinks in HTML documents to discover the
existence of new objects, and new servers, so that a single link to
your server from another server which is already being indexed may be
enough to make the index server aware of its existence.
To get around some of the problems associated with this brute force
approach to indexing, the robots exclusion convention [3] has been
widely adopted. This takes the form of an object, referred to by the
HTTP path name "/robots.txt", which server maintainers can use to
indicate their preferences as to which objects it is acceptable for
agents to retrieve. The robots.txt convention provides a more finely
grained alternative to simply allowing or denying HTTP access from
the indexing hosts. It is hoped that all indexing agents deployed on
the Internet at large will support this feature.
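A minimal robots.txt might look something like this (the paths shown
are purely illustrative):

User-agent: *
Disallow: /private/
Disallow: /cgi-bin/

This asks all agents ("*") to refrain from retrieving objects below
the two paths listed, while leaving the rest of the server open to
indexing.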
2. Additional HTTP methods
It would also be useful if the HTTP servers being indexed were
capable of generating indexing information themselves, and making
this information available in a bandwidth-friendly manner - e.g. with
compression, and sending only the indexing information for those
objects which have changed since the indexing agent's last visit.
Furthermore, HTTP servers should support a native search method, so
that, where a suitable search back end is available, HTTP
clients may carry out a search of the information provided by an HTTP
server in a standardised manner.
In the following examples, "C:" is used to indicate the client side
of the conversation, and "S:" the server side.
2.1 The COLLECT method
The COLLECT method is drawn from the Collector/Gatherer protocol used
by the Harvest software [4]. It represents a request for indexing
information covering either all of the information being made
available by the HTTP server, or a particular collection of that
information.
In COLLECT requests, the Request-URI (to use the jargon of [1])
should be an asterisk "*" if the request is for all of the indexing
information the HTTP server can provide, or a symbolic name which
refers to a particular collection.
Implementors should note that this collection selection is in
addition to the virtual host selection provided by the "Host:" HTTP
header.
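For instance, a request restricted to a single named collection might
look something like this (the collection name is purely
illustrative):

C: COLLECT campus-info HTTP/1.1
C: Accept: application/soif
C: Host: www.lut.ac.uk
C:

The response would then carry indexing information for the objects in
that collection only.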
The normal HTTP content negotiation features may be used in any
request/response pair. In particular, the "If-Modified-Since:"
request header should be used to indicate that the indexing agent is
only interested in objects which have been created or modified since
the date specified, and the request/response pair of "Accept-
Encoding:" and "Content-Encoding:" should be used to indicate whether
compression is desired - and if so, the preferred compression
algorithm.
e.g.
C: COLLECT * HTTP/1.1
C: Accept: application/soif
C: Accept-Encoding: gzip, compress
C: If-Modified-Since: Mon, 01 Apr 1996 07:34:31 GMT
C: Host: www.lut.ac.uk
C:
S: HTTP/1.1 200 OK indexing data follows
S: Content-Type: application/soif
S:
S: [...etc...]
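Had the server chosen one of the content encodings offered by the
client, the response might instead begin along these lines (again,
purely illustrative):

S: HTTP/1.1 200 OK indexing data follows
S: Content-Type: application/soif
S: Content-Encoding: gzip
S:
S: [...gzip compressed SOIF data...]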
2.2 The SEARCH method
The SEARCH method embeds a query in the Request-URI component of the
request, using the search syntax defined for the WHOIS++ protocol
[5]. Any characters which fall outside the legal character set for
the Request-URI, such as spaces, should be hex escaped using the
usual %xx URL escaping convention. This is so that SEARCH requests
may readily be written as URLs in HTML documents.
e.g.
C: SEARCH keywords=venona HTTP/1.1
C: Accept: application/whois, text/html
C: Host: www.lut.ac.uk
C:
S: HTTP/1.1 200 OK search results follow
S: Content-Type: application/whois
S:
S: [...etc...]
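Where the query contains characters which are not legal in a
Request-URI, such as spaces, they would be hex escaped as described
above - for example (the query term is made up):

C: SEARCH keywords=cold%20war HTTP/1.1
C: Accept: application/whois, text/html
C: Host: www.lut.ac.uk
C: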
WHOIS++ requests normally fit onto a single line, and no state is
preserved between requests. Consequently, embedding WHOIS++ requests
within HTTP requests does not add greatly to implementation
complexity.
3. Discussion
There is no widespread agreement on the form which the indexing
information retrieved by web crawlers would take, and it may be the
case that different web crawlers are looking for different types of
information. As the number of indexing agents deployed on the
Internet continues to grow, it seems likely that they will eventually
proliferate to the point where it becomes infeasible to retrieve the
full content of each and every indexed object from each and every
HTTP server.
Having said this, distributing the indexing load amongst a number of
servers which pooled their results would be one way around this
problem - splitting the load along geographical and topological
lines. To put this discussion in perspective, however, the need to do
this does not yet appear to have arisen.
On the format of indexing information there is something of a
dichotomy between those who see the indexing information as a
long-term catalogue entry, perhaps to be generated by hand, and those
who see it merely as an interchange format between two programs -
which may be generated automatically. Ideally the same format would be
useful in both situations, but in practice it may be difficult to
isolate a sufficiently small subset of a rich cataloguing format for
machine use.
Consequently, this document will not make any proposals about the
format of the indexing information. By extension, it will not
propose a default format for search results.
However, it seems reasonable that clients be able to request that
search results be returned formatted as HTML, though this in itself
is not a particularly meaningful concept - since there are a variety
of languages which all claim to be HTML-based. A tractable approach
for implementors would be that HTML 2 should be returned unless the
server is aware of more advanced HTML features supported by the
client. Currently, much of this feature negotiation is based upon
the value of the HTTP "User-Agent:" header, but it is hoped that a
more sophisticated mechanism will eventually be developed.
The use of the WHOIS++ search syntax is based on the observation that
most search and retrieval protocols provide little more than an
attribute/value based search capability, and that WHOIS++ manages to
do this in arguably the simplest and most readily implemented manner.
Other protocols typically add extra complexity in delivering requests
and responses, and management type features which are rarely
exercised over wide area networks.
This document has suggested that search requests be presented using a
new HTTP method, primarily so as to avoid confusion when dealing with
servers which do not support searching. This approach has the
disadvantage that there is a large installed base of clients which
would not understand the new method, a large proportion of which have
no way of supporting new HTTP methods.
An alternative strategy would be to implement searches embedded
within GET requests. This would complicate processing of the GET
request, but not require any changes on the part of the client. It
would also allow searches to be written in HTML documents without any
changes to the HTML syntax - they would simply appear as regular
URLs. Searches which required a new HTTP method would presumably
have to be delineated by an additional component in the HTML anchor
tag.
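To sketch what a GET-embedded search might look like - the path and
parameter name here are not part of any proposal, merely illustrative
- such a request could take a form such as:

C: GET /search?keywords=venona HTTP/1.1
C: Accept: application/whois, text/html
C: Host: www.lut.ac.uk
C:

The corresponding URL could then be written into an HTML document as
an ordinary anchor, e.g.
<a href="http://www.lut.ac.uk/search?keywords=venona">, with no
change to the HTML syntax.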
This problem does not arise with the collection of indexing
information, since the number of agents performing the collection
will be comparatively small, and there is no perceived benefit from
being able to write HTML documents which include pointers to indexing
information - rather the opposite, in fact.
4. Security considerations
Most Internet protocols which deal with distributed indexing and
searching are careful to note the dangers of allowing unrestricted
access to the server. This is normally on the grounds that
unscrupulous clients may make off with the entire collection of
information - perhaps resulting in a breach of users' privacy, in the
case of White Pages servers.
In the web crawler environment, these general considerations do not
apply, since the entire collection of information is already "up for
grabs" to any person or agent willing to perform a traversal of the
server. Similarly, it is not likely to be a privacy problem if
searches yield a large number of results.
One exception, which should be noted by implementors, is that it is a
common practice to have some private information on a public HTTP
server - perhaps limiting access to it on the basis of passwords, IP
addresses, network numbers, or domain names. These restrictions
should be considered when preparing indexing information or search
results, so as to avoid revealing private information to the Internet
as a whole.
It should also be noted that many of these access control mechanisms
are too weak to be relied upon over wide area networks such as the
Internet. Domain names and IP addresses are readily forged,
passwords are readily sniffed, and connections are readily hijacked.
Strong cryptographic authentication and session level encryption
should be used in any cases where security is a major concern.
5. Conclusions
There can be no doubt that the measures proposed in this document are
implementable - in fact they have already been implemented and
deployed, though on nothing like the scale of HTTP. It is a matter
for debate whether they are needed or desirable as additions to HTTP,
but it is clear that adding search support to HTTP would come at
some implementation cost. Indexing
support would be trivial to implement, once the issue of formatting
had been resolved.
6. Acknowledgements
Thanks to <<your name here!!>> for comments on draft versions of this
document.
This work was supported by grants from the UK Electronic Libraries
Programme (eLib) and the European Commission's Telematics for
Research Programme.
The Harvest software was developed by the Internet Research Task
Force Research Group on Resource Discovery, with support from the
Advanced Research Projects Agency, the Air Force Office of Scientific
Research, the National Science Foundation, Hughes Aircraft Company,
Sun Microsystems' Collaborative Research Program, and the University
of Colorado.
7. References
Request For Comments (RFC) and Internet Draft documents are available
from <URL:ftp://ftp.internic.net> and numerous mirror sites.
[1] R. Fielding, H. Frystyk, T. Berners-Lee, J. Gettys,
J. C. Mogul. "Hypertext Transfer Protocol --
HTTP/1.1", Internet Draft (work in progress).
April 1996.
[2] T. Berners-Lee, D. Connolly. "Hypertext Markup
Language - 2.0", RFC 1866. November 1995.
[3] M. Koster. "A Standard for Robot Exclusion." Last
updated March 1996.
<URL:http://info.webcrawler.com/mak/projects/robots/
norobots.html>
[4] C. M. Bowman, P. B. Danzig, D. R. Hardy, U. Manber,
M. F. Schwartz, and D. P. Wessels. "Harvest: A
Scalable, Customizable Discovery and Access System",
Technical Report CU-CS-732-94, Department of
Computer Science, University of Colorado, Boulder,
August 1994.
<URL:ftp://ftp.cs.colorado.edu/pub/cs/techreports/schwartz/HarvestJour.ps.Z>
[5] P. Deutsch, R. Schoultz, P. Faltstrom & C. Weider.
"Architecture of the WHOIS++ service", RFC 1835.
August 1995.
8. Author's Address
Martin Hamilton
Department of Computer Studies
Loughborough University of Technology
Leics. LE11 3TU, UK
Email: [log in to unmask]
This Internet Draft expires XXXX, 1996.