DC-GENERAL Archives

DC-GENERAL@JISCMAIL.AC.UK

DC-GENERAL, May 1996

Subject:      stirring things up a bit
From:         Martin Hamilton <[log in to unmask]>
Reply-To:     [log in to unmask]
Date:         Wed, 01 May 1996 21:10:43 +0100
Content-Type: multipart/mixed
Parts/Attachments: text/plain (7 lines), text/plain (452 lines)

I thought I'd bung this out as an Internet Draft and see if there was
any interest among HTTP implementors. Comments welcome... !

Martin









INTERNET-DRAFT                                          Martin Hamilton
draft-???-00.txt                                Loughborough University
Expires in six months                                        April 1996


      Experimental HTTP methods to support indexing and searching
                        Filename: draft-XXXX.txt


Status of this Memo

      This document is an Internet-Draft. Internet-Drafts are working
      documents of the Internet Engineering Task Force (IETF), its
      areas, and its working groups. Note that other groups may also
      distribute working documents as Internet-Drafts.

      Internet-Drafts are draft documents valid for a maximum of six
      months and may be updated, replaced, or obsoleted by other
      documents at any time. It is inappropriate to use Internet-
      Drafts as reference material or to cite them other than as ``work
      in progress.''

      To learn the current status of any Internet-Draft, please check
      the ``1id-abstracts.txt'' listing contained in the Internet-
      Drafts Shadow Directories on ftp.is.co.za (Africa), nic.nordu.net
      (Europe), munnari.oz.au (Pacific Rim), ds.internic.net (US East
      Coast), or ftp.isi.edu (US West Coast).

Abstract

   This document proposes some experimental mechanisms which may be
   deployed within HTTP [1] to provide a local search capability on the
   information being made available by an HTTP server, and reduce both
   the bandwidth consumed by indexing agents, and the amount of work
   done by HTTP servers during the indexing process.

1. Introduction

   As the number of HTTP servers deployed has increased, providing
   searchable indexes of the information which they make available has
   itself become a growth industry. As a result there are now a large
   number of "web crawlers", "web wanderers" and suchlike.

   These indexing agents typically act independently of each other, and
   do not share the information which they retrieve from the servers
   being indexed. This can be a major cause for frustration on the part
   of the server maintainer, who sees multiple requests for the same
   information coming from different indexers. It also results in a
   large amount of redundant network traffic - with these repeated
   requests for the same objects, and the objects themselves, often
   travelling over the same physical infrastructure. It can be
   conjectured that the volume of indexing related traffic will in some
   cases be responsible for degraded network performance, but the author
   does not have any statistics with which to back up this
   supposition...

   The HTTP protocol has supported the "conditional GET" feature for
   some time. This allows clients to request that an object only be
   returned if it has been modified since a particular date and time,
   hence the use of the HTTP header name "If-Modified-Since" to refer to
   it. It is hoped that all indexing agents deployed on the Internet at
   large will make use of conditional GET when gathering the information
   they index.
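   The conditional GET exchange described above can be sketched in a
   few lines. This is a minimal illustration, not taken from the
   draft; it uses Python's standard library to produce the RFC 1123
   date format that HTTP expects:

```python
from email.utils import formatdate

def conditional_get(path, host, last_visit):
    # Build a conditional GET request.  A server should answer
    # "304 Not Modified" when the object is unchanged since
    # last_visit (a Unix timestamp); usegmt=True yields the
    # RFC 1123 date format used by If-Modified-Since.
    return (
        "GET " + path + " HTTP/1.1\r\n"
        "Host: " + host + "\r\n"
        "If-Modified-Since: " + formatdate(last_visit, usegmt=True) + "\r\n"
        "\r\n"
    )
```

   An indexing agent would record the time of each visit and pass it
   back as last_visit on the next pass over the same server.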

   Whether or not conditional GET is supported, the normal approach to
   indexing an HTTP server is to transfer the full content of each
   object being indexed back to the indexer. Typically the only objects
   which the index server is interested in will be those from which
   plain text can readily be extracted - perhaps only HTML [2]
   documents, or those documents which are served up with a top level
   Internet Media Type of "text". The web crawler's data gathering
   process normally uses hyperlinks in HTML documents to discover the
   existence of new objects, and new servers, so that a single link to
   your server from another server which is already being indexed may be
   enough to make the index server aware of its existence.

   To get around some of the problems associated with this brute force
   approach to indexing, the robots exclusion convention [3] has been
   widely adopted. This takes the form of an object, referred to by the
   HTTP path name "/robots.txt", which server maintainers can use to
   indicate their preferences as to which objects it is acceptable for
   agents to retrieve. The robots.txt convention provides a more finely
   grained alternative to simply allowing or denying HTTP access from
   the indexing hosts. It is hoped that all indexing agents deployed on
   the Internet at large will support this feature.
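   A well-behaved agent can honour the convention with a standard
   robots.txt parser; the sketch below uses Python's stdlib parser,
   and the rules shown are a hypothetical example rather than those
   of any real server:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical /robots.txt rules excluding all agents from /private/.
rp = RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /private/",
])

# An indexing agent consults the rules before each retrieval.
allowed = rp.can_fetch("AnyCrawler", "http://www.lut.ac.uk/index.html")
blocked = rp.can_fetch("AnyCrawler", "http://www.lut.ac.uk/private/notes.html")
```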

2. Additional HTTP methods

   It would also be useful if the HTTP servers being indexed were
   capable of generating indexing information themselves, and making
   this information available in a bandwidth friendly manner - e.g. with
   compression, and sending only the indexing information for those
   objects which have changed since the indexing agent's last visit.

   Furthermore, HTTP servers should support a native search method, in
   order that (where a suitable search back end is available), HTTP
   clients may carry out a search of the information provided by an HTTP
   server in a standardised manner.

   In the following examples, "C:" is used to indicate the client side
   of the conversation, and "S:" the server side.

2.1 The COLLECT method

   The COLLECT method is drawn from the Collector/Gatherer protocol used
   by the Harvest software [4]. It represents a request for the
   indexing information about either all of the information being made
   available by the HTTP server, or the indexing information
   pertaining to a particular collection of information being made
   available by the HTTP server.

   In COLLECT requests, the Request-URI (to use the jargon of [1])
   should be an asterisk "*" if the request is for all of the indexing
   information the HTTP server can provide, or a symbolic name which
   refers to a particular collection.

   Implementors should note that this collection selection is in
   addition to the virtual host selection provided by the "Host:" HTTP
   header.

   The normal HTTP content negotiation features may be used in any
   request/response pair. In particular, the "If-Modified-Since:"
   request header should be used to indicate that the indexing agent is
   only interested in objects which have been created or modified since
   the date specified, and the request/response pair of "Accept-
   Encoding:" and "Content-Encoding:" should be used to indicate whether
   compression is desired - and if so, the preferred compression
   algorithm.

   e.g.

     C: COLLECT * HTTP/1.1
     C: Accept: application/soif
     C: Accept-Encoding: gzip, compress
     C: If-Modified-Since: Mon, 1 Apr 1996 07:34:31 GMT
     C: Host: www.lut.ac.uk
     C:
     S: HTTP/1.1 200 OK indexing data follows
     S: Content-Type: application/soif
     S:
     S: [...etc...]
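   The request side of this exchange can be assembled mechanically.
   The sketch below is illustrative only, and assumes the SOIF media
   type and encodings shown in the example above:

```python
def build_collect(collection, host, since=None):
    # collection is "*" for all of the server's indexing
    # information, or a symbolic collection name; since is an
    # optional HTTP-date for If-Modified-Since.
    lines = [
        "COLLECT " + collection + " HTTP/1.1",
        "Accept: application/soif",
        "Accept-Encoding: gzip, compress",
        "Host: " + host,
    ]
    if since is not None:
        lines.append("If-Modified-Since: " + since)
    return "\r\n".join(lines) + "\r\n\r\n"
```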

2.2 The SEARCH method

   The SEARCH method embeds a query in the Request-URI component of the
   request, using the search syntax defined for the WHOIS++ protocol
   [5]. Any characters in the Request-URI which fall outside the legal
   character set for Request-URI, such as spaces, should be hex escaped.
   This is in order that SEARCH requests may readily be written as URLs
   in HTML documents.

   e.g.

     C: SEARCH keywords=venona HTTP/1.1
     C: Accept: application/whois, text/html
     C: Host: www.lut.ac.uk
     C:
     S: HTTP/1.1 200 OK search results follow
     S: Content-Type: application/whois
     S:
     S: [...etc...]

   WHOIS++ requests normally fit onto a single line, and no state is
   preserved between requests. Consequently, embedding WHOIS++ requests
   within HTTP requests does not add greatly to implementation
   complexity.
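   The hex escaping described above is ordinary URL percent-encoding.
   A minimal sketch follows; the choice to leave "=" unescaped, so
   that the WHOIS++ attribute=value syntax stays readable, is an
   assumption of this illustration:

```python
from urllib.parse import quote

def search_request_uri(query):
    # Percent-escape characters, such as spaces, which are not
    # legal in a Request-URI, leaving "=" intact.
    return quote(query, safe="=")

uri = search_request_uri("keywords=cold war")
```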

3. Discussion

   There is no widespread agreement on the form which the indexing
   information retrieved by web crawlers would take, and it may be the
   case that different web crawlers are looking for different types of
   information. As the number of indexing agents deployed on the
   Internet continues to grow, it seems likely that they will eventually
   proliferate to the point where it becomes infeasible to retrieve the
   full content of each and every indexed object from each and every
   HTTP server.

   Having said this, distributing the indexing load amongst a number of
   servers which pooled their results would be one way around this
   problem - splitting the indexing load along geographical and
   topological lines. To put some perspective on this discussion, the
   need to do this does not yet appear to have arisen.

   On the format of indexing information there is something of a
   dichotomy between those who see the indexing information as a long
   term catalogue entry, perhaps to be generated by hand, and those who
   see it merely as an interchange format between two programs - which
   may be generated automatically. Ideally the same format would be
   useful in both situations, but in practice it may be difficult to



                                                                [Page 4]

INTERNET-DRAFT April 1996


   isolate a sufficiently small subset of a rich cataloguing format for
   machine use.

   Consequently, this document will not make any proposals about the
   format of the indexing information. By extension, it will not
   propose a default format for search results.

   However, it seems reasonable that clients be able to request that
   search results be returned formatted as HTML, though this in itself
   is not a particularly meaningful concept - since there are a variety
   of languages which all claim to be HTML based. A tractable approach
   for implementors would be that HTML 2 should be returned unless the
   server is aware of more advanced HTML features supported by the
   client. Currently, much of this feature negotiation is based upon
   the value of the HTTP "User-Agent:" header, but it is hoped that a
   more sophisticated mechanism will eventually be developed.

   The use of the WHOIS++ search syntax is based on the observation that
   most search and retrieval protocols provide little more than an
   attribute/value based search capability, and that WHOIS++ manages to
   do this in arguably the simplest and most readily implemented manner.
   Other protocols typically add extra complexity in delivering requests
   and responses, and management type features which are rarely
   exercised over wide area networks.

   This document has suggested that search requests be presented using a
   new HTTP method, primarily so as to avoid confusion when dealing with
   servers which do not support searching. This approach has the
   disadvantage that there is a large installed base of clients which
   would not understand the new method, a large proportion of which have
   no way of supporting new HTTP methods.

   An alternative strategy would be to implement searches embedded
   within GET requests. This would complicate processing of the GET
   request, but not require any changes on the part of the client. It
   would also allow searches to be written in HTML documents without any
   changes to the HTML syntax - they would simply appear as regular
   URLs. Searches which required a new HTTP method would presumably
   have to be delineated by an additional component in the HTML anchor
   tag.
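   A search embedded in a GET request might be written as follows;
   the "/search" path used here is purely hypothetical, chosen for
   illustration, and is not proposed by this document:

```python
from urllib.parse import quote

def search_url(host, query):
    # Hypothetical convention: the server treats GET requests
    # under /search as queries, so a search can be written as an
    # ordinary URL in an HTML anchor with no client changes.
    return "http://" + host + "/search?" + quote(query, safe="=")

url = search_url("www.lut.ac.uk", "keywords=venona")
```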

   This problem does not arise with the collection of indexing
   information, since the number of agents performing the collection
   will be comparatively small, and there is no perceived benefit from
   being able to write HTML documents which include pointers to indexing
   information - rather the opposite, in fact.

4. Security considerations

   Most Internet protocols which deal with distributed indexing and
   searching are careful to note the dangers of allowing unrestricted
   access to the server. This is normally on the grounds that
   unscrupulous clients may make off with the entire collection of
   information - perhaps resulting in a breach of users' privacy, in the
   case of White Pages servers.

   In the web crawler environment, these general considerations do not
   apply, since the entire collection of information is already "up for
   grabs" to any person or agent willing to perform a traversal of the
   server. Similarly, it is not likely to be a privacy problem if
   searches yield a large number of results.

   One exception, which should be noted by implementors, is that it is a
   common practice to have some private information on a public HTTP
   server - perhaps limiting access to it on the basis of passwords, IP
   addresses, network numbers, or domain names. These restrictions
   should be considered when preparing indexing information or search
   results, so as to avoid revealing private information to the Internet
   as a whole.

   It should also be noted that many of these access control mechanisms
   are too weak to be relied upon over wide area networks such as the
   Internet. Domain names and IP addresses are readily forged,
   passwords are readily sniffed, and connections are readily hijacked.
   Strong cryptographic authentication and session level encryption
   should be used in any cases where security is a major concern.

5. Conclusions

   There can be no doubt that the measures proposed in this document are
   implementable - in fact they have already been implemented and
   deployed, though on nothing like the scale of HTTP. It is a matter
   for debate whether they are needed or desirable as additions to HTTP,
   but it is clear that the additional functionality added to HTTP for
   search support would be at some implementation cost. Indexing
   support would be trivial to implement, once the issue of formatting
   had been resolved.

6. Acknowledgements

   Thanks to <<your name here!!>> for comments on draft versions of this
   document.

   This work was supported by grants from the UK Electronic Libraries
   Programme (eLib) and the European Commission's Telematics for
   Research Programme.

   The Harvest software was developed by the Internet Research Task
   Force Research Group on Resource Discovery, with support from the
   Advanced Research Projects Agency, the Air Force Office of Scientific
   Research, the National Science Foundation, Hughes Aircraft Company,
   Sun Microsystems' Collaborative Research Program, and the University
   of Colorado.

7. References

   Request For Comments (RFC) and Internet Draft documents are available
   from <URL:ftp://ftp.internic.net> and numerous mirror sites.

   [1] R. Fielding, H. Frystyk, T. Berners-Lee, J. Gettys, J. C. Mogul.
       "Hypertext Transfer Protocol -- HTTP/1.1", Internet Draft (work
       in progress). April 1996.

   [2] T. Berners-Lee, D. Connolly. "Hypertext Markup Language - 2.0",
       RFC 1866. November 1995.

   [3] M. Koster. "A Standard for Robot Exclusion." Last updated March
       1996.
       <URL:http://info.webcrawler.com/mak/projects/robots/norobots.html>

   [4] C. M. Bowman, P. B. Danzig, D. R. Hardy, U. Manber, M. F.
       Schwartz, and D. P. Wessels. "Harvest: A Scalable, Customizable
       Discovery and Access System", Technical Report CU-CS-732-94,
       Department of Computer Science, University of Colorado, Boulder,
       August 1994.
       <URL:ftp://ftp.cs.colorado.edu/pub/cs/techreports/schwartz/HarvestJour.ps.Z>

   [5] P. Deutsch, R. Schoultz, P. Faltstrom & C. Weider. "Architecture
       of the WHOIS++ service", RFC 1835. August 1995.

8. Author's Address

   Martin Hamilton
   Department of Computer Studies
   Loughborough University of Technology
   Leics. LE11 3TU, UK

   Email: [log in to unmask]

                  This Internet Draft expires XXXX, 1996.
