Daniel LaLiberte writes:
| I comment on some aspects of Martin's draft, and then launch into how I
| think HTTP should deal with metadata.
:-))
| > This document proposes some experimental mechanisms which may be
| > deployed within HTTP [1] to provide a local search capability on the
| > information being made available by an HTTP server, and reduce both
| > the bandwidth consumed by indexing agents, and the amount of work
| > done by HTTP servers during the indexing process.
|
| This is an excellent goal. Are you perhaps planning (hoping?) to attend
| the distributed searching and indexing workshop?
I'm not real big on workshops and conferences!
| There are a number of other rationales for why the goal is worth seeking
| even if one does not want to actively support more web crawling. On the
| other hand, I don't know if there is a strong enough case for the
| argument that web crawling is excessively loading the network and
| servers. Some alternative rationales are the desire for constrained
| replication of indexing services within an intranet, and client directed
| searching of distributed indexes.
It's an interesting one, for sure. To get a quick snapshot I had a
look at the usage stats on our main WWW server for the last few months.
Most months it looks like this:
%Reqs %Byte Bytes Sent Requests Reversed Subdomain
----- ----- ------------ -------- |--------------------
56.67 35.89 772653477 261273 | uk.ac.lut
12.78 13.23 284833493 58939 | Unresolved
4.02 0.86 18421219 18550 | uk.co.spice
1.53 0.47 10035051 7037 | com.lycos.srv
0.63 0.34 7308669 2924 | net.ja.lut
0.58 0.41 8734050 2653 | com.mckinley
0.54 0.74 15900604 2504 | uk.co.demon
0.49 0.57 12309822 2242 | uk.ac.hensa
0.38 0.23 5053358 1773 | com.atext
0.37 0.52 11151840 1726 | com.compuserve
i.e. most web crawlers account for less than 1% of the requests and
bytes delivered every month. Those "spice" people seem to be a bit
more aggressive than most ;-)
This might look quite reasonable, but when you add up the known and
suspected robots' entries we start to head up towards the 10% mark. I
don't want my server to spend 10% of its time servicing requests from
web crawlers, and I don't want to tie up anything like that much
bandwidth talking to them.
| Another alternative to keep in mind is that some servers might want
| indexing to be done by an associated server, perhaps one they contract
| with for this service. So a request for indexing info or searching
| services might reasonably be redirected to another server.
Good one!
Perhaps via an HTTP "Location:" header and a redirect response code?
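For example (a purely hypothetical sketch - the indexing host name and
the path it hands back are made up, and I'm assuming the COLLECT method
from the draft):

  C: COLLECT * HTTP/1.1
  C: Host: www.lut.ac.uk
  C:
  S: 302 Moved Temporarily
  S: Location: http://indexer.example.co.uk/collect/www.lut.ac.uk
  S:

The crawler would then re-issue its request against the contracted
indexing server rather than hammering the origin server.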
| This much is great, although I am skeptical of the utility of any
| request having to do with everything on a server. Frequently there
| are many disjoint collections on one server, so it might make
| more sense to first ask for the list of collections.
Yes, which raises the question of how you discover what collections of
info the server offers...!
In the context of current web crawler technology, I think "*" is the
only thing they'll be interested in?  What's important is not to make
it hard to introduce more advanced indexing scenarios in the future
- e.g. "I will only let you index my server if you pay me $$$ derived
from your advertising revenue".
| Just as COLLECT was based on either everything in the server or
| everything in a particular collection, so should SEARCH be. So the
| Request-URI for a SEARCH request should be either "*" or the URI of a
| collection. The parameters of the search should be in additional
| header lines specific to the search request, just as the COLLECT request
| used additional header lines to parameterize it.
Yep!  It's arguable whether the Request-URI should actually be used
for anything, or whether it's just there as filler to make up the HTTP
request :-)
| Rewriting your example, I might do it something like this:
|
| C: SEARCH /vips HTTP/1.1
| C: Accept: application/whois, text/html
| C: Host: www.lut.ac.uk
| C: Protocol: whois++
| C: Query: keywords=venona
| C:
| S: 200 OK search results follow
| S: Content-type: application/whois
| S:
| S: [...etc...]
I think the Protocol attribute wants to include a URI pointing to the
spec, and to mandate that the protocol be supported, in which case the
header would end up looking something like this...?
Protocol: {ftp://ftp.internic.net/rfc/rfc1835.txt {str req}}
And a PEP-aware server's response would use one of the ?2? response
code series, e.g.
220 Umm, OK, I think I understand...
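Pulling the pieces together, the whole exchange might then look
something like this (just a sketch - the header syntax and the 220
code are my guesses, not anything blessed by the PEP draft):

  C: SEARCH /vips HTTP/1.1
  C: Host: www.lut.ac.uk
  C: Accept: application/whois, text/html
  C: Protocol: {ftp://ftp.internic.net/rfc/rfc1835.txt {str req}}
  C: Query: keywords=venona
  C:
  S: 220 Umm, OK, I think I understand...
  S: Content-type: application/whois
  S:
  S: [...etc...]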
Question: with PEP, is there really any point in using separate
methods?
In any case, for the COLLECT operation at least, it would seem to be
desirable to have something which could be used straight away with GET
to retrieve the entire collection of indexing info for a server, or
with a couple of PEP headers to retrieve a subset of the available info
- a la Harvest. Hmm!
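Something along these lines, perhaps (the /site-index path and the
application/x-soif media type are names I've just invented for the sake
of the example):

  C: GET /site-index HTTP/1.1
  C: Host: www.lut.ac.uk
  C: Accept: application/x-soif
  C:
  S: 200 OK
  S: Content-type: application/x-soif
  S:
  S: [...SOIF records for everything on the server...]

A PEP-aware crawler could add a header or two to the same GET to ask
for just the records covering a particular collection, or only those
changed since its last visit.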
This would still have the drawback that server admins would need to run
something like Robert Thau's site-index.pl to generate the indexing
info. With support built into the server, we can make the server
generate this automagically - and take them out of the loop!
| > This document has suggested that search requests be presented using a
| > new HTTP method, primarily so as to avoid confusion when dealing with
| > servers which do not support searching. This approach has the
| > disadvantage that there is a large installed base of clients which
| > would not understand the new method, a large proportion of which have
| > no way of supporting new HTTP methods.
|
| Deployment is an interesting hard problem.
And that's before you get onto choosing between metadata formats... ;-)
| Changes to HTML would not necessarily be needed even to support new
| methods. In addition to the method name, where are the additional
| parameters of the request? One solution is to package up the whole
| request, including the method name, the URI, and additional parameters
| into a new URI. I've been calling this the "call" URI scheme. The
| above example might appear as:
|
| call:SEARCH;Protocol='whois++';Query='keywords=venona';http://www.lut.ac.uk/vips
Question: should HTTP clients need fixing up in order to be capable of
supporting (albeit perhaps not in the most sophisticated way) a common
search scheme?
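(As I understand Daniel's scheme, the attraction is that an ordinary
HTML page could then carry a canned search as a plain link, e.g.

  <A HREF="call:SEARCH;Protocol='whois++';Query='keywords=venona';http://www.lut.ac.uk/vips">search the VIPs collection</A>

and only the thing dereferencing the URI needs to know what to do with
it.)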
[...]
| That's enough for now. Comments appreciated, but please try to respond
| before the web conference next week as I plan to present some of this in
| the URC panel.
I was going to say "see you on the MBONE" but I see this isn't one of
the sessions being multicast. Awww, shucks!