>One interesting presentation claimed that Hyper-G is an acceptable
>candidate for running a large ejournal server, while the web is not. Two
>reasons are put forward:
I would like to be a trifle more precise. We were claiming that a
conventional server such as "httpd" from NCSA or Netsite
does not scale well. Our understanding is that most servers of this
type become unstable beyond roughly 300,000 documents.
Hyper-G, by contrast, is claimed to have been deployed with around 2 million
documents, and may be extensible to 6 million or more.
Thus the reference to the "web" above is not quite precise in our context.
>
>a) automatic preservation of link integrity (the links are kept in an
>external database; delete a document and all references to it cease to be
>links), and
>
>b) automatically up-to-date indexing of document content.
>
>The first seems a strong reason in favour of Hyper-G. I was reminded,
>though, of Microcosm and the OJF project; Microcosm also has an external
>link-base, and might provide some of the same advantages.
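To make point (a) concrete, here is a minimal sketch of how an external
link-base can enforce link integrity (a hypothetical Python model for
illustration only, not the actual Hyper-G or Microcosm implementation):

# Hypothetical external link-base: links live in a separate store,
# keyed by document id, rather than embedded in document text, so
# deleting a document automatically retires every link touching it.
class LinkBase:
    def __init__(self):
        self.documents = {}   # doc_id -> content
        self.links = set()    # (source_id, target_id) pairs

    def add_document(self, doc_id, content):
        self.documents[doc_id] = content

    def add_link(self, source_id, target_id):
        # A link is only accepted between known documents, so a
        # dangling reference can never be created in the first place.
        if source_id in self.documents and target_id in self.documents:
            self.links.add((source_id, target_id))

    def delete_document(self, doc_id):
        # Removing a document also removes every link into or out of
        # it; this is the "automatic link integrity" property.
        self.documents.pop(doc_id, None)
        self.links = {(s, t) for (s, t) in self.links
                      if s != doc_id and t != doc_id}

    def links_from(self, doc_id):
        return [t for (s, t) in self.links if s == doc_id]

lb = LinkBase()
lb.add_document("a", "source article")
lb.add_document("b", "cited article")
lb.add_link("a", "b")
lb.delete_document("b")
print(lb.links_from("a"))   # -> [] : the link vanished with its target

A server storing links inside the documents themselves has no such
central record to consult, which is why stale links accumulate there.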
I think our other point was that Hyper-G can index against a defined
DTD (and not just against HTML). If one is working with other
DTDs, then this would be an advantage.
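Again purely as illustration, a rough sketch of what DTD-aware indexing
buys you (hypothetical element names and a toy inverted index, not
Hyper-G's actual indexer): because the DTD declares which elements a
document contains, each element can be indexed as a distinct field
rather than as undifferentiated text.

import xml.etree.ElementTree as ET

# Elements we assume a journal-article DTD declares (hypothetical).
INDEXED_ELEMENTS = ("title", "author", "abstract")

def index_document(doc_id, markup, index):
    """Add one document's declared fields to a simple inverted index."""
    root = ET.fromstring(markup)
    for field in INDEXED_ELEMENTS:
        for elem in root.iter(field):
            for word in (elem.text or "").lower().split():
                index.setdefault((field, word), set()).add(doc_id)

index = {}
index_document("doc1",
    "<article><title>Hyper-G scaling</title>"
    "<abstract>Link integrity at two million documents.</abstract>"
    "</article>",
    index)
print(index[("title", "scaling")])   # -> {'doc1'}

A field-level query such as "scaling in the title" is then a single
lookup, which plain full-text indexing of HTML cannot distinguish.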
Inevitably, I think, problems of scaling and indexing are going to
become more important, and Hyper-G may represent one solution. Certainly,
it also has an academic developer program, and we are visiting them on
April 11 to discuss how we may interact further. Some of the other products
out there are targeted at large commercial organisations and may not be
"affordable" by many of the e-lib projects.
We might also note here that Hyper-G has a number of tools designed for
people who might want to produce stand-alone CD-ROMs of their
product. This too may be very important for journals, etc.
Dr Henry Rzepa, Dept. Chemistry, Imperial College, LONDON SW7 2AY;
[log in to unmask]; Tel (44) 171 594 5774; Fax: (44) 171 594 5804.
URL: http://www.ch.ic.ac.uk/rzepa/
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%