On 7 Dec 2018, at 9:51, Cope, Jez wrote:
> Hi folks, we’re currently looking at ways of improving the way we
> deliver the largest datasets on https://data.bl.uk/ to users. The
> largest today are several hundred GB, and it won’t be long before
> we’re into the TB. Large downloads can be a pain for users because
> they take a long time and can easily be interrupted. They also
> potentially present a significant cost to us as a data provider
> because outbound bandwidth costs can be high for data stored in the
> cloud. I know this is something that many people in the community will
> already have grappled with so I’m hoping there will be some
> experience to share.
I don't have very concrete experience here, but the following rather
random remarks might be of interest.
> · Just let people download over HTTP but advise use of a
> download manager to handle interruptions to the connection
That's nice and simple, and it would work. It would, as you suggest,
require a download manager, which could be as simple as curl's
--continue-at option; wget has similar support.
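For example (the path is invented, but the options are real):

    # curl: resume from wherever the previous transfer stopped
    curl --continue-at - --remote-name https://data.bl.uk/example/dataset.tar
    # wget: equivalent resume behaviour
    wget --continue https://data.bl.uk/example/dataset.tar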
> · Publish via BitTorrent, which has the advantage that if a
> large number of people are downloading the same thing at once (e.g.
> during a workshop) our outgoing bandwidth use could be significantly
> less than filesize × number of people
Torrenting files would work neatly at a workshop, but ends up being just
a slightly clumsy download manager if there aren't in fact other people
retrieving the dataset at the same time.
Note also that many/most institutions monitor traffic to detect
torrents, and either block them or alert local IT staff to investigate
(because a lot of the time what's being torrented is movies and the
like), so there could be administrative speed bumps to this approach.
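For what it's worth, a sketch of both ends (tracker URL and file names
invented):

    # publisher: build a torrent for the dataset directory
    mktorrent -a udp://tracker.example.org:6969/announce -o dataset.torrent dataset/
    # user: fetch it, exiting when the download completes rather than seeding on
    aria2c --seed-time=0 dataset.torrent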
> · Split datasets into smaller chunks to make it easier to get
> just the bit you need (but makes it more effort if you do want the
> whole lot)
If the dataset is naturally available as a file hierarchy, then rsync
would both solve the restartability problem, and potentially allow users
to retrieve subsets (subtrees) of the data. I'm not aware, off the top
of my head, of non-command-line rsync clients (if that's a desirable
thing), but I'm sure they must exist.
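For example (host and module names invented), a user could restartably
fetch a single subtree:

    # -a preserves the tree; --partial keeps half-finished files so a rerun resumes them
    rsync -av --partial --progress rsync://data.bl.uk/datasets/collection/subset/ ./subset/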
> · Allow users to move their compute to the data, either in
> the cloud or by renting out space in our machine room (this is
> essentially what AWS have done for some big public datasets)
> · Provide a dedicated API and/or UI to allow users to browse
> the collection and select a custom subset to download
It's not quite the same thing, but what's fairly often done with large
scientific databases is to allow users to run (possibly very
sophisticated) SQL queries against a database, save the smallish result
to a staging area, and retrieve that. That's not trivial to set up, but
it's not as hard as developing a new API. It also requires either that
your users can become at least minimally familiar with SQL, or that the
queries are stereotypical enough that you can provide a UI which lets
users produce suitable SQL queries.
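As a sketch of the pattern (table, column, and staging-schema names are
all invented for illustration):

    -- save a small slice of a big table into the user's staging area
    CREATE TABLE staging.issues_1850s AS
    SELECT title, issue_date, ocr_text
    FROM   newspaper_issues
    WHERE  issue_date BETWEEN '1850-01-01' AND '1859-12-31';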
> There was also some discussion a few years ago of using GridFTP but I
> don’t know where that went.
That's sort-of the right thing to do, since GridFTP was developed with
exactly this sort of purpose in mind. However GridFTP can be quite
tricky to set up on the server side, and can be non-trivial to use (i.e.,
require local technical support) on the client side. It might not be
worth the users' while investing the time to work out how to use this,
unless they were going to be retrieving lots of data from you.
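For reference, the client side might look something like this (server
name invented; -p asks for parallel data streams):

    globus-url-copy -p 4 gsiftp://gridftp.example.org/data/dataset.tar file:///scratch/dataset.tar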
One group I support retrieves 100-GB-scale datasets from a server that
provides them as both GridFTP and HTTP. At some point I hope to spend
the time to work out how to get that with GridFTP, but that's partly for
my own satisfaction, since they're in practice happy enough with HTTP.
I hope this helps; I may be able to dig up more details for some of
these if necessary.
Norman Gray : https://nxg.me.uk
SUPA School of Physics and Astronomy, University of Glasgow, UK