I've just posted this to the LCG Security list, but I think it's
very directly relevant to GridPP too, since the service hosting work
we've been doing as part of GridSite has been aimed at solving this
problem (it's sort of obvious that this requirement would come up.)
I'd be very interested in direct feedback and ideas from the GridPP
sites about this, so we can accommodate them in the way GridSite
implements things.
-------- Original Message --------
Subject: Service Hosting (was Re: VObox)
Date: Tue, 20 Sep 2005 12:56:36 +0100
From: Andrew McNab <[log in to unmask]>
To: [log in to unmask]
David Groep wrote:
> FYI: the VOs (Atlas in particular this time) are indeed starting to
> put up more outlandish requirements (like root access to boxes,
> access to host certificates, running web servers on port 80/443, ...).
One of the ideas that came up a couple of times last week was "Third
Party Service Hosting", and I've updated the GridSite Wiki to explain
how we can now support remote deployment of services, using Unix account
permissions to provide partial sandboxes of different users' services,
and to get round the problem of giving host certificate keys to service
owners: http://www.gridsite.org/wiki/GRACE_Paradigm
(There's an equivalent remote deployment scheme for Tomcat, I
believe?)
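To make the account-permissions idea concrete, here is a minimal sketch (run on throwaway files in a temp directory, not the real /etc/grid-security paths) of the two rules involved: the host certificate key stays readable by the container's group but not by the per-owner sandbox accounts, while each owner's service area is world-readable but writable only by that owner. The function names and exact modes are illustrative assumptions, not GridSite's actual code.

```python
import os
import stat
import tempfile

# Sketch (assumed layout, illustrative modes) of the Unix-permission
# rules described above: the host key is readable by the container's
# group but not by the sandbox accounts, while each owner's service
# area is writable only by that owner.

def lock_down_host_key(path):
    """Host key: owner read/write, group read, no world access (0640)."""
    os.chmod(path, stat.S_IRUSR | stat.S_IWUSR | stat.S_IRGRP)

def open_service_area(path):
    """Service area: owner full access, everyone else read/traverse (0755)."""
    os.chmod(path, 0o755)

# Demonstrate on throwaway files, not the real /etc/grid-security paths.
with tempfile.TemporaryDirectory() as d:
    key = os.path.join(d, "hostkey.pem")
    open(key, "w").close()
    lock_down_host_key(key)
    key_mode = stat.S_IMODE(os.stat(key).st_mode)

    area = os.path.join(d, "svc-owner")
    os.mkdir(area)
    open_service_area(area)
    area_mode = stat.S_IMODE(os.stat(area).st_mode)

print(oct(key_mode), oct(area_mode))  # 0o640 0o755
```

In a real deployment the key's group would be the one Apache runs under, so the container can present the host certificate without the service owners ever being able to read the key.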
Given that Ad-hoc/VO boxes are being discussed widely at the moment,
I think it would be useful for the security group to have a position
on the "ideal" solution, as well as the short term document in
response to the initial VO-box requirements from the applications.
My view is that we *are* going to need a way for applications to
deploy long-lived services somewhere (Tier-0? Tier-1? Tier-2?), in
a way which doesn't involve application people having root on boxes at
some friendly institute (if these are vital services, how would the
operations people maintain them?) Containers of this kind, which can
host remotely deployed services, seem to be the way to do it, with
the underlying OS and container (e.g. Apache) provided by the site.
To do this, we'd need to define a list of acceptable transport
protocols, and I think that means limiting new protocols to HTTP (GET,
POST, PUT of files; REST web services; or RSS feeds), SOAP
(simple messages) and Web Services (i.e. with WSDL) over plain TCP or
SSL/TLS (we're trying to do that already for other reasons.) That
simplifies the life of the operations people in that there are fewer
servers to run and keep up to date with patches. The other side
would be to define a base set of supported containers, but I think
that is pretty straightforward: Apache/CGI and Tomcat.
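As a toy illustration of the "GET, POST, PUT of files" end of that list, here is a self-contained sketch of an HTTP file store that accepts PUT and serves the file back on GET. A real deployment would of course sit behind the site's Apache with SSL/TLS and certificate-based authorization; the paths, port choice, and handler here are made up for the demo.

```python
import http.client
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

FILES = {}  # path -> file contents, standing in for on-disk storage

class FileHandler(BaseHTTPRequestHandler):
    """Accepts PUT of a file and serves it back on GET (demo only)."""

    def do_PUT(self):
        length = int(self.headers.get("Content-Length", 0))
        FILES[self.path] = self.rfile.read(length)
        self.send_response(201)
        self.end_headers()

    def do_GET(self):
        body = FILES.get(self.path)
        if body is None:
            self.send_response(404)
            self.end_headers()
            return
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 0), FileHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Upload, then retrieve, a "file" (fresh connection each time, since
# this toy server speaks HTTP/1.0 and closes after each reply).
conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
conn.request("PUT", "/svc/results.dat", body=b"event counts")
status = conn.getresponse().status
conn.close()

conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
conn.request("GET", "/svc/results.dat")
data = conn.getresponse().read()
conn.close()

server.shutdown()
print(status, data.decode())  # 201 event counts
```

The point of restricting new services to this small set of verbs and transports is exactly what the message argues: the site runs one well-patched server (Apache or Tomcat) rather than one bespoke daemon per experiment.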
Does this seem reasonable? Is it worth drawing up a document about
it, setting out longer-term goals and a model to offer to the experiments?
Cheers,
Andrew
-------------------------------------------------------------------
Dr Andrew McNab [log in to unmask] +44-(0)161-275-4227
/C=UK/O=eScience/OU=Manchester/L=HEP/CN=Andrew McNab
Co-ordinator of Security Middleware Groups, GridPP & Manchester HEP