Following up on David's remark about permanent and volatile storage.
I think it is important that volatile storage has a lifetime, after
which it is gone. Volatile storage which "stays around until we need to
delete it" ends up being a form of semi-permanent storage. Deleting
files from volatile storage only when the space is needed sounds like a
good way of maximising use of resources, but it rarely works in practice.
Part of the argument which has occurred over the last few days
is due to differing expectations between users and centres. I would
suggest that if permanent storage means "we will do our best to keep a
copy of this data", then volatile should mean "we will do our best to
delete it (for instance) not less than 7 days and not more than 8 days
after creation".
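As a purely illustrative sketch of what I mean by enforcing the
lifetime (the /scratch path and the 7-day figure are my invention,
not anyone's actual policy), a daily cron job could be as simple as:

    #!/usr/bin/env python
    # Illustrative cleanup pass over a volatile area; the path and
    # lifetime are hypothetical and would be set by site policy.
    import os, time

    SCRATCH = "/scratch"            # hypothetical volatile area
    LIFETIME = 7 * 24 * 3600       # seconds: delete after 7 days

    now = time.time()
    for dirpath, dirnames, filenames in os.walk(SCRATCH):
        for name in filenames:
            path = os.path.join(dirpath, name)
            # mtime stands in for creation time, which is close
            # enough for write-once scratch files
            if now - os.path.getmtime(path) > LIFETIME:
                os.remove(path)

The point is that the centre enforces the lifetime automatically,
rather than waiting until the space runs out.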
Can I ask if "as far as humanly possible we will keep a copy of
the data" means creating a backup?
Paul
> Hi Oxana,
>
> I think that the problem is (as you say) in the configuration here, and
> in more than one way.
>
> Firstly, the default installation (which most T2s use) makes it possible
> for one experiment to completely block the activity at that site by
> filling up the disk, even blocking software installation. This is
> optimally bad! Yes, you can put them on different partitions but this
> isn't the default and is not very pretty to manage as Chris Brew pointed
> out, so what is needed here is something like quotas. Even when running
> locally we have always had disk quotas for the different experiments to
> stop this from happening, and I guess we need them now. Quotas would have
> prevented the problems seen at these different sites (including us)
> recently.
>
> The second point is perhaps more long term, judging from James' reply to
> the configuration question. We need both permanent and volatile (scratch)
> areas. Permanent should mean "as far as humanly possible" we will not
> lose this data. Volatile should mean "as far as humanly possible" we will
> not lose this data for a period of (say) 1 week; then it is gone.
> We also need the space reservation capability that James mentions.
>
> This is not to say that Atlas (CMS, LHCb etc.) should not have a way of
> clearing up old data. If they don't, then these volumes will become full
> of obsolete data. There needs to be some way of distinguishing what is
> obsolete from what is not. However, this is an experiment software
> management issue and is up to the experiment.
>
> All the best,
> david
>
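P.S. On David's quota point above: until proper filesystem quotas are
in place, a site could at least monitor per-experiment usage with
something along these lines (the paths and limits here are invented for
illustration; real enforcement needs separate partitions or the kernel
quota tools):

    #!/usr/bin/env python
    # Sketch of a per-experiment usage check; paths and limits are
    # invented, and this only reports, it does not enforce.
    import os

    QUOTAS = {                      # limits in bytes, illustrative
        "/storage/atlas": 2 * 1024**4,
        "/storage/cms":   2 * 1024**4,
        "/storage/lhcb":  1 * 1024**4,
    }

    def usage(root):
        total = 0
        for dirpath, dirnames, filenames in os.walk(root):
            for name in filenames:
                try:
                    total += os.path.getsize(os.path.join(dirpath, name))
                except OSError:
                    pass            # file removed while we walked
        return total

    for root, limit in sorted(QUOTAS.items()):
        used = usage(root)
        state = "OVER QUOTA" if used > limit else "ok"
        print("%s: %d / %d bytes (%s)" % (root, used, limit, state))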
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
+ Paul Kyberd Brunel University +
+ E-mail: [log in to unmask] Department of Electronic and +
+ Phone: +44-(0)1895-203201 Computer Engineering +
+ Fax: Uxbridge, Middlesex UB8 3PH +
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++