LCG-ROLLOUT Archives

LCG-ROLLOUT@JISCMAIL.AC.UK

Subject: Re: IMPORTANT: clarifying purpose of Storage Elements etc
From: Oxana Smirnova <[log in to unmask]>
Reply-To: LHC Computer Grid - Rollout <[log in to unmask]>
Date: Fri, 14 Jan 2005 21:53:55 +0100
Content-Type: text/plain
Parts/Attachments: text/plain (79 lines)

Hi Steve,

Burke, S (Stephen) wrote:
>
> As Owen said, this is not a good solution because you won't be able to read
> the files: the normal replica management tools need to find the SE in the
> information system.

How's that? I need no infosys to query RLS, and RLS records contain pretty
explicit SFNs, don't they? globus-url-copy needs no infosys either.
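
Just to illustrate: given an SFN straight out of RLS, a plain copy needs
no BDII lookup at all. A quick sketch (host and path are made up, and I
assume globus-url-copy is in the PATH):

    import subprocess

    # SFN as recorded in RLS (hypothetical host and path)
    sfn = "gsiftp://se01.example.org/storage/atlas/file001.root"

    # Fetch the file directly; no information system query involved
    subprocess.run(["globus-url-copy", sfn, "file:///tmp/file001.root"],
                   check=True)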

Anyway, I am not really insisting on removing SEs from the infosystem,
but do you know of any LCG tool or method that makes use of the
published free space? You actually mentioned yourself that what is
published is the overall space, not a per-VO quota. I'm suggesting the
least damaging solution (in my opinion), and I'm willing to discuss
alternatives.

> Also, intrinsically a full SE is not a fatal error any
> more than a full disk on any system; it's just that users need some way of
> dealing with the condition.

A full disk on a system is not a fatal error. I have plenty of them
sitting around full. I just checked: NorduGrid has 17 out of 43 disk SEs
completely full. You just use the system read-only, which is perfectly
fine for a Storage Element. A full *system* partition is fatal, but I am
sure nobody keeps the storage area and the system area on the same
partition.

> The free space is published in the information
> system, so it should be possible to recognise the situation and deal with it
> in whatever way you like - maybe ATLAS would actually rather leave the SEs
> full and write new files somewhere else.

This is effectively the situation. If a job fails to write to an SE -
whatever the reason - it will eventually store the file wherever it can
be stored. It doesn't use the free space reported in the infosystem;
it just uses the "kamikaze" method :-)

BTW, the reported free space is useless for yet another reason: imagine
there are 10 GB reported free, and 10 jobs read this information
simultaneously (and they do, even more than 10), and duly start
uploading a 2 GB file each. Guess what will happen. Right, all will
fail. Meanwhile, the SE GRIS will time out because the system will get
overloaded with 10 multithreaded transfers, and 10 more jobs will still
see the 10 GB free, because this is what will be cached in the BDII. And
so on. Ain't that cool.
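
In case the arithmetic is not obvious, here is a toy model of the race
(pure illustration, with the numbers from above):

    # Ten jobs read the same cached BDII figure and each plans a 2 GB upload.
    CACHED_FREE_GB = 10   # what the BDII reports, and keeps reporting
    UPLOAD_GB = 2
    N_JOBS = 10

    demanded = N_JOBS * UPLOAD_GB  # 20 GB requested against 10 GB actually free
    print("each job sees %d GB free; together they demand %d GB"
          % (CACHED_FREE_GB, demanded))
    # demanded > CACHED_FREE_GB: the transfers collide and all fail, while the
    # overloaded GRIS times out and the BDII keeps advertising the stale 10 GB.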

> The only reason it can be a problem
> is on systems where all VOs share the space and there are no quotas, so one
> VO can block the others.

So, we can block LHCb and they can block us. We're even ;-)

>   A separate point is the question of reliability. Tier-1s will typically
> commit to a high level of reliability, so you can reasonably expect that
> files there are safe. Many sites, even ones with large amounts of space, may
> not have much reliability or backup, so if disks crash data may be lost. I'm
> not sure how that can be represented; how do you quantify the likelihood of
> losing data?

Nobody's perfect. A certain person here suggested having data loss
insurance :-) The smaller the site, the less compensation is to be paid.
Profits from the insurance company should finance the purchase of more
storage hardware. How's that? ;-)

Seriously, I would suggest changing the entire LCG SE model - and the
information system schema. Only a reasonably reliable facility,
committed to long-term storage, should qualify as an SE. SEs should not
necessarily be linked to sites; they should be standalone services,
available via GridFTP, SRM, whatever, that register with GIISes
independently of the rest of the site. Thus we will be able to have a
set of sites for processing data, and a [different] set of SEs for
storing the results. The disk space local to the site and necessary for
its proper functioning should be renamed and treated as "cache", and
must not be used for long-term data storage. Of course, the "real" SE
disk space may well be cross-mounted on the WNs; that is up to the
sysadmin.
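
To make the proposal a bit more concrete, here is a rough sketch of what
a standalone SE record could look like, registered independently of any
site (all field names are my invention; nothing like this is in the
schema today):

    from dataclasses import dataclass

    @dataclass
    class StandaloneSE:
        # Hypothetical record for an SE that registers with a GIIS on its own
        endpoint: str    # e.g. "srm://se.example.org:8443" (made-up host)
        protocols: tuple # access protocols, e.g. ("srm", "gsiftp")
        long_term: bool  # True: committed long-term storage, a "real" SE
        role: str        # "storage" for real SEs, "cache" for site-local disk

    # A committed storage service, not tied to any site record:
    se = StandaloneSE("srm://se.example.org:8443", ("srm", "gsiftp"),
                      long_term=True, role="storage")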

Oxana
