TB-SUPPORT Archives (TB-SUPPORT@JISCMAIL.AC.UK)
September 2011
Subject: Re: CreamCE Tuning
From: Stuart Purdie <[log in to unmask]>
Reply-To: Testbed Support for GridPP member institutes <[log in to unmask]>
Date: Thu, 8 Sep 2011 13:10:41 +0100
Content-Type: text/plain (177 lines)

On 6 Sep 2011, at 16:56, Chris Brew wrote:

> Hi,
> 
> Thanks, there's a fair amount there to go on so a few follow-up questions.
> 
> What are suggested reasonable times for the automatic job purging? We've got
> between a couple of thousand and about 30,000 for our newest (in production
> today) and oldest Creams.

Honestly - I'm not 100% sure!  I _think_ the JOB_PURGE_POLICY times refer to how long the job has been in the given state - so REGISTERED 2 days would purge after 2 days stuck at REGISTERED (i.e. a job that never made it to the batch system).

You need to work out how long it's 'reasonable' for a job to spend in a particular state.  Queue times and the longest job queues are the biggest site-to-site variables.

Interestingly, this does suggest that the CE's working set depends on throughput * latency (i.e. the product of the two).  For a site with a mixture of many short jobs on one queue and a few long jobs on another, it would be a reasonable optimisation to put those on different CEs.
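
Back of an envelope (the numbers here are made up, just to show the shape of it):

  # Rough creamdb working-set estimate: throughput * retention.
  # Illustrative figures only - plug in your own site's numbers.
  JOBS_PER_DAY=3000      # submissions hitting this CE per day
  RETAIN_DAYS=10         # JOB_PURGE_POLICY window for finished jobs
  echo "approx jobs held in creamdb: $(( JOBS_PER_DAY * RETAIN_DAYS ))"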

I've seen the CREAM developers suggest

<parameter name="JOB_PURGE_POLICY" value="ABORTED 10 days; CANCELLED 10 days; DONE-OK 10 days; DONE-FAILED 10 days; REGISTERED 2 days;" />

which is what we have (and possibly the default?) as a reasonable first stab - I know Daniella set Imperial to something slightly shorter than that.  The reason to give a grace period after the job is finished is for people doing direct submission, to give them time to get the logging data back (output files are sent directly).  It's not clear to me how long the WMS takes to pull the data, but it ought to be less than a day, so I suspect a couple of days or so should be safe there - if you know that directly submitted jobs are handled quickly (as ATLAS do, for example).
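
(If you want to clear your own finished jobs ahead of the policy, the CREAM CLI has a purge command.  The invocation below is from memory and the endpoint is a placeholder, so check glite-ce-job-purge --help before trusting the flags.)

  # Purge all of your own finished jobs on a given CE ahead of the policy.
  # Placeholder endpoint; flags quoted from memory - verify with --help.
  glite-ce-job-purge --all --endpoint cream01.example.ac.uk:8443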

> Are you using a custom nagios check command for the file count in
> registry.npudir? A quick check of the default ones didn't seem to have that
> functionality and I can always knock one up but if someone else already
> has... 

Yep.  Well, I say _we_ - really that was Dave Crook's work, and I'll leave it to him to expand on that.
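
For what it's worth, the shape of such a check is simple enough - something along these lines would do.  This is only a sketch, not the plugin we actually run; the registry path and thresholds are placeholders, so point it at wherever your blah parser keeps registry.npudir:

  #!/bin/bash
  # Nagios-style check on the number of files in blah's registry.npudir.
  # Path and thresholds below are placeholders, not our production values.
  DIR=${1:-/var/blah/user_blah_job_registry.bjr/registry.npudir}
  WARN=${2:-5}
  CRIT=${3:-50}
  COUNT=$(find "$DIR" -maxdepth 1 -type f 2>/dev/null | wc -l)
  if [ "$COUNT" -ge "$CRIT" ]; then
      echo "CRITICAL: $COUNT files in $DIR"; exit 2
  elif [ "$COUNT" -ge "$WARN" ]; then
      echo "WARNING: $COUNT files in $DIR"; exit 1
  else
      echo "OK: $COUNT files in $DIR"; exit 0
  fi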

> Any suggestions of mysql tunings for the creamdb would be very welcome and
> I'm by no means a mysql expert.

Mostly, make sure it's got a decent-sized innodb_buffer_pool.  If you see mysqld doing a lot of WAITIO, make it bigger.

(Anything else can make things worse, and is at best a small boost).
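
A quick way to judge the pool size is to compare the standard InnoDB counters - if Innodb_buffer_pool_reads (misses that hit disk) keeps climbing relative to Innodb_buffer_pool_read_requests, the pool wants to be bigger:

  # Logical reads vs reads that had to go to disk, plus the current pool size.
  mysql -e "SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read%';"
  mysql -e "SHOW GLOBAL VARIABLES LIKE 'innodb_buffer_pool_size';"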

> I know you can split the creamce and blah parts onto separate nodes and
> even have one blah parser support multiple CreamCEs, but isn't that just
> putting everything into a single point of (likely) failure?

Blah failures are the only problems we see.  Putting more pressure on it strikes me as a plan with zero upsides!  It's also more difficult to configure.

It smacks of the sort of plan that looks good before any code is written, not one drawn from real experience.

Using multiple Blah parsers for a single CREAM CE probably _would_ provide some benefit, but it's not supported.

The only problem I foresee with using one Blah parser per CREAM is that with many CREAM CEs it could put a lot of load on the batch server to service the information requests.  Given that we currently have 6 CEs attached to our production batch server, on an old 2x2 Opteron, I think that's a 'problem' that's not significant.  I estimate we'd have to start thinking about this when we get to 15-25 CEs on our current hardware.  Given that the hardware is due for replacement, I doubt we'll ever have to think about it.

> We've 3 Cream CEs with between 6 and 8GB of RAM so I'm hoping that will be
> enough.

Should be.

Note that if a CREAM CE gets a bit overloaded, you can glite-ce-disable-submission to it for a day or so to let it calm down (and mark it as down in the GOCDB), which can often stop a dodgy CE turning into a dead one.  Preventing new jobs from landing on it gives it some space.  However, that can put more load on the other CEs, and that can sometimes cause more problems.
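
For reference, the drain/re-enable dance is just the following (placeholder endpoint; you need to be listed as a CREAM admin for it to work, if I remember right):

  # Stop new submissions landing on an overloaded CE, then re-enable it
  # once the backlog has cleared.  cream01.example.ac.uk:8443 is a placeholder.
  glite-ce-disable-submission cream01.example.ac.uk:8443
  # ... mark it as down in the GOCDB, wait a day or so ...
  glite-ce-enable-submission cream01.example.ac.uk:8443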



> Thanks,
> Chris.
> 
>> -----Original Message-----
>> From: Testbed Support for GridPP member institutes [mailto:TB-
>> [log in to unmask]] On Behalf Of Stuart Purdie
>> Sent: 06 September 2011 15:06
>> To: [log in to unmask]
>> Subject: Re: CreamCE Tuning
>> 
>> On 6 Sep 2011, at 13:35, Chris Brew wrote:
>> 
>>> Hi All,
>>> 
>>> Having replaced all our CEs with CreamCEs we've been having problems
>>> keeping them up for extended periods of time.
>>> 
>>> They seem to run fine for a few days to a week or so before falling
>>> over with "The endpoint is blacklisted" errors (generally), but the
>>> error isn't transient - it always seems to go down just after I leave
>>> work and stay that way until I get in the next morning.
>>> 
>>> The "help" for that error is not in actual fact helpful - "Oh, that's
>>> the WMS seeing timeouts." I paraphrase. Timeouts on what? What can I
>>> do on the CE to fix them?
>>> 
>>> So, after much googling I've increased the MySQL InnoDB buffer pool
>>> "innodb_buffer_pool_size=1024M" and reduced the blah purge time
>>> "purge_interval=1000000"
>>> 
>>> So does anyone have any more CreamCE tuning tips I can try? We're
>>> running the UMD release but I've "backported" the trustmanager fix
>>> from EMI.
>> 
>> Not directly tuning, but:
>> 
>> Cream is actually two parts: the bit that talks to the outside world,
>> X509 and such, which keeps its database in mysql; and Blah, which talks
>> to the batch system (the BNotifier/BUpdaterXXX thing) and keeps its
>> database in an ad hoc, informally-specified, bug-ridden, slow
>> implementation of half of a proper database engine.
>> 
>> 
>> On the cream/mysql side: it's generally seen that tightening up the
>> purger times - so that once a job is finished it doesn't hang about
>> for too long - is a handy step.  Users can request purging in advance
>> of this interval - so this is a maximum limit, not a minimum limit.
>> http://grid.pd.infn.it/cream/field.php?n=Main.HowToPurgeJobsFromTheCREAMDB
>> covers how to set up JOB_PURGE_POLICY.  Note that this is totally
>> different from the blah configuration purge_interval.  I don't think
>> we've done that up here - my memory says I indexed the mysql DB beyond
>> the default (hence making a larger DB less of a problem), but I
>> honestly can't find any notes I made on that (I went through all the
>> grid services with mysql at one point).  I think I'll have to revisit
>> that at some point.
>> 
>> Given the timeouts you're seeing, load on the mysql server might well
>> cause them [0], hence the JOB_PURGE_POLICY is where I'd look first.
>> (We run with innodb_buffer_pool_size=256M).  I think the default policy
>> is 10 days for everything?  Depending on how much traffic you get, that
>> can make a difference.
>> 
>> 
>> The Lease manager, which often seems more like a mis-feature, can slow
>> things down a lot if there are a lot of 'leases' created - at one point
>> ATLAS pilot factories were creating many of these, and although they've
>> sorted that now, it's possible that on an old install you might have
>> old ones slowing things down.  I can't dig out the query used to count
>> the number of them, anyone have it to hand?
>> 
>> 
>> Although it's not one you've noted above, the biggest performance sink
>> for us is when Blah breaks and has to start using the registry.npudir.
>> This happens when one instance of the blah code can't access the
>> "proper" registry, and has to degrade to one file per job.  Because
>> this is not optimised, it ends up stat(2)ing the directory for each
>> operation, then walking through each file.  So it's something like an
>> O(n^3) algorithm or thereabouts, for n files in the dir.  We have
>> nagios alarms set up if the number of files in there gets large - a
>> value we've set to 5.  After a clean restart, when the locks are sorted
>> out, blah tidies up that dir at around 50 a minute, so that can be
>> 
>> More directly, it's also responsible for timeouts of the sort:
>> 
>> failureReason=BLAH error: no jobId in submission script's output
>> (stdout:) (stderr: <blah> execute_cmd: 200 seconds timeout expired,
>> killing child process.-)
>> 
>> Keeping the purge_interval low can help with this situation, but I'm
>> (now) of the opinion that it's best avoided by intervening when the
>> npudir starts to fill up.
>> 
>> 
>> Fundamentally, however, most of these issues are load dependent, so the
>> best return for sysadmin effort is probably to set up another Cream CE.
>> We run 3 (+ 1 experimental) at the moment; Imperial has 4.  In between
>> CPR on them, I'm poking at ways of alleviating the worst of the
>> problems by digging through the source RPMs.
>> 
>> Speaking of hardware - I'm of the opinion that 4GB of RAM is not
>> enough, 6GB is the minimum, and 8GB a sensible baseline - and that's
>> allowing a 256M / 512M innodb buffer pool.  With a 1024M buffer pool,
>> I don't think everything will fit in 4GB of RAM.
>> 
>> Hrm - sorry if that's turned a bit rambly, a fire alarm halfway through
>> derailed my train of thought...
>> 
>> [0] That is: I think that the timeouts are cream taking a long time to
>> respond to the WMS's request for status updates - probably because the
>> WMS uses a single Lease, and thus is trying to get a lot of data at
>> once, or at least forcing cream to search a large subset.
