Hi Federico,

Yes, it does seem like you're using cgroups. If you're not convinced, you can look in /cgroup on a worker node and you should see information about each job, e.g. (*).

And yes, as you say, if you have fewer slots configured in Condor than cores, "bad" jobs will be able to use those extra cores. Condor sets cpu.shares for each job, which acts like a lower bound on the amount of CPU the job is able to get, but there is no upper bound enforced when there are free resources - it tries to keep everything busy!

If you want to ensure there are some CPU resources available for the OS, you could tweak your /etc/cgconfig.conf and change cpu.shares for the htcondor cgroup (you could do something similar for memory so that there's always some memory dedicated to the kernel etc.). If you want to enforce upper CPU limits, I think you need to look into CFS bandwidth control.
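
As a rough sketch (values purely illustrative, e.g. capping the whole htcondor group at 28 of 32 cores; adjust to whatever makes sense for your nodes), the relevant bits of /etc/cgconfig.conf would look something like:

group htcondor {
    cpu {
        # relative weight versus other top-level groups, so the OS always keeps a slice
        cpu.shares = 900;
        # CFS bandwidth control: quota/period = 2800000/100000 = 28 CPUs' worth of time
        cpu.cfs_period_us = 100000;
        cpu.cfs_quota_us = 2800000;
    }
}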

Regards,
Andrew.

(*)
[root@lcg1667 ~]# ls /cgroup/cpu/htcondor/
cgroup.event_control                                 [log in to unmask]
cgroup.procs                                         [log in to unmask]
[log in to unmask]  [log in to unmask]
[log in to unmask]  [log in to unmask]
[log in to unmask]  [log in to unmask]
[log in to unmask]  [log in to unmask]
[log in to unmask]  [log in to unmask]
[log in to unmask]  [log in to unmask]
[log in to unmask]  [log in to unmask]
[log in to unmask]  [log in to unmask]
[log in to unmask]  [log in to unmask]
[log in to unmask]  [log in to unmask]
[log in to unmask]   [log in to unmask]
[log in to unmask]  cpu.cfs_period_us
[log in to unmask]  cpu.cfs_quota_us
[log in to unmask]  cpu.rt_period_us
[log in to unmask]  cpu.rt_runtime_us
[log in to unmask]  cpu.shares
[log in to unmask]  cpu.stat
[log in to unmask]  notify_on_release
[log in to unmask]  tasks


________________________________________
From: Testbed Support for GridPP member institutes [[log in to unmask]] on behalf of Federico Melaccio [[log in to unmask]]
Sent: Thursday, September 17, 2015 11:31 AM
To: [log in to unmask]
Subject: Re: anomalous CPU usage for DIRAC ilc jobs

Hi Andrew,

Thanks for your reply. I have double-checked our Condor configuration, and it matches what is in https://www.gridpp.ac.uk/wiki/Enable_Cgroups_in_HTCondor , so we do have CPU cgroups enabled, right? I then had a look at the affected nodes, and perhaps we are in the "free CPUs available" scenario you mentioned. On some nodes we commit fewer cores to Condor than are available, for example 28 slots on a 32-core machine (this gave the optimal hepspec value), so I think jobs could use the extra 4 cores regardless of cgroups. This is probably what allows those "bad" jobs to overload our nodes, unless I am getting something wrong here.
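
(For reference, I believe the quickest way to confirm this on a worker node is condor_config_val, e.g.:

condor_config_val BASE_CGROUP
condor_config_val CGROUP_MEMORY_LIMIT_POLICY

which should report "htcondor" and the memory policy if the wiki recipe is in place.)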

Best,
Federico

-----Original Message-----
From: Testbed Support for GridPP member institutes [mailto:[log in to unmask]] On Behalf Of Andrew Lahiff
Sent: 16 September 2015 14:09
To: [log in to unmask]
Subject: Re: anomalous CPU usage for DIRAC ilc jobs

Hi Federico,

We've been using CPU (and memory) cgroups for a very long time now and haven't had any problems. Jobs can use more CPUs if there are free CPUs available, but otherwise they are restricted to however many CPUs they requested. They can run as many threads as they want without affecting other users.
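
If you want to see what the starter has actually set for a given job, you can look at cpu.shares inside that job's cgroup, e.g. (the path is just a placeholder for one of the per-job directories under /cgroup/cpu/htcondor, and if I remember correctly condor sets 100 shares per requested core, so a 1-core job would show 100):

cat /cgroup/cpu/htcondor/<job_cgroup>/cpu.shares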

Regards,
Andrew.

________________________________
From: Testbed Support for GridPP member institutes [[log in to unmask]] on behalf of Federico Melaccio [[log in to unmask]]
Sent: Wednesday, September 16, 2015 12:16 PM
To: [log in to unmask]
Subject: Re: anomalous CPU usage for DIRAC ilc jobs

Hi Daniela,

Thanks. They are running whizard. I will submit a GGUS ticket then. However, we have just thought that, since we are already using Condor with memory cgroups, we could enable CPU cgroups as well so that no job can get more than its fair share of resources even if it tries. Has anyone ever tried this (e.g. Andrew Lahiff)? Would it be OK, or is it too much of a constraint?
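
(From my reading of the docs - so please correct me if this is wrong - it should just be a matter of adding the cpu controller to the existing htcondor group in /etc/cgconfig.conf alongside memory, e.g. something like:

group htcondor {
    cpu {}
    cpuacct {}
    memory {}
}

with BASE_CGROUP = htcondor already set in the condor config.)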

Best,
Federico

From: Testbed Support for GridPP member institutes [mailto:[log in to unmask]] On Behalf Of Daniela Bauer
Sent: 16 September 2015 11:49
To: [log in to unmask]
Subject: Re: anomalous CPU usage for DIRAC ilc jobs

Hi Federico,
it's most likely user error; we see this a couple of times a year, usually, but not always (CMS, I am looking at you), from the small VOs. Can you see which executable they are running? Things like madgraph have a tendency to grab more than their fair share of resources.
It's probably worth submitting a GGUS ticket to the ilc VO with all pertinent details, so they can check from their end.
Cheers,
Daniela


On 16 September 2015 at 11:05, Federico Melaccio <[log in to unmask]> wrote:
Hi all,

We are seeing weird CPU usage for some DIRAC glexec jobs of the ILC VO running at RALPP. Despite the pilot output showing a request for 1 CPU, top displays 800% CPU usage for that job, effectively overloading the worker node. It looks to me like the job is using 8 cores even though Condor had allocated it a 1-core slot, as per the pilot request. Has anyone seen this? Could it be a wrong job submission by the user, or is there something wrong in our site configuration? I am no expert on DIRAC, so please forgive me if these questions sound silly. I can provide further information if needed.
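
(If it helps with debugging, I assume one way to confirm which slot the process actually belongs to would be something like:

cat /proc/<pid>/cgroup

with <pid> being the offending process from top, which should show the per-slot condor cgroup it is running in.)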

Regards,

Federico Melaccio
GridPP Linux System Administrator

Particle Physics Department
R1-S-2.84
Rutherford Appleton Laboratory
Harwell Oxford, Didcot
OX11 0QX
Tel:    (01235) 445670




--
Sent from the pit of despair

-----------------------------------------------------------
[log in to unmask]
HEP Group/Physics Dep
Imperial College
London, SW7 2BW
Tel: +44-(0)20-75947810
http://www.hep.ph.ic.ac.uk/~dbauer/