
On 03/04/2012 13:33, Stuart Purdie wrote:
> On 3 Apr 2012, at 12:54, Alessandra Forti wrote:
>
>> The equivalence at the end was
>>
>> RAM available+swap available=virtual memory
>>
>> The fact that there are memory management techniques that can make the physical memory look bigger, and that on 64-bit machines the address space has increased enormously (although I doubt the OS will allow all of it to be used), doesn't make the request for what is physically available so out of place. I don't know how Athena handles memory, and there are still 32-bit releases in use which are limited in the address space they can use.
> Yes.  That's either a 4GB or a 64GB limit in 32-bit mode, depending on the underlying hardware.
>
> The 'physically available' is the core of the issue.  If you're going to consume a lot of Swap (as opposed to address space), then that has _serious_ implications.  If you're going to use a lot of address space, then that's not.
>
> Even then, Virtual Memory != RAM + Swap, and assuming it is can get one into tricky places.
>
> For example, I can make (and have made) a Linux box with 0 swap attached use virtual memory (i.e. have more memory mapped than the quantity of physical RAM).
>
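
(A minimal sketch of that point, assuming a 64-bit Linux node and the default overcommit settings: a process can map far more address space than the machine has RAM, on a box with zero swap, without consuming either.)

    #define _GNU_SOURCE
    #include <stdio.h>
    #include <sys/mman.h>

    int main(void)
    {
        /* Ask for 8 GB of anonymous address space -- more than a typical
           2 GB/core slot -- on a node that may have no swap configured.
           MAP_NORESERVE asks the kernel not to reserve backing store. */
        size_t len = 8UL * 1024 * 1024 * 1024;
        void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
        if (p == MAP_FAILED) {
            perror("mmap");
            return 1;
        }
        /* top now reports ~8 GB of VIRT for this process, but until the
           pages are actually touched neither RAM nor swap is consumed. */
        printf("mapped %zu bytes at %p without touching them\n", len, p);
        return 0;
    }
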
>> BTW Glasgow is publishing 2GB physical and 4GB of virtual; you might want to change that to 128TB and argue with the Glue schema people too.
> 128 TB is not a valid number for the maximum Vmem - it's 4GB / 64GB / 256TB, depending on which node you use.
I think 128TB is the limit in linux but I might recheck that.
> As 4GB is the _lowest_ of these, that is what we publish, as that is the largest amount that a job can safely use.  The Glue semantics (from 2.0) include the specification that exceeding this number is when the LRMS may kill the job.  (Ok, on some nodes it would be the kernel, but the net effect is indistinguishable from the point of view of the end user.)  Accordingly, we cannot realistically publish a higher number than that.
>
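
(How such a limit is enforced varies by batch system; one common mechanism - assumed here, not a statement about any particular site's setup - is a per-process address-space cap, which a job can inspect via getrlimit(RLIMIT_AS).)

    #include <stdio.h>
    #include <sys/time.h>
    #include <sys/resource.h>

    int main(void)
    {
        /* RLIMIT_AS is the per-process address-space limit, in bytes.
           Batch systems that enforce a "vmem" limit often set this, or
           poll the process and kill it once it exceeds the published
           value. */
        struct rlimit rl;
        if (getrlimit(RLIMIT_AS, &rl) != 0) {
            perror("getrlimit");
            return 1;
        }
        if (rl.rlim_cur == RLIM_INFINITY)
            printf("address space limit: unlimited\n");
        else
            printf("address space limit: %llu MB\n",
                   (unsigned long long)(rl.rlim_cur / (1024 * 1024)));
        return 0;
    }
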
> If ATLAS wish us to split the underlying nodes into different clusters, then we can do so.  That would allow us to have one homogeneous cluster per node type, and therefore report actuals, not the bounded limits, for such values.  I do note that this would mean we would have to split the atlas queue into ... 7 different, independent queues that they would have to target independently (so we can hang the different publishing off these queues).
That's what they do at T1 sites: they have different queues associated 
with classes of nodes with different amounts of physical memory.
> If you want to go ahead with that, let us know and we can discuss a schedule for making those changes.
I don't think there is any need for now, but if problems appear we'll 
keep it in mind.
> I have noted the construction about 'RAM + Swap' to the Glue authors before - the response was that they are aware of the complications, but that the important point was that the value was _not_ pure physical RAM.  We can argue that one, but standards are forced to compromise on a large number of factors; in the context in which this is written, this is not a major problem with using the spec for its intended purpose.
It is exactly the same. The information in the Glue schema is supposed 
to be consumed by users. If ATLAS had used the BDII instead of asking 
the cloud squads, all this discussion wouldn't even have happened 
(Glasgow vmem=4GB). If you accuse one set of people of incompetence for 
making a simplification, you might want to do that for everyone doing 
the same, or for none of them.

cheers
alessandra

>
>> ldapsearch -xLLL -b mds-vo-name=UKI-SCOTGRID-GLASGOW,mds-vo-name=local,o=grid -p 2170 -h top-bdii|grep -i mem
>> GlueHostMainMemoryVirtualSize: 4096
>> objectClass: GlueHostMainMemory
>> GlueHostMainMemoryRAMSize: 2048
>> GlueHostMainMemoryVirtualSize: 4096
>> objectClass: GlueHostMainMemory
>> GlueHostMainMemoryRAMSize: 2048
>> GlueHostMainMemoryVirtualSize: 4096
>> objectClass: GlueHostMainMemory
>> GlueHostMainMemoryRAMSize: 2048
>> GlueHostMainMemoryVirtualSize: 4096
>> objectClass: GlueHostMainMemory
>> GlueHostMainMemoryRAMSize: 2048
>> GlueHostMainMemoryVirtualSize: 4096
>> objectClass: GlueHostMainMemory
>> GlueHostMainMemoryRAMSize: 2048
>>
>>
>> cheers
>> alessandra
>>
>>
>> On 03/04/2012 11:39, Stuart Purdie wrote:
>>> There's a number of different types of memory that we can discuss.
>>>
>>>
>>> There is:
>>>
>>> Physical memory used
>>> Physical memory available
>>> Virtual memory used
>>> Virtual memory available
>>> Address space used
>>> Address space available
>>> Swap space used
>>> Swap space available.
>>>
>>> _All_ of these numbers are different.  Some of them are functions of the node, and some of them are per process values.  To ask about certain parts of these, without understanding how they relate to each other, is going to end up with numbers that don't make sense.
>>>
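
(As a rough illustration of the node-level vs per-process split, assuming a standard Linux /proc layout: the per-node totals live in /proc/meminfo, while the per-process values - address space mapped, resident set - live in /proc/<pid>/status, which is where top reads them from.)

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        /* Node-level values: physical RAM and swap, total and free.
           These are properties of the worker node, not of any one job. */
        FILE *f = fopen("/proc/meminfo", "r");
        if (!f) {
            perror("/proc/meminfo");
            return 1;
        }
        char line[256];
        while (fgets(line, sizeof line, f)) {
            if (strncmp(line, "MemTotal:", 9) == 0 ||
                strncmp(line, "MemFree:", 8) == 0 ||
                strncmp(line, "SwapTotal:", 10) == 0 ||
                strncmp(line, "SwapFree:", 9) == 0)
                fputs(line, stdout);
        }
        fclose(f);
        return 0;
    }
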
>>> The term 'VMem', _as measured by top_, is the 'Address space used', where 'used' means 'mapped', as in the mmap / malloc sense.
>>>
>>> Note that 'Virtual Memory' != 'Swap space', as the kernel has more facilities for juggling memory than just swap space.  In particular, 'Virtual Memory' > 'Swap space' for all practical workloads.
>>>
>>> It is useful to have the concept of a 'working set' of memory - how much the job has to keep in memory at one point in time.  Note that it is very common for a job to have a working set smaller than the total mapped Address Space.
>>>
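
(A small sketch of that distinction, assuming a 64-bit build: allocate a large block, touch only a small part of it, and VmSize - the mapped address space, top's VIRT - diverges from VmRSS - roughly the working set, top's RES.)

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Print the two numbers top reports as VIRT and RES. */
    static void show(const char *when)
    {
        FILE *f = fopen("/proc/self/status", "r");
        char line[256];
        if (!f)
            return;
        printf("-- %s --\n", when);
        while (fgets(line, sizeof line, f))
            if (strncmp(line, "VmSize:", 7) == 0 ||
                strncmp(line, "VmRSS:", 6) == 0)
                fputs(line, stdout);
        fclose(f);
    }

    int main(void)
    {
        show("at start");
        size_t len = 2UL * 1024 * 1024 * 1024;     /* allocate 2 GB ...    */
        char *p = malloc(len);
        if (!p)
            return 1;
        memset(p, 1, 64UL * 1024 * 1024);          /* ... touch only 64 MB */
        show("after allocating 2 GB and touching 64 MB");
        /* VmSize grows by ~2 GB but VmRSS by only ~64 MB: the working set
           is far smaller than the total mapped address space. */
        free(p);
        return 0;
    }
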
>>> --
>>>
>>> It sounds like these Atlas Reco jobs have a peak footprint of 3.5 ish GB.  The _important_ question is if sites will kill jobs like that.  (Glasgow won't).
>>>
>>> The next important question is if those jobs will kill everything on the box.  We, as site admins, consider this an important point.
>>>
>>> If Atlas _really_ expect to drive worker nodes into heavy swapping, then that's going to kill _everything_ on the worker node.  Once swapping starts, everything gets a lot slower.  This means that the walltime limits of jobs will be hit long before the job is near complete.
>>>
>>> If Atlas expect these reco jobs to spend a minute or so with a working set of 3GB, then this is extremely unlikely to cause problems, and probably won't swap - even though the job is going to be using more than the usual 2GB per core.
>>>
>>> If you _need_ us to have so much swap, as is being suggested, then this is entirely the wrong approach, and _will not work_.
>>>
>>> --
>>>
>>> The whole process reads very much as if someone has assumed that 'VMem' = 'Physical RAM used + Swap space used' - which is false.
>>>
>>> This is not just a technical point (although it is frustrating to be asked questions that clearly demonstrate the asker doesn't understand what they are asking for) - it is that if we _need_ that much swap, then without special handling of those jobs they will kill everything on the worker node.  We don't want that, hence having to drive into the midst of the issue in order to find out what is actually going to happen.