Hi Gareth,
I plan to submit some jobs over the weekend. I can try Glasgow.
Thanks
Elena
On 10 Aug 2018, at 12:23, Gareth Roy <[log in to unmask]> wrote:
> Hi Elena,
>
> This is specifically VAC @ Glasgow, the VMs are configured with 4GB per processor at present on the pool that you would access.
>
> If that's reasonable I can re-enable LZ and we can see what happens?
>
> Thanks,
>
> Gareth
>
> -----Original Message-----
> From: Testbed Support for GridPP member institutes [mailto:[log in to unmask]] On Behalf Of Elena Korolkova
> Sent: 10 August 2018 12:19
> To: [log in to unmask]
> Subject: Re: The grid is awesome/rant of the day
>
> Hi Gareth,
>
> I submitted 300 jobs specifically to IC (they also use 2 cores); these are the ones queueing now.
> Yes, we can use Glasgow for LZ. Typical jobs need ~4 GB of memory. What is the memory limit?
>
> Thanks
> Elena
>
> On 10 Aug 2018, at 12:13, Gareth Roy <[log in to unmask]> wrote:
>
>> Hi Daniela,
>>
>> Is LZ able to use VAC now? I wasn't sure of the resolution of the ticket, and then there seemed to be a little confusion (perhaps on my part). I can re-enable them and see if their jobs would run?
>>
>> Thanks,
>>
>> Gareth
>>
>> -----Original Message-----
>> From: Testbed Support for GridPP member institutes [mailto:[log in to unmask]] On Behalf Of Daniela Bauer
>> Sent: 10 August 2018 11:15
>> To: [log in to unmask]
>> Subject: The grid is awesome/rant of the day
>>
>> Hi All, hi Management,
>>
>> I don't know if Imperial is the only site, but we are currently seeing huge pressure on our resources from a large variety of VOs, several of them (lsst, lz) marked as urgent.
>> So the Grid *is* usable outside the LHC.
>> So far so good, but what do I do now? (That's a serious question. I'm writing an IRIS request as we speak, but what I really need is a bunch of cheap worker nodes and a couple of containers.)
>>
>> Daniela
>> [Running,Waiting,Error]
>> duneplt [0, 8, 0]
>> snoplplt [0, 4, 0]
>> phenoplt [288, 320, 0]
>> ilc [0, 35, 0]
>> lzplt [24, 9143, 0]
>> lhcbplt [18, 24, 0]
>> skaplt [0, 5, 0]
>> t2kplt [0, 99, 0]
>> biomed [87, 619, 0]
>> lsstplt [264, 5, 0]
>> lhcbprd [0, 10, 0]
>> cmsplt [2792, 1592, 0]
>> atlasprd [1640, 2069, 0]
>> gridppplt [2, 341, 0]
>> nastplt [15, 390, 0] (that's NA62)
>> ops [0, 1, 0]
>> cometplt [0, 14, 0]
>> cxsys [3, 633, 0] (complex systems)
>> solidplt [0, 3339, 0]
>> total [5133, 18651, 0]
>> total grid [5133, 18651, 0]
>> (and a lot of these are multicore jobs)
>> --
>> Sent from the pit of despair
>>
>> -----------------------------------------------------------
>> [log in to unmask]
>> HEP Group/Physics Dep
>> Imperial College
>> London, SW7 2BW
>> Tel: +44-(0)20-75947810
>> http://www.hep.ph.ic.ac.uk/~dbauer/
>>
>> ########################################################################
>>
>> To unsubscribe from the TB-SUPPORT list, click the following link:
>> https://www.jiscmail.ac.uk/cgi-bin/webadmin?SUBED1=TB-SUPPORT&A=1
>>
>> ########################################################################