He hasn't replied yet. I think it is equally likely that the make-gmf has
produced different results at different sites.
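As a sketch of what such an inconsistency would look like (the DN and pool accounts below are invented for illustration, not taken from any real site), two sites generating their grid-mapfiles independently can end up mapping the same DN to different local accounts:

```shell
# Hypothetical grid-mapfile fragments from two sites; the DN and the
# pool accounts (.lhcbsgm, .lhcb) are made up for illustration.
cat > /tmp/gmf_siteA <<'EOF'
"/C=UK/O=eScience/CN=some user" .lhcbsgm
EOF
cat > /tmp/gmf_siteB <<'EOF'
"/C=UK/O=eScience/CN=some user" .lhcb
EOF
# The same DN maps to the sgm pool at site A but the plain lhcb pool at B:
grep "some user" /tmp/gmf_siteA /tmp/gmf_siteB
```

If the real files differ like this, the next question is whether the generation tool or its configuration differs between the sites.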
John
> -----Original Message-----
> From: Testbed Support for GridPP member institutes
> [mailto:[log in to unmask]] On Behalf Of Alessandra Forti
> Sent: 24 May 2006 09:57
> To: [log in to unmask]
> Subject: Re: shared experiment area load
>
> I was wondering: can you ask if Ricardo is using
>
> voms-proxy-init -voms lhcb:Role=Admin
>
> or something like that?
>
> It is possible that he is starting jobs with proxies carrying different
> roles; this might be why he is mapped differently on different systems.
>
> cheers
> alessandra
>
> Alessandra Forti wrote:
> > It is extracted from voms.
> >
> > Gordon, JC (John) wrote:
> >> Nick and Philippe of LHCb tell me Ricardo is in the lhcb sgm group in
> >> VOMS but he should currently be running tests, not production. It sounds
> >> like the gmf generation is not consistent across sites.
> >> I have asked Ricardo what he thinks should be happening. Then I'll raise
> >> a ticket (or ask Olivier to do so).
> >>
> >> Does anyone know how the sgm in the gmf should be defined today? Is it
> >> extracted from VOMS? Or defined in YAIM?
> >>
> >> John
> >>
> >>> -----Original Message-----
> >>> From: Testbed Support for GridPP member institutes
> >>> [mailto:[log in to unmask]] On Behalf Of Olivier van der Aa
> >>> Sent: 23 May 2006 17:35
> >>> To: [log in to unmask]
> >>> Subject: Re: shared experiment area load
> >>>
> >>> Gordon, JC (John) wrote:
> >>>> Olivier, I am sitting next to Nick Brook and he says that lhcb
> >>>> production jobs should not run as sgm. Is this happening at
> >>> other sites?
> >>> When checking the gridmapfile I can find only 3 sgm users.
> >>> Alex, could you tell us when you saw a lot of lhcb sgm jobs? When I
> >>> look now I only see normal lhcb jobs.
> >>>
> >>>
> >>> Olivier.
> >>>> Can you tell me the DN of the user being mapped to sgm, if that
> >>>> doesn't break your data security policy :-) Nick thinks the
> >>>> gridmapfile generation may not be correct.
> >>>> John
> >>>>> -----Original Message-----
> >>>>> From: Testbed Support for GridPP member institutes
> >>>>> [mailto:[log in to unmask]] On Behalf Of Olivier van der Aa
> >>>>> Sent: 23 May 2006 15:49
> >>>>> To: [log in to unmask]
> >>>>> Subject: shared experiment area load
> >>>>>
> >>>>> Dear All,
> >>>>>
> >>>>> At QMUL we have a load problem with the experiment shared area.
> >>>>> The farm is running around 900 jobs and the NFS server serving the
> >>>>> experiment area is overloaded.
> >>>>>
> >>>>> The result is that lhcb jobs sit for a long time on the WN
> >>>>> waiting for data (mainly libraries).
> >>>>>
> >>>>> We would like to know how this is solved at RAL and Manchester,
> >>>>> where the size is similar. We were thinking of setting up a set of
> >>>>> PBS slots for the sgm accounts to have rw access. The other nodes
> >>>>> would just have a copy on the local disk, or access through several
> >>>>> NFS servers.
> >>>>>
> >>>>> I think the problem with a small set of WNs having rw access is
> >>>>> that lhcb is sending a lot of jobs via one user who is sgm. Most of
> >>>>> those jobs do not write to the experiment software area, but they
> >>>>> would stack up waiting for those WNs to be freed.
> >>>>>
> >>>>> We are keen to hear about your experience on this topic.
> >>>>>
> >>>>> Cheers, Olivier.
> >>>>>
> >>>>> --
> >>>>> - O. van der Aa - Imperial College London -
> >>>>> - LT2 Technical Coordinator -
> >>>>> - tel: +442075947810, +442071005426 -
> >>>>> - SIP: [log in to unmask] -
> >>>>> - fax: +442078238830 -
> >>>>> - http://surl.se/agtu -
> >>>>>
> >>>
> >>>
> >
>
> --
> *******************************************
> * Dr Alessandra Forti *
> * Technical Coordinator - NorthGrid Tier2 *
> * http://www.hep.man.ac.uk/u/aforti *
> *******************************************
>
|