Hi Davide,
> - The CREAM's "caller" (the user, the VO...) gets her exit code in the
> job's final status report.
>
> - The LRMS' caller is *CREAM* (and not the VO or even the user, who
> has in principle no access to batch system information), so it gets
> exactly the exit code it needs to manage the jobs.
IMO the purpose is not to run the grid... it is to run users' jobs via
the grid.
> - The site administrators concerned about the payloads' exit status
> can find it in a log file. I don't see why the LRMS' one should be
> more "standard" than CREAM's one, provided that the bug mentioned by
> Massimo is fixed. After all, the payload owner interacts with CREAM,
> not with the batch system.
If I have the batch ID, I can just grep the batch system accounting
records or, even better, use tracejob to get all the information I need,
instead of digging around in the CREAM log files.
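For example (job ID and paths invented here, they vary per site):

  grep 12345.ce01.example.org /var/spool/torque/server_priv/accounting/*
  tracejob -n 3 12345    # scans the last 3 days of the PBS/Torque logs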
>
> - The site administrators who are not interested (e.g. because they
> support a lot of VOs, each one with several applications and possibly
> with different error reporting schemas) can assume that they have to
> investigate every job that terminates non-zero in the batch system, as
> the error can never be user-related.
>
> So, each layer involved in the job management has the proper
> information, available in a pretty standard way.
> I know that for those sites committed (mostly) to just one VO, where
> the administrator is also a VO member, it could be convenient to
> "short-circuit" the information sources, but IMHO this cannot be
> considered the general use case. So if such a behaviour has to be
> introduced, it must be configurable (and 'off' by default, I'd say).
Albeit I'm more involved with ATLAS, we support 13 active VOs, and the
problem I started this thread with was LHCb-related: I was trying to
understand where the LHCb jobs failed, since LHCb insisted it was a site
problem, but according to the batch system files everything ran to
successful completion. And I have to say that, at least in production,
most ATLAS transformation errors are caused by site problems, not by
coding errors. Of course user analysis might have more user errors, but
still.
cheers
alessandra
>
> Cheers,
> David
>
>> I don't think that other exit codes should necessarily be "masked".
>> I suspect that the CREAM CE could easily print this into its log file.
>> JT
>>
>> On Sep 11, 2011, at 18:25, Pablo Fernandez wrote:
>>
>>> Hi,
>>>
>>>> These days (at our site) most of the user-level (payload) errors
>>>> have nothing to do with the worker node or cluster itself. Common
>>>> problems:
>>>>
>>>> a storage element somewhere is not responding
>>>> something is wrong with the VO-installed software
>>>> user error (job just crashes due to programming errors)
>>> Actually, from the list you've given, the first two items may be
>>> local sysadmin
>>> business... on the third there is little we can do.
>>>
>>> I still don't see the reason for masking... is it WMS resubmission?
>>> If so, the only reason I see for not resubmitting is the last one;
>>> the other two may have been temporary problems, timeouts...
>>>
>>> I am also of the opinion that the Grid should work as close to Unix
>>> as possible, and this seems to be an effort in the opposite
>>> direction.
>>>
>>> BR/Pablo
>>>
>>>> If it were true that most payload errors were due to site problems,
>>>> I'd agree with the approach. Making it configurable is always okay,
>>>> as long as the configuration does not lead to lots of complexity,
>>>> which in itself is another source of error.
>>>>
>>>> JT
>>>>
>>>> On Sep 10, 2011, at 23:49, Maarten Litmaath wrote:
>>>>> Ciao Massimo,
>>>>>
>>>>>> First of all: there isn't anything different wrt the LCG-CE. Also
>>>>>> for the LCG-CE the exit code that you see in the PBS log file is
>>>>>> that of the job wrapper (jw), and not that of the user job,
>>>>>> because it is the jw that is executed in the batch system.
>>>>>> As I said, the job wrapper is a script. Oversimplifying, it is
>>>>>> something like:
>>>>>>
>>>>>> #!/bin/sh
>>>>>> <prepare execution env on the WN>
>>>>>> <get ISB>
>>>>>> <run user job>
>>>>>> <put OSB>
>>>>>>
>>>>>> If this script runs properly, it returns 0 as its exit code, and
>>>>>> not the exit code of the user job. Again, the very same scenario
>>>>>> applies to the jw used for the LCG-CE.
>>>>>> A value different from 0 means that there was a problem in the
>>>>>> execution of the job wrapper (e.g. a problem with sandbox
>>>>>> transfers).
>>>>> That is the traditional view indeed.
>>>>>
>>>>>> The user job exit code is not hidden: it is returned in the
>>>>>> glite-ce-job-status output, in wms-job-status and in
>>>>>> wms-logging-info.
>>>>>> It was supposed to be reported also in glite-ce-cream.log; I am
>>>>>> investigating why this is not the case.
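>>>>>>
>>>>>> E.g., with the verbosity raised (job ID invented here, option from
>>>>>> memory):
>>>>>>
>>>>>>   glite-ce-job-status -L 2 https://ce.example.org:8443/CREAM123456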
>>>>>>
>>>>>> The management of jobs finished with an exit code <> 0 is
>>>>>> something that was discussed several years ago, in the days of
>>>>>> DataGrid. It was decided that they should be considered as
>>>>>> successfully done (so, e.g., the WMS shouldn't trigger a
>>>>>> resubmission), but the exit code <> 0 should be returned to the
>>>>>> user so she can investigate.
>>>>> Even that could be discussed again: since the payload may have
>>>>> failed due
>>>>> to a problem with the site (e.g. full file system), a resubmission
>>>>> could
>>>>> be desirable if the JDL allows it. We may want to be careful
>>>>> there and
>>>>> make that behavior depend on a new JDL attribute.
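>>>>>
>>>>> E.g. something like this in the JDL (the attribute name is
>>>>> invented, it does not exist today):
>>>>>
>>>>>   Executable = "payload.sh";
>>>>>   RetryCount = 3;
>>>>>   // invented attribute: resubmit also on a non-zero payload exit code
>>>>>   ResubmitOnNonZeroExitCode = true;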
>>>>>
>>>>>> I don't fully understand what the RFE is here. To have the jw
>>>>>> return with the user job's exit code (so that this value is
>>>>>> reported in the PBS log file)?
>>>>> Right. It would seem nice if:
>>>>>
>>>>> - the site admin could configure that behavior;
>>>>> - the WMS could still distinguish between job wrapper and payload
>>>>> problems.
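>>>>>
>>>>> A minimal sketch of what the jw change could look like (the switch
>>>>> name is invented; the real jw is far more elaborate):
>>>>>
>>>>>   #!/bin/sh
>>>>>   <prepare execution env on the WN>
>>>>>   <get ISB>
>>>>>   <run user job>
>>>>>   user_ec=$?                   # capture the payload's exit code
>>>>>   <put OSB>
>>>>>   if [ "$REPORT_PAYLOAD_EC" = "yes" ]; then  # hypothetical site switch
>>>>>       exit $user_ec            # PBS log then shows the payload's code
>>>>>   else
>>>>>       exit 0                   # current behaviour: jw status only
>>>>>   fi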
>
>