Excellent, thank you David. I'll give that a test today.
Cheers,
Brad.
On Thu, 29 Sep 2005, David Berry wrote:
> P.S. The message, if issued, will have this form:
>
> WARNING: The number of active GRP identifiers increased from 0 to 6 during
> execution of WCSMOSAIC (KAPPA programming error).
>
> Obviously,"0", "6" and "WCSMOSAIC" are just example values...
>
> David
>
>
>
> On Thu, 29 Sep 2005, David Berry wrote:
>
>> Brad,
>> I've modified the kappa monolith routines so that they check the
>> number of active GRP identifiers on entry and exit, reporting a warning
>> message (not an error) if the number on exit exceeds the number on entry.
>> This should hopefully help you track down which applications are not
>> releasing their GRP identifiers (assuming it is a kappa problem and not a
>> polpack or ccdpack problem).
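>>
>> In outline, the check just brackets each application invocation, along
>> the lines of the sketch below. This is only an illustration of the idea,
>> not the real monolith code, and the counter and routine names are made up:
>>
>>    #include <stdio.h>
>>
>>    static int activeGroups = 0;       /* stand-in for GRP's own count  */
>>
>>    static void leakyApplication( void ) {
>>       activeGroups += 6;              /* creates six groups...         */
>>                                       /* ...and never deletes them     */
>>    }
>>
>>    static void invokeWithLeakCheck( const char *name, void (*app)(void) ) {
>>       int before = activeGroups;      /* identifiers active on entry   */
>>       app();
>>       int after = activeGroups;       /* identifiers active on exit    */
>>       if( after > before ) {          /* warn, but do not set status   */
>>          printf( "WARNING: The number of active GRP identifiers increased "
>>                  "from %d to %d during execution of %s (KAPPA programming "
>>                  "error).\n", before, after, name );
>>       }
>>    }
>>
>>    int main( void ) {
>>       invokeWithLeakCheck( "WCSMOSAIC", leakyApplication );
>>       return 0;
>>    }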
>>
>> David
>>
>>
>> On Wed, 28 Sep 2005, Brad Cavanagh wrote:
>>
>>> I apologise in advance for the vagueness of this error report. It happened
>>> after 434 WFCAM frames were processed over a few hours of reduction time
>>> with ORAC-DR. The pipeline first crashed on PSF, giving an error similar
>>> to the following:
>>>
>>> !! GRP1_GTSLT: Maximum number of groups (500) exceeded
>>> ! GRP1_GTSLT: Unable to create a new group
>>> ! GRP_NEW: Unable to create a new group
>>> ! Error obtaining a group of existing NDFs using group expression
>>> ! "y20050907_00557_ff"
>>> ! Unable to associate a group of NDFs with parameter NDF.
>>> ! STATS: Error computing simple statistics for an NDF's pixels.
>>> !! GRP__NOMOR: GRP common arrays full
>>> Error in obeyw to monolith kappa_mon (task=stats): 233603130
>>> Arguments were: ndf=y20050907_00557_ff clip=[2,2.5,3,3]
>>>
>>> It then crashed on every subsequent file, never on the first KAPPA call
>>> of the recipe but always on STATS. What are the GRP common arrays, how
>>> can they get full, and how can I avoid them getting full? I've never seen
>>> this happen before, and I could have sworn I've run through more than 434
>>> files in one reduction before. I'm not out of disk space -- I have about
>>> 46 gigs free on this drive.
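>>>
>>> My guess at the mechanism (and it is only a guess, not the actual GRP
>>> source) is that GRP keeps a fixed-size internal table of group slots,
>>> so anything that creates groups without deleting them slowly eats slots
>>> until the 500 limit is hit, along these lines:
>>>
>>>    #include <stdio.h>
>>>
>>>    #define MAXGRP 500              /* stand-in for GRP's 500-group limit */
>>>
>>>    static int slotUsed[ MAXGRP ];  /* stand-in for the GRP common arrays */
>>>
>>>    /* Grab the first free slot, or fail if the table is full. */
>>>    static int newGroup( void ) {
>>>       for( int i = 0; i < MAXGRP; i++ ) {
>>>          if( !slotUsed[ i ] ) {
>>>             slotUsed[ i ] = 1;
>>>             return i;
>>>          }
>>>       }
>>>       printf( "GRP common arrays full\n" );
>>>       return -1;
>>>    }
>>>
>>>    int main( void ) {
>>>       /* A recipe that leaks one group per frame exhausts the table after
>>>        * 500 frames, however much disk space is free. */
>>>       for( int frame = 1; frame <= 600; frame++ ) {
>>>          if( newGroup() < 0 ) {
>>>             printf( "failed on frame %d\n", frame );
>>>             break;
>>>          }
>>>          /* the matching delete is never called: that is the leak */
>>>       }
>>>       return 0;
>>>    }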
>>>
>>> I'm running the pipeline from the top again with debugging on to see if it
>>> crashes again, but it'll take a while.
>>>
>>
>