I've changed the way that NDG allocates and releases temporary HDS
objects to follow the pattern used by ARY. This fixes the CALCNOISE
problem, and also seems to stop the unexplained occasional increase
in HDS temporary file size I was seeing yesterday.
This is a bit of a sticking plaster. A better solution would probably be
to work out what the underlying problem in HDS is that makes this trick
necessary.
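To illustrate the distinction (this is a toy model in Python, not the
actual HDS API or the NDG code — the class and method names here are
invented for the sketch): annulling a locator only drops the reference,
while the object itself stays in the temporary file until it is erased
from its parent structure, which is why annul-only leaks and the file
keeps growing.

```python
# Toy model (NOT the HDS API) of annul vs. annul-and-erase for
# temporary objects. All names here are hypothetical.

class TempFile:
    """Stands in for the single HDS temporary file."""
    def __init__(self):
        self.components = {}   # name -> data
        self.next_id = 0

    def create_temp(self, data):
        """Roughly like datTemp: create a component, return a 'locator'."""
        name = f"TEMP_{self.next_id}"
        self.next_id += 1
        self.components[name] = data
        return name

    def erase(self, name):
        """Roughly like datErase on the parent: delete the component."""
        del self.components[name]

tmp = TempFile()

# Annul only: the locator reference goes away, but the component
# remains, so the temporary file grows with every allocation.
loc = tmp.create_temp([0] * 100)
loc = None                       # "annul": reference dropped, data kept
assert len(tmp.components) == 1  # leaked

# Annul AND erase (the ARY-style pattern): the component is removed too.
loc = tmp.create_temp([0] * 100)
tmp.erase(loc)                   # erase before dropping the reference
loc = None
assert len(tmp.components) == 1  # only the earlier leaked component left
```

The leaked `TEMP_0` component in the toy model plays the role of the
growing HDS temporary file; erasing as well as annulling keeps the file
from accumulating dead objects.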
David
On 6 December 2010 20:00, Tim Jenness <[log in to unmask]> wrote:
> On Dec 6, 2010, at 7:15 AM, David Berry wrote:
>
>> OK - I think I *may* have got to the bottom of this. I've fixed two
>> problems today - one in ARY (I was allocating the memory to hold a
>> mapped array twice rather than once), and one in NDG (I was simply
>> annulling temporary HDS objects, rather than annulling them AND
>> erasing them).
>>
>
> The NDG "fix" breaks SMURF calcnoise:
>
>
> Processing data from instrument 'SCUBA-2' for object 'DARK' from the following
> observation :
> 20101204 #3 scan
>
> OUT - Noise images [log in to unmask]*/ >
> POWER - Power spectra files /@xxx2_power/ > !
> Found 32 continuous chunks
> smf_concat_smfGroup: Warning, Couldn't create TSWCS for data cube
> !! Routine rec1_read_file called with an invalid bloc argument of 0 for file
> ! /Volumes/AstroHD/timj/scuba2/eng/t16d19.sdf (internal programming
> ! error).
> ! DAT_PAREN: Error locating the parent structure of an HDS object.
> ! Application exit status DAT__FILRD, File read error
> ! testdata/*.sdf
>
> So it's an "internal programming error" and should therefore not be happening. I get the same error with a single input file for CALCNOISE.
>
> MAKEMAP seems to work though. It's possible that I'm doing something funky with provenance in CALCNOISE, but that HDS error looks a bit crazy, so my vote is that the wrong thing is being erased at some point.
>
> --
> Tim Jenness
> Joint Astronomy Centre
>