David,
> After Tim pointed out that NDFCOMPRESS was not actually making any
> difference to the size of the container file, I started to look at
Ditto COLLAPSE.
> There are two ways (short of modifying HDS) that can be used to get round
> this:
There was another way, although it might be regarded as equivalent to
the HDS_COPY. What I did in COLLAPSE and CHANMAP (and was about to
apply to COMP*) was to make a temporary NDF, propagate input to the
temporary NDF, adjust its size, then propagate it to the output. I can't
say I noticed much cost from this extra propagation, given that the
number of output bytes has been reduced.
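For the record, the temporary-NDF workaround amounts to something like
the following sketch. The routine names are the standard NDF-library
calls (NDF_TEMP, NDF_SCOPY, NDF_SBND, NDF_PROP); the component list,
parameter names and bounds variables are illustrative, not the actual
COLLAPSE code:

```fortran
      CALL NDF_BEGIN

*  Obtain the input NDF via the (assumed) 'IN' parameter.
      CALL NDF_ASSOC( 'IN', 'READ', INDF1, STATUS )

*  Create a placeholder for a temporary NDF and copy the input
*  components into it.
      CALL NDF_TEMP( PLACE, STATUS )
      CALL NDF_SCOPY( INDF1, 'Data,Variance,Quality,Units', PLACE,
     :                INDFT, STATUS )

*  Shrink the temporary NDF to the final (e.g. collapsed) bounds.
      CALL NDF_SBND( NDIM, LBND, UBND, INDFT, STATUS )

*  Propagate the now-smaller temporary NDF to the output, so the
*  container file is created at the reduced size.
      CALL NDF_PROP( INDFT, 'Data,Variance,Quality,Units', 'OUT',
     :               INDF2, STATUS )

      CALL NDF_END( STATUS )
```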
Still, a generic correction to NDF_PROP is preferable: it makes the
application code more transparent, and it does what you'd naively
expect.
> 2) modify NDF_PROP so that it does not actually allocate any disk space
> for undefined array components in the output NDF (i.e. components that are
> not copied from the input NDF), until the array component is mapped.
Excellent.
> But that doesn't mean there aren't any... If you get a chance, could you
> give your favorite NDF applications a thrashing to see if you can make them
I'll update and build, then run the ORAC-DR imaging regression tests.
> Tim, if some show-stopper appears with this system after I've gone on
> holiday, I suggest you use the ARY system as of yesterday for the release,
> and live with ndfcompress not actually doing any compression for the
> moment. Does that sound OK?
In that event, I could apply my workaround to NDFSAMESIZE, sorry,
NDFCOMPRESS.
If the NDF_PROP modification looks OK, at some point I'll remove the
NDF_TEMP stage from COLLAPSE and CHANMAP.
Malcolm