On Nov 25, 2010, at 10:21 PM, David Berry wrote:
> On 26 November 2010 08:04, Tim Jenness <[log in to unmask]> wrote:
>> On Fri, 26 Nov 2010, David Berry wrote:
>>
>>> Not sure I follow what you are suggesting for 3) - are you saying you
>>> would check for (effectively) VAL__BAD values in the input and replace
>>> them with corresponding VAL__BAD values in the output? If so, won't this
>>> break the whole bad pixel flag concept in NDF? That is, HDS will
>>> convert bad-looking values regardless of the setting of the NDF bad
>>> pixel flag. As Malcolm points out, there could well be _UBYTE or
>>> _UWORD data out there for which setting the bad pixel flag off is
>>> essential.
>>>
>>
>> If you are mapping an _INTEGER as a _WORD and the integer array has bad
>> values in it, then currently you are guaranteed to get a conversion error.
>> My proposal in this case is simply not to trigger the conversion error if
>> the integer was the bad value (the conversion will already have put the bad
>> word into the array, so the question is whether to set status to bad or not).
>
> Hmmm. If the _INTEGER array has an effective bad pixel flag of FALSE
> (i.e. all values are to be interpreted literally - no magic values)
> and you map it as _WORD, then don't you *want* to know about such
> conversion errors? I'll grant that using the full range of an
> _INTEGER is unlikely ever to be essential, but it could well be the
> case when, say, mapping a _UWORD array as _UBYTE.
>
HDS doesn't have a bad pixel flag. The bad values are only inserted during conversion errors and are not used anywhere else in HDS.
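The behaviour under discussion can be sketched in C. This is an illustrative model, not the actual HDS implementation: any _INTEGER value that will not fit in a _WORD, including VAL__BADI, is written out as VAL__BADW and counted as a conversion error (the condition HDS reports as DAT__CONER). The bad-value constants are reproduced here in the style of Starlink's prm_par.h (signed types use their minimum value) so that the sketch is self-contained.

```c
#include <limits.h>
#include <stddef.h>

/* Bad-value constants in the style of Starlink's prm_par.h, reproduced
   here so the sketch is self-contained; real code should use the header. */
#define VAL__BADI INT_MIN     /* bad _INTEGER */
#define VAL__BADW SHRT_MIN    /* bad _WORD */

/* Illustrative model of the current narrowing conversion: every
   out-of-range input, including VAL__BADI, becomes VAL__BADW and is
   counted as a conversion error. A non-zero return corresponds to the
   case where HDS would report DAT__CONER. */
size_t conv_i_to_w( const int *in, short *out, size_t n ) {
   size_t nerr = 0;
   for( size_t i = 0; i < n; i++ ) {
      if( in[ i ] >= SHRT_MIN && in[ i ] <= SHRT_MAX ) {
         out[ i ] = (short) in[ i ];
      } else {
         out[ i ] = VAL__BADW;   /* includes the VAL__BADI case */
         nerr++;
      }
   }
   return nerr;
}
```

Tim's proposal amounts to skipping the `nerr++` when `in[i] == VAL__BADI`: the output array would be unchanged, only the error status would differ.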
>> I think this question is different from whether mapping to a bigger type
>> should translate bad values, since you don't get a bad status then.
>>
>> I assume that ARY maps in the native format regardless and then does the
>> conversion itself, so ARY won't care what HDS does when mapping with a
>> different type. I assume this is the case because neither ARY nor NDF traps
>> DAT__CONER.
>>
>> Sounds like everyone wants me to trap DAT__CONER in SMURF and assume that
>> the error is from a bad value shrinkage.
>
> Or map with the native type and then use VEC to do the conversion?
>
Yes. But that's real work in sc2store.c. (This is all happening because I shrunk the integer entries in the JCMTState struct, but I still need to read old data, and some of that old data used bad values when the SMU was inactive.) I know in sc2store.c that SMU_JIG_INDEX is never going to go out of range unless it's VAL__BADI.
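The "map with the native type and convert yourself" workaround could look something like the sketch below. The helper name is hypothetical (the real sc2store.c reads via HDS, and VEC_ITOW would be the library route); the point is just that VAL__BADI is translated explicitly, so only a genuinely out-of-range value is treated as an error - which for SMU_JIG_INDEX should never happen in practice.

```c
#include <limits.h>
#include <stddef.h>

/* Bad values in the style of prm_par.h, reproduced for a
   self-contained sketch. */
#define VAL__BADI INT_MIN
#define VAL__BADW SHRT_MIN

/* Hypothetical helper: copy an _INTEGER array, mapped with its native
   type, into the shrunken _WORD JCMTState entry, translating VAL__BADI
   to VAL__BADW by hand. Returns 0 on success, or index+1 of the first
   genuinely out-of-range value so the caller can report a real error. */
size_t smu_jig_index_to_word( const int *in, short *out, size_t n ) {
   for( size_t i = 0; i < n; i++ ) {
      if( in[ i ] == VAL__BADI ) {
         out[ i ] = VAL__BADW;                /* SMU inactive: keep it bad */
      } else if( in[ i ] < SHRT_MIN || in[ i ] > SHRT_MAX ) {
         return i + 1;                        /* a real overflow */
      } else {
         out[ i ] = (short) in[ i ];
      }
   }
   return 0;
}
```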
> I suppose another option would be to add a new attribute to each HDS
> primitive saying whether to check for bad values or not, but this
> sounds like a lot of work.
Since this is only an issue during data conversion, the easiest approach would be a tuning parameter indicating whether bad values should be retained on conversion.
--
Tim Jenness
Joint Astronomy Centre