On Sat, 17 Jun 2006, Malcolm J. Currie wrote:
>> This can be easily tested by creating an NDF of all zeros, then doing
>> either:
>>
>> % div in1=orig in2=zero out=div_by_zero
>
> This actually tests for a zero denominator rather than using the
> non-portable numerical handler in PRIMDAT, and can set the value bad,
> whereas MATHS still uses PRIMDAT (called from the bowels of TRN via a
> computed GOTO for each arithmetic operation). If I recall correctly,
> the numeric handling isn't portable. Norman?
Correct. NUM also uses a common block to store the status (which means
that some code in another library won't see the result anyway). I
occasionally think about these issues, but the real problem is that IEEE
arithmetic is designed so that this is all irrelevant. That's the entire
reason that Inf and NaN exist. The CPU is optimised for them, and they make
programs more efficient since there is no need to check each numerical
result at the user level. Some OSes do allow trapping of these conditions,
but it's not portable.
The only real solution is for NDF_UNMAP to go through and convert Inf and
NaN to BAD automatically, or to provide a macro/function that does the
bad-value checking (but that would require too much code to be changed)
and recognises IEEE bad values as well as PRIMDAT bad values.
>
> Is it too big an overhead to apply a threshold to set the Inf values to
> bad with THRESH?
>
NATIVE does this I think.
--
Tim Jenness
JAC software
http://www.jach.hawaii.edu/~timj