Keith Bierman ADT/QED <[log in to unmask]> wrote:
...
>
>>(and the implementation can do anything it likes) or are IEEE exceptions
>
>There is an IEEE TR, which will be part of F2k.
>
>
>>Is a program which generates an exception in violation of the
>>standard
>
>That was the old argument: once one is outside the scope of the
>Standard all bets are off.
>
>Various useful functionalities (e.g. extended precision registers,
>intervals, ieee arithmetic) make such transformations visible. One can
>argue that such things should always remain outside the standard.
Yes, but once *in*, what consequences do they have? The
original Ada definition was criticized for insisting that
any exceptions raised by optimized code had to be the same
ones and *in*the*same*order* as if no optimization had taken
place. (What does Ada require now?)
Is this the kind of thing the new Fortran standard will require?
I'm not even sure it's possible to do in those implementations
where, for example, loops are divided up and separate processors
handle distinct ranges of the iteration. I'm pretty sure it wouldn't
be practical to require such a constraint on exceptions.
The standard could say that which conditions cause exceptions,
and when and in what order they arise, is beyond the scope of the
standard, but that those that do arise are handled according to
the TR. This would
permit the reorderings currently common and still give the programmer
the benefit of exception handling. There are probably other ways to
address this as well.
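To make the reordering point concrete, here is a minimal sketch (in
Python, not Fortran, purely for illustration) of how splitting a loop
across processors can change which exception is observed first; the
data values and chunking are invented for the example.

```python
# Illustrative sketch: splitting a loop into ranges handled by
# separate "processors" can change which exception surfaces first.

data = [1.0, 2.0, 0.0, 4.0, 5.0, 0.0, 7.0, 8.0]

def first_zero_divide(indices):
    """Return the first index whose reciprocal raises, or None."""
    for i in indices:
        try:
            _ = 1.0 / data[i]
        except ZeroDivisionError:
            return i
    return None

# Serial iteration order: the exception at index 2 is seen first.
serial = first_zero_divide(range(len(data)))

# A two-"processor" split where the second half happens to report
# first: now the exception at index 5 is the first one observed.
chunks = [range(4, 8), range(0, 4)]
parallel_style = next(
    i for i in (first_zero_divide(c) for c in chunks) if i is not None
)

print(serial, parallel_style)   # 2 5
```

Requiring the parallel execution to report the same exception, in the
same order, as the serial one would forbid exactly this kind of split.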
I don't see any obvious consequences (on the use of reordering
optimizations) with respect to the rest of the features you mention.
There are semantic issues (when can the implementation provide
more precision than the programmer specified?). The intuitive rule
I've been promoting would still be the guide: an optimization is
allowed if it doesn't change the meaning required by the standard
(assuming the standard pins that down).
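The extended-precision question can also be made concrete. The
following sketch (Python, chosen for illustration; the values are
picked to land on a rounding boundary) emulates single precision by
round-tripping through a 32-bit float, and shows that carrying
intermediates in wider precision changes the final single-precision
result:

```python
import struct

def to_single(x):
    """Round a Python float (IEEE double) to IEEE single precision."""
    return struct.unpack('f', struct.pack('f', x))[0]

a, b, c = 1.0, 2.0**-24, 2.0**-24

# Rounding after every operation, as strict single precision would:
# 1.0 + 2**-24 rounds (ties-to-even) back to 1.0, twice.
single_result = to_single(to_single(a + b) + c)

# Evaluating in double precision and rounding only once at the end,
# as an extended-precision register might: the two small terms
# survive and the sum rounds up to 1 + 2**-23.
double_result = to_single(a + b + c)

print(single_result, double_result)
```

Both evaluations are "at least as precise" as the declaration asks
for, yet they give different answers, which is why the standard has
to pin down when the extra precision is permitted.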
Interval arithmetic is an interesting case in point. The definition of
multiply and the definition of square are different. That is: if both
X and Y are the interval from -1.0 to 2.0, the product of the two
is the interval from -2.0 to 4.0, but the square of either is the
interval from 0.0 to 4.0. So suppose you have the following:
Y = X
Z = X * Y
Is the processor allowed to notice that Y is really X (and not just
another value that just happens to have the same interval)? That is,
can the processor use the definition of squaring rather than multiply?
Once the semantics is pinned down, the allowed optimizations (if
any) will be clear. But, leave a hole in the semantic definition
and the question of valid optimizations will be clear as mud.
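The multiply-versus-square distinction can be sketched as follows
(Python rather than Fortran, with a hand-rolled Interval class for
illustration only; real interval packages must also control rounding
direction on the endpoints, which this sketch ignores):

```python
class Interval:
    """Closed interval [lo, hi] with interval multiply and square."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __mul__(self, other):
        # General multiply: X and Y vary independently, so take the
        # extremes over all four endpoint products.
        p = [self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi]
        return Interval(min(p), max(p))

    def square(self):
        # Squaring: both factors are the *same* value, so the result
        # can never be negative even if the interval straddles zero.
        if self.lo <= 0.0 <= self.hi:
            return Interval(0.0, max(self.lo**2, self.hi**2))
        lo2, hi2 = self.lo**2, self.hi**2
        return Interval(min(lo2, hi2), max(lo2, hi2))

x = Interval(-1.0, 2.0)
y = Interval(-1.0, 2.0)
prod = x * y        # multiply: [-2.0, 4.0]
sq = x.square()     # square:   [ 0.0, 4.0]
print((prod.lo, prod.hi), (sq.lo, sq.hi))
```

Whether the processor may substitute `x.square()` for `x * y` after
noticing that Y is really X is exactly the semantic question above:
the substitution gives a tighter (and different) interval.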
--
J. Giles
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%