Aleksandar Donev wrote:
> Catherine Moroney wrote:
>> Writing a function seems to work though. Or is there a problem with
>> this solution that I don't know?
> Don't even open that bag of worms.
> The short answer: Do not use a function for this, use a subroutine.
> Functions are not guaranteed to be executed:
> a=f(x)+f(x)
> is x incremented twice or once, i.e., how many times is f really
> called???
This issue is genuinely problematic (and Aleks' example is one of the
very few cases in which the answer is fuzzy).
The issue involves two standard provisions. First, when an
expression is evaluated, an implementation is free to evaluate
any mathematically equivalent expression instead (or, if it's
a LOGICAL or CHARACTER expression, it is free to evaluate
any logically or stringly(?) equivalent expression). It is permitted
to do this even if the alternative mathematical (logical, string)
expression is *computationally* different. So, in a statement
like:
a = 0*int_f(x)
The compiler is free to substitute the mathematically equivalent
expression 0 (zero) for the original. (There is some dispute
about whether 0.0*real_f(x) is mathematically equivalent to
0.0 since it depends on whether you regard the semantics of
IEEE NaNs to be mathematical or just computational).
In any case, there are not a lot of mathematical identities that a
compiler might apply that could eliminate a function reference
and so, not need to evaluate that function. Most compilers
don't bother: there are easier-to-discover optimizations
that pay off more often. Few expressions contain obvious
instances of these shortcuts that can be detected at compile
time, and run-time tests usually make the code run slower.
But, it is a kind of optimization of which you ought to be aware
just in case.
Now, the second case is the one Aleks mentioned above. The
standard requires that functions must not have any side effects that
change the value of any other entity in the same simple statement.
With that rule in place, the compiler can assume for
a = f(x) + f(x)
that F() neither changes its argument (since that's used elsewhere
in the expression), nor does it change A, nor does it have saved
internal state that would cause F itself to deliver a different answer.
Those assumptions are collectively considered sufficient to allow
processors to evaluate F(x) just once and use the result twice.
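A minimal sketch of that point (f here keeps its internal state in a
SAVE variable; the names are illustrative): splitting the two references
into separate simple statements removes the compiler's license to merge
them, because the no-side-effects rule applies per statement:

```fortran
program twice
  implicit none
  integer :: a, t1, t2
  a = f() + f()     ! f's saved-state side effect makes this
                    ! nonconforming; f may be evaluated only once
  t1 = f()          ! in separate statements, each reference
  t2 = f()          ! must actually be executed
  a  = t1 + t2
  print *, a
contains
  integer function f()
    integer, save :: count = 0
    count = count + 1   ! side effect via saved internal state
    f = count
  end function f
end program twice
```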
Unrelated to the above, if the function is declared PURE, there
are additional optimizations the compiler can apply, but since
PURE functions have no side-effects anyway, that's not relevant
to the current discussion.
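For completeness, a minimal sketch of the PURE form (illustrative
names): declaring the function PURE states the absence of side effects
explicitly, so the compiler may merge or hoist references with
confidence.

```fortran
! A PURE function may not modify its arguments, global state,
! or saved variables, so references to it can safely be combined.
pure function square(v) result(r)
  real, intent(in) :: v
  real             :: r
  r = v*v
end function square
```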
Now, there are those who *claim* (and increasingly decline even to
defend the claim) that functions might *never* be executed
in standard-compliant implementations. Such claims always
involve out of context quotations and careful disregard of other
explicit requirements of the standard (for example, that "the
value of a function reference is determined by execution of the
function", a statement which is the *only* provision in the entire
document which states how to evaluate a function).
In real life you can usually count on side effects in functions to
work as you expect. If you make sure that the usual mathematical
identities (like multiplying by zero) don't apply to your code
and that you don't call the same function with the same argument
twice in the same expression, that "usually" can be reliably replaced
with "always". If a given compiler doesn't do that, demand your
money back and post the description here or to the usenet newsgroup.
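As a sketch of Aleks' recommendation above (the names next_value and
use_subroutine are invented for this example), recasting the
side-effecting function as a subroutine makes every execution explicit
and guaranteed:

```fortran
program use_subroutine
  implicit none
  integer :: x, v1, v2, a
  x = 0
  call next_value(x, v1)   ! each CALL statement must be executed
  call next_value(x, v2)
  a = v1 + v2
  print *, x               ! x has been incremented exactly twice
contains
  subroutine next_value(n, val)
    integer, intent(inout) :: n
    integer, intent(out)   :: val
    n = n + 1              ! the side effect, now explicit
    val = n
  end subroutine next_value
end program use_subroutine
```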
--
J. Giles
"I conclude that there are two ways of constructing a software
design: One way is to make it so simple that there are obviously
no deficiencies and the other way is to make it so complicated
that there are no obvious deficiencies." -- C. A. R. Hoare