Paddy O'Brien writes:
> Isn't it
> about time that the standard made it mandatory that the constant is
> hoisted to the kind of the variable being assigned to, or, in an
> expression, hoisted to the greatest precision in that expression?
>
> I cannot believe that this change will ever break any code; rather, it
> will do what we naive programmers (and there seem to be many of us
> falling into this trap) thought we were getting.
This has been discussed innumerable times before. Part of the problem
is that there are cases where it is flatly impossible for the compiler
to infer the intended precision. There are other cases where,
although it is possible, it is extremely obscure, confusing, and
inconsistent. I'm a big fan of consistency and simplicity. The
existing rules clearly aren't what everyone would intuit at first,
but they are simple and consistent, and thus not that difficult to
learn. Anyone who can't learn them is going to be unable to even
come close to learning the complicated set of rules necessary to
determine exactly when promotion would apply in all cases.
On the other hand, I do agree with the position expressed by
Dick (and by others, such as Giles on clf) that it would be good of
compilers to give warnings when they notice questionable cases.
I personally think it more helpful in the long run for
compilers to help the users notice the problem areas than to
try to quietly hide the issue by fixing up the simple cases,
leaving the user completely unprepared to deal with the harder
ones because he is used to it all just working by magic.
> I'm pretty certain that hoisting is done within expressions for
> variables, and believe it to be part of the standard. My belief in this
> respect may not be reality.
No, it is never done by the standard. Some compilers try to "help"
out (see my diatribe above about such alleged help) by quietly
fixing such things, but the standard absolutely never specifies
this. You are probably thinking of things like x+y where x
and y are of different precisions. The standard says that this
is done by converting the lower precision one to the higher
precision and then doing the addition. However, there is nothing
special here about constants. If you have, say, 0.1d0+0.2,
then the 0.2 is a single precision constant. It is converted to
double precision before the addition, but that is only *AFTER*
its single precision value is determined. There are no cases
anywhere in which the standard says to interpret 0.2 as a double
precision constant. See my comment above about consistency. If
you think the rules are complicated, then you don't know them.
Easy to make a mistake, yes - complicated, no.
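To make that concrete, here is a minimal sketch (the program itself
is mine, but the comments just restate the rules above):

program demo
  implicit none
  double precision :: x, y
  x = 0.1d0 + 0.2     ! 0.2 gets its single precision value first,
                      ! then is converted for the addition
  y = 0.1d0 + 0.2d0   ! both constants are double precision
  print *, x          ! carries the single precision error of 0.2
  print *, y          ! accurate to double precision
end program demo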
A few trivial examples of problem cases. I'm not going to spend
much time on this, so these are just off the top of my head.
I've been through this too many times before (and I'm home today,
just briefly checking my work email).
double precision x
x = f(0.2)
Should the 0.2 be promoted? If your answer is yes, then you just
broke lots of existing code in which f is a function that takes a
single precision argument. If the answer is no, then we are starting
down the road to one of the many complications - namely, where exactly
is the line drawn.
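To spell out the "yes" case, here is a sketch of the sort of existing
code that promotion would break (the body of f is hypothetical, of
course):

real function f(a)
  implicit none
  real, intent(in) :: a   ! a single precision dummy argument
  f = 2.0*a
end function f

program breakage
  implicit none
  real, external :: f
  double precision :: x
  x = f(0.2)   ! legal today: 0.2 is single precision, matching f;
               ! promote the 0.2 and the argument no longer matches
  print *, x
end program breakage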
Suppose f is a generic with both single and double precision versions.
Does that change the answer? Or suppose it is generic and has only
one of those. Suppose f is a strange but legal generic where the
specific with a single precision argument gives a double result,
and vice versa. Yes, that would be strange, but we standards-writers
have to define all the possibilities, no matter how strange.
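A sketch of what such a generic looks like (the names are mine), just
to show what any promotion rule would have to disambiguate:

module m
  implicit none
  interface g
    module procedure g_single, g_double
  end interface
contains
  real function g_single(a)
    real, intent(in) :: a
    g_single = a
  end function g_single
  double precision function g_double(a)
    double precision, intent(in) :: a
    g_double = a
  end function g_double
end module m

Today g(0.2) unambiguously selects g_single. A promotion rule would
have to say when, if ever, it should select g_double instead, and what
happens when only one of the two specifics exists.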
Once you have all the answers for user-written generics, how about
the intrinsic ones, such as
x = sqrt(0.2)
I bet you are going to tell me that the 0.2 should be promoted in that
case. If so, you are going to have a fun time explaining exactly
where the line got drawn between the cases where the 0.2 can't be
promoted and the cases where you want it to be. And you need to
explain this not just to me, but to all those users who get confused
by the current simple rule. Good luck. Don't forget to cover user
extensions of intrinsic generics. And defined assignments, just in
case you thought you could avoid trouble by giving up on things that
look like function arguments. Oh, and if your answer is that the 0.2
is not promoted, you'll get to argue with lots of posters who have
said it should be.
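For reference, here is what the current rules say about that line (a
sketch; the comments describe today's behavior, not any proposal):

program intrinsic_case
  implicit none
  double precision :: x
  x = sqrt(0.2)     ! single precision sqrt of a single precision 0.2;
                    ! the result is converted to double only afterward
  print *, x        ! only about 7 digits are meaningful
  x = sqrt(0.2d0)   ! double precision throughout
  print *, x
end program intrinsic_case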
This is all just one of the more "obvious" lines of complication,
and this is all off the top of my head. It gets really, really
complicated. I'm not at all sure that I am even capable of
coming up with a complete and consistent set of rules. I'm
absolutely certain that any set of rules I could come up with
would be considered "wrong" by plenty of people. And I'm also
certain that I couldn't explain them to the average Fortran
programmer. I'd just have to summarize them as "sometimes
the compiler will fix things up for you; other times you'll be
screwed."
--
Richard Maine | Good judgment comes from experience;
[log in to unmask] | experience comes from bad judgment.
| -- Mark Twain