> > If you think A*(B+C) is better, write that.
> > If you think (A*B)+(A*C) is better, write that.
> > If you don't care ***AND IT IS IN AN INNER LOOP***, write A*B+A*C.
>
> The problem with the third notation as a means of requesting the fastest
> code possible is that it almost certainly will _not_ produce the fastest
> code on a processor that attempts little or no optimization -- for those
> processors, A*(B+C) will usually produce faster code.
But if you are concerned with speed, surely you are using an optimising
compiler? (OK, perhaps you want to hand-code speed BECAUSE you have no
optimising compiler.)
> This illustrates why
> someone might want the proposed A*[B+C].
My main "motivation" was for expressions much more complicated than the
above, where multiplying them out by hand would make the code illegible.
This is perhaps not as relevant for "mainly numerical" code as for true
FORmula TRANslation, where you have what are essentially analytic
expressions, but they have to be calculated millions of times (yes,
there is real-world code like this) and thus speed matters.
> There have been so many different suggestions about how additional
> bracketing characters might be used if they were available that I doubt J3
> would be inclined to consume a bracketing pair on something as "small" as
> this proposal. [I tend to think it unlikely that they would use "[ ]" for
> subscripts for much the same reason.]
Probably true on both counts. People have objected to [] due to
Co-Array Fortran, and I think that interval-arithmetic stuff likes them
as well. So why not {} for this?
> My gut feeling is that in most programs, "( )" is used primarily to group
> and only occasionally to force an evaluation strategy, so I like the
> suggestion that was made when this topic was discussed in the context of
> interval arithmetic optimization -- provide for a second expression
> evaluation mode in which parentheses group but do not force evaluation.
Right---this was the motivation for me starting this thread.
> A separate notation would be used to force evaluation -- I happen to like
> doubled parentheses, but an intrinsic function would also work. Thus, in
> the new mode you might write
>
> A*((B+C)) or A*eval(B+C) to force addition then multiplication
> ((A*B))+((A*C)) or eval(A*B)+eval(A*C) to force multiplication then addition
> A*(B+C) to give the processor freedom with "addition first" as the default
> A*B+A*C to give the processor freedom with "multiplication first" as the
> default
>
> [Note that if a programmer mistakenly uses the current evaluation mode
> instead of the proposed new one, the above expressions will still give
> correct answers, but the processor would fail to recognize that it had the
> freedom to consider alternative evaluation strategies in the third case.]
This isn't backward-compatible, as A*(B+C) forces evaluation now; you
need a NEW notation for a new feature.
> There is a related question about propagating this kind of optimization
> through assignments. E.g., in
> T=B+C
> D=A*T
> should it be permissible for the processor to evaluate D as A*B+A*C?
Can't this be done now with standard-conforming optimisation?