Alvaro Agustin Fernandez writes:
> Now then, my question - is transfer() a particularly slow function,
> or not? I realize the question is so broad as to be almost
> meaningless, but I'm particularly thinking of it being used to mimic
> run-time polymorphism. Would doing so in a computationally intensive
> code be a bad idea? Or is this implementation dependent? Also,
> since it is a bit obscure as far as F90 commands go, is it often
> buggy, in people's experience?
It is highly implementation dependent. In my initial experiments with
f90, about 10 years ago, I had assumed that TRANSFER would be
essentially a no-op, needing essentially no run-time code, but
instead just amounting to a way to tell the compiler to interpret
the same bits as a different type.
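A minimal sketch of that simple case, assuming IEEE single-precision reals and a 32-bit default integer (both common but not guaranteed by the standard):

```fortran
program transfer_noop
  implicit none
  real :: x = 1.0
  integer :: bits
  ! Same-size scalar case: a compiler could implement this with no
  ! run-time code, simply reinterpreting the bits of x as an integer.
  bits = transfer(x, bits)
  ! On an IEEE machine, 1.0 has the bit pattern 3F800000.
  print '(z8.8)', bits
end program transfer_noop
```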
But it turns out that this can't be true in all cases (try, for
example, a TRANSFER of a non-contiguous array slice). Perhaps some compilers
might be able to optimize some of the simple cases - notably the
cases where it really is just re-interpreting the same bits and
shouldn't need any run-time code. Such cases do exist (and are even
pretty common). But some transfer implementations can end up awfully
slow in some cases. You may end up with such things as dynamic
allocation of a compiler temporary, followed by a subroutine call
to copy the data into that temporary.
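The non-contiguous case mentioned above might look like this (a hypothetical example, not taken from any particular compiler's problem report): the source is a row of a column-major array, so its elements are strided in memory and cannot simply be reinterpreted in place.

```fortran
program transfer_slice
  implicit none
  real :: a(4,4)
  integer :: packed(4)
  a = 0.0
  ! a(1,:) is a non-contiguous section (stride 4 in memory), so the
  ! compiler cannot just relabel the bits; it may allocate a
  ! contiguous temporary and copy the section into it before the
  ! TRANSFER, exactly the kind of hidden cost described above.
  packed = transfer(a(1,:), packed)
  print *, packed
end program transfer_slice
```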
Summary - some implementations might be fast in some cases. I don't
know which compilers and which cases, but I'll just acknowledge that
there probably are some. But other cases and compilers are going to
be slow.
And buggy. Very. One of the buggiest features in early compilers.
Perhaps they are all better by now; I don't know because I gave up on
using TRANSFER to implement polymorphism. I got tired of fighting
with a new set of compiler bugs every time I ported to a new compiler.
--
Richard Maine