>
> Aleksandar Donev writes:
> > It seems to me that the Fortran standard is made so that array sizes are
> > always assumed to be "small" enough to fit in a default integer. So,
> > SIZE(array) returns a default integer. Is it at all possible to have arrays
> > whose size is, say, a "double precision" integer (a type that is not
> > even in the standard set of types)? I am thinking of a parallel
> > application with lots of memory, say 10-100 GB, which cannot be addressed
> > by a default 32-bit integer. I haven't ever used that much memory, but it
> > seems a possibility.
> >
> > Where does the standard stand on this?
>
> This is a current subject of discussion. It got a bit of time at the
> most recent J3 meeting. I believe that the previous feeling was that
> systems that could support arrays that large would typically make
> default integers be 64 bits. That certainly seems like the "cleanest"
> solution to me. Otherwise you are going to end up having to
> explicitly specify kinds all over the place; I'd think that would
> get out of hand.
>
> I believe that the main argument the other way (excuse me if I
> misrepresent the positions) is that there are large numbers of
> users and codes that assume default integer is 32 bits and will
> break if this changes. Thus the request to support large arrays
> while keeping default integer smaller. In my opinion, those are
> just broken codes...but I guess there are a lot of them.
>
> So the current f2k draft adds KIND arguments to a bunch of
> intrinsics. (Seems like SIZE is bound to be one of them, but I'd
> have to look it up to be absolutely sure.) I think it's going to
> be awfully ugly to code that way, but I guess that having these
> arguments at least allows the vendor to present the user with
> both choices: use a default integer kind appropriate to "clean"
> coding in the environment, or cater to codes that hard-wired
> assumptions about 32-bit default kinds and now want to work in
> environments with large arrays.
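
For concreteness, here is a sketch of what that KIND-argument style
might look like, assuming SIZE is indeed among the affected intrinsics
(the kind name IK8 and the SELECTED_INT_KIND(18) request below are my
own choices, not the draft's):

    program big_size
        implicit none
        integer, parameter :: ik8 = selected_int_kind( 18 )  ! >= 64 bits
        real, allocatable  :: a(:)
        integer( ik8 )     :: n
        allocate( a( 1000 ) )        ! imagine several billion elements
        n = size( a, kind=ik8 )      ! result returned in the wide kind
        print *, n
    end program big_size
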
With current trends in microprocessor design, where processors are
much faster than memory and memory bandwidth is _important_, it is in
practice quite important to code with 32-bit integers and floats,
while at the same time the increases in system (and problem) size
dictate a need for larger (in practice, this means 64-bit)
subscripts, addresses, record numbers, etc.
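
Such code might look like the following sketch (the kind names r4 and
i8 are my own conventions, nothing standard):

    integer, parameter :: r4 = selected_real_kind( 6 )   ! 32-bit reals
    integer, parameter :: i8 = selected_int_kind( 18 )   ! 64-bit subscripts
    real( r4 ), allocatable :: x(:)
    integer( i8 )           :: i, n
    n = 5000000000_i8              ! more elements than 32 bits can count
    allocate( x( n ) )
    do i = 1, n                    ! loop index must be the wide kind
        x( i ) = 0.0_r4
    end do
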
To my mind, it is therefore important to support both of these
practices/trends in a straightforward manner. And I think it is
terribly important to preserve standard-conforming programs, even at
the expense of a majority of codes that make the nonstandard
assumption that INTEGER means 32 bits.
And it would be terrible to do as the C committee has done: say in one
release of the standard that "long" is the longest integer type, only
to say in the next version, "No, we didn't mean it; too many people
have made the assumption that 'long' means 32 bits; 'long long' is a
new integer type longer than 'long'" -- breaking exactly the code that
was carefully crafted to rely upon the standard and to be
size-independent.
So (for example), it would be useful to have in the standard a
PARAMETER whose value is KIND( <longest INTEGER type> ), or some
equivalent facility.
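
Lacking that, the closest portable approximation I know of is a
handmade probe (the module and names below are mine, and they assume
that 64 bits is the longest integer kind any vendor offers):

    module long_kinds
        implicit none
        ! Ask for at least 18 decimal digits, i.e. 64 bits on common
        ! hardware.  SELECTED_INT_KIND returns -1 if no such kind
        ! exists, so a declaration INTEGER( LONGEST_IK ) then fails
        ! to compile, loudly, instead of silently truncating.
        integer, parameter :: longest_ik = selected_int_kind( 18 )
    end module long_kinds

A program would then declare, e.g., INTEGER( LONGEST_IK ) :: nrecs for
its record counts and subscripts.
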
fwiw
Carlie J. Coats, Jr. [log in to unmask]
MCNC Environmental Programs phone: (919)248-9241
North Carolina Supercomputing Center fax: (919)248-9245
3021 Cornwallis Road P. O. Box 12889
Research Triangle Park, N. C. 27709-2889 USA
"My opinions are my own, and I've got *lots* of them!"