Hello.
Actually, in one of our applications we ran into similar trouble, with an
array size too large to be represented, but I guess that was a system
limitation. Knowing that a program will involve such a large size, can't
we design our data structures to circumvent the problem? After all, the
actual problem is not that there is no memory, but that the "address"
required is unmanageable. For instance, if the indices run from
-n/2 .. n/2 instead of 0 .. n, we halve the largest integer involved.
Are there other such tricks? And if they are common, couldn't a compiler
have them built in? This is probably all too naive, but I would
appreciate a reply all the same.
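To make the index-shifting trick concrete, here is a small sketch (the
size parameter n and the array a are just illustrative names). The array
occupies the same storage either way; only the magnitude of the largest
subscript changes:

```fortran
! Sketch of shifted array bounds.  With bounds 0:n the largest
! subscript is n; with bounds -n/2:n/2 it is only n/2.
program shifted_bounds
  implicit none
  integer, parameter :: n = 1000000
  real, allocatable :: a(:)

  ! Conventional bounds would be:  allocate(a(0:n))
  ! Shifted bounds halve the largest subscript magnitude:
  allocate(a(-n/2:n/2))

  print *, lbound(a), ubound(a), size(a)   ! -500000  500000  1000001
  deallocate(a)
end program shifted_bounds
```

Of course this only buys a factor of two, so it postpones rather than
solves the problem when sizes approach the limit of the subscript kind.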
Thanks
Varadharajan S
On Tue, 27 Feb 2001, Richard Maine wrote:
> Aleksandar Donev writes:
> > It seems to me that the Fortran standard is made so that array sizes are
> > always assumed to be "small" enough to fit in a default integer. So,
> > SIZE(array) returns a default integer. Is it at all possible to have arrays
> > whose size is, say, a double precision integer (this is not even in the
> > standard set of types in the standard). I am thinking of a parallel
> > application with lots of memory, say 10-100 GB, which can not be addressed
> > by a default 32-bit integer. I haven't ever used such a huge memory, but it
> > seems a possibility.
> >
> > What is the standard standing on this?
>
> This is a current subject of discussion. It got a bit of time at the
> most recent J3 meeting. I believe that the previous feeling was that
> systems that could support arrays that large would typically make
> default integers be 64 bits. That certainly seems like the "cleanest"
> solution to me. Otherwise you are going to end up having to
> explicitly specify kinds all over the place; I'd think that would
> get out of hand.
>
> I believe that the main argument the other way (excuse me if I
> misrepresent the positions) is that there are large numbers of
> users and codes that assume default integer is 32-bits and will
> break if this changes. Thus the request to support large arrays
> while keeping default integer smaller. In my opinion, those are
> just broken codes...but I guess there are a lot of them.
>
> So the current f2k draft adds KIND arguments to a bunch of
> intrinsics. (Seems like SIZE is bound to be one of them, but I'd
> have to look it up to be absolutely sure). I think it's going to
> be awfully ugly to code that way, but I guess that having these
> arguments at least allows the vendor to present the user with
> both choices....use a default integer kind appropriate to "clean"
> coding in the environment or cater to codes that hard-wired
> assumptions about 32-bit default kinds and now want to work in
> environments with large arrays.
>
> --
> Richard Maine | Good judgement comes from experience;
> [log in to unmask] | experience comes from bad judgement.
> | -- Mark Twain
>
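P.S. If I read the discussion of the draft correctly, the KIND argument
to SIZE would be used something like this (a sketch only, with a
hypothetical kind parameter ik8 and array a):

```fortran
! Sketch of SIZE with a KIND argument, per the f2k draft as described
! above.  ik8 and a are hypothetical illustrative names.
program big_size
  implicit none
  ! Request an integer kind with at least 18 decimal digits
  ! (64 bits on most systems; the result is -1 if unsupported)
  integer, parameter :: ik8 = selected_int_kind(18)
  real :: a(1000)
  integer(ik8) :: n

  ! The result is of kind ik8 rather than default integer, so it
  ! could represent an array size beyond HUGE(0)
  n = size(a, kind=ik8)
  print *, n
end program big_size
```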