I wrote:
> > I believe that the previous feeling was that systems that could
> > support arrays that large would typically make default integers
> > be 64 bits.
Aleksandar Donev writes:
> I disagree emphatically with the suggestion of making default integers this
> large. The default integer kind should be, and is, determined by the word
> size of the architecture; it is meant to provide the fastest possible
> integer operations, not to address the whole memory.
I was under the impression that the typical word size on machines of
that class *IS* 64 bits (and the busses are sometimes even 128 bits
wide or more). It's been over 15 years now since I personally first
started using machines where 64-bit floats were sometimes faster than
32-bit floats (as long as you stayed in the CPU and weren't limited
by memory bandwidth), because the 32-bit ones were done by extending to
the "native" 64-bit size, doing the 64-bit operation, and then
truncating the result.
If a system supports arrays larger than 2**32 elements, and integer
arithmetic for 64-bit integers isn't pretty darn good, then I'd guess
performance in practice is going to s...um, oh yes, better watch my
language in public... be abysmal. Doesn't seem like something likely
to be a big commercial success.
--
Richard Maine | Good judgement comes from experience;
[log in to unmask] | experience comes from bad judgement.
| -- Mark Twain
|