Dan Nagle writes:
> Historically, when there was more cross-compiling,
> not using the native arithmetic of the target machine
> was also a consideration.
There is, if anything, more cross-compiling now than ever
before. Pretty much anything embedded is done with cross-compilers,
and there are *LOTS* of embedded processors today. It would be a
non-trivial exercise to even count the number of embedded processors
in a typical kitchen of today - though it is dwarfed by the number
of embedded processors in a current airplane.
It is unlikely that there is a compiler that runs on your microwave
oven - the code in your microwave was probably developed using
a cross compiler.
It probably is true, however, that most of today's embedded processors
use the same floating point format (if they have floating point at
all), so that cross-compiling float stuff isn't as traumatic as it
once was. Pretty much everything is IEEE. Standards can be a nice
thing for portability, and this is one case where pretty much
everyone is sticking to the same standard. About the only other
floating point format that you see much of on new machines is the old
IBM mainframe one (the IBM one is a horrid format, IMO, but that's
irrelevant).
> I don't think having to roll-your-own real constants
> module is so hard- the CRC handbook has many
> of the desired constants in the first few pages.
A value copied from the handbook is also likely to be correct to as many digits as you want to
copy. There is no guarantee that atan will return values accurate
to the last bit, or necessarily even close, though I'd mostly
expect it to be at least close.
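For what it's worth, such a roll-your-own module is only a few lines. A minimal sketch (the module and kind names here are mine; the digit strings are the standard published values):

```fortran
! Sketch of a roll-your-own constants module.  Module and kind
! names are illustrative; the digits are standard published values.
module constants
  implicit none
  ! Working precision: at least 15 significant digits.
  integer, parameter :: wp = selected_real_kind(15, 307)
  real(wp), parameter :: pi    = 3.14159265358979323846_wp
  real(wp), parameter :: e     = 2.71828182845904523536_wp
  real(wp), parameter :: sqrt2 = 1.41421356237309504880_wp
end module constants
```

The literal constants there are initialization expressions in every standard under discussion, so the module is portable.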
The number of intrinsics allowed in initialization expressions has
increased with every one of the last several language revisions.
As of the f2k draft, it is pretty much everything *EXCEPT* for the
transcendentals (but the one you want is, alas, a transcendental).
It wouldn't surprise me to see the transcendentals allowed in some
future version.
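To make the restriction concrete (a sketch; the names are mine): the first declaration below is the form people keep asking for, and the second is what the standards discussed here actually permit.

```fortran
module pi_example
  implicit none
  ! Not a valid initialization expression under the standards
  ! discussed above, because atan is a transcendental intrinsic:
  !   real, parameter :: pi = 4.0 * atan(1.0)
  ! What you can do instead is write the value out:
  real, parameter :: pi = 3.1415927
end module pi_example
```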
<BEGIN RANT>
Frankly, I consider the argument of wanting to use atan to
generate a pi constant to be a pretty poor one. People keep bringing
up that argument, but if that is really the best argument for the
feature, then I would be against it. I'm probably for adding the
feature at some future time, but this particular argument serves more
to turn me against the position than for it.
Asking for a substantial new compiler feature, mandatory on all
compiler developers, just in order to avoid writing out a value of pi,
strikes me as *WAY* out of balance. Oh, and to make the feature do
what people clearly expect, you'd also have to add an accuracy
requirement... which might be a good idea, but I'd think it more
appropriate to address requirements like that 1.0+1.0 be within an
order of magnitude of 2.0 before we got to fine points like atan being
accurate to the last bit. It's always just pi that gets mentioned,
as though there weren't even a second case. If it is really just
pi that is wanted, it would make more sense to ask for that than for
all the transcendental functions...or just get Dan to type it out
for you in his portability module.
I think there probably are better reasons than this for wanting
transcendentals in initialization expressions...but perhaps I'm
wrong if this is the only example that keeps coming up.
<END RANT>
One argument that I consider good is that I think the cost of
doing this feature has probably gone down a lot over the years.
That's likely to be a significant factor. But it would be nice to
have a good positive argument too, rather than just a lessening
of the negative one.
--
Richard Maine | Good judgment comes from experience;
[log in to unmask] | experience comes from bad judgment.
| -- Mark Twain
|