> Most folks think in terms of the number of decimal digits required.
> They don't think of 64 (bears no relation to the actual precision).
This might be true if you could actually ask for arbitrary numbers of
decimal digits. Since you can't, most programmers that I have met --
physicists included, and I'm an astronomer by training -- know that
there are only a couple of kinds of real numbers, and apply the labels
"32 bits" and "64 bits" to them. They're also aware that counting
binary things with decimal digits doesn't work.
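The mismatch is easy to see with the standard IEEE 754 significand widths: a 32-bit float carries 24 significand bits and a 64-bit float carries 53, which works out to roughly 7.2 and 15.9 decimal digits respectively, never a whole number. A quick sketch:

```python
import math

# IEEE 754 significand widths, including the implicit leading bit:
# binary32 has 24 bits, binary64 has 53 bits.
for name, bits in [("32-bit float", 24), ("64-bit float", 53)]:
    # Each binary digit is worth log10(2) ~= 0.30103 decimal digits.
    digits = bits * math.log10(2)
    print(f"{name}: {bits} significand bits ~= {digits:.2f} decimal digits")
```

Neither value lands on an integer, which is why "how many decimal digits do I need?" has no exact answer for binary floats.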
-- greg