Bill Long wrote:
> James Giles wrote:
...
> The sign bit actually works much better than 1 vs. 0 in this case.
>> The subtract (C-B) produces the sign bit (and, on the machines
>> where it's relevant, the "negative" condition code). The produced
>> sign is the correct value for the logical predicate in question
>> (negative means true, non-negative means false).
>
> In the case of floating point operands, subtracting as a means of
> doing compares is not the best idea. In today's IEEE world,
> you get into a mess when NaNs and Infinities are involved. Modern
> instruction sets have explicit compare instructions.
Do any of these compare instructions return an INTEGER valued
result? Is that INTEGER value either 0 or 1? I suspect not. The
Intel processors are probably the ones most familiar to most people,
and they don't return INTEGERs, nor even a *single* bit. To determine
that two operands were unordered requires looking at a different bit
than to determine whether they were equal, and at a different bit
again to determine which was larger.
>> The bottom line is, and this is repeated information, the language
>> doesn't specify what the internal representation of LOGICALs
>> are. It shouldn't. On most hardware I have any interest in at
>> all, neither implementation (-1 vs. 0 or 0 vs. 1) is produced
>> directly by compare instructions. In many cases that interest
...
> Sorry to hear you have no interest in our processors. Our hardware
> returns 0 for false compares and 1 for true. The reason is simple.
> C defines false and true this way, and there is usually one (really
> large) C program running on systems - the OS. And it involves a
> lot of comparison instructions. Given this constraint, and that
> there is no penalty for Fortran programs, it is reasonable to
> optimize the hardware accordingly.
Well, adopting C as an implementation language pessimizes the
system. In any case, C's idea of truth isn't zero or one; it's zero
versus non-zero. Either choice of representation would leave C
unpenalized.
But I still don't believe you set logicals *in hardware* to an
INTEGER 0 or 1. I'll bet the result of a floating point compare
is not of type INTEGER at all. It's probably a single bit somewhere
(if not several bits, like the Intel chips use), or maybe even a
conditional jump form.
Once again, the language should *not* specify what the canonical
internal form of a LOGICAL should be. That is, the standard
document *should* be mute on the subject. Whatever you do as
implementors is your business. It should be your business. It
should not be the business of any language designer.
--
J. Giles
"I conclude that there are two ways of constructing a software
design: One way is to make it so simple that there are obviously
no deficiencies and the other way is to make it so complicated
that there are no obvious deficiencies." -- C. A. R. Hoare