On Wed, 1 Mar 2000, Catherine Moroney wrote:
> This is a very elementary question I know, but I just realized that it's
> something that I really should understand.
I see that others have already answered your question (and better than I
would have been able to do). But it _is_ a frequently-asked question
among those coming to Fortran from C. It's good to know that the traffic
isn't all one way. I think the root cause is that the two languages have
quite different models of how computers process data.
My understanding of the C model is that the computer processes bunches of
bits; because of hardware restrictions these bunches are typically 8, 16,
or 32 bits long. The user is then free to map these onto the domain of
integers in any convenient way, e.g. to allocate 8-bit chunks to the
number range 0 to 255, or, if you choose, to the range -128 to +127.
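For concreteness, here is a minimal C sketch of that mapping choice: the
same 8-bit pattern reads as 240 or -16 depending on which interpretation
you pick. (The signed result assumes the usual two's-complement hardware;
strictly speaking, the conversion is implementation-defined in C.)

    #include <stdio.h>

    int main(void)
    {
        unsigned char u = 0xF0;            /* bit pattern 1111 0000 */
        signed char   s = (signed char)u;  /* same bits, viewed as signed */

        /* The same 8-bit chunk denotes 240 under one mapping and -16
           under the other (on two's-complement machines). */
        printf("unsigned view: %d\n", u);  /* prints 240 */
        printf("signed view:   %d\n", s);  /* prints -16 */
        return 0;
    }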
Fortran, on the other hand, is designed to process numbers, and doesn't
encourage you to look under the hood to see what the bit patterns look
like. Since engineers and scientists generally want a more-or-less
symmetric number range around zero, that's what Fortran does. It has
no concept of unsigned numbers (though in a few particular cases, such as
I/O unit numbers, the values are restricted to being non-negative for
essentially arbitrary reasons).
My question in return to C programmers is: how do you explain the
difference between a signed and an unsigned char? For example, if I store
'a' in these two distinct data types, what's the difference? And if there
isn't any difference, what's the point of having the two types?
:-)
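(To half-answer my own tease: as I understand it, for 'a' itself there is
no observable difference, since 97 fits in both ranges; the two types only
part company for bytes with the top bit set. A small C sketch, assuming
ASCII and two's-complement hardware, with Latin-1 'ä' as the example byte:)

    #include <stdio.h>

    int main(void)
    {
        signed char   sc  = 'a';
        unsigned char uc  = 'a';
        unsigned char raw = 0xE4;          /* e.g. Latin-1 'ä' */

        /* 'a' is 97 in ASCII: the top bit is clear, so 97 fits in
           both ranges and the two types behave identically. */
        printf("%d %d\n", sc, uc);         /* 97 97 */

        /* With the top bit set the two views diverge: the same byte
           is 228 viewed as unsigned, but -28 viewed as signed on a
           two's-complement machine. */
        printf("%d %d\n", (signed char)raw, raw);   /* -28 228 */
        return 0;
    }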
--
Clive Page,
Dept of Physics & Astronomy,
University of Leicester.