On Jan 9, 2006, at 9:06 AM, Michael Milgram wrote:
> Just what does the standard have to say about mixing variables of
> different KINDS and can you get into trouble by doing so?
The short version is that things get converted to the kind with the
highest precision. Trouble? Well, the biggest source of trouble is user
confusion - expecting conversion in cases where it won't happen, such
as between actual and dummy arguments. Such confusion is a very real
problem. And there is one other big issue relating to where the
conversion happens - see below.
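For example, no conversion happens across a procedure call. A minimal
sketch (the names are just for illustration) of the trap and the
explicit fix:

    program arg_kinds
      implicit none
      real :: s
      s = 1.0
      ! call sub(s)      ! invalid: single precision actual argument,
      !                  ! double precision dummy; no conversion is done
      call sub(dble(s))  ! convert explicitly instead
    contains
      subroutine sub(d)
        double precision, intent(in) :: d
        print *, d
      end subroutine sub
    end program arg_kinds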
> For example, mixing integers and double as in
>
> A=B/(1+C)
>
> implicitly converts the integer "1" into a real, but is it a single
> "real" or a double real? (A, B and C are declared double).
Double - because C is double. But A and B do *NOT* matter. That part is
important. There isn't some magic global scheme to look at the whole
expression. Conversion is done for an individual operator - in this
case the addition; the integer is converted to double real to match the
other operand of the plus.
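To make the per-operator point concrete, a minimal sketch (the variable
names are just for illustration):

    program mixed
      implicit none
      double precision :: a, b, c
      real :: s
      b = 1.0d0
      c = 3.0d0
      s = 3.0
      ! The 1 is converted to double because c, the other operand of
      ! the +, is double; a and b play no part in that decision.
      a = b/(1+c)
      print *, a
      ! Here the 1 is converted to single to match s; only the result
      ! of (1+s) is then converted to double for the division by b.
      a = b/(1+s)
      print *, a
    end program mixed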
> Also, what is the default definition for "E" vs "D" specifications.
> Will 0.1D-1 give the same result on a 64 bit processor as on a 32 bit
> processor (I know, the standard says nothing about processor, but
> somewhere it must define "D" vs "E").
*NO*, *NO*, *NO*! For many reasons. The definition of E is that it
gives single precision - whatever that is. The definition of D is that
it gives double - whatever that is. The standard says almost nothing
about what single and double are (just that double has more precision
than single and takes twice the space). In order to specify precision
portably, use the f90 KIND facility and selected_real_kind. That's what
it is for. Single and double are not portable in terms of precision.
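For instance, a portable request for (say) at least 12 decimal digits
might look like the following sketch; the digit count and the name wp
are just illustrative choices:

    program kinds
      implicit none
      ! Ask for a real kind with at least 12 decimal digits of precision.
      integer, parameter :: wp = selected_real_kind(12)
      real(wp) :: x
      x = 0.1_wp   ! the kind suffix makes the constant that kind too
      print *, precision(x), x
    end program kinds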
But also, Fortran aside, the term "64-bit processor" means essentially
nothing about precision of data. The term is primarily a marketing one
rather than a technical one. As such, its meaning tends to vary
depending on what is being "sold". However, to the extent that "64-bit
processor" has an accepted meaning in the current market, it refers to
address size, *NOT* data size. The question of whether you have a
64-bit processor is relevant to matters such as whether arrays can be
larger than 2 GB (it doesn't completely answer the question, but it is
relevant). It is essentially irrelevant to the question of whether
single precision reals are 32 or 64 bits. In most (but not all)
implementations, single precision reals are 32 bits on 64-bit
processors.
> - why does C4 appear to have full precision when its denominator is
> "single"
I'd guess that the compiler is doing you the "favor" of noticing that
you have a double precision left-hand side and promoting some of the
operations to double. You can't count on this happening. Counting on
such things is a common error. Compilers are allowed to do it, but you
aren't allowed to count on it. Usually, the only thing it hurts is
user confusion - users get used to it happening and start assuming it,
later getting bitten when it doesn't happen.
In fact, this is the other big caveat that I mentioned earlier. You
need to be aware of *WHERE* conversions happen. If you have something
like
X = 1.3
where X is double precision, that does *NOT* mean that the 1.3 is a
double precision constant. The form 1.3 is always a single precision
real *REGARDLESS OF CONTEXT*. What the standard says about this
statement is that 1.3 is evaluated as a single precision real constant.
The assignment causes conversion of the single precision value to
double, but that doesn't change the fact that 1.3 is first evaluated as
single, and thus need not be more accurate than single. Some compilers
will "guess" what you really mean (which is probably X = 1.3D0), but
you can't count on that.
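A small demonstration of the difference (the printed digits will vary
by compiler):

    program constants
      implicit none
      double precision :: x
      x = 1.3      ! a single precision constant, widened on assignment;
                   ! the extra digits are typically garbage
      print *, x
      x = 1.3d0    ! a genuine double precision constant
      print *, x
    end program constants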
> - are integer constants (e.g. 1.0) implicitly converted differently
> than non-integer constants (e.g. 1.5)
>
> - can you lose precision when dividing by a real single precision
> constant that may be converted to a multiplication (e.g. A/3. ->
> a*.333... at compilation time) or
> must it always be written A/3._double to be sure.
You "should" be ok, but only because the values 1.0, 1.5, and 3.0 all
happen to be ones that are represented exactly even in single
precision. The standard doesn't require this - it just happens to be so
on all current machines (or ones you are ever likely to see). To be
completely 100% safe according to the standard, you should not assume
such exactness, but many people do (including myself).
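A sketch of why the exactness matters (wp is just an illustrative kind
name, as above):

    program exactness
      implicit none
      integer, parameter :: wp = selected_real_kind(12)
      real(wp) :: a
      a = 10.0_wp
      ! 3.0 is exact even in single precision, so these two agree:
      print *, a/3., a/3.0_wp
      ! 0.1 is not exact, so these typically differ in the low-order digits:
      print *, a/0.1, a/0.1_wp
    end program exactness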
As a matter of style, some people recommend varying degrees of
avoidance of implicit conversions. The degree of preferred avoidance
varies from person to person. My own preferences on this matter are
"middle of the road". I am happy enough to write (where X is real)
X = 1
accepting the implicit conversion on assignment, but I'm more leery of
things like
X = Y/3
Even though the standard allows it, I feel that this is getting close
enough to invite the user error of writing
X = J/3
which does not mean at all the same thing as
X = J/3.0
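Spelling out that trap (j and x are just illustrative names):

    program int_div
      implicit none
      integer :: j
      real :: x
      j = 7
      x = j/3      ! integer division: 7/3 is 2, then converted to 2.0
      print *, x
      x = j/3.0    ! j is converted to real, giving roughly 2.333
      print *, x
    end program int_div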
> - how does one properly set a parameter using "D" or "E" notation
> (ERR2 vs ERR3) in the example.
D means double precision; it is processor-dependent what precision
double is. Thus, you really should not use D with constants where the kind
is selected with selected_real_kind. If you use a DOUBLE PRECISION
declaration, use a D exponent. If you use a kind-style declaration, use
the same kind for constants. There will be some kind value equivalent
to double precision, but it is processor-dependent which one that is.
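For example (the kind value and parameter names are just illustrative),
something along these lines keeps each constant matched to its
declaration:

    program params
      implicit none
      integer, parameter :: wp = selected_real_kind(12)
      ! Kind-style declaration: give the constant the same kind via a
      ! suffix. (A D exponent cannot take a kind suffix, so use E or none.)
      real(wp), parameter :: err_a = 1.0e-10_wp
      ! DOUBLE PRECISION declaration: a D exponent is the natural match.
      double precision, parameter :: err_b = 1.0d-10
      print *, err_a, err_b
    end program params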
--
Richard Maine | Good judgment comes from experience;
[log in to unmask] | experience comes from bad judgment.
| -- Mark Twain
|