From: "Van Snyder" <[log in to unmask]>
Sent: Friday, April 26, 2013 9:26 AM

> On Thu, 2013-04-25 at 23:18 +0200, Giorgio Pastore wrote:
>> forget everything else and stay with the approach of modern Fortran:
>> there is no "single" or double precision, no 32 or 64 bit standards to
>> worry about or other technicalities. Only the number of significant
>> figures and range of "real" data you are interested in actually
>> matters! This way you decouple forever your code from all possible
>> choices and changes of floating point technology and you stay with
>> what should be the main concern of any computational scientist: range
>> and precision (as opposed to bits, bytes, binary representation, etc.).
> 
> It depends upon what you're doing.
> 
> If you're writing specific procedures for a generic interface, selecting
> one with 6 digits and one with 12 digits could get you the same kind
> type parameter, and therefore failure to compile.  So, in that case, use
> kind(0.0) and kind(0.0d0).  It's unlikely that 0.0d0 will be removed
> from Fortran any time within the next fifty years.

There are two requirements, and it's well worth distinguishing them.

One is the case of writing generic procedures.
Here, the specific procedures for different kinds MUST be written in such a way
that their kinds ALWAYS resolve to distinct kind numbers; otherwise the generic
interface is ambiguous and the code fails to compile.
This suggests that such generic procedures use REAL and DOUBLE PRECISION,
or their equivalents kind(0.0) and kind(0.0d0), which are guaranteed distinct.
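A minimal sketch of that first case (the module and procedure names here are
hypothetical, not from the original posts): one specific per default kind, so
the two specifics can never collide the way two selected_real_kind requests
might on some processors.

module generic_norm_mod
  implicit none
  private
  public :: norm2_gen

  ! Generic name with one specific for each guaranteed-distinct kind.
  interface norm2_gen
    module procedure norm2_single, norm2_double
  end interface

contains

  function norm2_single(x) result(r)
    real(kind(0.0)), intent(in) :: x(:)   ! default real
    real(kind(0.0)) :: r
    r = sqrt(sum(x*x))
  end function norm2_single

  function norm2_double(x) result(r)
    real(kind(0.0d0)), intent(in) :: x(:) ! double precision
    real(kind(0.0d0)) :: r
    r = sqrt(sum(x*x))
  end function norm2_double

end module generic_norm_mod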

The other relates to user-written procedures that call those generics.
They can request whatever precision they actually need, for example with
selected_real_kind(6) and selected_real_kind(12).
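A minimal sketch of the calling side, reusing the hypothetical module above:
the caller states only the precision it needs, and whichever hardware kind
satisfies that request is the one the generic resolves to. On typical current
hardware selected_real_kind(6) and selected_real_kind(12) map to kind(0.0) and
kind(0.0d0), so the two calls reach the two specifics.

program use_norm
  use generic_norm_mod, only: norm2_gen
  implicit none
  integer, parameter :: sp = selected_real_kind(6)   ! at least 6 decimal digits
  integer, parameter :: dp = selected_real_kind(12)  ! at least 12 decimal digits
  real(sp) :: a(3) = [1.0_sp, 2.0_sp, 3.0_sp]
  real(dp) :: b(3) = [1.0_dp, 2.0_dp, 3.0_dp]
  print *, norm2_gen(a)   ! resolves to whichever specific matches kind sp
  print *, norm2_gen(b)   ! resolves to whichever specific matches kind dp
end program use_norm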

Thus, if hardware single precision carries 15 or so digits, both the 6-digit
and the 12-digit requests resolve to hardware single precision on that system,
and all the generic calls invoke the single-precision specific.
If hardware single precision carries only 6 digits, the 6-digit request
resolves to single precision and the 12-digit request to double precision,
so the generic calls use the single- and double-precision hardware respectively.
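A small sketch to see that mapping on a particular processor: it prints the
kind numbers returned by selected_real_kind next to the default-real and
double-precision kinds, showing which hardware kind each precision request
lands on.

program show_kinds
  implicit none
  ! On most systems the first and third lines match, as do the second and fourth.
  print *, 'kind(0.0)              =', kind(0.0)
  print *, 'kind(0.0d0)            =', kind(0.0d0)
  print *, 'selected_real_kind(6)  =', selected_real_kind(6)
  print *, 'selected_real_kind(12) =', selected_real_kind(12)
end program show_kinds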