On Sun, 7 Mar 2004 [log in to unmask] wrote:
> > Some of the problem is with floating-point literals. What if the
> > spec were to say that if a literal does not have a KIND specified,
> > it is to be interpreted as the "greatest" kind that the implementation
> > supports (rather than the default kind)?
> >
> > For instance, in "REAL a; a= 1./7.", 1. and 7. might be interpreted
> > as, say, DOUBLE PRECISION constants.
> >
> > It seems to me that this simple proposal would solve a lot of gotchas
> > that arise when, say, you want to take old code with an expression
> > such as the above and make "a" DOUBLE PRECISION instead.
>
> Yes, but it introduces new problems when there is no obvious
> way to correct a "mistake" due to this promotion.
>
> Things like
> call expects_a_single_precision_arg (3.14)
> or
> call expects_a_single_precision_arg (3.14*single_prec_variable)
> or
> single_prec = max(single_prec, 1.0)
Jumping into this thread late (and I might have missed bits of it). I also find adding kind suffixes to floating-point literals very irritating and error-prone. What is wrong with casting the floating-point literals in the right-hand-side expression to the precision of the left-hand-side variable, and doing it ONLY for the floating-point literals? E.g., in a case like:
integer, parameter :: SP = selected_real_kind(6,20)
integer, parameter :: DP = selected_real_kind(14,30)
real(DP) :: a
real(SP) :: b
a = sin(3.0)
b = sin(3.0)
currently one would write this as:
a = sin(3.0_DP)
b = sin(3.0_SP)
Why not just use the kind expected on the LHS, and cast the literals on the RHS to that precision?
renchi