Aleksandar Donev writes:
> I see--I have never used these switches before. NAG has a float-store option
> and the manual says it helps "avoid problems with excess precision". How can
> extra precision be a problem?
>
> Needing more precision at no cost :-)
Steve explained some things, but there can also be other
problems. Yes, it can make some algorithms fail horribly.
Consistency in precision can be important in some algorithms,
particularly ones that are adaptive to precision effects.
Having everything be at a higher precision might be fine, but
having some things at higher precision and others not, with it
being very difficult to predict which is which, can be a killer.
That's pretty much the situation you have in many of these cases.
For the most trivial example (and a *VERY* common one), consider
pre-f90 code (of which there is still quite a lot, thank you)
that tries to calculate the machine precision at run time. In f90,
we have it easy with intrinsics like EPSILON and PRECISION, but
prior to f90 this was an extremely common issue that came up
in some of the most carefully written and widely used codes.
Problems with code that tries to automatically detect the precision
are notorious, and they are often related to intermediate
results being stored in higher precision.
Sometimes the precision-determination code goes into an infinite
loop. Other times it might do a perfectly fine job of determining
the precision of the intermediate storage...which other parts of
the code then incorrectly assume is a precision that their own
operations can achieve. This can easily result in unachievable
convergence criteria and other numerical problems.
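To make that concrete, here is a minimal sketch (not anyone's actual
library code; the names and layout are mine) of the classic pre-f90
idiom next to the f90 intrinsic that replaces it. If 1.0 + eps is
held in a wider register instead of being stored as a default real,
the loop runs past the storage precision and reports an epsilon that
stored results can never achieve; a float-store option forces the
store and avoids that.

   program eps_sketch
      implicit none
      real :: eps, test
      eps = 1.0
      do
         test = 1.0 + eps     ! the float-store question: is this kept in
                              ! a wider register or stored as a real?
         if (test <= 1.0) exit
         eps = eps / 2.0
      end do
      eps = 2.0 * eps         ! smallest eps with 1.0 + eps > 1.0
      print *, 'detected epsilon:', eps
      print *, 'epsilon(1.0):    ', epsilon(1.0)   ! the f90 way
   end program eps_sketch

With faithful stores, both lines print the same value; with excess
precision leaking into the comparison, the detected value can be
smaller than anything the rest of the code can actually keep.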
My codes don't tend to have problems relating to extra precision.
Many codes that do have "problems" are really just illustrating
that they are using numerically unstable algorithms and had
hidden problems all along anyway. But there certainly exist
codes where this is a real issue.
My main application area does have a somewhat similar issue that
surprises many people. (I sure keep having to point it out to
people here). In statistical analysis of measured data, the
algorithms often have a singularity at the limit of perfect,
noiseless measurements. This does not tend to be a problem with
real data :-(. Algorithms that falsely assume perfect measurements
are *MUCH* more likely to have big problems.
But when people generate simulated test data using the exact same
model as the analysis tools (but omitting the noise), it tends to be
close enough to the singularity to cause big problems. Yes, higher
precision will drive it closer to the singularity. The solution, of
course, is not to run the simulation in lower precision, but instead
to add pseudo-random noise to it. I'm forever having to point out to
people that for these analyses, noiseless data can actually be worse
than data contaminated with a little noise. Generally, it is
true that if you can't make an algorithm work on a simulation, it
is pretty pointless to try to actually fly it. And that is certainly
very true of the kinds of algorithms I work with...but the simulation
had better include some simulated noise.
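For what it's worth, the fix is usually trivial. Below is a purely
illustrative sketch (the model, the names, and the noise level are
all made up) of perturbing the simulated "perfect" measurements with
a little pseudo-random noise before handing them to the analysis code.

   program noisy_sim
      implicit none
      integer, parameter :: n = 100
      real, parameter :: noise = 1.0e-3   ! noise amplitude, illustrative only
      real :: y(n), u(n)
      integer :: i

      ! Stand-in for the "perfect" simulated measurements generated
      ! from the same model that the analysis assumes.
      y = (/ (sin(0.1*real(i)), i = 1, n) /)

      ! Small +/- perturbation keeps the data away from the
      ! noiseless singularity that the analysis cannot handle.
      call random_number(u)                ! uniform on [0,1)
      y = y + noise*(2.0*u - 1.0)

      print *, 'first few noisy samples:', y(1:3)
   end program noisy_sim

The point is only that the data fed to the analysis sits a little off
the noiseless singularity, just as real measurements always do.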
--
Richard Maine | Good judgment comes from experience;
[log in to unmask] | experience comes from bad judgment.
| -- Mark Twain
|