David wrote:
> I'd argue that separation of specification from implementation is desirable
> in the *output* produced by the compiler, but not in the *input*. I, as a
> programmer, do not want to write a separate specification of the interface to
> my procedures....
I don't see cut-and-paste every ten years or so as an onerous task.
But that's irrelevant: it's possible to define a facility in the
language to support separate specification and implementation that is
compatible with existing Fortran, and that does not require writing
interfaces twice.
This issue has been dismissed for too long as merely a "quality of
implementation" issue. In principle, it's merely a "quality of
implementation" issue whether INTERCAL compilers generate code that
gives decent floating-point performance, but it's better to attack the
problem at the language design level. (INTERCAL's only control-flow
statements are NEXT, RESUME, FORGET, and (in C-INTERCAL) the infamous
COME FROM; its only operators are interleave, select, and the unary
Boolean operations; its only data types are 16- and 32-bit unsigned
integers.)
Although the compilation-cascade problem was warned about more than a
decade ago, vendors are _only_now_ even beginning to _think_ about
attacking it as a "quality of implementation" issue. If specification
and implementation had been separated, at the language level, from the
get-go, users wouldn't have needed to struggle with this problem for a
decade. Compilers might have appeared sooner, too, because vendors
wouldn't have needed to agonize over what to put in the .mod file. In
fact, Ada compiler vendors already knew, when Fortran 90 was being
designed, that it's faster simply to re-read the text of a package spec
than it is to "compile" it into a .mod file and then read the .mod file
whenever "use" is encountered. In addition, in such environments, it's
impossible to cause a compilation cascade by accidentally recompiling a
package spec.
This "quality of implementation" issue is raised quite selectively,
almost like a mantra that caught on more by faddism or by religious
fervor than by judgement. Why weren't array operations dismissed as
a "quality of implementation" issue? Why weren't modern control
structures dismissed as a "quality of implementation" issue? It's
because they have utility to the user community.
Dismissing something about the language design that provides significant
benefit to the user community, while _reducing_ the effort that vendors need
to invest to provide a "quality implementation" is a disservice to everybody.
It's also a little bit loony to discuss separation of specification and
implementation only on the grounds of compilation cascade. There are
at least two other reasons to do it:
1. Code is the best documentation for code. But vendors of libraries
rarely take the trouble to strip their trade secrets out of their
modules and ship what remains to users as the most reliable interface
documentation that could exist. Instead, users make do with paper
documentation of questionable authenticity. I suppose this, too, is
a "quality of implementation" issue.
2. If specification and implementation were separated, it would be
possible for A's implementation to use B's specification, and
vice-versa, without causing any problems of circularity. This
is sometimes exploited in Modula-2 when implementation modules get too
big for the compiler. One _really_should_ attack the "huge
implementation" problem with the same kind of solution as in Ada.
Best regards,
Van