[log in to unmask] wrote:
> Many of you will have noticed that the source code for Sun fpp has been placed on netlib, and it has been outfitted with autoconf etc. so that it compiles and runs on several systems, including my cygwin/w2k.
Hi,
I have already checked out Sun's fpp, and I have read the ISO specification of CoCo, and I am not very happy with either.
In summary, they are both *too* simple and have very limited functionality. A preprocessor should not just let you select pieces of code to compile and maybe every now and then define a macro for x**3; that much can be done well with cpp, even in a Fortran context. Also, Sun's specification of fpp does not tell you at all how,
say, recursive or nested macros are treated, which is a *very* important topic.
I like the f90ppr of M. Olagnon as a proposal because:
1. It recognizes the *important* fact that most extensions to the language come in the form of directives that appear to Fortran as comments, like !HPF. So it allows me to treat HPF code well by simply saying -C '!HPF' at the command line.
2. It indents/capitalizes Fortran code well, even in the presence of directives like !HPF
3. It can build names by concatenating other names with ?, so one can say:
Function CubeOf?type (x) Result (cube)
type(Kind=kind) :: x, cube
cube=x**3.0_?kind
End function
and just set
define type Real
define kind dp
to get a nice definition for a function giving the cube of a double precision number:
Function CubeOfReal (x) Result (cube)
Real(Kind=dp) :: x,cube
cube=x**3.0_dp
End function
or something to that effect.
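For what it's worth, this one-level define-plus-? substitution is simple enough to mock up in a few lines. Here is a rough Python sketch of my reading of those semantics (the regexes and the function name are mine, not f90ppr's; the real tool should of course be written in C or C++):

```python
import re

def expand(text, defines):
    """One-pass expansion in the spirit of f90ppr's `define`:
    `?name` splices a replacement into a surrounding identifier,
    and bare defined names are replaced (my reading of the semantics)."""
    # `?name` concatenation first, so CubeOf?type -> CubeOfReal
    text = re.sub(r"\?(\w+)", lambda m: defines.get(m.group(1), m.group(0)), text)
    # then plain whole-word substitution, e.g. type -> Real, kind -> dp
    return re.sub(r"\b(\w+)\b", lambda m: defines.get(m.group(1), m.group(0)), text)

template = """Function CubeOf?type (x) Result (cube)
  type(Kind=kind) :: x, cube
  cube = x**3.0_?kind
End Function"""

print(expand(template, {"type": "Real", "kind": "dp"}))
```

Note that the interesting questions, the ones the fpp specification is silent on, start exactly here: what should happen when a replacement text itself contains a defined name, i.e. nesting and recursion.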
But there are several things that I don't at all enjoy about it:
1. It is written in Fortran! Who on earth said that a Fortran preprocessor needs to be written in Fortran? Aren't we aiming at language independence? For example, since CHARACTER variables cannot be ALLOCATABLE even in Fortran 90 (they are apparently treated differently from strings, i.e. arrays of characters), f90ppr has some
silly and very annoying limits on the length of files, the number of comment lines, etc. Also, in its naked form, it does not accept command-line arguments. I mean, why not do this right, in C++ or C?
2. It does not recognize the names of most intrinsics (just lack of time to type them all in, I guess).
3. Macros are expanded at one level only, with no multiline or variable-argument macros! Macro/define nesting and recursion with a specified recursion depth (or an equivalent macro looping construct), and especially macros with a variable number of arguments, enable writing some very elegant code that can make a whole library
out of a few source files. For example, say I want to make two functions, CubeOfReal and CubeOfInteger.
If I were allowed multiline macros, nested macros, and some other nifty macro features like varargin, this could be done very elegantly in something like:
# macro CubeOfType ( varargin (type, kind=sp)) begin
Function CubeOf?type (x) Result (cube)
type(Kind = kind) :: x, cube
cube=x**3.0_?kind
End function
#end macro
and then simply use:
CubeOfType( Real, sp) ! or just CubeOfType( Real)
to get a definition of x**3 for single-precision numbers. Or, if one has to do this for all possible kinds/types for several different functions:
#macro FunctionOfAllKinds ( function) begin
function?OfType (Real, sp)
function?OfType (Real, dp)
function?OfType (Complex, sp)
....
#end macro
and then just type:
FunctionOfAllKinds ( Cube) ! Fortran 90's own features can only partly achieve something like this.
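To be clear, neither "#macro ... begin" nor varargin exists in any current Fortran preprocessor; the syntax above is my invention. But the machinery is not hard. Here is a hypothetical Python sketch of multiline macros with default arguments and nested expansion bounded by a fixed depth (all names and the call syntax are made up for illustration):

```python
import re

MAX_DEPTH = 10  # bound on nested/recursive expansion, as argued above

# name -> (parameters, body); a (name, default) pair marks an optional argument
macros = {
    "CubeOfType": (["type", ("kind", "sp")],
                   "Function CubeOf{type} (x) Result (cube)\n"
                   "  {type}(Kind={kind}) :: x, cube\n"
                   "  cube = x**3.0_{kind}\n"
                   "End Function"),
    # a macro whose body consists of further macro calls -> nested expansion
    "FunctionOfAllKinds": (["f"],
                           "{f}OfType(Real, sp)\n{f}OfType(Real, dp)"),
}

def expand_call(m):
    name, argstr = m.group(1), m.group(2)
    if name not in macros:
        return m.group(0)  # not a macro: leave e.g. Real(Kind=sp) alone
    params, body = macros[name]
    args = [a.strip() for a in argstr.split(",")] if argstr.strip() else []
    env = {}
    for i, p in enumerate(params):
        pname, default = p if isinstance(p, tuple) else (p, None)
        env[pname] = args[i] if i < len(args) else default  # fill defaults
    return body.format(**env)

def expand(text):
    # repeatedly rewrite macro calls until a fixed point or the depth bound
    for _ in range(MAX_DEPTH):
        new = re.sub(r"(\w+)\(([^()]*)\)", expand_call, text)
        if new == text:
            return new
        text = new
    raise RuntimeError("macro recursion deeper than MAX_DEPTH")

print(expand("FunctionOfAllKinds(Cube)"))
```

Note how the depth bound answers precisely the question the fpp specification leaves open: recursion is allowed, but only to a stated depth, so a runaway definition is an error rather than an infinite loop.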
Yes, I know that this is a preprocessor that is in itself a small programming language. But nested macros and varargin already exist in some nifty preprocessors, like cpp and the literate-programming tool FWEB. So, instead of 20 people writing 20 different preprocessors with the same limited, simplistic functionality as fpp
or CoCo, why doesn't somebody work out a really nice proposal and write the preprocessor in standard, portable C or C++?
As another, more removed example: when solving a PDE on a 2D or 3D grid, there are many boundaries and corners that need to be coded separately. This can be done very well with the above features, in N dimensions (N must be known at compile time) and for all corners/boundaries. The compiler will then be able to compile
the resulting code efficiently.
Also, I point you to C++'s inline functions. Why not let the user make his own inlined procedures using the above tools?
....
4. It is not at all extensible. Say, in HPF, subroutine declarations start with the keyword (macro) EXTRINSIC(...), and this totally freaks out fpp and other f90 tools like f90doc (for HTML documentation). So it should be possible to add keywords, and maybe even some limited definitions of what to do with these keywords when they
appear (so that I can extend fpp to handle HPF constructs). Other possibilities for extending the base preprocessor should also be thought about.
5. One cannot indent code that has preprocessing directives in it. So, as I develop the code, I cannot indent it for a better visual overview, since that will invoke the preprocessor and execute the # instructions. I worked around this by defining my own comment symbol for fpp, !FPP$, which brings us back to point #1 on the positive side.
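The !FPP$ trick generalizes: hide directives behind a comment tag that Fortran tools ignore, and have the preprocessor strip the tag before interpreting the line. A minimal, hypothetical sketch (the tag handling and function name are mine, not any existing tool's):

```python
def unhide(line, tag="!FPP$"):
    """Strip a hiding comment tag, so '   !FPP$ #if ...' becomes '#if ...'.
    Indenters and compilers see only a comment; the preprocessor,
    running unhide() over each line first, sees a directive."""
    stripped = line.lstrip()
    if stripped.startswith(tag):
        return stripped[len(tag):].lstrip()
    return line  # ordinary code passes through untouched

print(unhide("   !FPP$ #if defined(HPF)"))
```

With this, the source file stays free to be re-indented at will, since every directive is syntactically just a comment until the preprocessor runs.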
OK, enough of my rambling, but I am really frustrated because we have such complex and beautiful *standardized* programming languages, yet no one seems to recognize the importance of such preprocessors (I should call them "code generators"; and I especially dislike the expression "conditional compilation", because it shows
that the functionality is mostly thought of as limited to #if statements),
Aleksandar
--
_____________________________________________
Aleksandar Donev
Physics Department
Michigan State University
East Lansing, MI 48824-1116
E-mail: [log in to unmask]
Work phone: (517) 432-6770
_____________________________________________