What I do now, with PVM which has the same difficulty as MPI, and for
the same reasons, is to make up arrays, one for each type and kind of
component in the structure. The extent of each array is the number of
components of that type and kind. Then I explicitly copy components
into the appropriate arrays, and send each array separately. On the
receiving side, I do the opposite. The code is ugly and fragile, but it
works. A tool to generate this code would be nice to have.
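For concreteness, the packing described above looks roughly like this. This is only a sketch: the derived type `something`, its components, the message tags, and the destination rank are all hypothetical stand-ins for whatever the real code uses.

```fortran
module pack_example
  use mpi
  implicit none

  ! Hypothetical structure: two default reals and one default integer.
  type :: something
     real    :: x, y
     integer :: n
  end type something

contains

  subroutine send_somethings(items, dest, comm)
    type(something), intent(in) :: items(:)
    integer, intent(in) :: dest, comm
    real    :: reals(2*size(items))   ! one array per type and kind,
    integer :: ints(size(items))      ! sized by the component count
    integer :: i, ierr

    ! Explicitly copy each component into the array for its type/kind.
    do i = 1, size(items)
       reals(2*i-1) = items(i)%x
       reals(2*i)   = items(i)%y
       ints(i)      = items(i)%n
    end do

    ! Send each array separately; the receiver does the opposite copies.
    call MPI_Send(reals, size(reals), MPI_REAL,    dest, 1, comm, ierr)
    call MPI_Send(ints,  size(ints),  MPI_INTEGER, dest, 2, comm, ierr)
  end subroutine send_somethings

end module pack_example
```

Every new component of the type means touching the copy loop, the array extents, and both ends of the transfer, which is why the code is fragile and why a generator tool would help.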
On Thu, 2010-12-16 at 18:30 -0800, Kurt W Hirchert wrote:
> The reason the MPI standard doesn't do something like this internally is
> that what you are doing doesn't work across the full range of
> environments the MPI standard was intended to support. What you are
> doing will (probably) work in homogeneous MPI environments (i.e., those
> where all the processors are the same kind of hardware), as is most
> common today, but will almost certainly fail in heterogeneous
> environments, as was common when the standard was first developed. The
> MPI routines are defined to transfer values, not bit patterns. If you
> are transferring integers between two heterogeneous processors, this
> might involve swapping bytes because of a difference in endianness or
> propagating the sign bit to account for a difference in word length. If
> any kind of transformation is done between processors, it is likely
> to mess up your TRANSFER. Even if no transformation is done on
> integers, you could have trouble if the kinds of values that actually
> make up a "something" have differing representations on the various
> types of processors.
>
> The MPI standard does have its own facilities for supporting structured
> types, but as I remember, they are a little bit clunky. Because the MPI
> library has no access to what the compiler knows about the composition
> of a structured type, it depends on you to make a series of calls to
> provide it that information. In your example, you would have to tell it
> that a "something" has three real components and the locations of those
> components relative to the beginning of the "something". (There are
> ways to use an actual "something" on each machine to compute these
> relative locations in a way that is independent of the kind of processor
> you are using.) Once you've told the library what a "something" looks
> like (on both the sending and receiving ends), you can tell MPI to
> transmit a "something" (or array of "somethings") and it will
> automatically decompose the "something" into values it knows how to
> transmit correctly. (I'm being intentionally a little bit sketchy here
> because I last looked at these facilities nearly a decade ago, and my
> memory of the details is far from complete.)
>
> [Sometimes working with the tools available means understanding how the
> tools were intended to be used instead of complaining about how poorly a
> screwdriver pounds in nails. :-) ]
>
> -Kurt
>
> P.S. Having said the above, I would be sympathetic to complaints that in
> today's environments, MPI should support a method for untransformed
> transfer of bits between homogeneous processors, and that there ought to
> be a tool that accepts the textual definition of a Fortran TYPE and
> generates the necessary MPI calls to inform the library of the structure
> of that TYPE.
>
> On 12/16/2010 2:30 PM, Ted Stern wrote:
> > Thanks, Steve, for both the workaround and the promise of an eventual
> > improvement.
> >
> > All I'm trying to do is get some data from a namelist and propagate it
> > throughout the MPI job.
> >
> > IMO the MPI standard should be able to do something like this
> > internally with Fortran derived types so you don't have to go through
> > these shenanigans just to pass data around.
> >
> > But MPI is somewhat primitive in that respect and has more of an f77/c
> > flavor. I don't like the size(transfer(...)) syntax either, but one
> > has to work with the tools available.
> >
> > Ted
> >
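[The "clunky" facilities Kurt describes are MPI's derived datatypes. A minimal Fortran sketch, assuming a hypothetical "something" with three default real components; as he suggests, the displacements are computed from an actual variable so they come out right regardless of the processor:]

```fortran
subroutine build_something_type(newtype)
  use mpi
  implicit none
  integer, intent(out) :: newtype

  ! Hypothetical structure with three real components.
  type :: something
     real :: a, b, c
  end type something

  type(something) :: dummy
  integer(kind=MPI_ADDRESS_KIND) :: base, disps(3)
  integer :: blocklens(3), types(3), ierr

  ! Use an actual "something" to compute the location of each
  ! component relative to the beginning of the structure.
  call MPI_Get_address(dummy,   base,     ierr)
  call MPI_Get_address(dummy%a, disps(1), ierr)
  call MPI_Get_address(dummy%b, disps(2), ierr)
  call MPI_Get_address(dummy%c, disps(3), ierr)
  disps = disps - base

  blocklens = 1
  types     = MPI_REAL

  ! Tell the library what a "something" looks like, then commit it.
  call MPI_Type_create_struct(3, blocklens, disps, types, newtype, ierr)
  call MPI_Type_commit(newtype, ierr)
  ! newtype can now be passed as the datatype argument of MPI_Send /
  ! MPI_Recv for a "something" or an array of "somethings".
end subroutine build_something_type
```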