> Maybe we should take this offline to save bandwidth on the
> mailing list?

At least one person asked for an encore :-).

> It's nice if the compiler can catch some obvious dependencies.
>
> I don't know what you meant by your last sentence.

The compiler will alert me if any modification of global variables has
slipped in from serial test versions, etc.  Squeezing the algorithm
into the harness of pure procedures goes a long way toward
parallelizability (theoretically, at least :-).

> Since you made the field PRIVATE, it must be allocated in a routine
> in the module that defines the type.  As long as you have declared
> the mapping of the array of these objects, they will be allocated
> correctly (i.e., on the correct processors) by the ALLOCATE
> statement.

We're getting somewhere: my concern was whether it suffices to declare
the mapping of the objects (which contain pointers) as a whole, or
whether I have to specify mappings _inside_ the module defining the
type as well.  In other words: if t is already mapped to processor p,
will allocate(t%x(100)) allocate t%x on p as well, or do I have to add
more HPF directives to ensure that?

If it doesn't waste too much bandwidth, here's a more complete example.
My question is essentially whether the footnote

  Caveat emptor: the scalability of this version has not been tested
  yet, because we don't have access to a reliable HPF compiler.  In
  particular, one might have to insert further HPF directives that
  distribute the array [[gs]] properly.  Furthermore [[vamp_fork_grid]]
  is not local and one might want to tune it to the processor topology.
  The gain will be very small, however.

is correct.

NB: `call vamp_create_grid (gs)' just executes nullify for the
components of gs(:), but `call vamp_fork_grid (g, gs, gx, d)' does a
lot of allocating.
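To put the mapping question into code before the real excerpt, here's
a minimal sketch (the names are made up, this is not VAMP code): an
array of objects with a PRIVATE pointer component is mapped as a
whole, and the component is allocated inside the defining module.

  module point_type
    implicit none
    private
    public :: point, create_point
    type point
       private
       real, dimension(:), pointer :: x
    end type point
  contains
    subroutine create_point (t, n)
      type(point), intent(inout) :: t
      integer, intent(in) :: n
      allocate (t%x(n))          ! does t%x land on the processor owning t?
    end subroutine create_point
  end module point_type

  program mapping_question
    use point_type
    implicit none
    type(point), dimension(1000) :: ts
  !HPF$ DISTRIBUTE ts(BLOCK)     ! only the array of objects is mapped
    integer :: i
    do i = 1, size (ts)
       call create_point (ts(i), 100)
    end do
  end program mapping_question

That is: is the single DISTRIBUTE directive on ts enough, or does
create_point need directives of its own?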
@ %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Practice}
In this section we show three implementations of~$S_n$: serial,
HPF~\cite{Koelbel/etal:1994:HPF} and MPI~\cite{MPI}.  From these
examples it should be obvious how to adapt VAMP to other parallel
computing paradigms.
@ %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsubsection{Serial}
Here is a bare-bones version of~$S_n$; the real implementation in
[[vamp_sample_grid]] includes some error handling, diagnostics and the
projection~$P$ (cf.~(\ref{eq:P})):
<<Serial implementation of $S_n=S_0(rS_0)^n$>>=
  type(tao_random_state), intent(inout) :: rng
  type(vamp_grid), intent(inout) :: g
  integer, intent(in) :: iterations
  <<Interface declaration for [[func]]>>
  integer :: iteration
  iterate: do iteration = 1, iterations
     call vamp_sample_grid0 (rng, g, func)
     call vamp_refine_grid (g)
  end do iterate
@ %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsubsection{HPF}
Here is the HPF version of~$S_n$.  Instead of one random number
generator state~[[rng]], it takes an array consisting of one state per
processor.  These [[rng(:)]] are assumed to be initialized such that
the resulting sequences are statistically independent.  For this
purpose, Knuth's random number generator~\cite{Knuth:1997:TAOCP2} is
most convenient and is included with VAMP.  Before each~$S_0$, the
procedure [[vamp_distribute_work]] determines a good decomposition of
the grid into [[size(rng)]] pieces.  This decomposition is encoded in
the array [[d]], where [[d(1,:)]] holds the dimensions along which to
split the grid and [[d(2,:)]] holds the corresponding numbers of
divisions.  Using this information, the grid is decomposed by
[[vamp_fork_grid]].  A good HPF compiler will then distribute the
[[!HPF$ INDEPENDENT]] loop among the processors.  Finally,
[[vamp_join_grid]] gathers the results.
<<Parallel implementation of $S_n=S_0(rS_0)^n$ (HPF)>>=
  type(tao_random_state), dimension(:), intent(inout) :: rng
  type(vamp_grid), intent(inout) :: g
  integer, intent(in) :: iterations
  <<Interface declaration for [[func]]>>
  type(vamp_grid), dimension(:), allocatable :: gs, gx
  integer, dimension(:,:), pointer :: d
  integer :: i, iteration, num_workers
  iterate: do iteration = 1, iterations
     call vamp_distribute_work (size (rng), vamp_rigid_divisions (g), d)
     num_workers = max (1, product (d(2,:)))
     if (num_workers > 1) then
        allocate (gs(num_workers), gx(vamp_fork_grid_joints (d)))
        call vamp_create_grid (gs)
        call vamp_fork_grid (g, gs, gx, d)
        !HPF$ INDEPENDENT
        do i = 1, num_workers
           call vamp_sample_grid0 (rng(i), gs(i), func)
        end do
        call vamp_join_grid (g, gs, gx, d)
        call vamp_delete_grid (gs)
        deallocate (gs, gx)
     else
        call vamp_sample_grid0 (rng(1), g, func)
     end if
     call vamp_refine_grid (g)
  end do iterate
@ Since [[vamp_sample_grid0]] performs the bulk of the computation, an
almost linear speedup\footnote{Caveat emptor: the scalability of this
  version has not been tested yet, because we don't have access to a
  reliable HPF compiler.  In particular, one might have to insert
  further HPF directives that distribute the array [[gs]] properly.
  Furthermore [[vamp_fork_grid]] is not local and one might want to
  tune it to the processor topology.  The gain will be very small,
  however.} with the number of processors can be achieved if
[[vamp_distribute_work]] finds a good decomposition of the grid.  The
version distributed with VAMP does a good job in most cases, but will
fail if the number of processors is a prime number larger than the
number of divisions in the stratification grid.  Therefore it can be
beneficial to tune [[vamp_distribute_work]] to specific hardware.
Furthermore, using a finer stratification grid can improve performance.
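To illustrate the encoding of the decomposition (this is not VAMP code,
just a hypothetical value of~[[d]]): splitting the grid along
dimensions~1 and~3 into two pieces each corresponds to
\begin{verbatim}
  d(1,:) = (/ 1, 3 /)   ! dimensions along which the grid is split
  d(2,:) = (/ 2, 2 /)   ! corresponding numbers of divisions
\end{verbatim}
so that [[product (d(2,:))]] $= 4$ subgrids are sampled in parallel.
On five processors with at most four divisions per dimension, however,
no decomposition into five pieces exists, which is the prime-number
failure mode mentioned above.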
@ %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsubsection{MPI}
The MPI version is lower level, because we have to keep track of
message passing ourselves.  Note that we have made the synchronization
points explicit with three [[if ... then ... else ... end if]] blocks:
forking, sampling, joining.  These blocks could be merged for almost no
performance gain at the expense of readability.  We assume that
[[rng]] has been initialized in each process such that the sequences
are again statistically independent.
<<Parallel implementation of $S_n=S_0(rS_0)^n$ (MPI)>>=
  type(tao_random_state), intent(inout) :: rng
  type(vamp_grid), intent(inout) :: g
  integer, intent(in) :: iterations
  <<Interface declaration for [[func]]>>
  type(vamp_grid), dimension(:), allocatable :: gs, gx
  integer, dimension(:,:), pointer :: d
  integer :: i, num_proc, proc_id, iteration, num_workers
  call mpi90_size (num_proc)
  call mpi90_rank (proc_id)
  iterate: do iteration = 1, iterations
     if (proc_id == 0) then
        call vamp_distribute_work (num_proc, vamp_rigid_divisions (g), d)
        num_workers = max (1, product (d(2,:)))
     end if
     call mpi90_broadcast (num_workers, 0)
     if (proc_id == 0) then
        allocate (gs(num_workers), gx(vamp_fork_grid_joints (d)))
        call vamp_create_grid (gs)
        call vamp_fork_grid (g, gs, gx, d)
        do i = 2, num_workers
           call vampi_send_grid (gs(i), i-1, 0)
        end do
     else if (proc_id < num_workers) then
        call vampi_receive_grid (g, 0, 0)
     end if
     if (proc_id == 0) then
        if (num_workers > 1) then
           call vamp_sample_grid0 (rng, gs(1), func)
        else
           call vamp_sample_grid0 (rng, g, func)
        end if
     else if (proc_id < num_workers) then
        call vamp_sample_grid0 (rng, g, func)
     end if
     if (proc_id == 0) then
        do i = 2, num_workers
           call vampi_receive_grid (gs(i), i-1, 0)
        end do
        call vamp_join_grid (g, gs, gx, d)
        call vamp_delete_grid (gs)
        deallocate (gs, gx)
        call vamp_refine_grid (g)
     else if (proc_id < num_workers) then
        call vampi_send_grid (g, 0, 0)
     end if
  end do iterate
@
-- 
Thorsten Ohl, Physics Department, TU Darmstadt -- [log in to unmask]
http://crunch.ikp.physik.tu-darmstadt.de/~ohl/  [<=== PGP public key here]
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%