Well, there are some problems with this and I really don't know what
you are trying to parallelize.  But let me say a few things:

1.  You can't call a pure subroutine inside a FORALL construct.  You
can call a PURE *function*, however.  (And a PURE function can call a
PURE subroutine.)
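
    A common workaround, then, is to wrap the subroutine in a function.
    Just as a sketch (the name wrap_p is made up, and p stands for your
    PURE subroutine taking a type(t) argument):

      pure function wrap_p (one_t) result (status)
        type(t), intent(in) :: one_t
        integer :: status
        call p (one_t)     ! legal: a PURE function may call a PURE subroutine
        status = 0         ! dummy result so the function fits in a FORALL
      end function wrap_p

    The FORALL then assigns the function result somewhere, which is what
    the example below does with the status array.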

2.  So you could write something like this:

program foo
  implicit none
  integer, parameter :: SIZE_TA = 100
  integer :: i
  integer, dimension(SIZE_TA) :: status

  type  t
     real, dimension(:), pointer :: x
  end type t

  type(t), dimension(SIZE_TA) :: ta
!hpf$ distribute(block) :: ta
!hpf$ align(:) with ta(:) :: status

  forall (i = 1:size(ta))
     status(i) = p(ta(i))
  end forall

contains
  pure function p (one_t)
    integer p
    type(t), intent(in) :: one_t
    p = 3
  end function p

end program foo

The Digital Fortran/HPF compiler will distribute the iterations of the
forall under these conditions.

Note that the way I have written this program, the pointer field x is
itself not mapped to more than one processor.  That is, all the
elements of ta(i)%x(:) live on one processor -- the one processor that
owns ta(i).  The different elements of ta (i.e., ta(1), ta(2), ...) are
distributed in a block fashion over the available processors.  (This
is as defined by the HPF language, not a peculiarity of the Digital
compiler.)
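
One consequence: the pointer components get allocated in an ordinary
loop, and each allocation simply happens on whichever processor owns
that element of ta.  A minimal sketch (N here is a made-up name for
whatever length you need):

  integer :: i
  do i = 1, SIZE_TA
     allocate (ta(i)%x(N))   ! ta(i)%x lives wholly with the owner of ta(i)
  end do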

I don't know if this is what you wanted to accomplish, or if you
really had in mind distributing the pointer field x.  In that case,
you would need to write the forall differently, so that it looped over
elements of x, rather than elements of ta.  But this would violate
Fortran rules -- you can't write something like ta(:)%x(i) in this
case.  You could, however, use an INDEPENDENT loop instead --
something like this (to go back to your original subroutine rather
than function):

program foo
  implicit none
  integer :: i, j

  type  t
     real, dimension(:), pointer :: x
  end type t

  type(t), dimension(100) :: ta
!hpf$ distribute(block) :: ta

! (assuming each ta(j)%x has already been allocated with 100 elements)
!hpf$ independent
  do j = 1, 100
     do i = 1, 100
!hpf$ on home(ta(j)%x(i)), resident
        call p (ta(j)%x(i))
     end do
  end do

contains
  pure subroutine p (one_t)
    real, intent(in) :: one_t
  end subroutine p

end program foo

But as I say, I don't really know whether any of this addresses your
real concern.  If you want, you can send me email with a more detailed
question, and I can look at it.

	      --Carl Offner

****************************************************************
Carl Offner
High Performance Fortran Compiler Development
Digital Equipment Corporation
129 Parker Street, PKO3-2/B12
Maynard, MA 01754-2198
USA

(978) 493-3051
[log in to unmask]
****************************************************************



>Date: Wed, 29 Apr 1998 16:01:09 +0200
>From: Thorsten Ohl <[log in to unmask]>
>
>I have a question for the HPF wizards out there:
>
>Assume that I have a derived type containing a POINTER component.
>This component is a pointer because the standard forbids ALLOCATABLE
>arrays as components, and it will only appear in ALLOCATE and DEALLOCATE
>statements, never in a pointer assignment (if that makes any
>difference).
>
>  type :: t
>    private
>    real, dimension(:), pointer :: x
>  end type t
>
>Assume further that I have an array of this derived type
>
>  type(t), dimension(SIZE_TA) :: ta
>
>and a PURE procedure
>
>  pure subroutine p (one_t)
>    type(t), intent(in) :: one_t
>  end subroutine p
>
>what will a typical (or a good) HPF compiler do with code like
>
>  forall (i = 1:size(ta))
>     call p (ta(i))
>  end forall
>
>Will it parallelize the code (assuming appropriate HPF$ directives) or
>will the pointer component prevent this?
>
>I hope that
>
>  type :: t
>    private
>    real, dimension(MAX_SIZE_X) :: x
>    integer :: size_x
>  end type t
>
>and replacing size(t%x) by t%size_x everywhere is _not_ the only
>solution ...
>
>I don't have a working HPF environment available.  I'm asking, because
>I want to make sure that a library that I have developed with MPI
>makes some sense with HPF as well.
>
>Thanks,
>-Thorsten Ohl
>-- 
>Thorsten Ohl, Physics Department, TU Darmstadt -- [log in to unmask]
>http://crunch.ikp.physik.tu-darmstadt.de/~ohl/ [<=== PGP public key here]
>