I am trying to find a way to configure code before compile time to determine the optimal loop vectorization size for the user's machine, and then (using the Fortran preprocessor) pick that value up and set the loop size accordingly. For example, on machine 1 nvec might be 64, while on machine 2 it might be 1024, and the code would read "do i=1,nvec" (obviously not quite that way). The question is whether there is a way to automatically obtain the optimal vector size on each machine (under Linux), or whether there is a better way to get the same result. Any suggestions are welcome!
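One possible configure-time approach, sketched below under some loud assumptions: on Linux, the CPU's SIMD flags in /proc/cpuinfo can be mapped to a register width, and that width (in double-precision elements) passed to the Fortran preprocessor as a macro. The name NVEC is hypothetical, and equating the hardware SIMD width with the "optimal" loop length is an assumption; in practice the best nvec is often a multiple of the register width and is worth tuning empirically.

```shell
#!/bin/sh
# Hypothetical configure script: map x86 SIMD feature flags to a vector
# register width in bits, then to a count of 8-byte (double precision)
# elements. ASSUMPTION: SIMD width is a reasonable default for nvec.

simd_bits() {
    # $1: space-separated CPU flag list (the "flags" line of /proc/cpuinfo)
    case " $1 " in
        *" avx512f "*)        echo 512 ;;   # AVX-512: 512-bit registers
        *" avx2 "*|*" avx "*) echo 256 ;;   # AVX/AVX2: 256-bit registers
        *" sse2 "*)           echo 128 ;;   # SSE2: 128-bit registers
        *)                    echo 64  ;;   # fallback: scalar 64-bit
    esac
}

# Read this machine's flags; empty string (and the 64-bit fallback) if
# /proc/cpuinfo is unavailable.
flags=$(grep -m1 '^flags' /proc/cpuinfo 2>/dev/null | cut -d: -f2)
bits=$(simd_bits "$flags")
nvec=$((bits / 64))   # 64-bit reals per vector register

# The value can then be handed to the Fortran preprocessor, e.g.
#   gfortran -cpp -DNVEC=$nvec mycode.F90
# and the source would use:   do i = 1, NVEC
echo "-DNVEC=$nvec"
```

On a machine without the expected flags the script degrades to NVEC=1, so the loop still compiles; whether a plain compile-time macro beats letting the compiler's auto-vectorizer choose the strip length is a separate question worth testing on each target.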

Naomi Greenberg
Member of the Research Staff
Riverside Research Institute
(212) 502-1718 (ph)
(212) 502-1729 (fax)
[log in to unmask]