On Fri, Mar 20, 2009 at 18:19, Michael Hanke <[log in to unmask]> wrote:
> That is right. There is no simple porting at all, since the whole
> approach is totally different. Having 800 shader units in a GPU, you
> would have to split a problem into 800 pieces to take full advantage of
> it. It is already non-trivial to refactor a codebase from single-thread
> logic to multi-threading (e.g. two or four parallel threads to harvest
> the capabilities of today's CPUs). Moreover, these shaders typically
> support single precision data only -- which is usually not the common
> datatype for scientific computing.
<great big tick!>
Mind you, there are usually chunks/functions in your code that are
amenable to this: typically not things like registration, but perhaps
things like SVD fits. I have fiddled about with this -- well, OK, not
me but a master's student whom I foisted it upon -- in MINC (arrrgh!
that must almost be a taboo word in here) for DTI, and the results are
very impressive in terms of speed. Still, as Michael says, given the
hardware dependencies it is perhaps not always the best or most
maintainable approach.
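To see why something like a per-voxel DTI fit maps well onto hundreds
of shader units, here is a hypothetical little sketch (plain Python,
with a stand-in "fit" function, not real MINC/DTI code): each voxel's
fit is independent of every other voxel's, so the volume splits
cleanly across as many workers as you have.

```python
# Hypothetical sketch: per-voxel fits (as in DTI) are independent,
# so a volume splits cleanly across many compute units.
from concurrent.futures import ThreadPoolExecutor

def fit_voxel(signal):
    # Stand-in for a per-voxel model fit (e.g. a least-squares tensor
    # fit); here just the mean of that voxel's measurements.
    return sum(signal) / len(signal)

def fit_volume_serial(voxels):
    return [fit_voxel(v) for v in voxels]

def fit_volume_parallel(voxels, n_workers=8):
    # No voxel depends on another, which is exactly the structure
    # a GPU's shader units exploit.
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        return list(pool.map(fit_voxel, voxels))

voxels = [[float(i), float(i) + 1.0] for i in range(1000)]
assert fit_volume_parallel(voxels) == fit_volume_serial(voxels)
```

Registration, by contrast, couples the whole image through one global
cost function, which is why it does not decompose this neatly.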
If you are keen on doing such a project, go for it! GPU programming is
a lot of fun, and you have a decent excuse to buy that card you have
always wanted... :) Perhaps join some of the others who are porting
parts of the GSL library to such hardware. Of course, you would then
focus on the parts of GSL that FSL might/could use.
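Coming back to Michael's single-precision caveat, a tiny illustration
of what you give up (plain Python, round-tripping through IEEE 754
float32 via struct, not any GPU-specific API):

```python
# Single precision keeps only ~7 significant decimal digits, so
# increments a double-precision code relies on can vanish entirely.
import struct

def to_float32(x):
    # Round-trip a Python float (double precision) through a
    # 4-byte IEEE 754 single-precision float.
    return struct.unpack('f', struct.pack('f', x))[0]

x = 1.0 + 1e-8               # easily representable in float64
assert x != 1.0              # float64 keeps the tiny increment
assert to_float32(x) == 1.0  # float32 rounds it away
```

Whether that matters depends entirely on the numerics of the chunk you
are offloading, which is one more reason to pick the candidate
functions carefully.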
a
|