Hi John,
Julia is certainly an interesting direction, especially since stable
versions are now available. If others have experience with it, or an
interest in it, I'm all ears.
Best regards,
Guillaume.
On 15/10/2019 18:44, [log in to unmask] wrote:
>> This would have to be optional as it would otherwise require
>> SPM users to have dedicated hardware and a license for the Parallel
>> Computing toolbox
>
> Alternatively, you could port SPM to Julia and use Julia's CUDA wrappers
> <https://devblogs.nvidia.com/gpu-computing-julia-programming-language/>,
> which don't require end-users to purchase a separate license. I'm aware
> that "[p]orting SPM to Julia would be a major investment" (as you
> rightly point out in your wikibooks textbook
> <https://en.wikibooks.org/wiki/SPM/MATLAB>),
> but I can only foresee the need for GPU acceleration in neuroimaging to
> grow, and I believe that requiring people to shell out more money to the
> MathWorks for something that other packages like BROCCOLI
> <https://github.com/wanderine/BROCCOLI> can do for free is not
> desirable. My 2 cents: porting SPM to Julia, while a big investment
> upfront, would pay off in the long term.
>
> On Wed, Oct 9, 2019 at 12:56 PM Flandin, Guillaume <[log in to unmask]
> <mailto:[log in to unmask]>> wrote:
>
> Dear Erdem,
>
> You are right that, so far, SPM does not take advantage of
> GPU-accelerated computations, but this is likely to change in future
> versions. This would have to be optional, as it would otherwise require
> SPM users to have dedicated hardware and a license for the Parallel
> Computing Toolbox - unless these are now so widespread that we can take
> them for granted?
> Which computations would you like to see accelerated? You mention model
> estimation (I assume of a GLM?): is that really the bottleneck of your
> pipeline? Chris Rorden shared a script recently illustrating speed
> improvements with gpuArray:
> https://github.com/andersonwinkler/PALM/issues/20#issuecomment-534552341
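>
> As a minimal sketch of the pattern (illustrative only, not taken from
> that script; it requires the Parallel Computing Toolbox and a supported
> NVIDIA GPU, and the variable names are made up for the example):
>
> ```matlab
> % Move data to the GPU, solve a GLM-style least-squares problem there,
> % and gather the result back to CPU memory.
> X  = rand(5000, 20);          % example design matrix on the CPU
> y  = rand(5000, 1);           % example data vector
> Xg = gpuArray(X);             % transfer to GPU memory
> yg = gpuArray(y);
> bg = Xg \ yg;                 % least-squares solve runs on the GPU
> b  = gather(bg);              % bring the estimates back to the CPU
> ```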
>
> Best regards,
> Guillaume.
>
>
> On 09/10/2019 09:19, Erdem Pulcu Ph.D. wrote:
> > Dear experts
> >
> > I am wondering if anyone has any experience with using MATLAB's GPU
> > acceleration functions for SPM? I am guessing these are not natively
> > built into SPM (i.e. SPM would not natively detect NVIDIA GPUs and run
> > things on gpuArrays), but it may be possible to implement by tweaking
> > the code. I am thinking that for large datasets and complex models
> > this approach could have huge time-saving benefits, and I would like
> > to know whether anyone has done it, how straightforward it is to
> > implement such changes, etc.
> >
> > Please let me know what you think.
> >
> > Erdem
>
> --
> Guillaume Flandin, PhD
> Wellcome Centre for Human Neuroimaging
> UCL Queen Square Institute of Neurology
> London WC1N 3BG
>
--
Guillaume Flandin, PhD
Wellcome Centre for Human Neuroimaging
UCL Queen Square Institute of Neurology
London WC1N 3BG