Dear Nadira,
In your previous message, you mentioned that your first-level model
contains about 100 sessions. How many scans do you have on average per
session?
You are probably correct that the slowdown is due to the large
(block-diagonal) covariance matrices that are stored within the SPM
structure. A quick way to verify this is to set 'serial correlations'
to 'none' during fMRI model specification and check whether estimation
then runs much faster. One way forward would be to improve spm_spm.m and
spm_est_non_sphericity.m to handle and store covariance matrices and
covariance components more efficiently. Otherwise, you could specify and
estimate one GLM per run (or per group of runs); if you then need
average effects across runs as summary statistics for a group-level
analysis, you could compute them with ImCalc.
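
For reference, the 'serial correlations' option corresponds (if I
remember the batch field names correctly) to the cvi field of the fMRI
model specification job; a minimal sketch, with paths, timing values
and conditions as placeholders to be replaced with your own:

```matlab
% Sketch: disable temporal autocorrelation modelling so that no large
% block-diagonal covariance components are estimated and stored.
matlabbatch{1}.spm.stats.fmri_spec.dir          = {'/path/to/output'}; % placeholder
matlabbatch{1}.spm.stats.fmri_spec.timing.units = 'secs';
matlabbatch{1}.spm.stats.fmri_spec.timing.RT    = 2;       % your TR
matlabbatch{1}.spm.stats.fmri_spec.cvi          = 'none';  % instead of 'AR(1)' or 'FAST'
% ... sessions, conditions, etc. as in your current job ...
spm_jobman('run', matlabbatch);
```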
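And to average the per-run contrast images into one summary image per
subject, an ImCalc batch along these lines (filenames are placeholders;
adjust the expression to the number of runs):

```matlab
% Sketch: average three run-wise contrast images with ImCalc.
matlabbatch{1}.spm.util.imcalc.input = {
    '/path/to/run1/con_0001.nii'
    '/path/to/run2/con_0001.nii'
    '/path/to/run3/con_0001.nii'
    };
matlabbatch{1}.spm.util.imcalc.output     = 'con_0001_mean.nii';
matlabbatch{1}.spm.util.imcalc.outdir     = {'/path/to/output'}; % placeholder
matlabbatch{1}.spm.util.imcalc.expression = '(i1+i2+i3)/3';      % mean over runs
spm_jobman('run', matlabbatch);
```

The resulting mean images can then be entered into the group-level
model in the usual summary-statistics fashion.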
Best regards,
Guillaume.
On 30/06/2021 19:52, Nadira Yusif wrote:
> Hello list,
>
> I was wondering what else could be done to run very large models faster in SPM12 besides changing the default maxmem and resmem options. Is there something else that helps with model estimation speed? I know that SPM outputs very large matrices, and I have been told that sparse matrices should speed up processing, but I do not know whether this is already implemented in the SPM machinery. Are there other defaults that I may be missing? I do not run models in interactive MATLAB sessions either, just through the command line.
>
> Thanks in advance!
>
> Nadira
>
--
Guillaume Flandin, PhD
Wellcome Centre for Human Neuroimaging
UCL Queen Square Institute of Neurology
London WC1N 3BG