> Thank you very much, that was really helpful. For now we tend not to average across subjects because they are stroke patients, but we will definitely need the information for future studies.
> I just have two more general questions on FIR and HRF modeling:
> 1. I have read that the time window of FIR typically spans the assumed full range of HRF, i.e. ~20 seconds after stimulus onset, but does it depend on whether I am using a block design versus event-related design? For example, here I have a block design of 30 seconds each (which is long), should I prolong the time window accordingly?
You only need to use the length of the HRF (~20 seconds), as the stimulus duration will superimpose enough HRFs to model the whole block. However, be aware that the influence of the HRF shape on the response to a block design is quite small compared with an event-related design. So you might have very little information to drive the estimation of many of the HRF components within the FIR, meaning that you are likely to have poor statistical power. For this reason it is uncommon to use an FIR basis with a block design. Something simpler, like a three-component basis set from FLOBS, is likely to be better conditioned and give you better statistical power.
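To see why a ~20 s window suffices, the point above can be sketched numerically: convolving a 30 s boxcar with a canonical-shaped HRF superimposes many shifted HRFs, so the block response plateaus and its shape is dominated by the boxcar rather than by individual HRF bins. This is only an illustration with a simple double-gamma stand-in for the HRF (the parameters are assumptions, not FSL's exact canonical HRF):

```python
import numpy as np
from scipy.stats import gamma

TR = 1.0                      # assumed repetition time (s)
t = np.arange(0, 30, TR)      # ~20-30 s covers the full HRF

# Illustrative double-gamma HRF (peak ~5 s, undershoot ~12 s); not FSL's exact shape
hrf = gamma.pdf(t, 6) - 0.35 * gamma.pdf(t, 12)
hrf /= hrf.max()

n = 120
block = np.zeros(n); block[10:40] = 1.0   # 30 s block, onset at 10 s
event = np.zeros(n); event[10] = 1.0      # single brief event at 10 s

# Convolution superimposes one HRF per second of stimulation
block_resp = np.convolve(block, hrf)[:n]
event_resp = np.convolve(event, hrf)[:n]

# The block response is several times larger and roughly plateau-shaped,
# so most of its variance reflects the boxcar, not the HRF fine structure
print(block_resp.max() / event_resp.max())
```

The same boxcar regressor therefore carries strong signal for a simple basis fit, while the many FIR bins each see only a small, heavily shared share of it.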
> 2. Is there a way to assess "goodness of fit" in FSL? What I have done is calculate the temporal standard deviation of the residuals and divide it by the temporal standard deviation of the filtered_func_data to look at how much variation is left over:
> fslmaths res4d.nii.gz -Tstd res4d_Tstd.nii.gz
> fslmaths filtered_func_data.nii.gz -Tstd filtered_func_Tstd.nii.gz
> fslmaths res4d_Tstd.nii.gz -div filtered_func_Tstd.nii.gz percentage_unexplained_variance.nii.gz
> I wonder if this is valid and whether there are better ways to do it.
This is a perfectly good way to assess the fit, and is what we would recommend. One small note: since -Tstd produces standard deviations, the ratio is the unexplained fraction of the standard deviation; square it if you want the unexplained fraction of the variance.
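For reference, the same per-voxel ratio computed by the fslmaths pipeline can be sketched in numpy on synthetic data (the data, design, and dimensions here are all made up for illustration; an intercept column is included so the residuals are mean-zero):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins: filtered_func_data reshaped to (time, voxels),
# and a small GLM design matrix with an intercept column
n_t, n_vox = 200, 50
data = rng.normal(size=(n_t, n_vox))
design = np.column_stack([np.ones(n_t), rng.normal(size=(n_t, 2))])

# Ordinary least-squares fit; res4d plays the role of FEAT's res4d.nii.gz
beta, *_ = np.linalg.lstsq(design, data, rcond=None)
res4d = data - design @ beta

# Equivalent of: fslmaths res4d -Tstd ... ; fslmaths data -Tstd ... ; -div
unexplained = res4d.std(axis=0) / data.std(axis=0)

# A fraction in [0, 1] per voxel; 1 - unexplained**2 is the explained-variance
# (R^2-style) fraction
print(unexplained.mean())
```

Squaring the ratio, as noted above, turns the standard-deviation fraction into a variance fraction.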
All the best,
> Thank you very much!