Thank you for your response.
I realize that a number of factors could be at play here, each contributing to varying degrees.

Our initial second-level model was conducted on 31 subjects, but subsequent tests of different modeling approaches have used a subset of 10. Our design has three sessions, and for some subjects there is considerable variability between sessions (an average con image was used in group analyses); however, many display no activation.

Between and within subjects there are some differences in signal intensity relating to the inhomogeneity I mentioned, but I cannot say I have a good sense of how significant this is. In terms of motion, there are certainly some subjects who move more than others, but for the most part the mean and maximum parameters are within the limits usually discussed here, and motion was regressed out where deemed problematic. Perhaps "reliable" was not the best word to use, as there have sadly been few studies that address this directly, but thermal pain generally produces quite a robust response.
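To put a number on the motion question, a common summary is Power-style framewise displacement computed from SPM's realignment parameters (the rp_*.txt files). A minimal sketch, assuming six columns (three translations in mm, three rotations in radians) and a 50 mm head radius to convert rotations to millimetres; the example values are made up:

```python
import numpy as np

def framewise_displacement(rp, radius=50.0):
    """Power-style framewise displacement from a 6-column
    realignment matrix (3 translations in mm, 3 rotations in
    radians); rotations are converted to arc length in mm."""
    motion = np.asarray(rp, dtype=float).copy()
    motion[:, 3:] *= radius                  # rotations -> mm
    return np.abs(np.diff(motion, axis=0)).sum(axis=1)

# Synthetic parameters for three volumes (illustration only):
rp = [[0.0, 0.0, 0.0, 0.000, 0.0, 0.0],
      [0.1, 0.0, 0.0, 0.001, 0.0, 0.0],
      [0.1, 0.2, 0.0, 0.001, 0.0, 0.0]]
fd = framewise_displacement(rp)
print(fd.mean(), fd.max())                   # mean and max FD in mm
```

Comparing mean and max FD across subjects would let you say quantitatively, rather than by eyeball, whether the high movers are outliers.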
-Drew
________________________________________
From: Watson, Christopher [[log in to unmask]]
Sent: Monday, March 10, 2014 9:53 PM
To: Sevel,Landrew S; [log in to unmask]
Subject: RE: fMRI Data Troubleshooting
It depends. How many subjects? First-level analyses aren't producing
reliable results? Maybe there's an issue with the experimental design. Maybe
there is too much motion artifact, or perhaps some hardware issue. What do
the raw BOLD images look like? How "reliable" is the task in other fMRI
studies?
________________________________________
From: SPM (Statistical Parametric Mapping) [[log in to unmask]] on behalf
of Sevel,Landrew S [[log in to unmask]]
Sent: Monday, March 10, 2014 9:29 PM
To: [log in to unmask]
Subject: [SPM] fMRI Data Troubleshooting
Hello all,
I'm looking for some advice on troubleshooting our data for potential issues that could be related to our finding no activation in first-level models (we have a significant behavioral effect that is pretty reliably associated with changes in activation relative to baseline).
Our initial approach was to model the first level with temporal/dispersion derivatives, motion regressors, and outlier removal via the ART toolbox. We subsequently tried a different masking approach, setting the masking threshold to -Inf and using an explicit whole-brain mask, to correct for possible inhomogeneity from our 32-channel coil (we found there were a number of voxels not included in the second-level mask image). We've also attempted to remove the regressors of no interest. None of these strategies has led to notable improvement. We're wondering if there may be some problems with our preprocessing approach.
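To illustrate why switching to an explicit mask can recover voxels, here is a small sketch with synthetic data showing how an implicit intensity mask (SPM's default keeps voxels above 0.8 of the grand mean) can drop dim voxels where coil sensitivity falls off; the volume and falloff here are made up for illustration:

```python
import numpy as np

# Synthetic "mean functional" volume: a slab of dim voxels mimics
# intensity falloff near the coil edge (values are made up).
rng = np.random.default_rng(0)
vol = rng.uniform(0.2, 1.0, size=(16, 16, 16))
vol[:2] *= 0.1                       # darkened slab near the "coil"

# Implicit mask as SPM applies it by default: keep voxels brighter
# than 0.8 of the grand mean of the image.
implicit = vol > 0.8 * vol.mean()

# Explicit whole-brain mask (here all-ones), as used with the
# masking threshold set to -Inf.
explicit = np.ones(vol.shape, dtype=bool)

lost = int(explicit.sum() - implicit.sum())
print(f"voxels dropped by the implicit threshold: {lost}")
```

The same comparison run on your real mean images (loaded with a NIfTI reader) would tell you how many in-brain voxels the implicit threshold was discarding before you moved to the explicit mask.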
Our data went through the following preprocessing pipeline: slice-time correction, realignment, normalization, and smoothing (6 mm FWHM). We've additionally confirmed the accuracy of our SOT (stimulus onset time) files.
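One further sanity check on the SOT files is that every onset plus duration falls inside the acquired run; a minimal sketch assuming onsets and durations in seconds (SPM's "secs" units), with made-up timing values:

```python
import numpy as np

def check_onsets(onsets, durations, n_scans, tr):
    """Return indices of events that start before the run begins or
    end after it finishes (onsets/durations in seconds)."""
    onsets = np.asarray(onsets, dtype=float)
    durations = np.asarray(durations, dtype=float)
    run_length = n_scans * tr
    bad = (onsets < 0) | (onsets + durations > run_length)
    return np.flatnonzero(bad)

# Made-up example: 200 scans at TR = 2 s -> 400 s run;
# the last event (395 s + 8 s) overruns the acquisition.
bad = check_onsets([10, 150, 395], [8, 8, 8], n_scans=200, tr=2.0)
print(bad)   # indices of problematic events
```

An event that overruns (or a file accidentally specified in scans rather than seconds) would flatten the design matrix in exactly the way that produces empty first-level maps.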
Are there any other avenues that would be worthy of pursuit?
Many thanks,
Drew