Just to add my 2 cents ---
I've not dealt with this myself, but I agree with Stephen's take on
this. One point to emphasize is that the more subjects you have, and
the more equally they are distributed across the two scanners, the
better. As Stephen says, the only way I think you'd be completely
stuck is if scanner were confounded with some other between-subjects
factor of interest (e.g., all/most controls on one scanner, all/most
patients on the other). But hopefully that's not the case.
I also think you need to include scanner effects in your second-level
models, although, as Stephen points out, with small numbers of subjects
it might be hard to interpret a null result.
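To make the "include scanner effects" suggestion concrete, here is a
minimal sketch of the design-matrix idea, not actual SPM code (in SPM
you'd add scanner as a covariate in the second-level GUI or job). The
subject counts, data, and the small scanner offset are all made up for
illustration:

```python
import numpy as np

# Hypothetical example: 12 subjects, 7 on scanner A and 5 on scanner B.
scanner = np.array([0] * 7 + [1] * 5)      # 0 = scanner A, 1 = scanner B

# Design matrix: a mean column (the effect of interest) plus a
# mean-centred scanner indicator as a nuisance regressor.
X = np.column_stack([
    np.ones_like(scanner, dtype=float),    # group mean
    scanner - scanner.mean(),              # scanner effect (nuisance)
])

# Simulated per-subject contrast values with a small scanner offset.
rng = np.random.default_rng(0)
y = 1.0 + 0.3 * scanner + rng.normal(0, 1, size=12)

# OLS fit: beta[0] is the mean effect adjusted for scanner.
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta)
```

Because the scanner column is mean-centred, the first beta keeps its
interpretation as the overall group mean while scanner differences are
soaked up by the second column.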
Finally, the following paper is based on structural scanning, but
might be of some interest:
Stonnington et al. (2008) Interpreting scan data acquired from
multiple scanners: A study with Alzheimer's disease. NeuroImage 39,
1180-1185. http://dx.doi.org/10.1016/j.neuroimage.2007.09.066
Best regards,
Jonathan
PS Regarding the implicit assumption that error variance is constant
across subjects, could this be addressed through a mixed-effects
analysis in SPM? See, e.g., Friston et al. (2005) Mixed-effects in fMRI
studies. NeuroImage 24, 244-252.
http://dx.doi.org/10.1016/j.neuroimage.2004.08.055
On Wed, May 26, 2010 at 1:24 PM, Stephen J. Fromm <[log in to unmask]> wrote:
> I think referees might not like it, if you do the right thing and are honest in your submission.
>
> That being said, if you have only one patient population, it's not clear to me why it's a problem. That is, if you're trying to show an effect is statistically significant, you're just averaging the results of the two scanners. There is of course the implicit assumption in the way SPM does things that error variance is fairly constant across subjects (I think FSL for example gets around this), but I don't think that assumption is that big a deal (meaning, the relevant stats are often fairly robust to violations of it).
>
> The problem would be if you're comparing two populations, and you don't have a good balance of both populations on the two scanners. The extreme case, which would make the results useless, would be if all the subjects of group A were on one scanner, and all the ones in group B on the other. More generally, one could check for a group-by-scanner interaction and claim that it's not statistically significant (though often the cells are small enough that that's weak evidence).
>
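Stephen's suggested check for a group-by-scanner interaction can also be
sketched outside SPM. This is an illustrative toy example, not SPM code:
the cell sizes, effect sizes, and data below are invented, and a real
analysis would put these regressors into the second-level model:

```python
import numpy as np
from scipy import stats

# Hypothetical 2x2 layout: group (control/patient) by scanner (A/B),
# with unbalanced cells as in the scenario discussed above.
group   = np.array([0] * 10 + [1] * 10)                  # 0 control, 1 patient
scanner = np.array([0] * 7 + [1] * 3 + [0] * 3 + [1] * 7)

g = group - group.mean()
s = scanner - scanner.mean()
X = np.column_stack([
    np.ones(20),   # intercept
    g,             # group main effect
    s,             # scanner main effect
    g * s,         # group-by-scanner interaction
])

# Simulated data with group and scanner main effects, no interaction.
rng = np.random.default_rng(1)
y = 1.0 + 0.5 * group + 0.2 * scanner + rng.normal(0, 1, size=20)

# OLS fit and a t-test on the interaction coefficient.
beta, res, rank, _ = np.linalg.lstsq(X, y, rcond=None)
dof = len(y) - X.shape[1]
rss = float(res[0]) if res.size else float(((y - X @ beta) ** 2).sum())
se = np.sqrt((rss / dof) * np.linalg.inv(X.T @ X).diagonal())
t_int = beta[3] / se[3]
p_int = 2 * stats.t.sf(abs(t_int), dof)
print(f"interaction t = {t_int:.2f}, p = {p_int:.3f}")
```

As Stephen cautions, with cells this small a non-significant interaction
is weak evidence of its absence, so the result should be reported with
that caveat rather than used to rule the interaction out.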