To what extent would FIX classifier performance suffer when applying a classifier to a different dataset (i.e. a test set) acquired with different acquisition parameters? I know that ideally a new classifier should be trained in such scenarios, and that FIX includes a small collection of pre-trained classifiers for those who like to take shortcuts, but I’m wondering how much differences in voxel dimensions, TR, and number of slices in particular may degrade FIX classifier performance.

The project I’m working on would benefit from being able to use the same classifier across multiple datasets whose voxel dimensions and number of slices differ slightly, but whose TR and other major acquisition parameters are roughly equivalent. Might performance benefit from spatially resampling the test set (i.e. to make it more similar to the training set) before applying the classifier? Or would the best solution simply be to train a classifier on examples from all datasets, even if their acquisition parameters differ?
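For context, the resampling idea can be sketched roughly as below. This is a hypothetical illustration only, not part of FIX: in practice one would use FSL's flirt/applywarp or a neuroimaging library with proper interpolation, but the sketch shows the basic idea of mapping a test volume acquired at one voxel size onto the training set's voxel grid (the function name and nearest-neighbour scheme are my own assumptions).

```python
import numpy as np

def resample_to_voxel_size(vol, src_vox, dst_vox):
    """Nearest-neighbour resample of a 3D volume from src_vox to dst_vox
    (voxel dimensions in mm). Hypothetical helper for illustration only;
    real pipelines should use proper interpolation (e.g. FSL flirt)."""
    # Target grid size: preserve the physical field of view.
    new_shape = tuple(int(round(n * s / d))
                      for n, s, d in zip(vol.shape, src_vox, dst_vox))
    # For each target voxel, pick the nearest source voxel along each axis.
    idx = [np.minimum((np.arange(m) * d / s).astype(int), n - 1)
           for n, m, s, d in zip(vol.shape, new_shape, src_vox, dst_vox)]
    return vol[np.ix_(*idx)]

# e.g. a 2.5 mm isotropic test volume resampled onto a 2 mm training grid
vol = np.random.rand(64, 64, 40)
resampled = resample_to_voxel_size(vol, (2.5, 2.5, 2.5), (2.0, 2.0, 2.0))
print(resampled.shape)  # (80, 80, 50)
```

Note that resampling changes voxel counts but not the underlying point-spread function or slice timing, so it may only partially homogenize the datasets.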
Department of Psychology | The University of Texas at Austin