Hi all,

I am seeking suggestions or advice on how to modify standard preprocessing steps in order to analyze an fMRI experiment that assesses drug-induced deactivation of brain regions. The protocol is to acquire standard whole-brain fMRI volumes continuously for 30 minutes, where the first 15 minutes are a pre-drug baseline and the following 15 minutes are post-injection of the drug. My hypothesis is that the baseline BOLD signal will shift down specifically in my regions of interest. Therefore, I would like to compare the mean BOLD signal for the 15 minutes post-injection versus the 15 minutes pre-injection. According to my lit search, this type of analysis is uncommon, though apparently not unprecedented, with standard fMRI. Clearly, fMRI is very susceptible to baseline artifacts (e.g. scanner drift, physiology, head position) that are very difficult to remove, and *may* be indistinguishable from the experimental manipulation itself.
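
To make the comparison concrete, this is roughly what I have in mind, in Python with nibabel/numpy (a minimal sketch; the file names are placeholders, and I am assuming motion-corrected data and a binary ROI mask):

import numpy as np
import nibabel as nib

func = nib.load("filtered_func_data.nii.gz").get_fdata()   # x, y, z, t
roi = nib.load("roi_mask.nii.gz").get_fdata() > 0          # binary ROI mask

n_vols = func.shape[-1]
half = n_vols // 2                     # injection at the midpoint of the run

roi_ts = func[roi].mean(axis=0)        # mean time series over ROI voxels
pre, post = roi_ts[:half], roi_ts[half:]

# percent signal change of post-injection relative to pre-drug baseline
psc = 100.0 * (post.mean() - pre.mean()) / pre.mean()
print("pre = %.2f, post = %.2f, shift = %.2f%%" % (pre.mean(), post.mean(), psc))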

So I have two questions:

1) What is the best way to remove scanner drift with this design? I have avoided voxelwise high-pass filtering, as I believe it could also remove the "shift" in activity in the regions I am interested in. My first thought was to remove the global drift component using single-session MELODIC but, surprisingly, no such component appeared; rather, the drift was spread across most of the other components. Now I am thinking of removing the average white-matter signal using linear regression instead (see the sketch below). Any thoughts?
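
In case it helps, this is the regression I am considering, again as a minimal sketch (plain OLS fit per voxel; "wm_mask.nii.gz" is a placeholder for an eroded white-matter mask):

import numpy as np
import nibabel as nib

func_img = nib.load("filtered_func_data.nii.gz")
func = func_img.get_fdata()
wm = nib.load("wm_mask.nii.gz").get_fdata() > 0

wm_ts = func[wm].mean(axis=0)          # average white-matter time series

# design: intercept + WM signal, fit to every voxel by least squares
X = np.column_stack([np.ones_like(wm_ts), wm_ts])
Y = func.reshape(-1, func.shape[-1]).T                 # time x voxels
beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
resid = Y - X @ beta

# add the intercept back so that mean signal levels are preserved
cleaned = (resid + beta[0]).T.reshape(func.shape)
nib.save(nib.Nifti1Image(cleaned, func_img.affine, func_img.header),
         "func_wm_regressed.nii.gz")

One obvious caveat: this assumes the drug effect does not itself show up in the white-matter average, otherwise the regression would remove part of the very effect I am looking for.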

2) Which is likely to be more powerful when comparing post-drug versus pre-drug: comparing the mean signal directly (in scanner units or as percent signal change), or modeling the shift in activity with a step regressor and extracting the beta weight? A sketch of what I mean by the latter is below.
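
For option two, the model I picture is just a step regressor at the injection time, e.g. (reusing roi_ts and half from the first sketch; plain OLS, ignoring temporal autocorrelation for simplicity):

import numpy as np

n_vols = roi_ts.shape[0]
step = np.zeros(n_vols)
step[half:] = 1.0                      # 0 pre-injection, 1 post-injection

X = np.column_stack([np.ones(n_vols), step])
beta, *_ = np.linalg.lstsq(X, roi_ts, rcond=None)
print("baseline = %.2f, post-drug shift (beta) = %.2f" % (beta[0], beta[1]))

If I am not mistaken, with plain OLS and no other regressors the step beta is algebraically identical to the difference of means, so any difference in power between the two approaches would come down to the nuisance regressors and noise model added to the GLM.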

Thanks all!

David