I am analyzing a simple n-back task arranged in a block design with two conditions of interest (2-back, 0-back), and I want to rigorously examine differences between these conditions (contrast [1, -1]) while controlling for differences in accuracy between them.
For example, fewer correct responses (as well as more misses and more errors) occurred in the 2-back condition than in the 0-back condition, and this performance difference may account for much of the difference between the conditions of interest.
My question is: how do I control for these extraneous differences between conditions in the first-level design? To illustrate, I have a column for the 2-back blocks and a column for the 0-back blocks in the first-level design matrix. I think I also need a third column for correct responses, to account for variation due to correct answers. But it also occurs to me that this might be better done with two columns: one for correct responses during 2-back and a separate one for correct responses during 0-back.
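To make the two alternatives concrete, here is a minimal numpy sketch of both design matrices. Nothing here is real data: the block timings, accuracy values, effect sizes, and noise level are all invented, and HRF convolution is omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)
n_scans = 100  # hypothetical number of volumes

# Toy boxcar regressors for the two task conditions
# (HRF convolution omitted for brevity).
two_back = np.zeros(n_scans)
zero_back = np.zeros(n_scans)
two_back[10:30] = two_back[50:70] = 1.0
zero_back[30:50] = zero_back[70:90] = 1.0

# Invented block-wise accuracy values, lower in the 2-back blocks,
# mean-centred across all task scans.
accuracy = np.zeros(n_scans)
accuracy[10:30], accuracy[50:70] = 0.55, 0.70  # 2-back blocks
accuracy[30:50], accuracy[70:90] = 0.90, 1.00  # 0-back blocks
task = (two_back + zero_back) == 1
accuracy[task] -= accuracy[task].mean()

# Option A: one shared accuracy regressor-of-no-interest.
X_single = np.column_stack([two_back, zero_back, accuracy, np.ones(n_scans)])
c_single = np.array([1.0, -1.0, 0.0, 0.0])  # 2-back > 0-back

# Option B: accuracy split into one regressor per condition.
X_split = np.column_stack(
    [two_back, zero_back, accuracy * two_back, accuracy * zero_back,
     np.ones(n_scans)]
)
c_split = np.array([1.0, -1.0, 0.0, 0.0, 0.0])

# Ordinary least-squares fit of a simulated voxel time series.
y = 0.8 * two_back + 0.3 * zero_back + rng.normal(0.0, 0.5, n_scans)
beta_single, *_ = np.linalg.lstsq(X_single, y, rcond=None)
beta_split, *_ = np.linalg.lstsq(X_split, y, rcond=None)
print("2-back > 0-back, shared accuracy regressor:", c_single @ beta_single)
print("2-back > 0-back, split accuracy regressors:", c_split @ beta_split)
```

One caveat on the sketch: in the split version the accuracy regressors would usually be mean-centred within each condition so that the condition betas keep their interpretation as mean block responses; the sketch centres accuracy only once across all task scans, for brevity.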
So, in general terms: when a regressor-of-no-interest varies systematically between conditions of interest, is it sufficient to include a single regressor to capture that variation, or is it necessary to include separate regressors-of-no-interest for the events in each condition of interest?
I imagine the question of how to analyze an n-back task has been done to death, but as I'm new to this task, I don't know where those discussions have taken place. Any pointers would be appreciated.