Dear Tom,
"If I understand correctly, this should enable the model to account for greater neural activation for sections that are rated more pleasantly."
The PM can be used to account for differences between trials within a particular condition, either just to explain some variance or because you're really interested in the corresponding effect. Often people only test for linear relationships, though the true relationship might be completely different or more complex.
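To make that concrete, here is a small numpy sketch (not SPM code; the ratings are made up) of how a linear and a quadratic parametric modulator could be built from per-trial ratings, mean-centering each term as is conventional:

```python
import numpy as np

# Hypothetical per-trial pleasantness ratings for one condition.
ratings = np.array([4.0, 6.0, 5.0, 7.0, 3.0])

# A linear PM is the mean-centered rating vector; its beta captures how
# activation changes per rating unit, separate from the condition's mean response.
pm_linear = ratings - ratings.mean()

# A quadratic term (also mean-centered) would let the model pick up a
# more complex, nonlinear relationship between rating and activation.
pm_quadratic = pm_linear**2 - (pm_linear**2).mean()
```

Both regressors sum to zero, so they only model deviations around the condition's average response.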
You can also use the PM estimates to adjust for differences between conditions, i.e. to simulate activation levels: assume condition A has an average rating of 5 and condition B an average rating of 7. Differences between A and B might then be due to differences in ratings and/or to more fundamental differences. If there is a linear relationship between rating and activation, you can estimate the PM for one of the two conditions, e.g. for B, and then take the beta estimates from the standard regressor B minus 2 * the beta estimates from PM B; this should correspond to the simulated activation level for B at a rating of 5. You could then contrast this with A, i.e. the difference between A and B corrected for differences in activation due to the different average ratings. If you have more than one session it's going to be difficult to decide whether to use the PM estimates averaged across sessions or PM B1 for session 1, PM B2 for session 2, etc., and you might also get different results depending on whether you combine A with PM A or B with PM B.
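The arithmetic above can be written out with hypothetical beta values (the numbers are invented for illustration, in arbitrary units):

```python
# Hypothetical single-subject estimates.
beta_B = 2.0      # beta for the standard regressor of condition B
beta_pm_B = 0.3   # beta for B's linear PM (activation change per rating unit)
beta_A = 1.2      # beta for the standard regressor of condition A

mean_rating_A = 5.0
mean_rating_B = 7.0

# B's PM is mean-centered around B's average rating (7), so simulating a
# rating of 5 means stepping down 2 units along the estimated PM slope:
B_at_rating_5 = beta_B - (mean_rating_B - mean_rating_A) * beta_pm_B

# Difference between A and B corrected for the different average ratings:
corrected_diff = B_at_rating_5 - beta_A
```

With these numbers, `B_at_rating_5` comes out as 2.0 - 2 * 0.3 = 1.4, and the corrected difference as 1.4 - 1.2 = 0.2.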
"This is because all pieces of music were given the same pleasantness ratings."
This is a common problem, especially in experiments that consist of several rather short sessions. Simply put, for the corresponding session/condition there is no data to estimate the PM, so you have to discard the session. You can't substitute values like 0, because this would imply no change in activation when changing the PM, which need not be the case (you don't know whether activation would have changed had the rating underlying the PM changed). For consistency you should probably discard the whole session for all of the contrasts; otherwise some contrasts would be based on three sessions and others on two. This is suboptimal of course; maybe the best way is to discard the whole subject (stating that it was impossible to properly estimate the PM due to the response pattern). By the way, if almost all of the ratings have the same value you can estimate the PM, but the estimates can be expected to be very noisy, which wouldn't make much sense either.
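The degenerate case is easy to check for in advance; here is a numpy sketch (a hypothetical helper, not part of SPM) of why identical ratings leave nothing to estimate:

```python
import numpy as np

def pm_is_estimable(ratings):
    """Return False when the PM would be a zero vector: if all ratings
    are identical, mean-centering leaves nothing to estimate a slope from."""
    centered = np.asarray(ratings, dtype=float) - np.mean(ratings)
    return not np.allclose(centered, 0.0)

# All trials rated identically -> no PM possible, session must be discarded.
pm_is_estimable([5, 5, 5, 5])   # False
# A single deviating rating -> formally estimable, but expect a very noisy beta.
pm_is_estimable([5, 5, 5, 6])   # True
```

Running such a check per session and condition before model specification would flag exactly the sessions (or subjects) that have to be dropped.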
"parametrically modulated normal music > normal music"
Just to make sure: Do you really have two different single-subject models (PM yes/no) that you want to contrast?
Best,
Helmut