Hi everyone,
here is my concern:
in my experiment, I have 4 experimental conditions, which are modeled with 4 EVs in FSL and entered with three columns (onset, duration and "magnitude") in the Full model setup.
Our items are videos, but the videos are not all controlled for visual motion (this was impossible). For instance, some videos contain more motion than others. This may influence our results, particularly because some experimental conditions contain more motion than others.
What is the best way to control for visual motion at the item level? We have thought of two solutions, but we are not sure which is right:
- Creating a 5th EV containing all the items of the run, with a scaled value (between 0 and 1) in the 3rd column representing the quantity of motion in each item. If this is the better solution, do we have to orthogonalize this EV with respect to the other 4?
- Adding the scaled motion value directly in the third column of each experimental EV. Problem: the scaled value then becomes a parametric modulator, which we do not want; we just want to control for the motion disparity between items.
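In case it helps the discussion, here is a minimal sketch of how the first option (a 5th nuisance EV in FSL's 3-column format) could be prepared. All onsets, durations, motion scores, and the output filename are hypothetical placeholders. Mean-centring the motion scores before writing them out keeps this EV carrying only the item-to-item motion variation, so it acts as a nuisance regressor rather than duplicating the condition means captured by the 4 task EVs:

```python
# Sketch: build a 5th (nuisance) EV in FSL's 3-column format,
# one row per item, with mean-centred motion scores as the "magnitude".
# All values below are made-up examples, not real experimental data.

onsets    = [10.0, 35.0, 60.0, 85.0]   # item onsets in seconds (hypothetical)
durations = [5.0, 5.0, 5.0, 5.0]       # item durations in seconds (hypothetical)
motion    = [0.2, 0.9, 0.4, 0.7]       # scaled motion per item, 0-1 (hypothetical)

# Mean-centre the motion scores: the EV then models only the
# *variation* in motion across items, not their overall mean.
mean_motion = sum(motion) / len(motion)
centred = [m - mean_motion for m in motion]

# Write onset / duration / magnitude, tab-separated, one row per item,
# as expected by FSL's "Custom (3 column format)" EV option.
with open("motion_nuisance_ev.txt", "w") as f:
    for onset, dur, mag in zip(onsets, durations, centred):
        f.write(f"{onset:.2f}\t{dur:.2f}\t{mag:.4f}\n")
```

The resulting text file can then be loaded as a custom 3-column EV in the Full model setup alongside the 4 condition EVs.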
Thanks a lot for your help!
Mathieu
########################################################################
To unsubscribe from the FSL list, click the following link:
https://www.jiscmail.ac.uk/cgi-bin/webadmin?SUBED1=FSL&A=1