Hi FSL group,
I'm analyzing an emotional declarative memory task. I show subjects negative and neutral pictures and ask them to indicate whether each picture is negative or neutral. The pictures are shown in a pseudorandom order with a jittered ISI. I'm interested in the effects of memory (encoding) and emotion. My question is how to properly set up the models. Should I have 2 separate models (one for encoding and another for emotion), or is it best to combine everything into 1 model?
Option 1:
Model 1 (Encoding): I would have 7 EVs (EV 1 for all picture onsets and the remaining 6 for motion confounds). The contrast for encoding would be: [1 0 0 0 0 0 0]
Model 2 (Emotion): I would have 8 EVs (EV 1 for Negative pictures, EV 2 for Neutral pictures, and the rest for motion). The contrasts would be:
Neg-baseline (effect of negative pictures; sanity check): [1 0 0 0 0 0 0 0]
Neutral-baseline (effect of neutral pictures; sanity check): [0 1 0 0 0 0 0 0]
Negative-Neutral (effect of negative emotion): [1 -1 0 0 0 0 0 0]
Option 2:
Model (Encoding and Emotion): I would have 8 EVs (EV 1 for Negative pictures, EV 2 for Neutral pictures, and the rest for motion). The contrasts would be:
Encoding-baseline (effect of encoding): [0.5 0.5 0 0 0 0 0 0]
Neg-baseline (effect of negative pictures; sanity check): [1 0 0 0 0 0 0 0]
Neutral-baseline (effect of neutral pictures; sanity check): [0 1 0 0 0 0 0 0]
Negative-Neutral (effect of negative emotion): [1 -1 0 0 0 0 0 0]
Besides the loss of a degree of freedom for Option 2, is one option more valid than the other?
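For what it's worth, here is a toy numpy sketch (not FSL/FEAT code, and the regressors are hypothetical unconvolved indicator variables rather than real HRF-convolved EVs) of why the [0.5 0.5] contrast in the combined model estimates the mean picture response, i.e. essentially what the single-EV encoding model targets:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
# hypothetical indicator regressors: 1 when a negative/neutral picture is shown
neg = rng.integers(0, 2, n).astype(float)
neu = rng.integers(0, 2, n).astype(float)
# simulated signal: both conditions activate, negative more strongly
y = 2.0 * neg + 1.0 * neu + rng.normal(0, 0.1, n)

# Option 2's combined design: one column per condition plus an intercept
X2 = np.column_stack([neg, neu, np.ones(n)])
beta2, *_ = np.linalg.lstsq(X2, y, rcond=None)

# Option 1's encoding-only design: a single "any picture" column
X1 = np.column_stack([neg + neu, np.ones(n)])
beta1, *_ = np.linalg.lstsq(X1, y, rcond=None)

encoding = np.array([0.5, 0.5, 0]) @ beta2   # mean picture response, ~1.5 here
neg_vs_neu = np.array([1, -1, 0]) @ beta2    # emotion effect, ~1.0 here
print(encoding, neg_vs_neu, beta1[0])        # beta1[0] is also ~1.5
```

Both designs recover roughly the same average encoding effect, but only the combined model can also test the emotion contrast, which is the usual argument for Option 2.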