Hi, 

Thanks for this; I agree with your comments. I was just trying to understand why the results are so different from one way of modeling to another. You are right that, given the negative values in the custom files (the demeaned option), one would expect deactivation, but this is not so unreasonable if you think that the absence of the semantic feature should give you deactivation in the areas where the corresponding semantic content is processed.

I got confused because Worsley’s chapter in the book edited by Jezzard et al. (if I am reading it correctly) suggests demeaning intensity levels in this way, and also suggests that doing so should give results similar to the analysis with weights of 1, at least for the contrast testing the linear trend. However, his example is a bit different from this case.

We wanted to use the ratings in the analysis somehow, because they should give more sensitivity to variability across items. By categorizing the stimuli into low, medium, and high groups, we are ignoring the fact that some stimuli in the medium group overlap with the ratings of the high group. The ratings give us a continuous variable, but our grouping then assumes that every stimulus within a group has the same intensity, which is not the case (see the toy illustration below). So it seems reasonable to assume that somehow using the rating information in our analysis would improve sensitivity. Do you agree? But it is not clear what the best way to incorporate this information into the analysis would be. The rating scale is in itself arbitrary; we could have used any other scale to elicit judgments from people. So I do not think that using 7 EVs (one of the possibilities you suggest) would be the right thing to do. This would also reduce power, as few events would fall at each rating value.
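
To illustrate the grouping problem with made-up ratings (not our actual data):

# Made-up per-stimulus ratings (1-7 scale), not the real data.
groups = {
    "low":    [1.2, 1.5, 2.0],
    "medium": [3.8, 4.5, 5.2],
    "high":   [5.0, 6.1, 6.8],
}
for name, vals in groups.items():
    print(f"{name:6s} ratings span {min(vals):.1f}-{max(vals):.1f}")
# medium spans 3.8-5.2 and high spans 5.0-6.8: the groups overlap,
# so a 3-EV model treats a 5.2 'medium' item and a 5.0 'high' item
# as categorically different, while treating all items within a
# group as identical in intensity.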

Any more thoughts are welcome.


Silvia

On 20 Jul 2007, at 22:17, Christian Beckmann wrote:

Hi

In your case I suspect it's the second type of analysis you want. In principle, you could model this using 7 EVs, one per rating level (where you simply use a standard height of 1 for each EV!). This might not be the optimal analysis, though, because you might end up with very few events per EV, so that your design is not particularly efficient (you can check this by looking at the efficiency calculation in FEAT).
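
As a minimal sketch of what that would look like (assuming you have per-event onsets and durations in seconds plus integer rating levels; all values below are placeholders), you would split the events into seven three-column custom timing files, one per level:

# Placeholder events: (onset_sec, duration_sec, rating 1-7).
events = [(10.0, 2.5, 6), (18.0, 2.5, 2), (26.0, 2.5, 7)]  # etc.

# One FSL three-column file (onset, duration, height) per rating
# level, with the height fixed at 1 for every event.
for level in range(1, 8):
    with open(f"ev_rating{level}.txt", "w") as f:
        for onset, dur, rating in events:
            if rating == level:
                f.write(f"{onset:.2f}\t{dur:.2f}\t1\n")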

If you believe that you can validly group these different types into different classes, then you'd model this with 3 EVs, where again you'd model the height as a constant 1. Your first type of analysis is a hybrid way of doing the analysis: you're using 3 EVs because you believe you can validly group the stimuli, but then you end up using the ratings within an EV, which introduces assumptions about the relation between signal amplitude and difference in rating. E.g. if you explicitly put in 6 and 7, say, you effectively introduce the assumption that under the highest semantic category the raw intensity increases by 1/6 compared to the mean intensity measured during the events of type 6... The classification into 7 levels is purely categorical - or do you have reason to believe that the change in signal between event types 3 and 4 is equal to the change in intensity when you switch from event type 6 to event type 7?
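
To spell out the arithmetic that raw rating heights build into the model:

# With raw ratings as EV heights, the fitted response is forced to be
# proportional to the rating: each 1-point rating step implies the
# same absolute signal change but a different relative change.
for low, high in [(3, 4), (6, 7)]:
    print(f"rating {low} -> {high}: absolute step = {high - low}, "
          f"relative step = +{(high - low) / low:.1%}")
# rating 3 -> 4: absolute step = 1, relative step = +33.3%
# rating 6 -> 7: absolute step = 1, relative step = +16.7%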

The third way of doing this does not make sense to me at all - by de-meaning the 3 EVs by the global mean you end up with EV1 'on' events being modelled as negative changes (reductions in BOLD). I'd guess that the +1 0 +1 contrast there looks similar to the -1 0 1 contrast in your first analysis?
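
A quick numerical sketch of what de-meaning does to the heights (toy ratings, not your data):

# Toy ratings per condition; the overall mean here is ~4.08.
low, medium, high = [1.5, 2.0], [4.0, 4.5], [6.0, 6.5]
mean = sum(low + medium + high) / 6

for name, vals in [("low", low), ("medium", medium), ("high", high)]:
    print(name, [round(v - mean, 2) for v in vals])
# low [-2.58, -2.08]   <- 'on' events modelled as BOLD decreases
# medium [-0.08, 0.42] <- nearly flat, so little signal to estimate
# high [1.92, 2.42]

Because the low EV's heights are themselves negative, weighting that EV with +1 in a contrast acts like weighting unit-height events negatively - hence +1 0 +1 on the demeaned design should mimic -1 0 1 on the unit-height design.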

hope this helps
Christian



On 20 Jul 2007, at 19:07, Silvia Gennari wrote:

Hello,

I have a question about how to model a parametric design with some specific characteristics.

In a rapid event related design, we are presenting sentences that have been independently associated with semantic ratings. The stimuli can be grouped into low, medium, and high conditions (where the low condition is really the absence of the semantic feature and the other two are of medium and high intensity). Hopefully, only areas associated with the specific semantic feature investigated here should be sensitive to the manipulation. These are the areas that we are trying to identify.

We have modeled our pilot data from a few subjects in three different ways. I thought that these ways of looking at the data would turn out to be fairly similar, but I am now puzzled by some of the differences we got.

In the first analysis, we created three custom files for three EVs (low, medium, high), using the semantic ratings as weights (the ratings are on a scale from 1 to 7). Then we computed the contrast [-1 0 +1] to capture the linear trend, as well as other contrasts such as [0 -1 1] and [-1 1 0].

In the second analysis, we created similar custom files, but this time we assigned a weight of 1 to every event in each of the EVs (low, medium, high). Then again we calculated the contrasts as above.

In the third analysis, we demeaned the ratings (subtracting the overall mean rating from each individual rating) and put them in three EVs as above. This time, the “low” condition has negative values, the medium condition has values around 0, and the high condition has values around 1.
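
For concreteness, the three weighting schemes differ only in the height column of the three-column custom files (onset, duration, height); roughly like this, with placeholder onsets, durations, and ratings rather than our actual values:

# Placeholder events for one condition, e.g. the 'high' EV:
# (onset_sec, duration_sec, rating on the 1-7 scale)
high_events = [(12.0, 2.5, 6.0), (30.0, 2.5, 7.0), (55.0, 2.5, 6.5)]
overall_mean = 4.0  # mean rating across ALL events (placeholder)

def write_ev(path, events, weight):
    with open(path, "w") as f:
        for onset, dur, rating in events:
            f.write(f"{onset:.2f}\t{dur:.2f}\t{weight(rating):.3f}\n")

write_ev("high_raw.txt", high_events, lambda r: r)                      # analysis 1
write_ev("high_unit.txt", high_events, lambda r: 1.0)                   # analysis 2
write_ev("high_demeaned.txt", high_events, lambda r: r - overall_mean)  # analysis 3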

The first two analyses look fairly similar, say, for the first contrast [-1 0 +1]. There are some differences in the size of the activated areas, but a lot of overlap (as I expected). However, demeaning the ratings gives us something completely different (and not entirely meaningful, like lots of activity in the cerebellum), and I am not quite sure why this is. Are we applying the wrong logic?

Thanks

Silvia




____
Christian F. Beckmann
University Research Lecturer
Oxford University Centre for Functional MRI of the Brain (FMRIB)
John Radcliffe Hospital, Headington, Oxford OX3 9DU, UK.
[log in to unmask] http://www.fmrib.ox.ac.uk/~beckmann
tel: +44 1865 222551 fax: +44 1865 222717



Silvia Gennari
Department of Psychology
University of York
York, YO10 5DD
United Kingdom