Hi
I'm not suggesting that you should be scaling ratings into a 1-2
range, just that you need to think about what you currently impose on
the analysis when using category 'names' like 6 or 7... whether or
not scaling makes sense really depends on your scientific hypothesis.
cheers
Christian
On 24 Jul 2007, at 11:32, Silvia Gennari wrote:
> Hi, Christian,
>
> Again, thanks for this.
>
> I am thinking that we will not be able to use 7 EVs with the
> ratings, because we will not have enough events in each category.
> But I will try it anyway to check for efficiency.
>
> Maybe we can scale the ratings within each EV to values between 1
> and 2, as I think you are suggesting. Are you also implying that
> we could demean the ratings within each category?
>
> Silvia
>
> On 22 Jul 2007, at 13:00, Christian Beckmann wrote:
>
>> Hi
>>
>> there are separate issues wrt modelling: if you assume that there
>> are de-activations in the data you still need to make sure that
>> you use the right contrasts. In your case you might choose to
>> model the events in class 1 with negative amplitude - but if you
>> then want to estimate a linear increase across the low, medium
>> and high amplitude conditions, you can't do this with the -1 0 +1
>> contrast. So overall I don't think that modelling 'on' periods
>> with negative amplitudes is useful.
>>
>> The most important reason why your third type of analysis does not
>> make sense to me is that I think you de-mean the 3 EVs by their
>> global mean - not by their individual means. This will not give
>> you sensible results: the data has been de-meaned, so each of the
>> separate EVs also needs to have mean 0. Otherwise almost any
>> linear combination of these EVs will have non-zero mean and will
>> not fit the data correctly. The question of positivity is
>> secondary - that could be sorted out later using the appropriate
>> contrasts, though it's easiest to stick with modelling all effects
>> as having a positive change over baseline.
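[Editorial aside: the per-EV vs. global demeaning point can be sketched numerically. The rating weights below are made up for illustration; only numpy is assumed.]

```python
import numpy as np

# Hypothetical per-event rating weights for three EVs (low, medium, high)
low = np.array([1.0, 1.0, 2.0])
med = np.array([3.0, 4.0, 4.0])
high = np.array([6.0, 7.0, 7.0])

# De-meaning by the GLOBAL mean leaves each EV with a non-zero mean,
# so linear combinations of the EVs cannot fit data that has mean 0.
g = np.concatenate([low, med, high]).mean()
print([round(float((ev - g).mean()), 2) for ev in (low, med, high)])
# -> [-2.56, -0.22, 2.78]: none of the three EVs has mean 0

# De-meaning each EV by its OWN mean gives every EV mean 0, so any
# linear combination of them is also mean 0 and can fit de-meaned data.
print([abs(float((ev - ev.mean()).mean())) < 1e-12 for ev in (low, med, high)])
# -> [True, True, True]
```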
>>
>> Wrt using the rating information: yes, if you believe that these
>> ratings contain valuable information you'd want to use it -
>> though I don't think that fixing the relative amplitudes to the
>> arbitrary numbering of the ratings necessarily makes sense,
>> unless you truly believe that a change in stimulus from level 2 to
>> 3 gives a 50% increase in BOLD while a change from event type 6 to
>> 7 only gives a 16.6% increase. The numbering of feature levels
>> should be independent of the numerical assumptions you want to
>> make about the amount of BOLD increase, i.e. it is reasonable to
>> group into your 3 EVs but then use 1 for the lowest level within
>> each group and maybe 1.2 for the second lowest level within each
>> EV - that way you assume a 20% increase in BOLD.
>> In general, the most flexible way of using this information is to
>> use separate EVs (7 in your case) of equal height 1 (i.e. your
>> arbitrary scale becomes unimportant), though you create a less
>> efficient design if the number of events per level is low.
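[Editorial aside: the within-EV rescaling idea can be sketched as below. The function name, the 20% step, and the example ratings are illustrative assumptions, not anything prescribed by FEAT.]

```python
import numpy as np

def rescale_within_ev(ratings, low_amp=1.0, step=0.2):
    """Map the distinct rating levels inside one EV onto amplitudes
    1.0, 1.2, 1.4, ... - i.e. an explicit, chosen assumption of a 20%
    BOLD increase per level, decoupled from the arbitrary 1-7 labels."""
    levels = np.unique(ratings)  # sorted distinct levels in this EV
    amp = {lev: low_amp + i * step for i, lev in enumerate(levels)}
    return np.array([amp[r] for r in ratings])

# e.g. a 'high' EV whose events were rated 6 or 7 (made-up data):
print(rescale_within_ev(np.array([6, 7, 6, 7, 7])).tolist())
# -> [1.0, 1.2, 1.0, 1.2, 1.2]
```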
>> cheers
>> christian
>>
>>
>>
>> On 22 Jul 2007, at 11:42, Silvia Gennari wrote:
>>
>>> Hi,
>>>
>>> Thanks for this, I agree with your comments. I was just trying
>>> to understand why the results are so different from one way of
>>> modeling to another. You are right that by getting negative
>>> values in the custom files (the demeaned option), one would
>>> expect deactivation, but this is not so unreasonable if you
>>> think that the absence of the semantic feature should then give
>>> you deactivation in the areas in which the corresponding semantic
>>> activity is processed.
>>>
>>> I got confused because Worsley’s chapter in the book edited by
>>> Jezzard et al. (if I am reading it correctly) suggests demeaning
>>> intensity levels in this way, and also suggests that doing this
>>> should be similar to the analysis with the weight of 1, at least
>>> for the contrast looking for the linear trend. However, his
>>> example is a bit different from this case.
>>>
>>> We wanted somehow to use the ratings in the analysis because it
>>> should give you more sensitivity to capture variability across
>>> items. By categorizing the stimuli into low, medium and high
>>> groups, we are ignoring the fact that the ratings of some stimuli
>>> in the medium group overlap with those of the high group. The
>>> ratings give us a continuous variable, but our grouping assumes
>>> that every stimulus within a group has the same intensity, which
>>> is not the case. So it seems reasonable to think that using the
>>> information of the ratings in our analysis would improve
>>> sensitivity. Do you agree? But it is not clear what would be the
>>> best way to incorporate this information into the analysis. The
>>> ratings scale is in itself arbitrary. We could have used any
>>> other scale to extract judgments from people. So I do not think
>>> that using 7 EVs (one of the possibilities you suggest) would be
>>> the right thing to do. This would also reduce power, as few
>>> events would fall into each rating level.
>>>
>>> Any more thoughts are welcome.
>>>
>>>
>>> Silvia
>>>
>>> On 20 Jul 2007, at 22:17, Christian Beckmann wrote:
>>>
>>>> Hi
>>>>
>>>> In your case I suspect it's the second type of analysis you
>>>> want. In principle, you could model this using 7 EVs, one per
>>>> rating level (where you simply use a standard height of 1 for
>>>> each EV!). This might not be the optimal analysis, though,
>>>> because you might end up with very few events per EV, so that
>>>> your design is not particularly efficient (you can check this by
>>>> looking at the efficiency calculation in FEAT).
>>>>
>>>> If you believe that you can validly group these different types
>>>> into different classes then you'd model this with 3 EVs where
>>>> again you'd model the height as a constant 1. Your first type of
>>>> analysis is a hybrid: you're using 3 EVs because you believe you
>>>> can validly group, but then you use the ratings within each EV,
>>>> which introduces assumptions about the relation between signal
>>>> amplitude and difference in rating. E.g. if you explicitly put
>>>> in 6 and 7, say, you effectively introduce the assumption that
>>>> under the highest semantic category the raw intensity increases
>>>> by 1/6 compared to the mean intensity measured during the events
>>>> of type 6... the classification into 7 levels is purely
>>>> categorical - or do you have reason to believe that the change
>>>> in signal between event types 3 and 4 equals the change in
>>>> intensity when you switch from event type 6 to event type 7?
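[Editorial aside: the implied step sizes are easy to tabulate with plain arithmetic, just to spell out the 50% vs. ~16.7% point.]

```python
# Using raw rating labels as amplitudes implies that moving from level
# k to level k+1 multiplies the signal by (k+1)/k, i.e. a fractional
# increase of 1/k - large at the bottom of the scale, small at the top.
for k in range(1, 7):
    print(f"level {k} -> {k + 1}: {100 * (1 / k):.1f}% implied increase")
# level 1 -> 2: 100.0% ... level 2 -> 3: 50.0% ... level 6 -> 7: 16.7%
```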
>>>>
>>>> The third way of doing this does not make sense to me at all -
>>>> by de-meaning the 3 EVs by the global mean you end up with the
>>>> EV1 'on' events being modelled as negative changes (a reduction
>>>> in BOLD). I'd guess that the +1 0 +1 contrast looks similar to
>>>> the -1 0 1 contrast in your first analysis?
>>>>
>>>> hope this helps
>>>> Christian
>>>>
>>>>
>>>>
>>>> On 20 Jul 2007, at 19:07, Silvia Gennari wrote:
>>>>
>>>>> Hello,
>>>>>
>>>>> I have a question about how to model a parametric design with
>>>>> some specific characteristics.
>>>>>
>>>>> In a rapid event related design, we are presenting sentences
>>>>> that have been independently associated with semantic ratings.
>>>>> The stimuli can be grouped into low, medium and high stimuli
>>>>> (where the low condition is really the absence of the semantic
>>>>> feature and the other two are of medium and high intensity).
>>>>> Hopefully, only areas that are associated with the specific
>>>>> semantic feature investigated here should be sensitive to the
>>>>> manipulation. These are the areas that we are trying to identify.
>>>>>
>>>>> We have modeled our pilot data from a few subjects in three
>>>>> different ways. I thought that these ways of looking at the
>>>>> data should turn out to be fairly similar, but I am puzzled now
>>>>> by some differences that we got.
>>>>>
>>>>> In the first analysis, we created three custom files with three
>>>>> EVs (low, medium, high), putting the semantic ratings as weights
>>>>> (the ratings are in a scale from 1 to 7). Then we computed the
>>>>> contrast [-1 0 +1], to capture the linear trend, as well as
>>>>> other contrasts such as [0 -1 1], [-1 1 0].
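[Editorial aside: a sketch of what such a contrast estimates, as a toy GLM in numpy rather than FEAT itself; the design, effect sizes and noise level are all invented.]

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy design: 10 'low', 10 'medium', 10 'high' events, one indicator
# EV per group, each with height 1 (as in the second type of analysis).
X = np.kron(np.eye(3), np.ones((10, 1)))       # 30 x 3 design matrix
beta_true = np.array([1.0, 2.0, 3.0])          # a genuine linear trend
y = X @ beta_true + 0.1 * rng.standard_normal(30)

beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
c = np.array([-1.0, 0.0, 1.0])                 # the [-1 0 +1] contrast
print(c @ beta_hat)                            # close to 2.0 (high minus low)
```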
>>>>>
>>>>> In the second analysis, we created similar custom files, but
>>>>> this time we assigned a weight of 1 to each of the EVs (low,
>>>>> medium, high). Then again we calculated the contrasts as above.
>>>>>
>>>>> In the third analysis, we demeaned the ratings (subtracting the
>>>>> overall mean rating from each individual rating) and put them
>>>>> on three EVs as above. This time, the “low” condition has
>>>>> negative values, the medium has values around 0 and the high
>>>>> has values around 1.
>>>>>
>>>>> The first two analyses look fairly similar, say, for the first
>>>>> contrast [-1 0 +1]. There are some differences in the size of
>>>>> the activated areas but a lot of overlap (as I expected).
>>>>> However, demeaning the ratings gives us something completely
>>>>> different (and not entirely meaningful, like lots of activity
>>>>> in the cerebellum) and I am not quite sure why this is. Are we
>>>>> applying the wrong logic?
>>>>>
>>>>> Thanks
>>>>>
>>>>> Silvia
>>>>>
>>>>>
>>>>>
>>>>
>>>> ____
>>>> Christian F. Beckmann
>>>> University Research Lecturer
>>>> Oxford University Centre for Functional MRI of the Brain (FMRIB)
>>>> John Radcliffe Hospital, Headington, Oxford OX3 9DU, UK.
>>>> [log in to unmask] http://www.fmrib.ox.ac.uk/~beckmann
>>>> tel: +44 1865 222551 fax: +44 1865 222717
>>>>
>>>>
>>>
>>> Silvia Gennari
>>> Department of Psychology
>>> University of York
>>> York, YO10 5DD
>>> United Kingdom
>>>
>>>
>>>
>>
>
>