Hi Regina,
> Dear all,
>
> I am currently trying to decide whether or not to use the automatic
> outlier de-weighting option in my higher-level analyses. I have just
> run FLAME 1+2 on 24 subjects. I am interested in this approach
> because, looking at the individual-level analysis outputs, I observed
> that a couple of participants (2/24) "look quite different", e.g.,
> they show deactivations in response to painful stimulation instead of
> activations. What I would like to do is determine the extent to which
> those deviations (deactivations instead of activations) would qualify
> as "outlier deviations", and if so, decrease the impact they might
> have on the group statistics. I haven't been able to find any errors
> in the stimulus timing files, or any uncorrected motion, that could
> explain these widespread deactivations - hence my keeping the
> subjects in the group analyses - so I thought perhaps the outlier
> de-weighting could be a good way to go, if they are indeed outliers.
>
> Here are my questions:
>
> 1) Based on http://www.fmrib.ox.ac.uk/fsl/feat5/detail.html#higher ,
> it seems that FLAME 1+2 would give an indication of whether there are
> outliers in the data, correct? I don't think the images look
> "speckled", so based on the higher-level images my inclination would
> be to believe there are none, but is there a more
> quantitative/objective way to get at that? What type of information
> should I be looking for in the FEAT log files?
It is not objective, but as well as looking at the first-level effect
sizes/copes (as you already have), you can also look at the
first-level varcopes. An easy way to do this, if you have already run
a group analysis, is to look in the group FEAT directories at
var_filtered_func_data (this file contains the lower-level varcopes,
one volume per subject). This is the variance information that gets
used in FLAME 1/2 but not in OLS. Of course, the outlier approach
itself is intended to be the quantitative/objective way of looking for
outliers.
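As a rough, informal check (this is just a sketch of a common robust
heuristic, not anything FEAT does itself), you could reduce each
subject's varcope volume to a single summary number and flag subjects
that sit far from the group median. The values below are made up for
illustration:

```python
# Hypothetical sketch: flag subjects whose first-level variance summary
# is unusually large, using a robust median/MAD rule. The per-subject
# values here are invented; in practice one summary value per subject
# could be extracted from var_filtered_func_data in the group FEAT
# directory (one 3D volume per subject).
import statistics

def flag_high_variance(varcope_means, n_mads=3.0):
    """Return indices of subjects whose summary varcope value exceeds
    median + n_mads * MAD (median absolute deviation)."""
    med = statistics.median(varcope_means)
    mad = statistics.median(abs(v - med) for v in varcope_means)
    threshold = med + n_mads * mad
    return [i for i, v in enumerate(varcope_means) if v > threshold]

# 24 invented per-subject varcope summaries; the last two are inflated.
vals = [1.0, 1.1, 0.9, 1.2, 1.0, 0.95, 1.05, 1.1, 0.9, 1.0, 1.15,
        0.85, 1.0, 1.1, 0.9, 1.0, 1.05, 0.95, 1.1, 1.0, 0.9, 1.2,
        4.5, 3.8]
flagged = flag_high_variance(vals)  # -> [22, 23]
```

Again, this is only an exploratory screen; the model-based outlier
inference remains the principled way to handle such subjects.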
> 2) Is using Flame 1+2 *and* automatic outlier de-weighting redundant
> in any way?
FLAME 1/2 uses the lower-level variance information to effectively
down-weight subjects with high first-level variance. So outlier
subjects that have high first-level variance can be dealt with by
FLAME 1/2, obviating the need for them to be inferred as outliers by
the automatic outlier de-weighting - whereas if OLS were being used,
the outlier inference might need to kick in instead to deal with them.
However, not all outlier subjects have high first-level variance, so
the outlier inference can still have an important part to play when
FLAME 1/2 is being used.
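The intuition can be shown with a toy comparison (this is a
deliberately simplified sketch, not the actual FLAME algorithm - among
other things it ignores the between-subject variance component): an
inverse-variance-weighted group mean is pulled far less by a
high-variance deviant subject than the plain OLS mean is.

```python
# Toy illustration of variance-based down-weighting (not real FLAME):
# the last subject deactivates (negative cope) but also has a large
# first-level varcope, so inverse-variance weighting largely ignores it.
def ols_mean(copes):
    """Plain OLS group mean: every subject weighted equally."""
    return sum(copes) / len(copes)

def ivw_mean(copes, varcopes):
    """Inverse-variance-weighted mean: weight each subject by 1/varcope."""
    weights = [1.0 / v for v in varcopes]
    return sum(w * c for w, c in zip(weights, copes)) / sum(weights)

copes    = [2.0, 2.2, 1.8, 2.1, -3.0]   # last subject "deactivates"
varcopes = [0.5, 0.5, 0.5, 0.5, 10.0]   # ...with high first-level variance

print(ols_mean(copes))            # 1.02  (dragged down by the outlier)
print(ivw_mean(copes, varcopes))  # ~1.96 (outlier largely down-weighted)
```

If the deviant subject had a *low* first-level varcope, the weighted
mean would be pulled just as much as OLS - which is exactly the case
where the separate outlier inference still earns its keep.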
> 3) Are there any concerns with adopting the outlier de-weighting
> option? In particular, taking into consideration that: a) I have only
> 24 subjects, and b) I'm primarily interested in looking at individual
> differences in this paradigm?
No concerns - other than the extra computation time. The approach is
conservative and will default to non-outlier behaviour (i.e. assuming
the error is purely Gaussian) if there is insufficient evidence of
outliers in the data - including the case where there are too few
subjects in the group to reliably identify outlier behaviour.
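The "conservative default" point can be sketched with a toy robust
estimator (again, an invented illustration, not FLAME's actual
mixture-model inference): when the group data show no clear outliers,
a robust down-weighting rule flags nothing, and the de-weighted group
mean reduces to the ordinary, purely Gaussian estimate.

```python
# Toy sketch: with homogeneous data, a MAD-based down-weighting rule
# keeps every subject at full weight, so the "robust" group mean is
# identical to the plain mean - i.e. the method defaults to ordinary
# Gaussian behaviour when there is no evidence of outliers.
import statistics

def deweighted_mean(copes, n_mads=3.0):
    med = statistics.median(copes)
    mad = statistics.median(abs(c - med) for c in copes)
    # keep full weight unless a subject lies beyond median +/- n_mads*MAD
    kept = [c for c in copes if abs(c - med) <= n_mads * mad]
    return sum(kept) / len(kept)

copes = [1.9, 2.1, 2.0, 2.2, 1.85, 2.05, 1.95, 2.1]  # no clear outliers
# nothing is flagged, so this equals the plain mean of the group
print(deweighted_mean(copes))        # 2.01875
print(sum(copes) / len(copes))       # 2.01875
```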
Cheers, Mark.