Hi
On 18 Dec 2006, at 14:40, Mervi Eerola wrote:
> On Mon, 18 Dec 2006, Tim Behrens wrote:
>
>> Hi
>>>
>>>
>>> 1. What is the difference in using -c 2 or -c 3 in randomise?
>>> If this is related to the cluster size, as you write, how does
>>> 2 differ from 3? What is the unit?
>>>
>>
>> the -c option is the cluster-forming threshold. The units are units
>> of "t": voxels with t-values above this threshold are clustered
>> together spatially, and these spatial clusters are then considered
>> by the statistical test.
>>
>>
>>> If I overlay the results of these two runs, I get somewhat more
>>> significant areas by using 3 than 2. Some of them are overlapping,
>>> others not.
>>
>>
>> This is because you need a larger number of voxels in your cluster
>> if you threshold at a t-value of 2 than if you threshold at a
>> t-value of 3 (because, in the null case, a large cluster with a
>> t-threshold of 2 is more likely than a large cluster with a
>> t-threshold of 3).
>
> Right. This is exactly what confused me because, as I wrote, I get
> more and larger clusters with the higher threshold (3) than with
> the lower
> (2).
>
This means that your large clusters were not large enough to be
significant at t=2, but the required cluster size is smaller if all
their voxels have t-values > 3, so at t=3 you saw clusters that did
not pass the significance test at t=2.
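To make this concrete, here is a toy numpy sketch (a 1-D array
standing in for a t-map, with made-up values; randomise of course
works on 3-D images, so this illustrates the idea rather than its
internals): raising the cluster-forming threshold breaks up or shrinks
clusters, but the clusters that survive need fewer voxels to reach
significance.

```python
import numpy as np

# Hypothetical 1-D "t-map"; voxels above the -c threshold are grouped
# into contiguous clusters, and cluster size is the test statistic.
tmap = np.array([0.5, 2.4, 2.6, 1.1, 3.2, 3.5, 3.1, 0.2, 2.1])

def clusters_above(tmap, thresh):
    """Return sizes of contiguous runs of voxels with t > thresh."""
    above = tmap > thresh
    sizes, run = [], 0
    for a in above:
        if a:
            run += 1
        elif run:
            sizes.append(run)
            run = 0
    if run:
        sizes.append(run)
    return sizes

print(clusters_above(tmap, 2.0))  # -> [2, 3, 1]
print(clusters_above(tmap, 3.0))  # -> [3]
```

At -c 2 there are three clusters, at -c 3 only one; which of them
reach significance then depends on the null distribution of cluster
size at each threshold.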
>>
>> When we say "correlate with the covariate", we mean put this
>> covariate into the GLM as a regressor of interest.
>> We can then compute the parameter-value and t-score for this variable
>> and perform corrected thresholding using randomisation exactly as
>> before.
>
> So, 'correlating' here actually means regression on a covariate. It
> could be a true voxelwise correlation as in Tuch et al. (2006). That
> would make more sense in many cases. You would perhaps not want to
> 'explain' the FA values with, for example, neuropsychological test
> scores, but rather look at their association in some brain areas.
>
Sure, you could do a true correlation, but doing a GLM with a single
regressor is essentially the same thing.
With a correlation approach you cannot get a true p-value with
parametric stats; you can only get an approximation with the Fisher
r->z transform. I think this should give you essentially the same
values as the GLM parametric approach.
You could, of course, do a correlation followed by permutation-based
inference, but it is not clear to me why this would be better than
the GLM equivalent.
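The equivalence is easy to check numerically: for a single regressor
(plus an intercept), the t-statistic computed from the Pearson r is
algebraically identical to the GLM t-statistic for that regressor. A
quick numpy sketch with hypothetical data (not from any real study):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20
x = rng.normal(size=n)             # covariate (e.g. a test score)
y = 0.5 * x + rng.normal(size=n)   # "FA values" at one voxel

# Correlation route: Pearson r converted to a t-statistic
r = np.corrcoef(x, y)[0, 1]
t_corr = r * np.sqrt((n - 2) / (1 - r**2))

# GLM route: y = b0 + b1*x, t-statistic for b1
X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
sigma2 = resid @ resid / (n - 2)          # residual variance
cov = sigma2 * np.linalg.inv(X.T @ X)     # covariance of the betas
t_glm = beta[1] / np.sqrt(cov[1, 1])

print(t_corr, t_glm)  # the two values agree
```

So permuting either statistic gives the same inference; the GLM form
just generalises to confounds and multiple regressors.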
>>
>> "Regressing out" means putting them into a GLM as "regressors of no
>> interest"
>> we compute the linear coefficients using the GLM, and subtract these
>> components from the data before we start the analysis.
>>
>
> By 'subtracting these components from data' you mean y'=y-beta*x is
> your new dependent variable in the subsequent regression ('the
> analysis')
> when x is the 'regressor of no interest'?
yes
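In numpy terms (toy data, with x standing in for the nuisance
covariate), the residualisation is just a least-squares fit followed
by a subtraction, and the resulting y' is orthogonal to x:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 30
x = rng.normal(size=n)             # regressor of no interest (e.g. age)
y = 2.0 * x + rng.normal(size=n)   # data at one voxel

# Fit the nuisance regressor (with an intercept) via the GLM and
# subtract the fitted component: y' = y - X*beta
X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
y_resid = y - X @ beta             # y' used in the subsequent analysis

# By construction the residuals are orthogonal to the regressor
print(y_resid @ x)  # numerically ~0
```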
>
>> Just think of these things exactly as you do in the GLM. There is no
>> difference in the modelling. The only difference is in the inference,
>> which we do by creating the null distribution from the data, rather
>> than assuming a standard form.
>>
> The permutation idea and null distribution are clear. The number of
> permutations easily becomes formidable. Do you take a random sample
> of all possible permutations in the test with 5000 samples?
>
yes
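For intuition, a minimal sketch of that sampling idea (a one-sample
test with sign-flipping; toy data, and only an illustration of the
principle, not of randomise's implementation): with n subjects there
are 2^n possible sign-flips, so once that exceeds the requested number
of permutations a random subset is drawn instead of enumerating them
all.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 12
data = rng.normal(loc=0.5, size=n)   # one group of n subjects

def t_stat(d):
    """One-sample t-statistic."""
    return d.mean() / (d.std(ddof=1) / np.sqrt(len(d)))

t_obs = t_stat(data)

# 2**12 = 4096 sign-flips exist in total; draw a random sample of
# them to build the null distribution
n_perm = 1000
null = np.array([t_stat(data * rng.choice([-1.0, 1.0], size=n))
                 for _ in range(n_perm)])

# Permutation p-value (+1 correction to include the observed stat)
p = (1 + np.sum(null >= t_obs)) / (1 + n_perm)
print(p)
```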
cheers
T
> Regards,
> Mervi