Hello,

just my 2 cents: an alternative to setting an absolute threshold might 
be masking the DTI data with a white matter mask. I also tend to agree 
about the cluster extent: yes, it may not be the statistically purest 
way of handling things, but a cluster of 3 with a 2x2x2mm voxel size is 
just ridiculously small, and not biologically plausible. Third, the 
issue of non-normality should be considered very carefully. The paper 
you cite is about VBM proper, i.e., using structural imaging data. The 
case for VBM on DTI data is much less clear, as suggested by Jones et 
al., 2005 ("Our results suggest that, even with moderate smoothing, a 
large number of voxels within central white matter regions may have 
non-normally distributed residuals thus making valid statistical 
inferences with a parametric approach problematic in these areas."). So 
if you want to stick with that small a filter width, using SnPM may be a 
good alternative. If you want to test the normality assumption in your 
dataset, you could use SPMd, the method also used by Derek and 
colleagues to compute a Shapiro-Wilk statistic at each voxel.
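
In case it helps, here is a minimal sketch of that voxelwise idea in 
Python/SciPy (not SPMd itself): run a Shapiro-Wilk test on the model 
residuals at every voxel inside a white matter mask. The synthetic 4D 
residuals array and the toy mask below are placeholders, assuming the 
residual images have already been loaded into a numpy array:

```python
# Sketch only (not SPMd): voxelwise Shapiro-Wilk on model residuals,
# restricted to a white matter mask. The synthetic residuals and the
# toy mask stand in for real residual images loaded from disk.
import numpy as np
from scipy.stats import shapiro

rng = np.random.default_rng(0)
n_subj = 20
residuals = rng.normal(size=(4, 4, 4, n_subj))   # x, y, z, subject
wm_mask = np.zeros((4, 4, 4), dtype=bool)
wm_mask[1:3, 1:3, 1:3] = True                    # toy white matter mask

# Shapiro-Wilk p-value at every in-mask voxel
pvals = np.full(wm_mask.shape, np.nan)
for idx in zip(*np.nonzero(wm_mask)):
    w, p = shapiro(residuals[idx])
    pvals[idx] = p

# Voxels where normality is rejected at p < 0.05 (uncorrected) would
# argue for a nonparametric analysis such as SnPM.
n_reject = np.sum(pvals[wm_mask] < 0.05)
print(n_reject, "of", wm_mask.sum(), "voxels reject normality")
```

With real data you would of course map the fraction of rejecting voxels 
across the brain rather than just counting them.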

Cheers,
Marko

Min Liu wrote:
> Dear Michel,
> Thanks a lot for your suggestions.
> Placing an FA threshold is just like placing an absolute threshold for
> grey matter VBM to account for the confounds around the grey matter and
> white matter edges. I don't see a problem with that. It is true that an
> extent threshold is not necessary for voxel-based analysis due to the
> varying underlying smoothness of different image regions. But for my
> case, because a lot of detected clusters are very small (like 3 voxels,
> 5 voxels...) and located at the edge of the white matter, I'd like to
> exclude them in a convincing way. As for the non-Gaussian distribution
> problem you pointed out, smoothing can partly account for that. Although
> the Gaussian kernel I chose was relatively small (4mm) (because I'd like
> to have higher spatial sensitivity), Salmond 2002 (Distributional
> assumptions in VBM) demonstrated that "in balanced designs, provided the
> data are smoothed with a 4-mm FWHM kernel, nonnormality is sufficiently
> attenuated to render the tests valid." However, I've never tested the
> normality of the residuals of my data, so I am not confident that my
> data meet the normality assumption. Do you happen to know a script,
> compatible with SPM, that can test the normality of residuals? I think
> if the result violated the normality assumption, I'd choose to do SnPM.
> TBSS is another good choice. I'll consider that in the future.
> Thanks a lot for your thought.
> Sincerely,
> Min
>
> *From:* Michel Thiebaut de Schotten <mailto:[log in to unmask]>
> *Sent:* Wednesday, October 13, 2010 12:40 PM
> *To:* Min Liu <mailto:[log in to unmask]>
> *Cc:* [log in to unmask] <mailto:[log in to unmask]>
> *Subject:* Re: [SPM] Empirical extent threshold for voxel based analysis
>
> Dear Min,
>
> You don't need to apply an FA threshold for your voxelwise comparison
> with SPM8.
> I would recommend using nonparametric statistics for your
> comparison, as the distribution of the FA values in group comparisons is
> not Gaussian (Jones et al. 2005).
> If you want to threshold your FA (and thus reduce the sensitivity to
> partial volume effects), you can use a Tract-Based Spatial Statistics
> (TBSS) approach (Smith et al. 2006), which restricts the comparison
> to the core of the white matter.
>
> cheers
>
> michel
>
> On Oct 13, 2010, at 7:29 PM, Min Liu wrote:
>
>> Dear all,
>> I am comparing two groups of FA maps voxelwise using SPM8. In order
>> to limit the comparison volume to white matter only, I set an absolute
>> threshold of 0.2 on all FA maps. This threshold reduced the comparison
>> volume dramatically. The findings were corrected by False Discovery
>> Rate at a 0.05 significance level. Many detected clusters were very small
>> (voxel number under 10, partly due to a relatively small smoothing
>> kernel used before the voxel-based comparison, 4 mm). Additionally, I
>> also want to apply an extent-threshold correction. In order to
>> empirically determine the extent threshold rather than defining one
>> arbitrarily, I intended to use the 'Expected Number of Voxels per
>> Cluster' calculated by SPM. However, this value turned out to be very
>> small: 3 voxels. I understand that it is because of the small smoothing
>> kernel and small comparison volume. But it just doesn't sound right.
>> 3 voxels? I am wondering if it is still OK to use this number for the
>> extent-threshold correction.
>> Thank you very much for your thoughts and suggestions.
>> Sincerely,
>> Min
>
> Michel Thiebaut de Schotten, PhD
> _NATBRAINLAB_
> ANR-CAFORPFC/ANR-HMTC
> CRICM-INSERM UMRS 975
> Pavillon de l'Enfance et l'Adolescence
> 47 Bd de l'Hôpital
> 75651 Paris Cedex 13, France
> www.natbrainlab.com <http://www.natbrainlab.com>
> +33 613579133
>

-- 
=====================================================================
Marko Wilke                                            (Dr.med./M.D.)
                 [log in to unmask]

Universitäts-Kinderklinik              University Children's Hospital
Abt. III (Neuropädiatrie)             Dept. III (Pediatric neurology)
             Hoppe-Seyler-Str. 1, D - 72076 Tübingen
Tel.: (+49) 07071 29-83416                   Fax: (+49) 07071 29-5473
=====================================================================