Interesting discussion, this one.
I guess Tom will be too polite to mention this, but I think it would be
very convincing if statistical maps were accompanied by the results of
an SPMd analysis to demonstrate normal and white residuals (or the lack
thereof!). Unfortunately, for first-level analyses this is really only
an option for the SPM folks, but I think SPMd would serve as a great
quality check.
best
torben
Torben E. Lund
Danish Research Centre for MR
Copenhagen University Hospital
Kettegaard Allé 30
2650 Hvidovre
Denmark
email: [log in to unmask]
webpage: http://www.drcmr.dk
On 16 Feb 2005, at 01:35, Matthew Brett wrote:
> Hi,
>
> I would put in a plea for continuous activation maps to be made
> available - and displayed in the paper or supplementary material. The
> thresholded maps we are all used to can be seriously misleading:
>
> Jernigan TL, Gamst AC, Fennema-Notestine C, Ostergaard AL. More
> "mapping" in brain mapping: statistical comparison of effects. Hum
> Brain Mapp. 2003 Jun;19(2):90-5
>
> In my experience, continuous maps also give a much clearer picture of
> the quality of the data.
>
> Also, it seems to me that any ROIs used should be made available
> online.
>
> Best,
>
> Matthew
>
> On Mon, 14 Feb 2005 17:36:23 -0500, Thomas E Nichols
> <[log in to unmask]> wrote:
>> Max,
>>
>>> Is anyone aware of papers about presenting results for fMRI studies?
>>> Specifically I'm looking for any attempts that have been made to
>>> standardize what is reported and how.
>>
>> I don't know of any such efforts, but I think one is badly needed. I
>> was once asked by an editor for such standards and started to make a
>> list of statistical and non-statistical issues. I'd love to hear
>> comments on such guidelines.
>>
>> -Tom
>>
>> -- Thomas Nichols --------------------  Department of Biostatistics
>> http://www.sph.umich.edu/~nichols       University of Michigan
>> [log in to unmask]                      1420 Washington Heights
>> --------------------------------------  Ann Arbor, MI 48109-2029
>>
>> All papers should give sufficient detail that a reader armed with
>> the authors' data could reproduce the results. Some important items:
>>
>> 1. What voxel-wise statistic image threshold was used? Corrected or
>> uncorrected? FWE or FDR? (An FDR sketch follows this list.)
>>
>> 2. Was cluster size inference used? If so, what is the
>> cluster-defining statistic image threshold? What is the cluster
>> size threshold (in voxels), and is its significance corrected or
>> uncorrected? (A cluster-extent sketch follows this list.)
>>
>> 3. How many voxels were corrected for? The whole-brain voxel count,
>> or a sub-volume count for 'Small Volume Correction'? If small
>> volume correction was used, state how the sub-region was defined.
>> (A Bonferroni sketch follows this list.)
>>
>> 4. If random field theory is used, what is the smoothness (FWHM in
>> x, y, z)? What is the RESEL count? (This allows one to
>> independently recompute the corrected threshold; a RESEL sketch
>> follows this list.)
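>>
>> For item 1, here's a minimal sketch of the FDR cutoff (the
>> Benjamini-Hochberg step-up rule), assuming only a numpy array of
>> voxel-wise p-values; the function and variable names are
>> illustrative, not any package's API:
>>
>>     import numpy as np
>>
>>     def fdr_threshold(p_values, q=0.05):
>>         """Benjamini-Hochberg: largest p(i) with p(i) <= (i/m)*q."""
>>         p = np.sort(np.ravel(p_values))
>>         m = p.size
>>         below = p <= q * np.arange(1, m + 1) / m
>>         return p[below].max() if below.any() else None  # None: nothing survives
>>
>> Voxels with p <= fdr_threshold(p_map) would then be declared
>> significant at FDR q = 0.05.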
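>>
>> For item 2, a minimal sketch of cluster-extent thresholding, assuming
>> a 3-D t map in a numpy array; the defining threshold t_def and the
>> extent k are placeholders, and both must be reported for the
>> inference to be reproducible:
>>
>>     import numpy as np
>>     from scipy import ndimage
>>
>>     def cluster_filter(t_map, t_def=3.0, k=10):
>>         """Zero out suprathreshold clusters smaller than k voxels."""
>>         labels, n = ndimage.label(t_map > t_def)  # default face connectivity
>>         sizes = np.bincount(labels.ravel())[1:]   # voxel count per cluster
>>         keep = np.isin(labels, 1 + np.flatnonzero(sizes >= k))
>>         return np.where(keep, t_map, 0.0)
>>
>> Note that the connectivity rule (face vs. edge/corner neighbours)
>> also affects cluster sizes, so it is worth reporting too.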
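>>
>> For item 3, the voxel count matters because a Bonferroni-style FWE
>> correction scales directly with it; a minimal sketch with purely
>> illustrative numbers:
>>
>>     from scipy import stats
>>
>>     n_voxels = 60000                          # analysis mask size (illustrative)
>>     z_fwe = stats.norm.isf(0.05 / n_voxels)   # ~4.79 for 60000 voxels
>>     z_svc = stats.norm.isf(0.05 / 500)        # ~3.72 for a 500-voxel SVC
>>
>> This is why an unreported small volume correction lets results pass
>> a nominally 'corrected' threshold that a whole-brain search would
>> not allow.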
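>>
>> And for item 4, a minimal sketch of why FWHM plus RESEL count
>> suffices to recheck an RFT threshold, using the leading 3-D term of
>> the expected Euler characteristic for a Gaussian field (all values
>> illustrative):
>>
>>     import numpy as np
>>
>>     n_voxels = 60000                  # search volume in voxels (illustrative)
>>     fwhm = np.array([2.8, 3.1, 2.6])  # smoothness in voxel units (illustrative)
>>     resels = n_voxels / fwhm.prod()   # RESELs = volume / resolution element
>>
>>     def expected_ec(z):
>>         """Leading 3-D term of E[EC] for a Gaussian field at threshold z."""
>>         return (resels * (4 * np.log(2)) ** 1.5 / (2 * np.pi) ** 2
>>                 * (z ** 2 - 1) * np.exp(-z ** 2 / 2))
>>
>>     zs = np.linspace(2.0, 7.0, 5001)
>>     z_fwe = zs[expected_ec(zs) <= 0.05][0]  # scan for where E[EC] drops to 0.05
>>
>> E[EC] approximates the FWE rate at high thresholds, so solving
>> E[EC] = 0.05 reproduces the corrected threshold.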
>>
>> Not directly related to the statistics, but crucial for any complete
>> reporting are:
>>
>> a. Basic image properties: image dimensions and voxel size.
>> Properties of the data as acquired *and* after intersubject
>> registration (aka Spatial Normalization). For PET/SPECT, the image
>> reconstruction smoothness parameter (e.g. 'ramp filtered', 'Hanning
>> filter, *** mm cutoff'). (A header-reading sketch follows this
>> list.)
>>
>> b. Was slice timing correction used?
>>
>> c. Smoothing applied, at the 1st level and at the 2nd level if done
>> twice. (A FWHM-to-sigma sketch follows this list.)
>>
>> d. Basic intrasubject registration info: what software and what sort
>> of interpolation were used?
>>
>> e. Basic intersubject registration parameters. Affine/linear? If so,
>> how many parameters (9 or 12, typically)? If nonlinear, 'how'
>> nonlinear? (E.g. with AIR, you specify a polynomial order; with
>> SPM, you specify a basis size, like 3x2x3.) What regularization
>> setting? What interpolation?
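>>
>> For item (a), the numbers can be read straight from the image header;
>> a minimal sketch assuming the nibabel package and a NIfTI file (the
>> filename is a placeholder):
>>
>>     import nibabel as nib
>>
>>     img = nib.load('spmT_0001.nii')  # placeholder filename
>>     print('dimensions:', img.shape)
>>     print('voxel size (mm):', img.header.get_zooms()[:3])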
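>>
>> For item (c), reporting the kernel as a FWHM in mm pins it down
>> exactly; a minimal sketch of the FWHM-to-sigma conversion, assuming
>> isotropic voxels (values illustrative):
>>
>>     import numpy as np
>>     from scipy import ndimage
>>
>>     fwhm_mm, voxel_mm = 8.0, 3.0  # e.g. an 8 mm kernel on 3 mm voxels
>>     sigma_vox = fwhm_mm / (voxel_mm * np.sqrt(8 * np.log(2)))  # FWHM = 2.355*sigma
>>
>>     data = np.random.randn(53, 63, 46)  # stand-in for a 3-D volume
>>     smoothed = ndimage.gaussian_filter(data, sigma_vox)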
>>
>> This may sound like a lot, but these are all very basic parameters
>> and can be concisely reported. A lab can also report them in detail
>> in one publication and then cite that publication in later papers
>> for details that haven't changed.
>>