Dear Andre,
 
see below.
 
Dear Helmut,
Thank you very much for your reply.

> 1) smoothing was applied accidentally another time / with a larger smoothing kernel (trivial)
I don't change the standard parameters for smoothing in any processing step, so it could have happened only by mistakenly changing a parameter without noticing. However, I did the analysis twice, once quite a while ago in SPM8 (and this was actually with students running the analysis individually on their lab computers), and now with SPM12. I think it is very unlikely that this mistake would have happened twice.
Okay, I wanted to mention that option just in case (and exclude e.g. running the analysis a second time based on the same, accidentally incorrectly preprocessed data).

> 2) the raw data were acquired with a (very) large voxel size, interpolation to 2x2x2 mm^3 would result in many more voxels
No, physical scanning resolution was 3x3mm in-plane resolution, 3mm thick, no gap, interleaved slice acquisition. The site I scanned was a proper research setting with two well-maintained scanners and expert personnel. This, of course, doesn’t exclude mistakes, but it seems rather unlikely. Also, scanning was spread across two days and all data show the problem.
Well, it could be a sequence setting. We once acquired some data with a multi-channel head coil that we hadn't used before. The spatial resolution of the raw data didn't agree with the nominal voxel size and field of view, so we had a closer look, and indeed there was a default setting on the sequence card (or someone else had modified it) resulting in an interpolation of the acquired data to twice the spatial resolution.

> 3) high spatial autocorrelation on single-subject level due to some global effects (massive drifts?)
a) How could I check for this?
b) Shouldn't the HP filter remove such massive drifts? (HP filter was 1/165s)
Yes, you're right, low-frequency drifts should be removed by the HPF (within its limitations). But global effects due to high-frequency noise (massive head motion, spikes, ...) would not be. You can reproduce this by setting up a GLM with one artificially introduced "outlier volume" with a much higher signal. It's just a single volume, but it has a large impact on the residuals of the time series and will result in a very high FWHM.
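To illustrate this point, here is a minimal numpy sketch on a toy 1-D "brain" (this is a crude RFT-style estimate, not SPM's actual spm_est_smoothness routine): a single outlier volume carrying a large, spatially smooth signal dominates the per-voxel residual variance, and the pooled smoothness estimate shoots up.

```python
import numpy as np

rng = np.random.default_rng(0)
T, V = 100, 200          # volumes, voxels (toy 1-D "brain")

def est_fwhm(resid):
    """Crude RFT-style smoothness estimate in voxel units: standardize each
    voxel's residual time series, then derive FWHM from the pooled variance
    of the spatial derivative of the standardized residuals."""
    r = resid / resid.std(axis=0, keepdims=True)
    d = np.diff(r, axis=1)                    # spatial derivative
    return np.sqrt(4 * np.log(2) / d.var())

noise = rng.standard_normal((T, V))

# Clean data, intercept-only GLM: residuals = data minus voxel mean
clean = noise - noise.mean(axis=0)

# Same data plus ONE outlier volume carrying a large, spatially smooth signal
spiky = noise.copy()
spiky[40] += 50 * np.sin(2 * np.pi * np.arange(V) / V)
spiky -= spiky.mean(axis=0)

print(f"FWHM clean:            {est_fwhm(clean):.2f} voxels")
print(f"FWHM with one outlier: {est_fwhm(spiky):.2f} voxels")
```

With white noise the estimate sits close to 1 voxel; the single outlier volume roughly doubles it, even though 99 of 100 volumes are untouched.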

> 4) possibly overfitted models, resulting in very small residuals (?)
It was a blocked design with 35 s block length. Indeed, all blocks are modelled (boxcar convolved with the HRF). Blocks were separated by a 4 s (2 TR) inter-block interval to display the instructions, and these 4 s were not modelled. However, I have used this procedure many times and never had problems with it. The design matrix is attached. It seems to be alright: the predictor values vary to some extent between blocks, but this is due to the block design / block interval / HPF setting and shouldn't result in any problems.
We had 2 sessions, each with ~500 volumes (TR 2 s), i.e. roughly 1000 volumes in total. There are 12 different experimental conditions, and each condition was repeated 6 times (3 times per session). Not all conditions had the same duration. Some conditions will be combined for analysis.
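For reference, a regressor for such a design can be sketched as follows. This is a hypothetical single-condition toy (not the actual 12-condition design), and the double-gamma below only approximates SPM's canonical spm_hrf; onsets and the 10 s lead-in are made up for illustration.

```python
import numpy as np
from math import gamma

TR = 2.0
n_vols = 500

def hrf(t):
    # double-gamma approximation of a canonical HRF (peak ~6 s, undershoot ~16 s)
    return t**5 * np.exp(-t) / gamma(6) - t**15 * np.exp(-t) / (6 * gamma(16))

h = hrf(np.arange(0.0, 32.0, TR))             # HRF sampled at the TR

# Boxcar: 35 s blocks separated by unmodelled 4 s instruction intervals
box = np.zeros(n_vols)
onset = 10.0                                   # arbitrary start of the first block
while onset + 35 <= n_vols * TR:
    on = int(onset / TR)
    box[on:on + int(35 / TR)] = 1.0
    onset += 35 + 4                            # next block after the 4 s gap

reg = np.convolve(box, h)[:n_vols]             # predicted BOLD regressor
```

Because the block (35 s) is longer than the HRF, the regressor reaches a plateau within each block; the 4 s unmodelled gaps just produce short dips between blocks, which is harmless as described.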

> Thus, do you observe the FWHM for all your subjects (how much variability is there?),
I checked based on the first-level contrasts for each subject. Yes, it is bad (even worse) on the single-subject level:
For most subjects, 1 resel consists of 10000-13000 voxels, up to 17000 voxels. This means that many participants have only 10-50 resels in the whole brain.
FWHM is usually in the range of 18-25mm in each dimension.
On the single-subject level, smoothness is identical for all contrasts. Yes, FWHM is based on the residuals, which are always the same within a subject (but differ for different group models).
On the second level, smoothness varies but is always poor. I didn't check all possible contrasts, but I observed values between 1655 and 9400 voxels constituting one resel.
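For what it's worth, the resel size is simply the product of the per-axis FWHM expressed in voxel units, so the two sets of numbers can be cross-checked. Assuming 2 mm voxels (as in Helmut's interpolation example), an FWHM of 18-25 mm would give a resel size around 1300 voxels, an order of magnitude below the reported 10000-13000; the figures only line up if the quoted FWHM values are already in voxel units, which may be worth double-checking in the SPM output.

```python
import numpy as np

voxel_mm = 2.0                                  # assumed voxel size
fwhm_mm = np.array([22.0, 22.0, 22.0])          # middle of the reported 18-25 range

resel_size = np.prod(fwhm_mm / voxel_mm)        # voxels per resel
print(resel_size)                               # (22/2)^3 = 1331 voxels

# If those FWHM figures were already in voxel units, one resel would be
# 22^3 = 10648 voxels, matching the reported 10000-13000 much better:
print(np.prod(fwhm_mm))
```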

Just as a reminder: while all these things are strange, the pattern of activated brain areas is as expected. This seems to preclude any severe errors, e.g. a struggling DICOM import filter sorting the files in the wrong order, or the like...
Hm, artefacts might correlate with the conditions and just "boost" the beta estimates on the single-subject level; one could probably also come up with some scenario in which the group results would still look alright.

I'm happy to do any further checks if that is of any help.
 
As a first step, I would check the following:
  • What do the ResMS images look like on the single-subject / group level? Maybe there's something obvious, e.g. one slice/region with very large values compared to the others.
  • What do the beta / con images look like? Are they very "smooth"? What about the constant term / last beta image?
  • What about the RPV images? The scaling is going to be very different from what one usually obtains, but what about the pattern within the RPV file?
  • As you stated "for most subjects", do you have subjects with "normal" FWHM / resel count? If so, is there anything special about them, e.g. were they the first participants? This might point to some technical problem occuring at some point.
  • What do the raw data time series look like? Are there any strange effects (e.g. spikes, possibly affecting only some of the slices)? The ArtRepair toolbox, for example, has a movie function that also allows you to enhance the contrast.
  • If there are no obvious artefacts in the data, is there anything else unusual / weird (e.g. very low/high anatomical contrast, very smooth raw data)?
This doesn't solve the issue, but it might be helpful to find out at which stage things went wrong.
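As a quick complement to the raw-data check above, a crude volume-wise spike screen takes only a few lines. This is a hypothetical numpy sketch on a toy 4-D array (robust z-score on the global signal), not a substitute for ArtRepair's diagnostics.

```python
import numpy as np

def flag_outlier_volumes(data4d, z_thresh=6.0):
    """Flag volumes whose global mean signal deviates strongly from the run's
    median (robust z via the median absolute deviation).
    data4d: array of shape (x, y, z, t)."""
    g = data4d.reshape(-1, data4d.shape[-1]).mean(axis=0)   # global signal per volume
    mad = np.median(np.abs(g - np.median(g))) * 1.4826      # robust spread estimate
    z = (g - np.median(g)) / (mad + 1e-12)
    return np.flatnonzero(np.abs(z) > z_thresh)

# Toy example: 500 volumes of noise with one spike injected into volume 123
rng = np.random.default_rng(1)
data = rng.normal(1000.0, 5.0, size=(8, 8, 8, 500))
data[..., 123] += 200.0
print(flag_outlier_volumes(data))    # expect volume 123 to be flagged
```

The median/MAD combination keeps the threshold itself insensitive to the very spikes one is looking for, which a plain mean/SD z-score would not.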

Best wishes,
Andre



_______________________________________________________

Dr André J. Szameitat
Reader in Psychology
Co-Director Centre for Cognition and Neuroimaging (CCNI)
T +44(0)18952 67387 | E [log in to unmask]

Gaskel Building, Room GASK263
Office hours: Wed 11.30-12.30, Thu 13.30-14.30
_______________________________________________________


> -----Original Message-----
> From: H. Nebl [mailto:[log in to unmask]]
> Sent: 22 June 2015 14:13
> To: [log in to unmask]; Andre Szameitat
> Subject: Re: Huge resels? (1 resel = 2400 voxel)
>
> Dear Andre,
>
> The voxel count within the volume sounds reasonable for a whole-brain analysis
> (minus possibly the most dorsal or ventral parts reflecting the field of view). The
> FWHM seems to be very large though. In general, it might be large due to
> (unusually) large applied or intrinsic smoothness, maybe
> 1) smoothing was applied accidentally another time / with a larger smoothing
> kernel (trivial)
> 2) the raw data were acquired with a (very) large voxel size, interpolation to
> 2x2x2 mm^3 would result in many more voxels, which are highly dependent
> though (might also be a scanner setting, some sequences offer to reconstruct
> the data with a higher spatial resolution than that with which it was acquired)
> 3) high spatial autocorrelation on single-subject level due to some global effects
> (massive drifts?)
> 4) possibly overfitted models, resulting in very small residuals (?)
>
> Thus, do you observe the FWHM for all your subjects (how much variability is
> there?), or only on the group level (which might be affected by a very extreme
> subject)? In the latter case, is this just for a particular contrast / that particular
> one-sample t-test or for all the contrasts?
>
> Best
>
> Helmut
>
>