> I have been processing a number of scans with spm2 under matlab v6.1.
> After processing, most of the activation maps turn out fine, but several of
> them look grainy and pixelated, as if with streaks of activation. I
> separated the images using MRIcro, applied a slice timing correction,
> realigned as "coregister only", smoothed with a FWHM of 7x7x11 mm, reoriented
> setting yaw=pi, and normalized to an EPI template. Have you heard of a
> problem like this before, and do you have any suggestions? I'm happy to
> send any more details of the situation you need, or an image of the
> problem. Thank you so much!
Grainy and pixelated activation maps can be a result of discretisation, as the
data are typically stored as integer values that get scaled (by values in the
headers) to some real values. If the mean integer value in the images is
around 50, then a 1% signal increase is only 0.5, so (depending on noise etc.)
it may appear as a value of 51 or be lost entirely at 50. When you do the
stats, this discretisation then reveals itself as the kind of artifacts you
see.
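A quick numeric sketch of that rounding effect (illustrative only; it assumes the data are stored as signed 16-bit integers with a unit scalefactor):

```python
import numpy as np

# Baseline of 50 with a 1% "activation" increase of 0.5 in some scans.
true_signal = np.array([50.0, 50.0, 50.5, 50.5, 50.0, 50.5])

# Stored as integers, the half-unit increase sits exactly on a rounding
# boundary.  With round-half-to-even (np.rint) it vanishes completely:
stored = np.rint(true_signal).astype(np.int16)
# stored == [50, 50, 50, 50, 50, 50] -- the 1% effect is gone.

# With a little noise the effect sometimes rounds up and sometimes does
# not, which is exactly what produces speckly statistical maps.
rng = np.random.default_rng(0)
stored_noisy = np.rint(true_signal + rng.normal(0, 0.3, 6)).astype(np.int16)
```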
When data are smoothed by SPM, the datatypes and scalefactors in the header
remain the same. So if there is a small range in the original data, then
there will also be a small (probably even smaller) range in the smoothed
data. Smoothing the data loses some of the accuracy with which the
intensities are represented.
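The effect can be sketched in a few lines (a toy 1-D signal and a hand-rolled Gaussian kernel standing in for SPM's smoothing; not SPM's actual code):

```python
import numpy as np

rng = np.random.default_rng(1)
# An integer "image" row with a small dynamic range around a mean of 50.
row = np.rint(50 + rng.normal(0, 2, size=256)).astype(np.int16)

# A small normalised Gaussian kernel (standing in for the FWHM kernel).
x = np.arange(-6, 7)
kernel = np.exp(-0.5 * (x / 2.0) ** 2)
kernel /= kernel.sum()

# mode="valid" avoids zero-padded edges distorting the comparison.
smoothed = np.convolve(row.astype(np.float64), kernel, mode="valid")

# Keeping the original datatype and scalefactor means rounding back to
# integers, so the sub-integer detail smoothing produced is discarded
# and the (already small) range becomes smaller still.
written = np.rint(smoothed).astype(np.int16)
print(int(np.ptp(row)), int(np.ptp(written)))  # range before vs after
```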
I (or someone) should probably re-write the smoothing code so that the
smoothed images are scaled, so the maximum value in the smoothed image is
saved with an intensity value of 32767 (for signed 16 bit data). However, I
have resisted doing this as it would make it complicated to smooth 4D
datasets. Because there is a single scalefactor for the whole volume, the
whole dataset would need to be smoothed first in order to find the maximum,
before anything could be saved to disk. Another alternative would be to write
smoothed data as floating point, but this would use twice as much disk space.
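As a sketch, the rescale-on-write idea might look like the following; `rescale_to_int16` is a hypothetical helper for illustration, not SPM's code:

```python
import numpy as np

def rescale_to_int16(vol):
    """Write a volume so its peak maps to 32767, returning the data and
    the scalefactor that the header would need to undo the scaling."""
    scalefactor = float(np.abs(vol).max()) / 32767.0
    data = np.rint(vol / scalefactor).astype(np.int16)
    return data, scalefactor

smoothed = np.linspace(0.0, 412.3, 1000)   # pretend smoothed intensities
data, sf = rescale_to_int16(smoothed)
# data.max() == 32767, and data * sf recovers the input to within sf / 2,
# i.e. the quantisation step is as small as int16 allows.
```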
A similar loss of accuracy is also encountered in resliced or
slice-time-corrected data.
An alternative is to have the original data rescaled according to the
maximum of the original data. In SPM2, the DICOM conversion used to write
each volume (for the Siemens mosaic data only) such that the intensities used
the full range of values. This was a problem sometimes though, because
researchers found that the scalefactor for each image was different, so when
they analysed such data in a package other than SPM (which didn't use the
scalefactors) they were getting poor results.
Rescaling the original data also has other problems, especially with regard
to algorithms that use intensity histograms of the images (e.g. mutual
information coregistration, bias correction etc). Rescaling has a nasty
effect on such histograms. In principle, this could be fixed, but it would
require random noise to be added to the data when rescaling. That would be a
bit nasty.
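The histogram problem can be seen with a few lines of NumPy (an illustration of the effect, not SPM's registration code): multiplying integers by a non-integer scalefactor leaves only a comb of representable values, and dithering with uniform noise of one quantisation step is the fix mentioned above.

```python
import numpy as np

rng = np.random.default_rng(2)
ints = rng.integers(0, 200, size=100_000)   # integer image intensities

# After rescaling by a non-integer factor, only multiples of that
# factor can occur, so a fine-binned histogram is a comb of occupied
# and empty bins -- bad news for histogram-based algorithms such as
# mutual-information coregistration.
scaled = ints * 1.7
comb, _ = np.histogram(scaled, bins=1000, range=(0.0, 340.0))

# Adding uniform noise of one quantisation step fills the gaps
# (the "bit nasty" dithering fix).
dithered = scaled + rng.uniform(0.0, 1.7, size=scaled.shape)
smooth_hist, _ = np.histogram(dithered, bins=1000, range=(0.0, 340.0))
```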
In most DICOM files, only the first 12 bits of each voxel are used for
representing the image intensities. It is therefore safe to multiply the
integer values in the images by 8 (or by 16 if the output is stored as
unsigned short). This reduces the discretisation artifact that
arises in the smoothed images.
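For example (assuming 12-bit data stored in signed 16-bit images; the factor of 8 is safe because 4095 x 8 = 32760 stays within the int16 maximum of 32767):

```python
import numpy as np

# 12-bit DICOM intensities occupy 0..4095; int16 can represent up to 32767.
vol12 = np.array([0, 1, 2047, 4095], dtype=np.int16)

# Multiplying by 8 uses (nearly) the full signed 16-bit range, giving
# three extra bits of precision for smoothing to round into.  The header
# scalefactor would be divided by 8 so real-world intensities are
# unchanged.
boosted = vol12 * 8
# boosted == [0, 8, 16376, 32760], still int16, no overflow.
```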
Best regards,
-John