Dear Franz,
> Dear Jesper and Michael,
>
> thank you so much for the replies and the code.
> I am aware that those approaches are far from optimal. That's why I am going to acquire blip-up/blip-down images in the future.
>
> Jesper, the data I am working on are from an ongoing longitudinal study. For the second time point (1 year after the first) we are acquiring blip-up/blip-down b=0 images. Using the reverse-blipped time point 2 (TP2) b=0 images directly to correct time point 1 (TP1) probably won't work. However, do you think that the following might work, or would it introduce some bias:
> *For the 2nd time point, unwarp the blipped b=0 with topup as it is intended to be used. This would produce an unwarped b=0 image.
> *Use this unwarped b=0 for correction of the 1st time point: feed the distorted TP1 b=0 and the unwarped TP2 b=0 into topup (with a readout time very close to zero for the unwarped image).
> Sorry for taking topup for a ride so far from home.
that is a clever idea. I can't see off the top of my head any reason it shouldn't work. Put the distorted image first and the zero-readout image second when you set it up (to make the distorted one the reference w.r.t. location). Let me know how it goes.
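
Something along these lines ought to work (just a sketch; the filenames, phase-encode directions, and readout times in the acqparams files are placeholders for whatever matches your actual acquisition):

# Stage 1: estimate and correct the TP2 distortions from its blip-up/blip-down pair
fslmerge -t tp2_b0_pair tp2_b0_blipup tp2_b0_blipdown
topup --imain=tp2_b0_pair --datain=acqparams_tp2.txt --config=b02b0.cnf --out=tp2_topup --iout=tp2_b0_unwarped
# acqparams_tp2.txt:
#  0 -1 0 0.05
#  0  1 0 0.05

# Stage 2: distorted TP1 b=0 first, unwarped TP2 b=0 (time-averaged) second
fslmaths tp2_b0_unwarped -Tmean tp2_b0_unwarped_mean
fslmerge -t tp1_pair tp1_b0 tp2_b0_unwarped_mean
topup --imain=tp1_pair --datain=acqparams_tp1.txt --config=b02b0.cnf --out=tp1_topup
# acqparams_tp1.txt (near-zero readout time for the already-unwarped image):
#  0 -1 0 0.05
#  0 -1 0 0.0001

# Apply the stage-2 field to the TP1 diffusion data
applytopup --imain=tp1_dwi --inindex=1 --datain=acqparams_tp1.txt --topup=tp1_topup --method=jac --out=tp1_dwi_corrected
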
Jesper
>
> Thanks so much for your help,
> Franz
>
>
> On 12.08.2013 at 21:28, Michael Dwyer wrote:
>
>> Hi Franz,
>>
>> I disavow all responsibility for the results, but here is a very dirty kludge that might help. I'm copying in a small Python script I've used to non-linearly map intensities from one image to another. It essentially performs a voxel-wise regression of the intensities using a K-nearest-neighbors approach. Obviously it's no substitute for what Jesper is suggesting, but it might help with retrospective data. It will sort of "predict" what the b=0 would look like if you made it from the T2w.
>>
>> Copy the below into a file called similarize_intensities.py, and then run:
>>
>> ./similarize_intensities.py t2w.nii.gz b0.nii.gz t2w_as_b0.nii.gz
>>
>> N.B. Please make sure you use INTEGER values, or it will never finish; run the raw files through fslmaths with "-odt int". A little smoothing (sigma around 0.5 mm) might also help stability. Finally, the images need to be co-registered first for this to work, but from your context I assume this is already the case.
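>>
>> For example (just a sketch; the filenames are placeholders for your co-registered images):
>>
>> fslmaths t2w_reg.nii.gz -s 0.5 t2w_prep.nii.gz -odt int
>> fslmaths b0.nii.gz -s 0.5 b0_prep.nii.gz -odt int
>> ./similarize_intensities.py t2w_prep.nii.gz b0_prep.nii.gz t2w_as_b0.nii.gz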
>>
>> =========== CODE===================
>>
>> #!/usr/bin/env python
>>
>> import optparse
>> import numpy
>> import nibabel
>> from sklearn.neighbors import KNeighborsRegressor
>> from scipy import ndimage
>>
>> # Set up the option parser to handle the command-line
>> usage = """%prog [options] source-img target-img output-image"
>>
>> This program takes two images, source and target. It determines a non-linear
>> mapping from intensities in source to target, and then remaps source
>> accordingly. It uses a nearest-neighbors regressor to do so. Theoretically,
>> the success of this operation depends on the joint histogram entropy.
>>
>> Warning: this program is BNAC VRMQC, and should be used with appropriate
>> caution. It is not designed for use without manual review of the
>> outputs."""
>> parser = optparse.OptionParser(usage=usage)
>>
>> parser.add_option("-v", "--verbose", dest="verbose",
>> help="Increase verbosity", action="store_true", default=False)
>>
>> parser.add_option("-d", "--downsample-factor", dest="ds_factor",
>> type="float", help="Spatial downsampling factor (0.25)",
>> metavar="FACTOR",
>> default = 0.25)
>>
>> (options, args) = parser.parse_args()
>> if len(args) != 3:
>>     print "You must supply source, target, and output image names"
>>     exit()
>>
>> if options.verbose:
>>     def vprint(*args):
>>         # Print each argument separately so the caller doesn't need to
>>         # stuff everything to be printed into a single string
>>         for arg in args:
>>             print arg,
>>         print
>> else:
>>     vprint = lambda *a: None  # do-nothing function
>>
>> # Load up the images with Nibabel
>> vprint("Loading source images")
>> img1 = nibabel.load(args[0])
>> img2 = nibabel.load(args[1])
>>
>> # Get the actual data
>> data1 = img1.get_data()
>> data2 = img2.get_data()
>>
>> # K-neighbors algorithms aren't the fastest, so we downsample. It's 3D, so
>> # the data reduction will be ds_factor^3 -- a substantial speedup.
>> vprint("Downsampling data by a factor of ", options.ds_factor)
>> data1_small = ndimage.zoom(data1, options.ds_factor)
>> data2_small = ndimage.zoom(data2, options.ds_factor)
>>
>> # Assume the images are already deskulled, so don't include 0's.
>> vprint("Removing voxels with 0 intensity")
>> data1_cut = data1_small[data1_small != 0]
>> data2_cut = data2_small[data1_small != 0]
>> data1_cut_f = data1_cut.flatten()
>> data2_cut_f = data2_cut.flatten()
>>
>> # Set up a scikit-based K-nearest neighbors regressor
>> neigh = KNeighborsRegressor()
>> vprint("Fitting model")
>> neigh.fit(numpy.transpose(numpy.atleast_2d(data1_cut_f)), data2_cut_f)
>>
>>
>> # To speed things up, since there are many fewer intensities than voxels, we
>> # first create a lookup table.
>> vprint("Creating lookup table")
>> full_data1 = numpy.transpose(numpy.atleast_2d(data1.flatten()))
>> vals = numpy.sort(numpy.unique(full_data1))
>> dvals = numpy.transpose(numpy.atleast_2d(vals))
>> predvals = neigh.predict(dvals)
>> vdict = dict(numpy.vstack((vals, predvals)).T)
>> vdict[0] = 0
>>
>> # Then apply the table to the raw source image data
>> vprint("Applying transform")
>> data1_trans = [vdict[i] for i in data1.flatten()]
>> data1_trans_3d = numpy.reshape(data1_trans, data1.shape)
>>
>> # Save the output file
>> vprint("Saving output")
>> oimg = nibabel.Nifti1Image(data1_trans_3d, img1.get_affine())
>> nibabel.save(oimg, args[2])
>>
>> ========END CODE===================
>>
>>
>>
>>
>> On Mon, Aug 12, 2013 at 11:55 AM, Jesper Andersson <[log in to unmask]> wrote:
>> Dear Franz,
>>
>>> I have a set of diffusion data without fieldmaps or reverse blips and would like to perform distortion correction. A couple of months ago there was a thread started by Mark in which Jesper proposed using T2w images with topup to correct for distortions in the DWIs (Andreas was interested as well) (https://www.jiscmail.ac.uk/cgi-bin/webadmin?A2=FSL;b7e1c3eb.1303).
>>> I tried the same approach, with results that were not terrible but also not great. It would be great if someone had suggestions for optimizing this workflow or could share their experience about what works and what does not.
>>>
>>> My DWIs are 2x2x2 mm; my T2w is 0.5x0.5x5 mm.
>>> I brought the T2w into DWI space (dof=6; the matrix was calculated from brain-extracted images, but the transform was applied to the non-brain-extracted images).
>>> Then I proceeded as described in the previous thread.
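>>> (For reference, the registration commands were along these lines; filenames are placeholders:
>>> flirt -in t2w_brain.nii.gz -ref b0_brain.nii.gz -dof 6 -omat t2w_to_dwi.mat
>>> flirt -in t2w.nii.gz -ref b0.nii.gz -applyxfm -init t2w_to_dwi.mat -interp spline -out t2w_in_dwi.nii.gz)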
>>>
>>> I have a single subject with blip-up/blip-down b=0 images as ground truth; however, the goal is to perform the correction without these, as I don't have them for most subjects. I have attached the my_topup_results_fieldcoef images for the T2w approach and for the blip-up/blip-down b=0.
>>> What worries me is that for the T2w images the fieldcoef seems considerably less smooth than for the blip-up/blip-down images.
>>
>> this is not the intended use of topup. topup is built on the assumption that the signal in the two images is identical, except for some geometric distortions (including subject movement). Hence, topup will try to "fix" any difference it sees by applying a geometric transform.
>>
>> When you have two images acquired with the same sequence, but with some change in phase-encoding, this assumption is likely to be (almost) true. When you have two different sequences, you need to be very lucky for it to work well, as there will be all sorts of signal changes that are unrelated to geometric distortions (in addition to those actually caused by geometric distortions).
>>
>> I think the additional structure in your "estimated field" is due to topup trying to compensate for differences in contrast by applying a geometric transform.
>>
>> There have been a couple of cases of people who have reported useful results from registering an SE-EPI to a T2-weighted non-EPI image, but there are no guarantees that it will work. Chances are it will work for some T2 sequences/images and not for others.
>>
>> It takes very little time to acquire an additional SE-EPI image, much less than to acquire a T2 image, so I strongly recommend doing that.
>>
>> Jesper
>>
>>> Do you think a slice thickness of 5 mm is too large for the T2w to be usable?
>>>
>>> Alternatively, is there a hack to use the T1w images (1x1x1 mm) or the EPIs from the resting-state sequence? The RS EPIs are acquired with a different total readout time than the DWIs, but also with a single-shot fast field echo sequence (the DWIs are single-shot spin echo).
>>>
>>>
>>> Any help is very much appreciated.
>>> Best,
>>> Franz
>>>
>>> <t2w_my_topup_results_fieldcoef.png>
>>> <blipupdown_my_topup_results_fieldcoef.png>
>>>
>>>
>>
>>
>>
>>
>> --
>> Michael Dwyer
>> Director of Technical Imaging Development
>> Buffalo Neuroimaging Analysis Center
>> 100 High St. Buffalo NY 14203
>> [log in to unmask]
>> (716) 859-7065