Answers/suggestions again in-line.

Michael


On 20 Jan 2017, at 13:13, Pieter Vandemaele <[log in to unmask]> wrote:

Hi Michael,

Thanks for the elaborate answer.

Adding to that, some questions about your useful comments:

=== MOTION CORRECTION ===
It will take some time to implement it, but is the following the right flow?

mcflirt generates transformation matrix files. Are these calculated against the reference image? I mean, when I have a 4D file with all labels and give mcflirt the first label as reference, will the first transformation matrix be the identity matrix? If so, this would make programming a lot easier.
mcflirt will certainly give you all the transformation matrices you need, for each volume in the image, to get them into a common motion-corrected space. You can thus combine them with your label-to-control transformation as needed. It is that registration where you might get some form of bias due to the contrast difference from labelling, and thus why I am not totally sure how much difference it makes separating label and control images for motion correction.
For registration to structural and standard space you can do what you suggest here and base it on the control image; if you have background suppression then you might be better off using the calibration image. My current impression (and something that I am building into the new version of BASIL) is that a good final perfusion image is actually a useful basis for registration - since it inherently contains white/grey contrast - which the control or calibration images tend not to. Whatever you do, I would do the modelling and quantification in the ASL data space and not at high resolution.
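As a rough sketch of the pieces involved (the file names here are just placeholders, so do check the options against your own FSL version):

  # motion correct the 4D ASL series, writing one MAT_nnnn file per volume
  mcflirt -in asldata -out asldata_mc -refvol 0 -mats

  # register a control (or calibration/perfusion) image to the structural, rigid body
  flirt -in control0 -ref struct_brain -omat asl2struct.mat -dof 6

  # combine a per-volume motion matrix with the asl->struct matrix,
  # so that the volume only needs to be resampled once
  convert_xfm -omat vol0012_to_struct.mat -concat asl2struct.mat asldata_mc.mat/MAT_0012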

Is the following correct? (I try to sketch it in FSL commands below the list.)
- create a 4D data file from all labels
- create a 4D data file from all controls
- register 4D label to first label
- register 4D control to first control
- register first control to first label (flirt or mcflirt?)
- compute for every label image the label2control transformation
- first control to structural
- structural to std
- compute for every image the corresponding asl2struct and asl2std transformation
- apply all transformation matrices on individual images (in native space) (4D images are not used here, correct?)
- do modelling and quantification
- apply transformation to standard space
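In FSL commands I imagine something roughly like this (file names are placeholders and I still need to check the exact options):

  # motion correct the label and control series separately, keeping the matrices
  mcflirt -in labels -out labels_mc -refvol 0 -mats
  mcflirt -in controls -out controls_mc -refvol 0 -mats

  # first control to first label (rigid body)
  flirt -in control0 -ref label0 -omat c2l.mat -dof 6

  # first control to structural, and structural to standard space
  flirt -in control0 -ref struct_brain -omat asl2struct.mat -dof 6
  flirt -in struct_brain -ref $FSLDIR/data/standard/MNI152_T1_2mm_brain -omat struct2std.mat

  # e.g. control volume 3: concatenate its motion matrix with control->label,
  # then apply the combined matrix in a single resampling
  convert_xfm -omat ctrl3_to_label.mat -concat c2l.mat controls_mc.mat/MAT_0003
  flirt -in control3 -ref label0 -init ctrl3_to_label.mat -applyxfm -out control3_reg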

=== SLICE TIMING CORRECTION ===
We use a 3D GRASE readout module. I think slice timing is not an issue then?
Agreed.


=== PVE CORRECTION ===
For the PCASL, I have no repetitions, but I am still able to use PVEc? I will certainly keep your comment in mind and will do both non-PVEc and PVEc analysis.
The current version of oxford_asl isn't set up to deal with data without repeated measurements and still do PVEc. I have a solution for a future release, but I think if you try it at the moment it will potentially do more spatial smoothing than is optimal.

Best,

Pieter

On 20/01/2017 at 13:50, Michael Chappell wrote:
I will try to add some information - answers in-line.

Michael


On 20 Jan 2017, at 12:17, Pieter Vandemaele <[log in to unmask]> wrote:

Dear ASL experts,

I'm diving into ASL data processing and already have read articles and programmed some basic processing.
However, the more I read the more confused I get...
Below are some questions, mainly on corrections, and I hope someone is willing to answer (at least partially).

=== LABEL-CONTROL ===
We use a multi TI PASL and single TI PCASL Q2Tips sequence on a Siemens Trio Tim system.
When reading literature, some people use the non-selective image as label and the selective as control, others the other way around. In our sequence the non-selective images are acquired first. In which direction should I make the difference: ns-ss or ss-ns?


Q2TIPS is used with pASL, so I suspect you don’t have Q2TIPS pcASL. It is likely that you would have multi-TI pASL with Q2TIPS used to set the bolus duration, the single-PLD (a la single-TI) pcASL will have a bolus duration fixed by the labelling without further intervention.
If you have a FAIR pASL scheme then Alina is probably right and you need to do ss-ns; in practice you should be able to tell if you have got it right, as the perfusion signal comes out negative otherwise. The asl_file program in FSL allows you to select the ordering in the data (TC or CT) and then do the subtraction.
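For example, for single-TI data stored as label-control pairs (swap --iaf=tc for --iaf=ct if the ordering is the other way around, and adjust --ntis for the multi-TI data):

  # pairwise label-control subtraction with asl_file
  asl_file --data=asldata --ntis=1 --iaf=tc --diff --out=diffdata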

Secondly, I read about several strategies to compute the difference images, among which the running difference, yielding more images.
Is this recommended? Is this supported in FSL and if so how to do it?

If you are simply planning on computing a single perfusion image from your data then the choice of subtraction strategy is unimportant. The default in asl_file would be fine. You can do surround subtraction - the difference between all adjacent pairs - to get more data points, but this is of most use for functional paradigms.

=== MOTION CORRECTION ===
I'm thinking of doing the motion correction described by Di Cataldo et al. 2011. All label images are registered to the first label, all controls to the first control, and then the labels to the controls with mcflirt. This introduces resampling of the images, but when I register the labels to the controls, I resample the labels again but not the controls. Is this an issue and if so, what can I do about it?

This is quite a sensible strategy. In practice you might find a straight motion correction of all the data prior to subtraction is fine. What you are trying to avoid is the motion correction inadvertently 'correcting' the label-control difference itself, but even this strategy doesn't fully account for that. Ideally you would estimate all the transformation matrices involved and combine the relevant ones before finally applying them, so that you only get a single resampling, as resampling itself introduces errors that can be an issue for ASL contrast.
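As a minimal version of that simpler route (placeholder names, single-TI label-control pairs assumed):

  # motion correct everything in one go, then subtract
  mcflirt -in asldata -out asldata_mc -refvol 0 -plots
  asl_file --data=asldata_mc --ntis=1 --iaf=tc --diff --out=diffdata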
=== SLICE TIMING CORRECTION ===
It is recommended to use the -slicedt in oxford_asl/basil to account for acquisition timing differences. I suppose this is valid for sequentially scanned sequences from bottom to top. However, I checked the data I need to analyze and these are acquired interleaved. How do I do slice timing correction?
If your readout is 2D then this will be a good idea, potentially less critical for your single-PLD pcASL. Unfortunately I don’t think we have the option to correct for interleaved slice acquisition. Although this strikes me as an odd choice for ASL, the sequential inferior to superior approach seems to be preferable as it ‘follows’ the passage of the label and is thus less likely to interfere with the ASL signal itself.
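For a sequential 2D readout, the per-slice timing increment can be passed straight to the fitting, for example (the timings here are invented placeholders, in seconds):

  oxford_asl -i diffdata -o asl_out --iaf=diff --tis=1.8,2.2,2.6 --bolus=0.8 --slicedt=0.0455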

=== PVE CORRECTION ===
I've read the articles of Asllani and Chappell on PVE correction. The main difference is the domain in which they work: spatial and temporal, respectively. How does BASIL deal with single-TI/single-image data for partial volume correction? Is the Asllani method implemented?
BASIL can run PVEc on single-TI data; it does require (in the current version at least) that you input all the repeated measurements, i.e. do not average the difference images before passing to oxford_asl. Having temporal information helps the PVEc in BASIL, but is not strictly necessary. We do not have the LR (Asllani) method in the released version of BASIL, although we will have in future. I would always do a conventional (non-PVEc) computation as well as a PVEc one; it is useful to be able to see and interpret both, as it is still early days for using PVEc-corrected images.
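As a sketch of how that would be invoked (assuming you have already run fsl_anat on the structural, and the input is the unaveraged difference data - all names and timings below are placeholders):

  oxford_asl -i diffdata_repeats -o asl_pvec --iaf=diff --tis=3.6 --bolus=1.8 --casl --fslanat=struct.anat --pvcorr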

Lastly: what is the correct order of corrections?


I would do motion correction early on, before subtraction. Do look at your subtracted data: you might spot glaring artefacts here that can be a sign that motion correction hasn't worked as well as it should. In BASIL PVEc is done as part of the quantification procedure, so that correction is incorporated for you. If you do LR, then I think it makes no marked difference for single-TI/PLD data whether you do it before or after perfusion quantification. For multi-TI do the LR on the difference images and then pass into oxford_asl for model fitting.
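If you let oxford_asl take care of the motion correction, subtraction and quantification in one call, it would look roughly like this (every name and timing below is a placeholder for your own values):

  oxford_asl -i asldata --iaf=tc --ibf=rpt --tis=3.6 --bolus=1.8 --casl --mc -c calib --fslanat=struct.anat -o asl_out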

I appreciate every bit of information that gets me further.

Best regards,

Pieter



Pieter Vandemaele, MSc-Ing
GIfMI Site Manager
Ghent Institute for Functional and Metabolic Imaging
MR Department -1K12
Ghent University Hospital
De Pintelaan 185
9000 Ghent - BELGIUM


[log in to unmask] tel: +32 (0)9 332 48 20
http://gifmi.ugent.be fax: +32 (0)9 332 49 69
ORCID ID 0000-0002-4523-2476



