This is the output from the start of preprocess; verbose is on.
[niklasl@aiaagn DTI_kataloger]$ ./preprocess.txt
Usage: /usr/local/fsl/bin/bedpostx_datacheck data_dir
Reading images
Filling empty planes
Running Register
Loading prediction maker
Evaluating prediction maker model
Estimated hyperparameters: 8.558573
0.440032
-10.286440
Calculating parameter updates
Segmentation violation, Invalid permission for address, Offending address = 0x7fea4bffd434
eddy ) [0x45e8c8] [rest of backtrace garbled]
./preprocess.txt: line 2: 4318 Segmentation fault (core dumped) eddy --imain="${katalog}_DTI.nii" --mask="${katalog}_DTI_bet_mask.nii" --acqp="${katalog}_DTI_acq.txt" --index="${katalog}_DTI_index.txt" --bvecs="${katalog}_DTI.bvec" --bvals="${katalog}_DTI.bval" --out="${katalog}_DTI_eddy" --slm=linear --fwhm=10,0,0,0,0 --niter=5 --verbose --fep=true
data file NYPUM_078_003_30T_V01_DTI_eddy.nii
mask file NYPUM_078_003_30T_V01_DTI_bet_mask.nii
bvecs NYPUM_078_003_30T_V01_DTI.bvec
bvals NYPUM_078_003_30T_V01_DTI.bval
when using eddy_cuda I get
eddy_cuda: error while loading shared libraries: libcudart.so.6.0: cannot open shared object file: No such file or directory
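In case it helps: that loader error usually just means the CUDA 6.0 runtime is not on the library search path. A minimal sketch of how one might check, assuming a typical /usr/local/cuda-6.0 install (the path is a guess for my system):

```shell
# Locate the CUDA 6.0 runtime library (the install prefix is an assumption;
# adjust to wherever CUDA lives on your machine)
find /usr/local -maxdepth 4 -name 'libcudart.so.6.0' 2>/dev/null || true

# If it lives under e.g. /usr/local/cuda-6.0/lib64, put that directory on
# the loader path before running eddy_cuda
export LD_LIBRARY_PATH=/usr/local/cuda-6.0/lib64:${LD_LIBRARY_PATH:-}

# Check that the loader can now resolve it (only meaningful on a machine
# where eddy_cuda is actually installed)
ldd "$(command -v eddy_cuda)" 2>/dev/null | grep libcudart || true
```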
Is this some kind of error similar to what I get for fslview? I have not attended to that yet.
I will test --initrand. Still, regarding convergence, is it the hyperparameters you are referring to when talking about the SSD after each iteration? I have been concerned that the hyperparameters "jump" in a worrying manner even in later iterations.
Thank you very much for your help so far.
best
nick
________________________________________
From: FSL - FMRIB's Software Library <[log in to unmask]> on behalf of Jesper Andersson <[log in to unmask]>
Sent: 19 June 2016 07:37
To: [log in to unmask]
Subject: Re: [FSL] eddy-concerns
Hi Niklas,
>
> I have some data where there might be concerns about movement, so I applied fwhm=10,0,0,0,0 as recommended. However, I get segmentation errors for some data when I do this, which do not appear when using five zeros. It has also happened several times that a segmentation error occurs with all zeros as well for some other data, but then it has worked on the next run. Any clue on this? I use Red Hat 7.2 and apply slm as there are relatively few gradients. No topup is applied as there is only one phase direction.
I think in the first released version of eddy there was a bug that caused crashes when fwhm was non-zero. But I thought that was fixed for the latest release. What message do you get when it crashes?
Intermittent crashes sounds like an Openmp issue. I know there is an issue with the linear algebra library we use (NEWMAT) and Openmp, so maybe that is what you see. Do you have any possibility to use the CUDA version?
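If the intermittent crashes really are OpenMP-related, one thing worth trying is pinning the run to a single thread before launching eddy. This is a guess based on the NEWMAT/OpenMP interaction described above, not a confirmed fix:

```shell
# Possible workaround for intermittent OpenMP-related crashes: force the
# run onto a single thread (an assumption, not a confirmed fix) before
# invoking eddy in the same shell
export OMP_NUM_THREADS=1
echo "OMP_NUM_THREADS=$OMP_NUM_THREADS"
```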
>
> I also wonder, when it comes to eddy, whether any of the output data is useful for objectively rejecting data (apart from visual inspection).
The current release doesn’t have the option to replace outlier slices, but I think it still produces a text file with outlier information. Too many outliers in a volume may make you want to discard that volume; too many outliers overall may mean that you want to reject the subject. It will be up to you to decide what “too many” is.
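For what it's worth, here is a sketch of how one might tally outliers per volume from such a text file. The file name and line format below are assumptions modelled on what later eddy versions write, not guaranteed for this release; the report lines are made-up sample data:

```shell
# Hypothetical outlier report (the "Slice N in scan M is an outlier"
# wording is assumed from later eddy versions; lines here are samples)
cat > outlier_report.txt <<'EOF'
Slice 10 in scan 3 is an outlier with mean -4.1 standard deviations off
Slice 11 in scan 3 is an outlier with mean -4.5 standard deviations off
Slice 25 in scan 7 is an outlier with mean -4.0 standard deviations off
EOF

# Count outlier slices per volume ("scan"): many outliers in one volume
# may argue for discarding that volume, many overall for dropping the subject
awk '{for (i = 1; i <= NF; i++) if ($i == "scan") print $(i+1)}' outlier_report.txt \
  | sort | uniq -c | sort -rn
```

With the sample lines above, volume 3 shows two outlier slices and volume 7 shows one.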
> There is no convergence criterion in eddy, but what differences could be expected run-to-run, using the same input parameters, i.e. what differences in the translation matrix can one expect? Are differences of 20-30 thousandths of a mm normal in, say, the x-direction? Or should each run give the exact same result if eddy has fully converged?
There is a “random” step in eddy where a random set of voxels from within your brain mask is used for the estimation of the hyperparameters for the Gaussian Process. That means that eddy will give very slightly different results run to run even if/when fully converged. There is a hidden option, --initrand, that I _think_ was included in the latest release. If you set it you should get identical results each time.
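To put a number on the run-to-run differences, one could compare the motion parameter files from two runs. The file name (*.eddy_parameters) and column layout (first columns = x/y/z translations in mm) are assumptions here, and the values are made-up sample data:

```shell
# Two hypothetical eddy_parameters files, one row per volume; the
# assumption is that the first three columns are translations in mm
cat > run1.eddy_parameters <<'EOF'
0.012 0.034 -0.051
0.015 0.031 -0.048
EOF
cat > run2.eddy_parameters <<'EOF'
0.014 0.033 -0.050
0.013 0.032 -0.049
EOF

# Largest absolute per-volume difference in the x-translation between runs
paste run1.eddy_parameters run2.eddy_parameters \
  | awk '{d = $1 - $4; if (d < 0) d = -d; if (d > m) m = d} END {printf "%.3f\n", m}'
```

On the sample rows above this prints 0.002, i.e. two thousandths of a mm.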
>
> And using 10-0-0-0-0 compared to five zeros, can one objectively determine that the correction becomes better? or using 10 zeros instead? Is the number of slices outside the SE useful?
There isn’t really any independent objective criterion, I’m afraid. Eyeballing is probably your best option. My experience would be that 10 zeros is at least as good as 10,0,0,0,0; the 10 is just there to speed up convergence. To “observe” the convergence you can run it with the -v flag, which should give you a summary of the SSD after each iteration. On well-behaved data that measure remains essentially constant from iteration 3 onward. You want to see that it doesn’t still decrease appreciably towards the later iterations (8-10 in your case).
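One way to eyeball that levelling-off is to pull the SSD lines out of the verbose log. The exact wording of the log lines below is invented for illustration (eddy's real verbose output may differ), so adapt the grep pattern to whatever your log actually prints:

```shell
# Hypothetical verbose log (the "Iter: N, SSD = ..." wording is an
# assumption for illustration only)
cat > eddy_verbose.log <<'EOF'
Iter: 1, SSD = 1523.7
Iter: 2, SSD = 1101.2
Iter: 3, SSD = 1098.5
Iter: 4, SSD = 1098.3
Iter: 5, SSD = 1098.3
EOF

# Look at the last few iterations: the SSD should be essentially flat,
# not still decreasing appreciably
grep 'SSD' eddy_verbose.log | tail -3
```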
>
> The SSE image from dtifit is mentioned as a way of detecting artefacts; is that useful for detecting motion misalignments as well?
I would say it is of some use. You can also look at the individual tensor component maps and look for “strange” behaviour towards the edges.
Jesper
>
> Any help on this issue would be most appreciated.
>
> best
>
> nick