Hi Mark,
Thanks for your detailed response. I think you make a very good point. The truth is, I'm not exactly sure smoothing is the best way to approach the situation; it just seemed like a sensible choice. The dataset consists of a couple hundred tumor (GBM) patients, so a (semi-)automated approach would be preferred. The noise level varies between studies, but some are definitely noisy. The overall objective is to prepare the data for tumor segmentation. Therefore, any pre-processing step that sharpens or emphasizes the tumor-brain interface is helpful. Moreover, any step that smooths normal brain tissue outside of the tumor areas is also very helpful. The former is especially important for the T2 and FLAIR sequences, where the transition to normal brain can be very gradual. Any suggestions are very welcome, thanks!
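For what it's worth, your suggestion below of picking candidate GM and WM points by hand and using their intensity difference as the brightness threshold could be sketched roughly like this (all coordinates/intensities here are hypothetical placeholders, and the dt value is just an example):

```python
# Rough sketch: derive SUSAN's brightness threshold (bt) from
# hand-picked white-matter and grey-matter intensity samples.
# All numbers below are hypothetical placeholders.

# Intensities read off a few hand-picked voxels (e.g. inspected
# in a viewer or queried with fslstats on small point ROIs)
wm_candidates = [520.0, 540.0, 515.0]   # white-matter samples
gm_candidates = [390.0, 410.0, 400.0]   # grey-matter samples

mean_wm = sum(wm_candidates) / len(wm_candidates)
mean_gm = sum(gm_candidates) / len(gm_candidates)

# Brightness threshold: the GM/WM intensity difference
bt = abs(mean_wm - mean_gm)

# Assemble a susan call with the settings recommended for noisy,
# thick-slice clinical data: 2D smoothing (dim=2), use_median=1,
# n_usans=0. dt (spatial size in mm) is a placeholder value here.
dt = 2.0
cmd = f"susan input.nii.gz {bt:g} {dt:g} 2 1 0 input_susan.nii.gz"
print(cmd)
```

The printed command could then be run per patient and per sequence once the bt for that image has been worked out.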
Best,
Floris
On Apr 16, 2014, at 2:01 AM, Mark Jenkinson <[log in to unmask]> wrote:
> Hi,
>
> If you have substantial tumors in these brains then getting FAST to work reliably will be difficult. You would need to create a lesion mask and exclude the lesion from the brain mask in general (or you might try using 4 classes instead of 3, but I find that this has mixed success). If you need to perform noise reduction then I suspect that FAST will also struggle because of the noise. It would be easier to select appropriate intensities by hand, just by picking some candidate GM and WM points and then working out the difference between them.
>
> One important question is why do you want to do smoothing? We normally do not use such smoothing in our analyses, especially structural analyses. What is your overall objective with this data?
>
> If you really do need to do smoothing then I'd suggest choosing the brightness threshold manually (as described above), using 2D mode if the slices are very thick (which is common in clinical data), using 1 for the use_median option (as this will help with very noisy data) and using 0 for the n_usans option (as you are smoothing a single image). The use of 3D or 2D dimensionality has nothing to do with whether the image was acquired with a 2D or 3D mode of acquisition.
>
> All the best,
> Mark
>
>
> On 15 Apr 2014, at 21:28, Floris Barthel <[log in to unmask]> wrote:
>
>> Dear list members,
>>
>> I am very new to FSL so please bear with me. I've performed a search on SUSAN and read all the previous posts that came up but I haven't been able to answer my questions.
>>
>> I'm trying to pre-process a large brain tumor MRI dataset for segmentation purposes. The data comes from 6 different centers, and an even larger number of acquisition protocols has been used. All patients have at least three of T1, T1 post-contrast, FLAIR and T2 sequences.
>>
>> I would like to perform automated SUSAN noise reduction on all of the images; however, I'm having trouble finding the optimal parameters.
>>
>> (1) How can I find the ideal value for the brightness threshold for each patient and each sequence? I've read in an older post the suggestion of using FAST to calculate the mean gray/white matter tissue intensities but I'm unsure how to go about this.
>>
>> (2) Should I input 3D or 2D dimensionality? I have converted all single slice/image DICOM files to 3D NIfTI files using MRIcron dcm2nii. However, while some of the patients originally had high-resolution 3D FSPGR T1-weighted sequences, many did not. Moreover, the T2 and FLAIR sequences were never 3D.
>>
>> (3) For the last two options (use_median and n_usans), I really don't know what to pick.
>>
>> Thanks for any help.
>>
>> Best,
>> Floris