Dear FSL users,
I'm using FEAT to analyse a task-related ASL dataset, with full perfusion signal modelling as described on the FSL website. The normalized, thresholded results look fine... except that there is a lot of activity outside the brain. I looked at the mask being used (in the .feat directory), and it's a good deal larger than the brain in the ASL images.
I searched the FSL archives and found information on changing the brain/background threshold % in the Misc tab, so I experimented with it, raising it to 30%, but saw no major improvement in the mask size. From my understanding, the mask is thresholded on the conservative side to avoid losing any "brain," and normally, with BOLD data, most extra-brain voxels are excluded from the mask due to the tissue properties outside the brain. However, how does this work with ASL data?
I also tried using/not using BET in the pre-stats, but it doesn't seem to make a big difference. What I'd like to do is increase the threshold on BET so that the mask is smaller, but I'm not sure 1) whether that's the right thing to do in this case, and 2) how to do it.
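To clarify what I have in mind: something like running BET manually with a higher fractional intensity threshold before feeding the data to FEAT. The filenames below are just placeholders, and I'm not certain 0.7 is a sensible value for ASL data:

```shell
# Run BET with a higher fractional intensity threshold (-f; default is 0.5,
# and larger values give smaller brain estimates), and output a binary
# brain mask (-m). "asl_data" and "asl_brain" are placeholder filenames.
bet asl_data asl_brain -f 0.7 -m
```

Would a command along these lines be the right approach here, or does FEAT override the mask internally?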
Any help is greatly appreciated!
Thank you,
Lei