Dear FSLers:
This is regarding intensity variation in our DTI scan. We acquired
2.5x2.5x2.5 mm data (40 slices, b=0 and b=1250, 12 directions) on a
Siemens 3.0 T scanner. After running the FDT pipeline and computing ADC
and FA maps, we noticed a small reduction in ADC near the frontal lobe.
The intensity of the b=0 image also suggests a reduction in the same
area. I assumed this kind of intensity variation was due to a bias field
originating from inhomogeneity of the B0 field, so I tried to estimate
the bias field with FAST. I segmented the b=0 (T2-weighted) image into 3
compartments with the bias-field option selected. As I understand it,
the bias-field image represents a correction factor b such that
I(observed) = I(ideal) * b(correction), as stated in eq. 20 of the FAST
technical note. To correct for the bias field, I multiplied BOTH the b=0
and b=1250 images by the b(correction) image obtained from FAST, and
reran the FDT pipeline.
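For reference, here is a tiny NumPy sketch of the multiplicative model I
quoted from eq. 20 (purely illustrative; the array shapes and values are
made up, not taken from our data):

```python
import numpy as np

# Toy illustration of the multiplicative bias model from the FAST
# technical note (eq. 20): I(observed) = I(ideal) * b(correction).
rng = np.random.default_rng(0)
ideal = rng.uniform(100.0, 200.0, size=(8, 8))   # hypothetical "true" intensities
bias = np.linspace(0.8, 1.2, 64).reshape(8, 8)   # smooth multiplicative field b

observed = ideal * bias                          # what the scanner records

# Under this model, the ideal image is recovered by dividing out the field:
restored = observed / bias
assert np.allclose(restored, ideal)
```

(So whether one should multiply or divide by the FAST bias-field output
depends on which quantity FAST actually writes out; this sketch only
shows the algebra of the model as stated in the technical note.)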
I'm attaching ADC maps from before and after the bias-field correction.
I would appreciate it if someone could comment on this method. Is it
valid, OK but in need of more work, or definitely wrong?
Any suggestion would be appreciated.
Hedok