Hello. I have some questions about bedpostX.

I have data with the following properties:

a rhesus macaque ex-vivo sample.

brain stored in Fomblin for ~60 weeks (moved to Fomblin 30 days after fixation).

scanned with ten b0 volumes and 120 diffusion-weighted volumes (60 gradient directions, each acquired in both polarities) on a 7T small-bore Bruker, at 250 µm (0.25 mm) isotropic voxel resolution. Scans were acquired twice for averaging.

TE = 31 ms
GM T2 in the range 14-20 ms
WM T2 in the range 28-35 ms

Clearly, SNR in GM is quite low in this aged sample; it is perhaps acceptable in WM.
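
(To put a rough number on that: the plain T2 attenuation factor exp(-TE/T2) at TE = 31 ms works out to about 0.11-0.21 for GM (T2 = 14-20 ms) versus about 0.33-0.41 for WM (T2 = 28-35 ms), so the GM signal is substantially more attenuated than WM before any diffusion weighting.)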

I have three questions:

1) Given the investment in computation time, what would be a sensible bedpostX burn-in for this data? 20,000? Is it possible for a burn-in to be too long (e.g. some overfitting issue)? (My planned invocation is sketched after the questions below.)

2) How can I tell whether bedpostX has failed to converge for some or all of the data? (The kind of check I had in mind is also sketched below.)

3) I also have another set, identical in protocol to this one except that it has only 30 gradient directions; however, it was acquired six weeks after perfusion, so SNR is around 50% better in GM and 20% better in WM.

We acquired the 60-gradient set because we thought 60 directions would suit bedpostX better than 30. Of course I was planning to compare both, but is the 30-gradient set simply going to give a superior result because of the SNR, under all circumstances?

Or were we correct that 60 directions are going to help a lot in finding good distributions with bedpostX?

The WM SNR only dropped by around 20%, after all.

I expect it is an empirical matter of testing against known anatomy; the sketches below show roughly what I plan to run.
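
For question 1, this is roughly the invocation I have in mind (a Python/subprocess sketch only because that is how I script things; the directory name is a placeholder, and the option meanings for -n, -b, -j and -s are my reading of the bedpostx usage text, so please correct me if I have any of them wrong):

import subprocess

# Placeholder subject directory; expects data, bvals, bvecs and
# nodif_brain_mask inside, as described in the bedpostx usage text.
subj_dir = "macaque_60dir"

subprocess.run(
    ["bedpostx", subj_dir,
     "-n", "3",        # fibres per voxel
     "-b", "20000",    # the burn-in I am considering
     "-j", "1250",     # jumps after burn-in
     "-s", "25"],      # keep every 25th sample
    check=True,
)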
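
For question 2, the only check I could think of is a crude per-voxel drift test on the saved f1 samples, along these lines (assuming the usual <subject>.bedpostX output naming, merged_f1samples.nii.gz and nodif_brain_mask.nii.gz; the 0.5 SD cut-off is entirely arbitrary, just something to eyeball):

import numpy as np
import nibabel as nib

bpx = "macaque_60dir.bedpostX"                                   # placeholder path
samp = nib.load(f"{bpx}/merged_f1samples.nii.gz").get_fdata()    # X, Y, Z, n_samples
mask = nib.load(f"{bpx}/nodif_brain_mask.nii.gz").get_fdata() > 0

# Compare the mean of the first and second halves of each voxel's sample chain;
# large drift relative to the sample SD would suggest the chain was still moving
# when sampling started.
half = samp.shape[-1] // 2
drift = np.abs(samp[..., :half].mean(axis=-1) - samp[..., half:].mean(axis=-1))
suspect = (drift > 0.5 * (samp.std(axis=-1) + 1e-6)) & mask      # arbitrary cut-off

print(f"{int(suspect.sum())} of {int(mask.sum())} brain voxels show large f1 drift")

Is something like this a sensible diagnostic, or is there a better way to spot non-convergence?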
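
For question 3, the comparison I was planning is the angular difference between the principal fibre orientations (dyads1) from the two runs inside a WM mask, checked against known macaque anatomy. The paths and the WM mask below are placeholders, and I am assuming both runs are in the same space (same sample, same scanner), so there is no registration step here:

import numpy as np
import nibabel as nib

d60 = nib.load("macaque_60dir.bedpostX/dyads1.nii.gz").get_fdata()   # X, Y, Z, 3 unit vectors
d30 = nib.load("macaque_30dir.bedpostX/dyads1.nii.gz").get_fdata()
wm  = nib.load("wm_mask.nii.gz").get_fdata() > 0                     # placeholder WM mask

# Sign-invariant angle between the principal orientations from the two runs.
cosang = np.abs(np.sum(d60 * d30, axis=-1))
angles = np.degrees(np.arccos(np.clip(cosang, 0.0, 1.0)))
print(f"median angular difference in WM: {np.median(angles[wm]):.1f} degrees")

That, plus visually checking the dyads against known anatomy, was the plan.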

thanks,

Colin
Sackler Centre
U. Sussex