Dear SnPM developers and users:
I posted a previous question about a memory problem I ran into while
using SnPM. That problem was solved by switching to the slice-by-slice
computation mode. However, there is no similar option when doing
variance smoothing (pseudo-t). I have 13 subjects, and each scan has
dimensions 199 x 199 x 199. How much memory is needed for this
calculation, and is there a good way to work around the memory issue?
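For reference, here is a rough back-of-envelope estimate of the data size involved. This is only a sketch under the assumption that the volumes are held in memory as double-precision floats; actual SnPM memory use will be higher, since variance smoothing needs additional whole-volume workspace, so treat this as a lower bound.

```python
# Back-of-envelope memory estimate for the scenario in the question.
# Assumption: volumes stored as float64 (8 bytes/voxel); real SnPM usage
# depends on the version, masking, and smoothing workspace.
dim = (199, 199, 199)   # scan dimensions from the question
n_subjects = 13
bytes_per_voxel = 8     # float64

voxels = dim[0] * dim[1] * dim[2]
per_subject_mib = voxels * bytes_per_voxel / 2**20
all_subjects_gib = voxels * bytes_per_voxel * n_subjects / 2**30

print(f"{voxels} voxels per volume")                         # 7880599
print(f"{per_subject_mib:.1f} MiB per subject volume")       # ~60.1 MiB
print(f"{all_subjects_gib:.2f} GiB for all {n_subjects} volumes")  # ~0.76 GiB
```

So just holding all 13 raw volumes takes roughly 0.8 GiB, and variance smoothing over the whole volume will need several such arrays at once, which may explain the memory pressure.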
Thanks!
Xue