Hi Stefan,
The problem has no easy answer, I'm afraid.
It isn't something that we've really come across very much, although
we are now thinking about ways of incorporating this better in
future releases.
There are three potential things that you could do.
One is to do what David suggests and just make the masks bigger (e.g.
by lowering the %brain/background threshold), but this may include
voxels that do not contain useful signal and hence bias your results,
although with such a high N this might not be too bad.
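If you want to gauge how far a more liberal threshold pushes you into
low-signal territory, something like the following might help - a
rough, untested sketch in Python with nibabel and numpy (the mask
paths are just placeholders for your own directory layout) that
counts, at each voxel, how many subjects actually have data:

import glob
import nibabel as nib
import numpy as np

# Placeholder paths - point these at your cross-run mask files.
mask_files = sorted(glob.glob('sub*.gfeat/cope1.feat/mask.nii.gz'))
first = nib.load(mask_files[0])

# Per-voxel count of subjects with data; the strict intersection that
# FEAT uses keeps only voxels where counts == len(mask_files).
counts = np.zeros(first.shape, dtype=np.int32)
for f in mask_files:
    counts += (nib.load(f).get_fdata() > 0).astype(np.int32)

nib.save(nib.Nifti1Image(counts, first.affine), 'subject_counts.nii.gz')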
The second thing to do is to include some voxel-specific regressors
which model out the effects of the missing data points. You would
also need to enlarge the masks to get these voxels included, and you
would need one regressor for each number of missing subjects that
you wanted to reinstate. For example, to include all voxels where
only one subject was missing, you would need one voxel-wise regressor
containing zeros at all timepoints except a single 1 at the timepoint
with the missing data. This doesn't have to be the same subject in
each voxel - that is the advantage of voxel-wise regressors - so you
would have certain voxels containing an all-zero regressor (if they
had data for all timepoints) and others with a single 1 in the slot
corresponding to the subject that was missing *for that voxel*. Note
that at this level each timepoint corresponds to a subject.
This is a mathematically nicer solution but is practically a little
difficult to do, and if you wanted to include voxels where two
subjects were missing then you'd need another voxel-wise regressor,
and so on. Therefore, if you wanted to include voxels where you had
N=300, 299, 298, ... or 270, you'd need to make 30 voxel-wise
regressors and include them all, which is cumbersome and may slow
the analysis down a fair bit.
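If you did want to try this, here is a rough, untested sketch (Python
with nibabel and numpy; the paths and the output filename are my own
placeholders) of how you might build the single regressor for voxels
where exactly one subject is missing, as a 4D image with one volume
per group-level timepoint:

import glob
import nibabel as nib
import numpy as np

# Placeholder paths - point these at your cross-run mask files.
mask_files = sorted(glob.glob('sub*.gfeat/cope1.feat/mask.nii.gz'))
first = nib.load(mask_files[0])
n_sub = len(mask_files)

# present[..., t] is True where subject t has data at that voxel.
present = np.stack([nib.load(f).get_fdata() > 0 for f in mask_files],
                   axis=-1)
missing = n_sub - present.sum(axis=-1)

# One 4D regressor covers voxels with exactly one missing subject:
# a single 1 at the missing subject's timepoint, zeros elsewhere.
# Voxels missing two subjects would need a second regressor, etc.
reg = np.zeros(first.shape + (n_sub,), dtype=np.float32)
one_missing = (missing == 1)
for t in range(n_sub):
    reg[..., t][one_missing & ~present[..., t]] = 1.0

nib.save(nib.Nifti1Image(reg, first.affine),
         'voxelwise_ev_one_missing.nii.gz')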
The third solution would be to use the outlier detection mechanism
to do the work for you. This again requires enlarging the masks for
each subject (to include all the voxels you want - and it may be
better to do this by directly replacing the mask files in
reg_standard with a common mask). Once you've done this you would
need to replace the values in the cope images (again in reg_standard)
which were previously outside the mask (and hence would currently
be zero) with values which are clearly outliers. You should be able
to work out a sensible outlier value by looking at your current
analyses and seeing how much the valid copes vary. If you do this,
the outlier detection should hopefully identify these subjects as
outliers in those voxels and hence effectively remove them from
the analysis. I must say that I haven't tried this solution (or, in
fact, the previous one) but in principle it should work.
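As a rough, untested sketch of that cope-editing step (Python with
nibabel and numpy again; the outlier value, the common-mask filename
and the directory layout are all placeholders that you would need to
adapt to your own setup):

import glob
import nibabel as nib
import numpy as np

OUTLIER_VALUE = 1.0e6  # placeholder - pick something well outside
                       # the spread of your valid cope values

# Assumed: you have already written an enlarged common mask over
# every subject's reg_standard mask file.
common_mask = nib.load('common_mask.nii.gz').get_fdata() > 0

for featdir in sorted(glob.glob('sub*.feat')):  # placeholder layout
    cope_path = featdir + '/reg_standard/stats/cope1.nii.gz'
    img = nib.load(cope_path)
    cope = img.get_fdata().astype(np.float32)
    # Inside the enlarged mask, voxels this subject never covered are
    # still exactly zero - overwrite them with a gross outlier value
    # so the outlier de-weighting discards them at those voxels.
    cope[common_mask & (cope == 0)] = OUTLIER_VALUE
    nib.save(nib.Nifti1Image(cope, img.affine), cope_path)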
Sorry I haven't got a nicer and easier solution for you.
All the best,
Mark
On 14 May 2009, at 04:10, David V. Smith wrote:
> Hi Stefan,
>
> Which version of FSL are you using? I believe 4.0 was a bit too
> conservative when generating the masks (see previous posts on this
> matter), but the newer versions don’t really have a problem with this.
>
> But, regardless of what version you’re using, once a voxel is gone,
> it’s gone for everyone. So, even if you have just one bad mask in
> your dataset, it will mess up the others.
>
> I think your approach to overcoming this (i.e., lowering the %brain/
> background threshold) is fine. Just make sure you do it for everyone.
>
> Hope this helps,
> David
>
>
> --
> David V. Smith
> Graduate Student, Huettel Lab
> Center for Cognitive Neuroscience
> Duke University
> Box 90999
> Durham, NC 27708
> Lab phone: (919) 668-3635
> Lab website: http://www.duke.edu/web/mind/level2/faculty/huettel/
>
> From: FSL - FMRIB's Software Library [mailto:[log in to unmask]] On
> Behalf Of Stefan Ehrlich
> Sent: Wednesday, May 13, 2009 7:52 PM
> To: [log in to unmask]
> Subject: [FSL] problems due to concatenated masks in GFEAT of a very
> large study - 3rd posting - no answer?
>
> Dear FSL'ers,
>
>
>
> I had posted this earlier; this is the 3rd posting. It would be
> *very* helpful if someone could give me some advice on it. Please
> also let me know if the problem is not described clearly enough or
> if there is no easy answer.
>
>
>
> I am running FEAT on a large fMRI dataset (n > 300). Each subject
> has 3 runs consisting of 16 blocks of a memory paradigm. After
> hundreds of hours of computing I have processed all first- and
> second-level analyses (from here on referred to as cross-runs). I
> have checked the registrations and the individual as well as
> cross-run activation maps (retrieval versus fixation) for a subset
> of subjects, and all seems fine. As a next step I ran a
> “Single-Group Average” over all 300 cross-runs just to get an
> impression of the overall activation patterns. The results were in
> line with previous studies, but in the top slices of the brain
> (horizontal slices), as well as in a rim covering the parts of the
> brain closest to the skull, there was no activation whatsoever. I
> found out that this was probably due to the mask.nii of the gfeat.
> This group mask did not include the top slices or the outer rim.
>
> Subsequently I went back, checked all cross-run mask.nii files and
> identified a few masks which were missing several top slices
> (due to bad positioning in the scanner, I guess). After deleting
> these subjects from the overall analysis my final results and the
> “Single-Group Average” mask look much better. However, the outer
> rim is still missing.
>
> It seems like FEAT concatenates all cross-run masks and does not
> include voxels which have a missing value in any single mask. Vince
> Calhoun told me on the phone that this might be due to the fact that
> FSL smoothes relatively late in the processing stream (in contrast
> to SPM). Consequently I went back, changed the brain/background
> threshold (brain_thresh) from 10 to 1, and reran a few subjects
> which had slightly impaired cross-run masks. With the new threshold
> more voxels get included. Do you think that is an appropriate
> approach? Has anybody experienced this problem before? Are there
> other solutions?
>
>
>
> Thank you so much for your thoughts!
>
>
>
> Stefan