Your posting made me think of another potential solution -- at least
in the short term. You could leverage the power of WFU_BPM.
You could create an ANOVA design or a one-sample t-test, and then
create N imaging covariates, where each covariate is all 0s except at
the locations where subject n has bad data. Then you could run your
analysis.
This would have the effect of excluding certain subjects at certain
voxels. Just make sure you don't have NaNs at the bad locations in
your DV; converting them to a sentinel value such as -100 should work.
Those sentinel values get absorbed by the imaging covariates, since
only one subject has non-zero values in each covariate.
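To see why this works, here is a minimal numpy sketch of the trick at a
single voxel (the data values, subject count, and bad-subject index are
made up for illustration; WFU_BPM would apply the same logic voxel by
voxel across the covariate images):

```python
import numpy as np

# One-sample t-test across 6 subjects at a single voxel; subject 3
# (index 2) has bad data that was replaced by the sentinel value -100.
y = np.array([1.2, 0.8, -100.0, 1.1, 0.9, 1.0])
n = len(y)
bad = 2  # index of the subject with bad data at this voxel

# Design matrix: a column of ones (the group mean) plus one covariate
# that is zero everywhere except for the bad subject.
X = np.column_stack([np.ones(n), (np.arange(n) == bad).astype(float)])

# Ordinary least squares fit.
beta, _, _, _ = np.linalg.lstsq(X, y, rcond=None)

# The covariate absorbs the sentinel value exactly, so the estimated
# group mean equals the mean of the remaining (good) subjects, and the
# residual degrees of freedom drop from n-1 to n-2.
good_mean = y[np.arange(n) != bad].mean()
print(beta[0], good_mean)  # these agree
```

The key point is that the dummy covariate soaks up the bad subject's
value entirely, so the group-mean estimate is unaffected by whatever
sentinel you chose, and the lost degree of freedom is accounted for
automatically.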
Best Regards, Donald McLaren
=================
D.G. McLaren, Ph.D.
Postdoctoral Research Fellow, GRECC, Bedford VA
Research Fellow, Department of Neurology, Massachusetts General
Hospital and Harvard Medical School
Office: (773) 406-2464
=====================
On Thu, May 5, 2011 at 2:42 PM, Jonathan Peelle <[log in to unmask]> wrote:
> Dear Bob,
>
>> From what I understand, during second-level estimation (e.g., one-sample
>> t-test) SPM performs the test only for voxels in which ALL subjects have
>> data. In areas of the brain in which signal quality is highly variable from
>> subject-to-subject (e.g., high susceptibility areas such as ventral frontal
>> and temporal), this procedure is quite problematic, especially for large
> samples. Has anyone customized the SPM algorithm to bypass the all-or-none
> exclusion procedure? I imagine this would also require producing a
> degrees-of-freedom image (e.g., to use when reporting statistics).
>
> I'm not aware of anyone who has dealt with this, although some folks
> (like Donald) are working on solving this in an elegant fashion.
>
> In the meantime, I imagine you could get around this by adjusting the
> masking threshold SPM uses at the first level so that it is less
> restrictive about which voxels it includes in the analysis; see, for example:
>
> https://www.jiscmail.ac.uk/cgi-bin/webadmin?A2=ind1104&L=SPM&P=R65987
>
> If there are subjects for whom you don't think you have useful data,
> you could include additional regressors in your second level design to
> remove their contribution, which should also appropriately adjust the
> degrees of freedom (although this would obviously get tricky if it
> varied across voxels/regions that you are interested in). If there
> are specific areas you care about, you could also extract the values
> and do statistics outside of SPM, which may offer you some additional
> flexibility in how you model things.
>
> Best regards,
> Jonathan
>
> --
> Dr. Jonathan Peelle
> Department of Neurology
> University of Pennsylvania
> 3 West Gates
> 3400 Spruce Street
> Philadelphia, PA 19104
> USA
> http://jonathanpeelle.net/
>