Hi,
On 14 Jan 2009, at 17:46, Michelle W. Voss wrote:
> Hello,
>
> I've followed Jeanette's suggestion to obtain correlation
> coefficients between a seed ROI and the rest of the brain as my
> copes at the first level. I then ran a fixed effects analysis to
> combine several sessions from each subject, where each session
> combined had been treated the same in regard to seed voxel and
> rescaling copes and varcopes. Now I have a single fixed-
> effects .gfeat for each subject based on the seeding analysis of the
> runs I'm interested in. Next, I have two time-points so I run a
> second fixed-effects analysis to compare the cope.feat of interest
> from the previous .gfeat and run the time comparison. Last, I run a
> mixed-effects group analysis to compare the change in connectivity
> to the seed ROI for one group vs another.
>
> My questions: 1) does this sound like a good progression for
> looking at longitudinal effects?
I think this sounds good, yes.
> 2) What is the best way to threshold these final stats? Since I
> rescaled the copes and varcopes at the first level, the subsequent
> zstats and tstats are overestimates, no? Since the final comparison
> is a comparison of correlation maps, I'm wondering what your
> thoughts are on reasonable thresholding.
I would think that by the time you do the highest-level cross-subject
mixed-effects analysis the inputs are pretty much Gaussian distributed,
and so the default thresholding carried out by FEAT at the highest
level should be pretty much fine as it is. As long as your highest-
level activation maps (zstats) don't contain massive amounts of
activation, you can do a sanity check by confirming that their
histograms look largely like a zero-centred, unit-standard-deviation
Gaussian.
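That histogram check is easy to script. Here is a rough sketch in Python/numpy (the helper name and tolerances are made up for illustration; in practice you would flatten a real zstat image, e.g. loaded with nibabel, and mask out non-brain voxels first):

```python
# Sanity check: a highest-level zstat map with little true activation
# should look roughly like a zero-mean, unit-SD Gaussian.
import numpy as np

def zstat_looks_gaussian(z, mean_tol=0.15, sd_tol=0.15):
    """Crude check that a vector of z-values is near N(0, 1)."""
    z = np.asarray(z, dtype=float)
    return abs(z.mean()) < mean_tol and abs(z.std(ddof=0) - 1.0) < sd_tol

# Here we simulate a null z-map; with real data, substitute the masked
# voxel values from your group-level zstat image.
rng = np.random.default_rng(0)
null_map = rng.standard_normal(50000)
print(zstat_looks_gaussian(null_map))  # a null map should pass
```

If the map contains a lot of genuine activation, the mean and SD of the histogram will be pulled away from (0, 1), so this crude check is only meaningful when activation is sparse.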
Cheers.
>
>
> many thanks,
> Michelle
>
>
> On Fri, Jan 25, 2008 at 1:01 PM, Phil Reiss <[log in to unmask]>
> wrote:
> Hello,
>
> I am interested in comparing two groups of subjects in terms of
> functional connectivity with
> respect to a seed voxel X. Evidently it would be in line with
> common practice to perform this
> comparison by
> 1) running a general linear model with the voxel X time series as a
> predictor,
> and
> 2) testing a 2nd-level contrast representing the difference between
> the groups' average
> coefficients for that predictor.
>
> However, this approach, by substituting regression coefficients for
> correlations as the measure of
> functional connectivity, appears to run into a serious problem: if,
> say, the model finds a between-
> groups difference at voxel Y, this could mean either that the groups
> differ in terms of X-Y
> correlation (i.e. the type of difference we're interested in), *or*
> that the groups differ in terms
> of the amount of signal at Y (but not in terms of X-Y correlation).
>
> It seems to me that this problem could be removed by scaling each
> voxel's time series (including
> that of the seed voxel) to a common variance, which would more or
> less eliminate the difference
> between regression coefficients and correlations. But I'm very
> inexperienced with FSL, so I'd
> appreciate it very much if anyone could comment on either my
> diagnosis of the problem or my
> proposed solution.
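The equivalence behind the proposed solution is easy to verify numerically: once both time series are scaled to unit variance, the ordinary least-squares slope of Y on X is exactly the Pearson correlation, whereas the raw slope also absorbs the amount of signal at Y. A small numpy sketch (illustrative only, not FSL code; the variable names are made up):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200
x = rng.standard_normal(n)                  # "seed voxel" time series
y = 0.5 * x + rng.standard_normal(n) * 3.0  # "target voxel": correlated, but noisy

def ols_slope(x, y):
    """Slope of y regressed on x (with intercept)."""
    xc, yc = x - x.mean(), y - y.mean()
    return (xc @ yc) / (xc @ xc)

r = np.corrcoef(x, y)[0, 1]

# The raw slope depends on how much signal there is at y...
slope_raw = ols_slope(x, y)
# ...but after scaling each series to unit variance, the slope IS the
# correlation: slope_raw differs from r by exactly the factor sd(y)/sd(x).
slope_scaled = ols_slope(x / x.std(), y / y.std())

print(slope_scaled, r)  # these two agree
```

So a between-group difference in raw slopes could reflect either a correlation difference or a signal-amplitude difference at Y; after variance scaling, only the correlation difference remains.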
>
> Thanks very much!
>
> Regards,
> Phil Reiss
>
> P.S. My much more FSL-savvy colleague Jeanette Mumford has proposed
> the procedure below to
> implement the above suggestion. If anyone has any thoughts on this
> procedure's feasibility, or if
> anyone has tried anything similar, I'd very much appreciate hearing
> about it.
>
> -----------------------------
> [from JM:]
>
> First, rescale the seed voxel time series in the design matrix
> (divide it by its standard deviation), then estimate the first-level
> model the usual way. Before feeding the first-level copes into the
> second level, you could first copy the original copes and varcopes
> under a different name, and then create the properly weighted
> cope/varcope images and save them using the original cope/varcope
> number. The reason you'd have to be sneaky is that I think there are
> more than just the cope/varcope files that the next level of FEAT
> will be looking for.
>
> 1) Create the weighting image:
>    avwmaths dir.feat/filtered_func_data -Tstd sd_filtered_func_data
> 2) Copy the copes and varcopes:
>    cp cope# cope#_copy
> 3) Create the new cope/varcope:
>    avwmaths cope#_copy -div ../sd_filtered_func_data cope#
>    avwmaths varcope#_copy -div ../sd_filtered_func_data \
>      -div ../sd_filtered_func_data varcope#
>
> I think simply dividing twice for the varcope should do the trick.
> Oh, if you're using the newest
> fsl, then avwmaths is fslmaths.
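One reassuring property of dividing the cope by the SD once and the varcope by the SD twice (i.e. by SD squared) is that it rescales the effect size without changing the voxelwise t-statistic, since t = cope / sqrt(varcope). A quick numpy illustration (not FSL code; the arrays are simulated stand-ins for the cope, varcope, and sd_filtered_func_data images):

```python
import numpy as np

rng = np.random.default_rng(1)
n_voxels = 1000
cope = rng.standard_normal(n_voxels)
varcope = rng.uniform(0.5, 2.0, n_voxels)
sd = rng.uniform(0.8, 1.5, n_voxels)   # stand-in for sd_filtered_func_data

# Mirror of the avwmaths steps: cope / sd, and varcope / sd / sd
cope_new = cope / sd
varcope_new = varcope / sd / sd

t_before = cope / np.sqrt(varcope)
t_after = cope_new / np.sqrt(varcope_new)

print(np.allclose(t_before, t_after))  # True: t-stats are unchanged
```

The copes themselves do change, which is the whole point: the rescaled copes are comparable across subjects and sessions as correlation-like quantities, while the within-subject inference is untouched.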
>
> I've never tried switching out the original cope/varcope files
> before, but it seems like it would
> work.
> ------------------------------
> Philip Reiss, Ph.D.
> Associate Research Scientist
> New York University Child Study Center
> 215 Lexington Ave., 16th floor
> New York, NY 10016
> phone: 212-263-3669
> fax: 212-263-2476
> e-mail: [log in to unmask]
>
---------------------------------------------------------------------------
Stephen M. Smith, Professor of Biomedical Engineering
Associate Director, Oxford University FMRIB Centre
FMRIB, JR Hospital, Headington, Oxford OX3 9DU, UK
+44 (0) 1865 222726 (fax 222717)
[log in to unmask] http://www.fmrib.ox.ac.uk/~steve
---------------------------------------------------------------------------