Subject: Re: snpm5
From: Roberto Viviani <[log in to unmask]>
Reply-To: [log in to unmask]
Date: Tue, 1 Apr 2008 14:45:22 +0200
Content-Type: text/plain
Hello,
may I ask what is permuted: the explanatory variable of interest
alone, or the variable of interest together with the nuisance
covariates? I would think that only the variable of interest should be
permuted, but I would be interested to hear whether there is a
rationale for permuting the variable of interest and the covariates
together, or whether anyone has strong opinions either way.
We are currently using permutation tests in the statistical analysis
of our data and permute the variable of interest only, so my question
also has practical relevance. If no one objects, we'll keep using
our approach.
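For concreteness, here is a minimal sketch of our current approach — shuffle only the variable of interest, keep the nuisance covariate (and the data) fixed, and rebuild the null distribution of the t-statistic. This is an illustrative toy in Python/NumPy, not our actual SPM pipeline; all names and the simulated data are invented:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: y = response, x = variable of interest, z = nuisance covariate.
n = 40
z = rng.normal(size=n)                   # nuisance covariate (never permuted)
x = rng.normal(size=n)                   # variable of interest (permuted)
y = 0.5 * x + 0.8 * z + rng.normal(size=n)

def t_of_interest(y, x, z):
    """t-statistic for the x regressor in the OLS model y ~ 1 + x + z."""
    X = np.column_stack([np.ones_like(y), x, z])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    dof = len(y) - X.shape[1]
    sigma2 = resid @ resid / dof
    cov = sigma2 * np.linalg.inv(X.T @ X)
    return beta[1] / np.sqrt(cov[1, 1])

t_obs = t_of_interest(y, x, z)

# Null distribution: shuffle only x; y and z stay in place.
n_perm = 1000
t_null = np.array([t_of_interest(y, rng.permutation(x), z)
                   for _ in range(n_perm)])

# Two-sided permutation p-value (with the +1 correction).
p = (1 + np.sum(np.abs(t_null) >= abs(t_obs))) / (n_perm + 1)
```

Permuting x and z jointly would instead amount to shuffling whole rows of the design against y, which tests a different (joint) null — which is exactly why I am asking.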
Thanks a lot,
Roberto Viviani
Dept. of Psychiatry
University of Ulm, Germany
Quoting Thomas Nichols <[log in to unmask]>:
> Dear Johanna,
>
> No, currently the paired t-test plug-in in SnPM doesn't allow for nuisance
> covariates. However, a paired t-test is equivalent to a one-sample t-test
> on the pairwise differences, and the One Sample T-test plug-in *does* allow
> for nuisance (or "confounding") covariates.
>
> The attached script, PairDiff.m, will produce the differences if you don't
> already have them. Let me know if it works OK for you.
>
> Sorry for the trouble.
>
> -Tom
>
> On Mon, Mar 31, 2008 at 11:13 AM, Scheuerecker, Johanna <
> [log in to unmask]> wrote:
>
>> Dear SPMers,
>>
>> Does anyone know whether it is possible to run a paired t-test in SnPM5
>> with one covariate of interest?
>>
>>
>> Thanks a lot,
>>
>> Johanna
>>
>
>
>
> --
> ____________________________________________
> Thomas Nichols, PhD
> Director, Modelling & Genetics
> GlaxoSmithKline Clinical Imaging Centre
>
> Senior Research Fellow
> Oxford University FMRIB Centre
>