Hi Angelika,

Please see below:

On 18 December 2016 at 23:12, Mennecke, Angelika <Angelika.Mennecke@uk-
erlangen.de> wrote:

> Hi Anderson,
>
>
>
> thank you for your answer. Yes, I am concerned about the interactions,
> not about the “normal” probability of below 5% to get false positives.
> Thank you for clarifying.
>
>
>
> If those two contrasts (0 0 1 -1 and 0 0 -1 1, which you mentioned below)
> both do not show any significant voxel - is it then possible to proceed
> with *one* nuisance EV for both groups without separation?
>

Yes, that's fine, although ideally you'd pick one model from the outset and
stay with it. Testing multiple models can, by itself, increase the chance of
false positives. Although it is possible to correct for multiple designs
with permutation tests (e.g., in PALM), you don't actually have to merge
those two EVs if you had, from the outset, a hypothesis in which these could
vary independently.
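To see why trying several models inflates false positives, here is a toy Monte Carlo sketch in Python (purely illustrative, nothing FSL-specific; the two columns stand in for null test statistics from two candidate designs):

```python
import numpy as np

rng = np.random.default_rng(0)

# 100,000 simulated null "experiments", each analysed under two designs;
# the columns are the (here independent) null test statistics of the two models
z = rng.standard_normal((100_000, 2))

# sticking with one pre-specified model: rejection rate ~0.05
rate_one = np.mean(np.abs(z[:, 0]) > 1.96)

# declaring success if *either* model gives |z| > 1.96: rate ~0.10
rate_either = np.mean((np.abs(z) > 1.96).any(axis=1))
```

With correlated designs the inflation is smaller than this worst case, but it still sits above the nominal 0.05.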


> Or could *that* process yet lead to a spurious group difference (for
> example, if there are smaller, below-significance differences between the
> nuisance regressions of the two groups)?
>

It's difficult to say beyond what was noted above: the main risk here is
simply that of testing multiple designs unnecessarily.


>
>
> Within my study data, there is no significantly different
> nuisance-dependence between the groups (= everything far from significance
> in the contrast you mentioned below). When I proceed with *one* EV as
> nuisance parameter, there is a significant group difference in a huge
> cluster. However, when I proceed with *two* EVs (separated
> nuisance-parameter, separately demeaned) the group difference vanishes. Now
> I do not know if I “produce” false positives through the one-EV-analysis or
> probably reduce the statistical power through the two-EVs-analysis too much
> – in short, which one should I believe?
>
> It’s perhaps important to mention that my data is very dependent on the
> nuisance parameter (contrast 0 0 1 leads to one huge significant cluster
> all over the brain).
>

I think this is a bit of evidence that using two nuisance EVs explains the
data better. I would use this model.


>
>
> Probably there is something happening with the demeaning; if I do *not *demean
> separately (again) the two nuisance-EVs in the two-EVs-analysis, the group
> difference is still there…
>

There's no need for any demeaning (of any kind) here, as the intercept is
already modelled in the design (through EV1 and EV2).
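A small numpy sketch of this point (illustrative only, not FSL code): demeaning a per-group nuisance EV subtracts a multiple of that group's indicator column, which is already in the design, so the column space, and hence the fitted values and residuals, are unchanged; only the individual betas (and thus what a given group contrast evaluates) are reapportioned.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20
g1 = np.r_[np.ones(10), np.zeros(10)]   # EV1: group 1 indicator
g2 = 1.0 - g1                           # EV2: group 2 indicator
z = rng.standard_normal(n)              # a nuisance variable
z1, z2 = z * g1, z * g2                 # EV3, EV4: per-group nuisance

# the same nuisance EVs, demeaned separately within each group
z1d = z1 - g1 * z1[g1 == 1].mean()
z2d = z2 - g2 * z2[g2 == 1].mean()

y = rng.standard_normal(n)              # toy data for one voxel

def residuals(M, y):
    beta = np.linalg.lstsq(M, y, rcond=None)[0]
    return y - M @ beta

M_raw = np.column_stack([g1, g2, z1, z2])
M_dem = np.column_stack([g1, g2, z1d, z2d])
# residuals(M_raw, y) and residuals(M_dem, y) agree to machine precision
```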


>
>
> In most of the studies I have read, no one does a separate per-group
> nuisance analysis; is there a reason why not?
>

I haven't come across this either. If there is no reason to suspect the
interaction would be significant, a single nuisance EV can be used from the
start.

If still unsure, or if you really want to test multiple models, consider
correcting across these.
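For intuition on what correcting across designs buys, a Šidák-style toy example in Python (illustrative only; PALM's correction uses permutations and handles dependence between designs, but the principle of a stricter joint threshold is the same):

```python
import numpy as np

rng = np.random.default_rng(0)

# null test statistics for two candidate designs (independent here)
z = rng.standard_normal((100_000, 2))

# uncorrected: "significant in either design" at |z| > 1.96 -> ~0.10
rate_uncorrected = np.mean((np.abs(z) > 1.96).any(axis=1))

# Sidak-adjusted two-sided threshold for 2 tests, solving (2*Phi(t)-1)^2 = 0.95
t_sidak = 2.2365
rate_corrected = np.mean((np.abs(z) > t_sidak).any(axis=1))  # back to ~0.05
```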

All the best,

Anderson



>
>
>
>
> With best regards,
>
>
>
> Angelika
>
> *From:* FSL - FMRIB's Software Library [mailto:[log in to unmask]] *On
> behalf of* Anderson M. Winkler
> *Sent:* Sunday, 18 December 2016 15:23
> *To:* [log in to unmask]
> *Subject:* Re: [FSL] AW: [FSL] AW: [FSL] PALM - saving data with nuisance
> variables regressed out
>
>
>
> Hi Angelika,
>
>
>
> Please, see below:
>
>
>
> On 16 December 2016 at 12:05, Mennecke, Angelika <
> [log in to unmask]> wrote:
>
> Hi Anderson,
>
>
>
> thank you very much for your answers. I’ll try fsl_glm. The data without
> nuisance regression aren’t for statistics but only for calculation of the
> mean difference between the groups after nuisance regression.
>
>
>
> Could you please answer another follow-up question regarding the false
> positives after nuisance regression?
>
> If I
>
> 1. put everything in one big model (nuisance regression and group
> difference) as you stated, and
>
> 2. check that the correlation with the nuisance parameters isn't
> different between the groups (0 0 1 -1 and 0 0 -1 1 in .con, EV2 = nuisance
> group 1, EV3 = nuisance group 2), and
>
> 3. check with a t-test that the groups aren't different regarding the
> nuisance variable
>
>
>
> - am I protected from producing false positives in the group difference
> then, or is there possibly something left to check to prevent false
> positives? What about possible nonlinearity of the nuisance dependence?
>
>
>
> Nothing can prevent false positives. At best we control the number of
> false positives over the universe of all possible repetitions of such an
> experiment (or, under a different interpretation, the rate of false
> positives that appear as the experiment is repeated indefinitely).
>
>
>
> In the above it seems you might be concerned with interactions, i.e., that
> the relationship between group and the imaging data may change as a
> function of the nuisance. This can be tested with a design such as:
>
>
>
> EV1: group 1 (coded as 0/1)
>
> EV2: group 2 (coded as 0/1)
>
> EV3: nuisance group 1
>
> EV4: nuisance group 2
>
>
>
> With contrasts as:
>
>
>
> C1: [0 0 1 -1]
>
> C2: [0 0 -1 1]
>
>
>
> If these aren't significant, you can proceed to the group comparison.
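For concreteness, such a design could be written in FSL's VEST format (e.g., via Text2Vest) roughly as below; the nuisance values and the 3-per-group sizes are made up for illustration. Columns are EV1 (group 1), EV2 (group 2), EV3 (nuisance x group 1), EV4 (nuisance x group 2):

```
design.mat:
/NumWaves 4
/NumPoints 6
/Matrix
1 0  0.37 0
1 0 -1.12 0
1 0  0.80 0
0 1 0  0.21
0 1 0 -0.54
0 1 0  1.03

design.con:
/NumWaves 4
/NumContrasts 2
/Matrix
0 0  1 -1
0 0 -1  1
```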
>
>
>
> Regardless of whether interactions are tested or not, the chance of false
> positives will be controlled at the level you choose (e.g., 0.05), but it
> isn't eliminated.
>
>
>
> All the best,
>
>
>
> Anderson
>
> With best regards,
>
>
>
> Angelika
>
>
>
>
>
> *From:* FSL - FMRIB's Software Library [mailto:[log in to unmask]] *On
> behalf of* Anderson M. Winkler
> *Sent:* Friday, 16 December 2016 10:35
> *To:* [log in to unmask]
> *Subject:* Re: [FSL] AW: [FSL] PALM - saving data with nuisance variables
> regressed out
>
>
>
> Hi Angelika,
>
>
>
> Please, see below:
>
>
>
> On 15 December 2016 at 12:17, Mennecke, Angelika <
> [log in to unmask]> wrote:
>
> Hi Anderson and all,
>
>
>
> I’m looking for a way to produce the same output after nuisance regression
> with randomise and tbss, and came across this question.
>
>
>
> Is it possible to compute this nuisance regressed data after randomise?
>
>
>
> It's possible to produce it with fsl_glm (option --out_res), using a
> simpler design with only the nuisance variables.
>
>
>
>
>
> Could you furthermore please explain how the tfce-algorithm influences the
> regression?
>
>
>
> TFCE doesn't affect the regression. It uses the height and spatial
> contiguity of signals in the test statistic image to produce another test
> statistic, from which different p-values are computed.
>
>
>
>
>
> I tried to calculate the Pearson correlation myself for some voxels as an
> example, but that didn't yield the same p-value.
>
>
>
> The Pearson correlation would ignore spatial relationships, which are used
> in TFCE.
>
>
>
>
>
> Am I protected from producing false positives through nuisance regression
> if I exclude a group difference in the nuisance parameter (simply by
> t-test)? Especially with tfce?
>
>
>
> No. And in fact, regressing out the nuisance then proceeding with the test
> increases the number of false positives because the degrees of freedom
> aren't calculated correctly. The best thing to do is to keep the nuisance
> in the model, and run all in a single step.
>
>
>
> All the best,
>
>
>
> Anderson
>
> Best regards,
>
>
>
>
>
> Angelika
>
> *From:* FSL - FMRIB's Software Library [mailto:[log in to unmask]] *On
> behalf of* Anderson M. Winkler
> *Sent:* Thursday, 15 December 2016 12:02
> *To:* [log in to unmask]
> *Subject:* Re: [FSL] PALM - saving data with nuisance variables regressed
> out
>
>
>
> Hi JF,
>
>
>
> Looks right: you'd then have the residuals after only the nuisance has
> been regressed out. This is only valid if the EVs of interest are
> orthogonal to the nuisance (they are, as this is ensured internally using
> the default partitioning method), and if it's for a t-test (for an F-test
> it'd be necessary to zero out more than just the first regression
> coefficient in the variable psi). So, as long as this isn't an F-test,
> it's fine.
>
>
>
> All the best,
>
>
>
> Anderson
>
>
>
>
>
> On 14 December 2016 at 17:53, Jean-Francois Cabana <[log in to unmask]>
> wrote:
>
> Hi Anderson,
>
>
>
> I realized the res output gives me the data with all my variables
> regressed out, including the variable of interest. Just to make sure what I
> did is correct, what I want is to keep the effect of my variable of
> interest, which is the first column of my design matrix, and remove the
> others. So I added these lines to the code and I then save the ‘res2’
> variable, which I think is what I need. Is that correct?
>
>
>
> psi2 = psi;
>
> psi2(1,:) = 0;
>
> res2 = Y - M*psi2;
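For anyone following along outside Matlab, a rough numpy equivalent of the snippet above (illustrative; it assumes Y is subjects-by-voxels data, M the design matrix with the EV of interest in the first column, and psi the fitted GLM coefficients):

```python
import numpy as np

rng = np.random.default_rng(2)
n, n_vox = 30, 5
M = np.column_stack([
    rng.standard_normal(n),   # column 1: EV of interest
    rng.standard_normal(n),   # column 2: a nuisance EV
    np.ones(n),               # column 3: intercept
])
Y = rng.standard_normal((n, n_vox))          # toy data, subjects x voxels

psi = np.linalg.lstsq(M, Y, rcond=None)[0]   # GLM coefficients (EVs x voxels)

psi2 = psi.copy()
psi2[0, :] = 0            # zero the coefficient of the EV of interest
res2 = Y - M @ psi2       # nuisance removed; effect of interest + noise kept
```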
>
>
>
> Cheers,
>
> JF
>
>
>
> *From:* FSL - FMRIB's Software Library [mailto:[log in to unmask]] *On
> behalf of* Anderson M. Winkler
> *Sent:* 14 December 2016 03:21
> *To:* [log in to unmask]
> *Subject:* Re: [FSL] PALM - saving data with nuisance variables regressed
> out
>
>
>
> Hi JF,
>
>
>
> Yes, that output can definitely be produced with -saveglm in future
> versions. Thanks for the suggestion and glad you made the changes to the
> code as needed!
>
>
>
> All the best,
>
>
>
> Anderson
>
>
>
>
>
> On 13 December 2016 at 11:49, Jean-Francois Cabana <[log in to unmask]>
> wrote:
>
> Hi Anderson,
>
>
>
> Thank you for your answer. That is an interesting statistic to look into;
> I will try that. What I was looking for, though, is a way to save the
> actual data with the confounding variables regressed out. I actually found
> a way to do this: I added a few lines in "palm_core.m", in the -saveglm
> option block, to additionally save the "res" variable, which seems to be
> what I was looking for. Maybe that could be something to add in future
> builds, if you think other people might be interested in that option?
>
>
>
> Cheers,
>
> JF
>
>
>
> *From:* FSL - FMRIB's Software Library [mailto:[log in to unmask]] *On
> behalf of* Anderson M. Winkler
> *Sent:* 13 December 2016 04:18
> *To:* [log in to unmask]
> *Subject:* Re: [FSL] PALM - saving data with nuisance variables regressed
> out
>
>
>
> Hi JF,
>
>
>
> Yes, I think there is something: make sure the confounding variables are
> all in the model (don't regress them out first), then use the option
> -pearson. It will produce correlation coefficients instead of the
> t-statistic, and R^2 instead of the F-statistic. For the correlations,
> square them to obtain R^2. This is the fraction of the variance explained
> by the variable of interest, with the confounding variables regressed out.
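That relation can be sketched in numpy (a toy example, not PALM's implementation; the confound columns stand in for age and sex): the squared partial correlation of the data with the variable of interest equals the proportional reduction in residual variance when that variable is added to a model already containing the confounds.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200
Z = np.column_stack([np.ones(n),                 # intercept
                     rng.standard_normal(n),     # confound 1 (e.g., age)
                     rng.standard_normal(n)])    # confound 2 (e.g., sex)
x = rng.standard_normal(n)                       # variable of interest
y = 0.5 * x + Z @ np.array([1.0, 0.3, -0.2]) + rng.standard_normal(n)

def residualise(A, v):
    return v - A @ np.linalg.lstsq(A, v, rcond=None)[0]

# partial correlation: correlate y and x after removing the confounds
r = np.corrcoef(residualise(Z, y), residualise(Z, x))[0, 1]
partial_r2 = r ** 2

# the same quantity from nested models
rss_reduced = np.sum(residualise(Z, y) ** 2)
rss_full = np.sum(residualise(np.column_stack([Z, x]), y) ** 2)
partial_r2_nested = (rss_reduced - rss_full) / rss_reduced
```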
>
>
>
> Hope this helps.
>
>
>
> All the best,
>
>
>
> Anderson
>
>
>
>
>
> On 12 December 2016 at 20:27, Jean-Francois Cabana <[log in to unmask]>
> wrote:
>
> Dear PALM users,
>
>
>
> I was looking for a way to look at my data with several confounding
> variables in my design matrix regressed out. The idea is to later draw
> box-and-whisker plots showing the residual variance that comes only from
> my variable of interest, with confounding variables like age and sex
> regressed out.
>
>
>
> Before I go on and implement this myself in Matlab, I was wondering if by
> chance there was a way to save this output directly from PALM? That way I
> would be sure to plot exactly the data from which the p-values were
> computed.
>
>
>
> Cheers,
>
> JF
>