Hi Jon,
As long as the models you want to compare are nested, you can use the
F-test. Have a look at slides 13-18 of this presentation:
http://www.fil.ion.ucl.ac.uk/spm/course/slides16-oct/03_Inference.pptx
for a description of the F-test in terms of the extra sum of squares
principle (model comparison) and multidimensional contrasts. This is
particularly relevant for neuroimaging as it means that you only need to
fit the full model to the data and use different contrasts to compute
the F-values you are interested in without having to explicitly specify
and estimate the corresponding full and reduced models each time you
want to do a new F-test.
For a formal correspondence between the two approaches (e.g. your
implementation in poly_regress.m and the one from SPM), have a look at
pages 12-13 of this book chapter:
http://www.fil.ion.ucl.ac.uk/spm/doc/books/hbf2/pdfs/Ch8.pdf
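That correspondence is easy to check numerically. Here is a minimal sketch in Python/NumPy (used instead of MATLAB so it runs standalone, with simulated data; neither poly_regress nor SPM is needed): the F-value from the extra-sum-of-squares comparison of reduced vs. full model equals the F-value from a [0 0 1] contrast fitted on the full model alone.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 16
y = rng.random(n)                       # simulated data (cf. rand(16,1))
v = rng.standard_normal(n)              # simulated covariate (cf. randn(16,1))
v = v - v.mean()                        # mean-centre, as in the email

X_full = np.column_stack([np.ones(n), v, v**2])  # constant, linear, quadratic
X_red = X_full[:, :2]                            # reduced model: drop quadratic

def sse(X, y):
    """Residual sum of squares of an OLS fit (via pseudo-inverse)."""
    r = y - X @ (np.linalg.pinv(X) @ y)
    return r @ r

# F-test 1: extra sum of squares (explicitly fit reduced and full models)
df1 = X_full.shape[1] - X_red.shape[1]
df2 = n - X_full.shape[1]
F_ess = ((sse(X_red, y) - sse(X_full, y)) / df1) / (sse(X_full, y) / df2)

# F-test 2: contrast [0 0 1] on the quadratic parameter of the full model
b = np.linalg.pinv(X_full) @ y
sigma2 = sse(X_full, y) / df2
var_b3 = sigma2 * np.linalg.inv(X_full.T @ X_full)[2, 2]
F_con = b[2] ** 2 / var_b3

print(np.isclose(F_ess, F_con))         # the two F-values agree
```

The equality holds for any full-rank design and any nested pair, which is exactly why SPM only ever needs to estimate the full model and vary the contrast.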
Sometimes things are clearer or more reassuring with a real example:
data   = rand(16,1);                 % simulated response
factor = randn(16,1);                % simulated covariate
[model2_b,f_values,df,f_crit] = poly_regress(data,factor,2,0.05);
[F,df,beta,xX,xCon] = spm_ancova(...
    [ones(16,1), factor-mean(factor), (factor-mean(factor)).^2],... % constant, linear, quadratic
    eye(16),...                      % i.i.d. error covariance
    data,...
    [0 0 1;0 0 0]');                 % F-contrast on the quadratic term
F and f_values should match. Note that I made a couple of changes in
your code:
* used a pseudo-inverse, e.g., model1_b = pinv(model1_x) * data;
* slight fix for the degrees of freedom:
% Calculate degrees of freedom
df1 = size(model2_x,2) - size(model1_x,2);
df2 = size(model2_x,1) - size(model2_x,2);
% Calculate F
f_values = ((SSE1 - SSE2)./(df1)) ./ (SSE2./df2);
df = [df1,df2];
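For these dimensions (16 observations, a 3-column quadratic full model tested against the 2-column linear one), the formulas above give df1 = 1 and df2 = 13. As a sanity check, the critical value that poly_regress returns as f_crit can be recomputed outside MATLAB, e.g. with SciPy (a sketch, not part of the original code):

```python
from scipy.stats import f

# 16 observations; full model has 3 columns (constant, linear, quadratic),
# reduced model has 2, so df1 = 3 - 2 = 1 and df2 = 16 - 3 = 13
df1, df2 = 1, 13
f_crit = f.ppf(1 - 0.05, df1, df2)   # critical F at alpha = 0.05
```

With one numerator degree of freedom this is just the square of the two-tailed t critical value at 13 df, which is a handy cross-check.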
Best regards,
Guillaume.
On 11/04/17 14:49, Jonathan O'Rawe wrote:
> Hi Guillaume,
>
> I was under the impression that to statistically test for a polynomial
> effect, what's most appropriate is to test for a reduction in SSE
> relative to the (n-1)th-order model. Does this F-test end up simplifying
> to the F-test against the squared parameter in the case of a quadratic
> effect, for example?
>
> I've attached a generic function I wrote to deal with this issue, since
> I hadn't seen an option in SPM to test nested models against each
> other. But if testing against the highest-order parameter simplifies to
> the same F-test, then that simplifies my life a bit!
>
> Best,
> Jon
>
> On Tue, Apr 11, 2017 at 7:11 AM, Guillaume Flandin <[log in to unmask]>
> wrote:
>
> Dear Fred,
>
> The multiple regression would be [1 V V.^2] where V has been
> mean-centred:
> V = V - mean(V);
> Then you can test for a nonlinear (quadratic) effect with the F-contrast
> [0 0 1].
> In SPM, when looking at results, you can do a plot for a selected voxel
> by clicking on the "plot" button, choosing "fitted response", "adjusted"
> and "plot against an explanatory variable" 'V'.
>
> Best regards,
> Guillaume.
>
>
> On 10/04/17 17:53, Fred Sampedro wrote:
> > Dear Guillaume,
> >
> >
> > Thanks a lot for your quick response. I watched the presentation you
> > mention and thought I had understood the approach.
> >
> > So my attempt was to define a gray matter voxel-based-morphometry model
> > as follows: Set up a Regression model with my clinical variable V and
> > its squared values V2. Then after estimating the model, set the
> > following F-contrast: 0 0 1, that is to say I am only interested in the
> > V2 parameter (the first 0 is a “mean” column that I see in the design
> > matrix and the second 0 is the V variable).
> >
> >
> > Several clusters appear at p<0.005. In one of them, I compute the gray
> > matter volume for each subject (GMVc) and plot the relationship between
> > V and GMVc. I was expecting a quite good quadratic fit given that
> > p-value. However, I think I got some step wrong because the data do not
> > seem very quadratic.
> >
> >
> > In short, could you or anyone explain in a little more detail how to
> > accomplish this kind of quadratic voxel map in SPM?
> >
> >
> > Thank you very much again for your help,
> >
> > Best regards,
> >
> > F.
> >
> >
> > On Mon, Apr 10, 2017 at 1:28 PM, Guillaume Flandin
> > <[log in to unmask]> wrote:
> >
> > Dear Fred,
> >
> > You can use a polynomial expansion of the clinical variable, see slide
> > 18 of this presentation:
> > http://www.fil.ion.ucl.ac.uk/spm/course/video/#Design
> >
> > Best regards,
> > Guillaume.
> >
> >
> > On 10/04/17 08:39, Fred Sampedro wrote:
> > > Dear SPM experts,
> > >
> > >
> > > I’ve been using SPM to obtain voxelwise GMV or FDG correlations with
> > > clinical variables. Naturally, by using the standard Regression option
> > > I obtain linear (either positive or negative) correlations with the
> > > variables.
> > >
> > >
> > > However, sometimes I would like to know if there is a quadratic
> > > (typically inverted-U shape) voxelwise relationship between the
> > > clinical variable and the voxel’s GMV or FDG. Please find an
> > > illustrative picture attached.
> > >
> > >
> > > Could anyone guide me on how to obtain a statistical voxelwise map
> > > where the significant voxels fit some inverse quadratic model?
> > >
> > >
> > > Thanks a lot in advance,
> > >
> > > F.Sampedro
> > >
> >
> > --
> > Guillaume Flandin, PhD
> > Wellcome Trust Centre for Neuroimaging
> > University College London
> > 12 Queen Square
> > London WC1N 3BG
> >
> >
>
> --
> Guillaume Flandin, PhD
> Wellcome Trust Centre for Neuroimaging
> University College London
> 12 Queen Square
> London WC1N 3BG
>
>
>
>
> --
> Jonathan O'Rawe
> Graduate Student in Integrative Neuroscience
> Dept. of Psychology
> Stony Brook University
> Stony Brook, NY 11794-2500
> Office: Psychology B 339
> Email: [log in to unmask]
--
Guillaume Flandin, PhD
Wellcome Trust Centre for Neuroimaging
University College London
12 Queen Square
London WC1N 3BG