SPM Archives

SPM@JISCMAIL.AC.UK

Subject: Re: Fwd: Re: rfx
From: Richard Perry <[log in to unmask]>
Reply-To: Richard Perry <[log in to unmask]>
Date: Wed, 15 Nov 2000 19:38:03 +0000
Content-Type: text/plain
Parts/Attachments: text/plain (164 lines)

Dear Ian,

>i continue to wonder
>if it is necessarily a priori biased to somehow weight each
>subject by the quality of their fit when performing the second
>stage t-test during an rfx analysis.  on the one hand it seems
>legitimate since a subject can be poorly fit regardless of the size
>of the height estimate (or difference estimate).  in practice however,
>i would expect that subjects with small estimates would in
>fact have larger residuals.

Here is an interchange from back in April between Rik Henson and 
Thomas Nichols which deals with the issue of the difference between 
taking the t images through and taking the betas through to the 
second level.  Thomas Nichols also touches on the issue of 
within-subjects variance having to be the same from one individual to 
another, but without on this occasion trying to explain why.  Rik 
mentioned this message to me earlier, but it took me a while to find 
it!

Rik asked:
>>  Perhaps I could ask the "community"'s feeling on a related
>>  issue? While in SPM we take con*.imgs through to second-level
>>  analyses, which are simply linear combinations of the
>>  parameter estimates, other groups appear to take statistics
>>  (eg t-statistics) through to second-level analyses
>>  (equivalent to selecting spmT*.imgs).
>

Thomas Nichols replied:
>If the interest is in population inference, that is, performing
>modeling which accounts for subject-to-subject variability, then the
>appropriate approach is the two-level 'RFX' analysis.

[And I think that by this he meant taking the beta images through to 
the second level, as we are advised to do in SPM.]

>If the interest
>is simply to combine several fixed effects analyses, answering the
>question 'Are all of these individual subjects' null hypotheses
>true?', then combining p-values or t-statistics in a meta-analytic
>fashion would be appropriate.

[Whether this just means taking the t images through to the second 
level and using them exactly as you would have used the beta images I 
am not quite sure.  Perhaps 'meta-analytic fashion' indicates 
something more fancy.  Anyway, he continues...]

>The random effects model can 'say' more, i.e. makes a statement about
>a population parameter.  The meta-analysis approach just makes a
>statement about the conjunction of null hypotheses; in particular a
>significant result could simply be due to one subject with profoundly
>significant data.  The cost of the broader scope of inference is
>stronger assumptions.  In particular the first level parameter
>estimates are assumed to be
>
>	  1. Normally distributed (across subject), and to have
>	  2. Homogeneous variance (across subject).
>
>Point one would be violated, for example, if one subject had a very
>large response but all others had a very small response, or, if there
>was a bimodal distribution of responses, e.g. half the subjects
>responded strongly the other half hardly at all.  Point two would be
>similarly violated if a particular subject had a relatively poor
>fitting model, or simply if there were dramatic differences in the
>variance images across subjects.
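
[For concreteness, here is a crude sketch of the two routes at a single 
voxel -- made-up numbers, nothing to do with actual SPM code, and 
Stouffer's method is only one possible reading of 'combining p-values or 
t-statistics in a meta-analytic fashion', which Thomas does not spell out:

    # Route 1: 'RFX' -- one-sample t-test on the per-subject contrast
    # estimates (the con*.img route).  Route 2: Stouffer-style combination
    # of the per-subject t-statistics (one reading of the spmT*.img route).
    import numpy as np
    from scipy import stats

    con = np.array([0.8, 1.2, 0.3, 1.9, 0.7, 1.1, 0.5, 1.4])    # con*.img values
    t_subj = np.array([2.1, 3.5, 0.9, 4.2, 1.8, 2.9, 1.2, 3.1]) # spmT*.img values
    df_subj = 100  # assumed first-level error d.f., the same for every subject

    # Route 1: subject-to-subject variability enters through the sample
    # variance of the contrast estimates.
    t_rfx, p_rfx = stats.ttest_1samp(con, 0.0)
    print(f"RFX one-sample t = {t_rfx:.2f}, two-tailed p = {p_rfx:.4f}")

    # Route 2: convert each first-level t to a z, then combine.
    z_subj = stats.norm.ppf(stats.t.cdf(t_subj, df_subj))
    z_comb = z_subj.sum() / np.sqrt(len(z_subj))
    print(f"Stouffer z = {z_comb:.2f}, one-tailed p = {stats.norm.sf(z_comb):.4f}")

Route 1 asks about the population mean response; Route 2 only asks whether 
all the individual nulls can be true, as Thomas says above.]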

Anyway, back to your comments:
>i would guess that the answer to this rests on exactly what one wants to
>say about the "population".  in the extreme such a weighting approach
>would face the same problem as a fixed effects approach,
>in that a minority of the sample could determine the conclusions
>(i.e. really poorly fit subjects would be weighted negligibly).

I think that the situation is worse than this.  If all of the 
subjects are selected randomly for a fixed-effects analysis, then it 
could be thought of as a useful case study for understanding the 
characteristics of the population from which it was drawn.  You don't 
know how typical these subjects are of the population (that's what a 
RFX analysis is for), but there's no particular reason for thinking 
that your results are biased in one particular direction or another. 
However, if you do an analysis which is weighted towards low-variance 
individuals, then it's as though the data itself is being used to 
select the subjects, which seems much more worrying even to a 
non-statistician such as myself.

Let's imagine that, unbeknownst to you, there are two sub-groups in 
your 'population', a large sub-group with quite a bit of variance, 
and a small sub-group with unusually small variance.  These 
sub-groups appear in appropriate proportions in your sample.  A 
random-effects analysis will tend to give a result which is fairly 
representative of the population taken as a whole (in spite of the 
departure from the homogeneous-variance assumption which strictly 
speaking renders it invalid, I now learn).  However, taking the t 
images through will bias the results towards the low-variance 
sub-group, which seems a much worse thing to do.
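
A toy simulation makes the point numerically (again, nothing to do with 
real SPM code; taking the inverse of each subject's first-level variance 
as the weight is just one way of formalising 'weighting by quality of fit'):

    import numpy as np

    rng = np.random.default_rng(0)
    n_big, n_small = 16, 4           # large noisy sub-group, small quiet one
    true_big, true_small = 0.2, 1.0  # the quiet sub-group happens to respond more
    sd_big, sd_small = 1.0, 0.1      # within-subject (first-level) noise levels

    # per-subject contrast estimates and their first-level variances
    est = np.concatenate([rng.normal(true_big, sd_big, n_big),
                          rng.normal(true_small, sd_small, n_small)])
    var = np.concatenate([np.full(n_big, sd_big**2),
                          np.full(n_small, sd_small**2)])

    unweighted = est.mean()                         # what an RFX analysis averages
    weighted = np.sum(est / var) / np.sum(1 / var)  # inverse-variance weighted mean
    pop_mean = (n_big * true_big + n_small * true_small) / (n_big + n_small)

    print(f"population mean        : {pop_mean:.2f}")
    print(f"unweighted (RFX) mean  : {unweighted:.2f}")
    print(f"variance-weighted mean : {weighted:.2f}  (pulled towards the quiet sub-group)")

The unweighted mean sits near the population value, while the weighted one 
is dragged almost all the way to the small low-variance sub-group.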

>nevertheless,
>i can't help thinking that taking 15 or 20 gigabytes of subject data,
>reducing it to 12 or so synthetic height estimates that are all considered
>equally
>valid, and then asking whether the 95 percent CI of these
>encompasses zero is a bit extreme.
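
[As an aside, the check Ian describes is just a t-based 95% confidence 
interval on the handful of per-subject estimates -- equivalent to the 
one-sample t-test sketched above.  With made-up numbers for 12 subjects:

    import numpy as np
    from scipy import stats

    heights = np.array([0.9, 1.3, 0.2, 1.7, 0.6, 1.1,
                        0.4, 1.5, 0.8, 1.0, 0.3, 1.2])  # per-subject estimates
    n = len(heights)
    sem = heights.std(ddof=1) / np.sqrt(n)         # standard error of the mean
    t_crit = stats.t.ppf(0.975, df=n - 1)          # two-tailed 95% critical value
    lo, hi = heights.mean() - t_crit * sem, heights.mean() + t_crit * sem
    print(f"95% CI = [{lo:.2f}, {hi:.2f}]; excludes zero: {not (lo <= 0 <= hi)}")

]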

I am completely in tune with this sentiment.  I really fear the swing 
towards referees demanding random-effects analyses in all fMRI 
studies in a knee-jerk manner, and the consequences for any department 
in which scanning time is limited.  As pointed out by Karl and 
others, one can ask many useful questions about the brain using 
fixed-effects analyses, and one has to have a very good reason indeed 
to throw away all of the useful information about how reliable your 
observations are within the individual brains in which they were made.

Clearly if you want to compare two groups, then RFX analyses are 
mandatory.  On the other hand, once Hubel and Wiesel had demonstrated 
ocular dominance columns electrophysiologically in several brains, 
one wouldn't have valued the services of a referee who refused to 
allow the paper to be published until they had statistical evidence 
that these observations could be extended to the population!

>one alternative that we have tried is to start with the fixed effects
>analysis, and then follow up by asking if the regions identified in the fixed
>effects are reliable across the subjects in the sample using roi extraction
>and random effects anova on the extracted timecourse data for each subject.

Sorry, I didn't quite follow this.  Perhaps someone else will comment.  But...

>if the condition effects and condition by time interactions are significant,
>one can assume the activation is reliable in the sample.

...surely the standard fixed-effects analysis already tells you this?

>the presumption
>then is that given another sample of 7 or 8 right-handed, undergraduate,
>native-english speakers, i should expect a similar outcome.  of course this
>is a reasoned versus statistical "population" inference.  i'd be
>interested if you or anyone has any comments regarding the
>veracity of this method.

I agree wholeheartedly that such a 'reasoned' population inference 
may be entirely appropriate for many questions (see above).

>cheers.
>ian.

I hope that you get some other (better-informed) responses to your 
messages soon.  Sorry that I seem to be incapable of writing a short 
message to this list!

Best wishes,

Richard.


-- 
from: Dr Richard Perry,
Clinical Lecturer, Wellcome Department of Cognitive Neurology, 
Institute of Neurology, Darwin Building, University College London, 
Gower Street, London WC1E 6BT.
Tel: 0207 679 2187;  e mail: [log in to unmask]

