SPM Archives

SPM@JISCMAIL.AC.UK


Subject: Re: Any Papers on Presenting fMRI Results?
From: Doug Burman <[log in to unmask]>
Reply-To: [log in to unmask]
Date: Fri, 4 Mar 2005 16:06:36 +0000

Thank you, Daniel, for doing a nice job of summarizing the key issues that I've felt are relevant
to this discussion.

I would just add that since fMRI is becoming more prevalent, with a lot of naive newcomers to the
field, it would be instructive to provide general guidelines on data analysis and interpretation
(for both the Wiki & the SPM website?).  There, the proper role & importance of informal review of
unthresholded maps could be raised without adding cumbersome requirements to the process of
getting manuscripts published.

Doug Burman

==============Original message text===============
On Fri, 04 Mar 2005 1:16:29 pm GMT Daniel Y Kimberg wrote:

Matthew (and others), hi.  I figured I'd throw out a few more replies
before I cave.

In general, I hope you didn't take anything I wrote to suggest I think
researchers should omit figures they consider informative.  In cases
where the unthresholded map suggests something that isn't well
captured by the statistical tests, of course I think it's worth
including.  The same goes for any of the other types of figures
imagers routinely examine in the course of analyzing their data.  I
just don't agree that there's anything so special about unthresholded
maps that they should be included even when the authors and reviewers
of an article are all in agreement that they're uninformative.  I
especially don't think they should be included when they're competing for
space and attention with figures the authors do consider important.
But the outline map solves that problem as long as there was going to
be a map anyway.  So this is perhaps mostly an academic disagreement,
or will be once everyone gets the knack of producing those figures.

> The key point here is that I think people _are_ universally drawing
> an _implicit_ conclusion about A vs B when commenting on a
> thresholded map.

I don't know if this is true or not, but I feel like unthresholded
maps are in general much more effective than thresholded maps at
encouraging people to draw unsupported conclusions.  There are lots of
interesting patterns in noise, especially when the data are spatially
smooth.  Part of my reason for leaning this way is that I like to
think that authors do look at their unthresholded maps and/or trend
level maps, and duly report things they know readers would consider
informative, including the whole map if that's what it takes.  One
would hope that people with advanced degrees don't grossly
misinterpret thresholded activation maps, though.
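
A minimal sketch of that point, using toy data and an arbitrary smoothing
kernel: spatially smoothing pure noise, as fMRI preprocessing does, produces
blob-like structure that can look like "activation" in an unthresholded map
even though no signal is present.

    # Python sketch: apparent structure in spatially smoothed noise (toy data)
    import numpy as np
    from scipy.ndimage import gaussian_filter

    rng = np.random.default_rng(0)
    noise = rng.standard_normal((64, 64))      # one pure-noise "slice", no signal
    smooth = gaussian_filter(noise, sigma=3)   # spatial smoothing, as in preprocessing
    smooth /= smooth.std()                     # rescale so values read like z-scores

    # With smooth noise, contiguous "hot spots" above a lenient cutoff are common
    # and can look meaningful in an unthresholded display.
    print("voxels above z = 1.5:", int((smooth > 1.5).sum()))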

> To take the behavioral example.  Let us say you are doing a study on
> patients with dorsolateral prefrontal cortex damage and test them on
> (task A) spatial working memory and (task B) a Stroop task.  A gives
> p=0.05, B gives p=0.06.  You don't report the result for B at all and
> only report A, and say, 'frontal lobe patients are impaired on
> spatial working memory'.  It would be true to say this, but it would
> be very misleading, because it implies that patients with frontal
> lobe lesions are _particularly_ impaired on spatial working memory,
> for which you have no good evidence.  The reason that 'frontal lobe
> patients are impaired on spatial working memory' implies the
> unsupported 'frontal lobe patients are _particularly_ impaired on
> spatial working memory' is that, if frontal lobe patients are
> impaired on all tests, or even all tests of memory, stating that
> they are impaired on spatial working memory is entirely
> uninteresting.

It's true that in behavioral work authors generally report all/most of
their statistical tests whether they exceed some threshold or not.
That said, I think in cases where you have a whole class of
non-significant findings, it's generally okay to say things like,
"none of the language tasks exceeded corrected thresholds (p>0.3 for
all)."  I also think visual maps are much easier to over-interpret
than lists of p-values.  But anyway, I'm in complete agreement that
authors should never hide data in order to mislead readers.  With the
frontal example, I think if you had 1,000 behavioral measures, solely
for presentation reasons, you would probably only report stats for
those that met some (corrected) threshold, and you might label some as
trends and others as findings.  While it would be possible for readers to
mistakenly infer that the reported and unreported tests differ
reliably, no one who actually paid attention in grad school or at any
time since should make that mistake.
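
As a purely hypothetical version of that 1,000-measure example, one way to
decide what gets reported is to apply a corrected threshold and flag
uncorrected near-misses as trends; Bonferroni is used below only as the
simplest stand-in for whatever correction is actually appropriate.

    # Python sketch (hypothetical null data): report only measures passing a
    # corrected threshold, and flag uncorrected near-misses as trends.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    n_tests, n_subjects = 1000, 20
    data = rng.standard_normal((n_tests, n_subjects))  # null data, illustration only

    t, p = stats.ttest_1samp(data, popmean=0.0, axis=1)
    alpha_corrected = 0.05 / n_tests                    # Bonferroni correction

    findings = p < alpha_corrected
    trends = (p >= alpha_corrected) & (p < 0.05)        # uncorrected "trend" band
    print(f"report as findings: {findings.sum()}, label as trends: {trends.sum()}")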

> Obviously I'm drawing a parallel with the thresholded SPM map.
> Again we have done many measurements.  Again we are simply not
> reporting the results of the large majority of the measurements.
> Let's say 'Area X is activated by task A'.  On its own, this is
> misleading, because this statement would be entirely uninteresting
> if it is also true that the whole of the rest of the brain is
> activated to a similar extent.  So, I believe that 'Area X is
> activated by task A' actually strongly implies 'Area X _in
> particular_ is activated by task A' for which it is very rare to
> present any good evidence.

Sure, it's misleading to say area X is activated when there's good
reason to believe the whole brain is activated.  But the source of the
problem is the authors' poor (and in this case one might guess
willfully deceptive) decisions about what should be reported.  You're
entitled to expect when you read an article that the authors will be
duly diligent about exploring and reporting on whatever is interesting
about their data (some in the results section, some in the
discussion).  I would love to see more of: "Here's the whole map.  The
statistics we ran don't capture it, but obviously this effect isn't
really specific to region X."  But I won't be offended if I never see:
"Here's the whole map.  We don't really have anything of note to point
out, but maybe you'll spot something."  The same goes for any of the
different kinds of displays functional imagers regularly use in
exploring their data.  It's not that I don't think there's information
in the figures, and obviously there's some risk authors will decide to
omit a figure that someone else would have considered crucial.  But
that's a general problem that I don't think can be solved
administratively.

> > One thing we haven't talked about is the kinds of invalid inferences
> > encouraged by unthresholded maps.  If you have maps from under-powered
> > studies of two tasks (B-A and C-A), side-by-side comparison is liable
> > to suggest some obvious but false differences and/or similarities.
>
> Again, this is an important point.  Should you remove a lot of your
> data by using a thresholded map, and prevent people from drawing
> possibly invalid conclusions about the data that is not significant?
> My own view would be that you should not, and that I would be happy for
> someone to make a reasoned argument about - say - an area that was not
> significant, but that was close to significance, looked as though it
> was specifically activated (red surrounded by blue) and was bilateral.
>  That also happens in the behavioral literature - you can discuss
> trends in data.

Whenever you publish a report, you're removing a lot of your data and
preventing people from making large classes of inferences.  It's
supposed to be a good thing.  Instead of collecting data and
publishing it as-is, you're replacing the data with summaries (e.g.,
statistical tests and figures) that capture what it is about the data
that you, having spent some time with it, consider scientifically
informative.  If there's a bilateral trend of interest, one would hope
that good judgment would prevail and it would make it into the
article.  But there will always be more information in the raw data,
often useful information, and sometimes critical information.
Researchers miss stuff, and sometimes they have different standards
for what informal observations they feel are appropriate to report.

Incidentally, I did run a study once in which the whole brain differed
between two conditions.  I didn't show a continuous map, but I did
report the percentage of positive voxels, something like 94%.  I don't
believe there was anything else worth knowing about the global map,
and I can't work up any guilt over not showing it.
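
A minimal sketch of that kind of summary, with toy numbers and a trivial
stand-in for a brain mask: rather than showing the continuous map, report
the percentage of in-mask voxels whose contrast value is positive.

    # Python sketch: summarize a global effect as the percentage of positive voxels
    import numpy as np

    rng = np.random.default_rng(2)
    contrast = rng.standard_normal((64, 64, 40)) + 1.5  # toy contrast map, global offset
    brain_mask = np.ones(contrast.shape, dtype=bool)    # stand-in for a real brain mask

    pct_positive = 100.0 * (contrast[brain_mask] > 0).mean()
    print(f"{pct_positive:.0f}% of in-mask voxels positive")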

Okay, maybe one more reply and I'll be on board.

dan

===========End of original message text===========
