Matthew, hi. I could probably go either way on unthresholded stat
maps, but let me raise a few objections and see if it leads anywhere.
Basically, I'm not sure I agree that unthresholded maps would be all
that helpful. They do give the reader the ability to do coarse
numerical comparisons that the authors (and editors and reviewers)
didn't feel were worth explicit statistical comparison. But you don't
need an unthresholded map to detect grossly unsupported inferences of
the kind you describe; you just need an alert reviewer. (Of course,
even with a sharp reviewer, no set of reporting guidelines is an
adequate safeguard against the range of things you can mess up, from
statistics and inferential logic to stimulus design and subject
recruitment to scientific incisiveness.)
In many cases, the continuous map doesn't even really solve the
problem. I can't tell by looking at a continuous-valued map which
areas have significantly stronger effects than which others. I can't
tell which of the non-significant effects have confidence intervals
that fail to include effect sizes of interest. I can make decent
educated guesses, but for things like that, I'd rather depend on the
authors (and reviewers) to carry out appropriate statistical analyses.
I feel this way even knowing that both authors and reviewers often
make ridiculous mistakes and don't always share my views about which
analyses are the most informative.
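To be concrete about the two checks I have in mind, here's a minimal
sketch (in Python, with made-up per-subject effect estimates for two
hypothetical regions, not anyone's real data): a paired test of
whether one effect is significantly stronger than the other, and a
confidence interval on the weaker effect to see whether it rules out
effect sizes of interest. Neither is readable off a continuous map.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n = 20                                  # subjects
    region_a = rng.normal(0.50, 0.8, n)     # per-subject effect, region A
    region_b = rng.normal(0.35, 0.8, n)     # per-subject effect, region B

    # Is A significantly stronger than B? That needs a test of the
    # difference, not an eyeball comparison of two map values.
    t, p = stats.ttest_rel(region_a, region_b)
    print("A vs B: t = %.2f, p = %.3f" % (t, p))

    # Does the (possibly non-significant) B effect rule out effect
    # sizes of interest? Check its 95% CI against that effect size.
    mean_b = region_b.mean()
    lo, hi = stats.t.interval(0.95, n - 1, loc=mean_b,
                              scale=stats.sem(region_b))
    print("B: mean = %.2f, 95%% CI = [%.2f, %.2f]" % (mean_b, lo, hi))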
Of course, we look at unthresholded maps in the lab all the time. But
the reasons almost all fall under the broad umbrella of sanity
checking, not analysis or reporting. In a sense, only the
statistically significant areas are findings. Absent a power
analysis, the rest aren't. That may be a little arbitrary, but I
think the best case for unthresholded maps is calling attention to
invalid inferences that happen to conflict with the numerical pattern
-- if, for example, you're claiming that L>R and the two happen to
look visually very similar. The continuous map might remind you to
think about whether there really is a statistical reason to believe
the two are different. But asking authors to present this kind of
sanity-checking data could easily get out of hand, and the list of
commonplace errors extends well beyond the image-analysis aspects of
the study. It's probably not good policy to expect authors to report
all diagnostics that could potentially turn up trouble.
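To make the L>R trap concrete (hypothetical z-scores, not anyone's
data): one voxel can clear a two-tailed threshold while its homologue
misses it, even though a direct test of the difference -- assuming
independence here for simplicity -- is nowhere near significant.

    from math import sqrt
    from scipy.stats import norm

    z_left, z_right = 2.1, 1.4
    # A difference of two independent unit-variance z-scores has
    # variance 2, hence the sqrt(2) rescaling.
    z_diff = (z_left - z_right) / sqrt(2)
    print("L vs 0: p = %.3f" % (2 * norm.sf(abs(z_left))))    # ~0.036
    print("R vs 0: p = %.3f" % (2 * norm.sf(abs(z_right))))   # ~0.162
    print("L vs R: p = %.3f" % (2 * norm.sf(abs(z_diff))))    # ~0.621

That's the familiar point that the difference between "significant"
and "not significant" is not itself significant; the continuous map
can prompt the question, but only the test answers it.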
A continuous map is indeed better for eyeball meta-analysis, and if
space were free, it would certainly be in my top 25 list of new things
I'd suggest should often be reported, if for no better reason than to
reassure readers. But since space isn't free, I'd rather leave it to
the editorial process to decide when it's relevant. I'm not really
sure how I feel about this, but at least for the moment, I'm not
convinced unthresholded maps are so universally useful that they
should be part of standard reporting. They do satisfy a certain kind
of curiosity we probably all share, though.
dan