Tom Johnstone wrote:
> I think perhaps it would be useful to make a distinction between
> information that is necessary for other researchers to be able to
> replicate an experiment, and information not necessarily needed for
> replication, but that a reviewer might want to see. The former would
> include all the details of the experimental design and processing
> steps that have been mentioned in this discussion, and could be made
> mandatory (at least as supplemental info.).

There's a related distinction I think is also worth making, between
items that are currently routinely but not uniformly reported (e.g.,
the items in Tom Nichols's list) and items that are more rarely
reported (most of what's been posted since).

Items that are useful for replication and/or routinely reported are
clearly already required by community standards and, to a first
approximation, are omitted only by oversight when reporting them
would be appropriate.  So the issue is mostly quality control in the
writing and review process, which I'm guessing is what Tom N. was
most concerned with.  I can see an argument for making up an informal
checklist to help authors, editors, and reviewers, although placing a
more formal administrative burden on editors or reviewers seems
potentially problematic.

The other end of this I think is a bit more of a slippery slope.  It's
certainly worth considering whether, as a community, we should routinely
require additional types of information (e.g., design matrices,
unthresholded maps, SPMd output).  But (to agree with Tom J's comments
about demanding proof of competence), I don't think it's a good idea
for functional imaging to develop a formal "we don't trust you"
policy.  Most of the proposed new standards are along these lines:
information that would help reviewers catch mis-analyzed or
misinterpreted data.  As much as I share everyone's frustration at
occasionally reading articles where I have a strong suspicion the
authors have botched something or other without knowing it, or even
just omitted something they should have considered interesting, I feel
like we have to place the responsibility for this level of quality
control in the hands of reviewers and editors, even though we know the
system isn't perfect.  That's not to say I think the current de facto
set of standards is perfect, but I do think it's worth being
conservative about imposing new formal requirements.  Once you start
expressing a lack of trust, there's no limit to the number of things
you might want to double-check.  Beyond occasions when reviewers do
ask for diagnostic plots, I'd much rather see people lead by example.
If you feel readers would be better served (or better reassured) by
your including some particular figure or chunk of information, then do
so and encourage your collaborators to do so as well.  Things do
sometimes catch on that way, and at worst, if you're right, your
articles will be better than everyone else's.

I think it's worth considering, by the way, that the concept of the
information required for replication is not always clear-cut.
Everyone has their preferred level of detail.  One researcher may not
care that much what the echo time was, but might feel quite strongly
about the procedures used to color balance the stimulus display.
Another might feel a study is worthless because the source code for
the recon or the circuit diagram for the response box isn't made
available.  Formal requirements won't completely solve this problem --
it's still at some level up to editors and reviewers to decide, in
context, which features of a study are incidental.  Some proportion of
the time, those omitted details will be things you personally care
about, but others don't.

Just my two cents.

dan