Hi

During this correspondence on the SPM email list, Matthew Brett made the excellent suggestion of creating an ROI library. This would be invaluable to those of us in smaller institutions without a significant knowledge base. Entries could be the coordinates used plus the size of the box/sphere, or an image library (with, of course, full details of how each ROI image was derived). So, could one be initiated on the SPM web site or the SPM Wiki? Is there anyone who could set the ball rolling?

Hopeful thanks :-)

Rachel Mitchell
-----------------------------------------------------------------------
Dr Rachel L. C. Mitchell
Lecturer in Cognitive Psychology, University of Reading
Honorary Research Fellow, Institute of Psychiatry
Research Psychologist, Berkshire Healthcare NHS Trust
Correspondence Address:
School of Psychology
Whiteknights Road
University of Reading
Reading
Berkshire
RG6 6AL
Tel: +44 (0)118 378 8523
Direct Dial: +44 (0)118 378 7530
Fax: +44 (0)118 378 6715
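A library entry of the kind suggested above (peak coordinates plus a box or sphere size) is enough to regenerate an ROI mask programmatically. As a minimal, hypothetical sketch (the function name and coordinate conventions are mine, not part of SPM), a spherical ROI can be rebuilt from a centre in mm and a radius:

```python
import numpy as np

def sphere_roi(shape, center_mm, radius_mm, voxel_size_mm):
    """Boolean mask of voxels whose centres lie within radius_mm of center_mm.

    Assumes a simple mm space with its origin at voxel (0, 0, 0) and axes
    aligned with the voxel grid -- a real library entry would also need to
    record the image's full affine/orientation.
    """
    grids = np.indices(shape).astype(float)          # voxel indices per axis
    scale = np.asarray(voxel_size_mm, float).reshape(3, 1, 1, 1)
    coords = grids * scale                           # voxel indices -> mm
    dist2 = sum((coords[i] - center_mm[i]) ** 2 for i in range(3))
    return dist2 <= radius_mm ** 2

# 6 mm sphere centred at (20, 20, 20) mm on a 2 mm isotropic grid
mask = sphere_roi((20, 20, 20), center_mm=(20, 20, 20),
                  radius_mm=6, voxel_size_mm=(2, 2, 2))
```

An image-based library entry would then simply be this mask written to file, with the generating coordinates, radius, and voxel size recorded alongside it so others can verify how it was derived.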
----------------------------------------------------------------------------
-----Original Message-----
From: SPM (Statistical Parametric Mapping) [mailto:[log in to unmask]] On Behalf Of Daniel Kelly (AKA Jack)
Sent: 18 February 2005 18:29
To: [log in to unmask]
Subject: Re: [SPM] Any Papers on Presenting fMRI Results?

Hi,

I've taken the liberty of starting a page on the SPM Wiki concerning this issue:
http://en.wikibooks.org/wiki/SPM-Information_to_include_in_papers
No doubt I've butchered the arguments and misrepresented the ideas. Please do take a look and edit out anything that offends. I've attempted to put together a single page which takes into consideration all the points mentioned in this great discussion.
Perhaps the Wiki would be a good starting place for a community-authored set of guidelines?
Thanks,
Jack
PhD Student
Institute of Cognitive Neuroscience
UCL
Daniel Y Kimberg wrote:

Tom Johnstone wrote:

    I think perhaps it would be useful to make a distinction between information that is necessary for other researchers to be able to replicate an experiment, and information not necessarily needed for replication, but that a reviewer might want to see. The former would include all the details of the experimental design and processing steps that have been mentioned in this discussion, and could be made mandatory (at least as supplemental info).

There's a related distinction I think is also worth making, between items that are currently routinely but not uniformly reported (e.g., the items in Tom Nichols's list) and items that are more rarely reported (most of what's been posted since). Items that are useful for replication and/or routinely reported are already required by community standards and, to a first approximation, omitted only by oversight. So the issue is mostly quality control in the writing and review process, which I'm guessing is what Tom N. was most concerned with. I can see an argument for making up an informal checklist to help authors, editors, and reviewers, although placing a more formal administrative burden on editors or reviewers seems potentially problematic.

The other end of this I think is a bit more of a slippery slope. It's certainly worth considering whether as a community we should routinely require additional types of information (e.g., design matrices, unthresholded maps, SPMd output). But (to agree with Tom J's comments about demanding proof of competence), I don't think it's a good idea for functional imaging to develop a formal "we don't trust you" policy. Most of the proposed new standards are along these lines: information that would help reviewers catch mis-analyzed or mis-interpreted data.
As much as I share everyone's frustration at occasionally reading articles where I have a strong suspicion the authors have botched something or other without knowing it, or even just omitted something they should have considered interesting, I feel we have to place the responsibility for this level of quality control in the hands of reviewers and editors, even though we know the system isn't perfect. That's not to say I think the current de facto set of standards is perfect, but I do think it's worth being conservative about imposing new formal requirements. Once you start expressing a lack of trust, there's no limit to the number of things you might want to double-check.

Beyond occasions when reviewers do ask for diagnostic plots, I'd much rather see people lead by example. If you feel readers would be better served (or better reassured) by your including some particular figure or chunk of information, then do so and encourage your collaborators to do so as well. Things do sometimes catch on that way, and at worst, if you're right, your articles will be better than everyone else's.

I think it's worth considering, by the way, that the concept of the information required for replication is not always clear-cut. Everyone has their preferred level of detail. One researcher may not care that much what the echo time was, but might feel quite strongly about the procedures used to color balance the stimulus display. Another might feel a study is worthless because the source code for the recon or the circuit diagram for the response box isn't made available. Formal requirements won't completely solve this problem -- it's still at some level up to editors and reviewers to decide, in context, which features of a study are incidental. Some proportion of the time, those omitted details will be things you personally care about, but others don't.

Just my two cents.

dan