Greetings all,
I have been researching this topic for some time but have been unable to reach a solid understanding of the problem.
Prior to performing a Monte Carlo simulation (e.g., AlphaSim or 3dClustSim), it is important to run 3dFWHMx on the ResMS file generated by a first-level or second-level analysis. Most researchers who cite this method in their publications report running 3dFWHMx on the square root of the ResMS image from their second-level analysis, but they fail to explain why this is superior to just using the ResMS image directly.
I somewhat understand that the ResMS image represents the estimated residual variance (noise) in the dataset, and that 3dFWHMx estimates the mean FWHM of that noise along each axis. Thus, the larger the 'kernel' reported by 3dFWHMx, the smoother (more spatially correlated) the noise in the dataset. Taking the square root of the ResMS file yields an image akin to a standard-deviation map, and in my experience it produces larger mean FWHMs as reported by 3dFWHMx.
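To make the question concrete, here is a minimal 1-D sketch of the classic first-difference FWHM estimator (Forman et al., 1995), which is the idea behind 3dFWHMx's original per-axis estimate. This is only a toy (3dFWHMx works per axis in 3-D within a mask, and newer versions fit an ACF model instead), but it does show that a nonlinear transform of an image, such as squaring or taking a square root, changes its spatial autocorrelation and therefore the FWHM that gets reported:

```python
import numpy as np

def fwhm_1d(img, dx=1.0):
    # Classic first-difference smoothness estimate:
    #   rho  = 1 - var(diff) / (2 * var)     (lag-1 autocorrelation)
    #   FWHM = dx * sqrt(-2 * ln(2) / ln(rho))
    d = np.diff(img)
    rho = 1.0 - d.var() / (2.0 * img.var())
    return dx * np.sqrt(-2.0 * np.log(2.0) / np.log(rho))

rng = np.random.default_rng(0)
# White noise smoothed with a Gaussian kernel of known width:
# true FWHM = sigma * sqrt(8 * ln(2)), i.e. about 4.71 voxels for sigma = 2
sigma = 2.0
x = np.arange(-20, 21)
kernel = np.exp(-x**2 / (2.0 * sigma**2))
kernel /= kernel.sum()
g = np.convolve(rng.standard_normal(200_000), kernel, mode="valid")

fwhm_g = fwhm_1d(g)      # close to the true value of ~4.71
fwhm_g2 = fwhm_1d(g**2)  # smaller: squaring sharpens the autocorrelation
print(fwhm_g, fwhm_g2)
```

In this toy, the squared field comes out "rougher" than the field itself, which at least demonstrates that a variance map and a standard-deviation map need not yield the same smoothness estimate.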
Why is this the case? And why is the standard deviation (the square root of ResMS), rather than the variance itself, the better basis for determining cluster-extent thresholds in significance testing?
Thanks in advance for shedding any light on this complicated topic.
Best,
Patrick
MUSC Neurosciences