Dear Anderson,

Thank you very much for taking the time to offer me an explanation. It is now very clear to me.

Kind regards,
Rosalia.

Hi Rosalia,

Please, see below:

FDR is not recommended.


There's nothing about FDR that would make it less recommendable. It's a method that, under mild and reasonable assumptions, delivers exactly what it promises. And, very importantly, it doesn't deliver what it doesn't promise.
 

However, using it, your threshold should be 0.001 in order to avoid false positives.


This isn't correct, I'm afraid. Changing the threshold doesn't avoid false positives; it only changes the (average) proportion of false positives. And a given threshold is hardly less arbitrary than any other. The only way to avoid false positives altogether is with q=0, in which case there will be no discoveries at all.

In the original question nothing survived at 0.05, so at 0.001 nothing will survive either.
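To illustrate the point, here is a minimal sketch of the Benjamini-Hochberg procedure in Python (the p-values are made up, purely for demonstration): the threshold is the largest sorted p_(i) satisfying p_(i) <= (i/m)*q, so lowering q can only shrink the set of discoveries, and q=0 leaves none.

import numpy as np

def bh_threshold(pvals, q):
    # Benjamini-Hochberg: largest sorted p_(i) with p_(i) <= (i/m)*q; 0 if none passes (no discoveries)
    p = np.sort(np.asarray(pvals))
    m = p.size
    crit = q * np.arange(1, m + 1) / m       # the BH line i/m * q
    below = np.nonzero(p <= crit)[0]
    return p[below[-1]] if below.size else 0.0

# made-up p-values, all drawn from the null, purely for illustration
rng = np.random.default_rng(0)
pvals = rng.uniform(size=1000)

for q in (0.05, 0.001, 0.0):
    thr = bh_threshold(pvals, q)
    print("q = %g: threshold = %g, discoveries = %d" % (q, thr, (pvals <= thr).sum()))

A threshold of 0 here is the same behaviour behind the "Probability threshold is 0" message in the original question: no p-value fell below the BH line at q=0.05.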
 

In general, Journals do not "like" FDR results.


This isn't right. While it is true that FDR results contain, by definition, a certain proportion of false discoveries, this amount is controlled, and it can well be tolerated in many studies, including brain imaging.

The original B&H1995 paper has been cited more than 23000 times according to Google Scholar. In 2014 alone there are already 2490 citations, the most recent being this one, in no less a journal than Nature. In the last year, 3751 (see the little plot attached). This doesn't include papers that used FDR (as available in various statistical packages) without citing the original publication. How can we say that journals don't like FDR?
 

FWE is more robust.


FWE controls a different quantity. Certainly it gives stronger evidence against the null, but it's still a different quantity.
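Just to make the distinction concrete, here is a small simulation (again Python; the mixture of 9000 null tests and 1000 true effects is an assumption made up for the example). Both procedures see the same p-values, but Bonferroni bounds the chance of even a single false positive anywhere, whereas Benjamini-Hochberg bounds the expected proportion of false positives among whatever is declared significant, so they typically select different sets of voxels.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# assumed mixture, purely illustrative: 9000 null tests and 1000 with a true effect (z ~ N(3, 1))
z = np.concatenate([rng.normal(0.0, 1.0, 9000), rng.normal(3.0, 1.0, 1000)])
p = stats.norm.sf(z)                          # one-sided p-values
m = p.size
alpha = 0.05

# FWE via Bonferroni: controls P(at least one false positive) <= alpha
fwe_hits = p <= alpha / m

# FDR via Benjamini-Hochberg: controls E[false discoveries / all discoveries] <= alpha
ps = np.sort(p)
crit = alpha * np.arange(1, m + 1) / m
below = np.nonzero(ps <= crit)[0]
bh_thr = ps[below[-1]] if below.size else 0.0
fdr_hits = p <= bh_thr

print("Bonferroni (FWE) discoveries:", int(fwe_hits.sum()))
print("BH (FDR) discoveries:", int(fdr_hits.sum()))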
 
All the best,

Anderson



Cheers,
Rosalia.

On 07/08/2014 22:39, "Jason S. Lee" <[log in to unmask]> wrote:

Hi fslers,

I am trying FDR for multiple comparison correction of FA images of patients and healthy controls. Following the FSL website's guidance, I created uncorrected p-value images using voxel-based thresholding; these are named "tbss_FA_vox_p_tstat1" and "tbss_FA_vox_p_tstat2". I can see many regions whose voxels are greater than 0.95 in those 1-p images. However, when I run fdr, the result says "Probability threshold is 0", and I don't know why (I have read that zero means nothing is significant, though). Is it possible that FDR removed all those regions greater than 0.95 and considered them not significant?

There are 14 patients and 14 healthy controls (28 in total), and I used the following command:
fdr -i tbss_FA_vox_p_tstat2 --oneminusp -m mean_FA_skeleton_mask -q 0.05 --othresh=thresh_grot_vox_p_tstat2

I am working on this because when I tried FWE correction before, nothing was significant (but there are many regions greater than 0.94 and smaller than 0.95; they were very close to 0.95). So I am trying FDR instead of FWE correction now. Would you please let me know anything about this? Your help will be greatly appreciated.

Thank you!