Hi,
Post hoc power analyses do not make sense and you can defend this point in a response to reviewers. You can cite many of the references given here (scroll down to the bottom of the page):
http://www.childrensmercy.org/stats/size/posthoc.aspx
My favorite is the 6th (The Abuse of Power: The Pervasive Fallacy of Power Calculations for Data Analysis).
If you ran the analysis in an ROI (chosen a priori), you can construct a confidence interval, which may be useful for showing the effect size and how wide the interval around it is. I believe the end of the Hoenig reference I mentioned above discusses this. Otherwise the conclusion is basically that you didn't have enough power.
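For illustration, here is a minimal Python sketch (using NumPy/SciPy) of the confidence-interval idea for a two-group comparison in an a priori ROI. The numbers below are made-up placeholder values, not data from the actual study:

```python
# Sketch: 95% CI for the group difference in mean ROI value, plus
# Cohen's d as a standardized effect size. Placeholder data only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
group_a = rng.normal(0.55, 0.05, size=20)  # hypothetical mean GM volume per subject
group_b = rng.normal(0.52, 0.05, size=20)

diff = group_a.mean() - group_b.mean()
n1, n2 = len(group_a), len(group_b)

# Pooled SD and standard error of the difference (equal-variance t setup)
sp = np.sqrt(((n1 - 1) * group_a.var(ddof=1) +
              (n2 - 1) * group_b.var(ddof=1)) / (n1 + n2 - 2))
se = sp * np.sqrt(1 / n1 + 1 / n2)
tcrit = stats.t.ppf(0.975, df=n1 + n2 - 2)
ci = (diff - tcrit * se, diff + tcrit * se)

d = diff / sp  # Cohen's d
print(f"difference = {diff:.4f}, 95% CI = ({ci[0]:.4f}, {ci[1]:.4f}), d = {d:.2f}")
```

A wide interval that includes zero makes the "not enough power" point directly, without invoking a post hoc power number.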
Hope that is helpful,
Jeanette
Hi Michael,
What you said definitely sounds reasonable, thank you. But we were asked to 'formally test if the negative result was due to our statistical power' for a publication, so we don't have much choice.
On Sun, 22 Jan 2012 11:17:39 -0600, Michael Harms <[log in to unmask]> wrote:
>Hi Sinan,
>Post-hoc power-analyses of that sort are not really meaningful.
>By definition, given your null result, if there is an effect present of a
>given size, you didn't have sufficient power to detect it.
>
>cheers,
>-MH
>
>
>> Dear FSLers,
>>
>>
>> We compared GM volumetric abnormalities between 3 groups of subjects
>> in an ANOVA design using TFCE-based thresholding at .05. However, we did
>> not find any differences between two of the 3 groups. We are now trying to
>> understand if this was due to our sample sizes.
>>
>> The previous posts on this list were not helpful, probably because of my
>> level of knowledge in statistics.
>>
>>
>> Could you please guide me through this power analysis?
>>
>> Thank you,
>>
>> Sinan
>>