Hi Kate and Jeremy,

That was a very interesting discussion. Kate's viewpoint is more along the lines of my supervisor's: he suggests that by doing an effect size calculation I can be more certain that the results I found are meaningful and not just an artefact of the sample size.
Jeremy, I don't think I understood your point about the CIs; I didn't see how the sample size fits into that.

Iljana

PhD candidate at Bournemouth University, UK
LinkedIn: http://lnkd.in/WWM_yT

Date: Wed, 5 Feb 2014 09:24:56 +0000
From: [log in to unmask]
Subject: Re: Mediation Sample Size - Bootstrapping
To: [log in to unmask]

As this is a debating arena, I have to say that I don't agree with Jeremy about effect sizes. If you have a large sample, you are much more likely to pick up significance. Whether that significance is meaningful is not given by the p value: you cannot talk about 'really significant' (though people do!) just because you get 0.01 rather than 0.05. It's significant or it's not. What the effect size helps you explore is how meaningful the effect is: a small effect size with a large sample is a different matter from a large one. So, Iljana, if you find you have a medium to large effect size, you are more likely to have a result that means something than if you find a small one.
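Kate's point that a large sample will pick up significance even for a tiny effect can be sketched numerically. This is an illustrative Python snippet, not anything from the thread: the helper name `r_to_p` is mine, and it uses a normal approximation to the t distribution, which is fine at these sample sizes.

```python
import math

def r_to_p(r, n):
    """Two-sided p-value for a sample correlation r observed in n cases,
    via the t statistic t = r * sqrt((n - 2) / (1 - r**2)).
    Uses a normal approximation to the t distribution (fine for large n)."""
    t = r * math.sqrt((n - 2) / (1 - r * r))
    phi = 0.5 * (1 + math.erf(abs(t) / math.sqrt(2)))  # standard normal CDF
    return 2 * (1 - phi)

# The same small effect, r = 0.1, at two sample sizes:
print(r_to_p(0.1, 50))    # ~0.49: nowhere near significant
print(r_to_p(0.1, 5000))  # far below 0.001: 'highly significant'
```

The effect size (r = 0.1) is identical in both calls; only n changes the verdict, which is why the p value alone can't tell you whether the effect is meaningful.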

I also think that knowing your power is important for replication: you may have significance, but if you have low power owing to a small effect, then a replication with the same sample size is less likely to find your result, and so will not support your finding. That's important to know, I think.
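The replication point can also be put in numbers: with low power, an exact replication at the same n will usually miss a true effect. A rough sketch (the helper is my own, using the normal approximation on Fisher's z scale):

```python
import math

def correlation_power(r, n, crit=1.96):
    """Approximate power to detect a true correlation r with n cases,
    using the normal approximation on Fisher's z scale
    (two-sided test at the 5% level by default)."""
    phi = lambda x: 0.5 * (1 + math.erf(x / math.sqrt(2)))  # normal CDF
    ncp = math.atanh(r) * math.sqrt(n - 3)  # expected z statistic
    return (1 - phi(crit - ncp)) + phi(-crit - ncp)

# A true but small effect (r = 0.1) studied with n = 100:
# power comes out around 0.17, so most same-sized replications would 'fail'.
print(correlation_power(0.1, 100))
```

So even if the original study happened to reach p < .05, a colleague repeating it with the same n would more often than not report a null result.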

Very happy to be told where I'm wrong on this, as I'm only a student too and looking to learn.

all best

Kate

From: Research of postgraduate psychologists. <[log in to unmask]> on behalf of Jeremy Miles <[log in to unmask]>
Sent: 04 February 2014 18:40
To: [log in to unmask]
Subject: Re: Mediation Sample Size - Bootstrapping

On 4 February 2014 10:16, iljana schubert <[log in to unmask]> wrote:

Hi again,

Thank you, Jeremy and Kate, for your input.

My supervisor has asked me to check effect sizes on my regression results because he is worried that significant results might be due to my relatively large sample (N = 500). Is there another way of checking my results that would do what he wants me to check with the effect size? If I understood Jeremy correctly, there is little point in doing it, as it just restates what the p-value says, or is there more to it?

I am currently doing SEM but have only tackled the measurement model so far.

Thanks a million for your advice!

Best wishes

A large sample size gives you confidence that your estimates are correct.

If they're too small to be interesting, you can say that they are too small to be interesting.

500 is not a large sample size. The paper by Bland that I cited earlier reviewed two medical journals and found the median sample size was about 3000.  

Another way to think about a sample size of 500 is to look at the confidence interval of the correlation. Before you've controlled for anything, a correlation of r = 0.3 has a 95% CI of roughly 0.22 to 0.38; that's a wide confidence interval. Square the endpoints to get the proportion of variance explained, and the CI runs from about 0.05 to 0.14, with the upper bound roughly three times the lower. "Yeah, maybe we've accounted for 5% of the variance, or maybe 14%." That's not very certain.
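Those interval endpoints can be reproduced with Fisher's z transformation, the standard way to get an approximate CI for a correlation. A minimal sketch (the function name is mine):

```python
import math

def correlation_ci(r, n, crit=1.96):
    """Approximate 95% CI for a correlation via Fisher's z transformation."""
    z = math.atanh(r)             # z transform of the sample correlation
    se = 1 / math.sqrt(n - 3)     # standard error on the z scale
    return math.tanh(z - crit * se), math.tanh(z + crit * se)

lo, hi = correlation_ci(0.3, 500)
print(round(lo, 2), round(hi, 2))        # 0.22 0.38
print(round(lo**2, 2), round(hi**2, 2))  # squared: 0.05 0.14
```

Even with n = 500, the plausible proportion of variance explained spans roughly a threefold range, which is the uncertainty being described here.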

Jeremy

