Dear Jo,
I was never trying to suggest that you were trying to falsify anything nor that your question was inappropriate. I was simply trying to answer your question in as simple a way as possible. Many people overlook that evidence in court and statistical evidence have certain similarities. They may find it easier to understand instinctively the value of independence in the former than in the latter. I was offering the parallel as a way of grasping why simply doubling your data doesn't work. The issue has to do with independence.
Stephen
Stephen Senn
Professor of Statistics
School of Mathematics and Statistics
Direct line: +44 (0)141 330 5141
Fax: +44 (0)141 330 4814
Private Webpage: http://www.senns.demon.co.uk/home.html
University of Glasgow
15 University Gardens
Glasgow G12 8QW
The University of Glasgow, charity number SC004401
________________________________________
From: jo kirkpatrick [[log in to unmask]]
Sent: 03 August 2011 23:04
To: Stephen Senn; [log in to unmask]
Subject: Re: Sample Size Question
Hi Stephen and EBH
I wasn't trying to find ways to fool anybody or falsify anything or I would have asked in FraudsRUs not EBH.
The original question was how he [for his own personal reassurance, not to doctor the data] could tell if his data were valid before presenting the findings. That was the best option I could come up with without digging out Howell. I was hoping more of you would have more helpful answers for the original enquirer rather than further reasons why my 'off-the-top-of-my-head' suggestion wouldn't work [I had already admitted it was probably dumb but perhaps could be used as a last resort, and then only as self-reassurance, not for publication].
Small samples and the inability to randomise are facts of research life, and ones that we can't always avoid. Researchers need to know how best to handle these, especially when the questions are too important to ignore. It would be nicer if they never happened, or only happened to other researchers, but life doesn't work like that, does it ;o).
Huge thanks to Ted, Mark, Klim and Amy who each had some excellent advice that they explained very well so I am going with that.
Best wishes Jo
________________________________
From: Stephen Senn <[log in to unmask]>
To: [log in to unmask]
Sent: Wed, 3 August, 2011 20:28:14
Subject: Re: Sample Size Question
A court of law requires two reliable witnesses for a conviction. The prosecution only has one so they decide to get him to give the evidence twice. Any jury would see through this (I hope) and any scientist would see (I hope) that measuring the one subject twice does not give the same evidence as measuring two subjects once.
Stephen
________________________________________
From: Evidence based health (EBH) [[log in to unmask]] On Behalf Of Klim McPherson [[log in to unmask]]
Sent: 03 August 2011 20:14
To: [log in to unmask]
Subject: Re: Sample Size Question
Magnifying mathematically is entirely illusory, for the reasons I have explained. The point is to distinguish a signal from noise; magnification does not distinguish, it merely amplifies both.
Klim
From: jo kirkpatrick <[log in to unmask]>
Date: Wed, 3 Aug 2011 18:35:43 +0100
To: Klim McPherson <[log in to unmask]>
Cc: "[log in to unmask]" <[log in to unmask]>
Subject: Re: Sample Size Question
So apply common sense, caution and above all honesty, along with SPSS, Stata or whatever. Any effects that are important will usually be visible in the small sample to the researcher, and therefore to the reader, who can judge the validity for themselves, keeping the sample size in mind. If, in contrast, the only way to see the effect is to magnify it mathematically, be very wary about calling it an effect, regardless of small Ps or large Ts; it is probably noise. This is what I thought, but I just wanted to be sure about what options or extrapolations could be applied without losing validity or integrity, especially as some of my own samples [when I have them] are likely to be quite small.
I will be lucky if I can even find 12 participants for some of my addiction studies. Initially the research will be qualitative patient narrative ethnographies. However, I hope to support these with demographics and then do some follow-up studies involving mainly quantitative findings. As well as understanding why people use drugs, I want to learn what the conscious or unconscious reward for taking drugs is, or the perceived punishment for not taking them.
I would also like to measure and compare the long-term and short-term effects of diamorphine, Physeptone, and oral methadone on different areas of cognition, such as long-term and working memory, cognitive load, processing speed and digit span. I realise I am going to need much larger samples for most of these, at least 50 or 60. Would I be wasting my time attempting to obtain similar data from between 6 and 12 original participants? They are all that remain traceable, from the late 1960s and early 70s, of what was once referred to as the most researched group of people of their generation.
Best wishes Jo
________________________________
From: Klim McPherson <[log in to unmask]>
To: jo kirkpatrick <[log in to unmask]>
Sent: Wed, 3 August, 2011 17:16:46
Subject: Re: Sample Size Question
Precisely; under the 'significance testing' model. If a small sample reveals something that is very unlikely to have occurred by chance or other artefact, then sure, proceed with caution. But mostly the 'noise' from chance effects drowns any real effects, however important, in small samples, unless, as I say, they are so large as to be unambiguous. There is no getting out of that using conventional methods of inference, and if you want to escape you will still have to address the question, however obliquely, of the true role of random variation in your data; it's important. Common sense is the other main option.
From: jo kirkpatrick <[log in to unmask]>
Date: Wed, 3 Aug 2011 16:47:01 +0100
To: Klim McPherson <[log in to unmask]>
Subject: Re: Sample Size Question
So, Klim, are you saying there is nothing we can do with small samples, even though they might contain a cure for cancer? Or are there other options available?
Best wishes Jo
________________________________
From: Klim McPherson <[log in to unmask]>
To: [log in to unmask]
Sent: Wed, 3 August, 2011 9:48:14
Subject: Re: Sample Size Question
I think it is a genuine misunderstanding about the nature of statistical
inference which I have come across many times in my career.
Clearly the intention would not have been to falsify data - but it is
important to realise that, in the context of Neyman-Pearson inference,
that is what the suggested procedure would amount to.
Mathematically, what doing that would do to the data can be of no interest,
largely because multiplying the data by an arbitrary number has an
entirely predictable, and utterly uninteresting, effect.
We might as well imagine there are 47 Rupert Murdochs - then what? There
aren't, at least not that we observe!
klim
On 03/08/2011 09:30, "Dr. Amy Price" <[log in to unmask]> wrote:
>Dear Klim,
>
>It is possible you may have misunderstood the intent of the query. I don't
>think that was the reason the question was asked. The idea was to ask what
>this could do to the data mathematically, and this is important, as Ted
>suggests, because sample size and efforts to obtain it are not as clear-cut
>as they would seem. Real and random are not black and white, and it is
>important for a student to understand how probability works. Making up data,
>however creatively, is unacceptable ethically, and obviously any student
>who would take the time to write to a listserv on evidence based health
>would not be asking it for ways to falsify data.
>
>Best regards,
>
>Amy
>
>Amy Price PhD
>http://empower2go.org
>Building Brain Potential
>
>
>
>-----Original Message-----
>From: Evidence based health (EBH)
>[mailto:[log in to unmask]] On Behalf Of Klim McPherson
>Sent: 03 August 2011 04:20 AM
>To: [log in to unmask]
>Subject: Re: Sample Size Question
>
>Please ! Doing what is suggested is simply making up data - lazily.
>
>That's what is wrong with it !
>
>The 'significance' is premised on real random samples - which would be
>violently violated.
>
>Klim
>
>
>
>Klim McPherson Phd FFPH FMedSci
>Visiting Professor of Public Health Epidemiology
>Nuffield Dept Obs & Gynae & New College
>University of Oxford
>Mobile 007711335993
>
>
>
>
>
>On 03/08/2011 08:26, "Ted Harding" <[log in to unmask]> wrote:
>
>>On 03-Aug-11 01:25:54, jo kirkpatrick wrote:
>>> Please forgive what might be a really dumb suggestion but
>>> could we magnify the significance of say a T-Test by feeding
>>> the same 12 results through 4 or 5 times? Please don't all
>>> scream at once, I am only an MSc student!
>>>
>>> Best wishes Jo
>>> [The rest of the inclusions snipped]
>>
>>Jo,
>>If by this you mean stringing a set of 12 results together with
>>itself (say) 5 times, and then feeding the resulting 60 data
>>values into a t-test, then the answer is that you will indeed
>>magnify the significance!
>>
>>The basic reason is that the sample mean of the 60 will be the
>>same as the sample mean of the 12, while the sample Standard
>>Error of the mean will be 1/sqrt(5) times that of the 12.
>>
>>Hence the t-value for the 60 will be sqrt(5) = 2.236 times
>>the t-value for the 12. So if, say, your t-value for the 12
>>was 1.36343 (on 11 degrees of freedom) so that the 2-sided
>>P-value was then 0.20 (rather disappointing ... ), then if
>>you did the above you would get a t-value of 3.048722, and
>>the t-test procedure (being unaware of your deviousness)
>>would treat this as having 59 degrees of freedom, with the
>>resulting P-value then being 0.0034 which is much more
>>satisfying!
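Ted's arithmetic is easy to check numerically. A minimal sketch in Python (a one-sample t-test of the mean against zero, computed from scratch with the standard library; the data values are made up purely for illustration):

```python
import math
import statistics

def one_sample_t(data, mu=0.0):
    """t statistic for H0: population mean == mu."""
    n = len(data)
    se = statistics.stdev(data) / math.sqrt(n)  # standard error of the mean
    return (statistics.mean(data) - mu) / se

sample = [2.1, -0.4, 1.3, 0.8, -1.1, 2.6, 0.2, 1.7, -0.6, 1.9, 0.4, 1.0]
t12 = one_sample_t(sample)       # honest t on the 12 observations
t60 = one_sample_t(sample * 5)   # the same 12 values chained 5 times

# The mean is unchanged, but the standard error shrinks, so the t-value
# is inflated by roughly sqrt(5) ~= 2.24 (slightly more here, because the
# sample SD's n-1 denominator also changes when the data are duplicated).
print(t12, t60, t60 / t12)
```

The t-test procedure then compares the inflated t to a distribution with 59 degrees of freedom, exactly as described above, and duly reports a far smaller P-value.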
>>
>>Your question is not as "dumb" as it might at first seem.
>>While it is clearly invalid to create a large dataset by
>>chaining together replicates of a small one, until you get
>>one large enough to give you an extreme P-value, this is
>>not grossly different from going back to the population
>>again and again, repeatedly sampling 12 each time until
>>you again get the desired result.
>>
>>This is because, if the initial 12 were a fair sample,
>>future samples of 12 are unlikely to be grossly dissimilar
>>to the initial 12. So sooner or later (and with reference
>>to the above example probably with around 5 repetitions)
>>you could move from P=0.2 to P < 0.01 by repeated sampling.
>>
>>The aggregate sample at any stage is then a valid sample
>>of that size from the population, as opposed to the invalid
>>"sample" generated by recycling the original small one.
>>
>>What is invalid about the procedure is the intention to
>>keep going until you get a small enough P-value. This
>>will inevitably occur if you keep going long enough.
>>
>>No Null Hypothesis is ever exactly true in real life.
>>If it is off by some small amount, then a large enough
>>sample (and you may need a very large one) will almost
>>surely result in a P-value smaller than your target.
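The "keep sampling until significant" point can be shown with a small simulation (a hypothetical sketch: the null says the mean is zero, the true mean is off by only 0.2 SD, and batches of 12 are drawn until the aggregate t clears roughly the two-sided 1% normal cut-off of 2.576):

```python
import math
import random
import statistics

random.seed(1)

def t_stat(data):
    """One-sample t statistic for H0: population mean == 0."""
    n = len(data)
    return statistics.mean(data) / (statistics.stdev(data) / math.sqrt(n))

# The null is off by a small amount: true mean 0.2, SD 1.
sample, batches = [], 0
while batches < 500:  # safety cap; stopping happens long before this
    sample += [random.gauss(0.2, 1.0) for _ in range(12)]
    batches += 1
    if abs(t_stat(sample)) > 2.576:  # ~ two-sided P < 0.01 (normal approx.)
        break

print(f"stopped after {batches} batches (n = {len(sample)})")
```

Because the null is not exactly true, the stopping rule is guaranteed to fire eventually; the P-value at the stopping time says nothing honest about the evidence, which is precisely what makes the procedure invalid.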
>>
>>The real question is: How far off is it? Is this difference
>>of any interest? This leads on to the question: If the
>>smallest difference which is of practical interest is,
>>say, D, then how large a sample would we need in order
>>to have a good chance of a significant P-value if the
>>true difference were at least D?
>>
>>Also, the "How far off is it?" question can be addressed
>>by looking at a confidence interval for the difference.
>>Such broader approaches should always be used, rather
>>than simplistic reliance on mere P-values.
>>
>>Hoping this helps!
>>Ted.
>>
>>--------------------------------------------------------------------
>>E-Mail: (Ted Harding) <[log in to unmask]>
>>Fax-to-email: +44 (0)870 094 0861
>>Date: 03-Aug-11 Time: 08:26:20
>>------------------------------ XFMail ------------------------------
>