Hi Brian,
Your e-mail was really helpful (thank you) and this approach does make sense, but I wonder how odd it would read if I stated "... it was predicted that there would be a positive association between X and Y...", but then used a two-tailed test? Most professionals I know use one-tailed tests in conjunction with directional predictions. Adopting the approach below may seem incorrect to many (including journal reviewers, perhaps?), though I myself think it makes sense.
Thanks
Kathryn
>>> <[log in to unmask]> 22/03/2007 23:48:25 >>>
Hi Kathryn,
I expect Jeremy is worn out, so hopefully I can help with what I understand about the one-tailed issue (it's been such an active day for the list and I was away from email so missed most of it!).

I agree with Jeremy that it looks strange (sometimes suspicious) to read that someone used a one-tailed test, so you should be careful and default to two-tailed unless you have a good a priori reason (and maximising your chances of finding a statistically significant result usually isn't enough, unfortunately!). An example of an acceptable case might be if you are developing a new drug for dementia and you're testing it for the first time to see if it has cognition-enhancing effects. Here you could justify a one-tailed t-test for a beneficial effect of drug over placebo, based on the a priori decision that if you see no benefit with the drug, or it makes patients worse, either way you're going to ditch it and not develop it further.
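Incidentally, the drug-vs-placebo case above is easy to sketch numerically. This is just a minimal illustration with made-up data (not anything from this thread), assuming Python with numpy and scipy (1.6 or later for the `alternative` argument to `ttest_ind`):

```python
# Sketch: one-tailed vs two-tailed independent t-test on hypothetical data.
# Group names and effect sizes are invented purely for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
drug = rng.normal(loc=26, scale=4, size=30)      # hypothetical cognition scores
placebo = rng.normal(loc=24, scale=4, size=30)

# Two-tailed: a difference in either direction counts as evidence.
t_two, p_two = stats.ttest_ind(drug, placebo, alternative='two-sided')

# One-tailed: only a benefit of drug over placebo counts; a harmful effect
# is treated the same as no effect (the a priori "ditch it" decision).
t_one, p_one = stats.ttest_ind(drug, placebo, alternative='greater')

# When the observed difference lies in the predicted direction, the
# one-tailed p-value is exactly half the two-tailed p-value.
print(t_two, p_two, p_one)
```

The halving is the whole "gain" of a one-tailed test, which is why choosing it after seeing the data is frowned upon.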
With regard to your example, you need to be slightly careful interpreting the direction of the correlation in relation to what the scores on your tests actually mean. For instance, if scale A gives a point for every question answered correctly and scale B produces a score of the number of errors, you could have a high, but negative, correlation (i.e. higher scale A scores are associated with lower scale B scores), which would support your hypothesis.

Assuming in your case both scales gave scores for the number of questions correctly answered and you found a high negative correlation, this would not support your hypothesis but would certainly require some explanation (why do people get high scores on scale A but low scores on scale B if they're supposed to be measuring the same thing?). This is very different from finding no relation between the two scales (i.e. a correlation close to 0). It's very difficult to come up with a reason beforehand why you could ignore a counter-intuitive finding, therefore I would recommend you use a two-tailed test.
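To see concretely what a one-tailed test would do with a counter-intuitive correlation, here is a small sketch with invented scale scores (again just an illustration, assuming numpy and scipy; the one-tailed p is derived by hand from the two-tailed one so it doesn't depend on a recent scipy version):

```python
# Sketch: a one-tailed test for a *positive* correlation completely misses
# a strong *negative* one. Data are fabricated for illustration only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
scale_a = rng.normal(size=40)                          # hypothetical scale A scores
scale_b = -scale_a + rng.normal(scale=0.5, size=40)    # strongly negative relation

r, p_two = stats.pearsonr(scale_a, scale_b)            # two-tailed by default

# One-tailed p for the a priori prediction "r > 0":
p_one_pos = p_two / 2 if r > 0 else 1 - p_two / 2

print(r, p_two, p_one_pos)
```

The two-tailed test flags the (negative) relation as highly significant, while the one-tailed test for a positive association returns a p-value near 1 — exactly the counter-intuitive finding you would then be unable to report.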
I hope that helps,
Brian Saxby
Institute for Ageing and Health
Newcastle University
> Hi again Jeremy (I do stop working, but there are a few hours of the night left yet :-)
>
> Would you mind clarifying something for me again (I've been re-reading some of these e-mails and giving things some deeper thought)? You said "A one tailed test should be used when an effect in the opposite direction to that which was expected would be theoretically equivalent to a zero effect." However, I think I skimmed past the word "theoretically", which now gives me a number of ways to interpret that sentence. Can you provide one of your useful examples of such a situation? Or elaborate on this example... For one of my analyses where I am looking at convergence between tests of the same construct and would expect a high correlation, how would an effect in the opposite direction (i.e., a negative correlation) be theoretically equivalent to zero?
>
> Thanks
> ps. do you ever stop working? :-)
>
>>>> Jeremy Miles <[log in to unmask]> 22/03/2007 21:08:20 >>>
> On 22/03/07, Kathryn Jane Gardner <[log in to unmask]> wrote:
>> Thanks again Jeremy.
>>
>> In answer to your question about whether I'd claim a null result with means in the wrong direction, the answer is no. In my current research I have taken the approach of not running a t-test on groups with means in the wrong direction, but instead commenting that, say, males scored higher than females, and thus no further statistical analysis was conducted. Then I have briefly discussed this finding in the discussion (but only in the context of mean scores and not statistical sig., obviously). Do you agree with this approach (given a one-tailed test)? I suppose ideally you are saying use two-tailed tests, which I assume would address this problem.
>>
>
> Well, if you're going to do that (and you've proved that you're doing that), then I guess it's OK. But if I were meta-analysing the data, I'd be sad.
>
>> I like your definition of conditions for a one-tailed test. Why wasn't this given out to me years ago at undergrad level? Just out of interest, do you have a text reference for this kind of approach to defining one-tailed tests? I'd like to read more, as none of my books or hundreds of stats papers seem to adopt this approach, and google also fails me :-( Nothing like a bit of bedtime stats reading, though I have to admit I like stats (shall I lock my doors now? :-)
>>
>
> Abelson covers it, I think, in his book 'Statistics as Principled Argument'.
>
>> I don't know Patrick McGhee (he must've left the dept a while back), though the name rings a bell. I've been at UCLan nearly 6 years now and don't recall him being a staff member. But then you finished your PhD a while back, didn't you?
>>
>
> He's something important like assistant vice-chancellor. I don't
> think he's ever been in the psychology department. (I did my PhD at
> Derby, when he was HoD). (In 1999, if anyone's interested.)
>
> Jeremy
>
>
>
>> Kathryn
>>
>>
>> >>> Jeremy Miles <[log in to unmask]> 22/03/2007 20:38:46 >>>
>> On 22/03/07, Kathryn Jane Gardner <[log in to unmask]> wrote:
>> > Thanks Jeremy for answering my questions. Just to clarify, though, I was using "directional" to refer to one-tailed tests (a slip in terminology, as I realise that these aren't necessarily the same thing, though they are often used synonymously). Someone in my dept said that if you run a one-tailed test (say a t-test) and the means are in the wrong direction, then the t-test shouldn't be run, i.e., you inspect the group means first and then only run t-tests if results are in the direction you predicted. I think this approach is consistent with what you were saying about not reporting a sig result if it is in the wrong direction. I think?
>> >
>>
>> That's true, but if the means are in the wrong direction, would you *really* say that you have found nothing?
>>
>> Let's say that you do a test of intelligence on black and white children. All the evidence (that I know of) would suggest that, if you find a difference, it would be that the black children should score lower.
>>
>> So you run the test, and you find that the black children score significantly higher. Do you then say "Well, that's a null result. I found no effect."?
>>
>>
>> > I do see your point re: one-tailed tests, and you clearly don't see a lot of them in the papers you review. You said "You can make a directional prediction based on anything. But if you then use that directional prediction to argue that you can do a one tailed test, then that's (in my opinion) naughty." I think, like many, I have assumed that a one-tailed test is used when a directional prediction is made and there is enough theory and/or evidence to do so. But it seems you don't agree with this and do not advocate using one-tailed tests. As I said earlier, I haven't come across the use of two-tailed tests for directional predictions. Maybe I am missing the basic underlying principles of the use of one- and two-tailed tests and how they differ from directional and non-directional tests, but if I am then so are many of my colleagues! So... if you could define the conditions for a one-tailed test to be run, what would they be?
>> >
>>
>> A one tailed test should be used when an effect in the opposite direction to that which was expected would be theoretically equivalent to a zero effect.
>>
>> Jeremy
>>
>> P.S. Do you know Patrick McGhee, at UCLAN? He was my PhD
> supervisor.
>>
>>
>>
>>
>> --
>> Jeremy Miles
>> Learning statistics blog: www.jeremymiles.co.uk/learningstats
>>
>
>
> --
> Jeremy Miles
> Learning statistics blog: www.jeremymiles.co.uk/learningstats
>