Thank you for your answers.
I'm posting the replies so far.
Have a good one~
----------------------------------------------------------------------------
If you have only two groups, one-way ANOVA is functionally equivalent to
a t test for two independent groups, and in this case the F statistic is
simply the square of the t-statistic. Mathematically, the square of a
t[n] distribution (i.e. with n degrees of freedom) follows an F[1,n]
distribution.
To see an example for yourself (or to illustrate it to someone else), take a
table of the t distribution for a given significance level (say 0.05) and
degrees of freedom (say 5). Square the value and compare it with tables
of the F statistic where the numerator has 1 and the denominator has 5
degrees of freedom. You should get the same answer.
Regards
Miland Joshi (Mr.)
Department of Epidemiology and Public Health
University of Leicester
----------------------------------------------------------------------------
The results should be identical. I'd use the t.
Duncan
----------------------------------------------------------------------------
ANOVA or t-test: it does not matter. With large data you need not worry about
the distribution either. The outcome: a significant difference.
What you are trying to do is not a good idea. (Why have a powerful test
based on many units of data, when you can have a lot of tests, each with
little power?)
Ignore the business of significance, and estimate the difference between the
two levels of the factor (with high precision).
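Estimating the difference with precision means reporting an interval rather than a p-value. A short sketch (the summary statistics are invented for illustration; the 1.96 normal quantile is used since the sample is large):

```python
import math

# Hypothetical summary statistics for the two factor levels
m1, m2 = 52.3, 50.1
sd, n = 8.0, 200            # common SD, observations per group

diff = m1 - m2
se = sd * math.sqrt(2 / n)  # standard error of the difference in means

# Approximate 95% confidence interval (large-sample normal quantile 1.96)
lo, hi = diff - 1.96 * se, diff + 1.96 * se
print(round(lo, 2), round(hi, 2))
```

The interval tells you both that the difference is nonzero and, more usefully, how large it plausibly is.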
Nick Longford, DMU
----------------------------------------------------------------------------
> I asked about the sample size and the difference between
> ANOVA and T-test.
>
> Thank you for some of your comments.
> I still have something I want to know, though.
>
> That is sample size.
> In the t-test, for example, we have n (the sample size) in the
> numerator term. So as the sample size increases we get a bigger t value,
> saying that there's a difference between two independent samples even
> though we have almost the same mean and standard deviation.
>
> I'm just wondering if I made the right decision.
> Do more data give me a better result in this case?
----------------------------------------------------------------------------
The sample standard deviation is an estimate of the variability of the
population. Its expected value does not change with the sample size, because
it is always estimating the same quantity.
In carrying out a t-test, we do so by comparing sample means. We reject the
Null Hypothesis of no difference between treatments if the difference in
sample means is larger than we would have expected by chance given the
amount of information in the data. In comparing sample means, we need to
know the standard deviations of the sample means. This is known as the
standard error and is the standard deviation divided by the square root of
the sample size. As the sample size increases the standard error decreases
and we are more likely to be able to reject the Null Hypothesis. Thus, the
expected value of the means remains the same, but we are more able to say that
they are statistically different. This is not the same as saying that the
difference is of scientific relevance. To investigate this, we calculate a
confidence interval and compare the values in the confidence interval with
what we believe to be of scientific relevance.
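The effect of sample size on the standard error can be seen directly. In this sketch (the SD of 10 and mean difference of 1 are invented numbers held fixed across sample sizes), the standard error shrinks like 1/sqrt(n), so the same small difference produces an ever-larger t statistic:

```python
import math

# Hypothetical numbers: both groups have standard deviation 10, and the
# observed difference in means is fixed at 1 regardless of sample size.
sd, diff = 10.0, 1.0

rows = []
for n in (10, 100, 1000, 10000):
    se = sd * math.sqrt(2 / n)       # standard error of the difference shrinks...
    rows.append((n, se, diff / se))  # ...so the t statistic grows with sqrt(n)

for n, se, t in rows:
    print(n, round(se, 3), round(t, 2))
```

The identical 1-unit difference goes from nowhere near significance at n = 10 to overwhelmingly "significant" at n = 10000, which is exactly why the confidence interval, not the p-value, should carry the scientific conclusion.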
____________________________________________________________________________
AstraZeneca R&D Charnwood
Clinical Sciences, Bakewell Road, Loughborough, Leics LE11 5RH, England
Tel: +44 (0) 1509 645044 Fax: +44 (0) 1509 645563
[log in to unmask]
----------------------------------------------------------------------------
In one way, you are correct: As you increase your sample size, you
increase your chance of finding a statistically significant difference
between two means. In fact, ANY difference between two means will be
statistically significant, given large enough sample sizes.
However, the operative word in your question is "better." It is important
to remember the difference between PRACTICAL significance and
STATISTICAL significance. This is why emphasis has been
increasingly placed on the inclusion of effect size in the reporting of
research.
Cohen's two seminal articles do an excellent job in addressing this issue:
1) Things I have learned (so far). American Psychologist, 1990, v. 45, pp.
1304-1312
2) The earth is round (p < .05). American Psychologist, 1994, v. 49, pp.
997-1003
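A numerical illustration of the practical-vs-statistical distinction (all figures invented for the example): with a large enough n, a trivially small effect is "highly significant" while its standardized effect size stays negligible.

```python
import math

# Invented summary statistics: means differ by 0.2 of a unit, the common
# SD is 10, and each group has 100,000 observations.
m1, m2, sd, n = 100.2, 100.0, 10.0, 100_000

d = (m1 - m2) / sd          # Cohen's d: 0.02, far below the "small" benchmark of 0.2
se = sd * math.sqrt(2 / n)  # standard error of the difference
t = (m1 - m2) / se          # about 4.5: wildly "significant"
print(d, t)
```

The p-value here would be tiny, yet an effect of d = 0.02 is almost certainly of no practical consequence, which is the point Cohen's articles make.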
~~~~~~~~~~~~~~~~~
Yongkyu Shin
Office Water Resource
CSUS Civil Eng.
~~~~~~~~~~~~~~~~~