Hi All
Problem - for which I am grateful to all who have provided input thus
far.
122 tested subjects, with 28 severe and 15 lesser consequences.
122 controls, with 20 severe and 10 lesser consequences.
The most popular suggestion was to use chi-square testing, with one or two
degrees of freedom (my questions concerned degrees of freedom, Yates
corrections, etc.).
The results either way were very similar.
My result indicates about a 25% likelihood, i.e. not significant: the
extra 8 or extra 5 outcomes could be nothing more than "chance".
Even 4 extra serious effects (32 severe events) would be "tolerated" by such
statistics.
Statistics says 33 severe effects compared to 20 would be needed to prove
the testing was wrong.
I.e. the actual result obtained was about as likely as spinning two coins
and getting heads both times.
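For anyone wanting to check the arithmetic, here is a minimal Python sketch
(my own illustration, not the original calculation) of the 2x2 chi-square
with Yates correction, treating the table as severe vs. not severe:

```python
import math

# 2x2 table from the post: rows = tested/control, cols = severe/not severe
tested = (28, 122 - 28)
control = (20, 122 - 20)

def chi2_2x2(a, b, c, d, yates=True):
    """Chi-square statistic for the 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    rows = (a + b, c + d)
    cols = (a + c, b + d)
    chi2 = 0.0
    for obs, i, j in ((a, 0, 0), (b, 0, 1), (c, 1, 0), (d, 1, 1)):
        exp = rows[i] * cols[j] / n   # expected count under independence
        dev = abs(obs - exp)
        if yates:                     # Yates continuity correction
            dev = max(dev - 0.5, 0.0)
        chi2 += dev * dev / exp
    return chi2

def p_value_df1(x):
    """Upper-tail p-value of a chi-square variate with 1 degree of freedom."""
    return math.erfc(math.sqrt(x / 2.0))

chi2 = chi2_2x2(*tested, *control)
p = p_value_df1(chi2)
print(f"chi-square (Yates) = {chi2:.3f}, p = {p:.3f}")
```

With the Yates correction the p-value comes out around 0.26, consistent
with the "about 25%" figure quoted above; without it, nearer 0.20.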
One problem is that 28 and 20 relate to very serious outcomes - fatality.
Is there any way that slight discrepancies can be picked up as significant
without losing lives unnecessarily?
Obviously if 1,000 subjects gave the same proportions, then the outcome
would be beyond dispute, but the deaths needed to prove this - many
hundreds - might be unacceptable.
Ethics and medico-legal implications have been called into question and the
result is either one of "victory" for the experimenters or for those who
campaign for the lost people in the testing programme. At present the
testers are vindicated.
Is "common sense" more powerful than statistics here? Obviously not at the
moment.
Thanks to all those who took an interest and contributed.
Any comments still appreciated.
I am working on a suggestion of "pairing" of controls and experimental
people to see if this can give an earlier indication of benefit or harm.
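On the pairing suggestion: the standard test for paired binary outcomes is
McNemar's test, which uses only the discordant pairs (pairs where exactly
one member had the severe outcome). A minimal sketch - the discordant-pair
counts here are hypothetical, purely for illustration:

```python
import math

def mcnemar(b, c):
    """McNemar's chi-square (with continuity correction) for paired binary data.

    b = pairs where only the tested subject had the severe outcome,
    c = pairs where only the control did; concordant pairs are ignored.
    """
    chi2 = (abs(b - c) - 1) ** 2 / (b + c)
    p = math.erfc(math.sqrt(chi2 / 2.0))  # 1 degree of freedom
    return chi2, p

# Hypothetical counts: 18 pairs discordant toward tested, 10 the other way
chi2, p = mcnemar(18, 10)
print(f"McNemar chi-square = {chi2:.3f}, p = {p:.3f}")
```

Because it discards the (usually many) concordant pairs, a paired design
can sometimes detect a difference with fewer subjects than an unpaired one.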
Chi-square seems not to be a "useful" statistical test for small sample
sizes of around 250 when the differences between groups are only of the
order of 50% either way, with an effect on 10-20% of subjects - would this
be a fair comment? It seems to show significance either for large samples
(1,000s) or for large changes, not subtle differences.
Is there therefore a sensitive way to pick up such subtle changes in small
samples, or does this mean that subtle differences can effectively be
"buried" by small sample sizes and statistics?
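On the sample-size question, the usual answer is a power calculation: fix
the proportions you want to distinguish and compute how many subjects per
group are needed to detect that difference reliably. A rough sketch using
the normal-approximation (Cohen's h) formula with the observed proportions
28/122 vs 20/122; the 5% significance level and 80% power target are my
assumptions:

```python
import math

def cohens_h(p1, p2):
    """Cohen's effect size h for two proportions (arcsine transform)."""
    return 2 * math.asin(math.sqrt(p1)) - 2 * math.asin(math.sqrt(p2))

def n_per_group(p1, p2):
    """Approximate subjects per group for a two-sided 5% test at 80% power."""
    z_alpha = 1.959964  # two-sided 5% critical value of the standard normal
    z_beta = 0.841621   # one-sided value giving 80% power
    h = cohens_h(p1, p2)
    return math.ceil(((z_alpha + z_beta) / h) ** 2)

n = n_per_group(28 / 122, 20 / 122)
print(f"approx. {n} subjects per group needed")
```

On these figures the answer is of the order of 300 per group - i.e. the
observed difference, if real, would indeed need several times the current
sample before it could show up as significant.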
I do have a "feeling" that the test used was more than "wrong"!
Statistics appears to say it was OK.
Either I am mistaken or there may be very serious shortcomings in using
statistics in some applications.
Again I would welcome comments which a "non-statistician" can investigate
further.
John Fryer Chemist