I think they are mixing apples and oranges. To be consistent, all the patients
should have been scanned. A telephone interview is not going to tell me whether
any patient in that subgroup had an intracranial bleed. Furthermore, the small
validation sub-study they carried out on the proxy interview showed that 13% of
"clinically important brain injury" cases could be missed.
They lost 363 of the 1043 patients who were not scanned (34.8%). If I want to
stick to scientific methodology, I cannot make any assumptions about these
patients. And they cannot simply wipe them out and say that they "would have
seen these patients return if there had been adverse outcomes". If anything,
one should assign them the worst possible outcome and then make the
calculations accordingly.
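The "worst possible outcome" bound suggested above can be sketched with the
figures quoted in this thread. A minimal sketch, assuming a total cohort size
implied by the 88% follow-up figure quoted below; that implied size is an
illustration, not a number reported by the study itself:

```python
# Worst-case sensitivity sketch: assign every patient lost to
# follow-up an adverse outcome, using figures quoted in this thread.
# The total cohort size is an assumption implied by "88% follow-up",
# not a number reported by the study itself.

n_lost = 363                       # patients lost to follow-up
followup_fraction = 0.88           # quoted follow-up rate
n_total = round(n_lost / (1 - followup_fraction))  # implied cohort size

events_observed = 44               # quoted neurosurgical interventions

# Worst case: all lost patients had an adverse outcome.
worst_case_rate = (events_observed + n_lost) / n_total
print(f"implied cohort size: {n_total}")
print(f"worst-case adverse outcome rate: {worst_case_rate:.1%}")
```

Under these assumptions the worst-case rate comes out at roughly 13-14%, an
order of magnitude above the 1% observed, which is exactly why the lost group
matters.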
Why were 1358 patients not included? This is a significant number, not just a
few. There is the potential for introducing bias (that is why, for example, in
randomized studies the method of randomization should be clearly stated).
I still believe that a better study is required.
M. Della Corte
Staff Grade ED
Oxford
-----Original Message-----
From: Accident and Emergency Academic List [mailto:[log in to unmask]]
On Behalf Of Adrian Boyle
Sent: 03 February 2002 11:40
To: [log in to unmask]
Subject: Re: Warfarinised head injuries
This Canadian study was not so bad. The 33% of patients who did not get a CT
scan had a good proxy measure of neurological function at 14 days, which was
shown to be sensitive and specific when compared with those who had a scan.
363 patients were lost to follow-up, which represents 88% follow-up. I have
trouble believing that there are enough patients in this 'lost to follow-up'
group who had adverse outcomes to significantly bias the findings. The rate of
necessary neurosurgical intervention in the study group was 1% (44 patients).
If the rate was similar in the lost to follow-up group, then only 4 patients
would have been missed; if it was twice that, then only 7 patients. This would
not have altered the overall findings much. (Actually, 88% follow-up in a
cohort study on emergency patients is very impressive.)

The 1358 patients who were not included in the study would not affect the
validity of the results, but it is a matter of judgement whether this group
affects the generalisability of the findings to our practice. I am reasonably
happy that the group of patients studied is comparable to my own practice.
(Validity should always come ahead of generalisability; you can't generalise
an invalid result!)
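The proportionality argument above can be sketched directly (all numbers are
the ones quoted in this thread; the rate multipliers are illustrative):

```python
# Sketch of the sensitivity argument above: how many adverse outcomes
# the lost-to-follow-up group would contribute if its event rate
# matched, or doubled, the 1% observed in the followed-up group.

n_lost = 363
observed_rate = 0.01  # quoted rate of necessary neurosurgical intervention

for multiplier in (1, 2):
    missed = n_lost * observed_rate * multiplier
    print(f"{multiplier}x observed rate: ~{round(missed)} missed patients")
```

This reproduces the 4 and 7 quoted above (363 x 1% = 3.63, 363 x 2% = 7.26,
rounded to whole patients).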
GCS is usually the best and most consistent way of assessing neurology
following head injury. Nausea and lethargy are very soft symptoms and very
difficult to quantify. I still stand by my initial classification.
Adrian Boyle