This looks to be an interesting project, and I might have a look in the 'demo' account at some point.
However, having had a brief look at the information leaflet they make available on their website, I am a little puzzled as to why they specify a blanket average of 20 patient results for all the tests they offer to monitor, which include a number of enzymes.
From the Cembrowski paper I referenced earlier, I recall a great deal of discussion around deriving a suitable number of patient results to combine, based on the ratio of the patient biological variation (BV) to the analytical variation. This ratio matters because it indicates how much of the variation seen in the patient results originates from the analyser itself. If the ratio is small, most of the patient variation can be assumed to come from the analyser; if the ratio is large, the variation is assumed to be biological in origin. Where the variation is mostly biological, a large number of patient results must be averaged so that very high or low individual results are not mistaken for analytical error.
Having found a source that reproduces one of the nomograms from the Cembrowski paper (here: http://cdn.intechopen.com/pdfs-wm/14849.pdf , top of p.13), it is possible to see how this ratio relates directly to the ability to detect analytical error from patient results. As a hypothetical example using this nomogram...
For alkaline phosphatase (ALP), given a patient CV of 26.1% and an analytical CV of 2.6% (one would need to be sure these values come from the same concentration in real life, but they are reasonable), a ratio of 10.0 is calculated. From the nomogram in the paper, with a grouping of just 20 patient samples, the probability of detecting a systematic error of 2SD in the test is virtually zero.
From the equation I derived from the Cembrowski data, detecting the 2SD error at 50% PED (probability of error detection) would actually require averaging 200+ patient results.
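For what it's worth, a simplified power calculation reproduces both of these figures. This is my own sketch, not Cembrowski's exact method: it assumes normally distributed patient results and 3SD control limits placed around the stable patient mean, with everything expressed in analytical-SD units.

```python
from math import erf, sqrt

def phi(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def ped(ratio, n, shift_sd, z_limit=3.0):
    """Probability that a mean of n patient results breaches z_limit
    control limits, given a systematic analytical shift of shift_sd
    analytical SDs and a patient:analytical SD ratio of `ratio`.
    Total SD of one result = sqrt(ratio^2 + 1) in analytical-SD units."""
    se = sqrt((ratio ** 2 + 1.0) / n)  # SE of the mean of n results
    d = shift_sd / se                  # shift expressed in SE units
    return phi(-z_limit + d) + phi(-z_limit - d)

# ALP example from above: ratio = 26.1 / 2.6, approximately 10
print(ped(10.0, 20, 2.0))  # ~0.017: a 2SD shift is almost never flagged

# Smallest group size giving at least 50% detection of a 2SD shift
n = 1
while ped(10.0, n, 2.0) < 0.5:
    n += 1
print(n)  # 228 under these assumptions, consistent with 200+ results
```

Under the same assumptions, relaxing to 2SD control limits brings the required group size down to roughly 100, but at the cost of far more false rejections, which is presumably why wider limits are used for patient-based schemes.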
While grouping 20 patient samples may be sufficient in some circumstances (overkill, even, for some tests), I would be tempted to work the algorithm backwards to see just how much systematic error is *actually* being detected by this scheme for tests with a high patient:analyser variance ratio. I expect, though, that you'd need to see huge systematic errors, of the order of +/-15 to 20 SD, before AoN flagged an error for ALP with groups of 20 patient results.
I'd also be cautious about assuming analytical stability from this sort of calculation, assuming I haven't missed something obvious.
Andy Minett
Specialist Biomedical Scientist
Biochemistry
Pathology
Hull Royal Infirmary
------ACB discussion List Information--------
This is an open discussion list for the academic and clinical community working in clinical biochemistry.
Please note, archived messages are public and can be viewed via the internet. Views expressed are those of the individual and they are responsible for all message content.
ACB Web Site
http://www.acb.org.uk
Green Laboratories Work
http://www.laboratorymedicine.nhs.uk
List Archives
http://www.jiscmail.ac.uk/lists/ACB-CLIN-CHEM-GEN.html
List Instructions (How to leave etc.)
http://www.jiscmail.ac.uk/