Thanks a lot to everybody who replied to my query on estimating 
variability in a series of binary outcomes. The following is a 
summary of the replies.  

Thanks for all your help.

Paul

*********************************************************************

Query: variability in binary series

Dear Allstaters,

A clinical test results in binary outcomes, N (normal) and D
(defective).  If a deteriorating subject is tested repeatedly during
the course of the disease, the ideal sequence of outcomes should be
similar to NNNNNDDDDD (zero variability).  However, empirical data
are noisy (e.g. NDNNDDNDDD).

Is there a suitable statistic to quantify the variability / noise of
the outcome variable? 

I would really be grateful for any help / references.

Paul

---------------------------
Paul H Artes, MCOptom
Research Optometrist
University of Manchester

[log in to unmask]


**********************************************************************
Hi Paul, I work in agriculture, and we often get this type of data
in the context of mortality over time.  Typically we will observe a
sample of insects and record the number dead at, for example, the
start of each day, so we obtain a set of data that looks like:

Day	Total Number dead
---	-----------------
0	0
1	2
2	4
3	6
4	7
5	7
6	8
7	9
>8	10

What is actually being measured with this type of data is the time to
a response - in your case, defective.  If you could monitor the
individual continuously you could (in theory) say exactly when his/her
status changed from N to D.  If you had this detailed information,
then you could summarise it as the mean time to response, and the
standard deviation of the time to response, so that the random
variable TIME TO RESPONSE, t, would have a p.d.f., say f(t), and a
corresponding c.d.f. F(t), where both functions have unknown
parameters such as the mean and standard deviation.  Where you are
recording over
discrete time intervals, as you are here, you will know the proportion
of the study population that went defective in time interval (t1, t2),
and this is predicted by (F(t2)-F(t1)).  The unknown parameters can
then be estimated using maximum likelihood (the numbers responding in
each time interval are multinomially distributed with category
probabilities given by (F(t2)-F(t1)), for time interval (t1,t2)).  The
methodology is encapsulated in a Genstat procedure, CUMDISTRIBUTION. 
A group of us are preparing a paper on the subject at present, and I
have published the theory in a Genstat Newsletter article if that is
of any interest.  If this IS relevant I can run a sample set of data
through our procedure to give you an idea of what the results look
like.
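Phil's interval-censoring approach can be sketched in a few lines of Python (my own illustration, not the CUMDISTRIBUTION procedure itself): fit a normal time-to-response distribution to the insect table above by maximising the multinomial log-likelihood whose cell probabilities are F(t2)-F(t1).

```python
import math

def norm_cdf(x, mu, sigma):
    """Normal c.d.f. F(t), computed via the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def interval_loglik(mu, sigma, bounds, counts):
    """Multinomial log-likelihood: each interval (t1, t2) contributes
    count * log(F(t2) - F(t1))."""
    ll = 0.0
    for (t1, t2), n in zip(bounds, counts):
        p = norm_cdf(t2, mu, sigma) - norm_cdf(t1, mu, sigma)
        if p <= 0.0:
            return -math.inf
        ll += n * math.log(p)
    return ll

# Deaths per interval, recovered from the cumulative table above.
# The first cell is left-open so the cell probabilities sum to one
# (a lognormal would avoid putting mass at negative times).
bounds = [(-math.inf, 1), (1, 2), (2, 3), (3, 4), (4, 5),
          (5, 6), (6, 7), (7, math.inf)]
counts = [2, 2, 2, 1, 0, 1, 1, 1]

# Crude grid search for the ML estimates; a real analysis would use a
# proper optimiser (this is what CUMDISTRIBUTION automates in Genstat).
best = max(((m / 10.0, s / 10.0)
            for m in range(1, 101) for s in range(5, 81)),
           key=lambda ms: interval_loglik(ms[0], ms[1], bounds, counts))
```

`best` then holds the estimated mean and standard deviation of the time to response.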

All the best,

Phil Brain


*********************************************************************

You're looking for a single change-point, with an additional
feature p=Prob(indicates N | it is N) and q=Prob(indicates D | it is
D). 1-p and 1-q are the probabilities of an error.  (the vertical line
| can be read as "if").

  You have to estimate p and q.  You also have to estimate the
change-point, n[i], for each patient i.  Let the jth observation for
the ith individual be z[i][j].  Your model could be:

       Likelihood(z[i][1], ..., z[i][m] | n[i], p, q) =
            P(observe z[i][1]      | it is N) ... P(observe z[i][n[i]] | it is N)
          * P(observe z[i][n[i]+1] | it is D) ... P(observe z[i][m]    | it is D)

       This will be a function of p, 1-p, q and 1-q terms.

  For all individuals, the result is the product of the likelihoods
for each one (assuming independence).  Solving for the maximum
likelihood is the hard part: there are many variables, p, q and an
n[i] for each individual.  The EM algorithm (just the name of a
method used to solve such maximum-likelihood problems) is probably
the best choice.  I don't know if I can complete the work without a
larger investment of time.

   Noise (or error rate) would be related to 1-p (a false observation
of a D when it should have been an N) and 1-q (a false observation of
an N when it should have been a D).
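Pat's scheme can be prototyped with a simple alternating maximisation (a stand-in for a full EM; the function names and starting values below are my own): pick the best change-point n[i] for each subject at the current (p, q), then re-estimate (p, q) from the pooled counts, and repeat.

```python
import math

def seg_loglik(seg, state, p, q):
    """Log-likelihood of one segment given its true state ('N' or 'D')."""
    correct = p if state == 'N' else q
    ll = 0.0
    for z in seg:
        prob = correct if z == state else 1.0 - correct
        if prob <= 0.0:
            return -math.inf
        ll += math.log(prob)
    return ll

def fit_changepoints(series, n_iter=20):
    """Alternating maximisation, a simple stand-in for the EM idea:
    p = P(observe N | truly N), q = P(observe D | truly D);
    n[i] is the change-point of subject i."""
    p, q = 0.9, 0.9                      # assumed starting values
    cps = [len(s) // 2 for s in series]
    for _ in range(n_iter):
        # Step 1: best change-point for each subject at the current (p, q).
        for i, s in enumerate(series):
            cps[i] = max(range(len(s) + 1),
                         key=lambda n: seg_loglik(s[:n], 'N', p, q)
                                     + seg_loglik(s[n:], 'D', p, q))
        # Step 2: re-estimate p and q from the pooled counts.
        n_pre = sum(cps)
        n_post = sum(len(s) for s in series) - n_pre
        hits_pre = sum(s[:n].count('N') for s, n in zip(series, cps))
        hits_post = sum(s[n:].count('D') for s, n in zip(series, cps))
        p = hits_pre / n_pre if n_pre else 0.5
        q = hits_post / n_post if n_post else 0.5
        p = min(max(p, 1e-6), 1.0 - 1e-6)   # keep log-likelihoods finite
        q = min(max(q, 1e-6), 1.0 - 1e-6)
    return cps, p, q
```

On the two example series from the query, this places the change-point of the clean sequence at 5 and of the noisy one at 4, with 1-p and 1-q estimating the two error rates.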


Pat

********************************************************************

There is a test for randomness in a sequence that looks at runs.
In a binary sequence like the one you describe, it counts the
number of runs of the same outcome, so your first example
(NNNNNDDDDD) has 2 runs and your second example (NDNNDDNDDD) has
6 runs, of sizes 1, 1, 2, 2, 1 and 3 respectively.  The test asks
whether each outcome occurs at random with constant probability
throughout the sequence, and there are tables for when the number
of runs reaches statistical significance.  Thus the number of runs
(in a fixed-length sequence) may be one way of quantifying the
variability.

Reference: the one-sample runs test for randomness, p. 58 in
Siegel, S. and Castellan, N. J., Nonparametric Statistics for the
Behavioral Sciences, McGraw-Hill International Editions.
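The runs count and the large-sample version of the test are easy to compute (a sketch; for short sequences the exact tables in Siegel & Castellan should be used rather than this normal approximation):

```python
import math
from itertools import groupby

def runs_test(seq):
    """One-sample (Wald-Wolfowitz) runs test, normal approximation.
    Returns (number of runs, z statistic); few runs (negative z)
    mean low variability, many runs mean a noisy sequence."""
    runs = sum(1 for _ in groupby(seq))          # count maximal runs
    symbols = sorted(set(seq))
    n1 = sum(1 for x in seq if x == symbols[0])  # count of one outcome
    n2 = len(seq) - n1                           # count of the other
    n = n1 + n2
    mean = 2.0 * n1 * n2 / n + 1.0               # E[runs] under randomness
    var = 2.0 * n1 * n2 * (2.0 * n1 * n2 - n) / (n * n * (n - 1))
    return runs, (runs - mean) / math.sqrt(var)
```

For the two example sequences this gives 2 runs (z well below -2, i.e. fewer runs than chance) and 6 runs (z near 0, consistent with noise).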

-----------------------------------
Ms Hilary C. Watt
[log in to unmask]
St. Georges Hospital Medical School

*********************************************************************

You can treat this as a standard ROC curve, with 
observation number being the predictor & Normal/defective 
being the outcome.
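With observation number as the predictor, the area under that ROC curve reduces to its Mann-Whitney form, which is simple to compute directly (a minimal sketch; the helper name is mine):

```python
def roc_auc(seq):
    """AUC when observation number is the predictor and D is the
    event: the probability that a randomly chosen D observation
    falls later in the series than a randomly chosen N observation
    (the Mann-Whitney form of the area under the ROC curve)."""
    n_idx = [i for i, z in enumerate(seq) if z == 'N']
    d_idx = [i for i, z in enumerate(seq) if z == 'D']
    wins = sum(1 for i in n_idx for j in d_idx if j > i)
    return wins / (len(n_idx) * len(d_idx))
```

An AUC of 1 corresponds to the ideal NNNNNDDDDD ordering, while values near 0.5 indicate pure noise, so 1 - AUC is itself a candidate variability measure.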

Paul T Seed    MSc CStat   ([log in to unmask])     
Departments of Obstetrics & Gynaecology and Public Health Sciences,  
Guy's Kings and St. Thomas' School of Medicine, King's College London,


*********************************************************************

Your concern is that false test results are inhibiting your ability
to diagnose the presence of disease.  It is probably reasonable to
assume that the probability of getting a false positive result is
independent of patient (such an assumption, of course, is much harder
to justify for false negative results, which is why it may be best to
approach the problem from this angle).  If so, consecutive tests on
the same patient can be considered as independent Bernoulli trials
under the Null hypothesis that the patient does NOT have the
disease.

Let the probability that a positive test result is wrong be p.  The
probability of getting a particular series of results if the Null
hypothesis is true can then be computed from the formula for the
binomial distribution with parameter p.  

For example, suppose the probability of a patient without the disease
testing positive is 0.25.  The probabilities of getting only positive
results (i.e. no negative results) in a run of 1, 2 and 3 consecutive
tests are 0.25, 0.0625 and 0.0156 respectively.  So, using a
conventional 5% significance level, three consecutive positive tests
would be sufficient to reject the Null hypothesis and to diagnose the
presence of disease.

The probability of two of three tests coming up positive is 0.1406, so
the Null hypothesis is retained.  If the next test is positive, we now
have three out of four positive, the probability of which is 0.0469,
so it might now be reasonable to reject the Null hypothesis.  And so
on.
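The arithmetic above follows directly from the binomial formula; a two-line check (the function name is mine):

```python
from math import comb

def p_exact(k, n, p=0.25):
    """P(exactly k positive results in n tests) for a disease-free
    patient whose false-positive probability is p (0.25 above)."""
    return comb(n, k) * p ** k * (1 - p) ** (n - k)
```

This reproduces the quoted values to four decimal places: p_exact(3, 3) is 0.0156, p_exact(2, 3) is 0.1406 and p_exact(3, 4) is 0.0469.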

Probably too simplistic - but just a thought.

Dr. Brian Faragher
Senior Lecturer
Organisational Psychology and Health Group
Manchester School of Management
UMIST

*********************************************************************

Fit a logistic regression line in time to the data, and use the
regression coefficient on time as a sufficient statistic for the rate
of deterioration in each subject, weighted inversely by the square of
its standard error.  The deviance is a crude measure of goodness of
fit.
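The per-subject fit can be sketched with a small Newton-Raphson routine (my own illustration; real work would use a GLM package, and note that a perfectly clean sequence such as NNNNNDDDDD is separated, so its slope estimate diverges):

```python
import math

def fit_logistic(t, y, n_iter=25):
    """Newton-Raphson fit of logit P(D) = a + b*t for one subject;
    returns the intercept, the slope and the slope's standard error."""
    a, b = 0.0, 0.0
    for _ in range(n_iter):
        g0 = g1 = h00 = h01 = h11 = 0.0
        for ti, yi in zip(t, y):
            mu = 1.0 / (1.0 + math.exp(-(a + b * ti)))  # fitted P(D)
            w = mu * (1.0 - mu)                          # IRLS weight
            g0 += yi - mu            # gradient w.r.t. a
            g1 += (yi - mu) * ti     # gradient w.r.t. b
            h00 += w                 # entries of the information matrix
            h01 += w * ti
            h11 += w * ti * ti
        det = h00 * h11 - h01 * h01
        if det <= 1e-12:
            break
        a += (h11 * g0 - h01 * g1) / det   # Newton step
        b += (h00 * g1 - h01 * g0) / det
    se_b = math.sqrt(h00 / det) if det > 1e-12 else math.inf
    return a, b, se_b

# Rate of deterioration for the noisy example series:
y = [1 if outcome == 'D' else 0 for outcome in "NDNNDDNDDD"]
a, b, se_b = fit_logistic(range(10), y)
```

The slope b then summarises each subject's rate of deterioration, and across subjects it can be weighted by 1/se_b**2 as Tim suggests.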

Tim Cole

[log in to unmask]   Phone +44(0)20 7905 2666  Fax +44(0)20 7242
2723 Paed. Epid. & Biostats, Institute of Child Health, London WC1N
1EH, UK

*********************************************************************


