Fieller's theorem is a standard way to calculate a relative potency estimate
and its confidence interval. Briefly, the relative potency is rho = b0/b1,
with b0 the intercept and b1 the slope parameter of e.g. a linear regression.
Applying Fieller's theorem to obtain confidence intervals for the rho
estimate leads to a quadratic equation in rho (see e.g. Collett, Modelling
Binary Data, 2nd edition, p. 109). Solving this equation yields the point
estimate (b0/b1 - g*(v01/v11)) / (1 - g), where b0 and b1 are the parameter
estimates, v01 is the covariance estimate of intercept and slope, v11 is the
slope variance estimate, and g = (z*z * v11) / (b1*b1). I currently use this
type of approach on a data set. It appears that, at the extremes of the
assay range, the value of g is about 0.1, which leads to a biased estimate.
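For concreteness, here is a minimal Python sketch of the calculation described above; the function name and the illustrative numbers are my own, and z is the standard-normal quantile for the desired coverage:

```python
import math

def fieller_interval(b0, b1, v00, v01, v11, z=1.96):
    """Fieller limits for rho = b0/b1 (in the form given by Collett).

    b0, b1        : intercept and slope estimates
    v00, v01, v11 : var(b0), cov(b0, b1), var(b1)
    z             : standard-normal quantile (1.96 for ~95% coverage)
    """
    rho = b0 / b1                      # naive ratio estimate
    g = (z * z * v11) / (b1 * b1)      # Fieller's g
    if g >= 1:
        # slope not distinguishable from zero: the interval is unbounded
        raise ValueError("g >= 1, Fieller interval is unbounded")
    centre = (rho - g * v01 / v11) / (1 - g)   # the shifted point estimate
    disc = v00 - 2 * rho * v01 + rho ** 2 * v11 - g * (v00 - v01 ** 2 / v11)
    half = (z / abs(b1)) * math.sqrt(disc) / (1 - g)
    return centre, centre - half, centre + half

# Illustrative (made-up) numbers, not from my data set:
c, lo, hi = fieller_interval(b0=2.0, b1=4.0, v00=0.04, v01=0.01, v11=0.04)
```

With these numbers g is about 0.01 and the centre stays very close to the naive ratio b0/b1 = 0.5; increasing v11 (and hence g) pulls the centre away from b0/b1, which is the behaviour I describe above.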
My questions are:
1. Is there literature on this bias?
2. How should this bias be interpreted? I understand that if the variance of
the slope is relatively high compared to the slope parameter (leading to a
relevant g value > 0), the confidence intervals on the relative potency
estimate should get wider. However, intuitively I do not understand why this
high relative variability should lead to a systematic bias.
3. Is anyone aware of whether the observed bias can be used as an argument
against using Fieller's theorem for this kind of data analysis, especially
towards regulatory authorities?
Thanks in advance
Geert