I asked what was meant by the terms "major axis
regression" and "restricted major axis regression".
Thanks to Paul Wicks, Tim Cole, Alan Hutson, and Jonathan
Alsop, I now know:
Major Axis regression -
Line is fit by minimizing the sum of squared perpendicular distances
from the points to the line, so x- and y-residuals are taken into
account simultaneously. All data are given equal weight.
Use when the units and range of X1 and X2 are the same.
Reduced Major Axis Regression -
Slope of line is the geometric mean of the two slopes determined by
regressing Y-on-X and X-on-Y. The y-intercept is obtained by running
the line through the centroid. All data are given equal weight.
Use when units or range of X1 and X2 are different.
-- Paul
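Paul's two definitions can be sketched numerically. The following is an
illustration, not code from any of the cited sources: the major-axis slope
is taken from the first principal axis of the sample covariance matrix, and
the reduced-major-axis slope as sign(r) * SDy/SDx with the line forced
through the centroid. The function names are my own.

```python
import numpy as np

def major_axis_fit(x, y):
    """Major axis regression: minimize the sum of squared perpendicular
    distances to the line. The slope is the direction of the first
    principal axis of the sample covariance matrix."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    sxx, syy = np.var(x, ddof=1), np.var(y, ddof=1)
    sxy = np.cov(x, y)[0, 1]
    slope = (syy - sxx + np.sqrt((syy - sxx) ** 2 + 4 * sxy ** 2)) / (2 * sxy)
    intercept = y.mean() - slope * x.mean()
    return slope, intercept

def reduced_major_axis_fit(x, y):
    """Reduced major axis regression: slope is sign(r) * SDy/SDx, which
    is the geometric mean of the Y-on-X and X-on-Y OLS slopes; the line
    passes through the centroid (xbar, ybar)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    r = np.corrcoef(x, y)[0, 1]
    slope = np.sign(r) * y.std(ddof=1) / x.std(ddof=1)
    intercept = y.mean() - slope * x.mean()
    return slope, intercept
```

The geometric-mean property Paul mentions is an exact algebraic identity:
sqrt((Sxy/Sxx) * (Syy/Sxy)) = sqrt(Syy/Sxx) = SDy/SDx.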
My understanding of the reduced major axis regression line is the
least-products line, i.e. the line minimising the sum of the product of
residuals in the x and y directions. The slope of the line is the ratio of
SDy to SDx, and is the geometric mean of the direct and inverse regression
slopes. Teissier wrote about it in Biometrics (1947 or so), and said that
the standard error of the least-products slope was the same as for the
conventional regression slope, i.e. SDy/SDx times sqrt[(1-r^2)/(n-2)].
I've never known how he derived this.
This line is of course different from the Deming line, which minimises the
sum of squares of residuals perpendicular to it. However the slopes of the
two lines can be very similar.
If the measurement errors of x and y are the same, the least-products line
is also very similar to the regression line of x-y on x+y, i.e. the
regression fitted to the Bland-Altman plot.
-- Tim
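Tim's three quantities (the reduced major axis slope with Teissier's
standard error, the Deming line under equal error variances, and the
regression of x-y on x+y from the Bland-Altman plot) can be sketched as
below. This is an illustration under the equal-error-variance assumption;
the function names are mine, not from any cited paper.

```python
import numpy as np

def rma_slope_and_se(x, y):
    """Reduced major axis (least-products) slope, sign(r) * SDy/SDx,
    with Teissier's standard error: the same expression as for the
    ordinary regression slope, (SDy/SDx) * sqrt((1 - r^2) / (n - 2))."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n, r = len(x), np.corrcoef(x, y)[0, 1]
    ratio = y.std(ddof=1) / x.std(ddof=1)
    return np.sign(r) * ratio, ratio * np.sqrt((1 - r ** 2) / (n - 2))

def deming_slope(x, y):
    """Deming line with equal error variances: minimizes the sum of
    squared residuals perpendicular to the line."""
    sxx, syy = np.var(x, ddof=1), np.var(y, ddof=1)
    sxy = np.cov(x, y)[0, 1]
    return (syy - sxx + np.sqrt((syy - sxx) ** 2 + 4 * sxy ** 2)) / (2 * sxy)

def bland_altman_slope(x, y):
    """Slope of the OLS regression of (x - y) on (x + y), i.e. the
    line fitted to a Bland-Altman plot."""
    s, d = x + y, x - y
    return np.cov(s, d)[0, 1] / np.var(s, ddof=1)
```

On data whose x and y errors are comparable, the Deming and least-products
slopes typically come out close, as Tim notes.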
Maybe this reference will lead to a clue. Alan
93e:62167 62J05
Babu, Gutti Jogesh (1-PAS-S); Feigelson, Eric D. (1-PAS-AA)
Analytical and Monte Carlo comparisons of six different linear least
squares fits.
Comm. Statist. Simulation Comput. 21 (1992), no. 2, 533--549.
Summary: "For many applications, particularly in allometry and astronomy,
only a set of correlated data points (x_i, y_i) is available to fit a
line. The underlying joint distribution is unknown, and it is not clear
which variable is 'dependent' and which is 'independent'. In such cases,
the goal is an intrinsic functional relationship between the variables
rather than E(Y|X), and the choice of a least squares line is ambiguous.
Astronomers and biometricians have used as many as six different linear
regression methods for this situation: the two ordinary least-squares
(OLS) lines, Pearson's orthogonal regression, the OLS-bisector, the
reduced major axis and the OLS-mean. The latter four methods treat the
X and Y variables symmetrically. Series of simulations are described
which compare the accuracy of regression estimators and their asymptotic
variances for all six procedures. General relations between the
regression slopes are also obtained. Among the symmetrical methods, the
angular bisector of the OLS lines demonstrates the best performance.
This line is used by astronomers and might be adopted for similar
problems in biometry."
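The OLS-bisector named in the summary can be sketched as follows. The
closed form used here is the ordinary angle-bisector formula for two lines
through a common point (both OLS lines pass through the centroid); this is
my rendering, not code or notation from the paper itself.

```python
import numpy as np

def ols_bisector_slope(x, y):
    """Slope of the line bisecting the angle between the OLS(Y|X) line
    and the OLS(X|Y) line. For two slopes b1 and b2 the bisector slope
    is (b1*b2 - 1 + sqrt((1 + b1^2) * (1 + b2^2))) / (b1 + b2)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    sxy = np.cov(x, y)[0, 1]
    b1 = sxy / np.var(x, ddof=1)        # OLS Y-on-X slope
    b2 = np.var(y, ddof=1) / sxy        # OLS X-on-Y slope, in y = a + b*x form
    return (b1 * b2 - 1 + np.sqrt((1 + b1 ** 2) * (1 + b2 ** 2))) / (b1 + b2)
```

When the two OLS slopes coincide (e.g. on noiseless data) the bisector
reduces to that common slope; otherwise it lies between them.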
I have come across these terms in the population dynamics literature.
Briefly, they refer to two estimators of the slope in linear regression
that are alternatives to the one normally used. The construction of their
respective CIs also varies.
Some references:
Gaston KJ, Lawton JH (1987) A test of statistical techniques for detecting density
dependence in sequential censuses of animal populations. Oecologia 74: 404-410
Slade NA (1977) Statistical detection of density dependence from a series of
sequential censuses. Ecology 58: 1094-1102
Vickery WL, Nudds TD (1984) Detection of density-dependent effects in annual duck
censuses. Ecology 65: 96-104
Alan
------------------------------------------------------
Martin Bland
Prof of Medical Statistics
Dept of Public Health Sciences
St. George's Hospital Medical School
London SW17 0RE
England
Tel: (+44) (0)181 725 5492
Fax: (+44) (0)181 725 3584
Email: [log in to unmask]
Web pages: www.sghms.ac.uk/depts/phs/staff/jmb/jmb.htm
------------------------------------------------------