On Thu, 21 Dec 2000, Coomaren Vencatasawmy wrote:
> Can data collection or coverage be an end to itself? How can one separate
> data coverage and data analysis?
The central problem to me, as covered in the notes sent to Coomaren, is
a distinction that is too often ignored: between true research and
confirmation.
Statistical methods are taught and applied with little attention to their
assumptions and meaning. The t-test, for example, was developed in the
context of quality control and makes sense against a background of mass
production of a stable, uniform [and excellent] product. It is now
generally applied to any problem with two samples, thus making the strong
assumption that the mean is a "meaningful" parameter to compare.
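To make that point concrete, here is a minimal sketch (my illustration, not part of the original post) using Welch's two-sample t statistic, computed by hand so nothing is hidden. The data are invented: one sample from a stable process, one from a bimodal process whose mean happens to coincide. The test, which compares only means, reports no difference:

```python
import math
import random
from statistics import mean, variance

def welch_t(x, y):
    """Welch's two-sample t statistic: it compares only the means,
    implicitly assuming the mean is a meaningful summary of each sample."""
    se = math.sqrt(variance(x) / len(x) + variance(y) / len(y))
    return (mean(x) - mean(y)) / se

random.seed(42)
# Sample A: a stable, uniform process centred on 10 (the QC setting).
a = [random.gauss(10, 1) for _ in range(500)]
# Sample B: strongly bimodal (values near 5 or 15), yet with mean 10;
# here the mean describes almost no individual observation.
b = [(5 if i % 2 else 15) + random.gauss(0, 0.1) for i in range(500)]

t = welch_t(a, b)
# |t| is small: the test reports "no difference in means" even though
# the two processes are utterly unlike each other.
print(f"t = {t:.2f}")
```

The arithmetic is correct in both settings; it is the interpretation, "these samples are alike", that fails when the mean is not a meaningful parameter.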
Research, sensu stricto, always involves assumptions (we *assume* that any
observations we make are pertinent to the topic which is, by definition,
unknown at the start). There has to be a circularity in using the data
to test (at least show consistency with) the assumptions and *also* to
test the substantive issues.
While the same mathematics may be used, there has to be a difference in
interpretation between the situations (a) we collected these data and
look through them for any associations, (b) we collected these data with a
prior belief that x influences y but test for a null effect, and (c) we
know that x influences y so collected data to measure the size of that
effect with known precision. In this I term myself a quasi-Bayesian, in
that a mathematical Bayesian approach seems to me to introduce just
another layer of assumptions - that the estimated priors are valid.
Hence, the mathematics is a useful tool, but then I expect the researcher
to explain what it *means* in the real world if x influences y. In true
research we should employ techniques that make visible both patterns in
the data and exceptions to the patterns.
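A small sketch of that closing point, on invented numbers of my own choosing: report the overall pattern *and* the observations that do not fit it, rather than letting a single summary hide them.

```python
from statistics import mean, stdev

# Hypothetical measurements; the last value is an exception to the pattern.
y = [9.8, 10.1, 9.9, 10.2, 10.0, 9.7, 10.3, 14.9]

m, s = mean(y), stdev(y)
# Flag observations more than two standard deviations from the pattern,
# so the exceptions stay visible alongside the summary.
exceptions = [(i, v) for i, v in enumerate(y) if abs(v - m) > 2 * s]
print(f"pattern: mean {m:.2f}  exceptions: {exceptions}")
```

The two-standard-deviation cutoff is itself an assumption, of course; the point is only that the exception is reported rather than absorbed into the mean.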
R. Allan Reese Email: [log in to unmask]