From: Osher Doctorow [log in to unmask], Thurs. Oct. 18, 2001 6:58AM
Dr. Poses' contribution is very useful and interesting.
There is a certain difficulty in trying our best to attain objectivity,
although I think that it is very much worth the try. The difficulty
concerns the very foundations of philosophy, statistics, science,
mathematics, and research: our tendency to adopt implicit
assumptions which we do not state because they appear *trivial*, *obvious*,
and so on. The extreme case is adopting an axiom such as the parallel
postulate in Euclidean geometry, which was adopted by the Ancient Greeks
apparently because if you construct two line segments which are parallel, it
seems inconceivable that if you *project them infinitely* they would ever
meet (which they would for example if they were really drawn on various
curved surfaces and you followed them long enough). In probability and
statistics, however, this level has not even been reached. For example,
Bayesian conditional probability/statistics (BCP) divides probabilities, but
mainstream probability/statistics has never tried subtracting probabilities
in similar circumstances - I discovered a whole new alternative field to BCP
called Logic-Based probability-statistics (LBP) by doing the latter. If BCP
had specified that it was assuming division and not subtraction or addition
or multiplication in the appropriate circumstances, LBP could have entered
the mainstream at approximately the same time as BCP, not to mention other
types of probability/statistics.
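To make the contrast concrete, here is a small sketch (my own illustration, not a statement of LBP's published definitions): the Bayesian conditional probability divides P(A and B) by P(A), while a subtraction-based alternative in the same circumstances could use the standard probability of the material implication A -> B. Whether that particular subtraction formula matches LBP exactly is an assumption of this sketch.

```python
# Contrast a division-based and a subtraction-based way of relating
# two events A and B, given P(A) and P(A and B).
# The subtraction form is the standard probability of the material
# implication A -> B; treating it as representative of LBP is an
# assumption of this illustration, not a claim about the theory itself.

def conditional(p_a_and_b, p_a):
    """Bayesian conditional probability: P(B|A) = P(A and B) / P(A)."""
    return p_a_and_b / p_a

def implication(p_a_and_b, p_a):
    """Probability of material implication: P(A -> B) = 1 - P(A) + P(A and B)."""
    return 1 - p_a + p_a_and_b

p_a, p_a_and_b = 0.5, 0.2
print(conditional(p_a_and_b, p_a))   # 0.4
print(implication(p_a_and_b, p_a))   # 0.7
```

The two operations plainly give different numbers from the same inputs, which is the point: the choice of arithmetic operation is itself a modeling assumption that deserves to be stated.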
If addition, subtraction, multiplication, division, are too *trivial* to
mention in our assumptions, then we have a long way to go in theory vs
evidence. We can start by examining these and similar operations and
remembering to mention what alternative models could exist by altering model
assumptions and making sure that even the smallest operations are included in
our axioms where appropriate. That way, we help the competition, but in
the long run we also help ourselves.
Osher Doctorow Ph.D.
Formerly (and still intermittently in parts) California State Universities
and Community Colleges
----- Original Message -----
From: "Roy Poses" <[log in to unmask]>
To: <[log in to unmask]>
Sent: Thursday, October 18, 2001 6:39 AM
Subject: Re: theory or evidence? (fwd)
> ---------------------------------------------------------
> Roy M. Poses MD
> Brown University Center for Primary Care and Prevention
> Memorial Hospital of RI
> 111 Brewster St.
> Pawtucket, RI 02860
> USA
> 401 729-2383
> fax: 401 729-2494
> [log in to unmask]
>
> ----------------------------Original message----------------------------
> "Simon, Steve, PhD" <[log in to unmask]> said
> This is the problem of the biased assimilation effect. This is the tendency
> to look harder for flaws in papers that we disagree with and to overlook
> flaws in papers that we agree with. This has been demonstrated empirically
> (see MacCoun 1998 for several good examples). I see it most often when
> dealing with research that is highly emotional and where opinions are very
> strongly held. Gun control proponents are highly critical of John Lott's
> research that claims that concealed carry laws reduce crime and at the same
> time will cite much of the public health literature that claims that having
> a gun in the home increases your risk of injury and death. Opponents of gun
> control praise Lott's research and criticize the public health literature.
> Both sets of research are based on weak observational designs and have
> similar flaws and shortcomings. There is similar polarization about the
> relationship between IQ and heredity. One side claims that half the research
> is bad and cites the other half as proof. The opposite side will do the
> same, but will reverse the two halves.
>
> Is it possible that we are more critical of homeopathy research not
> because it is bad research, but because it supports a viewpoint that we
> disagree with? Are proponents of homeopathy too ready to overlook the gaps
> in the research?
> ------------------------------------------------------------------------
>
> Actually, I think this is a version of a cognitive bias that has been
> observed in other situations, "the illusion of validity," discounting
> information that goes against one's preferred conclusion.
>
>
> ----------------------------------------------------------------------
> I don't want to sound like one of the post-modernist thinkers, but I do
> believe that it is very hard to be truly objective in evaluating research.
> I've noticed disagreement even on something as basic as whether a given
> article provides supportive evidence for or refutes homeopathy (I don't
> have the citation handy). And I've noticed all too often the following
> request: "the conclusions of this paper are all wrong--help me find the
> flaw in their reasoning."
> -----------------------------------------------------------------------
>
> I don't think this is post-modernist, or extreme relativism. A
> post-modernist would say any objectivity is impossible, so don't bother
> trying to be objective; in fact, do your best to push your personal point
> of view regardless of its merits. A non-naive realist would say true
> objectivity is very difficult, and perfect objectivity may be
> unobtainable, but one should strive to be as objective as possible.
> Furthermore, knowing about cognitive biases may make it more possible to
> consciously minimize their effects on one's own thinking.