Ted Harding writes:
>For instance: out there among all you list members, self-declared
>as particularly interested in evidence -- can anyone point to any
>serious survey of the Medical literature which evaluates articles
>in terms of the likelihood that they got published as a consequence
>of pressures (political, commercial or prejudicial), and/or in terms
>of comparison to studies that did not get published for similar
>reasons?
It's not about medicine, but is psychology close enough? There's a wonderful
review article about bias in the interpretation of research:
Robert J. MacCoun, "Biases in the Interpretation and Use of Research Results,"
Annu. Rev. Psychol. 1998, Vol. 49: 259-287.
The full text of this article is available on the web at
http://biomedical.annualreviews.org/cgi/content/full/17/49/259
The gist of the article is that two people viewing the exact same evidence
can come up with opposite conclusions. So even if the journals themselves
are biased in their selection of papers, a far more serious source of bias
is our imperfect interpretation of these findings. In my humble opinion,
this has serious implications for the practice of evidence based medicine.
I hope I won't violate any copyrights by excerpting a few paragraphs here.
>The notion that observers' personal prejudices and interests
>might influence their interpretation of scientific evidence
>dates back at least to Francis Bacon (Lord et al 1979). But
>talk is cheap: it is easier to accuse someone of bias than to
>actually establish that a judgment is in fact biased. Moreover,
>it is always possible that the bias lies in the accuser rather
>than (or in addition to) the accused. There are ample psychological
>grounds for taking such attributions with a grain of salt.
>
>For example, research using the attitude attribution paradigm
>(see Nisbett & Ross 1980) suggests that we might be quick to
>"shoot the messenger," viewing unpalatable research findings
>as products of the investigator's personal dispositions rather
>than properties of the world under study. Research on the
>"hostile media phenomenon" (Vallone et al 1985, Giner-Sorolla
>& Chaiken 1994) shows that partisans on both sides of a dispute
>tend to see the exact same media coverage as favoring their
>opponents' position. Keltner & Robinson (1996) argue that
>partisans are predisposed to a process of naïve realism; by
>assuming that their own views of the world are objective, they
>infer that subjectivity (e.g. due to personal ideology) is the
>most likely explanation for their opponents' conflicting
>perceptions. Because this process tends to affect both sides of
>a dispute, Robinson, Keltner, and their colleagues have
>demonstrated that the gaps between partisans' perceptions in a
>variety of settings are objectively much smaller than each side
>believes.
Here's another telling section.
>Lord et al (1979) conceptually replicated Mahoney's results and
>extended them in several important ways. Because their study has
>inspired considerable research on these phenomena, it is worth
>describing their paradigm in some detail. Based on pretesting
>results, 24 students favoring capital punishment and 24 opposing
>it were recruited; each group believed the existing evidence
>favored their views. They were then given descriptions of two
>fictitious studies, one supporting the deterrence hypothesis,
>the other failing to support it. For half the respondents, the
>prodeterrence paper used a cross-sectional methodology (cross-state
>homicide rates) and the antideterrence paper used a longitudinal
>methodology (within-state rates before and after capital punishment
>was adopted); for the remaining respondents, the methodologies were
>reversed. Each description contained a defense of the particular
>methodology and a critique of the opposing approach. Students
>received and provided initial reactions to each study's results
>before being given methodological details to evaluate.
>
>Analyses of student ratings of the quality and persuasiveness
>of these studies revealed a biased assimilation effect: students
>more favorably evaluated whichever study supported their initial
>views on the deterrent effect, irrespective of research methodology.
>Students' open-ended comments reveal how either methodology,
>cross-sectional or longitudinal, could be seen as superior or
>inferior, depending on how well its results accorded with one's
>initial views. For example, when the cross-sectional design
>yielded prodeterrence results, a death-penalty proponent praised
>the way "the researchers studied a carefully selected group of
>states...," but when the same design yielded antideterrence results,
>another death-penalty advocate argued that "there were too many
>flaws in the picking of the states...." Having been exposed to two
>studies with imperfect designs yielding contradictory results, one
>might expect that Lord et al's participants would have become more
>moderate in their views; if not coming to an agreement, at least
>shifting toward the grey middle zone of the topic. But Lord et al
>argue that such situations actually produce attitude polarization.
>Thus, in their study, respondents in each group actually became more
>extreme in the direction of their initial views. Lord and colleagues
>argued that "our subjects' main inferential shortcoming...did not lie
>in their inclination to process evidence in a biased manner....Rather,
>their sin lay in their readiness to use evidence to bolster the very
>theory or belief that initially `justified' the processing bias."
This tells me that subjective overviews of research can be seriously flawed.
We all have a tendency to remember evidence that supports our basic
viewpoint and forget or discount evidence that opposes our viewpoint. This
makes meta-analysis and other systematic overview methods all the more
important.
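To make the contrast with subjective review concrete, here is a small sketch (my own illustration, not code from any study cited above, and the effect estimates in it are invented) of the fixed-effect inverse-variance pooling that underlies a basic meta-analysis: each study is weighted by the inverse of its squared standard error, so precise studies count for more regardless of which side of the question their results happen to favor.

```python
import math

# Hypothetical study results: (effect estimate, standard error).
# These numbers are made up purely for illustration.
studies = [(0.30, 0.15), (0.10, 0.20), (0.45, 0.25), (0.05, 0.10)]

# Fixed-effect inverse-variance weights: w_i = 1 / SE_i^2.
weights = [1.0 / se ** 2 for _, se in studies]

# Pooled estimate is the weighted average of the study effects.
pooled = sum(w * est for (est, _), w in zip(studies, weights)) / sum(weights)

# Standard error of the pooled estimate: sqrt(1 / sum of weights).
pooled_se = math.sqrt(1.0 / sum(weights))

# 95% confidence interval for the pooled effect.
ci_lo = pooled - 1.96 * pooled_se
ci_hi = pooled + 1.96 * pooled_se
print(f"pooled effect = {pooled:.3f}, 95% CI = ({ci_lo:.3f}, {ci_hi:.3f})")
```

The point of the exercise is that the weighting rule is fixed in advance by the precision of each study, which leaves no room for the biased assimilation Lord et al. documented, where the "better" methodology is whichever one produced the congenial result.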
You should also see
Gilovich, Thomas (1991). How We Know What Isn't So: The Fallibility of Human
Reason in Everyday Life. New York, NY: Simon & Schuster. ISBN 0-02-911706-2.
This is a delightful book that explains why we tend to believe in things
that have no scientific basis. Gilovich explains how our cognitive
perceptions cause us to see patterns in random data and how we are so ready
to believe what we want to believe. He discusses specific beliefs, such as
alternative medicines and ESP, which arise from biases in our perceptions
and in our expectations. He ends with a nice chapter on how to develop
critical thinking skills.
Steve Simon, [log in to unmask], Standard Disclaimer.
STATS - Steve's Attempt to Teach Statistics: http://www.cmh.edu/stats