Ray wrote:
>You have not replied to the particular question I had in mind. Perhaps, I
>was not clear in my language.
>
>My question is: Does the field of ethics have a decision model that would
>help (not necessarily resolve) ethical issues where the facts about actions
>and consequences are subject to the condition of uncertainty?
No.
Jim T.
p.s. aww, all right, why not . . . after all, it has been kind of quiet
lately. Plus it's Friday. :-)
Some might argue that utilitarianism provides such a model. The strengths
and weaknesses of utilitarianism are well known. Nonetheless, Bentham long
ago envisaged a type of pure moral calculus that could be applied to such
technical ethical problems as you have in mind. Game theory, decision
theory, rational choice theory, etc. are all modern attempts to mechanize
in Benthamic fashion what Aristotle much longer ago conceived of as
"practical deliberation." So depending on one's philosophical
proclivities, it's possible to answer the question you pose by referring to
Aristotelian "models" of deduction and phronesis (practical wisdom).
Now, is that the answer you want? I suspect not. I take it that you are
really more concerned with the "uncertainty" part of your question than
with the ethical reasoning part of your question. Does the professional
discipline of ethics *specifically* and/or directly help us with
uncertainty? No, again I suspect not--or at least, not that I can tell.
Now, to the extent that ethicists and environmental ethicists are generally
unacquainted with statistical decision theory, this represents a serious
gap in their formal theorizing. This is not to say, however, that there
are NO environmental philosophers out there doing good work in
this area. Kristin Shrader-Frechette comes to mind: for example, there is
a rather full treatment of statistically informed ethical reasoning in her
book, _Method in Ecology_ (co-authored with Earl McCoy), where she argues
that we should minimize type II rather than type I statistical errors in
environmental contexts where both errors are unavoidable. This of course
is a reversal of normal scientific procedure, and it places the burden of
proof on those who argue that human activities will have no lasting or
damaging ecological effects, rather than on those who predict negative
effects.
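For the statistically inclined, the tradeoff Shrader-Frechette points to can
be made concrete. In a test of the null hypothesis "no ecological harm," a
type I error is a false alarm and a type II error is a missed real harm; with
a fixed sample size, driving one down drives the other up. Here is a toy
sketch for a one-sided z-test (the numbers are hypothetical illustrations of
mine, not drawn from her book):

```python
from statistics import NormalDist

def type2_error(effect, sigma, n, alpha):
    """Beta = P(fail to reject H0 | a real harm of size `effect` exists)
    for a one-sided z-test of H0: 'no ecological harm'."""
    z = NormalDist()
    z_crit = z.inv_cdf(1 - alpha)        # rejection threshold under H0
    shift = effect / (sigma / n ** 0.5)  # standardized size of the true harm
    return z.cdf(z_crit - shift)         # mass left of the threshold under H1

# Hypothetical numbers: a harm of 0.5 units, sigma = 2, n = 25 samples.
for alpha in (0.05, 0.20):
    beta = type2_error(effect=0.5, sigma=2.0, n=25, alpha=alpha)
    print(f"alpha = {alpha:.2f}  ->  beta (missed harm) = {beta:.2f}")
```

On these made-up numbers, the conventional alpha = 0.05 leaves roughly a 65%
chance of missing a real harm, while relaxing alpha to 0.20 roughly halves
that. Accepting more false alarms in order to miss fewer real harms is
exactly the sort of reversal of the scientific burden of proof she has in
mind.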
I have no further comment on Shrader-Frechette's approach except to say
that it leaves unanswered important questions about things like human
nature (through no fault of hers or McCoy's, I might add--after all, you
can only do so much in a single work--and I commend the book to everyone on
this list). While an analysis of moral deliberation in light of the most
current logical and mathematical theories of decision is, on the one hand,
a commendable endeavor, it remains to be seen whether individuals *as*
individuals actually organize their decision-making in real life in this
way--that is to say, by making decision-theoretic utility
calculations that maximize expected outcomes according to preference and
desirability rankings.
To be sure, most people make seat-of-the-pants judgments every day in terms
of preference and desirability, but hardly in the purest Bayesian sense.
And let's consider the case of a committed cigarette smoker, say, who
continues to smoke despite the well established probability that smoking
will shorten his life (with the concomitant and obvious undesirability of
*that*). We would ordinarily hold this up as an example of irrational
behavior. The smoker, however, may in fact hold to a (somewhat) principled
belief that it is better to "live for today," since for all he knows, the
bus may hit him tomorrow, in which case it would be wholly irrelevant
whether he continues smoking or quits smoking (in terms of lengthening his
life). Now, surely this is a question of uncertainty (i.e. whether the bus
will hit him or not hit him, ever!). But if in fact a bus *does* hit him
tomorrow, can we really say whether the smoker acted irrationally in
choosing to continue smoking as opposed to quitting smoking, given the
ultimate actual outcome?
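To put the smoker's reasoning in decision-theoretic dress: expected utility
weighs each outcome's utility by its probability, and rationality is judged
by the expectation *before* the fact, not by whichever outcome actually
occurs. A toy sketch, with entirely made-up probabilities and utilities:

```python
def expected_utility(outcomes):
    """Sum of probability-weighted utilities over exhaustive outcomes."""
    return sum(p * u for p, u in outcomes)

p_bus = 0.01  # the smoker's subjective chance the bus gets him tomorrow

# Hypothetical utilities: pleasure now vs. expected life-years later.
u_smoke_bus, u_smoke_no_bus = 10, 60   # enjoyment now, shorter life
u_quit_bus,  u_quit_no_bus  = 0, 75    # no enjoyment now, longer life

eu_smoke = expected_utility([(p_bus, u_smoke_bus), (1 - p_bus, u_smoke_no_bus)])
eu_quit  = expected_utility([(p_bus, u_quit_bus),  (1 - p_bus, u_quit_no_bus)])
print(f"EU(smoke) = {eu_smoke:.2f}, EU(quit) = {eu_quit:.2f}")
```

On these numbers quitting still wins ex ante. But the verdict turns entirely
on the smoker's subjective probabilities and utilities--crank p_bus or the
pleasure of smoking high enough and continuing to smoke becomes the
expectation-maximizing choice--and an unlucky bus tomorrow does not
retroactively make the ex ante calculation irrational.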
This question of human irrationality I take to be one of the problem areas
for Bayesian approaches to deliberation in general and moral deliberation
in particular. And then when we consider what it would take when it comes
to large scale social or political decision-making . . . whew! How are we
to factor in all of the uncertainties, i.e. all of the individual, social,
political, economic, technological, (etc. etc.) unknowns of the future? Or
even the uncertainties of the present?
For example, how do we know that the global warming case is NOT unlike the
situation that the world faced during the decades of uncertainty about
nuclear war which existed during the time of conflict between the United
States and the U.S.S.R.? One might have argued then, as I am sure people
must have argued then, that the potential risks of all-out nuclear war
(with the resulting catastrophic harms that would ensue in the wake of such
war--i.e., the end of the world, in a very real sense) were *so* great,
that it was the absolute moral obligation of the United States to back down
immediately, capitulate to the Soviets, and move immediately and
unambiguously to reduce the threat of thermonuclear conflict at all costs.
And yet note what happens here: by focusing on the undesirability of a
single outcome (nuclear war), such precautionary thinking ignored what we
*now* know (in hindsight) were some of the political and economic realities
at the time: e.g. the suicidally bankrupting economic effects of nuclear
one-upmanship for the Soviet Union; the ultimate plausibility of "mutually
assured destruction" (MAD) as the basis for nuclear deterrence policy; and
so forth. Now, was the MAD policy of the United States during the Cold War
irrational? At some very deep level, yes, of course. Did it, in fact,
work? In the end, yes. Could that outcome have been predicted ahead of
time? No, not really.
Personally, I think the threat of nuclear war is *still* the greatest
problem facing humankind today--you want an issue to sink your teeth into,
that's the one. In comparison, the threat of global warming is almost
trivial. Just like the irrational smoker who knows he should quit smoking
in order to lengthen his life, we know we should cut greenhouse gas
emissions in order to reduce the unknown probability of uncertain future
harms. But if the bus hits tomorrow--i.e. if some terrorist group decides
to make their point by detonating a handful of nuclear bombs in major
cities around the globe--then I suspect we'll have bigger problems than
worrying about the glaciers melting.
And, as an aside, when I see expressions of sympathy and support for
terrorism as a generally valid approach to problem-solving among so-called
environmentalists and animal rights activists (including some people on
this list), I have little reason to believe that environmentalists as a
total set are any more moral than any other group of people.
To those who would say, "But global warming is serious!" and, "Desperate
times call for desperate measures!" I guess I would simply respond by
quoting Thoreau's observation . . . "But it is a characteristic of wisdom
not to do desperate things."
By the way, Ray, I think that as an ethical decision model, Thoreau's
precept here is sound . . . albeit in an "obiter dictum" sort of way.
:-)
As always, YMMV (your mileage may vary).
jt
>Your
>*particular* approach is not the question I had. Though it is interesting
>and suggestive of your bias. And we all have our own individual biases - I
>even have biases that I didn't know I had and haven't recognized!! :-)
>
>Ray
>