Dear Jeremy, Dear all,
'but that is one thing that actor-network means when we say agency, we mean that the thing acted in the world. It doesn't matter if it has 'intent' or 'agency'. It just has to action. The machine is held accountable for its actions insofar as it removes the guilt or parts of it from the person'
I am not entirely convinced this is Latour's argument, or the ANT that follows from it. The whole point of Latour, it seems to me, revolves around the opening of an existential possibility in which we could conceptualize a permanent state of indefinition between agency and intentionality, blurring the boundaries between humans and things. In that sense, in that conceptualization, whether things have agency does matter, even if that agency, or better said, even if determining the degree of (in)definition between action and agency in a given thing, remains a human property, in Latour and elsewhere.
As humans, I like to believe, we have the capacity to perform what Grant McCracken has called 'divestment rituals' and hold things accountable in order to expunge our guilt, or other parts of ourselves that we would rather displace and locate somewhere else. Denying the process of displacement invests things with an apparent agency of their own. This opens the possibility for things to come back at us, phenomenologically speaking.
People do kick cars, while cars are less able to kick people unless someone is behind the steering wheel. Doors don't slam us nearly as often as we slam doors. And this brings me to Fillipo's previous emails around AI, and with them to Gregory Bateson and the notion of deutero-learning.
I know I can learn about a thing. I know I can learn to learn about things by generalizing procedures of learning (deutero-learning). If I am not great at recognizing that, as a human, I have the capacity for deutero-learning (and at acknowledging this capacity as a part of myself, rather than divesting it), things will unravel around me quite often. They will appear to me as designing themselves in a state of full intentionality. I saw this many times working with psychotic patients, who are, by law, rendered far more accountable than the things that unravel around them. Except that I genuinely believe that plenty of psychotics live in a reality where things REALLY design themselves autopoietically, which is the reason I had to stop working with them and contributing to making them accountable in the eyes of the law.
Psychotics aside, this is my question. As humans we can write things such as a theory of deutero-learning. We learn about things. Our things can learn. We are able to learn about the processes by which we 'deutero-learn' with things and humans. Our current things, nonetheless, are still far less able to deutero-learn. However good our things may become at deutero-learning, my question is: will things ever be able to write a theory on the difference between learning and deutero-learning? I find this hard to believe. In that sense, I think I am closer to Ken.
I cannot begin to imagine a thing that would write down a theory of deutero-learning. If I saw a thing like that, I would probably run for my life. But maybe, exactly like a psychotic, I wouldn't need to run for my life anymore. In that world, things would just carry on emerging, designing themselves autopoietically and divesting in me whatever parts of themselves they wanted, guilt or something else.
Just a thought. Cheers.
--- On Sun, 19/6/11, jeremy hunsinger <[log in to unmask]> wrote:
From: jeremy hunsinger <[log in to unmask]>
Subject: Re: New direction in user-based design
To: [log in to unmask]
Date: Sunday, 19 June, 2011, 23:39
but that is one thing that actor-network means when we say agency, we mean that the thing acted in the world. It doesn't matter if it has 'intent' or 'agency'. It just has to action. The machine is held accountable for its actions insofar as it removes the guilt or parts of it from the person. The thing is that we actually do punish things all the time for their inactions or actions. People kick their cars, people slam doors, etc. etc.
The point is less that we need to pay any attention to the mental states of the human 'agents' any more than we need to pay attention to the mental states of 'trees' that fall in the road. The law treats these as different things with ontological status in some cases, but not always, as do humans in general.