In this evolving thread, I have enjoyed many posts. Without getting to
“mind” or any deeper issue, I tend to agree with Keith and Johann that human
beings design.
If I were to be more precise, I would argue that only a knowing agent can
design. Once again, I am going to define design using Herbert Simon’s (1982:
129, 1998: 112) definition of design as the process by which we “[devise]
courses of action aimed at changing existing situations into preferred
ones.” Without accepting all of Simon’s views on design, it is useful to
define design as the process we use to change existing situations into
preferred ones.
Since this involves intentionality, it therefore involves agency. That
brings design to an issue of sentience or knowing agents, that is, creatures
that can – or can be said to – 1) think or know, 2) possess will or the
ability to intend, and 3) act on their intentions.
For the purposes of this thread, I’m not going to argue over what such words
as “think,” “know,” “intend” or “act” mean. That leads away from the question
at hand.
Without positing “minds,” “beetles,” or “black boxes” – in fact without
entering the discussion on philosophy of mind or what it is to think – I
simply use the words “think,” “know,” “intend” and “act” in what I hope is a
common and reasonably well understood sense.
In this sense, I would accept that several kinds of primates “design.” In
this sense, they are often observed to undertake a process of thought, will,
and action to change existing situations into preferred situations, often by
using tools or by restructuring the environment. In her book Our Own
Metaphor, Mary Catherine Bateson (1972) reports an incident of a horse that
thinks. I have had dogs that reason things out, intend, and act on their
intentions.
In some cases, my thinking dogs made plans and carried them out based on
what they had learned. For example, our dog Jacob loved raspberries. When we
moved from Sweden to Norway, we discovered raspberry bushes at the bottom of
our garden. Watching my wife pick raspberries for dinner one day, he
realized that these were a food he enjoyed. The next day, Jacob set about
learning how to pick them for himself – carefully, using his teeth to avoid
touching the stem with his lips. From that time on, all raspberries up to
knee height were Jacob’s. Or, better put, there were no raspberries below
knee height left for us to pick. He still insisted on his share at dinner,
of course, since he never learned how to open the refrigerator to get out
the crème fraîche.
During raspberry season, Jacob learned that new raspberries came in
frequently when the sun was up – and they did not ripen when it was overcast
or rainy. Once raspberry season began, therefore, Jacob would check the sky
as soon as he went out in the morning. On sunny days, he would vanish
swiftly to the bottom of the garden. On overcast days, he would look around
and go about other activities.
I would argue that he designed the effective equivalent of a
pre-agricultural gathering system, or perhaps even an agricultural system
based on division of labor in which he did the harvesting. He may possibly
have developed an economic argument to go with it, since he also controlled
consumption, but we never talked over the fine points.
Kidding aside, though, he observed that we had raspberries, watched a human
pick them, and reasoned the process through in some way. I’m not making an
argument on what this reasoning process involved, but it involved more than
simple imitation, since he had first to recognize what it was that my wife
was picking, then learn to harvest the berries and avoid the prickles
without fingers or opposable thumbs.
Many years ago, I had another dog – Eleanor – who learned to cover things
up when she wished to hide the evidence of the minor misdemeanors that dogs
commit. She was also able to follow complex verbal and pointing instructions
in navigating buildings and construction sites on occasions when I was
several floors away and wanted her to go from one part of the building to
another.
That’s a long way round to make a point, but the point is important.
Intending creatures can plan and carry out acts. This is quite different to
machines that cannot intend.
Instead, machines embody and enact the intentions and plans of their
creators. The agency that intention represents is that of the designer,
inventor, engineer, programmer, etc. Or the smart orangutan that moves the
box or puts sticks together.
These tools and artifacts obviously shape the way we work because they form
the system and context of our actions, and the affordances they permit shape
the way we work or can work. This is obvious to anyone with a foundation in
social and behavioral science – you don’t need to be a science and
technology studies expert to know this. These systems are human systems,
involving
more than the individual designer. As Patrice Flichy (1995: passim) famously
stated, all technologies are social technologies.
To say, however, that we respond to and act within the context of technical
systems is not to say that technical systems design. I understand the
fascination of the actor-network theory and I recognize its value as a
thought experiment, but there is no evidence for the proposition that a
machine or a computer program intends or acts. Rather, machines and programs
embody and represent the intentionality of their designers, and influence
the way humans act and can act in using them.
The key difference becomes evident in the concept of repurposing. A human
being can repurpose a tool or tweak software. Neither tools nor software can
repurpose themselves.
Consider how most of us see Eric von Hippel’s (2005) book, Democratizing
Innovation. Von Hippel’s body of work is renowned for the way in which it
demonstrates user adaptation and user-led innovation. That’s exactly what
most of us talk about in relation to such issues as co-creation and
user-centered design. But then, if machines design, are we to believe that
machines design and users design while designers are merely artifacts of the
system?
I am aware that I can be accused of exaggerating Terry’s argument here or
Berger’s, but the directions of the two arguments tend in this way even
though I exaggerate for effect.
Humans design the systems. Then the systems do, indeed, affect us. One of
the most famous examples of this view was Winston Churchill’s 1943 speech on
whether – and how – to rebuild the House of Commons after the historical
building was destroyed in the bombing of London. Churchill argued that the
very shape of Commons was an aspect of British parliamentary democracy. “We
shape our buildings,” he said, “and afterwards our buildings shape us.” The
speech notably explained how the very specific shape of the rooms – large
enough for daily business but just a bit too small for the full seating of
all members – kept the flow of democracy working, and ensured that urgent
issues were debated to a crowded house, increasing the sense of urgency and
occasion.
In much the same way, military tactics have had to change dramatically with
the development of new weapons. The British won their decisive victory at
Agincourt not merely because they had archers, but because the French nobles
refused to adjust their tactics to the decisive firepower of the yew
longbow. The same thing happened in the United States Civil War, when the
rifled musket made gunfire more accurate at a far greater distance than
previously possible – the death rate was massive until armies shifted from
the massed volley tactics developed for the great battle squares of the
Napoleonic era.
The Commons did not rebuild itself. Neither could the longbow nor the rifled
musket exercise agency. These were human innovations, and the buildings,
bows, and muskets we humans designed shaped the ways that we had to work in
debate or in battle.
Now here I want to diverge from Johann just a bit. Johann argues, “design
thinking is not done by the mind.” I’d have to say that this is not quite
so, at least not in my view. Body memory, learned patterns, experience, all
play a role in design, but design thinking – the iterative planning activity
of design – is a thinking act. Here I’ve got to beg the question modestly:
if there is no “mind” but only a “beetle,” or a “black box” then perhaps it
can be said that design thinking is not a mental act, but rather a
“beetalic” act or a “boxic” act. But for now, I’m going to argue that design
thinking is an attribute of whatever organs of the human being “think,”
“know,” “intend” and “act.”
Whatever it is that we humans do when we design, machines cannot do it. This
may change after we reach the Kurzweil Singularity, but we are not at that
point yet.
So far, design is a human process, at least design as we work with it in
design schools, and definitely where designers work on behalf of other human
beings. For all his intelligence, Jacob was the very model of the romantic
genius, exercising his skills to shape the world as it pleased him. Well,
perhaps not quite. He was a loyal genius, and quite dedicated to the other
members of his pack. But when he invented or designed the processes he was
perpetually developing – raspberry harvesting was only one example – these
processes were generally intended to suit his own interests, intentions, or
needs. He did not act on behalf of clients or users.
For the rest of us, the profession of design generally involves design as
service for others. In this sense, design and design thinking are distinctly
human processes, often social, and always lodged in systems of some kind.
The technological supports and structures we use influence and affect those
systems, but the design act itself is a human act.
Merriam-Webster’s (1993: 343) defines design as: “1 a : to conceive and plan
out in the mind <he ~ed a perfect crime> b : to have as a purpose : intend
<he ~ed to excel in his studies> c : to devise for a specific function or
end <a book ~ed primarily as a college textbook> 2 archaic : to indicate
with a distinctive mark, sign or name 3 a : to make a drawing, pattern or
sketch of b : to draw the plans for c : to create, fashion, execute or
construct according to plan : devise, contrive…”
We must change the history and current meaning of the word design in some
essential way to argue that machines, computers, or software artifacts
design. These are design tools. These tools influence and assist the ways
that human beings design, often in vital – even decisive – ways.
Nevertheless, it is human beings that conduct the act of design and practice
the profession of design.
If this were not so, we would not hire and pay professional designers with
the experience and skill to make appropriate decisions on behalf of clients
and users. We would not have design schools that teach the requisite skills.
For that matter, we’d hardly need design research – only information
technology research or other forms of technological research to create the
machines that design for us.
This, however, is not the case. As long as design involves value-laden
decisions and intentionality, only creatures that “think,” “know,” “intend”
and “act” can design. This is even more substantially the case for
professional design, a service profession that requires designers to work
with clients and users in making decisions on artifacts, services, systems,
and processes to meet their needs.
These criteria limit the possibility of design to sentient creatures. These
criteria classify machines and software as tools in the design process,
quite distinct from designers as actors or agents.
Human beings are better suited to the professional practice of design than
orangutans or dogs. While Jacob was a smart fellow with high social
intelligence, he had a tough time dealing with abstract representations of
user needs, and he never did master the computer keyboard.
Even so, many – perhaps most – orangutans and dogs are distinguished by four
key qualities that no machine now has: the capacity to “think,” “know,”
“intend” and “act.”
These issues are at the heart of design. Computers and software, like books
or movies, can represent or in some sense be said to embody, model, or enact
“thought,” “knowledge,” “intention” and “action.” In limited circumstances,
machines may even be able to “act.”
In all cases, these representations and actions are conceived, planned, and
programmed by a human being. That human being is a designer, an independent
agent possessed of agency, will, and ethics.
Ken Friedman, PhD, DSc (hc), FDRS
Swinburne University of Technology
Bateson, Mary Catherine. 1972. Our own metaphor. A personal account of a
conference on the effects of conscious purpose on human adaptation. New
York: Alfred A. Knopf.
Flichy, Patrice. 1995. Dynamics of Modern Communication. The Shaping and
Impact of New Communication Technologies. London: Sage Publications.
Merriam-Webster, Inc. 1993. Merriam-Webster’s Collegiate Dictionary. Tenth
edition. Springfield, Massachusetts.
Simon, Herbert. 1982. The Sciences of the Artificial. 2nd ed. Cambridge,
Massachusetts: MIT Press.
Hippel, Eric von. 2005. Democratizing Innovation. Cambridge, Massachusetts:
The MIT Press.
Downloadable PDF edition available at URL: