
Subject: Re: Epistemological Differences -- Research in an Academic Discipline vs. Research in a Professional Practice
From: Terence Love <[log in to unmask]>
Reply-To: PhD-Design - This list is for discussion of PhD studies and related research in Design <[log in to unmask]>
Date: Fri, 17 Feb 2017 09:13:04 +0800

Hi, Ken,

Great post. I enjoyed reading it.

The statistics of samples of 6-10 participants are not well addressed in classic university research thinking.

The most common (and often assumed only) approach to doing statistics on data is the equivalent of 'going to a posh restaurant and only eating the bread roll' (Tawneee in Pratchett's Thud!, pp. 266-267).

The most common approach is to gather the data, apply the single research question to them, and see which way the numbers point and with how much confidence.
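As a quick sketch of that bread-roll approach (Python; SciPy assumed, and the numbers are invented for illustration):

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
scores = rng.normal(loc=52, scale=10, size=30)  # invented measurements

# The single research question: is the mean above 50?
t, p = stats.ttest_1samp(scores, popmean=50, alternative="greater")
print(f"mean = {scores.mean():.1f}, t = {t:.2f}, one-sided p = {p:.3f}")
# The test answers that one question and stops. Spread, outliers, subgroup
# structure and drift in the same sample go untouched - the rest of the meal.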

Any data, however, contains much more information about many other things. Typically, the majority of the information in data lies outside what is tested by the statistics.

This enormous amount of extra value in the data is usually ignored - and often with good reason, as it is considered irrelevant to the question at hand.

But not always.

Here are three examples.

1.  In academic assessment, the data of students' exam marks is an indication of the learning of each student. That is the most common use of that exam mark data.

However, the data also contains all sorts of other information including: the relative ability and biases of the markers, and how they change over time; the relative homogeneity of the student body and their learning; the relative quality, homogeneity and bias over time of exam setters; and a whole load of other factors. Drawing these other factors from the data enables accurate individualised correction of student marks and assessment of how valid the examination was as an examination.

This way of drawing more out of the data using Multifaceted Rasch Analysis is now commonplace where examinations have to be accurate and unbiased. Rasch analysis does not intrinsically depend on large sample numbers, because comparison between the measures elicited also gives a measure of confidence in their reliability (in effect, three analytical bites at the same data cherry). Interestingly, this extended use of data via statistical methods is relatively rare in universities and hardly ever taught to research students outside Education.
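For a flavour of the mechanics, here is a toy many-facet Rasch sketch (Python with numpy; the simulated marks, abilities and marker severities are all invented, and real analyses use dedicated software such as FACETS rather than this bare gradient ascent):

import numpy as np

rng = np.random.default_rng(0)
n_students, n_items, n_markers = 40, 10, 3

# Simulate "true" student abilities, item difficulties, marker severities.
ability = rng.normal(0, 1, n_students)
difficulty = rng.normal(0, 1, n_items)
severity = np.array([-0.5, 0.0, 0.5])  # marker 2 is the harshest

# Each (student, item) response is scored by one randomly assigned marker.
marker = rng.integers(0, n_markers, (n_students, n_items))
logit = ability[:, None] - difficulty[None, :] - severity[marker]
scores = (rng.random((n_students, n_items)) < 1 / (1 + np.exp(-logit))).astype(float)

# Joint maximum-likelihood estimation by plain gradient ascent.
th, b, c = np.zeros(n_students), np.zeros(n_items), np.zeros(n_markers)
for _ in range(3000):
    p = 1 / (1 + np.exp(-(th[:, None] - b[None, :] - c[marker])))
    resid = scores - p                      # observed minus expected
    th += 0.1 * resid.mean(axis=1)          # ability gradient step
    b -= 0.1 * resid.mean(axis=0)           # difficulty gradient step
    for m in range(n_markers):              # severity step, per marker
        c[m] -= 0.1 * resid[marker == m].mean()
    b -= b.mean(); c -= c.mean()            # anchor scales (identifiability)

# Roughly recovers the invented severities [-0.5, 0, 0.5] from marks alone.
print("estimated marker severities:", np.round(c, 2))

The point of the sketch is that marker bias falls out of the same exam-mark matrix that was collected only to grade students: no extra data collection, just more analysis bites at the same cherry.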

2. Deanonymisation methods offer ways of building huge bodies of tightly linked data about individuals. Again, it does not need large datasets. Again, the approach uses the additional implicit information embedded in data. SCL/CA use exactly these methods to get the data to influence the outcomes of national elections.
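A toy illustration of the basic linkage mechanism behind much deanonymisation, joining on quasi-identifiers (Python with pandas; every record below is invented):

import pandas as pd

# An "anonymised" dataset: names removed, sensitive attribute retained.
survey = pd.DataFrame({
    "postcode": ["6000", "6000", "6155"],
    "birth_year": [1961, 1984, 1961],
    "sex": ["M", "F", "M"],
    "diagnosis": ["hypertension", "asthma", "diabetes"],
})

# A small public dataset (think electoral roll) with names attached.
public = pd.DataFrame({
    "name": ["T. Example", "A. Sample"],
    "postcode": ["6000", "6155"],
    "birth_year": [1961, 1961],
    "sex": ["M", "M"],
})

# Joining on the quasi-identifiers re-attaches names to diagnoses.
linked = public.merge(survey, on=["postcode", "birth_year", "sex"])
print(linked[["name", "diagnosis"]])  # two people re-identified

Neither table is large; the identifying power sits in the combination of implicit attributes, which is the point.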

3. Data surveillance and information gathering in the standard ethical hacking model uses collected data about network responses from devices in multiple ways to infer even larger bodies of information about targets.

In the case of design research data collected from a small cadre of representative participants, this is very similar to the use of the conventional purposive sampling approach in statistics.

First, the data available from the participants are used in multiple roles in terms of the problem itself (see, for example, http://dissertation.laerd.com/purposive-sampling.php ).

Second is the secondary data explicitly or implicitly inferable from the decisions made about the membership of the purposive sample compared to populations.

Third are Rasch-like analyses that can derive meta-data from comparisons between elements of the first data. These inform not only the confidence in the data and what can be derived from it, but also identify biases in both the data and its collection.

Fourth are the analyses that compare and contrast the  above with more general data about populations.

Fifth are analyses that test, given the above, whether the sample size was big enough to infer the above findings.

Six...

Seven...

To recap: seen over-simplistically, in terms of the research methods of large-sample, associative, single-question analysis, the use of small samples in design research may appear faulty.

However, seen in terms of methods of analysis that make use of the rich complex body of information available in each datum and between data, small sample analysis methods can be both more reliable and offer better information.

Two final comments. There is increasing criticism of statistical analysis of big data as resulting in false findings. I've an excellent paper on this but can't put my hand on it at the moment. 

Second, the statistics providing the evidence for the Higgs boson are in essence the same kind of statistics used in David's design sampling. It is statistical analysis on a relatively small number of particles with characteristic behaviours that are very complex (and that is where the maths is). I'm assuming the behaviours of David's research participants are a little less mathematically complex :-)

Regards,
Terry

==
Dr Terence Love 
FDRS, AMIMechE, PMACM, MISI, MAISA
Director
Design Out Crime & CPTED Centre
Perth, Western Australia
[log in to unmask] 
www.designoutcrime.org 
+61 (0)4 3497 5848
==
ORCID 0000-0002-2436-7566





-----Original Message-----
From: [log in to unmask] [mailto:[log in to unmask]] On Behalf Of Ken Friedman
Sent: Friday, 17 February 2017 6:21 AM
To: PhD-Design <[log in to unmask]>
Subject: Epistemological Differences -- Research in an Academic Discipline vs. Research in a Professional Practice

Dear All,

In the recent thread on interdisciplinarity, the question of epistemology in research emerged. The discussion involved a contrast between the epistemological approach of research in academic disciplines and the epistemology of research in professional practice. Merriam-Webster’s defines epistemology as “the study or a theory of the nature and grounds of knowledge especially with reference to its limits and validity.” I’ve been thinking about this issue since it came up on February 9. It took some time to write this reply. I apologize in advance for the length — these are thorny issues and I’m doing my best to sort them through.

At the start, I should note that I do not discuss the role of experience and reflective practice in expertise. These are important enough issues to deserve a note of their own, coming later. Expert practice in most professions involves three dimensions: cognitive, affective, and psychomotor. I will address these in another post. At this time, I want to focus on the epistemology of research, and the different research epistemologies we see in academic disciplines as contrasted with professional practice. I’ve been struggling with this post for a week, and I want to get it off my desk.

Epistemology is a major formal subject field of philosophy, but I’m not going to address this here. For more on epistemology, try the Stanford Encyclopedia of Philosophy and the Internet Encyclopedia of Philosophy.

https://plato.stanford.edu/entries/epistemology/

http://www.iep.utm.edu/epistemo/

The question of research epistemology emerged in a conversation on interdisciplinary work across the boundaries of academic disciplines. Interdisciplinary research requires greater range than any single discipline. Those who work in a discipline immerse themselves in the knowledge of the field. Interdisciplinary scholars do not have the same grasp of what exists outside their home field. Interdisciplinary work requires sacrificing disciplinary depth for interdisciplinary scope. Working in an interdisciplinary frame therefore demands:

1) A responsible understanding of our limits;
2) A reasonable awareness of what we don’t know;
3) Enough humility to recognize our ignorance; and
4) Enough confidence to compensate for ignorance. 

To acknowledge our ignorance is neither shameful nor stupid. Stupidity involves acting despite ignorance.   

Some of these issues emerged in a conversation on interdisciplinary research and teaching. Toward the end of the thread, David Sless raised the question of test population numbers in the design field. While I want to reflect on this, some issues in this reply involve all professions.

All professions require action. Action involves uncertainty because we act in an uncertain world. The degree of uncertainty in any action may be large or small; it is always present. Uncertainty is a formalized version of ignorance. All professional action involves some degree of ignorance because we can never fully predict the outcome of our actions. We are uncertain about what we don’t know and can’t predict. To act, professionals must assess the level of risk. We must ask, therefore, “When does action under uncertainty shift from acceptable risk to stupidity?”

Professionals attempt to determine the best course of action while limiting the risks of action under uncertainty. This is a key to the difference between research in an academic discipline and research for professional action. Scholars and scientists engage in research to learn about the world. The typical complaint that professionals make against scholars and scientists is that we live in an ivory tower, separated from the world we study.

This is not entirely true. All professionals are citizens somewhere. Many of us are politically active. Most of us have personal engagements in the world outside the university. 

But neither is this complaint entirely false. Scholars and scientists do not typically act on the world within the frame of their research work. Physicists don’t create particles, at least not for more than the brief periods of time that they blink into existence at places like CERN. Sociologists do not create the worlds that they examine unless they examine the social life of sociologists at work in academia. We seek some measure of objectivity that permits us to disclose the reality of the world we examine. Archimedes sought a place to stand from which he could move the world; scientists and scholars seek a place to stand that permits them to study it. 

While Archimedes was a designer and an engineer, he was also a mathematician and a scientist. He wanted to understand the world. He designed machines that enabled him to move the world around him. He also planned and built those machines, guiding their operation. Archimedes lived long ago. We remember his successes and we don't know much about his failures.

We know much more about the professionals of our time, and we know how often they fail.  

Design is a profession. For many years, designers teaching studio design at university or in the design schools now designated as university-level professional schools argued that there was no need for research in design. Today, a great deal of research in design involves professional practice, and some practice-related research involves engaging in practical action rather than doing research. This, in part, stems from a difference in epistemology. Some of it also involves a confusion between simply reporting on our practice — what we did — and reporting research results. Those of us who work in academic disciplines outside design are aware of the confusion.

The differences between the disciplines and the professions involve habits of mind. The habits of mind we require for action in professional practice contrast, at least in part, with the habits of mind we require for research. There are critical differences between the habits of mind of the professions and those of the academic disciplines, and there are differences in the epistemology underlying those habits of mind.

In a post on these issues, David Sless writes: “Many traditional subject matter disciplines are concerned with accumulating knowledge. Design, by contrast, is concerned with solving open ended ‘problems’ either arising from a client or from a designer’s area of interest. Thus there are different epistemologies at work and different criteria applied to judging potential end points — success, failure etc.”

This is a major difference between subject matter disciplines and professions. This difference raises the question of professional judgment. Judgment is at the core of every profession, whether the profession is medicine or law, piloting airplanes or design. The goal of every skilled professional practitioner is to make wise decisions and to act on those decisions. Nearly all professional decisions involve uncertainty and action under uncertainty. Professionals seek to avoid stupid action despite a necessary degree of uncertainty. Skilled professionals, therefore, base decisions on heuristics linked to solid research and a sophisticated epistemology. In many professions, the epistemology of action rests on a deeper epistemology derived from the subject matter disciplines that form the foundations of the profession.

We see this form of epistemologically sound heuristic judgment in the case of expert designers. But relatively few designers are experts. Expert designers are a tiny percentage of the total population.

The lack of expertise amongst young designers explains, in part, the massive attrition rate in the design profession. By ten years after graduating with a design degree, nearly 90% of all design graduates have exited the field in which they took a degree. Larger economic trends account for some attrition. Bad luck and personal circumstances account for some attrition. But the greatest number of designers leave the field because they lack the expertise required for skilled professional practice in a highly competitive industry. 

Design educators sometimes claim that this attrition rate is actually a positive sign. They argue that design — like sociology, philosophy or mathematics — provides a good foundation for other careers. Perhaps this is so. Perhaps not. While the claim is used as an argument for design education, no one has undertaken a study that would enable us to confirm or reject this claim in contrast with other forms of education. 

We see attrition in every profession, but few fields seem to exhibit the massive drop-off rate we see in design. It is likely that a combination of factors accounts for lower attrition rates in other fields. These factors include selective admission for undergraduate programs, greater selectivity for professional school, professional certification for admission to practice, and carefully planned training programs for new professionals. Finally, many professions require continuing education.

If we contrast these factors for design against those for other professions, the differences are clear. Several forms of selective admission lead from undergraduate admission to professional practice for physicians, lawyers, and airplane pilots, followed by continuing professional education, and sometimes followed by ongoing recertification. 

While selective undergraduate admission typifies some design schools, the selection process often involves admission by portfolio rather than checking habits of mind. The same holds true for most graduate design programs. Standards are rarely as rigorous in any graduate design program as they are for lawyers, physicians, or for pilots. No one certifies designers. Few design firms have serious training programs for new professionals. Few design firms provide ongoing education, and nearly none require it.

These issues differ too much across professions to discuss them here. The complexities of admission and selection are difficult, so I won’t address them. I'm not calling for professional certification for designers. I am pointing to reasons for different attrition rates. Greater selectivity for admission to a profession means lower attrition in practice. 

I do argue for the benefit of professional development programs and continuing education in design firms. These would increase the human capital of the firm while enhancing the investment in junior designers.

What does expert professional practice require? It is interesting to observe that many expert designers have an interdisciplinary background. Many genuine experts — perhaps most — have a broader and deeper background than studio design alone. Expert designers have studio design skills. They also have skills, experience, and knowledge from other backgrounds. I know several such designers. One example will do: Per Mollerup.

Per began in business and became expert in statistics. Early in his career, he taught statistics at the university level. Before opening his design practice, he managed a company and worked as a publisher. Per understands research, and he understands business. When he opened his design firm — DesignLab — Per always insisted that 10% of any project budget be used for research on the project at hand. Without doing the sort of research one might require for generalization, Per made sound heuristic choices based on a deep background in research and research methods. 

What kind of background do designers require for skilled heuristic judgment? Professional judgment for effective action requires the skilled diagnosis of problems. This is the equivalent of clinical research in medicine. In discussing clinical research (Friedman 2003: 510-511), I once wrote:

—snip—

Clinical research generally involves specific forms of professional engagement. In the flow of daily activity, most design practice is restricted to clinical research. There isn’t time for anything else. Precisely because this is the case, senior designers increasingly need a sense of research issues with the background and experience to distinguish among classes and kinds of problems, likely alternative solutions, and a sense of the areas where creative intervention can make a difference.

In today’s complex environment, a designer must identify problems, select appropriate goals, and realize solutions. Because so much design work takes place in teams, a senior designer may also be expected to assemble and lead a team to develop and implement solutions. Designers work on several levels. The designer is an analyst who discovers problems or who works with a problem in the light of a brief. The designer is a synthesist who helps to solve problems and a generalist who understands the range of talents that must be engaged to realize solutions. The designer is a leader who organizes teams when one range of talents is not enough. Moreover, the designer is a critic whose post-solution analysis considers whether the right problem has been solved. Each of these tasks may involve working with research questions. All of them involve interpreting or applying some aspect or element that research discloses.

Because a designer is a thinker whose job it is to move from thought to action, the designer uses capacities of mind to solve problems for clients in an appropriate and empathic way. In cases where the client is not the customer or end-user of the designer’s work, the designer may also work to meet customer needs, testing design outcomes and following through on solutions.

This provides the first benefit of research training for the professional designer. Design practice is inevitably located in a specific, clinical situation. A broad understanding of general principles based on research gives the practicing designer a background stock of knowledge on which to draw. This stock of knowledge includes principles, facts, and theories. No single individual can master this comprehensive background stock of knowledge. Rather, this constitutes the knowledge of the field. This knowledge is embodied in the minds and working practices of millions of people. These people, their minds, and their practices, are distributed in the social and organizational memory of tens of thousands of organizations.

—snip—

You can find the full article at URL:

https://www.academia.edu/2508830/Friedman._2003._Theory_Construction_in_Design_Research_Criteria_Approaches_and_Methods

Let me offer two related cases.

W. Edwards Deming was a working professional with a strong background in mathematical physics and statistics. Deming played a vital role in the dramatic rise of Japan’s industrial base over the past half century (for a full account of Deming’s work, see Halberstam 1986). Deming’s methods still help people to solve problems in ways that usually keep the problems solved. This required a platform in the epistemology of science, an understanding of how things are in the world rather than how things seem to be in our experience of the world. While these overlap, effective action in the world requires engagement with the world as it is. Deming’s argument runs quite counter to the intuitive and purely experiential heuristics of much design practice: “Experience alone, without theory, teaches . . . nothing about what to do to improve quality and competitive position, nor how to do it. If experience alone would be a teacher, then one may well ask why are we in this predicament? Experience will answer a question, and a question comes from theory” (Deming 1986:19).  

For a deeper look into Deming’s ideas and influence, visit the website of the W. Edwards Deming Institute. The site provides a rich array of free resources at URL:

https://deming.org

Another example appeared last August, when economist Paul Romer published an article titled “Why It Makes Sense for an M.D. to Lead the World Bank.” Romer praised the choice of a physician to head the World Bank because of the skilled heuristic capacity of physicians to make swift choices in the face of difficult decisions with limited information. Romer wrote:

—snip—

When I heard in 2012 that the final choice for the president of the World Bank was between an outsider M.D. and two insider economists, I sided with the economists. What I did not see then, but do see now, is that the Bank had the same problem as the pharmaceutical company that Ginny joined. An early success generated lots of revenue. For years, it followed a strategy of “letting 100 flowers bloom” by spending more on research when new projects surfaced. Before Ginny arrived, it had reached the limit on research spending but had not yet developed the capacity to shut down projects that turned out to have only modest prospects. Because she had experience with clinical medicine and could look with fresh eyes, Ginny was better than the insiders at acting without delay, pulling the plug on even good projects, and freeing up resources for new ones with the potential to be great.

I now see that back in 2012, the Obama Administration made an inspired choice when it nominated the outsider M.D., Jim Yong Kim, to be the president of the Bank. This summer, I decided to join the Bank because I saw the cuts and restructuring of Jim’s first term and expected that he would be asked to stay on and complete the transformation of the Bank into the type of impatient organization that forces decisions, refuses to settle for modest success, and shuts things down without concern for the feelings of insiders. This will give it the chance to keep starting projects that are as unreasonably ambitious as the ones that turned out so well in Singapore, South Korea, China, and India. Some of the new ones, perhaps many, will disappoint and will in turn be stopped. As long as everyone commits to ‘failing fast,’ and no one treats a shut-down as shameful or an insult to honor, failures cost little. What matters are the dramatic successes. It takes only a few to make a world of difference.

—snip—

You can find Romer’s full article at URL: 

https://paulromer.net/md-to-lead-worldbank/

Now we come to a central issue. David Sless wrote:

—snip—

For example, in my own field of information design, we routinely test our designs with the people who have to use them. So we do things that look like the techniques used in social and experimental science, but our approach to sampling people for our research differs quite radically from those of science. Scientists typically sample in order to make generalisations about a population. We typically sample in order to make decisions. 

So a scientist asks how many people do I need to sample so that I have a representative sample of the population about which:

1. I can collect enough data about which I have an acceptable statistical confidence,
2. I can claim are generalisable to the populations.

We, as designers, ask how many and what types of people do we need to involve in our testing so that we can confidently arrive at a decision about what is or is not working in our designs. Statistical confidence plays no part in our epistemology. Instead we use principles of inclusive design to recruit test participants—people who are likely to have difficulty using our designs—and we stop testing when we think we are no longer collecting any new data.

Typically this figure in our work is around 6 to 10 participants. These figures of 6 to 10 have been widely replicated and reported on in the design research literature. I could elaborate and explain why the figures are so small, but that is a different topic. That the figure is so small might be the subject of scientific investigation, but that is not my immediate concern.

—snip—
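Before turning to the issues, it may help to make David’s stopping rule concrete. A minimal simulation sketch (Python; the issue frequencies are invented assumptions, and the "3 quiet participants" threshold is mine, not David’s):

import random

random.seed(0)
# Invented usability issues and the share of participants each affects.
FREQ = [0.5, 0.4, 0.3, 0.2, 0.15, 0.1, 0.05, 0.02]

seen, quiet, n = set(), 0, 0
while quiet < 3:  # stop once 3 successive participants reveal nothing new
    n += 1
    found = {i for i, f in enumerate(FREQ) if random.random() < f}
    quiet = 0 if (found - seen) else quiet + 1
    seen |= found
print(f"stopped after {n} participants; {len(seen)} of {len(FREQ)} issues seen")

Under assumptions like these, the rule tends to stop in single digits, which is consistent with the 6 to 10 figure; note that the rare issues are often still unseen when it stops.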

These points raise several questions. Here are the issues that concern me:

1) We know that designers often rely on between 6 and 10 test participants. The literature shows that this is common practice. The problem is that common practice may not be best practice. 

It is not clear that a test population of 6 to 10 participants is reliable for everything that designers test. Some designers produce solutions based on intuition alone. Some designers test solutions on 6 to 10 participants. What we generally don't know is what would happen with deeper, richer testing across a larger population. In each case, the question is whether the solution works in the real world for which designers create it.
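One well-known model from the usability literature (Nielsen and Landauer’s problem-discovery formula) makes the limits explicit: if a problem affects a proportion p of users, a test with n participants surfaces it with probability 1 - (1 - p)^n. A quick sketch in Python, with illustrative values of p:

# Chance that a problem affecting a share p of users appears at least
# once among n test participants: 1 - (1 - p)**n.
for p in (0.50, 0.31, 0.10, 0.01):
    cells = ", ".join(f"n={n}: {1 - (1 - p) ** n:.2f}" for n in (3, 6, 10, 30))
    print(f"p = {p:4.2f} -> {cells}")
# Frequent problems surface within 6-10 participants; a 1-in-100 problem
# usually escapes a small test, which is what richer testing would catch.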

In research, the word "replicate" generally carries a different meaning to saying that "figures of 6 to 10 have been widely replicated." Replication means replicating appropriate studies with comparable methods to see whether the results are comparable. This requires comparative testing. In this case, we would need to test different size populations to see whether the results are the same with six to ten participants as compared against smaller or greater numbers of test participants. David is not saying this. He is saying that nearly everyone in the design field does the same thing, and they are all happy with what they are doing now. 

It is important to distinguish between the epistemology of a literature that reports common practice and the different epistemology of a literature that leads to best practice.

The medical literature of the 1700s shows that many physicians used the practice of letting blood as the treatment for many ills. The literature of the era reports this as effective treatment. In the same way, the medical literature of the 1800s shows that physicians treated patients with calomel and other toxic compounds containing mercury. 

Using mercury on human patients is, in fact, a recent practice in dentistry. The dental literature showed arguments for mercury-based amalgams until quite recently. Dentists who refused to let patients take extracted teeth home due to the toxic effects of mercury-based amalgam did not seem to think that mercury posed an equal risk in the human mouth!

The literature of a field can demonstrate conclusively that a vast majority of professional practitioners do something even when they are doing the wrong thing. These professionals do not seek to do the wrong thing. They act under the mistaken impression that their actions are correct.  

What we need is literature that shows us how to decide the appropriate number of participants in any test. What is the best number of people to test for which kind of design decision?

Between 6 and 10 participants can be useful. David uses 6 to 10 participants. His projects and his book (Sless and Wiseman 1997) show that these numbers can lead to outstanding results.

What no one seems to have examined is an appropriate method for deciding on appropriate numbers. The issue of best practice in testing is more than a professional decision for immediate action. It is a research question. As David notes, this is a scientific question. To get an answer that we can trust requires comparative testing across a large enough number of instances to warrant confidence in the result.
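A sketch of what such comparative testing might look like in simulation: run two independent tests of the same design at each sample size and check whether they agree on which issues exist (Python; the problem frequencies and the agreement criterion are invented assumptions):

import random

random.seed(2)
FREQ = [0.5, 0.3, 0.1, 0.05]  # invented problem frequencies

def issues_found(n):
    """Issues detected by one test with n participants."""
    return {i for i, f in enumerate(FREQ)
            if any(random.random() < f for _ in range(n))}

for n in (6, 10, 30):
    agree = 0
    for _ in range(2000):
        a, b = issues_found(n), issues_found(n)  # two independent tests
        agree += a == b
    print(f"n = {n:2d}: two independent tests fully agree {agree / 2000:.0%}")

Low agreement at a given n would suggest that samples of that size are not reliable for that class of problem; real comparative testing would of course use human participants rather than simulated frequencies.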

2) As Jens Bernsen (1986) said, “the problem comes first.” The problem must determine the solution. A robust solution emerges from a problem based on the nature of the problem. 

In many cases, designers seem to impose a solution on a problem rather than studying the problem to disclose the probable best solution. The economics of the design business is one reason for this. Quick, intuitive work is more profitable for a professional design firm than painstaking work based on clinical research. Careful inquiry to disclose the true nature of a problem takes time. Time is money. Once a contract is signed and a budget specified, reducing time increases profit.

Few design contracts allow increased budget for the expense of more testing. We sometimes see budgets of this kind within the closed boundaries of organizations that do their own testing and design. If external contracts were to grow with a need for greater testing, design firms would turn testing into a profit center. They would expand their testing capacity accordingly. 

The creation of testing as a profit center would expose clients to a new and different kind of risk. Instead of the risk that clients face with insufficient testing, they would face some of the risks we see in medicine and law. Many medical practices and hospitals run more tests than required to increase profits, and some run more expensive tests than needed to make each test more profitable. In a situation that could make serious design testing profitable, some design firms would add serious research to the repertoire of offerings. Others might demonstrate a pattern of behavior similar to the “profit center” approach of profit-oriented medical practice. 

Many law firms charge clients in ten-minute increments for every conversation, every information search, and every task. In contrast, design firms often bid on price, and they bid for the entire project. As a result, the time required for serious research often comes at a cost to the design firm, rather than to the client.

In today's world, design firms without the capacity for serious research sometimes promise research-based results. I have also seen design firms partner with qualified research teams to bid on a large project. After the design firm wins the contract, one way to increase profits is to cut research time from the budget on the basis that research takes too much time. It is more common to see design firms promise research while doing inadequate testing. In other words, designers deliver what they believe that they have promised without understanding that they do not deliver research-based results.

As greater numbers of design schools offer courses with inadequate research training, it is increasingly common for design firms to do this. The failure of design firms to engage in serious research is one effect of the increased number of PhD programs in design. In many of today's PhD programs, designers can earn a PhD degree without foundations in research methods and comparative research methodology. The failure of designers to understand research has consequences in academic life and in the business world.

Don Norman (2010) discusses some of these problems in an article titled “Why Design Education Must Change.” This article is available at URL:

http://www.jnd.org/dn.mss/why_design_education.html

At the same time, many businesses are now developing design capacity within the firm (Muratovski 2015). Many of these organizations already have research capacity. As they develop design capacity, they bring serious research and design together. For more information on this trend, visit URL:

http://www.sciencedirect.com/science/article/pii/S2405872615300265

3) Without developing this issue fully, it is worth noting that a great deal of the professional practice in any field involves solving small, limited problems. In comparing the academic disciplines with design, David Sless wrote, “Design, by contrast, is concerned with solving open ended ‘problems’ either arising from a client or from a designer’s area of interest.” This comment on the open-ended nature of design problems applies to a specific, limited sub-set of design problems. Creating a corporate design program is, to some degree, an open-ended problem. Applying the corporate design program across hundreds or even thousands of artifacts is not an open-ended problem. It is a specific task requiring skill and judgment, but it is not open to a variety of solutions. Quite the contrary, a corporate design program imposes uniformity on the designed artifacts that represent the corporation. Potential solutions to any problem are limited, and the corporate design program establishes the limits that close the solution space.

4) Professionals sell the promise of a solution to the problems of those who buy their services. Selling a solution is one thing. Solving a problem is another. 

These are often quite different. How are we to know the difference? This question lies at the heart of David’s comments. Designers, along with other professionals, have a bias for action. The bias for action is good. 

Action typifies good leadership in most professions. See, for example: Peter Drucker (2003), Heike Bruch & Sumantra Ghoshal (2004) or John Kotter (2008, 2012). These authors discuss the importance of action. At the same time, they examine the boundaries and constraints of effective decision-making for action leading to the right consequences. 

Effective leadership requires action under uncertainty. Acting under uncertainty is the core issue at the heart of all strategy. 

In October 1805, two weeks before the British victory at Trafalgar, Lord Nelson sent a secret memorandum to his captains. In this memo, Nelson wrote (Nicolson 2005: 45):

“Something must be left to chance; nothing is sure in a Sea Fight beyond all others. Shot will carry away the masts and yards of friends as well as foes … Captains are to look to their particular Line as their rallying point. But, in case Signals can neither be seen or perfectly understood, no Captain can do very wrong if he places his ship alongside that of an Enemy.” 

Expert action requires expert judgment. There is often a difference between expert action and swift action. Expert action solves problems. Swift action may simply involve completing a contract to get paid. To understand the difference, it is interesting to consider a profession in which experts must act swiftly.

To examine the difference between expert action and other forms of action, I’ll take up Paul Romer’s idea. Instead of comparing designers with academics whose goal it is to study problems for secure knowledge, I’ll compare designers with physicians. Physicians must act swiftly, often within minutes or hours. Designers typically act on a time frame that may last for weeks or even months. When physicians act swiftly, they often do so under uncertainty. The risks are far greater than the risks that attend most design decisions. When physicians fail, illness is often the outcome. The worst outcome is sometimes death.

Physicians use rapid heuristics to enable swift, decisive action. These heuristics and the actions that flow from them rest on three foundations: 

(1) The first foundation is massive professional knowledge of significant data based on painstaking medical research conducted in gold-standard clinical trials. This knowledge increasingly involves the comparative meta-analysis of many trials. It also includes research in disciplines related to medicine, including biology, genetics, anatomy, pharmacology, neuroscience, neurology, microbiology, biophysics, and more. 

(2) The second foundation is the education of the individual physician. This education typically starts with undergraduate education in science including statistics and research skills. In some nations, this is defined in a specific “pre-med” curriculum. Aspiring physicians follow pre-med with medical education, including hands-on apprentice experience in medical practice.

(3) The third foundation is medical residency. The residency is a hands-on training period in which newly graduated physicians serve a working apprenticeship, generally in a university hospital under the supervision of expert senior physicians. For many physicians, residency also includes specialization residency. This involves in-depth training in a narrow medical specialty, together with advanced education, followed by specialization board examinations for certification. 

(4) Many physicians also add a fourth foundation. This involves some combination of evidence-based medicine, continued reading in journals, and advanced continuing education. 

These four foundations make an immense difference, especially the last. Let me give you a personal example of this. 

My physician is a relatively new general-practitioner specialist dedicated to the advanced medical arts. (General practice is now itself a medical specialty.) Based on new information emerging from clinical trials, he recommended last year that I change my heart medicine. I went from a daily micro-dose of aspirin to a new prescription blood thinner that reduces the risk of heart attack or stroke by 90% as against aspirin. He also suggested a new treatment that did not exist when my condition was first diagnosed. My former physician did not know about this. He was a lovely guy, but he did not keep up with treatments that have now been available for several years. Just before I moved to a new physician, he had suggested that I consider warfarin, a treatment that is four decades old. I have always declined warfarin due to problematic side effects. I met my new physician because the first was on vacation when I needed assistance for something else entirely. That is how I learned about a new and beneficial approach to my heart problem.

Expert action to solve problems effectively requires knowledge and judgment. Four foundations make this possible. We have relatively little in design education or design research to equal these four foundations.

When Paul Romer argues for a medical approach to economics at the World Bank, he is calling for rapid decisions based on education, expertise, and skilled practice. Designers generally do not have this kind of foundation. While we can measure a great deal of what designers do, no one bothers to measure it. Part of this is the fact that it doesn’t seem to matter to clients — for some design services, clients change out design approaches and design firms so rapidly that they don’t care. But that also suggests that they fail to see the long-term value of design investment. Whatever the reason, there is no long-term development of empirical data in most fields of design.

This raises issues and problems that are far too extensive to discuss here. I do address some of them in a book chapter you will find at URL:

https://www.academia.edu/250736/Friedman._1997._Design_Science_and_Design_Education

While I wrote it with respect to design education, the goal was improving professional practice, so these issues came to the front.

5) Altogether, these issues suggest major changes to design education. Without going into detail, I envision three or four developments occurring in university-level design education over the next decade. For the most part, design education today remains what it has been in the vast majority of the world’s 14,000 or so universities, as well as in thousands of other schools for design education. This is no longer sufficient. The 90% attrition rate has long made this clear. As universities increasingly face shrinking budgets and ever-greater demands from government, business, and industry, I expect many university design schools to shrink, and I expect that some universities may simply close their design schools. I also expect significant problems in other design schools — independent not-for-profit schools and for-profit schools.  

Simply because the world needs universities to exist, and because design is interesting, many design programs will continue to exist even though they do not do especially well in equipping students for professional life. In this sense, the world’s design schools very much resemble medical schools before the Flexner Report. Abraham Flexner's (1910) report on medical education in North America was a landmark document that is relevant for any field that educates practitioners in research-based professions.

In 1908, the Carnegie Foundation engaged Flexner to survey and report on medical education in the United States and Canada (Bonner 2002: 69-113). The survey had practical results. Inadequate medical schools rapidly began to vanish. Flexner recommended that 120 of 155 medical schools should be closed. Most of the 120 schools he labeled as inadequate did, indeed, close. In one notable example, eleven of Chicago's fourteen medical schools simply disappeared.

The report also led to positive improvements as universities strengthened professional education for medical practice. These improvements had worldwide influence as the new vigor of North American medical training influenced the rest of the world. Flexner went on to survey medical education in Europe, and his views were so influential that one British medical school dean in the 1960s credited Flexner with the evolution of British medical education to its modern state.

While I don’t expect all inadequate design schools to close, I do expect a real differentiation between first-rate schools and the rest. I also expect to see a clear differentiation between outstanding studio schools and outstanding studio schools with a foundation in research. 

With respect to doctoral education, I am coming to feel that the glass is half empty rather than half full. On one hand, there are more good doctoral programs than before, and more excellent graduates than ever. On the other hand, there are many more troubled doctoral programs. Problem programs are the vast majority. What is a good doctoral program? It is a program that ensures every graduate — *every* graduate — the benefit of serious research training. This is a key element of a good program.

There are many good doctoral supervisors in schools with problematic doctoral programs. Lucky students may therefore get good supervisors. Nevertheless, this is chance. I believe that the excellent programs are going to become increasingly visible, and I suspect that there will be some kind of eventual shake-out in which universities seek the graduates of strong programs in hiring, much as universities now compete for the graduates of top programs in physics, social science, cognitive science, or the STEM disciplines. 

Today, it is possible to earn a PhD in design at several hundred universities. As in most fields, fewer than two dozen doctoral programs are genuinely first-rate. The difference is that in addition to two dozen first-rank programs, most fields also have many more doctoral programs that meet basic standards for competent research training. This is not yet the case in design.

The third major development is the growth of new approaches to design education that integrate different forms of industry or business experience with studio courses in both the undergraduate and graduate curriculum. There are several models for these programs. They typically work hands-on, often with serious engagement from involved companies. They tend to run for a full semester or even a year at the undergraduate level, and they may constitute the complete program for a master’s degree. Again, I’m not going to go into details. I simply say that this development is significant and — in some respects — revolutionary.  

The fourth and final major development is online education. Again, there are several models. Some models are for-profit, others are not-for-profit. Some models function within or alongside regular studio programs in what is called mixed-mode education. Other models focus on working designers with an interest in expanding or enhancing their repertoire of skills and knowledge. Some aim at a specific market niche while others go broad. One program sells books and course materials alongside the for-profit courses while another delivers all course materials open access and makes all courses available to members of the non-profit organization. One not-for-profit also adjusts membership fees so that people living in India pay less than people living in the United States or Europe.

When we add all these significant trends, I believe that design education will undergo radical change in the coming ten years. This will affect all design schools. The interesting question is which design schools will prepare for these changes, and which will shrink or even vanish as 120 of 155 medical schools vanished after 1910.

6) Here, I return to the world of academic research. Time and again, I have seen design articles as well as journal and conference submissions in which the authors draw general conclusions from the limited data that David notes do not lead to general conclusions. Designers generally do this in a context that requires research publications as a condition of employment or promotion. They start by doing the kind of testing that design firms might use to sell projects. They end by making general truth claims. Some of this work is based on pure intuition. Some of this work is based on practice mistakenly labeled practice-based research. (The vast majority of university-level design teachers do not actually practice professional design. They teach studio skills to students who hope to become professional designers.) Some of this work is supported by limited testing. The difficulty is that much of this work is of such poor quality that it will not even function in the world of responsible case study research.

In many fields, researchers now use Karl Weick’s (1979: 35-42) “Research Clock” to explain the trade-offs between simplicity, accuracy, and generality. The “research clock” is an imaginary device that looks like an ordinary 12-hour clock with numbers from 1-12. At 12, place the word “general.” At 4, place the word “accurate.” At 8, place the word “simple.” The clock has two hands of equal length.

Weick proposes that any given research project in the social sciences -- and possibly other fields -- can meet two of these three criteria at the cost of neglecting the third criterion. Case study research, for example, can be simple and accurate, but it cannot be general. This is the case for much practice-based research located in a specific design project. It also applies to most forms of clinical research. (The difference in medicine is the massive accumulation of data derived from the conclusions of generalizable studies across hundreds or even thousands of cases. Medicine also involves meta-research that accumulates the data of dozens of clinical trials involving hundreds of thousands of human beings.)  

Many kinds of research combine simplicity and generality at the cost of accuracy. Quantitative research based on massive comparison of many cases tends to function this way. The specific, accurate details of a given case or project vanish in the weight of evidence through which one sifts to achieve generality.

With no knowledge of the limits and requirements of general statements, a great many designers make an intuitive choice, test it on 6 users, and proclaim a general fact. In some circumstances, expert designers with an appropriate, rich background can act effectively based on limited information. No one can make general claims based on limited information.

I appreciate and agree with much of what David wrote. Nevertheless, designers who work in academic research often fail to recognize what we cannot appropriately do with small samples. I also suggest that many cases exist in which working designers also fail to test properly. If more designers understood how to test products and services properly, there would be fewer failed product launches and companies would not change design firms as often as they do.

David’s description of how designers sometimes work is reasonable. But this heuristic approach to design often produces outcomes that only *seem* to work. The designer measures success based on whether the client buys the solution, not based on whether the solution solves the problem. The outcome of the normal heuristic design process in developing products and services is far from reliable. Much like the process of evolution in nature, heuristic processes work effectively at a high price in failed developments and extinct lines. The evidence of new product failure is clear. In one study, Mansfield et al. (1971: 57) concluded that once new product ideas move beyond the proposal stage, 57% achieve technical objectives, 31% enter full-scale marketing, and only 12% earn a profit. According to others, over 80% of all new products fail when they are launched, and another 10% fail within five years (Lukas 1998, McMath 1998).

Any study of how businesses use design firms and change them out would show equally high attrition rates. What fascinates me about Per Mollerup’s work is that some clients are *still* using the corporate identities that his firm designed twenty-five and thirty years ago. Some of these firms have been merged into other firms or bought by new owners. This kind of change is often the occasion for a name change or a new corporate design program. In Mollerup's case, many former clients recognize the brand equity that accrues to a good name and a strong design program. The equity remains. Those who want to learn more about Mollerup’s (2013) principles of branding can read the book he wrote on these issues, Marks of Excellence: The History and Taxonomy of Trademarks.

In my view, this kind of elegance is only possible when we think systematically based on an awareness of what we can and can’t learn from other disciplines. It requires the expertise that only comes into design when designers have a combination of skilled professional experience and interdisciplinary expertise.

Yours,

Ken Friedman 

--

References

Bernsen, Jens. 1986. Design. The Problem Comes First. Copenhagen: Danish Design Council.

Bonner, Thomas Neville. 2002. Iconoclast. Abraham Flexner and a Life in Learning. Baltimore, Maryland: The Johns Hopkins University Press.

Bruch, Heike, and Sumantra Ghoshal. 2004. A Bias for Action. How Effective Managers Harness Their Willpower, Achieve Results, and Stop Wasting Time. Boston: Harvard Business School Press.

Deming, W. Edwards. 1986. Out of the Crisis. Quality, Productivity and Competitive Position. Cambridge: Cambridge University Press.

Drucker, Peter F. 2003. The New Realities. Revised Edition. New Brunswick, New Jersey: Transaction Publishers.

Flexner, Abraham. 1910. Medical Education in the United States and Canada. Bulletin No. 4. New York: Carnegie Foundation for the Advancement of Teaching.

Friedman, Ken. 2003. “Theory construction in design research: criteria, approaches, and methods.” Design Studies 24 (2003): 507–522. doi:10.1016/S0142-694X(03)00039-5

Halberstam, David. 1986. The Reckoning. New York: William Morrow and Company.

Kotter, John P. 2008. A Sense of Urgency. Boston: Harvard Business School Press.

Kotter, John P. 2012. Leading Change. 1st Revised Edition. Boston: Harvard Business School Press.

Lukas, Paul. 1998. “The Ghastliest Product Launches.” Fortune 16 March 1998, p. 44.

Mansfield, Edwin, J. Rapaport, J. Schnee, S. Wagner, and M. Hamburger. 1971. Research and Innovation in Modern Corporations. New York: Norton.

McMath, Robert. 1998. What Were They Thinking? Marketing Lessons I’ve Learned from Over 80,000 New Product Innovations and Idiocies. New York: Times Business.

Mollerup, Per. 2013. Marks of Excellence: The History and Taxonomy of Trademarks. 2nd edition, revised and expanded. London: Phaidon.

Muratovski, Gjoko. 2015. “Paradigm Shift: Report on the New Role of Design in Business and Society.” She Ji: The Journal of Design, Economics, and Innovation. Volume 1, Issue 2, Winter 2015, Pages 118–139.
http://dx.doi.org/10.1016/j.sheji.2015.11.002
Accessible at URL:
http://www.sciencedirect.com/science/article/pii/S2405872615300265
(Accessed 2017 February 12)

Nicolson, Adam. 2005. Men of Honour. Trafalgar and the Making of the English Hero. London: HarperCollins.

Norman, Don. 2010. Why Design Education Must Change. Core77, 2010 November 26. Accessible at URL:
http://www.jnd.org/dn.mss/why_design_education.html
(Accessed 2017 February 10)

Romer, Paul. 2016. “Why It Makes Sense for an M.D. to Lead the World Bank.” Paul Romer [blog], August 28, 2016.
URL: https://paulromer.net/md-to-lead-worldbank/
Accessed 2017 February 9.

Sless, David, and Rob Wiseman. 1997. Writing About Medicines for People: Usability Guidelines for Consumer Medicine Information. Melbourne: Communication Research Institute of Australia.

Weick, Karl. 1979. The Social Psychology of Organizing. Second Edition. New York: McGraw-Hill.

—

Ken Friedman, PhD, DSc (hc), FDRS | Editor-in-Chief | 设计 She Ji. The Journal of Design, Economics, and Innovation | Published by Tongji University in Cooperation with Elsevier | URL: http://www.journals.elsevier.com/she-ji-the-journal-of-design-economics-and-innovation/

Chair Professor of Design Innovation Studies | College of Design and Innovation | Tongji University | Shanghai, China ||| University Distinguished Professor | Centre for Design Innovation | Swinburne University of Technology | Melbourne, Australia 

Email [log in to unmask] | Academia http://swinburne.academia.edu/KenFriedman | D&I http://tjdi.tongji.edu.cn 


-----------------------------------------------------------------
PhD-Design mailing list <[log in to unmask]>
Discussion of PhD studies and related research in Design
Subscribe or Unsubscribe at https://www.jiscmail.ac.uk/phd-design
-----------------------------------------------------------------
