PHD-DESIGN Archives
PHD-DESIGN@JISCMAIL.AC.UK
December 2014

Subject: Re: Clinical Research and Clinical Guidelines
From: Ken Friedman <[log in to unmask]>
Reply-To: PhD-Design - This list is for discussion of PhD studies and related research in Design <[log in to unmask]>
Date: Sat, 6 Dec 2014 00:53:07 +0800
Content-Type: text/plain
Parts/Attachments: text/plain (180 lines)

Dear Terry,

In your posts to this thread [1, 2 below], you criticise the rest of us for taking an approach that is essentially old-fashioned and unworkable. You have said something like this quite often, in different words but with the same meaning. Our methods represent the past. Your methods of quantitative modelling represent the future.

You claim to have a method for modelling design in a predictable way by accounting for complex dynamic systems with multiple loops of action and behaviour. 

You have never demonstrated this method. You haven’t explained it or described it. You simply assert that the method exists. Starting from the premise that this method exists, you use logic to derive conclusions. Those conclusions would hold only if your premises were correct. Without the method, your premises fail, and the conclusions do not follow.

List members have asked you for examples and evidence that allow others to examine and test your claims. You do not provide these. Instead, you occasionally offer unworkable or irrelevant examples. Some are quite memorable. In one thread, you offered an unpublished and invalid paper on 14th-century Byzantine history. You stated that the paper was an example that modelled complex dynamic systems with multiple loops of action and behaviour. An engineer and computer scientist who works with the methods described in the paper showed that the paper was methodologically inadequate and empirically incorrect. In another case, you offered Hermann Hesse’s novel Magister Ludi as an example. Magister Ludi is fiction.  

I am skeptical of the claim that anyone can model complex dynamic systems in design. If you have modelling methods for design that make responsible predictions, let’s see them.

You have repeatedly stated that any design student can learn to model complex dynamic systems. I am even more skeptical of this claim.

Modelling complex systems is a challenge to senior physicists, neuroscientists, and mathematicians. This kind of work requires fluent mathematical skills that relatively few people in any field can master. I can’t see how undergraduate students or master’s students can hope to do so. When asked to show how undergraduate and master’s students can master these skills, you have not done so. 

You state that there is no need to support your claims with evidence. You argue that logical argument is sufficient to demonstrate your claims. This is not the case when you use the word “reality.” “Reality” refers to an empirical world of human interaction, natural systems, and physical systems. This requires more than logic. The claim that something is so in the world requires evidence.

Two issues affect these repeated debates. The first issue involves the relation between theory and evidence. Questions and hypotheses come from theory. Only evidence allows us to determine which theories and hypotheses are correct in reality. 

The second issue involves the relation between logic and useful design solutions. An argument or solution may be logically valid without being true or useful. From Aristotle to Pierre Abelard, from Lewis Carroll to the present day, philosophers, logicians, and scientists have demonstrated that an argument from false premises can be perfectly valid and still deliver a false conclusion.

Is there a radical new way of doing design and design research? If there is, we need examples and explanations. In research and in design, we must be able to examine and understand ideas to put those ideas to work. You’ve made these kinds of claims for nearly fifteen years without providing evidence. It’s my sense that most people no longer take your claims seriously. From time to time, the arguments become so problematic that one or another among us feels compelled to say something.

There is no evidence for a mathematical system that can predictably model design as a complex dynamic system with multiple loops of action and behaviour. There is no evidence for a mathematical system that models design activity any better than the admittedly problematic mix of research, experience, and intuition we rely on today. Serious research can close the gap between pure intuition and outcomes that are partially predictable. Comprehensive mathematical modelling that yields generally predictable outcomes remains beyond reach.

There are sound reasons for this in behavioural science and physical science both. In behavioural science and in economics, Friedrich Hayek offered convincing arguments for the unpredictable nature of human behaviour in social groups. 

It is also difficult to model complex adaptive systems in nature. Physicists, mathematicians, and engineers have attempted to model many kinds of complex adaptive systems with limited success. Neuroscientists and biologists have done no better. It has taken years of patient work to achieve minor advances and occasional breakthroughs.
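A toy example, far removed from design, makes the difficulty concrete. The sketch below is purely illustrative: it uses the standard logistic map, and the starting values and step counts are arbitrary. Even a one-line deterministic rule defeats long-horizon prediction once the starting state is known only approximately.

# Illustrative only: two runs of the logistic map from starting states that
# differ by one part in a million. The rule is fully known; the trajectories
# still part company within a few dozen steps.

def logistic(x, r=4.0):
    """One step of the logistic map, a textbook example of deterministic chaos."""
    return r * x * (1.0 - x)

x_a, x_b = 0.300000, 0.300001   # two starting states, indistinguishable in practice
for step in range(1, 31):
    x_a, x_b = logistic(x_a), logistic(x_b)
    if step % 10 == 0:
        print(f"step {step:2d}: {x_a:.6f} vs {x_b:.6f}  (gap {abs(x_a - x_b):.6f})")

If a system this small resists long-range prediction, the burden of proof for predictable models of full design situations is heavier still.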

When we attempt to model human action and interaction, problems cascade. One reason for this is the fact that human social systems are open systems. These systems are open to exogenous influences. It is inherently impossible to model the full system because there is no way to model all nodes and connections in a social world that may at any moment influence individuals and groups in unplanned ways. 
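A small sketch shows the shape of the problem. The numbers below are invented for illustration, and the model is deliberately trivial: fit a trend to the closed record of past behaviour, then let a single unmodelled outside event arrive.

# Hypothetical figures for illustration only.
observed = [10, 12, 14, 16, 18]                       # past behaviour: steady growth
slope = (observed[-1] - observed[0]) / (len(observed) - 1)
forecast = [observed[-1] + slope * k for k in range(1, 6)]

# An exogenous influence (a policy change, a competitor, a rumour) arrives
# from outside the modelled system and was never in the data we fitted.
actual = [20, 21, 15, 9, 8]

for k, (f, a) in enumerate(zip(forecast, actual), start=1):
    print(f"period +{k}: forecast {f:.0f}, actual {a}, error {a - f:+.0f}")

No amount of fitting to the closed record protects the forecast, because the cause of the change was never inside the model to begin with.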

We can improve our models to do better than we have done in the past. Predictable certainty is impossible outside the world of science fiction. You would not be as certain as you seem to be if you worked in any research field that does this kind of modelling. You don’t. Researchers in those communities continually publish their results. They test different models and approaches. They debate their approaches, changing their approaches to better approximate reality. No one who does this kind of work with recognised results claims the level of certainty you claim — and no one believes that they can teach this kind of modelling at a professional level in an undergraduate program.  

Don Norman wrote [3 below], “Designers who believe they know enough about human behavior to predict all the results are simply delusional -- and therefore dangerous.” 

The question is whether you can demonstrate robust predictive models that account for human behaviour. Forget novels and links to unpublished papers. If you cannot show the rest of us how to do this in design, your claims are irrelevant to the design field. 

To state that approaches other than your own are bound to fail, when you cannot demonstrate your own approach, is irresponsible.

It’s time to move beyond fictional examples. The challenge you face is to show that your concept of mathematical modelling works in reality. 

Does it work in the real world of human interaction? Do you know something that Friedrich Hayek didn’t know? If the rest of us are to take your claims seriously, we need to see working models. You’ve got to show that your quantitative methods work, and you’ve got to show that designers can learn these methods well enough in a standard design education to deploy them. 

In his article, “How Do We Know that Albert Einstein was Not a Crank,” Jeremy Bernstein (1993: 17-18) writes, “I would propose two criteria to help us distinguish between crank science and the real thing: ‘correspondence’ and ‘predictiveness’. … I would insist that any proposal for a …. radically new theory in [any science] contain a clear explanation of why the precedent science worked. What new domain of experience is being explored by the new science, and how does it meld with the old?”

Bernstein (1993: 20) continues, “The crank is a scientific solipsist who lives in his own little world. He has no understanding nor appreciation of the scientific matrix in which his work is embedded. I would gladly read a paper on perpetual motion which began by explaining why we had all been overlooking something about entropy, something that makes a correspondence with the science we know. In my dealings with cranks, I have discovered that this kind of discussion is of no interest to them. If you find a specific flaw in their machine, they will come back the next day with a new design. The process never converges.” 

You are proposing a scientific approach to design rooted in quantitative modelling and robust prediction. Bernstein’s criteria apply to your proposals. While Bernstein (1993) explains this in ways appropriate to physics, the same issues apply to design and design research when you claim that you can produce robust predictive models.

Bernstein (1993: 20) goes on: “The second criterion that genuine science should satisfy is predictiveness.” This requires demonstration. Bernstein explains predictiveness in several pages of careful discussion. Bernstein’s full article is available for download from the Teaching Documents section of my Academia page:

https://swinburne.academia.edu/KenFriedman

How do your claims measure up?

Start with the criterion of correspondence. Many earlier approaches to design and design research work reasonably well. While they are not perfect, they function in the real world — they function in reality. Just as there was some merit in the physics that led from Kepler and Galileo to Planck and Einstein, there is some merit to many of the design approaches that expert designers used for much of the 20th century. Despite problems and imperfections, the projects of outstanding designers work reasonably well. The best researchers have been reasonably successful. If your approach cannot explain why this is so, there is no correspondence. 

Predictiveness is simple to state, though not easy to demonstrate: show that a model can make accurate predictions. You have never demonstrated this. You talk about predictions in much the same way that people talk about Hesse’s glass bead game. Describing the game doesn’t make the game real. Describing the logical consequences of an imagined but unreal model is not a case of prediction: it is a case of false premises at work. Without a real model that can operate in the real world of human action and interaction, there is no predictiveness.
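What such a demonstration would look like is not mysterious. In outline, with placeholder numbers standing in for any real model and any real data, it amounts to this: predict unseen cases in advance, compare prediction with outcome, and report the error so that others can check it.

# A generic sketch of a predictiveness test. The model and the held-out
# observations are placeholders, not a claim about any particular method.

def candidate_model(x):
    """Stand-in for the model whose predictive power is being claimed."""
    return 2.0 * x + 1.0

held_out = [(6.0, 13.2), (7.0, 14.8), (8.0, 17.1)]    # (input, observed outcome)

errors = [abs(candidate_model(x) - y) for x, y in held_out]
mean_abs_error = sum(errors) / len(errors)
print(f"mean absolute error on unseen cases: {mean_abs_error:.2f}")

The claim stands or falls on numbers like these, produced before the outcomes are known and open to inspection by others.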

Without correspondence and predictiveness, Albert Einstein could well have been a crank. To be sure that someone is not a crank requires explanatory correspondence and predictiveness in the real world.

Most of the posts in this thread on clinical research and clinical guidelines call for pragmatic inquiry, iterative development, rich evidence, and sound theory. Is this approach perfect? No. Pragmatic inquiry and iterative development in an open system are always subject to change. This is necessarily the case when we design with and for human beings. 

Don describes this well: “The good designer is humble, willing to observe and learn — over and over and over again. (The same is true of a good psychologist, or anthropologist, or scientist.)” Iteration brings action from the real world into the design process. It works in design and design research just as it does in science. Real research always discloses things that surprise us.

In a famous critique, the sociologist Herbert Blumer criticised the inappropriate use of quantitative models in social and behavioural science. That misuse, he argued, is the result of a perspective more concerned with specific research methods than with the nature of the empirical world.

Blumer (1969: 24) wrote: “Today ‘methodology’ in the social sciences is regarded with depressing frequency as synonymous with the study of advanced quantitative procedures, and a ‘methodologist’ is one who is expertly versed in the knowledge and use of such procedures. He is generally viewed as someone who casts study in terms of quantifiable variables, who seeks to establish relations between such variables by use of sophisticated statistical and mathematical techniques, and who guides such study by elegant logical models conforming to special canons of ‘research design’.”

Blumer (1969: 27) respected the “obdurate character of the empirical world.” He writes (Blumer 1969: 21) that “an empirical science presupposes the existence of an empirical world. Such an empirical world exists as something available for observation, study, and analysis. It stands over against the scientific observer, with a character that has to be dug out and established through observation, study, and analysis. This empirical world must forever be the central point of concern. It is the point of departure and the point of return in the case of empirical science. It is the testing ground for any assertions made about the empirical world. ‘Reality’ for empirical science exists only in the empirical world, can be sought only there, and can be verified only there.” 

To speak of reality is to speak about the empirical world. Design and design research require a robust cycle for generative action in the real world of human engagement. While this often functions on a case-by-case basis, these cases are fundamental to conceptual progress. Progress enables us to fill gaps in what we think, what we know, what we understand, and what we can do. For effective design and design research, we must bridge those gaps. This is the pathway to new insights and significant growth. This is what we need for the discipline of design research, and this is what we need for the field of design.

A workable method for modelling design that accounts for complex dynamic systems with multiple loops of action and behaviour in a predictable way is as imaginary as the glass bead game in Magister Ludi.

So far, there is neither correspondence nor predictiveness in your claims. 

Ken 

Ken Friedman, PhD, DSc (hc), FDRS | Editor-in-Chief | 设计 She Ji. The Journal of Design, Economics, and Innovation | Published by Elsevier in Cooperation with Tongji University Press | Launching in 2015

Chair Professor of Design Innovation Studies | College of Design and Innovation | Tongji University | Shanghai, China ||| University Distinguished Professor | Centre for Design Innovation | Swinburne University of Technology | Melbourne, Australia

Email [log in to unmask] | Academia http://swinburne.academia.edu/KenFriedman | D&I http://tjdi.tongji.edu.cn 

—

References

Bernstein, Jeremy. 1993. Cranks, Quarks and the Cosmos. New York: Basic Books.

Blumer, Herbert. 1969. Symbolic Interactionism: Perspective and Method. Englewood Cliffs, NJ: Prentice-Hall.

—

[1] Terry Love wrote:

—snip—

I feel there is a problem in what you wrote.

Which do you mean? Are you suggesting the design outcomes are:

1. None-predictable  with the analysis tools that are currently being used in that field; or,
2. *Intrinsically* non-predictable?

I can see that the issues of custom and habit  of design make  the dynamics design outcomes less easy to predict because in reality they are not an accurate proxy for designs. I can also see that dynamically changing contexts make the dynamics of outcomes less easy to predict. Ditto for variants in users, their education, and the dynamics of their individual learning trajectories.

It also presents a potential problem if the evidence gathering, analysis and modelling tools that have been used are only suitable for  situations in which the outcomes are a  fixed state. (In which case, dynamic modelling tools for design assessment are needed. The latter are widely available and used in other areas of design than visual design and communications.

All of these are issues of 'less easy to predict' rather than 'intrinsically non-predictable'

None of it  however is reason to suggest the dynamics of design outcomes in the areas you are working are *intrinsically* non-predictable.

It’s a strong claim you make. I'd like to see  your  reasoning as to why the claim that communication designs are *intrinsically* non-predictable should be true.

If it's not true, then all that is required for better prediction is to use better methods of modelling and analysis.

Of course that would mean more cost - but for a research institute, that’s often not a bad thing?

—snip—

--

[2] Terry Love wrote:

—snip—

I feel, however, that form of thought experiment takes design and design research even further down the wrong path.

It’s the kind of thinking that leads to unhelpful concepts such as 'wicked problems'.

The simple reality is the methods of analysis common to design and design research to date focus on fixed states and fixed outcomes.

They don't work when contexts and outcomes are dynamically varying and change over time. It's time to move on to methods that do.

As I've written before, the answer for design research and design is to take on board and use analyses and modelling that work when contexts and outcomes are dynamically varying.

This of course, requires a different skill set from what many designers and design researchers are taught.

The historic alternative - pretending anything with varying outcomes or contexts is impossible - is a position of the past. Time to move on.

Incidentally, this focus on fixed outcomes, and a couple of other reasons, is why the approaches presented by DesignX are likely to fail.

Simply, as a field we can do better.

It  involves doing design and design research differently.

—snip—

—

[3] Don Norman wrote:

—snip—

I disagree

On Thu, Dec 4, 2014 at 3:00 AM, Karel van der Waarde <[log in to unmask]> wrote:

"One of the reactions afterwards questioned the value of testing, by stating: “A good designer would have predicted most of those test results beforehand. Those results are not very surprising.” My answer was of course fairly standard: ’They probably are not surprising, but they provide quantifiable results about some of the tasks. Those responses are vital to check if you make any progress. And they confirm that the assumptions are correct.’ "

​I consider myself an expert on human psychology, with multiple decades of experience in addition to a deep knowledge of theory and experimental results (some of which I contributed).​ Nonetheless, whenever I conduct a test or do field observations, I always discover things that surprise me.

The person who says “a good designer would have predicted most of those test results …” is simply showing their arrogance and lack of actual experience.

The good designer is humble, willing to observe and learn — over and over and over again. (The same is true of a good psychologist, or anthropologist, or scientist.)

Karel is wrong by apologizing, trying to excuse the need for tests by saying that they are valuable (only) to provide quantitative results. They are valuable because they show weaknesses in the designs. Designers who believe they know enough about human behavior to predict all the results are simply delusional -- and therefore dangerous.

—snip—

--


-----------------------------------------------------------------
PhD-Design mailing list  <[log in to unmask]>
Discussion of PhD studies and related research in Design
Subscribe or Unsubscribe at https://www.jiscmail.ac.uk/phd-design
-----------------------------------------------------------------
